Using Object Storage with S3 API
KakaoCloud Object Storage provides a set of APIs compatible with the AWS S3 API, so workloads that already use S3 can work with KakaoCloud Object Storage.
This document shows usage examples of selected APIs for working with buckets and objects. For the full list of APIs provided by KakaoCloud Object Storage, refer to the API Reference document.
- Estimated time: 30 minutes
- User environment
  - Operating system: macOS, Ubuntu
  - Region: kr-central-2
- Prerequisites
Before you start
Issue API authentication token
1. Access the terminal on your local machine. In the following command, replace `ACCESS_KEY` and `ACCESS_SECRET_KEY` with your Access key and Secret access key, then run it to issue an API authentication token. Refer to the API usage preparation document and follow the steps in Get API authentication token.

   ```bash
   export API_TOKEN=$(curl -s -X POST -i https://iam.kakaocloud.com/identity/v3/auth/tokens -H "Content-Type: application/json" -d \
   '{
       "auth": {
           "identity": {
               "methods": [
                   "application_credential"
               ],
               "application_credential": {
                   "id": "{ACCESS_KEY}",
                   "secret": "{ACCESS_SECRET_KEY}"
               }
           }
       }
   }' | grep x-subject-token | awk -v RS='\r\n' '{print $2}')
   ```

2. Verify the issued API authentication token.

   ```bash
   echo $API_TOKEN
   ```
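For reference, the same token request can be composed in Python. This is a minimal sketch that only builds the request payload mirroring the curl call above; actually sending it requires valid keys, and the token comes back in the `X-Subject-Token` response header rather than the body.

```python
import json

IAM_TOKEN_URL = "https://iam.kakaocloud.com/identity/v3/auth/tokens"

def build_token_request(access_key, secret_access_key):
    """Build the URL, headers, and JSON body for the application-credential token request."""
    body = {
        "auth": {
            "identity": {
                "methods": ["application_credential"],
                "application_credential": {
                    "id": access_key,
                    "secret": secret_access_key,
                },
            }
        }
    }
    # The issued token is returned in the X-Subject-Token response header,
    # not in the response body.
    return IAM_TOKEN_URL, {"Content-Type": "application/json"}, json.dumps(body)

url, headers, payload = build_token_request("{ACCESS_KEY}", "{ACCESS_SECRET_KEY}")
print(payload)
```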
Issue credentials for S3 API
1. To issue credentials for using the S3 API, you need your User Unique ID. You can find your User Unique ID under [Console] > [Account Information].

2. Set the issued User Unique ID, `API_TOKEN`, and Project ID in the environment variables below, then run the script to obtain the credentials for using the S3 API.

   ```bash
   echo $(curl -s -X POST -i https://iam.kakaocloud.com/identity/v3/users/{USER_ID}/credentials/OS-EC2 \
   -H "Content-Type: application/json" \
   -H "X-Auth-Token: ${API_TOKEN}" -d \
   '{
       "tenant_id": "{PROJECT_ID}"
   }')
   ```

   Checking project ID: the project ID can be found at the top of the Your Project section on the KakaoCloud Console main screen.

3. Verify the `access` and `secret` values in the output. They are used as the following environment variables in the rest of this guide.

   | Key | Environment variable |
   | --- | --- |
   | `"access"` | S3_ACCESS_KEY |
   | `"secret"` | S3_SECRET_ACCESS_KEY |
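The credential call returns JSON containing the access and secret values. A minimal Python sketch of extracting them follows; the sample response here is illustrative and trimmed to the fields used in this guide (the actual response contains additional fields).

```python
import json

# Illustrative response shape; the real OS-EC2 response carries more fields.
sample_response = '''
{
  "credential": {
    "access": "example-access-key",
    "secret": "example-secret-key",
    "tenant_id": "example-project-id"
  }
}
'''

def extract_s3_credentials(response_text):
    """Pull the access/secret pair out of the credential response."""
    credential = json.loads(response_text)["credential"]
    # Map onto the environment variable names used in the rest of this guide.
    return {
        "S3_ACCESS_KEY": credential["access"],
        "S3_SECRET_ACCESS_KEY": credential["secret"],
    }

print(extract_s3_credentials(sample_response))
```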
Type 1. AWS CLI example
This is an example of using KakaoCloud Object Storage with the AWS Command Line Interface (CLI).
AWS CLI configuration
Set up the credentials and environment for running AWS CLI. If AWS CLI is not installed, install AWS CLI first.
**Mac**

1. Install AWS CLI using the Homebrew package manager or the `curl` command.

   Homebrew:

   ```bash
   $ brew install awscli
   ```

   curl:

   ```bash
   $ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
   $ sudo installer -pkg AWSCLIV2.pkg -target /
   ```

2. Verify that the installation was successful by running the following commands.

   ```bash
   # Check the installation path
   $ which aws
   /usr/local/bin/aws

   # Check the version
   $ aws --version
   aws-cli/2.10.0 Python/3.11.2 Darwin/18.7.0 botocore/2.4.5
   ```

**Linux (Ubuntu)**

1. Install AWS CLI using the `curl` command. (The `.pkg` installer is macOS-only; on Linux, download and unpack the zip installer.)

   ```bash
   $ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
   $ unzip awscliv2.zip
   $ sudo ./aws/install
   ```

2. Verify that the installation was successful by running the following commands.

   ```bash
   # Check the installation path
   $ which aws
   /usr/local/bin/aws

   # Check the version
   $ aws --version
   aws-cli/2.10.0 Python/3.11.2 Linux/5.15.0 botocore/2.4.5
   ```
Before proceeding, make sure you have the S3 credentials issued earlier.
1. Use the `configure` command to set up the credentials.

   ```bash
   aws configure
   ```

2. Enter your credentials at the prompts as follows:

   ```
   AWS Access Key ID: {S3_ACCESS_KEY}
   AWS Secret Access Key: {S3_SECRET_ACCESS_KEY}
   Default region name: kr-central-2
   Default output format:
   ```

3. Once the configuration is complete, you can use the `s3` command as follows:

   ```bash
   aws --endpoint-url={endpoint} s3 {command} s3://{bucket}

   # Example
   # aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 ls
   ```

   | Region | Endpoint |
   | --- | --- |
   | kr-central-2 | https://objectstorage.kr-central-2.kakaocloud.com |
AWS CLI example
```bash
# Create a bucket
aws --endpoint-url={endpoint} s3 mb s3://{bucket_name}

# List buckets
aws --endpoint-url={endpoint} s3 ls

# List objects in a bucket
aws --endpoint-url={endpoint} s3 ls s3://{bucket_name}

# Delete a bucket
aws --endpoint-url={endpoint} s3 rb s3://{bucket_name}

# Upload a file
aws --endpoint-url={endpoint} s3 cp {local_path} s3://{bucket_name}/{upload_path}

# Download a file
aws --endpoint-url={endpoint} s3 cp s3://{bucket_name}/{file_path} {local_path}

# Delete an object
aws --endpoint-url={endpoint} s3 rm s3://{bucket_name}/{file_path}
```
Type 2. Python SDK (Boto3) example
This is an example of using KakaoCloud Object Storage with the AWS Python SDK (Boto3).
Create and configure S3 client
1. Install Boto3 using `pip`.

   ```bash
   $ pip install boto3
   ```

   info: Boto3 is currently supported only on Python 3.8 or higher.
2. Configure the client with your credentials and environment information.

   ```python
   import boto3

   client = boto3.client(
       region_name="kr-central-2",
       endpoint_url="{ENDPOINT}",
       aws_access_key_id="{S3_ACCESS_KEY}",
       aws_secret_access_key="{S3_SECRET_ACCESS_KEY}",
       service_name="s3"
   )
   ```
SDK usage example
```python
def create_bucket(bucket_name):
    try:
        return client.create_bucket(Bucket=bucket_name)
    except Exception as e:
        raise  # ...

def get_list_buckets():
    try:
        response = client.list_buckets()
        return [bucket.get('Name') for bucket in response.get('Buckets', [])]
    except Exception as e:
        raise  # ...

def get_list_objects(bucket_name):
    try:
        response = client.list_objects(Bucket=bucket_name)
        return [obj.get('Key') for obj in response.get('Contents', [])]
    except Exception as e:
        raise  # ...

def delete_bucket(bucket_name):
    try:
        return client.delete_bucket(Bucket=bucket_name)
    except Exception as e:
        raise  # ...

# Upload a file to a specific bucket
def upload_file(local_path, bucket_name, file_name):
    try:
        # client.upload_file('/Documents/hello.jpeg', 'bucket', 'hello.jpeg')
        return client.upload_file(local_path, bucket_name, file_name)
    except Exception as e:
        raise

# file_name : name of the object to download
# local_path : local path and filename to save the downloaded file
def download_file(bucket_name, file_name, local_path):
    try:
        # client.download_file('bucket', 'hello.jpeg', '/Downloads/hello.jpeg')
        return client.download_file(bucket_name, file_name, local_path)
    except Exception as e:
        raise

def delete_object(bucket_name, file_name):
    try:
        return client.delete_object(Bucket=bucket_name, Key=file_name)
    except Exception as e:
        raise
```
Type 3. Java SDK example
This is an example of using KakaoCloud Object Storage with the AWS SDK for Java. This document is based on AWS SDK for Java v2 (`aws-java-sdk-v2`).
Create and configure S3 client
**Maven**

1. Add the following dependencies to your `pom.xml`.

   ```xml
   <dependencies>
       <dependency>
           <groupId>software.amazon.awssdk</groupId>
           <artifactId>s3</artifactId>
           <version>2.23.7</version>
       </dependency>
       <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>slf4j-api</artifactId>
           <version>1.7.32</version>
       </dependency>
       <dependency>
           <groupId>ch.qos.logback</groupId>
           <artifactId>logback-classic</artifactId>
           <version>1.4.12</version>
       </dependency>
   </dependencies>
   ```

2. Rebuild Maven to apply the added dependencies.

**Gradle**

1. Add the following dependencies to your `build.gradle`.

   ```groovy
   implementation platform('software.amazon.awssdk:bom:2.23.7')
   implementation 'software.amazon.awssdk:s3'
   implementation 'ch.qos.logback:logback-classic:1.4.12'
   ```

2. Rebuild Gradle to apply the added dependencies.
Configure the client with your credentials and environment information.

```java
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

String s3Endpoint = "https://objectstorage.kr-central-2.kakaocloud.com";
String accessKey = "{S3_ACCESS_KEY}";
String secretAccessKey = "{S3_SECRET_ACCESS_KEY}";
String region = "kr-central-2";

final S3Client client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKey, secretAccessKey)))
        .endpointOverride(URI.create(s3Endpoint))
        .forcePathStyle(true)
        .region(Region.of(region))
        .build();
```
SDK usage example
```java
// The request/response types below come from the software.amazon.awssdk.services.s3.model package.

private void createBucket(S3Client client, String bucketName) {
    try {
        CreateBucketResponse res = client.createBucket(
                CreateBucketRequest.builder()
                        .bucket(bucketName)
                        .build());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void listBuckets(S3Client client) {
    try {
        ListBucketsResponse res = client.listBuckets();
        System.out.println(res);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void listObjects(S3Client client, String bucketName) {
    try {
        ListObjectsResponse res = client.listObjects(
                ListObjectsRequest.builder()
                        .bucket(bucketName)
                        .build());
        System.out.println(res);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void deleteBucket(S3Client client, String bucketName) {
    try {
        DeleteBucketResponse res = client.deleteBucket(
                DeleteBucketRequest.builder()
                        .bucket(bucketName)
                        .build());
        System.out.println(res);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

// objectKey : name of the file to upload
// filePath  : local path where the file to upload is located
private void uploadObject(S3Client client, String bucketName, String objectKey, String filePath) {
    try {
        Path path = Paths.get(filePath);
        PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        client.putObject(putObjectRequest, path);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

// objectKey : name of the file to download
// filePath  : local path and filename to save the downloaded file
private void downloadObject(S3Client client, String bucketName, String objectKey, String filePath) {
    try {
        GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        ResponseBytes<GetObjectResponse> responseBytes = client.getObjectAsBytes(getObjectRequest);
        byte[] data = responseBytes.asByteArray();
        File myFile = new File(filePath);
        OutputStream os = new FileOutputStream(myFile);
        os.write(data);
        os.close();
    } catch (IOException ex) {
        ex.printStackTrace();
    } catch (S3Exception e) {
        e.printStackTrace();
    }
}

// objectKey : name of the file to delete
private void deleteObject(S3Client client, String bucketName, String objectKey) {
    try {
        DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        client.deleteObject(deleteObjectRequest);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
```
Type 4. Go SDK example
This is an example of using KakaoCloud Object Storage with the AWS Go SDK.
Create and configure S3 client
1. Initialize the local project using the following command.

   ```bash
   $ go mod init {project_name}
   ```

2. Fetch the AWS SDK for Go v2 packages using Go modules.

   ```bash
   $ go get github.com/aws/aws-sdk-go-v2/config
   $ go get github.com/aws/aws-sdk-go-v2/credentials
   $ go get github.com/aws/aws-sdk-go-v2/service/s3
   ```

3. Configure the client with your credentials and environment information.
```go
import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

var accessKeyId = "{S3_ACCESS_KEY}"
var accessKeySecret = "{S3_SECRET_ACCESS_KEY}"
var endpoint = "{ENDPOINT}"
var region = "kr-central-2"

resolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
    return aws.Endpoint{
        URL: endpoint,
    }, nil
})

cfg, err := config.LoadDefaultConfig(context.TODO(),
    config.WithRegion(region),
    config.WithEndpointResolverWithOptions(resolver),
    config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKeyId, accessKeySecret, "")),
)
if err != nil {
    log.Fatal(err)
}

client := s3.NewFromConfig(cfg, func(options *s3.Options) {
    options.UsePathStyle = true
})
```
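`UsePathStyle = true` above (like `forcePathStyle(true)` in the Java example) puts the bucket name in the URL path rather than in the hostname. A quick sketch of the difference between the two addressing styles, using hypothetical bucket and object names:

```python
ENDPOINT = "https://objectstorage.kr-central-2.kakaocloud.com"

def path_style_url(endpoint, bucket, key):
    # Path-style: the bucket appears as the first path segment.
    return f"{endpoint}/{bucket}/{key}"

def virtual_hosted_url(endpoint, bucket, key):
    # Virtual-hosted style: the bucket appears as a subdomain of the endpoint.
    scheme, host = endpoint.split("://", 1)
    return f"{scheme}://{bucket}.{host}/{key}"

print(path_style_url(ENDPOINT, "my-bucket", "hello.jpeg"))
print(virtual_hosted_url(ENDPOINT, "my-bucket", "hello.jpeg"))
```

Path-style is commonly used with S3-compatible services behind a single endpoint, since it avoids the need for per-bucket DNS entries.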
SDK usage example
```go
// Create a bucket
bucketname := "{BUCKET_NAME}"

_, err = client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
    Bucket: &bucketname,
})
if err != nil {
    log.Fatal(err)
}
```
```go
// List buckets
_, err = client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
if err != nil {
    log.Fatal(err)
}
```
```go
// List objects in a bucket
bucketname := "{BUCKET_NAME}"

_, err = client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
    Bucket: &bucketname,
})
if err != nil {
    log.Fatal(err)
}
```
```go
// Delete a bucket
bucketname := "{BUCKET_NAME}"

_, err = client.DeleteBucket(context.TODO(), &s3.DeleteBucketInput{
    Bucket: &bucketname,
})
if err != nil {
    log.Fatal(err)
}
```
```go
// Upload a file to a bucket
bucketname := "{BUCKET_NAME}"
filePath := "{LOCAL_PATH}" // local path where the file to upload is located
objectKey := "{FILE_NAME}" // name of the file to upload

file, err := os.Open(filePath)
if err != nil {
    log.Fatal(err)
}
defer file.Close()

_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
    Bucket: &bucketname,
    Key:    &objectKey,
    Body:   file,
})
if err != nil {
    log.Fatal(err)
}
```
```go
// Download an object to a local file
bucketname := "{BUCKET_NAME}"
filePath := "{LOCAL_PATH}" // local path and filename to save the downloaded file
objectKey := "{FILE_NAME}" // name of the file to download

file, err := os.Create(filePath)
if err != nil {
    log.Fatal(err)
}
defer file.Close()

result, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
    Bucket: &bucketname,
    Key:    &objectKey,
})
if err != nil {
    log.Fatal(err)
}

_, err = io.Copy(file, result.Body)
if err != nil {
    log.Fatal(err)
}
```
```go
// Delete an object
bucketname := "{BUCKET_NAME}"
objectKey := "{FILE_NAME}" // name of the file to delete

_, err = client.DeleteObject(context.TODO(), &s3.DeleteObjectInput{
    Bucket: &bucketname,
    Key:    &objectKey,
})
if err != nil {
    log.Fatal(err)
}
```