
Using Object Storage with S3 API

KakaoCloud's Object Storage provides a set of APIs compatible with the AWS S3 API, so you can use KakaoCloud Object Storage in workloads that already rely on S3.
This document walks through examples of using these APIs to work with buckets and objects. For the detailed API functionality provided by KakaoCloud Object Storage, refer to the API Reference document.

info
  • Estimated time: 30 minutes
  • Recommended user environment
    • Operating system: macOS, Ubuntu
    • Region: kr-central-2
  • Prerequisites

Prerequisites

Issue API authentication token

  1. Access the terminal on your local machine. Modify the following command by replacing ACCESS_KEY and ACCESS_SECRET_KEY with your 'Access Key' and 'Secret Access Key'. Then, run the command to issue an API authentication token.

    Refer to the API usage preparation document and follow the steps in Get API authentication token.

    export API_TOKEN=$(curl -s -X POST -i https://iam.kakaocloud.com/identity/v3/auth/tokens -H "Content-Type: application/json" -d \
    '{
      "auth": {
        "identity": {
          "methods": [
            "application_credential"
          ],
          "application_credential": {
            "id": "{ACCESS_KEY}",
            "secret": "{ACCESS_SECRET_KEY}"
          }
        }
      }
    }' | grep x-subject-token | awk -v RS='\r\n' '{print $2}')
  2. Verify the issued API authentication token.

    echo $API_TOKEN

Issue credentials for using S3 API

  1. To issue credentials for using the S3 API, you need your User Unique ID. You can find your User Unique ID under [Console] > [Account Information].

  2. In the command below, replace {USER_ID} and {PROJECT_ID} with your User Unique ID and Project ID; the API_TOKEN environment variable was set in the previous step. Then, run the command to issue credentials for using the S3 API.

    echo $(curl -s -X POST -i https://iam.kakaocloud.com/identity/v3/users/{USER_ID}/credentials/OS-EC2 \
    -H "Content-Type: application/json" \
    -H "X-Auth-Token: ${API_TOKEN}" -d \
    '{
      "tenant_id": "{PROJECT_ID}"
    }')
    How to get the Project ID

    You can find the Project ID in the URI of the project’s console page (project_id value).
    For example, in the URL https://console.kakaocloud.com/transit-gateway/transit-gateways?project_id=073fc84cbd86412ef9f6d269780ef89bb&region=kr-central-2, the Project ID is the value of the project_id parameter (the part between project_id= and &), which is 073fc84cbd86412ef9f6d269780ef89bb.

  3. Verify the access and secret values from the output.

    Key         Environment Variable
    "access"    S3_ACCESS_KEY
    "secret"    S3_SECRET_ACCESS_KEY

Type 1. AWS CLI example

This is an example of using KakaoCloud Object Storage with the AWS Command Line Interface (CLI).

AWS CLI configuration

Set up the credentials and environment for running AWS CLI. If AWS CLI is not installed, install AWS CLI first.

  1. Install AWS CLI using the Homebrew package manager or the curl command.

    Homebrew
    $ brew install awscli
    curl
    $ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    $ sudo installer -pkg AWSCLIV2.pkg -target /
  2. Verify that the installation was successful by running the following command.

    Verify AWS CLI installation
    # Check the installation path
    $ which aws
    /usr/local/bin/aws

    # Check version
    $ aws --version
    aws-cli/2.10.0 Python/3.11.2 Darwin/18.7.0 botocore/2.4.5

Next, configure the AWS CLI with the S3 credentials issued earlier.

  1. Use the configure command to set up the credentials.

    aws configure
  2. Enter your credentials based on the following:

    AWS Access Key ID: {S3_ACCESS_KEY}
    AWS Secret Access Key: {S3_SECRET_ACCESS_KEY}
    Default region name: kr-central-2
    Default output format:
  3. Once the configuration is complete, you can use the s3 command as follows:

    aws --endpoint-url={endpoint} s3 {command} s3://{bucket}
    # Example
    # aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 ls
    Region          Endpoint
    kr-central-2    https://objectstorage.kr-central-2.kakaocloud.com
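
    If you do not want to pass --endpoint-url on every call, one option is a shell alias. This is an optional sketch; the alias name kc-s3 is arbitrary, and recent AWS CLI v2 releases also allow configuring an endpoint_url in the profile settings instead.

    # Optional: wrap the s3 command with the KakaoCloud endpoint
    alias kc-s3='aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3'
    # Example: kc-s3 ls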

AWS CLI example

Create a bucket
aws --endpoint-url={endpoint} s3 mb s3://{bucket_name}
List all buckets
aws --endpoint-url={endpoint} s3 ls
List the contents of a specific bucket
aws --endpoint-url={endpoint} s3 ls s3://{bucket_name}
Delete bucket
aws --endpoint-url={endpoint} s3 rb s3://{bucket_name}
Upload file
aws --endpoint-url={endpoint} s3 cp {local_path} s3://{bucket_name}/{upload_path}
Download file
aws --endpoint-url={endpoint} s3 cp s3://{bucket_name}/{file_path} {local_path}
Delete file
aws --endpoint-url={endpoint} s3 rm s3://{bucket_name}/{file_path}
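
The commands above can be chained into a short end-to-end check. The following is a sketch using a hypothetical bucket name (my-example-bucket) and a hypothetical local file (hello.txt); substitute your own values.

End-to-end example
aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 mb s3://my-example-bucket
aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 cp hello.txt s3://my-example-bucket/hello.txt
aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 ls s3://my-example-bucket
aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 cp s3://my-example-bucket/hello.txt ./hello-downloaded.txt
aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 rm s3://my-example-bucket/hello.txt
aws --endpoint-url=https://objectstorage.kr-central-2.kakaocloud.com s3 rb s3://my-example-bucket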

Type 2. Python SDK (Boto3) example

This is an example of using KakaoCloud Object Storage with the AWS Python SDK (Boto3).

Create and configure S3 client

  1. Install Boto3 using pip.

    pip install boto3


    info

    Boto3 is currently supported only on Python 3.8 or higher.

  2. Configure the client with your credentials and environment information.

    import boto3

    client = boto3.client(
        region_name="kr-central-2",
        endpoint_url="{ENDPOINT}",
        aws_access_key_id="{S3_ACCESS_KEY}",
        aws_secret_access_key="{S3_SECRET_ACCESS_KEY}",
        service_name="s3"
    )
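
    As a quick check that the client is configured correctly, you can list your buckets. This is a minimal sketch; it assumes the client object created above.

    # Connectivity check: print the names of buckets visible to these credentials
    for bucket in client.list_buckets().get('Buckets', []):
        print(bucket['Name'])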

SDK usage example

Create bucket
def create_bucket(bucket_name):
    try:
        return client.create_bucket(Bucket=bucket_name)
    except Exception as e:
        raise  # ...
List all buckets
def get_list_buckets():
    try:
        response = client.list_buckets()
        return [bucket.get('Name') for bucket in response.get('Buckets', [])]
    except Exception as e:
        raise  # ...
List the contents of a specific bucket
def get_list_objects(bucket_name):
    try:
        response = client.list_objects(Bucket=bucket_name)
        return [obj.get('Key') for obj in response.get('Contents', [])]
    except Exception as e:
        raise  # ...
Delete bucket
def delete_bucket(bucket_name):
    try:
        return client.delete_bucket(Bucket=bucket_name)
    except Exception as e:
        raise  # ...
Upload file
# Upload a file to a specific bucket
# local_path : Local path where the file to upload is located
# file_name  : Object key (name) to store in the bucket
def upload_file(local_path, bucket_name, file_name):
    try:
        # client.upload_file('/Documents/hello.jpeg', 'bucket', 'hello.jpeg')
        return client.upload_file(local_path, bucket_name, file_name)
    except Exception as e:
        raise
Download file
# file_name : Name of the file to download
# local_path : Local path and filename to save the downloaded file
def download_file(bucket_name, file_name, local_path):
    try:
        # client.download_file('bucket', 'hello.jpeg', '/Downloads/hello.jpeg')
        return client.download_file(bucket_name, file_name, local_path)
    except Exception as e:
        raise
Delete file
def delete_object(bucket_name, file_name):
    try:
        return client.delete_object(Bucket=bucket_name, Key=file_name)
    except Exception as e:
        raise
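
The helper functions above can be combined into a simple flow. The following is a sketch using hypothetical names (my-example-bucket, hello.txt); it assumes the client and the functions defined above are available in the same module.

# Example flow: create a bucket, upload a file, list it, download it, then clean up
create_bucket('my-example-bucket')
upload_file('./hello.txt', 'my-example-bucket', 'hello.txt')
print(get_list_objects('my-example-bucket'))
download_file('my-example-bucket', 'hello.txt', './hello-downloaded.txt')
delete_object('my-example-bucket', 'hello.txt')
delete_bucket('my-example-bucket')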

Type 3. Java SDK example

This is an example of using KakaoCloud Object Storage with the AWS Java SDK. This document is based on aws-java-sdk-v2.

Create and configure S3 client

  1. Add the following dependencies to your pom.xml.

    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>s3</artifactId>
            <version>2.23.7</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.32</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.4.12</version>
        </dependency>
    </dependencies>
  2. Rebuild the project (Maven or Gradle) to apply the added dependencies.

  3. Configure the client with your credentials and environment information.

    import java.net.URI;

    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;

    // ...

    String s3Endpoint = "https://objectstorage.kr-central-2.kakaocloud.com";
    String accessKey = "{S3_ACCESS_KEY}";
    String secretAccessKey = "{S3_SECRET_ACCESS_KEY}";
    String region = "kr-central-2";

    final S3Client client = S3Client.builder()
            .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKey, secretAccessKey)))
            .endpointOverride(URI.create(s3Endpoint))
            .forcePathStyle(true)
            .region(Region.of(region))
            .build();

SDK usage example

Create a bucket
private void createBucket(S3Client client, String bucketName) {
    try {
        CreateBucketResponse res = client.createBucket(
                CreateBucketRequest.builder()
                        .bucket(bucketName)
                        .build());
    }
    catch (Exception e){
        e.printStackTrace();
    }
}
List all buckets
private void listBuckets(S3Client client) {
    try {
        ListBucketsResponse res = client.listBuckets();
        System.out.println(res);
    }
    catch (Exception e){
        e.printStackTrace();
    }
}
List the contents of a specific bucket
private void listObjects(S3Client client, String bucketName) {
    try {
        ListObjectsResponse res = client.listObjects(
                ListObjectsRequest.builder()
                        .bucket(bucketName)
                        .build());
        System.out.println(res);
    }
    catch (Exception e){
        e.printStackTrace();
    }
}
Delete bucket
private void deleteBucket(S3Client client, String bucketName) {
    try {
        DeleteBucketResponse res = client.deleteBucket(
                DeleteBucketRequest.builder()
                        .bucket(bucketName)
                        .build());
        System.out.println(res);
    }
    catch (Exception e){
        e.printStackTrace();
    }
}
Upload file
// objectKey : Name of the file to upload
// filePath : Local path where the file to upload is located
private void uploadObject(S3Client client, String bucketName, String objectKey, String filePath){
    try {
        Path path = Paths.get(filePath);
        PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();

        client.putObject(putObjectRequest, path);
    } catch (Exception e){
        e.printStackTrace();
    }
}
Download file
// objectKey : Name of the file to download
// filePath : Local path and filename to save the downloaded file
private void downloadObject(S3Client client, String bucketName, String objectKey, String filePath){
    try {
        GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();

        ResponseBytes<GetObjectResponse> responseBytes = client.getObjectAsBytes(getObjectRequest);

        byte[] data = responseBytes.asByteArray();

        File myFile = new File(filePath);
        OutputStream os = new FileOutputStream(myFile);
        os.write(data);
        os.close();
    } catch (IOException ex) {
        ex.printStackTrace();
    } catch (S3Exception e){
        e.printStackTrace();
    }
}
Delete file
// objectKey : File name to delete
private void deleteObject(S3Client client, String bucketName, String objectKey){
    try {
        DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();

        client.deleteObject(deleteObjectRequest);

    } catch (Exception e){
        e.printStackTrace();
    }
}
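
The helper methods above can be combined into a simple flow. The following is a sketch using hypothetical names (my-example-bucket, hello.txt); it assumes the methods are defined in the same class as the client configured earlier.

// Example flow: create a bucket, upload a file, list it, download it, then clean up
createBucket(client, "my-example-bucket");
uploadObject(client, "my-example-bucket", "hello.txt", "/tmp/hello.txt");
listObjects(client, "my-example-bucket");
downloadObject(client, "my-example-bucket", "hello.txt", "/tmp/hello-downloaded.txt");
deleteObject(client, "my-example-bucket", "hello.txt");
deleteBucket(client, "my-example-bucket");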

Type 4. Go SDK example

This is an example of using KakaoCloud Object Storage with the AWS Go SDK.

Create and configure S3 client

  1. Initialize the local project using the following command.

    $ go mod init {project_name}
  2. Fetch AWS SDK packages for Go V2 using Go modules.

    $ go get github.com/aws/aws-sdk-go-v2/config
    $ go get github.com/aws/aws-sdk-go-v2/credentials
    $ go get github.com/aws/aws-sdk-go-v2/service/s3
  3. Configure the client with your credentials and environment information.

    var accessKeyId = "{S3_ACCESS_KEY}"
    var accessKeySecret = "{S3_SECRET_ACCESS_KEY}"
    var endpoint = "{ENDPOINT}"
    var region = "kr-central-2"

    resolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
        return aws.Endpoint{
            URL: endpoint,
        }, nil
    })

    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion(region),
        config.WithEndpointResolverWithOptions(resolver),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKeyId, accessKeySecret, "")),
    )
    if err != nil {
        log.Fatal(err)
    }

    client := s3.NewFromConfig(cfg, func(options *s3.Options) {
        options.UsePathStyle = true
    })

SDK usage example

Create a bucket
bucketname := "{BUCKET_NAME}"
_, err = client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
    Bucket: &bucketname,
})
if err != nil {
    log.Fatal(err)
}
List all buckets
_, err = client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
if err != nil {
    log.Fatal(err)
}
List the contents of a specific bucket
bucketname := "{BUCKET_NAME}"
_, err = client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
    Bucket: &bucketname,
})
if err != nil {
    log.Fatal(err)
}
Delete bucket
bucketname := "{BUCKET_NAME}"
_, err = client.DeleteBucket(context.TODO(), &s3.DeleteBucketInput{
    Bucket: &bucketname,
})
if err != nil {
    log.Fatal(err)
}
Upload file
bucketname := "{BUCKET_NAME}"
filePath := "{LOCAL_PATH}" // Local path where the file to upload is located
objectKey := "{FILE_NAME}" // File name to upload

file, err := os.Open(filePath)
if err != nil {
    log.Fatal(err)
}
defer file.Close()

_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
    Bucket: &bucketname,
    Key:    &objectKey,
    Body:   file,
})
if err != nil {
    log.Fatal(err)
}
Download file
bucketname := "{BUCKET_NAME}"
filePath := "{LOCAL_PATH}" // Local path and filename to save the downloaded file
objectKey := "{FILE_NAME}" // Name of the file to download

file, err := os.Create(filePath)
if err != nil {
    panic(err)
}
defer file.Close()

result, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
    Bucket: &bucketname,
    Key:    &objectKey,
})
if err != nil {
    log.Fatal(err)
}

_, err = io.Copy(file, result.Body)
if err != nil {
    panic(err)
}
Delete file
bucketname := "{BUCKET_NAME}"
objectKey := "{FILE_NAME}" // File name to delete

_, err = client.DeleteObject(context.TODO(), &s3.DeleteObjectInput{
    Bucket: &bucketname,
    Key:    &objectKey,
})
if err != nil {
    log.Fatal(err)
}
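
The snippets above assume they run inside a single function where client and err are already declared (for example, directly after the client configuration step). The following is a minimal, self-contained sketch that wires the configuration and a bucket listing together as a connectivity check; the endpoint and region match this document, and the credential placeholders must be replaced with your own values.

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    endpoint := "https://objectstorage.kr-central-2.kakaocloud.com"

    // Route all S3 calls to the KakaoCloud Object Storage endpoint
    resolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
        return aws.Endpoint{URL: endpoint}, nil
    })

    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion("kr-central-2"),
        config.WithEndpointResolverWithOptions(resolver),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("{S3_ACCESS_KEY}", "{S3_SECRET_ACCESS_KEY}", "")),
    )
    if err != nil {
        log.Fatal(err)
    }

    client := s3.NewFromConfig(cfg, func(options *s3.Options) {
        options.UsePathStyle = true
    })

    // Connectivity check: list the buckets visible to these credentials
    out, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, b := range out.Buckets {
        log.Println(*b.Name)
    }
}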