API overview

The Hadoop Eco API is an interface for managing Hadoop Eco resources programmatically.
With the REST API provided by Hadoop Eco, you can create, retrieve, and manage clusters through methods other than the KakaoCloud console.

API common

  • API responses are provided in JSON format.
  • To use the API, you need access key credentials (access key ID and secret access key).
  • For the job cluster API, an Open API key must be issued from the Hadoop Eco cluster.

Prepare API usage

To call the API provided by Hadoop Eco, you must issue access key credentials. For more details, refer to the credentials documentation.
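As a rough sketch, the common credential headers can be assembled as below. The header names (`Credential-ID`, `Credential-Secret`) come from the API model later in this document; the key values are placeholders you would replace with your issued credentials.

```python
def build_auth_headers(access_key_id: str, secret_access_key: str) -> dict:
    """Return the common credential headers used by Hadoop Eco API requests.

    Header names follow the API model table; responses are JSON.
    """
    return {
        "Credential-ID": access_key_id,          # access key ID
        "Credential-Secret": secret_access_key,  # secret access key
        "Content-Type": "application/json",
    }


# Example (placeholder values, not real credentials):
headers = build_auth_headers("my-access-key-id", "my-secret-access-key")
```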

Hadoop Eco API classification

Hadoop Eco Open API

The Hadoop Eco Open API supports basic cluster operations such as creating a new cluster, retrieving cluster information, deleting, and scaling clusters. When using the Hadoop Eco Open API, issuing an Open API key is not required.

Job cluster API

In general, a cluster in Hadoop Eco changes to the Terminated state after completing a job, and can no longer be used. However, by using the job cluster API, you can repeatedly execute job scheduling on a previously created cluster that is in the Terminated state.

The job cluster API enhances cluster reusability and allows repeated task execution on an existing cluster without needing to create a new one each time, enabling efficient cluster management.

💡 Open API key issuance requirements

When issuing an Open API key from the console, all of the following conditions must be met:

  • The cluster type is Core Hadoop.
  • The job scheduling option is set to Hive or Spark during cluster creation.
  • The Hadoop Eco cluster is in the Terminated(User Command) or Terminated(User) state.

Notes for issuing an Open API key for Hadoop Eco clusters
  • If you use an Open API cluster created by linking to an external metastore during 'Step 5: Service integration settings' in the cluster creation process, you must maintain the exact same metastore configuration.
  • If the external metastore is deleted or modified, the cluster may not function properly.
  • If the security group is deleted while the API key still exists, reissuing the API key recreates the security group, so the Open API cluster can continue to be used.

For detailed instructions on issuing an Open API key, refer to the Open API key documentation.
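The issuance conditions above can be expressed as a simple eligibility check. This is a hypothetical helper for illustration only; the cluster type, scheduling options, and state names are taken from the requirements list above.

```python
# States in which an Open API key can be issued (from the requirements above).
ELIGIBLE_STATES = {"Terminated(User Command)", "Terminated(User)"}


def can_issue_open_api_key(cluster_type: str, job_scheduling: str, state: str) -> bool:
    """Check the documented conditions for issuing an Open API key."""
    return (
        cluster_type == "Core Hadoop"
        and job_scheduling in {"Hive", "Spark"}
        and state in ELIGIBLE_STATES
    )
```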

API model

The API model for Hadoop Eco is as follows.

Model – cluster

| Data | Type | Description |
| --- | --- | --- |
| Credential-ID | String | Access key ID |
| Credential-Secret | String | Secret access key |
| Hadoop-Eco-Api-Key | String | Open API key issued from the console |
| request-id | String | Request ID |
| workerCnt | Integer | Number of Hadoop Eco worker nodes<br/>- Count: 1 to 1,000 |
| workerVolumeSize | Integer | Block storage size for Hadoop Eco worker nodes<br/>- Size: 50 to 5,120 GB |
| configHdfsReplication | Integer | Number of Hadoop Eco HDFS replications<br/>- Count: 1 to 500 |
| configHdfsBlockSize | Integer | Hadoop Eco HDFS block size<br/>- Size: 1 to 1,024 MB |
| userTaskDeployMode | String | Hadoop Eco Spark job deploy mode<br/>- Mode: client, cluster |
| userTaskExecOpts | String | Hadoop Eco Spark or Hive job configuration parameters |
| userTaskExecParams | String | Hadoop Eco Spark application parameters |
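As an illustration, the documented ranges can be checked client-side before sending a request. This is a hypothetical validator, not part of the Hadoop Eco API itself; the field names and bounds are taken from the model table above.

```python
# Documented bounds for the integer fields in the cluster model.
FIELD_RANGES = {
    "workerCnt": (1, 1000),               # worker node count
    "workerVolumeSize": (50, 5120),       # block storage size, GB
    "configHdfsReplication": (1, 500),    # HDFS replication count
    "configHdfsBlockSize": (1, 1024),     # HDFS block size, MB
}


def validate_cluster_params(params: dict) -> list:
    """Return a list of error messages for fields outside their documented ranges."""
    errors = []
    for field, (lo, hi) in FIELD_RANGES.items():
        value = params.get(field)
        if value is not None and not (lo <= value <= hi):
            errors.append(f"{field} must be between {lo} and {hi}")
    # Spark deploy mode accepts only the two documented values.
    if params.get("userTaskDeployMode") not in (None, "client", "cluster"):
        errors.append("userTaskDeployMode must be 'client' or 'cluster'")
    return errors
```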