API overview
Hadoop Eco API is an interface for managing resources through a code-based, programmatic approach.
Through the REST API provided by Hadoop Eco, you can handle cluster creation, querying, and other management tasks without going through the KakaoCloud console.
API common
- API responses are provided in JSON format.
- Access key information (access key ID and secret access key) is required to use the API, as shown in the sketch after this list.
- For the job cluster API, you must issue an Open API key for the Hadoop Eco cluster.
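As a minimal sketch of how these credentials are passed, assuming a hypothetical base URL and a hypothetical `/clusters` path (check the Hadoop Eco API reference for the real endpoints), a request might look like the following in Python. The `Credential-ID` and `Credential-Secret` header names come from the API model below.

```python
import requests

# Hypothetical base URL; replace with the actual Hadoop Eco API endpoint.
API_BASE = "https://hadoop-eco.example.kakaocloud.com/v1"

# Access key information issued in advance (see "Prepare API usage").
headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
}

# Example: query clusters; the response body is JSON.
response = requests.get(f"{API_BASE}/clusters", headers=headers)
response.raise_for_status()
print(response.json())
```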
Prepare API usage
To call the API provided by Hadoop Eco, you must issue access key information. For detailed instructions, refer to the Get access key document.
Hadoop Eco API classification
Hadoop Eco Open API
Hadoop Eco Open API performs general cluster creation and management tasks, such as creating new clusters, querying clusters, deleting clusters, and scaling them. Open API key issuance is not required when using the Hadoop Eco Open API.
Job cluster API
In Hadoop Eco, clusters typically switch to the `Terminated` state once a specific task is completed, making them unavailable for further use. However, with the job cluster API, you can repeatedly schedule tasks on clusters already in the `Terminated` state.
The job cluster API improves cluster reusability: tasks can be executed repeatedly on an existing cluster without creating a new cluster each time, enabling efficient cluster management.
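As a hedged sketch of this flow, assuming a hypothetical base URL and job path (the actual endpoints are documented in the Hadoop Eco API reference), a repeated job submission against a `Terminated` cluster might look like this; the `Hadoop-Eco-Api-Key` header carries the Open API key described in the API model below.

```python
import requests

# Hypothetical base URL and path; replace with the actual endpoints.
API_BASE = "https://hadoop-eco.example.kakaocloud.com/v1"
CLUSTER_ID = "YOUR_CLUSTER_ID"

# The job cluster API requires the Open API key in addition to access keys.
headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
    "Hadoop-Eco-Api-Key": "YOUR_OPEN_API_KEY",  # issued from the console
}

# Re-run the scheduled task (Hive or Spark) on a cluster in the
# Terminated state; no new cluster is created.
response = requests.post(f"{API_BASE}/clusters/{CLUSTER_ID}/jobs", headers=headers)
print(response.status_code, response.json())
```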
💡 Open API key issuance conditions
When issuing an Open API key in the console, all the following conditions must be met:
- The cluster type must be `Core Hadoop`.
- When creating the cluster, the task scheduling activation option must be set to `Hive` or `Spark`.
- The Hadoop Eco cluster state must be `Terminated(User Command)` or `Terminated(User)`.
- If you use Open API for a cluster created by linking an external metastore during 'Step 5: Service integration settings' of the cluster creation process, the external metastore configuration must remain unchanged.
- If the external metastore is deleted or its information is changed, the cluster may not function properly.
- If the security group is deleted without deleting the API key, reissuing the API key recreates the security group, so you can use the cluster through the Open API again.
For detailed Open API key issuance instructions, refer to Open API key.
API model
The API model of Hadoop Eco is as follows:
Model - cluster
| Data | Type | Description |
| --- | --- | --- |
| Credential-ID | String | Access key ID |
| Credential-Secret | String | Secret access key |
| Hadoop-Eco-Api-Key | String | Open API key issued from the console |
| request-id | String | Request ID |
| workerCnt | Integer | Hadoop Eco worker node count (range: 1 ~ 1,000) |
| workerVolumeSize | Integer | Hadoop Eco worker node block storage size in GB (range: 50 ~ 5,120) |
| configHdfsReplication | Integer | Hadoop Eco HDFS replication count (range: 1 ~ 500) |
| configHdfsBlockSize | Integer | Hadoop Eco HDFS block size in MB (range: 1 ~ 1,024) |
| userTaskDeployMode | String | Hadoop Eco Spark job deploy mode (`client` or `cluster`) |
| userTaskExecOpts | String | Hadoop Eco Spark or Hive job configuration parameters |
| userTaskExecParams | String | Hadoop Eco Spark application parameters |
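To illustrate how these fields fit together, the sketch below assembles a request body from the model's worker, HDFS, and task fields. The endpoint, the field combination, and the values are assumptions for illustration only; consult the individual API reference pages for the exact schema of each call.

```python
import requests

# Hypothetical endpoint; the real URL is listed in each API reference page.
API_BASE = "https://hadoop-eco.example.kakaocloud.com/v1"

headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
}

# Request body built from the model fields above (values are examples only).
body = {
    "workerCnt": 5,                    # worker node count, 1 ~ 1,000
    "workerVolumeSize": 100,           # block storage size in GB, 50 ~ 5,120
    "configHdfsReplication": 2,        # HDFS replication count, 1 ~ 500
    "configHdfsBlockSize": 128,        # HDFS block size in MB, 1 ~ 1,024
    "userTaskDeployMode": "cluster",   # Spark deploy mode: client or cluster
    "userTaskExecOpts": "spark.executor.memory=4g",  # job configuration parameters
    "userTaskExecParams": "arg1 arg2",               # Spark application parameters
}

# Hypothetical cluster call using the body above.
response = requests.post(f"{API_BASE}/clusters", headers=headers, json=body)
print(response.json())
```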