API overview
The Hadoop Eco API is an interface for managing resources programmatically.
With the REST API provided by Hadoop Eco, you can create, retrieve, and manage clusters without going through the KakaoCloud console.
API common
- API responses are provided in `JSON` format.
- To use the API, you need access key credentials (an access key ID and a secret access key).
- For the job cluster API, an Open API key must be issued from the Hadoop Eco cluster.
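As a hedged illustration of these common rules, the Python sketch below sends an authenticated request and parses the JSON response. The endpoint URL is a placeholder, not the documented Hadoop Eco address; the header names come from the API model table at the bottom of this page.

```python
import requests

# Placeholder endpoint -- substitute the actual Hadoop Eco API URL
# from the KakaoCloud API reference.
API_URL = "https://hadoop-eco.example.kakaocloud.com/v1/clusters"

# Credential headers as named in the API model table below.
headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
}

response = requests.get(API_URL, headers=headers)
response.raise_for_status()

# All API responses are provided in JSON format.
print(response.json())
```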
Prepare API usage
To call the API provided by Hadoop Eco, you must first issue access key credentials. For more details, refer to the credentials documentation.
Hadoop Eco API classification
Hadoop Eco Open API
The Hadoop Eco Open API supports basic cluster operations such as creating a new cluster, retrieving cluster information, and deleting or scaling an existing cluster. Using the Hadoop Eco Open API does not require issuing an Open API key.
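For example, a scaling request might carry the workerCnt field defined in the model table below. This is a minimal sketch assuming a hypothetical endpoint and HTTP method; verify both against the Open API reference.

```python
import requests

# Hypothetical scaling endpoint -- verify the real path and method
# in the Hadoop Eco Open API reference before use.
SCALE_URL = "https://hadoop-eco.example.kakaocloud.com/v1/clusters/CLUSTER_ID/scale"

headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
}

# workerCnt (1 to 1,000) is defined in the API model table below.
response = requests.post(SCALE_URL, headers=headers, json={"workerCnt": 5})
response.raise_for_status()
print(response.json())
```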
Job cluster API
In general, a cluster in Hadoop Eco changes to the `Terminated` state after completing a job and can no longer be used. With the job cluster API, however, you can repeatedly execute job scheduling on a previously created cluster in the `Terminated` state.
By reusing an existing cluster instead of creating a new one for each run, the job cluster API enhances cluster reusability and enables efficient cluster management.
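A hedged sketch of a job cluster API call is shown below. Beyond the access key credentials, it adds the Open API key header from the model table; the endpoint path is an assumption, not the documented route.

```python
import requests

# Hypothetical endpoint for re-running job scheduling on a cluster
# in the Terminated state -- substitute the documented path.
JOB_URL = "https://hadoop-eco.example.kakaocloud.com/v1/clusters/CLUSTER_ID/jobs"

# The job cluster API requires the Open API key issued from the
# console, in addition to the access key credentials.
headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
    "Hadoop-Eco-Api-Key": "YOUR_OPEN_API_KEY",
}

response = requests.post(JOB_URL, headers=headers)
response.raise_for_status()
print(response.json())
```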
💡 Open API key issuance requirements
When issuing an Open API key from the console, all of the following conditions must be met:
- The cluster type is `Core Hadoop`.
- The job scheduling option is set to `Hive` or `Spark` during cluster creation.
- The Hadoop Eco cluster is in the `Terminated(User Command)` or `Terminated(User)` state.
- If the Open API cluster was created with an external metastore linked in 'Step 5: Service integration settings' during cluster creation, you must keep that metastore configuration exactly as it was.
- If the external metastore is deleted or modified, the cluster may not function properly.
- If the security group is deleted but the API key remains, reissuing the API key recreates the security group, so the Open API cluster can still be used.
For detailed instructions on issuing an Open API key, refer to the Open API key documentation.
API model
The API model for Hadoop Eco is as follows.
Model – cluster
| Data | Type | Description |
| --- | --- | --- |
| Credential-ID | String | Access key ID |
| Credential-Secret | String | Secret access key |
| Hadoop-Eco-Api-Key | String | Open API key issued from the console |
| request-id | String | Request ID |
| workerCnt | Integer | Number of Hadoop Eco worker nodes - Count: 1 to 1,000 |
| workerVolumeSize | Integer | Block storage size for Hadoop Eco worker nodes - Size: 50 to 5,120 GB |
| configHdfsReplication | Integer | Number of Hadoop Eco HDFS replications - Count: 1 to 500 |
| configHdfsBlockSize | Integer | Hadoop Eco HDFS block size - Size: 1 to 1,024 MB |
| userTaskDeployMode | String | Hadoop Eco Spark job deploy mode - Mode: `client`, `cluster` |
| userTaskExecOpts | String | Hadoop Eco Spark or Hive job configuration parameters |
| userTaskExecParams | String | Hadoop Eco Spark application parameters |
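Tying the model together, the sketch below builds a Spark job request body from the userTask fields above. The endpoint, and whether these fields may be combined in a single request, are assumptions to check against the API reference; the parameter values are illustrative placeholders.

```python
import requests

# Hypothetical job-submission endpoint -- consult the Hadoop Eco
# API reference for the actual URL and path parameters.
URL = "https://hadoop-eco.example.kakaocloud.com/v1/clusters/CLUSTER_ID/jobs"

headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
    "Hadoop-Eco-Api-Key": "YOUR_OPEN_API_KEY",
}

# Spark job fields as defined in the model table above;
# the values are illustrative only.
payload = {
    "userTaskDeployMode": "cluster",                        # client or cluster
    "userTaskExecOpts": "--conf spark.executor.memory=4g",  # job configuration parameters
    "userTaskExecParams": "s3a://bucket/input s3a://bucket/output",  # application parameters
}

response = requests.post(URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```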