Cluster Re-work API
In Hadoop Eco, a cluster normally changes its status to Terminated once it completes its assigned job, after which it can no longer be used. With the Cluster Re-work API, however, you can schedule and run jobs again on an existing cluster in the Terminated state.
The Cluster Re-work API increases cluster reusability and supports efficient cluster management: repetitive jobs can run on an existing cluster without creating a new cluster every time.
The Cluster Re-work API operates through the following process:
- Select cluster: Select the cluster for re-work in the console. This must be an existing cluster in the Terminated state.
- Issue Open API Key: To use the Cluster Re-work API, you must issue an Open API Key for that cluster in the KakaoCloud Console. This key is required to access and control the cluster during API calls.
  - Issuing an Open API Key automatically creates a dedicated security group for that Open API cluster, and deleting the API Key deletes the security group.
- Call Cluster Re-work API: Call the Cluster Re-work API using the issued Open API Key. This allows you to schedule and execute new tasks on a cluster in the Terminated state (see the end-to-end sketch after this list).
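The whole flow can be scripted against the reference below. The following is a minimal sketch in Python using the `requests` library; the environment variable names, the placeholder cluster ID, and the assumption that an empty request body reuses the existing cluster settings (the request elements are documented as optional) are ours, not part of the API:

```python
import os
import requests

# End-to-end sketch of the re-work flow. The endpoint and headers follow the
# reference below; credential values and the cluster ID are placeholders that
# you must supply (see "Prepare for API usage").
BASE = "https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters"

headers = {
    "Hadoop-Eco-Api-Key": os.environ["HADOOP_ECO_API_KEY"],  # issued per cluster in the console
    "Credential-ID": os.environ["CREDENTIAL_ID"],            # IAM access key ID
    "Credential-Secret": os.environ["CREDENTIAL_SECRET"],    # IAM secret access key
    "Content-Type": "application/json",
}
cluster_id = "your-cluster-id"  # Console > Analytics > Hadoop Eco > Cluster

# 1. Schedule the job again on the Terminated cluster. An empty body is
#    assumed to reuse the existing settings (all request elements are optional).
resp = requests.post(f"{BASE}/{cluster_id}", headers=headers, json={})
resp.raise_for_status()
request_id = resp.json()["requestId"]

# 2. Look up the status of the re-work request.
status = requests.get(f"{BASE}/{cluster_id}/requests/{request_id}", headers=headers)
print(status.json()["requestStatus"])
```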
Prepare for API usage
To call the Cluster Re-work API, you must issue an access key and obtain an Open API Key for the Hadoop Eco cluster from the console.
Create job cluster
Schedules and runs a job again on a cluster that is currently in the Terminated state.
Request
Request Syntax
curl -X POST 'https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}' \
--header 'Hadoop-Eco-Api-Key: {hadoop-eco-api-key}' \
--header 'Credential-ID: {credential-id}' \
--header 'Credential-Secret: {credential-secret}' \
--header 'Content-Type: application/json' \
--data-raw '{
  "instanceSpecs": [
    {
      "type": "MASTER|WORKER|TASK",
      "nodeCnt": integer,
      "volumeSize": integer
    }
  ],
  "config": {
    "hdfsBlockSize": integer,
    "hdfsReplication": integer
  },
  "userTaskInfo": {
    "deployMode": "string",
    "execOpts": "string",
    "execParams": "string"
  },
  "securityGroupIds": ["string"]
}'
API call method
| Method | Request URL |
|---|---|
| POST | https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id} |
| Location | Parameter | Type | Description |
|---|---|---|---|
| URL | {cluster-id}* | String | ID of the cluster - Can be verified at KakaoCloud Console > Analytics > Hadoop Eco > Cluster > Cluster Information |
Request Header
| Request | Type | Description |
|---|---|---|
| {credential-id}* | String | User's access key ID - Viewable at Profile (top right) > Credentials > IAM Access Key |
| {credential-secret}* | String | User's secret access key - Can only be confirmed at the time of access key creation |
| {hadoop-eco-api-key}* | String | Open API key - Issued from Cluster Tasks > Issue Open API Key in the Hadoop Eco cluster menu |
Request Elements
All request elements are optional; include them only when the existing cluster settings need to be changed. A sketch of building such a request body follows the table below.
| Request | Type | Description |
|---|---|---|
| type | MASTER/WORKER/TASK | Node type |
| nodeCnt | Integer | Number of Hadoop Eco worker nodes - Count: 1 – 1,000 |
| volumeSize | Integer | Block storage size for Hadoop Eco worker nodes - Size: 100 – 5,120 GB |
| hdfsBlockSize | Integer | HDFS block size for Hadoop Eco - Size: 1 – 1,024 MB |
| hdfsReplication | Integer | HDFS replication factor for Hadoop Eco - Count: 1 – 500 |
| deployMode | String | Spark job deploy mode for Hadoop Eco - Mode: client, cluster |
| execOpts | String | Hive configuration parameters for Hadoop Eco |
| execParams | String | Job application parameters for Hadoop Eco |
| securityGroupIds | Array of String | Security group IDs |
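As an illustration, a request that overrides only the worker group and the Spark deploy mode might be built as follows. This is a Python sketch with placeholder credentials; whether the API accepts a partial instanceSpecs list is our assumption based on the optional elements above:

```python
import requests

# Placeholder header values; see "Prepare for API usage" above.
headers = {
    "Hadoop-Eco-Api-Key": "your-open-api-key",
    "Credential-ID": "your-credential-id",
    "Credential-Secret": "your-credential-secret",
    "Content-Type": "application/json",
}
cluster_id = "your-cluster-id"

# Include only the settings you want to override; omitted elements are
# assumed to keep the existing cluster configuration.
body = {
    "instanceSpecs": [
        {"type": "WORKER", "nodeCnt": 4, "volumeSize": 100}  # 100 GB per node
    ],
    "userTaskInfo": {"deployMode": "cluster"},
}

resp = requests.post(
    f"https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster_id}",
    headers=headers,
    json=body,
)
```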
Response
Response Syntax
{
  "clusterId": "string",
  "clusterName": "string",
  "requestId": "string"
}
Response Elements
| Response | Description |
|---|---|
| clusterId | Created cluster ID |
| clusterName | Created cluster name |
| requestId | Task request ID |
Status codes
| HTTP Status | Description |
|---|---|
| 200 | Success |
| 202 | Cluster task (creation) in progress |
| 400 | Request information error |
| 401, 403 | Authentication failed, No permission |
| 404 | Cluster not found |
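Because 202 means the cluster task is still in progress, a caller can treat both 200 and 202 as acceptance. A sketch continuing the example above:

```python
# Continuing the sketch above: branch on the documented status codes.
if resp.status_code in (200, 202):
    request_id = resp.json()["requestId"]  # keep for the details call below
    print("Re-work accepted; request ID:", request_id)
elif resp.status_code in (401, 403):
    print("Authentication failed or no permission")
elif resp.status_code == 404:
    print("Cluster not found:", cluster_id)
else:
    print("Request error:", resp.status_code, resp.text)
```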
Retrieve job cluster details
You can retrieve the status of a cluster for which an Open API Key has been issued.
Request
Request Syntax
curl -X GET 'https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/requests/{request-id}' \
--header 'Hadoop-Eco-Api-Key: {hadoop-eco-api-key}' \
--header 'Credential-ID: {credential-id}' \
--header 'Credential-Secret: {credential-secret}' \
--header 'Content-Type: application/json'
curl -X GET 'https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/requests/{request-id}?verbose=true' \
--header 'Hadoop-Eco-Api-Key: {hadoop-eco-api-key}' \
--header 'Credential-ID: {credential-id}' \
--header 'Credential-Secret: {credential-secret}' \
--header 'Content-Type: application/json'
API call method
| Method | Request URL |
|---|---|
| GET | https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/requests/{request-id} |
| Path | Type | Description |
|---|---|---|
| {cluster-id}* | String | ID of the cluster |
| {request-id}* | String | requestId value returned in the Create job cluster response |
Query parameters
| Request | Type | Description |
|---|---|---|
| verbose | Boolean | When set to true, detailed cluster information down to the master/worker nodes is returned - true, false |
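Continuing the sketch above, a detailed lookup with verbose=true might look like this (request_id comes from the create response):

```python
# Continuing the sketch above: fetch detailed status, including node groups.
detail = requests.get(
    f"https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster_id}/requests/{request_id}",
    headers=headers,
    params={"verbose": "true"},  # omit for the compact response
)
detail.raise_for_status()
print(detail.json()["requestStatus"])
```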
Response
Response Syntax
Default response:
{
  "clusterId": "string",
  "clusterName": "string",
  "requestId": "string",
  "requestStatus": "string",
  "requestResult": "string"
}
Response when verbose=true:
{
  "clusterId": "string",
  "clusterName": "string",
  "requestId": "string",
  "requestStatus": "string",
  "requestResult": "string",
  "clusterType": "string",
  "clusterVersion": "string",
  "isHa": true|false,
  "installComponents": [
    "string"
  ],
  "masterInfo": [
    {
      "instanceGroupId": "string",
      "flavorId": "string",
      "flavorName": "string",
      "volumeSize": integer,
      "nodeCnt": integer
    }
  ],
  "workerInfo": [
    {
      "instanceGroupId": "string",
      "flavorId": "string",
      "flavorName": "string",
      "volumeSize": integer,
      "nodeCnt": integer
    }
  ],
  "taskInfo": [
    {
      "instanceGroupId": "string",
      "flavorId": "string",
      "flavorName": "string",
      "volumeSize": integer,
      "nodeCnt": integer
    }
  ],
  "imageName": "string",
  "securityGroupIds": [
    "string"
  ],
  "keypairName": "string",
  "owner": "string",
  "config": {
    "hdfsBlockSize": integer,
    "hdfsReplication": integer,
    "configText": "string",
    "userScript": "string"
  },
  "userTask": {
    "type": "string",
    "terminationPolicy": "string",
    "fileUrl": "string",
    "hiveQuery": "string",
    "deployMode": "string",
    "execOpts": "string",
    "execParams": "string",
    "logUrl": "string"
  }
}
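For example, the instance groups in a verbose response can be summarized like this (continuing the sketch above, where detail holds the verbose response):

```python
# Continuing the sketch above: summarize the instance groups returned
# when verbose=true.
info = detail.json()
for group in ("masterInfo", "workerInfo", "taskInfo"):
    for spec in info.get(group, []):
        print(f"{group}: {spec['flavorName']} x {spec['nodeCnt']} "
              f"({spec['volumeSize']} GB volume)")
```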
Response Elements
| Response | Description |
|---|---|
| clusterId | Cluster ID |
| clusterName | Cluster name |
| requestId | Request ID |
| requestStatus | Request status |
| requestResult | Request result |
| clusterType | Created cluster type |
| clusterVersion | Created cluster version |
| isHa | High Availability status |
| installComponents | Installed components |
| masterInfo/workerInfo/taskInfo▼ | Instance group information |
| instanceGroupId | Instance group ID |
| flavorId | Flavor ID |
| flavorName | Flavor name |
| volumeSize | Volume size |
| nodeCnt | Node count |
| imageName | Image name |
| securityGroupIds | Security group IDs |
| keypairName | Key pair name |
| owner | Cluster owner |
| config▼ | Configuration information |
| hdfsBlockSize | HDFS block size |
| hdfsReplication | HDFS replication count |
| configText | Injected cluster configuration info |
| userScript | Injected user script |
| userTask▼ | Task details |
| type | Task type |
| terminationPolicy | Cluster behavior after task completion |
| fileUrl | Location of the executed task file |
| hiveQuery | Executed Hive query string |
| deployMode | Spark job deployment mode |
| execOpts | Option string of the executed task |
| execParams | Parameter string of the executed task |
| logUrl | Location where task result logs are stored |
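Since re-work requests complete asynchronously, a caller will typically poll this endpoint. Below is a sketch continuing the example above; note that the exact terminal values of requestStatus are not documented here, so the strings used are assumptions to adjust:

```python
import time
import requests

# Polling sketch. The terminal requestStatus values below are assumptions,
# not documented; replace them with the values your clusters actually report.
TERMINAL = {"Success", "Failed", "Terminated"}

while True:
    detail = requests.get(
        f"https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster_id}/requests/{request_id}",
        headers=headers,
    )
    detail.raise_for_status()
    status = detail.json()["requestStatus"]
    print("requestStatus:", status)
    if status in TERMINAL:
        break
    time.sleep(30)  # avoid hammering the endpoint
```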
Status codes
| HTTP Status | Description |
|---|---|
| 200 | Success |
| 400 | Request information error |
| 401, 403 | Authentication failed, No permission |
| 404 | Cluster not found |