Cluster Re-work API

In Hadoop Eco, a cluster typically changes its status to Terminated once it completes a specific task, after which the cluster can no longer be used. However, by using the Cluster Re-work API, you can repeatedly execute task scheduling on an existing cluster that is in the Terminated state.

The Cluster Re-work API increases cluster reusability and supports efficient cluster management by allowing repetitive tasks on existing clusters without the need to create a new cluster every time.

The Cluster Re-work API operates through the following process:

  1. Select Cluster: Select the cluster for re-work in the console. This must be an existing cluster in the Terminated state.
  2. Issue Open API Key: To use the Cluster Re-work API, you must issue an Open API Key for that cluster in the KakaoCloud Console. This key is required to access and control the cluster during API calls.
    • Issuing an Open API Key automatically creates a dedicated security group for that Open API cluster, and deleting the API Key will delete the security group.
  3. Call Cluster Re-work API: Call the Cluster Re-work API using the issued Open API Key. This allows you to schedule and execute new tasks on a cluster in the Terminated state.

Prepare for API usage

To call the Cluster Re-work API, you must issue an access key and obtain an Open API Key for the Hadoop Eco cluster from the console.
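Once both credentials are in hand, every call to the Cluster Re-work API carries the same set of headers. The header names below are taken from the request examples in this document; reading the secrets from environment variables is only one possible approach and is shown for illustration:

```python
import os

# Build the authentication headers required by every Hadoop Eco
# Open API call. The header names come from the request examples
# in this document; the environment variable names are assumptions.
def build_headers(api_key: str, cred_id: str, cred_secret: str) -> dict:
    return {
        "Hadoop-Eco-Api-Key": api_key,
        "Credential-ID": cred_id,
        "Credential-Secret": cred_secret,
        "Content-Type": "application/json",
    }

headers = build_headers(
    os.environ.get("HADOOP_ECO_API_KEY", "my-api-key"),
    os.environ.get("KC_ACCESS_KEY_ID", "my-access-key-id"),
    os.environ.get("KC_SECRET_ACCESS_KEY", "my-secret-key"),
)
print(sorted(headers))
```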

Create job cluster

Executes task scheduling again on an existing cluster that is currently in the Terminated state.

Request

Request Syntax

Cluster Creation Request Syntax
curl -X POST 'https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}' \
--header 'Hadoop-Eco-Api-Key: {hadoop-eco-api-key}' \
--header 'Credential-ID: {credential-id}' \
--header 'Credential-Secret: {credential-secret}' \
--header 'Content-Type: application/json' \
--data-raw '
{
  "instanceSpecs": [
    {
      "type": "MASTER|WORKER|TASK",
      "nodeCnt": integer,
      "volumeSize": integer
    }
  ],
  "config": {
    "hdfsBlockSize": integer,
    "hdfsReplication": integer
  },
  "userTaskInfo": {
    "deployMode": "string",
    "execOpts": "string",
    "execParams": "string"
  },
  "securityGroupIds": ["string"]
}'
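The request body above can also be assembled programmatically before the call. The sketch below builds an example payload mirroring the Request Syntax; every value is illustrative, all fields are optional, and the actual POST (shown only as a comment) would use the headers described in the Request Header section:

```python
import json

# Build a re-work request body matching the Request Syntax above.
# All values here are illustrative; include a field only when
# overriding the existing cluster's settings.
payload = {
    "instanceSpecs": [
        {"type": "WORKER", "nodeCnt": 2, "volumeSize": 100},
    ],
    "config": {"hdfsBlockSize": 128, "hdfsReplication": 2},
    "userTaskInfo": {
        "deployMode": "cluster",
        "execOpts": "",
        "execParams": "",
    },
    "securityGroupIds": ["sg-example"],  # hypothetical security group ID
}
body = json.dumps(payload)
# The actual call would POST `body` to
# https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}
print(body)
```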

API call method

| Method | Request URL |
| --- | --- |
| POST | https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id} |

| Type | Parameter | Type | Description |
| --- | --- | --- | --- |
| URL | {cluster-id}* | String | ID of the cluster<br/>- Can be verified at KakaoCloud Console > Analytics > Hadoop Eco > Cluster > Cluster Information |

Request Header

| Request | Type | Description |
| --- | --- | --- |
| {credential-id}* | String | User's access key ID<br/>- Viewable at Profile (top right) > Credentials > IAM Access Key |
| {credential-secret}* | String | User's secret access key<br/>- Can only be confirmed at the time of access key creation |
| {hadoop-eco-api-key}* | String | Open API key<br/>- Issued from Cluster Tasks > Issue Open API Key in the Hadoop Eco cluster menu |

Request Elements

All request body elements are optional; include them only when the existing cluster settings need to be changed.

| Request | Type | Description |
| --- | --- | --- |
| type | MASTER/WORKER/TASK | Node type |
| nodeCnt | Integer | Number of Hadoop Eco worker nodes<br/>- Count: 1 – 1,000 |
| volumeSize | Integer | Block storage size for Hadoop Eco worker nodes<br/>- Size: 100 – 5,120 GB |
| hdfsBlockSize | Integer | HDFS block size for Hadoop Eco<br/>- Size: 1 – 1,024 MB |
| hdfsReplication | Integer | HDFS replication factor for Hadoop Eco<br/>- Count: 1 – 500 |
| deployMode | String | Spark job deploy mode for Hadoop Eco<br/>- Mode: client, cluster |
| execOpts | String | Hive configuration parameters for Hadoop Eco |
| execParams | String | Job application parameters for Hadoop Eco |
| securityGroupIds | String | Security group ID |
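Because the service rejects out-of-range values with a 400 error, it can help to validate a payload locally first. The helper below checks the documented ranges from the table above; the function itself is an illustrative sketch, not part of the API:

```python
# Sanity-check a re-work payload against the documented ranges
# before sending it. The limits come from the Request Elements
# table; the helper itself is illustrative.
def validate_payload(payload: dict) -> list:
    errors = []
    for spec in payload.get("instanceSpecs", []):
        if spec.get("type") not in ("MASTER", "WORKER", "TASK"):
            errors.append("type must be MASTER, WORKER, or TASK")
        if not 1 <= spec.get("nodeCnt", 1) <= 1000:
            errors.append("nodeCnt must be 1-1,000")
        if not 100 <= spec.get("volumeSize", 100) <= 5120:
            errors.append("volumeSize must be 100-5,120 GB")
    config = payload.get("config", {})
    if "hdfsBlockSize" in config and not 1 <= config["hdfsBlockSize"] <= 1024:
        errors.append("hdfsBlockSize must be 1-1,024 MB")
    if "hdfsReplication" in config and not 1 <= config["hdfsReplication"] <= 500:
        errors.append("hdfsReplication must be 1-500")
    return errors

# nodeCnt of 0 is below the documented minimum, so this reports an error.
print(validate_payload({"instanceSpecs": [{"type": "WORKER", "nodeCnt": 0, "volumeSize": 100}]}))
```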

Response

Response Syntax

Cluster Creation Response Syntax
{
  "clusterId": "string",
  "clusterName": "string",
  "requestId": "string"
}

Response Elements

| Response | Description |
| --- | --- |
| clusterId | Created cluster ID |
| clusterName | Created cluster name |
| requestId | Task request ID |

Status codes

| HTTP Status | Description |
| --- | --- |
| 200 | Success |
| 202 | Cluster task (creation) in progress |
| 400 | Request information error |
| 401, 403 | Authentication failed, No permission |
| 404 | Cluster not found |

Retrieve job cluster details

You can retrieve the status of a cluster for which an Open API Key has been issued.

Request

Request Syntax

Cluster Status Inquiry Request Syntax
curl -X GET 'https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/requests/{request-id}' \
--header 'Hadoop-Eco-Api-Key: {hadoop-eco-api-key}' \
--header 'Credential-ID: {credential-id}' \
--header 'Credential-Secret: {credential-secret}' \
--header 'Content-Type: application/json'

curl -X GET 'https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/requests/{request-id}?verbose=true' \
--header 'Hadoop-Eco-Api-Key: {hadoop-eco-api-key}' \
--header 'Credential-ID: {credential-id}' \
--header 'Credential-Secret: {credential-secret}' \
--header 'Content-Type: application/json'

API call method

| Method | Request URL |
| --- | --- |
| GET | https://hadoop-eco.kr-central-2.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/requests/{request-id} |

| Path | Type | Description |
| --- | --- | --- |
| {cluster-id}* | String | ID of the cluster |
| {request-id}* | String | requestId value received in the response after creating the job cluster |

Query parameters

| Request | Type | Description |
| --- | --- | --- |
| verbose | Boolean | When set to true, detailed status down to the Master/Worker nodes is also returned<br/>- true, false |

Response

Response Syntax

Response Syntax when verbose=false
{
  "clusterId": "string",
  "clusterName": "string",
  "requestId": "string",
  "requestStatus": "string",
  "requestResult": "string"
}
Response Syntax when verbose=true
{
  "clusterId": "string",
  "clusterName": "string",
  "requestId": "string",
  "requestStatus": "string",
  "requestResult": "string",
  "clusterType": "string",
  "clusterVersion": "string",
  "isHa": true|false,
  "installComponents": [
    "string"
  ],
  "masterInfo": [
    {
      "instanceGroupId": "string",
      "flavorId": "string",
      "flavorName": "string",
      "volumeSize": integer,
      "nodeCnt": integer
    }
  ],
  "workerInfo": [
    {
      "instanceGroupId": "string",
      "flavorId": "string",
      "flavorName": "string",
      "volumeSize": integer,
      "nodeCnt": integer
    }
  ],
  "taskInfo": [
    {
      "instanceGroupId": "string",
      "flavorId": "string",
      "flavorName": "string",
      "volumeSize": integer,
      "nodeCnt": integer
    }
  ],
  "imageName": "string",
  "securityGroupIds": [
    "string"
  ],
  "keypairName": "string",
  "owner": "string",
  "config": {
    "hdfsBlockSize": integer,
    "hdfsReplication": integer,
    "configText": "string",
    "userScript": "string"
  },
  "userTask": {
    "type": "string",
    "terminationPolicy": "string",
    "fileUrl": "string",
    "hiveQuery": "string",
    "deployMode": "string",
    "execOpts": "string",
    "execParams": "string",
    "logUrl": "string"
  }
}

Response Elements

| Response | Description |
| --- | --- |
| clusterId | Cluster ID |
| clusterName | Cluster name |
| requestId | Request ID |
| requestStatus | Request status |
| requestResult | Request result |
| clusterType | Created cluster type |
| clusterVersion | Created cluster version |
| isHa | High Availability status |
| installComponents | Installed components |
| masterInfo / workerInfo / taskInfo ▼ | Instance group information |
|   instanceGroupId | Instance group ID |
|   flavorId | Flavor ID |
|   flavorName | Flavor name |
|   volumeSize | Volume size |
|   nodeCnt | Node count |
| imageName | Image name |
| securityGroupIds | Security group IDs |
| keypairName | Key pair name |
| owner | Cluster owner |
| config ▼ | Configuration information |
|   hdfsBlockSize | HDFS block size |
|   hdfsReplication | HDFS replication count |
|   configText | Injected cluster configuration info |
|   userScript | Injected user script |
| userTask ▼ | Task details |
|   type | Task type |
|   terminationPolicy | Cluster behavior after task completion |
|   fileUrl | Location of the executed task file |
|   hiveQuery | Executed Hive query string |
|   deployMode | Spark job deployment mode |
|   execOpts | Option information string of the executed task |
|   execParams | Parameter information string of the executed task |
|   logUrl | Location where task result logs are stored |
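The verbose response splits node information across masterInfo, workerInfo, and taskInfo. A small helper can aggregate across the three groups; the sample response below is illustrative, with field names taken from the table above:

```python
# Sum node counts across the instance groups of a verbose=true
# response. The sample data is illustrative; field names follow
# the Response Elements table.
def total_nodes(response: dict) -> int:
    groups = (
        response.get("masterInfo", [])
        + response.get("workerInfo", [])
        + response.get("taskInfo", [])
    )
    return sum(g.get("nodeCnt", 0) for g in groups)

sample = {
    "masterInfo": [{"nodeCnt": 1}],
    "workerInfo": [{"nodeCnt": 3}],
    "taskInfo": [],
}
print(total_nodes(sample))  # → 4
```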
Status codes

| HTTP Status | Description |
| --- | --- |
| 200 | Success |
| 400 | Request information error |
| 401, 403 | Authentication failed, No permission |
| 404 | Cluster not found |