API overview

The Hadoop Eco API is an interface for managing resources through code. Through the REST API provided by Hadoop Eco, you can create clusters, query them, and perform other management tasks without going through the KakaoCloud console.

API common

  • API responses are provided in JSON format.
  • Access key information (access key ID and secret access key) is required to use the API.
  • For the job cluster API, you must issue an Open API key for the Hadoop Eco cluster.
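As a minimal sketch of these requirements, the example below sends an authenticated request and reads the JSON response. The endpoint URL is a placeholder, not the real path; the Credential-ID and Credential-Secret header names are taken from the API model table at the end of this document.

```python
import requests

# Placeholder endpoint: substitute the actual Hadoop Eco API URL and path
# from the API reference.
API_URL = "https://hadoop-eco.example.kakaocloud.com/v2/hadoop-eco/clusters"

# Access key information issued from the KakaoCloud console (assumed values).
headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
}

response = requests.get(API_URL, headers=headers)
response.raise_for_status()

# All API responses are provided in JSON format.
clusters = response.json()
print(clusters)
```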

Prepare API usage

To call the API provided by Hadoop Eco, you must issue access key information. For detailed instructions, refer to the Get access key document.

Hadoop Eco API classification

Hadoop Eco Open API

The Hadoop Eco Open API performs general cluster creation and management tasks, such as creating, querying, deleting, and scaling clusters. An Open API key is not required to use the Hadoop Eco Open API.
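As an illustration, a cluster-scaling call might look like the sketch below. The endpoint path and request shape are assumptions for illustration only; workerCnt and the credential header names come from the API model table below.

```python
import requests

# Hypothetical endpoint for scaling a cluster's worker nodes; check the
# Hadoop Eco API reference for the actual path.
API_URL = "https://hadoop-eco.example.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/nodes"

headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
}

# workerCnt: Hadoop Eco worker node count (documented range: 1 ~ 1,000).
payload = {"workerCnt": 5}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```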

Job cluster API

In Hadoop Eco, clusters typically switch to the Terminated state once a specific task is completed, making them unavailable for further use. However, with the job cluster API, you can repeatedly schedule tasks on clusters already in the Terminated state.

The job cluster API improves cluster reusability: tasks can be executed repeatedly on an existing cluster without creating a new cluster each time, enabling more efficient cluster management.
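The sketch below shows what re-running a Spark job on a Terminated cluster might look like. The endpoint path is an assumption; the Hadoop-Eco-Api-Key header and the userTask* fields are taken from the API model table below.

```python
import requests

# Hypothetical endpoint for re-running a job on a Terminated cluster;
# consult the job cluster API reference for the actual path.
API_URL = "https://hadoop-eco.example.kakaocloud.com/v2/hadoop-eco/clusters/{cluster-id}/jobs"

headers = {
    "Credential-ID": "YOUR_ACCESS_KEY_ID",
    "Credential-Secret": "YOUR_SECRET_ACCESS_KEY",
    # Open API key issued from the console (required for the job cluster API).
    "Hadoop-Eco-Api-Key": "YOUR_OPEN_API_KEY",
}

# Spark job settings drawn from the API model table; values are illustrative.
payload = {
    "userTaskDeployMode": "cluster",  # client or cluster
    "userTaskExecOpts": "--conf spark.executor.memory=4g",
    "userTaskExecParams": "arg1 arg2",
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```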

💡 Open API key issuance conditions

When issuing an Open API key in the console, all the following conditions must be met:

  • The cluster type must be Core Hadoop.
  • When creating the cluster, the task scheduling activation option must be set to Hive or Spark.
  • The Hadoop Eco cluster state must be Terminated(User Command) or Terminated(User).

💡 Notes on issuing a Hadoop Eco cluster Open API key

  • If you use the Open API with a cluster that was created with an external metastore linked in 'Step 5: Service integration settings' during cluster creation, the external metastore configuration must remain unchanged.
  • If the external metastore is deleted or its information is changed, the cluster may not function properly.
  • If a security group is deleted without deleting the API key, reissuing the API key will recreate the security group, enabling you to use the Open API cluster again.

For detailed Open API key issuance instructions, refer to Open API key.

API model

The API model of Hadoop Eco is as follows:

Model - cluster

Data | Type | Description
Credential-ID | String | Access key ID
Credential-Secret | String | Secret access key
Hadoop-Eco-Api-Key | String | Open API key issued from the console
request-id | String | Request ID
workerCnt | Integer | Hadoop Eco worker node count (range: 1 ~ 1,000)
workerVolumeSize | Integer | Hadoop Eco worker node block storage size in GB (range: 50 ~ 5,120)
configHdfsReplication | Integer | Hadoop Eco HDFS replication count (range: 1 ~ 500)
configHdfsBlockSize | Integer | Hadoop Eco HDFS block size in MB (range: 1 ~ 1,024)
userTaskDeployMode | String | Hadoop Eco Spark job deploy mode (client or cluster)
userTaskExecOpts | String | Hadoop Eco Spark or Hive job configuration parameters
userTaskExecParams | String | Hadoop Eco Spark application parameters
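Putting the model together, the sketch below assembles an illustrative request body from the fields above. The field names and documented ranges come from the table; the concrete values are assumptions chosen within those ranges.

```python
import json

# Illustrative request body built from the model fields above; the values
# are assumptions within each field's documented range.
body = {
    "workerCnt": 3,                   # 1 ~ 1,000
    "workerVolumeSize": 100,          # 50 ~ 5,120 (GB)
    "configHdfsReplication": 2,       # 1 ~ 500
    "configHdfsBlockSize": 128,       # 1 ~ 1,024 (MB)
    "userTaskDeployMode": "cluster",  # client or cluster
    "userTaskExecOpts": "--conf spark.driver.memory=2g",
    "userTaskExecParams": "input.csv output/",
}

print(json.dumps(body, indent=2))
```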