Monitoring Hadoop Cluster with Monitoring Flow

This guide explains how to monitor the node status and connectivity of a Hadoop Eco cluster using KakaoCloud's Monitoring Flow.

Basic Information
  • Estimated Time: 30 minutes
  • User Environment
    • Recommended OS: Ubuntu
    • Region: kr-central-2
  • Prerequisites: see the Prerequisites section below

Scenario Overview

In this scenario, we will demonstrate how to use various features of the Monitoring Flow service to monitor the node status of a Hadoop cluster. The main steps are as follows:

  • Create a Hadoop cluster using KakaoCloud's Hadoop Eco
  • Create a flow connection to link the subnet where the API server resides
  • Create a scenario in Monitoring Flow to monitor the Hadoop cluster nodes
  • Receive monitoring alerts through Alert Center integration

Prerequisites

Network Setup

To enable communication between the Monitoring Flow service and the Hadoop cluster, set up the network environment. Create a VPC and subnets as per the instructions below.

VPC and Subnet: tutorial
  1. Go to KakaoCloud Console > Beyond Networking Service > VPC.

  2. Click the [+ Create VPC] button and set it up as follows:

    | Category | Item | Settings/Values |
    | --- | --- | --- |
    | VPC Information | VPC Name | tutorial |
    |  | VPC IP CIDR Block | 10.0.0.0/16 |
    | Availability Zone | Number of Availability Zones | 1 |
    |  | First AZ | kr-central-2-a |
    | Subnet Settings | Number of Public Subnets per Availability Zone | 1 |
    |  | Public Subnet IPv4 CIDR Block (kr-central-2-a) | 10.0.0.0/20 |

  3. After confirming the generated topology, click the [Create] button.

    • The subnet status will change from Pending Create > Pending Update > Active. Once it becomes Active, proceed to the next step.

Step-by-Step Process

This section guides you through the steps to monitor a Hadoop cluster using Monitoring Flow.

Step 1. Create a Hadoop Eco Cluster

Create a Hadoop Eco cluster to set up the environment for Monitoring Flow monitoring.

  1. Go to KakaoCloud Console > Analytics > Hadoop Eco > Cluster.

  2. If no cluster exists, refer to the Cluster Creation guide to create a cluster.

    • In the VPC Settings section, choose the VPC and subnet to connect to the flow connection.
    • In the Security Group Configuration section, choose to [Create a new security group], which will automatically set the inbound and outbound policies for Hadoop Eco. The created security group can be found under VPC > Security.
  3. Verify the information of the created cluster.

    • The cluster is successfully created when its status changes to Running.
    • Find the Cluster ID, which is required for the Monitoring Flow step configuration.

    Image Cluster Details

Step 2. Create a Flow Connection

Create a flow connection in the Monitoring Flow service to allow access to the cluster.

  1. Go to KakaoCloud Console > Management > Monitoring Flow > Flow Connections.

  2. Click the [Create Flow Connection] button to enter the creation screen.

  3. In the VPC Selection section, select the Active VPC associated with the security group created earlier.

  4. Choose the subnet connected to the VPC to link with the flow connection.

    Image Create Flow Connection

Step 3. Create a Scenario

Create a scenario in Monitoring Flow to monitor the status of the Hadoop cluster and set the execution schedule.

  1. Go to KakaoCloud Console > Management > Monitoring Flow > Scenarios.

  2. Click the [Create Scenario] button to enter the creation screen.

  3. Select the Use KakaoCloud VPC option to access KakaoCloud internal resources.

  4. Choose one of the registered flow connections.

  5. Set the execution schedule for the scenario.

    Image Create Scenario

Step 4. Add Scenario Steps

Use the Default Variable, API, Set Variables, For, and If steps in Monitoring Flow to build the monitoring flow.

  1. Go to KakaoCloud Console > Management > Monitoring Flow > Scenarios.
  2. In the Scenario menu, click on the name of the created scenario and review the details.

1. Manage Default Variables

The Default Variable step defines basic variables necessary for the scenario, which will be used repeatedly. Default Variables must be set before creating any steps.

In this tutorial, we'll register the Default Variables as follows:

  1. On the selected scenario's detail page, click the [Edit Scenario Steps] button at the top right.
  2. Click the [Manage Default Variables] button at the top right to register the Default Variables.
  3. Fill out the table with the following values. Please input all values for this tutorial.
| Item | Key | Type | Value | Description |
| --- | --- | --- | --- | --- |
| 1 | vm-ip | String |  | The vm-ip variable is used to store the IP address of a specific virtual machine |
| 2 | vm-list | JSON List | [] |  |
| 3 | vm-status | String |  |  |
| 4 | vm-is-master | String | false | Set the initial value of the vm-is-master variable to false to define its default state |

Image Manage Default Variables

  4. After saving, click the [Close] button.
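
The four variables above are simply named slots that later steps read and overwrite. As a rough illustration only (Monitoring Flow stores these values internally; the Python dictionary below is not part of the service), the initial state can be pictured like this:

```python
# Illustration only: the scenario's default variables expressed as plain Python values.
# Keys and initial values come from the table above.
default_variables = {
    "vm-ip": "",              # String - IP of the node currently being checked
    "vm-list": [],            # JSON List - filled with the cluster's node list by the Set Variables step
    "vm-status": "",          # String - status of the node currently being checked
    "vm-is-master": "false",  # String - "false" marks a non-master node by default
}
```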

2. API-1

Add a step to check the status of the Hadoop cluster's API server.

  1. In the Scenario Details tab, click the [Add Scenario Step] button.
  2. In the New Step Settings panel, select API as the step type.
  3. Fill out the fields according to the following table.
| Item | Example | Description |
| --- | --- | --- |
| Type | API | Select API |
| Step Name | API-1 | Step name |
| Expected Code | 200 | The server should return a 200 status code to indicate that it is functioning normally |
| Method | GET | Use an HTTP GET request to check the status of the API server |
| URL | http://${Private-IP} | API URL (use the private IP of the cluster node) |

Image Add API Step-1

  4. After entering all the required fields, click [Add Next Step] on the left panel.
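
Outside of Monitoring Flow, the same check can be reproduced with a plain HTTP GET. The sketch below only illustrates what the API-1 step does; the private IP is a hypothetical placeholder for ${Private-IP}, and the response shape is not a documented Hadoop Eco API.

```python
import requests

PRIVATE_IP = "10.0.3.15"  # hypothetical stand-in for ${Private-IP}

# Equivalent of the API-1 step: send a GET request and compare against the expected code (200).
response = requests.get(f"http://{PRIVATE_IP}", timeout=5)
if response.status_code == 200:
    print("API server responded normally")
else:
    print(f"Unexpected status code: {response.status_code}")
```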

3. Set Variables

The Set Variables step is where you update existing variables or set new values. In this step, you will use API response data to set variable values.

  1. In the Set New Step section of the right panel on the scenario step screen, select Set Variables as the step type.
  2. Enter the fields according to the following table:
| Field | Example | Description |
| --- | --- | --- |
| Type | Set Variables | Select Set Variables |
| Step Name | Variables-1 | Name to assign to the step |
| Variable | ${vm-list} | Choose one of the Default Variables |
| Step API | API-1 | Select the API step created in the previous step |
| Step Request/Response | request | Choose between request and response |
| Step Component | body | Choose parameters, headers, or body depending on the request/response |
| Key | content | Enter or select the key to read from the selected request/response result |

Image Add Set Variables Step

  3. After entering all the required fields, click [Add Sub Step] on the left panel.
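
In plain terms, this step copies part of the API-1 result into ${vm-list}. A minimal sketch of the idea is shown below; the content key and the shape of each node entry are assumptions inferred from the marker values used in the next step, not a documented response format.

```python
import requests

PRIVATE_IP = "10.0.3.15"  # hypothetical; same placeholder as in the API-1 sketch

# Equivalent of the Set Variables step: take the "content" key from the API-1 body
# and store it in vm-list for the For step to iterate over.
api1_body = requests.get(f"http://{PRIVATE_IP}", timeout=5).json()
vm_list = api1_body.get("content", [])
# Each entry is assumed to look like {"ip": "...", "status": "...", "ismaster": "..."}.
```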

4. For

The For step is used to perform repetitive tasks, allowing you to repeat actions for each item in a variable list. In this step, monitoring tasks are performed repeatedly for all nodes within a cluster.

  1. In the Set New Step section of the right panel on the scenario step screen, select For as the step type.
  2. Enter the fields according to the following table:
| Field | Example 1 | Example 2 | Example 3 | Description |
| --- | --- | --- | --- | --- |
| Type | For |  |  | Select For |
| Step Name | For-1 |  |  | Name to assign to the step |
| Type | foreach |  |  | Specify the repetition format, either count or foreach |
| Base Variable | ${vm-list} |  |  | Specify the variable list to repeat over; choose a JSON list to iterate over each element |
| Marker Variable | ${vm-ip} | ${vm-status} | ${vm-is-master} | Use this variable to reference specific elements during the repetition |
| Marker Value | marker.ip | marker.status | marker.ismaster | Specify the location of the data to be referenced; the marker value updates every iteration, and the task is executed for each element in the list |

Image Add For Step

  3. After entering all the required fields, click [Add Sub Step] on the left panel.

info

Sub steps must be added for both For and If steps.

Image Message guiding the addition of sub steps for For step
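
The foreach type behaves like an ordinary loop: for every element of ${vm-list}, the marker variables are refreshed from the marker values before the sub steps run. Continuing the earlier sketch (and still assuming node entries with ip, status, and ismaster keys):

```python
# Equivalent of the For-1 step: iterate over vm-list and refresh the marker variables
# (vm-ip, vm-status, vm-is-master) from each element before the sub steps execute.
for marker in vm_list:
    vm_ip = marker["ip"]               # Marker Value: marker.ip
    vm_status = marker["status"]       # Marker Value: marker.status
    vm_is_master = marker["ismaster"]  # Marker Value: marker.ismaster
    # The If and API-2 sub steps would run here once per node.
```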

5. If

The If step determines whether to perform an action based on a specific condition. In this step, you use an If condition to send a notification if the status of a specific node is abnormal.

  1. In the Set New Step section of the right panel on the scenario step screen, select If as the step type.
  2. Enter the fields according to the following table:
| Field | Example 1 | Example 2 | Description |
| --- | --- | --- | --- |
| Type | If |  | Select If |
| Logical | and |  | Choose how conditions are combined, either and or or |
| Left Operand | ${vm-status} | ${vm-is-master} | Enter the operand for the condition; input a variable |
| Comparison | == | == | Choose the comparison operator |
| Right Operand | Running | false | Enter the comparison value; be mindful of case sensitivity for accurate matching |

Image Add If Step

  3. After entering all the required fields, click [Add Sub Step] on the left panel.
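
The two comparisons are combined with and, so the sub step only runs for worker nodes that report a Running status. Continuing the sketch, the condition is simply:

```python
# Equivalent of the If step: both conditions must hold (Logical = and).
# Comparison values are case sensitive, matching the note in the table above.
if vm_status == "Running" and vm_is_master == "false":
    pass  # the API-2 health check for this node would run here
```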

6. API-2

Add an API step to check the status of the API server on each node.

  1. Click the [Add Scenario Step] button in the Details tab of the scenario.
  2. In the Set New Step section of the right panel, set the step type to API.
  3. Enter the fields according to the following table:
| Field | Example | Description |
| --- | --- | --- |
| Type | API | Select API |
| Step Name | API-2 | Name to assign to the step |
| Expected Code | 200 | A 200 status code must be returned for the server to be considered operational |
| Method | GET | Use an HTTP GET request to check the API server's status |
| URL | http://${vm-ip}:7001/v1/agent/health | URL to access the API |

  • Note: ${vm-ip} represents the API server's IP address, which should be replaced with the actual IP address of the server you want to configure.

Image Add API Step-2

  4. After entering all the required fields, click the [Test] button at the top-right corner to run the test.
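
For reference, the health check that API-2 performs can be sketched as a direct GET against each node's agent port. The 7001 port and /v1/agent/health path come from the table above; the surrounding helper function is illustrative only.

```python
import requests

def node_is_healthy(vm_ip: str) -> bool:
    """Equivalent of the API-2 step: GET the agent health endpoint and expect a 200."""
    try:
        response = requests.get(f"http://{vm_ip}:7001/v1/agent/health", timeout=5)
        return response.status_code == 200
    except requests.RequestException:
        # A connection failure (for example, a deactivated instance) counts as unhealthy.
        return False
```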

Step 5. Test the Scenario

Check the results of the scenario test execution. If a 200 response is returned from the configured URL during the test of the steps set up in Step 4, the API server is operating normally. If an unexpected response code is returned, it indicates an issue with the server status and the error can be diagnosed.

  1. If the test runs successfully, Success will appear under [Execution Results].

Image Test Success Result

  2. If an unexpected response code is returned, it will show as Failure, indicating a potential issue with the server status. At this point, you can view the error details at the bottom of the right panel to resolve the issue.

    caution
    • If accessing a deactivated instance, a call failure error may occur, so ensure that the correct IP address is entered.
  3. After closing the test screen, save the scenario in the Edit Scenario Step screen. The saved scenario will be automatically executed according to the configured schedule.

Step 6. Check Execution Results

Check the execution results and time of the scenario to evaluate whether it is functioning correctly. You can verify whether the scenario was completed without errors through the detailed information.

  1. In the KakaoCloud Console, go to Management > Monitoring Flow > Scenarios.
  2. Select the scenario you want to check and click the Execution Results tab in the detail screen.

Image Scenario Execution Results List

  3. Click on the event in the execution result list to view the detailed result.

Image Detailed Scenario Execution Results

Step 7. Create Alert Policy in Alert Center

Set an alert policy in the Alert Center to receive notifications based on the execution results of the Monitoring Flow. The following example shows how to set up metric conditions to receive notifications based on the success or failure of the scenario.

  1. Go to KakaoCloud Console > Management > Alert Center > Alert Policy. For detailed instructions, refer to the Create Alert Policy document.

  2. Set the alert conditions based on the table below.

| Item | Setting Description |
| --- | --- |
| Condition Type | Metric |
| Service | Monitoring Flow |
| Condition Settings | Set the metric items for which to receive success/failure alerts. Setting both Scenario Success Count and Scenario Fail Count allows you to receive alerts for all results; to receive alerts only on failures, set only Scenario Fail Count |
| Resource Item | Select the scenario for which you want to receive alerts |
| Threshold | 1 count or more |
| Duration | 1 minute |

  3. Click the [Next] button at the bottom to complete the creation of the alert policy.

Image Creating Alert Policy

  4. Based on the set alert policy, you will receive real-time notifications about the execution results (success or failure) of the Monitoring Flow scenario, enabling you to quickly detect and respond to system state changes.