Component execution and commands

Run component

Hadoop Eco runs components as OS system services. Each service is operated with a command of the form `systemctl [start|stop|restart] [service-name]`.
The following example shows how to run a component as an OS system service in Hadoop Eco.

Example of running a component (Zookeeper service)

| Action | Command |
| --- | --- |
| Start | `sudo systemctl start zookeeper` |
| Stop | `sudo systemctl stop zookeeper` |
| Restart | `sudo systemctl restart zookeeper` |
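Because each component is an ordinary systemd unit, these actions are easy to script. The wrapper below is a minimal sketch and not part of Hadoop Eco; the `--dry-run` flag is an assumption of this sketch and only prints the command instead of executing it.

```shell
# Sketch (assumption, not part of Hadoop Eco): build the systemctl command
# for a component and either print or run it.
run_service() {
  action="$1"; service="$2"; mode="$3"
  cmd="sudo systemctl ${action} ${service}"
  echo "${cmd}"                          # always show what would be run
  [ "${mode}" = "--dry-run" ] || ${cmd}  # execute unless dry-running
}

run_service start zookeeper --dry-run
```

On a real node, dropping `--dry-run` executes the printed command via `sudo`.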

Component execution commands

The following are commands for running components.

info

For Trino, the coordinator and worker roles are determined by configuration options; both run with the same command.
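To illustrate how the same `trino` service can act as either role, the fragment below sketches the role-related lines of Trino's `config.properties`; the discovery host and port are assumptions, not values from Hadoop Eco.

```properties
# Coordinator node (sketch; host and port are assumptions)
coordinator=true
node-scheduler.include-coordinator=false
discovery.uri=http://coordinator-host:8080
```

```properties
# Worker node: same trino service, only the role flag differs
coordinator=false
discovery.uri=http://coordinator-host:8080
```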

| Module | Component | Command |
| --- | --- | --- |
| Zookeeper | zookeeper | `sudo systemctl start zookeeper` |
| HDFS | namenode | `sudo systemctl start namenode` |
| HDFS | journalnode | `sudo systemctl start journalnode` |
| HDFS | zkfc | `sudo systemctl start zkfc` |
| HDFS | datanode | `sudo systemctl start datanode` |
| HDFS | secondarynamenode | `sudo systemctl start secondarynamenode` |
| Yarn | resourcemanager | `sudo systemctl start resourcemanager` |
| Yarn | nodemanager | `sudo systemctl start nodemanager` |
| Yarn | timelineserver | `sudo systemctl start timelineserver` |
| Yarn | jobhistoryserver | `sudo systemctl start jobhistoryserver` |
| Yarn | tez-ui | `sudo systemctl start tez-ui` |
| Yarn | sparkhistoryserver | `sudo systemctl start sparkhistoryserver` |
| HBase | hmaster | `sudo systemctl start hmaster` |
| HBase | regionserver | `sudo systemctl start regionserver` |
| Trino | coordinator | `sudo systemctl start trino` |
| Trino | worker | `sudo systemctl start trino` |
| Oozie | oozie | `sudo systemctl start oozie` |
| Hue | hue | `sudo systemctl start hue` |
| Zeppelin | zeppelin | `sudo systemctl start zeppelin` |
| Druid | druid master-query server (coordinator-overlord, broker, router) | `sudo systemctl start druid-master-query-server` |
| Druid | druid master-broker server (coordinator-overlord, broker) | `sudo systemctl start druid-master-broker-server` |
| Druid | druid data server (historical, middleManager) | `sudo systemctl start druid-data-server` |
| Kafka | kafka | `sudo systemctl start kafka` |
| Superset | superset | `sudo systemctl start superset` |
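When several components of one module live on the same node, they can be started in sequence. The loop below is a sketch under the assumption of a typical HDFS HA startup order (journalnode, then namenode, zkfc, datanode); it is not an official Hadoop Eco script. With `DRY_RUN=1` (the default here) it only prints each command.

```shell
# Sketch (assumption, not part of Hadoop Eco): start HDFS components from the
# table above in a common HA order. Set DRY_RUN=0 on a real node to execute.
start_hdfs() {
  for svc in journalnode namenode zkfc datanode; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "sudo systemctl start ${svc}"   # print only
    else
      sudo systemctl start "${svc}"        # actually start the service
    fi
  done
}

start_hdfs
```

The right order for your cluster depends on its topology (for example, whether HA with journalnodes and zkfc is enabled), so treat the list above as illustrative.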