Hadoop: how to fix "ERROR: Attempting to operate on hdfs namenode as root" when running start-dfs.sh

created at 12-08-2021

Error

Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.

Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.

Starting secondary namenodes
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

Starting journal nodes
ERROR: Attempting to operate on hdfs journalnode as root
ERROR: but there is no HDFS_JOURNALNODE_USER defined. Aborting operation.

Starting ZK Failover Controllers on NN hosts
ERROR: Attempting to operate on hdfs zkfc as root
ERROR: but there is no HDFS_ZKFC_USER defined. Aborting operation.

Reason

The daemons are being started as the root account, but Hadoop 3.x refuses to start a daemon as root unless the account that should run it is declared in the corresponding HDFS_*_USER environment variable, and none of those variables is defined.

Solution

* These changes must be made on every machine in the cluster; alternatively, edit the scripts on one machine first and then use scp to synchronize them to the others (step 3 below). If you would rather not patch the scripts at all, see the hadoop-env.sh alternative sketched below.
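An equivalent fix is to export the same variables from hadoop-env.sh, which the startup scripts source on every run, so the scripts that ship with Hadoop stay untouched. A minimal sketch, assuming the same /home/hadoop installation root used in this article:

# Append to /home/hadoop/etc/hadoop/hadoop-env.sh (path assumes the layout above)
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Either way the effect is the same; the steps below follow the script-editing route.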

1. Modify start-dfs.sh and stop-dfs.sh

cd /home/hadoop/sbin
vim start-dfs.sh
vim stop-dfs.sh

Add the following lines at the top of both scripts:

HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=root
#HADOOP_SECURE_DN_USER=root
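The commented-out HADOOP_SECURE_DN_USER line is the old Hadoop 2.x name for this setting; Hadoop 3.x deprecated it in favor of HDFS_DATANODE_SECURE_USER, which is why only the newer name is left active. A quick sanity check (hypothetical, to confirm the edits took effect):

grep -n "_USER=" start-dfs.sh stop-dfs.sh

Each added line should show up near the top of both files.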

2. Modify start-yarn.sh and stop-yarn.sh

cd /home/hadoop/sbin
vim start-yarn.sh
vim stop-yarn.sh

Add the following lines at the top of both scripts:

#HADOOP_SECURE_DN_USER=root
HDFS_DATANODE_SECURE_USER=root
YARN_NODEMANAGER_USER=root
YARN_RESOURCEMANAGER_USER=root
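Note that start-yarn.sh and stop-yarn.sh themselves only consult the YARN_* variables (plus YARN_PROXYSERVER_USER if a standalone web proxy daemon is configured); the HDFS_DATANODE_SECURE_USER line is carried over from step 1 and is harmless here. A quick syntax check after saving, to catch any stray edit:

bash -n start-yarn.sh && bash -n stop-yarn.sh && echo "scripts parse cleanly"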

3. Synchronize the modified scripts to the other machines

cd /home/hadoop/sbin
scp * c2:/home/hadoop/sbin
scp * c3:/home/hadoop/sbin
scp * c4:/home/hadoop/sbin
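The same copy can be written as a loop; a sketch assuming the same host names c2, c3 and c4, followed by a restart to confirm the errors are gone:

cd /home/hadoop/sbin
for host in c2 c3 c4; do
    scp start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh $host:/home/hadoop/sbin/
done

./start-dfs.sh
./start-yarn.sh
jps    # should list NameNode, DataNode, ResourceManager, NodeManager, etc., depending on the cluster layout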