We have already installed ES in Elasticsearch & Kibana Environment Installation (Centos), but in practice ES is usually deployed as a cluster. The following describes how to build one and the common pitfalls.
Here we demonstrate with two nodes; the procedure for three or more is the same.
Prepare two servers
|           | node1         | node2         |
| --------- | ------------- | ------------- |
| node_name | es-node1      | es-node2      |
| server    | xxx.xxx.xxx.1 | xxx.xxx.xxx.2 |
Create the following folders on both servers:
- /data: data storage directory
- /config: configuration storage directory
- /logs: log storage directory
- /plugins: plugin storage directory

Commands:
mkdir -p /data/elasticsearch/data
mkdir -p /data/elasticsearch/config
mkdir -p /data/elasticsearch/logs
mkdir -p /data/elasticsearch/plugins
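The four mkdir calls can also be collapsed into a single line with brace expansion (equivalent, assuming a bash shell):

mkdir -p /data/elasticsearch/{data,config,logs,plugins}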
Create a new configuration file in the config folder on xxx.xxx.xxx.1 (es-node1):
cd /data/elasticsearch/config
vi elasticsearch.yml
Content of the configuration file:
# Set the cluster name. The names of all nodes in the cluster must be consistent.
cluster.name: es-cluster
# Set the node name. The node name in the cluster must be unique.
node.name: es-node1
# Whether the node is eligible to become the master node, yes: true; no: false
node.master: true
# Whether the current node is used to store data, yes: true, no: false
node.data: true
# Whether to lock physical memory (prevent swapping), yes: true; no: false
#bootstrap.memory_lock: true
# Bind address used to access ES
network.host: 0.0.0.0
network.publish_host: xxx.xxx.xxx.1
# HTTP port exposed by ES, default 9200
http.port: 9200
# Transport port for node-to-node communication, default 9300
transport.tcp.port: 9300
# Minimum number of master-eligible nodes a node must see to operate. Default 1; for large clusters set a higher value (2-4). (Ignored by ES 7.x, kept here for reference.)
discovery.zen.minimum_master_nodes: 1
# New in es7.x: the addresses of the master-eligible nodes used for discovery when the service starts
discovery.seed_hosts: ["xxx.xxx.xxx.1:9300","xxx.xxx.xxx.2:9300"]
discovery.zen.fd.ping_timeout: 1m
discovery.zen.fd.ping_retries: 5
# New in es7.x: required to elect the initial master when a brand-new cluster bootstraps
cluster.initial_master_nodes: ["xxx.xxx.xxx.1:9300"]
# Whether to allow cross-origin (CORS) requests, yes: true; required when using the head plugin
http.cors.enabled: true
# "*" means that all domain names are supported
http.cors.allow-origin: "*"
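A note on the commented-out bootstrap.memory_lock option: if you enable it, the container must also be allowed to lock memory, or the bootstrap check will fail at startup. With Docker this is done by adding an ulimit flag to the docker run command shown below:

--ulimit memlock=-1:-1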
In the same way, create a configuration file in the config folder on xxx.xxx.xxx.2 (es-node2); the content is identical except for node.name and network.publish_host:
cluster.name: es-cluster
node.name: es-node2
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: xxx.xxx.xxx.2
http.port: 9200
transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 1
discovery.seed_hosts: ["xxx.xxx.xxx.1:9300","xxx.xxx.xxx.2:9300"]
discovery.zen.fd.ping_timeout: 1m
discovery.zen.fd.ping_retries: 5
cluster.initial_master_nodes: ["xxx.xxx.xxx.1:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Start the Elasticsearch containers with Docker. On es-node1:
docker run -d --network=host --privileged=true \
  -e ES_JAVA_OPTS="-Xms2048m -Xmx2048m" \
  -e TAKE_FILE_OWNERSHIP=true \
  --name es-node1 \
  -v /data/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /data/elasticsearch/logs:/usr/share/elasticsearch/logs \
  -v /data/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  docker.elastic.co/elasticsearch/elasticsearch:7.6.2
On es-node2:
docker run -d --network=host --privileged=true \
  -e ES_JAVA_OPTS="-Xms2048m -Xmx2048m" \
  -e TAKE_FILE_OWNERSHIP=true \
  --name es-node2 \
  -v /data/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /data/elasticsearch/logs:/usr/share/elasticsearch/logs \
  -v /data/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  docker.elastic.co/elasticsearch/elasticsearch:7.6.2
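One common pitfall before the containers will stay up: Elasticsearch's production bootstrap checks require vm.max_map_count to be at least 262144, otherwise the node exits at startup. Set it on both servers:

sysctl -w vm.max_map_count=262144

To persist it across reboots, add the line vm.max_map_count=262144 to /etc/sysctl.conf.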
Open a browser and visit http://xxx.xxx.xxx.1:9200/_cat/nodes?pretty. If it shows:
xxx.xxx.xxx.1 23 68 1 0.19 0.15 0.11 dilm * es-node1
xxx.xxx.xxx.2 9 96 4 1.21 0.48 0.28 dilm - es-node2
then the configuration is successful (the * marks the elected master node).
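Besides _cat/nodes, the standard _cluster/health API gives an overall summary from the command line:

curl http://xxx.xxx.xxx.1:9200/_cluster/health?pretty

A correctly formed two-node cluster reports "number_of_nodes" : 2 and a status of green or yellow.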
1. If ES shuts down automatically a few seconds after starting, use the following command to view the container log and locate the problem:
docker logs {containerId}
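The usual docker logs flags help when the log is long, e.g. follow the output and show only the last 200 lines:

docker logs -f --tail 200 es-node1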
2. Visiting http://xxx.xxx.xxx.1:9200/_cat/nodes?pretty returns the following error:
{
  "error": {
    "root_cause": [
      {
        "type": "master_not_discovered_exception",
        "reason": null
      }
    ],
    "type": "master_not_discovered_exception",
    "reason": null
  },
  "status": 503
}
The reason is that the initial master node is not set; add the following configuration to the configuration file:
cluster.initial_master_nodes: ["xxx.xxx.xxx.1:9300"]
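Note that cluster.initial_master_nodes only takes effect the first time a brand-new cluster bootstraps. If the nodes were already started without it and each bootstrapped its own cluster, adding the setting afterwards is not enough: stop and remove the containers, clear the data directories on both servers, then start again with the docker run commands above. For example, on es-node1:

docker stop es-node1 && docker rm es-node1
rm -rf /data/elasticsearch/data/*

Clearing the data directory deletes any indexed data, so only do this on a fresh cluster.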