Download from the official website:
wget https://download.elastic.co/logstash/logstash/logstash-2.3.2.tar.gz
Extract it after downloading:
tar -xvf logstash-2.3.2.tar.gz
Rename the extracted directory to logstash and move it to /usr/local:
mv logstash-2.3.2 logstash
mv logstash /usr/local
Remember to install a Java 8 JDK first!
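As a quick sanity check, confirm Java 8 is available (a minimal sketch; the install command is only an example for Debian/Ubuntu, and the package name may differ on your system):
java -version
# if it is missing, something like the following installs OpenJDK 8 on Debian/Ubuntu:
# sudo apt-get install openjdk-8-jdk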
First, create a kafka/ folder under /usr/local to store Kafka and ZooKeeper:
mkdir /usr/local/kafka
Enter the kafka folder and download ZooKeeper:
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
Extract it after downloading:
tar -xvf apache-zookeeper-3.6.3-bin.tar.gz
Create a soft link:
ln -s apache-zookeeper-3.6.3-bin zookeeper
Go to ZooKeeper's conf/ folder and create a zoo.cfg file by copying zoo_sample.cfg:
cp zoo_sample.cfg ./zoo.cfg
Remember that ZooKeeper's default client port is 2181; it will be needed later.
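For reference, the defaults copied from zoo_sample.cfg look roughly like this (a sketch; the dataDir shown is the sample default, so point it at a persistent location if you prefer):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181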
Go to the zookeeper folder. zkServer.sh supports these common commands:
./bin/zkServer.sh start
./bin/zkServer.sh stop
./bin/zkServer.sh restart
./bin/zkServer.sh status
Enter the start command:
./bin/zkServer.sh start
Then check the status to confirm it started; if it did not, restart it.
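As an extra check (a sketch, assuming the default client port 2181), confirm the port is listening:
ss -lnt | grep 2181   # ZooKeeper's client port should show up as LISTEN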
Go to the /usr/local/kafka folder.
Download kafka:
wget https://mirrors.bfsu.edu.cn/apache/kafka/2.8.0/kafka_2.13-2.8.0.tgz
Unzip:
tar -xvf kafka_2.13-2.8.0.tgz
Create a soft link:
ln -s kafka_2.13-2.8.0 kafka
Edit the Kafka configuration:
vim kafka/config/server.properties
Pay particular attention to the ZooKeeper connection setting (zookeeper.connect): configure it according to your actual setup.
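A minimal sketch of the lines typically adjusted for a single local broker (these values are assumptions; keep whatever matches your environment):
broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181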
Start Kafka:
bin/kafka-server-start.sh -daemon config/server.properties
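To confirm the broker came up (a sketch; jps ships with the JDK, and the log path assumes Kafka's default logs/ directory):
jps                        # a process named Kafka should be listed
tail -n 20 logs/server.log # look for a line saying the broker started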
Create a topic:
bin/kafka-topics.sh --create --topic suricata-http-log --replication-factor 1 --partitions 1 --zookeeper localhost:2181
suricata-http-log: the topic id
localhost:2181: the host and port of ZooKeeper
These two items need to be modified to match your own setup.
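You can verify the topic was created (a sketch; in Kafka 2.8 the --zookeeper flag still works, though --bootstrap-server localhost:9092 is the preferred replacement):
bin/kafka-topics.sh --describe --topic suricata-http-log --zookeeper localhost:2181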
Next, create a Logstash configuration file. Its location is arbitrary; mine is in my personal folder:
vim /home/canvas/Test/logstash-kafka.conf
This creates the conf file; its content is described below.
The input section specifies the source file, eve-http.json, via the path of the log file whose contents I want to forward.
The output section indicates that we want to send the events to Kafka.
Note: I originally specified the codec as json, but the output format turned out wrong, so I commented it out. The problem may simply be that my source file is already in JSON format.
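Putting that together, a minimal sketch of what logstash-kafka.conf could look like (the log path, broker address, and topic name are assumptions based on the rest of this post; older versions of the kafka output plugin use broker_list instead of bootstrap_servers):
input {
  file {
    # assumed location of the Suricata HTTP log; use your own path
    path => "/var/log/suricata/eve-http.json"
    start_position => "beginning"
  }
}
output {
  kafka {
    bootstrap_servers => "localhost:9092"
    topic_id => "suricata-http-log"
    # codec => json   # commented out, as noted above
  }
}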
Although the console producer is not needed for this setup, here is the command to start it anyway.
Start the producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --sync --topic suricata-http-log
Start the consumer:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic suricata-http-log --from-beginning
Type a string in the producer console and press Enter; the consumer will print the same string.
Start Logstash with the conf file:
logstash/bin/logstash -f /home/canvas/Test/logstash-kafka.conf
When Logstash starts successfully, it prints a message indicating that the pipeline has started.
After startup, whenever a new entry is appended to the eve-http.json file referenced by the input path in /home/canvas/Test/logstash-kafka.conf, the Kafka consumer outputs it as well:
(Screenshot: synchronous output log.) The part after Kafka's added marker is the newly appended log content, which matches the source log file exactly.
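To reproduce this end to end (a sketch; the eve-http.json path is the assumed one from the input section, and the JSON line is just a dummy event):
# keep the consumer from the earlier step running, then append a test line to the source log
echo '{"event_type":"http","hostname":"example.test"}' >> /var/log/suricata/eve-http.json
# within a few seconds the same line should appear in the consumer's output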
This completes our task.