Filebeat
Synopsis
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.
(Architecture diagram)
Install Filebeat
Download and install
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.1-x86_64.rpm
yum install ./filebeat-6.0.1-x86_64.rpm
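To confirm the package installed correctly, you can query the RPM database and print the version (a quick sanity check, not part of the original steps):
rpm -qi filebeat # show package metadata
filebeat version # print the installed Filebeat version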
Configure
Modify the Filebeat configuration file
vim /etc/filebeat/filebeat.yml # main configuration file
filebeat.inputs:
- type: log                      # input type
  paths:
    - /var/log/httpd/*           # where to read data from
# Output to either Elasticsearch or Logstash; only one output may be active
#output.elasticsearch:           # output data directly to Elasticsearch
#  hosts: ["localhost:9200"]
output.logstash:                 # send data to Logstash; configure Logstash to receive it with a beats input
  hosts: ["172.18.68.14:5044"]
Define the input log file path
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
In this example, /var/log/*.log is the input path, which means Filebeat will collect every file in the /var/log directory that ends with .log.
To fetch all files from a predefined level of subdirectories, use the pattern /var/log/*/*.log. This matches all files ending in .log that sit one directory below /var/log, but not the files in /var/log itself. With such a directory structure, a file like /var/log/httpd/access.log would be crawled, while a file directly under /var/log, such as /var/log/messages, would not.
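A quick way to see which files a glob pattern matches is to expand it in the shell; the paths below are only illustrative:
ls /var/log/*/*.log # files one level below /var/log, e.g. /var/log/httpd/access.log
ls /var/log/*.log # files directly under /var/log instead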
Send to Elasticsearch
If you are sending output directly to Elasticsearch (and not using Logstash), set the IP address and port where Filebeat can find the Elasticsearch installation:
output.elasticsearch:
  hosts: ["192.168.1.42:9200"]
If you plan to use the sample Kibana dashboards, configure the Kibana endpoint as well:
setup.kibana:
  host: "localhost:5601"
If Elasticsearch and Kibana are secured, specify the credentials in the configuration:
output.elasticsearch:
  hosts: ["myEShost:9200"]
  username: "filebeat_internal"
  password: "{pwd}"
setup.kibana:
  host: "mykibanahost:5601"
  username: "my_kibana_user"
  password: "{pwd}"
Configuring Filebeat to use Logstash
If you want to use Logstash to perform additional processing on the data collected by Filebeat, you need to configure Filebeat to use Logstash:
output.logstash:
  hosts: ["127.0.0.1:5044"]
Full Configuration
#=========================== Filebeat inputs ==============
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
#============================== Dashboards ===============
setup.dashboards.enabled: false
#============================== Kibana ==================
setup.kibana:
  host: "192.168.101.5:5601"
#-------------------------- Elasticsearch output ---------
output.elasticsearch:
  hosts: ["localhost:9200"]
Start Filebeat
systemctl start filebeat
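To have Filebeat start at boot and to confirm it is running, the usual systemd commands apply:
systemctl enable filebeat
systemctl status filebeat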
Configure Logstash to receive the data collected by Filebeat
vim /etc/logstash/conf.d/beats.conf # Logstash reads pipeline files from conf.d/; the filename beats.conf is a placeholder, any *.conf name works
input {
beats {
port => 5044 # Listen on 5044 to receive data from Filebeat.
}
}
filter {
grok {
match => {
"message" => "%{COMBINEDAPACHELOG}" # match HTTP logs
}
remove_field => "message" # don't show original message, only after match
}
}
output {
elasticsearch {
hosts => ["http://172.18.68.11:9200", "http://172.18.68.12:9200", "http://172.18.68.13:9200"] # cluster IPs
index => "logstash-%{+}"
action => "index"
document_type => "apache_logs"
}
}
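Before starting Logstash, the pipeline file can be syntax-checked (again assuming the placeholder filename beats.conf):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf --config.test_and_exit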
Start Logstash
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats.conf
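Once Logstash is up, it should be listening on port 5044 for Beats connections, which can be verified with ss:
ss -lntp | grep 5044 # Logstash should be listening here for the Beats input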
Simulate log access
Use the curl command to simulate client visits and generate access logs:
curl 127.0.0.1
curl 172.18.68.51
curl 172.18.68.52
curl 172.18.68.53
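To generate more traffic in one go, a small shell loop works just as well (an optional convenience, not in the original):
for ip in 127.0.0.1 172.18.68.51 172.18.68.52 172.18.68.53; do
  curl -s "$ip" > /dev/null # each request appends a line to the Apache access log
done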
Verify the data
Clear out the old data from previous experiments (Kibana asks you to type "delete" in a dialog box to confirm the deletion). You can then see the data that Filebeat captured, Logstash filtered, and Elasticsearch indexed.
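The new index can also be checked directly from the command line against one of the cluster nodes named in the Logstash output above:
curl 'http://172.18.68.11:9200/_cat/indices?v' # a logstash-YYYY.MM.dd index should appear
curl 'http://172.18.68.11:9200/logstash-*/_search?pretty&size=1' # peek at one parsed Apache log event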