Syslog Log Collection

This module was tested with logs from operating systems such as Ubuntu 12.04, CentOS 7, and macOS Sierra.
It does not work on Windows.

1. Configuration

Reference: https://www.elastic.co/guide/en/beats/filebeat/7.3/filebeat-module-system.html#configuring-system-module

Reference 2: the Home page of the Kibana UI

2. Dashboard

Apache Log Collection

1. Configuration

The apache module was tested with logs from Apache versions 2.2.22 and 2.4.23.
On Windows, the module was tested with the Apache HTTP Server installed from the Chocolatey repository.

Reference 1: https://www.elastic.co/guide/en/beats/filebeat/7.2/filebeat-module-apache.html#running-apache-modules

2. Dashboard

Nginx Log Collection

The nginx module was tested with logs from version 1.10.
On Windows, the module was tested with Nginx installed from the Chocolatey repository.

1. Installation

Reference 1: https://www.elastic.co/guide/en/beats/filebeat/7.3/filebeat-module-nginx.html

2. Dashboard

MySQL Log Collection

The mysql module was tested with logs from MySQL 5.5, 5.7 and 8.0, MariaDB 10.1, 10.2 and 10.3, and Percona 5.7 and 8.0.

1. Installation

Reference 1: https://www.elastic.co/guide/en/beats/filebeat/7.3/filebeat-module-mysql.html

2. Dashboard

Custom Index Log Collection

1. Using the Default Settings

Using the default index name and index lifecycle policy requires no configuration.

Index name: filebeat-7.3.0-2019.08.26-000001
Index lifecycle policy: filebeat-7.2.0 (rollover at 50 GB or after 30 days)

$ cat /etc/filebeat/filebeat.yml |grep -v "#"|grep -v "^$"
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "192.168.20.187:5601"
output.elasticsearch:
  hosts: ["192.168.20.188:9200", "192.168.20.189:9200"]
  username: "elastic"
  password: "xiodi.cn123"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
logging.level: warning

Result (verification sketch below):

  • Index created: filebeat-7.2.0-2019.08.30-000001
  • Index lifecycle policy in use: filebeat-7.2.0
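
A quick way to confirm both results (a sketch, assuming direct access to the cluster with the credentials shown above):

$ curl -u elastic:xiodi.cn123 "http://192.168.20.188:9200/_cat/indices/filebeat-*?v"
$ curl -u elastic:xiodi.cn123 "http://192.168.20.188:9200/_cat/aliases/filebeat-*?v"
$ curl -u elastic:xiodi.cn123 "http://192.168.20.188:9200/filebeat-*/_ilm/explain?pretty"

The last call should report the lifecycle policy (filebeat-7.2.0) attached to the newly created index.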

2. Custom Index Without an Index Lifecycle Policy

$ cat /etc/filebeat/filebeat.yml |grep -v "#"|grep -v "^$"
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 2
setup.template:
  name: "filebeat-index"
  pattern: "filebeat-index-*"
setup.ilm.enabled: false
setup.kibana:
  host: "192.168.20.187:5601"
output.elasticsearch:
  hosts: ["192.168.20.188:9200", "192.168.20.189:9200"]
  username: "elastic"
  password: "${ES_PWD}"
  index: "filebeat-index-%{[agent.version]}-%{+yyyy.MM.dd}"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
logging.level: warning

Result (verification sketch below):

  • Index created: filebeat-index-7.2.0-2019.08.30
  • A new index is created automatically each day
  • No index alias is created
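
To confirm that these indices are not managed by any lifecycle policy (a sketch, assuming the same cluster and credentials as in the previous example):

$ curl -u elastic:xiodi.cn123 "http://192.168.20.188:9200/filebeat-index-*/_ilm/explain?pretty"   # expects "managed" : false
$ curl -u elastic:xiodi.cn123 "http://192.168.20.188:9200/_cat/aliases/filebeat-index-*?v"        # no write alias should be listed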

3. Custom Index with an Index Lifecycle Policy

Index lifecycle management: https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html

//Example 1

$ cat /etc/filebeat/filebeat.yml |grep -v "#"|grep -v "^$"
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.ilm.enabled: auto
setup.ilm.rollover_alias: "filebeat-index2"
setup.ilm.pattern: "{now/d}-000001"
setup.ilm.policy_name: "filebeat-index2"
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "192.168.20.187:5601"
output.elasticsearch:
  hosts: ["192.168.20.188:9200", "192.168.20.189:9200"]
  username: "elastic"
  password: "xiodi.cn123"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  • setup.ilm.enabled: accepts true, false, or auto; the default is auto.

  • setup.ilm.rollover_alias: defaults to filebeat-{agent.version}; overriding the value does not remove the agent.version suffix.

  • setup.ilm.pattern: the rollover pattern; defaults to %{now/d}-000001.

More rollover patterns: https://www.elastic.co/guide/en/elasticsearch/reference/7.3/indices-rollover-index.html#_using_date_math_with_the_rollover_api
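
To see how a date-math pattern such as {now/d}-000001 resolves, it can be tested directly against the cluster. A sketch, assuming the same cluster and credentials as above; date-math index names must be URL-encoded when used in a request path:

# Create a throwaway index with a date-math name (the name <datemath-test-{now/d}-000001> is hypothetical, used only to show how the pattern resolves):
$ curl -u elastic:xiodi.cn123 -X PUT "http://192.168.20.188:9200/%3Cdatemath-test-%7Bnow%2Fd%7D-000001%3E"

# Dry-run a rollover against the write alias to preview the next index name without creating it:
$ curl -u elastic:xiodi.cn123 -X POST "http://192.168.20.188:9200/filebeat-index2/_rollover?dry_run=true"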

4. Defining Automatic and Manual Index Management Together

When automatic and manual index management are defined at the same time, the manual settings do not take effect.

//Define the index lifecycle policy

PUT _ilm/policy/filebeat-index3
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "1d",
        "actions": {
          "set_priority": {
            "priority": 100
          },
          "rollover": {
            "max_age": "7d",
            "max_docs": 1000,
            "max_size": "5gb"
          }
        }
      }
    }
  }
}
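
A quick check (sketch, assuming the same cluster and credentials as above) that the policy was stored as expected:

$ curl -u elastic:xiodi.cn123 "http://192.168.20.188:9200/_ilm/policy/filebeat-index3?pretty"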

//Example 2

$ cat /etc/filebeat/filebeat.yml |grep -v "#"|grep -v "^$"
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.ilm.enabled: auto
setup.ilm.rollover_alias: "filebeat-index3"
setup.ilm.pattern: "{now/d}-000001"
setup.ilm.policy_name: "filebeat-index3"
setup.template:
  name: "filebeat-3-index"
  pattern: "filebeat-3-index-*"
setup.dashboards.enabled: true
setup.kibana:
  host: "192.168.20.187:5601"
output.elasticsearch:
  hosts: ["192.168.20.188:9200", "192.168.20.189:9200"]
  username: "elastic"
  password: "xiodi.cn123"
  index: "filebeat-2-index-%{[agent.version]}-%{+yyyy.MM.dd}"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

Using Kafka as a Queue

1. ZooKeeper Cluster Configuration

The Kafka package ships with ZooKeeper. A ZooKeeper ensemble elects its leader by majority vote, so at least three nodes are required (in production they should run on three separate server instances).

//Download the package

$ wget http://mirror.bit.edu.cn/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz
$ tar xf kafka_2.12-2.3.0.tgz -C /opt/
$ cd /opt/kafka_2.12-2.3.0/

//Modify the configuration

$ cp config/zookeeper.properties config/zookeeper.2181.properties
$ cp config/zookeeper.properties config/zookeeper.2182.properties
$ cp config/zookeeper.properties config/zookeeper.2183.properties

$ cat config/zookeeper.2181.properties |grep -v "#"|grep -v "^$"
dataDir=/data/zookeeper/2181 # change to 2182 / 2183 on the other two instances
clientPort=2181 # change to 2182 / 2183 on the other two instances
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.20.228:12888:13888
server.2=192.168.20.228:22888:23888
server.3=192.168.20.228:32888:33888

//Assign a cluster ID (myid) to each instance

$ mkdir -pv /data/zookeeper/{2181,2182,2183}
$ echo 1 > /data/zookeeper/2181/myid
$ echo 2 > /data/zookeeper/2182/myid
$ echo 3 > /data/zookeeper/2183/myid

//Start the ZooKeeper cluster

$ bin/zookeeper-server-start.sh config/zookeeper.2181.properties > /dev/null 2>&1 &
$ bin/zookeeper-server-start.sh config/zookeeper.2182.properties > /dev/null 2>&1 &
$ bin/zookeeper-server-start.sh config/zookeeper.2183.properties > /dev/null 2>&1 &
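
An optional sanity check (a sketch; it relies on ZooKeeper's four-letter admin commands, available in the ZooKeeper version bundled with Kafka 2.3.0):

$ echo srvr | nc 192.168.20.228 2181 | grep Mode
$ echo srvr | nc 192.168.20.228 2182 | grep Mode
$ echo srvr | nc 192.168.20.228 2183 | grep Mode

One instance should report Mode: leader and the other two Mode: follower.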

2. Kafka Cluster Configuration

//Modify the configuration

$ cp config/server.properties config/server.9092.properties
$ cp config/server.properties config/server.9093.properties
$ cp config/server.properties config/server.9094.properties

$ vim config/server.9092.properties
broker.id=1
listeners=PLAINTEXT://:9092
...
log.dirs=/data/kafka-logs/1
...
zookeeper.connect=192.168.20.228:2181,192.168.20.228:2182,192.168.20.228:2183
...
auto.create.topics.enable=true # without this setting, Filebeat cannot create the topic automatically and it has to be created manually

On the other two brokers, change broker.id to 2 and 3, the listener port to 9093 and 9094, and log.dirs to /data/kafka-logs/2 and /data/kafka-logs/3, as sketched below.
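
A sketch of the lines that differ in the other two broker configurations:

$ vim config/server.9093.properties
broker.id=2
listeners=PLAINTEXT://:9093
log.dirs=/data/kafka-logs/2

$ vim config/server.9094.properties
broker.id=3
listeners=PLAINTEXT://:9094
log.dirs=/data/kafka-logs/3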

//Start the Kafka cluster

$ bin/kafka-server-start.sh -daemon config/server.9092.properties
$ bin/kafka-server-start.sh -daemon config/server.9093.properties
$ bin/kafka-server-start.sh -daemon config/server.9094.properties
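
To confirm that all three brokers registered with ZooKeeper (sketch, using the zookeeper-shell tool shipped with Kafka):

$ bin/zookeeper-shell.sh 192.168.20.228:2181 ls /brokers/ids

The output should end with [1, 2, 3].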

3. Testing

//Create a topic

$ bin/kafka-topics.sh --create \
--zookeeper 192.168.20.177:2181,192.168.20.177:2182,192.168.20.177:2183 \
--replication-factor 3 \
--partitions 3 \
--topic topic-test

--replication-factor: number of replicas
--partitions: number of partitions

//List topics

$ bin/kafka-topics.sh \
--zookeeper 192.168.20.177:2181,192.168.20.177:2182,192.168.20.177:2183 \
--list

//Show topic details

$ bin/kafka-topics.sh \
--zookeeper 192.168.20.177:2181,192.168.20.177:2182,192.168.20.177:2183 \
--describe --topic topic-test

//Produce data with the console producer

$ bin/kafka-console-producer.sh \
--broker-list 192.168.20.177:9092,192.168.20.177:9093,192.168.20.177:9094 \
--topic topic-test

hello aishangwei kafka

//Consume in another window to verify

$ bin/kafka-console-consumer.sh \
--bootstrap-server 192.168.20.177:9092,192.168.20.177:9093,192.168.20.177:9094 \
--from-beginning --topic topic-test

hello

4. Filebeat Configuration

$ cat /etc/filebeat/filebeat.yml |grep -v "#"|grep -v "^$"
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.kafka:
  hosts: ["192.168.20.177:9092"]
  topic: log-topic
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
...
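
Once Filebeat is running, raw JSON events should be visible on the topic. A quick check (sketch; the console consumer and broker addresses are the same as in the test section above):

$ bin/kafka-console-consumer.sh \
--bootstrap-server 192.168.20.177:9092,192.168.20.177:9093,192.168.20.177:9094 \
--topic log-topic --from-beginning --max-messages 5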

5. Logstash Configuration

input {
  kafka {
    bootstrap_servers => "192.168.20.177:9092, 192.168.20.177:9093, 192.168.20.177:9094"
    group_id => "httpgroup"
    topics => "log-topic"
    consumer_threads => 3
    decorate_events => true
    codec => "json"
  }
}

filter {
  if [log][file][path] == "/var/log/messages" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYDATA:message}" }
      overwrite => [ "message" ]
    }
    date {
      match => [ "system.syslog.timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      target => "@timestamp"
      timezone => "Asia/Shanghai"
      remove_field => [ "system.syslog.timestamp" ]
    }
  }
}

output {
  if [event][module] == "system" {
    elasticsearch {
      hosts => ["192.168.20.177:9200", "192.168.20.176:9200"]
      user => "elastic"
      password => "xiodi.cn123"
      index => "filebeat-logstash-system-%{+YYYY.MM.dd}"
      codec => "json"
    }
  }
}
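
A sketch of how this pipeline might be loaded (assuming a package installation of Logstash and that the pipeline above is saved as /etc/logstash/conf.d/kafka-system.conf; the file name is only an example):

$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-system.conf --config.test_and_exit
$ systemctl restart logstash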

6. Viewing the Results

(1) Create an index pattern

Pattern: filebeat-logstash-system-*

(2) View the logs in Discover

Using Redis as a Queue

1. Install and Configure Redis

$ wget http://download.redis.io/releases/redis-5.0.5.tar.gz
$ tar xf redis-5.0.5.tar.gz
$ cd redis-5.0.5/deps
$ make hiredis lua jemalloc linenoise
$ cd ..
$ make PREFIX=/opt/redis install

$ mkdir /opt/redis/conf
$ cp redis.conf /opt/redis/conf
$ vim /opt/redis/conf/redis.conf
bind 192.168.20.227
...
daemonize yes
...
requirepass xiodi.cn123
...

$ /opt/redis/bin/redis-server /opt/redis/conf/redis.conf

2. Verification

$ /opt/redis/bin/redis-cli -a xiodi.cn123 -h 192.168.20.227
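
Later, once Filebeat is writing to Redis (see the configuration below), the queue can be inspected from redis-cli. A sketch using the key and database configured below:

$ /opt/redis/bin/redis-cli -a xiodi.cn123 -h 192.168.20.227
192.168.20.227:6379> SELECT 15
192.168.20.227:6379[15]> LLEN filebeat-test
192.168.20.227:6379[15]> LRANGE filebeat-test 0 0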

3. Filebeat Configuration

//Beats tag configuration

Type: Output
Hosts: 192.168.20.176:6379
Username
Password

Other Config:
key: filebeat-test
db: 15
timeout: 5
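
The same output can also be written directly in filebeat.yml instead of through the tag-based configuration above. A minimal sketch, assuming the Redis password set during installation:

output.redis:
  hosts: ["192.168.20.176:6379"]
  password: "xiodi.cn123"
  key: "filebeat-test"
  db: 15
  timeout: 5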

4. Logstash Configuration

//Logstash main configuration

$ cat /etc/logstash/logstash.yml |grep -v "#"|grep -v "^$"
path.data: /var/lib/logstash
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: xiodi.cn123
xpack.monitoring.elasticsearch.hosts: ["http://192.168.20.176:9200", "http://192.168.20.177:9200"]
xpack.management.enabled: true
xpack.management.pipeline.id: ["logstash_log_001"]
xpack.management.elasticsearch.username: elastic
xpack.management.elasticsearch.password: xiodi.cn123
xpack.management.elasticsearch.hosts: ["http://192.168.20.176:9200", "http://192.168.20.177:9200"]

//Pipeline configuration (done in Kibana)

input {
  redis {
    host => "192.168.20.176"
    port => 6379
    key => "filebeat-test"
    data_type => "list"
    password => "xiodi.cn123"
    db => 15
  }
}

filter {
  if [log][file][path] == "/var/log/nginx/access.log" {
    grok {
      match => { "message" => '%{IPV4:nginx.access.ip} - (%{DATA:user.name}|-) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} "%{DATA:http.request.referrer}" "%{DATA:user_agent.original}"' }
    }
    grok {
      match => { "nginx.access.info" => "%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}" }
      remove_field => [ "nginx.access.info", "message" ]
    }
    geoip {
      source => "nginx.access.ip"
      target => "geoip"
    }
    date {
      match => [ "nginx.access.time", "dd/MMM/yyyy:H:m:s Z" ]
      target => "@timestamp"
      timezone => "Asia/Shanghai"
      remove_field => [ "nginx.access.time" ]
    }
  }
  if [log][file][path] == "/var/log/nginx/error.log" {
    grok {
      match => { "message" => '%{DATA:nginx.error.time} \[%{DATA:log.level}\] %{NUMBER:process.pid:long}#%{NUMBER:process.thread.id:long}: (\*%{NUMBER:nginx.error.connection_id:long} )?%{GREEDYDATA:message}' }
      overwrite => [ "message" ]
    }
    date {
      match => [ "nginx.error.time", "yyyy/MM/dd H:m:s" ]
      target => "@timestamp"
      timezone => "Asia/Shanghai"
      remove_field => [ "nginx.error.time" ]
    }
  }
}

output {
  if [event][module] == "nginx" {
    elasticsearch {
      hosts => ["192.168.20.175:9200", "192.168.20.177:9200"]
      user => "elastic"
      password => "xiodi.cn123"
      index => "filebeat-logstash-nginx-%{+YYYY.MM.dd}"
      codec => "json"
    }
  }
}

5. Viewing the Results

(1) Create an index pattern

Pattern: filebeat-logstash-nginx-*

(2) View the logs in Discover