Logstash plugins fall into four main (commonly used) categories: input, filter, output, and codec (see the pipeline skeleton below).

Logstash input plugins:https://www.elastic.co/guide/en/logstash/5.1/input-plugins.html
Logstash filter plugins:https://www.elastic.co/guide/en/logstash/5.1/filter-plugins.html
Logstash output plugins:https://www.elastic.co/guide/en/logstash/5.1/output-plugins.html
Logstash Codec plugins:https://www.elastic.co/guide/en/logstash/5.1/codec-plugins.html
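A minimal pipeline skeleton (hypothetical path and field name) showing where each plugin type sits; codecs are declared inside inputs and outputs rather than as a standalone section:

input {
  file {
    path => ["/var/log/example.log"]  # hypothetical path
  }
}
filter {
  # filters transform events in flight
  mutate { add_field => { "pipeline" => "demo" } }
}
output {
  stdout {
    codec => rubydebug  # codecs decode/encode events inside inputs and outputs
  }
}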

Environment: CentOS 6.6
   ElasticSearch 5.1.1
   Logstash 5.1.1
   Kibana 5.1.1

Logstash codec usage: displaying Nginx logs

Reference: https://www.elastic.co/guide/en/logstash/5.1/codec-plugins.html

# The file must end in .conf, because Logstash scans *.conf files by default
[root@ELK ~]# cat /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => ["/var/log/nginx/access.log"]
    type => "nginx"
    start_position => "beginning"  # this parameter may be "beginning" or "end"
  }
}
output {
  stdout {
    codec => rubydebug{}
  }
}
[root@ELK ~]#
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"path" => "/var/log/nginx/access.log",
"@timestamp" => 2016-12-14T03:39:56.805Z,
"@version" => "1",
"host" => "0.0.0.0",
"message" => "192.168.31.100 - - [13/Dec/2016:18:22:21 +0800] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2\" \"-\"",
"type" => "nginx",
"tags" => []
}
....
# When Nginx is requested again (i.e., when new lines land in access.log), the events are printed to the screen

Note, the difference between beginning and end:
beginning reads the file from the start; end reads only newly appended content. After the first read, Logstash creates a hidden sincedb file that records the read position, e.g. .sincedb_d883144359d3b4f516b37dba51fab2a2. With an RPM install it lives under /var/lib/logstash/plugins/inputs/file; search for the exact location.
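When re-testing a config it is often handy to make Logstash forget that position and reread the file; a minimal sketch using the file input's sincedb_path option (the /dev/null trick is a common convention, not from the original config):

input {
  file {
    path => ["/var/log/nginx/access.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"  # discard read-position state so every run starts from the top
  }
}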

Logstash codec plugin: multiline usage

Reference: https://www.elastic.co/guide/en/logstash/5.1/plugins-codecs-multiline.html
https://kibana.logstash.es/content/logstash/plugins/codec/multiline.html

[root@ELK ~]# cat /etc/logstash/conf.d/multiline.conf
input {
  stdin {
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => rubydebug{}
  }
}
[root@ELK ~]#
# Test the configuration file syntax
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/multiline.conf -t
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@ELK ~]#
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/multiline.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[hello world]
[Windows of the world]
{
"@timestamp" => 2016-12-14T07:53:20.808Z,
"@version" => "1",
"host" => "0.0.0.0",
"message" => "[hello world]",
"tags" => []
}
Good mornning
[
{
"@timestamp" => 2016-12-14T07:54:00.060Z,
"@version" => "1",
"host" => "0.0.0.0",
"message" => "[Windows of the world]\nGood mornning",
"tags" => [
[0] "multiline"
]
}
# The multiline codec here treats "[" as the marker: with negate => true and what => "previous", any line that does not start with "[" is appended to the previous event, so everything up to the next "[" becomes a single message
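The same three options cover other multi-line shapes; a sketch (not from the original post) for Java stack traces, whose continuation lines begin with whitespace:

input {
  stdin {
    codec => multiline {
      pattern => "^\s"    # lines that start with whitespace...
      what => "previous"  # ...are appended to the previous event (negate defaults to false)
    }
  }
}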

Logstash filter plugin: grok usage

Reference: https://www.elastic.co/guide/en/logstash/5.1/plugins-filters-grok.html
http://kibana.logstash.es/content/logstash/plugins/filter/grok.html

grok mainly does regex matching and field splitting.
The addresses below may require a VPN to reach:
Official grok patterns: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
grok online debugger: https://grokdebug.herokuapp.com/
grok regular expressions: https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
When using grok you can call the built-in patterns directly, or write your own (see the sketch below); the online debugger above is the place to test grok regexes.
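Custom patterns can also be kept in a file and loaded through the grok filter's patterns_dir option; a minimal sketch in which the pattern file path and the MYSTATUS name are hypothetical:

# /etc/logstash/patterns/extra holds one "NAME regex" pair per line, e.g.:
#   MYSTATUS \d{3}
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => ["message", "%{IPV4:remote_addr} .* %{MYSTATUS:status}"]
  }
}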

# Log content is usually already in a key-value-like format, but the concrete style varies widely. Logstash provides the filters/kv plugin to handle the different styles of key-value logs and turn them into real LogStash::Event data.

# I haven't fully got the hang of kv yet; the example below is excerpted from:
https://kibana.logstash.es/content/logstash/plugins/filter/kv.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
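A minimal sketch of what kv does with a regularly delimited line (the input line is hypothetical, not from the original post):

# Given an event whose message is: uid=42&cip=10.0.0.1&lang=en
filter {
  kv {
    source => "message"
    field_split => "&"  # pairs are separated by &
    value_split => "="  # keys and values are separated by =
  }
}
# Resulting fields: uid => "42", cip => "10.0.0.1", lang => "en"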

# Parse the contents of test.log
[root@ELK ~]# cat /root/test.log
192.168.31.105 - - [14/Dec/2016:15:17:56 +0800] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36" "-"
[root@ELK ~]#
[root@ELK ~]# cat /etc/logstash/conf.d/samp.conf
input {
  file {
    path => "/root/test.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => ["message","%{IPV4:remote_addr} \- %{USERNAME:remote_user} \[%{DATA:time_local}\] (?<http_request>\"[A-Za-z][A-Za-z]+\s*\/\s*[A-Za-z][A-Za-z]+\/[0-9][0-9]*\.[0-9][0-9]*\") (?<status>[0-9][0-9]*) (?<body_bytes_sent>[0-9][0-9]*) (?<http_referer>\"[\s\S]*?\") (?<http_user_agent>\"[\s\S]*?\") (?<http_x_forwarded_for>\"[\s\S]*?\")"]
  }
  kv {
    source => "http_user_agent"
    field_split => ";"
    value_split => "="
    remove_field => [ "\\""]
  }
  urldecode {
    all_fields => true
  }
}
# The kv plugin seems to have no effect here; kv suits regularly delimited data
output {
  stdout {
    codec => rubydebug
  }
}
[root@ELK ~]#
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/samp.conf -t
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/samp.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"remote_addr" => "192.168.31.105",
"body_bytes_sent" => "0",
"time_local" => "14/Dec/2016:15:17:56 +0800",
"message" => "192.168.31.105 - - [14/Dec/2016:15:17:56 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36\" \"-\"",
"tags" => [],
"http_user_agent" => "\"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36\"",
"remote_user" => "-",
"path" => "/root/test.log",
"@timestamp" => 2016-12-14T16:05:20.443Z,
"http_referer" => "\"-\"",
"@version" => "1",
"host" => "0.0.0.0",
"http_x_forwarded_for" => "\"-\"",
"http_request" => "\"GET / HTTP/1.1\"",
"status" => "304"
}
...
[root@ELK ~]#
# Logstash's own log output
[root@ELK ~]# tail -f /var/log/logstash/logstash-plain.log
[2016-12-15T00:02:28,195][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[2016-12-15T00:05:21,397][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2016-12-15T00:05:21,477][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-15T00:05:22,156][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
...
[root@ELK ~]#

Example:

filter {
  ruby {
    init => "@kname = ['method','uri','verb']"
    code => "
      new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split('|'))])
      new_event.remove('@timestamp')
      event.append(new_event)
    "
  }
  if [uri] {
    ruby {
      init => "@kname = ['url_path','url_args']"
      code => "
        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('uri').split('?'))])
        new_event.remove('@timestamp')
        event.append(new_event)
      "
    }
    kv {
      prefix => "url_"
      source => "url_args"
      field_split => "&"
      remove_field => [ "url_args", "uri", "request" ]
    }
  }
}

# Explanation
With this configuration, the $request field in the Nginx access log can be split in detail into method, url_path, verb, url_a, url_b, and so on.
Going further: if url_args carries too many distinct keys, the Elasticsearch cluster can go down from frequent mapping updates or from spending too much memory on cluster state. The better choice is therefore to keep only the url_args keys that are clearly useful and discard the rest:
kv {
  prefix => "url_"
  source => "url_args"
  field_split => "&"
  include_keys => [ "uid", "cip" ]
  remove_field => [ "url_args", "uri", "request" ]
}
# Use grok to parse the /var/log/secure log
input {
  file {
    path => ["/var/log/secure"]
    start_position => "end"
  }
}
filter {
  grok {
    match => ["message",".* sshd\[\d+\]: (?<status>\S+) .* (?<ClientIP>(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})?) .*"]
  }
}
output {
  stdout {
    codec => rubydebug{}
  }
}

Logstash filter plugin: json usage

Reference: https://www.elastic.co/guide/en/logstash/5.1/plugins-filters-json.html
https://kibana.logstash.es/content/logstash/plugins/filter/json.html

# Modify the Nginx log format
[root@ELK ~]# head -32 /etc/nginx/nginx.conf | tail -15
# log_format main '$remote_addr - $remote_user [$time_local] "$request" '
#                 '$status $body_bytes_sent "$http_referer" '
#                 '"$http_user_agent" "$http_x_forwarded_for"';
log_format json '{"client":"$remote_addr",'
                '"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"host":"$server_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":$request_time,'
                '"domain":"$host",'
                '"status":"$status",'
                '"ua":"$http_user_agent"}';
access_log /var/log/nginx/access.log json;
[root@ELK ~]#
[root@ELK ~]# cat /etc/logstash/conf.d/json.conf
input {
  file {
    path => ["/var/log/nginx/access.log"]
    type => "Nginx"
    start_position => "end"
    codec => json
  }
}
output {
  stdout {
    codec => rubydebug{}
  }
}
[root@ELK ~]#
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/json.conf -t
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@ELK ~]#
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/json.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"path" => "/var/log/nginx/access.log",
"@timestamp" => 2016-12-15T07:33:00.000Z,
"size" => 0,
"domain" => "192.168.31.100",
"@version" => "1",
"host" => "192.168.31.100",
"responsetime" => 0.0,
"ua" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36",
"type" => "Nginx",
"clinet" => "192.168.31.105",
"status" => "304",
"tags" => []
}

Logstash filter plugin: geoip usage

Reference: https://www.elastic.co/guide/en/logstash/5.1/plugins-filters-geoip.html
https://kibana.logstash.es/content/logstash/plugins/filter/geoip.html
GeoIP is the most common free IP-address geolocation library.

# logstash v5.6.4
# logstash configuration file
# cat logstash_10.20.conf
input {
  file {
    path => ["/root/test.log"]
    start_position => "end"
  }
}
filter {
  grok {
    match => ["message","%{IPV4:remote_addr} [\-] %{USERNAME:remote_user} \[%{DATA:time_local}\] \"(?<request>[^\"]*)\" (?<status>\d+) (?<body_bytes_sent>\d+) \"(?<http_referer>[^\"]*)\" \"(?<http_user_agent>[^\"]*)\" \"%{IPV4:http_x_forwarded_for}"]
  }
  geoip {
    source => "http_x_forwarded_for"
    fields => ["ip","city_name","country_name","location","latitude","longitude","timezone","region_name"]
  }
}
output {
  stdout {
    codec => rubydebug{}
  }
}

# Nginx log line
100.116.46.20 - - [15/Nov/2017:11:59:34 +0800] "GET /static/js/util/plugins/cordova-plugin-geolocation/www/geolocation.js HTTP/1.0" 200 8629 "https://qian.julend.com/static/pageHtml/loan.html?s_qian=iOS" "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Mobile/14G60 GrowingIO/0.9.47-20170111143620 (4303466304)" "27.8.223.7"
# Nginx grok pattern
%{IPV4:remote_addr} [\-] %{USERNAME:remote_user} \[%{DATA:time_local}\] \"(?<request>[^\"]*)\" (?<status>\d+) (?<body_bytes_sent>\d+) \"(?<http_referer>[^\"]*)\" \"(?<http_user_agent>[^\"]*)\" \"%{IPV4:http_x_forwarded_for}
# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/logstash_10.20.conf --path.data /data/elk_data/logstash/data
Sending Logstash's logs to /data/elk_data/logstash/logs which is now configured via log4j2.properties
{
"remote_addr" => "100.116.46.20",
"request" => "GET /static/js/util/plugins/cordova-plugin-geolocation/www/geolocation.js HTTP/1.0",
"geoip" => {
"city_name" => "Chongqing",
"timezone" => "Asia/Shanghai",
"ip" => "27.8.223.7",
"latitude" => 29.5628,
"country_name" => "China",
"region_name" => "Chongqing",
"location" => {
"lon" => 106.5528,
"lat" => 29.5628
},
"longitude" => 106.5528
},
"body_bytes_sent" => "8629",
"time_local" => "15/Nov/2017:11:59:34 +0800",
"message" => "100.116.46.20 - - [15/Nov/2017:11:59:34 +0800] \"GET /static/js/util/plugins/cordova-plugin-geolocation/www/geolocation.js HTTP/1.0\" 200 8629 \"https://qian.julend.com/static/pageHtml/loan.html?s_qian=iOS\" \"Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Mobile/14G60 GrowingIO/0.9.47-20170111143620 (4303466304)\" \"27.8.223.7\"",
"http_user_agent" => "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Mobile/14G60 GrowingIO/0.9.47-20170111143620(4303466304)",
"remote_user" => "-",
"path" => "/root/test.log",
"@timestamp" => 2017-11-15T06:16:03.829Z,
"http_referer" => "https://qian.julend.com/static/pageHtml/loan.html?s_qian=iOS",
"@version" => "1",
"host" => "elk-cluster3",
"http_x_forwarded_for" => "27.8.223.7",
"status" => "200"
}
# Nginx log split with geoip removed
{
"remote_addr" => "100.116.46.20",
"request" => "GET /static/js/util/plugins/cordova-plugin-geolocation/www/geolocation.js HTTP/1.0",
"body_bytes_sent" => "8629",
"time_local" => "15/Nov/2017:10:59:34 +0800",
"message" => "100.116.46.20 - - [15/Nov/2017:10:59:34 +0800] \"GET /static/js/util/plugins/cordova-plugin-geolocation/www/geolocation.js HTTP/1.0\" 200 8629 \"https://qian.julend.com/static/pageHtml/loan.html?s_qian=iOS\" \"Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Mobile/14G60 GrowingIO/0.9.47-20170111143620 (4303466304)\" \"27.8.223.7\"",
"http_user_agent" => "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_3 like Mac OS X) AppleWebKit/603.3.8 (KHTML, like Gecko) Mobile/14G60 GrowingIO/0.9.47-20170111143620 (4303466304)",
"remote_user" => "-",
"path" => "/root/test.log",
"@timestamp" => 2017-11-15T06:01:24.940Z,
"http_referer" => "https://qian.julend.com/static/pageHtml/loan.html?s_qian=iOS",
"@version" => "1",
"host" => "elk-cluster3",
"http_x_forwarded_for" => "27.8.223.7",
"status" => "200"
}

Logstash filter plugin: date time handling

Reference: https://www.elastic.co/guide/en/logstash/5.6/plugins-filters-date.html
https://kibana.logstash.es/content/logstash/plugins/filter/date.html
The filters/date plugin converts the time strings in your log records into LogStash::Timestamp objects and stores them in the @timestamp field.
Note: the %{+YYYY.MM.dd} notation commonly used later in outputs/elasticsearch must read @timestamp, so never delete that field and keep your own in its place; instead, convert with filters/date and then delete your own field!
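A sketch of that pattern, assuming a logdate field captured by grok; remove_field on a filter is applied only when the filter succeeds, so the source field is dropped only after @timestamp has been set:

filter {
  date {
    match => ["logdate", "dd/MMM/yyyy:HH:mm:ss Z"]
    remove_field => ["logdate"]  # drop our own field once it has been converted into @timestamp
  }
}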

Tip: enable the buffer parameter on Nginx's access_log directive; it brings a large improvement in peak response performance!

The filters/date plugin supports five time formats:

  • ISO8601
    Formats like "2011-04-19T03:44:01.103Z". The offset may appear as "08:00" instead of Z or be omitted, and the ".103" fraction is also optional. In common scenarios, Nginx's log_format can use the $time_iso8601 variable to record request times in this format.
  • UNIX
    The UNIX timestamp format: total seconds elapsed since 1970. Squid's default log format uses it.
  • UNIX_MS
    Total milliseconds elapsed since 1970. As far as I know, JavaScript frequently uses this format.
  • TAI64N
    The TAI64N format is fairly rare and looks like @4000000052f88ea32489532c. Among common applications, I only know of qmail using it.
  • Joda-Time library
    Internally, Logstash uses Java's Joda time library for time handling, so concrete formats can be defined with anything Joda supports. The Joda format symbols are listed in the table below:

Time formats:

Symbol  Meaning                       Presentation  Examples
G       era                           text          AD
C       century of era (>=0)          number        20
Y       year of era (>=0)             year          1996
x       weekyear                      year          1996
w       week of weekyear              number        27
e       day of week                   number        2
E       day of week                   text          Tuesday; Tue
y       year                          year          1996
D       day of year                   number        189
M       month of year                 month         July; Jul; 07
d       day of month                  number        10
a       halfday of day                text          PM
K       hour of halfday (0~11)        number        0
h       clockhour of halfday (1~12)   number        12
H       hour of day (0~23)            number        0
k       clockhour of day (1~24)       number        24
m       minute of hour                number        30
s       second of minute              number        55
S       fraction of second            number        978
z       time zone                     text          Pacific Standard Time; PST
Z       time zone offset/id           zone          -0800; -08:00; America/Los_Angeles
'       escape for text               delimiter
''      single quote                  literal       '

http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html

Configuring a Joda time format:

filter {
  grok {
    match => ["message", "%{HTTPDATE:logdate}"]
  }
  date {
    match => ["logdate", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

Note: a single letter Z is all that's needed to match the time zone offset.

Logstash filter plugin: splitting the secure log

Reference: https://www.elastic.co/guide/en/logstash/5.6/plugins-filters-mutate.html
This works in conjunction with Filebeat.

# grep -iv '#' /etc/filebeat/filebeat.yml |grep -iv '^$'
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure
  document_type: secure_2.250
  exclude_lines: ["system"]    # drop lines matching "system"
  include_lines: ["Accepted"]  # keep only lines matching "Accepted"
output.logstash:
  hosts: ["192.168.1.41:5044"]
#

# Logstash configuration file
# cat logstash_test.conf
input {
  beats {
    port => 5044  # must match the port in Filebeat's output.logstash hosts
  }
}
filter {
  if [type] == 'secure_2.250' {
    grok {
      match => ["message",".* sshd\[\d+\]: (?<status>\S+) .* (?<ClientIP>(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})?) .*"]  # regex-match the secure log; the captures become the status and ClientIP fields
    }
    mutate {
      split => ["message"," "]  # split the message field (the raw line that grok matched) on spaces
      add_field => {
        "username" => "%{[message][8]}"  # arrays are 0-indexed, so the 9th field has index 8
        "loginip" => "%{[message][10]}"
        "login_result" => "%{[message][5]}"
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug{}
  }
}
#
# Output
{
"offset" => 136859,
"input_type" => "log",
"source" => "/var/log/secure",
"login_result" => "Accepted",
"message" => [
[ 0] "Nov",
[ 1] "16",
[ 2] "14:38:14",
[ 3] "localhost",
[ 4] "sshd[9305]:",
[ 5] "Accepted",
[ 6] "publickey",
[ 7] "for",
[ 8] "yfshare",
[ 9] "from",
[10] "192.168.1.123",
[11] "port",
[12] "2269",
[13] "ssh2:",
[14] "RSA",
[15] "51:37:66:2f:2b:c0:74:d1:0b:15:f8:9c:7f:84:64:0a"
],
"type" => "secure_2.250",
"ClientIP" => "192.168.1.123",
"tags" => [
[0] "beats_input_codec_plain_applied"
],
"@timestamp" => 2017-11-16T06:38:19.012Z,
"loginip" => "192.168.1.123",
"@version" => "1",
"beat" => {
"name" => "localhost",
"hostname" => "localhost",
"version" => "5.6.4"
},
"host" => "localhost",
"status" => "Accepted",
"username" => "yfshare"
}

In Kibana, open Visualize and choose Data Table.
[image: Kibana Visualize Data Table]
If you hit the error "no cached mapping for this field. refresh field list from the management > index patterns page":
the cause is that the Kibana index pattern was created earlier, so fields added afterwards are unknown to it and trigger the error; refreshing the index pattern fixes it.
Solution: go to Management > Index Patterns and refresh the corresponding index pattern.
[image: Kibana Visualize Data Table]

Logstash output plugin: sending email

Reference: https://www.elastic.co/guide/en/logstash/5.1/plugins-outputs-email.html
https://kibana.logstash.es/content/logstash/plugins/output/email.html
The outputs/email plugin supports two delivery methods, SMTP and sendmail, chosen with the via parameter. SMTP offers quite a few configurable options; sendmail can only use the sendmail service on the local machine.

Example: sending from a 126 mailbox to a QQ mailbox:

output {
  email {
    port => "25"
    address => "smtp.126.com"
    username => "test@126.com"
    password => ""
    authentication => "plain"
    use_tls => true
    from => "test@126.com"
    subject => "Warning: %{title}"
    to => "test@qq.com"
    via => "smtp"
    body => "%{message}"
  }
}

Logstash output plugin: sending to HDFS

Reference: https://www.elastic.co/guide/en/logstash/5.6/plugins-outputs-webhdfs.html
https://kibana.logstash.es/content/logstash/plugins/output/hdfs.html
This plugin originated in the community and has since been adopted officially; Logstash 5.6 supports it. It uses Hadoop's WebHDFS interface, so in essence it sends POST requests.

input {
  ...
}
filter {
  ...
}
output {
  webhdfs {
    host => "127.0.0.1"  # (required)
    port => 50070  # (optional, default: 50070)
    path => "/user/logstash/dt=%{+YYYY-MM-dd}/logstash-%{+HH}.log"  # (required)
    user => "hue"  # (required)
  }
}

More plugins: https://kibana.logstash.es/content/logstash/plugins/filter/


This article is from the "Jack Wang Blog": http://www.yfshare.vip/2017/11/18/Logstash插件/