ELK consists of three components: Elasticsearch, Logstash, and Kibana. Here we add a fourth, Filebeat, a lightweight log collection agent. Filebeat uses few resources, which makes it well suited to collecting logs on each server and shipping them to Logstash; it is also the tool Elastic recommends for this role.

Filebeat belongs to the Beats family. Beats currently includes four tools:

  1. Packetbeat (collects network traffic data)
  2. Topbeat (collects system-, process-, and filesystem-level CPU and memory usage data)
  3. Filebeat (collects log file data)
  4. Winlogbeat (collects Windows event log data)

Test-Environment ELK Architecture

Official ELK architecture diagram:
[figure: Filebeat / ELK architecture]

Topology of the current test environment:
[figure: test-environment ELK topology]

Elasticsearch Fundamentals

Near-real-time search, implemented on top of the filesystem cache
Disk-sync control provided by the translog
Segment merging
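These three mechanisms can be illustrated with a toy model. The sketch below is purely illustrative (plain Python, not Elasticsearch's real implementation): writes land in an in-memory buffer and are appended to a translog; a periodic refresh turns the buffer into a searchable segment (hence "near real time"); a flush syncs to disk and clears the translog; background merging folds small segments into larger ones.

```python
# Toy sketch of Elasticsearch's near-real-time indexing model.
# Illustration only -- not the actual internals.

class ToyIndex:
    def __init__(self):
        self.buffer = []      # in-memory indexing buffer (not yet searchable)
        self.translog = []    # append-only log used for crash recovery
        self.segments = []    # searchable "segments"

    def index(self, doc):
        # A newly indexed doc goes to the buffer and the translog;
        # it is NOT visible to search yet.
        self.buffer.append(doc)
        self.translog.append(doc)

    def refresh(self):
        # Periodic refresh (every 1s by default in ES) turns the buffer
        # into a searchable segment -- this is why search is "near" real time.
        if self.buffer:
            self.segments.append(list(self.buffer))
            self.buffer = []

    def flush(self):
        # A flush persists segments to disk and clears the translog.
        self.refresh()
        self.translog = []

    def merge_segments(self):
        # Background merging combines many small segments into one.
        merged = [doc for seg in self.segments for doc in seg]
        self.segments = [merged] if merged else []

    def search(self, value):
        return [d for seg in self.segments for d in seg if value in d.values()]


idx = ToyIndex()
idx.index({"_id": "1", "msg": "hello"})
print(idx.search("hello"))   # [] -- not refreshed yet, so not visible
idx.refresh()
print(idx.search("hello"))   # [{'_id': '1', 'msg': 'hello'}]
```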

Official Documentation
Filebeat:
https://www.elastic.co/cn/products/beats/filebeat
https://www.elastic.co/guide/en/beats/filebeat/5.6/index.html

Logstash:
https://www.elastic.co/cn/products/logstash
https://www.elastic.co/guide/en/logstash/5.6/index.html

Elasticsearch:
https://www.elastic.co/cn/products/elasticsearch
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/index.html

Elasticsearch Chinese community:
https://elasticsearch.cn/

Kibana Usage Guide

Kibana also goes by the name "elasticsearch dashboard"; its design drew on Splunk.
Kibana's main features: Discover, Visualize, Dashboard, Timelion, Dev Tools, and Management.

Lucene query syntax
[figure: Kibana search-bar screenshots]

Elasticsearch Query Syntax

Elasticsearch Query DSL
[figure: Kibana Dev Tools screenshot]

An example Query DSL request body:

{
  "query": {
    "term": {
      "_id": "AV9orWWLv96dGKcHDVyi"
    }
  }
}
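As a minimal sketch, the same term query can be built programmatically before being sent to the _search endpoint. The index name ("myindex") and host below are placeholders, not values from this document:

```python
import json

# Build the term query shown above programmatically.
# The _id value is the sample from this document; the index name and
# host in the curl comment are placeholders you would replace.
def term_query(field, value):
    return {"query": {"term": {field: value}}}

body = term_query("_id", "AV9orWWLv96dGKcHDVyi")
print(json.dumps(body, indent=2))

# Equivalent request (assuming ES listens on localhost:9200):
#   curl -XGET 'http://localhost:9200/myindex/_search?pretty' \
#        -d '{"query": {"term": {"_id": "AV9orWWLv96dGKcHDVyi"}}}'
```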

[figure: Kibana search result]

querystring syntax
In the example above, everything after ?q= is querystring syntax, which you will use constantly in Kibana.

  • Full-text search: just write the word to search for, e.g. ?q=aaaa
  • Full-text search on a single field: prefix the word with the field name and a colon, e.g. ?q=user:jack
  • Exact match on a single field: put the word in double quotes, e.g. ?q=user:"jack"
  • Combining conditions: combine with NOT, AND, and OR (they must be uppercase), e.g. ?q=user:("jack" OR "bob") AND NOT mesg:aaaa
  • Field existence: _exists_:user requires the user field to exist, and _missing_:user requires it to be absent, e.g. ?q=_exists_:beat.name
  • Wildcards: ? matches a single character and * matches any number of characters, e.g. ?q=DevE?N or ?q=DevE?N*
  • Regular expressions: a bit more powerful than wildcards, e.g. ?q=/dub{2}o/
    Note: regex performance in Elasticsearch is poor; avoid complex patterns or skip regex entirely. See the Elasticsearch regular-expression syntax docs.
  • Fuzzy search: ~ allows the word to be off by one letter, e.g. ?q=dubao~
  • Range search: for numeric and time fields, e.g. ?q=@timestamp:>150928968615 or ?q=@timestamp:["now-1m" TO "now"]
curl -XGET 'http://localhost:9200/192.168.1.21-account_errorlog-2017.10.29/_search?pretty=true&q="com.alibaba.dubbo.monitor.MonitorService"'
curl -XGET 'http://localhost:9200/192.168.1.21-account_errorlog-2017.10.29/_search?pretty=true&q=beat.hostname:"DevEVN-21"'
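When building such URLs from code, the querystring should be percent-encoded and additional parameters joined with `&`, not a second `?`. A sketch with Python's standard library (the index name is the one from the curl examples; localhost:9200 is an assumed single-node setup):

```python
from urllib.parse import urlencode

# Build a _search URL with a properly encoded querystring.
# Host and port are assumptions (a local single-node Elasticsearch).
base = "http://localhost:9200/192.168.1.21-account_errorlog-2017.10.29/_search"
params = {"pretty": "true", "q": 'beat.hostname:"DevEVN-21"'}
url = base + "?" + urlencode(params)
print(url)
# Note: parameters after the first are joined with '&', never a second '?'.
```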

Other Elasticsearch query syntax
Elasticsearch full-text search

[figure: Kibana screenshots]

Log Analysis

# nginx log format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

# sample nginx access log line
192.168.1.153 - - [18/Oct/2017:21:48:00 +0800] "GET /ErrorPages/404.html HTTP/1.0" 404 571 "https://devqian.julend.com/static/pageHtml/bindCardCallback.html?view=zhimaCredit&params=XHk1tuyGnXS1wGSpis2PiC9UKCizLIiGIm0eJ%2Fz1lXc8JdGVXxB2Ftg9hFKZjFfwh3w%2FM6UIEpxQ11TTqXf31xNx9N%2FzGMc00rpvf6YpKEJPQFFCL4X9eLQdLJl5KaaGiK8AdNZS38dJDKEzFJP0o1523nCOhjJEtgS0394oLIB2siIqYF7RTVgBaU1xE72m3vmZcvqvgVZUH8ozt8tftXh4Wq5paf647mA%2FZZJOaXabeFEwuwa7dhtVXTT8CBdnOOiDC5WmwBCC%2FnIqKmfTI1gmFvQUobDA9RY4UcMdMY9RixLkhSdQ8k5LtpD2NVuJDLEscZAdfROhyYeEnZ7Ptg%3D%3D&sign=ABeWMdLqNeFUhDYKUJsJfzLolxA%2Fo64Wl8mmgX36yqM5VWHMvB7VcpZJ3AA7WsH%2BTFe6EARuABMe2HupY6DbzuITufa9aPZX9BHbN6dLPMre0n8aHZnr59h%2FC6Us%2FAW11tu2n6l%2F1dSyCo951diQ55en%2BO%2FffRJ1ldFUwzhzNCQ%3D" "Mozilla/5.0 (Linux; Android 7.1.1; OPPO R11t Build/NMF26X; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/55.0.2883.91 Mobile Safari/537.36" "121.238.20.93"

# logstash grok filter for the nginx log format above
grok {
  match => ["message","%{IPV4:remote_addr} [\-] %{USERNAME:remote_user} \[%{DATA:time_local}\] \"(?<request>[^\"]*)\" (?<status>\d+) (?<body_bytes_sent>\d+) \"(?<http_referer>[^\"]*)\" \"(?<http_user_agent>[^\"]*)\" \"%{IPV4:http_x_forwarded_for}"]
}
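A grok pattern like the one above compiles down to a named-group regular expression. The rough Python equivalent below (simplified, not the real grok engine, and run against a shortened sample line) shows the same field extraction:

```python
import re

# Rough Python equivalent of the grok pattern above.
# Grok aliases like %{IPV4:remote_addr} become named groups.
NGINX_RE = re.compile(
    r'(?P<remote_addr>\d{1,3}(?:\.\d{1,3}){3}) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d+) (?P<body_bytes_sent>\d+) "(?P<http_referer>[^"]*)" '
    r'"(?P<http_user_agent>[^"]*)" "(?P<http_x_forwarded_for>[^"]*)"'
)

# Shortened version of the sample access-log line from this document.
line = ('192.168.1.153 - - [18/Oct/2017:21:48:00 +0800] '
        '"GET /ErrorPages/404.html HTTP/1.0" 404 571 "-" "Mozilla/5.0" '
        '"121.238.20.93"')

m = NGINX_RE.match(line)
print(m.groupdict()["status"])        # 404
print(m.groupdict()["remote_addr"])   # 192.168.1.153
```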
[root@ELK ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/geoip.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
    "geoip" => {},
    "ua" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36",
    "type" => "Nginx",
    "clinet" => "192.168.31.105",
    "tags" => [
        [0] "_geoip_lookup_failure"
    ],
    "path" => "/var/log/nginx/access.log",
    "@timestamp" => 2016-12-15T08:13:19.000Z,
    "size" => 0,
    "domain" => "192.168.31.100",
    "@version" => "1",
    "host" => "192.168.31.100",
    "responsetime" => 0.0,
    "status" => "304"
}
A sample multi-line Java (Dubbo) error log, as written by the application:
2017-10-24_15:42:00.408 [DubboMonitorSendTimer-thread-3] ERROR [com.alibaba.dubbo.common.logger.slf4j.Slf4jLogger] [DUBBO] Unexpected error occur at send statistic, cause:
Failed to invoke the method collect in the service com.alibaba.dubbo.monitor.MonitorService. No provider available for the service com.alibaba.dubbo.monitor.MonitorService from registry 192.168.1.21:2181 on the consumer 192.168.1.21 using the dubbo version 2.5.3. Please check if the providers have been started and registered., dubbo version: 2.5.3, current host: 192.168.1.21com.alibaba.dubbo.rpc.RpcException: Failed to invoke the method collect in the service com.alibaba.dubbo.monitor.MonitorService. No provider available for the service com.a
libaba.dubbo.monitor.MonitorService from registry 192.168.1.21:2181 on the consumer 192.168.1.21 using the dubbo version 2.5.3. Please check if the providers have been started and registered. at com.alibaba.dubbo.rpc.cluster.support.AbstractClusterInvoker.checkInvokers(AbstractClusterInvoker.java:246) ~[dubbo-2.5.3.jar:2.5.3]
at com.alibaba.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:55) ~[dubbo-2.5.3.jar:2.5.3]
at com.alibaba.dubbo.rpc.cluster.support.AbstractClusterInvoker.invoke(AbstractClusterInvoker.java:227) ~[dubbo-2.5.3.jar:2.5.3]
at com.alibaba.dubbo.rpc.cluster.support.wrapper.MockClusterInvoker.invoke(MockClusterInvoker.java:72) ~[dubbo-2.5.3.jar:2.5.3]
at com.alibaba.dubbo.rpc.proxy.InvokerInvocationHandler.invoke(InvokerInvocationHandler.java:52) ~[dubbo-2.5.3.jar:2.5.3]
at com.alibaba.dubbo.common.bytecode.proxy8.collect(proxy8.java) ~[na:2.5.3]
at com.alibaba.dubbo.monitor.dubbo.DubboMonitor.send(DubboMonitor.java:113) ~[dubbo-2.5.3.jar:2.5.3]
at com.alibaba.dubbo.monitor.dubbo.DubboMonitor$1.run(DubboMonitor.java:70) ~[dubbo-2.5.3.jar:2.5.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_80]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_80]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_80]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
2017-10-24_15:43:00.413 [DubboMonitorSendTimer-thread-3] INFO [com.alibaba.dubbo.common.logger.slf4j.Slf4jLogger] [DUBBO] Send statistics to monitor zookeeper://192.168.1
.21:2181/com.alibaba.dubbo.monitor.MonitorService?dubbo=2.5.3&interface=com.alibaba.dubbo.monitor.MonitorService&pid=11958&timestamp=1508762187622, dubbo version: 2.5.3, current host: 192.168.1.21
  • multiline
    Multi-line events such as the Java stack trace above can be merged at the Logstash input with the multiline codec. With negate => true and what => "previous", every line that does not start with a timestamp is appended to the previous event:

    input {
      beats {
        port => 5044
        codec => multiline {
          pattern => "^%{YEAR}[/-]%{MONTHNUM}[/-]%{MONTHDAY}[/_]%{TIME} "
          negate => true
          what => "previous"
        }
      }
    }
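The merge rule itself is simple. The toy sketch below (plain Python with a simplified timestamp regex, not the real codec) shows how lines without a leading timestamp get glued onto the previous event:

```python
import re

# Sketch of multiline with negate => true, what => "previous":
# a line that does NOT match the timestamp pattern is appended
# to the previous event. Simplified from the Logstash config above.
TS = re.compile(r"^\d{4}[/-]\d{2}[/-]\d{2}[/_]\d{2}:\d{2}:\d{2}")

def merge_multiline(lines):
    events = []
    for line in lines:
        if TS.match(line) or not events:
            events.append(line)           # timestamp: start a new event
        else:
            events[-1] += "\n" + line     # continuation: append to previous
    return events

log = [
    "2017-10-24_15:42:00.408 ERROR ... Unexpected error occur",
    "com.alibaba.dubbo.rpc.RpcException: Failed to invoke ...",
    "at com.alibaba.dubbo.rpc.cluster.support.FailoverClusterInvoker ...",
    "2017-10-24_15:43:00.413 INFO ... Send statistics to monitor",
]
print(len(merge_multiline(log)))   # 2 -- one error event, one info event
```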
  • Log4J
    Configure the Java application's Log4J settings to use the built-in SocketAppender. Edit the application's log4j.xml and add the following section:

    <appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
      <param name="RemoteHost" value="logstash_hostname" />
      <param name="ReconnectionDelay" value="60000" />
      <param name="LocationInfo" value="true" />
      <param name="Threshold" value="DEBUG" />
    </appender>

Then add the newly defined appender to the root logger; it can coexist with other existing appenders:

<root>
  <level value="INFO"/>
  <appender-ref ref="OTHERPLACE"/>
  <appender-ref ref="LOGSTASH"/>
</root>

If you use a log4j.properties file instead:

log4j.rootLogger=DEBUG, logstash
### SocketAppender ###
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.Port=4560
log4j.appender.logstash.RemoteHost=logstash_hostname
log4j.appender.logstash.ReconnectionDelay=60000
log4j.appender.logstash.LocationInfo=true

Log4J will keep trying to connect to the configured logstash_hostname; once the connection is established, it starts streaming log data.

With the Java application side configured, set up the Logstash receiving end as follows. Port 4560 is the default peer port of the Log4J SocketAppender:

input {
  log4j {
    type => "log4j-json"
    port => 4560
  }
}

  • JSON Event layout
    If you cannot use the SocketAppender and must log to files, Log4J's layout feature, which controls the output format, can help: much as an Nginx log_format can hand-assemble JSON, a layout can write each log record as JSON.

参考:https://github.com/logstash/log4j-jsonevent-layout

The extension jar is provided officially by the Logstash team and can be found via mvnrepository.com:
http://central.maven.org/maven2/net/logstash/log4j/jsonevent-layout/1.7/jsonevent-layout-1.7.jar

Alternatively, add the dependency directly to your project's pom.xml:

<dependency>
  <groupId>net.logstash.log4j</groupId>
  <artifactId>jsonevent-layout</artifactId>
  <version>1.7</version>
</dependency>

Then modify the project's log4j.properties file:

log4j.rootCategory=WARN,RollingLog
log4j.appender.RollingLog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RollingLog.Threshold=TRACE
log4j.appender.RollingLog.File=api.log
log4j.appender.RollingLog.DatePattern=.yyyy-MM-dd
log4j.appender.RollingLog.layout=net.logstash.log4j.JSONEventLayoutV1

If you use a log4j.xml file instead:

<appender name="Console" class="org.apache.log4j.ConsoleAppender">
  <param name="Threshold" value="TRACE" />
  <layout class="net.logstash.log4j.JSONEventLayoutV1" />
</appender>

The resulting file is in Logstash's standard JSON format, and Logstash can read it with the following configuration:

input {
  file {
    codec => json
    path => ["/path/to/log4j.log"]
  }
}

The generated Logstash event looks like this:

{
  "mdc": {},
  "line_number": "29",
  "class": "org.eclipse.jetty.examples.logging.EchoFormServlet",
  "@version": 1,
  "source_host": "jvstratusmbp.local",
  "thread_name": "qtp513694835-14",
  "message": "Got request from 0:0:0:0:0:0:0:1%0 using Mozilla\/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/32.0.1700.77 Safari\/537.36",
  "@timestamp": "2014-01-27T19:52:35.738Z",
  "level": "INFO",
  "file": "EchoFormServlet.java",
  "method": "doPost",
  "logger_name": "org.eclipse.jetty.example.logging.EchoFormServlet"
}
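Under the hood, reading such a file amounts to parsing one JSON object per line into an event. A minimal Python sketch of the same idea (field names taken from the sample event above; not the actual codec implementation):

```python
import json

# Sketch of what a json codec does with a JSON-lines log file:
# each non-empty line becomes one event (a dict of fields).
def read_json_events(lines):
    events = []
    for line in lines:
        line = line.strip()
        if line:
            events.append(json.loads(line))
    return events

sample = ['{"level": "INFO", "method": "doPost", "line_number": "29"}']
events = read_json_events(sample)
print(events[0]["level"])   # INFO
```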

If your Java project logs with Logback rather than Log4J, Logstash also provides a similar extension; modify pom.xml:

<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>4.4</version>
</dependency>


This article originally appeared on the "Jack Wang Blog": http://www.yfshare.vip/2017/10/30/Kibana5使用指南/