ELK Log Collection

Setting up ELK

ELK is a stack of three open-source components: Elasticsearch, Logstash, and Kibana. Developed by Elastic (official site: elastic.co), it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components covering a different part of the pipeline. The main strengths of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search capabilities.
- Simple configuration: the Elasticsearch API is entirely JSON-based, Logstash uses modular configuration, and Kibana's configuration file is simpler still.
- Efficient retrieval: thanks to its design, queries run in real time yet can return second-level responses even across tens of billions of documents.
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
- Polished front end: Kibana's UI is attractive and easy to operate.
Elasticsearch

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distributed deployment for high availability, exposes an API, and can handle large volumes of log data such as nginx, tomcat, and system logs.

Characteristics of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: everything is a document
- Highly available and easily scalable, with clustering, sharding, and replication
- Friendly JSON interface
Deploying Elasticsearch

GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine; written in Java.

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep time in sync across all servers.

Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###Set kernel limits
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
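These limits only apply to new login sessions. A minimal sanity check after logging back in (not in the original post):

# ulimit -n     ##max open files, should now report 500000
# ulimit -u     ##max user processes (nproc)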
###Install the JDK
# apt install -y openjdk-8-jdk

###Install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###Node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master during cluster bootstrap
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
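Note: with bootstrap.memory_lock: true, the service can fail to start when systemd caps the amount of lockable memory. A common fix, shown here as a sketch (standard systemd override, not part of the original post):

# systemctl edit elasticsearch     ##creates an override file for the unit
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service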
###Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:

http://$IP:9200
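The same verification can be done from the shell with curl against standard Elasticsearch endpoints (any of the three node IPs works):

# curl 'http://172.20.22.24:9200/'                        ##node name, cluster name, version banner
# curl 'http://172.20.22.24:9200/_cat/nodes?v'            ##all three nodes should be listed
# curl 'http://172.20.22.24:9200/_cluster/health?pretty'  ##"status" should be "green"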
Logstash

Logstash is a data collection engine with real-time pipelining capability. Through plugins it implements log collection and forwarding, supports log filtering, and can parse both plain logs and custom JSON-formatted logs, finally shipping the processed logs to Elasticsearch.

Deploying Logstash

Logstash is an open-source data collection engine that scales horizontally, and it is the ELK component with the largest number of plugins. It can receive data from many different sources and output it, unified, to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
Environment prep: disable the firewall and SELinux, and install a Java environment.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###Startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##standard input and standard output
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###Start from a config file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###Start with the given config file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the config file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####Output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####Check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
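The index is easier to identify through the API than by its hashed directory name; a quick check with the standard _cat endpoint, using the index name pattern from test.conf:

# curl 'http://172.20.22.24:9200/_cat/indices?v' | grep magedu-m63-test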
kibana

Kibana provides a web interface for viewing the data in Elasticsearch. It queries data mainly through the Elasticsearch API and renders it as front-end visualizations; for data in suitable formats it can also generate tables, bar charts, pie charts, and more.
Deploying kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser.
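Kibana's readiness can also be probed from the shell through its status API (a minimal check; the exact JSON layout varies across Kibana versions):

# curl -s http://172.20.22.24:5601/api/status | grep -o '"state":"[a-z]*"' | head -n1   ##expect "state":"green"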

Stack Management --> Index Patterns --> Create index pattern

Select the time field.

View the log entries for the newly created index.
Collecting tomcat logs

Collect the tomcat access logs and error logs from the tomcat servers for real-time statistics, searched and displayed through Kibana. Each tomcat server runs Logstash to collect its logs, Logstash forwards them to Elasticsearch for analysis, and Kibana presents them on the front end.
Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the tomcat access-log format to JSON (Valve sketch below)
# vim conf/server.xml
....
....
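The Valve element itself was lost from the original post. What follows is a hedged reconstruction of a JSON-format AccessLogValve placed inside the <Host> element of conf/server.xml; the prefix matches the tomcat_access_log*.log path collected below, while the JSON field names are illustrative assumptions:

<!-- inside <Host name="localhost" ...>; field names are assumed, not from the original -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>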
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.30:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the tomcat access-log format to JSON (same server.xml edit as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.26:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash

Install Logstash on the tomcat servers to collect the tomcat and system logs.
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Displaying the logs in Kibana.
Collecting Java logs

Use the codec multiline plugin for multi-line matching. This plugin merges multiple lines into one event, and its what option controls whether a matched line is merged with the lines before it or the lines after it.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
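To make the merge rule concrete, here is an invented fragment of logstash-plain.log (lines are illustrative, not from the original post). With pattern => "^\[", negate => true, and what => "previous", any line that does not start with [ is folded into the event before it, so a multi-line stack trace arrives in Elasticsearch as one document:

[2022-04-14T10:00:00,000][INFO ][logstash.agent] pipeline started      <- new event
[2022-04-14T10:00:01,000][ERROR][logstash.agent] pipeline error        <- new event
java.lang.IllegalStateException: boom                                  <- merged into the ERROR event
    at org.logstash.execution.WorkerLoop.run(WorkerLoop.java:85)       <- merged into the ERROR event
[2022-04-14T10:00:02,000][INFO ][logstash.agent] pipeline restarted    <- new event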
Adding the logstash config file
###Collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###Collect logstash's own logs, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana.
Collecting nginx logs with filebeat, redis, and logstash

Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to redis; finally, logstash2 reads from redis and ships the logs to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
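A quick smoke test, assuming the web1 address, to confirm nginx is serving and to generate a first access-log entry:

# curl -I http://172.20.22.30/               ##expect HTTP/1.1 200 OK
# tail -n1 /usr/local/nginx/logs/access.log  ##the request just made should appear here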
Deploying and configuring logstash
Send the logs collected by filebeat on to redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
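Once events are flowing, the redis lists should start filling. A spot check with redis-cli (host, password, DB numbers, and key names as configured above):

# redis-cli -h 172.20.23.157 -a 12345678 -n 1 llen filebeat-redis-nginx-accesslog   ##should return a growing count
# redis-cli -h 172.20.23.157 -a 12345678 -n 0 llen filebeat-redis-systemlog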
Deploying and configuring filebeat
Filebeat collects the log data and sends it to logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
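Before trusting the pipeline end to end, filebeat's built-in self-checks are worth running (standard filebeat subcommands; output wording may vary by version):

# filebeat test config -c /etc/filebeat/filebeat.yml   ##prints "Config OK"
# filebeat test output -c /etc/filebeat/filebeat.yml   ##dials each host in output.logstash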
logstash server configuration
logstash server 2: 172.20.22.23, sends the logs buffered in redis on to elasticsearch.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
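Within a minute or two the daily indices should appear on the Elasticsearch node configured above; a quick check with the standard _cat endpoint:

# curl 'http://172.20.22.28:9200/_cat/indices?v' | grep filebeat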
Installing and configuring redis
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###Test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
###Check the collected log entries
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin.

Verify the collected log data in Kibana.