ELK Log Collection

Setting up ELK
ELK is a suite composed of three open-source projects: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and presenting logs, with each of the three components serving a different role. The official site is elastic.co. The main advantages of the ELK Stack:

- Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities
- Simple configuration: the Elasticsearch API is entirely JSON, Logstash uses modular configuration, and Kibana's configuration file is simpler still
- Efficient retrieval: thanks to a well-designed engine, queries run in real time yet can return second-level responses across tens of billions of documents
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's interface is visually rich and simple to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distributed deployment for high availability, exposes an API, and can handle large volumes of log data such as nginx, tomcat, and system logs.
Key features of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time document storage
- Document-oriented: every object is a document
- High availability and easy scaling, with clustering, sharding, and replication
- Friendly interfaces with JSON support
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)
On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the time synchronized across all servers.

Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###Set resource limit parameters
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
###Install the JDK
# apt install -y openjdk-8-jdk
###Install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###Node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock memory at startup so data is never written to swap
network.host: 172.20.22.24       #listen address
http.port: 9200                  #listen port
###discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible for master election when the cluster bootstraps
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
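Note: with bootstrap.memory_lock: true set, the node can fail to start if the service is not allowed to lock memory. A minimal sketch of the usual systemd override for the .deb install (the drop-in path follows the standard package layout):

# mkdir -p /etc/systemd/system/elasticsearch.service.d
# cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
# systemctl daemon-reload
# systemctl restart elasticsearch.service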
###Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
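The same check can be scripted with curl against the cluster health API (node1's address is used here, but any node works); once all three nodes have joined, the reported status should be green:

# curl 'http://172.20.22.24:9200/_cluster/health?pretty'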

Logstash
Logstash is a data collection engine with real-time pipelining. Through plugins it can collect and forward logs, filter them, and parse both plain logs and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.
Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally, and it has the largest plugin ecosystem of any ELK component. It can accept data from many different sources and deliver a unified output to one or more destinations.
https://github.com/elastic/logstash #GitHub
Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and SELinux, and install a Java environment.
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###Start-up test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##read stdin, write stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###Start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###Start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####Output to Elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####Check the collected data on the Elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
Kibana
Kibana provides a web interface for viewing data in Elasticsearch. It searches the data primarily through the Elasticsearch API and visualizes the results in the front end, and it can render appropriately formatted data as tables, bar charts, pie charts, and so on.
Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser

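Kibana can also be checked from the command line through its status API (available in Kibana 7.x); the overall state in the response should be green:

# curl -s http://172.20.22.24:5601/api/status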
Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries under the newly created index pattern
Collecting Tomcat logs
Collect the Tomcat servers' access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana then presents them in the front end.
Deploying Tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON
# vim conf/server.xml
....
....
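The Valve element that belongs between the '....' markers goes inside the <Host> section of server.xml. A representative sketch of an AccessLogValve that emits JSON; the prefix matches the tomcat_access_log files tailed below, while the exact field list is illustrative and can be adjusted:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>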
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.30:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON (the same Valve change as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.26:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying Logstash
Install Logstash on the Tomcat servers to collect the Tomcat and system logs.
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
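Once events are flowing, the new indices can be confirmed from any Elasticsearch node:

# curl 'http://172.20.22.24:9200/_cat/indices/elk-*?v'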
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Displaying the logs in Kibana
Collecting Java logs
Use the codec multiline plugin to match multi-line events. The plugin merges several lines into a single event, and its what option specifies whether a matched line is merged with the lines before it or the lines after it.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
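The multiline pattern can be tried interactively before wiring it into a file input. A minimal sketch that folds every line not beginning with '[' into the previous event (paste a Java stack trace to watch the lines merge):

# /usr/share/logstash/bin/logstash -e 'input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } } output { stdout {} }'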

Add the Logstash configuration file
###Collect Logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
###Collect Logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana
Collecting nginx logs with Filebeat, Redis, and Logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to Redis; finally logstash2 reads from Redis and sends the events to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed
web2: 172.20.22.26, with nginx, filebeat, and logstash deployed
logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
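A quick check that nginx is up and writing its access log (shown on web1; the paths follow the --prefix used above):

# curl -I http://172.20.22.30
# tail -1 /usr/local/nginx/logs/access.log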
Deploying and configuring Logstash
Send the log entries collected by Filebeat on to Redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring Filebeat
Collect the log entries with Filebeat and send them to Logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
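Filebeat can validate its configuration and its connection to the Logstash endpoints before you rely on it:

# filebeat test config
# filebeat test output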
Logstash server configuration
logstash server 2: 172.20.22.23, which ships the log entries buffered in Redis to Elasticsearch.
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
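After the restart, the per-project indices should appear on the Elasticsearch node named in the output section:

# curl 'http://172.20.22.28:9200/_cat/indices/filebeat-*?v'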
Installing and configuring Redis
Redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####Change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###Test the Redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG
###Verify the collected log entries
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
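Besides keys *, the length of each list shows whether the downstream logstash is keeping up; a length that stays near zero means events are consumed as fast as they arrive:

127.0.0.1:6379> llen filebeat-redis-systemlog
127.0.0.1:6379> select 1
127.0.0.1:6379[1]> llen filebeat-redis-nginx-accesslog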
Verify the generated indices with the head plugin

Verify the collected log entries in Kibana
