ELK Log Collection

[复制链接]

1万

主题

1万

帖子

5万

积分

论坛元老

Rank: 8Rank: 8

积分
56206
Posted on 2022-9-5 08:10:05
Setting up ELK

ELK is a combination of three open-source projects: Elasticsearch, Logstash, and Kibana. Developed by Elastic, it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs; each of the three components serves a different role. The official domain is elastic.co. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search
- Simple configuration: the Elasticsearch API is entirely JSON, Logstash is configured through modules, and the Kibana configuration file is simpler still
- Efficient retrieval: thanks to its design, queries run in real time yet can answer in seconds even across tens of billions of documents
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's UI is attractive and easy to operate

Elasticsearch

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It offers real-time full-text search, distributed deployment for high availability, and an API, and can handle large volumes of log data such as nginx, tomcat, and system logs.

Features of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with clustering, sharding, and replication
- Friendly JSON-based interface
Deploying Elasticsearch

GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine; written in Java.

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the time synchronized across all servers.

Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
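The limits above only apply to sessions started after the change; a quick way to confirm what the current session actually got is to print the soft open-files limit (the 500000 target is only reached after logging in again on a configured host):

```shell
# Print the soft limit on open file descriptors for this shell session.
# On an unconfigured host this will typically show a default such as 1024.
ulimit -n
```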
###install jdk
# apt install -y openjdk-8-jdk

###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup so data is never swapped out
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster bootstraps
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser

http://$IP:9200
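The same verification can be scripted over the REST API. The curl line below is a sketch against one of the nodes configured above and is not executed here; a healthy three-node cluster reports "status":"green". The rest of the block parses that field out of a saved sample response:

```shell
# Sketch (not run here): query cluster health on a node from this guide
#   curl -s http://172.20.22.24:9200/_cluster/health?pretty
# Sample of the kind of JSON such a query returns:
response='{"cluster_name":"m63-elastic","status":"green","number_of_nodes":3}'
# Extract the "status" field with sed
status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```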
Logstash

Logstash is a data-collection engine with real-time pipelining. Through plugins it can collect and forward logs, filter them, and parse both plain-text and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.

Deploying Logstash

Logstash is an open-source data-collection engine that scales horizontally. It has the most plugins of any ELK component; it can ingest data from many different sources and send the unified output to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic

Environment preparation: disable the firewall and SELinux, and install a Java environment.
# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###start a quick test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin to stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###start with the specified configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check configuration file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
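The `%{+YYYY.MM.dd}` suffix in the index name above makes Logstash create one index per day, based on each event's @timestamp. The resulting daily name can be imitated locally with date(1):

```shell
# Build the index name Logstash would use for an event with today's date
index="magedu-m63-test-$(date +%Y.%m.%d)"
echo "$index"
```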
kibana

Kibana provides a web interface for viewing the data in Elasticsearch. It queries data through the Elasticsearch API and renders front-end visualizations; for suitably formatted data it can also generate tables, bar charts, pie charts, and so on.

Deploying kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Open http://172.20.22.24:5601 in a browser
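Before opening the UI, the service can be probed and the four settings above sanity-checked. The curl line is a sketch against this guide's IP and is not executed here; the rest of the block counts the expected keys in a sample copy of the config:

```shell
# Sketch (not run here): probe the Kibana status endpoint
#   curl -s -I http://172.20.22.24:5601/api/status
# Write a sample copy of the settings shown above and count the keys
cat > /tmp/kibana-sample.yml <<'EOF'
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
EOF
grep -Ec '^(server\.port|server\.host|elasticsearch\.hosts|i18n\.locale):' /tmp/kibana-sample.yml
```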

Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries for the created index
Collecting tomcat logs

Collect the access logs and error logs of the tomcat servers for real-time statistics, searchable and displayed in Kibana. Each tomcat server runs logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana presents them in the front end.

Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###test access
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###test access
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
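The point of switching the access log to JSON is that each request becomes one self-describing object that Logstash can parse without a grok pattern. The field names below are an assumption for illustration (the actual names depend on the Valve pattern configured in server.xml, which is elided above):

```shell
# A hypothetical JSON access-log line, field names assumed for illustration
line='{"clientip":"172.20.22.1","request":"GET /myapp/ HTTP/1.1","status":"200"}'
# Crude extraction of one field, as a parser would do structurally
status=$(printf '%s' "$line" | sed -n 's/.*"status":"\([0-9]*\)".*/\1/p')
echo "status=$status"
```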
Deploying logstash

Install logstash on the tomcat servers to collect the tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service

Display in Kibana
Collecting Java logs

Use the multiline plugin of codec for multi-line matching. This plugin merges multiple lines into a single event, and its what option specifies whether a matching line is merged with the preceding lines or the following ones.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
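A sketch of what the multiline codec does with pattern => "^\[", negate => true, what => "previous": every line that does NOT start with "[" is appended to the previous event, so a Java stack trace stays one event instead of many. Simulated here with awk over a sample log:

```shell
# Sample Java-style log: one timestamped line followed by a stack trace
cat > /tmp/sample-java.log <<'EOF'
[2022-04-14 10:00:00] ERROR something failed
java.lang.NullPointerException
    at com.example.App.main(App.java:10)
[2022-04-14 10:00:01] INFO recovered
EOF
# Lines starting with "[" open a new event; all others join the previous one
merged=$(awk '/^\[/ { if (buf != "") print buf; buf = $0; next } { buf = buf " | " $0 } END { if (buf != "") print buf }' /tmp/sample-java.log)
echo "$merged"
```

Four input lines collapse into two events, mirroring what Elasticsearch would receive.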
Add the logstash configuration file
###collect logstash's own logs, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own logs, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

View the collected logs in Kibana
Collecting nginx logs with filebeat, redis, and logstash

Use filebeat to collect the logs and send them to logstash1; logstash1 forwards them to redis; finally logstash2 reads from redis and sends the logs to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157

Configuration on the nginx servers
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
Deploying and configuring logstash

Forward the logs collected by filebeat to redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
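In the conditionals above, [fields][project] is the custom field that filebeat attaches under "fields" and that selects the redis key. A sample event (its exact shape here is an assumption for illustration) and a crude extraction of that routing key:

```shell
# Hypothetical event as it might arrive from filebeat (shape assumed)
event='{"message":"GET / HTTP/1.1","fields":{"project":"filebeat-nginx-accesslog"}}'
# Pull out the routing field the conditionals test against
project=$(printf '%s' "$event" | sed -n 's/.*"project":"\([^"]*\)".*/\1/p')
# The config maps project "filebeat-X" to redis key "filebeat-redis-X"
echo "routed to redis key: filebeat-redis-${project#filebeat-}"
```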
Deploying and configuring filebeat

Collect the logs with filebeat and send them to logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
logstash server configuration

logstash server 2: 172.20.22.23, which forwards the logs buffered in redis to elasticsearch.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
Installing and configuring redis

redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected logs
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
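With data_type => "list", the logstash redis output pushes each event onto the key and the redis input on the other side pops events off, so the list acts as a FIFO buffer between the two logstash tiers. The redis-cli lines below are a sketch of that flow (not executed here), followed by a minimal local imitation of the queue using a file:

```shell
# Sketch (not run here) of the buffer as seen from redis-cli:
#   RPUSH filebeat-redis-systemlog '<event json>'   # producer side (logstash1)
#   LPOP  filebeat-redis-systemlog                  # consumer side (logstash2)
# Local imitation: the producer appends, the consumer removes the oldest entry
printf 'event1\nevent2\n' > /tmp/queue
first=$(head -n1 /tmp/queue)
sed -i '1d' /tmp/queue
echo "consumed: $first, remaining: $(wc -l < /tmp/queue)"
```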
Verify the generated indices with the head plugin

Verify the collected logs in Kibana
