ELK Log Collection

Building the ELK Stack
ELK is a suite of three open-source projects: Elasticsearch, Logstash, and Kibana. Developed by Elastic (elastic.co), it is a complete enterprise-grade solution for collecting, analyzing, and presenting logs, with each of the three components covering a different part of the pipeline. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search capabilities
- Simple configuration: the Elasticsearch API is JSON throughout, Logstash is configured through modules, and Kibana's configuration file is simpler still
- Efficient retrieval: thanks to a well-thought-out design, queries run in real time yet can answer over tens of billions of documents within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out flexibly
- Polished front end: Kibana's UI looks good and is easy to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It offers real-time full-text search, distributed deployment for high availability, and an API interface, and it can handle large volumes of log data from sources such as nginx, tomcat, and system logs.

Elasticsearch features:

- Real-time search and real-time analytics
- Distributed architecture with real-time file storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with clustering, sharding, and replication
- Friendly interface with JSON support
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep the clocks of all servers in sync.

Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
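The post only shows the Ubuntu time-sync steps below; on CentOS the firewall/SELinux preparation would look roughly like this (a sketch, not from the original post):

###centos
# systemctl disable --now firewalld
# setenforce 0        ##immediate but temporary; set SELINUX=disabled in /etc/selinux/config to persist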
###ubuntu
# apt install -y ntpdate
# rm -f /etc/localtime
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# hwclock --systohc
# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
*                soft        nofile                500000
*                hard        nofile                500000
# vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     4096
elasticsearch soft    nproc     unlimited
root       soft    nproc     unlimited
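The new limits only apply to fresh logins; a quick sanity check after logging in again (an assumed verification step, not in the original post):

# ulimit -n
500000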
###install jdk
# apt install -y openjdk-8-jdk

###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic        #cluster name
node.name: node1                 #this node's name within the cluster
path.data: /data/elasticsearch   #data directory
path.logs: /data/elasticsearch   #log directory
bootstrap.memory_lock: true      #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24       #listen IP
http.port: 9200                  #listen port
###hosts used for node discovery in the cluster
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify from a browser:
http://$IP:9200
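The same check can be scripted with curl against the standard Elasticsearch endpoints, e.g. for node 1:

# curl http://172.20.22.24:9200
# curl http://172.20.22.24:9200/_cluster/health?pretty   ##"status" : "green" once all three nodes have joined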
Logstash
Logstash is a data collection engine with real-time pipelining capability. Through its plugins it collects and forwards logs, supports filtering, and parses both plain and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.

Deploying Logstash
Logstash is an open-source data collection engine that scales horizontally. It is the ELK component with the most plugins; it can receive data from many different sources and deliver the unified output to one or more destinations.

https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic

Environment prep: disable the firewall and SELinux, and install a Java environment

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###smoke test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'   ##stdin to stdout
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###start with the given configuration file
# /usr/share/logstash/bin/logstash -f test.conf -t   ##check the configuration syntax
# /usr/share/logstash/bin/logstash -f test.conf

####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
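On disk the index only appears under its UUID; the standard _cat API lists it by name:

# curl 'http://172.20.22.24:9200/_cat/indices/magedu-*?v'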
kibana
Kibana gives Elasticsearch a web interface for browsing data. It queries data through the Elasticsearch API and visualizes it in the front end, and for suitably formatted data it can also generate tables, bar charts, pie charts, and so on.
Deploying kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
Browse to http://172.20.22.24:5601
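Kibana also exposes a status endpoint, useful for confirming the service is up before opening the browser:

# curl http://172.20.22.24:5601/api/status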

Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log entries of the newly created index
Collecting tomcat logs
Collect the tomcat access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis, and Kibana presents them in the front end.

Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
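The XML itself did not survive the forum formatting. A Valve along these lines inside the <Host> element would produce the tomcat_access_log.*.log files referenced below in JSON form (the field set here is an assumption; %h, %t, %s and friends are standard AccessLogValve placeholders):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Referer&quot;:&quot;%{Referer}i&quot;,&quot;UserAgent&quot;:&quot;%{User-Agent}i&quot;}"/>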
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json, same Valve change as on tomcat1
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.26:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash
Install Logstash on the tomcat servers to collect the tomcat and system logs

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:
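Once events flow, the new indices should be visible on any cluster node (a standard _cat query, not part of the original post):

# curl 'http://172.20.22.24:9200/_cat/indices/elk-*?v'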

####tomcat2, 172.20.22.26
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
Display in Kibana
Collecting Java logs
Use the multiline codec plugin to merge multiple lines into a single event; its what option controls whether a matching line is merged with the previous line or the following one.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
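With the configuration below (pattern "^\[", negate true, what previous), every line that does not start with "[" is appended to the event before it, so a multi-line Java stack trace such as this sample (illustrative only) reaches Elasticsearch as a single event:

[2022-04-14T10:00:01,123][ERROR][logstash.javapipeline    ] worker error
java.lang.IllegalStateException: ...
        at org.logstash.execution.WorkerLoop.run(WorkerLoop.java:85)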
Adding the logstash configuration file
###collect logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
   }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in Kibana
Collecting nginx logs with filebeat, redis, and logstash
Filebeat collects the logs and sends them to logstash1; logstash1 forwards them to redis; finally logstash2 reads from redis and ships to Elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration
Deploying nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
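A quick check that nginx serves and logs requests, mirroring the tomcat access tests above (not in the original post):

###access test
# curl -I http://172.20.22.30
# tail -n1 /usr/local/nginx/logs/access.log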
Deploying and configuring logstash
Forward the log entries collected by filebeat to redis

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat
Collect the log data with filebeat and forward it to logstash

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true
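filebeat ships with test subcommands that validate the configuration file and the connection to the logstash outputs before the service is started:

# filebeat test config
# filebeat test output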
# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
logstash server configuration
logstash server 2: 172.20.22.23, reads the logs buffered in redis and ships them to elasticsearch

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
Installing and configuring redis
redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log entries
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the head plugin
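Without the head plugin, the same indices can be listed with the standard _cat API against the node that redis-to-es.conf writes to:

# curl 'http://172.20.22.28:9200/_cat/indices/filebeat-*?v'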
Verify the collected logs in Kibana