Setting up ELK
ELK is a suite of three open-source projects: Elasticsearch, Logstash, and Kibana. Developed by Elastic (official site: elastic.co), it is a complete enterprise-grade solution for collecting, analyzing, and visualizing logs, with each of the three components serving a different role. The main advantages of the ELK stack:

- Flexible processing: Elasticsearch performs real-time full-text indexing and offers powerful search
- Simple configuration: the Elasticsearch API is entirely JSON, Logstash is configured through modules, and Kibana's configuration file is simpler still
- Efficient retrieval: thanks to a well-designed architecture, queries run in real time yet can answer over tens of billions of documents within seconds
- Linear cluster scaling: both Elasticsearch and Logstash scale out linearly
- Polished front end: Kibana's UI is attractive and easy to operate
Elasticsearch
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, is distributed and highly available, exposes a RESTful API, and can handle large volumes of log data such as nginx, tomcat, and system logs.

Key characteristics of Elasticsearch:

- Real-time search and real-time analytics
- Distributed architecture with real-time document storage
- Document-oriented: every object is a document
- Highly available and easy to scale, with clustering, sharding, and replication
- Friendly JSON-based API
Deploying Elasticsearch
GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine (written in Java)

On CentOS, stop the firewall and disable SELinux; on Ubuntu, stop the firewall. Keep time synchronized across all servers.

Server 1: 172.20.22.24
Server 2: 172.20.22.27
Server 3: 172.20.22.28
. r1 J" |" k( G$ n: f1 r# ^
) U. t$ Z+ T2 X/ C###ubuntu
% K; ~; I( a5 m8 S# apt install -y ntpdate' V1 Y' C% B/ M3 ~9 p9 v
# rm -f /etc/localtime+ i- Z6 N P; }5 X I
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
, f9 B( ], }# `% E# hwclock --systohc0 A4 q( E: g9 S$ H& a g. f
# ntpdate -u ntp1.aliyun.com
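To keep the clocks aligned over time, a periodic resync can be scheduled; this cron entry is an assumed addition (not in the original steps), reusing the ntp1.aliyun.com source from above:

###optional: resync every 30 minutes via cron (assumed addition)
# echo '*/30 * * * * root ntpdate -u ntp1.aliyun.com &> /dev/null' >> /etc/crontab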
###Set resource limits
# vim /etc/security/limits.conf
* soft nofile 500000
* hard nofile 500000
# vim /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
elasticsearch soft nproc unlimited
root soft nproc unlimited
###Install the JDK
# apt install -y openjdk-8-jdk
& m4 r! s: w( a4 h# a) v
$ R% D5 t* S* a###每个节点都安装
( x: K5 O$ W/ D8 P6 R# ls -lrt elasticsearch-7.12.1-amd64.deb
: x# S, ^% V& `2 _ W2 ]' N# dpkg -i elasticsearch-7.12.1-amd64.deb
5 O0 e& l9 ]2 f###节点1配置文件
3 R9 }( a, }: j/ d# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic #cluster name
node.name: node1 #this node's name within the cluster
path.data: /data/elasticsearch #data directory
path.logs: /data/elasticsearch #log directory
bootstrap.memory_lock: true #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24 #listen IP
http.port: 9200 #listen port
###discovery list of the cluster's nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster first forms
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
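Note: with bootstrap.memory_lock: true, the packaged systemd unit can fail to start if it is not allowed to lock memory. A common fix, sketched here as an assumption since the original does not show it, is a systemd override:

# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload && systemctl restart elasticsearch.service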
###Node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###Node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser:
http://$IP:9200
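The same check works from the command line; for example, against any of the three nodes:

# curl http://172.20.22.24:9200
# curl http://172.20.22.24:9200/_cluster/health?pretty ##"status" should be "green"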
Logstash
Logstash is a data-collection engine with real-time pipelining. Its plugins implement log collection and forwarding, support filtering, and can parse both plain and custom JSON-formatted logs, finally shipping the processed events to Elasticsearch.

Deploying Logstash
Logstash is an open-source data-collection engine that scales horizontally, and it has the richest plugin ecosystem of the ELK components. It can ingest data from many different sources and write the unified output to one or more destinations.
https://github.com/elastic/logstash #GitHub

Elastic Stack and Product Documentation | Elastic
Prerequisites: stop the firewall, disable SELinux, and install a Java runtime.

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###Startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}' ##standard input to standard output
hello world!~
{
      "@version" => "1",
    "@timestamp" => 2022-04-13T06:16:32.212Z,
          "host" => "jenkins-slave",
       "message" => "hello world!~"
}
###Start from a config file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}

###Run with the config file
# /usr/share/logstash/bin/logstash -f test.conf -t ##check the config file syntax
# /usr/share/logstash/bin/logstash -f test.conf

####Output to Elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3

####Check the collected data on the Elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
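The new index can also be confirmed through the Elasticsearch API rather than the data directory, for example:

# curl 'http://172.20.22.24:9200/_cat/indices/magedu-*?v'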
Kibana
Kibana provides a web interface for viewing the data in Elasticsearch. It searches through the Elasticsearch API and visualizes the results on the front end, and for suitably formatted data it can also generate tables, bar charts, pie charts, and more.

Deploying Kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
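Optionally confirm that Kibana is listening before opening the browser:

# ss -tnl | grep 5601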
Open http://172.20.22.24:5601 in a browser.

Stack Management --> Index Patterns --> Create index pattern

Select the time field.

View the log entries under the newly created index pattern.
Collecting Tomcat logs
Collect the Tomcat servers' access logs and error logs for real-time statistics, searchable and displayed in Kibana. Each Tomcat server runs Logstash to collect its logs and forward them to Elasticsearch for analysis; Kibana then presents them on the front end.

Deploying Tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###Access test
# curl http://172.20.22.30:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log

####tomcat2, 172.20.22.26
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###Change the Tomcat access-log format to JSON (the same Valve change as on tomcat1)
# vim conf/server.xml
....
....
# mkdir /usr/local/tomcat/webapps/myapp
0 ~& Q( ?; S0 s0 ?" s# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html, Z1 _# P3 V6 Z2 K6 X- D5 d
# ./bin/catalina.sh start- i; ? }! o" A/ V& k. F
) Y7 A+ c3 w$ S' G# d1 p3 n' V###访问测试8 y3 u" l7 _7 i- r, ]+ D
# curl http://172.20.22.26:8080/myapp/
###Check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log

Deploying Logstash
Install Logstash on each Tomcat server to collect the Tomcat and system logs.

####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
% L; b" b* D& F0 }.... L( s( _2 g) L
User=root! G8 m, ?0 W- l P/ j& j
Group=root% J! X; }% d+ H
...6 R0 d0 B. Q b& J1 ~" y
# cd /etc/logstash/conf.d
9 V; r" l" {/ C2 {" H- x# cat tomcat.conf! B- d) `1 y) E
input {
+ M& |7 T, c& _, x file {" I+ J# O+ ~5 o2 c" H! z
path => "/usr/local/tomcat/logs/tomcat_access_log*.log"0 x. D% B0 A( r3 x+ b/ L, d, h! D
type => "tomcat-log"
+ g$ w6 ]+ s. R- y6 W G0 \9 C' c start_position => "beginning"; \7 ^* P3 C' p% G+ @ o
stat_interval => "3"
. _4 \5 J5 x" n }
1 W' N* r7 J0 e file {
: `+ K$ a& x/ a4 H% o' N1 Y path => "/var/log/syslog"* _* ]2 l6 U) k4 t' M4 L5 D
type => "systemlog"
/ R7 I+ _7 i1 t3 {' S/ V start_position => "beginning"7 r7 D/ G; r9 c$ F
stat_interval => "3"4 z* n: ?( Q, Y6 H" k
}
7 m8 c' @3 w% L6 o}
9 ?. ?8 g% K+ n: {+ V( Eoutput {3 [6 |! B! m/ t! g8 J" _
if [type] == "tomcat-log" {
8 b% `/ i+ d) l x6 P# e5 E% d& o elasticsearch {& u0 E3 L! Z/ N e) T! l5 H
hosts => ["172.20.22.24:9200","172.20.22.27:9200"]- Y% R1 F$ |" _
index => "elk-tomcat-%{+YYYY.MM.dd}"2 _5 C/ v- b) Q1 @
}}
' ~, s4 B5 G% w) M( H$ x5 j if [type] == "systemlog" {: {) k. b& A$ l0 c; A
elasticsearch {
" o1 Y& A& W3 T0 F6 {* S' b hosts => ["172.20.22.27:9200","172.20.22.27:9200"]
) y8 N5 A5 h8 Z/ s index => "elk-syslog-%{+YYYY.MM.dd}"
4 U/ U) ^% ]$ T0 J8 n# [* R }}
3 `8 u* K: l; H- g2 J) [( Q9 R1 h6 m}4 m, b2 v) `+ `% R$ K; t
) J% S, }- ~" A& z# /usr/share/logstash/bin/logstash -f tomcat.conf -t$ S0 o6 S7 K5 ^, \
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
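To confirm that events are arriving, check for the new indices on any Elasticsearch node, for example:

# curl 'http://172.20.22.24:9200/_cat/indices/elk-*?v'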
0 v0 Q" B+ h9 F* w! h
####tomcat2,172.20.22.267 P8 A. a# G% s+ _3 z
# ls -lrt logstash-7.12.1-amd64.deb( L1 @$ J, e0 p4 S' v, \% n
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# systemctl daemon-reload
# systemctl start logstash.service
View the results in Kibana.

Collecting Java logs
Use the multiline plugin of codec to merge multiple lines (such as a Java stack trace) into a single event; its what option controls whether a matched line is merged with the previous or the following line. In the configs below, pattern => "^\[" together with negate => true and what => "previous" means that any line not starting with [ is appended to the line before it.

Multiline codec plugin | Logstash Reference [8.1] | Elastic

Add the Logstash config file
###Collect Logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}
7 `4 ]; U3 p+ h* b+ K" ^. o
# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###Collect Logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
  }}
}
output {
  if [type] == "logstash-log" {
    elasticsearch {
      hosts => ["172.20.22.24"]
      index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
Check the collected logs in Kibana.
& I3 Y' G: u: ^) s" B & c) m U+ A8 G" a, V2 j5 X* S
o6 ?" x! c9 x1 c9 \; Sfilebeat结合redis、logstash收集nginx日志
( E9 I7 ]2 m! J5 R, `使用filebeat收集日志发送到logstash1,再由logstash1发送到redis,最后再由logstash2发送到elasticsearch. |8 i0 c0 G3 ?: c& ~5 ^$ A
( O: `$ n0 Q# i7 X6 p& x
web1:172.20.22.30,部署好nginx、filebeat、llogstash
8 \0 Y" Z) N' ~8 K7 g# J4 |
' e1 k4 e- C$ `web2:172.20.22.26,部署好nginx、filebeat、llogstash! h- Y2 ~& f3 [# R: \" `9 _
p; s, t$ V: V2 u2 }( ^
logstash服务器2:172.20.22.23,redis服务器:172.20.23.157
0 U8 B5 b2 e$ f% A ) Z* x r6 s& \6 u% [9 ?0 ^
nginx服务器相关配置 9 x- Y( R3 o' C1 w0 d+ E
部署nginx
# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
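A quick check that nginx is serving requests (and writing access-log entries for filebeat to pick up later):

# curl -I http://172.20.22.30/
# tail -n1 /usr/local/nginx/logs/access.log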
Deploying and configuring Logstash
Send the log data collected by filebeat on to Redis.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
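Once filebeat (configured next) starts shipping, the Redis lists can be spot-checked from any host with redis-cli, for example:

# redis-cli -h 172.20.23.157 -a 12345678 -n 1 llen filebeat-redis-nginx-accesslog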
Deploying and configuring filebeat
Collect log data with filebeat and send it to Logstash.

# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true
# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
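filebeat can validate its configuration and output connectivity before you rely on it:

# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml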
Logstash server configuration
logstash server 2: 172.20.22.23, ships the logs buffered in Redis to Elasticsearch.

# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
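If the whole pipeline is working, the filebeat-* indices should appear on the Elasticsearch node, for example:

# curl 'http://172.20.22.28:9200/_cat/indices/filebeat-*?v'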
Installing and configuring Redis
Redis server: 172.20.23.157

# yum install -y redis
# vim /etc/redis.conf
####Modify the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
, H8 c" i7 o1 W###测试连接redis
, T8 a% R5 F) n4 L3 B- O+ ]# redis-cli 6 Q! y/ m7 h3 j* U3 O
127.0.0.1:6379> auth 123456785 U) d& i5 |2 a. q4 j3 V ?
OK, m2 u; O! T+ |8 y. d
127.0.0.1:6379> ping
v) G% o" v8 x8 ~8 s5 S$ u: KPONG, J/ E/ U& X, a& F) t1 v" l

###Verify the collected log data
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices with the elasticsearch-head plugin.

Verify the collected logs in Kibana.