Setting up ELK
ELK is a suite of three open-source projects: elasticsearch, logstash, and kibana. Developed by the company Elastic (official site: elastic.co), it is a complete enterprise-grade solution for log collection, analysis, and presentation, with each of the three components covering a different part of the job. The main strengths of the ELK stack:

Flexible processing: elasticsearch indexes full text in real time and offers powerful search.
Simple configuration: elasticsearch's API is pure JSON, logstash is configured through modules, and kibana's configuration file is simpler still.
Efficient retrieval: thanks to a well-designed engine, queries run in real time yet respond within seconds even across tens of billions of documents.
Linear cluster scaling: both elasticsearch and logstash scale out linearly.
Polished frontend: kibana's UI is attractive and easy to operate.
Elasticsearch

elasticsearch is a highly scalable open-source full-text search and analytics engine. It provides real-time full-text search over data, supports distributed operation for high availability, exposes an API, and can process large volumes of log data such as nginx, tomcat, and system logs.

Features of elasticsearch:

Real-time search and real-time analytics
Distributed architecture with real-time file storage
Document-oriented: every object is a document
Highly available and easily scalable, with clustering, sharding, and replication
Friendly API with JSON support
Deploying elasticsearch

GitHub - elastic/elasticsearch: Free and Open, Distributed, RESTful Search Engine; developed in Java

On CentOS, disable the firewall and SELinux; on Ubuntu, disable the firewall. Keep all servers' clocks synchronized.

Server 1: 172.20.22.24

Server 2: 172.20.22.27

Server 3: 172.20.22.28
" n" T5 ^% x& i- C( c3 ~###ubuntu
/ x7 c1 P# ]* Q" n# apt install -y ntpdate9 Q, s' d4 P0 X- ^) L: R5 @
# rm -f /etc/localtime4 Q$ `, z4 x$ h& f# l1 S" x
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
, b3 L+ w7 r% |% d( J# hwclock --systohc
: R; H, \: O1 q, U e! V# ntpdate -u ntp1.aliyun.com
###set kernel parameters
# vim /etc/security/limits.conf
*                soft    nofile    500000
*                hard    nofile    500000
# vim /etc/security/limits.d/20-nproc.conf
*                soft    nproc     4096
elasticsearch    soft    nproc     unlimited
root             soft    nproc     unlimited
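
The nofile limit only applies to sessions started after logging back in; a quick check (a sketch, expecting the value configured above):

# ulimit -n
500000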
###install jdk
# apt install -y openjdk-8-jdk

###install on every node
# ls -lrt elasticsearch-7.12.1-amd64.deb
# dpkg -i elasticsearch-7.12.1-amd64.deb
###node 1 configuration file
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic            #cluster name
node.name: node1                     #this node's name within the cluster
path.data: /data/elasticsearch       #data directory
path.logs: /data/elasticsearch       #log directory
bootstrap.memory_lock: true          #lock enough memory at startup to keep data out of swap
network.host: 172.20.22.24           #listen IP
http.port: 9200                      #listen port
###discovery list of cluster nodes
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
###nodes eligible to be elected master when the cluster first bootstraps
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
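
With bootstrap.memory_lock: true, elasticsearch refuses to start if it cannot lock its heap; should the service fail with a memory-lock error, raising the systemd limit is the usual remedy (a sketch of an assumed extra step, not part of the original transcript):

# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
# systemctl daemon-reload
# systemctl restart elasticsearch.service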
###node 2
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node2
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.27
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
###node 3
# grep '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: m63-elastic
node.name: node3
path.data: /data/elasticsearch
path.logs: /data/elasticsearch
network.host: 172.20.22.28
http.port: 9200
discovery.seed_hosts: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
cluster.initial_master_nodes: ["172.20.22.24", "172.20.22.27","172.20.22.28"]
action.destructive_requires_name: true
# mkdir /data/elasticsearch -p
# chown -R elasticsearch. /data/elasticsearch
# systemctl start elasticsearch.service
Verify in a browser

http://$IP:9200
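
The same verification works from the shell; every node should answer, and _cat/health should report a green three-node cluster (standard elasticsearch APIs):

# curl http://172.20.22.24:9200
# curl http://172.20.22.24:9200/_cat/health?v
# curl http://172.20.22.24:9200/_cat/nodes?v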
Logstash

Logstash is a data collection engine with real-time pipelining. Through plugins it collects and forwards logs, supports filtering, parses both plain and custom JSON log formats, and finally ships the processed logs to elasticsearch.

Deploying Logstash

Logstash is an open-source data collection engine that scales horizontally, and it is the most plugin-rich component in the ELK stack; it can receive data from many different sources and send the unified output to one or more destinations.

https://github.com/elastic/logstash    #GitHub

Elastic Stack and Product Documentation | Elastic
Environment preparation: disable the firewall and selinux, and install a Java runtime

# apt install -y openjdk-8-jdk
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
###startup test
# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {}}'    ##stdin in, stdout out
hello world!~
{
      "@version" => "1",
   "@timestamp" => 2022-04-13T06:16:32.212Z,
         "host" => "jenkins-slave",
      "message" => "hello world!~"
}
###start from a configuration file
# cd /etc/logstash/conf.d/
# cat test.conf
input {
  stdin {}
}
output {
  stdout {}
}
###start with the specified config file
# /usr/share/logstash/bin/logstash -f test.conf -t    ##check the config file syntax
# /usr/share/logstash/bin/logstash -f test.conf
####output to elasticsearch
# cat test.conf
input {
  stdin {}
}
output {
  #stdout {}
  elasticsearch {
    hosts => ["172.20.22.24:9200"]
    index => "magedu-m63-test-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f test.conf
version1
version2
version3
test1
test2
test3
####check the collected data on the elasticsearch server
# ls -lrt /data/elasticsearch/nodes/0/indices/
total 4
drwxr-xr-x 4 elasticsearch elasticsearch 4096 Apr 13 14:36 DyCv8w7mTleuAvlItAJlWA
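
The new index is easier to inspect through the _cat API than on disk (standard API; the index name comes from the config above):

# curl http://172.20.22.24:9200/_cat/indices?v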
kibana

kibana provides a web interface for viewing data in elasticsearch: it looks data up through the elasticsearch API and visualizes it in the frontend, and it can also render suitably formatted data as tables, bar charts, pie charts, and so on.

Deploying kibana
# ls -lrt kibana-7.12.1-amd64.deb
# dpkg -i kibana-7.12.1-amd64.deb
# grep "^[^$|#]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "172.20.22.24"
elasticsearch.hosts: ["http://172.20.22.27:9200"]
i18n.locale: "zh-CN"
# systemctl restart kibana
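
Before opening the browser, a quick check that kibana is listening (a sketch):

# ss -tnlp | grep 5601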
Open http://172.20.22.24:5601 in a browser

Stack Management --> Index Patterns --> Create index pattern

Select the time field

View the log data under the index just created
Collecting tomcat logs

Collect the tomcat servers' access logs and error logs for real-time statistics, searched and displayed in kibana: each tomcat server runs logstash to collect its logs and forward them to elasticsearch for analysis, and kibana then presents them in the frontend.

Deploying tomcat
####tomcat1, 172.20.22.30
# apt install -y openjdk-8-jdk
# ls -lrt apache-tomcat-8.5.77.tar.gz
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
# cd /usr/local/tomcat
###change the tomcat access-log format to json
# vim conf/server.xml
....
....
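
The Valve element itself is elided above; a representative JSON-format AccessLogValve looks roughly like this (the pattern fields are illustrative, but prefix and suffix must produce the tomcat_access_log*.log files tailed below):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="tomcat_access_log" suffix=".log"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>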
# mkdir /usr/local/tomcat/webapps/myapp
# echo "web1 172.20.22.30" > /usr/local/tomcat/webapps/myapp/index.html
# ./bin/catalina.sh start

###access test
# curl http://172.20.22.30:8080/myapp/
###check the access log
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-13.log
" Y" r( B% B1 e+ [( ]6 L
: f3 j' x; U- z2 T. e6 F$ O####tomcat2,172.20.22.262 S$ J; w" V: J( |8 J' D9 E: I
# apt install -y openjdk-8-jdk! k4 j/ i* e7 q1 g1 g
# ls -lrt apache-tomcat-8.5.77.tar.gz 8 i/ k# x5 d# F5 A' v/ C0 Y
-rw-r--r-- 1 root root 10559655 Apr 13 21:44 apache-tomcat-8.5.77.tar.gz
- W* L+ o t) j1 {8 [$ ~% a" g# tar xf apache-tomcat-8.5.77.tar.gz -C /usr/local/src/# _* {6 z) W7 [7 ?
# ln -s /usr/local/src/apache-tomcat-8.5.77 /usr/local/tomcat
2 l) {$ E3 m' i5 A8 E, W% X6 B# cd /usr/local/tomcat
/ x, }7 d) u4 X###修改tomcat日志格式为json
( I# W' g5 ?3 Q$ e. p. @+ G# vim conf/server.xml
: o6 x# i0 v A( }- D k....
8 y% z$ p; ^: \, @$ ~+ C
) q4 b! W. n% Y7 u! p0 b....
1 H7 `; r9 g8 V" b% U5 K# mkdir /usr/local/tomcat/webapps/myapp
. M8 }$ ?& p n" y' v# echo "web2 172.20.22.26" > /usr/local/tomcat/webapps/myapp/index.html+ z _4 j: G% l! Z; D! ~* p: `7 `
# ./bin/catalina.sh start
% a9 T K9 i# l; g
8 b* i0 ~$ w5 F9 _9 }###访问测试
. O; h3 n: J) I8 X4 C$ R# curl http://172.20.22.26:8080/myapp/0 U- `* F4 H( G) z1 O: l) p' K2 Q
###查看访问日志* \# P8 y% L) [' l# {
# tail -f /usr/local/tomcat/logs/tomcat_access_log.2022-04-14.log
Deploying logstash

Install logstash on the tomcat servers to collect the tomcat and system logs
####tomcat1, 172.20.22.30
# ls -lrt logstash-7.12.1-amd64.deb
# dpkg -i logstash-7.12.1-amd64.deb
# vim /etc/systemd/system/logstash.service
...
User=root
Group=root
...
# cd /etc/logstash/conf.d
# cat tomcat.conf
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log*.log"
    type => "tomcat-log"
    start_position => "beginning"
    stat_interval => "3"
  }
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "tomcat-log" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-tomcat-%{+YYYY.MM.dd}"
  }}
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["172.20.22.24:9200","172.20.22.27:9200"]
    index => "elk-syslog-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f tomcat.conf -t
# systemctl daemon-reload
# systemctl start logstash.service
# scp tomcat.conf root@172.20.22.26:/etc/logstash/conf.d/
% ^; ~' I8 X- k
####tomcat2,172.20.22.26
K# T/ z0 Q! i1 l# ls -lrt logstash-7.12.1-amd64.deb
( @, b* ?5 W' f t" K K3 J# dpkg -i logstash-7.12.1-amd64.deb" ^) p* X# F* X( t% f9 f
# vim /etc/systemd/system/logstash.service
+ O6 }- Z7 ?" g2 p* |4 k0 |7 ?...
& G. m% K. c6 `% O0 L6 ZUser=root
, a3 H" I6 Q" n5 V7 b% hGroup=root0 ?6 j! U! D" r: p3 l5 Y; i
.... d! E. A- a: \* p! I
# systemctl daemon-reload
5 H8 z; E( z4 j0 ~ i# g( D/ i# systemctl daemon-reload
6 ^2 l( ~ P, r: o# systemctl start logstash.service 8 p& m! w F2 W1 m7 B
Display through kibana
Collecting Java logs

Use the multiline codec plugin for multi-line matching: it merges several physical lines into one event, and its what option selects whether a matched line is merged with the preceding lines or the following ones.

Multiline codec plugin | Logstash Reference [8.1] | Elastic
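
As a hypothetical illustration of those options: with pattern => "^\[", negate => true, and what => "previous", every line that does not start with "[" is appended to the event before it, so the three physical lines below become a single event:

[2022-04-14T10:00:01,000][ERROR][logstash.agent] action failed
java.lang.IllegalStateException: boom
        at org.example.Worker.run(Worker.java:42)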
Adding the logstash configuration file

###collect logstash's own log, 172.20.22.26
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service

###collect logstash's own log, 172.20.22.30
# cd /etc/logstash/conf.d
# cat java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "logstash-log"
    start_position => "beginning"
    stat_interval => "3"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }}
}
output {
  if [type] == "logstash-log" {
  elasticsearch {
    hosts => ["172.20.22.24"]
    index => "logstash-log-%{+YYYY.MM.dd}"
  }}
}

# /usr/share/logstash/bin/logstash -f java.conf -t
# systemctl restart logstash.service
View the collected logs in kibana
Collecting nginx logs with filebeat, redis, and logstash

Use filebeat to collect logs and send them to logstash1, which forwards them to redis; logstash2 then reads from redis and ships the logs to elasticsearch.

web1: 172.20.22.30, with nginx, filebeat, and logstash deployed

web2: 172.20.22.26, with nginx, filebeat, and logstash deployed

logstash server 2: 172.20.22.23; redis server: 172.20.23.157
nginx server configuration

Deploying nginx

# wget http://nginx.org/download/nginx-1.18.0.tar.gz
# tar xf nginx-1.18.0.tar.gz
# cd nginx-1.18.0
# ./configure --prefix=/usr/local/nginx --with-http_ssl_module
# make -j4 && make install
# /usr/local/nginx/sbin/nginx
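
A quick smoke test that nginx is serving and writing its access log (a sketch):

# curl -I http://172.20.22.30/
# tail -1 /usr/local/nginx/logs/access.log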
Deploying and configuring logstash

Send the logs collected by filebeat on to redis
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/beats-to-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
  beats {
    port => 5045
    codec => "json"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-systemlog"
      host => "172.20.23.157"
      port => "6379"
      db => "0"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-accesslog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "filebeat-redis-nginx-errorlog"
      host => "172.20.23.157"
      port => "6379"
      db => "1"
      password => "12345678"
  }}
}
# systemctl start logstash
# scp /etc/logstash/conf.d/beats-to-redis.conf root@172.20.22.26:/etc/logstash/conf.d/
Deploying and configuring filebeat

Collect log data with filebeat and send it to logstash
# dpkg -i filebeat-7.12.1-amd64.deb
# grep -v "#" /etc/filebeat/filebeat.yml | grep "^[^$]"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    project: filebeat-systemlog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    project: filebeat-nginx-accesslog
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  fields:
    project: filebeat-nginx-errorlog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.logstash:
  hosts: ["172.20.22.30:5044","172.20.22.30:5045"]
  enabled: true
  worker: 2
  compression_level: 3
  loadbalance: true

# systemctl start filebeat
# scp /etc/filebeat/filebeat.yml root@172.20.22.26:/etc/filebeat/
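
The copied configuration only takes effect once the services are started on web2 as well, a step the transcript leaves implicit:

###on 172.20.22.26
# systemctl start logstash
# systemctl start filebeat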
logstash server configuration

logstash server 2 (172.20.22.23) sends the logs buffered in redis on to elasticsearch
# apt install -y openjdk-8-jdk
# dpkg -i logstash-7.12.1-amd64.deb
# cat /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-accesslog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-nginx-errorlog"
    host => "172.20.23.157"
    port => "6379"
    db => "1"
    password => "12345678"
  }
  redis {
    data_type => "list"
    key => "filebeat-redis-systemlog"
    host => "172.20.23.157"
    port => "6379"
    db => "0"
    password => "12345678"
  }
}
output {
  if [fields][project] == "filebeat-systemlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-systemlog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-accesslog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
  if [fields][project] == "filebeat-nginx-errorlog" {
    elasticsearch {
      hosts => ["172.20.22.28:9200"]
      index => "filebeat-nginx-errorlog-%{+YYYY.MM.dd}"
  }}
}
# systemctl restart logstash.service
Installing and configuring redis

redis server: 172.20.23.157
# yum install -y redis
# vim /etc/redis.conf
####change the following settings
bind 0.0.0.0
....
save ""
....
requirepass 12345678
....
# systemctl start redis
###test the redis connection
# redis-cli
127.0.0.1:6379> auth 12345678
OK
127.0.0.1:6379> ping
PONG

###verify the collected log data
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "filebeat-redis-nginx-accesslog"
2) "filebeat-redis-nginx-errorlog"
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> keys *
1) "filebeat-redis-systemlog"
Verify the generated indices through the head plugin

Verify the collected log data in kibana