Centralized Log Management with ELK: Best Practices

With the spread and growth of the Internet, the volume of data we face keeps expanding, including the logs generated by systems of every kind. How to manage these logs efficiently is a question every operations engineer has to answer. This article presents best practices for centralized log management with ELK to help you better manage logs of all types.

I. About ELK

ELK is a log management platform made up of three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a real-time search and analytics engine; Logstash is a tool that transforms, filters, and ships all kinds of log data into Elasticsearch; and Kibana is a web-based graphical interface that provides rich data visualization on top of it.

II. Environment Setup

1. Installing Elasticsearch

First, we need to download Elasticsearch from the official website; this article uses version 7.10.2. A typical tarball installation on Linux looks like the sketch below (the download URL follows Elastic's standard artifact naming for the Linux x86_64 build; verify it against the official downloads page):
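```
# Download and unpack the Elasticsearch 7.10.2 tarball (Linux x86_64 build)
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.2-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.10.2-linux-x86_64.tar.gz
```

Once the archive is unpacked, enter the bin directory and run the following command to start Elasticsearch: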

```
./elasticsearch
```

If everything goes well, you should see output similar to the following:

```
[2021-05-01T17:27:54,885][INFO ][o.e.n.Node               ] [node-1] version[7.10.2], pid[1234], build[default/tar/747e1cc71def077253878a59143c1f785afa92b9/2021-01-13T00:42:12.435326Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.2/11.0.2+9]
[2021-05-01T17:27:54,886][INFO ][o.e.n.Node               ] [node-1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms512m, -Xmx512m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Des.path.home=/opt/elasticsearch-7.10.2, -Des.path.conf=/opt/elasticsearch-7.10.2/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2021-05-01T17:27:55,785][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [aggs-matrix-stats]
[2021-05-01T17:27:55,785][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [analysis-common]
[2021-05-01T17:27:55,786][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [geo]
[2021-05-01T17:27:55,786][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-common]
[2021-05-01T17:27:55,786][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-geoip]
[2021-05-01T17:27:55,787][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-user-agent]
[2021-05-01T17:27:55,787][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-expression]
[2021-05-01T17:27:55,787][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-mustache]
[2021-05-01T17:27:55,787][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-painless]
[2021-05-01T17:27:55,788][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [parent-join]
[2021-05-01T17:27:55,788][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [percolator]
[2021-05-01T17:27:55,788][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [rank-eval]
[2021-05-01T17:27:55,788][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [reindex]
[2021-05-01T17:27:55,789][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [repository-url]
[2021-05-01T17:27:55,789][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty4]
[2021-05-01T17:27:55,789][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [vectors]
[2021-05-01T17:27:55,789][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-analytics]
[2021-05-01T17:27:55,790][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-async]
[2021-05-01T17:27:55,790][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-ccr]
[2021-05-01T17:27:55,790][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-core]
[2021-05-01T17:27:55,790][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-deprecation]
[2021-05-01T17:27:55,791][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-enrich]
[2021-05-01T17:27:55,791][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-eql]
[2021-05-01T17:27:55,791][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-graph]
[2021-05-01T17:27:55,791][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-ilm]
[2021-05-01T17:27:55,791][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-logstash]
[2021-05-01T17:27:55,792][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-ml]
[2021-05-01T17:27:55,792][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-monitoring]
[2021-05-01T17:27:55,792][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-rollup]
[2021-05-01T17:27:55,792][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-security]
[2021-05-01T17:27:55,793][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-sql]
[2021-05-01T17:27:55,793][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-stack]
[2021-05-01T17:27:55,793][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-voting-only-node]
[2021-05-01T17:27:55,793][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [x-pack-watcher]
[2021-05-01T17:27:55,794][INFO ][o.e.p.PluginsService     ] [node-1] no plugins loaded
[2021-05-01T17:27:56,337][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1] parsed [0] roles from file [/opt/elasticsearch-7.10.2/config/roles.yml]
[2021-05-01T17:27:57,455][INFO ][o.e.t.TransportService   ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2021-05-01T17:27:57,904][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-05-01T17:27:57,911][INFO ][o.e.c.c.Coordinator      ] [node-1] cluster UUID [Cm9m8w8mROaf6vaFBOlUEA]
[2021-05-01T17:27:58,146][INFO ][o.e.c.s.MasterService    ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{L8dgtKlOQj2-aygcWpWUMA}{n4jx5Z52ScKzd-mivzS7Rw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=1073741824, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, BECOME_MASTER_TASK, FINISH_ELECTION], term: 1, version: 1, delta: master node changed {previous [], current [{node-1}{L8dgtKlOQj2-aygcWpWUMA}{n4jx5Z52ScKzd-mivzS7Rw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=1073741824, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}
[2021-05-01T17:27:58,252][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{L8dgtKlOQj2-aygcWpWUMA}{n4jx5Z52ScKzd-mivzS7Rw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=1073741824, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2021-05-01T17:27:58,306][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2021-05-01T17:27:58,307][INFO ][o.e.n.Node               ] [node-1] started
[2021-05-01T17:27:58,399][INFO ][o.e.g.GatewayService     ] [node-1] recovered [0] indices into cluster_state
```

This indicates that Elasticsearch has started successfully. You can verify it by visiting http://localhost:9200.
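You can also check from the command line (the exact values in the response, such as the node name and cluster UUID, will differ on your machine):

```
curl http://localhost:9200
```

A healthy node answers with a small JSON document containing fields such as "name", "cluster_name", "version" (with "number" : "7.10.2"), and the tagline "You Know, for Search".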

2. Installing Logstash

Next, we need to install Logstash. As with Elasticsearch, you can download the latest version from the official website; this article uses version 7.10.2. Extract the archive, enter the logstash bin directory, and run the following command to start it:

```
./logstash -f /path/to/config/file
```

The configuration file here is not YAML; it is written in Logstash's own pipeline configuration syntax, with input, filter, and output sections that define how Logstash processes data. Later in this article we will configure Logstash to collect and filter various kinds of log data. Replace the path above with the path to your own configuration file.
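Before writing a full pipeline, you can verify the installation with Logstash's -e flag, which accepts an inline pipeline definition:

```
# Read events from the terminal and print them back, useful as a smoke test
./logstash -e 'input { stdin { } } output { stdout { } }'
```

Type a line and press Enter; if Logstash prints the event back with an @timestamp and host field, the installation works.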

3. Installing Kibana

Finally, we need to install Kibana. Again, download the latest version from the official website and extract it. Then enter the kibana bin directory and run the following command to start it:

```
./kibana
```

Once Kibana has started, you can reach its web interface at http://localhost:5601.
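By default Kibana connects to Elasticsearch at http://localhost:9200, so no extra configuration is needed for the setup in this article. If your Elasticsearch runs elsewhere, point Kibana at it in config/kibana.yml (the values below are illustrative):

```
# config/kibana.yml -- adjust host and port to match your environment
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
```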

III. Data Collection

One of ELK's core capabilities is data collection: it can gather many types of log data and bring them under centralized management for analysis and visualization.

1. Collecting file logs

In day-to-day work we usually need to collect various file-based logs, such as those from nginx, Apache, Tomcat, and MySQL. Collecting file logs with Logstash is straightforward; just follow these steps:

a. Create a configuration file

In your Logstash installation directory, create a file named logstash.conf and copy the following content into it:

```
input {
  file {
    path => "/path/to/logfile.log"
    # Read the file from the beginning on first run; the default ("end")
    # only picks up lines appended after Logstash starts.
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # One index per day, e.g. logstash-2021.05.01
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

This configuration file defines how Logstash collects and outputs data. The input section defines a file input plugin that tails the log file at the given path, while the output section defines an Elasticsearch output plugin that ships the collected events to Elasticsearch.

The hosts parameter in the configuration above specifies the address and port of Elasticsearch; here we use the default, localhost:9200. The index parameter sets the name of the index the data is written to; here we append the date (%{+YYYY.MM.dd}) so that each day's logs land in a separate index, which makes retention and cleanup easier.
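As written, each log line is stored as a single unparsed message field. A common next step is to add a filter section between input and output; the sketch below assumes nginx access logs in the default combined format (COMBINEDAPACHELOG and the date pattern are standard Logstash building blocks, but adjust them to your actual log format):

```
filter {
  # Parse combined-format access log lines into structured fields
  # (clientip, verb, request, response, bytes, ...)
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Use the timestamp from the log line as the event's @timestamp
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```

With this in place, each access-log line becomes a set of structured fields you can aggregate and visualize in Kibana.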