[Hands-on] Deploying ELK with Docker: A Step-by-Step Record
1 Deployment environment
CentOS Linux release 7.9.2009 (Core)
Elasticsearch: 7.12.0
elasticsearch-head: 5
Kibana: 7.12.0
Logstash: 7.12.0
2 Installing Docker
Install Docker with yum:

```shell
$ yum install -y docker-ce
```
3 Installing docker-compose
Download the docker-compose binary:

```shell
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```
Add execute permission:

```shell
$ sudo chmod +x /usr/local/bin/docker-compose
```
Create a symlink:

```shell
$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
```
Verify the installation by checking the version:

```shell
$ docker-compose --version
docker-compose version 1.29.1, build c34c88b2
```
4 Pulling the images

```shell
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:7.12.0   # elasticsearch
$ docker pull docker.elastic.co/kibana/kibana:7.12.0                 # kibana
$ docker pull docker.elastic.co/logstash/logstash:7.12.0             # logstash
$ docker pull mobz/elasticsearch-head:5                              # elasticsearch-head
```

Check the list of pulled images with `docker images`.
5 Installing Elasticsearch
Note: the Elasticsearch, Logstash, and Kibana versions must match exactly, or you will hit all sorts of puzzling problems, such as Kibana or Logstash failing to connect to ES. This walkthrough uses version 7.12.0 throughout.
Reference:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Following the official documentation, we create the Elasticsearch cluster with docker-compose.
5.1 Create the docker-compose.yml file
Create docker-compose.yml under /opt/docker:

```yaml
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge
```
Note:
Three named volumes, data01, data02, and data03, store the node data directories so the data survives container restarts. If they do not already exist, docker-compose creates them when it brings up the cluster.
5.2 Create the volumes

```shell
$ docker volume create data01
$ docker volume create data02
$ docker volume create data03
```
Inspect a volume's details:

```shell
$ docker volume inspect data01
```
With everything in place, create the Elasticsearch containers.
5.3 Create the Elasticsearch cluster containers

```shell
$ cd /opt/docker
$ docker-compose up
```

Check that the nodes are up and have formed a cluster:

```shell
$ curl -X GET "localhost:9200/_cat/nodes?v=true&pretty"
```
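The `_cat/nodes` check above can also be scripted. Below is a minimal Python sketch that parses the tabular `?v=true` output and asserts that all three nodes have joined with exactly one elected master; the sample text is illustrative, not captured from a real cluster.

```python
# Sketch: parse _cat/nodes?v=true output and confirm the cluster formed.
# The sample output below is an assumed example, not real captured data.
sample = """\
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role  master name
172.18.0.2           42          93   2    0.10    0.20     0.30 cdhilmrstw -      es02
172.18.0.3           38          93   2    0.10    0.20     0.30 cdhilmrstw *      es01
172.18.0.4           45          93   2    0.10    0.20     0.30 cdhilmrstw -      es03
"""

def parse_cat_nodes(text):
    """Return (name, is_master) tuples from _cat/nodes tabular output."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    name_idx = header.index("name")
    master_idx = header.index("master")  # "*" marks the elected master
    return [(row.split()[name_idx], row.split()[master_idx] == "*")
            for row in lines[1:]]

nodes = parse_cat_nodes(sample)
assert sorted(n for n, _ in nodes) == ["es01", "es02", "es03"]
assert sum(m for _, m in nodes) == 1  # exactly one elected master
```

In practice you would feed this the body of `curl "localhost:9200/_cat/nodes?v=true"` instead of the sample string.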
That completes the Elasticsearch cluster deployment!
5.4 Pitfalls
5.4.1 vm.max_map_count
Creating the Elasticsearch containers requires raising the Linux vm.max_map_count setting.
max_map_count limits how many memory-mapped areas a process may use. The default on most systems is 65530; increase it if your application needs to map more files. Elasticsearch requires at least 262144.
Check the current value:

```shell
$ sysctl -a | grep vm.max_map_count
vm.max_map_count = 65530
```
Raise it:

```shell
$ sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
```
Verify the change:

```shell
$ sysctl -a | grep vm.max_map_count
vm.max_map_count = 262144
```

Note that `sysctl -w` does not survive a reboot; to make the setting permanent, add `vm.max_map_count=262144` to /etc/sysctl.conf.
5.4.2 head plugin cannot connect to the ES nodes
The head plugin cannot connect to the ES nodes unless cross-origin access is enabled:

```shell
$ docker ps -a
$ docker exec -it <container-id> /bin/bash
$ vi config/elasticsearch.yml
```

Append the cross-origin settings on the last lines of the file, then confirm:

```shell
[root@27224a557c03 elasticsearch]# cat config/elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
```
6 Installing the elasticsearch-head plugin
Create the elasticsearch-head container, mapping port 9100; --restart=always makes the container start automatically at boot.

```shell
$ docker run -d -p 9100:9100 --restart=always docker.io/mobz/elasticsearch-head:5
```
Open the head web UI to check that the plugin started correctly.
6.1 Pitfalls
6.1.1 Creating an index from elasticsearch-head gets no response (HTTP 406)
Fix: change head's Content-Type handling.

```shell
$ docker ps -a
$ docker exec -it <container-id> /bin/bash
root@b93fa4e29ba2:/usr/src/app# vim _site/vendor.js
```

In _site/vendor.js:

1. Line 6886: change `contentType: "application/x-www-form-urlencoded"` to `contentType: "application/json;charset=UTF-8"`.
2. Line 7574: change `var inspectData = s.contentType === "application/x-www-form-urlencoded" &&` to `var inspectData = s.contentType === "application/json;charset=UTF-8" &&`.

Then exit the container.
7 Installing Kibana

```shell
$ docker run --name elk-kibana -e ELASTICSEARCH_URL=http://172.18.0.4:9200 -p 5601:5601 --restart=always 7a6b1047dd48
```
- -e ELASTICSEARCH_URL=http://172.18.0.4:9200: tells Kibana which ES node to connect to
- --restart=always: start the container at boot

Enter the Kibana container and add the following to the /config/kibana.yml configuration file:
```yaml
server.name: kibana
server.host: "0"
# connect to the ES node
elasticsearch.hosts: [ "http://elk-es01:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
# use Chinese for the web UI
i18n.locale: "zh-CN"
```
Confirm the configuration, then restart the Kibana container:

```shell
[root@docker /]# docker exec -it --user=root 7a0b4cb0bca2 /bin/bash
[root@7a0b4cb0bca2 kibana]# cat config/kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elk-es01:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
[root@7a0b4cb0bca2 kibana]# exit
[root@docker /]# docker restart 7a0b4cb0bca2
7a0b4cb0bca2
```
Open the web UI at http://192.168.10.30:5601.
8 Installing Logstash
Create the Logstash container:

```shell
$ docker run -it -d -p 5044:5044 -p 5040:5040 --restart=always --name logstash --net=docker_elastic docker.elastic.co/logstash/logstash:7.12.0
```
- --net: attach the container to the docker_elastic network
- --restart=always: start the container at boot

Enter the Logstash container and edit the logstash.yml configuration file to add the ES connection settings, then restart:

```shell
[root@docker ~]# docker exec -it b5f3e8eeea13 /bin/bash
[root@b5f3e8eeea13 logstash]# cat config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elk-es01:9200" ]
[root@b5f3e8eeea13 logstash]# exit
[root@docker ~]# docker restart b5f3e8eeea13
b5f3e8eeea13
```
8.1 Collecting web service logs over TCP
1. Create a logstash.conf file under the logstash/config/ path.
2. Edit logstash.conf with the following configuration:
```conf
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    action => "index"
    index => "index-logstash-%{+YYYY.MM.dd}"
  }
}
```
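The `%{+YYYY.MM.dd}` sprintf pattern in the index name makes Logstash roll over to a new index each day, keyed on the event's `@timestamp` (which Logstash formats in UTC). A small Python sketch of the equivalent naming logic:

```python
# Sketch: mirror Logstash's "index-logstash-%{+YYYY.MM.dd}" daily index naming.
from datetime import datetime

def daily_index(prefix, ts):
    # %{+YYYY.MM.dd} formats the event timestamp as year.month.day
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

assert daily_index("index-logstash", datetime(2021, 4, 15)) == "index-logstash-2021.04.15"
```

Daily indices keep each index small and make it easy to expire old logs by deleting whole indices.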
3. Edit logstash.yml so it points at the pipeline file:

```yaml
path.config: /usr/share/logstash/config/logstash.conf
```
4. Add the Logstash dependency to the web project:

```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
```
5. Create a logback.xml file and add a new appender:

```xml
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>127.0.0.1:4560</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
    <appender-ref ref="LOGSTASH"/>
</root>
```
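The appender works because LogstashEncoder serializes each log event as one JSON object per line and writes it to the TCP socket, which is exactly what the `codec => json_lines` on the Logstash input expects. A hedged sketch of such a payload (the field set shown is illustrative, not the full LogstashEncoder output):

```python
# Sketch: newline-delimited JSON as sent over the LogstashTcpSocketAppender
# socket. Field names/values here are assumed examples for illustration.
import json

event = {
    "@timestamp": "2021-04-15T10:00:00.000+08:00",
    "message": "user login ok",
    "logger_name": "com.example.AuthService",  # hypothetical logger
    "level": "INFO",
}
line = json.dumps(event) + "\n"  # one event = one line ("json lines")

decoded = json.loads(line)
assert decoded["level"] == "INFO"
assert line.endswith("\n")
```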
6. Restart the Logstash container:

```shell
$ docker restart b5f3e8eeea13
```
To collect logs from multiple services, tag each TCP input with a service field and route on it in the output:

```conf
input {
  tcp {
    add_field => {"service" => "admin"}
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
  tcp {
    add_field => {"service" => "auth"}
    mode => "server"
    host => "0.0.0.0"
    port => 4561
    codec => json_lines
  }
}
output {
  if [service] == "admin" {
    elasticsearch {
      hosts => "127.0.0.1:9200"
      index => "admin-logstash-%{+YYYY.MM.dd}"
    }
  }
  if [service] == "auth" {
    elasticsearch {
      hosts => "127.0.0.1:9200"
      index => "auth-logstash-%{+YYYY.MM.dd}"
    }
  }
}
```
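The routing above works in two steps: each `tcp` input stamps events with a `service` field via `add_field`, and the `output` conditionals send each tagged event to its own daily index. A minimal Python sketch of that routing decision (assuming, as in the config above, that events from unlisted services are simply not indexed):

```python
# Sketch: mirror the per-service conditional routing in the Logstash output.
from datetime import datetime

def route_index(event, ts):
    """Return the target daily index for an event, or None if no
    output conditional matches (assumption: unmatched events are dropped)."""
    routes = {"admin": "admin-logstash", "auth": "auth-logstash"}
    prefix = routes.get(event.get("service"))
    if prefix is None:
        return None
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

ts = datetime(2021, 4, 15)
assert route_index({"service": "admin"}, ts) == "admin-logstash-2021.04.15"
assert route_index({"service": "auth"}, ts) == "auth-logstash-2021.04.15"
assert route_index({"service": "other"}, ts) is None
```

Adding another service is then just a matter of adding one more `tcp` input on a new port and one more output conditional.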