yds
2024-01-02

# Installing an ELK Stack with Docker

This post walks through setting up an ELK environment with Docker. A Java application uses the Logback logging framework to send log events to RabbitMQ, Logstash consumes them from the queue and forwards them to Elasticsearch, and finally the logs can be browsed and searched in Kibana.

# Installation

The ELK components are installed with docker-compose; the docker-compose.yml looks like this:

```yaml
version: '3'
services:
  rabbitmq:
    restart: always
    image: rabbitmq:3-management
    ports:
      - 15672:15672
      - 5672:5672
    environment:
      RABBITMQ_DEFAULT_USER: yds
      RABBITMQ_DEFAULT_PASS: dadada
    volumes:
      - ${PWD}/rabbitmq:/var/lib/rabbitmq
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    volumes:
      - ${PWD}/elasticsearch:/usr/share/elasticsearch/data
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.0
    container_name: logstash
    volumes:
      - ${PWD}/logstash/config/:/usr/share/logstash/config
      - ${PWD}/logstash/data/:/usr/share/logstash/data
      - ${PWD}/logstash/pipeline/:/usr/share/logstash/pipeline
    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.0
    container_name: kibana
    ports:
      - 5601:5601
```
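Before bringing the stack up, the host directories used by the bind mounts should exist. A minimal prep sketch (the uid hint assumes the stock Elastic images; adjust to your environment):

```shell
# Create the bind-mount directories referenced by the compose file
mkdir -p rabbitmq elasticsearch logstash/config logstash/data logstash/pipeline
# The official elasticsearch/logstash images run as uid 1000; if the containers
# fail to start with permission errors, make the data dirs writable by that uid:
#   sudo chown -R 1000:1000 elasticsearch logstash
```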
Run `docker-compose up -d`, after which the services are available at:

  • RabbitMQ: localhost:15672
  • Elasticsearch: localhost:9200
  • Kibana: localhost:5601

Before starting, you may also need to copy Logstash's default configuration out of the image, since the bind mounts above would otherwise shadow it with empty directories:

```shell
# Start a throwaway container just to copy the default config out of the image
docker run -d -P --name logstash docker.elastic.co/logstash/logstash:7.9.0
docker cp logstash:/usr/share/logstash/pipeline logstash/
docker cp logstash:/usr/share/logstash/data logstash/
docker cp logstash:/usr/share/logstash/config logstash/
# Remove the temporary container once the files are copied
docker rm -f logstash
```
# Logstash Configuration

Edit logstash.conf:

```conf
input {
  rabbitmq {
    host => "192.168.1.6"
    port => 5672
    user => "yds"
    password => "dadada"
    queue => "log_queue"
    durable => true
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.1.6:9200"]
    index => "logstash-rabbitmq"
  }
  stdout {
    codec => rubydebug
  }
}
```

The `queue` setting here must match the queue that exists in RabbitMQ.
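One way to create the queue and its binding up front is to import a RabbitMQ definitions file. The exchange name `logs` and type `topic` below are assumptions based on the Spring AMQP `AmqpAppender` defaults; adjust them if your appender declares a different exchange:

```json
{
  "queues": [
    { "name": "log_queue", "vhost": "/", "durable": true, "auto_delete": false, "arguments": {} }
  ],
  "exchanges": [
    { "name": "logs", "vhost": "/", "type": "topic", "durable": true, "auto_delete": false, "arguments": {} }
  ],
  "bindings": [
    { "source": "logs", "vhost": "/", "destination": "log_queue", "destination_type": "queue", "routing_key": "info", "arguments": {} }
  ]
}
```

This file can be imported through the management UI (Overview → Import definitions) or, on RabbitMQ 3.8+, with `rabbitmqctl import_definitions`.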

# Configuring Logback in Java

Add the Spring AMQP starter to pom.xml:

```xml
  ...
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
  </dependency>
  ...
```

Then configure logback.xml:

```xml
<configuration>

  <property name="LOG_DIR" value="."/>
  <property name="LOG_FILE" value="app"/>

  <!-- Rolling log file -->
  <appender name="APP_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_DIR}/${LOG_FILE}.log</file>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
      <charset>utf8</charset>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <fileNamePattern>${LOG_DIR}/${LOG_FILE}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <!-- keep at most 5 days of rolled files -->
      <maxHistory>5</maxHistory>
      <!-- max size of a single log file -->
      <maxFileSize>100MB</maxFileSize>
      <!-- cap total log size at 1GB -->
      <totalSizeCap>1GB</totalSizeCap>
    </rollingPolicy>
  </appender>

  <!-- Asynchronous writes -->
  <appender name="ASYNC_APP_LOG" class="ch.qos.logback.classic.AsyncAppender">
    <discardingThreshold>20</discardingThreshold>
    <queueSize>512</queueSize>
    <neverBlock>true</neverBlock>
    <appender-ref ref="APP_LOG"/>
  </appender>

  <!-- RabbitMQ -->
  <appender name="AMQP_RABBITMQ" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
    <layout>
      <pattern><![CDATA[ %d %p %t [%c] - <%m>%n ]]></pattern>
    </layout>
    <addresses>localhost:5672</addresses>
    <username>yds</username>
    <password>dadada</password>
    <declareExchange>true</declareExchange>
    <applicationId>java-web-demo</applicationId>
    <!-- this routing key must be bound to the queue that Logstash reads from -->
    <routingKeyPattern>info</routingKeyPattern>
    <generateId>true</generateId>
    <charset>UTF-8</charset>
    <deliveryMode>NON_PERSISTENT</deliveryMode>
  </appender>

  <!-- Console -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <springProfile name="default,local">
    <root level="INFO">
      <appender-ref ref="STDOUT"/>
      <appender-ref ref="AMQP_RABBITMQ"/>
    </root>
  </springProfile>

  <springProfile name="pro">
    <root level="INFO">
      <appender-ref ref="ASYNC_APP_LOG"/>
    </root>
  </springProfile>

</configuration>
```
Last updated: 2024/09/30, 01:34:11