Kibana and Logstash Installation and Configuration

Elasticsearch, Kibana, and Logstash versions

  • Elasticsearch:7.2.0
  • Kibana:7.2.0
  • Logstash:7.2.0

Kibana and Logstash share one server

  • Server spec: 2 cores, 4 GB RAM, 40 GB SSD system disk

Dedicated Kibana server

Move Kibana off the Elasticsearch node and install it via RPM.

I. Install Kibana (RPM)

Download and install the public signing key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the yum repository configuration

vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Kibana

yum install kibana

II. Kibana configuration and start on boot

kibana.yml

vim /etc/kibana/kibana.yml
# HTTP port
server.port: 5601

# HTTP bind address; 0.0.0.0 accepts connections on both internal and external IPs
server.host: "0.0.0.0"

# Elasticsearch node addresses (all hosts must belong to the same cluster)
elasticsearch.hosts: ["http://172.18.112.10:9200"]

# Elasticsearch username and password
elasticsearch.username: "elastic"
elasticsearch.password: "elasticpassword"

# Kibana web UI locale (Simplified Chinese)
i18n.locale: "zh-CN"

Enable at boot

systemctl daemon-reload
systemctl enable kibana.service

Start and stop

systemctl start kibana.service
systemctl stop kibana.service
systemctl status kibana.service
systemctl restart kibana.service

Open the Kibana site

http://<server-ip>:5601
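Before opening the browser you can probe Kibana's status API from the command line or a script. A minimal sketch using only the Python standard library (the host is an assumption from the examples above; `/api/status` is Kibana's built-in status endpoint):

```python
import json
import urllib.request

def kibana_status(host, port=5601):
    """Return Kibana's overall state string from the /api/status endpoint."""
    url = f"http://{host}:{port}/api/status"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)["status"]["overall"]["state"]

# Expect "green" once Kibana is up, e.g.:
# print(kibana_status("172.18.112.10"))
```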

III. Kibana directory layout

Type | Description | Default Location
home | Kibana home directory, also $KIBANA_HOME | /usr/share/kibana
bin | Binary scripts, including the Kibana server launcher and kibana-plugin for installing plugins | /usr/share/kibana/bin
config | Configuration files, including kibana.yml | /etc/kibana
data | Data files written to disk by Kibana and its plugins | /var/lib/kibana
optimize | Transpiled source code; certain administrative actions (e.g. plugin install) cause it to be re-transpiled on the fly | /usr/share/kibana/optimize
plugins | Plugin files; each plugin lives in its own subdirectory | /usr/share/kibana/plugins

Elasticsearch index management

# Create an index and initialize its fields
PUT index_t_settlement_info
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "long"
      }
    }
  }
}
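Outside of Kibana's Dev Tools, the same index-creation call can be issued over plain HTTP. A minimal sketch using only the Python standard library (host, credentials, and index name mirror the examples above; adjust for your cluster):

```python
import base64
import json
import urllib.request

def create_index(host, index, body, user, password):
    """PUT an index-creation body to Elasticsearch and return the parsed reply."""
    req = urllib.request.Request(
        url=f"http://{host}:9200/{index}",
        data=json.dumps(body).encode("utf-8"),
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic "
            + base64.b64encode(f"{user}:{password}".encode()).decode(),
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Same body as the Dev Tools example: 5 primary shards, 1 replica, a long "id" field.
body = {
    "settings": {"index": {"number_of_shards": 5, "number_of_replicas": 1}},
    "mappings": {"properties": {"id": {"type": "long"}}},
}
# create_index("172.18.112.10", "index_t_settlement_info", body,
#              "elastic", "elasticpassword")
```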

# Create an index alias
POST /_aliases
{
    "actions": [
        {"add": {"index":"index_t_settlement_info","alias":"t_settlement_info"}}
    ]
}

# Switch an alias to another index (an atomic operation)
POST /_aliases
{
    "actions": [
        {"remove": {"index":"index_t_settlement_info","alias":"t_settlement_info"}},
        {"add": {"index":"new_index_t_settlement_info","alias":"t_settlement_info"}}
    ]
}
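The swap is atomic because both actions travel in a single _aliases request: readers of the alias never see a moment with no index (or two) behind it. A sketch of building that payload client-side (the helper name is illustrative, not an Elasticsearch API):

```python
import json

def alias_swap_actions(old_index, new_index, alias):
    """Build an _aliases body that atomically moves `alias` to `new_index`."""
    return {
        "actions": [
            {"remove": {"index": old_index, "alias": alias}},
            {"add": {"index": new_index, "alias": alias}},
        ]
    }

body = alias_swap_actions(
    "index_t_settlement_info", "new_index_t_settlement_info", "t_settlement_info"
)
print(json.dumps(body))
```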

# Add a new field to an existing index (existing field mappings cannot be changed)
PUT index_t_settlement_info/_mapping
{
  "properties": {
    "user_id": {
      "type": "keyword"
    }
  }
}

Logstash Installation and Configuration

I. Install Logstash (RPM)

Download and install the public signing key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the yum repository configuration

vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Logstash

yum install logstash

II. Logstash configuration and start on boot

logstash.yml

vim /etc/logstash/logstash.yml
# Automatically reload pipeline configuration when it changes
config.reload.automatic: true
# How often to check for configuration changes
config.reload.interval: 3s

# Use a persistent (disk-backed) queue
queue.type: persisted
# Durability: force a checkpoint after every written event
queue.checkpoint.writes: 1
# Enable the dead letter queue
dead_letter_queue.enable: true

# Enable Logstash node monitoring
xpack.monitoring.enabled: true
# Elasticsearch username and password
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: elasticpassword
# Elasticsearch node address list
xpack.monitoring.elasticsearch.hosts: ["172.18.112.10", "172.18.112.11", "172.18.112.12"]
# Discover the other nodes in the Elasticsearch cluster
xpack.monitoring.elasticsearch.sniffing: true
# How often to ship monitoring data
xpack.monitoring.collection.interval: 10s
# Collect per-pipeline detail metrics
xpack.monitoring.collection.pipeline.details.enabled: true

Enable at boot

systemctl daemon-reload
systemctl enable logstash.service

Start and stop

systemctl start logstash.service
systemctl stop logstash.service
systemctl status logstash.service
systemctl restart logstash.service

III. Logstash directory layout

Type | Description | Default Location
home | Logstash home directory | /usr/share/logstash
bin | Binary scripts, including the Logstash launcher and logstash-plugin for installing plugins | /usr/share/logstash/bin
settings | Configuration files, including logstash.yml, jvm.options, and startup.options | /etc/logstash
config | Logstash pipeline configuration files | /etc/logstash/conf.d/*.conf
logs | Log files | /var/log/logstash
plugins | Local, non-Ruby-Gem plugin files; each plugin lives in its own subdirectory; recommended for development only | /usr/share/logstash/plugins
data | Data files used by Logstash and its plugins for any persistence needs | /var/lib/logstash

IV. Importing MySQL data into Elasticsearch

1. Download the Java MySQL driver package

Download a mysql-connector-java.jar driver compatible with your MySQL version

  • MySQL version: 5.7.20-log
  • Driver version: mysql-connector-java-5.1.48.tar.gz (any other recent 5.1.* release also works)
  • Official download: dev.mysql.com/downloads/connector/... (click "Looking for previous GA versions?" to find older releases)
  • Platform: choose the Platform Independent build

Create a directory for the Java driver

mkdir /usr/share/logstash/java

Upload the mysql-connector-java.jar driver

mv mysql-connector-java-5.1.48-bin.jar /usr/share/logstash/java

Change the owner of the java directory and its contents

chown -R logstash:logstash /usr/share/logstash/java

2. Task configuration (located in /etc/logstash/conf.d/*.conf)

  • Each MySQL-to-Elasticsearch import task gets its own configuration file
  • After a conf.d/*.conf file changes, there is no need to restart Logstash; the configuration is reloaded automatically (every 3 seconds)

Create a t_settlement_info directory to keep this task's files separate

mkdir /etc/logstash/conf.d/t_settlement_info

Create the t_settlement_info.conf configuration

vim /etc/logstash/conf.d/t_settlement_info/t_settlement_info.conf
input {
  jdbc {
    id => "t_settlement_info.input_jdbc"
    #Path to the JDBC driver (mkdir /usr/share/logstash/java)
    jdbc_driver_library => "/usr/share/logstash/java/mysql-connector-java-5.1.48-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    #Database connection settings
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/database"
    jdbc_user => "mysql_user"
    jdbc_password => "mysql_password"
    #Schedule (cron syntax); here the task runs once a minute
    schedule => "* * * * *"
    #SQL statement to execute
    statement => "SELECT * FROM t_settlement_info WHERE id > :sql_last_value ORDER BY id LIMIT 10000"
    #Whether to discard the previous run state
    clean_run => false
    #Enable value tracking; if true, tracking_column must be set (by default the timestamp is tracked)
    use_column_value => true
    #Column to track; here we track id
    tracking_column => "id"
    #Tracking column type: numeric or timestamp (default is numeric)
    tracking_column_type => "numeric"
    #Record the value from the last run
    record_last_run => true
    #Where the last-run value is stored (mkdir /usr/share/logstash/last-run-metadata)
    last_run_metadata_path => "/usr/share/logstash/last-run-metadata/.logstash_jdbc_last_run.t_settlement_info"
    #Whether to lowercase column names; unnecessary when they are already lowercase
    lowercase_column_names => false
  }
}

output {
  elasticsearch {
    id => "t_settlement_info.output_elasticsearch"
    hosts => ["172.18.112.10","172.18.112.11","172.18.112.12"]
    index => "t_settlement_info"
    action => "update"
    doc_as_upsert => true
    document_id => "%{id}"
    user => "elastic"
    password => "elasticpassword"
  }
}
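The jdbc input's incremental behaviour — run the statement on a schedule, substitute the stored value for :sql_last_value, then persist the highest id seen for the next run — can be illustrated with a small standard-library simulation (SQLite stands in for MySQL; table and column names are illustrative, not part of Logstash):

```python
import sqlite3

def fetch_batch(conn, last_value, limit=10000):
    """Mimic one scheduled run: rows with id > :sql_last_value, ordered, capped."""
    rows = conn.execute(
        "SELECT id, amount FROM t_settlement_info "
        "WHERE id > ? ORDER BY id LIMIT ?",
        (last_value, limit),
    ).fetchall()
    # Logstash stores the tracking column of the last row for the next run.
    new_last = rows[-1][0] if rows else last_value
    return rows, new_last

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_settlement_info (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO t_settlement_info VALUES (?, ?)",
    [(i, i * 1.5) for i in range(1, 6)],
)

rows, last = fetch_batch(conn, last_value=0, limit=3)       # first run: ids 1-3
rows2, last2 = fetch_batch(conn, last_value=last, limit=3)  # next run: ids 4-5
```

Because only rows past the stored id are selected, re-running the task never re-imports data, and the upsert output above makes retries idempotent.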

Create the last-run-metadata directory so that each task records its last tracked value in its own file (by default Logstash uses a single shared file)

# Create the directory
mkdir /usr/share/logstash/last-run-metadata

# Change the directory owner
chown -R logstash:logstash /usr/share/logstash/last-run-metadata

3. Pipeline configuration (located at /etc/logstash/pipelines.yml)

  • After pipelines.yml changes, there is no need to restart Logstash; it is reloaded automatically (every 3 seconds)

Custom pipeline configuration

vim /etc/logstash/pipelines.yml
# Default pipeline: all tasks share one queue and compete for execution order.
# Disabled here; enable it for tasks where throughput does not matter
#- pipeline.id: main
#  path.config: "/etc/logstash/conf.d/*.conf"

# Each task below gets its own pipeline, and therefore its own queue
- pipeline.id: t_settlement_info_11
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_11.conf"

- pipeline.id: t_settlement_info_22
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_22.conf"

- pipeline.id: t_settlement_info_33
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_33.conf"
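Each entry above becomes an independent pipeline with its own persistent queue. A quick way to sanity-check the id/path pairs before deploying — a hand-rolled sketch that only understands the flat two-key entries used here, not full YAML:

```python
def parse_pipelines(text):
    """Extract (pipeline.id, path.config) pairs from a flat pipelines.yml."""
    pipelines, current = [], None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line.startswith("- pipeline.id:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("path.config:") and current:
            path = line.split(":", 1)[1].strip().strip('"')
            pipelines.append((current, path))
            current = None
    return pipelines

sample = '''
- pipeline.id: t_settlement_info_11
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_11.conf"
- pipeline.id: t_settlement_info_22
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info_22.conf"
'''
pairs = parse_pipelines(sample)
```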
This work is licensed under the CC License; reposts must credit the author and link to this article.