Environment
OS version: CentOS 7 x86_64
IP address: 10.16.5.12
Initialize the operating system. Basic setup is skipped here; pay attention to SELinux, time synchronization, the firewall, the time zone, and the maximum open-file and process limits.
Install MongoDB. The steps below use yum; I actually installed from binaries with an automated script.
[root@master ~]# cat > /etc/yum.repos.d/mongodb.repo <<EOF
[mongodb]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc
EOF
[root@master ~]# yum install -y mongodb-org
[root@master ~]# rpm -ql mongodb-org-server    # the default configuration is used here
[root@master ~]# systemctl start mongod
[root@master ~]# systemctl enable mongod
Alternatively, install with an automated script.
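Before pointing Graylog at MongoDB it can be worth confirming that mongod is actually reachable. A minimal sketch, assuming the default mongod bind address and port shown above (the `port_open` helper is my own illustration, not part of any MongoDB tooling):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# mongod listens on 127.0.0.1:27017 by default
if port_open("127.0.0.1", 27017):
    print("mongod is reachable")
else:
    print("mongod is NOT reachable -- check `systemctl status mongod`")
```

This only verifies the TCP handshake; a full health check would issue a `ping` command through a MongoDB client.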
Graylog installation
Graylog 2.4 does not support Elasticsearch 6.x, so Elasticsearch 5.6.10 is used here. See my other post for the Elasticsearch installation.

# Install Graylog. Check the official site for the latest release -- newer versions have more features. Here we install from the RPM repository.
[root@master ~]# cat > /etc/yum.repos.d/graylog.repo <<EOF
[graylog]
name=graylog
baseurl=https://packages.graylog2.org/repo/el/stable/2.4/\$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-graylog
EOF
[root@master ~]# yum install -y graylog-server
[root@master ~]# yum install -y epel-release
[root@master ~]# yum install -y pwgen
[root@master ~]# pwgen -N 1 -s 96    # generate a random password_secret
Z2LoxxeFvWoAPbMF0sIlYWhHH06leW6bfUAeImqhUe86Wzq8p4HDZAyKTQpaedvBuCoKYjaQAGQTj93R33sREiSIVt1sTRg0
[root@master ~]# echo -n admin | sha256sum    # hash the admin password ("admin")
8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 -

The configuration below is provided for reference:
[root@master ~]# cp /etc/graylog/server/server.conf{,.bak}
[root@master ~]# cat > /etc/graylog/server/server.conf <<EOF
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = Z2LoxxeFvWoAPbMF0sIlYWhHH06leW6bfUAeImqhUe86Wzq8p4HDZAyKTQpaedvBuCoKYjaQAGQTj93R33sREiSIVt1sTRg0
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
root_timezone = PRC
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.16.47.133:9000/api/
web_listen_uri = http://10.16.47.133:9000
web_endpoint_uri = http://graylog.abc.com/api/
rotation_strategy = count
elasticsearch_hosts = http://10.16.47.133:9200
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://127.0.0.1:27017/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
EOF

Note: when a reverse proxy sits in front of the Graylog machine, you must add the following setting, which tells the browser which API endpoint to send requests to:
web_endpoint_uri = http://graylog.abc.com/api/
The proxy machine needs its own configuration as well; see:
http://docs.graylog.org/en/2.4/pages/configuration/web_interface.html#using-a-layer-3-load-balancer-forwarding-tcp-ports

# If java is not at /usr/bin/java, change the java path in the config to the actual location.
# Adjust Graylog's minimum and maximum heap sizes in the same file.
[root@master ~]# vim /etc/sysconfig/graylog-server
# Path to the java executable.
JAVA=/usr/local/jdk/bin/java
# Default Java options for heap and garbage collection.
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx4g"

[root@master ~]# systemctl enable graylog-server
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart graylog-server

Access test (the Graylog server starts slowly; wait a few minutes).
It listens on port 9000. Login credentials: admin/admin
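The two secrets in server.conf can also be produced without pwgen. A small Python equivalent of the `pwgen -N 1 -s 96` and `echo -n admin | sha256sum` steps above (the alphanumeric alphabet is an assumption; pwgen's `-s` mode draws from a similar character set):

```python
import hashlib
import secrets
import string

# Equivalent of `pwgen -N 1 -s 96`: a 96-character random password_secret
alphabet = string.ascii_letters + string.digits
password_secret = "".join(secrets.choice(alphabet) for _ in range(96))
print("password_secret    =", password_secret)

# Equivalent of `echo -n admin | sha256sum`: the root_password_sha2 value
root_password_sha2 = hashlib.sha256(b"admin").hexdigest()
print("root_password_sha2 =", root_password_sha2)
# -> 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```

Note that `echo -n` matters: hashing `admin\n` instead of `admin` would produce a different digest and lock you out of the web interface.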
Optimization
Graylog service tuning
Symptom: log output is slow, i.e. input is faster than output. /var/lib/graylog-server/journal/messagejournal-0 holds the buffered intermediate messages; a large file there indicates a backlog.
http://docs.graylog.org/en/2.4/pages/configuration/server.conf.html
Per the documentation, the two settings output_batch_size (max. number of messages sent to Elasticsearch in a batch) and outputbuffer_processors (number of threads working on the output buffer) have direct influence on the throughput of messages to Elasticsearch.
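As a back-of-the-envelope check on those two settings, a sketch estimating the ceiling on indexing throughput with the values from server.conf above. This assumes each output processor ships at most one full batch per flush interval and ignores Elasticsearch-side latency, so it is an upper bound, not a measurement:

```python
# Values from the server.conf shown earlier
output_batch_size = 500        # messages per bulk request to Elasticsearch
outputbuffer_processors = 3    # threads draining the output buffer
output_flush_interval = 1      # seconds between forced flushes

# Rough ceiling on messages/second flowing into Elasticsearch
max_throughput = output_batch_size * outputbuffer_processors / output_flush_interval
print(f"~{max_throughput:.0f} messages/s ceiling")  # ~1500 messages/s
```

If the journal keeps growing while sustained input exceeds this figure, raising output_batch_size or outputbuffer_processors is the first knob to try.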
Scheduled Kafka log cleanup
Logs ingested via Kafka are not deleted automatically; they stay on local disk indefinitely.
Cleaning up Kafka logs and topic data: https://blog.csdn.net/qiaqia609/article/details/78899298

# Stop Kafka, then delete the log files
cd /home/andblog/runtime/kafka_2.12-1.1.0; bin/kafka-server-stop.sh
# Remove the corresponding nodes in ZooKeeper
cd /home/andblog/runtime/zookeeper-3.4.9; bin/zkCli.sh -server 10.16.15.192:2181,10.16.15.193:2181,10.16.15.194:2181
# Restart Kafka
cd /home/andblog/runtime/kafka_2.12-1.1.0; bin/kafka-server-start.sh -daemon config/server.properties

# Delete a topic: list topics first, then delete; also remove the queue logs and the topic's directory inside ZooKeeper
cd /home/andblog/runtime/kafka_2.12-1.1.0; bin/kafka-topics.sh --list --zookeeper 10.16.15.192:2181
bin/kafka-topics.sh --delete --zookeeper 10.16.15.192:2181 --topic applogs

For scheduled Kafka cleanup alongside Graylog, see the blog post linked above; you can also throttle the rate at which logs are ingested. Kafka must be restarted before data flows again; if too many files have accumulated, delete the data directory. Set the retention policy in server.properties:
log.cleanup.policy=delete
log.retention.hours=24
log.retention.bytes=52857600
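The effect of the delete cleanup policy above can be illustrated with a short sketch: a log segment becomes eligible for deletion once it is older than log.retention.hours, or once its partition has grown past log.retention.bytes, whichever limit trips first. The function below is my own illustration of that either/or rule, not Kafka's actual implementation:

```python
RETENTION_HOURS = 24          # log.retention.hours
RETENTION_BYTES = 52_857_600  # log.retention.bytes (~50 MB)

def segment_deletable(age_hours: float, partition_bytes: int) -> bool:
    """True if either retention limit is exceeded, mirroring
    log.cleanup.policy=delete with both time and size limits set."""
    return age_hours > RETENTION_HOURS or partition_bytes > RETENTION_BYTES

print(segment_deletable(30, 10_000_000))  # older than 24h          -> True
print(segment_deletable(2, 60_000_000))   # partition over size cap -> True
print(segment_deletable(2, 10_000_000))   # within both limits      -> False
```

In other words, with these settings data is kept for at most a day and roughly 50 MB per partition, which keeps the disk usage of a busy topic bounded.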