A MySQL high-availability cluster based on keepalived dual VIPs
I. Machine preparation and IP address planning
ansible        192.168.1.124  (ansible 2.9.27)
prometheus     192.168.1.103
MySQL Router   8.0.28 (runs on the two keepalived machines below)
MySQL cluster
  master            192.168.1.150
  slave1            192.168.1.151
  slave2            192.168.1.152
  delayed backup    192.168.1.153
keepalived cluster
  192.168.1.148
  192.168.1.149
Project name: MySQL high-availability cluster based on MySQL Router and keepalived dual VIPs
Project environment: 8 servers (1 GB RAM, 1 core each), CentOS 7.9, MySQL 5.7.35, mysqlrouter 8.0.28, keepalived 1.3.5, Prometheus, Grafana, ansible
Project description:
The goal of this project is to build a highly available MySQL cluster with read/write splitting that keeps the business running stably, can be monitored conveniently as a whole, and can be deployed and managed in batches.
Project steps:
1. Install CentOS 7.9 on every machine, deploy ansible, and set up passwordless SSH from the ansible host to all other machines.
2. Deploy the Prometheus + Grafana monitoring system.
3. Install MySQL from the binary tarball on all database nodes with ansible, driven by a one-key installation script.
4. Use ansible to install mysqlrouter and keepalived on the two middleware servers to provide read/write splitting and high availability. Configure two VRRP instances in keepalived to obtain two VIPs, each node acting as MASTER for one VIP and BACKUP for the other, which spreads the load and improves availability.
5. Configure replication on the three MySQL servers and create the accounts used for read/write splitting, forming a cluster of one master plus two slaves (semi-sync + GTID); additionally deploy one delayed replica (30-second delay).
6. Optionally deploy MySQL MHA for automatic failover, so that when the master goes down a slave is promoted automatically.
7. Verify read/write splitting, high availability, and master-slave failover.
8. Load-test the whole MySQL cluster (CPU, IO, memory, etc.) with sysbench.
Project takeaways:
1. Bridged networking with static IPs works best for the cluster nodes.
2. Plan the whole architecture in advance, configure carefully, and prepare the scripts ahead of time, refining them as you go.
3. Firewalld and SELinux should be disabled (or properly opened up) on all nodes.
4. Gained a deeper understanding of MySQL clustering, read/write splitting, failover, and high availability.
5. Became more proficient with Prometheus monitoring and ansible automation.
6. The keepalived configuration requires extra care.
II. Deploying the MySQL cluster with ansible
1. Set up passwordless SSH
[root@ansible ansible]# ssh-keygen                      # generate a key pair; accept the defaults
[root@ansible ansible]# ssh-copy-id root@192.168.1.150  # copy the key to master
[root@ansible ansible]# ssh-copy-id root@192.168.1.151  # copy the key to slave1
[root@ansible ansible]# ssh-copy-id root@192.168.1.152  # copy the key to slave2
[root@ansible ansible]# ssh-copy-id root@192.168.1.153  # copy the key to backup
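To confirm the passwordless channel works, a quick check from the ansible host (a sketch; any of the four nodes can be used):
[root@ansible ansible]# ssh root@192.168.1.150 hostname   # should print the remote hostname without asking for a password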
2. Install ansible
[root@ansible ansible]# yum install epel-release -y
[root@ansible ansible]# yum install ansible -y
[root@ansible ansible]# cat hosts
[mysql]
192.168.1.150
192.168.1.151
192.168.1.152
192.168.1.153
[root@ansible ansible]# ansible mysql -m ping
192.168.1.150 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.1.152 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.1.151 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.1.153 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[root@ansible ansible]#
# The output above shows that ansible is installed and can reach every node.
3. Deploy the MySQL cluster with ansible
# Write a one-key MySQL installation script, as follows:
[root@ansible ansible]# cat onekey_install_mysql_binary.sh
#!/bin/bash
# Install build/runtime dependencies
yum install cmake ncurses-devel gcc gcc-c++ vim lsof bzip2 openssl-devel ncurses-compat-libs -y
#mkdir /mysql   # /mysql is already created by the ansible copy task that uploads the tarball
cd /mysql
# Unpack the MySQL binary tarball
tar xf mysql-5.7.35-linux-glibc2.12-x86_64.tar.gz
# Move the unpacked directory to /usr/local and rename it to mysql
mv mysql-5.7.35-linux-glibc2.12-x86_64 /usr/local/mysql
# Create the mysql group and user
groupadd mysql
# The mysql user gets /bin/false as its shell and belongs to the mysql group
useradd -r -g mysql -s /bin/false mysql
# Stop firewalld and keep it from starting at boot
service firewalld stop
systemctl disable firewalld
# Disable SELinux for the current session
setenforce 0
# Disable SELinux permanently
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
# Create the data directory
mkdir /data/mysql -p
# Hand /data/mysql over to the mysql user and group so mysqld can read and write it
chown mysql:mysql /data/mysql/
# Only the mysql user and group may access the data directory
chmod 750 /data/mysql/
# Change into the MySQL bin directory
cd /usr/local/mysql/bin/
# Initialize MySQL; the generated temporary password ends up in passwd.txt
./mysqld --initialize --user=mysql --basedir=/usr/local/mysql/ --datadir=/data/mysql &>passwd.txt
# Generate the certificates needed for SSL logins
./mysql_ssl_rsa_setup --datadir=/data/mysql/
# Extract the temporary password
tem_passwd=$(cat passwd.txt |grep "temporary"|awk '{print $NF}')
# $NF is the last field of the matching line
# abc=$(command) runs the command first and assigns its output to abc
# Add the MySQL bin directory to PATH
# for the current session ...
export PATH=/usr/local/mysql/bin/:$PATH
# ... and permanently, so it survives a reboot
echo 'PATH=/usr/local/mysql/bin:$PATH' >>/root/.bashrc
# Install the mysql.server script from support-files as /etc/init.d/mysqld
cp ../support-files/mysql.server /etc/init.d/mysqld
# Point datadir in /etc/init.d/mysqld (line 70) at /data/mysql
sed -i '70c datadir=/data/mysql' /etc/init.d/mysqld
# Generate /etc/my.cnf
cat >/etc/my.cnf <<EOF
[mysqld_safe]
[client]
socket=/data/mysql/mysql.sock
[mysqld]
socket=/data/mysql/mysql.sock
port = 3306
open_files_limit = 8192
innodb_buffer_pool_size = 512M
character-set-server=utf8
[mysql]
auto-rehash
prompt=\\u@\\d \\R:\\m mysql>
EOF
# Raise the open-file limit for the current shell ...
ulimit -n 1000000
# ... and make it take effect at boot as well
echo "ulimit -n 1000000" >>/etc/rc.local
chmod +x /etc/rc.d/rc.local
# Start mysqld
service mysqld start
# Register mysqld with the service manager
/sbin/chkconfig --add mysqld
# Enable mysqld at boot
/sbin/chkconfig mysqld on
# Changing the password for the first time requires --connect-expired-password;
# -e executes the given statement, which sets the root password to Sanchuang123#
mysql -uroot -p$tem_passwd --connect-expired-password -e "set password='Sanchuang123#';"
# Verify the new password: if the databases are listed, the change succeeded.
mysql -uroot -p'Sanchuang123#' -e "show databases;"
# Use ansible to push the MySQL tarball to every node.
[root@ansible ansible]# ansible mysql -m copy -a 'src=/ansible/mysql-5.7.35-linux-glibc2.12-x86_64.tar.gz dest=/mysql/'
# Write a playbook that runs the installation script on every node.
[root@ansible ansible]# cat onekey_install_mysql.yaml
---
- name: install mysql
  hosts: mysql
  gather_facts: yes
  tasks:
    - name: install mysql
      script: /ansible/onekey_install_mysql_binary.sh
[root@ansible ansible]# ansible-playbook onekey_install_mysql.yaml
# At this point MySQL is installed on all four nodes.
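A quick sanity check with an ad-hoc command confirms mysqld is running on every node (a sketch; mysqld is the init script installed by the script above):
[root@ansible ansible]# ansible mysql -m shell -a 'service mysqld status'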
III. MySQL Router read/write splitting
MySQL Router is official open-source middleware from MySQL that manages and routes connections to MySQL servers. Placed between client applications and the database servers, it provides connection routing, load balancing, failover, and read/write splitting, which improves the scalability and availability of the database layer. Because it speaks the ordinary MySQL protocol, existing applications and connectors can use it without code changes, and its logging and monitoring options make the middle layer easy to operate and maintain.
1. Prepare two fresh Linux servers and install mysqlrouter on both
proxy-1: 192.168.1.148
proxy-2: 192.168.1.149
2. Edit the mysqlrouter configuration file
proxy-1 configuration:
[DEFAULT]
user = mysql
logging_folder = /data/mysql/mysql-router-8.0.28/log
plugin_folder = /data/mysql/mysql-router-8.0.28/lib/mysqlrouter
runtime_folder = /data/mysql/mysql-router-8.0.28/run
data_folder = /data/mysql/mysql-router-8.0.28/data

[logger]
level = INFO
filename = mysqlrouter.log

# the name after "routing:" can be anything
[routing:read_write]
# bind to 0.0.0.0 so the router is reachable on any local IP
bind_address = 0.0.0.0
bind_port = 7001
# read-write traffic
mode = read-write
# ip:port of the mysql master
destinations = 192.168.1.150:3306
max_connections = 65535
max_connect_errors = 100
client_connect_timeout = 9

[routing:read_only]
bind_address = 0.0.0.0
bind_port = 7002
# read-only traffic
mode = read-only
# ip:port of the mysql servers used for reads
destinations = 192.168.1.150:3306,192.168.1.151:3306,192.168.1.152:3306
max_connections = 65535
max_connect_errors = 100
client_connect_timeout = 9
proxy-2 configuration:
[DEFAULT]
user = mysql
logging_folder = /data/mysql/mysql-router-8.0.28/log
plugin_folder = /data/mysql/mysql-router-8.0.28/lib/mysqlrouter
runtime_folder = /data/mysql/mysql-router-8.0.28/run
data_folder = /data/mysql/mysql-router-8.0.28/data

[logger]
level = INFO
filename = mysqlrouter.log

# the name after "routing:" can be anything
[routing:read_write]
bind_address = 0.0.0.0
bind_port = 7001
# read-write traffic
mode = read-write
# ip:port of the mysql master
destinations = 192.168.1.150:3306
max_connections = 65535
max_connect_errors = 100
client_connect_timeout = 9

[routing:read_only]
bind_address = 0.0.0.0
bind_port = 7002
# read-only traffic
mode = read-only
# ip:port of the mysql servers used for reads
destinations = 192.168.1.150:3306,192.168.1.151:3306,192.168.1.152:3306
max_connections = 65535
max_connect_errors = 100
client_connect_timeout = 9
3. Start MySQL Router (on both proxy machines)
[root@keep-02 conf]# mysqlrouter -c /data/mysql/mysql-router-8.0.28/conf/mysqlrouter.conf
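To verify the router is up, check that both routing ports are listening (a sketch; 7001 and 7002 come from the config above):
[root@keep-02 conf]# netstat -anplut | grep -E ':7001|:7002'    # both ports should show mysqlrouter in LISTEN state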
IV. Asynchronous replication between master and slaves
Install MySQL
One master, two slaves, and one backup node are needed.
MySQL can be installed from the binary tarball (as in the previous section) or with yum; the yum route needs the epel repository (or the MySQL community repository) configured first.
Since there are four machines, ansible can be used to install them in batch.
Install the epel repository:
yum install -y epel-release
Install MySQL with yum:
yum -y install mysql-server
After installation, start the MySQL service:
service mysqld start
# enable it at boot
systemctl enable mysqld.service
1. First dump all databases on the master with mysqldump so the slaves can be loaded from the same data set, which keeps master and slaves consistent.
mysqldump -uroot -p'xzx527416' --all-databases >/backup/all_sql.db
# change to the /backup directory and the dump file all_sql.db is there
2. Copy all_sql.db to slave1, slave2, and the backup machine with scp (ansible would also work).
scp /backup/all_sql.db 192.168.1.151:/root
scp /backup/all_sql.db 192.168.1.152:/root
scp /backup/all_sql.db 192.168.1.153:/root
Import the data on the slaves and backup:
[root@localhost ~]# mysql -uroot -p'xzx527416' < all_sql.db
mysql: [Warning] Using a password on the command line interface can be insecure.
3. Enable the binary log and set server_id on the master and slaves
master configuration:
# binary log
log_bin
server_id = 1
slave1 configuration:
# binary log
log-bin
server_id = 2
slave2 configuration:
# enable the binary log
log_bin
server_id = 3
backup configuration:
# enable the binary log
log_bin
server_id = 4
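These directives go in the [mysqld] section of /etc/my.cnf on each node, and mysqld must be restarted for them to take effect (a sketch using the master's values):
[mysqld]
log_bin
server_id = 1

# then, on each node:
service mysqld restart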
4. Create a user with replication privileges on the master, then configure the slaves to replicate with that user.
root@mysql 10:31 mysql>create user 'xu'@'%' identified by '123456';
Query OK, 0 rows affected (1.01 sec)
root@mysql 10:40 mysql>grant replication slave on *.* to 'xu'@'%';
Query OK, 0 rows affected (0.00 sec)
Check the master's status:
root@(none) 18:06 mysql>show master status;
+----------------------+----------+--------------+------------------+--------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+----------------------+----------+--------------+------------------+--------------------------------------------+
| localhost-bin.000008 | 194 | | | cc75c304-cd41-11ed-889b-000c290990b9:1-214 |
+----------------------+----------+--------------+------------------+--------------------------------------------+
1 row in set (0.00 sec)
On the slaves and backup, point replication at the master using this user:
root@(none) 16:07 mysql>CHANGE MASTER TO MASTER_HOST='192.168.1.150' ,
    -> MASTER_USER='xu',
    -> MASTER_PASSWORD='123456',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='localhost-bin.000008',
    -> MASTER_LOG_POS=194;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
Start the I/O and SQL threads on the slaves:
start slave;
root@(none) 18:39 mysql>show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.150
Master_User: xu
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: localhost-bin.000008
Read_Master_Log_Pos: 194
Relay_Log_File: localhost-relay-bin.000002
Relay_Log_Pos: 324
Relay_Master_Log_File: localhost-bin.000008
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Test:
Drop a database on the master:
root@(none) 18:38 mysql>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| mysqlslap |
| performance_schema |
| sc |
| sys |
| xw |
+--------------------+
7 rows in set (0.00 sec)
root@(none) 18:42 mysql>drop database xw;
Query OK, 0 rows affected (0.01 sec)
root@(none) 18:42 mysql>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| mysqlslap |
| performance_schema |
| sc |
| sys |
+--------------------+
The same database has also been removed from the slave:
root@(none) 18:40 mysql>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| mysqlslap |
| performance_schema |
| sc |
| sys |
+--------------------+
6 rows in set (0.00 sec)
Asynchronous replication is now working.
V. GTID-based master-slave replication
1. Stop replication on the slaves
stop slave;
2. Add the GTID settings to the [mysqld] section on the master and slaves:
gtid-mode=on
enforce-gtid-consistency = on
3. Restart the MySQL service on the master and slaves
service mysqld restart
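After the restart, replication is typically repointed with GTID auto-positioning instead of a fixed binlog file and offset (a sketch, run on each slave; xu is the replication user created earlier):
stop slave;
CHANGE MASTER TO MASTER_HOST='192.168.1.150',
  MASTER_USER='xu',
  MASTER_PASSWORD='123456',
  MASTER_PORT=3306,
  MASTER_AUTO_POSITION=1;
start slave;
show slave status\G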
VI. Delayed replication for the backup node
1. Stop the slave threads on backup
stop slave;
2. Set the delay on the backup node (30 seconds here), then start the slave again
CHANGE MASTER TO MASTER_DELAY = 30;
start slave;
show slave status\G
3. Test the effect
Drop a database on the master:
root@(none) 19:17 mysql>drop database sc;
Query OK, 2 rows affected (0.02 sec)
The database disappears from the slaves immediately, but only disappears from backup about 30 seconds later.
# this is the backup node's mysql
root@(none) 19:13 mysql>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| mysqlslap |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.01 sec)
VII. Dual-VIP high-availability cluster
1. Setup
# High availability is added on the same two machines that run mysqlrouter
keepalived-1: 192.168.1.148
keepalived-2: 192.168.1.149
On keep-01:
[root@keep-01 /]# yum install keepalived -y
[root@keep-01 keepalived]# cd /etc/keepalived/
[root@keep-01 keepalived]# ls
keepalived.conf
[root@keep-01 keepalived]# vim keepalived.conf
[root@keep-01 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 58
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.240
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 59
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.241
    }
}
[root@keep-01 keepalived]#
On keep-02:
[root@keep-02 /]# yum install keepalived -y
[root@keep-02 /]# cd /etc/keepalived/
[root@keep-02 keepalived]# ls
keepalived.conf
[root@keep-02 keepalived]# vim keepalived.conf
[root@keep-02 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 58
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.240
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 59
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.241
    }
}
[root@keep-02 keepalived]#
2. Start keepalived
[root@keep-01 keepalived]# service keepalived restart
Redirecting to /bin/systemctl restart keepalived.service
[root@keep-01 keepalived]# ps aux | grep keepalived
root 4728 0.0 0.1 123012 1408 ? Rs 18:35 0:00 /usr/sbin/keepalived -D
root 4729 0.0 0.2 127212 2452 ? S 18:35 0:00 /usr/sbin/keepalived -D
root 4730 0.0 0.2 127212 2468 ? S 18:35 0:00 /usr/sbin/keepalived -D
root 4740 0.0 0.0 112824 980 pts/0 R+ 18:35 0:00 grep --color=auto keepalived
# Repeat the same steps on the second keepalived node.
3. Dual-VIP result
# on keep-01
[root@keep-01 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:bf:65:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.148/24 brd 192.168.1.255 scope global noprefixroute dynamic ens33
valid_lft 4193sec preferred_lft 4193sec
inet 192.168.1.240/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::803c:dca6:b254:e8a4/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::eb4c:4d26:bf10:da92/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@keep-01 keepalived]#
# on keep-02
[root@keep-02 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:3c:25:77 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.149/24 brd 192.168.1.255 scope global noprefixroute dynamic ens33
valid_lft 4194sec preferred_lft 4194sec
inet 192.168.1.241/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::eb4c:4d26:bf10:da92/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::803c:dca6:b254:e8a4/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::bac:77c2:cf4a:8b22/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@keep-02 keepalived]#
# Both VIPs are now active, one on each node.
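A simple failover check (a sketch): stop keepalived on keep-01 and both VIPs should move to keep-02; start it again and 192.168.1.240 fails back to keep-01.
[root@keep-01 keepalived]# service keepalived stop
[root@keep-02 keepalived]# ip a | grep -E '192.168.1.240|192.168.1.241'   # both VIPs should now be on keep-02
[root@keep-01 keepalived]# service keepalived start                       # 192.168.1.240 returns to keep-01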
VIII. Setting up the monitoring system
1. Install Prometheus
Step 1: install the prometheus server
Install from the pre-built release tarball.
1. Upload the downloaded tarball to the Linux server
[root@sc-prom ~]# mkdir /prom
[root@sc-prom ~]# cd /prom
[root@sc-prom prom]# ls
prometheus-2.34.0.linux-amd64.tar.gz
[root@sc-prom prom]#
2. Unpack the tarball
[root@sc-prom prom]# tar xf prometheus-2.34.0.linux-amd64.tar.gz
[root@sc-prom prom]# ls
prometheus-2.34.0.linux-amd64 prometheus-2.34.0.linux-amd64.tar.gz
[root@sc-prom prom]#
3. Rename the unpacked directory
[root@sc-prom prom]# mv prometheus-2.34.0.linux-amd64 prometheus
[root@sc-prom prom]# ls
prometheus prometheus-2.34.0.linux-amd64.tar.gz
[root@sc-prom prom]#
# Add the prometheus directory to PATH, both for the current session and permanently
[root@sc-prom prometheus]# PATH=/prom/prometheus:$PATH
[root@sc-prom prometheus]# cat /root/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
PATH=/prom/prometheus:$PATH     # added
Run the prometheus binary:
[root@prometheus prometheus]# nohup prometheus --config.file=/prom/prometheus/prometheus.yml &
[1] 8431
[root@prometheus prometheus]# nohup: ignoring input and appending output to 'nohup.out'
[root@prometheus prometheus]#
# Check the prometheus process
[root@prometheus prometheus]# ps aux|grep prome
root 8431 4.5 2.4 782084 46204 pts/0 Sl 11:21 0:00 prometheus --config.file=/prom/prometheus/prometheus.yml
root 8439 0.0 0.0 112824 980 pts/0 S+ 11:21 0:00 grep --color=auto prome
[root@prometheus prometheus]#
# Check the port prometheus is listening on
[root@prometheus prometheus]# netstat -anplut|grep prome
tcp6 0 0 :::9090 :::* LISTEN 8431/prometheus
tcp6 0 0 ::1:9090 ::1:51738 ESTABLISHED 8431/prometheus
tcp6 0 0 ::1:51738 ::1:9090 ESTABLISHED 8431/prometheus
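Running prometheus under nohup works for now, but it does not survive a reboot. An optional sketch of a systemd unit (paths follow the layout above; the unit name is an arbitrary choice):
[root@prometheus prometheus]# cat /usr/lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus server
After=network.target

[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
# stop the copy started with nohup before starting the service
[root@prometheus prometheus]# systemctl daemon-reload
[root@prometheus prometheus]# systemctl enable prometheus
[root@prometheus prometheus]# systemctl start prometheus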
2. Browse to the Prometheus web UI (port 9090) to verify
3. Install and start node_exporter on the nodes
1. Download node_exporter-1.4.0-rc.0.linux-amd64.tar.gz and upload it to each node to be monitored
2. Unpack it
[root@keep-01 ~]# ls
anaconda-ks.cfg node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@keep-01 ~]# tar xf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@keep-01 ~]# ls
node_exporter-1.4.0-rc.0.linux-amd64
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
Move it into its own folder, /node_exporter
[root@keep-01 ~]# mv node_exporter-1.4.0-rc.0.linux-amd64 /node_exporter
[root@keep-01 ~]#
[root@keep-01 ~]# cd /node_exporter/
[root@keep-01 node_exporter]# ls
LICENSE node_exporter NOTICE
[root@keep-01 node_exporter]#
# Add it to PATH
[root@keep-01 node_exporter]# PATH=/node_exporter/:$PATH
[root@keep-01 node_exporter]# vim /root/.bashrc
[root@keep-01 node_exporter]# tail -1 /root/.bashrc
PATH=/node_exporter/:$PATH
# Run the node_exporter agent
[root@keep-01 node_exporter]# nohup node_exporter --web.listen-address 0.0.0.0:8090 &
[root@keep-01 node_exporter]# ps aux | grep node_exporter
root 64281 0.0 2.1 717952 21868 pts/0 Sl 19:03 0:04 node_exporter --web.listen-address 0.0.0.0:8090
root 82787 0.0 0.0 112824 984 pts/0 S+ 20:46 0:00 grep --color=auto node_exporter
[root@keep-01 node_exporter]# netstat -anplut | grep 8090
tcp6 0 0 :::8090 :::* LISTEN 64281/node_exporter
tcp6 0 0 192.168.17.152:8090 192.168.17.156:43576 ESTABLISHED 64281/node_exporter
[root@lb-1 node_exporter]#
# Configure the other nodes the same way; ansible can roll this out in batch, as sketched below.
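A sketch of that batch rollout (the playbook name is hypothetical; the paths and port 8090 follow the manual steps above, and the target hosts are assumed to be grouped in the ansible inventory):
[root@ansible ansible]# cat deploy_node_exporter.yaml
---
- name: deploy node_exporter
  hosts: mysql
  tasks:
    - name: copy the unpacked node_exporter directory to each node
      copy:
        src: /node_exporter
        dest: /
        mode: preserve
    - name: start node_exporter on port 8090
      shell: nohup /node_exporter/node_exporter --web.listen-address 0.0.0.0:8090 >/var/log/node_exporter.log 2>&1 &
[root@ansible ansible]# ansible-playbook deploy_node_exporter.yaml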
4. Browse to each node_exporter (port 8090) to verify
5. Add the node targets on the prometheus server
[root@prometheus prometheus]# pwd
/prom/prometheus
[root@prometheus prometheus]# cat prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "keepalived-01"
    static_configs:
      - targets: ["192.168.1.148:8090"]
  - job_name: "keepalived-02"
    static_configs:
      - targets: ["192.168.1.149:8090"]
  - job_name: "mysql-master"
    static_configs:
      - targets: ["192.168.1.150:8090"]
  - job_name: "mysql-slave1"
    static_configs:
      - targets: ["192.168.1.151:8090"]
  - job_name: "mysql-slave2"
    static_configs:
      - targets: ["192.168.1.152:8090"]
[root@prometheus prometheus]#
6. Check the Prometheus targets page to verify the new jobs are being scraped
7. Install grafana
# Upload the grafana rpm package and install it with yum
[root@prometheus yum.repos.d]# cd /grafana/
[root@prometheus grafana]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm
[root@prometheus grafana]# yum install grafana-enterprise-9.1.2-1.x86_64.rpm -y
# Start grafana-server; it listens on port 3000 by default
[root@prometheus grafana]# service grafana-server start
Starting grafana-server (via systemctl):                   [  OK  ]
[root@prometheus grafana]# netstat -anplut | grep grafana
tcp 0 0 192.168.17.156:55100 34.120.177.193:443 ESTABLISHED 12270/grafana-serve
tcp6 0 0 :::3000 :::* LISTEN 12270/grafana-serve
8. Monitoring dashboards
IX. Testing
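The connection tests below go through the VIPs and the router ports using two application accounts, read-write and read-only. A minimal sketch of creating those accounts on the master (they replicate to the slaves; the account names and passwords are taken from the test commands below, and the broad grants are only meant for this lab setup):
create user 'read-write'@'%' identified by '123456';
grant all on *.* to 'read-write'@'%';
create user 'read-only'@'%' identified by '123456';
grant select on *.* to 'read-only'@'%';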
[root@ansible ~]# mysql -h 192.168.1.240 -P 7001 -uread-write -p'123456'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 20
Server version: 5.7.35-log MySQL Community Server (GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
read-write@(none) 20:30 mysql>
mysql -h 192.168.1.241 -P 7002 -uread-only -p'123456'
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 22
Server version: 5.7.35-log MySQL Community Server (GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
sysbench can be used to load-test the whole cluster, as sketched below.
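A sketch of such a test through the read/write VIP and port (the sbtest database, table count, and table sizes are arbitrary choices; sysbench 1.0 syntax):
# create the test database first
mysql -h 192.168.1.240 -P 7001 -uread-write -p'123456' -e "create database sbtest;"
# prepare the test tables, run a 60-second read/write workload, then clean up
sysbench oltp_read_write --mysql-host=192.168.1.240 --mysql-port=7001 --mysql-user=read-write --mysql-password=123456 --mysql-db=sbtest --tables=4 --table-size=10000 prepare
sysbench oltp_read_write --mysql-host=192.168.1.240 --mysql-port=7001 --mysql-user=read-write --mysql-password=123456 --mysql-db=sbtest --tables=4 --table-size=10000 --threads=8 --time=60 run
sysbench oltp_read_write --mysql-host=192.168.1.240 --mysql-port=7001 --mysql-user=read-write --mysql-password=123456 --mysql-db=sbtest --tables=4 --table-size=10000 cleanup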