Setting Up a RabbitMQ Cluster on CentOS 7 (Mirrored Mode)

0x00 Preparation

0. Architecture

  • Three machines for the RabbitMQ mirrored cluster
  • Two machines for HAProxy load balancing
  • Two machines for Keepalived (sharing the HAProxy machines), removing HAProxy as a single point of failure

1. RabbitMQ ports

  • 5672: AMQP client/API traffic
  • 15672: web management UI (requires enabling the rabbitmq_management plugin manually)
  • 25672: inter-node and CLI tool communication (this port must be listening for rabbitmqctl commands to work)

2. Machine list

  • 192.168.20.21: rabbitmq01 (RabbitMQ cluster master node)
  • 192.168.20.22: rabbitmq02
  • 192.168.20.23: rabbitmq03
  • 192.168.20.151: haka01 (HAProxy + Keepalived master node)
  • 192.168.20.152: haka02 (HAProxy + Keepalived backup node)
  • 192.168.20.150: not a machine; the virtual IP (VIP), configured in Keepalived and dynamically bound to haka01/haka02

3. Versions

Install the latest versions via yum.

4. Operating system: CentOS 7.8 or CentOS 7.9

5. RabbitMQ cluster name

By default, the cluster takes the name of the first node.

0x01 Installing RabbitMQ

1. Server preparation

1.1 Edit /etc/hosts
192.168.20.21  rabbitmq01
192.168.20.22  rabbitmq02
192.168.20.23  rabbitmq03
1.2 Disable SELinux
  • Permanently (takes effect after a reboot):

    # vim /etc/selinux/config
    SELINUX=disabled
  • Temporarily:

    setenforce 0
1.3 Disable the firewall

Stop the firewall (you can re-enable it and open the required ports once the setup is complete):

systemctl stop firewalld
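If you do turn the firewall back on later, note that besides the three ports listed earlier, Erlang clustering also relies on epmd on port 4369. A possible firewalld rule set (a sketch, assuming the default zone):

```shell
# Re-enable the firewall with the RabbitMQ-related ports allowed
# (4369 is epmd, required for inter-node communication):
systemctl start firewalld
firewall-cmd --permanent --add-port=4369/tcp
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --permanent --add-port=15672/tcp
firewall-cmd --permanent --add-port=25672/tcp
firewall-cmd --reload
```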
1.4 Set the hostname (all RabbitMQ machines)
192.168.20.21:hostnamectl set-hostname rabbitmq01
192.168.20.22:hostnamectl set-hostname rabbitmq02
192.168.20.23:hostnamectl set-hostname rabbitmq03
  • Optionally, create a /data directory and mount a dedicated partition on it for storage

2. Install Erlang and RabbitMQ

2.1 Configure the repositories with the official scripts
  • Run the official script to configure the Erlang repository, then install Erlang (official link):

    curl -s https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh | sudo bash
    yum install -y erlang
  • Run the official script to configure the rabbitmq-server repository, then install rabbitmq-server (official link):

    curl -s https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh | sudo bash
    yum install -y rabbitmq-server

3. RabbitMQ configuration

3.1 All nodes: create the RabbitMQ data directory

Create the directory that will replace the default data location:

mkdir -p /data/rabbitmq
chown -R rabbitmq:rabbitmq /data/rabbitmq
3.2 All nodes: create the environment file

Create /etc/rabbitmq/rabbitmq-env.conf; this file does not exist by default.

[root@rabbitmq03 ~]# vim /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_LOG_BASE=/data/rabbitmq/log
RABBITMQ_MNESIA_BASE=/data/rabbitmq/mnesia
3.3 All nodes: start the service and enable the web management plugin

Start rabbitmq-server, then enable the web UI with rabbitmq-plugins enable rabbitmq_management:

[root@rabbitmq03 ~]# systemctl start rabbitmq-server
[root@rabbitmq03 ~]# rabbitmq-plugins enable rabbitmq_management
Enabling plugins on node rabbit@rabbitmq03:
rabbitmq_management
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@rabbitmq03...
The following plugins have been enabled:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch
  
started 3 plugins.
[root@rabbitmq03 ~]# netstat -lntp|grep 5672
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      5612/beam.smp       
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      5612/beam.smp       
tcp6       0      0 :::5672                 :::*                    LISTEN      5612/beam.smp

At this point, verify that the management UI of each node is reachable, e.g. http://192.168.20.21:15672/
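A quick scripted check of all three nodes (a sketch; once the management plugin is enabled, each node should answer with HTTP 200):

```shell
# Print the HTTP status code of each node's management UI:
for ip in 192.168.20.21 192.168.20.22 192.168.20.23; do
    curl -s -o /dev/null -w "$ip: %{http_code}\n" "http://$ip:15672/"
done
```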

0x02 Cluster Configuration

The first node (rabbitmq01) acts as the master; rabbitmq02 and rabbitmq03 join it as follower nodes.

1. RabbitMQ cluster configuration

1.1 All nodes: stop the rabbitmq service
[root@rabbitmq01 ~]# rabbitmqctl stop
Stopping and halting node rabbit@rabbitmq01 ...
[root@rabbitmq01 ~]# netstat -lntp|grep 5672

Note: at this point the rabbitmq process has fully exited and none of the three ports (5672/15672/25672) is listening.

1.2 Follower nodes (rabbitmq02/03): copy the cookie file

Copy /var/lib/rabbitmq/.erlang.cookie from rabbitmq01 into the same directory on the other nodes.

[root@rabbitmq01 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@192.168.20.22:/var/lib/rabbitmq/
.erlang.cookie                                                                           100%   20    14.8KB/s   00:00
[root@rabbitmq01 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@192.168.20.23:/var/lib/rabbitmq/
.erlang.cookie                                                                           100%   20    14.8KB/s   00:00
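The cookie must be byte-identical on all nodes, owned by the rabbitmq user, and readable only by its owner; otherwise the node refuses to start. After copying, on rabbitmq02/03:

```shell
# scp as root may change ownership; restore it along with the permissions:
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
# The checksum must match on all three nodes:
md5sum /var/lib/rabbitmq/.erlang.cookie
```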
1.3 All nodes: start in detached mode

Start rabbitmq-server in detached mode (this step is the basis for joining the cluster; any node added later also starts from here):

[root@rabbitmq03 ~]# rabbitmq-server -detached
[root@rabbitmq03 ~]# netstat -lntp|grep 5672
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      31221/beam.smp

Note: at this point the rabbitmq process listens only on port 25672.

1.4 Follower nodes (rabbitmq02/03): join the cluster

Disk vs. RAM nodes: --ram joins the node as a RAM node, --disc (the default) as a disk node.

[root@rabbitmq02 ~]# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq02 ...
[root@rabbitmq02 ~]# rabbitmqctl join_cluster --ram rabbit@rabbitmq01
Clustering node rabbit@rabbitmq02 with rabbit@rabbitmq01
[root@rabbitmq02 ~]# rabbitmqctl start_app
Starting node rabbit@rabbitmq02 ...
[root@rabbitmq02 ~]# netstat -lntp|grep 5672
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      32650/beam.smp      
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      32650/beam.smp      
tcp6       0      0 :::5672                 :::*                    LISTEN      32650/beam.smp 

You can now check the cluster state from any node with rabbitmqctl cluster_status:

[root@rabbitmq03 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq03 ...
Basics

Cluster name: rabbit@rabbitmq01

Disk Nodes

rabbit@rabbitmq01

RAM Nodes

rabbit@rabbitmq02
rabbit@rabbitmq03

Running Nodes

rabbit@rabbitmq01
rabbit@rabbitmq02
rabbit@rabbitmq03

Versions
......

Listeners

Node: rabbit@rabbitmq01, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@rabbitmq01, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@rabbitmq01, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@rabbitmq02, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@rabbitmq02, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@rabbitmq02, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@rabbitmq03, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@rabbitmq03, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@rabbitmq03, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
......
1.5 Any node: configure mirrored queues
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
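Here the pattern "^" matches every queue name, and ha-mode "all" mirrors each matched queue to every node in the cluster. The policy can be confirmed with:

```shell
# List the active policies on the default vhost "/":
rabbitmqctl list_policies
```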
1.6 Any node: create an admin account
[root@rabbitmq03 ~]# rabbitmqctl add_user admin 123456
Adding user "admin" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.
[root@rabbitmq03 ~]# rabbitmqctl set_user_tags admin administrator
Setting tags for user "admin" to [administrator] ...
[root@rabbitmq03 ~]#
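As the add_user output warns, the new account has no permissions on any virtual host yet. To give admin full configure/write/read rights on the default vhost:

```shell
# Grant full permissions on vhost "/" to the admin user:
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
```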
1.7 Other useful commands:
  • Remove another node from the cluster:

    [root@rabbitmq03 ~]# rabbitmqctl forget_cluster_node rabbit@rabbitmq0x
  • Rename the cluster (the default name is the name of the first node):

    [root@rabbitmq03 ~]# rabbitmqctl set_cluster_name rabbitmq_cluster
    Setting cluster name to rabbitmq_cluster ...

0x03 Installing and Configuring HAProxy

0. About HAProxy

  • HAProxy acts as the load balancer in front of the backend services.
  • Two nodes are typically used; here they run in active/backup mode.
  • Each node carries the full list of backend server IPs and ports.

1. Install HAProxy

<u>Note: the steps are identical on both nodes</u>

1.1 Preparation
  • Disable the firewall

    systemctl disable firewalld
    systemctl stop firewalld
  • Disable SELinux

    setenforce 0
    # vim /etc/selinux/config
    SELINUX=disabled
  • Set the hostname

    192.168.20.151: hostnamectl set-hostname haka01
    192.168.20.152: hostnamectl set-hostname haka02
1.2 Add the repository
rpm -ivh https://mirrors.tuna.tsinghua.edu.cn/ius/ius-release-el7.rpm
yum makecache
1.3 Install

List the available versions first; haproxy20 and haproxy22 are both LTS releases (see the upstream maintenance schedule for exact end-of-life dates):

yum search haproxy
yum install -y haproxy20

2. Configure HAProxy

<u>Note: the steps are identical on both nodes</u>

Write the configuration file shown in 2.1, then start haproxy:

systemctl start haproxy
systemctl enable haproxy
2.1 Full configuration file

Both nodes use the same configuration:

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid

    # max connection num default 4000
    maxconn     20480
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
#frontend main
#    bind *:5000
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js

#    use_backend static          if url_static
#    default_backend             app
frontend rabbitmq_5672
    bind *:5672
    mode tcp
    option tcplog
    option dontlognull
    default_backend rabbitmq_cluster_5672

frontend rabbitmq_15672
   bind *:15672
   mode http
   option httplog
   option dontlognull
   default_backend rabbitmq_cluster_15672
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
#backend app
#    balance     roundrobin
#    server  app1 127.0.0.1:5001 check
#    server  app2 127.0.0.1:5002 check
#    server  app3 127.0.0.1:5003 check
#    server  app4 127.0.0.1:5004 check
backend rabbitmq_cluster_5672
    balance     roundrobin
    cookie SERVERID
    server rabbitmq01 192.168.20.21:5672 weight 100 check inter 3000 rise 2 fall 3
    server rabbitmq02 192.168.20.22:5672 weight 100 check inter 3000 rise 2 fall 3
    server rabbitmq03 192.168.20.23:5672 weight 100 check inter 3000 rise 2 fall 3
backend rabbitmq_cluster_15672

    balance     roundrobin
    cookie SERVERID
    server rabbitmq01 192.168.20.21:15672 weight 100 check inter 3000 rise 2 fall 3
    server rabbitmq02 192.168.20.22:15672 weight 100 check inter 3000 rise 2 fall 3
    server rabbitmq03 192.168.20.23:15672 weight 100 check inter 3000 rise 2 fall 3
#---------------------------------------------------------------------
# listen status
#---------------------------------------------------------------------
listen admin_status
   bind *:3030
   mode http
   log 127.0.0.1 local3 err
   stats refresh 5s
   stats uri /status
   stats auth oristand:*****
   stats hide-version
   stats admin if TRUE
                   
2.2 Points to note:
  • Port 5672 carries layer-4 (TCP) traffic, so its frontend must use mode tcp with option tcplog, not option httplog.
  • Port 15672 is an HTTP service, so it uses mode http with option httplog.
  • After editing the configuration, validate it with haproxy -c -f /etc/haproxy/haproxy.cfg.
  • Disable the firewall before bringing the configuration up.
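Putting the last two points together, a typical edit/apply cycle looks like this (netstat as used earlier in this article):

```shell
# Validate, apply, and confirm all three listeners are up:
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy
netstat -lntp | grep -E ':(5672|15672|3030)'
```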

0x04 Keepalived

0. About Keepalived

  • Keepalived also runs on two nodes, the same two machines that run HAProxy.
  • Keepalived runs a check script at a fixed interval to verify that the haproxy process is alive; if haproxy cannot be brought back, the script stops keepalived on that node so the VIP fails over.

1. Install Keepalived

1.1 Preparation

These steps were already done during the HAProxy installation.

1.2 Set up the build environment
  • Download the desired version from the official source download page; version 2.1.5, the latest 2.1 release, is used here
  • Create the configuration directory /etc/keepalived
  • Install the build dependencies
wget https://www.keepalived.org/software/keepalived-2.1.5.tar.gz
mkdir /etc/keepalived
yum -y install openssl-devel gcc gcc-c++
1.3 Build and install Keepalived

The default install prefix for keepalived is /usr/local/:

tar -zxvf keepalived-2.1.5.tar.gz
mv keepalived-2.1.5 /usr/local/keepalived
cd /usr/local/keepalived
./configure && make && make install
1.4 Copy the startup files
cp  -a /usr/local/etc/keepalived   /etc/init.d/
cp  -a /usr/local/etc/sysconfig/keepalived    /etc/sysconfig/
cp  -a /usr/local/sbin/keepalived    /usr/sbin/

2. Edit the Configuration Files

2.1 Configuration files
  • Master node (haka01): /etc/keepalived/keepalived.conf

    ! Configuration File for keepalived
    
    global_defs {
       # router_id must be unique; usually set to the hostname
       router_id haka01
       vrrp_skip_check_adv_addr
       vrrp_strict
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    
    vrrp_script chk_haproxy {
        script "/etc/keepalived/haproxy_check.sh"
        interval 3
    }
    
    vrrp_instance VI_1 {
        # MASTER on the primary node, BACKUP on the backup node
        state MASTER
        interface eth0
        virtual_router_id 150
        mcast_src_ip 192.168.20.151
        priority 100
        advert_int 1
        preempt
        authentication {
            auth_type PASS
            auth_pass Ori
        }
        virtual_ipaddress {
            192.168.20.150
        }
        track_script {
            chk_haproxy
        }
    }
  • Backup node (haka02): /etc/keepalived/keepalived.conf

    ! Configuration File for keepalived
    
    global_defs {
       router_id haka02
       vrrp_skip_check_adv_addr
       vrrp_strict
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    
    vrrp_script chk_haproxy {
        script "/etc/keepalived/haproxy_check.sh"
        interval 3
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 150
        mcast_src_ip 192.168.20.152
        priority 90
        advert_int 1
        preempt
        authentication {
            auth_type PASS
            auth_pass Ori
        }
        virtual_ipaddress {
            192.168.20.150
        }
        track_script {
            chk_haproxy
        }
    }
  • Health-check script /etc/keepalived/haproxy_check.sh

    #!/bin/bash
    # If haproxy is down, try to restart it; if it is still down after
    # 3 seconds, stop keepalived so the VIP fails over to the peer.
    COUNT=`ps -C haproxy --no-header |wc -l`
    if [ $COUNT -eq 0 ];then
        systemctl start haproxy
        sleep 3
        if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
            systemctl stop keepalived
        fi
    fi
2.2 Start Keepalived
systemctl start  keepalived
systemctl enable keepalived

3. Caveats

  • The keepalived configuration is declarative: a block referenced later (such as a vrrp_script) must be defined before the point where it is used, otherwise it has no effect (learned the hard way).
  • router_id must be unique; it is usually set to the hostname.
  • virtual_router_id must be identical on every node of a cluster.
  • preempt means preemptive mode, nopreempt non-preemptive. In non-preemptive mode, when the master recovers, the VIP does not move back; it stays bound to the backup node.
  • auth_type and auth_pass must be identical on every node of a cluster.
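A simple failover drill to confirm the VIP moves as expected (a sketch; the interface name eth0 matches the configuration above):

```shell
# On haka01 (master): the VIP should be bound to eth0
ip addr show eth0 | grep 192.168.20.150
# Simulate a node failure on haka01:
systemctl stop keepalived
# On haka02: within a few advert intervals the VIP should appear
ip addr show eth0 | grep 192.168.20.150
# Bring haka01 back; in preemptive mode the VIP returns to it
systemctl start keepalived
```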
