ActiveMQ ZooKeeper Cluster Based on LevelDB

雁南飞丶
Published 2017/06/21 10:37
1. The ZooKeeper-based implementation

This is an effective way to make ActiveMQ highly available. The principle: every ActiveMQ broker registers with a ZooKeeper ensemble, but only one broker, the master, serves clients; the others stand by as slaves. If the master fails, ZooKeeper's election mechanism promotes one of the slaves to master, and service continues.

Official documentation: http://activemq.apache.org/replicated-leveldb-store.html

Reference: https://www.ibm.com/developerworks/cn/data/library/bd-zookeeper/

2. Cluster roles

A ZooKeeper cluster has two main roles: leader and follower.

The leader initiates and decides votes and updates the system state.

Learners comprise followers and observers.

A follower accepts client requests, returns results to the client, and votes during leader election.

An observer also accepts client connections and forwards write requests to the leader, but it takes no part in voting; it only mirrors the leader's state. Observers exist to scale the system out and speed up reads.

3. How many ZooKeeper nodes?

How many nodes should a ZooKeeper ensemble run?

A single node works, but that is not a cluster. For a real ensemble, deploy 3, 5, or 7 nodes; this walkthrough uses 3.

More nodes means higher reliability. Prefer an odd count: even counts are allowed, but the ensemble only stays up while more than half of its nodes are alive, so an even node adds no extra fault tolerance.

Give each ZooKeeper node roughly 1 GB of memory and, if possible, a dedicated disk, which helps keep ZooKeeper fast. On a heavily loaded cluster, do not run ZooKeeper on the same machines as RegionServers, just as you would keep DataNodes and TaskTrackers apart.
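The majority rule above is simple arithmetic: an ensemble of n nodes needs floor(n/2)+1 nodes alive, so it tolerates floor((n-1)/2) failures. A quick sketch:

```shell
# Quorum arithmetic for a ZooKeeper ensemble of n nodes:
#   quorum             = n/2 + 1      (integer division)
#   tolerated failures = (n - 1) / 2
for n in 3 4 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( (n - 1) / 2 ))
  echo "nodes=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

Note that 3 and 4 nodes both survive only one failure, which is why the even count buys nothing.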


4. Environment preparation

ZooKeeper environment

Host IP        Client port  Quorum/election ports  Install location under /usr/local
192.168.0.85   2181         2888:3888              zookeeper-3.4.10
192.168.0.171  2181         2888:3888              zookeeper-3.4.10
192.168.0.181  2181         2888:3888              zookeeper-3.4.10

5. Install and configure ZooKeeper

[root@zqdd:/root]#tar xvf zookeeper-3.4.10.tar.gz -C /usr/local/   # unpack
[root@zqdd:/usr/local/zookeeper-3.4.10/conf]#cp zoo_sample.cfg zoo.cfg   # create zoo.cfg from the sample
# Export the ZooKeeper environment variables: append the following to /etc/profile
export ZK_HOME=/usr/local/zookeeper-3.4.10
export PATH=$PATH:$ZK_HOME/bin
# Reload the profile so the variables take effect
[root@zqdd:/root]#source /etc/profile

6. Edit the main configuration file zoo.cfg

[root@zqdd:/usr/local/zookeeper-3.4.10/conf]#cat zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataLogDir=/var/log
server.1=192.168.0.171:2888:3888
server.2=192.168.0.181:2888:3888
server.3=192.168.0.85:2888:3888

Parameter notes #######################################
#  dataDir: data directory
#  dataLogDir: transaction log directory
#  clientPort: the port clients connect to
#  tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers or between a client and a server; one heartbeat is sent every tickTime.
#  initLimit: the maximum number of tick intervals the leader waits for a follower to complete its initial synchronization. If no response arrives within that window, the connection is considered failed. Here that is 5 * 2000 ms = 10 seconds.
#  syncLimit: the maximum number of tick intervals allowed between a request and its acknowledgement when the leader and a follower exchange messages. Here that is 2 * 2000 ms = 4 seconds.
#  server.A=B:C:D: A is the server number; B is the server's IP address; C is the port this server uses to exchange data with the cluster leader; D is the port used to run a new election if the leader dies. In a pseudo-cluster (all instances on one host) every B is the same, so each instance must be given distinct C and D ports.

Every ZooKeeper instance needs its own data and log directories, so create the directory dataDir points to before starting the server.
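The derived timeouts for the values in this zoo.cfg can be checked directly:

```shell
tickTime=2000    # ms per tick, from zoo.cfg
initLimit=5      # ticks allowed for a follower's initial sync
syncLimit=2      # ticks allowed between request and ack
init_ms=$(( tickTime * initLimit ))   # 10000 ms = 10 s
sync_ms=$(( tickTime * syncLimit ))   # 4000 ms  = 4 s
echo "initLimit window: ${init_ms} ms, syncLimit window: ${sync_ms} ms"
```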

7. Create the server ID (myid)

Besides editing zoo.cfg, cluster mode requires a myid file placed in the dataDir directory.

The file holds a single number: the A from the server.A=B:C:D entries in zoo.cfg. Create it under the dataDir path configured in zoo.cfg.

On 192.168.0.171, create myid with the value 1, matching the server.1 entry:

echo 1 > /tmp/zookeeper/myid
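The same step for all three nodes can be sketched as below; the demo writes under a temporary directory instead of the real /tmp/zookeeper, so it is safe to run anywhere:

```shell
# Each server.N line in zoo.cfg must be mirrored by a myid file
# containing N in that node's dataDir. Paths here are demo-only.
datadir=$(mktemp -d)
i=1
for ip in 192.168.0.171 192.168.0.181 192.168.0.85; do   # server.1, server.2, server.3
  mkdir -p "$datadir/$ip"
  echo "$i" > "$datadir/$ip/myid"
  i=$(( i + 1 ))
done
cat "$datadir/192.168.0.181/myid"
```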

8. Repeat the same installation and configuration on the other two machines; only the myid value differs.

9. Start ZooKeeper and check the cluster status

# On 192.168.0.85
root@agent2:/root#zkServer.sh start  # start the server
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@agent2:/root#zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader                           # role
root@agent2:/root#zkServer.sh   # show usage
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Usage: /usr/local/zookeeper-3.4.10/bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
# On 192.168.0.171
[root@zqdd:/usr/local/zookeeper-3.4.10/conf]#zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zqdd:/usr/local/zookeeper-3.4.10/conf]#zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower   # role
# On 192.168.0.181
root@agent:/root#zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@agent:/root#zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower # role

10. Enable ZooKeeper at boot

touch /etc/init.d/zookeeper  # create the init script

chmod +x /etc/init.d/zookeeper  # make it executable

# The script contents:
#!/bin/bash  
#chkconfig:2345 20 90  
#description:zookeeper  
#processname:zookeeper  
case $1 in  
          start) /usr/local/zookeeper-3.4.10/bin/zkServer.sh start;;  
          stop) /usr/local/zookeeper-3.4.10/bin/zkServer.sh stop;;  
          status) /usr/local/zookeeper-3.4.10/bin/zkServer.sh status;;  
          restart) /usr/local/zookeeper-3.4.10/bin/zkServer.sh restart;;  
          *)  echo "require start|stop|status|restart";;  
esac 


chkconfig --add zookeeper  # register the service
chkconfig --level 35 zookeeper on
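The SysV script above matches the CentOS 6 hosts used here. On systemd-based distributions (CentOS 7 and later), the same effect can be had with a unit file; the following is a sketch, not part of the original setup, with paths matching this install:

```ini
# /etc/systemd/system/zookeeper.service  (hypothetical sketch)
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/zookeeper-3.4.10/bin/zkServer.sh start
ExecStop=/usr/local/zookeeper-3.4.10/bin/zkServer.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable zookeeper` instead of the chkconfig commands.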

11. Using the ZooKeeper client

root@agent2:/usr/local/zookeeper-3.4.10/bin#zkCli.sh -timeout 5000 -server 192.168.0.85:2181
Connecting to 192.168.0.85:2181
2017-06-21 10:01:12,672 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-06-21 10:01:12,685 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=agent2
2017-06-21 10:01:12,685 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.7.0_79
2017-06-21 10:01:12,694 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-06-21 10:01:12,697 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/jdk1.7.0_79/jre
2017-06-21 10:01:12,697 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper-3.4.10/bin/../build/classes:/usr/local/zookeeper-3.4.10/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/usr/local/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/usr/local/zookeeper-3.4.10/bin/../conf:
2017-06-21 10:01:12,700 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-06-21 10:01:12,700 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-06-21 10:01:12,700 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2017-06-21 10:01:12,702 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2017-06-21 10:01:12,702 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2017-06-21 10:01:12,702 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2017-06-21 10:01:12,703 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2017-06-21 10:01:12,704 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2017-06-21 10:01:12,704 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/usr/local/zookeeper-3.4.10/bin
2017-06-21 10:01:12,713 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=192.168.0.85:2181 sessionTimeout=5000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@5bed2ccd
Welcome to ZooKeeper!
2017-06-21 10:01:12,877 [myid:] - INFO  [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 192.168.0.85/192.168.0.85:2181. Will not attempt to authenticate using SASL (unknown error)
2017-06-21 10:01:12,928 [myid:] - INFO  [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@876] - Socket connection established to 192.168.0.85/192.168.0.85:2181, initiating session
JLine support is enabled
2017-06-21 10:01:13,013 [myid:] - INFO  [main-SendThread(192.168.0.85:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 192.168.0.85/192.168.0.85:2181, sessionid = 0x35cc85763500000, negotiated timeout = 5000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.0.85:2181(CONNECTED) 0] ls
[zk: 192.168.0.85:2181(CONNECTED) 1] ls /  
[activemq, zookeeper]
[zk: 192.168.0.85:2181(CONNECTED) 2] 

Run ls /.

The listing shows both zookeeper and activemq. The activemq node would not normally be there; it appears because the ActiveMQ cluster described below was already set up.

Failover test

Stop the leader; the logs show another node being elected leader.

12. Deploy ActiveMQ

Host           Replication port  Message port  Console port  Install path under /usr/local
192.168.0.85   61619             61616         8161          apache-activemq-5.14.5
192.168.0.171  61619             61616         8161          apache-activemq-5.14.5
192.168.0.181  61619             61616         8161          apache-activemq-5.14.5

13. Install ActiveMQ

# Download
wget http://www.apache.org/dyn/closer.cgi?filename=/activemq/5.14.5/apache-activemq-5.14.5-bin.tar.gz
# Unpack
tar xvf apache-activemq-5.14.5-bin.tar.gz -C /usr/local/
# Enable start at boot
root@agent2:/usr/local/apache-activemq-5.14.5/bin#cp activemq /etc/init.d/activemq
# The script's chkconfig header:
# chkconfig: 345 63 37  
# description: Auto start ActiveMQ 

14. Start ActiveMQ

root@agent2:/usr/local/apache-activemq-5.14.5/bin#./activemq start
# Check the listening ports
root@agent2:/usr/local/apache-activemq-5.14.5/bin#netstat -antlp |grep "8161\|61616\|616*"
tcp        0     64 192.168.0.85:22             192.168.0.61:52967          ESTABLISHED 6702/sshd           
tcp        0      0 :::61613                    :::*                        LISTEN      7481/java           
tcp        0      0 :::61614                    :::*                        LISTEN      7481/java           
tcp        0      0 :::61616                    :::*                        LISTEN      7481/java           
tcp        0      0 :::8161                     :::*                        LISTEN      7481/java           

15. ActiveMQ cluster configuration

root@agent2:/usr/local/apache-activemq-5.14.5/conf#cat activemq.xml 
<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

   <!-- Allows accessing the server log -->
    <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
          lazy-init="false" scope="singleton"
          init-method="start" destroy-method="stop">
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <!-- In conf/activemq.xml, set the broker's brokerName; it must be
         identical on every node of the replication set:
         brokerName="activemq-cluster" (change it on all three nodes) -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <persistenceAdapter>
            <!-- Comment out the default kahaDB adapter: -->
            <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
            <!-- Add the replicated LevelDB configuration.
                 hostname must be each node's own IP address. -->
		    <replicatedLevelDB
		            directory="${activemq.data}/leveldb"
		            replicas="3"
		            bind="tcp://0.0.0.0:61619"
		            zkAddress="192.168.0.85:2181,192.168.0.171:2181,192.168.0.181:2181"
		            hostname="192.168.0.85"
		            zkPath="/activemq/leveldb-stores"
		     />
        </persistenceAdapter>
        <!-- Enable simple authentication -->
         <plugins>
                <simpleAuthenticationPlugin>
                <users>
                <authenticationUser username="${activemq.username}" password="${activemq.password}" groups="admins,everyone"/>
                <authenticationUser username="mcollective" password="musingtec" groups="mcollective,admins,everyone"/>
                </users>
                </simpleAuthenticationPlugin>
          </plugins>
          <!--
            The systemUsage controls the maximum amount of space the broker will
            use before disabling caching and/or slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
          -->
          <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos
        The web consoles requires by default login, you can disable this in the jetty.xml file

        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->
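Since hostname is the only per-node difference in the persistence adapter, the three configs can be stamped from one template. A sketch, where the __NODE_IP__ token and the output paths are illustrative, not part of ActiveMQ:

```shell
# Stamp out the per-node replicatedLevelDB snippet from one template.
# Single quotes keep ${activemq.data} literal for ActiveMQ to expand.
template='<replicatedLevelDB directory="${activemq.data}/leveldb" replicas="3" bind="tcp://0.0.0.0:61619" zkAddress="192.168.0.85:2181,192.168.0.171:2181,192.168.0.181:2181" hostname="__NODE_IP__" zkPath="/activemq/leveldb-stores"/>'
outdir=$(mktemp -d)
for ip in 192.168.0.85 192.168.0.171 192.168.0.181; do
  echo "$template" | sed "s/__NODE_IP__/$ip/" > "$outdir/leveldb-$ip.xml"
done
grep -o 'hostname="[^"]*"' "$outdir/leveldb-192.168.0.171.xml"
```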

Parameter reference

Replicated LevelDB Store properties

All broker nodes in the same replication set should have matching brokerName XML attributes, and the following configuration properties should be identical on all of them:

replicas (default: 3) - The number of nodes in the cluster. At least (replicas/2)+1 nodes must be online to avoid a service outage.

securityToken (default: none) - A security token which must match on all replication nodes for them to accept each other's replication requests.

zkAddress (default: 127.0.0.1:2181) - A comma-separated list of ZooKeeper servers.

zkPassword (default: none) - The password to use when connecting to the ZooKeeper server.

zkPath (default: /default) - The ZooKeeper path where master/slave election information is exchanged.

zkSessionTimeout (default: 2s) - How quickly ZooKeeper detects a node failure. (Prior to 5.11 this property name had a typo: zkSessionTmeout.)

sync (default: quorum_mem) - Controls where updates must reside before being considered complete. A comma-separated list of: local_mem, local_disk, remote_mem, remote_disk, quorum_mem, quorum_disk. If two settings are combined for the same target, the stronger guarantee wins: local_mem, local_disk is the same as local_disk; quorum_mem is the same as local_mem, remote_mem; and quorum_disk is the same as local_disk, remote_disk.

Different replication sets can share the same zkPath as long as they have different brokerName values.

The following configuration properties can be unique per node:

bind (default: tcp://0.0.0.0:61619) - When this node becomes master, it binds the configured address and port to serve the replication protocol. Dynamic ports are also supported: just configure tcp://0.0.0.0:0.

hostname (default: automatically determined) - The host name used to advertise the replication service when this node becomes master.

weight (default: 1) - Among the nodes holding the latest update, the one with the highest weight becomes master. Use it to give certain nodes preference.

The store also supports the same configuration properties as a standard LevelDB store, but it does not support pluggable storage lockers:

Standard LevelDB Store properties

directory (default: LevelDB) - The directory the store uses for its data files; created if it does not already exist.

readThreads (default: 10) - The number of concurrent IO read threads allowed.

logSize (default: 104857600, i.e. 100 MB) - The maximum size in bytes of each data log file before log rotation occurs.

verifyChecksums (default: false) - Set to true to force checksum verification of all data read from the file system.

paranoidChecks (default: false) - Make the store error out as soon as possible if it detects internal corruption.

indexFactory (default: org.fusesource.leveldbjni.JniDBFactory, org.iq80.leveldb.impl.Iq80DBFactory) - The factory classes to use when creating the LevelDB indexes.

indexMaxOpenFiles (default: 1000) - Number of open files that can be used by the index.

indexBlockRestartInterval (default: 16) - Number of keys between restart points for delta encoding of keys.

indexWriteBufferSize (default: 6291456, i.e. 6 MB) - Amount of index data to build up in memory before converting to a sorted on-disk file.

indexBlockSize (default: 4096, i.e. 4 KB) - The size of index data packed per block.

indexCacheSize (default: 268435456, i.e. 256 MB) - The maximum amount of off-heap memory to use to cache index blocks.

indexCompression (default: snappy) - The type of compression applied to the index blocks: snappy or none.

logCompression (default: none) - The type of compression applied to the log records: snappy or none.

16. The cluster is up

After starting all three brokers, check the logs on each machine.

The logs show that 192.168.0.171 is the master and the others are slaves.

Testing

Use the ZooInspector debugging tool to inspect ZooKeeper and see which machine currently holds the ActiveMQ master role.

Check the message-queue web consoles:

Only the master serves the MQ console; the other machines provide no service, so of the three URLs below only one is reachable at any time. If the current master goes down, another broker takes over immediately and keeps serving, and sessions are not interrupted.

http://192.168.0.85:8161/admin/queues.jsp

http://192.168.0.171:8161/admin/queues.jsp

http://192.168.0.181:8161/admin/queues.jsp
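On the client side, consumers should not hard-code a single broker; ActiveMQ's failover transport takes the whole broker list and reconnects to whichever node is currently master. Composing the connection URI for this cluster:

```shell
# Build the failover: connection URI from the three message ports (61616).
brokers="tcp://192.168.0.85:61616,tcp://192.168.0.171:61616,tcp://192.168.0.181:61616"
failover_uri="failover:(${brokers})"
echo "$failover_uri"
```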

 

To be continued...
