ZooKeeper Docker Cluster (ZK Pseudo-Cluster)

1. Background

1.1 Overview

Setting up a ZooKeeper cluster used to mean installing, configuring, and starting each node one by one, which is fairly tedious overall. When all you need is a test environment, there is even less reason to go to such lengths. With Docker, bringing up a cluster becomes much simpler, and the configuration files can be reused.

This is especially handy for testing, where it saves a lot of cluster-setup time.

For installing Docker, see "Installing Docker CE on Linux".

1.2 Environment

  • Ubuntu 18.04.3 LTS

  • Docker 19.03.2, build 6a30dfc


2. Basic Operations with the ZK Image

2.1 Here we use the zookeeper 3.5 image

docker pull zookeeper:3.5

2.2 Output like the following means the image has finished downloading

3.5: Pulling from library/zookeeper
ff3a5c916c92: Pull complete
5de5f69f42d7: Pull complete
fa7536dd895a: Pull complete
644150d38454: Pull complete
9ff28e2fa4ed: Pull complete
51e97a19e3a8: Pull complete
13b26111158e: Pull complete
Digest: sha256:18b81f09a371e69be882dd759c93ad9d450d7c6a628458b2b38123c078ba01ae
Status: Downloaded newer image for zookeeper:3.5

2.2.1 Run the following command to start a zookeeper instance (default port 2181)
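Assuming we name the container my_zk (the name the link example in section 2.3 relies on), the start command looks like this; -d runs the container in the background:

```shell
# Start a ZooKeeper 3.5 container named my_zk in the background.
# No host ports are bound, so it is only reachable from other containers.
docker run -d --name my_zk zookeeper:3.5
```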

2.2.2 View the run logs
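To follow the container's log output (assuming the container is named my_zk as above):

```shell
# Stream the container's stdout/stderr; stop following with Ctrl-C
docker logs -f my_zk
```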

2.2.3 Output like the following means startup succeeded

ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
2018-04-06 14:26:29,648 [myid:] - INFO [main:QuorumPeerConfig@117] - Reading configuration from: /conf/zoo.cfg
2018-04-06 14:26:29,651 [myid:] - INFO [main:QuorumPeerConfig@317] - clientPort is not set
2018-04-06 14:26:29,651 [myid:] - INFO [main:QuorumPeerConfig@331] - secureClientPort is not set
2018-04-06 14:26:29,655 [myid:] - WARN [main:QuorumPeerConfig@590] - No server failure will be tolerated. You need at least 3 servers.
[... some lines omitted ...]
2018-04-06 14:26:30,164 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000

2.3 Connect to ZK with the ZK Command-Line Client

The ZK container we just started does not bind any host port, so we cannot access it directly. We can, however, reach it through Docker's link mechanism. Run the following command:

docker run -it --rm --link my_zk:my_zk2 zookeeper:3.5 zkCli.sh -server my_zk2

This command does two things:

  • Starts a zookeeper image and runs the zkCli.sh command inside it, with the argument "-server my_zk2"
  • Links the previously started container named my_zk to the new container, giving it the hostname my_zk2

3. ZK Cluster

Starting zk nodes one by one is tedious, so for convenience I use docker-compose to bring up the whole cluster.

3.1 Setup

  • First, create a file named docker-compose.yml with the following content:
version: '3.1'

services:
  zoo1:
    image: zookeeper
    restart: always
    container_name: zoo1
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo2:
    image: zookeeper
    restart: always
    container_name: zoo2
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo3:
    image: zookeeper
    restart: always
    container_name: zoo3
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181

This file tells Docker to run three zookeeper images and to bind local ports 2181, 2182, and 2183 to port 2181 of the corresponding containers. ZOO_MY_ID and ZOO_SERVERS are the two environment variables needed to form a ZK cluster: ZOO_MY_ID is the id of the zk server, an integer between 1 and 255 that must be unique within the cluster, and ZOO_SERVERS is the list of servers in the cluster.
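For reference, the image's entrypoint script renders these variables into /conf/zoo.cfg inside each container. For zoo1 the result looks roughly like the sketch below (the data paths and timing values are the official image's documented defaults, listed in section 4, not copied from a live container):

```
# /conf/zoo.cfg (sketch for the zoo1 container)
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
server.1=0.0.0.0:2888:3888;2181
server.2=zoo2:2888:3888;2181
server.3=zoo3:2888:3888;2181
```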

  • Next, run the following command in the directory containing docker-compose.yml to start the zk cluster:
docker-compose up -d

Note: if the following error appears, see section 4.4, "Enable the Docker Remote API", in "Installing Docker CE on Linux".

ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
  • Once the command succeeds, run docker-compose ps to list the started zk containers:
Name   Command                          State   Ports
----------------------------------------------------------------------------------------------------
zoo1   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp
zoo2   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp
zoo3   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp

3.2 Connect to the ZK Cluster with a Dockerized Command-Line Client

From docker-compose ps we know the three hostnames of the zk cluster are zoo1, zoo2, and zoo3, so we simply link all three:

docker run -it --rm \
--link zoo1:zk1 \
--link zoo2:zk2 \
--link zoo3:zk3 \
--net pseudo_cluster_default \
zookeeper zkCli.sh -server zk1:2181,zk2:2181,zk3:2181

--net is the network the containers are attached to. If no network is specified in docker-compose.yml, Compose creates a default one named <project directory name>_default (here, pseudo_cluster_default).

You can look up the network name with docker inspect zoo1 | grep Networks -A 20:

"Networks": {
"pseudo_cluster_default": { # network名称
"IPAMConfig": null,
"Links": null,
"Aliases": [
"ad0acab9b92f",
"zoo1"
],
"NetworkID": "c40c9639a37ebc901369a762ec69dedd137bcdf22a1277811e592790fdaf9f08",
"EndpointID": "708837900be7d8fda10d7469791bc58f335e13804718c46267b37a908e3f9140",
"Gateway": "172.28.0.1",
"IPAddress": "172.28.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:1c:00:03",
"DriverOpts": null
}
}
}

The output:

Connecting to zk1:2181,zk2:2181,zk3:2181
2019-10-15 12:07:11,485 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.5-390fe37ea45dee01bf87dc1c042b5e3dcce88653, built on 05/03/2019 12:07 GMT
2019-10-15 12:07:11,487 [myid:] - INFO [main:Environment@109] - Client environment:host.name=b017d561bf98
2019-10-15 12:07:11,488 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.8.0_222
2019-10-15 12:07:11,489 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2019-10-15 12:07:11,489 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/local/openjdk-8
2019-10-15 12:07:11,489 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/apache-zookeeper-3.5.5-bin/bin/../zookeeper-server/target/classes:/apache-zookeeper-3.5.5-bin/bin/../build/classes:/apache-zookeeper-3.5.5-bin/bin/../zookeeper-server/target/lib/*.jar:/apache-zookeeper-3.5.5-bin/bin/../build/lib/*.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/zookeeper-jute-3.5.5.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/zookeeper-3.5.5.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/slf4j-log4j12-1.7.25.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/slf4j-api-1.7.25.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/netty-all-4.1.29.Final.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/log4j-1.2.17.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/json-simple-1.1.1.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jline-2.11.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jetty-util-9.4.17.v20190418.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jetty-servlet-9.4.17.v20190418.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jetty-server-9.4.17.v20190418.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jetty-security-9.4.17.v20190418.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jetty-io-9.4.17.v20190418.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jetty-http-9.4.17.v20190418.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/javax.servlet-api-3.1.0.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jackson-databind-2.9.8.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jackson-core-2.9.8.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/jackson-annotations-2.9.0.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/commons-cli-1.2.jar:/apache-zookeeper-3.5.5-bin/bin/../lib/audience-annotations-0.5.0.jar:/apache-zookeeper-3.5.5-bin/bin/../zookeeper-*.jar:/apache-zookeeper-3.5.5-bin/bin/../zookeeper-server/src/main/resources/lib/*.jar:/conf:
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:os.version=4.15.0-65-generic
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root
2019-10-15 12:07:11,490 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root
2019-10-15 12:07:11,491 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/apache-zookeeper-3.5.5-bin
2019-10-15 12:07:11,491 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=55MB
2019-10-15 12:07:11,492 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=228MB
2019-10-15 12:07:11,493 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=59MB
2019-10-15 12:07:11,496 [myid:] - INFO [main:ZooKeeper@868] - Initiating client connection, connectString=zk1:2181,zk2:2181,zk3:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3b95a09c
2019-10-15 12:07:11,501 [myid:] - INFO [main:X509Util@79] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2019-10-15 12:07:11,508 [myid:] - INFO [main:ClientCnxnSocket@237] - jute.maxbuffer value is 4194304 Bytes
2019-10-15 12:07:11,515 [myid:] - INFO [main:ClientCnxn@1653] - zookeeper.request.timeout value is 0. feature enabled=
Welcome to ZooKeeper!
2019-10-15 12:07:11,525 [myid:zk1:2181] - INFO [main-SendThread(zk1:2181):ClientCnxn$SendThread@1112] - Opening socket connection to server zk1/172.28.0.3:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2019-10-15 12:07:11,587 [myid:zk1:2181] - INFO [main-SendThread(zk1:2181):ClientCnxn$SendThread@959] - Socket connection established, initiating session, client: /172.28.0.5:35980, server: zk1/172.28.0.3:2181
2019-10-15 12:07:11,607 [myid:zk1:2181] - INFO [main-SendThread(zk1:2181):ClientCnxn$SendThread@1394] - Session establishment complete on server zk1/172.28.0.3:2181, sessionid = 0x100010585200003, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: zk1:2181,zk2:2181,zk3:2181(CONNECTED) 0] # connected; now in the interactive shell

3.3 Connect to the ZK Cluster from the Local Host

Since the 2181 ports of zoo1, zoo2, and zoo3 are mapped to local ports 2181, 2182, and 2183 respectively, we can connect to the zk cluster with:

zkCli.sh -server 192.192.1.12:2181,192.192.1.12:2182,192.192.1.12:2183

Note: the zkCli.sh tool is included in the release archive, which you can download from the ZooKeeper website.

On macOS you can install it with Homebrew: brew install zookeeper

  • Run
    zkCli -server 192.192.1.12:2181,192.192.1.12:2182,192.192.1.12:2183
  • Result
Connecting to 192.192.1.12:2181,192.192.1.12:2182,192.192.1.12:2183
Welcome to ZooKeeper!
JLine support is enabled
[zk: 192.192.1.12:2181,192.192.1.12:2182,192.192.1.12:2183(CONNECTING) 0] # connected; now in the interactive shell
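As an extra sanity check from the host, you can also query each node over its mapped client port with ZooKeeper's four-letter commands (this is an addition to the original walkthrough: srvr is on the default 4lw whitelist in 3.5, and 192.192.1.12 is the example host address used above):

```shell
# Ask each node for its role; each reply contains a line like "Mode: follower"
for port in 2181 2182 2183; do
  echo srvr | nc 192.192.1.12 "$port" | grep Mode
done
```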

3.4 Enter a Container Interactively with docker exec

  • On the host, run docker exec -it zoo1 /bin/bash to enter the container; change the container name to zoo2 or zoo3 to inspect the other zk nodes

  • Inside the container, run ./bin/zkServer.sh status

docker exec -it zoo1  /bin/bash 
root@zoo1:/apache-zookeeper-3.5.5-bin# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
root@zoo1:/apache-zookeeper-3.5.5-bin#
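Rather than opening a shell in each container, the same check can be run for all three nodes from the host in one loop:

```shell
# Print every node's quorum role; a healthy ensemble shows one leader and two followers
for n in zoo1 zoo2 zoo3; do
  echo "== $n =="
  docker exec "$n" ./bin/zkServer.sh status
done
```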

3.5 Stop and Remove the Cluster

cd into the directory containing docker-compose.yml.

Stop the cluster

docker-compose stop

Remove the cluster

docker-compose rm

Stop and remove in one step

docker-compose down

4. Other ZK Environment Variables

  • ZOO_TICK_TIME: default 2000. ZooKeeper's tickTime
  • ZOO_INIT_LIMIT: default 5. ZooKeeper's initLimit
  • ZOO_SYNC_LIMIT: default 2. ZooKeeper's syncLimit
  • ZOO_MAX_CLIENT_CNXNS: default 60. ZooKeeper's maxClientCnxns
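These variables go next to ZOO_MY_ID in docker-compose.yml. As a sketch, pinning zoo1 explicitly to the defaults listed above would look like this (the values shown are just those defaults, not a recommended tuning):

```
  zoo1:
    image: zookeeper
    environment:
      ZOO_MY_ID: 1
      ZOO_TICK_TIME: 2000
      ZOO_INIT_LIMIT: 5
      ZOO_SYNC_LIMIT: 2
      ZOO_MAX_CLIENT_CNXNS: 60
```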



Unless otherwise stated, all posts on this blog are licensed under CC BY-SA 4.0. Please credit the source when reposting!