Zacard's Notes

Docker Learning Series 6: Building a ZooKeeper Image

What is ZooKeeper

ZooKeeper is a distributed, open-source coordination service framework for distributed applications.

Why ZooKeeper

It frees distributed applications from the quagmire of implementing coordination themselves, and it offers excellent performance with a simple, elegant design.

  • Sequential consistency: updates from a client are applied in the order they were sent
  • Atomicity: updates either succeed completely or fail completely; there are no partial results
  • Single system image: a client sees the same view of the service regardless of which server it connects to
  • Reliability: once an update has been applied, it persists until it is overwritten by another client's update
  • Timeliness: a client's view of the system is guaranteed to be up to date within a certain time bound

Dockerfile

FROM oraclejdk8
MAINTAINER zacard <[email protected]>

# Install required packages
RUN apk add --no-cache \
        bash \
        su-exec

ENV ZOO_USER zookeeper
ENV ZOO_CONF_DIR /conf
ENV ZOO_DATA_DIR /data
ENV ZOO_DATA_LOG_DIR /datalog

# Add a user and make dirs
RUN set -x \
    && adduser -D "$ZOO_USER" \
    && mkdir -p "$ZOO_DATA_LOG_DIR" "$ZOO_DATA_DIR" "$ZOO_CONF_DIR" \
    && chown "$ZOO_USER:$ZOO_USER" "$ZOO_DATA_LOG_DIR" "$ZOO_DATA_DIR" "$ZOO_CONF_DIR"

ARG GPG_KEY=C823E3E5B12AF29C67F81976F5CECB3CB5E9BD2D
ARG DISTRO_NAME=zookeeper-3.4.9

# Download Apache ZooKeeper, verify its PGP signature, untar and clean up
RUN set -x \
    && apk add --no-cache --virtual .build-deps \
        gnupg \
    && wget -q "http://www.apache.org/dist/zookeeper/$DISTRO_NAME/$DISTRO_NAME.tar.gz" \
    && wget -q "http://www.apache.org/dist/zookeeper/$DISTRO_NAME/$DISTRO_NAME.tar.gz.asc" \
    && export GNUPGHOME="$(mktemp -d)" \
    && gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" \
    && gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz" \
    && tar -xzf "$DISTRO_NAME.tar.gz" \
    && mv "$DISTRO_NAME/conf/"* "$ZOO_CONF_DIR" \
    && rm -r "$GNUPGHOME" "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc" \
    && apk del .build-deps

WORKDIR $DISTRO_NAME
VOLUME ["$ZOO_DATA_DIR", "$ZOO_DATA_LOG_DIR"]

ENV ZOO_PORT 2181
EXPOSE $ZOO_PORT

ENV PATH $PATH:/$DISTRO_NAME/bin
ENV ZOOCFGDIR $ZOO_CONF_DIR

COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["zkServer.sh", "start-foreground"]

Note: for the oraclejdk8 base image used here, see the earlier article "Docker Learning Series 5: Building an oracle-jdk8 Image".

The docker-entrypoint.sh it depends on

#!/bin/bash
set -e

# Allow the container to be started with `--user`
if [ "$1" = 'zkServer.sh' -a "$(id -u)" = '0' ]; then
    exec su-exec "$ZOO_USER" "$0" "$@"
fi

# Generate the config only if it doesn't exist
if [ ! -f "$ZOO_CONF_DIR/zoo.cfg" ]; then
    CONFIG="$ZOO_CONF_DIR/zoo.cfg"
    echo "clientPort=$ZOO_PORT" >> "$CONFIG"
    echo "dataDir=$ZOO_DATA_DIR" >> "$CONFIG"
    echo "dataLogDir=$ZOO_DATA_LOG_DIR" >> "$CONFIG"
    echo 'tickTime=2000' >> "$CONFIG"
    echo 'initLimit=5' >> "$CONFIG"
    echo 'syncLimit=2' >> "$CONFIG"
    for server in $ZOO_SERVERS; do
        echo "$server" >> "$CONFIG"
    done
fi

# Write myid only if it doesn't exist
if [ ! -f "$ZOO_DATA_DIR/myid" ]; then
    echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
fi

exec "$@"
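The config-generation logic in the entrypoint can be exercised outside the container. Below is a minimal standalone sketch: temporary directories stand in for the image's /conf, /data, and /datalog, and the ZOO_SERVERS and ZOO_MY_ID values are example settings, not anything the image sets by default.

```shell
#!/bin/bash
set -e

# Stand-ins for the image's ENV values (illustrative, not the real paths)
ZOO_CONF_DIR=$(mktemp -d)
ZOO_DATA_DIR=$(mktemp -d)
ZOO_DATA_LOG_DIR=$(mktemp -d)
ZOO_PORT=2181
ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888"
ZOO_MY_ID=2

# Same generation logic as docker-entrypoint.sh
CONFIG="$ZOO_CONF_DIR/zoo.cfg"
if [ ! -f "$CONFIG" ]; then
    echo "clientPort=$ZOO_PORT" >> "$CONFIG"
    echo "dataDir=$ZOO_DATA_DIR" >> "$CONFIG"
    echo "dataLogDir=$ZOO_DATA_LOG_DIR" >> "$CONFIG"
    echo 'tickTime=2000' >> "$CONFIG"
    echo 'initLimit=5' >> "$CONFIG"
    echo 'syncLimit=2' >> "$CONFIG"
    for server in $ZOO_SERVERS; do
        echo "$server" >> "$CONFIG"
    done
fi

# myid is written only once, so a restarted container keeps its identity
if [ ! -f "$ZOO_DATA_DIR/myid" ]; then
    echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
fi

cat "$CONFIG"
cat "$ZOO_DATA_DIR/myid"
```

Because both blocks are guarded by `[ ! -f ... ]`, a mounted zoo.cfg or a pre-existing myid file is never overwritten.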

Make docker-entrypoint.sh executable:

chmod 755 docker-entrypoint.sh

Build the image

docker build -t zookeeper:3.4.9 .

Testing the image

Start a container

docker run --name some-zookeeper --restart always -d zookeeper:3.4.9

This image exposes port 2181 (ZooKeeper's default client port), so standard container linking makes it automatically available to linked containers. And since ZooKeeper "fails fast", it is best to have the container restart automatically.

Connect to the ZooKeeper container from another application container

docker run --name some-app --link some-zookeeper:zookeeper -d application-that-uses-zookeeper

Connect to the ZooKeeper container from the command-line client

docker run -it --rm --link some-zookeeper:zookeeper zookeeper zkCli.sh -server zookeeper

Starting ZooKeeper in cluster mode

docker-compose.yml:

version: '2'
services:
  zoo1:
    image: zookeeper:3.4.9
    restart: always
    ports:
      - 2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper:3.4.9
    restart: always
    ports:
      - 2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper:3.4.9
    restart: always
    ports:
      - 2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
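Given these environment variables, the entrypoint script generates roughly the following zoo.cfg inside each container (the exact file follows from the generation logic shown earlier; each container additionally gets a /data/myid file holding its ZOO_MY_ID of 1, 2, or 3):

```
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
```

Port 2888 is used for follower-to-leader connections and 3888 for leader election, which is why each server entry lists both.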

Start the cluster:

docker-compose up

Check the cluster status (and port mappings):

docker-compose ps

Note: this is a pseudo-cluster, because all three containers run on the same physical host. In a real deployment the ZooKeeper containers should run on different hosts.

Configuration

ZooKeeper's configuration lives in the /conf directory. To change it, mount a local configuration file, for example:

docker run --name some-zookeeper --restart always -d -v $(pwd)/zoo.cfg:/conf/zoo.cfg zookeeper
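For reference, here is a minimal zoo.cfg you might mount this way. The values are illustrative; they mirror the defaults the entrypoint would generate, plus maxClientCnxns, an optional ZooKeeper setting (per-client connection limit) that is not in the generated default:

```
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
dataDir=/data
dataLogDir=/datalog
maxClientCnxns=60
```

Because the entrypoint only generates zoo.cfg when none exists, the mounted file is used as-is and never overwritten.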