This is the fourth post in our blog series on achieving resource isolation in Apache Pulsar. Before diving in, let's recap what the previous posts covered.
深度解析如何在 Pulsar 中实现隔离 introduced the three approaches to achieving isolation in Pulsar.
零经验玩转隔离策略:多个 Pulsar 集群 demonstrated how to achieve isolation between separate Pulsar clusters, each backed by its own BookKeeper cluster. This shared-nothing approach provides the strongest isolation and is suited to storing highly sensitive data, such as personally identifiable information or financial records.
In this fourth and final post of the series, we walk through how to achieve broker and bookie isolation within a single cluster. This more traditional approach leverages Pulsar's built-in multi-tenancy and avoids the need to manage multiple broker and bookie clusters.
This tutorial uses docker-compose
to stand up the Pulsar cluster, so a working Docker environment is required first.
The steps below were verified with Docker 20.10.10, docker-compose 1.29.2, and macOS 12.3.1.
git clone https://github.com/gaoran10/pulsar-docker-compose
cd pulsar-docker-compose
docker-compose up
Check the container status.
docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------
bk1 bash -c export dbStorage_w ... Up
bk2 bash -c export dbStorage_w ... Up
bk3 bash -c export dbStorage_w ... Up
bk4 bash -c export dbStorage_w ... Up
broker1 bash -c bin/apply-config-f ... Up
broker2 bash -c bin/apply-config-f ... Up
broker3 bash -c bin/apply-config-f ... Up
proxy1 bash -c bin/apply-config-f ... Up 0.0.0.0:6650->6650/tcp, 0.0.0.0:8080->8080/tcp
pulsar-init bin/init-cluster.sh Exit 0
zk1 bash -c bin/apply-config-f ... Up
Once the cluster is initialized, we can set up the broker isolation policy.
wget https://archive.apache.org/dist/pulsar/pulsar-2.10.0/apache-pulsar-2.10.0-bin.tar.gz
tar -xvf apache-pulsar-2.10.0-bin.tar.gz
# Run the pulsar-admin commands from this directory
cd apache-pulsar-2.10.0
bin/pulsar-admin brokers list test
# Output
"broker1:8080"
"broker2:8080"
"broker3:8080"
bin/pulsar-admin namespaces create public/ns-isolation
bin/pulsar-admin namespaces set-retention -s 1G -t 3d public/ns-isolation
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*" \
--secondary "broker2:*" \
test ns-broker-isolation
bin/pulsar-admin ns-isolation-policy list test
# Output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
bin/pulsar-admin topics create-partitioned-topic -p 10 public/ns-isolation/t1
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
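To verify that the policy took effect, we can tally partition owners from the lookup output. A small awk sketch; the here-doc below holds a captured sample, and in practice you would pipe in the real `partitioned-lookup` output instead:

```shell
# Count partitions per owner broker from `partitioned-lookup` output.
# The sample data stands in for:
#   bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
count_owners() {
  awk '{owners[$2]++} END {for (b in owners) print b, owners[b]}' | sort
}

count_owners <<'EOF'
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker2:6650
EOF
```

When the isolation policy is active, every partition should map to a broker matched by the primary (or secondary) pattern.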
${DOCKER_COMPOSE_HOME}/docker-compose stop broker1
# Output
Stopping broker1 ... done
After broker1 stops serving, ownership of its topics fails over to the secondary broker, broker2:*.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker2:6650
${DOCKER_COMPOSE_HOME}/docker-compose stop broker2
# Output
Stopping broker2 ... done
After broker2 is stopped as well, no broker is available for the namespace public/ns-isolation.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
HTTP 503 Service Unavailable
Reason: javax.ws.rs.ServiceUnavailableException: HTTP 503 Service Unavailable
${DOCKER_COMPOSE_HOME}/docker-compose start broker1
# Output
Starting broker1 ... done
${DOCKER_COMPOSE_HOME}/docker-compose start broker2
# Output
Starting broker2 ... done
bin/pulsar-admin ns-isolation-policy list test
# Output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
We can see that the primary and secondary brokers for the namespace public/ns-isolation are still broker1:*
and broker2:*, respectively.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker3:*" \
--secondary "broker2:*" \
test ns-broker-isolation
bin/pulsar-admin ns-isolation-policy list test
# Output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker3:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
bin/pulsar-admin namespaces unload public/ns-isolation
After unloading the namespace, we can see that the owner of the topic has changed to the new primary broker (broker3).
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker3:6650
Add a broker4 service to the docker-compose file.
broker4:
  hostname: broker4
  container_name: broker4
  image: apachepulsar/pulsar:latest
  restart: on-failure
  command: >
    bash -c "bin/apply-config-from-env.py conf/broker.conf && \
             bin/apply-config-from-env.py conf/pulsar_env.sh && \
             bin/watch-znode.py -z $$zookeeperServers -p /initialized-$$clusterName -w && \
             exec bin/pulsar broker"
  environment:
    clusterName: test
    zookeeperServers: zk1:2181
    configurationStore: zk1:2181
    webSocketServiceEnabled: "false"
    functionsWorkerEnabled: "false"
    managedLedgerMaxEntriesPerLedger: 100
    managedLedgerMinLedgerRolloverTimeMinutes: 0
  volumes:
    - ./apply-config-from-env.py:/pulsar/bin/apply-config-from-env.py
  depends_on:
    - zk1
    - pulsar-init
    - bk1
    - bk2
    - bk3
    - bk4
  networks:
    pulsar:
Start broker4.
${DOCKER_COMPOSE_HOME}/docker-compose create
# Output
zk1 is up-to-date
bk1 is up-to-date
bk2 is up-to-date
bk3 is up-to-date
broker1 is up-to-date
broker2 is up-to-date
broker3 is up-to-date
Creating broker4 ... done
proxy1 is up-to-date
${DOCKER_COMPOSE_HOME}/docker-compose start broker4
# Output
Starting broker4 ... done
bin/pulsar-admin brokers list test
# Output
broker4:8080
broker1:8080
broker2:8080
broker3:8080
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*,broker4:*" \
--secondary "broker2:*" \
test ns-broker-isolation
bin/pulsar-admin ns-isolation-policy list test
# Output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*, broker4:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
bin/pulsar-admin namespaces unload public/ns-isolation
Partitions of the topic should now be owned by both broker1 and broker4.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*" \
--secondary "broker2:*" \
test ns-broker-isolation
bin/pulsar-admin ns-isolation-policy list test
# Output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
${DOCKER_COMPOSE_HOME}/docker-compose stop broker4
# Output
Stopping broker4 ... done
bin/pulsar-admin brokers list test
# Output
broker1:8080
broker2:8080
broker3:8080
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# Output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
bin/pulsar-admin bookies list-bookies
# Output
{
"bookies" : [ {
"bookieId" : "bk2:3181"
}, {
"bookieId" : "bk4:3181"
}, {
"bookieId" : "bk3:3181"
}, {
"bookieId" : "bk1:3181"
} ]
}
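The bookie IDs can be pulled out of this JSON with plain sed, avoiding a jq dependency. A sketch; the here-doc holds a captured sample of the `list-bookies` output:

```shell
# Extract "bookieId" values from `pulsar-admin bookies list-bookies` JSON.
list_bookie_ids() {
  sed -n 's/.*"bookieId" *: *"\([^"]*\)".*/\1/p'
}

list_bookie_ids <<'EOF'
{
  "bookies" : [ {
    "bookieId" : "bk2:3181"
  }, {
    "bookieId" : "bk4:3181"
  } ]
}
EOF
```

In a live cluster, pipe the real command output in: `bin/pulsar-admin bookies list-bookies | list_bookie_ids`.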
The default value of bookkeeperClientRackawarePolicyEnabled
is true, so RackawareEnsemblePlacementPolicy
is the default isolation policy for bookies. We now assign the rack names (/rack1 and /rack2) as follows.
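For reference, the relevant placement settings live on the broker side in conf/broker.conf; a fragment with what I understand to be the defaults (this tutorial does not change them):

```properties
# Enable rack-aware placement of ledger ensembles (default: true).
bookkeeperClientRackawarePolicyEnabled=true
# Optional stricter region-aware placement (default: false).
bookkeeperClientRegionawarePolicyEnabled=false
```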
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk1:3181 \
--hostname bk1:3181 \
--group group1 \
--rack /rack1
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk3:3181 \
--hostname bk3:3181 \
--group group1 \
--rack /rack1
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk2:3181 \
--hostname bk2:3181 \
--group group2 \
--rack /rack2
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk4:3181 \
--hostname bk4:3181 \
--group group2 \
--rack /rack2
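The four commands above follow the same pattern, so they can be generated from a small bookie/group/rack table. A dry-run sketch that only echoes the commands for review, rather than executing them against the cluster:

```shell
# Generate the set-bookie-rack commands from a bookie/group/rack table.
# Dry run: commands are echoed, not executed.
gen_rack_cmds() {
  for entry in "bk1:3181 group1 /rack1" \
               "bk3:3181 group1 /rack1" \
               "bk2:3181 group2 /rack2" \
               "bk4:3181 group2 /rack2"; do
    set -- $entry   # split into bookie, group, rack
    echo "bin/pulsar-admin bookies set-bookie-rack --bookie $1 --hostname $1 --group $2 --rack $3"
  done
}

gen_rack_cmds
```

Once the mapping looks right, the output can be piped to `sh` to actually apply it.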
bin/pulsar-admin bookies racks-placement
group1 {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2 {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181)}
bin/pulsar-admin namespaces set-bookie-affinity-group public/ns-isolation \
--primary-group group1 \
--secondary-group group2
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
"bookkeeperAffinityGroupPrimary" : "group1",
"bookkeeperAffinityGroupSecondary" : "group2"
}
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 0,
"ledgerId" : 1,
"ledgerId" : 2,
"ledgerId" : 3,
"ledgerId" : 4,
"ledgerId" : -1,
# Run the following commands inside the bk1 container
${DOCKER_COMPOSE_HOME}/docker-compose exec bk1 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 0
# Check the ensembles
ensembles={0=[bk1:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 1
# Check the ensembles
ensembles={0=[bk3:3181, bk1:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 2
# Check the ensembles
ensembles={0=[bk1:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 3
# Check the ensembles
ensembles={0=[bk1:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 4
# Check the ensembles
ensembles={0=[bk1:3181, bk3:3181]}
${DOCKER_COMPOSE_HOME}/docker-compose stop bk1
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 5,
"ledgerId" : 6,
"ledgerId" : 7,
"ledgerId" : 8,
"ledgerId" : 9,
"ledgerId" : -1,
Check the metadata of the new ledgers [5, 6, 7, 8, 9]. Because bookkeeperClientEnforceMinNumRacksPerWriteQuorum
is false and bk1 remains unavailable, bookies from the secondary group are used as well. bk3 is in the primary group, so it appears in every ensemble.
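The enforcement behavior mentioned above is controlled by broker-side settings in conf/broker.conf; a reference fragment with what I understand to be the defaults:

```properties
# Minimum number of distinct racks each write quorum should span.
bookkeeperClientMinNumRacksPerWriteQuorum=2
# If true, ledger creation fails when the minimum rack count cannot be met;
# false (the default) lets placement fall back to fewer racks, which is why
# the ledgers here can land on secondary-group bookies.
bookkeeperClientEnforceMinNumRacksPerWriteQuorum=false
```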
# Run the following commands inside the bk2 container
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 5
# Check the ensemble
ensembles={0=[bk4:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 6
# Check the ensemble
ensembles={0=[bk3:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 7
# Check the ensemble
ensembles={0=[bk2:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 8
# Check the ensemble
ensembles={0=[bk3:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 9
# Check the ensemble
ensembles={0=[bk3:3181, bk2:3181]}
Restart bk1.
${DOCKER_COMPOSE_HOME}/docker-compose start bk1
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
"bookkeeperAffinityGroupPrimary" : "group1",
"bookkeeperAffinityGroupSecondary" : "group2"
}
bin/pulsar-admin namespaces set-bookie-affinity-group public/ns-isolation \
--primary-group group2
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
"bookkeeperAffinityGroupPrimary" : "group2"
}
bin/pulsar-admin namespaces unload public/ns-isolation
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 12,
"ledgerId" : 13,
"ledgerId" : 14,
"ledgerId" : 15,
"ledgerId" : 16,
"ledgerId" : -1,
# Run the following commands inside the bk2 container
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 12
# Check the ensemble
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 13
# Check the ensemble
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 14
# Check the ensemble
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 15
# Check the ensemble
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 16
# Check the ensemble
ensembles={0=[bk2:3181, bk4:3181]}
Add a bk5 service to the docker-compose file.
bk5:
  hostname: bk5
  container_name: bk5
  image: apachepulsar/pulsar:latest
  command: >
    bash -c "export dbStorage_writeCacheMaxSizeMb="${dbStorage_writeCacheMaxSizeMb:-16}" && \
             export dbStorage_readAheadCacheMaxSizeMb="${dbStorage_readAheadCacheMaxSizeMb:-16}" && \
             bin/apply-config-from-env.py conf/bookkeeper.conf && \
             bin/apply-config-from-env.py conf/pulsar_env.sh && \
             bin/watch-znode.py -z $$zkServers -p /initialized-$$clusterName -w && \
             exec bin/pulsar bookie"
  environment:
    clusterName: test
    zkServers: zk1:2181
    numAddWorkerThreads: 8
    useHostNameAsBookieID: "true"
  volumes:
    - ./apply-config-from-env.py:/pulsar/bin/apply-config-from-env.py
  depends_on:
    - zk1
    - pulsar-init
  networks:
    pulsar:
${DOCKER_COMPOSE_HOME}/docker-compose create
${DOCKER_COMPOSE_HOME}/docker-compose start bk5
# Run this command via the bk2 container
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
ReadWrite Bookies :
BookieID:bk2:3181, IP:192.168.32.5, Port:3181, Hostname:bk2
BookieID:bk4:3181, IP:192.168.32.7, Port:3181, Hostname:bk4
BookieID:bk3:3181, IP:192.168.32.6, Port:3181, Hostname:bk3
BookieID:bk1:3181, IP:192.168.32.4, Port:3181, Hostname:bk1
BookieID:bk5:3181, IP:192.168.32.9, Port:3181, Hostname:bk5
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk5:3181 \
--hostname bk5:3181 \
--group group2 \
--rack /rack2
bin/pulsar-admin bookies racks-placement
group1 {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2 {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181), bk5:3181=BookieInfoImpl(rack=/rack2, hostname=bk5:3181)}
bin/pulsar-admin namespaces unload public/ns-isolation
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 17,
"ledgerId" : 20,
"ledgerId" : 21,
"ledgerId" : 22,
"ledgerId" : 23,
"ledgerId" : -1,
Checking the ensembles of these ledgers, we can see that the new ledgers were all written to nodes in the primary group (now group2), since it has enough read-write bookies available.
# Run the following commands inside the bk2 container
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 17
# Check the ensemble
ensembles={0=[bk5:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 20
# Check the ensemble
ensembles={0=[bk2:3181, bk4:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 21
# Check the ensemble
ensembles={0=[bk5:3181, bk4:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 22
# Check the ensemble
ensembles={0=[bk5:3181, bk4:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 23
# Check the ensemble
ensembles={0=[bk2:3181, bk4:3181]}
bin/pulsar-admin bookies racks-placement
group1 {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2 {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181), bk5:3181=BookieInfoImpl(rack=/rack2, hostname=bk5:3181)}
bin/pulsar-admin bookies delete-bookie-rack -b bk5:3181
# Run the following commands via the bk2 container
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listunderreplicated
${DOCKER_COMPOSE_HOME}/docker-compose stop bk5
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell decommissionbookie -bookieid bk5:3181
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listledgers -bookieid bk5:3181
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
ReadWrite Bookies :
BookieID:bk2:3181, IP:192.168.48.5, Port:3181, Hostname:bk2
BookieID:bk4:3181, IP:192.168.48.7, Port:3181, Hostname:bk4
BookieID:bk3:3181, IP:192.168.48.6, Port:3181, Hostname:bk3
BookieID:bk1:3181, IP:192.168.48.4, Port:3181, Hostname:bk1
About the translator: 宋博 (Song Bo) is a senior development engineer at 北京百观科技有限公司, focusing on microservices, cloud computing, and big data.