
Pulsar Isolation Part IV: Single Cluster Isolation

June 01, 2022

Introduction

This is the fourth blog in our four-part blog series on how to achieve resource isolation in Apache Pulsar. Before we dive in, let’s review what was covered in Parts I, II, and III.

  • Pulsar Isolation Part I: Taking an In-Depth Look at How to Achieve Isolation in Pulsar This blog provides an introduction to three approaches to implement isolation in Pulsar. These include:

    • leveraging separate Pulsar clusters that use separate BookKeeper clusters,
    • leveraging separate Pulsar clusters that share one BookKeeper cluster, and
    • using a single Pulsar cluster with a single BookKeeper cluster. Each of these approaches and their specific use cases are discussed at length in the subsequent blogs.
  • Pulsar Isolation Part II: Separate Pulsar Clusters shows you how to achieve isolation between separate Pulsar clusters that use separate BookKeeper clusters. This shared-nothing approach offers the highest level of isolation and is suitable for storing highly sensitive data, such as personally identifiable information or financial records.
  • Pulsar Isolation Part III: Separate Pulsar Clusters Sharing a Single BookKeeper Cluster demonstrates how to achieve Pulsar isolation using separate Pulsar clusters that share one BookKeeper cluster. This approach uses separate Pulsar broker clusters in order to isolate the end-users from one another and allows you to use different authentication methods based on the use case. As a result, you gain the benefits of using a shared storage layer, such as a reduced hardware footprint and the associated hardware and maintenance costs.

In this fourth and final blog of the series, we provide a step-by-step tutorial on how to use a single cluster to achieve broker and bookie isolation. This more traditional approach takes advantage of Pulsar’s built-in multi-tenancy and removes the need to manage multiple broker and bookie clusters.

Preparation

In this tutorial we use docker-compose to set up a Pulsar cluster. First, we need a working Docker environment.

This tutorial is based on Docker 20.10.10, docker-compose 1.29.2, and macOS 12.3.1.

  1. Get the docker-compose configuration files.
git clone https://github.com/gaoran10/pulsar-docker-compose
cd pulsar-docker-compose
  2. Start the cluster.
docker-compose up
  3. Check the pods.
docker-compose ps
   Name                  Command                State                         Ports
--------------------------------------------------------------------------------------------------------
bk1           bash -c export dbStorage_w ...   Up
bk2           bash -c export dbStorage_w ...   Up
bk3           bash -c export dbStorage_w ...   Up
bk4           bash -c export dbStorage_w ...   Up
broker1       bash -c bin/apply-config-f ...   Up
broker2       bash -c bin/apply-config-f ...   Up
broker3       bash -c bin/apply-config-f ...   Up
proxy1        bash -c bin/apply-config-f ...   Up         0.0.0.0:6650->6650/tcp, 0.0.0.0:8080->8080/tcp
pulsar-init   bin/init-cluster.sh              Exit 0
zk1           bash -c bin/apply-config-f ...   Up

After the cluster initialization completes, we can begin setting the broker isolation policy.

Broker Isolation

  1. Download a Pulsar release package to execute the pulsar-admin command.
wget https://archive.apache.org/dist/pulsar/pulsar-2.10.0/apache-pulsar-2.10.0-bin.tar.gz
tar -xvf apache-pulsar-2.10.0-bin.tar.gz
# we can execute the pulsar-admin command in this directory
cd apache-pulsar-2.10.0
  2. Get the broker list.
bin/pulsar-admin brokers list test
# output
"broker1:8080"
"broker2:8080"
"broker3:8080"
  3. Create a namespace.
bin/pulsar-admin namespaces create public/ns-isolation
bin/pulsar-admin namespaces set-retention -s 1G -t 3d public/ns-isolation
  4. Set the namespace isolation policy.
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*" \
--secondary "broker2:*" \
test ns-broker-isolation
  5. Get the namespace isolation policies.
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation    NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
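The auto-failover parameters deserve a note: with the min_available policy type, the namespace fails over to the secondary brokers when fewer than min_limit primary brokers remain available, where a broker whose resource usage exceeds usage_threshold no longer counts as available. A minimal Python sketch of that check (our reading of the parameters, not Pulsar's actual implementation):

```python
def needs_failover(primary_usages, min_limit=1, usage_threshold=80):
    """Sketch of the min_available auto-failover check: a primary broker
    counts as available while its resource usage stays below the
    threshold; once fewer than min_limit remain, secondary brokers
    are enlisted."""
    available = sum(1 for usage in primary_usages if usage < usage_threshold)
    return available < min_limit

print(needs_failover([50]))   # broker1 healthy -> False, no failover
print(needs_failover([95]))   # broker1 overloaded -> True, fail over
```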
  6. Create a partitioned topic.
bin/pulsar-admin topics create-partitioned-topic -p 10 public/ns-isolation/t1
  7. Do a partitioned lookup.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9    pulsar://broker1:6650
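To sanity-check the policy, the lookup output can be summarized per broker. A small hypothetical Python helper (not part of the Pulsar tooling), run here against the ten lines captured above:

```python
from collections import Counter

# The ten lookup lines shown above, reconstructed for the example.
lookup_output = "\n".join(
    f"persistent://public/ns-isolation/t1-partition-{i}    pulsar://broker1:6650"
    for i in range(10)
)

def partitions_per_broker(text):
    """Count how many partitions each broker URL owns."""
    counts = Counter()
    for line in text.splitlines():
        _topic, broker = line.split()
        counts[broker] += 1
    return dict(counts)

print(partitions_per_broker(lookup_output))
# {'pulsar://broker1:6650': 10} -- every partition on the primary broker
```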

  8. Stop broker1.
${DOCKER_COMPOSE_HOME}/docker-compose stop broker1
# output
Stopping broker1 ... done
  9. Check the partitioned lookup.

After broker1 stops, the topics are owned by the secondary brokers matching broker2:*.

bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-1    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-2    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-3    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-4    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-5    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-6    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-7    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-8    pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-9    pulsar://broker2:6650

  10. Stop broker2.
${DOCKER_COMPOSE_HOME}/docker-compose stop broker2
# output
Stopping broker2 ... done
  11. Check the partitioned lookup.

After stopping broker2, there are no available brokers for the namespace public/ns-isolation.

bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
HTTP 503 Service Unavailable

Reason: javax.ws.rs.ServiceUnavailableException: HTTP 503 Service Unavailable

  12. Restart broker1 and broker2.
${DOCKER_COMPOSE_HOME}/docker-compose start broker1
# output
Starting broker1 ... done

${DOCKER_COMPOSE_HOME}/docker-compose start broker2
# output
Starting broker2 ... done

Migrate the Namespace between Brokers

Because the Pulsar broker is stateless, we can migrate the namespace between broker groups by simply changing the namespace isolation policy.

  1. Check the namespace isolation policies.
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation    NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))

As shown, the primary and secondary brokers for the namespace public/ns-isolation are broker1:* and broker2:*.

  2. Check the topic partitioned lookup results.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9    pulsar://broker1:6650
  3. Update the namespace isolation policy.
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker3:*" \
--secondary "broker2:*" \
test ns-broker-isolation
  4. Check the namespace isolation policy.
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation    NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker3:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
  5. Unload the namespace to make the namespace isolation policy take effect.
bin/pulsar-admin namespaces unload public/ns-isolation
  6. Check the partitioned lookup.

The topics are now owned by the new primary broker (broker3).

bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-1    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-2    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-3    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-4    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-5    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-6    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-7    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-8    pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-9    pulsar://broker3:6650


Scale up and down Brokers

Scale up

  1. Start broker4.

Add broker4 configurations in the docker-compose file.

  broker4:
    hostname: broker4
    container_name: broker4
    image: apachepulsar/pulsar:latest
    restart: on-failure
    command: >
      bash -c "bin/apply-config-from-env.py conf/broker.conf && \
               bin/apply-config-from-env.py conf/pulsar_env.sh && \
               bin/watch-znode.py -z $$zookeeperServers -p /initialized-$$clusterName -w && \
               exec bin/pulsar broker"
    environment:
      clusterName: test
      zookeeperServers: zk1:2181
      configurationStore: zk1:2181
      webSocketServiceEnabled: "false"
      functionsWorkerEnabled: "false"
      managedLedgerMaxEntriesPerLedger: 100
      managedLedgerMinLedgerRolloverTimeMinutes: 0
    volumes:
      - ./apply-config-from-env.py:/pulsar/bin/apply-config-from-env.py
    depends_on:
      - zk1
      - pulsar-init
      - bk1
      - bk2
      - bk3
      - bk4
    networks:
      pulsar:

Start broker4.

${DOCKER_COMPOSE_HOME}/docker-compose create
# output
zk1 is up-to-date
bk1 is up-to-date
bk2 is up-to-date
bk3 is up-to-date
broker1 is up-to-date
broker2 is up-to-date
broker3 is up-to-date
Creating broker4 ... done
proxy1 is up-to-date
${DOCKER_COMPOSE_HOME}/docker-compose start broker4
# output
Starting broker4 ... done
  2. Check the broker list.
bin/pulsar-admin brokers list test
# output
broker4:8080
broker1:8080
broker2:8080
broker3:8080
  3. Set a namespace isolation policy.
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*,broker4:*" \
--secondary "broker2:*" \
test ns-broker-isolation
  4. Get the namespace isolation policies.
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation    NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*, broker4:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
  5. Unload the namespace.
bin/pulsar-admin namespaces unload public/ns-isolation
  6. Check the partitioned lookup.

The topic's partitions should now be distributed across broker1 and broker4.

bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2    pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-3    pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-4    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6    pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-7    pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-8    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9    pulsar://broker1:6650


Scale down

  1. Remove broker4 from the namespace isolation policy.
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*" \
--secondary "broker2:*" \
test ns-broker-isolation
  2. Check the namespace isolation policy.
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation    NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
  3. Stop broker4.
${DOCKER_COMPOSE_HOME}/docker-compose stop broker4
# output
Stopping broker4 ... done
  4. Check the broker list.
bin/pulsar-admin brokers list test
# output
broker1:8080
broker2:8080
broker3:8080
  5. Check the partitioned lookup.
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8    pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9    pulsar://broker1:6650


BookKeeper Isolation

  1. Get the bookie list.
bin/pulsar-admin bookies list-bookies
# output
{
  "bookies" : [ {
    "bookieId" : "bk2:3181"
  }, {
    "bookieId" : "bk4:3181"
  }, {
    "bookieId" : "bk3:3181"
  }, {
    "bookieId" : "bk1:3181"
  } ]
}
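Since list-bookies returns JSON, the bookie IDs are easy to extract in a script. A hypothetical snippet using the payload shown above:

```python
import json

# JSON payload returned by `pulsar-admin bookies list-bookies`, as shown above.
payload = """
{
  "bookies" : [
    { "bookieId" : "bk2:3181" },
    { "bookieId" : "bk4:3181" },
    { "bookieId" : "bk3:3181" },
    { "bookieId" : "bk1:3181" }
  ]
}
"""

bookie_ids = sorted(b["bookieId"] for b in json.loads(payload)["bookies"])
print(bookie_ids)
# ['bk1:3181', 'bk2:3181', 'bk3:3181', 'bk4:3181']
```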
  2. Set the bookie racks.

The default value of the configuration bookkeeperClientRackawarePolicyEnabled is true, so RackawareEnsemblePlacementPolicy is the default bookie placement policy. We set rack names in the form /rackN.

bin/pulsar-admin bookies set-bookie-rack \
--bookie bk1:3181 \
--hostname bk1:3181 \
--group group1 \
--rack /rack1

bin/pulsar-admin bookies set-bookie-rack \
--bookie bk3:3181 \
--hostname bk3:3181 \
--group group1 \
--rack /rack1

bin/pulsar-admin bookies set-bookie-rack \
--bookie bk2:3181 \
--hostname bk2:3181 \
--group group2 \
--rack /rack2

bin/pulsar-admin bookies set-bookie-rack \
--bookie bk4:3181 \
--hostname bk4:3181 \
--group group2 \
--rack /rack2
  3. Check the bookie racks placement.
bin/pulsar-admin bookies racks-placement
group1    {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2    {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181)}
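Conceptually, rack-aware placement prefers to spread each write ensemble across distinct racks. The following toy sketch illustrates that idea (it is not BookKeeper's actual RackawareEnsemblePlacementPolicy), using the rack mapping configured above:

```python
import random

# Rack mapping as set above via `pulsar-admin bookies set-bookie-rack`.
racks = {
    "bk1:3181": "/rack1",
    "bk3:3181": "/rack1",
    "bk2:3181": "/rack2",
    "bk4:3181": "/rack2",
}

def pick_ensemble(bookies, racks, ensemble_size, rng=random):
    """Toy rack-aware pick: take bookies from unused racks first,
    then fill remaining slots if there are fewer racks than slots."""
    candidates = list(bookies)
    rng.shuffle(candidates)
    chosen, used_racks = [], set()
    for bookie in candidates:  # one bookie per rack first
        if len(chosen) < ensemble_size and racks[bookie] not in used_racks:
            chosen.append(bookie)
            used_racks.add(racks[bookie])
    for bookie in candidates:  # then relax the rack constraint
        if len(chosen) < ensemble_size and bookie not in chosen:
            chosen.append(bookie)
    return chosen

ensemble = pick_ensemble(racks, racks, 2)
# with four bookies spread over two racks, the two picks land in different racks
```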
  4. Set the bookie affinity group for the namespace.
bin/pulsar-admin namespaces set-bookie-affinity-group public/ns-isolation \
--primary-group group1 \
--secondary-group group2
  5. Check the namespace affinity group.
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
  "bookkeeperAffinityGroupPrimary" : "group1",
  "bookkeeperAffinityGroupSecondary" : "group2"
}
  6. Produce messages to the topic.
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
  7. Get internal stats of the topic.
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
    "ledgerId" : 0,
    "ledgerId" : 1,
    "ledgerId" : 2,
    "ledgerId" : 3,
    "ledgerId" : 4,
    "ledgerId" : -1,
  8. Check ledger ensembles for the ledgers [0, 1, 2, 3, 4].
# execute these commands in the node bk1
${DOCKER_COMPOSE_HOME}/docker-compose exec bk1 /bin/bash

bin/bookkeeper shell ledgermetadata -ledgerid 0
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 1
# check ensembles
ensembles={0=[bk3:3181, bk1:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 2
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 3
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 4
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}
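These ensembles line up with the affinity policy: every bookie belongs to the primary group (group1). A hypothetical parser for the `ensembles={...}` lines makes that check scriptable:

```python
import re

group1 = {"bk1:3181", "bk3:3181"}

def ensemble_bookies(line):
    """Extract bookie IDs from a line like 'ensembles={0=[bk1:3181, bk3:3181]}'."""
    return re.findall(r"[\w.-]+:\d+", line)

line = "ensembles={0=[bk1:3181, bk3:3181]}"
bookies = ensemble_bookies(line)
print(bookies)                  # ['bk1:3181', 'bk3:3181']
print(set(bookies) <= group1)   # True -- the ledger stayed in the primary group
```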


  9. Stop bookie1.
${DOCKER_COMPOSE_HOME}/docker-compose stop bk1
  10. Produce messages to the topic.
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
  11. Check ledger metadata.
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
    "ledgerId" : 5,
    "ledgerId" : 6,
    "ledgerId" : 7,
    "ledgerId" : 8,
    "ledgerId" : 9,
    "ledgerId" : -1,

Check ledger metadata for the newly added ledgers [5, 6, 7, 8, 9]. Because bookie1 is unavailable and the configuration bookkeeperClientEnforceMinNumRacksPerWriteQuorum is false, bookies from the secondary group are used. Bookie3 is in the primary group, so bookie3 is still always used.

# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash

bin/bookkeeper shell ledgermetadata -ledgerid 5
# check ensembles
ensembles={0=[bk4:3181, bk3:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 6
# check ensembles
ensembles={0=[bk3:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 7
# check ensembles
ensembles={0=[bk2:3181, bk3:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 8
# check ensembles
ensembles={0=[bk3:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 9
# check ensembles
ensembles={0=[bk3:3181, bk2:3181]}
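The fallback is also easy to see programmatically: classifying the bookies of each ensemble by affinity group shows group2 now appearing alongside bk3. A hypothetical check using the five ensembles printed above:

```python
# Group membership as configured earlier with set-bookie-rack.
group_of = {
    "bk1:3181": "group1", "bk3:3181": "group1",
    "bk2:3181": "group2", "bk4:3181": "group2",
}

# Ensembles of ledgers 5-9, copied from the output above.
ensembles = [
    ["bk4:3181", "bk3:3181"],
    ["bk3:3181", "bk2:3181"],
    ["bk2:3181", "bk3:3181"],
    ["bk3:3181", "bk2:3181"],
    ["bk3:3181", "bk2:3181"],
]

groups_used = {group_of[b] for e in ensembles for b in e}
print(sorted(groups_used))                          # ['group1', 'group2']
print(all("bk3:3181" in e for e in ensembles))      # True -- bk3 is always used
```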


  12. Restart bk1.

${DOCKER_COMPOSE_HOME}/docker-compose start bk1

Migrate Bookie Affinity Group

  1. Check the bookie affinity group.
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
  "bookkeeperAffinityGroupPrimary" : "group1",
  "bookkeeperAffinityGroupSecondary" : "group2"
}
  2. Modify the bookie affinity group of the namespace.
bin/pulsar-admin namespaces set-bookie-affinity-group public/ns-isolation \
--primary-group group2
  3. Check the bookie affinity group.
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
  "bookkeeperAffinityGroupPrimary" : "group2"
}
  4. Unload the namespace.
bin/pulsar-admin namespaces unload public/ns-isolation
  5. Produce messages.
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
  6. Check the ensemble's bookies for newly created ledgers.
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
    "ledgerId" : 12,
    "ledgerId" : 13,
    "ledgerId" : 14,
    "ledgerId" : 15,
    "ledgerId" : 16,
    "ledgerId" : -1,
  7. Check ledger metadata for the newly added ledgers [12, 13, 14, 15, 16].
# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash

bin/bookkeeper shell ledgermetadata -ledgerid 12
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 13
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 14
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 15
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 16
# check ensembles
ensembles={0=[bk2:3181, bk4:3181]}


Scale up and down Bookies

Scale up

  1. Add the following configuration in the docker-compose file.
  bk5:
    hostname: bk5
    container_name: bk5
    image: apachepulsar/pulsar:latest
    command: >
      bash -c "export dbStorage_writeCacheMaxSizeMb="${dbStorage_writeCacheMaxSizeMb:-16}" && \
               export dbStorage_readAheadCacheMaxSizeMb="${dbStorage_readAheadCacheMaxSizeMb:-16}" && \
               bin/apply-config-from-env.py conf/bookkeeper.conf && \
               bin/apply-config-from-env.py conf/pulsar_env.sh && \
               bin/watch-znode.py -z $$zkServers -p /initialized-$$clusterName -w && \
               exec bin/pulsar bookie"
    environment:
      clusterName: test
      zkServers: zk1:2181
      numAddWorkerThreads: 8
      useHostNameAsBookieID: "true"
    volumes:
      - ./apply-config-from-env.py:/pulsar/bin/apply-config-from-env.py
    depends_on:
      - zk1
      - pulsar-init
    networks:
      pulsar:
  2. Start bookie5.
${DOCKER_COMPOSE_HOME}/docker-compose create
${DOCKER_COMPOSE_HOME}/docker-compose start bk5
  3. Check the readable and writable bookie list. With bk1 restarted earlier and bk5 now started, there should be five writable bookies.
# execute this command in bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
ReadWrite Bookies :
BookieID:bk2:3181, IP:192.168.32.5, Port:3181, Hostname:bk2
BookieID:bk4:3181, IP:192.168.32.7, Port:3181, Hostname:bk4
BookieID:bk3:3181, IP:192.168.32.6, Port:3181, Hostname:bk3
BookieID:bk1:3181, IP:192.168.32.4, Port:3181, Hostname:bk1
BookieID:bk5:3181, IP:192.168.32.9, Port:3181, Hostname:bk5
  4. Add the newly added bookie to the current primary group (group2).
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk5:3181 \
--hostname bk5:3181 \
--group group2 \
--rack /rack2
  5. Check the bookie racks placement.
bin/pulsar-admin bookies racks-placement
group1    {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2    {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181), bk5:3181=BookieInfoImpl(rack=/rack2, hostname=bk5:3181)}
  6. Unload the namespace.
bin/pulsar-admin namespaces unload public/ns-isolation
  7. Produce messages to the topic.
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
  8. Check the newly added ledgers of the topic.
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
    "ledgerId" : 17,
    "ledgerId" : 20,
    "ledgerId" : 21,
    "ledgerId" : 22,
    "ledgerId" : 23,
    "ledgerId" : -1,

Verify the ledger ensembles. The newly created ledgers are all written to the primary group (group2) because there are enough writable bookies.

# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash

bin/bookkeeper shell ledgermetadata -ledgerid 17
# check ensembles
ensembles={0=[bk5:3181, bk2:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 20
# check ensembles
ensembles={0=[bk2:3181, bk4:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 21
# check ensembles
ensembles={0=[bk5:3181, bk4:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 22
# check ensembles
ensembles={0=[bk5:3181, bk4:3181]}

bin/bookkeeper shell ledgermetadata -ledgerid 23
# check ensembles
ensembles={0=[bk2:3181, bk4:3181]}


Scale down

  1. Check the placement of the racks.
bin/pulsar-admin bookies racks-placement
group1    {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2    {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181), bk5:3181=BookieInfoImpl(rack=/rack2, hostname=bk5:3181)}
  2. Delete the bookie from the bookie affinity group.
bin/pulsar-admin bookies delete-bookie-rack -b bk5:3181
  3. Check whether there are under-replicated ledgers, which is expected after removing a bookie.
# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listunderreplicated
  4. Stop the bookie.
${DOCKER_COMPOSE_HOME}/docker-compose stop bk5
  5. Decommission the bookie.
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell decommissionbookie -bookieid bk5:3181
  6. Check ledgers in the decommissioned bookie.
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listledgers -bookieid bk5:3181
  7. List the bookies.
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
ReadWrite Bookies :
BookieID:bk2:3181, IP:192.168.48.5, Port:3181, Hostname:bk2
BookieID:bk4:3181, IP:192.168.48.7, Port:3181, Hostname:bk4
BookieID:bk3:3181, IP:192.168.48.6, Port:3181, Hostname:bk3
BookieID:bk1:3181, IP:192.168.48.4, Port:3181, Hostname:bk1

What’s Next

  1. Read the previous blogs in this series to learn more about Pulsar isolation:

  2. Learn Pulsar Fundamentals with StreamNative Academy: If you are new to Pulsar, we recommend taking the self-paced Pulsar courses developed by the original creators of Pulsar.
  3. Spin up a Pulsar cluster in minutes: Sign up for StreamNative Cloud today. StreamNative Cloud is the simple, fast, and cost-effective way to run Pulsar in the public cloud.
  4. Save your spot at the Pulsar Summit San Francisco: The first in-person Pulsar Summit is taking place this August! Sign up today to join the Pulsar community and the messaging and event streaming community.