How to install a ZooKeeper cluster


1. Create a zookeeper directory under the root directory (create it on service1, service2, and service3)

[root@localhost /]# mkdir zookeeper

Upload the file to the service1 server via Xshell: put zookeeper-3.4.6.tar.gz into the /software directory

2. Remote-copy /software/zookeeper-3.4.6.tar.gz on service1 to service2 and service3

[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz root@192.168.2.212:/software/

[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz root@192.168.2.213:/software/

3. Copy /software/zookeeper-3.4.6.tar.gz to the /zookeeper/ directory (run this on service1, service2, and service3)

[root@localhost software]# cp /software/zookeeper-3.4.6.tar.gz /zookeeper/

4. Extract zookeeper-3.4.6.tar.gz (run this on service1, service2, and service3)
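
For example, from the /zookeeper directory (assuming the tarball was copied there in step 3):

[root@localhost zookeeper]# tar -zxvf zookeeper-3.4.6.tar.gz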

5. Create two directories under /zookeeper: zkdata and zkdatalog (create them on service1, service2, and service3)

[root@localhost zookeeper]# mkdir zkdata

[root@localhost zookeeper]# mkdir zkdatalog

6. Go into the /zookeeper/zookeeper-3.4.6/conf/ directory

[root@localhost zookeeper]# cd /zookeeper/zookeeper-3.4.6/conf/

[root@localhost conf]# ls

configuration.xsl log4j.properties zoo.cfg zoo_sample.cfg

7. Edit the zoo.cfg file
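
If zoo.cfg does not already exist, it can first be created from the bundled sample:

[root@localhost conf]# cp zoo_sample.cfg zoo.cfg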

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored.
# Do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/zkdata
dataLogDir=/zookeeper/zkdatalog
# The port at which the clients will connect
clientPort=2181
# The maximum number of client connections.
# Increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.2.211:12888:13888
server.2=192.168.2.212:12888:13888
server.3=192.168.2.213:12888:13888

8. Make the same zoo.cfg changes on service2 and service3
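
One way to do this is simply to copy the edited file to the other nodes (assuming root SSH access, as in step 2):

[root@localhost conf]# scp /zookeeper/zookeeper-3.4.6/conf/zoo.cfg root@192.168.2.212:/zookeeper/zookeeper-3.4.6/conf/

[root@localhost conf]# scp /zookeeper/zookeeper-3.4.6/conf/zoo.cfg root@192.168.2.213:/zookeeper/zookeeper-3.4.6/conf/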

9. Write the myid file (go into the /zookeeper/zkdata directory)

[root@localhost /]# cd /zookeeper/zkdata

[root@localhost zkdata]# echo 1 > myid

10. Write the myid files on service2 and service3 as well
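
The value has to match the server.N id in zoo.cfg, so on service2 and service3 respectively:

[root@localhost zkdata]# echo 2 > myid   # on service2

[root@localhost zkdata]# echo 3 > myid   # on service3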

11. Take a look at the ZooKeeper commands:

[root@localhost ~]# cd /zookeeper/zookeeper-3.4.6/bin/

[root@localhost bin]# ls

README.txt zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh zookeeper.out

12. Run zkServer.sh to see its detailed usage:

[root@localhost bin]# ./zkServer.sh
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}

13. Start the ZooKeeper service on service1, service2, and service3

[root@localhost bin]# ./zkServer.sh start

14. Check the ZooKeeper process with jps

[root@localhost bin]# jps

31483 QuorumPeerMain
31664 Jps

15. Check the ZooKeeper status on service1, service2, and service3 (you can see the leader and follower nodes)

[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader

16. Seeing a leader node and follower nodes means the cluster has been installed successfully
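
As an optional sanity check, the bundled client should be able to connect to any of the nodes, for example:

[root@localhost bin]# ./zkCli.sh -server 192.168.2.211:2181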

If you would like to learn about some distributed-systems solutions, feel free to reach me on QQ: 1018925780
