Kafka Installation and Basic Usage


First, install ZooKeeper and Scala.

Install ZooKeeper

[root@hadoop001 conf]# cp zoo_sample.cfg zoo.cfg

[root@hadoop001 conf]# vi zoo.cfg

# the number of milliseconds of each tick
tickTime=2000
# the number of ticks that the initial
# synchronization phase can take
initLimit=10
# the number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/software/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# the number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# purge task interval in hours
# set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=hadoop001:2888:3888

server.2=hadoop002:2888:3888

server.3=hadoop003:2888:3888

The main changes are setting dataDir and adding the three server.N entries (port 2888 is used for quorum communication, 3888 for leader election).

Then create the data directory and the myid file; all three machines need them:

[root@hadoop001 conf]# cd ../

[root@hadoop001 zookeeper]# mkdir data

[root@hadoop001 zookeeper]# touch data/myid

[root@hadoop001 zookeeper]# echo 1 > data/myid

For hadoop002/003, copy the whole zookeeper directory over with scp, then update each myid file so it matches its server.N id in zoo.cfg:

[root@hadoop001 software]# scp -r  zookeeper 192.168.204.202:/opt/software/

[root@hadoop001 software]# scp -r zookeeper 192.168.204.203:/opt/software/

[root@hadoop002 zookeeper]# echo 2 > data/myid

[root@hadoop003 zookeeper]# echo 3 > data/myid

Start the ZooKeeper cluster (on all three machines):

[root@hadoop001 bin]# ./zkServer.sh start

[root@hadoop002 bin]# ./zkServer.sh start

[root@hadoop003 bin]# ./zkServer.sh start

Check the status: ./zkServer.sh status
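One node should report Mode: leader and the other two Mode: follower. To confirm the ensemble is actually serving requests, you can also connect with the bundled client; a minimal sketch assuming the hostnames above:

[root@hadoop001 bin]# ./zkCli.sh -server hadoop001:2181
ls /          # inside the client shell; should list at least the zookeeper znode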

Install Scala and Kafka

Install Scala, then extract the Kafka package to /opt/software/kafka and configure the broker:

[root@hadoop001 kafka]# mkdir logs

[root@hadoop001 kafka]# cd config/

[root@hadoop001 config]# vi server.properties

broker.id=1
port=9092
host.name=192.168.204.201
log.dirs=/opt/software/kafka/logs
zookeeper.connect=192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka

Modify the settings above (adding any that are missing); the /kafka suffix on zookeeper.connect is a chroot path that keeps all of Kafka's znodes under /kafka. Then configure the environment variables.
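A minimal sketch of the environment variables, assuming Kafka lives under /opt/software/kafka (the path implied by log.dirs above); append these to /etc/profile or ~/.bashrc and re-source the file:

export KAFKA_HOME=/opt/software/kafka
export PATH=$PATH:$KAFKA_HOME/bin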

Do the same on the other two machines (remember to change broker.id and host.name accordingly).
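For example, assuming the brokers are numbered after their hosts, hadoop002 and hadoop003 would look roughly like this:

# hadoop002 (192.168.204.202)
broker.id=2
host.name=192.168.204.202

# hadoop003 (192.168.204.203)
broker.id=3
host.name=192.168.204.203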

Start/Stop

Start (on all three machines):

nohup kafka-server-start.sh config/server.properties &

Stop: bin/kafka-server-stop.sh
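To confirm each broker actually came up, a quick check is to look for the Kafka JVM process and the listening port (a sketch; the exact process name may vary by version):

[root@hadoop001 kafka]# jps | grep -i kafka          # the Kafka process should be listed on every node
[root@hadoop001 kafka]# netstat -tlnp | grep 9092    # the broker should be listening on port 9092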

Create a topic:
bin/kafka-topics.sh --create \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka \
--replication-factor 3 --partitions 3 --topic test

The parameters above can be looked up with kafka-topics.sh --help.
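To double-check the topic, you can also list and describe it against the same ZooKeeper chroot; a sketch using the addresses from this tutorial:

bin/kafka-topics.sh --list \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka

bin/kafka-topics.sh --describe --topic test \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka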

After the topic is created, start a producer to test it (--broker-list refers to the Kafka brokers, not ZooKeeper):

bin/kafka-console-producer.sh \
--broker-list 192.168.204.201:9092,192.168.204.202:9092,192.168.204.203:9092 --topic test

In another terminal, start a consumer and subscribe to the messages produced to the topic:

bin/kafka-console-consumer.sh \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka \
--from-beginning --topic test
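Note that --zookeeper selects the old console consumer; on newer Kafka releases (roughly 0.10 and later) the consumer reads directly from the brokers, so a roughly equivalent command would be:

bin/kafka-console-consumer.sh \
--bootstrap-server 192.168.204.201:9092,192.168.204.202:9092,192.168.204.203:9092 \
--from-beginning --topic test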

In the producer terminal, type a message and press Enter; if the consumer terminal receives it, the Kafka deployment works.

Kafka's metadata is stored in ZooKeeper.

Kafka's message data is stored on disk.
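Both of these are easy to verify; a quick sketch, assuming the /kafka chroot and the log.dirs configured above:

# metadata: broker registrations and topic definitions live under the /kafka chroot in ZooKeeper
[root@hadoop001 bin]# ./zkCli.sh -server hadoop001:2181
ls /kafka/brokers/ids
ls /kafka/brokers/topics

# data: each topic partition is a directory of log segments on the broker's disk
[root@hadoop001 kafka]# ls /opt/software/kafka/logs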
