Hosts configuration issue when installing a Hadoop cluster environment


Today, while installing a Hadoop cluster, I found that after all nodes had been configured, running the following command

hadoop@name-node:~/hadoop$ bin/hadoop fs -ls

caused the name node to report the following error:

11/04/02 17:16:13 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/02 17:16:14 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 0 time(s).
11/04/02 17:16:15 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 1 time(s).
11/04/02 17:16:16 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 2 time(s).
11/04/02 17:16:17 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 3 time(s).
11/04/02 17:16:18 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 4 time(s).
11/04/02 17:16:19 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 5 time(s).
11/04/02 17:16:20 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 6 time(s).
11/04/02 17:16:21 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 7 time(s).
11/04/02 17:16:22 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 8 time(s).
11/04/02 17:16:23 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.

After spending most of the day on this, I found the following fix:

hadoop@name-node:~$ sudo vim /etc/hosts

Remove the line that maps 127.0.0.1 to name-node, then save. With that entry in place, the NameNode resolves its own host name to the loopback address and listens only on 127.0.0.1, so other nodes and clients cannot reach it at 10.42.43.55:9000.
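For reference, after the fix the /etc/hosts on the name node should look roughly like the sketch below. Only the 10.42.43.55 name-node entry comes from the log above; the data-node names and addresses are illustrative placeholders for the rest of the cluster.

hadoop@name-node:~$ cat /etc/hosts
127.0.0.1      localhost
10.42.43.55    name-node
10.42.43.56    data-node1    # illustrative data node entry
10.42.43.57    data-node2    # illustrative data node entry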

After restarting the Hadoop cluster, the problem was solved.
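On Hadoop releases of this vintage, restarting the cluster means running the stop/start scripts from the Hadoop home directory on the name node and then re-running the failing command, roughly as follows (the ~/hadoop path matches the prompt used above):

hadoop@name-node:~/hadoop$ bin/stop-all.sh
hadoop@name-node:~/hadoop$ bin/start-all.sh
hadoop@name-node:~/hadoop$ bin/hadoop fs -ls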
