1. Problem
The disk holding Hadoop's data files hit 99% usage, so files could no longer be uploaded to HDFS.
Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input could only be replicated to 0 nodes, instead of 1
[root@idata-slave3 hadoop]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_greenbigdata4-lv_root
50G 49.2G 880M 99% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sda1 485M 64M 396M 14% /boot
/dev/mapper/vg_greenbigdata4-lv_home
860G 36G 781G 5% /home
As the output shows, / is only 50 GB (99% used) while /home has 860 GB free, so the data directory was migrated to /home.
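The condition that broke HDFS here can be scripted so it is caught before uploads start failing. A minimal sketch, not from the original post; `check_usage`, the mount point, and the threshold are illustrative:

```shell
#!/bin/sh
# check_usage MOUNT THRESHOLD
# Prints a warning (and returns 1) when the filesystem holding MOUNT
# is at or above THRESHOLD percent full -- the situation that broke HDFS here.
check_usage() {
    # df -P guarantees one line per filesystem; column 5 is Use%.
    usage=$(df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
    if [ "$usage" -ge "$2" ]; then
        echo "WARN: $1 at ${usage}% (threshold ${2}%)"
        return 1
    fi
    echo "OK: $1 at ${usage}%"
}

check_usage / 90 || true   # on the host above this would warn at 99%
```

Run from cron, this gives early warning instead of a failed replication.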
2. Original Hadoop configuration (core-site.xml)
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://idata-slave3:9010</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/dfs/tmp</value>
</property>
</configuration>
3. Stop the Hadoop cluster
stop-all.sh
4. New Hadoop configuration (only hadoop.tmp.dir changes)
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://idata-slave3:9010</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/dfs/tmp</value>
</property>
</configuration>
5. Data migration
Original data directory:
[root@idata-slave3 ~]# cd /usr/dfs/tmp/
[root@idata-slave3 tmp]# pwd
/usr/dfs/tmp
[root@idata-slave3 tmp]# ls
dfs hadoop-unjar5031704250755207519 hadoop-unjar7185041879037748832
hadoop-unjar2609195906982650560 hadoop-unjar5034268358752828831 nm-local-dir
New location:
[root@idata-slave3 hadoop]# mkdir /home/dfs
Copy (cp -a would additionally preserve ownership and timestamps; here every daemon runs as root, so plain cp -r is enough):
[root@idata-slave3 hadoop]# cp -r /usr/dfs/tmp/ /home/dfs/
This places the data at /home/dfs/tmp, matching the new hadoop.tmp.dir.
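Before trusting the copy, it is worth checking that it is complete. A small sketch, not part of the original post; `verify_copy` is a hypothetical helper and the paths in the comment are this cluster's:

```shell
#!/bin/sh
# verify_copy SRC DST
# Recursively compares the two trees; any missing or differing file
# makes the check fail, so an interrupted cp is caught before cleanup.
verify_copy() {
    if diff -r "$1" "$2" > /dev/null 2>&1; then
        echo "copy verified: $1 == $2"
    else
        echo "copy INCOMPLETE: $1 differs from $2" >&2
        return 1
    fi
}

# For this migration:
# verify_copy /usr/dfs/tmp /home/dfs/tmp
```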
6. Start the Hadoop cluster
start-all.sh
Output:
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop-2.2.0/share/hadoop/mapreduce/lib/mahout-core-0.9-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop-2.2.0/share/hadoop/mapreduce/lib/mahout-examples-0.9-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Starting namenodes on [idata-slave3]
idata-slave3: starting namenode, logging to /usr/lib/hadoop-2.2.0/logs/hadoop-root-namenode-idata-slave3.out
idata-slave3: starting datanode, logging to /usr/lib/hadoop-2.2.0/logs/hadoop-root-datanode-idata-slave3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: reverse mapping checking getaddrinfo for localhost [127.0.0.1] failed - POSSIBLE BREAK-IN ATTEMPT!
0.0.0.0: starting secondarynamenode, logging to /usr/lib/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-idata-slave3.out
starting yarn daemons
starting resourcemanager, logging to /usr/lib/hadoop-2.2.0/logs/yarn-root-resourcemanager-idata-slave3.out
idata-slave3: starting nodemanager, logging to /usr/lib/hadoop-2.2.0/logs/yarn-root-nodemanager-idata-slave3.out
Startup succeeded.
7. Test
[root@idata-slave3 hadoop]# hdfs dfs -ls /
Found 6 items
drwxr-xr-x - root supergroup 0 2014-10-24 15:57 /flume
drwxr-xr-x - root supergroup 0 2014-11-14 14:41 /hbase
drwxr-xr-x - root supergroup 0 2014-11-11 12:58 /test
drwxr-xr-x - root supergroup 0 2014-11-27 17:11 /tmp
drwxr-xr-x - root supergroup 0 2014-10-11 11:14 /tmp-output101
drwxr-xr-x - root supergroup 0 2014-10-28 16:02 /user
Upload a directory, then list the HDFS home directory to confirm it arrived:
[root@idata-slave3 ~]# hdfs dfs -put sparkdata/ ./
[root@idata-slave3 ~]# hdfs dfs -ls
drwxr-xr-x - root supergroup 0 2014-11-04 15:14 _sqoop
drwxr-xr-x - root supergroup 0 2014-10-23 17:08 flume
drwxr-xr-x - root supergroup 0 2014-10-16 16:22 forest
drwxr-xr-x - root supergroup 0 2014-10-16 16:25 od
drwxr-xr-x - root supergroup 0 2014-11-11 15:10 pfp
drwxr-xr-x - root supergroup 0 2014-11-07 16:07 recommend
drwxr-xr-x - root supergroup 0 2014-11-27 17:44 sparkdata
drwxr-xr-x - root supergroup 0 2014-11-18 17:19 temp
With this, the data was migrated without reformatting HDFS, so no data was lost.
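Note that the space on / is only reclaimed once the old directory is removed. A hedged sketch of a safe cleanup, not from the original post; `remove_old_tmp` is a hypothetical helper and the paths in the comment are this cluster's:

```shell
#!/bin/sh
# remove_old_tmp OLD NEW
# Deletes OLD only when its contents still match NEW, so the ~49 GB on /
# is reclaimed without any risk of deleting the only surviving copy.
remove_old_tmp() {
    if diff -r "$1" "$2" > /dev/null 2>&1; then
        rm -rf "$1"
        echo "removed $1"
    else
        echo "refusing to remove $1: contents differ from $2" >&2
        return 1
    fi
}

# After confirming the cluster is healthy (e.g. hdfs dfsadmin -report
# shows the DataNode live):
# remove_old_tmp /usr/dfs/tmp /home/dfs/tmp
```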
Original article: http://blog.csdn.net/u010670689/article/details/41549907