Copying files across clusters with the Hadoop distcp command

Original post
2016/09/30 15:00

Hadoop provides the distcp command for copying data between different Hadoop clusters.

Basic usage: hadoop distcp -pbc hdfs://namenode1/test hdfs://namenode2/test

A distcp job runs as Map tasks only; there is no Reduce phase.

usage: distcp OPTIONS [source_path...] <target_path>

OPTIONS
 -append                Reuse existing data in target files and append new
                        data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -diff <arg>            Use snapshot diff report to identify the difference
                        between source and target
 -f <arg>               List of files that need to be copied
 -filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
 -i                     Ignore failures during copy
 -log <arg>             Folder on DFS where distcp execution logs are saved
 -m <arg>               Max number of concurrent maps to use for copy
 -mapredSslConf <arg>   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist
 -p <arg>               Preserve status (rbugpcaxt): replication,
                        block-size, user, group, permission, checksum-type,
                        ACL, XATTR, timestamps. If -p is specified with no
                        <arg>, then preserves replication, block size,
                        user, group, permission, checksum type and
                        timestamps. raw.* xattrs are preserved when both
                        the source and destination paths are in the
                        /.reserved/raw hierarchy (HDFS only); raw.* xattr
                        preservation is independent of the -p flag. Refer
                        to the DistCp documentation for more details
 -sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths
 -strategy <arg>        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp <arg>             Intermediate work path to be used for atomic
                        commit
 -update                Update target, copying only missing files or
                        directories
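Several of the options above are typically combined for an incremental sync between clusters. The sketch below is illustrative only; the namenode host names and /test paths are placeholders, not endorsed by the original post:

```shell
# Incremental sync: copy only missing or changed files, delete files from
# the target that no longer exist at the source, and cap parallelism and
# per-map bandwidth so the copy does not saturate the cluster network.
# namenode1/namenode2 and /test are hypothetical placeholders.
hadoop distcp \
  -update \
  -delete \
  -m 20 \
  -bandwidth 50 \
  -p bugp \
  hdfs://namenode1/test hdfs://namenode2/test
```

Note that -delete only takes effect together with -update or -overwrite, and -p bugp here preserves block size, user, group, and permissions per the flag letters listed above.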

Clusters running different Hadoop versions have incompatible RPC protocol versions, so you cannot directly run: hadoop distcp hdfs://namenode1/test hdfs://namenode2/test

To copy between different Hadoop versions, use HftpFileSystem instead. It is a read-only filesystem, so DistCp must run on the destination cluster (more precisely, on TaskTrackers that can write to the destination cluster). The source is specified as hftp://<dfs.http.address>/<path> (by default, dfs.http.address is <namenode>:50070).
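A cross-version copy as described above would then be launched from the destination cluster, reading the source over HFTP and writing to HDFS. The host names below are illustrative placeholders:

```shell
# Run on the destination (typically newer) cluster: the source is read via
# the read-only HFTP interface on the NameNode's HTTP port, while the
# target is written through the local cluster's native HDFS protocol.
# namenode1:50070 assumes the default dfs.http.address port.
hadoop distcp hftp://namenode1:50070/test hdfs://namenode2/test
```

Because checksum algorithms can differ across Hadoop versions, such copies are commonly run with -update -skipcrccheck when CRC comparison between source and target fails.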
