Installing and Configuring Hadoop on Windows
Steps:
1. Install the JDK (see a JDK installation guide if you are unsure how).
2. Download hadoop-2.5.2.tar.gz, or search for it and download it yourself.
3. Download hadooponwindows-master.zip (the native binaries that let Hadoop run on Windows).
I. Install Hadoop 2.5.2
Download hadoop-2.5.2.tar.gz and unpack it to a directory of your choice; I use D:\dev\hadoop-2.5.2.
II. Configure the Hadoop environment variables
1. Windows environment variable configuration
Right-click My Computer -> Properties -> Advanced system settings -> Advanced tab -> Environment Variables -> New, and create a HADOOP_HOME variable pointing to your Hadoop directory (D:\dev\hadoop-2.5.2 here).
2. Then edit the Path variable and append Hadoop's bin directory (%HADOOP_HOME%\bin).
III. Edit the Hadoop configuration files
1. Edit core-site.xml under D:\dev\hadoop-2.5.2\etc\hadoop, paste in the following content, and save:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/D:/dev/hadoop-2.5.2/workplace/tmp</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/D:/dev/hadoop-2.5.2/workplace/name</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
2. Edit mapred-site.xml in the same directory (if it does not exist, rename mapred-site.xml.template to mapred-site.xml), paste in the following content, and save. Note that mapred.job.tracker is a Hadoop 1.x property that is ignored under YARN, but it is harmless to keep:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>hdfs://localhost:9001</value>
    </property>
</configuration>
3. Edit hdfs-site.xml in the same directory, paste in the following content, and save. Create the data directory yourself; here I created workplace/data under HADOOP_HOME:
<configuration>
    <!-- Replication is set to 1 because this is a single-node Hadoop setup -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/D:/dev/hadoop-2.5.2/workplace/data</value>
    </property>
</configuration>
4. Edit yarn-site.xml in the same directory, paste in the following content, and save:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
5. Edit hadoop-env.cmd in the same directory: comment out the existing JAVA_HOME line with @rem, set JAVA_HOME to your JDK installation path, and save. If your JDK sits under a path containing spaces (such as C:\Program Files\Java), use the 8.3 short form of the path (e.g. C:\PROGRA~1\Java\...), since the Hadoop cmd scripts do not handle spaces well.
@rem set JAVA_HOME=%JAVA_HOME%
set JAVA_HOME=D:\java\jdk
IV. Replace the bin directory
Unpack the downloaded hadooponwindows-master.zip and use its bin directory (which contains the .dll and .exe files Hadoop needs on Windows, notably winutils.exe and hadoop.dll) to replace the bin directory of your Hadoop installation.
V. Run Hadoop
1. Open a cmd window and run "hdfs namenode -format";
2. Open a cmd window, change to Hadoop's sbin directory, and run "start-all.cmd". It launches four processes: NameNode, DataNode, ResourceManager, and NodeManager; you can confirm they are all up with the JDK's jps tool.
At this point, the Hadoop services are fully set up.
Next, test HDFS by uploading files.
Given the core-site.xml configuration above, you can now operate on HDFS through hdfs://localhost:9000 (a Java API sketch follows the command-line steps below).
1. Create the input directory:
C:\WINDOWS\system32>hadoop fs -mkdir hdfs://localhost:9000/user/
C:\WINDOWS\system32>hadoop fs -mkdir hdfs://localhost:9000/user/wcinput
2. Upload data into the directory:
C:\WINDOWS\system32>hadoop fs -put D:\file1.txt hdfs://localhost:9000/user/wcinput
C:\WINDOWS\system32>hadoop fs -put D:\file2.txt hdfs://localhost:9000/user/wcinput
3. List the files in the directory:
C:\WINDOWS\system32>hadoop fs -ls hdfs://localhost:9000/user/wcinput
All done.
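Equivalently, HDFS can be driven from the Java API. Here is a minimal sketch, assuming the hadoop-client 2.5.2 jars are on the classpath; the class name HdfsQuickTest and the local file path are illustrative:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsQuickTest {
    public static void main(String[] args) throws Exception {
        // Connect to the NameNode address configured in core-site.xml
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        // Equivalent of: hadoop fs -mkdir /user/wcinput
        fs.mkdirs(new Path("/user/wcinput"));
        // Equivalent of: hadoop fs -put D:\file1.txt /user/wcinput
        fs.copyFromLocalFile(new Path("D:/file1.txt"), new Path("/user/wcinput"));
        // Equivalent of: hadoop fs -ls /user/wcinput
        for (FileStatus status : fs.listStatus(new Path("/user/wcinput"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
        fs.close();
    }
}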
Appendix: Hadoop's built-in web consoles
1. Resource management GUI (YARN ResourceManager): http://localhost:8088/;
2. HDFS GUI (NameNode): http://localhost:50070/.
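The ResourceManager console also exposes REST endpoints; below is a quick plain-Java sketch with no third-party dependencies (ws/v1/cluster/info is the ResourceManager's standard cluster-info resource in Hadoop 2.x) to confirm the cluster answers:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class YarnInfoCheck {
    public static void main(String[] args) throws Exception {
        // Query the ResourceManager's cluster-info REST endpoint
        URL url = new URL("http://localhost:8088/ws/v1/cluster/info");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON describing cluster state and version
            }
        }
    }
}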
Estimating pi with Hadoop's bundled example
The run below was captured on a Hadoop 2.9.0 installation; with 2.5.2, use hadoop-mapreduce-examples-2.5.2.jar instead. The two arguments are the number of map tasks and the number of samples per map:
D:\HADOOP\hadoop>hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar pi 10 10
Number of Maps = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/11/09 13:31:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/11/09 13:31:07 INFO input.FileInputFormat: Total input files to process : 10
18/11/09 13:31:07 INFO mapreduce.JobSubmitter: number of splits:10
18/11/09 13:31:07 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/11/09 13:31:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541741344890_0001
18/11/09 13:31:08 INFO impl.YarnClientImpl: Submitted application application_1541741344890_0001
18/11/09 13:31:08 INFO mapreduce.Job: The url to track the job: http://DESKTOP-S0J61R2:8088/proxy/application_1541741344890_0001/
18/11/09 13:31:08 INFO mapreduce.Job: Running job: job_1541741344890_0001
18/11/09 13:31:29 INFO mapreduce.Job: Job job_1541741344890_0001 running in uber mode : false
18/11/09 13:31:29 INFO mapreduce.Job: map 0% reduce 0%
18/11/09 13:31:43 INFO mapreduce.Job: map 50% reduce 0%
18/11/09 13:31:44 INFO mapreduce.Job: map 60% reduce 0%
18/11/09 13:31:52 INFO mapreduce.Job: map 90% reduce 0%
18/11/09 13:31:53 INFO mapreduce.Job: map 100% reduce 0%
18/11/09 13:31:54 INFO mapreduce.Job: map 100% reduce 100%
18/11/09 13:32:04 INFO mapreduce.Job: Job job_1541741344890_0001 completed successfully
18/11/09 13:32:04 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=226
        FILE: Number of bytes written=2238841
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=2680
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=43
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=10
        Launched reduce tasks=1
        Data-local map tasks=10
        Total time spent by all maps in occupied slots (ms)=99705
        Total time spent by all reduces in occupied slots (ms)=8623
        Total time spent by all map tasks (ms)=99705
        Total time spent by all reduce tasks (ms)=8623
        Total vcore-milliseconds taken by all map tasks=99705
        Total vcore-milliseconds taken by all reduce tasks=8623
        Total megabyte-milliseconds taken by all map tasks=102097920
        Total megabyte-milliseconds taken by all reduce tasks=8829952
    Map-Reduce Framework
        Map input records=10
        Map output records=20
        Map output bytes=180
        Map output materialized bytes=280
        Input split bytes=1500
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=280
        Reduce input records=20
        Reduce output records=0
        Spilled Records=40
        Shuffled Maps =10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=1144
        CPU time spent (ms)=4669
        Physical memory (bytes) snapshot=3203293184
        Virtual memory (bytes) snapshot=3625623552
        Total committed heap usage (bytes)=2142240768
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1180
    File Output Format Counters
        Bytes Written=97
Job Finished in 58.577 seconds
Estimated value of Pi is 3.20000000000000000000
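The coarse result is expected: the example estimates pi with the standard quarter-circle method, pi ≈ 4 × (samples inside the quarter circle) / (total samples). With only 10 maps × 10 samples = 100 points, 80 hits yield 4 × 80/100 = 3.20. Raising the arguments (for example, pi 16 100000) tightens the estimate at the cost of a longer job.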
Hadoop on Windows: Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client
Platform:
Hadoop 2.7.1
Windows Server 2008 R2
Problem:
While Kettle was running an ETL job into Hive, Hadoop threw "Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client". Analysis showed that the Windows account did not have the privilege to create symbolic links.
Solution:
1. By default administrators can create symbolic links, so you can simply start the Hadoop processes from an administrator command prompt; or
2. Grant the account the create-symbolic-link privilege (a quick way to verify it afterwards follows this list):
2.1. Press Win+R and run gpedit.msc;
2.2. Navigate to Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment -> Create symbolic links;
2.3. Add the user, then restart or log off and back on.
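To check that the privilege took effect after logging back in, here is a minimal Java probe; the class name and test paths are illustrative, and the call exercises the same Windows CreateSymbolicLink operation that Hadoop fails on:

import java.nio.file.FileSystemException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkProbe {
    public static void main(String[] args) {
        Path link = Paths.get("D:/symlink-probe"); // illustrative link location
        Path target = Paths.get("D:/dev");         // any existing directory
        try {
            Files.createSymbolicLink(link, target);
            System.out.println("OK: this account can create symbolic links.");
            Files.delete(link); // clean up the test link
        } catch (FileSystemException e) {
            // Without the privilege, Windows reports error 1314 here
            System.out.println("Privilege missing: " + e.getReason());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}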
Reference:
https://stackoverflow.com/questions/28958999/hdfs-write-resulting-in-createsymboliclink-error-1314-a-required-privilege