
Common Hadoop exceptions and errors

Admirals · published 2015/01/03 18:37
exception 1

2014-12-21 12:27:17,084 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService

java.net.ConnectException: Call From hadoop02/10.173.52.7 to hadoop01:90044 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)

        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)

        at org.apache.hadoop.ipc.Client.call(Client.java:1351)

        at org.apache.hadoop.ipc.Client.call(Client.java:1300)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

        at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)

        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:606)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

        at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)

        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:167)

        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:445)

        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)

        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.net.ConnectException: Connection refused

        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)

        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)

        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)

        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)

        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)

        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)

        at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)

        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)

        at org.apache.hadoop.ipc.Client.call(Client.java:1318)

        ... 14 more
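
The DataNode here cannot reach the NameNode's RPC port. Note that the port shown in the message (90044) is above the valid TCP range, so either the log got mangled or the RPC address is misconfigured. A quick sanity check could look like the sketch below; the host names come from the log, everything else is an assumption about a typical HDFS setup:

    # On the DataNode (hadoop02): which NameNode address does the client side see?
    hdfs getconf -confKey fs.defaultFS
    hdfs getconf -namenodes
    # On the NameNode host (hadoop01): is the process up and listening on the RPC port?
    jps | grep -i NameNode
    netstat -tlnp | grep ':9000'   # replace 9000 with the port reported by getconf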

exception 2

2014-12-23 10:42:35,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1927242338-10.173.52.29-1419134717023:blk_1073744602_3778 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-1927242338-10.173.52.2-1419134717023:blk_1073744602_3778 already exists in state TEMPORARY and thus cannot be created.

2014-12-23 10:42:35,714 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hadoop02:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.173.52.2:50123 dest: /10.173.52.79:50010

org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block BP-1927242338-10.173.52.29-1419134717023:blk_1073744602_3778 already exists in state TEMPORARY and thus cannot be created.

        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:842)

        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:92)

        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:160)

        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:454)

        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)

        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)

        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)

        at java.lang.Thread.run(Thread.java:745)
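
A replica stuck in state TEMPORARY usually means two writers raced to create the same block on this DataNode, for example a retried or recovered write pipeline, and it is often transient. If it keeps happening for one file, a block-level report shows where the replicas actually live; the path below is a placeholder for whatever the failing job writes:

    # Show files, blocks and replica locations under a path
    hdfs fsck /path/being/written -files -blocks -locations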

exception 3

2014-12-25 09:47:55,006 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN

java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "hadoop02/10.173.52.7"; destination host is: "hadoop01":9000;

        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)

        at org.apache.hadoop.ipc.Client.call(Client.java:1351)

        at org.apache.hadoop.ipc.Client.call(Client.java:1300)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

        at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)

        at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:268)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:310)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)

        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)

Caused by: java.io.EOFException

        at java.io.DataInputStream.readInt(DataInputStream.java:392)

        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)

        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)

2014-12-25 09:47:55,046 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby

2014-12-25 09:47:55,046 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.173.52.2:43750 Call#316 Retry#1: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
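
The EOFException followed by the StandbyException suggests that clients (or the standby's EditLogTailer) are still talking to a NameNode that is not active. Checking the HA state of each NameNode is a reasonable first step; nn1 and nn2 below are assumed service ids, substitute the ones from dfs.ha.namenodes.<nameservice>:

    # List the configured NameNodes, then ask each one for its HA state
    hdfs getconf -namenodes
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2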

exception 4

2014-12-25 09:48:14,757 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted

java.lang.InterruptedException: sleep interrupted

        at java.lang.Thread.sleep(Native Method)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:334)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)

        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)

        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)

error 5

2014-12-25 10:33:23,598 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.173.52.2:56491 Call#15 Retry#0: error: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /backup/adaptor/Result/20141224/03/_temporary/1/_temporary/attempt_1419136502737_0275_r_000000_1/part-00000: File does not exist. Holder DFSClient_attempt_1419136502737_0275_r_000000_1_1803932396_1 does not have any open files.
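
A LeaseExpiredException on a _temporary attempt path typically appears when another attempt of the same task (speculative execution, or a retried attempt) has already committed or removed the output. If speculation is the trigger, turning it off for the job is one workaround; the property names are standard MRv2, while the jar, class and paths are placeholders, and the -D flags only take effect if the driver goes through ToolRunner:

    # Run the job with speculative execution disabled (hypothetical jar/class/paths)
    hadoop jar my-job.jar com.example.MyJob \
      -D mapreduce.map.speculative=false \
      -D mapreduce.reduce.speculative=false \
      /input /output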

error 6

2014-12-21 12:14:07,616 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused

        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)

        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)

        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
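
"Session 0x0 for server null" means the ZooKeeper client, most likely the ZKFC here, cannot reach any member of its quorum. Verifying the configured quorum and probing one member is a quick check; zk01 and 2181 are placeholders for whatever ha.zookeeper.quorum actually contains:

    # Which quorum is the failover controller configured to use?
    hdfs getconf -confKey ha.zookeeper.quorum
    # Probe one member; a healthy server answers "imok" to the four-letter command ruok
    echo ruok | nc zk01 2181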

error 7

2014-12-31 16:52:38,934 ERROR org.apache.hadoop.hdfs.qjournal.server.JournalNode: RECEIVED SIGNAL 15: SIGTERM

warn 8

2014-12-30 04:08:03,970 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at hadoop02/10.173.52.79:9000: Call From hadoop02/10.173.52.79 to hadoop02:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
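
Here the ZKFC on hadoop02 cannot connect to its own NameNode's RPC port, which most often just means the local NameNode process has died or has not come up yet. A minimal check on hadoop02, assuming the default tarball log layout:

    # Is the local NameNode JVM still running?
    jps | grep -i NameNode
    # If not, its log usually explains the exit (adjust the path to your install)
    tail -n 200 $HADOOP_HOME/logs/hadoop-*-namenode-*.log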

exception 9

2014-12-30 09:35:11,041 INFO org.apache.hadoop.http.HttpServer: HttpServer.start() threw a non Bind IOException

java.net.BindException: Port in use: hadoop02:8088

        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)

        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)

        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:257)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:623)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:655)

        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:872)

Caused by: java.net.BindException: Cannot assign requested address

        at sun.nio.ch.Net.bind0(Native Method)

        at sun.nio.ch.Net.bind(Net.java:444)

        at sun.nio.ch.Net.bind(Net.java:436)

        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)

        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)

        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)

        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:738)

        ... 6 more
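
"Port in use: hadoop02:8088" wrapped around "Cannot assign requested address" usually points at the bind address rather than a busy port: the ResourceManager is trying to bind a hostname that does not resolve to an IP of this machine, for example a stale /etc/hosts entry or a yarn.resourcemanager.webapp.address naming another host. A quick way to separate the two causes, with 8088 taken from the log and $HADOOP_CONF_DIR assumed to point at the config directory:

    # Is anything actually listening on 8088?
    netstat -tlnp | grep ':8088'
    # Does hadoop02 resolve to one of this machine's addresses?
    hostname -i
    getent hosts hadoop02
    # What web-app address is configured for the RM?
    grep -A 1 'yarn.resourcemanager.webapp.address' $HADOOP_CONF_DIR/yarn-site.xml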

exception 10

2014-12-30 09:35:11,043 INFO org.apache.hadoop.service.AbstractService: Service ResourceManager failed in state STARTED; cause: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server

org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server

        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:262)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:623)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:655)

        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:872)

Caused by: java.net.BindException: Port in use: hadoop02:8088

        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)

        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)

        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:257)

        ... 4 more

Caused by: java.net.BindException: Cannot assign requested address

        at sun.nio.ch.Net.bind0(Native Method)

        at sun.nio.ch.Net.bind(Net.java:444)

        at sun.nio.ch.Net.bind(Net.java:436)

        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)

        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)

exception 11


2014-12-31 16:54:27,396 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager

org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server

        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:262)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:623)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:655)

        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)

        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:872)

Caused by: java.net.BindException: Port in use: hadoop02:8088

        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)

        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)

        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:257)

        ... 4 more

Caused by: java.net.BindException: Cannot assign requested address

        at sun.nio.ch.Net.bind0(Native Method)

        at sun.nio.ch.Net.bind(Net.java:444)

        at sun.nio.ch.Net.bind(Net.java:436)

        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)

        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)

        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)

        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:738)

        ... 6 more

exception 12

2014-12-22 10:11:30,223 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService

java.net.ConnectException: Call From hadoop03/10.170.199.83 to hadoop02:9000 failed on connection exception: 


java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)

        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)

        at org.apache.hadoop.ipc.Client.call(Client.java:1351)

        at org.apache.hadoop.ipc.Client.call(Client.java:1300)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

        at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)

        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:606)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

        at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)

        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:167)

        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:445)

        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)

        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.net.ConnectException: Connection refused

        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)

        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)

        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)

        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)

        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)

        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)

        at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)

        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)

        at org.apache.hadoop.ipc.Client.call(Client.java:1318)

        ... 14 more

Comments (4)

我_是_我: Sis, do you actually have answers for all of these?? I'd like to know about "WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted", which I hit when switching HA, and also "org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby". It looks like clients keep connecting to the original namenode.

Admirals: who's your sister ... (My english is very pool, although I have to use English to reply you, because of my Chinese Input is not exist. so sorry) The question is cluster running normally but nn tranfored error, I haven't solve this it, finally, I write restart command to hide it.

我_是_我: OK, so the only option left is restarting the cluster. Did you ever solve this one: "org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at"? It looks like the JournalNode data has gone bad...

Admirals: I haven't solved it either. The errors I traced point to the JournalNode connection; after it times out the NameNode goes down. Still unsolved; leave me a comment if you find a solution.