Background: we were previously on Pig 0.12, and the community's 0.13.0 release had been out for quite a while, with many new patches and features. One of them is a setting for caching UDF jars, pig.user.cache.enabled, which can make Pig run faster. Details are in:
https://issues.apache.org/jira/browse/PIG-3954
User Jar Cache
Jars required for user defined functions (UDFs) are copied to distributed cache by pig to make them available on task nodes. To put these jars on distributed cache, pig clients copy these jars to HDFS under a temporary location. For scheduled jobs, these jars do not change frequently. Also, creating a lot of small jar files on HDFS is not HDFS friendly. To avoid copying these small jar files to HDFS again and again, pig allows users to configure a user level jar cache (readable only to the user for security reasons). If the pig.user.cache.enabled flag is set to true, UDF jars are copied to the jar cache location (configurable) under a directory named with the hash (SHA) of the jar. The hash of the jar is used to identify the existence of the jar in subsequent uses of the jar by the user. If a jar with the same hash and filename is found in the cache, it is used, avoiding a copy of the jar to HDFS.
You can set the values for these properties in order to configure the jar cache:
· pig.user.cache.enabled - Turn on/off the user jar cache feature (false by default).
· pig.user.cache.location - Path on HDFS that will be used as a staging directory for the user jar cache (defaults to pig.temp.dir or /tmp).
The user jar cache feature is fail safe. If jars cannot be copied to the jar cache due to any permission/configuration problems, pig falls back to the old behavior.
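As a concrete illustration, the cache could be switched on straight from a Pig script with set (the same properties can equally go into pig.properties or be passed with -D on the command line). This is only a sketch, and the HDFS path below is a made-up placeholder:

set pig.user.cache.enabled true;
set pig.user.cache.location '/user/pig_user/.pig_jar_cache';  -- placeholder path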
The upgrade itself was simple: git clone the 0.13 source, rebuild it to produce the new jars, and swap them in for the old ones.
Once Pig was upgraded, I re-ran our existing Pig scripts and quickly hit a problem. We have a pile of custom Pig UDFs, and under the new Pig, whenever a column's value is NULL, the function is skipped instead of being executed. The function is very simple:
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class GFChannelEvalFunc extends EvalFunc<String> {
    @Override
    public String exec(Tuple tuple) throws IOException {
        // Under the old Pig, exec() was still invoked when tuple.get(0) == null;
        // under the new Pig, the call is skipped entirely for a null input.
        int fieldNum = 1;
        if (tuple == null || tuple.size() != fieldNum) {
            return null;
        }
        String gf = (String) tuple.get(0);
        return GFChannel.getChannel(gf);
    }
}
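For context, this is roughly how the UDF is invoked from a script (the jar name is a placeholder; raw_log, pbstr and g_f are taken from the test further down). Under 0.12, exec() also ran for rows where g_f was null, so the function's own null handling applied; under 0.13.0, Pig skips the call entirely, so that null handling never runs:

register gfchannel-udfs.jar;  -- placeholder jar name
a = foreach raw_log generate pbstr#'g_f' as g_f, uuid as uuid;
b = foreach a generate com.sogou.wap.pig.eval.GFChannelEvalFunc(g_f) as channel, g_f;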
At first I assumed the bug was in my own UDF, so I took a built-in UDF as a test case to see whether the bug could be reproduced there too. I used IsInt and put together a simple test. The Pig code:
a = foreach raw_log generate pbstr#'g_f' as g_f, uuid as uuid;
c = foreach a generate org.apache.pig.piggybank.evaluation.IsInt(g_f) as channel, g_f;
If g_f is NULL, this should return false, but that is not what came back. So the preliminary conclusion was that both built-in and custom UDFs mishandle NULL values.
Having ruled out my own UDF, I guessed that the new Pig's logic for invoking UDFs had changed.
First, I needed to find out where Pig actually invokes a user-defined function.
The approach I took is crude but simple: throw an exception from inside the UDF so that the log prints the call stack at the point of failure. The modified part of the method looks like this:
public class GFChannelEvalFunc extends EvalFunc<String> {
    public String exec(Tuple tuple) throws IOException {
        // Deliberately blow up so the task log shows how exec() gets called:
        // test is null, so test.equals(...) throws a NullPointerException.
        String test = null;
        if (test.equals("tag")) {
            System.out.println("tag");
            throw new IOException("ioexception");
        }
Then I re-ran the script under Pig and looked up the logs printed by the MapReduce tasks on the Hadoop cluster.
The log from the failure is below; it shows the call chain clearly. The important frames are the ones under "Caused by", which show where exec() is invoked from.
2014-12-10 11:25:20,352 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing [POUserFunc (Name: POUserFunc(com.sogou.wap.pig.eval.GFChannelEvalFunc)[chararray] - scope-12 Operator Key: scope-12) children: null at []]: java.lang.NullPointerException
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:339)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:378)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:298)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:282)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:791)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.NullPointerException
    at com.sogou.wap.pig.eval.GFChannelEvalFunc.exec(GFChannelEvalFunc.java:16)
    at com.sogou.wap.pig.eval.GFChannelEvalFunc.exec(GFChannelEvalFunc.java:11)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:345)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNextString(POUserFunc.java:445)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:316)
The trace above points at the POUserFunc class. Reading through its code, the likely culprit was one particular block (shown as a screenshot in the original post). The Pig source is hosted on GitHub.
1) Find the Pig project on GitHub: https://github.com/apache/pig
2) Locate the POUserFunc class.
3) Look at the class's code and its commit history: the suspect block was introduced by PIG-3679, and it was removed again by the commit for PIG-4184.
4) Googling PIG-4184 leads to https://issues.apache.org/jira/browse/PIG-4184. The offending code had originally been added to fix the bug in PIG-3679, but it unexpectedly introduced a more serious bug, so PIG-4184 was committed for 0.14.0. The committers were clearly aware of how serious the problem was.
5) Once the cause was known, the fix was straightforward. Upgrading to 0.14.0 felt too aggressive, so instead the PIG-4184 patch was merged onto 0.13.0. (A hypothetical script-level alternative is sketched right after this list.)
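As an aside, and not the route taken here: if rebuilding Pig were not an option, the null could be guarded at the script level so the UDF is never asked to handle it. This is only a sketch; 'unknown' is a placeholder for whatever value the UDF would normally produce for a null input:

c = foreach a generate (g_f is null ? 'unknown' : com.sogou.wap.pig.eval.GFChannelEvalFunc(g_f)) as channel, g_f;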
After merging the patch I rebuilt the jar, re-ran the scripts, and everything tested OK. Problem solved.
Original post: http://blog.csdn.net/wisgood/article/details/41851737