Original article: http://www.cnblogs.com/archimedes/p/hadoop-writable-interface.html. Please cite the source when reposting.
Communication Format Requirements
Hadoop uses RPC for internal communication between nodes. The RPC protocol serializes a message into a binary byte stream and sends it to the remote node, which deserializes the byte stream back into the original message. RPC serialization needs to satisfy the following:
1. Compact: the serialized form should be small, so that it consumes as little network bandwidth as possible.
2. Fast: inter-process communication is the backbone of a distributed system, so serialization and deserialization must be fast enough not to become the bottleneck.
3. Extensible: when the server adds a new parameter for newer clients, older clients should continue to work unchanged.
4. Interoperable: clients written in different languages should be supported.
Storage Format Requirements
On the surface, a serialization framework used for persistent storage might seem to need a different set of properties, but in fact it comes down to the same four points:
1. Compact: stored data takes up less space.
2. Fast: data can be read and written quickly.
3. Extensible: newer code can still read data written in an older format.
4. Interoperable: data can be read and written from clients in multiple languages.
The Writable interface defines two methods: one writes the object's state to a DataOutput binary stream, and the other reads its state back from a DataInput binary stream:
package org.apache.hadoop.io;

import java.io.*;

public interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}
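As a rough sketch (the class name and fields below are my own illustration, not from the original post or from Hadoop), a custom Writable typically serializes its fields in write() and reads them back in exactly the same order in readFields():

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical custom Writable holding an int counter and a long timestamp
public class PageHitWritable implements Writable {
    private int hits;
    private long timestamp;

    // A no-argument constructor is required: Hadoop creates instances
    // reflectively before calling readFields()
    public PageHitWritable() { }

    public PageHitWritable(int hits, long timestamp) {
        this.hits = hits;
        this.timestamp = timestamp;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(hits);        // fields are written in a fixed order...
        out.writeLong(timestamp);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        hits = in.readInt();       // ...and must be read back in the same order
        timestamp = in.readLong();
    }
}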
Now let's look at how the Writable interface relates to serialization and deserialization:
package org.apache.hadoop.io;

import java.io.*;
import org.apache.hadoop.util.StringUtils;
import junit.framework.Assert;

public class WritableExample {

    public static byte[] bytes = null;

    // Serialize an object that implements Writable into a byte array
    public static byte[] serialize(Writable writable) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(out);
        writable.write(dataOut);
        dataOut.close();
        return out.toByteArray();
    }

    // Deserialize a byte array into an object that implements Writable
    public static byte[] deserialize(Writable writable, byte[] bytes) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(bytes);
        DataInputStream dataIn = new DataInputStream(in);
        writable.readFields(dataIn);
        dataIn.close();
        return bytes;
    }

    public static void main(String[] args) {
        try {
            IntWritable writable = new IntWritable(123);
            bytes = serialize(writable);
            System.out.println("After serialize: " + StringUtils.byteToHexString(bytes));
            Assert.assertEquals(bytes.length, 4);                                 // an int serializes to 4 bytes
            Assert.assertEquals(StringUtils.byteToHexString(bytes), "0000007b");  // 123 == 0x7b

            IntWritable newWritable = new IntWritable();
            deserialize(newWritable, bytes);
            System.out.println("After deserialize: " + newWritable.get());
            Assert.assertEquals(newWritable.get(), 123);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
IntWritable implements WritableComparable, which is a subinterface of both Writable and java.lang.Comparable&lt;T&gt;:
package org.apache.hadoop.io;

public interface WritableComparable<T> extends Writable, Comparable<T> {
}
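For instance (a small sketch of my own, not from the original post), two IntWritable keys can be compared through the compareTo() method inherited from Comparable, but only after both objects have been deserialized; the RawComparator discussed next removes that requirement:

import org.apache.hadoop.io.IntWritable;

public class CompareToExample {
    public static void main(String[] args) {
        IntWritable a = new IntWritable(123);
        IntWritable b = new IntWritable(32);
        // compareTo comes from java.lang.Comparable via WritableComparable;
        // the result is positive because 123 > 32
        System.out.println(a.compareTo(b));
    }
}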
MapReduce sorts records by key during its sort phase, so comparing key types is crucial. RawComparator is an enhanced version of Comparator:
package org.apache.hadoop.io;

public interface RawComparator<T> extends java.util.Comparator<T> {
    int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2);
}
It makes it possible to compare two records directly on their serialized byte representations, without deserializing them first:
package org.apache.hadoop.io;

import java.io.*;

public class ComparatorExample {

    // Serialize an object that implements Writable into a byte array
    public static byte[] serialize(Writable writable) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(out);
        writable.write(dataOut);
        dataOut.close();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Get the registered raw comparator for IntWritable
        RawComparator<IntWritable> comparator = WritableComparator.get(IntWritable.class);
        IntWritable w1 = new IntWritable(123);
        IntWritable w2 = new IntWritable(32);

        // Compare the deserialized objects; 123 > 32, so the result is positive
        if (comparator.compare(w1, w2) <= 0)
            System.exit(0);

        try {
            // Compare the serialized byte streams directly, without deserializing
            byte[] b1 = serialize(w1);
            byte[] b2 = serialize(w2);
            if (comparator.compare(b1, 0, b1.length, b2, 0, b2.length) <= 0) {
                System.exit(0);
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
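Going one step further (a hedged sketch of my own, not from the original post), a custom raw comparator is usually written by subclassing WritableComparator and overriding the byte-level compare(); the readInt() helper used here is, to my knowledge, a static method of WritableComparator that decodes an int straight from a byte array:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.WritableComparator;

// Sketch of a raw comparator for IntWritable keys: it compares the four
// serialized bytes directly, without constructing IntWritable objects.
public class RawIntComparator extends WritableComparator {

    public RawIntComparator() {
        super(IntWritable.class);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        int v1 = readInt(b1, s1);   // decode the int straight from the byte stream
        int v2 = readInt(b2, s2);
        return (v1 < v2) ? -1 : (v1 == v2 ? 0 : 1);
    }
}

In practice Hadoop already ships an optimized raw comparator for IntWritable, which is what WritableComparator.get(IntWritable.class) returned in the example above; a hand-written comparator like this sketch only pays off for your own key types.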
Reference: Hadoop: The Definitive Guide (《Hadoop权威指南》)