1) In some cases you want different versions of a class to remain serialization-compatible, so you must ensure that the different versions share the same serialVersionUID;
2) In other cases you do not want different versions of a class to be serialization-compatible, so you must ensure that the different versions have different serialVersionUIDs.
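A minimal sketch of the first case: a class pins its serialVersionUID explicitly, so later field-compatible versions can still read old serialized data (whereas changing the UID deliberately makes deserialization fail with InvalidClassException). The class name `User` and its field are illustrative only.

```java
import java.io.*;

// Illustrative class: pinning serialVersionUID keeps versions compatible.
class User implements Serializable {
    // Keep this value stable across versions to preserve compatibility;
    // change it to deliberately break compatibility (old data then fails
    // to deserialize with InvalidClassException).
    private static final long serialVersionUID = 1L;
    String name;
    User(String name) { this.name = name; }
}

public class VersionDemo {
    // Serialize then deserialize in memory; succeeds because the UID on
    // the writing and reading side is the same.
    static User roundTrip(User u) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(u);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (User) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User copy = roundTrip(new User("alice"));
        System.out.println(copy.name); // prints "alice"
    }
}
```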
Java's serialization algorithm has to take care of the following:
◆ Write out the class metadata associated with the object instance.
◆ Recursively write out descriptions of the class's superclasses until there are no more superclasses.
◆ Once the class metadata is done, write out the instance's actual field values, starting from the topmost superclass.
◆ Recurse from top to bottom through the instance data.
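The superclass recursion described above can be observed in a small sketch: serializing a subclass instance also carries its Serializable superclass's state, so both fields survive the round trip. The class names here are made up for the demo.

```java
import java.io.*;

// Superclass state is written out along with the subclass's own fields.
class Parent implements Serializable {
    private static final long serialVersionUID = 1L;
    int parentField = 1;
}

class Child extends Parent {
    private static final long serialVersionUID = 1L;
    int childField = 2;
}

public class SuperclassDemo {
    // Serialize a Child, read it back, and return the copy.
    static Child roundTrip() throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Child());
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (Child) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Child c = roundTrip();
        // Both the inherited field and the subclass field came back intact.
        System.out.println(c.parentField + " " + c.childField); // prints "1 2"
    }
}
```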
So Java's serialization is indeed powerful, and the serialized output carries very detailed information, which makes deserialization easy.

```java
import java.io.Serializable;

public class Block implements Serializable {

    private static final long serialVersionUID = 1L;

    private int id;
    private String name;

    public Block(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```
```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class TestSerializable {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Write the serialized data to the file "out" (persistence)
        FileOutputStream fos = new FileOutputStream("./out");
        ObjectOutputStream oos = new ObjectOutputStream(fos);
        for (int i = 0; i < 100; i++) {
            Block b = new Block(i, "B" + i);
            oos.writeObject(b);
        }
        oos.flush();
        oos.close();

        // Read one object back from the serialized byte sequence (^..^) -- that is deserialization
        FileInputStream fis = new FileInputStream("./out");
        ObjectInputStream ois = new ObjectInputStream(fis);
        Block b2 = (Block) ois.readObject();
        ois.close();
        System.out.println(b2.getName());
    }
}
```
The program prints B0. The persisted data for one hundred objects is 1.60 KB (1,643 bytes), roughly 16 bytes per object on average. The class has only two fields, an int and a String just two or three characters long, so you can feel how large the redundancy is.
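A rough way to back up this overhead claim is to write the same 100 (int, short String) records once with ObjectOutputStream and once as bare field bytes with DataOutputStream, then compare sizes. Exact byte counts depend on the JDK's stream format, so only the relative comparison matters here; the `Rec` class is a stand-in for Block.

```java
import java.io.*;

// Stand-in for the Block class above: one int and one short String.
class Rec implements Serializable {
    private static final long serialVersionUID = 1L;
    int id;
    String name;
    Rec(int id, String name) { this.id = id; this.name = name; }
}

public class SizeDemo {
    // 100 records through Java object serialization (class metadata included).
    static int objectStreamSize() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            for (int i = 0; i < 100; i++) oos.writeObject(new Rec(i, "B" + i));
        }
        return bos.size();
    }

    // The same 100 records as raw field bytes: 4 bytes per int,
    // plus a 2-byte length prefix and the UTF-8 bytes per string.
    static int rawStreamSize() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(bos)) {
            for (int i = 0; i < 100; i++) {
                dos.writeInt(i);
                dos.writeUTF("B" + i);
            }
        }
        return bos.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Object stream: " + objectStreamSize() + " bytes");
        System.out.println("Raw fields:    " + rawStreamSize() + " bytes");
    }
}
```

On a typical JDK the raw form comes out well under the object stream's size, since it carries no class descriptors at all.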
```java
package hadoop;

import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.junit.Test;

public class Testhadoop_serializable_writable {
    @Test
    public void serializable() throws IOException {
        // Serialize ten MyWritable records into a byte buffer, then to a file
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(out);
        FileOutputStream fos = new FileOutputStream("./hadoop_out");
        for (int i = 0; i < 10; i++) {
            Text t1 = new Text(String.valueOf(i));
            Text t2 = new Text("mw");
            MyWritable mw = new MyWritable(t1, t2);
            mw.write(dataOut);
        }
        dataOut.close();
        fos.write(out.toByteArray());
        fos.flush();
        fos.close();

        // Deserialize by reading the fields back in the same order they were written
        FileInputStream fis = new FileInputStream("./hadoop_out");
        DataInputStream dis = new DataInputStream(fis);
        for (int i = 0; i < 10; i++) {
            MyWritable mw = new MyWritable(new Text(), new Text());
            mw.readFields(dis);
            System.out.println(mw.getId() + " " + mw.getName());
        }
    }
}

class MyWritable implements Writable {
    private Text id;
    private Text name;

    public MyWritable(Text id, Text name) {
        super();
        this.id = id;
        this.name = name;
    }

    public synchronized Text getId() { return id; }
    public synchronized void setId(Text id) { this.id = id; }
    public synchronized Text getName() { return name; }
    public synchronized void setName(Text name) { this.name = name; }

    @Override
    public void write(DataOutput out) throws IOException {
        id.write(out);
        name.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        id.readFields(in);
        name.readFields(in);
    }
}
```
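The essence of the Writable contract is that the object itself knows how to write its fields to a DataOutput and read them back from a DataInput, with no class metadata in the stream at all. The following sketch reproduces that contract with plain JDK classes so it can run without a Hadoop dependency; the interface name `WritableLike` and the class `Pair` are made up for the demo.

```java
import java.io.*;

// A made-up stand-in for org.apache.hadoop.io.Writable: same two methods.
interface WritableLike {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

class Pair implements WritableLike {
    int id;
    String name;

    public void write(DataOutput out) throws IOException {
        out.writeInt(id);       // 4 bytes
        out.writeUTF(name);     // 2-byte length prefix + UTF-8 bytes
    }

    public void readFields(DataInput in) throws IOException {
        // Must read fields in exactly the order write() produced them.
        id = in.readInt();
        name = in.readUTF();
    }
}

public class WritableDemo {
    // Write one Pair to a buffer and read it back into a fresh instance.
    static Pair roundTrip(int id, String name) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        Pair p = new Pair();
        p.id = id;
        p.name = name;
        p.write(dos); // only the field bytes go out -- no class descriptors
        Pair q = new Pair();
        q.readFields(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
        return q;
    }

    public static void main(String[] args) throws Exception {
        Pair q = roundTrip(7, "mw");
        System.out.println(q.id + " " + q.name); // prints "7 mw"
    }
}
```

Because the stream is just the field bytes, each record here costs 4 bytes for the int plus 2 + 2 bytes for the two-character string, which is exactly why Hadoop favors this style for data that crosses the wire millions of times.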
Original article: http://blog.csdn.net/dafeng_blog/article/details/38664455