
A Neural Network Example Based on the Open-Source JOONE Library


Tags: neural network, open source, JOONE XOR example

Author: 北邮小生-chaosju

1. Recommended introductory book: Tutorial on Artificial Neural Networks (人工神经网络教程) by Han Liqun, Beijing University of Posts and Telecommunications Press.

      Before writing an algorithm, understand how it works: knowing the principles first makes both the implementation and the result much better, with half the effort.

2. Joone

JOONE (Java Object Oriented Neural Engine) is an open-source project hosted on sourceforge.net for rapidly building neural networks in Java. JOONE supports many features, such as multithreading and distributed computing, which means it can spread the workload across multiple processors or multiple machines.

JOONE consists of three main modules:

      joone-engine: the core module of Joone.
      joone-editor: Joone's GUI development environment. It lets you build, train, and validate a neural network model without writing a single line of code. Joone ships with an example that builds an XOR network model in joone-editor; the network in this article is based on that example.
      joone-distributed-environment: the module that supports distributed computing.

Documentation and source code download: http://www.codertodeveloper.com/docs/documentation.html

----------- Learn to work from first-hand material: read the documentation and source code directly.

3. Implementing the XOR example
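The network is trained from a small text file containing the four XOR patterns: the code below reads the first two columns as the inputs and the third column as the desired output. A minimal sketch of what E:\joone\XOR.txt could look like, assuming the semicolon-delimited format used in JOONE's documentation examples:

0;0;0
0;1;1
1;0;1
1;1;0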


import java.io.File;
import java.io.FileOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

import org.joone.engine.FullSynapse;
import org.joone.engine.Layer;
import org.joone.engine.LinearLayer;
import org.joone.engine.Monitor;
import org.joone.engine.NeuralNetEvent;
import org.joone.engine.NeuralNetListener;
import org.joone.engine.SigmoidLayer;
import org.joone.engine.learning.TeachingSynapse;
import org.joone.io.FileInputSynapse;
import org.joone.io.FileOutputSynapse;
import org.joone.net.NeuralNet;
import org.joone.net.NeuralNetLoader;

public class XOR_Test implements NeuralNetListener, Serializable {

	private static final long serialVersionUID = -3597853311948127352L;
	private FileInputSynapse inputStream = null;
	private NeuralNet nnet = null;
	public static void main(String args[]) {
		XOR_Test xor = new XOR_Test();

		xor.initNeuralNet();
	}

	protected void initNeuralNet() {

		// Three layers: input, hidden, output (the sigmoid transfer function
		// is used for the hidden and output layers)
		LinearLayer input = new LinearLayer();
		SigmoidLayer hidden = new SigmoidLayer();
		SigmoidLayer output = new SigmoidLayer();

		// set their dimensions: the number of neurons (rows) in each layer
		input.setRows(2);
		hidden.setRows(3);
		output.setRows(1);

		// Now build the neural net by connecting the layers with two synapses
		FullSynapse synapse_IH = new FullSynapse(); /* Input -> Hidden conn. */
		FullSynapse synapse_HO = new FullSynapse(); /* Hidden -> Output conn. */

		// Next connect the input layer with the hidden layer:
		input.addOutputSynapse(synapse_IH);
		hidden.addInputSynapse(synapse_IH);
		// and then, the hidden layer with the output layer:
		hidden.addOutputSynapse(synapse_HO);
		output.addInputSynapse(synapse_HO);

		// Create a NeuralNet object that will contain all the Layers of the
		// network
		nnet = new NeuralNet();
		nnet.addLayer(input, NeuralNet.INPUT_LAYER);
		nnet.addLayer(hidden, NeuralNet.HIDDEN_LAYER);
		nnet.addLayer(output, NeuralNet.OUTPUT_LAYER);
		
		
		Monitor monitor = nnet.getMonitor();
		// set the learning rate
		monitor.setLearningRate(0.8);
		// set the momentum to 0.3; together with the learning rate it controls
		// the size of each weight-update step
		monitor.setMomentum(0.3);
		monitor.addNeuralNetListener(this);

		// input stream: reads the training patterns from the data file
		inputStream = new FileInputSynapse();
		/* The first two columns contain the input values */
		inputStream.setAdvancedColumnSelector("1,2");
		/* This is the file that contains the input data */
		inputStream.setInputFile(new File("E:\\joone\\XOR.txt"));
		// Next add the input synapse to the first layer.
		input.addInputSynapse(inputStream);

	
		

		/* The TeachingSynapse compares the network's output with the desired
		 * values and feeds the error back for learning */
		TeachingSynapse trainer = new TeachingSynapse();
		/*
		 * Setting of the file containing the desired responses, provided by a
		 * FileInputSynapse
		 */
		FileInputSynapse samples = new FileInputSynapse();
		samples.setInputFile(new File("e:\\joone\\XOR.txt"));
		/* The output values are on the third column of the file */
		samples.setAdvancedColumnSelector("3");
		trainer.setDesired(samples);
		
		
		output.addOutputSynapse(trainer);
		/* We attach the teacher to the network */
		nnet.setTeacher(trainer);
		
		monitor.setTrainingPatterns(4); /* # of rows in the input file */
		monitor.setTotCicles(100000); /* How many times the net must be trained*/
		
		monitor.setLearning(true); /* The net must be trained */
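		// Note: go() below hands the work to Joone's own threads, so the call
		// returns while training runs; progress and completion are reported
		// through the NeuralNetListener callbacks implemented further down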
		nnet.go(); /* The network starts the training phase */

	}


	@Override
	public void netStarted(NeuralNetEvent e) {
		System.out.println("Training...");

	}

	@Override
	public void cicleTerminated(NeuralNetEvent e) {
		Monitor mon = (Monitor) e.getSource();
		long c = mon.getCurrentCicle();
		/* Print the progress every 100 epochs */
		if (c % 100 == 0) {
			System.out.println(c + " epochs remaining - RMSE = "
					+ mon.getGlobalError());
		}
	}

	@Override
	public void netStopped(NeuralNetEvent e) {
		System.out.println("Training Stopped...");
		long stopTime = System.currentTimeMillis(); // current system time in milliseconds
		System.out.println(stopTime);

		saveNeuralNet("d://xor.snet"); // save the trained network to d:/xor.snet
		test();
	}
	public void test() {
		NeuralNet xorNNet = this.restoreNeuralNet("d://xor.snet");
		if (xorNNet != null) {
			// get the output layer
			Layer output = xorNNet.getOutputLayer();
			// create an output synapse that writes the results to a file
			FileOutputSynapse fileOutput = new FileOutputSynapse();
			fileOutput.setFileName("d://xor_out.txt");
			// attach the output synapse to the last layer of the NN
			output.addOutputSynapse(fileOutput);
			// run the neural network only once (1 cycle) in recall mode
			xorNNet.getMonitor().setTotCicles(1);
			xorNNet.getMonitor().setLearning(false);
			xorNNet.go();
		}
	}
	@Override
	public void errorChanged(NeuralNetEvent e) {
		Monitor mon = (Monitor) e.getSource(); // the Monitor that fired the event
		long c = mon.getCurrentCicle();
		if (c % 100 == 0) {
			// print the number of completed cycles and the RMSE (global error)
			System.out.println("Cycle: "
					+ (mon.getTotCicles() - mon.getCurrentCicle()) + " RMSE:"
					+ mon.getGlobalError());
		}
	}
	
	public void saveNeuralNet(String fileName) {
		try {
			FileOutputStream stream = new FileOutputStream(fileName);
			ObjectOutputStream out = new ObjectOutputStream(stream);
			out.writeObject(nnet); // serialize the nnet object to the file
			out.close();
		} catch (Exception excp) {
			excp.printStackTrace();
		}
	}
	
	NeuralNet restoreNeuralNet(String fileName) {
		NeuralNetLoader loader = new NeuralNetLoader(fileName);
		NeuralNet nnet = loader.getNeuralNet();
		return nnet;
	}
	@Override
	public void netStoppedError(NeuralNetEvent e, String error) {
		// not used in this example
	}
}

Program output (the recall results written to xor_out.txt):

0.0022572691083591304
0.9972511752900466
0.9972455081943005
0.0037839413733784474


These values match the XOR truth table: for the inputs (0,0), (0,1), (1,0), (1,1) the expected outputs are 0, 1, 1, 0. The more training cycles, the closer the outputs approach 0 and 1.
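As a follow-up, here is a minimal sketch of how the saved d:/xor.snet network could be interrogated directly in memory instead of through files. It assumes the MemoryInputSynapse and MemoryOutputSynapse classes and the removeAllInputs()/removeAllOutputs() helpers seen in JOONE's bundled samples, so treat the exact calls as illustrative rather than authoritative:

import org.joone.engine.Layer;
import org.joone.io.MemoryInputSynapse;
import org.joone.io.MemoryOutputSynapse;
import org.joone.net.NeuralNet;
import org.joone.net.NeuralNetLoader;

public class XOR_Recall {
	public static void main(String[] args) {
		// load the network trained and saved by XOR_Test
		NeuralNet net = new NeuralNetLoader("d://xor.snet").getNeuralNet();

		// feed the four XOR patterns directly from memory
		double[][] inputs = { { 0, 0 }, { 0, 1 }, { 1, 0 }, { 1, 1 } };
		MemoryInputSynapse memIn = new MemoryInputSynapse();
		memIn.setInputArray(inputs);
		memIn.setAdvancedColumnSelector("1,2");

		Layer input = net.getInputLayer();
		input.removeAllInputs();   // detach the old FileInputSynapse
		input.addInputSynapse(memIn);

		// collect the results in memory instead of writing them to a file
		MemoryOutputSynapse memOut = new MemoryOutputSynapse();
		Layer output = net.getOutputLayer();
		output.removeAllOutputs(); // detach the old trainer/output synapses
		output.addOutputSynapse(memOut);

		// one pass over the four patterns in recall (non-learning) mode
		net.getMonitor().setTrainingPatterns(4);
		net.getMonitor().setTotCicles(1);
		net.getMonitor().setLearning(false);
		net.go();

		for (int i = 0; i < 4; i++) {
			// getNextPattern() blocks until the next output row is available
			double[] pattern = memOut.getNextPattern();
			System.out.println("XOR(" + (int) inputs[i][0] + ", "
					+ (int) inputs[i][1] + ") -> " + pattern[0]);
		}
	}
}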



Original post: http://blog.csdn.net/chaosju/article/details/28619597
