
A Look at a Netty Error

2015-09-12


Original question: http://stackoverflow.com/questions/16879104/netty-4-0-0-cr3-lengthfieldbasedframedecoder-maxframelength-exceeds-integer-ma?rq=1

The asker's own words:


So far I have been amped about upgrading from Netty version 3.5.7.Final to 4.0.0.CR3, until I ran into my final problem during the upgrade: the LengthFieldBasedFrameDecoder. Each available constructor requires a maxFrameLength, which I set to Integer.MAX_VALUE, but when running the client/server I get several stack traces (one shown below) stating that Integer.MAX_VALUE (2147483647) has been exceeded. I have tried configuring the maximum channel buffer size by digging through the ChannelConfig class in the API docs and various other Stack Overflow posts, still to no avail. Does anyone know if there is a missing option I can set, or a way to prevent the reads from ever being this high?

Stack Trace:

io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 2147483647: 4156555235 - discarded
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:486)
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:462)
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:397)
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:352)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:111)
at io.netty.handler.codec.ByteToMessageDecoder.inboundBufferUpdated(ByteToMessageDecoder.java:69)
at io.netty.channel.ChannelInboundByteHandlerAdapter.inboundBufferUpdated(ChannelInboundByteHandlerAdapter.java:46)
at io.netty.channel.DefaultChannelHandlerContext.invokeInboundBufferUpdated(DefaultChannelHandlerContext.java:1031)
at io.netty.channel.DefaultChannelHandlerContext.fireInboundBufferUpdated0(DefaultChannelHandlerContext.java:998)
at io.netty.channel.DefaultChannelHandlerContext.fireInboundBufferUpdated(DefaultChannelHandlerContext.java:978)
at io.netty.handler.timeout.IdleStateHandler.inboundBufferUpdated(IdleStateHandler.java:257)
at io.netty.channel.DefaultChannelHandlerContext.invokeInboundBufferUpdated(DefaultChannelHandlerContext.java:1057)
at io.netty.channel.DefaultChannelHandlerContext.fireInboundBufferUpdated0(DefaultChannelHandlerContext.java:998)
at io.netty.channel.DefaultChannelHandlerContext.fireInboundBufferUpdated(DefaultChannelHandlerContext.java:978)
at io.netty.channel.DefaultChannelPipeline.fireInboundBufferUpdated(DefaultChannelPipeline.java:828)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:118)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:429)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:392)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:322)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:114)
at java.lang.Thread.run(Thread.java:680)

My client is configured as follows:

peerClient.bootstrap = new Bootstrap();

peerClient.bootstrap.group(new NioEventLoopGroup())
          .channel(NioSocketChannel.class)
          .option(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT)
          .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 120000)
          .option(ChannelOption.SO_KEEPALIVE, true)
          .option(ChannelOption.TCP_NODELAY, true)
          .option(ChannelOption.SO_REUSEADDR, true)
          .handler(PeerInitializer.newInstance());
My Server Configuration:
result.serverBootstrap = new ServerBootstrap();

result.serverBootstrap.group(new NioEventLoopGroup(), new NioEventLoopGroup())
      .channel(NioServerSocketChannel.class)
      .handler(new LoggingHandler(LogLevel.INFO))
      .childHandler(PeerInitializer.newInstance())
      .childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT);
The initChannel method is overridden in a custom class that extends ChannelInitializer:
public class PeerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        final ChannelPipeline pipeline = ch.pipeline();

        pipeline.addLast(
                messageEncoder,
                HandshakeDecoder.newInstance(),
                connectionHandler,
                handshakeHandler,
                new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4),
                messageHandler);
    }
}
-----------------------------------------------------------------------------

Now let's look at how this error comes about.

Judging from the exception message, the error is thrown by

io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(long). That method is only invoked when the frame length the decoder has read exceeds the maxFrameLength the LengthFieldBasedFrameDecoder was constructed with, so by that point the value is already enormous. From the exception we can see that the configured maximum frame length is 2147483647 bytes (Integer.MAX_VALUE), while the length actually decoded from the stream is 4156555235 bytes. A 4-byte length field is read as an unsigned 32-bit integer, so it can hold values up to roughly 4.29 billion; a value like 4156555235 is far too large to be a real message length, which suggests the decoder is reading four bytes that were never meant to be a length prefix. In other words, the length read off the wire has simply blown past the limit, and the error sits in the decoding stage (the custom HandshakeDecoder, or the way Netty's own frame decoder is wired in). It can hardly be called a bug; at most you could say that when bytes are streaming in under heavy concurrency, Netty does not handle broken framing all that gracefully.
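To make the mechanism concrete, here is a minimal, self-contained sketch. It is not from the original thread, and it assumes a current Netty 4.x release (the 4.0.0.CR3 internals in the stack trace differ slightly); the class name FrameLengthOverflowDemo is made up for illustration. It reproduces the same failure by feeding the decoder four bytes whose unsigned value is 4156555235:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

public class FrameLengthOverflowDemo {
    public static void main(String[] args) {
        // Same constructor arguments as in the question:
        // maxFrameLength = Integer.MAX_VALUE, lengthFieldOffset = 0, lengthFieldLength = 4.
        EmbeddedChannel ch = new EmbeddedChannel(
                new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4));

        // Four bytes that are not a sane length prefix: 0xF7 0xBF 0xFF 0xE3.
        // A 4-byte length field is read as an UNSIGNED int, so these decode to
        // 4156555235, well above Integer.MAX_VALUE (2147483647).
        ByteBuf garbage = Unpooled.buffer(4);
        garbage.writeInt((int) 4156555235L);

        try {
            ch.writeInbound(garbage);
        } catch (DecoderException e) {
            // Typically a TooLongFrameException:
            // "Adjusted frame length exceeds 2147483647: ... - discarded"
            System.out.println(e);
        } finally {
            ch.finish();
        }
    }
}

The point: the 4156555235 in the question's stack trace is just whatever four bytes happened to sit where the decoder expected a length prefix, not the size of any real message.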

So the fellow further down the thread could only add a log statement in his initChannel..
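For reference only (the original thread does not spell out the eventual fix), a common arrangement is to install the frame decoder at the head of the pipeline so that the first bytes it sees really are a 4-byte length prefix. The sketch below is purely an illustration: the class name FramedPeerInitializer is made up, and the LoggingHandler stands in for the question's handshake/message handlers.

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

public class FramedPeerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        final ChannelPipeline pipeline = ch.pipeline();

        // Frame the raw byte stream first. LengthFieldBasedFrameDecoder is not
        // @Sharable, so a new instance is created per channel (as the question
        // already does). Same arguments as in the question.
        pipeline.addLast("frameDecoder",
                new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4));

        // Placeholder for the question's handshake/message handlers, which would
        // then only ever see complete, length-prefixed frames. If the handshake
        // bytes are NOT length-prefixed, they must be consumed before this
        // frame decoder ever sees the stream.
        pipeline.addLast("tail", new LoggingHandler(LogLevel.INFO));
    }
}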

The end.

Original post: http://my.oschina.net/httpssl/blog/505353
