
Live555 Source Code Demystified (RTP Packetization)


This article walks through live555's server-side RTP packetization flow, using the MediaServer example as a guide. Before reading on, it is worth reading the companion article first:

Live555 Source Code Demystified (How an RTSP Session Is Set Up, via MediaServer)

http://blog.csdn.net/smilestone_322/article/details/18923139

After receiving the client's PLAY command, the server calls startStream to start the stream:

void OnDemandServerMediaSubsession::startStream(unsigned clientSessionId,
                                                void* streamToken,
                                                TaskFunc* rtcpRRHandler,
                                                void* rtcpRRHandlerClientData,
                                                unsigned short& rtpSeqNum,
                                                unsigned& rtpTimestamp,
                                                ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
                                                void* serverRequestAlternativeByteHandlerClientData) {
  StreamState* streamState = (StreamState*)streamToken;
  Destinations* destinations
    = (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
  if (streamState != NULL) {
    // Start the stream:
    streamState->startPlaying(destinations,
                              rtcpRRHandler, rtcpRRHandlerClientData,
                              serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
    RTPSink* rtpSink = streamState->rtpSink(); // alias
    if (rtpSink != NULL) {
      // Fetch the current sequence number and the preset timestamp:
      rtpSeqNum = rtpSink->currentSeqNo();
      rtpTimestamp = rtpSink->presetNextTimestamp();
    }
  }
}
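
The rtpSeqNum and rtpTimestamp fetched here are handed back up to the RTSP layer, which reports them to the client in the PLAY response's RTP-Info header, so the player can line incoming RTP packets up with the requested range. An illustrative response (all values made up):

RTSP/1.0 200 OK
CSeq: 5
Session: 9AF6C722
Range: npt=0.000-
RTP-Info: url=rtsp://192.168.1.10/test.264/track1;seq=6424;rtptime=1234567890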

 

Next, follow startPlaying in the StreamState class; the source:

void StreamState
::startPlaying(Destinations* dests,
               TaskFunc* rtcpRRHandler, void* rtcpRRHandlerClientData,
               ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
               void* serverRequestAlternativeByteHandlerClientData) {
  if (dests == NULL) return;

  if (fRTCPInstance == NULL && fRTPSink != NULL) {
    // Create (and start) a 'RTCP instance' for this RTP sink;
    // this object is what sends the RTCP packets for the stream:
    fRTCPInstance
      = RTCPInstance::createNew(fRTPSink->envir(), fRTCPgs,
                                fTotalBW, (unsigned char*)fMaster.fCNAME,
                                fRTPSink, NULL /* we're a server */);
        // Note: This starts RTCP running automatically
  }

  if (dests->isTCP) {
    // Change RTP and RTCP to use the TCP socket instead of UDP.
    // Which transport is used was chosen by the client: it told the
    // server its preference in the SETUP request.
    if (fRTPSink != NULL) {
      fRTPSink->addStreamSocket(dests->tcpSocketNum, dests->rtpChannelId);
      RTPInterface
        ::setServerRequestAlternativeByteHandler(fRTPSink->envir(), dests->tcpSocketNum,
                                                 serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
        // So that we continue to handle RTSP commands from the client
    }
    if (fRTCPInstance != NULL) {
      fRTCPInstance->addStreamSocket(dests->tcpSocketNum, dests->rtcpChannelId);
      fRTCPInstance->setSpecificRRHandler(dests->tcpSocketNum, dests->rtcpChannelId,
                                          rtcpRRHandler, rtcpRRHandlerClientData);
    }
  } else {
    // Tell the RTP and RTCP 'groupsocks' about this destination
    // (in case they don't already have it):
    if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort);
    if (fRTCPgs != NULL) fRTCPgs->addDestination(dests->addr, dests->rtcpPort);
    if (fRTCPInstance != NULL) {
      fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
                                          rtcpRRHandler, rtcpRRHandlerClientData);
    }
  }

  if (fRTCPInstance != NULL) {
    // Hack: Send an initial RTCP "SR" packet, before the initial RTP packet, so that receivers will (likely) be able to
    // get RTCP-synchronized presentation times immediately:
    fRTCPInstance->sendReport();
  }

  if (!fAreCurrentlyPlaying && fMediaSource != NULL) {
    if (fRTPSink != NULL) {
      // Start the stream:
      fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    } else if (fUDPSink != NULL) {
      fUDPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    }
  }
}
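
Which branch runs here was decided back at SETUP time: dests->isTCP reflects the Transport header the client sent. Illustrative Transport headers (standard RTSP syntax, port and channel values made up):

Transport: RTP/AVP;unicast;client_port=50000-50001    -> UDP: the addDestination() branch
Transport: RTP/AVP/TCP;unicast;interleaved=0-1        -> TCP interleaved: the addStreamSocket() branch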

 

The call to analyze next is:

fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);

fRTPSink is declared as RTPSink*, and RTPSink derives from MediaSink, so this invokes MediaSink::startPlaying. Stepping into it:

Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  // Make sure we're not already being played:
  if (fSource != NULL) {
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }

  // Save away the parameters for later use:
  fSource = (FramedSource*)&source;
  fAfterFunc = afterFunc;
  fAfterClientData = afterClientData;
  return continuePlaying();
}

 

This function looks much the same on the client and the server: the sink asks the source for data. On the server, the source reads from a file or a live stream and hands the data to the sink, which packetizes and sends it; on the client, the source receives packets from the network and reassembles them into frames, while the sink handles decoding and so on. A quick sketch of this wiring follows; after that, we step into continuePlaying().
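
A minimal sketch of how an application wires a source to a sink, in the style of the live555 test programs (the function names here are the caller's choice, not a fixed API):

void afterPlaying(void* /*clientData*/) {
  // Called once the source has no more frames; stop the sink and clean up here.
}

void play(UsageEnvironment* env, FramedSource* source, MediaSink* sink) {
  sink->startPlaying(*source, afterPlaying, NULL);
  env->taskScheduler().doEventLoop(); // frames now flow: source -> sink -> network
}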

virtual Boolean continuePlaying() = 0; is declared in MediaSink as a pure virtual function, so the implementation lives in its subclasses. Following the code shows which subclass implements it here:

 

Boolean MultiFramedRTPSink::continuePlaying() {
  // Send the first packet.
  // (This will also schedule any future sends.)
  buildAndSendPacket(True);
  return True;
}

 

continuePlaying() is implemented in MultiFramedRTPSink, and it is trivial: it just calls buildAndSendPacket(True). MultiFramedRTPSink is the frame-oriented sink class; it fetches one frame at a time from the source, and buildAndSendPacket, as the name suggests, builds a packet and sends it.

 

void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header:
  unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
  rtpHdr |= (fRTPPayloadType<<16); // payload type
  rtpHdr |= fSeqNo; // sequence number
  // Append the header word to the packet buffer:
  fOutBuf->enqueueWord(rtpHdr);

  // Note where the RTP timestamp will go.
  // (We can't fill this in until we start packing payload frames.)
  fTimestampPosition = fOutBuf->curPacketSize();
  // Leave a 4-byte hole in the buffer; the timestamp is filled in later:
  fOutBuf->skipBytes(4); // leave a hole for the timestamp

  // Write the SSRC into the buffer:
  fOutBuf->enqueueWord(SSRC());

  // Allow for a special, payload-format-specific header following the
  // RTP header:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  // Begin packing as many (complete) frames into the packet as we can:
  fTotalFrameSpecificHeaderSizes = 0;
  fNoFramesLeft = False;
  fNumFramesUsedSoFar = 0;
  // Everything above filled in the RTP header; packFrame() now packs the payload:
  packFrame();
}
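
As a worked example of the header word built above (payload type 96 and sequence number 6424 are made-up values):

unsigned rtpHdr = 0x80000000;   // version 2 in the top two bits; P, X, CC, M all 0
rtpHdr |= (96 << 16);           // payload type in bits 16..22    -> 0x80600000
rtpHdr |= 6424;                 // sequence number in low 16 bits -> 0x80601918

Together with the 4-byte timestamp hole and the 4-byte SSRC that follow, this forms the standard 12-byte fixed RTP header (RFC 3550).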

 

The packFrame source:

void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    // The previous frame was too large and overflowed the last packet.
    // Use this frame before reading a new one from the source:
    unsigned frameSize = fOutBuf->overflowDataSize();
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;
    // Update the buffer position bookkeeping:
    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

    // Ask the source for the next frame. fOutBuf->curPtr() is the address
    // where the frame data should be written; the second argument is how much
    // buffer space is available; afterGettingFrame is the callback invoked
    // once a frame has been delivered; ourHandleClosure is called when the
    // source closes (e.g. end of file):
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this);
  }
}

getNextFrame makes the source read one frame from a file or a device (an IP camera, say); once the frame has been read, it is returned to the sink by invoking afterGettingFrame.

 

Here is getNextFrame itself:

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  // Save away the parameters:
  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  doGetNextFrame();
}

 

It finishes by calling doGetNextFrame() to actually fetch the next frame; doGetNextFrame() is the virtual hook that each concrete source overrides. (A minimal sketch of such an override follows.)
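
For reference, a concrete source only has to override doGetNextFrame(). Here is a minimal sketch (the class and readFrameFromDevice() are hypothetical, modeled on live555's DeviceSource template): copy at most fMaxSize bytes into fTo, fill in the bookkeeping members saved by getNextFrame() above, then signal completion:

class MyDeviceSource: public FramedSource {
public:
  static MyDeviceSource* createNew(UsageEnvironment& env) {
    return new MyDeviceSource(env);
  }
private:
  MyDeviceSource(UsageEnvironment& env): FramedSource(env) {}

  virtual void doGetNextFrame() {
    u_int8_t frame[100000];
    unsigned frameLen = readFrameFromDevice(frame, sizeof frame); // hypothetical helper
    if (frameLen > fMaxSize) { // sink buffer too small: truncate and report it
      fNumTruncatedBytes = frameLen - fMaxSize;
      frameLen = fMaxSize;
    }
    memmove(fTo, frame, frameLen);
    fFrameSize = frameLen;
    gettimeofday(&fPresentationTime, NULL); // or a real device timestamp
    // Hand the frame to the downstream reader; this invokes the
    // afterGettingFunc that was passed to getNextFrame():
    FramedSource::afterGetting(this);
  }

  unsigned readFrameFromDevice(u_int8_t* to, unsigned maxSize); // hypothetical
};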

For H.264, the relevant source is H264FUAFragmenter. It is used by, and is a member variable of, H264VideoRTPSink; H264VideoRTPSink derives from VideoRTPSink, which derives from MultiFramedRTPSink, which in turn derives from MediaSink. The H264FUAFragmenter replaces the H264VideoStreamFramer as the RTPSink's direct source: when the RTPSink asks for a frame, it gets it from the H264FUAFragmenter.
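
This swap happens the first time the sink's continuePlaying() runs. Paraphrased from the live555 source of this era (a sketch; exact details can differ between versions):

Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' in place; if not, create one,
  // wrap it around the original source, and make it our source from now on:
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize,
                                           ourMaxPacketSize() - 12/*RTP hdr size*/);
    fSource = fOurFragmenter;
  }
  // Then do the normal work, with the fragmenter as the source:
  return MultiFramedRTPSink::continuePlaying();
}

With the fragmenter installed as the source, its doGetNextFrame() is what the sink's packFrame() ends up calling: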

 

void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // We have no NAL unit data currently in the buffer. Read a new one:
    // fInputSource is the H264VideoStreamFramer; its getNextFrame() invokes
    // H264VideoStreamParser's parser(), which in turn pulls raw bytes from
    // the ByteStreamFileSource:
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer. There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety.  Deliver the first fragment of this data,
    //    as a FU-A packet, with one extra preceding header byte.
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this.  Deliver the next fragment of this data,
    //    as a FU-A packet, with two extra preceding header bytes.

    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264FUAFragmenter::doGetNextFrame(): fMaxSize ("
              << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize;
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        // 1) A single-NAL-unit packet; deliver it unchanged:
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2
        // We need to send the NAL unit data as FU-A packets.  Deliver the first
        // packet now.  Note that we add FU indicator and FU header bytes to the front
        // of the packet (reusing the existing NAL header byte for the FU header).
        // 2) The first packet of a FU-A sequence:
        fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28; // FU indicator
        fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit)
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3
      // We are sending this NAL unit data as FU-A packets.  We've already sent the
      // first packet (fragment).  Now, send the next fragment.  Note that we add
      // FU indicator and FU header bytes to the front.  (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      // 3) A middle packet of the FU-A sequence: reuse the FU indicator and
      // FU header, but clear the S (start) bit in the FU header:
      fInputBuffer[fCurDataOffset-2] = fInputBuffer[0]; // FU indicator
      fInputBuffer[fCurDataOffset-1] = fInputBuffer[1]&~0x80; // FU header (no S bit)
      unsigned numBytesToSend = 2 + fNumValidDataBytes - fCurDataOffset;
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment:
        // 4) The final packet of the FU-A (type 28) sequence: set the E (end)
        // bit in the FU header so the client knows the NAL unit is complete:
        fInputBuffer[fCurDataOffset-1] |= 0x40; // set the E bit in the FU header
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset-2], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - 2;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data.  Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    FramedSource::afterGetting(this);
  }
}

 

The else branch of this function does the actual RTP payload packing. live555 handles just two cases: (1) a NAL unit small enough to travel as a single packet (SPS and PPS, for example), where one packet carries one whole unit; and (2) a NAL unit too large for one packet, which is split using FU-A fragmentation. See the RTP packetization notes here:

http://blog.csdn.net/smilestone_322/article/details/7574253
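
The byte arithmetic in the FU-A cases follows RFC 6184. A worked example, assuming an IDR-slice NAL header byte of 0x65 (F=0, NRI=3, type=5):

u_int8_t nalHdr      = 0x65;                   // original NAL unit header (assumed value)
u_int8_t fuIndicator = (nalHdr & 0xE0) | 28;   // keep F+NRI, type=28 (FU-A) -> 0x7C
u_int8_t fuHdrStart  = 0x80 | (nalHdr & 0x1F); // S bit + original type      -> 0x85
u_int8_t fuHdrMiddle = nalHdr & 0x1F;          // S and E both clear         -> 0x05
u_int8_t fuHdrEnd    = 0x40 | (nalHdr & 0x1F); // E bit + original type      -> 0x45

These are exactly the values that fInputBuffer[0] and fInputBuffer[1] take on in the three cases above.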

Once fInputSource->getNextFrame() has delivered data, the callback chain runs:

void H264FUAFragmenter::afterGettingFrame(void* clientData, unsigned frameSize,
                                          unsigned numTruncatedBytes,
                                          struct timeval presentationTime,
                                          unsigned durationInMicroseconds) {
  H264FUAFragmenter* fragmenter = (H264FUAFragmenter*)clientData;
  fragmenter->afterGettingFrame1(frameSize, numTruncatedBytes, presentationTime,
                                 durationInMicroseconds);
}

 

void H264FUAFragmenter::afterGettingFrame1(unsigned frameSize,
                                           unsigned numTruncatedBytes,
                                           struct timeval presentationTime,
                                           unsigned durationInMicroseconds) {
  fNumValidDataBytes += frameSize;
  fSaveNumTruncatedBytes = numTruncatedBytes;
  fPresentationTime = presentationTime;
  fDurationInMicroseconds = durationInMicroseconds;

  // Deliver data to the client:
  doGetNextFrame();
}

 

Having received a frame, afterGettingFrame1 calls doGetNextFrame() again so that the data can be packetized and sent to the client; this time H264FUAFragmenter::doGetNextFrame() takes its else branch and processes the buffered NAL unit.

 

FramedSource::afterGetting then fires the sink's callback, the afterGettingFrame that was passed in packFrame above. Its source:

void MultiFramedRTPSink
::afterGettingFrame(void* clientData, unsigned numBytesRead,
                    unsigned numTruncatedBytes,
                    struct timeval presentationTime,
                    unsigned durationInMicroseconds) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
  sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
                           presentationTime, durationInMicroseconds);
}

 

 

afterGettingFrame simply forwards to afterGettingFrame1, which consumes the data; afterGettingFrame1, presumably, is where packets get sent. The source:

void MultiFramedRTPSink
::afterGettingFrame1(unsigned frameSize, unsigned numTruncatedBytes,
                     struct timeval presentationTime,
                     unsigned durationInMicroseconds) {
  if (fIsFirstPacket) {
    // Record the fact that we're starting to play now:
    gettimeofday(&fNextSendTime, NULL);
  }

  fMostRecentPresentationTime = presentationTime;
  if (fInitialPresentationTime.tv_sec == 0 && fInitialPresentationTime.tv_usec == 0) {
    fInitialPresentationTime = presentationTime;
  }

  // The buffer was too small and the frame got truncated; tell the user to
  // enlarge the buffer:
  if (numTruncatedBytes > 0) {
    unsigned const bufferSize = fOutBuf->totalBytesAvailable();
    envir() << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
            << bufferSize << ").  "
            << numTruncatedBytes << " bytes of trailing data was dropped! Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
            << OutPacketBuffer::maxSize + numTruncatedBytes << ", *before* creating this 'RTPSink'.  (Current value is "
            << OutPacketBuffer::maxSize << ".)\n";
  }
  unsigned curFragmentationOffset = fCurFragmentationOffset;
  unsigned numFrameBytesToUse = frameSize;
  unsigned overflowBytes = 0;

  // If we have already packed one or more frames into this packet,
  // check whether this new frame is eligible to be packed after them.
  // (This is independent of whether the packet has enough room for this
  // new frame; that check comes later.)
  if (fNumFramesUsedSoFar > 0) {
    if ((fPreviousFrameEndedFragmentation
         && !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr(), frameSize)) {
      // Save away this frame for next time:
      numFrameBytesToUse = 0;
      fOutBuf->setOverflowData(fOutBuf->curPacketSize(), frameSize,
                               presentationTime, durationInMicroseconds);
    }
  }
  fPreviousFrameEndedFragmentation = False;

  if (numFrameBytesToUse > 0) {
    // Check whether this frame overflows the packet
    if (fOutBuf->wouldOverflow(frameSize)) {
      // Don't use this frame now; instead, save it as overflow data, and
      // send it in the next packet instead.  However, if the frame is too
      // big to fit in a packet by itself, then we need to fragment it (and
      // use some of it in this packet, if the payload format permits this.)
      if (isTooBigForAPacket(frameSize)
          && (fNumFramesUsedSoFar == 0 || allowFragmentationAfterStart())) {
        // We need to fragment this frame, and use some of it now:
        overflowBytes = computeOverflowForNewFrame(frameSize);
        numFrameBytesToUse -= overflowBytes;
        fCurFragmentationOffset += numFrameBytesToUse;
      } else {
        // We don't use any of this frame now:
        overflowBytes = frameSize;
        numFrameBytesToUse = 0;
      }
      fOutBuf->setOverflowData(fOutBuf->curPacketSize() + numFrameBytesToUse,
                               overflowBytes, presentationTime, durationInMicroseconds);
    } else if (fCurFragmentationOffset > 0) {
      // This is the last fragment of a frame that was fragmented over
      // more than one packet.  Do any special handling for this case:
      fCurFragmentationOffset = 0;
      fPreviousFrameEndedFragmentation = True;
    }
  }

  if (numFrameBytesToUse == 0 && frameSize > 0) {
    // Send our packet now, because we have filled it up:
    sendPacketIfNecessary();
  } else {
    // Use this frame in our outgoing packet:
    unsigned char* frameStart = fOutBuf->curPtr();
    fOutBuf->increment(numFrameBytesToUse);
        // do this now, in case "doSpecialFrameHandling()" calls "setFramePadding()" to append padding bytes

    // Here's where any payload format specific processing gets done:
    doSpecialFrameHandling(curFragmentationOffset, frameStart,
                           numFrameBytesToUse, presentationTime,
                           overflowBytes);

    ++fNumFramesUsedSoFar;

    // Update the time at which the next packet should be sent, based
    // on the duration of the frame that we just packed into it.
    // However, if this frame has overflow data remaining, then don't
    // count its duration yet:
    if (overflowBytes == 0) {
      fNextSendTime.tv_usec += durationInMicroseconds;
      fNextSendTime.tv_sec += fNextSendTime.tv_usec/1000000;
      fNextSendTime.tv_usec %= 1000000;
    }

    // Send our packet now if (i) it's already at our preferred size, or
    // (ii) (heuristic) another frame of the same size as the one we just
    //      read would overflow the packet, or
    // (iii) it contains the last fragment of a fragmented frame, and we
    //      don't allow anything else to follow this, or
    // (iv) one frame per packet is allowed:
    if (fOutBuf->isPreferredSize()
        || fOutBuf->wouldOverflow(numFrameBytesToUse)
        || (fPreviousFrameEndedFragmentation &&
            !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr() - frameSize,
                                           frameSize) ) {
      // The packet is ready to be sent now:
      sendPacketIfNecessary();
    } else {
      // There's room for more frames; try getting another:
      packFrame();
    }
  }
}

 

Next, the function that actually sends the packet:

void MultiFramedRTPSink::sendPacketIfNecessary() {
  if (fNumFramesUsedSoFar > 0) {
    // Send the packet:
#ifdef TEST_LOSS
    if ((our_random()%10) != 0) // simulate 10% packet loss #####
#endif
      if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
        // if failure handler has been specified, call it
        if (fOnSendErrorFunc != NULL) (*fOnSendErrorFunc)(fOnSendErrorData);
      }
    ++fPacketCount;
    fTotalOctetCount += fOutBuf->curPacketSize();
    fOctetCount += fOutBuf->curPacketSize()
      - rtpHeaderSize - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

    ++fSeqNo; // for next time
  }

  if (fOutBuf->haveOverflowData()
      && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize()/2) {
    // Efficiency hack: Reset the packet start pointer to just in front of
    // the overflow data (allowing for the RTP header and special headers),
    // so that we probably don't have to "memmove()" the overflow data
    // into place when building the next packet:
    unsigned newPacketStart = fOutBuf->curPacketSize()
      - (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
    fOutBuf->adjustPacketStart(newPacketStart);
  } else {
    // Normal case: Reset the packet start pointer back to the start:
    fOutBuf->resetPacketStart();
  }
  fOutBuf->resetOffset();
  fNumFramesUsedSoFar = 0;

  if (fNoFramesLeft) {
    // We're done:
    onSourceClosure(this);
  } else {
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff*1000000 + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);
  }
}

 

The send function uses a delayed task to pace transmission: the next build-and-send cycle is scheduled for when the next packet is due. Here is sendNext:

void MultiFramedRTPSink::sendNext(void* firstArg) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)firstArg;
  sink->buildAndSendPacket(False);
}

It calls buildAndSendPacket again, now passing False. The parameter marks whether this is the first packet: True means the stream has just started playing, so the actual start-of-play time is recorded. In afterGettingFrame1 there is:

  if (fIsFirstPacket) {
    // Record the fact that we're starting to play now:
    gettimeofday(&fNextSendTime, NULL);
  }

 

In MultiFramedRTPSink, packets and frames share a single buffer queue; a handful of flags plus pointer adjustments drive both the packing and the sending. Note: if a frame overflows into the next packet, the timestamp calculation can become inaccurate.
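
A simplified picture of the "efficiency hack" in sendPacketIfNecessary() above (layout illustrative only, not the exact OutPacketBuffer internals):

// buffer: [ packet just sent ............ | overflow data ........ | free space ]
//                                         ^
//         newPacketStart = curPacketSize() - (RTP header + special headers)
//
// adjustPacketStart() moves the packet-start pointer here, so the next packet's
// headers are written into the gap just before the overflow bytes, and the
// overflow becomes that packet's first payload without a memmove().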

 

From: http://blog.csdn.net/smilestone_322/article/details/18923711


Original post: http://www.cnblogs.com/lidabo/p/4483520.html
