
Video surveillance and live streaming: an RTSP streaming-media server based on OpenCV, libx264, and live555 (zc301P camera)



The month-long, step-by-step learning process is documented in my earlier posts. Now comes the final step.

Without further ado, here is the code. If anything is unclear, read my earlier posts first; they walk through the whole process step by step. If it is still unclear, ask in the comments. In short, the pipeline is: OpenCV grabs frames from the camera, each frame is converted from BGR to YUV420P, encoded into H.264 NAL units with libx264, and handed to live555, which streams them over RTSP.

If anything here is wrong or could be done better, I welcome suggestions.

#ifndef _DYNAMIC_RTSP_SERVER_HH
#define _DYNAMIC_RTSP_SERVER_HH

#ifndef _RTSP_SERVER_SUPPORTING_HTTP_STREAMING_HH
#include <RTSPServerSupportingHTTPStreaming.hh>
#endif

class DynamicRTSPServer: public RTSPServerSupportingHTTPStreaming {
public:
  static DynamicRTSPServer* createNew(UsageEnvironment& env, Port ourPort,
                      UserAuthenticationDatabase* authDatabase,
                      unsigned reclamationTestSeconds = 65);

protected:
  DynamicRTSPServer(UsageEnvironment& env, int ourSocket, Port ourPort,
            UserAuthenticationDatabase* authDatabase, unsigned reclamationTestSeconds);
  // called only by createNew();
  virtual ~DynamicRTSPServer();

protected: // redefined virtual functions
  virtual ServerMediaSession*
  lookupServerMediaSession(char const* streamName, Boolean isFirstLookupInSession);
};

#endif
DynamicRTSPServer.hh
#include "DynamicRTSPServer.hh"
#include "H264LiveVideoServerMediaSubssion.hh"
#include <liveMedia.hh>
#include <string.h>



DynamicRTSPServer* DynamicRTSPServer::createNew(
                UsageEnvironment& env, Port ourPort,
                 UserAuthenticationDatabase* authDatabase,
                 unsigned reclamationTestSeconds) 
{
  int ourSocket = setUpOurSocket(env, ourPort);
  if (ourSocket == -1) return NULL;

  return new DynamicRTSPServer(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds);
}


DynamicRTSPServer::DynamicRTSPServer(UsageEnvironment& env, int ourSocket, Port ourPort,
                     UserAuthenticationDatabase* authDatabase, unsigned reclamationTestSeconds)
  : RTSPServerSupportingHTTPStreaming(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds) {}


DynamicRTSPServer::~DynamicRTSPServer() {}


static ServerMediaSession* createNewSMS(UsageEnvironment& env, char const* fileName/*, FILE* fid*/); // forward


ServerMediaSession* DynamicRTSPServer::lookupServerMediaSession(char const* streamName, Boolean isFirstLookupInSession) 
{
  // Check whether we already have a "ServerMediaSession" for this stream name:
  ServerMediaSession* sms = RTSPServer::lookupServerMediaSession(streamName);
  Boolean smsExists = sms != NULL;

  // Decide whether to reuse or rebuild the existing "ServerMediaSession":
  if (smsExists && isFirstLookupInSession)
  { 
      // Remove the existing "ServerMediaSession" and create a new one, in case the underlying
      // file has changed in some way:
      removeServerMediaSession(sms); 
      sms = NULL;
  } 
  if (sms == NULL) 
  {
      sms = createNewSMS(envir(), streamName/*, fid*/); 
      addServerMediaSession(sms);
  }

  return sms;
}


static ServerMediaSession* createNewSMS(UsageEnvironment& env, char const* fileName/*, FILE* fid*/) 
{
  // Use the file name extension to determine the type of "ServerMediaSession":
  char const* extension = strrchr(fileName, '.');
  if (extension == NULL) return NULL;

  ServerMediaSession* sms = NULL;
  Boolean const reuseSource = False;
  
  if (strcmp(extension, ".264") == 0) {
    // Assumed to be a H.264 Video Elementary Stream file:
    char const* descStr = "H.264 Video, streamed by the LIVE555 Media Server"; 
    sms = ServerMediaSession::createNew(env, fileName, fileName, descStr);
    OutPacketBuffer::maxSize = 100000; // allow for some possibly large H.264 frames
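    // (added note) Raise this value if larger keyframes get truncated;
    // live555 reports when a frame exceeds OutPacketBuffer::maxSize.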
    sms->addSubsession(H264LiveVideoServerMediaSubssion::createNew(env, fileName, reuseSource));
  }

  return sms;
}
DynamicRTSPServer.cpp
#include <BasicUsageEnvironment.hh>
#include "DynamicRTSPServer.hh"
#include "H264FramedLiveSource.hh"
#include <opencv/highgui.h>

//"version"
#ifndef _MEDIA_SERVER_VERSION_HH
#define _MEDIA_SERVER_VERSION_HH
#define MEDIA_SERVER_VERSION_STRING "0.85"
#endif


Cameras Camera;
int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  UserAuthenticationDatabase* authDB = NULL;
#ifdef ACCESS_CONTROL
  // To implement client access control to the RTSP server, do the following:
  authDB = new UserAuthenticationDatabase;
  authDB->addUserRecord("username1", "password1"); // replace these with real strings
  // Repeat the above with each <username>, <password> that you wish to allow
  // access to the server.
#endif

  // Create the RTSP server.  Try first with the default port number (554),
  // and then with the alternative port number (8554):
  RTSPServer* rtspServer;
  portNumBits rtspServerPortNum = 554;
  Camera.Init();
  rtspServer = DynamicRTSPServer::createNew(*env, rtspServerPortNum, authDB);
  if (rtspServer == NULL) {
    rtspServerPortNum = 8554;
    rtspServer = DynamicRTSPServer::createNew(*env, rtspServerPortNum, authDB);
  }
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }

  *env << "LIVE555 Media Server\n";
  *env << "\tversion " << MEDIA_SERVER_VERSION_STRING
       << " (LIVE555 Streaming Media library version "
       << LIVEMEDIA_LIBRARY_VERSION_STRING << ").\n";

  char* urlPrefix = rtspServer->rtspURLPrefix();
  *env << "Play streams from this server using the URL\n\t"
       << urlPrefix << "<filename>\nwhere <filename> is a file present in the current directory.\n";
  *env << "Each file‘s type is inferred from its name suffix:\n";
  *env << "\t\".264\" => a H.264 Video Elementary Stream file\n";

  // Also, attempt to create a HTTP server for RTSP-over-HTTP tunneling.
  // Try first with the default HTTP port (80), and then with the alternative HTTP
  // port numbers (8000 and 8080).

  if (rtspServer->setUpTunnelingOverHTTP(80) || rtspServer->setUpTunnelingOverHTTP(8000) || rtspServer->setUpTunnelingOverHTTP(8080)) {
    *env << "(We use port " << rtspServer->httpServerPortNum() << " for optional RTSP-over-HTTP tunneling, or for HTTP live streaming (for indexed Transport Stream files only).)\n";
  } else {
    *env << "(RTSP-over-HTTP tunneling is not available.)\n";
  }

  env->taskScheduler().doEventLoop(); // does not return
  Camera.Destory();
  return 0; // only to prevent compiler warning
}
live555MediaServer.cpp
/*
*  H264LiveVideoServerMediaSubssion.hh
*/
#ifndef _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH
#define _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH
#include <liveMedia/H264VideoFileServerMediaSubsession.hh>
#include <UsageEnvironment/UsageEnvironment.hh>


class H264LiveVideoServerMediaSubssion : public H264VideoFileServerMediaSubsession {

public:
    static H264LiveVideoServerMediaSubssion*
        createNew(UsageEnvironment& env,
        char const* fileName,
        Boolean reuseFirstSource);

protected: 
    H264LiveVideoServerMediaSubssion(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource);
    ~H264LiveVideoServerMediaSubssion();

protected:
    FramedSource* createNewStreamSource(unsigned clientSessionId,
        unsigned& estBitrate);
public:
    char fFileName[100];

};


#endif
H264LiveVideoServerMediaSubssion.hh
/*
*  H264LiveVideoServerMediaSubssion.cpp
*/

#include "H264LiveVideoServerMediaSubssion.hh"
#include "H264FramedLiveSource.hh"
#include <H264VideoStreamFramer.hh>
#include <string.h>


H264LiveVideoServerMediaSubssion* H264LiveVideoServerMediaSubssion::createNew (UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource)
{
    return new H264LiveVideoServerMediaSubssion(env, fileName, reuseFirstSource);
}

H264LiveVideoServerMediaSubssion::H264LiveVideoServerMediaSubssion(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource)
: H264VideoFileServerMediaSubsession(env, fileName, reuseFirstSource)
{
    strcpy(fFileName, fileName);
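    // (added note) fFileName is a fixed 100-byte buffer; a stream name longer
    // than that would overflow here. A robust version would check the length
    // first or use strncpy.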
}


H264LiveVideoServerMediaSubssion::~H264LiveVideoServerMediaSubssion()
{
}

FramedSource* H264LiveVideoServerMediaSubssion::createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate)
{
    estBitrate = 1000; // kbps

    H264FramedLiveSource* liveSource = H264FramedLiveSource::createNew(envir(), fFileName);
    if (liveSource == NULL)
    {
        return NULL;
    }
    return H264VideoStreamFramer::createNew(envir(), liveSource);
}
H264LiveVideoServerMediaSubssion.cpp
/*
* H264FramedLiveSource.hh
*/

#ifndef _H264FRAMEDLIVESOURCE_HH
#define _H264FRAMEDLIVESOURCE_HH

#include <FramedSource.hh>
#include <UsageEnvironment.hh>
#include <opencv/highgui.h>

extern "C"
{
#include "encoder.h"
}


class H264FramedLiveSource : public FramedSource
{
public:
    static H264FramedLiveSource* createNew(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize = 0, unsigned playTimePerFrame = 0);
    x264_nal_t * my_nal;

protected:
    H264FramedLiveSource(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize, unsigned playTimePerFrame); // called only by createNew()
    ~H264FramedLiveSource();

private:
    // redefined virtual functions:
    virtual void doGetNextFrame();
    int TransportData(unsigned char* to, unsigned maxSize);
    //static int nalIndex;

protected:
    FILE *fp;

};


class Cameras
{
public:
    void Init();
    void GetNextFrame();
    void Destory();
public:
    CvCapture * cap ;
    my_x264_encoder*  encoder;
    int n_nal;
    x264_picture_t pic_out;

    IplImage * img;
    unsigned char *RGB1;
};

#endif
H264FramedLiveSource.hh
/*
*  H264FramedLiveSource.cpp
*/

#include "H264FramedLiveSource.hh"
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
extern class Cameras Camera; // defined in live555MediaServer.cpp

#define WIDTH 320
#define HEIGHT 240
#define widthStep 960 // bytes per image row: WIDTH * 3 (BGR)
#define ENCODER_TUNE   "zerolatency"
#define ENCODER_PROFILE  "baseline"
#define ENCODER_PRESET "veryfast"
#define ENCODER_COLORSPACE X264_CSP_I420
#define CLEAR(x) (memset((&x),0,sizeof(x)))

void Convert(unsigned char *RGB, unsigned char *YUV, unsigned int width, unsigned int height);


H264FramedLiveSource::H264FramedLiveSource(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize, unsigned playTimePerFrame) : FramedSource(env)
{
    //fp = fopen(fileName, "rb");
}

H264FramedLiveSource* H264FramedLiveSource::createNew(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize /*= 0*/, unsigned playTimePerFrame /*= 0*/)
{
    H264FramedLiveSource* newSource = new H264FramedLiveSource(env, fileName, preferredFrameSize, playTimePerFrame);

    return newSource;
}

H264FramedLiveSource::~H264FramedLiveSource()
{
    //fclose(fp);
}


void H264FramedLiveSource::doGetNextFrame()
{

    fFrameSize = 0;
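    // (added note) A robust FramedSource would compare the bytes copied
    // against fMaxSize and record any overflow in fNumTruncatedBytes;
    // this code assumes each encoded frame fits in the output buffer.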
    // Sending a couple of frames per delivery seems to play a bit more smoothly; I am not sure why (it may be my imagination).
    for(int i = 0; i < 2; i++)
    {
        Camera.GetNextFrame();
        for (my_nal = Camera.encoder->nal; my_nal < Camera.encoder->nal + Camera.n_nal; ++my_nal){
            memmove((unsigned char*)fTo + fFrameSize, my_nal->p_payload, my_nal->i_payload);
            fFrameSize += my_nal->i_payload;
        }
    }

    nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
        (TaskFunc*)FramedSource::afterGetting, this); // schedule afterGetting to run after a 0-second delay
    return;
}

void Cameras::Init()
{
    int ret;
    // open the first camera
    cap = cvCreateCameraCapture(0);
    if (!cap)
    {
        fprintf(stderr, "Can not open camera1.\n");
        exit(-1);
    }
    cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, WIDTH);
    cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);

    encoder = (my_x264_encoder *)malloc(sizeof(my_x264_encoder));
    if (!encoder){
        printf("cannot malloc my_x264_encoder !\n");
        exit(EXIT_FAILURE);
    }
    CLEAR(*encoder);
    
    strcpy(encoder->parameter_preset, ENCODER_PRESET);
    strcpy(encoder->parameter_tune, ENCODER_TUNE);

    encoder->x264_parameter = (x264_param_t *)malloc(sizeof(x264_param_t));
    if (!encoder->x264_parameter){
        printf("malloc x264_parameter error!\n");
        exit(EXIT_FAILURE);
    }

    /* initialize the encoder */
    CLEAR(*(encoder->x264_parameter));
    x264_param_default(encoder->x264_parameter);

    if ((ret = x264_param_default_preset(encoder->x264_parameter, encoder->parameter_preset, encoder->parameter_tune))<0){
        printf("x264_param_default_preset error!\n");
        exit(EXIT_FAILURE);
    }

    /* threading: auto sync-lookahead, so draining and refilling an empty buffer cannot deadlock */
    encoder->x264_parameter->i_threads = X264_SYNC_LOOKAHEAD_AUTO;
    /* video options */
    encoder->x264_parameter->i_width = WIDTH;   // width of the frames to encode
    encoder->x264_parameter->i_height = HEIGHT; // height of the frames to encode
    encoder->x264_parameter->i_frame_total = 0; // total frames to encode; 0 = unknown
    encoder->x264_parameter->i_keyint_max = 25;
    /* stream parameters */
    encoder->x264_parameter->i_bframe = 5;
    encoder->x264_parameter->b_open_gop = 0;
    encoder->x264_parameter->i_bframe_pyramid = 0;
    encoder->x264_parameter->i_bframe_adaptive = X264_B_ADAPT_TRELLIS;

    /* log settings; leave the next line commented out unless you want encoder debug output */
//    encoder->x264_parameter->i_log_level = X264_LOG_DEBUG;

    encoder->x264_parameter->i_fps_num = 25; // frame-rate numerator
    encoder->x264_parameter->i_fps_den = 1;  // frame-rate denominator

    encoder->x264_parameter->b_intra_refresh = 1;
    encoder->x264_parameter->b_annexb = 1;
    /////////////////////////////////////////////////////////////////////////////////////////////////////

    strcpy(encoder->parameter_profile, ENCODER_PROFILE);
    if ((ret = x264_param_apply_profile(encoder->x264_parameter, encoder->parameter_profile))<0){
        printf("x264_param_apply_profile error!\n");
        exit(EXIT_FAILURE);
    }
    /* open the encoder */
    encoder->x264_encoder = x264_encoder_open(encoder->x264_parameter);
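    // (added note) x264_encoder_open() returns NULL on failure; a robust
    // version would check the result before continuing.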
    encoder->colorspace = ENCODER_COLORSPACE;

    /* initialize the picture */
    encoder->yuv420p_picture = (x264_picture_t *)malloc(sizeof(x264_picture_t));
    if (!encoder->yuv420p_picture){
        printf("malloc encoder->yuv420p_picture error!\n");
        exit(EXIT_FAILURE);
    }
    if ((ret = x264_picture_alloc(encoder->yuv420p_picture, encoder->colorspace, WIDTH, HEIGHT))<0){
        printf("ret=%d\n", ret);
        printf("x264_picture_alloc error!\n");
        exit(EXIT_FAILURE);
    }

    encoder->yuv420p_picture->img.i_csp = encoder->colorspace;
    encoder->yuv420p_picture->img.i_plane = 3;
    encoder->yuv420p_picture->i_type = X264_TYPE_AUTO;

    /* allocate the YUV buffer */
    encoder->yuv = (uint8_t *)malloc(WIDTH*HEIGHT * 3 / 2);
    if (!encoder->yuv){
        printf("malloc yuv error!\n");
        exit(EXIT_FAILURE);
    }
    memset(encoder->yuv, 0, WIDTH*HEIGHT * 3 / 2); // CLEAR(*(encoder->yuv)) would zero only the first byte
    encoder->yuv420p_picture->img.plane[0] = encoder->yuv;
    encoder->yuv420p_picture->img.plane[1] = encoder->yuv + WIDTH*HEIGHT;
    encoder->yuv420p_picture->img.plane[2] = encoder->yuv + WIDTH*HEIGHT + WIDTH*HEIGHT / 4;

    n_nal = 0;
    encoder->nal = (x264_nal_t *)calloc(2, sizeof(x264_nal_t));
    if (!encoder->nal){
        printf("malloc x264_nal_t error!\n");
        exit(EXIT_FAILURE);
    }
    CLEAR(*(encoder->nal));

    RGB1 = (unsigned char *)malloc(HEIGHT * WIDTH * 3);
    
}
void Cameras::GetNextFrame()
{
    img = cvQueryFrame(cap);
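    // (added note) cvQueryFrame() can return NULL if the frame grab fails;
    // a robust version would check img before reading it.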

    // swap OpenCV's BGR byte order to RGB while packing into RGB1
    for (int i = 0; i < HEIGHT; i++)
    {
        for (int j = 0; j< WIDTH; j++)            
        {
            RGB1[(i*WIDTH + j) * 3] = img->imageData[i * widthStep + j * 3 + 2];
            RGB1[(i*WIDTH + j) * 3 + 1] = img->imageData[i * widthStep + j * 3 + 1];                
            RGB1[(i*WIDTH + j) * 3 + 2] = img->imageData[i * widthStep + j * 3];
        }
    }
    Convert(RGB1, encoder->yuv, WIDTH, HEIGHT);
    encoder->yuv420p_picture->i_pts++;
//printf("!!!!!\n");
    if ( x264_encoder_encode(encoder->x264_encoder, &encoder->nal, &n_nal, encoder->yuv420p_picture, &pic_out) < 0){
        printf("x264_encoder_encode error!\n");
        exit(EXIT_FAILURE);
    }
//printf("@@@@@@\n");
    /*for (my_nal = encoder->nal; my_nal < encoder->nal + n_nal; ++my_nal){
        write(fd_write, my_nal->p_payload, my_nal->i_payload);
    }*/
}
void Cameras::Destory()
{
    free(RGB1);
    cvReleaseCapture(&cap);
    free(encoder->yuv);
    free(encoder->yuv420p_picture);
    free(encoder->x264_parameter);
    x264_encoder_close(encoder->x264_encoder);
    free(encoder);
}
H264FramedLiveSource.cpp
#ifndef _ENCODER_H
#define _ENCODER_H

#include <x264.h>

typedef struct my_x264_encoder{
    x264_param_t  * x264_parameter;
    char parameter_preset[20];
    char parameter_tune[20];
    char parameter_profile[20];
    x264_t  * x264_encoder;
    x264_picture_t * yuv420p_picture;
    long colorspace;
    unsigned char *yuv;
    x264_nal_t * nal;
} my_x264_encoder;

#endif
encoder.h
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <iostream>
// RGB-to-YUV conversion coefficients
#define MY(a,b,c) (( a*  0.2989  + b*  0.5866  + c*  0.1145))
#define MU(a,b,c) (( a*(-0.1688) + b*(-0.3312) + c*  0.5000 + 128))
#define MV(a,b,c) (( a*  0.5000  + b*(-0.4184) + c*(-0.0816) + 128))
//#define MY(a,b,c) (( a*  0.257 + b*  0.504  + c*  0.098+16))
//#define MU(a,b,c) (( a*( -0.148) + b*(- 0.291) + c* 0.439 + 128))
//#define MV(a,b,c) (( a*  0.439  + b*(- 0.368) + c*( - 0.071) + 128))

// clamp results to [0, 255]
#define DY(a,b,c) (MY(a,b,c) > 255 ? 255 : (MY(a,b,c) < 0 ? 0 : MY(a,b,c)))
#define DU(a,b,c) (MU(a,b,c) > 255 ? 255 : (MU(a,b,c) < 0 ? 0 : MU(a,b,c)))
#define DV(a,b,c) (MV(a,b,c) > 255 ? 255 : (MV(a,b,c) < 0 ? 0 : MV(a,b,c)))
#define CLIP(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))
//RGB to YUV
void Convert(unsigned char *RGB, unsigned char *YUV, unsigned int width, unsigned int height)
{
    // variable declarations
    unsigned int i, x, y, j;
    unsigned char *Y = NULL;
    unsigned char *U = NULL;
    unsigned char *V = NULL;

    Y = YUV;
    U = YUV + width*height;
    V = U + ((width*height) >> 2);
    for (y = 0; y < height; y++)
    for (x = 0; x < width; x++)
    {
        j = y*width + x;
        i = j * 3;
        Y[j] = (unsigned char)(DY(RGB[i], RGB[i + 1], RGB[i + 2]));
        if (x % 2 == 1 && y % 2 == 1)
        {
            j = (width >> 1) * (y >> 1) + (x >> 1);
            // i computed above is still valid here
            U[j] = (unsigned char)
                ((DU(RGB[i], RGB[i + 1], RGB[i + 2]) +
                DU(RGB[i - 3], RGB[i - 2], RGB[i - 1]) +
                DU(RGB[i - width * 3], RGB[i + 1 - width * 3], RGB[i + 2 - width * 3]) +
                DU(RGB[i - 3 - width * 3], RGB[i - 2 - width * 3], RGB[i - 1 - width * 3])) / 4);
            V[j] = (unsigned char)
                ((DV(RGB[i], RGB[i + 1], RGB[i + 2]) +
                DV(RGB[i - 3], RGB[i - 2], RGB[i - 1]) +
                DV(RGB[i - width * 3], RGB[i + 1 - width * 3], RGB[i + 2 - width * 3]) +
                DV(RGB[i - 3 - width * 3], RGB[i - 2 - width * 3], RGB[i - 1 - width * 3])) / 4);
        }
    }
}
RGB2YUV.cpp
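As an aside: if your OpenCV build provides COLOR_BGR2YUV_I420, the manual BGR-to-RGB swap in Cameras::GetNextFrame() plus the whole Convert() above can be collapsed into one call. A minimal sketch, untested here, assuming the C++ OpenCV API is available alongside the C API used elsewhere:

#include <opencv2/imgproc/imgproc.hpp>
#include <cstring>

// Sketch: convert a captured BGR frame straight to I420.
// Assumes cv::COLOR_BGR2YUV_I420 exists in your OpenCV version.
void ConvertWithOpenCV(const cv::Mat& bgr, unsigned char* yuv)
{
    cv::Mat i420; // single-channel, (height * 3 / 2) rows x width cols
    cv::cvtColor(bgr, i420, cv::COLOR_BGR2YUV_I420);
    memcpy(yuv, i420.data, bgr.cols * bgr.rows * 3 / 2);
}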

 

That's all the code. Some of it is modified library source and some is adapted from other people's code, so there may be leftover unused variables or steps; just ignore them, they don't affect compilation.

g++ -c *.cpp -I /usr/local/include/groupsock  -I /usr/local/include/UsageEnvironment -I /usr/local/include/liveMedia -I /usr/local/include/BasicUsageEnvironment -I .

g++  *.o /usr/local/lib/libliveMedia.a /usr/local/lib/libgroupsock.a /usr/local/lib/libBasicUsageEnvironment.a /usr/local/lib/libUsageEnvironment.a /usr/local/lib/libx264.so /usr/local/lib/libopencv_highgui.so /usr/local/lib/libopencv_videoio.so /usr/lib/x86_64-linux-gnu/libx264.so.142 -ldl  -lm -lpthread -ldl -g

These are my build commands; because I accidentally installed two versions of libx264, both copies end up on the link line.
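If only a single libx264 is installed, a cleaner link line along these lines should work (a sketch; adjust the library paths and OpenCV library names to your installation):

g++ *.o /usr/local/lib/libliveMedia.a /usr/local/lib/libgroupsock.a /usr/local/lib/libBasicUsageEnvironment.a /usr/local/lib/libUsageEnvironment.a -lx264 -lopencv_highgui -ldl -lm -lpthread -g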

Switch to root, then run ./a.out.
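Root is only needed because port 554 is privileged; without it, the code falls back to port 8554. On Linux you can instead grant the binary permission to bind low ports (a sketch, assuming the setcap tool from libcap is installed):

sudo setcap 'cap_net_bind_service=+ep' ./a.out
./a.out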

Besides VLC, the MX Player app on a phone works even better. I always watch the stream on my phone: connect to your own Wi-Fi and enter the stream URL.
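The stream name only has to end in ".264": createNewSMS() checks nothing but the extension, and H264FramedLiveSource never opens the file, so no actual file needs to exist. For example, from VLC or any RTSP client on the same network (192.168.1.100 is a hypothetical server address; append :8554 if the server fell back to the alternative port):

vlc rtsp://192.168.1.100/live.264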

The code sets the frame rate to 25 fps. I can't measure the actual rate by eye, but playback is quite smooth; the resolution is 320x240 (adjust it as you like), and the latency is around one second or less.
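One likely contributor to that latency: Cameras::Init() sets i_bframe = 5 after applying the "zerolatency" tune, which re-enables B-frames (the tune disables them) and makes the encoder buffer frames before emitting them. A possible tweak, my suggestion rather than part of the original code:

/* Untested low-latency tweak: keep the zerolatency tune's settings
   instead of overriding them after x264_param_default_preset(). */
encoder->x264_parameter->i_bframe = 0; // no B-frames, no reorder delay
encoder->x264_parameter->i_bframe_adaptive = X264_B_ADAPT_NONE;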

 

I read many papers on video surveillance and live streaming. Some never build a server, some just push JPEG images, and some only manage 6-7 fps; none of them really met my requirements.

At the time I hoped to find suitable papers or blog posts to learn from, but there weren't any.

Plenty of people know how to do this but won't share it. Spending a little time writing it up is worth it: beginners can learn from it, and knowledge passed on from person to person is how the field moves forward faster.

I hope my readers will share the knowledge they worked hard to gain, and spread that habit to the people around them, so that everyone learns to share.

Thanks for reading. Suggestions and questions are welcome.

 

Once it tests clean, the remaining step is to port the code to your development board; the details depend on your board, and the process is not complicated.

 

Original post: http://www.cnblogs.com/chaingank/p/4702554.html
