
FFmpeg Summary (11): Converting Formats with FFmpeg and Playing a Network Audio Stream on Android


Approach:
1. Decode the MP3 into PCM (raw audio data): this is FFmpeg's job.
2. Have the OpenSL ES engine create an AudioPlayer, which under the hood plays through AudioTrack.

An error I ran into:
Error: #include nested too deeply
Cause: the C header files include each other.
Solutions (a minimal sketch follows the list):

  • 1. Pull the part shared by the two headers out into a separate header file.
  • 2. Add include guards: #ifndef ... #define ... #endif
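A minimal sketch of both fixes combined. The header name audio_common.h is hypothetical (not from this project); the getPCM prototype is the one FFmpegCore.c actually exposes:

// audio_common.h - hypothetical shared header extracted from the two headers
// that used to include each other
#ifndef AUDIO_COMMON_H
#define AUDIO_COMMON_H

#include <stddef.h>  // size_t

// declaration needed on both sides: the PCM fetch that the OpenSL ES callback calls
int getPCM(void **pcm, size_t *pcmSize);

#endif // AUDIO_COMMON_H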

The x86 .so did not build; I suspect a version incompatibility, since other related .so files built for x86 without problems. I will update the reason here when I have time.


Writing NDK code in Android Studio is quite convenient (screenshots of the setup omitted).

Project structure: (screenshot omitted)

Java code:

package com.hejunlin.ffmpegaudio;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.EditText;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    private EditText mInput;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mInput = (EditText) findViewById(R.id.et_input);
     mInput.setText("http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3");
        findViewById(R.id.bt_play).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                NativePlayer.play(mInput.getText().toString().trim());
            }
        });
        findViewById(R.id.bt_pause).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                NativePlayer.stop();
            }
        });
    }
}

NativePlayer:

package com.hejunlin.ffmpegaudio;

/**
 * Created by hejunlin on 17/5/6.
 */

public class NativePlayer {

    static {
        System.loadLibrary("NativePlayer");
    }

    public static native void play(String url);
    public static native void stop();
}

Layout file:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.hejunlin.ffmpegaudio.MainActivity">

    <TextView
        android:id="@+id/tv_input"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:padding="10dp"
        android:layout_marginTop="30dp"
        android:text="播放链接:"
        android:textSize="20sp"/>

    <EditText
        android:id="@+id/et_input"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_toRightOf="@id/tv_input"
        android:padding="10dp"/>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/et_input"
        android:orientation="horizontal">


        <Button
            android:id="@+id/bt_play"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="10dp"
            android:layout_marginLeft="60dp"
            android:background="@drawable/button_shape"
            android:textColor="@color/white"
            android:text="播放" />

        <Button
            android:id="@+id/bt_pause"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="10dp"
            android:background="@drawable/button_shape"
            android:textColor="@color/white"
            android:layout_marginLeft="80dp"
            android:text="暂停" />
    </LinearLayout>

</RelativeLayout>

JNI code:
OpenSL_ES_Core.c

//
// Created by hejunlin on 17/5/6.
//

#include "OpenSL_ES_Core.h"
#include "FFmpegCore.h"
#include <assert.h>
#include <jni.h>
#include <string.h>

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// for native asset manager
#include <sys/types.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include "log.h"

// engine interfaces
static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine;

// output mix interfaces
static SLObjectItf outputMixObject = NULL;
static SLEnvironmentalReverbItf outputMixEnvironmentalReverb = NULL;

// buffer queue player interfaces
static SLObjectItf bqPlayerObject = NULL;
static SLPlayItf bqPlayerPlay;
static SLAndroidSimpleBufferQueueItf bqPlayerBufferQueue;
static SLEffectSendItf bqPlayerEffectSend;
static SLMuteSoloItf bqPlayerMuteSolo;
static SLVolumeItf bqPlayerVolume;

// aux effect on the output mix, used by the buffer queue player
static const SLEnvironmentalReverbSettings reverbSettings =
        SL_I3DL2_ENVIRONMENT_PRESET_STONECORRIDOR;

static void *buffer;
static size_t bufferSize;

// this callback handler is called every time a buffer finishes playing
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    LOGD(">> buffere queue callback");
    assert(bq == bqPlayerBufferQueue);
    bufferSize = 0;
    //assert(NULL == context);
    getPCM(&buffer, &bufferSize);
    // for streaming playback, replace this test by logic to find and fill the next buffer
    if (NULL != buffer && 0 != bufferSize) {
        SLresult result;
        // enqueue another buffer
        result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, buffer,
                                                 bufferSize);
        // the most likely other result is SL_RESULT_BUFFER_INSUFFICIENT,
        // which for this code example would indicate a programming error
        assert(SL_RESULT_SUCCESS == result);

        (void)result;
    }
}

void initOpenSLES()
{
    LOGD(">> initOpenSLES...");
    SLresult result;

    // 1. create the engine
    result = slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
    LOGD(">> initOpenSLES... step 1, result = %d", result);

    // 2. realize the engine
    result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    LOGD(">> initOpenSLES...step 2, result = %d", result);

    // 3. get the engine interface, which is needed in order to create other objects
    result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
    LOGD(">> initOpenSLES...step 3, result = %d", result);

    // 4. create the output mix, with environmental reverb specified as a non-required interface
    const SLInterfaceID ids[1] = {SL_IID_ENVIRONMENTALREVERB};
    const SLboolean req[1] = {SL_BOOLEAN_FALSE};
    result = (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 1, ids, req);
    LOGD(">> initOpenSLES...step 4, result = %d", result);

    // 5. realize the output mix
    result = (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);
    LOGD(">> initOpenSLES...step 5, result = %d", result);

    // 6. get the environmental reverb interface
    // this could fail if the environmental reverb effect is not available,
    // either because the feature is not present, excessive CPU load, or
    // the required MODIFY_AUDIO_SETTINGS permission was not requested and granted
    result = (*outputMixObject)->GetInterface(outputMixObject, SL_IID_ENVIRONMENTALREVERB,
                                              &outputMixEnvironmentalReverb);
    if (SL_RESULT_SUCCESS == result) {
        result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
                outputMixEnvironmentalReverb, &reverbSettings);
        LOGD(">> initOpenSLES...step 6, result = %d", result);
    }

}

// init buffer queue
void initBufferQueue(int rate, int channel, int bitsPerSample)
{
    LOGD(">> initBufferQueue");
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm;
    format_pcm.formatType = SL_DATAFORMAT_PCM;
    format_pcm.numChannels = channel;
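    // OpenSL ES expresses sampling rates in milliHertz (SL_SAMPLINGRATE_44_1 == 44100000),
    // which is why the rate is multiplied by 1000 on the next line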
    format_pcm.samplesPerSec = rate * 1000;
    format_pcm.bitsPerSample = bitsPerSample;
    format_pcm.containerSize = 16;
    if (channel == 2)
        format_pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    else
        format_pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
    format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND,
            /*SL_IID_MUTESOLO,*/ SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE,
            /*SL_BOOLEAN_TRUE,*/ SL_BOOLEAN_TRUE};
    result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
                                                3, ids, req);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // realize the player
    result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // get the play interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // get the buffer queue interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
                                             &bqPlayerBufferQueue);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // register callback on the buffer queue
    result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // get the effect send interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_EFFECTSEND,
                                             &bqPlayerEffectSend);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // get the volume interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_VOLUME, &bqPlayerVolume);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;

    // set the player's state to playing
    result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
}

// stop the native audio system
void stop()
{
    // destroy buffer queue audio player object, and invalidate all associated interfaces
    if (bqPlayerObject != NULL) {
        (*bqPlayerObject)->Destroy(bqPlayerObject);
        bqPlayerObject = NULL;
        bqPlayerPlay = NULL;
        bqPlayerBufferQueue = NULL;
        bqPlayerEffectSend = NULL;
        bqPlayerMuteSolo = NULL;
        bqPlayerVolume = NULL;
    }

    // destroy output mix object, and invalidate all associated interfaces
    if (outputMixObject != NULL) {
        (*outputMixObject)->Destroy(outputMixObject);
        outputMixObject = NULL;
        outputMixEnvironmentalReverb = NULL;
    }

    // destroy engine object, and invalidate all associated interfaces
    if (engineObject != NULL) {
        (*engineObject)->Destroy(engineObject);
        engineObject = NULL;
        engineEngine = NULL;
    }

    // release the FFmpeg decoder
    releaseFFmpeg();
}

void play(char *url)
{
    int rate, channel;
    LOGD("...get url=%s", url);
    // 1. initialize the FFmpeg decoder
    initFFmpeg(&rate, &channel, url);

    // 2. initialize OpenSL ES
    initOpenSLES();

    // 3. initialize the buffer queue player
    initBufferQueue(rate, channel, SL_PCMSAMPLEFORMAT_FIXED_16);

    // 4. start playback: invoke the callback once by hand to enqueue the first buffer
    bqPlayerCallback(bqPlayerBufferQueue, NULL);
}

FFmpegCore.c

#include "log.h"
#include "FFmpegCore.h"
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/samplefmt.h"
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

uint8_t *outputBuffer;
size_t outputBufferSize;

AVPacket packet;
int audioStream;
AVFrame *aFrame;
SwrContext *swr;
AVFormatContext *aFormatCtx;
AVCodecContext *aCodecCtx;

int initFFmpeg(int *rate, int *channel, char *url) {

    av_register_all();
    aFormatCtx = avformat_alloc_context();
    LOGD("ffmpeg get url=:%s", url);
    // the network audio stream URL
    char *file_name = url;

    // Open audio file
    if (avformat_open_input(&aFormatCtx, file_name, NULL, NULL) != 0) {
        LOGE("Couldn‘t open file:%s\n", file_name);
        return -1; // Couldn‘t open file
    }

    // Retrieve stream information
    if (avformat_find_stream_info(aFormatCtx, NULL) < 0) {
        LOGE("Couldn‘t find stream information.");
        return -1;
    }

    // Find the first audio stream
    int i;
    audioStream = -1;
    for (i = 0; i < aFormatCtx->nb_streams; i++) {
        if (aFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO &&
            audioStream < 0) {
            audioStream = i;
        }
    }
    if (audioStream == -1) {
        LOGE("Couldn‘t find audio stream!");
        return -1;
    }

    // Get a pointer to the codec context for the audio stream
    aCodecCtx = aFormatCtx->streams[audioStream]->codec;

    // Find the decoder for the audio stream
    AVCodec *aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    if (!aCodec) {
        fprintf(stderr, "Unsupported codec!\n");
        return -1;
    }

    if (avcodec_open2(aCodecCtx, aCodec, NULL) < 0) {
        LOGE("Could not open codec.");
        return -1; // Could not open codec
    }

    aFrame = av_frame_alloc();

    // set up the resampler: convert the decoder output to 16-bit interleaved PCM
    swr = swr_alloc();
    av_opt_set_int(swr, "in_channel_layout",  aCodecCtx->channel_layout, 0);
    av_opt_set_int(swr, "out_channel_layout", aCodecCtx->channel_layout,  0);
    av_opt_set_int(swr, "in_sample_rate",     aCodecCtx->sample_rate, 0);
    av_opt_set_int(swr, "out_sample_rate",    aCodecCtx->sample_rate, 0);
    av_opt_set_sample_fmt(swr, "in_sample_fmt",  aCodecCtx->sample_fmt, 0);
    av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_S16,  0);
    swr_init(swr);

    // allocate the PCM output buffer
    outputBufferSize = 8196;
    outputBuffer = (uint8_t *) malloc(sizeof(uint8_t) * outputBufferSize);

    // return the sample rate and channel count to the caller
    *rate = aCodecCtx->sample_rate;
    *channel = aCodecCtx->channels;
    return 0;
}

// fetch the next chunk of PCM data; called repeatedly from the buffer-queue callback
int getPCM(void **pcm, size_t *pcmSize) {
    LOGD(">> getPcm");
    while (av_read_frame(aFormatCtx, &packet) >= 0) {

        int frameFinished = 0;
        // Is this a packet from the audio stream?
        if (packet.stream_index == audioStream) {
            avcodec_decode_audio4(aCodecCtx, aFrame, &frameFinished, &packet);

            if (frameFinished) {
                // data_size is the number of bytes occupied by the decoded audio
                int data_size = av_samples_get_buffer_size(
                        aFrame->linesize, aCodecCtx->channels,
                        aFrame->nb_samples, aCodecCtx->sample_fmt, 1);
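                // caveat: data_size reflects the decoder's sample format; when that format is
                // not 16-bit (e.g. planar float), the S16 output of swr_convert below occupies
                // aFrame->nb_samples * aCodecCtx->channels * 2 bytes instead, so returning
                // data_size as pcmSize is only correct for 16-bit decoder output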
                LOGD(">> getPcm data_size=%d", data_size);
                // the reallocation here may be problematic
                if (data_size > outputBufferSize) {
                    outputBufferSize = data_size;
                    outputBuffer = (uint8_t *) realloc(outputBuffer,
                                                       sizeof(uint8_t) * outputBufferSize);
                }

                // convert the samples to S16 interleaved
                swr_convert(swr, &outputBuffer, aFrame->nb_samples,
                            (uint8_t const **) (aFrame->extended_data),
                        aFrame->nb_samples);

                // hand the PCM data back to the caller
                *pcm = outputBuffer;
                *pcmSize = data_size;
                return 0;
            }
        }
    }
    return -1;
}

// release the FFmpeg resources
int releaseFFmpeg()
{
    av_packet_unref(&packet);
    free(outputBuffer);        // allocated with malloc/realloc, so release with free()
    av_frame_free(&aFrame);
    avcodec_close(aCodecCtx);
    avformat_close_input(&aFormatCtx);
    return 0;
}

NativePlayer.c

//
// Created by hejunlin on 17/5/6.
//
#include "log.h"
#include "com_hejunlin_ffmpegaudio_NativePlayer.h"
#include "OpenSL_ES_Core.h"

JNIEXPORT void JNICALL
Java_com_hejunlin_ffmpegaudio_NativePlayer_play(JNIEnv *env, jclass type, jstring url_) {
    const char *url = (*env)->GetStringUTFChars(env, url_, 0);
    LOGD("start playaudio... url=%s", url);

    play((char *) url);
    (*env)->ReleaseStringUTFChars(env, url_, url);
}

JNIEXPORT void JNICALL
Java_com_hejunlin_ffmpegaudio_NativePlayer_stop(JNIEnv *env, jclass type) {

    LOGD("stop");
    stop();
}

The library can be built with either CMake or ndk-build; both produce libNativePlayer.so. (Build screenshot omitted.)

Result: (screenshot of the running app omitted)

The log output is as follows:

05-07 10:14:04.476 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.491 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.516 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.525 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.542 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.556 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.573 6001-6097/com.hejunlin.ffmpegaudio D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.577 6001-6001/com.hejunlin.ffmpegaudio W/linker: libNativePlayer.so: unused DT entry: type 0x6ffffffe arg 0x1414
05-07 10:14:04.577 6001-6001/com.hejunlin.ffmpegaudio W/linker: libNativePlayer.so: unused DT entry: type 0x6fffffff arg 0x4
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libavcodec-57.so: unused DT entry: type 0x6ffffffe arg 0x5da4
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libavcodec-57.so: unused DT entry: type 0x6fffffff arg 0x2
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libavformat-57.so: unused DT entry: type 0x6ffffffe arg 0x6408
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libavformat-57.so: unused DT entry: type 0x6fffffff arg 0x2
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libswresample-2.so: unused DT entry: type 0x6ffffffe arg 0xcd4
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libswresample-2.so: unused DT entry: type 0x6fffffff arg 0x1
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libswscale-4.so: unused DT entry: type 0x6ffffffe arg 0xd70
05-07 10:14:04.578 6001-6001/com.hejunlin.ffmpegaudio W/linker: libswscale-4.so: unused DT entry: type 0x6fffffff arg 0x1
05-07 10:14:04.589 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/NativePlayer.c: Java_com_hejunlin_ffmpegaudio_NativePlayer_play:start playaudio... url=http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3
05-07 10:14:04.589 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: play:...get url=http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3
05-07 10:14:04.590 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: initFFmpeg:ffmpeg get url=:http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3
05-07 10:14:04.696 6001-6001/com.hejunlin.ffmpegaudio D/libc-netbsd: getaddrinfo: qzone.60dj.com get result from proxy >>
05-07 10:14:04.949 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES... step 1, result = 0
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 2, result = 0
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 3, result = 0
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 4, result = 0
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 5, result = 0
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio W/libOpenSLES: Leaving Object::GetInterface (SL_RESULT_FEATURE_UNSUPPORTED)
05-07 10:14:04.950 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initBufferQueue:>> initBufferQueue
05-07 10:14:04.951 6001-6001/com.hejunlin.ffmpegaudio V/AudioTrack: set(): streamType 3, sampleRate 44100, format 0x1, channelMask 0x3, frameCount 0, flags #0, notificationFrames 0, sessionId 774, transferType 0
05-07 10:14:04.951 6001-6001/com.hejunlin.ffmpegaudio V/AudioTrack: set() streamType 3 frameCount 0 flags 0000
05-07 10:14:04.951 6001-6001/com.hejunlin.ffmpegaudio D/AudioTrack: audiotrack 0xf459cd80 set Type 3, rate 44100, fmt 1, chn 3, fcnt 0, flags 0000
05-07 10:14:04.951 6001-6001/com.hejunlin.ffmpegaudio D/AudioTrack: mChannelMask 0x3
05-07 10:14:04.953 6001-6001/com.hejunlin.ffmpegaudio V/AudioTrack: createTrack_l() output 2 afLatency 21
05-07 10:14:04.953 6001-6001/com.hejunlin.ffmpegaudio V/AudioTrack: afFrameCount=1024, minBufCount=1, afSampleRate=48000, afLatency=21
05-07 10:14:04.953 6001-6001/com.hejunlin.ffmpegaudio V/AudioTrack: minFrameCount: 2822, afFrameCount=1024, minBufCount=3, sampleRate=44100, afSampleRate=48000, afLatency=21
05-07 10:14:04.954 6001-6001/com.hejunlin.ffmpegaudio D/AudioTrackCenter: addTrack, trackId:0xdaf0c000, frameCount:2822, sampleRate:44100, trackPtr:0xf459cd80
05-07 10:14:04.954 6001-6001/com.hejunlin.ffmpegaudio D/AudioTrackCenter: mAfSampleRate 48000, sampleRate 44100, AfFrameCount 1024 , mAfSampleRate 48000, frameCount 2822
05-07 10:14:04.979 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.979 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.979 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 2822, mFrameCount 2822, filled 0
05-07 10:14:04.979 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(940) returned 2822 = 940 + 1882 err 0
05-07 10:14:04.979 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, interrupt() FUTEX_WAKE
05-07 10:14:04.979 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrack: audiotrack 0xf459cd80 stop done
05-07 10:14:04.980 6001-6001/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=188
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 2822, mFrameCount 2822, filled 0
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(940) returned 2822 = 940 + 1882 err 0
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 2775, mFrameCount 2822, filled 47
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(893) returned 2775 = 893 + 1882 err 0
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=4608
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 1882, mFrameCount 2822, filled 940
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(940) returned 1882 = 940 + 942 err 0
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 1623, mFrameCount 2822, filled 1199
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(681) returned 1623 = 681 + 942 err 0
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.980 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=4608
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 942, mFrameCount 2822, filled 1880
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(940) returned 942 = 940 + 2 err 0
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio D/AudioTrackShared: front(0x0), mIsOut 1, avail 471, mFrameCount 2822, filled 2351
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio V/AudioTrack: obtainBuffer(469) returned 471 = 469 + 2 err 0
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.981 6001-6681/com.hejunlin.ffmpegaudio D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=4608

Original article: http://blog.csdn.net/hejjunlin/article/details/71308337
