The previous article in this series, 《(三)Audio子系统之AudioRecord.startRecording》, covered how AudioRecord starts recording. Next we continue with the implementation of AudioRecord's read() method.
Function prototype:
public int read(byte[] audioData, int offsetInBytes, int sizeInBytes)
Purpose:
Reads audio data from the audio hardware's recording buffer and copies it directly into the given buffer. (The SDK note that the method always returns 0 when the buffer is not a direct buffer applies to the ByteBuffer overload, not to this byte[] overload.)
Parameters:
audioData: the array into which the recorded audio data is written
offsetInBytes: starting offset within audioData, in bytes
sizeInBytes: maximum number of bytes to read
Return value:
The total number of bytes read into the buffer. Returns ERROR_INVALID_OPERATION if the object was not properly initialized, or ERROR_BAD_VALUE if the parameters do not resolve to valid data and indexes. The number of bytes read will not exceed sizeInBytes.
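To put the API in context, here is a minimal application-side usage sketch. The sample rate, buffer sizes, stop flag, and error handling are illustrative assumptions, not something specified in this article:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Minimal capture loop sketch; requires the android.permission.RECORD_AUDIO permission.
public class PcmCapture {
    private volatile boolean isRecording = true;   // assumed stop flag, cleared elsewhere

    public void captureLoop() {
        final int sampleRate = 44100;
        final int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

        byte[] pcm = new byte[minBuf];
        recorder.startRecording();
        while (isRecording) {
            int n = recorder.read(pcm, 0, pcm.length);   // blocks until PCM data is available
            if (n == AudioRecord.ERROR_INVALID_OPERATION || n == AudioRecord.ERROR_BAD_VALUE) {
                break;                                   // state or parameter error
            }
            // consume the first n bytes of pcm here (file, encoder, network, ...)
        }
        recorder.stop();
        recorder.release();
    }
}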
Now let's go into the framework and analyze the actual implementation.
frameworks\base\media\java\android\media\AudioRecord.java
public int read(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    if (mState != STATE_INITIALIZED) {
        return ERROR_INVALID_OPERATION;
    }

    if ( (audioData == null) || (offsetInBytes < 0 ) || (sizeInBytes < 0)
            || (offsetInBytes + sizeInBytes < 0)    // detect integer overflow
            || (offsetInBytes + sizeInBytes > audioData.length)) {
        return ERROR_BAD_VALUE;
    }

    return native_read_in_byte_array(audioData, offsetInBytes, sizeInBytes);
}
Here we only analyze the path that reads data into a byte[] array.
frameworks\base\core\jni\android_media_AudioRecord.cpp
static jint android_media_AudioRecord_readInByteArray(JNIEnv *env, jobject thiz,
                                                      jbyteArray javaAudioData,
                                                      jint offsetInBytes, jint sizeInBytes) {
    jbyte* recordBuff = NULL;
    // get the audio recorder from which we'll read new audio samples
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL) {
        ALOGE("Unable to retrieve AudioRecord object, can't record");
        return 0;
    }

    if (!javaAudioData) {
        ALOGE("Invalid Java array to store recorded audio, can't record");
        return 0;
    }

    recordBuff = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);

    if (recordBuff == NULL) {
        ALOGE("Error retrieving destination for recorded audio data, can't record");
        return 0;
    }

    // read the new audio data from the native AudioRecord object
    ssize_t recorderBuffSize = lpRecorder->frameCount()*lpRecorder->frameSize();
    ssize_t readSize = lpRecorder->read(recordBuff + offsetInBytes,
                                        sizeInBytes > (jint)recorderBuffSize ?
                                            (jint)recorderBuffSize : sizeInBytes );

    env->ReleaseByteArrayElements(javaAudioData, recordBuff, 0);

    if (readSize < 0) {
        readSize = (jint)AUDIO_JAVA_INVALID_OPERATION;
    }
    return (jint) readSize;
}
ssize_t AudioRecord::read(void* buffer, size_t userSize)
{
    if (mTransfer != TRANSFER_SYNC) {
        return INVALID_OPERATION;
    }

    if (ssize_t(userSize) < 0 || (buffer == NULL && userSize != 0)) {
        // sanity-check. user is most-likely passing an error code, and it would
        // make the return value ambiguous (actualSize vs error).
        ALOGE("AudioRecord::read(buffer=%p, size=%zu (%zu)", buffer, userSize, userSize);
        return BAD_VALUE;
    }

    ssize_t read = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;

        status_t err = obtainBuffer(&audioBuffer, &ClientProxy::kForever);
        if (err < 0) {
            if (read > 0) {
                break;
            }
            return ssize_t(err);
        }

        size_t bytesRead = audioBuffer.size;
        memcpy(buffer, audioBuffer.i8, bytesRead);
        buffer = ((char *) buffer) + bytesRead;
        userSize -= bytesRead;
        read += bytesRead;

        releaseBuffer(&audioBuffer);
    }

    return read;
}
mFrameSize is channelCount multiplied by the number of bytes per sample. Each pass through the loop obtains a chunk of data from the shared memory via obtainBuffer(), then memcpy copies it into the application-level buffer, and this repeats until the whole userSize has been copied into buffer.
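As a quick illustration of that relationship (the concrete values below are assumptions for a 16-bit stereo stream, not values taken from the code above):

// Java-side view of the same arithmetic: one frame = channelCount * bytes per sample.
int channelCount = 2;                                 // e.g. AudioFormat.CHANNEL_IN_STEREO
int bytesPerSample = 2;                               // e.g. AudioFormat.ENCODING_PCM_16BIT
int frameSize = channelCount * bytesPerSample;        // 4 bytes per frame

byte[] userBuffer = new byte[4096];
int framesRequested = userBuffer.length / frameSize;  // 1024 frames -> audioBuffer.frameCount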
Let's now look at the obtainBuffer() function in detail.
status_t AudioRecord::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
    // previous and new IAudioRecord sequence numbers are used to detect track re-creation
    uint32_t oldSequence = 0;
    uint32_t newSequence;

    Proxy::Buffer buffer;
    status_t status = NO_ERROR;

    static const int32_t kMaxTries = 5;
    int32_t tryCounter = kMaxTries;

    do {
        // obtainBuffer() is called with mutex unlocked, so keep extra references to these fields to
        // keep them from going away if another thread re-creates the track during obtainBuffer()
        sp<AudioRecordClientProxy> proxy;
        sp<IMemory> iMem;
        sp<IMemory> bufferMem;
        {   // start of lock scope
            AutoMutex lock(mLock);

            newSequence = mSequence;
            // did previous obtainBuffer() fail due to media server death or voluntary invalidation?
            if (status == DEAD_OBJECT) {
                // re-create track, unless someone else has already done so
                if (newSequence == oldSequence) {
                    status = restoreRecord_l("obtainBuffer");
                    if (status != NO_ERROR) {
                        buffer.mFrameCount = 0;
                        buffer.mRaw = NULL;
                        buffer.mNonContig = 0;
                        break;
                    }
                }
            }
            oldSequence = newSequence;

            // Keep the extra references
            proxy = mProxy;
            iMem = mCblkMemory;
            bufferMem = mBufferMemory;

            // Non-blocking if track is stopped
            if (!mActive) {
                requested = &ClientProxy::kNonBlocking;
            }

        }   // end of lock scope

        buffer.mFrameCount = audioBuffer->frameCount;
        // FIXME starts the requested timeout and elapsed over from scratch
        status = proxy->obtainBuffer(&buffer, requested, elapsed);

    } while ((status == DEAD_OBJECT) && (tryCounter-- > 0));

    audioBuffer->frameCount = buffer.mFrameCount;
    audioBuffer->size = buffer.mFrameCount * mFrameSize;
    audioBuffer->raw = buffer.mRaw;
    if (nonContig != NULL) {
        *nonContig = buffer.mNonContig;
    }
    return status;
}
The main work done in this function:
1. Grab the AudioRecordClientProxy proxy plus mCblkMemory and mBufferMemory. These were obtained from the AudioFlinger side when the AudioRecord object was constructed, via RecordThread::RecordTrack->getCblk() and RecordThread::RecordTrack->getBuffers().
2. In start(), mActive was already set to true before AudioRecordThread.resume(), so requested stays &ClientProxy::kForever.
3. Call proxy->obtainBuffer() to actually fetch the data.
4. Update audioBuffer's frameCount, size, and PCM data pointer.
We now continue with step 3: ClientProxy::obtainBuffer.
frameworks\av\media\libmedia\AudioTrackShared.cpp
status_t ClientProxy::obtainBuffer(Buffer* buffer, const struct timespec *requested,
        struct timespec *elapsed)
{
    LOG_ALWAYS_FATAL_IF(buffer == NULL || buffer->mFrameCount == 0);
    struct timespec total;          // total elapsed time spent waiting
    total.tv_sec = 0;
    total.tv_nsec = 0;
    bool measure = elapsed != NULL; // whether to measure total elapsed time spent waiting
    status_t status;
    enum {
        TIMEOUT_ZERO,       // requested == NULL || *requested == 0
        TIMEOUT_INFINITE,   // *requested == infinity
        TIMEOUT_FINITE,     // 0 < *requested < infinity
        TIMEOUT_CONTINUE,   // additional chances after TIMEOUT_FINITE
    } timeout;
    if (requested == NULL) {
        timeout = TIMEOUT_ZERO;
    } else if (requested->tv_sec == 0 && requested->tv_nsec == 0) {
        timeout = TIMEOUT_ZERO;
    } else if (requested->tv_sec == INT_MAX) {
        timeout = TIMEOUT_INFINITE;
    } else {
        timeout = TIMEOUT_FINITE;
        if (requested->tv_sec > 0 || requested->tv_nsec >= MEASURE_NS) {
            measure = true;
        }
    }
    struct timespec before;
    bool beforeIsValid = false;
    audio_track_cblk_t* cblk = mCblk;
    bool ignoreInitialPendingInterrupt = true;
    // check for shared memory corruption
    if (mIsShutdown) {
        status = NO_INIT;
        goto end;
    }
    for (;;) {
        int32_t flags = android_atomic_and(~CBLK_INTERRUPT, &cblk->mFlags);
        // check for track invalidation by server, or server death detection
        if (flags & CBLK_INVALID) {
            ALOGV("Track invalidated");
            status = DEAD_OBJECT;
            goto end;
        }
        // check for obtainBuffer interrupted by client
        if (!ignoreInitialPendingInterrupt && (flags & CBLK_INTERRUPT)) {
            ALOGV("obtainBuffer() interrupted by client");
            status = -EINTR;
            goto end;
        }
        ignoreInitialPendingInterrupt = false;
        // compute number of frames available to write (AudioTrack) or read (AudioRecord)
        int32_t front;
        int32_t rear;
        if (mIsOut) {
            // The barrier following the read of mFront is probably redundant.
            // We're about to perform a conditional branch based on 'filled',
            // which will force the processor to observe the read of mFront
            // prior to allowing data writes starting at mRaw.
            // However, the processor may support speculative execution,
            // and be unable to undo speculative writes into shared memory.
            // The barrier will prevent such speculative execution.
            front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
            rear = cblk->u.mStreaming.mRear;
        } else {
            // On the other hand, this barrier is required.
            rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
            front = cblk->u.mStreaming.mFront;
        }
        ssize_t filled = rear - front;
        // pipe should not be overfull
        if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
            if (mIsOut) {
                ALOGE("Shared memory control block is corrupt (filled=%zd, mFrameCount=%zu); "
                        "shutting down", filled, mFrameCount);
                mIsShutdown = true;
                status = NO_INIT;
                goto end;
            }
            // for input, sync up on overrun
            filled = 0;
            cblk->u.mStreaming.mFront = rear;
            (void) android_atomic_or(CBLK_OVERRUN, &cblk->mFlags);
        }
        // don't allow filling pipe beyond the nominal size
        size_t avail = mIsOut ? mFrameCount - filled : filled;
        if (avail > 0) {
            // 'avail' may be non-contiguous, so return only the first contiguous chunk
            size_t part1;
            if (mIsOut) {
                rear &= mFrameCountP2 - 1;
                part1 = mFrameCountP2 - rear;
            } else {
                front &= mFrameCountP2 - 1;
                part1 = mFrameCountP2 - front;
            }
            if (part1 > avail) {
                part1 = avail;
            }
            if (part1 > buffer->mFrameCount) {
                part1 = buffer->mFrameCount;
            }
            buffer->mFrameCount = part1;
            buffer->mRaw = part1 > 0 ?
                    &((char *) mBuffers)[(mIsOut ? rear : front) * mFrameSize] : NULL;
            buffer->mNonContig = avail - part1;
            mUnreleased = part1;
            status = NO_ERROR;
            break;
        }
        struct timespec remaining;
        const struct timespec *ts;
        switch (timeout) {
        case TIMEOUT_ZERO:
            status = WOULD_BLOCK;
            goto end;
        case TIMEOUT_INFINITE:
            ts = NULL;
            break;
        case TIMEOUT_FINITE:
            timeout = TIMEOUT_CONTINUE;
            if (MAX_SEC == 0) {
                ts = requested;
                break;
            }
            // fall through
        case TIMEOUT_CONTINUE:
            // FIXME we do not retry if requested < 10ms? needs documentation on this state machine
            if (!measure || requested->tv_sec < total.tv_sec ||
                    (requested->tv_sec == total.tv_sec && requested->tv_nsec <= total.tv_nsec)) {
                status = TIMED_OUT;
                goto end;
            }
            remaining.tv_sec = requested->tv_sec - total.tv_sec;
            if ((remaining.tv_nsec = requested->tv_nsec - total.tv_nsec) < 0) {
                remaining.tv_nsec += 1000000000;
                remaining.tv_sec++;
            }
            if (0 < MAX_SEC && MAX_SEC < remaining.tv_sec) {
                remaining.tv_sec = MAX_SEC;
                remaining.tv_nsec = 0;
            }
            ts = &remaining;
            break;
        default:
            LOG_ALWAYS_FATAL("obtainBuffer() timeout=%d", timeout);
            ts = NULL;
            break;
        }
        int32_t old = android_atomic_and(~CBLK_FUTEX_WAKE, &cblk->mFutex);
        if (!(old & CBLK_FUTEX_WAKE)) {
            if (measure && !beforeIsValid) {
                clock_gettime(CLOCK_MONOTONIC, &before);
                beforeIsValid = true;
            }
            errno = 0;
            (void) syscall(__NR_futex, &cblk->mFutex,
                    mClientInServer ? FUTEX_WAIT_PRIVATE : FUTEX_WAIT, old & ~CBLK_FUTEX_WAKE, ts);
            // update total elapsed time spent waiting
            if (measure) {
                struct timespec after;
                clock_gettime(CLOCK_MONOTONIC, &after);
                total.tv_sec += after.tv_sec - before.tv_sec;
                long deltaNs = after.tv_nsec - before.tv_nsec;
                if (deltaNs < 0) {
                    deltaNs += 1000000000;
                    total.tv_sec--;
                }
                if ((total.tv_nsec += deltaNs) >= 1000000000) {
                    total.tv_nsec -= 1000000000;
                    total.tv_sec++;
                }
                before = after;
                beforeIsValid = true;
            }
            switch (errno) {
            case 0:            // normal wakeup by server, or by binderDied()
            case EWOULDBLOCK:  // benign race condition with server
            case EINTR:        // wait was interrupted by signal or other spurious wakeup
            case ETIMEDOUT:    // time-out expired
                // FIXME these error/non-0 status are being dropped
                break;
            default:
                status = errno;
                ALOGE("%s unexpected error %s", __func__, strerror(status));
                goto end;
            }
        }
    }

end:
    if (status != NO_ERROR) {
        buffer->mFrameCount = 0;
        buffer->mRaw = NULL;
        buffer->mNonContig = 0;
        mUnreleased = 0;
    }
    if (elapsed != NULL) {
        *elapsed = total;
    }
    if (requested == NULL) {
        requested = &kNonBlocking;
    }
    if (measure) {
        ALOGV("requested %ld.%03ld elapsed %ld.%03ld",
              requested->tv_sec, requested->tv_nsec / 1000000,
              total.tv_sec, total.tv_nsec / 1000000);
    }
    return status;
}
The main work done in this function:
1. From the definition of ClientProxy::kForever, tv_sec is INT_MAX, so timeout is TIMEOUT_INFINITE.
2. Read rear and front from cblk. As analyzed before, rear is advanced by the RecordThread as it keeps writing freshly captured PCM data into the shared buffer, while front records how far the client has consumed (see the index-arithmetic sketch after this list).
3. If filled falls outside the valid range [0, mFrameCount], the client has fallen behind: the RecordThread has produced more data than the buffer can hold because the application is reading too slowly. This is treated as an overrun: CBLK_OVERRUN is set, mFront is resynced to rear, and filled is reset to 0.
4. If data is available, the pointer to the recorded data is returned in mRaw and the number of contiguous frames obtained in mFrameCount.
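The rear/front bookkeeping above uses monotonically increasing indices and a power-of-two buffer size, so the physical offset is obtained by masking with mFrameCountP2 - 1, and the contiguous chunk never crosses the wrap point. Here is a small standalone Java sketch of that arithmetic (variable names mirror the C++ code; the concrete numbers are made up for illustration):

// Standalone illustration of the index math in ClientProxy::obtainBuffer (record path).
public class RingIndexDemo {
    public static void main(String[] args) {
        final int frameCountP2 = 1024;                  // power-of-two buffer size in frames

        // rear is advanced by the producer (RecordThread), front by the consumer (the app).
        // Both only ever increase; wrap-around is harmless because only the difference
        // and the masked value are ever used.
        int rear = 5000;                                // example producer position
        int front = 4500;                               // example consumer position

        int filled = rear - front;                      // frames available to read: 500
        int readOffset = front & (frameCountP2 - 1);    // physical offset inside the buffer: 404
        int part1 = frameCountP2 - readOffset;          // contiguous frames before the wrap: 620
        if (part1 > filled) {
            part1 = filled;                             // never hand out more than is available
        }
        System.out.println("filled=" + filled + " readOffset=" + readOffset + " part1=" + part1);
    }
}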
Summary:
This completes the flow for fetching PCM data. At this point we should be able to understand how the Audio system manages the record buffer: the RecordThread reads data from the hardware into shared memory (IMemory), and the application then reads it back out of that shared memory.
The author's skills are limited, so if there are any mistakes or omissions in this article, corrections from readers are greatly appreciated!
Original article: https://www.cnblogs.com/pngcui/p/10016588.html