AudioRecord is the Android class for capturing raw PCM audio data. WebRTC's wrapper around it lives in org/webrtc/audio/WebRtcAudioRecord.java. Below we walk through how this wrapper creates and starts an AudioRecord, reads the captured audio data, and releases everything afterwards.
Creation and Initialization
```java
private int initRecording(int sampleRate, int channels) {
  Logging.d(TAG, "initRecording(sampleRate=" + sampleRate + ", channels=" + channels + ")");
  if (audioRecord != null) {
    reportWebRtcAudioRecordInitError("InitRecording called twice without StopRecording.");
    return -1;
  }
  final int bytesPerFrame = channels * (BITS_PER_SAMPLE / 8);
  final int framesPerBuffer = sampleRate / BUFFERS_PER_SECOND;
  byteBuffer = ByteBuffer.allocateDirect(bytesPerFrame * framesPerBuffer);
  Logging.d(TAG, "byteBuffer.capacity: " + byteBuffer.capacity());
  emptyBytes = new byte[byteBuffer.capacity()];
  // Rather than passing the ByteBuffer with every callback (requiring
  // the potentially expensive GetDirectBufferAddress) we simply have the
  // native class cache the address to the memory once.
  nativeCacheDirectBufferAddress(byteBuffer, nativeAudioRecord);
  // Get the minimum buffer size required for the successful creation of
  // an AudioRecord object, in byte units.
  // Note that this size doesn't guarantee a smooth recording under load.
  final int channelConfig = channelCountToConfiguration(channels);
  int minBufferSize =
      AudioRecord.getMinBufferSize(sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT);
  if (minBufferSize == AudioRecord.ERROR || minBufferSize == AudioRecord.ERROR_BAD_VALUE) {
    reportWebRtcAudioRecordInitError("AudioRecord.getMinBufferSize failed: " + minBufferSize);
    return -1;
  }
  Logging.d(TAG, "AudioRecord.getMinBufferSize: " + minBufferSize);
  // Use a larger buffer size than the minimum required when creating the
  // AudioRecord instance to ensure smooth recording under load. It has been
  // verified that it does not increase the actual recording latency.
  int bufferSizeInBytes = Math.max(BUFFER_SIZE_FACTOR * minBufferSize, byteBuffer.capacity());
  Logging.d(TAG, "bufferSizeInBytes: " + bufferSizeInBytes);
  try {
    audioRecord = new AudioRecord(audioSource, sampleRate, channelConfig,
        AudioFormat.ENCODING_PCM_16BIT, bufferSizeInBytes);
  } catch (IllegalArgumentException e) {
    reportWebRtcAudioRecordInitError("AudioRecord ctor error: " + e.getMessage());
    releaseAudioResources();
    return -1;
  }
  if (audioRecord == null || audioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    reportWebRtcAudioRecordInitError("Failed to create a new AudioRecord instance");
    releaseAudioResources();
    return -1;
  }
  if (effects != null) {
    effects.enable(audioRecord.getAudioSessionId());
  }
  logMainParameters();
  logMainParametersExtended();
  return framesPerBuffer;
}
```
The initialization method does two main things.
- Creating the buffer
  - The code that actually consumes the data lives in the native layer, so a Java direct buffer is allocated here. AudioRecord also offers an interface for reading data into a ByteBuffer, and the copy into that ByteBuffer happens in native code as well, so a direct buffer is the more efficient choice.
  - The ByteBuffer's capacity is the size of a single read. Android stores audio in packed (interleaved) format: with multiple channels, all channels of one sample point are stored contiguously, followed by the next sample point; a frame is the set of all channel samples for one sample point. Each read covers 10 ms worth of frames, i.e. the sample rate divided by 100 (a sample count equal to the sample rate is 1 s of data, so dividing by 100 gives 10 ms). The capacity is therefore frames × channels × bytes per sample (PCM 16-bit means two bytes per sample); a worked sketch of this arithmetic appears right after this list.
  - The nativeCacheDirectBufferAddress JNI function called here saves the ByteBuffer's memory address in the native layer up front, so the address does not have to be looked up again every time audio data is read.
- Creating the AudioRecord object. Its constructor takes several parameters:
  - audioSource
    The audio capture mode. The default is VOICE_COMMUNICATION, which enables the hardware AEC (echo cancellation).
  - sampleRate
    The sample rate.
  - channelConfig
    The channel configuration, derived from the channel count.
  - audioFormat
    The audio data format; here AudioFormat.ENCODING_PCM_16BIT is used, i.e. PCM 16-bit.
  - bufferSize
    The buffer size the system allocates when creating the AudioRecord. The larger of two values is used: twice the minimum buffer size returned by AudioRecord.getMinBufferSize, and the capacity of the ByteBuffer the data is read into. As the code comment explains, doubling the minimum keeps capture running smoothly when the system is under load, and the larger buffer has been verified not to increase capture latency.
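To make the numbers concrete, here is a small worked sketch of the arithmetic above. The constants mirror the ones WebRTC defines (BITS_PER_SAMPLE, BUFFERS_PER_SECOND, BUFFER_SIZE_FACTOR); the sample rate, channel count, and minimum buffer size are made-up example values, not taken from any device:

```java
// Worked example of the 10 ms buffer-size arithmetic described above.
public class BufferSizeMath {
    static final int BITS_PER_SAMPLE = 16;     // PCM 16-bit
    static final int BUFFERS_PER_SECOND = 100; // one buffer = 10 ms
    static final int BUFFER_SIZE_FACTOR = 2;   // use twice the minimum size

    public static void main(String[] args) {
        int sampleRate = 48000; // example rate
        int channels = 1;       // example: mono

        int bytesPerFrame = bytesPerFrame(channels);            // 2 bytes
        int framesPerBuffer = sampleRate / BUFFERS_PER_SECOND;  // 480 frames = 10 ms
        int capacity = bytesPerFrame * framesPerBuffer;         // 960 bytes

        int minBufferSize = 1920; // example of what a device might report
        int bufferSizeInBytes = Math.max(BUFFER_SIZE_FACTOR * minBufferSize, capacity);

        System.out.println("ByteBuffer capacity: " + capacity + " bytes per 10 ms");
        System.out.println("AudioRecord buffer:  " + bufferSizeInBytes + " bytes");
    }

    static int bytesPerFrame(int channels) {
        return channels * (BITS_PER_SAMPLE / 8);
    }
}
```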
Starting
```java
private boolean startRecording() {
  Logging.d(TAG, "startRecording");
  assertTrue(audioRecord != null);
  assertTrue(audioThread == null);
  try {
    audioRecord.startRecording();
  } catch (IllegalStateException e) {
    reportWebRtcAudioRecordStartError(AudioRecordStartErrorCode.AUDIO_RECORD_START_EXCEPTION,
        "AudioRecord.startRecording failed: " + e.getMessage());
    return false;
  }
  if (audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
    reportWebRtcAudioRecordStartError(
        AudioRecordStartErrorCode.AUDIO_RECORD_START_STATE_MISMATCH,
        "AudioRecord.startRecording failed - incorrect state :"
            + audioRecord.getRecordingState());
    return false;
  }
  audioThread = new AudioRecordThread("AudioRecordJavaThread");
  audioThread.start();
  return true;
}
```
This method first starts the audioRecord, then checks that it has actually entered the recording state, and only then starts the thread that reads the data.
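For comparison, here is a minimal standalone sketch of the same start-and-verify pattern against the plain android.media.AudioRecord API, assuming the RECORD_AUDIO permission has already been granted. The class and method names are illustrative, not WebRTC's:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

final class StartSketch {
    /** Returns a recording AudioRecord, or null on any failure. */
    static AudioRecord startOrNull(int sampleRate) {
        int minSize = AudioRecord.getMinBufferSize(
                sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (minSize == AudioRecord.ERROR || minSize == AudioRecord.ERROR_BAD_VALUE) {
            return null;
        }
        AudioRecord record;
        try {
            record = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    2 * minSize); // double the minimum, as WebRTC does
        } catch (IllegalArgumentException e) {
            return null;
        }
        if (record.getState() != AudioRecord.STATE_INITIALIZED) {
            record.release();
            return null;
        }
        try {
            record.startRecording();
        } catch (IllegalStateException e) {
            record.release();
            return null;
        }
        // startRecording() alone is not proof of success; verify the state,
        // exactly as WebRTC's startRecording() does.
        if (record.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            record.release();
            return null;
        }
        return record;
    }
}
```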
Reading Data
```java
private class AudioRecordThread extends Thread {
  private volatile boolean keepAlive = true;

  public AudioRecordThread(String name) {
    super(name);
  }

  // TODO(titovartem) make correct fix during webrtc:9175
  @SuppressWarnings("ByteBufferBackingArray")
  @Override
  public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
    Logging.d(TAG, "AudioRecordThread" + WebRtcAudioUtils.getThreadInfo());
    assertTrue(audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING);
    long lastTime = System.nanoTime();
    while (keepAlive) {
      int bytesRead = audioRecord.read(byteBuffer, byteBuffer.capacity());
      if (bytesRead == byteBuffer.capacity()) {
        if (microphoneMute) {
          byteBuffer.clear();
          byteBuffer.put(emptyBytes);
        }
        // It's possible we've been shut down during the read, and stopRecording() tried and
        // failed to join this thread. To be a bit safer, try to avoid calling any native methods
        // in case they've been unregistered after stopRecording() returned.
        if (keepAlive) {
          nativeDataIsRecorded(bytesRead, nativeAudioRecord);
        }
        if (audioSamplesReadyCallback != null) {
          // Copy the entire byte buffer array. Assume that the start of the byteBuffer is
          // at index 0.
          byte[] data = Arrays.copyOf(byteBuffer.array(), byteBuffer.capacity());
          audioSamplesReadyCallback.onWebRtcAudioRecordSamplesReady(
              new AudioSamples(audioRecord, data));
        }
      } else {
        String errorMessage = "AudioRecord.read failed: " + bytesRead;
        Logging.e(TAG, errorMessage);
        if (bytesRead == AudioRecord.ERROR_INVALID_OPERATION) {
          keepAlive = false;
          reportWebRtcAudioRecordError(errorMessage);
        }
      }
      if (DEBUG) {
        long nowTime = System.nanoTime();
        long durationInMs = TimeUnit.NANOSECONDS.toMillis((nowTime - lastTime));
        lastTime = nowTime;
        Logging.d(TAG, "bytesRead[" + durationInMs + "] " + bytesRead);
      }
    }
    try {
      if (audioRecord != null) {
        audioRecord.stop();
      }
    } catch (IllegalStateException e) {
      Logging.e(TAG, "AudioRecord.stop failed: " + e.getMessage());
    }
  }

  // Stops the inner thread loop and also calls AudioRecord.stop().
  // Does not block the calling thread.
  public void stopThread() {
    Logging.d(TAG, "stopThread");
    keepAlive = false;
  }
}
```
The logic that pulls data out of the AudioRecord lives in the run() method of the AudioRecordThread.
- When the thread starts, it first raises its own priority to URGENT_AUDIO via Process.setThreadPriority.
- It then loops, calling audioRecord.read to pull the captured data into the ByteBuffer, and calls the nativeDataIsRecorded JNI function to tell the native layer that the data has been read and is ready for further processing; a condensed sketch of this pattern follows below.
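As a condensed, hedged illustration of those two bullets, here is a priority-raised read loop outside WebRTC. It assumes `record` is an AudioRecord already in the recording state and `buffer` a direct ByteBuffer sized for 10 ms of audio; the Consumer interface stands in for the JNI hand-off and is not a WebRTC type:

```java
import android.media.AudioRecord;
import android.os.Process;
import java.nio.ByteBuffer;

final class ReadLoopSketch implements Runnable {
    interface Consumer { void onAudio(ByteBuffer buffer, int bytes); }

    private final AudioRecord record;
    private final ByteBuffer buffer;
    private final Consumer consumer;
    private volatile boolean keepAlive = true;

    ReadLoopSketch(AudioRecord record, ByteBuffer buffer, Consumer consumer) {
        this.record = record;
        this.buffer = buffer;
        this.consumer = consumer;
    }

    void stop() { keepAlive = false; }

    @Override public void run() {
        // Raise the priority so the 10 ms read deadline is met under load.
        Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
        while (keepAlive) {
            // read() does not advance the buffer's position; like WebRTC,
            // we assume the data starts at index 0.
            int bytesRead = record.read(buffer, buffer.capacity());
            if (bytesRead != buffer.capacity()) {
                break; // error or short read; a real implementation reports it
            }
            consumer.onAudio(buffer, bytesRead); // in WebRTC: a JNI call
        }
        try {
            record.stop();
        } catch (IllegalStateException e) {
            // already stopped or released; nothing more to do in a sketch
        }
    }
}
```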
```java
private boolean stopRecording() {
  Logging.d(TAG, "stopRecording");
  assertTrue(audioThread != null);
  audioThread.stopThread();
  if (!ThreadUtils.joinUninterruptibly(audioThread, AUDIO_RECORD_THREAD_JOIN_TIMEOUT_MS)) {
    Logging.e(TAG, "Join of AudioRecordJavaThread timed out");
    WebRtcAudioUtils.logAudioState(TAG);
  }
  audioThread = null;
  if (effects != null) {
    effects.release();
  }
  releaseAudioResources();
  return true;
}
```
As you can see, stopping first sets keepAlive, the condition of AudioRecordThread's read loop, to false, then calls ThreadUtils.joinUninterruptibly to wait for the AudioRecordThread to exit.
One detail worth highlighting: keepAlive is declared with the volatile keyword because it is written and read on different threads, and volatile guarantees that a write becomes visible to subsequent reads immediately.
After AudioRecordThread leaves its loop it calls audioRecord.stop() to stop capturing; once the thread has exited, audioRecord.release() is called to release the AudioRecord object.
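The same flag-and-join pattern, reduced to a standalone hedged sketch (the names and the 2-second timeout are illustrative, not WebRTC's):

```java
public class StopFlagSketch {
    private volatile boolean keepAlive = true;

    private final Thread worker = new Thread(() -> {
        while (keepAlive) {
            // read and hand off one 10 ms chunk of audio here
        }
        // loop exited: stop the recorder here, as AudioRecordThread does
    });

    public void start() {
        worker.start();
    }

    public void stop() throws InterruptedException {
        keepAlive = false; // volatile: the write is immediately visible to worker
        worker.join(2000); // wait for the thread to exit, with a timeout in ms
    }
}
```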
That, in broad strokes, is the Java-layer flow of audio capture in Android WebRTC.