This post records what I picked up while learning how to transmit information with sound waves. Anyone who has used Alipay's face-to-face payment knows that what it actually transfers is a user code, which I understand to be a unique number (valid only briefly) that the backend generates in real time and ties to the user's ID.
Acoustic modulation and demodulation involve signal processing; the concept explanations in this post are my own understanding, and corrections are welcome where I am wrong.
Let's first look at two concepts:
Time domain ---> the real, physical view; it describes how things change in time order, and it is the ordinary frame of reference in which we observe change.
Frequency domain ---> a mathematical construct; it makes it convenient to study how things vary with frequency, and serves as a reference frame for marking out a signal's underlying regularities.
To make signals easier to process, we transform hard-to-handle time-domain signals into the frequency domain; that is precisely what the frequency domain is for.
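For sampled signals, the tool that carries out this conversion is the discrete Fourier transform (DFT), which step 4 below applies in its fast form, the FFT. As a reference (standard background, not from the original post):

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-i 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1$$

where the x[n] are the N time-domain samples and the magnitude |X[k]| measures how strongly the frequency k·f_s/N (with f_s the sample rate) is present in the signal.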
To transmit information with sound waves, we need to turn the information into audio data and emit it through the speaker.
Step 1: what is the information?
Let's start by transmitting digits (1, 2, ...), so the information is simply a digit. How do we represent a digit with sound? Music already gives us a mapping: in numbered notation, 6 is "La", the international standard pitch, whose frequency is 440 Hz. So let's encode our information with this standard musical scale, using the high octave:
| Note | Frequency (Hz) | Period (μs) |
| --- | --- | --- |
| Low 1 (Do) | 262 | 3816 |
| Low 2 (Re) | 294 | 3401 |
| Low 3 (Mi) | 330 | 3030 |
| Low 4 (Fa) | 349 | 2865 |
| Low 5 (So) | 392 | 2551 |
| Low 6 (La) | 440 | 2272 |
| Low 7 (Si) | 494 | 2024 |
| High 1 (Do) | 1047 | 955 |
| High 2 (Re) | 1175 | 851 |
| High 3 (Mi) | 1319 | 758 |
| High 4 (Fa) | 1397 | 716 |
| High 5 (So) | 1568 | 637 |
| High 6 (La) | 1760 | 568 |
| High 7 (Si) | 1976 | 506 |
Once we know that the information to transmit is really just a sequence of specific frequencies, the rest is straightforward.
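As a minimal sketch in Java (the class and method names here are my own, not from the original post), the digit-to-frequency mapping from the table can be written like this:

```java
// A minimal sketch: map the digits 1..7 to the "high octave" frequencies
// from the table above. Class and array names are illustrative.
public final class ToneTable {
    // Index 0 is unused so that digit d maps directly to HIGH_FREQS[d].
    static final int[] HIGH_FREQS = {0, 1047, 1175, 1319, 1397, 1568, 1760, 1976};

    static int frequencyForDigit(int digit) {
        if (digit < 1 || digit > 7) {
            throw new IllegalArgumentException("digit must be 1..7");
        }
        return HIGH_FREQS[digit];
    }
}
```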
Step 2: how do we turn the information into audio data?
Now let's define a sound: a sound is a waveform at specific frequencies (in nature, a superposition of many frequencies); in the time domain you can picture it as a particle moving up and down over time at a particular frequency. Next, look at Android's audio playback class AudioTrack (http://developer.android.com/reference/android/media/AudioTrack.html); here is its constructor:
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode)
Parameters:
| Parameter | Description |
| --- | --- |
| streamType | the type of the audio stream. See STREAM_VOICE_CALL, STREAM_SYSTEM, STREAM_RING, STREAM_MUSIC, STREAM_ALARM, and STREAM_NOTIFICATION. |
| sampleRateInHz | the initial source sample rate expressed in Hz. |
| channelConfig | describes the configuration of the audio channels. See CHANNEL_OUT_MONO and CHANNEL_OUT_STEREO. |
| audioFormat | the format in which the audio data is represented. See ENCODING_PCM_16BIT, ENCODING_PCM_8BIT, and ENCODING_PCM_FLOAT. |
| bufferSizeInBytes | the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure. |
| mode | streaming or static buffer. See MODE_STATIC and MODE_STREAM. |
We only need to choose these settings:
a> set the stream type to STREAM_MUSIC
b> set the sample rate to 44.1 kHz, i.e. 44100 Hz
c> set the output channel to mono, CHANNEL_OUT_MONO
d> set the audio bit depth to ENCODING_PCM_16BIT
e> set the size of the buffer that holds the audio data
f> set the mode to MODE_STREAM, the streaming mode
A construction sketch follows this list.
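Put together, here is one plausible way to wire up exactly these settings (my own sketch, not the author's original code; the TonePlayer class name is hypothetical):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public final class TonePlayer {
    final int sampleRate = 44100;
    final AudioTrack track;

    TonePlayer() {
        // e> ask the framework for the minimum workable buffer size
        int bufferSize = AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        track = new AudioTrack(
                AudioManager.STREAM_MUSIC,        // a> stream type
                sampleRate,                       // b> 44.1 kHz sample rate
                AudioFormat.CHANNEL_OUT_MONO,     // c> mono output
                AudioFormat.ENCODING_PCM_16BIT,   // d> 16-bit PCM depth
                bufferSize,                       // e> buffer size in bytes
                AudioTrack.MODE_STREAM);          // f> streaming mode
    }
}
```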
The AudioTrack playback method that goes with MODE_STREAM is write():
public int write(short[] audioData, int offsetInShorts, int sizeInShorts)
| Parameter | Description |
| --- | --- |
| audioData | the array that holds the data to play. |
| offsetInShorts | the offset expressed in shorts in audioData where the data to play starts. |
| sizeInShorts | the number of shorts to write in audioData after the offset. |
a> the buffer of audio data
b> the playback offset
c> the amount of data to write from the buffer
We will transmit 1, 2, 3, 4, 5, 6, 7; the program needs a handful of variables for this, and the tone data is generated according to the AudioTrack parameters above. A sketch of one way to do it follows.
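Here is a rough sketch of that generation step (my own reconstruction under the settings above; makeTone is a hypothetical helper, not from the post): each digit becomes a short burst of 16-bit PCM sine samples at the note's frequency, written to the track.

```java
// Inside the hypothetical TonePlayer class from the sketch in step 2.

// Synthesize `durationMs` of a sine tone at `freqHz` as 16-bit PCM samples.
short[] makeTone(int freqHz, int durationMs) {
    int numSamples = sampleRate * durationMs / 1000;
    short[] samples = new short[numSamples];
    for (int i = 0; i < numSamples; i++) {
        double angle = 2.0 * Math.PI * freqHz * i / sampleRate;
        samples[i] = (short) (Math.sin(angle) * Short.MAX_VALUE);
    }
    return samples;
}

// Play the digits 1..7 as a sequence of 200 ms tones.
void playDigits() {
    track.play();
    for (int digit = 1; digit <= 7; digit++) {
        short[] tone = makeTone(ToneTable.frequencyForDigit(digit), 200);
        // write(short[], offsetInShorts, sizeInShorts); blocks in MODE_STREAM
        track.write(tone, 0, tone.length);
    }
    track.stop();
}
```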
Step 3: capture the audio data, i.e. good old recording, with AudioRecord (http://developer.android.com/reference/android/media/AudioRecord.html):
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)
Class constructor. Though some invalid parameters will result in an IllegalArgumentException, other errors do not. Thus you should call getState() immediately after construction to confirm that the object is usable.
| Parameter | Description |
| --- | --- |
| audioSource | the recording source (also referred to as capture preset). See MediaRecorder.AudioSource for the capture preset definitions. |
| sampleRateInHz | the sample rate expressed in Hertz. 44100Hz is currently the only rate that is guaranteed to work on all devices, but other rates such as 22050, 16000, and 11025 may work on some devices. |
| channelConfig | describes the configuration of the audio channels. See CHANNEL_IN_MONO and CHANNEL_IN_STEREO. CHANNEL_IN_MONO is guaranteed to work on all devices. |
| audioFormat | the format in which the audio data is represented. See ENCODING_PCM_16BIT and ENCODING_PCM_8BIT. |
| bufferSizeInBytes | the total size (in bytes) of the buffer where audio data is written to during the recording. New audio data can be read from this buffer in smaller chunks than this size. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioRecord instance. Using values smaller than getMinBufferSize() will result in an initialization failure. |
This part is fairly simple: just read the samples into a buffer of the preset size, as sketched below.
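A sketch of the recording side, mirroring the playback settings (MIC source, 44100 Hz, mono, 16-bit PCM); again this is my own illustration rather than the original code:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public final class ToneRecorder {
    // Read one analysis window of samples from the microphone, or null on failure.
    static short[] recordWindow() {
        int sampleRate = 44100;
        int minBuf = AudioRecord.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                Math.max(minBuf, 8192));   // at least 4096 shorts = 8192 bytes

        // As the docs above say: confirm the object is usable before recording.
        if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
            return null;
        }
        short[] window = new short[4096];  // analysis window; size is a free choice
        recorder.startRecording();
        recorder.read(window, 0, window.length);  // fill the preset buffer
        recorder.stop();
        recorder.release();
        return window;
    }
}
```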
Step 4, the key step: run an FFT on the data read in step 3 to obtain the signal's frequency, then look that frequency up against the note scale from step 1 to recover the data.
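A rough sketch of this detection step (my own reconstruction, not the author's code): run a plain radix-2 FFT over the recorded window, pick the strongest bin, and convert its index back to Hz so it can be matched, within some tolerance, against the note table to recover the digit.

```java
public final class FreqDetector {

    // In-place iterative radix-2 FFT; re and im lengths must be a power of two.
    static void fft(double[] re, double[] im) {
        int n = re.length;
        for (int i = 1, j = 0; i < n; i++) {          // bit-reversal permutation
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        for (int len = 2; len <= n; len <<= 1) {      // butterfly stages
            double ang = -2 * Math.PI / len;
            for (int i = 0; i < n; i += len) {
                for (int k = 0; k < len / 2; k++) {
                    double wr = Math.cos(ang * k), wi = Math.sin(ang * k);
                    int a = i + k, b = i + k + len / 2;
                    double xr = re[b] * wr - im[b] * wi;
                    double xi = re[b] * wi + im[b] * wr;
                    re[b] = re[a] - xr; im[b] = im[a] - xi;
                    re[a] += xr;        im[a] += xi;
                }
            }
        }
    }

    // Returns the frequency (Hz) of the strongest FFT bin in the window.
    static double peakFrequency(short[] window, int sampleRate) {
        int n = Integer.highestOneBit(window.length); // truncate to a power of two
        double[] re = new double[n], im = new double[n];
        for (int i = 0; i < n; i++) re[i] = window[i];
        fft(re, im);
        int best = 1;
        double bestMag = 0;
        for (int k = 1; k < n / 2; k++) {             // skip DC, keep half-spectrum
            double mag = re[k] * re[k] + im[k] * im[k];
            if (mag > bestMag) { bestMag = mag; best = k; }
        }
        return (double) best * sampleRate / n;        // bin index -> Hz
    }
}
```

With a 4096-sample window at 44100 Hz, the bin spacing is about 10.8 Hz, comfortably finer than the gaps between the high-octave note frequencies in the step 1 table.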
That wraps up my notes on the steps for transmitting information with sound waves. My code is quite rough, and because the detection threshold is set rather high, the transmission distance is short.
As the poem goes, knowledge from paper is always shallow; to truly understand, you must try it yourself. Give it a go if you're interested.
Original article: http://blog.csdn.net/suxiaolincalendar/article/details/43935027