Episode two of my experiment with the Raspberry Pi I received at PyCon 2013. On the way back from Santa Clara I stopped by Fry’s and bought the only USB sound card they had in store, a weird-looking Creative, plus a powered USB hub. I also ordered a case from Adafruit Industries (though I’m not convinced by the case).
The output of lsusb:
Bus 001 Device 006: ID 147a:e03d Formosa Industrial Computing, Inc.
Hooked them all together and installed avahi-daemon on the Pi so I can SSH into it easily from any LAN (ssh raspberrypi.local, although I should change its name to something more unique). I tested arecord locally first. It took me a while to figure out how to use arecord; it’s old stuff that I’m not very used to. You need to specify the hardware device. If you get this sort of error:
arecord: main:682: audio open error: No such file or directory
you probably haven’t specified that you want to record from the card that actually has an input device:
pi@raspberrypi ~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: Audio [2 Channel USB Audio], device 0: USB Audio [USB Audio]
Subdevice #0: subdevice #0
I hooked my phone’s audio player to the mic input of the USB card so that there would be audio constantly coming into the Pi, then started recording:
pi@raspberrypi ~ $ arecord -D hw:1,0 -f s16_le -c 2 > test.wav
I specified the hardware device I want to use, the sample format and the number of channels. Playing back that file worked:
pi@raspberrypi ~ $ aplay -D hw:1,0 test.wav
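Besides listening to it, a quick way to sanity-check what arecord actually wrote is to read the WAV header. A small Python sketch (the test.wav filename is from the commands above; everything else is standard library):

```python
import wave

def describe_wav(path):
    """Return (channels, sample_width_bytes, frame_rate) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getnchannels(), w.getsampwidth(), w.getframerate()

# e.g. on the Pi: describe_wav("test.wav") after the arecord command above
```

Handy to confirm the card is really delivering the sample rate you think it is.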
The next step was to capture live audio from the line input of the USB card, transcode it to Ogg Vorbis and ship the bits to the remote Icecast server I set up last week. I quickly gave up on ezstream and switched to ices2, the Icecast source client: it seems easier to manage since it takes care of the encoding itself. This is the input module I used for the Pi:
<!-- Read metadata (from stdin by default, or -->
<!-- filename defined below (if the latter, only on SIGUSR1) -->
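Those comments sit inside the ices2 &lt;input&gt; block. Filled out, it looks roughly like this (parameter names are from the stock ices2 ALSA example; the device and channel values mirror the arecord flags above, and the metadata filename is just a placeholder):

```xml
<input>
  <module>alsa</module>
  <param name="rate">16000</param>
  <param name="channels">2</param>
  <param name="device">hw:1,0</param>
  <param name="metadata">1</param>
  <param name="metadatafilename">/home/pi/ices-metadata</param>
</input>
```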
The USB sound card I’m using samples at 16000 Hz. I chose not to resample, only to downmix stereo to mono to save bandwidth.
<!-- stereo->mono downmixing, enabled by setting this to 1 -->
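That comment belongs to the per-stream &lt;instance&gt; section of the ices2 config. A sketch of the downmix switch together with a matching &lt;encode&gt; block (the bitrate value here is just an example, not what I actually used):

```xml
<downmix>1</downmix>  <!-- 1 = mix the stereo input down to mono -->
<encode>
  <nominal-bitrate>64000</nominal-bitrate>  <!-- example value, in bits/s -->
  <samplerate>16000</samplerate>            <!-- no resampling: matches the input rate -->
  <channels>1</channels>                    <!-- mono after downmixing -->
</encode>
```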
And it all seems to work: the Pi sends a clear signal up to the streaming server and has been doing so for a while. Big success so far. My next step will be to write a script that grabs data from the OpenStack Summit schedule for one room and adds that information as metadata for the stream, so listeners will have an idea of who is speaking or what the session is about.
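Per the config comments above, ices2 rereads its metadata file on SIGUSR1, so that script would boil down to rewriting the file and signalling the process. A rough sketch of the pieces (the schedule JSON layout and file paths are hypothetical placeholders, and the artist=/title= line format is my reading of the ices2 metadata docs):

```python
# Sketch: push "who's speaking now" into the stream via ices2's
# metadata-file + SIGUSR1 mechanism.
import os
import signal

METADATA_FILE = "/home/pi/ices-metadata"  # must match <metadatafilename> in ices.xml

def format_metadata(session):
    """One schedule entry -> the artist=/title= lines ices2 parses."""
    return "artist={0}\ntitle={1}\n".format(
        session.get("speaker", "OpenStack Summit"), session.get("title", ""))

def current_session(schedule, now):
    """Pick the entry whose [start, end) interval (epoch seconds) contains now."""
    for entry in schedule:
        if entry["start"] <= now < entry["end"]:
            return entry
    return None

def push_metadata(ices_pid, session):
    """Rewrite the metadata file, then tell ices2 to reread it."""
    with open(METADATA_FILE, "w") as f:
        f.write(format_metadata(session))
    os.kill(ices_pid, signal.SIGUSR1)

# Usage sketch (on the Pi, with the real schedule fetched over HTTP):
#   schedule = json.load(urllib.request.urlopen(SCHEDULE_URL))
#   session = current_session(schedule, time.time())
#   if session:
#       push_metadata(ices_pid, session)
```

Run from cron every few minutes, that should keep the stream metadata roughly in sync with the room schedule.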
Update: the stream had high latency, around 20 seconds, so I decided to play a little with the sampling rates. The latency went down to around 1 second using <param name="rate">48000</param> in the input module and <samplerate>48000</samplerate> in the encode module, with no resampling. Unfortunately the USB card dies every now and then when it’s not connected through a powered USB hub. Too bad, because the USB hub looks ugly.