Streaming Audio from Raspberry Pi – part 2

Episode two of my experiment with the Raspberry Pi I received at Pycon 2013. On the way back from Santa Clara I stopped by Fry's and bought the only USB sound card they had in store, a weird-looking Creative, plus a powered USB hub. I also ordered a case from Adafruit Industries (not convinced by the case, though).

Raspberry Pi and USB sound card
The output of lsusb:

Bus 001 Device 006: ID 147a:e03d Formosa Industrial Computing, Inc.

Hooked them all together and installed avahi-daemon on the Pi so I can ssh into it easily from any LAN (ssh raspberrypi.local, although I should change its hostname to something more unique). I tested arecord locally first. It took me a while to figure out how to use arecord; it's old stuff that I'm not very used to. You need to specify the hardware device. If you get this sort of error:

arecord: main:682: audio open error: No such file or directory

you probably haven't specified that you want to record from the card that actually has a capture device. List the capture hardware to find the right one:

pi@raspberrypi ~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: Audio [2 Channel USB Audio], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0

I hooked my phone's audio player to the mic input of the USB card so that there would constantly be audio coming into the Pi, then started recording:

pi@raspberrypi ~ $ arecord -D hw:1,0 -f s16_le -c 2 > test.wav

I specified the hardware device I want to use, the sample format, and the number of channels. Playing back that file worked:

pi@raspberrypi ~ $ aplay -D hw:1,0 test.wav
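
Aside: ALSA also accepts the card name in place of the numeric index, which is handy because the index can change across reboots. Using the CARD value from the arecord -l listing above, hw:1,0 can equivalently be written as:

pi@raspberrypi ~ $ arecord -D hw:CARD=Audio,DEV=0 -f s16_le -c 2 test.wav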

The next step was to capture live audio from the line input of the USB card, transcode it to Ogg Vorbis, and ship the bits to the remote Icecast server I set up last week. I quickly gave up on ezstream and switched to ices2, the Icecast source client: it's easier to manage as it takes care of the encoding. This is the input module I used on the Pi:

<input>
    <module>alsa</module>
    <param name="rate">16000</param>
    <param name="channels">2</param>
    <param name="device">hw:1,0</param>
    <!-- Read metadata (from stdin by default, or -->
    <!-- filename defined below (if the latter, only on SIGUSR1) -->
    <param name="metadata">1</param>
    <param name="metadatafilename">liveaudio</param>
</input>
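
The instance section of the same config, which tells ices2 where to send the encoded stream, is essentially the stock example shipped with the package. A sketch of mine, with the server details as placeholders (pick your own mount name and quality):

<instance>
    <hostname>ICECAST_SERVER</hostname>
    <port>8000</port>
    <password>TheSecret</password>
    <mount>/liveaudio.ogg</mount>
    <encode>
        <quality>3</quality>
        <!-- samplerate matches the input rate: no resampling -->
        <samplerate>16000</samplerate>
        <channels>1</channels>
    </encode>
</instance>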

The USB sound card I'm using sends 16 kHz samples. I chose not to resample, only to downmix stereo to mono to save bandwidth:

<!-- stereo->mono downmixing, enabled by setting this to 1 -->
<downmix>1</downmix>

And it all seems to work: the Pi sends a clear signal up to the streaming server and has been doing so for a while. A big success so far. My next step will be to write a script that grabs data from the OpenStack Summit schedule for one room and adds it as metadata to the stream: this way listeners will have an idea of who is speaking or what the session is about.
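
The plumbing for this is already in the input module above: with metadata enabled, ices2 re-reads the liveaudio file when it receives SIGUSR1. So the script will boil down to something like this (the session title is a made-up example, and the TITLE= line is Vorbis-comment style, which is what ices2 expects as far as I can tell):

pi@raspberrypi ~ $ echo "TITLE=Design Summit: session name here" > liveaudio
pi@raspberrypi ~ $ pkill -USR1 ices2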

Update: the stream had high latency, around 20 seconds, so I decided to play a little with the sampling rates. The latency went down to around 1 second using <param name="rate">48000</param> in the input module and <samplerate>48000</samplerate> in the encode module, with no resampling. Unfortunately, the USB card dies every now and then when not connected through a powered USB hub. Too bad, because the USB hub looks ugly.

Simple live audio streaming from OpenStack Summit using RaspberryPi

I have always wanted a simple way to stream audio from the OpenStack Design Summits, much like Ubuntu used to do for its Ubuntu Developer Summit. Since every participant at Pycon 2013 was kindly given a Raspberry Pi, and I had a couple of hours to kill, I started playing with a simple idea: configure a Raspberry Pi to capture audio and stream it to a public Icecast server.

First thing: spin up a virtual machine on a public cloud to serve as the public Icecast server. I used my UniCloud account and created a small server: 512 MB of RAM, 32-bit Ubuntu LTS. I updated the default security group to allow traffic on ports 22 (SSH) and 8000 (Icecast), and I also needed to assign a public floating IP. Once the machine was up, a simple apt-get install icecast2 took care of starting the audio streaming server. That's it: the streaming server part is done.
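
For the record, the security group and floating IP steps can also be scripted with the nova client instead of the dashboard; roughly like this (server name and address are placeholders):

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default tcp 8000 8000 0.0.0.0/0
nova floating-ip-create
nova add-floating-ip icecast-vm 203.0.113.10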

Back on the RasPi, to test the streaming server I installed ezstream and copied the example config files from /usr/share/doc/ezstream/examples. I copied ezstream_vorbis.xml to the pi user's home directory:

<ezstream>
    <url>http://INSTANCE_NAME:8000/armin</url>
    <sourcepassword>TheSecret</sourcepassword>
    <format>VORBIS</format>
    <filename>playlist.m3u</filename>
    <!-- loop the stream forever, for testing purposes -->
    <stream_once>0</stream_once>
    <svrinfoname>OpenStack Test Streaming Radio</svrinfoname>
    <svrinfourl>http://radio.openstack.org</svrinfourl>
    <svrinfogenre>OpenStack Test Streaming</svrinfogenre>
    <svrinfodescription></svrinfodescription>
    <svrinfobitrate>320</svrinfobitrate>
    <svrinfochannels>2</svrinfochannels>
    <svrinfosamplerate>44100</svrinfosamplerate>
    <!-- advertise on the public YP directory -->
    <svrinfopublic>1</svrinfopublic>
</ezstream>

The playlist.m3u is a simple text file with a single .ogg file in it, enough for a test. Start sending the stream to the Icecast server with:

ezstream -c ezstream_vorbis.xml

Then go play the audio in your favorite Icecast-capable player; the URL is something like http://YOUR_INSTANCE_NAME:8000/vorbis.ogg.m3u
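
If you prefer the command line, ogg123 from vorbis-tools should play the stream directly; the exact mount name depends on the <url> in the config above:

ogg123 http://YOUR_INSTANCE_NAME:8000/armin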

Simple and rudimentary, but I like it because it's easy. The next step for me is to buy a USB microphone to stream live audio captured in a room. The ideal setup, though, is to use this system to stream audio from the OpenStack Summit rooms. I need a way to connect the USB input to a regular audio mixer: any ideas on how to do that?

3 ways to get cash over the holidays with Funambol

The winter holidays provide a good opportunity to do something useful in the downtime between celebrations. Funambol's community programs offer three ways to have fun and earn some cash, too.

1. Participate in the Code Sniper program: it rewards the efforts of the open source community to help build the open source mobile cloud. The Funambol project provides the basic framework, including server and clients, that implements an open standard (SyncML, also known as OMA DS and OMA DM). You can propose to develop new clients and connectors, or help improve an existing project. Every contribution counts, although developing code counts more: writing work items, filing full bug reports, and testing them is worth $10; the reward for fixing issues and developing new features is decided by the community and can reach $500. If you want to start a new project, you may want to read How to develop a SyncML client the Agile way. Or you can help an existing project, like Thunderbird: check the list of active requirements. Bounties go up to $500. Find details on the Funambol Code Sniper pages and on Galoppini's blog.

2. Test your new phone with the Phone Sniper program: if you get a new mobile phone as a present, make the most of it and rush to myFUNAMBOL, sign up for an account (if you don't already have one), and test its sync and push capabilities. Send a report back to Funambol and earn $25. Check the list of phones that need testing on the Phone Sniper program pages.

3. Translate a Funambol client with the L10n Sniper program: one simple way to get started with Funambol is to download the code and compile it yourself. Once you've done that, you're ready to translate the client into your local language and earn $250. When you're done, send a patch with the translated files and some screenshots of the application. Detailed instructions are on the L10n Sniper program pages.