Tagged: participation

  • stefano 4:16 pm on March 23, 2013 Permalink | Reply
    Tags: participation

    Streaming Audio from Raspberry Pi – part 2 

    Episode two of my experiment with the Raspberry Pi I received at Pycon 2013. On the way back from Santa Clara I stopped by Fry’s and bought the only USB sound card they had in store, a weird-looking Creative, and a powered USB hub. I also ordered a case from Adafruit Industries (not convinced by the case though).

    Raspberry Pi and USB sound card
    The output of lsusb:

    Bus 001 Device 006: ID 147a:e03d Formosa Industrial Computing, Inc.

    Hooked them all together and installed avahi-daemon on the Pi so I can ssh into it easily from any LAN (ssh raspberrypi.local, although I should change its name to something more unique). I tested arecord locally first. It took me a while to figure out how to use arecord; it’s old stuff that I’m not very used to. You need to specify the hardware device. If you get this sort of error:

    arecord: main:682: audio open error: No such file or directory

    you probably haven’t specified that you want to record from the card that actually has an input device:

    pi@raspberrypi ~ $ arecord -l
    **** List of CAPTURE Hardware Devices ****
    card 1: Audio [2 Channel USB Audio], device 0: USB Audio [USB Audio]
    Subdevices: 1/1
    Subdevice #0: subdevice #0

    I hooked my phone’s audio player to the mic input of the USB card so that there would constantly be audio coming into the Pi, then started recording:

    pi@raspberrypi ~ $ arecord -D hw:1,0 -f s16_le -c 2 > test.wav

    I have specified the hardware device I want to use, the format and the number of channels. Playing back that file worked.

    pi@raspberrypi ~ $ aplay -D hw:1,0 test.wav

    Next step was to capture live audio from the line input of the USB card, transcode it to Ogg Vorbis, and ship the bits to the remote Icecast server I set up last week. I quickly gave up on ezstream and started using ices2, the Icecast client: it seems easier to manage as it takes care of the encoding. This is the input module I used for the Pi:

    <input>
    <module>alsa</module>
    <param name="rate">16000</param>
    <param name="channels">2</param>
    <param name="device">hw:1,0</param>
    <!-- Read metadata (from stdin by default, or -->
    <!-- filename defined below (if the latter, only on SIGUSR1) -->
    <param name="metadata">1</param>
    <param name="metadatafilename">liveaudio</param>
    </input>

    The USB sound card I’m using samples at 16000 Hz. I chose not to resample that, only to downmix stereo to mono to save bandwidth.

    <!-- stereo->mono downmixing, enabled by setting this to 1 -->
    <downmix>1</downmix>
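    Rough raw-PCM arithmetic (before Vorbis compression) shows what the downmix saves going into the encoder: bit rate = sample rate × bits per sample × channels.

    ```shell
    # 16-bit samples at the card's 16000 Hz rate
    echo "stereo: $((16000 * 16 * 2)) bit/s"   # → stereo: 512000 bit/s
    echo "mono:   $((16000 * 16 * 1)) bit/s"   # → mono:   256000 bit/s
    ```

    Vorbis compresses far below these figures, but halving the encoder input is still a straightforward way to trim the stream.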

    And all seems to work: the Pi sends a clear signal up to the streaming server and it’s been doing that for a while. Big success so far. Next step for me will be to write a script that grabs data from the OpenStack Summit schedule for one room and adds that information as metadata for the streaming: this way the listeners will have an idea of who is speaking or what the session is about.
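    A minimal sketch of that metadata update, assuming ices2 keeps reading the liveaudio file configured in <param name="metadatafilename"> above; the session title here is made up, and a real script would fetch it from the Summit schedule instead:

    ```shell
    #!/bin/sh
    # Write the current session title where ices2 looks for metadata
    METAFILE=liveaudio
    echo "Room B110: Scaling Nova" > "$METAFILE"
    # Ask a running ices2 to re-read the file (SIGUSR1, per the config
    # comment above); skipped quietly when ices2 is not running
    PID=$(pidof ices2 2>/dev/null)
    [ -n "$PID" ] && kill -USR1 "$PID"
    cat "$METAFILE"
    ```

    Run from cron every few minutes, something like this would keep listeners' players showing the current session.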

    Update: the stream had high latency, around 20 seconds, so I decided to play a little bit with the sampling rates. The latency went down to around 1 second using <param name="rate">48000</param> in the input module and <samplerate>48000</samplerate> in the encode module, with no resampling. Unfortunately the USB card dies every now and then when not connected through a powered USB hub. Too bad, because the USB hub looks ugly.
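    For reference, those two changes in context (fragments only; everything else stays as in the modules above):

    ```
    <input>
    <module>alsa</module>
    <param name="rate">48000</param>
    <!-- other input params unchanged -->
    </input>

    <encode>
    <samplerate>48000</samplerate>
    <!-- other encode params unchanged -->
    </encode>
    ```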

     
    • Ole N 11:58 pm on May 14, 2013 Permalink | Reply

      Could you please give some more information about that sound card? I can’t find it at Creative.

    • stefano zorzanello 4:31 pm on May 15, 2013 Permalink | Reply

      Hi Stef,
      thank you for your great report about Pi audio streaming!
      I’m completely new to this great world.
      I would like to know whether you think the Raspberry Pi would be the right board for my project.

      I am a musician and sound designer, using Max/MSP, particularly interested in soundscape projects and research, and this project is for an interactive sound installation in a real environment, with the sound produced by cicadas. As I’d prefer not to abandon a computer in a field, I thought of using a microcontroller for this purpose.

      So Raspberry Pi should be involved in steps 1-2-4-5 of the following:

      1) sampling the sound locally produced by male cicadas during summer: a mono signal is fine.
      2) sending a stream of samples through the web in real time: Raspberry Pi should be connected to the web via 3G (no ethernet cable).
      3) another computer placed in a studio should get this stream, do the audio treatments, and re-send it through the web (possibly split into four independent synchronized channel signals);
      4) downloading the 4 processed streams;
      5) distributing the sounds on a local multi speaker system, 4 channels basically.

      If you think all this is feasible with a Raspberry Pi plus a proper internet networking shield, I would buy it as soon as possible. Could you also tell me whether it would be possible with only 2 channels (stereo) playing?

      I thank you very much for your attention, and I look forward to an answer from you.

      Best regards

      stefano zorzanello

      • Stef 4:40 pm on May 15, 2013 Permalink | Reply

        Hi Stefano, your project sounds interesting. I think you can do step 2 with the Pi but you’ll need to find a USB GPRS modem that supports ARM Linux drivers. I have no idea about that though. Also, I found the soundcard I bought not to be very stable: at times the driver seems to crash and the streaming stops. Unfortunately I have very little time to debug this. I’m considering shopping for a board that has native audio input and similar cost.

    • stefano zorzanello 5:10 pm on May 15, 2013 Permalink | Reply

      thank you for your interest!
      yes, I have to look for this USB GPRS modem with ARM Linux driver support… maybe this will be the rock to climb… I’m pretty sure you will find a more stable sound card…
      I was wondering about Arduino plus audio codec shields… but it seems that Arduino doesn’t have enough resources to support both a 3G shield and an audio codec at the same time…
      anyway, I’ll keep on studying the feasibility…

      in the meanwhile, thank you very much.

      ciao

      stefano (so I have to avoid signing myself “Stef” as I usually do!!)

    • Jeroen 8:29 am on July 4, 2013 Permalink | Reply

      Very interesting article! Did you already find a new sound card for your project? And if so, why did you pick that one?

      Sincere greetings,

      Jeroen

      • Stef 10:48 am on July 16, 2013 Permalink | Reply

        I haven’t looked for another sound card, yet. This project has fallen a little behind in the todo list unfortunately. I picked that one because it looked cool and was readily available :)

  • stefano 3:41 pm on March 17, 2013 Permalink | Reply
    Tags: participation

    Simple live audio streaming from OpenStack Summit using RaspberryPi 

    I have always wanted a simple way to stream audio from the OpenStack Design Summits, much like Ubuntu used to do for its Ubuntu Design Summit. Since every participant in Pycon 2013 was kindly given a Raspberry Pi, and having a couple of hours to kill, I started playing with a simple idea: configure a Raspberry Pi to capture audio and stream it to a public icecast server.

    First thing: spin up a virtual machine on a public cloud to serve as the public icecast server. I used my UniCloud account and created a small server: 512 MB RAM, Ubuntu LTS 32-bit. I updated the default security groups to allow traffic on ports 22 (SSH) and 8000 (icecast), and I also needed to assign a public floating IP. Once the machine was up, a simple apt-get install icecast2 took care of starting the audio streaming server. That’s it, the streaming server part is done.
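    The only server-side settings the streaming client really cares about are the listen port and the source password, both in /etc/icecast2/icecast.xml; a sketch of the relevant fragment (the password must match whatever the client is configured to send):

    ```
    <icecast>
      <listen-socket>
        <port>8000</port>
      </listen-socket>
      <authentication>
        <source-password>TheSecret</source-password>
      </authentication>
    </icecast>
    ```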

    Back to the RasPi: in order to test the streaming server, I installed ezstream and copied the config files from /usr/share/doc/ezstream/examples, putting ezstream_vorbis.xml in the pi home dir:

    <ezstream>
    <url>http://INSTANCE_NAME:8000/armin</url>
    <sourcepassword>TheSecret</sourcepassword>
    <format>VORBIS</format>
    <filename>playlist.m3u</filename>
    <!-- looping the stream forever, for testing purposes -->
    <stream_once>0</stream_once>
    <svrinfoname>OpenStack Test Streaming Radio</svrinfoname>
    <svrinfourl>http://radio.openstack.org</svrinfourl>
    <svrinfogenre>OpenStack Test Streaming</svrinfogenre>
    <svrinfodescription></svrinfodescription>
    <svrinfobitrate>320</svrinfobitrate>
    <svrinfochannels>2</svrinfochannels>
    <svrinfosamplerate>44100</svrinfosamplerate>
    <!-- advertising on public YP directory -->
    <svrinfopublic>1</svrinfopublic>
    </ezstream>

    playlist.m3u is a simple text file listing one .ogg file, enough for a test. Start streaming to the icecast server with:

    ezstream -c ezstream_vorbis.xml

    And go play the audio in your favorite icecast player, the URL is something like http://YOUR_INSTANCE_NAME:8000/vorbis.ogg.m3u

    Simple and rudimentary, but I like it because it’s easy. The next step for me is to buy a USB microphone to stream live audio captured in a room. The optimal configuration, though, is to use this system to stream audio easily from the OpenStack Summit rooms. I need a way to connect the USB input to a regular audio mixer: any idea on how to do that?

     
  • stefano 6:08 pm on December 17, 2009 Permalink | Reply
    Tags: agile, code sniper, funambol, incentive, l1on sniper, participation, phone sniper

    3 ways to get cash over the holidays with Funambol 

    The winter holidays provide a good opportunity to do something useful in that downtime between celebrations. Funambol’s community programs provide three ways to have fun and earn some cash, too.

    1. Participate in the Code Sniper program: it rewards the efforts of the open source community to help build the open source mobile cloud. The Funambol project provides the basic framework, including server and clients, that implements an open standard (SyncML, also known as OMA DS and OMA DM). You can propose to develop new clients and connectors or help improve an existing project. Each contribution counts, although developing code counts more: writing work items, writing full bug reports and testing them is worth $10; the reward for fixing issues and developing new features is decided by the community and can reach $500. If you want to start a new project you may want to read How to develop a SyncML client the Agile way. Or you can help an existing project, like Thunderbird: check the list of active requirements. Bounties go up to $500. Find details about Funambol Code Sniper and also on Galoppini’s blog.

    2. Test your new phone with the Phone Sniper program: if you get a new mobile phone as a present, make the most of it and rush to myFUNAMBOL, sign up for an account (if you don’t already have one), and test its sync and push capabilities. Send a report back to Funambol and earn $25. Check the list of phones that need testing on the Phone Sniper program pages.

    3. Translate a Funambol client with the L10n Sniper program: one simple way to get started with Funambol is to download the code and compile it yourself. Once you’ve done that, you’re ready to translate the client into your local language and earn $250. Once you’re done, send the patch with the translated files and some screenshots of the application. Detailed instructions are on the L10n Sniper program pages.

     