Audio input stream in Processing 3 - java

My overarching goal: I'm looking for a way to grab the current system sound and run it through a visualizer in Processing 3. So far I have found a way to grab the mic input (note that the snippet below is actually p5.js, not Processing/Java):
let sound, fft;

function setup() {
  sound = new p5.AudioIn();
  sound.start();
  fft = new p5.FFT();
  fft.setInput(sound);
}
But I have yet to find a way to do this with system sound (i.e. a YouTube video, Spotify, an MP3 file playing).
I'm also not sure whether this is even possible with some programs such as Spotify, since they have built-in security.
All in all, I think the solution is probably similar to how a screen-recording program captures system audio.
Note: the captured audio is being piped into the Minim library for visual processing.

Capturing system output in Processing is a bit tricky. In fact, even being able to record system output at all is a challenge of its own.
I managed to accomplish this on my MacBook Pro by using Soundflower (Mac) as a workaround. Soundflower is a virtual audio device that routes your sound output back into your sound input. Once it is installed, open Audio MIDI Setup and select Soundflower as your sound input.
When you run your Processing sketch, p5.AudioIn() will read from the Soundflower input channel and give you the frequency bands of everything coming out of your computer.
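In plain Java Sound (the API that Minim sits on top of), picking up the routed loopback device looks roughly like this. This is a sketch, not a drop-in solution: the device name "Soundflower" and the 44.1 kHz/16-bit stereo format are assumptions that depend on your setup.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.TargetDataLine;

public class FindLoopbackInput {
    // Returns the first Mixer.Info whose name contains the given keyword
    // (e.g. "Soundflower"), or null if no such device is present.
    public static Mixer.Info findMixer(String keyword) {
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            if (info.getName().contains(keyword)) {
                return info;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Mixer.Info loopback = findMixer("Soundflower"); // assumed device name
        if (loopback == null) {
            System.out.println("No Soundflower device found; install it and retry.");
            return;
        }
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info lineInfo = new DataLine.Info(TargetDataLine.class, format);
        Mixer mixer = AudioSystem.getMixer(loopback);
        if (mixer.isLineSupported(lineInfo)) {
            TargetDataLine line = (TargetDataLine) mixer.getLine(lineInfo);
            line.open(format);
            line.start();
            byte[] buffer = new byte[4096];
            // Each read fills the buffer with PCM frames of whatever the
            // system is currently playing, ready for FFT analysis.
            int read = line.read(buffer, 0, buffer.length);
            System.out.println("Captured " + read + " bytes");
            line.close();
        }
    }
}
```

The same `TargetDataLine` can then be handed to an FFT routine (Minim wraps this pattern for you when you select the right input device).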
Best of luck!

Related

Read current audio output - Android

I want to know if it is possible to access the audio that is currently playing on an Android device.
For example: if Spotify is running in the background, I want to access its audio to control some LEDs that are connected to my Raspberry Pi.
I want to create a sort of equalizer that changes colors depending on the sound currently playing. I would appreciate it if someone could tell me whether accessing the main audio output is possible.
Unless you are using a rooted phone, it is not possible to capture the output of an arbitrary app on Android.
You can, however, create an app that plays media files and captures its own output for visualization using the Visualizer audio effect. Take a look at the sample here (look for "Visualizer"): https://android.googlesource.com/platform/development/+/master/samples/ApiDemos/src/com/example/android/apis/media/AudioFxDemo.java
If you are using a Raspberry Pi anyway, you can simply play all your music through it and capture and analyze it there. You will need an external USB sound card, though. See for example this post: http://www.g7smy.co.uk/2013/08/recording-sound-on-the-raspberry-pi/
That post only records and plays the audio back, but you can insert an analysis phase in between.
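On the Pi, the analysis phase can start as simply as an RMS level per captured buffer, mapped to LED brightness. A minimal sketch in Java, assuming 16-bit little-endian signed PCM (the format `arecord -f S16_LE` produces):

```java
public class PcmLevel {
    // Computes the RMS level (0.0 to 1.0) of a buffer of 16-bit
    // little-endian signed PCM samples.
    public static double rms(byte[] pcm, int length) {
        long sumSquares = 0;
        int samples = length / 2;
        for (int i = 0; i < samples; i++) {
            int lo = pcm[2 * i] & 0xff;       // low byte, unsigned
            int hi = pcm[2 * i + 1];          // high byte, sign-extends
            int sample = (hi << 8) | lo;
            sumSquares += (long) sample * sample;
        }
        if (samples == 0) return 0.0;
        double mean = sumSquares / (double) samples;
        return Math.sqrt(mean) / 32768.0;     // normalize to 0..1
    }

    public static void main(String[] args) {
        byte[] halfScale = {0, 64};           // one sample: 16384 = half of full scale
        System.out.println(PcmLevel.rms(halfScale, 2)); // prints 0.5
    }
}
```

In a real setup you would read each buffer from the capture device, compute `rms`, and scale it to your LED PWM range; a per-band equalizer would run an FFT first and apply the same idea per band.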

How can I monitor/sample output audio in Java or C?

Lately I have been experimenting with real-time visualizations of the audio I play on my computer (from any arbitrary program, such as Spotify), but I've been using SoundFlower to pipe the output audio into a fake line-in.
What I'm wondering is whether there is a way, native to C/C++ or Java, to capture whatever audio is sent to my computer's line out (I'm using a Mac), in the same way I can capture a line in (i.e. a sample buffer that is continually filled with PCM data).
I have no desire to emulate SoundFlower's other features; I only want to read the line-out data.
I suggest having a look at the source code of WavTap, a fork of SoundFlower that focuses only on capturing the system's default audio output.
Both SoundFlower and WavTap work by installing a kernel extension that adds an extra audio device to which audio can be routed; they then capture audio from this device. WavTap makes it the default device on startup, so the overall output of the system is captured automatically without the user having to set up the routing explicitly.
I believe the WavTap code is MIT-licensed, and the kernel-extension code is well abstracted, so you should be able to adapt it for your own project.
To understand more about how this works, the OS X and iOS Kernel Programming book explains some of the techniques in Chapter 12 and has downloadable code for an example audio device and engine.

Search java library for audio processing

I want to detect frequencies in music with Java.
I use TarsosDSP, which works OK, but I'm afraid it has no active community (I think).
Are there other options?
On Stack Overflow I've seen "Sound processing: mixing two audio files, phase shifting and peak controller", "Capturing audio and processing rhythm in realtime", "audio search library"... but none of them has an accepted answer, so I'm asking too.
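For a rough idea of what frequency detection involves with no library at all, here is a naive DFT peak-pick in plain Java. It is O(N²) and far too slow for real-time work (that is what TarsosDSP's FFT and pitch detectors are for), but it shows the principle; the 8 kHz sample rate and 2048-sample block size are arbitrary choices for the demo:

```java
public class DominantFrequency {
    // Naive DFT peak pick: returns the frequency (Hz) of the strongest
    // bin in a block of mono samples. Resolution is sampleRate / n.
    public static double detect(double[] samples, double sampleRate) {
        int n = samples.length;
        int bestBin = 1;
        double bestMag = 0;
        for (int k = 1; k <= n / 2; k++) {       // skip DC, stop at Nyquist
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            double mag = re * re + im * im;      // squared magnitude is enough
            if (mag > bestMag) { bestMag = mag; bestBin = k; }
        }
        return bestBin * sampleRate / n;
    }

    public static void main(String[] args) {
        double sr = 8000;
        int n = 2048;
        double[] tone = new double[n];
        for (int t = 0; t < n; t++) {
            tone[t] = Math.sin(2 * Math.PI * 440 * t / sr); // synthetic A4
        }
        System.out.println(detect(tone, sr)); // ~441 Hz (bin resolution ~3.9 Hz)
    }
}
```

Real pitch detectors (YIN, MPM, both available in TarsosDSP) are more robust than a raw spectral peak, which is easily fooled by strong harmonics.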

Android (Java) Real-time Audio Input (microphone AND USB) and Output

I would like to build an Android app that takes audio data from two microphones, mixes in some sound from memory, and plays the result through headphones. This needs to be done in real time. Could you please refer me to some tutorials or references for real-time audio input, mixing, and output in Java with Eclipse?
So far, I am able to record sound, save it, and then play it, but I cannot find any tutorials for real-time interfacing with sound-hardware this way.
Note: One microphone is connected to the 3.5 mm headphone jack of the Android through a splitter and the other is connected through a USB port.
Thanks!
There are two issues that I see here:
1) Audio input via USB.
Audio input can be done using Android 3.2+ and libusb, but it is not easy (you will need to get the USB descriptors from libusb, parse them yourself, send the right control transfers to the device, etc.). You can get input latency via USB on the order of 5-10 ms with some phones.
2) Audio out in real-time.
This is a perennial problem on Android, and at the moment you are pretty much limited to the Galaxy Nexus if you want to approach real time (using native audio output). However, if you master the USB side, you may be able to output with less latency as well.
I suppose that if you go to the trouble of getting USB to work, you could use a USB audio device with stereo input. If you connected one mono mic to each input channel and then output via USB, you would be very close to your stated goal. You might like to try the "USB Audio Tester" or "usbEffects" apps to see what is currently possible.
In terms of coding the mixing and output, you will probably want one thread reading each input source and writing to a queue in small chunks (100-1000 samples at a time), a separate thread reading off the queue(s), mixing, and placing the output onto another queue, and finally a thread (possibly in native code, if not outputting via USB) reading the mixed queue and doing the output.
The following link gives a flavor of dealing with the audio itself: http://code.google.com/p/loopmixer/
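The thread-and-queue architecture described above can be sketched in plain Java with `ArrayBlockingQueue`. The chunk size and the synthetic producers below are placeholders for the real hardware reader threads:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChunkMixer {
    static final int CHUNK = 256; // samples per chunk (in the 100-1000 range suggested)

    // Mixes two equal-length chunks of 16-bit samples by summing
    // with clipping to the signed 16-bit range.
    public static short[] mix(short[] a, short[] b) {
        short[] out = new short[a.length];
        for (int i = 0; i < a.length; i++) {
            int sum = a[i] + b[i];
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<short[]> micA = new ArrayBlockingQueue<>(8);
        BlockingQueue<short[]> micB = new ArrayBlockingQueue<>(8);
        BlockingQueue<short[]> mixed = new ArrayBlockingQueue<>(8);

        // In a real app these would be reader threads pulling from the
        // audio hardware; here each source gets one silent chunk.
        micA.put(new short[CHUNK]);
        micB.put(new short[CHUNK]);

        // Mixer thread: take one chunk from each source, mix, enqueue.
        Thread mixer = new Thread(() -> {
            try {
                mixed.put(mix(micA.take(), micB.take()));
            } catch (InterruptedException ignored) { }
        });
        mixer.start();
        mixer.join();
        System.out.println("Mixed chunk of " + mixed.take().length + " samples");
    }
}
```

A final output thread would drain `mixed` and write to the playback device; keeping the queues small bounds the added latency.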

Multichannel USB recording with Java Sound API?

I'm trying to record/process some audio from three USB microphones with Java Sound on Snow Leopard (but I can switch to Windows if that fixes things). The problem is, when I try to use the mixer that corresponds to the USB mic, Java Sound tells me that the line isn't supported. Specifically, it says this...
Available mixers:
Java Sound Audio Engine
USBMIC Serial# 041270067
Built-in Input Built-in Microphone
Soundflower (2ch)
Soundflower (16ch)
Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: Line unsupported: interface TargetDataLine supporting format PCM_SIGNED 96000.0 Hz, 8 bit, stereo, 2 bytes/frame
...when I ask it to select the USBMIC mixer:
Mixer mixer = AudioSystem.getMixer(mixerInfo[1]);
I have tried matching the audio format to the exact specifications of the microphones (16-bit, 44100Hz, stereo) and it didn't make any difference.
The problem is cropping up here:
final TargetDataLine line = (TargetDataLine) mixer.getLine(info);
It would seem that the mixer and the TargetDataLine don't like each other. Is there some way to get these two to 'match' and get along?
The microphones that I'm using are admittedly a bit strange. They were made for a karaoke video game called SingStar. The mics themselves have standard mono line-in connectors that plug into a little hub (two to a hub), which converts them into a single male USB connector. Strangeness aside, though, they work perfectly fine with Audacity as separate channels, so multichannel recording with them is clearly possible, just maybe not in Java.
I've also considered using a program like Soundflower that shares audio between different programs. However, I'm not sure this will work, as I can't see how to make the USB mics inputs to Soundflower and then pipe them into a Java program. A quick experiment showed me that I could record audio from the mics in Audacity, pipe it out through Soundflower, and then process it in my Java program. Still, what I would like is for it all to happen in real time in Java.
Anybody familiar with this kind of problem?
I think a simple way to do this would be to use Soundflower and Soundflowerbed.
"I can't see how to make the USB mics inputs to Soundflower and then pipe them into Java."
It sounds like you already have Soundflower installed. Soundflowerbed is found in the same disk image as Soundflower and is a menu-bar application. It lets you route sound between applications that don't have built-in controls for selecting sound devices. Install it from the disk image and click it to run.
All of the following will be using my Echo Audiofire 4 but in principle should work on any audio device.
Using Soundflowerbed
Open Soundflowerbed and tick the audio device you want to use under Soundflower (16ch).
From here you would use Soundflower (16ch) as your audio input device in Java sound.
Creating an aggregate audio device
An alternative way to solve this if that didn't work is to create an aggregate device. Open Applications > Utilities > Audio Midi Setup and click the plus sign to create a new aggregate device.
Tick the device that you want to aggregate. You only want your USBMIC.
The key part which may be giving you trouble is the clock on the device. If you select the Mac as the clock source then that may be more stable.
If this still doesn't work then you could try adding the Mac built-in audio to the aggregate device and making it the master clock by right clicking on the device you want to be the master.
Other options
Finally, I haven't used it myself, but PulseAudio might be a possible solution for mixing your audio streams together. It looks quite heavyweight, though.
According to my research, especially threads like this one, the microphone you are using is most likely causing the problem. The thread states that the microphone is a problem even when switching games, so I would guess it is a problem when switching platforms, too.
My suggestion, if you have not tried this already, is to use a different microphone. Most microphones I have worked with have special controller chips that convert the data into a format compatible with the game system. Since you are using it on a computer operating system, you are probably getting odd effects that you wouldn't get on a game system such as a PlayStation.
Hopefully this helps! Happy coding!
The AudioFormat doesn't match the TargetDataLine's supported format. I don't know whether that was a typo, but the exception says the TargetDataLine supports 8-bit audio, while right below that you said you're using a 16-bit AudioFormat. It also supports at most 2 bytes per frame. How quickly, and in what size chunks, are you trying to read the data? Sorry if that doesn't help, but I thought I'd point it out in case it was overlooked.
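One way to avoid the `IllegalArgumentException` entirely is to ask each mixer whether it supports the desired format before requesting the line. A sketch using only `javax.sound.sampled`, with the 44.1 kHz/16-bit stereo format mentioned in the question:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.TargetDataLine;

public class ProbeFormats {
    // Returns whether the given mixer can supply a TargetDataLine in the
    // given format. Check this before calling getLine() so you never
    // trigger "Line unsupported".
    public static boolean supports(Mixer.Info mixerInfo, AudioFormat fmt) {
        DataLine.Info lineInfo = new DataLine.Info(TargetDataLine.class, fmt);
        return AudioSystem.getMixer(mixerInfo).isLineSupported(lineInfo);
    }

    public static void main(String[] args) {
        AudioFormat wanted = new AudioFormat(44100f, 16, 2, true, false);
        for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
            System.out.println(mi.getName() + " -> " + supports(mi, wanted));
        }
    }
}
```

You can also call `Mixer.getTargetLineInfo()` and inspect the `AudioFormat[]` each `DataLine.Info` reports to see exactly which sample rates and bit depths a device accepts, rather than guessing.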
