Verifying an audio stream is actually played and heard - java

I'm broadcasting an audio stream (predefined playlists, not live) through an HTTP server. I'm wondering whether there are any cheap solutions (in terms of computational complexity) with which I can confirm my stream is actually played and heard, meaning that a reasonable amount of audio output resembling the stream is actually emitted from the device.
For a simple scenario: assume there is an Android device and app responsible for connecting to the server and playing the stream. The same Android app will be used to capture microphone input and compare it with the stream. The testing environment is an outdoor scene with low-to-moderate background noise.
I did some studying of FFT and audio analysis in college, but I'd rather not reinvent the wheel, so I'm seeking reliable and cheap libraries for this (mainly Android, but Java libraries are welcome too).
As a side note, I started out with getting volume levels from the device, but this turned out to be insufficient, since a user can just plug a turned-off speaker into the device.
You might think the amount of work required to accomplish such a feature makes it infeasible. But keep in mind that this feature sits between the "content generator" and the "broadcaster", NOT between the "broadcaster" and the "listener". So all I'm trying to do is make sure the broadcaster is holding up their end of the contract.
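For illustration, one cheap comparison primitive for this job is a normalized cross-correlation between a chunk of the reference stream and a chunk of microphone capture. The sketch below assumes 16-bit mono PCM at the same sample rate and rough time alignment; a production check would use spectral fingerprints (e.g. FFT band energies) to tolerate noise and delay. All names here are illustrative.

public final class StreamCheck {
    // Returns a similarity score in [-1, 1]; values near 1 mean the mic
    // capture closely resembles the reference chunk. Assumes the chunks
    // are roughly time-aligned 16-bit mono PCM at the same sample rate.
    public static double similarity(short[] reference, short[] captured) {
        int n = Math.min(reference.length, captured.length);
        double dot = 0, normRef = 0, normCap = 0;
        for (int i = 0; i < n; i++) {
            dot     += (double) reference[i] * captured[i];
            normRef += (double) reference[i] * reference[i];
            normCap += (double) captured[i] * captured[i];
        }
        if (normRef == 0 || normCap == 0) return 0; // silence on either side
        return dot / Math.sqrt(normRef * normCap);
    }
}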

Related

amplifying/boosting call channel/audio volume

We made an application that makes it possible to video call between two devices (iOS, Android, and web), using Cordova, OpenTok, Node.js, and the cordova-opentok-plugin. During testing we noticed that the sound on an Android device is rather low; it's hard to hear the other person talk.
We tested the sound from our application and compared it to tests with Google Hangouts and a normal telephone call. From these tests we can see that the audio is at maximum volume in our application. The audio stream goes through the call channel for all of these applications, including our own.
We tested the same device with Skype, which also goes over the call channel, and the sound on Skype is a lot louder than our own application, Google Hangouts, or even a normal telephone call.
So it seems Skype has found a way to boost the audio on Android. Does anyone know how we could implement such a boost/amplification on the audio channel?
Thanks in advance.
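One documented way to add gain on Android (API 19+) is the LoudnessEnhancer audio effect. Whether it can touch the voice-call channel is device-dependent, and it is not necessarily what Skype does; treat this as a sketch of one option, not the answer:

import android.media.audiofx.LoudnessEnhancer;

// Boost the loudness of one of your app's audio sessions (API 19+).
// audioSessionId would come from your AudioTrack/MediaPlayer; this is
// not guaranteed to affect the call channel on all devices.
static LoudnessEnhancer boost(int audioSessionId) {
    LoudnessEnhancer enhancer = new LoudnessEnhancer(audioSessionId);
    enhancer.setTargetGain(1000); // target gain in millibels (+10 dB)
    enhancer.setEnabled(true);
    return enhancer;
}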

Using another sound card with the JavaSound Synthesizer

Could I use my M-AUDIO Fast Track Ultra as the audio interface in a Java MIDI plugin I'm writing? The virtual ASIO4ALL driver could be a nice way to go as well.
I will be coding a live-performance MIDI modifier to enhance a keyboardist's level of control and the complexity of MIDI-controlled effects. I've already begun a small proof-of-concept app to get me going, but even when testing Oracle's own demo of the JavaSound API I notice some delay between the mouse input commands and the resulting sound. Whether this is caused by the way that app is constructed, I still have to find out, but I want to be sure I can deliver almost zero latency (20 ms, as I get in my live-performance host software). Have you found out anything relevant?
The only portable way to get a synthesizer is MidiSystem.getSynthesizer(), which gives you nothing but the default synthesizer, which outputs to some default audio device.
You would have to change the default audio output device of the JVM or of the OS.
The synthesizer has a fixed latency, which you can obtain with Synthesizer.getLatency().
The audio device will add its own latency.
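A small self-contained sketch of the above (standard javax.sound.midi API):

import javax.sound.midi.MidiSystem;
import javax.sound.midi.MidiUnavailableException;
import javax.sound.midi.Synthesizer;

public class SynthLatency {
    public static void main(String[] args) throws MidiUnavailableException {
        // The only portable entry point; it renders to the default audio device.
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        // getLatency() reports the synthesizer's fixed latency in microseconds;
        // the audio device then adds its own latency on top.
        System.out.println("Synth latency: " + synth.getLatency() / 1000 + " ms");
        synth.close();
    }
}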

Android (Java) Real-time Audio Input (microphone AND USB) and Output

I would like to build an Android app that takes audio data from two microphones, mixes the sound with some audio from memory, and plays the result through headphones. This needs to be done in real time. Could you please refer me to some tutorials or references for real-time audio input, mixing, and output with Java in Eclipse?
So far, I am able to record sound, save it, and play it back, but I cannot find any tutorials for interfacing with the sound hardware in real time this way.
Note: One microphone is connected to the 3.5 mm headphone jack of the Android through a splitter and the other is connected through a USB port.
Thanks!
There are two issues that I see here:
1) Audio input via USB.
Audio input can be done using Android 3.2+ and libusb, but it is not easy (you will need to get the USB descriptors from libusb, parse them yourself, send the right control transfers to the device, and so on; a hedged sketch of one such control transfer follows this list). You can get input latency via USB on the order of 5-10 ms with some phones.
2) Audio output in real time.
This is a perennial problem on Android, and you are pretty much limited to the Galaxy Nexus at the moment if you want to approach real time (using native audio output). However, if you master the USB side, you may be able to output with less latency as well.
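As a flavor of point 1, here is a hedged sketch using Android's USB host API (API 12+): the USB Audio Class 1.0 control transfer that sets an isochronous endpoint's sample rate. It assumes the device actually implements UAC 1.0 and that endpointAddress was already found by parsing the descriptors, which is the hard part mentioned above:

import android.hardware.usb.UsbDeviceConnection;

// Set a UAC 1.0 device's sampling frequency with a class control transfer.
// Assumes endpointAddress came from parsing the device's USB descriptors.
static boolean setSampleRate(UsbDeviceConnection conn,
                             int endpointAddress, int rateHz) {
    byte[] rate = {                    // 3-byte little-endian sample rate
            (byte) (rateHz & 0xFF),
            (byte) ((rateHz >> 8) & 0xFF),
            (byte) ((rateHz >> 16) & 0xFF)
    };
    int result = conn.controlTransfer(
            0x22,                      // bmRequestType: class request, to endpoint
            0x01,                      // bRequest: SET_CUR
            0x0100,                    // wValue: SAMPLING_FREQ_CONTROL << 8
            endpointAddress,           // wIndex: the target endpoint
            rate, rate.length, 1000);  // 1000 ms timeout
    return result >= 0;
}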
I suppose that if you go to the trouble of getting USB to work, you can use a USB audio device with stereo input. If you connected one mono mic to each of the input channels and then output via USB, you would be very close to your stated goal. You might like to try the "USB Audio Tester" or "usbEffects" apps to see what is currently possible.
In terms of coding the mixing and output, you will probably want one thread reading each separate input source and writing to a queue in small chunks (100-1000 samples at a time), then a separate thread reading off the queue(s), mixing, and placing the output onto another queue, and finally a thread (possibly in native code, if not doing output via USB) reading the mixed queue and doing the output.
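A minimal sketch of that queue-based structure, assuming 16-bit mono PCM handled as short[] chunks (Source and Sink are illustrative stand-ins for the real input/output code, and the mixing and output threads are merged for brevity):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class MixPipeline {
    static final int CHUNK = 512; // samples per chunk

    interface Source { short[] read(int samples); }   // wraps a mic or USB input
    interface Sink   { void write(short[] samples); } // wraps the audio output

    final BlockingQueue<short[]> micQ = new ArrayBlockingQueue<>(8);
    final BlockingQueue<short[]> usbQ = new ArrayBlockingQueue<>(8);

    void start(Source mic, Source usb, Sink out) {
        new Thread(() -> pump(mic, micQ)).start(); // one reader per source
        new Thread(() -> pump(usb, usbQ)).start();
        new Thread(() -> {                         // mixer/output thread
            try {
                while (true) {
                    short[] a = micQ.take(), b = usbQ.take();
                    short[] mixed = new short[CHUNK];
                    for (int i = 0; i < CHUNK; i++) {
                        int s = a[i] + b[i];       // sum with clipping
                        mixed[i] = (short) Math.max(Short.MIN_VALUE,
                                   Math.min(Short.MAX_VALUE, s));
                    }
                    out.write(mixed);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }

    private void pump(Source src, BlockingQueue<short[]> q) {
        try {
            while (true) q.put(src.read(CHUNK));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}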
The following link gives a flavor of dealing with the audio itself: http://code.google.com/p/loopmixer/

Android Phone call stream

Is it possible in Android to manipulate phone call audio live before it is sent (e.g. by creating a buffer where the voice is recorded and then sent afterwards), or is it inaccessible and must always be "live"?
Sorry, no. There is no supported way for an Android application to interact with the audio stream from a phone call.
Unlike pretty much all other audio, voice call audio is typically processed entirely by the modem subsystem. So the modem processor and its associated DSP(s), if it has any, have access to the voice call audio, but the application processor(s) don't, or at least can't modify it in any way.
Some platforms allow the application processor to read the uplink/downlink audio, either in compressed form (AMR) or after decoding has been performed (PCM). But no platform used for Android devices that I know of has (complete) support for injecting data into the uplink. If there are any that do, it would be a completely non-standard feature.
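For reference, the read-side sources Android exposes are MediaRecorder.AudioSource.VOICE_UPLINK, VOICE_DOWNLINK, and VOICE_CALL; on most devices they return silence or fail, and there is no counterpart for injection. A hedged sketch of the attempt:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Attempt to read downlink call audio. Requires RECORD_AUDIO permission,
// and on most devices this source is unsupported or silenced.
static AudioRecord openDownlink() {
    int rate = 8000; // narrowband voice; illustrative
    int minBuf = AudioRecord.getMinBufferSize(rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    return new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK,
            rate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBuf);
}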
Try doing the coding in C with JNI. I would also recommend pthreads, as Android doesn't have control over such threads.

Multichannel USB recording with Java Sound API?

I'm trying to record/process some audio from three USB microphones with Java Sound on Snow Leopard (but I can switch to Windows if that fixes things). The problem is, when I try to use the mixer that corresponds to the USB mic, Java Sound tells me that the line isn't supported. Specifically, it says this...
Available mixers:
Java Sound Audio Engine
USBMIC Serial# 041270067
Built-in Input
Built-in Microphone
Soundflower (2ch)
Soundflower (16ch)
Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException:
Line unsupported: interface TargetDataLine supporting format
PCM_SIGNED 96000.0 Hz, 8 bit, stereo, 2 bytes/frame,
...when I ask it to select the USBMIC mixer:
Mixer mixer = AudioSystem.getMixer(mixerInfo[1]);
I have tried matching the audio format to the exact specifications of the microphones (16-bit, 44100 Hz, stereo), and it didn't make any difference.
The problem is cropping up here:
final TargetDataLine line = (TargetDataLine) mixer.getLine(info);
It would seem that the mixer and the TargetDataLine don't like each other. Is there some way to get these two to 'match' and get along?
The microphones I'm using are admittedly a bit strange. They were made for a karaoke video game called SingStar. The mics themselves have standard mono line-in connectors that plug into a little hub (two to a hub) that converts them into a single male USB connector. Strangeness aside, though, they seem to work perfectly fine with Audacity as separate channels, so multichannel recording with them is clearly possible, just maybe not in Java.
I've also considered using a program like Soundflower that shares audio between different programs. However, I'm not sure this will work, as I can't see how to make the USB mics inputs to Soundflower and then pipe them into Java. A quick experiment showed me that I could record audio in Audacity from the mics, pipe it out through Soundflower, and then process it in my Java program. Still, what I would like is for it all to happen in real time in Java.
Anybody familiar with this kind of problem?
I think that a simple way to do this would be using Soundflower and Soundflowerbed.
I can't see how to make the USB mics inputs to Soundflower and then pipe them into Java.
It sounds like you have Soundflower installed already. Soundflowerbed is found in the same disk image as Soundflower and is a menu bar application. It lets you route sound between applications that don't have built-in controls for selecting sound devices. Install it from the disk image and click it to run.
All of the following uses my Echo Audiofire 4, but in principle it should work on any audio device.
Using Soundflowerbed
Open Soundflowerbed and tick the audio device you want to use under Soundflower (16ch). As I'm a new user I can't post images, but they are linked below. If I get the bounty then I will edit the post to include the images inline.
From here you would use Soundflower (16ch) as your audio input device in Java sound.
Creating an aggregate audio device
An alternative way to solve this if that didn't work is to create an aggregate device. Open Applications > Utilities > Audio Midi Setup and click the plus sign to create a new aggregate device.
Tick the device that you want to aggregate; you only want your USBMIC. (As I'm a new SO user I can only post two images per answer, so the next two are linked here.)
The key part which may be giving you trouble is the clock on the device. If you select the Mac as the clock source then that may be more stable.
If this still doesn't work then you could try adding the Mac built-in audio to the aggregate device and making it the master clock by right clicking on the device you want to be the master.
Other options
Finally, I haven't used this before, but PulseAudio (Google it; I can't insert more links in this post) might be a possible solution for mixing your audio streams together. It looks quite heavyweight, though.
According to my research, especially threads like this one, the microphone you are using is most likely causing the problem. The thread states that the microphone is a problem even when switching games, so I am guessing that it will be a problem when switching platforms, too.
My suggestion, if you have not tried this already, is to use a different microphone! Most microphones I have messed around with have special chip controllers that convert the data into a format compatible with the game system. Given that you are using this on a computer operating system, you are probably getting some very odd effects that you wouldn't get on a game system like the PlayStation.
Hopefully this helps! Happy coding!
The AudioFormat doesn't match the TargetDataLine's supported format. I don't know if it was a typo or not, but the exception thrown says the TargetDataLine supports 8-bit audio, and right below that you said you're using a 16-bit AudioFormat. It also supports at most 2 bytes per frame. How quickly, and in what size chunks, are you trying to read the data? Sorry if that doesn't help, but I thought I'd point it out in case it was overlooked.
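As a hedged illustration of that point: probing every mixer with isLineSupported() before calling getLine() avoids the IllegalArgumentException entirely, and shows which formats each device will accept (the format below mirrors the question's 16-bit/44100 Hz attempt):

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.TargetDataLine;

public class MixerProbe {
    public static void main(String[] args) {
        // 44100 Hz, 16-bit, stereo, signed, little-endian
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
            // isLineSupported() is a safe check; getLine() throws
            // IllegalArgumentException for an unsupported format.
            Mixer mixer = AudioSystem.getMixer(mi);
            System.out.println(mi.getName() + " -> " + mixer.isLineSupported(info));
        }
    }
}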
