I have a simple program running on a Raspberry Pi. When it's hooked up to a monitor with speakers, I can get the voice to speak through the HDMI speakers. However, I want the voice to output through Bluetooth speakers. The connected Bluetooth speakers play fine when I play some audio files, but when I run the FreeTTS program, the sound either outputs through HDMI audio or, when running headless and connected only to the Bluetooth speakers, there is no output at all. I suspect it's trying to play through some default audio device, since it plays through the HDMI speakers even with Bluetooth selected in the audio menu...
Here's the basic code I started with.
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;

// Look up the desired voice (e.g. "kevin16"), allocate it, and speak
VoiceManager voiceManager = VoiceManager.getInstance();
Voice voice = voiceManager.getVoice(voiceName);
voice.allocate();
voice.speak(text);
voice.deallocate();
I've been trying additions, but nothing has worked so far. I was thinking I maybe needed to connect an AudioPlayer of some sort, but I couldn't get the default streaming one working. I need the audio played immediately. Any thoughts?
It turned out it wasn't really a programming issue, just configuration. I needed to set up sound.properties so that Java uses the ALSA sound providers. Once those properties were set, FreeTTS output to the correct audio device as expected (in my case, a Bluetooth speaker).
javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider
javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider
Just put that in your sound.properties file (somewhere in the JDK/JRE folder; locate it with find / -name sound.properties). The entries should already be stubbed out in the existing file; if not, just add them.
Since I was using Bluetooth and needed to do some mixing, I eventually started using PulseAudio as well, which led to other difficulties, but the above still applied to getting Java sound working in that case too.
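One way to verify that Java actually picked up the ALSA devices after editing sound.properties is to list the mixers the Java Sound API reports. A minimal sketch, nothing here is FreeTTS-specific:

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;

public class ListMixers {
    public static void main(String[] args) {
        // Print every mixer the Java Sound API can see;
        // with the ALSA providers active, ALSA devices appear here.
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            System.out.println(info.getName() + " - " + info.getDescription());
        }
    }
}
```

If your Bluetooth/ALSA device doesn't show up in this list, FreeTTS won't be able to use it either.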
Related
I am new to Stack Overflow and I am developing an Android app that can record and retransmit IR codes via the audio input, but I don't know how to do it. I have built the hardware (a headphone jack with an IR emitter and IR receiver connected to it), and I can send IR codes by first packing them into a .wav file and then playing that file using AudioTrack, but I don't know how to make the receiver work. I want the receiver to go into recording mode, record every IR code I give it, and then simply repeat it. I have also tested the receiver via WinLIRC and it works like a charm; now it's time to implement it on Android. Currently I am using an Arduino Nano and the IR Scrutinizer software to record IR and produce a .wav file, but I want to do this on Android. There are some apps on the Play Store, like AnyMote, that record IR, but they use the built-in receiver of the phone, while my hardware uses the audio input/output. I have also checked all the drivers of ZaZa Remote, but none of them showed support for an audio IR receiver; it can only send IR codes using an audio IR blaster, and that works very well. Please help me get my receiver working.
My overarching goal: I'm looking for a way to grab the current system sound and run it through a visualizer in Processing 3. Currently I have found a way to do this by grabbing the mic input:
let sound, fft;

function setup() {
  // Capture the microphone and feed it into an FFT analyzer
  sound = new p5.AudioIn();
  sound.start();
  fft = new p5.FFT();
  fft.setInput(sound);
}
But I have yet to find a way to do this with system sound (i.e. a YouTube video, Spotify, or an MP3 file playing).
I'm also not sure whether this is even possible with some programs like Spotify, since they have built-in security.
All in all, I think the solution to this problem is probably similar to how you would capture system audio in a screen-recording program.
Note: The captured audio is being piped into the Minim library for visual processing.
Capturing system output in Processing is a bit tricky. In fact, even being able to record system output is a demon of its own.
I managed to accomplish this task on my MacBook Pro in Processing by using Soundflower (Mac) as a workaround. This application acts as a virtual audio device that routes your sound output back to a sound input. Once installed, open Audio MIDI Setup and select Soundflower as your sound input.
When you run your Processing script, p5.AudioIn() will take the Soundflower input channel and use it to get the frequency bands of all the sound coming out of your computer.
Best of luck!
I want to know if it is possible to access the audio that is currently playing on the Android device.
Example: if Spotify is running in the background, I want to access the audio to control some LEDs that are connected to my RaspberryPi.
I want to create some sort of equalizer that changes colors depending on the sound that is currently playing. I'd appreciate it if someone could tell me whether accessing the main audio output is possible or not.
Unless you are using a rooted phone, it's not possible to capture the output of an arbitrary app on Android.
You can, however, create an app that plays media files and captures its own output for visualization purposes using the "Visualizer" audio effect. Take a look at the sample here (look for "Visualizer"): https://android.googlesource.com/platform/development/+/master/samples/ApiDemos/src/com/example/android/apis/media/AudioFxDemo.java
If you are using Raspberry Pi anyway, you can just play all your music through it, capture and analyze it there. You will need an external USB sound card though. See for example this post: http://www.g7smy.co.uk/2013/08/recording-sound-on-the-raspberry-pi/
There they just record and play audio back, but you can insert an analysis phase in between.
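If you take the Raspberry Pi route, the analysis phase can be as simple as computing the RMS level of each captured buffer and mapping it to LED brightness or color. A minimal Java sketch of that idea; the class and computeRms helper are hypothetical names, and the capture setup is plain Java Sound:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;

public class LevelMeter {
    // RMS level of 16-bit signed samples, normalized to the 0..1 range
    static double computeRms(short[] samples) {
        double sum = 0;
        for (short s : samples) sum += (double) s * s;
        return Math.sqrt(sum / samples.length) / 32768.0;
    }

    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false); // 16-bit mono PCM, little-endian
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, fmt);
        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No capture device available");
            return;
        }
        try (TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info)) {
            line.open(fmt);
            line.start();
            byte[] buf = new byte[4096];
            short[] samples = new short[buf.length / 2];
            for (int i = 0; i < 100; i++) { // roughly a couple of seconds of audio
                int n = line.read(buf, 0, buf.length);
                for (int j = 0; j < n / 2; j++) { // little-endian bytes -> samples
                    samples[j] = (short) ((buf[2 * j + 1] << 8) | (buf[2 * j] & 0xff));
                }
                System.out.printf("level: %.3f%n", computeRms(samples));
                // map the level to LED brightness/color here
            }
        }
    }
}
```

On a headless Pi with a USB sound card, the capture line would be the card's input; the level-to-LED mapping depends on your LED driver.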
I looked at the sound-from-microphone recording example, and as far as I can see, the output doesn't show info only for the currently available microphone devices. In fact, another (non-built-in) microphone device may become available if, for example, a headset is plugged in :)
EDIT:
In the case of Linux, I have PulseAudio and it shows my notebook's built-in microphone as "Built-in Audio Analog Stereo" (see image).
EDIT
For example, when I run the applet code in my NetBeans IDE 8.0.1 with JDK 1.7 (Linux x64), I am not sure I can see my built-in microphone device in the tree (see image), but I can still record audio with the Sound API.
So my question is: how do I get info about the currently available input devices, such as the brand (say, "Logitech", or "built-in microphone" for the native one)?
See the Media example for a tree of media-related properties.
See the code that generates it for the sources of the data.
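To address the device-info part directly: the Java Sound API exposes a name, vendor, and description for each mixer, and you can filter for mixers that actually offer capture (TargetDataLine) support. A short sketch:

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Line;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.TargetDataLine;

public class InputDevices {
    public static void main(String[] args) {
        Line.Info capture = new Line.Info(TargetDataLine.class);
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(info);
            if (mixer.isLineSupported(capture)) { // only mixers that can record
                System.out.println(info.getName() + " | vendor: " + info.getVendor()
                        + " | " + info.getDescription());
            }
        }
    }
}
```

One caveat: on Linux the vendor string is typically reported as the ALSA project rather than the hardware brand, so the device name is usually the more useful field for telling a headset apart from the built-in microphone.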
I would like to build an Android app that takes audio data from two microphones, mixes the sound with some audio from memory, and plays the result through headphones. This needs to be done in real time. Could you please refer me to some tutorials or references for real-time audio input, mixing, and output in Java (Eclipse)?
So far, I am able to record sound, save it, and then play it, but I cannot find any tutorials for real-time interfacing with sound-hardware this way.
Note: One microphone is connected to the 3.5 mm headphone jack of the Android through a splitter and the other is connected through a USB port.
Thanks!
There are two issues that I see here:
1) Audio input via USB.
Audio input can be done using Android 3.2+ and libusb, but it is not easy (you will need to get the USB descriptors from libusb, parse them yourself, send the right control transfers to the device, etc.). You can get input latency via USB on the order of 5–10 ms with some phones.
2) Audio out in real-time.
This is a perennial problem in Android and you are pretty much limited to the Galaxy Nexus at the moment if you want to approach real-time (using Native Audio output). However, if you master the USB you may be able to output with less latency as well.
I suppose if you go to the trouble of getting USB to work, you can get a USB audio device with stereo input. If you connected one mono mic to each of the input channels and then output via USB, you would be very close to your stated goal. You might like to try the "USB Audio Tester" or "usbEffects" apps to see what is currently possible.
In terms of coding the mixing and output etc, you will probably want one thread reading each separate input source and writing to a queue in small chunks (100-1000 samples at a time). Then have a separate thread reading off the queue(s) and mixing, placing the output onto another queue and finally a thread (possibly in native code if not doing output via USB) to read the mixed queue and do output.
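The mixing step itself can be a simple per-sample sum with clipping. A minimal sketch of that idea; the class and mix helper are hypothetical names, and the chunk sizes are as suggested above:

```java
public class ChunkMixer {
    // Mix two equal-length chunks of 16-bit samples, clipping to the valid range
    static short[] mix(short[] a, short[] b) {
        short[] out = new short[a.length];
        for (int i = 0; i < a.length; i++) {
            int sum = a[i] + b[i]; // sum in int to avoid short overflow
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }
}
```

In the threaded design described above, the mixing thread would call something like this on each pair of chunks it pops from the input queues before pushing the result onto the output queue; for better quality you might scale the inputs instead of hard-clipping.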
The following link gives a flavor of dealing with the audio itself: http://code.google.com/p/loopmixer/