I am new to Stack Overflow, and I am developing an Android app that can record and retransmit IR codes via the audio input, but I don't know how to do it. Can someone please help me with it? I have built the hardware (a headphone jack with an IR emitter and an IR receiver connected to it), and I can send IR codes by first packing them into a .wav file and then playing that file with AudioTrack, but I don't know how to make the receiver work. I want the receiver to go into recording mode, record every IR code I give it, and then simply repeat it. I have verified the receiver with WinLIRC and it works like a charm; now it's time to implement it on Android.
Currently I am using an Arduino Nano and the IR Scrutinizer software to record IR and produce a .wav file, but I want to do this on Android. There are apps on the Play Store, such as AnyMote, that record IR, but they use the phone's built-in receiver, whereas my hardware uses the audio input/output. I have also checked all of the drivers in ZaZa Remote, but none of them supports an audio IR receiver; it can only send IR codes through an audio IR blaster, and that part works very well. Please help me get my receiver working.
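For what it's worth, the receive path would presumably mirror the AudioTrack playback: capture raw PCM from the jack with AudioRecord and recover the pulse timings from it. A minimal capture sketch, assuming a 44.1 kHz sample rate to match the outgoing WAV (the mark/space decoding itself is not shown):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// With the receiver wired to the jack, AudioSource.MIC is routed to the
// headset input; 44100 Hz is an assumption matching the outgoing WAV.
int sampleRate = 44100;
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
recorder.startRecording();
short[] buffer = new short[bufferSize / 2];
int read = recorder.read(buffer, 0, buffer.length);
// buffer now holds the raw waveform from the receiver; extracting the
// IR mark/space timings from the pulse edges is the remaining work.
recorder.stop();
recorder.release();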
I have a simple program running on a Raspberry Pi. When it is hooked up to a monitor with speakers, I can get the voice to speak through the HDMI speakers. However, I want the voice to come out of Bluetooth speakers. The connected Bluetooth speakers play when I play some audio files, but when I run the FreeTTS program the sound either comes out through HDMI audio or, when headless and connected only to Bluetooth speakers, there is no output at all. I'm thinking it may be trying to play through some default audio device, since it plays through the HDMI speakers even with Bluetooth selected in the audio menu...
Here's the basic code I started with.
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;

VoiceManager voiceManager = VoiceManager.getInstance();
Voice voice = voiceManager.getVoice(voiceName); // e.g. "kevin16"
voice.allocate();   // load the voice's resources
voice.speak(text);  // synthesize and play through the default audio device
I've been trying additions, but nothing has worked so far. I thought I might need to connect an AudioPlayer of some sort, but I couldn't get the default streaming one working. I need the audio played immediately. Any thoughts?
It turned out it wasn't really a programming issue, just configuration: I needed sound.properties set up so that Java would use ALSA. Once those properties were set, FreeTTS output to the correct audio device as expected (in my case, a Bluetooth speaker).
javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider
javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider
Just put that in your sound.properties file (somewhere in the JDK/JVM folder; find / -name sound.properties will locate it). The entries should already be stubbed out in the existing file; if not, just add them.
Since I was using Bluetooth and needed to do some mixing, I eventually started using PulseAudio as well, which led to other difficulties, but the steps above still applied to getting Java sound working in that setup.
We made an application that allows video calls between two devices (iOS, Android, and web), using Cordova, OpenTok, Node.js, and the cordova-opentok-plugin. During testing we noticed that the sound on an Android device is rather low; it is hard to hear the other person talk.
We tested the sound from our application and compared it with Google Hangouts and a normal telephone call. From these tests we can see that the audio in our application is at maximum volume. The audio stream goes through the voice-call channel for all of these applications, including our own.
We tested the same device with Skype, which also uses the call channel, and the sound on Skype is a lot louder than our own application, Google Hangouts, or even a normal telephone call.
So it seems Skype has found a way to boost the audio on Android. Does anyone know how we could implement that kind of boost/amplification on the audio channel?
Thanks in advance.
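A minimal sketch of one possible approach, Android's LoudnessEnhancer effect (API 19+), which applies a fixed gain to an audio session; whether this is what Skype actually does is purely an assumption:

import android.media.audiofx.LoudnessEnhancer;

// audioSessionId is hypothetical here: it would have to come from the
// AudioTrack / WebRTC stream that OpenTok plays the remote audio on.
LoudnessEnhancer enhancer = new LoudnessEnhancer(audioSessionId);
enhancer.setTargetGain(1000); // gain in millibels: 1000 mB = +10 dB
enhancer.setEnabled(true);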
I want to know if it is possible to access the audio that is currently playing on an Android device.
Example: if Spotify is running in the background, I want to access the audio to control some LEDs that are connected to my Raspberry Pi.
I want to create some sort of equalizer that changes colors depending on the sound that is currently playing. I would appreciate it if someone could tell me whether accessing the main audio output is possible.
Unless you are using a rooted phone, it is not possible to capture the output of an arbitrary app on Android.
You can, however, create an app that plays the media files itself and captures the output for visualization using the Visualizer effect. Take a look at the sample here: https://android.googlesource.com/platform/development/+/master/samples/ApiDemos/src/com/example/android/apis/media/AudioFxDemo.java
(search for "Visualizer").
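A minimal sketch of the Visualizer approach, attached to the output mix (session 0, which requires the RECORD_AUDIO permission); the LED/equalizer mapping is left as a stub:

import android.media.audiofx.Visualizer;

// Session 0 attaches to the global output mix; a specific MediaPlayer or
// AudioTrack session id works as well.
Visualizer visualizer = new Visualizer(0);
visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);
visualizer.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
    @Override
    public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int rate) {
        // waveform is 8-bit unsigned PCM; derive a level for the LEDs here
    }
    @Override
    public void onFftDataCapture(Visualizer v, byte[] fft, int rate) {
        // fft holds magnitude/phase pairs for a frequency-based equalizer
    }
}, Visualizer.getMaxCaptureRate() / 2, true, true);
visualizer.setEnabled(true);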
If you are using a Raspberry Pi anyway, you can simply play all your music through it and capture and analyze it there. You will need an external USB sound card, though; see for example this post: http://www.g7smy.co.uk/2013/08/recording-sound-on-the-raspberry-pi/
There the author just records and plays audio back, but you can insert an analysis phase in between.
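A minimal sketch of that analysis phase in Java sound, assuming the USB card is the default capture device (the RMS-to-LED mapping is left out, and exception handling is omitted for brevity):

import javax.sound.sampled.*;

// Capture from the default input (the USB sound card) and compute a
// rough per-buffer loudness value.
AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
line.open(fmt);
line.start();
byte[] buf = new byte[4096];
while (line.isOpen()) {
    int n = line.read(buf, 0, buf.length);
    long sum = 0;
    for (int i = 0; i + 1 < n; i += 2) {
        // assemble a little-endian 16-bit sample
        int s = (short) ((buf[i + 1] << 8) | (buf[i] & 0xFF));
        sum += (long) s * s;
    }
    double rms = Math.sqrt(sum / (double) (n / 2));
    // map rms to LED brightness/colour here
}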
I would like to build an Android app that takes audio from two microphones, mixes it with sound from memory, and plays the result through headphones, all in real time. Could you please point me to tutorials or references on real-time audio input, mixing, and output in Java (I am developing in Eclipse)?
So far I can record sound, save it, and play it back, but I cannot find any tutorials on interfacing with the sound hardware in real time this way.
Note: one microphone is connected to the Android device's 3.5 mm headphone jack through a splitter, and the other is connected through a USB port.
Thanks!
There are two issues that I see here:
1) Audio input via USB.
Audio input can be done using Android 3.2+ and libusb, but it is not easy (you will need to get the USB descriptors from libusb, parse them yourself, send the right control transfers to the device, and so on). You can get input latency via USB on the order of 5-10 ms with some phones.
2) Audio output in real time.
This is a perennial problem on Android, and at the moment you are pretty much limited to the Galaxy Nexus if you want to approach real time (using native audio output). However, if you master USB you may be able to output with less latency as well.
I suppose that if you go to the trouble of getting USB to work, you can use a USB audio device with stereo input. If you connect one mono mic to each of the input channels and then output via USB, you would be very close to your stated goal. You might like to try the "USB Audio Tester" or "usbEffects" apps to see what is currently possible.
In terms of coding the mixing and output: you will probably want one thread reading each separate input source and writing to a queue in small chunks (100-1000 samples at a time), then a separate thread reading off the queues, mixing, and placing the output onto another queue, and finally a thread (possibly in native code, if not doing output via USB) reading the mixed queue and doing the output. A sketch of the mixer stage follows.
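A minimal sketch of that mixer thread, assuming 16-bit PCM chunks (the queue depth and chunk size are arbitrary assumptions, and the input and output threads are omitted):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One queue per input source, plus one for the mixed output.
BlockingQueue<short[]> micA = new ArrayBlockingQueue<>(16);
BlockingQueue<short[]> micB = new ArrayBlockingQueue<>(16);
BlockingQueue<short[]> mixed = new ArrayBlockingQueue<>(16);

Thread mixer = new Thread(() -> {
    try {
        while (!Thread.interrupted()) {
            short[] a = micA.take();          // block until a chunk from each source
            short[] b = micB.take();
            short[] out = new short[a.length];
            for (int i = 0; i < out.length; i++) {
                int sum = a[i] + b[i];        // simple additive mix
                // clamp to 16-bit range to avoid wrap-around distortion
                out[i] = (short) Math.max(Short.MIN_VALUE,
                        Math.min(Short.MAX_VALUE, sum));
            }
            mixed.put(out);                   // hand off to the output thread
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
mixer.start();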
The link http://code.google.com/p/loopmixer/ gives a flavor of dealing with the audio itself.
I am working on a project where I need to play audio files over a VoIP channel. I am using an open-source phone (SFLphone). I would like to know how to play an MP3 audio file over the VoIP channel.
Application area: playing an audio lecture through a VoIP channel.
Maybe you can try to use the JACK audio server for this purpose. Note, though, that this site is about programming, so this question may be a little off topic; you might try it on another Stack Exchange site, and if it doesn't fit anywhere else, leave it here.
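As a starting point, here is a sketch of the decoding half in Java sound. It assumes an MP3 decoder service provider such as JLayer/MP3SPI is on the classpath, and that a JACK or ALSA loopback device is selected as the default output so the softphone can pick the stream up as its input; "lecture.mp3" is a placeholder, and exception handling is omitted:

import javax.sound.sampled.*;
import java.io.File;

// The JLayer/MP3SPI provider lets AudioSystem open MP3 files directly.
AudioInputStream mp3 = AudioSystem.getAudioInputStream(new File("lecture.mp3"));
AudioFormat base = mp3.getFormat();
AudioFormat pcm = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
        base.getSampleRate(), 16, base.getChannels(),
        base.getChannels() * 2, base.getSampleRate(), false);
AudioInputStream decoded = AudioSystem.getAudioInputStream(pcm, mp3);

// With a JACK or ALSA loopback device as the default mixer, the softphone
// can read what is written here as if it were a microphone.
SourceDataLine line = AudioSystem.getSourceDataLine(pcm);
line.open(pcm);
line.start();
byte[] buf = new byte[4096];
int n;
while ((n = decoded.read(buf, 0, buf.length)) != -1) {
    line.write(buf, 0, n);
}
line.drain();
line.close();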