Is it possible in Android to manipulate phone call audio live before it is sent (e.g., by creating a buffer where the voice is recorded and then sent afterwards), or is that inaccessible, so the audio must always be "live"?
Sorry, no. There is no supported way for an Android application to interact with the audio stream from a phone call.
Unlike pretty much all other audio, voice call audio is typically processed entirely by the modem subsystem. So the modem processor and its associated DSP(s), if it has any, have access to the voice call audio, but the application processor(s) don't, or at least can't modify it in any way.
Some platforms allow the application processor to read the uplink/downlink audio, either in its compressed form (AMR) or after decoding has been performed (PCM). But no platform used for Android devices that I know of has (complete) support for injecting data into the uplink. If there are any that do, it would be a completely non-standard feature.
Try doing the coding in C with JNI. I would also recommend pthreads, as Android doesn't have control over such threads.
I am creating an application for a tablet in Android where I want to use a USB camera as the default camera when I start my application. I want to take pictures and save them in either JPEG or PNG format. I could not find any helpful resources on the web. How can I implement such functionality? Any help would be appreciated.
The solutions are all complex, with advantages and disadvantages. Information on USB and UVC is freely available from the USB-IF web site, although there are many hundreds of pages of detail. The main issue is that the isochronous transfer method is missing from the Android framework, although descriptor retrieval, control, interrupt and bulk transfers are implemented (a minimal sketch of the descriptor side follows the three methods below). Hence most of the USB access can be done in Java or Kotlin, but the streaming side is a real pain.
Using method one below and a Logitech C615 webcam, I obtained 640x480 @ 30 fps on a Lenovo Tab10 using Android 6, and 1920x1080 @ 30 fps on a Lenovo IdeaPad 520 Miix using Android 9 x86. The Tab10 appears to run at USB v1 speeds although it has a USB v2 micro-B socket. The Miix has a type A USB socket and does not need an OTG converter. I know of three methods:
1. Use Libusb. This requires a separate compilation and build in Android Studio into a shared library. Write C++ code to set up the webcam, transfer packets, and tear it down. Write Java code for the user interface, to decompress MJPEG, display the preview, and write JPEGs to storage. Connect the two via JNI. The C++ and Java run in separate threads but are non-blocking. The inefficiency is in memcopying from a native image frame array to a JNI array and then freeing it for garbage collection after each frame. Android apps run in a container, which means a special startup for Libusb, and some functions fail, probably because of SELinux violations.
2. Use JNA to wrap the libc library. JNA is obtained from a repository and does not require a separate build. Libc is in Android Studio. All the code is Java, but ioctl is used to control the USB file streams, and the small stuff is really sweated. Libusb does much of this low-level work in method 1.
3. Use the external camera option with the Camera2 API. None of my devices support an external camera, not even my Samsung Android 13 phone. I suspect this is meant for systems integrators building custom versions of Android with the appropriate settings, as documented by Google, implemented on SBCs for point-of-sale terminals etc.
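To give a feel for the part the framework does support, here is a minimal Java sketch of descriptor retrieval via android.hardware.usb. The class name is mine, permission prompting (UsbManager.requestPermission) is omitted, and real code must handle errors:

    import android.content.Context;
    import android.hardware.usb.UsbDevice;
    import android.hardware.usb.UsbDeviceConnection;
    import android.hardware.usb.UsbInterface;
    import android.hardware.usb.UsbManager;

    public class UvcProbe {
        /** Opens the first attached USB device and returns its raw descriptor
         *  block, which you then parse yourself against the UVC spec.
         *  Assumes USB permission has already been granted. */
        public static byte[] readDescriptors(Context ctx) {
            UsbManager mgr = (UsbManager) ctx.getSystemService(Context.USB_SERVICE);
            for (UsbDevice dev : mgr.getDeviceList().values()) {
                UsbDeviceConnection conn = mgr.openDevice(dev);
                if (conn == null) continue;           // no permission or open failure
                UsbInterface intf = dev.getInterface(0);
                conn.claimInterface(intf, true);
                byte[] raw = conn.getRawDescriptors(); // full descriptor block
                conn.releaseInterface(intf);
                conn.close();
                return raw;
            }
            return null;
        }
    }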
TL;DR: My app is hogging the user's microphone. Can I turn it off automatically whenever another app needs to use the mic?
I have an Android app with some really cool microphone functionality, similar to Amazon Alexa, that stays on all the time in a background service. The problem is that my app hogs the user's microphone, making it unusable for other apps.
However, this is terrible application behavior on my part, and I want to do my best to avoid it. Is it possible to be notified when another application requests to use the microphone, so that I can automatically stop my service?
PS: I am using the Pocketsphinx library for continuous background voice recognition.
This is tricky, as I'm not aware of any API for this. Working like an "Ok Google" type of thing will surely require system-level APIs.
A viable option would be (from https://stackoverflow.com/a/43623308/603270) to run a Job at regular intervals, checking for foreground apps using android.permission.PACKAGE_USAGE_STATS.
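A minimal sketch of that foreground check, assuming the user has granted usage access in Settings (the ten-second window and INTERVAL_DAILY are arbitrary choices here):

    import android.app.usage.UsageStats;
    import android.app.usage.UsageStatsManager;
    import android.content.Context;
    import java.util.List;

    public class ForegroundCheck {
        /** Returns the package most recently in the foreground, or null.
         *  Requires the user to grant usage access (PACKAGE_USAGE_STATS). */
        public static String currentForegroundPackage(Context ctx) {
            UsageStatsManager usm =
                    (UsageStatsManager) ctx.getSystemService(Context.USAGE_STATS_SERVICE);
            long now = System.currentTimeMillis();
            List<UsageStats> stats = usm.queryUsageStats(
                    UsageStatsManager.INTERVAL_DAILY, now - 10000, now);
            if (stats == null) return null;   // usage access not granted
            UsageStats latest = null;
            for (UsageStats s : stats) {
                if (latest == null || s.getLastTimeUsed() > latest.getLastTimeUsed()) {
                    latest = s;
                }
            }
            return latest != null ? latest.getPackageName() : null;
        }
    }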
This might suffice. But you could also cover phone calls (via a broadcast receiver for android.intent.action.PHONE_STATE) or media playback (via a receiver, or maybe even MediaPlayer directly).
If you really want to get this working, an alternative would be to build a list of all installed apps and note which ones request permission to use the mic, then use an accessibility service to monitor the user's screen: if the app the user just opened requires the mic (which you'll know from the list you just built), disable the mic in your app. The background service can then check at intervals of, say, two minutes to see whether the app that required the mic is still open. This is really inefficient, but if you don't care, it might be a good option.
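Building that list of mic-using apps is the easy part; a sketch (the accessibility-service piece is separate and not shown here):

    import android.content.Context;
    import android.content.pm.PackageInfo;
    import android.content.pm.PackageManager;
    import java.util.ArrayList;
    import java.util.List;

    public class MicAppScanner {
        /** Lists installed packages that declare RECORD_AUDIO in their manifest. */
        public static List<String> appsRequestingMic(Context ctx) {
            PackageManager pm = ctx.getPackageManager();
            List<String> result = new ArrayList<>();
            for (PackageInfo pi : pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)) {
                if (pi.requestedPermissions == null) continue;
                for (String perm : pi.requestedPermissions) {
                    if (android.Manifest.permission.RECORD_AUDIO.equals(perm)) {
                        result.add(pi.packageName);
                        break;
                    }
                }
            }
            return result;
        }
    }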
There is no standard way to inform another app that you want access to the microphone (so that it releases the resource and lets you access it). You could, however, send a broadcast to all other apps ("requesting the microphone"), but the other apps would have to implement this feature (and very few or zero developers will do this).
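Purely for illustration, such an opt-in convention could look like the following; the action string is made up here, and it only does anything if other apps register a receiver for it:

    import android.content.Context;
    import android.content.Intent;

    public class MicBroadcast {
        // Hypothetical action string; there is no system-defined equivalent.
        public static final String ACTION_REQUEST_MIC = "com.example.mic.REQUEST_MIC";

        /** Asks any cooperating apps to release the microphone. */
        public static void askOthersToReleaseMic(Context ctx) {
            ctx.sendBroadcast(new Intent(ACTION_REQUEST_MIC));
        }
    }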
I would recommend simply informing the user that the microphone is currently not available, because you can't do anything else.
I'm broadcasting an audio stream (predefined playlists, not live) through an HTTP server. I'm wondering whether there are any cheap solutions (in terms of computational complexity) by which I can confirm my stream is actually being played and heard, i.e. that a reasonable amount of audio output resembling that stream is coming from the device.
For a simple scenario: assume there is an Android device and app which is responsible for connecting to the server and playing the stream. The same Android app will be used to capture microphone input and compare it with the stream. The testing environment is an outdoor scene with low-to-moderate background noise.
I did some studying of FFT and audio analysis in college, but I'd rather not reinvent the wheel, so I'm seeking reliable and cheap libraries for this (mainly Android, but Java libraries are welcome too).
As a side note, I started out with getting volume levels from the device, but this turned out to be insufficient, since the user can just plug a turned-off speaker into the device.
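For reference, the kind of volume check I mean is roughly the following sketch (the parameters are illustrative, and it needs the RECORD_AUDIO permission):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public final class VolumeProbe {
        /** Reads one buffer from the mic and returns its RMS level. */
        public static double rmsLevel() {
            int rate = 44100;
            int minBuf = AudioRecord.getMinBufferSize(rate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
            short[] buf = new short[minBuf / 2];
            rec.startRecording();
            int n = rec.read(buf, 0, buf.length);
            rec.stop();
            rec.release();
            double sumSq = 0;
            for (int i = 0; i < n; i++) sumSq += (double) buf[i] * buf[i];
            return n > 0 ? Math.sqrt(sumSq / n) : 0;
        }
    }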
You might think the amount of work required to accomplish such a feature isn't feasible. But keep in mind that this feature sits between the "content generator" and the "broadcaster", NOT between "broadcaster" and "listener". So all I'm trying to do is make sure the broadcaster is holding up his end of the contract.
I would like to build an Android app that takes audio data from two microphones, mixes it with some sound from memory, and plays the result through headphones. This needs to be done in real time. Could you please refer me to some tutorials or references for real-time audio input, mixing, and output with Java in Eclipse?
So far I am able to record sound, save it, and play it back, but I cannot find any tutorials for interfacing with the sound hardware in real time this way.
Note: One microphone is connected to the 3.5 mm headphone jack of the Android through a splitter and the other is connected through a USB port.
Thanks!
There are two issues that I see here:
1) Audio input via USB.
Audio input can be done using Android 3.2+ and libusb, but it is not easy (you will need to get the USB descriptors from libusb, parse them yourself, send the right control transfers to the device, etc.). You can get input latency via USB on the order of 5-10 ms with some phones.
2) Audio out in real-time.
This is a perennial problem in Android, and at the moment you are pretty much limited to the Galaxy Nexus if you want to approach real time (using native audio output). However, if you master USB you may be able to output with less latency as well.
I suppose that if you go to the trouble of getting USB to work, you can get a USB audio device with stereo in. If you connected one mono mic to each of the input channels and then output via USB, you would be very close to your stated goal. You might like to try the "USB Audio Tester" or "usbEffects" apps to see what is currently possible.
In terms of coding the mixing and output, you will probably want one thread reading each separate input source and writing to a queue in small chunks (100-1000 samples at a time). Then have a separate thread reading off the queue(s), mixing, and placing the output onto another queue, and finally a thread (possibly in native code if not doing output via USB) reading the mixed queue and doing the output.
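A minimal sketch of the queue-and-mix stage in plain Java (queue depths and chunk handling are illustrative, and the reader and playback threads are assumed to exist):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class Mixer implements Runnable {
        // One queue per input source, filled by the reader threads in small chunks.
        private final BlockingQueue<short[]> inA = new ArrayBlockingQueue<>(16);
        private final BlockingQueue<short[]> inB = new ArrayBlockingQueue<>(16);
        // Mixed output, drained by the playback thread (AudioTrack or native code).
        private final BlockingQueue<short[]> out = new ArrayBlockingQueue<>(16);

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    short[] a = inA.take();
                    short[] b = inB.take();
                    int len = Math.min(a.length, b.length);
                    short[] mixed = new short[len];
                    for (int i = 0; i < len; i++) {
                        int s = a[i] + b[i];                          // additive mix
                        if (s > Short.MAX_VALUE) s = Short.MAX_VALUE; // hard clip
                        if (s < Short.MIN_VALUE) s = Short.MIN_VALUE;
                        mixed[i] = (short) s;
                    }
                    out.put(mixed);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }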
The following link, http://code.google.com/p/loopmixer/, gives a flavor of dealing with the audio itself.
Simple question: is there any way to read bytes/data from the headphone jack of an Android phone? I know HTC made an app that lets headphones act as an antenna and receive radio that way, but do I have to use native C++ for that, or what? What I want to do is attach a double-throw switch to the headphone jack and have my phone detect whether the switch is pressed. Any way to do this? I can tell this won't be an easy feat, but I've probably been through far worse.
Edit: even if it were the USB port, I wouldn't mind that either. I just want to attach a switch to my phone and use a program to detect whether it's on or off.
As the headphone jack typically includes a contact for a microphone to support headsets, you could use digital-analog-digital conversion to transmit data. (That is probably how hardware extensions utilizing the headset jack, like Square's reader, work.)
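If you did want to abuse the mic line for your switch, a crude sketch would be to poll the input level and threshold it. Everything here, from the assumption that the switch measurably changes the mic signal to the threshold value, depends on your circuit, and the app needs the RECORD_AUDIO permission:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class HeadsetSwitchPoller implements Runnable {
        private static final int RATE = 8000;          // any common rate should do
        private static final double THRESHOLD = 1000;  // calibrate for your hardware

        private volatile boolean closed;

        public boolean isSwitchClosed() { return closed; }

        @Override
        public void run() {
            int minBuf = AudioRecord.getMinBufferSize(RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
            short[] buf = new short[minBuf / 2];
            rec.startRecording();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    int n = rec.read(buf, 0, buf.length);
                    long sum = 0;
                    for (int i = 0; i < n; i++) sum += Math.abs(buf[i]);
                    // Mean absolute level above threshold => guess "switch closed".
                    closed = n > 0 && (double) sum / n > THRESHOLD;
                }
            } finally {
                rec.stop();
                rec.release();
            }
        }
    }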
But for a switch like you describe I would either go Bluetooth (which can just act like a simple serial connection in code; but you would need some custom but not terribly complicated hardware for this) or try Arduino for Android, which is designed exactly for this use case and uses USB (but I don't have personal experience with it).
No need for C++ or native code in any case; everything is available via Java APIs.
You are not going to be able to send data through the headphone jack into the device. Using the headphones as an antenna is different: it's not actually transmitting data into the phone, it's using the wire as an extension of the internal antenna (attached to a receiver inside the phone, which generates the "data").
You should be able to do it with USB on your device, though; focus on USB any time you want to transmit data from outside the phone into it. You could probably also do it with an IR or Bluetooth transmitter setup instead of a wired switch.