Route MIC input through headset and AUDIO output through phone's speaker - java

Is there a way to route microphone input through the headset while simultaneously routing audio output through the smartphone's speaker?
I've been searching for hours now, and it seems this is clearly impossible on iOS, but what about Android?
I'm using a Samsung Galaxy S4.
Here is the part of my code that routes the audio output through the speaker even when the headset is plugged in:
AudioManager speakerOutput = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
speakerOutput.setMode(AudioManager.MODE_IN_CALL);
speakerOutput.setSpeakerphoneOn(true);
But when I tried to control my app with the headset's microphone, nothing happened. When I tried with the smartphone's microphone, it obviously worked.
It doesn't seem like any of the other AudioManager.MODEs route only the audio output while leaving the mic input on the headset. So what can I do, short of building a new kernel, modifying drivers, the HAL, etc.?
Thanks, C.

There is no way to loop back from the headset mic to the speaker on Android without changing the HAL or the driver. If you can change the hardware, it may work.

If you use AudioManager.STREAM_RING, the audio should be routed to both the loudspeaker and the wired headset, just as when the ringtone is played for an incoming call (remove the setMode and setSpeakerphoneOn calls).
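A minimal sketch of that suggestion, assuming a MediaPlayer playback path; `R.raw.my_sound` is a placeholder resource, and `USAGE_NOTIFICATION_RINGTONE` is used because it maps to the ring stream on modern APIs:

```java
// Hypothetical helper: play a raw resource on the ring stream so it is
// heard on both the loudspeaker and a wired headset, like a ringtone.
private void playOnRingStream(Context context) throws IOException {
    MediaPlayer player = new MediaPlayer();
    player.setAudioAttributes(new AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_NOTIFICATION_RINGTONE)
            .build());
    AssetFileDescriptor afd = context.getResources().openRawResourceFd(R.raw.my_sound);
    player.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
    afd.close();
    player.prepare();
    player.setOnCompletionListener(MediaPlayer::release);
    player.start();
}
```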

Yes, it's possible. I did it successfully a few days ago. There are two solutions:
Solution 1:
Suppose you are in normal mode with a headset connected. We just need to call setPreferredDevice to set the preferred audio output device. The input source already comes from the headset, so we only need to route the audio output to the phone speaker. Here is my sample code using MediaPlayer:
@RequiresApi(Build.VERSION_CODES.P)
private fun playAudioFromPhoneSpeaker(context: Context, audioResId: Int) {
    val mMediaPlayer: MediaPlayer = MediaPlayer.create(context, audioResId)
    val mCurrentDevice: AudioDeviceInfo? = mMediaPlayer.preferredDevice
    val mDeviceSpeaker: AudioDeviceInfo? = getDeviceOutputSpeaker(context)
    mDeviceSpeaker?.let { speaker ->
        if (mCurrentDevice != speaker) {
            val result: Boolean = mMediaPlayer.setPreferredDevice(speaker)
            Log.i("PreferredDevice", "is set as speaker: $result")
        }
    }
    mMediaPlayer.start()
    mMediaPlayer.setOnCompletionListener { mediaPlayer ->
        mediaPlayer.release()
    }
}
@RequiresApi(Build.VERSION_CODES.M)
private fun getDeviceOutputSpeaker(context: Context): AudioDeviceInfo? {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val mAudioDeviceOutputList = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
    mAudioDeviceOutputList?.filterNotNull()?.forEach { speaker ->
        if (speaker.type == AudioDeviceInfo.TYPE_BUILTIN_SPEAKER) {
            return speaker
        }
    }
    return null
}
The positive thing about this solution is that we don't need to touch the input source at all: we only change the audio output device, which has no impact on the input.
The drawback is that MediaPlayer.setPreferredDevice was only added in Android 9 (API 28), so this solution is supported only from Android 9 onward.
Solution 2:
In the second solution we turn on the device speaker via AudioManager. After turning the speaker on, both the audio input and output move to the built-in speaker and mic, so we then have to route the mic back to the headset. Sample code:
private var mAudioRecord: AudioRecord? = null

private fun recordAudioFromHeadphone(context: Context) {
    // Step 1: Turn on the speaker
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    audioManager.isSpeakerphoneOn = true
    // Step 2: Initialize your AudioRecord here (TODO for you)
    // Step 3: Route the mic to the headset
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
        setPreferredInputMic(getWiredHeadPhoneMic(context))
    }
}
@RequiresApi(Build.VERSION_CODES.M)
private fun setPreferredInputMic(mAudioDeviceInfo: AudioDeviceInfo?) {
    val result: Boolean? = mAudioRecord?.setPreferredDevice(mAudioDeviceInfo)
}

@RequiresApi(Build.VERSION_CODES.M)
private fun getWiredHeadPhoneMic(context: Context): AudioDeviceInfo? {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    // Query *input* devices: the wired headset mic is an input (source) device
    val mAudioDeviceInputList = audioManager.getDevices(AudioManager.GET_DEVICES_INPUTS)
    mAudioDeviceInputList?.filterNotNull()?.forEach { device ->
        if (device.type == AudioDeviceInfo.TYPE_WIRED_HEADSET) {
            return device
        }
    }
    return null
}
The positive side of this solution is that it is supported from Android 6. The negative side is that it's a bit more complex, and you have to manage the voice input source yourself.
AudioRecord, MediaRecorder, and AudioTrack also support this kind of solution.
Thanks

Related

Can notification.setSound() be used with multiple audio sources?

Expected: a different audio file plays for each if branch.
Actual: the first audio file plays for all notifications. The text message updates correctly, so the if conditions are being met.
Is there a way to have the correct audio file play?
if (alarm.getMINS().equals("0")) {
    alarmSound = audioAlert;
    text = context.getString(R.string.message);
} else if (alarm.getMINS().equals("2")) {
    alarmSound = audioReminder;
    text = context.getString(R.string.reminder);
} else if (alarm.getMINS().equals("5")) {
    alarmSound = audioImportant;
    text = context.getString(R.string.important);
}
notification.setSound(alarmSound, audioAttributes);
...
.setContentText(text)
Sound on a per-notification basis has been deprecated since Android 8.0 (API 26). Sounds are now set on the notification channel, and each channel has one sound.
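A hedged sketch of the channel-based approach, reusing audioReminder and audioAttributes from the question (the channel id, name, and icon are placeholders):

```java
// On Android 8.0+ the sound belongs to the channel, not the notification,
// so create one channel per distinct sound and pick the channel at build time.
NotificationManager nm = context.getSystemService(NotificationManager.class);
NotificationChannel reminders = new NotificationChannel(
        "reminders", "Reminders", NotificationManager.IMPORTANCE_HIGH);
reminders.setSound(audioReminder, audioAttributes);
nm.createNotificationChannel(reminders);

// The channel id chosen here determines which sound plays.
Notification n = new NotificationCompat.Builder(context, "reminders")
        .setSmallIcon(R.drawable.ic_alarm)
        .setContentText(text)
        .build();
```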

How to let a sound play once in Processing?

import processing.serial.*;
import processing.sound.*;

SoundFile file;
Serial myPort;  // Create object from Serial class
String val;     // Data received from the serial port
//String antwoord = "A";

void setup() {
  size(300, 300);
  // I know that the first port in the serial list on my mac
  // is Serial.list()[0].
  // On Windows machines, this generally opens COM1.
  // Open whatever port is the one you're using.
  String portName = Serial.list()[0]; // change the 0 to a 1 or 2 etc. to match your port
  myPort = new Serial(this, portName, 9600);
}

void draw() {
  if (myPort.available() > 0) {
    // If data is available, read it and store it in val
    val = trim(myPort.readStringUntil(ENTER));
  }
  //println(val); // print it out in the console
  file = new SoundFile(this, "Promise.mp3");
  if ("A".equals(val) == true && file.isPlaying() == false) {
    file.play();
    file.amp(0.2);
  } else {
    ellipse(40, 40, 40, 40);
  }
}
I got this code, but I want the sound to keep playing as long as the signal 'A' is given. Right now it restarts constantly, which produces a weird static noise. How can I make it play steadily?
You're creating a new SoundFile on every run of draw(), so file.isPlaying() will always return false. Only create a new SoundFile if you haven't already; the simplest solution is probably to move file = new SoundFile(this, "Promise.mp3"); into setup().
Alternatively, check or remember whether you have already loaded the file.
Sorry if this is a bit off-topic, but I recommend using Minim or a different sound library instead of the Processing one, since the latter has consistently caused problems for me when exporting.
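A sketch of that fix in Processing syntax, with the SoundFile loaded once in setup() so draw() can poll isPlaying() on the same instance (port index and file name as in the question):

```java
import processing.serial.*;
import processing.sound.*;

SoundFile file;
Serial myPort;
String val;

void setup() {
  size(300, 300);
  file = new SoundFile(this, "Promise.mp3");  // load once, not every frame
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  if (myPort.available() > 0) {
    val = trim(myPort.readStringUntil(ENTER));
  }
  if ("A".equals(val) && !file.isPlaying()) {
    file.amp(0.2);
    file.play();  // restarts only after the clip has finished
  } else if (!"A".equals(val)) {
    ellipse(40, 40, 40, 40);
  }
}
```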

How to increase the sound Volume Output in Webrtc

I'm working on a WebRTC Android application and everything is working fine except two things, which are:
Switching the default sound output device from the earpiece to the speaker and vice versa. I have tried to use the code below from this Stack Overflow thread, but it is not working:
audioManager = (AudioManager) this.activity.getSystemService(Context.AUDIO_SERVICE);
audioManager.setSpeakerphoneOn(true);
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
And improving the video performance: there is a lot of lag in the video streams, and the stream even hangs frequently. If anyone can help me with this too, thanks a lot.
Below is my peerConnection configs
String fieldTrials = (PeerConnectionFactory.VIDEO_FRAME_EMIT_TRIAL + "/" + PeerConnectionFactory.TRIAL_ENABLED + "/");
PeerConnectionFactory.InitializationOptions initializationOptions =
        PeerConnectionFactory.InitializationOptions.builder(this)
                .setFieldTrials(fieldTrials)
                .createInitializationOptions();
PeerConnectionFactory.initialize(initializationOptions);

// Create a new PeerConnectionFactory instance - using hardware encoder and decoder.
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
DefaultVideoEncoderFactory defaultVideoEncoderFactory = new DefaultVideoEncoderFactory(
        rootEglBase.getEglBaseContext(), /* enableIntelVp8Encoder */ true, /* enableH264HighProfile */ true);
DefaultVideoDecoderFactory defaultVideoDecoderFactory =
        new DefaultVideoDecoderFactory(rootEglBase.getEglBaseContext());
peerConnectionFactory = PeerConnectionFactory.builder()
        .setOptions(options)
        .setVideoEncoderFactory(defaultVideoEncoderFactory)
        .setVideoDecoderFactory(defaultVideoDecoderFactory)
        .createPeerConnectionFactory();
Thanks
You need to add this permission to the manifest:
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
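For the volume part of the question, a hedged sketch: WebRTC audio normally plays on the voice-call stream, so one option is raising that stream's volume through AudioManager:

```java
// Route audio to the loudspeaker and push the voice-call stream to maximum.
AudioManager audioManager =
        (AudioManager) activity.getSystemService(Context.AUDIO_SERVICE);
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
audioManager.setSpeakerphoneOn(true);
int max = audioManager.getStreamMaxVolume(AudioManager.STREAM_VOICE_CALL);
audioManager.setStreamVolume(AudioManager.STREAM_VOICE_CALL, max, 0);
```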

MediaPlayer in Android works with a delay [duplicate]

This question already has answers here:
Not able to achieve Gapless audio looping so far on Android
(9 answers)
Closed 4 years ago.
I'm trying to play very short wav files (around 0.5 seconds each) at a very precise moments.
I've loaded a wav file and tried to play it when it's looped:
private val player = MediaPlayer()

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    val afd = resources.openRawResourceFd(R.raw.sub_kick_36_045)
    val fileDescriptor = afd.fileDescriptor
    try {
        player.setDataSource(fileDescriptor, afd.startOffset, afd.length)
        player.isLooping = true
        player.prepare()
    } catch (ex: IOException) {
        Log.d("Activity", ex.message)
    }
    play.setOnClickListener {
        player.start()
    }
    stop.setOnClickListener {
        player.stop()
    }
}
The sound plays, but there is a significant delay when looping.
I found an app that plays sounds very accurately, but it uses a much more complicated process to play files, and the files themselves are peculiar (not wav):
https://github.com/tube42/drumon
Maybe you can advise me how to play sounds (0.5-5 seconds long) instantly, with minimal delay (using some Java library or something).
Your audio is very short; maybe you could try SoundPool.
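A hedged sketch of the SoundPool suggestion; SoundPool decodes clips into memory up front, which is why it is the usual low-latency choice for short sounds (R.raw.sub_kick_36_045 is the resource from the question):

```java
// Build a SoundPool for short, low-latency playback.
SoundPool soundPool = new SoundPool.Builder()
        .setMaxStreams(4)
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build())
        .build();
int soundId = soundPool.load(context, R.raw.sub_kick_36_045, 1);
// Loading is asynchronous: wait for completion before the first play().
soundPool.setOnLoadCompleteListener((pool, id, status) -> {
    if (status == 0) {
        pool.play(id, 1f, 1f, 1, 0, 1f);
    }
});
```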

reading rtmp streaming using OpenCV in java

Objective:
translate a Python script that makes use of OpenCV into Java.
Issue:
inability to capture the RTMP stream in the Java version.
Details:
It's the control-base code for a 4G drone that streams its camera feed to a Node.js RTMP server. The following is the server code:
const { NodeMediaCluster } = require('node-media-server');
const numCPUs = require('os').cpus().length;

const config = {
  rtmp: {
    port: 1935,
    chunk_size: 600000,
    gop_cache: false,
    ping: 60,
    ping_timeout: 30
  },
  http: {
    port: 8000,
    allow_origin: '*'
  },
  cluster: {
    num: numCPUs
  }
};

var nmcs = new NodeMediaCluster(config);
nmcs.run();
The stream is then captured by the control base (for further operations involving OpenCV functionality).
In the Python version, I used
cap = cv2.VideoCapture('rtmp://192.168.1.12:1935/live/STREAM_NAME')
to read from the server on a local test network.
In Java, I downloaded the official OpenCV tutorial sample app located here. It's a tutorial on using VideoCapture objects to read from one's webcam.
As I did before in Python, I replaced the argument 0 (for the first cam) with the RTMP URL:
//private static int cameraId = 0;
String cameraId = "rtmp://192.168.1.12:1935/live/STREAM_NAME";

/**
 * The action triggered by pushing the button on the GUI
 *
 * @param event
 *            the push button event
 */
@FXML
protected void startCamera(ActionEvent event)
{
    if (!this.cameraActive)
    {
        // start the video capture
        this.capture.open(cameraId);
this.capture.isOpened() returns false, and no connection attempt is made to the server.
Can you kindly point out where I went wrong?
It took me a couple of hours to realize what was wrong. The issue has already been addressed and solved here.
