Two instances of Android MediaPlayer cause odd issues - java

I was hoping someone could help me understand an issue I am seeing with the MediaPlayer class.
I am creating a music app that needs to play two music files at the same time. In one of the use-case scenarios I want to be able to play an MP3 track and then initiate another MP3 to start playing at a different volume over the top of the first.
I have found that the Android MediaPlayer class offers this functionality and have created a test application to do this by simply creating two instances of MediaPlayer.
For example...
MediaPlayer mMediaPlayer1, mMediaPlayer2;
mMediaPlayer1 = new MediaPlayer();
mMediaPlayer2 = new MediaPlayer();
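The rest of the test then does roughly this (file paths are illustrative, exception handling omitted):

mMediaPlayer1.setDataSource("/sdcard/Music/track1.mp3"); // illustrative path
mMediaPlayer1.prepare();
mMediaPlayer1.start();
// later, start the second track over the top at a different volume
mMediaPlayer2.setDataSource("/sdcard/Music/track2.mp3"); // illustrative path
mMediaPlayer2.prepare();
mMediaPlayer2.setVolume(0.25f, 0.25f); // left, right
mMediaPlayer2.start();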
The problem I am having is that it works fine in the emulator and on most devices I try, but on a few test devices I get odd results when I try to start the second MediaPlayer/track.
What happens is that the volume of either the second or the first audio track suddenly drops to nothing. I can see that the MediaPlayer is still "playing", as I have several progress bars set up to track its progress, but you can't hear anything.
I've seen this on both a OnePlus One and a OnePlus X phone. On my Asus tablet and a Samsung A3 phone it works fine, though. It's not related to the OS version either, as I've tried it on 4.4.2, 5, 6 and 7 with mixed results. It definitely seems to be hardware related.
I've also seen related posts describing this issue but none so far with an answer as to what is causing it.
Can anyone explain this or shed any light on the problem? Even if it is only to understand the limitation of what I am doing?
FYI - I did look at SoundPool but can't use it because the clips I am using are larger than 1 MB.
Thanks in advance...

For your goal of mixing music, you can develop your own "mixer" that works with raw audio data.
The steps are:
extracting the encoded audio data from each music file with MediaExtractor
decoding those ByteBuffers with a decoder (MediaCodec)
mixing one decoded buffer from the first audio stream with one decoded buffer from the second to get a single mixed buffer (a sketch of the mixing algorithm is shown below)
playing the mixed buffer with AudioTrack
It is a lot of work, but it will work anywhere!
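A minimal sketch of the mixing and playback steps, assuming both streams are already decoded to 16-bit PCM at the same sample rate and channel count:

// helper for step 3: mix two 16-bit PCM buffers sample by sample
short[] mix(short[] a, short[] b) {
    short[] out = new short[Math.min(a.length, b.length)];
    for (int i = 0; i < out.length; i++) {
        int sum = a[i] + b[i]; // add the samples
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum)); // clip to 16 bits
    }
    return out;
}

// step 4: play the mixed buffer with AudioTrack
int sampleRate = 44100; // must match the decoded streams
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();
short[] mixed = mix(decodedBuffer1, decodedBuffer2); // one buffer from each MediaCodec (step 2)
track.write(mixed, 0, mixed.length);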

Thanks for the suggestion. In the end I found a way around it. If you use the newer AudioAttributes option (API 21 and above) and set the FLAG_AUDIBILITY_ENFORCED flag, it seems to force the devices I was having issues with to play the streams... thanks for looking, folks!
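That workaround, roughly (the usage and content-type values besides the flag are just sensible defaults):

MediaPlayer mp = new MediaPlayer();
mp.setAudioAttributes(new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED) // the flag that made the problem devices behave
        .build());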

Related

What can be used instead of getMaxAmplitude for a MediaPlayer?

Recently I've been trying to create a visual recorder and player inside my Android application; I want it to be like the SoundCloud sound waves.
So far, I've created one for the recorder using a customized view that draws each new line after the last one, with a height given by mediaRecorder.getMaxAmplitude() every 100 ms.
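For reference, the polling loop that drives such a view looks roughly like this (waveformView.addBar is a hypothetical method on the custom view):

final Handler handler = new Handler(Looper.getMainLooper());
handler.post(new Runnable() {
    @Override
    public void run() {
        int amplitude = mediaRecorder.getMaxAmplitude(); // max amplitude since the last call
        waveformView.addBar(amplitude); // hypothetical: append one line to the custom view
        handler.postDelayed(this, 100); // poll every 100 ms
    }
});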
The problem is that, unlike MediaRecorder, MediaPlayer doesn't have getMaxAmplitude, so when users play recorded files they won't see the sound waves correctly.
I have searched for this, but the answer I found was not the right one; it just gives the volume level of the user's phone.
Thanks to anyone who can help solve this problem.

mediaRecorder.reset() while recording a video (Camera2 API)

Can we reset all the values held in mediaRecorder while recording video?
I've tried just calling mediaRecorder.reset() while recording video, but it doesn't work. I don't know whether it is possible or not; if it is, any references would be appreciated.
I've read this and also the Google developers' documentation for MediaRecorder, but none of the references mention my issue.
EDIT:
What I want is to call mediaRecorder.reset() and mediaRecorder.start() while recording a video; the problem occurs when I do this. I need to produce chunks of video clips while recording the same video, and those processes need to run in parallel. When I try to stop and restart the camera capture methods, many frames are missed, because handling the camera is somewhat costly for the processor. I tried this and it raised errors saying the session configuration failed. Now I'm stuck here. Need help!
Thank you for your valuable time.
Edit in response to clarifications:
Ok, so you want to split the video file into multiple separate files.
You'll need to use the lower-level APIs (MediaCodec, MediaMuxer) to implement this yourself; the higher-level MediaRecorder does not support this without losing frames.
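A very rough sketch of the idea (segment length and paths are illustrative): feed the encoded frames coming out of MediaCodec into a MediaMuxer, and rotate to a fresh muxer at a key frame once a segment is long enough.

private MediaMuxer muxer;
private int trackIndex;
private long segmentStartUs;
private int segment;
private static final long SEGMENT_US = 5_000_000; // 5-second clips, illustrative

void writeFrame(ByteBuffer data, MediaCodec.BufferInfo info, MediaFormat format) throws IOException {
    boolean keyFrame = (info.flags & MediaCodec.BUFFER_FLAG_KEY_FRAME) != 0;
    // segments must begin on a key frame, or the next clip won't decode
    if (muxer == null || (keyFrame && info.presentationTimeUs - segmentStartUs >= SEGMENT_US)) {
        if (muxer != null) { muxer.stop(); muxer.release(); } // close the finished clip
        muxer = new MediaMuxer("/sdcard/clip_" + (segment++) + ".mp4",
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        trackIndex = muxer.addTrack(format); // format from the codec's INFO_OUTPUT_FORMAT_CHANGED
        muxer.start();
        segmentStartUs = info.presentationTimeUs;
    }
    muxer.writeSampleData(trackIndex, data, info);
}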
Original:
So you're trying to pause the video recording temporarily.
Unfortunately, there's no support for this before API level 24, which added MediaRecorder.pause(). You can't call MediaRecorder.reset() mid-video and have it work.
All you can really do is to record the full video and then post-process it to crop sections you don't want.

Android MediaPlayer takes long time to prepare and buffer

My application takes a long time to prepare and buffer an audio stream. I have read this question, Why does it take so long for Android's MediaPlayer to prepare some live streams for playback?, however it only confirms that people have experienced the issue; it does not say how to improve it.
I am experiencing this on all versions of Android, tested from 2.2 to 4.1.2.
The streams are at a bitrate suitable for mobile and 3G connections. The same stream takes less than a second to start buffering in the equivalent iOS app.
Is there a way to specify the amount of time that should be buffered? I know that the Tune In radio application offers this feature ( https://play.google.com/store/apps/details?id=tunein.player ).
Thanks.
Edit: I've tested again and found that it only happens on devices running Gingerbread and above (>=2.3). I know that Android changed the underlying framework from OpenCore to StageFright. So how can I optimise the media framework? It just seems wrong that the old HTC Wildfire can prepare, stream and play literally 10x faster than the brand new HTC One X and Nexus 7.
I have struggled with this question for months. Finally I found the solution.
The real problem is in the implementation of the MediaPlayer class, particularly the way MediaPlayer buffers the data. This is why the solution is to do your own buffering: save the stream to a temp file and feed that to MediaPlayer.
This tutorial and source code explain exactly how. http://androidstreamingtut.blogspot.nl/2012/08/custom-progressive-audio-streaming-with.html
By adapting this code, it is easy to create a much better streaming player.
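Stripped to its core, the approach looks like this (the URL is illustrative, and the real tutorial interleaves downloading and playback on a background thread rather than downloading everything first):

// download ahead into a local file (on a background thread in practice)
File temp = File.createTempFile("buffer", ".mp3", context.getCacheDir()); // context: your Activity/Service
InputStream in = new URL(streamUrl).openStream();
FileOutputStream out = new FileOutputStream(temp);
byte[] buf = new byte[8192];
int n;
while ((n = in.read(buf)) != -1) {
    out.write(buf, 0, n);
}
out.close();
in.close();
// then hand the file to MediaPlayer, which no longer has to manage the network
MediaPlayer player = new MediaPlayer();
player.setDataSource(temp.getAbsolutePath());
player.prepare();
player.start();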
Google Developers really screwed up here.
EDIT: This answer is rather old. Nowadays I would recommend not using MediaPlayer and using ExoPlayer instead. It is extensible, stable and can play many different types of media. You can find it here: https://github.com/google/ExoPlayer/
There really isn't much you can do since the Android MediaPlayer class doesn't provide access to lower level settings such as buffer size. The only alternative would be to make your own player using AudioTrack and a library like FFmpeg to do the decoding.
The one thing I'd recommend is to play around with encoding. For instance, for MP4s, ensure that the MOOV Atom is located at the beginning of the file (there are enough questions on S/O regarding how to do this with ffmpeg, etc). With MP3s, you can look at different codecs or bitrates for instance.
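For what it's worth, a stock ffmpeg build can usually relocate the MOOV atom without re-encoding via the faststart flag, e.g. ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4 (treat the exact invocation as an assumption for your ffmpeg version).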
You can, for instance, try a number of audio files you find online, and if you see one that doesn't take a long time to buffer, try to encode your files in the same way.

How to play changing MIDI on Android - JetPlayer

I have been breaking my head over this.
I have an NDK C++ app that continuously generates note info in a vector.
Now I need to write this out as MIDI (from the NDK or SDK) that can be played back without delay.
It seems I should use JetPlayer, but it is not documented properly; I cannot make heads or tails of it.
How do I get the .jet file? And where exactly does my MIDI info fit in? I looked at the JetBoy example, but I don't really understand it. Thanks for any help.
As far as I know, JetPlayer can't generate MIDI.
So I used MediaPlayer instead and generated the MIDI with android-midi-lib.
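A rough sketch of that combination; I am assuming android-midi-lib's MidiTrack/MidiFile API (com.leff.midi) here, so check the library for the exact signatures:

MidiTrack noteTrack = new MidiTrack();
noteTrack.insertNote(0, 60, 100, 0, 480); // channel, pitch (middle C), velocity, tick, duration
ArrayList<MidiTrack> tracks = new ArrayList<MidiTrack>();
tracks.add(noteTrack);
MidiFile midi = new MidiFile(MidiFile.DEFAULT_RESOLUTION, tracks);
File out = new File(context.getCacheDir(), "generated.mid"); // context: your Activity/Service
midi.writeToFile(out);
// MediaPlayer can play .mid files directly
MediaPlayer player = new MediaPlayer();
player.setDataSource(out.getAbsolutePath());
player.prepare();
player.start();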

Multichannel USB recording with Java Sound API?

I'm trying to record/process some audio from three USB microphones with Java Sound on Snow Leopard (but I can switch to Windows if that fixes things). The problem is, when I try to use the mixer that corresponds to the USB mic, Java Sound tells me that the line isn't supported. Specifically, it says this...
Available mixers:
Java Sound Audio Engine
USBMIC Serial# 041270067
Built-in Input
Built-in Microphone
Soundflower (2ch)
Soundflower (16ch)
Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: Line unsupported: interface TargetDataLine supporting format PCM_SIGNED 96000.0 Hz, 8 bit, stereo, 2 bytes/frame,
...when I ask it to select the USBMIC mixer:
Mixer mixer = AudioSystem.getMixer(mixerInfo[1]);
I have tried matching the audio format to the exact specifications of the microphones (16-bit, 44100Hz, stereo) and it didn't make any difference.
The problem is cropping up here:
final TargetDataLine line = (TargetDataLine) mixer.getLine(info);
It would seem that the mixer and the TargetDataLine don't like each other. Is there some way to get these two to 'match' and get along?
The microphones that I'm using are admittedly a bit strange. They were made to be used in a karaoke video game called SingStar. The mics themselves have standard mono line-in connectors that plug into a little hub (two to a hub) that converts them into a single male USB connector. Strangeness aside, though, they seem to work perfectly fine with Audacity as separate channels, so multichannel recording with them is clearly possible, just maybe not in Java.
I've also considered using a program like Soundflower that shares audio between different programs. However, I'm not sure this will work, as I can't see how to make the USB mics inputs to Soundflower and then pipe them into a Java program. A quick experiment showed me that I could record audio in Audacity from the mics, pipe it out through Soundflower, and then process it in my Java program. Still, what I would like is for it all to happen in real time in Java.
Anybody familiar with this kind of problem?
I think that a simple way to do this would be using Soundflower and Soundflowerbed.
I can't see how to make the USB mics inputs to Soundflower and then pipe them into a Java program.
It sounds like you have Soundflower installed already. Soundflowerbed is found in the same disk image as Soundflower and is a menubar application. It lets you route sound between applications which don't have controls built in for selecting sound devices. Install that from the disk image and click it to run.
All of the following will be using my Echo Audiofire 4 but in principle should work on any audio device.
Using Soundflowerbed
Open Soundflowerbed and tick the audio device you want to use under Soundflower (16ch).
From here you would use Soundflower (16ch) as your audio input device in Java sound.
Creating an aggregate audio device
An alternative way to solve this if that didn't work is to create an aggregate device. Open Applications > Utilities > Audio Midi Setup and click the plus sign to create a new aggregate device.
Tick the device that you want to aggregate; you only want your USBMIC.
The key part which may be giving you trouble is the clock on the device. If you select the Mac as the clock source then that may be more stable.
If this still doesn't work then you could try adding the Mac built-in audio to the aggregate device and making it the master clock by right clicking on the device you want to be the master.
Other options
Finally, I haven't used this before, but PulseAudio might be a possible solution for mixing your audio streams together. It looks quite heavyweight, though.
According to my research, especially threads like this, the microphone you are using is most likely causing the problem. The thread states that the microphone is a problem even when switching games, so I am guessing that it will be a problem when switching platforms, too.
My suggestion is - if you have not tried this already - to use a different microphone! Most microphones I have messed around with have special chip controllers that convert the data into a format compatible with the game system. Since you are using them on a desktop operating system, you are probably getting some very odd effects that you wouldn't get on a game system like the PlayStation.
Hopefully this helps! Happy coding!
The AudioFormat doesn't match the TargetDataLine's supported format. I don't know if that was a typo or not, but the exception says the TargetDataLine supports 8-bit audio, and right below that you said you're using a 16-bit AudioFormat. It also supports up to 2 bytes per frame. How quickly, and in what size chunks, are you trying to read the data? Sorry if that doesn't help, but I thought I'd point it out in case it was overlooked.
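One way to debug this is to ask every mixer up front whether it supports the format before grabbing a line; a minimal sketch (chosenMixerInfo stands in for whichever Mixer.Info reported true, and exception handling is omitted):

AudioFormat format = new AudioFormat(44100f, 16, 2, true, false); // 44.1 kHz, 16-bit, stereo, signed, little-endian
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
    System.out.println(mi.getName() + " supports it: "
            + AudioSystem.getMixer(mi).isLineSupported(info));
}
// only open the line on a mixer that reported true
TargetDataLine line = (TargetDataLine) AudioSystem.getMixer(chosenMixerInfo).getLine(info);
line.open(format);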
