I am using SoundPool to play sounds on timers in my application. Does anyone know if there is a built-in queue that will queue files and play each one only when the previous one has finished?
Do I have to write my own implementation of a SoundPool queue?
I have had a similar issue in a couple of apps that relied heavily on SoundPool. There is no built-in queue or detection of when a sound has finished. Annoyingly, you can't get the play length of loaded sounds from SoundPool either. In your searches I'm sure you've come across many people complaining about this.
For my purposes, I got around this by first briefly loading each sound into MediaPlayer to get and store its play length, then using those lengths to determine when playback would have finished.
Unfortunately, audio is a well-recognised weak point in Android. The general advice is that if you want accurate timing and good control, you need to turn to the NDK rather than using SoundPool or MediaPlayer, sorry.
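The duration-based workaround above boils down to simple bookkeeping: measure each clip's length once (e.g. with MediaPlayer.getDuration()), then schedule each SoundPool.play() at the cumulative offset of the clips queued before it, for example via Handler.postDelayed(). A minimal sketch of just the offset calculation (class and method names are mine, not part of any Android API):

```java
// Computes when each queued sound should start, given the stored play
// lengths: each sound starts after the sum of all durations before it.
public class SoundQueue {
    public static long[] startOffsetsMs(long[] durationsMs) {
        long[] offsets = new long[durationsMs.length];
        long elapsed = 0;
        for (int i = 0; i < durationsMs.length; i++) {
            offsets[i] = elapsed;      // schedule soundPool.play() at this offset
            elapsed += durationsMs[i]; // next sound waits for this one to end
        }
        return offsets;
    }
}
```

Each offset would then be passed to something like handler.postDelayed(playRunnable, offset).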
Can we reset all the values that MediaRecorder holds while recording video?
I've tried just calling mediaRecorder.reset() while recording video, but it doesn't work. I don't know whether it's possible or not. If it is possible, I'd appreciate any references.
I've read this and also the MediaRecorder documentation on Google Developers, but none of those references mention my issue.
EDIT:
What I want is to call mediaRecorder.reset() and mediaRecorder.start() while recording a video; the problem occurs when I do this. I need to produce chunks of video clips while recording the same video, and those processes need to run in parallel. When I try to stop and restart the camera capture methods, many frames are missed, because handling the camera is fairly costly for the processor. I tried this and got errors saying the session configuration failed. Now I'm stuck here. Need help!
Thank you for your valuable time.
Edit in response to clarifications:
Ok, so you want to split the video file into multiple separate files.
You'll need to use the lower-level APIs (MediaCodec, MediaMuxer) to implement this yourself; the higher-level MediaRecorder does not support this without losing frames.
Original:
So you're trying to pause the video recording temporarily.
Unfortunately, there's no support for this before API level 24, which added MediaRecorder.pause(). You can't call MediaRecorder.reset() mid-video and have it work.
All you can really do is to record the full video and then post-process it to crop sections you don't want.
I was hoping someone can help me understand an issue I am seeing with the Mediaplayer class.
I am creating a music app that needs to play two music files at the same time. In one of the use case scenarios I want to be able to play an MP3 track and then start another MP3 playing at a different volume over the top of the first.
I have found that the Android MediaPlayer class offers this functionality, and I have created a test application that does this by simply creating two instances of MediaPlayer.
For example...
MediaPlayer mMediaPlayer1, mMediaPlayer2;
mMediaPlayer1 = new MediaPlayer();
mMediaPlayer2 = new MediaPlayer();
The problem I am having is that it works fine in the emulator and on most devices I try, but on a few test devices I get odd results when I start the second MediaPlayer/track.
What happens is that the volume of either the second or the first audio track suddenly drops to nothing. I can see that the MediaPlayer is still "playing", as I have several progress bars set up to track its progress, but you can't hear anything.
I've seen this on both a OnePlus One and a OnePlus X phone. On my Asus tablet and a Samsung A3 phone it works fine, though. It's not related to the OS version either, as I've tried it on 4.4.2, 5, 6 and 7 with mixed results. It definitely seems to be hardware related.
I've also seen related posts describing this issue but none so far with an answer as to what is causing it.
Can anyone explain this or shed any light on the problem? Even if it is only to understand the limitation of what I am doing?
FYI - I did look at SoundPool but can't use it because the clips I am using are bigger than 1 MB.
Thanks in advance...
For your goal of mixing music, you can develop your own "mixer" that works with raw audio data.
Steps are:
extracting the encoded audio data from a music file with MediaExtractor
decoding those ByteBuffers with a decoder (MediaCodec)
mixing one decoded buffer from the first audio stream with one decoded buffer from the second to get one mixed buffer; here is the algorithm
playing the mixed buffer with AudioTrack
It's a lot of work, but it will work anywhere!
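The mixing step in the list above is usually just per-sample addition with clamping. A minimal sketch for 16-bit PCM (class and method names are mine; a real mixer would also handle differing buffer lengths and channel counts):

```java
// Mixes two decoded 16-bit PCM buffers sample-by-sample, clamping to the
// signed-short range so overflow does not wrap around and distort.
public class PcmMixer {
    public static short[] mix(short[] a, short[] b) {
        int n = Math.min(a.length, b.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            int sum = a[i] + b[i];                        // sum in int to avoid overflow
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }
}
```

The resulting short[] can be written straight to an AudioTrack configured with ENCODING_PCM_16BIT.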
Thanks for the suggestion. In the end I found a way around it: if you use the newer AudioAttributes API (API 21 and above) and set the FLAG_AUDIBILITY_ENFORCED flag, it seems to force the devices I was having issues with to play both streams. Thanks for looking, folks!
My application takes a long time to prepare and buffer an audio stream. I have read this question, Why does it take so long for Android's MediaPlayer to prepare some live streams for playback?, but it only says that people have experienced the issue; it does not say how to improve things.
I am experiencing this in all versions of Android, tested from 2.2 - 4.1.2.
The streams are at a bit rate suitable for mobile and 3G connections. The same stream takes less than a second to start buffering in the equivalent iOS app.
Is there a way to specify the amount of time that should be buffered? I know that the Tune In radio application offers this feature ( https://play.google.com/store/apps/details?id=tunein.player ).
Thanks.
Edit: I've tested again and found that it only happens on devices running Gingerbread and above (>=2.3). I know that Android changed the underlying framework from OpenCore to StageFright. So how can I optimise the media framework? It just seems wrong that the old HTC Wildfire can prepare, stream and play, literally 10x faster than the brand new HTC One X and Nexus 7.
I have struggled with this question for months. Finally I found the solution.
The real problem is in the implementation of the MediaPlayer class, particularly the way MediaPlayer buffers the data. This is why the solution is to do your own buffering, save the data to a temp file, and feed that file to MediaPlayer.
This tutorial and source code explain exactly how. http://androidstreamingtut.blogspot.nl/2012/08/custom-progressive-audio-streaming-with.html
By adapting this code, it is easy to create a much better streaming player.
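The buffering idea boils down to copying the network stream to a local file yourself and pointing MediaPlayer at that file once enough data is on disk. A rough sketch of the copy loop (names are mine; the MediaPlayer hand-off is only indicated in a comment):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Copies a stream to a local file in chunks. Once enough data is on disk,
// MediaPlayer can be pointed at the file via setDataSource(path).
public class StreamBuffer {
    public static long bufferToFile(InputStream in, File out) throws IOException {
        long total = 0;
        try (FileOutputStream fos = new FileOutputStream(out)) {
            byte[] buf = new byte[16 * 1024];
            int read;
            while ((read = in.read(buf)) != -1) {
                fos.write(buf, 0, read);
                total += read;
                // In a real player you would notify MediaPlayer once a
                // threshold is buffered, not wait for end-of-stream.
            }
        }
        return total;
    }
}
```

The tutorial linked above does essentially this, plus the bookkeeping needed to keep playback ahead of the download.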
Google Developers really screwed up here.
EDIT: This answer is rather old. Nowadays I would recommend ExoPlayer instead of MediaPlayer. It is extensible, stable and can play many different types of media. You can find it here: https://github.com/google/ExoPlayer/
There really isn't much you can do since the Android MediaPlayer class doesn't provide access to lower level settings such as buffer size. The only alternative would be to make your own player using AudioTrack and a library like FFmpeg to do the decoding.
The one thing I'd recommend is to experiment with the encoding. For instance, for MP4s, ensure that the moov atom is located at the beginning of the file (e.g. with ffmpeg's -movflags +faststart option; there are plenty of questions on Stack Overflow about how to do this). With MP3s, you can try different codecs or bitrates.
You can, for instance, try a number of audio files you find online, and if you see one that doesn't take a long time to buffer, try to encode your files in the same way.
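If you want to check whether a given MP4 already has its moov box up front, you can inspect the order of the top-level boxes directly; a simplified sketch (class name is mine, and it ignores 64-bit box sizes and other edge cases):

```java
import java.nio.charset.StandardCharsets;

// Walks the top-level MP4 boxes (4-byte big-endian size, 4-byte type) and
// reports whether 'moov' (metadata) appears before 'mdat' (media data).
public class MoovCheck {
    public static boolean moovBeforeMdat(byte[] mp4) {
        int pos = 0;
        while (pos + 8 <= mp4.length) {
            long size = ((mp4[pos] & 0xFFL) << 24) | ((mp4[pos + 1] & 0xFFL) << 16)
                      | ((mp4[pos + 2] & 0xFFL) << 8) | (mp4[pos + 3] & 0xFFL);
            String type = new String(mp4, pos + 4, 4, StandardCharsets.US_ASCII);
            if (type.equals("moov")) return true;   // metadata first: fast start
            if (type.equals("mdat")) return false;  // media data first: slow start
            if (size < 8) break;                    // malformed or 64-bit size; give up
            pos += (int) size;
        }
        return false;
    }
}
```

Files where this returns false are the ones worth re-muxing with faststart.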
I'm attempting to use both progressive and cached sound files in my Android app. SoundPool works great for preloading small files, but obviously sometimes you need to play a 15-30 second sound file. I don't want to preload those (and can't, due to memory constraints), but I'm at a loss as to how to progressively stream resource sounds. Every tutorial about progressive sound streaming is for HTTP streams.
The sounds are OGG files in /res/raw/.
How do I progressively stream local resource sounds?
Since it's been some time since this question was asked, I figure I'll mark it as answered with the method I went with.
I decided to write a JNI wrapper over libvorbis and include the library with the app, so that it can stream sounds using native libvorbis code. It's a pain, and it really shouldn't be necessary given all the decoding capability Android has, but I can't find any convenience methods.
So, for future googlers, sorry for the bad news.
I am creating an app that requires a sound or sounds to be played potentially every ~25 ms (300 beats per minute with potentially 8 "plays" per beat).
At first I used SoundPool to accomplish this. I have three threads: one updates the SurfaceView animation, one updates the time using System.nanoTime(), and the other plays the sounds (MP3s) using SoundPool.
This works, but it seems to use a lot of processor power: any time a background process runs, such as Wi-Fi rescanning or GC, it starts skipping beats here and there, which is unacceptable.
I am looking for an alternative solution. I've looked at mixing and also the JET engine.
The JET engine doesn't seem like a solution, as it only plays MIDI files. My app requires high-quality sounds (recordings of actual instruments). (Correct me if I'm wrong about MIDI not being high quality.)
Mixing seems very complicated on Android, as it seems you must first get the raw audio (which takes up a lot of memory) and also generate "silence" between sounds. I'm not sure this is the most elegant solution, as my app's speed (BPM) will be controlled by the user.
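The tempo arithmetic at least is straightforward: the interval between scheduled plays is 60,000 / (BPM x plays per beat) milliseconds, which is where the ~25 ms figure above comes from. A tiny helper (class and method names are illustrative):

```java
// Converts a tempo and a subdivision count into the interval, in
// milliseconds, between consecutive scheduled plays.
public class BeatClock {
    public static double intervalMs(double bpm, int playsPerBeat) {
        return 60_000.0 / (bpm * playsPerBeat); // 60,000 ms per minute
    }
}
```

At 300 BPM with 8 plays per beat this gives 25 ms, so any user-chosen BPM just changes the delay fed to the scheduling thread.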
If anyone is experienced in this area, I would GREATLY appreciate any advice.
Thank you