I have been racking my brain over this.
I have an NDK C++ app that continuously generates note info in a vector.
Now I need to write this out as MIDI files (from either the NDK or the SDK) that can be played back without delay.
It seems I should use JetPlayer, but it is not documented properly and I cannot make heads or tails of it.
How do I get the .jet file? And where exactly does my MIDI info go? I looked at the JetBoy example, but I don't really understand it. Thanks for any help.
As far as I know, JetPlayer can't generate MIDI.
So I used MediaPlayer instead and generated the MIDI file with android-midi-lib; the flow looks roughly like the sketch below.
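A minimal sketch of that flow, assuming android-midi-lib's MidiFile/MidiTrack API as shown in its README; the note values, tempo and output path here are placeholders:

import android.media.MediaPlayer;
import com.leff.midi.MidiFile;
import com.leff.midi.MidiTrack;
import com.leff.midi.event.meta.Tempo;
import java.io.File;
import java.util.ArrayList;

File writeAndPlay(File dir) throws Exception {
    MidiTrack tempoTrack = new MidiTrack();
    Tempo tempo = new Tempo();
    tempo.setBpm(120);
    tempoTrack.insertEvent(tempo);

    MidiTrack noteTrack = new MidiTrack();
    // insertNote(channel, pitch, velocity, tick, duration)
    noteTrack.insertNote(0, 60, 100, 0, 480);    // middle C for one beat
    noteTrack.insertNote(0, 64, 100, 480, 480);  // E for one beat

    ArrayList<MidiTrack> tracks = new ArrayList<>();
    tracks.add(tempoTrack);
    tracks.add(noteTrack);
    MidiFile midi = new MidiFile(MidiFile.DEFAULT_RESOLUTION, tracks);
    File out = new File(dir, "generated.mid");
    midi.writeToFile(out);

    // Play the freshly written file back with MediaPlayer.
    MediaPlayer player = new MediaPlayer();
    player.setDataSource(out.getAbsolutePath());
    player.prepare();
    player.start();
    return out;
}

Writing the file first and then handing it to MediaPlayer avoids having to synthesize the audio yourself.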
I was hoping someone could help me understand an issue I am seeing with the MediaPlayer class.
I am creating a music app that needs to play two music files at the same time. In one of the use-case scenarios I want to be able to play an MP3 track and then start another MP3 playing at a different volume over the top of the first.
I have found that the Android MediaPlayer class offers this functionality, and I have created a test application that does this by simply creating two instances of MediaPlayer.
For example...
MediaPlayer mMediaPlayer1, mMediaPlayer2;
mMediaPlayer1 = new MediaPlayer();
mMediaPlayer2 = new MediaPlayer();
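In full, the test sequence is essentially the following (the file paths and volume levels are just placeholders):

private void playBoth() throws java.io.IOException {
    mMediaPlayer1.setDataSource("/sdcard/Music/track1.mp3");
    mMediaPlayer1.prepare();
    mMediaPlayer1.setVolume(1.0f, 1.0f);  // full volume, left/right
    mMediaPlayer1.start();

    // Later, start the second track over the top at a lower volume.
    mMediaPlayer2.setDataSource("/sdcard/Music/track2.mp3");
    mMediaPlayer2.prepare();
    mMediaPlayer2.setVolume(0.3f, 0.3f);
    mMediaPlayer2.start();
}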
The problem I am having is that it works fine in the emulator and on most devices I try, but on a few test devices I get odd results when I try to start the second MediaPlayer/track.
What happens is that the volume of either the second or the first audio track suddenly drops to nothing. I can see that the MediaPlayer is still "playing", as I have several progress bars set up to track its progress, but you can't hear anything.
I've seen this on both a OnePlus One and a OnePlus X phone. On my Asus tablet and a Samsung A3 phone it works fine, though. It's not related to the OS version either, as I've tried it on 4.4.2, 5, 6 and 7 with mixed results. It definitely seems to be hardware related.
I've also seen related posts describing this issue but none so far with an answer as to what is causing it.
Can anyone explain this or shed any light on the problem, even if it is only to help me understand the limitations of what I am doing?
FYI - I did look at SoundPool but can't use it because the clips I am using are larger than 1 MB.
Thanks in advance...
For your goal of mixing music you can develop your own "mixer" that works with raw audio data.
The steps are:
extract the encoded audio data from the music file with MediaExtractor
decode those ByteBuffers with a decoder (MediaCodec)
mix one decoded buffer from the first audio stream with one decoded buffer from the second to get one mixed buffer (the mixing algorithm is sketched after this list)
play the mixed buffer with AudioTrack
It is a lot of work, but it will work anywhere!
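A minimal sketch of steps 3 and 4, mixing two decoded 16-bit PCM buffers and playing the result with AudioTrack; the sample rate and stereo layout are assumptions, and the decoded short[] buffers would come from your MediaCodec step:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

short[] mix(short[] a, short[] b) {
    int n = Math.min(a.length, b.length);
    short[] out = new short[n];
    for (int i = 0; i < n; i++) {
        int sum = a[i] + b[i];  // sum the two samples
        // clamp to the 16-bit range to avoid wrap-around distortion
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
    }
    return out;
}

AudioTrack createTrack(int sampleRate) {
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    return new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            minBuf, AudioTrack.MODE_STREAM);
}

// Usage: AudioTrack track = createTrack(44100); track.play();
// then for each pair of decoded buffers: short[] m = mix(bufA, bufB); track.write(m, 0, m.length);

Summing and clamping is the simplest mixing algorithm; if clipping becomes audible you can scale each input by 0.5 instead of clamping.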
Thanks for the suggestion. In the end I found a way around it. If you use the newer AudioAttributes API (API 21 and above) and set the FLAG_AUDIBILITY_ENFORCED flag, it seems to force the devices I was having issues with to play both streams; see the snippet below. Thanks for looking, folks!
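A minimal sketch of that fix, assuming API 21+ and the two MediaPlayer instances from the question; the usage and content-type values are my choices for a music app:

import android.media.AudioAttributes;

AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED)
        .build();
// Must be set while each player is idle, i.e. before prepare().
mMediaPlayer1.setAudioAttributes(attrs);
mMediaPlayer2.setAudioAttributes(attrs);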
I'm writing a video recording program, and it's going quite well. I can record the mic as well as video from the screen. However, I would also like to be able to capture the sound from another Java program and then sync it with the video. Basically, record the audio as it is played by the other program.
Is there a way to accomplish this? I'm pretty new with sound, and have read a bit up on it. I think I need to set up a mixer, but I'm not sure if I can actually obtain sound from another Java program that way.
This is not possible with Java Sound, not because of any particular problem with Java Sound, but because not all of the audio APIs that Java builds on support this feature (Core Audio on the Mac, for example, and ASIO on Windows; I'm not sure about ALSA on Linux, but I don't think it supports this either).
If you are on Windows and want to write JNI/JNA code, you can use PortAudio, which supports this on one of the Windows audio APIs (sorry, I can't recall which one).
My application takes a long time to prepare and buffer an audio stream. I have read this question, Why does it take so long for Android's MediaPlayer to prepare some live streams for playback?, but it only confirms that other people have experienced the issue; it does not say how to improve matters.
I am experiencing this on all versions of Android, tested from 2.2 to 4.1.2.
The streams are at a bit rate suitable for mobile and 3G connections. The same stream takes less than a second to start buffering in the equivalent iOS app.
Is there a way to specify the amount of time that should be buffered? I know that the Tune In radio application offers this feature ( https://play.google.com/store/apps/details?id=tunein.player ).
Thanks.
Edit: I've tested again and found that it only happens on devices running Gingerbread and above (>= 2.3). I know that Android changed the underlying media framework from OpenCore to Stagefright. So how can I optimise for the new media framework? It just seems wrong that the old HTC Wildfire can prepare, stream and play literally 10x faster than the brand-new HTC One X and Nexus 7.
I have struggled with this question for months. Finally I found the solution.
The real problem lies in the implementation of the MediaPlayer class, particularly in the way MediaPlayer buffers the data. That is why the solution is to do your own buffering: save the stream to a temp file and feed that to MediaPlayer.
This tutorial and source code explain exactly how: http://androidstreamingtut.blogspot.nl/2012/08/custom-progressive-audio-streaming-with.html
By adapting this code, it is easy to create a much better streaming player; the idea is roughly along the lines of the sketch below.
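A rough sketch of the temp-file idea, not the tutorial's exact code: download the stream into a local file and hand that file to MediaPlayer once enough data is on disk. The URL handling and the 256 KB threshold are placeholders, and this must run off the main thread:

import android.media.MediaPlayer;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

void bufferAndPlay(String streamUrl, File tempFile) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(streamUrl).openConnection();
    InputStream in = conn.getInputStream();
    OutputStream out = new FileOutputStream(tempFile);
    byte[] buf = new byte[8192];
    long written = 0;
    int n;
    MediaPlayer player = null;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
        written += n;
        // Start playback once a threshold amount of data is on disk.
        if (player == null && written > 256 * 1024) {
            out.flush();
            player = new MediaPlayer();
            FileInputStream fis = new FileInputStream(tempFile);
            player.setDataSource(fis.getFD());  // play from the growing file
            fis.close();  // safe to close once setDataSource returns
            player.prepare();
            player.start();
        }
    }
    out.close();
    in.close();
}

A full implementation also has to handle playback catching up with the end of the partially downloaded file; this sketch only shows the basic download-then-play mechanism.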
Google Developers really screwed up here.
EDIT: This answer is rather old. Nowadays I would recommend not using MediaPlayer and using ExoPlayer instead. It is extensible, stable and can play many different types of media. You can find it here: https://github.com/google/ExoPlayer/
There really isn't much you can do, since the Android MediaPlayer class doesn't provide access to lower-level settings such as the buffer size. The only alternative would be to make your own player using AudioTrack and a library like FFmpeg to do the decoding.
The one thing I'd recommend is to play around with the encoding. For instance, for MP4s, ensure that the moov atom is located at the beginning of the file (ffmpeg's -movflags +faststart option does this; there are enough questions on S/O about it). With MP3s, you can try different codecs or bit rates.
You can also try a number of audio files you find online, and if you find one that doesn't take long to buffer, try to encode your files in the same way.
Is it possible to get the raw audio being played by other apps? My idea is to create a visualizer, like the ones in iTunes or Windows Media Player, that will work with any app. I've looked around but haven't seen anything that would work. If anyone could point me in the right direction, I would really appreciate it.
You might be able to access the buffers via JNI and C++.
I've been banging my head against MIDI on the Android SDK all day and, although I have music running through JetCreator all right, there's still an issue where certain notes on certain instruments just aren't playing. It almost sounds as if the octave range is capped or something.
My general hypothesis is that the default DLS file Android uses doesn't support the full range of octaves, or something like that. I tried importing gm.dls from the Windows install into the JET file, but I had the same problem, so maybe that's not it. Programmatic music is very new to me, and there's very little user-developed documentation or help in this regard; apparently DLS is really old and everyone uses SoundFonts now, which raises the question of why JetCreator still uses DLS.
Anyway, I'm looking for a little guidance from someone who's familiar with the library.
Given your comment, I'm guessing that you probably tried to queue too many notes to the instruments at the same time. Did you manage to get the issue fixed?