All of the questions regarding syncing audio and video when decoding with MediaCodec suggest that we should use an "AV sync" mechanism to sync the video and audio using their timestamps.
Here is what I do to achieve this:
I have 2 threads, one for decoding video and one for audio. To sync the video and audio I'm using MediaExtractor.getSampleTime() to decide whether I should release the audio or video buffers; please see below:
//This is called after configuring MediaCodec (both audio and video)
private void startPlaybackThreads(){
    //Audio playback thread
    mAudioWorkerThread = new Thread("AudioThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushAudio()) {
                        workLoopAudio();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mAudioWorkerThread.start();

    //Video playback thread
    mVideoWorkerThread = new Thread("VideoThread") {
        @Override
        public void run() {
            if (!Thread.interrupted()) {
                try {
                    //Check info below
                    if (shouldPushVideo()) {
                        workLoopVideo();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    };
    mVideoWorkerThread.start();
}
//Check if more buffers should be sent to the audio decoder
private boolean shouldPushAudio(){
    // getSampleTime() returns microseconds as a long; avoid casting to int,
    // which would overflow after roughly 35 minutes
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime <= videoTime;
}

//Check if more buffers should be sent to the video decoder
private boolean shouldPushVideo(){
    long audioTime = mAudioExtractor.getSampleTime();
    long videoTime = mExtractor.getSampleTime();
    return audioTime > videoTime;
}
Inside workLoopAudio() and workLoopVideo() is all my MediaCodec logic (I decided not to post it because it's not relevant).
So what I do is: I get the sample time of the video and the audio tracks, then check which one is bigger (further ahead). If the video is ahead, I pass more buffers to my audio decoder, and vice versa.
This seems to be working fine: the video and audio play in sync.
My question:
I would like to know if my approach is correct (is this how we should be doing it, or is there another/better way)? I could not find any working examples of this (written in Java/Kotlin), hence the question.
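For reference, the alternative I've seen described most often is to pace each decoded video frame against a clock derived from its presentationTimeUs, rather than gating the extractors against each other. A rough sketch of that idea (renderWhenDue is a hypothetical helper called from the video output loop, not code from my player):

// Anchor a wall clock to the first frame's timestamp, then sleep each
// subsequent frame until it is due before rendering it to the Surface.
private long startTimeNs = -1;

private void renderWhenDue(MediaCodec codec, int outIndex, MediaCodec.BufferInfo info) {
    if (startTimeNs < 0) {
        startTimeNs = System.nanoTime() - info.presentationTimeUs * 1000L;
    }
    long dueNs = startTimeNs + info.presentationTimeUs * 1000L;
    long waitMs = (dueNs - System.nanoTime()) / 1_000_000L;
    try {
        if (waitMs > 0) Thread.sleep(waitMs);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    codec.releaseOutputBuffer(outIndex, true); // true = render this buffer to the Surface
}

Real players usually anchor against the audio playback position rather than System.nanoTime(), but the shape is the same.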
EDIT 1:
I've found that the audio trails very slightly behind the video when I decode/play a video that was encoded using FFmpeg. If I use a video that was not encoded using FFmpeg, then the video and audio sync perfectly.
The FFmpeg command is nothing out of the ordinary:
-i inputPath -crf 18 -c:v libx264 -preset ultrafast OutputPath
I will be providing additional information below:
I initialize/create AudioTrack like this:
//Audio
mAudioExtractor = new MediaExtractor();
mAudioExtractor.setDataSource(mSource);
int audioTrackIndex = selectAudioTrack(mAudioExtractor);
if (audioTrackIndex < 0) {
    throw new IOException("Can't find Audio info!");
}
mAudioExtractor.selectTrack(audioTrackIndex);
mAudioFormat = mAudioExtractor.getTrackFormat(audioTrackIndex);
mAudioMime = mAudioFormat.getString(MediaFormat.KEY_MIME);
mAudioChannels = mAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
mAudioSampleRate = mAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);

final int min_buf_size = AudioTrack.getMinBufferSize(mAudioSampleRate, (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO), AudioFormat.ENCODING_PCM_16BIT);
final int max_input_size = mAudioFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
mAudioInputBufSize = min_buf_size > 0 ? min_buf_size * 4 : max_input_size;
if (mAudioInputBufSize > max_input_size) mAudioInputBufSize = max_input_size;
final int frameSizeInBytes = mAudioChannels * 2; // 16-bit PCM: 2 bytes per sample per channel
mAudioInputBufSize = (mAudioInputBufSize / frameSizeInBytes) * frameSizeInBytes; // round down to whole frames

mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        mAudioSampleRate,
        (mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO),
        AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize(mAudioSampleRate, mAudioChannels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);

try {
    mAudioTrack.play();
} catch (final Exception e) {
    Log.e(TAG, "failed to start audio track playing", e);
    mAudioTrack.release();
    mAudioTrack = null;
}
And I write to the AudioTrack like this:
//Called from within workLoopAudio, when releasing audio buffers
if (bufferAudioIndex >= 0) {
    if (mAudioBufferInfo.size > 0) {
        internalWriteAudio(mAudioOutputBuffers[bufferAudioIndex], mAudioBufferInfo.size);
    }
    mAudioDecoder.releaseOutputBuffer(bufferAudioIndex, false);
}

private boolean internalWriteAudio(final ByteBuffer buffer, final int size) {
    if (mAudioOutTempBuf.length < size) {
        mAudioOutTempBuf = new byte[size];
    }
    buffer.position(0);
    buffer.get(mAudioOutTempBuf, 0, size);
    buffer.clear();
    if (mAudioTrack != null)
        mAudioTrack.write(mAudioOutTempBuf, 0, size); // blocking write; this paces the audio loop in real time
    return true;
}
"NEW" Question:
The audio trails about 200 ms behind the video if I use a video that was encoded using FFmpeg. Is there a reason why this could be happening?
It seems to be working now. I use the same logic as above, but now I keep a reference to the presentationTimeUs from the MediaCodec.BufferInfo of the previously dequeued audio buffer, saved right before calling dequeueOutputBuffer again, and use it to check whether I should continue my video or audio work loop:
// Check if audio work loop should continue
private boolean shouldPushAudio(){
    long videoTime = mExtractor.getSampleTime();
    return tempAudioPresentationTimeUs <= videoTime;
}

// Check if video work loop should continue
private boolean shouldPushVideo(){
    long videoTime = mExtractor.getSampleTime();
    return tempAudioPresentationTimeUs >= videoTime;
}
// tempAudioPresentationTimeUs is set right before I call dequeueOutputBuffer,
// as shown here:
tempAudioPresentationTimeUs = mAudioBufferInfo.presentationTimeUs;
int outIndex = mAudioDecoder.dequeueOutputBuffer(mAudioBufferInfo, timeout);
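For context, this is roughly where those two lines sit inside the audio work loop (a minimal sketch; I did not post the real workLoopAudio, so everything around the two quoted lines is illustrative):

// Hypothetical sketch of one audio work-loop iteration.
private void workLoopAudio() {
    while (shouldPushAudio()) {
        // ... feed encoded samples from mAudioExtractor into mAudioDecoder ...

        // save the previous output buffer's timestamp BEFORE dequeueing again
        tempAudioPresentationTimeUs = mAudioBufferInfo.presentationTimeUs;
        int outIndex = mAudioDecoder.dequeueOutputBuffer(mAudioBufferInfo, timeout);
        if (outIndex >= 0) {
            // ... write the decoded PCM to the AudioTrack ...
            mAudioDecoder.releaseOutputBuffer(outIndex, false);
        }
    }
}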
By doing this, my video and audio are synced perfectly, even with files that were encoded with FFmpeg (as mentioned in my edit above).
I ran into an issue where my video work loop didn't complete. This was caused by the audio reaching EOS before the video, after which getSampleTime() returns -1. So I changed my original mVideoWorkerThread to the following:
mVideoWorkerThread = new Thread("VideoThread") {
    @Override
    public void run() {
        if (!Thread.interrupted()) {
            try {
                if (shouldPushVideo() || audioReachedEOS()) {
                    workLoopVideo();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
};
mVideoWorkerThread.start();

private boolean audioReachedEOS() {
    return mAudioExtractor.getSampleTime() == -1;
}
So I use audioReachedEOS() to check whether my audio MediaExtractor returns -1. If it does, the audio is done but the video is not, so I continue the video work loop until it finishes.
This seems to be working as expected (when I only play/pause the video without seeking). I had another issue with seeking, but I will not elaborate on this.
I will release my application as is and update this answer when I run into problems.
Related
I'm writing a function to capture an audio clip of roughly 7.5 seconds using a TargetDataLine. The code executes and produces an 'input.wav' file, but when I play it there is no sound.
My approach, as shown in the code at the bottom of this post, is to do the following things:
Create an AudioFormat and get the Info for a Target Data Line.
Create the Target Data Line by getting the line from AudioSystem.
Open and Start the TargetDataLine, which allocates system resources for recording.
Create an auxiliary Thread that will record audio by writing to a file.
Start the auxiliary Thread, pause the main Thread in the meantime, and then close out the Target Data Line in order to stop recording.
What I have tried so far:
Changing the AudioFormat. Initially, I was using the other AudioFormat constructor, which takes the encoding as well (where the first argument is AudioFormat.Encoding.PCM_SIGNED etc). I had a sample rate of 44100 Hz, 16 bits, 2 channels and little-endian settings on the other format, which yielded the same result.
Changing the order of commands on my auxiliary and main Thread (i.e. performing tLine.open() or start() in alternate locations).
Checking that my auxiliary thread does actually start.
For reference, I am using IntelliJ on macOS Big Sur.
public static void captureAudio() {
    try {
        AudioFormat f = new AudioFormat(22050, 8, 1, false, false);
        DataLine.Info secure = new DataLine.Info(TargetDataLine.class, f);
        if (!AudioSystem.isLineSupported(secure)) {
            System.err.println("Unsupported Line");
        }
        TargetDataLine tLine = (TargetDataLine) AudioSystem.getLine(secure);
        System.out.println("Starting recording...");
        tLine.open(f);
        tLine.start();
        File writeTo = new File("input.wav");

        Thread t = new Thread() {
            public void run() {
                try {
                    AudioInputStream is = new AudioInputStream(tLine);
                    AudioSystem.write(is, AudioFileFormat.Type.WAVE, writeTo);
                } catch (IOException e) {
                    System.err.println("Encountered system I/O error in recording:");
                    e.printStackTrace();
                }
            }
        };
        t.start();
        Thread.sleep(7500);
        tLine.stop();
        tLine.close();
        System.out.println("Recording has ended.");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Update 1: Some new testing and results
My microphone and speakers both work with other applications; I recorded working audio with QuickTime Player.
I did a lot of testing around what my TargetDataLines are and what the deal is with them. I ran the following code:
public static void main(String[] args) {
    AudioFormat f = new AudioFormat(48000, 16, 2, true, false);
    //DataLine.Info inf = new DataLine.Info(SourceDataLine.class, f);
    try {
        TargetDataLine line = AudioSystem.getTargetDataLine(f);
        DataLine.Info test = new DataLine.Info(TargetDataLine.class, f);
        TargetDataLine other = (TargetDataLine) AudioSystem.getLine(test);
        String output = line.equals(other) ? "Yes" : "No";
        if (output.equals("No")) {
            System.out.println(other.toString());
        }
        System.out.println(line.toString());
        System.out.println("_______________________________");
        for (Mixer.Info i : AudioSystem.getMixerInfo()) {
            Line.Info[] tli = AudioSystem.getMixer(i).getTargetLineInfo();
            if (tli.length != 0) {
                Line comp = AudioSystem.getLine(tli[0]);
                System.out.println(comp.toString() + ":" + i.getName());
                if (comp.equals(line) || comp.equals(other)) {
                    System.out.println("The TargetDataLine is from " + i.getName());
                }
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Long story short, the TargetDataLines I receive from doing
TargetDataLine line = AudioSystem.getTargetDataLine(f); and
TargetDataLine other = (TargetDataLine)AudioSystem.getLine(new DataLine.Info(TargetDataLine.class, f));
are different and, furthermore, don't match any of the TargetDataLines that are associated with my system's mixers.
The output of the above code was this (where the first two lines are other and line respectively):
com.sun.media.sound.DirectAudioDevice$DirectTDL#cc34f4d
com.sun.media.sound.DirectAudioDevice$DirectTDL#17a7cec2
_______________________________
com.sun.media.sound.PortMixer$PortMixerPort#79fc0f2f:Port MacBook Pro Speakers
com.sun.media.sound.PortMixer$PortMixerPort#4d405ef7:Port ZoomAudioDevice
com.sun.media.sound.DirectAudioDevice$DirectTDL#3f91beef:Default Audio Device
com.sun.media.sound.DirectAudioDevice$DirectTDL#1a6c5a9e:MacBook Pro Microphone
com.sun.media.sound.DirectAudioDevice$DirectTDL#37bba400:ZoomAudioDevice
Upon this realization I manually loaded up all the TargetDataLines from my mixers and tried recording audio with each of them to see if I got any sound.
I used the following method to collect all the TargetDataLines:
public static ArrayList<Line.Info> allTDL() {
    ArrayList<Line.Info> all = new ArrayList<>();
    for (Mixer.Info i : AudioSystem.getMixerInfo()) {
        Line.Info[] tli = AudioSystem.getMixer(i).getTargetLineInfo();
        if (tli.length != 0) {
            for (int f = 0; f < tli.length; f += 1) {
                all.add(tli[f]);
            }
        }
    }
    return all;
}
My capture/record audio method remained the same, except for switching the format to AudioFormat f = new AudioFormat(48000, 16, 2, true, false);, changing the recording time to 5000 milliseconds, and changing the method header to public static void recordAudio(Line.Info inf) so I could load each TargetDataLine individually with its info.
I then executed the following code to rotate TargetDataLines:
public static void main(String[] args) {
    for (Line.Info inf : allTDL()) {
        recordAudio(inf);
        try {
            Thread.sleep(5000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (!soundless(loadAsBytes("input.wav"))) {
            System.out.println("The recording with " + inf.toString() + " has sound!");
        } else {
            System.out.println("The last recording with " + inf.toString() + " was soundless.");
        }
    }
}
The output was as such:
Recording...
Was unable to cast com.sun.media.sound.PortMixer$PortMixerPort#506e1b77 to a TargetDataLine.
End recording.
The last recording with SPEAKER target port was soundless.
Recording...
Was unable to cast com.sun.media.sound.PortMixer$PortMixerPort#5e9f23b4 to a TargetDataLine.
End recording.
The last recording with ZoomAudioDevice target port was soundless.
Recording...
End recording.
The last recording with interface TargetDataLine supporting 8 audio formats, and buffers of at least 32 bytes was soundless.
Recording...
End recording.
The last recording with interface TargetDataLine supporting 8 audio formats, and buffers of at least 32 bytes was soundless.
Recording...
End recording.
The last recording with interface TargetDataLine supporting 14 audio formats, and buffers of at least 32 bytes was soundless.
TL;DR the audio came out soundless for every TargetDataLine.
For completeness, here are the soundless and loadAsBytes functions:
public static byte[] loadAsBytes(String name) {
    assert name.contains(".wav");
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    File retrieve = new File("src/" + name);
    try {
        InputStream input = AudioSystem.getAudioInputStream(retrieve);
        int read;
        byte[] b = new byte[1024];
        while ((read = input.read(b)) > 0) {
            out.write(b, 0, read);
        }
        out.flush();
        byte[] full = out.toByteArray();
        return full;
    } catch (UnsupportedAudioFileException e) {
        System.err.println("The File " + name + " is unsupported on this system.");
        e.printStackTrace();
    } catch (IOException e) {
        System.err.println("Input-Output Exception on retrieval of file " + name);
        e.printStackTrace();
    }
    return null;
}

static boolean soundless(byte[] s) {
    if (s == null) {
        return true;
    }
    for (int i = 0; i < s.length; i += 1) {
        if (s[i] != 0) {
            return false;
        }
    }
    return true;
}
I'm not really sure what the issue could be at this point, save for an operating-system quirk that doesn't allow Java to access audio lines, and I do not know how to fix that: looking at System Preferences, there isn't any obvious way to grant access. I think it might have to be done with terminal commands, but I'm not sure precisely which commands I'd have to execute there.
I'm not seeing anything wrong in the code you are showing, though I haven't tried testing it on my system (Linux, Eclipse).
It seems to me your code closely matches this tutorial. The author, Nam Ha Minh, is exceptionally conscientious about answering questions. You might try his exact code example and consult with him if his version also fails for you.
But first, what is the size of the resulting .wav file? Does the file size match the amount of data expected for the duration you are recording? If so, are you sure you have data coming in from your microphone? Nam has another code example where recorded sound is progressively read and placed into memory. Basically, instead of passing the AudioInputStream to the AudioSystem.write method, you execute multiple read calls on the AudioInputStream and inspect the incoming data directly. That can help you determine whether the problem occurs on the incoming or the outgoing part of the process.
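A rough sketch of that read-and-inspect idea (my paraphrase, not Nam's code; tLine and the byte budget are placeholders):

import java.io.ByteArrayOutputStream;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.TargetDataLine;

// Pull data from the line with explicit read() calls so the incoming
// bytes can be inspected before anything is written to disk.
static byte[] readAndInspect(TargetDataLine tLine, int totalBytes) throws Exception {
    AudioInputStream ais = new AudioInputStream(tLine);
    ByteArrayOutputStream captured = new ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    boolean sawNonZero = false;
    int n;
    while (captured.size() < totalBytes && (n = ais.read(chunk, 0, chunk.length)) > 0) {
        captured.write(chunk, 0, n);
        for (int i = 0; i < n && !sawNonZero; i++) {
            sawNonZero = chunk[i] != 0; // any non-zero byte means the mic is delivering data
        }
    }
    System.out.println("Saw non-zero samples: " + sawNonZero);
    return captured.toByteArray();
}

If sawNonZero never becomes true here, the problem is on the capture side rather than in the file writing.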
I'm not knowledgeable enough about formats to know if the Mac does things differently. I'm surprised you are setting the format to unsigned. For my limited purposes, I stick with "CD quality stereo" and signed PCM at all junctures.
EDIT: based on feedback, it seems that the problem is that the incoming line is not returning data. From looking at other, similar tutorials, it seems that several people have had the same problem on their Mac systems.
First thing to verify: does your microphone work with other applications?
As for next steps, I would try verifying the chosen line. The lines that are exposed to Java can be enumerated and inspected. The tutorial Accessing Audio System Resources has some basic information on how to do this. It looks like AudioSystem.getMixerInfo() will return a list of available mixers that can be inspected. Maybe AudioSystem.getTargetLineInfo() would be more to the point.
I suppose it is possible that the default Line or Port being used when you obtain a TargetDataLine isn't the one that is running the microphone. If a particular line or port turns out to be the one you need, then it can be specified explicitly via an overloaded getTargetDataLine method.
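For example, something along these lines (a sketch; the mixer name "MacBook Pro Microphone" would come from your own enumeration output above):

import javax.sound.sampled.*;

// Ask a specific mixer for the TargetDataLine instead of relying on the default.
static TargetDataLine lineFromMixer(String mixerName, AudioFormat f) throws LineUnavailableException {
    for (Mixer.Info info : AudioSystem.getMixerInfo()) {
        if (info.getName().contains(mixerName)) {
            // the overloaded form mentioned above
            return AudioSystem.getTargetDataLine(f, info);
        }
    }
    throw new LineUnavailableException("No mixer matching " + mixerName);
}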
I'm reading that there might be a security policy that needs to be handled. I don't fully understand the code, but if that were the issue, an Exception presumably would have been thrown. Perhaps there are new security measures in macOS to prevent an external program from opening a mic line surreptitiously?
If you do get this solved, be sure and post the answer and mark it solved. This seems to be a live question for many people.
We have an Android application that works with a network camera.
Our main problem is that the video is displayed with artifacts: most of the screen is covered in green squares. When you move your hand in front of the camera, the squares disappear, but the video still shows artifacts. We have checked the buffer length, packet sizes and many other parameters, and now we have no idea what is wrong.
I will describe the whole process:
The camera works over the SIP protocol. Following SIP, we collect the SDP data and establish the connection. We discovered that the video is transmitted as H.264 baseline profile in RTP packets. We receive the UDP packets, extract the RTP, and look at the RTP headers.
We receive packets with NAL types 7 and 8 (SPS and PPS). These two packets we use to configure MediaCodec:
private void initMedia(ByteBuffer header_sps, ByteBuffer header_pps) {
    try {
        mMediaCodec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        //mMediaCodec = MediaCodec.createByCodecName("OMX.google.h264.decoder");
        MediaFormat mediaFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 640, 480);
        mediaFormat.setByteBuffer("csd-0", header_sps);
        mediaFormat.setByteBuffer("csd-1", header_pps);
        mMediaCodec.configure(mediaFormat, videoView.getHolder().getSurface(), null, 0);
        mMediaCodec.start();
        mConfigured = true;
        startMs = System.currentTimeMillis();
        show.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
We also receive packets of type 28 (FU-A), which means the NAL unit is fragmented and we have to reconstruct it:
public ByteBuffer writeRawH264toByteBuffer() throws IOException, NotImplementedException {
    ByteBuffer res = null;
    switch (nal.getType()) {
        case NAL.FU_A: //FU-A, 5.8. Fragmentation Units (FUs)/rfc6184
            FUHeader fu = getFUHeader();
            if (fu.isFirst()) {
                //if(debug) System.out.println("first");
                res = ByteBuffer.allocate(5 + getH264PayloadLength());
                res.put(H264RTP.NON_IDR_PICTURE);
                res.put(getReconstructedNal());
                res.put(rtp.getBuffer(), getH264PayloadStart(), getH264PayloadLength());
            } else {
                //if(debug) System.out.println("end");
                res = ByteBuffer.allocate(getH264PayloadLength());
                res.put(rtp.getBuffer(), getH264PayloadStart(), getH264PayloadLength());
            }
            break;
        case NAL.SPS: //Sequence parameter set
        case NAL.PPS: //Picture parameter set
        case NAL.NAL_UNIT:
            res = ByteBuffer.allocate(4 + getH264PayloadLength());
            //System.out.println("sps or pps write");
            res.put(H264RTP.NON_IDR_PICTURE);
            res.put(rtp.getBuffer(), rtp.getPayloadStart(), rtp.getPayloadLength());
            break;
        default:
            throw new NotImplementedException("NAL type " + getNAL().getType() + " not implemented");
    }
    return res;
}
NON_IDR_PICTURE is the byte array {0x00, 0x00, 0x00, 0x01} (the Annex B start-code prefix).
We use a VideoView to display the video on the Android device.
This code writes the packets:
if (mConfigured) {
    int index = mMediaCodec.dequeueInputBuffer(mTimeoutUsDegueueInput);
    if (index >= 0) {
        ByteBuffer buffer = mMediaCodec.getInputBuffer(index);
        //buffer.clear();
        int capacity = wrapper.getByPayload().writeRawH264toByteBuffer(buffer);
        mMediaCodec.queueInputBuffer(index, 0, capacity, wrapper.getSequence(), 0);
    }
}
and this one drains the decoder output and renders it to the surface (in a separate thread):
while (true) {
    if (mConfigured) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int index = mMediaCodec.dequeueOutputBuffer(info, mTimeoutUsDegueueOutput);
        if (index >= 0) {
            mMediaCodec.releaseOutputBuffer(index, info.size > 0);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) == MediaCodec.BUFFER_FLAG_END_OF_STREAM) {
                break;
            }
        }
    } else {
        try {
            Thread.sleep(10);
        } catch (InterruptedException ignore) {
        }
    }
}
Now I have no idea why the video is corrupted with artifacts, or what to debug next.
Example of video:
screen of video
The problem was with the FU-A reconstruction.
The problem was in this line:
int capacity = wrapper.getByPayload().writeRawH264toByteBuffer(buffer);
An FU-A packet has to be reconstructed into a full NAL unit first, and only then put into the decoder.
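In other words, fragments must be accumulated until the fragment carrying the FU-A end bit arrives, and only the completed NAL unit is queued. A minimal sketch (FUHeader.isLast() and queueToDecoder() are assumptions here; the question's own code only shows isFirst()):

import java.io.ByteArrayOutputStream;

// Collect FU-A fragments (RFC 6184, section 5.8) into one complete NAL unit.
private final ByteArrayOutputStream fuBuffer = new ByteArrayOutputStream();
private static final byte[] START_CODE = {0x00, 0x00, 0x00, 0x01}; // Annex B prefix

void onFuA(FUHeader fu, byte reconstructedNalHeader, byte[] rtpPayload, int offset, int length) {
    if (fu.isFirst()) {
        fuBuffer.reset();
        fuBuffer.write(START_CODE, 0, START_CODE.length);
        fuBuffer.write(reconstructedNalHeader); // (F|NRI from the FU indicator) | original NAL type
    }
    fuBuffer.write(rtpPayload, offset, length); // append each fragment's payload in order
    if (fu.isLast()) {
        queueToDecoder(fuBuffer.toByteArray()); // hypothetical helper: one full NAL per input buffer
    }
}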
I am creating an application to play an audio file (WAV).
Playback works perfectly, and I added a small feature to make a forward "leap" of X seconds via the skip() method of AudioInputStream; this also works perfectly.
My problem is achieving the backward "leap" of X seconds. I again use the skip() method, this time with a negative number, and it seems to work. But afterwards the stream never reads to the end of the file: I lose X seconds at the end, playback stops, and audioInputStream.read returns -1 as if playback were finished, although it actually was not. (A sketch of one possible workaround appears after the code below.)
Here is a sample of my code:
while ((((bytesRead = this.audioInputStream.read(audioDataFull, 0, audioDataFull.length)) != -1) && (!this.leaveThread))) {
    line.write(audioDataFull, 0, bytesRead);
    this.nb++;
    while (this.audioWav.isPause()) {
        if (test) {
            try {
                // negative skip: not guaranteed to work by the InputStream contract
                this.audioInputStream.skip((long) -(this.audioInputStream.getFormat().getSampleRate()
                        * (this.audioInputStream.getFormat().getSampleSizeInBits() / 8) * TraitementXML.tmpXml.getBufferRelecture()));
                test = false;
            } catch (final Exception e) {
                this.audioInputStream.skip(-(this.nb * bytesRead));
                this.nb = 0;
                test = false;
            }
        }
        try {
            Thread.sleep(1000);
        } catch (final InterruptedException e) {
            logger.error(e.getMessage(), e);
        }
    }
}
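As mentioned above, a sketch of one possible workaround: instead of passing a negative count to skip(), whose behavior is not guaranteed by the InputStream contract, reopen the stream and skip forward to the target frame (reopenAt is a hypothetical helper, not part of the code above):

import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

// Reopen the file and skip forward to an absolute frame position.
static AudioInputStream reopenAt(File file, long targetFrame) throws Exception {
    AudioInputStream in = AudioSystem.getAudioInputStream(file);
    AudioFormat fmt = in.getFormat();
    long bytesToSkip = targetFrame * fmt.getFrameSize();
    long skipped = 0;
    while (skipped < bytesToSkip) { // skip() may skip fewer bytes than requested
        long s = in.skip(bytesToSkip - skipped);
        if (s <= 0) break;
        skipped += s;
    }
    return in;
}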
I have implemented a simple program that takes input from a MIDI keyboard and outputs the corresponding sound using the javax.sound.midi Synthesizer interface.
This works really well on my PC running Windows; however, I want to run it on a Raspberry Pi running Raspbian. It actually does work too, but as soon as I play more/faster notes, the sound starts to jitter and crackle really badly, and I have to stop playing notes for about 2 seconds for the jittering to die down.
I am already using an external USB sound adapter, which did not really help.
Here is the class that handles the MIDI input:
public class MidiHandler {

    public MidiHandler() {
        MidiDevice device;
        MidiDevice.Info[] infos = MidiSystem.getMidiDeviceInfo();
        for (int i = 0; i < infos.length; i++) {
            try {
                device = MidiSystem.getMidiDevice(infos[i]);
                // does the device have any transmitters?
                // if it does, add it to the device list
                System.out.println(infos[i]);

                // get all transmitters
                List<Transmitter> transmitters = device.getTransmitters();
                // and for each transmitter
                for (int j = 0; j < transmitters.size(); j++) {
                    // create a new receiver
                    transmitters.get(j).setReceiver(
                            // using my own MidiInputReceiver
                            new MidiInputReceiver(device.getDeviceInfo()
                                    .toString()));
                }

                // note: this obtains an additional transmitter, so messages are
                // delivered to a second receiver on top of those set above
                Transmitter trans = device.getTransmitter();
                trans.setReceiver(new MidiInputReceiver(device.getDeviceInfo()
                        .toString()));

                // open each device
                device.open();
                // if code gets this far without throwing an exception
                // print a success message
            } catch (MidiUnavailableException e) {
            }
        }
    }

    // tried to write my own class. I thought the send method handles any
    // MidiEvents sent to it
    public class MidiInputReceiver implements Receiver {
        Synthesizer synth;
        MidiChannel[] mc;
        Instrument[] instr;
        int instrument;
        int channel;

        public MidiInputReceiver(String name) {
            try {
                patcher p = new patcher();
                this.instrument = p.getInstrument();
                this.channel = p.getChannel();
                this.synth = MidiSystem.getSynthesizer();
                this.synth.open();
                this.mc = synth.getChannels();
                instr = synth.getDefaultSoundbank().getInstruments();
                this.synth.loadInstrument(instr[1]);
                mc[this.channel].programChange(0, this.instrument);
                System.out.println(this.channel + ", " + this.instrument);
            } catch (MidiUnavailableException e) {
                e.printStackTrace();
                System.exit(1);
            }
        }

        public void send(MidiMessage msg, long timeStamp) {
            /*
             * Use to display midi message
             *
            for(int i = 0; i < msg.getMessage().length; i++) {
                System.out.print("[" + msg.getMessage()[i] + "] ");
            }
            System.out.println();
            */
            // 0x90 (-112 as a signed byte) = note on, channel 0
            if (msg.getMessage()[0] == -112) {
                // note: MIDI velocity is 0-127, so the +1000 offset looks suspect
                mc[this.channel].noteOn(msg.getMessage()[1], msg.getMessage()[2] + 1000);
            }
            // 0x80 (-128 as a signed byte) = note off, channel 0
            if (msg.getMessage()[0] == -128) {
                mc[this.channel].noteOff(msg.getMessage()[1], msg.getMessage()[2] + 1000);
            }
        }

        public void close() {
        }
    }
}
Is this due to hardware limitations of the Pi, or can I do anything about it?
If you have any loops updating the MIDI retrieval/sound output, maybe try thread-sleeping in there to give the OS time to do its stuff? Other than that, I'm not sure. The USB on the original Raspberry Pi wasn't very good for some reason (lots of bugs, slow performance, though these did get fixed somewhat in newer Linux/firmware). You may also need to modify the sample rate to match the ideal setting for the current sound output, if it's accessible (a sample-rate mismatch means more conversion). Java may try to use the ideal as the default, but it may be misreported by the OS.
So, I'm working on a project for class wherein we have to have a game with background music. I'm trying to play a .wav file as background music, but since I can't use Clips (too short for a music file) I have to play it via an AudioInputStream and a SourceDataLine.
In my first implementation, the game would hang until the song finished, so I moved playback into its own thread to try to alleviate that. Currently, the game plays very slowly while the song plays. I'm not sure what I need to do to make this thread play nicely with my animator thread, because we were never formally taught threads. Below is my background-music player class; please tell me what I've done wrong that makes it hog all the system resources.
public class BGMusicPlayer implements Runnable {
    File file;
    AudioInputStream in;
    SourceDataLine line;
    int frameSize;
    byte[] buffer = new byte[32 * 1024];
    Thread player;
    boolean playing = false;
    boolean fileNotOver = true;

    public BGMusicPlayer(File inputFile) {
        try {
            file = inputFile;
            in = AudioSystem.getAudioInputStream(inputFile);
            AudioFormat format = in.getFormat();
            frameSize = format.getFrameSize();
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            line = (SourceDataLine) AudioSystem.getLine(info);
            line.open();
            player = new Thread(this);
            player.start();
        } catch (Exception e) {
            System.out.println("That is not a valid file. No music for you.");
        }
    }

    public void run() {
        int readPoint = 0;
        int bytesRead = 0;
        player.setPriority(Thread.MIN_PRIORITY);
        while (fileNotOver) {
            if (playing) {
                try {
                    bytesRead = in.read(buffer,
                            readPoint,
                            buffer.length - readPoint);
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                if (bytesRead == -1) {
                    fileNotOver = false;
                    break;
                }
                int leftover = bytesRead % frameSize;
                // send to line
                line.write(buffer, readPoint, bytesRead - leftover);
                // save the leftover bytes
                System.arraycopy(buffer, bytesRead,
                        buffer, 0,
                        leftover);
                readPoint = leftover;
                try {
                    Thread.sleep(20);
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }
    }

    public void start() {
        playing = true;
        if (!player.isAlive())
            player.start();
        line.start();
    }

    public void stop() {
        playing = false;
        line.stop();
    }
}
You are pretty close, but there are a couple of unusual things that may be contributing to the performance problem.
First off, if you are just playing back a .wav, there shouldn't really be a need to deal with any "readPoint" other than 0, and there shouldn't really be a need for a "leftover" computation: AudioInputStream.read() already returns an integral number of frames, so the write should simply use the same number of bytes that were read in (the return value of the read() method).
I'm also unclear why you are doing the System.arraycopy. Can you lose that?
Setting the thread to low priority and putting in a sleep: I guess you were hoping those would slow down the audio processing to give your game more time? I've never seen this done before, and it is really unusual if it is truly needed. I really recommend getting rid of these as well.
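Concretely, the body of run() can be about this simple (a sketch reusing the in and line fields from your class; exception handling and the pause flag omitted):

// Simplified loop: read a chunk, then write exactly what was read.
// AudioInputStream.read() returns whole frames, so no leftover handling is needed,
// and SourceDataLine.write() blocks when the line is full, so no sleep is needed.
byte[] buffer = new byte[32 * 1024];
int bytesRead;
line.start();
while ((bytesRead = in.read(buffer, 0, buffer.length)) != -1) {
    line.write(buffer, 0, bytesRead);
}
line.drain();
line.close();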
I'm curious where your audio file is coming from. You're not streaming it over the web, are you?
By the way, the way you get your input from a File and place it into an AudioInputStream very likely won't work with Java 7. A lot of folks are reporting a bug with that. It turns out it is more correct and efficient to generate a URL from the File, and then get the AudioInputStream using the URL as the argument rather than the File. The error that can come up is a "mark/reset" error. (A search on that will show it's come up a number of times here.)
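That is, something like the following one-line change (assuming inputFile is the File passed to your constructor):

// Obtain the stream via a URL rather than the File directly; this sidesteps
// the "mark/reset not supported" error some people hit on Java 7.
in = AudioSystem.getAudioInputStream(inputFile.toURI().toURL());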