I'm trying to implement my own version of streaming. I'm sending byte arrays over a websocket. When the first message arrives, I write it to a temporary file and use Android's MediaPlayer to play that file. For the first message everything works fine: I turn the byte array into an mp3 and audio comes out. However, I'm not sure how to keep writing to the file each time a new message arrives.
Some example code:
File test;
FileOutputStream fos;
MediaPlayer mediaPlayer;
FileInputStream myFile;
Every time a message comes through, this code gets run:
try {
    if (fos == null) {
        // First message: create the temp file and start playback.
        // Note the leading dot; a suffix of "mp3" would not produce a .mp3 extension.
        test = File.createTempFile("TCL", ".mp3", getCacheDir());
        fos = new FileOutputStream(test);
        fos.write(bytearray);
        mediaPlayer = new MediaPlayer();
        myFile = new FileInputStream(test);
        mediaPlayer.setDataSource(myFile.getFD());
        mediaPlayer.prepare();
        if (!mediaPlayer.isPlaying()) {
            mediaPlayer.start();
        }
    } else {
        // Subsequent messages: append to the same file.
        fos.write(bytearray);
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
I thought I could just keep writing incoming byte arrays to the file, but that doesn't seem to be working. Any advice would be appreciated.
What you're trying to do (play the audio in a file that keeps growing indefinitely) is not supported by MediaPlayer. Instead, look into decoding the audio yourself and sending the raw PCM data to AudioTrack. It's a lot more work, but AudioTrack is the easiest way to progressively play a stream of audio data.
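For illustration, here is a minimal sketch of that approach, assuming your websocket chunks are already (or have been decoded to) 16-bit PCM; the 44.1 kHz stereo format is an assumption you would replace with your stream's actual parameters:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

class PcmStreamPlayer {
    private final AudioTrack track;

    PcmStreamPlayer() {
        // Assumed format: 44.1 kHz, stereo, 16-bit PCM -- match your real stream.
        int sampleRate = 44100;
        int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        int encoding = AudioFormat.ENCODING_PCM_16BIT;
        int bufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                channelConfig, encoding, bufferSize, AudioTrack.MODE_STREAM);
        track.play();
    }

    // Call this for every decoded PCM chunk that arrives over the websocket.
    void onPcmChunk(byte[] pcm) {
        track.write(pcm, 0, pcm.length); // blocks until buffer space frees up
    }
}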
I'm using the library org.gagravarr:vorbis-java-core:0.8 (https://github.com/Gagravarr/VorbisJava).
I want to get the PCM data from an OGG file and use AudioTrack to play it. Using AudioTrack is a requirement for me because I will later need to concatenate multiple PCM data while it's playing to have the smoothest playback.
As you can see below, I tried to set up AudioTrack with parameters matching the file, read the file's contents with the library, and write them directly into the AudioTrack, but the result is no audio when played.
I checked the loop and I'm sure the data is correctly being read.
AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build())
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY)
        .build();
FileInputStream fileInputStream = new FileInputStream(
        this.currentSong.getTrackFile("03")
);
OggFile oggFile = new OggFile(fileInputStream);
OggPacketReader oggPacketReader = oggFile.getPacketReader();
int written = 0;
while (true) {
    OggPacket oggPacket = oggPacketReader.getNextPacket();
    if (oggPacket == null) break;
    byte[] data = oggPacket.getData();
    // Note: the second argument is an offset into `data`, not into the track,
    // so it should be 0 rather than the running total.
    track.write(data, 0, data.length);
    written += data.length;
}
track.play();
Am I even using the appropriate library for this? I recently saw something called MediaCodec to use low-level codecs, but I'm not sure where to start...
Currently, AudioTrack doesn't support the Vorbis format, so you need to decode your source audio into raw PCM before feeding it into the AudioTrack.
This can be done using MediaExtractor and MediaCodec (as a decoder); a rough sketch follows the links below.
Links:
https://developer.android.com/reference/android/media/MediaExtractor
https://developer.android.com/reference/android/media/MediaCodec#asynchronous-processing-using-buffers
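A hedged sketch of that pipeline, as a synchronous MediaCodec loop. It assumes track 0 is the audio track and that track.play() has already been called; real code would also handle INFO_OUTPUT_FORMAT_CHANGED and errors:

import java.nio.ByteBuffer;
import android.media.AudioTrack;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;

// Decode the first audio track of `filePath` to PCM and stream it into `track`.
static void decodeToTrack(String filePath, AudioTrack track) throws Exception {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(filePath);
    MediaFormat format = extractor.getTrackFormat(0); // assumption: track 0 is audio
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    codec.configure(format, null, null, 0);
    codec.start();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10_000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = codec.getInputBuffer(inIndex);
                int size = extractor.readSampleData(inBuf, 0);
                if (size < 0) {
                    // No more compressed data: signal end of stream to the decoder.
                    codec.queueInputBuffer(inIndex, 0, 0, 0,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int outIndex = codec.dequeueOutputBuffer(info, 10_000);
        if (outIndex >= 0) {
            ByteBuffer outBuf = codec.getOutputBuffer(outIndex);
            byte[] pcm = new byte[info.size];
            outBuf.get(pcm);
            track.write(pcm, 0, pcm.length); // raw PCM goes to the AudioTrack
            codec.releaseOutputBuffer(outIndex, false);
            outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
        }
    }
    codec.stop();
    codec.release();
    extractor.release();
}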
I am trying to read a file into a byte buffer in Android. The application crashes whenever I initialize the buffer with a size equal to the size of the file. I have checked, and the file size is well below Integer.MAX_VALUE. Because of the project setup I have to test on a device, so I don't have access to logcat.
File outputdir = new File(localcontext.getFilesDir(), "appData");
if (!outputdir.exists()) {
    if (outputdir.mkdir()) {
        Toast.makeText(localcontext, outputdir.getAbsolutePath(), Toast.LENGTH_SHORT).show();
    }
}
tempfile = new File(outputdir, "runningfile.mp4");
bytebuffer = new byte[(int) encryptedfile.length()];
OutputStream os = new FileOutputStream(tempfile.getAbsolutePath(), false);
// DataInputStream dataInputStream = new DataInputStream(fis);
// dataInputStream.readFully(bytebuffer);
// dataInputStream.close();
The application runs fine and displays some message when I comment out the byte buffer initialization line but crashes otherwise.
I am unable to figure out what's wrong here. Please help. Thanks.
Use a try/catch block and display the error in a Toast:
try {
    bytebuffer = new byte[(int) encryptedfile.length()];
} catch (Throwable t) {
    // Catch Throwable here: a failed allocation throws OutOfMemoryError,
    // which catch (Exception e) would miss.
    Toast.makeText(getActivity(), t.getMessage(), Toast.LENGTH_LONG).show();
}
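If the Toast does show an OutOfMemoryError, a common workaround is to skip the single file-sized allocation and copy the data in fixed-size chunks instead. A minimal sketch, reusing the question's encryptedfile and tempfile:

// Copy the file in 8 KB chunks instead of one file-sized array.
try (InputStream in = new FileInputStream(encryptedfile);
     OutputStream os = new FileOutputStream(tempfile)) {
    byte[] chunk = new byte[8192];
    int read;
    while ((read = in.read(chunk)) > 0) {
        os.write(chunk, 0, read);
    }
} catch (IOException e) {
    Toast.makeText(getActivity(), e.getMessage(), Toast.LENGTH_LONG).show();
}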
I'm developing a sound recognition app on Android, using the MediaRecorder class and a TensorFlow model. I create the audio file where I save the recorded microphone audio in the class's onCreate method:
audioFile = getExternalCacheDir().getAbsolutePath();
audioFile += "/Recording.3gp";
And I set the output file of the mediaRecorder to this file in the startRecording method:
mediaRecorder.setOutputFile(audioFile);
The issue I'm having is that I need to convert the recording into a series of MFCC values for the model to work, and the MFCC.java class I'm using requires the recording to be converted to a double array. I'm doing that like this:
ByteArrayOutputStream out = new ByteArrayOutputStream();
BufferedInputStream in = new BufferedInputStream(new FileInputStream(audioFile));
int read;
byte[] buff = new byte[1024];
while ((read = in.read(buff)) > 0) {
    out.write(buff, 0, read);
}
out.flush();
byte[] bytes = out.toByteArray();
int times = Double.SIZE / Byte.SIZE;
double[] doubleArray = new double[bytes.length / times];
for (int i = 0; i < doubleArray.length; i++) {
    doubleArray[i] = ByteBuffer.wrap(bytes, i * times, times).getDouble();
}
which is how another Stack Overflow post said to do it. The problem is that the audio file I'm sending the recording(s) to just keeps appending new recordings to the previous ones, because I record the audio and pass it to my classifier method in a loop like so:
while (true) {
    try {
        soundRecognition task = new soundRecognition();
        task.execute();
        sleep(1500);
    } catch (InterruptedException e) {
        // catch added so the try block compiles; sleep() can be interrupted
        e.printStackTrace();
    }
}
Solutions I have tried
I have tried to move the creation of the audio file into the sound recognition class, but I can't do that: it produces errors, specifically mediaRecorder start called in invalid state: 4.
I have tried to overwrite the file using the FileWriter and PrintWriter classes, but that didn't work, I assume because it is an audio file.
Any help would be appreciated
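For what it's worth, the "start called in invalid state" error usually means the recorder was not taken back through its full lifecycle between recordings. A hedged sketch of one complete cycle, using standard android.media.MediaRecorder calls; the 3gp/AMR settings are assumptions matching the question's file name:

// One full record cycle; repeat reset() -> configure -> prepare() -> start() each time.
mediaRecorder.reset();                                       // back to the idle state
mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mediaRecorder.setOutputFile(audioFile);                      // replaces the previous recording
mediaRecorder.prepare();
mediaRecorder.start();
// ... record for a while ...
mediaRecorder.stop();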
I am developing a video capturing Android app.
My goal is to merge the captured video file with a given mp3 audio file. I am using FFmpeg to merge the files; the capturing itself is done with the android.media framework.
If I try to merge the files as described here:
How to multiplex mp3 and mp4 files in Android
I get the error: avcodec_open2() error -1: Could not open video codec.
Is there a way to convert the captured video file into a version FFmpeg can read?
Or is there any other way to solve this issue?
I don't want to capture the video with FFmpeg itself, because that would be too complex and not clean (in my opinion).
Hope that anybody can help :)
Thanks in advance.
I capture as proposed in the Android docs:
http://developer.android.com/guide/topics/media/camera.html
And this is how I try to merge:
try {
    FrameGrabber grabber1 = new FFmpegFrameGrabber(videoPath);
    FrameGrabber grabber2 = new FFmpegFrameGrabber(audioPath);
    grabber1.start();
    grabber2.start();

    FrameRecorder recorder = new FFmpegFrameRecorder(OutputPath,
            grabber1.getImageWidth(), grabber1.getImageHeight(), 2);
    recorder.setFormat("mp4");
    recorder.setVideoQuality(1);
    recorder.setFrameRate(grabber1.getFrameRate());
    recorder.setSampleRate(grabber2.getSampleRate());
    recorder.start();

    Frame frame1, frame2;
    while ((frame1 = grabber1.grabFrame()) != null) {
        recorder.record(frame1);
        // The audio may run out before the video does.
        frame2 = grabber2.grabFrame();
        if (frame2 != null) {
            recorder.record(frame2);
        }
    }
    recorder.stop();
    grabber1.stop();
    grabber2.stop();
} catch (org.bytedeco.javacv.FrameGrabber.Exception e) {
    e.printStackTrace();
} catch (Exception e1) {
    e1.printStackTrace();
}
try {
    // String location = dir1.getCanonicalPath() + "\\app_yamb_test1\\mySound.au";
    // displayMessage(location);
    AudioInputStream audio2 =
            AudioSystem.getAudioInputStream(getClass().getResourceAsStream("mySound.au"));
    Clip clip2 = AudioSystem.getClip();
    clip2.open(audio2);
    clip2.start();
} catch (UnsupportedAudioFileException uae) {
    System.out.println(uae);
    JOptionPane.showMessageDialog(null, uae.toString());
} catch (IOException ioe) {
    System.out.println("Couldn't find it");
    JOptionPane.showMessageDialog(null, ioe.toString());
} catch (LineUnavailableException lua) {
    System.out.println(lua);
    JOptionPane.showMessageDialog(null, lua.toString());
}
This code works fine when I run the application from netbeans. The sound plays and there are no exceptions. However, when I run it from the dist folder, the sound does not play and I get the java.io.IOException: mark/reset not supported in my message dialog.
How can I fix this?
The documentation for AudioSystem.getAudioInputStream(InputStream) says:
"The implementation of this method may require multiple parsers to examine the stream to determine whether they support it. These parsers must be able to mark the stream, read enough data to determine whether they support the stream, and, if not, reset the stream's read pointer to its original position. If the input stream does not support these operations, this method may fail with an IOException."
Therefore, the stream you provide to this method must support the optional mark/reset functionality. Decorate your resource stream with a BufferedInputStream.
//read audio data from whatever source (file/classloader/etc.)
InputStream audioSrc = getClass().getResourceAsStream("mySound.au");
//add buffer for mark/reset support
InputStream bufferedIn = new BufferedInputStream(audioSrc);
AudioInputStream audioStream = AudioSystem.getAudioInputStream(bufferedIn);
After floundering about for a while and referencing this page many times, I stumbled across this, which helped with my problem. I was initially able to load a wav file but could subsequently only play it once, because it could not be rewound due to the "mark/reset not supported" error. It was maddening.
The linked code reads an AudioInputStream from a file, then puts the AudioInputStream into a BufferedInputStream, then puts that back into the AudioInputStream like so:
audioInputStream = AudioSystem.getAudioInputStream(new File(filename));
BufferedInputStream bufferedInputStream = new BufferedInputStream(audioInputStream);
audioInputStream = new AudioInputStream(bufferedInputStream,
        audioInputStream.getFormat(), audioInputStream.getFrameLength());
And then finally it converts the read data to a PCM encoding:
audioInputStream = convertToPCM(audioInputStream);
With convertToPCM defined as:
private static AudioInputStream convertToPCM(AudioInputStream audioInputStream) {
    AudioFormat m_format = audioInputStream.getFormat();
    if ((m_format.getEncoding() != AudioFormat.Encoding.PCM_SIGNED)
            && (m_format.getEncoding() != AudioFormat.Encoding.PCM_UNSIGNED)) {
        AudioFormat targetFormat = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                m_format.getSampleRate(), 16,
                m_format.getChannels(), m_format.getChannels() * 2,
                m_format.getSampleRate(), m_format.isBigEndian());
        audioInputStream = AudioSystem.getAudioInputStream(targetFormat, audioInputStream);
    }
    return audioInputStream;
}
I believe they do this because BufferedInputStream supports mark/reset, which an AudioInputStream read directly from a file does not. Hope this helps somebody out there.
Just came across this question from someone else with the same problem who referenced it. Looks like this issue arose with Java 7.
Oracle Bug database, #7095006
A test, executed when an InputStream is the argument to the getAudioInputStream() method, is triggering the error. The existence of mark/reset capabilities in the audio resource file has no bearing on whether the Clip will load and play. Given that, there is no reason to prefer an InputStream as the argument when a URL or File suffices.
If we substitute a URL as the argument, this needless test is not executed. Revising the OP code:
AudioInputStream ais = AudioSystem.getAudioInputStream(getClass().getResource(fileName));
Details can be seen in the API, in the description text for the two forms.
AudioSystem.getAudioInputStream(InputStream)
AudioSystem.getAudioInputStream(URL)
The problem is that your input stream has to support the mark and reset methods. You can at least test whether mark is supported with InputStream#markSupported.
So you should maybe use a different InputStream.
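A minimal sketch of that check, wrapping the stream in a BufferedInputStream (which always supports mark/reset) only when needed; it mirrors the resource-loading snippet from the first answer:

InputStream in = getClass().getResourceAsStream("mySound.au");
if (!in.markSupported()) {
    in = new BufferedInputStream(in); // BufferedInputStream adds mark/reset support
}
AudioInputStream audio = AudioSystem.getAudioInputStream(in);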