The libraries I have found so far only have methods to decode from a file or an InputStream. I have a ByteBuffer with OGG Vorbis data, and I need it decoded to PCM without having to write it to a file first.
There seem to be 2 parts to this problem.
1) Getting Java Sound to deal with OGG Vorbis format.
2) Avoiding the File.
For (1), the Java Sound API allows the addition of extra formats via its Service Provider Interface (SPI). The idea is to put an encoder/decoder into a Jar and use a file at a standard path, in a standard format, to identify the class that does the encoding/decoding.
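For example, a decoder Jar typically registers its reader through a provider-configuration file on the classpath whose name is the service interface and whose content is the implementing class; the class name below is hypothetical:

META-INF/services/javax.sound.sampled.spi.AudioFileReader
    com.example.vorbis.VorbisAudioFileReader

An OGG-to-PCM decoder would register a javax.sound.sampled.spi.FormatConversionProvider in the same way.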
For (2), it is simply a matter of supplying an InputStream and the required AudioFormat to the relevant static methods of AudioSystem. For example:
// wrap the buffer's backing array in an InputStream
byte[] b = byteBuffer.array();
InputStream is = new ByteArrayInputStream(b);
// an installed SPI decoder (e.g. a Vorbis provider) parses the OGG stream
AudioInputStream aisOgg = AudioSystem.getAudioInputStream(is);
// convert the decoded stream to the desired PCM format
AudioInputStream aisPcm = AudioSystem.getAudioInputStream(pcmAudioFormat, aisOgg);
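If a plain byte array of PCM is needed rather than a stream, the decoded stream can simply be drained; a minimal sketch using the aisPcm variable from above:

ByteArrayOutputStream pcmOut = new ByteArrayOutputStream();
byte[] chunk = new byte[4096];
int n;
while ((n = aisPcm.read(chunk)) != -1) {
    pcmOut.write(chunk, 0, n);   // copy decoded PCM bytes
}
byte[] pcmBytes = pcmOut.toByteArray();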
You can use ByteArrayInputStream, which is a subclass of InputStream.
If your stream is very large, you will probably have to write it to a file.
I am using DicomDroid.jar to open a .dcm formatted image in my Android application. I get the following exception when trying to open it:
java.io.IOException: DICOM JPEG compression not yet supported
I am adding my code below:
try {
    // Read the image file into a byte array (data[])
    File imagefile = new File(path);
    byte[] data = new byte[(int) imagefile.length()];
    FileInputStream fis = new FileInputStream(imagefile);
    // read() may return before the whole file has been read, so loop until the buffer is full
    int offset = 0;
    while (offset < data.length) {
        int read = fis.read(data, offset, data.length - offset);
        if (read == -1) {
            break;
        }
        offset += read;
    }
    fis.close();

    // Create a DicomReader with the given data array (data[])
    DicomReader DR = new DicomReader(data);
} catch (Exception ex) {
    Log.e("ERROR", ex.toString());
}
What can be done to avoid this error?
Thanks in advance.
The cause is pretty obvious. That DICOM library doesn't support that particular kind of DICOM file.
There's not much you can do about it ... unless you are prepared to enhance the library yourself.
But I think you have probably made a mistake in setting up your instrument to generate DICOM files with JPEG compression. JPEG is lossy, and best practice is to capture and store images with the best resolution feasible. If you need to downgrade resolution to reduce bandwidth, it would be better to
save a high resolution DICOM,
convert the DICOM to a low resolution JPG, and
send the JPEG.
Another option is to get the DICOM file in an uncompressed format (e.g. Explicit VR Little Endian). This is the simplest DICOM file format, and every DICOM library supports it.
So, when you retrieve your DICOM file from your PACS, force this transfer syntax. That way, your DICOM library will be able to deal with the image file.
I'm reading a file line by line. The file was encrypted with a CipherOutputStream and then compressed with a DeflaterOutputStream. The file can contain UTF-8 characters, such as Russian letters.
I want to obtain the offset into the underlying file, i.e. the number of bytes consumed by each br.readLine() call. The problem is that the file is both encrypted and deflated, so the length of the String that is read is larger than the number of bytes read from the file.
InputStream fis = tempURL.openStream();                         // tempURL holds the URL to download
CipherInputStream cis = new CipherInputStream(fis, pbeCipher);  // decrypt
InflaterInputStream iis = new InflaterInputStream(cis);         // inflate
BufferedReader br = new BufferedReader(new InputStreamReader(iis, "UTF-8"));
br.readLine();
int fSize = tempURL.openConnection().getContentLength();        // get the file size
Use a CountingInputStream from the Apache Commons IO project:
InputStream fis=tempURL.openStream();
CountingInputStream countStream = new CountingInputStream(fis);
CipherInputStream cis=new CipherInputStream(countStream,pbeCipher);
...
Later you can obtain the file position with countStream.getByteCount().
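Put together with the chain from the question, a minimal sketch (using the same tempURL and pbeCipher):

InputStream fis = tempURL.openStream();
CountingInputStream countStream = new CountingInputStream(fis);
CipherInputStream cis = new CipherInputStream(countStream, pbeCipher);
InflaterInputStream iis = new InflaterInputStream(cis);
BufferedReader br = new BufferedReader(new InputStreamReader(iis, "UTF-8"));

String line = br.readLine();
// raw (encrypted + compressed) bytes consumed from the URL so far; because the
// reader and inflater buffer ahead, treat this as an approximate position per line
long consumed = countStream.getByteCount();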
For compressed files, you can find that a String doesn't map to a whole number of bytes, so the question cannot be answered exactly; e.g. a character can take less than a byte once compressed (otherwise there would be no point in trying to compress it).
BTW: it is usually best to compress the data before encrypting it, as the result will usually be much more compact. Compressing the data after it has been encrypted only helps if the encrypted output is base64-encoded or something similar. Compression works best when the contents are predictable (e.g. repeating sequences, common characters), whereas the purpose of encryption is to make the data appear unpredictable.
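To illustrate that ordering on the writing side, a minimal sketch (fileOut, cipher and textBytes stand in for whatever output stream, initialised Cipher and encoded data the application already has):

// plaintext -> deflate -> encrypt -> file: the deflater sees predictable
// plaintext, and only the compressed bytes get encrypted
OutputStream out = new DeflaterOutputStream(new CipherOutputStream(fileOut, cipher));
out.write(textBytes);   // textBytes: the UTF-8 encoded lines to store
out.close();            // finishes the deflater output and the cipher padding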
I am modifying an application that plays audio data to write the data to a file instead. As it is currently implemented, a byte array is filled dynamically, and the contents of this buffer are written to a SourceDataLine each time it is filled. I basically want to write that buffer out to a file in a specified format.
I have read through this official tutorial and came across this code snippet for writing audio data to a file:
File fileOut = new File(someNewPathName);
AudioFileFormat.Type fileType = fileFormat.getType();
if (AudioSystem.isFileTypeSupported(fileType, audioInputStream)) {
    AudioSystem.write(audioInputStream, fileType, fileOut);
}
I see from the API documentation that I can construct an AudioInputStream using a TargetDataLine, however in my case I have a SourceDataLine. I don't know how to get the data from my byte array into the TargetDataLine since it implements the read() method instead of write(). Other uses of the AudioInputStream in that and other documentation treat it as a way of reading from a file, so I'm a little confused by its use with AudioSystem.write().
So, how can I get the data from a SourceDataLine, or from the buffer directly, into a TargetDataLine or AudioInputStream so that it can be written out to a file?
Use the byte[] to establish a ByteArrayInputStream.
Provide the BAIS to AudioSystem.getAudioInputStream(InputStream).
Use the AIS in AudioSystem.write(..); a sketch of these steps is shown below.
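A minimal sketch of those steps. A raw sample buffer has no file header for AudioSystem.getAudioInputStream(InputStream) to parse, so this version wraps the bytes with the AudioInputStream constructor and an explicit AudioFormat; the buffer name and format values are assumptions, use whatever the SourceDataLine was opened with:

// audioBytes: the byte[] that was being fed to the SourceDataLine
AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);   // must match the captured data
ByteArrayInputStream bais = new ByteArrayInputStream(audioBytes);
// the AudioInputStream length is given in sample frames, not bytes
AudioInputStream ais = new AudioInputStream(bais, format, audioBytes.length / format.getFrameSize());
AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("capture.wav"));

If the buffer already begins with a recognised file header (WAV, AIFF, etc.), AudioSystem.getAudioInputStream(bais) will detect the format for you, as in step 2.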
Our current project requires us to send an audio file to the server and then use the audio file for further computation.
Using the Java Sound API, I was able to capture the recording and save it as a wav file on my system. Then, in order to pass the wav audio to the server, I am using Apache Commons HttpClient to post a request to the server (I am using the InputStreamEntity provided by Apache and sending the data chunked).
The problem appears when I am trying to recreate/retrieve the wav file on the server. I understand that I would have to use the AudioSystem.write API to create the wav file (exactly as was done on my system). However, what I observe is that although the file gets created, it does not play (I am using VLC media player to test it, FYI). I have searched Google for sample code and have tried to implement it, but I am unable to play the file once it gets created.
The following code snippets indicate the approaches I have tried:
//******************************************************************
try {
InputStream is = request.getInputStream();
FileOutputStream fs = new FileOutputStream("output123.wav");
byte[] tempbuffer = new byte[4096];
int bytesRead;
while((bytesRead=is.read(tempbuffer))!=-1)
{
fs.write(tempbuffer, 0,bytesRead);
}
is.close();
fs.close();
AudioInputStream inputStream = AudioSystem.getAudioInputStream(new File("output123.wav"));
int numofbytes = inputStream.available();
byte[] buffer = new byte[numofbytes];
inputStream.read(buffer);
int bytesWritten = AudioSystem.write(inputStream, AudioFileFormat.Type.WAVE,new File("outputtest.wav"));
System.out.println("written"+bytesWritten);
Approach 2
InputStream is = request.getInputStream();
System.out.println("inputStream obtained : "+is.toString());
ByteArrayInputStream bais = null;
byte[] audioBuffer = IOUtils.toByteArray(is);
System.out.println(" is audioBuffer empty? : length = ? "+audioBuffer.length);
try {
AudioFileFormat ai = AudioSystem.getAudioFileFormat(is);
System.out.println("ai bytelength ? "+ai.getByteLength());
System.out.println("ai frame length = "+ai.getFrameLength());
Set<Map.Entry<String,Object>> audioProperties = ai.getFormat().properties().entrySet();
System.out.println("entry set is empty ? "+audioProperties.isEmpty());
for(Map.Entry me : audioProperties){
System.out.println("key = "+me.getKey());
System.out.println("value ="+me.getValue());}
bais = new ByteArrayInputStream(audioBuffer);
AudioInputStream ais = new AudioInputStream(bais, new AudioFormat(8000,8,2,true,true), 2);
AudioSystem.write(ais, AudioFileFormat.Type.WAVE,new File("testtest.wav"));
//*************************************************************************************
The AudioFormat properties all turned out to be null. Are these null values causing the problem? While creating the wav file on the server, I tried to set the properties manually once again, but even then the wav file would not play.
I have also tried quite a few approaches already mentioned on this site, but somehow they aren't working. I am sure I am missing something, but I am unable to pinpoint the exact problem.
It would be really helpful if you could point out how to go about the conversion from a ServletInputStream to a wav file.
P.S. (1) I know the code is shabby, because I have been in a trial-and-error situation for quite some time now, but I will give more details on the approaches if needed.
(2) Apologies for the clumsiness; this happens to be my first post.
This is not how you copy a stream (from Approach 1); you have the correct code to copy a stream just above this:
int numofbytes = inputStream.available();
byte[] buffer = new byte[numofbytes];
inputStream.read(buffer);
If all your server wants to do is get the data and write it to a file, then you do not need to use any of the audio API: simply treat the data as a stream of bytes.
So the part of approach 1 that is before any mention of AudioInputStream should be sufficient.
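In other words, the copy loop from Approach 1 on its own is enough; a minimal restatement (the output file name is just a placeholder):

InputStream is = request.getInputStream();
FileOutputStream fs = new FileOutputStream("received.wav");
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = is.read(buffer)) != -1) {
    fs.write(buffer, 0, bytesRead);   // write the bytes exactly as they arrived
}
fs.close();
is.close();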
Although the approach chosen might not be the perfect solution, due to time constraints I adopted a simpler one. Using java.util.zip, I simply zipped the file up, sent it over to the server, and then wrote a layer where the file gets unzipped; after that I delete the zip files. It seems like an immature solution (because the original challenge was to send the audio file), and I now incur the overhead of zipping the files, but the file transfer happens relatively faster. Thanks for your help, guys.
I'm working on an application that has to process audio files. When using MP3 files I'm not sure how to handle the data (the data I'm interested in are the audio bytes, the ones that represent what we hear).
If I'm using a wav file, I know I have a 44-byte header and then the data. When it comes to an MP3, I've read that it is composed of frames, each frame containing a header and audio data. Is it possible to get all the audio data from an MP3 file?
I'm using Java (I've added MP3SPI, JLayer, and Tritonus) and I'm able to get the bytes from the file, but I'm not sure what these bytes represent or how to handle them.
From the documentation for MP3SPI:
File file = new File(filename);
AudioInputStream in= AudioSystem.getAudioInputStream(file);
AudioInputStream din = null;
AudioFormat baseFormat = in.getFormat();
AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
baseFormat.getSampleRate(),
16,
baseFormat.getChannels(),
baseFormat.getChannels() * 2,
baseFormat.getSampleRate(),
false);
din = AudioSystem.getAudioInputStream(decodedFormat, in);
You then just read data from din - it will be the "raw" data as per decodedFormat. (See the docs for AudioFormat for more information.)
(Note that this sample code doesn't close the stream or anything like that - use appropriate try/finally blocks as normal.)
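As a rough sketch of what reading from din looks like, assuming the in/din/decodedFormat variables above:

ByteArrayOutputStream pcm = new ByteArrayOutputStream();
byte[] buffer = new byte[4096];
int read;
while ((read = din.read(buffer)) != -1) {
    pcm.write(buffer, 0, read);   // raw PCM samples in decodedFormat
}
din.close();
in.close();
byte[] samples = pcm.toByteArray();
// with the format above, each sample is 16-bit signed little-endian,
// and one frame is getChannels() * 2 bytes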
The data that you want are the actual samples, while MP3 represents the data differently. So, as everyone else has said, you need a library to decode the MP3 data into actual samples for your purpose.
As mentioned in the other answers, you need a decoder to decode MP3 into regular audio samples.
One popular option would be JavaLayer (LGPL).