JSyn DualOscilloscope: two open lines on my sound card - Java

I created a small program to record sound (I use JavaSound with a TargetDataLine to reach my sound card).
I did some testing with the JSyn class DualOscilloscope.java to get a visual display of the sound.
The problem is that this class opens its own line through the Synthesizer, so I end up with two lines tapping my sound card, which triggers an exception (you cannot open two lines on one sound card).
Is it possible to use my existing TargetDataLine instance to initialize the JSyn synthesizer?
The Latest JSyn JAR File
Source code of the DualOscilloscope class (author: Phil Burk):
protected void startAudio(int itemIndex) {
    // Both stereo.
    int numInputChannels = deviceMaxInputs.get(itemIndex);
    if (numInputChannels > 2)
        numInputChannels = 2;

    int inputDeviceIndex = deviceIds.get(itemIndex);
    synth.start(16000, inputDeviceIndex, numInputChannels,
            AudioDeviceManager.USE_DEFAULT_DEVICE, 0);

    channel1.output.connect(pass1.input);
    // Only connect second channel if more than one input channel.
    if (numInputChannels > 1) {
        channel2.output.connect(pass2.input);
    }

    // We only need to start the LineOut. It will pull data from the
    // channels.
    scope.start();
}

JSyn does not currently support being passed a TargetDataLine. You could, however, implement your own AudioDeviceManager based on the source code on GitHub. Replace JavaSoundAudioDevice.java with one that uses your TargetDataLine instead of creating a new one.
An easier way would be to let JSyn open the audio input and then use that input in your program. Don't open your own TargetDataLine.
You can use JSyn to process audio or to save it as a WAVE file. If you need to do custom processing then you could write a custom unit generator. Or you could use an AudioStreamReader to stream the audio data to your own thread.
AudioStreamReader reader = new AudioStreamReader(synth, 2); // stereo
lineIn.connect(0, reader.getInput(), 0);
lineIn.connect(1, reader.getInput(), 1);
Then you can read the data from that reader instead of from your own TargetDataLine.
reader.read(buffer, start, count);
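Putting those pieces together, a rough end-to-end sketch might look like the following. It is only a sketch: the class and package names follow the JSyn distribution (LineIn, AudioStreamReader, AudioDeviceManager), the sample rate and buffer size are arbitrary, and you would normally stop the loop and the synthesizer yourself.
import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.devices.AudioDeviceManager;
import com.jsyn.unitgen.LineIn;
import com.jsyn.util.AudioStreamReader;

public class CaptureFromJSyn {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = JSyn.createSynthesizer();
        LineIn lineIn = new LineIn();                 // JSyn opens the audio input, not you
        synth.add(lineIn);

        // Route both input channels into a reader your own code can poll.
        AudioStreamReader reader = new AudioStreamReader(synth, 2); // stereo
        lineIn.output.connect(0, reader.getInput(), 0);
        lineIn.output.connect(1, reader.getInput(), 1);

        // Two input channels, no output channels (compare startAudio() above).
        synth.start(44100,
                AudioDeviceManager.USE_DEFAULT_DEVICE, 2,
                AudioDeviceManager.USE_DEFAULT_DEVICE, 0);

        double[] buffer = new double[1024];
        while (true) {
            reader.read(buffer, 0, buffer.length);    // interleaved stereo samples
            // ... record or analyse the samples here ...
        }
    }
}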

Related

Getting recent bytes while recording audio on Android

I am working on an audio recording function. I want the recorded audio to be saved into the internal cache directory of my app so that I can later process it and send it to my server. I have added the RECORD_AUDIO permission to my Android manifest.
Below is the code I plan to use for recording audio and saving it to a file.
String uuid = UUID.randomUUID().toString();
fileName = getExternalCacheDir().getAbsolutePath() + "/" + uuid + ".3gp";

recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setOutputFile(fileName);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
try {
    recorder.prepare();
    recorder.start();
} catch (IOException e) {
    e.printStackTrace(); // don't swallow the failure silently
}
I expect the above code to work fine, but I am facing another issue. I want to create a waveform effect for my app, for which I am using this library. The library works with the code below:
//get a reference to the visualizer
mVisualizer = findViewById(R.id.blast);
//TODO: get the raw audio bytes
//pass the bytes to visualizer
mVisualizer.setRawAudioBytes(bytes);
Now, my question is: how can I get the bytes of the audio in real time while it is being recorded and saved? Should I read the file and extract the most recent bytes from it at regular intervals, or is there another way to achieve this?
Any help would be appreciated.
Thanks.
One approach is to record in small intervals, say one second (1000 ms) at a time: show the waveform for that chunk, then save the data you just captured. After saving, take the next chunk of input, and once you have built the waveform for (or done any other processing on) the new data, append it to the data saved previously.
Just do these things on separate threads; see the sketch below.
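If you need the bytes while the recording is still in progress, note that MediaRecorder itself never exposes them; a common workaround is to capture with AudioRecord and feed each chunk to the visualizer before persisting it. The following is only a rough sketch of the chunked approach, assuming it runs inside your Activity: recording is a volatile boolean you manage yourself, and writeChunkToFile() is a hypothetical helper that appends the chunk to your cache file.
int sampleRate = 44100;
int bufSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufSize);

audioRecord.startRecording();
new Thread(() -> {
    byte[] chunk = new byte[bufSize];
    while (recording) {                                   // your own volatile flag
        int read = audioRecord.read(chunk, 0, chunk.length);
        if (read > 0) {
            byte[] copy = Arrays.copyOf(chunk, read);     // snapshot before the next read overwrites it
            runOnUiThread(() -> mVisualizer.setRawAudioBytes(copy)); // update the waveform
            writeChunkToFile(copy);                       // hypothetical: append to the cache file
        }
    }
    audioRecord.stop();
    audioRecord.release();
}).start();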

Java: playing audio from a YouTube video

I'm thinking about coding a Java applet that would take the top 100 or so songs, find their samples (music that appears within the songs) from WhoSampled.com, and then play those samples off YouTube.
My problem is the playing part. Let's say I have the URL: what's the best way to deal with that in Java? Do you think ripping the audio and playing it from there would be best, or should I try to control an actual YouTube player?
I'm leaning towards extracting the audio, and this thread mentions a way to extract that audio; however, the code:
wget http://www.youtube.com/get_video.php?video_id=...
ffmpeg -i - audio.mp3
is not written in Java. How do I, if possible, convert this to run in a Java program? Or does anyone know a good way to do this in Java?
Thank you for your suggestions.
You can use an FFmpeg Java wrapper like this one: https://github.com/bramp/ffmpeg-cli-wrapper/
An example can be found in the README. Converting MP4 to MP3 should look like this:
FFmpeg ffmpeg = new FFmpeg("/path/to/ffmpeg");
FFprobe ffprobe = new FFprobe("/path/to/ffprobe");

FFmpegBuilder builder = new FFmpegBuilder()
        .setInput("input.mp4")          // filename, or an FFmpegProbeResult
        .overrideOutputFiles(true)      // override the output if it exists
        .addOutput("output.mp3")        // filename for the destination
        .setFormat("mp3")               // format is inferred from the filename, or can be set
        .setAudioCodec("libmp3lame")    // MP3 encoder (the aac codec would not fit an mp3 container)
        .setAudioSampleRate(48_000)     // at 48 kHz
        .setAudioBitRate(32768)         // at 32 kbit/s
        .done();

FFmpegExecutor executor = new FFmpegExecutor(ffmpeg, ffprobe);

// Run a one-pass encode
executor.createJob(builder).run();
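If you would rather run the command-line tools from the quoted thread directly instead of using a wrapper, ProcessBuilder can launch them from Java. This is only a sketch: videoUrl is a placeholder for whatever download URL you actually use, both wget and ffmpeg must be installed and on the PATH, and exception handling is omitted.
// Download the video first; videoUrl is a placeholder.
Process download = new ProcessBuilder("wget", "-O", "video.mp4", videoUrl)
        .inheritIO()
        .start();
download.waitFor();

// -vn drops the video stream so only the audio ends up in audio.mp3.
Process convert = new ProcessBuilder("ffmpeg", "-i", "video.mp4", "-vn", "audio.mp3")
        .inheritIO()
        .start();
convert.waitFor();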

Creating a sound file composed of smaller files

I am writing an app that takes Morse code, and plays it over the speakers.
Currently I am able to record audio over the microphone using this code:
public void startRecord() throws Exception {
    if (record != null) {
        record.release();
    }

    File fileOut = new File(FILE);
    if (fileOut.exists()) {
        fileOut.delete(); // delete any existing file at that location
    }

    record = new MediaRecorder();
    record.setAudioSource(MediaRecorder.AudioSource.MIC);
    record.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    record.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    record.setOutputFile(FILE);
    record.prepare();
    record.start();
}
and I am able to generate Morse code in a string formatted like this:
"-.... .---- -.... -.-. -.... ..... --... ---.."
I can iterate over this string using a for loop such as this:
char[] chars = message.toCharArray();
for (char ch : chars) {
    // add to audio file
}
But I am not sure how to create a file out of strung-together .wav files. I've seen some posts that mention setting the audio source as a file from the device, but I'm not sure how to pick which files to use and where to insert them, or how to compile it all into a single audio file.
Instead of creating a new sound file and playing that, it would probably be easier to just play each sound individually: when one sound finishes, play the next, or wait for a brief pause if the character is a space.
I think you are trying to do this the harder way. What if you were to simply have the program read the first letter, play the appropriate sound, then do the same for the next letter, and so on throughout the text? I believe that is much simpler. If you are really set on putting it all into one file, you could have the program create an empty file with the extension `.wav` or `.mp3` and do research into how those formats are encoded.
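For what it's worth, here is a rough Android sketch of that letter-by-letter idea. It assumes you bundle two short clips as R.raw.dit and R.raw.dah, that context is your Activity, and that message is the dot/dash string from the question; the sleep timings are illustrative, not standards-accurate.
SoundPool pool = new SoundPool.Builder().setMaxStreams(1).build();
final int dit = pool.load(context, R.raw.dit, 1);   // short beep
final int dah = pool.load(context, R.raw.dah, 1);   // long beep
// In real code, wait for SoundPool.OnLoadCompleteListener before playing.

new Thread(() -> {
    try {
        for (char ch : message.toCharArray()) {
            switch (ch) {
                case '.': pool.play(dit, 1f, 1f, 1, 0, 1f); Thread.sleep(200); break;
                case '-': pool.play(dah, 1f, 1f, 1, 0, 1f); Thread.sleep(400); break;
                case ' ': Thread.sleep(400); break;         // gap between letters
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();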

How does one use Java audio?

I am building a speech synthesizer, and everything works except the audio. I have a list of phonemes that are stored as .wav files, and I am calling them with AudioInputStreams, but they won't repeat. I have no idea what could be the issue, so any help would be appreciated.
The code that initializes a HashMap full of phones is
for (File phone : listOfFiles) {
    String path = phone.getPath();
    if (path.startsWith(".")) { continue; }
    path = path.replace(".wav", "").replace("phones/", "");
    AudioInputStream clip1 = AudioSystem.getAudioInputStream(phone);
    phonemes.put(path, clip1);
}
and the code that combines and outputs the sound is
public void speak(String[] input) {
    AudioInputStream phrase = phonemes.get(input[0]);
    AudioInputStream phone;

    for (int i = 1; i < input.length; i++) {
        phone = phonemes.get(input[i]);
        phrase = new AudioInputStream(
                new SequenceInputStream(phrase, phone),
                phrase.getFormat(),
                phrase.getFrameLength() + phone.getFrameLength());
    }

    try {
        Clip clip = AudioSystem.getClip();
        clip.open(phrase);
        clip.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
To replay a Clip, you have to stop it and reposition it, then start it. I don't think you can close and reopen a given Clip. But attempts to do that should have generated a LineUnavailableException, and you say you got no exceptions.
To troubleshoot, I'd first verify that it is possible to play the .wav files prior to placing them in the hash table. Sometimes an unexpected format (e.g., 24-bit or 32-bit encoding, or big-endian rather than little-endian) can lead to .wav files not playing.
If you are trying to concatenate a series of clips or audio data into a single clip, that could also be problematic. I think that AudioInputStream expects a single set of "header" data from the .wav file, but the SequenceInputStream could in effect be sending multiple "headers", one for each source file. I've never seen concatenation attempted like that before.
You might need to make your own data storage for the raw audio for each phoneme, and then build your combined phonemes from that rather than directly from .wav files. Instead of loading to Clips, load the raw PCM from the AudioInputStream into byte arrays. To output the raw audio bytes, you can use a SourceDataLine.
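A minimal sketch of that suggestion, assuming every phoneme .wav file shares the same AudioFormat (it uses javax.sound.sampled plus java.io.ByteArrayOutputStream; the phones/ path mirrors the question's code):
public void speak(String[] input) throws Exception {
    ByteArrayOutputStream pcm = new ByteArrayOutputStream();
    AudioFormat format = null;

    for (String name : input) {
        AudioInputStream ais = AudioSystem.getAudioInputStream(new File("phones/" + name + ".wav"));
        if (format == null) {
            format = ais.getFormat();   // assumes all phoneme files share this format
        }
        byte[] buf = new byte[4096];
        int n;
        while ((n = ais.read(buf)) > 0) {
            pcm.write(buf, 0, n);       // collect the raw PCM, one file's data after another
        }
        ais.close();
    }

    byte[] data = pcm.toByteArray();
    SourceDataLine line = AudioSystem.getSourceDataLine(format);
    line.open(format);
    line.start();
    line.write(data, 0, data.length);   // blocks until everything has been queued
    line.drain();
    line.close();
}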

Audio streaming from disk in Java servlets

To stream an audio file I have implemented the following code, but I am getting an exception:
javax.sound.sampled.UnsupportedAudioFileException: could not get audio input stream from input file
at javax.sound.sampled.AudioSystem.getAudioInputStream(AudioSystem.java:1170)
Can anyone help me please?
try {
    // From file
    System.out.println("hhhhhhhhhhhhhhhh");
    AudioInputStream stream = AudioSystem.getAudioInputStream(new File("C:\\track1.mp3"));
    System.out.println("stream created");
    AudioFormat format = stream.getFormat();
    if (format.getEncoding() != AudioFormat.Encoding.PCM_SIGNED) {
        System.out.println("in if");
        format = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                format.getSampleRate(),
                format.getSampleSizeInBits() * 2,
                format.getChannels(),
                format.getFrameSize() * 2,
                format.getFrameRate(),
                true); // big endian
        stream = AudioSystem.getAudioInputStream(format, stream);
    }

    // Create line
    SourceDataLine.Info info = new DataLine.Info(
            SourceDataLine.class, stream.getFormat(),
            ((int) stream.getFrameLength() * format.getFrameSize()));
    SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
    line.open(stream.getFormat());
    line.start();

    // Continuously read and play chunks of audio
    int numRead = 0;
    byte[] buf = new byte[line.getBufferSize()];
    while ((numRead = stream.read(buf, 0, buf.length)) >= 0) {
        int offset = 0;
        while (offset < numRead) {
            offset += line.write(buf, offset, numRead - offset);
        }
    }
    line.drain();
    line.stop();
} catch (Exception e) {
    e.printStackTrace();
}
That you're doing this job in a servlet class gives me the impression that your intent is to play the mp3 file whenever someone visits your website and that the visitor should hear this mp3 file.
If true, I'm sorry to say, but you're approaching this entirely wrong. Java servlet code runs on the webserver machine, not on the webbrowser machine. This way, whenever someone visits your website, the mp3 file would only be played on the webserver machine, which is usually a physically different machine at the other side of the network connection, so the visitor is never going to hear the music.
You want to send the mp3 file raw (unmodified, byte by byte) from the webserver to the webbrowser, without massaging it with some Java audio API, and instruct the webbrowser to play the file. The easiest way is to just drop the mp3 file in public webcontent (where your HTML/JSP files also are) and use the HTML <embed> tag to embed it in your HTML/JSP file. The example below assumes the MP3 file is in the same folder as the HTML/JSP file:
<embed src="file.mp3" autostart="true"></embed>
That's all; this is supported in practically every browser, and it will show a player as well.
If the MP3 file is, by business requirement, stored outside public webcontent, then you may indeed need a servlet for this, but the servlet should do absolutely nothing more than get an InputStream of it in some way and write it unmodified to the OutputStream of the HttpServletResponse, the usual Java IO way. You only need to set the HTTP Content-Type header to audio/mpeg beforehand and, if possible, also the HTTP Content-Length header. Then point the src to the servlet's URL instead.
<embed src="mp3servlet" autostart="true"></embed>
The default Java AudioInputStream does not support mp3 files. You have to plug in MP3SPI to let it decode mp3.
Also, what do you mean by streaming? This code will play the audio file, not stream it as in internet radio streaming.
