Let's suppose we have the following scenario: something is playing on an Android device (an MP3, for example, but it could be anything that uses the audio part of an Android device). From an application (an Android application :) ), I would like to intercept the audio stream to analyze it, record it, and so on. From this application (let's call it "the analyzer") I don't want to start an MP3 or anything like that; all I want is access to Android's audio stream.
Any advice is appreciated; it could be a Java or C++ solution.
http://developer.android.com/reference/android/media/MediaRecorder.html
public class AudioRecorder {

    final MediaRecorder recorder = new MediaRecorder();
    final String path;

    /**
     * Creates a new audio recording at the given path (relative to root of SD
     * card).
     */
    public AudioRecorder(String path) {
        this.path = sanitizePath(path);
    }

    private String sanitizePath(String path) {
        if (!path.startsWith("/")) {
            path = "/" + path;
        }
        if (!path.contains(".")) {
            path += ".3gp";
        }
        return Environment.getExternalStorageDirectory().getAbsolutePath() + path;
    }

    /**
     * Starts a new recording.
     */
    public void start() throws IOException {
        String state = android.os.Environment.getExternalStorageState();
        if (!state.equals(android.os.Environment.MEDIA_MOUNTED)) {
            throw new IOException("SD Card is not mounted. It is " + state + ".");
        }

        // make sure the directory we plan to store the recording in exists
        File directory = new File(path).getParentFile();
        if (!directory.exists() && !directory.mkdirs()) {
            throw new IOException("Path to file could not be created.");
        }

        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(path);
        recorder.prepare();
        recorder.start();
    }

    /**
     * Stops a recording that has been previously started.
     */
    public void stop() throws IOException {
        recorder.stop();
        recorder.release();
    }
}
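For completeness, a hypothetical call site (the path and lifecycle here are assumptions, not part of the original answer; both calls may throw IOException):

// Records to <external storage>/recordings/demo.3gp, per sanitizePath() above.
AudioRecorder recorder = new AudioRecorder("recordings/demo");
recorder.start();
// ... later, e.g. from a stop button's click handler ...
recorder.stop();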
Consider using the AudioPlaybackCapture API, introduced in Android 10, if you want to get the audio stream of a particular app.
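A minimal sketch of that approach, assuming you have already obtained a MediaProjection through the usual screen-capture consent flow (and bearing in mind that only apps that allow capture, and only usages such as USAGE_MEDIA, can be captured; RECORD_AUDIO permission is also required):

// Capture audio that other apps play with USAGE_MEDIA (API 29+).
AudioPlaybackCaptureConfiguration config =
        new AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
                .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
                .build();
AudioRecord capture = new AudioRecord.Builder()
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                .build())
        .setAudioPlaybackCaptureConfig(config)
        .build();
capture.startRecording();
// Pull PCM with capture.read(...) and feed it to your analyzer.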
Related
I am using SSML so my app can speak. The app itself works perfectly fine on my phone, BUT when I connect my phone to a device over Bluetooth, there is usually a gap or a delay, either at the beginning or in the middle of the speech.
So for instance, when the audio is Hello John, I am your assistant. How can I help you?, the output could be sistant. How can I help you?. Sometimes the sentences are fluent, but sometimes there are these gaps.
This is how I play the audio file:
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.prepare();
mMediaPlayer.start();
And this is the entire class:
public class Tts {

    public Context context;
    private final MediaPlayer mMediaPlayer;

    public Tts(Context context, MediaPlayer mMediaPlayer) {
        this.context = context;
        this.mMediaPlayer = mMediaPlayer;
    }

    @SuppressLint({"NewApi", "ResourceType", "UseCompatLoadingForColorStateLists"})
    public void say(String text) throws Exception {
        InputStream stream = context.getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
        GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
        TextToSpeechSettings textToSpeechSettings =
                TextToSpeechSettings.newBuilder()
                        .setCredentialsProvider(
                                FixedCredentialsProvider.create(credentials)
                        ).build();

        // Instantiates a client
        try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
            // Replace {name} with target
            SharedPreferences sharedPreferences = context.getSharedPreferences("target", Context.MODE_PRIVATE);
            String target = sharedPreferences.getString("target", null);
            text = (target != null) ? text.replace("{name}", target) : text.replace("null", "");

            // Set the text input to be synthesized
            String myString = "<speak><prosody pitch=\"low\">" + text + "</prosody></speak>";
            SynthesisInput input = SynthesisInput.newBuilder().setSsml(myString).build();

            // Build the voice request, select the language code ("en-US") and the ssml voice gender
            // ("neutral")
            VoiceSelectionParams voice =
                    VoiceSelectionParams.newBuilder()
                            .setName("de-DE-Wavenet-E")
                            .setLanguageCode("de-DE")
                            .setSsmlGender(SsmlVoiceGender.MALE)
                            .build();

            // Select the type of audio file you want returned
            AudioConfig audioConfig =
                    AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();

            // Perform the text-to-speech request on the text input with the selected voice parameters and
            // audio file type
            SynthesizeSpeechResponse response = textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);

            // Get the audio contents from the response
            ByteString audioContents = response.getAudioContent();

            // Write the response to the output file.
            try (FileOutputStream out = new FileOutputStream(context.getFilesDir() + "/output.mp3")) {
                out.write(audioContents.toByteArray());
            }

            String myFile = context.getFilesDir() + "/output.mp3";
            mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder().setContentType(AudioAttributes.CONTENT_TYPE_MUSIC).build());
            mMediaPlayer.reset();
            mMediaPlayer.setDataSource(myFile);
            mMediaPlayer.prepare();
            mMediaPlayer.setOnPreparedListener(mediaPlayer -> mMediaPlayer.start());
        }
    }
}
The distance cannot be the reason, since my phone is right next to the device.
Google's speech synthesis needs an internet connection, so I am not quite sure whether the gap is caused by Bluetooth or by the internet connection.
So I am trying to close the gap, no matter what the reason is. The audio should only be played once it is prepared and ready.
What I tried
This is what I have tried, but I don't hear a difference:
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder().setContentType(AudioAttributes.CONTENT_TYPE_SPEECH).build());
Instead of mMediaPlayer.prepare(), I also tried mMediaPlayer.prepareAsync(), but then the audio is not played at all (or at least I can't hear it).
Invoking start() in a listener:
mMediaPlayer.setOnPreparedListener(mediaPlayer -> {
mMediaPlayer.start();
});
Unfortunately, the gap is sometimes still there.
Here is my proposed solution. Check out the // *** comments in the code to see what I changed with respect to your code from the question.
Also, take it with a grain of salt, because I have no way of testing it right now.
Nevertheless, as far as I can tell, that is all you can do using the MediaPlayer API. If that still doesn't work right with your Bluetooth device, try a different Bluetooth device. If that doesn't help either, you could switch the whole thing over to the AudioTrack API instead of MediaPlayer: it gives you a low-latency setting, and you could use the audio data directly from the response instead of writing it to a file and reading it back (see the sketch after the revised class below).
public class Tts {

    public Context context;
    private final MediaPlayer mMediaPlayer;

    public Tts(Context context, MediaPlayer mMediaPlayer) {
        this.context = context;
        this.mMediaPlayer = mMediaPlayer;
    }

    @SuppressLint({"NewApi", "ResourceType", "UseCompatLoadingForColorStateLists"})
    public void say(String text) throws Exception {
        InputStream stream = context.getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
        GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
        TextToSpeechSettings textToSpeechSettings =
                TextToSpeechSettings.newBuilder()
                        .setCredentialsProvider(
                                FixedCredentialsProvider.create(credentials)
                        ).build();

        // Instantiates a client
        try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
            // Replace {name} with target
            SharedPreferences sharedPreferences = context.getSharedPreferences("target", Context.MODE_PRIVATE);
            String target = sharedPreferences.getString("target", null);
            text = text.replace("{name}", (target != null) ? target : ""); // *** bug fixed

            // Set the text input to be synthesized
            String myString = "<speak><prosody pitch=\"low\">" + text + "</prosody></speak>";
            SynthesisInput input = SynthesisInput.newBuilder().setSsml(myString).build();

            // Build the voice request, select the language code ("en-US") and the ssml voice gender
            // ("neutral")
            VoiceSelectionParams voice =
                    VoiceSelectionParams.newBuilder()
                            .setName("de-DE-Wavenet-E")
                            .setLanguageCode("de-DE")
                            .setSsmlGender(SsmlVoiceGender.MALE)
                            .build();

            // Select the type of audio file you want returned
            AudioConfig audioConfig =
                    AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();

            // Perform the text-to-speech request on the text input with the selected voice parameters and
            // audio file type
            SynthesizeSpeechResponse response = textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);

            // Get the audio contents from the response
            ByteString audioContents = response.getAudioContent();

            // Write the response to the output file.
            try (FileOutputStream out = new FileOutputStream(context.getFilesDir() + "/output.mp3")) {
                out.write(audioContents.toByteArray());
            }

            String myFile = context.getFilesDir() + "/output.mp3";
            mMediaPlayer.reset();
            mMediaPlayer.setDataSource(myFile);
            mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder() // *** moved here (should be done before prepare and very likely AFTER reset)
                    .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH) // *** changed to speech
                    .setUsage(AudioAttributes.USAGE_ASSISTANT) // *** added
                    .setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED) // *** added
                    .build());
            mMediaPlayer.prepare();
            // *** following line changed since handler was defined AFTER prepare and
            // *** the prepare call isn't asynchronous, thus the handler would never be called.
            mMediaPlayer.start();
        }
    }
}
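For the AudioTrack route, here is a minimal sketch, assuming you request AudioEncoding.LINEAR16 from the Cloud TTS API instead of MP3 and reuse the response variable from say() above. The 24 kHz sample rate and the 44-byte WAV header skip are assumptions about the returned format; verify them against your actual response.

byte[] wav = response.getAudioContent().toByteArray();
// LINEAR16 responses come back as WAV; skip the header to get raw PCM
// (44 bytes is the standard PCM WAV header size, assumed here).
byte[] pcm = java.util.Arrays.copyOfRange(wav, 44, wav.length);
AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANT)
                .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(24000)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                .build())
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY) // API 26+
        .setTransferMode(AudioTrack.MODE_STATIC)
        .setBufferSizeInBytes(pcm.length)
        .build();
// In static mode the whole buffer is written first, then played.
track.write(pcm, 0, pcm.length);
track.play();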
Hope that gets you going!
I am trying to make a Discord bot that plays custom sounds. I put the sounds in an AWS S3 bucket and I can retrieve them, but I don't know how to stream them to Discord. I can stream audio files saved locally just fine; to stream local files I use lavaplayer.
This is how I get the file from the S3 bucket:
fullObject = s3Client.getObject(new GetObjectRequest("bucket-name", audioName));
System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
S3ObjectInputStream s3is = fullObject.getObjectContent();
This is how I play the local files with lavaplayer:
String toPlay = "SoundBoard" + File.separator + event.getArgs();
MessageChannel channel = event.getChannel();
AudioChannel myChannel = event.getMember().getVoiceState().getChannel();
AudioManager audioManager = event.getGuild().getAudioManager();

AudioPlayerManager playerManager = new DefaultAudioPlayerManager();
AudioPlayer player = playerManager.createPlayer();
AudioPlayerSendHandler audioPlayerSendHandler = new AudioPlayerSendHandler(player);
audioManager.setSendingHandler(audioPlayerSendHandler);
audioManager.openAudioConnection(myChannel);

TrackScheduler trackScheduler = new TrackScheduler(player);
player.addListener(trackScheduler);

playerManager.registerSourceManager(new LocalAudioSourceManager());
playerManager.loadItem(toPlay, new AudioLoadResultHandler() {
    @Override
    public void trackLoaded(AudioTrack track) {
        trackScheduler.addQueue(track);
    }

    @Override
    public void noMatches() {
        channel.sendMessage("audio not found").queue();
        trackScheduler.addQueue(null);
    }

    @Override
    public void loadFailed(FriendlyException throwable) {
        System.out.println("error " + throwable.getMessage());
    }
});
player.playTrack(trackScheduler.getTrack());
So is there a way to stream the files directly with lavaplayer, or in some other way? (I'm trying to avoid saving the audio to a file, playing it, and then deleting it.)
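One possible way around that, sketched under the assumption that the objects can be served over HTTPS: generate a presigned URL with the AWS SDK and hand it to lavaplayer's built-in HTTP source, so nothing is written to disk. The bucket name and audioName come from the question; the one-hour expiry and the resultHandler variable are assumptions.

// Time-limited, signed GET URL for the private S3 object.
java.util.Date expiration = new java.util.Date(System.currentTimeMillis() + 60 * 60 * 1000);
GeneratePresignedUrlRequest presignRequest =
        new GeneratePresignedUrlRequest("bucket-name", audioName)
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration);
URL presignedUrl = s3Client.generatePresignedUrl(presignRequest);

// Register the remote sources (which include the HTTP source manager)
// instead of only the LocalAudioSourceManager, then load the URL as usual.
AudioSourceManagers.registerRemoteSources(playerManager);
playerManager.loadItem(presignedUrl.toString(), resultHandler);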
I have looked at countless different StackOverflow answers as well as answers from other sites, but none of the solutions have fixed my problem. I cannot for the life of me get my .wav file to play.
Here is my code:
Sound class:
public class Sound {

    /**
     * Static file paths for each sound.
     */
    public static String stepSound = "/resources/step.wav";

    /**
     * Audio input stream for this sound.
     */
    private AudioInputStream audioInputStream;

    /**
     * Audio clip for this sound.
     */
    private Clip clip;

    /* -- Constructor -- */

    /**
     * Creates a new sound at the specified file path.
     *
     * @param path File path to sound file
     */
    public Sound(String path) {
        // Get the audio from the file
        try {
            // Convert the file path string to a URL
            URL sound = getClass().getResource(path);
            System.out.println(sound);
            // Get audio input stream from the file
            audioInputStream = AudioSystem.getAudioInputStream(sound);
            // Get clip resource
            clip = AudioSystem.getClip();
            // Open clip from audio input stream
            clip.open(audioInputStream);
        } catch (UnsupportedAudioFileException | IOException | LineUnavailableException e) {
            e.printStackTrace();
        }
    }

    /* -- Method -- */

    /**
     * Play the sound.
     */
    public void play() {
        // Stop clip if it's already running
        if (clip.isRunning())
            stop();
        // Rewind clip to beginning
        clip.setFramePosition(0);
        // Play clip
        clip.start();
    }

    /**
     * Stop the sound.
     */
    public void stop() {
        clip.stop();
    }
}
Constructor call that leads to error:
// Play step sound
new Sound(Sound.stepSound).play();
I know this isn't the first time a problem like this has been asked or answered on this website, but I've been trying other solutions for hours at this point and all I've found is pain and frustration. I can post more code if needed. Thanks in advance.
EDIT: I have unpacked the .jar file and confirmed that the file is indeed there. The problem is that the URL ends up being null, and so a NullPointerException is thrown.
EDIT #2: Added more code in case there's another problem.
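A small diagnostic sketch for that null URL, assuming the intended jar entry is resources/step.wav: getResource with a leading slash resolves against the classpath root, so the jar must contain exactly that entry name.

// Both forms below point at the same jar entry "resources/step.wav".
URL url = Sound.class.getResource("/resources/step.wav");
if (url == null) {
    // The class-loader form takes no leading slash.
    url = Sound.class.getClassLoader().getResource("resources/step.wav");
}
System.out.println(url); // still null => the entry name inside the jar differs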
I created a project that plays audio within the NetBeans IDE. The audio files were placed in the classes folder.
However, when I built it as a JAR file, it was unable to locate the audio files. I even copied and pasted the files into the new dist folder.
Here is a snippet of code:
private void playSound39()
{
    try
    {
        /** Sound player code from:
            http://alvinalexander.com/java/java-audio-example-java-au-play-sound
        */
        // the input stream portion of this recipe comes from a javaworld.com article.
        InputStream inputStream = getClass().getResourceAsStream("./beep39.wav");
        AudioStream audioStream = new AudioStream(inputStream);
        AudioPlayer.player.start(audioStream);
    }
    catch (Exception e)
    {
        JOptionPane.showMessageDialog(null, "Audio file not found!");
    }
}
If you want to embed the audio file in your program, it must be placed inside the src folder in a package.
For example, I'll demonstrate code I use to set icons on buttons (it should work for audio files as well).
While creating the JFrame I wrote:
jButton1.setIcon(new javax.swing.ImageIcon(getClass().getResource("/GUI/Icon/PatientBig.png")));
I have in my project a package called GUI with a subpackage called Icons where my icons live, and they are all in the src folder.
When using the getClass().getResource function, I prefer to use an absolute path.
After seeing your response, I noticed that you keep using . at the beginning of the classpath. I copied the snippet you published, removed the . from the beginning of the path, placed my audio file bark.wav in the src folder in the default package, and it worked:
public class test {
    private void playSound39() {
        try {
            /**
             * Sound player code from:
             * http://alvinalexander.com/java/java-audio-example-java-au-play-sound
             */
            // the input stream portion of this recipe comes from a javaworld.com article.
            InputStream inputStream = getClass().getResourceAsStream("/bark.wav");
            AudioStream audioStream = new AudioStream(inputStream);
            AudioPlayer.player.start(audioStream);
        } catch (Exception e) {
            JOptionPane.showMessageDialog(null, "Audio file not found!");
        }
    }

    public static void main(String[] args) {
        new test().playSound39();
    }
}
Then I placed the audio file inside a package called test1, modified the path in the getResourceAsStream function, and again it worked:
public class test {
    private void playSound39() {
        try {
            /**
             * Sound player code from:
             * http://alvinalexander.com/java/java-audio-example-java-au-play-sound
             */
            // the input stream portion of this recipe comes from a javaworld.com article.
            InputStream inputStream = getClass().getResourceAsStream("/test1/bark.wav");
            AudioStream audioStream = new AudioStream(inputStream);
            AudioPlayer.player.start(audioStream);
        } catch (Exception e) {
            JOptionPane.showMessageDialog(null, "Audio file not found!");
        }
    }

    public static void main(String[] args) {
        new test().playSound39();
    }
}
The most important thing is to remove the . from the beginning of the path.
Try this:
InputStream in = getClass().getResourceAsStream("/beep39.wav");
I think you need to bypass the use of the InputStream. When running the getAudioInputStream method, using an InputStream as the parameter triggers markability and resetability tests on the audio file. Audio files usually fail these tests. If you create your AudioInputStream with a URL or File parameter instead, these tests are circumvented. I prefer URL, as it seems more robust and can "see into" jars.
URL url = getClass().getResource("./beep39.wav");
AudioInputStream ais = AudioSystem.getAudioInputStream(url);
Then, in a while loop, you would execute a read method on the AudioInputStream and send the data to a SourceDataLine.
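A minimal sketch of that loop (the resource path is the one from the question; exception handling is omitted):

URL url = getClass().getResource("./beep39.wav");
AudioInputStream ais = AudioSystem.getAudioInputStream(url);
AudioFormat format = ais.getFormat();
SourceDataLine line = AudioSystem.getSourceDataLine(format);
line.open(format);
line.start();
byte[] buffer = new byte[4096];
int n;
while ((n = ais.read(buffer, 0, buffer.length)) != -1) {
    line.write(buffer, 0, n); // blocks until the chunk has been queued
}
line.drain(); // let the last buffered frames play out
line.close();
ais.close();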
The Java Tutorials covers this in their audio trail. This link jumps into the middle of the tutorials.
AFAIK, there is no "AudioPlayer" in the Java 7 SDK; AudioPlayer and AudioStream come from the internal sun.audio package, which was never part of the public API.
I have made an app that records sound and analyses it for frequency. This process is repeated a couple of times every second and thus uses threading.
This does work most of the time, but for some reason I get the following messages repeated in the logcat after the first analysis.
Rarely (but sometimes) when I test, the app records no sound, so I'm thinking it has something to do with this error.
01-23 13:52:03.414: E/AudioRecord(3647): Could not get audio input for record source 1
01-23 13:52:03.424: E/AudioRecord-JNI(3647): Error creating AudioRecord instance: initialization check failed.
01-23 13:52:03.424: E/AudioRecord-Java(3647): [ android.media.AudioRecord ] Error code -20 when initializing native AudioRecord object.
The code is below; does anyone have any idea where I'm going wrong? Am I not killing the AudioRecord object correctly? The code has been modified for ease of reading:
public class recorderThread extends AsyncTask<Sprite, Void, Integer> {

    short[] audioData;
    int bufferSize;

    @Override
    protected Integer doInBackground(Sprite... ball) {
        boolean recorded = false;
        int sampleRate = 8192;
        AudioRecord recorder = instantiateRecorder(sampleRate);

        while (!recorded) { // loop until recording is running
            if (recorder.getState() == android.media.AudioRecord.STATE_INITIALIZED) {
                // check to see if the recorder has initialized yet.
                if (recorder.getRecordingState() == android.media.AudioRecord.RECORDSTATE_STOPPED) {
                    // the Recorder has stopped or is not recording, so make it record.
                    recorder.startRecording();
                } else {
                    // read the PCM audio data into the audioData array
                    // get frequency
                    // checks if correct frequency, assigns number
                    int correctNo = correctNumber(frequency, note);
                    checkIfMultipleNotes(correctNo, max_index, frequency, sampleRate, magnitude, note);
                    if (audioDataIsNotEmpty())
                        recorded = true;
                    return correctNo;
                }
            } else {
                recorded = false;
                recorder = instantiateRecorder(sampleRate);
            }
        }

        if (recorder.getState() == android.media.AudioRecord.RECORDSTATE_RECORDING) {
            killRecorder(recorder);
        }
        return 1;
    }

    private void killRecorder(AudioRecord recorder) {
        recorder.stop(); // stop the recorder before ending the thread
        recorder.release(); // release the recorder's resources
        recorder = null; // set the recorder to be garbage collected
    }

    @Override
    protected void onPostExecute(Integer result) {
        ballComp.hitCorrectNote = result;
    }

    private AudioRecord instantiateRecorder(int sampleRate) {
        // get the buffer size to use with this audio record
        bufferSize = AudioRecord.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT) * 2;
        // instantiate the AudioRecord
        AudioRecord recorder = new AudioRecord(AudioSource.MIC, sampleRate, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        // short array that pcm data is put into.
        audioData = new short[bufferSize];
        return recorder;
    }
}
As your log says "Could not get audio input for record source 1", it means the Android device could not find any hardware to record the sound with.
So if you are testing the app in the emulator, make sure you have successfully attached a mic for recording the sound; and if you are debugging or running it on a real device, be sure that the mic is available so it can record the sound.
Hope it will help you.
Or
If the above does not solve your issue, then use the code below to record the sound, as it works perfectly for me.
Code:
record.setOnClickListener(new View.OnClickListener()
{
    boolean mStartRecording = true;

    public void onClick(View v)
    {
        if (mStartRecording)
        {
            // startRecording();
            haveStartRecord = true;
            String recordWord = wordValue.getText().toString();
            String file = Environment.getExternalStorageDirectory().getAbsolutePath();
            file = file + "/" + recordWord + ".3gp";
            System.out.println("Recording Start");

            // record.setText("Stop recording");
            record.setBackgroundDrawable(getResources().getDrawable(R.drawable.rec_on));

            mRecorder = new MediaRecorder();
            mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            mRecorder.setOutputFile(file);
            mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            // mRecorder.setAudioChannels(1);
            // mRecorder.setAudioSamplingRate(8000);

            try
            {
                mRecorder.prepare();
            }
            catch (IOException e)
            {
                Log.e(LOG_TAG, "prepare() failed");
            }
            mRecorder.start();
        }
        else
        {
            // stopRecording();
            System.out.println("Recording Stop");
            record.setBackgroundDrawable(getResources().getDrawable(R.drawable.rec_off));
            mRecorder.stop();
            mRecorder.release();
            mRecorder = null;
            haveFinishRecord = true;
        }
        mStartRecording = !mStartRecording;
    }
});
Hope this answer helps you.
Enjoy. :)
What stops you having two RecorderThreads running at the same time? Show the code that instantiates one of these objects, executes it, and of course waits for any previous RecorderThread to finish first.
If the answer is that nothing stops two RecorderThreads running at the same time, then your use of 'static' will obviously be a problem... a second thread will cause the first AudioRecord to be leaked while open. IMHO it's a good idea to try to avoid using static data.
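If nothing does, one hedged way to serialize the recordings is a single-threaded executor, so a new task cannot start before the previous one has released its AudioRecord. The call site below is an assumption based on the question's class:

private static final ExecutorService RECORDER_EXECUTOR = Executors.newSingleThreadExecutor();
// ...
// Tasks queued on the same single-threaded executor run strictly one after
// another, so the previous AudioRecord is stopped and released first.
new recorderThread().executeOnExecutor(RECORDER_EXECUTOR, ball);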
I had the same problem. And I solved it by adding
"<uses-permission android:name="android.permission.RECORD_AUDIO"></uses-permission>"
to the "AndroidManifest.xml"