Stream audio from AWS S3 to Discord in Java

I am trying to make a Discord bot that plays custom sounds. I put the sounds in an AWS S3 bucket and I can retrieve them, but I don't know how to stream them to Discord. I can stream locally saved audio files just fine; to stream local files I use lavaplayer.
This is how I get the file from the S3 bucket:
fullObject = s3Client.getObject(new GetObjectRequest("bucket-name", audioName));
System.out.println("Content-Type: " + fullObject.getObjectMetadata().getContentType());
S3ObjectInputStream s3is = fullObject.getObjectContent();
This is how I play the local files with lavaplayer:
String toPlay = "SoundBoard" + File.separator + event.getArgs();
MessageChannel channel = event.getChannel();
AudioChannel myChannel = event.getMember().getVoiceState().getChannel();
AudioManager audioManager = event.getGuild().getAudioManager();
AudioPlayerManager playerManager = new DefaultAudioPlayerManager();
AudioPlayer player = playerManager.createPlayer();
AudioPlayerSendHandler audioPlayerSendHandler = new AudioPlayerSendHandler(player);
audioManager.setSendingHandler(audioPlayerSendHandler);
audioManager.openAudioConnection(myChannel);
TrackScheduler trackScheduler = new TrackScheduler(player);
player.addListener(trackScheduler);
playerManager.registerSourceManager(new LocalAudioSourceManager());
playerManager.loadItem(toPlay, new AudioLoadResultHandler() {
@Override
public void trackLoaded(AudioTrack track) {
trackScheduler.addQueue(track);
}
@Override
public void noMatches() {
channel.sendMessage("audio not found").queue();
trackScheduler.addQueue(null);
}
@Override
public void loadFailed(FriendlyException throwable) {
System.out.println("error " + throwable.getMessage());
}
});
player.playTrack(trackScheduler.getTrack());
So is there a way to stream the files directly with lavaplayer, or in another way? (I'm trying to avoid saving the audio to a file, playing it, and then deleting it.)
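One approach that could avoid the temp-file round trip, sketched below and untested: generate a pre-signed URL for the S3 object and let lavaplayer's built-in HTTP source stream it directly. The helper class and method names here are made up for illustration; only generatePresignedUrl (AWS SDK) and HttpAudioSourceManager (lavaplayer) are real API.
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.sedmelluq.discord.lavaplayer.player.AudioLoadResultHandler;
import com.sedmelluq.discord.lavaplayer.player.AudioPlayerManager;
import com.sedmelluq.discord.lavaplayer.source.http.HttpAudioSourceManager;
import java.net.URL;
import java.util.Date;

public class S3TrackLoader {
    // Streams the S3 object over HTTPS; nothing is written to disk.
    public static void loadFromS3(AmazonS3 s3Client, AudioPlayerManager playerManager,
                                  String bucket, String key, AudioLoadResultHandler handler) {
        // Pre-signed GET URL, valid for five minutes.
        Date expiration = new Date(System.currentTimeMillis() + 5 * 60 * 1000);
        URL presignedUrl = s3Client.generatePresignedUrl(bucket, key, expiration, HttpMethod.GET);
        playerManager.registerSourceManager(new HttpAudioSourceManager());
        playerManager.loadItem(presignedUrl.toString(), handler);
    }
}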

Related

Gaps in audio when connecting to a Bluetooth device

I am using SSML so my app can speak. The app itself works perfectly fine on my phone, but when I connect my phone to a device over Bluetooth, there is usually a gap or a delay, either at the beginning or in the middle of the speech.
So for instance, when the audio is "Hello John, I am your assistant. How can I help you?", the output could be "sistant. How can I help you?". Sometimes the sentences are fluent, but sometimes there are these gaps.
This is how I play the audio file:
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.prepare();
mMediaPlayer.start();
And this is the entire class of it:
public class Tts {
public Context context;
private final MediaPlayer mMediaPlayer;
public Tts(Context context, MediaPlayer mMediaPlayer) {
this.context = context;
this.mMediaPlayer = mMediaPlayer;
}
@SuppressLint({"NewApi", "ResourceType", "UseCompatLoadingForColorStateLists"})
public void say(String text) throws Exception {
InputStream stream = context.getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
TextToSpeechSettings textToSpeechSettings =
TextToSpeechSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials)
).build();
// Instantiates a client
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
// Replace {name} with target
SharedPreferences sharedPreferences = context.getSharedPreferences("target", Context.MODE_PRIVATE);
String target = sharedPreferences.getString("target", null);
text = (target != null) ? text.replace("{name}", target) : text.replace("null", "");
// Set the text input to be synthesized
String myString = "<speak><prosody pitch=\"low\">" + text + "</prosody></speak>";
SynthesisInput input = SynthesisInput.newBuilder().setSsml(myString).build();
// Build the voice request, select the language code ("en-US") and the ssml voice gender
// ("neutral")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setName("de-DE-Wavenet-E")
.setLanguageCode("de-DE")
.setSsmlGender(SsmlVoiceGender.MALE)
.build();
// Select the type of audio file you want returned
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response = textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// Write the response to the output file.
try (FileOutputStream out = new FileOutputStream(context.getFilesDir() + "/output.mp3")) {
out.write(audioContents.toByteArray());
}
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder().setContentType(AudioAttributes.CONTENT_TYPE_MUSIC).build());
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.prepare();
mMediaPlayer.setOnPreparedListener(mediaPlayer -> mMediaPlayer.start());
}
}
}
The distance cannot be the reason, since my phone is right next to the device.
Google's TTS (which processes the SSML) needs an internet connection, so I am not quite sure whether the gap is caused by Bluetooth or by the internet connection.
Either way, I am trying to close the gap, no matter what the reason is. The audio should only be played when it is prepared and ready to be played.
What I tried
This is what I have tried but I don't hear a difference:
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder().setContentType(AudioAttributes.CONTENT_TYPE_SPEECH).build());
Instead of mMediaPlayer.prepare(), I also tried it with mMediaPlayer.prepareAsync() but then the audio will not be played (or at least I can't hear it).
Invoking start() in a listener:
mMediaPlayer.setOnPreparedListener(mediaPlayer -> {
mMediaPlayer.start();
});
Unfortunately, the gap is sometimes still there.
Here is my proposed solution. Check out the // *** comments in the code to see what I changed with respect to your code from the question.
Also take it with a grain of salt, because I have no way of testing that right now.
Nevertheless, as far as I can tell, that is all you can do using the MediaPlayer API. If it still doesn't work right with your Bluetooth device, try a different Bluetooth device. If that doesn't help either, you could switch the whole thing to the AudioTrack API instead of MediaPlayer, which gives you a low-latency setting, and you could then use the audio data directly from the response instead of writing it to a file and reading it back again.
public class Tts {
public Context context;
private final MediaPlayer mMediaPlayer;
public Tts(Context context, MediaPlayer mMediaPlayer) {
this.context = context;
this.mMediaPlayer = mMediaPlayer;
}
@SuppressLint({"NewApi", "ResourceType", "UseCompatLoadingForColorStateLists"})
public void say(String text) throws Exception {
InputStream stream = context.getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
TextToSpeechSettings textToSpeechSettings =
TextToSpeechSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials)
).build();
// Instantiates a client
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
// Replace {name} with target
SharedPreferences sharedPreferences = context.getSharedPreferences("target", Context.MODE_PRIVATE);
String target = sharedPreferences.getString("target", null);
text = text.replace("{name}", (target != null) ? target : ""); // *** bug fixed
// Set the text input to be synthesized
String myString = "<speak><prosody pitch=\"low\">" + text + "</prosody></speak>";
SynthesisInput input = SynthesisInput.newBuilder().setSsml(myString).build();
// Build the voice request, select the language code ("en-US") and the ssml voice gender
// ("neutral")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setName("de-DE-Wavenet-E")
.setLanguageCode("de-DE")
.setSsmlGender(SsmlVoiceGender.MALE)
.build();
// Select the type of audio file you want returned
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response = textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// Write the response to the output file.
try (FileOutputStream out = new FileOutputStream(context.getFilesDir() + "/output.mp3")) {
out.write(audioContents.toByteArray());
}
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder() // *** moved here (should be done before prepare and very likely AFTER reset)
.setContentType(AudioAttributes.CONTENT_TYPE_SPEECH) // *** changed to speech
.setUsage(AudioAttributes.USAGE_ASSISTANT) // *** added
.setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED) // *** added
.build());
mMediaPlayer.prepare();
// *** following line changed since handler was defined AFTER prepare and
// *** the prepare call isn't asynchronous, thus the handler would never be called.
mMediaPlayer.start();
}
}
}
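If you do go down that AudioTrack road, a very rough, untested sketch could look like the following. It assumes you change the request to AudioEncoding.LINEAR16 with setSampleRateHertz(24000), and it skips the 44-byte WAV header that a LINEAR16 response carries:
// Untested sketch: play the LINEAR16 TTS response directly through AudioTrack
// instead of writing an MP3 to disk for MediaPlayer.
byte[] pcm = response.getAudioContent().toByteArray();
int headerSize = 44; // WAV header at the start of a LINEAR16 response

AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                .setUsage(AudioAttributes.USAGE_ASSISTANT)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(24000)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                .build())
        .setTransferMode(AudioTrack.MODE_STATIC)
        .setBufferSizeInBytes(pcm.length - headerSize)
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY) // API 26+
        .build();

// In static mode the whole buffer is written before play().
track.write(pcm, headerSize, pcm.length - headerSize);
track.play();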
Hope that gets you going!

How to record data while app is not in focus?

I've created an app that records the data usage of an app and writes it to a file on the phone's SD card. Now I'm trying to allow this process to run in the background through the use of adb. I want to be able to send a signal/broadcast to the app to write the current data usage, and later be able to send a second signal that writes the data usage and the new time, so that one can look at how much data the app has used.
So far I've tried using adb and sending broadcasts to the app, and it seems to be working; however, I am not able to save the file to the SD card through the use of MediaScanner.
This is the function that is run when I send "adb -d shell am broadcast -n com.axel.datatracking/.IntentReceiver --es --start 'com.linku.android.mobile_emergency.app.activity' " to the app.
public void startLog(Context context, SimplifiedAppInfo selectedApp) {
int i = 0;
String name = "dataFile.csv";
Log.d("update", "sort of works maybe");
// make the file if it already exists increment the name by 1
try {
testFile = new File(context.getExternalFilesDir(null), name);
while(testFile.exists()) {
i++;
name = this.makeFileName(i);
testFile = new File(context.getExternalFilesDir(null), name);
}
Log.d("filename", name);
testFile.createNewFile();
} catch (IOException e) {
Log.d("broke", "Unable to write my dood");
}
// try to write to the file
try {
fOut = new FileOutputStream(testFile);
BufferedWriter writer = new BufferedWriter(new FileWriter(testFile, true));
startingDown = selectedApp.getDownbytes();
startingUp = selectedApp.getUpbyts();
startingTime = System.currentTimeMillis();
writer.write("data,up,down\n");
writer.write("Initial,"+selectedApp.getUpbyts()+","+selectedApp.getDownbytes()+"\n");
writer.close();
// refresh the data file
//MediaScannerConnection.scanFile(context, new String[]{this.testFile.toString()}, null, null);
} catch (IOException e) {
Log.d("broke", "cant write to the file");
}
}
This is the function to write the end results that is run by "adb -d shell am broadcast -n com.axel.datatracking/.IntentReceiver --es --end 'com.linku.android.mobile_emergency.app.activity' "
public void endLog(Context context, SimplifiedAppInfo selectedApp) {
// write end results to file
try {
BufferedWriter writer = new BufferedWriter(new FileWriter(testFile, true));
writer.write("End,"+selectedApp.getUpbyts()+","+selectedApp.getDownbytes()+"\n");
float effectiveDown = selectedApp.getDownbytes() - startingDown;
float effectiveUp = selectedApp.getUpbyts() - startingUp;
writer.write("Effective,"+effectiveUp+","+effectiveDown+"\n");
float timePassed = ((float) ((System.currentTimeMillis() - startingTime)/1000));
float avgUp = effectiveUp/timePassed;
float avgDown = effectiveDown/timePassed;
writer.write("Average bytes/sec,"+avgUp+","+avgDown+"\n");
writer.close();
fOut.close();
//MediaScannerConnection.scanFile(context, new String[]{this.testFile.toString()},null, null);
} catch (IOException e) {
Log.d("broke", "the write dont work");
}
}
Broadcast receiver components are not allowed to bind to services.
Edit: I'm open to other solutions besides using a broadcast receiver I just need to be able to log the data while outside of the app and focusing on another app, from the terminal.
Register the receiver in the manifest and then make a Receiver class, for example:
public class CustomReceiver extends BroadcastReceiver {
@Override
public final void onReceive(Context context, Intent intent) {
// here you can start your own service or do your logic
}
}
Here is how to register the receiver in the manifest; just pick a different intent filter:
<receiver android:name="com.example.example.CustomReceiver">
<intent-filter>
<action android:name="android.intent.action.PHONE_STATE" />
</intent-filter>
</receiver>
And onReceive is going to be called whenever the phone-state action is triggered.
I would advise reading the documentation about Intents, because Android has a lot of restrictions.
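For the adb-driven workflow from the question, a sketch like the one below might help (untested; the action strings are invented examples). Because adb targets the component explicitly with -n, this keeps working despite the implicit-broadcast restrictions introduced in Android 8.0, as long as the receiver is declared in the manifest with android:exported="true".
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Triggered from the shell, e.g.:
//   adb -d shell am broadcast -n com.axel.datatracking/.IntentReceiver \
//       -a com.axel.datatracking.ACTION_START_LOG --es target "com.some.app"
public class IntentReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String target = intent.getStringExtra("target"); // package name passed via --es
        if ("com.axel.datatracking.ACTION_START_LOG".equals(intent.getAction())) {
            // look up the SimplifiedAppInfo for target and call startLog(context, info)
        } else if ("com.axel.datatracking.ACTION_END_LOG".equals(intent.getAction())) {
            // call endLog(context, info)
        }
    }
}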

Unable to download Media file in Android app sent from web app

I am working on a chat application. We have two clients, one Android and one web. I am uploading media files to Amazon S3. When I send a media file from the web app to the Android client, the media file is not downloaded and I get the error below.
Media Download interrupted : com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: XXXXXXXXX), S3 Extended Request ID:XXXXXXXXXXX
private void beginDownload(String key, String bucket, String mediaType, final DownloadFileFromAwsCompletionListener listener) {
// Location to download files from S3 to. You can choose any accessible file.
String localFilePath = Strings.EMPTY;
try {
//if (!isThumb) {
localFilePath = MediaHelper.createMediaFile(mediaType, false, false, key);
/* } else {
localFilePath = MediaHelper.createMediaFile(mediaType, false, true);
}*/
} catch (Exception e) {
e.printStackTrace();
}
if (!StringHelper.isNullOrEmpty(localFilePath)) {
File file = new File(localFilePath);
// Initiate the download
TransferObserver observer = mTransferUtility.download(bucket, key, file);
final String finalLocalFilePath = localFilePath;
observer.setTransferListener(new TransferListener() {
@Override
public void onStateChanged(int id, TransferState state) {
//String bucketPath = UrlStrings.XmppStrings.
if (state.equals(TransferState.COMPLETED)) {
listener.onDownloadSuccess(finalLocalFilePath);
}
}
@Override
public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
}
@Override
public void onError(int id, Exception ex) {
listener.onDatabaseError(new AwsFailure(ex));
}
});
} else {
getLogger().log(Strings.TAG, "xmpp beginDownload(): file could not be created.");
}
}
You should cross-check the uploaded path (key) against the path you are using for downloading the media files from S3. I think you are using a different path for downloading the media files; that's why you are getting the error.
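If it helps to confirm that, a quick check like the sketch below (untested, and run it off the main thread) tells you whether the key the Android side builds actually exists; s3Client, bucket and key stand in for whatever the download code is about to pass to mTransferUtility.download().
// Untested sketch: verify the key before starting the download.
try {
    ObjectMetadata meta = s3Client.getObjectMetadata(bucket, key);
    Log.d("s3check", "key exists, size=" + meta.getContentLength());
} catch (AmazonS3Exception e) {
    if (e.getStatusCode() == 404) {
        Log.d("s3check", "no such key in " + bucket + ": " + key);
    } else {
        throw e;
    }
}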

Play 2.3.x: Non-blocking image upload to Amazon S3

I am wondering what the correct way with Play 2.3.x (Java) is to upload images to Amazon S3 in a non-blocking way.
Right now I am wrapping the amazons3.putObject method inside a promise. However I fear that I am basically just blocking another thread with this logic. My code looks like following:
return Promise.promise(
new Function0<Boolean>() {
public Boolean apply() {
if (S3Plugin.amazonS3 != null) {
try {
PutObjectRequest putObjectRequest = new PutObjectRequest(
S3Plugin.s3Bucket, name + "." + format, file.getFile());
ObjectMetadata metadata = putObjectRequest.getMetadata();
if(metadata == null) {
metadata = new ObjectMetadata();
}
metadata.setContentType(file.getContentType());
putObjectRequest.setMetadata(metadata);
putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
S3Plugin.amazonS3.putObject(putObjectRequest);
return true;
} catch (AmazonServiceException e) {
// error uploading image to s3
Logger.error("AmazonServiceException: " + e.toString());
} catch (AmazonClientException e) {
// error uploading image to s3
Logger.error("AmazonClientException: " + e.toString());
}
}
return false;
}
}
);
What is the best way to do the upload process non-blocking?
The Amazon library also provides the TransferManager.class for asynchronous uploads but I am not sure how to utilize this in a non-blocking way either...
SOLUTION:
After spending quite a while figuring out how to utilize the Promise/Future in Java, I came up with the following solution, thanks to Will Sargent:
import akka.dispatch.Futures;
final scala.concurrent.Promise<Boolean> promise = Futures.promise();
... create AmazonS3 upload object ...
upload.addProgressListener(new ProgressListener() {
@Override
public void progressChanged(ProgressEvent progressEvent) {
if(progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
promise.success(true);
}
else if(progressEvent.getEventCode() == ProgressEvent.FAILED_EVENT_CODE) {
promise.success(false);
}
}
});
return Promise.wrap(promise.future());
Important to note is that I have to use the Scala promise and not the Play Framework promise. The return value, however, is a play.libs.F.Promise.
You can do the upload process and return a Future by creating a Promise, returning the promise's future, and only writing to the promise in the TransferManager's progress listener:
Examples of promises / futures: http://docs.scala-lang.org/overviews/core/futures.html
Return the Scala future from Promise: http://www.playframework.com/documentation/2.3.x/api/java/play/libs/F.Promise.html#wrapped()
TransferManager docs: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
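Purely as an illustration (untested, and not part of the original answer), the elided "create AmazonS3 upload object" step might look roughly like this with TransferManager, reusing S3Plugin and the request setup from the question:
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

// TransferManager handles the upload on its own thread pool.
TransferManager transferManager = new TransferManager(S3Plugin.amazonS3);
PutObjectRequest putObjectRequest = new PutObjectRequest(
        S3Plugin.s3Bucket, name + "." + format, file.getFile())
        .withCannedAcl(CannedAccessControlList.PublicRead);
Upload upload = transferManager.upload(putObjectRequest);
// attach the ProgressListener shown above, then return Promise.wrap(promise.future());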

How can I intercept the audio stream on an android device?

Let's suppose that we have the following scenario: something is playing on an Android device (an MP3, for example, but it could be anything that uses the audio part of an Android device). From an application (an Android application :) ), I would like to intercept the audio stream to analyze it, record it, etc. From this application (let's say "the analyzer") I don't want to start an MP3 or anything; all I want is to have access to Android's audio stream.
Any advice is appreciated; it could be a Java or C++ solution.
http://developer.android.com/reference/android/media/MediaRecorder.html
public class AudioRecorder {
final MediaRecorder recorder = new MediaRecorder();
final String path;
/**
* Creates a new audio recording at the given path (relative to root of SD
* card).
*/
public AudioRecorder(String path) {
this.path = sanitizePath(path);
}
private String sanitizePath(String path) {
if (!path.startsWith("/")) {
path = "/" + path;
}
if (!path.contains(".")) {
path += ".3gp";
}
return Environment.getExternalStorageDirectory().getAbsolutePath()
+ path;
}
/**
* Starts a new recording.
*/
public void start() throws IOException {
String state = android.os.Environment.getExternalStorageState();
if (!state.equals(android.os.Environment.MEDIA_MOUNTED)) {
throw new IOException("SD Card is not mounted. It is " + state
+ ".");
}
// make sure the directory we plan to store the recording in exists
File directory = new File(path).getParentFile();
if (!directory.exists() && !directory.mkdirs()) {
throw new IOException("Path to file could not be created.");
}
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile(path);
recorder.prepare();
recorder.start();
}
/**
* Stops a recording that has been previously started.
*/
public void stop() throws IOException {
recorder.stop();
recorder.release();
}
}
Consider using the AudioPlaybackCapture API that was introduced in Android 10 if you want to get the audio stream for a particular app.
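A rough sketch of what that can look like (API 29+, untested; mediaProjection must come from a MediaProjectionManager consent flow, RECORD_AUDIO is required, and apps can opt out of being captured):
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioPlaybackCaptureConfiguration;
import android.media.AudioRecord;
import android.media.projection.MediaProjection;

// Capture what other apps are currently playing with USAGE_MEDIA.
AudioPlaybackCaptureConfiguration config =
        new AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
                .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
                .build();

AudioRecord record = new AudioRecord.Builder()
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                .build())
        .setAudioPlaybackCaptureConfig(config)
        .build();

record.startRecording();
// ... read PCM frames with record.read(buffer, 0, buffer.length) and analyze or store them ...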
