I have found this code on a Google documentation page (Android Studio changed it a bit automatically):
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
public static void ssmlToAudio(String ssmlText, String outFile) throws Exception {
// Instantiates a client
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create()) {
// Set the ssml text input to synthesize
SynthesisInput input = SynthesisInput.newBuilder().setSsml(ssmlText).build();
// Build the voice request, select the language code ("en-US") and
// the ssml voice gender ("male")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setLanguageCode("en-US")
.setSsmlGender(SsmlVoiceGender.MALE)
.build();
// Select the audio file type
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response =
textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// Write the response to the output file
try (OutputStream out = new FileOutputStream(outFile)) {
out.write(audioContents.toByteArray());
System.out.println("Audio content written to file " + outFile);
}
}
}
I would like to run this method on a click event. So this is what I have tried so far:
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
public void onClick(View view) throws Exception {
ssmlToAudio("Hello", "test");
}
But if I run my app and click the button, I get this error:
java.lang.IllegalStateException: Could not execute method for
android:onClick
What am I doing wrong?
You have to implement the View.OnClickListener interface in your activity and then override its onClick method.
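For illustration, a minimal sketch of that wiring (the button id R.id.speak_button, the <speak> wrapper and the getFilesDir() output path are just placeholders; since ssmlToAudio performs a network request, it is kept off the main thread here):
import android.os.Bundle;
import android.view.View;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // Register the listener in code instead of using android:onClick in the layout XML
        findViewById(R.id.speak_button).setOnClickListener(this);
    }

    @Override
    public void onClick(View view) {
        // The listener method cannot declare "throws Exception", so handle it here,
        // and run the synthesis off the UI thread because it goes over the network.
        new Thread(() -> {
            try {
                ssmlToAudio("<speak>Hello</speak>", getFilesDir() + "/test.mp3");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
    }
}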
I am using SSML, so my app can speak. The app itself works perfectly fine on my phone, BUT when I connect my phone to a device over Bluetooth, there is often a gap or a delay, either at the beginning or in the middle of the speech.
So for instance, when the audio is "Hello John, I am your assistant. How can I help you?", the output could be "sistant. How can I help you?". Sometimes the sentences are fluent, but sometimes there are these gaps.
This is how I play the audio file:
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.prepare();
mMediaPlayer.start();
And this is the entire class:
public class Tts {
public Context context;
private final MediaPlayer mMediaPlayer;
public Tts(Context context, MediaPlayer mMediaPlayer) {
this.context = context;
this.mMediaPlayer = mMediaPlayer;
}
@SuppressLint({"NewApi", "ResourceType", "UseCompatLoadingForColorStateLists"})
public void say(String text) throws Exception {
InputStream stream = context.getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
TextToSpeechSettings textToSpeechSettings =
TextToSpeechSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials)
).build();
// Instantiates a client
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
// Replace {name} with target
SharedPreferences sharedPreferences = context.getSharedPreferences("target", Context.MODE_PRIVATE);
String target = sharedPreferences.getString("target", null);
text = (target != null) ? text.replace("{name}", target) : text.replace("null", "");
// Set the text input to be synthesized
String myString = "<speak><prosody pitch=\"low\">" + text + "</prosody></speak>";
SynthesisInput input = SynthesisInput.newBuilder().setSsml(myString).build();
// Build the voice request, select the language code ("en-US") and the ssml voice gender
// ("neutral")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setName("de-DE-Wavenet-E")
.setLanguageCode("de-DE")
.setSsmlGender(SsmlVoiceGender.MALE)
.build();
// Select the type of audio file you want returned
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response = textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// Write the response to the output file.
try (FileOutputStream out = new FileOutputStream(context.getFilesDir() + "/output.mp3")) {
out.write(audioContents.toByteArray());
}
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder().setContentType(AudioAttributes.CONTENT_TYPE_MUSIC).build());
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.prepare();
mMediaPlayer.setOnPreparedListener(mediaPlayer -> mMediaPlayer.start());
}
}
}
The distance cannot be the reason, since my phone is right next to the device.
Google's Text-to-Speech API needs an internet connection, so I am not quite sure whether the gap is caused by Bluetooth or by the internet connection.
So I am trying to close the gap, no matter what the reason is. The audio should only be played once it is prepared and ready.
What I tried
This is what I have tried but I don't hear a difference:
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder().setContentType(AudioAttributes.CONTENT_TYPE_SPEECH).build());
Instead of mMediaPlayer.prepare(), I also tried mMediaPlayer.prepareAsync(), but then the audio is not played (or at least I can't hear it).
Invoking start() in a listener:
mMediaPlayer.setOnPreparedListener(mediaPlayer -> {
mMediaPlayer.start();
});
Unfortunately, the gap is sometimes still there.
Here is my proposed solution. Check out the // *** comments in the code to see what I changed with respect to your code from the question.
Also take it with a grain of salt, because I have no way of testing it right now.
Nevertheless, as far as I can tell, that is all you can do using the MediaPlayer API. If it still doesn't work right with your Bluetooth device, try a different Bluetooth device. If that doesn't help either, you could switch the whole thing to the AudioTrack API instead of MediaPlayer, which gives you a low-latency option and lets you play the audio data from the response directly instead of writing it to a file and reading it back again (a rough sketch of that idea follows after the code below).
public class Tts {
public Context context;
private final MediaPlayer mMediaPlayer;
public Tts(Context context, MediaPlayer mMediaPlayer) {
this.context = context;
this.mMediaPlayer = mMediaPlayer;
}
@SuppressLint({"NewApi", "ResourceType", "UseCompatLoadingForColorStateLists"})
public void say(String text) throws Exception {
InputStream stream = context.getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
TextToSpeechSettings textToSpeechSettings =
TextToSpeechSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials)
).build();
// Instantiates a client
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
// Replace {name} with target
SharedPreferences sharedPreferences = context.getSharedPreferences("target", Context.MODE_PRIVATE);
String target = sharedPreferences.getString("target", null);
text = text.replace("{name}", (target != null) ? target : ""); // *** bug fixed
// Set the text input to be synthesized
String myString = "<speak><prosody pitch=\"low\">" + text + "</prosody></speak>";
SynthesisInput input = SynthesisInput.newBuilder().setSsml(myString).build();
// Build the voice request, select the language code ("en-US") and the ssml voice gender
// ("neutral")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setName("de-DE-Wavenet-E")
.setLanguageCode("de-DE")
.setSsmlGender(SsmlVoiceGender.MALE)
.build();
// Select the type of audio file you want returned
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response = textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// Write the response to the output file.
try (FileOutputStream out = new FileOutputStream(context.getFilesDir() + "/output.mp3")) {
out.write(audioContents.toByteArray());
}
String myFile = context.getFilesDir() + "/output.mp3";
mMediaPlayer.reset();
mMediaPlayer.setDataSource(myFile);
mMediaPlayer.setAudioAttributes(new AudioAttributes.Builder() // *** moved here (should be done before prepare and very likely AFTER reset)
.setContentType(AudioAttributes.CONTENT_TYPE_SPEECH) // *** changed to speech
.setUsage(AudioAttributes.USAGE_ASSISTANT) // *** added
.setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED) // *** added
.build());
mMediaPlayer.prepare();
// *** following line changed since handler was defined AFTER prepare and
// *** the prepare call isn't asynchronous, thus the handler would never be called.
mMediaPlayer.start();
}
}
}
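For completeness, here is a rough, untested sketch of that AudioTrack idea. It assumes you request AudioEncoding.LINEAR16 at 24000 Hz mono (e.g. via AudioConfig's setSampleRateHertz) instead of MP3, and that the returned bytes start with a 44-byte WAV header; both assumptions need to be verified, and setPerformanceMode requires API 26:
// Needs android.media.AudioAttributes, android.media.AudioFormat, android.media.AudioTrack
// and com.google.protobuf.ByteString.
private void playPcm(ByteString audioContents) {
    byte[] wav = audioContents.toByteArray();
    int headerSize = 44;      // assumed WAV header length in LINEAR16 responses
    int sampleRate = 24000;   // must match the sample rate requested in AudioConfig

    AudioTrack track = new AudioTrack.Builder()
            .setAudioAttributes(new AudioAttributes.Builder()
                    .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                    .setUsage(AudioAttributes.USAGE_ASSISTANT)
                    .build())
            .setAudioFormat(new AudioFormat.Builder()
                    .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                    .setSampleRate(sampleRate)
                    .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                    .build())
            .setTransferMode(AudioTrack.MODE_STATIC)
            .setBufferSizeInBytes(wav.length - headerSize)
            .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY) // API 26+
            .build();

    // Static mode: write the whole buffer first, then start playback.
    track.write(wav, headerSize, wav.length - headerSize);
    track.play();
}
That skips MediaPlayer's file I/O and prepare step entirely, which is where most of the startup delay tends to come from.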
Hope that gets you going!
I know that many solutions have already been given, but I can't find the exact one I need. My problem is that I pick a video from the internal storage, convert its Uri to a String and set the video on the VideoView, but it still shows "Can't play this video" in the VideoView.
Can anyone please help me find the solution? :(
Here is my code:
File file = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/Download/videos.mp4");
Log.d("video",""+file);
if (file.exists()) {
Uri uri = Uri.fromFile(file);
String video = String.valueOf(uri);
Log.d("video",""+uri);
videoView.setMediaController(new MediaController(this));
videoView.setVideoURI(Uri.parse(video));
videoView.requestFocus();
videoView.start();
}else {
Toast.makeText(this, "No video found", Toast.LENGTH_SHORT).show();
}
With scoped storage (enforced from API 30) you can't access files directly unless you request the MANAGE_EXTERNAL_STORAGE permission (and on Google Play you need to justify that request to Google).
The new way is to work with the file's Uri. You can try one of these approaches:
Ask the user to select the file.
private final ActivityResultLauncher<String[]> openDoc =
registerForActivityResult(new ActivityResultContracts.OpenDocument(),
new ActivityResultCallback<Uri>() {
@Override
public void onActivityResult(Uri uri) {
// use uri
}
});
Call it with:
// Use the mimetype you want (optional). Like "text/plain"
openDoc.launch(new String[]{"text/plain"});
Read more here
Get the media file Uri with MediaStore.
Read more here
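For example, a minimal sketch of querying MediaStore for the video by display name and feeding the resulting content Uri to the VideoView (the file name "videos.mp4" is taken from your code; the column choice is just illustrative):
// Query MediaStore for the video by its display name and play it in the VideoView.
String[] projection = {MediaStore.Video.Media._ID};
String selection = MediaStore.Video.Media.DISPLAY_NAME + " = ?";
String[] selectionArgs = {"videos.mp4"};

try (Cursor cursor = getContentResolver().query(
        MediaStore.Video.Media.EXTERNAL_CONTENT_URI,
        projection, selection, selectionArgs, null)) {
    if (cursor != null && cursor.moveToFirst()) {
        long id = cursor.getLong(cursor.getColumnIndexOrThrow(MediaStore.Video.Media._ID));
        Uri contentUri = ContentUris.withAppendedId(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, id);
        videoView.setMediaController(new MediaController(this));
        videoView.setVideoURI(contentUri);
        videoView.requestFocus();
        videoView.start();
    } else {
        Toast.makeText(this, "No video found", Toast.LENGTH_SHORT).show();
    }
}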
You'll also need the READ_EXTERNAL_STORAGE permission if the file was not created by your app.
I am using the Google Text-to-Speech API and I would like to simply hear "Hello, World!".
This is what I have so far:
/** Demonstrates using the Text-to-Speech API. */
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
public void hello() throws Exception {
InputStream stream = getResources().openRawResource(R.raw.credential); // R.raw.credential is credential.json
GoogleCredentials credentials = GoogleCredentials.fromStream(stream);
TextToSpeechSettings textToSpeechSettings =
TextToSpeechSettings.newBuilder()
.setCredentialsProvider(
FixedCredentialsProvider.create(credentials)
).build();
// Instantiates a client
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(textToSpeechSettings)) {
// Set the text input to be synthesized
SynthesisInput input = SynthesisInput.newBuilder().setText("Hello, World!").build();
// Build the voice request, select the language code ("en-US") and the ssml voice gender
// ("neutral")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setLanguageCode("en-US")
.setSsmlGender(SsmlVoiceGender.NEUTRAL)
.build();
// Select the type of audio file you want returned
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response =
textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// Write the response to the output file.
try (OutputStream out = new FileOutputStream("output.mp3")) {
out.write(audioContents.toByteArray());
System.out.println("Audio content written to file \"output.mp3\"");
}
}
}
I get the error:
java.io.FileNotFoundException: output.mp3 (Read-only file system)
Most of the code I copied from Google's documentation, but I don't even want to save that audio to a file. The text "Hello, World!" should simply be played without being saved first. Is this possible?
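The FileNotFoundException happens because a bare relative path like "output.mp3" points at a read-only location on Android. A minimal sketch of the usual workaround, writing into the app's internal storage (getFilesDir()) and playing it back with MediaPlayer; skipping the file entirely would require a lower-level API such as AudioTrack, but this at least fixes the error:
// Replaces the FileOutputStream("output.mp3") block inside hello():
// write the synthesized audio into app-internal storage (always writable) and play it.
String outPath = getFilesDir() + "/output.mp3";
try (OutputStream out = new FileOutputStream(outPath)) {
    out.write(audioContents.toByteArray());
}
MediaPlayer player = new MediaPlayer();
player.setDataSource(outPath);
player.setOnPreparedListener(MediaPlayer::start);      // start only once prepared
player.setOnCompletionListener(MediaPlayer::release);  // free the player afterwards
player.prepareAsync();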
I have a situation where I need to download an Excel file, so I use Window.open for that. The problem is that I need to check whether the file exists on the server before calling Window.open. So when the user clicks the download button, the following call happens:
public void onClick(Button button, EventObject e) {
final String url = GWT.getModuleBaseURL() + "fileupload/dailyLogReport?param1=param1";
openFileDownloadWindow(url,fileName);
}
public void openFileDownloadWindow(final String url,String fileName){
CommonServiceAsync serviceAsyn = CommonService.Util.getInstance();
final AsyncCallback callback = new AsyncCallback() {
public void onSuccess(Object result)
{
isFileExsist = (Boolean)result;
if(isFileExsist){
Window.open( url, "_blank", "status=0,toolbar=0,menubar=0,location=0");
}else{
Window.alert("File not found.");
}
}
public void onFailure(Throwable caught)
{
MessageBox.alert("Error", "Error while getting data"
+ caught.getMessage());
}
};
// calling of the action
serviceAsyn.isDailyLogFileExsists(fileName, callback);
}
The problem is that if I put Window.open inside onSuccess, it just opens a window that closes quickly without downloading the file. If I put Window.open directly in the onClick method, it successfully opens the popup window and downloads the file. But since I have to download the file conditionally, after checking whether it exists, I cannot put Window.open directly inside onClick.
Why does Window.open not work properly inside the callback's onSuccess function?
The problem is the popup blocker.
When you click on an element you can open a new window, since the browser considers it a deliberate user action.
Otherwise, the browser blocks any window.open call made in an asynchronous block, because it could be malicious code running outside the user's control.
The best solution is to open the file in an iframe, but you have to set the appropriate Content-Disposition header on the server side, which causes the browser to show the "Save" dialog.
Client Code:
// Create a new iframe
final Frame f = new Frame();
f.setUrl("url_to_my_excel_file");
// Set a size of 0px unless you want the file to be displayed in it.
// For .html, images, .pdf, etc. you must configure your servlet
// to send the Content-Disposition header.
f.setSize("0px", "0px");
RootPanel.get().add(f);
// Configure a timer to remove the element from the DOM
new Timer() {
public void run() {
f.removeFromParent();
}
}.schedule(10000);
Server Code:
protected void doGet( HttpServletRequest req, HttpServletResponse resp ) throws ServletException, IOException {
[...]
// Set the appropriate type for your file
resp.setContentType("application/vnd.ms-excel");
// Mandatory if you want the browser to open the save dialog
resp.setHeader("Content-Disposition", "attachment; filename=\"my_excel_file.xls\"");
[...]
}
I am new to Android. I am developing a new app with an email sending option. To send mail I have used the Gmail configuration: host "smtp.gmail.com", port 465, with SSL set to true. To send the email I use the Apache Commons Email API. The mail-sending method is called from an OnTouch event. Whenever I touch the button it shows the following errors:
Error : Could not find class 'javax.naming.InitialContext', referenced from method org.apache.commons.mail.Email.setMailSessionFromJNDI
Warning: VFY: unable to resolve new-instance 955 (Ljavax/naming/InitialContext;) in Lorg/apache/commons/mail/Email;
Warning : org.apache.commons.mail.EmailException: Sending the email to the following server failed : smtp.gmail.com:465
I have added <uses-permission android:name="android.permission.INTERNET" /> in my manifest file.
Can I use any plain Java library in Android?
My email code runs correctly as a standalone Java program.
Here is an example of what I am doing in one of my apps: the app has its own email account and sends an email to the user when they fill out a form and press the submit button.
Important: make sure you have the libSMTP.jar file referenced in your app; I am using this library for the following code. Take from it what you'd like, I hope it is useful:
Imports needed:
import org.apache.commons.net.smtp.SMTPClient;
import org.apache.commons.net.smtp.SMTPReply;
import org.apache.commons.net.smtp.SimpleSMTPHeader;
Submit button that makes the request to send the email:
submit.setOnClickListener(new OnClickListener()
{
public void onClick(View v)
{
//-- Submit saves data to sqlite db, but removed that portion for this demo...
//-- Executes an new task to send an automated email to user when they fill out a form...
new sendEmailTask().execute();
}
});
Email task to be performed on a separate thread:
private class sendEmailTask extends AsyncTask<Void, Void, Void>
{
@Override
protected void onPostExecute(Void result)
{
}
@Override
protected void onPreExecute()
{
}
@SuppressLint("ParserError")
@Override
protected Void doInBackground(Void... params)
{
try {
//--Note the send format is as follows: send(from, to, subject line, body message)
send("myAppName#gmail.com", "emailToSendTo#gmail.com", "Form Submitted", "You submitted the form.");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
}
}
Send function being used:
public void send(String from, String to, String subject, String text) throws IOException
{
SMTPClient client = new SMTPClient("UTF-8");
client.setDefaultTimeout(60 * 1000);
client.setRequireStartTLS(true); // requires STARTTLS
//client.setUseStartTLS(true); // tries STARTTLS, but falls back if not supported
client.setUseAuth(true); // use SMTP AUTH
//client.setAuthMechanisms(authMechanisms); // sets AUTH mechanisms e.g. LOGIN
client.connect("smtp.gmail.com", 587);
checkReply(client);
//-- Note the format is: client.login("localhost", (...the email account used to send from...), (...that email account's password...));
client.login("localhost", "myAppName@gmail.com", "...myAppName email account password...");
checkReply(client);
client.setSender(from);
checkReply(client);
client.addRecipient(to);
checkReply(client);
Writer writer = client.sendMessageData();
if (writer != null)
{
SimpleSMTPHeader header = new SimpleSMTPHeader(from, to, subject);
writer.write(header.toString());
writer.write(text);
writer.close();
client.completePendingCommand();
checkReply(client);
}
client.logout();
client.disconnect();
}
Check reply function being used:
private void checkReply(SMTPClient sc) throws IOException
{
if (SMTPReply.isNegativeTransient(sc.getReplyCode()))
{
sc.disconnect();
throw new IOException("Transient SMTP error " + sc.getReplyCode());
}
else if (SMTPReply.isNegativePermanent(sc.getReplyCode()))
{
sc.disconnect();
throw new IOException("Permanent SMTP error " + sc.getReplyCode());
}
}
From Apache Commons Net 3.3, you can just drop the jar in your classpath and start using the AuthenticatingSMTPClient: http://blog.dahanne.net/2013/06/17/sending-a-mail-in-java-and-android-with-apache-commons-net/
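For reference, a rough, untested sketch of how that could look; the Gmail host/port, the hostname passed to ehlo, and the account credentials are placeholders, and the exact call sequence should be checked against the commons-net javadoc and the linked post:
// Needs java.io.Writer, org.apache.commons.net.smtp.AuthenticatingSMTPClient
// and org.apache.commons.net.smtp.SimpleSMTPHeader (Apache Commons Net 3.3+).
// Sends one mail via SMTP with STARTTLS and AUTH LOGIN.
public void sendWithCommonsNet(String from, String to, String subject, String text) throws Exception {
    AuthenticatingSMTPClient client = new AuthenticatingSMTPClient();
    client.setDefaultTimeout(60 * 1000);
    client.connect("smtp.gmail.com", 587);
    client.ehlo("localhost");
    if (client.execTLS()) {                        // upgrade the connection (STARTTLS)
        client.ehlo("localhost");                  // re-issue EHLO on the encrypted channel
        if (client.auth(AuthenticatingSMTPClient.AUTH_METHOD.LOGIN,
                "myAppName@gmail.com", "...password placeholder...")) {
            client.setSender(from);
            client.addRecipient(to);
            Writer writer = client.sendMessageData();
            if (writer != null) {
                writer.write(new SimpleSMTPHeader(from, to, subject).toString());
                writer.write(text);
                writer.close();
                client.completePendingCommand();
            }
        }
    }
    client.logout();
    client.disconnect();
}
As in the answer above, run this off the main thread (for example from the AsyncTask shown there), since it performs network I/O.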