Issues with Microsoft Emotion API - Java

So I'm having trouble using Microsoft's Emotion API for Android. I have no issues running the Face API; I'm able to get the face rectangles, but I can't get the Emotion API working. I am taking images using the built-in Android camera itself. Here is the code I am using:
private void detectAndFrame(final Bitmap imageBitmap) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    imageBitmap.compress(Bitmap.CompressFormat.PNG, 100, outputStream);
    ByteArrayInputStream inputStream =
            new ByteArrayInputStream(outputStream.toByteArray());
    AsyncTask<InputStream, String, List<RecognizeResult>> detectTask =
            new AsyncTask<InputStream, String, List<RecognizeResult>>() {
        @Override
        protected List<RecognizeResult> doInBackground(InputStream... params) {
            try {
                Log.e("i", "Detecting...");
                faces = faceServiceClient.detect(
                        params[0],
                        true,  // returnFaceId
                        false, // returnFaceLandmarks
                        null   // returnFaceAttributes: a string like "age, gender"
                );
                if (faces == null) {
                    Log.e("i", "Detection Finished. Nothing detected");
                    return null;
                }
                Log.e("i", String.format("Detection Finished. %d face(s) detected",
                        faces.length));
                ImageView imageView = (ImageView) findViewById(R.id.imageView);
                InputStream stream = params[0];
                com.microsoft.projectoxford.emotion.contract.FaceRectangle[] rects =
                        new com.microsoft.projectoxford.emotion.contract.FaceRectangle[faces.length];
                for (int i = 0; i < faces.length; i++) {
                    com.microsoft.projectoxford.face.contract.FaceRectangle rect = faces[i].faceRectangle;
                    rects[i] = new com.microsoft.projectoxford.emotion.contract.FaceRectangle(
                            rect.left, rect.top, rect.width, rect.height);
                }
                List<RecognizeResult> result;
                result = client.recognizeImage(stream, rects);
                return result;
            } catch (Exception e) {
                Log.e("e", e.getMessage());
                Log.e("e", "Detection failed");
                return null;
            }
        }

        @Override
        protected void onPreExecute() {
            //TODO: show progress dialog
        }

        @Override
        protected void onProgressUpdate(String... progress) {
            //TODO: update progress
        }

        @Override
        protected void onPostExecute(List<RecognizeResult> result) {
            ImageView imageView = (ImageView) findViewById(R.id.imageView);
            imageView.setImageBitmap(drawFaceRectanglesOnBitmap(imageBitmap, faces));
            MediaStore.Images.Media.insertImage(getContentResolver(), imageBitmap, "AnImage", "Another image");
            if (result == null) return;
            for (RecognizeResult res : result) {
                Scores scores = res.scores;
                Log.e("Anger: ", ((Double) scores.anger).toString());
                Log.e("Neutral: ", ((Double) scores.neutral).toString());
                Log.e("Happy: ", ((Double) scores.happiness).toString());
            }
        }
    };
    detectTask.execute(inputStream);
}
I keep getting a 400 error on the POST request, indicating some sort of issue with the JSON or the face rectangles, but I'm not sure where to start debugging this.

You're using the stream twice, so the second time around you're already at the end of the stream. Either reset the stream, or simply call the Emotion API without rectangles (i.e. skip the call to the Face API); the Emotion API will determine the face rectangles for you.
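A minimal sketch of the reset option (assuming the ByteArrayInputStream built in detectAndFrame, which supports reset() without a prior mark() call):

// Inside doInBackground, after faceServiceClient.detect() has consumed the stream:
InputStream stream = params[0];
stream.reset(); // rewinds a ByteArrayInputStream back to its first byte
List<RecognizeResult> result = client.recognizeImage(stream, rects);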


How do I prevent a for loop from continuing to loop until a method called within the loop is complete?

I am trying to crop faces out of the original captured image, using the rect values to identify the target areas and create bitmaps of just the detected face area. This works :)
The issue is: when I have an image with more than one face, the for loop in the onSuccess method, which opens an alert dialog asking for a filename for each cropped face, seems to keep looping before each dialog's onClick() completes. The code that saves a face only runs once that dialog's OK button is clicked.
The code currently saves only one of the cropped faces; the different user inputs are handled correctly in the individual alert dialogs, but only the last face in the list is saved.
I think the for loop continues looping after each alert dialog is triggered, but before the user has completed the input and the save has taken place for that face. So when the save method is called, it only ever saves the last object in the faces list.
Any suggestions on how I can improve this code?
@Override
public void onImage(CameraKitImage cameraKitImage) {
    capturedImage = cameraKitImage.getBitmap();
    capturedImage = Bitmap.createScaledBitmap(capturedImage, cameraView.getWidth(), cameraView.getHeight(), false);
    cameraView.stop();
    processFaceDetection(capturedImage);
}

public void processFaceDetection(final Bitmap bitmap) {
    FirebaseVisionImage visionImage = FirebaseVisionImage.fromBitmap(bitmap);
    FirebaseVisionFaceDetectorOptions detectorOptions = new FirebaseVisionFaceDetectorOptions.Builder()
            .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
            .setLandmarkMode(FirebaseVisionFaceDetectorOptions.NO_LANDMARKS)
            .setClassificationMode(FirebaseVisionFaceDetectorOptions.NO_CLASSIFICATIONS)
            .setMinFaceSize(0.15f)
            .enableTracking()
            .build();
    FirebaseVisionFaceDetector detector = FirebaseVision.getInstance().getVisionFaceDetector(detectorOptions);
    detector.detectInImage(visionImage).addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
        @Override
        public void onSuccess(List<FirebaseVisionFace> firebaseVisionFaces) {
            listSize = firebaseVisionFaces.size();
            Bitmap originalCapture = Bitmap.createScaledBitmap(capturedImage, cameraView.getWidth(), cameraView.getHeight(), false); // scaled bitmap created from captured image
            saveImageOriginal(originalCapture);
            //for (FirebaseVisionFace face : firebaseVisionFaces) {
            for (i = 0; i < firebaseVisionFaces.size(); i++) {
                FirebaseVisionFace face = firebaseVisionFaces.get(i);
                Rect rect = face.getBoundingBox();
                faceCrop = Bitmap.createBitmap(originalCapture, rect.left, rect.top, rect.width(), rect.height()); // face cropped using rect values
                RectOverlay rectOverlay = new RectOverlay(graphicOverlay, rect);
                graphicOverlay.add(rectOverlay); // draw box around face
                showAddItemDialog(Camera.CurrentContext); // prompt for name, save cropped face
            }
        }
    });
}

private void showAddItemDialog(Context c) {
    final EditText inputName = new EditText(c);
    AlertDialog dialog = new AlertDialog.Builder(c)
            .setTitle("Input Person's Name" + i)
            .setMessage("Format: LastName, FirstName")
            .setView(inputName)
            .setPositiveButton("Add", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    nameIn = String.valueOf(inputName.getText());
                    try {
                        saveImage(faceCrop); // give read/write permission
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            })
            .setNegativeButton("Cancel", null)
            .create();
    dialog.show();
}

public String saveImage(Bitmap croppedFace) {
    String eventFaces, event;
    event = "/Summer Event 2020";
    eventFaces = "/Event_Faces";
    final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    croppedFace.compress(Bitmap.CompressFormat.JPEG, 90, bytes);
    final File facesDirectory = new File(getApplicationContext().getExternalFilesDir(null).getAbsolutePath() + event + eventFaces); // crop
    if (!facesDirectory.exists()) {
        Log.d("directory SAVING", "" + facesDirectory.mkdirs());
        facesDirectory.mkdirs();
    }
    try {
        croppedFile = new File(facesDirectory, nameIn + ".jpg");
        croppedFile.createNewFile();
        FileOutputStream fo = new FileOutputStream(croppedFile);
        fo.write(bytes.toByteArray());
        MediaScannerConnection.scanFile(Camera.CurrentContext, new String[]{croppedFile.getPath()}, new String[]{"image/jpeg"}, null);
        fo.close();
        Log.d("TAG", "File Saved::--->" + croppedFile.getAbsolutePath());
        Toast.makeText(Camera.this, nameIn + " " + "i" + i + " list" + listSize + " " + "Face Cropped and Saved to -> " + croppedFile.getPath(), Toast.LENGTH_SHORT).show();
        return croppedFile.getAbsolutePath();
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return "";
} // end of save image
If anyone is experiencing this same type of issue: I divided the code in the for loop into two separate loops and incorporated a flag that is set by the AlertDialog's user input (keyboard in). Once the flag is true, following the AlertDialog input, the conditions for the second for loop are met.
Hope this helps.
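For comparison, here is a sketch of an alternative that avoids the flag entirely: drive the dialogs sequentially, showing one dialog per face and only advancing from its onClick, so each name entry and save completes before the next prompt (illustrative code built on the fields from the question):

private void promptForFace(final List<FirebaseVisionFace> faces, final int index) {
    if (index >= faces.size()) return; // every face has been named and saved
    Rect rect = faces.get(index).getBoundingBox();
    final Bitmap crop = Bitmap.createBitmap(capturedImage, rect.left, rect.top, rect.width(), rect.height());
    final EditText inputName = new EditText(Camera.CurrentContext);
    new AlertDialog.Builder(Camera.CurrentContext)
            .setTitle("Input Person's Name " + index)
            .setMessage("Format: LastName, FirstName")
            .setView(inputName)
            .setPositiveButton("Add", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    nameIn = String.valueOf(inputName.getText());
                    saveImage(crop);                 // save THIS face under its own name
                    promptForFace(faces, index + 1); // only now move on to the next face
                }
            })
            .setNegativeButton("Cancel", null)
            .show();
}

Calling promptForFace(firebaseVisionFaces, 0) from onSuccess then replaces the original for loop.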

Android app extremely slow to swipe ViewPager and scroll through RecyclerView, how to pinpoint cause?

So, I've been stuck on this issue for a week: I launch my app and it freezes for almost 10 seconds on a white screen, and after that the app gets really slow. I have a BottomNavigationView with a ViewPager that loads four fragments, some of which have RecyclerViews with custom adapters. Every time I swipe or select a different tab, the app is painfully slow to select the new tab. Even RecyclerView scrolling is really slow. The weird part is that when data is switched off, the app works just fine and the ViewPager swipes and RecyclerView scrolls are fast enough.
I have seen several suggestions to use Gson parsing for the JSON data received from the network, but the performance increase was negligible. I have also tried viewPager.setOffscreenPageLimit(4); but that hasn't helped. All my network calls are made from an AsyncTask, and I have used StrictMode to confirm that. The app also works OK on an emulator, so the problem only shows on real devices, on all API levels I have tested.
// First Fragment
public void loadData(final Context context, final boolean b) {
    HurlStack hurlStack = new HurlStack() {
        @Override
        protected HttpURLConnection createConnection(URL url) {
            HttpsURLConnection httpsURLConnection = null;
            try {
                httpsURLConnection = CustomCAHttpsProvider.getHttpsUrlConnection(ServerConstants.LOAD_CHAT_URL,
                        InfosylumApplication.getContext(), R.raw.certificate, false);
            } catch (Exception e) {
                e.printStackTrace();
                Helper.showErrorDialog(e.getMessage(), e.toString(), InfosylumApplication.getContext());
            }
            return httpsURLConnection;
        }
    };
    StringRequest stringRequest = new StringRequest(Request.Method.POST, ServerConstants.LOAD_CHAT_URL, new Response.Listener<String>() {
        @Override
        public void onResponse(String response) {
            mSwipeRefreshLayout.setRefreshing(false);
            new Handler().postDelayed(new Runnable() {
                @Override
                public void run() {
                    if (isRunning) {
                        loadData(context, b);
                    }
                }
            }, 3000);
            try {
                JSONObject jsonObject = new JSONObject(response).getJSONObject("object");
                TinyDB tinyDB = new TinyDB(context);
                tinyDB.putString(TinyDBConstants.LOCAL_TYPING, "");
                if (jsonObject.has("groupsAndInd")) {
                    final JSONArray array = jsonObject.getJSONArray("groupsAndInd");
                    LinkedHashMap<String, ChatNotification> chatNotificationLinkedHashMap = new LinkedHashMap<>();
                    final JSONArray localChatArray;
                    if (!tinyDB.getString(TinyDBConstants.LOCAL_CHAT_OTHER).isEmpty()) {
                        localChatArray = new JSONArray(tinyDB.getString(TinyDBConstants.LOCAL_CHAT_OTHER));
                    } else {
                        localChatArray = new JSONArray();
                    }
                    ArrayList<String> localChatIds = new ArrayList<>();
                    final ArrayList<String> onlineChatIds = new ArrayList<>();
                    String idsString = "";
                    for (int i = 0; i < localChatArray.length(); i++) {
                        JSONObject foodJson = localChatArray.getJSONObject(i);
                        localChatIds.add(foodJson.getString("id"));
                        idsString += foodJson.getString("id") + "\n";
                    }
                    for (int i = 0; i < array.length(); i++) {
                        JSONObject object = array.getJSONObject(i);
                        List<String> urlList = Arrays.asList(object.getString("seenUsers").split(","));
                        if ((!urlList.contains(PatriceUser.getCurrentUser().getObjectId()) || object.getInt("messageStatus") < Const.MESSAGE_STATUS_RECEIVED)
                                && !localChatIds.contains(object.getString("id"))
                                && !object.getString("senderId").equals(PatriceUser.getCurrentUser().getObjectId())) {
                            String id = null, chatType = null;
                            switch (object.getString("chatType")) {
                                case Const.CHAT_TYPE_GROUP:
                                    id = object.getString("groupId");
                                    chatType = Const.CHAT_TYPE_GROUP;
                                    break;
                                case Const.CHAT_TYPE_INDIVIDUAL:
                                    id = object.getString(Const.SENDER_ID);
                                    chatType = Const.CHAT_TYPE_INDIVIDUAL;
                                    break;
                            }
                            boolean found = false;
                            if (chatNotificationLinkedHashMap.containsKey(id)) {
                                ChatNotification chatNotification = chatNotificationLinkedHashMap.get(id);
                                chatNotification.addChatObject(object);
                                chatNotificationLinkedHashMap.put(id, chatNotification);
                            } else {
                                ChatNotification chatNotification = new ChatNotification(context, chatType, id, object);
                                chatNotificationLinkedHashMap.put(id, chatNotification);
                            }
                        }
                        onlineChatIds.add(object.getString("id"));
                    }
                    tinyDB.putString(TinyDBConstants.LOCAL_CHAT_OTHER, array.toString());
                    for (Map.Entry<String, ChatNotification> entry : chatNotificationLinkedHashMap.entrySet()) {
                        String key = entry.getKey();
                        ChatNotification value = entry.getValue();
                        final InfosylumNotification infosylumNotification = new InfosylumNotification(context);
                        infosylumNotification.showNotification(value);
                    }
                }
                try {
                    if (jsonObject.has("groupsAndInd")) {
                        getActualMessages(jsonObject.getJSONArray("groupsAndInd"), jsonObject.getJSONArray("unread"));
                    }
                    if (jsonObject.has("notes")) {
                        loadNotes(jsonObject.getJSONArray("notes"));
                    }
                } catch (NullPointerException ignore) {
                    Helper.showErrorDialog("ignore", ignore.getMessage(), getActivity());
                }
            } catch (JSONException e) {
            } catch (NullPointerException e) {
                // When the user has logged out
            }
        }
    }, new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
        }
    }) {
        @Override
        protected Map<String, String> getParams() throws AuthFailureError {
            Map<String, String> params = new HashMap<>();
            params.put("userId", PatriceUser.getCurrentUser().getObjectId());
            params.put("loadGroupsQuery", loadGroupsQuery);
            return params;
        }
    };
    stringRequest.setRetryPolicy(new DefaultRetryPolicy(
            40000,
            DefaultRetryPolicy.DEFAULT_MAX_RETRIES,
            DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
    VolleySingleton.getInstance(activity).addToRequestQueue(stringRequest);
    stringRequest.setTag(REQUEST_TAG);
    if (requestQueue == null) {
        requestQueue = Volley.newRequestQueue(InfosylumApplication.getContext(), hurlStack);
    }
    requestQueue.add(stringRequest);
}
The other fragments are quite similar in how they obtain data, and there are also background jobs within the app, so I think the hardest thing is to pinpoint which method is causing the bug. Using the profiler is out of the question because the device being used is on API 17. Is there any way I can pinpoint exactly which method is causing the sluggishness and the occasional ANRs that I receive?
Looking at your code, you are creating a new HttpURLConnection every time you call loadData(), which is unnecessary. Try using the Singleton design pattern to create the HTTP connection (and request queue) instance only once.
Here:
new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        if (isRunning) {
            loadData(context, b);
        }
    }
}, 3000);
You execute this code from loadData(), so there is a recursive call: you call the method again every 3 seconds (as long as isRunning == true). In that case, try to create all variables (especially the lists) outside of this method, so you reuse them instead of creating new instances on every call.
When you don't have an internet connection it works fast because you never execute onResponse (which contains several loops), only the small onErrorResponse.
What are you trying to do with this method? Maybe I can help by suggesting an alternative solution.
UPDATE: Where are you performing the async work? A Handler runs its callbacks on the thread it was created on; is loadData() called from an AsyncTask or from the main thread?
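For the singleton suggestion, here is a minimal sketch of the usual Volley pattern (illustrative; the question's code already references a VolleySingleton, which may look much like this):

public class VolleySingleton {
    private static VolleySingleton instance;
    private final RequestQueue requestQueue;

    private VolleySingleton(Context context) {
        // The application context avoids leaking an Activity; the queue (and its
        // underlying connections) is created exactly once per process.
        requestQueue = Volley.newRequestQueue(context.getApplicationContext());
    }

    public static synchronized VolleySingleton getInstance(Context context) {
        if (instance == null) {
            instance = new VolleySingleton(context);
        }
        return instance;
    }

    public void addToRequestQueue(Request<?> request) {
        requestQueue.add(request);
    }
}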

Camera2 ImageReader hangs after a while with "Failed to release buffer" message

I'm having a problem with Android's camera2 API.
My end goal is to have a byte array which I can edit using OpenCV, whilst displaying the preview to the user (e.g. an OCR with a preview).
I've created a capture request and added an ImageReader as a target. Then, in the OnImageAvailableListener, I'm getting the image, converting it to a bitmap, and displaying it in an ImageView (rotated).
My problem is that after a few seconds the preview stalls (after gradually slowing down), and in the log I'm getting the following error: E/BufferItemConsumer: [ImageReader-1225x1057f100m2-18869-0] Failed to release buffer: Unknown error -1 (1)
As you can see in my code, I have already tried closing the img after getting my byte[] from it.
I've also tried clearing the buffer.
I've tried closing the ImageReader, but that of course stopped me from getting any further images (it throws an exception).
Can anyone please help me understand what I'm doing wrong? I've been scouring Google to no avail.
This is my OnImageAvailableListener; do let me know if you need more of my code to assist:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        final ImageView iv = findViewById(R.id.camPrev);
        try {
            if (img == null) throw new NullPointerException("null img");
            ByteBuffer buffer = img.getPlanes()[0].getBuffer();
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            final Bitmap b = BitmapFactory.decodeByteArray(data, 0, data.length);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    iv.setImageBitmap(b);
                    iv.setRotation(90);
                }
            });
        } catch (NullPointerException ex) {
            showToast("img is null");
        } finally {
            if (img != null)
                img.close();
        }
    }
};
Edit - adding cameraStateCallback
private CameraDevice.StateCallback mCameraDeviceStateCallback = new CameraDevice.StateCallback() {
    @Override
    public void onOpened(CameraDevice cameraDevice) {
        mCameraDevice = cameraDevice;
        showToast("Connected to camera!");
        createCameraPreviewSession();
    }

    @Override
    public void onDisconnected(CameraDevice cameraDevice) {
        closeCamera();
    }

    @Override
    public void onError(CameraDevice cameraDevice, int i) {
        closeCamera();
    }
};

private void closeCamera() {
    if (mCameraDevice != null) {
        mCameraDevice.close();
        mCameraDevice = null;
    }
}
You seem to have used setRepeatingRequest() with the JPEG format. This may not be fully supported on your device, and it also depends on the image resolution you chose. Normally we use createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW) in these cases and get YUV or raw format from the ImageReader.
I would also try choosing a low resolution for JPEG: maybe that alone will be enough to keep the ImageReader running.
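A rough sketch of that suggestion (assuming the mCameraDevice and a background handler from the question's setup; the resolution and handler name are illustrative):

// YUV ImageReader fed by a repeating preview request instead of JPEG captures
final ImageReader reader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);

final CaptureRequest.Builder builder =
        mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(reader.getSurface());

mCameraDevice.createCaptureSession(Arrays.asList(reader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                try {
                    session.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
            }
        }, mBackgroundHandler);

Note that with YUV_420_888 the listener can no longer decode plane 0 with BitmapFactory; the three planes have to be converted (for example to NV21) or handed to OpenCV directly.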

How to use AsyncTask to update a global variable

What I want to do: I am putting rows into a ListView using a custom "Location" adapter, and I am trying to add a Bitmap to each row of the ListView. This Bitmap comes from a URL. So I have a global variable public static Bitmap bitmap and want to update this variable using an AsyncTask. Here is my code:
try {
    String s = "";
    JSONArray jArray = new JSONArray(result);
    for (int i = 0; i < jArray.length(); i++) {
        final JSONObject json = jArray.getJSONObject(i);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                try {
                    // Here I am calling my new task and giving it the ID to find the image
                    BitmapWorkerTask myTask = new BitmapWorkerTask(json.getInt("ID"));
                    myTask.execute();
                    adapter.add(new Location(bitmap, json
                            .getString("PlaceTitle"), json
                            .getString("PlaceDetails"), json
                            .getString("PlaceDistance"), json
                            .getString("PlaceUpdatedTime")));
                    bitmap = null;
                } catch (JSONException e) {
                    e.printStackTrace();
                }
            }
        });
    }
} catch (Exception e) {
    Log.e("log_tag", "Error Parsing Data " + e.toString());
}
And here is my AsyncTask:
class BitmapWorkerTask extends AsyncTask<Integer, Void, Bitmap> {
    private int photoID = 0;

    public BitmapWorkerTask(int photoID) {
        this.photoID = photoID;
    }

    // Decode the image in the background.
    @Override
    protected Bitmap doInBackground(Integer... params) {
        String initialURL = "http://afs.spotcontent.com/img/Places/Icons/";
        final String updatedURL = initialURL + photoID + ".jpg";
        Bitmap bitmap2 = null;
        try {
            bitmap2 = BitmapFactory.decodeStream((InputStream) new URL(
                    updatedURL).getContent());
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return bitmap2;
    }

    // Once complete, store the result in the global bitmap.
    @Override
    protected void onPostExecute(Bitmap bitmap2) {
        bitmap = bitmap2;
    }
}
So, for each iteration, I am feeding the AsyncTask an ID which it uses to find the image, and then (I was hoping) it should update the global Bitmap, which is passed into the adapter. When I run my app, every listing's picture is empty. Any ideas on what I'm doing wrong?
Consider the following possible order in which your commands get executed (remembering that tasks run in the background, so the order is non-deterministic):
1. myTask.execute()
2. BitmapWorkerTask.doInBackground()
3. adapter.add(new Location(bitmap, ...))
4. BitmapWorkerTask.onPostExecute()
When you create the Location object in step 3, the bitmap being passed is the global object pointer. Its value is not valid yet, because onPostExecute() hasn't been called. So the Location object is created without a usable bitmap. In step 4, when the bitmap is finally retrieved, the value of the global pointer is changed (correctly), but that doesn't affect the (empty) bitmap already passed to Location in step 3... which is why you don't see the bitmap in your view.
What you could do is pass an additional parameter to the BitmapWorkerTask constructor: the Location object (or the underlying bitmap holder). From onPostExecute() you can then update that Location with the retrieved bitmap. You don't need a global variable here.
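A minimal sketch of that suggestion (assuming Location gets a bitmap setter; the setter name is illustrative):

class BitmapWorkerTask extends AsyncTask<Void, Void, Bitmap> {
    private final int photoID;
    private final Location location; // the row this bitmap belongs to

    BitmapWorkerTask(int photoID, Location location) {
        this.photoID = photoID;
        this.location = location;
    }

    @Override
    protected Bitmap doInBackground(Void... params) {
        String updatedURL = "http://afs.spotcontent.com/img/Places/Icons/" + photoID + ".jpg";
        try {
            return BitmapFactory.decodeStream((InputStream) new URL(updatedURL).getContent());
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    @Override
    protected void onPostExecute(Bitmap result) {
        location.setBitmap(result);      // hypothetical setter on Location
        adapter.notifyDataSetChanged();  // redraw the row now that its image exists
    }
}

The loop would then add each Location first (with a null bitmap) and hand that same object to the task: new BitmapWorkerTask(json.getInt("ID"), location).execute();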

Android TTS fails to speak large amount of text

I am trying to speak out a large amount of text using Android Text To Speech, with the default Google speech engine. Below is my code.
public class Talk extends Activity implements TextToSpeech.OnInitListener {
    private ImageView playBtn;
    private EditText textField;
    private TextToSpeech tts;
    private boolean isSpeaking = false;
    private String finalText;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_talk);
        // Initialize the instance variables
        playBtn = (ImageView) findViewById(R.id.playBtn);
        textField = (EditText) findViewById(R.id.textField);
        // Register the listeners
        playBtn.setOnClickListener(new PlayBtnAction());
        // Other things
        tts = new TextToSpeech(this, this);
        // Get the web page text if called from Share-Via
        if (Intent.ACTION_SEND.equals(getIntent().getAction())) {
            new GetWebText().execute("");
        }
    }

    // This class will fetch the text from web pages
    private class GetWebText extends AsyncTask<String, Void, String> {
        @Override
        protected String doInBackground(String... params) {
            String text = getIntent().getStringExtra(Intent.EXTRA_TEXT);
            String websiteText = "";
            try {
                // Create a URL for the desired page
                URL url = new URL(text);
                // Read all the text returned by the server
                BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
                String str;
                StringBuffer strBuffer = new StringBuffer("");
                while ((str = in.readLine()) != null) {
                    strBuffer.append(str + "\n" + "\n");
                }
                in.close();
                String html = strBuffer.toString();
                Document doc = Jsoup.parse(html);
                websiteText = doc.body().text(); // "An example link"
                //Toast.makeText(this, websiteText, Toast.LENGTH_LONG).show();
            } catch (Exception e) {
                Log.e("web_error", "Error in getting web text", e);
            }
            return websiteText;
        }

        @Override
        protected void onPostExecute(String result) {
            textField.setText(result);
        }
    }

    // Class to speak the text
    private class PlayBtnAction implements OnClickListener {
        @Override
        public void onClick(View v) {
            if (!isSpeaking) {
                isSpeaking = true;
                //speak(textField.getText().toString());
                finalText = textField.getText().toString();
                new SpeakTheText().execute(finalText);
                isSpeaking = false;
            } else {
                isSpeaking = false;
                tts.stop();
            }
        }
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int result = tts.setLanguage(Locale.UK);
            if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                Toast.makeText(this, "Language Not Supported", Toast.LENGTH_LONG).show();
            }
        }
    }

    // This class will speak the text
    private class SpeakTheText extends AsyncTask<String, Void, String> {
        @Override
        protected String doInBackground(String... params) {
            tts.speak(params[0], TextToSpeech.QUEUE_FLUSH, null);
            return null;
        }
    }

    @Override
    public void onDestroy() {
        if (tts != null) {
            tts.stop();
            tts.shutdown();
        }
        super.onDestroy();
    }
}
But the issue here is that when there is a large chunk of text (say, text extracted from a web page), the TTS fails to read it. If I remove most of the text, it reads fine. Why is this happening?
When I am about to read the large text, LogCat displays something like this:
10-11 07:26:05.566: D/dalvikvm(2638): GC_CONCURRENT freed 362K, 44% free 3597K/6312K, paused 17ms+8ms, total 93ms
The string length must not exceed a pre-defined limit. From the docs:
Parameters
text — The string of text to be spoken. No longer than getMaxSpeechInputLength() characters.
The value returned by getMaxSpeechInputLength() may vary from device to device, but according to the AOSP source it is a whopping 4000:
/**
 * Limit of length of input string passed to speak and synthesizeToFile.
 *
 * @see #speak
 * @see #synthesizeToFile
 */
public static int getMaxSpeechInputLength() {
    return 4000;
}
Try not to exceed that limit: compare the input text length with that value and split the text into separate parts if necessary.
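A minimal sketch of that splitting (assuming an initialized TextToSpeech instance named tts):

void speakLongText(TextToSpeech tts, String text) {
    // Staying one character under the limit; a later answer below explains why.
    final int max = TextToSpeech.getMaxSpeechInputLength() - 1;
    for (int start = 0; start < text.length(); start += max) {
        String chunk = text.substring(start, Math.min(text.length(), start + max));
        tts.speak(chunk, TextToSpeech.QUEUE_ADD, null); // append rather than flush
    }
}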
Use this code; it works for any file. Just send the string to the speech() function:
private void speech(String charSequence) {
    int position = 0;
    int sizeOfChar = charSequence.length();
    String testStri = charSequence.substring(position, sizeOfChar);
    int next = 20;
    int pos = 0;
    while (true) {
        String temp = "";
        Log.e("in loop", "" + pos);
        try {
            temp = testStri.substring(pos, next);
            HashMap<String, String> params = new HashMap<String, String>();
            params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, temp);
            engine.speak(temp, TextToSpeech.QUEUE_ADD, params);
            pos = pos + 20;
            next = next + 20;
        } catch (Exception e) {
            // Past the end of the string: speak the remainder and stop.
            temp = testStri.substring(pos, testStri.length());
            engine.speak(temp, TextToSpeech.QUEUE_ADD, null);
            break;
        }
    }
}
In case someone finds this helpful: when you split the large text into strings, do not set the length of each string to the exact value of getMaxSpeechInputLength(). Subtract 1 from it; otherwise only the last chunk of the string gets read by TTS.
int length = toSpeech.getMaxSpeechInputLength() - 1;
// Guava's Splitter and Lists
Iterable<String> chunks = Splitter.fixedLength(length).split(largeText);
Lists.newArrayList(chunks);
In practice it is worse than the 4000-character limit on Android: some TTS engines limit the input length far more. For example, the Nuance.tts and vocalizer.tts engines won't speak any string longer than about 512 characters (from my tests some time ago), and today I hit a limit below 300 characters in the es.codefactory.eloquencetts package, which simply crashes if the string I send it is more than 256-300 characters. I divide the contents into sentences, and guard against sentences longer than the above limits by further sub-dividing them in my app code.
Greg
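The poster doesn't show code, but such a guard could look roughly like this (a sketch; the per-engine limit is something you would have to measure, as described above):

List<String> toUtterances(String text, int engineLimit) {
    List<String> out = new ArrayList<>();
    BreakIterator it = BreakIterator.getSentenceInstance(Locale.UK);
    it.setText(text);
    int start = it.first();
    for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
        String sentence = text.substring(start, end);
        // Sub-divide any sentence that still exceeds the engine's limit.
        for (int i = 0; i < sentence.length(); i += engineLimit) {
            out.add(sentence.substring(i, Math.min(sentence.length(), i + engineLimit)));
        }
    }
    return out;
}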
If you follow ozbek's advice you should be fine. I too have large text files that I want spoken. I simply used the StreamReader method and everything works fine. Here's PART of my code (the part you should use). My code does a bit more than you want, but it works for me and may work for you.
Dim sReader As StreamReader = New StreamReader(Story_file)
Try
    Do Until sReader.EndOfStream
        Dim line_to_speak As String = sReader.ReadLine
        Dim vc = Mid(line_to_speak, 1, 1)                         ' <- you don't need this
        Select Case vc                                            ' <- you don't need this
            Case Is = "/"                                         ' <- you don't need this
                voice_index = Val(Mid(line_to_speak, 2, 2))       ' <- you don't need this
                srate = Val(Mid(line_to_speak, 5, 2))             ' <- you don't need this
                edassistv.lstVoices.SelectedIndex = voice_index   ' <- you don't need this
                selected_voice = edassistv.lstVoices.SelectedItem ' <- you don't need this
            Case Else                                             ' <- you don't need this
                synth.SelectVoice(selected_voice)
                synth.Speak(line_to_speak)
        End Select                                                ' <- you don't need this
    Loop
Catch ex As Exception
    GoTo finish
End Try
