Calling the voice-input feature - Codename One app (iOS port) - Java

My Android app features a text input box with a button to the right of the EditText that invokes the voice-input feature.
I am porting the app with Codename One; at present, the iOS port is the goal.
The button has a suitable icon. This is the code:
voiceInputButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Intent voiceIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        voiceIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
        try {
            activity.startActivityForResult(voiceIntent, RESULT_SPEECH_REQUEST_CODE);
        } catch (ActivityNotFoundException ex) {
            // No activity on the device can handle the speech-recognition intent.
        }
    }
});
It works very well: the voice-input screen is opened and the result is passed back to the app as a string.
The string is what the user said (for example, a single word).
I need to have this functionality in the Codename One app for iOS.
What would be the equivalent? Is it necessary to call native iOS functions through the native interface?

You can implement speech-to-text via the Speech framework, which performs speech recognition on live or prerecorded audio. More info: https://developer.apple.com/documentation/speech
As for Codename One, you can create a native interface implemented in Objective-C code.
To use the Speech framework with Objective-C, see this answer:
https://stackoverflow.com/a/43834120
The answer says: «[...] To get this running and test it you just need a very basic UI, just create an UIButton and assign the microPhoneTapped action to it, when pressed the app should start listening and logging everything that it hears through the microphone to the console (in the sample code NSLog is the only thing receiving the text). It should stop the recording when pressed again. [...]». This seems very close to what you asked.
Obviously, creating the native interface takes time. For further help you can ask more specific questions; I hope this gives you a useful starting point.
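To give a concrete idea of the Codename One side, here is a minimal sketch of what the native interface declaration could look like in Java. The interface name and methods are hypothetical; the real work would happen in the Objective-C implementation, wrapping SFSpeechRecognizer from the Speech framework:
import com.codename1.system.NativeInterface;

// Hypothetical interface: the Objective-C implementation behind it
// would drive the Speech framework and store the recognized text.
public interface SpeechToText extends NativeInterface {
    // Start listening to the microphone for the given locale, e.g. "en-US".
    void startListening(String locale);

    // Stop listening.
    void stopListening();

    // Return the last recognized text (empty if nothing was recognized).
    String getRecognizedText();
}
On the Java side you would then obtain the implementation with NativeLookup.create(SpeechToText.class) and check isSupported() before calling it.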
Lastly, there are also alternative solutions, again in Objective-C, such as: https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/objectivec/ios/from-microphone
You can search on the web for: objective-c speech-to-text

Related

Citymaps questions in Android Studio

I have some new questions from today's Citymaps development.
In Android Studio, when I develop code with Citymaps, no logs are ever shown, yet this does not happen with other projects. Why?
According to the official Citymaps website, a map instance is created with CitymapsMapFragment, but the sample project that Citymaps provides uses SupportCitymapsMapFragment. What is the difference between them?
When the map has finished loading, is it automatically positioned to the current location or to some other default position? Where is that set?
If I enable GPS location, it finds the current position and shows a blue arrow quickly, but it consumes too much power. Are there other location methods, such as network or cell-tower positioning?
Code follows:
CitymapsMapFragment fragment = (CitymapsMapFragment) fragmentManager.findFragmentById(R.id.map);
if (fragment != null) {
    fragment.setMapViewListener(this);
}
I did not find a setMapViewListener method on the fragment, only setMapViewReadyListener. Is that right?
Other code:
CitymapsMapView mapView = new CitymapsMapView(this, options, this);
When I add an animation listener in additional methods like this:
mapView.setMapPosition(position, 300, new MapViewAnimationListener() {
    @Override
    public void onAnimationEnd(boolean completed) {
        Log.d("SomeApp", "Move Complete!");
    }
});
the app crashes and exits. I deliberately surrounded the code with a try-catch block to catch the exception, but nothing shows up in the logcat view. Why?
I am a developer on the Citymaps project. I will do my best to answer all your questions.
1) If you are not receiving log statements, this is likely an issue with your own application, IDE, or device configuration. In our own application, which uses the Citymaps SDK, we have no issues with logging.
2) Prior to using the Citymaps SDK, it is highly advisable that you familiarize yourself with fragments, but the short version is that SupportCitymapsMapFragment extends the Fragment class from the v4 support library (see the sketch after this list).
3) It is up to you to set the default position of the map.
4) If you create a class which implements the LocationSource interface and then call mapView.setLocationSource, you can modify the behavior of the map's location services. For an example, have a look at CitymapsLocationSource.java, which is the default implementation of this interface used by the SDK.
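As a sketch of what point 2 means in practice (SupportCitymapsMapFragment and R.id.map are names from the question, and the layout name is hypothetical; the essential part is the support-library plumbing), a support fragment must be hosted by a FragmentActivity and looked up through getSupportFragmentManager(), not getFragmentManager():
import android.os.Bundle;
import android.support.v4.app.FragmentActivity;

public class MapActivity extends FragmentActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_map); // hypothetical layout
        // Support fragments are resolved via the support FragmentManager.
        SupportCitymapsMapFragment fragment = (SupportCitymapsMapFragment)
                getSupportFragmentManager().findFragmentById(R.id.map);
    }
}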
As for the exception you are having, you have not provided nearly enough information. Please show a stack trace, and I may be able to help.
Thank you for using our SDK, feel free to post again with any more questions.

Yelp API on Android Studio

I've been a lurker on this site to help find answers to some of my problems before, but I am currently stuck on this and could not find a recent solution. The closest answers I found to my problem were Yelp API Android Integration and Yelp Integration in Android.
I tried following the steps in the second link, but they are a bit outdated. I have registered for an API key, downloaded the jar files from the GitHub repository and synced them, created the YelpAPI.java and TwoStepOAuth.java files, and removed the main method from YelpAPI. I am stuck on the search part of step 4. I tried to call the queryAPI method from inside an onClick method I made for a button:
public void getRandom(View view) {
    Intent intent = new Intent(this, SecondActivity.class);
    startActivity(intent);

    YelpAPI.YelpAPICLI yelpApiCli = new YelpAPI.YelpAPICLI();
    new JCommander(yelpApiCli);

    YelpAPI yelpApi = new YelpAPI(CONSUMER_KEY, CONSUMER_SECRET, TOKEN, TOKEN_SECRET);
    try {
        YelpAPI.queryAPI(yelpApi, yelpApiCli);
    } catch (JSONException e) {
        e.printStackTrace();
    }
}
Basically, what I want is: when the button is pressed, go to a second screen that displays what I queried Yelp for. I haven't worked on that part yet; right now I just want to get a result back from Yelp. Keep in mind I am a complete noob at Android Studio and at most intermediate at Java.
Any help is greatly appreciated. It seems like a really simple problem, but it's taking me forever to figure out on my own.
You can't do blocking work such as downloading or image loading on Android's main (UI) thread. A network call on the main thread crashes the app with a NetworkOnMainThreadException (the "Unfortunately, your app has stopped working" dialog), and blocking the UI thread for roughly five seconds triggers an "Application Not Responding" error for doing too much work on the main thread. So you need to move the Yelp query into an AsyncTask.
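A minimal sketch of what that could look like, reusing the names from the question (YelpAPI, YelpAPICLI, JCommander, and the key constants come from the tutorial code and are not verified here; queryAPI is assumed, for illustration only, to return the response as a String):
// Runs the Yelp query off the main thread; the result arrives in
// onPostExecute, which runs back on the UI thread.
private static class YelpQueryTask extends AsyncTask<Void, Void, String> {
    @Override
    protected String doInBackground(Void... params) {
        try {
            YelpAPI.YelpAPICLI yelpApiCli = new YelpAPI.YelpAPICLI();
            new JCommander(yelpApiCli);
            YelpAPI yelpApi = new YelpAPI(CONSUMER_KEY, CONSUMER_SECRET, TOKEN, TOKEN_SECRET);
            return YelpAPI.queryAPI(yelpApi, yelpApiCli); // assumed to return the raw response
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    @Override
    protected void onPostExecute(String result) {
        // Safe to touch the UI here, e.g. pass the result to SecondActivity.
    }
}
In getRandom() you would then call new YelpQueryTask().execute() and start SecondActivity once the result has arrived, rather than before the query runs.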

How to use QAndroidJniObject to call an Intent from Java

I want to use the rtl_tcp driver in my Android app to read raw data from a USB TV tuner.
I found this source code, https://github.com/martinmarinov/rtl_tcp_andro-, and the author publishes the driver as an app on Google Play; any app can invoke it and read the raw data via a TCP port.
This is the app: https://play.google.com/store/apps/details?id=marto.rtl_tcp_andro&hl=en
Now I would like to call this driver from my app, but I use Qt. I found the QAndroidJniObject class for calling Java code.
I also found a Java example at github.com/demantz/RFAnalyzer/blob/master/app/src/main/java/com/mantz_it/rfanalyzer/MainActivity.java:
try {
    Intent intent = new Intent(Intent.ACTION_VIEW);
    intent.setData(Uri.parse("iqsrc://-a 127.0.0.1 -p 1234 -n 1"));
    startActivityForResult(intent, RTL2832U_RESULT_CODE);
} catch (ActivityNotFoundException e) {
    Log.e(LOGTAG, "createSource: RTL2832U is not installed");
    ...
}
But I cannot find a way to write the equivalent C++/Qt code for this Java example. I am also not sure whether this Java code is correct or missing something.
Can someone help me?
Also, can I open a TCP connection from Qt code using the anet.h library?
You need to use the QAndroidJniObject class to create JNI objects and manipulate them. It's not always obvious, but it works in the end. Your nine lines of Java will most likely end up as about fifty lines of C++. I recommend that you transcode each line one by one and always check that the objects are valid (QAndroidJniObject::isValid()).
Here is an example creating an Intent and starting an activity:
startActivity on Qt, nothing displays
Try to write some code and post another SO question if it fails (the syntax for creating and manipulating QAndroidJniObject is not always obvious for C++ developers who are not familiar with Java).

Android SDK, Check if device is Amazon-FireTV

I am trying to write a simple piece of code that will execute some other code if a check passes. What I want to do is check whether my app is running on the Amazon Fire TV (the box, not the Fire TV Stick). I don't think it would be that hard to do; I am guessing it would be something like this?
String osName = android.getSystemOS(); // guessed API; this is the part I don't know
if (!osName.equals("AMAZON FIRE-TV")) {
    Toast.makeText(this, "This app may not be compatible with your device...", Toast.LENGTH_LONG).show();
    ...
}
You can check for any specific device name using:
boolean isFireTV = Build.MODEL.equalsIgnoreCase("AFTB");
(see this page for Fire TV model strings, and this one for Fire tablets)
I'd also check out this answer for a more generic test to help you determine whether your app is running on an Amazon device, or was installed via the Amazon Appstore (e.g. on a BlackBerry device).
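One common generic check is the build manufacturer, which Amazon's Fire devices report as "Amazon" (a sketch, not tied to any particular model string):
// True on Fire TV and Fire tablet hardware, regardless of model.
boolean isAmazonDevice = "Amazon".equalsIgnoreCase(android.os.Build.MANUFACTURER);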
The following function:
public static boolean isTV() {
return android.os.Build.MODEL.contains("AFT");
}
should detect either a Fire TV or a Fire TV Stick. See
https://developer.amazon.com/public/solutions/devices/fire-tv/docs/amazon-fire-tv-sdk-frequently-asked-questions
for details.
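On newer Fire OS devices there is also a feature-based check; Amazon's documentation lists "amazon.hardware.fire_tv" as the Fire TV system feature (a sketch, assuming a Context is in scope):
// Feature-based check, independent of Build.MODEL naming schemes.
boolean isFireTv = context.getPackageManager()
        .hasSystemFeature("amazon.hardware.fire_tv");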

Button to start the Gallery on Android

I'm trying to make a button in my App open the built in browser.
public void onClick(View v) {
    Intent intentBrowseFiles = new Intent(Intent.ACTION_VIEW);
    intentBrowseFiles.setType("image/*");
    intentBrowseFiles.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    startActivity(intentBrowseFiles);
}
This results in an error message "The application Camera (process com.android.gallery) has stopped unexpectedly."
If I set the Intent action to ACTION_GET_CONTENT, it manages to open the gallery, but then it simply returns the image to my app when a picture is selected, which is not what I want.
"I'm trying to make a button in my App open the built in browser."
Your question subject says "Gallery". Your first sentence in the question says "browser". These are not the same thing.
"If I set the Intent action to ACTION_GET_CONTENT it manages to open the gallery but then simply returns the image to my app when a picture is selected which is not what I want."
Of course, actually telling us "what [you] want" would just be too useful, so you are making us guess.
I am going to go out on a limb and guess that you are trying to open the Gallery application just as a normal application. Note that there is no Gallery application in the Android OS. There may or may not be a Gallery application on any given device, and it may or may not be one from the Android open source project.
However, for devices that have the Android Market on them, they should support an ACTION_VIEW Intent with a MIME type obtained from android.provider.MediaStore.Images.Media.CONTENT_TYPE.
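Put concretely, that suggestion would look something like this (a sketch; whether a gallery actually responds still depends on the device):
// Ask for a viewer of the device's image collection rather than a picker.
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setDataAndType(android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        android.provider.MediaStore.Images.Media.CONTENT_TYPE);
startActivity(intent);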
