I read that there are two types of voice commands in Glass:
1) choosing from a menu (e.g. "ok glass, directions to")
2) free speech recognition (e.g. "fifth avenue NYC")
I want to develop a Glass app and want to use voice recognition. For which of them can I use a non-English language?
To be clear, I mean changing the language on the developer side, not the user side: saying "Ok Glass" and then seeing the menu items in Hebrew, or saying "take me to" and then giving the place description in Hebrew.
Is there any workaround for that?
At this point Glass voice recognition appears to support only US English. The "Ok Glass" menu items are controlled by Google for official apps. It is my understanding that the classifiers that recognize these commands are built into the Glass code, not simply matched against a string. (Side-loaded apps can have their own voice command based on an English string, but it isn't as reliable as the ones Google has officially endorsed.)
Free speech recognition, for example when you reply to an email on Glass, is done using the RecognizerIntent.ACTION_RECOGNIZE_SPEECH intent. While the Android documentation suggests one could add the EXTRA_LANGUAGE extra to the intent, Glass itself only handles English.
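For reference, here is a minimal sketch of launching that intent with a language hint on standard Android. On Glass the extra is ignored; on phones it requests the given locale (the activity and request-code names here are illustrative):

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class SpeechDemoActivity extends Activity {
    private static final int REQ_SPEECH = 1;

    private void startRecognition() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // Language hint: honored by the phone recognizer, ignored on Glass.
        // "iw-IL" is the legacy code for Hebrew; newer devices accept "he-IL".
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "iw-IL");
        startActivityForResult(intent, REQ_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            // results.get(0) is the top transcription candidate.
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```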
Therefore, if you want to work around this, you will need to use MediaRecorder to grab the audio directly, stream it to a service that provides Hebrew voice-to-text transcription, and then send the text back to your Glass application. This would not be supported directly from the clock (home) screen; you'd have to handle it from a LiveCard or an Immersion. Glass will display Hebrew characters.
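A sketch of the capture half of that workaround is below. It records microphone audio to a file with MediaRecorder (requires the RECORD_AUDIO permission); the upload step to an external Hebrew transcription service is not shown, since that endpoint would be your own choice:

```java
import android.media.MediaRecorder;
import java.io.File;
import java.io.IOException;

public class AudioCapture {
    private MediaRecorder recorder;

    public void start(File output) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.AMR_WB);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_WB);
        recorder.setOutputFile(output.getAbsolutePath());
        recorder.prepare();
        recorder.start();
    }

    public File stop(File output) {
        recorder.stop();
        recorder.release();
        recorder = null;
        // Next step (not shown): POST the file to a Hebrew speech-to-text
        // service, then display the returned text from a LiveCard or Immersion.
        return output;
    }
}
```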
Related
The Android TV (ATV) app I'm working on has voice control capabilities. Basically, when the user presses the microphone button on a remote controller, the key event (identified by KeyEvent.KEYCODE_SEARCH) is handled by the app, speech recognition starts (using android.speech.SpeechRecognizer), the results (parsed speech) are obtained and parsed further by the app logic (e.g. showing the user search results or performing some in-app action).
Everything has been working as intended and described above, until, quite recently, Google Assistant (GA) was introduced to ATV platforms (the first one being Nvidia Shield box). Now, when the RCU mic button is pressed, the GA overlay appears and the mic key event doesn't even reach the app.
For the last few days I've done some extensive research (documentation, internet, forums, stackoverflow etc.) and experimented with some potential workarounds, but nothing's worked so far and I haven't been able to find any definite information on the topic (probably due to the ATV+GA combination being rather new on the scene, and the ATV ecosystem not being as large as the Android one).
The best hint I've got so far is what's been done with the Spotify app for Android TV. When it runs on an ATV device without GA, it basically behaves as I described above; but when GA is present, the GA overlay appears, receives the parsed speech, and shows the search results, with Spotify's results in the first row. So the Spotify app is integrated with GA, and this integration replaces the in-app voice control mechanism. This suggests that either there is no way to ignore/disable GA inside your app in order to receive the mic key event and proceed with voice control as usual, or at least that this is now the preferred way of handling voice commands. It also shows that there are ATV apps that approach voice control the way I described, so maybe someone here has already encountered a similar problem.
My question(s):
is it possible to prevent Google Assistant from taking over RCU mic button signal?
is it ok to do so? (by "not ok" I would mean - are there any official guidelines that discourage such behavior - or at least are there valid reasons not to do so?)
if so, can it be done?
if not, is there a resource documenting how to integrate with GA (the way Spotify for ATV app does)?
Starting with your last question:
if not, is there a resource documenting how to integrate with GA (the way Spotify for ATV app does)?
I wrote about how to integrate on the Android Developer's Blog. Spotify has onboarded their content catalog to Google's services which is why the Google Assistant is able to work so well. You can achieve similar results if you make your app searchable (covered in the blog).
is it possible to prevent Google Assistant from taking over RCU mic button signal?
No, not at this time. The Google Assistant is a system app that takes control over the mic to give a uniform experience across all apps.
is it ok to do so? (by "not ok" I would mean - are there any official guidelines that discourage such behavior - or at least are there valid reasons not to do so?)
if so, can it be done?
You can still have an in-app search experience. There is an example in the Leanback sample. You will need to set a listener on a BrowseFragment and implement a SearchFragment. We know it can be confusing to have in-app search and Google Assistant search competing, but we are working on improving this.
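As a rough sketch of that wiring (class names other than the Leanback ones are hypothetical): the search orb on the BrowseFragment launches your own activity hosting a SearchFragment, which supplies results through a SearchResultProvider:

```java
import android.content.Intent;
import android.os.Bundle;
import android.support.v17.leanback.app.BrowseFragment;
import android.support.v17.leanback.app.SearchFragment;
import android.support.v17.leanback.widget.ArrayObjectAdapter;
import android.support.v17.leanback.widget.ListRowPresenter;
import android.support.v17.leanback.widget.ObjectAdapter;
import android.view.View;

public class MainFragment extends BrowseFragment {
    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        // Shows the search orb and routes clicks to our own search activity.
        setOnSearchClickedListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                startActivity(new Intent(getActivity(), SearchActivity.class));
            }
        });
    }
}

class MySearchFragment extends SearchFragment
        implements SearchFragment.SearchResultProvider {
    private final ArrayObjectAdapter rowsAdapter =
            new ArrayObjectAdapter(new ListRowPresenter());

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setSearchResultProvider(this);
    }

    @Override
    public ObjectAdapter getResultsAdapter() {
        return rowsAdapter;
    }

    @Override
    public boolean onQueryTextChange(String newQuery) {
        // Filter your catalog and repopulate rowsAdapter here.
        return true;
    }

    @Override
    public boolean onQueryTextSubmit(String query) {
        return true;
    }
}
```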
In a nutshell, my question is:
How can I show a soft-keyboard when accessing a LibGDX HTML (GWT) app via a mobile phone?
Details
In LibGDX, to bring up the soft-keyboard we can call this method:
Gdx.input.setOnscreenKeyboardVisible(true);
However, according to the documentation (https://github.com/libgdx/libgdx/wiki/On-screen-keyboard):
"On-screen keyboard functionality is only available the Android and iOS
platforms"
i.e.: not for HTML (GWT)
Problem:
If you access your HTML (GWT) application through a mobile or tablet, you have no way to show the soft keyboard and therefore no way of typing anything into your text field.
Given that most people now access web sites using mobiles and/or tablets, I assume many would have faced the same issue.
I'm pretty sure this is not about LibGDX but about the mobile browser itself. Think of it as handling input from some device (like a keyboard), which should work both on PCs and mobile devices; the keyboard is just not visible.
So you should look for a way to show the keyboard in a mobile browser, regardless of whether it is a LibGDX application.
Here are some examples:
Show android keyboard from javascript
Can I trigger Android soft keyboard to open via javascript ( without phonegap )?
jQuery Mobile Show Keyboard on Input Focus
Show virtual keyboard on mobile phones in javascript
The general conclusion (which my small tests also confirmed) is that you are generally not allowed to open the keyboard from a script context. Maybe there is a hack for this, but I'm pretty sure it won't work in every browser.
For that reason, I would rather recommend using an actual input element, or implementing your own "input layer": a user interface made of buttons.
For the GWT application I ended up creating a soft keyboard with:
A keyboard image
An actor per letter, with an action listener
Then an action resolver for showKeyboard: if running on GWT, show the soft keyboard.
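The actor-per-letter approach described above might be sketched like this in Scene2D (skin setup and layout are omitted; names and positions here are illustrative, not from the original answer):

```java
import com.badlogic.gdx.scenes.scene2d.InputEvent;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Skin;
import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
import com.badlogic.gdx.scenes.scene2d.ui.TextField;
import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;

public class SoftKeyboard {
    // Adds one tappable "key" actor per letter; each appends its
    // letter to the target TextField when clicked.
    public static void addKeys(Stage stage, Skin skin, final TextField target) {
        String letters = "abcdefghijklmnopqrstuvwxyz";
        for (int i = 0; i < letters.length(); i++) {
            final String letter = String.valueOf(letters.charAt(i));
            TextButton key = new TextButton(letter, skin);
            // Simple 10-column grid layout.
            key.setPosition(20 + (i % 10) * 60, 200 - (i / 10) * 60);
            key.addListener(new ClickListener() {
                @Override
                public void clicked(InputEvent event, float x, float y) {
                    target.setText(target.getText() + letter);
                }
            });
            stage.addActor(key);
        }
    }
}
```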
When an SMS is received, a badge appears on the inbox icon showing the number of unread messages. How can I show such a badge indicating the number of unread SMS messages on my app's icon?
Is there a way to add a badge to an application icon in Android?
Unfortunately, Android does not allow changing the application icon, because it is sealed in the APK once the program is compiled. There is no way to programmatically change it to a different drawable.
Read this link: Is there a way to add a badge to an application icon in Android?
There is no way to implement this for all devices out there. The problem is, this is no feature of Android by itself, but of the specific launcher, and while there are some launchers that offer this possibility (each slightly differently), there is no solution that would work with all devices.
Also, it is discouraged to use this UI pattern in Android apps, as it is an iOS design pattern.
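That said, as noted above, some launchers do expose their own hooks. As one example (this is a launcher-specific, undocumented workaround that is known to work on some Samsung TouchWiz launchers and is silently ignored elsewhere), a badge count can be requested via a broadcast:

```java
import android.content.Context;
import android.content.Intent;

public class BadgeUtils {
    // Samsung-launcher-specific badge broadcast; other launchers use
    // different, equally undocumented mechanisms or none at all.
    public static void setBadge(Context context, int count) {
        Intent intent = new Intent("android.intent.action.BADGE_COUNT_UPDATE");
        intent.putExtra("badge_count", count);
        intent.putExtra("badge_count_package_name", context.getPackageName());
        // Fully qualified name of your launcher activity (example value).
        intent.putExtra("badge_count_class_name", "com.example.app.MainActivity");
        context.sendBroadcast(intent);
    }
}
```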
I am creating an application with a voice search facility: for example, in my music player, if I want to play some music I just speak the song title and it plays that song. I have already tried Google voice search and it works, but I want the search to work offline. How can I do this? Please help. Thanks in advance.
If you are referring to my application utter!, I'm working on a developer API that will appear here in a couple of weeks and allow applications to interact with it, provided the user has it installed, of course.
Otherwise, for information on using offline recognition, see my answer on this post
You have to use a speech recognition system with an embedded vocabulary.
If you use a vocabulary, you do not need the web; you can process the information offline.
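To illustrate the vocabulary idea (this sketch is mine, not from the answer): once any recognizer returns a phrase, matching it against a fixed list of song titles is plain string work and needs no network. Levenshtein distance picks the closest title even when the recognition is slightly off:

```java
public class VocabularyMatcher {
    // Returns the vocabulary entry closest to the spoken phrase.
    public static String closest(String spoken, String[] titles) {
        String best = null;
        int bestDist = Integer.MAX_VALUE;
        for (String title : titles) {
            int d = distance(spoken.toLowerCase(), title.toLowerCase());
            if (d < bestDist) {
                bestDist = d;
                best = title;
            }
        }
        return best;
    }

    // Classic dynamic-programming Levenshtein edit distance.
    static int distance(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
        for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                dp[i][j] = Math.min(
                        Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                        dp[i - 1][j - 1] + cost);
            }
        }
        return dp[a.length()][b.length()];
    }
}
```

For example, `closest("hei jude", titles)` still selects "Hey Jude" from the catalog despite the misrecognized word.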
I think you need something like this:
Speech recognition by Microsoft
I have a few MP3 files of speeches. I have used Android speech-to-text before, so I know it can capture spoken words. Is there any way to extract the spoken words from an MP3 and display them in an EditText?
I am thinking about playing the MP3 silently and identifying the words, but I have no idea how to do that. I am using the Google speech engine.
There is no native way to convert an audio file that contains spoken words to text on Android. You'll need to use a third-party API to do this, such as:
AT&T
Nuance
iSpeech
And perhaps Pocket Sphinx, although you may have to write the file input stream side of it yourself.
If you're not concerned about breaking terms and conditions, you could use the Chrome Speech API.
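For the Pocket Sphinx route, a desktop-Java sketch using the related Sphinx4 file-recognition API might look like the following. Note that Sphinx expects 16 kHz mono PCM, so the MP3 would first have to be decoded to WAV with a separate decoder library (the file path is a placeholder):

```java
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;
import java.io.FileInputStream;
import java.io.IOException;

public class SpeechFileTranscriber {
    public static String transcribe(String wavPath) throws IOException {
        // Bundled US English models shipped with the sphinx4-data artifact.
        Configuration config = new Configuration();
        config.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        config.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        config.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(config);
        // The input must already be decoded PCM audio, not MP3.
        recognizer.startRecognition(new FileInputStream(wavPath));
        StringBuilder text = new StringBuilder();
        SpeechResult result;
        while ((result = recognizer.getResult()) != null) {
            text.append(result.getHypothesis()).append(' ');
        }
        recognizer.stopRecognition();
        return text.toString().trim();
    }
}
```

The resulting string could then be set on the EditText with setText().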