I was wondering if anybody knew of a way to scan a registration plate on a car and then add that number to a string in Java? I have been looking around a bit and came across this:
https://github.com/SandroMachado/openalpr-android
But I'm not sure it's exactly what I'm looking for.
Anybody know of anything that might be of use? Thanks in advance!
Google has recently released the Mobile Vision API for text recognition in images. You can take a picture with the camera and then run it through this API to get your desired result. Link - https://developers.google.com/vision/text-overview
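For illustration, here is a minimal sketch of running the Mobile Vision TextRecognizer (com.google.android.gms.vision.text, from the play-services-vision dependency) over a captured bitmap; the Context and Bitmap are assumed to come from your own capture code:

// Sketch: pull the recognized text out of a Bitmap with Mobile Vision.
String readPlate(Context context, Bitmap bitmap) {
    TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
    try {
        if (!recognizer.isOperational()) {
            return ""; // recognizer native files not downloaded yet
        }
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<TextBlock> blocks = recognizer.detect(frame);
        StringBuilder text = new StringBuilder();
        for (int i = 0; i < blocks.size(); i++) {
            text.append(blocks.valueAt(i).getValue()); // e.g. "AB12 CDE"
        }
        return text.toString();
    } finally {
        recognizer.release();
    }
}

Note that this is plain OCR: you would still need to filter the result (e.g. with a plate-format regex) to pick the registration number out of any other text in the frame.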
I am using a phone with three back cameras (Realme GT Pro 2) and I want to access the one with the widest FOV. I am currently trying to implement this with the help of the Multi-camera API, but it's relatively confusing and I'm unable to implement it or find a solution. Can anyone give me some tips on how to access a specific back camera and display its stream?
I can get the physical camera IDs with CameraCharacteristics.getPhysicalCameraIds(), but how can I use one of them to open the correct physical camera? I also know that some manufacturers haven't yet implemented/allowed this access.
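For what it's worth, on API 28+ the usual pattern is to open the logical camera and then bind your preview surface to one physical sensor via OutputConfiguration. A rough sketch, where cameraDevice is the opened logical camera, physicalId is one of the IDs from getPhysicalCameraIds(), and surface, executor and stateCallback are placeholders for your own preview plumbing:

// Route the preview stream to a specific physical sensor of the logical camera.
OutputConfiguration output = new OutputConfiguration(surface);
output.setPhysicalCameraId(physicalId); // requires API 28+

SessionConfiguration sessionConfig = new SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        Collections.singletonList(output),
        executor,
        stateCallback);
cameraDevice.createCaptureSession(sessionConfig);

To find the widest lens, you can compare CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS (and SENSOR_INFO_PHYSICAL_SIZE) across the physical IDs; the shortest focal length relative to sensor size gives the widest FOV.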
I'm currently using ACTION_IMAGE_CAPTURE, but I cannot save more than one picture at a time, and I also can't take video (for that I should use ACTION_VIDEO_CAPTURE). I've seen some code samples that use INTENT_ACTION_STILL_IMAGE_CAMERA and a broadcast receiver, but they were outdated. I need help, please! Thanks
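One common workaround for the single-picture limit is to pass a fresh EXTRA_OUTPUT Uri every time you fire ACTION_IMAGE_CAPTURE, and simply relaunch the intent for each shot. A sketch, where the FileProvider authority and request code are placeholders:

// Ask the camera app to save each shot to its own file.
File photo = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES),
        "shot_" + System.currentTimeMillis() + ".jpg");
Uri target = FileProvider.getUriForFile(this, "com.example.fileprovider", photo);

Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, target);
startActivityForResult(intent, 42); // fire again in onActivityResult for the next picture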
I'm attempting to make an app for Android, and part of it includes the ability to convert an array of x,y coordinates into a character, e.g. if the coordinates form an L shape, it should return the character L.
I would assume that something like this exists already as I have seen similar things in other apps, although during my searching I wasn't able to find anything that did what I wanted (or I used the wrong search terms).
Does anyone know of any open-source systems that do this, or know of a good method for this?
Thanks for any help :)
For anyone who may want to do this in the future, I ended up using Android's Gesture API - http://developer.android.com/reference/android/gesture/Gesture.html - and used GesturePoints to input my x,y coordinate pairs.
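In case it helps, here is a rough sketch of that approach: build a Gesture from the coordinate pairs and match it against a gesture dictionary shipped as a raw resource (the coords array and R.raw.gestures are placeholders for your own data):

// Turn an array of x,y pairs into a Gesture and look it up in a saved dictionary.
String recognizeShape(Context context, float[][] coords) {
    ArrayList<GesturePoint> points = new ArrayList<GesturePoint>();
    long t = 0;
    for (float[] xy : coords) {
        points.add(new GesturePoint(xy[0], xy[1], t += 16)); // synthetic timestamps
    }
    Gesture gesture = new Gesture();
    gesture.addStroke(new GestureStroke(points));

    GestureLibrary library = GestureLibraries.fromRawResource(context, R.raw.gestures);
    if (!library.load()) {
        return null;
    }
    ArrayList<Prediction> predictions = library.recognize(gesture);
    // Predictions are sorted by score; treat low scores as "no match" (rough threshold).
    if (!predictions.isEmpty() && predictions.get(0).score > 1.0) {
        return predictions.get(0).name; // e.g. "L"
    }
    return null;
}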
This functionality did exist previously (not sure if it still does, or whether the project is still open source) in the 'Eyes-Free' product, which is designed to aid vision-impaired users.
Your best bet is to reuse functionality from there if possible.
Source-code repo: https://code.google.com/p/eyes-free/
Blog: http://eyes-free.blogspot.com/
Search for TV Raman - he is/was the lead developer for this project.
I think this is the specific project you are looking for (in the Eyes-Free source repo): https://code.google.com/p/eyes-free/downloads/detail?name=com.googlecode.eyesfree.inputmethod.latin-v1.1.6.apk&can=2&q=
You could use Android's built-in gesture library (the android.gesture package - http://developer.android.com/reference/android/gesture/GestureLibrary.html). It will recognize shapes stored in a gesture dictionary that you ship with your app. You will have to seed the dictionary yourself (although there may be some pre-defined alphabet dictionaries available for free these days; this functionality has been around since API level 4). You can seed it with the Gestures Builder sample app from your development environment, drawing and saving pre-defined shapes and mapping them to the expected input.
The gesture APIs will give you a probabilistic match (a list of predictions with scores) between what the recognizer sees and what is in the dictionary, and you can decide what to do with the input shape from there.
I have used this before in an old app and it works very well.
Here's another link to the old Android Dev docs for it: http://docs.huihoo.com/android/2.1/resources/articles/gestures.html
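If you are capturing strokes live rather than from stored coordinates, the usual pattern from that article is a GestureOverlayView plus a performed-gesture listener; R.raw.gestures is assumed to be a dictionary you created with the Gestures Builder sample:

// In an Activity whose layout contains a GestureOverlayView with id "overlay".
final GestureLibrary library = GestureLibraries.fromRawResource(this, R.raw.gestures);
library.load();

GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.overlay);
overlay.addOnGesturePerformedListener(new GestureOverlayView.OnGesturePerformedListener() {
    @Override
    public void onGesturePerformed(GestureOverlayView view, Gesture gesture) {
        ArrayList<Prediction> predictions = library.recognize(gesture);
        if (!predictions.isEmpty() && predictions.get(0).score > 1.0) {
            Log.d("Gestures", "Matched: " + predictions.get(0).name); // e.g. "L"
        }
    }
});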
I'm sure most of you have used an Android phone and taken a picture. Whenever the user moves the phone and then holds it steady, the camera focuses automatically. I'm having a hard time replicating this in my app: the autoFocus() method is called only once, when the application is launched. I have been searching for a solution for the past 3 days, and while reading the Google documentation I stumbled upon the sensor method calls (such as detecting when the user tilts the phone forwards or backwards). I could use that API to achieve what I need, but it sounds too dirty and too complicated. I'm sure there's another way around it.
All the examples I have found on the internet only focus when the user presses the screen or a button. I have also gone through several questions on SO to hopefully find what I am looking for, but I was unsuccessful. I have seen this question, but that String is not compatible with my phone; for some reason the only focus modes I can use are fixed and auto.
I was hoping someone here would shed some light on the subject because I am at a loss.
Thank you very much for your time.
Since API 14 you can set this parameter:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#FOCUS_MODE_CONTINUOUS_PICTURE
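A minimal sketch of enabling it with the old android.hardware.Camera API, guarding for devices that only report fixed and auto (as in the question):

// Enable continuous autofocus if the device supports it.
Camera.Parameters params = camera.getParameters();
List<String> modes = params.getSupportedFocusModes();
if (modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
    params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
    camera.setParameters(params);
}
// Otherwise fall back to re-triggering autoFocus() yourself, as in the next answer.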
Yes, camera.autoFocus(callback) is a one-shot call. You will need to call it in a loop to have it autofocus continuously. Preferably you would add motion detection via the accelerometer or compass to detect when the camera is moved, and refocus then.
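A sketch of such a loop, assuming you already hold an open Camera with a running preview; the 1-second delay between passes is an arbitrary choice:

// Re-trigger autofocus roughly once a second.
class RefocusLoop implements Camera.AutoFocusCallback {
    private final Handler handler = new Handler();

    void start(Camera camera) {
        camera.autoFocus(this);
    }

    @Override
    public void onAutoFocus(boolean success, final Camera camera) {
        // Schedule the next focus pass whether or not this one succeeded.
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                camera.autoFocus(RefocusLoop.this);
            }
        }, 1000);
    }
}

Call new RefocusLoop().start(camera) after startPreview(); a motion-sensor trigger would simply replace the fixed delay.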
Is anyone developing robots and/or gadgets for Google Wave?
I have been a part of the sandbox development for a few days and I was interested in seeing what others have thought about the Google Wave APIs.
I was also wondering what everyone has been working on. Please share your opinions and comments!
I haven't tried the gadgets, but from the little I've looked at them, they seem pretty straightforward. They're implemented in a template-ish way, and you can easily keep state in them, allowing more complex things such as RSVP lists and even games.
Robots are what I'm most interested in, and well, all I can say is that they're really easy to develop! Like barely any effort at all! Heck, I'll code one for you right here:
import waveapi.events
import waveapi.robot

def OnBlipSubmitted(properties, context):
    # Get the blip that was just submitted.
    blip = context.GetBlipById(properties['blipId'])
    # Respond to the blip (i.e. create a child blip).
    blip.CreateChild().GetDocument().SetText('That\'s so funny!')

def OnRobotAdded(properties, context):
    # Add a message to the end of the wavelet.
    wavelet = context.GetRootWavelet()
    wavelet.CreateBlip().GetDocument().SetText('Heeeeey everybody!')

if __name__ == '__main__':
    # Register the robot and its event handlers.
    bot = waveapi.robot.Robot(
        'The Annoying Bot',
        image_url='http://example.com/annoying-image.gif',
        version='1.0',
        profile_url='http://example.com/')
    bot.RegisterHandler(waveapi.events.BLIP_SUBMITTED, OnBlipSubmitted)
    bot.RegisterHandler(waveapi.events.WAVELET_SELF_ADDED, OnRobotAdded)
    bot.Run()
Right now I'm working on a Google App Engine project that's going to be a collaborative text adventure game. For this game I made a bot that lets you play it on Wave. It uses Wave's threading of blips to let you branch the game at any point etc. For more info, have a look at the Google Code project page (scroll down a little bit for a screenshot.)
Go to the Google Wave developers site and read the blogs and forums; all your questions will be answered there, including a recent post about a gallery of Wave apps. You will also find other developers to play with in the sandbox.
I have been working on gadgets, using the Wave API. It's pretty easy to work with: for the most part, you write JavaScript inside an XML file, and you just need to give the XML file the proper tags. Below is a sample of what a gadget looks like; this particular gadget retrieves the top headlines from Slashdot and displays them at the top of the wave. You can learn more about gadgets in the Gadgets API documentation.
(Screenshot of the sample gadget XML: http://www.m1cr0sux0r.com/xml.jpg)