Handwritten character (English letters, kanji, etc.) analysis and correction - Java

I would like to know how practical it would be to create a program which takes handwritten characters in some form, analyzes them, and offers corrections to the user. The inspiration for this idea is to help elementary school students in other countries, or university students in America, learn how to write in languages such as Japanese or Chinese, where there are many characters and even the slightest mistake can make a big difference.
I am unsure how the program will analyze the character. My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work. Endpoints will also be useful to know. I would also like to tell the user if their character could be interpreted as another character similar to the one they wanted to write.
I imagine I will need a library of some sort to complete this project in any sort of timely manner, but I have been unable to locate one which meets the standards I will need for the program. I looked into OpenCV, but it appears to be meant more for computer vision than image processing. I would also prefer the library/module to be in Python or Java, but I can learn a new language if absolutely necessary.
Thank you for any help in this project.

Character recognition is usually implemented using Artificial Neural Networks (ANNs). It is not a straightforward task to implement, since different people usually write the same character in many different ways.
The good thing about neural networks is that they can be trained. So, to switch from one language to another, you only need to change the weights between the neurons and can leave the network itself intact. Neural networks are also able to generalize to a certain extent, so they can usually cope with minor variations of the same letter.
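For a concrete sense of what "training" looks like in Java, here is a minimal sketch using the Encog library (class names as in Encog 3.x); the 16x16 input size, the three-character output, and the all-zero training arrays are placeholder assumptions for illustration, not a working recognizer:
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class CharNetSketch {
    public static void main(String[] args) {
        // Each training sample: a 16x16 binary character image flattened to 256 doubles (0.0 or 1.0).
        // Each label: a one-hot vector over the three example characters.
        double[][] inputs = new double[3][256];                 // placeholder pixel data
        double[][] ideals = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };

        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 256));                   // input layer: one neuron per pixel
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 64)); // hidden layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 3)); // one output per character class
        network.getStructure().finalizeStructure();
        network.reset();

        MLDataSet trainingSet = new BasicMLDataSet(inputs, ideals);
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        int epoch = 0;
        do {
            train.iteration();
            epoch++;
        } while (train.getError() > 0.01 && epoch < 1000);
        train.finishTraining();
        // To retarget another language, retrain on that language's samples; the topology stays the same.
    }
}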
Tesseract is an open-source OCR engine originally developed in the mid-90s. You might want to read about it to gain some pointers.

You can follow company links from this Wikipedia article:
http://en.wikipedia.org/wiki/Intelligent_character_recognition
I would not recommend that you attempt to implement a solution yourself, especially if you want to complete the task in less than a year or two of full-time work. It would be unfortunate if an incomplete solution provided poor guidance for students.
A word of caution: some companies that offer commercial ICR libraries may not wish to support you and/or may not provide a quote. That's their right. However, if you do not feel comfortable working with a particular vendor, either ask for a different sales contact and/or try a different vendor first.
My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work.
The initial step of getting a stroke representation only a single pixel wide is much more difficult than you might guess. Although there are simple algorithms (e.g. Stentiford and Zhang-Suen) to perform thinning, stroke crossings and rough edges present serious problems. This is a classic (and unsolved) problem. Thinning works much of the time, but when it fails, it can fail miserably.
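To give a sense of what a basic thinning pass involves, here is a sketch of the textbook Zhang-Suen algorithm on a binary image stored as a boolean grid (true = ink). It is the simple version referred to above and shares the weaknesses described at stroke crossings and rough edges:
// Zhang-Suen thinning: img[y][x] == true means foreground (ink).
public class ZhangSuen {
    public static void thin(boolean[][] img) {
        boolean changed = true;
        while (changed) {
            changed = subIteration(img, true);    // step 1
            changed |= subIteration(img, false);  // step 2
        }
    }

    private static boolean subIteration(boolean[][] img, boolean firstStep) {
        int h = img.length, w = img[0].length;
        java.util.List<int[]> toClear = new java.util.ArrayList<>();
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                if (!img[y][x]) continue;
                // Neighbours p2..p9, clockwise starting from the pixel above.
                boolean[] p = {
                    img[y - 1][x], img[y - 1][x + 1], img[y][x + 1], img[y + 1][x + 1],
                    img[y + 1][x], img[y + 1][x - 1], img[y][x - 1], img[y - 1][x - 1]
                };
                int neighbours = 0, transitions = 0;
                for (int i = 0; i < 8; i++) {
                    if (p[i]) neighbours++;
                    if (!p[i] && p[(i + 1) % 8]) transitions++;   // count 0 -> 1 transitions around the ring
                }
                boolean cond = firstStep
                        ? (!p[0] || !p[2] || !p[4]) && (!p[2] || !p[4] || !p[6])   // p2*p4*p6 == 0 and p4*p6*p8 == 0
                        : (!p[0] || !p[2] || !p[6]) && (!p[0] || !p[4] || !p[6]);  // p2*p4*p8 == 0 and p2*p6*p8 == 0
                if (neighbours >= 2 && neighbours <= 6 && transitions == 1 && cond) {
                    toClear.add(new int[]{y, x});
                }
            }
        }
        for (int[] yx : toClear) img[yx[0]][yx[1]] = false;
        return !toClear.isEmpty();
    }
}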
You could work with an open source library, and although that will help you learn algorithms and their uses, to develop a good solution you will almost certainly need to dig into the algorithms themselves and understand how they work. That requires quite a bit of study.
Here are some books that are useful as introductory textbooks:
Digital Image Processing by Gonzalez and Woods
Character Recognition Systems by Cheriet, Kharma, Siu, and Suen
Reading in the Brain by Stanislas Dehaene
Gonzalez and Woods is a standard textbook in image processing. Without some background knowledge of image processing it will be difficult for you to make progress.
The book by Cheriet, et al., touches on the state of the art in optical character recognition (OCR) and also covers handwriting recognition. The sooner you read this book, the sooner you can learn about techniques that have already been attempted.
The Dehaene book is a readable presentation of the mental processes involved in human reading, and could inspire development of interesting new algorithms.

Have you seen http://www.skritter.com? They do this in combination with spaced repetition scheduling.
I guess you want to classify features such as curves in your strokes (http://en.wikipedia.org/wiki/CJK_strokes), then as a next layer identify components, then estimate the most likely character, statistically weighting the candidates all the while. Where there are two likely matches, you will want to show them as likely to be confused. You will also need to create a database of probably 3,000 to 5,000 characters, or up to 10,000 for the ambitious.
See also http://www.tegaki.org/ for an open source program to do this.

Related

How to Read Text From Bounding Box using Java With OpenCV

I am working on a handwritten form recognition system. So far I have been able to detect text using Java with OpenCV, but now I want to read the text from each of these bounding boxes.
I have been doing research to find a process for this using Java with OpenCV, but I was unable to find one.
Please suggest some links, technologies, methods, or processes to perform this particular task with Java.
This answer is more general than question-specific. I will try to stick as closely as possible to the problem statement.
Although there is a lot of ongoing research on recognition of handwritten text, there is no foolproof method that works for all possible problems.
The sample image you posted here is relatively noisy, with extremely high variance between instances of the same letter. This is exactly where it gets tricky.
I would personally suggest that once you have the bounding boxes around the text (which you already do), you run contour extraction within these bounding boxes in order to extract single letters. Once you have them, you need to figure out relevant features that capture the maximum variance (or at least a 95% confidence interval) of each particular letter.
With these features, you need to train a supervised algorithm, using the letters as training data and their corresponding values (i.e. the actual characters) as labels. Once you have that, give it some held-out data (both the easiest and the most difficult cases) to analyze the accuracy.
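As a rough sketch of the contour-extraction step with OpenCV's Java bindings (the file names and the 20-pixel area threshold below are placeholder assumptions):
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class LetterExtractor {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Load one cropped bounding box as grayscale and binarize it (Otsu, inverted so ink is white).
        Mat gray = Imgcodecs.imread("box.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat bin = new Mat();
        Imgproc.threshold(gray, bin, 0, 255, Imgproc.THRESH_BINARY_INV + Imgproc.THRESH_OTSU);

        // Outer contours roughly correspond to individual letters (touching letters will merge).
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(bin.clone(), contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        int i = 0;
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            if (box.area() < 20) continue;           // skip specks of noise
            Mat letter = new Mat(bin, box);          // crop a single letter candidate
            Imgcodecs.imwrite("letter_" + (i++) + ".png", letter);
        }
    }
}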
These links can help you for a start:
One of my first tools to check the accuracy with the set of features I use before I start coding: Weka
Go through basic tutorials on machine learning and how they work - Personal Favorite
You could try TensorFlow.
Simple Digit Recognition OCR in OpenCV-Python - Great for beginners.
Hope it helps!

Is it a good idea to train word2vec with unsupervised crawled text from the Web?

I would like to know if it is, in general, a good idea to train word2vec with text automatically crawled from the Web. In the examples you can find on the Web, the algorithm is always trained with text of high quality (correct sentences, correct punctuation marks, no strange words, and so on).
However, when automatically crawling the Web, the quality of the raw text is not going to be so high. On the other hand, the compilation of the text for training can be done automatically and we do not need to spend time on it.
To complement the other answers, I would say that it really depends on what you want to do with the word vectors (the output of word2vec) after their creation:
If your intention is to use them for natural language processing (clustering, sentiment analysis, ...) on text of low quality (say, forum content or tweets full of informal language, abbreviations, malformed sentences, ...), it might be relevant. On the other hand, if your model will later be used to process text of high quality, it is probably a bad idea.
The word2vec algorithm tends to produce better accuracy with an increasing amount of (good) text. My current approach is to use a dump of Wikipedia and to complement it with content retrieved by crawling.
As a first way to get better-quality text, my crawler uses a whitelist of websites of good quality (news sites, government and administration, universities, ...) and thus retrieves content from those websites only.
I still keep some bad text to have at least some representation of informal language, conversations, slang, ... Depending on the usage, it might prove useful.
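For a Java baseline, here is a minimal sketch using deeplearning4j's Word2Vec (assuming DL4J is on the classpath and that corpus.txt holds one sentence per line, already assembled by the crawler); the hyperparameters are just common defaults, not tuned values:
import org.deeplearning4j.models.word2vec.Word2Vec;
import org.deeplearning4j.text.sentenceiterator.LineSentenceIterator;
import org.deeplearning4j.text.sentenceiterator.SentenceIterator;
import org.deeplearning4j.text.tokenization.tokenizer.preprocessor.CommonPreprocessor;
import org.deeplearning4j.text.tokenization.tokenizerfactory.DefaultTokenizerFactory;
import org.deeplearning4j.text.tokenization.tokenizerfactory.TokenizerFactory;
import java.io.File;
import java.util.Collection;

public class Word2VecSketch {
    public static void main(String[] args) {
        SentenceIterator iter = new LineSentenceIterator(new File("corpus.txt"));
        TokenizerFactory tokenizer = new DefaultTokenizerFactory();
        tokenizer.setTokenPreProcessor(new CommonPreprocessor()); // lower-cases and strips punctuation

        Word2Vec vec = new Word2Vec.Builder()
                .minWordFrequency(5)      // drops rare junk tokens that noisy crawls produce
                .layerSize(100)
                .windowSize(5)
                .iterate(iter)
                .tokenizerFactory(tokenizer)
                .build();
        vec.fit();

        Collection<String> similar = vec.wordsNearest("university", 10);
        System.out.println(similar); // eyeballing the neighbours is a quick sanity check on corpus quality
    }
}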
Hope that helps.
I wouldn't do that. The quality of the data is always an important factor.
I would pre-process/filter data first.
On the other hand, you can ingest all the data and set unclear words aside to treat them later, or discard them as invalid data. You can launch a batch process to clean the data first, so I don't think automation is a problem. You can even ingest and filter it in real time (streaming) from the crawler and then start training your word2vec as soon as the data is filtered.
Sorry if my answer is too vague. Maybe if you tell us how you are approaching it, or show us some of the low-quality records, the answer can be more accurate.
Maybe this link can give you some clues: http://chapeau.freevariable.com/2015/12/using-word2vec-on-log-messages.html

String analysis and classification

I am developing a financial manager in my free time with Java and a Swing GUI. When the user adds a new entry, he is prompted to fill in: Money amount, Date, Comment, and Section (e.g. Car, Salary, Computer, Food, ...).
The sections are created "on the fly". When the user enters a new section, it is added to the section JComboBox for further selection. The other point is that the comments could be in different languages, so a list of hard-coded words and synonyms would be enormous.
So, my question is: is it possible to analyse the comment (e.g. "Fuel", "Car service", "Lunch at **") and preselect a fitting Section?
My first thought was to do it with a neural network and learn from the input if the user selects another section.
But my problem is that I don't know how to start at all. I tried Encog with Eclipse and did some tutorials (XOR, ...), but all of them use only doubles as input/output.
Could anyone give me a hint on how to start, or suggest any other possible solution for this?
Here is a runnable JAR (current development state, requires Java 7) and the SourceForge page.
Forget about neural networks. This is a highly technical and specialized field of artificial intelligence which is probably not suitable for your problem, and it requires solid expertise. Besides, there are a lot of simpler and better solutions for your problem.
First obvious solution: build a list of words and synonyms for all your sections and parse the comments for these synonyms. You can then collect comments online for synonym analysis, or parse the comments/sections provided by your users to statistically detect relations between words, etc.
There is an infinite number of possible solutions, ranging from the simplest to the most overkill. Now you need to decide whether this feature of your system is critical (prefilling? probably not, then) and what any development effort will bring you. One hour of work could bring you an 80% satisfying feature, while aiming for 90% could cost a week of work. Is it really worth it?
Go for the simplest solution and tackle the real challenge of any dev project: delivering. Once your app is delivered, then you can always go back and improve as needed.
// Simple keyword check: if the comment mentions fuel, preselect the matching Section.
String comment = paramInput.toUpperCase();
if (comment.contains("FUEL")) {
    // do the fuel functionality, e.g. preselect the "Car" section
}
In a simple app, if you will only have a few specific sections, you can take the string from the comment, check whether it contains certain keywords, and change the value of Section accordingly.
If you have a lot of categories, I would use something like Apache Lucene, where you could index all the categories with their names and potential keywords/phrases that might appear in a user's description. Then you could simply run the description through Lucene and use the top matched category as a "best guess".
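A rough sketch of that Lucene idea (exact class names vary a little between Lucene versions; the section names and keyword lists here are made-up examples):
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class SectionIndex {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory dir = new ByteBuffersDirectory();   // in-memory index; use FSDirectory to persist

        // Index one document per section, listing keywords that typically appear in its comments.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            writer.addDocument(sectionDoc("Car", "fuel petrol gas service repair tires insurance"));
            writer.addDocument(sectionDoc("Food", "lunch dinner restaurant groceries supermarket"));
        }

        // "Best guess" = the section whose keyword document scores highest for the comment text.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            String comment = "Car service and new tires";
            Query query = new QueryParser("keywords", analyzer).parse(QueryParser.escape(comment));
            TopDocs hits = searcher.search(query, 1);
            if (hits.scoreDocs.length > 0) {
                System.out.println(searcher.doc(hits.scoreDocs[0].doc).get("section"));
            }
        }
    }

    private static Document sectionDoc(String section, String keywords) {
        Document doc = new Document();
        doc.add(new StringField("section", section, Field.Store.YES));
        doc.add(new TextField("keywords", keywords, Field.Store.NO));
        return doc;
    }
}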
P.S. Neural Network inputs and outputs will always be doubles or floats with a value between 0 and 1. As for how to implement String matching I wouldn't even know where to start.
It seems to me that the following will do:
hard word statistics
maybe a stemming class (English/Spanish) which reduces a word like "lunches" to "lunch"
a list of the most frequent non-words (the, at, a, for, ...)
The best fit is a linear problem, so in theory a fit for a neural net, but why not go straight for the numerical best fit.
A machine learning algorithm such as an Artificial Neural Network doesn't seem like the best solution here. ANNs can be used for multi-class classification (i.e. 'which of the provided pre-trained classes does the input belong to?', not just 'does the input represent an X?'), which fits your use case. The problem is that they are supervised learning methods, and as such you need to provide a list of pairs of keywords and classes (Sections) that spans every possible input your users will provide. This is impossible, and in practice ANNs are re-trained when more data is available to produce better results and a more accurate decision boundary / representation of the function that maps inputs to outputs. This also assumes that you know all possible classes before you start and that each of those classes has training input values that you provide.
The issue is that the input to your ANN (a list of characters or a numerical hash of the string) provides no context by which to classify. There's no higher level information provided that describes the word's meaning. This means that a different word that hashes to a numerically close value can be misclassified if there was insufficient training data.
(As maclema said, the output from an ANN will always be floats with each value representing proximity to a class - or a class with a level of uncertainty.)
A better solution would be to employ some kind of word-relation or synonym graph. A Bag of words model might be useful here.
Edit: In light of your comment that you don't know the Sections beforehand,
an easy solution to program would be to provide a list of keywords in a file that gets updated as people use the program. Simply storing a mapping of provided comments -> Sections, which you will already have in your database, would allow you to filter out non-keywords (and, or, the, ...). One option is then to find a list of every Section that the typed keywords belong to, suggest multiple Sections, and let the user pick one. The feedback you get from user selections would enable improvements to the suggestions in the future. Another would be to calculate a Bayesian probability - the probability that this word belongs to Section X given the previously stored mappings - for all keywords and Sections, and either take the modal Section or normalise over each unique keyword and take the mean. The probability calculations will of course need to be updated as you gather more information; perhaps this could be done with every new addition in a background thread.
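A sketch of that Bayesian suggestion, kept deliberately small (Java 8+ syntax for brevity; the stop-word list and the Laplace smoothing are simplistic placeholder choices):
import java.util.*;

public class SectionSuggester {
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>(); // section -> word -> count
    private final Map<String, Integer> sectionCounts = new HashMap<>();           // section -> #comments seen
    private final Set<String> vocabulary = new HashSet<>();
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("the", "at", "a", "an", "for", "and", "or"));

    // Call whenever the user saves an entry, so suggestions improve as the database grows.
    public void learn(String comment, String section) {
        sectionCounts.merge(section, 1, Integer::sum);
        Map<String, Integer> counts = wordCounts.computeIfAbsent(section, s -> new HashMap<>());
        for (String word : tokenize(comment)) {
            counts.merge(word, 1, Integer::sum);
            vocabulary.add(word);
        }
    }

    // Naive Bayes: score(section) = log P(section) + sum over words of log P(word | section).
    public String suggest(String comment) {
        List<String> words = tokenize(comment);
        int totalComments = sectionCounts.values().stream().mapToInt(Integer::intValue).sum();
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String section : sectionCounts.keySet()) {
            Map<String, Integer> counts = wordCounts.get(section);
            int sectionWordTotal = counts.values().stream().mapToInt(Integer::intValue).sum();
            double score = Math.log((double) sectionCounts.get(section) / totalComments); // prior
            for (String word : words) {
                // Laplace smoothing so an unseen word does not eliminate a section entirely.
                score += Math.log((counts.getOrDefault(word, 0) + 1.0)
                        / (sectionWordTotal + vocabulary.size() + 1.0));
            }
            if (score > bestScore) {
                bestScore = score;
                best = section;
            }
        }
        return best; // null until learn() has been called at least once
    }

    private List<String> tokenize(String comment) {
        List<String> words = new ArrayList<>();
        for (String word : comment.toLowerCase().split("[^\\p{L}]+")) {
            if (!word.isEmpty() && !STOP_WORDS.contains(word)) {
                words.add(word);
            }
        }
        return words;
    }
}
After a few calls like learn("Fuel for the car", "Car"), a call to suggest("Car service") would favour Car over the other sections, and every new entry the user confirms becomes more training data.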

Where do I start for Text Pattern Recognition - Java Based

I am seriously considering writing an Optical Character Recognition program. I am well versed in Java and would love to know about the libraries available out there. Basically, I want to convert something like the following to text. I will need to allow manual intervention to specify a pattern. For example, I would need to ask the user to mark f in this text, so that I know where f occurs.
I am a complete newbie to this, so I don't mind learning from scratch as well. I need guidance.
If you are thinking of coding an OCR program from scratch, reading up on techniques may be useful. I found an OCR Survey from 1996 which reviews some of the popular techniques from a decade and a half ago. Reading that might be helpful; track down papers it cites or papers which cite it.
Usually the process goes as follows:
find text
find characters in the text
extract features from the characters found
do pattern matching
report suspected character
While getting a user to annotate text is fun and exciting, finding a collection of handwriting which is already annotated might save you a lot of time; that way you can focus on the nuts and bolts of doing OCR rather than building your own database of annotated text.
To start with a slightly easier task you might want to consider building a system to detect handwritten digits. The USPS produced a corpus for developing systems to do this for zip code processing. The link was something I found with a quick search.
If you want to use/look at a library, you could try the Google-endorsed Tesseract.
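If you go the Tesseract route from Java, the Tess4J wrapper keeps the call quite small. A minimal sketch (the data path and image name below are assumptions, and you need the traineddata files for your language installed):
import java.io.File;
import net.sourceforge.tess4j.Tesseract;
import net.sourceforge.tess4j.TesseractException;

public class OcrSketch {
    public static void main(String[] args) {
        Tesseract tesseract = new Tesseract();
        tesseract.setDatapath("/usr/share/tesseract-ocr/4.00/tessdata"); // assumed location of traineddata files
        tesseract.setLanguage("eng");
        try {
            String text = tesseract.doOCR(new File("scanned_page.png"));
            System.out.println(text);
        } catch (TesseractException e) {
            e.printStackTrace();
        }
    }
}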

Algorithm for searching for an image in another image. (Collage)

Is this even possible? I have one huge image, 80 MB, with a lot of tiny pictures in it. They are tilted and turned around as well. How can I search for an image within it programmatically? I know how to use Java and C++. How would you go about this?
You might want to look up the Scale Invariant Feature Transform (SIFT) algorithm. Just for example, it's used in a fair number of programs for automatically generating panoramas, to recognize the parts of pictures that match up, despite differences in scaling, tilting, panning, and so on.
Edit: Quite true -- it is patented, and I probably should have mentioned that to start with. In case anybody cares, it's US patent # 6,711,293.
One algorithm I've used before is SIFT. If you're interested in implementing the algorithm for yourself, you can see course notes for CPSC 425 at UBC, which describes in gentle detail how to implement SIFT in MATLAB. If you just want code that does this, take a look at VLFeat, a C library that does SIFT and a number of other algorithms.
Quotation from Jerry Coffin:
Edit: Quite true -- it is patented, and I probably should have mentioned that to start with. In case anybody cares, it's US patent # 6,711,293.
How much do you know about the image? Exactly what it looks like? Do you have a copy of the image and you just need to figure out where in the large image it is?
Anyway, the branch of CS that deals with these kinds of questions is called Computer Vision.
OpenCV and TINA are two open source libraries you might be able to use.
You should probably start out with the simplest ideas and see if they are sufficient for your needs. In the field of pattern matching the simplest idea is that of template matching. There is an efficient implementation of template matching found in OpenCv.
Note that template matching is rotation variant, meaning if the template you are trying to match can be rotated in the image you are trying to find it in, it won't work unless you pre-rotate the templates.
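A minimal template-matching sketch with OpenCV's Java bindings (the file names are placeholders; remember this only finds near-identical, non-rotated copies):
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class TemplateMatch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat scene = Imgcodecs.imread("collage.png");
        Mat template = Imgcodecs.imread("tiny_picture.png");

        // Slide the template over the collage and record a similarity score at each position.
        Mat result = new Mat();
        Imgproc.matchTemplate(scene, template, result, Imgproc.TM_CCOEFF_NORMED);

        // The best match is the location with the highest normalized correlation.
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        System.out.println("Best match at " + mm.maxLoc + " with score " + mm.maxVal);
    }
}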
