Does anyone know a way to "render" plots, or at least trees, in console mode (i.e. draw them in the console)?
I would like to be able to render small plots at the end of a very long process by drawing some figures in ASCII, in order to have a geeky & fun view of some stats collected during the process.
I would be pleased to discover a library which does that, and I would like to keep the process 100% Java: no shell hacks or third-party software.
-- EDIT
#lbalazscs and #Fortega gave interesting answers, but the point of my question is to find out whether such a library exists, so I will add some details I missed the first time:
The output should be able to display trees, including binary trees (as linked by #lbalazscs here), but also simple charts such as bar graphs.
I will leave this question "unanswered" for a while, and if no conclusive answer appears, #lbalazscs will get the points ;)
You can print ASCII trees with minimal code. See the second answer to this question: How to print binary tree diagram?
(the second answer, because it is not limited to binary trees)
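Here is a minimal, hand-rolled sketch of that idea in plain Java (this is not the code from the linked answer; the Node class and the box-drawing style are just illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal n-ary tree plus a recursive ASCII renderer.
public class AsciiTreeDemo {

    static class Node {
        final String label;
        final List<Node> children = new ArrayList<>();
        Node(String label) { this.label = label; }
        Node add(String childLabel) {
            Node child = new Node(childLabel);
            children.add(child);
            return child;
        }
    }

    static void print(Node root) {
        System.out.println(root.label);
        printChildren(root, "");
    }

    // 'prefix' carries the vertical bars inherited from ancestors;
    // the last child gets └── while its siblings get ├──.
    static void printChildren(Node node, String prefix) {
        for (int i = 0; i < node.children.size(); i++) {
            boolean last = (i == node.children.size() - 1);
            Node child = node.children.get(i);
            System.out.println(prefix + (last ? "└── " : "├── ") + child.label);
            printChildren(child, prefix + (last ? "    " : "│   "));
        }
    }

    public static void main(String[] args) {
        Node root = new Node("stats");
        root.add("parsing").add("errors");
        root.add("timing");
        print(root);
    }
}
```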
For people coming here looking for a pure Java tree drawing library: I recommend text-tree, which draws trees like this (other styles and plenty of configuration options are available if you need them):
some text
├─── more text
├─── and more
│    ├─── still more
│    ╰─── more
╰─── the end
Full disclosure: I am the author of text-tree.
If I remember correctly, you can draw ASCII graphs with JavaPlot.
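For the bar-graph side of the question, a dependency-free console bar chart is also only a few lines of plain Java; here is a rough sketch (the labels, values, and scaling are purely illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Prints a horizontal ASCII bar chart, scaling bars to a fixed maximum width.
public class AsciiBarChart {

    static void render(Map<String, Double> data, int maxBarWidth) {
        double max = data.values().stream().mapToDouble(Double::doubleValue).max().orElse(1.0);
        int labelWidth = data.keySet().stream().mapToInt(String::length).max().orElse(1);
        for (Map.Entry<String, Double> e : data.entrySet()) {
            int len = (int) Math.round(e.getValue() / max * maxBarWidth);
            String bar = new String(new char[len]).replace('\0', '#');
            System.out.printf("%-" + labelWidth + "s | %s %.1f%n", e.getKey(), bar, e.getValue());
        }
    }

    public static void main(String[] args) {
        Map<String, Double> stats = new LinkedHashMap<>();
        stats.put("parsed", 1250.0);
        stats.put("skipped", 310.0);
        stats.put("failed", 42.0);
        render(stats, 40);
    }
}
```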
I am working on a handwritten form recognition system. So far I have been able to detect text using Java with OpenCV, but now I want to read the text from each of these bounding boxes (see the attached image).
I have been doing research to find a process for this using Java with OpenCV, but I was unable to find any.
Please suggest some links, technologies, methods, or processes to perform this particular task with Java.
This answer is more general than question-specific, but I will try to stick as closely as possible to the problem statement.
Although there is a lot of ongoing research on the recognition of handwritten text, there is no foolproof method that works for every possible input.
The sample image you posted is relatively noisy, with extremely high variance between instances of the same letter. This is exactly where it gets tricky.
I would personally suggest that once you have the bounding boxes around the text (which you already do), you run contour extraction inside each bounding box to extract single letters. Once you have them, you need to figure out relevant features that capture most of the variance (or at least a 95% confidence interval) of each particular letter.
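As a rough illustration of that contour-extraction step, here is a sketch using the OpenCV Java bindings (the file name, Otsu thresholding, and the size filter are assumptions you would tune for your forms):

```java
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class LetterExtractor {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Load one cropped bounding box (a word/field from the form) as grayscale.
        Mat box = Imgcodecs.imread("field.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Binarize: letters become white blobs on a black background.
        Mat binary = new Mat();
        Imgproc.threshold(box, binary, 0, 255,
                Imgproc.THRESH_BINARY_INV + Imgproc.THRESH_OTSU);

        // Find the outer contour of every blob.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        int i = 0;
        for (MatOfPoint contour : contours) {
            Rect r = Imgproc.boundingRect(contour);
            // Crude size filter to drop specks of noise.
            if (r.width < 5 || r.height < 5) continue;
            Mat letter = new Mat(binary, r);
            Imgcodecs.imwrite("letter_" + (i++) + ".png", letter);
        }
    }
}
```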
With these features, you need to train a supervised algorithm, using the letters as training data and their corresponding values (e.g. the actual characters) as labels. Once you have that, give it some data (both the easiest and the most difficult cases) to analyze its accuracy.
These links can help you get started:
One of the first tools I use to check accuracy with a feature set before I start coding: Weka (a quick usage sketch follows this list).
Go through basic tutorials on machine learning and how they work - Personal Favorite
You could try TensorFlow.
Simple Digit Recognition OCR in OpenCV-Python - Great for beginners.
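Regarding the Weka suggestion above, here is a rough sketch of how you might sanity-check a feature set from Java before writing anything custom (the ARFF file name and the choice of classifier are just assumptions):

```java
import weka.classifiers.Evaluation;
import weka.classifiers.lazy.IBk;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import java.util.Random;

public class FeatureCheck {
    public static void main(String[] args) throws Exception {
        // Each row: one extracted letter's features, last attribute = the true character.
        Instances data = new DataSource("letters.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // k-nearest-neighbours is a reasonable first baseline.
        IBk knn = new IBk(3);

        // 10-fold cross-validation gives a quick accuracy estimate.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(knn, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```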
Hope it helps!
To simplify the problem, I have a graph that contains nodes and edges which are on a 2D plane.
What I want to be able to do is click a button and have the graph automatically lay itself out to look clean. By that I mean minimal crossing of edges, nice spacing between nodes, and maybe even representing the graph's scale (weighted edges).
I know that what counts as a clean-looking graph is completely subjective, but does anyone know of an algorithm to start with, rather than reinventing the wheel?
Thanks.
You will find http://graphdrawing.org/ and this tutorial, by Roberto Tamassia, professor at Brown University, quite helpful.
I very much like force-directed techniques (pp. 66-72 in the tutorial), such as the spring embedder.
You assume there is a spring or other force between any two adjacent nodes and let nature (simulation) do the work :)
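To make the idea concrete, here is a small, self-contained Java sketch of a spring-embedder style iteration loop (the constants, graph representation, and stopping rule are all assumptions; polished algorithms such as Fruchterman-Reingold refine these details):

```java
import java.util.Random;

// Toy force-directed ("spring embedder") layout: adjacent nodes attract,
// all pairs repel; positions are nudged along the net force each iteration.
public class SpringLayout {
    static final double REPULSION = 5000;   // strength of node-node repulsion
    static final double SPRING_LEN = 100;   // rest length of an edge "spring"
    static final double SPRING_K = 0.05;    // spring stiffness
    static final double STEP = 0.85;        // damping applied to each move

    public static void layout(double[][] pos, int[][] edges, int iterations) {
        int n = pos.length;
        for (int it = 0; it < iterations; it++) {
            double[][] force = new double[n][2];
            // Repulsion between every pair of nodes.
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    double dx = pos[i][0] - pos[j][0], dy = pos[i][1] - pos[j][1];
                    double dist = Math.max(1e-3, Math.hypot(dx, dy));
                    double f = REPULSION / (dist * dist);
                    force[i][0] += f * dx / dist; force[i][1] += f * dy / dist;
                    force[j][0] -= f * dx / dist; force[j][1] -= f * dy / dist;
                }
            }
            // Spring attraction along edges.
            for (int[] e : edges) {
                double dx = pos[e[0]][0] - pos[e[1]][0], dy = pos[e[0]][1] - pos[e[1]][1];
                double dist = Math.max(1e-3, Math.hypot(dx, dy));
                double f = SPRING_K * (dist - SPRING_LEN);
                force[e[0]][0] -= f * dx / dist; force[e[0]][1] -= f * dy / dist;
                force[e[1]][0] += f * dx / dist; force[e[1]][1] += f * dy / dist;
            }
            // Move nodes a little along the net force.
            for (int i = 0; i < n; i++) {
                pos[i][0] += STEP * force[i][0];
                pos[i][1] += STEP * force[i][1];
            }
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[][] pos = new double[5][2];
        for (double[] p : pos) { p[0] = rnd.nextDouble() * 400; p[1] = rnd.nextDouble() * 400; }
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 0}, {0, 2}};
        layout(pos, edges, 500);
        for (int i = 0; i < pos.length; i++)
            System.out.printf("node %d: (%.1f, %.1f)%n", i, pos[i][0], pos[i][1]);
    }
}
```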
I would suggest that you take a look at at graphviz. The dot program can take a specification of a graph and generate an image of the network for you somewhat "cleanly". I've linked to the "theory" page that gives you some links that might be relevant if you're interested in the theoretical background. The library and tools themselves are mature enough if you simply want a solution to a problem with layout that you're facing.
I would say the same as Noufal Ibrahim, but you could also look more closely at the C API of the Graphviz project. It includes a library for building your graph (libgraph.pdf), with all the nodes and edges, and a library for laying out the graph (libgvc.pdf), which just computes each node's position, so you can then display it in your own UI, for example.
Also JGraph if you want the layouts in Java (I work on the project).
A good visual guide to how the most popular layouts actually look: follow the link.
I would like to know how practical it would be to create a program which takes handwritten characters in some form, analyzes them, and offers corrections to the user. The inspiration for this idea is to have elementary school students in other countries or University students in America learn how to write in languages such as Japanese or Chinese where there are a lot of characters and even the slightest mistake can make a big difference.
I am unsure how the program will analyze the character. My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work. Endpoints will also be useful to know. I would also like to tell the user if their character could be interpreted as another character similar to the one they wanted to write.
I imagine I will need a library of some sort to complete this project in any sort of timely manner, but I have been unable to locate one that meets the standards I will need for the program. I looked into OpenCV, but it appears to be meant more for computer vision than image processing. I would also prefer the library/module to be in Python or Java, but I can learn a new language if absolutely necessary.
Thank you for any help in this project.
Character recognition is usually implemented using artificial neural networks (ANNs). It is not a straightforward task to implement, seeing that there are usually lots of ways in which different people write the same character.
The good thing about neural networks is that they can be trained. So, to change from one language to another all you need to change are the weights between the neurons, and leave your network intact. Neural networks are also able to generalize to a certain extent, so they are usually able to cope with minor variances of the same letter.
Tesseract is an open-source OCR engine originally developed in the mid-90s. You might want to read about it to gain some pointers.
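If you just want to experiment with Tesseract from Java, the Tess4J wrapper keeps it to a few lines; this sketch assumes Tess4J is on the classpath and that trained language data exists at the given path:

```java
import java.io.File;
import net.sourceforge.tess4j.Tesseract;

public class QuickOcr {
    public static void main(String[] args) throws Exception {
        Tesseract tesseract = new Tesseract();
        // Path to the directory containing the .traineddata files (assumption; adjust for your install).
        tesseract.setDatapath("/usr/share/tesseract-ocr/4.00/tessdata");
        tesseract.setLanguage("eng");
        // Recognize a scanned character/word image and print the result.
        String text = tesseract.doOCR(new File("sample.png"));
        System.out.println(text);
    }
}
```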
You can follow company links from this Wikipedia article:
http://en.wikipedia.org/wiki/Intelligent_character_recognition
I would not recommend that you attempt to implement a solution yourself, especially if you want to complete the task in less than a year or two of full-time work. It would be unfortunate if an incomplete solution provided poor guidance for students.
A word of caution: some companies that offer commercial ICR libraries may not wish to support you and/or may not provide a quote. That's their right. However, if you do not feel comfortable working with a particular vendor, either ask for a different sales contact and/or try a different vendor first.
My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work.
The initial step of getting a stroke representation only a single pixel wide is much more difficult than you might guess. Although there are simple algorithms (e.g. Stentiford and Zhang-Suen) to perform thinning, stroke crossings and rough edges present serious problems. This is a classic (and unsolved) problem. Thinning works much of the time, but when it fails, it can fail miserably.
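To give a feel for what a thinning pass involves, here is a rough Java sketch of the Zhang-Suen algorithm operating on a boolean raster (true = ink); it shows the mechanics only, and inherits all the failure modes described above:

```java
// Zhang-Suen thinning: repeatedly peels boundary pixels in two sub-passes
// until the image stops changing, leaving (ideally) a one-pixel-wide skeleton.
public class ZhangSuen {

    public static void thin(boolean[][] img) {
        boolean changed = true;
        while (changed) {
            // Use | (not ||) so both sub-passes run every round.
            changed = step(img, 0) | step(img, 1);
        }
    }

    private static boolean step(boolean[][] img, int phase) {
        int h = img.length, w = img[0].length;
        java.util.List<int[]> toClear = new java.util.ArrayList<>();
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                if (!img[y][x]) continue;
                // Neighbours P2..P9, clockwise starting from north.
                boolean[] p = {
                    img[y - 1][x], img[y - 1][x + 1], img[y][x + 1], img[y + 1][x + 1],
                    img[y + 1][x], img[y + 1][x - 1], img[y][x - 1], img[y - 1][x - 1]
                };
                int black = 0, transitions = 0;
                for (int i = 0; i < 8; i++) {
                    if (p[i]) black++;
                    if (!p[i] && p[(i + 1) % 8]) transitions++;   // 0 -> 1 transitions around P1
                }
                boolean cond = (phase == 0)
                        ? !(p[0] && p[2] && p[4]) && !(p[2] && p[4] && p[6])   // P2*P4*P6 = 0, P4*P6*P8 = 0
                        : !(p[0] && p[2] && p[6]) && !(p[0] && p[4] && p[6]);  // P2*P4*P8 = 0, P2*P6*P8 = 0
                if (black >= 2 && black <= 6 && transitions == 1 && cond) {
                    toClear.add(new int[]{y, x});
                }
            }
        }
        for (int[] yx : toClear) img[yx[0]][yx[1]] = false;
        return !toClear.isEmpty();
    }
}
```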
You could work with an open source library, and although that will help you learn algorithms and their uses, to develop a good solution you will almost certainly need to dig into the algorithms themselves and understand how they work. That requires quite a bit of study.
Here are some books that are useful as introductory textbooks:
Digital Image Processing by Gonzalez and Woods
Character Recognition Systems by Cheriet, Kharma, Siu, and Suen
Reading in the Brain by Stanislas Dehaene
Gonzalez and Woods is a standard textbook in image processing. Without some background knowledge of image processing it will be difficult for you to make progress.
The book by Cheriet, et al., touches on the state of the art in optical character recognition (OCR) and also covers handwriting recognition. The sooner you read this book, the sooner you can learn about techniques that have already been attempted.
The Dehaene book is a readable presentation of the mental processes involved in human reading, and could inspire development of interesting new algorithms.
Have you seen http://www.skritter.com? They do this in combination with spaced repetition scheduling.
I guess you want to classify features such as curves in the strokes (http://en.wikipedia.org/wiki/CJK_strokes), then as a next layer identify components, then estimate the most likely character, statistically weighting the candidates all the while. Where there are two likely matches, you will want to flag them as easily confused. You will also need to create a database of probably 3,000 to 5,000 characters, or up to 10,000 for the ambitious.
See also http://www.tegaki.org/ for an open source program to do this.
I am wondering if there are libraries that could help me draw such figures on screen quickly using Java.
The dataset, number of nodes, etc. need to be parameterized.
If no such libraries exist, which tools in Swing would get me started? I want a quick and dirty way to represent this information.
Edit:
Also, it would help if you could tell me what to search for on Google to get results for such a specific query.
You can call GraphViz from within Java, converting any Java-based tree structure into the necessary GraphViz formats, and then reading the resulting .png image back into Java. That is probably the easiest approach, in terms of code-to-write (credit goes to SyntaxT3rr0r for proposing it first).
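A rough sketch of that approach: write a DOT description of the tree, shell out to the dot executable, and read the rendered PNG back into Java (the file paths and the assumption that dot is installed and on the PATH are yours to adjust):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.imageio.ImageIO;

public class DotRenderer {
    public static void main(String[] args) throws Exception {
        // 1. Describe the tree in GraphViz's DOT language.
        String dot = "digraph tree {\n"
                   + "  node [shape=box];\n"
                   + "  root -> left;\n"
                   + "  root -> right;\n"
                   + "  left -> leaf1;\n"
                   + "  left -> leaf2;\n"
                   + "}\n";
        Files.write(Paths.get("tree.dot"), dot.getBytes("UTF-8"));

        // 2. Ask the dot executable to render it to a PNG.
        Process p = new ProcessBuilder("dot", "-Tpng", "tree.dot", "-o", "tree.png")
                .inheritIO().start();
        if (p.waitFor() != 0) throw new IllegalStateException("dot failed");

        // 3. Read the image back into Java, e.g. to show it in a Swing JLabel.
        BufferedImage img = ImageIO.read(new File("tree.png"));
        System.out.println("Rendered " + img.getWidth() + "x" + img.getHeight() + " px");
    }
}
```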
Customizing JGraph would also work, but I doubt that any of the default node types would cut it. There are examples in the manual covering how to code your own node types and representations. JGraph allows easy graphical editing of node labels and positions, has hierarchical layouts (the type you use for trees), and supports "ports" of origin (and destination) for your parent-child edges. You can try their editor demo (included in the default download) if you just want a quick test.
Is this even possible? I have one huge image (80 MB) containing a lot of tiny pictures. They are tilted and rotated as well. How can I search for a particular image programmatically? I know Java and C++. How would you go about this?
You might want to look up the Scale Invariant Feature Transform (SIFT) algorithm. Just for example, it's used in a fair number of programs for automatically generating panoramas, to recognize the parts of pictures that match up, despite differences in scaling, tilting, panning, and so on.
Edit: Quite true -- it is patented, and I probably should have mentioned that to start with. In case anybody cares, it's US patent # 6,711,293.
One algorithm I've used before is SIFT. If you're interested in implementing the algorithm for yourself, you can see course notes for CPSC 425 at UBC, which describes in gentle detail how to implement SIFT in MATLAB. If you just want code that does this, take a look at VLFeat, a C library that does SIFT and a number of other algorithms.
Quotation from Jerry Coffin:
Edit: Quite true -- it is patented, and I probably should have mentioned that to start with. In case anybody cares, it's US patent # 6,711,293.
How much do you know about the image? Exactly what it looks like? Do you have a copy of the image and you just need to figure out where in the large image it is?
Anyway, the branch of CS that deals with these kinds of questions is called Computer Vision.
OpenCV and TINA are two open-source libraries you might be able to use.
You should probably start out with the simplest ideas and see if they are sufficient for your needs. In the field of pattern matching, the simplest idea is template matching. There is an efficient implementation of template matching in OpenCV.
Note that template matching is rotation-variant, meaning that if the template you are trying to match can be rotated in the image you are searching, it won't work unless you pre-rotate the templates.
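Here is a rough sketch of that template-matching approach with the OpenCV Java bindings (the file names are assumptions; as noted above, you would have to repeat the search over pre-rotated copies of the template to handle tilted instances):

```java
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class FindTemplate {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat scene = Imgcodecs.imread("huge.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat template = Imgcodecs.imread("tiny.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Slide the template over the scene and score every position.
        Mat result = new Mat();
        Imgproc.matchTemplate(scene, template, result, Imgproc.TM_CCOEFF_NORMED);

        // The best match is the location with the highest score.
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        System.out.printf("best match at (%.0f, %.0f), score %.3f%n",
                mm.maxLoc.x, mm.maxLoc.y, mm.maxVal);
    }
}
```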