How can I compare two images in Java?

I have 5 default images in the program, and I will allow the user to choose an image from their desktop. The program should determine which of the 5 images is the closest match to the user's image.
Can anyone point me toward a starting idea?

You can try a feature extraction algorithm like SIFT, SURF, etc. Then compare the extracted features with your database and select the best matching image based on the number of correct matches.
Generally SIFT works fine for 2D objects, like a picture of a label or an advertisement board. Rotation in the 2D plane or scale won't matter if you are using SIFT. SURF is supposed to be an improvement on SIFT, but I do not have much experience with it.
These algorithms are said to be a bit heavy. Anyway, if you are matching just 5 images it won't be much of a problem. (Or you can simply calculate the descriptors (features) of your images beforehand and store them; then at run time all you have to do is get the descriptor of the user image and compare it.) Still, if you are trying to match images of basic shapes like squares and circles, square detection or circle detection might be more efficient performance-wise.
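As a starting point, here is a rough sketch of the descriptor-matching idea using OpenCV's Java bindings. This assumes OpenCV 4.x with SIFT available (class and constant names may differ between versions), and a proper solution would add a ratio test to filter bad matches:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.SIFT;
import org.opencv.imgcodecs.Imgcodecs;

public class SiftMatcher {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Rough similarity score: the number of descriptor matches between two images.
    // Run this for the user image against each of the 5 stored images and keep the best.
    static int countMatches(String pathA, String pathB) {
        Mat imgA = Imgcodecs.imread(pathA, Imgcodecs.IMREAD_GRAYSCALE);
        Mat imgB = Imgcodecs.imread(pathB, Imgcodecs.IMREAD_GRAYSCALE);

        SIFT sift = SIFT.create();
        MatOfKeyPoint kpA = new MatOfKeyPoint(), kpB = new MatOfKeyPoint();
        Mat descA = new Mat(), descB = new Mat();
        sift.detectAndCompute(imgA, new Mat(), kpA, descA);
        sift.detectAndCompute(imgB, new Mat(), kpB, descB);

        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descA, descB, matches);
        return matches.toArray().length; // more matches suggests a closer image
    }
}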

Related

Java Bot get values from window/screen

OK, this is my first time ever posting. Sorry if I am not specific enough.
I am trying to make a bot in Java, using Eclipse. The game I am playing (Dark Summoner) does not have status bars, but fractions such as 23/47 energy and 240/300 battle points. I can't find a way to get those values from the top of the screen into my code. Also, to make this easier, I thought I could get my code to focus on the window the game is open in, but I cannot find a way to do that either.
The idea behind the bot so far is to have it attack when I reach enough battle points. When this is achieved I plan on making it usable on any or most energy-based games, like Mafia Wars or Age of Baymont (there are millions to list).
So, in short, my question is: how do I get values from a specific area of an image into my code using Java?
Oh yeah, I am a noob.
You cannot read the number straight out of the image, but you can assign a number to each image: after displaying the proper image, you know which number is connected with it. Divide the whole number into digits and display each digit alone, composing the whole number.
If the only aim of the application is to get text from an image, you can look into image processing and OCR algorithms, but that is time-consuming.
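If you do go the image processing route, the first step is getting the pixels of a screen region into your program. Here is a minimal sketch using java.awt.Robot from the standard JDK; the region coordinates are made-up example values that would need to match where the game draws its numbers:

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;

public class ScreenGrabber {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        // Capture the on-screen area where e.g. "240/300 battle points" is drawn.
        Rectangle region = new Rectangle(100, 50, 200, 30); // x, y, width, height (example values)
        BufferedImage capture = robot.createScreenCapture(region);
        // The pixels can now be inspected directly or handed to an OCR library to read the digits.
        System.out.println("Captured " + capture.getWidth() + "x" + capture.getHeight() + " pixels");
    }
}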

Comparing images using color difference

I'm trying to figure out a good method for comparing two images in terms of their color. One idea I had was to take the average color of both images and subtract one from the other to get a "color distance." Whichever two images have the smallest color distance would be a match. Does this seem like a viable option for identifying an image from a database of images?
Ideally I would like to use this to identify playing cards put through an image scanner.
For example, if I were to scan a real version of a card onto my computer, I would want to be able to compare that with all the images in my database to find the closest one.
Update:
I forgot to mention the challenges involved in my specific problem.
The scanned image of the card and the original image of the card are most likely going to be different sizes (in terms of width and height).
I need to make this as efficient as possible. I plan on using this to scan/identify hundreds of cards at a time. I figured that finding (and storing) a single average color value for each image would be far more efficient than comparing the individual pixels of each image in the database (the database has well over 10,000 images) for each scanned card that needed to be identified. The reason why I was asking about this was to see if anyone had tried to compare average color values before as a means of image recognition. I have a feeling it might not work as I envision due to issues with both color value precision and accuracy.
Update 2:
Here's an example of what I was envisioning.
Image to be identified = A
Images in database = { D1, D2 }
average color of image A = avg(A) = #8ba489
average color of images in database = { #58727a, #8ba489 }
D2 matches image A because the distance between #8ba489 and #8ba489 (zero) is smaller than the distance between #8ba489 and #58727a.
Of course the test image would not be an exact match with any of those images because it would be scanned in; however, I'm trying to find the closest match.
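For reference, here is a minimal sketch of the averaging idea described above (mean RGB per image, compared by Euclidean distance); the answers below discuss its limitations:

import java.awt.image.BufferedImage;

public class AverageColor {
    // Mean red, green and blue over all pixels of the image.
    static double[] averageRgb(BufferedImage img) {
        long r = 0, g = 0, b = 0;
        int n = img.getWidth() * img.getHeight();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                r += (rgb >> 16) & 0xFF;
                g += (rgb >> 8) & 0xFF;
                b += rgb & 0xFF;
            }
        }
        return new double[] { (double) r / n, (double) g / n, (double) b / n };
    }

    // Euclidean distance between two average colors; the database image with the
    // smallest distance to the scanned card would be reported as the match.
    static double distance(double[] a, double[] b) {
        double dr = a[0] - b[0], dg = a[1] - b[1], db = a[2] - b[2];
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}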
Content-based image retrieval (CBIR) can do the trick for you. There's LIRE, a Java library for that. You can even try several approaches first, using different color-based image features, with the demo. See https://code.google.com/p/lire/ for downloads & source. There's also the "Simple Application", which gets you started with indexing and search really fast.
Based on my experience I'd recommend using either the ColorLayout feature (if the images are not rotated), the OpponentHistogram, or the AutoColorCorrelogram. The CEDD feature might also yield good results, and it's the smallest, at roughly 60 bytes of data per image.
If you want to compute a color difference as described here:
http://en.wikipedia.org/wiki/Color_difference
you can use the Catalano Framework:
http://code.google.com/p/catalano-framework/
It works in Java and Android.
Example using color difference (comments added; see the framework's documentation for the exact import paths):
// Convert two RGB colors to CIELAB (CIE 2-degree observer, D65 illuminant)
float[] lab = ColorConverter.RGBtoLAB(100, 120, 150, ColorConverter.CIE2_D65);
float[] lab2 = ColorConverter.RGBtoLAB(50, 80, 140, ColorConverter.CIE2_D65);
// Delta C difference between the two Lab colors
double diff = ColorDifference.DeltaC(lab, lab2);
I think your idea is not good enough for the task.
Your method will report very different images as identical whenever they happen to share the same average color (for example, several distinct images whose average value is all 128).
Your color averaging approach would most likely fail, as @Heejin already explained.
You can try it a different way: shrink all images to some arbitrary size, then subtract the unknown image from each known image; the one with the smallest difference is the one you are looking for. It's a really simple method and it wouldn't be slower than averaging. A sketch of this approach follows.
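Here is a minimal sketch of that shrink-and-subtract idea, assuming plain java.awt image classes and a fixed comparison size (32x32 is an arbitrary choice):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class PixelDifference {
    // Scale the image down to a small fixed size so all images are comparable.
    static BufferedImage shrink(BufferedImage src, int w, int h) {
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return out;
    }

    // Sum of absolute per-channel differences between two equally sized images;
    // the known image with the smallest sum is the best match.
    static long difference(BufferedImage a, BufferedImage b) {
        long sum = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                sum += Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF));
                sum += Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF));
                sum += Math.abs((p & 0xFF) - (q & 0xFF));
            }
        }
        return sum;
    }
}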
Another option is to use a smarter algorithm:
http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
I have used this method in the past and the results are okay-ish. It works great for finding identical images, not so well for finding similar images.

How to measure length of line which is drawn on image in java?

I am making an app on the NetBeans platform in Java, using Swing, for a dentist. I want to measure the length of a line drawn by the user on an image of teeth, so that the doctor can find the length of a tooth's root canal. The line may not be straight; it can be a zigzag. If anyone has an idea about this, please share it with me.
You can use one of the many line detection algorithms to detect the existence of lines and then measure the line in pixels.
You can use an image processing library that already has these algorithms implemented, or you can implement them yourself (better to use a library, though); there are existing questions about image processing libraries and approaches in Java.
That is not very easy, because the images are presumably taken from different angles or distances. You will need some kind of scale in the image whose length you know. Think of a tag with a size of 5mm x 5mm which is pasted on the tooth. In your application you can then measure this tag. Let's say its edge size is 200x200 pixels. Then you know that 200 pixels correspond to 5mm, and you have the formula to calculate the real length from the line length in pixels.
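A minimal sketch combining both answers, assuming the user's zigzag line is available as a list of points and a reference object of known size has been measured (the names and the 200 px / 5 mm figures are just the example values from above):

import java.awt.Point;
import java.util.List;

public class LineMeasurement {
    // Total length of a (possibly zigzag) polyline, in pixels.
    static double lengthInPixels(List<Point> points) {
        double length = 0;
        for (int i = 1; i < points.size(); i++) {
            double dx = points.get(i).x - points.get(i - 1).x;
            double dy = points.get(i).y - points.get(i - 1).y;
            length += Math.sqrt(dx * dx + dy * dy);
        }
        return length;
    }

    // Convert pixels to millimetres using a measured reference,
    // e.g. referencePixels = 200 and referenceMm = 5 for the 5 mm tag.
    static double pixelsToMillimetres(double pixels, double referencePixels, double referenceMm) {
        return pixels * referenceMm / referencePixels;
    }
}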

JOGL: How can I draw many strings quickly

I'm using JOGL (OpenGL for Java) for my application, and I need to draw tons of strings on screen at once; my current solution is far too slow. Right now I'm drawing the strings with TextRenderer using the draw3D method, and for even a moderate number of strings (around 300-500) it kills the FPS. I started experimenting with drawing the text onto the objects' textures, which is much faster, but there are a few problems with it. The first is that allocating all those textures requires a lot of memory. The second is that I need to find a way to size each texture so it's only as big as the string and then map it to the object without stretching. The problem there is that all these thousands of boxes use a single model rendered with a call list, and I'm not sure it's possible to change the texture mapping for each object in that situation.
I don't mind if the text appears flat or 3D, it just has to be positioned in 3D space. I would prefer to render the text in the highest quality possible without sacrificing too much speed, since readability of the text is the most important part of the application. Also, nearly all of the strings are different, there aren't many duplicates.
So, my question: Am I going down the right path with drawing the strings on the textures, and if so, how can I overcome those 2 problems? Or is there another method that would suit my needs?
Depending on exactly how TextRenderer works, you might be able to use display lists to batch up your text drawing commands.
If TextRenderer works by having a texture of individual character glyphs and piecing together a string a glyph at a time, it'll be fine: just bookend your text drawing code with glNewList and glEndList. Once a list is defined, just use glCallList to use it.
If however, TextRenderer works by drawing complete strings into a texture and using one quad per string - display lists may not work. If the strings in one batch do not all fit within TextRenderer's cache, it will delete the least-recently used one to reclaim some space. Display lists will only recreate the OpenGL calls made, and so the work done by TextRenderer to update the string cache texture will be lost and you'll get incorrect output. From a quick scan of the source, I suspect that TextRenderer works in this manner.
To summarise: display lists will greatly speed up your rendering, but only if you don't overflow TextRenderer's string cache texture and don't use the TextRenderer after the display list has been defined.
If you can't meet these constraints you're going to have to go a bit hardcore and write your own text renderer that renders glyph-by-glyph - it'll then be trivial to cache the output geometry and extremely quick to re-render. There's an example of such a system here, with the tool to create a font here. It uses LWJGL rather than JOGL, but the translation between the two will be the least of your worries if you want to integrate it - it's meshed with the texture management etc.
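For reference, here is a sketch of the display-list approach described above, assuming JOGL's GL2 profile and com.jogamp.opengl.util.awt.TextRenderer (package names vary between JOGL versions), and assuming the strings fit in TextRenderer's cache:

import com.jogamp.opengl.GL2;
import com.jogamp.opengl.util.awt.TextRenderer;
import java.awt.Font;

public class BatchedText {
    private final TextRenderer renderer = new TextRenderer(new Font("SansSerif", Font.PLAIN, 12));
    private int listId = -1;

    // Build the display list once, e.g. in init() or whenever the strings change.
    void buildList(GL2 gl, String[] strings, float[][] positions) {
        listId = gl.glGenLists(1);
        gl.glNewList(listId, GL2.GL_COMPILE);
        renderer.begin3DRendering();
        for (int i = 0; i < strings.length; i++) {
            renderer.draw3D(strings[i], positions[i][0], positions[i][1], positions[i][2], 0.01f);
        }
        renderer.end3DRendering();
        gl.glEndList();
    }

    // Replay the recorded commands every frame instead of re-issuing them.
    void draw(GL2 gl) {
        if (listId != -1) gl.glCallList(listId);
    }
}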

designing picture puzzle

I am planning to develop a jigsaw puzzle game.
Now I already have images and image pieces, so we don't need algorithm to cut the image in pieces.
On the UI side there would be two sections
First section contains the broken images in random order.
Second section contains the outline of the full image. The user needs to drag and drop the cut images onto the outline image.
I am not sure how the pieces can be matched against the outline image.
Any idea about the algorithm or the starting pointers?
Allow the user to drag each piece into the outline area. Allow the piece to be rotated in 90 degree increments.
Option 1:
If a piece is in the correct location in the overall puzzle, and at the correct angle, AND connected to another piece, then snap it into place with some user feedback. The outside edge of the puzzle can count for a connection to edge pieces.
Option 2:
A neighbor is an adjacent puzzle piece when the puzzle is assembled. When the puzzle pieces are mixed up, they still have the same neighbors. Each puzzle piece (except the edge pieces) has four neighbors.
If a piece is near one of its neighbors at the correct angle relative to that neighbor, then snap it to the other piece. Then allow the two (or more) pieces to be dragged around as a unit, as is done with a single piece. This would allow the user to assemble subsections of the puzzle in any area, much like is done with a physical jigsaw puzzle, and connect the subsections with one another.
You can check the piece being moved to its four neighbors to see if they are close enough to snap together. If a piece has its proper edge close enough to the proper edge of its neighbor, at the same angle, then they match.
There are several ways to check relative locations. One way would be to temporarily rotate the coordinates of the piece you are testing so it is upright, then rotate the coordinates of all its desired neighbors, also temporarily, to the same angle. (Use the same center of rotation for all the rotations.) Then you can easily test to see if they are close enough to match. If the user is dragging a subassembly, then you will need to check each unmatched edge in the subassembly.
Option 2 is more complex and more realistic. Option 1 can be further simplified by omitting piece rotation and giving every piece the proper angle initially. A simplified sketch of the neighbor-snapping check follows.
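An illustrative sketch of the proximity-and-angle test in Option 2; all class names, fields and tolerances below are made up for the example:

public class SnapCheck {
    static final double POSITION_TOLERANCE = 10.0; // pixels; tune to taste

    static class Piece {
        double x, y;         // current position on the board
        int rotation;        // 0, 90, 180 or 270 degrees
        double homeX, homeY; // position in the fully assembled puzzle
    }

    // True if 'piece' is close enough to 'neighbor', at the same angle, to snap onto it.
    static boolean shouldSnap(Piece piece, Piece neighbor) {
        if (piece.rotation != neighbor.rotation) return false;
        // Expected offset between the two pieces when correctly joined,
        // rotated by their common angle.
        double ex = piece.homeX - neighbor.homeX;
        double ey = piece.homeY - neighbor.homeY;
        double rad = Math.toRadians(piece.rotation);
        double rx = ex * Math.cos(rad) - ey * Math.sin(rad);
        double ry = ex * Math.sin(rad) + ey * Math.cos(rad);
        // Compare the actual offset with the expected one.
        double dx = (piece.x - neighbor.x) - rx;
        double dy = (piece.y - neighbor.y) - ry;
        return Math.sqrt(dx * dx + dy * dy) < POSITION_TOLERANCE;
    }
}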
For regular shapes you can go with a matrix; I recommend this as the first approach. Dividing the puzzle is as simple as defining the X,Y dimensions of the matrix. Each piece then has a series of four values, one for each side, saying whether it is flat, pointing out, or pointing in. This will give you a very classic jigsaw puzzle setup, as sketched below.
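A minimal sketch of that matrix representation; the enum and class names are illustrative only:

public class PuzzleMatrix {
    // Each side of a piece is flat (border), points out (tab) or points in (slot).
    enum Edge { FLAT, OUT, IN }

    static class Piece {
        Edge top, right, bottom, left;
        Piece(Edge top, Edge right, Edge bottom, Edge left) {
            this.top = top; this.right = right; this.bottom = bottom; this.left = left;
        }
    }

    // The puzzle is just an X-by-Y grid of pieces.
    final Piece[][] grid;

    PuzzleMatrix(int columns, int rows) {
        grid = new Piece[columns][rows];
    }

    // Two horizontally adjacent pieces fit if one points out where the other points in.
    static boolean fitsHorizontally(Piece leftPiece, Piece rightPiece) {
        return (leftPiece.right == Edge.OUT && rightPiece.left == Edge.IN)
            || (leftPiece.right == Edge.IN && rightPiece.left == Edge.OUT);
    }
}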
How the pieces actually look becomes a strict GUI thing. Now, for the first draft I recommend getting it working with perfectly square pieces. Taking rectangular bits of an image should be easy to do in any GUI framework.
To go to shaped pieces you'll need a series of templates. These will become masks that you apply to the image. Each mask clips out a tiny portion of the image to produce your piece. You'll probably need to dynamically create the masks in order to fit them to the puzzle. At first start with simply triangular connections. Once you have that working you can do the math to get nice bulbous connector shapes. Look up "clip" and "mask" in your GUI framework.
If you wish to do irregular polygon shapes that don't follow a general matrix layout, then you need to do a lot more work. This is why I recommend getting the square version working first as a good example. Then you'll need to delve into graph theory and partitioning. Pick up some books on 3D programming, focusing on algorithms, as they do partitioning all the time. Though I wouldn't doubt there is a book on this exact topic.
Have fun.
The data structure is simple, I guess: each piece will point to its neighbors and will hold the actual shape to display.
As for the UI of the app: what is your development environment?
If it's Windows, I would go with C# and WinForms, or even better WPF.
If it's Unix, you'll have to get someone else's advice, as I'm not an expert there.
1) How to break image into random polygons
It seems that you have figured out this part. (from : "Now I already have images and image pieces, so we don't need algorithm to cut the image in pieces.")
2) what kind of data structure can solve the problem
You can create a Piece class (similar to the Scribble class in the linked example), and your pieces would be an array of Piece objects.
So, you will have two arrays,
(i) actual image pieces array
(ii) image piece outline array
So, whenever you drag and drop one piece onto the full outline of the image, check whether the dragged image piece overlaps its outline piece by more than 80% and whether the ID (a member variable of the Piece object) of the actual image piece matches the ID of the outline piece; if both hold, you have the right piece in the right place (see the sketch after this list).
3) UI implementation
Check this out.
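As referenced in point 2 above, an illustrative sketch of that drop check; the class and field names are made up, and the 80% threshold is the figure mentioned in the answer:

import java.awt.Rectangle;

public class DropCheck {
    static class Piece {
        int id;           // matches the ID of the corresponding outline piece
        Rectangle bounds; // current on-screen bounds
        Piece(int id, Rectangle bounds) { this.id = id; this.bounds = bounds; }
    }

    // True if 'dragged' covers at least 80% of 'slot' and is the right piece.
    static boolean isCorrectDrop(Piece dragged, Piece slot) {
        if (dragged.id != slot.id) return false;
        Rectangle overlap = dragged.bounds.intersection(slot.bounds);
        if (overlap.isEmpty()) return false;
        double overlapArea = (double) overlap.width * overlap.height;
        double slotArea = (double) slot.bounds.width * slot.bounds.height;
        return overlapArea / slotArea >= 0.8;
    }
}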
You could make an array of objects of a class PuzzleTile.
Every such tile has an image and an integer.
After every move, check whether the integers are sorted correctly, meaning:
123
456
789
You could make a function for that which returns a boolean.
Note: I'm currently developing in C#, which is why this concept would probably be easiest to realize in C#, although other platforms would need little to no modification.
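A sketch of that check, written in Java to match the question (the class and field names are illustrative):

public class SlidingCheck {
    static class PuzzleTile {
        int index; // the tile's correct position, e.g. 1..9
        // the tile's image would also be stored here
    }

    // True when the tiles appear in ascending order, i.e. 1,2,3 / 4,5,6 / 7,8,9.
    static boolean isSolved(PuzzleTile[] tiles) {
        for (int i = 1; i < tiles.length; i++) {
            if (tiles[i].index < tiles[i - 1].index) return false;
        }
        return true;
    }
}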
