I just found out about Sikuli while looking for a library to find matches of a given image within a larger image (both loaded from files).
By default, Sikuli only supports loading the image to search for from a file, but relies on its own Screen class to take screenshots to use as the base for the search... and I'd like to have the ability to use an image file instead.
Looking for a solution has led me to this question, but the answer is a bit vague when you consider that I have no prior experience with Sikuli and the available documentation is not particularly helpful for my needs.
Does anyone have any examples on how to make a customized implementation of Screen, ScreenRegion, ImageScreen and ImageScreenLocation? Even a link to a more detailed documentation on these classes would be a big help.
All I want is to obtain the coordinates of an image match within another image file, so if there's another library that could help with this task I'd be more than happy to learn about it!
You can implement it by yourself with something like this:
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;

class MyImage {
    private BufferedImage img;
    private int imgWidth;
    private int imgHeight;

    public MyImage(String imagePath) {
        try {
            img = ImageIO.read(getClass().getResource(imagePath));
        } catch (IOException ioe) {
            System.out.println("Unable to open file");
        }
        init();
    }

    public MyImage(BufferedImage img) {
        this.img = img;
        init();
    }

    private void init() {
        imgWidth = img.getWidth();
        imgHeight = img.getHeight();
    }

    public boolean equals(BufferedImage other) {
        // Your algorithm for comparing this.img with other (see the options described below)
    }

    public boolean contains(BufferedImage subImage) {
        int subWidth = subImage.getWidth();
        int subHeight = subImage.getHeight();
        if (subWidth > imgWidth || subHeight > imgHeight)
            throw new IllegalArgumentException("SubImage is larger than main image");

        MyImage sub = new MyImage(subImage);
        // Slide a window of the sub-image's size over the main image, left to right, top to bottom.
        for (int y = 0; y <= imgHeight - subHeight; y++)
            for (int x = 0; x <= imgWidth - subWidth; x++) {
                BufferedImage cmpImage = img.getSubimage(x, y, subWidth, subHeight);
                if (sub.equals(cmpImage))   // compare the window against the sub-image
                    return true;
            }
        return false;
    }
}
The contains method grabs a subimage from the main image and compares it with the given subimage. If it is not the same, it moves on to the next pixel until it has gone through the entire image. There might be more efficient approaches than moving pixel by pixel, but this should work.
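A short usage sketch of the class above (the file names are placeholders; the main image is loaded from the classpath because that is what the first constructor does):
MyImage mainImage = new MyImage("/images/screenshot.png");       // placeholder classpath resource
BufferedImage subImage = ImageIO.read(new File("button.png"));   // placeholder file on disk
System.out.println(mainImage.contains(subImage) ? "Found" : "Not found");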
To compare 2 images for similarity
You have at least 2 options:
Scan pixel by pixel using a pair of nested loops to compare the RGB value of each pixel, just like comparing two 2D int arrays for equality (see the sketch below).
It should also be possible to generate a hash for each of the two images and just compare the hash values.
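For the first option, a minimal sketch of such a comparison (not part of the original answer), assuming both BufferedImages can be compared on their packed ARGB values:
// Returns true if both images have the same size and identical ARGB values at every pixel.
public static boolean sameImage(BufferedImage a, BufferedImage b) {
    if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight())
        return false;
    for (int y = 0; y < a.getHeight(); y++)
        for (int x = 0; x < a.getWidth(); x++)
            if (a.getRGB(x, y) != b.getRGB(x, y))
                return false;
    return true;
}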
Aah... Sikuli has an answer for this too... You just didn't look closely enough. :)
Answer: the Finder class
Pattern searchImage = new Pattern("abc.png").similar(0.9f);
String screenImage = "xyz.png"; // in this case, the image you want to search within

Finder objFinder = new Finder(screenImage);
objFinder.find(searchImage); // searchImage is the image you want to find inside screenImage

Match objMatch = null;
int counter = 0;
while (objFinder.hasNext()) {
    objMatch = objFinder.next(); // objMatch gives you the matching region
    counter++;
}
if (counter != 0)
    System.out.println("Match Found!");
In the end I gave up on Sikuli and used pure OpenCV in my Android project: the Imgproc.matchTemplate() method did the trick, giving me a matrix in which every pixel has a "score" for the likelihood of being the starting point of my subimage.
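For reference, a minimal sketch of that approach with OpenCV's Java bindings (the image paths and the TM_CCOEFF_NORMED choice are my assumptions; on older Android builds imread lives in Highgui rather than Imgcodecs):
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class TemplateMatchSketch {
    public static void main(String[] args) {
        // Desktop-style native loading; Android apps typically initialise OpenCV through OpenCVLoader instead.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat big = Imgcodecs.imread("big.png");        // placeholder paths
        Mat template = Imgcodecs.imread("small.png");

        // One score per possible top-left corner of the template inside the big image.
        Mat result = new Mat();
        Imgproc.matchTemplate(big, template, result, Imgproc.TM_CCOEFF_NORMED);

        // With TM_CCOEFF_NORMED the maximum score marks the best match location.
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
        System.out.println("Best match at " + mmr.maxLoc + " with score " + mmr.maxVal);
    }
}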
With Sikuli, you can check for the presence of an image inside another one.
In this example code, the pictures are loaded from files.
This code tells us whether the second picture is part of the first picture.
public static void main(String[] argv) {
    String img1Path = "/test/img1.png";
    String img2Path = "/test/img2.png";

    if (findPictureRegion(img1Path, img2Path) == null)
        System.out.println("Picture 2 was not found in picture 1");
    else
        System.out.println("Picture 2 is in picture 1");
}

public static ScreenRegion findPictureRegion(String refPictureName, String targetPictureName) {
    Target target = new ImageTarget(new File(targetPictureName));
    target.setMinScore(0.5); // precision of recognition, from 0 to 1

    BufferedImage refPicture = loadPicture(refPictureName);
    ScreenRegion screenRegion = new StaticImageScreenRegion(refPicture);
    return screenRegion.find(target);
}

public static BufferedImage loadPicture(String pictureFullPath) {
    try {
        return ImageIO.read(new File(pictureFullPath));
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
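If the target is found, the returned ScreenRegion carries the coordinates the original question asks for. A hedged usage sketch, assuming sikuli-api's ScreenRegion exposes a getBounds() accessor returning a java.awt.Rectangle (check the Javadoc of your version):
ScreenRegion match = findPictureRegion("/test/img1.png", "/test/img2.png");
if (match != null) {
    java.awt.Rectangle r = match.getBounds(); // assumed accessor
    System.out.println("Found at (" + r.x + ", " + r.y + "), size " + r.width + "x" + r.height);
}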
To use the Sikuli package, I added this dependency with Maven:
<!-- SIKULI libraries -->
<dependency>
    <groupId>org.sikuli</groupId>
    <artifactId>sikuli-api</artifactId>
    <version>1.1.0</version>
</dependency>
Related
I'm finishing my homework in OOP Java. The assignment is to load images onto a JFrame, be able to move them around (the top layer should be prioritized, which it currently is not) and click them to "flip" them (essentially change the source of the image). I'm currently having trouble finding a solution on how to properly "layer" the images that are visible on the screen and to prioritize the images on top first (currently the bottom ones are being prioritized).
I also have issues finding a good way to change the source of the images, as our teacher has prohibited extending the Picture class with Swing.
My first attempt at solving this was saving the information of every individual "Picture" object in an ArrayList. This works for saving the position of the images but does not solve my issue with the layering. I also wanted to use a JLayeredPane, but as I found out it was harder than I thought, and I have yet to find a viable solution this way (I might be missing some obvious facts about how it works).
I'm thinking what probably needs to happen is that I save the "position" of each image in some type of array, then use this array to draw the images via paintComponent # ImagePanel. This is currently what I am doing, but it does not act as I wish it to. I think the way I load the images in the "Picture" class might have something to do with it. I don't have a lot of experience in Java, so all of this is new to me.
I don't want to post all of my code, as I have 4 classes, so I'm going to post what I feel are the essential methods in each class. If there's something missing that you need in order to guide me in the right direction, I'll provide that as well.
draw # Picture
public void draw(Graphics g, int i) {
    try {
        BufferedImage img = ImageIO.read(new File("images/icon_" + i + ".gif"));
        g.drawImage(img, x, y, null);
    } catch (IOException ie) {
        System.out.println("Could not find images");
    }
}
mousePressed & mouseDragged # MouseHandler
public void mousePressed(MouseEvent e) {
    Point point = e.getPoint();
    chosen = imagepanel.locateImage(point);
}

public void mouseDragged(MouseEvent e) {
    int x = e.getX();
    int y = e.getY();
    if (chosen != null) {
        imagepanel.moveImage(chosen, x, y);
    }
}
loadImages & paintComponent # ImagePanel
private final static int IMAGES = 7;
private ArrayList<Picture> imageCollection = new ArrayList<>();
private Picture im;
Random rand = new Random();

public void loadImages() {
    for (int i = 0; i < IMAGES; i++) {
        int x = rand.nextInt(400) + 40;
        int y = rand.nextInt(400) + 60;
        im = new Picture(x, y);
        imageCollection.add(im);
    }
}

@Override
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    int i = 0;
    for (Picture im : imageCollection) {
        i++;
        im.draw(g, i);
    }
}
I expect the images to stack on top of each other whenever they are "flipped" (clicked) or moved (dragged). They currently do not do this; they just maintain their "depth" position. I've tried implementing an Image[] without success.
I also have a flip method where I tried using setIcon (I was using ImageIcon instead of Image previously), but this did not really work for some reason.
I would also love any feedback on the code so far and any improvements that could be made, as I always want to improve.
EDIT: I managed to solve my problems, although I'm sure there's a better way to do this.
public void placeFirst(Picture im) {
    int pos = imageCollection.indexOf(im);
    imageCollection.remove(pos);
    imageCollection.add(0, im);
}

public void flipImage(Picture im) {
    im.flip();
    placeFirst(im);
    repaint();
}

public void moveImage(Picture im, Point point) {
    im.move(point.x - (im.getWidth(im) / 2), point.y - (im.getHeight(im) / 2));
    placeFirst(im);
    repaint();
}

public Picture locateImage(Point point) {
    for (int i = 0; i < imageCollection.size(); i++) {
        Picture im = imageCollection.get(i);
        if (im.fits(point)) {
            return im;
        }
    }
    return null;
}

@Override
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    // There probably exists a better and visually nicer way of doing this
    for (int i = imageCollection.size() - 1; i >= 0; i--) {
        Picture im = imageCollection.get(i);
        im.draw(g);
    }
}
chosen = imagepanel.locateImage(point);
Well, we don't know how the locateImage(...) method works, but I would guess you just iterate through the array until you find a match.
So you will always find the same match.
So if you want an image to stack on top you have two issues:
you need to modify the search order, so that when you click on an image you move it to position 0 in the ArrayList; that way it is always found first
but then when you paint, you need to paint the images from the end of the ArrayList to the beginning, so the first image in the ArrayList gets painted last
I am creating an app to recognize a building or parts of a building to overlay certain highlights.
At first I thought about using Vuforia and Unity like everyone else does, but I feel like it does not give me the freedom I need, especially with the free version.
My logic goes a bit deeper than just using a target image, so my idea was to use Android Studio and OpenCV.
I am at a point where I can show feature matching with steps like
Calib3d.findHomography(pts1Mat, pts2Mat, Calib3d.RANSAC, 10, outPutMask, 2000, 0.995);
to get good matches and then use
Features2d.drawMatches(imgFromFile, keyPoints1, imgFromFrame, keyPoints2, better_matches_mat, outputImg);
But at this moment I am kind of out of ideas on how to translate the seemingly easy Python code you often find into Android/Java.
Things I need to do at this point:
Extract descriptors/keypoints from known images of the building so the app does not need to calculate those every time/frame (I will take many photographs); see the sketch after this list
Highlight the matching area (box or color highlights on the contour)
Get rid of false positives (it finds matches even though I have the camera pointed at some random object)
Framerate is kinda low at the moment with drawMatches; since I don't really need the drawing, I hope the framerate will be better when "just" calculating matches
I am trying to resize frames to frameResolution/2 or frameResolution/4 before working with them, but matches get worse
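On the first item, a rough sketch (not from the original post) of one way to persist an ORB descriptor Mat, which is CV_8U, to a plain binary file so descriptors for the reference photos can be computed once offline and reloaded at runtime. The file layout (rows, cols, raw bytes) is my own assumption, not an OpenCV format; the live camera frame still needs its descriptors computed per frame.
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import java.io.*;

public final class DescriptorCache {

    // Writes rows, cols and the raw descriptor bytes of a CV_8U Mat.
    public static void save(Mat descriptors, File file) throws IOException {
        byte[] data = new byte[(int) (descriptors.total() * descriptors.elemSize())];
        descriptors.get(0, 0, data);
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) {
            out.writeInt(descriptors.rows());
            out.writeInt(descriptors.cols());
            out.write(data);
        }
    }

    // Rebuilds the CV_8U descriptor Mat from the file written by save().
    public static Mat load(File file) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(file))) {
            int rows = in.readInt();
            int cols = in.readInt();
            byte[] data = new byte[rows * cols];
            in.readFully(data);
            Mat descriptors = new Mat(rows, cols, CvType.CV_8U);
            descriptors.put(0, 0, data);
            return descriptors;
        }
    }
}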
Some of my code
public Mat matching(Mat matFrame, int viewMode, int resizeMode) {
    if (viewMode == VIEW_MODE_FEATURES) {
        initMatching();
        if (!imageIsAnalyzed) {
            detectImageFromFile();
        }
        detectFrame(matFrame, resizeMode);
        featureMatching();
        outPutMat = drawingOutputMat();
    }
    return outPutMat;
}

private void initMatching() {
    detector = ORB.create();
    descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
}

private void featureMatching() {
    matcher.knnMatch(descriptor1, descriptor2, matches, 2);

    // Ratio test to keep only good matches (the loop over `matches` is elided; matOfDMatch is one of its elements)
    if (matOfDMatch.toArray()[0].distance / matOfDMatch.toArray()[1].distance < 0.9) {
        good_matches.add(matOfDMatch.toArray()[0]);
    }
    //....

    for (int i = 0; i < good_matches.size(); i++) {
        pts1.add(keyPoints1.toList().get(good_matches.get(i).queryIdx).pt);
        pts2.add(keyPoints2.toList().get(good_matches.get(i).trainIdx).pt);
    }
    //....

    Calib3d.findHomography(pts1Mat, pts2Mat, Calib3d.RANSAC, 10, outPutMask, 2000, 0.995);

    // outPutMask contains zeros and ones indicating which matches survive the RANSAC filter
    better_matches = new LinkedList<DMatch>();
    for (int i = 0; i < good_matches.size(); i++) {
        if (outPutMask.get(i, 0)[0] != 0.0) {
            better_matches.add(good_matches.get(i));
        }
    }
}

private void detectFrame(Mat matFrame, int resizeMode) {
    imgFromFrame = matFrame;
    Imgproc.resize(imgFromFrame, imgFromFrame, new Size(matFrame.width() / resizeMode, matFrame.height() / resizeMode));

    descriptor2 = new Mat();
    keyPoints2 = new MatOfKeyPoint();
    detector.detect(imgFromFrame, keyPoints2);
    descriptor.compute(imgFromFrame, keyPoints2, descriptor2);
}

private Mat drawingOutputMat() {
    // Drawing output
    outputImg = new Mat();
    better_matches_mat = new MatOfDMatch();
    better_matches_mat.fromList(better_matches);

    // This will draw all matches
    Features2d.drawMatches(imgFromFile, keyPoints1, imgFromFrame, keyPoints2, better_matches_mat, outputImg);
    // Instead of drawing matches I will need some classification and an overlay on the output
    return outputImg;
}
I hope some of you can help me to figure out my next steps and how I should continue.
Thanks in advance.
I'm using a library called Image4j to load an ico file, choose one of the images from the list, scale it (if necessary) and put it in a JLabel as an ImageIcon.
The library has a method read(File icoFile) which returns a List<BufferedImage>, a list of all the images contained within the ico file.
What I want to be able to do is quickly choose the image from the list that is the closest fit for the size of the label. Both the labels and the ico images will be square.
I've come up with the most naive way, but I suspect there's a faster one. I'd settle for this one, but my program will be doing this routine a lot so I want this part to be as efficient as possible.
public class IconUtil {

    private final List<BufferedImage> list;

    public IconUtil(List<BufferedImage> list) {
        this.list = list;
    }

    public BufferedImage getBestFit(int sizeToFit) {
        int indexOfBestFit = 0; // assume it's the first image
        int bestMargin = Math.abs(list.get(0).getWidth() - sizeToFit);
        int margin;
        for (int i = 1; i < list.size(); i++) {
            margin = Math.abs(list.get(i).getWidth() - sizeToFit);
            if (margin < bestMargin) {
                bestMargin = margin;
                indexOfBestFit = i;
            }
        }
        return list.get(indexOfBestFit);
    }
}
The choice to compare the images by their width was arbitrary because they're all square.
Also, as the main reason for picking the best fit is to try and maintain the quality of the images, should this method discard any images which are smaller than the target size? If so, how would that change the algorithm?
I have little experience with re-scaling images in Java. Is it even necessary to find the closest match in size? Maybe there are ways to re-scale without losing much quality even if the original is much bigger?
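For what it's worth, a hedged sketch of a variant of getBestFit that prefers the smallest image at least as large as the target (an assumption about the desired behavior, since upscaling generally hurts quality more than downscaling), falling back to the largest available image when nothing is big enough:
public BufferedImage getBestFitPreferLarger(int sizeToFit) {
    BufferedImage best = null;
    for (BufferedImage candidate : list) {
        if (candidate.getWidth() >= sizeToFit
                && (best == null || candidate.getWidth() < best.getWidth())) {
            best = candidate; // smallest image that is still >= the target
        }
    }
    if (best == null) {
        // Nothing is large enough: fall back to the largest available image.
        for (BufferedImage candidate : list) {
            if (best == null || candidate.getWidth() > best.getWidth()) {
                best = candidate;
            }
        }
    }
    return best;
}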
I know I could just change it in photo manipulation software, but I want to learn to do it programmatically so I can change it to any color I wish.
First off, I'd like to say that I've been searching for a solution for around two hours and I couldn't find one that works for me, or one that deals with my exact problem.
I've downloaded some icons from the internet and they're originally black with transparent background, which is good for menu bars and stuff. But, they're hard to notice on my tool bar and I want to change the black color on those icons to white color. Here's an edited screenshot of what I'm trying to achieve and here's a screenshot of what I achieve. (Sorry for links, I need at least 10 reputation to post images.)
Here's my Utility class that's responsible for the failed work:
public final class Utility {

    public static ImageIcon replaceIconColor(ImageIcon icon, Color oldColor, Color newColor) {
        BufferedImage image = iconToImage(icon);
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                Color pixel = new Color(image.getRGB(x, y));
                if ((pixel.getRed() == oldColor.getRed()) && (pixel.getGreen() == oldColor.getGreen())
                        && (pixel.getBlue() == oldColor.getBlue()) && (pixel.getAlpha() == oldColor.getAlpha())) {
                    image.setRGB(x, y, newColor.getRGB());
                }
            }
        }
        return new ImageIcon(image);
    }

    public static BufferedImage iconToImage(ImageIcon icon) {
        return Resources.loadImage(icon.getDescription());
    }
}
I'm not sure if you need the resource loading class code, but I thought it could only help you understand my problem fully and help me to the best of your ability. So, here's a snippet from my Resources class:
public static ImageIcon loadImageIcon(String fileName) {
    URL imageURL = Resources.class.getResource("/Resources/Images/" + fileName);
    ImageIcon imageIcon = new ImageIcon(imageURL);
    imageIcon.setDescription(fileName);
    return imageIcon;
}

public static BufferedImage loadImage(String fileName) {
    URL imageURL = Resources.class.getResource("/Resources/Images/" + fileName);
    BufferedImage image = null;
    try {
        image = ImageIO.read(imageURL);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return image;
}
My apologies if there actually is a solution somewhere on the internet for this, but I couldn't find it. Well, that's all. I think I was specific enough.
Thank you in advance!
Two approaches are common:
Loop through the BufferedImage, using getRGB() and setRGB() as needed; see the sketch after this list for an example.
Use a LookupOp, as shown in the examples cited here.
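For the first approach, a minimal sketch that recolors exact matches while preserving each pixel's alpha channel (this is not the poster's code; working on the packed ARGB value rather than going through the Color constructor is an assumption about what transparent icons need). Requires java.awt.Color and java.awt.image.BufferedImage:
// Recolor every pixel whose RGB matches oldColor, keeping the original alpha of each pixel.
public static BufferedImage replaceColor(BufferedImage src, Color oldColor, Color newColor) {
    BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
    int oldRgb = oldColor.getRGB() & 0x00FFFFFF;
    int newRgb = newColor.getRGB() & 0x00FFFFFF;
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int argb = src.getRGB(x, y);
            int alpha = argb & 0xFF000000;
            int rgb = argb & 0x00FFFFFF;
            out.setRGB(x, y, rgb == oldRgb ? (alpha | newRgb) : argb);
        }
    }
    return out;
}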
I am working with the QRCode API found here.
I successfully implemented the QR code generation through the API, but the result is this
(I changed the color from white to yellow in order to ask my question.)
So, as you can see, the outer boundary is very thick; I want it to be thin, like this.
The code I used to generate the QR code and change the color is this:
public boolean writeImage(String qrMessageForGeneratingQRCode, String filename) {
    boolean result = false;
    try {
        int length = 200;
        int breadth = 200;
        BufferedImage originalQRCodeBufferedImage = ImageIO.read(new ByteArrayInputStream(
                QRCode.from(qrMessageForGeneratingQRCode).withSize(length, breadth).stream().toByteArray()));
        BufferedImage changedQRCodeBufferedImage = new ColorChanger().changeColor(
                originalQRCodeBufferedImage, Color.WHITE, new Color(255, 202, 0));
        ImageIO.write(changedQRCodeBufferedImage, FilesUtil.getProperty("QR_CODE_IMAGE_FORMAT"), new File(filename));
        result = true;
    } catch (Exception ex) {
        ex.printStackTrace();
        result = false;
    }
    return result;
}
Please shed some light on how I can achieve this in code... Thanks in advance.
Ankur
The quiet zone around the QR code needs to be 4 modules. You don't want to reduce this. Your proposed image will be harder to scan.
You can always edit the BufferedImage after the fact with a simple crop:
BufferedImage crop = original.getSubimage(50, 50, original.getWidth() - 2*50, original.getHeight() - 2*50); // trims a 50 px border from each edge
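If the exact border width is not known up front, a hedged sketch that estimates it by scanning the middle row for the first pixel that differs from the background color (passing the background color in is my assumption; for the recolored image above that would be the yellow):
// Returns the width of the solid-colored left border of the QR image, scanning the middle row.
static int measureMargin(BufferedImage img, int backgroundRgb) {
    int midY = img.getHeight() / 2;
    for (int x = 0; x < img.getWidth(); x++) {
        if ((img.getRGB(x, midY) & 0x00FFFFFF) != (backgroundRgb & 0x00FFFFFF)) {
            return x; // first non-background pixel marks the end of the border
        }
    }
    return 0; // nothing but background found
}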
BTW you should update the underlying zxing library used in your solution to at least 2.0, probably 2.1-SNAPSHOT. 1.7 is old.
The obvious answer would be to reduce the length and breadth values. Did you try that?