OpenCV - Identifying a particular object in an image using Java

I'm developing an OpenCV application in Java where I need to detect different marks on a product. I have attached the input image below.
In that image I need to identify the non-veg mark.
Since I'm new to this, I need help knowing which concepts can be used for it.
I need to identify these marks on the input images:

After quite a struggle I was able to come up with a rough solution.
First, I separated the veg and non-veg labels.
Now, in order to get a good fit of the non-veg label over the image, I resized it to a suitable scale:
small = cv2.resize(nveg, (0,0), fx=0.12, fy=0.12)
Then I performed template matching, as I stated in the comments section. To learn more about this topic, visit this page.
Using it I obtained the 'maximum probable location' of the non-veg label in the image.
res = cv2.matchTemplate(food, small, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
In the following image pay attention to the bright spot in the position of the non-veg mark:
Now using the max_loc variable, I added the tuple values to the size of the resized non-veg label and framed it with a rectangle as in the following:
You can see the black spot on the non-veg mark when I labelled it using max_loc:
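Since the question asks for Java, here is a rough equivalent of the Python above using OpenCV's 3.x Java bindings; the file names and the rectangle thickness are placeholders I made up:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class NonVegMarkFinder {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat food = Imgcodecs.imread("product.jpg");       // hypothetical paths
        Mat nveg = Imgcodecs.imread("nonveg_label.jpg");

        // shrink the label template, like cv2.resize(..., fx=0.12, fy=0.12)
        Mat small = new Mat();
        Imgproc.resize(nveg, small, new Size(), 0.12, 0.12, Imgproc.INTER_AREA);

        // template matching, like cv2.matchTemplate(..., cv2.TM_CCOEFF_NORMED)
        Mat res = new Mat();
        Imgproc.matchTemplate(food, small, res, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mmr = Core.minMaxLoc(res);

        // frame the best match with a rectangle, offset by the resized template size
        Point topLeft = mmr.maxLoc;
        Point bottomRight = new Point(topLeft.x + small.cols(), topLeft.y + small.rows());
        Imgproc.rectangle(food, topLeft, bottomRight, new Scalar(0, 0, 0), 2);
        Imgcodecs.imwrite("marked.jpg", food);
    }
}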
Hope this helped. :)

Related

Use an Image as a watermark in iText 7

iText 7 just came out in May 2016, and while some of the tutorials have been helpful, some of the more advanced functions have been harder to figure out. This page has an example of how to use text as a watermark (about 90% of the way down the page), but I can't figure out how to use an Image as a watermark, and I really have no idea where to start with the new release. Anyone know how to use an Image as a watermark in iText 7? Any ideas where to start?
I'm not 100% positive this is the right way to do this but I'd say I'm 95% confident.
Using the tutorial for iText 7 that you linked to as a starting guide, along with the iText 5 version, we can use a "graphics state" to modify the current canvas.
(The code below is C# but you should be able to convert it to Java pretty easily, pretty much just lowercase the first letter of properties and methods. Also, I'm using full namespace paths just so you know where things are at.)
First, create a custom state and set its transparency:
//Create a transparent state
iText.Kernel.Pdf.Extgstate.PdfExtGState tranState = new iText.Kernel.Pdf.Extgstate.PdfExtGState();
tranState.SetFillOpacity(0.5f);
Second, get your image:
//Get your image somehow
iText.IO.Image.ImageData myImageData = ImageDataFactory.Create("D:\\14.jpg", false);
iText.Layout.Element.Image myImage = new iText.Layout.Element.Image(myImageData);
Third (and optional), change your image if needed:
//Position, rotate and scale it as needed
myImage.SetFixedPosition(100, 100);
myImage.SetRotationAngle(45);
myImage.ScaleAbsolute(200, 200);
Fourth, save the pdfCanvas (from the tutorial) state and set a new one:
pdfCanvas.SaveState().SetExtGState(tranState);
Fifth, add your image to the higher level canvas (once again, from the tutorial):
canvas.Add(myImage);
And sixth, reset the pdfCanvas state:
pdfCanvas.RestoreState();
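Since the question targets Java, here is a rough Java translation of the snippets above. It assumes the iText 7 Java API mirrors the C# names with lowercased methods, and that pdfCanvas and canvas are the objects from the Jump-Start tutorial; the file path and numbers are the same placeholders as above:
import com.itextpdf.io.image.ImageData;
import com.itextpdf.io.image.ImageDataFactory;
import com.itextpdf.kernel.pdf.canvas.PdfCanvas;
import com.itextpdf.kernel.pdf.extgstate.PdfExtGState;
import com.itextpdf.layout.Canvas;
import com.itextpdf.layout.element.Image;

public class ImageWatermark {
    // pdfCanvas and canvas are the objects created in the Jump-Start tutorial
    static void addWatermark(PdfCanvas pdfCanvas, Canvas canvas) throws Exception {
        // create a transparent state
        PdfExtGState tranState = new PdfExtGState();
        tranState.setFillOpacity(0.5f);

        // get your image somehow
        ImageData myImageData = ImageDataFactory.create("/path/to/14.jpg");
        Image myImage = new Image(myImageData);

        // position, rotate and scale it as needed
        myImage.setFixedPosition(100, 100);
        myImage.setRotationAngle(Math.toRadians(45)); // setRotationAngle appears to take radians in the Java API
        myImage.scaleAbsolute(200, 200);

        // save the low-level state, set the transparent one, add the image, restore
        pdfCanvas.saveState().setExtGState(tranState);
        canvas.add(myImage);
        pdfCanvas.restoreState();
    }
}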
Update by Bruno:
Adding images is explained in Chapter 3 of the "iText 7: Building Blocks" tutorial. In chapter 3 of "iText 7: Jump-Start tutorial", we work with a PdfCanvas and Canvas object. The missing information about how to create and add an image is in the "Building Blocks" tutorial.

Auto-transforming a BufferedImage from a side view (not perpendicular) to a perpendicular view based on 4 points printed on a sheet, in Java

I see a lot of people just reading it... Maybe you need some extra info? Comment below and I'll give it!
As my previous questions were saying, I'm building a car that will orient itself in a room based on an image that you would have taken with your phone. For that I need an image that represents the room. Since you can't take a picture directly from the top of the room (unless you are a bird), I need to transform it into a "perpendicular" (top-down) image. It would be weird if you had to do that manually, so I decided to do it automatically. Now that is something harder :) Well, I asked about transforming the image in this thread and got the marked answer that solved part of my program. I'm still looking for a way to automate it. Since I need the image to be the same size as it would be in real life (we can take 1 pixel as 1 cm), we will probably need some kind of "points of reference" printed on an A4 paper sheet. We will probably also need some OpenCV, since we will need to know the distance between two points in the image. Besides that, how would you define the "correction" that has to be done based on four points?
I've done some pics for visual reference:
Id like to transform image like this:
To image like this (or even better):
EDIT: I'd like to do this in the latest (3.1) version of OpenCV, which I have no idea how to use :)
EDIT #2: I've done some work on it, solving part of it in this post: Image perspective correction
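For anyone wondering what I mean by the four-point correction, here is a rough sketch of it with OpenCV's Java bindings. The corner coordinates and the 210 x 297 destination size (1 pixel per cm of the A4 sheet) are made-up example values, not measurements from my setup:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class PerspectiveCorrection {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat src = Imgcodecs.imread("room.jpg"); // hypothetical input photo

        // the four detected corners of the printed A4 sheet, in image pixels (example values)
        MatOfPoint2f srcPts = new MatOfPoint2f(
                new Point(420, 310), new Point(980, 330),
                new Point(1010, 720), new Point(390, 700));

        // where those corners should land: a 210 x 297 rectangle, i.e. 1 px per cm of A4
        MatOfPoint2f dstPts = new MatOfPoint2f(
                new Point(0, 0), new Point(210, 0),
                new Point(210, 297), new Point(0, 297));

        Mat h = Imgproc.getPerspectiveTransform(srcPts, dstPts);
        Mat topDown = new Mat();
        // in practice the destination size would cover the whole room, not just the sheet
        Imgproc.warpPerspective(src, topDown, h, new Size(210, 297));
        Imgcodecs.imwrite("room_topdown.jpg", topDown);
    }
}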

How to identify a specific 'thing' in an image (Image Processing)

I'll start by saying what I'm doing:
I'll take a photo with a webcam; in this photo there will be an object, always the same object, in a square format with letters inside it. I need to identify those letters. The step of identifying the letters is already done; the problem is the quality of the image coming from the webcam: it won't be the best, nor will the positioning, and the API I'm using to identify the letters requires good positioning and quality.
The reason I have a square is to help identify where the letters are, so I can 'look for a square' in the image and then do what I've already done to identify the letters. My question is: is there more I have to do in order to achieve this? Or is it only 'reposition the image, look for the square, and then it's done'? If I need to study image processing, that's no problem; I'm here because I don't even know what I have to look for.
I'm developing in Java because of 'school things', so if there's already an API (I've heard of and tried OpenCV, but I don't know what to do with it), it would really help me.
Thanks in advance.
Edit 1: As asked by Springfield762, I took some photos and I'll explain them below.
First let me explain what the photos are: the 'square thing' that will contain the letters isn't done yet, another department is taking care of it, so I had to improvise something here with pens and batteries. The letters will all be made of wood in a nice shape; I had to replace them with some Magicka cards as I don't have them yet, but the cards work well for the example. I also made an example of the square (which actually ended up as a rectangle) in Paint, so it's nothing beautiful.
I took 3 photos: one using the light coming from the window, the second using the light of my room, and the third using the flash of the webcam. (Sorry about the links, I can't post images or links; although I'm always here, this is the first time I've posted a question...)
Window light:
Room light:
Flash:
Square (rectangle) example:
The 'project' of the square you guys can ignore; I did it so that you can understand the images. The reason I took 3 different photos was just to show all the different conditions the webcam might be in. Also, the quality of the Magicka cards isn't a problem, since each card represents one letter, so it'll be easy to 'see' them.
Well, I found answers to most of my questions; I'll explain them below.
First, it's not a square but a rectangle, and it is still to be made. So I started testing the software using anything that was a rectangle. First I had to 'locate' the rectangle in the frame captured by the camera, then show it in the original image seen by the user. I accomplished that by (a rough code sketch of these steps follows below):
Capturing the actual frame;
Converting that frame to HSV;
Applying some kind of threshold (using the Core.inRange function, so that I could find a specific color in the range specified in the function);
Applying Imgproc.findContours to find the contours of the rectangle;
Finally, drawing a rectangle using the points found by findContours.
How it ended: i.imgur.com/wmNVai0.jpg
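A rough sketch of those steps in Java with the OpenCV 3.x bindings; the HSV range is a made-up example for a blue-ish rectangle, and you'd tune it to the actual color you're looking for:
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class RectangleFinder {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // capture the actual frame
        VideoCapture camera = new VideoCapture(0);
        Mat frame = new Mat();
        camera.read(frame);

        // convert that frame to HSV
        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

        // threshold on a specific color range (example blue-ish range)
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(100, 100, 50), new Scalar(130, 255, 255), mask);

        // find the contours of the rectangle
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
                Imgproc.CHAIN_APPROX_SIMPLE);

        // draw a rectangle around each contour found
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            Imgproc.rectangle(frame, box.tl(), box.br(), new Scalar(0, 255, 0), 2);
        }
        camera.release();
    }
}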
After that I knew that I could place the rectangle in a way that all the letters inside it would be in a straight line, so I didn't need to care about the positioning of the letters. Now I had to fight with the OCR.
I chose Tesseract as it is open source and seems to be a strong tool (supported by Google, that's for sure something), and then I started to test some images.
In the beginning it was tough and I thought I'd have to train the OCR even more, but the thing is that it has a kind of dictionary it tries to match words against, and I didn't need that because I was looking for characters that could appear in a totally random order. I had to turn off that dictionary by adding the following lines to a config file:
load_system_dawg F
load_freq_dawg F
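(If you drive Tesseract from Java through the Tess4J wrapper instead of a plain config file - which is an assumption on my part, I used the conf file - I believe the same variables can be set in code, roughly like this:)
import java.io.File;
import net.sourceforge.tess4j.Tesseract;

public class LetterOcr {
    public static void main(String[] args) throws Exception {
        Tesseract tesseract = new Tesseract();
        tesseract.setDatapath("/usr/share/tesseract-ocr/tessdata"); // path is an assumption
        // turn off the word dictionaries so random letter sequences aren't "corrected"
        tesseract.setTessVariable("load_system_dawg", "F");
        tesseract.setTessVariable("load_freq_dawg", "F");
        String text = tesseract.doOCR(new File("letters.png")); // hypothetical image
        System.out.println(text);
    }
}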
After that I had to change some things in the image as well (a Java sketch of this follows after the example images):
Transform into Grayscale;
Resize it by ~80%;
Original images (I can't post links...):
i.imgur.com/DFqNSYB.jpg
i.imgur.com/2Ntfqy3.jpg
Grayscale:
imgur.com/XUZ9b1Z.jpg
i.imgur.com/yjXMH5Q.jpg
Resized:
i.imgur.com/zgX9bKF.jpg
i.imgur.com/CWPRU3I.jpg
(Sometimes I had problems with resized images and at other times I didn't; that's something I still have to test more.)
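In Java with OpenCV, those two preprocessing steps (grayscale plus resize) boil down to a couple of calls. This is only a sketch: whether "by ~80%" should be scaling up or down is exactly what I'm still testing, so the 1.8 factor and the file names below are just examples.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class OcrPreprocess {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat src = Imgcodecs.imread("card.jpg"); // hypothetical input

        // transform into grayscale
        Mat gray = new Mat();
        Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);

        // resize (example: enlarge by 80%, i.e. a 1.8 scale factor)
        Mat resized = new Mat();
        Imgproc.resize(gray, resized, new Size(), 1.8, 1.8, Imgproc.INTER_CUBIC);

        Imgcodecs.imwrite("card_for_ocr.png", resized);
    }
}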
Then I could get some good results, though I'm still worried because the ambient light makes a whole lot of difference. I still have to test it, and mainly I still need the god da** base; I'll post it as an edit later.
If I did anything wrong or if anyone wants to correct me, please feel free to say it!

Issue with bubble graph labels in ADF DVT component

I am working on a bubble graph issue where the graph is distorted whenever the Y-axis labels are long.
Ideally the graph should appear like the Normal.gif.
But if the labels are long, it is appearing like distorted.gif.
Hence, I tried to show only the first 15 characters of each label followed by an ellipsis (three dots: ...). But there is still a lot of space between the label names and the plot area.
Is there any way to reduce this space? Any idea which property of the graph controls it? Also, whenever the labels are truncated to the first 15 characters, we need to show the full name when hovering the mouse over the name. How do we achieve this? Please let me know if you have any pointers on this.
<dvt:x1MajorTick lineWidth="0" tickStyle="GS_NONE"/>
<dvt:x1Title text="#{HcmGoalTopGenBundle['MenuItem.Performance.AddtoPerformanceGoal']}"
             rendered="true">
    <dvt:graphFont bold="true"/>
</dvt:x1Title>
<dvt:y1Axis majorTickStepAutomatic="false"
            majorTickStep="#{bindings.YMinorTick.inputValue}"
            axisMinAutoScaled="false" axisMaxAutoScaled="false"
            axisMinValue="#{bindings.YLowerBoundary.inputValue}"
            axisMaxValue="#{bindings.YUpperBoundary.inputValue}"></dvt:y1Axis>
<dvt:y1TickLabel>
    <af:convertNumber pattern="#{applCorePrefs.numberFormatPattern}"/>
</dvt:y1TickLabel>
<dvt:y1MajorTick lineWidth="0" tickStyle="GS_NONE"/>
<dvt:y1Title text="#{hcmperformancedocspublicuiBundle1['OText.Potential.PotentialRating']}"
             rendered="true">
    <dvt:graphFont bold="true"/>
</dvt:y1Title>
<dvt:seriesSet defaultMarkerShape="MS_HUMAN"/>
<dvt:shapeAttributesSet>
    <dvt:shapeAttributes component="GRAPH_DATAMARKER"/>
</dvt:shapeAttributesSet>
<dvt:legendArea position="LAP_RIGHT" rendered="false"/>
</dvt:graph>
I worked on this too and found out that this is a framework issue. The same has been reported to the framework team.

Sikuli actions inside a region

I am facing an issue while using Sikuli through Java: if there are 2 elements of the same kind (or similar images), it fails to click on the correct element. So I wanted to know if it is possible to make Sikuli work only inside a particular region, and can someone please explain how it can be done?
Yes, Sikuli can work within a particular region. The challenge is defining a region that contains only one of your two elements. You define a region by x,y coordinates. You can also grow a region based on the location of a unique pattern (image) on your display.
while exists("foo.png"):
    hover("bar.png")
    ClickMeRegion = find("bar.png").nearby(5).right()
    ClickMeRegion.click("baz.png")
So in the above, I look for the foo.png/bar.png/baz.png image groups that are being displayed. First I hover over bar.png so that I can visually see which group the script is looking at. Then I create a region extending 5 pixels around the center of bar.png and extend it to the right edge of the display. This isolates a single baz.png image, and I can then click on the one baz.png that I am interested in.
For more info on regions see: http://doc.sikuli.org/region.html
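Since you are calling Sikuli from Java, the same idea with the SikuliX Java API looks roughly like this; the image names are the same placeholders as above and error handling is kept minimal:
import org.sikuli.script.Region;
import org.sikuli.script.Screen;

public class RegionClick {
    public static void main(String[] args) throws Exception {
        Screen screen = new Screen();
        while (screen.exists("foo.png") != null) {
            screen.hover("bar.png");
            // region 5 px around the center of bar.png, extended to the right edge
            Region clickMeRegion = screen.find("bar.png").nearby(5).right();
            clickMeRegion.click("baz.png");
        }
    }
}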
