Making darker areas of an image brighter in OpenCV - Java

I am currently working on a road sign detection application on Android, using OpenCV, and found out that while processing frames in real time, my camera often focuses on the brighter parts of the image, such as the sky, and everything below (road, trees, and signs) gets dark. Because of this my application is not able to detect those signs: they are simply too dark under these conditions.
Has anyone had to deal with such a problem and found a decent solution? If so, I would appreciate any clues (especially approaches with good performance, which matters in real-time processing).

As a preprocessing step, you can apply intensity normalization. As a particular example, histogram equalization can be applied:
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_equalization/histogram_equalization.html
with an example code in java:
http://answers.opencv.org/question/7490/i-want-a-code-for-histogram-equalization/?answer=11014#post-id-11014
Note that such additional steps may slow down your overall detection operation. To increase overall speed, you can shrink your region of interest, for example by detecting and discarding the sky.
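For reference, a minimal sketch of that preprocessing step with the OpenCV Java bindings. This is only one way to do it: here the frame is converted to YCrCb and only the luminance channel is equalized, so the colors of the signs are not shifted.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class Preprocess {
        // Equalize only the luminance (Y) channel so colors are preserved.
        public static Mat equalizeLuminance(Mat bgrFrame) {
            Mat yuv = new Mat();
            Imgproc.cvtColor(bgrFrame, yuv, Imgproc.COLOR_BGR2YCrCb);
            List<Mat> channels = new ArrayList<>();
            Core.split(yuv, channels);
            Imgproc.equalizeHist(channels.get(0), channels.get(0));
            Core.merge(channels, yuv);
            Mat result = new Mat();
            Imgproc.cvtColor(yuv, result, Imgproc.COLOR_YCrCb2BGR);
            return result;
        }
    }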

You said that the camera gets focused on bright objects such as the sky.
In most modern phones you can set the area of the image that is included in the autofocus calculation. Since the sky is always in the upper part of the image (after you take care of phone orientation), you can set the focus zone to the lower half of the image. This should take care of the problem in the first place.
If, however, you meant that the camera is not focusing on the bright objects but rather white-balancing on them, you can solve this in the same way as described for focus. If that does not help, try histogram equalization and gamma correction techniques. These will help improve the contrast.

Find similar Image using matlab or Java [duplicate]

One of the most interesting projects I've worked on in the past couple of years was a project about image processing. The goal was to develop a system to be able to recognize Coca-Cola 'cans' (note that I'm stressing the word 'cans', you'll see why in a minute). You can see a sample below, with the can recognized in the green rectangle with scale and rotation.
Some constraints on the project:
The background could be very noisy.
The can could have any scale or rotation or even orientation (within reasonable limits).
The image could have some degree of fuzziness (contours might not be entirely straight).
There could be Coca-Cola bottles in the image, and the algorithm should only detect the can!
The brightness of the image could vary a lot (so you can't rely "too much" on color detection).
The can could be partly hidden on the sides or the middle and possibly partly hidden behind a bottle.
There could be no can at all in the image, in which case you had to find nothing and write a message saying so.
So you could end up with tricky things like this (which in this case made my algorithm fail completely):
I did this project a while ago, had a lot of fun doing it, and ended up with a decent implementation. Here are some details about it:
Language: Done in C++ using OpenCV library.
Pre-processing: For the image pre-processing, i.e. transforming the image into a more raw form to give to the algorithm, I used 2 methods:
Changing the color domain from RGB to HSV and filtering based on "red" hue, saturation above a certain threshold to avoid orange-like colors, and filtering out low value to avoid dark tones. The end result was a binary black-and-white image, where all white pixels represent the pixels matching these thresholds. Obviously there is still a lot of crap in the image, but this reduces the number of dimensions you have to work with.
Noise filtering using median filtering (replacing each pixel with the median value of its neighbors) to reduce noise.
Using the Canny edge detection filter to get the contours of all items after the two preceding steps.
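A rough sketch of those three steps with the OpenCV Java bindings (the author used C++, but the calls map one-to-one). The hue/saturation/value thresholds below are illustrative placeholders, not the author's values:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class CanPreprocess {
        // Red-hue HSV filter, median noise filtering, then Canny edges.
        // All numeric thresholds are placeholder assumptions.
        public static Mat preprocess(Mat bgr) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);

            // Red wraps around hue 0 in OpenCV's 0..179 range, so take two bands.
            Mat lowRed = new Mat(), highRed = new Mat(), mask = new Mat();
            Core.inRange(hsv, new Scalar(0, 120, 70), new Scalar(10, 255, 255), lowRed);
            Core.inRange(hsv, new Scalar(170, 120, 70), new Scalar(179, 255, 255), highRed);
            Core.bitwise_or(lowRed, highRed, mask);

            Imgproc.medianBlur(mask, mask, 5);   // median noise filtering

            Mat edges = new Mat();
            Imgproc.Canny(mask, edges, 50, 150); // contours of the remaining blobs
            return edges;
        }
    }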
Algorithm: The algorithm I chose for this task was taken from this awesome book on feature extraction: the Generalized Hough Transform (pretty different from the regular Hough Transform). It basically says a few things:
You can describe an object in space without knowing its analytical equation (which is the case here).
It is resistant to image deformations such as scaling and rotation, as it will basically test your image for every combination of scale factor and rotation factor.
It uses a base model (a template) that the algorithm will "learn".
Each pixel remaining in the contour image will vote for another pixel which will supposedly be the center (in terms of gravity) of your object, based on what it learned from the model.
In the end, you get a heat map of the votes; for example, here all the pixels of the can's contour will vote for its gravitational center, so you'll have a lot of votes in the same pixel corresponding to the center, and you'll see a peak in the heat map, as below:
Once you have that, a simple threshold-based heuristic can give you the location of the center pixel, from which you can derive the scale and rotation and then plot your little rectangle around it (final scale and rotation factor will obviously be relative to your original template). In theory at least...
Results: Now, while this approach worked in the basic cases, it was severely lacking in some areas:
It is extremely slow! I can't stress this enough. Almost a full day was needed to process the 30 test images, obviously because I had a very high scaling factor for rotation and translation, since some of the cans were very small.
It was completely lost when bottles were in the image, and for some reason almost always found the bottle instead of the can (perhaps because bottles were bigger, and thus had more pixels and more votes).
Fuzzy images were also no good, since the votes ended up in pixels at random locations around the center, resulting in a very noisy heat map.
Invariance in translation and rotation was achieved, but not in orientation, meaning that a can that was not directly facing the camera objective wasn't recognized.
Can you help me improve my specific algorithm, using exclusively OpenCV features, to resolve the four specific issues mentioned?
I hope some people will learn something out of it as well; after all, I think it's not only the people who ask questions who should learn. :)
An alternative approach would be to extract features (keypoints) using the scale-invariant feature transform (SIFT) or Speeded Up Robust Features (SURF).
You can find a nice OpenCV code example in Java, C++, and Python on this page: Features2D + Homography to find a known object
Both algorithms are invariant to scaling and rotation. Since they work with features, you can also handle occlusion (as long as enough keypoints are visible).
Image source: tutorial example
The processing takes a few hundred ms for SIFT; SURF is a bit faster, but still not suitable for real-time applications. ORB uses FAST, which is weaker regarding rotation invariance.
The original papers:
SURF: Speeded Up Robust Features
Distinctive Image Features from Scale-Invariant Keypoints
ORB: an efficient alternative to SIFT or SURF
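As a sketch of the keypoint approach with the OpenCV Java bindings (ORB is used here because SIFT/SURF were patented/non-free in many OpenCV builds; the matching structure is the same whichever detector you pick):

    import org.opencv.core.*;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.ORB;

    public class LogoMatcher {
        // Match a logo template against a scene image using ORB keypoints.
        public static MatOfDMatch match(Mat logoGray, Mat sceneGray) {
            ORB orb = ORB.create();
            MatOfKeyPoint kpLogo = new MatOfKeyPoint(), kpScene = new MatOfKeyPoint();
            Mat descLogo = new Mat(), descScene = new Mat();
            orb.detectAndCompute(logoGray, new Mat(), kpLogo, descLogo);
            orb.detectAndCompute(sceneGray, new Mat(), kpScene, descScene);

            // Hamming-distance matcher suits ORB's binary descriptors.
            DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(descLogo, descScene, matches);
            return matches;
        }
    }

From the matches you would then estimate a homography (as in the Features2D + Homography tutorial linked above) to localize the object.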
To speed things up, I would take advantage of the fact that you are not asked to find an arbitrary image/object, but specifically one with the Coca-Cola logo. This is significant because this logo is very distinctive, and it should have a characteristic, scale-invariant signature in the frequency domain, particularly in the red channel of RGB. That is to say, the alternating pattern of red-to-white-to-red encountered by a horizontal scan line (trained on a horizontally aligned logo) will have a distinctive "rhythm" as it passes through the central axis of the logo. That rhythm will "speed up" or "slow down" at different scales and orientations, but will remain proportionally equivalent. You could identify/define a few dozen such scanlines, both horizontally and vertically through the logo and several more diagonally, in a starburst pattern. Call these the "signature scan lines."
Searching for this signature in the target image is a simple matter of scanning the image in horizontal strips. Look for a high-frequency transition in the red channel (indicating a move from a red region to a white one), and once found, see if it is followed by one of the frequency rhythms identified in the training session. Once a match is found, you will instantly know the scan line's orientation and location in the logo (if you keep track of those things during training), so identifying the boundaries of the logo from there is trivial.
I would be surprised if this weren't a linearly-efficient algorithm, or nearly so. It obviously doesn't address your can-bottle discrimination, but at least you'll have your logos.
(Update: for bottle recognition I would look for coke (the brown liquid) adjacent to the logo -- that is, inside the bottle. Or, in the case of an empty bottle, I would look for a cap, which will always have the same basic shape, size, and distance from the logo and will typically be all white or red. Search for a solid-color elliptical shape where a cap should be, relative to the logo. Not foolproof of course, but your goal here should be to find the easy ones fast.)
(It's been a few years since my image processing days, so I kept this suggestion high-level and conceptual. I think it might slightly approximate how a human eye might operate -- or at least how my brain does!)
Fun problem: when I glanced at your bottle image I thought it was a can too. But, as a human, what I did to tell the difference is that I then noticed it was also a bottle...
So, to tell cans and bottles apart, how about simply scanning for bottles first? If you find one, mask out the label before looking for cans.
Not too hard to implement if you're already doing cans. The real downside is it doubles your processing time. (But thinking ahead to real-world applications, you're going to end up wanting to do bottles anyway ;-)
Isn't it difficult even for humans to distinguish between a bottle and a can in the second image (provided the transparent region of the bottle is hidden)?
They are almost the same except for a very small region (that is, the width at the top of the can is a little smaller, while the wrapper of the bottle is the same width throughout - a minor difference, right?)
The first thing that came to my mind was to check for the red top of the bottle. But that is still a problem if the bottle has no top, or if it is partially hidden (as mentioned above).
The second thing I thought about was the transparency of the bottle. There has been some work in OpenCV on finding transparent objects in an image. Check the links below.
OpenCV Meeting Notes Minutes 2012-03-19
OpenCV Meeting Notes Minutes 2012-02-28
Particularly look at this to see how accurately they detect glass:
OpenCV Meeting Notes Minutes 2012-04-24
See their implementation result:
They say it is the implementation of the paper "A Geodesic Active Contour Framework for Finding Glass" by K. McHenry and J. Ponce, CVPR 2006.
It might help a little in your case, but the problem arises again if the bottle is filled.
So I think you can search for the transparent body of the bottle first, or for a red region laterally connected to two transparent regions - which is obviously the bottle. (Ideally, this would produce an image like the following.)
Now you can remove the yellow region, that is, the label of the bottle and run your algorithm to find the can.
Anyway, this solution has its own problems, just like the other solutions:
It works only if your bottle is empty. If the bottle is filled, you will have to search for the red region between two dark regions instead (assuming the Coca-Cola liquid reads as black).
Another problem arises if the transparent part is covered.
But anyway, if none of the above problems occur in the pictures, this seems to be a better way.
I really like Darren Cook's and stacker's answers to this problem. I was in the midst of throwing my thoughts into a comment on those, but I believe my approach is too answer-shaped to not leave here.
In short summary, you've identified an algorithm to determine that a Coca-Cola logo is present at a particular location in space. You're now trying to determine, for arbitrary orientations and arbitrary scaling factors, a heuristic suitable for distinguishing Coca-Cola cans from other objects, inclusive of: bottles, billboards, advertisements, and Coca-Cola paraphernalia all associated with this iconic logo. You didn't call out many of these additional cases in your problem statement, but I feel they're vital to the success of your algorithm.
The secret here is determining what visual features a can contains or, through the negative space, what features are present for other Coke products that are not present for cans. To that end, the current top answer sketches out a basic approach for selecting "can" if and only if "bottle" is not identified, either by the presence of a bottle cap, liquid, or other similar visual heuristics.
The problem is this breaks down. A bottle could, for example, be empty and lack the presence of a cap, leading to a false positive. Or, it could be a partial bottle with additional features mangled, leading again to false detection. Needless to say, this isn't elegant, nor is it effective for our purposes.
To this end, the most correct selection criteria for cans appear to be the following:
Is the shape of the object silhouette, as you sketched out in your question, correct? If so, +1.
If we assume the presence of natural or artificial light, do we detect a chrome outline to the bottle that signifies whether this is made of aluminum? If so, +1.
Do we determine that the specular properties of the object are correct, relative to our light sources (illustrative video link on light source detection)? If so, +1.
Can we determine any other properties about the object that identify it as a can, including, but not limited to, the topological image skew of the logo, the orientation of the object, the juxtaposition of the object (for example, on a planar surface like a table or in the context of other cans), and the presence of a pull tab? If so, for each, +1.
Your classification might then look like the following:
For each candidate match, if the presence of a Coca Cola logo was detected, draw a gray border.
For each match over +2, draw a red border.
This visually highlights to the user what was detected, emphasizing weak positives that may, correctly, be detected as mangled cans.
The detection of each property carries a very different time and space complexity, and for each approach, a quick pass through http://dsp.stackexchange.com is more than reasonable for determining the most correct and most efficient algorithm for your purposes. My intent here is, purely and simply, to emphasize that detecting if something is a can by invalidating a small portion of the candidate detection space isn't the most robust or effective solution to this problem, and ideally, you should take the appropriate actions accordingly.
And hey, congrats on the Hacker News posting! On the whole, this is a pretty terrific question worthy of the publicity it received. :)
Looking at shape
Take a gander at the shape of the red portion of the can/bottle. Notice how the can tapers off slightly at the very top whereas the bottle label is straight. You can distinguish between these two by comparing the width of the red portion across the length of it.
Looking at highlights
One way to distinguish between bottles and cans is the material. A bottle is made of plastic whereas a can is made of aluminum metal. In sufficiently well-lit situations, looking at the specularity would be one way of telling a bottle label from a can label.
As far as I can tell, that is how a human would tell the difference between the two types of labels. If the lighting conditions are poor, there is bound to be some uncertainty in distinguishing the two anyways. In that case, you would have to be able to detect the presence of the transparent/translucent bottle itself.
Please take a look at Zdenek Kalal's Predator tracker. It requires some training, but it can actively learn how the tracked object looks at different orientations and scales, and it does so in real time!
The source code is available on his site. It's in MATLAB, but perhaps a Java implementation has already been done by a community member. I have successfully re-implemented the tracker part of TLD in C#. If I remember correctly, TLD uses Ferns as the keypoint detector. I use either SURF or SIFT instead (already suggested by @stacker) to reacquire the object if it was lost by the tracker. The tracker's feedback makes it easy to build up a dynamic list of SIFT/SURF templates that, over time, enables reacquiring the object with very high precision.
If you're interested in my C# implementation of the tracker, feel free to ask.
If you are not limited to just a camera (which wasn't one of your constraints), perhaps you can move to using a range sensor like the Xbox Kinect. With this you can perform depth- and colour-based segmentation of the image, allowing for faster separation of objects. You can then use ICP matching or similar techniques to match even the shape of the can rather than just its outline or colour, and given that it is cylindrical, this may be a valid option for any orientation if you have a previous 3D scan of the target. These techniques are often quite quick, especially when used for such a specific purpose, which should solve your speed problem.
I could also suggest, not necessarily for accuracy or speed but for fun, using a trained neural network on your hue-segmented image to identify the shape of the can. These are very fast and can often be up to 80-90% accurate. Training would be a bit of a long process, though, as you would have to manually identify the can in each image.
I would detect red rectangles: RGB -> HSV, filter red -> binary image, close (dilate then erode, known as imclose in Matlab).
Then look through the rectangles from largest to smallest. Rectangles that have smaller rectangles in a known position/scale can both be removed (assuming bottle proportions are constant, the smaller rectangle would be a bottle cap).
This would leave you with red rectangles; then you'll need to somehow detect the logos to tell whether each is a plain red rectangle or a Coke can. Like OCR, but with a known logo?
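A minimal sketch of that pipeline with the OpenCV Java bindings; the red thresholds are placeholders, and imclose becomes morphologyEx with MORPH_CLOSE:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class RedRectangles {
        // RGB -> HSV, red filter -> binary mask, close, then bounding boxes.
        public static List<Rect> findRedRectangles(Mat bgr) {
            Mat hsv = new Mat(), mask = new Mat();
            Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);
            Core.inRange(hsv, new Scalar(0, 100, 60), new Scalar(10, 255, 255), mask);

            // "imclose": dilate then erode, fills small holes in the red blobs.
            Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(7, 7));
            Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_CLOSE, kernel);

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            List<Rect> rects = new ArrayList<>();
            for (MatOfPoint c : contours) rects.add(Imgproc.boundingRect(c));
            rects.sort((a, b) -> b.width * b.height - a.width * a.height); // largest first
            return rects;
        }
    }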
This may be a very naive idea (or may not work at all), but the dimensions of all Coke cans are fixed. So maybe if the same image contains both a can and a bottle, you can tell them apart by size (bottles are going to be larger). Because of the missing depth (a 3D-to-2D mapping), it's possible for a bottle to appear shrunk so that there is no apparent size difference. You may recover some depth information using stereo imaging and then recover the original size.
Hmm, I actually think I'm onto something (this is like the most interesting question ever - so it'd be a shame not to continue trying to find the "perfect" answer, even though an acceptable one has been found)...
Once you find the logo, your troubles are half over. Then you only have to figure out the differences between what's around the logo. Additionally, we want to do as little extra work as possible. I think this is actually the easy part...
What is around the logo? For a can, we see metal, which, despite the effects of lighting, does not change whatsoever in its basic colour. As long as we know the angle of the label, we can tell what's directly above it, so we're looking at the difference between these:
Here, what's above and below the logo is completely dark, consistent in colour. Relatively easy in that respect.
Here, what's above and below is light, but still consistent in colour. It's all silver, and all-silver metal actually seems pretty rare, as do silver colours in general. Additionally, it's in a thin sliver, and close enough to the red that has already been identified that you could trace its shape for its entire length to calculate a percentage of what can be considered the metal ring of the can. Really, you only need a small fraction of that anywhere along the can to tell it is part of it, but you still need to find a balance that ensures it's not just an empty bottle with something metal behind it.
And finally, the tricky one. But it's not so tricky once we're only going by what we can see directly above (and below) the red wrapper. It's transparent, which means it will show whatever is behind it. That's good, because the things behind it aren't likely to be as consistent in colour as the silver circular metal of the can. There could be many different things behind it, which would tell us that it's an empty (or clear-liquid-filled) bottle, or there could be a consistent colour, which could either mean that it's filled with liquid or that the bottle is simply in front of a solid colour. We're working with what's closest to the top and bottom, and the chances of the right colours being in the right place are relatively slim. We know it's a bottle because it hasn't got that key visual element of the can, which is relatively simplistic compared to what could be behind a bottle.
(that last one was the best I could find of an empty large coca cola bottle - interestingly the cap AND ring are yellow, indicating that the redness of the cap probably shouldn't be relied upon)
In the rare circumstance that a similar shade of silver is behind the bottle, even after the abstraction of the plastic, or the bottle is somehow filled with the same shade of silver liquid, we can fall back on what we can roughly estimate as being the shape of the silver - which, as I mentioned, is circular and follows the shape of the can. But even though I lack any certain knowledge of image processing, that sounds slow. Better yet, why not, for once, deduce this by checking around the sides of the logo to ensure there is nothing of the same silver colour there? Ah, but what if there's the same shade of silver behind a can? Then we do indeed have to pay more attention to shapes, looking at the top and bottom of the can again.
Depending on how flawless this all needs to be, it could be very slow, but I guess my basic concept is to check the easiest and closest things first. Go by colour differences around the already matched shape (which seems the most trivial part of this anyway) before going to the effort of working out the shape of the other elements. To list it, it goes:
Find the main attraction (red logo background, and possibly the logo itself for orientation, though in case the can is turned away, you need to concentrate on the red alone)
Verify the shape and orientation, yet again via the very distinctive redness
Check colours around the shape (since it's quick and painless)
Finally, if needed, verify the shape of those colours around the main attraction for the right roundness.
In the event you can't do this, it probably means the top and bottom of the can are covered, and the only things a human could have used to reliably distinguish the can from the bottle are the occlusion and reflection of the can, which would be a much harder battle to process. However, to go even further, you could follow the angle of the can/bottle to check for more bottle-like traits, using the semi-transparent scanning techniques mentioned in the other answers.
An interesting additional nightmare might be a can conveniently sitting behind the bottle at such a distance that its metal just happens to show above and below the label. This would still fail as long as you scan along the entire length of the red label - and it is actually more a problem of not detecting a can where you could have than of mistakenly detecting a bottle along with the can. The glass is half empty, in that case!
As a disclaimer, I have no experience in image processing, nor had I ever thought about it before this question, but it is so interesting that it got me thinking pretty deeply about it, and after reading all the other answers, I consider this to possibly be the easiest and most efficient way to get it done. Personally, I'm just glad I don't actually have to think about programming this!
EDIT
Additionally, look at this drawing I did in MS Paint... It's absolutely awful and quite incomplete, but based on the shape and colours alone, you can guess what it's probably going to be. In essence, these are the only things that one needs to bother scanning for. When you look at that very distinctive shape and combination of colours so close, what else could it possibly be? The bit I didn't paint, the white background, should be considered "anything inconsistent". If it had a transparent background, it could go over almost any other image and you could still see it.
I am a few years late in answering this question. With the state of the art pushed to its limits by CNNs in the last 5 years, I wouldn't use OpenCV to do this task now! (I know you specifically wanted OpenCV features in the question.) I feel object detection algorithms such as Faster R-CNN, YOLO, and SSD would ace this problem by a significant margin compared to OpenCV features. If I were to tackle this problem now (after 6 years!!) I would definitely use Faster R-CNN.
I'm not aware of OpenCV specifics, but looking at the problem logically, I think you could differentiate between bottle and can by changing the template image you are looking for, i.e. the Coca-Cola logo. You should include the top portion of the can in the template: in the case of a can there is a silver lining at the top of the Coca-Cola label, whereas a bottle has no such silver lining.
But obviously this algorithm will fail in cases where the top of the can is hidden; in such cases, though, even a human would not be able to differentiate between the two (if only the Coca-Cola portion of the bottle/can is visible).
I like the challenge and wanted to give an answer, which I think solves the issue.
Extract features (keypoints, descriptors such as SIFT, SURF) of the logo
Match the points with a model image of the logo (using a matcher such as brute force)
Estimate the coordinates of the rigid body (PnP problem - solvePnP)
Estimate the cap position according to the rigid body
Do back-projection and calculate the image pixel position (ROI) of the cap of the bottle (I assume you have the intrinsic parameters of the camera)
Check with a method whether the cap is there or not. If it is there, then this is the bottle.
Detection of the cap is another issue. It can be either complicated or simple. If I were you, I would simply check the color histogram in the ROI for a simple decision.
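A compressed sketch of steps 3-5 with OpenCV's Java bindings. The 3D model points, camera matrix, and the 12 cm cap offset are all placeholder assumptions, not values from the answer:

    import org.opencv.core.*;
    import org.opencv.calib3d.Calib3d;

    public class CapLocator {
        // Given 2D-3D correspondences from the logo match, estimate the pose
        // and back-project a hypothetical cap position into the image.
        public static Point projectCap(MatOfPoint3f logoModelPts, MatOfPoint2f logoImagePts,
                                       Mat cameraMatrix, MatOfDouble distCoeffs) {
            Mat rvec = new Mat(), tvec = new Mat();
            Calib3d.solvePnP(logoModelPts, logoImagePts, cameraMatrix, distCoeffs, rvec, tvec);

            // Placeholder: cap assumed 12 cm above the logo center in model coordinates.
            MatOfPoint3f capModel = new MatOfPoint3f(new Point3(0, -0.12, 0));
            MatOfPoint2f capImage = new MatOfPoint2f();
            Calib3d.projectPoints(capModel, rvec, tvec, cameraMatrix, distCoeffs, capImage);
            return capImage.toArray()[0]; // check a color histogram in an ROI around this point
        }
    }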
Please give feedback if I am wrong. Thanks.
I like your question, regardless of whether it's off topic or not :P
An interesting aside: I've just completed a subject in my degree where we covered robotics and computer vision. Our project for the semester was incredibly similar to the one you describe.
We had to develop a robot that used an Xbox Kinect to detect Coke bottles and cans in any orientation in a variety of lighting and environmental conditions. Our solution involved using a band-pass filter on the hue channel in combination with the Hough circle transform. We were able to constrain the environment a bit (we could choose where and how to position the robot and Kinect sensor); otherwise we were going to use the SIFT or SURF transforms.
You can read about our approach on my blog post on the topic :)
Deep Learning
Gather at least a few hundred images containing cola cans and annotate the bounding boxes around them as the positive class; include cola bottles, other cola products, and random objects as negative classes.
Unless you collect a very large dataset, use the trick of applying deep-learning features to a small dataset, ideally a combination of Support Vector Machines (SVMs) with deep neural nets.
Once you feed the images to a previously trained deep-learning model (e.g. GoogLeNet), instead of using the neural network's final decision layer to do classification, use the previous layer(s)' data as features to train your classifier.
OpenCV and Google Net:
http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html
OpenCV and SVM:
http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
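A rough sketch of the feature-extraction half with OpenCV's dnn module in Java. The model paths are placeholders, and the choice of "pool5/7x7_s1" (the last pooling layer of bvlc_googlenet) as the feature layer is an assumption to adapt to your model:

    import org.opencv.core.*;
    import org.opencv.dnn.Dnn;
    import org.opencv.dnn.Net;

    public class DeepFeatures {
        // Extract a feature vector from a pre-trained GoogLeNet instead of
        // using its final classification layer; feed these vectors to an SVM.
        public static Mat extract(Mat bgrImage, String protoPath, String modelPath) {
            Net net = Dnn.readNetFromCaffe(protoPath, modelPath);
            Mat blob = Dnn.blobFromImage(bgrImage, 1.0, new Size(224, 224),
                    new Scalar(104, 117, 123), false, false);
            net.setInput(blob);
            // The pooling layer's output serves as a ~1024-dimensional descriptor.
            return net.forward("pool5/7x7_s1").reshape(1, 1);
        }
    }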
There are a bunch of color descriptors used to recognise objects; the paper below compares a lot of them. They are especially powerful when combined with SIFT or SURF. SURF or SIFT alone are not very useful on an image of a Coca-Cola can because they don't find many interest points; you need the color information to help. I used BIC (Border/Interior pixel Classification) with SURF in a project and it worked great for recognising objects.
Color descriptors for Web image retrieval: a comparative study
You need a program that learns and improves classification accuracy organically from experience.
I'd suggest deep learning; with deep learning this becomes a trivial problem.
You can retrain the Inception v3 model using TensorFlow:
How to Retrain Inception's Final Layer for New Categories.
In this case, you will be training a convolutional neural network to classify an object as either a coca-cola can or not.
As an alternative to all these nice solutions, you can train your own classifier and make your application robust to errors. As an example, you can use Haar training, providing a good number of positive and negative images of your target.
It can be useful for extracting only cans, and it can be combined with the detection of transparent objects.
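Once trained (opencv_traincascade produces an XML file), using the classifier from Java is short. A sketch, assuming a hypothetical coke_can_cascade.xml file:

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.objdetect.CascadeClassifier;

    public class CanCascade {
        // Run a trained Haar/LBP cascade over a grayscale frame.
        public static Rect[] detect(Mat grayFrame) {
            CascadeClassifier cascade = new CascadeClassifier("coke_can_cascade.xml");
            MatOfRect hits = new MatOfRect();
            cascade.detectMultiScale(grayFrame, hits);
            return hits.toArray();
        }
    }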
There is a computer vision package called HALCON from MVTec whose demos could give you good algorithm ideas. There are plenty of examples similar to your problem that you could run in demo mode; you could then look at the operators in the code and see how to implement them from existing OpenCV operators.
I have used this package to quickly prototype complex algorithms for problems like this and then find out how to implement them using existing OpenCV features. In particular for your case you could try to implement in OpenCV the functionality embedded in the operator find_scaled_shape_model. Some operators point to the scientific paper regarding the algorithm's implementation, which can help you find out how to do something similar in OpenCV.
Maybe too many years late, but nevertheless a theory to try.
The ratio of the bounding rectangle of the red logo region to the overall dimensions of the bottle/can is different. In the case of a can it should be around 1:1, whereas it will be different for a bottle (with or without the cap).
This should make it easy to distinguish between the two.
Update:
The horizontal curvature of the logo region will differ between the can and the bottle due to their respective size difference. This could be specifically useful if your robot needs to pick up the can/bottle and you decide the grip accordingly.
If you are interested in it being real-time, then what you need is a pre-processing filter to determine what gets scanned with the heavy-duty stuff. A good, fast, very-real-time pre-processing filter that lets you scan things that are more likely to be a Coca-Cola can than not before moving on to iffier things is something like this: search the image for the biggest patches of color that are within a certain tolerance of the sqrt(pow(red,2) + pow(blue,2) + pow(green,2)) of your Coca-Cola can. Start with a very strict color tolerance, and work your way down to more lenient tolerances. Then, when your robot runs out of allotted time to process the current frame, it uses the currently found bottles for your purposes. Please note that you will have to tweak the RGB colors in the sqrt(pow(red,2) + pow(blue,2) + pow(green,2)) to get them just right.
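For what it's worth, a small sketch of that kind of color-tolerance pre-filter in plain Java. Note one swap: instead of comparing color magnitudes as in the formula above, this variant compares the Euclidean distance between each pixel and a reference can color, which is a common form of the same idea:

    public class ColorPrefilter {
        // Marks pixels within `tolerance` of a reference color (Euclidean
        // distance in RGB); a variant of the magnitude test described above.
        public static boolean[] mask(int[] argbPixels, int refR, int refG, int refB,
                                     double tolerance) {
            boolean[] hit = new boolean[argbPixels.length];
            double tol2 = tolerance * tolerance; // compare squared distances, no sqrt needed
            for (int i = 0; i < argbPixels.length; i++) {
                int p = argbPixels[i];
                int dr = ((p >> 16) & 0xFF) - refR;
                int dg = ((p >> 8) & 0xFF) - refG;
                int db = (p & 0xFF) - refB;
                hit[i] = dr * dr + dg * dg + db * db <= tol2;
            }
            return hit;
        }
    }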
Also, this is gonna seem really dumb, but did you make sure to turn on -Ofast compiler optimizations when you compiled your C code?
The first thing I would look for is color - like RED - as in red-eye detection in an image: there is a certain color range to detect, some characteristics about it considering the surrounding area, and such things as the distance to the other eye if it is indeed visible in the image.
1: The first characteristic is color, and red is very dominant. After detecting the Coca-Cola red, there are several items of interest:
1A: How big is this red area? Is it of sufficient quantity to make a determination of a true can or not? 10 pixels is probably not enough.
1B: Does it contain the color of the label - "Coca-Cola" - or the wave?
1B1: Is there enough to consider a high probability that it is a label?
Item 1 is kind of a shortcut - a pre-process: if that does not exist in the image, move on.
If that is the case, I can then take that segment of my image and start looking a little more zoomed out of the area in question - basically look at the surrounding region / edges...
2: Given the above image area ID'd in 1 - verify the surrounding points [edges] of the item in question.
A: Is there what appears to be a can top or bottom - silver?
B: A bottle might appear transparent, but so might a glass table - so is there a glass table/shelf or a transparent area? If so, there are multiple possible outcomes. A bottle MIGHT have a red cap, it might not, but it should have either the shape of the bottle top / thread screws or a cap.
C: Even if this fails A and B, it can still be a can - a partial one.
This is more complex when it is partial, because a partial bottle and a partial can might look the same, so this requires some more processing, measuring the red region from edge to edge - a small bottle might be similar in size.
3: After the above analysis, that is when I would look at the lettering and the wave logo, because I can orient my search for some of the letters in the words. You might not have all of the text due to not having all of the can, but the wave aligns to the text at certain points (distance-wise), so I could search for that probability and know which letters should exist at that point of the wave, at distance x.

Libgdx - TexturePacker combination of texture filters

Apologies, as this is a common topic and I haven't found a widely agreed-on solution.
We have a game world "grid" size of 1220 x 1080 (based on our Designer's photoshop designs). Currently we test on a Nexus 4 (1280x768 #320DPI) and TF201 Transformer Prime Tablet (1280x800 #149DPI).
When packing textures with the TexturePacker, we're a bit confused about which combination of filters to use. We've read the following page:
http://www.badlogicgames.com/wordpress/?p=1403
...and when using "Nearest, Nearest", our FPS was fine at 60, but the assets became pixelated. When we packed using "MipMap, MipMap", our FPS went down to 30, but the textures were smoothly edged again.
Is there an agreed-upon combination of these filters, or are they simply dependent on requirements? There are quite a lot of combinations to set for "min filter" and "mag filter" in the packer, so I don't want to keep randomly setting them until everything is smoothly resized and FPS is high again without fully understanding what each one does.
Many thanks.
J
If you are supporting multiple screen sizes (which you are if targeting Android), the mag filter should always be Linear. There is no such thing as a mip-mapped mag filter, and on some devices a mip-map mag filter won't even work (you'll get pure black). It's kind of a "gotcha", because some devices will just assume you meant Linear and fix it for you, so if you fail to test on a device that doesn't do this, you'll be unaware of the problem. Nearest will look pixelated when stretched bigger, and you would only want to use it if you are doing retro low-resolution graphics or drawing something pixel-perfect.
You can choose one of the following for the Min filter, from fastest (and worst looking) to slowest (and best looking):
Nearest - this will look pixelated and I can't think of any situation where this would be the right choice for a min filter.
MipMapNearestNearest - Won't look or perform better than nearest, and uses more memory. No reason to ever use this.
MipMapNearestLinear - Gets the nearest pixel from the two nearest mips and then linearly interpolates between them. This will still look pixelated. I don't think this is ever used either.
MipMapLinearNearest - Gets the nearest mip level and linearly determines the pixel color. This is most commonly used on mobile for smooth graphics, I think. It performs significantly faster than the below option, but there are cases where it will look slightly blurry (when the nearest mip is kind of on the small side for what's on screen).
MipMapLinearLinear - Gets the two nearest mip levels, linearly determines the pixel color on each of them, and then linearly blends between the two. If you have a sprite that shrinks from nothing to full size, you probably won't be able to detect any difference in quality from smallest to largest. But this is also slow. In the past, I have limited its use to my fonts. I have also done one project that could run at 60fps on new devices three years ago, where I used this on everything. I was very careful about overdraw in that app, so I could get away with it.
Finally, there's linear filtering, which looks and performs worse than the mip-mapping options (for a Min filter):
Linear - this will look smooth if the image is slightly smaller on screen than its original texture. This doesn't use up the 33% extra texture memory that mip mapping does, but the performance will be worse than it would with mip mapping if the texture gets any smaller than 50% of the original, because for each screen pixel it will have to sample and blend more than four pixels from the original texture.
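For reference, a sketch of setting the recommended combination (MipMapLinearNearest min, Linear mag) in libGDX, both at pack time via TexturePacker settings and at run time. The directory names are placeholders; note that a mip-mapped min filter only works if the texture was created with mipmaps, which the TextureAtlas loader handles when the filter is recorded in the atlas file:

    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.tools.texturepacker.TexturePacker;

    public class PackAndLoad {
        // Desktop-side packing step (gdx-tools): bake the filters into the atlas.
        public static void pack() {
            TexturePacker.Settings settings = new TexturePacker.Settings();
            settings.filterMin = Texture.TextureFilter.MipMapLinearNearest; // fast + smooth
            settings.filterMag = Texture.TextureFilter.Linear;              // never mipmap mag
            TexturePacker.process(settings, "raw-assets", "atlas-out", "game");
        }

        // Equivalent run-time call for a texture created with mipmaps.
        public static void applyAtRuntime(Texture texture) {
            texture.setFilter(Texture.TextureFilter.MipMapLinearNearest,
                              Texture.TextureFilter.Linear);
        }
    }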

Programmatically find shaky OR out-of-focus Images

Most modern mobile cameras have a family of techniques called image stabilization to reduce shaky effects in photographs due to the motion of the camera lens or associated hardware. But quite a number of mobile cameras still produce shaky photographs. Is there a reliable algorithm or method that can be implemented on mobile devices, specifically on Android, for finding whether a given input image is shaky or not? I do not expect the algorithm to stabilize the input image; it should just reliably return a definitive boolean for whether the image is shaky. It doesn't have to be Java; it can also be C/C++, so that one can build it through the native kit and expose the APIs to the top layer. The following illustration describes the expected result. Also, this question deals with single-image problems, so multiple-frame-based solutions won't work in this case. It is specifically about images, not videos.
Wouldn't out-of-focus images imply that
a) edges are blurred, so any gradient-based operator will have low values compared to the luminance in the image,
b) edges are blurred, so any curvature-based operator will have low values,
c) for shaky pictures, the pixels will be correlated with other pixels in the direction of the shake (a translation or a rotation)?
I took your picture into GIMP, applied Sobel for a) and Laplacian for b) (both available in OpenCV), and got images that are a lot darker in the upper (blurrier) portion.
Calibrating thresholds for general images would be quite difficult, I guess.
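One hedged way to turn (a) and (b) into a concrete score with the OpenCV Java bindings is the common variance-of-Laplacian measure; as noted above, the threshold it gets compared against is image-dependent and would need calibration:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class BlurScore {
        // Low variance of the Laplacian response means weak edges,
        // i.e. a likely blurred / out-of-focus image.
        public static double varianceOfLaplacian(Mat gray) {
            Mat lap = new Mat();
            Imgproc.Laplacian(gray, lap, CvType.CV_64F);
            MatOfDouble mean = new MatOfDouble(), std = new MatOfDouble();
            Core.meanStdDev(lap, mean, std);
            double sigma = std.toArray()[0];
            return sigma * sigma; // compare against a calibrated threshold
        }
    }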
Are you dealing with a video stream or a single image?
In the case of a video stream: the best way is to calculate the difference between each two adjacent frames and mark each pixel that differs significantly. When the number of such pixels is low, you are in a non-shaky frame. Note that this method does not check whether the image is in focus; it is only designed to combat motion blur in the image.
Your implementation should include the following:
For each frame, normalize the image (work in gray level; when working with floating point, normalize the mean to 0 and the standard deviation to 1).
Save the previous video frame.
On each new video frame, calculate the pixel-wise difference between the images and count the number of pixels for which the difference exceeds some threshold. If the number of such pixels is too high (say > 5% of the image), that means the movement between the previous frame and the current frame is big and you should expect motion blur. When a person holds the phone firmly, you will see a sharp drop in the number of pixels that changed.
If your images are represented not in floating point but in fixed point (say 0..255), then you can match the histograms of the images prior to subtraction in order to reduce noise.
As long as you are getting images with motion, just drop those frames and display a message to the user: "hold your phone firmly". Once you get a good stabilized image, process it, but keep remembering the previous one and do the subtraction for each video frame.
The algorithm above should be strong enough (I used it in one of my projects, and it worked like magic). A condensed sketch of the frame-difference test follows, as shown below.
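A minimal version in OpenCV Java; the per-pixel difference threshold (25) and the 5% changed-pixel ratio are the tunable values mentioned above:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class ShakeDetector {
        private Mat previousGray;

        // Returns true when too many pixels changed since the last (grayscale)
        // frame, i.e. motion blur is likely in the current frame.
        public boolean isShaky(Mat gray) {
            boolean shaky = false;
            if (previousGray != null) {
                Mat diff = new Mat(), changed = new Mat();
                Core.absdiff(gray, previousGray, diff);
                Imgproc.threshold(diff, changed, 25, 255, Imgproc.THRESH_BINARY);
                double changedRatio =
                    (double) Core.countNonZero(changed) / (gray.rows() * gray.cols());
                shaky = changedRatio > 0.05; // "> 5% of the image" rule of thumb
            }
            previousGray = gray.clone();
            return shaky;
        }
    }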
In the case of a single image: the algorithm above does not solve the problem of unfocused images and is irrelevant for a single image.
To solve the focus problem, I recommend calculating the image edges and counting the number of pixels that have strong edges (higher than a threshold). Once you get a high number of pixels with edges (say > 5% of the image), you say that the image is in focus. This algorithm is far from perfect and may make many mistakes, depending on the texture of the image. I recommend using X, Y, and diagonal edges, but smooth the image before edge detection to reduce noise.
A stronger algorithm would be to take all the edges (derivatives) and calculate their histogram (how many pixels in the image have each specific edge intensity). This is done by first calculating an image of edges and then calculating a histogram of the edge image. Now you can analyse the shape of the histogram (the distribution of edge strength). For example, take only the top 5% of pixels with the strongest edges and calculate the variance of their edge intensity.
Important fact: in unfocused images you expect the majority of the pixels to have very low edge response, a few to have medium edge response, and almost zero to have strong edge response. In images with perfect focus you still have the majority of the pixels with low edge response, but the ratio between medium response and strong response changes. You can clearly see it in the histogram shape. That is why I recommend taking only the few % of pixels with the strongest edge response and working only with them; the rest are just noise. Even a simple algorithm that takes the ratio of the number of pixels with strong response to the number with medium response will be quite good.
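A sketch of that edge-histogram idea, again with the OpenCV Java bindings; the band boundaries separating "medium" from "strong" edges are placeholder assumptions to calibrate:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class FocusScore {
        // Ratio of strong-edge pixels to medium-edge pixels; in-focus images
        // shift mass from the medium band into the strong band.
        public static double strongToMediumRatio(Mat gray) {
            Mat gx = new Mat(), gy = new Mat(), mag = new Mat();
            Imgproc.Sobel(gray, gx, CvType.CV_32F, 1, 0);
            Imgproc.Sobel(gray, gy, CvType.CV_32F, 0, 1);
            Core.magnitude(gx, gy, mag);

            Mat medium = new Mat(), strong = new Mat();
            Core.inRange(mag, new Scalar(40), new Scalar(120), medium);  // placeholder band
            Core.inRange(mag, new Scalar(120), new Scalar(1e9), strong); // placeholder band
            return (double) Core.countNonZero(strong)
                 / Math.max(1, Core.countNonZero(medium));
        }
    }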
Focus problems in video:
If you have a video stream, then you can use the algorithms described above for problematic focus detection, but instead of using constant thresholds, just update them as the video runs. Eventually they will converge to better values than predefined constants.
Last note: the focus detection problem in a single image is a very tough one. There are a lot of academic papers (using Fourier transforms, wavelets, and other "big algorithmic cannons"). But the problem remains very difficult, because when you are looking at a blurred image you cannot know whether it is the camera that generated the blur with wrong focus, or whether the original scene is already blurry (for example, white walls are very blurry, pictures taken in the dark tend to be blurry even under perfect focus, and pictures of water or table surfaces tend to be blurry).
Anyway, there are a few threads on Stack Overflow regarding focus in images, like this one. Please read them.
You can also compute the Fourier transform of the image; if there is little accumulation in the high-frequency bins, then the image is probably blurred. JTransforms is a reasonable library that provides FFTs if you wish to travel down this route.
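A sketch of that frequency-domain check using JTransforms; the cutoff defining "high frequencies" (1/8 of each axis here) and any energy-ratio threshold you compare against are assumptions:

    import org.jtransforms.fft.DoubleFFT_2D;

    public class FftBlurCheck {
        // Compares energy in high-frequency bins against total spectral energy;
        // blurred images concentrate energy near the low-frequency corner.
        public static double highFrequencyRatio(double[][] gray) {
            int rows = gray.length, cols = gray[0].length;
            double[] data = new double[rows * 2 * cols]; // interleaved re/im layout
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    data[r * 2 * cols + 2 * c] = gray[r][c]; // real part, imag = 0

            new DoubleFFT_2D(rows, cols).complexForward(data);

            double total = 0, high = 0;
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    double re = data[r * 2 * cols + 2 * c];
                    double im = data[r * 2 * cols + 2 * c + 1];
                    double energy = re * re + im * im;
                    total += energy;
                    // Placeholder cutoff: bins beyond 1/8 of each axis count as "high".
                    int fr = Math.min(r, rows - r), fc = Math.min(c, cols - c);
                    if (fr > rows / 8 || fc > cols / 8) high += energy;
                }
            }
            return high / total; // a low ratio suggests a blurred image
        }
    }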
There is also a fairly extensive blog post here about different methods that could be used.
There is also another Stack Overflow question asking this but with OpenCV; OpenCV also has Java bindings and can be used in Android projects, so that answer could be helpful too.

Face Features Detection - corner of eyes, eyebrows

I am creating a basic emotion detection system for mobile phones using OpenCV4Android. My system is already capable of finding the mouth and doing some preprocessing. I get nice results when extracting face objects with Canny:
Exemplary Face 1: https://dl.dropboxusercontent.com/u/108321090/FACE%20%282%29.png
Exemplary Face 2: https://dl.dropboxusercontent.com/u/108321090/FACE%20%281%29.png
Red rectangles are areas found by cascades. I have those saved as Mat objects.
Blue dots are points I need to find. The problem is that I have both eyebrows and eyes on the same segment.
Additionally, there are situations in which the eyebrows are directly connected to the eyes (in some emotional states), making it hard to access some points. I also have normal images (of course) and thresholded ones, which are also interesting for eyebrow shapes - but there I lose some other objects (the mouth - well, that one doesn't matter since it's already done - and the eyes) due to bad light, while the eyebrows are always well visible. Of course I could change the thresholding a bit, since I don't need it for finding the other features. Like I said, the mouth is done well; the eyes/eyebrows are left.
Exemplary Face 3: https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-01-17-01-33-14.png
Exemplary Face 4: https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-01-17-01-26-33.png
Exemplary Face 5 (a bit problematic - the eyes are gone, but if I threshold them locally, not globally, it's fine): https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-03-05-01-30-48.png
Exemplary Face 6 (eyebrows connected to eyes): https://dl.dropboxusercontent.com/u/108321090/Screenshot_2014-03-05-01-28-21.png
I want to ask whether you could provide me with any materials/ideas related to detecting eye and eyebrow action units.
If you can locate an eye/eyebrow unit, you can probably just track it and relate emotions to the relative motion there, rather than trying to separate the eyes from the eyebrows. Your first two exemplary faces are gradients, while the rest are thresholded grey tones. I would rather use gradients, since grey tones are affected by lighting and shadows.
I would also avoid using the Canny edge detector, since it is a highly non-linear and non-stable operator for matching sequential frames, and hence for motion detection. I would rather use a simpler Sobel and some kind of motion detection, but only after the tracking subtracts a global head motion.
Interesting work on emotion detection was done with the Kinect, and it really works, though it requires a bit of offline training; see faceShift. A good test of correct processing (before mapping features to emotions) is trying to move a model of the face in sync with the target face - some kind of virtual avatar.

Alpha Channel Blur

I've got a BufferedImage object that's guaranteed to contain only one color. I'm using it to display a sample image showing the size, shape, and hardness of a brush in a painting tool. I've tried several different blur implementations for hardness; the latest, which seems to work fairly well, is this Stack Filter written by Romain Guy.
I've got 2 problems.
Faster on 1 channel than 4? None of the blur filters I've tried seem to be quite fast enough. I realize this question has been asked before (and I'm not quite ready to try pulling in FFTW from C), but I'm wondering if there's a way to perform the blur using ONLY the alpha channel bits. The image only contains one color, so none of the other bits will change across pixels anyway; my thought is that this would cut the number of calculations to about 25% of the whole blur operation, which I figure is likely to result in a noticeable performance improvement. I've not been able to find any information about this being tried via web search.
Eliminating the dark halo: every time I try a different blur algorithm, I end up having to rewrite it to get rid of the dark shadow around the shape caused by blurring in "black" from colorless pixels where nothing has been painted yet. I've read about this, and I'm using (as far as I know) INT_ARGB_PRE image types, which I remember reading is a solution to this problem. Am I missing something in that solution? Do I need to pre-format the image in some way to make it interpret all the empty pixels as white instead of black?
Thanks!
You may find this interesting:
http://www.jhlabs.com/ip/blurring.html
The dark halo issue is discussed there, all source code is available, and as far as I can recall it only uses standard Java SE stuff.
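On the first question (blurring only the alpha channel), a sketch of the idea: BufferedImage.getAlphaRaster() gives a writable view of just the alpha band, so the color channels are never read or written. This is one horizontal box-blur pass; a real implementation would add a vertical pass and possibly repeat for smoothness:

    import java.awt.image.BufferedImage;
    import java.awt.image.WritableRaster;

    public class AlphaOnlyBlur {
        // One horizontal box-blur pass over just the alpha channel.
        public static void blurAlphaHorizontal(BufferedImage img, int radius) {
            WritableRaster alpha = img.getAlphaRaster(); // single-band view, color untouched
            int w = img.getWidth(), h = img.getHeight();
            int[] row = new int[w], out = new int[w];
            for (int y = 0; y < h; y++) {
                alpha.getSamples(0, y, w, 1, 0, row);
                for (int x = 0; x < w; x++) {
                    int sum = 0, count = 0;
                    for (int k = -radius; k <= radius; k++) {
                        int xx = x + k;
                        if (xx >= 0 && xx < w) { sum += row[xx]; count++; }
                    }
                    out[x] = sum / count;
                }
                alpha.setSamples(0, y, w, 1, 0, out);
            }
        }
    }

Since the image holds a single color, leaving the RGB channels untouched should also sidestep the dark halo: no black from empty pixels ever gets blended into the color channels.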
