Java OpenCV Robinson mask, image is completely black

I was playing around with OpenCV and decided to test out TutorialsPoint's example for a Robinson mask. I copied the code and used a grayscale JPG.
- Unfortunately, the output image was completely black.
- I tried commenting out what appear to be two additional directional filters. The image still came out black.
- I'm using Java 1.8 with OpenCV 3.
try {
    int kernelSize = 9;
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Mat destination = new Mat(source.rows(), source.cols(), source.type());
    Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
        {
            put(0, 0, -1);
            put(0, 1, 0);
            put(0, 2, 1);
            put(1, 0, -2);
            put(1, 1, 0);
            put(1, 2, 2);
            put(2, 0, -1);
            put(2, 1, 0);
            put(2, 2, 1);
        }
    };
    Imgproc.filter2D(source, destination, -1, kernel);
    Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}

The code that you linked is a bit flawed. It declares the kernel size to be 9 x 9, but the kernel itself is clearly 3 x 3. As such, it's putting the kernel coefficients in the top-left corner of the kernel, and the rest of the kernel is 0. This is probably the reason why you're not seeing the right results. The put method writes a number at the given row and column of the matrix. As you can see in the code that defines the kernel, it's putting things in rows 0, 1, 2 and columns 0, 1, 2 - which is implicitly a 3 x 3 kernel, yet the kernel is actually allocated as 9 x 9.
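You can verify this yourself (a quick diagnostic, not part of the fix) by dumping the kernel right after it's built; you'll see the 3 x 3 coefficients sitting in the corner of a 9 x 9 matrix of zeros:
System.out.println(kernel.dump()); // prints a 9 x 9 matrix that is mostly zeros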
As such, please uncomment those lines you commented out, as it's important to define the entire edge detection mask properly. Also, the post is wrong about which edge detection mask it's using: that is actually the Sobel operator. I've never heard of a mask called "Robinson" before, but I have heard of the Roberts-Cross mask, which is a pair of 2 x 2 kernels that look like this:
G_x = [ +1  0 ]      G_y = [  0 +1 ]
      [  0 -1 ]            [ -1  0 ]
(Source: Wikipedia)
Therefore, the simplest fix is to change the kernel size so that it's 3. Simply change this:
int kernelSize = 9;
To this:
int kernelSize = 3;
For a broader picture:
try {
    int kernelSize = 3; // Change
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Mat destination = new Mat(source.rows(), source.cols(), source.type());
    Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
        {
            put(0, 0, -1);
            put(0, 1, 0);
            put(0, 2, 1);
            put(1, 0, -2);
            put(1, 1, 0);
            put(1, 2, 2);
            put(2, 0, -1);
            put(2, 1, 0);
            put(2, 2, 1); // Leave it this way - don't uncomment
        }
    };
    Imgproc.filter2D(source, destination, -1, kernel);
    Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
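Incidentally, since the mask being built is really the Sobel x-direction kernel, you could also skip the hand-built kernel entirely and use OpenCV's built-in Sobel operator. This is just a sketch on my part (not from the tutorial), reusing the same file names as above:
Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
Mat destination = new Mat();
Imgproc.Sobel(source, destination, -1, 1, 0); // ddepth = -1 (same as source), dx = 1, dy = 0
Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);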
Moral of this story: let this be a lesson about finding tutorials online. Don't trust all of them; they sometimes give you wrong information, such as what you experienced just now with the wrong kernel size and the misnamed edge detector. I'd certainly use them as a good starting point, but when it comes to the nitty-gritty details, always debug posted code to make sure that what the author intended to write is actually what is produced.

Related

How can I process BufferedImage faster

I'm making a basic image editor to improve my image processing skills. I have 12 filters (for now).
Each filter has a clickable JLabel that holds a preview image.
I update the images of all of them when the filters are applied, with this function:
public static void buttonImagesUpdater() {
    for (int i = 0; i < effects.size(); i++) {
        effects.get(i).getButton().setImage(new ImageIcon(effects.get(i).process(image)));
    }
}
All filters have a process function like this:
public BufferedImage process(BufferedImage base) {
    BufferedImage product = new BufferedImage(base.getWidth(), base.getHeight(), base.getType());
    for (int indisY = 0; indisY < base.getHeight(); indisY++) {
        for (int indisX = 0; indisX < base.getWidth(); indisX++) {
            Color currentColor = new Color(base.getRGB(indisX, indisY));
            int greyTone = (int) (currentColor.getRed() * 0.315)
                         + (int) (currentColor.getGreen() * 0.215)
                         + (int) (currentColor.getBlue() * 0.111);
            product.setRGB(indisX, indisY, new Color(greyTone, greyTone, greyTone).getRGB());
        }
    }
    return product;
}
The program runs very slowly. When I click an effect's button, it finishes about 45 seconds later with a 5000x3000 image. How can I fix this performance problem?
You have got to remember that 3000 * 5000 is 15,000,000, so you're creating 15,000,000 Color objects and calling setRGB 15,000,000 times. If I were you, I would look into using a ForkJoinPool for this.
I agree with @Jason - the problem is that you're creating (and destroying) 15 million Color objects.
However, I don't think that just using multiple threads is going to get you enough of a performance increase, because you are still going to be putting a lot of pressure on memory and the garbage collector: you'll still be creating and destroying 15 million objects, just several at a time in parallel.
I think that you can both stay away from creating Color objects entirely, and make fewer loops, by using the result of the BufferedImage class' getRGB() method directly, instead of creating a Color object. Further, you can use the overload of getRGB() that returns an array of ints, to get, say, a row of pixels (or more) at a time, to reduce the number of calls that you have to make inside the loop. You can similarly use the version of setRGB() that takes an array of pixel values.
The trick is to be able to convert the int color value to a gray value (or whatever else you need to do) without separating the R, G, and B values, or to find an efficient way to separate R, G, and B - more efficient than creating, using, and destroying a Color object.
For a lead on getting R, G, and B values from the int returned by getRGB(), note that the documentation for Color.getRGB() says,
"Returns the RGB value representing the color in the default sRGB
ColorModel. (Bits 24-31 are alpha, 16-23 are red, 8-15 are green, 0-7
are blue)."
Once you have that working, you can think about parallelizing it.
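To make that concrete, here is a minimal sketch of the idea (my own, assuming a TYPE_INT_RGB image and keeping your grey weights): it fetches a whole row of packed ints per call, splits each into R, G, and B with shifts and masks, and never creates a Color object:
public BufferedImage process(BufferedImage base) {
    int w = base.getWidth(), h = base.getHeight();
    BufferedImage product = new BufferedImage(w, h, base.getType());
    int[] row = new int[w];
    for (int y = 0; y < h; y++) {
        base.getRGB(0, y, w, 1, row, 0, w); // one call reads the whole row
        for (int x = 0; x < w; x++) {
            int rgb = row[x];
            int r = (rgb >> 16) & 0xFF; // bits 16-23
            int g = (rgb >> 8) & 0xFF;  // bits 8-15
            int b = rgb & 0xFF;         // bits 0-7
            int grey = (int) (r * 0.315 + g * 0.215 + b * 0.111);
            row[x] = (grey << 16) | (grey << 8) | grey; // repack as grey RGB
        }
        product.setRGB(0, y, w, 1, row, 0, w); // one call writes the whole row
    }
    return product;
}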
You might try this to see if things speed up a little.
This uses a DataBuffer from an image Raster.
It also uses a map to retain previously converted colors, which may help over time depending on the type of image.
And it works with doubles, since the data buffer supports various types.
I also multiplied your weights by powers of 2 to shift the result into the proper position: the get* methods of Color return values between 0 and 255 inclusive, and the RGB components occupy the lower 24 bits of an int (alpha is in the leftmost byte).
All I see is a dark image, but I tested this with other parameters and I know it works. The long pole in the tent seems to be reading and writing the images. I was using a 6637 x 3787 image and could read, alter, and write it in 12 seconds. For more advanced processing you may want to check out AffineTransformOp.
static Map<Color, Double> colorMap = new HashMap<>();

public BufferedImage process(BufferedImage base) {
    DataBuffer db = base.getRaster().getDataBuffer();
    for (int i = 0; i < db.getSize(); i++) {
        Color currentColor = new Color(db.getElem(i));
        // Cache the converted value so repeated colors are computed only once.
        double greyTone = colorMap.computeIfAbsent(currentColor, v ->
                currentColor.getRed() * .315 * 256 * 256   // shift red into bits 16-23
              + currentColor.getGreen() * .215 * 256       // shift green into bits 8-15
              + currentColor.getBlue() * .115);            // blue stays in bits 0-7
        db.setElemDouble(i, greyTone);
    }
    return base;
}

Java LWJGL glfwGetMonitorPhysicalSize & glfwGetVideoMode not returning expected values

I'm currently trying to make a basic monitor utility class for getting and printing info on monitors. I'm using LWJGL in Java for this. When I call glfwGetMonitorPhysicalSize, I always get 0 back for both x and y, and glfwGetVideoMode only returns "- ' ". I can't figure out what I'm doing wrong here!
Also, the monitor ID seems to be different each time I run the program. Is this normal?
Note that this code is just a test snippet:
private static GLFWErrorCallback errorCallback = Callbacks.errorCallbackPrint(System.out);
private static PointerBuffer monitors = null;

public static long getPrimaryMonitor() {
    glfwSetErrorCallback(errorCallback);
    if (glfwInit() == GL_FALSE)
        System.out.println("error");
    monitors = glfwGetMonitors();
    long monitorId = monitors.get(0);

    // Monitor name
    System.out.println(glfwGetMonitorName(monitorId));

    // Monitor physical size
    IntBuffer xSize = IntBuffer.allocate(4);
    IntBuffer ySize = IntBuffer.allocate(4);
    glfwGetMonitorPhysicalSize(monitorId, xSize, ySize);
    System.out.print("Pos X: ");
    while (xSize.hasRemaining())
        System.out.print(xSize.get());
    System.out.println();
    System.out.print("Pos Y: ");
    while (ySize.hasRemaining())
        System.out.print(ySize.get());
    System.out.println();

    // Monitor video mode
    ByteBuffer videoMode = glfwGetVideoMode(monitorId);
    System.out.print("Video mode: ");
    while (videoMode.hasRemaining())
        System.out.print(videoMode.getChar());
    System.out.println();

    return monitorId;
}
I found the solution! The functions were actually returning valid values, but apparently I was reading them the wrong way!
The correct way for video mode:
ByteBuffer vidmode = glfwGetVideoMode(monitorId);
System.out.println("Video mode width: " + GLFWvidmode.width(vidmode));
System.out.println("Video mode height: " + GLFWvidmode.height(vidmode));
You probably should use glfwGetPrimaryMonitor() instead of glfwGetMonitors().get(0).
As for what glfwGetMonitorPhysicalSize gives you back, I can only say that I get the same result.
The documentation says:
Returns the size, in millimetres, of the display area of the specified monitor.
Any or all of the size arguments may be NULL. If an error occurs, all non-NULL size arguments will be set to zero. Notes: This function may only be called from the main thread. Some systems do not provide accurate monitor size information, either because the EDID data is incorrect, or because the driver does not report it accurately.
So maybe your system just doesn't support it, like mine.
And yes, it's normal that the monitor ID is different each run, because it's like a pointer: it points to dynamically allocated content.

map chunking strategy, rechunk lag issue

I'm having a horrible time coming up with a good question title... sorry/please edit if your brain is less shot than mine.
I am having some issues handling my game's maps client side. My game is tile-based, using 32x32-pixel tiles. My first game map was 1750 x 1750 tiles. I had a bunch of layers client side, but managed to cut it down to 2 (ground and buildings). I was previously loading the entire map's layers into memory (short arrays). When I jumped to 2200 x 2200 tiles, I noticed an older PC having some out-of-memory issues (1GB+). I wish there were a data type between byte and short (I am aiming for ~1000 different tiles). My game supports multiple resolutions, so the player's visible space may show 23x17 tiles at 800x600 all the way up to 45x29 tiles at 1440x1024 (and above). I use Tiled to draw my maps and output the 2 layers into separate text files using a format similar to the following (0, 0, 2, 0, 3, 6, 0, 74, 2...), all on one line.
With the help of many SO questions and some research, I came up with a map chunking strategy. Using the player's current coordinates as the center point, I load enough tiles for 5 times the size of the visible map (the largest would be 45*5 x 29*5 = 225x145 tiles). The player is always drawn in the center and the ground moves beneath him/her (when you walk east, the ground moves west). The minimap is drawn showing one screen away in all directions, making it three times the size of the visible map. Please see the (very scaled down) visual representation below to explain this better than I likely have.
My issue is this: when the player moves 1/5th of the chunk size away from the original center point (chunkX/Y), I call for the game to rescan the file. The new scan uses the player's current coordinates as its center point. Currently the rechunking takes about 0.5s on my PC (which is fairly high-spec), and the map does not update for 1-2 tile moves.
To combat this, I tried running the file scanning in a new thread (before hitting the 1/5th point) into a temporary array buffer. Once it was done scanning, I would copy the buffer into the real array and call repaint(). Occasionally I saw some skipping issues with this, which was no big deal. Worse, I saw it drawing a random part of the map for 1-2 frames. Code sample below:
private void checkIfWithinAndPossiblyReloadChunkMap() {
    if (Math.abs(MyClient.characterX - MyClient.chunkX) + 10 > (MyClient.chunkWidth / 5)) { // arbitrary number away (10)
        Runnable myRunnable = new Runnable() {
            public void run() {
                logger.info("FillMapChunkBuffer started.");
                short chunkXBuffer = MyClient.characterX;
                short chunkYBuffer = MyClient.characterY;
                int topLeftChunkIndex = MyClient.characterX - (MyClient.chunkWidth / 2) + ((MyClient.characterY - (MyClient.chunkHeight / 2)) * MyClient.mapWidth); // get top left coordinate of chunk
                int topRightChunkIndex = topLeftChunkIndex + MyClient.chunkWidth - 1; // top right coordinate of chunk
                int[] leftChunkSides = new int[MyClient.chunkHeight];
                int[] rightChunkSides = new int[MyClient.chunkHeight];
                for (int i = 0; i < MyClient.chunkHeight; i++) { // figure out the left and right index points for the chunk
                    leftChunkSides[i] = topLeftChunkIndex + (MyClient.mapWidth * i);
                    rightChunkSides[i] = topRightChunkIndex + (MyClient.mapWidth * i);
                }
                MyClient.groundLayerBuffer = MyClient.FillGroundBuffer(leftChunkSides, rightChunkSides);
                MyClient.buildingLayerBuffer = MyClient.FillBuildingBuffer(leftChunkSides, rightChunkSides);
                MyClient.groundLayer = MyClient.groundLayerBuffer;
                MyClient.buildingLayer = MyClient.buildingLayerBuffer;
                MyClient.chunkX = chunkXBuffer;
                MyClient.chunkY = chunkYBuffer;
                MyClient.gamePanel.repaint();
                logger.info("FillMapChunkBuffer done.");
            }
        };
        Thread thread = new Thread(myRunnable);
        thread.start();
    } else if (Math.abs(MyClient.characterY - MyClient.chunkY) + 10 > (MyClient.chunkHeight / 5)) { // arbitrary number away (10)
        // same code as above for Y
    }
}
public static short[] FillGroundBuffer(int[] leftChunkSides, int[] rightChunkSides) {
    try {
        return scanMapFile("res/images/tiles/MyFirstMap-ground-p.json", leftChunkSides, rightChunkSides);
    } catch (FileNotFoundException e) {
        logger.fatal("ReadMapFile(ground)", e);
        JOptionPane.showMessageDialog(theDesktop, getStringChecked("message_file_locks") + "\n\n" + e.getMessage(), getStringChecked("message_error"), JOptionPane.ERROR_MESSAGE);
        System.exit(1);
    }
    return null;
}
private static short[] scanMapFile(String path, int[] leftChunkSides, int[] rightChunkSides) throws FileNotFoundException {
    Scanner scanner = new Scanner(new File(path));
    scanner.useDelimiter(", ");
    int topLeftChunkIndex = leftChunkSides[0];
    int bottomRightChunkIndex = rightChunkSides[rightChunkSides.length - 1];
    short[] tmpMap = new short[chunkWidth * chunkHeight];
    int count = 0;
    int arrayIndex = 0;
    while (scanner.hasNext()) {
        if (count >= topLeftChunkIndex && count <= bottomRightChunkIndex) { // within or outside (east and west) of map chunk
            if (count == bottomRightChunkIndex) { // last entry
                tmpMap[arrayIndex] = scanner.nextShort();
                break;
            } else { // not last entry
                if (isInsideMapChunk(count, leftChunkSides, rightChunkSides)) {
                    tmpMap[arrayIndex] = scanner.nextShort();
                    arrayIndex++;
                } else {
                    scanner.nextShort();
                }
            }
        } else {
            scanner.nextShort();
        }
        count++;
    }
    scanner.close();
    return tmpMap;
}
I really am at my wits' end with this. I want to be able to move past this GUI crap and work on real game mechanics. Any help would be tremendously appreciated. Sorry for the long post, but trust me, a lot of thought and sleepless nights have gone into this. I need the SO experts' ideas. Thanks so much!!
p.s. I came up with some potential optimization ideas (but not sure these would solve some of the issue):
split the map files into multiple lines so I can call scanner.nextLine() once, rather than scanner.next() 2200 times
come up with a formula that, given the 4 corners of the map chunk, determines whether a given coordinate lies within it. This would allow me to call scanner.nextLine() when at the farthest point of the chunk on a given line. It would require the multiline map file approach above.
throw away only 1/5th of the chunk, shift the array, and load the next 1/5th of the chunk
Make sure scanning the file has finished before starting a new scan.
Currently you'll start scanning again (possibly on every frame) while your centre is too far away from the previous scan centre. To fix this, record that a scan is in progress before you even start it, and extend your far-away condition accordingly.
// MyClient.worker represents the currently running worker thread (if any)
if (farAwayCondition && MyClient.worker == null) { // farAwayCondition = the distance check from above
    Runnable myRunnable = new Runnable() {
        public void run() {
            logger.info("FillMapChunkBuffer started.");
            try {
                short chunkXBuffer = MyClient.nextChunkX;
                short chunkYBuffer = MyClient.nextChunkY;
                int topLeftChunkIndex = MyClient.characterX - (MyClient.chunkWidth / 2) + ((MyClient.characterY - (MyClient.chunkHeight / 2)) * MyClient.mapWidth); // get top left coordinate of chunk
                int topRightChunkIndex = topLeftChunkIndex + MyClient.chunkWidth - 1; // top right coordinate of chunk
                int[] leftChunkSides = new int[MyClient.chunkHeight];
                int[] rightChunkSides = new int[MyClient.chunkHeight];
                for (int i = 0; i < MyClient.chunkHeight; i++) { // figure out the left and right index points for the chunk
                    leftChunkSides[i] = topLeftChunkIndex + (MyClient.mapWidth * i);
                    rightChunkSides[i] = topRightChunkIndex + (MyClient.mapWidth * i);
                }
                // no reason for these to be members of MyClient
                short[] groundLayerBuffer = MyClient.FillGroundBuffer(leftChunkSides, rightChunkSides);
                short[] buildingLayerBuffer = MyClient.FillBuildingBuffer(leftChunkSides, rightChunkSides);
                MyClient.groundLayer = groundLayerBuffer;
                MyClient.buildingLayer = buildingLayerBuffer;
                MyClient.chunkX = chunkXBuffer;
                MyClient.chunkY = chunkYBuffer;
                MyClient.gamePanel.repaint();
                logger.info("FillMapChunkBuffer done.");
            } finally {
                // in any case clear the worker thread
                MyClient.worker = null;
            }
        }
    };
    // remember that we're currently scanning by remembering the worker directly
    MyClient.worker = new Thread(myRunnable);
    // start worker
    MyClient.worker.start();
}
Preventing a rescan before the previous rescan has completed presents another challenge: what do you do if you walk diagonally, i.e. you reach the situation where in x you meet the far-away condition, start scanning, and during that scan you meet the condition for y to be far away? Since you choose the next scan centre according to your current position, this problem should not arise as long as your chunk size is large enough.
Remembering the worker directly comes with a bonus: what do you do if you need to teleport the player/camera at some point while you are scanning? You can now simply terminate the worker thread and start scanning at the new location: you'll have to check the termination flag manually in MyClient.FillGroundBuffer and MyClient.FillBuildingBuffer, reject the (partially computed) result in the Runnable, and skip the reset of MyClient.worker in case of an abort.
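A minimal sketch of that abort protocol (class, field, and method names here are hypothetical, not from the code above):
class ChunkScanWorker implements Runnable {
    volatile boolean aborted = false; // set to true from the game thread on teleport

    public void run() {
        short[] ground = fillGroundBuffer();
        if (aborted || ground == null)
            return; // reject the partial result; skip resetting MyClient.worker
        // ... publish buffers, update chunkX/chunkY, repaint, clear the worker ...
    }

    private short[] fillGroundBuffer() {
        short[] tmp = new short[1024]; // placeholder size
        for (int i = 0; i < tmp.length; i++) {
            if (aborted)
                return null; // bail out mid-scan
            tmp[i] = 0; // placeholder for scanner.nextShort()
        }
        return tmp;
    }
}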
If you need to stream more data from the file system in your game, think about implementing a streaming service (extend the idea of the worker to one that processes arbitrary file-parsing jobs). You should also check whether your hard drive can read from multiple files concurrently faster than it can read a single stream from a single file.
Turning to a binary file format is an option, but it won't save much in terms of file size. And since Scanner already uses an internal buffer to do its parsing (parsing integers from a buffer is faster than filling a buffer from a file), you should first focus on getting your worker running optimally.
Try to speed up reading by using a binary file instead of a CSV file.
Use DataInputStream and readShort() for that. (This will also cut down the size of the map.)
You can also split the map into chunks of 32x32 tiles and save them in several files, so you don't have to load tiles that are already loaded.
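A rough sketch of the binary-file idea (file names and map dimensions are placeholders; assume the enclosing method throws IOException): convert the CSV map to raw shorts once, then load it back with readShort(), which skips all the text parsing:
// one-time conversion: CSV text -> raw shorts
try (Scanner scanner = new Scanner(new File("ground.csv")).useDelimiter(", ");
     DataOutputStream out = new DataOutputStream(
             new BufferedOutputStream(new FileOutputStream("ground.bin")))) {
    while (scanner.hasNextShort())
        out.writeShort(scanner.nextShort());
}

// at load time: read the raw shorts straight into the tile array
try (DataInputStream in = new DataInputStream(
        new BufferedInputStream(new FileInputStream("ground.bin")))) {
    short[] tiles = new short[mapWidth * mapHeight]; // dimensions are known from the map
    for (int i = 0; i < tiles.length; i++)
        tiles[i] = in.readShort();
}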

OpenCV - Converting DCT on C++ to Android : java.lang.IndexOutOfBoundsException: Invalid index 0, size is 0 thrown

I am trying to convert some C++ DCT code to Android. Below is the code that I have tried to convert from C++ to OpenCV on Android. However, I get this error:
java.lang.IndexOutOfBoundsException: Invalid index 0, size is 0
I found out that "outplanes" has size 0. Is it possible to set the ArrayList size, or am I missing something? To be honest, I am not sure whether this code will work at all. Please advise me on what I should do. Thank you so much!
Mat secondImage = image.clone();
List<Mat> planes = new ArrayList<Mat>();
Core.split(secondImage, planes);
List<Mat> outplanes = new ArrayList<Mat>(planes.size());
Mat trans = new Mat();
Log.d("Planes", Integer.toString(planes.size())); // 3
Log.d("Outplanes", Integer.toString(outplanes.size())); // 0
for (int k = 0; k < planes.size(); k++) {
    planes.get(k).convertTo(planes.get(k), CvType.CV_32FC1);
    Core.dct(planes.get(k), outplanes.get(k));
    outplanes.get(k).convertTo(outplanes.get(k), CvType.CV_8UC1);
}
Core.merge(outplanes, trans);
The call:
List<Mat> outplanes = new ArrayList<Mat>(planes.size());
will not dimension all your Mats for you - the parameter to the constructor is just a hint about how much memory you think you might eventually need. It doesn't matter if it turns out that you need more; the ArrayList will grow as you add items, but it is slightly more performant to tell it how large the list will be if you know this beforehand.
So, you still need to add Mats to the ArrayList before the size as measured by outplanes.size() will increase. For example, if I do:
outplanes.addAll(planes);
after you initialise outplanes, the error goes away.
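Alternatively, a small sketch along the same lines: skip pre-sizing entirely and let dct() write into a fresh Mat that you then add to the list:
List<Mat> outplanes = new ArrayList<Mat>();
for (int k = 0; k < planes.size(); k++) {
    planes.get(k).convertTo(planes.get(k), CvType.CV_32FC1);
    Mat out = new Mat(); // a real destination Mat for dct()
    Core.dct(planes.get(k), out);
    out.convertTo(out, CvType.CV_8UC1);
    outplanes.add(out); // grow the list as we go
}
Core.merge(outplanes, trans);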
Also look at this:
Get DCT (Discrete Cosine Transform) of image on Android

How to identify new faces with OpenIMAJ

I've got a live stream of JPG images coming in on a different thread in my Java app, and I want to continually scan for faces so that I can later output a list of all the different faces that passed the camera while it was running, and how many times each face was seen. Here's my current code:
void doImageProcessing() {
    // Create face stuff
    FKEFaceDetector faceDetector = new FKEFaceDetector(new HaarCascadeDetector());
    EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer = EigenFaceRecogniser.create(512, new RotateScaleAligner(), 512, DoubleFVComparison.CORRELATION, Float.MAX_VALUE);
    FaceRecognitionEngine<KEDetectedFace, Extractor<KEDetectedFace>, Person> faceEngine = FaceRecognitionEngine.create(faceDetector, faceRecognizer);

    // Start loop
    while (true) {
        // Get next frame
        byte[] imgData = nextProcessingData;
        nextProcessingData = null;

        // Decode image
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(imgData));

        // Detect faces
        FImage fimg = ImageUtilities.createFImage(img);
        List<KEDetectedFace> faces = faceEngine.getDetector().detectFaces(fimg);

        // Go through detected faces
        for (KEDetectedFace face : faces) {
            // Find existing person for this face
            Person person = null;
            try {
                List<IndependentPair<KEDetectedFace, ScoredAnnotation<Person>>> rfaces = faceEngine.recogniseBest(face.getFacePatch());
                ScoredAnnotation<Person> score = rfaces.get(0).getSecondObject();
                if (score != null)
                    person = score.annotation;
            } catch (Exception e) {
            }

            // If not found, create
            if (person == null) {
                // Create person
                person = new Person();
                System.out.println("Identified new person: " + person.getIdentifier());

                // Train engine to recognize this new person
                faceEngine.train(person, face.getFacePatch());
            } else {
                // This person has been detected before
                System.out.println("Identified existing person: " + person.getIdentifier());
            }
        }
    }
}
The problem is that it always detects a face as new, even if it's the same face that was detected in the previous frame. rfaces is always empty; it can never identify an existing face. What am I doing wrong?
Also, I have no idea what the parameters to the EigenFaceRecogniser.create() function should be; maybe that's why it's not recognizing anything...
The parameters you've given to the EigenFaceRecogniser.create() function are way off, so that's probably the cause of your problems. The following is more likely to work:
EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer = EigenFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.CORRELATION, 0.9f);
Explanation:
The first parameter is the number of principal components in the EigenFace algorithm; the exact value is normally determined experimentally, but something around ~20 is probably fine.
The third parameter is the number of nearest neighbours to use for the KNN classifier. 1 nearest-neighbour should be fine.
The final parameter is a distance threshold for the classifier. The correlation comparison returns a similarity measure (higher values mean more similar), so the threshold given is a lower limit that must be exceeded. Since we've set 1 nearest neighbour, the similarity between the most similar face and the query face must be greater than 0.9. Note that 0.9 is just a guess; in order to optimise the performance of your recogniser you'll need to play around with this value.
Another minor point - instead of:
BufferedImage img = ImageIO.read(new ByteArrayInputStream(imgData));
FImage fimg = ImageUtilities.createFImage(img);
It's generally better to let OpenIMAJ read your image, as it works around a number of known problems with ImageIO's handling of certain types of JPEG:
FImage fimg = ImageUtilities.readF(new ByteArrayInputStream(imgData));
