I'm new to OpenCV. I'm working with it in Java, which is a pain, since most of the examples and resources around the internet are in C++.
Currently my project involves recognizing a chessboard and then drawing on specific parts of the board.
I've gotten as far as getting the corners through the Calib3d part of the library, but this is where I get stuck. My question is: how do I convert the corner info I got (which is the corners' placement on the 2D image) into something I can use in 3D space to draw with LibGDX?
Following is my code (in snippets):
public class chessboardDrawer implements ApplicationListener{
... //Fields are here
MatOfPoint2f corners = new MatOfPoint2f();
MatOfPoint3f objectPoints = new MatOfPoint3f();
public void create(){
webcam = new VideoCapture(0);
... //Program sleeps to make sure camera is ready
}
public void render(){
//Fetch webcam image
webcam.read(webcamImage);
//Grayscale the image
Imgproc.cvtColor(webcamImage, greyImage, Imgproc.COLOR_BGR2GRAY);
//Check if the image contains a chessboard with 9x6 inner corners
boolean foundCorners = Calib3d.findChessboardCorners(greyImage,
new Size(9,6),
corners, Calib3d.CALIB_CB_FAST_CHECK | Calib3d.CALIB_CB_ADAPTIVE_THRESH);
if(foundCorners){
for(int i = 0; i < corners.rows(); i++){
//This is where i have to convert the corners
//to something i can use in libGdx to draw boxes
//And insert them into the objectPoints variable
}
}
//Show the corners on the webcamImage
Calib3d.drawChessboardCorners(webcamImage, new Size(9,6), corners, foundCorners);
//Helper library to show the webcamImage
UtilAR.imDrawBackground(webcamImage);
}
}
Any help?
You actually need to localize the (physical) camera using those coordinates.
Fortunately, that is really easy in the case of a chessboard:
1. Camera pose estimation (see the solvePnP sketch below).
Note: the current implementation in OpenCV may not satisfy you in terms of accuracy (at least for a monocular camera), and a good AR experience demands good accuracy.
2. (Optional) Use a noise-filtering method/estimation algorithm to stabilize the pose estimate across time/frames (preferably a Kalman filter). This reduces jerks and wobbling.
3. Control the pose (position + orientation) of a PerspectiveCamera using the pose estimated above.
4. Draw your 3D content using scales and an initial orientation consistent with the objPoints you provided to the camera calibration method.
You can follow this nice blog post to do it.
All 3D models that you render will then be in the chessboard's frame of reference.
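To make step 1 concrete, here is a minimal sketch using OpenCV's Java bindings. It assumes cameraMatrix (a Mat) and distCoeffs (a MatOfDouble) came from a prior Calib3d.calibrateCamera run, and squareSize (the edge length of one board square, in whatever world unit you want LibGDX to use) is a placeholder you define:
List<Point3> obj = new ArrayList<>();
for (int y = 0; y < 6; y++)
    for (int x = 0; x < 9; x++)
        obj.add(new Point3(x * squareSize, y * squareSize, 0)); // board plane is z = 0
objectPoints.fromList(obj); // the MatOfPoint3f field from the question

Mat rvec = new Mat();
Mat tvec = new Mat();
// corners is the MatOfPoint2f filled by findChessboardCorners
Calib3d.solvePnP(objectPoints, corners, cameraMatrix, distCoeffs, rvec, tvec);
// rvec/tvec give the board's pose in the camera frame; invert that transform
// to position a LibGDX PerspectiveCamera relative to the board.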
Hope this helps.
Good luck.
Related
I have a TriangleMesh in JavaFX and a 2D point in screen space, and I want to check whether any triangles in the mesh intersect with that point (basically the same as clicking a point in the mesh, except that I already have a predefined screen coordinate).
Note that I don't care where the intersection happened, only whether it intersected or not.
I did try googling this and found some useful answers, especially one by José Pereda here: https://stackoverflow.com/a/27612786/14999427
However, the links are dead.
Edit: I did find a working link to it; however, it just hardcodes the origin/target, and I'm not sure how to compute these from the 2D screen point.
Another idea I had was to copy the implementation from OpenJFX, but after some research I figured I'd have to copy a lot of internals, and even then I wasn't sure I would get it working, so I scrapped that idea.
Goal: I want to use my mouse to create a rectangle at an arbitrary position in the mesh and then find all triangles that are inside that rectangle (I believe it's usually called rectangle selection in 3D modeling software).
Update: converting each triangle to 2D screen coordinates using Node#localToScreen and then doing a 2D point-inside-triangle test works perfectly; however, it also selects faces that are culled.
Update 2: after also doing the culling myself, it works quite well (it's not perfect, but it's mostly accurate).
Current code:
Point2D v1Screen = view.localToScreen(mesh.getPoints().get(0), mesh.getPoints()
.get(1), mesh.getPoints().get(2));
Point2D v2Screen = view.localToScreen(mesh.getPoints().get(3), mesh.getPoints()
.get(4), mesh.getPoints().get(5));
Point2D v3Screen = view.localToScreen(mesh.getPoints().get(6), mesh.getPoints()
.get(7), mesh.getPoints().get(8));
Point2D point = new Point2D(mouseX, mouseY);
boolean inTriangle = pointInTriangle(point, v1Screen, v2Screen, v3Screen);
// Back-face culling
double dxAB = v1Screen.getX() - v2Screen.getX();
double dyAB = v1Screen.getY() - v2Screen.getY();
double dxCB = v3Screen.getX() - v2Screen.getX();
double dyCB = v3Screen.getY() - v2Screen.getY();
boolean culled = ((dxAB * dyCB) - (dyAB * dxCB)) <= 0;
if (inTriangle && !culled) {
view.setMaterial(new PhongMaterial(Color.BLUE));
System.out.println("Intersected");
}
I think the answer to this question may help you: How to correctly obtain screen coordinates of 3D shape after rotation. If you can compute the screen coordinates of a 3D shape (triangle), the hit test should then be simple.
I am working on a desktop game with libGDX. I want to reduce the aliasing, which is very strong.
There is documentation about that. The magic is supposed to happen in the DesktopLauncher class with the line config.samples = samplingNumber;
I tried sampling numbers of 2, 4, 8, and 16. I am unable to see a difference.
Here is my DesktopLauncher class.
public class DesktopLauncher {
public static void main (String[] arg) {
LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
config.title = "AA Test";
config.width = 1280;
config.height = 720;
config.samples = 8;
new LwjglApplication(new MyGdxGame(), config);
}
}
And here is an image showing the difference between no AA and MSAA 16x. The same result is observed for MSAA 2x, 4x and 8x.
Am I missing something to apply MSAA to my libGDX project?
I found a solution to this problem.
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
The samples field, as you experienced, had no effect, but setting this filter on the Textures did.
If you are using TextureAtlas then you can do the following to your TextureAtlas object.
atlas.getTextures().forEach(t -> t.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear));
MSAA only affects the edges of polygons, which are not usually visible in a 2D scene because sprites typically do not bleed all the way to their rectangular edges. (Exceptions are opaque rectangular sprites, and shapes drawn with ShapeRenderer.)
Your image quality looks to me like you are not using a mip mapping filter. Load your texture with the useMipMaps parameter true, and use a min filter of MipMapLinearLinear or MipMapLinearNearest (the first looks better, costs more). Note: a MipMap filter does nothing if you didn't load your Texture with useMipMaps true.
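A minimal sketch of that loading step (the file name is a placeholder):
Texture texture = new Texture(Gdx.files.internal("sprite.png"), true); // useMipMaps = true
texture.setFilter(Texture.TextureFilter.MipMapLinearLinear, Texture.TextureFilter.Linear);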
There are AA techniques that do process all pixels of the screen, but they are more expensive than simply using mip mapping and trilinear filtering. One example is FXAA, which is done not with a configuration setting, but by drawing your scene to a frame buffer object, and then drawing the FBO's texture to screen with a special shader.
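A rough outline of that FBO route in libGDX; the FXAA shader itself is not shown, and fxaaShader and batch are assumed to exist already:
// created once
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888,
        Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);

// each frame: render the scene into the FBO instead of the screen
fbo.begin();
// ... draw the scene as usual ...
fbo.end();

// then draw the FBO's texture full-screen through the FXAA shader
TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
region.flip(false, true); // FBO color textures come out upside down
batch.setShader(fxaaShader);
batch.begin();
batch.draw(region, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.end();
batch.setShader(null); // restore the default shader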
Your GPU chip might not support MSAA; in that case, it may support CSAA (coverage sampling) instead.
To make it work you need to replace
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
with
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT | (Gdx.graphics.getBufferFormat().coverageSampling ? GL20.GL_COVERAGE_BUFFER_BIT_NV : 0));
in your render method. Then set config.samples for Desktop or config.numSamples for Android to the desired value.
Read more about LibGDX AA here
I'm trying to make it so that I can move a camera around a world; however, I'm having a tough time finding resources that explain this clearly. Most resources I have found explain (at least I think they do) how to move the world around the camera, without the camera moving, to create the illusion of movement.
I have implemented this, but rotating the world makes it spin around the world's origin rather than around the camera. I am now of the mindset that I would get far better results if I could move the camera through the world and rotate it independently. So I am asking which way is better for creating camera movement in JOGL: moving the world around the camera, or moving the camera through the world?
Use modern OpenGL with shaders so you can do the latter. Store the transform on the camera object and use it to compute the view matrix, which gets passed to your objects' shaders. Try having multiple cameras and multiple viewports.
//C++
//choose camera to use in the viewport
for (int i = 0; i < myWin.allObj.size(); ++i)
{
if (myWin.allObj[i]->name->val_s == "persp1") //set persp1 by default
selCam = myWin.allObj[i];
}
//rendering loop
ViewM_t = glm::translate(glm::mat4(), -selCam->t->val_3);
ViewM_rx = glm::rotate(glm::mat4(), selCam->r->val_3.x, myWin.axisX);
ViewM_ry = glm::rotate(glm::mat4(), selCam->r->val_3.y, myWin.axisY);
ViewM = ViewM_t * ViewM_rx * ViewM_ry;
//in each object: compute MVP = Projection * View * Model and upload it to the GPU
I'm currently developing an application for Android devices. The main functionality is to draw polylines on a map to show the traffic on each street in the city. Unfortunately, when I draw around 3K polylines (the number is reduced according to the screen size and zoom level), my map gets incredibly slow, not to mention how long drawing all of the lines takes.
Maybe you know a more efficient way to mark streets or draw lines on a map?
I was also thinking about switching to OSM, but I have never used it and don't know how efficient it is.
I debug the app on a Samsung Galaxy Note 10.1, and the app uses Maps API v2.
My code to draw polylines:
Polyline line;
List<Float> coordinatesStart;
List<Float> coordinatesEnd;
LatLng start;
LatLng end;
List<List<Float>> coordinates;
int polylinesNumber = 0;
for(Features ftr : features){
coordinates = ftr.geometry.coordinates;
for(int i = 0; i<coordinates.size()-1; i++){
coordinatesStart = coordinates.get(i);
coordinatesEnd = coordinates.get(i+1);
start = new LatLng(coordinatesStart.get(1), coordinatesStart.get(0));
end = new LatLng(coordinatesEnd.get(1), coordinatesEnd.get(0));
line = map.addPolyline(new PolylineOptions()
.add(start, end)
.width(3)
.color(0x7F0000FF)); //semi-transparent blue
polylinesNumber++;
}
}
I would appreciate any help!
There is a big optimization available here.
Your main error is that you create a new PolylineOptions instance for each and every line you draw on the map. This makes the drawing terribly slow.
The solution: use only one instance of PolylineOptions, and call only its .add(LatLng) method inside the loops.
//MAGIC #1 here
//You make only ONE instance of polylineOptions.
//Setting width and color, points for the segments added later inside the loops.
PolylineOptions myPolylineOptionsInstance = new PolylineOptions()
.width(3)
.color(0x7F0000FF);
for (Features ftr : features) {
coordinates = ftr.geometry.coordinates;
for (int i = 0; i < coordinates.size(); i++) {
coordinatesStart = coordinates.get(i);
start = new LatLng(coordinatesStart.get(1), coordinatesStart.get(0));
//MAGIC #2 here
//Adding the actual point to the polyline instance.
myPolylineOptionsInstance.add(start);
polylinesNumber++;
}
}
//MAGIC #3 here
//Drawing, simply only once.
line = map.addPolyline(myPolylineOptionsInstance);
Attention:
If you would like different colors for different line segments/sections, you will have to use multiple PolylineOptions instances, because a single one can have only one color. But the method stays the same: use as few PolylineOptions instances as you can.
Do you check whether the polylines you draw are even visible to the user on the screen? If not, that would be my first idea. This question could be of help for that.
This might be of help as well:
http://discgolfsoftware.wordpress.com/2012/12/06/hiding-and-showing-on-screen-markers-with-google-maps-android-api-v2/
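A quick sketch of that visibility test using the Maps API projection, reusing the map, start, and end variables from the question:
LatLngBounds visible = map.getProjection().getVisibleRegion().latLngBounds;
if (visible.contains(start) || visible.contains(end)) {
    // only add the polyline when at least one endpoint is on screen
    line = map.addPolyline(new PolylineOptions()
            .add(start, end)
            .width(3)
            .color(0x7F0000FF));
}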
I want to chime in on this because I didn't find this answer complete. If you zoom out, you're still going to have a ton of individual polylines on screen, and the UI thread will grind to a halt. I solved this problem using a custom TileProvider and a spherical mercator projection of my LatLng points to screen pixels. The idea came from the maps-utils library, which has most of the tools needed to write a canvas to a tile (and a lot of other niceties, too).
I've written an example, ComplexTileOverlays, from a project I was working on. It includes ways to change alpha and line thickness in the CustomTileProvider.
I first load my custom database of polylines into memory using a splash screen (for this example, it's an open database of bike facilities on the island of Montréal). From there, I draw each line's projection onto a 256x256 pixel canvas representing one tile. Overall, this technique is faster by leaps and bounds if you have a lot of graphical overlays to tie to the map.
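To illustrate the idea, here is a hedged sketch of such a provider (not the actual ComplexTileOverlays code); SphericalMercatorProjection and Point come from the maps-utils library, the rest from android.graphics and the Maps API:
public class PolylineTileProvider implements TileProvider {
    private static final int TILE_SIZE = 256;
    // maps LatLng into "world" coordinates in the range [0, 256)
    private final SphericalMercatorProjection projection =
            new SphericalMercatorProjection(TILE_SIZE);
    private final List<List<LatLng>> lines; // polylines preloaded into memory

    public PolylineTileProvider(List<List<LatLng>> lines) {
        this.lines = lines;
    }

    @Override
    public Tile getTile(int x, int y, int zoom) {
        Bitmap bitmap = Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(0x7F0000FF);
        paint.setStrokeWidth(3f);
        double scale = Math.pow(2, zoom); // world coords -> absolute pixels at this zoom
        for (List<LatLng> line : lines) {
            Point prev = null;
            for (LatLng latLng : line) {
                Point p = projection.toPoint(latLng);
                if (prev != null) {
                    // shift absolute pixels into this tile's local 256x256 space
                    canvas.drawLine(
                            (float) (prev.x * scale - x * TILE_SIZE),
                            (float) (prev.y * scale - y * TILE_SIZE),
                            (float) (p.x * scale - x * TILE_SIZE),
                            (float) (p.y * scale - y * TILE_SIZE),
                            paint);
                }
                prev = p;
            }
        }
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
        return new Tile(TILE_SIZE, TILE_SIZE, stream.toByteArray());
    }
}
The overlay would then be attached once with map.addTileOverlay(new TileOverlayOptions().tileProvider(new PolylineTileProvider(lines))), leaving visibility handling to the map engine instead of the UI thread.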
I am trying to rotate a BufferedImage of a missile turret so that it looks like it's following a target. Basically, I can do it easily with AffineTransform.
My current code, in a nutshell, is:
public BufferedImage tower = null;
try
{
tower = ImageIO.read(SpriteSheet.class.getResource("/spriteSheet/testTower.png"));
}
catch(IOException e)
{
e.printStackTrace();
}
AffineTransform tx = AffineTransform.getRotateInstance(rotationRequired, locationX, locationY);
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
//then I draw it using
g.drawImage(op.filter(tower, null), towerLocationX, towerLocationY, null);
This works, but what I want to do is transform (rotate) the BufferedImage, copy the newly rotated pixel data into a pixel array, and then draw that onto the screen, because I believe this is how most games draw rotating images, as opposed to drawing a PNG directly to the screen.
But what do I know? How exactly do 2D games draw rotating images? Am I doing it correctly, or is there a better/more memory-efficient way of doing this?
There are a lot of ways to tackle image manipulation in 2D games. Before optimizing, though, you should ask yourself whether there's a real need for it to begin with. Moreover, memory optimization usually comes at the cost of CPU performance, and vice versa.
If CPU time is the problem, a common approach is to keep an array of images already rotated to certain angles (precalculated).
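A sketch of that precalculation idea, reusing the tower image from the question (the frame count and variable names are illustrative):
// build 36 pre-rotated frames (one per 10 degrees) once, up front
BufferedImage[] rotated = new BufferedImage[36];
double cx = tower.getWidth() / 2.0;
double cy = tower.getHeight() / 2.0;
for (int i = 0; i < rotated.length; i++) {
    AffineTransform tx = AffineTransform.getRotateInstance(Math.toRadians(i * 10), cx, cy);
    // note: corners may clip unless the source image is padded
    rotated[i] = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR).filter(tower, null);
}

// at draw time, pick the nearest precalculated frame instead of rotating
int frame = (((int) Math.round(Math.toDegrees(currentAngle) / 10)) % 36 + 36) % 36;
g.drawImage(rotated[frame], towerLocationX, towerLocationY, null);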
If memory is the problem, keep a single image and calculate the rotated form each time it's displayed. An even more memory-efficient yet CPU-consuming approach is to draw vector shapes rather than images. This also gives better-looking results than the interpolation of the smoothing algorithm used when transforming raster images. Java can render SVG through several good packages (e.g. Apache Batik: http://xmlgraphics.apache.org/batik/).
Finally, Java can be connected to graphics libraries (OpenGL, etc.) to perform the rendering, thus improving performance. Such libraries store images in the graphics card's memory to reduce CPU usage (http://jogamp.org/jogl/www/).