As the title implies, I need an algorithm, code, or a library that would help me stretch a Bitmap (or a Path in Android) to an arbitrary polygon. The polygon is given as a list of x, y coordinates. What I actually need to transform/stretch is a Path object in Android, which is also given by x, y coordinates. I mention Bitmap because it is more likely that someone has had a similar problem, and I assume that both would be transformed by a Matrix.
I tried to use Matrix.setPolyToPoly(...), but it doesn't seem to help, since it transforms to a quadrilateral (only 4 points), not to an arbitrary polygon.
For a better illustration of what I need, please check out the image below. It is not an exact transformation, but something close. Note that the whole image is stretched into the star-shaped polygon; it is not a mask and not a trim, just a pixel transition.
I saw your question a few days ago, then yesterday I ran across this:
Canvas#drawBitmapMesh | Android Developers
It's kind of hard to grasp, but the way I understand it, you start with an imaginary elastic grid over your bitmap. The way you want to warp the bitmap can be expressed by moving the x, y points of the grid to alternate locations.
Here's an article with a diagram and here's an article with some sample code.
Obviously, the hard part now is to take your frame polygon and use it to generate the warped vertices in the mesh. That may take some fancy mathematics. But I thought this would be a step in the right direction.
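To make the mechanics concrete, here is a minimal sketch of building the undistorted vertex grid and drawing it; the grid dimensions are arbitrary, and bitmap/canvas are assumed to come from your own drawing code:

```java
// Build a (WIDTH+1) x (HEIGHT+1) grid of vertices laid flat over the bitmap.
// Undistorted, this draws the bitmap unchanged; the warp happens by moving
// the entries of verts before the draw call.
final int WIDTH = 20, HEIGHT = 20;
final int COUNT = (WIDTH + 1) * (HEIGHT + 1);

float[] verts = new float[COUNT * 2];
int index = 0;
for (int y = 0; y <= HEIGHT; y++) {
    float fy = bitmap.getHeight() * y / (float) HEIGHT;
    for (int x = 0; x <= WIDTH; x++) {
        float fx = bitmap.getWidth() * x / (float) WIDTH;
        verts[index * 2] = fx;
        verts[index * 2 + 1] = fy;
        index++;
    }
}
// ... displace the vertices in verts here ...
canvas.drawBitmapMesh(bitmap, WIDTH, HEIGHT, verts, 0, null, 0, null);
```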
This is what I was envisioning: I'm looking at the star polygon and I'm picturing a circle as the starting point (not the square). The star could be seen as taking the circle and stretching points on it toward and away from the center. Whichever way it was stretched would create some vectors, from zero at the center to strongest at the stretch point.
For a Path, you could then just apply the vectors to the points in the path, but the lines would also need to be bent so this would be some pretty convoluted math with Bezier curves (convoluted at least for me, I'm not any sort of mathematician).
But if you drew the Path onto a Bitmap you might be in a better position. You could just alter the mesh vertices using the different vectors then use Canvas.drawBitmapMesh() to render the final result.
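Continuing the sketch above, the radial-vector idea might look something like the following; warpFactor() is a hypothetical function you would derive from your star polygon (e.g., the polygon's radius at a given angle divided by the circle's radius):

```java
// Push each mesh vertex toward or away from the bitmap's center by a
// factor that depends on its angle, producing a star-like warp.
float cx = bitmap.getWidth() / 2f;
float cy = bitmap.getHeight() / 2f;
for (int i = 0; i < verts.length; i += 2) {
    float dx = verts[i] - cx;
    float dy = verts[i + 1] - cy;
    double angle = Math.atan2(dy, dx);
    float scale = warpFactor(angle); // hypothetical: target radius / circle radius
    verts[i] = cx + dx * scale;
    verts[i + 1] = cy + dy * scale;
}
canvas.drawBitmapMesh(bitmap, WIDTH, HEIGHT, verts, 0, null, 0, null);
```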
I have a photo of a paper that I hold up to my webcam, and I want to crop the photo down to just the paper. That way, my OCR program will potentially be more accurate, as well as conceivably faster.
I have taken a couple steps thus far to isolate the paper from the background.
First, I use Canny Edge detection, set with high thresholds. This provides a two-color representation of the edges of my image. On it, I can see a rounded rectangle among some other artifacts that happen to have sharp edges in the background.
Next, I use a Hough transform to draw the vectors with over 100 point hits, in polar coordinates, on a black background. The resulting image is as shown:
See that large (the largest), almost-rectangular figure right in the middle? That's the paper I'm holding. I need to isolate that trapezoid as a polygon, or somehow otherwise get the coordinates of its vertices.
I can use these coordinates on the original image to isolate a PNG of the paper and nothing else.
I would also highly appreciate it if you could provide answers to any of these three sub-questions:
- How do you find the locations of the intersections of these lines on the image? (See the sketch after this list.)
- How would I get rid of any lines that don't form the central trapezoidal polygon?
- With these points, is there anything better than a convex hull that would let me keep only the trapezoidal/rectangular region of the image?
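For the first sub-question, here is a minimal sketch of intersecting two lines given in Hesse normal form (rho, theta), which is how OpenCV's HoughLines reports them:

```java
// Solves x*cos(t1) + y*sin(t1) = r1 and x*cos(t2) + y*sin(t2) = r2
// for the intersection point; returns null for (near-)parallel lines.
static double[] intersect(double rho1, double theta1, double rho2, double theta2) {
    double det = Math.cos(theta1) * Math.sin(theta2)
               - Math.sin(theta1) * Math.cos(theta2); // = sin(theta2 - theta1)
    if (Math.abs(det) < 1e-6) return null;            // parallel or nearly so
    double x = (rho1 * Math.sin(theta2) - rho2 * Math.sin(theta1)) / det;
    double y = (rho2 * Math.cos(theta1) - rho1 * Math.cos(theta2)) / det;
    return new double[] {x, y};
}
```

Intersections that fall far outside the image bounds can simply be discarded, which already removes many of the spurious lines.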
Here is another example, in which my program produced a better image:
I am creating a virtual reality app for Android, and I would like to generate a sphere in OpenGL for my purposes. As a first step, I found this thread (Draw Sphere - order of vertices), where the first answer gives a good tutorial on how to render the sphere offline. That same answer includes sample code for a sphere (http://pastebin.com/4esQdVPP), which I used in my app, and I successfully mapped a 2D texture onto it.
However, that sample sphere has a poor resolution and I would like to generate a better one, so I followed some Blender tutorials to generate the sphere, exported the .obj file, and simply took the point coordinates and indices and parsed them into Java code.
The problem is that, when the texture is applied, it looks broken at the poles of the sphere, while it looks fine on the rest of the sphere (please have a look at the following pictures).
I don't know what I'm doing wrong. Since the algorithm for mapping the texture should be the same, I guess the problem may be in the indices of the generated points. This is the algorithm I'm using for mapping the texture: https://en.wikipedia.org/wiki/UV_mapping#Finding_UV_on_a_sphere
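For reference, a minimal Java sketch of that UV formula, assuming unit-length vertex positions on a sphere centered at the origin (axis conventions differ between tools, so you may need to swap or negate coordinates):

```java
// Returns {u, v} for a unit-sphere vertex (x, y, z), following the
// formula from the Wikipedia article linked above.
static float[] sphereUV(float x, float y, float z) {
    float u = 0.5f + (float) (Math.atan2(z, x) / (2 * Math.PI));
    float v = 0.5f - (float) (Math.asin(y) / Math.PI);
    return new float[] {u, v};
}
```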
This is the .obj file autogenerated with blender: http://pastebin.com/uP3ndM2d
And from there, we extract the indices and the coordinates:
This is the point index: http://pastebin.com/rt1QjcaX
And this is the point coordinates: http://pastebin.com/h1hGwNfx
Could you give me some advice? Is there anything I am doing wrong?
First of all, determining the texture coordinates at (or even near) the poles needs to be handled with care. Using the UV-algorithm suggested for the s-coordinate at the pole will not give you what you want with the tessellation you chose (e.g., s = 0.5 + arctan2(1,0)/(2*pi) will be used for all points on the north pole). In the image below the M+1 vertices on the top row all represent the same vertex at the north pole -- each of these will have the same t-value but must have different s-values for the texture coordinates:
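For illustration, a minimal sketch of assigning those distinct s-values; M (the number of longitudinal segments), the texture-coordinate arrays, and the row offset are all assumptions about your mesh layout:

```java
// Give each of the M+1 duplicated north-pole vertices its own s-value,
// centered over the column of quads below it, instead of the degenerate
// atan2-based value. All names are placeholders for your own layout.
for (int i = 0; i <= M; i++) {
    s[northPoleRow + i] = (i + 0.5f) / M; // midpoint of the segment below
    t[northPoleRow + i] = 0.0f;           // top edge of the texture
}
```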
Second, this type of tessellation will cause aliasing problems near the poles, since the small distance between neighboring fragments produces a large difference in s-values. You should mitigate the aliasing as much as possible by using a mipmap filter. The following images show a Mercator projection of the earth and vertical red stripes textured onto the sphere (the stripes are a good test case):
A better sphere tessellation is to subdivide an icosahedron, which yields nearly equilateral triangles. Here is an example of a normal-mapped sphere that avoids these aliasing problems:
OK, the problem is solved now. The textures were not working properly because the generated point indices start at 1 instead of 0. Subtracting 1 from all indices solves the problem... :)
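For anyone hitting the same issue, the fix amounts to one line when parsing the OBJ faces (objIndices here stands for whatever array holds your parsed 1-based indices):

```java
// OBJ face indices are 1-based; OpenGL index buffers are 0-based.
short[] glIndices = new short[objIndices.length];
for (int i = 0; i < objIndices.length; i++) {
    glIndices[i] = (short) (objIndices[i] - 1); // shift to 0-based
}
```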
Is there a Java graphics library that will rasterize a triangle given the coordinates of the vertices?
I'm trying to analyse the pixel values of an image for the triangular region defined by three points. I have the pixel values in memory, so I just want to figure out which pixels are in the triangle and iterate through them. The order of iteration is irrelevant, so long as I visit each pixel once.
I've done some searching for algorithms, and I think I could implement my own code based on Triangle Rasterization for Dummies, or an Introduction to Software-based Rendering, but I'd feel stupid if I overlooked some library that already implements this.
I briefly looked at getting Java to talk to the GPU, but that seems to be too much hassle.
You can use the Polygon Shape to represent the triangle. Then use one of its contains() methods, passing a Point2D or just two double params.
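A minimal sketch of that approach, assuming integer vertex coordinates:

```java
import java.awt.Polygon;
import java.awt.Rectangle;

// Visits every pixel inside the triangle, scanning only the triangle's
// bounding box to keep the number of contains() tests small.
static void rasterizeTriangle(int x1, int y1, int x2, int y2, int x3, int y3) {
    Polygon triangle = new Polygon(new int[] {x1, x2, x3},
                                   new int[] {y1, y2, y3}, 3);
    Rectangle b = triangle.getBounds();
    for (int y = b.y; y <= b.y + b.height; y++) {
        for (int x = b.x; x <= b.x + b.width; x++) {
            if (triangle.contains(x, y)) {
                // visit pixel (x, y) here
            }
        }
    }
}
```

Note that contains() deliberately excludes pixels on some edges (Java's insideness definition), which conveniently keeps two triangles sharing an edge from both claiming the same pixels.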
I'm building an Android puzzle game where the user rotates and shifts pieces of a puzzle to form a final picture. It's a bit like a sliding block puzzle but the shape and size of pieces is not uniform - more like a sliding block version of tetris.
At the moment I've got puzzle pieces as imageViews which can be selected and moved around a view to position them. I've got the vector forms of the shapes behind the scenes as ArrayLists of Points.
But... I'm stuck on how to snap-align the pieces together, i.e., when a piece is near another, shift one piece so that the nearby edges overlay each other (essentially sharing a boundary).
I'm sure this has been done plenty of times, but I can't find examples with code (in any language). It's similar to snapping to a grid, but not the same, and is the same kind of functionality you get in a diagramming-type interface where you can snap objects to each other.
Can anyone point me toward a tutorial (any language), code, or advice on how to implement it?
Yours is like a Tangram game. I don't think it can be done with pieces of an image forming the final picture. It can be done by creating geometry shapes (for both the final shape and the pieces/slices of the final picture) using the android.graphics package. It's quite easy to determine the final shape from the edges and vertices of the pieces/slices.
http://code.google.com/p/photogaffe/ is worth checking out. It is an open-source sliding puzzle consisting of 15 pieces that allows the user to choose an image from their gallery.
You would only have to figure out your various shapes and how to rotate them. And if you are supplying your own images...how to load them.
Hope that helps.
What about drawing a box around each shape? Afterwards, you define its middle. Then you can store a rotation value for each piece, and you would need to store the neighbours together with a vector to their middles.
Then you simply have to check that the vector is in a reasonable range and the rotation is within ±X degrees. For example, if the vector is within ±10 pixels and the rotation within ±3°, you could rotate the piece and fit it into the puzzle.
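A minimal sketch of that test, assuming each piece stores its bounding-box center and rotation in degrees; the tolerances are just the illustrative values from above:

```java
import android.graphics.PointF;

static final float SNAP_DIST = 10f;  // pixels
static final float SNAP_ANGLE = 3f;  // degrees

// True if the piece's center and rotation are both within snapping
// tolerance of where its neighbour expects them to be.
static boolean shouldSnap(PointF actual, PointF expected,
                          float rotation, float expectedRotation) {
    float diff = Math.abs(rotation - expectedRotation) % 360;
    diff = Math.min(diff, 360 - diff); // shortest angular distance
    return Math.hypot(expected.x - actual.x, expected.y - actual.y) <= SNAP_DIST
            && diff <= SNAP_ANGLE;
}
```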
I'm pretty new to manually manipulating images, so please bear with me.
I have an image that I'm allowing the user to shrink/grow and move around.
The basic behavior works perfectly. However, I need to be able to grab whatever is in the "viewport" (visible clipping region rectangle) and save it out as a separate bitmap.
Before I can do this, I need to get a fix on WHERE the image actually is and what is being displayed. This is proving more tricky than I would have imagined.
My problem is that the Matrix documentation is absurdly vague, and I'm lost as to how I can measure the coordinates and dimensions of my transformed image. As I see it, the x, y of the image remain constant even as the user shrinks/grows it. So even though it reports being at 0,0, it's displayed at (say) 100,100. And the only way I can get those coordinates is with a fairly ugly computation (again... I'm probably not doing it the most elegant way, since geometry is not my forte).
I'm kind of hoping that I'm missing something and that there's some way to pull the object's auto translated coordinates and dimensions.
In an ideal world, I would be able to call (pseudo) myImg.getDisplayedWidth() and myImg.getDisplayedX().
Oh, and I should add that this may all be a problem that I'm causing myself by using the center of the image as the point from which to grow/shrink. If I left the default 0,0 coordinate as the non changing point, I think the location would be correct no matter what its size was. So... maybe the answer to all this is to simply figure out my center offset and apply that to my translations?
All help greatly appreciated (and people not arbitrarily messing with my question's title even more so!).
The Matrix method mapPoints(float[] dst, float[] src) can be used to get a series of transformed points by applying the Matrix transformation. Or, in (slightly) more layman's terms, an instance of the Matrix class contains not only the transformation itself but also convenience methods to apply it to a series of points.
So in your case, you just need the corner points of your untransformed Bitmap (from its x, y, width, and height) and can pass them into that method to get the transformed points.
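A minimal sketch, assuming matrix is the Matrix you draw the Bitmap with and w/h are its untransformed dimensions:

```java
// Map the four corners of the untransformed Bitmap through the Matrix.
float[] corners = {
    0, 0,  // top-left
    w, 0,  // top-right
    w, h,  // bottom-right
    0, h   // bottom-left
};
matrix.mapPoints(corners); // corners now holds the on-screen x,y pairs
```

If all you need is the axis-aligned bounding box of the result, Matrix.mapRect(RectF) does the equivalent for a rectangle in a single call.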