I'm trying to detect the positions of billiards balls on a table from an image taken at a perspective angle. I'm using the getPerspectiveTransform() method to find the transformation matrix, and I want to apply it only to the circles I detect using HoughCircles. I'm trying to go from a rather large trapezoidal shape to a smaller rectangular shape. I don't want to transform the image first and then run HoughCircles, because the warped image is too distorted for HoughCircles to provide useful results.
Here's my code:
CvMat mmat = cvCreateMat(3,3,CV_32FC1);
double srcX1 = 462;
double srcX2 = 978;
double srcX3 = 1440;
double srcX4 = 0;
double srcY = 241;
double srcHeight = 772;
double dstX = 56.8;
double dstY = 33.5;
double dstWidth = 262.4;
double dstHeight = 447.3;
CvSeq seq = cvHoughCircles(newGray, circles, CV_HOUGH_GRADIENT, 2.1d, (double)newGray.height()/40, 85d, 65d, 5, 50);
JavaCV.getPerspectiveTransform(new double[]{srcX1, srcY, srcX2,srcY, srcX3, srcHeight, srcX4, srcHeight},
new double[]{dstX, dstY, dstWidth, dstY, dstWidth, dstHeight, dstX, dstHeight}, mmat);
cvWarpPerspective(seq, seq, mmat);
for (int j = 0; j < seq.total(); j++) {
    CvPoint3D32f point = new CvPoint3D32f(cvGetSeqElem(seq, j));
    float xyr[] = { point.x(), point.y(), point.z() };
    CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
    int radius = Math.round(xyr[2]);
    cvCircle(gray, center, 3, CvScalar.GREEN, -1, 8, 0);
    cvCircle(gray, center, radius, CvScalar.BLUE, 3, 8, 0);
}
The problem is I get this error on the warpPerspective() method:
error: (-215) seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size in function cv::Mat cv::cvarrToMat(const CvArr*, bool, bool, int)
Also I guess it's worth mentioning that I'm using JavaCV, in case the method calls look a bit different than what you're used to. Thanks for any help.
Answer:
The problem with what you want to do (besides the obvious: OpenCV won't let you) is that the radius can't really be warped correctly. The x,y coordinates are easy enough to calculate:
x' = (m00*x + m01*y + m02) / (m20*x + m21*y + m22)
y' = (m10*x + m11*y + m12) / (m20*x + m21*y + m22)
where m is the transformation matrix and mij denotes the entry at row i, column j, so m01*y means m(0,1)*y (just to clarify). The radius you can hack by transforming all the points of the original circle and then finding the maximum distance between (x', y') and those transformed points (at least if the radius in the warped image is expected to cover all those points).
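In code, applying that per-point formula yourself is straightforward. Here is a minimal plain-Java sketch (warpPoint and the row-major m array are illustrative, not part of any OpenCV API; you could fill m by reading mmat with e.g. cvGetReal2D(mmat, row, col)):
// Apply a 3x3 perspective matrix to a single (x, y) point.
// 'm' holds the 9 matrix entries in row-major order.
static double[] warpPoint(double x, double y, double[] m) {
    double w  = m[6] * x + m[7] * y + m[8];        // denominator: m20*x + m21*y + m22
    double xp = (m[0] * x + m[1] * y + m[2]) / w;  // x'
    double yp = (m[3] * x + m[4] * y + m[5]) / w;  // y'
    return new double[] { xp, yp };
}
You could run each circle center from the CvSeq through this instead of calling cvWarpPerspective on the sequence.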
End Answer.
Everything I write refers to the C++ version; I've never used JavaCV, but from what I can see it is just a wrapper that calls the native C++ library.
CvSeq is a sequence data structure that behaves like a linked list.
The assert your application crashes on is:
CV_Assert(seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size);
which means that either your seq instance is empty (total is the number of elements in the sequence) or somehow the inner seq flags are corrupted.
I'd recommend that you check the total member of your CvSeq, and the cvHoughCircles call.
All of this occurs before the actual implementation of cvWarpPerspective (it's the first line in the implementation, which only converts your CvSeq to cv::Mat), so it's not the warping but what you're doing before it.
Anyway, to understand what's wrong with cvHoughCircles we'd need more info about the creation of newGray and circles.
Here is an example I've found on the JavaCV page (Link):
IplImage gray = cvCreateImage( cvSize( img.width, img.height ), IPL_DEPTH_8U, 1 );
cvCvtColor( img, gray, CV_RGB2GRAY );
// smooth it, otherwise a lot of false circles may be detected
cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9, 2, 2);
CvMemStorage circles = CvMemStorage.create();
CvSeq seq = cvHoughCircles(gray, circles.getPointer(), CV_HOUGH_GRADIENT,
        2, img.height/4, 100, 100, 0, 0);
for (int i = 0; i < seq.total; i++) {
    float xyr[] = cvGetSeqElem(seq, i).getFloatArray(0, 3);
    CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
    int radius = Math.round(xyr[2]);
    cvCircle(img, center.byValue(), 3, CvScalar.GREEN, -1, 8, 0);
    cvCircle(img, center.byValue(), radius, CvScalar.BLUE, 3, 8, 0);
}
From what I've seen in the implementation of cvHoughCircles, the result is saved in the circles buffer, and at the end the returned CvSeq is created from it; so if you've allocated the circles buffer wrong, it won't work.
EDIT:
As you can see, the CvSeq returned from cvHoughCircles is a list of point values; that is probably why the assertion failed. You cannot convert this CvSeq into a cv::Mat, because it simply isn't one. To get only the circles returned from cvHoughCircles into a cv::Mat instance, you'll need to create a new cv::Mat and then draw all the circles from the CvSeq onto it, as seen in the example above.
Then the warping will work (you'll have a cv::Mat instance, and that is what the function expects: a cv::Mat as the only element in the CvSeq).
END EDIT
Here is the C++ reference for CvSeq, and if you want to fiddle with the source code:
cvarrToMat is in matrix.cpp
CV_ELEM_SIZE is in types_c.h
cvWarpPerspective is in imgwarp.cpp
cvHoughCircles is in hough.cpp
I hope that helps.
BTW, your next error will probably be:
cv::warpPerspective in the C++ OpenCV asserts that dst.data != src.data
thus
cvWarpPerspective(seq, seq, mmat);
won't work, because your source mat and destination mat reference the same data.
Not all functions in OpenCV (and image processing in general) work in place, either because no in-place algorithm exists or because it would be slower than the out-of-place version (e.g. transposing an n*n mat works in place, but n*m where n != m is harder to do in place and might be slower).
You can't assume that using the src matrix as the dst will work.
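A minimal sketch of the fix, assuming you are warping an image (names are placeholders; depending on your JavaCV version you may need to pass the extra flags/fillval arguments explicitly):
// allocate a separate destination so src and dst don't share data
IplImage warped = cvCreateImage(cvGetSize(srcImage), srcImage.depth(), srcImage.nChannels());
cvWarpPerspective(srcImage, warped, mmat);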
Related
Please bear with me, I am new to Android and Java.
I have used this code, but I don't know the meaning of these four variables:
Imgproc.connectedComponentsWithStats(binarized, labeled, rectComponents, centComponents);
Mat tmp = new Mat(bmp.getHeight(), bmp.getWidth(), CvType.CV_8U); // Mat takes (rows, cols), i.e. (height, width)
Utils.bitmapToMat(bmp, tmp);
Imgproc.cvtColor(tmp, tmp, Imgproc.COLOR_RGB2GRAY);
Imgproc.threshold(tmp, tmp, 40, 255, Imgproc.THRESH_BINARY);
Imgproc.GaussianBlur(tmp, tmp, new org.opencv.core.Size(5, 5), 0 , 0);
Imgproc.threshold(tmp,tmp,130,255,Imgproc.THRESH_OTSU);
Utils.matToBitmap(tmp, bmp);
imageView.setImageBitmap(bmp);
I want to get connected components with stats of tmp.
Thanks guys, I am answering my own question. I have found how to do it:
int a = Imgproc.connectedComponentsWithStats(image, labels, stats, centroid);
This function returns an integer, which is the number of connected components in the image (an OpenCV Mat) passed as the first argument.
labels is a Mat of the same size as the input image, in which the value of each pixel is the label of that pixel in the original image.
centroid holds the centroid (x, y) of each label.
stats tells us the area of each label and its position (leftmost pixel, topmost pixel, width, height).
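To make that concrete, here is a small sketch of reading those outputs with the OpenCV Java API (label 0 is the background; variable names are illustrative):
Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
int n = Imgproc.connectedComponentsWithStats(binarized, labels, stats, centroids);
for (int i = 1; i < n; i++) { // skip label 0, the background
    int left   = (int) stats.get(i, Imgproc.CC_STAT_LEFT)[0];   // leftmost pixel
    int top    = (int) stats.get(i, Imgproc.CC_STAT_TOP)[0];    // topmost pixel
    int width  = (int) stats.get(i, Imgproc.CC_STAT_WIDTH)[0];
    int height = (int) stats.get(i, Imgproc.CC_STAT_HEIGHT)[0];
    int area   = (int) stats.get(i, Imgproc.CC_STAT_AREA)[0];
    double cx  = centroids.get(i, 0)[0]; // centroid x
    double cy  = centroids.get(i, 1)[0]; // centroid y
    // use the bounding box / area / centroid of component i here
}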
I'm working on an Android app which determines which font is used in a text image. So I need to extract every character from the image, and I don't know how to do it precisely. Furthermore, when I process an image I get one result... but my classmate gets a different one (for example, more or less noise). The problems with character detection are:
1) it also detects some noise blobs in the image and shows them in rectangles (I thought about detectMultiScale... but I have doubts about it; maybe there are easier ways to detect characters)
2) it detects several contours for one character (for example the inner and outer outlines of the letter "o")
And a question for the future: I'm going to create a DB with images (for now just 3 fonts) of different letters and compare them with an image of letters from a photo. Maybe someone could recommend a better way to do it.
So this is the part of the code with image processing (I'm still playing with the values for blur, threshold and Canny... but there has been no really positive result):
Imgproc.cvtColor(sImage, grayImage, Imgproc.COLOR_BGR2GRAY); // convert to grayscale
Imgproc.GaussianBlur(grayImage, blurImage, new Size(5, 5), 0); // blur
Imgproc.adaptiveThreshold(blurImage, thresImage, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 101, 39);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.Canny(thresImage, binImage, 30, 10, 3, true); // edge detection
Imgproc.findContours(binImage, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
hierarchy.release();
Imgproc.drawContours(binImage, contours, -1, new Scalar(255, 255, 255)); //, 2, 8, hierarchy, 0, new Point());
MatOfPoint2f approxCurve = new MatOfPoint2f();
// For each contour found
for (int i = 0; i < contours.size(); i++) {
    // Convert contours(i) from MatOfPoint to MatOfPoint2f
    MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
    // Processing on mMOP2f1 which is in type MatOfPoint2f
    double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
    Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
    // Convert back to MatOfPoint
    MatOfPoint points = new MatOfPoint(approxCurve.toArray());
    // Get bounding rect of contour
    Rect rect = Imgproc.boundingRect(points);
    // Draw enclosing rectangle (all same color, but you could use variable i to make them unique)
    Imgproc.rectangle(binImage, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 255, 255), 5);
}
And a screenshot (not actually with the processing values from the code, just one with better results):
Original:
(unfortunately, I can't add more than 2 links to show more examples)
There were situations when the picture from this screen looked pretty good, but other pictures came out as shapeless blobs.
Your code is fine; you just need to make a few minor tweaks to get it working properly.
Firstly, the image size is very large; you can safely reduce it to 20% of the current size without suffering a major loss in accuracy. Due to the larger image size, all the functions perform more slowly.
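For example, a quick downscaling sketch (INTER_AREA is a sensible interpolation for shrinking; sImage is the source Mat from your code):
Mat small = new Mat();
Imgproc.resize(sImage, small, new Size(), 0.2, 0.2, Imgproc.INTER_AREA); // 20% of the original size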
You don't need to perform an adaptive threshold before Canny; Canny works perfectly well on grayscale images too. You just need to adjust the parameters, e.g. (written here in OpenCV Java form, using the question's variable names):
Imgproc.Canny(grayImage, binImage, 170, 250);
which yields an image like this:
[Optional] If you want to de-noise the image, you can try morphological operations like erode and dilate.
Now you are ready to find the contours. The mistake in your code was using the Imgproc.RETR_TREE flag; you need to use the Imgproc.RETR_EXTERNAL flag to get only the outer contours, not the nested inner ones.
At this step you may still have some unwanted small contours, which can be filtered out like this:
// Note: reference code only; consult the OpenCV docs for the exact API methods
int characterAreaLowerThresh = 10;
for (MatOfPoint c : contours) {
    if (Imgproc.contourArea(c) > characterAreaLowerThresh) {
        // desired contour, do whatever you want to do with it
        Rect r = Imgproc.boundingRect(c);
    }
}
What is the most efficient way to do lighting for a tile based engine in Java?
Would it be putting a black background behind the tiles and changing the tiles' alpha?
Or putting a black foreground and changing the alpha of that? Or anything else?
This is an example of the kind of lighting I want:
There are many ways to achieve this. Take some time before making your final decision. I will briefly sum up some techniques you could use and provide some code at the end.
Hard Lighting
If you want to create a hard-edge lighting effect (like your example image),
some approaches come to my mind:
Quick and dirty (as you suggested)
Use a black background
Set the tiles' alpha values according to their darkness value
A problem is that you can neither make a tile brighter than it was before (highlights) nor change the color of the light. Both of these are aspects that usually make lighting in games look good.
A second set of tiles
Use a second set of (black/colored) tiles
Lay these over the main tiles
Set each new tile's alpha value depending on how strong the new color should be there.
This approach has the same effect as the first one, with the advantage that you can now color the overlay tile in a color other than black, which allows for both colored lights and highlights.
Example:
Even though it is easy, a problem is that this is indeed a very inefficient way (two rendered tiles per tile, constant recoloring, many render operations, etc.).
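For illustration, a minimal Java2D sketch of this per-tile overlay (tileImage, lightLevel, tileSize and the Graphics2D g2 are all hypothetical names):
g2.drawImage(tileImage, x, y, null);
float darkness = 1f - lightLevel; // lightLevel in [0..1]
g2.setColor(new Color(0f, 0f, 0f, darkness)); // translucent black; use an orange tint for colored light
g2.fillRect(x, y, tileSize, tileSize);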
More Efficient Approaches (Hard and/or Soft Lighting)
When looking at your example, I imagine the light always comes from a specific source tile (character, torch, etc.).
For every type of light (big torch, small torch, character lighting), you create an image that represents the specific lighting behaviour relative to the source tile (a light mask). Maybe something like this for a torch (white being alpha):
For every tile which is a light source, you render this image at the position of the source as an overlay.
To add a bit of light color, you can use e.g. 10% opaque orange instead of full alpha.
Results
Adding soft light
Soft light is no big deal now: just use more detail in the light mask than in the tiles. By using only 15% alpha in the usually black region, you can add a low-sight effect for tiles that are not lit:
You can even easily achieve more complex lighting shapes (cones etc.) just by changing the mask image.
Multiple light sources
When combining multiple light sources, this approach leads to a problem:
Drawing two masks which intersect each other might cancel each other out:
What we want is for them to add their light instead of subtracting it.
Avoiding the problem:
Invert all light masks (with alpha being dark areas, opaque being light ones)
Render all these light masks into a temporary image which has the same dimensions as the viewport
Invert and render the new image (as if it was the only light mask) over the whole scenery.
This would result in something similar to this:
Code for the mask-invert method
Assuming you render all the tiles into a BufferedImage first, I'll provide some guidance code resembling the last method shown (grayscale support only).
Multiple light masks for e.g. a torch and a player can be combined like this:
public BufferedImage combineMasks(BufferedImage[] images)
{
    // create the new image, canvas size is the max. of all image sizes
    int w = 0, h = 0; // must be initialized before use
    for (BufferedImage img : images)
    {
        w = img.getWidth() > w ? img.getWidth() : w;
        h = img.getHeight() > h ? img.getHeight() : h;
    }
    BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    // paint all images, preserving the alpha channels
    Graphics g = combined.getGraphics();
    for (BufferedImage img : images)
        g.drawImage(img, 0, 0, null);
    g.dispose();
    return combined;
}
The final mask is created and applied with this method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
    int width = image.getWidth();
    int height = image.getHeight();
    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);
    for (int i = 0; i < imagePixels.length; i++)
    {
        int color = imagePixels[i] & 0x00ffffff; // mask out the preexisting alpha
        // read the mask's gray value (the blue channel is enough for grayscale);
        // be careful, an alpha mask works the other way round, so subtract it from 255
        int alpha = 255 - (maskPixels[i] & 0xff);
        imagePixels[i] = color | (alpha << 24); // shift the new alpha into place
    }
    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
As noted, this is a primitive example. Implementing color blending might be a bit more work.
Raytracing might be the simplest approach.
You can store which tiles have been seen (used for automapping, for 'remembering your map while being blinded', maybe for the minimap, etc.)
You show only what you see: maybe a monster, a wall or a hill is blocking your view; then raytracing stops at that point.
Distant 'glowing objects' or other light sources (torches, lava) can be seen even if your own light source doesn't reach very far.
The length of your ray is used to compute the amount of light (fading light).
Maybe you have a special sensor (ESP, gold/food detection) which could be used to find objects that are not in your view? Raytracing might help there as well ^^
How is this done easily?
Draw a line from your player to every point on the border of your map (using Bresenham's line algorithm: http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm)
Walk along that line (from your character to the end) until your view is blocked; at that point stop your search (or maybe do one last iteration to see what stopped you)
For each point on your line, set the lighting (maybe 100% for distance 1, 70% for distance 2 and so on) and mark the map tile as visited
Maybe you won't walk along the whole map; maybe it's enough to restrict your raytrace to a 20x20 view?
NOTE: you really only have to walk along the borders of the viewport; it's NOT required to trace a line to every single tile.
I'm adding the line algorithm to simplify your work:
public static ArrayList<Point> getLine(Point start, Point target) {
    ArrayList<Point> ret = new ArrayList<Point>();
    int x0 = start.x;
    int y0 = start.y;
    int x1 = target.x;
    int y1 = target.y;
    int dx = Math.abs(x1 - x0);
    int sx = x0 < x1 ? 1 : -1;
    int dy = -1 * Math.abs(y1 - y0);
    int sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, e2; /* error value e_xy */
    for (;;) { /* loop */
        ret.add(new Point(x0, y0));
        if (x0 == x1 && y0 == y1) break;
        e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } /* e_xy+e_x > 0 */
        if (e2 <= dx) { err += dx; y0 += sy; } /* e_xy+e_y < 0 */
    }
    return ret;
}
I did this whole lighting stuff (plus A* pathfinding) some time ago; feel free to ask further questions.
Addendum:
Maybe I should simply add the small algorithms for the raytracing ^^
To get the north and south border points, just use this snippet:
for (int x = 0; x < map.WIDTH; x++) {
    Point northBorderPoint = new Point(x, 0);
    Point southBorderPoint = new Point(x, map.HEIGHT);
    rayTrace(getLine(player.getPos(), northBorderPoint), map, player.getLightRadius());
    rayTrace(getLine(player.getPos(), southBorderPoint), map, player.getLightRadius());
}
and the raytrace works like this:
private static void rayTrace(ArrayList<Point> line, WorldMap map, int radius) {
    // radius = radius of the light source
    for (Point p : line) {
        float d = distance(line.get(0), p);
        // calculate the light falloff linearly from 100% down to 0%
        float amountLight = (radius - d) / radius;
        if (amountLight < 0) {
            amountLight = 0;
        }
        map.setLight(p, amountLight);
        if (map.isViewBlocked(p)) { // view can be blocked by a wall or a monster
            break; // stop tracing at the first blocking tile
        }
    }
}
I've been into indie game development for about three years now. The way I would do this is, first of all, by using OpenGL, so you get all the benefits of the graphical computing power of the GPU (hopefully you are already doing that). Suppose we start off with all tiles in a VBO, entirely lit. Now, there are several options for achieving what you want; depending on how complex your lighting system is, you can choose a different approach.
If your light is going to be circular around the player, regardless of whether obstacles would block the light in real life, you could choose a lighting algorithm implemented in the vertex shader. In the vertex shader, you can compute the distance from the vertex to the player and apply some function that defines how bright things should be as a function of that distance. Do not use alpha; just multiply the color of the texture/tile by the lighting value.
If you want to use a custom lightmap (which is more likely), I would suggest adding an extra vertex attribute that specifies the brightness of the tile, and updating the VBO when needed. The same approach goes here: multiply the pixel of the texture by the light value. If you are filling light recursively with the player position as the starting point, then you would update the VBO every time the player moves.
If your lightmap depends on where the sunlight hits your level, you could combine two lighting techniques. Create one vertex attribute for the sun brightness and another for the light emitted by light points (like a torch held by the player). Now you can combine those two values in the vertex shader. Suppose your sun comes up and goes down in a day/night pattern. Say the sun brightness is sun, a value between 0 and 1, passed to the vertex shader as a uniform. The vertex attribute that represents the sun brightness is s, and the one for light emitted by light points is l. Then you could compute the total light for that tile like this:
tileBrightness = max(s * sun, l + flicker);
Where flicker (also a vertex shader uniform) is some kind of waving function that represents the little variants in the brightness of your light points.
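For clarity, here is the same combine rule sketched as plain Java (in the real approach this runs per-vertex in the shader; all names are illustrative):
float tileBrightness(float s, float l, float sun, float flicker) {
    // sunlight contribution vs. point-light contribution, whichever is brighter
    return Math.max(s * sun, l + flicker);
}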
This approach makes the scene dynamic without having to continuously recreate VBOs. I implemented this approach in a proof-of-concept project and it works great. You can check out what it looks like here: http://www.youtube.com/watch?v=jTcNitp_IIo. Note how the torchlight flickers at 0:40 in the video. That is done by what I explained here.
Hey all, I'm trying to implement 3D picking in my program, and it works perfectly as long as I don't move from the origin; it is perfectly accurate. But if I move the model matrix away from the origin (the view matrix eye is still at 0,0,0), the picking vectors are still drawn from the original location. They should still be drawn from the view matrix eye (0,0,0), but they aren't. Here's some of my code, in case you can see why:
Vector3d near = unProject(x, y, 0, mMVPMatrix, this.width, this.height);
Vector3d far = unProject(x, y, 1, mMVPMatrix, this.width, this.height);
Vector3d pickingRay = far.subtract(near);
//pickingRay.z *= -1;
Vector3d normal = new Vector3d(0, 0, 1);
if (normal.dot(pickingRay) != 0 && pickingRay.z < 0)
{
    float t = (-5f - normal.dot(mCamera.eye)) / (normal.dot(pickingRay));
    pickingRay = mCamera.eye.add(pickingRay.scale(t));
    addObject(pickingRay.x, pickingRay.y, pickingRay.z + .5f, Shape.BOX);
    // a line for the picking vector, for debugging
    PrimProperties a = new PrimProperties(); // new prim properties for size and center
    Prim result = new Line(a, mCamera.eye, far); // new line object for seeing the look-at vector
    result.createVertices();
    objects.add(result);
}
public static Vector3d unProject(
        float winx, float winy, float winz,
        float[] resultantMatrix,
        float width, float height)
{
    winy = height - winy;
    float[] m = new float[16],
            in = new float[4],
            out = new float[4];
    Matrix.invertM(m, 0, resultantMatrix, 0);
    in[0] = (winx / width) * 2 - 1;
    in[1] = (winy / height) * 2 - 1;
    in[2] = 2 * winz - 1;
    in[3] = 1;
    Matrix.multiplyMV(out, 0, m, 0, in, 0);
    if (out[3] == 0)
        return null;
    out[3] = 1 / out[3];
    return new Vector3d(out[0] * out[3], out[1] * out[3], out[2] * out[3]);
}
Matrix.translateM(mModelMatrix, 0, this.diffX, this.diffY, 0); // I use this to move the model matrix based on pinch-zoom gestures
Any help would be greatly appreciated! Thanks.
I wonder which algorithm you have implemented. Is it a ray-casting approach to the problem?
I didn't focus much on the code itself, but this looks like far too simple an implementation to be a fully operational ray-casting solution.
In my humble experience, and depending on the complexity of your final project (which I don't know), I would suggest you adopt a color-picking solution.
This solution is usually the most flexible and the easiest to implement.
It consists of rendering the objects in your scene with unique flat colors (usually with lighting disabled in your shaders as well) to a back buffer... a texture; then you take the coordinates of the click (touch) and read the color of the pixel at those coordinates.
Having the color of the pixel and a table of the colors of the different objects you rendered makes it possible to tell what the user clicked on from a logical perspective.
There are other approaches to the object-picking problem; this one is probably universally recognized as the fastest.
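A minimal sketch of the readback step on Android (assuming the scene was just rendered with flat per-object ID colors; touchX, touchY and viewportHeight are illustrative):
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
// glReadPixels uses a bottom-left origin, so flip the touch y coordinate
GLES20.glReadPixels((int) touchX, viewportHeight - (int) touchY, 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
int r = pixel.get(0) & 0xff, g = pixel.get(1) & 0xff, b = pixel.get(2) & 0xff;
int pickedId = (r << 16) | (g << 8) | b; // look this id up in your object color table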
Cheers
Maurizio
Is there an easy way to rotate a picture around its center? I used an AffineTransformOp first. It seemed simple and neat, and finding the right parameters for the matrix should have been a matter of one nice and neat google session. So I thought...
My result is this:
public class RotateOp implements BufferedImageOp {

    private double angle;
    AffineTransformOp transform;

    public RotateOp(double angle) {
        this.angle = angle;
        double rads = Math.toRadians(angle);
        double sin = Math.sin(rads);
        double cos = Math.cos(rads);
        // how to use the last 2 parameters?
        transform = new AffineTransformOp(new AffineTransform(cos, sin, -sin,
                cos, 0, 0), AffineTransformOp.TYPE_BILINEAR);
    }

    public BufferedImage filter(BufferedImage src, BufferedImage dst) {
        return transform.filter(src, dst);
    }
}
Really simple, if you ignore the cases of rotating by multiples of 90 degrees (which sin() and cos() don't handle exactly). The problem with that solution is that it transforms around the (0,0) coordinate point in the upper left corner of the picture, not around the center of the picture, which is what one would normally expect. So I added some stuff to my filter:
public BufferedImage filter(BufferedImage src, BufferedImage dst) {
    // don't let all that confuse you
    // with the documentation it is all (as) sound and clear (as this library gets)
    AffineTransformOp moveCenterToPointZero = new AffineTransformOp(
            new AffineTransform(1, 0, 0, 1, (int) (-(src.getWidth() + 1) / 2), (int) (-(src.getHeight() + 1) / 2)),
            AffineTransformOp.TYPE_BILINEAR);
    AffineTransformOp moveCenterBack = new AffineTransformOp(
            new AffineTransform(1, 0, 0, 1, (int) ((src.getWidth() + 1) / 2), (int) ((src.getHeight() + 1) / 2)),
            AffineTransformOp.TYPE_BILINEAR);
    return moveCenterBack.filter(transform.filter(moveCenterToPointZero.filter(src, dst), dst), dst);
}
My thinking here was that the shape-changing part should be the identity matrix (is that the right English word?) and that the vector that moves the whole picture around sits in the last 2 entries. My solution first makes the picture bigger and then smaller again (which shouldn't really matter that much; reason unknown!!!) and also cuts away around 3/4 of the picture (which matters a lot; the reason is probably that the picture is moved outside the visible "from (0,0) to (width, height)" region of the image).
Through all the mathematics I am not so well trained in, all the rounding errors the computer makes while calculating, and everything else that doesn't get into my head so easily, I don't know how to go further. Please give advice. I want to rotate the picture around its center, and I want to understand AffineTransformOp.
If I understand your question correctly, you can translate to the origin, rotate, and translate back, as shown in this example.
As you are using AffineTransformOp, this example may be more apropos. In particular, note the last-specified-first-applied order in which operations are concatenated; they are not commutative.
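For reference, a compact sketch of that translate-rotate-translate chain; AffineTransform.getRotateInstance(theta, x, y) concatenates exactly those three steps (angle and src are assumed from the question's RotateOp; corners that rotate outside the original bounds will still be clipped unless you enlarge the destination):
double rads = Math.toRadians(angle);
AffineTransform rotateAroundCenter = AffineTransform.getRotateInstance(
        rads, src.getWidth() / 2.0, src.getHeight() / 2.0);
BufferedImageOp op = new AffineTransformOp(rotateAroundCenter, AffineTransformOp.TYPE_BILINEAR);
BufferedImage rotated = op.filter(src, null); // null dst: a compatible destination is created for you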