Getting negative DPI of an image in Java

Getting negative imageXScale and imageYScale for some PDFs
while converting PDF to image and finding the DPI.
The jars used are PDFBox 1.8.8 and iText.
Found image Im0
position=602.64,451.08
size=837px,626px size=-212.59799mm,-159.131mm
The position, which should be 0, has some value.
I am unable to detect the problem.

The OP mentions he uses PDFBox 1.8.8 and iText but offers no further indication of how he retrieves values from his PDF using either of these libraries.
Considering the names imageXScale and imageYScale and the position and size outputs, I assume he has used the PrintImageLocations PDFBox example.
The meaning of the PrintImageLocations outputs
This sample produces the following outputs for a bitmap image drawn somewhere on a page:
System.out.println("Found image [" + objectName.getName() + "]");
The name of the image resource
Matrix ctmNew = getGraphicsState().getCurrentTransformationMatrix();
float imageXScale = ctmNew.getScalingFactorX();
float imageYScale = ctmNew.getScalingFactorY();
// position in user space units. 1 unit = 1/72 inch at 72 dpi
System.out.println("position in PDF = " + ctmNew.getTranslateX() + ", " + ctmNew.getTranslateY() + " in user space units");
Position of the anchor point, i.e. where the original bottom left corner of the image is drawn on the page.
// raw size in pixels
System.out.println("raw image size = " + imageWidth + ", " + imageHeight + " in pixels");
The original width and height of the image resource in pixels. Always non-negative.
// displayed size in user space units
System.out.println("displayed size = " + imageXScale + ", " + imageYScale + " in user space units");
The width and height of the image as drawn on the page. Negative values may mean that the image resource is not drawn right and up from the anchor point but instead left and down.
// displayed size in inches at 72 dpi rendering
imageXScale /= 72;
imageYScale /= 72;
System.out.println("displayed size = " + imageXScale + ", " + imageYScale + " in inches at 72 dpi rendering");
The width and height of the image as drawn on the page in inches assuming a user space unit width of 1/72nd inch, the default. Negative values may occur, see above.
// displayed size in millimeters at 72 dpi rendering
imageXScale *= 25.4;
imageYScale *= 25.4;
System.out.println("displayed size = " + imageXScale + ", " + imageYScale + " in millimeters at 72 dpi rendering");
The width and height of the image as drawn on the page in mm assuming a user space unit width of 1/72nd inch, the default. Negative values may occur, see above.
Thus, negative values here have a meaning (a mirroring or a 180° rotation) which makes no difference with respect to any DPI properties. So to calculate a DPI value, use the absolute values only and ignore the signs.
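The resulting DPI calculation can be sketched like this (ImageDpi and dpiFromMm are made-up names for illustration, not PDFBox API; the numbers are the ones from the question's output):

```java
public class ImageDpi {
    /**
     * Effective DPI of an image drawn on a PDF page, from its raw pixel
     * width and its displayed width in millimeters. The absolute value
     * discards the sign, which only encodes mirroring/rotation.
     */
    static double dpiFromMm(int rawPixels, double displayedMm) {
        double inches = Math.abs(displayedMm) / 25.4;
        return rawPixels / inches;
    }

    public static void main(String[] args) {
        // 837 px drawn at -212.598 mm (the question's values) -> ~100 DPI
        System.out.println(dpiFromMm(837, -212.598));
    }
}
```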
Inconsistency in PDFBox
The x and y scaling factors used above are derived from the current transformation matrix like this:
/**
 * Returns the x-scaling factor of this matrix. This is calculated from the scale and shear.
 *
 * @return The x-scaling factor.
 */
public float getScalingFactorX()
{
    float xScale = single[0];

    /**
     * BM: if the trm is rotated, the calculation is a little more complicated
     *
     * The rotation matrix multiplied with the scaling matrix is:
     * ( x   0   0)    ( cos  sin  0)    ( x*cos  x*sin  0)
     * ( 0   y   0) *  (-sin  cos  0)  = (-y*sin  y*cos  0)
     * ( 0   0   1)    ( 0    0    1)    ( 0      0      1)
     *
     * So, if you want to deduce x from the matrix you take
     * M(0,0) = x*cos and M(0,1) = x*sin and use the theorem of Pythagoras
     *
     * sqrt(M(0,0)^2 + M(0,1)^2) =
     * sqrt(x^2*cos^2 + x^2*sin^2) =
     * sqrt(x^2*(cos^2 + sin^2)) =  <- here is the trick, cos^2 + sin^2 is one
     * sqrt(x^2) =
     * abs(x)
     */
    if( !(single[1]==0.0f && single[3]==0.0f) )
    {
        xScale = (float)Math.sqrt(Math.pow(single[0], 2)+
                                  Math.pow(single[1], 2));
    }
    return xScale;
}
(Excerpt from Matrix.java)
While obviously someone spent some thought on this (look at the comment!), the implementation is somewhat inconsistent:
If there are non-zero values in single[1] or single[3], the calculation in the if block results in a non-negative method result.
For zero values in both single[1] and single[3], though, single[0] is returned as-is, which may be negative.
A consistent implementation would either always remove the sign or always try to determine a meaningful sign.
Furthermore, the calculation is somewhat simplistic, as it only considers transformation matrices that can be written as the product of a scaling and a rotation. These are very common types, but by far not all possible ones.
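A sketch of the "always remove the sign" option (this is not PDFBox code; single is assumed to be the row-major matrix array as in Matrix.java):

```java
public class ConsistentScaling {
    /**
     * Always non-negative x-scaling factor: sqrt(m00^2 + m01^2) reduces
     * to abs(m00) when the rotation/shear entries are zero, so the two
     * branches of the original implementation collapse into one formula.
     */
    static float getScalingFactorX(float[] single) {
        return (float) Math.sqrt(single[0] * single[0] + single[1] * single[1]);
    }

    public static void main(String[] args) {
        // A pure scaling matrix with a negative x scale: the factor is |-2| = 2.
        float[] m = { -2f, 0f, 0f,   0f, 3f, 0f,   0f, 0f, 1f };
        System.out.println(getScalingFactorX(m)); // 2.0
    }
}
```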

Related

Where does the 1/72 PDFBox value come from?

I'm extracting images from a PDF page using PDFBox. In the example I used as a basis (PrintImageLocations), the value of 72 dpi is used for the calculation. My question is: where does this value 72 come from?
// position in user space units. 1 unit = 1/72 inch at 72 dpi
System.out.println("position in PDF = " + ctmNew.getTranslateX() + ", " + ctmNew.getTranslateY() + " in user space units");
// raw size in pixels
System.out.println("raw image size = " + imageWidth + ", " + imageHeight + " in pixels");
// displayed size in user space units
System.out.println("displayed size = " + imageXScale + ", " + imageYScale + " in user space units");
// displayed size in inches at 72 dpi rendering
imageXScale /= 72;
imageYScale /= 72;
System.out.println("displayed size = " + imageXScale + ", " + imageYScale + " in inches at 72 dpi rendering");
// displayed size in millimeters at 72 dpi rendering
imageXScale *= 25.4;
imageYScale *= 25.4;
System.out.println("displayed size = " + imageXScale + ", " + imageYScale + " in millimeters at 72 dpi rendering");
Not the most technical of answers... but it's been a "standard" for some time... one that is arbitrary and rather silly... Here's an article that talks about its silliness:
https://petapixel.com/2020/02/13/why-wont-the-72dpi-myth-die/
PDF is closer to being a collection of pixels, like a bitmap, than it is to being a token-based document like a text file. So for sizing elements on the screen/page it has to assume a certain resolution... Because 72 dpi was so prevalent for images for so long, it makes sense that PDF followed suit.
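As a minimal illustration of that convention, converting user space units to physical sizes (the 595x842 page below is the common A4 media box, used here only as an example):

```java
public class UserSpaceUnits {
    /** Converts user space units to millimeters, assuming the default
     *  of 1 unit = 1/72 inch and 25.4 mm per inch. */
    static double unitsToMm(double units) {
        return units / 72.0 * 25.4;
    }

    public static void main(String[] args) {
        // A 595x842 unit media box comes out as roughly 210 x 297 mm (A4).
        System.out.printf("%.1f x %.1f mm%n", unitsToMm(595), unitsToMm(842));
    }
}
```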

JavaFX how to get the center of the X and Y axis - point [0,0] to be in the center of the image

My JavaFX application can load an image and allow the user to click on it and get the X and Y coordinates printed out.
The problem is that the upper left corner of the image is the "center" [0,0] instead of the actual center of the image. Any idea how to set the X and Y to be relative to the center?
Here is the code for loading the image:
BufferedImage bufferedImage = ImageIO.read(file);
Image image = SwingFXUtils.toFXImage(bufferedImage, null);
myImageView.setImage(image);
myImageView.setFitWidth(300);
myImageView.setPreserveRatio(true);
myImageView.setCache(true);
Here is the code for printing the location of the mouseClickedEvent:
myImageView.setOnMouseClicked(ev -> {
    System.out.println("[" + ev.getX() + ", " + ev.getY() + "]");
});
The red rectangle is the actual [0,0].
The blue rectangle is the expected [0,0].
Found the answer:
By calculating the offset between the actual origin and the expected center, I was able to get the desired X and Y coordinates:
double x = event.getX() - imageWidth / 2;
double y = (event.getY() * -1) + (imageLength / 2);
String msg ="[" + decimalFormat.format(x) + "," + decimalFormat.format(y) +"]";
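The offset arithmetic can be isolated into a small helper (a sketch; in the real handler, imageWidth and imageHeight would come from the ImageView's fitted bounds):

```java
public class CenteredCoords {
    /**
     * Converts view coordinates (origin in the top-left corner, y growing
     * downwards) to coordinates with the origin in the image center and
     * y growing upwards.
     */
    static double[] toCentered(double viewX, double viewY,
                               double imageWidth, double imageHeight) {
        double x = viewX - imageWidth / 2;
        double y = -viewY + imageHeight / 2;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // A click in the exact middle of a 300x200 view maps to [0, 0].
        double[] p = toCentered(150, 100, 300, 200);
        System.out.println("[" + p[0] + ", " + p[1] + "]");
    }
}
```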

vtkImageViewer2 - changing volume data slice spacing value

I want to change the slice thickness of my dicom volume data. I'm using
vtkImageViewer2.
For example, the original data spacing is 2 and there are 200 slices; when I change the slice thickness value to 4, I should see 100 slices.
Original: 1,2,3,4,5...
Modified: 1, 2, 3...
My code:
if ((modif & InputEvent.BUTTON1_MASK) == InputEvent.BUTTON1_MASK) {
    etat = 1;
    int nb0 = imageViewer.GetSlice() + 1;
    int nb1 = imageViewer.GetSlice() - 1;
    int totSlice = imageViewer.GetSliceMax() + 1;
    if (p1.y > p2.y) {
        String Newligne = System.getProperty("line.separator");
        cornerAnnotation.SetText(0, "Slice:" + (nb0 + 1) + "/" + totSlice
                + Newligne + "Zoom: " + (int) (100) + "%"
                + Newligne + "C:" + windowhight + " / W:" + windowlevel
                + Newligne + "Pixel:(" + xs + ":" + ys + ")"
                + Newligne + reader.GetModality()
                + "(" + reader.GetOutput().GetDimensions()[0] + "*"
                + reader.GetOutput().GetDimensions()[1] + ")" + "-Axial" + Newligne);
        imageViewer.SetSlice(nb0);
        scrollBar.setValue(imageViewer.GetSlice());
    } else {
        String Newligne = System.getProperty("line.separator");
        cornerAnnotation.SetText(0, "Slice:" + (nb1 + 1) + "/" + totSlice
                + Newligne + "Zoom: " + (int) (100) + "%"
                + Newligne + "C:" + windowhight + " / W:" + windowlevel
                + Newligne + "Pixel:(" + xs + ":" + ys + ")"
                + Newligne + reader.GetModality()
                + "(" + reader.GetOutput().GetDimensions()[0] + "*"
                + reader.GetOutput().GetDimensions()[1] + ")" + "-Axial" + Newligne);
        imageViewer.SetSlice(nb1);
        scrollBar.setValue(imageViewer.GetSlice());
    }
}
If you actually change the slice thickness in the DICOM attributes, you likely will have to change the image position (patient) and slice location DICOM attributes as well in order to keep the image volume consistent.
If you are just trying to move slices based on a certain distance (e.g. one click = 4 mm instead of 2 mm), then keep track of the position of the slice instead of the slice number. When the position changes, then compute the new slice for the new position and update to that slice. This will allow more flexibility as well.
If you really just want to step to every other slice, then why not just use nb0 = imageViewer.GetSlice() + 2 and nb1 = imageViewer.GetSlice() - 2?
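The step arithmetic for the thickness-based approach can be sketched as follows (only the computation is shown; wiring it into imageViewer.SetSlice is left to the surrounding code):

```java
public class SliceStep {
    /**
     * Number of slices to advance so that one scroll step covers the
     * desired physical thickness, given the original slice spacing.
     * E.g. spacing 2 and desired thickness 4 -> step 2, which turns
     * 200 stored slices into 100 visible positions.
     */
    static int sliceStep(double originalSpacing, double desiredThickness) {
        return Math.max(1, (int) Math.round(desiredThickness / originalSpacing));
    }

    public static void main(String[] args) {
        System.out.println(sliceStep(2.0, 4.0)); // 2
        System.out.println(sliceStep(2.0, 2.0)); // 1
    }
}
```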

Calculating world coordinates from camera coordinates

I have a world that is rendered in 2D, and I'm looking at it from the top. That looks like this (the floor tiles have no texture and only random green colors yet):
Before rendering my entities, I transform the model-view matrix like this (where position is the position of the camera, zoom its zoom factor, and ROTATION is 45):
glScalef(this.zoom, this.zoom, 1);
glTranslatef(this.position.x, this.position.y, 0);
glRotatef(ROTATION, 0, 0, 1);
Now I want to calculate the world coordinates for the current position of my camera. What I'm trying is to create a new matrix with glPushMatrix, then transform it the same way that the camera is transformed, and then get the matrix and multiply the given camera coordinate with it:
private Vector2f toWorldCoordinates(Vector2f position) {
    glPushMatrix();
    // do the same as when rendering
    glScalef(this.zoom, this.zoom, 1);
    glTranslatef(this.position.x, this.position.y, 0);
    glRotatef(ROTATION, 0, 0, 1);
    // get the model-view matrix
    ByteBuffer m = ByteBuffer.allocateDirect(64);
    m.order(ByteOrder.nativeOrder());
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    // calculate transformed position
    float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
    float y = (position.x * m.getFloat(1)) + (position.y * m.getFloat(5)) + m.getFloat(13);
    System.out.println(x + "/" + y);
    glPopMatrix();
    return new Vector2f(x, y);
}
The problem now is: this works for the x coordinate, but the y coordinate is wrong and always 0. Have I misused the matrix somehow? Is there a "smoother" way of getting the world coordinates from the eye coordinates?
The problem is with the way you're calling getFloat(). When you call it with an index on a ByteBuffer, the index is the number of bytes into the buffer at which to start reading the float, not the number of floats. You need to multiply each of your indices by 4:
float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(16)) + m.getFloat(48);
float y = (position.x * m.getFloat(4)) + (position.y * m.getFloat(20)) + m.getFloat(52);
However given that x is working for you already, I suspect you might also need to transpose your matrix co-ordinates, and so the correct code is:
float x = (position.x * m.getFloat(0)) + (position.y * m.getFloat(4)) + m.getFloat(12);
float y = (position.x * m.getFloat(16)) + (position.y * m.getFloat(20)) + m.getFloat(28);
(By a coincidence, transposing the first row of the matrix into the first column gives indices that are 4 times as great, so the two bugs cancel each other out in the case of x, but not y.)
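The byte-indexing behaviour of ByteBuffer.getFloat(int) can be demonstrated in isolation; this is exactly why the matrix indices have to be multiplied by 4:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOffsetDemo {
    /** Stores the floats 1..4 in a buffer and reads the second one back. */
    static float secondFloat() {
        ByteBuffer b = ByteBuffer.allocate(16).order(ByteOrder.nativeOrder());
        b.putFloat(1f).putFloat(2f).putFloat(3f).putFloat(4f);
        // getFloat takes a BYTE index: each float is 4 bytes wide,
        // so the second float starts at byte 4, not at index 1.
        return b.getFloat(4);
    }

    public static void main(String[] args) {
        System.out.println(secondFloat()); // 2.0
    }
}
```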
If you're looking for a smoother way of doing it, look into using gluUnProject, although you may have to apply some additional transforms (it maps from window to object co-ordinates).

How to get the size of the intersecting part in a circle in Java

I need the size of the black part of this image:
I've done some research about how to find it in normal math, and I was pointed to this website: Website
The final formula for getting it was (from MathWorld - A Wolfram Web Resource: wolfram.com):

A = r^2 * cos^-1((d^2 + r^2 - R^2) / (2*d*r))
  + R^2 * cos^-1((d^2 + R^2 - r^2) / (2*d*R))
  - 1/2 * sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R))

where r is the radius of the first circle, R the radius of the second circle, and d the distance between the two centers.
The code I tried to use to get the size of this was the following:
float r = getRadius1();
float R = e.getRadius1();
float deltaX = Math.abs((getX() + getRadius()) - (e.getX() + e.getRadius()));
float deltaY = Math.abs((getY() + getRadius()) - (e.getY() + e.getRadius()));
float d = (float) Math.sqrt(Math.pow(deltaX, 2) + Math.pow(deltaY, 2));
float part, part2, part3;
// Chopping it in parts, because it's easier.
part = (float) (Math.pow(r, 2) * Math.acos(
        Math.toRadians((Math.pow(d, 2) + Math.pow(r, 2) - Math.pow(R, 2)) / (2 * d * r))));
part2 = (float) (Math.pow(R, 2) * Math.acos(
        Math.toRadians((Math.pow(d, 2) + Math.pow(R, 2) - Math.pow(r, 2)) / (2 * d * R))));
part3 = (float) (0.5 * Math.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R)));
float res = part + part2 - part3;
Main.log(res + " " + part + " " + part2 + " " + part3 + " "
        + r + " " + R + " " + d);
// logs the data and System.out's it
I did some testing, and the output was this:
1345.9663 621.6233 971.1231 246.78008 20.0 25.0 43.528286
So that indicates that the size of the overlapping part was bigger than the circle itself (which is r^2 * PI).
What did I do wrong?
Just a guess (as stated in my comment): try removing the Math.toRadians(...) conversion.
Since there are no degrees involved in the formula but rather radii, I assume the parameter of cos^-1(...) is already a value in radians.
If I remove the conversion and run your code, I get the overlap area size 11.163887023925781, which seems plausible, since the length of the overlap segment on the line between the two centers is 20 + 25 - 43.5 = 1.5 (approximated).
Edit:
If I set the distance to 5 (the smaller circle is completely contained in the bigger one but touches its edge), I get the overlap area size 1256.63, which is exactly the area of the smaller circle (20^2 * pi). The calculation doesn't seem to work if the distance is smaller than the difference of the radii (i.e. in your case smaller than 5), but that might just be a problem of numerical representation (the normal datatypes might not be able to represent some of the intermediate results).
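Putting the corrected calculation together (a sketch in double precision; the clamp is my addition to keep the acos argument inside [-1, 1] for the contained/touching cases, and is not part of the original formula):

```java
public class CircleOverlap {
    /**
     * Area of the lens-shaped intersection of two circles with radii r and R
     * whose centers are d apart (the MathWorld circle-circle intersection
     * formula, without any degree/radian conversion).
     */
    static double lensArea(double r, double R, double d) {
        double part1 = r * r * Math.acos(clamp((d * d + r * r - R * R) / (2 * d * r)));
        double part2 = R * R * Math.acos(clamp((d * d + R * R - r * r) / (2 * d * R)));
        double part3 = 0.5 * Math.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R));
        return part1 + part2 - part3;
    }

    /** Guards against the quotient drifting slightly outside [-1, 1]. */
    private static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }

    public static void main(String[] args) {
        // The question's values: a small overlap of roughly 11.16.
        System.out.println(lensArea(20, 25, 43.528286));
        // Smaller circle just contained: the overlap is the full small circle.
        System.out.println(lensArea(20, 25, 5));
    }
}
```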
