I have a quick question, if any of you can help me with this kind of information :).
What is the fastest method to rotate an image by 90 degrees (or multiples of 90 degrees), in terms of both execution speed and memory usage?
I've searched a lot on Google and found that the fastest way to do this is OpenCV, in both Python and Java (and other languages).
Is that true? Do you know any other method that rotates an image by 90 degrees faster?
Thanks a lot!
JPEG images can be rotated without re-compressing the image data.
For a Python project, see jpegtran-cffi.
You probably can't get faster than that if you want to apply the rotation.
Another possibility is to edit the EXIF orientation of a JPEG image. It basically tells the viewer application how to rotate the image. This only changes a single value; however, not all readers/viewers support the orientation flag.
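For illustration, here is a minimal sketch of the lossless route with jpegtran-cffi, assuming the API shown in its README (the filenames are placeholders):

# Lossless 90-degree JPEG rotation: the DCT blocks are shuffled, pixel data is never re-encoded.
# Sketch only, assuming the jpegtran-cffi API from its README; filenames are placeholders.
from jpegtran import JPEGImage

img = JPEGImage("photo.jpg")
img.rotate(90).save("photo_rotated.jpg")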
I had a more general question last week: how can I rotate an image by any angle as fast as possible? I ended up comparing the different libraries that offer a rotation function in an article I wrote.
The quick answer is OpenCV; a more elaborate answer is written in the article:
I am going to focus on the three most used libraries for image editing in Python, namely Pillow, OpenCV and SciPy.
In the following code you can learn how to import these libraries and how to rotate an image using them. I have defined a function for each library to use in our experiments.
import numpy as np
import PIL
import cv2
import matplotlib.pylab as plt
from PIL import Image
from scipy.ndimage import rotate
from scipy.ndimage import interpolation
def rotate_PIL(image, angle, interpolation):
    '''
    input :
        image : PIL Image object
        angle : rotation angle : int
        interpolation : interpolation mode : PIL.Image resampling filter
    Interpolation modes :
        PIL.Image.NEAREST (nearest neighbour), PIL.Image.BILINEAR (linear interpolation in a 2×2 environment), or PIL.Image.BICUBIC
        https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.rotate
    returns :
        rotated image : PIL Image object
    '''
    return image.rotate(angle, interpolation)
def rotate_CV(image, angle, interpolation):
    '''
    input :
        image : image : ndarray
        angle : rotation angle : int
        interpolation : interpolation mode : cv2 interpolation flag
    Interpolation modes :
        cv2.INTER_CUBIC (slow) & cv2.INTER_LINEAR
        https://theailearner.com/2018/11/15/image-interpolation-using-opencv-python/
    returns :
        rotated image : ndarray
    '''
    # in OpenCV we need to form the transformation matrix and apply affine calculations
    h, w = image.shape[:2]
    cX, cY = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1)
    rotated = cv2.warpAffine(image, M, (w, h), flags=interpolation)
    return rotated
def rotate_scipy(image, angle, interpolation):
    '''
    input :
        image : image : ndarray
        angle : rotation angle : int
        interpolation : interpolation order : int
    Interpolation modes :
        https://stackoverflow.com/questions/57777370/set-interpolation-method-in-scipy-ndimage-map-coordinates-to-nearest-and-bilinea
        order=0 for nearest interpolation
        order=1 for linear interpolation
    returns :
        rotated image : ndarray
    '''
    # rotate comes from scipy.ndimage (imported above)
    return rotate(image, angle, reshape=False, order=interpolation)
To understand which library is more efficient at rotating and interpolating images, we first design a simple experiment. We apply a 20-degree rotation with all three libraries to a 200 x 200 pixel 8-bit image generated by our function rand_8bit().
def rand_8bit(n):
    im = np.random.rand(n, n) * 255
    im = im.astype(np.uint8)
    im[n//2:n//2+n//2, n//2:n//4+n//2] = 0  # a self-scaling rectangle
    im[n//3:50+n//3, n//3:200+n//3] = 0     # a constant-size rectangle
    return im
#generate images of 200x200 pixels
im = rand_8bit(200)
#for PIL library we need to first convert the image array into a PIL image object
image_for_PIL=Image.fromarray(im)
%timeit rotate_PIL(image_for_PIL,20,PIL.Image.BILINEAR)
%timeit rotate_CV(im,20,cv2.INTER_LINEAR)
%timeit rotate_scipy(im,20,1)
The result is:
987 µs ± 76 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
414 µs ± 79.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
4.46 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
This means that OpenCV is the most efficient and SciPy is the slowest when it comes to image rotation.
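Note that the original question asks about exact multiples of 90 degrees; for that special case no interpolation is needed at all. A minimal sketch of the dedicated 90-degree paths in NumPy and OpenCV (the filename is a placeholder, and cv2.rotate requires OpenCV 3.2 or newer):

# 90-degree rotations need no interpolation; both calls below only reorder pixels.
import cv2
import numpy as np

img = cv2.imread("input.png")                    # placeholder filename
r_np = np.rot90(img)                             # counter-clockwise, returns a view of the array
r_cv = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)  # OpenCV's dedicated 90-degree routine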
The fastest way known to me for basic image manipulation like rotating, cropping, resizing, and filtering is the Pillow module in Python. OpenCV is used when advanced manipulations have to be done that can't be done with Pillow. Pillow's rotate will answer your question.
Image.rotate(angle)
This is all you have to do to rotate the image by any angle you want.
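For example, a minimal runnable sketch (the filenames are placeholders):

# Minimal Pillow example; the filenames are placeholders.
from PIL import Image

im = Image.open("input.png")
rotated = im.rotate(90, expand=True)  # expand=True enlarges the canvas for non-square images
rotated.save("rotated.png")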
In my Java project I have used this OpenCV implementation to rotate images, and I am pleased with its performance.
The OpenCV dependency version is shown below.
<dependency>
    <groupId>nu.pattern</groupId>
    <artifactId>opencv</artifactId>
    <version>2.4.9-4</version>
</dependency>
The method below rotates the image by the angle you provide.
@Override
public BufferedImage rotateImage(BufferedImage image, double angle) {
    Mat imageMat = OpenCVHelper.img2Mat(image);

    // Calculate size of new matrix
    double radians = Math.toRadians(angle);
    double sin = Math.abs(Math.sin(radians));
    double cos = Math.abs(Math.cos(radians));
    int newWidth = (int) Math.floor(imageMat.width() * cos + imageMat.height() * sin);
    int newHeight = (int) Math.floor(imageMat.width() * sin + imageMat.height() * cos);
    int dx = (int) Math.floor(newWidth / 2 - (imageMat.width() / 2));
    int dy = (int) Math.floor(newHeight / 2 - (imageMat.height() / 2));

    // rotating image
    Point center = new Point(imageMat.cols() / 2, imageMat.rows() / 2);
    Mat rotMatrix = Imgproc.getRotationMatrix2D(center, 360 - angle, 1.0); // 1.0 means 100% scale

    // adjusting the boundaries of rotMatrix so the rotated image is not cropped
    double[] rot_0_2 = rotMatrix.get(0, 2);
    for (int i = 0; i < rot_0_2.length; i++) {
        rot_0_2[i] += dx;
    }
    rotMatrix.put(0, 2, rot_0_2);

    double[] rot_1_2 = rotMatrix.get(1, 2);
    for (int i = 0; i < rot_1_2.length; i++) {
        rot_1_2[i] += dy;
    }
    rotMatrix.put(1, 2, rot_1_2);

    Mat rotatedMat = new Mat();
    Imgproc.warpAffine(imageMat, rotatedMat, rotMatrix, new Size(newWidth, newHeight));

    return OpenCVHelper.mat2Img(rotatedMat);
}
The rotateImage method above takes as input an image of type BufferedImage and the angle in degrees by which you need to rotate it.
The first operation of the rotateImage method is to calculate the new width and height that the rotated image will have, using the angle you provided and the width and height of the image you want to rotate.
The second important operation is adjusting the boundaries (the translation entries) of the matrix used to rotate the image. This is done to prevent the image from being cropped by the rotation operation.
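The same idea is easy to sketch in Python/OpenCV as well; the function and variable names below are my own, not part of the Java answer above:

# Illustrative Python/OpenCV analogue of "enlarge the canvas, then shift the rotation matrix";
# the names here are my own, not part of the Java answer above.
import cv2

def rotate_bound(image, angle):
    h, w = image.shape[:2]
    cx, cy = w / 2, h / 2
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    cos, sin = abs(M[0, 0]), abs(M[0, 1])
    new_w = int(h * sin + w * cos)   # bounding box of the rotated image
    new_h = int(h * cos + w * sin)
    M[0, 2] += new_w / 2 - cx        # shift so the result is centred in the new canvas
    M[1, 2] += new_h / 2 - cy
    return cv2.warpAffine(image, M, (new_w, new_h))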
Below is the class that I have used to convert the image from BufferedImage to Mat and vice versa.
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.WritableRaster;

import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class OpenCVHelper {

    /**
     * The Mat type image is converted to BufferedImage type.
     *
     * @param mat the source OpenCV matrix
     * @return the equivalent BufferedImage (TYPE_3BYTE_BGR)
     */
    public static BufferedImage mat2Img(Mat mat) {
        BufferedImage image = new BufferedImage(mat.width(), mat.height(), BufferedImage.TYPE_3BYTE_BGR);
        WritableRaster raster = image.getRaster();
        DataBufferByte dataBuffer = (DataBufferByte) raster.getDataBuffer();
        byte[] data = dataBuffer.getData();
        mat.get(0, 0, data);
        return image;
    }

    /**
     * The BufferedImage type image is converted to Mat type.
     *
     * @param image the source BufferedImage
     * @return the equivalent OpenCV matrix (CV_8UC3)
     */
    public static Mat img2Mat(BufferedImage image) {
        // convertTo3ByteBGRType is assumed to redraw the image into TYPE_3BYTE_BGR;
        // its body is not shown in the original post.
        image = convertTo3ByteBGRType(image);
        byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
        mat.put(0, 0, data);
        return mat;
    }
}
In my case I needed the image converted to BufferedImage. If you don't need that, you can skip this step, read the image directly as a Mat, and pass it to the rotateImage method.
public Mat rotateImage(File input, double angle) {
    Mat imageMat = Highgui.imread(input.getAbsolutePath());
    ...
}
The following code finds the best-focus image within a set most of the time, but there are some images where it returns a higher value for the image that is way more blurry to my eye.
I am using OpenCV 3.4.2 on Linux and/or Mac.
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import static org.opencv.core.Core.BORDER_DEFAULT;
public class LaplacianExample {
public static Double calcSharpnessScore(Mat srcImage) {
/// Remove noise with a Gaussian filter
Mat filteredImage = new Mat();
Imgproc.GaussianBlur(srcImage, filteredImage, new Size(3, 3), 0, 0, BORDER_DEFAULT);
int kernel_size = 3;
int scale = 1;
int delta = 0;
Mat lplImage = new Mat();
Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_64F, kernel_size, scale, delta, Core.BORDER_DEFAULT);
// converting back to CV_8U generate the standard deviation
Mat absLplImage = new Mat();
Core.convertScaleAbs(lplImage, absLplImage);
// get the standard deviation of the absolute image as input for the sharpness score
MatOfDouble median = new MatOfDouble();
MatOfDouble std = new MatOfDouble();
Core.meanStdDev(absLplImage, median, std);
return Math.pow(std.get(0, 0)[0], 2);
}
}
Here are two images using the same illumination (fluorescence, DAPI), taken from below a microscope slide while attempting to auto-focus on the coating/mask on the top surface of the slide.
I'm hoping someone can explain to me why my algorithm fails to detect the image that is less blurry. Thanks!
The main issue is that the Laplacian kernel size is too small.
You are using kernel_size = 3, and it's too small for the above scene.
In the above images, kernel_size = 3 is affected mostly by noise, because the edges (in the image that shows more detail) are much larger than 3x3 pixels.
In other words, the "spatial frequency" of the details is low, and the 3x3 kernel emphasizes much higher spatial frequencies.
Possible solutions:
You may increase the kernel size - set kernel_size = 11 for example.
As an alternative, you may resize (shrink) the source image by a factor of say 0.25 in each axis.
You may also compute the weighted sum of std before and after resizing (in case the shrunk image is not accurate enough when focus is good).
There is a small issue in your code:
Core.convertScaleAbs(lplImage, absLplImage) computes the absolute value of the Laplacian result, so the computed STD is incorrect.
I suggest the following fix:
Set Laplacian depth to CvType.CV_16S (instead of CvType.CV_64F):
Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_16S, kernel_size, scale, delta, Core.BORDER_DEFAULT);
Don't execute Core.meanStdDev(absLplImage, median, std); compute the STD on lplImage instead:
Core.meanStdDev(lplImage, median, std);
I used the following Python code for testing:
import cv2
def calc_sharpness_score(srcImage):
""" Compute sharpness score for automatic focus """
filteredImage = cv2.GaussianBlur(srcImage, (3, 3), 0, 0)
kernel_size = 11
scale = 1
delta = 0
#lplImage = cv2.Laplacian(filteredImage, cv2.CV_64F, ksize=kernel_size, scale=scale, delta=delta)
lplImage = cv2.Laplacian(filteredImage, cv2.CV_16S, ksize=kernel_size, scale=scale, delta=delta)
# converting back to CV_8U generate the standard deviation
#absLplImage = cv2.convertScaleAbs(lplImage)
# get the standard deviation of the absolute image as input for the sharpness score
# (mean, std) = cv2.meanStdDev(absLplImage)
(mean, std) = cv2.meanStdDev(lplImage)
return std[0][0]**2
im1 = cv2.imread('im1.jpg', cv2.COLOR_BGR2GRAY) # Read input image as Grayscale
im2 = cv2.imread('im2.jpg', cv2.COLOR_BGR2GRAY) # Read input image as Grayscale
var1 = calc_sharpness_score(im1)
var2 = calc_sharpness_score(im2)
Result:
std1 = 668464355
std2 = 704603944
I am currently having problems coming up with an algorithm for re-scaling an image.
I want to implement both bilinear interpolation and nearest neighbour. I understand how both of them work conceptually, but I cannot seem to translate that into code. I am still stuck on nearest neighbour.
I have written some pseudo-code for it below (based on what I know):
Resizing Images: Nearest Neighbour
Use a loop:

for j = 0 to Yb-1
    for i = 0 to Xb-1
        for c = 0 to 2
            y = floor(j * Ya / Yb)
            x = floor(i * Xa / Xb)
            Ib[j][i][c] = Ia[y][x][c]
My original data set (my volume of data) is stored in a 3D array indexed as [x][y][z]. I read each pixel separately and can calculate the colors for each pixel using Color.color in Java. However, I need to know how to get the color (c = 0, 1, 2) for each pixel position (x, y), excluding z (for a single view), so that I can map one old pixel to each new pixel in my new data set with the new width and height. I have written most of the code translated from the pseudo-code above in Java, but I still cannot work out how to finalise a working implementation.
Thanks in advance 😊
I am not very familiar with Java, but here is working code for Python.
import cv2
import numpy as np

img = cv2.imread("image.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

scaleX = 0.5
scaleY = 0.5

newImg = np.zeros((int(img.shape[0]*scaleX), int(img.shape[1]*scaleY))).astype(np.uint8)

for y in range(newImg.shape[0]):
    for x in range(newImg.shape[1]):
        samplex = x / scaleX
        sampley = y / scaleY
        dx = samplex - np.floor(samplex)
        dy = sampley - np.floor(sampley)

        val = img[int(sampley - dy), int(samplex - dx)] * (1 - dx) * (1 - dy)
        val += img[int(sampley + 1 - dy), int(samplex - dx)] * (1 - dx) * dy
        val += img[int(sampley - dy), int(samplex + 1 - dx)] * dx * (1 - dy)
        val += img[int(sampley + 1 - dy), int(samplex + 1 - dx)] * dx * dy

        newImg[y, x] = val.astype(np.uint8)

cv2.imshow("img", newImg)
cv2.waitKey(0)
You could simply add one more for loop inside the y and x for loops to account for channels.
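Since the question is specifically stuck on nearest neighbour, here is a hedged NumPy sketch of that variant as well (the names are my own); it handles grayscale and multi-channel images without an explicit channel loop:

# Hedged nearest-neighbour resize sketch in NumPy; the names are my own.
import numpy as np

def resize_nearest(img, new_h, new_w):
    old_h, old_w = img.shape[:2]
    ys = (np.arange(new_h) * old_h) // new_h   # floor(j * Ya / Yb) for every row at once
    xs = (np.arange(new_w) * old_w) // new_w   # floor(i * Xa / Xb) for every column at once
    return img[ys[:, None], xs[None, :]]       # any trailing channel axis is carried along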
If I get it right, you are interpolating volumes (voxels) instead of pixels. In that case:
Let's have a source volume vol1[xs1][ys1][zs1] and a target vol0[xs0][ys0][zs0], where xs, ys, zs are the resolutions; then nearest neighbor would be:
// vol0 <- vol1
for (x0=0; x0<xs0; x0++)
 for (x1=(x0*xs1)/xs0, y0=0; y0<ys0; y0++)
  for (y1=(y0*ys1)/ys0, z0=0; z0<zs0; z0++)
   {
   z1=(z0*zs1)/zs0;
   vol0[x0][y0][z0]=vol1[x1][y1][z1];
   }
The color stays the same for nearest neighbor. In case vol0 has a smaller resolution than vol1, you can run the for loops at the vol1 resolution and compute x0,y0,z0 from x1,y1,z1 instead, to speed things up. Btw. all the variables are integers, no floats are needed for this...
Now for the color encoding, in case your voxels are a 1D array ({r,g,b}) instead of a scalar integral type:
vol0[xs0][ys0][zs0][3]
vol1[xs1][ys1][zs1][3]
the code would change to:
// vol0 <- vol1
for (x0=0; x0<xs0; x0++)
 for (x1=(x0*xs1)/xs0, y0=0; y0<ys0; y0++)
  for (y1=(y0*ys1)/ys0, z0=0; z0<zs0; z0++)
   for (z1=(z0*zs1)/zs0, i=0; i<3; i++)
    vol0[x0][y0][z0][i]=vol1[x1][y1][z1][i];
I'm trying to draw a NinePatch using a transform matrix so it can be scaled, rotated, moved etc. So I created a class that inherits from LibGDX's NinePatch class and which is responsible of the matrix.
This is how I compute my transform matrix (I update it each time one of the following values changes) :
this.transform
.idt()
.translate(originX, originY, 0)
.rotate(0, 0, 1, rotation)
.scale(scale, scale, 1)
.translate(-originX, -originY, 0)
;
and how I render my custom NinePatch class :
drawConfig.begin(Mode.BATCH);
this.oldTransform.set(drawConfig.getTransformMatrix());
drawConfig.setTransformMatrix(this.transform);
this.draw(drawConfig.getBatch(), this.x, this.y, this.width, this.height); // Libgdx's NinePatch#draw()
drawConfig.setTransformMatrix(this.oldTransform);
Case 1
Here's what I get when I render 4 nine patches with :
Position = 0,0 / Origin = 0,0 / Scale = 0.002 / Rotation = different for each 9patch
I get what I expect to.
Case 2
Now the same 4 nine patches with :
Position = 0,0 / Origin = 0.5,0.5 / Scale = same / Rotation = same
You can see that my 9 patches aren't drawn at 0,0 (their position) but at 0.5,0.5 (their origin), as if there were no .translate(-originX, -originY, 0) when computing the transform matrix. Just to be sure, I commented out this instruction and indeed got the same result. So why is my 2nd translation apparently not taken into account?
The problem is probably your scaling. Because it also scales down the translation, your second translate actually translates by (-originX*scale, -originY*scale, 0); since scale = 0.002, it looks like there is no translate at all. For instance, for the x coordinate, you compute:
x_final = originX + scale * (-originX + x_initial)
I had to change the code computing my transform matrix to take the scale into account when translating back, as pointed out by Guillaume G., except my code is different from his:
this.transform
    .idt()
    .translate(originX, originY, 0)
    .rotate(0, 0, 1, rotation)
    .scale(scale, scale, 1)
    .translate(-originX / scale, -originY / scale, 0);
I have got an assignment where I need to validate images. I have two sets of folders: one contains the actual images and the other contains the expected images. These images are of some brands/companies.
Upon initial investigation, I found that the images of each brand have different dimensions but the same format, i.e. PNG.
What I have done so far: upon googling I found the code below, which compares 2 images. I ran this code for one of the brands and of course the result was false. Then I modified one of the images so that both images have the same dimensions; even then I got the same result.
public void testImage() throws InterruptedException {
    String file1 = "D:\\image\\bliss_url_2.png";
    String file2 = "D:\\bliss.png";
    Image image1 = Toolkit.getDefaultToolkit().getImage(file1);
    Image image2 = Toolkit.getDefaultToolkit().getImage(file2);
    PixelGrabber grab1 = new PixelGrabber(image1, 0, 0, -1, -1, true);
    PixelGrabber grab2 = new PixelGrabber(image2, 0, 0, -1, -1, true);

    int[] data1 = null;
    if (grab1.grabPixels()) {
        int width = grab1.getWidth();
        int height = grab1.getHeight();
        System.out.println("Initial width and height of Image 1:: " + width + ">>" + height);
        grab2.setDimensions(250, 100);
        System.out.println("width and height of Image 1:: " + width + ">>" + height);
        data1 = new int[width * height];
        data1 = (int[]) grab1.getPixels();
        System.out.println("Image 1:: " + data1);
    }

    int[] data2 = null;
    if (grab2.grabPixels()) {
        int width = grab2.getWidth();
        int height = grab2.getHeight();
        System.out.println("width and height of Image 2:: " + width + ">>" + height);
        data2 = new int[width * height];
        data2 = (int[]) grab2.getPixels();
        System.out.println("Image 2:: " + data2.toString());
    }

    System.out.println("Pixels equal: " + java.util.Arrays.equals(data1, data2));
}
I just want to verify whether the content of the images is the same, i.e. the images belong to the same brand, and if not, what the differences are.
Please help me: what should I do to make a valid comparison?
Maybe you should not use an external library, assuming this should be your own work. From this point of view, one way to compare images is to take the average color of the same portion of both images. If the results are equal (or very similar, allowing for compression errors etc.), the images probably match.
Let's say we have two images.
Image 1 is 4 pixels (to simplify, each pixel is represented by a single number, but it would really be RGB):
1 2
3 4
[ (1+2+3+4) / 4 = 2.5 ]
Image 2 is twice as big:
1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4
[ ((4*1)+(4*2)+(4*3)+(4*4)) / 16 = 2.5]
The average pixel value (color) is 2.5 in both images.
(With real pixel colors, compare the R, G and B values separately; the three should be equal or very close.)
That's the idea. Now, you should do this computation for each pixel of the smallest image and the corresponding pixels of the biggest one (according to the scale difference between the two images).
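As a hedged illustration of that idea in Python/NumPy (the names, the integer-ratio assumption and the tolerance are my own):

# Hedged NumPy sketch of the block-averaging idea above; names and tolerance are my own,
# and the big image is assumed to be an integer multiple of the small one in size.
import numpy as np

def block_average_match(small, big, tol=5.0):
    k = big.shape[0] // small.shape[0]            # assumed integer scale factor
    h, w = small.shape[:2]
    # average each k x k block of the big image so it lines up with one small-image pixel
    blocks = big[:h * k, :w * k].reshape(h, k, w, k, -1).mean(axis=(1, 3))
    diff = np.abs(blocks - small.reshape(h, w, -1).astype(float))
    return bool((diff <= tol).all())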
Hope you'll find a good solution!
The setDimensions method doesn't scale the image. Moreover, you shouldn't call it directly (see its Javadoc). PixelGrabber is just a grabber for a subset of the pixels in an image. To scale the image, use Image.getScaledInstance(), for instance: http://docs.oracle.com/javase/7/docs/api/java/awt/Image.html#getScaledInstance(int,%20int,%20int)
Even if you have two images of the same size after scaling, you still cannot compare them pixel by pixel, since any scaling algorithm is lossy by nature. That means the only thing you can do is check the "similarity" of the images. I'd suggest taking a look at the great image processing library OpenCV, which has a Java wrapper:
Simple and fast method to compare images for similarity
http://docs.opencv.org/doc/tutorials/introduction/desktop_java/java_dev_intro.html
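For instance, here is a hedged sketch of a very crude similarity check in Python/OpenCV (the filenames and the threshold are placeholders; the links above discuss more robust metrics):

# Hedged sketch of a crude similarity check with OpenCV; filenames and threshold are placeholders.
import cv2
import numpy as np

a = cv2.imread("actual.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("expected.png", cv2.IMREAD_GRAYSCALE)
b = cv2.resize(b, (a.shape[1], a.shape[0]))                 # bring both images to the same size
diff = cv2.absdiff(a, b)
similarity = 1.0 - np.count_nonzero(diff > 25) / diff.size  # fraction of pixels that roughly match
print("similarity:", similarity)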
I'm making an application in Java Swing on the NetBeans Platform. In my app I rotate MyImage.tiff (a 16-bit grayscale TIFF). It rotates the image, but changes the type of MyImage.tiff: before rotation the type of MyImage.tiff is 11 (TYPE_USHORT_GRAY), but after rotation its BufferedImage type becomes 0 (TYPE_CUSTOM). How can I solve this problem? In my app I use JAI to rotate the image. I have not installed JAI on my PC; instead I made a wrapper module in which I used the JAI jar files. So are there any missing jar files? My code for rotating the image is below.
public BufferedImage rotateRighteImage(BufferedImage im) {
    int value = 90;
    float angle = (float) (value * (Math.PI / 180.0F));

    ParameterBlock pb = new ParameterBlock();
    pb.addSource(im);   // The source image
    pb.add(0.0F);       // The x origin
    pb.add(0.0F);       // The y origin
    pb.add(angle);      // The rotation angle

    // Create the rotate operation
    RenderedOp create = JAI.create("Rotate", pb, null);
    im = create.getAsBufferedImage();
    return im;
}