I am doing a school assignment in which I need to create a circular gradient fog filter. I did some research on how to blur an image and found that I need to take the color values of the pixels around the current pixel and average them to set the new color for the blurred image. So far, I am just focusing on the blurring aspect, and I have this code:
public void circularGradientBlur()
{
    Pixel regularPixel = null;
    Pixel L_regularPixel = null;
    Pixel R_regularPixel = null;
    Pixel T_regularPixel = null;
    Pixel B_regularPixel = null;
    Pixel blurredPixel = null;
    Pixel pixels[][] = this.getPixels2D();
    for (int row = 2; row < pixels.length; row++)
    {
        for (int col = 2; col < pixels[0].length - 1; col++)
        {
            regularPixel = pixels[row][col];
            if (row != 0 && col != 0 && row != 498 && col != 498)
            {
                L_regularPixel = pixels[row - 1][col];
                R_regularPixel = pixels[row + 1][col];
                T_regularPixel = pixels[row][col - 1];
                B_regularPixel = pixels[row][col + 1];
                blurredPixel.setRed((L_regularPixel.getRed() + R_regularPixel.getRed() + T_regularPixel.getRed() + B_regularPixel.getRed()) / 4);
                blurredPixel.setGreen((L_regularPixel.getGreen() + R_regularPixel.getGreen() + T_regularPixel.getGreen() + B_regularPixel.getGreen()) / 4);
                blurredPixel.setBlue((L_regularPixel.getBlue() + R_regularPixel.getBlue() + T_regularPixel.getBlue() + B_regularPixel.getBlue()) / 4);
            }
        }
    }
}
When I try to use this code, I get a NullPointerException on the lines where I set the new colors: blurredPixel.setRed(), blurredPixel.setGreen(), blurredPixel.setBlue(). This error confuses me, as I have other methods with similar code that don't throw it. I would appreciate any help that I can get!
You have to make blurredPixel refer to an actual Pixel instance. It is initialized to null and never reassigned, so calling the setters on it will always throw a NullPointerException.
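A minimal sketch of the fix, using a stripped-down stand-in for the Pixel class (the real one isn't shown in the question): point blurredPixel at the pixel you are currently writing before calling the setters.

```java
// Stand-in for the question's Pixel class; only the red channel is modeled here.
class Pixel {
    private int red;
    Pixel(int red) { this.red = red; }
    int getRed() { return red; }
    void setRed(int red) { this.red = red; }
}

public class BlurFix {
    public static void main(String[] args) {
        Pixel[][] pixels = {
            { new Pixel(10), new Pixel(20), new Pixel(30) },
            { new Pixel(40), new Pixel(50), new Pixel(60) },
            { new Pixel(70), new Pixel(80), new Pixel(90) },
        };
        int row = 1, col = 1;
        // The fix: blurredPixel must reference an actual Pixel before the
        // setters are called; here it is the pixel being overwritten.
        Pixel blurredPixel = pixels[row][col];
        blurredPixel.setRed((pixels[row - 1][col].getRed()
                + pixels[row + 1][col].getRed()
                + pixels[row][col - 1].getRed()
                + pixels[row][col + 1].getRed()) / 4);
        System.out.println(pixels[row][col].getRed()); // (20 + 80 + 40 + 60) / 4 = 50
    }
}
```

Note that writing the blurred value back into the same array means later pixels average already-blurred neighbors; a typical blur copies results into a second picture instead.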
I am working with the Camera2 API and want to detect whether a captured image is blurry or clear. I used OpenCV for this, but the result is not satisfactory and it triples the APK size, so is there another way to detect blur?
Measuring image focus/blur involves iterating over the pixels of the bitmap, or at least a portion of them.
While you don't need OpenCV to iterate over the pixels of a bitmap on Android, it's not for the faint of heart. Doing so in a performant way would require you to drop into JNI native code, or perhaps a technology like RenderScript, as iterating over pixels in Java or Kotlin might prove too slow.
There are many algorithms and techniques for measuring focus, sharpness, or contrast; this is one I've used with reasonable success.
Luma is the luminosity of a pixel, i.e. its grayscale value. You'll want to convert each pixel to grayscale for this focus measure, e.g. using the NTSC formula:
pixelLuma = (red * 0.299) + (green * 0.587) + (blue * 0.114)
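For example, the NTSC weighting applied to a couple of RGB values (the helper method is just an illustration, not from the post):

```java
public class Luma {
    // NTSC / Rec. 601 luma weighting for one RGB pixel.
    static double luma(int red, int green, int blue) {
        return red * 0.299 + green * 0.587 + blue * 0.114;
    }

    public static void main(String[] args) {
        System.out.println(Math.round(luma(255, 255, 255))); // pure white -> 255
        System.out.println(Math.round(luma(255, 0, 0)));     // pure red   -> 76
    }
}
```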
Here is a suggested formula to measure focus score:
FocusScore = Max({Video_Gradient}) / {Gray_Level_Dynamic_Range} * {Pixel_Pitch}
Max{Video_Gradient} = a measure of the maximum luminosity difference between adjacent pixels (x,y) across the bitmap.
e.g.:
horizontally measure pixel[x] - pixel[x+1]
vertically measure pixel[y] - pixel[y+1]
{Gray_Level_Dynamic_Range} = difference between average of N lightest pixels and N darkest pixels across the bitmap. A typical value for N is 64, in my case working on images around 1200w x 500h. Smaller images should use smaller N.
{Pixel_Pitch} = 1 / DPI = 1/200 = 0.005
This will result in a score, higher values are more in focus. You can determine a reasonable threshold.
Here is a code snippet written in C:
width = width of bitmap
height = height of bitmap
pixels = an array of bytes of size (width * height) holding pixel luminosity values
VFOCUS_N = 64
int gradientHorizontal[256];
int *pGradientHorizontal = gradientHorizontal;
int gradientVertical[256];
int *pGradientVertical = gradientVertical;
int luminanceHistogram[256];
int *pLuminance = luminanceHistogram;
int maxGradient = 0;

for (int i = 0; i < 256; i++)
{
    gradientHorizontal[i] = 0;
    gradientVertical[i] = 0;
    luminanceHistogram[i] = 0;
}

// pixel by pixel math...
for (nRow = 0; nRow < height - 1; nRow++)
{
    nRowOffset = nRow * width;
    nNextRowOffset = (nRow + 1) * width;
    for (nCol = 0; nCol < width - 1; nCol++)
    {
        int gC = pixels[nRowOffset + nCol];
        int gH = abs(gC - pixels[nRowOffset + nCol + 1]);
        int gV = abs(gC - pixels[nNextRowOffset + nCol]);
        pLuminance[gC]++;
        pGradientHorizontal[gH]++;
        pGradientVertical[gV]++;
    }
}

// find max gradient
for (int i = 255; i >= 0; i--)
{
    // first one with a value
    if ((gradientHorizontal[i] > 0) || (gradientVertical[i] > 0))
    {
        maxGradient = i;
        break;
    }
}

// calculate dynamic range
int rangeLow = 0;
int rangeHi = 0;
int p;

p = 0;
for (int i = 0; i < 256; i++)
{
    if (luminanceHistogram[i] > 0)
    {
        if (p + luminanceHistogram[i] > VFOCUS_N)
        {
            rangeLow += (i * (VFOCUS_N - p));
            p = VFOCUS_N;
            break;
        }
        p += luminanceHistogram[i];
        rangeLow += (i * luminanceHistogram[i]);
    }
}
if (p)
    rangeLow /= p;

p = 0;
for (int i = 255; i >= 0; i--)
{
    if (luminanceHistogram[i] > 0)
    {
        if (p + luminanceHistogram[i] > VFOCUS_N)
        {
            rangeHi += (i * (VFOCUS_N - p));
            p = VFOCUS_N;
            break;
        }
        p += luminanceHistogram[i];
        rangeHi += (i * luminanceHistogram[i]);
    }
}
if (p)
    rangeHi /= p;

float mFocusScore = (float)fmin((float)maxGradient / (fabs((float)rangeHi - (float)rangeLow) * 0.005), 100.00);
Low focus scores mean a blurry image. Values close to or in excess of 100 indicate a sharp image; the code above caps the score at 100.
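As a quick sanity check of the score formula with made-up values (maxGradient = 80, a dynamic range of 40 to 200, and a 200 DPI pixel pitch):

```java
public class FocusScoreCheck {
    public static void main(String[] args) {
        // Hypothetical measurements, not taken from a real bitmap.
        int maxGradient = 80;
        int rangeLow = 40, rangeHi = 200;
        double pixelPitch = 1.0 / 200.0; // 200 DPI -> 0.005

        // FocusScore = Max(Video_Gradient) / (Gray_Level_Dynamic_Range * Pixel_Pitch), capped at 100
        double score = Math.min(
                maxGradient / (Math.abs((double) (rangeHi - rangeLow)) * pixelPitch),
                100.0);
        System.out.println(Math.round(score)); // 80 / (160 * 0.005) -> 100
    }
}
```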
I have written a class to read BMP files with all the necessary headers, as described here: http://www.ece.ualberta.ca/~elliott/ee552/studentAppNotes/2003_w/misc/bmp_file_format/bmp_file_format.htm
The class reads all the necessary headers and for the raw data, it reads them as bytes as shown below -
private byte[] calcBytes(String path) throws IOException {
    // Get the path of the file
    Path file_path = Paths.get(path);
    // Return the byte array of the file
    return Files.readAllBytes(file_path);
}
Following this, I convert the byte values (stored little-endian) to their decimal equivalents and store them as pixel (RGB) values in an array (size: width * height) as shown below.
private int[][] calcPixels(byte[] bytes) {
    // Find padding, if divisible by 4
    int padding = this.width % 4;
    // If not divisible by 4, find the closest next value that is divisible
    if (padding != 0) {
        // Find the closest bigger padding number divisible by 4
        while ((padding % 4) != 0) {
            padding++;
        }
    }
    // Output pixel array storing the pixel values in [R, G, B] format
    int[][] output_pixels = new int[((this.width + padding) * this.height)][3];
    // Initialize the column of the pixel data as zero
    int col = 0;
    // Position to fill the output pixel array at the correct index
    int pos = 0;
    // Iterate through the bytes array
    for (int index = 0; index < bytes.length; index += 3) {
        // Increment the column
        col++;
        // Bytes to hex
        String blue_hex = String.format("%02X", bytes[index]);
        String green_hex = String.format("%02X", bytes[index + 1]);
        String red_hex = String.format("%02X", bytes[index + 2]);
        // Hex to int/short values
        short blue = (short) Integer.parseInt(blue_hex, 16);
        short green = (short) Integer.parseInt(green_hex, 16);
        short red = (short) Integer.parseInt(red_hex, 16);
        // Adding to the main array
        output_pixels[pos++] = new int[] {red, green, blue};
        // Handle padding at the last column
        if (col == (this.width + padding) / 4) {
            // Skip the bytes since they are the padding
            index += padding;
            // Denote the end of the row (last column) as [-1,-1,-1], resetting the last stored pixel
            output_pixels[pos - 1] = new int[] {(short) -1, (short) -1, (short) -1};
            // Row will change now
            col = 0;
        }
    }
    return output_pixels;
}
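As an aside, the hex-string round trip in calcPixels isn't needed; masking a signed Java byte with 0xFF promotes it straight to its unsigned int value:

```java
public class UnsignedByte {
    public static void main(String[] args) {
        byte[] bytes = { (byte) 0xFF, (byte) 0x80, 0x10 };
        // Java bytes are signed, so (byte) 0xFF is -1; the mask recovers 255.
        int blue  = bytes[0] & 0xFF;
        int green = bytes[1] & 0xFF;
        int red   = bytes[2] & 0xFF;
        System.out.println(blue + " " + green + " " + red); // 255 128 16
    }
}
```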
Having generated the pixel array with the necessary RGB data, I then use JavaFX to draw each pixel as a Rectangle shape, giving it a fill color while iterating through the pixel data array generated above.
@Override
public void start(Stage primaryStage) throws Exception {
    Button button = new Button("Select Image File");
    String path = System.getProperty("user.dir") + "/filePath/image.bmp";
    // Reads the BMP file and generates headers, bytes and pixels as shown above
    BMP bmp = new BMP(path);
    // Initialize the root
    Group root = new Group();
    int[][] pixelsData = bmp.getPixelsData();
    // X (to manipulate the x coordinate of the pixel, i.e. the Rectangle shape)
    double xFactor = 1.0;
    double startX = 0.0;
    double endX = startX + xFactor;
    // Y (to manipulate the y coordinate of the pixel, i.e. the Rectangle shape)
    double yFactor = 0.5;
    double startY = 0.0;
    double endY = startY + yFactor;
    for (int index = 0; index < pixelsData.length; index++) {
        // Get colors
        int red = pixelsData[index][0];
        int green = pixelsData[index][1];
        int blue = pixelsData[index][2];
        if (red == -1 && green == -1 && blue == -1) {
            // Start next line
            startY += yFactor;
            endY += yFactor;
            startX = 0.0;
            endX = startX + xFactor;
            continue;
        } else {
            // Keep moving the x coordinate
            startX += xFactor;
            endX += xFactor;
        }
        Color color = Color.rgb(red, green, blue);
        Rectangle rectangle = new Rectangle(startX, startY, endX, endY);
        rectangle.setFill(color);
        root.getChildren().add(rectangle);
    }
    primaryStage.setScene(new Scene(root, 700, 700));
    primaryStage.show();
}
Now, when I actually generate the image with the code above, the image seems to blur out towards the end of the x-axis and also misses a lot of pixels on the sides.
Example as shown below:
Original Image
JavaFX Image
I also reversed the iteration of my JavaFX pixel-data loop in order to generate the image in the right order, but no luck.
If you look closely at the original image and the two JavaFX images, it is evident that the left and right sides of the "cat" in the pictures are printed fine but blur out, which in my opinion is only because of the coordinates.
I have spent two days trying to figure this out and I am really confused. Can someone please help with my understanding or point out any mistake I might be making in my code?
I'm creating a game using LWJGL and Slick Utils. I'm trying to load an animated texture as a set of frames contained in a single PNG image.
I have tried to figure out how to get subimages using Slick, but so far all I've been able to find on the subject is a way to do it outside of Slick using BufferedImages. I'd like to know if there is a way to do this using the Slick Utils library, since all of my image loading code in my project so far has been using Slick.
Sure, Slick provides a way to do this. If you look at org.newdawn.slick.SpriteSheet.initImpl() and later org.newdawn.slick.SpriteSheet.getSprite(), you will notice how org.newdawn.slick.Image.getSubImage() can be used to quickly extract specific portions of an existing image.
SpriteSheet.java
protected void initImpl() {
    if (subImages != null) {
        return;
    }
    int tilesAcross = ((getWidth() - (margin * 2) - tw) / (tw + spacing)) + 1;
    int tilesDown = ((getHeight() - (margin * 2) - th) / (th + spacing)) + 1;
    if ((getHeight() - th) % (th + spacing) != 0) {
        tilesDown++;
    }
    subImages = new Image[tilesAcross][tilesDown];
    for (int x = 0; x < tilesAcross; x++) {
        for (int y = 0; y < tilesDown; y++) {
            // Extract parts of the main image and store them in an array as sprites
            subImages[x][y] = getSprite(x, y);
        }
    }
}
/**
 * Get a sprite at a particular cell on the sprite sheet
 *
 * @param x The x position of the cell on the sprite sheet
 * @param y The y position of the cell on the sprite sheet
 * @return The single image from the sprite sheet
 */
public Image getSprite(int x, int y) {
    target.init();
    initImpl();
    if ((x < 0) || (x >= subImages.length)) {
        throw new RuntimeException("SubImage out of sheet bounds: " + x + "," + y);
    }
    if ((y < 0) || (y >= subImages[0].length)) {
        throw new RuntimeException("SubImage out of sheet bounds: " + x + "," + y);
    }
    // Call Image.getSubImage() to get a portion of the image
    return target.getSubImage(x * (tw + spacing) + margin, y * (th + spacing) + margin, tw, th);
}
You should be able to use that as a reference. I remember once fully porting the Tiled renderer bundled with Slick to Java2D, using the good old PixelGrabber to extract sprites.
And if you decide to use SpriteSheet, you can find an example of use in org.newdawn.slick.tiled.Layer.render():
Layer.java
public void render(int x, int y, int sx, int sy, int width, int ty,
        boolean lineByLine, int mapTileWidth, int mapTileHeight) {
    for (int tileset = 0; tileset < map.getTileSetCount(); tileset++) {
        TileSet set = null;
        for (int tx = 0; tx < width; tx++) {
            if ((sx + tx < 0) || (sy + ty < 0)) {
                continue;
            }
            if ((sx + tx >= this.width) || (sy + ty >= this.height)) {
                continue;
            }
            if (data[sx + tx][sy + ty][0] == tileset) {
                if (set == null) {
                    set = map.getTileSet(tileset);
                    set.tiles.startUse();
                }
                int sheetX = set.getTileX(data[sx + tx][sy + ty][1]);
                int sheetY = set.getTileY(data[sx + tx][sy + ty][1]);
                int tileOffsetY = set.tileHeight - mapTileHeight;
                // Call SpriteSheet.renderInUse() to render the sprite cached at slot [sheetX, sheetY]
                set.tiles.renderInUse(x + (tx * mapTileWidth),
                        y + (ty * mapTileHeight) - tileOffsetY, sheetX, sheetY);
            }
        }
        if (lineByLine) {
            if (set != null) {
                set.tiles.endUse();
                set = null;
            }
            map.renderedLine(ty, ty + sy, index);
        }
        if (set != null) {
            set.tiles.endUse();
        }
    }
}
SpriteSheet.java
public void renderInUse(int x, int y, int sx, int sy) {
    // Draw the selected sprite at (x,y), using the width/height defined for this SpriteSheet
    subImages[sx][sy].drawEmbedded(x, y, tw, th);
}
Hope this helps you.
I am trying to implement a displacement algorithm, the same one used in GIMP or Photoshop. A description of this algorithm can be found here: http://docs.gimp.org/en/plug-in-displace.html
Everything works fine, very similar to Photoshop, but I have one problem :)
I can't manage to properly blend the two images (the original and the displaced one), and the resulting image has a lot of black pixels. For example:
This is the code:
Mat img = Highgui.imread(path);
img.convertTo(img, CvType.CV_32SC3);
Mat map = Highgui.imread(path);
map.convertTo(map, CvType.CV_32SC3);
Mat displacedImg;
int coefficient = 15;
int displacement, intensity;
int[] dataMap = new int[3];
int[] dataImg = new int[3];
int[] dataDisplacedImg = new int[3];
Mat temp = new Mat(img.size(), img.type());
for (int i = 0; i < img.height(); i++) {
    for (int j = 0; j < img.width(); j++) {
        map.get(i, j, dataMap);
        intensity = dataMap[0];
        displacement = (int) (((intensity - 127.5) / 127.5) * coefficient);
        img.get(i, j, dataImg);
        if (j + displacement <= img.width() && j + displacement >= 0) {
            temp.put(i, j + displacement, dataImg);
        }
    }
}
displacedImg = temp;
Do you guys have any idea what I can do in this case?
I want to use pixels from the original image instead of these black pixels.
I was thinking about transparent images, but I didn't find any solution.
Can you help?
Using the ImageJ API, I'm trying to save a composite image made up of several images laid out side by side.
I've got code that loads ImagePlus objects and saves them. But I can't figure out how to paste one image into another.
I interpret the problem as taking multiple images and stitching them together side by side to form a large one where the images may have different dimensions. The following incomplete code is one way of doing it and should get you started.
public ImagePlus composeImages(ArrayList<ImagePlus> imageList) {
    int sumWidth = 0;
    int maxHeight = 0;
    for (ImagePlus imp : imageList) {
        sumWidth = sumWidth + imp.getWidth();
        if (imp.getHeight() > maxHeight)
            maxHeight = imp.getHeight();
    }
    ImagePlus impComposite = new ImagePlus();
    ImageProcessor ipComposite = new ShortProcessor(sumWidth, maxHeight);
    for (int i = 0; i < sumWidth; i++) {
        for (int j = 0; j < maxHeight; j++) {
            ipComposite.putPixelValue(i, j, figureOutThis);
        }
    }
    impComposite.setProcessor(ipComposite);
    return impComposite;
}
You need to write an algorithm to find the pixel value (figureOutThis) to put in the composite image at (i, j). That is pretty trivial if all images have the same width, and a little more work otherwise. Happy coding!
Edit:
I should add that I am assuming they are all short (16-bit grayscale) images, since I work with medical grayscale. You can modify this for other processors.
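The pixel-copy logic left as an exercise above can be sketched with plain 2D arrays, independent of the ImageJ API; this only shows the offset bookkeeping for images of different sizes, with the area below a shorter image left at zero:

```java
public class Stitch {
    // Lay the images out side by side; rows beyond an image's height stay 0.
    static int[][] stitch(int[][][] images) {
        int sumWidth = 0, maxHeight = 0;
        for (int[][] img : images) {
            sumWidth += img[0].length;
            maxHeight = Math.max(maxHeight, img.length);
        }
        int[][] out = new int[maxHeight][sumWidth];
        int xOffset = 0;
        for (int[][] img : images) {
            for (int y = 0; y < img.length; y++)
                for (int x = 0; x < img[0].length; x++)
                    out[y][xOffset + x] = img[y][x];
            xOffset += img[0].length; // next image starts where this one ends
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] a = { { 1, 1 }, { 1, 1 } };  // 2 rows x 2 cols
        int[][] b = { { 2 }, { 2 }, { 2 } }; // 3 rows x 1 col
        int[][] combined = stitch(new int[][][] { a, b });
        System.out.println(combined.length + "x" + combined[0].length); // 3x3
        System.out.println(combined[0][2]); // 2 (first row of b)
        System.out.println(combined[2][0]); // 0 (padding below the shorter image)
    }
}
```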
This code combines/stitches a grid of images:
It assumes the images are all of the same dimensions.
ImagePlus combine(List<List<ImagePlus>> imagesGrid) {
    checkArgument(
        !imagesGrid.isEmpty() && !imagesGrid.get(0).isEmpty(),
        "Expected grid to be non-empty");
    checkArgument(
        imagesGrid.stream().map(List::size).distinct().count() == 1,
        "Expected all rows in the grid to be of the same size");
    checkArgument(
        imagesGrid.stream().flatMap(List::stream).map(ImagePlus::getWidth).distinct().count() == 1,
        "Expected all images to have the same width");
    checkArgument(
        imagesGrid.stream().flatMap(List::stream).map(ImagePlus::getHeight).distinct().count() == 1,
        "Expected all images to have the same height");
    int rows = imagesGrid.size();
    int cols = imagesGrid.get(0).size();
    int singleWidth = imagesGrid.get(0).get(0).getWidth();
    int singleHeight = imagesGrid.get(0).get(0).getHeight();
    int combinedWidth = singleWidth * cols;
    int combinedHeight = singleHeight * rows;
    ImageProcessor processor = new ColorProcessor(combinedWidth, combinedHeight);
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            ImagePlus image = imagesGrid.get(row).get(col);
            int offsetWidth = col * singleWidth;
            int offsetHeight = row * singleHeight;
            for (int w = 0; w < singleWidth; w++) {
                for (int h = 0; h < singleHeight; h++) {
                    processor.putPixel(w + offsetWidth, h + offsetHeight, image.getPixel(w, h));
                }
            }
        }
    }
    ImagePlus combinedImage = new ImagePlus();
    combinedImage.setProcessor(processor);
    return combinedImage;
}