I'm currently coding a project using LWJGL 3. Since GLFW 3.2 it is possible to set a window icon. I used it as you can see below, but I keep getting the error shown further down, and I have no clue why. What am I doing wrong?
What I have tried already:
I have made sure that the file is present at "res/img/icon.png".
The PNGDecoder (found here) works, as I use it to load textures and it does so without any problems.
Code:
PNGDecoder dec = null;
try {
    dec = new PNGDecoder(new FileInputStream("res/img/icon.png"));
    int width = dec.getWidth();
    int height = dec.getHeight();
    ByteBuffer buf = BufferUtils.createByteBuffer(width * height * 4);
    dec.decode(buf, width * 4, PNGDecoder.Format.RGBA);
    buf.flip();
    Buffer images = new Buffer(buf);
    GLFW.glfwSetWindowIcon(window, images);
} catch (IOException e) {
    e.printStackTrace();
}
Here's the error that I get:
[LWJGL] GLFW_PLATFORM_ERROR error
Description : Win32: Failed to create RGBA bitmap
Stacktrace :
org.lwjgl.glfw.GLFW.nglfwSetWindowIcon(GLFW.java:1802)
org.lwjgl.glfw.GLFW.glfwSetWindowIcon(GLFW.java:1829)
ch.norelect.emulator.designer.Designer.initialize(Designer.java:81)
ch.norelect.emulator.designer.Designer.start(Designer.java:48)
ch.norelect.emulator.Main.main(Main.java:27)
You are passing the raw pixel data as a GLFWImage.Buffer, but GLFW needs the width and height as well. GLFWImage.Buffer is just a convenience class for putting several GLFWImage structs next to each other, each of which holds the properties of one image.
GLFWImage image = GLFWImage.malloc();
image.set(width, height, buf);
GLFWImage.Buffer images = GLFWImage.malloc(1);
images.put(0, image);
glfwSetWindowIcon(window, images);
images.free();
image.free();
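As a side note (not required for the fix above): if you would rather not pair up the malloc/free calls yourself, the same thing can be done with LWJGL's MemoryStack. This is just a sketch, assuming a reasonably recent LWJGL 3 build where GLFWImage.malloc(int, MemoryStack) exists (older releases use GLFWImage.mallocStack(1, stack) instead):
try (MemoryStack stack = MemoryStack.stackPush()) { // org.lwjgl.system.MemoryStack
    GLFWImage.Buffer icons = GLFWImage.malloc(1, stack); // freed automatically when the stack frame pops
    icons.get(0).set(width, height, buf);
    GLFW.glfwSetWindowIcon(window, icons);
}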
I'm using JVM 14.0.2 with the VSCode IDE.
The purpose of the code is to convert the original input image to a grayscale image and save the new gray image to the desired location.
The code runs with no exceptions, and I tried to print some progress lines (System.out.println("Saving completed...");); those lines were printed wherever I plugged them in. However, when I go to the selected file path to look for the saved grayscale image, I do not see the new image in the directory.
I then tried the BlueJ IDE, and the gray image was saved. Can you check whether this is an issue with the VSCode development environment or with my code? Or do I need a different class/method to edit images in VSCode? Thanks for your help. Let me know if you need more details.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class GrayImage {
    public static void main(String args[]) throws IOException {
        BufferedImage img = null;
        // read image
        try {
            File f = new File("C:\\original.jpg");
            img = ImageIO.read(f);
            // get image width and height
            int width = img.getWidth();
            int height = img.getHeight();
            BufferedImage grayimg = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            // convert to grayscale
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    Color color = new Color(img.getRGB(x, y));
                    int r = color.getRed();
                    int g = color.getGreen();
                    int b = color.getBlue();
                    // calculate average
                    int avg = (r + g + b) / 3;
                    // replace RGB value with avg
                    Color newColor = new Color(avg, avg, avg, color.getAlpha());
                    grayimg.setRGB(x, y, newColor.getRGB());
                }
            }
            // write image
            System.out.println("Trying to write the new image...");
            File newf = new File("H:\\gray.jpg");
            ImageIO.write(grayimg, "jpg", newf);
            System.out.println("Finished writing the new image...");
        } catch (IOException e) {
            System.out.println(e);
        }
    } // main() ends here
}
If I understand this problem correctly, the important lesson here is that ImageIO.write(...) returns a boolean indicating whether it succeeded or not. You should handle the case where the value is false, even if no exception is thrown. For reference, see the API doc.
Something like:
if (!ImageIO.write(grayimg, "JPEG", newf)) {
    System.err.println("Could not store image as JPEG: " + grayimg);
}
Now, as to why your code works in one JRE and not in another: it is probably related to the image being of type TYPE_INT_ARGB (i.e. it contains an alpha channel). Writing such images as JPEG used to work in Oracle JDKs/JREs, but support was removed:
Previously, the Oracle JDK used proprietary extensions to the widely used IJG JPEG library in providing optional color space support.
This was used to support PhotoYCC and images with an alpha component on both reading and writing. This optional support has been removed in Oracle JDK 11.
The fix is easy; as your source is a JPEG file, it probably does not contain an alpha component anyway, so you could change to a different type with no alpha. As you want a gray image, I believe the best match would be:
BufferedImage grayimg = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
But TYPE_INT_RGB or TYPE_3BYTE_BGR should work too, should you later run into the same problem with color images.
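As a further side note (not needed for the fix): once the destination image is TYPE_BYTE_GRAY, you can also let Java 2D do the conversion instead of looping over the pixels yourself. A minimal sketch, reusing img, width and height from the question; note that drawing onto a gray image uses standard luminance weighting rather than the plain (r + g + b) / 3 average:
BufferedImage grayimg = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
Graphics2D g = grayimg.createGraphics(); // java.awt.Graphics2D
g.drawImage(img, 0, 0, null);            // the color conversion happens here
g.dispose();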
This post is a follow-up to:
ImageIO.read can't read ByteArrayInputStream (image processing)
Similar to the OP, I was getting a null pointer whenever I tried to read from my ByteArrayInputStream (as it should, as explained by the top answer). Noticing this, I implemented the code from haraldK's answer in the post above to correct that issue, but I have run into another problem. I have the following code:
byte[] imageInByteArr = ...
// convert byte array back to BufferedImage
int width = 1085;
int height = 696;
BufferedImage convertedGrayScale = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
convertedGrayScale.getRaster().setDataElements(0, 0, width, height, imageInByteArr);
try {
    ImageIO.write(convertedGrayScale, "jpg", new File("C:\\test.jpg"));
}
catch (IOException e) {
    System.err.println("IOException: " + e);
}
Upon execution, I run into a java.lang.ArrayIndexOutOfBoundsException: null error on the line right before the try/catch block. My first thought was that this was caused by not having a file in my C drive called test.jpg. I adjusted for that, yet I am still getting the same exception at convertedGrayScale.getRaster().setDataElements(0, 0, width, height, imageInByteArr);. Why is this happening?
On another note, aside from writing the file using ImageIO, is there ANY other way for me to convert the byte[] into a visual representation of an image? I have tried to just print the array to a file and save it as a '.jpg', but the file will not open. Any suggestions will help. To summarize, I am looking to convert a byte[] into an image and either save it OR render it in a browser, whichever is easier/doable.
It appears that your imageInByteArr is too short. I was able to reproduce the same error you get with this:
public static void main(String[] args) {
    int width = 1085;
    int height = 696;
    byte[] imageInByteArr = new byte[width];
    BufferedImage convertedGrayScale = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    convertedGrayScale.getRaster().setDataElements(0, 0, width, height, imageInByteArr);
}
When using width * height for the size of imageInByteArr, or anything bigger, I get no error; but when it is smaller than the region you are trying to update, it throws the exception.
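In other words, a TYPE_BYTE_GRAY image stores one byte per pixel, so the array passed to setDataElements must hold at least width * height bytes. A minimal sketch that does not throw, using made-up gradient data purely for illustration:
int width = 1085;
int height = 696;
// one byte per pixel for TYPE_BYTE_GRAY, so at least width * height bytes
byte[] imageInByteArr = new byte[width * height];
for (int i = 0; i < imageInByteArr.length; i++) {
    imageInByteArr[i] = (byte) (i % 256); // dummy data, just for illustration
}
BufferedImage convertedGrayScale = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
convertedGrayScale.getRaster().setDataElements(0, 0, width, height, imageInByteArr); // no exception now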
I have been stuck on this problem for a couple of days. I want to make an Android app that takes a picture and extracts HOG features of that image for future processing. The problem is that the code below always returns the HOG descriptors with zero values.
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    Log.i(TAG, "Saving a bitmap to file");
    // The camera preview was automatically stopped. Start it again.
    mCamera.startPreview();
    mCamera.setPreviewCallback(this);
    this.disableView();
    Bitmap bitmapPicture = BitmapFactory.decodeByteArray(data, 0, data.length);
    myImage = new Mat(bitmapPicture.getWidth(), bitmapPicture.getHeight(), CvType.CV_8UC1);
    Utils.bitmapToMat(bitmapPicture, myImage);
    Bitmap bm = Bitmap.createBitmap(myImage.cols(), myImage.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(myImage.clone(), bm);
    // find the imageview and draw it!
    ImageView iv = (ImageView) getRootView().findViewById(R.id.imageView);
    this.setVisibility(SurfaceView.GONE);
    iv.setVisibility(ImageView.VISIBLE);
    Mat forHOGim = new Mat();
    org.opencv.core.Size sz = new org.opencv.core.Size(64, 128);
    Imgproc.resize(myImage, myImage, sz);
    Imgproc.cvtColor(myImage, forHOGim, Imgproc.COLOR_RGB2GRAY);
    //forHOGim = myImage.clone();
    MatOfFloat descriptors = new MatOfFloat(); // an empty vector of descriptors
    org.opencv.core.Size winStride = new org.opencv.core.Size(64 / 2, 128 / 2); // 50% overlap in the sliding window
    org.opencv.core.Size padding = new org.opencv.core.Size(0, 0); // no padding around the image
    MatOfPoint locations = new MatOfPoint(); // an empty vector of locations, so perform full search
    //HOGDescriptor hog = new HOGDescriptor();
    HOGDescriptor hog = new HOGDescriptor(sz, new org.opencv.core.Size(16, 16), new org.opencv.core.Size(8, 8), new org.opencv.core.Size(8, 8), 9);
    Log.i(TAG, "Constructed");
    hog.compute(forHOGim, descriptors, new org.opencv.core.Size(16, 16), padding, locations);
    Log.i(TAG, "Computed");
    Log.i(TAG, String.valueOf(hog.getDescriptorSize()) + " " + descriptors.size());
    Log.i(TAG, String.valueOf(descriptors.get(12, 0)[0]));
    double dd = 0.0;
    for (int i = 0; i < 3780; i++) {
        if (descriptors.get(i, 0)[0] != dd) Log.i(TAG, "NOT ZERO");
    }
    Bitmap bm2 = Bitmap.createBitmap(forHOGim.cols(), forHOGim.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(forHOGim, bm2);
    iv.setImageBitmap(bm2);
}
So in logcat I never get the NOT ZERO message. The problem is that whatever changes I make to this code, I always get zeros in the descriptors MatOfFloat... And the strange part is that if I uncomment HOGDescriptor hog = new HOGDescriptor(); and use that one instead of the one I am using now, my application crashes...
The rest of the code runs fine; the picture is always taken and displayed in my image view as I expect.
Any help will be appreciated.
Thanks in advance.
The problem was inside the library. When I executed the same code with OpenCV 2.4.13 for Linux instead of Android, it worked as expected. So I hope they will fix any problems with the HOGDescriptor in OpenCV4Android.
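For reference, the 3780 used as the loop bound in the question is indeed the expected descriptor length for those HOGDescriptor parameters, which can be sanity-checked like this:
// 64x128 window, 16x16 block, 8x8 block stride, 8x8 cell, 9 bins
int blocksPerRow = (64 - 16) / 8 + 1;    // 7 block positions horizontally
int blocksPerCol = (128 - 16) / 8 + 1;   // 15 block positions vertically
int cellsPerBlock = (16 / 8) * (16 / 8); // 4 cells per block
int descriptorSize = blocksPerRow * blocksPerCol * cellsPerBlock * 9; // 7 * 15 * 4 * 9 = 3780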
I have an application that captures video of the screen and saves it to a file. I give the user the ability to pick between 480, 720, and "Full Screen" video sizes. A 480 will record in a small box on the screen, 720 will record in a larger box, and of course "Full Screen" will record in an even larger box. However, this full screen box is NOT the actual screen resolution. It is the app window size, which happens to be around 1700x800. The video tool works perfectly for the 480 and 720 options, and will also work if "Full Screen" is overridden to be the entire screen of 1920x1080.
My question: are only certain sizes allowed? Does it have to fit a certain aspect ratio, or be an "acceptable" resolution? My code, below, is modified from the xuggle CaptureScreenToFile.java file (the location of the problem is noted in the comments):
public void run() {
    try {
        String parent = "Videos";
        String outFile = parent + "example" + ".mp4";
        file = new File(outFile);
        // This is the robot for taking a snapshot of the screen. It's part of Java AWT
        final Robot robot = new Robot();
        final Rectangle customResolution = where; // defined resolution (custom record size - in this case, 1696x813)
        final Toolkit toolkit = Toolkit.getDefaultToolkit();
        final Rectangle fullResolution = new Rectangle(toolkit.getScreenSize()); // full resolution (1920x1080)
        // First, let's make a IMediaWriter to write the file.
        final IMediaWriter writer = ToolFactory.makeWriter(outFile);
        writer.setForceInterleave(false);
        // We tell it we're going to add one video stream, with id 0,
        // at position 0, and that it will have a fixed frame rate of
        // FRAME_RATE.
        writer.addVideoStream(0, 0, FRAME_RATE, customResolution.width, customResolution.height); // if I use fullResolution, it works just fine - but captures more of the screen than I want.
        // Now, we're going to loop
        long startTime = System.nanoTime();
        while (recording) {
            // take the screen shot
            BufferedImage screen = robot.createScreenCapture(fullResolution); // tried capturing using customResolution, but did not work. Instead, this captures the full screen, then tries to trim it below (also does not work).
            // convert to the right image type
            BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR); // Do I need to convert after trimming?
            BufferedImage trimmedScreen = bgrScreen.getSubimage((int) customResolution.getX(), (int) customResolution.getY(), (int) customResolution.getWidth(), (int) customResolution.getHeight());
            // encode the image
            try {
                // ~~~~Problem is this line of code!~~~~ Error noted below.
                writer.encodeVideo(0, trimmedScreen, System.nanoTime() - startTime, TimeUnit.NANOSECONDS); // tried using trimmedScreen and bgrScreen
            } catch (Exception e) {
                e.printStackTrace();
            }
            // sleep for framerate milliseconds
            Thread.sleep((long) (1000 / FRAME_RATE.getDouble()));
        }
        // Finally we tell the writer to close and write the trailer if
        // needed
        writer.close();
    } catch (Throwable e) {
        System.err.println("an error occurred: " + e.getMessage());
    }
}

public static BufferedImage convertToType(BufferedImage sourceImage, int targetType) {
    BufferedImage image;
    // if the source image is already the target type, return the source image
    if (sourceImage.getType() == targetType)
        image = sourceImage;
    // otherwise create a new image of the target type and draw the new image
    else {
        image = new BufferedImage(sourceImage.getWidth(), sourceImage.getHeight(), targetType);
        image.getGraphics().drawImage(sourceImage, 0, 0, null);
    }
    return image;
}
Error:
java.lang.RuntimeException: could not open stream com.xuggle.xuggler.IStream@2834912[index:0;id:0;streamcoder:com.xuggle.xuggler.IStreamCoder@2992432[codec=com.xuggle.xuggler.ICodec@2930320[type=CODEC_TYPE_VIDEO;id=CODEC_ID_H264;name=libx264;];time base=1/50;frame rate=0/0;pixel type=YUV420P;width=1696;height=813;];framerate:0/0;timebase:1/90000;direction:OUTBOUND;]: Operation not permitted
Note: The file is successfully created, but has size of zero, and cannot be opened by Windows Media Player, with the following error text:
Windows Media Player cannot play the file. The Player might not support the file type or might not support the codec that was used to compress the file.
Sorry for the wordy question. I'm interested in learning WHAT and WHY, not just a solution. So if anyone can explain why it isn't working, or point me towards material to help, I'd appreciate it. Thanks!
Try making both dimensions even numbers, e.g. 1696x812.
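As for the WHY: the stream is encoded as H.264 with the YUV420P pixel format (both visible in the error message), and 4:2:0 chroma subsampling halves the chroma plane in both directions, so libx264 generally wants the frame width and height to be divisible by 2. 1696x813 has an odd height and is rejected, while 1920x1080 and 1696x812 are fine. One way to guard against this is a sketch like the following, reusing the variable names from the question; the captured frames then need to be trimmed to the same even size before encoding:
// round the requested size down to the nearest even numbers before creating the stream
int evenWidth = customResolution.width & ~1;   // 1696 stays 1696
int evenHeight = customResolution.height & ~1; // 813 becomes 812
writer.addVideoStream(0, 0, FRAME_RATE, evenWidth, evenHeight);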
I am trying to implement a simple class that will allow a user to crop an image to be used for their profile picture. This is a java web application.
I have done some searching and found that java.awt has a BufferedImage class, and this appears (at first glance) to be perfect for what I need. However, it seems that there may be a bug in it (or perhaps in Java itself, as I have seen suggested) that means the cropping does not always work correctly.
Here is the code I am using to try to crop my image:
BufferedImage profileImage = getProfileImage(form, modelMap);
if (profileImage != null) {
    BufferedImage croppedImage = profileImage
            .getSubimage(form.getStartX(), form.getStartY(), form.getWidth(), form.getHeight());
    System.err.println(form.getStartX());
    System.err.println(form.getStartY());
    File finalProfileImage = new File(form.getProfileImage());
    try {
        String imageType = getImageType(form.getProfileImage());
        ImageIO.write(croppedImage, imageType, finalProfileImage);
    }
    catch (Exception e) {
        logger.error("Unable to write cropped image", e);
    }
}
return modelAndView;
}

protected BufferedImage getProfileImage(CropImageForm form, Map<String, Object> modelMap) {
    String profileImageFileName = form.getProfileImage();
    if (validImage(profileImageFileName) && imageExists(profileImageFileName)) {
        BufferedImage image = null;
        try {
            image = getCroppableImage(form, ImageIO.read(new File(profileImageFileName)), modelMap);
        }
        catch (IOException e) {
            logger.error("Unable to crop image, could not read profile image: [" + profileImageFileName + "]");
            modelMap.put("errorMessage", "Unable to crop image. Please try again");
            return null;
        }
        return image;
    }
    modelMap.put("errorMessage", "Unable to crop image. Please try again.");
    return null;
}

private boolean imageExists(String profileImageFileName) {
    return new File(profileImageFileName).exists();
}

private BufferedImage getCroppableImage(CropImageForm form, BufferedImage image, Map<String, Object> modelMap) {
    int cropHeight = form.getHeight();
    int cropWidth = form.getWidth();
    if (cropHeight <= image.getHeight() && cropWidth <= image.getWidth()) {
        return image;
    }
    modelMap.put("errorMessage", "Unable to crop image. Crop size larger than image.");
    return null;
}

private boolean validImage(String profileImageFileName) {
    String extension = getImageType(profileImageFileName);
    return (extension.equals("jpg") || extension.equals("gif") || extension.equals("png"));
}

private String getImageType(String profileImageFileName) {
    int indexOfSeparator = profileImageFileName.lastIndexOf(".");
    return profileImageFileName.substring(indexOfSeparator + 1);
}
The form referred to in this code snippet is a simple POJO which contains integer values for the upper-left corner at which to start cropping (startX and startY) and the width and height to make the new image.
What I end up with, however, is a cropped image that always starts at 0,0 rather than at the startX and startY position. I have inspected the code to make sure the proper values are being passed in to the getSubimage method, and they appear to be.
Are there simple alternatives to using BufferedImage for cropping an image? I have taken a brief look at JAI. I would rather add a jar to my application than update the JDK installed on all of the production boxes, as well as any development/testing servers and local workstations.
My criteria for selecting an alternative are:
1) simple to use for cropping an image, as this is all I will be using it for
2) if not built into Java or Spring, the jar should be small and easily deployable in a web app
Any suggestions?
Note: The comment above that there is an issue with BufferedImage or Java was something I saw in this posting: Guidance on the BufferedImage.getSubimage(int x, int y, int w, int h) method?
I have used getSubimage() numerous times before without any problems. Have you added a System.out.println(form.getStartX() + " " + form.getStartY()) before that call to make sure they're not both 0?
Also, are you at least getting an image that is form.getWidth() x form.getHeight()?
Do make sure you are not modifying/disposing profileImage in any way since the returned BufferedImage shares the same data array as the parent.
The best way, if you want a completely new and independent BufferedImage, is to simply draw it across:
BufferedImage croppedImage = new BufferedImage(form.getWidth(), form.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics g = croppedImage.getGraphics();
// the 10-argument drawImage takes corner coordinates, so the source rectangle
// runs from (startX, startY) to (startX + width, startY + height)
g.drawImage(profileImage,
        0, 0, form.getWidth(), form.getHeight(),
        form.getStartX(), form.getStartY(),
        form.getStartX() + form.getWidth(), form.getStartY() + form.getHeight(),
        null);
g.dispose();
You can do it in this manner as well (the code is not 100% tested, as I adapted it from an existing app of mine):
import javax.imageio.*;
import java.awt.image.*;
import java.awt.geom.*;
...
BufferedImage img = ImageIO.read(imageStream);
...
/*
* w = image width, h = image height, l = crop left, t = crop top
*/
ColorModel dstCM = img.getColorModel();
BufferedImage dst = new BufferedImage(dstCM, dstCM.createCompatibleWritableRaster(w, h), dstCM.isAlphaPremultiplied(), null);
Graphics2D g = dst.createGraphics();
g.drawRenderedImage(img, AffineTransform.getTranslateInstance(-l,-t));
g.dispose();
java.io.File outputfile = new java.io.File(sessionScope.get("absolutePath") + java.io.File.separator + sessionScope.get("lastUpload"));
ImageIO.write(dst, "png", outputfile);
Thanks to all who replied. It turns out that the problem was not in the cropping code at all.
When I displayed the image to be cropped, I resized it to fit nicely into my layout, then used a JavaScript cropping tool to figure out the coordinates to crop.
Since I had resized my image but didn't take the resizing into account when determining the cropping coordinates, I ended up with coordinates that appeared to coincide with the top-left corner.
I have changed the display to no longer resize the image, and now cropping is working beautifully.
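For anyone who hits the same thing but needs to keep the resized display, the crop coordinates can be mapped back to the original image's coordinate space before calling getSubimage. A minimal sketch; displayedWidth and displayedHeight are hypothetical variables holding the size the image was actually rendered at in the page:
// map crop coordinates from the displayed (scaled) image back to the original image
double scaleX = (double) profileImage.getWidth() / displayedWidth;
double scaleY = (double) profileImage.getHeight() / displayedHeight;

int cropX = (int) Math.round(form.getStartX() * scaleX);
int cropY = (int) Math.round(form.getStartY() * scaleY);
int cropW = (int) Math.round(form.getWidth() * scaleX);
int cropH = (int) Math.round(form.getHeight() * scaleY);

BufferedImage croppedImage = profileImage.getSubimage(cropX, cropY, cropW, cropH);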