Please answer these questions:
Is it true that if a BufferedImage is of type INT_ARGB it will be rendered at the same speed as a Toolkit-generated Image object?
Are BufferedImages and Images "equal" for games, in terms of speed and memory efficiency?
Is it correct that BufferedImages will not play an animated *.gif because the image data is buffered?
Will the animated image data stored in an Image object be lost if the Image is drawn to a BufferedImage which is then rendered to the screen through a Graphics object?
While a BufferedImage is not intrinsically animated, BufferedImages are frequently used to pre-load or pre-render complex images in order to speed up animation. This KineticModel is an example; this AnimationTest shows one way to examine rendering time.
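A minimal sketch of that pre-rendering idea (not the code from KineticModel or AnimationTest; the class and method names are illustrative): draw an expensive scene once into a BufferedImage, then just copy it on each repaint.

import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

class PreRenderedPanel extends JPanel {
    private final BufferedImage cache;

    PreRenderedPanel(int w, int h) {
        setPreferredSize(new Dimension(w, h));
        cache = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = cache.createGraphics();
        drawComplexScene(g2, w, h);      // expensive drawing happens once, not on every repaint
        g2.dispose();
    }

    private void drawComplexScene(Graphics2D g2, int w, int h) {
        g2.setColor(Color.DARK_GRAY);
        g2.fillRect(0, 0, w, h);
        g2.setColor(Color.ORANGE);
        for (int i = 0; i < 1000; i++) { // stand-in for complex rendering
            g2.drawOval((i * 13) % w, (i * 7) % h, 20, 20);
        }
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(cache, 0, 0, null);  // cheap blit of the pre-rendered image
    }
}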
I'm drawing to a Canvas using Graphics through a BufferStrategy with lines such as
g.drawImage(bufferedImage, x, y, null);
I currently have this running undecorated in a JFrame at 1920x1080, matching my laptop's resolution. I'm curious whether there is any way to alter the resolution of the rendered Graphics, in particular lowering it to increase efficiency/speed, or fitting it to a differently sized screen. There are many objects being rendered with a camera and the game runs fairly well, but any workable way to change the resolution would be useful to offer as an option in my settings.
I've researched this and found no good answers. Thank you for your time.
(Resolution changes such as for printing.)
It is best to use drawImage with a smaller image and a scaled width and height.
You could even render everything into your own BufferedImage, using the Graphics2D from BufferedImage.createGraphics, and scale it afterwards. That is not so nice for text or printing, though.
Or use Graphics2D scaling:
For complex rendering:
g.scale(2.0, 2.0);   // everything drawn from here on is enlarged 2x
...                  // draw the smaller (half-resolution) image
g.scale(0.5, 0.5);   // undo the scaling, restoring the original transform
As you might imagine, this probably does not help with memory consumption, apart from needing smaller images: at some point every pixel of the image must be held at the device's color depth, so a 256-color GIF or a 10 KB JPEG will not help.
The other way around, supporting high resolutions with tight memory, also exists; there one might use tiled images (see ImageIO).
The important thing is to prepare the image outside paintComponent/paint.
You might also go for device-compatible bitmaps if you make your own BufferedImage, but that seems rather involved (see GraphicsEnvironment).
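Putting those ideas together, here is a minimal sketch of rendering into a smaller, device-compatible BufferedImage and drawing it scaled up through the BufferStrategy. The names drawGame and bufferStrategy, and the 960x540 internal resolution, are assumptions about the asker's setup, not a definitive implementation; the usual java.awt and java.awt.image imports are assumed.

GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
// Off-screen frame at a lower internal resolution, e.g. 960x540 instead of 1920x1080.
BufferedImage frame = gc.createCompatibleImage(960, 540, Transparency.OPAQUE);

Graphics2D g2 = frame.createGraphics();
drawGame(g2);                                // render the whole scene at the small size
g2.dispose();

Graphics g = bufferStrategy.getDrawGraphics();
g.drawImage(frame, 0, 0, 1920, 1080, null);  // stretch to fill the full-resolution canvas
g.dispose();
bufferStrategy.show();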
I have a project using a particle engine in Java Swing. The particles use an image instead of a basic shape (the image is entirely black with a transparent background), and they share the image so memory use is low. With the image (a BufferedImage) being shared, how can I give the particles different colors?
I can make it work if I create a copy of a preloaded image and change the black to the color I want, but then each particle has its own image and it takes up a ton of memory.
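A minimal sketch of the copy-and-tint approach described in that paragraph, assuming TYPE_INT_ARGB images; tintCopy and baseImage are illustrative names, and java.awt.Color and java.awt.image.BufferedImage imports are assumed.

static BufferedImage tintCopy(BufferedImage baseImage, Color color) {
    BufferedImage copy = new BufferedImage(
            baseImage.getWidth(), baseImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
    int rgb = color.getRGB() & 0x00FFFFFF;      // RGB of the tint, alpha stripped
    for (int y = 0; y < baseImage.getHeight(); y++) {
        for (int x = 0; x < baseImage.getWidth(); x++) {
            int alpha = baseImage.getRGB(x, y) & 0xFF000000; // keep the original alpha
            copy.setRGB(x, y, alpha | rgb);                  // replace black with the tint
        }
    }
    return copy;
}

Since particles typically use only a handful of colors, caching one tinted copy per distinct color (rather than one per particle) would keep the memory cost proportional to the number of colors, not the number of particles.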
Worst case, I'll probably switch to LWJGL or TWL, but I already have a lot of content in the program, made before the particle engine, that I would need to remake :/
I have some very small images (20 by 20 pixels) which I am drawing onto a canvas, using matrices, via Canvas.drawBitmap(Bitmap, Matrix, Paint). The problem is that I am scaling them up about 5-10 times when drawing, and they are automatically re-sampled with smoothing. What I want is nearest-neighbour re-sampling (so the result looks pixelated), not smoothing, and I cannot find a way to change this. Also, creating another, larger image to store a properly re-sampled picture is not an option, since I am under memory constraints. Thanks for any help!
You need to set up the paint you pass to drawBitmap, like so:
paint.setFilterBitmap(false);
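For context, a minimal sketch of how that fits into drawing, assuming a custom View's onDraw and an already-loaded sprite Bitmap; the field name and the 8x scale factor are illustrative:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    Paint paint = new Paint();
    paint.setFilterBitmap(false);   // nearest-neighbour sampling: scaled pixels stay crisp
    Matrix matrix = new Matrix();
    matrix.setScale(8f, 8f);        // blow the 20x20 sprite up 8x
    canvas.drawBitmap(sprite, matrix, paint); // 'sprite' is the Bitmap loaded elsewhere in this View
}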
In what instance would I want to use ImageIcon to represent a picture file rather than an Image object? I've been searching and I've seen people say that you would use an ImageIcon object when dealing with images that will be part of the GUI, but I still don't understand the implications of this. In other words, what is the actual difference between the two object types and what situations are they each suited for?
Image is an object that represents a bitmap: an array of pixels of different colors.
Icon is an object that can draw a rectangular piece of graphics. Since it is an interface (and a simple one too), you can imagine many different types of icons: drawing predefined vector graphics, generating images on the fly etc. Therefore it is a useful abstraction and is used by Swing components (buttons, labels).
ImageIcon is an object that IS an Icon but HAS-A Image; that is, it draws graphics based on a specific Image.
When you say "why should I be using an ImageIcon instead of Image" you miss the point: in fact you are using an Image either way.
An Image is an object representing the data model of a picture. An ImageIcon is a Swing Icon implementation that draws an Image on the screen; you have to provide it with the appropriate Image to draw (either by passing in an existing Image or by giving it enough information to find and load one).
The relationship is similar to that between a String and a JTextField; one is the representation of the data, and the other is the graphical component that draws it on the screen.
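A minimal sketch of that relationship (the file name is just a placeholder): the Image holds the pixel data, the ImageIcon adapts it to Swing's Icon interface, and a component draws it on screen.

import java.awt.*;
import javax.swing.*;

public class IconDemo {
    public static void main(String[] args) {
        Image image = Toolkit.getDefaultToolkit().getImage("sprite.png"); // the data model
        Icon icon = new ImageIcon(image);   // an Icon backed by that Image
        JLabel label = new JLabel(icon);    // a component that paints the Icon

        JFrame frame = new JFrame("Icon demo");
        frame.add(label);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}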
The point of the implementation is to avoid holding up the Swing thread.
Images that are created from a URL, filename or byte array are preloaded using MediaTracker to monitor the loaded state of the image.
Basically, you can then set an ImageIcon on a button without the image actually being forced to load beforehand.
This can be seen by using a very large icon and setting the frame's icon to that image: once the frame is set visible, the icon may take a few seconds to actually appear.
I have a graphics system for Java which allows objects to be "wallpapered" by specifying multiple images, which can have (relatively) complex alignment and resizing options applied.
In order to perform adequately (especially on very low-powered devices), I paint the images to an internal image when the wallpaper is first painted, and then copy that composite image to the target graphics context to get it onto the screen. The composite is recreated only if the object is resized, so the only work for subsequent repaints is to copy the clipped region from the composite to the target graphics context.
The solution works really well, except that when I have PNG images with alpha-channel transparency, the alpha channel is lost when painting the composite - that is, the composite ends up with all pixels completely opaque. So the subsequent copy to the on-screen graphics context does not let what's behind the wallpapered object show through.
I did manage to use an RGBImageFilter to filter out completely transparent pixels, but I can't see a way to make blended transparency work with that approach.
Does anyone know of a way I can paint the images with the alpha-channel intact, and combined if two pixels with alpha values overlap?
What type of Image do you use for the composite image?
You should use a BufferedImage and set its type to TYPE_INT_ARGB, which supports translucency.
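A minimal sketch of building such an ARGB composite; layers, width, and height are placeholders for the wallpaper images and the object's size.

BufferedImage composite = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = composite.createGraphics();
// A new TYPE_INT_ARGB image starts fully transparent, so the PNGs' alpha survives
// as the layers are composited over each other.
for (BufferedImage layer : layers) {
    g2.drawImage(layer, 0, 0, null);
}
g2.dispose();
// Drawing 'composite' onto the on-screen Graphics then lets the background show through.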