I have a Java applet, which is a form that shapes (Rect, Oval, Line) are drawn onto.
Each shape is represented by 2 points and can draw itself to the form.
When the JApplet form resizes, I need to resize the shapes also while keeping the aspect ratio.
I haven't found a high-quality solution to this problem.
I tried to write my own, but it turned out lousy when tested. Can someone post example code for doing this, please?
You give no details but I am guessing that when you scale the object moves as well as scales. You will need to know something about graphics transformations. There are two main approaches:
- resize the object by scaling its coordinates
- apply a transformation matrix to the object. This is probably the better way.
If you are using Graphics2D (http://java.sun.com/j2se/1.4.2/docs/api/java/awt/Graphics2D.html) you will have access to transformations.
When you resize the window you will want to work out which dimension (width or height) has the most effect on the scale and take the smaller of the two - this will stop the object expanding too much. Calculate a single scale and apply it equally to X and Y.
You will probably not have the object on the origin. In this case you need to carry out three operations:
- translate the object to the origin
- rescale it
- translate it back to where it was
These operations can be combined into a single matrix.
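For example, here is a minimal sketch of that idea using Graphics2D and an AffineTransform. It assumes the shapes were designed in a fixed baseWidth x baseHeight coordinate space and that each shape can draw itself; the names baseWidth, baseHeight, shapes and DrawableShape are hypothetical, not from your code.

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;

// inside your JApplet (or JPanel) subclass
public void paint(Graphics g) {
    super.paint(g);
    Graphics2D g2 = (Graphics2D) g;

    // One uniform scale keeps the aspect ratio: take the smaller of the two.
    double scale = Math.min(getWidth() / (double) baseWidth,
                            getHeight() / (double) baseHeight);

    AffineTransform at = new AffineTransform();
    // Centre the scaled drawing inside the current window, then scale.
    at.translate((getWidth() - baseWidth * scale) / 2.0,
                 (getHeight() - baseHeight * scale) / 2.0);
    at.scale(scale, scale);
    g2.transform(at);

    // Shapes keep drawing themselves in their original coordinates;
    // the single combined transform does the rest.
    for (DrawableShape s : shapes) {
        s.draw(g2);
    }
}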
I cannot be more definite as I do not know the details of your problem but this should give you some pointers.
Try out the examples in:
http://www.java2s.com/Code/Java/2D-Graphics-GUI/Linetransformationrotationshearscale.htm
which should show you some of the effects.
I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders I found want to do something with textures, but I don't use textures, only colors. I looked at FXAA most of the time, so is there an anti-aliasing algorithm that works with just colors? The game this is for looks blocky like Minecraft, but it only uses colors and cubes of different sizes.
I hope someone can help me.
Greetings
Anti-aliasing has nothing specifically to do with either textures or colors.
Proper anti-aliasing is about sample rate, which while highly technical can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).
Multisample Anti-Aliasing (MSAA) will work nicely for you. It only anti-aliases polygon edges and does nothing for texture aliasing on the interior of a polygon; since you are not using textures, you do not need to worry about aliasing inside a polygon.
Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.
To use MSAA, you need:
1. A framebuffer with at least 2 samples
2. Multisample rasterization enabled
Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.
Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than need be for this discussion.
(2) is as simple as calling glEnable (GL_MULTISAMPLE).
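For example, assuming LWJGL 2 is what you used to create the window (other frameworks expose an equivalent sample-count parameter), a rough sketch would be:

import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.PixelFormat;

import static org.lwjgl.opengl.GL11.glEnable;
import static org.lwjgl.opengl.GL13.GL_MULTISAMPLE;

public class MsaaWindow {
    public static void main(String[] args) throws Exception {
        Display.setDisplayMode(new DisplayMode(800, 600));
        // (1) request a framebuffer with 4 samples per pixel
        Display.create(new PixelFormat().withSamples(4));
        // (2) enable multisample rasterization
        glEnable(GL_MULTISAMPLE);

        while (!Display.isCloseRequested()) {
            // ... render your colored cubes here ...
            Display.update();
        }
        Display.destroy();
    }
}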
1) Re-Draw Vs Draw
Kind of a philosophical question, but... what is the "correct" or "accepted" way to render a game (2D; I understand how OGL perspectives work...) at different resolutions? Should I include separate sizes for my images (like Android APKs) and resize each object individually at draw on one canvas, or should I draw on a set-resolution drawing canvas, then resize that image onto another display canvas? I'm speaking generally, but if you need me to be specific, I'm using Java to build the engine.
Foreseeable benefits/issues:
#1) Resize at Draw
+ No additional drawing step
+ Sweet resolution
- Possible math/physics/placement issues
- Tons of math each step for scale
- Lots of resources
#2) Resize at Render
+ No additional math; one step
+ One set of images; smaller res. package
+ One set canvas size (easier to do math/phys./placement)
- Additional drawing step
- Poor resolution =(
It would seem that #2 is the obvious choice because of the number of benefits vs issues, but... is it? Is there a standard way to resize 2D games?
2) JOGL + Java2D + Java Swing
Would it be cumbersome to use JOGL, Java2D, and Java Swing at the same time? Would it be worth it to do 2D or layouts in JOGL? Why or why not?
EDIT: Using a BufferedImage to draw on and rendering the BufferedImage to the size of the panel with respect to aspect ratio is incredibly inefficient in Swing. Apparently it's better to draw immediately to the panel, while resizing each image/element individually. Not my first hypothesis...
EDIT 2: Silly me... just scale and translate the graphics context to the size of the adjusted resolution before any other operations. The performance boost is super-dooper awesome. THIS is the correct answer to the question. DRAW ONCE, to scale/translation. B)
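For anyone landing here later, a minimal sketch of that EDIT 2 approach in Swing might look like the following. It assumes a fixed internal resolution GAME_WIDTH x GAME_HEIGHT and a JPanel subclass; those names and values are hypothetical.

import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JPanel;

public class GamePanel extends JPanel {
    private static final int GAME_WIDTH = 640;   // hypothetical fixed internal resolution
    private static final int GAME_HEIGHT = 480;

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;

        // One uniform scale keeps the aspect ratio; the translation letterboxes
        // the game area inside the panel.
        double scale = Math.min(getWidth() / (double) GAME_WIDTH,
                                getHeight() / (double) GAME_HEIGHT);
        g2.translate((getWidth() - GAME_WIDTH * scale) / 2.0,
                     (getHeight() - GAME_HEIGHT * scale) / 2.0);
        g2.scale(scale, scale);

        // Everything below draws once, in fixed GAME_WIDTH x GAME_HEIGHT coordinates.
        drawGame(g2);
    }

    private void drawGame(Graphics2D g2) {
        // ... game rendering at the fixed internal resolution ...
    }
}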
I can't answer your second question fully as I do not have much experience with JOGL or Java2D, but I don't see any reason for them to ever conflict or be cumbersome.
For your first question, I can definitely say that it depends. What's your target audience? Is your game memory-intensive (ever notice that many games have a high-res/low-res option)? Is this a game that will be available for vastly different screen sizes? If so, you might want to provide 2-3 different "packages" of your assets, each scaled to a different size from the original (the largest one). The math to draw the images isn't as much as you think.
In addition:
If you build the game the right way, you wouldn't have to do much math at all. If you have some sort of Camera class that takes care of the viewing of your GameWorld then you simply have to scale the Camera's image instead of scaling each image independently.
Java Beginner.
Hi, I haven't got much experience with GUI programming, so I'm after some hints on how to tackle this next project. Hopefully I can explain myself well enough.
(Photo reference: mobilehomeservicesltd.com - see this photo as a reference for what follows.)
This GUI aspect of my program will be a 2D bird's-eye view of a static caravan and veranda/balcony made with basic shapes. So typically the caravan will be represented by a rectangle (just a rectangle; ignore the fill in the diagram). Sometimes static caravans have shaped fronts, so that would be represented by a polygon as opposed to a rectangle. All in scale, dependent on the user's input, as all caravans have their individual dimensions.
After the caravan unit is in place I then need to draw another polygon surrounding the caravan representing the balcony/veranda, all to scale. Understand so far?? Good. Here comes the challenge part (for me anyway).
On the polygon representing the balcony I need to be able to draw lines to represent the decking that will be nailed down as a surface (like the diagram above). Now because the caravan could possibly have a shaped front, the decking must follow the shape of the caravan. In other words, if the caravan has an oval or angled front, the decking will have to be cut to follow that shape.
Without boring you all too much with detail, the idea is to let the user decide whether they want the decking fitted so that it runs in the same direction as the caravan, or against it. Once the user has decided, I will then attempt to calculate from the drawing (as it will be to scale) how many full lengths of decking it will take to build this veranda (among various other items).
Now my knowledge of GUIs is limited, but I'm up to scratch with panels and drawing lines, rectangles, polygons etc... My original idea was to manually draw the caravan using the g.drawLine method, the same with the veranda, and then base my calculations on pixel counting to work out all the various components.
Am I out of my depth attempting this, or is this something relatively easy to program?? Is there a more efficient way of doing this that I should look up before attempting this?
What you want to do is achievable, but it's not the simplest of tasks. Don't let that slow you down, though.
You'll want to get started by understanding how to draw in Swing. Take a look at
Graphics 2D
Custom Painting
You'll also want to be familiar with Swing in general
Creating a GUI with Swing
The basic concept with scaling is assigning a weight to a pixel. The more real-world distance a pixel is responsible for, the smaller your image will become.
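As a rough illustration of that idea (all names here are hypothetical, not an existing API), you can pin the whole drawing to a single pixels-per-millimetre scale and route every draw call and every measurement through it, instead of counting pixels afterwards:

public class ScaleHelper {
    private final double pixelsPerMm;

    public ScaleHelper(double caravanLengthMm, double caravanWidthMm,
                       int panelWidthPx, int panelHeightPx) {
        // The limiting dimension decides the scale, so the whole plan fits the panel.
        this.pixelsPerMm = Math.min(panelWidthPx / caravanLengthMm,
                                    panelHeightPx / caravanWidthMm);
    }

    public int toPixels(double millimetres) {
        return (int) Math.round(millimetres * pixelsPerMm);
    }

    public double toMillimetres(double pixels) {
        // Base the decking calculations on this, rather than counting raw pixels.
        return pixels / pixelsPerMm;
    }
}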
Let's assume your eye is at surface point P1 on an object A, there is a target object B, and there is a point light source behind object B.
Question: am I right if I look toward the light source and say "I am in shadow" when I cannot see the light because of object B? I then flag that point of object A as "one of the shadow points of B on A".
If this is true, can we build a "shadow geometry" (black-colored) object on the surface of A and then change it constantly as the light, B, A, etc. move in real time? Let's say sphere A has 1000 vertices and sphere B has 1000 vertices too; does this mean 1 million comparisons (is shadowing O(N^2) time complexity)? I am not sure about the complexity, because changing P1 (the eye) also changes the seen point of B (between P1 and the light source point). What about second-order and higher shadows (such as light reflecting between two objects many times)?
I am using Java 3D now, but it doesn't have shadow capabilities, so I am thinking of moving to other Java-compatible libraries.
Thanks.
Edit: I need to disable the "camera" when moving the camera to build that shadow. How can I do this? Does this decrease the performance badly?
New idea: Java 3D has built-in collision detection. I will create invisible lines from the light to each target polygon vertex, then check for a collision with another object. If a collision occurs, I add that vertex coordinate to the shadow list, but this would work only for point lights :( .
Anyone who can supply a real shadow library for Java 3D would be a big help.
A very small sample of Geomlib shadow/raytracing in Java 3D would be the best.
A ray-tracing example, maybe?
I know this is a little hard, but it has probably been tried by at least a hundred people.
Thanks.
Shadows are probably the most complex topic in 3D graphics programming, and there are many approaches; the best option should be chosen according to the task requirements. The algorithm you are talking about is the simplest way to implement shadows from a spot light source onto a plane. It should not be done on the CPU, as you already use the GPU for 3D rendering.
Basically the approach is to render the same object twice: once from the camera's point of view, and once from the light source's point of view. When you render the object from the light's position, you get a depth map in which each pixel stores the distance of the surface closest to the light source. Then, for each pixel of the normal rendering, you convert its 3D coordinates into the light's view and check them against the corresponding depth value. This essentially gives you a way to tell which pixels are covered by shadow.
The performance impact comes from rendering the same object twice. If your task doesn't require a highly scalable shadow-casting solution, then it might be the way to go.
A number of relevant questions:
How Do I Create Cheap Shadows In OpenGL?
Is there an easy way to get shadows in OpenGL?
What is the simplest method for rendering shadows on a scene in OpenGL?
Your approach can be summarised like this:
foreach (point p to be shaded) {
    foreach (light) {
        if (light is visible from p)
            // p is lit by that light
        else
            // p is in shadow
    }
}
The funny fact is that's how real-time shadows are done today on the GPU.
However, it's not trivial to make this work efficiently. Rendering the scene is a streamlined process, triangle by triangle. It would be very cumbersome if, for every single point (pixel, fragment) in every single triangle, you had to consider all other triangles in order to check for ray intersections.
So how to do that efficiently? Answer: Reverse the process.
There are usually a lot fewer lights than pixels in the scene. Let's take advantage of this fact and do some preprocessing:
// preprocess
foreach (light) {
    // find all pixels p on the scene reachable from the light
}

// then render the whole scene...
foreach (point p to be shaded) {
    foreach (light) {
        // simply look up into what was calculated before...
        if (p is visible by the light)
            // p is lit
        else
            // p is in shadow
    }
}
That seems a lot faster... But two problems remain:
- how to find all pixels visible by the light?
- how to make them accessible quickly for lookup during rendering?
Here's the tricky part:
In order to find all points visible by a light, place a camera there and render the whole scene! Depth test will reject the invisible points.
To make this result accessible later, save it as a texture and use that texture for lookup during the actual rendering stage.
This technique is called Shadow Mapping, and the texture with pixels visible from a light is called a Shadow Map. For a more detailed explanation, see for example the Wikipedia article.
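To make the depth comparison concrete, here is a minimal CPU-side Java sketch of the per-pixel lookup a shadow-mapping renderer performs. It is purely illustrative: the row-major matrix layout, the plain float[][] shadow map and all names are assumptions, and in practice this runs in a fragment shader on the GPU.

public final class ShadowMapLookup {
    /**
     * Returns true if the world-space point (x, y, z) is in shadow.
     * lightMatrix is a row-major 4x4 view-projection matrix for the light;
     * shadowMap[y][x] holds the depth previously rendered from the light.
     */
    public static boolean inShadow(float x, float y, float z,
                                   float[] lightMatrix,
                                   float[][] shadowMap, float bias) {
        // Transform the point into the light's clip space.
        float cx = lightMatrix[0]  * x + lightMatrix[1]  * y + lightMatrix[2]  * z + lightMatrix[3];
        float cy = lightMatrix[4]  * x + lightMatrix[5]  * y + lightMatrix[6]  * z + lightMatrix[7];
        float cz = lightMatrix[8]  * x + lightMatrix[9]  * y + lightMatrix[10] * z + lightMatrix[11];
        float cw = lightMatrix[12] * x + lightMatrix[13] * y + lightMatrix[14] * z + lightMatrix[15];

        // Perspective divide, then map from [-1, 1] to [0, 1].
        float u = (cx / cw) * 0.5f + 0.5f;
        float v = (cy / cw) * 0.5f + 0.5f;
        float depth = (cz / cw) * 0.5f + 0.5f;

        int px = (int) (u * (shadowMap[0].length - 1));
        int py = (int) (v * (shadowMap.length - 1));
        if (px < 0 || py < 0 || px >= shadowMap[0].length || py >= shadowMap.length) {
            return false; // outside the shadow map: treat as lit
        }
        // In shadow if something closer to the light was recorded at that texel.
        return depth - bias > shadowMap[py][px];
    }
}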
Basically yes, your approach will produce shadows. But doing it point by point is not feasible performance-wise (for real time), unless it's done on the GPU. I'm not familiar with what the APIs offer today, but I'm sure any recent engine will offer some shadows out of the box.
Your 'new idea' is how shadows were implemented back in the days when rendering was still done on the CPU. If the number of polygons isn't too big (or you can efficiently reject entire bunches by having grouping volumes etc.), it can be done with fairly little CPU power.
3D shadow rendering in vanilla Java is never going to be efficient. You are best off using graphics libraries written to utilize the full range of the graphics card's capabilities, such as OpenGL or DirectX. As you are using a Canvas (from the screenshot you provided), you can even paint that Canvas from native code using JNI. So you could use all the technology from graphics libraries, do just a little fiddling, and paint your Canvas directly from the native code. There would be very little work involved to make it work, compared to writing your own 3D engine.
Wiki link about AWT native access: http://en.wikipedia.org/wiki/Java_AWT_Native_Interface
Documentation: http://docs.oracle.com/javase/7/docs/technotes/guides/awt/AWT_Native_Interface.html
I'm pretty new to manually manipulating images, so please bear with me.
I have an image that I'm allowing the user to shrink/grow and move around.
The basic behavior works perfectly. However, I need to be able to grab whatever is in the "viewport" (visible clipping region rectangle) and save it out as a separate bitmap.
Before I can do this, I need to get a fix on WHERE the image actually is and what is being displayed. This is proving more tricky than I would have imagined.
My problem is that the Matrix documentation is absurdly vague, and I'm lost as to how I can measure the coordinates and dimensions of my transformed image. As I see it, the X,Y of the image remain constant even as the user shrinks/grows it. So, even though it reports at being at 0,0 it's displayed at (say) 100,100. And the only way I can get those coordinates is to do a fairly ugly computation (again... I'm probably not doing it the most elegant way, since geometry is not my forte).
I'm kind of hoping that I'm missing something and that there's some way to pull the object's auto translated coordinates and dimensions.
In an ideal world I would be able to call (pseudo) myImg.getDisplayedWidth() and myImg.getDisplayedX().
Oh, and I should add that this may all be a problem I'm causing myself by using the center of the image as the point from which to grow/shrink. If I left the default 0,0 coordinate as the non-changing point, I think the location would be correct no matter what its size was. So... maybe the answer to all this is to simply figure out my center offset and apply that to my translations?
All help greatly appreciated (and people not arbitrarily messing with my question's title even more so!).
The Matrix method mapPoints(float[] dst, float[] src) can be used to get a series of transformed points by applying the Matrix transformation. Or, in (slightly) more layman's terms, an instance of the Matrix class contains not only the transformation itself but also convenience methods for applying that transformation to a series of points.
So in your case, you just need the corners of your untransformed Bitmap (x, y, width, height) and can pass those corner points into that method to get the transformed points.
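A small sketch of that, assuming an android.graphics.Matrix that already holds the user's scale/translate operations (the class and method names below are just illustrative):

import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.RectF;

public final class DisplayedBounds {

    /** The four corners of the bitmap after the matrix is applied, as (x, y) pairs. */
    public static float[] corners(Bitmap bitmap, Matrix matrix) {
        float[] pts = {
                0, 0,                                   // top-left
                bitmap.getWidth(), 0,                   // top-right
                bitmap.getWidth(), bitmap.getHeight(),  // bottom-right
                0, bitmap.getHeight()                   // bottom-left
        };
        matrix.mapPoints(pts);  // transforms the points in place
        return pts;
    }

    /** The on-screen bounding box: the "displayed" x, y, width and height you wanted. */
    public static RectF bounds(Bitmap bitmap, Matrix matrix) {
        RectF r = new RectF(0, 0, bitmap.getWidth(), bitmap.getHeight());
        matrix.mapRect(r);      // use r.left, r.top, r.width(), r.height()
        return r;
    }
}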