Algorithm for a gradient... how does it even work? (Java)

If I was given color A and color B, how can one go about generating a gradient on a canvas which can later be converted to a bitmap.
Such that
public Bitmap makeGradient(Color from, Color to){}
Would actually work?
I hope this is not too vague. Thank you for your time and effort.
P.S. There is a question on Stack Overflow that answers this, but I am still confused :(
Here it is: Generating gradients programmatically?

One way to create a radial gradient is to define a focus point as well as the extent of the gradient. When you generate the image, calculate the distance between the current pixel and the focus point, divide it by the extent, and clip the result to 1. Then use the formula from the question you linked.
Something like this pseudocode:
double d = distance(currentPixel, focusPoint); //I'll leave the implementation for you
double factor = Math.min(1.0, d / extent);     //clip to 1.0 at the edge of the gradient
int red   = (int) (firstCol.getRed()   * factor + secondCol.getRed()   * (1.0 - factor));
int green = (int) (firstCol.getGreen() * factor + secondCol.getGreen() * (1.0 - factor));
int blue  = (int) (firstCol.getBlue()  * factor + secondCol.getBlue()  * (1.0 - factor));
This means that the farther a pixel is from the focus point, the more firstCol contributes to it (pixels outside the extent of the gradient will use only firstCol, since factor is clipped to 1.0 for those).
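For a more concrete picture, here is a minimal sketch of that loop over a whole image. It uses java.awt.Color and BufferedImage as stand-ins for Android's Color and Bitmap (converting the result to an actual Bitmap is platform-specific), and the focus point and extent are parameters chosen by the caller:
import java.awt.Color;
import java.awt.image.BufferedImage;

public class RadialGradient {
    public static BufferedImage makeGradient(Color from, Color to,
                                             int width, int height,
                                             int focusX, int focusY, double extent) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                //distance from the current pixel to the focus point
                double d = Math.hypot(x - focusX, y - focusY);
                //clip to 1.0, so pixels at or beyond the extent use only 'from'
                double factor = Math.min(1.0, d / extent);
                //matches the formula above: 'to' at the focus point, 'from' at the edge
                int r = (int) (from.getRed()   * factor + to.getRed()   * (1.0 - factor));
                int g = (int) (from.getGreen() * factor + to.getGreen() * (1.0 - factor));
                int b = (int) (from.getBlue()  * factor + to.getBlue()  * (1.0 - factor));
                img.setRGB(x, y, new Color(r, g, b).getRGB());
            }
        }
        return img;
    }
}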


How can I set the center of scaling?

I'm working on some graphics programming in Java. At the moment I can scale an image that I store in a 1D array (it stores the pixels row by row).
My code for finding the new position looks like this:
newpoz = (int)(x * scale) + (int)(y * scale) * width;
This gives me a position in the array, but it scales about the (0,0) coordinate, which is in the top-left corner. How can I set the center of scaling to the center of the screen instead, i.e. screen.getWidth()/2, screen.getHeight()/2?
Demonstration
Can somebody help with this? If there are any questions I will answer them in the comments.
To scale pixel coordinates about some center (a, b):
(x', y') = ([x - a] * scale + a, [y - b] * scale + b)
Fetch the original pixel data using pixelArray[y * width + x] as before, and set the destination pixel similarly with the new coordinates.
EDIT: you may also want to look at bilinear interpolation, because the current raw method may give jagged edges in the final image if used directly.
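A rough sketch of that forward mapping, assuming src and dst are 1D pixel arrays of size width * height as in the question (in practice you would iterate over destination pixels and map backwards, combined with the interpolation mentioned above, to avoid holes):
//Scale the pixels of 'src' about the center (a, b), writing into 'dst'.
//Both arrays are assumed to be laid out row by row, width * height in size.
static void scaleAboutCenter(int[] src, int[] dst, int width, int height,
                             double scale, int a, int b) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int nx = (int) ((x - a) * scale + a);
            int ny = (int) ((y - b) * scale + b);
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                dst[ny * width + nx] = src[y * width + x];
            }
        }
    }
}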
If you first translate the center of the image to (0,0), then scale the image, then translate the image back, you will have effectively scaled the image about your desired center.
x1 = x - screen_width/2
x2 = x1 * scale
x3 = x2 + screen_width/2
Ditto for the y axis.
With a bit of math, you can do this all in one step. Exercise left to student.
But if you get in trouble, post your attempt, and then we can help you fix the code.

What is the source of these pixel gaps in between identical vertices in OpenGL's Ortho? How can I eliminate them?

Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This isn't transferred well through screenshots, but the lines don't flicker when the scale is sustained at that point, so it logically shouldn't be an issue with drawing between scale assignment (and thread locks prevent this).
Scaled to 1x:
Scaled to A, 1x < Ax < Bx :
Scaled to B, Ax < Bx < Cx :
Scaled to C, Bx < Cx < 2x :
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
@Override
public void draw(float xOffset, float yOffset, float width, float height,
                 int glTex, float texX, float texY, float texWidth, float texHeight) {
    GL11.glLoadIdentity();
    GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
    GL11.glTranslatef(xOffset, yOffset, 0f);
    if (glTex != lastTexture) {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
        lastTexture = glTex;
    }
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(texX, texY + texHeight);
    GL11.glVertex2f(-height/2, -width/2);
    GL11.glTexCoord2f(texX + texWidth, texY + texHeight);
    GL11.glVertex2f(-height/2, width/2);
    GL11.glTexCoord2f(texX + texWidth, texY);
    GL11.glVertex2f(height/2, width/2);
    GL11.glTexCoord2f(texX, texY);
    GL11.glVertex2f(height/2, -width/2);
    GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains the tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
    int xp, yp; //x and y position of individual tiles
    for(int c = 0; c < width; c++){ //c as in column
        xp = (int) (c * tilesize.a * scale.getX()); //set distance from chunk x to column x
        for(int r = 0; r < height; r++){ //r as in row
            if(tiles[r*width+c] < 0) continue; //skip empty tiles ('air')
            yp = (int) (r * tilesize.b * scale.getY()); //set distance from chunk y to row y
            tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render
                engine, //drawing context
                new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
                scale //scale of tiles
            );
        }
    }
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, the origin, it has an offset of 0. OpenGL is then instructed to draw a quad, with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It is then told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws its left edge at -halfwidth, which should coordinate-wise be exactly the same as tile 1's right edge. By itself, this should work, and it does. Once a constant scale is factored in, it somehow breaks.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, ie filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than to simply avoid non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is how the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, only has vertical or horizontal gaps but not both, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be equal, instead of relying on calculations that are only mathematically equivalent.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
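As a hedged sketch of that idea applied to your grid code (reusing the names tilesize, scale and offset from your snippet; the rest is illustrative, not your actual API):
//Precompute the scaled tile boundaries once whenever the scale or offset changes.
float[] xEdge = new float[width + 1];
float[] yEdge = new float[height + 1];
for (int c = 0; c <= width; c++) {
    xEdge[c] = offset.getX() + c * tilesize.a * scale.getX();
}
for (int r = 0; r <= height; r++) {
    yEdge[r] = offset.getY() + r * tilesize.b * scale.getY();
}
//Tile (c, r) is then drawn with its left/right edges at xEdge[c] and xEdge[c + 1]
//and its top/bottom edges at yEdge[r] and yEdge[r + 1], so adjacent tiles share
//bit-identical coordinates and no gaps can appear from rounding.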

Calculate height above map given zoom level in google maps

I am trying to calculate the height above map (ignoring topography) given a zoom level. I know the equation for scale at a specific zoom level is 591657550.5/2^(level-1) (https://gis.stackexchange.com/questions/7430/google-maps-zoom-level-ratio), but I am unsure on how to use this information (or whether or not this is the right information) to solve for height above map. Any help is appreciated.
I set my Google Maps view to 5 cm wide, selected a zoom level, and then found the same location at that zoom in Google Earth to get an eye altitude (the D value in the angular size equation, http://en.wikipedia.org/wiki/Forced_perspective). I found the h value in the angular size equation by taking the 5 cm on-screen map width and multiplying it by the scale, 591657550.5/2^(level-1). Knowing these two variables, I was able to calculate the constant angle at which Google Maps displays images when the map is 5 cm wide (85.36222058). From these pieces of information I constructed this method, which calculates the eye altitude above the map from the zoom level with reasonable accuracy:
public float getAltitude(float mapzoom){
    //this equation is a transformation of the angular size equation, solving for D. See: http://en.wikipedia.org/wiki/Forced_perspective
    float googleearthaltitude;
    //the amount displayed is .05 meters and the map scale is 591657550.5/(Math.pow(2,(mapzoom-1)))
    //this essentially gets the h value in the angular size equation, then divides it by 2
    float firstPartOfEq = (float) (.05 * ((591657550.5 / (Math.pow(2, (mapzoom - 1)))) / 2));
    //85.362 is the angle at which google maps displays images on a 5cm wide screen
    googleearthaltitude = firstPartOfEq * ((float) (Math.cos(Math.toRadians(85.362 / 2))) / (float) (Math.sin(Math.toRadians(85.362 / 2))));
    return googleearthaltitude;
}
Sorry if my explanation is unclear; feel free to use this method if you want to.
I have basically converted JavaScript code to Java. I hope this works.
public int convertRangeToZoom(double range) {
    //see: google.maps.v3.all.debug.js
    int zoom = (int) Math.round(Math.log(35200000 / range) / Math.log(2));
    if (zoom < 0) zoom = 0;
    else if (zoom > 19) zoom = 19;
    return zoom;
}
public int convertZoomToRange(double zoom) {
    //see: google.maps.v3.all.debug.js
    int range = (int) (35200000 / Math.pow(2, zoom)); //cast the whole expression, not just the constant
    if (range < 300) range = 300;
    return range;
}
https://groups.google.com/forum/#!msg/google-earth-browser-plugin/eSL9GlAkWBk/T4mdToJz_FgJ
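For illustration, a quick round-trip check of the two helpers above (the zoom value is arbitrary):
//Convert a zoom level to a range and back again.
int zoom = 10;
int range = convertZoomToRange(zoom);   //35200000 / 2^10 = 34375
int back = convertRangeToZoom(range);   //rounds back to 10
System.out.println(zoom + " -> " + range + " -> " + back);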

Efficient 2D Tile based lighting system

What is the most efficient way to do lighting for a tile based engine in Java?
Would it be putting a black background behind the tiles and changing the tiles' alpha?
Or putting a black foreground and changing alpha of that? Or anything else?
This is an example of the kind of lighting I want:
There are many ways to achieve this. Take some time before making your final decision. I will briefly sum up some techniques you could choose from and provide some code at the end.
Hard Lighting
If you want to create a hard-edge lighting effect (like your example image),
some approaches come to my mind:
Quick and dirty (as you suggested)
Use a black background
Set the tiles' alpha values according to their darkness value
A problem is that you can neither make a tile brighter than it was before (highlights) nor change the color of the light. Both are aspects that usually make lighting in games look good.
A second set of tiles
Use a second set of (black/colored) tiles
Lay these over the main tiles
Set the new tiles' alpha value depending on how strong the new color should be there.
This approach has the same effect as the first one, with the advantage that you can now color the overlay tile in a color other than black, which allows for both colored lights and highlights.
Example:
Even though it is easy, the problem is that this is a very inefficient approach (two rendered tiles per tile, constant recoloring, many render operations, etc.).
More Efficient Approaches (Hard and/or Soft Lighting)
When looking at your example, I imagine the light always comes from a specific source tile (character, torch, etc.)
For every type of light (big torch, small torch, character lighting) you
create an image that represents the specific lighting behaviour relative to the source tile (light mask). Maybe something like this for a torch (white being alpha):
For every tile which is a light source, you render this image at the position of the source as an overlay.
To add a bit of light color, you can use e.g. 10% opaque orange instead of full alpha.
Results
Adding soft light
Soft light is no big deal now; just use more detail in the light mask compared to the tiles. By using only 15% alpha in the usually black region, you can keep unlit tiles faintly visible:
You may even easily achieve more complex lighting forms (cones etc.) just by changing the mask image.
Multiple light sources
When combining multiple light sources, this approach leads to a problem:
Drawing two masks that intersect each other might cancel each other out:
What we want is for them to add their light instead of subtracting it.
Avoiding the problem:
Invert all light masks (with alpha being dark areas, opaque being light ones)
Render all these light masks into a temporary image which has the same dimensions as the viewport
Invert and render the new image (as if it was the only light mask) over the whole scenery.
This would result in something similar to this:
Code for the mask invert method
Assuming you render all the tiles in a BufferedImage first,
I'll provide some guidance code which resembles the last shown method (only grayscale support).
Multiple light masks for e.g. a torch and a player can be combined like this:
public BufferedImage combineMasks(BufferedImage[] images)
{
    // create the new image; the canvas size is the maximum of all image sizes
    int w = 0, h = 0;
    for (BufferedImage img : images)
    {
        w = Math.max(w, img.getWidth());
        h = Math.max(h, img.getHeight());
    }
    BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    // paint all images, preserving the alpha channels
    Graphics g = combined.getGraphics();
    for (BufferedImage img : images)
        g.drawImage(img, 0, 0, null);
    return combined;
}
The final mask is created and applied with this method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
    int width = image.getWidth();
    int height = image.getHeight();
    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);
    for (int i = 0; i < imagePixels.length; i++)
    {
        int color = imagePixels[i] & 0x00ffffff; // mask out the preexisting alpha
        // be careful, an alpha mask works the other way round,
        // so we subtract the mask's alpha from 255 before writing it back
        int alpha = 255 - ((maskPixels[i] >> 24) & 0xff);
        imagePixels[i] = color | (alpha << 24); // shift the new alpha into the top byte
    }
    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
As noted, this is a primitive example. Implementing color blending might be a bit more work.
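As a starting point, here is a hedged sketch of what additive blending of colored light maps could look like (my own illustration, not part of the grayscale code above). Each map is assumed to be an INT_ARGB BufferedImage whose RGB channels encode the light color, and channels are added with saturation:
public static BufferedImage addLightMaps(BufferedImage a, BufferedImage b) {
    int w = Math.max(a.getWidth(), b.getWidth());
    int h = Math.max(a.getHeight(), b.getHeight());
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int pa = (x < a.getWidth() && y < a.getHeight()) ? a.getRGB(x, y) : 0;
            int pb = (x < b.getWidth() && y < b.getHeight()) ? b.getRGB(x, y) : 0;
            // add each color channel and clamp at 255, so overlapping lights brighten, never darken
            int r = Math.min(255, ((pa >> 16) & 0xff) + ((pb >> 16) & 0xff));
            int g = Math.min(255, ((pa >> 8) & 0xff) + ((pb >> 8) & 0xff));
            int bl = Math.min(255, (pa & 0xff) + (pb & 0xff));
            out.setRGB(x, y, 0xff000000 | (r << 16) | (g << 8) | bl);
        }
    }
    return out;
}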
Raytracing might be the simplest approach.
you can store which tiles have been seen (useful for automapping, for 'remembering your map while being blinded', maybe for the minimap, etc.)
you show only what you actually see - maybe a monster, a wall or a hill is blocking your view; raytracing stops at that point
distant 'glowing objects' or other light sources (torches, lava) can be seen, even if your own light source doesn't reach very far
the length of your ray can be used to compute the amount of light (fading light)
maybe you have a special sensor (ESP, gold/food detection) used to find objects that are not in your view? a raytrace might help there as well ^^
How is this done easily?
draw a line from your player to every point on the border of your map (using Bresenham's line algorithm: http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm)
walk along that line (from your character to the end) until your view is blocked; at that point stop your search (or maybe do one last iteration to see what stopped you)
for each point on your line, set the lighting (maybe 100% for distance 1, 70% for distance 2 and so on) and mark the map tile as visited
maybe you won't walk across the whole map; maybe it's enough to set up your raytrace for a 20x20 view?
NOTE: you only have to walk along the borders of the viewport; it is NOT required to trace a ray to every point.
I'm adding the line algorithm to simplify your work:
public static ArrayList<Point> getLine(Point start, Point target) {
    ArrayList<Point> ret = new ArrayList<Point>();
    int x0 = start.x;
    int y0 = start.y;
    int x1 = target.x;
    int y1 = target.y;
    int dx = Math.abs(x1 - x0);
    int sx = x0 < x1 ? 1 : -1;
    int dy = -1 * Math.abs(y1 - y0);
    int sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, e2; /* error value e_xy */
    for (;;) { /* loop */
        ret.add(new Point(x0, y0));
        if (x0 == x1 && y0 == y1) break;
        e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } /* e_xy+e_x > 0 */
        if (e2 <= dx) { err += dx; y0 += sy; } /* e_xy+e_y < 0 */
    }
    return ret;
}
I did this whole lighting stuff (plus A* pathfinding) some time ago; feel free to ask further questions.
Addendum:
Maybe I should simply add the small algorithms for raytracing ^^
To get the north and south border points, just use this snippet:
for (int x = 0; x < map.WIDTH; x++){
    Point northBorderPoint = new Point(x, 0);
    Point southBorderPoint = new Point(x, map.HEIGHT);
    rayTrace(getLine(player.getPos(), northBorderPoint), map, player.getLightRadius());
    rayTrace(getLine(player.getPos(), southBorderPoint), map, player.getLightRadius());
}
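The east and west borders can be walked the same way; a small sketch of my own to complete the picture:
for (int y = 0; y < map.HEIGHT; y++){
    Point westBorderPoint = new Point(0, y);
    Point eastBorderPoint = new Point(map.WIDTH, y);
    rayTrace(getLine(player.getPos(), westBorderPoint), map, player.getLightRadius());
    rayTrace(getLine(player.getPos(), eastBorderPoint), map, player.getLightRadius());
}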
and the raytrace works like this:
private static void rayTrace(ArrayList<Point> line, WorldMap map, int radius) {
    //radius = the radius of the light source
    for (Point p : line){
        float d = distance(line.get(0), p); //distance from the start of the line; implementation up to you
        //calculate light falloff linearly from 100% down to 0%
        float amountLight = (radius - d) / radius;
        if (amountLight < 0){
            amountLight = 0;
        }
        map.setLight(p, amountLight);
        if (map.isViewBlocked(p)){ //the view can be blocked by a wall or a monster
            break; //stop tracing this ray once the view is blocked
        }
    }
}
I've been into indie game development for about three years now. The way I would do this is first of all by using OpenGL, so you get all the benefits of the graphical computing power of the GPU (hopefully you are already doing that). Suppose we start off with all tiles in a VBO, entirely lit. Now, there are several options for achieving what you want. Depending on how complex your lighting system is, you can choose a different approach.
If your light is going to be circular around the player, regardless of whether obstacles would block the light in real life, you could opt for a lighting algorithm implemented in the vertex shader. In the vertex shader, you compute the distance of the vertex to the player and apply some function that defines how bright things should be as a function of that distance. Do not use alpha; just multiply the color of the texture/tile by the lighting value.
If you want to use a custom lightmap (which is more likely), I would suggest adding an extra vertex attribute that specifies the brightness of the tile, and updating the VBO when needed. The same approach applies here: multiply the pixel of the texture by the light value. If you are filling light recursively with the player position as the starting point, then you would update the VBO every time the player moves.
If your lightmap depends on where the sunlight hits your level, you could combine two lighting techniques. Create one vertex attribute for the sun brightness and another for the light emitted by light points (like a torch held by the player). Now you can combine those two values in the vertex shader. Suppose your sun comes up and goes down in a day/night cycle. Let's say the sun brightness is sun, a value between 0 and 1 that can be passed to the vertex shader as a uniform. The vertex attribute that represents the sun brightness is s and the one for light emitted by light points is l. Then you could compute the total light for that tile like this:
tileBrightness = max(s * sun, l + flicker);
Where flicker (also a vertex shader uniform) is some kind of waving function that represents the small variations in the brightness of your light points.
This approach makes the scene dynamic without having to continuously recreate VBOs. I implemented this approach in a proof-of-concept project and it works great. You can check out what it looks like here: http://www.youtube.com/watch?v=jTcNitp_IIo. Note how the torchlight flickers at 0:40 in the video. That is done by what I explained here.

How to position a Node along a circular orbit around a fixed center based on mouse coordinates (JavaFX)?

I'm trying to get into some basic JavaFX game development and I'm getting confused with some circle maths.
I have a circle at (x:250, y:250) with a radius of 50.
My objective is to place a smaller circle on the circumference of the above circle, based on the position of the mouse.
Where I'm getting confused is with the coordinate space and the trig behind it all.
My issue comes from the fact that the X/Y space on the screen is not centered at (0,0); instead, the top left of the screen is (0,0) and the bottom right is (500,500).
My calculations are:
var xpos:Number = mouseEvent.getX();
var ypos:Number = mouseEvent.getY();
var center_pos_x:Number = 250;
var center_pos_y:Number = 250;
var length = ypos - center_pos_y;
var height = xpos - center_pos_x;
var angle_deg = Math.toDegrees(Math.atan(height / length));
var angle_rad = Math.toRadians(angle_deg);
var radius = 50;
moving_circ_xpos = (radius * Math.cos(angle_rad)) + center_pos_x;
moving_circ_ypos = (radius * Math.sin(angle_rad)) + center_pos_y;
I made the app print out the angle (angle_deg) that I have calculated when I move the mouse and my output is below:
When the mouse is (in degrees moving anti-clockwise):
directly above the circle and horizontally inline with the center, the angle is -0
to the left and vertically centered, the angle is -90
directly below the circle and horizontally inline with the center, the angle is 0
to the right and vertically centered, the angle is 90
So, what can I do to make it 0, 90, 180, 270??
I know it must be something small, but I just can't think of what it is...
Thanks for any help
(and no, this is not an assignment)
atan(height/length) is not enough to get the angle. You need to compensate for each quadrant, as well as for the possibility of division by zero. Most programming language libraries supply a method called atan2 which takes two arguments, y and x, and does this calculation for you.
More information on Wikipedia: atan2
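A minimal sketch of the same calculation with Math.atan2, reusing the variable names from the question (note that the y difference is passed first):
double dx = xpos - center_pos_x;
double dy = ypos - center_pos_y;
double angle_rad = Math.atan2(dy, dx); //full -pi..pi range, handles all quadrants and dx == 0
double moving_circ_xpos = center_pos_x + radius * Math.cos(angle_rad);
double moving_circ_ypos = center_pos_y + radius * Math.sin(angle_rad);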
You can get away without calculating the angle. Instead, use the center of your circle (250,250) and the position of the mouse (xpos,ypos) to define a line. The line intersects your circle when its length is equal to the radius of your circle:
// Calculate distance from center to mouse.
xlen = xpos - x_center_pos;
ylen = ypos - y_center_pos;
line_len = sqrt(xlen*xlen + ylen*ylen); // Pythagoras: x^2 + y^2 = distance^2
// Find the intersection with the circle.
moving_circ_xpos = x_center_pos + (xlen * radius / line_len);
moving_circ_ypos = y_center_pos + (ylen * radius / line_len);
Just verify that the mouse isn't at the center of your circle, or the line_len will be zero and the mouse will be sucked into a black hole.
There's a great book called "Graphics Gems" that can help with this kind of problem. It is a cookbook of algorithms and source code (in C I think), and allows you to quickly solve a problem using tested functionality. I would totally recommend getting your hands on it - it saved me big time when I quickly needed to add code to do fairly complex operations with normals to surfaces, and collision detections.
