I have a rectangle array holding multiple objects, moving back and forth on X axis.
Iterator<Rectangle> iter = array.iterator();
while (iter.hasNext()) {
    Rectangle obj = iter.next();
    obj.x += speed * Gdx.graphics.getDeltaTime();
    if (obj.x + obj.width > 800 || obj.x < 0) {
        speed = -speed;
    }
}
When the speed gets bigger, you'll start noticing the first object in the array overlapping with the other objects and pushing them apart. How to fix that?
Basically each object has
Rectangle obj = new Rectangle();
obj.x = xpos;
obj.y = ypos;
obj.width = width;
obj.height = height;
xpos += width + 4;
And each has a texture: an image, a square, a rectangle, a triangle... And each object is generated at an X position (xpos) different from the others. All they do is keep moving on the X axis, from x = 0 to 800 and back.
What happens is that when the first object gets to 0, it inverts its speed again and overlaps with the other objects; then, over time, all the objects keep overlapping and drifting apart. I want the distance between the objects to stay constant at any speed.
From what you've commented, the question appears to be "How can I make all these blocks move together, bouncing from one edge to another?". The issue is that you're getting bouncing, but they stop acting as a group.
Firstly, if you want to treat them as a group - the simplest way is to consider them as one large bounding box containing lots of smaller (inconsequential) objects. Moving that as a single object from side to side will give you the behaviour you need.
That aside, the direct answer to your question is "you're changing the direction mid-way through iteration". So in any single tick, some objects have moved left and some have moved right - meaning they stop acting as a group.
How you organise it is up to you, but this is the basic idea you need:
// assume "speedForThisFrame" is a float defined outside this function
float speedForNextFrame = speedForThisFrame;
// iterate through however you want
Iterator<Rectangle> iter = array.iterator();
while (iter.hasNext()) {
    Rectangle obj = iter.next();
    obj.x += speedForThisFrame * Gdx.graphics.getDeltaTime();
    // if it's moved out of bounds, we will change direction NEXT frame
    if (obj.x + obj.width > 800 || obj.x < 0) {
        speedForNextFrame = -speedForThisFrame;
    }
}
// now that all movement has finished, we update the speed
speedForThisFrame = speedForNextFrame;
The key thing is everything must move by the same amount, in the same direction, every frame. Changing the speed mid-update will cause them to act independently.
Note, you will still have issues when your group is larger than the bounds - or when they go over the bounds in one frame and don't fully get back the next frame. These are separate issues though and can be asked in a separate question.
I think your problem is that, caused by variations in Gdx.graphics.getDeltaTime(), the rectangles exceed your 0/800 borders by different distances.
An example:
First step:
Rect #1 x=790
Rect #2 x=780
Speed=100, DeltaTime=0.11 => DeltaX=11
After this step, Rect #1 would be at 801, Rect #2 at 791; their distance is 10.
Next step:
DeltaTime=0.12 => DeltaX=12
After this step, Rect #1 (which has bounced and now moves left) is at 789, while Rect #2 (still moving right) is at 803; their distance is 14.
Your rectangles vary their distance because they travel different distances. A possible solution would be to really bounce at the borders: not only invert the speed, but also take the distance by which a rectangle exceeded the border and let it travel that distance in the opposite direction.
So Rect #1 at 790, moving 11 pixels rightwards, should not be at 801 at the end of the step but at 799 (moving 10 pixels to the right and one to the left).
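For illustration, a minimal sketch of that reflection idea (my own code, reusing the speed variable and the 0..800 bounds from the question; to keep the spacing constant it should be combined with the shared per-frame speed from the other answer):
float newX = obj.x + speed * Gdx.graphics.getDeltaTime();
if (newX + obj.width > 800) {
    // reflect the overshoot back inside the right border
    newX = 800 - obj.width - ((newX + obj.width) - 800);
    speed = -speed;
} else if (newX < 0) {
    // reflect the overshoot back inside the left border
    newX = -newX;
    speed = -speed;
}
obj.x = newX;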
My goal is to generate some unique random colors that can be easily distinguished by people. That means I try to avoid having both ocean blue and sky blue; I would prefer violet red and ocean blue.
I think I need a variable that acts as a "gap" between any generated RGB values. For example, if the gap is 60, then the first and second colors can't be RGB(12,27,38) and RGB(45,102,177), because the difference in the red value is less than 60 (45-12=33).
This is what I tried so far:
int temp = COLOR_LIMIT;
for (int i = 0; i < size; i++)
{
    Rectangle r = new Rectangle();
    temp -= colourGap;
    if (i == 0 || temp <= 0)
        colour = Color.argb(255, randomColour.nextInt(COLOR_LIMIT), randomColour.nextInt(COLOR_LIMIT), randomColour.nextInt(COLOR_LIMIT));
    else
        colour = Color.argb(255, randomColour.nextInt(temp), randomColour.nextInt(temp), randomColour.nextInt(temp));
}
With the above code, I find the generated colors are more distinct than with a basic random. However, my code above is not flexible and is just a temporary fix (I need to tune the iteration size and the gap variable every time). I believe there should be a cleverer way to do this. Basically, I just keep decreasing the limit by 60, and once the limit is less than or equal to 0, I just generate a basic random color.
Any help is appreciated, and sorry, English is not my mother tongue.
Thanks very much.
You need to think of the "colour space" as a 3-dimensional space and look at the distance between your selected colours in it.
i.e. the distance between r1,g1,b1 and r2,g2,b2 is sqrt((r1-r2)^2+(g1-g2)^2+(b1-b2)^2)
You can then check how close things are together and define a threshold.
A much easier way, though, would be just to define a "grid" of acceptable colours spaced far enough away from each other to look different.
i.e. r = random.nextInt(8)*32, g = random.nextInt(8)*32, b = random.nextInt(8)*32 (each channel takes one of eight values spaced 32 apart)
Now you just have to check that none of your other generated colours exactly matches, and you know there is a minimum distance between the selected colours. To guarantee a bit more distance, you could also reject candidates where any two, or even any one, of the components exactly match, although doing that reduces your number of possible colours massively.
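For illustration, a rough sketch of the distance-threshold check via rejection sampling (my own code; the class name and minDist parameter are arbitrary, and the loop will not terminate if you ask for more colours than the threshold allows):
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class DistinctColours {
    // keep only candidates far enough (Euclidean RGB distance) from all accepted colours
    public static List<int[]> generate(int count, double minDist) {
        Random rnd = new Random();
        List<int[]> colours = new ArrayList<int[]>();
        while (colours.size() < count) {
            int[] c = { rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256) };
            boolean farEnough = true;
            for (int[] other : colours) {
                double d = Math.sqrt(Math.pow(c[0] - other[0], 2)
                        + Math.pow(c[1] - other[1], 2)
                        + Math.pow(c[2] - other[2], 2));
                if (d < minDist) { farEnough = false; break; }
            }
            if (farEnough) colours.add(c);
        }
        return colours;
    }
}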
This question may give you a clue.
The accepted answer suggests changing the Hue value of the HSV color scheme with some interval, while making the other two parameters random. This is how it could look in Android:
int step = 360 / colorsNumber;
for (int i = 0; i < colorsNumber; i++) {
    // hsv[0] is Hue [0..360), hsv[1] is Saturation [0..1], hsv[2] is Value [0..1]
    float[] hsv = new float[3];
    hsv[0] = i * step;
    hsv[1] = (float) Math.random(); // Some restrictions here?
    hsv[2] = (float) Math.random();
    colors[i] = Color.HSVToColor(hsv);
}
This is my situation: it involves aligning a scanned image to account for incorrect scanning, and I must do the alignment in my Java program.
These are more details:
There is a table-like form printed on a sheet of paper, which will be scanned into an image file.
I will open the picture with Java, and I will have an OVERLAY of text boxes.
The text boxes are supposed to align correctly with the scanned image.
In order to align correctly, my Java program must analyze the scanned image, detect the coordinates of the edges of the table on it, and thus position the image and the textboxes so that both align properly (in case of incorrect scanning).
You see, the guy scanning the image might not necessarily place the paper in a perfectly correct position, so I need my program to automatically align the scanned image as it loads it. This program will be reused on many such scanned images, so it needs to be flexible in this way.
My question is one of the following:
How can I use Java to detect the y-coordinate of the upper edge of the table and the x-coordinate of its leftmost edge? The table is a regular table with many cells, with a thin black border, printed on a white sheet of paper (horizontal printout).
If an easier method exists to automatically align the scanned image in such a way that all scanned images have the graphical table aligned to the same x, y coordinates, then please share this method :).
If you don't know the answer to the above two questions, do tell me where I should start. I don't know much about graphics programming in Java, and I have about one month to finish this program. Just assume that I have a tight schedule and have to make the graphics part as simple as possible.
Cheers and thank you.
Try to start from a simple scenario and then improve the approach.
1) Detect corners: find the corners in the boundaries of the form.
2) Using the form corner coordinates, calculate the rotation angle.
3) Rotate/scale the image.
4) Map the position of each field in the form relative to the form origin coordinates.
5) Match the textboxes.
The program presented at the end of this post does the steps 1 to 3. It was implemented using Marvin Framework. The image below shows the output image with the detected corners.
The program also outputs: Rotation angle:1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;

import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class FormCorners {

    public FormCorners() {
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle
        double angle = (Math.atan2((boundaries[1].y * -1) - (boundaries[0].y * -1), boundaries[1].x - boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:" + angle);
    }

    private Point[] boundaries(MarvinAttributes attr) {
        Point upLeft = new Point(-1, -1);
        Point upRight = new Point(-1, -1);
        Point bottomLeft = new Point(-1, -1);
        Point bottomRight = new Point(-1, -1);
        double ulDistance = 9999, blDistance = 9999, urDistance = 9999, brDistance = 9999;
        double tempDistance = -1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        for (int x = 0; x < cornernessMap.length; x++) {
            for (int y = 0; y < cornernessMap[0].length; y++) {
                if (cornernessMap[x][y] > 0) {
                    if ((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance) {
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance) {
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance) {
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance) {
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize) {
        MarvinImage ret = image.clone();
        for (Point p : points) {
            ret.fillRect(p.x - (rectSize / 2), p.y - (rectSize / 2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
Edge detection is something that is typically done by enhancing the contrast between neighboring pixels, such that you get an easily detectable line, which is suitable for further processing.
To do this, a "kernel" transforms a pixel according to the pixel's initial value and the values of that pixel's neighbors. A good edge detection kernel will enhance the differences between neighboring pixels, and reduce the strength of a pixel with similar neighbors.
I would start by looking at the Sobel operator. This might not return results that are immediately useful to you; however, it will get you far closer than you would be if you were to approach the problem with little knowledge of the field.
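As a concrete starting point, here is a minimal Sobel sketch (my own illustration, operating on a plain grayscale array rather than any particular image class):
// Sobel kernels for horizontal and vertical gradients
static final int[][] GX = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
static final int[][] GY = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

// gray[y][x] holds values 0..255; returns the gradient magnitude per pixel
static int[][] sobel(int[][] gray) {
    int h = gray.length, w = gray[0].length;
    int[][] out = new int[h][w];
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int gx = 0, gy = 0;
            for (int ky = -1; ky <= 1; ky++) {
                for (int kx = -1; kx <= 1; kx++) {
                    gx += GX[ky + 1][kx + 1] * gray[y + ky][x + kx];
                    gy += GY[ky + 1][kx + 1] * gray[y + ky][x + kx];
                }
            }
            out[y][x] = Math.min(255, (int) Math.sqrt(gx * gx + gy * gy));
        }
    }
    return out;
}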
After you have some crisp, clean edges, you can use larger kernels to detect points where a roughly 90-degree bend between two lines occurs; that might give you the pixel coordinates of the outer rectangle, which might be enough for your purposes.
With those outer coordinates, it still is a bit of math to make the new pixels be composited with the average values between the old pixels, rotated and moved to "match". The results (especially if you do not know about anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters might be a solution, but they come with their own issues: mainly, they make the picture sharper by adding graininess. Too much, and it is obvious that the original image is not a high-quality scan.
I researched the libraries but in the end I found it more convenient to code up my own edge detection methods.
The class below will detect the black/grayed-out edges of a scanned sheet of paper that contains such edges, and will return the x and y coordinates of those edges, searching either from the rightmost or lower end (reverse = true) or from the top or left edge (reverse = false). The program also takes ranges along vertical edges (rangex) and along horizontal edges (rangey), measured in pixels; the ranges determine outliers among the points found.
The program does 4 vertical cuts and 4 horizontal cuts using the specified arrays, and retrieves the positions of the dark dots. It uses the ranges to eliminate outliers: sometimes a little spot on the paper may cause an outlier point. The smaller the range, the fewer the outliers. However, sometimes the edge is slightly tilted, so you don't want to make the range too small.
Have fun. It works perfectly for me.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.util.ArrayList;

public class EdgeDetection {

    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey) {
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        // you are getting edge points here
        // the "true" parameter indicates that it performs a cut starting at 0 (left edge)
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for (int x = 0; x < xEdges.length; x++) {
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        // the "false" parameter indicates you are doing your cut starting at the end (image.getHeight())
        // and ending at 0; if the parameter was true, it would start the cuts at y = 0
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for (int y = 0; y < yEdges.length; y++) {
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    // This function takes an array of coordinates, detects outliers,
    // and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range) {
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length - 1];

        // This code segment saves the differences between the points into an array
        for (int n = 0; n < edges.length; n++) {
            for (int m = 0; m < edges.length; m++) {
                if (m < n) {
                    differences[n][m] = edges[n] - edges[m];
                } else if (m > n) {
                    differences[n][m - 1] = edges[n] - edges[m];
                }
            }
        }

        // This array determines which points are outliers (i.e. fall outside the range of all other points)
        for (int n = 0; n < edges.length; n++) {
            passes[n] = false;
            for (int m = 0; m < edges.length - 1; m++) {
                if (Math.abs(differences[n][m]) < range) {
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        // Create a new array only using valid points
        for (int i = 0; i < edges.length; i++) {
            if (passes[i]) {
                result.add(edges[i]);
            }
        }

        // Calculate the rounded mean... This will be the x/y coordinate of the edge.
        // Whether they are x or y values depends on the "reverse" variable used to calculate the edges array.
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for (Integer i : result) {
            addend += i;
        }
        mean = (double) addend / (double) divisor;

        // returns the mean of the valid points: this is the x or y coordinate of your calculated edge.
        if (mean - (int) mean >= .5) {
            System.out.println("MEAN " + mean);
            return (int) mean + 1;
        } else {
            System.out.println("MEAN " + mean);
            return (int) mean;
        }
    }

    // this function finds "dark" points, which include light gray, to detect edges.
    // reverse - when true, starts counting from x = image.getWidth() or y = image.getHeight() and counts down to 0
    // verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    // arr[] - determines the coordinates of the vertical or horizontal cuts you will do;
    //         set the arr[] array according to the graphical layout of your scanned image
    // image - this is the image you want to detect black/white edges of
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge) {
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for (int n = 0; n < arr.length; n++) {
            for (int m = reverse ? (verticalEdge ? image.getWidth() : image.getHeight()) - 1 : 0;
                 reverse ? m >= 0 : m < (verticalEdge ? image.getWidth() : image.getHeight()); ) {
                Color c = new Color(image.getRGB(verticalEdge ? m : arr[n], verticalEdge ? arr[n] : m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                // determine if the point is considered "dark" or not.
                // modify the range if you want to only include really dark spots;
                // occasionally, though, the edge might be blurred out, and light gray helps
                if (red < 239 && green < 239 && blue < 239) {
                    result[n] = m;
                    break;
                }
                // count forwards or backwards depending on the reverse variable
                if (reverse) {
                    m--;
                } else {
                    m++;
                }
            }
        }
        return result;
    }
}
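A possible usage sketch (my own; assumes a scan loaded with the standard ImageIO API):
import java.io.File;
import javax.imageio.ImageIO;

// search from the left edge (reversex = false) and from the bottom (reversey = true),
// treating points within 5 px of each other as inliers
BufferedImage scan = ImageIO.read(new File("scan.png"));
new EdgeDetection().printEdgesTest(scan, false, true, 5, 5);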
A similar problem I've solved in the past basically involved figuring out the orientation of the form, re-aligning it, and re-scaling it, and I was all set. You can use the Hough transform to detect the angular offset of the image (i.e. how much it is rotated), but you still need to detect the boundaries of the form. It also had to accommodate the boundaries of the piece of paper itself.
This was a lucky break for me, because it basically showed a black and white image in the middle of a big black border.
1) Apply an aggressive 5x5 median filter to remove some noise.
2) Convert from grayscale to black and white (rescale intensity values from [0,255] to [0,1]).
3) Calculate the Principal Component Analysis (i.e. calculate the Eigenvectors of the covariance matrix for your image from the calculated Eigenvalues) (http://en.wikipedia.org/wiki/Principal_component_analysis#Derivation_of_PCA_using_the_covariance_method)
4) This gives you a basis vector. You simply use that to re-orient your image to a standard basis matrix (i.e. [1,0],[0,1]).
Your image is now aligned beautifully. I did this for normalizing the orientation of MRI scans of entire human brains.
You also know that you have a massive black border around the actual image. You simply keep deleting rows from the top and bottom, and columns from both sides of the image, until they are all gone. You can temporarily apply a 7x7 median or mode filter to a copy of the image at this point; it helps rule out too much border remaining in the final image from thumbprints, dirt, etc.
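For illustration, a minimal sketch of the PCA step using second-order image moments, which is equivalent to a PCA of the foreground pixel coordinates (my own code; assumes the image has already been binarized so that true means foreground):
// returns the angle (radians) of the principal axis of the foreground pixels
static double principalAxisAngle(boolean[][] fg) {
    int h = fg.length, w = fg[0].length;
    double mx = 0, my = 0;
    long n = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (fg[y][x]) { mx += x; my += y; n++; }
    mx /= n; my /= n;

    // central second-order moments = entries of the 2x2 covariance matrix
    double mu20 = 0, mu02 = 0, mu11 = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (fg[y][x]) {
                mu20 += (x - mx) * (x - mx);
                mu02 += (y - my) * (y - my);
                mu11 += (x - mx) * (y - my);
            }

    // direction of the dominant eigenvector of that covariance matrix
    return 0.5 * Math.atan2(2 * mu11, mu20 - mu02);
}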
What is the most efficient way to do lighting for a tile based engine in Java?
Would it be putting a black background behind the tiles and changing the tiles' alpha?
Or putting a black foreground and changing alpha of that? Or anything else?
This is an example of the kind of lighting I want:
There are many ways to achieve this. Take some time before making your final decision. I will briefly sum up some techniques you could choose from and provide some code in the end.
Hard Lighting
If you want to create a hard-edge lighting effect (like your example image),
some approaches come to my mind:
Quick and dirty (as you suggested)
Use a black background
Set the tiles' alpha values according to their darkness value
A problem is that you can neither make a tile brighter than it was before (highlights) nor change the color of the light. Both of these are aspects which usually make lighting in games look good.
A second set of tiles
Use a second set of (black/colored) tiles
Lay these over the main tiles
Set the new tiles' alpha value depending on how strong the new color should be there.
This approach has the same effect as the first one, with the advantage that you may now color the overlay tile in a color other than black, which allows for both colored lights and highlights.
Example:
Even though it is easy, a problem is that this is a very inefficient way (two rendered tiles per tile, constant recoloring, many render operations, etc.).
More Efficient Approaches (Hard and/or Soft Lighting)
When looking at your example, I imagine the light always comes from a specific source tile (character, torch, etc.)
For every type of light (big torch, small torch, character lighting) you
create an image that represents the specific lighting behaviour relative to the source tile (light mask). Maybe something like this for a torch (white being alpha):
For every tile which is a light source, you render this image at the position of the source as an overlay.
To add a bit of light color, you can use e.g. 10% opaque orange instead of full alpha.
Results
Adding soft light
Soft light is no big deal now; just use more detail in the light mask compared to the tiles. By using only 15% alpha in the usually black region, you can add a dim-visibility effect for tiles that are not lit:
You may even easily achieve more complex lighting forms (cones etc.) just by changing the mask image.
Multiple light sources
When combining multiple light sources, this approach leads to a problem:
Drawing two masks, which intersect each other, might cancel themselves out:
What we want to have is that they add their lights instead of subtracting them.
Avoiding the problem:
Invert all light masks (with alpha being dark areas, opaque being light ones)
Render all these light masks into a temporary image which has the same dimensions as the viewport
Invert and render the new image (as if it was the only light mask) over the whole scenery.
This would result in something similar to this:
Code for the mask invert method
Assuming you render all the tiles in a BufferedImage first,
I'll provide some guidance code which resembles the last shown method (only grayscale support).
Multiple light masks for e.g. a torch and a player can be combined like this:
public BufferedImage combineMasks(BufferedImage[] images)
{
    // create the new image, canvas size is the max. of all image sizes
    int w = 0, h = 0;
    for (BufferedImage img : images)
    {
        w = img.getWidth() > w ? img.getWidth() : w;
        h = img.getHeight() > h ? img.getHeight() : h;
    }
    BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);

    // paint all images, preserving the alpha channels
    Graphics g = combined.getGraphics();
    for (BufferedImage img : images)
        g.drawImage(img, 0, 0, null);

    return combined;
}
The final mask is created and applied with this method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
    int width = image.getWidth();
    int height = image.getHeight();

    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);

    for (int i = 0; i < imagePixels.length; i++)
    {
        int color = imagePixels[i] & 0x00ffffff; // mask out the preexisting alpha
        // get the alpha from the mask; be careful, an alpha mask works the
        // other way round, so we have to subtract it from 255
        int alpha = 255 - ((maskPixels[i] >> 24) & 0xff);
        imagePixels[i] = color | (alpha << 24);
    }
    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
As noted, this is a primitive example. Implementing color blending might be a bit more work.
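Putting the two methods together might look like this (a sketch; torchMask, playerMask and the scene image are assumed to exist and share the scene's dimensions):
// combine the individual light masks, then apply the result to the rendered scene
BufferedImage lightMask = combineMasks(new BufferedImage[] { torchMask, playerMask });
applyGrayscaleMaskToAlpha(scene, lightMask);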
Raytracing might be the simplest approach.
you can store which tiles have been seen (used for automapping, for 'remember your map while being blinded', maybe for the minimap, etc.)
you show only what you see - maybe a monster or a wall or a hill is blocking your view; then raytracing stops at that point
distant 'glowing objects' or other light sources (torches, lava) can be seen, even if your own light source doesn't reach very far.
the length of your ray will be used to compute the amount of light (fading light)
maybe you have a special sensor (ESP, gold/food detection) which would be used to find objects that are not in your view? raytracing might help there as well ^^
How is this done easily?
draw a line from your player to every point of the border of your map (using Bresenham's algorithm, http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm)
walk along that line (from your character to the end) until your view is blocked; at that point stop your search (or maybe do one last final iteration to see what did stop you)
for each point on your line set the lighting (maybe 100% for distance 1, 70% for distance 2 and so on) and mark your map tile as visited
maybe you won't walk along the whole map; maybe it's enough if you restrict your raytrace to a 20x20 view?
NOTE: you really only have to walk along the borders of the viewport; it's NOT required to trace every interior point.
I'm adding the line algorithm to simplify your work:
public static ArrayList<Point> getLine(Point start, Point target) {
    ArrayList<Point> ret = new ArrayList<Point>();
    int x0 = start.x;
    int y0 = start.y;
    int x1 = target.x;
    int y1 = target.y;

    int dx = Math.abs(x1 - x0);
    int sx = x0 < x1 ? 1 : -1;
    int dy = -1 * Math.abs(y1 - y0);
    int sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, e2; /* error value e_xy */
    for (;;) { /* loop */
        ret.add(new Point(x0, y0));
        if (x0 == x1 && y0 == y1) break;
        e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } /* e_xy+e_x > 0 */
        if (e2 <= dx) { err += dx; y0 += sy; } /* e_xy+e_y < 0 */
    }
    return ret;
}
I did this whole lighting stuff (and A* pathfinding) some time ago; feel free to ask further questions.
Addendum:
Maybe I should simply add the small algorithms for raytracing ^^
To get the north and south border points, just use this snippet:
for (int x = 0; x < map.WIDTH; x++) {
    Point northBorderPoint = new Point(x, 0);
    Point southBorderPoint = new Point(x, map.HEIGHT);
    rayTrace(getLine(player.getPos(), northBorderPoint), map, player.getLightRadius());
    rayTrace(getLine(player.getPos(), southBorderPoint), map, player.getLightRadius());
}
and the raytrace works like this:
private static void rayTrace(ArrayList<Point> line, WorldMap map, int radius) {
    // radius = radius of the light source
    for (Point p : line) {
        float d = distance(line.get(0), p);
        // calculate light falloff linearly from 100% to 0%
        float amountLight = (radius - d) / radius;
        if (amountLight < 0) {
            amountLight = 0;
        }
        map.setLight(p, amountLight);
        if (map.isViewBlocked(p)) { // can be blocked by a wall or a monster
            break;
        }
    }
}
I've been into indie game development for about three years now. The way I would do this is first of all by using OpenGL, so you can get all the benefits of the graphical computing power of the GPU (hopefully you are already doing that). Suppose we start off with all tiles in a VBO, entirely lit. Now, there are several options for achieving what you want. Depending on how complex your lighting system is, you can choose a different approach.
If your light is going to be circular around the player, regardless of whether obstacles would block the light in real life, you could choose a lighting algorithm implemented in the vertex shader. In the vertex shader, you can compute the distance of the vertex to the player and apply some function that defines how bright things should be as a function of the computed distance. Do not use alpha; just multiply the color of the texture/tile by the lighting value.
If you want to use a custom lightmap (which is more likely), I would suggest to add an extra vertex attribute that specifies the brightness of the tile. Update the VBO if needed. Same approach goes here: multiply the pixel of the texture by the light value. If you are filling light recursively with the player position as starting point, then you would update the VBO every time the player moves.
If your lightmap depends on where the sunlight hits your level, you could combine two sorts of lighting techniques. Create one vertex attribute for the sun brightness and another vertex attribute for the light emitted by light points (like a torch held by the player). Now you can combine those two values in the vertex shader. Suppose your sun comes up and goes down in a day-and-night pattern. Let's say the sun brightness is sun, a value between 0 and 1. This value can be passed to the vertex shader as a uniform. The vertex attribute that represents the sun brightness is s, and the one for light emitted by light points is l. Then you could compute the total light for that tile like this:
tileBrightness = max(s * sun, l + flicker);
Where flicker (also a vertex shader uniform) is some kind of waving function that represents the little variations in the brightness of your light points.
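In Java-like form, the combination could look like this (a sketch; the flicker waveform here is just an example):
// small, fast oscillation around zero, e.g. +/- 0.05
float flicker(float timeSeconds) {
    return 0.05f * (float) Math.sin(timeSeconds * 10.0);
}

// s = sun-brightness attribute, l = point-light attribute, both in [0..1]
float tileBrightness(float s, float sun, float l, float timeSeconds) {
    return Math.max(s * sun, l + flicker(timeSeconds));
}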
This approach makes the scene dynamic without having to recreate continuously VBO's. I implemented this approach in a proof-of-concept project. It works great. You can check out what it looks like here: http://www.youtube.com/watch?v=jTcNitp_IIo. Note how the torchlight is flickering at 0:40 in the video. That is done by what I explained here.
I wish to determine the 2D screen coordinates (x,y) of points in 3D space (x,y,z).
The points I wish to project are real-world points represented by GPS coordinates and elevation above sea level.
For example:
Point (Lat:49.291882, Long:-123.131676, Height: 14m)
The camera position and height can also be determined as a x,y,z point. I also have the heading of the camera (compass degrees), its degree of tilt (above/below horizon) and the roll (around the z axis).
I have no experience of 3D programming, therefore, I have read around the subject of perspective projection and learnt that it requires knowledge of matrices, transformations etc - all of which completely confuse me at present.
I have been told that OpenGL may be of use to construct a 3D model of the real-world points, set up the camera orientation and retrieve the 2D coordinates of the 3D points.
However, I am not sure if using OpenGL is the best solution to this problem and even if it is I have no idea how to create models, set up cameras etc
Could someone suggest the best method to solve my problem? If OpenGL is a feasible solution, I'd have to use OpenGL ES, if that makes any difference. Oh, and whatever solution I choose, it must execute quickly.
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F / (Z - Zc))) + Xc
Y' = ((Y - Yc) * (F / (Z - Zc))) + Yc
If your camera is at the origin (Xc = Yc = Zc = 0), then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
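As a minimal sketch in code (my own illustration), with the camera at the origin looking down the Z axis:
// projects a camera-relative 3D point onto a plane at distance f from the camera
static double[] project(double x, double y, double z, double f) {
    double scale = f / z; // points farther away shrink toward the center
    return new double[] { x * scale, y * scale };
}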
You do indeed need a perspective projection and matrix operations greatly simplify doing so. I assume you are already aware that your spherical coordinates must be transformed to Cartesian coordinates for these calculations.
Using OpenGL would likely save you a lot of work over rolling your own software rasterizer. So, I would advise trying it first. You can prototype your system on a PC since OpenGL ES is not too different as long as you keep it simple.
If you just need to compute the coordinates of some points, you only need some algebra skills, not 3D programming with OpenGL.
Moreover, OpenGL does not deal with geographic coordinates.
First get some documentation about WGS84 and geodesic coordinates. You first have to convert your GPS data into a Cartesian frame (for instance the earth-centric Cartesian frame in which the WGS84 ellipsoid is defined).
Then the computations with matrices can take place.
The chain of transformations is roughly :
WGS84
earth centric coordinates
some local frame
camera frame
2D projection
For the first conversion see this.
The last step involves a projection matrix.
The others are only coordinate rotations and translations.
The "some local frame" is the local Cartesian frame with origin at your camera location, tangent to the ellipsoid.
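For illustration, a sketch of that first conversion (geodetic WGS84 to the earth-centric Cartesian frame, often called ECEF); the constants are the standard WGS84 values, while the helper name and signature are my own:
static double[] geodeticToEcef(double latDeg, double lonDeg, double heightM) {
    final double a = 6378137.0;          // WGS84 semi-major axis (m)
    final double e2 = 6.69437999014e-3;  // WGS84 first eccentricity squared

    double lat = Math.toRadians(latDeg);
    double lon = Math.toRadians(lonDeg);
    double n = a / Math.sqrt(1.0 - e2 * Math.sin(lat) * Math.sin(lat));

    double x = (n + heightM) * Math.cos(lat) * Math.cos(lon);
    double y = (n + heightM) * Math.cos(lat) * Math.sin(lon);
    double z = (n * (1.0 - e2) + heightM) * Math.sin(lat);
    return new double[] { x, y, z };
}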
I'd recommend "Mathematics for 3D Game Programming and Computer Graphics" by Eric Lengyel. It covers matrices, transformations, the view frustum, perspective projection and more.
There is also a good chapter in The OpenGL Programming Guide (red book) on viewing transformations and setting up a camera (including how to use gluLookAt).
If you aren't interested in displaying the 3D scene and are limited to using OpenGL ES then it may be better to just write your own code to do the mapping from 3D to 2D window coords. As a starting point you could download Mesa 3D, an open source implementation of OpenGL, to see how they implement gluPerspective (to set a projection matrix), gluLookAt (to set a camera transformation) and gluProject (to project a 3D point to 2D window coords).
return [((fol/v[2])*v[0]+x),((fol/v[2])*v[1]+y)];
A point at [0,0,1] will be x=0 and y=0, unless you add the screen-center xy - it's not the camera xy. fol is the focal length, derived from the fov angle and the screen width - how high the triangle is (tangent). This method will not match the three.js perspective matrix, which is why I am looking for that.
It turns out I should not be looking for it. I matched the xy on OpenGL perfectly, like super glue! But I cannot get it to work right in Java. THAT perfect match follows.
var pmat = [ 0, 0, 0, 0,
             0, 0, 0, 0,
             0, 0, (farclip + nearclip) / (nearclip - farclip), -1,
             0, 0, 2 * farclip * nearclip / (nearclip - farclip), 0 ];
void setpmat() {
    double fl; // = tan(dtor(90-fovx/aspect/2)); /// UNIT focal length
    fl = 1 / tan(dtor(fov / Aspect / 2)); /// same number
    pmat[0] = fl / Aspect;
    pmat[5] = fl;
}
void fovmat(double v[], double p[]) {
    int cx = (int)(_Width / 2), cy = (int)(_Height / 2);
    double pnt2[4], pnt[4] = { 0, 0, 0, 1 };
    COPYVECTOR(pnt, p); NORMALIZE(pnt);
    popmatrix4(pnt2, pmat, pnt);
    COPYVECTOR(v, pnt2);
    v[0] *= -cx; v[1] *= -cy;
    v[0] += cx;  v[1] += cy;
}

// world to screen matrix
void w2sm(int xy[], double p[]) {
    double v[3]; fovmat(v, p);
    xy[0] = (int)v[0];
    xy[1] = (int)v[1];
}
I have one more way to match the three.js xy until I get the matrix working, with just one condition: it must run at an Aspect of 2.
function w2s(fol, v, x, y) {
    var a = width / height;
    var b = height / width;
    /// b = .5 // a = 2
    var f = 1 / Math.tan(dtor(_fov / a)) * x * b;
    return [intr((f / v[2]) * v[0] + x), intr((f / v[2]) * v[1] + y)];
}
Use it with the inverted camera matrix, you will need invert_matrix().
v = orbital(i);
v = subv(v, campos);
v3 = popmatrix(wmatrix, v); // inverted mat
if (v3[2] > 0) {
    xy = w2s(flen, v3, cx, cy);
}
Finally, here it is (everyone ought to know by now): the no-matrix match, for any aspect.
function angle2fol(deg, centerx) {
    var b = width / height;
    var a = dtor(90 - (clamp(deg, 0.0001, 174.0) / 2));
    return asa_sin(PI_5, centerx, a) / b;
}

// ASA - solve the side opposing angle2 (b)
function asa_sin(a, s, b) {
    return Math.sin(b) * (s / Math.sin(PI - (a + b)));
}

function w2s(fol, v, x, y) {
    return [intr((fol / v[2]) * v[0] + x), intr((fol / v[2]) * v[1] + y)];
}
Updated the image for the proof. The input _fov gets you about 1.5 times that, "approximately". To see the FOV readout correctly, redo the triangle with the new focal length.
function afov(deg, centerx) {
    var f = angle2fol(deg, centerx);
    return rtod(2 * sss_cos(f, centerx, sas_cos(f, PI_5, centerx)));
}

// Side Angle Side - solve the length of the missing side
function sas_cos(s, a, ss) {
    return Math.sqrt((Math.pow(s, 2) + Math.pow(ss, 2)) - (2 * s * ss * Math.cos(a)));
}

// SSS - solve the angle opposite side2 (b)
function sss_cos(a, b, c) {
    with (Math) {
        return acos((pow(a, 2) + pow(c, 2) - pow(b, 2)) / (2 * a * c));
    }
}
The star library confirmed the perspective; then it was possible to measure the VIEW! http://innerbeing.epizy.com/cwebgl/perspective.jpg
I can explain the 90-degree correction to the moon's north pole in one word: precession. So what is the current up vector? pnt? radec?
function ininorths() {
    if (0) {
        var c = ctime;
        var v = LunarPos(jdm(c));
        c += secday();
        var vv = LunarPos(jdm(c));
        vv = crossprod(v, vv);
        v = eyeradec(vv);
        echo(v, vv);
        v = [266.86 - 90, 65.64]; // old
    }
    var v = [282.6425, 65.8873]; /// new.
    // ...
}
I have yet to explain the TWO sets of vectors: Three.milkyway.matrix and the 3D to 2D drawing. They ARE:
function drawmilkyway() {
    var v2 = radec2pos(dtor(192.8595), dtor(27.1283), 75000000);
    // gcenter 266.4168 -29.0078
    var v3 = radec2pos(dtor(266.4168), dtor(-29.0078), 75000000);
    // ...
}

function initmwmat() {
    var r, u, e;
    e = radec2pos(dtor(156.35), dtor(12.7), 1);
    u = radec2pos(dtor(60.1533), dtor(25.5935), 1);
    r = normaliz(crossprod(u, e));
    u = normaliz(crossprod(e, r));
    e = normaliz(crossprod(r, u));
    var m = MilkyWayMatrix;
    m[0] = r[0]; m[1] = r[1]; m[2]  = r[2];  m[3]  = 0.0;
    m[4] = u[0]; m[5] = u[1]; m[6]  = u[2];  m[7]  = 0.0;
    m[8] = e[0]; m[9] = e[1]; m[10] = e[2];  m[11] = 0.0;
    m[12] = 0.0; m[13] = 0.0; m[14] = 0.0;   m[15] = 1.0;
}
/// draw vectors and matrix were the same in C!
void initmwmat(double m[16]) {
    double r[3], u[3], e[3];
    radec2pos(e, dtor(192.8595), dtor(27.1283), 1);   // up
    radec2pos(u, dtor(266.4051), dtor(-28.9362), -1); // eye
}