I am attempting to make a snake clone, and I want the snake to be about 5 pixels wide and to move in 1-pixel increments. This means the snake can be closer than one snake-width to itself, which complicates randomly generating the food, which will be a 5x5-pixel square. I need/want an algorithm that reduces the random generation to 5x5 squares that contain no part of the snake.
Part of the problem is that, since the food is 5x5, collision detection has to check each pixel within the food to see if the snake is partially inside it. This leaves me with two problems.
Problem
How do I reduce the search space when randomly generating the food?
I've thought about doing a quadtree, subdividing the space until the divisions are smaller than the food I want to place. I've also thought about using AWT's Rectangle and figuring out whether the food's generated position is contained within any of the rectangles that make up the player.
The best solution I can think of is to generate a 2D array containing all possible food rectangles, indexed by their upper-left corners. Then generate a second 2D array mapping each pixel to every rectangle that contains it, or really just to the possible upper-left corners, since the whole rectangle isn't needed (25 per pixel). After each move, update the corner maps for the head and tail, marking each of the mapped rectangles as occupied or free. Then do a simpler random search using the mapped corner array (this reduces it to a 1x1-pixel search space).
The main problem I am trying to avoid is that when the snake fills 90% of the playable space, any basic generate-test-repeat system will take a noticeable amount of time to complete and, in the worst case, may never generate a food square.
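For reference, the corner-map idea above could be sketched like this in Java. All the names here (FoodCornerMap, updatePixel, and so on) are invented for illustration; the sketch keeps a per-corner count of blocking snake pixels plus a running total of free corners, so picking food is a bounded scan rather than a retry loop:

```java
import java.util.Random;

// Sketch of the corner-map idea: blockCount[x][y] counts how many snake
// pixels currently block a 5x5 food square whose top-left corner is (x, y).
public class FoodCornerMap {
    static final int FOOD = 5;
    final int w, h;            // number of valid top-left corners per axis
    final int[][] blockCount;  // per-corner count of blocking snake pixels
    int freeCorners;           // how many corners have blockCount == 0

    FoodCornerMap(int boardW, int boardH) {
        w = boardW - FOOD + 1;
        h = boardH - FOOD + 1;
        blockCount = new int[w][h];
        freeCorners = w * h;
    }

    // Call with delta = +1 when a snake pixel appears at (px, py),
    // and delta = -1 when the tail leaves that pixel.
    void updatePixel(int px, int py, int delta) {
        for (int x = Math.max(0, px - FOOD + 1); x <= Math.min(w - 1, px); x++)
            for (int y = Math.max(0, py - FOOD + 1); y <= Math.min(h - 1, py); y++) {
                if (blockCount[x][y] == 0) freeCorners--;
                blockCount[x][y] += delta;
                if (blockCount[x][y] == 0) freeCorners++;
            }
    }

    // Pick a uniformly random free corner in O(w*h) worst case --
    // no retry loop, so it terminates even at 90% board coverage.
    int[] randomFoodCorner(Random rng) {
        if (freeCorners == 0) return null;  // board is full
        int k = rng.nextInt(freeCorners);
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                if (blockCount[x][y] == 0 && k-- == 0) return new int[]{x, y};
        throw new IllegalStateException("freeCorners out of sync");
    }
}
```

The linear scan in randomFoodCorner could be replaced with a Fenwick tree or a maintained free list if the worst case matters, but even the scan never fails or repeats.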
After 7 months I have a solution. I have not tested this yet, because I still have to implement it.
The way I'm handling the snake body is by a list of rectangles defined by the top left X,Y coordinate, width, and height.
What I can do is create a tree of degree 8, starting with the whole board, and then place each snake part into the tree, with its top-left corner offset by the size of the apple. When a part is placed, I can split the containing board region into 9 sections based on the 4 corners, 4 edges, and center of the snake part. Ignoring the section containing the snake, and any sections with dimensions <= 0, I place the rest into the tree.
Since the snake parts were already adjusted for the apple size, any point within a non-occupied leaf node can be used to place an apple.
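The split step described here might look something like the following, using AWT's Rectangle. BoardSplitter and splitAround are made-up names, and the snake part is assumed to have already been expanded by the apple size as described; the 3x3 grid of sections is formed from the part's corners and edges, the center cell is the snake itself, and degenerate cells are dropped:

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

// Carve one free board region around one (apple-expanded) snake part,
// producing the up-to-8 surviving sections of the 9-way split.
public class BoardSplitter {
    static List<Rectangle> splitAround(Rectangle board, Rectangle part) {
        Rectangle s = board.intersection(part);
        List<Rectangle> free = new ArrayList<>();
        if (s.isEmpty()) { free.add(board); return free; }  // no overlap
        int[] xs = { board.x, s.x, s.x + s.width, board.x + board.width };
        int[] ys = { board.y, s.y, s.y + s.height, board.y + board.height };
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                if (i == 1 && j == 1) continue;  // center cell = snake part
                int w = xs[i + 1] - xs[i], h = ys[j + 1] - ys[j];
                if (w > 0 && h > 0)              // drop dimensions <= 0
                    free.add(new Rectangle(xs[i], ys[j], w, h));
            }
        return free;
    }
}
```

Each returned rectangle would become a child node; parts placed later are split against the leaves they overlap.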
Related
I couldn't find any satisfying answer on that topic. I want to make a program that will get snapshots from camera above the pool table and detect balls. I am using OpenCV and Java. My algorithm now is basically:
blurring image -> converting RGB to HSV -> splitting into 3 planes -> using Canny() on H plane -> using HoughCircles() method to detect balls
This algorithm detects balls quite well; it only has problems with two balls (green and blue, because the background of the table is green). But I want to go one step further and:
Detect if the ball belongs to stripes or solids
Set an ID for every ball: stripes would have, for example, 1-7 and solids 8-14; every ball would have a unique ID that doesn't change during the game
Do you have any idea how to implement task #1? My idea is to use the inRange() function, but then I'd have to prepare a mask for every ball that detects that one ball in a specified range of colors, and run this detection for every ball; am I right? Thanks for sharing your opinions.
Edit: Here I give you some samples of how my algorithm works. I changed some parameters because I wanted to detect everything, and now it works worse, but it still works with quite nice accuracy. I'll give you three samples: the original image from the camera, the image where I detect the balls (undistorted, with some filters), and the image with the detected balls.
Recommendation:
If you can mask out the pixels corresponding to a ball, the following method should work to differentiate striped/solid balls based on their associated pixels:
Desaturate the ball pixels and threshold them at some brightness p.
Count the number of white pixels and total pixels within the ball area.
Threshold on counts: if the proportion of white pixels is greater than some threshold q, classify it as a striped ball. Otherwise, it's a solid ball.
(The idea being that the stripes are white, and always at least partially visible, so striped balls will have a higher proportion of white pixels).
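The three steps above reduce to a few lines once you have the masked ball pixels. A minimal sketch, assuming the per-pixel brightnesses have already been extracted into a plain array in [0, 1] (with OpenCV these would come from a grayscale or V channel; BallClassifier and isStriped are made-up names):

```java
// Desaturate-threshold-count: p is the brightness threshold, q the
// minimum proportion of white pixels for a ball to count as striped.
public class BallClassifier {
    static boolean isStriped(double[] ballPixels, double p, double q) {
        int white = 0;
        for (double b : ballPixels)
            if (b >= p) white++;                 // step 1: threshold at p
        double proportion = (double) white / ballPixels.length;  // step 2
        return proportion > q;                   // step 3: compare to q
    }
}
```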
Sample Testing:
Here's an example of this applied (by hand, with p = 0.7) to some of the balls in the unrectified image, with final % white pixels on the right.
It looks like a classification threshold of q = 0.1 (minimum 10% white pixels to be a striped ball) will distinguish the two groups, although it would be ideal to tune the thresholds based on more data.
If you run into issues with shadowed balls using this method, you can also try rescaling each ball's brightnesses before thresholding (so that the brightnesses span the full range [0, 1]), which should make the method less dependent on absolute brightness.
Let's say I have a triangular face in 3D space: I have the 3D coordinates of each vertex of this triangle, and I also have other information about it (angles, side lengths, etc.). In Java, given the viewing screen and its information, how can I draw that face to an image without using libraries like LWJGL, assuming I can properly project any 3D point to the 2D image, accounting for perspective?
Would the best course of action be to run a loop that draws each point on the plane to a point on the image (i.e., setting the corresponding pixel), which would most likely set the same pixel multiple times? If I did this, what would be the best way to identify each point in an oblique triangle, or a triangle that doesn't line up nicely with the axes?
tl;dr: I have a triangular face in 3d space, a "camera" looking at the face, and an image in which I can set each pixel. Using no GL libraries, what's the best way to project and draw that face onto the image?
Projection:
I won't detail this, as you seem to know it.
Drawing a line
You can look at the Bresenham algorithm if you want to start with the basics
(hardwired in recent graphics cards).
Filling
You can fill between the left and right borders of the triangle while you run Bresenham on both edges (or you could use a flood-fill algorithm starting... I don't know, maybe at the projection of the center of the triangle).
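As an alternative to running Bresenham on both edges, here's a small self-contained sketch that fills a projected triangle with an edge-function (half-space) test over its bounding box; `pixels` is a hypothetical width*height color buffer, and the method works for either vertex winding:

```java
// Fill a 2D-projected triangle by testing every pixel in its bounding box
// against the three edge functions (a common software-rasterizer approach).
public class TriangleFill {
    static void fill(int[] pixels, int width, int height,
                     double x0, double y0, double x1, double y1,
                     double x2, double y2, int color) {
        int minX = (int) Math.max(0, Math.floor(Math.min(x0, Math.min(x1, x2))));
        int maxX = (int) Math.min(width - 1, Math.ceil(Math.max(x0, Math.max(x1, x2))));
        int minY = (int) Math.max(0, Math.floor(Math.min(y0, Math.min(y1, y2))));
        int maxY = (int) Math.min(height - 1, Math.ceil(Math.max(y0, Math.max(y1, y2))));
        double area = edge(x0, y0, x1, y1, x2, y2);
        if (area == 0) return;                        // degenerate triangle
        for (int y = minY; y <= maxY; y++)
            for (int x = minX; x <= maxX; x++) {
                // inside if on the same side of all three edges; dividing
                // by the signed area makes the test winding-independent
                double w0 = edge(x1, y1, x2, y2, x, y) / area;
                double w1 = edge(x2, y2, x0, y0, x, y) / area;
                double w2 = edge(x0, y0, x1, y1, x, y) / area;
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                    pixels[y * width + x] = color;    // each pixel set once
            }
    }

    // signed-area test: > 0 when (px,py) is left of edge (ax,ay)->(bx,by)
    static double edge(double ax, double ay, double bx, double by,
                       double px, double py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }
}
```

This also sidesteps the "same pixel set multiple times" concern from the question: each pixel in the box is visited exactly once, and the w0/w1/w2 values are barycentric coordinates you could later reuse for depth or texture interpolation.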
Your best bet is to check out the g.fillPolygon() function in Java. It allows you to draw polygons with as many sides as you like, and there's also g.drawPolygon() if you don't want it solid. Then you can just do some simple maths for the points: each point is basically its x and y, except that if the polygon is further away the points move closer to the center of the polygon, and if the polygon is closer they move further away from the center.
A second idea could be to use some sort of array to store pixels, then research line-drawing algorithms, draw the lines, put all the line data in another array, and use some sort of flood fill. Then, while it's in that array, you could try doing some weird stuff to the pixels if you wanted textures or something.
I know how to make a horizontal or vertical scroller game (like Mario). In this type of game, the character always stays at the same distance from the viewer: the character only moves left and right in a horizontal scroller, and up and down in a vertical scroller.
But there are 2D games where the character can move freely around the scene, like, for example, graphic adventures.
So how can I make the ground so that the character can move freely on it, with a sense of depth?
An example can be seen in this video: http://youtu.be/DbZZVbF8fZY?t=4m17s
Thank you.
This is how I would do that:
First, imagine that you are looking at your scene from the top, down at the ground. Set up your coordinate system like that, so every object in your scene has X and Y coordinates. Do all your object movement, checks (when the character bumps into a wall or something), and calculations in that 2D world.
Now, to draw your world: if you want the simpler approach, without an isometric-perspective 3D look, just draw your background image first, then sort all your objects from far to near and draw them in that order. Divide your Y coordinates to squeeze the movement area a bit, and add some constant to Y to move that area down the screen. If your characters can jump or fly (move through the Y axis), just shift the Y coordinate by some amount.
But if you want it to look more 3D, you'll have to apply some kind of perspective transformation: multiply your X coordinate by Y and some constant (start with a value of 1 for the constant and tune it up until it looks right). You can do a similar thing with the Y coordinate too, but I don't think it's needed for adventure games like this.
This is probably hard to understand without an image, but it's actually a very simple transformation.
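Here is roughly what that transform could look like in code. All the names and constants (screenCenterX, horizonY, squeeze, depthScale) are made up for illustration and would be tuned by eye:

```java
// Top-down world coordinates (worldX, worldY) -> screen position, where
// worldY acts as depth: larger worldY means nearer to the viewer.
public class GroundProjection {
    static double[] worldToScreen(double worldX, double worldY,
                                  double screenCenterX, double horizonY,
                                  double squeeze, double depthScale) {
        // divide/scale Y to flatten the walkable strip, then shift it down
        double screenY = horizonY + worldY * depthScale;
        // multiply X's offset from the center by Y and a constant, so
        // far-away (small worldY) points are pulled toward the center
        double scale = squeeze * worldY;
        double screenX = screenCenterX + (worldX - screenCenterX) * scale;
        return new double[]{screenX, screenY};
    }
}
```

You can also reuse `scale` to shrink the character sprite as it walks "into" the scene, which sells the depth effect.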
I have a problem that I'm not sure how to overcome. What I'm working on right now is a game that just generates an infinite floor of dirt, and I have a player object. I would like to keep the player object in the center of the screen, but move the view. I thought that this would be a common subject, but I haven't been able to find anything related to it on the internet. Do I have to move every other instance and keep the player still to simulate a moving view?
Thank you,
~Guad
Yes, you move every other instance except the player.
One way to display a large area of terrain is to use tiles. A common size for terrain tiles is 256 x 256 pixels. You create a grid of tiles that's a bit larger than the game area.
Let's say we have an 800 x 600 pixel display area. This would be completely covered by a 4 x 3 area of tiles. In order to make the motion smoother, you create a 6 x 5 area of tiles in memory, and display 800 x 600 pixels of the 6 x 5 area.
As the player moves to the right, you add 5 tiles to the right of the 6 x 5 area, and drop the 5 tiles on the left. On the screen, it looks like the player is covering a great distance, but you're just adding and removing terrain tiles. You would add and remove tiles in a similar manner when the player moves up, down, and to the left.
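The bookkeeping for that tile window can be sketched as follows (TileWindow and visibleTiles are invented names; the method computes which 256x256 tiles cover an 800x600 view for a given camera position, and the pixel offset at which to draw the first tile, leaving tile loading itself out):

```java
// Which tiles are visible, and where to start drawing them, for a camera
// whose top-left corner is at (cameraX, cameraY) in world pixels.
public class TileWindow {
    static final int TILE = 256, VIEW_W = 800, VIEW_H = 600;

    // returns {firstCol, firstRow, cols, rows, offsetX, offsetY}
    static int[] visibleTiles(int cameraX, int cameraY) {
        int firstCol = Math.floorDiv(cameraX, TILE);
        int firstRow = Math.floorDiv(cameraY, TILE);
        // the first tile is drawn partly off-screen, hence a <= 0 offset
        int offsetX = -Math.floorMod(cameraX, TILE);
        int offsetY = -Math.floorMod(cameraY, TILE);
        // number of tiles needed to cover the view from that offset
        int cols = (VIEW_W - offsetX + TILE - 1) / TILE;
        int rows = (VIEW_H - offsetY + TILE - 1) / TILE;
        return new int[]{firstCol, firstRow, cols, rows, offsetX, offsetY};
    }
}
```

As the camera crosses a tile boundary, firstCol/firstRow change by one, which is exactly the moment to load the incoming column or row of tiles and drop the outgoing one.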
I hope this is enough information to get you started.
I'm experimenting with LibGDX and 3D in a projection view. Right now I'm looking at how to determine the outermost bounds of my viewport in world space at z=0.0, in order to draw a coordinate grid no larger than necessary. However, I seem to have outpaced my education, in that I haven't taken a formal linear algebra class and am still a little fuzzy on matrix math.
Is there a way to determine where I should start and stop drawing lines without resorting to using picking and drawing a transparent plane to intersect with?
LibGDX's unproject function takes screen coordinates in a Vector3 and returns a Vector3 in world space, between the near and far clipping planes depending on the provided z. However, given that I have a translated and rotated Camera (an encapsulation of the view-projection matrix and a slew of convenience methods), it occurs to me that I can't pick an arbitrary z for the window-coordinate vector and just set it to 0.0 after unprojection, as that point probably won't be the furthest viewable point in the viewport. So how do I know what z value to use in the window coordinate that will give the x and y I need in world space at z=0.0?
EDIT (UPDATE):
So apparently the problem I'm looking at is ray-plane intersection, which would require ray casting. So now I suppose my question is this: is casting 4 rays per render loop (or, I suppose, whenever the camera has moved) worth the payoff of being able to dynamically draw a world-space coordinate grid no larger than the viewport? If not, is there a cheaper algorithm I can use to estimate where I should start and stop drawing lines?
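For what it's worth, the per-corner cost is tiny: once you have a world-space ray for a viewport corner (for instance by unprojecting the corner at the near and far planes and subtracting), intersecting it with the z=0 plane is a one-line formula, not a full picking pass against a transparent plane. A minimal sketch without any LibGDX types (GridBounds is a made-up name; LibGDX's own Intersector.intersectRayPlane does the equivalent):

```java
// Intersect a ray (origin o + t * direction d) with the z == 0 plane.
public class GridBounds {
    // returns the world-space {x, y} at z == 0, or null when the ray is
    // parallel to the plane or the plane is behind the ray origin
    static double[] intersectZ0(double ox, double oy, double oz,
                                double dx, double dy, double dz) {
        if (dz == 0) return null;   // parallel to the plane
        double t = -oz / dz;        // solve oz + t * dz == 0
        if (t < 0) return null;     // plane is behind the ray
        return new double[]{ox + t * dx, oy + t * dy};
    }
}
```

Doing this for the four viewport corners is a handful of multiplies, so recomputing the grid bounds on every camera move should be negligible next to actually drawing the grid lines.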