I'm making a game and I'd like to implement raycasting for the hero's laser (and other things in the future). I have my sprites in a sprite sheet, which I bind at the beginning and access when I draw, since each element knows how to draw itself. But the sprite sheet is a PNG, and thus some elements have transparency, which works fine in OpenGL. I know each element's position, size, etc., but if some sprites have transparency, the position and size aren't enough for the raycast to be perfect, since it would only hit the "bounding box". So is there a way to cast a ray using the Bresenham algorithm (I believe it is the lightest way, correct me if I'm wrong) and make it pixel-perfect in OpenGL, so that I can get the collision point of the ray with the actual non-transparent zone of the first sprite in its path?
There is no easy way to do this. You would have to write a custom collision checker for your raycast to decide whether the ray passes through a sprite or collides with part of it.
However, it might be a better idea to use a smaller bounding box, or a circle to represent the sprite, or both. These are much easier and faster to calculate than checking every pixel within the texture.
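If you do decide to go per-pixel, the usual trick is to read the sprite's alpha channel back on the CPU once at load time, build a boolean mask, and then step along the ray with Bresenham, stopping at the first opaque texel. Here is a minimal sketch in plain Java. BufferedImage stands in for however you load your PNG, spriteX/spriteY are hypothetical world coordinates of the sprite's corner, and I'm assuming the ray and the sprite live in the same pixel-aligned coordinate space:

import java.awt.image.BufferedImage;

// Sketch: march a ray with Bresenham across a sprite's opaque-pixel mask.
public final class PixelPerfectRay {

    // Build a lookup of opaque pixels once, at load time.
    public static boolean[][] buildMask(BufferedImage sprite, int alphaThreshold) {
        boolean[][] mask = new boolean[sprite.getWidth()][sprite.getHeight()];
        for (int x = 0; x < sprite.getWidth(); x++)
            for (int y = 0; y < sprite.getHeight(); y++)
                mask[x][y] = ((sprite.getRGB(x, y) >>> 24) > alphaThreshold);
        return mask;
    }

    // Steps from (x0,y0) to (x1,y1) with Bresenham; returns the first point
    // that lands on an opaque pixel of the sprite, or null if the ray misses.
    public static int[] cast(int x0, int y0, int x1, int y1,
                             boolean[][] mask, int spriteX, int spriteY) {
        int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;
        while (true) {
            int lx = x0 - spriteX, ly = y0 - spriteY; // world -> sprite-local
            if (lx >= 0 && ly >= 0 && lx < mask.length && ly < mask[0].length
                    && mask[lx][ly]) {
                return new int[] { x0, y0 };          // hit an opaque texel
            }
            if (x0 == x1 && y0 == y1) return null;    // reached ray end: miss
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; x0 += sx; }
            if (e2 <  dx) { err += dx; y0 += sy; }
        }
    }
}

You would still test the bounding box first and only fall back to the mask walk when the ray actually enters it, so the cheap box/circle tests above remain your early-out.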
I'm using libGDX to make a simple tile-based game, and everything seemed fine until I added a rectangle that follows the mouse position. I figured out that whenever I jump, the rectangle (and the other blocks too) expands, probably by 1 px, until I release the spacebar. When I hit the spacebar again, it goes back to its normal size. I tried printing out the rectangle's width and height, but they didn't change, so the problem is with rendering.
[Screenshot "Everything allright": the game before the jump.]
[Screenshot "Wider textures": the game after the jump; the stretching is clearly visible on the player's head.]
A little more detail: I don't use Box2D. Tile sizes are 8x8, scaled to 20x20. I'm using TexturePacker without padding (the problem occurs with padding anyway). I don't know which code to post, because I have no idea where the problem could be, so here is just the simple Block class. Any help would be much appreciated, thanks.
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Rectangle;

public class Block extends Sprite {
    private int[] id = { 0, 0 };
    public Rectangle rect;
    private int textureSize = 8; // source tiles are 8x8 px

    public Block(PlayScreen play, String texture, int x, int y, int[] id) {
        super(play.getAtlas().findRegion("terrain"));
        this.id = id;
        rect = new Rectangle(x, y, ID.tileSize, ID.tileSize); // 20x20 in-game size
        // pick this block's 8x8 cell out of the "terrain" atlas page
        setRegion(id[0] * textureSize, id[1] * textureSize + 32, textureSize, textureSize);
        setBounds(rect.x, rect.y, rect.width, rect.height);
    }

    public void render(SpriteBatch batch) {
        draw(batch);
    }
}
Welcome to libGDX!
TL;DR- there isn't enough of your code there to tell what the exact problem is, but my guess is that somewhere in your code you are confusing pixel-space with game-space.
A Matter of Perspective
When you first create a libGDX game that is 2D, it's really tempting to think that you are just painting pixels onto the screen. After all, your screen is measured in pixels, your window is measured in pixels, and your texture is measured in pixels.
However, if you start looking closer at the API, you'll find weird little things such as your camera and sprite positions and sizes being measured as floating point values instead of integers (Why floats? You can't have a fraction of a pixel!).
The reason is that the dimensions of your game objects are different from how big they are drawn. It's really easy to understand this in a 3D world: when I am close to something, it is drawn really big on the screen. When I am far away, it is drawn really small. The actual size of the object doesn't change based on my distance from it, but the perceived size does. This tells us that we can't safely measure things in our game just based on how they're drawn; we have to measure based on their true size.
As a side note, while you may be using an Orthographic camera (i.e. one without perspective) and drawing 2D sprites, libGDX is really drawing a flat 3D object (a plane) behind the scenes.
Game Units
So how do we measure the "true size" of something? The answer is that we can measure it using whatever type of unit we want! We can say something is 3.5 meters long, or 42 bananas- whatever you want! For the sake of this conversation, I'm going to call these units "Game Units" (GU).
For your game, you might consider making each block one GU high and one GU wide (essentially measuring your game world in blocks). Your character can move in fractions of a block, but you measure speed in terms of "blocks per second." I can almost guarantee it will make your game logic a lot simpler.
But our textures are in pixels!
As you probably already know, your game uses three things to render: a viewport (the patch of the screen where your game can be painted), a camera (think of it like a real camera: you change the position and size of the lens to change how much of your world is 'in view'), and your game objects (the things you may or may not want to draw, depending on whether they're visible to the camera).
Now let's look at how they're measured:
Viewport: This is a chunk of your screen (set to be the size of your game window), and as such is measured in pixels.
Camera: The Camera is interesting, because its size and position are measured in Game Units, not pixels. Since the viewport uses the Camera to know what to paint on the screen, it does contain the mapping of GU to pixel.
Game Object: This is measured in Game Units. It may have a texture measured in pixels, but that's different from the "true size" of the game object.
Now libGDX defaults all of these sizes such that 1 GU == 1 Pixel, which misleads a lot of folks into thinking that everything is measured by pixels. Once you realize that this isn't really the case, there are some really cool implications.
Really Cool Implications
The first implication is that even if my screen size changes, my camera size can stay the same. For example, if I have a small 800x600 pixel screen, I can set my camera size to 40x30. This maintains a nice aspect ratio, and allows me to draw 40x30 blocks on the screen.
If the screen size changes (say to 1440x900), my game will still show 40x30 blocks on the screen. They may look a little stretched if the aspect ratio changes, but libGDX has special viewports that will counteract this for you. This makes it much easier to support your game on other monitors, other devices, or even just handling screen resizes.
The second cool implication is that you stop caring about texture sizes to a large degree. If you start telling libGDX "Hey, go draw this 32x32px sprite on this 1x1 GU object" instead of "Hey, go draw this 32x32px sprite" (notice the difference?), then changing texture sizes doesn't change how big things are drawn on your screen; it changes how detailed they are. If you want to change how big they are drawn, you can change your camera size to 'zoom in.'
The third cool implication is that this makes your game logic a lot cleaner. For example, you start thinking of speeds in "Game Units per second", not "Pixels per second". This means that changes in drawing size won't affect how fast things are in the game, which will save you a ton of bug-hunting further down the road. You also avoid a lot of the weird "my jump behaves differently when I resize the screen" bugs.
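To make this concrete, here is a minimal sketch of all three implications in libGDX. The texture name "block.png" and the speed of 2 GU per second are made up for the example:

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class GameUnitsDemo extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;
    private SpriteBatch batch;
    private Sprite block;

    @Override public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(40, 30, camera);   // camera sized in GU, not pixels
        batch = new SpriteBatch();
        block = new Sprite(new Texture("block.png")); // e.g. a 32x32px texture
        block.setSize(1f, 1f);                        // drawn as a 1x1 GU object
        block.setPosition(5f, 5f);                    // position in GU
    }

    @Override public void resize(int width, int height) {
        viewport.update(width, height, true); // window changed; camera still sees 40x30 GU
    }

    @Override public void render() {
        // speed in GU per second, so resizing never changes movement speed
        block.translateX(2f * Gdx.graphics.getDeltaTime());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        block.draw(batch);
        batch.end();
    }
}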
Summary
I hope this is helpful and makes sense. It's difficult to get your mind around it at first, but it will make your life a lot easier and your game a lot better in the long run. If you'd like a better example with pictures, I recommend that you read this article by one of the libGDX developers.
Let's say I have a triangular face in 3D space, and I have the 3D coordinates of each vertex of this triangle, along with other information about the triangle (angles, lengths of sides, etc.). In Java, if I have the viewing screen and its information, how can I draw that plane onto the image without using libraries like LWJGL, assuming I can properly project, accounting for perspective, any 3D point to that 2D image?
Would the best course of action just be to run a loop that draws each point on the plane to a point on the image (i.e. setting the corresponding pixel), which would most likely set the same pixel multiple times? If I did this, what would be the best way to identify each point of an oblique triangle, or a triangle that doesn't line up nicely with the axes?
tl;dr: I have a triangular face in 3D space, a "camera" looking at the face, and an image in which I can set each pixel. Using no GL libraries, what's the best way to project and draw that face onto the image?
Projection
I won't detail this, as you seem to know it already.
Drawing a line
You can look at the Bresenham algorithm if you want to start with the basics (line drawing is hardwired into recent graphics cards).
Filling
You can fill between the left and right borders of the triangle while you run Bresenham on both edges (you could also use a flood-fill algorithm, starting, say, at the projection of the center of the triangle).
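To make the "fill between left and right borders" idea concrete, here is a rough scanline sketch in plain Java. It assumes the three vertices have already been projected to 2D screen coordinates, and it does a flat fill only, with no perspective-correct interpolation:

import java.awt.image.BufferedImage;

// Sketch of a scanline fill: for each row the triangle covers, find where the
// three edges cross that row and fill between the leftmost and rightmost hit.
public final class TriangleRaster {

    public static void fillTriangle(BufferedImage img,
                                    double x0, double y0,
                                    double x1, double y1,
                                    double x2, double y2, int rgb) {
        int yTop = (int) Math.ceil(Math.min(y0, Math.min(y1, y2)));
        int yBot = (int) Math.floor(Math.max(y0, Math.max(y1, y2)));
        for (int y = Math.max(yTop, 0); y <= Math.min(yBot, img.getHeight() - 1); y++) {
            double xl = Double.POSITIVE_INFINITY, xr = Double.NEGATIVE_INFINITY;
            // intersect the scanline with each of the three edges
            double[] xs = {
                edgeX(x0, y0, x1, y1, y),
                edgeX(x1, y1, x2, y2, y),
                edgeX(x2, y2, x0, y0, y)
            };
            for (double x : xs) {
                if (!Double.isNaN(x)) { xl = Math.min(xl, x); xr = Math.max(xr, x); }
            }
            if (xl > xr) continue; // scanline misses the triangle
            for (int x = Math.max((int) Math.ceil(xl), 0);
                     x <= Math.min((int) Math.floor(xr), img.getWidth() - 1); x++) {
                img.setRGB(x, y, rgb);
            }
        }
    }

    // X where edge (xa,ya)-(xb,yb) crosses scanline y, or NaN if it doesn't.
    private static double edgeX(double xa, double ya, double xb, double yb, double y) {
        if ((y < ya && y < yb) || (y > ya && y > yb) || ya == yb) return Double.NaN;
        return xa + (y - ya) * (xb - xa) / (yb - ya);
    }
}

This also answers the "same pixel multiple times" worry from the question: each scanline visits every covered pixel exactly once.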
Your best bet is to check out the g.fillPolygon() function in Java. It allows you to draw polygons with as many sides as you want, and there's also g.drawPolygon() if you don't want them solid. Then you can just do some simple maths for the points: each point is basically its x and y, except that if the polygon is farther away the points move closer to the polygon's center, and if it is closer they move farther from the center.
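As a rough sketch of that idea (the focalLength constant and the camera-at-origin setup are illustrative assumptions, not something java.awt gives you):

import java.awt.Color;
import java.awt.Graphics;
import java.awt.Polygon;

// Sketch of the fillPolygon route: project each 3D vertex with a simple
// perspective divide, then hand the 2D points to Graphics.fillPolygon.
public final class PolygonProjection {

    // Assumes the camera sits at the origin looking down +z, with z > 0 in front.
    static int[] project(double x, double y, double z,
                         double focalLength, int screenW, int screenH) {
        double scale = focalLength / z;          // farther away => closer to center
        return new int[] {
            (int) (screenW / 2 + x * scale),
            (int) (screenH / 2 - y * scale)      // screen y grows downward
        };
    }

    static void drawTriangle(Graphics g, double[][] verts,
                             double focalLength, int w, int h) {
        Polygon p = new Polygon();
        for (double[] v : verts) {
            int[] s = project(v[0], v[1], v[2], focalLength, w, h);
            p.addPoint(s[0], s[1]);
        }
        g.setColor(Color.RED);
        g.fillPolygon(p); // or g.drawPolygon(p) for an outline
    }
}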
A second idea could be using some sort of array to store pixels, then researching line-drawing algorithms, drawing the edges, putting all the line data in another array, and using some sort of flood-fill. While the pixels are in that array, you could also manipulate them if you wanted textures or other effects.
What is the easiest way to give a 2D texture in OpenGL (LWJGL) some kind of "thickness"? Of course I could get the border of the texture somehow and add quads, oriented by the normal of the quad that the texture is drawn on, in the color of the adjacent texture pixel. But there has to be an easier way to do it.
Minecraft uses LWJGL as well, and it has the (new) 3D items that spin on the ground without causing as much of a performance hit as they would if they were built from dozens of polygons. Likewise, when you hold an item in your hand, there is that kind of "stretched" texture with depth, which also works with high-resolution textures.
Does anyone know how that is done?
A 2D texture is always infinitely thin. If you want actual thickness (when you look edge onto it) you need geometry. In Minecraft things look blocky, because they've been modeled that way.
If you look at it from an angle and ignore the edges, you can use parallax mapping to "fake" some depth in the texture. Or you can use a depth map and a combination of tessellation and vertex shaders to implement displacement mapping, which generates geometry from the texture.
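If you take the geometry route the question already hints at (border quads oriented by the surface normal), one workable approach is to scan the texture's alpha channel once and record every opaque texel that borders a transparent one; each such edge gets an extruded side quad. A rough sketch in plain Java, leaving the actual vertex emission to your renderer:

import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// Sketch: find the texel edges that need side quads when extruding a sprite.
// This only collects (x, y, direction) tuples where an opaque pixel borders
// a transparent one; building the quads from them is up to your renderer.
public final class SpriteExtruder {

    public static List<int[]> findBorderEdges(BufferedImage img) {
        List<int[]> edges = new ArrayList<>();
        int w = img.getWidth(), h = img.getHeight();
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                if (!opaque(img, x, y, w, h)) continue;
                for (int d = 0; d < 4; d++) {
                    int nx = x + dirs[d][0], ny = y + dirs[d][1];
                    if (!opaque(img, nx, ny, w, h)) {
                        edges.add(new int[] { x, y, d }); // side quad needed here
                    }
                }
            }
        }
        return edges;
    }

    private static boolean opaque(BufferedImage img, int x, int y, int w, int h) {
        if (x < 0 || y < 0 || x >= w || y >= h) return false; // outside = transparent
        return (img.getRGB(x, y) >>> 24) > 0;
    }
}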
I am generating very large hex grids (up to 120k hexes in total; at 32px-wide hexes that results in images over 12k pixels wide), and I'm trying to find an efficient way to bind these to OpenGL textures in libGDX. I was thinking of using multiple FBOs and breaking the grid up into tiles as necessary, but I'm not sure how to ensure continuity between the FBOs. I can't start with one massive FBO, because it is backed by a texture, so it would fail when loaded into video memory. I can't use a standard bitmap on the heap because I need the drawing functionality of an OpenGL surface.
So what I was thinking was I would need to overdraw on the FBOs and somehow pick up on the next FBO exactly where the previous left off. However I'm not sure how to go about this. I'm drawing the hex grid with a series of hexagonal meshes, FYI.
Of course, there's probably some other much simpler and more efficient way to do this that I'm not even thinking of, which is why I pose this question to you fine people!
You have to draw it in pieces. You need to be able to draw your hex grid from an arbitrary position. This means being able to compute which hexes to draw based on a rectangle overlaid over the map. This isn't a hard problem, and I wouldn't worry too much about drawing extra stuff off-screen. You should master this ability to view the hexmap from any position before moving on.
Once you've mastered that, it's really simple.
Draw the top-left corner and store the pixel data. Then move the area you're drawing over exactly one image width. Draw and store that. Move the area over one image width. Draw and store it. Keep doing that until you've covered the entire width.
Move down one image height and repeat the process. Once you've run out of width and height, you're done. Save your mega-huge image.
You don't need FBOs for this. You could draw it to the screen if you wanted. Though if you want maximum performance, I would suggest using FBOs, double-buffering them, and using glReadPixels through a pixel buffer object. That should cut down a lot on latency.
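A minimal sketch of that tile loop in libGDX: render one camera-sized window of the map at a time into an FBO, read the pixels back, and stitch them into a single Pixmap. HexRenderer.drawHexes is a placeholder for your own mesh drawing; Pixmap.createFromFrameBuffer needs a reasonably recent libGDX (older versions had ScreenUtils.getFrameBufferPixmap instead), and framebuffer reads can come back y-flipped, so you may need to flip rows before saving:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.PixmapIO;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public final class TiledMapExport {

    public static void export(int mapW, int mapH, int tileW, int tileH,
                              HexRenderer renderer, FileHandle out) {
        FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, tileW, tileH, false);
        OrthographicCamera camera = new OrthographicCamera(tileW, tileH);
        Pixmap result = new Pixmap(mapW, mapH, Pixmap.Format.RGBA8888);

        for (int ty = 0; ty < mapH; ty += tileH) {
            for (int tx = 0; tx < mapW; tx += tileW) {
                // point the camera at this tile's slice of the map
                camera.position.set(tx + tileW / 2f, ty + tileH / 2f, 0);
                camera.update();

                fbo.begin();
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                renderer.drawHexes(camera);          // your hex meshes go here
                Pixmap tile = Pixmap.createFromFrameBuffer(0, 0, tileW, tileH);
                fbo.end();

                result.drawPixmap(tile, tx, ty);     // stitch into the big image
                tile.dispose();
            }
        }
        PixmapIO.writePNG(out, result);
        result.dispose();
        fbo.dispose();
    }

    public interface HexRenderer { void drawHexes(OrthographicCamera camera); }
}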
I'm trying to draw a 2D contour plot of some data on Android and I'm wondering what the best approach would be to draw those. The whole datasets can be relatively large (2k * 2k points) and zooming and moving inside the plot should be very fast. Most of the time only a small part of the data will be drawn as the user has zoomed in on the data.
My idea now would be to draw the whole plot onto a large canvas, but clip it to the portion visible on the screen, so that only that part would be really drawn in the end. I find the 2D drawing API of Android somewhat confusing and I'm not sure if this is really a feasible approach and how I would then go about executing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should start the other way around. Instead of creating a huge canvas you should detect what part of your plot you need to draw and draw only that.
So basically you need some navigation/scrolling and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out because you just need to scale the plot to the screen.
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use when it isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
How would I create a larger canvas and how would I select which parts should be drawn?
I would expect that you will be creating your own "scrolling" method when the user touches the screen, by overriding the onTouchEvent method. Basically, you're going to need to keep track of a starting point X and Y and track that value as you move the Canvas on screen. In order to move the Canvas, there are a number of built-ins like translate and scale that you can use both to move the Canvas in X and Y and to scale it when the user zooms in or out.
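A bare-bones sketch of that in a custom View (drawVisibleLines is a placeholder for your own plot code, and pinch-zoom via ScaleGestureDetector is left out to keep it short):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.MotionEvent;
import android.view.View;

// Sketch of the translate/scale idea: keep a pan offset and zoom factor,
// apply them to the Canvas, and only draw the lines that fall in view.
public class PlotView extends View {
    private float offsetX, offsetY, scale = 1f;
    private float lastX, lastY;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public PlotView(Context context) { super(context); }

    @Override protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(offsetX, offsetY); // pan
        canvas.scale(scale, scale);         // zoom
        // compute the visible data range from offset/scale and draw only that:
        drawVisibleLines(canvas, paint);
        canvas.restore();
    }

    @Override public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                lastX = event.getX(); lastY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                offsetX += event.getX() - lastX;
                offsetY += event.getY() - lastY;
                lastX = event.getX(); lastY = event.getY();
                invalidate(); // redraw with the new offset
                return true;
        }
        return super.onTouchEvent(event);
    }

    private void drawVisibleLines(Canvas canvas, Paint paint) {
        // placeholder: cull your contour segments against the visible rect here
    }
}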
I don't think it is a good idea to draw your 2D contour plot on a big bitmap, because you need vector-type graphics to keep it sharp as you zoom in and out. Only photographs scale down well; graphs will lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and to calculate which part of the graph should be drawn for the required position and zoom. Using an anti-alias Paint for lines and text, the graph will always come out sharp and good...
When the user zooms out, some items should not be drawn, as they would not fit on the screen or would clutter it. That way the graph is always optimised for the zoom level...