I'm currently working on an Arduino Uno and Processing interface with the idea of revealing an image much like a "scratch-off lottery ticket." I currently have the image loaded into the sketch and the black background ready; however, I'm not sure at this point how to start revealing the image through fill().
I know that I could technically use an ellipse 1 pixel wide that would SLOWLY reveal the image (and consequently take forever, because the Arduino joystick isn't very cooperative), but I was hoping there would be an easier way to reveal it. Does anyone have any ideas?
Here is the code:
void draw() {
  noStroke();
  // draw a 1x1 "scratch" dot at the joystick position, using the fill colour set on the previous frame
  ellipse(xPos, yPos, 1, 1);
  // reset the black cover when the joystick button is pressed
  if (zButton == 0) {
    background(0);
  }
  // sample the underlying image at the current position for the next dot's fill colour
  color c = img.get(xPos, yPos);
  fill(c);
  // read the joystick position from the Arduino
  serialEvent(myPort);
}
This is the draw function; it reads the joystick through the serialEvent function. As of right now I have a 1x1 ellipse revealing the image pixel by pixel, but that would be extremely tedious.
Thanks guys, any help is appreciated
Instead of using .get(x, y) to fetch individual pixels, you can call loadPixels() to expose the pixels[] array and color entries in it directly. This is generally much faster than the get() and set() functions.
To speed things up further, cache the image's pixels and track which consecutive runs of pixels you can copy over in one go. For instance, if you reveal with a 5-pixel-radius ellipse, you know you need to copy a run of pixels starting at position X, which you can then copy over as a single call. The more the user scratches, the more pixels you can copy that way per call.
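As a rough sketch of that loadPixels() idea (this is not the original poster's code: the image name, the brush radius, and the mouse standing in for the joystick are all assumptions), you can keep a black "cover" image and copy whole runs of pixels from the hidden picture into it each frame:

PImage img;        // the hidden picture
PImage cover;      // what is actually drawn; starts all black
int brushRadius = 5;

void setup() {
  size(400, 300);
  img = loadImage("hidden.jpg");   // assumed image in the sketch's data folder
  img.resize(width, height);
  cover = createImage(width, height, RGB);  // RGB images start out black
}

void draw() {
  // stand-in for the joystick position read over serial
  int xPos = mouseX;
  int yPos = mouseY;

  img.loadPixels();
  cover.loadPixels();
  // copy every covered pixel inside the brush circle in one pass,
  // instead of drawing a 1x1 ellipse per frame
  for (int dy = -brushRadius; dy <= brushRadius; dy++) {
    int y = yPos + dy;
    if (y < 0 || y >= height) continue;
    // half-width of the circle's row at this dy
    int half = (int) sqrt(brushRadius * brushRadius - dy * dy);
    for (int x = max(0, xPos - half); x <= min(width - 1, xPos + half); x++) {
      cover.pixels[y * width + x] = img.pixels[y * width + x];
    }
  }
  cover.updatePixels();
  image(cover, 0, 0);
}

Because whole rows of the brush circle are copied each frame, the reveal is much faster than drawing 1x1 ellipses, and a larger brushRadius uncovers proportionally more per frame.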
I'm using libGDX to make a simple tile-based game and everything seemed fine until I added a rectangle that follows the mouse position. I figured out that whenever I jump, the rectangle (and the other blocks too) expands, probably by 1 px, until I release the spacebar. When I hit the spacebar again, it returns to its normal size. I tried printing out the rectangle's width and height, but they didn't change, so the problem is with rendering.
Everything all right
This picture shows the game before the jump.
Wider textures
Here is the game after the jump. You can also see it clearly on the player's head.
A little more detail: I don't use Box2D. Tile sizes are 8x8, scaled to 20x20. I'm using TexturePacker without padding (the problem occurs with padding anyway). I don't know which code to post, because I have no idea where the problem could be, so here is just a simple block class. Any help would be much appreciated, thanks.
public class Block extends Sprite {
    private int[] id = { 0, 0 };
    public Rectangle rect;
    private int textureSize = 8;

    public Block(PlayScreen play, String texture, int x, int y, int[] id) {
        super(play.getAtlas().findRegion("terrain"));
        this.id = id;
        rect = new Rectangle(x, y, ID.tileSize, ID.tileSize);
        // pick the 8x8 tile out of the terrain region by its id
        setRegion(id[0] * textureSize, id[1] * textureSize + 32, textureSize, textureSize);
        setBounds(rect.x, rect.y, rect.width, rect.height);
    }

    public void render(SpriteBatch batch) {
        draw(batch);
    }
}
Welcome to libGDX!
TL;DR- there isn't enough of your code there to tell what the exact problem is, but my guess is that somewhere in your code you are confusing pixel-space with game-space.
A Matter of Perspective
When you first create a libGDX game that is 2D, it's really tempting to think that you are just painting pixels onto the screen. After all, your screen is measured in pixels, your window is measured in pixels, and your texture is measured in pixels.
However, if you start looking closer at the API, you'll find weird little things such as your camera and sprite positions and sizes being measured as floating point values instead of integers (Why floats? You can't have a fraction of a pixel!).
The reason is that the dimensions of your game objects are different from how big they are drawn. It's really easy to understand this in a 3D world: when I am close to something, it is drawn really big on the screen. When I am far away, it is drawn really small. The actual size of the object doesn't change based on my distance from it, but the perceived size does. This tells us that we can't safely measure things in our game just based on how they're drawn; we have to measure based on their true size.
As a side note, while you may be using an Orthographic camera (i.e. one without perspective) and drawing 2D sprites, libGDX is really drawing a flat 3D object (a plane) behind the scenes.
Game Units
So how do we measure the "true size" of something? The answer is that we can measure it using whatever type of unit we want! We can say something is 3.5 meters long, or 42 bananas- whatever you want! For the sake of this conversation, I'm going to call these units "Game Units" (GU).
For your game, you might consider making each block one GU high and one GU wide (essentially measuring your game world in blocks). Your character can move in fractions of a block, but you measure speed in terms of "blocks per second." I can almost guarantee it will make your game logic a lot simpler.
But our textures are in pixels!
As you probably already know, your game uses three things to render: A viewport (the patch of the screen where your game can be painted), A Camera (think of it like a real camera- you change the position and size of the lens to change how much of your world is 'in view'), and your game objects (the things you may or may not want to draw, depending on whether they're visible to the camera).
Now let's look at how they're measured:
Viewport: This is a chunk of your screen (set to be the size of your game window), and as such is measured in pixels.
Camera: The Camera is interesting, because its size and position are measured in Game Units, not pixels. Since the viewport uses the Camera to know what to paint on the screen, it does contain the mapping of GU to pixel.
Game Object: This is measured in Game Units. It may have a texture measured in pixels, but that is different from the "true size" of the game object.
Now libGDX defaults all of these sizes such that 1 GU == 1 Pixel, which misleads a lot of folks into thinking that everything is measured by pixels. Once you realize that this isn't really the case, there are some really cool implications.
Really Cool Implications
The first implication is that even if my screen size changes, my camera size can stay the same. For example, if I have a small 800x600 pixel screen, I can set my camera size to 40x30. This maintains a nice aspect ratio, and allows me to draw 40x30 blocks on the screen.
If the screen size changes (say to 1440x900), my game will still show 40x30 blocks on the screen. They may look a little stretched if the aspect ratio changes, but libGDX has special viewports that will counteract this for you. This makes it much easier to support your game on other monitors, other devices, or even just handling screen resizes.
The second cool implication is that you stop caring about texture sizes to a large degree. If you start telling libGDX "Hey, go draw this 32x32px sprite on this 1x1 GU object" instead of "Hey, go draw this 32x32px sprite" (notice the difference?) it means that changing texture sizes doesn't change how big the things on your screen are drawn, it changes how detailed they are. If you want to change how big they are drawn, you can change your camera size to 'zoom in.'
The third cool implication is that this makes your game logic a lot cleaner. For example you start thinking of speeds in "Game Units per second", not "Pixels per second". This means that changes in drawing size won't affect how fast things are in the game, and will save you a ton of bug-hunting further down the road. You also avoid a lot of the weird "My jump behaves differently when I resize the screen" bugs.
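To make those implications concrete, here is a small sketch (the class name, the block.png asset, and the speed value are my own assumptions, not from the post): a 40x30 game-unit camera behind a FitViewport, a block drawn as a 1x1 GU sprite regardless of its texture resolution, and movement expressed in game units per second.

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class GameUnitsExample extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;
    private SpriteBatch batch;
    private Sprite block;
    private final float speed = 4f;   // game units (blocks) per second, not pixels

    @Override
    public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(40, 30, camera);    // the camera sees 40x30 game units
        batch = new SpriteBatch();
        block = new Sprite(new Texture("block.png"));  // texture may be 8x8 or 32x32 px...
        block.setSize(1f, 1f);                         // ...but the block is always 1x1 GU
    }

    @Override
    public void resize(int width, int height) {
        // remap the 40x30 world onto the new window size instead of stretching it
        viewport.update(width, height, true);
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        block.translateX(speed * Gdx.graphics.getDeltaTime());
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        block.draw(batch);
        batch.end();
    }
}

With this setup, resizing the window changes how many pixels a block covers, but never how many blocks fit on screen or how fast they move.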
Summary
I hope this is helpful and makes sense. It's difficult to get your mind around it at first, but it will make your life a lot easier and your game a lot better in the long run. If you'd like a better example with pictures, I recommend that you read this article by one of the libGDX developers.
I am working on a Java game which deals with a bunch of sprite sheets, and I was wondering whether I should have separate sprite sheets for left and right animations, or if I should just draw up the left sprites and reverse the image programmatically for the right animations. Which one would be better practice, and would either of them perform better? I was thinking of having the image flipping occur during the game's init(). If I do go with direction flipping (saving a lot of time in Photoshop), would this be a safe way to go:
playerAttackLeft = spriteSheet.crop(0, 0, 400, 400); //(x, y, width, height)
playerAttackRight = spriteSheet.crop(400, 0, -400, 400);
?
You should flip the image in memory and use that, instead of loading a new one.
When you read an image from disk, the JVM needs memory to hold the decoded version.
Here is an example from my own machine: I had an image of about 100 KB on disk, and when I loaded it in my class it took approximately 1 MB of memory, because the decoded bitmap is much larger than the compressed file.
Reading an image is a costly process.
On the other hand, if you use a flipped copy of an already-loaded image, you save both space and time (both space and time complexity), because flipping an image in memory takes much less time than reading another external image.
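As a sketch of that in-memory flip using plain Java 2D (the class and variable names are assumptions, not from the original posts), a horizontal mirror can be done once at init time with an AffineTransform:

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public final class SpriteFlip {
    /** Returns a horizontally mirrored copy of the given sprite frame. */
    public static BufferedImage flipHorizontally(BufferedImage src) {
        // scale x by -1 to mirror, then shift right by the width so the image stays in frame
        AffineTransform tx = AffineTransform.getScaleInstance(-1, 1);
        tx.translate(-src.getWidth(), 0);
        AffineTransformOp op =
                new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        return op.filter(src, null);
    }
}

Usage during init() might then look like this, reusing the left-facing frame instead of cropping a second one:

// playerAttackLeft  = spriteSheet.crop(0, 0, 400, 400);
// playerAttackRight = SpriteFlip.flipHorizontally(playerAttackLeft);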
I'm using Java Graphics2D to generate this map with some sort of tinted red overlay over it. As you can see, the overlay gets cut off along the image boundary on the left side:-
After demo'ing this to my project stakeholders, what they want is for this overlay to clip along the map boundary with some consistent padding around it. The simple reason for this is to give users the idea that the overlay extends outside the map.
So, my initial thought was to perform a "zoom and shift", by creating another larger map that serves as a "cookie cutter", here's my simplified code:-
// polygon of the map
Polygon minnesotaPolygon = ...;
// convert polygon to area
Area minnesotaArea = new Area();
minnesotaArea.add(new Area(minnesotaPolygon));
// this represents the whole image
Area wholeImageArea = new Area(new Rectangle(mapWidth, mapHeight));
// zoom in by 8%
double zoom = 1.08;
// performing "zoom and shift"
Rectangle bound = minnesotaArea.getBounds();
AffineTransform affineTransform = new AffineTransform(g.getTransform());
affineTransform.translate(-((bound.getWidth() * zoom) - bound.getWidth()) / 2,
-((bound.getHeight() * zoom) - bound.getHeight()) / 2);
affineTransform.scale(zoom, zoom);
minnesotaArea.transform(affineTransform);
// using it as a cookie cutter
wholeImageArea.subtract(minnesotaArea);
g.setColor(Color.GREEN);
g.fill(wholeImageArea);
The reason I'm filling the outside part with green is to allow me to see if the cookie cutter is implemented properly. Here's the result:-
As you can see, "zoom and shift" doesn't work in this case. There is absolutely no padding at the bottom right. Then I realized that this technique will not work for irregular shapes like the map; it only works on simpler shapes like squares, circles, etc.
What I want is to create consistent padding/margin around the map before clipping the rest off. To make sure you understand what I'm saying here, I photoshopped the image below (albeit poorly) to explain what I'm trying to accomplish:-
I'm not sure how to proceed from here, and I hope you guys can give me some guidance on this.
Thanks.
I'll just explain the logic, as I don't have time to write the code myself. The short answer is that you should step through each pixel of the map image and if any pixels in the surrounding area (i.e. a certain distance away) are considered "land" then you register the current pixel as part of the padding area.
For the long answer, here are 9 steps to achieve your goal.
1. Decide on the size of the padding. Let's say 6 pixels.
2. Create an image of the map in monochrome (black is "water", white is "land"). Leave a margin of at least 6 pixels around the edge. This is the input image: (it isn't to scale)
3. Create an image of a circle which is 11 pixels in diameter (11 = 6*2-1). Again, black is empty/transparent, white is solid. This is the hit-area image:
4. Create a third picture which is all black (to start with). Make it the same size as the input image. It will be used as the output image.
5. Iterate each pixel of the input image.
6. At that pixel overlay the hit-area image (only do this virtually, via calculation), so that the center of the hit-area (the white circle) is over the current input image pixel.
7. Now iterate each pixel of the hit-area image.
8. If any white pixel of the hit-area image intersects a white pixel of the input image, then draw a white pixel (where the center of the circle is) into the output image.
9. Go to step 5.
Admittedly, from step 6 onward it isn't so simple, but it should be fairly easy to implement. Hopefully you understand the logic. If my explanation is too confusing (sorry) then I could spend some time and write the full solution (in Javascript, C# or Haskell).
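Since the question is in Java, here is a minimal Java sketch of those steps (the class name, the 6-pixel radius, and the assumption that the input is a black/white land mask are mine): for each pixel it checks a circular neighbourhood of the mask and writes white into the output wherever any land is nearby, which is essentially a morphological dilation.

import java.awt.Color;
import java.awt.image.BufferedImage;

public final class MapPadding {
    /** Dilates the white ("land") area of a monochrome mask by the given radius in pixels. */
    public static BufferedImage dilate(BufferedImage input, int radius) {
        int w = input.getWidth(), h = input.getHeight();
        BufferedImage output = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB); // starts all black
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (anyLandNearby(input, x, y, radius)) {
                    output.setRGB(x, y, Color.WHITE.getRGB());
                }
            }
        }
        return output;
    }

    private static boolean anyLandNearby(BufferedImage img, int cx, int cy, int radius) {
        for (int dy = -radius; dy <= radius; dy++) {
            for (int dx = -radius; dx <= radius; dx++) {
                if (dx * dx + dy * dy > radius * radius) continue;  // stay inside the circular hit area
                int x = cx + dx, y = cy + dy;
                if (x < 0 || y < 0 || x >= img.getWidth() || y >= img.getHeight()) continue;
                // a non-black pixel in the mask means "land"
                if ((img.getRGB(x, y) & 0xFFFFFF) != 0) return true;
            }
        }
        return false;
    }
}

The dilated mask can then stand in for the zoomed polygon as the cookie cutter: subtract its white area from the whole-image area and fill what remains, giving a consistent padding that follows the map outline.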
I have been searching Stack Overflow for an introduction to 2D selection in OpenGL ES, but I mostly see questions about 3D.
I'm designing a 2D tile-based level editor on Android 4.0.3, using OpenGL ES. In the level editor, there is a 2D, yellow, square object placed in the center of the screen. All I wanted is to detect to see if the object has been touched by a user.
In the level editor, there aren't any tiles overlapping. Instead, they are placed side-by-side, just like two nearby pixels in a bitmap image in MS Paint. My purpose is to individually detect a touch event for each square object in the level editor.
The object is created with a simple vertex array, and using GL_TRIANGLES to draw 2 flat right triangles. There are no manipulations and no loading from a file or anything. The only thing I know is that if a user touches any one of the yellow triangles, then both yellow triangles are to be selected.
Could anyone provide a hint as to how I need to do this? Thanks in advance.
EDIT:
This is the draw() function:
public void draw(GL10 gl) {
    gl.glPushMatrix();
    gl.glTranslatef(-(deltaX - translateX), (deltaY - translateY), 1f);
    gl.glColor4f(1f, 1f, 0f, 1f);

    //TODO: Move ClientState and MatrixStack outside of draw().
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 6);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glPopMatrix();
}
EDIT 2:
I'm still missing some info. Are you using a camera, or pushing other matrices before the model rendering? For example, if you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):
I'm not using a camera, but I'm probably using an orthographic projection. Again, I do not know, as I'm just using common OpenGL functions. I do push and pop matrices, because I plan on integrating many tiles (square 2D objects) with different translation matrices. No two tiles will have the same translation matrix M.
Is a perspective projection the same as orthographic projection when it comes to 2D? I do not see any differences between the two.
Here's the initial setup when the surface is created (a class extending GLSurfaceView, and implementing GLSurfaceView.Renderer):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
}

public void onSurfaceCreated(GL10 gl, EGLConfig arg1) {
    reset();
}

public void onDrawFrame(GL10 gl) {
    clearScreen(gl);

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, super.getWidth(), 0f, super.getHeight(), 1, -1);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    canvas.draw(gl);
}

private void clearScreen(GL10 gl) {
    gl.glClearColor(0.5f, 1f, 1f, 1f);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
}
A basic approach would be the following:
1. Define a bounding box for each "touchable" object. This could be just a rectangle (x, y, width, height).
2. When you update a tile in the world you update its bounding box (completely in world coordinates).
3. When the user touches the screen, you have to unproject screen coordinates to world coordinates.
4. Check if the unprojected point overlaps with any bounding box.
Some hints on the previous items. [Edited]
1 and 2. You should keep track of where you are rendering your tiles. Store their position and size; a rectangle is a convenient structure. In your example it could be computed like this, and you have to recompute it when the model changes. Let's call it Rectangle r:
r.x = yourTile.position.x - (deltaX - translateX)
r.y = yourTile.position.y - (deltaY - translateY)
r.width = yourTile.width    // as there is no model scaling
r.height = yourTile.height
3 - If you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):
x_model = ((x_screen / GL_viewport_width) - 0.5) * camera.WIDTH + camera.position.x
4 - For each of your Rectangles check if [x_model; y_model] is inside it.
[2nd Edit] By the way you are updating your matrices, you can consider that you are using a camera with position surfaceView.width()/2, surfaceView.height()/2. You are matching 1 pixel on screen to 1 unit in the world, so you don't need to unproject anything. You can substitute those values into my formula and get x_screen = x_model (you'll need to flip the Y component of the touch event, because Y grows downwards in Java and upwards in GL).
Final words: if the user touches point [x, y], check whether [x, screenHeight - y]* hits one of your rectangles and you are done.
Do some debugging: log the touch points and see if they are as expected, generate your rectangles and see if they match what you see on screen, and then it is a matter of checking whether a point is inside a rectangle.
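A small Java sketch of that hit test under your 1-pixel-to-1-unit setup (the class and the minimal Rect type are my own assumptions): flip the Y of the touch event, then test the point against each stored rectangle.

import java.util.List;

public class TileHitTester {
    /** Minimal axis-aligned rectangle in world units (x, y = bottom-left corner). */
    public static class Rect {
        public float x, y, width, height;
        public Rect(float x, float y, float width, float height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
        public boolean contains(float px, float py) {
            return px >= x && px <= x + width && py >= y && py <= y + height;
        }
    }

    /** Returns the first tile rectangle containing the touch point, or null if none is hit. */
    public static Rect hitTest(List<Rect> tileRects, float touchX, float touchY, float screenHeight) {
        float worldX = touchX;                 // 1 px == 1 world unit in this setup
        float worldY = screenHeight - touchY;  // flip Y: touch coords grow down, GL coords grow up
        for (Rect r : tileRects) {
            if (r.contains(worldX, worldY)) {
                return r;
            }
        }
        return null;
    }
}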
I must tell you that you should not set the camera to screen dimensions, because your app will look dramatically different on different devices. This is a topic on its own, so I won't go any further, but consider defining your model in terms of world units, independent of screen size. This is getting rather off-topic, but I hope you have gotten a good glimpse of what you need to know!
*The Y flip I mentioned above.
PS: stick with the orthographic projection (perspective would be more complex to use).
Please allow me to post a second answer to your question. This one is more high-level/philosophical. It may be a silly, useless answer, but I hope it will help someone new to computer graphics switch their mind into "graphics mode".
You can't really select a triangle on the screen. That square is not 2 triangles; that square is just a bunch of yellow pixels. OpenGL takes some vertices, connects them, processes them and colors some pixels on the screen. At one stage of the graphics pipeline even the geometric information is lost, and you only have isolated pixels. That's analogous to a letter printed by a printer on paper: you usually don't process information from the paper (ok, maybe a barcode reader does :D).
If you need to further process your drawings, you have to model them and process them yourself with auxiliary data structures. That's why I suggested you create a rectangle to model your tiles. You create your imaginary "world" of objects, and then render it to the screen. The user touch event does not belong to the same world, so you have to "translate" screen coordinates into your world coordinates. Then you change something in your world (maybe the user drags her finger and you have to move an object), and then tell OpenGL to render your world to the screen again.
You should operate on your model, not the view. Meshes are more of a view thing, so you shouldn't mix them with the model information; it's good practice to separate the two. (Please, experts correct me; I'm quite a graphics hobbyist.)
Have you checked out LibGDX?
Makes life so much easier when working with OpenGL ES.
I'm trying to draw a 2D contour plot of some data on Android and I'm wondering what the best approach would be. The whole dataset can be relatively large (2k × 2k points) and zooming and moving inside the plot should be very fast. Most of the time only a small part of the data will be drawn, because the user will have zoomed in on the data.
My idea now would be to draw the whole plot onto a large canvas, but clip it to the portion visible on the screen, so that only that part would be really drawn in the end. I find the 2D drawing API of Android somewhat confusing and I'm not sure if this is really a feasible approach and how I would then go about executing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should start the other way around. Instead of creating a huge canvas you should detect what part of your plot you need to draw and draw only that.
So basically you need some navigation/scrolling and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out because you just need to scale the plot to the screen.
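As a rough sketch of that idea (the class, parameter meanings, and a square dataSize × dataSize grid are all assumptions on my part), the visible window into the data can be computed from the pan offset and zoom factor, and only those points drawn:

/** Visible index range into a dataSize x dataSize grid.
 *  offsetX/offsetY = data index at the top-left of the screen; zoom = screen pixels per data point. */
public final class VisibleRange {
    public final int firstX, lastX, firstY, lastY;

    public VisibleRange(float offsetX, float offsetY, float zoom,
                        int screenWidth, int screenHeight, int dataSize) {
        firstX = clamp((int) Math.floor(offsetX), dataSize);
        firstY = clamp((int) Math.floor(offsetY), dataSize);
        lastX  = clamp((int) Math.ceil(offsetX + screenWidth / zoom), dataSize);
        lastY  = clamp((int) Math.ceil(offsetY + screenHeight / zoom), dataSize);
    }

    private static int clamp(int v, int size) {
        return Math.max(0, Math.min(size - 1, v));
    }
}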
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use when it isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
How would I create a larger canvas and how would I select which parts should be drawn?
I would expect that you will be creating your own "scrolling" method, triggered when the user touches the screen, by overriding the onTouchEvent method. Basically you're going to need to keep track of a starting point X and Y and track that value as you move the Canvas on screen. In order to move the Canvas there are a number of built-ins like translate and scale that you can use to move the Canvas in X and Y as well as scale it when the user zooms in or out.
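A minimal sketch of that scrolling approach on a custom View (the class and field names are my own, and the actual plot drawing is stubbed out): track the drag offset in onTouchEvent, then translate and scale the Canvas before drawing.

import android.content.Context;
import android.graphics.Canvas;
import android.view.MotionEvent;
import android.view.View;

public class PlotView extends View {
    private float offsetX, offsetY;      // how far the user has scrolled
    private float scale = 1f;            // current zoom factor
    private float lastTouchX, lastTouchY;

    public PlotView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                lastTouchX = event.getX();
                lastTouchY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                // accumulate the drag into the offset and redraw
                offsetX += event.getX() - lastTouchX;
                offsetY += event.getY() - lastTouchY;
                lastTouchX = event.getX();
                lastTouchY = event.getY();
                invalidate();
                return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(offsetX, offsetY);
        canvas.scale(scale, scale);
        // drawPlot(canvas) would go here and draw only the part of the data that is now visible
        canvas.restore();
    }
}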
I don't think it is a good idea to draw your 2D contour plot onto a big bitmap, because you need vector-style graphics to stay sharp while zooming in and out. Photographs scale down acceptably, but graphs will lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and calculate which part of the graph should be drawn for the required position and zoom. Using an anti-aliased Paint for lines and text, the graph will always come out sharp and clean...
When the user zooms out, some items should not be drawn, as they would not fit on the screen or would clutter it. That way the graph is always optimised for the current zoom level...