Hi, I have successfully rendered the Utah Teapot in OpenGL ES 2.0. Currently I am trying to implement touch events so that whenever I touch the teapot it will explode.
My question is: where should I implement the touch event, in the Renderer class or in the GLSurfaceView? And how do I make the teapot explode? Thank you in advance. I am new to Android, so any suggestion is highly appreciated.
About the touch event:
You would need to show the architecture you currently have. Generally you would have a separate class that controls the scene. In your case you may only have a single object that contains the teapot and all the values needed to draw, move or explode it. This class should be initialized and owned by the surface view or its parent; in both cases the surface view has access to the scene. Now if I assume the renderer is responsible for drawing, then the surface view is the owner of the renderer and would call something like this.myRenderer.drawScene(this.myScene), or, if the renderer controls the draw initialization itself, the surface view must forward access to the scene with this.myRenderer.setScene(this.myScene). Either way, both classes end up with access to the scene and, through it, to the teapot object.
Now, to handle the touch event, check what is the nearest place where you can intercept it. If the surface view can intercept these calls, implement it there. If not, I am sure the owner of the surface view can; in that case the owner would call this.mySurfaceView.handleTouchEvent(touch).
The surface view can then optionally do some checks to see if the pot was hit and begin the explosion procedure. This might be as simple as calling a method on the teapot like this.myScene.teapotObject.explode().
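A minimal sketch of how that could look inside a GLSurfaceView subclass; the Scene, teapotObject, isHit and explode names are just the hypothetical ones used above, not an existing API:
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;

public class TeapotSurfaceView extends GLSurfaceView {
    private final Scene myScene;

    public TeapotSurfaceView(Context context, Scene scene) {
        super(context);
        this.myScene = scene;
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            final float x = event.getX();
            final float y = event.getY();
            // Modify the scene on the GL thread, not on the UI thread.
            queueEvent(new Runnable() {
                @Override
                public void run() {
                    if (myScene.teapotObject.isHit(x, y)) {
                        myScene.teapotObject.explode();
                    }
                }
            });
            return true;
        }
        return super.onTouchEvent(event);
    }
}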
About the explosion itself:
There are many ways to explode an object, and generally none of them are easy. At a minimum you would most likely need a system where your vertex buffer is split into smaller chunks, and each chunk is then displaced over time as the explosion animates.
Even creating an animation might be a hard procedure. One way is interpolation: you store an animation start time startTimeStamp, its duration, the object's start position startPosition and its end position endPosition. The current position is then currentPosition = startPosition + (endPosition - startPosition)*(currentTimeStamp - startTimeStamp)/duration. Another way is to implement physics, where an object moves on every frame. In this case you define the object's speed and then on every frame call teapot.chunk.move(1.0/60.0), which does this.position += this.speed*dt. You can then add gravity to manipulate the speed, or collisions, and so on.
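As a rough sketch (the Chunk class and its fields are made up for illustration), the two approaches could look like this:
public class Chunk {
    float position;   // 1D here for brevity; in practice this would be a vector
    float speed;
    static final float GRAVITY = -9.81f;

    // Interpolation: a straight line between two key values over 'duration' seconds.
    static float interpolate(float startPosition, float endPosition,
                             float startTimeStamp, float currentTimeStamp, float duration) {
        float t = (currentTimeStamp - startTimeStamp) / duration;
        t = Math.max(0f, Math.min(1f, t)); // clamp so the chunk stops at endPosition
        return startPosition + (endPosition - startPosition) * t;
    }

    // Physics: advance the chunk by one frame, e.g. move(1.0f / 60.0f).
    void move(float dt) {
        speed += GRAVITY * dt;    // gravity manipulates the speed
        position += speed * dt;
    }
}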
I'm making a libgdx game where the user can create a distance joint and a revolute joint on their own. Whenever two bodies have been touched they are both added to an ArrayList, and when a button is touched a joint is created between them. The problem is that the joints are always anchored at the centers of the bodies, so I was wondering if there is a way to get the location on each body where it was touched, and then use those locations as anchorPointA and anchorPointB.
The first idea I get is a gesture listener, see for example GestureDetector.GestureAdapter. You implement the touchDown method, where you get the x, y touch positions. To see whether a body is touched, you might use the Vector.dst() method, but don't forget to unproject the coordinates if you need to.
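For example, a sketch of such a listener (the camera, body list, touch radius and anchor bookkeeping are all assumptions about your setup):
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.input.GestureDetector;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.math.Vector3;
import com.badlogic.gdx.physics.box2d.Body;
import com.badlogic.gdx.utils.Array;

public class JointGestureListener extends GestureDetector.GestureAdapter {
    private static final float TOUCH_RADIUS = 0.5f; // world units, tune to your scale
    private final OrthographicCamera camera;
    private final Array<Body> candidateBodies;
    public final Array<Body> touchedBodies = new Array<Body>();
    public final Array<Vector2> localAnchors = new Array<Vector2>();

    public JointGestureListener(OrthographicCamera camera, Array<Body> candidateBodies) {
        this.camera = camera;
        this.candidateBodies = candidateBodies;
    }

    @Override
    public boolean touchDown(float x, float y, int pointer, int button) {
        Vector3 world = camera.unproject(new Vector3(x, y, 0)); // screen -> world
        Vector2 touch = new Vector2(world.x, world.y);
        for (Body body : candidateBodies) {
            if (touch.dst(body.getPosition()) < TOUCH_RADIUS) {
                touchedBodies.add(body);
                // Remember where on the body it was touched, to use as the joint anchor.
                localAnchors.add(body.getLocalPoint(touch).cpy());
                return true;
            }
        }
        return false;
    }
}
You would then wrap this listener in a GestureDetector and register it via Gdx.input.setInputProcessor (or an InputMultiplexer).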
Another idea might be to add an InputListener to your actor (which is connected to your body), but I haven't tried it yet.
I have an overlay in my game that consists of an image of a screen, and a set of buttons that are "on" the screen.
My Screen has one Stage. The Stage has a set of Group objects, which I think of as layers. The first group holds the backgrounds, the groups in the middle hold the game elements, and the frontmost group holds the screen overlay.
The overlay layer consists of one Image, the screen itself, and four TextButtons (one in each corner).
This would work great if it weren't for the fact that I can't click on anything in the game layer as long as the image in the overlay layer is in front of it. Even if the image is transparent, it still intercepts all touch events before they reach the game layer.
So my question is: how can I make the image in the overlay layer ignore all touch events, so that the game layer gets them and one can actually play the game?
I tried one idea myself, but I'm not sure this is the right way to do it:
I tried creating the image as a custom Actor that always has its width/height set to 0 but still (by overriding the draw() method) draws the image on the entire screen. This works very well, except that the image for some reason gets drawn behind elements in lower layers.
Screenshot: https://dl.dropboxusercontent.com/u/1545094/Screen2.png
In this screenshot, I have opened an instruction message box, which adds itself to one of the game layers (group 6).
Note that all the buttons in the overlay layer (which is group 7) are in front of the message box, but the screen frame (which is a custom Actor) somehow gets drawn behind it. Why is that?
Note: If I take this exact same case, and change my custom actor into a regular Image, everything is drawn correctly, but then I can't click anything in the lower layers anymore, as described above.
This is my custom actor, if anybody can make any sense of it:
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.utils.BaseDrawable;

public class ClickThroughImage extends Actor {
    BaseDrawable d;

    public ClickThroughImage(BaseDrawable d) {
        this.d = d;
        setSize(0, 0); // zero size so the actor never reports a hit
    }

    @Override
    public void draw(SpriteBatch batch, float parentAlpha) {
        d.draw(batch, 0, 0, 1024, 768); // Yes, I tried swapping these two lines.
        super.draw(batch, parentAlpha); // It had no effect.
    }
}
In addition to the other methods noted, you can also call:
setTouchable(Touchable.disabled);
Which is documented as:
No touch input events will be received by the actor or any children.
The method is in the Actor class.
Use an InputMultiplexer. The InputMultiplexer class allows you to share user input among multiple input processors. Create your own class implementing InputProcessor, and put it in an InputMultiplexer together with your Stage. That way you can respond to user input in a custom way and still be able to use your stage.
InputMultiplexer multiplexer = new InputMultiplexer();
Array<InputProcessor> processors = new Array<InputProcessor>();
MyInputProcessor myInputProcessor = new MyInputProcessor();
processors.add(myInputProcessor); // custom handling gets the events first
processors.add(stage);            // the Stage gets whatever the custom processor doesn't consume
multiplexer.setProcessors(processors);
//...
//and in your show method in your Screen class
Gdx.input.setInputProcessor(multiplexer);
Also, be sure to return null from Actor.hit. This should cause the actor to not respond to any user interaction.
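For instance, a tiny sketch of that (the exact hit signature depends on your libgdx version; newer versions take a boolean touchable parameter as shown here):
@Override
public Actor hit(float x, float y, boolean touchable) {
    return null; // never report a hit, so touches fall through to actors underneath
}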
This is how I solved this problem in my game.
Yes, Pool is right.
Just set touchable to disabled.
It is questionable whether it is a "good" default for the engine to make all actors in a stage touchable, because in most of my games the majority of actors are _not_ touchable, and there are only a few elements the user can or shall interact with. Therefore I always create a base class like "NonTouchableActor", derive all my actors that shall not react to clicks/taps from it, and have this base class call setTouchable(Touchable.disabled) in its constructor. That way you no longer have to think about it.
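As a minimal sketch of that base class:
import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.Touchable;

public class NonTouchableActor extends Actor {
    public NonTouchableActor() {
        // Opt out of touch handling once, so every subclass inherits it.
        setTouchable(Touchable.disabled);
    }
}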
I'm writing simple solar system simulator.
This is my first libgdx project. I'm using a Stage and Actors for the main menu, and it is pretty handy, especially for touch event handling. But looking at the examples I see that nobody uses Actors in the actual game logic. I wonder if I should use Actor as a parent class for my planet class or just write my own class for that.
The planets won't be touchable, and they will only be moved between frames, so the third parameter of the MoveBy action will have to be the time between frames.
Those are the cons. What are the pros of using Actors?
The main pros for Actors are Actions, Hit testing and touch events, and Groups of Actors.
Actions make quick and easy tweening if your game logic needs that.
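For example (a hedged sketch; planetActor is just a placeholder name):
// Drift the actor to the right while fading it out, both over 1.5 seconds.
planetActor.addAction(Actions.parallel(
        Actions.moveBy(50f, 0f, 1.5f),
        Actions.fadeOut(1.5f)));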
You can call stage.hit(x, y) at any time to get the first actor whose hit logic (usually a bounds check against x, y, width, height) reports a hit. In your own hit method you return the actor on a hit, or null to let the stage keep iterating through the other actors' hit methods; stage.hit returns null if no actor is hit.
Hit testing is used for the Stage's touch events. The actor's touch methods are passed local coordinates, and the Stage handles overlapping of objects: e.g. if an actor covers another actor such that the covered actor shouldn't receive touchDown, return true on the covering actor to stop touchDown being called on actors "beneath" it. This also sets focus on the actor that returned true, so that actor's touchUp will be called.
You can group actors together to perform Actions, touch events, etc on the entire Group of Actors as a single unit.
Some Cons:
Actors require a Stage, which limits functionality somewhat. Many coders use other logic to determine game object state rather than the scene2d Actions (e.g. box2d). If you use Actors for game objects, you will probably want two Stages, one for the UI and one for the game world. If you don't use them, you'll probably be using your own SpriteBatch and Camera anyway. Also keep in mind that Actor only has an abstract draw method, so you will still need to write draw logic; you'll probably keep a TextureRegion or Sprite as a private field of the Actor. If you want to use your own update logic, you can override the act(float delta) method to get the delta time (call super.act(delta) if you use Actions).
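As an illustration of that last point, a sketch of such a game-object actor (the orbit logic and the TextureRegion are placeholders):
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.scenes.scene2d.Actor;

public class PlanetActor extends Actor {
    private final TextureRegion region;
    private float angle; // orbital angle in radians

    public PlanetActor(TextureRegion region) {
        this.region = region;
        setSize(region.getRegionWidth(), region.getRegionHeight());
    }

    @Override
    public void act(float delta) {
        super.act(delta);        // keep any attached Actions running
        angle += 0.5f * delta;   // your own update logic goes here
        setPosition((float) Math.cos(angle) * 200f, (float) Math.sin(angle) * 200f);
    }

    @Override
    public void draw(SpriteBatch batch, float parentAlpha) {
        // Actor has no default drawing, so supply it from the stored region.
        batch.draw(region, getX(), getY(), getWidth(), getHeight());
    }
}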
So if you have your own logic and won't use much of what Stage has to offer, save some resources and roll your own application-specific solution. If you can use some of the pros without limiting needed functionality then go for a second Stage for the game logic.
I'm working on an Android application that requires 2D graphical view with a large set of objects. Here's what I basically need to display:
In my case, there could be hundreds of spatially distributed objects. This view is going to behave like a map, so the user is able to scroll horizontally and vertically, and zoom in and out. It also requires click event handling, so the user is able to click any triangle, and I should then display some extended information related to that particular triangle.
I'm mostly concerned about 3 things:
If I re-draw all the objects in my onDraw() handler, that will be really slow. Also, there are cases when I don't even need to draw all these objects, since some of them are invisible depending on zoom level and scroll position. This requires something like a quad tree, which I don't want to implement manually.
All these objects are defined as (x, y, rotation, type), so in case the customer decides that we need a "show all" button, I'll have to implement functionality to calculate bounding boxes.
I need to be able to handle click events and (probably) dragging for all these shapes.
Is there any library that can help me with these tasks? I just don't want to spend 3 days on something that I believe must already have been implemented.
All the methods in the Canvas class of the android.graphics package should suffice. The Canvas does clipping (meaning drawing commands get discarded if they're not visible), so if the image is static you could render it into a Picture and draw that in onDraw().
I think the drawing classes have methods to calculate bounds and return them; see Path's computeBounds(RectF bounds, boolean exact).
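A sketch of the Picture idea (the view class, recording size and pan/zoom fields are assumptions; the triangle drawing is left as a placeholder):
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Picture;
import android.view.View;

public class MapView extends View {
    private final Picture picture = new Picture();
    private float panX, panY;   // updated by your scroll handling
    private float zoom = 1f;    // updated by your zoom handling

    public MapView(Context context) {
        super(context);
        // Record the static content once.
        Canvas recording = picture.beginRecording(2048, 2048);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // ... draw all the triangles into 'recording' using 'paint' here ...
        picture.endRecording();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(panX, panY);
        canvas.scale(zoom, zoom);
        canvas.drawPicture(picture); // replay the recorded commands; clipping skips what's off-screen
        canvas.restore();
    }
}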
I am facing a strange issue and am not sure why it is happening.
I have a Java-based Activity which has a LinearLayout. This LinearLayout contains two GLSurfaceViews. All the associated methods of the GLSurfaceView, like OnDraw, SurfaceChanged etc., forward the call down to the JNI layer. Inside the JNI layer I am drawing a cube using OpenGL ES. I have also created a touch listener and associated it with the first GLSurfaceView. Once I get a touch event I forward the call to the JNI layer and randomly rotate the first cube.
The problem is that when I rotate my first cube, both cubes rotate by exactly the same angle. I have debugged this issue for the last four hours and I am pretty sure there is nothing wrong in my logic. But for some unknown reason, when I make a change in one GLSurfaceView, the other cube changes automatically.
Any ideas? Similar issues? Guess?
Update
I am using the same context, i.e. my activity, for both GLSurfaceViews. Basically I have a class inside C++ which draws a cube through OpenGL ES. I am successfully creating two cubes and displaying them simultaneously. Both cubes have a different texture on them, which I pass in via the Java layer. My C++ class has a method which randomly rotates the cube. The problem is that if I call the rotate method of one cube, the other automatically rotates by the same angle, no matter what I do.
Without your code, I'd guess you are initializing your GLSurfaceViews using the same context. When sharing a context, changing one will change the other because they will share the same GL10 instance in the Renderer. I don't program on Android, but in general you'd use multiple "viewports" to display different things.
Say that your first GLSurfaceView is on the left half of your screen, and the second is on the right half. One idea is to check which side the x, y coordinates of the MotionEvent belong to, and then pass the rotation and translation accordingly.
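A sketch of that check inside a touch listener (nativeRotateCube and the cube indices are placeholders for whatever your JNI call looks like):
@Override
public boolean onTouch(View v, MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        if (event.getX() < v.getWidth() / 2f) {
            nativeRotateCube(0);  // touch landed on the left half -> first cube
        } else {
            nativeRotateCube(1);  // touch landed on the right half -> second cube
        }
    }
    return true;
}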
Issue solved, there was one logical mistake in my code.
Sorry for the inconvenience.