I'm working on a Java-based game where I need to model some basic shapes. The implementation I have at the moment has a lot of duplicated attributes across shapes of the same dimensions.
For example, I have an Interactive Rectangle which is part of the Box2D world and a Background Rectangle which is not, but both classes need a width and height defined.
Does anyone know a better way to model this data?
Current implementation:
Mock idea for a new implementation; however, Interactive and Background can only inherit from one parent.
The reason I have Interactive and Background classes is so that I can loop through my objects like below:
for (Background bgShape : getLevel.getBGShapes()) {
    bgShape.draw(
        gl2,
        new Vec3(
            bgShape.getPosition().x,
            bgShape.getPosition().y,
            bgShape.getPosition().z));
}
for (Body body = world.getBodyList(); body != null; body = body.getNext()) {
    Interactive iaShape = (Interactive) body.getUserData();
    iaShape.draw(
        gl2,
        new Vec3(
            body.getPosition().x,
            body.getPosition().y,
            0.0f));
}
The Interactive class also defines a number of attributes which are not present in Background:
public abstract class InteractiveShape extends Shape {
    private String bodyType;
    private float density;
    private float friction;
    private float restitution;
    private boolean fixedRotation;
    private Body body;
}
The 2nd example is much better, as Rectangle really is a shape that just carries additional data (width and height).
I don't think Interactive and Background are necessary unless they add additional operations (which, from the picture, they do not).
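A rough sketch of that idea (the names are placeholders; Body is the Box2D body type):

public abstract class Shape {
    // position shared by every shape
    protected float x, y, z;
}

public class Rectangle extends Shape {
    // dimension data lives on the shape itself, interactive or not
    protected float width;
    protected float height;
}

// Box2D-specific state kept by composition rather than a second
// inheritance branch, so any Shape can optionally carry it
public class PhysicsData {
    private String bodyType;
    private float density;
    private float friction;
    private float restitution;
    private boolean fixedRotation;
    private Body body;
}

A Shape that holds a PhysicsData is interactive, one without is background, and you can still keep a separate list of each for your two render loops.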
I know most questions on the community should be posted with at least some code. But I'm totally lost here; I don't even know where to start. What I want to do is use the Vuforia AR library to render LibGDX 3D ModelInstances. However, I don't know how I can make Vuforia render the ModelInstances, or how to use a LibGDX camera as its camera.
I've done external research but I have not been able to find useful information. Is there anyone who can help me get started with this?
Please note that I am not versed in Vuforia AR specifically, but the question has gone unanswered for a while, so I will give it a shot.
The Camera in LibGDX is essentially just a wrapper for two 4x4 matrices: the view matrix, Camera#view, and the projection matrix, Camera#projection. (There is another matrix, the model matrix, used for world-space transformations; however, I believe [but am not 100% certain] that in LibGDX this matrix is already incorporated into the view matrix, so Camera#view is actually the model-view matrix.)
Anyway, following on from this, unless there is a simpler solution that I am not aware of, you should be able to use these underlying matrices to deal with projections between the Vuforia and LibGDX APIs.
(recommended further reading: Model, View, Projection matrices in 3D graphics)
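For example, assuming Vuforia hands you column-major float[16] matrices, you should be able to push them straight into a LibGDX camera; a sketch (vuforiaProjection, vuforiaModelView and cam are placeholders):

// Sketch: feed externally computed matrices into a LibGDX camera
cam.projection.set(vuforiaProjection);   // column-major float[16]
cam.view.set(vuforiaModelView);          // column-major float[16]
// combined = projection * view, which is what ModelBatch uses
cam.combined.set(cam.projection).mul(cam.view);

Note that you would skip cam.update() here, since that would recompute the view matrix from the camera's position and direction.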
Next is rendering LibGDX 3D ModelInstances using Vuforia. A general solution to this problem would be to convert the ModelInstance into something that Vuforia can recognize, by taking the mesh/vertex data represented by the LibGDX model and feeding it directly into Vuforia.
IMO, the best way of going about this would be to use a core representation of the model data which can easily be given to both Vuforia and LibGDX (for example, a file format that both can recognize, or raw FloatBuffers, which should be easy to wrap up and give to either API). For reference, LibGDX stores the vertex information of models in a collection of FloatBuffers, accessible through Model#meshes or ModelInstance#model#meshes.
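For example, pulling the raw buffers out of a loaded model looks roughly like this (Mesh comes from com.badlogic.gdx.graphics, the buffers from java.nio; how you then hand them to Vuforia depends on its API, which I am not familiar with):

// Sketch: extract raw vertex/index data from a LibGDX model
Mesh mesh = modelInstance.model.meshes.first();
FloatBuffer vertices = mesh.getVerticesBuffer(); // interleaved vertex attributes
ShortBuffer indices = mesh.getIndicesBuffer();   // triangle indices
int vertexSize = mesh.getVertexSize();           // bytes per vertex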
OK, so I finally managed to combine both libraries. I'm not sure if what I'm doing is the most efficient way of working, but it has worked for me.
First of all, I based my work on the sample apps from Vuforia, specifically the FrameMarkers example.
I opened an empty LibGDX project, imported the Vuforia jar, and copied over SampleApplicationControl, SampleApplicationException, SampleApplicationGLView, SampleApplicationSession, FrameMarkerRenderer and FrameMarker.
Next, I created some attributes in LibGDX's AndroidLauncher class and initialized all the Vuforia stuff:
public class AndroidLauncher extends AndroidApplication implements SampleApplicationControl {
    private static final String LOGTAG = "FrameMarkers";

    // Our OpenGL view:
    public SampleApplicationGLView mGlView;
    public SampleApplicationSession vuforiaAppSession;

    // Our renderer:
    public FrameMarkerRenderer mRenderer;
    MyGDX gdxRender;

    // The textures we will use for rendering:
    public Vector<Texture> mTextures;
    public RelativeLayout mUILayout;
    public Marker dataSet[];
    public GestureDetector mGestureDetector;
    public LoadingDialogHandler loadingDialogHandler = new LoadingDialogHandler(this);

    // Alert dialog used to display SDK errors
    private AlertDialog mErrorDialog;
    boolean mIsDroidDevice = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        vuforiaAppSession = new SampleApplicationSession(this);
        vuforiaAppSession.setmActivity(this);
        AndroidApplicationConfiguration config = new AndroidApplicationConfiguration();

        // Load any sample-specific textures:
        mTextures = new Vector<Texture>();
        loadTextures();
        startLoadingAnimation();
        vuforiaAppSession.initAR(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);

        gdxRender = new MyGDX(vuforiaAppSession);
        gdxRender.setTextures(mTextures);
        initialize(gdxRender, config);

        mGestureDetector = new GestureDetector(this, new GestureListener());
        mIsDroidDevice = android.os.Build.MODEL.toLowerCase().startsWith("droid");
    }
I needed to set the activity, so I created the setmActivity() method on SampleApplicationSession.
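It is just a plain setter, something like:

// Added to SampleApplicationSession so the launcher can hand over its Activity
private Activity mActivity;

public void setmActivity(Activity activity) {
    this.mActivity = activity;
}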
After that I extended LibGDX's ApplicationAdapter class and passed the vuforiaAppSession in as an attribute to access all the stuff I had initialized.
public class MyGDX extends ApplicationAdapter {
    ModelInstance modelInstanceHouse;
    private AnimationController controller;
    Matrix4 lastTransformCube;

    // Constants:
    static private float kLetterScale = 25.0f;
    static private float kLetterTranslate = 25.0f;

    // OpenGL ES 2.0 specific:
    private static final String LOGTAG = "FrameMarkerRenderer";
    private int shaderProgramID = 0;
    private Vector<com.mygdx.robot.Texture> mTextures;

    //SampleApplicationSession vuforiaAppSession;
    PerspectiveCamera cam;
    ModelBuilder modelBuilder;
    Model model;
    ModelInstance instance;
    ModelBatch modelBatch;
    static boolean render;
    public SampleApplicationSession vuforiaAppSession;

    public MyGDX(SampleApplicationSession vuforiaAppSession) {
        super();
        this.vuforiaAppSession = vuforiaAppSession;
    }
The last important thing to keep in mind is the render() method. I based it on the render method of FrameMarkerRenderer. It has a boolean that gets activated when the camera starts, so I simply set that variable both in the Vuforia AR initialization and in the render() method. I had to set the camera to an identity matrix and multiply the model by the modelViewMatrix.
@Override
public void render() {
    if (render) {
        // Clear color and depth buffer
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        // Get the state from Vuforia and mark the beginning of a rendering section
        State state = Renderer.getInstance().begin();

        // Explicitly render the video background
        Renderer.getInstance().drawVideoBackground();
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);

        // We must detect if background reflection is active and adjust the culling
        // direction. If the reflection is active, the post matrix has been reflected
        // as well, so standard counter-clockwise face culling would result in
        // "inside-out" models.
        GLES20.glEnable(GLES20.GL_CULL_FACE);
        GLES20.glCullFace(GLES20.GL_BACK);

        cam.update();
        modelBatch.begin(cam);

        if (Renderer.getInstance().getVideoBackgroundConfig().getReflection()
                == VIDEO_BACKGROUND_REFLECTION.VIDEO_BACKGROUND_REFLECTION_ON)
            GLES20.glFrontFace(GLES20.GL_CW);  // Front camera
        else
            GLES20.glFrontFace(GLES20.GL_CCW); // Back camera

        // Set the viewport
        int[] viewport = vuforiaAppSession.getViewport();
        GLES20.glViewport(viewport[0], viewport[1], viewport[2], viewport[3]);

        // Did we find any trackables this frame?
        for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++) {
            // Get the trackable:
            TrackableResult trackableResult = state.getTrackableResult(tIdx);
            float[] modelViewMatrix = Tool.convertPose2GLMatrix(
                    trackableResult.getPose()).getData();

            // Choose the texture based on the target name:
            int textureIndex = 0;

            // Check the type of the trackable:
            assert (trackableResult.getType() == MarkerTracker.getClassType());
            MarkerResult markerResult = (MarkerResult) (trackableResult);
            Marker marker = (Marker) markerResult.getTrackable();
            textureIndex = marker.getMarkerId();

            float[] modelViewProjection = new float[16];
            Matrix.translateM(modelViewMatrix, 0, -kLetterTranslate, -kLetterTranslate, 0.f);
            Matrix.scaleM(modelViewMatrix, 0, kLetterScale, kLetterScale, kLetterScale);
            Matrix.multiplyMM(modelViewProjection, 0,
                    vuforiaAppSession.getProjectionMatrix().getData(), 0, modelViewMatrix, 0);
            SampleUtils.checkGLError("FrameMarkers render frame");

            cam.view.idt();
            cam.projection.idt();
            cam.combined.idt();
            Matrix4 temp3 = new Matrix4(modelViewProjection);
            modelInstanceHouse.transform.set(temp3);
            modelInstanceHouse.transform.scale(0.05f, 0.05f, 0.05f);
            controller.update(Gdx.graphics.getDeltaTime());
            modelBatch.render(modelInstanceHouse);
        }

        GLES20.glDisable(GLES20.GL_DEPTH_TEST);
        modelBatch.end();
    }
}
It's a lot of code, but I hope it helps people who try to integrate both libraries. I don't think this is efficient, but it's the only solution I've come up with.
Pablo's answer is pretty good; however, if you are interested in a slightly different approach (calling Vuforia from LibGDX, not the other way around) and a more complete example, here is a GitHub repo with a simple 3D model renderer.
I'm making a game with Libgdx and I'm using isometric perspective.
I have a problem when rendering my character, because the map is loaded from the Tiled Map Editor and the character is a Sprite.
If I have a wall on layer 0 of the map and draw the wall before the Sprite, then when the Sprite is behind the wall we see the Sprite instead of the wall (we should see the wall). If I render the Sprite first, then when the Sprite is in front of the wall we see the wall instead of the Sprite (we should see the Sprite). Any idea how to fix this?
I haven't really worked with tiled maps, but I think it goes something like this.
First, in Tiled, create an object layer that will be used for your character sprite. The layer should sit between the background layers and the layers that should obscure your player. Name it something like "characters".
Then, create a MapObject subclass that can render a sprite. Something like this:
public class SpriteMapObject extends MapObject {
    private Sprite sprite;

    public SpriteMapObject(Sprite sprite) {
        this.sprite = sprite;
    }

    @Override
    public Color getColor() {
        return sprite.getColor();
    }

    @Override
    public void setColor(Color color) {
        sprite.setColor(color);
    }

    public void render(Batch batch) {
        Color spriteColor = sprite.getColor();
        float originalAlpha = spriteColor.a;
        spriteColor.a *= getOpacity();
        sprite.draw(batch);
        spriteColor.a = originalAlpha;
    }
}
Now after you load your map and your sprite, you can put the sprite on that layer you prepared:
map.getLayers().get("characters").getObjects().add(new SpriteMapObject(playerSprite));
You also need to subclass the map renderer so it will render the sprite for you. Use your subclassed version instead of the original. For instance:
public class SpritesOrthogonalTiledMapRenderer extends OrthogonalTiledMapRenderer {

    // Override whichever constructor(s) you need
    public SpritesOrthogonalTiledMapRenderer(TiledMap map, Batch batch) {
        super(map, batch);
    }

    @Override
    public void renderObject(MapObject object) {
        super.renderObject(object);
        if (object instanceof SpriteMapObject) {
            ((SpriteMapObject) object).render(batch);
        }
    }
}
Those are the basics of how it works. But if you want your player to be able to be in front of or behind a wall as you described, you will need to create multiple extra layers and move your sprite object between them when necessary.
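Re-parenting the object is just a remove and an add on the layers' object collections, something like this (the layer names are only examples):

// Sketch: move the player's MapObject onto another object layer,
// e.g. when the player walks behind a wall
void movePlayerToLayer(TiledMap map, MapObject playerObject,
        String fromLayer, String toLayer) {
    map.getLayers().get(fromLayer).getObjects().remove(playerObject);
    map.getLayers().get(toLayer).getObjects().add(playerObject);
}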
It looks to me like LibGDX could use a feature enhancement to OrthogonalTiledMapRenderer so it could draw objects automatically merged into a tile layer, inserting them into the draw order based on row.
I found an "answer" somewhere else so I share it with you so if any1 else had the problem:
- get players tile location.
- get each tile in that stack ( aka, get a reference to every tile in that cell across each layer.
- check for visibility exceptions on each of those tiles. Store exceptions as properties defined in Tiled. ( will cover properties shortly )
- handle accordingly, most likely by raising or lowering the players render layer for that frame.
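Reading those custom properties back in code would look roughly like this (the "walls" layer, the "hidesPlayer" property and the playerTileX/playerTileY coordinates are made up for the example):

// Sketch: check the tile under the player on one layer for a custom
// Tiled property and adjust the player's render layer accordingly
TiledMapTileLayer wallLayer = (TiledMapTileLayer) map.getLayers().get("walls");
TiledMapTileLayer.Cell cell = wallLayer.getCell(playerTileX, playerTileY);
if (cell != null && cell.getTile() != null
        && cell.getTile().getProperties().containsKey("hidesPlayer")) {
    // raise or lower the player's render layer for this frame
}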
I am setting up a large-scale GUI (larger than anything I have done before) using Java's Swing toolkit, and I would like to set up my own custom color scheme to draw colors from so that all color definitions are in one place. To do this, I have decided to make a pseudo-static top-level class called ColorPalette (adapted from https://stackoverflow.com/a/7486111/4547020) that contains a SchemeEnum where the programmer sets a color scheme for the entire GUI.
I would like the color selection to be independent of knowledge of the color scheme. Does anyone know a design pattern or an efficient way to do this? I'm not entirely confident that my current setup is the best way to implement this, but I would like a modular design where adding more ColorEnums or SchemeEnums (at compile time, not runtime) would not be intrusive.
For clarification's sake, I want the programmer to be able to simply select a ColorEnum and be returned a java.awt.Color object based on that ColorEnum and the defined SchemeEnum.
For instance:
// Use the BASIC color scheme
ColorPalette.setCurrentScheme(ColorPalette.SchemeEnum.BASIC);
// Set button backgrounds
testButton.setBackground(ColorPalette.ColorEnum.DARK_RED.getColor());
testButton2.setBackground(ColorPalette.ColorEnum.BLUE.getColor());
should return different Color objects than
// Use the DARK color scheme
ColorPalette.setCurrentScheme(ColorPalette.SchemeEnum.DARK);
// Set button backgrounds
testButton.setBackground(ColorPalette.ColorEnum.DARK_RED.getColor());
testButton2.setBackground(ColorPalette.ColorEnum.BLUE.getColor());
because they have different SchemeEnums, even though they are requesting the same color from ColorPalette. This way, changing the SchemeEnum changes every color in the GUI with a one-line code change (or colors could even be changed at runtime).
I've heard of hash tables being used for data storage like this, but I don't know how they work. Might they apply here?
Here is my code thus far. Thanks in advance!
package common.lookandfeel;

import java.awt.BorderLayout;
import java.awt.Color;
import javax.swing.JFrame;

/**
 * Class which contains the members for the color scheme used throughout the project.
 * <p>This class is essentially static (no constructor, class is final, all members static) and
 * should not be instantiated.
 */
public final class ColorPalette
{
    /**
     * The list of color schemes to choose from.
     */
    public static enum SchemeEnum
    {
        BASIC, DARK, METALLIC
    }

    /**
     * The list of color descriptions to choose from.
     */
    public static enum ColorEnum
    {
        LIGHT_RED(255, 0, 0), RED(192, 0, 0), DARK_RED(128, 0, 0),
        LIGHT_GREEN(0, 255, 0), GREEN(0, 192, 0), DARK_GREEN(0, 128, 0),
        LIGHT_BLUE(0, 0, 255), BLUE(0, 0, 192), DARK_BLUE(0, 0, 128),
        LIGHT_ORANGE(255, 102, 0), ORANGE(255, 102, 0), DARK_ORANGE(192, 88, 0),
        LIGHT_YELLOW(255, 204, 0), YELLOW(255, 204, 0), DARK_YELLOW(192, 150, 0),
        LIGHT_PURPLE(136, 0, 182), PURPLE(102, 0, 153), DARK_PURPLE(78, 0, 124);

        private int red;
        private int green;
        private int blue;

        private ColorEnum(int r, int g, int b)
        {
            this.red = r;
            this.green = g;
            this.blue = b;
        }

        /**
         * Get the selected color object for this enum constant.
         * @return The color description as a Color object.
         */
        public Color getColor()
        {
            // WANT TO RETURN A COLOR BASED ON currentScheme
            return new Color(red, green, blue);
        }
    }

    private static SchemeEnum currentScheme = SchemeEnum.BASIC;

    /**
     * Default constructor is private to prevent instantiation of this makeshift 'static' class.
     */
    private ColorPalette()
    {
    }

    /**
     * Get the color scheme being used on this project.
     * @return The current color scheme in use on this project.
     */
    public static SchemeEnum getCurrentScheme()
    {
        return currentScheme;
    }

    /**
     * Set the overall color scheme of this project.
     * @param cp The color scheme to set for use on this project.
     */
    public static void setCurrentScheme(SchemeEnum cp)
    {
        currentScheme = cp;
    }

    /**
     * Main method for test purposes only. Unpredictable results.
     * @param args Command line arguments. Should not be present.
     */
    public static void main(String[] args)
    {
        // Declare and define swing data members
        JFrame frame = new JFrame("Test Environment");
        CustomButton testButton = new CustomButton("Hello World");
        CustomButton testButton2 = new CustomButton("I am a button!");

        // Use a particular color scheme
        ColorPalette.setCurrentScheme(ColorPalette.SchemeEnum.BASIC);

        // Set button backgrounds
        testButton.setBackground(ColorPalette.ColorEnum.DARK_RED.getColor());
        testButton2.setBackground(ColorPalette.ColorEnum.BLUE.getColor());

        // Place swing components in frame
        frame.getContentPane().setLayout(new BorderLayout());
        frame.getContentPane().add(testButton, BorderLayout.NORTH);
        frame.getContentPane().add(testButton2, BorderLayout.SOUTH);
        frame.pack();
        frame.setVisible(true);

        // Set allocated memory to null
        frame = null;
        testButton = null;
        testButton2 = null;

        // Suggest garbage collection to deallocate memory
        System.gc();
    }
}
It looks and sounds like you just need to compose SchemeEnum out of ColorEnums, just as you have ColorEnum made up of RGB values.
public static enum SchemeEnum
{
    // Don't really know what colors you actually want...
    BASIC(ColorEnum.RED, ColorEnum.GREEN, ColorEnum.ORANGE),
    DARK(ColorEnum.DARK_RED, ColorEnum.DARK_GREEN, ColorEnum.DARK_ORANGE),
    METALLIC(ColorEnum.LIGHT_RED, ColorEnum.LIGHT_GREEN, ColorEnum.LIGHT_ORANGE);

    // ...nor how many colors make up a scheme
    public ColorEnum mainColor;
    public ColorEnum secondaryColor;
    public ColorEnum borderColor;

    private SchemeEnum(ColorEnum mainColor, ColorEnum secondaryColor,
            ColorEnum borderColor)
    {
        this.mainColor = mainColor;
        this.secondaryColor = secondaryColor;
        this.borderColor = borderColor;
    }
}
Then, use code like the following, where the colors are based on the selected scheme:
testButton.setBackground(ColorPalette.getCurrentScheme().mainColor.getColor());
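If you would rather keep the ColorPalette.ColorEnum.DARK_RED.getColor() call style from the question, the hash-table idea can also work: map each scheme to its own color table and look colors up through the current scheme. A sketch using EnumMap (needs java.util.Map and java.util.EnumMap imports; the color values are made up):

// Sketch: one color table per scheme, consulted at lookup time
private static final Map<SchemeEnum, Map<ColorEnum, Color>> SCHEMES =
        new EnumMap<>(SchemeEnum.class);

static {
    Map<ColorEnum, Color> dark = new EnumMap<>(ColorEnum.class);
    dark.put(ColorEnum.DARK_RED, new Color(96, 0, 0));
    // ... fill in the remaining colors and schemes
    SCHEMES.put(SchemeEnum.DARK, dark);
}

public static Color lookup(ColorEnum color) {
    return SCHEMES.get(currentScheme).get(color);
}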
Before you go reinventing the wheel, Swing is based on a pluggable look and feel API, see Modifying the Look and Feel.
The right way would be to define your own look and feel and load this. Because you want to provide a variable number of changes, it would probably be better to use something like Synth. This allows you to define cascading properties for objects and allows you to inherit from other properties as well (so you could devise a base set of properties and then only change the properties you need in each subsequent look and feel).
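Loading a Synth look and feel boils down to a few lines; for example (the laf.xml file and MyApp class are placeholders, and exception handling is omitted):

// Sketch: install a Synth look and feel defined in an XML style sheet
SynthLookAndFeel synth = new SynthLookAndFeel();
synth.load(MyApp.class.getResourceAsStream("laf.xml"), MyApp.class);
UIManager.setLookAndFeel(synth);
SwingUtilities.updateComponentTreeUI(frame); // refresh existing components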
The cheat way would be to modify the UIManager directly, changing the various properties used by the current look and feel. This is sometimes easier if you want to perform small tweaks.
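The UIManager route is even shorter; for example (whether a given key is honoured depends on the installed look and feel):

// Sketch: override a few look-and-feel defaults for all components
UIManager.put("Button.background", new Color(40, 40, 40));
UIManager.put("Button.foreground", Color.WHITE);
UIManager.put("Panel.background", new Color(30, 30, 30));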
Either way, this will affect ALL components created by your application without you needing to do anything more than changing the look and feel at startup.
In the Android MapsDemo available in Eclipse for the Google API, they create an inner class SmoothCanvas in MapViewCompassDemo.java. Within this class they re-implement seemingly every method and reroute it to a delegate instance of Canvas.
static final class SmoothCanvas extends Canvas {
    Canvas delegate;
    private final Paint mSmooth = new Paint(Paint.FILTER_BITMAP_FLAG);

    public void setBitmap(Bitmap bitmap) {
        delegate.setBitmap(bitmap);
    }

    public void setViewport(int width, int height) {
        delegate.setViewport(width, height);
    }
    ...
What is the point of the delegate in this instance?
The value of delegate is the Canvas passed in to dispatchDraw. The SmoothCanvas class is a wrapper around that delegate: by delegating, the Canvas implementation handed to dispatchDraw still does all the heavy lifting, and the wrapper just allows the injection of the smoothing paint without re-implementing all the logic of Canvas.
The key point of delegating in that instance is:
private final Paint mSmooth = new Paint(Paint.FILTER_BITMAP_FLAG);
the FILTER_BITMAP_FLAG bit. Filtering affects the sampling of bitmaps when they are transformed; it does not affect how the colors in the bitmap are converted into device pixels, which depends on dithering and xfermodes.
With that flag active, bitmaps are smoothed (filtered) when they are drawn transformed. You will see in the example that mSmooth is used in every drawBitmap call.
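For example, the drawBitmap overrides presumably look like this, substituting the smoothing paint:

public void drawBitmap(Bitmap bitmap, float left, float top, Paint paint) {
    // forward to the real canvas, but with the filtering paint
    delegate.drawBitmap(bitmap, left, top, mSmooth);
}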
I would like to move a sphere in a random direction within a simple universe. How could I achieve this with Behaviors, changing the location by a small amount frame by frame? The reason I am trying to do this is to produce random movement within the universe and eventually build in simple collision detection between the particles.
Any advice or links would be appreciated.
Add a new class that extends Behavior, using this skeleton:
public class XXXBehavior extends Behavior
{
    private WakeupCondition wc = new WakeupOnElapsedTime(1000); // 1000 ms

    public void initialize()
    {
        wakeupOn(wc);
    }

    public void processStimulus(Enumeration criteria)
    {
        // Move the shape here

        // prepare for the next update
        wakeupOn(wc);
    }
}
You later need to instantiate the class and add it to the scene graph. You also need to define the bounds, otherwise nothing will happen!
xxxEffect = new XXXBehavior();
xxxEffect.setSchedulingBounds(bounds);
sceneBG.addChild(xxxEffect);
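Inside processStimulus, the actual move can be a small random translation applied to the sphere's TransformGroup; a sketch, assuming the behavior holds a reference to it (sphereTG is a placeholder, and the TransformGroup needs its ALLOW_TRANSFORM_READ/WRITE capabilities set):

// Sketch: nudge the TransformGroup by a small random offset each wake-up
Transform3D current = new Transform3D();
sphereTG.getTransform(current);

Transform3D step = new Transform3D();
step.setTranslation(new Vector3d(
        (Math.random() - 0.5) * 0.1,
        (Math.random() - 0.5) * 0.1,
        (Math.random() - 0.5) * 0.1));

current.mul(step);              // accumulate the translation
sphereTG.setTransform(current);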