I would like to use a controller with a game I'm writing. I use the LWJGL Controller class to read the controller state.
The state of the two triggers seems to have been combined into a single value that represents the sum of both trigger states: the left trigger's contribution varies between -1 and 0, the right's between 0 and 1.
I currently use the getAxisValue() method to get the combined state.
I would like to be able to read these values separately. Is there any way I can do this?
The "combined trigger axis" issue is caused by DirectInput. JInput uses DirectInput to poll the 360 controller. This article explains that combining the axes was an intentional change to keep compatibility with legacy games.
Quoting the article:
The combination of the left and right triggers in DirectInput is by design. Games have always assumed that DirectInput device axes are centered when there is no user interaction with the device. However, the Xbox 360 controller was designed to register minimum value, not center, when the triggers are not being held. Older games would therefore assume user interaction.
The solution was to combine the triggers, setting one trigger to a positive direction and the other to a negative direction, so no user interaction is indicative to DirectInput of the "control" being at center.
In order to read the trigger values separately, you must use XInput.
LWJGL cannot offer separate trigger values because it effectively relies on DirectInput.
The best course of action would be to find java bindings for XInput, which does allow separate trigger values.
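To see why the combined axis is irrecoverably ambiguous, here is a minimal sketch of how DirectInput folds the two triggers into one axis (the exact sign convention is an assumption, but the principle matches the article quoted above):

```java
public class CombinedTriggerDemo {
    // DirectInput reports one axis: the right trigger pushes it one way,
    // the left trigger the other. Each trigger alone ranges 0..1.
    public static float combinedAxis(float left, float right) {
        return right - left; // combined range: -1..1
    }

    public static void main(String[] args) {
        // Neither trigger held: axis reads 0 ("centered")
        System.out.println(combinedAxis(0f, 0f)); // 0.0
        // BOTH triggers fully held ALSO reads 0 -- indistinguishable!
        System.out.println(combinedAxis(1f, 1f)); // 0.0
        // Left trigger only: axis goes negative
        System.out.println(combinedAxis(1f, 0f)); // -1.0
    }
}
```

Since "both pressed" and "neither pressed" produce the same axis value, no amount of post-processing on the getAxisValue() result can split the triggers back apart; only XInput reports them separately.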
My java application requires a 2d physics engine for a horizontal world, however, looking at jbox2d and dyn4j, it seems that they don't offer what I need out-of-the-box. Specifically, they don't support defining which object groups can collide with others. Consider this simplified model: Bullets can collide with boxes. Planes pass through boxes, but they can collide with bullets.
How do I exclude certain groups of objects from colliding in a physics engine?
Dyn4j has the CategoryFilter. You create CategoryFilters with two longs and set those on your Fixtures. The filters work in a slightly funny way, because the category and mask are compared in their binary forms to determine who can collide with whom. To see this in practice, check out this #Test from the Dyn4j repo.
Dyn4j also mentions this in the docs:
There are three Filter implementations provided, the Filter.DEFAULT_FILTER, the CategoryFilter (just like Box2D’s collision filter, int category + mask), and the TypeFilter.
So I'm assuming Box2D has this too (and jBox2d by extension). I'd say just about any physics engine at Box2D's or Dyn4j's level will have this ability in some form.
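The category/mask test itself is just two bitwise ANDs. This standalone sketch mirrors what a CategoryFilter does internally, using the bullet/box/plane model from the question (a simplification for illustration, not dyn4j's actual source):

```java
public class CategoryFilterDemo {
    static final long BULLET = 1;      // 0b001
    static final long BOX    = 1 << 1; // 0b010
    static final long PLANE  = 1 << 2; // 0b100

    // Two fixtures collide only if each one's mask accepts the other's category.
    public static boolean canCollide(long catA, long maskA, long catB, long maskB) {
        return (catA & maskB) != 0 && (catB & maskA) != 0;
    }

    public static void main(String[] args) {
        long bulletMask = BOX | PLANE;  // bullets hit boxes and planes
        long boxMask    = BULLET;       // boxes hit bullets only
        long planeMask  = BULLET;       // planes hit bullets, pass through boxes

        System.out.println(canCollide(BULLET, bulletMask, BOX, boxMask));       // true
        System.out.println(canCollide(PLANE,  planeMask,  BOX, boxMask));       // false
        System.out.println(canCollide(PLANE,  planeMask,  BULLET, bulletMask)); // true
    }
}
```

Assigning each group one bit and OR-ing bits into masks is exactly why the filter takes two longs: a single fixture can belong to several categories and accept several others at once.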
There is a solution in Box2D (and jbox2d by extension).
The "PreSolve" callback lets you disable a contact before it is resolved. See this question on gamedev, which deals with pretty much the same problem as described here.
From documentation:
Pre-Solve Event
This is called after collision detection, but before collision resolution. This gives you a chance to disable the contact based on the current configuration. For example, you can implement a one-sided platform using this callback and calling b2Contact::SetEnabled(false). The contact will be re-enabled each time through collision processing, so you will need to disable the contact every time-step. The pre-solve event may be fired multiple times per time step per contact due to continuous collision detection.
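The actual callback signature depends on the binding you use, but the per-step decision inside pre-solve is simple. Here is the core of the one-sided platform check as a standalone function (the names and the Y-axis convention are illustrative, not jbox2d's API):

```java
public class OneSidedPlatform {
    // Decide each time-step whether the platform contact stays enabled.
    // Convention assumed here: positive Y is up.
    // Moving upward (approaching from below) -> disable, so the body passes through.
    // Moving downward (falling onto the top) -> keep enabled, so it lands.
    public static boolean contactEnabled(float bodyVelocityY) {
        return bodyVelocityY <= 0f;
    }

    public static void main(String[] args) {
        System.out.println(contactEnabled(-3f)); // true: falling, lands on platform
        System.out.println(contactEnabled(2f));  // false: jumping up through it
    }
}
```

In a real ContactListener you would call something like contact.setEnabled(contactEnabled(vy)) inside preSolve on every step, because, as the docs quoted above note, the contact is re-enabled each time through collision processing.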
People here keep suggesting to me to use Key Binding instead of KeyListener in Java 2D games.
What are the advantages and disadvantages of each one? Is Key Bindings really better for 2D games?
KeyListener is a much lower-level API which requires that the component it is registered to be focusable AND have keyboard focus. This can cause issues when other components within your game grab keyboard focus, for example.
KeyListener is generally more difficult to maintain and extend or change, as typically, all the key events are channelled through a single listener, so ALL the game controls originate from this single position.
(Now imagine you want to add other controls, such as buttons or even joystick or controllers to mix - you suddenly have any number of input hubs you need to consider, keep up to date and in sync :P)
The Key Bindings API has been designed to provide re-usable Actions which can be used in a variety of different parts of the Swing API. While this primarily makes desktop application development easier, it also helps when developing games...
Apart from the fact that you now gain control over the focus level at which the events are generated, you also gain the flexibility of defining configurable keys which can be mapped to different actions.
For example...
You define an Up Action, which moves your character up. This is divorced from any event: the Action does not care how it is triggered, only what it should do when it is triggered.
You are now free to define the keystroke which would trigger this action. The great part about this is that you suddenly have the ability to provide customisation to the user, so they can actually define the keystroke they want for the action, without having to design some kind of key-mapping system of your own.
It also means that you can use the same Action (and even the same instance) in a variety of different ways. For example, you can bind the Action to a keystroke, add it to a button and, if you're brave enough to try, even bind it to another input device (like a joystick or controller)... you'd need to build that API yourself to achieve it, but it means you suddenly have a single API concept for all your user input.
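The Up Action described above can be sketched with the InputMap/ActionMap API like this (MoveUpAction and the "move-up" key are made-up names for the example):

```java
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.JComponent;
import javax.swing.JPanel;
import javax.swing.KeyStroke;

public class KeyBindingDemo {
    // The Action only knows WHAT to do, not which key triggers it.
    public static class MoveUpAction extends AbstractAction {
        public int y = 0;
        @Override
        public void actionPerformed(ActionEvent e) {
            y--; // move the character up
        }
    }

    public static JComponent bind(MoveUpAction moveUp) {
        JPanel panel = new JPanel();
        // WHEN_IN_FOCUSED_WINDOW fires even when the panel itself lacks focus,
        // which is exactly the focus headache KeyListener suffers from.
        panel.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
             .put(KeyStroke.getKeyStroke("UP"), "move-up");
        panel.getActionMap().put("move-up", moveUp);
        return panel;
    }

    public static void main(String[] args) {
        MoveUpAction moveUp = new MoveUpAction();
        JComponent panel = bind(moveUp);
        // The same Action instance could also back a button: new JButton(moveUp)
        panel.getActionMap().get("move-up")
             .actionPerformed(new ActionEvent(panel, ActionEvent.ACTION_PERFORMED, "move-up"));
        System.out.println(moveUp.y); // -1
    }
}
```

Remapping the control is now just a different KeyStroke in the InputMap; the Action itself never changes.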
Key bindings were introduced later. They map actions to specific keys, whereas a listener simply listens for which keys are pressed. To be honest, it doesn't matter much which one you use, but key bindings are generally preferable.
There are also many libraries available, each with its own advantages and disadvantages. Key bindings should be fine for a 2D game, though. However, note that the core Java API is not generally recommended for game development: if you ever want to build 3D or content-rich 2D games, it's much better to use OpenGL. Try LWJGL or JOGL (LWJGL is generally preferred), or use a game engine such as Slick2D or LibGDX.
Background
I am writing an application which has two distinct functions.
Load a GPX file and direct the user to follow the route defined in the file, by indicating the correct direction of travel and the distance, then marking each waypoint as reached and selecting the next as and when necessary.
Display the route on the standard maps widget.
Current Thinking
My current design is to have three tabs: menu, location, map. Where menu is used for loading the gpx file and amending settings; location gives the current location and direction to travel; and maps is of course the map widget with the route overlay.
So this gives four activities (the main app, and the three tabs).
I am going to need some routine to take the current location and apply logic to work out the current best position, and another routine to keep track of the route and which waypoints have been met. My thinking is to have two separate threads (one for location, one for route tracking) spawned from the main activity, with methods that can be called by any of the activities, e.g. get position. The route tracking should also use some callback or event mechanism to inform the UI when a waypoint has been reached.
This way the user interface can update as and when needed, but also responds to events driven from the location data.
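The callback mechanism mentioned above can be as simple as a listener interface that the route tracker notifies. A minimal plain-Java sketch (class and method names are invented for illustration; real Android code would post the callback back to the UI thread, and the distance check here is a crude coordinate distance, not proper geodesic math):

```java
import java.util.ArrayList;
import java.util.List;

public class RouteTracker {
    // The UI registers one of these to hear about waypoint events.
    public interface WaypointListener {
        void onWaypointReached(int index);
    }

    private final List<double[]> waypoints = new ArrayList<>(); // {lat, lon}
    private final List<WaypointListener> listeners = new ArrayList<>();
    private int next = 0;
    private final double reachedRadius; // in degrees, purely for illustration

    public RouteTracker(double reachedRadius) {
        this.reachedRadius = reachedRadius;
    }

    public void addWaypoint(double lat, double lon) {
        waypoints.add(new double[]{lat, lon});
    }

    public void addListener(WaypointListener l) {
        listeners.add(l);
    }

    // Called with each new location fix; advances to the next waypoint
    // and notifies listeners when the current one is close enough.
    public void onLocation(double lat, double lon) {
        if (next >= waypoints.size()) return;
        double[] wp = waypoints.get(next);
        double d = Math.hypot(lat - wp[0], lon - wp[1]);
        if (d < reachedRadius) {
            int reached = next++;
            for (WaypointListener l : listeners) l.onWaypointReached(reached);
        }
    }

    public int nextWaypointIndex() {
        return next;
    }
}
```

The location routine feeds onLocation(), and any activity can query nextWaypointIndex() without caring how the tracking is hosted (thread or service).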
Question
Does this seem a sensible set of decisions or is there something I haven't considered which will take me by surprise. Writing for a mobile phone is significantly different to my usual fare (having a UI makes a big change).
It might make more sense to have the location and route tracking logic implemented as bound services, rather than as threads spawned by one of your activities. See the Services guide topic for more info on how to set up and use a Service in your application.
Other than that, your approach seems pretty sound to me.
I'm helping to build a GWT application for a client and rewrote most of it to work better: shorter code, faster, etc. However, in all the GUI applications I've worked on (not so many, really) there comes a flexing point where you have to add a lot of rules and move logic from the listeners into some common mediator. Sometimes this becomes an ugly mess, so you end up doing whatever small thing you need directly in the listener.
Let's take an example:
form with 10-20 fields
two exclusive radio control about half of the state of the other fields (enabling, validation, input limits)
three exclusive radio controls control again almost the same fields, but in a different way (affecting calculations, enabling); they are also controlled by the above
4 or so number fields are validated on the fly depending on the previous selections and some real-time data object; they can have upper/lower limits, be enabled/disabled
one drop-down box controls the next 6 or so controls - displaying/hiding them, modifying validators
some checkboxes (shown by the above combo) activate some input fields and also determine their validation algorithm
While everything is up and running, without known bugs, there are a few coding gotchas that really bother me:
code is spread among listeners and some mediator methods.
loading the form with some preset values presents its own challenges: like data objects that might be available or not, data objects that might alter their state and subsequent field behaviour
some fields have a default value set, which should not be overwritten by automatic filling; but if the data objects are not there (yet), those fields will eventually need to be filled once the data becomes available
form cannot be submitted if any of the fields are not validated
My approach:
identify which fields share common behaviour and move that code into one place
each radio group shares a single listener implementation between its radios
default form filling is deferred until the live data is available (as much as possible) and as a result it gets called multiple times
each action has a call to a common validator method
the validator runs through all the fields in the form, calls their validators (which highlight all errors) and returns a single boolean
each relevant keypress, mouse action, or data change defers validation until 250 ms after the last call; the first call schedules the validator as a delayed action, and subsequent calls reset the timer
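The 250 ms deferral in the last point is a classic debounce. A standalone sketch of that scheduling trick (the Runnable stands in for the form-wide validator; in GWT itself you would use a Timer, but the cancel-and-reschedule idea is the same):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DebouncedValidator {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable validator;
    private final long delayMs;
    private ScheduledFuture<?> pending;

    public DebouncedValidator(Runnable validator, long delayMs) {
        this.validator = validator;
        this.delayMs = delayMs;
    }

    // Each keypress/mouse/data-change event calls this. Only the last call
    // within the delay window actually runs the validator: every new call
    // cancels the pending run and schedules a fresh one.
    public synchronized void requestValidation() {
        if (pending != null) pending.cancel(false);
        pending = scheduler.schedule(validator, delayMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

This keeps a full form-wide validation pass from running on every single keystroke while still guaranteeing it runs shortly after the user pauses.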
Ok, it doesn't make any sense to delve into more detail, but what bothers me most is that there is no clear separation between visual actions (enabling), data actions (setting form field values), field listeners, retrieving form values, and live-data listeners.
What would be a good approach/pattern (next time maybe) to make sure that MVC get separated and lends itself better to maintenance? I know this is not a typical question but I've read every documentation I could get my hands on and still did not find some helpful answer.
I'd move closer towards MVP than MVC. It's clearly the way Google intends to go, so adopting it will probably mean that you're able to go with the flow rather than fight the current.
How does this affect you? Well, I believe you should accept that a tidier implementation may involve more code: not the 'shorter code' you were hoping for. But, if it's logically structured, efficient code the Google compiler should be able to trim lots out in the compiler optimisation phase.
So, move as much of the logic as you can into the model layer. Test this thoroughly, and verify that the correct level of page reset/tidying happens (all of this can be done with plain JUnit, without any UI). Next, use your Presenter (Activity) to tie the View to the Model: handling the interactions, populating the fields, etc.
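A skeletal illustration of that split (all names here are invented for the example; in GWT the view implementation would wrap widgets, while the model and presenter stay plain Java and are JUnit-testable with a fake view):

```java
public class MvpSketch {
    // Model: pure logic, no UI types, unit-testable on its own.
    public static class FormModel {
        public boolean isValid(String amount) {
            try {
                return Integer.parseInt(amount) > 0;
            } catch (NumberFormatException e) {
                return false;
            }
        }
    }

    // View: just a display contract; a GWT view or a test fake implements it.
    public interface FormView {
        String amountText();
        void setSubmitEnabled(boolean enabled);
    }

    // Presenter: wires view events to model decisions; no widget code here.
    public static class FormPresenter {
        private final FormModel model;
        private final FormView view;

        public FormPresenter(FormModel model, FormView view) {
            this.model = model;
            this.view = view;
        }

        public void onAmountChanged() {
            view.setSubmitEnabled(model.isValid(view.amountText()));
        }
    }
}
```

All the enabling/validation rules live in the model and presenter, so the tangle of field listeners shrinks to one-line calls into the presenter.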
You can divide a huge class into different classes by dividing the GUI into different JPanels, with each panel implemented in its own class extending JPanel. I guess that would help you.
I'm writing an application which has an interval time as a parameter and would like a field similar to the one the Timer has to set its time. Values of a few seconds to a few hours make sense for the application.
What type of field should I use?
Looks like a custom field.
Your choices with built-in fields are:
net.rim.device.api.ui.component.NumericChoiceField, which basically acts like a drop-down with numbers in it (not great when you're talking about 60 minutes/seconds, but if you want to constrain to say 5 minute intervals or something it might be ok).
net.rim.device.api.ui.component.EditField with a custom net.rim.device.api.ui.text.TextFilter (you could use a NumericTextFilter, but that wouldn't constrain you to 0-60, it'd allow any numbers).
Or you can roll your own. See this article for a start on creating custom fields. You'll probably want to override navigationMovement to make the numbers increment/decrement on trackball up and down, and to move the focus within the field when going left and right - setting an internal state variable indicating where the focus is and overriding getFocusRect to return an appropriate focus rectangle (be sure to call focusRemove and focusAdd from within navigationMovement to let the framework know you've updated the focus).