OS-level event handling? - Java

I want to handle OS-level events in Java. What I need to do is perform some logic on key press. I found that JNI is used for this purpose, but I don't know how. I don't need to handle events on a GUI; my application will run in the background and perform some logic on every key press.

If you think using a GUI is overkill, I wouldn't consider using JNI, which will be 10x harder. Even if you are confident in C, it can be tricky.
You are better off using a library which already does what you want, like http://jline.sourceforge.net/
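For completeness, a minimal sketch of reading single key presses with JLine 2 (this assumes the jline.console.ConsoleReader API; note it only sees keys typed into the terminal the program is attached to, so it does not give you truly global, OS-level key events):

import java.io.IOException;
import jline.console.ConsoleReader;

public class KeyPressLoop {
    public static void main(String[] args) throws IOException {
        ConsoleReader reader = new ConsoleReader();
        while (true) {
            int key = reader.readCharacter(); // blocks until a key is pressed
            // perform your logic on every key press here
            System.out.println("key pressed: " + key);
        }
    }
}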

Related

How to Assign different keys to SWING Action? [duplicate]

People here keep suggesting to me to use Key Binding instead of KeyListener in Java 2D games.
What are the advantages and disadvantages of each one? Is Key Bindings really better for 2D games?
KeyListener is a much lower-level API which requires the component it is registered to to be focusable AND to have keyboard focus. This can cause issues when you have other components within your game that may grab keyboard focus, for example.
KeyListener is generally more difficult to maintain and extend or change, as typically, all the key events are channelled through a single listener, so ALL the game controls originate from this single position.
(Now imagine you want to add other controls, such as buttons or even a joystick or controller, to the mix - you suddenly have any number of input hubs you need to consider, keep up to date and keep in sync :P)
The Key Bindings API has been designed to provide re-usable Actions which can be used in a variety of different parts of the Swing API. While this makes desktop application development easier, it can also make it easier when developing games...
Apart from the fact that you now gain control over the focus level at which the events are generated, you also gain the flexibility of defining configurable keys which can be mapped to different actions.
For example...
You define an Up Action, which moves your character up. This is divorced from any event. That means the Action does not care how it is triggered, only what it should do when it is triggered.
You are now free to define the keystroke which would trigger this action. The great part about this is that you suddenly have the ability to provide customisation to the user, so they can actually define the keystroke they want for each action - without having to design some kind of key-mapping system of your own.
It also means that you can use the same Action (and even the same instance) in a variety of different ways. For example, you can bind the Action to a keystroke and add it to a button, and if you're brave enough to try, even bind it to another input device (like a joystick or controller). You'd need to build that API yourself to achieve it, but it means you suddenly have a single concept for all your user input. For example...
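A minimal, runnable sketch of the key-binding part of that idea (the class name, action name and print statement are mine, purely for illustration):

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.Action;
import javax.swing.JButton;
import javax.swing.JComponent;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.KeyStroke;
import javax.swing.SwingUtilities;

public class KeyBindingSketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // The "up" Action knows only what to do, not what triggers it.
            Action moveUp = new AbstractAction("Up") {
                @Override
                public void actionPerformed(ActionEvent e) {
                    System.out.println("move character up");
                }
            };

            JPanel gamePanel = new JPanel();
            // WHEN_IN_FOCUSED_WINDOW fires even if another child component has focus.
            gamePanel.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
                     .put(KeyStroke.getKeyStroke("UP"), "moveUp");
            gamePanel.getActionMap().put("moveUp", moveUp);

            // The same Action instance can also back a button.
            gamePanel.add(new JButton(moveUp));

            JFrame frame = new JFrame("Key binding sketch");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(gamePanel);
            frame.pack();
            frame.setVisible(true);
        });
    }
}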
Key bindings were introduced later. They map actions to specific keys whereas a listener simply listens for which keys are pressed. To be honest, it doesn't really matter which one you use, but it's always preferable to use key bindings.
There are also many input libraries available, each with its own advantages and disadvantages. Key bindings should be fine for a 2D game, though. However, please note that the core Java APIs are not generally recommended for game development. If you ever want to build 3D or content-rich 2D games, it's much better to use OpenGL. Try LWJGL or JOGL (LWJGL is generally preferred), or use a game engine such as Slick2D or LibGDX.

JFrame Close to Background and Listen to Keys

Working on a new personal project with JFrame. My goal is to close the frame to the background from an ActionListener, and when specific keys are pressed (Ctrl+Shift+L), I want to open the frame back up.
I'm not sure how I can do this while keeping CPU usage low. I know I can set the frame's visibility to false and then probably use a generic ActionListener for the keys, however I have a few problems (and questions).
Is this the best way to do it? I'm trying to keep the CPU usage as low as possible.
Will the ActionListener even work while the frame's not visible?
How do I listen to multiple key presses? I have an idea but it doesn't sound like it will work.
Well, the problem is that Java is designed to be platform independent.
To achieve that, there have to be some limitations on programs written in this language.
You want to capture keystrokes even if your window/program doesn't have the focus set on it.
In fact, what you need to write is some kind of global key listener.
You can't do such things in plain Java. You would normally have to choose a more machine-oriented programming language like C/C++ to achieve what you want.
In Java such stuff is only possible using the Java Native Interface (JNI for short).
Using JNI it is possible to write a library for hooking the key events in, for example, C/C++ and call the library's methods from a Java program.
JNativeHook ( https://github.com/kwhat/jnativehook ) uses this exact approach. I haven't tried this framework, so I can't tell whether it works.
But I once used this and it worked fine for me: http://softk.de/opensource/jglobalkeylistener.html
You can just download the source, and don't panic even though the site is written in German - the source code is documented in English, and even the comments within the code are in English.
PS: If that doesn't work, it may help to google for things like "java global keylogger", because that's exactly what a keylogger does (well, it obviously also logs the keys), and I think there will be much more material that may help you.
Greetings, Loki
Is this the best way to do it? I'm trying to keep the CPU usage as low as possible.
As previously mentioned, use JNativeHook. It is the only cross-platform solution, and it will be much faster and less intensive than a polling approach like while (1) { GetAsyncKeyState(...); Sleep(5); }. The biggest performance bottleneck with JNativeHook is the OS, not the library.
Will the ActionListener even work while the frame's not visible?
It will not work unless the frame has focus, but the native library provides other events that do fire out of focus, so you could make it work by fabricating your own ActionEvents from the NativeInputEvent listeners. Just make sure you set the library to use the Swing event dispatcher, as it does not by default!
How do I listen to multiple key presses? I have an idea but it doesn't sound like it will work.
What do you mean by "multiple key presses"? If you mean auto-repeat when a key is held down, that is handled by sending multiple Key Pressed events after the auto-repeat delay is exceeded, at an interval of the auto-repeat rate. You may also receive multiple Key Typed events if the key produces a printable character. When the key is released, a single Key Released event will be dispatched. If you mean a sequence of keys or multiple keys at the same time, you will need to do your own tracking or checks in the native input listener, but it should be possible.
Basic modifier example: Note that the JNativeHook library has both a left and a right mask for the modifier keys. I assume you want to accept either the left or the right key, which makes this a tad more complicated.
public void nativeKeyPressed(NativeKeyEvent e) {
    // If the key code is L
    if (e.getKeyCode() == NativeKeyEvent.VK_L) {
        // There is a shift mask and a control mask covering either the left or right key.
        if ((e.getModifiers() & NativeInputEvent.SHIFT_MASK) != 0
                && (e.getModifiers() & NativeInputEvent.CTRL_MASK) != 0) {
            // Make sure you don't have extra modifiers like the meta key.
            if ((e.getModifiers() & ~(NativeInputEvent.SHIFT_MASK | NativeInputEvent.CTRL_MASK)) == 0x00) {
                // ...
            }
        }
    }
}
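For context, here is a sketch of how such a listener gets registered (assuming JNativeHook 2.x and its static GlobalScreen methods; the class name is mine, and newer releases moved the packages from org.jnativehook to com.github.kwhat.jnativehook, so check the version you use):

import org.jnativehook.GlobalScreen;
import org.jnativehook.NativeHookException;
import org.jnativehook.dispatcher.SwingDispatchService;
import org.jnativehook.keyboard.NativeKeyEvent;
import org.jnativehook.keyboard.NativeKeyListener;

public class GlobalHookExample implements NativeKeyListener {
    public void nativeKeyPressed(NativeKeyEvent e) {
        // react to the global key press here, e.g. show the hidden JFrame again
    }
    public void nativeKeyReleased(NativeKeyEvent e) { }
    public void nativeKeyTyped(NativeKeyEvent e) { }

    public static void main(String[] args) throws NativeHookException {
        // Dispatch native events on the Swing EDT instead of the library's own thread.
        GlobalScreen.setEventDispatcher(new SwingDispatchService());
        GlobalScreen.registerNativeHook();
        GlobalScreen.addNativeKeyListener(new GlobalHookExample());
    }
}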

Event-driven programming - node.js, Java

I am coming from Java but have been doing some Node.js lately, and have been looking at the EventEmitter module in Node.
What I don't understand is the fundamental difference between Event-driven programming and 'regular' programming.
Here is some pseudo-code to demonstrate my idea of "event-driven" programming.
EventEmitter ee = new EventEmitter();
Function f = new SpecialFunction();
ee.on('grovel',f);
ee.emit('grovel'); //calls function 'f'
The only work the EventEmitter object seems to be doing is creating a hash relationship between the String representation of an event (in this case 'grovel') and a function to respond with. Seems like that's it - not much magic there.
However, my question is: how does event-driven programming really work behind the scenes with low-level events like mouse clicks and typing? In other words, how do we take a click and turn it into a string (like 'grovel') in our program?
Okay. I will take a run at this.
There are a couple of major reasons to use event emitters.
One of the main reasons is that the browser, which is where JavaScript was born, sometimes forces you to. Whether you are wiring your events straight into your HTML, using jQuery or some other framework/library, or whatever, the underlying code is still basically the same (erm, basically...)
So first, if you want to react to a keyboard or mouse event, like you mentioned, you could just hard bind directly to an event handler (callback) like this:
<div onclick="myFunc(this)">Click me</div>
...or you could do the exact same thing in JS by DOM reference:
document.getElementById('my_element').onclick = function (evt) {
alert('You clicked me');
};
This used to be the primary way we wired up click handlers. One lethal drawback of this pattern is that you can only attach one callback to each DOM event. If you wanted a second callback that reacted to the same event, you would either need to write it into the existing click handler or build a delegate function to handle the job of calling the two functions. Plus, your event emitter ends up being tightly coupled to the event listener, and that is generally a bad thing.
As applications become more complex, it makes more sense to use event listeners instead. Browser vendors (eventually) settled on a single way to do this:
// Build the handler
var myHandler = function (evt) {
alert('You clicked me too');
window.myHandlerRef = this; // Watch out! See below.
};
// Bind the handler to the DOM event
document.getElementById('my_element').addEventListener('click', myHandler);
The advantage of this pattern is that you can attach multiple handlers to a single DOM event, or call one single event handler from several different DOM events. The disadvantage is that you have to be careful not to leak: depending on how you write them, event-handling closures (like myHandler above) can continue to exist after the DOM element to which they were attached has been destroyed and GCed. This means it is good practice to always do a removeEventListener('click', myHandler). (Some libraries have an off() method that does the same thing.)
This pattern works well for keyboard events as well:
var giveUserAHeadache = function (evt) {
alert('Annoying, isn\'t it?');
};
document.addEventListener('keypress', giveUserAHeadache);
Okay. So that is how you usually handle native browser events. But developers also like to use this pattern of event delegation in their own code. The reason you would want to do this is so you can decouple your code as much as possible.
For example, in a UI, you could have an event emitted every time the user's browser goes offline (you might watch navigator.onLine for example). Maybe you could have a green/red lamp on your page header to show the online state, and maybe you could disable all submit buttons when offline, and maybe also show a warning message in the page footer. With event listeners/emitters, you could write all of these as completely decoupled modules and they still can work in lock-step. And if you need to refactor your UI, you can remove one component (the lamp, for example), replace it with something else without worrying about screwing up the logic in some other module.
As another example, in a Node app you might want your database code to emit an error condition to a particular controller and also log the error -- and maybe send out an email. You can see how these sorts of things might get added iteratively. With event listeners, this sort of thing is easy to do.
You can either write your own, or you can use whatever pattern is available in your particular environment. jQuery, Angular, Ember and Node all have their particular methods, but you are free to also build your own -- which is what I would encourage you to try.
These are all variations of the same basic idea, and there is a LOT of blur over the exact definitions and the most correct implementations (in fact, some might question whether they are different at all). But here are the main culprits:
Observer
Pub-Sub
Mediator
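All of these boil down to the mechanism sketched in the question: a map from event names to callbacks. Since the answer encourages building your own, here is a minimal Java sketch of that idea (all names here are made up for illustration):

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal emitter: a map from event name to a list of callbacks.
public class SimpleEmitter {
    private final Map<String, List<Consumer<Object>>> listeners = new HashMap<>();

    public void on(String event, Consumer<Object> handler) {
        listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
    }

    public void emit(String event, Object payload) {
        for (Consumer<Object> handler : listeners.getOrDefault(event, Collections.emptyList())) {
            handler.accept(payload);
        }
    }

    public static void main(String[] args) {
        SimpleEmitter ee = new SimpleEmitter();
        ee.on("grovel", payload -> System.out.println("grovelling with " + payload));
        ee.emit("grovel", 42); // calls the registered handler
    }
}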

Is there a way for fuzzing Swing applications?

Back in the old days, PalmOS had an emulator that could generate random events ("tap here, enter garbage in that text field, ...") for testing how applications would handle them (called "Gremlins"). This is a bit like fuzzing, but for a GUI. Is there an easy (existing) way to do that in a Java Swing application?
Edit:
Please note that I don't want to be able to specify, which events are fired. I'd like some code to automatically generate and fire random (as in "Math.random()") events. The probability that the events do something useful or find a bug is pretty small. But that is offset by firing many events.
Try FEST. It simplifies functional testing of Swing GUIs by letting you access Swing components by name and then interact with them.
An example from the FEST site:
dialog.comboBox("domain").select("Users");
dialog.textBox("username").enterText("alex.ruiz");
dialog.button("ok").click();
dialog.optionPane().requireErrorMessage()
.requireMessage("Please enter your password");
Edit:
Alternatively, what you are trying to achieve should be really straightforward using Math.random(), a loop, findComponentAt(int, int) and the Robot class. The Robot class especially might be of use, as it has methods for spoofing mouse and keyboard events.
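A rough sketch of that random-event approach with java.awt.Robot (the class name SwingGremlin and the choice of events are mine; the frame must already be visible on screen):

import java.awt.AWTException;
import java.awt.Point;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;
import java.util.Random;
import javax.swing.JFrame;

// Gremlin-style fuzzer: fires random clicks and key presses at a visible frame.
public class SwingGremlin {
    public static void fuzz(JFrame frame, int events) throws AWTException {
        Robot robot = new Robot();
        Random rnd = new Random();
        for (int i = 0; i < events; i++) {
            Point origin = frame.getLocationOnScreen();
            int x = origin.x + rnd.nextInt(Math.max(1, frame.getWidth()));
            int y = origin.y + rnd.nextInt(Math.max(1, frame.getHeight()));
            robot.mouseMove(x, y);
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);   // use BUTTON1_MASK on pre-Java-9 JDKs
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
            int key = KeyEvent.VK_A + rnd.nextInt(26);         // VK_A..VK_Z are contiguous
            robot.keyPress(key);
            robot.keyRelease(key);
            robot.delay(50); // small pause so the EDT can catch up
        }
    }
}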

Questions: controlling a Swing GUI from an external class and separating logic from user interface

UPDATE: I'm using NetBeans and Matisse, and it's possible that Matisse is causing the problems I describe below.
UPDATE 2: Thanks to those who offered constructive suggestions. After rewriting the code without Matisse's help, the answer offered by ignis worked as he described. I'm still not sure how the code generated by NetBeans interfered.
Though I've been programming in Java for a while, I've never done any GUI programming until now. I would like to control a certain part of my program externally (updating a JTextArea field with output from an external source) without requiring any user action to trigger the display of this output in the JTextArea.
Specifically, I want this output to begin displaying on startup and to start and stop depending on external conditions that have nothing to do with the GUI or what the user is doing. From what I understand so far, you can trigger such events through action listeners, but action listeners assume they are listening for user activity. If I must use action listeners, is there a way to trick the GUI into thinking user interaction has happened, or is there a more straightforward way to achieve what I want to do?
Also, I'd really like to know more about best practices for separating GUI code from application logic. From the docs I've come across, it seems that GUI development demands a messier integration of logic and user interface than, say, a web application, where one can achieve complete separation. I'd be very interested in any leads in this area.
There is no need to use listeners. GUI objects are just like any other objects in the program, so actually
you can use the listener pattern in any part of the program, even if it is unrelated to the GUI
you can invoke methods of objects of the GUI whenever you want during the program execution, even if you do not attach any listeners to the objects in the GUI.
The main "rule" you must follow is that every method invocation performed on objects of the GUI must be run on the AWT Event Dispatch Thread (yes, that's true for Swing also).
http://download.oracle.com/javase/tutorial/uiswing/concurrency/dispatch.html
So you must wrap code accessing the GUI objects, into either
javax.swing.SwingUtilities.invokeLater( new Runnable() { ... } )
or
javax.swing.SwingUtilities.invokeAndWait( new Runnable() { ... } )
http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html
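A small runnable sketch of what that looks like for the JTextArea case from the question (the background thread here is just a stand-in for whatever external source produces the output):

import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;

public class ExternalUpdateDemo {
    public static void main(String[] args) {
        JTextArea output = new JTextArea(10, 40);

        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("External updates");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new JScrollPane(output));
            frame.pack();
            frame.setVisible(true);
        });

        // Stand-in for the "external source": any non-EDT thread.
        new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                final String line = "external event #" + i + "\n";
                // Hop onto the EDT before touching the Swing component.
                SwingUtilities.invokeLater(() -> output.append(line));
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }).start();
    }
}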
About "separating GUI code from the application logic": google "MVC" or "model view controller". This is the "standard" way of separating these things. It consists in making the GUI code (the "view") just a "facade" for the contents (the "model"). Another part of the application (the "controller") creates and invokes the model and the view as needed (it "controls" program execution, or it should do that, so it is named "controller"), and connects them with each other.
http://download.oracle.com/javase/tutorial/uiswing/components/model.html
For example, a JFoo class in the javax.swing package, which defines a Swing component, acts as the view for one or more FooModel classes or interfaces defined under javax.swing or one of its subpackages. Your program will be the "controller", which instantiates the view and a suitable implementation of the model (which may be one of the default implementations found under those packages I mentioned, or a custom implementation defined among your own packages in the program) and connects them.
http://download.oracle.com/javase/1.4.2/docs/api/javax/swing/package-summary.html
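As a concrete illustration of that view/model split using standard Swing classes (the frame setup and list contents are just for demonstration):

import javax.swing.DefaultListModel;
import javax.swing.JFrame;
import javax.swing.JList;
import javax.swing.JScrollPane;
import javax.swing.SwingUtilities;

public class ModelViewDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // The DefaultListModel is the model; the JList is the view.
            DefaultListModel<String> model = new DefaultListModel<>();
            JList<String> view = new JList<>(model);

            JFrame frame = new JFrame("Model/View demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new JScrollPane(view));
            frame.setSize(300, 200);
            frame.setVisible(true);

            // The "controller" code only touches the model; the view repaints itself.
            model.addElement("first item");
            model.addElement("second item");
        });
    }
}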
That's a really good question, IMHO... one I asked a couple of years ago on Sun's Java Forums (now basically defunct, thanx to Oracle, the half-witted pack of febrile fiscal fascists).
On the front of bringing order to the chaos that is your typical "first cut" of a GUI, google for Swing MVC. The first article I read on the topic was JavaWorld's "MVC meets Swing". I got lucky, because it explains the PROBLEMS as well as proposing sane solutions (with examples). Read through it, google yourself for "extended reading", and hit us with any specific questions arising from that.
On the "simulated user activity" front you've got nothing to worry about, really... you need only observe your external conditions - say you detect that a local file has been updated (for instance) - and in turn "raise" a notification to the registered listener(s)... the only difference being that in this case you're implementing both the "talker" and the "listener". Swing's listener interfaces may be re-used for the messaging (or not, at your discretion). Nothing tricky here.
"Raising" an "event" is totally straightforward. Basically you'd just invoke the "eventHappened" method on each of the listeners currently registered. The only tricky bit is dealing with the "multithreaded-ness" innate to all non-trivial Swing apps... otherwise they'd run like three-legged dogs, because the EDT (google it) would be constantly off doing everything, instead of just painting and message brokering (i.e. what it was designed for). (As said earlier by ignis) the SwingUtilities class exposes a couple of handy invoke methods for "raising events" on the EDT.
There's nothing really special about Swing apps... Swing just has a pretty steep learning curve, that's all, especially multithreading... a topic which I had previously avoided like the plague, as "too complicated for a humble brain like mine". Needless to say that turned out to be a baseless fear. Even an old idiot like myself can understand it... it just takes longer, that's all.
Cheers. Keith.
This doesn't exactly answer your question, but you might be interested in using NetBeans for Java GUI development. You can use the NetBeans GUI builder (Matisse) to do Java GUI development.
Here's a good place to get started -
http://netbeans.org/kb/trails/matisse.html
