I'm working with Python scripts in Inductive Automation's Ignition HMI software (Java backend). I'm trying to write a script that locates other scripts tied to certain objects. Currently I have:
result = window.getRootContainer().getComponent("Group 1").getComponent("TheObject").mouseClicked
which gets the window displaying my object, enters the root container of that object, then the group the object is in, and finally the script tied to the mouseClicked event on TheObject. When I run this and print the result, I don't get an error, but:
<CompoundCallable with 0 callables>
Has anyone seen this before? Does anyone know what I may need to change in my first line of code to access the actual data stored in the mouseClicked script?
Looks like there is no code associated with the mouseClicked event of that object.
CompoundCallable is a "composition of callables": something callable that calls multiple callables, a kind of callable container. It is used to allow registering multiple functions to be called from a single event handler.
However, your CompoundCallable contains zero callables. That means nothing will be called when you invoke it.
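Conceptually (this is an illustrative Java sketch, not Ignition's actual class), a compound callable is little more than:

import java.util.ArrayList;
import java.util.List;

// Sketch only: a callable that simply calls every callable registered with it.
class CompoundCallable implements Runnable {
    private final List<Runnable> callables = new ArrayList<>();

    public void add(Runnable c) { callables.add(c); }

    @Override public void run() {
        for (Runnable c : callables) c.run(); // zero registered => nothing happens
    }
}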
If I understand what you're asking, I don't believe you'll be able to access the data that is in that script (variables, etc.). You could have the mouseClicked script write data to something else in order to access it. There are multiple possibilities for that: a custom window property, a custom component property, or a tag.
I have two different and independent JFrame windows:
DataFrame
GraphFrame
The first one is for the user to manipulate, entering different values and patterns to display on the graph presented in 2). 1) sends specific values (an array of doubles) to 2) so that GraphFrame can create the graph.
I invoke the "main" method of GraphFrame in the "main" method of DataFrame so that they both run at the same time and are both visible during the whole process.
I want these frames to be completely independent, which means that the mission of 1) is to send values and the mission of 2) is to check when values are received and then create the graph.
I also prefer to keep most of the methods private, so that they can't be accessed from external sources.
My problem is that I don't know the best way to implement this data exchange. What is the best way for frame 2) to keep "listening" for the values it needs to receive?
Should I create getters/setters on 2) and use an Observer (https://sourcemaking.com/design_patterns/observer)?
Or should I use threads?
Or even the creation of a traditional loop that keeps waiting for values, like:
while (array.isEmpty()) {
    // stuck here
}
// create the graph from the values in array
At the moment I am receiving the values in 2) through setter methods, but so far I have been unable to run the code I want only after the values arrive.
What do you think is the best way to implement this?
P.S.: Should I consider not invoking GraphFrame's main from DataFrame and running the two separately?
From what I understood, you're trying to run both JFrames in the same application. Conceptually this is one UI split into two windows rather than two independent frames, as you put it.
Swing requires all UI elements to be updated by one thread, the AWT event dispatch thread. Interaction with the UI also runs on that thread. You need to take this into account.
Also, it is best practice to separate the data model from the view. To solve your problem you could create a model for the GraphFrame that is manipulated by changes on the DataFrame. These changes could, for example, be picked up by a listener on the model that uses SwingUtilities.invokeLater() to update the GraphFrame.
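As a rough sketch of that model/listener idea (the GraphModel name and the wiring are illustrative, not a prescribed API):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Shared model: DataFrame writes into it, GraphFrame registers a
// listener and redraws when notified.
class GraphModel {
    public interface Listener { void valuesChanged(double[] values); }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(Listener l) { listeners.add(l); }

    // Called by DataFrame, typically from the event dispatch thread.
    public void setValues(double[] newValues) {
        double[] copy = newValues.clone();
        for (Listener l : listeners) l.valuesChanged(copy);
    }
}

// In GraphFrame's constructor (sketch):
//   model.addListener(values ->
//       SwingUtilities.invokeLater(() -> { setData(values); repaint(); }));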
Of course there are a number of additional issues you might need to take care of, and depending on your requirements you might need to decouple the two parts further.
You could try having the GraphFrame initialize and then stop, but have a (static/non-static) method in GraphFrame that DataFrame can call to update the graph. Afterwards, repaint GraphFrame.
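A minimal sketch of that alternative, with illustrative names:

import javax.swing.JFrame;

// Illustrative only: GraphFrame exposes one static entry point that
// DataFrame calls; everything else can stay private.
public class GraphFrame extends JFrame {
    private static GraphFrame instance;
    private double[] values = new double[0];

    public static void launch() {            // called once from DataFrame's main
        instance = new GraphFrame();
        instance.setSize(400, 300);
        instance.setVisible(true);
    }

    public static void updateGraph(double[] newValues) {
        instance.values = newValues.clone();  // store the new data
        instance.repaint();                   // trigger a redraw
    }
}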
Is this what you're looking for?
I am coming from Java but have been doing some Node.js lately, and have been looking at the EventEmitter module in Node.
What I don't understand is the fundamental difference between Event-driven programming and 'regular' programming.
Here is some pseudo-code to demonstrate my idea of "event-driven" programming.
EventEmitter ee = new EventEmitter();
Function f = new SpecialFunction();
ee.on('grovel',f);
ee.emit('grovel'); //calls function 'f'
The only work the EventEmitter object seems to be doing is creating a hash relationship between the String representation of an event (in this case 'grovel') and a function to respond with. Seems like that's it; not much magic there.
However, my question is: how does event-driven programming really work behind the scenes with low-level events like mouse clicks and typing? In other words, how do we take a click and turn it into a string (like 'grovel') in our program?
Okay. I will take a run at this.
There are a couple of major reasons to use event emitters.
One of the main reasons is that the browser, which is where JavaScript was born, sometimes forces you to. Whether you are wiring your events straight into your HTML, using jQuery or some other framework/library, or whatever, the underlying code is still basically the same (erm, basically...)
So first, if you want to react to a keyboard or mouse event, like you mentioned, you could just hard bind directly to an event handler (callback) like this:
<div onclick="myFunc(this)">Click me</div>
...or you could do the exact same thing in JS by DOM reference:
document.getElementById('my_element').onclick = function (evt) {
alert('You clicked me');
};
This used to be the primary way we wired up click handlers. One lethal drawback of this pattern is that you can only attach one callback to each DOM event. If you wanted a second callback that reacted to the same event, you would either need to write it into the existing click handler or build a delegate function to handle the job of calling the two functions. Plus, your event emitter ends up being tightly coupled to the event listener, and that is generally a bad thing.
As applications become more complex, it makes more sense to use event listeners instead. Browser vendors (eventually) settled on a single way to do this:
// Build the handler
var myHandler = function (evt) {
alert('You clicked me too');
window.myHandlerRef = this; // Watch out! See below.
};
// Bind the handler to the DOM event
document.getElementById('my_element').addEventListener('click', myHandler);
The advantage of this pattern is that you can attach multiple handlers to a single DOM event, or call one single event handler from several different DOM events. The disadvantage is that you have to be careful not to leak: depending on how you write them, event-handling closures (like myHandler above) can continue to exist after the DOM element to which they were attached has been destroyed and GCed. This means it is good practice to always do a removeEventListener('click', myHandler). (Some libraries have an off() method that does the same thing.)
This pattern works well for keyboard events as well:
var giveUserAHeadache = function (evt) {
alert('Annoying, isn\'t it?');
};
document.addEventListener('keypress', giveUserAHeadache);
Okay. So that is how you usually handle native browser events. But developers also like to use this pattern of event delegation in their own code. The reason you would want to do this is so you can decouple your code as much as possible.
For example, in a UI, you could have an event emitted every time the user's browser goes offline (you might watch navigator.onLine for example). Maybe you could have a green/red lamp on your page header to show the online state, and maybe you could disable all submit buttons when offline, and maybe also show a warning message in the page footer. With event listeners/emitters, you could write all of these as completely decoupled modules and they still can work in lock-step. And if you need to refactor your UI, you can remove one component (the lamp, for example), replace it with something else without worrying about screwing up the logic in some other module.
As another example, in a Node app you might want your database code to emit an error condition to a particular controller and also log the error -- and maybe send out an email. You can see how these sorts of things might get added iteratively. With event listeners, this sort of thing is easy to do.
You can either write your own, or you can use whatever pattern is available in your particular environment. jQuery, Angular, Ember and Node all have their particular methods, but you are free to also build your own -- which is what I would encourage you to try.
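If you do roll your own, a minimal sketch in Java (since you're coming from Java) could look like the following. This is an illustration of the idea rather than Node's actual implementation; the map from event names to callbacks is exactly the "hash relationship" you described:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

class EventEmitter {
    // Event name -> list of callbacks registered for that name.
    private final Map<String, List<Consumer<Object>>> listeners = new HashMap<>();

    public void on(String event, Consumer<Object> callback) {
        listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(callback);
    }

    public void emit(String event, Object payload) {
        for (Consumer<Object> cb
                : listeners.getOrDefault(event, Collections.emptyList())) {
            cb.accept(payload); // call every registered callback in order
        }
    }
}

// Usage, mirroring the pseudo-code in the question:
//   EventEmitter ee = new EventEmitter();
//   ee.on("grovel", payload -> System.out.println("grovelled"));
//   ee.emit("grovel", null); // calls the registered callback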
These are all variations of the same basic idea and there is a LOT of blur over the exact definition or most correct implementation (in fact, some might question if these are different at all). But here are the main culprits:
Observer
Pub-Sub
Mediator
I'm working on building a custom Simulink block as a MATLAB toolbox. To avoid programming the system in MATLAB's language, I'd like to write as much of it as possible in Java. I've researched the MATLAB <-> Java interface, and it seems possible to do this. However, the one thing I couldn't find any information about was storing my custom Java object (holding the block's data) inside the Simulink block.
I conducted a quick test, and it seems storing a java.lang.String instance is possible. However, that was a relatively simple test. Before jumping in head first, I wanted to check whether this is even possible. Does anyone have experience with a similar setup? Does the object simply need to be Serializable to work?
For background information, I'm looking to implement the non-math parts (GUI code, processing, etc.) in Java. Math-related elements would likely remain in MATLAB.
To store your Java object inside the block you should use its UserData block parameter. According to the documentation, you can put any data type in this parameter.
The only problems I can see with this are saving/loading and the creation of new blocks. Saving/loading should be solved by serialization, but you will have to try it to see. If it doesn't work, you could create a hidden mask parameter for your blocks, serialize your Java object to a string, and save the data in this mask during the PreSaveFcn callback. The data could then be deserialized from the mask parameter in the LoadFcn callback.
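On the Java side, the serialize-to-string step could look something like this sketch, assuming your block-data class implements Serializable; Base64 keeps the result safe to store as a string parameter:

import java.io.*;
import java.util.Base64;

// Sketch: turn a Serializable block-data object into a string that a
// PreSaveFcn callback can stash in a mask parameter, and back again.
public final class BlockDataCodec {
    public static String encode(Serializable data) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(data);
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }

    public static Object decode(String s) throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(s);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        }
    }
}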
For the creation of new blocks, you should set the PreCopyFcn callback of your library block and create your new Java object there. I have the feeling that if you don't do this, then MATLAB will copy the reference to your object from UserData (if one exists there already), which is probably not what you want.
You probably also want to override the OpenFcn callback since your aim is to use your Java object as a kind of souped-up mask, so that when a user double-clicks on the block you can show your custom UI.
For more information on block callback parameters, see this.
I have a big Java application composed of many classes, each in charge of a specific task.
What I'd like to do is to be able to dispatch events with parameters from one class, catch them in other classes, and execute functions according to them.
For example:
In one of the classes I have a function called userPuchaseGift4Himself, so I want to add an event called USER_PURCHASE_GIFT_FOR_HIMSELF that has two parameters, userid and amount. I want any part of the code to be able to add an event listener for this event and, when it fires, to execute code with the parameters that were dispatched with the event.
Can anyone please provide an example of how to do so? That would be really great.
Any information regarding the issue would be greatly appreciated.
Thank you so much!
Check out the EventBus in Google Guava.
EventBus allows publish-subscribe-style communication between components without requiring the components to explicitly register with one another (and thus be aware of each other).
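A minimal sketch of your USER_PURCHASE_GIFT_FOR_HIMSELF case with Guava's EventBus (the class and field names here are illustrative):

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

// Event object carrying the parameters.
class UserPurchaseGiftForHimselfEvent {
    final long userId;
    final double amount;
    UserPurchaseGiftForHimselfEvent(long userId, double amount) {
        this.userId = userId;
        this.amount = amount;
    }
}

// Any class can listen by annotating a method with @Subscribe
// and registering itself on the bus.
class PurchaseLogger {
    @Subscribe
    public void onPurchase(UserPurchaseGiftForHimselfEvent e) {
        System.out.println("user " + e.userId + " spent " + e.amount);
    }
}

public class EventBusDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.register(new PurchaseLogger());                         // subscribe
        bus.post(new UserPurchaseGiftForHimselfEvent(42L, 19.99));  // dispatch
    }
}

The publisher only needs a reference to the bus, not to any of the listeners, which gives you the decoupling you're after.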
I've developed an interface that allows a user to load and manipulate data. The GUI is developed in Java and all the computational work is done in the background by R, linking the two with JRI. The idea is that the user doesn't have to have any knowledge of R to use it; it's all options and buttons. However, I'd like to give the user the option to write some code if necessary. So here is my problem:
If I use the following code to start the Rengine and not let the user interact via console, everything works fine:
Rengine re=new Rengine(null, false, new TextConsole());
But if I use this:
Rengine re=new Rengine(null, true, new TextConsole());
The functionality of the GUI doesn't work. I tried using the
re.startMainLoop();
function after the data was loaded. I was able to manipulate the data from the command line in R; for example, I could make a new variable from a column of the loaded data:
newVariable<-data$column1
But yet again, I couldn't use the GUI anymore. Has anyone got any ideas or explanations as to why this is?
Thanks in advance,
Aran
Fundamentally, if the REPL is not running, R is simply used via eval calls from your code. You have control at all times, except during the actual evaluation. That is the most common use, because you can do pretty much anything that way.
The moment you enable the event loop (REPL), you have to implement the callback methods used by the loop. By design, R surrenders control only by calling the rReadConsole callback, which you have to implement. The example TextConsole works only as a demo; it uses a blocking call (readLine()) to wait, so you definitely don't want to use that in your GUI. You'll have to implement all the callbacks to cooperate with your GUI's elements (wait in rReadConsole for your GUI to wake it up from a separate thread, dispatch rWriteConsole output to your elements, etc.). You can have a look at JGR to see how it's done properly. Unless you are really building a general-purpose R GUI, I wouldn't go to that trouble ...
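As a rough sketch of that idea, assuming the RMainLoopCallbacks interface as shipped with JRI (check the exact method signatures against your JRI version's javadoc): rReadConsole parks the R thread on a queue that the GUI fills, and rWriteConsole pushes output back onto the event dispatch thread:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;
import org.rosuda.JRI.RMainLoopCallbacks;
import org.rosuda.JRI.Rengine;

// GUI-friendly console: rReadConsole blocks on a queue the Swing side
// fills, instead of blocking on stdin like the TextConsole demo.
public class GuiConsole implements RMainLoopCallbacks {
    private final BlockingQueue<String> input = new LinkedBlockingQueue<>();
    private final JTextArea output;

    public GuiConsole(JTextArea output) { this.output = output; }

    // Called from the GUI (e.g. a text field's action listener) to wake R up.
    public void submit(String line) { input.offer(line + "\n"); }

    @Override public String rReadConsole(Rengine re, String prompt, int addToHistory) {
        try {
            return input.take();  // park the R thread until the GUI submits a line
        } catch (InterruptedException e) {
            return null;          // null tells R there is no more input
        }
    }

    @Override public void rWriteConsole(Rengine re, String text, int oType) {
        SwingUtilities.invokeLater(() -> output.append(text)); // update Swing on the EDT
    }

    // Remaining callbacks left as no-ops for brevity.
    @Override public void rBusy(Rengine re, int which) {}
    @Override public void rShowMessage(Rengine re, String message) {}
    @Override public String rChooseFile(Rengine re, int newFile) { return null; }
    @Override public void rFlushConsole(Rengine re) {}
    @Override public void rLoadHistory(Rengine re, String filename) {}
    @Override public void rSaveHistory(Rengine re, String filename) {}
}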
(PS: please use stats-rosuda-devel mailing list for rJava/JRI questions - you get answers much faster)