I want to intercept all method calls for Script and any of its subtypes using MetaClasses, preferably from Java, but if needed I can do it from Groovy. I basically want to do something like this:
MetaClassImpl meta = new MetaClassImpl(Script.class) {
    @Override
    public final Object invokeMethod(Object obj, String method, Object[] args) {
        if (method.equals("evaluate")) {
            System.out.println("intercepted");
            return run(shell, (String) args[0], "Evaluate");
        } else {
            return super.invokeMethod(obj, method, args);
        }
    }
};
The above works fine if I apply it to every instance of Script that I create, but I'd like to apply it to the Script class itself and have it apply to all of its subtypes. Is there any way of doing this?
EDIT: What I'm attempting to do is replace evaluate with another method for all instances of Script and its subtypes. Either that, or get evaluate to respect the ImportCustomizer set on the GroovyShell the script is running inside of.
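To make the per-instance case concrete, here is a minimal sketch of what currently works for me (the class name, the example script source, and the simplified interceptor body are illustrative only; my real interceptor returns run(shell, ...) instead of delegating):
import groovy.lang.GroovyShell;
import groovy.lang.MetaClassImpl;
import groovy.lang.Script;

public class PerInstanceInterceptDemo {
    public static void main(String[] args) {
        GroovyShell shell = new GroovyShell();

        MetaClassImpl meta = new MetaClassImpl(Script.class) {
            @Override
            public Object invokeMethod(Object obj, String method, Object[] methodArgs) {
                if (method.equals("evaluate")) {
                    System.out.println("intercepted");
                    // here I would delegate to my own run(...) instead of falling through
                }
                return super.invokeMethod(obj, method, methodArgs);
            }
        };
        meta.initialize(); // a MetaClassImpl must be initialized before use

        Script script = shell.parse("evaluate('1 + 1')"); // example script source
        script.setMetaClass(meta);                        // only affects this one instance
        script.run();
    }
}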
I'm writing some code and I can't figure out what's going on with the bug I have. I hope someone here can give me some answers. Here is my code (the relevant part):
public class AppData implements Callable<Integer> {
    private static AppData appData = new AppData();

    private AppData() {
        System.out.println("AppData-Constructor");
    }

    public static AppData getInstance() {
        return appData;
    }

    @Override
    public Integer call() throws Exception { // your business logic goes here...
        return 0;
    }

    private boolean _validate;

    public boolean validate() {
        return _validate;
    }

    @Option(names = { "--validate" }, description = "", defaultValue = "false", hidden = false, interactive = false, paramLabel = "", required = false, type = boolean.class)
    public void set_validate(boolean validate) {
        System.out.println("Set Validate: " + validate);
        this._validate = validate;
        if (validate) {
            System.out.println("\nBeginne Programmvalidierung\n");
            Path tmp = null;
            try {
                // Doing some validation stuff
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
As you can see, my class is a singleton. The annotation is from picocli, which I use to parse the command-line arguments. The System.out calls are for debugging. This is the behaviour I can't explain:
When I start my app with e.g. "-h" as argument, I get the help printed just fine. The System.out.println calls show that the singleton is created and that set_validate() is called with the default value. But that changes when I use --validate as an argument.
For some reason, the constructor and the default set are called twice in a row. After that, set_validate() is called with true (as it should be). However, it seems that the first call sets the static instance variable, while the last call with true is made on the second instance (my theory). As a consequence, when I check the state of _validate with validate() on my singleton instance from my main method (in another class), I get false, as it wasn't set on the right instance.
I searched the code to check that:
The constructor is not called anywhere except for the static singleton instance (as expected, since it's private).
_validate is not accessed anywhere except in the code I posted.
set_validate() is not called anywhere in my code; only picocli calls it.
I don't know what to check next. Any ideas?
Regards
Thorsten
EDIT:
AppData is one of multiple classes holding the data. They are all collected in one big class for Picocli like this:
class Data {
    @AddGroup(...)
    AppData appData = AppData.getInstance();

    @AddGroup(...)
    FooData fooData = FooData.getInstance();

    @AddGroup(...)
    BarData barData = BarData.getInstance();
}
It's used like this in my main method:
Data data = new Data();
CommandLine cmd = new CommandLine(data);
cmd.parseArgs(args);
I suspect (but can only guess, since that part of the code is not shown) that AppData is either a subcommand of another command, or that the application uses picocli like this:
int exitCode = new CommandLine(AppData.class).execute(args);
In both cases, picocli will create an instance of AppData using reflection. The instance created by picocli is the instance populated from the command line values. This is a different instance than the one returned by AppData::getInstance.
One way to ensure there is only one instance is to pass the singleton instance to picocli. For example:
AppData singleton = AppData.getInstance();
int exitCode = new CommandLine(singleton).execute(args);
System.out.println("validate=" + singleton.validate());
(If AppData is a subcommand, there are other ways to access the instance that picocli created, like using the @Spec annotation to inject the picocli model and calling the CommandSpec::userObject() getter on it to get the AppData instance.)
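For example, a rough sketch of getting at that instance by navigating the model from the main method, assuming AppData is registered as a subcommand named "appdata" under some top-level command (ParentCommand and the subcommand name are assumptions, not code from the question):
import picocli.CommandLine;

public class Main {
    public static void main(String[] args) {
        CommandLine cmd = new CommandLine(new ParentCommand()); // ParentCommand: hypothetical top-level command
        cmd.parseArgs(args);

        // navigate the picocli model to the AppData instance that picocli created for the subcommand
        CommandLine appDataCmd = cmd.getSubcommands().get("appdata"); // "appdata" is an assumed subcommand name
        AppData appData = (AppData) appDataCmd.getCommandSpec().userObject();
        System.out.println("validate=" + appData.validate());
    }
}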
Now, the other question is: why does the set_validate method get invoked twice?
As of version 4.2, picocli will first call @Option-annotated methods with the default value, before parsing the command line parameters. So the set_validate method is first invoked with the default value, and then called again with the value specified on the command line.
(From version 4.3 (to be released soon), the default value will only be set if the value is not specified on the command line, so from version 4.3 the set_validate method will only be invoked once.)
I'm new to Groovy and I'm trying to use Spock to do some integration tests.
I have a Java class, FunctionalTestCase, which I have used as a test framework before.
For Spock, I'm still using this FunctionalTestCase from a Groovy class. Most things worked without problems, but I have the following scenario:
In FunctionalTestCase I have something similar to this:
protected static void doSomething(@Nullable Object nullableObject)
{
    SomeInterface<InstrumentQuotationDataImage> marketDataSnapshot = new SomeInterface<InstrumentQuotationDataImage>()
    {
        @Override
        public InstrumentQuotationDataImage getData()
        {
            InstrumentQuotationDataImage instrumentQuotationData = new InstrumentQuotationDataImageImpl();
            if (nullableObject != null)
            {
                instrumentQuotationData.setDataRecord(nullableObject);
            }
            return instrumentQuotationData;
        }
    };
    marketData.getQuotationService().pushQuotation(marketDataSnapshot);
}
This method is called once at setup with a non-null parameter, and it pushes an object into a service (this can be seen at the end of the code).
When I later call SomeInterface#getData on the object retrieved from that QuotationService, instead of nullableObject simply being null, it throws groovy.lang.MissingPropertyException.
Any idea why this happens from Groovy? Using it from Java, everything works fine and the objects are null.
The Application
I am writing an application that executes certain functions depending on user input.
E.g. if the user input were "1 2 add", the output would be "3".
I aim to implement many such functions (div, modulo, etc.). When my Scanner recognizes a function name like "add", the method add() should be called.
My Way
My way to do this is to let a FunctionHandler class evaluate the input.
Main:
String inputCommand = sc.nextCommand();
functionHandler.handle(inputCommand);
Function Handler:
public class FunctionHandler {

    public void handle(String functionName) {
        if (functionName.equals("add")) {
            add();
        } else if (functionName.equals("div")) {
            div();
        }
    }

    private void add() {
        .......
    }

    ....
}
The Problem with that
As I add more and more functions, the if statement gets very large, and so does the FunctionHandler class. Also, whenever I add a new function, I have to change code in two places: I have to define the function, and then add the else-if clause in handle() to call it. That means two pieces of information that belong together are stored completely independently of each other.
I was wondering what the best practice was to solve this kind of situation?
My Ideas
I was thinking about using enums, but they don't seem to fit well in this case.
Another idea I had was creating an interface Function, and then a class for each function that implements Function. The interface would have two methods:
getName()
execute()
Then I could create an array (manually) of Functions in the FunctionHandler, through which I could loop to see if the command the user enters matches getName().
However, having a different class for each function is not very clean either, and it also does not get rid of the problem that each new function requires changes in two places: the class and the array.
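To make that idea concrete, this is roughly what I have in mind (just a sketch; the class and method names are made up):
// each operation knows its own name
interface Function {
    String getName();
    void execute();
}

class AddFunction implements Function {
    @Override public String getName() { return "add"; }
    @Override public void execute() { /* add the two operands */ }
}

class FunctionHandler {
    // the array I would have to maintain by hand
    private final Function[] functions = { new AddFunction() /*, new DivFunction(), ... */ };

    public void handle(String functionName) {
        for (Function f : functions) {
            if (f.getName().equals(functionName)) {
                f.execute();
                return;
            }
        }
        throw new IllegalArgumentException("Unknown function: " + functionName);
    }
}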
This question is only about finding out how to solve this problem cleanly. A pointer in the right direction would be appreciated!
Thanks a lot!
Another option would be to keep a Map of handlers. If you're using Java 8, they can even be method references.
// InputType and ResultType are types you define
Map<String, Function<InputType, ResultType>> operations = new HashMap<>();
operations.put("add", MathClass::add);
// ...
ResultType result = operations.get(userInput).apply(inputObject);
One downside to doing it this way is that your input and output types must be the same for all operations.
You could create a custom annotation for the various functions. Then you could employ your array idea, but have it use reflection to discover which functions have your new annotation and what their names are.
As background, take a look at http://www.oracle.com/technetwork/articles/hunter-meta-2-098036.html and http://www.oracle.com/technetwork/articles/hunter-meta-3-092019.html. They're a bit old, but seem to address the necessary ideas.
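Something along these lines (a rough sketch; the @Op annotation and the method names are made up for illustration):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// marks a handler method and records the command name it answers to
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Op {
    String value();
}

public class AnnotatedFunctionHandler {

    @Op("add") void add() { /* ... */ }
    @Op("div") void div() { /* ... */ }

    private final Map<String, Method> registry = new HashMap<>();

    public AnnotatedFunctionHandler() {
        // discover the annotated methods once, at construction time
        for (Method m : getClass().getDeclaredMethods()) {
            Op op = m.getAnnotation(Op.class);
            if (op != null) {
                registry.put(op.value(), m);
            }
        }
    }

    public void handle(String functionName) throws Exception {
        Method m = registry.get(functionName);
        if (m == null) {
            throw new IllegalArgumentException("Unknown function " + functionName);
        }
        m.invoke(this); // may need setAccessible(true) depending on visibility
    }
}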
You can always use reflection if you want a short solution.
In your handle method you could do something like this:
Method m = this.getClass().getMethod(functionName, new Class[]{});
m.invoke(this, new Object[]{});
Assuming you do not have a lot of functions that you want to handle this way, and do not want to expose yourself to the security risks of reflection, you could use a string switch, like this:
void handleFunction(String function) {
    switch (function) {
        case "foo":
            foo();
            break;
        case "bar":
            bar();
            break;
        default:
            throw new IllegalArgumentException("Unknown function " + function);
    }
}
Starting with Java 7, you can use Strings in a switch statement, and the compiler will make something reasonable out of it.
I would do something like this:
public class FunctionTest {

    private static final Map<String, Runnable> FUNCTIONS = new HashMap<String, Runnable>() {{
        put("add", () -> System.out.println("I'm adding something!"));
        put("div", () -> System.out.println("I'm dividing something!"));
    }};

    public void handle(String functionName) {
        if (!FUNCTIONS.containsKey(functionName)) {
            throw new IllegalArgumentException("No function with this name: " + functionName);
        }
        FUNCTIONS.get(functionName).run();
    }
}
You can basically use any functional interface in place of Runnable; I used it because it matches your add() method. You map the names of the functions to their executable instances, get them by name from the Map, and execute them.
You could also create an enum with the desired executable blocks:
public class FunctionsAsEnumsTest {

    private static enum MyFunction {
        ADD {
            @Override public void execute() {
                System.out.println("I'm adding something");
            }
        },
        DIV {
            @Override public void execute() {
                System.out.println("I'm dividing something");
            }
        };

        public abstract void execute();
    }

    public void handle(String functionName) {
        // #toUpperCase() might not be the best idea,
        // you could name your enums as you would the methods.
        MyFunction fn = MyFunction.valueOf(functionName.toUpperCase());
        fn.execute();
    }
}
I'd like to see how this Java code would look in JRuby:
ParseQuery query = new ParseQuery("MyClass");
query.getInBackground(myId, new GetCallback() {
    public void done(ParseObject object, ParseException e) {
        if (e == null) {
            objectWasRetrievedSuccessfully(object);
        } else {
            objectRetrievalFailed();
        }
    }
});
The biggest part of confusion for me is the anonymous inner class. This is my best first guess:
query = ParseQuery.new("GameScore")
query.getInBackground("xWMyZ4YEGZ", Class.new(GetCallback) do
  def done(object, e)
    # ...
  end
end.new)
Update: Edited based on this: http://www.ruby-forum.com/topic/188599#823271
The syntax for expressing this in JRuby is deceptively simple. JRuby has a feature called 'closure conversion', where a closure passed to a method is converted into the appropriate Java interface. From the JRuby docs:
This not only works for event listeners or Runnable, but basically for any interface. When calling a method that expects an interface, JRuby checks if a block is passed and automatically converts the block to an object implementing the interface.
So, your code would look like:
query.in_background('xWMyZ4YEGZ') { |object, e|
  # do stuff
}
The 'calling Java from JRuby' page on the JRuby wiki is a good resource for problems like these.
I am new to JSR-223 Java Scripting; actually, I'm switching from MVEL to the standard Mozilla Rhino JS engine. I have read all the documentation but got stuck. I tried to reference some Java objects from a script via bindings, just like in the tutorial:
// my object
public class MyBean {
    public String getStringValue() { return "abc"; }
}
// initialization
ScriptEngineManager manager = new ScriptEngineManager();
ScriptEngine engine = manager.getEngineByName("JavaScript");
// add bindings
engine.put("bean", new MyBean());
// evaluate script, output is "abc"
engine.eval("print(bean.stringValue)");
The Java object is referenced from the script as the property bean. So far so good.
But I want to reference my object in the script as this; I want to use its properties and methods without any prefix, or explicitly with the this prefix. Just like this:
// add bindings
engine.put(....., new MyBean()); // or whatever ???
// evaluate scripts, all have the same output "abc"
engine.eval("print(stringValue)");
engine.eval("print(this.stringValue)");
I know that this in JavaScript has a special meaning (as in Java), but in MVEL scripting this could be done by using a custom ParserContext and a custom PropertyHandler.
Is something like this possible in Rhino?
Thanks a lot.
I tried to implement the idea from Pointy's answer (thanks again), but this workaround doesn't work for properties without the this prefix, which IMHO should be the very same thing. Instead of using Rhino 1.5 through the Java scripting API, this uses the original Rhino 1.7 from Mozilla. The test case is here:
import org.junit.Test;
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Function;
import org.mozilla.javascript.Scriptable;
import org.mozilla.javascript.ScriptableObject;
import org.mozilla.javascript.Wrapper;
public class RhinoTest2 {

    private Obj obj = new Obj();

    public class Obj {
        public String getStringValue() {
            return "abc";
        }
    }

    private Object eval(String expression) {
        Context cx = Context.enter();
        try {
            ScriptableObject scope = cx.initStandardObjects();
            // convert my "this" instance to JavaScript object
            Object jsObj = Context.javaToJS(obj, scope);
            // prepare envelope function run()
            cx.evaluateString(scope,
                String.format("function run() { %s }", expression),
                "<func>", 1, null);
            // call method run()
            Object fObj = scope.get("run", scope);
            Function f = (Function) fObj;
            Object result = f.call(cx, scope, (Scriptable) jsObj, null);
            if (result instanceof Wrapper)
                return ((Wrapper) result).unwrap();
            return result;
        } finally {
            Context.exit();
        }
    }

    @Test
    public void test() {
        // works
        eval("return this.getStringValue()");
        eval("return this.stringValue");
        // doesn't work, throws EcmaError: ReferenceError: "getStringValue" is not defined.
        eval("return getStringValue()");
        eval("return stringValue");
    }
}
Why do this.getStringValue()/this.stringValue work while getStringValue()/stringValue don't? Have I overlooked something, Pointy?
Well, in JavaScript it really only makes sense to think about this being set in the context of a function invocation. Thus I think you should be able to use the invokeMethod method on the ScriptEngine (which has to be cast to Invocable):
((Invocable) engine).invokeMethod(objectForThis, "yourFunction", arg, arg ...);
Now the "objectForThis" reference is (in my experience) generally something that was returned from a prior call to "eval()" (or "invokeMethod" I guess); in other words, it's supposed to be an object in the appropriate language for the script engine. Whether you could pass in a Java object there (and have it work out), I don't know for sure.