using eval in groovy - java

How can I use eval in groovy to evaluate the following String:
{key1=keyval, key2=[listitem1, listitem2], key3=keyval2}
All the list items and key values are Strings.
Doing Eval.me("{key1=keyval, key2=[listitem1, listitem2], key3=keyval2}") gives me the following error:
Ambiguous expression could be either a parameterless closure expression or an isolated open code block;
solution: Add an explicit closure parameter list, e.g. {it -> ...}, or force it to be treated as an open block by giving it a label, e.g. L:{...} at
I want to get a HashMap back.

Is there no way you can get the data in JSON format? Then you could just use one of the parsers mentioned here.
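For reference, if the data could be delivered as JSON, a minimal sketch using groovy.json.JsonSlurper (the JSON string below is an assumed translation of the example) would be:

import groovy.json.JsonSlurper

// assumed JSON form of the example data
def json = '{"key1": "keyval", "key2": ["listitem1", "listitem2"], "key3": "keyval2"}'
def map = new JsonSlurper().parseText(json)
assert map instanceof Map
assert map.key2 == ['listitem1', 'listitem2']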

You can parse that string by translating some of the characters, and writing your own binding to return variable names when Groovy tries to look them up, like so:
class Evaluator extends Binding {
    def parse( s ) {
        GroovyShell shell = new GroovyShell( this );
        shell.evaluate( s )
    }

    Object getVariable( String name ) { name }
}
def inStr = '{key1=keyval, key2=[listitem1, listitem2], key3=keyval2}'
def inObj = new Evaluator().parse( inStr.tr( '{}=', '[]:' ) )
println inObj
But this is a very brittle solution, and getting the data in a friendlier format (as suggested by @Stefan) is definitely the best way to go...

Related

Access Lambda Arguments with Groovy and Spock Argument Capture

I am trying to unit test a Java class with a method containing a lambda function. I am using Groovy and Spock for the test. For proprietary reasons I can't show the original code.
The Java method looks like this:
class ExampleClass {
    AsyncHandler asyncHandler;
    Component component;

    Component getComponent() {
        return component;
    }

    void exampleMethod(String input) {
        byte[] data = input.getBytes();
        getComponent().doCall(builder ->
                builder
                        .setName(name)
                        .data(data)
                        .build()).whenCompleteAsync(asyncHandler);
    }
}
Where component#doCall has the following signature:
CompletableFuture<Response> doCall(Consumer<Request> request) {
    // do some stuff
}
The groovy test looks like this:
class Spec extends Specification {

    def mockComponent = Mock(Component)

    @Subject
    def sut = new TestableExampleClass(mockComponent)

    def 'a test'() {
        when:
        sut.exampleMethod('teststring')

        then:
        1 * mockComponent.doCall(_ as Consumer<Request>) >> { args ->
            assert args[0].args$2.asUtf8String() == 'teststring'
            return new CompletableFuture()
        }
    }

    class TestableExampleClass extends ExampleClass {
        def component

        TestableExampleClass(Component component) {
            this.component = component;
        }

        @Override
        def getComponent() {
            return component
        }
    }
}
The captured argument, args, shows up as follows in the debug window if I place a breakpoint on the assert line:
args = {Arrays$ArrayList#1234} size = 1
> 0 = {Component$lambda}
> args$1 = {TestableExampleClass}
> args$2 = {bytes[]}
There are two points confusing me:
When I try to cast the captured argument args[0] as either ExampleClass or TestableExampleClass it throws a GroovyCastException. I believe this is because it is expecting Component$Lambda, but I am not sure how to cast this.
Accessing the data property using args[0].args$2, doesn't seem like a clean way to do it. This is likely linked to the casting issue mentioned above. But is there a better way to do this, such as with args[0].data?
Even if direct answers can't be given, a pointer to some documentation or article would be helpful. My search results discussed Groovy closures and Java lambdas comparisons separately, but not about using lambdas in closures.
Why you should not do what you are trying
This invasive kind of testing is a nightmare! Sorry for my strong wording, but I want to make it clear that you should not over-specify tests like this, asserting on private final fields of lambda expressions. Why would it even be important what goes into the lambda? Simply verify the result. In order to do a verification like this, you
need to know internals of how lambdas are implemented in Java,
those implementation details have to stay unchanged across Java versions and
the implementations even have to be the same across JVM types like Oracle Hotspot, OpenJ9 etc.
Otherwise, your tests break quickly. And why would you care how a method internally computes its result? A method should be tested like a black box; only in rare cases should you use interaction testing, where it is absolutely crucial to make sure that certain interactions between objects occur in a certain way (e.g. in order to verify a publish-subscribe design pattern).
How you can do it anyway (don't!!!)
Having said all that, just assuming for a minute that it does actually make sense to test like that (which it really does not!), a hint: instead of accessing the field args$2, you can also access the declared field with index 1. Accessing by name is also possible, of course. Anyway, you have to reflect on the lambda's class, get the declared field(s) you are interested in, make them accessible (remember, they are private final) and then assert on their respective contents. You could also filter by field type in order to be less sensitive to their order (not shown here).
Besides, I do not understand why you create a TestableExampleClass instead of using the original.
In this example, I am using explicit types instead of just def in order to make it easier to understand what the code does:
then:
1 * mockComponent.doCall(_ as Consumer<Request>) >> { args ->
    Consumer<Request> requestConsumer = args[0]
    Field nameField = requestConsumer.class.declaredFields[1]
    // Field nameField = requestConsumer.class.getDeclaredField('arg$2')
    nameField.accessible = true
    byte[] nameBytes = nameField.get(requestConsumer)
    assert new String(nameBytes, Charset.forName("UTF-8")) == 'teststring'
    return new CompletableFuture()
}
Or, in order to avoid the explicit assert in favour of a Spock-style condition:
def 'a test'() {
    given:
    String name

    when:
    sut.exampleMethod('teststring')

    then:
    1 * mockComponent.doCall(_ as Consumer<Request>) >> { args ->
        Consumer<Request> requestConsumer = args[0]
        Field nameField = requestConsumer.class.declaredFields[1]
        // Field nameField = requestConsumer.class.getDeclaredField('arg$2')
        nameField.accessible = true
        byte[] nameBytes = nameField.get(requestConsumer)
        name = new String(nameBytes, Charset.forName("UTF-8"))
        return new CompletableFuture()
    }
    name == 'teststring'
}

Returning value from persisted Groovy code without using java.io.File

In my end product, I provide the ability to extend the application code at runtime using small Groovy scripts that are edited via a form and whose code is persisted in the SQL database.
The scheme these "custom code" snippets follow is generally to return a value based on input parameters. For example, during the invoicing of a service, the rating system might use a published schedule of predetermined rates, or values defined in a contract in the application; through custom Groovy code, if an "overridden" value is returned, then it should be used instead.
In the logic that determines the "override" value of the rate, I've incorporated Groovy code snippets like the following, which return a value; if they return null, then the default value is used. E.g.
class GroovyRunner {
    static final GroovyClassLoader classLoader = new GroovyClassLoader()
    static final String GROOVY_CODE = MyDatabase().loadCustomCode()
    static final String GROOVY_CLASS = MyDatabase().loadCustomClassName()
    static final String TEMPDIR = System.getProperty("java.io.tmpdir")

    double getOverrideRate(Object inParameters) {
        def file = new File(TEMPDIR + GROOVY_CLASS + ".groovy")
        BufferedWriter bw = new BufferedWriter(new FileWriter(file))
        bw.write(GROOVY_CODE)
        bw.close()
        Class gvy = classLoader.parseClass(file)
        GroovyObject obj = (GroovyObject) gvy.getDeclaredConstructor().newInstance()
        return Double.valueOf(obj.invokeMethod("getRate", inParameters))
    }
}
And then, in the user-created custom groovy code:
class RateInterceptor {
    def getRate(Object inParameters) {
        def businessEntity = (SomeClass) inParameters
        return businessEntity.getDiscount() == .5 ? .5 : null
    }
}
The problem with this is that these "custom code" bits in GROOVY_CODE above are pulled from a database at runtime and contain an intricate Groovy class. Since this method will be called numerous times in succession, it is impractical to create a new File each time it is run.
Whether I use GroovyScriptEngine or GroovyClassLoader, both seem to require a java.io.File object. This makes the code execute extremely slowly, as the File has to be created after the custom Groovy code is retrieved from the database. Is there any way to run Groovy code that returns a value without creating a temporary file to execute it?
The straight solution for your case would be to use GroovyClassLoader.parseClass(String text):
http://docs.groovy-lang.org/latest/html/api/groovy/lang/GroovyClassLoader.html#parseClass(java.lang.String)
Class caching should not be a problem, because you are creating a new GroovyClassLoader each time.
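With that, the body of getOverrideRate from the question shrinks to something like this sketch (reusing GROOVY_CODE, getRate and inParameters from the question):

// parse the class straight from the String loaded from the database - no temp file involved
GroovyClassLoader classLoader = new GroovyClassLoader()
Class gvy = classLoader.parseClass(GROOVY_CODE)
GroovyObject obj = (GroovyObject) gvy.getDeclaredConstructor().newInstance()
return Double.valueOf(obj.invokeMethod("getRate", inParameters))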
However, think about using Groovy scripts instead of classes.
Your rate interceptor code could look like this:
def businessEntity = (SomeClass) context
return businessEntity.getDiscount() == .5 ? .5 : null
or even like this:
context.getDiscount() == .5 ? .5 : null
In a script you can declare functions, inner classes, etc.
So, if you need it, the following script will also work:
class RateInterceptor {
    def getRate(SomeClass businessEntity) {
        return businessEntity.getDiscount() == .5 ? .5 : null
    }
}

return new RateInterceptor().getRate(context)
The Java code to execute this kind of script:
import groovy.lang.*;
...
GroovyShell gs = new GroovyShell();
Script script = gs.parse(GROOVY_CODE);
// bind variables
Binding binding = new Binding();
binding.setVariable("context", inParams);
script.setBinding(binding);
// run script
Object ret = script.run();
Note that parsing Groovy code (whether a class or a script) is a heavy operation. If you need to speed up your code, think about caching the parsed class in some in-memory cache, or even just in a map:
Map<String, Class<groovy.lang.Script>>
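For illustration, a minimal sketch of such a cache; the CachingScriptRunner name and keying by a script name are assumptions, not part of the original answer:

import groovy.lang.Binding
import groovy.lang.GroovyShell
import groovy.lang.Script
import java.util.concurrent.ConcurrentHashMap

// Parse each script once, reuse the compiled class afterwards.
class CachingScriptRunner {
    private final GroovyShell shell = new GroovyShell()
    private final Map<String, Class<? extends Script>> cache = new ConcurrentHashMap<>()

    Object run(String name, String code, Map vars) {
        // compile only on the first call for a given name
        Class<? extends Script> cls = cache.computeIfAbsent(name) { shell.parse(code).getClass() }
        Script script = cls.getDeclaredConstructor().newInstance()
        script.binding = new Binding(vars)
        return script.run()
    }
}

// e.g. new CachingScriptRunner().run('RateInterceptor', codeFromDb, [context: inParameters])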
Another straightforward option:
GroovyShell groovyShell = new GroovyShell()

Closure groovy = { String name, String code ->
    String script = "{ Map params -> $code }"
    groovyShell.evaluate( script, name ) as Closure
}

def closure = groovy( 'SomeName', 'params.someVal.toFloat() * 2' )
def res = closure someVal:21
assert 42.0f == res

Simple Code to Copy Same Name Properties?

I have an old question that has been on my mind for a long time. When I was writing code in Spring, there was a lot of dirty and useless code for DTOs and domain objects. At the language level, I am hopeless in Java and see some light in Kotlin. Here is my question:
Style 1 It is common for us to write the following code (Java, C++, C#, ...)
// annot: AdminPresentation
val override = FieldMetadataOverride()
override.broadleafEnumeration = annot.broadleafEnumeration
override.hideEnumerationIfEmpty = annot.hideEnumerationIfEmpty
override.fieldComponentRenderer = annot.fieldComponentRenderer
Style 2 The previous code can be simplified by using T.apply() in Kotlin
override.apply {
    broadleafEnumeration = annot.broadleafEnumeration
    hideEnumerationIfEmpty = annot.hideEnumerationIfEmpty
    fieldComponentRenderer = annot.fieldComponentRenderer
}
Style 3 Can such code be simplified even further, to something like this?
override.copySameNamePropertiesFrom (annot) { // provide property list here
    broadleafEnumeration
    hideEnumerationIfEmpty
    fieldComponentRenderer
}
First Priority Requirements
Provide the property name list only once
The property names are provided as normal code, so that we get IDE auto-completion
Second Priority Requirements
It is preferable to avoid run-time cost for Style 3. (For example, reflection may be a possible implementation, but it does introduce cost.)
It is preferable to generate code like Style 1/Style 2 directly.
Not a concern
The final syntax of Style 3.
I am a novice with the Kotlin language. Is it possible to use Kotlin to define something like 'Style 3'?
It should be pretty simple to write a 5 line helper to do this which even supports copying every matching property or just a selection of properties.
Although it's probably not useful if you're writing Kotlin code and heavily utilising data classes and val (immutable properties). Check it out:
import kotlin.reflect.KMutableProperty
import kotlin.reflect.KProperty
import kotlin.reflect.full.isSupertypeOf
import kotlin.reflect.full.memberProperties

fun <T : Any, R : Any> T.copyPropsFrom(fromObject: R, vararg props: KProperty<*>) {
    // only consider mutable properties
    val mutableProps = this::class.memberProperties.filterIsInstance<KMutableProperty<*>>()
    // if a source list is provided use that, otherwise use all available properties
    val sourceProps = if (props.isEmpty()) fromObject::class.memberProperties else props.toList()
    // copy all matching properties
    mutableProps.forEach { targetProp ->
        sourceProps.find {
            // make sure properties have the same name and compatible types
            it.name == targetProp.name && targetProp.returnType.isSupertypeOf(it.returnType)
        }?.let { matchingProp ->
            targetProp.setter.call(this, matchingProp.getter.call(fromObject))
        }
    }
}
This approach uses reflection, but it uses Kotlin reflection, which is very lightweight. I haven't timed anything, but it should run at almost the same speed as copying properties by hand.
Now given 2 classes:
data class DataOne(val propA: String, val propB: String)
data class DataTwo(var propA: String = "", var propB: String = "")
You can do the following:
var data2 = DataTwo()
var data1 = DataOne("a", "b")
println("Before")
println(data1)
println(data2)
// this copies all matching properties
data2.copyPropsFrom(data1)
println("After")
println(data1)
println(data2)
data2 = DataTwo()
data1 = DataOne("a", "b")
println("Before")
println(data1)
println(data2)
// this copies only matching properties from the provided list
// with complete refactoring and completion support
data2.copyPropsFrom(data1, DataOne::propA)
println("After")
println(data1)
println(data2)
Output will be:
Before
DataOne(propA=a, propB=b)
DataTwo(propA=, propB=)
After
DataOne(propA=a, propB=b)
DataTwo(propA=a, propB=b)
Before
DataOne(propA=a, propB=b)
DataTwo(propA=, propB=)
After
DataOne(propA=a, propB=b)
DataTwo(propA=a, propB=)

Java SnakeYaml - prevent dumping reference names

I have the following method, which I use to get an object converted to its YAML representation (which I can e.g. print to the console):
@Nonnull
private String outputObject(@Nonnull final ObjectToPrint packageSchedule) {
    DumperOptions options = new DumperOptions();
    options.setAllowReadOnlyProperties(true);
    options.setPrettyFlow(true);
    return new Yaml(new Constructor(), new JodaTimeRepresenter(), options).dump(packageSchedule);
}
All is good, but for some objects contained within the ObjectToPrint structure I get something like a reference name and not the real object content, e.g.
!!com.blah.blah.ObjectToPrint
businessYears:
- businessYearMonths: 12
ppiYear: &id001 {
endDate: 30-06-2013,
endYear: 2013,
startDate: 01-07-2012,
startYear: 2012
}
ppiPeriod:
ppiYear: *id001
endDate: 27-03-2014
startDate: 21-06-2013
units: 24.000
number: 1
As you can see from the example above, I have the ppiYear object printed (marked as &id001) and the same object is used in ppiPeriod, but only the reference name is printed there, not the object content.
How do I print the object content every time I use that object within my structure, which I want converted to YAML (ObjectToPrint)?
PS. It would be nice not to print the reference name at all (&id001), but that's not crucial.
This is because you reference the same object at different places. To avoid this you need to create copies of those objects.
Yaml does not have a flag to switch this off, because you might get into endless loops in case of cyclic references.
However, you might tweak the SnakeYAML source code to ignore duplicate references:
Look at Serializer, around line 170, method serializeNode:
...
if ( this.serializedNodes.contains(node) ) {
    this.emitter.emit( new AliasEvent( ... ) );
} else {
    serializedNodes.add(node); // <== replace with myHook(serializedNodes, node);
    ...

void myHook(serializedNodes, node) {
    if ( node's class != myClass(es) to avoid ) {
        serializedNodes.add(node);
    }
}
If you find a way to keep SnakeYAML from putting nodes into the serializedNodes collection, your problem will be solved; however, your program will loop endlessly in case of cyclic references.
The best solution is to add a hook which avoids registering only the class(es) you want to be written plain.
As a way to do this without changing the SnakeYAML source code, you can define:
public class NonAnchorRepresenter extends Representer {

    public NonAnchorRepresenter() {
        this.multiRepresenters.put(Map.class, new RepresentMap() {
            public Node representData(Object data) {
                return representWithoutRecordingDescendents(data, super::representData);
            }
        });
    }

    protected Node representWithoutRecordingDescendents(Object data, Function<Object,Node> worker) {
        Map<Object,Node> representedObjectsOnEntry = new LinkedHashMap<Object,Node>(representedObjects);
        try {
            return worker.apply(data);
        } finally {
            representedObjects.clear();
            representedObjects.putAll(representedObjectsOnEntry);
        }
    }
}
and use it as e.g. new Yaml(new SafeConstructor(), new NonAnchorRepresenter());
This only does maps, and it does use anchors when it has to, i.e. where a map refers to an ancestor. It would need similar extensions for sets and lists if required. (In my case it was empty maps that were the biggest offender.)
(It would be more easily handled in the SnakeYAML codebase on Representer.representData looking at an option e.g. setAllowAnchors defaulting to true, with the logic above to reset representedObjects after recursing if disallowed. I take the point at https://stackoverflow.com/a/18419489/109079 about the possibility of endless loops but it would be straightforward using this strategy to detect any reference to a parent map and fail fast.)
Another solution I came across is to use io.circe instead of snake-yaml, if that is possible.
Scala code:
private[this] def removeAnchors(configYaml: String): String = {
  val withoutAnchorsObj = io.circe.yaml.parser.parse(configYaml).valueOr(throw _)
  val withoutAnchorString = io.circe.yaml.Printer(dropNullKeys = true, mappingStyle = Printer.FlowStyle.Block).pretty(withoutAnchorsObj)
  logger.info(s"Removed Anchors from configYaml: $configYaml, result: $withoutAnchorString")
  withoutAnchorString
}
build.sbt:
val circeVersion = "0.12.0"
libraryDependencies ++= Seq(
"io.circe" %% "circe-yaml" % circeVersion,
"io.circe" %% "circe-parser" % circeVersion,
)

Different types of Groovy execution in Java

I have 3 questions regarding using Groovy in Java. They are all related, so I'm only creating one question here.
1) There are: GroovyClassLoader, GroovyShell, GroovyScriptEngine. But what is the difference between using them?
For example for this code:
static void runWithGroovyShell() throws Exception {
    new GroovyShell().parse(new File("test.groovy")).invokeMethod("hello_world", null);
}

static void runWithGroovyClassLoader() throws Exception {
    Class scriptClass = new GroovyClassLoader().parseClass(new File("test.groovy"));
    Object scriptInstance = scriptClass.newInstance();
    scriptClass.getDeclaredMethod("hello_world", new Class[]{}).invoke(scriptInstance, new Object[]{});
}

static void runWithGroovyScriptEngine() throws Exception {
    Class scriptClass = new GroovyScriptEngine(".").loadScriptByName("test.groovy");
    Object scriptInstance = scriptClass.newInstance();
    scriptClass.getDeclaredMethod("hello_world", new Class[]{}).invoke(scriptInstance, new Object[]{});
}
2) What is the best way to load groovy script so it remains in memory in compiled form, and then I can call a function in that script when I need to.
3) How do I expose my java methods/classes to groovy script so that it can call them when needed?
Methods 2 and 3 both return the parsed class. So you can use a map to keep them in memory once they are parsed and successfully loaded.
Class scriptClass = new GroovyClassLoader().parseClass(new File("test.groovy"));
map.put("test.groovy",scriptClass);
UPDATE:
GroovyObject: link to the GroovyObject docs.
It is also possible to cast the object directly to GroovyObject, or to an interface it implements; compiled Groovy classes are otherwise indistinguishable from other Java classes.
Object aScript = clazz.newInstance();
MyInterface myObject = (MyInterface) aScript;
myObject.interfaceMethod();
//now here you can also cache the object if you want to
I can't comment on efficiency, but I guess that if you keep the loaded classes in memory, one-time parsing would not hurt much.
UPDATE For efficiency:
You should use GroovyScriptEngine; it uses script caching internally.
Here is the link: Groovy Script Engine
Otherwise you can always run some performance benchmarks yourself to get a rough idea. For example: compile Groovy scripts with all three methods in three different loops and see which performs better. Try using the same and different scripts to see whether caching kicks in.
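For illustration, a minimal sketch of running a script through GroovyScriptEngine (the script name test.groovy and the bound variable are assumptions):

import groovy.lang.Binding
import groovy.util.GroovyScriptEngine

// scripts are loaded from the given roots, compiled once and cached;
// a script is recompiled automatically if its source changes
def gse = new GroovyScriptEngine(['.'] as String[])
def binding = new Binding(name: 'world')
def result = gse.run('test.groovy', binding)   // later calls reuse the cached, compiled class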
UPDATE FOR PASSING PARAMS TO AND FROM SCRIPT
The Binding class will help you send params to and from the script.
Example Link
// setup binding
def binding = new Binding()
binding.a = 1
binding.setVariable('b', 2)
binding.c = 3
println binding.variables
// setup to capture standard out
def content = new StringWriter()
binding.out = new PrintWriter(content)
// evaluate the script
def ret = new GroovyShell(binding).evaluate('''
def c = 9
println 'a='+a
println 'b='+b
println 'c='+c
retVal = a+b+c
a=3
b=2
c=1
''')
// validate the values
assert binding.a == 3
assert binding.getVariable('b') == 2
assert binding.c == 3 // binding does NOT apply to def'd variable
assert binding.retVal == 12 // local def of c applied NOT the binding!
println 'retVal='+binding.retVal
println binding.variables
println content.toString()
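Regarding question 3 (exposing Java methods/classes to the script): the same Binding mechanism works for whole objects, so the script can call their methods directly. A minimal sketch, where MyService and its greet method are made up for illustration:

import groovy.lang.Binding
import groovy.lang.GroovyShell

// hypothetical helper class standing in for your own Java code
class MyService {
    String greet(String who) { "Hello, $who" }
}

// expose an instance to the script under the name 'service'
def binding = new Binding(service: new MyService())
def result = new GroovyShell(binding).evaluate('service.greet("Groovy")')
assert result == 'Hello, Groovy'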
