I have an old question that has been on my mind for a long time. When I was writing code in Spring, there was a lot of dirty, boilerplate code for DTOs and domain objects. At the language level I feel hopeless about Java, but I see some light in Kotlin. Here is my question:
Style 1 It is common for us to write the following code (Java, C++, C#, ...)
// annot: AdminPresentation
val override = FieldMetadataOverride()
override.broadleafEnumeration = annot.broadleafEnumeration
override.hideEnumerationIfEmpty = annot.hideEnumerationIfEmpty
override.fieldComponentRenderer = annot.fieldComponentRenderer
Style 2 The previous code can be simplified by using T.apply() in Kotlin
override.apply {
    broadleafEnumeration = annot.broadleafEnumeration
    hideEnumerationIfEmpty = annot.hideEnumerationIfEmpty
    fieldComponentRenderer = annot.fieldComponentRenderer
}
Style 3 Can such code be simplified even further, to something like this?
override.copySameNamePropertiesFrom(annot) { // provide property list here
    broadleafEnumeration
    hideEnumerationIfEmpty
    fieldComponentRenderer
}
First Priority Requirements
Provide the property name list only once.
The property names are written as normal code, so that we get IDE auto-completion.
Second Priority Requirements
It is preferable to avoid run-time cost for Style 3. (For example, reflection may be a possible implementation, but it does introduce cost.)
It is preferable to generate code like Style 1/Style 2 directly.
Not a concern
The final syntax of Style 3.
I am a novice in the Kotlin language. Is it possible to use Kotlin to define something like 'Style 3'?
It should be pretty simple to write a five-line helper that does this and even supports copying every matching property or just a selection of properties. It's probably not that useful, though, if you're writing Kotlin code and heavily utilising data classes and val (immutable properties). Check it out:
import kotlin.reflect.KMutableProperty
import kotlin.reflect.KProperty
import kotlin.reflect.full.isSupertypeOf
import kotlin.reflect.full.memberProperties

fun <T : Any, R : Any> T.copyPropsFrom(fromObject: R, vararg props: KProperty<*>) {
    // only consider mutable properties
    val mutableProps = this::class.memberProperties.filterIsInstance<KMutableProperty<*>>()
    // if a source list is provided use that, otherwise use all available properties
    val sourceProps = if (props.isEmpty()) fromObject::class.memberProperties else props.toList()
    // copy all matching properties
    mutableProps.forEach { targetProp ->
        sourceProps.find {
            // make sure the properties have the same name and compatible types
            it.name == targetProp.name && targetProp.returnType.isSupertypeOf(it.returnType)
        }?.let { matchingProp ->
            targetProp.setter.call(this, matchingProp.getter.call(fromObject))
        }
    }
}
This approach uses reflection, but it uses Kotlin reflection, which is very lightweight. I haven't timed anything, but it should run at almost the same speed as copying the properties by hand.
Now given 2 classes:
data class DataOne(val propA: String, val propB: String)
data class DataTwo(var propA: String = "", var propB: String = "")
You can do the following:
var data2 = DataTwo()
var data1 = DataOne("a", "b")
println("Before")
println(data1)
println(data2)
// this copies all matching properties
data2.copyPropsFrom(data1)
println("After")
println(data1)
println(data2)
data2 = DataTwo()
data1 = DataOne("a", "b")
println("Before")
println(data1)
println(data2)
// this copies only matching properties from the provided list
// with complete refactoring and completion support
data2.copyPropsFrom(data1, DataOne::propA)
println("After")
println(data1)
println(data2)
Output will be:
Before
DataOne(propA=a, propB=b)
DataTwo(propA=, propB=)
After
DataOne(propA=a, propB=b)
DataTwo(propA=a, propB=b)
Before
DataOne(propA=a, propB=b)
DataTwo(propA=, propB=)
After
DataOne(propA=a, propB=b)
DataTwo(propA=a, propB=)
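If the second-priority requirement about run-time cost matters for your use case, a rough micro-benchmark sketch like the one below can give a ballpark comparison. It is not part of the original answer; it reuses the copyPropsFrom helper and the DataOne/DataTwo classes above with an arbitrary iteration count, and the numbers are indicative only since JIT warm-up and the JVM in use change them considerably.
import kotlin.system.measureNanoTime

fun main() {
    val source = DataOne("a", "b")
    val target = DataTwo()
    val iterations = 100_000

    // reflection-based copy via the helper defined above
    val reflectionNanos = measureNanoTime {
        repeat(iterations) { target.copyPropsFrom(source) }
    }
    // hand-written copy for comparison
    val manualNanos = measureNanoTime {
        repeat(iterations) {
            target.propA = source.propA
            target.propB = source.propB
        }
    }
    println("reflection: ${reflectionNanos / iterations} ns/copy, manual: ${manualNanos / iterations} ns/copy")
}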
Related
Problem: I'm using the AWS DynamoDB Enhanced Java client v2 in Kotlin (the details of the client might not be important), where I want to make a BatchWriteItemEnhancedRequest containing a bunch of requests. The standard usage according to the docs is:
val batchWriteItemEnhancedRequest = BatchWriteItemEnhancedRequest.builder()
    .addWriteBatch(
        WriteBatch.builder(MyClass::class.java)
            .mappedTableResource(myMappedTable)
            .addPutItem(putRequest1)
            .addPutItem(putRequest2)
            .addPutItem(putRequest3)
            .build()
    )
    .build()
Obviously this can't handle an anonymous list of putRequest items, only some known discrete values like putRequest1 and putRequest2.
The builder for WriteBatch doesn't seem to allow adding lists of requests: doc
The only way to do this, as I can see, is to make a for-loop that goes:
var writeBatchBuilder = WriteBatch.builder(MyClass::class.java)
    .mappedTableResource(myMappedTable)
for (request in putItemRequests) {
    writeBatchBuilder = writeBatchBuilder.addPutItem(request)
}
val writeBatch = writeBatchBuilder.build()
Which seems horrible to me. There's got to be a more idiomatic way to do this right?
Given an instance of WriteBatch.Builder, there's no explicit need to chain the DSL, so your writeBatchBuilder can be a val. Each builder function mutates the internal state of the WriteBatch.Builder instance, and the returned value is just for convenience - source code.
You can therefore simplify your for-loop and make writeBatchBuilder a val:
val writeBatchBuilder = WriteBatch.builder(MyClass::class.java)
    .mappedTableResource(myMappedTable)
for (request in putItemRequests) {
    writeBatchBuilder.addPutItem(request)
}
val writeBatch = writeBatchBuilder.build()
Scope function
Going one step further, scope functions are a nice way of converting Java DSLs into something more Kotlin-esque.
(For a more visual guide to scope functions, and Kotlin in general, check out https://typealias.com/start/kotlin-scopes-and-scope-functions/)
apply {} will turn the created WriteBatch.Builder into the receiver, meaning you don't need to have a variable for the builder, you can just use this.
val writeBatch = WriteBatch.builder(MyClass::class.java).apply {
    mappedTableResource(myMappedTable)
    putItemRequests.forEach { request ->
        addPutItem(request)
    }
}.build()
(I also changed the for loop into a forEach {} - which is more Kotlin-ey.)
Custom helper function
The next improvement is to encapsulate the apply {} logic inside a helper function, buildWriteBatch(), so the boilerplate is hidden away, which makes reusing the logic easier.
Using inline and reified T means there's no need to write ::class.java every time - instead the class is supplied as a type parameter, and it can be omitted entirely when Kotlin can infer the type.
Note that the receiver of the builder arg is WriteBatch.Builder, just like how the apply {} scope function changes the receiver to be the builder.
inline fun <reified T> buildWriteBatch(
    builder: WriteBatch.Builder<T>.() -> Unit
): WriteBatch<T> {
    return WriteBatch.builder(T::class.java).apply(builder).build()
}
Example usage:
val writeBatch = buildWriteBatch<MyClass> {
    mappedTableResource(myMappedTable)
    putItemRequests.forEach { request ->
        addPutItem(request)
    }
}
// no need for explicit typing on buildWriteBatch - Kotlin can infer the type
val writeBatch2: WriteBatch<MyClass> = buildWriteBatch {}
(For examples of this approach in Kotlin stdlib, see the collection builder functions)
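As a small illustration of that parallel (not from the original answer; plain Kotlin stdlib), buildList has exactly the same shape as buildWriteBatch above: the lambda's receiver is the mutable builder, and a finished read-only list is returned.
// stdlib builder: MutableList<Int> is the receiver inside the lambda, List<Int> is returned
val numbers: List<Int> = buildList {
    add(1)
    addAll(listOf(2, 3))
}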
In this case an extension function may come in handy. Let's assume that WriteBatch.builder(MyClass::class.java) has type WriteBatch.Builder, possibly with some generic parameters, which is hard to pin down in an example.
val writeBatch = WriteBatch.builder(MyClass::class.java)
    .mappedTableResource(myMappedTable)
    .addItems(putItemRequests)
    .build()
// The item type here is the builder's type parameter; change it according to your needs
// (e.g. to PutItemEnhancedRequest<T> if you are adding request objects rather than items)
fun <T> WriteBatch.Builder<T>.addItems(items: List<T>): WriteBatch.Builder<T> {
    items.forEach { item -> addPutItem(item) }
    // in case addPutItem(item) returns a new builder instance, just store it in a variable and return it at the end
    return this
}
I am trying to unit test a Java class with a method containing a lambda function. I am using Groovy and Spock for the test. For proprietary reasons I can't show the original code.
The Java method looks like this:
class ExampleClass {
    AsyncHandler asyncHandler;
    Component component;
    String name; // assumed field; the original proprietary code is not shown

    Component getComponent() {
        return component;
    }

    void exampleMethod(String input) {
        byte[] data = input.getBytes();
        getComponent().doCall(builder ->
                builder
                    .setName(name)
                    .data(data)
                    .build()).whenCompleteAsync(asyncHandler);
    }
}
Where component#doCall has the following signature:
CompletableFuture<Response> doCall(Consumer<Request> request) {
    // do some stuff
}
The groovy test looks like this:
class Spec extends Specification {
    def mockComponent = Mock(Component)

    @Subject
    def sut = new TestableExampleClass(mockComponent)

    def 'a test'() {
        when:
        sut.exampleMethod('teststring')

        then:
        1 * mockComponent.doCall(_ as Consumer<Request>) >> { args ->
            assert args[0].args$2.asUtf8String() == 'teststring'
            return new CompletableFuture()
        }
    }

    class TestableExampleClass extends ExampleClass {
        def component

        TestableExampleClass(Component component) {
            this.component = component
        }

        @Override
        getComponent() {
            return component
        }
    }
}
The captured argument, args, shows up as follows in the debug window if I place a breakpoint on the assert line:
args = {Arrays$ArrayList#1234} size = 1
> 0 = {Component$lambda}
> args$1 = {TestableExampleClass}
> args$2 = {bytes[]}
There are two points confusing me:
When I try to cast the captured argument args[0] to either ExampleClass or TestableExampleClass, it throws a GroovyCastException. I believe this is because it is expecting Component$Lambda, but I am not sure how to cast this.
Accessing the data property using args[0].args$2 doesn't seem like a clean way to do it. This is likely linked to the casting issue mentioned above. But is there a better way to do this, such as with args[0].data?
Even if direct answers can't be given, a pointer to some documentation or an article would be helpful. My search results discussed Groovy closures and Java lambdas separately, but said nothing about handling Java lambdas inside Groovy closures.
Why you should not do what you are trying
This invasive kind of testing is a nightmare! Sorry for my strong wording, but I want to make it clear that you should not over-specify tests like this, asserting on private final fields of lambda expressions. Why would it even be important what goes into the lambda? Simply verify the result. In order to do a verification like this, you
need to know the internals of how lambdas are implemented in Java,
those implementation details have to stay unchanged across Java versions, and
the implementations even have to be the same across JVM types like Oracle HotSpot, OpenJ9 etc.
Otherwise, your tests break quickly. And why would you care how a method internally computes its result? A method should be tested like a black box; only in rare cases should you use interaction testing, where it is absolutely crucial in order to make sure that certain interactions between objects occur in a certain way (e.g. in order to verify a publish-subscribe design pattern).
How you can do it anyway (don't!!!)
Having said all that, and assuming for a minute that it actually makes sense to test like this (which it really does not!), a hint: instead of accessing the field args$2, you can also access the declared field with index 1. Accessing by name is also possible, of course. Either way, you have to reflect on the lambda's class, get the declared field(s) you are interested in, make them accessible (remember, they are private final) and then assert on their respective contents. You could also filter by field type in order to be less sensitive to their order (not shown here).
Besides, I do not understand why you create a TestableExampleClass instead of using the original.
In this example, I am using explicit types instead of just def in order to make it easier to understand what the code does:
then:
1 * mockComponent.doCall(_ as Consumer<Request>) >> { args ->
    Consumer<Request> requestConsumer = args[0]
    Field nameField = requestConsumer.class.declaredFields[1]
    // Field nameField = requestConsumer.class.getDeclaredField('arg$2')
    nameField.accessible = true
    byte[] nameBytes = nameField.get(requestConsumer)
    assert new String(nameBytes, Charset.forName("UTF-8")) == 'teststring'
    return new CompletableFuture()
}
Or, in order to avoid the explicit assert in favour of a Spock-style condition:
def 'a test'() {
    given:
    String name

    when:
    sut.exampleMethod('teststring')

    then:
    1 * mockComponent.doCall(_ as Consumer<Request>) >> { args ->
        Consumer<Request> requestConsumer = args[0]
        Field nameField = requestConsumer.class.declaredFields[1]
        // Field nameField = requestConsumer.class.getDeclaredField('arg$2')
        nameField.accessible = true
        byte[] nameBytes = nameField.get(requestConsumer)
        name = new String(nameBytes, Charset.forName("UTF-8"))
        return new CompletableFuture()
    }
    name == 'teststring'
}
In my end product, I provide the ability to extend the application code at runtime using small Groovy scripts that are edited via a form and whose code is persisted in the SQL database.
The scheme these "custom code" snippets follow is generally to return a value based on input parameters. For example, during the invoicing of a service, the rating system might use a published schedule of predetermined rates, or values defined in a contract in the application; if the custom Groovy code returns an "overridden" value, then that value should be used.
In the logic that determines the "override" value of the rate, I've incorporated Groovy code snippets like the ones below that return a value; if they return null, then the default value is used. E.g.
class GroovyRunner {
    static final GroovyClassLoader classLoader = new GroovyClassLoader()
    static final String GROOVY_CODE = MyDatabase().loadCustomCode()
    static final String GROOVY_CLASS = MyDatabase().loadCustomClassName()
    static final String TEMPDIR = System.getProperty("java.io.tmpdir")

    double getOverrideRate(Object inParameters) {
        def file = new File(TEMPDIR + GROOVY_CLASS + ".groovy")
        BufferedWriter bw = new BufferedWriter(new FileWriter(file))
        bw.write(GROOVY_CODE)
        bw.close()
        Class gvy = classLoader.parseClass(file)
        GroovyObject obj = (GroovyObject) gvy.getDeclaredConstructor().newInstance()
        return Double.valueOf(obj.invokeMethod("getRate", inParameters))
    }
}
And then, in the user-created custom groovy code:
class RateInterceptor {
    def getRate(Object inParameters) {
        def businessEntity = (SomeClass) inParameters
        return businessEntity.getDiscount() == .5 ? .5 : null
    }
}
The problem with this is that these "custom code" bits in GROOVY_CODE above are pulled from a database at runtime and contain an intricate Groovy class. Since this method will be called numerous times in succession, it is impractical to create a new File object on each run.
Whether I use GroovyScriptEngine or GroovyClassLoader, both require a java.io.File object. This makes the code execute extremely slowly, as the File has to be created after the custom Groovy code is retrieved from the database. Is there any way to run Groovy code that returns a value without creating a temporary file to execute it?
The straightforward solution for your case would be to use GroovyClassLoader.parseClass(String text):
http://docs.groovy-lang.org/latest/html/api/groovy/lang/GroovyClassLoader.html#parseClass(java.lang.String)
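A minimal sketch of that suggestion, shown here in Kotlin and mirroring the question's GroovyRunner (the function and parameter names are illustrative, not from the original code):
import groovy.lang.GroovyClassLoader
import groovy.lang.GroovyObject

val classLoader = GroovyClassLoader()

fun getOverrideRate(groovyCode: String, inParameters: Any?): Double? {
    // parse the code string directly - no temporary java.io.File needed
    val gvy: Class<*> = classLoader.parseClass(groovyCode)
    val obj = gvy.getDeclaredConstructor().newInstance() as GroovyObject
    // null means "no override", mirroring the RateInterceptor contract above
    return (obj.invokeMethod("getRate", inParameters) as? Number)?.toDouble()
}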
Class caching should not be a problem, because you are creating a new GroovyClassLoader each time.
However, think about using Groovy scripts instead of classes.
Your rate interceptor code could then look like this:
def businessEntity = (SomeClass) context
return businessEntity.getDiscount() == .5 ? .5 : null
or even like this:
context.getDiscount() == .5 ? .5 : null
In a script you can declare functions, inner classes, etc.
So, if you need that, the following script will also work:
class RateInterceptor {
    def getRate(SomeClass businessEntity) {
        return businessEntity.getDiscount() == .5 ? .5 : null
    }
}
return new RateInterceptor().getRate(context)
The Java code to execute these kinds of scripts:
import groovy.lang.*;
...
GroovyShell gs = new GroovyShell();
Script script = gs.parse(GROOVY_CODE);
// bind variables
Binding binding = new Binding();
binding.setVariable("context", inParams);
script.setBinding(binding);
// run script
Object ret = script.run();
Note that parsing Groovy code (a class or a script) is a heavy operation. If you need to speed up your code, think about caching the parsed class in some in-memory cache, or even in a map:
Map<String, Class<groovy.lang.Script>>
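As a rough sketch of that caching idea (written in Kotlin here; the ScriptCache name, the cache key, and the "context" binding are illustrative assumptions, not part of the original answer):
import groovy.lang.Binding
import groovy.lang.GroovyShell
import groovy.lang.Script
import java.util.concurrent.ConcurrentHashMap

object ScriptCache {
    private val shell = GroovyShell()
    // parse each distinct code string once, then reuse the compiled Script class
    private val cache = ConcurrentHashMap<String, Class<out Script>>()

    fun run(code: String, context: Any?): Any? {
        val scriptClass = cache.computeIfAbsent(code) { shell.parse(it).javaClass }
        val script = scriptClass.getDeclaredConstructor().newInstance()
        val binding = Binding()
        binding.setVariable("context", context)
        script.binding = binding
        return script.run()
    }
}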
Also straightforward would be:
GroovyShell groovyShell = new GroovyShell()

Closure groovy = { String name, String code ->
    String script = "{ Map params -> $code }"
    groovyShell.evaluate( script, name ) as Closure
}
def closure = groovy( 'SomeName', 'params.someVal.toFloat() * 2' )
def res = closure someVal:21
assert 42.0f == res
Using the following code, I can set a couple of variables to my matches. I want to do the same thing, but populate a map with all instances of these results. I'm struggling and could use help.
val (dice, level) = Regex("""([0-9]*d[0-9]*) at ([0-9]*)""").matchEntire(text)?.destructured!!
This code works for one instance, none of my attempts at matching multiple are working.
Your solution is short and readable. Here are a few options; which one you use is largely a matter of preference. You can get a Map directly by using the associate method as follows.
val diceLevels = levelMatches.associate { matched ->
    val (diceTwo, levelTwo) = matched.destructured
    (levelTwo to diceTwo)
}
Note: This creates an immutable map. If you want a MutableMap, you can use associateTo.
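For instance, assuming a pre-declared destination map (the variable names below simply mirror the ones used in this question), a sketch with associateTo looks like this:
// associateTo writes into an existing MutableMap instead of building a new one
val diceLevels = mutableMapOf<String, String>()
levelMatches.associateTo(diceLevels) { matched ->
    val (dice, level) = matched.destructured
    level to dice
}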
If you want to be concise, you can simplify out the destructuring to local variables and index the groups directly.
val diceLevels = levelMatches.associate {
    (it.groupValues[2] to it.groupValues[1])
}
Or, using let, you can also avoid needing to declare levelMatches as a local variable if it isn't used elsewhere --
val diceLevels = Regex("([0-9]+d[0-9]+) at ([0-9]+)")
    .findAll(text)
    .let { levelMatches ->
        levelMatches.associate {
            (it.groupValues[2] to it.groupValues[1])
        }
    }
I realized this was nowhere near as complicated as I was making it. Here was my solution. Is there something more elegant?
val diceLevels = mutableMapOf<String, String>() // destination map, declared up front
val levelMatches = Regex("([0-9]+d[0-9]+) at ([0-9]+)").findAll(text)
levelMatches.forEach { matched ->
    val (diceTwo, levelTwo) = matched.destructured
    diceLevels[levelTwo] = diceTwo
}
What are the most common mistakes that Java developers make when migrating to Scala?
By mistakes I mean writing code that does not conform to the Scala spirit, for example using loops when map-like functions are more appropriate, excessive use of exceptions, etc.
EDIT: one more is writing your own getters/setters instead of using the accessors kindly generated by Scala
It's quite simple: a Java programmer will tend to write imperative-style code, whereas a more Scala-like approach would involve a functional style.
That is what Bill Venners illustrated back in December 2008 in his post "How Scala Changed My Programming Style".
That is why there is a collection of articles about "Scala for Java Refugees".
That is how some of the SO questions about Scala are formulated: "help rewriting in functional style".
One obvious one is not taking advantage of the nested scoping that Scala allows, plus the delaying of side effects (or realising that everything in Scala is an expression):
public InputStream foo(int i) throws IOException {
    final String s = String.valueOf(i);
    boolean b = s.length() > 3;
    File dir;
    if (b) {
        dir = new File("C:/tmp");
    } else {
        dir = new File("/tmp");
    }
    if (!dir.exists()) dir.mkdirs();
    return new FileInputStream(new File(dir, "hello.txt"));
}
Could be converted as:
def foo(i : Int) : InputStream = {
  val s = i.toString
  val b = s.length > 3
  val dir =
    if (b) {
      new File("C:/tmp")
    } else {
      new File("/tmp")
    }
  if (!dir.exists) dir.mkdirs()
  new FileInputStream(new File(dir, "hello.txt"))
}
But this can be improved upon a lot. It could be:
def foo(i : Int) = {
  def dir = {
    def ensuring(d : File) = { if (!d.exists) require(d.mkdirs); d }
    def b = {
      def s = i.toString
      s.length > 3
    }
    ensuring(new File(if (b) "C:/tmp" else "/tmp"));
  }
  new FileInputStream(new File(dir, "hello.txt"))
}
The latter example does not "export" any variable beyond the scope which it is needed. In fact, it does not declare any variables at all. This means it is easier to refactor later. Of course, this approach does lead to hugely bloated class files!
A couple of my favourites:
It took me a while to realise how truly useful Option is. A common mistake carried over from Java is to use null to represent a field/variable that sometimes does not have a value. Recognise that you can use 'map' and 'foreach' on Option to write safer code.
Learn how to use 'map', 'foreach', 'dropWhile', 'foldLeft', ... and the other handy methods on Scala collections to save writing the kind of loop constructions you see everywhere in Java, which I now perceive as verbose, clumsy, and harder to read.
A common mistake is to go wild and overuse a feature not present in Java once you "grokked" it. E.g. newbies tend to overuse pattern matching(*), explicit recursion, implicit conversions, (pseudo-) operator overloading and so on. Another mistake is to misuse features that look superficially similar in Java (but ain't), like for-comprehensions or even if (which works more like Java's ternary operator ?:).
(*) There is a great cheat sheet for pattern matching on Option: http://blog.tmorris.net/scalaoption-cheat-sheet/
I haven't adopted lazy vals and streams so far.
In the beginning, a common error (which the compiler finds) is to forget the semicolon in a for:
for (a <- al;
     b <- bl
     if (a < b)) // ...
and where to place the yield:
for (a <- al) yield {
  val x = foo (a).map (b).filter (c)
  if (x.cond ()) 9 else 14
}
instead of
for (a <- al) {
  val x = foo (a).map (b).filter (c)
  if (x.cond ()) yield 9 else yield 14 // why don't ya yield!
}
and forgetting the equals sign for a method:
def yoyo (aka : Aka) : Zirp { // ups!
  aka.floskel ("foo")
}
Using if statements. You can usually refactor the code to use if expressions or filter.
Using too many vars instead of vals.
Instead of loops, as others have said, use the collection functions like map, filter, foldLeft, etc. If the one you need isn't available (look carefully, there should be something you can use), use tail recursion.
Instead of setters, I keep the spirit of functional programming and have my objects immutable. So instead I do something like this where I return a new object:
class MyClass(val x: Int) {
  def setX(newx: Int) = new MyClass(newx)
}
I try to work with lists as much as possible. Also, to generate lists, instead of using a loop, use the for/yield expressions.
Using Arrays.
This is basic stuff and easily spotted and fixed, but will slow you down initially when it bites your ass.
Scala has an Array object, while in Java arrays are a built-in language construct. This means that initialising an array and accessing its elements in Scala are actually method calls:
//Java
//Initialise
String [] javaArr = {"a", "b"};
//Access
String blah1 = javaArr[1]; //blah1 contains "b"
//Scala
//Initialise
val scalaArr = Array("c", "d") //Note this is a method call against the Array Singleton
//Access
val blah2 = scalaArr(1) //blah2 contains "d"