Java assertions - second argument evaluation?

I'm curious about the performance of Java assertions.
In particular, with 2-argument Java assertions, like this one:
assert condition : expression
Does "expression" get evaluated even if "condition" evaluates to true? I'm curious if it's a feasible solution for cases when we want super lightweight "condition", but "expression" can be too heavy (e.g. String concatenation). Solutions like e.g. Preconditions in Guava will evaluate the expression, making it infeasible for such cases.
My tests suggest it's lazily evaluated, but I couldn't find any reference proving it.
Thanks in advance!

You can always refer to the Java Language Specification (JLS): http://docs.oracle.com/javase/specs/jls/se7/html/jls-14.html#jls-14.10
In short, you will find there that the second expression is evaluated only when the first one evaluates to false, i.e. it is lazily evaluated.
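For example, here is a minimal self-contained check (class and method names are mine, not from the question; run with assertions enabled, i.e. java -ea AssertDemo). The counter only increments when the condition is false, which shows the detail expression is skipped for passing assertions:

public class AssertDemo {
    private static int evaluations = 0;

    private static String expensiveMessage() {
        evaluations++;
        return "details built at " + System.nanoTime();
    }

    public static void main(String[] args) {
        assert 1 + 1 == 2 : expensiveMessage();   // passes: message never built
        System.out.println("after passing assert: " + evaluations); // prints 0

        try {
            assert 1 + 1 == 3 : expensiveMessage(); // fails: message is built now
        } catch (AssertionError expected) {
            System.out.println("after failing assert: " + evaluations); // prints 1
        }
    }
}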
I highly recommend you bookmark the JLS. The table of contents is here: http://docs.oracle.com/javase/specs/jls/se7/html/index.html. The VM Spec might also be useful (you can find it here: http://docs.oracle.com/javase/specs/jvms/se7/html/).
Also, please note that assertions are in no way a substitute for things like Guava's Preconditions (or if statements testing for preconditions). They have a similar, but not identical, purpose. For instance, assertions can be disabled at runtime!

Related

Providing common implementation for Java expression evaluators

Currently, we are using the JEXL expression evaluator in our microservice. However, we would like the flexibility to change to some other expression evaluator in the future. The issue is that expression evaluators from different vendors do not inherit from a common interface. Thus, different expression evaluators can have different methods as well as different method signatures, even though under the hood they provide similar functionality. How can I approach this problem in a way that avoids changing the entire code base if we do decide to change the expression evaluator in the future? It would be helpful if the answer includes sample code implementing this scenario.
Well, @Admx mentions the adapter pattern in the comments, and normally in situations like this that would be a good idea. It boils down to three steps:
Write an interface that defines what you need an expression evaluator to do;
Implement that interface using JEXL;
Encapsulate creation of evaluators in a factory so that you can swap implementations whenever you want.
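A minimal sketch of those three steps might look like this (assuming the JEXL 3.x API; the interface and class names here are illustrative, not an established convention):

import org.apache.commons.jexl3.JexlBuilder;
import org.apache.commons.jexl3.JexlEngine;
import org.apache.commons.jexl3.JexlExpression;
import org.apache.commons.jexl3.MapContext;

import java.util.Map;

// Step 1: the interface the rest of your code depends on.
interface ExpressionEvaluator {
    Object evaluate(String expression, Map<String, Object> variables);
}

// Step 2: an implementation backed by JEXL.
class JexlExpressionEvaluator implements ExpressionEvaluator {
    private final JexlEngine engine = new JexlBuilder().create();

    @Override
    public Object evaluate(String expression, Map<String, Object> variables) {
        JexlExpression parsed = engine.createExpression(expression);
        return parsed.evaluate(new MapContext(variables));
    }
}

// Step 3: a factory so that nothing else ever names JEXL directly.
final class ExpressionEvaluators {
    private ExpressionEvaluators() {}

    static ExpressionEvaluator defaultEvaluator() {
        return new JexlExpressionEvaluator();
    }
}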
In this case, there is a problem with that idea: The expression language is part of the interface.
It's quite unlikely that any other 3rd party expression evaluator is going to implement the same expression language. That means that if you swap out JEXL for something else, then you'll have to edit all the expressions you need it to evaluate.
Is that really a direction you expect your project to evolve in? Probably not.
Since you can't really use JEXL without tying your product to JEXL, you might just want to incorporate JEXL source into your own code base, and maintain it from there as if it was your own. It's Apache-licensed and not too big, so that is certainly doable. You don't need to do this now, but it's an option you can turn to if JEXL ends up causing you trouble.

Specify rule in grammar for embedded DSL, if compiler can't verify it?

Is it common to define a rule in a grammar for an embedded DSL, even if the compiler can not validate the correctness of the given code? The rule I'm talking about is one that applies at runtime.
Here's an example:
I have a function that reads arbitrary classes and searches them for methods marked with a specific annotation. In addition the methods must have a boolean return type. I haven't found a way to define the annotation class to be only valid on methods with specific return types, so I check it at runtime, and raise an error if the method does not return boolean.
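In case it helps to see what I mean, the runtime check is roughly like this (the @Rule annotation and the scanner class are made-up names for illustration, not the actual tool):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Rule {}

class RuleScanner {
    // Rejects any annotated method whose return type is not boolean.
    static void validate(Class<?> clazz) {
        for (Method method : clazz.getDeclaredMethods()) {
            if (method.isAnnotationPresent(Rule.class)
                    && method.getReturnType() != boolean.class) {
                throw new IllegalStateException(
                        "@Rule method " + method + " must return boolean");
            }
        }
    }
}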
Now I want to specify a grammar for the internal/embedded DSL given by the tool. So basically a class with an annotated method with return type int is not valid.
So should the grammar contain a rule, forbidding other return types than boolean, or not?
Papers and/or articles on the topic would be helpful, too.
Thanks in advance.
Is it common to define a rule in a grammar for an embedded DSL, even if the compiler can not validate the correctness of the given code? The rule I'm talking about is one that applies at runtime.
I think you're referring to "the compiler" for code written in your DSL -- i.e. a program that evaluates the DSL code and transforms it to some other representation -- as opposed to a Java compiler with which you are building that program. In that case, this is a question of semantics, in more ways than one.
In the first place, if a compiler for your DSL cannot validate your rule, then by definition it is not a grammar rule. In that sense, the answer to your question is trivially "no" -- not only is what you describe not common, it doesn't even make sense.
On the other hand, you seem to be describing a semantic rule of your language, and there's nothing at all wrong or uncommon with a language having such rules. Most languages of any complexity do. I do not speak to your specific example, however, because it seems largely a matter of opinion, which is off-topic here.
It is common for language specifications to contain text (with varying degrees of formality) which constrains correct programs in ways that go beyond the possibilities of syntactic analysis. You don't have to look very far for examples; the C, C++ and ECMAScript standards are full of them.
But if you cannot verify a constraint at compile-time, it is clearly impossible to include the constraint in the grammar. Even constraints which can theoretically be detected at compile-time can be difficult to include in a formal context-free grammar (requiring variable declaration before use, for example, or more generally insisting on correct typing). Other formalisms exist, but they are not really grammatical.

What are the key differences between Java 8's Optional, Scala's Option and Haskell's Maybe?

I've read a few posts on Java 8's upcoming Optional type, and I'm trying to understand why people keep suggesting it's not as powerful as Scala's Option. As far as I can tell it has:
Higher order functions like map and filter using Java 8 lambdas.
Monadic flatMap
Short circuiting through getOrElse type functions.
What am I missing?
Some possibilities come to mind (OTOH, I haven't seen people actually saying that, so they might mean something else):
No pattern matching.
No equivalent to Scala's fold or Haskell's fromMaybe: you have to do optional.map(...).orElseGet(...) instead.
No monadic syntax.
I also wouldn't call any of these "less powerful" myself, since you can express everything you can with the corresponding Scala/Haskell types; these are all conciseness/usability concerns.
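For instance, the fold-style workaround from the second point looks like this in Java 8 (a small illustrative snippet, not taken from the question):

import java.util.Optional;

public class OptionalFoldDemo {
    public static void main(String[] args) {
        Optional<String> maybeName = Optional.of("ada");

        // The map(...).orElseGet(...) chain mentioned above: transform the
        // present value, then supply a default for the empty case.
        String greeting = maybeName
                .map(name -> "Hello, " + name)
                .orElseGet(() -> "Hello, stranger");

        System.out.println(greeting); // Hello, ada
    }
}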
Optional and Maybe are effectively in correspondence. Scala has None and Some[A] as subclasses of Option[A], which might be more directly comparable to Java, since Java could have done the same.
Most other differences either have to do with the ease of handling Maybe/Option in Haskell/Scala, which won't translate since Java is less expressive as a language, or with the consistency of use of Maybe/Option in Haskell/Scala, where many of the guarantees and conveniences afforded by the type only kick in once most libraries have agreed to use optional types instead of nulls or exceptions.
For most purposes, they are equivalent; the main difference is that the Scala one is well-integrated into Scala, while the Java one is well-integrated into Java.
The biggest difference in my mind is that Java's is a value-based class. That's something new to the JVM. At the moment there's no real difference between value-based and regular classes, but the distinction paves the way for JVM runtimes to eliminate Java object overhead. In other words, a future JVM could rewrite Optional code as a directive for how to handle nulls, rather than allocating memory for Optional objects.
Scala does something similar with value classes, though it's done by unboxing types in the compiler rather than in the JVM, and its usage is limited. (Option isn't a value class.)

Drools dialect difference between implicit and explicit conjunction (AND)

In the drools dialect, conjunctions are implicit. For example:
rule "new drool"
when
Fact(id=="fact1")
Fact(id=="fact2")
then
end
The above requires there be two Fact objects. One must have an id of "fact1", the other must have an id of "fact2".
However, the AND operator does exist. You could write the same drools as follows:
rule "new drool"
when
Fact(id=="fact1") AND
Fact(id=="fact2")
then
end
I was under the impression that there is absolutely no logical or practical difference between these two expressions. However, I have a user telling me he is experiencing different behavior when he uses the explicit conjunction vs the implicit one. I am skeptical, but I haven't been able to find any documentation to support my position. Does anyone know whether an implicit vs explicit conjunction in drools could see different behavior?
The AND is implicit between the two Conditional Elements, so if the user is experiencing different behaviours, there is likely a bug. If you can manage to reproduce the different behaviours in a test case, please open a Jira for it.
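If it helps, a reproduction attempt would be roughly along these lines (a sketch assuming the Drools 6+ KIE API and a kmodule.xml exposing a session named "testSession"; the Fact class here is a minimal stand-in matching the patterns in the question):

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class ConjunctionTest {
    public static void main(String[] args) {
        KieServices services = KieServices.Factory.get();
        KieContainer container = services.getKieClasspathContainer();
        KieSession session = container.newKieSession("testSession");
        try {
            // Insert one fact matching each pattern, then fire the rules.
            // Run once with the implicit-AND rule and once with the explicit
            // AND rule: the number of activations should be identical.
            session.insert(new Fact("fact1"));
            session.insert(new Fact("fact2"));
            int fired = session.fireAllRules();
            System.out.println("rules fired: " + fired);
        } finally {
            session.dispose();
        }
    }

    // Minimal fact type with the getter the rules constrain on.
    public static class Fact {
        private final String id;
        public Fact(String id) { this.id = id; }
        public String getId() { return id; }
    }
}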

OGNL Expression Parsing vs Compilation

In OGNL, it is recommended to parse expressions that are reused in order to improve performance.
When consulting the API, I also noticed that there is a compileExpression method.
After searching thoroughly for information on compilation vs parsing, the only article I could find is part of the Struts documentation; it describes how to compile an expression, but not what that does compared to parsing.
Under what conditions should you use compilation instead of parsing alone, and are there significant performance benefits to be gained from compiling an expression compared to simply parsing that same expression?
From the method signatures, it appears that Ognl.parseExpression() produces an input-independent object, but Ognl.compileExpression() produces an object that depends upon the given input (root and context). Is this correct?
That http://struts.apache.org/release/2.3.x/docs/ognl-expression-compilation.html link is pretty old, and I'm not sure whether it's outdated, but it's the only real documentation I ever wrote on how to use the javassist-based expression JIT code.
It's only a relevant concern if your own use of OGNL, whether direct or indirect, shows a performance hit in that area. The normal expression evaluation mechanism is probably more than adequate for most needs, but this extra step turns what is basically a chain of reflective Java invocation calls into pure Java equivalents, so it eliminates almost entirely any hit you might otherwise incur from OGNL's use of reflection.
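To make the distinction concrete, here is a rough sketch of both calls (assuming OGNL 3.x; the exact signatures have shifted between versions, so treat this as illustrative rather than definitive):

import ognl.Node;
import ognl.Ognl;
import ognl.OgnlContext;

public class OgnlDemo {
    public static class Person {
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        Person root = new Person("Ada");
        OgnlContext context = (OgnlContext) Ognl.createDefaultContext(root);

        // Parsed: the expression becomes a reusable AST, evaluated by
        // walking it reflectively against whatever root you pass in.
        Object parsed = Ognl.parseExpression("name");
        System.out.println(Ognl.getValue(parsed, context, root));

        // Compiled: javassist generates bytecode accessors up front, so
        // later reads avoid the reflective walk. Note the call needs a
        // context and root, unlike parseExpression.
        Node compiled = Ognl.compileExpression(context, root, "name");
        System.out.println(compiled.getAccessor().get(context, root));
    }
}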
Really, if you aren't sure whether you need it, you probably don't. Sorry I never got around to integrating the concept thoroughly into OGNL without so much scary-looking extra work. It probably would've been best as an optional configuration setting in OGNL that could be turned on or off, but... Feel free to fork on github if you want. =)
