Drools dialect difference between implicit and explicit conjunction (AND) - java

In the Drools rule language, conjunctions are implicit. For example:
rule "new drool"
when
    Fact(id == "fact1")
    Fact(id == "fact2")
then
end
The above requires that there be two Fact objects: one must have an id of "fact1", and the other must have an id of "fact2".
However, the AND operator does exist, so you could write the same rule as follows:
rule "new drool"
when
    Fact(id == "fact1") AND
    Fact(id == "fact2")
then
end
I was under the impression that there is absolutely no logical or practical difference between these two expressions. However, I have a user telling me he is experiencing different behavior when he uses the explicit conjunction vs the implicit one. I am skeptical, but I haven't been able to find any documentation to support my position. Does anyone know whether an implicit vs explicit conjunction in drools could see different behavior?

The AND is implicit between the two Conditional Elements, so if the user is experiencing different behaviours, there is likely a bug. If you can manage to reproduce the different behaviours in a test case, please open a JIRA for it.

Related

How to get a field reference in Java?

I am trying to get a compile-time-safe field reference in Java, done not with reflection and Strings, but by directly referencing the field. Something like:
MyClass::myField
I have tried the usual reflection way, but you need to reference the fields as Strings, which is error-prone in case of a rename and will not produce a compile-time error.
EDIT: just to clarify, my end goal is to get the field NAME for entity purposes, such as referencing the entity field in a query, and not the value.
Unfortunately, you might as well want to wish for a unicorn. The notion of 'a field reference', in the sense that you are asking for, simply isn't part of java-the-language.
That MyClass::myThing syntax works only for methods. There's simply no such thing for fields. It's unfortunate.
It's very difficult to give objective reasons for the design decisions of any language; it either requires spelunking through the designers' collective heads, which requires magic or science fiction, or asking them to spill the beans, which they're probably not going to do in a Stack Overflow question. Sometimes (as with more recent Java features, such as this one), design is debated in public. Specifically, you can search the OpenJDK lambda-dev mailing list, where no doubt this question was covered. You'll need to go through, and I'm not exaggerating, tens of thousands of posts, but the good news is, it's searchable.
But, I can guess / dig through my own memory as I spent some time discussing Project Lambda as it was designed:
Direct field access isn't common in the Java ecosystem. The language allows direct field access, but few Java programs are written that way, so why make a language feature that would only be immediately useful and familiar to an exotic bunch?
The infrastructure required is also rather significant: a lambda or method reference isn't allowed to be written in Java unless you use it in a context that makes it possible for the compiler to 'treat' it as a type, specifically a @FunctionalInterface: any interface that contains exactly one abstract method (other than methods that already exist in java.lang.Object itself). In other words, this is fine:
Function<String, String> f = String::toLowerCase;
But this is not:
Object o = String::toLowerCase;
So, let's imagine for a moment that field refs did exist. What does that mean? What is the 'type' of the expression MyClass::myField? Perhaps a new concept: An interface with 2 methods; one of them takes no arguments and returns a T, the other wants a T and returns nothing (to match the act of reading the field, and writing it), but where it's also acceptable if it's a FunctionalInterface that is either one of those, perhaps? That sounds complicated.
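To make that complication concrete, here is a purely hypothetical sketch of such a two-method "field reference" type (the names FieldRef, Point, and x are all invented for illustration). Note that because it has two abstract methods, it could not be a @FunctionalInterface, so today you cannot even build one from a single lambda:

```java
// Hypothetical: what the target type of an imaginary Point::x might look like.
interface FieldRef<T, F> {
    F get(T target);          // models reading the field
    void set(T target, F v);  // models writing the field
}

class Point {
    int x;
}

public class FieldRefTypeSketch {
    static int roundTrip() {
        // Two abstract methods means no lambda: an anonymous class is the
        // closest approximation, which illustrates the awkwardness.
        FieldRef<Point, Integer> xRef = new FieldRef<Point, Integer>() {
            public Integer get(Point p) { return p.x; }
            public void set(Point p, Integer v) { p.x = v; }
        };
        Point p = new Point();
        xRef.set(p, 42);
        return xRef.get(p);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // 42
    }
}
```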
The general mindset of the java design team right now (and has been for a while) is not to overcomplicate matters: Do not add features unless you have a good reason. After all, if it turns out that the community really clamours for field refs, they can be added. But, if on the other hand, they were added but nobody uses them, they can't be removed (and thus you've now permanently made the language more complicated and reduced room for future language features for a thing nobody uses and which most style guides tell you to actively avoid).
That's, I'm pretty sure, why they don't exist.
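As a workaround for the asker's actual goal (a rename-safe field name), one common pattern is to pair a getter method reference, which is compile-time safe, with a name constant guarded by a reflection check run from a unit test. This is only a sketch; the Person class, the email field, and the constant are all illustrative:

```java
import java.util.function.Function;

class Person {
    // The string used in queries; the reflection guard below keeps it honest.
    static final String EMAIL_FIELD = "email";

    private final String email;
    Person(String email) { this.email = email; }
    String getEmail() { return email; }
}

public class FieldNameGuard {
    // Compile-time safe: renaming getEmail without updating this line
    // is a compile error.
    static final Function<Person, String> EMAIL_GETTER = Person::getEmail;

    // Runtime guard (run it from a unit test): fails if the constant
    // drifts away from the real field name after a rename.
    static boolean fieldNameIsValid() {
        try {
            Person.class.getDeclaredField(Person.EMAIL_FIELD);
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(fieldNameIsValid()); // true
        System.out.println(EMAIL_GETTER.apply(new Person("a@b.c"))); // a@b.c
    }
}
```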

Can "Ternary operators should not be nested" (squid:S3358) be configured?

When I have the following code with two levels of ternary operations:
double amount = isValid ? (isTypeA ? vo.getTypeA() : vo.getTypeB()) : 0;
Which Sonar warns about
Ternary operators should not be nested (squid:S3358)
Just because you can do something, doesn't mean you should, and that's the case with nested ternary operations. Nesting ternary operators results in the kind of code that may seem clear as day when you write it, but six months later will leave maintainers (or worse - future you) scratching their heads and cursing.
Instead, err on the side of clarity, and use another line to express the nested operation as a separate statement.
My colleague suggested that such level can be accepted and it's more clear than the alternative.
I wonder if this rule (or others) can be configured with an allowed-levels limit?
If not, why is Sonar so strict when it deals with code conventions?
I don't want to ignore the rule, just customize it to allow up to 2 levels instead of 1.
I wonder if this rule can be configured with an allowed-levels limit?
The Ternary operators should not be nested rule cannot be configured. You are only able to enable or disable it.
I wonder if other rules can be configured with an allowed-levels limit?
I don't know of any existing rule which can do that. Luckily, you are able to create a custom analyzer. The original rule class is NestedTernaryOperatorsCheck; you can simply copy it and adjust it to your needs.
Why is Sonar so strict when it deals with code conventions?
SonarSource provides a lot of rules for different languages. Every customization option makes the code more difficult to maintain. They have limited capacity, so they have to make decisions that will not be accepted by all users (but are accepted by most of them).
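For what it's worth, the refactoring the rule description asks for is small. A sketch, where Vo is a stand-in for the question's vo type:

```java
class Vo {
    double getTypeA() { return 1.0; }
    double getTypeB() { return 2.0; }
}

public class AmountCalc {
    // Same logic as the nested ternary
    //   double amount = isValid ? (isTypeA ? vo.getTypeA() : vo.getTypeB()) : 0;
    // but only one ternary level deep.
    static double amount(boolean isValid, boolean isTypeA, Vo vo) {
        if (!isValid) {
            return 0;
        }
        return isTypeA ? vo.getTypeA() : vo.getTypeB();
    }

    public static void main(String[] args) {
        System.out.println(amount(true, true, new Vo()));  // 1.0
        System.out.println(amount(false, true, new Vo())); // 0.0
    }
}
```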

Specify rule in grammar for embedded DSL, if compiler can't verify it?

Is it common to define a rule in a grammar for an embedded DSL, even if the compiler can not validate the correctness of the given code? The rule I'm talking about is one that applies at runtime.
Here's an example:
I have a function that reads arbitrary classes and searches them for methods marked with a specific annotation. In addition, the methods must have a boolean return type. I haven't found a way to declare the annotation to be valid only on methods with a specific return type, so I check it at runtime and raise an error if the method does not return boolean.
Now I want to specify a grammar for the internal/embedded DSL given by the tool. So basically a class with an annotated method with return type int is not valid.
So should the grammar contain a rule, forbidding other return types than boolean, or not?
Papers and/or articles on the topic would be helpful, too.
Thanks in advance.
Is it common to define a rule in a grammar for an embedded DSL, even if the compiler can not validate the correctness of the given code? The rule I'm talking about is one that applies at runtime.
I think you're referring to "the compiler" for code written in your DSL -- i.e. a program that evaluates the DSL code and transforms it to some other representation -- as opposed to a Java compiler with which you are building that program. In that case, this is a question of semantics, in more ways than one.
In the first place, if a compiler for your DSL cannot validate your rule then by definition, it is not a grammar rule. In that sense, the answer to your question is trivially "no" -- not only is what you describe not common, it doesn't even make sense.
On the other hand, you seem to be describing a semantic rule of your language, and there's nothing at all wrong or uncommon with a language having such rules. Most languages of any complexity do. I do not speak to your specific example, however, because it seems largely a matter of opinion, which is off-topic here.
It is common for language specifications to contain text (with varying degrees of formality) which constrains correct programs in ways that go beyond the possibilities of syntactic analysis. You don't have to look very far for examples; the C, C++ and ECMAScript standards are full of them.
But if you cannot verify a constraint at compile-time, it is clearly impossible to include the constraint in the grammar. Even constraints which can theoretically be detected at compile-time can be difficult to include in a formal context-free grammar (requiring variable declaration before use, for example, or more generally insisting on correct typing). Other formalisms exist, but they are not really grammatical.
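The kind of runtime check described in the question might be sketched as follows. The annotation name Guard and the class names are illustrative, not from the question:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Java can restrict an annotation to methods, but not to methods
// with a particular return type -- hence the runtime check below.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Guard {}

class GoodRules {
    @Guard boolean ok() { return true; }
}

class BadRules {
    @Guard int broken() { return 0; } // violates the boolean-only constraint
}

public class GuardValidator {
    // Enforce at runtime what the annotation declaration cannot express:
    // every @Guard method must return boolean.
    static void validate(Class<?> cls) {
        for (Method m : cls.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Guard.class)
                    && m.getReturnType() != boolean.class) {
                throw new IllegalStateException(
                    "@Guard method must return boolean: " + m.getName());
            }
        }
    }

    public static void main(String[] args) {
        validate(GoodRules.class); // passes silently
        try {
            validate(BadRules.class);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```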

Why does Guava's Optional use abstract classes when Java 8's uses nulls?

When Java 8 was released, I was expecting to find its implementation of Optional to be basically the same as Guava's. And from a user's perspective, they're almost identical. But Java 8's Optional uses null internally to mark an empty Optional, rather than making Optional abstract and having two implementations. Aside from Java 8's version feeling wrong (you're avoiding nulls by just hiding the fact that you're really still using them), isn't it less efficient to check if your reference is null every time you want to access it, rather than just invoke an abstract method? Maybe it's not, but I'm wondering why they chose this approach.
Perhaps the developers of Google Guava wanted to develop an idiom closer to those of the functional world:
datatype 'a option = NONE | SOME of 'a
In which case you use pattern matching to check the true nature of an instance of type option:
case x of
    NONE   => (* handle the absent case here *)
  | SOME y => (* do something with y here *)
By declaring Optional as an abstract class, Google Guava follows this approach: Optional represents the type ('a option), and the subclasses behind of and absent represent the particular instances of that type (SOME 'a and NONE).
The design of Optional was thoroughly discussed on the lambda mailing list. In the words of Brian Goetz:
The problem is with the expectations. This is a classic "blind men
and elephant" problem; the thing called Optional has different
"essential natures" to different viewpoints, and the problem is not
that each is not valid, the problem is that we're all using the same
word to describe different concepts (more precisely, assuming that the
goals of the JDK team are the same as the goals of the people you
condescendingly refer to as "those familiar with the concept."
There is a narrow design scope of what Optional is being used for in
the JDK. The current design mostly meets that; it might be extended
in small ways, but the goal is NOT to create an option monad or solve
the problems that the option monad is intended to solve. (Even if we
did, the result would still likely not be satisfactory; without the
rest of the class library following the same monadic API conventions,
without higher-kinded generics to abstract over different kinds of
monads, without linguistic support for flatmap in the form of the <-
operator, without pattern matching, etc, etc, the value of turning
Optional into a monad is greatly decreased.) Given that this is not
our goal here, we're stopping where it stops adding value according to
our goals. Sorry if people are upset that we're not turning Java into
Scala or Haskell, but we're not.
On a purely practical note, the discussions surrounding Optional have
exceeded its design budget by several orders of magnitude. We've
carefully considered the considerable input we've received, spent no
small amount of time thinking about it, and have concluded that the
current design center is the right one for the current time. What is
surely meant as well-intentioned input is in fact rapidly turning into
a denial-of-service attack. We could spend endless time arguing this
back and forth, and there'd be no JDK 8 as a result. I'm sure no one
wants that.
So, let's keep our input on the subject to that which is within the
design center of the current implementation, rather than trying to
convince us to change the design center.
I would expect virtual method invocation to be more expensive: you have to load the virtual function table, look up an offset, and then invoke the method. A null check is a single bytecode instruction that reads from a register rather than from memory.
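For reference, the two layouts being compared can be reduced to a sketch like this (illustrative only, not the real Guava or JDK source):

```java
// Guava-style: an abstract class with two concrete subclasses;
// "empty" is a distinct type, dispatched virtually.
abstract class Opt<T> {
    abstract boolean isPresent();
    abstract T get();

    static <T> Opt<T> of(T value) { return new Present<T>(value); }

    @SuppressWarnings("unchecked")
    static <T> Opt<T> absent() { return (Opt<T>) Absent.INSTANCE; }
}

final class Present<T> extends Opt<T> {
    private final T value;
    Present(T value) { this.value = value; }
    boolean isPresent() { return true; }
    T get() { return value; }
}

final class Absent<T> extends Opt<T> {
    static final Absent<?> INSTANCE = new Absent<Object>();
    boolean isPresent() { return false; }
    T get() { throw new IllegalStateException("absent"); }
}

// JDK-style: a single final class where a null field marks "empty";
// presence is a null check, not a virtual dispatch.
final class NullOpt<T> {
    private final T value; // null means empty
    NullOpt(T value) { this.value = value; }
    boolean isPresent() { return value != null; }
}
```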

Java assertions - second argument evaluation?

I'm curious about the performance of Java assertions.
In particular, with 2-argument Java assertions, like this one:
assert condition : expression
Does "expression" get evaluated even if "condition" evaluates to true? I'm curious if it's a feasible solution for cases when we want super lightweight "condition", but "expression" can be too heavy (e.g. String concatenation). Solutions like e.g. Preconditions in Guava will evaluate the expression, making it infeasible for such cases.
My tests suggest it's lazily evaluated, but I couldn't find any reference proving it.
Thanks in advance!
You can always refer to the Java Language Specification (JLS), §14.10 (the assert statement): http://docs.oracle.com/javase/specs/jls/se7/html/jls-14.html#jls-14.10
In short, you will find there that the second expression is lazily evaluated.
I highly recommend you bookmark the JLS. The table of contents is here: http://docs.oracle.com/javase/specs/jls/se7/html/index.html . The VM Spec might also be useful (you can find it here: http://docs.oracle.com/javase/specs/jvms/se7/html/)
Also, please note that assertions are in no way a substitute for things like Guava's Preconditions (or if statements testing for preconditions). They have a similar, but not identical, purpose. For instance, assertions can be disabled at runtime (and are, in fact, disabled by default)!
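The laziness is easy to observe with a small sketch (the call counter is an illustrative device). Whether assertions are disabled, or enabled with a condition that holds, the detail expression is never evaluated:

```java
public class AssertDetailLaziness {
    static int calls = 0;

    static String expensiveDetail() {
        calls++; // would only run if the condition were false (with -ea)
        return "expensive diagnostic message";
    }

    public static void main(String[] args) {
        boolean condition = true;
        // The detail expression after ':' is not evaluated here, because
        // either assertions are off, or the condition evaluates to true.
        assert condition : expensiveDetail();
        System.out.println(calls); // 0
    }
}
```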
