In OGNL, it is recommended to parse expressions that are reused in order to improve performance.
When consulting the API, I also noticed that there is a compileExpression method.
After searching thoroughly for information on compilation vs parsing, the only article I could find is part of the Struts documentation, and mentions how to do it, but not what it does compared to parsing.
Under what conditions should you use compilation instead of parsing alone, and are there significant performance benefits to be gained from compiling an expression compared to simply parsing that same expression?
From the method signatures, it appears that Ognl.parseExpression() produces an input-independent object, but Ognl.compileExpression() produces an object that depends upon the given input (root and context). Is this correct?
That http://struts.apache.org/release/2.3.x/docs/ognl-expression-compilation.html link is pretty old, and I'm not sure whether it's outdated, but it's the only real documentation I ever wrote on how to use the Javassist-based expression JIT code.
It's only a relevant concern if something you use, directly or indirectly through OGNL, shows a performance hit in that area. The normal expression evaluation mechanism is probably more than adequate for most needs, but this extra step turns what is basically a chain of Java reflection calls into pure Java equivalents, so it almost entirely eliminates whatever overhead you might otherwise incur from OGNL's use of reflection.
Really, if you aren't sure whether you need it, you probably don't. Sorry I never got around to integrating the concept into OGNL more thoroughly, without so much scary-looking extra work. It probably would've been best as an optional configuration setting in OGNL that could be turned on or off, but... Feel free to fork it on GitHub if you want. =)
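For reference, a minimal sketch of the two paths, assuming the OGNL 2.7/3.x API described on that Struts page (Ognl.parseExpression, Ognl.compileExpression taking a context and root, and the accessor on the returned Node); treat this as illustrative rather than authoritative usage:

import ognl.Node;
import ognl.Ognl;
import ognl.OgnlContext;

public class OgnlParseVsCompile {
    public static class Person {
        public String getName() {
            return "example";
        }
    }

    public static void main(String[] args) throws Exception {
        Person root = new Person();
        OgnlContext context = (OgnlContext) Ognl.createDefaultContext(root);

        // Parsing: builds a reusable AST that is still evaluated via reflection.
        Object parsed = Ognl.parseExpression("name");
        Object viaParsed = Ognl.getValue(parsed, context, root);

        // Compiling: generates bytecode (via Javassist) for the expression, so
        // later evaluations are plain Java calls instead of reflective ones.
        Node compiled = Ognl.compileExpression(context, root, "name");
        Object viaCompiled = compiled.getAccessor().get(context, root);

        System.out.println(viaParsed + " / " + viaCompiled);
    }
}

The parsed tree depends only on the expression string, while compileExpression asks for a representative root and context up front, which seems to be why its signature looks input-dependent.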
Consider the following pseudo-C++ code:
std::vector<int> v;
// ... fill the vector and do other work ...
assert(std::is_sorted(v.begin(), v.end()));
auto x = std::find(v.begin(), v.end(), elementToSearchFor);
find has linear runtime, because it's called on a vector, which may be unsorted. But at that line in this specific program we know that either the program is incorrect (as in: it doesn't run to completion if the assertion fails), or the vector being searched is sorted, which would allow a binary search in O(log n). Optimizing it into a binary search should be done by a good compiler.
This is only the simplest example of worst-case behavior I have found so far (more complex assertions may allow even more optimization).
Do some compilers do this? If yes, which ones? If not, why don't they?
Appendix: some higher-level languages may do this easily (especially functional ones), so this is more about C/C++/Java and similar languages.
Rice's Theorem basically states that non-trivial properties of code cannot be computed in general.
The relationship between is_sorted being true and the possibility of running a faster-than-linear search is a non-trivial property of the program after is_sorted is asserted.
You can arrange for explicit connections between is_sorted and the ability to use various faster algorithms. The way you communicate this information in C++ to the compiler is via the type system. Maybe something like this:
template <typename C>
struct container_is_sorted {
  C c;
  // forward a bunch of methods to `c`.
};
then, you'd invoke a container-based algorithm that would either use a linear search on most containers, or a sorted search on containers wrapped in container_is_sorted.
This is a bit awkward in C++. In a system where variables could carry different compiler-known type-like information at different points in the same stream of code (types that mutate under operations) this would be easier.
Ie, suppose types in C++ had a sequence of tags like int{positive, even} you could attach to them, and you could change the tags:
int x;
make_positive(x);
Operations on a type that did not actively preserve a tag would automatically discard it.
Then assert( {is sorted}, foo ) could attach the tag {is sorted} to foo. Later code could then consume foo and have that knowledge. If you inserted something into foo, it would lose the tag.
Such tags might be run time (that has cost, however, so unlikely in C++), or compile time (in which case, the tag-state of a given variable must be statically determined at a given location in the code).
In C++, due to the awkwardness of such stuff, we instead by habit simply note it in comments and/or use the full type system to tag things (rvalue vs lvalue references are an example that was folded into the language proper).
So the programmer is expected to know it is sorted, and invoke the proper algorithm given that they know it is sorted.
Well, there are two parts to the answer.
First, let's look at assert:
7.2 Diagnostics <assert.h>
1 The header defines the assert and static_assert macros and refers to another macro, NDEBUG, which is not defined by <assert.h>. If NDEBUG is defined as a macro name at the point in the source file where <assert.h> is included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
The assert macro is redefined according to the current state of NDEBUG each time that <assert.h> is included.
2 The assert macro shall be implemented as a macro, not as an actual function. If the macro definition is suppressed in order to access an actual function, the behavior is undefined.
Thus, there is nothing left in release-mode to give the compiler any hint that some condition can be assumed to hold.
Still, there is nothing stopping you from redefining assert with an implementation-defined __assume in release-mode yourself (take a look at __builtin_unreachable() in clang / gcc).
Let's assume you have done so. Now, the condition tested could be really complicated and expensive. Thus, you really want to annotate it so it does not ever result in any run-time work. Not sure how to do that.
Let's grant that your compiler even allows that, for arbitrary expressions.
The next hurdle is recognizing what the expression actually tests, and how that relates to the code as written and any potentially faster, but under the given assumption equivalent, code.
This last step results in an immense explosion of compiler-complexity, by either having to create an explicit list of all those patterns to test or building a hugely-complicated automatic analyzer.
That's no fun, and just about as complicated as building SkyNET.
Also, you really do not want to use an asymptotically faster algorithm on a data-set which is too small for asymptotic time to matter. That would be a pessimization, and you just about need precognition to avoid such.
Assertions are (usually) compiled out in the final code. Meaning, among other things, that the code could (silently) fail (by retrieving the wrong value) due to such an optimization, if the assertion was not satisfied.
If the programmer (who put the assertion there) knew that the vector was sorted, why didn't he use a different search algorithm? What's the point in having the compiler second-guess the programmer in this way?
How does the compiler know which search algorithm to substitute for which, given that they all are library routines, not a part of the language's semantics?
You said "the compiler". But compilers are not there for the purpose of writing better algorithms for you. They are there to compile what you have written.
What you might have asked is whether the library function std::find should be implemented so that it can detect when an algorithm other than linear search is applicable. In reality that might be possible if the user has passed in std::set (or even std::unordered_set) iterators and the STL implementer knows the details of those iterators and can make use of them, but not in general and not for vector.
assert itself only applies in debug mode, and optimisations are normally wanted for release mode. Also, a failed assert causes an abort, not a switch to a different library routine.
Essentially, there are collections provided for faster lookup and it is up to the programmer to choose it and not the library writer to try to second guess what the programmer really wanted to do. (And in my opinion even less so for the compiler to do it).
In the narrow sense of your question, the answer is: they do if they can, but mostly they can't, because the language isn't designed for it and assert expressions are too complicated.
If assert() is implemented as a macro (as it is in C++), it has not been disabled (by defining NDEBUG), and the expression can be evaluated at compile time (or its data flow can be traced), then the compiler will apply its usual optimisations. That doesn't happen often.
In most cases (and certainly in the example you gave) the relationship between the assert() and the desired optimisation is far beyond what a compiler can do without assistance from the language. Given the very low level of meta-programming capability in C++ (and Java), the ability to do this is quite limited.
In the wider sense, I think what you're really asking for is a language in which the programmer can make assertions about the intention of the code, from which the compiler can choose between different translations (and algorithms). There have been experimental languages attempting to do that, and Eiffel had some features in that direction, but I'm not aware of any mainstream compiled languages that could do it.
Optimizing it into a binary search should be done by a good compiler.
No! A linear search results in a much more predictable branch. If the array is short enough, linear search is the right thing to do.
Apart from that, even if the compiler wanted to, the list of ideas and notions it would have to know about would be immense and it would have to do nontrivial logic on them. This would get very slow. Compilers are engineered to run fast and spit out decent code.
You might spend some time playing with formal verification tools whose job is to figure out everything they can about the code they're fed in, which asserts can trip, and so forth. They're often built without the same speed requirements compilers have and consequently they're much better at figuring things out about programs. You'll probably find that reasoning rigorously about code is rather harder than it looks at first sight.
When Java 8 was released, I was expecting to find its implementation of Optional to be basically the same as Guava's. And from a user's perspective, they're almost identical. But Java 8's Optional uses null internally to mark an empty Optional, rather than making Optional abstract and having two implementations. Aside from Java 8's version feeling wrong (you're avoiding nulls by just hiding the fact that you're really still using them), isn't it less efficient to check if your reference is null every time you want to access it, rather than just invoke an abstract method? Maybe it's not, but I'm wondering why they chose this approach.
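To make the comparison concrete, here is a simplified, non-generic sketch of the two designs (toy code for illustration, not the actual JDK or Guava sources):

// JDK-8 style: one concrete class, using null internally to mark "empty".
final class NullFlagOptional {
    private final String value; // null means "empty"

    private NullFlagOptional(String value) { this.value = value; }

    static NullFlagOptional of(String value) {
        if (value == null) throw new NullPointerException();
        return new NullFlagOptional(value);
    }

    static NullFlagOptional empty() { return new NullFlagOptional(null); }

    boolean isPresent() { return value != null; } // a null check on every access

    String get() {
        if (value == null) throw new IllegalStateException("empty");
        return value;
    }
}

// Guava style: an abstract type with two subclasses, so "present or absent"
// is answered by virtual dispatch instead of a null check.
abstract class TwoClassOptional {
    abstract boolean isPresent();
    abstract String get();

    static TwoClassOptional of(String value) { return new Present(value); }
    static TwoClassOptional absent() { return Absent.INSTANCE; }

    private static final class Present extends TwoClassOptional {
        private final String value;
        Present(String value) { this.value = value; }
        @Override boolean isPresent() { return true; }
        @Override String get() { return value; }
    }

    private static final class Absent extends TwoClassOptional {
        static final Absent INSTANCE = new Absent();
        @Override boolean isPresent() { return false; }
        @Override String get() { throw new IllegalStateException("absent"); }
    }
}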
Perhaps the developers of Google Guava wanted to develop an idiom closer to those of the functional world:
datatype 'a option = NONE | SOME of 'a
In whose case you use pattern matching to check the true nature of an instance of type option.
case x of
    NONE   => (* address the null/absent case here *)
  | SOME y => (* do something with y here *)
By declaring Optional as an abstract class, Google Guava follows this approach: Optional represents the type ('a option), and the subclasses behind of and absent represent the particular instances of that type (SOME 'a and NONE).
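In Guava, assuming com.google.common.base.Optional is on the classpath, the "pattern match" becomes an isPresent() check on instances created through of and absent; a rough usage sketch:

import com.google.common.base.Optional;

public class GuavaOptionalDemo {
    public static void main(String[] args) {
        Optional<String> some = Optional.of("value"); // SOME "value"
        Optional<String> none = Optional.absent();    // NONE

        for (Optional<String> x : java.util.Arrays.asList(some, none)) {
            if (x.isPresent()) {
                System.out.println("got: " + x.get()); // the SOME branch
            } else {
                System.out.println("nothing here");    // the NONE branch
            }
        }
    }
}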
The design of Optional was thoroughly discussed on the lambda mailing list. In the words of Brian Goetz:
The problem is with the expectations. This is a classic "blind men and elephant" problem; the thing called Optional has different "essential natures" to different viewpoints, and the problem is not that each is not valid, the problem is that we're all using the same word to describe different concepts (more precisely, assuming that the goals of the JDK team are the same as the goals of the people you condescendingly refer to as "those familiar with the concept."
There is a narrow design scope of what Optional is being used for in the JDK. The current design mostly meets that; it might be extended in small ways, but the goal is NOT to create an option monad or solve the problems that the option monad is intended to solve. (Even if we did, the result would still likely not be satisfactory; without the rest of the class library following the same monadic API conventions, without higher-kinded generics to abstract over different kinds of monads, without linguistic support for flatmap in the form of the <- operator, without pattern matching, etc, etc, the value of turning Optional into a monad is greatly decreased.) Given that this is not our goal here, we're stopping where it stops adding value according to our goals. Sorry if people are upset that we're not turning Java into Scala or Haskell, but we're not.
On a purely practical note, the discussions surrounding Optional have exceeded its design budget by several orders of magnitude. We've carefully considered the considerable input we've received, spent no small amount of time thinking about it, and have concluded that the current design center is the right one for the current time. What is surely meant as well-intentioned input is in fact rapidly turning into a denial-of-service attack. We could spend endless time arguing this back and forth, and there'd be no JDK 8 as a result. I'm sure no one wants that.
So, let's keep our input on the subject to that which is within the design center of the current implementation, rather than trying to convince us to change the design center.
I would expect virtual method invocation to be more expensive: you have to load the virtual function table, look up an offset, and then invoke the method. A null check is a single comparison that reads a register, not memory.
I was frustrated recently by this question, where the OP wanted to change the format of the output depending on a feature of the number being formatted.
The natural mechanism would be to construct the format dynamically, but because PrintStream.format takes a String rather than a CharSequence, the construction must end with building a String.
It would have been much more natural and efficient to build a class implementing CharSequence that provided the dynamic format on the fly, without having to create yet another String.
This seems to be a common theme in the Java libraries, where the default seems to be to require a String even though immutability is not a requirement. I am aware that keys in Maps and Sets should generally be immutable for obvious reasons, but as far as I can see String is used far too often where a CharSequence would suffice.
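For instance, something along these lines would have been enough; this is a hypothetical sketch, not code from the linked question:

// Hypothetical sketch: a lazy concatenation exposed as a CharSequence, so a
// dynamically built format can be passed around without creating a String first.
final class ConcatSequence implements CharSequence {
    private final CharSequence left;
    private final CharSequence right;

    ConcatSequence(CharSequence left, CharSequence right) {
        this.left = left;
        this.right = right;
    }

    @Override public int length() {
        return left.length() + right.length();
    }

    @Override public char charAt(int index) {
        int leftLen = left.length();
        return index < leftLen ? left.charAt(index) : right.charAt(index - leftLen);
    }

    @Override public CharSequence subSequence(int start, int end) {
        StringBuilder sb = new StringBuilder(end - start);
        for (int i = start; i < end; i++) {
            sb.append(charAt(i));
        }
        return sb;
    }

    @Override public String toString() {
        // A String is only materialised here, if a String-only API demands one.
        return new StringBuilder(length()).append(left).append(right).toString();
    }
}

An overload declared as format(CharSequence fmt, Object... args) could consume such a view directly; because PrintStream.format only accepts a String, the caller ends up invoking toString() anyway.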
There are a few reasons.
In a lot of cases, immutability is a functional requirement. For example, you've identified that a lot of collections / collection types will "break" if an element or key is mutated.
In a lot of cases, immutability is a security requirement. For instance, in an environment where you are running untrusted code in a sandbox, any case where untrusted code could pass a StringBuilder instead of a String to trusted code is a potential security problem [1].
In a lot of cases, the reason is backwards compatibility. The CharSequence interface was introduced in Java 1.4. Java APIs that predate Java 1.4 do not use it. Furthermore, changing a preexisting method that uses String to use CharSequence risks binary compatibility issues; i.e. it could prevent old compiled Java code from running on a newer JVM.
In the remainder it could simply be - "too much work, too little time". Making changes to existing standard APIs involves a lot of effort to make sure that the change is going to be acceptable to everyone (e.g. checking for the above), and convincing everyone that it will all be OK. Work has to be prioritized.
So while you find this frustrating, it is unavoidable.
[1] This would leave the Java API designer with an awkward choice. Does he/she write the API to make (expensive) defensive copies whenever it is passed a mutable "string", possibly changing the semantics of the API (from the user's perspective!)? Or does he/she label the API as "unsafe for untrusted code" ... and hope that developers notice / understand?
Of course, when you are designing your own APIs for your own reasons, you can make the call that security is not an issue. The Java API designers are not in that position. They need to design APIs that work for everyone. Using String is the simplest / least risky solution.
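A sketch of the defensive-copy option from footnote [1]; the parse method here is made up purely for illustration:

// Illustrative only: snapshotting a possibly mutable CharSequence argument.
public final class Parser {

    public static String parse(CharSequence input) {
        // Defensive copy: freeze the contents so the caller (or another thread)
        // cannot mutate a StringBuilder/CharBuffer underneath us mid-parse.
        String snapshot = input.toString();
        return doParse(snapshot);
    }

    private static String doParse(String s) {
        // The real work would happen here, against an immutable snapshot.
        return s.trim();
    }

    private Parser() {}
}

The copy restores the immutability guarantee, but it also costs exactly the String allocation that accepting a CharSequence was meant to avoid.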
See http://docs.oracle.com/javase/6/docs/api/java/lang/CharSequence.html
Do you notice the part that explains that it has been around since 1.4? Previously all the API methods used String (which has been around since 1.0)
I'm going to reuse the OGNL library outside of the Struts2 scope. I have a rather large set of formulas, which is why I would like to precompile all of them:
Ognl.parseExpression(expressionString);
But I'm not sure whether a precompiled expression can be used in a multi-threaded environment. Does anybody know whether it can?
This PropertyUtils code from OGNL is written to be thread-safe, and so I would guess that compiled expressions are intended to be thread safe.
Further evidence is that most of the accessor APIs take the mutable state as a context parameter (e.g. see PropertyAccessor), so the classes themselves hold little mutable state. Immutable classes are intrinsically thread-safe. The developer guide urges extensions to be thread-safe, and finally, looking through the code, where there is mutable state it is guarded by a synchronized block; for example, see EvaluationPool.
In summary, it seems OGNL has been designed to be thread-safe. Whether it actually is or not is another question! You could write a quick test to see for sure, using for example Concutest. Alternatively, if the number of threads is reasonable, storing all the expressions in a ThreadLocal sidesteps the issue altogether, at the cost of a little extra memory (or possibly not, as OGNL does expression caching.)
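If you want to be conservative, the per-thread approach mentioned above could look roughly like this; ExpressionCache is a made-up helper, not part of OGNL:

import java.util.HashMap;
import java.util.Map;

import ognl.Ognl;
import ognl.OgnlException;

// Hypothetical helper: each thread parses and caches its own copy of every
// expression, so no parsed tree is ever shared between threads.
public final class ExpressionCache {
    private static final ThreadLocal<Map<String, Object>> CACHE =
            ThreadLocal.withInitial(HashMap::new);

    private ExpressionCache() {}

    public static Object get(String expression) throws OgnlException {
        Map<String, Object> perThread = CACHE.get();
        Object tree = perThread.get(expression);
        if (tree == null) {
            tree = Ognl.parseExpression(expression);
            perThread.put(expression, tree);
        }
        return tree;
    }
}

Evaluation then goes through Ognl.getValue(ExpressionCache.get(expressionString), context, root), and any question about sharing parsed trees between threads disappears.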
I think your best option is to contact original developers, directly or through mailing list:
http://www.opensymphony.com/ognl/members.action
https://ognl.dev.java.net/servlets/ProjectMailingListList
The project seems to have been abandoned for some time, so there is hardly anybody else who would know. :/
Much of my programming background is in Java, and I'm still doing most of my programming in Java. However, I'm starting to learn Python for some side projects at work, and I'd like to learn it as independent of my Java background as possible - i.e. I don't want to just program Java in Python. What are some things I should look out for?
A quick example - when looking through the Python tutorial, I came across the fact that defaulted mutable parameters of a function (such as a list) are persisted (remembered from call to call). This was counter-intuitive to me as a Java programmer and hard to get my head around. (See here and here if you don't understand the example.)
Someone also provided me with this list, which I found helpful, but short. Anyone have any other examples of how a Java programmer might tend to misuse Python...? Or things a Java programmer would falsely assume or have trouble understanding?
Edit: OK, here is a brief overview of the points addressed by the article I linked to, to prevent duplicates in the answers (as suggested by Bill the Lizard). (Please let me know if I make a mistake in phrasing; I've only just started with Python, so I may not understand all the concepts fully. And a disclaimer: these are going to be very brief, so if you don't understand what one of them is getting at, check out the link.)
A static method in Java does not translate to a Python classmethod
A switch statement in Java translates to a hash table in Python
Don't use XML
Getters and setters are evil (hey, I'm just quoting :) )
Code duplication is often a necessary evil in Java (e.g. method overloading), but not in Python
(And if you find this question at all interesting, check out the link anyway. :) It's quite good.)
Don't put everything into classes. Python's built-in list and dictionaries will take you far.
Don't worry about keeping one class per module. Divide modules by purpose, not by class.
Use inheritance for behavior, not interfaces. Don't create an "Animal" class for "Dog" and "Cat" to inherit from, just so you can have a generic "make_sound" method.
Just do this:
class Dog(object):
    def make_sound(self):
        return "woof!"

class Cat(object):
    def make_sound(self):
        return "meow!"

class LolCat(object):
    def make_sound(self):
        return "i can has cheezburger?"
The referenced article has some good advice that can easily be misquoted and misunderstood. And some bad advice.
Leave Java behind. Start fresh. "do not trust your [Java-based] instincts". Saying things are "counter-intuitive" is a bad habit in any programming discipline. When learning a new language, start fresh, and drop your habits. Your intuition must be wrong.
Languages are different. Otherwise, they'd be the same language with different syntax, and there'd be simple translators. Because there are not simple translators, there's no simple mapping. That means that intuition is unhelpful and dangerous.
"A static method in Java does not translate to a Python classmethod." This kind of thing is really limited and unhelpful. Python has a staticmethod decorator. It also has a classmethod decorator, for which Java has no equivalent.
This point, BTW, also included the much more helpful advice on not needlessly wrapping everything in a class. "The idiomatic translation of a Java static method is usually a module-level function".
A Java switch statement can be implemented in Python in several ways. First and foremost, it's usually an if/elif/elif construct. The article is unhelpful in this respect. If you're absolutely sure this is too slow (and can prove it), you can use a Python dictionary as a slightly faster mapping from value to block of code. Blindly translating switch to a dictionary (without thinking) is really bad advice.
Don't use XML. Doesn't make sense when taken out of context. In context it means don't rely on XML to add flexibility. Java relies on describing stuff in XML; WSDL files, for example, repeat information that's obvious from inspecting the code. Python relies on introspection instead of restating everything in XML.
But Python has excellent XML processing libraries. Several.
Getters and setters are not required in Python the way they're required in Java. First, you have better introspection in Python, so you don't need getters and setters to help make dynamic bean objects. (For that, you use collections.namedtuple.)
However, you have the property decorator which will bundle getters (and setters) into an attribute-like construct. The point is that Python prefers naked attributes; when necessary, we can bundle getters and setters to appear as if there's a simple attribute.
Also, Python has descriptor classes if properties aren't sophisticated enough.
Code duplication is often a necessary evil in Java (e.g. method overloading), but not in Python. Correct. Python uses optional arguments instead of method overloading.
The bullet point went on to talk about closures; that isn't as helpful as the simple advice to use default argument values wisely.
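To illustrate the duplication being replaced, this is roughly the Java pattern that a single Python def with default arguments stands in for (connect is a made-up method, for illustration only):

// Hypothetical example: Java simulates optional parameters with overloads
// that delegate to the most specific version.
public class Connector {
    public void connect(String host) {
        connect(host, 80);
    }

    public void connect(String host, int port) {
        connect(host, port, 30_000);
    }

    public void connect(String host, int port, int timeoutMillis) {
        System.out.printf("connecting to %s:%d (timeout %d ms)%n", host, port, timeoutMillis);
    }
}

In Python the same API collapses to one function, e.g. def connect(host, port=80, timeout_millis=30000).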
One thing you might be used to in Java that you won't find in Python is strict privacy. This is not so much something to look out for as it is something not to look for (I am embarrassed by how long I searched for a Python equivalent to 'private' when I started out!). Instead, Python has much more transparency and easier introspection than Java. This falls under what is sometimes described as the "we're all consenting adults here" philosophy. There are a few conventions and language mechanisms to help prevent accidental use of "unpublic" methods and so forth, but the whole mindset of information hiding is virtually absent in Python.
The biggest one I can think of is not understanding or not fully utilizing duck typing. In Java you're required to specify very explicit and detailed type information upfront. In Python typing is both dynamic and largely implicit. The philosophy is that you should be thinking about your program at a higher level than nominal types. For example, in Python, you don't use inheritance to model substitutability. Substitutability comes by default as a result of duck typing. Inheritance is only a programmer convenience for reusing implementation.
Similarly, the Pythonic idiom is "beg forgiveness, don't ask permission". Explicit typing is considered evil. Don't check whether a parameter is a certain type upfront. Just try to do whatever you need to do with the parameter. If it doesn't conform to the proper interface, it will throw a very clear exception and you will be able to find the problem very quickly. If someone passes a parameter of a type that was nominally unexpected but has the same interface as what you expected, then you've gained flexibility for free.
The most important thing, from a Java POV, is that it's perfectly ok to not make classes for everything. There are many situations where a procedural approach is simpler and shorter.
The next most important thing is that you will have to get over the notion that the type of an object controls what it may do; rather, the code controls what objects must be able to support at runtime (this is by virtue of duck-typing).
Oh, and use native lists and dicts (not customized descendants) as far as possible.
The way exceptions are treated in Python is different from how they are treated in Java. While in Java the advice is to use exceptions only for exceptional conditions, this is not so with Python.
In Python, things like iterators make use of the exception mechanism to signal that there are no more items, but such a design is not considered good practice in Java.
As Alex Martelli puts it in his book Python in a Nutshell, the approach typical of other languages (and applicable to Java) is LBYL, "Look Before You Leap": check in advance, before attempting an operation, for all circumstances that might make the operation invalid.
With Python, the approach is EAFP: "it's Easier to Ask Forgiveness than Permission".
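In Java terms, the contrast looks roughly like this (illustrative only; catching NullPointerException this way is not idiomatic Java, it just mirrors the EAFP shape):

import java.util.HashMap;
import java.util.Map;

public class LbylVsEafp {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        String key = "bob";

        // LBYL: check in advance for everything that could make the operation invalid.
        if (ages.containsKey(key)) {
            System.out.println(ages.get(key));
        } else {
            System.out.println("no such key");
        }

        // EAFP (the Pythonic style, transliterated): just attempt the operation
        // and handle the failure if and when it happens.
        try {
            int age = ages.get(key); // unboxing null throws NullPointerException
            System.out.println(age);
        } catch (NullPointerException e) {
            System.out.println("no such key");
        }
    }
}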
A corollary to "Don't use classes for everything": callbacks.
The Java way for doing callbacks relies on passing objects that implement the callback interface (for example ActionListener with its actionPerformed() method). Nothing of this sort is necessary in Python, you can directly pass methods or even locally defined functions:
def handler():
    print("click!")

button.onclick(handler)
Or even lambdas:
button.onclick(lambda: print("click!\n"))
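For contrast, this is roughly what the interface-based Java style looks like, using a hypothetical Button/ClickListener pair rather than any real GUI toolkit:

// Hypothetical callback interface and component, for illustration only.
interface ClickListener {
    void onClick();
}

class Button {
    private ClickListener listener;

    void setOnClick(ClickListener listener) {
        this.listener = listener;
    }

    void click() {
        if (listener != null) {
            listener.onClick();
        }
    }
}

public class CallbackDemo {
    public static void main(String[] args) {
        Button button = new Button();

        // Pre-Java-8 style: an anonymous class implementing the callback interface.
        button.setOnClick(new ClickListener() {
            @Override
            public void onClick() {
                System.out.println("click!");
            }
        });
        button.click();

        // Java 8 style: a lambda, since ClickListener is a functional interface.
        button.setOnClick(() -> System.out.println("click!"));
        button.click();
    }
}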