I'm going to reuse the OGNL library outside the scope of Struts2. I have a rather large set of formulas, which is why I would like to precompile all of them:
Ognl.parseExpression(expressionString);
But I'm not sure whether a precompiled expression can be used in a multi-threaded environment. Does anybody know if it can?
This PropertyUtils code from OGNL is written to be thread-safe, so I would guess that compiled expressions are intended to be thread-safe as well.
Further evidence is that most of the accessor APIs receive the mutable state as a context parameter (e.g. see PropertyAccessor), so the classes themselves have little mutable state, and immutable classes are intrinsically thread-safe. The developer guide urges extensions to be thread-safe. Finally, looking through the code, where there is mutable state it is guarded by a synchronized block; for example, see EvaluationPool.
In summary, it seems OGNL has been designed to be thread-safe; whether it actually is or not is another question! You could write a quick test to find out, using for example Concutest. Alternatively, if the number of threads is reasonable, storing all the expressions in a ThreadLocal sidesteps the issue altogether, at the cost of a little extra memory (or possibly not, since OGNL caches expressions anyway).
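For illustration, here is a minimal sketch of the ThreadLocal approach (the PerThreadExpressions class and the formula strings are made up; Ognl.parseExpression and Ognl.getValue are real OGNL entry points). Each thread lazily parses its own copies of the expressions, so no parsed tree is ever shared across threads:

import java.util.HashMap;
import java.util.Map;
import ognl.Ognl;
import ognl.OgnlException;

public class PerThreadExpressions {
    // Hypothetical formula list; substitute your real expressions.
    private static final String[] FORMULAS = { "name", "address.city" };

    // Each thread parses and caches its own copies of the expressions.
    private static final ThreadLocal<Map<String, Object>> PARSED =
            ThreadLocal.withInitial(() -> {
                Map<String, Object> parsed = new HashMap<>();
                try {
                    for (String formula : FORMULAS) {
                        parsed.put(formula, Ognl.parseExpression(formula));
                    }
                } catch (OgnlException e) {
                    throw new IllegalStateException(e);
                }
                return parsed;
            });

    public static Object evaluate(String formula, Object root) throws OgnlException {
        return Ognl.getValue(PARSED.get().get(formula), root);
    }
}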
I think your best option is to contact original developers, directly or through mailing list:
http://www.opensymphony.com/ognl/members.action
https://ognl.dev.java.net/servlets/ProjectMailingListList
The project seems to have been abandoned for some time, so there is hardly anybody else left who would know :/
Possible Duplicate:
When should one use final?
I tend to declare all variables final unless they need to be mutable. I consider this a good practice because it lets the compiler check that the identifier is used as I expect (e.g. that it is not mutated). On the other hand it clutters up the code, and perhaps this is not "the Java way".
I am wondering whether there is a generally accepted best practice regarding the non-required use of final variables, and whether there are other tradeoffs or aspects to this discussion that I should be aware of.
The "Java way" is inherently, intrinsically cluttery.
I say it's a good practice, but not one I follow.
Tests generally ensure I'm doing what I intend, and it's too cluttery for my aesthetics.
Some projects routinely apply final to all effectively final local variables. I personally find reading such code much easier, due to the lessened cognitive load. A non-final variable could be reassigned anywhere, and it's especially problematic in code with multiple levels of nested ifs and fors. You never know what code path may have reassigned it.
As for the concern of code clutter, when applied to local variables I don't find it damaging—in fact it makes me spot all declarations more easily due to syntax coloring.
Unfortunately, when final is used on parameters, catch blocks, enhanced-for loops and all other places except local variables in the narrow sense, the code does tend to become cluttered. This is quite unfortunate because a reassignment in these cases is even more confusing and they should really have been final by default.
There are code linting tools that will flag the reassignment of these variables, and that helps.
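A small made-up example of the difference in reading load (the names are arbitrary):

import java.util.List;

class FinalLocalsExample {
    int computeTotal(List<Integer> items) {
        final int base = 100; // can be ruled out as a source of change at a glance
        int total = base;     // non-final: any branch below may reassign it
        for (final Integer item : items) { // legal, but this is the clutter mentioned above
            total += item;
        }
        return total;
    }
}

Here a reader tracking a bug only needs to follow total; base and item are settled at their declarations.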
I consider it good practice, more for maintenance programmers (including me!) than for the compiler. It's easier to think about a method if I don't need to worry about which variables might be changing inside it.
Yes, it's a very good idea, because it clearly shows what fields must be provided at object construction.
I strongly disagree that it creates "code clutter"; it's a good and powerful aspect of the language.
As a design principle, you should make your classes immutable (all fields final) if you can, because immutable objects may be safely published (i.e. freely passed around without fear that they will be corrupted). Note that the fields themselves then need to refer to immutable objects too.
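A minimal sketch of such a class (Point is a made-up example): all fields are final and of immutable types, so instances can be freely shared between threads without synchronization.

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }
}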
It definitely gives you better code: it's easy to see which variables are going to be changed.
It also informs the compiler that the value is not going to change, which can result in better optimization.
Alongside that, it allows your IDE to give you a compile-time notification if you make a mistake.
Some good analysis tools, like PMD, advise always using final unless it is necessary not to, so by those tools' conventions it's a good practice.
But I think that many final tokens in the code can make it less human-friendly.
I would say yes, not so much for the compiler optimisation, but rather for readability.
But personally I don't use it. Java is quite verbose by itself, and if we followed everything considered "good practice", the code would be unreadable from all the boilerplate. It's a matter of preference, though.
You pretty much summed up the pros and cons...
I can just add another pro:
the reader of the code need not reason at all about the value of a final variable (except in rare bad-code cases).
So, yes, it's a good practice.
And the clutter isn't that bad, after you get used to it (like unix :-P). Plus, typical IDEs do it automatically for ya...
Is it safe to use the :volatile-mutable qualifier with deftype in a single-threaded program? This is a follow-up to this question, this one, and this one. (It's a Clojure question, but I added the "Java" tag because Java programmers are likely to have insights about it, too.)
I've found that I can get a significant performance boost in a program I'm working on by using :volatile-mutable fields in a deftype rather than atoms, but I'm worried because the docstring for deftype says:
Note well that mutable fields are extremely difficult to use correctly, and are present only to facilitate the building of higher level constructs, such as Clojure's reference types, in Clojure itself. They are for experts only - if the semantics and implications of :volatile-mutable or :unsynchronized-mutable are not immediately apparent to you, you should not be using them.
In fact, the semantics and implications of :volatile-mutable are not immediately apparent to me.
However, chapter 6 of Clojure Programming, by Emerick, Carper, and Grand says:
"Volatile" here has the same meaning as the volatile field modifier in
Java: reads and writes are atomic and must be executed in
program order; i.e., they cannot be reordered by the JIT compiler or
by the CPU. Volatiles are thus unsurprising and thread-safe — but
uncoordinated and still entirely open to race conditions.
This seems to imply that as long as accesses to a single volatile-mutable deftype field all take place within a single thread, there is nothing special to worry about. (Nothing special, in that I still have to be careful about how I handle state if I might be using lazy sequences.) So if nothing introduces parallelism into my Clojure program, there should be no special danger in using deftype with :volatile-mutable.
Is that correct? What dangers am I not understanding?
That's correct, it's safe. You just have to be sure that your context is really single-threaded. Sometimes it's not that easy to guarantee that.
There's no risk in terms of thread-safety or atomicity when using a volatile mutable (or just mutable) field in a single-threaded context, because there's only one thread so there's no chance of two threads writing a new value to the field at the same time, or one thread writing a new value based on outdated values.
As others have pointed out in the comments you might want to simply use an :unsynchronized-mutable field to avoid the cost introduced by volatile. That cost comes from the fact that every write must be committed to main memory instead of thread local memory. See this answer for more info about this.
At the same time, you gain nothing by using volatile in a single-threaded context, because there's no chance of one thread writing a new value that will not be "seen" by another thread reading the same field.
That's what a volatile is intended for, but it's irrelevant in a single-thread context.
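A minimal Java sketch of what volatile buys you across threads, and why it buys you nothing on a single thread (the StopFlag class is made up):

class StopFlag {
    volatile boolean stop = false; // without volatile, the worker may never see the write

    void demo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the write becomes visible
            }
        });
        worker.start();
        Thread.sleep(100);
        stop = true; // guaranteed visible to the worker because of volatile
        worker.join();
    }
}

With a single thread there is no second reader, so the visibility guarantee never comes into play.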
Also note that Clojure 1.7 introduced volatile!, intended to provide a "volatile box for managing state" as a faster alternative to atom, with a similar interface but without its compare-and-swap semantics. The only difference when using it is that you call vswap! and vreset! instead of swap! and reset!. I would use that instead of deftype with ^:volatile-mutable if I needed a volatile.
At the company I work for there's a document describing good practices that we should adhere to in Java. One of them is to avoid methods that return this, like for example in:
class Properties {
    public Properties add(String k, String v) {
        // store (k, v) somewhere
        return this;
    }
}
I would have such a class so that I'm able to write:
properties.add("name", "john").add("role","swd"). ...
I've seen this idiom many times, as in StringBuilder, and I don't find anything wrong with it.
Their argument is:
... can be the source of synchronization problems or failed expectations about the states of target objects.
I can't think of a situation where this could be true, can any of you give me an example?
EDIT: The document doesn't specify anything about mutability, so I don't see the difference between chaining the calls and doing:
properties.add("name", "john");
properties.add("role", "swd");
I'll try to get in touch with the originators, but I wanted to do it with my guns loaded; that's why I posted the question.
SOLVED: I got to talk with one of the authors. His original intention was apparently to avoid releasing objects that are not yet ready, as in a Builder pattern, and he explained that if a context switch happened between calls, the object could be left in an invalid state. I argued that this has nothing to do with returning this, since you could make the same mistake by calling the methods one by one, and more to do with synchronizing the building process properly. He admitted the document could be more explicit and will revise it soon. Victory is mine/ours!
My guess is that they are against mutable state (and often are rightly so). If you are not designing fluent interfaces returning this but rather return a new immutable instance of the object with the changed state, you can avoid synchronization problems or have no "failed expectations about the states of target objects". This might explain their requirement.
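A hypothetical sketch of that immutable alternative (the ImmutableProperties class is made up, not from the document in question): each call returns a new instance instead of mutating and returning this.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

final class ImmutableProperties {
    private final Map<String, String> entries;

    private ImmutableProperties(Map<String, String> entries) {
        this.entries = entries;
    }

    static ImmutableProperties empty() {
        return new ImmutableProperties(Collections.emptyMap());
    }

    // Copy-on-write: the receiver is never modified.
    ImmutableProperties add(String k, String v) {
        Map<String, String> copy = new HashMap<>(entries);
        copy.put(k, v);
        return new ImmutableProperties(Collections.unmodifiableMap(copy));
    }
}

Chaining still reads the same way, ImmutableProperties.empty().add("name", "john").add("role", "swd"), but each link in the chain is a fresh object, so no shared state is ever mutated.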
The only serious basis for the practice is avoiding mutable objects; the criticism that it is "confusing" and leads to "failed expectations" is quite weak. One should never use an object without first getting familiar with its semantics, and enforcing constraints on the API just to cater for those who opt out of reading the Javadoc is not a good practice at all, especially because, as you note, returning this to achieve a fluent API design is one of the standard approaches in Java, and indeed a very welcome one.
I think this approach can sometimes be really useful, for example in the 'builder' pattern.
I can say that in my organization this kind of thing is controlled by Sonar rules, and we don't have such a rule.
Another guess is that maybe the project was built on top of an existing codebase and this is a kind of legacy restriction.
So the only thing I can suggest here is to talk to the people who wrote this doc :)
Hope this helps
I think it's perfectly acceptable to use that pattern in some situations.
For example, as a Swing developer, I use GridBagLayout fairly frequently for its strengths and flexibility, but anyone who's ever used it (with its partner in crime, GridBagConstraints) knows that it can be quite verbose and not very readable.
A common workaround that I've seen online (and one that I use) is to subclass GridBagConstraints (GBConstraints) with a setter for each property, where each setter returns this. This allows the developer to chain the different properties on an as-needed basis, as in the sketch below.
The resulting code is about a quarter of the size, and far more readable/maintainable, even to the casual developer who might not be familiar with GridBagConstraints.
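A sketch of what that subclass might look like (the setter names here are invented; GridBagConstraints and its public fields are the real Swing API):

import java.awt.GridBagConstraints;

class GBConstraints extends GridBagConstraints {
    GBConstraints at(int x, int y) {
        this.gridx = x;
        this.gridy = y;
        return this;
    }

    GBConstraints span(int width, int height) {
        this.gridwidth = width;
        this.gridheight = height;
        return this;
    }

    GBConstraints fillBoth() {
        this.fill = GridBagConstraints.BOTH;
        return this;
    }
}

A call site then becomes something like panel.add(component, new GBConstraints().at(0, 1).span(2, 1).fillBoth()), one readable line instead of half a dozen field assignments.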
Why are people so emphatic about making every variable within a class "final"? I don't believe that there is any true benefit to adding final to private local variables, or really to use final for anything other than constants and passing variables into anonymous inner classes.
I'm not looking to start any sort of flame war, I just honestly want to know why this is so important to some people. Am I missing something?
Intent. Other people modifying your code won't change values they aren't supposed to change.
Compiler optimizations can be made if the compiler knows a field's value will never change.
Also, if EVERY variable in a class is final (as you refer to in your post), then you have an immutable class (as long as you don't expose references to mutable properties) which is an excellent way to achieve thread-safety.
The downside is that
final it is hard
final to read
final code or anything
final else when it all
final starts in the
final same way
Other than the obvious usage for creating constants and preventing subclassing/overriding, it is a personal preference in most cases since many believe the benefits of "showing programmer intent" are outweighed by the actual code readability. Many prefer a little less verbosity.
As for optimisations, that is a poor reason for using it (it is meaningless in many cases). It is the worst form of micro-optimisation, and in the days of JIT compilation it serves no purpose.
I would suggest using it if you prefer, and not using it if that is what you prefer. Since it will all come down to religious arguments in many cases, don't worry about it.
It marks that I'm not expecting that value to change, which is free documentation. The practice is because it clearly communicates the intent of that variable and forces the compiler to verify that. Beyond that, it allows the compiler to make optimizations.
It's important because immutability is important, particularly when dealing with a shared-memory model. If something is immutable then it's thread-safe, and that makes it a good enough argument to follow as a best practice.
http://www.artima.com/intv/blochP.html
One benefit for concurrent programming which hasn't been mentioned yet:
Final fields are guaranteed to be initialized when the execution of the constructor is completed.
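A minimal sketch of that guarantee (Holder is a made-up name): even if the instance is published via a data race, the Java memory model promises that any thread seeing a non-null Holder also sees value == 42; without final, a reader could observe the default 0.

class Holder {
    final int value;

    Holder() {
        value = 42;
    }

    static Holder instance; // published without synchronization (a data race)
}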
A project I'm currently working on is set up in such a way that whenever one presses "save" in Eclipse, the final modifier is added to every variable or field that is not changed in the code. And it hasn't hurt anybody yet.
There are many good reasons to use final, as noted elsewhere. One place where it is not worth it, IMO, is on parameters to a method. Strictly speaking, the keyword adds value here, but the value is not high enough to withstand the ugly syntax. I'd prefer to express that kind of information through unit tests.
I think use of final on values internal to a class is overkill unless the class is likely to be subclassed. The only advantage is around compiler optimizations, which surely may benefit.
DataflowAnomalyAnalysis: Found 'DD'-anomaly for variable 'variable' (lines 'n1'-'n2').
DataflowAnomalyAnalysis: Found 'DU'-anomaly for variable 'variable' (lines 'n1'-'n2').
DD and DU sound familiar... I want to say from testing and analysis relating to weakest pre- and post-conditions, but I don't remember the specifics.
NullAssignment: Assigning an Object to null is a code smell. Consider refactoring.
Wouldn't setting an object to null assist in garbage collection, if the object is a local object (not used outside of the method)? Or is that a myth?
MethodArgumentCouldBeFinal: Parameter 'param' is not assigned and could be declared final
LocalVariableCouldBeFinal: Local variable 'variable' could be declared final
Are there any advantages to using final parameters and variables?
LooseCoupling: Avoid using implementation types like 'LinkedList'; use the interface instead
If I know that I specifically need a LinkedList, why would I not use one to make my intentions explicitly clear to future developers? It's one thing to return the type that's highest up the class hierarchy that makes sense, but why would I not declare my variables as the most specific type that applies?
AvoidSynchronizedAtMethodLevel: Use block level rather than method level synchronization
What advantages does block-level synchronization have over method-level synchronization?
AvoidUsingShortType: Do not use the short type
My first languages were C and C++, but in the Java world, why should I not use the type that best describes my data?
DD and DU anomalies (if I remember correctly—I use FindBugs and the messages are a little different) refer to assigning a value to a local variable that is never read, usually because it is reassigned another value before ever being read. A typical case would be initializing some variable with null when it is declared. Don't declare the variable until it's needed.
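A made-up illustration of the DD case (fetchFromDatabase is a hypothetical helper):

class DdAnomalyExample {
    String load() {
        String result = null;          // DD anomaly: dead store, never read
        result = fetchFromDatabase();  // reassigned before the null is ever used
        return result;
    }

    private String fetchFromDatabase() {
        return "row";
    }
}

Declaring result at the point of first real assignment removes both the warning and the clutter.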
Assigning null to a local variable in order to "assist" the garbage collector is a myth. PMD is letting you know this is just counter-productive clutter.
Specifying final on a local variable should be very useful to an optimizer, but I don't have any concrete examples of current JITs taking advantage of this hint. I have found it useful in reasoning about the correctness of my own code.
Specifying interfaces in terms of… well, interfaces is a great design practice. You can easily change implementations of the collection without impacting the caller at all. That's what interfaces are all about.
I can't think of many cases where a caller would require a LinkedList, since it doesn't expose any API that isn't declared by some interface. If the client relies on that API, it's available through the correct interface.
Block level synchronization allows the critical section to be smaller, which allows as much work to be done concurrently as possible. Perhaps more importantly, it allows the use of a lock object that is privately controlled by the enclosing object. This way, you can guarantee that no deadlock can occur. Using the instance itself as a lock, anyone can synchronize on it incorrectly, causing deadlock.
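A minimal sketch of the private-monitor point (Counter is a made-up example): the lock object cannot be reached from outside, so no external code can synchronize on it and contribute to a deadlock.

class Counter {
    private final Object lock = new Object();
    private int count;

    void increment() {
        // non-critical work can happen here, outside the critical section
        synchronized (lock) {
            count++; // only the critical section holds the lock
        }
    }
}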
Operands of type short are promoted to int in any operations. This rule is letting you know that this promotion is occurring, and you might as well use an int. However, using the short type can save memory, so if it is an instance member, I'd probably ignore that rule.
DataflowAnomalyAnalysis: Found 'DD'-anomaly for variable 'variable' (lines 'n1'-'n2').
DataflowAnomalyAnalysis: Found 'DU'-anomaly for variable 'variable' (lines 'n1'-'n2').
No idea.
NullAssignment: Assigning an Object to null is a code smell. Consider refactoring.
Wouldn't setting an object to null assist in garbage collection, if the object is a local object (not used outside of the method)? Or is that a myth?
Objects referenced only by local variables become eligible for garbage collection once the method returns; setting them to null makes no difference.
Since it would leave less experienced developers wondering what the null assignment is all about, it may be considered a code smell.
MethodArgumentCouldBeFinal: Parameter 'param' is not assigned and could be declared final
LocalVariableCouldBeFinal: Local variable 'variable' could be declared final
Are there any advantages to using final parameters and variables?
It makes it clearer that the value won't change during the lifecycle of the object.
Also, if by any chance someone tries to assign it a value, the compiler will catch the coding error at compile time.
Consider this:
public void businessRule(SomeImportantArgument important) {
    if (important.xyz()) {
        doXyz();
    }

    // some fuzzy logic here
    important = new NotSoImportant();
    // add for/if's/while etc.

    if (important.abc()) { // <-- bug
        burnTheHouse();
    }
}
Suppose that you're assigned to solve some mysterious bug that from time to time burns the house.
You know what parameter was used; what you don't understand is WHY the burnTheHouse method is invoked when the conditions are not met (according to your findings).
It may take you a while to find out that at some point in the middle, someone changed the reference, and that you are working with a different object.
Using final helps to prevent this kind of thing.
LooseCoupling: Avoid using implementation types like 'LinkedList'; use the interface instead
If I know that I specifically need a LinkedList, why would I not use one to make my intentions explicitly clear to future developers? It's one thing to return the type that's highest up the class hierarchy that makes sense, but why would I not declare my variables as the most specific type that applies?
There is no difference in this case. I would think that since you are not using LinkedList-specific functionality, the suggestion is fair.
Today, LinkedList could make sense, but by using an interface you help yourself (or others) change it easily when it no longer does.
For small, personal projects this may not make sense at all, but since you're already using an analyzer, I guess you care about code quality.
Also, it helps less experienced developers create good habits. [I'm not saying you're one, but the analyzer doesn't know you ;)]
AvoidSynchronizedAtMethodLevel: Use block level rather than method level synchronization
What advantages does block-level synchronization have over method-level synchronization?
The smaller the synchronized section, the better. That's it.
Also, a synchronized method holds the object's lock for the whole method, whereas block-level synchronization holds it only for that specific section; in some situations that's all you need.
AvoidUsingShortType: Do not use the short type
My first languages were C and C++, but in the Java world, why should I not use the type that best describes my data?
I've never heard of this one, and I agree with you :) I've never used short, though.
My guess is that by not using it, you'll be helping yourself upgrade to int seamlessly.
Code smells are more about code quality than performance optimizations, so the advice is aimed at less experienced programmers and at avoiding pitfalls, rather than at improving program speed.
This way, you can save a lot of time and frustration when trying to change the code to fit a better design.
If the advice doesn't make sense, just ignore it. Remember, you are the developer in charge, and the tool is just that, a tool. If something goes wrong, you can't blame the tool, right?
Just a note on the final question.
Putting "final" on a variable results in it only be assignable once. This does not necessarily mean that it is easier to write, but it most certainly means that it is easier to read for a future maintainer.
Please consider these points:
Any variable marked final can be immediately classified as "will not change value while I'm watching".
By implication, this means that if all variables which do not change are marked final, then the variables NOT marked final actually WILL change.
This means you can already see, while reading the declarations, which variables to look out for, since they may change value later in the code, and the maintainer can spend his/her effort better because the code is more readable.
Wouldn't setting an object to null assist in garbage collection, if the object is a local object (not used outside of the method)? Or is that a myth?
The only thing it does is make it possible for the object to be GCd before the method's end, which is rarely ever necessary.
Are there any advantages to using final parameters and variables?
It makes the code somewhat clearer, since you don't have to worry about the value being changed somewhere when you analyze the code. More often than not, you don't need or want to change a variable's value once it's set anyway.
If I know that I specifically need a LinkedList, why would I not use one to make my intentions explicitly clear to future developers?
Can you think of any reason why you would specifically need a LinkedList?
It's one thing to return the type that's highest up the class hierarchy that makes sense, but why would I not declare my variables as the most specific type that applies?
I don't care much about local variables or fields, but if you declare a method parameter of type LinkedList, I will hunt you down and hurt you, because it makes it impossible for me to use things like Arrays.asList() and Collections.emptyList().
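A small sketch of that point (the method names are invented): the interface-typed parameter accepts the results of Arrays.asList() and Collections.emptyList(), while the LinkedList-typed one rejects both at compile time.

import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

class ParameterTypes {
    void process(List<String> names) { /* works with any List */ }

    void processStrict(LinkedList<String> names) { /* callers must supply a LinkedList */ }

    void caller() {
        process(Arrays.asList("a", "b"));        // fine
        process(Collections.emptyList());        // fine
        // processStrict(Arrays.asList("a", "b")); // does not compile
        processStrict(new LinkedList<>(Arrays.asList("a", "b"))); // forced to copy
    }
}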
What advantages does block-level synchronization have over method-level synchronization?
The biggest one is that it enables you to use a dedicated monitor object so that only those critical sections are mutually exclusive that need to be, rather than everything using the same monitor.
in the Java world, why should I not use the type that best describes my data?
Because types smaller than int are automatically promoted to int for all calculations, and you have to cast down to assign anything to them. This leads to cluttered code and quite a lot of confusion (especially when autoboxing is involved).
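A tiny example of that cast-down clutter (the names are arbitrary):

class ShortPromotion {
    void demo() {
        short a = 1;
        short b = 2;
        // short c = a + b;        // does not compile: a + b has type int
        short c = (short) (a + b); // the cast back down is required
        int d = a + b;             // with int, no cast and no clutter
    }
}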
AvoidUsingShortType: Do not use the short type
short is 16-bit, two's complement in Java.
Arithmetic on a short is carried out on ints (the operands are promoted), so mixing a short with anything in the integer family requires a runtime sign-extension conversion to the larger size; operating against a floating-point value requires sign extension plus a non-trivial conversion to IEEE 754.
I can't find proof, but with a 32-bit or 64-bit register you're no longer saving on 'processor instructions' at the bytecode level. You're parking a compact car in a semi-trailer's parking spot as far as the processor register is concerned.
If you are optimizing your project at the bytecode level, wow. Just wow. ;P
On the design side I agree with ignoring this PMD warning; just weigh accurately describing your data with a short against the conversions it incurs.
In my opinion, the performance hits incurred are minuscule on most machines. Ignore the error.
What advantages does block-level synchronization have over method-level synchronization?
Synchronizing an instance method is like wrapping its body in a synchronized(this) block: it locks on the whole object. (A static synchronized method likewise locks on the whole Class object, as if you had written synchronized(getClass()).)
Maybe you don't want that.
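A minimal sketch of the equivalence (Account is a made-up example): a synchronized instance method and a synchronized(this) block lock the same monitor, so they exclude each other on the same instance.

class Account {
    synchronized void deposit() {
        // locks the instance monitor ('this')
    }

    void withdraw() {
        synchronized (this) {
            // exactly the same monitor as deposit()
        }
    }
}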