Sonar Violation: Dodgy - Write to static field from instance method - java

I have a variable - "protected static Context jndi;" - in my class, where "Context" is an interface. When I try to access it in the method below, it generates the Sonar violation mentioned in the title:
public JMSQueueResource createQueueResource(String queueBindingName, String qcfBindingName, boolean messagePersisted, boolean autoAcknowledge, boolean nonJMS) throws JMSException, NamingException {
    JMSQueueResource qResource = new JMSQueueResource();
    try {
        jndi = createInitialContext();
        if (queueConnectionFactory == null) {
            queueConnectionFactory = (QueueConnectionFactory) lookup(jndi, qcfBindingName);
        }
        qResource.theQueueConnection = queueConnectionFactory.createQueueConnection();
        if (autoAcknowledge) {
            qResource.theQueueSession = qResource.theQueueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        } else {
            qResource.theQueueSession = qResource.theQueueConnection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
        }
        Queue queue = (Queue) lookup(jndi, queueBindingName);
        //if (nonJMS && queue instanceof com.ibm.mq.jms.MQQueue) {
        //    com.ibm.mq.jms.MQQueue q = (com.ibm.mq.jms.MQQueue) queue;
        //    q.setTargetClient(JMSC.MQJMS_CLIENT_NONJMS_MQ);
        //}
        qResource.theQueueSender = qResource.theQueueSession.createSender(queue);
        if (messagePersisted) {
            qResource.theQueueSender.setDeliveryMode(DeliveryMode.PERSISTENT);
        } else {
            qResource.theQueueSender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        }
        qResource.theQueueConnection.start();
    } catch (JMSException jmse) {
        throw jmse;
    } catch (NamingException ne) {
        throw ne;
    } finally {
        if (jndi != null) {
            jndi.close();
        }
    }
    return qResource;
}
I have seen suggestions such as using an AtomicInteger wrapper. What is the best fix for this problem?

The Sonar violation is a valid one, as mutating a static variable from an instance method can lead to some pretty messed-up behavior, for example:
How can you ensure that the field is initialized by an instance method before a static read access?
What happens when multiple threads access the field, directly or through the createQueueResource method?
According to the Java documentation, making it static and potentially accessing it from multiple threads is a bad idea:
An InitialContext instance is not synchronized against concurrent
access by multiple threads. Multiple threads each manipulating a
different InitialContext instance need not synchronize. Threads that
need to access a single InitialContext instance concurrently should
synchronize amongst themselves and provide the necessary locking.
Having a local variable as suggested seems like a reasonable first way to avoid the warning and the related problems.
Whether the construction of the context is expensive also depends on the factory that is used to provide it.
First worry about the correctness of the program; then optimize once you can measure where the real bottlenecks are.
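For illustration, here is a minimal sketch of that local-variable fix, keeping the rest of the method as in the question (createInitialContext and lookup are the question's own helpers; the middle of the method is abbreviated):
public JMSQueueResource createQueueResource(String queueBindingName, String qcfBindingName,
        boolean messagePersisted, boolean autoAcknowledge, boolean nonJMS)
        throws JMSException, NamingException {
    JMSQueueResource qResource = new JMSQueueResource();
    Context jndi = null;                // local variable, no shared static state
    try {
        jndi = createInitialContext();
        if (queueConnectionFactory == null) {
            queueConnectionFactory = (QueueConnectionFactory) lookup(jndi, qcfBindingName);
        }
        // ... create the connection, session and sender exactly as in the original method ...
        qResource.theQueueConnection.start();
    } finally {
        if (jndi != null) {
            jndi.close();               // each invocation closes only the context it created
        }
    }
    return qResource;
}
Note that queueConnectionFactory would raise the same warning if it is also a static field written here; the same treatment (a local variable or injection) applies to it.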
EDIT:
This link should provide more insight into the Spring application context and how to leverage the dependency injection of the Spring container to make use of the Context instead of storing it in a variable in a class: https://spring.io/understanding/application-context
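As a rough illustration of the dependency-injection idea (a sketch only: the @Component/@Autowired wiring and the JNDI-backed bean definitions are assumptions, not part of the original code), the container can hand the class an already-configured QueueConnectionFactory so it never needs to hold a Context itself:
import javax.jms.Queue;
import javax.jms.QueueConnectionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class JmsQueueResourceFactory {

    private final QueueConnectionFactory queueConnectionFactory;
    private final Queue queue;

    // Spring resolves these beans (e.g. defined via JndiObjectFactoryBean or a JMS
    // configuration), so no static Context field or manual JNDI lookup is needed here.
    @Autowired
    public JmsQueueResourceFactory(QueueConnectionFactory queueConnectionFactory, Queue queue) {
        this.queueConnectionFactory = queueConnectionFactory;
        this.queue = queue;
    }
}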

Related

Java proxy for Autocloseable (Jedis resources)

I am trying to find out whether it is possible to create a Java dynamic proxy that automatically closes AutoCloseable resources, without having to remember to wrap such resources in a try-with-resources block.
For example, I have a JedisPool that has a getResource method which can be used like this:
try (Jedis jedis = jedisPool.getResource()) {
// use jedis client
}
For now I have done something like this:
class JedisProxy implements InvocationHandler {
    private final JedisPool pool;

    public JedisProxy(JedisPool pool) {
        this.pool = pool;
    }

    public static JedisCommands newInstance(Pool<Jedis> pool) {
        return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
                JedisCommands.class.getClassLoader(),
                new Class[] { JedisCommands.class },
                new JedisProxy(pool));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Jedis client = pool.getResource()) {
            return method.invoke(client, args);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw e;
        }
    }
}
Now each time I call a method on Jedis (JedisCommands), the call is passed to the proxy, which gets a new client from the pool, executes the method, and returns the resource to the pool.
It works fine, but when I want to execute multiple methods on the client, a resource is taken from the pool and returned again for each method (which might be time consuming). Do you have any idea how to improve that?
You would end up with your own "transaction manager" in which you normally would return the object to the pool immediately, but if you had started a "transaction" the object wouldn't be returned to the pool until you've "committed" the "transaction".
Suddenly your problem with using try-with-resources turns into an actual problem due to the use of a hand-crafted custom mechanism.
Using try with resources pros:
Language built-in feature
Allows you to attach a catch block, and the resources are still released
Simple, consistent syntax, so that even if a developer weren't familiar with it, he would see all the Jedis code surrounded by it and (hopefully) think "So this must be the correct way to use this"
Cons:
You need to remember to use it
Your suggestion pros (You can tell me if I forget anything):
Automatic closing even if the developer doesn't close the resource, preventing a resource leak
Cons:
Extra code always means extra places to find bugs in
If you don't create a "transaction" mechanism, you may suffer from a performance hit (I'm not familiar with [jr]edis or your project, so I can't say whether it's really an issue or not)
If you do create it, you'll have even more extra code which is prone to bugs
Syntax is no longer simple, and will be confusing to anyone coming to the project
Exception handling becomes more complicated
You'll be making all your proxy-calls through reflection (a minor issue, but hey, it's my list ;)
Possibly more, depending on what the final implementation will be
If you think I'm not making valid points, please tell me. Otherwise my assertion will remain "you have a 'solution' looking for a problem".
I don’t think that this is going in the right direction. After all, developers should get used to handling resources correctly, and IDEs/compilers are able to issue warnings when AutoCloseable resources aren’t handled using try(…){}…
However, the task of creating a proxy that decorates all invocations, plus a way to decorate a batch of multiple actions as a whole, is of a general nature and therefore has a general solution:
class JedisProxy implements InvocationHandler {
    private final JedisPool pool;

    public JedisProxy(JedisPool pool) {
        this.pool = pool;
    }

    public static JedisCommands newInstance(Pool<Jedis> pool) {
        return (JedisCommands) java.lang.reflect.Proxy.newProxyInstance(
                JedisCommands.class.getClassLoader(),
                new Class[] { JedisCommands.class },
                new JedisProxy(pool));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try (Jedis client = pool.getResource()) {
            return method.invoke(client, args);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        }
    }

    public static void executeBatch(JedisCommands c, Consumer<JedisCommands> action) {
        InvocationHandler ih = Proxy.getInvocationHandler(c);
        if (!(ih instanceof JedisProxy))
            throw new IllegalArgumentException();
        // Jedis is AutoCloseable and implements JedisCommands
        try (Jedis actual = ((JedisProxy) ih).pool.getResource()) {
            action.accept(actual);
        }
    }

    public static <R> R executeBatch(JedisCommands c, Function<JedisCommands, R> action) {
        InvocationHandler ih = Proxy.getInvocationHandler(c);
        if (!(ih instanceof JedisProxy))
            throw new IllegalArgumentException();
        try (Jedis actual = ((JedisProxy) ih).pool.getResource()) {
            return action.apply(actual);
        }
    }
}
Note that the type conversion of a Pool<Jedis> to a JedisPool looked suspicious to me but I didn’t change anything in that code as I don’t have these classes to verify it.
Now you can use it like
JedisCommands c=JedisProxy.newInstance(pool);
c.someAction();// aquire-someaction-close
JedisProxy.executeBatch(c, jedi-> {
jedi.someAction();
jedi.anotherAction();
}); // aquire-someaction-anotherAction-close
ResultType foo = JedisProxy.executeBatch(c, jedi-> {
jedi.someAction();
return jedi.someActionReturningValue(…);
}); // aquire-someaction-someActionReturningValue-close-return the value
The batch execution requires the instance to be a proxy, otherwise an exception is thrown as it’s clear that this method cannot guarantee a particular behavior for an unknown instance with an unknown life cycle.
Also, developers now have to be aware of the proxy and the batch execution feature, just like they have to be aware of resources and the try(…){} statement when not using a proxy. On the other hand, if they aren’t, they lose performance when invoking multiple methods on a proxy without using the batch method, whereas they leak resources when invoking multiple methods without try(…){} on an actual, non-proxy resource…

Java : Synchronization of code

I have this piece of code, shown below.
Our application runs on 5 web servers behind a load balancer, all connecting to one Memcache instance.
I guess that this synchronization works only within one instance.
Please let me know how I can synchronize this piece of code when 5 web servers are trying to access the Memcache.
public class Memcache {
    private MemcachedClient memclient = null;
    private static Memcache instance = null;

    public static Memcache getInstance() {
        if (instance == null) {
            try {
                synchronized (Memcache.class) {
                    instance = new Memcache();
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
        return instance;
    }

    private Memcache() throws IOException {
        MemcachedClientBuilder builder = new XMemcachedClientBuilder();
        memclient = builder.build();
    }
}
Why not initialize it like this?
private static Memcache instance = new Memcache();
Bear in mind that what you tried to achieve with the synchronization here is problematic:
two threads might both pass the if (instance == null) check (a context switch might occur right after that line).
So you can consider the double-checked locking pattern,
but in some versions of Java there is a problem with it.
The link I provided has info about the problem, and
in this link you can read about the singleton with the volatile keyword.
I still would go for the option I suggested above.
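For completeness, here is a minimal sketch of the double-checked locking variant with volatile described in those links (it assumes Java 5+ memory-model semantics and wraps the checked IOException from the question's constructor):
import java.io.IOException;

public class Memcache {
    private static volatile Memcache instance = null; // volatile is required for safe double-checked locking

    public static Memcache getInstance() {
        if (instance == null) {                        // first check, without locking
            synchronized (Memcache.class) {
                if (instance == null) {                // second check, under the lock
                    try {
                        instance = new Memcache();
                    } catch (IOException e) {
                        throw new RuntimeException(e); // constructor declares IOException
                    }
                }
            }
        }
        return instance;
    }

    private Memcache() throws IOException {
        // build the MemcachedClient as in the question
    }
}
Note that any of these patterns only guard a single JVM; coordinating the 5 web servers in front of one Memcached instance would need external coordination (for example a distributed lock), which in-process synchronization cannot provide.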
You can use the lazily initialized class-holder idiom to get thread-safe initialization. Because the Memcache instance is created in a static initializer, no further synchronization constructs are needed. The first call to getInstance() by any thread causes MemcacheHolder to be loaded and initialized, which makes the Memcache instance available to the calling code.
public class MemcacheFactory {
    private static class MemcacheHolder {
        // Initialized once, on first access, by the JVM's class-loading guarantees.
        // Note: the Memcache constructor's checked IOException would need handling here.
        public static Memcache instance = new Memcache();
    }

    public static Memcache getInstance() {
        return MemcacheFactory.MemcacheHolder.instance;
    }
}

How can I make this thread safe?

I have a server that receives various xml messages from clients (one thread per client) and routes the messages to different functions depending on the message type. E.g. if the first element in the message contains the string 'login', it signifies that this is a login message, so route the message to the login() function.
Anyway, I want to make this thread safe so things don't get messed up if multiple clients are connected and the dispatcher switches threads in the middle of the message routing. So here is how I am routing the messages -
public void processMessagesFromClient(Client client)
{
    Document message;
    while (true)
    {
        try
        {
            message = client.inputStream.readObject();
            /*
             * Determine the message type
             */
            String messageType = getMessageType(message);
            // Route the message depending on its type
            switch (messageType)
            {
                case LOGIN:
                    userModel.handleLogin();
                    ...
                    ...
                    ...
                etc...
            }
        } catch (Exception e) {}
    }
}
So how can I make this thread safe? I figure I need to put a synchronise statement in somewhere, but I'm not sure where. Also, I've been reading around on the subject and I found this post, which says there is an issue with using synchronise on 'this':
https://stackoverflow.com/a/416198/1088617
And another post here which says singletons aren't suitable for using synchronise on (my class in the code above is a singleton): https://stackoverflow.com/a/416202/1088617
Your class is already thread safe, because you are only using local variables.
Thread safety only comes into play when you access class state (ie fields), which your code doesn't (seem to) do.
What you are talking about is serialization - you want to funnel all message processing through one point to guarantee that message processing is one-at-a-time (starts and finishes atomically). The solution is simple: Employ a static synchronized method:
public void processMessagesFromClient(Client client) {
    while (true) {
        processMessage(client);
    }
}

private static synchronized void processMessage(Client client) {
    try {
        Document message = client.inputStream.readObject();
        String messageType = getMessageType(message);
        // Route the message depending on its type
        switch (messageType) {
            case LOGIN:
                userModel.handleLogin();
                ...
            etc...
        }
    } catch (Exception e) {}
}
FYI static synchronized methods use the Class object as the lock. This code will make your code behave like a single thread, which your question seems to want.
I would actually have a message handler thread which is responsible for reading incoming messages. This will then hand off processing to a worker thread to do the time consuming processing of the message. You can use the Java ThreadPoolExecutor to manage this.
If you already have 1 thread per connection, then the only thing that you have to synchronize are the functions which handle the events (i.e. functions like userModel.handleLogin()).
I guess the best solution would be to use a thread-safe queue, such as a ConcurrentLinkedQueue or a BlockingQueue, and use a single worker thread to pick up the values and run the actions one by one.
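A minimal sketch of that idea (class and method names here are illustrative, not from the question): client threads enqueue work, and a single worker drains the queue so messages are handled strictly one at a time.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessageDispatcher {
    // client reader threads enqueue work; one worker thread drains it in order
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public MessageDispatcher() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run();   // processed strictly one at a time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // called from the per-client reader threads after the message has been read
    public void submit(Runnable messageHandler) {
        queue.add(messageHandler);
    }
}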
Provided you have one of these objects per thread, you don't have a problem. You only need to synchronize access to a shared object which can be modified by one of the threads.
public void processMessagesFromClient(Client client) {
    while (true) {
        processMessage(client);
    }
}

private void processMessage(Client client) {
    try {
        Document message = client.inputStream.readObject();
        String messageType = getMessageType(message);
        // Route the message depending on its type
        switch (messageType) {
            case LOGIN:
                userModel.handleLogin();
                ...
            etc...
        }
    } catch (Exception e) {}
}
You need to know which resource should only be used by one thread at a time.
In your case it is likely that reading the next message needs to be protected:
synchronized (lock) {
    message = client.inputStream.readObject();
}
However, your code sample does not really show what needs to be protected against concurrent access.
The method itself is thread safe.
However, noting that your class is a singleton, you might want to use double-checked locking in your getInstance() to ensure thread safety.
Also, you should make sure your instance field is static:
class Foo {
    private static volatile Foo instance = null;

    public static Foo getInstance() {
        if (instance == null)
        {
            synchronized (Foo.class)   // 'this' is not available in a static method
            {
                if (instance == null)
                    instance = new Foo();
            }
        }
        return instance;
    }
}

Factory of singleton objects: is this code thread-safe?

I have a common interface for a number of singleton implementations. The interface defines an initialization method which can throw a checked exception.
I need a factory which will return cached singleton implementations on demand, and I wonder if the following approach is thread-safe.
UPDATE1: Please don't suggest any 3rd-party libraries, as this would require obtaining legal clearance due to possible licensing issues :-)
UPDATE2: this code is likely to be used in an EJB environment, so it's preferable not to spawn additional threads or use stuff like that.
interface Singleton
{
    void init() throws SingletonException;
}

public class SingletonFactory
{
    private static ConcurrentMap<String, AtomicReference<? extends Singleton>> CACHE =
        new ConcurrentHashMap<String, AtomicReference<? extends Singleton>>();

    public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
        throws SingletonException
    {
        String key = clazz.getName();
        if (CACHE.containsKey(key))
        {
            return readEventually(key);
        }

        AtomicReference<T> ref = new AtomicReference<T>(null);
        if (CACHE.putIfAbsent(key, ref) == null)
        {
            try
            {
                T instance = clazz.newInstance();
                instance.init();
                ref.set(instance); // ----- (1) -----
                return instance;
            }
            catch (Exception e)
            {
                throw new SingletonException(e);
            }
        }

        return readEventually(key);
    }

    @SuppressWarnings("unchecked")
    private static <T extends Singleton> T readEventually(String key)
    {
        T instance = null;
        AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
        do
        {
            instance = ref.get(); // ----- (2) -----
        }
        while (instance == null);
        return instance;
    }
}
I'm not entirely sure about lines (1) and (2). I know that referenced object is declared as volatile field in AtomicReference, and hence changes made at line (1) should become immediately visible at line (2) - but still have some doubts...
Other than that - I think the use of ConcurrentHashMap addresses the atomicity of putting a new key into the cache.
Do you guys see any concerns with this approach? Thanks!
P.S.: I know about static holder class idiom - and I don't use it due to ExceptionInInitializerError (which any exception thrown during singleton instantiation is wrapped into) and subsequent NoClassDefFoundError which are not something I want to catch. Instead, I'd like to leverage the advantage of dedicated checked exception by catching it and handling it gracefully rather than parse the stack trace of EIIR or NCDFE.
You have gone to a lot of work to avoid synchronization, and I assume the reason for doing this is for performance concerns. Have you tested to see if this actually improves performance vs a synchronized solution?
The reason I ask is that the Concurrent classes tend to be slower than the non-concurrent ones, not to mention the additional level of redirection with the atomic reference. Depending on your thread contention, a naive synchronized solution may actually be faster (and easier to verify for correctness).
Additionally, I think that you can possibly end up with an infinite loop when a SingletonException is thrown during a call to instance.init(). The reason being that a concurrent thread waiting in readEventually will never end up finding its instance (since an exception was thrown while another thread was initializing the instance). Maybe this is the correct behaviour for your case, or maybe you want to set some special value to the instance to trigger an exception to be thrown to the waiting thread.
Having all of these concurrent/atomic things would cause more lock issues than just putting
synchronized(clazz){}
blocks around the getter. Atomic references are for references that are UPDATED and you don't want collision. Here you have a single writer, so you do not care about that.
You could optimize it further by having a hashmap, and only if there is a miss, use the synchronized block:
public static <T> T get(Class<T> cls) throws Exception {
    // No-lock try
    T ref = cache.get(cls);
    if (ref != null) {
        return ref;
    }
    // Miss, so use create lock
    synchronized (cls) {          // without this, singletons could be created twice
        synchronized (cache) {    // prevent table rebuild/transfer contentions -- RARE
            // Double-check create if lock backed up
            ref = cache.get(cls);
            if (ref == null) {
                ref = cls.newInstance();   // newInstance() throws checked exceptions
                cache.put(cls, ref);
            }
            return ref;
        }
    }
}
Consider using Guava's CacheBuilder. For example:
private static Cache<Class<? extends Singleton>, Singleton> singletons = CacheBuilder.newBuilder()
    .build(
        new CacheLoader<Class<? extends Singleton>, Singleton>() {
            public Singleton load(Class<? extends Singleton> key) throws SingletonException {
                try {
                    Singleton singleton = key.newInstance();
                    singleton.init();
                    return singleton;
                } catch (SingletonException se) {
                    throw se;
                } catch (Exception e) {
                    throw new SingletonException(e);
                }
            }
        });

public static <T extends Singleton> T getSingletonInstance(Class<T> clazz) {
    return (T) singletons.get(clazz);
}
Note: this example is untested and uncompiled.
Guava's underlying Cache implementation will handle all caching and concurrency logic for you.
This looks like it would work, although I might consider some sort of sleep, even a nanosecond or so, when testing for the reference to be set. The spin-test loop is going to be extremely expensive.
Also, I would consider improving the code by passing the AtomicReference to readEventually() so you can avoid the containsKey() and then putIfAbsent() race condition. So the code would be:
AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
if (ref != null) {
    return readEventually(ref);
}

AtomicReference<T> newRef = new AtomicReference<T>(null);
AtomicReference<T> oldRef = CACHE.putIfAbsent(key, newRef);
if (oldRef != null) {
    return readEventually(oldRef);
}
...
The code is not generally thread safe because there is a gap between the CACHE.containsKey(key) check and the CACHE.putIfAbsent(key, ref) call. It is possible for two threads to call simultaneously into the method (especially on multi-core/processor systems) and both perform the containsKey() check, then both attempt to do the put and creation operations.
I would protect that execution of the getSingletonInstnace() method using either a lock or by synchronizing on a monitor of some sort.
Google "Memoizer". Basically, instead of AtomicReference, use a Future.
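For illustration, here is a minimal sketch of that Memoizer idea against the question's Singleton interface (every name other than Singleton and SingletonException is an assumption): a FutureTask is published atomically via putIfAbsent, so every thread waits on the same one-time initialization and sees any init() failure instead of spinning.
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class FutureSingletonFactory {
    private static final ConcurrentMap<String, FutureTask<Singleton>> CACHE =
            new ConcurrentHashMap<String, FutureTask<Singleton>>();

    public static <T extends Singleton> T getSingletonInstance(final Class<T> clazz)
            throws SingletonException {
        String key = clazz.getName();
        FutureTask<Singleton> task = CACHE.get(key);
        if (task == null) {
            FutureTask<Singleton> newTask = new FutureTask<Singleton>(new Callable<Singleton>() {
                public Singleton call() throws Exception {
                    T instance = clazz.newInstance();
                    instance.init();
                    return instance;
                }
            });
            task = CACHE.putIfAbsent(key, newTask);
            if (task == null) {
                task = newTask;
                task.run();                      // only the publishing thread runs the initialization
            }
        }
        try {
            return clazz.cast(task.get());       // other threads block here instead of spinning
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new SingletonException(e);
        } catch (ExecutionException e) {
            throw new SingletonException(e);     // wraps the original init failure
        }
    }
}
One caveat of this sketch: a failed initialization stays cached; removing the failed task from the map on failure would allow retries.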

dao as a member of a servlet - normal?

I guess the DAO is thread safe, since it does not use any class members.
So can it be used without any problem as a private field of a servlet? We need only one copy, and
multiple threads can access it simultaneously, so why bother creating a local variable, right?
"DAO" is just a general term for database abstraction classes. Whether they are threadsafe or not depends on the specific implementation.
This bad example could be called a DAO, but it would get you into trouble if multiple threads call the insert method at the same time.
class MyDAO {
    private Connection connection = null;

    public boolean insertSomething(Something o) throws Exception {
        try {
            connection = getConnection();
            //do insert on connection.
            return true;
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }
}
So the answer is: if your DAO handles connections and transactions right, it should work.
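To make that concrete, here is a minimal sketch of a version that is safe to keep as a single servlet field, assuming a thread-safe DataSource is injected (Something, getValue() and the table name are placeholders mirroring the example above): the field holds only the shared DataSource, and each call works on its own local Connection.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

class MyDAO {
    private final DataSource dataSource;   // thread-safe and shared; no per-call state stored in fields

    MyDAO(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public boolean insertSomething(Something o) throws SQLException {
        // the Connection is a local variable, so concurrent callers never share it
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement("INSERT INTO something (value) VALUES (?)")) {
            ps.setString(1, o.getValue());  // placeholder mapping from the Something object
            return ps.executeUpdate() == 1;
        }
    }
}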
