I am trying to see if this code is thread safe.
private void eventProcessing(final AcEvent acEvent) {
    String tree = null;
    String symbol = null;
    try {
        if (acEvent.isDatafileTransaction()) {
            final AcEventDatafileTransaction datafileTransaction = acEvent.getDatafileTransaction();
            tree = datafileTransaction.getTreeId();
            symbol = datafileTransaction.getSymbol();
            System.out.println(tree + " " + symbol);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Do I need to make the methods in AcEvent or AcEventDatafileTransaction synchronized? Both classes only have getter methods, as you can see in the code. I am thinking that, regardless of the number of threads, there will not be a problem accessing the right values for tree and symbol. Can I say this code is thread safe, or do I need to make changes to make it thread safe?
I am putting eventProcessing in a Callable:
threadpool.submit(new Callable<Integer>() {
    public Integer call() throws Exception {
        eventProcessing(event);
        return 0;
    }
});
EDIT:
These are the two lines that follow what I have written in eventProcessing:
final List<AcStreamAble> transactions = getDatafileTransactions(datafileTransaction);
final List<AcEventRecordOperation> recordOperations = getTransactionsAsListOfRecordOperations(datafileTransaction, transactions);
I am adding a couple of methods that come next. Tell me if this changes anything.
private List<AcEventRecordOperation> getTransactionsAsListOfRecordOperations(final AcEventDatafileTransaction datafileTransaction, final List transactions) {
    final List<AcEventRecordOperation> recordOperations = new ArrayList<AcEventRecordOperation>(transactions.size());
    int i = 0;
    for (final Object o : transactions) {
        if (!datafileTransaction.isRecordOperation(o)) {
            log.debug("[" + i + "] Ignored transaction - was not a RecordOperation");
        } else {
            recordOperations.add(datafileTransaction.convert(o));
        }
        i++; // advance the index so the log message reflects the current transaction
    }
    return recordOperations;
}
In the above method, even though there is a list and objects are added to it, I am thinking that since it is a local variable, it will be thread safe.
private List<AcStreamAble> getDatafileTransactions(final AcEventDatafileTransaction datafileTransaction) throws IOException {
    final List<AcStreamAble> transactions = new ArrayList<AcStreamAble>();
    datafileTransaction.addTransactions(transactions);
    return transactions;
}
Here, since the datafileTransaction object is different for different threads, I am assuming it is thread safe.
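The reasoning above is the standard stack-confinement argument: local variables live on each thread's own stack, so they cannot be shared. A minimal, hypothetical sketch (the names are illustrative, not from the code above) shows that concurrent calls using only locals always compute the right answer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StackConfinement {
    // 'sum' and 'i' are locals: every call, on every thread, gets its own copy.
    static int sum(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            results.add(pool.submit(() -> sum(100)));
        }
        for (Future<Integer> f : results) {
            System.out.println(f.get()); // 5050 every time, no matter the interleaving
        }
        pool.shutdown();
    }
}
```

The same argument stops applying the moment a method reads or writes shared state, such as a static field or an object reachable from several threads.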
I'm trying to split a list of objects into smaller sublists and to process them separately on different threads. So I have the following code:
List<Instance> instances = xmlInstance.readInstancesFromXml();
List<Future<List<Instance>>> futureList = new ArrayList<>();
int nThreads = 4;
ExecutorService executor = Executors.newFixedThreadPool(nThreads);
final List<List<Instance>> instancesPerThread = split(instances, nThreads);
for (List<Instance> instancesThread : instancesPerThread) {
    if (instancesThread.isEmpty()) {
        break;
    }
    Callable<List<Instance>> callable = new MyCallable(instancesThread);
    Future<List<Instance>> submit = executor.submit(callable);
    futureList.add(submit);
}
instances.clear();
for (Future<List<Instance>> future : futureList) {
    try {
        final List<Instance> instancesFromFuture = future.get();
        instances.addAll(instancesFromFuture);
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
executor.shutdown();
try {
    executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException ie) {
    ie.printStackTrace();
}
And the MyCallable class :
public class MyCallable implements Callable<List<Instance>> {

    private List<Instance> instances;

    public MyCallable(List<Instance> instances) {
        this.instances = Collections.synchronizedList(instances);
    }

    @Override
    public List<Instance> call() throws Exception {
        for (Instance instance : instances) {
            // process each object and change some fields
        }
        return instances;
    }
}
Split method (it splits a given list into a given number of lists, trying to keep the sublists almost the same size):
public static List<List<Instance>> split(List<Instance> list, int nrOfThreads) {
    List<List<Instance>> parts = new ArrayList<>();
    final int nrOfItems = list.size();
    int minItemsPerThread = nrOfItems / nrOfThreads;
    int maxItemsPerThread = minItemsPerThread + 1;
    int threadsWithMaxItems = nrOfItems - nrOfThreads * minItemsPerThread;
    int start = 0;
    for (int i = 0; i < nrOfThreads; i++) {
        int itemsCount = (i < threadsWithMaxItems ? maxItemsPerThread : minItemsPerThread);
        int end = start + itemsCount;
        parts.add(list.subList(start, end));
        start = end;
    }
    return parts;
}
So, when I try to execute it, I get java.util.ConcurrentModificationException on the line for (Instance instance : instances) {. Can somebody give me any ideas why this is happening?
public MyCallable(List<Instance> instances) {
    this.instances = Collections.synchronizedList(instances);
}
Using synchronizedList like this doesn't help you in the way you think it might.
It's only useful to wrap a list in a synchronizedList at the time you create it (e.g. Collections.synchronizedList(new ArrayList<>())). Otherwise, the underlying list is directly accessible, and thus accessible in an unsynchronized way.
Additionally, synchronizedList only synchronizes for the duration of individual method calls, not for the whole time while you are iterating over it.
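If you did need to iterate over a synchronizedList safely, the documented pattern is to hold the wrapper's monitor for the entire iteration; a minimal sketch (with String elements instead of Instance, for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncIteration {
    public static void main(String[] args) {
        // Wrap the list at creation time, so nobody ever holds a reference
        // to the unwrapped backing list.
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("a");
        list.add("b");
        StringBuilder sb = new StringBuilder();
        // Iteration is a sequence of individual method calls, so it must be
        // guarded explicitly by synchronizing on the wrapper itself.
        synchronized (list) {
            for (String s : list) {
                sb.append(s);
            }
        }
        System.out.println(sb);
    }
}
```

Any writer that also synchronizes on `list` is then excluded for the duration of the loop, which is exactly what the per-call locking of synchronizedList cannot give you.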
The easiest fix here is to take a copy of the list in the constructor:
this.instances = new ArrayList<>(instances);
Then, nobody else has access to that list, so they can't change it while you are iterating it.
This is different from taking a copy of the list in the call method, because the copy is made in a single-threaded part of the code: no other thread can be modifying the list while you are copying it, so you won't get the ConcurrentModificationException. (You can get a CME in single-threaded code, but not using this copy constructor.) Doing the copy in the call method would mean the list is iterated there, in exactly the same way as in the for loop you already have.
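Put together, the fix looks like the sketch below (using String elements rather than Instance, purely for illustration): the defensive copy happens on the submitting thread, before any worker can touch it, so later mutation of the original list is harmless.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CopyCallable implements Callable<List<String>> {
    private final List<String> instances;

    public CopyCallable(List<String> instances) {
        // Defensive copy made on the submitting thread: from here on,
        // no other thread can structurally modify our private list.
        this.instances = new ArrayList<>(instances);
    }

    @Override
    public List<String> call() {
        List<String> out = new ArrayList<>();
        for (String s : instances) { // safe: nobody else sees this list
            out.add(s.toUpperCase());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<String> shared = new ArrayList<>(Arrays.asList("a", "b"));
        Future<List<String>> f = pool.submit(new CopyCallable(shared));
        shared.clear(); // mutating the original no longer affects the callable
        System.out.println(f.get());
        pool.shutdown();
    }
}
```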
I am trying to create a mechanism to cache objects into memory, for future use, even if these objects are out of context. There would be a parallel deterministic process which will dictate (by a unique ID) whether the cached object should be retrieved again or if it should completely die. Here is the simplest example, with debug information to make things easier:
package com.panayotis.resurrect;
import java.util.Map;
import java.util.HashMap;
public class ZObject {

    private static int IDGEN = 1;

    protected int id;
    private boolean isKilled = false;

    public static final Map<Integer, ZObject> zombies = new HashMap<>();

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++)
            System.out.println("* INIT: " + new ZObject().toString());
        gc();
        sleep(1000);
        if (!zombies.isEmpty())
            ZObject.revive(2);
        gc();
        sleep(1000);
        if (!zombies.isEmpty())
            ZObject.kill(1);
        gc();
        sleep(1000);
        gc();
        sleep(1000);
        gc();
        sleep(1000);
        gc();
        sleep(1000);
    }

    public ZObject() {
        this.id = IDGEN++;
    }

    protected final void finalize() throws Throwable {
        String debug = "" + zombies.size();
        String name = toString();
        String style;
        if (!isKilled) {
            style = "* Zombie";
            zombies.put(id, this);
        } else {
            style = "*** FINAL ***";
            zombies.remove(id);
            super.finalize();
        }
        dumpZombies(style + " " + debug, name);
    }

    public String toString() {
        return (isKilled ? "killed" : zombies.containsKey(id) ? "zombie" : "alive ") + " " + id;
    }

    public static ZObject revive(int peer) {
        ZObject obj = zombies.remove(peer);
        if (obj != null) {
            System.out.println("* Revive " + obj.toString());
            obj.isKilled = false;
        } else
            System.out.println("* Not found as zombie " + peer);
        return obj;
    }

    public static void kill(int peer) {
        int size = zombies.size();
        ZObject obj = zombies.get(peer);
        String name = obj == null ? peer + " TERMINATED " : obj.toString();
        zombies.remove(peer);
        dumpZombies("* Kill " + size, name);
        if (obj != null)
            obj.isKilled = true;
    }

    private static void dumpZombies(String baseMsg, String name) {
        System.out.println(baseMsg + "->" + zombies.size() + " " + name);
        for (Integer key : zombies.keySet())
            System.out.println("* " + zombies.get(key).toString());
    }

    public static void gc() {
        System.out.println("* Trigger GC");
        for (int i = 0; i < 50; i++)
            System.gc();
    }

    public static void sleep(int howlong) {
        try {
            Thread.sleep(howlong);
        } catch (InterruptedException ex) {
        }
    }
}
This code will create 5 objects, revive object 2 and then kill object 1. I was expecting:
After the first resurrection, and since the object doesn't have any other references yet, for it to re-enter the zombie state through finalize (which it doesn't)
After killing an object, for it to be completely removed from memory, again through the finalize method
It seems, in other words, that finalize is called only once. I have checked that this is not a byproduct of the HashMap object with this code:
package com.panayotis.resurrect;
import java.util.HashMap;
public class TestMap {

    private static final HashMap<Integer, TestMap> map = new HashMap<>();
    private static int IDGEN = 1;
    private final int id;

    public static void main(String[] args) {
        map.put(1, new TestMap(1));
        map.put(2, new TestMap(2));
        map.put(3, new TestMap(3));
        map.remove(1);
        System.out.println("Size: " + map.size());
        for (int i = 0; i < 50; i++)
            System.gc();
    }

    public TestMap(int id) {
        this.id = id;
    }

    protected void finalize() throws Throwable {
        System.out.println("Finalize " + id);
        super.finalize();
    }
}
So, why this behavior? I am using Java 1.8.
EDIT Since this is not directly possible, any ideas how I can accomplish this?
This is exactly the specified behavior:
Object.finalize()
After the finalize method has been invoked for an object, no further action is taken until the Java virtual machine has again determined that there is no longer any means by which this object can be accessed by any thread that has not yet died, including possible actions by other objects or classes which are ready to be finalized, at which point the object may be discarded.
The finalize method is never invoked more than once by a Java virtual machine for any given object.
You seem to have a wrong understanding of what the finalize() method does. This method does not free the object's memory. Declaring a custom non-trivial finalize() method actually prevents the object's memory from being freed, as it has to be kept in memory for the execution of that method and afterwards, until the garbage collector has determined that it has become unreachable again. Not calling finalize() again does not imply that the object doesn't get freed; it implies that it will be freed without calling finalize() again.
Instances of classes without a custom finalize() method or having a “trivial” finalize method (being empty or solely consisting of a super.finalize() call to another trivial finalizer) are not going through the finalization queue at all and are both, allocated faster and reclaimed faster.
That’s why you should never try to implement an object cache just to save memory; the result will always be less efficient than the JVM’s own memory management. But if you are managing an actually expensive resource, you may handle it by separating it into two kinds of objects: a front end providing the API to the application, which may get garbage collected whenever the application no longer uses it, and a back-end object describing the actual resource, which is not directly seen by the application and may get reused.
It is implied that the resource is expensive enough to justify the weight of this separation. Otherwise, it’s not really a resource worth caching.
// front-end class
public class Resource {

    final ActualResource actual;

    Resource(ActualResource actual) {
        this.actual = actual;
    }

    public int getId() {
        return actual.getId();
    }

    public String toString() {
        return actual.toString();
    }
}
class ActualResource {

    int id;

    ActualResource(int id) {
        this.id = id;
    }

    int getId() {
        return id;
    }

    @Override
    public String toString() {
        return "ActualResource[id=" + id + ']';
    }
}
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ResourceManager {

    static final ReferenceQueue<Resource> QUEUE = new ReferenceQueue<>();
    static final List<ActualResource> FREE = new ArrayList<>();
    static final Map<WeakReference<?>, ActualResource> USED = new HashMap<>();
    static int NEXT_ID;

    public static synchronized Resource getResource() {
        // Reclaim the actual resources of all garbage-collected front ends
        // before handing out a resource.
        for (;;) {
            Reference<?> t = QUEUE.poll();
            if (t == null) break;
            ActualResource r = USED.remove(t);
            if (r != null) FREE.add(r);
        }
        ActualResource r;
        if (FREE.isEmpty()) {
            System.out.println("allocating new resource");
            r = new ActualResource(NEXT_ID++);
        } else {
            System.out.println("reusing resource");
            r = FREE.remove(FREE.size() - 1);
        }
        Resource frontEnd = new Resource(r);
        USED.put(new WeakReference<>(frontEnd, QUEUE), r);
        return frontEnd;
    }

    /**
     * Allow the underlying actual resource to get garbage collected with r.
     */
    public static synchronized void stopReusing(Resource r) {
        USED.values().remove(r.actual);
    }

    public static synchronized void clearCache() {
        FREE.clear();
        USED.clear();
    }
}
Note that the manager class may have arbitrary methods for controlling the caching or the manual release of resources; the methods above are just examples. If your API allows the front end to become invalid, e.g. after calling close(), dispose(), or the like, immediate explicit freeing or reuse can be provided without having to wait for the next GC cycle. While finalize() is called exactly once, you can control the number of reuse cycles here, including the option of enqueuing zero times.
Here is some test code:
static final ResourceManager manager = new ResourceManager();

public static void main(String[] args) {
    Resource r1 = manager.getResource();
    Resource r2 = manager.getResource();
    System.out.println("r1 = " + r1 + ", r2 = " + r2);
    r1 = null;
    forceGC();
    r1 = manager.getResource();
    System.out.println("r1 = " + r1);
    r1 = null;
    forceGC();
    r1 = manager.getResource();
    System.out.println("r1 = " + r1);
    manager.stopReusing(r1);
    r1 = null;
    forceGC();
    r1 = manager.getResource();
    System.out.println("r1 = " + r1);
}

private static void forceGC() {
    for (int i = 0; i < 5; i++) try {
        System.gc();
        Thread.sleep(50);
    } catch (InterruptedException ex) {
    }
}
Which will likely (System.gc() still isn’t guaranteed to have an effect) print:
allocating new resource
allocating new resource
r1 = ActualResource[id=0], r2 = ActualResource[id=1]
reusing resource
r1 = ActualResource[id=0]
reusing resource
r1 = ActualResource[id=0]
allocating new resource
r1 = ActualResource[id=2]
You should not implement the finalize method, as the GC will call it only once for each instance.
So if the GC finds an object to delete, it will call finalize. Then it will check again for new references; it might find one and keep the object in memory.
On the next run, the same object will again have no references. The GC will just reclaim it; it will not call finalize again.
You know what?
I think that your stated requirements would simply be satisfied by a concurrent map.
I am trying to create a mechanism to cache objects into memory, for future use, even if these objects are out of context.
That is simply a map, with the ID as the key; e.g.
Map<IdType, ValueType> cache = new HashMap<>();
When you create an object that needs to be cached, you simply call cache.put(id, object). It will remain cached until you remove it.
There would be a parallel deterministic process which will dictate (by a unique ID) whether the cached object should be retrieved again or if it should completely die.
That's a thread ("parallel deterministic process") that calls cache.remove(id).
Now, if you remove an object from the cache and it is still in use somewhere else (i.e. it is still reachable), then it won't be garbage collected. But that is OK.
But what about that stuff with finalize()?
As far as I can see, it does not contribute to your stated requirement at all. Your code seems to be detecting objects that are destined to be deleted, and making them reachable again (your zombies map). That seems to be the opposite of your requirements.
If the purpose of the finalize() is simply to track when the Zombie objects are actually deleted, then finalize() is only ever called once, so it can't do that. But, why is the finalize() method adding the object to the zombie list?
If your requirements are actually misstated and you are really trying to create "immortal" objects (i.e. objects that cannot be deleted), then a plain Map will do that. Just don't remove the object's key, and it will "live" for ever.
Now implementing a cache as a plain map risks creating a memory leak. There are a couple of ways to address that:
You can create a subclass of LinkedHashMap, and implement removeEldestEntry() to tell the map when to remove the oldest entry if the cache has too many entries; see the javadocs for details.
You can implement a cache as a HashMap<SoftReference<IdType>, ValueType> and use a ReferenceQueue to remove cache entries whose references have been broken by the GC. (Note that soft references will be broken by the GC when a key is no longer strongly reachable, and memory is running short.)
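The first of those two options can be sketched in a few lines. This is a minimal illustration (the class name and capacity are made up, not from the question) of overriding removeEldestEntry so the map itself evicts old entries:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        // accessOrder = true: iteration order becomes least-recently-accessed
        // first, which turns the eviction policy into LRU rather than FIFO.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put(); returning true drops the eldest entry automatically.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        Map<Integer, String> cache =
                Collections.synchronizedMap(new BoundedCache<>(2));
        cache.put(1, "one");
        cache.put(2, "two");
        cache.put(3, "three"); // evicts the least recently used entry (key 1)
        System.out.println(cache.containsKey(1));
        System.out.println(cache.size());
    }
}
```

The synchronizedMap wrapper makes individual operations thread-safe; compound check-then-act sequences would still need explicit synchronization on the wrapper.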
I have the following code:
for (int iThreadCounter = 1; iThreadCounter <= CONNECTIONS_NUM; iThreadCounter++) {
    WorkThread wt = new WorkThread(iThreadCounter);
    new Thread(wt).start();
    m_arrWorkThreadsToCreate.add(wt);
}
Those threads call the following code:
int res = m_spLegJoin.call(m_workTread, m_workTread.getConfId());
And this is the call method inside LegJoinSp class:
public class LegJoinSp extends ConnEventSp {

    private static final int _LEG_JOIN_ACTION_CODE = 22;
    private static int m_nLegId = Integer.valueOf(IniUtils.getIniValue("General", "LEG_ID_START"));
    private final Lock m_lock = new ReentrantLock();

    public int call(WorkThread a_workThread, String a_sConfId) {
        synchronized (this) {
            //m_lock.lock();
            m_nLegId++;
            boolean bPass = false;
            Log4jWrapper.writeLog(LogLevelEnum.DEBUG, "LegJoinSp - call",
                    "a_workThread = " + a_workThread.getThreadId() + " a_sConfId = " + a_sConfId);
            if (super.call(a_workThread, a_sConfId, _LEG_JOIN_ACTION_CODE, "" + m_nLegId) == 0) {
                bPass = true;
            } else {
                bPass = false;
            }
            //m_lock.unlock();
            if (bPass) {
                Log4jWrapper.writeLog(LogLevelEnum.DEBUG, "LegJoinSp - call",
                        "a_workThread = " + a_workThread.getThreadId() + " a_sConfId = " + a_sConfId
                                + " returned leg id " + m_nLegId);
                return m_nLegId;
            } else {
                return -1;
            }
        }
    }

    public Lock getLock() {
        return m_lock;
    }
}
I've got 2 threads calling this call() method.
m_nLegId is initiated with 100.
As you can see I have tried to lock the method with both
synchronized(this)
and
m_lock.lock() and m_lock.unlock()
The problem is that when I first get to the if (bPass) inner code, it writes 102 to my log as the m_nLegId value. However, I expect it to be 101 because of the m_nLegId++ statement.
It seems that the second thread manages to get inside the code before the synchronized block ends for the first thread's execution.
How can I fix that?
Thank you
For me, your bug is related to the fact that m_nLegId is a static field while you synchronize on the current instance instead of the class, so you don't properly prevent concurrent modifications of the field.
I mean
synchronized (this) {
Should rather be
synchronized (LegJoinSp.class) {
NB: In case you only need a counter, consider using an AtomicInteger for your field instead of an int.
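That last suggestion can be sketched as follows (class and field names here are illustrative, not from the question). AtomicInteger makes the increment itself atomic, so no lock is needed just to hand out unique ids:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    // incrementAndGet() is an atomic read-modify-write, shared by all threads.
    private static final AtomicInteger LEG_ID = new AtomicInteger(100);

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Set<Integer> ids = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> ids.add(LEG_ID.incrementAndGet()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Every task got a distinct id; none were lost to a race.
        System.out.println(ids.size());
        System.out.println(LEG_ID.get());
    }
}
```

Note that incrementAndGet() also returns the value it produced, which fixes the original symptom: the id you log and return is the one this thread incremented to, not whatever the field holds by the time you read it again.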
The thing is, you are creating a new object for every thread, but the lock you applied only guards that same object (since you locked on this).
So if you want to apply the lock at the class level, you can create a static object and lock on that. This serves the purpose you wanted to achieve (if I understood your problem correctly, based on the comments).
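A minimal sketch of that suggestion (names are illustrative): one static lock object is shared by all instances, so threads holding different instances still serialize on the same monitor.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StaticLockDemo {
    // One lock shared by every instance of the class.
    private static final Object LOCK = new Object();
    private static int counter = 100;

    public int next() {
        // Even though each thread uses its own StaticLockDemo instance,
        // all of them contend for the same static LOCK, so the increments
        // never interleave.
        synchronized (LOCK) {
            return ++counter;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> new StaticLockDemo().next());
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counter); // 100 + 1000 increments
    }
}
```

Synchronizing on `StaticLockDemo.class` would have the same effect; a private static lock object just keeps the monitor from being reachable by outside code.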
I am implementing an application using concurrent hash maps. It is required that one thread adds data into the CHM, while another thread copies the values currently in the CHM and erases them using the clear() method. When I run it, after the clear() method is executed, the CHM always remains empty, though the other thread continues adding data to the CHM.
Could someone tell me why this is so and help me find a solution?
This is the method that adds data to the CHM. This method is called from within a thread.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public static ConcurrentMap<String, String> updateJobList = new ConcurrentHashMap<String, String>(8, 0.9f, 6);

public void setUpdateQuery(String ticker, String query) throws RemoteException {
    dataBaseText = "streamming";
    int n = 0;
    try {
        updateJobList.putIfAbsent(ticker, query);
    } catch (Exception e) {
        e.printStackTrace();
    }
    ........................
}
Another thread calls the track_allocation method every minute.
public void track_allocation() {

    class Track_Thread implements Runnable {

        String[] track;

        Track_Thread(String[] s) {
            track = s;
        }

        public void run() {
        }

        public void run(String[] s) {
            MonitoringForm.txtInforamtion.append(Thread.currentThread() + " has started running");
            String query = "";
            track = getMaxBenefit(track);
            track = quickSort(track, 0, track.length - 1);
            for (int x = 0; x < track.length; x++) {
                query = track[x].split(",")[0];
                try {
                    DatabaseConnection.insertQuery(query);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    joblist = updateJobList.values();
    MonitoringForm.txtInforamtion.append("\nSize of the joblist is:" + joblist.size());
    int n = joblist.size() / 6;
    String[][] jobs = new String[6][n + 6];
    MonitoringForm.txtInforamtion.append("number of threads:" + n);
    int i = 0;
    if (n > 0) {
        MonitoringForm.txtInforamtion.append("\nSize of the joblist is:" + joblist.size());
        synchronized (this) {
            updateJobList.clear();
        }
        Thread[] threads = new Thread[6];
        Iterator it = joblist.iterator();
        int k = 0;
        for (int j = 0; j < 6; j++) {
            for (k = 0; k < n; k++) {
                jobs[j][k] = it.next().toString();
                MonitoringForm.txtInforamtion.append("\n\ninserted into queue:\n" + jobs[j][k] + "\n");
            }
            if (it.hasNext() && j == 5) {
                while (it.hasNext()) {
                    jobs[j][++k] = it.next().toString();
                }
            }
            threads[j] = new Thread(new Track_Thread(jobs[j]));
            threads[j].start();
        }
    }
}
I can see a glaring mistake. This is the implementation of your Track_Thread class's run method:
public void run()
{
}
So, when you do this:
threads[j] = new Thread(new Track_Thread(jobs[j]));
threads[j].start();
..... the thread starts, and then immediately ends, having done absolutely nothing. Your run(String[]) method is never called!
In addition, your approach of iterating the map and then clearing it, while other threads are simultaneously adding to it, is likely to lead to entries occasionally being removed from the map without the iteration ever seeing them.
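Two details matter here. First, values() returns a live view of the map, so after clear() the joblist collection is empty too. Second, any entry put between the iteration and the clear() is silently lost. A drain pattern avoids both: remove each key individually, so every entry is either drained or still in the map, never dropped. A minimal sketch (names are illustrative, not from the question):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DrainDemo {
    static final ConcurrentMap<String, String> jobs = new ConcurrentHashMap<>();

    // Atomically take each entry out of the map instead of copying a view
    // and then calling clear(): a value put in concurrently is either
    // drained by this pass or left in the map for the next pass.
    static Map<String, String> drain() {
        Map<String, String> snapshot = new HashMap<>();
        for (String key : jobs.keySet()) {
            String value = jobs.remove(key); // removes exactly this entry
            if (value != null) {
                snapshot.put(key, value);
            }
        }
        return snapshot;
    }

    public static void main(String[] args) {
        jobs.put("a", "1");
        jobs.put("b", "2");
        Map<String, String> taken = drain();
        System.out.println(taken.size());
        System.out.println(jobs.size());
    }
}
```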
While I have your attention, you have a lot of style errors in your code:
The indentation is a mess.
You have named your class incorrectly: it is NOT a thread, and the identifier ignores the Java naming conventions.
Your use of white-space in statements is inconsistent.
These things make your code hard to read ... and to be frank, they put me off trying to really understand it.
I hope in a good manner :-)
I wrote this piece of code.
What I wished to do, is to build something like "cache".
I assumed that I had to watch out for different threads, as many calls might reach that class, so I tried the ThreadLocal functionality.
The base pattern is: have MANY SETS of VECTORs.
Each vector holds something like:
VECTOR.FieldName = "X"
VECTOR.FieldValue = "Y"
So there are many Vector objects in a set, and a different set for different calls from different machines, users, objects.
private static CacheVector instance = null;
private static SortedSet<SplittingVector> s = null;
private static TreeSet<SplittingVector> t = null;
private static ThreadLocal<SortedSet<SplittingVector>> setOfVectors = new ThreadLocal<SortedSet<SplittingVector>>();

private static class MyComparator implements Comparator<SplittingVector> {
    public int compare(SplittingVector a, SplittingVector b) {
        return 1;
    }
    // No need to override equals.
}

private CacheVector() {
}

public static SortedSet<SplittingVector> getInstance(SplittingVector vector) {
    if (instance == null) {
        instance = new CacheVector();
        t = new TreeSet<SplittingVector>(new MyComparator());
        t.add(vector);
        s = Collections.synchronizedSortedSet(t); // Sort the set of vectors
        CacheVector.assign(s);
    } else {
        t.add(vector);
        s = Collections.synchronizedSortedSet(t); // Sort the set of vectors
        CacheVector.assign(s);
    }
    return CacheVector.setOfVectors.get();
}

public SortedSet<SplittingVector> retrieve() throws Exception {
    SortedSet<SplittingVector> set = setOfVectors.get();
    if (set == null) {
        throw new Exception("SET IS EMPTY");
    }
    return set;
}

private static void assign(SortedSet<SplittingVector> nSet) {
    CacheVector.setOfVectors.set(nSet);
}
So... I have it in the attachment, and I use it like this:
SortedSet<SplittingVector> cache = CacheVector.getInstance(bufferedline);
The nice part: bufferedline is a line split on some delimiter, read from data files. The files can be of any size.
So how do you see this code? Should I be worried?
I apologise for the size of this message!
Writing correct multi-threaded code is not that easy (e.g. your singleton initialization is not thread-safe), so try to rely on existing solutions where possible. If you're searching for a thread-safe cache implementation in Java, check out LinkedHashMap: you can use it to implement an LRU cache, and Collections.synchronizedMap() can make it thread-safe.
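On the singleton point: the usual lock-free fix is the initialization-on-demand holder idiom, sketched below on a hypothetical class (your CacheVector would hold its sets as instance state instead of statics). The JVM guarantees that a nested class is initialized exactly once, on first use, so no synchronization is needed:

```java
public class CacheVectorHolder {

    private CacheVectorHolder() {
        // private constructor: instances are only created by the holder
    }

    // The JVM initializes Holder (and thus INSTANCE) exactly once,
    // the first time getInstance() is called, with class-initialization
    // locking done by the runtime itself.
    private static class Holder {
        static final CacheVectorHolder INSTANCE = new CacheVectorHolder();
    }

    public static CacheVectorHolder getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every caller sees the same instance, on any thread.
        System.out.println(getInstance() == getInstance());
    }
}
```

Compared with the null-check in your getInstance, this cannot produce two instances under concurrent first calls, and it costs nothing after initialization.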