How can I use multiple database connections on multiple threads?
I know how to switch the database connection dynamically with a routing DataSource, but I think it is not thread safe, because the holder class is static.
// first thread
ContextHolder.set("firstId");
mapper.select();
ContextHolder.clear();

// second thread
ContextHolder.set("secondId");
mapper.select();
ContextHolder.clear();
public class ContextHolder {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void set(String dbType) {
        CONTEXT.set(dbType);
    }

    public static String getClientDatabase() {
        return CONTEXT.get();
    }

    public static void clear() {
        CONTEXT.remove();
    }
}
And I use it in code like this:
public class Poller implements Runnable {
    @Override
    public List<Map<String, Object>> getNext() {
        Map<String, Object> params = new HashMap<>();
        ContextHolder.set(dbConnectionId);
        List<Map<String, Object>> list = blogMapper.findAll(params);
        ContextHolder.clear();
        return list;
    }
    ....
}
The line
private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();
creates a ThreadLocal variable. Whatever you put into it is bound to the current thread, and when you read it back you get the value that was stored for that thread.
For instance, in thread 1 you set:
CONTEXT.set("1");
and in thread 2 you set:
CONTEXT.set("2");
If each thread later prints what is inside with
CONTEXT.get();
it will print "1" for thread 1 and "2" for thread 2.
So if you use a standard synchronous model, this is fine as long as you don't forget to clear the value when you enter and/or exit the request.
As soon as you use something asynchronous (reactive, an executor, ...), it will fail, because part of your processing will run on another thread.
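To make the clean-up harder to forget, one option is to wrap the mapper call in a try/finally. This is only a minimal sketch reusing the ContextHolder, blogMapper, and dbConnectionId from the question; it works only as long as the query executes on the same thread that set the value:

public List<Map<String, Object>> getNext() {
    ContextHolder.set(dbConnectionId);
    try {
        // the routing DataSource reads ContextHolder.getClientDatabase()
        // on this same thread while the query runs
        return blogMapper.findAll(new HashMap<>());
    } finally {
        // always clear, even if the query throws, so this thread
        // does not keep a stale data source key
        ContextHolder.clear();
    }
}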
Will updates to the HashMap below be visible in the reader threads (the threads will not modify the state of the HashMap)?
What if a ConcurrentHashMap is used instead?
public class SharedDataTest {

    private static class SomeWork implements Runnable {
        private Map<String, String> dataTable;

        public SomeWork(Map<String, String> dataTable) {
            this.dataTable = dataTable;
        }

        @Override
        public void run() {
            // do some stuff with dataTable
        }
    }

    public static void main(String[] args) {
        Map<String, String> dataTable = new HashMap<String, String>();
        dataTable.put("someKey", "someValue");
        Runnable work1 = new SomeWork(dataTable);
        Runnable work2 = new SomeWork(dataTable);
        new Thread(work1).start();
        new Thread(work2).start();
    }
}
If all initialization of the map is done before starting the threads, then there is no need for additional synchronization. The start of a thread acts as a memory barrier, and anything that happened before the thread was started will be visible to that thread.
Note that if you change the map after the threads have started, there is no guarantee whether the threads will see that change.
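As a rough illustration of that guarantee (a sketch, not part of the question's code): entries put into the map before start() are safely published, while a put made afterwards needs a thread-safe map or external synchronization to be reliably seen:

Map<String, String> dataTable = new HashMap<>();
dataTable.put("someKey", "someValue");      // happens-before the start() calls below

new Thread(new SomeWork(dataTable)).start();
new Thread(new SomeWork(dataTable)).start();

// no happens-before edge with the already running threads:
// with a plain HashMap they may never observe this entry
dataTable.put("anotherKey", "anotherValue");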
Yes, the values will be visible, but there may be inconsistencies depending on what you do in your run() method, because you are not synchronizing externally. Use Collections.synchronizedMap(map), or better, use a ConcurrentHashMap, which gives you thread safety for free:
public SomeWork(Map<String, String> dataTable) {
    this.dataTable = Collections.synchronizedMap(dataTable);
}
Check the documentation of HashMap, which asks you to synchronize externally in case of concurrent access:
Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map.
It would be better to use a ConcurrentHashMap rather than synchronizing the map externally:
public class SharedDataTest {

    private static class SomeWork implements Runnable {
        private Map<String, String> dataTable;

        public SomeWork(Map<String, String> dataTable) {
            this.dataTable = dataTable;
        }

        @Override
        public void run() {
            // do some stuff with dataTable
        }
    }

    public static void main(String[] args) {
        Map<String, String> dataTable = new ConcurrentHashMap<String, String>();
        dataTable.put("someKey", "someValue");
        Runnable work1 = new SomeWork(dataTable);
        Runnable work2 = new SomeWork(dataTable);
        new Thread(work1).start();
        new Thread(work2).start();
    }
}
I have a class with a ConcurrentHashMap that is updated by a single thread every 30 seconds, and multiple reader threads read from the same ConcurrentHashMap by calling the getNextSocket() method.
Below is my singleton class, which on initialization calls the connectToSockets() method to populate my ConcurrentHashMap and then starts a background thread which updates the same map every 30 seconds by calling the updateSockets() method.
Then, from multiple threads, I call the getNextSocket() method to get the next available live socket, which uses the same map to get the information. I also have an immutable SocketInfo class which holds the state of each socket (whether it is live or not).
public class SocketHolder {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<DatacenterEnum, List<SocketInfo>> liveSocketsByDc = new ConcurrentHashMap<>();

    // Lazy Loaded Singleton Pattern
    private static class Holder {
        private static final SocketHolder INSTANCE = new SocketHolder();
    }

    public static SocketHolder getInstance() {
        return Holder.INSTANCE;
    }

    private SocketHolder() {
        connectToSockets();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                updateSockets();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    private void connectToSockets() {
        Map<DatacenterEnum, ImmutableList<String>> socketsByDc = TestUtils.SERVERS;
        for (Map.Entry<DatacenterEnum, ImmutableList<String>> entry : socketsByDc.entrySet()) {
            List<SocketInfo> addedColoSockets = connect(entry.getKey(), entry.getValue(), ZMQ.PUSH);
            liveSocketsByDc.put(entry.getKey(), addedColoSockets);
        }
    }

    private List<SocketInfo> connect(DatacenterEnum dc, List<String> addresses, int socketType) {
        List<SocketInfo> socketList = new ArrayList<>();
        // ... some code here
        return socketList;
    }

    // called from multiple reader threads to get the next live available socket
    public Optional<SocketInfo> getNextSocket() {
        Optional<SocketInfo> liveSocket = getLiveSocket(liveSocketsByDc.get(DatacenterEnum.CORP));
        return liveSocket;
    }

    private Optional<SocketInfo> getLiveSocket(final List<SocketInfo> listOfEndPoints) {
        if (!CollectionUtils.isEmpty(listOfEndPoints)) {
            Collections.shuffle(listOfEndPoints);
            for (SocketInfo obj : listOfEndPoints) {
                if (obj.isLive()) {
                    return Optional.of(obj);
                }
            }
        }
        return Optional.absent();
    }

    // update the CHM map every 30 seconds
    private void updateSockets() {
        Map<DatacenterEnum, ImmutableList<String>> socketsByDc = TestUtils.SERVERS;
        for (Entry<DatacenterEnum, ImmutableList<String>> entry : socketsByDc.entrySet()) {
            List<SocketInfo> liveSockets = liveSocketsByDc.get(entry.getKey());
            List<SocketInfo> liveUpdatedSockets = new ArrayList<>();
            for (SocketInfo liveSocket : liveSockets) {
                Socket socket = liveSocket.getSocket();
                String endpoint = liveSocket.getEndpoint();
                boolean sent = ....;
                boolean isLive = sent ? true : false;
                // is this right here? or will it cause any race condition?
                SocketInfo state = new SocketInfo(socket, liveSocket.getContext(), endpoint, isLive);
                liveUpdatedSockets.add(state);
            }
            // update map with new liveUpdatedSockets
            liveSocketsByDc.put(entry.getKey(), liveUpdatedSockets);
        }
    }
}
Question:
Is my above code thread safe, with no race condition between the updateSockets() and getNextSocket() methods?
In my updateSockets() method, I extract a List<SocketInfo> from the liveSocketsByDc ConcurrentHashMap (populated earlier in connectToSockets() during initialization, or in the previous 30-second run of updateSockets()). I then iterate over that liveSockets list, create a new SocketInfo object for each entry with isLive set to true or false, and finally put the new list back into the liveSocketsByDc ConcurrentHashMap. Does this look right? From multiple reader threads I am going to call getNextSocket(), which in turn calls getLiveSocket(), which uses the same map to get the next available live socket.
I am iterating over the liveSockets list and creating a new SocketInfo object by changing only the isLive field; everything else stays the same. Is this right?
If there is a thread safety issue, what is the best way to fix it?
Here:
List<SocketInfo> liveSockets = liveSocketsByDc.get(entry.getKey());
Your different threads are potentially writing/reading the same list object in parallel.
So: not thread safe. It doesn't help to have an "outer" thread-safe data structure when the data it contains is itself not thread-safe and is then worked on in parallel.
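One way to address this (a sketch based on the question's code, not something the answer spells out) is to make sure the shared list is never mutated: publish unmodifiable lists into the map and shuffle a private copy inside getLiveSocket():

// when publishing in connectToSockets() / updateSockets(): store a read-only snapshot
liveSocketsByDc.put(entry.getKey(), Collections.unmodifiableList(liveUpdatedSockets));

// in getLiveSocket(): shuffle a private copy so readers never mutate the shared list
private Optional<SocketInfo> getLiveSocket(final List<SocketInfo> listOfEndPoints) {
    if (!CollectionUtils.isEmpty(listOfEndPoints)) {
        List<SocketInfo> copy = new ArrayList<>(listOfEndPoints);
        Collections.shuffle(copy);
        for (SocketInfo obj : copy) {
            if (obj.isLive()) {
                return Optional.of(obj);
            }
        }
    }
    return Optional.absent();
}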
I have a singleton class like this:
public final class HandlerCache {
    // the cache maintains a handler per thread
    private final Map<Thread, Handler> cache = new ConcurrentHashMap<>();
    private final Thread monitor;

    private static final HandlerCache INSTANCE = new HandlerCache();

    private HandlerCache() {
        monitor = new Thread() {
            // periodically monitor cache and close handlers when a thread has died
        };
        monitor.start();
    }

    public static HandlerCache getInstance() {
        return INSTANCE;
    }

    public Handler getHandler() throws Exception {
        final Thread thread = Thread.currentThread();
        Handler handler = cache.get(thread);
        if (handler != null)
            return handler;
        handler = HandlerFactory.get(getHandlerFromName(thread.getName()));
        cache.put(thread, handler);
        return handler;
    }
}
I am leaking the singleton instance to the monitor thread before the constructor has finished. What is a better way to do this?
Will making cache volatile fix the issue?
As mentioned by user2677485, you should be using a ThreadLocal and implement its initialValue method. The other point is that the Handler implementation should implement the finalize method, so that when it is reclaimed by the GC this method will be called and you can clean up your resources.
The code can be simplified to something like the following:
public class HandlerCache {
    private static final ThreadLocal<Handler> handlers = new ThreadLocal<Handler>() {
        @Override
        protected Handler initialValue() {
            return HandlerFactory.get(...);
        }
    };

    public static Handler getHandler() {
        return handlers.get();
    }
}
Rather than starting the thread within the HandlerCache constructor, you could initialize the instance using a static function that would first construct the HandlerCache and then start the thread.
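A minimal sketch of that idea, keeping the fields and the Handler type from the question (the createAndStart name is made up for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class HandlerCache {
    // the cache maintains a handler per thread
    private final Map<Thread, Handler> cache = new ConcurrentHashMap<>();
    private final Thread monitor;

    private static final HandlerCache INSTANCE = createAndStart();

    private HandlerCache() {
        // only build the monitor here; do not start it, so 'this' cannot
        // escape before the constructor finishes
        monitor = new Thread() {
            @Override
            public void run() {
                // periodically monitor cache and close handlers when a thread has died
            }
        };
    }

    private static HandlerCache createAndStart() {
        HandlerCache instance = new HandlerCache();   // fully constructed first
        instance.monitor.start();                     // then the monitor thread is started
        return instance;
    }

    public static HandlerCache getInstance() {
        return INSTANCE;
    }
}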
I am a newbie to threading concepts and I am trying to learn.
I came across a situation where I have a method which returns a list of students, and other methods which use this list to pull other details of the students, like parents' names, sports in which they have participated, etc. (based on StudentID). I tried returning the list using the following code and it seems like it's not working:
import java.util.List;

public class StudentClass implements Runnable {
    private volatile List<Student> studentList;

    @Override
    public void run() {
        // MySQL query which returns the student list (StudentID, StudentName, etc.)
        studentList = ...;
    }

    public List<Student> getStudentList() {
        return studentList;
    }
}
public class MainClass {
    public static void main(String[] args) {
        StudentClass b = new StudentClass();
        new Thread(b).start();
        // ...
        List<Student> list = b.getStudentList();
        for (Student sc : list) {
            System.out.println(sc);
        }
    }
}
I used this link: Returning value from Thread.
The list is null.
Where am I going wrong?
Most likely you are not waiting for the result to complete.
A simple solution is to use an ExecutorService instead of creating your own thread pool.
See http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newSingleThreadExecutor--
ExecutorService es = Executors.newSingleThreadExecutor();
Future<List<Student>> future = es.submit(new Callable<List<Student>>() {
    public List<Student> call() throws Exception {
        // do some work to get this list and return it
    }
});
// do something else in the meantime
// wait for the result
List<Student> list = future.get();
This gives you lots more options, such as:
capture any exception thrown so you know what went wrong;
poll isDone() to see if the result is ready;
call get() with a timeout;
have a thread pool which re-uses the thread or has more than one thread.
You are getting null because the line List<Student> list = b.getStudentList(); is executed before your DB querying happens, since that querying runs in a separate thread.
You have to wait until the database query thread finishes. One way to do this is to use the join() method on the thread:
Thread t = new Thread(new StudentClass());
t.start();
t.join();
Or you can use the Callable interface provided with Java to return a value from a thread. Refer to this article as a starting point.
In your code example, if the StudentClass run() method takes a few seconds, you will print nothing, since the list has not been set yet.
public class MainClass {
    public static void main(String[] args) throws Exception {
        StudentClass b = new StudentClass();
        ExecutorService executorService = Executors.newFixedThreadPool(3);
        Future<List<Student>> studentList = executorService.submit(b);
        // poll until the thread has completed fetching from the DB
        while (!studentList.isDone()) {
            Thread.sleep(100);
        }
        // when the thread has completed, studentList will have records
        System.out.println("COoolllll" + studentList.get());
        executorService.shutdown();
    }
}
public class StudentClass implements Callable<List<Student>> {
    private volatile List<Student> studentList;

    public List<Student> getStudentList() {
        return studentList;
    }

    @Override
    public List<Student> call() throws Exception {
        /**
         * studentList will be fetched from the DB
         */
        studentList = new ArrayList<Student>();
        return studentList;
    }
}
I think it is a good idea to have a global instance of the student list, then call the thread to fill it, and have another boolean variable to indicate whether the thread's work is done, or something like this.
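A rough sketch of that idea, assuming a volatile boolean flag for the "done" signal (the flag and the polling loop are illustrative assumptions, not spelled out in the answer; Student is the type from the question):

public class StudentClass implements Runnable {
    private volatile List<Student> studentList;
    private volatile boolean done = false;

    @Override
    public void run() {
        // placeholder for the real MySQL query
        studentList = new ArrayList<Student>();
        done = true;   // signal that the work is finished
    }

    public boolean isDone() {
        return done;
    }

    public List<Student> getStudentList() {
        return studentList;
    }
}

// in a main method that declares "throws InterruptedException":
StudentClass b = new StudentClass();
new Thread(b).start();
while (!b.isDone()) {
    Thread.sleep(100);   // poll the flag until the worker signals completion
}
List<Student> list = b.getStudentList();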
I'm trying to implement a work queue in Java that limits the amount of work that can be taken at a time. In particular, it is trying to protect access to an external resource. My current approach is to use a Semaphore and a BlockingQueue so that I have something like this:
interface LimitingQueue<V> {
    void put(Callable<V> work);
    Callable<V> tryPoll();
}
It should behave like this:
@Test
public void workLimit() throws Exception {
    final int workQueue = 2;
    final LimitingQueue<Void> queue = new LimitingQueue<Void>(workQueue);
    queue.put(new Work()); // Work is a Callable<Void> that just returns null.
    queue.put(new Work());

    // Verify that if we take out one piece of work, we don't get additional work.
    Callable<Void> work = queue.tryPoll();
    assertNotNull(work, "Queue should return work if none outstanding");
    assertNull(queue.tryPoll(), "Queue should not return work if some outstanding");

    // But we do after we complete the work.
    work.call();
    assertNotNull(queue.tryPoll(), "Queue should return work after outstanding work completed");
}
The implementation of tryPoll() uses Semaphore#tryAcquire and, if successful, creates an anonymous Callable that wraps the Semaphore#release call in a try/finally block around the call to work.call().
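For concreteness, here is a minimal sketch of that wrapping (the semaphore and delegate field names are assumptions; the question only describes the approach in prose):

// inside the LimitingQueue implementation
public Callable<V> tryPoll() {
    if (!semaphore.tryAcquire()) {
        return null;                              // too much work already outstanding
    }
    final Callable<V> work = delegate.poll();     // delegate is the backing BlockingQueue
    if (work == null) {
        semaphore.release();                      // nothing queued, give the permit back
        return null;
    }
    return new Callable<V>() {
        @Override
        public V call() throws Exception {
            try {
                return work.call();
            } finally {
                semaphore.release();              // free a slot once the work completes
            }
        }
    };
}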
This works, but is somewhat unsatisfying: if the user of this class puts work of some specific class that implements Callable, the user does not get that class back when looking at the result of tryPoll(). Notably, tryPoll() returns a Callable<Void>, not a Work.
Is there a way to achieve the work-limiting effect while giving the caller back a usable reference to the work object that was submitted? (It's fine to strengthen the type signature of LimitingQueue to something like LimitingQueue<R, T extends Callable<R>>.) I can't think of a way to ensure that the semaphore is released after calling the work item without doing this kind of wrapping.
EDIT 2: I have replaced what was here with a suggestion on how to implement what you're looking for. Let me know if you want some of the old info back and I can restore it.
public class MyQueue<T> {
    private Semaphore semaphore;

    public void put(Work<T> w) {
        w.setQueue(this);
        // queueing / permit bookkeeping goes here
    }

    public Work<T> tryPoll() {
        // tryAcquire on the semaphore and hand back the Work itself
        return null;
    }

    public abstract static class Work<T> implements Callable<T> {
        private MyQueue<T> queue;

        private void setQueue(MyQueue<T> queue) {
            if (this.queue != null) {
                throw new IllegalStateException("Cannot add a Work object to multiple Queues!");
            }
            this.queue = queue;
        }

        @Override
        public final T call() throws Exception {
            try {
                return callImpl();
            } finally {
                queue.semaphore.release();
            }
        }

        protected abstract T callImpl() throws Exception;
    }
}
Then use it like this:
public class Test {
    public static void main(String[] args) {
        MyQueue<Integer> queue = new MyQueue<Integer>();
        MyQueue.Work<Integer> work = new MyQueue.Work<Integer>() {
            @Override
            protected Integer callImpl() {
                return 5;
            }
        };
        queue.put(work);
        MyQueue.Work<Integer> sameWork = queue.tryPoll();
    }
}
Sounds to me like you should just use the built-in ExecutorService. Use Executors#newCachedThreadPool to get a pool, then submit Callable jobs, which hand you back a Future.
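A minimal sketch of that suggestion (the job body is made up for illustration):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();

        // submit a Callable job; the pool hands back a Future for its result
        Future<Integer> future = pool.submit(new Callable<Integer>() {
            @Override
            public Integer call() {
                return 5;   // placeholder work
            }
        });

        System.out.println(future.get());   // blocks until the job completes
        pool.shutdown();
    }
}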