I've been watching a lot of videos on data structures, and these terms are always being mentioned: synchronized/not synchronized and thread-safe/not thread-safe.
Can someone explain to me in simple words what synchronized and thread-safe mean in Java? What is synchronization, and what is a thread?
A thread is an execution path of a program. A single-threaded program has only one thread, so this problem doesn't arise. Virtually all GUI programs have multiple execution paths, and hence threads: one for processing the display of the GUI and handling user input, and others for actually performing the operations of the program. This is so that the UI stays responsive while the program is working.
In the simplest of terms, thread-safe means that something is safe to access from multiple threads. When you are using multiple threads in a program and they each attempt to access a common data structure or location in memory, several bad things can happen, so you add some extra code to prevent those bad things. For example, if two people were writing the same document at the same time, the second person to save would overwrite the work of the first. To make it thread-safe, you have to force the second person to wait until the first has finished their edit before they are allowed to change the document.
Synchronized means that in a multithreaded environment, a synchronized object does not let two threads access a method/block of code at the same time. This means that one thread can't be reading it while another updates it.
The second thread will instead wait until the first is done. The cost is speed, but the advantage is guaranteed consistency of data.
If your application is single-threaded, though, synchronized has no benefit.
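As a minimal sketch (the class and method names here are invented for illustration), a counter whose operations are marked synchronized can be shared safely between threads:

// Hypothetical example: a counter that is safe to share between threads.
public class SafeCounter {
    private int count = 0;

    // Only one thread at a time may execute this method on a given instance.
    public synchronized void increment() {
        count++; // read-modify-write: unsafe without synchronization
    }

    public synchronized int get() {
        return count;
    }
}

Without the synchronized keyword, two threads calling increment() at the same moment could both read the same old value, and one of the updates would be lost.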
As per Java Concurrency in Practice (JCIP):
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
So thread safety is a desired behavior of the program in case it is accessed by multiple threads. Using the synchronized block is one way of achieving that behavior. You can also check the following:
What does 'synchronized' mean?
What does threadsafe mean?
I am learning multithreading and am now confused about one topic, i.e. ExecutorService vs. CompletableFuture.
Let me summarise what I have learnt so far.
ExecutorService is a high-level Java thread API which helps in managing threads, i.e. two independent threads each doing their own task. And if the threads are dependent on each other, we can still use producer-consumer patterns and so on.
This helps in achieving concurrency, since multiple threads can be used for running multiple tasks.
But CompletableFuture, which we call async programming/reactive programming, is also used for accomplishing the same thing, i.e. it can also run work on multiple threads.
But I don't get when to use which one, or how they differ from each other. What are the use cases in which each fits perfectly?
A CompletableFuture is, in essence, a mechanism by which one thread can find out when another thread has finished doing something.
So, a typical model is this kind of method:
CompletableFuture<Result> doSomething() {
    CompletableFuture<Result> future = new CompletableFuture<>();
    // ... arrange to do work in some other thread ...
    return future;
}
The caller of doSomething() gets back an object which it can use to determine completion, wait for completion, get the Result of doing 'something', and perhaps run some other work using the Result.
So, how does doSomething() arrange to do work in some other thread? Well, one way is to execute the work via some ExecutorService, though there are plenty of other ways to go about it. Regardless, when the work is complete, it will call future.complete(someResult) to set the CompletableFuture into the 'completed' state with the expected Result.
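As a rough sketch of that pattern (workerPool and computeResult() are made-up names, and I'm using String in place of the Result placeholder above), doSomething() might hand the work to an ExecutorService and complete the future from the worker thread:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class Example {
    private final ExecutorService workerPool = Executors.newFixedThreadPool(4);

    CompletableFuture<String> doSomething() {
        CompletableFuture<String> future = new CompletableFuture<>();
        workerPool.submit(() -> {
            try {
                String result = computeResult();  // the actual work, done on a pool thread
                future.complete(result);          // flips the future into the 'completed' state
            } catch (Exception e) {
                future.completeExceptionally(e);  // propagate failures to the caller
            }
        });
        return future;
    }

    private String computeResult() {
        return "done";  // stand-in for real work
    }
}

CompletableFuture.supplyAsync(supplier, executor) packages up this same pattern for you; spelling it out just makes the division of labour clear: the executor runs the work, the future reports its completion.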
Maybe you're confused because our caller could write
doSomething().thenAcceptAsync((result) -> blahBlah(result));
In this case, doSomething() proceeds as above. When that is complete, we want to run another operation, also asynchronously. Because we used thenAcceptAsync, this work will be handled via an ExecutorService known to the CompletableFuture framework (the common ForkJoinPool, to be exact - this is documented).
Summary - this is not 'choose one or the other'. ExecutorServices provide the means to run units of work in other threads. CompletableFutures provide the means to know and react to completion of those units of work.
Is the following statement correct:
"There shouldn't be any thread interference between two synchronized methods in 2 different classes. So they can run concurrently without any problems."
Thanks for your time
That is way too vague. A few pointers:
"how does synchronization work in Java": There are a couple of mechanisms, the question seems to be about the synchronized keyword. This works by marking "critical sections" that must not be executed by more than one thread at the same time, and having the threads "lock" a monitor object while they are in that section (so that all other threads wait).
synchronized methods synchronize on the object instance (or the class object, in the case of a static method). So methods in different classes do not synchronize with each other that way; they will run concurrently.
you can use the synchronized keyword to synchronize blocks on any other monitor object. This way, methods in different classes can still be synchronized with each other (see the sketch after these pointers).
"can run concurrently without problems" is not guaranteed just by having some synchronization (or lack thereof). You need to see what mutable state these methods (directly or indirectly) try to access (and who else does the same) to see what kind of concurrency control is necessary.
You misunderstood the concept a little bit. Collisions happen when two (or more) threads simultaneously try to make a change to the same data, or when one of them tries to read the data while another thread is trying to change it.
When two threads try to change a shared resource simultaneously, a race condition occurs. Check out this link to learn more about race conditions.
In order to prevent this kind of problem, you need to guard the shared resource against simultaneous changes. Mutexes and semaphores were invented for this purpose: to lock the shared resource away from the other threads while one thread is making a change to it. For this purpose, Java uses the synchronized keyword. You can read more about synchronized in Java using the link.
Note that, using the synchronized keyword will not eliminate all of the synchronization related issues, but it is a good starting point.
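To make the failure mode concrete, here is a small hypothetical demonstration: two threads increment a shared counter with no guarding, and the final total usually comes out below the expected 200000 because updates are lost:

public class RaceDemo {
    private static int counter = 0;  // shared, unguarded resource

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;  // not atomic: read, add, write back
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints less than 200000 because increments interleave and get lost.
        System.out.println(counter);
    }
}

Wrapping the increment in a synchronized block on a shared lock (or making it a synchronized method on a shared object) restores the expected result.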
As mentioned in this answer, synchronized is implemented using compareAndSwap, which is a non-blocking algorithm.
With synchronized, without using wait(), is the thread's state set to BLOCKED?
Does a thread in the BLOCKED or WAITING state consume CPU cycles?
As mentioned in this answer, synchronized is implemented using compareAndSwap, which is a non-blocking algorithm.
I think you are misreading that answer. synchronized is certainly not implemented with the Java-level compareAndSwap call. It will be implemented in native code by the interpreter and JIT in your JVM. Under the covers it might use the compare-and-swap instruction, or it may use something else (atomic test-and-set and atomic exchange are also common - and some platforms don't even have a CAS primitive).
This is definitely not a "non-blocking algorithm" - by definition synchronized implements a type of critical section which implies blocking if a second thread tries to enter the critical section while another thread is inside it.
1) With synchronized, without using wait(), is the thread's state set to BLOCKED?
Yes, if a thread is waiting to enter a synchronized section, its state is set to BLOCKED.
2) Does a thread in the BLOCKED or WAITING state consume CPU cycles?
Generally no, at least not in an ongoing manner. There is a CPU cost associated with entering and exiting the state [1], but once the thread is blocked it is generally held in a non-runnable state until it is awoken when the monitor becomes free, and doesn't use CPU resources during that time.
[1] Which is exactly why good implementations will generally "spin" a bit before going to sleep, in case the critical section is short and the holding thread may soon exit - spinning in this case avoids the 1000+ cycle overhead of making the transition to sleep (which involves the OS, etc).
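If you want to see the BLOCKED state for yourself, here is a small self-contained demo (the names and sleep durations are arbitrary): one thread holds a monitor while a second thread tries to enter the same synchronized block, and querying the second thread's state shows BLOCKED:

public class BlockedDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                // reached only after 'holder' releases the monitor
            }
        });
        holder.start();
        Thread.sleep(100);  // give 'holder' time to acquire the monitor
        waiter.start();
        Thread.sleep(100);  // give 'waiter' time to hit the contended monitor
        System.out.println(waiter.getState());  // typically prints BLOCKED
    }
}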
I have a scenario where I have to call multiple objects concurrently, and each object will internally call objects of multiple other classes. After all the child objects have executed, they should return their results to the parent object, and finally the parent objects will return their results to the main thread. It is basically two-level multithreading. I do not know what I should take into consideration when implementing this scenario. I would really appreciate any and all guidance, preferably with some sample code.
I have attached a picture which gives a clear understanding of the scenario.
Simply, I need to create a set of threads, and each created thread has to create another set of threads. I also require control over every thread at any time. Hope it's clear, thanks again.
ForkJoinPool and RecursiveTask are designed for such use cases. See the fork-join tag.
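A rough sketch of that approach (the task body is invented purely for illustration): each RecursiveTask forks child tasks, those children can fork their own children, and joining propagates the results back up while the ForkJoinPool manages the actual threads:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical task: each level forks two children and sums their results.
class LevelTask extends RecursiveTask<Integer> {
    private final int depth;

    LevelTask(int depth) { this.depth = depth; }

    @Override
    protected Integer compute() {
        if (depth == 0) {
            return 1;  // leaf: do the real work here
        }
        LevelTask left = new LevelTask(depth - 1);
        LevelTask right = new LevelTask(depth - 1);
        left.fork();                        // run one child asynchronously
        int rightResult = right.compute();  // run the other in the current thread
        return left.join() + rightResult;   // wait for the forked child
    }
}

class ForkJoinExample {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println(pool.invoke(new LevelTask(3)));  // prints 8
    }
}

pool.invoke() blocks the caller until the whole task tree has completed, which matches the "return result back to the main thread" requirement.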
From what I understand: every parent spawns some number of children and has to wait for all children (and children of children, and so on) to complete?
In that case, each parent can spawn one thread per child and then use a semaphore to wait for all of them to finish. Semaphores allow you to wait for multiple threads at a time.
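A minimal sketch of that idea (the child count and the work body are assumptions on my part): the parent creates a Semaphore with zero permits, each child releases one permit when it finishes, and the parent acquires as many permits as it has children:

import java.util.concurrent.Semaphore;

class ParentWithChildren {
    public static void main(String[] args) throws InterruptedException {
        final int children = 3;
        Semaphore done = new Semaphore(0);  // starts with no permits

        for (int i = 0; i < children; i++) {
            new Thread(() -> {
                // ... child work, which could itself spawn and wait for grandchildren ...
                done.release();  // signal completion to the parent
            }).start();
        }

        done.acquire(children);  // blocks until every child has released a permit
        System.out.println("all children finished");
    }
}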
EDIT: You mention four tasks.
ADD thread: Create a new thread and manage all threads in a list held by their parent. Use synchronization to maintain that list, because there is no guarantee that only a single thread will ever touch it.
PAUSE: Set PAUSED flag. That will cause the thread to sleep() or to wait().
RESUME: Unset PAUSED flag. If PAUSE makes thread wait(), call notify() to wake it up.
DELETE: Set the STOPPED flag, then remove the thread from the list, or wait until the thread finishes before removing it from the list (depends on what you need). If the thread might be PAUSED, make sure to RESUME it first.
The flags must be used by the thread which is running a loop to determine: Whether to PAUSE and whether to opt out of the loop, thus STOPPING the thread. Something like this:
while (!isStopped)
{
    while (hasWork() && !isPaused && !isStopped)
    {
        // do work
    }
    if (!isStopped)
    {
        // either just sleep for a few milliseconds (easy way) or wait()
    }
}
Make sure that you don't spawn too many threads. You should rather let children wait than create even more threads if you have already spawned more than x threads, where x depends on your OS and JVM. Play around with it. Intuition might tell you "the more threads the better", but that is absolutely false. Once you surpass a certain number of threads, they are already using all of your computer's available resources (such as CPU, memory bandwidth and hard-disk bandwidth). Spawning more threads than necessary to use all the resources will just add management overhead and slow down execution.
On modern systems, scheduling of competing threads may be handled well, but each thread still has its price tag. Imagine all threads wanting to access the CPU, memory, etc. at the same time. That creates contention and would require a very smart scheduler (smart enough to predict the future, and who can do that?) not to cause any noticeable overhead.
Good luck.
You can create the N parent threads in the main object, and after calling start() on each thread object you can call join() on each of them. The main object will block on the first one, waiting for it to finish; once it has finished, it will try to join the second, the third, and so on up to the Nth, but by then they will probably already have finished, and the main thread will be able to finish too.
Use the same approach in the relationship between parent and child. However, it is not clear from your question what the child threads will have to do; you may need to provide some concurrency control among them depending on the task at hand.
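Here is a bare-bones sketch of that join-based structure (the work inside the threads is left as a placeholder): each parent starts its children and joins them, and the main thread does the same with the parents:

class ParentThread extends Thread {
    @Override
    public void run() {
        Thread[] children = new Thread[2];
        for (int i = 0; i < children.length; i++) {
            children[i] = new Thread(() -> {
                // ... child work ...
            });
            children[i].start();
        }
        for (Thread child : children) {
            try {
                child.join();  // parent waits for each of its children
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        // combine the children's results and finish
    }
}

class MainExample {
    public static void main(String[] args) throws InterruptedException {
        Thread[] parents = { new ParentThread(), new ParentThread() };
        for (Thread p : parents) p.start();
        for (Thread p : parents) p.join();  // main waits for every parent
    }
}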
Cheers.
What is this:
synchronized (this) {
    // ...some code...
}
good for?
(Could you write an example?)
It prevents concurrent access to a resource. Sun's got a pretty good description with examples.
It prevents multiple threads from running the code contained within the braces. Whilst one thread is running that code, the remainder are blocked. When the first thread completes, one of the blocked threads will then run the synchronised code, and so on.
Why would you want to do this? The code within the block may modify objects such that they're in an inconsistent state until the block exits. So a second thread coming in would find inconsistent objects. From that point on, chaos ensues.
An example would be removing an object from one pool and inserting it in another. A second thread might run whilst the first thread is moving the object, and subsequently find the object referenced in both collections, or neither.
You can also use this mechanism to restrict multiple threads from accessing a resource designed to be used by one thread at a time (e.g. a trivial database).
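To make the earlier pool example concrete, here is a hypothetical sketch in which the move between two lists happens inside synchronized methods, so no other thread can ever observe an object in both lists or in neither:

import java.util.ArrayList;
import java.util.List;

class ObjectPools {
    private final List<Object> freePool = new ArrayList<>();
    private final List<Object> inUsePool = new ArrayList<>();

    // The whole move is one atomic step with respect to the other synchronized methods.
    synchronized Object checkOut() {
        if (freePool.isEmpty()) {
            return null;
        }
        Object obj = freePool.remove(freePool.size() - 1);
        inUsePool.add(obj);
        return obj;
    }

    synchronized void checkIn(Object obj) {
        inUsePool.remove(obj);
        freePool.add(obj);
    }
}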
Note that the following two are equivalent:
synchronized void someMethod() {
    // ...
}
and
void someMethod() {
    synchronized (this) {
        // ...
    }
}
From the now-defunct Java Quick Reference formerly at http://www.janeg.ca/scjp/threads/synchronized.html:
Synchronizing threads has the effect of serializing access to blocks of code running on the thread. Serializing in this context means giving one thread at a time the right to execute a specific block of code.