I'm working on an existing Java codebase which has an object that extends Thread, and also contains a number of other properties and methods that pertain to the operation of the thread itself. In former versions of the codebase, the thread was always an actual, heavyweight Thread that was started with Thread.start, waited for with Thread.join, and the like.
I'm currently refactoring the codebase, and in the present version, the object's Thread functionality is not always needed (but the object itself is, due to the other functionality contained in the object; in many cases, it's usable even when the thread itself is not running). So there are situations in which the application creates these objects (which extend Thread) and never calls .start() on them, purely using them for their other properties and methods.
In the future, the application may need to create many more of these objects than previously, to the point where I potentially need to worry about performance. Obviously, creating and starting a large number of actual threads would be a performance nightmare. Does the same thing apply to Thread objects that are never started? That is, are any operating system resources, or large Java resources, required purely to create a Thread? Or are the resources used only when the Thread is actually .started, making unstarted Thread objects safe to use in quantity? It would be possible to refactor the code to split the non-threading-related functionality into a separate function, but I don't want to do a large refactoring if it's entirely pointless to do so.
I've attempted to determine the answer to this with a few web searches, but it's hard to aim the query because search engines can't normally distinguish a Thread object from an actual Java thread.
You could implement Runnable instead of extending Thread.
public class MyRunnableClass implements Runnable {
    // Your stuff...

    @Override
    public void run() {
        // Thread-related stuff...
    }
}
Whenever you need to run your Object to behave as a Thread, simply use:
Thread t = new Thread(new MyRunnableClass());
t.start();
As the others have pointed out: performance isn't a problem here.
I would focus much more on the "good design" approach. It simply doesn't make (much, any?) sense to extend Thread when you do not intend to ever invoke start(). And you see: you write code to communicate your intentions.
Extending Thread without ever using it as a thread only communicates confusion: every future reader of your code will wonder "why is that?"
Therefore, focus on getting to a straightforward design. And I would go one step further: don't just switch to Runnable and keep managing threads by hand. Instead, learn about ExecutorServices, how to submit tasks, Futures, and all that.
"Bare iron" Threads (and Runnables) are 20-year-old concepts. Java has better things to offer by now. So, if you are really serious about improving your code base, look into these newer abstractions to figure out where they would make sense to be used.
You can create roughly 1.8 million of these objects per GB of memory (see the measurements below).
import java.util.LinkedList;
import java.util.List;

class A {
    public static void main(String[] args) {
        int count = 0;
        try {
            List<Thread> threads = new LinkedList<>();
            while (true) {
                threads.add(new Thread());
                if (++count % 10000 == 0)
                    System.out.println(count);
            }
        } catch (Error e) {
            System.out.println("Got " + e + " after " + count + " threads");
        }
    }
}
Using -Xms1g -Xmx1g for Oracle Java 8 (and correspondingly larger heaps for the other rows), the process grinds to a halt at around:
1 GB - 1780000
2 GB - 3560000
6 GB - 10690000
Each Thread object uses a bit more memory than you might expect from reading the source code, but it's still only about 600 bytes each.
NOTE: Throwable also uses more memory than you might expect from reading the Java source. It can be 500 - 2000 bytes more, depending on the size of the stack at the time it was created.
Say I have a global object:
class Global {
    public static int remoteNumber = 0;
}
There is a thread that runs periodically to get a new number from a remote source and update it (write only):
new Thread() {
    @Override
    public void run() {
        while (true) {
            int newNumber = getFromRemote();
            Global.remoteNumber = newNumber;
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}.start();
And there are one or more threads using this global remoteNumber randomly (only read):
int n = Global.remoteNumber;
doSomethingWith(n);
You can see I don't use any locks or synchronization to protect it; is that correct? Are there any potential issues that might cause problems?
Update:
In my case, it's not really important that the reading threads get the latest value in real time. I mean, if any issue (caused by the lack of locking/synchronization) makes one reading thread miss that value, it doesn't matter, because it will have a chance to run the same code again soon (maybe in a loop).
But reading an undetermined value is not allowed (I mean, if the old value is 20 and the newly updated value is 30, but a reading thread reads a non-existent value, say 33; I'm not sure whether that is possible).
You need synchronization here (with one caveat, which I'll discuss later).
The main problem is that the reader threads may never see any of the updates the writer thread makes. Usually any given write will be seen eventually. But here your update loop is so simple that a write could easily be held in cache and never make it out to main memory. So you really must synchronize here.
EDIT 11/2017 I'm going to update this and say that it's probably not realistic that a value could be held in cache for so long. I think it's an issue, though, that a variable access like this could be optimized by the compiler and held in a register. So synchronization is still needed (or volatile) to tell the optimizer to be sure to actually fetch a new value for each loop.
So you either need to use volatile, or you need to use a (static) getter and setter methods, and you need to use the synchronized keyword on both methods. For an occasional write like this, the volatile keyword is much lighter weight.
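As a minimal sketch of the synchronized-accessor alternative described above, reusing the Global class from the question (the accessor names are mine; for this write-rarely/read-often case, a volatile field would be the lighter-weight option):

class Global {
    private static int remoteNumber = 0;

    // both accessors lock on Global.class, so readers always see the latest write
    public static synchronized int getRemoteNumber() {
        return remoteNumber;
    }

    public static synchronized void setRemoteNumber(int value) {
        remoteNumber = value;
    }
}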
The caveat is that if you truly don't need to see timely updates from the writer thread, you don't have to synchronize. If an indefinite delay won't affect your program's functionality, you could skip the synchronization. But something like this on a timer doesn't look like a good use case for omitting synchronization.
EDIT: Per Brian Goetz in Java Concurrency in Practice, it is not allowed for Java/a JVM to show you "indeterminate" values -- values that were never written. Those are more technically called "out of thin air" values and they are disallowed by the Java spec. You are guaranteed to see some write that was previously made to your global variable, either the zero it was initialized with, or some subsequent write, but no other values are permitted.
Reader threads can read a stale value for an indeterminate time, but in practice there is no problem. That's because each thread may work with its own cached copy of the variable, and they only sync up occasionally. You can use the volatile keyword to remove this optimisation:
public static volatile int remoteNumber = 0;
In the Collection framework, why is external synchronization faster than internal synchronization (Vector, Hashtable, etc.), even though they both use the same mechanism?
What exactly is the meaning of internal and external synchronization, and how do they differ from each other?
It would be really helpful if someone could explain with examples.
What exactly is the meaning of internal and external synchronization, and how do they differ from each other?
External synchronization is when the caller (you) use the synchronized keyword or other locks to protect against another class being accessed by multiple threads. It is usually used if the class in question is not synchronized itself -- SimpleDateFormat is a prime example. It can also be used if you need signaling between threads -- even when dealing with a concurrent collection.
Why is external synchronization faster than internal synchronization (Vector, Hashtable, etc.), even though they both use the same mechanism?
External synchronization is not necessarily faster. Typically a class can determine precisely when it needs to synchronize around a critical section of code instead of the caller wrapping all method calls in a synchronized block.
If you are talking about the general recommendation to not use Vector and Hashtable and instead use the Collections.synchronizedList(...) or synchronizedMap(...) methods, then this is because Vector and Hashtable are seen as old, out-of-date classes. A wrapped ArrayList or HashMap is seen as a better solution.
Sometimes, as @Chris pointed out, external synchronization can be faster when you need to make a number of changes to a class one after another. By locking externally once and then performing multiple changes, this works better than having each change locked internally: a single lock acquisition is faster than multiple lock calls made in a row.
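As a hedged illustration of that point (the class and method names below are mine), a compound check-then-act on a wrapped list can be guarded by one external lock acquisition, which also closes the race between the check and the add:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class CompoundOps {
    // Each call on the wrapper takes the internal lock, but a compound
    // "check then add" still needs one external lock around both steps.
    private final List<String> list =
            Collections.synchronizedList(new ArrayList<String>());

    void addIfAbsent(String value) {
        synchronized (list) {               // one lock acquisition for the whole operation
            if (!list.contains(value)) {
                list.add(value);
            }
        }
    }
}

This works because the wrapper returned by Collections.synchronizedList uses the list object itself as its mutex.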
It would be really helpful if someone could explain with examples.
Instead of Vector, people typically recommend a wrapped ArrayList as having better performance. This wraps the non-synchronized ArrayList class in a wrapper class which externally synchronizes it.
List<Foo> list = Collections.synchronizedList(new ArrayList<Foo>());
In terms of internal versus external in general, consider the following class that you want to allow multiple threads to use it concurrently:
public class Foo {
    private int count;
    // assumes a logger field named "log" is available, as in the later snippets

    public void addToCount() {
        count++;
        log.info("count increased to " + count);
    }
}
You could use external synchronization and wrap every call to addToCount() in a synchronized block:
synchronized (foo) {
foo.addToCount();
}
Or the class itself can use internal synchronization and do the locking for you. This performs better because the logger class does not have to be a part of the lock:
public void addToCount() {
    int val;
    synchronized (this) {
        val = ++count;
    }
    // this log call should not be synchronized since it does IO
    log.info("count increased to " + val);
}
Of course, the Foo class really should use an AtomicInteger in this case and take care of its own reentrance internally:
private final AtomicInteger count = new AtomicInteger(0);

public void addToCount() {
    int val = count.incrementAndGet();
    log.info("count increased to " + val);
}
Let's say you work in a bank. Every time you need to use the safe, it needs to be unlocked, and then re-locked when you're done using it.
Now let's say that you need to carry 50 boxes into the safe. You have two options:
Carry each box over individually, opening and closing the (extremely heavy) door each time
Lock the front door to the bank and leave the vault open, make 50 trips without touching the internal vault door
Which one is faster? (The first option is internal synchronization, the second option is external synchronization.)
I know there are many questions about this, but I still don't quite understand. I know what both of these keywords do, but I can't determine which to use in certain scenarios. Here are a couple of examples that I'm trying to determine which is the best to use.
Example 1:
import java.net.ServerSocket;

public class Something extends Thread {
    private ServerSocket serverSocket;

    public void run() {
        while (true) {
            if (serverSocket.isClosed()) {
                ...
            } else { // Should this block use synchronized (serverSocket)?
                // Do stuff with serverSocket
            }
        }
    }

    public ServerSocket getServerSocket() {
        return serverSocket;
    }
}
public class SomethingElse {
    Something something = new Something();

    public void doSomething() {
        something.getServerSocket().close();
    }
}
Example 2:
public class Server {
    private int port; // Should it be volatile, or should the threads accessing it use synchronized (server)?

    // getPort() and setPort(int) are accessed from multiple threads
    public int getPort() {
        return port;
    }

    public void setPort(int port) {
        this.port = port;
    }
}
Any help is greatly appreciated.
A simple answer is as follows:
synchronized can always be used to give you a thread-safe / correct solution,
volatile will probably be faster, but can only be used to give you a thread-safe / correct solution in limited situations.
If in doubt, use synchronized. Correctness is more important than performance.
Characterizing the situations under which volatile can be used safely involves determining whether each update operation can be performed as a single atomic update to a single volatile variable. If the operation involves accessing other (non-final) state or updating more than one shared variable, it cannot be done safely with just volatile. You also need to remember that:
updates to a non-volatile long or double may not be atomic, and
Java operators like ++ and += are not atomic.
Terminology: an operation is "atomic" if the operation either happens entirely, or it does not happen at all. The term "indivisible" is a synonym.
When we talk about atomicity, we usually mean atomicity from the perspective of an outside observer; e.g. a different thread to the one that is performing the operation. For instance, ++ is not atomic from the perspective of another thread, because that thread may be able to observe the state of the field being incremented in the middle of the operation. Indeed, if the field is a long or a double, it may even be possible to observe a state that is neither the initial state nor the final state!
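To illustrate the last point with a small sketch (the class and field names are mine, not from the answer): volatile alone does not make ++ atomic, whereas an AtomicInteger, or a synchronized block, does:

import java.util.concurrent.atomic.AtomicInteger;

class Counters {
    private volatile int unsafeCount = 0;           // visible, but ++ is still read-modify-write
    private final AtomicInteger safeCount = new AtomicInteger();

    void incrementUnsafely() {
        unsafeCount++;               // two threads can interleave here and lose an update
    }

    void incrementSafely() {
        safeCount.incrementAndGet(); // performed as a single atomic operation
    }
}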
The synchronized keyword
synchronized indicates that a variable will be shared among several threads. It's used to ensure consistency by "locking" access to the variable, so that one thread can't modify it while another is using it.
Classic Example: updating a global variable that indicates the current time
The incrementSeconds() function must be able to complete uninterrupted because, as it runs, it creates temporary inconsistencies in the value of the global variable time. Without synchronization, another function might see a time of "12:60:00" or, at the comment marked with >>>, it would see "11:00:00" when the time is really "12:00:00" because the hours haven't incremented yet.
void incrementSeconds() {
    if (++time.seconds > 59) {          // time might be 1:00:60
        time.seconds = 0;               // time is invalid here: minutes are wrong
        if (++time.minutes > 59) {      // time might be 1:60:00
            time.minutes = 0;           // >>> time is invalid here: hours are wrong
            if (++time.hours > 23) {    // time might be 24:00:00
                time.hours = 0;
            }
        }
    }
}
The volatile keyword
volatile simply tells the compiler not to make assumptions about the constant-ness of a variable, because it may change when the compiler wouldn't normally expect it. For example, the software in a digital thermostat might have a variable that indicates the temperature, and whose value is updated directly by the hardware. It may change in places that a normal variable wouldn't.
If degreesCelsius is not declared to be volatile, the compiler is free to optimize this:
void controlHeater() {
    while ((degreesCelsius * 9.0/5.0 + 32) < COMFY_TEMP_IN_FAHRENHEIT) {
        setHeater(ON);
        sleep(10);
    }
}
into this:
void controlHeater() {
    float tempInFahrenheit = degreesCelsius * 9.0/5.0 + 32;
    while (tempInFahrenheit < COMFY_TEMP_IN_FAHRENHEIT) {
        setHeater(ON);
        sleep(10);
    }
}
By declaring degreesCelsius to be volatile, you're telling the compiler that it has to check its value each time it runs through the loop.
Summary
In short, synchronized lets you control access to a variable, so you can guarantee that updates are atomic (that is, a set of changes will be applied as a unit; no other thread can access the variable when it's half-updated). You can use it to ensure consistency of your data. On the other hand, volatile is an admission that the contents of a variable are beyond your control, so the code must assume it can change at any time.
There is insufficient information in your post to determine what is going on, which is why all the advice you are getting is general information about volatile and synchronized.
So, here's my general advice:
During the cycle of writing-compiling-running a program, there are two optimization points:
at compile time, when the compiler might try to reorder instructions or optimize data caching.
at runtime, when the CPU has its own optimizations, like caching and out-of-order execution.
All this means that instructions will most likely not execute in the order that you wrote them, even when that order must be maintained to ensure program correctness in a multithreaded environment. A classic example you will often find in the literature is this:
class ThreadTask implements Runnable {
    private boolean stop = false;
    private boolean work;

    public void run() {
        while (!stop) {
            work = !work; // simulate some work
        }
    }

    public void stopWork() {
        stop = true; // signal thread to stop
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadTask task = new ThreadTask();
        Thread t = new Thread(task);
        t.start();
        Thread.sleep(1000);
        task.stopWork();
        t.join();
    }
}
Depending on compiler optimizations and CPU architecture, the above code may never terminate on a multi-processor system. This is because the value of stop may be cached in a register of the CPU running thread t, such that the thread never again reads the value from main memory, even though the main thread has updated it in the meantime.
To combat this kind of situation, memory fences were introduced. These are special instructions that do not allow regular instructions before the fence to be reordered with instructions after the fence. One such mechanism is the volatile keyword. Variables marked volatile are not optimized by the compiler/CPU and will always be written/read directly to/from main memory. In short, volatile ensures visibility of a variable's value across CPU cores.
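As a sketch of the fix this implies, marking the flag in the ThreadTask example above as volatile is enough to make the loop terminate (only the changed part matters; the rest is unchanged):

class ThreadTask implements Runnable {
    // volatile forces every read of "stop" to see the latest write from stopWork()
    private volatile boolean stop = false;
    private boolean work;

    public void run() {
        while (!stop) {
            work = !work; // simulate some work
        }
    }

    public void stopWork() {
        stop = true;
    }
}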
Visibility is important, but should not be confused with atomicity. Two threads incrementing the same shared variable may produce inconsistent results even though the variable is declared volatile. This is because on some systems the increment is actually translated into a sequence of assembler instructions that can be interrupted at any point. For such cases, critical sections such as the synchronized keyword need to be used: only a single thread can execute the code enclosed in a synchronized block on a given lock. Another common use of critical sections is atomic updates to a shared collection; usually, iterating over a collection while another thread is adding or removing items will cause an exception to be thrown.
Finally two interesting points:
synchronized and a few other constructs such as Thread.join will introduce memory fences implicitly. Hence, incrementing a variable inside a synchronized block does not require the variable to also be volatile, assuming that's the only place it's being read/written.
For simple updates such as value swap, increment, decrement, you can use non-blocking atomic methods like the ones found in AtomicInteger, AtomicLong, etc. These are much faster than synchronized because they do not trigger a context switch in case the lock is already taken by another thread. They also introduce memory fences when used.
Note: In your first example, the field serverSocket is actually never initialized in the code you show.
Regarding synchronization, it depends on whether or not the ServerSocket class is thread safe. (I assume it is, but I have never used it.) If it is, you don't need to synchronize around it.
In the second example, int variables can be atomically updated so volatile may suffice.
volatile solves the "visibility" problem across CPU cores: the value is flushed from local registers/caches and synced with main memory. However, if we need a consistent value and an atomic operation, we need a mechanism to guard the critical data. That can be achieved with either a synchronized block or an explicit lock.
Whenever a question pops up on SO about Java synchronization, some people are very eager to point out that synchronized(this) should be avoided. Instead, they claim, a lock on a private reference is to be preferred.
Some of the given reasons are:
some evil code may steal your lock (very popular this one, also has an "accidentally" variant)
all synchronized methods within the same class use the exact same lock, which reduces throughput
you are (unnecessarily) exposing too much information
Other people, including me, argue that synchronized(this) is an idiom that is used a lot (also in Java libraries), is safe and well understood. It should not be avoided because you have a bug and you don't have a clue of what is going on in your multithreaded program. In other words: if it is applicable, then use it.
I am interested in seeing some real-world examples (no foobar stuff) where avoiding a lock on this is preferable when synchronized(this) would also do the job.
Therefore: should you always avoid synchronized(this) and replace it with a lock on a private reference?
Some further info (updated as answers are given):
we are talking about instance synchronization
both implicit (synchronized methods) and explicit form of synchronized(this) are considered
if you quote Bloch or other authorities on the subject, don't leave out the parts you don't like (e.g. Effective Java, item on Thread Safety: Typically it is the lock on the instance itself, but there are exceptions.)
if you need granularity in your locking other than synchronized(this) provides, then synchronized(this) is not applicable so that's not the issue
I'll cover each point separately.
Some evil code may steal your lock (very popular this one, also has an
"accidentally" variant)
I'm more worried about accidentally. What it amounts to is that this use of this is part of your class' exposed interface, and should be documented. Sometimes the ability of other code to use your lock is desired. This is true of things like Collections.synchronizedMap (see the javadoc).
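For example (a minimal sketch with made-up names), iterating over a Collections.synchronizedMap is only safe because the caller is allowed to take the wrapper's own lock, exactly as its javadoc prescribes:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class SharedAttributes {
    private final Map<String, String> map =
            Collections.synchronizedMap(new HashMap<String, String>());

    void printAll() {
        // The javadoc requires callers to hold the map's own lock while
        // iterating; that is only possible because the wrapper exposes it.
        synchronized (map) {
            for (Map.Entry<String, String> e : map.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}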
All synchronized methods within the same class use the exact same
lock, which reduces throughput
This is overly simplistic thinking; just getting rid of synchronized(this) won't solve the problem. Proper synchronization for throughput will take more thought.
You are (unnecessarily) exposing too much information
This is a variant of #1. Use of synchronized(this) is part of your interface. If you don't want/need this exposed, don't do it.
Well, firstly it should be pointed out that:
public void blah() {
synchronized (this) {
// do stuff
}
}
is semantically equivalent to:
public synchronized void blah() {
// do stuff
}
which is one reason not to use synchronized(this). You might argue that you can do stuff around the synchronized(this) block. The usual reason is to try to avoid having to do the synchronized check at all, which leads to all sorts of concurrency problems, specifically the double-checked locking problem, which just goes to show how difficult it can be to make a relatively simple check threadsafe.
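For reference, here is a sketch of the double-checked locking idiom alluded to above; it is only correct on Java 5+ and only because the field is volatile (the class name is made up):

public class LazyHolder {
    private static volatile LazyHolder instance;

    public static LazyHolder getInstance() {
        LazyHolder result = instance;      // first, unsynchronized check
        if (result == null) {
            synchronized (LazyHolder.class) {
                result = instance;         // second check, under the lock
                if (result == null) {
                    instance = result = new LazyHolder();
                }
            }
        }
        return result;
    }
}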
A private lock is a defensive mechanism, which is never a bad idea.
Also, as you alluded to, private locks can control granularity. One set of operations on an object might be totally unrelated to another but synchronized(this) will mutually exclude access to all of them.
synchronized(this) just really doesn't give you anything.
When you use synchronized(this), you are using the class instance itself as the lock. This means that while the lock is held by thread 1, thread 2 must wait.
Suppose the following code:
public void method1() {
    // do something ...
    synchronized (this) {
        a++;
    }
    // ................
}

public void method2() {
    // do something ...
    synchronized (this) {
        b++;
    }
    // ................
}
Method 1 modifies the variable a and method 2 modifies the variable b. Concurrent modification of the same variable by two threads must be avoided, and it is. BUT while thread 1 is modifying a and thread 2 is modifying b, the two updates could proceed without any race condition.
Unfortunately, the above code does not allow this, since we are using the same reference as the lock: threads have to wait even when they are not in a race condition, and the code obviously sacrifices concurrency.
The solution is to use 2 different locks for two different variables:
public class Test {
    private int a;
    private int b;
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    public void method1() {
        // do something ...
        synchronized (lockA) {
            a++;
        }
        // ................
    }

    public void method2() {
        // do something ...
        synchronized (lockB) {
            b++;
        }
        // ................
    }
}
The above example uses more fine-grained locks (two locks instead of one: lockA and lockB for variables a and b respectively) and as a result allows better concurrency; on the other hand, it is more complex than the first example ...
While I agree about not adhering blindly to dogmatic rules, does the "lock stealing" scenario seem so eccentric to you? A thread could indeed acquire the lock on your object "externally"(synchronized(theObject) {...}), blocking other threads waiting on synchronized instance methods.
If you don't believe in malicious code, consider that this code could come from third parties (for instance if you develop some sort of application server).
The "accidental" version seems less likely, but as they say, "make something idiot-proof and someone will invent a better idiot".
So I agree with the it-depends-on-what-the-class-does school of thought.
Edit following eljenso's first 3 comments:
I've never experienced the lock stealing problem but here is an imaginary scenario:
Let's say your system is a servlet container, and the object we're considering is the ServletContext implementation. Its getAttribute method must be thread-safe, as context attributes are shared data; so you declare it as synchronized. Let's also imagine that you provide a public hosting service based on your container implementation.
I'm your customer and deploy my "good" servlet on your site. It happens that my code contains a call to getAttribute.
A hacker, disguised as another customer, deploys his malicious servlet on your site. It contains the following code in the init method:
synchronized (this.getServletConfig().getServletContext()) {
while (true) {}
}
Assuming we share the same servlet context (allowed by the spec as long as the two servlets are on the same virtual host), my call on getAttribute is locked forever. The hacker has achieved a DoS on my servlet.
This attack is not possible if getAttribute is synchronized on a private lock, because 3rd-party code cannot acquire this lock.
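A minimal sketch of that private-lock variant (heavily simplified; the class and field names are mine, and a real ServletContext has far more to it):

import java.util.HashMap;
import java.util.Map;

public class SafeContext {
    private final Object attributeLock = new Object();
    private final Map<String, Object> attributes = new HashMap<String, Object>();

    public Object getAttribute(String name) {
        synchronized (attributeLock) {   // third-party code cannot reach this lock
            return attributes.get(name);
        }
    }

    public void setAttribute(String name, Object value) {
        synchronized (attributeLock) {
            attributes.put(name, value);
        }
    }
}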
I admit that the example is contrived and an oversimplistic view of how a servlet container works, but IMHO it proves the point.
So I would make my design choice based on security consideration: will I have complete control over the code that has access to the instances? What would be the consequence of a thread's holding a lock on an instance indefinitely?
It depends on the situation: whether there is only one shared entity, or more than one.
See full working example here
A small introduction.
Threads and shareable entities
It is possible for multiple threads to access the same entity, e.g. multiple connection threads sharing a single messageQueue. Since the threads run concurrently, one thread's data may be overwritten by another, leaving things in a messed-up state.
So we need some way to ensure that the shareable entity is accessed by only one thread at a time (concurrency control).
Synchronized block
A synchronized block is a way to ensure that a shareable entity is accessed by only one thread at a time.
First, a small analogy
Suppose there are two persons, P1 and P2 (threads), a washbasin (the shareable entity) inside a washroom, and a door (the lock).
We want one person to use the washbasin at a time.
The approach: P1 locks the door; only then can P1 use the washbasin. While the door is locked, P2 waits until P1 completes his work and unlocks the door.
Syntax:
synchronized (this) {
    SHARED_ENTITY.....
}
"this" provided the intrinsic lock associated with the class (Java developer designed Object class in such a way that each object can work as monitor).
The above approach works fine when there is only one shared entity and multiple threads (1:N).
N shareable entities, M threads
Now think of a situation where there are two washbasins (B1 and B2) inside the washroom and only one door. With the previous approach, only P1 can use a washbasin at a time while P2 waits outside. That is a waste of resources, as nobody is using B2.
A wiser approach would be to build a smaller room around each washbasin and give each its own door. That way, P1 can use B1 while P2 uses B2, and vice versa.
washbasin1;
washbasin2;

Object lock1 = new Object();
Object lock2 = new Object();

synchronized (lock1) {
    washbasin1;
}

synchronized (lock2) {
    washbasin2;
}
There seems a different consensus in the C# and Java camps on this. The majority of Java code I have seen uses:
// apply mutex to this instance
synchronized(this) {
// do work here
}
whereas the majority of C# code opts for the arguably safer:
// instance level lock object
private readonly object _syncObj = new object();
...
// apply mutex to private instance level field (a System.Object usually)
lock(_syncObj)
{
// do work here
}
The C# idiom is certainly safer. As mentioned previously, no malicious / accidental access to the lock can be made from outside the instance. Java code has this risk too, but it seems that the Java community has gravitated over time to the slightly less safe, but slightly more terse version.
That's not meant as a dig against Java, just a reflection of my experience working on both languages.
Make your data immutable if possible (final variables).
If you can't avoid mutation of shared data across multiple threads, use high-level programming constructs [e.g. the granular Lock API].
A Lock provides exclusive access to a shared resource: only one thread at a time can acquire the lock and all access to the shared resource requires that the lock be acquired first.
Sample code to use ReentrantLock which implements Lock interface
import java.util.concurrent.locks.ReentrantLock;

class X {
    private final ReentrantLock lock = new ReentrantLock();
    // ...

    public void m() {
        lock.lock(); // block until condition holds
        try {
            // ... method body
        } finally {
            lock.unlock();
        }
    }
}
Advantages of Lock over Synchronized(this)
The use of synchronized methods or statements forces all lock acquisition and release to occur in a block-structured way.
Lock implementations provide additional functionality over the use of synchronized methods and statements by providing
A non-blocking attempt to acquire a lock (tryLock())
An attempt to acquire the lock that can be interrupted (lockInterruptibly())
An attempt to acquire the lock that can timeout (tryLock(long, TimeUnit)).
A Lock class can also provide behavior and semantics that are quite different from those of the implicit monitor lock, such as
guaranteed ordering
non-reentrant usage
deadlock detection
Have a look at this SE question regarding various type of Locks:
Synchronization vs Lock
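As a hedged sketch of the timed acquisition mentioned in the list above (the class and method names are mine):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TimedLockExample {
    private final ReentrantLock lock = new ReentrantLock();

    // Runs the critical section only if the lock can be acquired within
    // one second; otherwise it gives up instead of blocking forever.
    boolean tryDoWork() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}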
You can achieve thread safety by using the advanced concurrency API instead of synchronized blocks. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
The java.util.concurrent package has vastly reduced the complexity of my thread safe code. I only have anecdotal evidence to go on, but most work I have seen with synchronized(x) appears to be re-implementing a Lock, Semaphore, or Latch, but using the lower-level monitors.
With this in mind, synchronizing using any of these mechanisms is analogous to synchronizing on an internal object, rather than leaking a lock. This is beneficial in that you have absolute certainty that you control the entry into the monitor by two or more threads.
If you've decided that:
the thing you need to lock on is the current object; and
you want to lock it with granularity smaller than a whole method;
then I don't see a taboo over synchronized(this).
Some people deliberately use synchronized(this) (instead of marking the method synchronized) inside the whole contents of a method because they think it's "clearer to the reader" which object is actually being synchronized on. So long as people are making an informed choice (e.g. understand that by doing so they're actually inserting extra bytecodes into the method and this could have a knock-on effect on potential optimisations), I don't particularly see a problem with this. You should always document the concurrent behaviour of your program, so I don't see the "'synchronized' publishes the behaviour" argument as being so compelling.
As to the question of which object's lock you should use, I think there's nothing wrong with synchronizing on the current object if this would be expected by the logic of what you're doing and how your class would typically be used. For example, with a collection, the object that you would logically expect to lock is generally the collection itself.
I think there is a good explanation of why each of these is a vital technique under your belt in a book called Java Concurrency In Practice by Brian Goetz. He makes one point very clear - you must use the same lock "EVERYWHERE" to protect the state of your object. Synchronised methods and synchronising on an object often go hand in hand. E.g. Vector synchronises all its methods. If you have a handle to a vector object and are going to do "put if absent", then merely Vector synchronising its own individual methods isn't going to protect you from corruption of state. You need to synchronise using synchronized (vectorHandle). This will result in the SAME lock being acquired by every thread which has a handle to the vector, and will protect the overall state of the vector. This is called client side locking. We know as a matter of fact that Vector does synchronized (this) / synchronises all its methods, and hence synchronising on the object vectorHandle will result in proper synchronisation of the vector object's state. It's foolish to believe that you are thread safe just because you are using a thread safe collection. This is precisely the reason ConcurrentHashMap explicitly introduced the putIfAbsent method - to make such operations atomic.
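A small sketch of the client-side locking described above, the classic put-if-absent on a Vector (the helper class is mine; Vector's own methods lock on the Vector instance, so the external block uses that same lock):

import java.util.Vector;

class VectorHelper {
    static boolean putIfAbsent(Vector<String> vectorHandle, String value) {
        synchronized (vectorHandle) {          // same lock Vector uses internally
            boolean absent = !vectorHandle.contains(value);
            if (absent) {
                vectorHandle.add(value);
            }
            return absent;
        }
    }
}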
In summary
Synchronising at method level allows client side locking.
If you have a private lock object - it makes client side locking impossible. This is fine if you know that your class doesn't have "put if absent" type of functionality.
If you are designing a library - then synchronising on this or synchronising the method is often wiser. Because you are rarely in a position to decide how your class is going to be used.
Had Vector used a private lock object - it would have been impossible to get "put if absent" right. The client code will never gain a handle to the private lock thus breaking the fundamental rule of using the EXACT SAME LOCK to protect its state.
Synchronising on this or synchronised methods do have a problem as others have pointed out - someone could get a lock and never release it. All other threads would keep waiting for the lock to be released.
So know what you are doing and adopt the one that's correct.
Someone argued that having a private lock object gives you better granularity - e.g. if two operations are unrelated, they could be guarded by different locks, resulting in better throughput. But this, I think, is a design smell and not a code smell - if two operations are completely unrelated, why are they part of the SAME class? Why should a class club together unrelated functionality at all? Maybe a utility class? Hmmmm - some util providing string manipulation and calendar date formatting through the same instance?? ... doesn't make any sense to me at least!!
No, you shouldn't always. However, I tend to avoid it when there are multiple concerns on a particular object that only need to be threadsafe in respect to themselves. For example, you might have a mutable data object that has "label" and "parent" fields; these need to be threadsafe, but changing one need not block the other from being written/read. (In practice I would avoid this by declaring the fields volatile and/or using java.util.concurrent's AtomicFoo wrappers).
Synchronization in general is a bit clumsy, as it slaps a big lock down rather than thinking exactly how threads might be allowed to work around each other. Using synchronized(this) is even clumsier and anti-social, as it's saying "no-one may change anything on this class while I hold the lock". How often do you actually need to do that?
I would much rather have more granular locks; even if you do want to stop everything from changing (perhaps you're serialising the object), you can just acquire all of the locks to achieve the same thing, plus it's more explicit that way. When you use synchronized(this), it's not clear exactly why you're synchronizing, or what the side effects might be. If you use synchronized(labelMonitor), or even better labelLock.writeLock().lock(), it's clear what you are doing and what the effects of your critical section are limited to.
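A hedged sketch of that per-field locking idea (the class and field names are mine), using a ReentrantReadWriteLock so readers of the label never block each other:

import java.util.concurrent.locks.ReentrantReadWriteLock;

class LabelledNode {
    private final ReentrantReadWriteLock labelLock = new ReentrantReadWriteLock();
    private String label;

    String getLabel() {
        labelLock.readLock().lock();      // many readers may hold this at once
        try {
            return label;
        } finally {
            labelLock.readLock().unlock();
        }
    }

    void setLabel(String newLabel) {
        labelLock.writeLock().lock();     // writers get exclusive access
        try {
            label = newLabel;
        } finally {
            labelLock.writeLock().unlock();
        }
    }
}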
Short answer: You have to understand the difference and make a choice depending on the code.
Long answer: In general I would rather try to avoid synchronized(this) to reduce contention, but private locks add complexity you have to be aware of. So use the right synchronization for the right job. If you are not so experienced with multi-threaded programming, I would rather stick to instance locking and read up on this topic. (That said: just using synchronized(this) does not automatically make your class fully thread-safe.) This is not an easy topic, but once you get used to it, the answer whether to use synchronized(this) or not comes naturally.
A lock is used either for visibility or for protecting some data from concurrent modification which may lead to a race condition.
When you need to just make primitive type operations to be atomic there are available options like AtomicInteger and the likes.
But suppose you have two integers, such as x and y coordinates, which are related to each other and should be changed in an atomic manner. Then you would protect them using the same lock.
A lock should only protect the state that is related to each other. No less and no more. If you use synchronized(this) in each method then even if the state of the class is unrelated all the threads will face contention even if updating unrelated state.
class Point {
    private int x;
    private int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // mutating methods should be guarded by the same lock
    public synchronized void changeCoordinates(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
In the above example I have only one method that mutates both x and y, not two different methods, because x and y are related; if I had provided two separate methods for mutating x and y individually, the class would not have been thread safe.
This example is just to demonstrate and not necessarily the way it should be implemented. The best way to do it would be to make it IMMUTABLE.
Now, in opposition to the Point example, there is the TwoCounters example already provided by @Andreas, where the state is protected by two different locks because the pieces of state are unrelated to each other.
The process of using different locks to protect unrelated state is called lock striping or lock splitting.
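A minimal sketch of lock splitting (the names are mine; it mirrors the TwoCounters idea referenced above): two unrelated counters are guarded by two different locks, so updating one never blocks the other.

class TwoCounters {
    private final Object hitLock = new Object();
    private final Object missLock = new Object();
    private long hits;
    private long misses;

    void recordHit() {
        synchronized (hitLock) {    // contention on hits never blocks misses
            hits++;
        }
    }

    void recordMiss() {
        synchronized (missLock) {
            misses++;
        }
    }
}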
The reason not to synchronize on this is that sometimes you need more than one lock (the second lock often gets removed after some additional thinking, but you still need it in the intermediate state). If you lock on this, you always have to remember which one of the two locks is this; if you lock on a private Object, the variable name tells you that.
From the reader's viewpoint, if you see locking on this, you always have to answer the two questions:
what kind of access is protected by this?
is one lock really enough, didn't someone introduce a bug?
An example:
class BadObject {
    private Something mStuff;

    synchronized void setStuff(Something stuff) {
        mStuff = stuff;
    }

    synchronized Something getStuff() {
        return mStuff;
    }

    private MyListener myListener = new MyListener() {
        public void onMyEvent(...) {
            setStuff(...);
        }
    };

    synchronized void longOperation(MyListener l) {
        ...
        l.onMyEvent(...);
        ...
    }
}
If two threads begin longOperation() on two different instances of BadObject, they acquire
their locks; when it's time to invoke l.onMyEvent(...), we have a deadlock because neither of the threads may acquire the other object's lock.
In this example we may eliminate the deadlock by using two locks, one for short operations and one for long ones.
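A sketch of that two-lock arrangement (the lock names are mine, and the Something / MyListener types are reduced to minimal stubs for illustration):

class Something { }

interface MyListener {
    void onMyEvent();
}

class BetterObject {
    private final Object stuffLock = new Object();   // guards mStuff (short operations)
    private final Object longOpLock = new Object();  // guards the long operation
    private Something mStuff;

    void setStuff(Something stuff) {
        synchronized (stuffLock) {
            mStuff = stuff;
        }
    }

    Something getStuff() {
        synchronized (stuffLock) {
            return mStuff;
        }
    }

    void longOperation(MyListener l) {
        synchronized (longOpLock) {
            // the callback only ever needs stuffLock, never longOpLock,
            // so two instances can no longer deadlock each other
            l.onMyEvent();
        }
    }
}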
As already said here, a synchronized block can use a user-defined variable as the lock object, while a synchronized function uses only "this". And of course you can control which parts of your function should be synchronized, and so on.
But everyone says there is no difference between a synchronized function and a block that covers the whole function using "this" as the lock object. That is not true: the difference is in the bytecode that is generated in each case. When a synchronized block is used, a local variable holding a reference to "this" has to be allocated. As a result the function is slightly larger (not relevant if you have only a few functions).
More detailed explanation of the difference you can find here:
http://www.artima.com/insidejvm/ed2/threadsynchP.html
Also, usage of the synchronized keyword is not ideal from the following point of view:
The synchronized keyword is very limited in one area: when exiting a synchronized block, all threads that are waiting for that lock must be unblocked, but only one of those threads gets to take the lock; all the others see that the lock is taken and go back to the blocked state. That's not just a lot of wasted processing cycles: often the context switch to unblock a thread also involves paging memory off the disk, and that's very, very, expensive.
For more details in this area I would recommend you read this article:
http://java.dzone.com/articles/synchronized-considered
This is really just supplementary to the other answers, but if your main objection to using private objects for locking is that it clutters your class with fields that are not related to the business logic, then Project Lombok has @Synchronized to generate the boilerplate at compile-time:
@Synchronized
public int foo() {
    return 0;
}
compiles to
private final Object $lock = new Object[0];

public int foo() {
    synchronized ($lock) {
        return 0;
    }
}
A good example of when to use synchronized(this):
// add listener
public final synchronized void addListener(IListener l) { listeners.add(l); }

// remove listener
public final synchronized void removeListener(IListener l) { listeners.remove(l); }

// routine that raises events
public void run() {
    // some code here...
    Set ls;
    synchronized (this) {
        ls = listeners.clone();
    }
    for (IListener l : ls) { l.processEvent(event); }
    // some code here...
}
As you can see here, we synchronize on this so that the lengthy run method (possibly an infinite loop) can easily cooperate with the synchronized methods of the same class.
Of course it could very easily be rewritten to synchronize on a private field. But sometimes, when we already have a design with synchronized methods (e.g. a legacy class we derive from), synchronized(this) can be the only solution.
It depends on the task you want to do, but I wouldn't use it. Also, check whether the thread-safety you want to accomplish couldn't be achieved by synchronized(this) in the first place. There are also some nice locks in the API that might help you :)
I only want to mention a possible solution for unique private references in atomic parts of code without dependencies. You can use a static HashMap of locks and a simple static method named atomic() that creates the required references automatically from stack information (full class name and line number). Then you can use this method in synchronized statements without writing a new lock object.
// Synchronization objects (locks)
private static final HashMap<String, Object> locks = new HashMap<String, Object>();

// Simple method (synchronized so that concurrent callers don't create two locks for the same key)
private static synchronized Object atomic() {
    StackTraceElement[] stack = Thread.currentThread().getStackTrace(); // get execution point
    StackTraceElement exepoint = stack[2];
    // creates a unique key from the class name and line number of the execution point
    String key = String.format("%s#%d", exepoint.getClassName(), exepoint.getLineNumber());
    Object lock = locks.get(key); // use old or create new lock
    if (lock == null) {
        lock = new Object();
        locks.put(key, lock);
    }
    return lock; // return reference to lock
}
// Synchronized code
void dosomething1() {
    // start commands
    synchronized (atomic()) {
        // atomic commands 1
        ...
    }
    // other command
}

// Synchronized code
void dosomething2() {
    // start commands
    synchronized (atomic()) {
        // atomic commands 2
        ...
    }
    // other command
}
Avoid using synchronized(this) as a locking mechanism: it locks the whole class instance and can cause deadlocks. In such cases, refactor the code to lock only the specific section or variable that needs it, so that the whole instance doesn't get locked; synchronized can be used at block level inside a method.
Instead of using synchronized(this), the code below shows how you could lock on a narrower, private object instead.
private final Object lock = new Object(); // private lock guarding "operation" (field added for this example)

public void foo() {
    if (operation == null) {
        synchronized (lock) {
            if (operation == null) {
                // enter your code that this method has to handle...
            }
        }
    }
}
My two cents in 2019 even though this question could have been settled already.
Locking on 'this' is not bad if you know what you are doing, but locking on 'this' behind the scenes is (which, unfortunately, is what the synchronized keyword in a method definition does).
If you actually want users of your class to be able to 'steal' your lock (i.e. prevent other threads from dealing with it), you actually want all the synchronized methods to wait while another synchronized method is running, and so on.
It should be intentional and well thought out (and hence documented to help your users understand it).
To elaborate further, in the reverse direction you must know what you are 'gaining' (or 'missing out on') if you lock on an inaccessible lock (nobody can 'steal' your lock, you are in total control, and so on...).
The problem for me is that synchronized keyword in the method definition signature makes it just too easy for programmers not to think about what to lock on which is a mighty important thing to think about if you don't want to run into problems in a multi-threaded program.
One can't argue that 'typically' you don't want users of your class to be able to do this stuff, or that 'typically' you do... It depends on what functionality you are coding. You can't make a rule of thumb, as you can't predict all the use cases.
Consider, for example, PrintWriter, which uses an internal lock, but then people struggle to use it from multiple threads if they don't want their output to interleave.
Whether your lock should be accessible outside of the class is your decision as a programmer, based on what functionality the class has. It is part of the API. You can't, for instance, move from synchronized(this) to synchronized(privateObject) without risking breaking changes in the code using it.
Note 1: I know you can achieve whatever synchronized(this) 'achieves' by using an explicit lock object and exposing it, but I think it is unnecessary if your behaviour is well documented and you actually know what locking on 'this' means.
Note 2: I don't concur with the argument that if some code is accidentally stealing your lock, it's a bug and you have to solve it. This is in a way the same argument as saying I can make all my methods public even if they are not meant to be public. If someone is 'accidentally' calling my intended-to-be-private method, it's a bug. Why enable this accident in the first place!!! If the ability to steal your lock is a problem for your class, don't allow it. As simple as that.
Let me put the conclusion first - locking on private fields does not work for slightly more complicated multi-threaded programs. This is because multi-threading is a global problem. It is impossible to localize synchronization unless you write in a very defensive way (e.g. copying everything when passing it to other threads).
Here is the long explanation:
Synchronization includes 3 parts: Atomicity, Visibility and Ordering
A synchronized block is a very coarse level of synchronization. It enforces visibility and ordering just as you would expect. But for atomicity, it does not provide much protection. Atomicity requires global knowledge of the program rather than local knowledge. (And that makes multi-threaded programming very hard.)
Let's say we have a class Account having method deposit and withdraw. They are both synchronized based on a private lock like this:
class Account {
    private final Object lock = new Object();

    void withdraw(int amount) {
        synchronized (lock) {
            // ...
        }
    }

    void deposit(int amount) {
        synchronized (lock) {
            // ...
        }
    }
}
Consider that we need to implement a higher-level class which handles transfers, like this:
class AccountManager {
    void transfer(Account fromAcc, Account toAcc, int amount) {
        if (fromAcc.getBalance() > amount) {
            fromAcc.setBalance(fromAcc.getBalance() - amount);
            toAcc.setBalance(toAcc.getBalance() + amount);
        }
    }
}
Assuming we have 2 accounts now,
Account john;
Account marry;
If Account.deposit() and Account.withdraw() are locked with the internal lock only, that will cause a problem when we have two threads working:
// Some thread
void threadA() {
    john.withdraw(500);
}

// Another thread
void threadB() {
    accountManager.transfer(john, marry, 100);
}
It is possible for threadA and threadB to run at the same time: thread B finishes the conditional check, thread A withdraws, and then thread B withdraws again. This means we can withdraw $100 from John even if his account doesn't have enough money. This breaks atomicity.
You may propose: why not add withdraw() and deposit() to AccountManager then? But under this proposal, we would need to create a thread-safe Map from accounts to their locks, delete each lock after execution (otherwise it leaks memory), and ensure that no one else accesses Account.withdraw() directly. That would introduce a lot of subtle bugs.
The correct and most idiomatic way is to expose the lock on the Account and let the AccountManager use it. But in that case, why not just use the object itself?
class Account {
    // getBalance()/setBalance() omitted for brevity
    synchronized void withdraw(int amount) {
        // ...
    }

    synchronized void deposit(int amount) {
        // ...
    }
}

class AccountManager {
    void transfer(Account fromAcc, Account toAcc, int amount) {
        // Ensure locking order to prevent deadlock
        Account firstLock = fromAcc.hashCode() < toAcc.hashCode() ? fromAcc : toAcc;
        Account secondLock = fromAcc.hashCode() < toAcc.hashCode() ? toAcc : fromAcc;
        synchronized (firstLock) {
            synchronized (secondLock) {
                if (fromAcc.getBalance() > amount) {
                    fromAcc.setBalance(fromAcc.getBalance() - amount);
                    toAcc.setBalance(toAcc.getBalance() + amount);
                }
            }
        }
    }
}
To conclude in plain English: a private lock does not work for slightly more complicated multi-threaded programs.
(Reposted from https://stackoverflow.com/a/67877650/474197)
I think points one (somebody else using your lock) and two (all methods using the same lock needlessly) can happen in any fairly large application. Especially when there's no good communication between developers.
It's not cast in stone, it's mostly an issue of good practice and preventing errors.