What guarantees the thread safety of Guava's ImmutableList? - java

The Javadoc in Guava's ImmutableList says that the class has the properties of Guava's ImmutableCollection, one of which is thread safety:
Thread safety. It is safe to access this collection concurrently from multiple threads.
But look at how the ImmutableList is built by its Builder. The Builder keeps all elements in an Object[] (that's fine, since no one claimed the builder is thread safe) and upon construction passes that array (or possibly a copy) to the constructor of RegularImmutableList:
public abstract class ImmutableList<E> extends ImmutableCollection<E>
        implements List<E>, RandomAccess {
    ...
    static <E> ImmutableList<E> asImmutableList(Object[] elements, int length) {
        switch (length) {
            case 0:
                return of();
            case 1:
                return of((E) elements[0]);
            default:
                if (length < elements.length) {
                    elements = Arrays.copyOf(elements, length);
                }
                return new RegularImmutableList<E>(elements);
        }
    }
    ...
    public static final class Builder<E> extends ImmutableCollection.Builder<E> {
        Object[] contents;
        ...
        public ImmutableList<E> build() { // Builder's build() method
            forceCopy = true;
            return asImmutableList(contents, size);
        }
        ...
    }
}
What does RegularImmutableList do with these elements? What you'd expect: it simply initializes its internal array, which is then used for all read operations:
class RegularImmutableList<E> extends ImmutableList<E> {
    final transient Object[] array;

    RegularImmutableList(Object[] array) {
        this.array = array;
    }
    ...
}
How is this thread safe? What guarantees the happens-before relationship between the writes performed in the Builder and the reads from RegularImmutableList?
According to the Java memory model there is a happens-before relationship in only five cases (from the Javadoc for java.util.concurrent):
Each action in a thread happens-before every action in that thread that comes later in the program's order.
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have similar memory consistency effects as entering and exiting monitors, but do not entail mutual exclusion locking.
A call to start on a thread happens-before any action in the started thread.
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
None of these seem to apply here. If some thread builds the list and passes its reference to some other threads without using locks (for example via a final or volatile field), I don't see what guarantees thread-safety. What am I missing?
Edit:
Yes, the write of the reference to the array is safely published on account of the field being final. So that part is clearly thread safe.
What I was wondering about were the writes of the individual elements. The elements of the array are neither final nor volatile. Yet they seem to be written by one thread and read by another without synchronization.
So the question can be boiled down to "if thread A writes to a final field, does that guarantee that other threads will see not just that write but all of A's previous writes as well?"

The JMM guarantees safe initialization (all values initialized in the constructor will be visible to readers) if all fields in the object are final and there is no leakage of this from the constructor1:
class RegularImmutableList<E> extends ImmutableList<E> {
    final transient Object[] array;
    ^
    RegularImmutableList(Object[] array) {
        this.array = array;
    }
}
The final field semantics guarantee that readers will see an up-to-date array:
The effects of all initializations must be committed to memory before any code after the constructor publishes the reference to the newly constructed object.
Thank you to @JBNizet and to @chrylis for the link to the JLS.
1 - "If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are." - JLS §17.5.

As you stated: "Each action in a thread happens-before every action in that thread that comes later in the program's order."
Obviously, if a thread could somehow access the object before the constructor was even invoked, you would be screwed. So something must prevent the object from being accessed before its constructor returns. But once the constructor returns, anything that lets another thread access the object is safe because it happens after in the constructing thread's program order.
Basic thread safety with any shared object is accomplished by ensuring that whatever allows threads to access the object does not take place until the constructor returns, establishing that anything the constructor might do happens before any other thread might access the object.
The flow is:
1. The object does not exist and cannot be accessed.
2. Some thread calls the object's constructor (or does whatever else is needed to get the object ready to be used).
3. That thread then does something to allow other threads to access the object.
4. Other threads can now access the object.
Program order of the thread invoking the constructor ensures that no part of 4 happens until all of 2 is done.
Note that this applies just the same if things need to be done after the constructor returns; you can just consider them logically part of the construction process. And similarly, parts of the job can be done by other threads, so long as anything that needs to see work done by another thread cannot start until some relationship is established with the work that other thread did.
Does that not 100% answer your question?
To restate:
How is this thread safe? What guarantees the happens-before relationship between the writes performed in the Builder and the reads from RegularImmutableList?
The answer is whatever prevented the object from being accessed before the constructor was even called (which has to be something, otherwise we'd be completely screwed) continues to prevent the object from being accessed until after the constructor returns. The constructor is effectively an atomic operation because no other thread could possibly attempt to access the object while it's running. Once the constructor returns, whatever the thread that called the constructor does to allow other threads to access the object necessarily takes place after the constructor returns because, "[e]ach action in a thread happens-before every action in that thread that comes later in the program's order."
And, one more time:
If some thread builds the list and passes its reference to some other threads without using locks (for example via a final or volatile field), I don't see what guarantees thread-safety. What am I missing?
The thread first builds the list and then passes its reference. The building of the list "happens-before every action in that thread that comes later in the program's order" and thus happens-before the passing of the reference. Thus any thread that sees the passing of the reference also sees the completed building of the list.
Were this not the case, there would be no good way to construct an object in one thread and then give other threads access to it. But this is perfectly safe to do because whatever method you use to hand the object from one thread to another will establish the necessary relationship.
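As a minimal sketch of that flow (the class and variable names are made up; only ImmutableList.of is real Guava API): the list is fully built first, and the reference is handed over by starting the reader thread afterwards, which is one of the happens-before edges listed in the question:
import com.google.common.collect.ImmutableList;

class HandOff {
    public static void main(String[] args) {
        // Step 2: build the object completely in the constructing thread.
        ImmutableList<String> list = ImmutableList.of("a", "b", "c");

        // Step 3: allow another thread to access it. Thread.start()
        // happens-before every action in the started thread, so the
        // reader is guaranteed to see the fully built list.
        Thread reader = new Thread(() -> System.out.println(list.get(1)));
        reader.start();
    }
}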

You are talking about two different things here.
Access to an already built RegularImmutableList and its array is thread safe because there won't be any concurrent writes and reads to that array. Only concurrent reads.
The threading issue can happen when you pass it to another thread. But that has nothing to do with RegularImmutableList, but rather with how other threads see the reference to it.
Let's say one thread creates a RegularImmutableList and passes its reference to another thread. For the other thread to see that the reference has been updated and is now pointing to the newly created RegularImmutableList, you will need to use either synchronization or volatile.
EDIT:
I think the concern the OP has is how the JMM makes sure that whatever the building thread wrote into the array after its creation becomes visible to other threads once its reference gets passed to them.
This happens through the use of volatile or synchronization. When, for example, the writing thread assigns the RegularImmutableList to a volatile variable, the JMM will make sure that all writes to the array get flushed to main memory, and when the other thread reads that variable, the JMM makes sure that it will see all of those flushed writes.
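A minimal sketch of that volatile hand-off (the class and field names are made up for illustration; the builder calls are real Guava API):
import com.google.common.collect.ImmutableList;

class VolatileHandOff {
    // A volatile write/read of this field establishes happens-before.
    static volatile ImmutableList<Integer> shared;

    static void buildingThread() {
        // All writes performed while building the list...
        ImmutableList<Integer> list = ImmutableList.<Integer>builder()
                .add(1)
                .add(2)
                .build();
        shared = list;                         // ...happen-before this volatile write.
    }

    static void readingThread() {
        ImmutableList<Integer> list = shared;  // volatile read
        if (list != null) {
            System.out.println(list.get(0));   // guaranteed to see the fully built list
        }
    }
}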

Related

Questions about how the synchronized keyword works with locks and thread starvation

In this java tutorial there's some code that shows an example to explain the use of the synchronized keyword. My point is, why shouldn't I write something like this:
public class MsLunch {
    private long c1 = 0;
    private long c2 = 0;
    //private Object lock1 = new Object();
    //private Object lock2 = new Object();

    public void inc1() {
        synchronized(c1) {
            c1++;
        }
    }

    public void inc2() {
        synchronized(c2) {
            c2++;
        }
    }
}
Without bothering to create lock objects? Also, why bother instantiating those lock objects? Can't I just pass a null reference? I think I'm missing something here.
Also, assume that I've two public synchronized methods in the same class accessed by several threads. Is it true that the two methods will never be executed at the same time? If the answer is yes, is there a built-in mechanism that prevents one method from starvation (never been executed or been executed too few times compared to the other method)?
As @11thdimension has replied, you cannot synchronize on a primitive type (e.g., long). It must be an object.
So, you might be tempted to do something like the following:
Long c1 = 0L;

public void incC1() {
    synchronized(c1) {
        c1++;
    }
}
This will not work properly, as "c1++" is a shortcut for "c1 = c1 + 1", which actually assigns a new object to c1, and as such, two threads might end up in the same block of synchronized code.
For the lock to work properly, the object being synchronized upon should not be reassigned. (Well, maybe in some rare circumstances where you really know what you are doing.)
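A minimal sketch of the usual fix - keep a dedicated, never-reassigned lock object and let the counter itself stay a plain field (the names are illustrative):
public class Counter {
    private final Object c1Lock = new Object(); // never reassigned
    private long c1 = 0;

    public void incC1() {
        synchronized (c1Lock) {   // always the same monitor, even as c1 changes
            c1++;
        }
    }
}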
You cannot pass a null object to the synchronized(...) statement. Java is effectively creating semaphores on the ref'd object, and uses that information to prevent more than one thread accessing the same protected resource.
You do not always need a separate lock object, as in the case of a synchronized method. In this case, the class object instance itself is used to store the locking information, as if you used 'this' in the method itself:
public void incC1() {
    synchronized(this) {
        c1++;
    }
}
First, you cannot pass a primitive variable to synchronized; it requires a reference. Second, that tutorial is just an example showing a guarded block. It's not c1 and c2 that it's trying to protect; it's trying to protect all the code inside the synchronized block.
The JVM uses the operating system's scheduling algorithm.
What is the JVM Scheduling algorithm?
So it's not the JVM's responsibility to see if threads are starved. You can, however, assign thread priorities to prefer one thread over another for execution.
Every thread has a priority. Threads with higher priority are executed in preference to threads with lower priority. Each thread may or may not also be marked as a daemon. When code running in some thread creates a new Thread object, the new thread has its priority initially set equal to the priority of the creating thread, and is a daemon thread if and only if the creating thread is a daemon.
From: https://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html
If you're concerned about this scenario then you have to implement it yourself, for example by maintaining a thread which checks for starving threads and, as time passes, increases the priority of the threads which have been waiting longer than others.
Yes, it's true that two methods which have been synchronized will never be executed on the same instance simultaneously.
Why bother instantiate that lock objects? Can't I just pass a null reference?
As others have mentioned, you cannot lock on long c1 because it is a primitive. Java locks on the monitor associated with an object instance. This is why you also can't lock on null.
The thread tutorial is trying to demonstrate a good pattern which is to create private final lock objects to precisely control the mutex locations that you are trying to protect. Calling synchronized on this or other public objects can cause external callers to block your methods which may not be what you want.
The tutorial explains this:
All updates of these fields must be synchronized, but there's no reason to prevent an update of c1 from being interleaved with an update of c2 — and doing so reduces concurrency by creating unnecessary blocking. Instead of using synchronized methods or otherwise using the lock associated with this, we create two objects solely to provide locks.
So they are also trying to allow updates to c1 and updates to c2 to happen concurrently ("interleaved") and not block each other while at the same time making sure that the updates are protected.
Assume that I've two public synchronized methods in the same class accessed by several thread. Is it true that the two methods will never be executed at the same time?
If one thread is working in a synchronized method of an object, another thread will be blocked if it tries the same or another synchronized method of the same object. Threads can run methods on different objects concurrently.
If the answer is yes, is there a built-in mechanism that prevents one method from starvation (never been executed or been executed too few times compared to the other method)?
As mentioned, this is handled by the native thread constructs of the operating system. All modern OSes handle thread starvation, which is especially important if the threads have different priorities.
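If you want more direct control over fairness than the OS scheduler gives you, one option (just a sketch, not something from the tutorial) is java.util.concurrent.locks.ReentrantLock constructed in fair mode, which hands the lock to the longest-waiting thread:
import java.util.concurrent.locks.ReentrantLock;

public class FairCounter {
    // 'true' requests a fair lock: waiting threads acquire it roughly in FIFO order.
    private final ReentrantLock lock = new ReentrantLock(true);
    private long c1 = 0;

    public void inc1() {
        lock.lock();
        try {
            c1++;
        } finally {
            lock.unlock();   // always release in finally
        }
    }
}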

Java variable shared between two process [duplicate]

My teacher in an upper level Java class on threading said something that I wasn't sure of.
He stated that the following code would not necessarily update the ready variable. According to him, the two threads don't necessarily share the static variable, specifically in the case when each thread (main thread versus ReaderThread) is running on its own processor and therefore doesn't share the same registers/cache/etc and one CPU won't update the other.
Essentially, he said it is possible that ready is updated in the main thread, but NOT in the ReaderThread, so that ReaderThread will loop infinitely.
He also claimed it was possible for the program to print 0 or 42. I understand how 42 could be printed, but not 0. He mentioned this would be the case when the number variable is set to the default value.
I thought perhaps it is not guaranteed that the static variable is updated between the threads, but this strikes me as very odd for Java. Does making ready volatile correct this problem?
He showed this code:
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready) Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}
There isn't anything special about static variables when it comes to visibility. If they are accessible any thread can get at them, so you're more likely to see concurrency problems because they're more exposed.
There is a visibility issue imposed by the JVM's memory model. Here's an article talking about the memory model and how writes become visible to threads. You can't count on changes one thread makes becoming visible to other threads in a timely manner (actually the JVM has no obligation to make those changes visible to you at all, in any time frame), unless you establish a happens-before relationship.
Here's a quote from that link (supplied in the comment by Jed Wesley-Smith):
Chapter 17 of the Java Language Specification defines the happens-before relation on memory operations such as reads and writes of shared variables. The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation. The synchronized and volatile constructs, as well as the Thread.start() and Thread.join() methods, can form happens-before relationships. In particular:
Each action in a thread happens-before every action in that thread that comes later in the program's order.
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have similar memory consistency effects as entering and exiting monitors, but do not entail mutual exclusion locking.
A call to start on a thread happens-before any action in the started thread.
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
He was talking about visibility and not to be taken too literally.
Static variables are indeed shared between threads, but the changes made in one thread may not be visible to another thread immediately, making it seem like there are two copies of the variable.
This article presents a view that is consistent with how he presented the info:
http://jeremymanson.blogspot.com/2008/11/what-volatile-means-in-java.html
First, you have to understand a little something about the Java memory model. I've struggled a bit over the years to explain it briefly and well. As of today, the best way I can think of to describe it is if you imagine it this way:
Each thread in Java takes place in a separate memory space (this is clearly untrue, so bear with me on this one).
You need to use special mechanisms to guarantee that communication happens between these threads, as you would on a message passing system.
Memory writes that happen in one thread can "leak through" and be seen by another thread, but this is by no means guaranteed. Without explicit communication, you can't guarantee which writes get seen by other threads, or even the order in which they get seen.
...
But again, this is simply a mental model to think about threading and volatile, not literally how the JVM works.
Basically it's true, but actually the problem is more complex. Visibility of shared data can be affected not only by CPU caches, but also by out-of-order execution of instructions.
Therefore Java defines a Memory Model that states under which circumstances threads can see a consistent state of the shared data.
In your particular case, adding volatile guarantees visibility.
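A sketch of the fix (the class is renamed here just for illustration): declaring ready volatile both ends the visibility problem with the loop and, because the write to number comes before the volatile write to ready in program order, guarantees the reader prints 42:
public class Visibility {
    private static volatile boolean ready;   // volatile: reads/writes establish happens-before
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready) Thread.yield();
            // Once the volatile read of 'ready' returns true, the earlier
            // write of 'number = 42' is guaranteed to be visible here.
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;   // volatile write publishes 'number' as well
    }
}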
They are "shared" of course in the sense that they both refer to the same variable, but they don't necessarily see each other's updates. This is true for any variable, not just static.
And in theory, writes made by another thread can appear to be in a different order, unless the variables are declared volatile or the writes are explicitly synchronized.
Within a single classloader, static fields are always shared. To explicitly scope data to threads, you'd want to use a facility like ThreadLocal.
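For completeness, a minimal sketch of ThreadLocal, which gives each thread its own copy of a value instead of sharing one static field (the class name and values are illustrative):
public class PerThreadCounter {
    // Each thread sees its own independent counter, initialized to 0.
    private static final ThreadLocal<Integer> counter =
            ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            counter.set(counter.get() + 1);
            // Always prints 1: the increment in one thread is invisible to the other.
            System.out.println(Thread.currentThread().getName() + " sees " + counter.get());
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}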
When you declare a static primitive variable, Java assigns it a default value:
public static int i;
When you define the variable like this, the default value of i is 0; that's why there is a possibility of getting 0.
Then the main thread updates the value of the boolean ready to true. Since ready is a static variable, the main thread and the other thread reference the same memory address, so the ready variable changes, the secondary thread gets out of the while loop and prints the value.
When printing, the value of number may still be its initialized value of 0: if the reader thread has passed the while loop before the main thread updates the number variable, then there is a possibility of printing 0.
@dontocsata
you can go back to your teacher and school him a little :)
A few notes from the real world, regardless of what you see or are told.
Please NOTE, the words below are regarding this particular case in the exact order shown.
The following 2 variables will reside on the same cache line under virtually any known architecture.
private static boolean ready;
private static int number;
Thread.exit (of the main thread) is guaranteed to run, and exit is guaranteed to cause a memory fence, due to the thread group thread removal (and many other issues). (It's a synchronized call, and I see no way for it to be implemented without the sync part, since the ThreadGroup must terminate as well if no daemon threads are left, etc.)
The started thread ReaderThread is going to keep the process alive since it is not a daemon one!
Thus ready and number will be flushed together (or number before, if a context switch occurs), and there is no real reason for reordering in this case; at least I can't think of one.
You will need something truly weird to see anything but 42. Again, I do presume both static variables will be in the same cache line. I just can't imagine a cache line 4 bytes long OR a JVM that will not assign them to a contiguous area (cache line).

Java thread safety and primitives

I have an object which contains some primitive variables
public class Myobject {
    public final double d;
    public long a;
}
All those objects I store in a set which is not synchronized:
private Set<Myobject> myset = new HashSet<>();
Now I want to pass these objects into another thread and perform some calculations. In this thread I will only read the variables "d" and "a"; the variables won't ever be changed.
My question is whether it is thread safe to create an unmodifiable set
Collections.unmodifiableSet(myset);
and pass it to the second thread.
You have two basic options to safely publish a reference to an object graph root:
be sure that the thread which does the construction of the object graph is the one which starts (all) the child thread(s) which will use it;
write a reference to a fully constructed object graph to a volatile variable.
Both approaches ensure a happens-before relationship between all inter-thread store actions which were executed while constructing your object graph and all inter-thread load actions which the other thread will be executing against the same graph. Since the first inter-thread action is guaranteed to be a load (reading the root reference), this implies a happens-before for all store actions of the other thread as well. So it is thread-safe to both read and write the object in the other thread - as long as it is the other thread, not one of the other threads.
As a standard precaution I include these quotes from the JLS, §17.4.4:
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).
An action that starts a thread synchronizes-with the first action in the thread it starts.
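A minimal sketch of the first option (the class name and values are made up for illustration): the constructing thread builds and fills the set, and only then starts the worker that reads it:
import java.util.HashSet;
import java.util.Set;

public class HandOffExample {
    public static void main(String[] args) {
        // Construct and fill the object graph in this thread first.
        Set<Long> myset = new HashSet<>();
        myset.add(42L);

        // Thread.start() synchronizes-with the first action of the new thread,
        // so the worker is guaranteed to see the fully populated set.
        Thread worker = new Thread(() -> {
            for (Long value : myset) {
                System.out.println(value);
            }
        });
        worker.start();
    }
}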
No, creating an unmodifiable collection is not enough. You must ensure that the thread that constructs (and/or modifies) the object safely publishes the object to the thread that reads it. There are several ways of doing this. Some of them are rather tricky to get right. The easiest way to get it right is to have a lock object, and have both threads synchronise on the lock when writing to, constructing or reading from it.
If I understand you correctly, then you don't even need to create an unmodifiable set for thread safety.
If you create your set in one thread, then pass it to another thread to do something with it while not accessing it from the first thread, then there is no chance of a thread collision, since at most one thread at a time will have access to your set.

Threads and Synchronization

I have a little difficulty in understanding the concept of private locks:
public class MyObject {
    private final Object lock = new Object(); // private final lock object

    public void mymethod() {
        synchronized (lock) { // Locks on the private Object
            // ...
        }
    }
}
In the code above, the lock is acquired on a different object, but the code in the current object is guarded by the synchronised block. Now, apart from the lock object in the code above, it could be any other object too. I find it difficult to understand how the lock on another object is related to the synchronised keyword in the current object. IMO, it may lead to some malicious code locking any object. What is the basis for allowing locks on other objects?
Well you could, for example, have an object that manages two lists.
If it's possible for thread A to alter list 1 while thread B alters list 2 then you'd use distinct locks, rather than synchronizing on the owning object.
Essentially explicit locks allow for finer grained control of behavior.
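A minimal sketch of that idea (the class and field names are made up for illustration):
import java.util.ArrayList;
import java.util.List;

public class TwoLists {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    private final List<String> list1 = new ArrayList<>();
    private final List<String> list2 = new ArrayList<>();

    public void addToList1(String s) {
        synchronized (lock1) {   // only blocks other list1 operations
            list1.add(s);
        }
    }

    public void addToList2(String s) {
        synchronized (lock2) {   // can run concurrently with addToList1
            list2.add(s);
        }
    }
}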
IMO, it may lead to some malicious code to lock any object.
This is the crux of the issue, actually.
With a separate lock object as shown (crucially, with private access) then only code in the MyObject class will be able to acquire a lock on that monitor - so you can see all of the code that might take part in locking situations involving this class.
Going to the other extreme, if you acquire a lock on e.g. a constant String, then any code, anywhere in the same JVM that locks on the same String will contend with your class - which is almost certainly not intended and will be very hard to track down.
Basically - if you lock on a non-private object, that then becomes part of your public interface, effectively. Sometimes this is intended (e.g. for the Collections.synchronizedFoo objects, they declare that one can synchronize on the object itself in order to coarsen your lock). Often it is not and is merely an oversight.
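For example, the Collections.synchronizedList wrapper documents exactly this kind of client-side locking: you are expected to synchronize on the returned list while iterating over it. A minimal sketch:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListExample {
    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("a");

        // The wrapper is its own lock; its Javadoc requires holding it while
        // iterating, so iteration and concurrent adds don't interleave badly.
        synchronized (list) {
            for (String s : list) {
                System.out.println(s);
            }
        }
    }
}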
You should keep your lock monitors private, for the same reason you keep private member variables private - to prevent other code messing with things that they shouldn't. And this (the object itself) is basically never private.
You're right, the code you provided could lock on any object. However, it didn't. It locked on a private instance field--a field which only that instance can access. That means that no other code can possibly lock on that object. You didn't, in this case, lock on some other object because if some other code locked on it, then you'd have to wait for it (and it may never be released).
"Malicious" code could lock on any object, but it only hurts other code if that other code attempts to lock on the same object. My creating your own private object to lock on, you protect yourself from locks by other code.
synchronized is effective in a multithreaded environment; this mechanism is there to control concurrency in your system.
When an object is synchronized on, the first thread that "touches" the object puts a lock on it until that thread has finished using it and releases it. This prevents many threads from changing the same object concurrently.
Locks are always on objects.
The purpose of a synchronized block is to guard an object (the one associated with the block) with a lock, so only the thread which holds the lock for that object can enter the block.
There is nothing wrong with it; there can be situations in code where you don't need to synchronize a complete method but just a few lines of code.
One use of a synchronized block is the situation where the state of an object (and other objects related to it) needs to be changed in a multi-threaded environment, but the class of this object doesn't have synchronized methods to alter its state.
In such a situation, synchronization is achieved using such a block.
Locks should be private iff there will be no reason for any lock related to your class to be held while no thread is actually running code in your class (or code called from code in your class). If you need to e.g. allow someone to maintain exclusive control over your object between operations, you'll have to expose a lock. This opens up many potential issues, including deadlock, so it's generally best if you can design your interfaces and contracts so as to render such extended locking unnecessary.
BTW, note that performing callbacks while holding a lock is slightly less dangerous than exposing a lock, but only slightly. You would eliminate the danger that a caller might acquire a lock and simply forget about it, but the danger of deadlock would still remain.

is this class thread safe?

Consider this class, with no instance variables and only methods which are not synchronized. Can we infer from this info that this class is thread-safe?
public class test {
    public void test1 {
        // do something
    }
    public void test2 {
        // do something
    }
    public void test3 {
        // do something
    }
}
It depends entirely on what state the methods mutate. If they mutate no shared state, they're thread safe. If they mutate only local state, they're thread-safe. If they only call methods that are thread-safe, they're thread-safe.
Not being thread safe means that if multiple threads try to access the object at the same time, something might change from one access to the next, and cause issues. Consider the following:
int incrementCount() {
    this.count++;
    // ... Do some other stuff
    return this.count;
}
would not be thread safe. Why is it not? Imagine thread 1 accesses it, count is increased, then some processing occurs. While going through the function, another thread accesses it, increasing count again. The first thread, which had it go from, say, 1 to 2, would now have it go from 1 to 3 when it returns. Thread 2 would see it go from 1 to 3 as well, so what happened to 2?
In this case, you would want something like this (keeping in mind that this isn't any language-specific code, but closest to Java, one of only 2 I've done threading in)
int incrementCount() synchronized {
    this.count++;
    // ... Do some other stuff
    return this.count;
}
The synchronized keyword here would make sure that as long as one thread is accessing it, no other threads could. This would mean that thread 1 hits it, count goes from 1 to 2, as expected. Thread 2 hits it while 1 is processing, it has to wait until thread 1 is done. When it's done, thread 1 gets a return of 2, then thread 2 goes through, and gets the expected 3.
Now, an example, similar to what you have there, that would be entirely thread-safe, no matter what:
int incrementCount(int count) {
    count++;
    // ... Do some other stuff
    return count;
}
As the only variables being touched here are fully local to the function, there is no case where two threads accessing it at the same time could try working with data changed from the other. This would make it thread safe.
So, to answer the question, assuming that the functions don't modify anything outside of the specific called function, then yes, the class could be deemed to be thread-safe.
Consider the following quote from an article about thread safety ("Java theory and practice: Characterizing thread safety"):
In reality, any definition of thread safety is going to have a certain degree of circularity, as it must appeal to the class's specification -- which is an informal, prose description of what the class does, its side effects, which states are valid or invalid, invariants, preconditions, postconditions, and so on. (Constraints on an object's state imposed by the specification apply only to the externally visible state -- that which can be observed by calling its public methods and accessing its public fields -- rather than its internal state, which is what is actually represented in its private fields.)
Thread safety
For a class to be thread-safe, it first must behave correctly in a single-threaded environment. If a class is correctly implemented, which is another way of saying that it conforms to its specification, no sequence of operations (reads or writes of public fields and calls to public methods) on objects of that class should be able to put the object into an invalid state, observe the object to be in an invalid state, or violate any of the class's invariants, preconditions, or postconditions.
Furthermore, for a class to be thread-safe, it must continue to behave correctly, in the sense described above, when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, without any additional synchronization on the part of the calling code. The effect is that operations on a thread-safe object will appear to all threads to occur in a fixed, globally consistent order.
So your class itself is thread-safe, as long as it doesn't have any side effects. As soon as the methods mutate any external objects (e.g. some singletons, as already mentioned by others) it's not any longer thread-safe.
Depends on what happens inside those methods. If they manipulate / call any method parameters or global variables / singletons which are not themselves thread safe, the class is not thread safe either.
(yes I see that the methods as shown here have no parameters, but no brackets either, so this is obviously not full working code - it wouldn't even compile as is.)
Yes, as long as there are no instance variables. Method calls using only input parameters and local variables are inherently thread-safe. You might consider making the methods static too, to reflect this.
If it has no mutable state - it's thread safe. If you have no state - you're thread safe by association.
No, I don't think so.
For example, one of the methods could obtain a (non-thread-safe) singleton object from another class and mutate that object.
Yes - this class is thread safe but this does not mean that your application is.
An application is thread safe if the threads in it cannot concurrently access heap state. All objects in Java (and therefore all of their fields) are created on the heap. So, if there are no fields in an object then it is thread safe.
In any practical application, objects will have state. If you can guarantee that these objects are not accessed concurrently then you have a thread safe application.
There are ways of optimizing access to shared state, e.g. atomic variables or careful use of the volatile keyword, but I think this is going beyond what you've asked.
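For instance, a minimal sketch of the atomic-variable approach (the class and field names are illustrative):
import java.util.concurrent.atomic.AtomicLong;

public class HitCounter {
    private final AtomicLong hits = new AtomicLong();

    public long recordHit() {
        // Atomic read-modify-write: safe under concurrent access
        // without any explicit synchronization.
        return hits.incrementAndGet();
    }
}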
I hope this helps.
