The subject
I have some code that is decidedly not thread safe:
public class ExampleLoader
{
    private List<String> strings;

    protected List<String> loadStrings()
    {
        return Arrays.asList("Hello", "World", "Sup");
    }

    public List<String> getStrings()
    {
        if (strings == null)
        {
            strings = loadStrings();
        }
        return strings;
    }
}
Multiple threads accessing getStrings() simultaneously are expected to see strings as null, and thus loadStrings() (which is an expensive operation) is triggered multiple times.
The problem
I wanted to make the code thread safe, and as a good citizen of the world I wrote a failing Spock spec first:
def "getStrings is thread safe"() {
    given:
    def loader = Spy(ExampleLoader)
    def threads = (0..<10).collect { new Thread({ loader.getStrings() }) }

    when:
    threads.each { it.start() }
    threads.each { it.join() }

    then:
    1 * loader.loadStrings()
}
The above code creates and starts 10 threads that each calls getStrings(). It then asserts that loadStrings() was called only once when all threads are done.
I expected this to fail. However, it consistently passes. What?
After a debugging session involving System.out.println and other boring things, I found that the threads are indeed asynchronous: their run() methods printed in a seemingly random order. However, the first thread to access getStrings() would always be the only thread to call loadStrings().
The weird part
Frustrated after quite some time spent debugging, I wrote the same test again with JUnit 4 and Mockito:
@Test
public void getStringsIsThreadSafe() throws Exception
{
    // given
    ExampleLoader loader = Mockito.spy(ExampleLoader.class);
    List<Thread> threads = IntStream.range(0, 10)
            .mapToObj(index -> new Thread(loader::getStrings))
            .collect(Collectors.toList());

    // when
    threads.forEach(Thread::start);
    threads.forEach(thread -> {
        try {
            thread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });

    // then
    Mockito.verify(loader, Mockito.times(1))
            .loadStrings();
}
This test consistently fails due to multiple calls to loadStrings(), as was expected.
The question
Why does the Spock test consistently pass, and how would I go about testing this with Spock?
The cause of your problem is that Spock makes the methods it spies on synchronized. Specifically, the method MockController.handle(), through which all such calls go, is synchronized. You'll easily notice this if you add a pause and some output to your getStrings() method.
public List<String> getStrings() throws InterruptedException {
    System.out.println(Thread.currentThread().getId() + " goes to sleep");
    Thread.sleep(1000);
    System.out.println(Thread.currentThread().getId() + " awoke");
    if (strings == null) {
        strings = loadStrings();
    }
    return strings;
}
This way Spock inadvertently fixes your concurrency problem. Mockito obviously uses another approach.
A couple of other thoughts on your tests:
First, you don't do much to ensure that all your threads reach the getStrings() call at the same moment, which decreases the probability of collisions. A long time may pass between thread starts (long enough for the first one to complete the call before the others attempt it). A better approach is to use a synchronization primitive to remove the influence of thread startup time. For instance, a CountDownLatch can be used here:
given:
def final CountDownLatch latch = new CountDownLatch(10)
def loader = Spy(ExampleLoader)
def threads = (0..<10).collect { new Thread({
    latch.countDown()
    latch.await()
    loader.getStrings()
})}
Of course, within Spock it will make no difference; it's just an example of how to do it in general.
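Applied to the JUnit/Mockito test from the question, the same latch trick might look roughly like this (a sketch; only the thread creation changes, the rest of the test stays as above):

ExampleLoader loader = Mockito.spy(ExampleLoader.class);
CountDownLatch latch = new CountDownLatch(10);
List<Thread> threads = IntStream.range(0, 10)
        .mapToObj(index -> new Thread(() -> {
            try {
                latch.countDown();
                latch.await();          // wait until all 10 threads are ready
                loader.getStrings();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }))
        .collect(Collectors.toList());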
Second, the problem with concurrency tests is that they never guarantee that your program is thread safe. The best you can hope for is that such a test will show you that your program is broken; even if the test passes, it doesn't prove thread safety. To increase the chances of finding concurrency bugs you may want to run the test many times and gather statistics. Sometimes such tests only fail once in several thousand, or even once in several hundred thousand, runs. Your class is simple enough to reason about its thread safety directly, but that approach will not work for more complicated cases.
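For completeness, here is a minimal sketch of one way to make the loader itself thread safe (the posts above do not show a fix; synchronizing the accessor is the simplest option, a volatile field with double-checked locking being the usual alternative):

public class ExampleLoader
{
    private List<String> strings;

    protected List<String> loadStrings()
    {
        return Arrays.asList("Hello", "World", "Sup");
    }

    // synchronized guarantees that only one thread performs the lazy initialization
    // and that the write to strings is visible to every later caller
    public synchronized List<String> getStrings()
    {
        if (strings == null)
        {
            strings = loadStrings();
        }
        return strings;
    }
}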
Related
I was about to write something about this, but maybe it is better to have a second opinion before appearing like a fool...
So the idea in the next piece of code (Android's Room package v2.4.1, RoomTrackingLiveData) is that the winning thread is kept alive and is forced to check for contention that may have entered the process (coming from losing threads) while it was computing.
Meanwhile, failed CAS operations keep those losing threads from entering and executing the code, preventing repeated signals (mComputeFunction.call() or postValue()).
final Runnable mRefreshRunnable = new Runnable() {
    @WorkerThread
    @Override
    public void run() {
        if (mRegisteredObserver.compareAndSet(false, true)) {
            mDatabase.getInvalidationTracker().addWeakObserver(mObserver);
        }
        boolean computed;
        do {
            computed = false;
            if (mComputing.compareAndSet(false, true)) {
                try {
                    T value = null;
                    while (mInvalid.compareAndSet(true, false)) {
                        computed = true;
                        try {
                            value = mComputeFunction.call();
                        } catch (Exception e) {
                            throw new RuntimeException("Exception while computing database"
                                    + " live data.", e);
                        }
                    }
                    if (computed) {
                        postValue(value);
                    }
                } finally {
                    mComputing.set(false);
                }
            }
        } while (computed && mInvalid.get());
    }
};
final Runnable mInvalidationRunnable = new Runnable() {
    @MainThread
    @Override
    public void run() {
        boolean isActive = hasActiveObservers();
        if (mInvalid.compareAndSet(false, true)) {
            if (isActive) {
                getQueryExecutor().execute(mRefreshRunnable);
            }
        }
    }
};
The most obvious thing here is that atomics are being used for things they are not good at:
Identifying losers and ignoring winners (what reactive patterns need).
AND a happens-once behavior, performed by the losing thread.
This is completely counterintuitive to what atomics are able to achieve: they are extremely good at defining winners, and for anything that requires a "happens once", ensuring state consistency becomes impossible (that last point is suitable for starting a philosophical debate about concurrency, and I will happily agree with any conclusion).
If atomics are instead used as "contention checkers" and "contention blockers", then we can implement the same principle with a volatile read of an atomic value after a successful CAS, checking that value against the snapshot/witness at every other step of the process.
private final AtomicInteger invalidationCount = new AtomicInteger();

private final IntFunction<Runnable> invalidationRunnableFun = invalidationVersion -> (Runnable) () -> {
    if (invalidationVersion != invalidationCount.get()) return;
    try {
        T value = computeFunction.call();
        if (invalidationVersion != invalidationCount.get()) return; // in case computation takes too long...
        postValue(value);
    } catch (Exception e) {
        e.printStackTrace();
    }
};

getQueryExecutor().execute(invalidationRunnableFun.apply(invalidationCount.incrementAndGet()));
In this case, each thread is left with the individual responsibility of checking its position in the contention lane; if its position has moved and is no longer at the front, it means a new thread has entered the process, and it should stop further processing.
This alternative is so laughably simple that my first question is:
Why didn't they do it like this?
Maybe my solution has a flaw... but the thing about the first alternative (the nested spin-lock) is that it follows the idea that an atomic CAS operation cannot be verified a second time, and that verification can only be achieved with another cmpxchg... which is false.
It also follows the common (but wrong) belief that whatever you define after a successful CAS is the sacred word of GOD... I have seldom seen code check again for concurrency issues once it enters the if body.
if (mInvalid.compareAndSet(false, true)) {
    // Ummm... yes... mInvalid is still true...
    // Let's use a second atomicReference just in case...
}
It also follows common code conventions that involve "double-<enter something>" in concurrency scenarios.
So, only because the first code follows those ideas am I inclined to believe that my solution is a valid and better alternative.
There is an argument in favor of the "nested spin-lock" option, but it does not hold up well:
The first alternative is "safer" precisely because it is SLOWER, so it has MORE time to identify contention at the tail end of a stream of incoming threads.
BUT it is not even 100% safe, because of the "happens once" requirement that is impossible to ensure.
There is also a behavior in the code where, at the end of a continuous flow of incoming threads, two signals are dispatched one after the other: the second-to-last one and then the last one.
But if it is safer because it is slower, wouldn't that defeat the goal of using atomics, whose whole point is to be the better-performing alternative in the first place?
I've just finished writing a simple blocking queue using semaphores, and I'd like to test its synchronization.
I've tested my implementation's stability with a large number of threads inserting into and removing from the queue.
I'd like some help with ideas/tests on how to test it more rigorously.
public class BBQ<T> {
    private ArrayList<T> tasks;
    private Semaphore mutex;
    private Semaphore full;
    private Semaphore zero;

    public BBQ(int numofWorkers) {
        tasks = new ArrayList<T>();
        mutex = new Semaphore(1, true);
        full = new Semaphore(numofWorkers, true);
        zero = new Semaphore(0, true);
    }

    public boolean add(T item) {
        boolean ans = false;
        try {
            zero.acquire();
            mutex.acquire();
            ans = tasks.add(item);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            mutex.release();
            full.release();
        }
        return ans;
    }

    public boolean remove() {
        boolean ans = false;
        try {
            full.acquire();
            mutex.acquire();
            if (tasks.remove(0) == null) {
                ans = false;
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            mutex.release();
            zero.release();
        }
        return ans;
    }

    public int size() {
        return tasks.size();
    }

    public String toString() {
        return tasks.toString();
    }
}
Your size and toString are not thread safe. There is no foolproof way to test for thread safety; you are much better off writing code which is simple enough to be understood and validated.
That being said, it doesn't hurt to have a simple test, as it might expose an error. (The absence of an error doesn't mean the code is thread safe.)
I would use an ExecutorService to add and remove entries as fast as possible and see if it gets into an error state. In particular, I would call toString() each time; I am pretty sure this will fail, and that is something you should be able to demonstrate.
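A rough sketch of such a stress test (assuming the BBQ class above, arbitrary thread and iteration counts, and that add/remove do not block indefinitely):

// Hammer the queue with concurrent add/remove/toString calls and fail loudly on any exception.
public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    BBQ<Integer> queue = new BBQ<>(1000);
    List<Future<?>> results = new ArrayList<>();
    for (int i = 0; i < 8; i++) {
        final boolean producer = (i % 2 == 0);
        results.add(pool.submit(() -> {
            for (int j = 0; j < 100_000; j++) {
                if (producer) {
                    queue.add(j);
                } else {
                    queue.remove();
                }
                queue.toString(); // reads the backing list without the mutex; likely to break
            }
        }));
    }
    for (Future<?> result : results) {
        result.get(); // rethrows ConcurrentModificationException etc. from the workers
    }
    pool.shutdown();
}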
I would run 3 types of tests to test your blocking code:
Deadlock testing - Run the calling threads with random time interval delays (see the sketch after this list). Deadlocks are hard to reproduce, and random delays are more likely to smoke out the problem. From your code, it does not appear that deadlocks will happen, since the sequence of locking is the same.
Performance - You mentioned a large number of threads. With your blocking code, that will cause delays in the execution of the later threads.
Multi-CPU tests - You may want to test this on VMs with multiple vCPUs to see whether it scales with multiple CPUs. I suspect it may not, since you are using shared memory.
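A compact sketch of the random-delay idea from the first point (counts and timeouts are illustrative; queue is the BBQ instance under test, and the fragment is assumed to run in a method that declares throws InterruptedException):

// Workers pause for random intervals between operations; a generous join timeout on the
// main thread is treated as a sign of a possible deadlock.
List<Thread> workers = new ArrayList<>();
for (int i = 0; i < 20; i++) {
    final boolean producer = (i % 2 == 0);
    workers.add(new Thread(() -> {
        for (int j = 0; j < 1_000; j++) {
            try {
                Thread.sleep(ThreadLocalRandom.current().nextInt(5)); // random delay
            } catch (InterruptedException e) {
                return;
            }
            if (producer) queue.add(j); else queue.remove();
        }
    }));
}
workers.forEach(Thread::start);
for (Thread worker : workers) {
    worker.join(60_000); // generous timeout
    if (worker.isAlive()) {
        System.err.println("possible deadlock, thread still blocked: " + worker.getName());
    }
}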
I'd like to get some help with some ideas\tests about how to test it in a more corrected way.
You can use the jcstress framework. It is used by some Oracle engineers for internal testing.
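To give a feel for what a jcstress test looks like, here is a minimal sketch (it needs the jcstress-core dependency; for simplicity it races a plain counter rather than the queue above):

import org.openjdk.jcstress.annotations.Actor;
import org.openjdk.jcstress.annotations.Arbiter;
import org.openjdk.jcstress.annotations.Expect;
import org.openjdk.jcstress.annotations.JCStressTest;
import org.openjdk.jcstress.annotations.Outcome;
import org.openjdk.jcstress.annotations.State;
import org.openjdk.jcstress.infra.results.I_Result;

@JCStressTest
@Outcome(id = "2", expect = Expect.ACCEPTABLE, desc = "Both increments observed.")
@Outcome(id = "1", expect = Expect.ACCEPTABLE_INTERESTING, desc = "One increment lost: the race.")
@State
public class RacyCounterTest {
    private int counter;

    @Actor
    public void actor1() { counter++; }

    @Actor
    public void actor2() { counter++; }

    @Arbiter
    public void result(I_Result r) { r.r1 = counter; }
}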
The question has been posted before, but no working example was provided. Brian Goetz mentions that, under certain conditions, the AssertionError can occur in the following code:
public class Holder {
    private int n;

    public Holder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n)
            throw new AssertionError("This statement is false");
    }
}
When holder is improperly published like this:
class someClass {
    public Holder holder;

    public void initialize() {
        holder = new Holder(42);
    }
}
I understand that this would occur when the reference to holder is made visible to another thread before the instance variable of the holder object is. So I wrote the following class to try to provoke this behavior, and thus the AssertionError:
public class Publish {
    public Holder holder;

    public void initialize() {
        holder = new Holder(42);
    }

    public static void main(String[] args) {
        Publish publish = new Publish();
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < Integer.MAX_VALUE; i++) {
                    publish.initialize();
                }
                System.out.println("initialize thread finished");
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                int nullPointerHits = 0;
                int assertionErrors = 0;
                while (t1.isAlive()) {
                    try {
                        publish.holder.assertSanity();
                    } catch (NullPointerException exc) {
                        nullPointerHits++;
                    } catch (AssertionError err) {
                        assertionErrors++;
                    }
                }
                System.out.println("Nullpointerhits: " + nullPointerHits);
                System.out.println("Assertion errors: " + assertionErrors);
            }
        });
        t1.start();
        t2.start();
    }
}
No matter how many times I run the code, the AssertionError never occurs. So for me there are several options:
The JVM implementation (in my case Oracle's 1.8.0_20) enforces that the invariants set during construction of an object are visible to all threads.
The book is wrong, which I would doubt as the author is Brian Goetz ... nuf said
I'm doing something wrong in my code above
So the questions I have:
- Has anyone ever provoked this kind of AssertionError successfully? If so, with what code?
- Why isn't my code provoking the AssertionError?
Your program is not properly synchronized, as that term is defined by the Java Memory Model.
That does not, however, mean that any particular run will exhibit the assertion failure you are looking for, nor that you can necessarily expect ever to see that failure. It may be that your particular VM just happens to handle that particular program in a way that turns out never to expose the synchronization failure. Or it may turn out that, although the program is susceptible to failure, the likelihood of observing one is remote.
And no, your test does not provide any justification for writing code that fails to be properly synchronized in this particular way. You cannot generalize from these observations.
You are looking for a very rare condition. Even if the code reads an uninitialized n, it may read the same default value on the next read, so the race you are looking for requires an update to land right between these two adjacent reads.
The problem is that every optimizer will coerce the two reads in your code into one once it starts processing your code, so after that you will never get an AssertionError, even if that single read evaluates to the default value.
Further, since the access to Publish.holder is unsynchronized, the optimizer is allowed to read its value exactly once and assume it remains unchanged during all subsequent iterations. So an optimized second thread would always process the same object, which will never turn back to the uninitialized state. Even worse, an optimistic optimizer may go as far as to assume that n is always 42, as you never initialize it to anything else in this run, and it will not consider the case that you want a race condition. So both loops may get optimized to no-ops.
In other words: if your code doesn't fail on the first access, the likelihood of spotting the error in subsequent iterations drops dramatically, possibly to zero. This is the opposite of your idea of letting the code run inside a long loop, hoping that you will eventually encounter the error.
The best chances of getting the data race are on the first, non-optimized, interpreted execution of your code. But keep in mind, the chances of that specific data race are still extremely low, even when running the entire test code in pure interpreted mode.
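For reference, here is a minimal sketch (not from the answers above) of two standard ways to make the publication safe, so the assertion can never fail:

// A volatile field creates a happens-before edge between the write in initialize()
// and any later read, so a reader can never see a partially constructed Holder.
public class SafePublish {
    public volatile Holder holder;

    public void initialize() {
        holder = new Holder(42);
    }
}

// Alternatively, making the field final inside Holder gives the same guarantee
// through final-field semantics, even if the reference itself is published unsafely.
class SafeHolder {
    private final int n;

    public SafeHolder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n)
            throw new AssertionError("This statement is false");
    }
}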
I want to confirm my understanding: if I surround a block of code with a synchronized(this) {} statement, does this mean that I am making those statements atomic?
No, it does not make your statements atomic. For example, if you have two statements inside one synchronized block, the first may succeed but the second may fail. Hence, the result is not "all or nothing". But with regard to multiple threads, you do ensure that no statements from two threads are interleaved. In other words: all statements of all threads are strictly serialized, even though there is no guarantee that either all or none of a thread's statements get executed.
Have a look at how Atomicity is defined.
Here is an example showing that the reader is able to read a corrupted state; hence the synchronized block was not executed atomically (forgive me the nasty formatting):
public class Example {
    public static void sleep() {
        try { Thread.sleep(400); } catch (InterruptedException e) {}
    }

    public static void main(String[] args) {
        final Example example = new Example(1);
        ExecutorService executor = newFixedThreadPool(2);
        try {
            Future<?> reader = executor.submit(new Runnable() {
                @Override public void run() {
                    int value;
                    do {
                        value = example.getSingleElement();
                        System.out.println("single value is: " + value);
                    } while (value != 10);
                }
            });
            Future<?> writer = executor.submit(new Runnable() {
                @Override public void run() {
                    for (int value = 2; value < 10; value++) example.failDoingAtomic(value);
                }
            });
            reader.get();
            writer.get();
        } catch (Exception e) {
            e.getCause().printStackTrace();
        } finally {
            executor.shutdown();
        }
    }

    private final Set<Integer> singleElementSet;

    public Example(int singleIntValue) {
        singleElementSet = new HashSet<>(Arrays.asList(singleIntValue));
    }

    public synchronized void failDoingAtomic(int replacement) {
        singleElementSet.clear();
        if (new Random().nextBoolean()) sleep();
        else throw new RuntimeException("I failed badly before adding the new value :-(");
        singleElementSet.add(replacement);
    }

    public int getSingleElement() {
        return singleElementSet.iterator().next();
    }
}
No, synchronization and atomicity are two different concepts.
Synchronization means that a code block can be executed by at most one thread at a time, but other threads (that execute some other code that uses the same data) can see intermediate results produced inside the "synchronized" block.
Atomicity means that other threads do not see intermediate results - they see either the initial or the final state of the data affected by the atomic operation.
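A small illustration of the difference (a sketch, not from the answer above): publishing a complete, immutable snapshot through an AtomicReference gives readers atomicity in this sense, because they see either the old state or the new state, never something in between.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class AtomicSnapshot {
    private final AtomicReference<List<String>> state =
            new AtomicReference<>(Collections.singletonList("initial"));

    void replace(List<String> newState) {
        // publish a complete, immutable snapshot in a single reference write
        state.set(Collections.unmodifiableList(new ArrayList<>(newState)));
    }

    List<String> read() {
        return state.get(); // old snapshot or new snapshot, never an intermediate one
    }
}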
It's unfortunate that Java uses synchronized as a keyword. A synchronized block in Java is a "mutex" (short for "mutual exclusion"): a mechanism that ensures only one thread at a time can enter the block.
Mutexes are just one of many tools that are used to achieve "synchronization" in a multi-threaded program. Broadly speaking, synchronization refers to all of the techniques that are used to ensure that the threads work in a coordinated fashion to achieve a desired outcome.
Atomicity is what Oleg Estekhin said, above. We usually hear about it in the context of "transactions". Mutual exclusion (i.e., Java's synchronized) guarantees something less than atomicity: namely, it protects invariants.
An invariant is any assertion about the program's state that is supposed to be "always" true. For example, in a game where players exchange virtual coins, the total number of coins in the game might be an invariant. But it is often impossible to advance the state of the program without temporarily breaking the invariant. The purpose of mutexes is to ensure that only one thread, the one doing the work, can see the temporary "broken" state.
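A tiny sketch of that idea (the Bank class and its fields are hypothetical, just to illustrate the coin example): the invariant is briefly broken inside the critical section, but no other thread using the same lock can observe the broken state.

class Bank {
    // invariant: coins[0] + coins[1] == 100 whenever no transfer is in progress
    private final int[] coins = {50, 50};

    public synchronized void transfer(int from, int to, int amount) {
        coins[from] -= amount; // invariant temporarily broken here...
        coins[to] += amount;   // ...and restored here, all inside one critical section
    }

    public synchronized int total() {
        return coins[0] + coins[1]; // always 100 when read under the same lock
    }
}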
For code that uses synchronized on that object: yes.
For code that doesn't use the synchronized keyword on that object: no.
Can we say that by synchronizing a block of code we are making the contained statements atomic?
That is a very big leap. Atomicity means that the operation, if atomic, completes as a single indivisible step, whereas synchronizing a block only means that one thread at a time can access the critical region. The code in the critical region may take many steps and may fail partway through, which by itself does not make it atomic.
I have a method that I would like to call. However, I'm looking for a clean, simple way to kill it or force it to return if it is taking too long to execute.
I'm using Java.
to illustrate:
logger.info("sequentially executing all batches...");
for (TestExecutor executor : builder.getExecutors()) {
    logger.info("executing batch...");
    executor.execute();
}
I figure the TestExecutor class should implement Callable and continue in that direction.
But all i want to be able to do is stop executor.execute() if it's taking too long.
Suggestions...?
EDIT
Many of the suggestions received assume that the method being executed that takes a long time contains some kind of loop and that a variable could periodically be checked.
However, this is not the case. So something that won't necessarily be clean, and that will just stop the execution wherever it is, is acceptable.
You should take a look at these classes :
FutureTask, Callable, Executors
Here is an example :
public class TimeoutExample {
    public static Object myMethod() {
        // does your thing and takes a long time to execute
        Object someResult = "the result";
        return someResult;
    }

    public static void main(final String[] args) {
        Callable<Object> callable = new Callable<Object>() {
            public Object call() throws Exception {
                return myMethod();
            }
        };
        ExecutorService executorService = Executors.newCachedThreadPool();
        Future<Object> task = executorService.submit(callable);
        try {
            // ok, wait for 30 seconds max
            Object result = task.get(30, TimeUnit.SECONDS);
            System.out.println("Finished with result: " + result);
        } catch (ExecutionException e) {
            throw new RuntimeException(e);
        } catch (TimeoutException e) {
            System.out.println("timeout...");
            task.cancel(true); // interrupt the still-running task; otherwise it keeps running
        } catch (InterruptedException e) {
            System.out.println("interrupted");
        }
    }
}
Java's interruption mechanism is intended for this kind of scenario. If the method that you wish to abort is executing a loop, just have it check the thread's interrupted status on every iteration. If it's interrupted, throw an InterruptedException.
Then, when you want to abort, you just have to invoke interrupt on the appropriate thread.
Alternatively, you can use the approach Sun suggests as an alternative to the deprecated stop method. This doesn't involve throwing any exceptions; the method would just return normally.
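A minimal sketch of that flag-based approach (class and method names are illustrative): the worker polls a volatile flag and simply returns when asked to stop, without any exception being thrown.

class StoppableWorker implements Runnable {
    private volatile boolean stopRequested; // written by the controlling thread, read by the worker

    public void requestStop() {
        stopRequested = true;
    }

    @Override
    public void run() {
        while (!stopRequested) {
            // do one unit of work, then re-check the flag
        }
        // returns normally once stopRequested is set
    }
}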
I'm assuming the use of multiple threads in the following statements.
I've done some reading in this area and most authors say that it's a bad idea to kill another thread.
If the function that you want to kill can be designed to periodically check a variable or synchronization primitive, and then terminate cleanly if that variable or synchronization primitive is set, that would be pretty clean. Then some sort of monitor thread can sleep for a number of milliseconds and then set the variable or synchronization primitive.
Really, you can't... The only way to do it is to either use Thread.stop, agree on a 'cooperative' method (e.g. occasionally check Thread.isInterrupted or call a method which throws an InterruptedException, e.g. Thread.sleep()), or somehow invoke the method in another JVM entirely.
For certain kinds of tests, calling stop() is okay, but it will probably damage the state of your test suite, so you'll have to relaunch the JVM after each call to stop() if you want to avoid interaction effects.
For a good description of how to implement the cooperative approach, check out Sun's FAQ on the deprecated Thread methods.
For an example of this approach in real life, Eclipse RCP's Job API's 'IProgressMonitor' object allows some management service to signal sub-processes (via the 'cancel' method) that they should stop. Of course, that relies on the methods to actually check the isCancelled method regularly, which they often fail to do.
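The shape of that pattern, reduced to its essentials (a sketch with a hypothetical CancellationToken, not the actual Eclipse API):

interface CancellationToken {
    boolean isCancelled();
}

class LongRunningTask {
    void runAll(CancellationToken token) {
        for (int step = 0; step < 1_000_000; step++) {
            if (token.isCancelled()) {
                return; // give up cooperatively when the caller asks
            }
            // ... do one unit of work ...
        }
    }
}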
A hybrid approach might be to ask the thread nicely with interrupt, then insist a couple of seconds later with stop. Again, you shouldn't use stop in production code, but it might be fine in this case, esp. if you exit the JVM soon after.
To test this approach, I wrote a simple harness, which takes a runnable and tries to execute it. Feel free to comment/edit.
public void testStop(Runnable r) {
    Thread t = new Thread(r);
    t.start();
    try {
        t.join(2000);
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    if (!t.isAlive()) {
        System.err.println("Finished on time.");
        return;
    }
    try {
        t.interrupt();
        t.join(2000);
        if (!t.isAlive()) {
            System.err.println("cooperative stop");
            return;
        }
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    System.err.println("non-cooperative stop");
    StackTraceElement[] trace = Thread.getAllStackTraces().get(t);
    if (null != trace) {
        Throwable temp = new Throwable();
        temp.setStackTrace(trace);
        temp.printStackTrace();
    }
    t.stop();
    System.err.println("stopped non-cooperative thread");
}
To test it, I wrote two competing infinite loops, one cooperative, and one that never checks its thread's interrupted bit.
public void cooperative() {
    try {
        for (;;) {
            Thread.sleep(500);
        }
    } catch (InterruptedException e) {
        System.err.println("cooperative() interrupted");
    } finally {
        System.err.println("cooperative() finally");
    }
}

public void noncooperative() {
    try {
        for (;;) {
            Thread.yield();
        }
    } finally {
        System.err.println("noncooperative() finally");
    }
}
Finally, I wrote the tests (JUnit 4) to exercise them:
@Test
public void testStopCooperative() {
    testStop(new Runnable() {
        @Override
        public void run() {
            cooperative();
        }
    });
}

@Test
public void testStopNoncooperative() {
    testStop(new Runnable() {
        @Override
        public void run() {
            noncooperative();
        }
    });
}
I had never used Thread.stop() before, so I was unaware of its operation. It works by throwing a ThreadDeath object from wherever the target thread is currently running. This extends Error. So, while it doesn't always work cleanly, it will usually leave simple programs in a fairly reasonable state. For example, any finally blocks are called. If you wanted to be a real jerk, you could catch ThreadDeath (or Error) and keep running anyway!
If nothing else, this really makes me wish more code followed the IProgressMonitor approach - adding another parameter to methods that might take a while, and encouraging the implementor of the method to occasionally poll the Monitor object to see if the user wants the system to give up. I'll try to follow this pattern in the future, especially methods that might be interactive. Of course, you don't necessarily know in advance which methods will be used this way, but that is what Profilers are for, I guess.
As for the 'start another JVM entirely' method, that will take more work. I don't know if anyone has written a delegating class loader, or if one is included in the JVM, but that would be required for this approach.
Nobody answered it directly, so here's the closest thing I can give you in a short amount of pseudocode:
Wrap the method in a Runnable/Callable. The method itself is going to have to check the interrupted status if you want it to stop: for example, if the method is a loop, check Thread.currentThread().isInterrupted() inside the loop and, if it is set, stop the loop (don't check on every iteration, though, or you'll just slow things down).
In the wrapping method, use thread.join(timeout) to wait the time you want to let the method run. Or, inside a loop there, call join repeatedly with a smaller timeout if you need to do other things while waiting. If the method doesn't finish after joining, use the above recommendations for aborting fast/clean.
So, code-wise, the old code:
void myMethod()
{
    methodTakingAllTheTime();
}
new code:
void myMethod()
{
    Thread t = new Thread(new Runnable()
    {
        public void run()
        {
            methodTakingAllTheTime(); // modify the internals of this method to check for interruption
        }
    });
    t.start(); // the original snippet never started the thread, so join() would return immediately
    try
    {
        t.join(5000); // 5 seconds
    }
    catch (InterruptedException e)
    {
        Thread.currentThread().interrupt();
    }
    t.interrupt();
}
but again, for this to work well, you'll still have to modify methodTakingAllTheTime or that thread will just continue to run after you've called interrupt.
The correct answer is, I believe, to create a Runnable to execute the sub-program and run it in a separate Thread. The Runnable may be a FutureTask, which you can run with a timeout ("get" method). If it times out, you'll get a TimeoutException, in which case I suggest you
call thread.interrupt() to attempt to end it in a semi-cooperative manner (many library calls seem to be sensitive to this, so it will probably work),
wait a little (Thread.sleep(300)),
and then, if the thread is still alive (thread.isAlive()), call thread.stop(). This is a deprecated method, but apparently the only game in town short of running a separate process with all that this entails.
In my application, where I run untrusted, uncooperative code written by my beginner students, I do the above, ensuring that the killed thread never has (write) access to any objects that survive its death. This includes the object that houses the called method, which is discarded if a timeout occurs. (I tell my students to avoid timeouts, because their agent will be disqualified.) I am unsure about memory leaks...
I distinguish between long runtimes (method terminates) and hard timeouts - the hard timeouts are longer and meant to catch the case when code does not terminate at all, as opposed to being slow.
From my research, Java does not seem to have a non-deprecated provision for running non-cooperative code, which, in a way, is a gaping hole in the security model. Either I can run foreign code and control the permissions it has (SecurityManager), or I cannot run foreign code, because it might end up taking up a whole CPU with no non-deprecated means to stop it.
double x = 2.0;
while (x > 0) { x = x * x; } // never terminates: x grows to Infinity and stays positive
System.out.print(x);         // keeps the loop from being optimized away
I can think of a not-so-great way to do this. If you can detect when it is taking too much time, you can have the method check a boolean at every step. Have the program set the boolean tooMuchTime to true if it is taking too much time (I can't help with that part). Then use something like this:
// tooMuchTime must be visible across threads, e.g. declared as a volatile boolean field
void method() {
    // task 1
    if (tooMuchTime) return;
    // task 2
    if (tooMuchTime) return;
    // task 3
    if (tooMuchTime) return;
    // task 4
    if (tooMuchTime) return;
    // task 5
    if (tooMuchTime) return;
    // final task
}