Java vs. Scala from a concurrency viewpoint - java

I am kicking off my final year project right now. I am going to be investigating concurrency from both the Java and Scala perspectives. Having just come out of a Java concurrency module, I can see why people say the shared-state threading approach is difficult to reason about: you have critical sections to worry about, and you run the risk of race conditions, deadlocks, etc. due to the non-deterministic way in which Java threads operate. With Java 1.5 this reasoning was given some clarity, but it is still far from crystal clear.
At first view, Scala appears to remove this complex reasoning through its actors. Actors give the programmer the ability to develop concurrent systems from a more sequential viewpoint, one that is easier to conceptualize. But, in exchange for this benefit, am I right in saying that there are some drawbacks? For instance, say we want to sort a large list in both scenarios: with Java you create two threads, split the list in two, worry about the critical sections, atomic actions, etc., and go code. With Scala, because it is "share nothing", you actually have to pass half the list to each of two actors to perform the sort operation, right?
I guess my question is: in Scala, is the price you pay for simpler reasoning the performance overhead of having to pass the collection to your actors?
I was thinking of doing some benchmark tests to this effect (selection sort, quicksort, etc.), but because one style is functional and one is imperative, I will not be comparing apples with apples from an algorithm viewpoint.
I would really appreciate any views you guys have on the above to give me some ideas to get me started.
Many thanks.

The nice thing about Scala is that you can do concurrency the Java way if you want. All the Java classes are available.
So it really boils down to the difference between a model where you have threads with concurrent access to mutable variables, and a model where you have stateful actors which send messages to each other but do not peek into each other's internals. And you're absolutely right that in some scenarios you have to trade off performance against ease of getting the code correct.
As a rough rule of thumb, I generally find that if you're going to have a pile of threads spending a significant amount of time waiting for a lock to open up, there is no clean way to separate the work to avoid having everyone waiting on that resource, and execution switches between threads quickly, then the Java model is far superior to an actor model where the actor sends an "I'm done" message back to a supervisor, which then sends out a "Here's new work!" message to an existing non-busy actor. Sorting algorithms, depending on how you envision them, can very much fall into this category.
For most everything else, the performance penalty associated with actors doesn't amount to much as far as I've seen. If you can conceive of your problem as lots and lots of reactive elements (i.e. they only need time when they've received a message), then actors can scale particularly well (millions available, though only a handful are working at any given instant); with threads, you'd need to have some sort of extra internal state to keep track of who should be doing what work, since you couldn't handle that many active threads.

I'm just going to point out here that Scala does not copy arguments passed to actors, so actors can share whatever is passed to them.
Unlike Erlang, it is the programmer's responsibility to avoid sharing mutable data. However, there is no penalty in sharing immutable data, since there is no need to lock it: all accesses to it are read-only. And Scala has strong support for immutable data structures.
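To make that concrete in Java terms (the same idea carries over to Scala's immutable collections), here is a minimal sketch; the class and variable names are just illustrative:

import java.util.List;

public class ImmutableShareDemo {
    public static void main(String[] args) {
        // An unmodifiable snapshot: nothing can mutate it, so any number
        // of threads may read it concurrently without any locking at all
        List<Integer> shared = List.copyOf(List.of(3, 1, 4, 1, 5, 9));

        Runnable reader = () -> System.out.println(
                Thread.currentThread().getName() + " sees sum = "
                        + shared.stream().mapToInt(Integer::intValue).sum());

        new Thread(reader, "reader-1").start();
        new Thread(reader, "reader-2").start();
    }
}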

Related

Alternative for synchronisation in multi threading in JAVA

I am a beginner in multithreading and have this one doubt:
Is there any alternative to traditional synchronisation (i.e. the synchronized keyword) in Java, since it affects the performance of the application?
As others have indicated, it depends on what you're trying to avoid, as well as what you're trying to achieve with multithreading.
If you mean "is there a zero-overhead way to do multithreading with shared resources," the answer is almost certainly "no." If two cars going in different directions approach an intersection at the same time, one of them will have to wait for the other one - there's no way that the cars can occupy the same space at the same time. That's why we have stop signs and traffic lights. (Alternatively, there are things like traffic circles, but even those have some overhead - you really can't just go through them at full speed as if they weren't there).
There are lots of ways of doing asynchronous and parallel operations other than using that type of synchronization:
Non-blocking I/O. The argument here is that, when you're interacting with a server or slow I/O device or something, most of the time is spent waiting for a response from the device or server, so you really don't need multiple threads to handle that - you just need to allow the original thread to do other work while it's waiting for a response. My usual analogy here is: suppose you go out to eat with a group of 10 people. When the waiter comes to take orders, the first person he asks to order isn't ready yet. The sensible thing to do, of course, is for the waiter to take other people's orders first, and then to come back to the first guy. There's no need to bring in separate waiters for each person's orders, bring in another waiter to wait for the first guy, or anything like that.
Promise/futures based async
Event-driven async
Using immutable data structures to minimize the amount of shared resources.
There are, of course, a lot of types of locking and synchronization mechanisms available other than just the synchronized keyword, such as counting semaphores, reader-writer locks, etc.
There are a lot of other types of concurrency as well, such as the actor model.
When used properly, these can help minimize your overhead and possibly reduce the amount of explicit locking and synchronization required. They all have overhead, though.
TL;DR You have overhead no matter what you do - just select the design and primitives that result in the smallest overhead for your particular use case.
You can look at ReentrantLock and ReentrantReadWriteLock.
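For instance, here is a minimal sketch of a reader-writer lock guarding a counter (the class is made up for illustration): many threads can read concurrently, while a writer gets exclusive access.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedCounter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    // Any number of readers may hold the read lock at the same time
    public int get() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the write lock, which is exclusive
    public void increment() {
        lock.writeLock().lock();
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }
}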

Biased locking design decision

I am trying to understand the rationale behind biased locking and the decision to make it the default. Since reading this blog post, namely:
"Since most objects are locked by at most one thread during their lifetime, we allow that thread to bias an object toward itself"
I am perplexed... Why would anyone design a synchronized set of methods to be accessed by one thread only? In most cases, people devise building blocks specifically for the multi-threaded use case, not a single-threaded one. In such cases, EVERY lock acquisition by a thread which is not biased comes at the cost of a safepoint, which is a huge overhead! Could someone please help me understand what I am missing in this picture?
The reason is probably that there are a decent number of libraries and classes that are designed to be thread-safe but are still useful outside of such circumstances. This is especially true of a number of classes that predate the Collections framework; Vector and its subclasses are a good example. If you also consider that most Java programs are not multithreaded, a biased locking scheme is in most cases an overall improvement. This is especially true of legacy code, where the use of such classes is all too common.
You are correct in a way, but there are cases when this is needed, as Holger very correctly points out in his comment. There is a so-called grace period during which no biasing is attempted at all, so it's not as if this will happen all the time. The last time I looked at the code, it was 5 seconds. To prove this you would need a library that can inspect a Java object's header (jol comes to mind), since the biased-locking state is held inside the mark word. So only after 5 seconds will an object that held a lock before be biased towards the same lock.
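If you want to see this for yourself, here is a rough sketch using the jol-core library (assumptions: jol-core is on the classpath, and the JVM still supports biased locking - it was disabled by default and deprecated in JDK 15):

import org.openjdk.jol.info.ClassLayout;

public class BiasedLockingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Wait out the biased-locking grace period
        // (tunable with -XX:BiasedLockingStartupDelay)
        Thread.sleep(6000);

        Object lock = new Object();
        // Mark word before any locking: biasable, but not yet biased
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) { }

        // After the first lock, the mark word should show the object
        // biased towards the thread that locked it
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
    }
}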
EDIT
I wanted to write a test for this, but seems like there is one already! Here is the link for it

Java Multithreading - More Threads That Do Less, or Fewer Threads That Do More?

EDIT: This question might be appropriate for other languages as well - the overall theory behind it seems mostly language agnostic. However, as this will run in a JVM, I'm sure there's differences between JVM overheads/threading and those of other environments.
EDIT 2: To clarify a little better, I guess the main question is which is better for scalability: to have smaller threads that can return quicker to enable processing other chunks of work for other workloads, or try to get a single workload through as quickly as possible? The workloads are sequential and multithreading won't help speed up a single unit of work in this case; it's more in hopes of increasing the throughput of the system overall (thanks to Uri for leading me towards the clarification).
I'm working on a system that's replacing an existing system; the current system has a pretty heavy load, so we already know the replacement needs to be highly scalable. It communicates with several outside processes, such as email, other services, databases, etc., and I'm already planning on making it multithreaded to help with scaling. I've worked on multithreaded apps before, just nothing with this high of a performance/scalability requirement, so I don't have much experience when it comes to getting the absolute most out of concurrency.
The question I have is what's the best way to divide the work up between threads? I'm looking at two different versions, one that creates a single thread for the full workflow, and another that creates a thread for each of the individual steps, continuing on to the next step (in a new/different thread) when the previous step completes - probably with a NodeJS-style callback system, but not terribly concerned about the direct implementation details.
I don't know much about the nitty-gritty details of multithreading - things like context switching, for example - so I don't know if the overhead of multiple threads would swamp the execution time in each of the threads. On one hand, the single thread model seems like it would be fastest for an individual work flow compared to the multiple threads; however, it would also tie up a single thread for the entire workflow, whereas the multiple threads would be shorter lived and would return to the pool quicker (I imagine, at least).
Hopefully the underlying concept is easy enough to understand; here's a contrived pseudo-code example though:
// Single-thread approach
foo();
bar();
baz();
Or:
// Multiple-thread approach, sketched with CompletableFuture
// (java.util.concurrent): each step runs on a pool thread and
// hands off to the next step when it completes
CompletableFuture
    .runAsync(() -> foo())
    .thenRunAsync(() -> bar())
    .thenRunAsync(() -> baz());
UPDATE: Completely forgot. The reason I'm considering the multithreaded approach is the (possibly mistaken) belief that, since the threads will have smaller execution times, they'll be available for other instances of the overall workload. If each operation takes, say, 5 seconds, then the single-thread version locks up a thread for 15 seconds; the multiple-thread version would lock up a single thread for 5 seconds, and then it can be used for another process.
Any ideas? If there's anything similar out there in the interwebs, I'd love even a link - I couldn't think of how to search for this (I blame Monday for that, but it would probably be the same tomorrow).
Multithreading is not a silver bullet. It's a means to an end.
Before making any changes, you need to ask yourself where your bottlenecks are, and what you're really trying to parallelize. I'm not sure that we can give good advice here without more information.
If foo, bar, and baz are part of a pipeline, you're not necessarily going to improve the overall latency of a single sequence by using multiple threads.
What you might be able to do is increase your throughput by letting multiple executions of the pipeline over different input pieces proceed in parallel, letting later items travel through the pipeline while earlier items are blocked on something (e.g., I/O). For instance, if bar() for a particular input is blocked waiting on a notification, you could do computationally heavy operations on another input, or devote CPU resources to foo(). A particularly important question is whether any of the external dependencies act as a limited shared resource; e.g., if one thread is accessing system X, is another thread going to be affected?
Threads are also very effective if you want to divide and conquer your problem - splitting your input into smaller parts, running each part through the pipeline, and then waiting on all the pieces to be ready. Is that possible with the kind of workflow you're looking at?
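As a rough illustration of that divide-and-conquer shape in plain Java (the pool size and toy summing workload are made up for the example), invokeAll blocks until every part has finished:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitAndJoin {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Split the input into independent halves
        List<Callable<Long>> parts = List.of(
                () -> sum(0, 500_000),
                () -> sum(500_000, 1_000_000));

        // invokeAll returns once every part is done
        long total = 0;
        for (Future<Long> part : pool.invokeAll(parts)) {
            total += part.get();
        }
        System.out.println("total = " + total);
        pool.shutdown();
    }

    private static long sum(int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += i;
        return s;
    }
}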
If you need to first do foo, then do bar, and then do baz, you should have one thread do each of these steps in sequence. This is simple and makes obvious sense.
The most common case where you're better off with the assembly line approach is when keeping the code in cache is more important than keeping the data in cache. In this case, having one thread that does foo over and over can keep the code for this step in cache, keep branch prediction information around, and so on. However, you will have data cache misses when you hand the results of foo to the thread that does bar.
This is more complex and should only be attempted if you have good reason to think it will work better.
Use a single thread for the full workflow.
Dividing up the workflow can't improve the completion time for one piece of work: since the parts of the workflow have to be done sequentially anyway, only one thread can work on the piece of work at a time. However, breaking up the stages can delay the completion time for one piece of work, because a processor which could have picked up the last part of one piece of work might instead pick up the first part of another piece of work.
Breaking up the stages into multiple threads is also unlikely to improve the time to completion of all your work, relative to executing all the stages in one thread, since ultimately you still have to execute all the stages for all the pieces of work.
Here's an example. If you have 200 of these pieces of work, each requiring three 5 second stages, and say a thread pool of two threads running on two processors, keeping the entire workflow in a single thread results in your first two results after 15 seconds. It will take 1500 seconds to get all your results, but you only need the working memory for two of the pieces of work at a time. If you break up the stages, then it may take a lot longer than 15 seconds to get your first results, and you potentially may need memory for all 200 pieces of work proceeding in parallel if you still want to get all the results in 1500 seconds.
In most cases, there are no efficiency advantages to breaking up sequential stages into different threads, and there may be substantial disadvantages. Threads are generally only useful when you can use them to do work in parallel, which does not seem to be the case for your work stages.
However, there is a huge disadvantage to breaking up the stages into separate threads. That disadvantage is that you now need to write multithreaded code that manages the stages. It's extremely easy to write bugs in such code, and such bugs can be very difficult to catch prior to production deployment.
The way to avoid such bugs is to keep the threading code as simple as possible given your requirements. In the case of your work stages, the simplest possible threading code is none at all.

Statistically difference between Normal Multithreading and Executors with multithreading

Can anybody suggest how I can show the statistical difference between normal multithreading and executors with multithreading, in terms of e.g. CPU time, total thread user time, memory usage, and so on?
Any suggestions will be helpful.
I am not sure I understand the term "statistical difference". I believe that you are asking about using executors versus the plain thread API, and what the difference is between them.
First, executors are based on threads; they are just another layer on top of them. No magic. The plain threading API lets you create and manage multithreaded applications, but requires dealing with the gory details of thread synchronization, pooling, transferring data between threads, etc.
The Executors framework solves some of these problems. You can define a thread pool policy, choose a queue type according to your needs, and just put new tasks on the incoming queue; the thread pool will execute the tasks according to its configuration.
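A minimal sketch of the difference (the pool size and tasks are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PlainVsExecutor {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println(
                "ran on " + Thread.currentThread().getName());

        // Plain thread API: one fresh OS thread per task
        new Thread(task).start();

        // Executor: tasks go onto a queue and run on reused pooled threads
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) {
            pool.submit(task);
        }
        pool.shutdown();
    }
}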
The problem is that your question is asking something that makes little sense.
Before you can meaningfully talk about the "statistical difference" between things, you have to have some way of quantifying and measuring them. And before that can happen, you need a clear statement of what you are trying to quantify / measure.
What you are asking satisfies none of these criteria.
Assuming that you have a meaningful question ...
At a practical level, the normal way that people try to quantify the effect of something like this (using thread pools versus creating new threads) is to develop a benchmark application with variants corresponding to the two strategies. Then measure the relative performance. But this has many problems.
The most fundamental problem is that what you are actually measuring is the effect of the two strategies for that benchmark, and that benchmark only. Generalizing from the benchmark to other applications is very difficult, because there are "hidden parameters" embedded in the design of any benchmark: for instance, the number of processors, the number of threads, and the length and complexity of the tasks. Without a good intuition as to what those parameters are, it is difficult to design a benchmark that takes them into account. And even if you succeed in figuring out what the hidden parameters are and quantifying their effect, you still can't know what those parameters will be in a real (more complex) application. At the end of the day, you'll end up with a model that can't give you quantitative answers for real problems. (Computing has nothing like Newton's law of gravity.)
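For what it's worth, if you still wanted to build such a benchmark, the usual tool on the JVM is a harness like JMH; here is a rough sketch of the shape it might take (the empty task bodies and pool size are placeholders, and every caveat above still applies):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class ThreadVsPoolBenchmark {
    private ExecutorService pool;

    @Setup
    public void setUp() {
        pool = Executors.newFixedThreadPool(4); // pool size is a placeholder
    }

    @TearDown
    public void tearDown() {
        pool.shutdown();
    }

    @Benchmark
    public void newThreadPerTask() throws InterruptedException {
        // Strategy 1: pay for thread creation on every task
        Thread t = new Thread(() -> { /* task body placeholder */ });
        t.start();
        t.join();
    }

    @Benchmark
    public void pooledTask() throws Exception {
        // Strategy 2: reuse pooled threads; only queue/handoff overhead
        pool.submit(() -> { /* task body placeholder */ }).get();
    }
}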

Can Someone Explain Threads to Me? [closed]

I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, and in what contexts would threads be nothing but a hindrance? Can someone point the way to some resources to help me learn more, or explain here?
This is a very broad topic. But here are the things I would want to know if I knew nothing about threads:
They are units of execution within a single process that happen "in parallel". What this typically means is that the processor switches rapidly between units of execution; this switching is called "context switching", and there is some overhead associated with it.
They can share memory! This is where problems can occur. I talk about this more in depth in a later bullet point.
The benefit of parallelizing your application is that logic that uses different parts of the machine can happen simultaneously. That is, if part of your process is I/O-bound and part of it is CPU-bound, the I/O intensive operation doesn't have to wait until the CPU-intensive operation is done. Some languages also allow you to run threads at the same time if you have a multicore processor (and thus parallelize CPU-intensive operations as well), though this is not always the case.
Thread-safe means that there are no race conditions, which is the term used for problems that occur when the execution of your process depends on timing (something you don't want to rely on). For example, if you have threads A and B both incrementing a shared counter C, you could see the case where A reads the value of C, then B reads the value of C, then A overwrites C with C+1, then B overwrites C with C+1. Notice that C only actually increments once!
A couple of common ways to avoid race conditions include synchronization, which enforces mutually exclusive access to shared state, or simply not having any shared state at all. But this is just the tip of the iceberg - thread safety is quite a broad topic.
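To make the counter example concrete, here is a minimal runnable sketch of that lost-update race alongside one common fix, an atomic counter (the iteration counts are arbitrary):

import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int unsafeCounter = 0;                           // plain shared int
    static AtomicInteger safeCounter = new AtomicInteger(); // atomic alternative

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;               // read-modify-write: not atomic
                safeCounter.incrementAndGet(); // one indivisible operation
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        // The unsafe count is frequently less than 200000: updates were lost
        System.out.println("unsafe = " + unsafeCounter);
        System.out.println("safe   = " + safeCounter.get());
    }
}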
I hope that helps! Understand that this was a very quick introduction to something that requires a good bit of learning. I would recommend finding a resource about multithreading in your preferred language, whatever that happens to be, and giving it a thorough read.
There are a few things you should know about threads.
Threads are like processes, but they share memory.
Threads often have hardware, OS, and language support, which might make them better than processes.
There are lots of fussy little things that threads need to support (like locks and semaphores) so they don't get the memory they share into an inconsistent state. This makes them a little difficult to use.
Locking isn't automatic (in the languages I know), so you have to be very careful with the memory they (implicitly) share.
Threads don't speed up applications. Algorithms speed up applications. Threads can be used in algorithms, if appropriate.
Well, someone will probably answer this better, but threads are for having background processing that won't freeze the user interface. You don't want to stop accepting keyboard or mouse input and tell the user, "just a moment, I want to finish this computation, it will only be a few more seconds." (And yet it's amazing how many times commercial programs do this.)
As far as thread safety goes, it means a function that does not keep internal saved state; if it did, you couldn't have multiple threads using it simultaneously.
As far as thread programming goes, you just have to start doing it, and then you'll start encountering the various issues unique to it, for example simultaneous access to data, in which case you have to choose some synchronization method such as critical sections or mutexes or something else, each having slightly different nuances in behavior.
As for the differences between processes and threads (which you didn't ask about): processes are an OS-level entity, whereas threads are associated with a program. In certain instances your program may want to create a process rather than a thread.
Threads are simply a way of executing multiple things simultaneously (assuming that the platform on which they are being run is capable of parallel execution). Thread safety is simply (well, nothing with threads is truly simple) making sure that the threads don't affect each other in harmful ways.
In general, you are unlikely to see systems use multiple threads for rendering graphics on the screen due to the multiple performance implications and complexity issues that may arise from that. Other tasks related to state management (or AI) can potentially be moved to separate threads however.
First rule of threading: don't thread. Second rule of threading: if you have to violate rule one...don't. Third rule: OK, fine you have to use threads, so before proceeding get your head into the pitfalls, understand locking and the common thread problems such as deadlock and livelocking.
Understand that threading does not speed up anything; it is only useful for putting long-running work in the background so that the user can do something else with the application. If you have to allow the user to interact with the application while the app does something else in the background, like polling a socket or waiting for asynchronous input from elsewhere in the application, then you may indeed require threading.
The thread sections in both Effective Java and Clean Code are good introductions to threads and their pitfalls.
Since the question is tagged with 'Java', I assume you are familiar with Java, in which case this is a great introductory tutorial
http://java.sun.com/docs/books/tutorial/essential/concurrency/
Orm, great question to ask. I think all serious programmers should learn about threads, because eventually you will at least consider using them, and you really want to be prepared when that happens. Concurrency bugs can be incredibly subtle, and the best way to avoid them is to know which idioms are safe(-ish).
I highly recommend you take the time to read the book Concurrent Programming in Java: Design Principles and Patterns by Doug Lea:
http://gee.cs.oswego.edu/dl/cpj/
Lea takes the time not only to teach you the concepts, but also to show you the correct and incorrect ways to use the concurrent programming primitives (in Java but also helpful for any other environment that uses shared-memory locking/signaling style concurrency). Most of all he teaches respect for the difficulty of concurrent programming.
I should add that this style of concurrent programming is the most common but not the only approach. There's also message passing, which is safer but forces you to structure your algorithm differently.
Since the original post is very broad, and also tagged with C++, I think the following pointers are relevant:
Anthony Williams, maintainer of the Boost Thread Library, has been working on a book called "C++ Concurrency in Action", a description of which you can find here. The first (introductory) chapter is available for free in pdf form here.
Also, Herb Sutter (known, among other things, for his "Exceptional C++" series) has been writing a book to be called "Effective Concurrency", many articles of which are available in draft form here.
There's a nice book, Java Concurrency in Practice: http://www.javaconcurrencyinpractice.com/
