What multiplatform Lock or synchronization approach should be used in multiplatform Kotlin code? In Java code I previously used synchronized, and I can see synchronized in Kotlin too. However, it is marked as deprecated and will soon be removed from the common standard library.
I can see withLock, but it's supported on JVM only, not multiplatform.
Any thoughts?
PS. For now we don't want to migrate to Kotlin coroutines: it would require too much rewriting, and the coroutines library's footprint is too large for an Android library with strict disk-footprint requirements.
From the Kotlin/Native concurrency documentation (here):
Concurrency in Kotlin/Native
Kotlin/Native runtime doesn't encourage a classical thread-oriented concurrency model with mutually exclusive code blocks and conditional variables, as this model is known to be error-prone and unreliable. Instead, we suggest a collection of alternative approaches, allowing you to use hardware concurrency and implement blocking IO. Those approaches are as follows, and they will be elaborated on in further sections:
Workers with message passing
Object subgraph ownership transfer
Object subgraph freezing
Object subgraph detachment
Raw shared memory using C globals
Coroutines for blocking operations (not covered in this document)
It seems that locks are not exposed in Kotlin/Native by design. There are implementations (see Lock.kt), but that class is marked internal.
However, there is a multiplatform implementation of locks in Ktor (very limited doc, source code). It is public, but marked with @InternalApi, which may affect its stability.
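If all you need is a basic lock on the JVM side plus one of the Native approaches listed above, one workaround is to declare your own expect class in common code and back it with platform types in the actual implementations. A minimal sketch, with purely illustrative names (SimpleLock, withLock) rather than any library API:

```kotlin
// commonMain — illustrative names, not a stdlib or Ktor API
expect class SimpleLock() {
    fun lock()
    fun unlock()
}

inline fun <T> SimpleLock.withLock(block: () -> T): T {
    lock()
    try {
        return block()
    } finally {
        unlock()
    }
}

// jvmMain — delegate to a plain ReentrantLock
actual class SimpleLock actual constructor() {
    private val delegate = java.util.concurrent.locks.ReentrantLock()
    actual fun lock() = delegate.lock()
    actual fun unlock() = delegate.unlock()
}

// A Kotlin/Native actual would have to use one of the approaches listed
// above (workers, frozen state, atomics), since no public lock is exposed there.
```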
You might also be interested in this KotlinLang discussion thread: Replacement for synchronized
There is no lock or synchronized in Kotlin common code; Kotlin's approach is to prefer immutable data. You can add your own expect AtomicReference in common code, with actual implementations for JVM and Native; it helps a lot. Keep in mind that coroutines on Native are single-threaded at the moment, and that you can't share mutable state between threads on Native.
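A minimal sketch of that expect/actual pattern, assuming a hand-rolled wrapper type (AtomicRef here is an illustrative name, not a stdlib class):

```kotlin
// commonMain
expect class AtomicRef<T>(initial: T) {
    fun get(): T
    fun set(value: T)
    fun compareAndSet(expected: T, new: T): Boolean
}

// jvmMain — backed by java.util.concurrent.atomic.AtomicReference
actual class AtomicRef<T> actual constructor(initial: T) {
    private val delegate = java.util.concurrent.atomic.AtomicReference(initial)
    actual fun get(): T = delegate.get()
    actual fun set(value: T) = delegate.set(value)
    actual fun compareAndSet(expected: T, new: T): Boolean =
        delegate.compareAndSet(expected, new)
}

// nativeMain — backed by kotlin.native.concurrent.AtomicReference;
// note that values stored here must be frozen under the current Native memory model
actual class AtomicRef<T> actual constructor(initial: T) {
    private val delegate = kotlin.native.concurrent.AtomicReference(initial)
    actual fun get(): T = delegate.value
    actual fun set(value: T) { delegate.value = value }
    actual fun compareAndSet(expected: T, new: T): Boolean =
        delegate.compareAndSet(expected, new)
}
```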
There is a full multiplatform "lock" implementation in the Kotlin coroutines library. It is based on atomicfu, and I think it can be extracted from there fairly easily even if you really don't want to depend on the full coroutines library (a minimal atomicfu-based sketch follows the links below):
official documentation
reference
source code
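For illustration only, here is roughly what a tiny lock built on atomicfu could look like. This is a busy-waiting sketch, not the coroutines Mutex, and it assumes the atomicfu Gradle plugin and runtime are applied to the project:

```kotlin
import kotlinx.atomicfu.atomic

// Minimal spin lock on top of atomicfu's multiplatform atomics.
// Only suitable for very short critical sections: it burns CPU while waiting,
// and on Kotlin/Native the current memory model still forbids sharing
// non-frozen mutable state between threads.
class SpinLock {
    private val locked = atomic(false)

    fun lock() {
        while (!locked.compareAndSet(false, true)) {
            // busy-wait until the compare-and-set wins
        }
    }

    fun unlock() {
        locked.value = false
    }
}

inline fun <T> SpinLock.withLock(block: () -> T): T {
    lock()
    try {
        return block()
    } finally {
        unlock()
    }
}
```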
Related
As part of a study I am doing, I am exploring the supposed simplicity of using languages like Scala & Clojure to achieve concurrency on the JVM.
By simplicity, I am hoping to prove that these languages provide easier concurrency constructs than what Java 7 provides.
Therefore, I am hoping to find some good references that explain the complexities of Java's concurrency model.
Other than pointing me in the direction of Google (which I have already searched with limited success), I would appreciate it if those in the know could provide me with some good references to get started in this area.
Thanks
Java does not support lambda expressions. Creating an inline callback (e.g., for the completion of an asynchronous call) requires five lines of boilerplate for an anonymous type.
This strongly discourages people from using callbacks. This is probably why Java 7 still does not have an interface for a callback that takes a value (as opposed to Runnable and Callable), whereas C# has had one since 2005.
Therefore, the JDK does not have any real support for asynchronous operations.
The key to an asynchronous operation is the ability to kick off a long-running request, and have it run a callback when it finishes, without consuming a thread for the duration of the request. In Java, you can only do this by making a separate thread call get() on a Future<V>. This limits the concurrency of an application using the standard API to the number of threads you can sanely support.
To solve this problem, Google's Guava framework for better Java code introduces a ListenableFuture<V> interface which does have completion callbacks.
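Keeping with the Kotlin used elsewhere on this page, a sketch of the callback style that ListenableFuture enables (the same Guava calls work from Java, just with an anonymous class instead of a lambda):

```kotlin
import com.google.common.util.concurrent.FutureCallback
import com.google.common.util.concurrent.Futures
import com.google.common.util.concurrent.MoreExecutors
import java.util.concurrent.Callable
import java.util.concurrent.Executors

fun main() {
    // Wrap a normal thread pool so that submit() returns a ListenableFuture
    val pool = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(2))

    val future = pool.submit(Callable { "result of a long-running request" })

    // The callback fires when the future completes; no thread sits blocked in get()
    Futures.addCallback(future, object : FutureCallback<String> {
        override fun onSuccess(result: String?) = println("done: $result")
        override fun onFailure(t: Throwable) = println("failed: $t")
    }, MoreExecutors.directExecutor())

    pool.shutdown()
}
```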
Languages like Scala fix this problem by supporting lambda expressions (which compile to anonymous classes) and adding their own Promise / Future types.
While higher-level languages make it easier to use multiple cores, what is often forgotten is why you want to use multiple cores in the first place: to make the program faster, e.g. to increase its throughput.
When you consider options that increase concurrency, you need to test whether those options actually improve performance in some way, because very often they don't.
e.g. STM (Software Transactional Memory) makes it easier to write multi-threaded applications without having to worry about concurrency issues. The problem is that for trivial examples, it would be faster to not use STM and only use one thread.
Using multiple threads adds complexity and makes your application more fragile, so there has to be a good reason to do it otherwise you should stick to the simplest solution possible.
For more discussion
http://vanillajava.blogspot.co.uk/2011/11/why-concurency-examples-are-confusing.html
Why do the Java atomic classes use Sun's Unsafe class rather than synchronized blocks or volatile?
Synchronization is much more heavyweight.
The backport of the concurrency library for Java 1.4 uses synchronization; however, it doesn't perform anywhere near as well.
Unsafe gives direct access to the Compare-and-Swap instructions of the CPU.
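As a rough illustration (in Kotlin on the JVM, since that is the language used elsewhere on this page): a lock-free counter built on AtomicInteger retries a compare-and-set instead of taking a monitor, and the Atomic* classes obtain that CAS from Unsafe under the hood.

```kotlin
import java.util.concurrent.atomic.AtomicInteger

// Lock-free counter: the retry loop relies on the CPU's compare-and-swap,
// which is what Unsafe exposes to the Atomic* classes.
class CasCounter {
    private val count = AtomicInteger(0)

    fun increment(): Int {
        while (true) {
            val current = count.get()
            val next = current + 1
            if (count.compareAndSet(current, next)) return next
        }
    }
}

// The monitor-based equivalent is correct too, but every call
// has to acquire and release a lock.
class SynchronizedCounter {
    private var count = 0

    @Synchronized
    fun increment(): Int = ++count
}
```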
I would guess that the programmers of the atomic classes know what they are doing, so they use the low-level methods for better performance.
synchronized is a really heavy tool when doing multithreaded operations. It's way too powerful/bloated for simple locking/mutual exclusion.
How can Scala make writing multi-threaded programs easier than in Java? What can Scala do (that Java can't) to facilitate taking advantage of multiple processors?
The rules for concurrency are:
1. Avoid it if you can.
2. Share nothing if you can.
3. Share immutable objects if you can.
4. Be very careful (and lucky).
For rule 2, Scala helps by providing a nicely integrated message-passing library out of the box in the form of actors.
For rule 3, Scala helps by encouraging immutability by default.
For rule 4, Scala's flexible syntax allows the creation of internal DSLs, making it easier and less wordy to express what you need concisely, i.e. less room for surprises (if done well).
If one takes Akka as a foundation for concurrent (and distributed) computing, one might argue the only difference is the usual stuff that distinguishes Scala from Java, since Akka has both Java and Scala bindings for all its facilities.
There is nothing Scala does that Java does not. That would be silly. Scala runs on the same JVM that Java does.
What Scala does do is make a multi-threaded program easier to write, easier to reason about, and easier to debug.
The good bits of Scala for concurrency are its focus on immutable objects, its message-passing and its Actors.
This gives you thread-safe read-only data, easy ways to pass that data to other threads, and easy use of a thread pool.
May I know whether there is any C++ class equivalent to Java's java.util.concurrent.ArrayBlockingQueue?
http://download.java.net/jdk7/docs/api/java/util/concurrent/ArrayBlockingQueue.html
Check out tbb::concurrent_bounded_queue from the Intel Threading Building Blocks (TBB).
(Disclaimer: I haven't actually had a chance to use it in a project yet, but I've been following TBB).
The current version of C++ doesn't include anything equivalent (it doesn't include any thread support at all). The next version of C++ (C++0x) doesn't include a direct equivalent either.
Instead, it has both lower-level constructs from which you could create a thread-safe blocking queue (e.g. a normal container along with mutexes, condition variables, etc., to synchronize access to it).
It also has a much higher level set of constructs: a promise, a future, a packaged_task, and so on. These completely hide the relatively low level details like queuing between the threads. Instead, you basically just ask for something to be done, and sometime later you can get a result. All the details in between are handled internally.
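Since the rest of this page uses Kotlin rather than C++, here is a JVM-flavoured sketch of the lower-level construction described above: a bounded buffer guarded by one mutex and two condition variables. A C++0x version would use std::mutex and std::condition_variable in the same shape, and this is essentially what java.util.concurrent.ArrayBlockingQueue does internally.

```kotlin
import java.util.concurrent.locks.ReentrantLock

// Bounded blocking queue built from one lock and two condition variables.
class BoundedBlockingQueue<T>(private val capacity: Int) {
    private val lock = ReentrantLock()
    private val notFull = lock.newCondition()
    private val notEmpty = lock.newCondition()
    private val items = ArrayDeque<T>()

    fun put(item: T) {
        lock.lock()
        try {
            while (items.size == capacity) notFull.await()   // wait for space
            items.addLast(item)
            notEmpty.signal()                                 // wake one consumer
        } finally {
            lock.unlock()
        }
    }

    fun take(): T {
        lock.lock()
        try {
            while (items.isEmpty()) notEmpty.await()          // wait for an element
            val item = items.removeFirst()
            notFull.signal()                                  // wake one producer
            return item
        } finally {
            lock.unlock()
        }
    }
}
```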
If you want something right now, you might consider the Boost Interprocess library. This includes (among other things) a Message Queue class. If memory serves, it supports both blocking and non-blocking variants.
Intel's Threading Building Blocks has a couple different concurrent queues, one of which might be similar.
concurrent_queue might be the one you are looking for. It comes with the Parallel Patterns Library from Microsoft.
This is my C++ implementation of ArrayBlockingQueue, trying to be as close and conformant to the Java implementation as possible. Except for iterator thread safety, the rest is fully compliant. I don't think there is generally a need to iterate the whole queue at run time.
https://github.com/anandkulkarnisg/ArrayBlockingQueue
The examples should demonstrate how to use the blocking queue. Internally it is implemented as a circular-buffer queue backed by a raw array (for good performance).
Standard C++ has no equivalent, as it has no concept of concurrency; without concurrency, such a structure is both useless and dangerous, as operating on it could potentially block forever if there are no other threads.
It would be easy to implement, however, but the implementation details would depend on the threading library you're using.
As a side note, the upcoming C++1x standard will add some basic threading features to the standard library.
When Java is providing the capabilities for concurrent programming, what are the major advantages in using Clojure (instead of Java)?
Clojure is designed for concurrency.
Clojure provides concurrency primitives at a higher level of abstraction than Java. Some of these are:
A Software Transactional Memory system for dealing with synchronous and coordinated changes to shared references. You can change several references as an atomic operation and you don't have to worry about what the other threads in your program are doing. Within your transaction you will always have a consistent view of the world.
An agent system for asynchronous change. This resembles message passing in Erlang.
Thread-local changes to variables. These variables have a root binding which is shared by every thread in your program. However, when you re-bind a variable, the change is only visible in that thread.
All these concurrency primitives are built on top of Clojure's immutable data structures (i.e., lists, maps, vectors, etc.). When you enter the world of mutable Java objects, all of the primitives break down and you are back to locks and condition variables (which can also be used in Clojure, when necessary).
Without being an expert on Clojure, I would say that the main advantage is that Clojure hides a lot of the details of concurrent programming, and, as we all know, the devil is in the details. So I consider that a good thing.
You may want to check out this excellent presentation from Rich Hickey (creator of Clojure) on concurrency in Clojure. EDIT: Apparently JAOO has removed the old presentations. I haven't been able to locate a new source for this yet.
Because Clojure is based on the functional-programming paradigm, which is to say that it achieves safety in concurrency by following a few simple rules:
immutable state
functions have no side effects
Programs written thus pretty much have horizontal scalability built-in, whereas a lock-based concurrency mechanism (as with Java) is prone to bugs involving race conditions, deadlocks etc.
Because the world has advanced in the past 10 years and the Java language (!= the JVM) is finding it hard to keep up. More modern languages for the JVM are based on new ideas and improved concepts, which makes many tedious tasks much simpler and safer.
One of the cool things about having immutable types is that parallelism comes cheaply: for example, a pmap over a collection will span multiple cores/processors without any extra work from you.
So, sure you can be multi-threaded with Java, but it involves locks and whatnot. Clojure is multi-threaded without any extra effort.
Yes, Java provides all necessary capabilities for concurrent programs.
An analogy: C provides all necessary capabilities for memory-safe programs, even with lots of string handling. But in C memory safety is the programmer's problem.
As it happens, analyzing concurrency is quite hard. It's better to use inherently safe mechanisms rather than trying to anticipate all possible concurrency hazards.
If you attempt to make a shared-memory mutable-data-structure concurrent program safe by adding interlocks you are walking on a tightrope. Plus, it's largely untestable.
One good compromise might be to write concurrent Java code using Clojure's functional style.
In addition to Clojure's approach to concurrency via immutable data, vars, refs (and software transactional memory), atoms and agents... it's a Lisp, which is worth learning. You get Lisp macros, destructuring, first class functions and closures, the REPL, and dynamic typing - plus literals for lists, vectors, maps, and sets - all on top of interoperability with Java libraries (and there's a CLR version being developed too.)
It's not exactly the same as Scheme or Common Lisp, but learning it will help you if you ever want to work through the Structure and Interpretation of Computer Programs or grok what Paul Graham's talking about in his essays, and you can relate to this comic from XKCD. ;-)
This video presentation makes a very strong case, centred around efficient persistent data structures implemented as tries.
The Java programming language evolves quite slowly, mainly because of Sun's concern about backward compatibility.
Why not just use JVM languages like Clojure and Scala directly?