As described here:
While locks seem to be the natural remedy to uphold encapsulation with multiple threads, in practice, they are inefficient and easily lead to deadlocks in any application of real-world scale.
My question, simply, is: is Akka really a solution to deadlocks?
I know that as long as Akka actors treat objects as completely decoupled, the code never runs into a deadlock scenario. But in imperative programming we can also simply decouple objects and lock them separately, without nesting one lock inside another, so that would not be a deadlock scenario in the traditional programming paradigm either. What is the real point of this statement? Is there any use case that can cause a deadlock in traditional programming but is prevented by the Akka actor model?
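For concreteness, here is a minimal (hypothetical) Java sketch of the nested-lock case I mean: two threads take the same two locks in opposite order, each ends up holding one lock while waiting for the other, and neither can ever proceed.

```java
// Hypothetical sketch: two threads acquire lockA and lockB in opposite order.
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();                      // give the other thread time to grab lockB
                synchronized (lockB) {        // blocks forever: thread 2 holds lockB
                    System.out.println("thread 1 done");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) {        // blocks forever: thread 1 holds lockA
                    System.out.println("thread 2 done");
                }
            }
        }).start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}
```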
P.S. I am very new to Akka, but I do understand that the call stack, shared memory, and threading issues of traditional programming are very costly on modern computer architectures, and that Akka is a good solution performance-wise. I am just curious about this particular statement as well.
I consider the Summary of the Discussion in the question thread to be the answer.
Basically, the fact that Akka doesn't use locks doesn't mean that Akka can prevent every real-world deadlock scenario. But since each and every actor is treated as fully decoupled, in an architecture completely different from lock-based multithreaded applications, it performs faster. So,
a deadlock is far less probable to happen!
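To make that concrete, here is a minimal sketch using the classic Akka Java API (the Counter actor and its names are made up for illustration): all mutable state lives inside an actor and is only touched by its own message handler, so there are no locks and therefore no lock ordering to get wrong.

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class ActorDemo {
    static class Counter extends AbstractActor {
        private long count = 0;

        static final class Increment {}

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(Increment.class, msg -> {
                        count++;  // only this actor ever touches count, so no synchronization is needed
                        System.out.println(getSelf().path().name() + " -> " + count);
                    })
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef a = system.actorOf(Props.create(Counter.class), "counterA");
        ActorRef b = system.actorOf(Props.create(Counter.class), "counterB");
        // tell() just enqueues a message in the target's mailbox and returns;
        // no caller ever blocks waiting for a lock held by another actor.
        a.tell(new Counter.Increment(), ActorRef.noSender());
        b.tell(new Counter.Increment(), ActorRef.noSender());
    }
}
```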
I may be wrong, but as far as I understand, the whole reactive/event-loop thing, and Netty in particular, was invented as an answer to the C10K+ problem. It has obvious drawbacks: all your code now becomes asynchronous, with ugly callbacks and meaningless stack traces, and is therefore hard to maintain and to reason about.
The Go language with goroutines was one solution: now they can write synchronous code and still handle C10K+. So Java comes up with Loom, which essentially copies Go's solution; soon we will have fibers and continuations and will be able to write synchronous code again.
So the questions are:
When Loom is released in production, doesn't it make Netty kind of obsolete?
If we have fibers and continuations in Java, can we write nice synchronous code and be fine with C10K+ without Netty?
Are there any advantages, for performance or for solving C10K+, in writing asynchronous code and using Netty after the production release of Loom?
I understand that Netty is more than just a reactive/event-loop framework; it also has codecs for various protocols, and those implementations will remain useful anyway, even afterwards.
I'm focusing on the reactive parts of Netty, because those seem to be what you mostly want addressed. Answering on a general level:
Currently, reactive programming paradigms are often used to solve performance problems, not because they fit the problem. Those cases should be covered completely by Project Loom.
However, some problems may remain where the reactive programming approach makes sense and is more straightforward to read than imperative code.
Reactive frameworks are typically stream-oriented and well suited to combining elements and operations on different entity/data streams. They also provide straightforward local event-bus solutions with their publisher/subscriber model. In such cases the reactive model might still be the best choice: performant and more readable than an imperative approach. But indeed, Project Loom should make obsolete all the "misuse" that was due to the lack of better support in the native language constructs.
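As a rough illustration of the "synchronous code at scale" point (not taken from Netty or Loom documentation; it assumes the virtual-thread API as finalized in JDK 21, and the class name is made up), a blocking-style handler per connection could look like this:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch: plain blocking I/O, one cheap virtual thread per connection.
public class BlockingEchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080);
             ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = server.accept();          // ordinary blocking accept
                executor.submit(() -> {                   // scheduled on a virtual thread
                    try (socket) {
                        // blocking echo loop; blocking a virtual thread is cheap
                        socket.getInputStream().transferTo(socket.getOutputStream());
                    } catch (IOException ignored) {
                    }
                    return null;
                });
            }
        }
    }
}
```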
Does anybody know about a tool that allows the explicit switching of threads at certain points in the code?
I am testing Software Transactional Memory for my bachelor thesis, and for these tests I need specific execution orders of threads (e.g. thread 1 reads two variables, after that switch to thread 2 and write to a variable, etc.). The problem is that the library implementing the STM prohibits the normal Java synchronization methods in the code, so I cannot use synchronized blocks, locks, or semaphores.
I was hoping someone knows about a tool like Concurrit (https://code.google.com/archive/p/concurrit/), only for Java...
No such tool exists. The only way you could achieve something like this would be to (drastically) modify the JVM itself, replacing the existing thread scheduling mechanisms. That would be an impractically large project ... by itself.
Opinion: the Concurrit DSL is not designed to be a practical programming language, and adding the mechanisms it provides to a practical programming language would most likely make it non-performant. Naturally, there is unlikely to be much enthusiasm for implementing such a tool for a performant1 language such as Java.
1 - Relatively speaking.
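As an aside, not a tool: if the STM library only forbids synchronized blocks, locks, and semaphores, a crude manual approximation of a fixed interleaving can sometimes be hacked together by spinning on an atomic step counter. Whether java.util.concurrent.atomic is acceptable under that restriction is an assumption here.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Crude manual approximation of a scripted interleaving: each thread busy-waits
// until the shared step counter says it is its turn, then hands control on.
public class SteppedInterleaving {
    static final AtomicInteger step = new AtomicInteger(0);

    static void await(int expected) {
        while (step.get() != expected) Thread.onSpinWait();  // spin until it's our turn
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            await(0);
            // thread 1: perform the two reads here
            step.incrementAndGet();           // hand control to thread 2
        });
        Thread t2 = new Thread(() -> {
            await(1);
            // thread 2: perform the write here
            step.incrementAndGet();
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```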
Although this applies to any multithreaded environment, here I am asking only about Java. And only about pure Java: deadlocks caused by external resources accessed from Java, like database deadlocks, are not the topic of this question.
What methodologies and supporting frameworks are there that, when properly used, give a guarantee that your code is deadlock free?
The ones I am aware of are:
No multithreading. Which is the solution used by many GUI frameworks.
Single global lock. Not a good solution since efficiency suffers.
Accessing locks in a fixed order. I know of no framework to support this (see the sketch after this list).
Static analysis is helpful since it can detect many cases of potential deadlocks but gives no guarantees.
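A minimal sketch of the fixed-lock-order idea from the list above (no framework involved; the identity-hash tie-break is just a common convention and ignores the rare collision case):

```java
import java.util.concurrent.locks.ReentrantLock;

// Always acquire the two locks in one globally defined order, so a wait cycle
// (and hence a deadlock) cannot form between threads that use this helper.
public class OrderedLocking {
    static void withBoth(ReentrantLock a, ReentrantLock b, Runnable action) {
        ReentrantLock first = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
        ReentrantLock second = (first == a) ? b : a;
        first.lock();
        try {
            second.lock();
            try {
                action.run();                 // runs while holding both locks
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }
}
```

The guarantee only holds if every thread goes through the same helper; any code that takes the locks directly can reintroduce a cycle.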
I'd like to add groovy-shell-server to our application. We have run into a couple of production issues recently where a call to an internal API could have expedited diagnosis or even provided a short-term fix. Groovy-shell-server provides a nice way to achieve this.
But actually using this in production introduces a potential complication. Let's say that, despite careful peer review, we execute a script which pegs the CPU, or gets stuck in an endless loop. I need some way to kill that thread, pronto! So I was thinking about enhancing groovy-shell-server to support an optional hard stop() of a running Groovy client thread.
I know that Thread.stop() is inherently unsafe; it's been discussed on StackOverflow before. My question is: do you think the benefits might outweigh the risks in this case? Is using Thread.stop() a pragmatic choice as a kind of "emergency brake" for a runaway GroovyShell server thread? Or is the likelihood of leaving objects in an inconsistent state too high?
(Alternately, if someone has a better way to provide programmatic, interruptible access to a running java application, I'm all ears.)
I think that, generally, it is bad to use a deprecated API, and specifically it is not recommended to use Thread.stop().
BUT there is no rule without exceptions, and I think this is such a case. In my experience Thread.stop() works and really stops the thread. I used it many years ago in an applet targeted at Netscape; some of its versions did not support Thread.interrupt() well.
The only alternative solution I can think of is using a separate process. But in that case you have to implement some process-to-process transport for data transfer. I do not know the details of your task, but usually the price is too high.
So, if I were you, I'd use Thread.stop() with a very big apologetic comment.
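If one does go this route, the escalation might look roughly like the following sketch (interrupt first, stop() only as a last resort; note that on recent JDKs, 20 and later, Thread.stop() simply throws UnsupportedOperationException, so this only helps on older runtimes):

```java
// Hypothetical "emergency brake" helper: cooperative cancellation first,
// deprecated hard stop only if the script thread ignores the interrupt.
public class EmergencyBrake {
    static void emergencyStop(Thread scriptThread, long gracePeriodMillis) throws InterruptedException {
        scriptThread.interrupt();              // ask politely
        scriptThread.join(gracePeriodMillis);  // give the script a chance to wind down
        if (scriptThread.isAlive()) {
            scriptThread.stop();               // last resort; may leave shared state inconsistent
        }
    }
}
```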
What is the best method for inter-process communication in a multithreaded Java app?
It should be performant (so no JMS, please), easy to implement, and reliable, so that objects and data can be bound to one thread only.
Any ideas welcome!
Could you clarify a bit? Do you mean IPC in a single JVM? (Multiple threads, yes, but at the OS level only one process.) Or do you mean multiple JVMs? (And truly OS-level inter-process communication.)
If it is the former, then maybe something out of java.util.concurrent, like ConcurrentLinkedQueue, would do the trick. (I pass messages around between my threads with classes from java.util.concurrent with success.)
If the latter, then I'm just going to guess and suggest taking a look at RMI, although I don't think it qualifies as fully reliable; you'd have to manage that a bit more 'hands-on'.
Assuming the scenario of one JVM with multiple threads, then indeed java.util.concurrent is the place to look, specifically the various Queue implementations. However, an abstraction on top of that may be nice, and there Jetlang looks very interesting: lightweight Java message passing.
I recommend looking into the entire java.util.concurrent package, which has multiple classes for dealing with concurrency and different means of communication between threads. It all depends on what you want to achieve, as your question is pretty general.
You should use a producer/consumer queue. By doing that you avoid the pitfalls of multithreaded programming: race conditions and deadlocks. Plus it is not just easier and cleaner, but also much faster if you use a lock-free queue like Disruptor or MentaQueue. I wrote a blog article where I talk about this in detail and show how to get latencies below 100 nanoseconds: Inter-thread communication with 2-digit nanosecond latency.
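For reference, a plain java.util.concurrent version of that producer/consumer pattern might look like the sketch below; ArrayBlockingQueue is used here only to show the shape, and a lock-free queue such as Disruptor or MentaQueue would take its place for lower latency.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// One producer thread hands messages to one consumer thread through a bounded queue;
// neither thread ever touches the other's data directly.
public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put("message-" + i);   // blocks only if the queue is full
                }
                queue.put("DONE");               // simple end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = queue.take()).equals("DONE")) {  // blocks until a message arrives
                    System.out.println("got " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```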
I've just added MappedBus on GitHub (http://github.com/caplogic/mappedbus), which is an efficient IPC library that enables several Java processes/JVMs to communicate by exchanging messages, and it uses a memory-mapped file for the transport. The throughput has been measured at 40 million messages/s.
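This is not the MappedBus API itself, but a hedged sketch of the underlying transport idea it relies on: two JVMs map the same file into memory and exchange data through it. The file path here is made up, and a real library adds framing, ordering, and reader/writer coordination on top.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Writer side of a memory-mapped-file transport: another process mapping the
// same file sees the bytes written here without any socket or pipe involved.
public class MappedFileWriterDemo {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Path.of("/tmp/ipc-demo.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buffer.putLong(0, 42L);   // visible to any other process that maps the same region
        }
    }
}
```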