what is RMI TCP connection - java

I'm making a desktop application in Java and am doing some memory optimisations. In the process I came across two threads running in the JVM, both named:
RMI TCP connection
And they're both contributing to heap growth quite considerably (from my perspective).
Now I don't know much, but TCP sounds to me like it's some internet thing. From what I've managed to find on Google, it has something to do with serialization/deserialization over the internet.
But my application doesn't need the internet, so I would like to know two things:
What are they, and what are they doing in my JVM?
Can I get rid of them somehow?
My tool has been Java VisualVM. A thought has crossed my mind that the two threads are spawned as a result of using this tool, in which case I'll feel a bit stupid.

The threads are used to feed a remote JMX client (in your case Java VisualVM) with data from your JVM.
Once you disconnect, the threads should not allocate much data anymore.
To verify this you can go to the Threads tab and look at the thread dump of an RMI TCP Connection thread. You should see that the RMI operations invoke JMX beans.
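If you prefer to confirm this programmatically rather than through the Threads tab, here is a minimal sketch (the class name RmiThreadCheck is made up) you could drop into the monitored application to dump the stacks of those threads; while a JMX client such as VisualVM is attached, the frames should show sun.rmi and javax.management classes:

    import java.util.Map;

    // Hypothetical helper: dump the stacks of any thread whose name mentions
    // "RMI TCP Connection", to see what work those threads are actually doing.
    public class RmiThreadCheck {
        public static void main(String[] args) {
            Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
            stacks.forEach((thread, frames) -> {
                if (thread.getName().contains("RMI TCP Connection")) {
                    System.out.println(thread.getName());
                    for (StackTraceElement frame : frames) {
                        System.out.println("    at " + frame);
                    }
                }
            });
        }
    }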

RMI is a Java API that allows you to split the implementation of a single application across multiple computers.
Do you use the java.rmi library in your project?
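For reference, code that uses RMI directly tends to look roughly like the following sketch (the Hello interface, the HelloServer class and the registry port 1099 are just illustrative); if nothing of this kind exists in your project, you are probably not using java.rmi yourself:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Hypothetical remote interface: a client in another JVM can call sayHello().
    interface Hello extends Remote {
        String sayHello() throws RemoteException;
    }

    public class HelloServer implements Hello {
        @Override
        public String sayHello() { return "Hello from the server JVM"; }

        public static void main(String[] args) throws Exception {
            HelloServer server = new HelloServer();
            // Export the object so RMI can receive calls for it, then register it by name.
            Hello stub = (Hello) UnicastRemoteObject.exportObject(server, 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("Hello", stub);
            System.out.println("Hello service bound; clients can now look it up.");
        }
    }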


2 programs that send messages to each other in Java [duplicate]

I have the following situation:
I have 2 JVM processes (really 2 Java processes running separately, not 2 threads) running on a local machine. Let's call them ProcessA and ProcessB.
I want them to communicate (exchange data) with one another (e.g. ProcessA sends a message to ProcessB to do something).
Now, I work around this issue by writing to a temporary file, and these processes periodically scan this file to get messages. I think this solution is not so good.
What would be a better alternative to achieve what I want?
Multiple options for IPC:
Socket-Based (Bare-Bones) Networking
not necessarily hard, but:
might be verbose for not much,
might offer more surface for bugs, as you write more code.
you could rely on existing frameworks, like Netty
RMI
Technically, that's also network communication, but that's transparent for you.
Fully-fledged Message Passing Architectures
usually built on either RMI or network communications as well, but with support for complicated conversations and workflows
might be too heavy-weight for something simple
frameworks like ActiveMQ or JBoss Messaging
Java Management Extensions (JMX)
more meant for JVM management and monitoring, but could help to implement what you want if you mostly want to have one process query another for data, or send it some request for an action, if they aren't too complex
also works over RMI (amongst other possible protocols)
not so simple to wrap your head around at first, but actually rather simple to use
File-sharing / File-locking
that's what you're doing right now
it's doable, but comes with a lot of problems to handle
Signals
You can simply send signals to your other process
However, it's fairly limited and requires you to implement a translation layer (it is doable, though, but it's more of a crazy idea to toy with than anything serious).
Without more details, a bare-bone network-based IPC approach seems the best, as it's the:
most extensible (in terms of adding new features and workflows to your application)
most lightweight (in terms of memory footprint for your app)
most simple (in terms of design)
most educative (in terms of learning how to implement IPC), as you mentioned "socket is hard" in a comment, and it really is not and should be something you work on.
That being said, based on your example (simply requesting the other process to do an action), JMX could also be good enough for you.
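To make the bare-bones option concrete, here is a minimal sketch of two processes talking over a local socket (the class names, port 5000 and the "doSomething" command are all made up):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // ProcessB listens on a local port; ProcessA connects and sends it a command.
    public class ProcessB {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5000)) {
                while (true) {
                    try (Socket client = server.accept();
                         DataInputStream in = new DataInputStream(client.getInputStream());
                         DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                        String command = in.readUTF();     // e.g. "doSomething"
                        out.writeUTF("ack: " + command);   // reply to ProcessA
                    }
                }
            }
        }
    }

    class ProcessA {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("127.0.0.1", 5000);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {
                out.writeUTF("doSomething");
                System.out.println(in.readUTF());
            }
        }
    }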
I've added a library on GitHub called Mappedbus (http://github.com/caplogic/mappedbus) which enables two (or many more) Java processes/JVMs to communicate by exchanging messages. The library uses a memory-mapped file and makes use of fetch-and-add and volatile reads/writes to synchronize the different readers and writers. I've measured the throughput between two processes using this library at 40 million messages/s, with an average latency of 25 ns for reading/writing a single message.
What you are looking for is inter-process communication. Java provides a simple IPC framework in the form of Java RMI API. There are several other mechanisms for inter-process communication such as pipes, sockets, message queues (these are all concepts, obviously, so there are frameworks that implement these).
I think in your case Java RMI or a simple custom socket implementation should suffice.
Sockets with DataInputStream/DataOutputStream (or ObjectInputStream/ObjectOutputStream) to send Java data and objects back and forth. This is easier than using a disk file, and much easier than Netty.
I tend to use JGroups to form local clusters between processes. It works for nodes (aka processes) on the same machine, within the same JVM, or even across different servers.
Once you understand the basics it is easy to work with, and having the option to actually run two or more processes in the same JVM makes it easy to test those processes.
The overhead and latency are minimal if both are on the same machine (usually only a TCP round trip of >100 ns per action).
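A rough sketch of what that looks like, assuming the pre-5.0 JGroups API (the cluster name "local-cluster" is arbitrary); each process runs the same code and the instances discover each other automatically:

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    // Each process joins the same named cluster and can broadcast messages to the others.
    public class Node {
        public static void main(String[] args) throws Exception {
            JChannel channel = new JChannel();            // default protocol stack
            channel.setReceiver(new ReceiverAdapter() {
                @Override
                public void receive(Message msg) {
                    System.out.println("got: " + msg.getObject());
                }
            });
            channel.connect("local-cluster");
            channel.send(new Message(null, "hello from " + channel.getAddress()));
        }
    }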
A socket may be a better choice, I think.
Back in 2004 I implemented code which did the job with sockets. Since then I have searched many times for a better solution, because the socket approach triggers firewalls and my clients worry. There is no better solution until now. The client must serialize your data and send it, and the server must receive and deserialize it.
It is easy.
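That serialize/send/deserialize flow can be as small as the following sketch (the Command class and port 6000 are made up; real code needs error handling and, for untrusted peers, care with deserialization):

    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.net.ServerSocket;
    import java.net.Socket;

    // A made-up message type exchanged between the two processes.
    class Command implements Serializable {
        final String name;
        Command(String name) { this.name = name; }
    }

    public class SerializationServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(6000);
                 Socket client = server.accept();
                 ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
                Command cmd = (Command) in.readObject();   // deserialize on the server
                System.out.println("received command: " + cmd.name);
            }
        }
    }

    class SerializationClient {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("127.0.0.1", 6000);
                 ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
                out.writeObject(new Command("shutdown"));  // serialize on the client
            }
        }
    }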

"In a distributed environment, one does not use multithreding" - Why?

I am working on a platform that hosts small Java applications, all of which currently use a single thread, living inside a Docker engine, consuming data from a Kafka server and logging to a central DB.
Now, I need to put another Java application to this platform. This app at hand uses multithreading relatively heavily, I already tested it inside a Docker container and it works perfectly there, so I'm ready to deploy it on the platform where it would be scaled manually, that is, some human would define the number of containers that would be started, each of them containing an instance of this app.
My Architect has an objection, saying that "In a distributed environment we never use multithreading". So now, I have to refactor my application eliminating any thread related logic from it, making it single threaded. I requested a more detailed reasoning from him, but he yells "If you are not aware of this principle, you have no place near Java".
Is it really a mistake to use a multithreaded Java application in a distributed system - a simple cluster with ten or twenty physical machines, each hosting a number of virtual machines, which in turn run Docker containers with Java applications inside them?
Honestly, I don't see the problem of multithreading inside a container.
Is it really a mistake or somehow "forbidden"?
Thanks.
When you write, for example, a web application that will run in a Java EE application server, you normally should not start your own threads; the application server will manage threads and allocate them to process incoming requests on the server.
However, there is no hard rule or reason why multi-threading would never be a good idea in a distributed environment.
There are advantages to making applications single-threaded: the code will be simpler and you won't have to deal with difficult concurrency issues.
But "in a distributed environment we never use multithreading" is not necessarily always true and "if you are not aware of this principle, you have no place near Java" sounds arrogant and condescending.
I guess he only tells you this because using a single thread eliminates multithreading and data-ordering issues.
There is nothing wrong with multithreading though.
Distributed systems usually have tasks that are heavily I/O bound.
If I/O calls are blocking in your system
The only way to achieve concurrency within the process is spawning new threads to do other useful work (multi-threading; see the sketch after this list).
The caveat with this approach is that, if there are too many threads in flight, the operating system will spend too much time context switching between threads, which is wasteful work.
If I/O calls are Non-Blocking in your system
Then you can avoid the Multi-threading approach and use a single thread to service all your requests. (read about event-loops or Java's Netty Framework or NodeJS)
The upside of the single-thread approach
The OS does not perform any wasteful thread context switches.
You will NOT run into concurrency problems like deadlocks or race conditions.
The downside is that
It is often harder to code/think in a non-blocking fashion
You typically end up using more memory in the form of blocking queues.
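As a concrete illustration of the thread-per-blocking-call option above, here is a minimal sketch (the pool size of 8 and fetchFromRemoteService() are placeholders for whatever your application actually does):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Blocking I/O calls are handed to a bounded pool so the process stays responsive
    // without spawning an unbounded number of threads.
    public class BlockingIoWithPool {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(8); // bounded: limits context switching
            for (int i = 0; i < 100; i++) {
                final int requestId = i;
                pool.submit(() -> {
                    String result = fetchFromRemoteService(requestId); // blocks this worker only
                    System.out.println("request " + requestId + " -> " + result);
                });
            }
            pool.shutdown();
        }

        // Placeholder for a blocking I/O call (database query, HTTP request, ...).
        private static String fetchFromRemoteService(int id) {
            try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "response-" + id;
        }
    }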
What? We use RxJava and Spring Reactor pretty heavily in our application and it works just fine. You can't work with threads across two JVMs anyway, so just make sure that your logic works as you expect on a single JVM.

Is there a way to completely disable RMI in a java application?

In our application, remote procedure calls are handled by our own Netty-based command dispatcher system. We have a lot of modules (about 20) and I want to run all modules in separate JVMs. My problem is that RMI spawns about 17 threads for each JVM. I do not need RMI at all (as far as I know).
Can I completely disable RMI for a JVM? Or at least configure it in a way that it does not use so many threads?
You are most likely looking at your JVMs with a monitoring application, right? Well, these monitoring applications use RMI, so you will always see RMI threads within your monitoring application, profiler, etc., and you will always see them using some amount of CPU time: gathering profiling information does not work without transporting that information (via RMI) to your tool. You could implement your own transport protocol and direct the management beans to use it, but I doubt you could save enough resources to repay the development cost.
If you don’t use any RMI, RMI won’t start any threads. But if you are using it, even if you aren’t aware of this, disabling RMI implies that your software will not work any more.
StuartMarks is correct. RMI doesn't start any threads until you use it.
Possibly you are using it in some way of which you are unaware, e.g. JMX?

How can you let two JVMs communicate without RMI or sockets?

Is it possible to let two separate JVMs communicate with each other without RMI or a socket?
My situation is as following:
I have a server (Java app) in one console, in a command line interface. I can log in to the computer with SSH, so I have another terminal. I now want to start a little Java program which can communicate with the server. This is to control the server, so there isn't a lot of traffic (just some small strings passing around).
I'd prefer not to use a socket or RMI, as that uses another port number.
Is a PipedWriter an option? But how would the other JVM know where to find it?
Thanks
You don't want a PipedWriter; that's only for communication between threads in the same process. You also can't use an unnamed OS pipe, as there's no relationship between the processes. You could use an OS FIFO (assuming you're on Unix, in which case you'd make it with the mkfifo command; I don't know the equivalent API for Windows well enough to help there), but I've no idea how well Java handles such named pipes.
The simplest mechanisms are to use a watched file or a server socket that is only bound to a port on localhost, 127.0.0.1. (By binding it in such a way, it becomes impossible to access the socket from off the machine without some kind of local program. Or a suitably-configured SSH tunnel.) The advantage and disadvantage of the watched-file method is that it needs somewhere that both processes can see (and presumably write to). An administration socket is somewhat more exposed, but handles two-way comms much more easily; if that matters to you, do consider it as Java's got good (if a little low-level for my tastes) support for sockets.
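The localhost-only binding mentioned above is essentially a one-liner; a minimal sketch (port 9999 is arbitrary, and the accept loop is elided):

    import java.net.InetAddress;
    import java.net.ServerSocket;

    // Bind the admin socket to the loopback address only, so it cannot be reached
    // from other machines except through something like an SSH tunnel.
    public class AdminSocket {
        public static void main(String[] args) throws Exception {
            ServerSocket admin = new ServerSocket(9999, 10, InetAddress.getByName("127.0.0.1"));
            System.out.println("admin socket listening on " + admin.getLocalSocketAddress());
            // accept() connections here and exchange small command strings,
            // e.g. with DataInputStream.readUTF() / DataOutputStream.writeUTF().
        }
    }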
If you're thinking about a shared file solution, consider whether to make that file a database; there's a JDBC driver for SQLite and that supports safe concurrent access (and guards against problems with concurrent writes I bet you've never thought of). It also might mean you can avoid having to write a special client; there's a wealth of existing tools.
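If you go the shared-database route, here is a sketch of the idea with the xerial sqlite-jdbc driver (the driver on the classpath is assumed; the table layout, file path and commands are invented for illustration). One process inserts commands, the other polls for unprocessed ones:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // A SQLite file used as a simple "mailbox" between two local processes.
    public class SqliteMailbox {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:/tmp/control.db")) {
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("CREATE TABLE IF NOT EXISTS commands (" +
                            "id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT, done INTEGER DEFAULT 0)");
                }
                // Controlling process: enqueue a command.
                try (PreparedStatement ps = conn.prepareStatement("INSERT INTO commands (body) VALUES (?)")) {
                    ps.setString(1, "reload-config");
                    ps.executeUpdate();
                }
                // Server process: poll for unprocessed commands, then mark them handled.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT id, body FROM commands WHERE done = 0")) {
                    while (rs.next()) {
                        System.out.println("handling command " + rs.getInt("id") + ": " + rs.getString("body"));
                    }
                }
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("UPDATE commands SET done = 1 WHERE done = 0");
                }
            }
        }
    }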

Java Socket Programming does not work for 10,000 clients

I can create multiple threads to support multiple clients in socket programming; that works fine. But if 10,000 clients want to connect, my server cannot create that many threads.
How can I manage the threads so that I can listen to all these clients simultaneously?
Also, if the server wants to send something to a particular client in this case, how is that possible?
You should investigate Java's NIO ("New I/O") library for non-blocking network programming. NIO was designed to solve precisely the server scalability problem you are facing!
Introductory article about NIO: Building Highly Scalable Servers with Java NIO
Excerpts from O'Reilly's Java NIO book
Highly scalable socket programming in Java requires the selectable channels provided in the "New I/O", or NIO packages. By using non-blocking IO, a single thread can service many sockets, tending only to those sockets that are ready.
One of the more scalable open-source NIO applications is the Grizzly component of the Glassfish application server. Jean-Francois Arcand has written a number of informative, in-depth blog posts about his work on the project, and covers many subtle pitfalls in writing this kind of software with NIO.
If the concept of non-blocking IO is new to you, using existing software like Grizzly, or at least using it as a starting point for your adaptation, might be very helpful.
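The core selector pattern those articles describe looks roughly like this minimal sketch (port 8080 is arbitrary, and a production server needs real error handling and write-readiness management):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // One thread services every connection, touching only sockets that are ready.
    public class NioEchoServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (true) {
                selector.select();                       // block until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int read = client.read(buffer);
                        if (read == -1) {
                            client.close();              // peer closed the connection
                        } else {
                            buffer.flip();
                            client.write(buffer);        // naive echo; assumes the write completes
                        }
                    }
                }
            }
        }
    }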
The benefits of NIO are debatable. See Paul Tyma's blog entries on the subject.
A thread-per-connection threading model (Blocking Socket I/O) will not scale too well. Here's an introduction to Java NIO which will allow you to use non-blocking socket calls in java:
http://today.java.net/cs/user/print/a/350
As the article states, there are plenty of frameworks available so you don't have to roll your own.
As previously mentioned, 10,000 clients is not easy. For Java, NIO (possibly augmented with a separate thread pool to handle each request without blocking the NIO thread) is the usual way to handle a large number of clients.
As mentioned, depending on the implementation, threads might actually scale, but it depends a lot on how much interaction there is between client connections. Massive numbers of threads are more likely to work if there is little synchronization between the threads.
That said, NIO is notoriously difficult to get 100% right the first time you implement it.
I'd recommend either trying out, or at least looking at the source of, the Naga NIO lib at naga.googlecode.com. The codebase for the lib is small compared to most other NIO frameworks. You should be able to quickly implement a test to see if you can get 10,000 clients up and running.
(The Naga source also happens to be free to modify or copy without attributing the original author)
This is not a simple question, but for a very in-depth answer (sorry, not in Java though) see this: http://www.kegel.com/c10k.html
EDIT
Even with nio, this is still a difficult problem. 10000 connections is a tremendous resource burden on the machine, even if you are using non-blocking sockets. This is why large web sites have server farms and load balancers.
Why don't you process only a certain number of requests at a time?
Let's say you want to process a maximum of 50 requests at a time (so as not to create too many threads).
You create a threadpool of 50 threads.
You put all the requests in a queue (accept connections, keep sockets open), and each thread, when it is done, gets the next request and processes it.
This should scale more easily.
Also, if the need arises, it will be easier to do load balancing, since you could share your queues across multiple servers.
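A minimal sketch of that bounded-pool design (port 8080 and the pool size of 50 are arbitrary; Executors.newFixedThreadPool queues submitted work internally, which plays the role of the request queue described above):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Accepted sockets wait in the executor's queue until one of the 50 workers is free.
    public class PooledServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(50);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();      // keep accepting; handling is queued
                    pool.submit(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client;
                 DataInputStream in = new DataInputStream(c.getInputStream());
                 DataOutputStream out = new DataOutputStream(c.getOutputStream())) {
                out.writeUTF("echo: " + in.readUTF());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }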
Personally I would rather create a custom non-blocking I/O setup, for example using one thread to accept clients and another thread to process them (checking whether any input is available and writing data to the output if necessary).
You'll have to figure out why your application is failing at 10,000 threads.
Is there a hard limit to the number of threads in the JVM or the OS? If so, can it be lifted?
Are you running out of memory? Try configuring a smaller stack size per thread, and/or add more memory to the server.
Something else? Fix it.
Only once you have determined the source of the problem will you be able to fix it. In theory 10,000 threads should be OK but at that level of concurrency it requires some extra tuning of the JVM and operating system if you want it to work out.
You can also consider NIO but I think it can work fine with threads as well.
