I'm trying to create an app that sends a notification whenever a call is made to an API.
Is it possible to create a logger on port 8080 so that, when my app runs on one server, it listens to an API running on another server?
Both applications run on my local machine (in Docker) for testing purposes.
So far I've been reading https://www.baeldung.com/spring-boot-logging in order to implement it, but I'm having trouble understanding the path mapping.
Any ideas?
First let's name the two applications:
API - the API service that you want to monitor
Monitor - which wants to see what calls are made to the API
There are several ways to achieve this.
a) Sockets: Open up a socket on Monitor for inbound traffic. Communicate the IP address and socket port manually to the API server, have it open the connection to the Monitor, and send some packet of data down this "pipe". This is the lowest-level approach and the simplest, but it is very fragile: you have to coordinate the starting of the services and decide on a "protocol" for how the applications exchange data.
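A minimal sketch of option (a), with both sides in one file for illustration (class and method names are my own, not from any framework):

```java
import java.io.*;
import java.net.*;

public class SocketMonitor {
    // Monitor side: accept one connection and return the first line received.
    public static String receiveOne(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            return in.readLine();
        }
    }

    // API side: connect to the Monitor and push one event line down the "pipe".
    public static void sendEvent(String host, int port, String event) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(event);
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            int port = server.getLocalPort();
            Thread api = new Thread(() -> {
                try { sendEvent("127.0.0.1", port, "GET /users 200"); }
                catch (IOException e) { throw new UncheckedIOException(e); }
            });
            api.start();
            System.out.println(receiveOne(server));
            api.join();
        }
    }
}
```

Note that the "protocol" here is just one line of text per event; anything richer is up to you, which is exactly why this option is fragile.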
b) REST: Create a RESTful controller on the Monitor app that accepts a POST. Communicate the IP address and port manually to the API server, and initiate a POST request to the Monitor app when needed. This is more robust, but it still suffers from needing careful starting of the servers.
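A sketch of option (b) using the JDK's built-in com.sun.net.httpserver rather than Spring, so it is self-contained; in a real Spring Boot app the Monitor side would be a @RestController instead. Names and the /events path are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.*;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.*;

public class RestMonitor {
    public static final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Monitor side: start an HTTP server that accepts POST /events.
    public static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/events", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            events.add(new String(body, StandardCharsets.UTF_8));
            exchange.sendResponseHeaders(204, -1); // 204 No Content
            exchange.close();
        });
        server.start();
        return server;
    }

    // API side: POST an event to the Monitor.
    public static void post(int port, String event) throws IOException {
        HttpURLConnection c = (HttpURLConnection)
                new URL("http://127.0.0.1:" + port + "/events").openConnection();
        c.setRequestMethod("POST");
        c.setDoOutput(true);
        c.getOutputStream().write(event.getBytes(StandardCharsets.UTF_8));
        c.getResponseCode(); // force the request to complete
        c.disconnect();
    }
}
```

The hard-coded 127.0.0.1 is the manual address communication mentioned above; that coupling is the weakness of this option.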
c) Message queue: Install a message-queue system like RabbitMQ or ActiveMQ (both available as Docker containers). The API server publishes a message to a queue; the Monitor subscribes to that queue. Much more robust; each application still needs to be told the address of the MQ server, but now you can stop/start the two applications in any order.
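An in-process analogy of option (c): a real setup would use a RabbitMQ or ActiveMQ client library, but a BlockingQueue standing in for the broker makes the decoupling visible. All names are illustrative:

```java
import java.util.concurrent.*;

public class QueueDemo {
    // Stand-in for the broker; in reality this lives in its own container.
    static final BlockingQueue<String> broker = new LinkedBlockingQueue<>();

    // API side: publish and move on; no knowledge of who consumes, or when.
    static void publish(String event) throws InterruptedException {
        broker.put(event);
    }

    // Monitor side: consume whenever it happens to be running.
    static String consume() throws InterruptedException {
        return broker.poll(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        publish("GET /health 200");
        System.out.println("monitor saw: " + consume());
    }
}
```

Because the broker buffers messages, neither side has to be up when the other publishes or consumes; that is what buys you the stop/start-in-any-order property.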
d) Logging: The article you linked is a good starter on Java logging. Most use cases log to a local file on the local server. Some logging backends can send logs to remote destinations (I don't think that article covers them), and there are ways of adding your own custom receiver of this log traffic. With this option, the API side uses ordinary logging code with no knowledge of the downstream consumption of the logs, while your Monitor app has to integrate tightly with the particular logging system you choose.
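A sketch of the "custom receiver" idea from option (d), using java.util.logging for self-containment (logback/log4j have the equivalent Appender concept). The API logs normally; the custom Handler forwards records elsewhere, here into a list, but it could just as well write to a socket. Names are illustrative:

```java
import java.util.*;
import java.util.logging.*;

public class ForwardingHandler extends Handler {
    public final List<String> forwarded = new ArrayList<>();

    @Override public void publish(LogRecord record) {
        // In a real monitor setup, this is where the record leaves the process.
        forwarded.add(record.getLevel() + ": " + record.getMessage());
    }
    @Override public void flush() {}
    @Override public void close() {}

    public static void main(String[] args) {
        Logger log = Logger.getLogger("api");
        log.setUseParentHandlers(false); // don't also print to the console
        ForwardingHandler h = new ForwardingHandler();
        log.addHandler(h);

        log.info("request handled");     // ordinary logging code on the API side
        System.out.println(h.forwarded); // prints: [INFO: request handled]
    }
}
```

Note the API side only ever calls `log.info(...)`; the forwarding is pure configuration, which is the attraction of this option.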
I need all server console output to appear in the client's output.
I'm invoking a remote method on a remote VM; during the remote method's execution there is some log4j output to the console (on the remote side).
I want to get/return all log4j output to my client-side console.
Is this possible?
Not really. You have to understand that the client and the server only communicate through the RMI interface that you defined. The two programs run in their own JVMs, so stdout is something completely different for the client and the server. The same is of course true for any kind of logging infrastructure.
If you really want to push the server messages into your client logs, then you need to enhance that RMI interface, for example by allowing the server to send back a List&lt;String&gt; that contains all the messages.
But please note: that is a rather bad design idea. You really do not want your client logs to contain server details. What happens on the server stays on the server; your clients have no business knowing about such details. Your users might find it very helpful, when planning to attack your server, to know what that thing is doing in detail!
Update: given your input, I would go for the following:
Make sure that you can really capture any character printed to stdout/stderr on your server, for example by "replacing" stdout/stderr so that anything printed there goes into some file (see here). Alternatively, if your VM is Linux, you can make sure both get piped into files.
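The "replacing" trick above is just System.setOut/System.setErr. A minimal sketch, capturing into an in-memory buffer for illustration (in production you would point the PrintStream at a file):

```java
import java.io.*;

public class StdoutCapture {
    // Run body with System.out redirected, then restore it and return the capture.
    public static String captureStdout(Runnable body) {
        PrintStream original = System.out;
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buf, true)); // autoflush
        try {
            body.run();
        } finally {
            System.setOut(original); // always restore, even on exceptions
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        String captured = captureStdout(() -> System.out.println("server detail"));
        System.out.println("captured: " + captured.trim());
    }
}
```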
Instead of trying to capture stuff within your RMI service, I would go for a simpler solution: add an RMI interface that allows you to pull those stdout/stderr files from your server. In other words: keep your current RMI calls as they are, but build another service that you can use to retrieve full log files at arbitrary points in time.
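A sketch of that second RMI service. The registry binding and UnicastRemoteObject export are standard RMI boilerplate and omitted here; interface and class names are illustrative:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.rmi.Remote;
import java.rmi.RemoteException;

// The pull interface the client calls at arbitrary points in time.
public interface LogPuller extends Remote {
    // Returns the current contents of one server-side log file.
    String fetchLog(String name) throws RemoteException, IOException;
}

// Server-side implementation reading the files the stdout capture produced.
class LogPullerImpl implements LogPuller {
    private final Path logDir;
    LogPullerImpl(Path logDir) { this.logDir = logDir; }

    @Override public String fetchLog(String name) throws IOException {
        return new String(Files.readAllBytes(logDir.resolve(name)),
                StandardCharsets.UTF_8);
    }
}
```

Returning the whole file as one String is fine for modest logs; for large files you would page through them (e.g. an offset/length variant of fetchLog).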
I was wondering if the SNMP protocol can help me develop a Java application to centralize the log files of a local network.
I'm not trying to monitor the network devices; I just want to centralize the log files and analyse them.
Probably not.
You'd have to first convert the log entries on each machine into SNMP traps, and then have some system gathering the traps and putting them in a file (i.e. converting them back into a log).
You'd be better off using a protocol designed for this usage, like syslog. If you set up a log server using a freely available application like rsyslog or syslog-ng, you can then develop your distributed Java application using a syslog library like Syslog4j. Each application instance could then log to the same syslog server. Individual servers running Linux or other Unixes can then also send their system log to the same log server.
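For a sense of what a library like Syslog4j does under the hood: a classic (RFC 3164-style) syslog message is a single UDP datagram, priority in angle brackets, sent to the log server (port 514 in real deployments; any port works for a local test). A hand-rolled sketch, with illustrative names:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class SyslogSender {
    // facility 1 (user) * 8 + severity 6 (info) = priority 14
    public static void send(String host, int port, String tag, String msg)
            throws Exception {
        String packet = "<14>" + tag + ": " + msg;
        byte[] data = packet.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(host), port));
        }
    }
}
```

In practice you would use the library rather than this, since it also handles timestamps, hostnames, and TCP transports, but the wire format really is this simple.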
I need to log events to the syslog on the localhost only. My first trial was with the logback SyslogAppender, but it looks like it writes the logs through UDP. The problem is that the syslog daemon needs to be configured to accept remote logging, which I can't guarantee on all deployment targets. Is there any way to log to syslog "directly" (the way /bin/logger would do it) from Java? (I mean, without needing to go through UDP or TCP.)
I would guess you could invoke /bin/logger directly, but you would need to write a new appender to do so. The "easiest" approach would be subclassing the current SyslogAppender to get all the scaffolding and only override the snippet that sends the message.
You may want to experiment first with a simple Java program that invokes /bin/logger, to see whether this is a viable approach.
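Such an experiment might look like the following: a generic helper that runs a command and collects its stdout, plus a method that uses it to call logger(1), which writes to the local syslog socket itself, so no UDP is involved on our side. The helper assumes `logger` is on the PATH of the target machine; names are illustrative:

```java
import java.io.*;

public class LoggerInvoker {
    // Run a command, wait for it, and return its combined stdout/stderr.
    public static String run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) out.append(line).append('\n');
        }
        p.waitFor();
        return out.toString();
    }

    // Write to the local syslog via the logger(1) binary.
    public static void logToSyslog(String tag, String message) throws Exception {
        run("logger", "-t", tag, message);
    }
}
```

The obvious downside is a fork/exec per log line, so this only makes sense for low-volume logging.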
Just came across a similar issue and found SysLog4J. Unfortunately, it has not been actively developed since 2011.
Some alternative packagings exist - this seems to be fresh and alive: https://github.com/Graylog2/syslog4j-graylog2
Edit: A bit better answer is already part of the following question: Using Syslog's unix socket with Log4J2 :)
We have some applications that sometimes get into a bad state, but only in production (of course!). While taking a heap dump can help to gather state information, it's often easier to use a remote debugger. Setting this up is easy: one need only add this to the command line:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT
There seems to be no available security mechanism, so turning on debugging in production would effectively allow arbitrary code execution (via hotswap).
We have a mix of 1.4.2 and 1.5 Sun JVMs running on Solaris 9 and Linux (Redhat Enterprise 4). How can we enable secure debugging? Any other ways to achieve our goal of production server inspection?
Update: For JDK 1.5+ JVMs, one can specify an interface and port to which the debugger should bind. So, KarlP's suggestion of binding to loopback and just using a SSH tunnel to a local developer box should work given SSH is set up properly on the servers.
However, it seems that JDK1.4x does not allow an interface to be specified for the debug port. So, we can either block access to the debug port somewhere in the network or do some system-specific blocking in the OS itself (IPChains as Jared suggested, etc.)?
Update #2: This is a hack that will let us limit our risk, even on 1.4.2 JVMs:
Command line params:
-Xdebug
-Xrunjdwp:
transport=dt_socket,
server=y,
suspend=n,
address=9001,
onthrow=com.whatever.TurnOnDebuggerException,
launch=nothing
Java Code to turn on debugger:
try {
    throw new TurnOnDebuggerException();
} catch (TurnOnDebuggerException td) {
    // Nothing
}
TurnOnDebuggerException can be any exception guaranteed not to be thrown anywhere else.
I tested this on a Windows box to prove that (1) the debugger port does not receive connections initially, and (2) throwing the TurnOnDebugger exception as shown above causes the debugger to come alive. The launch parameter was required (at least on JDK1.4.2), but a garbage value was handled gracefully by the JVM.
We're planning on making a small servlet that, behind appropriate security, can allow us to turn on the debugger. Of course, one can't turn it off afterward, and the debugger still listens promiscuously once it's on. But these are limitations we're willing to accept, as debugging a production system will always result in a restart afterward.
Update #3: I ended up writing three classes: (1) TurnOnDebuggerException, a plain 'ol Java exception, (2) DebuggerPoller, a background thread that checks for the existence of a specified file on the filesystem, and (3) DebuggerMainWrapper, a class that kicks off the polling thread and then reflectively calls the main method of another specified class.
This is how it's used:
Replace your "main" class with DebuggerMainWrapper in your start-up scripts
Add two system (-D) params, one specifying the real main class, and the other specifying a file on the filesystem.
Configure the debugger on the command line with the onthrow=com.whatever.TurnOnDebuggerException part added
Add a jar with the three classes mentioned above to the classpath.
Now, when you start up your JVM everything is the same except that a background poller thread is started. Presuming that the file (ours is called TurnOnDebugger) doesn't initially exist, the poller checks for it every N seconds. When the poller first notices it, it throws and immediately catches the TurnOnDebuggerException. Then, the agent is kicked off.
You can't turn it back off, and the machine is not terribly secure while it's on. On the upside, I don't think the debugger allows multiple simultaneous connections, so maintaining a debugging connection is your best defense. We chose the file-notification method because it allowed us to piggyback on our existing Unix authentication/authorization by specifying the trigger file in a directory where only the proper users have rights. You could easily build a little war file that achieves the same purpose via a socket connection. Of course, since we can't turn off the debugger, we'll only use it to gather data before killing off a sick application. If anyone wants this code, please let me know. However, it will only take you a few minutes to throw it together yourself.
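For anyone who wants to reconstruct it, a sketch of the poller described above (not the author's actual code; class and field names are illustrative). It watches for the trigger file and, when the file appears, throws and immediately swallows the marker exception so the `onthrow=` debugger agent starts:

```java
import java.io.File;

public class DebuggerPoller extends Thread {
    // Marker exception named in the onthrow= JVM argument.
    public static class TurnOnDebuggerException extends RuntimeException {}

    private final File trigger;
    private final long intervalMillis;
    public volatile boolean fired = false;

    public DebuggerPoller(File trigger, long intervalMillis) {
        this.trigger = trigger;
        this.intervalMillis = intervalMillis;
        setDaemon(true); // never keeps the JVM alive on its own
    }

    @Override public void run() {
        while (!fired) {
            if (trigger.exists()) {
                try {
                    throw new TurnOnDebuggerException(); // wakes the debugger agent
                } catch (TurnOnDebuggerException expected) {
                    fired = true; // nothing else to do; the throw was the point
                }
            }
            try { Thread.sleep(intervalMillis); }
            catch (InterruptedException e) { return; }
        }
    }
}
```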
If you use SSH you can allow tunneling and tunnel a port to your local host. No development required, all done using sshd, ssh and/or putty.
The debug socket on your java server can be set up on the local interface 127.0.0.1.
You're absolutely right: the Java Debugging API is inherently insecure. You can, however, limit it to UNIX domain sockets, and write a proxy with SSL/SSH to let you have authenticated and encrypted external connections that are then proxied into the UNIX domain socket. That at least reduces your exposure to someone who can get a process into the server, or someone who can crack your SSL.
Export information/services into JMX and then use RMI+SSL to access it remotely. Your situation is what JMX is designed for (the M stands for Management).
Good question.
I'm not aware of any built-in ability to encrypt connections to the debugging port.
There may be a much better/easier solution, but I would do the following:
Put the production machine behind a firewall that blocks access to the debugging port(s).
Run a proxy process on the host itself that connects to the port, and encrypts the input and output from the socket.
Run a proxy client on the debugging workstation that also encrypts/decrypts the input. Have this connect to the server proxy. Communication between them would be encrypted.
Connect your debugger to the proxy client.
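The relay core of such a proxy pair is small. A minimal sketch of step 2's forwarding (the encryption layer is deliberately omitted here, since any stream cipher or TLS wrapper can be layered onto the two pump directions; names are illustrative):

```java
import java.io.*;
import java.net.*;

public class DebugProxy {
    // Copy everything from in to out until EOF.
    static void pump(InputStream in, OutputStream out) {
        byte[] buf = new byte[4096];
        try {
            int n;
            while ((n = in.read(buf)) != -1) { out.write(buf, 0, n); out.flush(); }
        } catch (IOException ignored) {
            // Connection closed; nothing more to relay.
        }
    }

    // Relay one accepted connection to the (firewalled) debug port, both ways.
    public static void relay(Socket client, String host, int port) throws IOException {
        try (Socket target = new Socket(host, port)) {
            Thread upstream = new Thread(() -> {
                try { pump(client.getInputStream(), target.getOutputStream()); }
                catch (IOException ignored) {}
            });
            upstream.start();
            pump(target.getInputStream(), client.getOutputStream());
            upstream.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The client-side proxy is the mirror image: it decrypts what arrives from the server proxy and presents a plain socket for the debugger to connect to.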