I'm trying to create an app with a notification service that fires whenever a call is made to an API.
Is it possible for me to create a logger on port 8080 so that, when the app runs on the server, it listens to an API running on another server?
Both applications run on the local machine, using Docker, for testing purposes.
So far I've been reading https://www.baeldung.com/spring-boot-logging in order to implement it, but I'm having problems understanding the path mapping.
Any ideas?
First let's name the two applications:
API - the API service that you want to monitor
Monitor - which wants to see what calls are made to the API
There are several ways to achieve this.
a) Open up a socket on Monitor for inbound traffic. Communicate the IP address and socket port manually to the API server, have it open a connection to Monitor, and send some packet of data down this "pipe". This is the lowest-level approach: simple, but very fragile, as you have to coordinate the starting of the services and decide on a "protocol" for how the applications exchange data.
b) REST: Create a RESTful controller on the Monitor app that accepts a POST. Communicate the IP address and port manually to the API server, and have it initiate a POST request to the Monitor app when needed. This is more robust, but still suffers from needing careful starting of the servers.
c) Message queue. Install a message-queue system like RabbitMQ or ActiveMQ (both available as Docker containers). The API server publishes a message to a queue; Monitor subscribes to the queue. Much more robust; each application still has to be advised of the address of the MQ server, but now you can stop/start the two applications in any order.
d) The Java logging article is a good starter on Java logging. Most use cases log to a local file on the local server. There are some logging backends that send logs to remote destinations (I don't think that article covers them), and there are ways of adding your own custom receiver for this log traffic. With this option, the API side uses ordinary logging code with no knowledge of the downstream consumption of the logging; your Monitor app, however, would need to integrate tightly with a particular logging system.
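Option (a) can be sketched in a few lines of plain Java. This is purely an illustrative, in-process sketch (both ends run in one JVM on an ephemeral port, and the class and method names are mine), but the pieces are exactly the ones you would split across the two Docker containers:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketNotifySketch {

    // Round-trips one notification line through a local "Monitor" socket.
    // In a real deployment the ServerSocket lives in the Monitor app and
    // the connecting Socket in the API app, on different hosts.
    public static String roundTrip(String notification) throws Exception {
        try (ServerSocket monitor = new ServerSocket(0)) { // 0 = pick a free port
            Thread apiSide = new Thread(() -> {
                try (Socket s = new Socket("localhost", monitor.getLocalPort());
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(notification); // the agreed-on "protocol": one line per call
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            apiSide.start();
            try (Socket inbound = monitor.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(inbound.getInputStream()))) {
                return in.readLine();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Monitor saw: " + roundTrip("GET /users from 10.0.0.5"));
    }
}
```

The fragility mentioned above is visible even here: if the Monitor side isn't up before the API side connects, the connection simply fails.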
I have an NFS server on which I need to host files and read them. The approach I found for reading and writing files on an NFS server is
to use an NFS client, like here.
My question is: if we can write content on an NFS server with a normal Java read/write program, why was the NFS client introduced? Is there any NFS-specific service these clients provide, and how is it different from the normal file-creation process?
When you're using the normal Java API to access an NFS folder, all communication is actually handled by your OS. So you can just use the normal File API, and Java doesn't know whether it's accessing a local file or a remote one. But in cases where your OS doesn't support NFS (e.g. if your Java app is running in an environment with limited resources, or NFS mounting is disabled at the OS level), or you are developing an application that needs lower-level details about the NFS resource (e.g. when you're developing a framework or a middleware), you may need to communicate directly with the server that exposes files/folders, via a library like nfs-client-java.
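To illustrate the first point: from Java's side, the code below is identical whether the path is local or sits on an NFS mount handled by the OS. A minimal sketch (the NFS mount point mentioned in the comment is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NfsViaFileApi {

    // Writes then reads a file using the ordinary file API. If the path
    // happens to sit on an NFS mount, the OS transparently handles the
    // network traffic; the Java code is identical for local paths.
    public static String writeThenRead(Path file, String content) throws Exception {
        Files.write(file, content.getBytes(StandardCharsets.UTF_8));
        return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // "/mnt/nfs/share" would be a hypothetical NFS mount point; any
        // writable directory behaves the same from Java's point of view.
        Path file = Paths.get(System.getProperty("java.io.tmpdir"), "nfs-demo.txt");
        System.out.println(writeThenRead(file, "hello over (possibly) NFS"));
    }
}
```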
So, we have a cluster of servers that run the same application. For logging, we use log4j and the Loggers used are attached to ConsoleAppenders that ultimately log the data to local log files. To aggregate logs from all of these servers, I also have a SocketAppender attached to each of these loggers. A SimpleSocketServer runs on another remote host configured with this SocketAppender that will consume the LoggingEvents and format them with the PatternLayout specified in the configuration file supplied to that process.
Since this remote server can only consume the LoggingEvents sent to it by the clients, I cannot apply a formatter before sending them across, and so I cannot log the identifier of the client host that sent the event. Ideally, I want to be able to log the IP of the client so that I can pinpoint a host with a particular issue from the logs. What is the best way to achieve this?
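One technique that fits this architecture (not part of the setup described above, so treat it as a sketch) is log4j's MDC: values put into the MDC on a client are serialized inside each LoggingEvent that the SocketAppender ships, and the server-side PatternLayout can render them with %X. Each client would call MDC.put("host", InetAddress.getLocalHost().getHostName()) once at startup, and the SimpleSocketServer's configuration would use something like:

```properties
# Hypothetical server-side log4j.properties fragment; the appender name "R"
# and MDC key "host" are examples, not fixed names.
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d [%X{host}] %-5p %c - %m%n
```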
I have no prior experience with Java and I'm trying to debug a Java class. The class gets called from C++ code. The Java code is full of SysLog calls that would be very helpful to me, but I can't figure out where the output goes, if anywhere.
SysLog is initialized quite simply like this:
public static void initLogger() {
    if (log == null) {
        log = new Syslog();
    }
}
And just to complicate matters, this code is running as a Windows service which means it cannot display any sort of UI, not even console output. The logging must go to a file to be useful.
Do I need to do something to enable the logging? If so, what?
Do I need to do something to send the logging to file(s)? If so, what?
Some info about Syslog - it was originally developed as a generic UNIX logging service. It can log local messages from the system or collect log messages from a number of different servers on the network. Generally any UNIX system will have some implementation of a syslog service available on it. It is a common way of aggregating messages from a number of systems.
On Windows, there are a number of syslog server implementations out there. If a syslog service is running on your Windows box, then the messages might be ending up there.
However, the real question is: to which syslog service does your Java app send messages? In theory, your Java app could be configured to send syslog messages to a remote machine, or it could try to send messages to the local host (since a syslog service listens on a well-known default port).
So I think what VD is saying is: check your Java application settings and look for syslog configuration settings. You will have to gain access to a syslog server (either install one locally on Windows or get a remote machine to listen), and then, using this link, configure your Java app appropriately.
An additional complication might be that your Java environment uses a custom or special syslog handler. log4j is very common and is probably the default, but you might want to check that as well.
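If you want to check what a syslog message even looks like on the wire, you can hand-craft one yourself. The sketch below follows the classic BSD syslog convention (priority = facility * 8 + severity, sent over UDP port 514); the class and tag names are mine, and since UDP is fire-and-forget, sending succeeds even when no server is listening:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SyslogProbe {

    // BSD syslog priority = facility * 8 + severity.
    public static int priority(int facility, int severity) {
        return facility * 8 + severity;
    }

    // Builds a minimal BSD-syslog style message, e.g. "<14>myapp: hello".
    public static String format(int facility, int severity, String tag, String msg) {
        return "<" + priority(facility, severity) + ">" + tag + ": " + msg;
    }

    public static void main(String[] args) throws Exception {
        // facility 1 = user-level, severity 6 = informational  ->  "<14>..."
        byte[] payload = format(1, 6, "myapp", "hello from Java")
                .getBytes(StandardCharsets.US_ASCII);
        try (DatagramSocket socket = new DatagramSocket()) {
            // 514 is the default syslog UDP port; nothing complains if no
            // server is listening there.
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), 514));
        }
        System.out.println("sent: " + new String(payload, StandardCharsets.US_ASCII));
    }
}
```

Watching such a probe arrive (or not arrive) in a local syslog server is a quick way to confirm where your app's messages are going.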
I started with the requirement of reading and writing files from/in a directory on a remote Ubuntu machine.
First, I wrote a Java program that could read and write files from a shared folder on a remote Windows machine, i.e. on a LAN. Here, something like this works on my (local) Windows machine:
File inputFile = new File(
        "\\\\172.17.89.76\\EBook PDF"); /* UNC path, backslashes escaped; the location is just for the idea */
Now when I consider a remote Ubuntu machine, obviously I cannot do something like this, as the machine is not on the LAN (I'm not sure whether that could be done even if it were on the LAN!). Hence, I tried the following approaches:
Using Jsch, establishing the trust between two machines (local to remote Linux, remote Linux to remote Linux) and writing files using SFTP. (done)
Running sockets on the two machines: one sender, one receiver (both Java). (done)
Attempting to achieve I/O like the code snippet for Windows (LAN) machines. (not achieved)
While doing all this, I had many queries, read many posts, etc., and I felt that I was missing something fundamental:
Some sort of trust-building utility (between the two machines) will be required to achieve I/O. But ultimately, I want to write code like the snippet given, irrespective of the machines, network, etc.
The Jsch solution and the others suggested (usage of HTTP, FTP, etc. over a URL) ultimately rely on services running on the remote machine. In other words, it is NOT plain Java I/O that is accessing the remote file system; this doesn't appeal to me, as I'm relying on services rather than using good old I/O.
Samba and SSHFS popped onto the scene too, only adding to my confusion. But I don't see them as solutions to my objective!
To reiterate, I want to write code using Java I/O (either plain I/O or NIO, both are fine) that can simply read and write remote files without using services over protocols like FTP or HTTP, or a socket sender/receiver model. Is my expectation valid?
If not, why not, and what is the best I can do to read/write remote files
using Java?
If yes, how can I achieve it?
P.S : Please comment in case I need to elaborate to pose my question accurately !
To answer your question - No, your expectation isn't valid.
Retrieving files from a remote server is inherently reliant on the services running on that server. To retrieve a file from a remote server, the remote server needs to be expecting your request for a file.
The cases you listed in your question (using Jsch and SFTP, using sender and receiver Java sockets), which you have already achieved, are essentially the same as this:
File inputFile = new File(
        "\\\\172.17.89.76\\EBook PDF");
The only difference is that Java is using the native OS's built-in support for reading from a Windows-style share. The remote Windows machine has a sharing service running on it (just like Samba on Linux, or a Java socket program) waiting for your request.
From the Java API docs on File (http://docs.oracle.com/javase/6/docs/api/java/io/File.html)
The canonical pathname of a file that resides on some other machine and is accessed via a remote-filesystem protocol such as SMB or NFS ...
So essentially "Good old Java I/O" is more or less just a wrapper over some common protocols.
To answer the second part of your question (what is the best I can do to read/write remote files using Java?), that depends on what remote system you are accessing and, more importantly, what services are running on it.
In the case of your target remote machine being an Ubuntu machine, I would say the best alternative is to use Jsch. If your target machine can be either a Windows machine or a Linux machine, I would probably go for running Java sockets on the two machines (obviously dependent on whether you can install your app on the remote machine).
Generally speaking, go with the common lowest denominator between your target systems (in terms of file sharing protocols).
If you want to access a filesystem on a remote computer, then that computer has to make its filesystem available via a service. Such a service is typically a background job which handles incoming requests and returns responses, e.g. for authentication, authorization, reading, and writing. The specification of this request/response pattern is called a protocol. Well-known protocols are SMB (implemented by Samba) on Windows and NFS on UNIX/Linux. To access such a remote service, you mount the remote filesystem at the operating-system level and make it available locally, as a drive on Windows or as a mount point on UNIX.
Then you can access the remote file system from your Java program like any local file system.
Of course, it is also possible to write your own file service provider (with your own protocol layer) and run it on the remote machine. Sockets (TCP/IP) can be used as the transport layer for such an endeavor. Another good transport layer would be HTTP, e.g. with a RESTful service or something based on WebDAV.
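A minimal version of such a home-grown file service over TCP sockets might look like the sketch below. Both ends run in one JVM here for demonstration, and the length-prefixed protocol is invented for the example:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TinyFileService {

    // "Client" side: asks a freshly started in-process "server" for a file's
    // bytes. Made-up protocol: client sends a UTF path, server answers with
    // an int length followed by the raw bytes.
    public static byte[] fetch(Path file) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            Thread serverSide = new Thread(() -> {
                try (Socket s = server.accept();
                     DataInputStream in = new DataInputStream(s.getInputStream());
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    byte[] data = Files.readAllBytes(Paths.get(in.readUTF()));
                    out.writeInt(data.length);
                    out.write(data);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            serverSide.start();
            try (Socket s = new Socket("localhost", server.getLocalPort());
                 DataOutputStream out = new DataOutputStream(s.getOutputStream());
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                out.writeUTF(file.toString());
                byte[] data = new byte[in.readInt()];
                in.readFully(data);
                return data;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("svc", ".txt");
        Files.write(tmp, "remote content".getBytes());
        System.out.println(new String(fetch(tmp)));
    }
}
```

This is exactly the "services rather than good old I/O" trade-off discussed above: the client-side code no longer looks like plain File I/O, because the service and its protocol have become explicit.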
We used sshfs. You can add to /etc/fstab the line:
sshfs#user@remoteAddress:remoteDir /mnt/ssh fuse defaults 0 0
and then mount /mnt/ssh
I think RMI might be the solution: you could set up an RMI server on the machine you want to connect to, and use your machine as the client.
I would give the client a path to the file; this is sent to the server, and the server then reads the file in as bytes and sends it back to the client.
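A rough sketch of that idea using plain JDK RMI. All names here are mine, and both ends run in one JVM for demonstration; in the real scenario the registry and the service implementation would run on the remote machine:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiFileSketch {

    // The remote interface: client sends a path, server returns the bytes.
    public interface FileService extends Remote {
        byte[] read(String path) throws RemoteException;
    }

    static class FileServiceImpl implements FileService {
        public byte[] read(String path) throws RemoteException {
            try {
                return Files.readAllBytes(Paths.get(path));
            } catch (Exception e) {
                throw new RemoteException("read failed: " + path, e);
            }
        }
    }

    // Starts a registry, binds the service, and fetches one file through the
    // RMI stub, then tears everything down again.
    public static byte[] fetchViaRmi(String path) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099); // default RMI port
        FileServiceImpl impl = new FileServiceImpl();
        FileService stub = (FileService) UnicastRemoteObject.exportObject(impl, 0);
        registry.rebind("files", stub);
        try {
            FileService client = (FileService) registry.lookup("files");
            return client.read(path);
        } finally {
            registry.unbind("files");
            UnicastRemoteObject.unexportObject(impl, true);
            UnicastRemoteObject.unexportObject(registry, true);
        }
    }

    public static void main(String[] args) throws Exception {
        java.nio.file.Path tmp = Files.createTempFile("rmi", ".txt");
        Files.write(tmp, "served over RMI".getBytes());
        System.out.println(new String(fetchViaRmi(tmp.toString())));
    }
}
```

Note that this is still a service running on the remote machine, like every other option in this thread; RMI just hides the protocol behind an ordinary-looking Java method call.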