Manage running Java apps remotely - java

We have several standalone Java applications (in the form of JAR files) running on multiple servers. These applications mainly read and stream data between systems. We mostly use Java 8 in our development. I was put in charge recently; my main function is to manage and maintain these apps.
Currently, I check these apps manually by logging into the servers, checking whether each app is running, and sometimes running database queries to see whether the app has started pulling data. My problem is that in many cases some of these apps fail and shut down due to data issues or edge cases without anyone noticing. We need some monitoring and application recovery in place.
We don't have Docker infrastructure in place. We plan to adopt Docker in the future, but for now this is not an option.
After some research, the following are options I have thought of or solutions I have tried:
Have the apps create a socket client which sends a heartbeat to a monitoring app (which would need to be developed). I am keeping this as my last option; a rough sketch of what such a heartbeat sender could look like is shown after this list.
I tried to use Eclipse Vert.x to wrap the apps into verticles and then create a web view that shows me status and other info. After several tries, the apps failed to parse the data correctly (possibly due to my lack of understanding of the Vert.x library).
Use a third-party solution that does this, but I have no idea what solutions are out there. I am open to suggestions.
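For reference, here is a rough, hypothetical sketch of what the heartbeat sender in the first option could look like (host, port, message format and interval are all assumptions; the monitoring app on the other end still needs to be developed):

```java
// Hypothetical sketch of option 1: a background thread that sends a periodic
// heartbeat line to a (yet to be built) monitoring app over a plain TCP socket.
import java.io.PrintWriter;
import java.net.Socket;

public class HeartbeatSender implements Runnable {
    private final String monitorHost;
    private final int monitorPort;
    private final String appName;

    public HeartbeatSender(String monitorHost, int monitorPort, String appName) {
        this.monitorHost = monitorHost;
        this.monitorPort = monitorPort;
        this.appName = appName;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try (Socket socket = new Socket(monitorHost, monitorPort);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                // One line per heartbeat: app name and timestamp.
                out.println(appName + " alive " + System.currentTimeMillis());
            } catch (Exception e) {
                // Monitoring app unreachable; the monitor would treat missed beats as a failure.
            }
            try {
                Thread.sleep(30_000); // heartbeat interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

It could be started from each app's main method, e.g. new Thread(new HeartbeatSender("monitor-host", 9090, "stream-app-1")).start();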
My requirements are:
Proper monitoring of the apps running and their status.
In case of failure, the app should start again while notifying the admin/developer.
I am willing to develop a solution or implement a third-party one. I need your guidance on this.
Thank you.

You could use spring-boot-actuator (see health). It comes with a built-in endpoint that has some health checks (depending on your Spring Boot project), but you can create your own as well.
Then, by making an HTTP request to http://{host}:{port}/{context}/actuator/health (replace with your own values), you can see the status of those health checks and also use the response status code to monitor your application.
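For example, a custom health check could report whether the app has pulled data recently. This is only a sketch, assuming spring-boot-starter-actuator is on the classpath; the DataPullHealthIndicator class and the secondsSinceLastSuccessfulPull() check are hypothetical:

```java
// Sketch of a custom Actuator health check; the "data pull" check is illustrative only.
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class DataPullHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        long secondsSinceLastPull = secondsSinceLastSuccessfulPull();
        if (secondsSinceLastPull > 300) {
            // Reported under /actuator/health and turns the overall status DOWN.
            return Health.down()
                    .withDetail("secondsSinceLastPull", secondsSinceLastPull)
                    .build();
        }
        return Health.up()
                .withDetail("secondsSinceLastPull", secondsSinceLastPull)
                .build();
    }

    private long secondsSinceLastSuccessfulPull() {
        // Hypothetical: replace with a real check against your own bookkeeping or database.
        return 0;
    }
}
```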

Have you heard of Java Service Wrappers? They don't provide full management functionality, but they will monitor for JVM crashes and out-of-memory conditions and restart your application. Alerting should also be possible.
There is a small comparison table here: https://yajsw.sourceforge.io/#mozTocId284533
So some basic monitoring and management is included already. If you need more, I suggest using JMX (https://www.oracle.com/java/technologies/javase/javamanagement.html) or Prometheus (https://prometheus.io/ and https://github.com/prometheus/client_java).
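If you go the Prometheus route, a minimal sketch with the simpleclient Java libraries (simpleclient plus simpleclient_httpserver) might look like the following; the metric name and port are made up, and you would still configure Prometheus alerting separately to notify admins when a target stops reporting:

```java
// Minimal sketch using the Prometheus Java "simpleclient" libraries.
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class MetricsBootstrap {
    // Counts records streamed; scraped by a Prometheus server.
    static final Counter RECORDS_STREAMED = Counter.build()
            .name("records_streamed_total")
            .help("Total number of records streamed.")
            .register();

    public static void main(String[] args) throws Exception {
        // Exposes /metrics on port 9400 for Prometheus to scrape.
        HTTPServer metricsServer = new HTTPServer(9400);

        // ... application work loop ...
        RECORDS_STREAMED.inc();
    }
}
```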

Related

Google Cloud scheduler Java job containerized with Selenium

I've got some Java code that performs interactions with web pages, and I used Selenium for it.
Now I'd like to get this code executed every hour, and I thought it would be a great occasion to discover the cloud world.
I've created an account on Google Cloud.
Because my app needs a driver to use Selenium (geckodriver for Firefox), I'll have to create a Docker image that sets up everything it needs.
In Google Cloud services, there is "Cloud Scheduler", which allows me to run code when I want to.
But here are my questions :
What kind of target should I configure (HTTP, Pub/Sub, HTTP App Engine)?
Because I'm not using Google Cloud Functions, my container will always be up, which doesn't seem like a great idea for pricing reasons. I would like to have my container up only for the duration of the execution.
Also, I was thinking of using the Quarkus framework to wrap my application, since I've heard it was made for the cloud and is very quick to start. Is that the best option for me?
I'd be very glad if someone could help me see this a little more clearly. I'm not a total beginner: I've worked as a Java / JavaScript developer for 5 years now and have dockerized some applications, but the cloud is a big topic and it's not easy to know where to start.
So you:
are using Docker images
run your workload occasionally
aren't willing to use Cloud Functions
==> Cloud Run is your best bet. Here is Google Cloud Run Quick start : https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy
Keep in mind that your containerised application needs to be listening for HTTP requests, so take a look at the Cloud Run container runtime contract.
Finally, you can indeed trigger Cloud Run from Cloud Scheduler; here is detailed documentation on how to do it: https://cloud.google.com/run/docs/triggering/using-scheduler
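For illustration, here is a minimal sketch of a Java entry point that satisfies that contract by listening on the port given in the PORT environment variable, using the JDK's built-in HTTP server; runSeleniumJob() is a placeholder for the actual Selenium work:

```java
// Sketch of a container entry point that listens on $PORT as Cloud Run expects.
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class JobServer {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        server.createContext("/", exchange -> {
            runSeleniumJob(); // placeholder for the real work
            byte[] body = "done".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });

        server.start(); // keeps the container serving requests between scheduled invocations
    }

    private static void runSeleniumJob() {
        // Hypothetical: drive Firefox via geckodriver here.
    }
}
```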
As @MBHAPhoenix says, Cloud Run is your best option. You can then trigger the job from Cloud Scheduler. We have this exact scenario currently running for one of our projects, but our container is Python. We wrote an article about it here.
You should note that to trigger your Cloud Run job from Cloud Scheduler, you'll have to 'secure it'. This means you won't be able to just type the URL into a web browser. A service account will be responsible for running the Cloud Run job, and you'll then need to grant your Cloud Scheduler service access to this service account so it can invoke the Cloud Run job. I've been meaning to put up a post about the exact steps for doing this (will try to get it done this weekend).
In terms of cost, we have this snippet from our article:
...Cloud Run only runs when it receives an HTTP request. It plays dead and comes alive to execute your code when an HTTP request comes in. When it is done executing the request, it goes 'dead' again till the next request comes in. This means you're not paying for time spent idling, i.e. when it is not doing anything...

Application upgrade from monolithic to microservices

We have a 13-year-old monolithic Java application using
Struts 2 for handling UI calls
JDBC/Spring JDBC Template for db calls
Spring DI
Tiles/JSP/jQuery for UI
Two deployables are created out of this single source code.
WAR for online application
JAR for running back-end jobs
The current UI is pretty old. Our goal is to redesign the application using microservices. We have identified modules which can run as separate microservices.
We have the following questions in mind:
Which UI framework should we go for (Angular/React or a home-grown one)? Angular seems to be very slow, and we need better performance as far as page loading is concerned.
Should UI/JavaScript make calls to backend web services directly, or should there be a Spring controller proxy in the deployed WAR which forwards UI calls to the APIs? This would also help if a single UI call requires getting/updating data from different microservices.
How should we cover the microservice security aspect?
Which load balancer should we go for if we want to have multiple instances of the same microservice?
Since it's a banking application, our organization does not allow using Elasticsearch/Lucene for searching, so we need suggestions for reporting using Oracle alone.
How should we run backend jobs?
There will also be a main payment microservice which will create payments. Since payment volume is huge, it will require multiple instances. How will we manage logged-in user sessions? Should we go for an in-memory distributed session store (maybe Memcached)?
This is a very broad question. You need to get a consultant architect to understand your application in depth, because it is unlikely you will get meaningful in-depth answers here.
However as a rough guideline here are some brief answers:
Which UI framework should we go for (Angular/React or a home-grown one)? Angular seems to be very slow, and we need better performance as far as page loading is concerned.
That depends on what the application actually needs to do. Angular is one of the leading frameworks and is usually not slow at all. You might be doing something wrong (are you making too many granular calls? is your backend slow?). React is also a strong contender, but seems to be losing popularity, although that is just a subjective opinion and could be wrong. Angular is a more feature-complete framework, while React is more of a combination of tools. You would be crazy to think you can do a home-grown one and bring it to the same maturity as these ready-made tools.
Should UI/JavaScript make calls to backend web services directly, or should there be a Spring controller proxy in the deployed WAR which forwards UI calls to the APIs? This would also help if a single UI call requires getting/updating data from different microservices.
Larger microservice architectures often involve an API gateway. Then again, it depends on your use case. You might also have an issue with CORS, so centralising calls through a proxy / API gateway, even if it is a simple reverse proxy (you don't need to develop it), might be a good idea.
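If you did want to express such a gateway in Java rather than using an off-the-shelf reverse proxy, one option is Spring Cloud Gateway. A hypothetical sketch (route ids, paths and service URIs are made up):

```java
// Hypothetical Spring Cloud Gateway routing config; service names/URIs are examples only.
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // The browser calls /api/payments/**; the gateway forwards to the payment service.
                .route("payments", r -> r.path("/api/payments/**")
                        .uri("http://payments-service:8080"))
                .route("accounts", r -> r.path("/api/accounts/**")
                        .uri("http://accounts-service:8080"))
                .build();
    }
}
```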
How should we cover the microservice security aspect?
Again, no idea what your setup looks like. JWT is a common approach. I presume the authentication process itself uses some centralised LDAP / Exchange or similar process. Once you authenticate, you can sign a token which you give to the client, which is then passed to the respective microservices in the HTTP Authorization header.
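As a sketch of that flow, here is what issuing and verifying such a token could look like with the jjwt library (the library choice, key handling and expiry are assumptions, not a prescription):

```java
// Sketch of issuing and verifying a JWT with the jjwt library (one possible choice).
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;
import java.security.Key;
import java.util.Date;

public class TokenService {
    // In practice the key would come from shared configuration, not be generated per run.
    private final Key key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

    // Issued by the authentication service after the LDAP (or similar) login succeeds.
    public String issue(String username) {
        return Jwts.builder()
                .setSubject(username)
                .setExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                .signWith(key)
                .compact();
    }

    // Each microservice verifies the token it receives in the Authorization header.
    public String verifyAndGetUser(String token) {
        Claims claims = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody();
        return claims.getSubject();
    }
}
```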
Which load balancer should we go for if we want to have multiple instances of the same microservice?
Depends on what you want. Are you deploying on a cloud based solution like AWS (in which case load balancing is provided by the infrastructure)? Are you going to deploy on a Kubernetes setup where load balancing and scaling is handled as part of its deployment fabric? Do you want client-side load balancing (comes part of Spring Cloud)?
Since it's a banking application, our organization does not allow using Elasticsearch/Lucene for searching, so we need suggestions for reporting using Oracle alone.
Without knowing what the data in Oracle looks like and what the reporting requirements are, all solutions are possible.
How should we run backend jobs?
Depends on the infrastructure you choose. Everything is possible, from simple cron jobs to cloud scheduling services or integrated Java scheduling mechanisms like Quartz.
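As an illustration of the Quartz option, a minimal sketch for scheduling one of the existing back-end jobs (the job name and cron expression are examples only):

```java
// Sketch of scheduling a back-end job with Quartz 2.x.
import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class JobBootstrap {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(NightlyReportJob.class)
                .withIdentity("nightlyReport")
                .build();

        // Run every day at 02:00.
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?"))
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }

    // The job itself: implements org.quartz.Job.
    public static class NightlyReportJob implements org.quartz.Job {
        @Override
        public void execute(org.quartz.JobExecutionContext context) {
            // Hypothetical: run the existing back-end batch logic here.
        }
    }
}
```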
There will also be a main payment microservice which will create payments. Since payment volume is huge, it will require multiple instances. How will we manage logged-in user sessions? Should we go for an in-memory distributed session store (maybe Memcached)?
Not really; that would defeat the whole purpose of microservices. JWT tokens are managed by the client's browser and expire automatically. You don't need to manage logged-in user sessions in such architectures.
As you have mentioned, it's a banking site, so security will be the first priority. Here are a few suggestions for the FE and BE.
FE: You could go with Preact; it's a React-like library but much lighter and faster compared to React. For UI you can go with styled-components instead of some heavy third-party lib. This will also improve performance. And obviously use CDNs for images and big files.
BE: Depending on your needs, a hybrid solution might work; Node could be a good option, e.g. for sessions.
Set up an auth server and have your services validate users against it; it can also be reused in the future for any kind of service, e.g. if you expose some kind of client APIs.
Use case for auth: you can use Redis for session info. Have the user validated by the auth server and add the info to Redis; later, check whether the user is logged in against Redis. This will reduce the load on the auth server. (I used the same strategy for a crypto exchange and it went pretty well.)
Load balancer: I'm not very familiar with Java, but for Node.js PM2 will do that for you; it's not a big deal, just one command and it will start multiple instances and balance on its own.
In case you have enormous traffic, you could go with a messaging service like RabbitMQ; this will reduce server costs by reducing the need to scale your servers.
BE jobs: I have done that with Node for intensive tasks and it went quite well. There you can use forking or spawning; this starts a new instance for a particular job, which is killed after completing it, and you can easily generate logs along with that.
For further clarification I'm here :)

How to 'split' a Spring web application into several applications/services

I have a Spring web application that consists of the following parts:
Web GUI for Users
Background processes
Webservice-like API
It's all in one application. For every update, the whole application has to be stopped and redeployed, which of course means that all users are kicked out and the API is temporarily unavailable.
I am wondering whether there are ways to separate the application into several applications/services which could be deployed separately. All applications would need access to the DAOs and to several utility classes/services.
I know there will be no ready-made solution for this. But maybe you can show some 'best practice examples' or show me some direction where I could go.
But if some part of the application is offline during redeployment, the complete application will not work, so the split alone is likely not the solution.
If the problem is just that the users' sessions are killed, then you should have a look at values stored in the session that cannot be serialized.
If your problem is that the application is offline for some time, then you need a second server and some switch (load balancer) to implement a blue-green deployment.
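To illustrate the serialization point above: if sessions are persisted or replicated so they survive a redeployment or a blue-green switch, everything stored in the HttpSession must be serializable. A hypothetical example of such a session attribute:

```java
// Illustration only: anything stored in the HttpSession must be serializable
// if sessions are persisted or replicated across a restart. Names are made up.
import java.io.Serializable;

public class UserPreferences implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String theme;

    public UserPreferences(String theme) {
        this.theme = theme;
    }

    public String getTheme() {
        return theme;
    }
}

// Storing it, e.g. in a controller:
//   session.setAttribute("prefs", new UserPreferences("dark"));
```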

Back-end server for Play 2 framework app

I'm planning a web application where users will be able to upload and process their files. The specifics of the application are irrelevant to my questions, but let's assume that the application will deal with MP3 audio files. I'm going to split my application into two distinct parts: the front-end and the back-end.
The front-end application will be a usual web application serving HTML pages to users. Typically a user will upload his file and fill in an HTML form to specify which operations he would like to perform on the file. The files will be initially uploaded to a storage facility, such as Amazon S3, and later processed by a back-end server. I'm using the Play 2.0.4 framework to develop the front-end application, and this is going very well for me. I managed to implement user authorization, drafted most of the UI, and also implemented file upload to S3. The application is currently deployed on Heroku without any problems.
For my back-end server I'm considering using the Play 2 framework once again. The back-end server will receive a notification (HTTP request) from the front-end server about the creation of a new job. The job specification will include a link to the original user file in storage and arguments describing the job. The job should be added to a queue. Now the most important part is to delegate the actual processing job to a third-party program, which most certainly will be a compiled command-line utility, such as SoX in the case of audio processing, written by good people using a programming language of their choice. As far as I know it is possible to call an external program from Java, pass command-line arguments, and collect the result. After processing is done, the back-end server will upload the processed file back to storage and send a notification (HTTP request) to the front-end application, which will store a link to the processed file and display it to the user at some later time. To be able to use the command-line utility I'm going to deploy the back-end application to an Amazon EC2 instance with a Typesafe Stack installation.
Here are some questions about this basic plan:
Is Play 2 a reasonable choice for the back-end, or should I look into alternatives? One of them seems to be CGI, which according to Wikipedia "is a standard method for web server software to delegate the generation of web content to executable files." Unfortunately I don't have any experience with that.
There shouldn't be any problems implementing a job queue with Play, should there?
Is it possible to install a command line utility on EC2 and call it from Play?
Should I expect any problems installing the Typesafe Stack on EC2? This post briefly describes what I'm planning to do: https://www.assembla.com/spaces/bufferine/wiki/Typesafe_stack_on_Amazon_EC2
Assuming that in the future the application will grow, how would I split the jobs among multiple instances on EC2? Should I create a separate job-balancing application in between my front-end and back-end?
I would appreciate any advice! Thanks!
Note: I'm using the Java API for the Play 2 framework, since I'm not familiar with the Scala language.
You may consider Akka for processing; it's built into Play 2. It will help you manage tasks easily and even save hardware resources if used with advanced features. There is a Java API that should cover all your needs. A separate back-end app is not strictly necessary; if you need more power you can scale even better with two identical instances. Play and Akka are stateless, so you can just add new instances to scale. To make it run on EC2, just use the play dist command.
And yes, you can install whatever you want on EC2 and call it from your app.
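As a sketch of that, calling an external command-line tool such as SoX from Java with ProcessBuilder and collecting its output could look roughly like this (the SoX arguments are just examples):

```java
// Sketch of calling an external command-line tool (e.g. SoX) and collecting its output.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExternalTool {
    public static int convert(String inputPath, String outputPath) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("sox", inputPath, "-r", "44100", outputPath);
        pb.redirectErrorStream(true); // merge stderr into stdout

        Process process = pb.start();
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("[sox] " + line); // forward tool output to the app log
            }
        }
        return process.waitFor(); // 0 means success
    }
}
```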
You may like:
http://akka.io/
http://www.playframework.com/documentation/2.1.0/JavaAkka
http://www.playframework.com/documentation/2.1.0/ProductionDist
Also, but in Scala:
http://blog.greweb.fr/2013/01/playcli-play-iteratees-unix-pipe/
http://blog.greweb.fr/2012/11/play-framework-enumerator-outputstream/

Configuring JBoss to create one process per http session?

In a web application I am developing, I am using a third party Java library (JPL) that uses JNI to connect to an external application: a Prolog engine.
For the nature of my problem, I need to have one Prolog engine per HTTP session. But as far as I know, the library I am using only lets me work with one Prolog engine per Java VM.
In order to solve this issue, I came up with the idea of trying to configure JBoss to launch a new process (instead of just a new thread) per HTTP session, a bit like CGI, where normally one process is started per HTTP request.
In this way, certain servlets could use the required JNI-based library without having to worry about synchronization issues on its side, since, as I expect (and hope I am not wrong about that), each of them would have an independent Prolog engine with different state (e.g., different asserted Prolog facts).
Is it possible to configure JBoss (or another servlet container) in this way? Any feedback or pointers will be highly appreciated!
To my knowledge this is not possible. However, looking at the documentation at http://www.swi-prolog.org/packages/jpl/java_api/high-level_interface.html#Multi-Threaded%20Queries, the only problem seems to be that you can have only one open query per VM.
