Check for file in intervals until timeout value expired - java

I'm trying to poll for whether an output file has been created in a specific directory after a script is invoked in the background, and then return from the method. Is that achievable using Timer and Callable? Is there anything available in the Spring Framework that avoids low-level Thread.sleep?
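One way to do this without Thread.sleep in your own code is a ScheduledExecutorService that polls at a fixed interval, combined with a CountDownLatch to block the caller until the file appears or the timeout expires. A minimal sketch (the file name and delays here are made up for the demo):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class FilePoller {

    // Polls for the file at a fixed interval; returns true when the file is
    // found within the timeout, false if the timeout expires first.
    public static boolean waitForFile(Path file, long pollMs, long timeoutMs)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch found = new CountDownLatch(1);
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(() -> {
            if (Files.exists(file)) {
                found.countDown();   // signal the waiting caller
            }
        }, 0, pollMs, TimeUnit.MILLISECONDS);
        try {
            return found.await(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            task.cancel(true);
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Path file = Paths.get(System.getProperty("java.io.tmpdir"), "output-demo.txt");
        Files.deleteIfExists(file);
        // Simulate the background script creating the file after a short delay.
        new Thread(() -> {
            try {
                Thread.sleep(300);
                Files.createFile(file);
            } catch (Exception ignored) { }
        }).start();
        boolean appeared = waitForFile(file, 100, 5000);
        System.out.println("File appeared: " + appeared);
        Files.deleteIfExists(file);
    }
}
```

The caller's thread only blocks inside `await`, and the polling itself runs on the scheduler thread, so no explicit Thread.sleep loop is needed in your method.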

Related

How to run a Spring Boot startup method only once in a multi-pod environment

I have a method with the @PostConstruct annotation that needs to be executed after application startup. My application is hosted in OS with multiple pods. Currently the method is executed every time a pod starts, but I want it to run only once, regardless of the number of instances.
One way to do this would be for each instance to look up a record in a shared database, and for the pod to lock that record while it executes the method.
Once the method has completed, it sets a flag on the record to indicate that the init sequence is complete.
Instances that did not execute the init method can then use this flag to skip execution.
Alternatively, if you can restructure your code, other techniques may apply; e.g. this link might be useful:
Kubernetes: Tasks that need to be done once per cluster or per statefulset or replicaset
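The lock-and-flag logic above can be sketched in plain Java. Here a ConcurrentHashMap stands in for the shared database row (in production this would be a table row claimed atomically, e.g. via SELECT ... FOR UPDATE); the pod names and key are made up for the demo:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OncePerClusterInit {

    // Stand-in for the shared database record; in production this would be a
    // row in a table that all pods can see and lock.
    private static final ConcurrentHashMap<String, Boolean> sharedStore = new ConcurrentHashMap<>();
    private static final AtomicInteger initRuns = new AtomicInteger();

    static void onStartup(String podName) {
        // putIfAbsent models "lock the record and check the flag" atomically:
        // only the first pod to claim the key runs the init method.
        if (sharedStore.putIfAbsent("init-done", Boolean.TRUE) == null) {
            initRuns.incrementAndGet();
            System.out.println(podName + " ran the init method");
        } else {
            System.out.println(podName + " skipped init (already done)");
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate three pods starting up concurrently.
        ExecutorService pods = Executors.newFixedThreadPool(3);
        for (int i = 1; i <= 3; i++) {
            final String pod = "pod-" + i;
            pods.submit(() -> onStartup(pod));
        }
        pods.shutdown();
        pods.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("Init executed " + initRuns.get() + " time(s)");
    }
}
```

However many pods race at startup, exactly one wins the atomic claim and runs the init method; the rest see the flag and skip.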

Periodically update a file in WEB-INF without restarting application - Google App Engine

My Java application in Google App Engine loads a whitelist file stored in /WEB-INF. The file is defined as a resource file in appengine-web.xml:
<resource-files>
<include path="/whitelist.txt" />
</resource-files>
The whitelist is loaded when the first GET request is received.
However, I want to modify the code such that the whitelist is loaded every 15 minutes. This way if I make any changes to the whitelist file (in WEB-INF/whitelist.txt), the changes are reflected soon after.
I tried using a ScheduledExecutorService with a Runnable task as described in https://stackoverflow.com/a/2249068/1244329, where the task consists of just reading the file. However, the task inside contextInitialized is never executed. In fact, I don't think the contextInitialized method is even being hit.
What am I doing wrong? How should I implement this?
You could use a cron job to execute the whitelist file loading. See Scheduling Tasks With Cron for Java.
But you have another problem: you can't actually change WEB-INF/whitelist.txt without deploying updated app code, so you can't refresh the whitelist this way without restarting your app.
You could do it by storing the file somewhere else, where you can update it independently of the app deployment, for example in GCS.
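As a sketch, the cron entry in WEB-INF/cron.xml could look like the following, where /tasks/reload-whitelist is a hypothetical servlet you would write that re-reads the whitelist (from GCS) into memory:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/tasks/reload-whitelist</url>
    <description>Reload the whitelist every 15 minutes</description>
    <schedule>every 15 minutes</schedule>
  </cron>
</cronentries>
```

App Engine then issues a GET to that URL on the schedule, so the reload happens inside a normal request handler rather than a background thread, which is what the sandbox restricts.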

Shutting down spring batch jobs gracefully in tomcat

We run several spring batch jobs within tomcat in the same web application that serves up our UI. Lately we have been adding many more jobs and we are noticing that when we patch our app, several jobs may get stuck in a STARTING or STARTED status. Many of those jobs ensure that another job is not running before they start up, so this means after we patch the server, some of our jobs are broken until we manually run SQL to update the statuses of the jobs to ABANDONED or STOPPED.
I have read here that JobScope and StepScope jobs don't play nicely with shutting down.
That article suggests not using JobScope or StepScope, but I can't help thinking this is a solved problem, and that people must be doing something on application exit to prevent it.
Are there some best practices for handling this scenario? What are you doing in your applications?
We are using spring-batch version 3.0.3.RELEASE
I'll give you an idea of how to solve this scenario; it's not necessarily a Spring Batch solution.
Every time I need to add jobs to an application, I do the following:
Create a table to control the jobs (queue, priority, status, etc.)
Create a JobController class to manage all jobs
All jobs have a status: R (running), F (finished), Q (queued); you can add more as needed, such as aborted or cancelled. The jobs themselves maintain these statuses.
The JobController must be loaded only once; you can define it as a Spring bean for this.
Add a boolean attribute to JobController recording whether the jobs have already been checked when it was instantiated. Initialize it to false.
Check for jobs in the R status, which means they were still running the last time the server stopped. Update every such job to Q and increase its priority so it is executed first after a server restart. Guard this check with that boolean attribute, and set it to true afterwards.
That way, the first time the JobController is used after a server crash, any unfinished jobs are reset to a status where they can be executed again, and the check happens only once because of the boolean attribute.
One thing to be aware of: be careful with your job priorities; if you manage them badly you may run into a starvation problem.
You can easily adapt this solution to spring-batch.
Hope it helps.
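The recovery step above can be sketched in plain Java. This is an in-memory illustration (the JobController, Job, and status names follow the description; the job names and priorities are made up, and in the real system the jobs would live in the control table, not a list):

```java
import java.util.ArrayList;
import java.util.List;

public class JobController {

    enum Status { Q, R, F }  // Queued, Running, Finished

    static class Job {
        final String name;
        Status status;
        int priority;
        Job(String name, Status status, int priority) {
            this.name = name;
            this.status = status;
            this.priority = priority;
        }
    }

    private boolean recoveryChecked = false;  // ensures the check runs only once
    private final List<Job> jobs;

    JobController(List<Job> jobs) {
        this.jobs = jobs;
    }

    // On first use, re-queue jobs left in R by a crash/restart and bump
    // their priority so they are executed first after the restart.
    void recoverStaleJobsOnce() {
        if (recoveryChecked) {
            return;
        }
        for (Job j : jobs) {
            if (j.status == Status.R) {
                j.status = Status.Q;
                j.priority += 1;
            }
        }
        recoveryChecked = true;
    }

    public static void main(String[] args) {
        List<Job> jobs = new ArrayList<>();
        jobs.add(new Job("import", Status.R, 5));   // was running when the server died
        jobs.add(new Job("report", Status.F, 5));   // already finished
        jobs.add(new Job("cleanup", Status.Q, 5));  // still queued
        JobController controller = new JobController(jobs);
        controller.recoverStaleJobsOnce();
        for (Job j : jobs) {
            System.out.println(j.name + " " + j.status + " p=" + j.priority);
        }
    }
}
```

Only the stale "running" job is re-queued and promoted; finished and already-queued jobs are untouched, and a second call to recoverStaleJobsOnce() is a no-op.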

Two instances despite using concurrent requests and low traffic

My Apache Wicket web application uses JDO for its data persistence in GAE/J.
On application start-up, the home page enqueues a task before it is shown (with zero delay to its default ETA). This task causes the construction of a new Wicket web page, in order to construct the JVM's singleton Persistence Manager Factory (PMF) instance for use by the application during its lifetime.
I have set the application to use concurrent requests by adding
<threadsafe>true</threadsafe>
to the application's appengine-web.xml file.
Despite this, after a single request to visit the application's home page, I get two application instances: one created by the home page visit request, and the other created by the execution of the enqueued task (about 6 to 7 seconds later).
I could try to solve this problem by delaying the execution of the enqueued task (by around 10 seconds, perhaps?), but why should I need to try this when I have enabled concurrent requests? Should the first GAE/J application instance not be able to handle two requests close together without causing a second instance to be brought forth? I presume that I am doing something wrong, but what is it?
I have searched Stack Overflow's set of tags ([google-app-engine] [java]), and the deprecated "Google App Engine for Java" group too, but have found nothing relevant to my question.
I would appreciate any pointers.
If you want the task to use an existing instance, you can set the X-AppEngine-FailFast header, which according to the GAE docs:
This header instructs the Scheduler to immediately fail the request if an existing instance is not available. The Task Queue will retry and back-off until an existing instance becomes available to service the request
It's worth checking out the Managing Your App's Resource Usage document for performance and tuning techniques.
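As a sketch using the Java Task Queue API, the header can be set when enqueuing the task (the /tasks/init-pmf handler URL here is hypothetical):

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Enqueue the PMF-initialization task so that it is only dispatched to an
// instance that already exists, instead of spinning up a second one.
Queue queue = QueueFactory.getDefaultQueue();
queue.add(TaskOptions.Builder
        .withUrl("/tasks/init-pmf")                   // hypothetical task handler URL
        .header("X-AppEngine-FailFast", "true"));     // fail fast if no idle instance
```

If no existing instance can take the request, the task fails immediately and the queue retries with back-off, rather than the scheduler starting a fresh instance for it.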

Multithreading in Java

I'm working with core Java and IBM WebSphere MQ 6.0. We have a standalone module, say DBComponent, that hits the database and fetches a result set based on a runtime query. The query is passed to the application via MQ messaging. We have a trigger configured on the queue that invokes the DBComponent whenever a message is available. The DBComponent consumes the message, constructs the query, and returns the result set to another queue. Throughout this process we use log4j to write audit statements to a log file.
The database connection is pooled using Apache pool. I am trying to check whether the log messages are logged correctly using a sample program. The program places the input message on the queue and then checks for the logs in the log file. The trigger-driven method invocation is expected to complete before I check for the message in the log file, but every time, my log-checking code executes first, causing the check to fail.
Even introducing a Thread.sleep(time) doesn't solve the problem. How can I make my method execution wait until the trigger operation completes?
Any suggestion will be helpful.
I suggest you read up on the concurrency primitives that Java offers. http://tutorials.jenkov.com/java-concurrency/index.html seems to cover the bases; the Thread Signaling chapter in particular.
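For this kind of "wait until the other side finishes" problem, a CountDownLatch is usually the simplest signalling primitive. A minimal sketch, with a worker thread standing in for the triggered DBComponent (the delay is made up for the demo):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TriggerWait {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);

        // Stand-in for the triggered DBComponent: signals when its work
        // (query execution and logging) is finished.
        Thread trigger = new Thread(() -> {
            try {
                Thread.sleep(200);   // simulate query + log writing
            } catch (InterruptedException ignored) { }
            done.countDown();        // signal completion to the waiting test
        });
        trigger.start();

        // The test blocks here until the trigger signals, or gives up after 5s.
        boolean completed = done.await(5, TimeUnit.SECONDS);
        System.out.println("Trigger completed: " + completed);
    }
}
```

Unlike a fixed Thread.sleep, the latch wakes the test the moment the work finishes, and the timeout on `await` keeps the test from hanging forever if the trigger never fires. In the real setup the completion signal would have to come from the DBComponent side, e.g. by having the test consume the reply message from the output queue.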
I would recommend against relying on log4j (or any logging functionality) even in a simple test program.
Have your test run as you would expect it to, putting debugging/tracing statements in the log as you see fit (be liberal about it; log4j is very fast!). Then, when it's done, check the log yourself.
Writing log parsing will only complicate your goals.
Write your test, view the result, view the logs. If you want automated testing, consider setting up a functional test; you can set up tests for free using Selenium (http://seleniumhq.org/). There's no need to write your own functional testing/parsing code when there are easy-to-configure, easy-to-use, easy-to-customize frameworks out there! :-)
