Memory leak in WAR servlet with Tomcat - Java

I am having some problems trying to implement a new function in my working servlet.
Currently I have a servlet with which mobile phones can register. The phones use REST to register against the servlet, and it works perfectly: every phone registration succeeds.
Now I need to add a new piece of functionality: I want to register this server against another component of my infrastructure.
I want that registration done at the very beginning. That is, when the servlet starts, it should perform the registration, forget about it, and then just work as it did before.
This is the error tomcat gives me:
Grave: The web application [/servletRegister] appears to have started a thread named [Timer-8] but has failed to stop it. This is very likely to create a memory leak.
This is my start class:
@Override
public Set<Class<?>> getClasses() {
    //-------------------------------
    //Set registration here
    //GatewayRegistrationHandler reg = GatewayRegistrationHandler.getInstance();
    //reg.registerDevice();
    //-------------------------------
    //register on a new thread due to process time
    new Thread(new RegisterGatewayOnBackground()).start();
    //Next is the working servlet code
    Set<Class<?>> classes = new HashSet<Class<?>>();
    classes.add(PublicationsResource.class);
    classes.add(DeviceResource.class);
    return classes;
}
I tried the commented-out lines first. When I got the memory-leak warning, I moved the call to a new thread to try to avoid the leak, but the behavior is the same.
The background function is this:
public class RegisterGatewayOnBackground implements Runnable {

    public RegisterGatewayOnBackground() {
    }

    public void run() {
        registerDevice();
    }

    private void registerDevice() {
        GatewayRegistrationHandler reg = GatewayRegistrationHandler.getInstance();
        reg.registerDevice();
    }
}
GatewayRegistrationHandler itself works fine: when I run the servlet, it executes, performs the registration, and only after that does the crash occur. I thought it was a timing problem and that running the registration in the background would solve it, but I am stuck here, since the background version behaves the same way.
I don't know how to track down the source of my memory leak. I am looking for advice, or for any tools that might help me solve the problem.

A thread started the way you start yours will not be named "Timer-8"; that name comes from a java.util.Timer. So the thread in the warning was probably started somewhere else.
The message Tomcat gives you indicates that the webapp is being undeployed (at that point Tomcat checks for threads that are still running and complains if it finds any). I'm not sure why the undeploy is happening, but if it's because you are stopping the webapp yourself, you may not need to fix this at all, unless you do lots of hot-deploys (deploying and undeploying while keeping Tomcat running). If the memory leaks only right before you kill the process anyway, the leak does no harm, and fixing it would be a waste of time.
If you do want to fix it, one easy way is to hook up a profiler and see who started this "Timer" thread.
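If you don't have a profiler handy, a quick first step is a thread dump. As a minimal sketch (using only standard java.lang APIs; where you call it from is up to you), you can list every live "Timer-*" thread and its current stack trace:

import java.util.Map;

public class ThreadDumpHelper {

    // Prints every live thread whose name starts with "Timer",
    // together with its current stack trace.
    public static void dumpTimerThreads() {
        for (Map.Entry<Thread, StackTraceElement[]> entry
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            if (t.getName().startsWith("Timer")) {
                System.out.println(t.getName() + " (daemon=" + t.isDaemon() + ")");
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }
}

Note that this shows what each timer thread is doing now, not who created it; for the creation site, a profiler (or a breakpoint on java.util.Timer's constructor) is still the better tool. Running jstack against the Tomcat process gives the same information from outside.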

Related

AndroidX Work Library is canceling operations, seemingly without reason

I am working with the AndroidX Work Dependency to try and run some background service operations. I am currently running the most recent stable version, 2.2.0 as of posting this question.
The actual operation I am running in the background is a fairly CPU-heavy one, as it uses some compression code in one of my libraries (here) and can take anywhere from 3-30 minutes depending on the size and length of the video in question.
Here is the code I have that builds and runs the work request:
public static void startService(){
    //Create the Work Request
    String uniqueTag = "Tag_" + new Date().getTime() + "_ChainVids";
    OneTimeWorkRequest.Builder builder = new OneTimeWorkRequest.Builder(CompleteVideoBackgroundService.class);
    Constraints.Builder constraints = new Constraints.Builder();
    constraints.setRequiredNetworkType(NetworkType.CONNECTED);
    builder.setConstraints(constraints.build());
    builder.setInitialDelay(1, TimeUnit.MILLISECONDS);
    builder.addTag(uniqueTag);
    Data inputData = new Data.Builder().putString("some_key", mySerializedJSONString).build();
    builder.setInputData(inputData);
    OneTimeWorkRequest compressRequest = builder.build();
    //Set the Work Request to run
    WorkManager.getInstance(MyApplication.getContext())
            .beginWith(compressRequest)
            .enqueue();
}
It then fires off this class which runs all of the background service operations:
public class MyServiceSampleForStackoverflow extends Worker {

    private Context context;
    private WorkerParameters params;

    public MyServiceSampleForStackoverflow(@NonNull Context context, @NonNull WorkerParameters params){
        super(context, params);
        this.context = context;
        this.params = params;
    }

    /**
     * Trimming this code down considerably, but the gist is still here
     */
    @NonNull
    @Override
    public Result doWork() {
        try {
            //Using a hard-coded 50% for this SO sample
            float percentToBringDownTo = 0.5F;
            Uri videoUriToCompress = MyCustomCode.getVideoUriToCompress();
            VideoConversionProgressListener listener = (progressPercentage, estimatedNumberOfMillisecondsLeft) -> {
                float percentComplete = (100 * progressPercentage);
                //Using this value to update the notification bar as well as printing to logcat. Erroneous code removed
            };
            String newFilePath = MyCustomCode.generateNewFilePath();
            //The line below is the one that takes a while, as it runs a long operation
            String compressedFilePath = SiliCompressor.with(MyApplication.getContext()).compressVideo(
                    listener, videoUriToCompress.getPath(), newFilePath, percentToBringDownTo);
            //Do stuff here with the compressedFilePath, as it is now done
            return Result.success();
        } catch (Exception e){
            e.printStackTrace();
            return Result.failure();
        }
    }
}
Once in a while, without any rhyme or reason, the worker stops without me telling it to. When that happens, I see this error:
Work [ id=254ae962-114e-4088-86ec-93a3484f948d, tags={ Tag_1571246190190_ChainVids, myapp.packagename.services.MyServiceSampleForStackoverflow } ] was cancelled
java.util.concurrent.CancellationException: Task was cancelled.
at androidx.work.impl.utils.futures.AbstractFuture.cancellationExceptionWithCause(AbstractFuture.java:1184)
at androidx.work.impl.utils.futures.AbstractFuture.getDoneValue(AbstractFuture.java:514)
at androidx.work.impl.utils.futures.AbstractFuture.get(AbstractFuture.java:475)
at androidx.work.impl.WorkerWrapper$2.run(WorkerWrapper.java:284)
at androidx.work.impl.utils.SerialExecutor$Task.run(SerialExecutor.java:91)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
I am quite literally just staring at my phone and not interacting with it at all when this happens. I am not running other apps, nor am I hogging the CPU with something else. I never see any stack trace of my own, and no immediate problem or cause is visible.
The question therefore is, what is happening here that is randomly stopping the Worker service without any provocation? Why is it randomly stopping operations?
Thanks all.
EDIT 1
I tested removing the network-constraints line, thinking it might be causing the issue, but the problem occurred even after that, so I don't believe that is the cause.
I am testing on a Google Pixel 3, API 28, Android 9, but every other device I have tested, regardless of API level (the minimum supported is 21), has shown the same issue.
EDIT 2
I tried rewriting the class to use an asynchronous approach by having it extend ListenableWorker instead of Worker, and that did not resolve the issue.
You are most likely facing one of two issues.
First, given that your background work runs for more than 10 minutes ("can take anywhere from 3-30 minutes depending on the size and length of the video in question"), you may be hitting a hard time limit imposed by the WorkManager code.
In the docs here they say: "The system instructed your app to stop your work for some reason. This can happen if you exceed the execution deadline of 10 minutes."
That seems the most likely, but the other issue could be related to the Android background limitations outlined here, which detail changes for apps that target Android 8.0 or higher. As you mentioned in an edit, you are testing on a Google Pixel 3, API 28, Android 9, so that may be directly related.
As far as solutions go, the simplest, albeit a frustrating one, would be to tell the user that they need to keep the app in the foreground. That would at least work around the 10-minute limit.
Another option would be to utilize the new Bubble API introduced in API 29. The docs are here, and the section that might interest you says: "When a bubble is expanded, the content activity goes through the normal process lifecycle, resulting in the application becoming a foreground process." Making a miniaturized 'expanded' view, to be expanded by users when the app closes, may be a good alternative for bypassing that 10-minute timer.
We can now run longer than 10 minutes, at least as of WorkManager 2.3.0-alpha02, which adds support for long-running workers:
WorkManager 2.3.0-alpha02 adds first-class support for long running workers. In such cases, WorkManager can provide a signal to the OS that the process should be kept alive if possible while this work is executing. These Workers can run longer than 10 minutes. Example use-cases for this new feature include bulk uploads or downloads (that cannot be chunked), crunching on an ML model locally, or a task that's important to the user of the app.
An example is given in the link shared. Please check it out.
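For reference, a minimal sketch of a long-running worker under that feature, assuming WorkManager 2.3.0 or newer: the worker marks itself as foreground work by calling setForegroundAsync with a notification. The class name, notification channel, and notification contents here are hypothetical; setForegroundAsync and ForegroundInfo are the APIs the release notes refer to.

import android.app.Notification;
import android.content.Context;
import androidx.annotation.NonNull;
import androidx.core.app.NotificationCompat;
import androidx.work.ForegroundInfo;
import androidx.work.Worker;
import androidx.work.WorkerParameters;

public class LongRunningCompressionWorker extends Worker {

    private static final int NOTIFICATION_ID = 42; // arbitrary id for this sketch

    public LongRunningCompressionWorker(@NonNull Context context,
                                        @NonNull WorkerParameters params) {
        super(context, params);
    }

    @NonNull
    @Override
    public Result doWork() {
        // Signal the OS that this work is long-running and user-visible,
        // so the process is kept alive if possible while it executes.
        setForegroundAsync(new ForegroundInfo(NOTIFICATION_ID, buildNotification()));
        // ... run the long video-compression operation here ...
        return Result.success();
    }

    private Notification buildNotification() {
        // "compress_channel" is a hypothetical notification channel;
        // on API 26+ it must be created elsewhere before use.
        return new NotificationCompat.Builder(getApplicationContext(), "compress_channel")
                .setContentTitle("Compressing video")
                .setSmallIcon(android.R.drawable.stat_sys_download)
                .setOngoing(true)
                .build();
    }
}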

JPDA MethodEntryEvent causing app to run very slow

I am trying to capture all method calls made in an Android app. For that I am using JDI to register a MethodEntryRequest for each running thread of the app. This works, but the app becomes very, very slow, so I want to know whether I am doing anything wrong in my implementation. In the code below I first register a ClassPrepareRequest to catch the loading of each class by the app process, and in its handler I register a MethodEntryRequest with a thread filter for the thread that caused the class to load.
if (!traceMap.keySet().contains(event.thread())) {
    EventRequestManager mgr = vm.eventRequestManager();
    MethodEntryRequest menr = mgr.createMethodEntryRequest();
    menr.setSuspendPolicy(EventRequest.SUSPEND_NONE);
    menr.addThreadFilter(event.thread());
    menr.enable();
}
The code for registering the ClassPrepareRequest is:
ClassPrepareRequest cpr = mgr.createClassPrepareRequest();
cpr.addClassFilter("com.example.*");
cpr.setSuspendPolicy(EventRequest.SUSPEND_NONE);
cpr.enable();

Spark: Job restart and retries

Suppose you have Spark with the Standalone cluster manager. You open a Spark session with some configuration and want to launch SomeSparkJob 40 times in parallel with different arguments.
Questions
How do I set the number of retries on job failure?
How do I restart jobs programmatically on failure? This could be useful if jobs fail due to a lack of resources; I could then launch, one by one, all the jobs that need extra resources.
How do I restart the Spark application on job failure? This could be useful if a job lacks resources even when launched on its own: to change cores, CPU, etc., I would need to relaunch the application in the Standalone cluster manager.
My workarounds
1) I'm pretty sure the first point is possible, since it is possible in Spark local mode. I just don't know how to do it in standalone mode.
2-3) It's possible to hang a listener on the Spark context, as in spark.sparkContext().addSparkListener(new SparkListener() {, but SparkListener seems to lack failure callbacks.
There is also a bunch of methods with very poor documentation. I've never used them, but perhaps they could help solve my problem:
spark.sparkContext().dagScheduler().runJob();
spark.sparkContext().runJob()
spark.sparkContext().submitJob()
spark.sparkContext().taskScheduler().submitTasks();
spark.sparkContext().dagScheduler().handleJobCancellation();
spark.sparkContext().statusTracker()
You can use SparkLauncher and control the flow.
import org.apache.spark.launcher.SparkLauncher;

public class MyLauncher {
    public static void main(String[] args) throws Exception {
        Process spark = new SparkLauncher()
                .setAppResource("/my/app.jar")
                .setMainClass("my.spark.app.Main")
                .setMaster("local")
                .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
                .launch();
        spark.waitFor();
    }
}
See API for more details.
Since it creates a Process, you can check the process status and retry, e.g. using:
public boolean isAlive()
If the process is not alive, you can start it again; see the API for more details.
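A minimal retry loop along those lines might look like this; the jar path, main class, and retry budget are placeholders:

import org.apache.spark.launcher.SparkLauncher;

public class RetryingLauncher {
    public static void main(String[] args) throws Exception {
        final int maxAttempts = 3; // arbitrary retry budget for this sketch
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Process spark = new SparkLauncher()
                    .setAppResource("/my/app.jar")
                    .setMainClass("my.spark.app.Main")
                    .setMaster("local")
                    .launch();
            int exitCode = spark.waitFor(); // blocks until the job process ends
            if (exitCode == 0) {
                System.out.println("Job succeeded on attempt " + attempt);
                return;
            }
            System.out.println("Job failed (exit " + exitCode + "), retrying...");
        }
        System.err.println("Job failed after " + maxAttempts + " attempts.");
    }
}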
Hopefully this gives a high-level idea of how to achieve what you described in your question. There could be more ways to do the same thing, but I thought I'd share this approach.
Cheers!
Check your spark.sql.broadcastTimeout and spark.broadcast.blockSize properties and try increasing them.

Timer causing issues on Tomcat server

I want to run a method every 22 minutes on a servlet running on Tomcat. So I'm using this code:
Timer timer = new Timer();
timer.schedule(new TimerTask() {
    @Override
    public void run() {
        try {
            update();
        } catch (SQLException | NamingException | InterruptedException e1) {
            e1.printStackTrace();
        }
    }
}, 22 * 60 * 1000, 22 * 60 * 1000);
I've got a feeling this is a terrible way to do it, as I'm getting timer-related errors all the time, and whenever I upload a new version of the servlet I don't think the previous timer gets stopped. I then get database connection warnings. If I reset everything and start fresh, it's fine.
javax.naming.NameNotFoundException: Name comp is not bound in this Context
at org.apache.naming.NamingContext.lookup(NamingContext.java:770)
at org.apache.naming.NamingContext.lookup(NamingContext.java:153)
at org.apache.naming.SelectorContext.lookup(SelectorContext.java:152)
at javax.naming.InitialContext.lookup(InitialContext.java:411)
at my.app.database.DatabaseManager.connect(DatabaseManager.java:44)
at my.app.database.DatabaseManager.returnInfo(DatabaseManager.java:133)
at my.app.genParse.Generate.updateHistory(Generate.java:89)
at my.app.MyServer$1.run(MyServer.java:52)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
And also on re-launch with the new version:
SEVERE: The web application [/myApp] appears to have started a thread named [Timer-7] but has failed to stop it. This is very likely to create a memory leak.
What's a better way to achieve this, or to avoid the problem?
An alternative to using timer tasks has been discussed here. Using the ScheduledThreadPoolExecutor class MAY help with the problem of multiple threads still running when you upload a new version, though I cannot say for sure that it will. This class is also preferred over Timer for various reasons discussed in the javadoc.
You shouldn't really use a Timer within a container: Timer spawns/reuses a thread that is managed outside of the container, and this can cause problems.
Note too that since Timer uses a single thread, if one task takes too long, the accuracy of the others can suffer.
Finally, if a TimerTask throws an unchecked exception, the Timer thread is terminated.
For these (and other) reasons, Timer has largely fallen out of favour; ScheduledThreadPoolExecutor is a better choice.
Again, though, user-managed threads within a container can be tricky. JSR-236 (Concurrency Utilities for the Java EE platform) will provide a mechanism for this in the future, but for now it's best avoided.
Alternatively, you could schedule repeatable tasks via cron, which could in turn call a dedicated servlet (or similar) on a regular basis.
Use a ServletContextListener to start and stop your Timer when the web application starts and stops.
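Combining the two suggestions above, a minimal sketch might look like the following; the listener class name is hypothetical, and update() stands in for the questioner's method:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class UpdateSchedulerListener implements ServletContextListener {

    private ScheduledExecutorService scheduler;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run every 22 minutes, starting 22 minutes after deployment.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                update();
            } catch (Exception e) {
                // Log and swallow, so one failure doesn't kill the schedule.
                e.printStackTrace();
            }
        }, 22, 22, TimeUnit.MINUTES);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stop the scheduler thread on undeploy; this is what prevents the
        // "appears to have started a thread ... failed to stop it" warning.
        scheduler.shutdownNow();
    }

    // Placeholder for the questioner's update() logic.
    private void update() throws Exception {
    }
}

The listener is registered in web.xml with a <listener> element (or with @WebListener on Servlet 3.0+), so it runs exactly once per deployment and cleans up on every undeploy.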

Scheduled processes running twice simultaneously in Openbravo (using Quartz)

I'm not quite sure whether this is more of an Openbravo issue or a Quartz issue, but we have some manual processes that run on schedules via Openbravo ProcessRequest objects (OB v2.50MP24), and it seems the processes are running twice, at exactly the same time. Openbravo extends the Quartz platform for its scheduling. I've tried to resolve this issue on my own by making my process classes extend this class:
import java.util.List;

import org.openbravo.dal.service.OBDal;
import org.openbravo.model.ad.ui.ProcessRequest;
import org.openbravo.scheduling.ProcessBundle;
import org.openbravo.service.db.DalBaseProcess;

public abstract class RBDDalProcess extends DalBaseProcess {

    @Override
    protected void doExecute(ProcessBundle bundle) throws Exception {
        org.quartz.Scheduler sched = org.openbravo.scheduling.OBScheduler
                .getInstance().getScheduler();
        int runCount = 0;
        synchronized (sched) {
            List<org.quartz.JobExecutionContext> currentlyExecutingJobs = (List<org.quartz.JobExecutionContext>) sched
                    .getCurrentlyExecutingJobs();
            for (org.quartz.JobExecutionContext jec : currentlyExecutingJobs) {
                ProcessRequest processRequest = OBDal.getInstance().get(
                        ProcessRequest.class, jec.getJobDetail().getName());
                if (processRequest == null)
                    continue;
                String processClass = processRequest.getProcess()
                        .getJavaClassName();
                if (bundle.getProcessClass().getCanonicalName()
                        .equals(processClass)) {
                    runCount++;
                }
            }
        }
        if (runCount > 1) {
            System.out.println("Process "
                    + bundle.getProcessClass().getSimpleName()
                    + " is already running. Cancelling.");
            return;
        }
        doRun(bundle);
    }

    protected abstract void doRun(ProcessBundle bundle);
}
This worked fine when I tested it by requesting the process to run immediately, twice at the same time: one of them cancelled. However, it does not work for the scheduled processes. I have System.out.println calls set up to log when the processes start, and the logs show each line of output twice, one line right after the other.
I have a sneaking suspicion that the processes are running in two completely different threads that don't know about each other's processes; however, I'm not sure how to verify that suspicion or, if I am right, what to do about it. I've already verified that there is only one instance of each ProcessRequest object stored in the database.
Has anyone else experienced this, know why they might be running twice, or know what I can do to prevent them from simultaneously running?
The most common reasons for a double Job execution are the following:
EDITED:
Your application is deployed in a clustered environment, but you have not configured Quartz to run in clustered mode (see the configuration sketch after this list).
Your application is deployed more than once. There are many cases where an application is deployed twice, especially on a Tomcat server. As a consequence, the QuartzInitializerListener is invoked twice and the jobs are executed twice. If you use Tomcat and define contexts explicitly in server.xml, you should turn off automatic application deployment or specify deployIgnore. Having both autoDeploy set to true and a context element in server.xml results in the application being deployed twice. Set autoDeploy to false or remove the context element from server.xml.
Your application has been redeployed without unscheduling its existing jobs.
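For the first case, Quartz clustering is enabled through its JDBC job store. A minimal quartz.properties sketch, assuming a shared database; the scheduler name and data source name are placeholders, while the property keys are standard Quartz settings:

# Persist jobs in a shared database so that all nodes coordinate
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS

# Clustering: the same trigger then fires on only one node
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000

# Every node needs a unique instance id; AUTO generates one
org.quartz.scheduler.instanceName = MyClusteredScheduler
org.quartz.scheduler.instanceId = AUTO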
I hope this helps you.
Quartz uses a thread pool for job execution. So, as you suspect, the RBDDalProcess will probably have separate instances in separate threads, and the counter check will fail.
One thing you can do is list the jobs registered in the Scheduler (you can get the Scheduler through the OB API as OBScheduler.getScheduler()):
// enumerate each job group
for (String group : sched.getJobGroupNames()) {
    // enumerate each job in the group; groupEquals is a static import
    // of org.quartz.impl.matchers.GroupMatcher.groupEquals (Quartz 2.x)
    for (JobKey jobKey : sched.getJobKeys(groupEquals(group))) {
        System.out.println("Found job identified by: " + jobKey);
    }
}
If you see the same job added twice, check out org.quartz.spi.JobFactory and the org.quartz.Scheduler.setJobFactory method for controlling how jobs are instantiated.
Also make sure you have only one entry for this process in the 'Report and Process' table in Openbravo.
I have used DalBaseProcess in Openbravo 3.0 and I cannot confirm the behavior you're describing. With that in mind, it would probably be a good idea to check the reported bugs for Openbravo v2.50MP24 and Quartz, or to post a thread in the Openbravo Forge forums about your problem.
