Does Vert.x have any overhead for deployed verticles? Is there any reason to undeploy them once they are no longer needed?
Please look at MyVerticle below - its only purpose is to load data when the app launches; after that load the verticle is no longer needed. Is it sufficient to call consumer.unregister(), or are there reasons to undeploy MyVerticle?
public class MyVerticle extends AbstractVerticle {
private MessageConsumer consumer;
@Override
public void start() {
consumer = vertx.eventBus().consumer(AppConstants.SOME_ADDRESS, this::load);
}
public void load(Message message) {
LocalMap<Short, String> map = vertx.sharedData().getLocalMap(AppConstants.MAP_NAME);
try (
DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(AppConstants.INDEX_PATH)))
) {
while (true) {
map.put(
in.readShort(),
in.readUTF()
);
}
} catch(EOFException eof) {
message.reply(AppConstants.SUCCESS);
} catch (IOException ioe) {
message.fail(100, "Failed to load index into memory");
throw new RuntimeException("There is no recovery policy", ioe);
} finally {
// is this sufficient, or do I need to undeploy the verticle programmatically?
consumer.unregister();
}
}
}
Verticles can be seen as applications running on Vert.x. Deploying/undeploying only has a small overhead if, for example, you're running with high availability or failover; in that case Vert.x needs to keep track of deployed instances, monitor for failures, re-spawn verticles on other nodes, and so on.
Undeploying also allows you to perform any clean-up (although you're not using it in your example) via the stop() method.
When not running with HA in mind, undeploying only lets you recover any memory that was allocated by your verticle but is no longer referenced (plus the memory used to keep internal track of deployed verticles, which should be negligible - a single object reference).
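For illustration, here is a minimal sketch (mirroring the code from the question, imports omitted) of undeploying the verticle programmatically once the one-off load has finished. deploymentID() and vertx.undeploy() are standard Vert.x APIs, and stop() is the natural place for clean-up such as unregistering the consumer:
public class MyVerticle extends AbstractVerticle {

    private MessageConsumer<Object> consumer;

    @Override
    public void start() {
        consumer = vertx.eventBus().consumer(AppConstants.SOME_ADDRESS, this::load);
    }

    private void load(Message<Object> message) {
        // ... read the index into the shared LocalMap as in the question ...
        message.reply(AppConstants.SUCCESS);
        // Once the one-off work is done, undeploy this verticle; Vert.x will call stop().
        vertx.undeploy(deploymentID());
    }

    @Override
    public void stop() {
        // Clean-up hook invoked on undeploy.
        if (consumer != null) {
            consumer.unregister();
        }
    }
}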
I am working on an application where I continuously read data from a Kafka topic. This data comes in String format, which I then write to an XML file and store on disk. The data arrives at random times and mostly comes in bulk, in quick succession.
To write these files, I am using an Executor Service.
ExecutorService executor = Executors.newFixedThreadPool(4);
/*
called multiple times in quick succession
*/
public void writeContent(String message) {
try {
executor.execute(new FileWriterTask(message));
} catch(Exception e) {
executor.shutdownNow();
e.printStackTrace();
}
}
private class FileWriterTask implements Runnable{
String data;
FileWriterTask(String content){
this.data = content;
}
@Override
public void run() {
try {
String fileName = UUID.randomUUID().toString();
File file = new File("custom path" + fileName + ".xml");
FileUtils.writeStringToFile(file, data, Charset.forName("UTF-8"));
} catch (IOException e) {
e.printStackTrace();
}
}
}
I want to know when I should shut down my executor service. If my application were time-bound, I would use awaitTermination on my executor instance, but my app is supposed to run continuously.
If my whole app is killed because of some exception, would that automatically shut down my executor?
Or should I catch an unchecked exception and shut down my executor, as I have done above in my code?
Can I choose not to shut down my executor explicitly in my code? What are my options?
EDIT: Since my class was a @RestController class, I used the following way to shut down my executor service:
@PreDestroy
private void destroy() {
    if (executor != null) {
        executor.shutdownNow();
        System.out.println("executor.isShutdown() = " + executor.isShutdown());
        System.out.println("executor.isTerminated() = " + executor.isTerminated());
    }
}
It is good practice to shut down your ExecutorService. There are two types of shutdown that you should be aware of: shutdown() and shutdownNow().
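As a rough sketch (adapting the @PreDestroy hook from your edit), the usual pattern is to try a graceful shutdown() first and only fall back to shutdownNow() after a timeout; the 30-second grace period here is arbitrary:
@PreDestroy
private void destroy() {
    executor.shutdown();                      // stop accepting new tasks, let queued ones finish
    try {
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
            executor.shutdownNow();           // force-cancel whatever is still running
        }
    } catch (InterruptedException e) {
        executor.shutdownNow();
        Thread.currentThread().interrupt();   // preserve the interrupt status
    }
}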
If you're running your application on an application server with Java EE, you can also use a ManagedExecutorService, which is managed by the framework and will be shut down automatically.
My Tomcat web service uses real-time developer notifications for Android, which require Google Cloud Pub/Sub. It works flawlessly and all notifications are received immediately. The only problem is that it uses too much RAM, which causes the machine to respond much more slowly than it should, and the memory is not released after undeploying the application. The service uses HttpServlet (specifically Jersey, with contextInitialized and contextDestroyed methods to set and clear references), and commenting out the pub-sub code decreases memory usage considerably.
Here is the code for subscribing-unsubscribing for Android subscription notifications.
package com.example.webservice;
import com.example.webservice.Log;
import com.google.api.core.ApiService;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.common.collect.Lists;
import com.google.pubsub.v1.ProjectSubscriptionName;
import java.io.FileInputStream;
public class SubscriptionTest
{
// for hiding purposes
private static final String projectId1 = "api-000000000000000-000000";
private static final String subscriptionId1 = "realtime_notifications_subscription";
private static final String TAG = "SubscriptionTest";
private ApiService subscriberService;
private MessageReceiver receiver;
// Called when "contextInitialized" is called.
public void initializeSubscription()
{
Log.w(TAG, "Initializing subscriptions...");
try
{
GoogleCredentials credentials1 = GoogleCredentials.fromStream(new FileInputStream("googlekeys/apikey.json"))
.createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
ProjectSubscriptionName subscriptionName1 = ProjectSubscriptionName.of(projectId1, subscriptionId1);
// Instantiate an asynchronous message receiver
receiver =
(message, consumer) ->
{
consumer.ack();
// do processing
};
// Create a subscriber for "my-subscription-id" bound to the message receiver
Subscriber subscriber1 = Subscriber.newBuilder(subscriptionName1, receiver)
.setCredentialsProvider(FixedCredentialsProvider.create(credentials1))
.build();
subscriberService = subscriber1.startAsync();
}
catch (Throwable e)
{
Log.e(TAG, "Exception while initializing async message receiver.", e);
return;
}
Log.w(TAG, "Subscription initialized. Messages should come now.");
}
// Called when "contextDestroyed" is called.
public void removeSubscription()
{
if (subscriberService != null)
{
subscriberService.stopAsync();
Log.i(TAG, "Awaiting subscriber termination...");
subscriberService.awaitTerminated();
Log.i(TAG, "Subscriber termination done.");
}
subscriberService = null;
receiver = null;
}
}
And this is the statement after the application is undeployed. (Names may not match but it is not important)
org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application
[example] created a ThreadLocal with key of type [java.lang.ThreadLocal]
(value [java.lang.ThreadLocal@2cb2fc20]) and a value of type
[io.grpc.netty.shaded.io.netty.util.internal.InternalThreadLocalMap]
(value [io.grpc.netty.shaded.io.netty.util.internal.InternalThreadLocalMap@4f4c4b1a])
but failed to remove it when the web application was stopped.
Threads are going to be renewed over time to try and avoid a probable memory leak.
From what I've observed, Netty creates a static ThreadLocal with a strong reference to an InternalThreadLocalMap value, which seems to be what causes this message to appear. I've tried to clear it with code like the following (probably overkill, but none of the answers have worked for me so far and this doesn't seem to work either):
InternalThreadLocalMap.destroy();
FastThreadLocal.destroy();
for (Thread thread : Thread.getAllStackTraces().keySet())
{
if (thread instanceof FastThreadLocalThread)
{
// Handle the memory leak that netty causes.
InternalThreadLocalMap map = ((FastThreadLocalThread) thread).threadLocalMap();
if (map == null)
continue;
for (int i = 0; i < map.size(); i++)
map.setIndexedVariable(i, null);
((FastThreadLocalThread) thread).setThreadLocalMap(null);
}
}
After the undeploy (or stop-start), Tomcat detects a memory leak if I click "Find leaks" (obviously). The problem is that the RAM and CPU that were in use are not released, because apparently the subscription is not closed properly. Re-deploying the app increases the used RAM further on every cycle: if it uses 200 MB at first, after the second deploy it grows to 400, then 600, then 800, and so on until the machine slows down enough to die.
It is a serious issue and I have no idea how to solve it. The stop methods are called as defined, and awaitTerminated is also called and returns immediately (meaning the receiver really has stopped listening), but the RAM behind it is not released.
So far I've only seen questions about the Python clients (ref 1, ref 2); nobody seems to mention the Java client, and I'm starting to lose hope in this setup.
I've opened an issue about this problem as well.
What should I do to resolve this issue? Any help is appreciated, thank you very much.
I don't know if it will fully fix your issue, but you appear to be leaking some memory by not closing the FileInputStream.
The first option is to extract the FileInputStream into a variable and call the close() method on it after you are done reading the content.
A second (and better) option for working with this kind of stream is to use try-with-resources. Since FileInputStream implements the AutoCloseable interface, it will be closed automatically when exiting the try-with-resources block.
Example:
try (FileInputStream stream = new FileInputStream("googlekeys/apikey.json")) {
GoogleCredentials credentials1 = GoogleCredentials.fromStream(stream)
.createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
// ...
} catch (Exception e) {
Log.e(TAG, "Exception while initializing async message receiver.", e);
return;
}
I have some service that both consumes from an inbound queue and produces to some outbound queue (where another thread, created by this service, picks up the messages and "transports" them to their destination).
Currently I use two plain Threads as seen in the code below, but I know that in general you should not use them directly anymore and should instead use higher-level abstractions like ExecutorService.
Would this make sense in my case? More specifically:
would it reduce code?
make the code more robust in case of failure?
allow for smoother thread termination? (which is helpful when running tests)
Am I missing something important here? (maybe some other classes from java.util.concurrent)
// called on service startup
private void init() {
// prepare everything here
startInboundWorkerThread();
startOutboundTransporterWorkerThread();
}
private void startInboundWorkerThread() {
InboundWorkerThread runnable = injector.getInstance(InboundWorkerThread.class);
inboundWorkerThread = new Thread(runnable, ownServiceIdentifier);
inboundWorkerThread.start();
}
// this is the Runnable for the InboundWorkerThread
// the runnable for the transporter thread looks almost the same
@Override
public void run() {
while (true) {
InboundMessage message = null;
TransactionStatus transaction = null;
try {
try {
transaction = txManager.getTransaction(new DefaultTransactionDefinition());
} catch (Exception ex) {
// logging
break;
}
// blocking consumer
message = repository.takeOrdered(template, MESSAGE_POLL_TIMEOUT_MILLIS);
if (message != null) {
handleMessage(message);
commitTransaction(message, transaction);
} else {
commitTransaction(transaction);
}
} catch (Exception e) {
// logging
rollback(transaction);
} catch (Throwable e) {
// logging
rollback(transaction);
throw e;
}
if (Thread.interrupted()) {
// logging
break;
}
}
// logging
}
// called when service is shutdown
// both inbound worker thread and transporter worker thread must be terminated
private void interruptAndJoinWorkerThread(final Thread workerThread) {
if (workerThread != null && workerThread.isAlive()) {
workerThread.interrupt();
try {
workerThread.join(TimeUnit.SECONDS.toMillis(1));
} catch (InterruptedException e) {
// logging
}
}
}
The main benefit for me in using thread pools comes from structuring the work into single, independent and usually short jobs, and from the better abstraction of threads as a thread pool's private workers. Sometimes you may want more direct access to those, to find out whether they are still running and so on, but there are usually better, job-centric ways to do that.
As for handling failures, you may want to supply your own ThreadFactory to create threads with a custom UncaughtExceptionHandler, and in general your Runnable jobs should provide good exception handling too, in order to log more information about the specific job that failed.
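For example, a small sketch of such a factory (names are illustrative); note that the UncaughtExceptionHandler only sees exceptions from tasks started with execute(), because submit() captures them in the returned Future:
ThreadFactory loggingFactory = runnable -> {
    Thread t = new Thread(runnable);
    // Log anything that escapes a job started via execute().
    t.setUncaughtExceptionHandler((thread, throwable) ->
            System.err.println("Uncaught exception in " + thread.getName() + ": " + throwable));
    return t;
};
ExecutorService pool = Executors.newFixedThreadPool(2, loggingFactory);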
Make those jobs non-blocking, since you don't want to fill up your ThreadPool with blocked workers. Move blocking operations before the job is queued.
Normally, shutdown and shutdownNow as provided by ExecutorService, combined with proper interrupt handling in your jobs, will allow for smooth job termination.
A program that I've developed is crashing the JVM occasionally due to this bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8029516. Unfortunately the bug has not been resolved by Oracle and the bug report says that there are no known workarounds.
I've tried to modify the example code from the bug report by calling .register(sWatchService, eventKinds) in the KeyWatcher thread instead, by adding all pending register request to a list that I loop through in the KeyWatcher thread but it's still crashing. I'm guessing this just had the same effect as synchronizing on sWatchService (like the submitter of the bug report tried).
Can you think of any way to get around this?
From comments:
It appears that we have an issue with I/O cancellation when there is a pending ReadDirectoryChangesW outstanding.
The statement and example code indicate that the bug is triggered when:
There is a pending event that has not been consumed (it may or may not be visible to WatchService.poll() or WatchService.take())
WatchKey.cancel() is called on the key
This is a nasty bug with no universal workaround; the approach depends on the specifics of your application. Consider pooling watches in a single place so you don't need to call WatchKey.cancel(). If at some point the pool becomes too large, close the entire WatchService and start over. Something similar to this:
public class FileWatcherService {
static Kind<?>[] allEvents = new Kind<?>[] {
StandardWatchEventKinds.ENTRY_CREATE,
StandardWatchEventKinds.ENTRY_DELETE,
StandardWatchEventKinds.ENTRY_MODIFY
};
WatchService ws;
// Keep track of paths and registered listeners
Map<String, List<FileChangeListener>> listeners = new ConcurrentHashMap<String, List<FileChangeListener>>();
Map<WatchKey, String> keys = new ConcurrentHashMap<WatchKey, String>();
boolean toStop = false;
public interface FileChangeListener {
void onChange();
}
public void addFileChangeListener(String path, FileChangeListener l) throws IOException {
if(!listeners.containsKey(path)) {
listeners.put(path, new ArrayList<FileChangeListener>());
keys.put(Paths.get(path).register(ws, allEvents), path);
}
listeners.get(path).add(l);
}
public void removeFileChangeListener(String path, FileChangeListener l) {
if(listeners.containsKey(path))
listeners.get(path).remove(l);
}
public void start() throws IOException {
    ws = FileSystems.getDefault().newWatchService();
    new Thread(new Runnable() {
        public void run() {
            while (!toStop) {
                try {
                    WatchKey key = ws.take();
                    for (FileChangeListener l : listeners.get(keys.get(key)))
                        l.onChange();
                    key.reset(); // re-arm the key so further events are delivered
                } catch (InterruptedException | ClosedWatchServiceException e) {
                    return; // stop() closes the service; exit the loop
                }
            }
        }
    }).start();
}
public void stop() throws IOException {
    toStop = true;
    ws.close();
}
}
I've managed to create a workaround though it's somewhat ugly.
The bug is in the JDK method WindowsWatchKey.invalidate(), which releases the native buffer while subsequent calls may still access it. This one-liner fix delays the buffer clean-up until GC.
Here is a compiled patch to the JDK. To apply it, add the following Java command-line flag:
-Xbootclasspath/p:jdk-8029516-patch.jar
If patching JDK is not an option in your case, there is still a workaround on the application level. It relies on the knowledge of Windows WatchService internal implementation.
public class JDK_8029516 {
private static final Field bufferField = getField("sun.nio.fs.WindowsWatchService$WindowsWatchKey", "buffer");
private static final Field cleanerField = getField("sun.nio.fs.NativeBuffer", "cleaner");
private static final Cleaner dummyCleaner = Cleaner.create(Thread.class, new Thread());
private static Field getField(String className, String fieldName) {
try {
Field f = Class.forName(className).getDeclaredField(fieldName);
f.setAccessible(true);
return f;
} catch (Exception e) {
throw new IllegalStateException(e);
}
}
public static void patch(WatchKey key) {
try {
cleanerField.set(bufferField.get(key), dummyCleaner);
} catch (IllegalAccessException e) {
throw new IllegalStateException(e);
}
}
}
Call JDK_8029516.patch(watchKey) right after the key is registered, and it will prevent watchKey.cancel() from releasing the native buffer prematurely.
You might not be able to work around the problem itself, but you could deal with the error and handle it. I don't know your specific situation, but I imagine the biggest issue is the crash of the whole JVM. Putting everything in a try block does not help, because you cannot catch a JVM crash.
Not knowing more about your project makes it difficult to suggest a good/acceptable solution, but maybe this could be an option: do all the file-watching work in a separate JVM process. From your main process, start a new JVM (e.g. using ProcessBuilder.start()). When the process terminates (i.e. the newly started JVM crashes), restart it. Obviously you need to be able to recover, i.e. you need to keep track of which files to watch, and you need to keep this data in your main process too.
Now the biggest remaining part is to implement some communication between the main process and the file-watching process. This could be done using the standard input/output of the file-watching process, or using a Socket/ServerSocket, or some other mechanism.
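As a rough sketch of that idea (FileWatcherMain is a hypothetical entry point for the watcher JVM, and running is a flag owned by the main process):
// Supervision loop in the main process: restart the watcher JVM whenever it dies.
private void superviseWatcher() throws IOException, InterruptedException {
    while (running) {
        Process watcher = new ProcessBuilder(
                "java", "-cp", System.getProperty("java.class.path"), "FileWatcherMain")
                .inheritIO()                 // or redirect streams to exchange commands
                .start();
        int exitCode = watcher.waitFor();    // blocks until the watcher JVM exits/crashes
        // log exitCode, then re-send the list of files to watch to the new process
    }
}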
I have a dedicated server running CentOS 5.9 and Apache Tomcat 5.5.36. I have written a Java web application which runs every minute to collect data from multiple sensors. I am using a ScheduledExecutorService to execute the threads (one thread for each sensor every minute, and there can be more than a hundred sensors). The flow of each thread is:
Collect sensor information from the database.
Send the command to the instrument to collect data.
Update the database with the data values.
There is another application that checks the database every minute and sends alerts to the users (if necessary). I have monitored the application using jVisualVM and I can't find any memory leak in any thread. The applications work fine, but after some time (24-48 hours) they stop working. I can't figure out what the problem could be: is it a server configuration problem, too many threads, or something else?
Does anyone have any idea what might be going wrong, or has anyone done this kind of work before? Please help, thanks.
UPDATE: including code
public class Scheduler {
private final ScheduledExecutorService scheduler =
Executors.newScheduledThreadPool(1);
public void startProcess(int start) {
final Runnable uploader = new Runnable() {
@SuppressWarnings("rawtypes")
public void run()
{
//Select data from the database
ArrayList dataList = getData();
for(int i=0;i<dataList.size();i++)
{
String args = dataList.get(i).toString();
ExecutorThread comThread = new ExecutorThread(args...);
comThread.start();
}
}
};
scheduler.scheduleAtFixedRate(uploader, 0, 60 , TimeUnit.SECONDS);
}
}
public class ExecutorThread extends Thread {
private variables...
public ExecutorThread(args..)
{
//Initialise private variable
}
public void run()
{
//Collect data from sensor
//Update Database
}
}
Can't say much without seeing the code, but you need to be sure that your thread always exits properly: it must not hang in memory on an exception, it must close its database connection, and so on.
Also, for monitoring your application, you can take a thread dump every so often to see how many threads the application generates.
Another suggestion is to configure Tomcat to take a heap dump on OutOfMemoryError. If that's the issue, you'll be able to analyze what is filling up the memory.
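For reference, the usual HotSpot flags for this are -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath, typically added to CATALINA_OPTS (the dump path below is just an example):
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps"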
Take heed of this innocuous line from the ScheduledExecutorService.schedule... Javadoc
If any execution of the task encounters an exception, subsequent executions are suppressed.
This means that if you are running into an Exception at some point and not handling it, the Exception will propagate into the ScheduledExecutorService and it will kill your task.
To avoid this problem you need to make sure the entire Runnable is wrapped in a try...catch and that exceptions can never go unhandled.
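A minimal sketch of that, applied to the uploader Runnable from the question:
final Runnable uploader = new Runnable() {
    public void run() {
        try {
            // ... select the data and start the per-sensor work as before ...
        } catch (Exception e) {
            // Log and swallow: an exception that escapes run() would silently
            // suppress every future execution scheduled by scheduleAtFixedRate.
        }
    }
};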
You can also extend ScheduledThreadPoolExecutor (as also mentioned in the Javadoc) to handle uncaught exceptions:
final ScheduledExecutorService ses = new ScheduledThreadPoolExecutor(10){
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
if (t == null && r instanceof Future<?>) {
try {
Object result = ((Future<?>) r).get();
} catch (CancellationException ce) {
t = ce;
} catch (ExecutionException ee) {
t = ee.getCause();
} catch (InterruptedException ie) {
Thread.currentThread().interrupt(); // ignore/reset
}
}
if (t != null) {
System.out.println(t);
}
}
};
Here the afterExecute method simply prints the Throwable via System.out.println, but it could do other things as well: alert users, restart tasks, etc.