I'm working with a Jersey server and want to be sure that when the server is terminated (usually via SIGTERM), all currently running requests are completed gracefully. If volume is high enough, there will likely be data loss if I don't do this.
So I'm trying to call HttpServer.shutdown() from a runtime shutdown hook.
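A simplified sketch of the kind of hook I mean (not my exact code; server here stands for the started HttpServer):
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        try {
            // Graceful shutdown: returns a GrizzlyFuture<HttpServer>; block until it completes
            server.shutdown().get();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
});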
I think it is working correctly, except for one problem: CompletionHandler.failed() is invoked with an InterruptedException. shutdownNow is in the stack trace, so it seems like there is some logic error occurring after the shutdown itself has finished:
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1302)
at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerGet(SafeFutureImpl.java:354)
at org.glassfish.grizzly.impl.SafeFutureImpl.get(SafeFutureImpl.java:265)
at org.glassfish.grizzly.impl.SafeFutureImpl.notifyCompletionHandlers(SafeFutureImpl.java:181)
at org.glassfish.grizzly.impl.SafeFutureImpl.done(SafeFutureImpl.java:287)
at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerSet(SafeFutureImpl.java:383)
at org.glassfish.grizzly.impl.SafeFutureImpl.result(SafeFutureImpl.java:112)
at org.glassfish.grizzly.http.server.HttpServer.shutdownNow(HttpServer.java:458)
at org.glassfish.grizzly.http.server.HttpServer$1.completed(HttpServer.java:384)
at org.glassfish.grizzly.http.server.HttpServer$1.completed(HttpServer.java:376)
at org.glassfish.grizzly.impl.SafeFutureImpl.notifyCompletionHandlers(SafeFutureImpl.java:199)
at org.glassfish.grizzly.impl.SafeFutureImpl.done(SafeFutureImpl.java:287)
at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerSet(SafeFutureImpl.java:383)
at org.glassfish.grizzly.impl.SafeFutureImpl.result(SafeFutureImpl.java:112)
at org.glassfish.grizzly.http.server.NetworkListener$1$1.completed(NetworkListener.java:698)
at org.glassfish.grizzly.http.server.NetworkListener$1$1.completed(NetworkListener.java:693)
at org.glassfish.grizzly.http.server.HttpServerFilter.prepareForShutdown(HttpServerFilter.java:328)
at org.glassfish.grizzly.http.server.NetworkListener$1.shutdownRequested(NetworkListener.java:704)
at org.glassfish.grizzly.nio.GracefulShutdownRunner.run(GracefulShutdownRunner.java:93)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
I boiled it down to this test case, which does not use Jersey or a shutdown hook. It's very simple, and the same exception occurs (in fact, the trace above is copied from the output of this program):
import java.io.IOException;

import org.glassfish.grizzly.EmptyCompletionHandler;
import org.glassfish.grizzly.http.server.HttpServer;

public class Server {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.createSimpleServer();
        server.start();
        try { Thread.sleep(5000); } catch (InterruptedException ex) {}
        shutdown(server);
    }

    public static void shutdown(HttpServer server) {
        final boolean[] done = {false};
        server.shutdown().addCompletionHandler(new EmptyCompletionHandler<HttpServer>() {
            @Override
            public void completed(HttpServer arg) {
                System.out.println("Shutdown completed");
                done[0] = true;
            }

            @Override
            public void failed(Throwable error) {
                System.out.println("Shutdown failed");
                error.printStackTrace(System.out);
                done[0] = true;
            }
        });
        while (!done[0]) {
            try { Thread.sleep(100); } catch (InterruptedException ex) {}
        }
        System.out.println("Goodbye");
    }
}
Where's the bug?
Related
I use Timer.schedule() to periodically call the run() method of a TimerTask to poll devices. Sometimes a MalformedJsonException or an IllegalStateException is thrown, which is handled in the catch block. The thread should continue to poll devices after handling the exception, but it stops.
When there are no errors, the run() method is called periodically as expected.
I also tried calling the runModulesPoll() method from the catch block, but that didn't help.
private static void runModulesPoll(Boiler boiler) {
    new Timer("Modules Poll Flow").schedule(new TimerTask() {
        @Override
        public void run() {
            try {
                Module[] modules = boiler.getCore().getModules();
                for (Module module : modules) {
                    String response = ControllersService.sendMessage(
                            MessageBuilder.buildDataRequest(module.getAlias(), boiler.getBoilerMode()));
                    if (AppUtils.isStringInvalid(response)) {
                        module.setOnline(false);
                        ModulesResetService.reset();
                        continue;
                    }
                    module.setOnline(true);
                    module.fromShortJson(response);
                }
                MqttService.publishMessage(MqttMessageFactory.createDataMessage(boiler.getCore().getModulesDataAsJson()));
            } catch (Throwable e) {
                LoggerLocal.error("Exception in Modules Poll Flow: " + e.getLocalizedMessage());
                e.printStackTrace();
            }
        }
    }, 0, 1);
}
According to the logs, the exception is handled as expected, but the thread does not continue polling.
14-11-2019 18:10:47 -- Exception in Modules Poll Flow: Not a JSON Object: "hgjhgjhg"
java.lang.IllegalStateException: Not a JSON Object: "hgjhgjhg"
at com.google.gson.JsonElement.getAsJsonObject(JsonElement.java:90)
at eezo.AppUtils.getJsonObjectFromString(AppUtils.java:101)
at eezo.services.ControllersService.handleIfErrorMessage(ControllersService.java:148)
at eezo.services.ControllersService.sendMessage(ControllersService.java:53)
at eezo.ApplicationRunner$1.run(ApplicationRunner.java:107)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
UPDATE:
The exception is thrown inside the try block and handled in the catch block, but the thread stops without any further exceptions.
LoggerLocal doesn't produce any exceptions at all.
I simulated the situation with a simpler example and everything works as expected: the thread does not die and keeps handling the exceptions.
private static void run(String[] args) {
    final int[] i = {0};
    new Timer("Modules Poll Flow").schedule(new TimerTask() {
        @Override
        public void run() {
            try {
                System.out.println("run() " + args);
                if (i[0] == 5) throw new IllegalStateException("ssss");
                i[0]++;
            } catch (Exception e) {
                System.out.println("Exception in Modules Poll Flow: " + e.getLocalizedMessage());
                e.printStackTrace();
            }
        }
    }, 0, 1);
}
I have started learning Vert.x and am adding endpoints using vertx-web. I have defined my verticle and start my HttpServer from that verticle, as shown below:
public class Bootloader {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(WebVerticle.class.getName());
    }
}

public class WebVerticle extends AbstractVerticle {
    private static final Logger LOGGER = LoggerFactory.getLogger(WebVerticle.class);

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        HttpServer httpServer = vertx.createHttpServer();
        httpServer.requestHandler(getRouter(vertx)::accept).listen(8080);
        LOGGER.info("WebVerticle deployed");
    }

    private Router getRouter(Vertx vertx) {
        Router router = Router.router(vertx);
        registerHandlers(router);
        return router;
    }

    private void registerHandlers(Router router) {
        router.route("/foo").blockingHandler(routingContext -> {
            LOGGER.info("For request having path: /foo");
            LOGGER.info("routingContext = [" + routingContext + "]");
            LOGGER.info("thread = [" + Thread.currentThread().getName() + "]");
            try {
                Thread.currentThread().sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            routingContext.response().end("BAR");
        }, false);
    }
}
With this, when I call the /foo endpoint from different tabs, I see the requests being processed sequentially. Am I missing something from Vert.x concepts that I should understand for scaling my web app? Please guide me on where I am going wrong and how to fix it.
The request is executed on the event-loop thread. When you deploy your verticle without any additional parameters, one instance is deployed on one event loop thread.
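As an aside, you can deploy several instances so that more event-loop threads serve requests; a minimal sketch (the instance count is arbitrary):
// Deploy several instances of the verticle; each gets its own event-loop thread.
vertx.deployVerticle(WebVerticle.class.getName(), new DeploymentOptions().setInstances(4));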
When you invoke
try {
    Thread.currentThread().sleep(10000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
on your request, you block this one event loop for 10 seconds, during which it can not handle any other requests, so don't do that :)
When you want to delay the response, use
vertx.setTimer(10000, l -> {
    // now finish your response
});
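Applied to the /foo route above, a minimal sketch (replacing the blockingHandler; details otherwise as in the question) could look like this:
router.route("/foo").handler(routingContext -> {
    // Schedule a one-shot timer instead of sleeping on the event loop;
    // the handler returns immediately and the response is finished later.
    vertx.setTimer(10000, timerId -> routingContext.response().end("BAR"));
});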
I am using Apache HttpClient 4 to communicate with a REST API and most of the time I do lengthy PUT operations. Since these may happen over an unstable Internet connection I need to detect if the connection is interrupted and possibly need to retry (with a resume request).
To try my routines in the real world, I started a PUT operation and then flipped the Wi-Fi switch of my laptop, causing an immediate and total interruption of any data flow. However, it takes a long time (maybe 5 minutes or so) until a SocketException is eventually thrown.
How can I speed up this process? I'd like to set a timeout of somewhere around 30 seconds.
Update:
To clarify, my request is a PUT operation. So for a very long time (possibly hours) the only operation is a write() operation and there are no read operations. There is a timeout setting for read() operations, but I could not find one for write operations.
I am using my own Entity implementation and thus write directly to an OutputStream, which will block almost immediately once the Internet connection is interrupted. If OutputStream had a timeout parameter, so that I could write out.write(nextChunk, 30000);, I could detect such a problem myself. Actually, I tried that:
import java.io.IOException;
import java.io.OutputStream;

import org.apache.http.HttpEntity;
import org.apache.http.entity.HttpEntityWrapper;

public class TimeoutHttpEntity extends HttpEntityWrapper {

    public TimeoutHttpEntity(HttpEntity wrappedEntity) {
        super(wrappedEntity);
    }

    @Override
    public void writeTo(OutputStream outstream) throws IOException {
        try (TimeoutOutputStreamWrapper wrapper = new TimeoutOutputStreamWrapper(outstream, 30000)) {
            super.writeTo(wrapper);
        }
    }
}
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutOutputStreamWrapper extends OutputStream {

    private final OutputStream delegate;
    private final long timeout;
    private final ExecutorService executorService = Executors.newSingleThreadExecutor();

    public TimeoutOutputStreamWrapper(OutputStream delegate, long timeout) {
        this.delegate = delegate;
        this.timeout = timeout;
    }

    @Override
    public void write(int b) throws IOException {
        executeWithTimeout(() -> {
            delegate.write(b);
            return null;
        });
    }

    @Override
    public void write(byte[] b) throws IOException {
        executeWithTimeout(() -> {
            delegate.write(b);
            return null;
        });
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        executeWithTimeout(() -> {
            delegate.write(b, off, len);
            return null;
        });
    }

    @Override
    public void close() throws IOException {
        try {
            executeWithTimeout(() -> {
                delegate.close();
                return null;
            });
        } finally {
            executorService.shutdown();
        }
    }

    private void executeWithTimeout(final Callable<?> task) throws IOException {
        try {
            executorService.submit(task).get(timeout, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            throw new IOException(e);
        } catch (ExecutionException e) {
            final Throwable cause = e.getCause();
            if (cause instanceof IOException) {
                throw (IOException) cause;
            }
            throw new Error(cause);
        } catch (InterruptedException e) {
            throw new Error(e);
        }
    }
}
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.CountDownLatch;

import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Test;

public class TimeoutOutputStreamWrapperTest {

    private static final byte[] DEMO_ARRAY = new byte[]{1, 2, 3};

    private TimeoutOutputStreamWrapper streamWrapper;
    private OutputStream delegateOutput;

    public void setUp(long timeout) {
        delegateOutput = mock(OutputStream.class);
        streamWrapper = new TimeoutOutputStreamWrapper(delegateOutput, timeout);
    }

    @AfterMethod
    public void teardown() throws Exception {
        streamWrapper.close();
    }

    @Test
    public void write_writesByte() throws Exception {
        // Setup
        setUp(Long.MAX_VALUE);

        // Execution
        streamWrapper.write(DEMO_ARRAY);

        // Evaluation
        verify(delegateOutput).write(DEMO_ARRAY);
    }

    @Test(expectedExceptions = DemoIOException.class)
    public void write_passesThruException() throws Exception {
        // Setup
        setUp(Long.MAX_VALUE);
        doThrow(DemoIOException.class).when(delegateOutput).write(DEMO_ARRAY);

        // Execution
        streamWrapper.write(DEMO_ARRAY);

        // Evaluation performed by expected exception
    }

    @Test(expectedExceptions = IOException.class)
    public void write_throwsIOException_onTimeout() throws Exception {
        // Setup
        final CountDownLatch executionDone = new CountDownLatch(1);
        setUp(100);
        doAnswer(new Answer<Void>() {
            @Override
            public Void answer(InvocationOnMock invocation) throws Throwable {
                executionDone.await();
                return null;
            }
        }).when(delegateOutput).write(DEMO_ARRAY);

        // Execution
        try {
            streamWrapper.write(DEMO_ARRAY);
        } finally {
            executionDone.countDown();
        }

        // Evaluation performed by expected exception
    }

    public static class DemoIOException extends IOException {
    }
}
This is somewhat complicated, but it works quite well in my unit tests. It works in real life as well, except that the HttpRequestExecutor catches the exception in line 127 and tries to close the connection. However, when closing the connection it first tries to flush it, which again blocks.
I might be able to dig deeper into HttpClient and figure out how to prevent this flush operation, but it is already a not-too-pretty solution, and it is just about to get even worse.
UPDATE:
It looks like this can't be done on the Java level. Can I do it on another level? (I am using Linux).
Java blocking I/O does not support socket timeout for write operations. You are entirely at the mercy of the OS / JRE to unblock the thread blocked by the write operation. Moreover, this behavior tends to be OS / JRE specific.
This might be a legitimate case to consider using an HTTP client based on non-blocking I/O (NIO), such as Apache HttpAsyncClient.
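A rough sketch of what that could look like with Apache HttpAsyncClient 4.x (the method name, URL and timeout values below are placeholders, not taken from the question):
static void putWithAsyncClient() throws Exception {
    CloseableHttpAsyncClient client = HttpAsyncClients.custom()
            .setDefaultRequestConfig(RequestConfig.custom()
                    .setConnectTimeout(30000)
                    .setSocketTimeout(30000)
                    .build())
            .build();
    client.start();
    try {
        HttpPut put = new HttpPut("http://example.com/upload");
        // put.setEntity(...): attach the (possibly very large) request body here
        // execute() returns immediately; the transfer is driven by the NIO reactor
        Future<HttpResponse> future = client.execute(put, null);
        HttpResponse response = future.get();
        System.out.println(response.getStatusLine());
    } finally {
        client.close();
    }
}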
You can configure the socket timeout using RequestConfig:
RequestConfig myRequestConfig = RequestConfig.custom()
.setSocketTimeout(5000) // 5 seconds
.build();
Then, when you do the call, just assign your new configuration. For instance:
HttpPut httpPut = new HttpPut("...");
httpPut.setConfig(requestConfig);
...
HttpClientContext context = HttpClientContext.create();
....
httpclient.execute(httpPut, context);
For more information regarding timeout configurations, there is a good explanation here.
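For reference, a minimal sketch of the three common timeout settings in HttpClient 4.3+ (the values are arbitrary):
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(5000)            // establishing the TCP connection
        .setConnectionRequestTimeout(5000)  // leasing a connection from the pool
        .setSocketTimeout(30000)            // max inactivity between packets (applies to reads)
        .build();
CloseableHttpClient httpclient = HttpClients.custom()
        .setDefaultRequestConfig(requestConfig)
        .build();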
Here is one of the links I came across that talks about connection eviction policy: here
public static class IdleConnectionMonitorThread extends Thread {

    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        super();
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000);
                    // Close expired connections
                    connMgr.closeExpiredConnections();
                    // Optionally, close connections
                    // that have been idle longer than 30 sec
                    connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}
I think you might want to look at this.
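For completeness, here is a sketch of how the monitor thread might be wired up (the connection-manager setup is assumed, not taken from the linked post):
PoolingHttpClientConnectionManager connMgr = new PoolingHttpClientConnectionManager();
CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(connMgr)
        .build();

IdleConnectionMonitorThread monitor = new IdleConnectionMonitorThread(connMgr);
monitor.start();

// ... issue requests with httpClient ...

// on application shutdown
monitor.shutdown();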
I'm trying to start a JMXConnectorServer for management and debug purposes. But I don't want this service to prevent application from exiting normally when the last non-daemon thread is terminated.
In other words, I want the following program to terminate immediately:
public class Main {
    public static void main(final String[] args) throws IOException {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL jmxUrl = new JMXServiceURL("rmi", null, 0);
        JMXConnectorServer connectorServer =
                JMXConnectorServerFactory.newJMXConnectorServer(jmxUrl, null, mbs);
        connectorServer.start();
    }
}
I played with a similar issue and wrote this class:
public final class HardDaemonizer extends Thread {

    private final Runnable target;
    private final String newThreadName;

    public HardDaemonizer(Runnable target, String name, String newThreadName) {
        super(name == null ? "Daemonizer" : name);
        setDaemon(true);
        this.target = target;
        this.newThreadName = newThreadName;
    }

    @Override
    public void run() {
        try {
            List<Thread> tb = getSubThreads();
            target.run();
            List<Thread> ta = new java.util.ArrayList<>(getSubThreads());
            ta.removeAll(tb);
            for (Thread thread : ta) {
                thread.setName(newThreadName);
            }
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException ex) {
            Logger.getLogger(HardDaemonizer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    public static Thread daemonize(String daemonizerName, String newThreadName, Runnable target) {
        HardDaemonizer daemonizer = new HardDaemonizer(target, daemonizerName, newThreadName);
        daemonizer.start();
        return daemonizer;
    }

    private static List<Thread> getSubThreads() {
        ThreadGroup group = Thread.currentThread().getThreadGroup().getParent();
        Thread[] threads = new Thread[group.activeCount()];
        group.enumerate(threads);
        return java.util.Arrays.asList(threads);
    }
}
You can use it in this way:
HardDaemonizer.daemonize(null, "ConnectorServer", new Runnable() {
    @Override
    public void run() {
        try {
            connectorServer.start();
        } catch (IOException ex) {
            Logger.getLogger(Ralph.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
});
Be careful - it's tricky!
EDIT
Agh... it's not a solution for you. It hard-daemonizes only the connector thread, and this thread will be killed when the JVM stops. Additionally, you can customize the name of this thread.
Alternatively, you can add a completed flag and sleep in a loop in the daemonize method until the connector server has started up.
SIMPLIFIED
This is a simplified daemonizer without the tricky thread renaming:
public abstract class Daemonizer<T> extends Thread {

    private final T target;
    private boolean completed = false;
    private Exception cause = null;

    public Daemonizer(T target) {
        super(Daemonizer.class.getSimpleName());
        setDaemon(true);
        this.target = target;
    }

    @Override
    public void run() {
        try {
            act(target);
        } catch (Exception ex) {
            cause = ex;
        }
        completed = true;
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException ex) {
            java.util.logging.Logger.getLogger(Daemonizer.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
        }
    }

    public abstract void act(final T target) throws Exception;

    public static void daemonize(Daemonizer daemonizer) throws Exception {
        daemonizer.start();
        while (!daemonizer.completed) {
            Thread.sleep(50);
        }
        if (daemonizer.cause != null) {
            throw daemonizer.cause;
        }
    }
}
Usage:
Daemonizer.daemonize(new Daemonizer<JMXConnectorServer>(server) {
    @Override
    public void act(JMXConnectorServer server) throws Exception {
        server.start();
    }
});
Yeah, you will need to do a connectorServer.stop(); at some point.
Edit:
In reading your comments, it sounds like you should do something like:
connectorServer.start();
try {
    // create thread-pool
    ExecutorService threadPool = Executors...
    // submit jobs to the thread-pool
    ...
    threadPool.shutdown();
    // wait for the submitted jobs to finish
    threadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
} finally {
    connectorServer.stop();
}
@Nicholas' idea of the shutdown hook is a good one. Typically, however, I had my main thread wait on some sort of variable that is set from a shutdown() JMX operation. Something like:
public CountDownLatch shutdownLatch = new CountDownLatch(1);
...
// in main
connectorServer.start();
try {
    // do the main-thread stuff
    shutdownLatch.await();
} finally {
    connectorServer.stop();
}

// in some JMX exposed operation
public void shutdown() {
    Main.shutdownLatch.countDown();
}
As an aside, you could use my SimpleJMX package to manage your JMX server for you.
JmxServer jmxServer = new JmxServer(8000);
jmxServer.start();
try {
    // register our lookupCache object defined below
    jmxServer.register(lookupCache);
    jmxServer.register(someOtherObject);
} finally {
    jmxServer.stop();
}
From my experience, the JMXConnectorServer only runs in a user thread when you create it explicitly.
If you instead configure RMI access for the platform MBean server via system properties, the implicitly created JMX connector server will run as a daemon thread and not prevent JVM shutdown. To do this, your code would shrink to the following
public class Main {
    public static void main(final String[] args) throws IOException {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    }
}
but you'll need to set the following system properties:
-Dcom.sun.management.jmxremote.port=1919
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
You could add a JVM Shutdown Hook to stop the connector server.
===== UPDATE =====
Not sure why your shutdown hook doesn't work. Perhaps you can supply your sample code. Here's an example:
public static void main(String[] args) {
    try {
        log("Creating Connector Server");
        final JMXConnectorServer jcs = JMXConnectorServerFactory.newJMXConnectorServer(
                new JMXServiceURL("rmi", "localhost", 12387), null, ManagementFactory.getPlatformMBeanServer());
        Thread jcsStopper = new Thread("JCS-Stopper") {
            public void run() {
                if (jcs.isActive()) {
                    try {
                        jcs.stop();
                        log("Connector Server Stopped");
                    } catch (Exception e) {
                        log("Failed to stop JCS");
                        e.printStackTrace();
                    }
                }
            }
        };
        jcsStopper.setDaemon(false);
        Runtime.getRuntime().addShutdownHook(jcsStopper);
        log("Registered Server Stop Task");
        jcs.start();
        log("Server Started");
        Thread.sleep(3000);
        System.exit(0);
    } catch (Exception ex) {
        ex.printStackTrace(System.err);
    }
}
Output is:
[main]:Creating Connector Server
[main]:Registered Server Stop Task
[main]:Server Started
[JCS-Stopper]:Connector Server Stopped
String port = getProperty("com.sun.management.jmxremote.port");
if (port == null) {
    // No JMX port configured yet: pick a free port, disable SSL and authentication,
    // and start the platform JMX agent programmatically.
    port = String.valueOf(getAvailablePort());
    System.setProperty("com.sun.management.jmxremote.port", port);
    System.setProperty("com.sun.management.jmxremote.ssl", "false");
    System.setProperty("com.sun.management.jmxremote.authenticate", "false");
    sun.management.Agent.startAgent();
}
log.info(InetAddress.getLocalHost().getCanonicalHostName() + ":" + port);
I have a very simple Java RMI Server that looks like the following:
import java.rmi.*;
import java.rmi.server.*;

public class CalculatorImpl extends UnicastRemoteObject implements Calculator {

    private String mServerName;

    public CalculatorImpl(String serverName) throws RemoteException
    {
        super();
        mServerName = serverName;
    }

    public int calculate(int op1, int op2) throws RemoteException
    {
        return op1 + op2;
    }

    public void exit() throws RemoteException
    {
        try {
            Naming.unbind(mServerName);
            System.out.println("CalculatorServer exiting.");
        }
        catch (Exception e) {}
        System.exit(1);
    }

    public static void main(String args[]) throws Exception
    {
        System.out.println("Initializing CalculatorServer.");
        String serverObjName = "rmi://localhost/Calculator";
        Calculator calc = new CalculatorImpl(serverObjName);
        Naming.rebind(serverObjName, calc);
        System.out.println("CalculatorServer running.");
    }
}
When I call the exit method, System.exit(1) throws the following exception:
CalculatorServer exiting.
java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
java.io.EOFException
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:203)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:126)
at CalculatorImpl_Stub.exit(Unknown Source)
at CalculatorClient.<init>(CalculatorClient.java:17)
at CalculatorClient.main(CalculatorClient.java:29)
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:243)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:189)
... 4 more
[2]+ Exit 1 java CalculatorImpl
What am I doing wrong in this method?
In case anyone is having a similar problem, I figured out the answer myself. Here is my exit() method:
public void exit() throws RemoteException
{
    try {
        // Unregister ourself
        Naming.unbind(mServerName);

        // Unexport; this will also remove us from the RMI runtime
        UnicastRemoteObject.unexportObject(this, true);

        System.out.println("CalculatorServer exiting.");
    }
    catch (Exception e) {}
}
Actually, just unregistering and immediately calling System.exit doesn't shut down cleanly: it basically breaks the connection before informing the client that the call completed. What works is to start a small thread that shuts down the system, like this:
public void quit() throws RemoteException {
    System.out.println("quit");
    Registry registry = LocateRegistry.getRegistry();
    try {
        registry.unbind(_SERVICENAME);
        UnicastRemoteObject.unexportObject(this, false);
    } catch (NotBoundException e) {
        throw new RemoteException("Could not unregister service, quitting anyway", e);
    }

    new Thread() {
        @Override
        public void run() {
            System.out.print("Shutting down...");
            try {
                sleep(2000);
            } catch (InterruptedException e) {
                // I don't care
            }
            System.out.println("done");
            System.exit(0);
        }
    }.start();
}
The thread is needed so that the actual shutdown can happen a little later, while the quit method itself still returns normally to the client.
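For illustration, a hypothetical client-side snippet (assuming the remote interface declares quit()); with the delayed System.exit() above, this call returns normally instead of failing with the UnmarshalException from the question:
// Hypothetical client code; names are assumed, not from the original post.
Calculator calc = (Calculator) Naming.lookup("rmi://localhost/Calculator");
calc.quit();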