Why does DisconnectNonTransientException occur? It happened only once, and the error could not be reproduced after that. What is the fix to prevent the error from occurring in the future? More importantly, how do you fix an issue that is no longer reproducible?
Edit: more updates on this question.
Using MyBatis, DB2, and Tomcat (trying to access a remote DB).
The error occurs when the code hits the data source for the first time after a long gap since the Tomcat application was last accessed. On refresh, the error disappears and the application works as expected.
The connection to the data source is closed after every access.
The SqlSession that is created is not closed (does this cause the trouble? see the sketch below).
The error says: "The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated this particular request due to unexpected error conditions detected at the target system. ERRORCODE=-4499, SQLSTATE=58009"
Is there a default timeout for a MyBatis SqlSession?
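Closing the SqlSession after every use is the recommended pattern; an unclosed session can pin a connection that the server side later kills. A minimal sketch, assuming a SqlSessionFactory is already built; the mapper statement id "UserMapper.selectUser" is a placeholder, not from the original post:

import java.util.Map;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class UserDao {
    private final SqlSessionFactory sqlSessionFactory;

    public UserDao(SqlSessionFactory sqlSessionFactory) {
        this.sqlSessionFactory = sqlSessionFactory;
    }

    public Map<String, Object> findUser(int id) {
        // SqlSession implements Closeable, so try-with-resources releases
        // the session (and its underlying connection) even if the query throws.
        try (SqlSession session = sqlSessionFactory.openSession()) {
            return session.selectOne("UserMapper.selectUser", id); // placeholder statement id
        }
    }
}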
More importantly how to fix an issue that is no longer reproducible?
In the stack trace you can find where the exception is thrown. Then, by debugging the code, you can find why it's thrown.
Many exceptions occur because something is wrong with the code or with the data. You should find what is wrong and fix it. If you can't fix it at the point where it occurred, catch the exception in your own code and decide what to do with it there.
If you can't reproduce the exception, instrument the code that may throw it: add debugging output, or raise the logging level so the suspect code path is traced.
If you have the exception's stack trace, you can see what the code was doing when it was thrown.
If you have tests that might reproduce the exception, write additional code to intercept it (see the sketch below). Run that code while trying to reproduce the exception in the test environment; once you can observe it, you will know how to handle it.
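For example, a minimal interception wrapper might log the SQLSTATE, vendor error code, and elapsed time before rethrowing. Everything here (the DataSource field, the runQuery method) is a hypothetical sketch, not the original code:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.sql.DataSource;

public class TracedQuery {
    private static final Logger LOG = Logger.getLogger(TracedQuery.class.getName());
    private final DataSource dataSource; // however your app obtains connections

    public TracedQuery(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void runQuery(String sql) throws SQLException {
        long start = System.currentTimeMillis();
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        } catch (SQLException e) {
            // Capture the context needed for diagnosis: SQLSTATE (58009 in
            // the question), vendor code (-4499), and how long we waited.
            LOG.log(Level.SEVERE, String.format(
                    "Query failed after %d ms, SQLSTATE=%s, errorCode=%d",
                    System.currentTimeMillis() - start, e.getSQLState(), e.getErrorCode()), e);
            throw e; // rethrow: interception is for diagnosis, not for swallowing
        }
    }
}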
We're on version 2.7 of the Play Framework, and we've been getting what seems like a random exception in the logs (see below); however, we cannot trace it back to our code. Here is the stack trace:
[2020-07-15 14:02:36,294] - [ERROR] - from akka.actor.ActorSystemImpl at [akka.actor.ActorSystemImpl(application)]
Internal server error, sending 500 response
akka.http.impl.util.One2OneBidiFlow$OutputTruncationException: Inner flow was completed without producing result elements for 1 outstanding elements
at akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$.apply(One2OneBidiFlow.scala:22)
at akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$.apply(One2OneBidiFlow.scala:22)
at akka.http.impl.util.One2OneBidiFlow$One2OneBidi$$anon$1$$anon$4.onUpstreamFinish(One2OneBidiFlow.scala:97)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:506)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
at akka.actor.Actor$class.aroundReceive(Actor.scala:539)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:614)
at akka.actor.ActorCell.invoke(ActorCell.scala:583)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
As you can see, there are no references to any non-Play code. The Play app runs normally, with no visible problems in functionality. Googling hasn't yielded much insight apart from similar issues due to malformed config files (which cause the same stack trace, but under different conditions) and possibly Kamon-related issues. We do use Kamon version 1. What could be causing this exception? Any help would be greatly appreciated.
It looks like this is caused by this issue:
https://github.com/playframework/playframework/issues/9020
Meaning that your configuration is malformed (it couldn't be parsed). Hitting Ctrl+C should at least show another error message with more information about where the error is.
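You can also surface the parse error directly by loading the file with the Typesafe Config library that Play uses under the hood. A minimal sketch; the conf/application.conf path is an assumption about your project layout:

import com.typesafe.config.Config;
import com.typesafe.config.ConfigException;
import com.typesafe.config.ConfigFactory;
import java.io.File;

public class ConfigCheck {
    public static void main(String[] args) {
        try {
            // Parse the same file Play would load; a malformed file throws
            // ConfigException.Parse, whose message includes file and line.
            Config config = ConfigFactory.parseFile(new File("conf/application.conf"));
            config.resolve(); // also catches broken ${...} substitutions
            System.out.println("Config parsed OK");
        } catch (ConfigException e) {
            System.err.println("Config error: " + e.getMessage());
        }
    }
}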
I've been having some issues with NoHandlerFoundExceptions in a multi-server configuration. I've been trying to figure out exactly when I get this exception, but I cannot find any good description of what it actually means that no handler was found.
The thing here is that everything actually seems to work fine: we are not receiving any error reports about this from our production system, and we are not able to reproduce the error in our test systems. But we can clearly see a large number of "no handler found" errors in our production logs.
So my question is: could this error be due to bad load balancing? For example, we send our users between different servers, and the receiving server does not have an updated state for this user/session? Or would it have to be a configuration error in the Spring application, one that cannot be affected by the load balancing?
When I have searched for other people with the same error, they seem to get it all the time, whereas I get it only sporadically.
The error we receive:
Uncaught service() exception root cause AppName: javax.servlet.ServletException: org.springframework.web.portlet.NoHandlerFoundException: No handler found for portlet request: mode 'view', phase 'ACTION_PHASE', parameters map['action' -> array<String>['myController.parameter']]
Check the XML configuration that declares your portlet handlers. Normally, every error at the handler-mapping stage is caused by configuration (see the sketch below).
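To make concrete what would have to match the failing request, here is a hypothetical Spring Portlet MVC controller. The mode ('view'), phase (ACTION_PHASE), and action parameter value are taken from the error message; the class and method names are placeholders. If this controller (or its XML/component-scan declaration) were missing on some servers behind the load balancer, only requests routed there would fail, which could explain sporadic errors:

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.portlet.bind.annotation.ActionMapping;

@Controller
@RequestMapping("VIEW") // portlet mode 'view' from the error message
public class MyController {

    // The error shows parameters map['action' -> 'myController.parameter'],
    // so an ACTION_PHASE handler mapped to that parameter must exist.
    @ActionMapping(params = "action=myController.parameter")
    public void handleAction(ActionRequest request, ActionResponse response) {
        // ... handle the action phase here
    }
}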
We are using c3p0 as the connection pool in our application with a Microsoft SQL Server database. Connections are tested on checkout with a validation query, so that the application doesn't work with stale connections.
Recently, we have started seeing the following warning in the application logs (a lot of these messages appear in sequence). Has anyone seen this sort of exception, and what does it mean?
2017-03-29 09:34:24 [WARNING] [c3p0] A PooledConnection that has already signalled a Connection error is still in use!
2017-03-29 09:34:24 [WARNING] [c3p0] Another error has occurred [ com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed. ] which will not be reported to listeners!
2017-03-29 09:34:24 com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
2017-03-29 09:34:24 at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:190)
2017-03-29 09:34:24 at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:388)
2017-03-29 09:34:24 at com.microsoft.sqlserver.jdbc.SQLServerConnection.prepareStatement(SQLServerConnection.java:2166)
2017-03-29 09:34:24 at com.microsoft.sqlserver.jdbc.SQLServerConnection.prepareStatement(SQLServerConnection.java:1853)
2017-03-29 09:34:24 at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:1076)
My concerns are:
Does this warning (or exception message) mean that the query actually failed to execute, and that the code will see the exception?
Or is it just a warning logged by c3p0? Since we test connections on checkout and this connection is closed, will the pool simply acquire a new connection from the database, so the application runs without any issue?
Any help will be appreciated. Thanks!
So, there's not enough information here to say what the initial cause of the problem was. Anything could have happened: a network outage, whatever. Testing a Connection on checkout ensures that the Connection worked at the time of checkout, but once in client-land, nothing prevents a break. That should be very rare, unless you are keeping Connections checked out for long periods of time. (Don't do that! With a Connection pool, adopt a just-in-time, quick-checkout, immediate check-in strategy, as sketched below.)
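A minimal sketch of that strategy, with a placeholder table and query; the point is that the Connection lives only for the duration of one short, self-contained operation:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderLookup {
    private final DataSource pool; // e.g. a c3p0 ComboPooledDataSource

    public OrderLookup(DataSource pool) {
        this.pool = pool;
    }

    public String findStatus(long orderId) throws SQLException {
        // Check out just-in-time, use briefly, and let try-with-resources
        // check the Connection back in immediately; never hold it across
        // user think-time or long computations.
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM orders WHERE id = ?")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}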
Anyway, some attempt by the application to use the Connection threw an Exception. c3p0 internally checked the Connection then, decided the Connection was broken, and emitted an event (specified by the JDBC spec, but of interest only to internal listeners) indicating a Connection error. c3p0 responds to this by marking the Connection for destruction rather than check-in when the application is done.
The application, despite having seen the first Exception, continued to use the Connection. A second Exception occurred (yes, this Connection really is broken). That's what c3p0 is logging here. It's ignoring the second Exception, not signaling a Connection error, because a Connection error has already been signalled for this Connection. But it's a bit surprised and annoyed to find that the Connection is still in use ;)
All exceptions are relayed to the application. Silently swallowing up problems is the very opposite of c3p0's philosophy. But whatever your application was doing with this Connection triggered an Exception, and your application kept doing other things that triggered more.
That doesn't necessarily mean that anything is wrong. An application may tentatively interpret an Exception as something other than a Connection failure. Perhaps an Exception occurred because of a constraint violation, and if so, there is a workaround? If it were something like that, here the application would find further evidence that, yes, the Connection is broken, because this next use of the Connection, after a previous Exception had been handled, will continue to fail.
If I were you, I'd review the application code that triggers this stack trace, and look particularly for Exception handling in prior steps that might be too forgiving, catching an Exception and continuing when it should instead abort (a hypothetical example follows below). Again, that's not necessarily the case: it could be that your application is doing exactly what it should, appropriately retrying or attempting to continue after a potentially recoverable error, and robust to the possibility that the retry will fail too. In that case you'll just harmlessly see these stack traces in your logs, hopefully very rarely, when already-checked-out Connections fail. But I'd definitely review your Exception-handling logic in this code path, both during the step that triggered the stack trace and, importantly, during the prior steps that would have triggered the first Exception. Usually one Exception aborts a database codepath (except for an eventual rollback() and close()); here you are barreling on to a second, which may well be awesome, but make sure it is what you want to do.
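To make the "too forgiving" pattern concrete, here is a hypothetical contrast; the tables and statements are placeholders, and SQLSTATE class 23 is the standard integrity-constraint-violation class:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransferStep {

    // Too forgiving: the SQLException may mean the Connection itself is
    // broken, but the code presses on and triggers the second failure
    // that c3p0 logs above.
    void tooForgiving(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("INSERT INTO audit VALUES (1)"); // placeholder statement
        } catch (SQLException ignored) {
            // assumed to be a constraint violation; keep going anyway
        }
        try (Statement st = conn.createStatement()) {
            st.execute("UPDATE accounts SET touched = 1"); // fails again on a broken Connection
        }
    }

    // Safer: treat only known-recoverable states as recoverable; abort on
    // everything else so a broken Connection is never reused.
    void saferHandling(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("INSERT INTO audit VALUES (1)");
        } catch (SQLException e) {
            if (e.getSQLState() != null && e.getSQLState().startsWith("23")) {
                // genuine constraint violation; a retry or workaround is reasonable
            } else {
                throw e; // likely a broken Connection: stop using it
            }
        }
    }
}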
If you are seeing this a lot, make sure Connection testing on checkout really is configured properly (see the sketch below), then try to minimize the period during which each Connection stays checked out, and then try to understand why your network or something at the server side might be failing occasionally.
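For reference, a minimal sketch of checkout testing configured programmatically on a c3p0 pool; the JDBC URL and credentials are placeholders:

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolSetup {
    public static ComboPooledDataSource newPool() throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        cpds.setJdbcUrl("jdbc:sqlserver://localhost;databaseName=mydb"); // placeholder
        cpds.setUser("user");       // placeholder
        cpds.setPassword("secret"); // placeholder

        // Validate every Connection as it is handed out; a cheap
        // preferredTestQuery keeps the per-checkout cost low.
        cpds.setTestConnectionOnCheckout(true);
        cpds.setPreferredTestQuery("SELECT 1");

        // Optionally also test idle Connections in the background.
        cpds.setIdleConnectionTestPeriod(300); // seconds
        return cpds;
    }
}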
We experience the following error pattern:
1. Sometimes a GAE app request takes a long time to process, and this throws a DeadlineExceededException, since GAE has a 1-minute limit. This is described by the docs, OK.
2. Apart from the DeadlineExceededException, we get: "A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may be throwing exceptions during the initialization of your application. (Error code 104)"
3. Subsequent requests coming to the GAE app within the next few milliseconds fail with the same Error code 104.
Questions:
Why is #2 reported?
How can we avoid #3? Is it a bug in GAE? What is the mechanism of this failure?
Thanks for the help.
As Bruyere kindly pointed out, the killing of related threads as a result of the timeout exception is detailed here:
If concurrent requests are enabled through the "threadsafe" flag, every other running concurrent request is killed with error code 104.
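One way to avoid the cascade is to never approach the deadline in the first place: defer long-running work out of the user-facing request, for example to a push task queue, where tasks get a longer (10-minute) deadline. A minimal sketch with the GAE Java Task Queue API; the /worker URL and the jobId parameter are hypothetical:

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EnqueueServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Instead of doing the slow work inside the user-facing request
        // (risking DeadlineExceededException and the error-104 cascade),
        // enqueue it as a push task and return immediately.
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder
                .withUrl("/worker") // hypothetical worker servlet doing the slow part
                .param("jobId", req.getParameter("jobId")));
        resp.setStatus(202); // accepted; the work happens in the background
    }
}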
I'm getting an error when my application starts. It appears to happen after it has initialized its connection to the database. It may also be when it starts to spawn threads, but I haven't been able to make it happen on purpose.
The entire error message is:
FATAL ERROR in native method: JDWP NewGlobalRef, jvmtiError=JVMTI_ERROR_NULL_POINTER(100)
JDWP exit error JVMTI_ERROR_NULL_POINTER(100): NewGlobalRef
erickson:
I'm not very familiar with the DB code, but hopefully this string is helpful:
jdbc:sqlserver://localhost;databasename=FOO
Tom Hawtin:
It's likely I was only getting this error when debugging, but it wasn't consistent enough for me to notice.
Also, I fixed a bug that was causing multiple threads to attempt to update the same row in the DB, and I haven't gotten the JVMTI... error since.
JVMTI is the JVM's debugging and profiling interface (JDWP, named in the error, is the debugger wire protocol that sits on top of it). So, I'm guessing it's something peculiar to the environment you are attempting to run your application in.
I'm guessing you are using a native-code-based database driver (JDBC driver type 1 or 2), and I'm guessing that driver is buggy. If you could provide more information about the driver and your data source configuration or connection string, it might help determine some answers.
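One quick way to gather that driver information is to print its identity from DatabaseMetaData. A minimal sketch; the credentials are placeholders, and the URL is the one quoted above:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DriverInfo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://localhost;databasename=FOO";
        try (Connection conn = DriverManager.getConnection(url, "user", "secret")) {
            DatabaseMetaData md = conn.getMetaData();
            // A pure-Java (type 4) driver involves no native code; a type 1/2
            // driver does, and buggy native code can crash the JVM when the
            // JDWP/JVMTI debugging agent is attached.
            System.out.println("Driver:  " + md.getDriverName());
            System.out.println("Version: " + md.getDriverVersion());
        }
    }
}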