I am seeing strange behavior from Quartz. The job runs fine after I clear the Quartz database tables and restart Tomcat, but after a few full runs the following error occurs. I have run out of clues; has anyone had the problem below?
Update:
If I change the TRIGGER_STATE from "ERROR" to "WAITING", the job runs again, and after a few full cycles it changes back to "ERROR" with the same stack trace.
[scheduler_QuartzSchedulerThread] 00:07:01,007 ERROR org.quartz.impl.jdbcjobstore.JobStoreSupport.triggerFired(JobStoreSupport.java:2908) - Error retrieving job, setting trigger state to ERROR.
org.quartz.JobPersistenceException: Couldn't retrieve job because a required class was not found: com.mbww.scgid.social.facebook.RunFbPageHourlyJob [See nested exception: java.lang.ClassNotFoundException: com.social.facebook.RunFbPageHourlyJob]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveJob(JobStoreSupport.java:1416)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggerFired(JobStoreSupport.java:2903)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$38.execute(JobStoreSupport.java:2871)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3788)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggerFired(JobStoreSupport.java:2865)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:319)
Caused by: java.lang.ClassNotFoundException: com.mbww.scgid.social.facebook.RunFbPageHourlyJob
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1483)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1329)
at org.springframework.scheduling.quartz.ResourceLoaderClassLoadHelper.loadClass(ResourceLoaderClassLoadHelper.java:75)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectJobDetail(StdJDBCDelegate.java:894)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveJob(JobStoreSupport.java:1404)
... 5 more
This is probably a classpath error: something is wrong with your classpath, so your app cannot find the necessary jar and therefore the class it needs.
Surprisingly, a Google search turns up many possible reasons for this to happen.
In my case, it was because someone else on the team had deployed a second copy of the same application (under a different name), which confused Quartz. When it tried to load the class, it sometimes tried to load it from the old application, where the class does not exist. Then the error makes sense:
Couldn't retrieve job because a required class was not found:
com.mbww.scgid.social.facebook.RunFbPageHourlyJob
After removing the old application from Tomcat, everything runs smoothly now.
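For context, a JDBC-backed job store persists only the job's class name; the class itself is resolved through the web application's class loader each time the trigger fires (you can see ResourceLoaderClassLoadHelper delegating to WebappClassLoader in the stack trace). A rough sketch of how the job above would be registered, using the Quartz 2.x builder API; the identities and cron expression are made up:

import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

import com.mbww.scgid.social.facebook.RunFbPageHourlyJob;

public class ScheduleFbPageJob {
    public static void main(String[] args) throws SchedulerException {
        // Quartz stores only the class name in the QRTZ_JOB_DETAILS table; when the trigger
        // fires, JobStoreSupport resolves that name through the web app's class loader.
        // If the scheduler thread belongs to a stale second deployment, that loader cannot
        // see the class and you get exactly the ClassNotFoundException above.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(RunFbPageHourlyJob.class)
                .withIdentity("runFbPageHourly", "facebook")      // identity names are made up
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("runFbPageHourlyTrigger", "facebook")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 * * * ?"))  // hourly, illustrative
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}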
I am experimenting with different instrumentation libraries; spring-cloud-sleuth and OpenTelemetry (OTel) are the ones I like the most. Spring Cloud Sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I turned my attention to OpenTelemetry.
I am able to export metrics using OTel, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution: it just traces the call across microservices and links all the spans with one traceId.
My question is: how do I configure OTel to get output similar to Spring Sleuth? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.[jdbc].enabled=false -Dotel.instrumentation.[methods].enabled=false -Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain point is that I am not able to SHUT DOWN the metrics data; I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry (OTel) agent:
Environment variable
Java system property
You can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none -jar app.jar
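Putting that together with the original goal (traces only, exported to Zipkin), a full command along these lines should work; the agent jar path and application jar name are placeholders:

java -javaagent:opentelemetry-javaagent.jar \
  -Dotel.traces.exporter=zipkin \
  -Dotel.metrics.exporter=none \
  -jar app.jar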
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
We've been working on an application that uses Tomcat 8 with a connection pool. We handle optimistic locking exceptions with a @Version field, and we control transactions with EntityManagers isolated by ThreadLocal.
However, the application sometimes throws a concurrency exception that hangs other processes and requires a server restart.
The exception is always like this:
Caused by: Exception [EclipseLink-2004] (Eclipse Persistence Services - 2.6.4.v20160829-44060b6):
org.eclipse.persistence.exceptions.ConcurrencyException
Exception Description: A signal was attempted before wait() on ConcurrencyManager. This normally means that an attempt was made to commit or rollback a transaction before it was started, or to rollback a transaction twice.
at org.eclipse.persistence.exceptions.ConcurrencyException.signalAttemptedBeforeWait(ConcurrencyException.java:84)
at org.eclipse.persistence.internal.helper.ConcurrencyManager.releaseReadLock(ConcurrencyManager.java:468)
at org.eclipse.persistence.internal.identitymaps.CacheKey.releaseReadLock(CacheKey.java:475)
We've been trying to solve this problem, or find any specific information about this error, with no result. We even followed the instructions in https://wiki.eclipse.org/EclipseLink/FAQ/JPA#How_to_diagnose_and_resolve_hangs_and_deadlocks.3F.
Disabling the cache seems to solve the problem, but we can't afford to run without the cache due to performance needs.
Any help would be appreciated.
Thanks
Finally, after six months of headaches, I managed to solve the problem.
The general error was EclipseLink-2004. If you check the EclipseLink error reference page, it says "Verify transactions in the application".
I had been using application-managed EntityManagers. To ensure that transactions stayed single-threaded, I used this EntityManagerHelper class:
EntityManagerHelper
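For reference, that helper is roughly along these lines (a simplified sketch; the persistence unit name and the exact method set are assumptions, the real class has more):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Simplified ThreadLocal-based helper: one EntityManager per thread.
public final class EntityManagerHelper {
    private static final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("pu");   // "pu" is a placeholder unit name
    private static final ThreadLocal<EntityManager> threadLocal = new ThreadLocal<>();

    public static EntityManager getEntityManager() {
        EntityManager em = threadLocal.get();
        if (em == null || !em.isOpen()) {
            em = emf.createEntityManager();
            threadLocal.set(em);
        }
        return em;
    }

    public static void closeEntityManager() {
        EntityManager em = threadLocal.get();
        if (em != null) {
            threadLocal.remove();
            if (em.isOpen()) {
                em.close();
            }
        }
    }

    public static void beginTransaction() { getEntityManager().getTransaction().begin(); }
    public static void commit()           { getEntityManager().getTransaction().commit(); }
    public static void rollback()         { getEntityManager().getTransaction().rollback(); }
}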
The problem was that I was putting a managed object inside a session attribute. I don't fully understand the internals, but somehow it created new transactions outside the main transaction when the object was used to query. Moving the object to a request attribute solved the problem.
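In servlet terms the change was roughly the following (the entity, servlet, and attribute names are placeholders; EntityManagerHelper is the helper sketched above):

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Placeholder entity used only for illustration.
@Entity
class Customer {
    @Id
    Long id;
}

public class CustomerServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        EntityManager em = EntityManagerHelper.getEntityManager();
        Customer customer = em.find(Customer.class, 1L);

        // Before (problematic): the managed entity lived in the HTTP session, so later
        // requests on other threads could query through it outside the owning transaction:
        //   request.getSession().setAttribute("customer", customer);

        // After: keep it in request scope only (or detach it / copy the needed fields first):
        request.setAttribute("customer", customer);
    }
}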
I hope this helps someone in the future.
Best regards
I am using the latest version of Bitnami Apache Solr. The issue I am facing is that after adding a SolrCore, every time the services or the server restart, the attached SolrCore gets detached, and the interface looks as if no SolrCore had ever been attached.
The strange thing is that when I attach the SolrCore again, the Solr interface shows an error message, "another core is already defined there", yet once I refresh the page it is as if nothing happened and everything is fine.
This means the core still exists on the back end, but some reference to it is removed when the services restart.
So I need to know why this is happening. Why is the core getting detached after the services restart, and how can I fix this issue?
Reference link for the Solr version I am using:
https://bitnami.com/stack/solr/installer
I bet it is a SoftReference linked inside a service that gets lost, causing this trouble.
I think you should look for a unified service that starts and stops everything together, keeping the work united in the same context.
You could start/stop all your web server and Solr instances together, at the same time, with a batch script. You can also look at how SoftReferences are used in the Solr manual and which problems they commonly cause.
Good luck!
My application is throwing this error:
Error : An error occurred trying to instantiate an instance of the API adapter "org.datanucleus.api.jdo.JDOAdapter"
(perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?) :
{1}
org.datanucleus.exceptions.NucleusUserException: Error :
An error occurred trying to instantiate an instance of the API adapter "org.datanucleus.api.jdo.JDOAdapter" (perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?) :
{1}
...
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
However, my application does have datanucleus-api-jdo-3.0.0-release.jar.
What could be the reason, if not missing jars?
EDIT:
This is what has been suggested to fix this issue:
This is a sporadic error that happens from time to time on any persistable class, but more so on ones that are used a lot in parallel. It happens in JDO and JPA, and it seems as though the local datastore locks a particular table / entity group and forgets to release it, thus causing all subsequent calls to datastore operations to fail. I generally don't have to restart Eclipse; just stopping and then starting the server tends to fix the problem, and if not, a full refresh/clean build will do the trick.
However, I have already restarted my GAE server and re-run my application, and I am still getting the same error.
Here is the complete POM.xml
Here is the complete stack trace.
Use the latest versions. Include all dependencies (jdo-api, datanucleus-api-jdo). Read the docs for what needs to be on the CLASSPATH (enhanced versions of your classes, for example).
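If you want to confirm whether the adapter class is actually visible to the runtime class loader (as opposed to merely being packaged in the WAR), a quick diagnostic sketch like this can help; it only uses Class.forName and makes no DataNucleus calls:

// Logs whether the JDO adapter class (shipped in datanucleus-api-jdo) can be resolved.
// If this fails, the jar really is missing from the effective classpath; if it succeeds,
// the problem lies elsewhere (for example, enhancement or plugin registration).
public class JdoAdapterCheck {
    public static void main(String[] args) {
        try {
            Class<?> adapter = Class.forName("org.datanucleus.api.jdo.JDOAdapter");
            System.out.println("Found " + adapter.getName() + " via " + adapter.getClassLoader());
        } catch (ClassNotFoundException e) {
            System.out.println("datanucleus-api-jdo is not on the classpath: " + e);
        }
    }
}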
I am trying to implement a hot-swap mechanism for a statically typed Java template engine. I follow the same approach used by the Play! Framework to reload application classes. However, I always get the following error:
Caused by: java.lang.UnsupportedOperationException: class redefinition failed: attempted to change the schema (add/remove fields)
at sun.instrument.InstrumentationImpl.redefineClasses0(Native Method)
at sun.instrument.InstrumentationImpl.redefineClasses(InstrumentationImpl.java:150)
at play.classloading.HotswapAgent.reload(HotswapAgent.java:21)
at com.greenlaw110.rythm.play.RythmPlugin$5.reload(RythmPlugin.java:226)
at com.greenlaw110.rythm.internal.compiler.TemplateClassLoader.detectChange(TemplateClassLoader.java:335)
... 19 more
Does anyone have any idea how Play survives this issue?
I think I understand what's going on now. Play's application class loader actually can NOT survive this kind of error. What it does is restart Play upon the error, and during that restart Play creates a new instance of the application class loader. I followed the same approach and it proved to work.
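A minimal sketch of that fallback, assuming a plain URLClassLoader over the recompiled template classes (class and method names are illustrative, not Play's actual API): try an in-place redefinition first, and when the JVM rejects it because the class schema changed, discard the old class loader and create a fresh one.

import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;
import java.net.URL;
import java.net.URLClassLoader;

// Method-body changes can be hot-swapped in place; once fields or methods are
// added/removed, the only option is a fresh class loader.
public final class TemplateReloader {
    private final Instrumentation inst;
    private final URL[] templateClasspath;   // where the recompiled classes live
    private ClassLoader appLoader;           // holds all currently loaded template classes

    public TemplateReloader(Instrumentation inst, URL[] templateClasspath) {
        this.inst = inst;
        this.templateClasspath = templateClasspath;
        this.appLoader = new URLClassLoader(templateClasspath);
    }

    public void reload(Class<?> clazz, byte[] newBytecode) throws Exception {
        try {
            // Fast path: redefine method bodies in place.
            inst.redefineClasses(new ClassDefinition(clazz, newBytecode));
        } catch (UnsupportedOperationException schemaChanged) {
            // Fields/methods changed: discard the old loader and start over, which is
            // effectively what Play does by restarting the application.
            appLoader = new URLClassLoader(templateClasspath);
            // ...callers must now re-resolve template classes through the new loader...
        }
    }

    public ClassLoader currentLoader() {
        return appLoader;
    }
}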