My application is throwing this error:
Error : An error occurred trying to instantiate an instance of the API adapter "org.datanucleus.api.jdo.JDOAdapter"
(perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?) :
{1}
org.datanucleus.exceptions.NucleusUserException: Error :
An error occurred trying to instantiate an instance of the API adapter "org.datanucleus.api.jdo.JDOAdapter" (perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?) :
{1}
...
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
However, my application does have datanucleus-api-jdo-3.0.0-release.jar.
What could be the reason, if not missing jars?
EDIT:
This is what has been suggested to fix this issue:
This is a sporadic error that happens from time to time on any
persistable class, but more so on ones that are used a lot in parallel.
It happens in JDO and JPA, and it seems as though the local datastore
locks a particular table / entity group and forgets to release it,
thus causing all subsequent calls to datastore operations to fail. I
generally don't have to restart Eclipse; just stopping and then starting
the server tends to fix the problem. If not, a full refresh/clean
build will do the trick.
However, I have already restarted my GAE server and re-run my application, and I am still getting the same error.
Here is the complete POM.xml
Here is the complete stack trace.
Use the latest versions. Include all dependencies (jdo-api, datanucleus-api-jdo). Read the docs for what needs to be in the CLASSPATH (the enhanced versions of your classes, for example).
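For example, a minimal sketch, assuming the default GAE jdoconfig.xml with a persistence unit named "transactions-optional" (adjust the name to your own configuration): creating the factory is the step that fails with this adapter error when datanucleus-api-jdo is missing from the runtime classpath.

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

public class JdoSmokeTest {
    public static void main(String[] args) {
        // "transactions-optional" is the PMF name used in the default GAE jdoconfig.xml;
        // replace it with the name from your own configuration.
        PersistenceManagerFactory pmf =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");

        // Building the factory requires datanucleus-api-jdo (the "API adapter")
        // and datanucleus-core on the runtime classpath, plus the enhanced
        // versions of your persistable classes.
        PersistenceManager pm = pmf.getPersistenceManager();
        try {
            System.out.println("JDO bootstrapped with " + pm.getClass().getName());
        } finally {
            pm.close();
            pmf.close();
        }
    }
}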
We have a cluster of 3 nodes running Oracle WebLogic 12.2.1.2.
Checking the weblogic.Stdout log file, I've started to see a lot of the following trace:
<Warning><oracle.dms.instrument><DMS-50763> <Attempt to create pre-existing noun /MY_DOMAIN/MY_CLUSTER_NODE_X/DEPLOY_NAME, of type wls_jaxrsapp_resources, with a conflicting type wls_ear.>
MY_DOMAIN is the configured domain on the WebLogic server, and DEPLOY_NAME is the name of my deployed WAR on the server.
Looking at the Oracle documentation, this error is described as:
DMS-50763: Attempt to create pre-existing noun {0}, of type {1}, with
a conflicting type {2}.
Cause: An attempt has been made to create a noun that already exists and has a type that is different from the current attempt to create it.
Action: Correct the code responsible for creating this noun.
Level: 1 Type: SET_AT_RUNTIME Impact: Other
This description is confusing to me, since the proposed action is to correct the code, but I suspect the error is related to how the server internally uses the WAR name for more than one resource.
I checked whether there is another WAR or artifact deployed with the same name, but there is not. Furthermore, I assumed that wls_ear refers to the deployed WAR and wls_jaxrsapp_resources refers to some JAX-RS resource (messages, stubs, ...), but I'm not even sure; I'm only guessing, because I didn't find anything about this nomenclature in Oracle's documentation.
Can somebody explain this error in more detail and point me in the right direction?
I am using the latest version of the Bitnami Apache Solr stack, and the issue I am facing is that after adding a SolrCore, every time the services or the server are restarted, the attached SolrCore collection gets detached, and the interface looks as if no SolrCore had ever been attached.
The strange thing is that when I attach the SolrCore again, the Solr interface shows an error message, "another core is already defined there", and once I refresh the page it is as if nothing happened and everything is fine.
This means the core exists on the back end, but some reference to it is removed when the services restart.
So I need to know why this is happening: why is the core getting detached after the services restart, and how can I fix this issue?
Reference link for the Solr version I am using:
https://bitnami.com/stack/solr/installer
I bet it is a SoftReference held inside a service that gets lost, causing this trouble.
I think you should look for a unified service that starts and stops everything together, keeping the work united in the same context.
You could start/stop all your web server and Solr instances together, at the same time, with a batch script. You can also look at how SoftReferences work in the Solr manual and at the usual problems they cause.
Good luck!
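For what it's worth, here is a small standalone illustration of what it means for a SoftReference to be "lost" (a sketch of the java.lang.ref API in general, not of Solr's internals): the garbage collector may clear it under memory pressure, after which the referent is simply gone.

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftReferenceDemo {
    public static void main(String[] args) {
        // Hold a large object only through a SoftReference.
        SoftReference<byte[]> ref = new SoftReference<>(new byte[16 * 1024 * 1024]);
        System.out.println("Before pressure, referent present: " + (ref.get() != null));

        // Create memory pressure; the JVM is allowed to clear soft references
        // before it would throw OutOfMemoryError.
        List<byte[]> hog = new ArrayList<>();
        try {
            while (ref.get() != null) {
                hog.add(new byte[8 * 1024 * 1024]);
            }
        } catch (OutOfMemoryError e) {
            hog.clear(); // release the pressure so the program can continue
        }

        // Once cleared, get() returns null and the referent cannot be recovered.
        System.out.println("After pressure, referent present: " + (ref.get() != null));
    }
}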
I am seeing strange behavior from Quartz. The job was able to run after I cleared the Quartz database tables and restarted Tomcat, but after a few full runs the following error occurs. I have run out of clues; has anyone had the problem below?
Update:
If I change the TRIGGER_STATE from "ERROR" to "WAITING", the job runs again, and after a few full cycles it changes back to "ERROR" with the same stack trace.
[scheduler_QuartzSchedulerThread] 00:07:01,007 ERROR org.quartz.impl.jdbcjobstore.JobStoreSupport.triggerFired(JobStoreSupport.java:2908) - Error retrieving job, setting trigger state to ERROR.
org.quartz.JobPersistenceException: Couldn't retrieve job because a required class was not found: com.mbww.scgid.social.facebook.RunFbPageHourlyJob [See nested exception: java.lang.ClassNotFoundException: com.social.facebook.RunFbPageHourlyJob]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveJob(JobStoreSupport.java:1416)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggerFired(JobStoreSupport.java:2903)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$38.execute(JobStoreSupport.java:2871)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3788)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggerFired(JobStoreSupport.java:2865)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:319)
Caused by: java.lang.ClassNotFoundException: com.mbww.scgid.social.facebook.RunFbPageHourlyJob
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1483)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1329)
at org.springframework.scheduling.quartz.ResourceLoaderClassLoadHelper.loadClass(ResourceLoaderClassLoadHelper.java:75)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectJobDetail(StdJDBCDelegate.java:894)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveJob(JobStoreSupport.java:1404)
... 5 more
This is probably a classpath error where you have not defined your classpath correctly. Because something is wrong with your classpath, your app can't find the necessary jar and thus the class it needs.
Surprisingly, according to Google there are many reasons for this to happen.
In my case, it was because someone else on the team had deployed another copy of the same application (with a different name), and this confused Quartz. When it tries to load the class, it sometimes tries to load it from the old application, where the class does not exist. Then the error makes sense:
Couldn't retrieve job because a required class was not found:
com.mbww.scgid.social.facebook.RunFbPageHourlyJob
After removing the old application from Tomcat, everything runs smoothly now.
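As a sanity check (a hedged sketch; the class name is just the one from the stack trace and the helper class is hypothetical), you can log which deployment's classloader actually resolves the job class at runtime:

public final class JobClassLocator {

    // Call this from inside the webapp that schedules the job, for example
    // from a ServletContextListener, so the webapp classloader is used.
    public static void logJobClassLocation(String jobClassName) throws ClassNotFoundException {
        Class<?> jobClass = Class.forName(jobClassName, false,
                Thread.currentThread().getContextClassLoader());

        // Shows which WAR/classes directory the class was actually loaded from;
        // if this points at an old deployment, that is the copy Quartz is using.
        System.out.println("Code source: "
                + jobClass.getProtectionDomain().getCodeSource().getLocation());
        System.out.println("Loaded by:   " + jobClass.getClassLoader());
    }
}

Calling it with "com.mbww.scgid.social.facebook.RunFbPageHourlyJob" (the name stored in the Quartz job details) should point at the deployment you expect; if it does not, the wrong application is serving the class.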
I want to run a Java application which calls a web service. Everything works fine from the NetBeans IDE, but it fails when I run the generated .jar. What could be the problem?
How can I check the content type that the error is pointing at?
The error says: "SEVERE: SAAJ0537: Invalid Content-Type. Could be an error message instead of a SOAP message. com.sun.xml.messaging.saaj.MessageImpl identifyContentType"
EDIT
I am realizing that the problem could stem from the fact that the web service I'm consuming uses a custom data type, but I have no idea where to look.
Please help.
Your question lacks the details that would help identify your problem (like what web container you are using, some source code, etc.), but I've encountered and resolved this same problem. I'm using Tomcat with Eclipse, and apparently the problem occurs because, for some reason, Tomcat can't find some JARs even though I have specified them in my build path. The resolution is to put the JARs in the actual lib directory of Tomcat instead of in some project-specific location. (See this same case with log4j.)
What happens is that the missing JAR causes the servlet to produce an internal server error when called. Tomcat generates an error page, which is of type "text/html", and sends it back to the client. So the client reads "text/html" instead of the expected "text/xml".
As a test, this SOAP tutorial produces the specified error due to jaxm-api.jar, but it can be fixed with the solution I described above. I have verified this with Tomcat 7.
How can I check the content type that the error is pointing at?
This is a bit difficult to answer without some code, but if you are using javax.xml.soap.SOAPPart, it has methods to check the headers of the SOAP transaction; check the javadocs. It's a shame it does not override toString(). Personally, though, I did not arrive at this answer with Java debugging but by looking at TCP dumps.
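For example (a minimal sketch, assuming you have a handle on the javax.xml.soap.SOAPMessage in question), the MIME headers show the content type that SAAJ is complaining about:

import java.util.Iterator;
import javax.xml.soap.MimeHeader;
import javax.xml.soap.SOAPMessage;

public final class SoapContentTypeDump {

    // Prints all MIME headers of the message, including Content-Type.
    public static void dumpHeaders(SOAPMessage message) {
        Iterator<?> it = message.getMimeHeaders().getAllHeaders();
        while (it.hasNext()) {
            MimeHeader header = (MimeHeader) it.next();
            System.out.println(header.getName() + ": " + header.getValue());
        }

        // Or query the Content-Type values directly.
        String[] contentType = message.getMimeHeaders().getHeader("Content-Type");
        if (contentType != null) {
            for (String value : contentType) {
                System.out.println("Content-Type: " + value);
            }
        }
    }
}

If the response is an HTML error page rather than a SOAP envelope, the Content-Type printed here will be "text/html", which matches the explanation above.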
Having created a successfully (locally) deployed service using Google Cloud Endpoints, I wanted to switch from JDO to Objectify.
Having updated the endpoint class with the Objectify code, I have an issue where the .api file in the WAR is deleted and doesn't get regenerated. The upshot is that the endpoint is no longer exposed and every request returns a 404 error.
I manually added the file back in (amended from another project), and it worked once: with the WAR deployed, it appeared to try to serve the request (which failed due to a missing Objectify annotation), but then the .api file was deleted again.
Can anyone offer suggestions as to what may be causing the file to be deleted and not regenerated at all?
Dan Holevoet's comment helped to solve this one.
It turns out that there was a problem with the parameterized Objectify Key shown in the stack trace. Removing the Key fields caused the .api files to be regenerated successfully.
As pointed out in the comment below, this is expected to be resolved in the next version of the SDK.
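For illustration (a hedged sketch with hypothetical entity and field names, assuming Objectify 4-style annotations), the kind of change involved is to stop exposing a parameterized Key field through the endpoint and store a plain identifier instead:

import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

// Hypothetical entity for illustration only.
@Entity
public class Order {
    @Id Long id;

    // Before: a parameterized com.googlecode.objectify.Key<Customer> customer field
    // was exposed through the endpoint, and Endpoints failed to generate the .api
    // descriptor for the generic type.

    // After: store the raw datastore id (or a websafe key string) instead.
    Long customerId;
}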