java.lang.NoClassDefFoundError: java/util/Base64$Encoder on Centos 7 - java

Summary
We have Spring Boot projects running on Tomcat on CentOS 7. I'm using Kafka for messaging. When Tomcat starts, the @KafkaListener consumers try to fetch metadata, and at that point I get the following exception:
java.lang.NoClassDefFoundError: java/util/Base64$Encoder
What I've Tried
At first I thought this was caused by old kafka-logs. I removed them and restarted the server, and everything was fine. But today I got the exception again, so removing the kafka-logs did not solve the issue.
Kafka Broker Version: kafka_2.13-2.8.0
Kafka Client Version: org.apache.kafka:kafka-clients:jar:2.8.0
OpenJDK: 1.8.0_292
SLF4J: Failed toString() invocation on an object of type
[org.apache.kafka.common.requests.MetadataRequest] Reported exception:
java.lang.NoClassDefFoundError: java/util/Base64$Encoder
at org.apache.kafka.common.Uuid.toString(Uuid.java:101)
at org.apache.kafka.common.message.MetadataRequestData$MetadataRequestTopic.toString(MetadataRequestData.java:662)
at org.apache.kafka.common.protocol.MessageUtil.deepToString(MessageUtil.java:53)
at org.apache.kafka.common.message.MetadataRequestData.toString(MetadataRequestData.java:373)
at org.apache.kafka.common.requests.AbstractRequest.toString(AbstractRequest.java:115)
at org.apache.kafka.common.requests.AbstractRequest.toString(AbstractRequest.java:120)
at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:277)
at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:249)
at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:211)
at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:161)
at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.writeLog(LogContext.java:428)
at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.debug(LogContext.java:229)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:522)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:501)
at org.apache.kafka.clients.NetworkClient.sendInternalMetadataRequest(NetworkClient.java:467)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1141)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1046)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:559)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1296)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1206)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1110)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1031)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
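For reference, java.util.Base64.Encoder has been part of the JDK since Java 8, so with OpenJDK 1.8.0_292 the class should normally be loadable. One pattern that can produce a NoClassDefFoundError for a core JDK class on a long-running Tomcat is the JDK being patched or replaced in place (for example by a package update) while the JVM is still running. A minimal diagnostic sketch, with a hypothetical helper class, prints which runtime the process is actually using and whether the class resolves:

// Hypothetical diagnostic: log the runtime in use and try to load the class
// that the Kafka client fails to find.
public class Base64Check {
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        try {
            Class<?> encoder = Class.forName("java.util.Base64$Encoder");
            System.out.println("Loaded " + encoder + " via "
                    + (encoder.getClassLoader() == null ? "the bootstrap classloader" : encoder.getClassLoader()));
        } catch (Throwable t) {
            // A ClassNotFoundException/NoClassDefFoundError here would point at the
            // runtime itself (e.g. a JDK replaced underneath the running Tomcat).
            t.printStackTrace();
        }
    }
}

If the check fails inside Tomcat but passes in a freshly started JVM, restarting Tomcat on the current JDK would be the first thing to try.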

Related

Zookeeper : java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin

I had ZooKeeper 3.4.10 and Curator 2.12.0, but ZooKeeper versions below 3.5.8 have a strict transitive dependency on log4j 1.
I would like to use log4j 2, which is why I need to update the ZooKeeper version. I tried different combinations:
zookeeper 3.6.1 and curator 5.1.0
zookeeper 3.5.9 and curator 5.1.0
zookeeper 3.5.9 and curator 5.0.0
zookeeper 3.5.9 and curator 5.1.0 + exclude zookeeper dependency from curator
zookeeper 3.5.9 and curator 4.3.0 + exclude zookeeper dependency from curator
zookeeper 3.6.1 and curator 5.1.0 + exclude zookeeper dependency from curator
All of these options fail.
Option 1 fails with the following stack trace:
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.close(I)Z
2021-04-05 14:22:19.633 WARN o.a.c.loader.WebappClassLoaderBase The web application [ROOT] appears to have started a thread named [main-EventThread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2044)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
2021-04-05 14:22:19.638 ERROR c.w.event.ApplicationFailedListener ApplicationFailedEvent, possibly port is not available or analyze message above, application will be restarted
Options 2-6 appear to have the same stack trace:
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'zookeeperPropertySourceLocator' defined in org.springframework.cloud.zookeeper.config.ZookeeperConfigBootstrapConfiguration: Unsatisfied dependency
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'curatorFramework' defined in org.springframework.cloud.zookeeper.ZookeeperAutoConfiguration: Bean instantiation via factory method failed; nested exception i ...
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.curator.framework.CuratorFramework]: Factory method 'curatorFramework' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/zookee
Caused by: java.lang.NoClassDefFoundError: org/apache/zookeeper/admin/ZooKeeperAdmin
at org.apache.curator.framework.CuratorFrameworkFactory.<clinit>(CuratorFrameworkFactory.java:65)
Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.admin.ZooKeeperAdmin
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
Actually, the ZooKeeperAdmin class has been present in ZooKeeper since 3.5.7, which is why it is strange that it cannot be found.
Could someone suggest any ideas? Is it an issue with the dependency versions (I couldn't find any details for ZooKeeper > 3.4 with Curator)? Or is there a way to debug this and understand the issue?
You may need to check the ZooKeeper installed in your environment to see whether it actually contains the class org/apache/zookeeper/admin/ZooKeeperAdmin.
Checking the ZooKeeper issue tracker, I found that ZooKeeperAdmin was introduced in the issue below:
https://issues.apache.org/jira/browse/ZOOKEEPER-3689
and refer to the pr https://github.com/apache/zookeeper/pull/1285
So you have two ways to solve this problem:
downgrade ZooKeeper to 3.4.13 and use Curator 4.2.0; please refer to https://curator.apache.org/zk-compatibility-34.html.
upgrade Curator to 5.2.1 and upgrade your ZooKeeper environment, which needs to be 3.6.1 or above.
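Whichever direction you take, it can help to confirm at runtime which jar the ZooKeeper classes are actually loaded from, since a second, older zookeeper jar on the webapp classpath would explain a missing ZooKeeperAdmin even when the declared version should contain it. A small, hypothetical diagnostic:

// Hypothetical classpath probe: print which jar each ZooKeeper class comes from.
public class WhichJar {
    public static void main(String[] args) {
        String[] names = {
                "org.apache.zookeeper.ZooKeeper",
                "org.apache.zookeeper.admin.ZooKeeperAdmin"
        };
        for (String name : names) {
            try {
                Class<?> c = Class.forName(name);
                System.out.println(name + " -> "
                        + c.getProtectionDomain().getCodeSource().getLocation());
            } catch (ClassNotFoundException e) {
                System.out.println(name + " -> not found on this classloader");
            }
        }
    }
}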

ClassNotFoundException and NoClassDefFoundError in Flink app using Cassandra Driver

I am developing a Flink application that uses the Cassandra Driver to interact with a Cassandra DB. The driver is implemented as a singleton, and multiple Flink process functions interact with it to get data from Cassandra. I also add a future callback to the ResultSetFuture returned by each Session.executeAsync call. The app runs on Kubernetes in Docker containers.
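For context, here is a minimal sketch of the pattern described above, assuming the 3.x driver API (Session.executeAsync returning a ResultSetFuture) and Guava's Futures.addCallback; the wrapper class and method names are hypothetical, not taken from the question:

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;

// Hypothetical helper illustrating the singleton-driver + async-callback setup.
public final class CassandraQuery {
    private final Session session; // obtained from the singleton driver wrapper

    CassandraQuery(Session session) {
        this.session = session;
    }

    void queryAsync(String cql) {
        ResultSetFuture future = session.executeAsync(cql);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override
            public void onSuccess(ResultSet rs) {
                // hand the rows back to the Flink process function
            }

            @Override
            public void onFailure(Throwable t) {
                // per the stack trace below, the NoClassDefFoundError surfaces on
                // this path, executed on the driver's netty threads
            }
        }, MoreExecutors.directExecutor()); // assumes a Guava version that has directExecutor()
    }
}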
The environment is:
Flink version is 1.10.0 and using shaded netty, hadoop, guava and jackson.
Using cassandra-driver-mapping: 3.9.0 and shaded cassandra-driver-core: 3.9.0.
All dependencies are packaged into a single jar using Bazel. Before starting the Flink app, I check that all the required classes are in the jar and are correct and complete, and I use the shaded dependencies to avoid class-loading conflicts in the JVM. But when I start and run the Flink app, I keep seeing the following ClassNotFoundException in the TaskManager logs:
java.lang.NoClassDefFoundError: com/datastax/driver/core/SessionManager$State
at com.datastax.driver.core.SessionManager.getState(SessionManager.java:211)
at io.uhana.cassandra.CassandraDriver.sessionNeedsReconnect(CassandraDriver.java:508)
at io.uhana.cassandra.CassandraDriver.access$000(CassandraDriver.java:61)
at io.uhana.cassandra.CassandraDriver$1.onFailure(CassandraDriver.java:518)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1387)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1015)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:868)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:713)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:230)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:235)
at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:61)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:1011)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:647)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1262)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1180)
at com.datastax.shaded.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.datastax.shaded.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.datastax.shaded.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.datastax.shaded.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
at com.datastax.shaded.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.datastax.shaded.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:38)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.datastax.shaded.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1304)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.datastax.shaded.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:921)
at com.datastax.shaded.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:135)
at com.datastax.shaded.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:646)
at com.datastax.shaded.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:546)
at com.datastax.shaded.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:500)
at com.datastax.shaded.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
at com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at com.datastax.shaded.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException: com.datastax.driver.core.SessionManager$State
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:69)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 49 more
and
ConstantReconnectionPolicy$ConstantSchedule' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.NoClassDefFoundError: com/datastax/shaded/netty/handler/timeout/IdleState
at com.datastax.shaded.netty.handler.timeout.IdleStateHandler$ReaderIdleTimeoutTask.run(IdleStateHandler.java:493)
at com.datastax.shaded.netty.handler.timeout.IdleStateHandler$AbstractIdleTask.run(IdleStateHandler.java:466)
at com.datastax.shaded.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at com.datastax.shaded.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at com.datastax.shaded.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464)
at com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at com.datastax.shaded.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException: com.datastax.shaded.netty.handler.timeout.IdleState
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:69)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 9 more
I also notice that these issues are easier to reproduce when giving more resources and parallelism to the Flink app and the process functions, and that they mostly happen in the future callback.
Any help appreciated!
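One way to narrow this down is to log, from inside the failing callback, which classloader and jar the driver classes are served from. The stack traces show Flink's ChildFirstClassLoader in the loading chain, so if the job has restarted and the old user-code classloader has been released while the singleton driver's netty threads keep running, later callbacks may no longer be able to resolve classes that were present in the jar. A hedged diagnostic sketch (helper name hypothetical):

// Hypothetical probe: call from the future callback to see which loader/jar serves a class.
public final class ClassLoaderProbe {
    static void probe(String className) {
        try {
            Class<?> c = Class.forName(className);
            System.out.println(className
                    + " loader=" + c.getClassLoader()
                    + " source=" + c.getProtectionDomain().getCodeSource());
        } catch (Throwable t) {
            System.out.println(className + " not loadable here: " + t);
        }
        System.out.println("thread=" + Thread.currentThread().getName()
                + " contextClassLoader=" + Thread.currentThread().getContextClassLoader());
    }
}

For example, calling ClassLoaderProbe.probe("com.datastax.driver.core.SessionManager$State") inside onFailure would show whether the netty thread still sees the user-code classloader that originally loaded the driver.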

java.net.UnknownHostException in WildFly start

I'm getting the following error when I start up a Java server process (WildFly application server):
Caused by: java.net.UnknownHostException: proxy01.phx2.fedoraproject.org: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
The server is not harmed by this error, but it bothers me to have it in the logs. It seems to be related to my environment: the problem happens regardless of the server version and of the location on my file system.
I have checked /etc/hosts and /etc/resolv.conf, but I really have no idea where "proxy01.phx2.fedoraproject.org" comes from.
Any idea?
Thanks
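Not an answer as such, but for diagnosis: the stack trace shows the failure coming from InetAddress.getLocalHost(), which asks the OS for the machine's hostname and then tries to resolve it, so the name is most likely whatever the hostname command returns on that box rather than anything WildFly configures. A tiny standalone check (hypothetical class name) reproduces the same lookup outside the server:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical check: reproduce the lookup that WildFly triggers at startup.
public class LocalHostCheck {
    public static void main(String[] args) {
        try {
            System.out.println("local host resolves to " + InetAddress.getLocalHost());
        } catch (UnknownHostException e) {
            // Same failure as in the WildFly log: the configured hostname has no
            // matching entry in /etc/hosts or DNS.
            e.printStackTrace();
        }
    }
}

If this fails too, adding the hostname to /etc/hosts (or fixing the machine's hostname) would address the log noise.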

Liquibase 3.7.0 on Azure - Unexpected Liquibase Exception: Cannot find LockService

I am currently working on a Java 8 web app with Hibernate, Vaadin, and Liquibase as dependencies, which I tried to run on Azure for testing. I didn't write this app myself; the version that was given to me originally used Liquibase 3.0.7 and apparently ran on Tomcat 8.0.28.
I updated this to 3.7.0 (see bottom for specific reasons why) and now get the following error:
liquibase.exception.UnexpectedLiquibaseException: Cannot find LockService for unsupported
liquibase.lockservice.LockServiceFactory.getLockService(LockServiceFactory.java:74)
liquibase.Liquibase.update(Liquibase.java:183)
liquibase.Liquibase.update(Liquibase.java:179)
liquibase.Liquibase.update(Liquibase.java:175)
liquibase.Liquibase.update(Liquibase.java:168)
com.app.test.AppServlet.initDB(AppServlet.java:86)
com.app.test.AppServlet.servletInitialized(AppServlet.java:44)
com.vaadin.server.VaadinServlet.init(VaadinServlet.java:217)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
java.lang.Thread.run(Thread.java:748)
I am testing locally with Tomcat 8.5.47 and Tomcat 7, as well as the tomcat7 Maven plugin, which lets me run the app with Maven and see all the logging in the console. I am trying to deploy the .war to a Tomcat 8.5 on Azure.
So I have the following servers:
Tomcat 8.5.47 (local)
Tomcat 7.0.96 (local)
Tomcat 7.x (Maven Plugin)
Tomcat 8.5.41 (Azure)
The above problem occurs only on Azure.
All the local servers seem to work (I did not test very deeply, but at least they render the front page).
The database is a MariaDB instance also running on Azure, and all of the connection configuration is done in code via the Hibernate config or the JDBC connection string, so all the servers run the same code and connect to the exact same database.
I already tried the latest version, 3.8.0, but have the same problem there, so I went back to 3.7.0.
How can I fix this error?
Update:
I found more errors in the server log after activating verbose logging:
2019-10-23T13:30:58.611225727Z Caused by: liquibase.exception.ServiceNotFoundException: liquibase.exception.ServiceNotFoundException: Could not find unique implementation of liquibase.executor.Executor. Found 0 implementations
2019-10-23T13:30:58.611235727Z at liquibase.servicelocator.ServiceLocator.newInstance(ServiceLocator.java:216) ~[liquibase-core-3.7.0.jar:na]
2019-10-23T13:30:58.611246427Z at liquibase.executor.ExecutorService.lambda$getExecutor$0(ExecutorService.java:26) ~[liquibase-core-3.7.0.jar:na]
2019-10-23T13:30:58.611256327Z ... 31 common frames omitted
2019-10-23T13:30:58.611265628Z Caused by: liquibase.exception.ServiceNotFoundException: Could not find unique implementation of liquibase.executor.Executor. Found 0 implementations
2019-10-23T13:30:58.611275428Z at liquibase.servicelocator.ServiceLocator.findClass(ServiceLocator.java:188) ~[liquibase-core-3.7.0.jar:na]
2019-10-23T13:30:58.611285028Z at liquibase.servicelocator.ServiceLocator.newInstance(ServiceLocator.java:214) ~[liquibase-core-3.7.0.jar:na]
2019-10-23T13:30:58.611294328Z ... 32 common frames omitted
and
2019-10-24T08:59:39.545505776Z 24.10.2019 08:59:39.483 INFO com.app.test.AppServlet - servletInitialized
2019-10-24T08:59:39.546076490Z 24.10.2019 08:59:39.492 INFO com.app.test.AppServlet - initDB
2019-10-24T08:59:40.130035108Z 24.10.2019 08:59:40.129 WARN liquibase.database.DatabaseFactory - Unknown database: MySQL
2019-10-24T08:59:40.162920831Z 24.10.2019 08:59:40.162 INFO l.database.core.UnsupportedDatabase - Error getting default schema
2019-10-24T08:59:40.162974433Z liquibase.exception.UnexpectedLiquibaseException: liquibase.exception.ServiceNotFoundException: liquibase.exception.ServiceNotFoundException: Could not find unique implementation of liquibase.executor.Executor. Found 0 implementations
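The "Unknown database: MySQL" warning followed by UnsupportedDatabase lines up with the LockService and Executor errors: if Liquibase's ServiceLocator cannot find its MySQL/MariaDB Database implementation on Azure, it falls back to an unsupported database for which no LockService exists. A hedged diagnostic (connection URL and credentials are placeholders) shows which implementation Liquibase resolves for the same connection the app uses:

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;

// Hypothetical check: which Database implementation does Liquibase pick for this connection?
// On a healthy classpath a MariaDB/MySQL URL should not resolve to UnsupportedDatabase.
public class LiquibaseDbCheck {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/test", "user", "password"); // placeholder settings
        Database db = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(conn));
        System.out.println("Resolved implementation: " + db.getClass().getName()
                + " (shortName=" + db.getShortName() + ")");
    }
}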
More Background
Azure only lets me create App Services with Tomcat 8.5 or 9, and the app in its original state would not work on either of the Tomcat 8.5 servers.
I received the error: Could not find implementation of liquibase.logging.Logger
Updating Liquibase to 3.2.3 (because that's where the error should have been fixed) fixed this on all of my local machines, but I still received it on Azure.
After updating to 3.7.0 I no longer get the "could not find implementation" error, but I now receive the error above, as well as a problem with my XML schema which doesn't seem to crash the app and doesn't seem to be related:
Caused by: org.xml.sax.SAXParseException: s4s-elt-schema-ns: Namespace des Elements 'databaseChangeLog' muss aus dem Schema-Namespace 'http://www.w3.org/2001/XMLSchema' stammen. (In English: the namespace of element 'databaseChangeLog' must come from the schema namespace 'http://www.w3.org/2001/XMLSchema'.)

Restlet Version 2.2 java.lang.NoSuchMethodError: javax/xml/stream/XMLInputFactory.newFactory()

I am using Restlet version 2.2.0 with the IBM JDK 1.6.0_26 and am trying to implement a REST service. While executing my test project I get the following error:
Starting the internal [HTTP/1.1] server on port 8080
Server started ...
An exception occured writing the response entity
java.lang.NoSuchMethodError: javax/xml/stream/XMLInputFactory.newFactory()Ljavax/xml/stream/XMLInputFactory;
at org.restlet.ext.jackson.JacksonRepresentation.createObjectMapper(JacksonRepresentation.java:215)
at org.restlet.ext.jackson.JacksonRepresentation.getObjectMapper(JacksonRepresentation.java:333)
at org.restlet.ext.jackson.JacksonRepresentation.createObjectWriter(JacksonRepresentation.java:277)
at org.restlet.ext.jackson.JacksonRepresentation.getObjectWriter(JacksonRepresentation.java:361)
at org.restlet.ext.jackson.JacksonRepresentation.write(JacksonRepresentation.java:474)
at org.restlet.engine.adapter.ServerCall.writeResponseBody(ServerCall.java:519)
at org.restlet.engine.adapter.ServerCall.sendResponse(ServerCall.java:463)
at org.restlet.engine.adapter.ServerAdapter.commit(ServerAdapter.java:196)
at org.restlet.engine.adapter.HttpServerHelper.handle(HttpServerHelper.java:153)
at org.restlet.engine.connector.HttpServerHelper$1.handle(HttpServerHelper.java:73)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)
at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:77)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:80)
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:567)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)
at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:539)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:737)
Unable to send error response
java.io.IOException: headers already sent
at sun.net.httpserver.ExchangeImpl.sendResponseHeaders(ExchangeImpl.java:180)
at sun.net.httpserver.HttpExchangeImpl.sendResponseHeaders(HttpExchangeImpl.java:80)
at org.restlet.engine.connector.HttpExchangeCall.writeResponseHead(HttpExchangeCall.java:157)
at org.restlet.engine.adapter.ServerCall.sendResponse(ServerCall.java:459)
at org.restlet.engine.adapter.ServerAdapter.commit(ServerAdapter.java:214)
at org.restlet.engine.adapter.HttpServerHelper.handle(HttpServerHelper.java:153)
at org.restlet.engine.connector.HttpServerHelper$1.handle(HttpServerHelper.java:73)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)
at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:77)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:80)
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:567)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)
at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:539)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:737)
While using the Sun/Oracle 1.6 JDK the problem doesn't exist, but I need to stay on the IBM JDK 1.6.0_26.
Any help would be appreciated.
Thanks,
EHa
It's likely a problem with the fallback JAXP selection between the two JDKs. Run java with -Djaxp.debug=1 and have a look at your logs. I'm currently encountering this same issue and my logs are showing
JAXP: find factoryId =javax.xml.parsers.SAXParserFactory
JAXP: loaded from fallback value: com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl
JAXP: created new instance of class com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl using ClassLoader: null
I'm pretty certain that my issue is OSGi related. Here's the link to another question with what could be a similar outcome: Unable to find a factory for http://www.w3.org/2001/XMLSchema
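In the same spirit, a reflective check makes it easy to see whether the runtime's copy of javax.xml.stream.XMLInputFactory declares the no-arg newFactory() at all, and where that class is loaded from; the question itself indicates the method is present on the Sun/Oracle 1.6 JDK but evidently not on this IBM 1.6 runtime. The helper class is hypothetical:

import javax.xml.stream.XMLInputFactory;

// Hypothetical check: where does XMLInputFactory come from, and does it declare newFactory()?
public class StaxCheck {
    public static void main(String[] args) {
        Class<?> c = XMLInputFactory.class;
        Object source = c.getProtectionDomain().getCodeSource();
        System.out.println("XMLInputFactory loaded from: "
                + (source == null ? "the JDK's own libraries" : source));
        try {
            c.getMethod("newFactory");
            System.out.println("newFactory() is available");
        } catch (NoSuchMethodException e) {
            System.out.println("newFactory() is missing - same symptom as the"
                    + " NoSuchMethodError above; a newer StAX API jar on the"
                    + " classpath/endorsed dirs would be one way around it");
        }
    }
}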
