Tomcat 9 Valve Causing Server Start Failure - java

I have written a custom Tomcat valve that parses HTTP headers and uses them to authenticate. The valve works by extending AuthenticatorBase. I compiled it and placed it in $CATALINA_HOME/lib. Here is the code:
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.authenticator.AuthenticatorBase;
import org.apache.catalina.connector.Request;
import org.apache.catalina.realm.GenericPrincipal;

public class TomcatLogin extends AuthenticatorBase {

    private String[] roleNames = null;
    private String ivcreds = null;
    private String ivuser = null;
    private String ivgroups = null;
    private GenericPrincipal principal = null;

    // Constructor defers to the superclass (AuthenticatorBase)
    public TomcatLogin() {
        super();
    }

    @Override
    protected boolean doAuthenticate(Request request, HttpServletResponse response) {
        List<String> groupsList = new ArrayList<>();
        System.out.println("Obtaining Headers from Request");
        try {
            ivuser = request.getHeader("iv-user");
            ivcreds = request.getHeader("iv-creds");
            ivgroups = request.getHeader("iv-groups");
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Require all header credentials for proper authentication
        if (ivuser == null || ivcreds == null || ivgroups == null) {
            return false;
        }
        // Split ivgroups on commas, then strip the leading and trailing quotation marks
        roleNames = ivgroups.split(",");
        for (int i = 0; i < roleNames.length; i++) {
            roleNames[i] = roleNames[i].substring(1, roleNames[i].length() - 1);
            groupsList.add(roleNames[i]);
        }
        principal = new GenericPrincipal(ivuser, ivcreds, groupsList);
        request.setUserPrincipal(principal);
        return true;
    }

    @Override
    public String getAuthMethod() {
        return "HTTPAuthenticator";
    }
}
I then tell Tomcat to use the valve in server.xml. The documentation for extending AuthenticatorBase says: "When this class is utilized, the Context to which it is attached (or a parent Container in a hierarchy) must have an associated Realm that can be used for authenticating users and enumerating the roles to which they have been assigned." I thought I had configured it correctly, but it throws an error and Tomcat fails to start. Here is the server.xml config:
<?xml version="1.0" encoding="UTF-8"?>
...
<Server>
  <Service name="Catalina">
    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>
      <!-- Here is my valve -->
      <Valve className="package.of.my.custom.valve.TomcatLogin" />
      <Host ... >
        ...
      </Host>
    </Engine>
  </Service>
</Server>
And here is the Error message I am getting:
10-Jan-2019 10:11:03.576 SEVERE [main] org.apache.tomcat.util.digester.Digester.endElement End event threw exception
java.lang.reflect.InvocationTargetException
... A bunch of useless stacktrace
Caused by: java.lang.IllegalArgumentException: Configuration error: Must be attached to a Context
at org.apache.catalina.authenticator.AuthenticatorBase.setContainer(AuthenticatorBase.java:278)
at org.apache.catalina.core.StandardPipeline.addValve(StandardPipeline.java:335)
at org.apache.catalina.core.ContainerBase.addValve(ContainerBase.java:1133)
... 27 more
10-Jan-2019 10:11:03.579 WARNING [main] org.apache.catalina.startup.Catalina.load Catalina.start using conf/server.xml: Error at (107, 82) : Configuration error: Must be attached to a Context
10-Jan-2019 10:11:03.579 SEVERE [main] org.apache.catalina.startup.Catalina.start Cannot start server. Server instance is not configured.
I think my valve is written correctly, so my guess is that the issue is in the configuration. I'm not sure why it is not getting a Context to attach to. Any ideas?
Edit:
I tried putting the valve in my app's META-INF/context.xml (I had to make one since there wasn't one to begin with). Here it is:
<?xml version="1.0"?>
<Context>
<Valve className="package.of.my.custom.valve.TomcatLogin" />
</Context>
The server then starts, but it fails to deploy any of the applications. I am getting an error similar to the one I saw with an IBM valve that I originally tried to use instead of this custom implementation, where it cannot find the AuthenticatorBase class from catalina.jar. Here are the SEVERE errors I am getting:
10-Jan-2019 15:34:06.673 SEVERE [main] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/sample-DB]]
...
Stacktrace info
...
at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:441)
Caused by: java.lang.ExceptionInInitializerError
at org.apache.tomcat.util.digester.Digester.startDocument(Digester.java:1102)
Caused by: java.lang.NullPointerException
at java.nio.charset.Charset.put(Charset.java:538)
...
Stacktrace
...
10-Jan-2019 15:34:06.688 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/fmac/deploy/sample.war] has finished in [11] ms
10-Jan-2019 15:34:06.689 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/fmac/deploy/sample-auth.war]
10-Jan-2019 15:34:06.692 SEVERE [main] org.apache.tomcat.util.digester.Digester.startElement Begin event threw error
java.lang.NoClassDefFoundError: org/apache/catalina/authenticator/AuthenticatorBase
...
Stacktrace
The error below is the most confusing one. How can it not find this class?
...
Caused by: java.lang.ClassNotFoundException: org.apache.catalina.authenticator.AuthenticatorBase
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
...
Stacktrace
...
10-Jan-2019 15:34:13.823 SEVERE [https-jsse-nio2-8443-exec-3] org.apache.coyote.http11.Http11Processor.service Error processing request
java.lang.NoClassDefFoundError: Could not initialize class org.apache.tomcat.util.buf.B2CConverter
at org.apache.catalina.connector.CoyoteAdapter.convertURI(CoyoteAdapter.java:1072)

Put your <Valve> inside of your <Context> element. That should usually be within a META-INF/context.xml file in your web application and not in conf/server.xml.

So I was finally able to find out what the issues were. We have a custom Tomcat setup, part of which involves appending to the classpath in the java command. For some reason, some of the database drivers being added there caused Tomcat to fail to find any of the libraries in CATALINA_HOME/lib. I'm running inside a Docker container, and this was some old vestigial stuff from a VM version. We ended up just having to toss those drivers out.
I'm not really sure why they would completely override the base lib/ directory, but at least these errors went away and I was actually able to use the pre-built authenticator I had instead of fine-tuning this custom one.

Related

Error invoking scheduled task Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]

I have the following problem when running this schedule.
import io.micronaut.scheduling.annotation.Scheduled;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Inject;
import javax.inject.Singleton;

@Singleton
public class TaskScheduler {

    private static final Logger LOG = LoggerFactory.getLogger(TaskScheduler.class);

    @Inject
    private BuildLayerJob buildLayerJob;

    @Scheduled(fixedDelay = "30s", initialDelay = "30s")
    public void loadRegistriesDescriptions() {
        try {
            LOG.info("Loading the list of registries every 30s.");
            buildLayerJob.getBuildLayer().loadRegistries();
        } catch (Exception exception) {
            LOG.error("Error loading the list of registries every 30s: " + exception.getMessage());
            //exception.printStackTrace();
        }
    }
}
The first execution runs without problems, but when the delay expires and the task runs again, it throws the following error.
20:26:59.291 [pool-1-thread-6] ERROR i.m.s.DefaultTaskExceptionHandler - Error invoking scheduled task Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1719)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:981)
at io.micronaut.configuration.lettuce.health.$RedisHealthIndicatorDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:984)
at io.micronaut.management.health.monitor.$HealthMonitorTaskDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2407)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2393)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2084)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2058)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:618)
at io.micronaut.scheduling.processor.ScheduledMethodProcessor.lambda$process$5(ScheduledMethodProcessor.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset$$$capture(FutureTask.java:305)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:234)
at io.lettuce.core.RedisClient.connect(RedisClient.java:207)
at io.lettuce.core.RedisClient.connect(RedisClient.java:192)
at io.micronaut.configuration.lettuce.AbstractRedisClientFactory.redisConnection(AbstractRedisClientFactory.java:51)
at io.micronaut.configuration.lettuce.DefaultRedisClientFactory.redisConnection(DefaultRedisClientFactory.java:52)
at io.micronaut.configuration.lettuce.$DefaultRedisClientFactory$RedisConnection1Definition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
... 31 common frames omitted
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:6379
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I understand that there are problems with the connection to Redis, but the microservice deployed in GCP keeps producing the same error.
app.yaml
runtime: java11
service: default
instance_class: B2
env_variables:
  LAYERS_SERVER_PORT: 8080
  REDIS_FIXEDDELAY: 1s
  REDISA_URL: "redis://A"
  REDISB_URL: "redis://B"
  REDISC_URL: "redis://C"
  REDISD_URL: "redis://D"
basic_scaling:
  max_instances: 1
  idle_timeout: 270s
vpc_access_connector:
  name: "projects/example/locations/us-central1/connectors/example"
Local settings. application.yml:
micronaut:
  application:
    name: example
  server:
    port: ${EXAMPLE_SERVER_PORT:3000}
    cors:
      enabled: true
---
redis:
  servers:
    REDISA:
      uri: redis://IP_A
    REDISB:
      uri: redis://IP_B
    REDISC:
      uri: redis://IP_C
    REDISD:
      uri: redis://IP_D
Repository layers.server.repo.InfoRepositoryImpl:
@Singleton
public class InfoRepositoryImpl implements InfoRepository {

    private BuildLayerJob buildLayerJob;

    @Inject @Named("REDISB") RedisAsyncCommands<String, String> reddisConnectionB;
    @Inject @Named("REDISA") RedisAsyncCommands<String, String> reddisConnectionA;

    private static final Logger LOG = LoggerFactory.getLogger(InfoRepositoryImpl.class);

    public InfoRepositoryImpl(BuildLayerJob buildLayerJob) {
        this.buildLayerJob = buildLayerJob;
    }

    // ... implementation of methods that process information with Redis
}
Please check whether you have the io.micronaut.redis:micronaut-redis-lettuce dependency on your classpath / in your build file.
By default, Micronaut will assume the Redis server to be at localhost:6379. Since health checks are enabled by default when redis-lettuce is activated, it will keep probing that address.
If you are using a Micronaut application.yml, you need to provide a server URI that is accessible from the running app.
Micronaut redis
Example - application.yml
redis:
  uri: redis://localhost
  ssl: true
  timeout: 30s
You can also use the connection string patterns below to provide details about the Redis server.
Redis Standalone
redis://[[username:]password@]host[:port][/database][?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]
Redis Standalone (SSL)
rediss://[[username:]password@]host[:port][/database][?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]
Redis Standalone (Unix Domain Sockets)
redis-socket://[[username:]password@]path[?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]
For more details on connection strings, see the Redis connection string documentation.
Micronaut redis configuration properties
Such errors can also occur when the data source in question is autoconfigured. You can disable Redis autoconfiguration if you're not using it in the application. If you do need Redis, then you should set spring.redis.host and spring.redis.port.
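That last suggestion is Spring-specific rather than Micronaut-specific. If the application were a Spring Boot app, a minimal sketch of the two options could look like this (the Application class name is illustrative; RedisAutoConfiguration and the spring.redis.* properties come from Spring Boot's autoconfigure module):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration;

// Option 1: Redis is not used, so keep Spring Boot from autoconfiguring it.
@SpringBootApplication(exclude = RedisAutoConfiguration.class)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

// Option 2: Redis is needed, so point the autoconfiguration at the real server
// instead of the localhost default, e.g. in application.properties:
//   spring.redis.host=my-redis-host
//   spring.redis.port=6379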

Shutting down Apache ActiveMQ

I'm running a Spring Boot (1.3.5) console application with an embedded ActiveMQ server (5.10.0), which works just fine for receiving messages. However, I'm having trouble shutting down the application without exceptions.
This exception is thrown once for each queue, after hitting Ctrl-C:
2016-09-21 15:46:36.561 ERROR 18275 --- [update]] o.apache.activemq.broker.BrokerService : Failed to start Apache ActiveMQ ([my-mq-server, null], {})
java.lang.IllegalStateException: Shutdown in progress
at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.lang.Runtime.addShutdownHook(Runtime.java:211)
at org.apache.activemq.broker.BrokerService.addShutdownHook(BrokerService.java:2446)
at org.apache.activemq.broker.BrokerService.doStartBroker(BrokerService.java:693)
at org.apache.activemq.broker.BrokerService.startBroker(BrokerService.java:684)
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:605)
at org.apache.activemq.transport.vm.VMTransportFactory.doCompositeConnect(VMTransportFactory.java:127)
at org.apache.activemq.transport.vm.VMTransportFactory.doConnect(VMTransportFactory.java:56)
at org.apache.activemq.transport.TransportFactory.connect(TransportFactory.java:65)
at org.apache.activemq.ActiveMQConnectionFactory.createTransport(ActiveMQConnectionFactory.java:314)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:329)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:302)
at org.apache.activemq.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:242)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:283)
at org.apache.activemq.jms.pool.PooledConnectionFactory$1.makeObject(PooledConnectionFactory.java:96)
at org.apache.activemq.jms.pool.PooledConnectionFactory$1.makeObject(PooledConnectionFactory.java:93)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1041)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:357)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:279)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:243)
at org.apache.activemq.jms.pool.PooledConnectionFactory.createConnection(PooledConnectionFactory.java:212)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.listener.AbstractJmsListeningContainer.createSharedConnection(AbstractJmsListeningContainer.java:413)
at org.springframework.jms.listener.AbstractJmsListeningContainer.refreshSharedConnection(AbstractJmsListeningContainer.java:398)
at org.springframework.jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful(DefaultMessageListenerContainer.java:925)
at org.springframework.jms.listener.DefaultMessageListenerContainer.recoverAfterListenerSetupFailure(DefaultMessageListenerContainer.java:899)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1075)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-09-21 15:46:36.564 INFO 18275 --- [update]] o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.12.3 (my-mq-server, null) is shutting down
It seems as if the DefaultMessageListenerContainer tries to start an ActiveMQ server, which doesn't make sense to me. I've set the phase of the BrokerService to Integer.MAX_VALUE - 1 and the phase of the DefaultJmsListenerContainerFactory to Integer.MAX_VALUE so that the containers go away before the ActiveMQ server is stopped.
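For reference, a minimal sketch of that phase arrangement in plain Spring, assuming the embedded broker is wrapped in a SmartLifecycle bean (the configuration class and bean names are illustrative, not taken from the question). Components with a higher phase are stopped first, so the listener containers go down before the broker:
import javax.jms.ConnectionFactory;

import org.apache.activemq.broker.BrokerService;
import org.springframework.context.SmartLifecycle;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
public class ShutdownOrderConfig {

    // Containers created from this factory get the highest phase, so they are
    // stopped before anything with a lower phase.
    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setPhase(Integer.MAX_VALUE);
        return factory;
    }

    // The broker is wrapped in a SmartLifecycle with a slightly lower phase, so
    // it is only stopped after the listener containers have shut down.
    @Bean
    public SmartLifecycle brokerLifecycle(BrokerService broker) {
        return new SmartLifecycle() {
            private volatile boolean running;

            @Override
            public void start() {
                try { broker.start(); running = true; }
                catch (Exception e) { throw new IllegalStateException(e); }
            }

            @Override
            public void stop() {
                try { broker.stop(); }
                catch (Exception e) { throw new IllegalStateException(e); }
                finally { running = false; }
            }

            @Override
            public void stop(Runnable callback) { stop(); callback.run(); }

            @Override
            public boolean isRunning() { return running; }

            @Override
            public boolean isAutoStartup() { return true; }

            @Override
            public int getPhase() { return Integer.MAX_VALUE - 1; }
        };
    }
}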
I have this in my main():
public static void main(String[] args) {
    final ConfigurableApplicationContext context = SpringApplication.run(SiteServer.class, args);
    context.registerShutdownHook();
}
I've tried setting daemon to true as suggested here: Properly Shutting Down ActiveMQ and Spring DefaultMessageListenerContainer.
Any ideas? Thanks! =)
Found it. This problem occurs when the Camel context is shut down after the BrokerService. Adding proper lifecycle management so that Camel is shut down before the broker resolved the issue. Now everything shuts down cleanly without errors.

Akka remote routees hostname configuration issue

I am experimenting with the Akka remote feature for a tool I am making. I was able to make the core and remote systems work on the same host with different ports. Note that my remote servers are used through an Akka router, as explained in the Akka docs.
Now I am trying to use several Azure virtual machines for a better experiment, but I am running into some issues.
The core application has the following configuration (I've changed some names for security reasons):
akka.actor.deployment {
  /querierActor/querierPool {
    router = round-robin-pool
    nr-of-instances = 12
    target.nodes = [
      "akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560",
      "akka.tcp://SYSTEM@remote-srv02.cloudapp.net:2560",
      "akka.tcp://SYSTEM@remote-srv03.cloudapp.net:2560"
    ]
  }
}

// remote configuration. Use it for multiple-machine calculation
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      maximum-frame-size = 100MiB
      port = 2552
      hostname = "0.0.0.0"
    }
  }
}
The remote hosts have the following configuration:
akka.actor.deployment {
  /querierActor/querierPool {
    router = balancing-pool
    nr-of-instances = 15
  }
}

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      maximum-frame-size = 100MiB
      hostname = "0.0.0.0"
      port = 2560
    }
  }
}
Using this configuration, the server and remote hosts apparently are able to communicate, but the remote host starts to log some errors:
[ERROR] [01/17/2015 12:55:05.734] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM#0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-0/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560/]] arriving at [akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560] inbound addresses are [akka.tcp://SYSTEM#0.0.0.0:2560]
And after a while, the server and remote host start to log errors and freeze.
Server error:
[WARN] [01/17/2015 12:21:05.658] [CRAWLER-LD-akka.remote.default-remote-dispatcher-7] [akka.tcp://SYSTEM#0.0.0.0:2552/system/remote-watcher] Detected unreachable: [akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560]
[WARN] [01/17/2015 12:21:05.664] [SYSTEM-akka.remote.default-remote-dispatcher-17] [Remoting] Association to [akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560] with unknown UID is reported as quarantined, but address cannot be quarantined without knowing the UID, gating instead for 5000 ms.
(...)
[INFO] [01/17/2015 12:21:05.712] [SYSTEM-akka.actor.default-dispatcher-6] [akka://SYSTEM/user/querierActor/querierPool] Message [akka.dispatch.sysmsg.DeathWatchNotification] from Actor[akka://SYSTEM/user/querierActor/querierPool#-1217916605] to Actor[akka://SYSTEM/user/querierActor/querierPool#-1217916605] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
(...)
Remote error (similar lines several times):
(...)
[ERROR] [01/17/2015 14:21:16.371] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM#0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-2/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560/]] arriving at [akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560] inbound addresses are [akka.tcp://SYSTEM#0.0.0.0:2560]
[ERROR] [01/17/2015 14:21:17.388] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM#0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-2/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560/]] arriving at [akka.tcp://SYSTEM#remote-srv01.cloudapp.net:2560] inbound addresses are [akka.tcp://SYSTEM#0.0.0.0:2560]
[WARN] [01/17/2015 14:21:17.465] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM#0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-2] Association with remote system [akka.tcp://SYSTEM#0.0.0.0:2552] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
[INFO] [01/17/2015 14:21:17.467] [SYSTEM-akka.actor.default-dispatcher-21] [akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3] Message [akka.remote.transport.AssociationHandle$Disassociated] from Actor[akka://SYSTEM/deadLetters] to Actor[akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3#-2070785548] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [01/17/2015 14:21:17.468] [SYSTEM-akka.actor.default-dispatcher-21] [akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3] Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://SYSTEM/deadLetters] to Actor[akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3#-2070785548] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
(...)
I figured that the problem might be in the hostname configuration and tried to set the actual hostname on both the server and the remote host. But in this case, the system does not even load:
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'FacadeMemory' defined in file [D:\data\development\git\semantic-web-crawler\crawlerld.core\target\classes\net\dovale\websemantics\linkedDataRecommender\facade\memory\FacadeMemory.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [facade.memory.FacadeMemory]: Constructor threw exception; nested exception is org.jboss.netty.channel.ChannelException: Failed to bind to: /remote-srv01.cloudapp.net:2560
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1077)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1022)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:298)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:706)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:762)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:109)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:691)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:952)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:941)
at facade.memory.GUIMain.main(GUIMain.java:23)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [facade.memory.FacadeMemory]: Constructor threw exception; nested exception is org.jboss.netty.channel.ChannelException: Failed to bind to: /remote-srv01.cloudapp.net:2560
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:164)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:89)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1070)
... 21 more
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /remote-srv01.cloudapp.net:2560
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:236)
at scala.util.Try$.apply(Try.scala:191)
at scala.util.Success.map(Try.scala:236)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.Net.bind(Net.java:428)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I don't know what I am doing wrong. I tried to find information about the issue, but nothing I found is related to my problem. I have also opened the ports in the Azure configuration.
How can I enable my server host to communicate properly with my remote hosts?
I was able to address the problem.
After some fruitless research, I had to try a few different things. I am making some assumptions that could be wrong, as I didn't find much other information; if you are reading this answer and find any error, please let me know.
The problem was that the bind call (sun.nio.ch.Net.bind0, apparently, though I didn't find many docs about it) only accepts the following kinds of addresses: 0.0.0.0 (accept connections on every network interface of the machine), a 127.0.0.x loopback address (only local requests, I guess), or the IP address of one of the machine's network interfaces, in which case requests are only accepted on that specific interface.
The catch is that the "hostname" property is also used to address remote Akka nodes: when the host node calls a remote node, that value identifies where the result must be sent back once the work is finished. So if you set hostname to 0.0.0.0 and then try to reach the node by its DNS name (which may not be associated with any network interface), it will fail. You have to identify the machine by the same IP as one of its network interfaces.
So, my setup changed slightly:
For the host node, I made this change:
(...)
akka.actor.deployment {
  /sparqlQuerierMasterActor/sparqlQuerierPool {
    router = round-robin-pool
    nr-of-instances = 12
    target.nodes = [
      "akka.tcp://SYSTEM@XXX.XXX.XXX.XXX:2560",
      "akka.tcp://SYSTEM@YYY.YYY.YYY.YYY:2560",
      "akka.tcp://SYSTEM@ZZZ.ZZZ.ZZZ.ZZZ:2560"
    ]
  }
}
(...)
XXX, YYY and ZZZ are reachable IPs of remote nodes, each of which is also bound to a network interface on its machine.
The configuration of the remote node changed to:
(...)
remote {
  enabled-transports = ["akka.remote.netty.tcp"]
  netty.tcp {
    maximum-frame-size = 100MiB
    hostname = "YYY.YYY.YYY.YYY"
    port = 2560
  }
}
(...)
I didn't test whether I can keep the previous 0.0.0.0 configuration; maybe it is possible.
This solution allowed the host and remote nodes to communicate flawlessly =)

Error while applying OWSM policies on Web service created from Jdeveloper 11g (SEVERE: java.io.FileNotFoundException: .\config\jps-config.xml)

I created a sample class Testws with one simple method, sayHi(String name), that simply shows a welcome message. The goal is to create a web service application for this class and apply OWSM policies.
All the needed files were generated from JDeveloper 11.1.1.7, and I added the oracle/wss_username_token_client_policy policy to the client side in TestwsPortClient. The problem is that when I run the main method from TestwsPortClient I get the following error:
SEVERE: java.io.FileNotFoundException: .\config\jps-config.xml (The system cannot find the path specified)
I tried (but it did not work) to set the system property so it can find the missing file, as follows:
System.setProperty("oracle.security.jps.config",
"C:\\Users\\user\\AppData\\Roaming\\JDeveloper\\system11.1.1.7.40.64.93\\DefaultDomain\\config\\fmwconfig\\jps-config.xml");
Here is the Client Class TestwsPortClient:
public static void main(String[] args) {
    testwsService = new TestwsService();
    SecurityPoliciesFeature securityFeatures =
        new SecurityPoliciesFeature(new String[] { "oracle/wss_username_token_client_policy" });
    Testws testws = testwsService.getTestwsPort(securityFeatures);
    // Add your code to call the desired methods.
    System.setProperty("oracle.security.jps.config",
        "C:\\Users\\user\\AppData\\Roaming\\JDeveloper\\system11.1.1.7.40.64.93\\DefaultDomain\\config\\fmwconfig\\jps-config.xml");

    List<CredentialProvider> credProviders = new ArrayList<CredentialProvider>();
    String username = "weblogic";
    String password = "weblogic1";
    CredentialProvider cp = new ClientUNTCredentialProvider(username.getBytes(), password.getBytes());
    credProviders.add(cp);

    Map<String, Object> rc = ((BindingProvider) testws).getRequestContext();
    rc.put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST, credProviders);
    testws.sayHi("Salman");
}
Here is the exception stack trace.
Note: the stack trace is very long; this is just the part that I think gives an idea of the error:
SEVERE: java.io.FileNotFoundException: .\config\jps-config.xml (The system cannot find the path specified)
SEVERE: java.io.FileNotFoundException: .\config\jps-config.xml (The system cannot find the path specified)
INFO: WSM-09004 Component auditing cannot be initialized.
INFO: Recipient Alias property not configured in the policy. Defaulting to encrypting with signers certificate.
WARNING: JPS-00065 Jps platform factory creation failed. Reason: java.lang.ClassNotFoundException: oracle.security.jps.se.JpsSEPlatformFactory.
WARNING: JPS-00065 Jps platform factory creation failed. Reason: {0}.
......
at $Proxy34.sayHi(Unknown Source)
at test.TestwsPortClient.main(TestwsPortClient.java:44)
Caused by: oracle.security.jps.JpsException: JPS-00065: Jps platform factory creation failed. Reason: java.lang.ClassNotFoundException: oracle.security.jps.se.JpsSEPlatformFactory.
at oracle.security.jps.ee.JpsPlatformFactory$2.run(JpsPlatformFactory.java:197)
at oracle.security.jps.ee.JpsPlatformFactory$2.run(JpsPlatformFactory.java:190)
........
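As a side note, a system property only affects code that reads it after the property has been set, so setting oracle.security.jps.config in the middle of main() may simply come too late. Purely as a sketch (not verified against this OWSM setup), the same main() with the property set as the very first statement, before TestwsService is instantiated, would start like this:
public static void main(String[] args) {
    // Point JPS at the configuration file before any OWSM/JPS classes are
    // initialized; otherwise the default ".\config\jps-config.xml" lookup may
    // already have happened.
    System.setProperty("oracle.security.jps.config",
        "C:\\Users\\user\\AppData\\Roaming\\JDeveloper\\system11.1.1.7.40.64.93"
        + "\\DefaultDomain\\config\\fmwconfig\\jps-config.xml");

    TestwsService testwsService = new TestwsService();
    // ... rest of the client code unchanged
}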

Accessing Tomcat Monitors using Java code

I am very new to JMX. I am trying to log Tomcat statistics like threads used, cache, sessions and other standard values, and I am trying to achieve this with Java code.
Here is what I have done so far (I am trying to access the values of a local Tomcat 6.0 instance on Windows).
1) I added the following options in catalina.bat:
set CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
After that, I restarted the Tomcat server.
2) Then I wrote the following code.
package com.ss.fg;

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SystemConfigManagement {

    static MBeanServer connection = ManagementFactory.getPlatformMBeanServer();

    public static void main(String[] args) throws Exception {
        getActiveSession();
    }

    public static void getActiveSession() throws Exception {
        ObjectName name = new ObjectName("Catalina:type=Manager,path=/MMDisplay,host=localhost");
        String attrValue = ManagementFactory.getPlatformMBeanServer().getAttribute(name, "activeSessions").toString();
        System.out.println(attrValue);
    }
}
I even tried context instead of path.
I am getting the following exception
Exception in thread "main" javax.management.InstanceNotFoundException: Catalina:type=Manager,path=/MMDisplay,host=localhost
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(Unknown Source)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(Unknown Source)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(Unknown Source)
at com.softsmith.floodgates.SystemConfigManagement.getActiveSession(SystemConfigManagement.java:15)
at com.softsmith.floodgates.SystemConfigManagement.main(SystemConfigManagement.java:10)
How can I resolve this issue?
Should I add some JAR files, or do I need some other settings?
Please help.
When using
MBeanServer connection = ManagementFactory.getPlatformMBeanServer();
you are actually connecting to the MBean server of the JVM running your program, not to that of your Tomcat instance, so it doesn't know about the Catalina MBeans.
To establish a connection to a remote JVM, try something like:
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://servername:9999/jmxrmi");
JMXConnector connector = JMXConnectorFactory.connect(url);
MBeanServerConnection server = connector.getMBeanServerConnection();

// do work here
ObjectName name = new ObjectName("Catalina:type=Manager,path=/manager,host=localhost");
String attrValue = server.getAttribute(name, "activeSessions").toString();
System.out.println(attrValue);

// ...and don't forget to close the connection
connector.close();
If the error is still there, make sure you are using the correct object name.
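If the InstanceNotFoundException persists, it helps to list the Manager MBeans that Tomcat actually registered and copy the name from there, since the key (path vs. context) differs between Tomcat versions. A small sketch of such a lookup, assuming the JMX port 9004 from the question:
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListManagerMBeans {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9004/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            // Print every Manager MBean name; the output shows the exact key/value
            // pairs (e.g. path=/MMDisplay or context=/MMDisplay) to use in queries.
            Set<ObjectName> names =
                server.queryNames(new ObjectName("Catalina:type=Manager,*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}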
When using
MBeanServer connection = ManagementFactory.getPlatformMBeanServer();
I was actually connecting to the MBean server of the JVM running my program, not to that of my Tomcat instance, so it didn't know about the Catalina MBeans.
So I tried to connect to JMX remotely, but got an error like this:
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is....
This is how I solved it:
First, I added catalina-jmx-remote.jar to the TOMCAT_HOME/lib directory of Tomcat, then configured the listener in server.xml by adding the following snippet under the <Server> tag:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="10001" rmiServerPortPlatform="10002" useLocalPorts="true" />
Finally, I set the following in my Tomcat startup script:
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=localhost -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
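With the listener pinning the RMI registry and server ports to 10001 and 10002 as shown above, the client-side JMX service URL names both ports explicitly. A minimal sketch of such a connection (assuming localhost, matching the java.rmi.server.hostname setting above):
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ConnectViaFixedPorts {
    public static void main(String[] args) throws Exception {
        // rmiServerPortPlatform (10002) comes first, rmiRegistryPortPlatform (10001) second.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:10002/jndi/rmi://localhost:10001/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            System.out.println("Registered MBeans: " + server.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}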
