I am experimenting with the Akka remote feature for a tool I am building. I was able to make the core and remote systems work on the same host with different ports. Note that my remote servers run behind a router, as explained in the Akka docs.
Now I am trying to use several Azure virtual machines for a larger experiment, but I am running into some issues.
The core application has the following configuration (I've changed some names for security reasons):
akka.actor.deployment {
  /querierActor/querierPool {
    router = round-robin-pool
    nr-of-instances = 12
    target.nodes = [
      "akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560",
      "akka.tcp://SYSTEM@remote-srv02.cloudapp.net:2560",
      "akka.tcp://SYSTEM@remote-srv03.cloudapp.net:2560"
    ]
  }
}
// remoting configuration, used for multi-machine calculation
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      maximum-frame-size = 100MiB
      port = 2552
      hostname = "0.0.0.0"
    }
  }
}
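For context, a pool at the path /querierActor/querierPool is created from inside the parent actor with FromConfig, so the router type and target.nodes above are applied; a minimal sketch (QuerierWorker is a placeholder for the real routee class):

import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.routing.FromConfig;

public class QuerierActor extends UntypedActor {
    // The child name "querierPool" must match the deployment path
    // /querierActor/querierPool for the configuration above to apply.
    private final ActorRef pool = getContext().actorOf(
            FromConfig.getInstance().props(Props.create(QuerierWorker.class)),
            "querierPool");

    @Override
    public void onReceive(Object message) {
        // Forward everything to the router, which round-robins across
        // the routees deployed on the target nodes.
        pool.forward(message, getContext());
    }
}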
The remote hosts have the following configuration:
akka.actor.deployment {
  /querierActor/querierPool {
    router = balancing-pool
    nr-of-instances = 15
  }
}
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      maximum-frame-size = 100MiB
      hostname = "0.0.0.0"
      port = 2560
    }
  }
}
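For reference, the entry point on a remote host can be as small as this sketch (assuming the configuration above is in application.conf; the system name must match the one used in target.nodes):

import akka.actor.ActorSystem;
import com.typesafe.config.ConfigFactory;

public class RemoteMain {
    public static void main(String[] args) {
        // Loads application.conf (the configuration above) and starts
        // listening on port 2560 for deployments from the core node.
        ActorSystem.create("SYSTEM", ConfigFactory.load());
    }
}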
With this configuration, the server and the remote hosts apparently are able to communicate, but the remote hosts start to log errors:
[ERROR] [01/17/2015 12:55:05.734] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM@0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-0/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560/]] arriving at [akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560] inbound addresses are [akka.tcp://SYSTEM@0.0.0.0:2560]
After a while, the server and the remote hosts start to log errors and freeze.
Server error:
[WARN] [01/17/2015 12:21:05.658] [SYSTEM-akka.remote.default-remote-dispatcher-7] [akka.tcp://SYSTEM@0.0.0.0:2552/system/remote-watcher] Detected unreachable: [akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560]
[WARN] [01/17/2015 12:21:05.664] [SYSTEM-akka.remote.default-remote-dispatcher-17] [Remoting] Association to [akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560] with unknown UID is reported as quarantined, but address cannot be quarantined without knowing the UID, gating instead for 5000 ms.
(...)
[INFO] [01/17/2015 12:21:05.712] [SYSTEM-akka.actor.default-dispatcher-6] [akka://SYSTEM/user/querierActor/querierPool] Message [akka.dispatch.sysmsg.DeathWatchNotification] from Actor[akka://SYSTEM/user/querierActor/querierPool#-1217916605] to Actor[akka://SYSTEM/user/querierActor/querierPool#-1217916605] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
(...)
Remote error (similar lines several times):
(...)
[ERROR] [01/17/2015 14:21:16.371] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM@0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-2/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560/]] arriving at [akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560] inbound addresses are [akka.tcp://SYSTEM@0.0.0.0:2560]
[ERROR] [01/17/2015 14:21:17.388] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM@0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-2/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560/]] arriving at [akka.tcp://SYSTEM@remote-srv01.cloudapp.net:2560] inbound addresses are [akka.tcp://SYSTEM@0.0.0.0:2560]
[WARN] [01/17/2015 14:21:17.465] [SYSTEM-akka.remote.default-remote-dispatcher-16] [akka.tcp://SYSTEM@0.0.0.0:2560/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FSYSTEM%400.0.0.0%3A2552-2] Association with remote system [akka.tcp://SYSTEM@0.0.0.0:2552] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
[INFO] [01/17/2015 14:21:17.467] [SYSTEM-akka.actor.default-dispatcher-21] [akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3] Message [akka.remote.transport.AssociationHandle$Disassociated] from Actor[akka://SYSTEM/deadLetters] to Actor[akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3#-2070785548] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [01/17/2015 14:21:17.468] [SYSTEM-akka.actor.default-dispatcher-21] [akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3] Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://SYSTEM/deadLetters] to Actor[akka://SYSTEM/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FSYSTEM%40186.228.120.115%3A56044-3#-2070785548] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
(...)
I figured out that the problem might be in the hostname configuration and tried setting the real hostname on both the server and the remote hosts. But in that case the system does not even start:
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'FacadeMemory' defined in file [D:\data\development\git\semantic-web-crawler\crawlerld.core\target\classes\net\dovale\websemantics\linkedDataRecommender\facade\memory\FacadeMemory.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [facade.memory.FacadeMemory]: Constructor threw exception; nested exception is org.jboss.netty.channel.ChannelException: Failed to bind to: /remote-srv01.cloudapp.net:2560
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1077)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1022)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:298)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:706)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:762)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:109)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:691)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:952)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:941)
at facade.memory.GUIMain.main(GUIMain.java:23)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [facade.memory.FacadeMemory]: Constructor threw exception; nested exception is org.jboss.netty.channel.ChannelException: Failed to bind to: /remote-srv01.cloudapp.net:2560
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:164)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:89)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1070)
... 21 more
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /remote-srv01.cloudapp.net:2560
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:236)
at scala.util.Try$.apply(Try.scala:191)
at scala.util.Success.map(Try.scala:236)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.Net.bind(Net.java:428)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I don't know what I am doing wrong. I tried to find information about the issue, but nothing I found relates to my problem. I have also opened the ports in the Azure configuration.
How can I get my server host to communicate properly with my remote hosts?
I was able to solve the problem.
After some fruitless research, I had to try a few different things. I am making some assumptions that could be wrong, as I didn't find any other information. If you are reading this answer and spot an error, please let me know.
The problem was that the framework (sun.nio.ch.Net.bind0, apparently, though I didn't find much documentation about it) only allows binding to the following range of IPs: 0.0.0.0 (if you accept connections on any network interface of the machine), 127.0.0.0 (if you only handle local requests, I guess), and the IP address of any of the computer's network interfaces; in this last case, requests are only accepted on that specific interface.
The catch is that the hostname property is also used to address remote Akka nodes. That is, when the host node calls a remote node, it uses this value to tell it where the result needs to be sent when finished. Also, if you set hostname to 0.0.0.0 and try to reach the node by its DNS name (which may not be associated with any network interface), it will fail. You have to identify the machine by the same IP as one of its network interfaces.
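As a side note, newer Akka releases (2.4 and up) decouple these two roles with separate bind settings, which should allow keeping a public name in hostname while still binding all interfaces; a sketch I have not tested on my setup:

akka.remote.netty.tcp {
  hostname = "remote-srv01.cloudapp.net"  # logical address other nodes use
  port = 2560
  bind-hostname = "0.0.0.0"               # address the socket actually binds
  bind-port = 2560
}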
So, my setup changed slightly:
For the host node, I made this change:
(...)
akka.actor.deployment {
  /querierActor/querierPool {
    router = round-robin-pool
    nr-of-instances = 12
    target.nodes = [
      "akka.tcp://SYSTEM@XXX.XXX.XXX.XXX:2560",
      "akka.tcp://SYSTEM@YYY.YYY.YYY.YYY:2560",
      "akka.tcp://SYSTEM@ZZZ.ZZZ.ZZZ.ZZZ:2560"
    ]
  }
}
(...)
XXX, YYY and ZZZ are reachable IPs of the remote nodes, each registered on a network interface of its machine.
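To check which addresses a machine can actually bind, the plain JDK is enough; any address this prints (plus 0.0.0.0) is a valid hostname value on that machine:

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

public class ListBindableAddresses {
    public static void main(String[] args) throws Exception {
        // Enumerate every network interface and its addresses.
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                System.out.println(nic.getName() + " -> " + addr.getHostAddress());
            }
        }
    }
}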
The configuration of the remote node changed to:
(...)
remote {
  enabled-transports = ["akka.remote.netty.tcp"]
  netty.tcp {
    maximum-frame-size = 100MiB
    hostname = "YYY.YYY.YYY.YYY"
    port = 2560
  }
}
(...)
I didn't test whether I could keep the previous 0.0.0.0 configuration. Maybe it is possible.
This solution allowed the host and remote nodes to communicate flawlessly =)
Related
I'm facing the following errors while connecting to an Oracle DB. I'm using the Spring Boot JDBC template to connect to the database. The errors are below:
Exception in thread "main" java.lang.Exception: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at com.falabella.util.OracleDB.main(OracleDB.java:70)
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:392)
Caused by: java.net.UnknownHostException: NODE-01: nodename nor servname provided, or not known
Below are my findings. My database server host is a cluster with two nodes:
Cluster (wood.cluster.com)
| NODE01 (wood-01)
| NODE02 (wood-02)
My connection string is like this: jdbc:oracle:thin:@wood-cluster.com:1531/service_name
When I use the cluster name in the connection string, I get the following error:
Caused by: java.net.UnknownHostException: wood-01: nodename nor servname provided, or not known
Whereas if I use either node name in the connection string, I am able to connect to the database without any issue. The working connection strings look like this:
jdbc:oracle:thin:@wood-01.com:1531/service_name or
jdbc:oracle:thin:@wood-02.com:1531/service_name
Since I need my DB requests to be load balanced, I need to use the cluster name instead of the individual nodes.
I would like to know the root cause of this issue, as it occurs in a production environment.
Could you please help me out with this?
You need to change the connect string to:
"jdbc:oracle:thin:@(DESCRIPTION=(FAILOVER=ON)(LOAD_BALANCE=ON)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=wood-01.com)(PORT=1531))(ADDRESS=(PROTOCOL=TCP)(HOST=wood-02.com)(PORT=1531)))(CONNECT_DATA=(SERVICE_NAME=service_name)(FAILOVER_MODE=(TYPE=select)(METHOD=basic))))"
I have the following problem when running this scheduled task.
@Singleton
public class TaskScheduler {

    private static final Logger LOG = LoggerFactory.getLogger(TaskScheduler.class);

    @Inject
    private BuildLayerJob buildLayerJob;

    @Scheduled(fixedDelay = "30s", initialDelay = "30s")
    public void loadRegistriesDescriptions() {
        try {
            LOG.info("Loading registries list every 30s.");
            buildLayerJob.getBuildLayer().loadRegistries();
        }
        catch (Exception exception) {
            LOG.error("Error loading registries list every 30s: " + exception.getMessage());
            //exception.printStackTrace();
        }
    }
}
The first execution runs without problems, but when the delay expires and the task runs again, it throws the following error.
20:26:59.291 [pool-1-thread-6] ERROR i.m.s.DefaultTaskExceptionHandler - Error invoking scheduled task Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1719)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:981)
at io.micronaut.configuration.lettuce.health.$RedisHealthIndicatorDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:984)
at io.micronaut.management.health.monitor.$HealthMonitorTaskDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2407)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2393)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2084)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2058)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:618)
at io.micronaut.scheduling.processor.ScheduledMethodProcessor.lambda$process$5(ScheduledMethodProcessor.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset$$$capture(FutureTask.java:305)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:234)
at io.lettuce.core.RedisClient.connect(RedisClient.java:207)
at io.lettuce.core.RedisClient.connect(RedisClient.java:192)
at io.micronaut.configuration.lettuce.AbstractRedisClientFactory.redisConnection(AbstractRedisClientFactory.java:51)
at io.micronaut.configuration.lettuce.DefaultRedisClientFactory.redisConnection(DefaultRedisClientFactory.java:52)
at io.micronaut.configuration.lettuce.$DefaultRedisClientFactory$RedisConnection1Definition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
... 31 common frames omitted
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:6379
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I understand that there are problems with the connection to Redis, but the microservice deployed in GCP keeps producing the same error.
app.yaml
runtime: java11
service: default
instance_class: B2
env_variables:
  LAYERS_SERVER_PORT: 8080
  REDIS_FIXEDDELAY: 1s
  REDISA_URL: "redis://A"
  REDISB_URL: "redis://B"
  REDISC_URL: "redis://C"
  REDISD_URL: "redis://D"
basic_scaling:
  max_instances: 1
  idle_timeout: 270s
vpc_access_connector:
  name: "projects/example/locations/us-central1/connectors/example"
Local settings (application.yml):
micronaut:
  application:
    name: example
  server:
    port: ${EXAMPLE_SERVER_PORT:3000}
    cors:
      enabled: true
---
redis:
  servers:
    REDISA:
      uri: redis://IP_A
    REDISB:
      uri: redis://IP_B
    REDISC:
      uri: redis://IP_C
    REDISD:
      uri: redis://IP_D
Repository layers.server.repo.InfoRepositoryImpl:
@Singleton
public class InfoRepositoryImpl implements InfoRepository {

    private BuildLayerJob buildLayerJob;

    @Inject @Named("REDISB") RedisAsyncCommands<String, String> reddisConnectionB;
    @Inject @Named("REDISA") RedisAsyncCommands<String, String> reddisConnectionA;

    private static final Logger LOG = LoggerFactory.getLogger(InfoRepositoryImpl.class);

    public InfoRepositoryImpl(BuildLayerJob buildLayerJob) {
        this.buildLayerJob = buildLayerJob;
    }

    // ... implementation of methods to process information with Redis
}
Can you please check whether you have the io.micronaut.redis:micronaut-redis-lettuce dependency on your classpath / in your build file?
By default Micronaut assumes the Redis server to be at localhost:6379, since health checks are enabled by default when redis-lettuce is activated, and it will keep probing them.
If you are using Micronaut's application.yml, you need to provide a server URI that is accessible from the running app.
Micronaut redis
Example - application.yml
redis:
  uri: redis://localhost
  ssl: true
  timeout: 30s
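If Redis is intentionally absent in some environment, another option (assuming Micronaut's standard endpoints configuration) is to turn the health endpoint off so nothing probes localhost:6379:

endpoints:
  health:
    enabled: false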
You can also use the connection string patterns below to provide details about the Redis server.
Redis Standalone
redis :// [[username :] password@] host [: port] [/ database][?
[timeout=timeout[d|h|m|s|ms|us|ns]] [&database=database]]
Redis Standalone (SSL)
rediss :// [[username :] password@] host [: port] [/ database][?
[timeout=timeout[d|h|m|s|ms|us|ns]] [&database=database]]
Redis Standalone (Unix Domain Sockets)
redis-socket :// [[username :] password@]path
[?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]
For more details on the connection string format, see: Redis connection string
Micronaut redis configuration properties
Such errors can occur when the said data source is autoconfigured. You can disable Redis autoconfiguration if you're not using it in the application. If you need Redis for the application then you should set spring.redis.host and spring.redis.port.
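In Spring Boot terms that would look roughly like this (the host value is a placeholder; property names are from Spring Boot 2.x):

# application.properties: point at a reachable Redis instance
spring.redis.host=redis.example.internal
spring.redis.port=6379

# ...or, if Redis is not used at all, exclude the autoconfiguration
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration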
In a Spring Boot application, I want to connect to two different Kafka clusters simultaneously. I am using KafkaAdmin and AdminClient to make the connections and perform CRUD operations.
@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    String krb5location = krb5Location;
    System.setProperty("java.security.krb5.conf", krb5location);
    System.setProperty("java.security.auth.login.config", jaasConfigLocation);
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("ssl.truststore.location", sslTruststoreLocation);
    configs.put("ssl.truststore.password", sslTruststorePassowrd);
    return new KafkaAdmin(configs);
}

@Bean
@PostConstruct
public AdminClient config() {
    return AdminClient.create(kafkaAdmin.getConfig());
}
Server 2 is configured similarly in the same Spring Boot application, as shown in the sketch below.
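A sketch of the second cluster's bean (not the exact code; field names are made up). Note that the System.setProperty calls set JVM-wide values:

@Bean
public KafkaAdmin kafkaAdmin2() {
    Map<String, Object> configs = new HashMap<>();
    // The same JVM-global properties are set again here, now pointing
    // at cluster 2's Kerberos and JAAS files.
    System.setProperty("java.security.krb5.conf", krb5Location2);
    System.setProperty("java.security.auth.login.config", jaasConfigLocation2);
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server2);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("ssl.truststore.location", sslTruststoreLocation2);
    configs.put("ssl.truststore.password", sslTruststorePassword2);
    return new KafkaAdmin(configs);
}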
If I load the configuration of both Kafka clusters at once during app initialization, the following error is displayed:
>>>KRBError:
cTime is Sun Jun 03 14:23:02 IST 2001 991558382000
sTime is Tue Nov 20 10:46:53 IST 2018 1542691013000
suSec is 512097
error code is 7
error Message is Server not found in Kerberos database
cname is config1@servername.com
sname is config2@servername.com
msgType is 30
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:361)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:359)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:269)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:206)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:474)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1006)
at java.lang.Thread.run(Thread.java:748)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
... 22 more
2018-11-20 10:46:53.605 ERROR 8672 --- [| adminclient-4] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-4] Connection to node -1 failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTHENTICATION_FAILED state.
What does this exception mean?
I am trying to deploy a Flink cluster (v1.5.2) with 3 nodes in HA mode (ZooKeeper).
I have the following flink-conf.yaml settings:
high-availability: zookeeper
high-availability.storageDir: /flink/ha
high-availability.zookeeper.quorum: {node1_ip}:2181,{node2_ip}:2181,{node3_ip}:2181
high-availability.jobmanager.port: 50010
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.path.namespace: /default_ns
Zookeeper cluster is running.
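For completeness: start-cluster.sh determines which hosts should run a JobManager from the conf/masters file; for a three-node HA setup it would typically look like this (hostnames assumed):

node1:8081
node2:8081
node3:8081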
After start-cluster.sh is executed, I have only one working node. The other two nodes return
{"errors":["Could not retrieve the redirect address of the current leader. Please try to refresh."]} from the web UI
and log this exception in flink-root-standalonesession-.log:
2018-08-07 18:55:22,081 ERROR org.apache.flink.runtime.rest.handler.legacy.files.StaticFileServerHandler - Could not retrieve the redirect address.
java.util.concurrent.CompletionException: org.apache.flink.runtime.rpc.exceptions.FencingTokenException: Fencing token not set: Ignoring message LocalFencedMessage(aec7a76447f8d44131605f5c10fb4fdc, LocalRpc
...
        at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.rpc.exceptions.FencingTokenException: Fencing token not set: Ignoring message LocalFencedMessage(aec7a76447f8d44131605f5c10fb4fdc, LocalRpcInvocation(requestRestAddress(T
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:59)
I have a Mule flow that reads files from an FTP server through a generic inbound endpoint, makes some modifications to the data, and writes files to an SFTP server through a generic outbound endpoint. Yesterday it processed 60 files; 57 went through without errors, but for three of them the following traces appeared. Any suggestions are welcome.
Error writing data over SFTP service, error was: Failed to open local file
4: Failed to open local file
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2297)
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:484)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:438)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:405)
at org.mule.transport.sftp.SftpClient.storeFile(SftpClient.java:385)
at org.mule.transport.sftp.SftpMessageDispatcher.doDispatch(SftpMessageDispatcher.java:176)
at org.mule.transport.AbstractMessageDispatcher.process(AbstractMessageDispatcher.java:100)
at org.mule.transport.AbstractConnector$DispatcherMessageProcessor.process(AbstractConnector.java:2553)
at org.mule.processor.AbstractInterceptingMessageProcessorBase.processNext(AbstractInterceptingMessageProcessorBase.java:105)
at org.mule.interceptor.AbstractEnvelopeInterceptor.process(AbstractEnvelopeInterceptor.java:55)
at org.mule.processor.AsyncInterceptingMessageProcessor.processNextTimed(AsyncInterceptingMessageProcessor.java:111)
at org.mule.processor.AsyncInterceptingMessageProcessor$AsyncMessageProcessorWorker.doRun(AsyncInterceptingMessageProcessor.java:158)
at org.mule.work.AbstractMuleEventWork.run(AbstractMuleEventWork.java:43)
at org.mule.work.WorkerContext.run(WorkerContext.java:310)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
[ERROR] DispatchingLogger [Services].xmlSftpConnector.dispatcher.29 2014-06-09 15:55:07 Unexpected exception attempting to write file, message was: Failed to open local file
java.io.IOException: Failed to open local file
at org.mule.transport.sftp.SftpClient.storeFile(SftpClient.java:390)
at org.mule.transport.sftp.SftpMessageDispatcher.doDispatch(SftpMessageDispatcher.java:176)
at org.mule.transport.AbstractMessageDispatcher.process(AbstractMessageDispatcher.java:100)
at org.mule.transport.AbstractConnector$DispatcherMessageProcessor.process(AbstractConnector.java:2553)
at org.mule.processor.AbstractInterceptingMessageProcessorBase.processNext(AbstractInterceptingMessageProcessorBase.java:105)
at org.mule.interceptor.AbstractEnvelopeInterceptor.process(AbstractEnvelopeInterceptor.java:55)
at org.mule.processor.AsyncInterceptingMessageProcessor.processNextTimed(AsyncInterceptingMessageProcessor.java:111)
at org.mule.processor.AsyncInterceptingMessageProcessor$AsyncMessageProcessorWorker.doRun(AsyncInterceptingMessageProcessor.java:158)
at org.mule.work.AbstractMuleEventWork.run(AbstractMuleEventWork.java:43)
at org.mule.work.WorkerContext.run(WorkerContext.java:310)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
[WARN] SftpUtil [Services].xmlSftpConnector.dispatcher.29 2014-06-09 15:55:07 Class java.io.ByteArrayInputStream did not implement the 'ErrorOccurred' decorator, errorOccured=true could not be set.
[ERROR] DispatchingLogger [Services].xmlSftpConnector.dispatcher.29 2014-06-09 15:55:07
********************************************************************************
Message : Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=sftp://user:<password>@server.com/folder/test/upload, connector=SftpConnector
{
name=xmlSftpConnector
lifecycle=start
this=cfc9fac
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[sftp]
serviceOverrides=<none>
}
, name='endpoint.sftp.server.com.22.folder.test.upload', mep=ONE_WAY, properties={outputPattern=#[function:datestamp:dd-MM-yy]_#[function:systime].xml}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: String
Code : MULE_ERROR-42999
--------------------------------------------------------------------------------
Exception stack is:
1. Failed to open local file (java.io.IOException)
org.mule.transport.sftp.SftpClient:390 (null)
2. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=sftp://user:<password>@server.com/folder/test/upload, connector=SftpConnector
{
name=xmlSftpConnector
lifecycle=start
this=cfc9fac
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[sftp]
serviceOverrides=<none>
}
, name='endpoint.sftp.server.com.22.folder.test.upload', mep=ONE_WAY, properties={outputPattern=#[function:datestamp:dd-MM-yy]_#[function:systime].xml}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: String (org.mule.api.transport.DispatchException)
org.mule.transport.AbstractMessageDispatcher:109 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.io.IOException: Failed to open local file
at org.mule.transport.sftp.SftpClient.storeFile(SftpClient.java:390)
at org.mule.transport.sftp.SftpMessageDispatcher.doDispatch(SftpMessageDispatcher.java:176)
at org.mule.transport.AbstractMessageDispatcher.process(AbstractMessageDispatcher.java:100)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
It looks like one of three reasons:
Permissions on the folder you're writing to.
Spaces in the file path (or name).
Wrong slash in the file path.
EDIT (based on comment):
You can try configuring maxThreadsActive to limit the number of threads active at a time.
<dispatcher-threading-profile maxThreadsActive="5" maxThreadsIdle="5"/>
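For an SFTP connector the profile sits inside the connector element, roughly like this (connector name taken from the logs above):

<sftp:connector name="xmlSftpConnector">
    <!-- Cap concurrent dispatcher threads so the connector does not
         open more simultaneous uploads than the SFTP server allows. -->
    <dispatcher-threading-profile maxThreadsActive="5" maxThreadsIdle="5"/>
</sftp:connector>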