RabbitMQ trying to connect to localhost - java

I have a Spring Boot application running on embedded Tomcat with a RabbitMQ listener, which I configure like this:
@Configuration
public class RabbitConfiguration {

    public static final String REQUEST_QUEUE = "from-beeline-req";
    public static final String REPLY_QUEUE = "from-beeline-reply";

    @Bean
    public Queue beelineRpcReqQueue() {
        return new Queue(REQUEST_QUEUE);
    }

    @Bean
    public Queue beelineRpcReplyQueue() {
        return new Queue(REPLY_QUEUE);
    }

    @Bean
    public RabbitTemplate rabbitTemplate(RabbitTemplateConfigurer configurer, ConnectionFactory connectionFactory) {
        RabbitTemplate template = new RabbitTemplate();
        configurer.configure(template, connectionFactory);
        template.setDefaultReceiveQueue(REQUEST_QUEUE);
        template.setReplyAddress(REPLY_QUEUE);
        template.setUseDirectReplyToContainer(false);
        return template;
    }

    @Bean
    public SimpleMessageListenerContainer replyListenerContainer(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueues(beelineRpcReplyQueue());
        container.setMessageListener(rabbitTemplate);
        return container;
    }
}
And my application.yml file looks like this:
spring:
  main:
    banner-mode: LOG
  rabbitmq:
    host: 172.29.14.45
    port: 5672
    username: guest
    password: guest
    template:
      reply-timeout: 15000

server:
  port: 8888
So the main point is that I want to connect to the RabbitMQ server at the exact address (172.29.14.45), but the created listener container tries to connect to localhost instead. It ignores the rabbitmq port property as well:
2021-02-23 23:04:59.715 [replyListenerContainer-1] INFO (AbstractConnectionFactory.java:636) - Attempting to connect to: [localhost:5672]
2021-02-23 23:05:01.721 [replyListenerContainer-1] ERROR (AbstractMessageListenerContainer.java:1877) - Failed to check/redeclare auto-delete queue(s).
org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
and it continues to restart the consumer after that:
2021-02-23 23:17:49.069 [replyListenerContainer-1] INFO (SimpleMessageListenerContainer.java:1428) - Restarting Consumer#2a140ce5: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
2021-02-23 23:17:49.069 [replyListenerContainer-1] DEBUG (BlockingQueueConsumer.java:758) - Closing Rabbit Channel: null
2021-02-23 23:17:49.071 [replyListenerContainer-2] INFO (AbstractConnectionFactory.java:636) - Attempting to connect to: [localhost:5672]
What should I do to tell Spring to use my host property instead of localhost?

I always use the addresses property in the application.properties file
spring.rabbitmq.addresses=amqp://username:password@host:port/vhost
The name of the "virtual host" (or vhost) specifies the namespace for entities (such as exchanges and queues) referred to by the protocol. Note that this is not virtual hosting in the HTTP sense.
https://www.rabbitmq.com/uri-spec.html
example:
spring.rabbitmq.addresses=amqp://ihrpsvpp:In4etuiIkgu7FVBr0tr6wYGvGcGyJ9Ja@lion.rmq.cloudamqp.com/ihrpsvpp
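If you configure the connection factory manually instead of relying on auto-configuration, the same address information can also be set programmatically. A minimal sketch (the host, credentials and vhost below are placeholders taken from the question, not a confirmed fix):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ManualRabbitConnectionConfig {

    // Sketch only: programmatic equivalent of spring.rabbitmq.addresses, with placeholder values.
    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory();
        factory.setAddresses("172.29.14.45:5672"); // host:port, comma-separated for a cluster
        factory.setUsername("guest");
        factory.setPassword("guest");
        factory.setVirtualHost("/"); // vhost from the AMQP URI, if any
        return factory;
    }
}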

OK, it turned out that a bean was refreshing the application context, which caused the auto-configuration to fail.

Related

Cannot connect to clustered AWS Elasticache for Redis

I have created a cluster on AWS ElastiCache for Redis. The cluster operates with cluster mode enabled. There is currently 1 shard with 2 nodes.
I'm trying to connect to the cluster through a bastion host, using the following port forwarding:
LocalForward localhost:6379 my-app.redis.apps.region.company.com:6379
I can reach the ElastiCache service with the Spring Boot standalone configuration, but when fetching data from the cache, I get a MOVED exception.
I have tried to switch the configuration to cluster mode using only the cluster endpoint, but I am getting:
Caused by: io.lettuce.core.RedisException: Cannot obtain initial Redis Cluster topology
at io.lettuce.core.cluster.RedisClusterClient.lambda$getPartitions$0(RedisClusterClient.java:329)
at io.lettuce.core.cluster.RedisClusterClient.get(RedisClusterClient.java:941)
at io.lettuce.core.cluster.RedisClusterClient.getPartitions(RedisClusterClient.java:329)
at org.springframework.data.redis.connection.lettuce.ClusterConnectionProvider.getConnectionAsync(ClusterConnectionProvider.java:92)
at org.springframework.data.redis.connection.lettuce.ClusterConnectionProvider.getConnectionAsync(ClusterConnectionProvider.java:40)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionProvider.getConnection(LettuceConnectionProvider.java:53)
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.getConnection(LettuceConnectionFactory.java:1595)
... 134 common frames omitted
Caused by: io.lettuce.core.cluster.topology.DefaultClusterTopologyRefresh$CannotRetrieveClusterPartitions: Cannot retrieve cluster partitions from [redis://clustercfg.redis-*******.****.****.cache.amazonaws.com]
Here's my configuration:
@Configuration
class RedisConfiguration {

    @Bean
    fun connectionFactory(): RedisConnectionFactory? {
        return LettuceConnectionFactory(
            RedisClusterConfiguration().clusterNode(
                "clustercfg.redis-*******.****.****.cache.amazonaws.com",
                6379
            )
        )
    }

    @Bean
    fun redisTemplate(redisConnectionFactory: RedisConnectionFactory) =
        RedisTemplate<String, String>().apply {
            setConnectionFactory(redisConnectionFactory)
            StringRedisSerializer().let {
                keySerializer = it
                valueSerializer = it
            }
        }

    @Bean
    fun remoteConfigCacheManager(
        connectionFactory: RedisConnectionFactory
    ): RedisCacheManager? {
        return RedisCacheManager.builder(connectionFactory)
            .withCacheConfiguration(
                "CACHE_NAME",
                RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofHours(1))
            )
            .build()
    }
}
And here's the config in application.yml:
redis:
  host: ${REDIS_HOST:clustercfg.redis-*******.****.****.cache.amazonaws.com}
  port: ${REDIS_PORT:6379}
  password: ${REDIS_PASSWORD:<SOME_SECRET_PASSWORD>}
  ssl: ${REDIS_SSL:true}
I've really browsed through every possible SO thread and could not find a solution. Can this be related to my bastion host?
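Not a confirmed fix for this topology error, but for completeness: when using cluster mode with Lettuce, SSL and topology-refresh settings can be passed through a LettuceClientConfiguration. A minimal sketch in Java (the endpoint is a placeholder):

import java.time.Duration;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class ClusterConnectionSketch {

    // Sketch only: cluster configuration with periodic/adaptive topology refresh and SSL,
    // mirroring the host/port/ssl values from the application.yml above.
    @Bean
    public LettuceConnectionFactory clusterConnectionFactory() {
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration()
                .clusterNode("clustercfg.redis-placeholder.cache.amazonaws.com", 6379);

        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .enableAllAdaptiveRefreshTriggers()
                .build();

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .clientOptions(ClusterClientOptions.builder()
                        .topologyRefreshOptions(refreshOptions)
                        .build())
                .useSsl()
                .build();

        return new LettuceConnectionFactory(clusterConfig, clientConfig);
    }
}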

Hikari Connection Pool - Slow , Block , Connection is not available : SpringBoot

Problem:
I have a Spring Boot application which has Hikari configured (auto). I'm getting the error
Connection is not available, request timed out after 30113ms
when I just do an insert operation in the database. The flow is Controller > Service > Repository > save(entity). I am not using @Transactional in the repository, but the result is the same if I use it.
While load testing this service with 50 requests/1 sec sequentially, 20-30 requests succeed and the remaining ones fail with the exception below.
2019-03-28 20:58:29.507 ERROR 90260 --- [http-nio-8080-exec-234] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection] with root cause
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30113ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:697) ~[HikariCP-3.3.1.jar:na]
I am doing a kind of load test, triggering 50 req/1 sec, and roughly half of the requests succeed while the other half fail. I also enabled leak detection, but there is no trace in the log.
Am I overdoing the configuration for this test, or do I need to tune the pool connections? Or does it only support that much?
Also, why does Hikari's getConnection block for 5+ seconds from the 2nd request onwards? Why is it not handled in parallel? Please help me or guide me on how much I need to tune to accept around 200 requests per 1 min.
application.yml
spring:
  application:
    name: demo
  datasource:
    hikari:
      connection-timeout: 20000
      minimum-idle: 5
      maximum-pool-size: 50
      idle-timeout: 300000
      max-lifetime: 1200000
      auto-commit: true
      driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver
      jdbc-url: jdbc:sqlserver://ip:port;databaseName=sample
      username: username
      leak-detection-threshold: 30000
BootApplication.java
@SpringBootApplication
public class Sample {

    public static void main(String[] args) {
        SpringApplication.run(Sample.class, args);
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.hikari")
    public DataSource dataSource() {
        HikariDataSource dataSource = new HikariDataSource();
        // configuring password from vault
        return dataSource;
    }
}
SampleServiceImpl.java
@Service
public class SampleServiceImpl implements SampleService {

    @Autowired
    private SampleRepository sampleRepository;

    @Override
    public List<Sample> getAll() {
        return (List<Sample>) sampleRepository.findAll();
    }

    @Override
    public Sample saveOrUpdate(Sample obj) {
        return sampleRepository.save(obj);
    }
}
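For reference, HikariCP's own sizing guidance suggests a fairly small, fixed-size pool (roughly cores x 2 plus effective spindle count) rather than a large one. A minimal sketch of configuring that explicitly via HikariConfig, with illustrative values mirroring the YAML above (not a tuned recommendation for this exact workload):

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariSizingSketch {

    // Sketch only: explicit HikariConfig mirroring the application.yml above,
    // with illustrative pool sizing (e.g. a 4-core box -> ~10 connections).
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:sqlserver://ip:port;databaseName=sample"); // placeholder URL from the question
        config.setUsername("username");
        config.setPassword("password");          // e.g. resolved from Vault
        config.setMaximumPoolSize(10);           // small, fixed-size pools are usually enough
        config.setMinimumIdle(10);               // keep equal to max for a fixed-size pool
        config.setConnectionTimeout(20_000);     // how long getConnection() may block
        config.setLeakDetectionThreshold(30_000);
        return new HikariDataSource(config);
    }
}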

Spring boot 2 connection to rabbitmq via Apache Camel

I have a problem connecting to RabbitMQ via Apache Camel on Spring Boot 2.
I did the following steps:
My dependencies:
implementation "org.apache.camel:camel-spring-boot-starter:${camelVersion}"
implementation "org.apache.camel:camel-jackson-starter:${camelVersion}"
implementation "org.apache.camel:camel-core:${camelVersion}"
implementation "org.apache.camel:camel-rabbitmq-starter:${camelVersion}"
implementation "org.springframework.boot:spring-boot-starter-amqp"
Application.yaml
spring:
  rabbitmq:
    dynamic: true
    host: 192.168.1.1
    port: 5672
    username: X
    password: Y
And I have the following route:
@Component
public class BasicRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:loggerQueue")
            .id("loggerQueue")
            .to("rabbitmq://TEST-QUEUE.exchange?queue=TEST-QUEUE.queue&autoDelete=false&connectionFactory=#rabbitConnectionFactory")
            .end();
    }
}
Finally, I still have the following issue:
2019-03-06 12:46:05.766 WARN 19464 --- [ restartedMain] o.a.c.c.rabbitmq.RabbitMQProducer : Failed to create connection. It will attempt to connect again when publishing a message.
java.net.ConnectException: Connection refused: connect
The connection itself seems OK, I tested it. Something is wrong with rabbitConnectionFactory.
I don't know what I have wrong.
The problem appears to be that RabbitMQComponent is expecting to find a connection factory of type com.rabbitmq.client.ConnectionFactory.
However, the springboot auto-configure is creating a connection factory of type org.springframework.amqp.rabbit.connection.CachingConnectionFactory.
So when the RabbitMQComponent attempts to find the appropriate connection factory, it looks for that specific type; since CachingConnectionFactory does not subclass the RabbitMQ ConnectionFactory, the lookup returns null, and the component fails to use the host name and configuration parameters specified in your application.yml.
You should also see the following in your log if you have debug level set:
2019-12-15 17:58:53.631 DEBUG 48710 --- [ main] o.a.c.c.rabbitmq.RabbitMQComponent : Creating RabbitMQEndpoint with host null:0 and exchangeName: asterix
2019-12-15 17:58:55.927 DEBUG 48710 --- [ main] o.a.c.c.rabbitmq.RabbitMQComponent : Creating RabbitMQEndpoint with host null:0 and exchangeName: asterix-sink
EDIT:
The CachingConnectionFactory is configured with the required Rabbit connection factory as part of the autoconfiguration. However, you need to provide a link to the correct factory.
Therefore, you need to add a #Bean to disambiguate.
@Configuration
@RequiredArgsConstructor
public class CamelConfig {

    private final CachingConnectionFactory rabbitConnectionFactory;

    @Bean
    com.rabbitmq.client.ConnectionFactory rabbitSourceConnectionFactory() {
        return rabbitConnectionFactory.getRabbitConnectionFactory();
    }
}
and in your endpoint configuration:
rabbitmq:asterix?connectionFactory=#rabbitSourceConnectionFactory
Note that the # is optional, as it gets stripped out within the code when it is trying to find the rabbit connection factory bean.
In your application.yml, configure the connection parameters (the url is no longer included in the endpoint URI).
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
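Putting it together, the route from the original question would then reference the disambiguating bean in its endpoint URI. A sketch reusing the exchange and queue names from that route:

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class BasicRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Sketch only: same route as in the question, now pointing at rabbitSourceConnectionFactory,
        // while host/port/credentials come from the spring.rabbitmq.* properties.
        from("direct:loggerQueue")
            .id("loggerQueue")
            .to("rabbitmq:TEST-QUEUE.exchange?queue=TEST-QUEUE.queue"
                + "&autoDelete=false"
                + "&connectionFactory=#rabbitSourceConnectionFactory")
            .end();
    }
}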

Ehcache not cleaning up peer listeners after startup failure

I have a problem where Ehcache is not cleaning up its listeners after the app fails to start. Basically, after a failed start, the next startup attempt tries to establish a listener on the remoteObjectPort 40003 and fails because the port is already in use:
Caused by: java.rmi.server.ExportException: Port already in use: 40003; nested exception is:
java.net.BindException: Address already in use
...
Even though the app is stopped, the listeners are still running on those ports:
[root@server logs]# netstat -tulpn | grep 4000
tcp 0 0 0.0.0.0:40001 0.0.0.0:* LISTEN 14946/java
tcp 0 0 0.0.0.0:40003 0.0.0.0:* LISTEN 14946/java
This is the relevant config in my ehcache.xml file:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,
                multicastGroupAddress=230.0.0.1,
                multicastGroupPort=40002,
                timeToLive=64"
/>
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="port=40001,remoteObjectPort=40003,
                socketTimeoutMillis=20000"/>
Is there any way to create a @PreDestroy method in my Application.java where I can explicitly kill any Ehcache listeners still running in the container?
Edit: Here is how I'm initialising EhCache within spring:
@EnableCaching
...
public class Application extends SpringBootServletInitializer implements WebApplicationInitializer {
    ...

    @Bean
    public EhCacheCacheManager gdaCacheManager() {
        return new EhCacheCacheManager(ehCacheCacheManager().getObject());
    }

    @Bean
    public EhCacheManagerFactoryBean ehCacheCacheManager() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        factory.setConfigLocation(new ClassPathResource("cache/ehcache.xml"));
        factory.setShared(true);
        return factory;
    }
}
I'm also setting the following properties within my hibernate config to enable 2nd level caching with ehcache:
properties.put("hibernate.cache.use_second_level_cache", "true");
properties.put("hibernate.cache.use_query_cache", "true");
properties.put("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
properties.put("net.sf.ehcache.configurationResourceName", "/cache/ehcache.xml");

spring boot cannot connect to rabbitmq

I have a RabbitMQ server like this
When I try to connect to this server via Spring Boot AMQP, I see com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
These are my configs:
# Message
spring.activemq.broker-url=tcp://127.0.0.1:5672
spring.activemq.user=test
spring.activemq.password=test
Yes, the user test can access the virtual host /, and yes, I can log in with test/test on the RabbitMQ GUI.
EDIT
Looking at the rabbitmq logs, I saw this
{handshake_error,starting,0,
{amqp_error,access_refused,
"PLAIN login refused: user 'guest' - invalid credentials",
'connection.start_ok'}}
It seems like Spring is ignoring my configs and trying to connect as guest.
src/main/resources/application.yml
spring:
  rabbitmq:
    username: guest
    password: guest
    host: rabbitmq
    port: 5672
    virtual-host: someVirtualHost
https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html
Spring Boot provides specific properties for RabbitMQ. Try replacing your ActiveMQ config with the ones below.
Example:
spring.rabbitmq.host = 127.0.0.1
spring.rabbitmq.port = 5672
spring.rabbitmq.username = guest
spring.rabbitmq.password = guest
Try changing your RabbitMQ configuration in the Spring Boot properties:
spring.rabbitmq.host = 127.0.0.1
spring.rabbitmq.port = 5672
spring.rabbitmq.username = guest
spring.rabbitmq.password = guest
Using the default setup with Spring Boot is fine, but if we want to add an external RabbitMQ instance to the Spring container, we can do it as below.
application.yml
rabbitmq:
  host: 'hostname'
  vhost: 'vhostname'
  user: 'userName'
  password: 'passwd'
  port: 5672
Config class
@Configuration
public class RabbitConfig {

    @Value("${rabbitmq.host}")
    private String host;

    @Value("${rabbitmq.vhost}")
    private String vhost;

    @Value("${rabbitmq.user}")
    private String user;

    @Value("${rabbitmq.password}")
    private String password;

    @Value("${rabbitmq.port}")
    private int port;

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory();
        System.out.println("rmqhost is " + host);
        factory.setHost(host);
        factory.setVirtualHost(vhost);
        factory.setUsername(user);
        factory.setPassword(password);
        factory.setPort(port);
        return factory;
    }

    @Bean
    public RabbitAdmin rabbitAdmin() {
        return new RabbitAdmin(connectionFactory());
    }
}
and we can then create beans for a RabbitTemplate or a Rabbit listener container on top of this connection factory.
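A minimal sketch of such beans (the queue name is a placeholder):

import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitClientConfig {

    // Sketch only: a RabbitTemplate built on the externally configured connection factory bean.
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }

    // Sketch only: a listener container for a placeholder queue name ("some-queue").
    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("some-queue");
        container.setMessageListener((MessageListener) msg ->
                System.out.println("Received: " + new String(msg.getBody())));
        return container;
    }
}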
