Spring Data Elasticsearch client appending port 9200 to URL - java

Spring Data Elasticsearch version:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
    <version>3.2.6.RELEASE</version>
</dependency>
I do not understand why my Elasticsearch high-level client is always forcing the port to 9200 even though I am specifying port 443.
Here is how I am defining the RestHighLevelClient:
@Slf4j
@Configuration
public class ElasticsearchConfiguration {

    @Value("${elasticsearch.host:127.0.0.1}")
    private String elasticsearchHost;

    @Value("${elasticsearch.port:9200}")
    private String elasticsearchPort;

    @Bean(destroyMethod = "close")
    public RestHighLevelClient client() throws IOException {
        log.info("Creating High level rest client for Elasticsearch with host: " + elasticsearchHost);
        ClientConfiguration configuration = ClientConfiguration.builder()
                .connectedTo(elasticsearchHost + ":" + elasticsearchPort)
                .usingSsl()
                .build();
        return RestClients.create(configuration).rest();
    }

    @Bean
    public ElasticsearchRestTemplate getElasticsearchTemplate() throws IOException {
        return new ElasticsearchRestTemplate(client());
    }
}
and how I have used the template:
private ElasticsearchRestTemplate elasticsearchTemplate;

@Autowired
public ElasticsearchServiceImpl(@Qualifier("getElasticsearchTemplate") ElasticsearchRestTemplate elasticsearchRestTemplate) {
    this.elasticsearchTemplate = elasticsearchRestTemplate;
}

@PostConstruct
public void init() throws IOException {
    // create indexes...
}
I have verified that the host and the port are both set in the properties file:
--elasticsearch.host=https://vpc-example-ryphgjfwji3zonliyhhd3nu.eu-west-2.es.amazonaws.com
--elasticsearch.port=443
In the stack trace I am receiving: Caused by: java.io.IOException: https://vpc-example-ryphgjfwji3zonliyhhd3nu.eu-west-2.es.amazonaws.com:443:9200. It appears to be appending 9200 to the end of the URL. Why is this happening?
I have verified that the connection is successfully made via curl.
Edit: Added another stacktrace
Jun 09 21:44:44 ip-10xxxx95.pre.xxx-prototype.xxx.uk bash[19738]: 2020-06-09 21:44:44.023 ERROR 19738 --- [ main] c.g.c.s.impl.ElasticsearchServiceImpl : https://vpc-example-ryphgjfwji3zonliyhhd3nu.eu-west-2.es.amazonaws.com: Name or service not known
Jun 09 21:44:44 ip-10-2xxxxx5.pre.xxx-prototype.xxx.uk bash[19738]: 2020-06-09 21:44:44.024 ERROR 19738 --- [ main] c.g.c.s.impl.ElasticsearchServiceImpl : [org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:964), org.elasticsearch.client.RestClient.performRequest(RestClient.java:233), org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764),

Can you try creating the RestHighLevelClient like below:
RestHighLevelClient restHighLevelClient = new RestHighLevelClient(
        RestClient.builder(new HttpHost(configuration.getElasticsearchConfig().getHost(),
                configuration.getElasticsearchConfig().getPort(),
                "https")));
And use the below config; note I left the port config empty:
--elasticsearch.host=https://vpc-example-ryphgjfwji3zonliyhhd3nu.eu-west-2.es.amazonaws.com
--elasticsearch.port=
I've created connections to AWS ES (and ES hosted in AWS) in the past; let me know if you face issues with the code, and I'd be happy to help further.

Do not specify the protocol (https) when specifying the host. In your case, it would be:
--elasticsearch.host=vpc-example-ryphgjfwji3zonliyhhd3nu.eu-west-2.es.amazonaws.com
--elasticsearch.port=443
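The reason: ClientConfiguration.connectedTo() expects plain host:port pairs, and the https scheme is supplied by usingSsl(). A scheme embedded in the host string confuses the host/port parsing, which is consistent with the :443:9200 in the stack trace. A minimal sketch of the resulting builder call, using the host from the question:
ClientConfiguration configuration = ClientConfiguration.builder()
        .connectedTo("vpc-example-ryphgjfwji3zonliyhhd3nu.eu-west-2.es.amazonaws.com:443")
        .usingSsl() // supplies the https scheme, so it must not appear in the host
        .build();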

Related

Response from postgres r2dbc taking a lot of time leading to connection time out

We are using Spring Boot (2.3.1) reactive programming in our project. The DB used is r2dbc-postgres (0.8.7). We are unable to find the root cause of why the APIs written reactively stop responding once they connect to the DB.
For example in the following code:
@Autowired
PlanPackageCurrencyPriceRepository planPackageCurrencyPriceRepository;

public Mono<Object> viewBySkuCodeAndCountryCode(String skuCode, String countryCode) {
    Mono<PlanPackageCurrencyPrice> planPackagePriceInfo = planPackageCurrencyPriceRepository
            .findBySkuCodeAndCountryCode(skuCode, countryCode);
    return planPackagePriceInfo.map(planInfo -> {
        PlanPackageCurrencyPriceDTO currencyPriceDTO = PlanPackageCurrencyPriceDTO.builder()
                .skuCode(planInfo.getSkuCode())
                .countryCode(planInfo.getCountryCode())
                .currencyCode(planInfo.getCurrencyCode())
                .price(planInfo.getPrice())
                .status(planInfo.getStatus())
                .build();
        if (planInfo.getStatus() == Status.ACTIVE) {
            final Mono<Boolean> monovalue = redisTemplate.opsForHash().put("getplanpackagecurrencycodeprice",
                    skuCode + countryCode, currencyPriceDTO);
            logger.info(REDIS_VALUE, monovalue.subscribe(System.out::println));
            return currencyPriceDTO;
        } else {
            logger.debug(serviceName.concat(LoggerConstants.PLAN_PACKAGE_GROUP_INFO_VIEW_DEBUG_LOG)
                    .concat(" No items found for Plan/Package Group Info for the sku code {} "), skuCode);
            throw new CustomException("VIEW_ERRORMESSAGE", HttpStatus.MULTI_STATUS, 10006);
        }
    });
}
When a query is made to the DB using planPackageCurrencyPriceRepository, the logs stop at this query. The following is the output seen right before the timeout:
2021-03-07 10:52:47.427 DEBUG 1 --- [tor-tcp-epoll-4] o.s.d.r.c.R2dbcTransactionManager : Acquired Connection [MonoRetry] for R2DBC transaction
2021-03-07 10:52:47.427 DEBUG 1 --- [tor-tcp-epoll-4] o.s.d.r.c.R2dbcTransactionManager : Switching R2DBC Connection [PooledConnection[PostgresqlConnection{client=io.r2dbc.postgresql.client.ReactorNettyClient@7d1a251f, codecs=io.r2dbc.postgresql.codec.DefaultCodecs@7925be64}]] to manual commit
Given some time, the API responds with a connection timed out error. It then works fine if we restart our Docker container, but the same behaviour is observed again after some time. We are not able to find a solution for this intermittent behaviour.
Following is the DB configuration used:
@Configuration
@EnableR2dbcRepositories(basePackages = "com.crm.smsauth.postgresrepo")
public class DatabaseConfig extends AbstractR2dbcConfiguration {

    @Value("${postgres.host}")
    private String host;

    @Value("${postgres.protocol}")
    private String protocol;

    @Value("${postgres.username}")
    private String username;

    @Value("${postgres.password}")
    private String password;

    @Value("${postgres.database}")
    private String database;

    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        // MAX_SIZE and INITIAL_SIZE are static imports, typically from io.r2dbc.pool.PoolingConnectionFactoryProvider
        final ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(ConnectionFactoryOptions.DRIVER, "pool")
                .option(ConnectionFactoryOptions.PROTOCOL, protocol)
                .option(ConnectionFactoryOptions.HOST, host)
                .option(ConnectionFactoryOptions.USER, username)
                .option(ConnectionFactoryOptions.PASSWORD, password)
                .option(ConnectionFactoryOptions.DATABASE, database)
                .option(MAX_SIZE, 1000)
                .option(INITIAL_SIZE, 1)
                .build());
        return connectionFactory;
    }

    @Bean
    ReactiveTransactionManager transactionManager(ConnectionFactory connectionFactory) {
        return new R2dbcTransactionManager(connectionFactory);
    }
}
Please let me know if the issue is with the way reactive code is written or in the DB configuration.
EDIT 2:
Postgres logs : The DB name is planPackage.
2021-03-07 16:26:47.389 IST [24368] postgres#planpackage LOG: could not receive data from client: Connection timed out
The timestamps of the two logs don't match because our deployment VM has a timezone set to GMT, but the Postgres one is IST.

SocketTimeoutException while retrieving or inserting data into Elastic Search by using Rest High Level Client

I'm facing a SocketTimeoutException while retrieving/inserting data from/to Elastic. This happens when there are around 10-30 requests/second, a combination of get/put.
Here is my elastic configuration:
3 master nodes, each with 4 GB RAM
2 data nodes, each with 8 GB RAM
An Azure load balancer which connects to the above data nodes (it seems only port 9200 is opened on it); the Java client connects to this load balancer, as it is the only thing exposed.
Elastic Version: 7.2.0
Rest High Level Client:
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.2.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.2.0</version>
</dependency>
Index Information:
Index shards: 2
Index replica: 1
Index total fields: 10000
Size of index from Kibana: total 27.2 MB, primaries 12.2 MB
Index structure:
{
  "dev-index": {
    "mappings": {
      "properties": {
        "dataObj": {
          "type": "object",
          "enabled": false
        },
        "generatedID": {
          "type": "keyword"
        },
        "transNames": { // an array of strings
          "type": "keyword"
        }
      }
    }
  }
}
Dynamic mapping is disabled.
Following is my Elastic config file. Here I have two connection beans: one for reads and another for writes to Elastic.
ElasticConfig.java:
@Configuration
public class ElasticConfig {

    @Value("${elastic.host}")
    private String elasticHost;

    @Value("${elastic.port}")
    private int elasticPort;

    @Value("${elastic.user}")
    private String elasticUser;

    @Value("${elastic.pass}")
    private String elasticPass;

    @Value("${elastic-timeout:20}")
    private int timeout;

    @Bean(destroyMethod = "close")
    @Qualifier("readClient")
    public RestHighLevelClient readClient() {
        final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(elasticUser, elasticPass));
        RestClientBuilder builder = RestClient
                .builder(new HttpHost(elasticHost, elasticPort))
                .setHttpClientConfigCallback(httpClientBuilder ->
                        httpClientBuilder
                                .setDefaultCredentialsProvider(credentialsProvider)
                                .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build())
                );
        builder.setRequestConfigCallback(requestConfigBuilder ->
                requestConfigBuilder
                        .setConnectTimeout(10000)
                        .setSocketTimeout(60000)
                        .setConnectionRequestTimeout(0)
        );
        RestHighLevelClient restClient = new RestHighLevelClient(builder);
        return restClient;
    }

    @Bean(destroyMethod = "close")
    @Qualifier("writeClient")
    public RestHighLevelClient writeClient() {
        final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(elasticUser, elasticPass));
        RestClientBuilder builder = RestClient
                .builder(new HttpHost(elasticHost, elasticPort))
                .setHttpClientConfigCallback(httpClientBuilder ->
                        httpClientBuilder
                                .setDefaultCredentialsProvider(credentialsProvider)
                                .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build())
                );
        builder.setRequestConfigCallback(requestConfigBuilder ->
                requestConfigBuilder
                        .setConnectTimeout(10000)
                        .setSocketTimeout(60000)
                        .setConnectionRequestTimeout(0)
        );
        RestHighLevelClient restClient = new RestHighLevelClient(builder);
        return restClient;
    }
}
Here is the function which makes a call to Elastic: if the data is available in Elastic it is returned, else the data is generated and put into Elastic.
public Object getData(Request request) {
    DataObj elasticResult = elasticService.getData(request);
    if (elasticResult != null) {
        return elasticResult;
    } else {
        // code to generate data
        DataObj generatedData = getData(); // some function which generates the data
        // put the above data into elastic by async call
        elasticAsync.putData(generatedData);
        return generatedData;
    }
}
ElasticService.java getData Function:
@Service
public class ElasticService {

    @Value("${elastic.index}")
    private String elasticIndex;

    @Autowired
    @Qualifier("readClient")
    private RestHighLevelClient readClient;

    public DataObj getData(Request request) {
        String generatedId = request.getGeneratedID();
        GetRequest getRequest = new GetRequest()
                .index(elasticIndex) // elastic index name
                .id(generatedId);    // retrieving by index id from elastic _id field (as key-value)
        DataObj result = null;
        try {
            GetResponse response = readClient.get(getRequest, RequestOptions.DEFAULT);
            if (response.isExists()) {
                ObjectMapper objectMapper = new ObjectMapper();
                result = objectMapper.readValue(response.getSourceAsString(), DataObj.class);
            }
        } catch (Exception e) {
            LOGGER.error("Exception occurred during fetch from elastic !!!!", e);
        }
        return result;
    }
}
ElasticAsync.java Async Put Data Function:
@Service
public class ElasticAsync {

    private static final Logger LOGGER = Logger.getLogger(ElasticAsync.class.getName());

    @Value("${elastic.index}")
    private String elasticIndex;

    @Autowired
    @Qualifier("writeClient")
    private RestHighLevelClient writeClient;

    @Async
    public void putData(DataObj generatedData) {
        ElasticVO updatedRequest = toElasticVO(generatedData); // ElasticVO matches the structure of the index given above
        try {
            ObjectMapper objectMapper = new ObjectMapper();
            String jsonString = objectMapper.writeValueAsString(updatedRequest);
            IndexRequest request = new IndexRequest(elasticIndex);
            request.id(generatedData.getGeneratedID());
            request.source(jsonString, XContentType.JSON);
            request.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE);
            request.timeout(TimeValue.timeValueSeconds(5));
            IndexResponse indexResponse = writeClient.index(request, RequestOptions.DEFAULT);
            LOGGER.info("response id: " + indexResponse.getId());
        } catch (Exception e) {
            LOGGER.error("Exception occurred during saving into elastic !!!!", e);
        }
    }
}
Here is part of the stack trace from when the exception occurred while saving data into Elastic:
2019-07-19 07:32:19.997 ERROR [data-retrieval,341e6ecc5b10f3be,1eeb0722983062b2,true] 1 --- [askExecutor-894] a.c.s.a.service.impl.ElasticAsync : Exception occurred during saving into elastic !!!!
java.net.SocketTimeoutException: 60,000 milliseconds timeout on connection http-outgoing-34 [ACTIVE]
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:789) ~[elasticsearch-rest-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:225) ~[elasticsearch-rest-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1448) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1418) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1388) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:836) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0]
Caused by: java.net.SocketTimeoutException: 60,000 milliseconds timeout on connection http-outgoing-34 [ACTIVE]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92) ~[httpasyncclient-4.1.3.jar!/:4.1.3]
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39) ~[httpasyncclient-4.1.3.jar!/:4.1.3]
at org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:263) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:492) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:213) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
... 1 common frames omitted
Here is part of the stack trace from when the exception occurred while retrieving data from Elastic:
2019-07-19 07:22:37.844 ERROR [data-retrieval,104cf6b2ab5b3349,b302d3d3cd7ebc84,true] 1 --- [o-8080-exec-346] a.c.s.a.service.impl.ElasticService : Exception occurred during fetch from elastic !!!!
java.net.SocketTimeoutException: 60,000 milliseconds timeout on connection http-outgoing-30 [ACTIVE]
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:789) ~[elasticsearch-rest-client-7.1.1.jar!/:7.1.1]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:225) ~[elasticsearch-rest-client-7.1.1.jar!/:7.1.1]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.1.1.jar!/:7.1.1]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ~[elasticsearch-rest-high-level-client-7.1.1.jar!/:7.1.1]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.1.1.jar!/:7.1.1]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.1.1.jar!/:7.1.1]
at org.elasticsearch.client.RestHighLevelClient.get(RestHighLevelClient.java:699) ~[elasticsearch-rest-high-level-client-7.1.1.jar!/:7.1.1]
Caused by: java.net.SocketTimeoutException: 60,000 milliseconds timeout on connection http-outgoing-30 [ACTIVE]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92) ~[httpasyncclient-4.1.3.jar!/:4.1.3]
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39) ~[httpasyncclient-4.1.3.jar!/:4.1.3]
at org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:263) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:492) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:213) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591) ~[httpcore-nio-4.4.11.jar!/:4.4.11]
... 1 common frames omitted
I've gone through a couple of Stack Overflow and Elastic related blogs where they mention this issue could be due to the RAM and cluster configuration of Elastic. I then changed my shards from 5 to 2, as there are only two data nodes, and increased the RAM of the data nodes from 4 GB to 8 GB, since I learned that Elastic will use only 50% of total RAM. The occurrences of the exception have decreased, but the problem still persists.
What could be possible ways to solve this problem? What am I missing from a Java/Elastic configuration point of view that is frequently throwing this kind of SocketTimeoutException? Let me know if you require any more details regarding the configuration.
We've had the same issue, and after quite some digging I found the root cause: a mismatch between the configuration of the firewall between the client and the Elastic servers and the kernel's TCP keep-alive configuration.
The firewall drops idle connections after 3600 seconds. The problem was that the kernel parameter for TCP keep-alive was set to 7200 seconds (the default in RedHat 6.x/7.x):
sysctl -n net.ipv4.tcp_keepalive_time
7200
So the connections are dropped before a keep-alive probe is ever sent. The async HTTP client inside the Elastic HTTP client doesn't seem to handle dropped connections very well; it just waits until the socket timeout.
So check whether you have any network device (load balancer, firewall, proxy, etc.) between your client and server which has a session timeout or similar, and either increase that timeout or lower the tcp_keepalive_time kernel parameter.
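On the client side, one partial mitigation (a sketch, not from the original answer) is to enable SO_KEEPALIVE on the REST client's sockets so that the kernel's keep-alive probes apply to these connections at all; the probe timing itself still comes from the kernel parameters above:
import org.apache.http.HttpHost;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

// Builds on the question's existing HttpClientConfigCallback; "elastic-host" is a placeholder.
RestClientBuilder builder = RestClient
        .builder(new HttpHost("elastic-host", 9200))
        .setHttpClientConfigCallback(httpClientBuilder ->
                httpClientBuilder.setDefaultIOReactorConfig(
                        IOReactorConfig.custom()
                                .setIoThreadCount(5)
                                .setSoKeepAlive(true) // let kernel TCP keep-alive probes run on these sockets
                                .build()));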

JMX Connection Hangs

With the following super simple Java application:
class Main {
    private static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(null, "localhost", 9999);
        logger.info("got JMXServiceURL");
        logger.info(String.format("JMXServiceURL=%s", url.toString()));
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            logger.info("got JMXConnector");
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            logger.info("got MBeanServerConnection");
            logger.info(String.format("connection.getMBeanCount()=%d", connection.getMBeanCount()));
        }
        logger.info("exiting");
    }
}
I'm using a minimal build.gradle file with dependencies:
dependencies {
    implementation 'ch.qos.logback:logback-classic:1.2.3'
    implementation 'org.jvnet.opendmk:jmxremote_optional:1.0_01-ea'
}
Without the jmxremote_optional dependency, I get a java.net.MalformedURLException: Unsupported protocol: jmxmp error. I presume I've added the correct Maven dependency to resolve that.
When I run this, I get the following and then the application hangs indefinitely:
120 18:43:33.693 [main] INFO jmxclient.Main - got JMXServiceURL
123 18:43:33.696 [main] INFO jmxclient.Main - JMXServiceURL=service:jmx:jmxmp://localhost:9999
I definitely have a Java application exposing JMX metrics on that port:
$ time curl localhost:9999
curl: (52) Empty reply from server

real    0m0.020s
user    0m0.012s
sys     0m0.000s

Spring Boot Redis configuration not working

I am developing a Spring Boot [web] REST-style application with a ServletInitializer (since it needs to be deployed to an existing Tomcat server). It has a @RestController with a method that, when invoked, needs to write to a Redis pub-sub channel. I have the Redis server running on localhost (default port, no password). The relevant part of the POM file has the required starter dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
When I deploy the WAR and hit the endpoint http://localhost:8080/springBootApp/health, I get this response:
{
  "status": "DOWN",
  "diskSpace": {
    "status": "UP",
    "total": 999324516352,
    "free": 691261681664,
    "threshold": 10485760
  },
  "redis": {
    "status": "DOWN",
    "error": "org.springframework.data.redis.RedisConnectionFailureException: java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out"
  }
}
I added the following to my Spring Boot application class:
@Bean
JedisConnectionFactory jedisConnectionFactory() {
    return new JedisConnectionFactory();
}

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
    template.setConnectionFactory(jedisConnectionFactory());
    return template;
}
I also tried adding the following to my @RestController before executing some test Redis code, but I get the same error as above in the stack trace:
@Autowired
private RedisTemplate<String, String> redisTemplate;
Edit (2017-05-09)
My understanding is that the Spring Boot Redis starter assumes the default values of spring.redis.host=localhost and spring.redis.port=6379. I still added the two to application.properties, but that did not fill the gap.
Update (2017-05-10)
I added an answer to this thread.
I did a simple example with Redis and Spring Boot.
First I installed Redis on Docker:
$ docker run --name some-redis -d redis redis-server --appendonly yes
Then I used this code for the receiver:
import java.util.concurrent.CountDownLatch;

public class Receiver {
    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

    private CountDownLatch latch;

    @Autowired
    public Receiver(CountDownLatch latch) {
        this.latch = latch;
    }

    public void receiveMessage(String message) {
        LOGGER.info("Received <" + message + ">");
        latch.countDown();
    }
}
And this is my spring boot app and my listener:
@SpringBootApplication
// after adding the security library it is necessary to use security configuration
@ComponentScan("omid.spring.example.springexample.security")
public class RunSpring {
    private static final Logger LOGGER = LoggerFactory.getLogger(RunSpring.class);

    public static void main(String[] args) throws InterruptedException {
        ConfigurableApplicationContext contex = SpringApplication.run(RunSpring.class, args);
    }

    @Autowired
    private ApplicationContext context;

    @RestController
    public class SimpleController {
        @RequestMapping("/test")
        public String getHelloWorld() {
            StringRedisTemplate template = context.getBean(StringRedisTemplate.class);
            CountDownLatch latch = context.getBean(CountDownLatch.class);
            LOGGER.info("Sending message...");
            Thread t = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < 100; i++) {
                        template.convertAndSend("chat", i + " => Hello from Redis!");
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });
            t.start();
            try {
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return "hello world 1";
        }
    }

    ///////////////////////////////////////////////////////////////

    @Bean
    RedisMessageListenerContainer container(RedisConnectionFactory connectionFactory,
                                            MessageListenerAdapter listenerAdapter) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.addMessageListener(listenerAdapter, new PatternTopic("chat"));
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    @Bean
    Receiver receiver(CountDownLatch latch) {
        return new Receiver(latch);
    }

    @Bean
    CountDownLatch latch() {
        return new CountDownLatch(1);
    }

    @Bean
    StringRedisTemplate template(RedisConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }
}
The important point is the Redis IP: if you installed it on Docker like me, then you should set the IP address in application.properties like this:
spring.redis.host=172.17.0.4
I put all my Spring examples on GitHub here.
In addition I used redis-stat to monitor Redis; it is a simple monitoring tool.
Note: in newer Spring Boot versions (3.x) the Spring Data Redis properties were renamed, e.g. spring.redis.host is now spring.data.redis.host.
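For example, the host/port configuration under the renamed properties would look like:
# Spring Boot 3.x property names
spring.data.redis.host=localhost
spring.data.redis.port=6379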
You need to configure your Redis server information using application.properties:
# REDIS (RedisProperties)
spring.redis.cluster.nodes= # Comma-separated list of "host:port"
spring.redis.database=0 # Database index
spring.redis.url= # Connection URL
spring.redis.host=localhost # Redis server host.
spring.redis.password= # Login password of the redis server.
spring.redis.ssl=false # Enable SSL support.
spring.redis.port=6379 # Redis server port.
Spring data docs: https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html#REDIS
This was a proxy-related problem, where even access to localhost was somehow being curtailed. Once I disabled the proxy settings, Redis health was UP! So the problem is solved. I did not have to add any property to application.properties, and neither did I have to explicitly configure anything in the Spring Boot application class, because Spring Boot and the Redis starter auto-configure based on Redis defaults (as applicable in my development environment). I just added the following to the pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
and the following to the @RestController annotated class, and Spring Boot auto-wired as needed (awesome!).
@Autowired
private RedisTemplate<String, String> redisTemplate;
To publish a simple message to a channel, this single line of code was sufficient for validating the setup:
this.redisTemplate.convertAndSend(channelName, "hello world");
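As a quick cross-check (assuming a local default Redis and substituting the actual channel name), the message can be observed arriving with redis-cli:
$ redis-cli SUBSCRIBE <channelName>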
I appreciate all the comments, which were helpful in backing up my checks.

SpringBoot + ActiveMQ - How to set trusted packages?

I'm creating two Spring Boot server & client applications communicating using JMS, and everything works fine with release 5.12.1 of ActiveMQ, but as soon as I update to version 5.12.3, I get the following error:
org.springframework.jms.support.converter.MessageConversionException: Could not convert JMS message; nested exception is javax.jms.JMSException: Failed to build body from content. Serializable class not available to broker. Reason: java.lang.ClassNotFoundException: Forbidden class MyClass! This class is not trusted to be serialized as ObjectMessage payload. Please take a look at http://activemq.apache.org/objectmessage.html for more information on how to configure trusted classes.
I went to the URL that is provided and figured out that my issue is related to the new security implemented in the 5.12.2 release of ActiveMQ. I understand that I could fix it by defining the trusted packages, but I have no idea where to put such a configuration in my Spring Boot project.
The only reference I'm making to the JMS queue in my client and my server is setting up its URI in application.properties and enabling JMS on my "main" class with @EnableJms. Here's my configuration on the separate broker:
@Configuration
@ConfigurationProperties(prefix = "activemq")
public class BrokerConfiguration {
    /**
     * Defaults to TCP 10000
     */
    private String connectorURI = "tcp://0.0.0.0:10000";

    private String kahaDBDataDir = "../../data/activemq";

    public String getConnectorURI() {
        return connectorURI;
    }

    public void setConnectorURI(String connectorURI) {
        this.connectorURI = connectorURI;
    }

    public String getKahaDBDataDir() {
        return kahaDBDataDir;
    }

    public void setKahaDBDataDir(String kahaDBDataDir) {
        this.kahaDBDataDir = kahaDBDataDir;
    }

    @Bean(initMethod = "start", destroyMethod = "stop")
    public BrokerService broker() throws Exception {
        KahaDBPersistenceAdapter persistenceAdapter = new KahaDBPersistenceAdapter();
        persistenceAdapter.setDirectory(new File(kahaDBDataDir));
        final BrokerService broker = new BrokerService();
        broker.addConnector(getConnectorURI());
        broker.setPersistent(true);
        broker.setPersistenceAdapter(persistenceAdapter);
        broker.setShutdownHooks(Collections.<Runnable> singletonList(new SpringContextHook()));
        broker.setUseJmx(false);
        final ManagementContext managementContext = new ManagementContext();
        managementContext.setCreateConnector(true);
        broker.setManagementContext(managementContext);
        return broker;
    }
}
So I'd like to know where I'm supposed to specify the trusted packages.
Thanks :)
You can just set one of the below Spring Boot properties in application.properties to set the trusted packages:
spring.activemq.packages.trust-all=true
or
spring.activemq.packages.trusted=<package1>,<package2>,<package3>
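For example (hypothetical package names; note that JDK packages such as java.util may also need to be listed when the payload contains collections, as one of the answers below does):
spring.activemq.packages.trusted=com.example.model,java.util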
Add the following bean:
@Bean
public ActiveMQConnectionFactory activeMQConnectionFactory() {
    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("your broker URL");
    factory.setTrustedPackages(Arrays.asList("com.my.package"));
    return factory;
}
The ability to do this via a configuration property has been added for the next release:
https://github.com/spring-projects/spring-boot/issues/5631
Method: public void setTrustedPackages(List<String> trustedPackages)
Description: adds all packages that are used in the sent and received Message objects.
Code: connectionFactory.setTrustedPackages(Arrays.asList("org.api", "java.util"))
Implemented code:
private static final String DEFAULT_BROKER_URL = "tcp://localhost:61616";
private static final String RESPONSE_QUEUE = "api-response";

@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(DEFAULT_BROKER_URL);
    connectionFactory.setTrustedPackages(Arrays.asList("org.api", "java.util"));
    return connectionFactory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(connectionFactory());
    template.setDefaultDestinationName(RESPONSE_QUEUE);
    return template;
}
If anyone is still looking for an answer, the below snippet worked for me:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(BROKER_URL);
    connectionFactory.setUserName(BROKER_USERNAME);
    connectionFactory.setPassword(BROKER_PASSWORD);
    connectionFactory.setTrustAllPackages(true); // all packages are considered trusted
    // connectionFactory.setTrustedPackages(Arrays.asList("com.my.package")); // selected packages only
    return connectionFactory;
}
I am setting JAVA_OPTS something like below and passing it to the java command, and it's working for me:
JAVA_OPTS="-Xmx256M -Xms16M -Dorg.apache.activemq.SERIALIZABLE_PACKAGES=*"
java $JAVA_OPTS -Dapp.config.location=/data/config -jar <your_jar>.jar --spring.config.location=file:/data/config/<your config file path>.yml
Yes, I found its config in the new version:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.0.RELEASE</version>
</parent>
spring:
  profiles:
    active: #profileActive#
  cache:
    ehcache:
      config: ehcache.xml
  activemq:
    packages:
      trusted: com.stylrplus.api.model
