Spring Boot Redis configuration not working - java

I am developing a Spring Boot web REST-style application with a ServletInitializer (since it needs to be deployed to an existing Tomcat server). It has a @RestController with a method that, when invoked, needs to write to a Redis pub-sub channel. I have the Redis server running on localhost (default port, no password). The relevant part of the POM file has the required starter dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
When I deploy the WAR and hit the endpoint http://localhost:8080/springBootApp/health, I get this response:
{
  "status": "DOWN",
  "diskSpace": {
    "status": "UP",
    "total": 999324516352,
    "free": 691261681664,
    "threshold": 10485760
  },
  "redis": {
    "status": "DOWN",
    "error": "org.springframework.data.redis.RedisConnectionFailureException: java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out"
  }
}
I added the following to my Spring Boot application class:
@Bean
JedisConnectionFactory jedisConnectionFactory() {
    return new JedisConnectionFactory();
}

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
    template.setConnectionFactory(jedisConnectionFactory());
    return template;
}
I also tried adding the following to my @RestController before executing some test Redis code, but I get the same error as above in the stack trace:
@Autowired
private RedisTemplate<String, String> redisTemplate;
Edit (2017-05-09)
My understanding is that the Spring Boot Redis starter assumes the default values of spring.redis.host=localhost and spring.redis.port=6379. I added both to application.properties anyway, but that did not solve the problem.
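For reference, the two entries I added simply restate those defaults:
spring.redis.host=localhost
spring.redis.port=6379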
Update (2017-05-10)
I added an answer to this thread.

I put together a simple example with Redis and Spring Boot.
First I ran Redis in Docker:
$ docker run --name some-redis -d redis redis-server --appendonly yes
Then I used this code for the receiver:
import java.util.concurrent.CountDownLatch;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;

public class Receiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

    private final CountDownLatch latch;

    @Autowired
    public Receiver(CountDownLatch latch) {
        this.latch = latch;
    }

    // Invoked by the MessageListenerAdapter whenever a message arrives on the subscribed topic.
    public void receiveMessage(String message) {
        LOGGER.info("Received <" + message + ">");
        latch.countDown();
    }
}
And this is my Spring Boot app and my listener:
import java.util.concurrent.CountDownLatch;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.listener.PatternTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;
import org.springframework.data.redis.listener.adapter.MessageListenerAdapter;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
// After adding the security library, the security configuration package must be scanned.
@ComponentScan("omid.spring.example.springexample.security")
public class RunSpring {

    private static final Logger LOGGER = LoggerFactory.getLogger(RunSpring.class);

    public static void main(String[] args) throws InterruptedException {
        ConfigurableApplicationContext context = SpringApplication.run(RunSpring.class, args);
    }

    @Autowired
    private ApplicationContext context;

    @RestController
    public class SimpleController {

        @RequestMapping("/test")
        public String getHelloWorld() {
            StringRedisTemplate template = context.getBean(StringRedisTemplate.class);
            CountDownLatch latch = context.getBean(CountDownLatch.class);
            LOGGER.info("Sending message...");
            // Publish 100 messages to the "chat" channel on a background thread.
            Thread t = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < 100; i++) {
                        template.convertAndSend("chat", i + " => Hello from Redis!");
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });
            t.start();
            try {
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return "hello world 1";
        }
    }

    ///////////////////////////////////////////////////////////////

    @Bean
    RedisMessageListenerContainer container(RedisConnectionFactory connectionFactory,
                                            MessageListenerAdapter listenerAdapter) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.addMessageListener(listenerAdapter, new PatternTopic("chat"));
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    @Bean
    Receiver receiver(CountDownLatch latch) {
        return new Receiver(latch);
    }

    @Bean
    CountDownLatch latch() {
        return new CountDownLatch(1);
    }

    @Bean
    StringRedisTemplate template(RedisConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }
}
The important point is the Redis IP: if you installed it in Docker like me, then you should set the IP address in application.properties like this:
spring.redis.host=172.17.0.4
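Note that 172.17.0.4 is just the address my container happened to get; yours may differ. One way to look up the container IP (or you can instead publish the port with -p 6379:6379 on docker run so that localhost:6379 works) is:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' some-redis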
I put all my Spring examples on GitHub here.
In addition, I used redis-stat to monitor Redis; it is a simple monitoring tool.

Note that in newer Spring Boot versions (3.x) the Spring Data Redis properties were renamed, e.g. spring.redis.host is now spring.data.redis.host.
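For example, the equivalent entries under the new property names look like this:
spring.data.redis.host=localhost
spring.data.redis.port=6379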

You need to configure your Redis server information in application.properties:
# REDIS (RedisProperties)
spring.redis.cluster.nodes= # Comma-separated list of "host:port"
spring.redis.database=0 # Database index
spring.redis.url= # Connection URL. Overrides host, port, and password.
spring.redis.host=localhost # Redis server host.
spring.redis.password= # Login password of the redis server.
spring.redis.ssl=false # Enable SSL support.
spring.redis.port=6379 # Redis server port.
Spring data docs: https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html#REDIS

This was a proxy-related problem, where even access to localhost was somehow being curtailed. Once I disabled the proxy settings, Redis health was UP! So the problem is solved. I did not have to add any property to application.properties, and neither did I have to explicitly configure anything in the Spring Boot application class, because Spring Boot and the Redis starter auto-configure themselves based on the Redis defaults (as applicable in my development environment). I just added the following to the pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
and the following to the @RestController-annotated class, and Spring Boot auto-wired as needed (awesome!):
@Autowired
private RedisTemplate<String, String> redisTemplate;
To publish a simple message to a channel, this single line of code was sufficient for validating the setup:
this.redisTemplate.convertAndSend(channelName, "hello world");
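For context, a minimal sketch (not from the original post) of how that call can sit inside a controller; the endpoint path and channel name are placeholders:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PublishController {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    // Publishes a fixed message to a hypothetical "myChannel" pub/sub channel.
    @PostMapping("/publish")
    public String publish() {
        this.redisTemplate.convertAndSend("myChannel", "hello world");
        return "sent";
    }
}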
I appreciate all the comments, which were helpful in backing up my checks.

Related

AWS ElastiCache (Redis) failed to connect (Jedis connection error) when accessed locally through Spring Boot (Java)

I am working on a Spring Boot application where I have to store an OTP in ElastiCache (Redis).
Is ElastiCache the right choice for storing an OTP?
Using Redis to store the OTP:
To connect to Redis locally, I ran "sudo apt-get install redis-server". It installed and ran successfully.
I created a RedisConfig where I read the hostname and port from the application config file. I thought I would use this hostname and port to connect to AWS ElastiCache, but right now I am running locally.
public class RedisConfig {

    @Value("${redis.hostname}")
    private String redisHostName;

    @Value("${redis.port}")
    private int redisPort;

    @Bean
    protected JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    public RedisTemplate<String, Integer> redisTemplate() {
        final RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
Now I used the RedisTemplate and ValueOperations to put and read the data in the Redis cache:
public class MyService {

    private RedisTemplate<String, Integer> redisTemplate;
    private ValueOperations<String, Integer> valueOperations;

    public MyService(RedisTemplate<String, Integer> redisTemplate) {
        this.redisTemplate = redisTemplate;
        valueOperations = redisTemplate.opsForValue();
    }

    public int generateOTP(String key) throws Exception {
        try {
            Random random = new Random();
            int otp = 1000 + random.nextInt(9000);
            // Store the OTP with a 120-second time-to-live.
            valueOperations.set(key, otp, 120, TimeUnit.SECONDS);
            return otp;
        } catch (Exception e) {
            throw new Exception("Exception while setting OTP: " + e.getMessage());
        }
    }

    public int getOtp(String key) {
        try {
            return valueOperations.get(key);
        } catch (Exception e) {
            return 0;
        }
    }
}
This is what I have done, and it is running perfectly locally.
Questions I have:
1. What changes do I need when deploying the application to an EC2 instance? Do we need to configure the hostname and port in the code?
2. If we need to configure them, is there a way to test locally what would happen when we deploy? Can we simulate that environment somehow?
3. I have read that to access AWS ElastiCache (Redis) locally we have to set up a proxy server, which is not good practice, so how can we easily build the app locally and deploy it to the cloud?
4. Why doesn't ValueOperations have a "delete" method when it has set and put methods? How can I invalidate a cache entry once it has been used, before the expiry time?
Accessing the AWS cache locally:
When I tried to access AWS ElastiCache (Redis) by putting the port and hostname into the creation of the JedisConnectionFactory instance:
@Bean
protected JedisConnectionFactory jedisConnectionFactory() {
    RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration(redisHostName, redisPort);
    JedisConnectionFactory factory = new JedisConnectionFactory(configuration);
    return factory;
}
I got an error while setting the key value:
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
I have tried to explain what I have done and what I need to know.
If anybody knows of any blog or resource where these things are explained in detail, please direct me there.
After posting the question, I tried things myself.
As per Amazon: "Your Amazon ElastiCache instances are designed to be accessed through an Amazon EC2 instance."
To connect to Redis locally on Linux:
Run "sudo apt-get install redis-server". It will install the Redis server.
Run "redis-cli" to verify that Redis is running successfully on localhost:6379.
To connect to the server in Java (Spring Boot), use a RedisConfig class.
For local development, in application.properties: redis.hostname=localhost, redis.port=6379
For the cloud, or when deployed to EC2: redis.hostname=<Amazon ElastiCache endpoint>, redis.port=6379
public class RedisConfig {

    @Value("${redis.hostname}")
    private String redisHostName;

    @Value("${redis.port}")
    private int redisPort;

    @Bean
    protected JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration(redisHostName, redisPort);
        JedisConnectionFactory factory = new JedisConnectionFactory(configuration);
        return factory;
    }

    @Bean
    public RedisTemplate<String, Integer> redisTemplate() {
        final RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
}
With this, whether you are running locally or in the cloud, you just need to change the URL and things will work perfectly.
After this, use the RedisTemplate and ValueOperations to put and read the data in the Redis cache, the same as mentioned in the question above. No other changes are needed.
Answers to the questions:
1. We need to change the hostname when deploying to the EC2 instance.
2. Running the Redis server locally is exactly the same as running Redis when the application is deployed on EC2; no changes are needed, just use the Redis config above.
3. Don't create a proxy server; that defeats the very idea of the cache. Run a local Redis server and change the hostname when deploying.
4. I still need to find a way to invalidate a cache entry when using ValueOperations (see the sketch below).
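A possible way to do that (my sketch, not part of the original answer): while ValueOperations itself has no delete method here, RedisTemplate does, so the key can be removed through the template once the OTP has been verified. The class and method names are illustrative:

import org.springframework.data.redis.core.RedisTemplate;

public class OtpInvalidator {

    private final RedisTemplate<String, Integer> redisTemplate;

    public OtpInvalidator(RedisTemplate<String, Integer> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void invalidateOtp(String key) {
        // Removes the key immediately, before its 120-second expiry.
        redisTemplate.delete(key);
    }
}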

Resume transfer of files after connection reset FTP

I am building an application using Spring Integration which is used to send files from one FTP server (source) to another FTP server (target). I first send files from source to the local directory using the inbound adapter and then send files from the local directory to the target using the outbound adapter.
My code seems to be working fine and I am able to achieve my goal, but my problem is that when the connection to the target FTP server is reset during the transfer of files, the transfer does not continue after the connection starts working again.
I used Java configuration with inbound and outbound adapters. Can anyone please tell me if it is possible to resume my transfer of files somehow after the connection reset?
P.S.: I am a beginner at Spring, so correct me if I have done something wrong here. Thanks.
AppConfig.java:
@Configuration
@Component
public class FileTransferServiceConfig {

    @Autowired
    private ConfigurationService configurationService;

    public static final String FILE_POLLING_DURATION = "5000";

    @Bean
    public SessionFactory<FTPFile> sourceFtpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(configurationService.getSourceHostName());
        sf.setPort(Integer.parseInt(configurationService.getSourcePort()));
        sf.setUsername(configurationService.getSourceUsername());
        sf.setPassword(configurationService.getSourcePassword());
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public SessionFactory<FTPFile> targetFtpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(configurationService.getTargetHostName());
        sf.setPort(Integer.parseInt(configurationService.getTargetPort()));
        sf.setUsername(configurationService.getTargetUsername());
        sf.setPassword(configurationService.getTargetPassword());
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @MessagingGateway
    public interface MyGateway {

        @Gateway(requestChannel = "toFtpChannel")
        void sendToFtp(Message message);

    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(sourceFtpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.setRemoteDirectory(configurationService.getSourceDirectory());
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter(
                configurationService.getFileMask()));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(channel = "ftpChannel",
            poller = @Poller(fixedDelay = FILE_POLLING_DURATION))
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File(configurationService.getLocalDirectory()));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler targetHandler() {
        FtpMessageHandler handler = new FtpMessageHandler(targetFtpSessionFactory());
        handler.setRemoteDirectoryExpression(new LiteralExpression(
                configurationService.getTargetDirectory()));
        return handler;
    }
}
Application.java:
@SpringBootApplication
public class Application {

    public static ConfigurableApplicationContext context;

    public static void main(String[] args) {
        context = new SpringApplicationBuilder(Application.class)
                .web(false)
                .run(args);
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler sourceHandler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                Object payload = message.getPayload();
                System.out.println("Payload: " + payload);
                if (payload instanceof File) {
                    File file = (File) payload;
                    System.out.println("Trying to send " + file.getName() + " to target");
                }
                MyGateway gateway = context.getBean(MyGateway.class);
                gateway.sendToFtp(message);
            }

        };
    }
}
First of all, it isn't clear what that sourceHandler is for, but you really should be sure that it (or the targetHandler) is subscribed to the proper channel.
I assume that in your real code the targetHandler is indeed subscribed to the toFtpChannel.
Anyway, that isn't related.
I think the problem here is exactly with the AcceptOnceFileListFilter and the error. The filter works first, during the directory scan, and loads all the local files into an in-memory queue for performance reasons. Then all of them are sent to the channel for processing. When we reach the targetHandler and get an exception, we silently go to the global errorChannel, losing the fact that the file hasn't been transferred. And this happens with all the remaining files in memory. The transfer is resumed anyway, but it will only work for new files in the remote directory.
I suggest you add an ExpressionEvaluatingRequestHandlerAdvice to the targetHandler definition (@ServiceActivator(adviceChain)) and, in case of an error, call AcceptOnceFileListFilter.remove(File):
/**
 * Remove the specified file from the filter so it will pass on the next attempt.
 * @param f the element to remove.
 * @return true if the file was removed as a result of this call.
 */
boolean remove(F f);
This way you remove the failed files from the filter, and they will be picked up on the next poll. You have to expose the AcceptOnceFileListFilter so that you can access it from the onFailureExpression. The file is the payload of the request message.
EDIT
The sample for the ExpressionEvaluatingRequestHandlerAdvice:
@Bean
public Advice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnFailureExpressionString("@acceptOnceFileListFilter.remove(payload)");
    advice.setTrapException(true);
    return advice;
}
...
@ServiceActivator(inputChannel = "ftpChannel", adviceChain = "expressionAdvice")
Everything else you can get from their JavaDocs.
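For completeness, a sketch (my addition, under the assumption that the FileTransferServiceConfig above is adjusted) of exposing the filter as a bean named acceptOnceFileListFilter so the SpEL expression above can reference it:

@Bean
public AcceptOnceFileListFilter<File> acceptOnceFileListFilter() {
    // Shared instance so the onFailureExpression can call remove(payload) on it.
    return new AcceptOnceFileListFilter<>();
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = FILE_POLLING_DURATION))
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(configurationService.getLocalDirectory()));
    source.setAutoCreateLocalDirectory(true);
    // Reuse the filter bean instead of creating it inline.
    source.setLocalFilter(acceptOnceFileListFilter());
    return source;
}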

Looking for a way to parse values in springboot application.yml file before SpringBootApplication is initialized

So my problem is as follows:
I'm using Spring AMQP to connect to a RabbitMQ instance that uses SSL. Unfortunately, Spring AMQP does not currently support full-length amqps URIs, and adding support is not high on the priority list (see issue: https://github.com/spring-projects/spring-boot/issues/6401). The URI needs to be separated into its individual values.
The following fields are required in my application.yml to connect:
spring:
  rabbitmq:
    host: hostname
    port: portnumber
    username: username
    password: password
    virtual-host: virtualhost
    ssl:
      enabled: true
My VCAP_SERVICES environment for my RabbitMQ instance only provides the virtual host and the full-length URI in the following format: amqps://username:password@hostname:portnumber/virtualhost
Copying and pasting these values into my application.yml is fine for now, but in the long run it is not viable. They will need to come from VCAP_SERVICES.
My @SpringBootApplication has @Beans that initialize a connection to the RabbitMQ instance on startup, so I am looking for a way to parse out the individual values and set them before the application is started.
If you are just interested in reading the properties before your Spring Boot application is initialized, you can parse the YAML file using Spring's YamlPropertiesFactoryBean before you call SpringApplication.run. For example:
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        YamlPropertiesFactoryBean yamlFactory = new YamlPropertiesFactoryBean();
        yamlFactory.setResources(new ClassPathResource("application.yml"));
        Properties props = yamlFactory.getObject();
        String hostname = props.getProperty("spring.rabbitmq.host"); // key matches the YAML shown above
        ...
        SpringApplication.run(Application.class, args);
    }
}
Simply override Boot's auto-configured connection factory...
@SpringBootApplication
public class So46937522Application {

    public static void main(String[] args) {
        SpringApplication.run(So46937522Application.class, args);
    }

    @Bean
    public CachingConnectionFactory rabbitConnectionFactory(RabbitProperties config)
            throws Exception {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.getRabbitConnectionFactory()
                .setUri("amqps://guest:guest@10.0.0.3:5671/virtualhost");
        return connectionFactory;
    }

    @RabbitListener(queues = "si.test.queue")
    public void listen(Message in) {
        System.out.println(in);
    }
}
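As for splitting the full amqps URI into the individual values, a minimal sketch (my addition, not from either answer) using java.net.URI; the sample URI is a placeholder:

import java.net.URI;

public class AmqpsUriParser {

    public static void main(String[] args) {
        // Placeholder URI in the same shape as the one provided by VCAP_SERVICES.
        URI uri = URI.create("amqps://username:password@hostname:5671/virtualhost");

        String[] userInfo = uri.getUserInfo().split(":", 2); // "username:password"
        String username = userInfo[0];
        String password = userInfo[1];
        String host = uri.getHost();
        int port = uri.getPort();
        String virtualHost = uri.getPath().substring(1); // strip the leading "/"

        System.out.printf("host=%s port=%d user=%s vhost=%s%n", host, port, username, virtualHost);
    }
}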

How do I setup TCPConnectionFactory or SSLServerSocketFactory from Java in Spring Boot

I am new to Spring Boot but have been asked at my job to implement a small web service using Spring Boot.
The web service needs to accept SSL TCP connections (an external system will connect to my web service using a custom protocol - NOT HTTP). Also, I would like to handle these connections in a background task (or multiple background tasks).
After looking at the official documentation (http://docs.spring.io/spring-integration/reference/html/ip.html), I still don't understand where to place all that XML. When I asked on SO where to place that XML, I was told that this is a very old method of configuration and should not be used anymore.
What would be the "up-to-date" way to do this ?
@SpringBootApplication
public class So43983296Application implements CommandLineRunner {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So43983296Application.class, args);
        Thread.sleep(10_000);
        context.close();
    }

    @Autowired
    private DefaultTcpNetSSLSocketFactorySupport ssl;

    @Override
    public void run(String... args) throws Exception {
        Socket socket = ssl.getSocketFactory().createSocket("localhost", 1234);
        socket.getOutputStream().write("foo\r\n".getBytes());
        BufferedReader br = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        String result = br.readLine();
        System.out.println(result);
        br.close();
        socket.close();
    }

    @Bean
    public TcpNetServerConnectionFactory scf() {
        TcpNetServerConnectionFactory scf = new TcpNetServerConnectionFactory(1234);
        DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport = tcpSocketFactorySupport();
        scf.setTcpSocketFactorySupport(tcpSocketFactorySupport);
        // Add custom serializer/deserializer here; default is ByteArrayCrLfSerializer
        return scf;
    }

    @Bean
    public DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport() {
        TcpSSLContextSupport sslContextSupport = new DefaultTcpSSLContextSupport("classpath:test.ks",
                "classpath:test.truststore.ks", "secret", "secret");
        DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport =
                new DefaultTcpNetSSLSocketFactorySupport(sslContextSupport);
        return tcpSocketFactorySupport;
    }

    @Bean
    public TcpInboundGateway inGate() {
        TcpInboundGateway inGate = new TcpInboundGateway();
        inGate.setConnectionFactory(scf());
        inGate.setRequestChannelName("upperCase");
        return inGate;
    }

    @ServiceActivator(inputChannel = "upperCase")
    public String upCase(byte[] in) {
        return new String(in).toUpperCase();
    }
}
If you prefer XML configuration for Spring Integration, add it to a Spring configuration XML file and use @ImportResource("my-context.xml") on the class.
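A minimal sketch of that approach (my addition; "my-context.xml" is the file name from the answer above, and the class name is a placeholder):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@ImportResource("classpath:my-context.xml") // pulls the XML-defined integration flow into the Boot context
public class XmlConfiguredApplication {

    public static void main(String[] args) {
        SpringApplication.run(XmlConfiguredApplication.class, args);
    }
}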

SpringBoot + ActiveMQ - How to set trusted packages?

I'm creating two Spring Boot applications (a server and a client) communicating using JMS, and everything is working fine with release 5.12.1 of ActiveMQ, but as soon as I update to version 5.12.3, I get the following error:
org.springframework.jms.support.converter.MessageConversionException: Could not convert JMS message; nested exception is javax.jms.JMSException: Failed to build body from content. Serializable class not available to broker. Reason: java.lang.ClassNotFoundException: Forbidden class MyClass! This class is not trusted to be serialized as ObjectMessage payload. Please take a look at http://activemq.apache.org/objectmessage.html for more information on how to configure trusted classes.
I went to the URL that is provided and figured out that my issue is related to the new security introduced in the 5.12.2 release of ActiveMQ. I understand that I could fix it by defining the trusted packages, but I have no idea where to put such a configuration in my Spring Boot project.
The only reference I make to the JMS queue in my client and my server is setting up its URI in application.properties and enabling JMS on my "main" class with @EnableJms. Here's my configuration on the separate broker:
@Configuration
@ConfigurationProperties(prefix = "activemq")
public class BrokerConfiguration {

    /**
     * Defaults to TCP 10000
     */
    private String connectorURI = "tcp://0.0.0.0:10000";

    private String kahaDBDataDir = "../../data/activemq";

    public String getConnectorURI() {
        return connectorURI;
    }

    public void setConnectorURI(String connectorURI) {
        this.connectorURI = connectorURI;
    }

    public String getKahaDBDataDir() {
        return kahaDBDataDir;
    }

    public void setKahaDBDataDir(String kahaDBDataDir) {
        this.kahaDBDataDir = kahaDBDataDir;
    }

    @Bean(initMethod = "start", destroyMethod = "stop")
    public BrokerService broker() throws Exception {
        KahaDBPersistenceAdapter persistenceAdapter = new KahaDBPersistenceAdapter();
        persistenceAdapter.setDirectory(new File(kahaDBDataDir));
        final BrokerService broker = new BrokerService();
        broker.addConnector(getConnectorURI());
        broker.setPersistent(true);
        broker.setPersistenceAdapter(persistenceAdapter);
        broker.setShutdownHooks(Collections.<Runnable> singletonList(new SpringContextHook()));
        broker.setUseJmx(false);
        final ManagementContext managementContext = new ManagementContext();
        managementContext.setCreateConnector(true);
        broker.setManagementContext(managementContext);
        return broker;
    }
}
So I'd like to know where I'm supposed to specify the trusted packages.
Thanks :)
You can just set one of the Spring Boot properties below in application.properties to configure the trusted packages:
spring.activemq.packages.trust-all=true
or
spring.activemq.packages.trusted=<package1>,<package2>,<package3>
Add the following bean:
@Bean
public ActiveMQConnectionFactory activeMQConnectionFactory() {
    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("your broker URL");
    factory.setTrustedPackages(Arrays.asList("com.my.package"));
    return factory;
}
The ability to do this via a configuration property has been added for the next release:
https://github.com/spring-projects/spring-boot/issues/5631
Method: public void setTrustedPackages(List<String> trustedPackages)
Description: adds all packages that are used when sending and receiving Message objects.
Code: connectionFactory.setTrustedPackages(Arrays.asList("org.api", "java.util"))
Implemented code:
private static final String DEFAULT_BROKER_URL = "tcp://localhost:61616";
private static final String RESPONSE_QUEUE = "api-response";

@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(DEFAULT_BROKER_URL);
    connectionFactory.setTrustedPackages(Arrays.asList("org.api", "java.util"));
    return connectionFactory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(connectionFactory());
    template.setDefaultDestinationName(RESPONSE_QUEUE);
    return template;
}
If anyone is still looking for an answer, the snippet below worked for me:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(BROKER_URL);
    connectionFactory.setUserName(BROKER_USERNAME);
    connectionFactory.setPassword(BROKER_PASSWORD);
    connectionFactory.setTrustAllPackages(true); // all packages are considered trusted
    // connectionFactory.setTrustedPackages(Arrays.asList("com.my.package")); // selected packages only
    return connectionFactory;
}
I am setting JAVA_OPTS as shown below and passing it to the java command, and it works for me:
JAVA_OPTS=-Xmx256M -Xms16M -Dorg.apache.activemq.SERIALIZABLE_PACKAGES=*
java $JAVA_OPTS -Dapp.config.location=/data/config -jar <your_jar>.jar --spring.config.location=file:/data/config/<your config file path>.yml
Yes, I found its configuration in the new version:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.0.RELEASE</version>
</parent>
spring:
  profiles:
    active: @profileActive@
  cache:
    ehcache:
      config: ehcache.xml
  activemq:
    packages:
      trusted: com.stylrplus.api.model
