I have a Spring Boot application, and my goal is to declare queues, exchanges, and bindings on application startup. The application only produces messages to various queues; it has no consumers.
I have included these dependencies in my pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.3.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
    <version>2.2.12.RELEASE</version>
</dependency>
My configuration class:
@Configuration
public class RabbitConfiguration {

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("myhost", 5672);
        connectionFactory.setUsername("example_name");
        connectionFactory.setPassword("example_pass");
        return connectionFactory;
    }

    @Bean
    public AmqpAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public Queue declareQueue() {
        return new Queue("test_queue", true, false, false);
    }

    @Bean
    public DirectExchange declareDirectExchange() {
        return new DirectExchange("test_direct_exchange", true, false);
    }

    @Bean
    public Declarables declareBindings() {
        return new Declarables(
                new Binding("test_queue", DestinationType.QUEUE, "test_direct_exchange", "test_routing_key", null)
        );
    }
}
My problem is that the queues, exchanges, and bindings are not created on application startup. Spring Boot does not even open the connection; the connection, queues, etc. are created only when I produce messages to the queues.
If you want to force declaration during app startup and don't have any consumers, you can either add the actuator starter to the classpath, or simply create the shared connection yourself.
@Bean
ApplicationRunner runner(ConnectionFactory cf) {
    return args -> cf.createConnection().close();
}
This won't close the connection; if you want to do that, call cf.resetConnection().
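A minimal sketch of that call (assuming the CachingConnectionFactory declared above is injected):
@Autowired
private CachingConnectionFactory connectionFactory;

public void closeSharedConnection() {
    // resetConnection() closes the cached shared connection and clears the
    // channel cache; a fresh connection is created on the next operation
    connectionFactory.resetConnection();
}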
If you want the app to start even if the broker is down, do something like this:
@Bean
ApplicationRunner runner(ConnectionFactory cf) {
    return args -> {
        boolean open = false;
        while (!open) {
            try {
                cf.createConnection().close();
                open = true;
            }
            catch (Exception e) {
                Thread.sleep(5000);
            }
        }
    };
}
After some digging, I found out that I was missing the actuator dependency.
So adding this dependency solved my issue:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
It is weird that the connection is not opened on application startup unless the actuator dependency is present; presumably the actuator's RabbitMQ health check opens a connection, which is what triggers the declarations.
You can use
@Component
public class QueueConfig {

    private AmqpAdmin amqpAdmin;

    public QueueConfig(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    @PostConstruct
    public void createQueues() {
        amqpAdmin.declareQueue(new Queue("q1", true));
        amqpAdmin.declareQueue(new Queue("q2", true));
    }
}
I'm using Spring version 2.1.18.RELEASE (I can't change this version).
I need to implement a CRUD repository for Elasticsearch; for this I use spring-data-elasticsearch (version 3.1.21.RELEASE is pulled in automatically) with Elasticsearch 6.4.3.
I have tried a bunch of guides, but Elasticsearch is so different from version to version that I can't find a solution. I need to connect to a remote server via HTTPS with a username and password.
pom:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
</dependency>
Config:
@Configuration
@EnableElasticsearchRepositories(basePackages = "path.to.elastic")
@ComponentScan(basePackages = {"path.to.elastic"})
public class ElasticConfiguration {

    private final int port = 1234;

    @Bean
    public Client client() throws Exception {
        Settings settings = Settings.builder()
                .put("cluster.name", "test")
                .put("xpack.security.user", "login:pass")
                .put("xpack.security.transport.ssl.enabled", "true")
                .build();
        return new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(InetAddress.getByName("bah.elk1.com"), port))
                .addTransportAddress(new TransportAddress(InetAddress.getByName("bah.elk2.com"), port))
                .addTransportAddress(new TransportAddress(InetAddress.getByName("bah.elk3.com"), port));
    }

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() throws Exception {
        return new ElasticsearchTemplate(client());
    }
}
Repository:
@Repository
public interface ElkThingsRepository extends ElasticsearchRepository<Things, Long> {
    List<Things> findByEarNameAndName(String earName, String thingName);
}
Service interface:
public interface ElkThingsService {
    List<Things> simpleGet(String thingName);
}
Service impl:
@Service
public class ElkThingsServiceImpl implements ElkThingsService {

    private final ElkThingsRepository elkThingsRepository;

    @Autowired
    public ElkThingsServiceImpl(ElkThingsRepository elkThingsRepository) {
        this.elkThingsRepository = elkThingsRepository;
    }

    public List<Things> simpleGet(String thingName) {
        return elkThingsRepository.findByEarNameAndName(AppInfo.SUBSYSTEM_CODE, thingName);
    }
}
Things:
@Data
@Document(indexName = "things-*", type = "things")
public class Things {

    @Id
    private Long id;
    private String earName;
    private String name;
}
Now I get this exception:
java.lang.IllegalArgumentException: unknown setting [xpack.security.user] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
When I add spring-data-elasticsearch 4.0.0 and try to use RestHighLevelClient, I get a NoClassDefFoundError while creating the RestHighLevelClient bean.
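For reference, with a 6.x TransportClient the xpack.security.* settings are only understood by the X-Pack flavour of the client, which is why the plain PreBuiltTransportClient rejects them. A minimal sketch of that variant, assuming the org.elasticsearch.client:x-pack-transport artifact (in a version matching 6.4.3) is on the classpath:
import java.net.InetAddress;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.xpack.client.PreBuiltXPackTransportClient;

// xpack.security.user carries "user:password" for basic auth
TransportClient client = new PreBuiltXPackTransportClient(Settings.builder()
        .put("cluster.name", "test")
        .put("xpack.security.user", "login:pass")
        .put("xpack.security.transport.ssl.enabled", true)
        .build())
        .addTransportAddress(new TransportAddress(InetAddress.getByName("bah.elk1.com"), port));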
I am NOT able to stop a JMS consumer dynamically using a Spring Boot REST endpoint.
The number of consumers stays as is. No exceptions either.
IBM MQ Version: 9.2.0.5
pom.xml
<dependency>
    <groupId>com.ibm.mq</groupId>
    <artifactId>mq-jms-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>
JmsConfig.java
@Configuration
@EnableJms
@Log4j2
public class JmsConfig {

    @Bean
    public MQQueueConnectionFactory mqQueueConnectionFactory() {
        MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
        mqQueueConnectionFactory.setHostName("my-ibm-mq-host.com");
        try {
            mqQueueConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            mqQueueConnectionFactory.setCCSID(1208);
            mqQueueConnectionFactory.setChannel("my-channel");
            mqQueueConnectionFactory.setPort(1234);
            mqQueueConnectionFactory.setQueueManager("my-QM");
        } catch (Exception e) {
            log.error("Exception while creating JMS connection", e);
        }
        return mqQueueConnectionFactory;
    }
}
JmsListenerConfig.java
@Configuration
@Log4j2
public class JmsListenerConfig implements JmsListenerConfigurer {

    @Autowired
    private JmsConfig jmsConfig;

    private Map<String, String> queueMap = new HashMap<>();

    @Bean
    public DefaultJmsListenerContainerFactory mqJmsListenerContainerFactory() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(jmsConfig.mqQueueConnectionFactory());
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setSessionTransacted(true);
        factory.setConcurrency("5");
        return factory;
    }

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        queueMap.put("my-queue-101", "101");
        log.info("queueMap: " + queueMap);
        queueMap.entrySet().forEach(e -> {
            SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
            endpoint.setDestination(e.getKey());
            endpoint.setId(e.getValue());
            try {
                log.info("Reading message....");
                endpoint.setMessageListener(message -> {
                    try {
                        log.info("Received ID: {} Destination {}", message.getJMSMessageID(), message.getJMSDestination());
                    } catch (JMSException ex) {
                        log.error("Exception while reading message - " + ex.getMessage());
                    }
                });
                registrar.setContainerFactory(mqJmsListenerContainerFactory());
            } catch (JMSException ex) {
                log.error("Exception while reading message - " + ex.getMessage());
            }
            registrar.registerEndpoint(endpoint);
        });
    }
}
JmsController.java
@RestController
@RequestMapping("/jms")
@Log4j2
public class JmsController {

    @Autowired
    ApplicationContext context;

    @RequestMapping(value = "/stop", method = RequestMethod.GET)
    public @ResponseBody String haltJmsListener() {
        JmsListenerEndpointRegistry listenerEndpointRegistry = context.getBean(JmsListenerEndpointRegistry.class);
        Set<String> containerIds = listenerEndpointRegistry.getListenerContainerIds();
        log.info("containerIds: " + containerIds);
        // stops all consumers
        listenerEndpointRegistry.stop(); // DOESN'T WORK :(
        // stops a consumer by id; used when there are multiple consumers and you want to stop them individually
        //listenerEndpointRegistry.getListenerContainer("101").stop(); // DOESN'T WORK EITHER :(
        return "Jms Listener stopped";
    }
}
Here is the result that I observed:
Initial # of consumers: 0 (as expected)
After server startup and queue connection, total # of consumers: 1 (as expected)
After hitting the http://localhost:8080/jms/stop endpoint, total # of consumers: 1 (NOT as expected; it should go back to 0)
Am I missing any configuration?
You need to also call shutDown() on the container; see my comment on this answer: DefaultMessageListenerContainer's "isActive" vs "isRunning"
start()/stop() set/reset running; initialize()/shutDown() set/reset active. It depends on what your requirements are. stop() just stops the consumers from getting new messages, but the consumers still exist. shutDown() closes the consumers. Most people call stop + shutdown and then initialize + start to restart. But if you just want to stop consuming for a short time, stop/start is all you need.
You will need to iterate over the containers and cast them to call shutDown().
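For example, a minimal sketch against the registry used in the controller above:
JmsListenerEndpointRegistry registry = context.getBean(JmsListenerEndpointRegistry.class);
registry.getListenerContainers().forEach(container -> {
    container.stop(); // stops the consumers from receiving new messages
    if (container instanceof DefaultMessageListenerContainer) {
        // shutDown() actually closes the consumers; initialize() + start() restarts them
        ((DefaultMessageListenerContainer) container).shutDown();
    }
});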
I have configured ShedLock by adding dependencies to the pom.xml as follows:
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-spring</artifactId>
    <version>4.29.0</version>
</dependency>
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jdbc-template</artifactId>
    <version>4.29.0</version>
</dependency>
Registered the bean:
@Bean
public LockProvider lockProvider(DataSource dataSource, JdbcTemplate jdbcTemplate) {
    // @formatter:off
    return new JdbcTemplateLockProvider(JdbcTemplateLockProvider.Configuration.builder()
            .withTableName("scheduler_lock_vw")
            .withJdbcTemplate(new JdbcTemplate(dataSource))
            .usingDbTime()
            .withLockedByValue("search-service")
            .build());
    // @formatter:on
}
Added scheduler:
@Component
@Slf4j
public class Scheduler {

    @Scheduled(cron = "0 * * * * *")
    @SchedulerLock(name = "elastic_reindex_scheduler", lockAtLeastFor = "PT30S", lockAtMostFor = "PT45S")
    public void shortRunningTask() {
        LockAssert.assertLocked();
        log.info("Start short running task");
    }
}
The issue is that I do not see a record inserted into the table ("scheduler_lock_vw") with the name "elastic_reindex_scheduler".
The line LockAssert.assertLocked(); throws an error: "Unexpected error occurred in scheduled task"
java.lang.IllegalStateException: The task is not locked.
I cannot see all of your code; however, have you added the @EnableSchedulerLock annotation to your configuration class?
For example,
@Configuration
@EnableSchedulerLock(defaultLockAtMostFor = "5m")
...
public class MyConfig {
    ...
}
I have a Spring Boot application and it needs to process some Kafka streaming data. I added an infinite loop to a CommandLineRunner class that will run on startup. In there is a Kafka consumer that can be woken up. I added a shutdown hook with Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup)). Will I run into any problems? Is there a more idiomatic way of doing this in Spring? Should I use @Scheduled instead? The code below is stripped of Kafka-specific implementation details but otherwise complete.
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import java.time.Duration;
import java.util.Properties;
@Component
public class InfiniteLoopStarter implements CommandLineRunner {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public void run(String... args) {
        Consumer<AccountKey, Account> consumer = new KafkaConsumer<>(new Properties());
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
        try {
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                // process records
            }
        } catch (WakeupException e) {
            logger.info("Consumer woken up for exiting.");
        } finally {
            consumer.close();
            logger.info("Closed consumer, exiting.");
        }
    }
}
I'm not sure if you'll run into any issues there, but it's a bit dirty - Spring has really nice built-in support for working with Kafka, so I would lean towards that (there's plenty of documentation on the web, but a nice one is: https://www.baeldung.com/spring-kafka).
You'll need the following dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>
Configuration is as easy as adding the @EnableKafka annotation to a config class and then setting up listener container and ConsumerFactory beans, for example:
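A minimal configuration sketch (the broker address and group id here are placeholders, not values from the original post):
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}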
Once configured, you can set up a consumer easily as follows:
@KafkaListener(topics = "topicName")
public void listenWithHeaders(
        @Payload String message,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
    System.out.println("Received Message: " + message + " from partition: " + partition);
}
The implementation looks OK, but CommandLineRunner is not made for this; CommandLineRunner is meant to run some task on startup only once. From a design perspective it's not very elegant. I would rather use the Spring Integration adapter components for Kafka; see the sketch below. You can find a fuller example here: https://github.com/raphaelbrugier/spring-integration-kafka-sample/blob/master/src/main/java/com/github/rbrugier/esb/consumer/Consumer.java
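A minimal sketch of that adapter style (topic and channel names are illustrative, not taken from the linked sample):
@Bean
public KafkaMessageListenerContainer<String, String> container(ConsumerFactory<String, String> consumerFactory) {
    // message-driven container listening on the given topic
    return new KafkaMessageListenerContainer<>(consumerFactory, new ContainerProperties("topicName"));
}

@Bean
public KafkaMessageDrivenChannelAdapter<String, String> adapter(KafkaMessageListenerContainer<String, String> container) {
    // bridges Kafka records onto a Spring Integration channel
    KafkaMessageDrivenChannelAdapter<String, String> adapter = new KafkaMessageDrivenChannelAdapter<>(container);
    adapter.setOutputChannel(fromKafka());
    return adapter;
}

@Bean
public MessageChannel fromKafka() {
    return new DirectChannel();
}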
To answer my own question: I had a look at Kafka integration libraries like Spring Kafka and Spring Cloud Stream, but the integration with Confluent's Schema Registry is either not finished or not quite clear to me. It's simple enough for primitives, but we need it for typed Avro objects that are validated by the schema registry. I have now implemented a Kafka-agnostic solution, based on the answer at Spring Boot - Best way to start a background thread on deployment.
The final code looks like this:
@Component
public class AccountStreamConsumer implements DisposableBean, Runnable {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private final AccountService accountService;
    private final KafkaProperties kafkaProperties;
    private final Consumer<AccountKey, Account> consumer;

    @Autowired
    public AccountStreamConsumer(AccountService accountService, KafkaProperties kafkaProperties,
                                 ConfluentProperties confluentProperties) {
        this.accountService = accountService;
        this.kafkaProperties = kafkaProperties;

        if (!kafkaProperties.getEnabled()) {
            consumer = null;
            return;
        }

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, confluentProperties.getSchemaRegistryUrl());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurityProtocolConfig());
        props.put(SaslConfigs.SASL_MECHANISM, kafkaProperties.getSaslMechanism());
        props.put(SaslConfigs.SASL_JAAS_CONFIG, PlainLoginModule.class.getName() + " required username=\"" + kafkaProperties.getUsername() + "\" password=\"" + kafkaProperties.getPassword() + "\";");
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getAccountConsumerGroupId());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);

        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(kafkaProperties.getAccountsTopicName()));

        Thread thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        if (!kafkaProperties.getEnabled())
            return;
        logger.debug("Started account stream consumer");
        try {
            //noinspection InfiniteLoopStatement
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                List<Account> accounts = new ArrayList<>();
                records.iterator().forEachRemaining(record -> accounts.add(record.value()));
                if (accounts.size() != 0)
                    accountService.store(accounts);
            }
        } catch (WakeupException e) {
            logger.info("Account stream consumer woken up for exiting.");
        } finally {
            consumer.close();
        }
    }

    @Override
    public void destroy() {
        if (consumer != null)
            consumer.wakeup();
        logger.info("Woke up account stream consumer, exiting.");
    }
}
I was reading through the Spring Integration documentation, thinking that a file download would be pretty simple to implement. Instead, the article provided me with many different components that seem to over-qualify my needs:
The FTP Inbound Channel Adapter is a special listener that will connect to the FTP server and will listen for the remote directory events (e.g., new file created) at which point it will initiate a file transfer.
The streaming inbound channel adapter produces messages with payloads of type InputStream, allowing files to be fetched without writing to the local file system.
Let's say I have a SessionFactory declared as follows:
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost("localhost");
    sf.setPort(20);
    sf.setUsername("foo");
    sf.setPassword("foo");
    return new CachingSessionFactory<>(sf);
}
How do I go from here to downloading a single file on a given URL?
You can use an FtpRemoteFileTemplate...
@SpringBootApplication
public class So44194256Application implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(So44194256Application.class, args);
    }

    @Bean
    public DefaultFtpSessionFactory ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("10.0.0.3");
        sf.setUsername("ftptest");
        sf.setPassword("ftptest");
        return sf;
    }

    @Bean
    public FtpRemoteFileTemplate template(DefaultFtpSessionFactory sf) {
        return new FtpRemoteFileTemplate(sf);
    }

    @Autowired
    private FtpRemoteFileTemplate template;

    @Override
    public void run(String... args) throws Exception {
        template.get("foo/bar.txt",
                inputStream -> FileCopyUtils.copy(inputStream,
                        new FileOutputStream(new File("/tmp/bar.txt"))));
    }
}
To add to @garyrussell's answer:
With the FTPS protocol, if you are behind a firewall, you might encounter a
Host attempting data connection x.x.x.x is not the same as server y.y.y.y error (as described here). The reason is that the FtpSession instance returned from DefaultFtpsSessionFactory does a remote verification test by default, i.e. it runs in an "active" mode.
The solution is to disable this verification on the FtpSession instance when you create the DefaultFtpsSessionFactory:
DefaultFtpsSessionFactory defaultFtpsSessionFactory() {
    DefaultFtpsSessionFactory defaultFtpSessionFactory = new DefaultFtpsSessionFactory() {
        @Override
        public FtpSession getSession() {
            FtpSession ftpSession = super.getSession();
            ftpSession.getClientInstance().setRemoteVerificationEnabled(false);
            return ftpSession;
        }
    };
    defaultFtpSessionFactory.setHost("host");
    defaultFtpSessionFactory.setPort(xx);
    defaultFtpSessionFactory.setUsername("username");
    defaultFtpSessionFactory.setPassword("password");
    defaultFtpSessionFactory.setFileType(2); // binary data transfer
    return defaultFtpSessionFactory;
}
The following code block might be helpful:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true) {
        {
            setHost("localhost");
            setPort(20);
            setUser("foo");
            setPassword("foo");
            setAllowUnknownKeys(true);
        }
    };
    return new CachingSessionFactory<LsEntry>(factory);
}

@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory()) {
        {
            setDeleteRemoteFiles(true);
            setRemoteDirectory("/remote");
            setFilter(new SftpSimplePatternFileListFilter("*.txt"));
        }
    };
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "600"))
public MessageSource<File> sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource messageSource = new SftpInboundFileSynchronizingMessageSource(
            sftpInboundFileSynchronizer()) {
        {
            setLocalDirectory(new File("/temp"));
            setAutoCreateLocalDirectory(true);
            setLocalFilter(new AcceptOnceFileListFilter<File>());
        }
    };
    return messageSource;
}
obtained from https://github.com/edtoktay/spring-integraton