I couldn't find any reference online for implementing Kafka in Liferay.
Below are the requirements that need to be implemented:
Push a message to a Kafka topic.
Poll and receive messages from a Kafka topic.
I tried the code below, but I am unable to receive messages after pushing them from the Kafka console producer.
Dependencies:
compileInclude "org.springframework.kafka:spring-kafka:2.9.2"
compileInclude "org.apache.kafka:kafka-streams:3.3.1"
Code:
import org.apache.kafka.common.serialization.Serdes;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.EnableKafkaStreams;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import java.util.HashMap;
import java.util.Map;
import static org.apache.kafka.streams.StreamsConfig.APPLICATION_ID_CONFIG;
import static org.apache.kafka.streams.StreamsConfig.BOOTSTRAP_SERVERS_CONFIG;
import static org.apache.kafka.streams.StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG;
import static org.apache.kafka.streams.StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG;
@Configuration
@EnableKafka
@EnableKafkaStreams
public class KafkaConfig {
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
Map<String, Object> props = new HashMap<>();
props.put(APPLICATION_ID_CONFIG, "streams-app");
props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
return new KafkaStreamsConfiguration(props);
}
}
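Note that the bean above only configures Kafka Streams; @KafkaListener endpoints are driven by a consumer factory and a listener container factory, which are not registered anywhere here. A minimal sketch of the missing consumer-side beans, to be added inside KafkaConfig (the group id is an assumption):

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

// Plain consumer configuration; the Streams configuration above does not feed @KafkaListener.
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "liferay-group"); // assumed group id
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

// @KafkaListener looks up a bean named "kafkaListenerContainerFactory" by default.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}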
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import org.osgi.service.component.annotations.Component;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.scheduling.annotation.Async;
import org.springframework.transaction.annotation.Transactional;
@Component(immediate = true)
public class KafkaMessageReceiver {
public static Log _log = LogFactoryUtil.getLog(KafkaMessageReceiver.class);
@Async
@KafkaListener(
topics = "liferay-topic",
concurrency = "2"
)
@Transactional
public void handleMessage(String payload) {
_log.info(payload);
}
}
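For the first requirement (pushing messages), a minimal producer sketch using spring-kafka's KafkaTemplate, with the same broker address as the config above (all bean names are illustrative):

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(props);
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

// Usage: kafkaTemplate.send("liferay-topic", "hello from Liferay");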
I am trying to build a Spring Boot application integrated with S3 and SQS. I have an AppConfig, S3Config, and SqsConfig, and I autowire the S3Config and SqsConfig inside the AppConfig.
S3Config declares a bean s3client which I use to perform AWS S3 operations. I am getting an error where the autowiring fails due to multiple bean definitions being available, but I'm not sure what part is causing the duplication. Attaching the snippets.
Note that I create a different definition of the s3client bean depending on the profile (local development vs. server deployment).
AppConfig
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
@Configuration
@Slf4j
@Component
@RefreshScope
@Getter
public class AppConfig {
@Autowired
S3Config s3Config;
@Autowired
SqsConfig sqsConfig;
}
S3Config
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;
@Configuration
@Slf4j
@Component
@RefreshScope
@Getter
public class S3Config {
@Value("${aws.s3.roleArn}")
private String s3RoleArn;
@Value("${aws.s3.roleSessionName}")
private String s3RoleSessionName;
@Value("${aws.s3.region}")
private String s3region;
@Value("${aws.s3.inputBucketName}")
private String inputBucketName;
@Value("${aws.s3.outputBucketName}")
private String outputBucketName;
@Bean(name = "s3client")
@Profile("!local")
public AmazonS3 AmazonS3Client() {
log.info(String.format("validating_config_repo_values: s3RoleArn=%s s3RoleSessionName=%s s3region=%s", s3RoleArn, s3RoleSessionName, s3region));
AmazonS3 s3client = null;
try {
STSAssumeRoleSessionCredentialsProvider roleCredentialsProvider = new STSAssumeRoleSessionCredentialsProvider.Builder(
s3RoleArn, s3RoleSessionName).build();
AmazonS3ClientBuilder amazonS3ClientBuilder = AmazonS3ClientBuilder.standard()
.withCredentials(roleCredentialsProvider);
s3client = amazonS3ClientBuilder.withRegion(s3region).build();
} catch (Exception e) {
log.error(String.format("exception_while_creating_AmazonS3Client : %s", e));
}
return s3client;
}
#Bean(name = "s3client")
#Profile("local")
public AmazonS3 localhostAmazonS3Client() {
AmazonS3 s3client = null;
try {
s3client = AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).withCredentials(new DefaultAWSCredentialsProviderChain()).build();
} catch (Exception e) {
log.error(String.format("exception_while_creating_AmazonS3Client : %s", e));
}
return s3client;
}
}
S3DatasetDownloadServiceImpl
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.io.*;
import java.net.URI;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
@Service
@Slf4j
public class S3DatasetDownloadServiceImpl implements DatasetDownloadService {
@Autowired
AmazonS3 s3Client;
@Autowired
AppConfig appConfig;
public String downloadDataset(URI uri) throws IOException {
String key = uri.getPath();
String outputBucketName = appConfig.getS3Config().getOutputBucketName();
log.info(String.format("client_downloading_file : key=%s from bucket=%s",key,outputBucketName));
try {
} catch (Exception e) {
}
}
}
Kill me. Found the error: I had named the bean s3client (lowercase c) whereas on autowiring I was using AmazonS3 s3Client;
Fixing the casing worked!
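For what it's worth, making the by-name match explicit with @Qualifier avoids this pitfall; a short sketch (the qualifier must match the @Bean name exactly, including case):

import org.springframework.beans.factory.annotation.Qualifier;

@Autowired
@Qualifier("s3client") // must match the @Bean name character-for-character
AmazonS3 s3Client;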
Sample YAML looks like:
"mappings":
  "name": "foo"
  "aliases": "abc"
I am trying to implement it using a PropertySourceFactory, but have been unsuccessful.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@Configuration
//@ConfigurationProperties
@PropertySource(value = "classpath:order_config.yml", factory = YamlPropertySourceFactory.class)
public class ValidatorConfig {
@Value("${yaml.mappings}")
private Map<String, String> mappings = new HashMap<>();
@Value("${yaml.mappings.name}")
private String create;
public String getValidatorBean(String tenant, String requestType) {
System.out.println(mappings);
return "yes";
}
}
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.support.EncodedResource;
import org.springframework.core.io.support.PropertySourceFactory;
import java.io.IOException;
import java.util.Properties;
public class YamlPropertySourceFactory implements PropertySourceFactory {
@Override
public PropertySource<?> createPropertySource(String name, EncodedResource encodedResource)
throws IOException {
YamlPropertiesFactoryBean factory = new YamlPropertiesFactoryBean();
factory.setResources(encodedResource.getResource());
Properties properties = factory.getObject();
return new PropertiesPropertySource(encodedResource.getResource().getFilename(), properties);
}
}
I have tried a bunch of approaches using @Value and @ConfigurationProperties, but have been unsuccessful.
Can we implement it using YamlMapFactoryBean? I have not been able to find a working demonstration of it.
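YamlMapFactoryBean loads the document as a nested Map instead of flattening it to properties (the flattening is why keys like ${yaml.mappings} never resolve: YamlPropertiesFactoryBean produces mappings.name and mappings.aliases, with no yaml. prefix). A minimal sketch, assuming order_config.yml is on the classpath; the class and bean names are made up for illustration:

import org.springframework.beans.factory.config.YamlMapFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import java.util.Map;

@Configuration
public class OrderConfigYaml {
    // The returned Map mirrors the YAML nesting, e.g.
    // ((Map<String, Object>) orderConfigMap().get("mappings")).get("name") yields "foo".
    @Bean
    public Map<String, Object> orderConfigMap() {
        YamlMapFactoryBean factory = new YamlMapFactoryBean();
        factory.setResources(new ClassPathResource("order_config.yml"));
        return factory.getObject();
    }
}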
I have a Spring Boot Kafka Streams Maven app. I use spring-boot-starter-parent 2.4.4 for my Spring Boot dependencies and kafka-streams 2.7.0.
I am stuck at running tests with
java.lang.NullPointerException
when trying to load my application configuration from either
resources/application.yml, test/resources/application-test.yml, or test/resources/application.yml.
I have a Config class with these annotations and getters and setters for the fields, which are defined with the same names as in the application.yml:
package com.acme.rtc.configuration;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
#ConfigurationProperties(prefix = "topics")
#Component
#Configuration
public class ConfigProps {
private String MATRIXX_ADJ_EVENT_TOPIC;
private String OUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC;
private String OUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC;
private String EVENTS_NO_MVNO;
public void setMATRIXX_ADJ_EVENT_TOPIC(String MATRIXX_ADJ_EVENT_TOPIC) {
this.MATRIXX_ADJ_EVENT_TOPIC = MATRIXX_ADJ_EVENT_TOPIC;
}
public void setOUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC(String OUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC) {
this.OUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC = OUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC;
}
public void setOUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC(String OUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC) {
this.OUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC = OUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC;
}
public String getEVENTS_NO_MVNO() {
return EVENTS_NO_MVNO;
}
public void setEVENTS_NO_MVNO(String EVENTS_NO_MVNO) {
this.EVENTS_NO_MVNO = EVENTS_NO_MVNO;
}
public String getMATRIXX_ADJ_EVENT_TOPIC() {
return MATRIXX_ADJ_EVENT_TOPIC;
}
public String getOUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC() {
return OUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC;
}
public String getOUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC() {
return OUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC;
}
}
I am autowiring this class in my test and app classes,
@Autowired
ConfigProps cp;
and trying to access fields using cp.getBootstrapServerHost(), but this resolves to a NullPointerException in my test class, while it resolves properly in my application class...
My test class looks like this
package distinct;
import com.acme.rtc.configuration.ConfigProps;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.*;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import com.acme.rtc.configuration.KafkaConfiguration;
import com.acme.rtc.configuration.TopologyConfiguration;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;
import java.util.List;
import static java.util.Collections.singletonList;
import static java.util.stream.Collectors.toList;
import static org.junit.jupiter.api.Assertions.assertEquals;
@SpringBootTest
@ContextConfiguration(classes = TopologyConfiguration.class)
@SpringJUnitConfig
public class TestWithTopologyTestDriver {
private TestInputTopic<String, String> inputTopicWrong;
private TestOutputTopic<String, String> outputTopicWrong;
private TestInputTopic<String, String> inputTopicRight;
private TestOutputTopic<String, String> outputTopicRight;
private TopologyTestDriver topologyTestDriver;
@Autowired
ConfigProps configProps;
@BeforeEach
public void setUp() {
KafkaProperties properties = new KafkaProperties();
properties.setBootstrapServers(singletonList("localhost:9092"));
KafkaStreamsConfiguration config = new KafkaConfiguration(properties).getStreamsConfig();
StreamsBuilder sb = new StreamsBuilder();
Topology topology = new TopologyConfiguration().createTopology(sb);
topologyTestDriver = new TopologyTestDriver(topology, config.asProperties());
inputTopicWrong =
topologyTestDriver.createInputTopic(configProps.getMATRIXX_ADJ_EVENT_TOPIC(), new StringSerializer(),
new StringSerializer());
outputTopicWrong =
topologyTestDriver.createOutputTopic(configProps.getOUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC(), new StringDeserializer(),
new StringDeserializer());
inputTopicRight =
topologyTestDriver.createInputTopic(configProps.getMATRIXX_ADJ_EVENT_TOPIC(), new StringSerializer(),
new StringSerializer());
outputTopicRight =
topologyTestDriver.createOutputTopic(configProps.getOUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC(), new StringDeserializer(),
new StringDeserializer());
}
@AfterEach
public void tearDown() {
topologyTestDriver.close();
}
@Test
void wrongDistinctTopology() {
testTopology(inputTopicWrong, outputTopicWrong);
}
}
where TopologyConfiguration is my application's topology configuration and has this signature:
package com.acme.rtc.configuration;
import com.fasterxml.jackson.databind.JsonNode;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.stereotype.Component;
@Configuration
@ConfigurationProperties(prefix = "topics")
@Component
@RequiredArgsConstructor
public class TopologyConfiguration {
@Autowired
Environment env;
@Autowired
ConfigProps configProps;
private void acmeStreamsTopology(StreamsBuilder streamsBuilder) {
Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
Serializer<JsonNode> jsonSerializer = new JsonSerializer();
Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);
System.out.println("ConfigProps.getMattrix: "+configProps.getMATRIXX_ADJ_EVENT_TOPIC());
KStream<String, String> inputStream =
streamsBuilder.stream(configProps.getMATRIXX_ADJ_EVENT_TOPIC(), Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, String>[] branches = inputStream.branch(
(key, value)-> value.contains("KGN"),
(key, value)-> value.contains("LEB"),
(key, value)->true);
branches[0].to(configProps.getOUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC());
branches[1].to(configProps.getOUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC());
branches[2].to(configProps.getEVENTS_NO_MVNO());
}
@Bean
public Topology createTopology(StreamsBuilder streamsBuilder) {
acmeStreamsTopology(streamsBuilder);
return streamsBuilder.build();
}
}
My KafkaConfiguration class
package com.acme.rtc.configuration;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import java.util.HashMap;
import java.util.Map;
@Configuration
@RequiredArgsConstructor
public class KafkaConfiguration {
public static final String APP_ID = "acme-stream-rtc";
private final KafkaProperties kafkaProperties;
@Autowired
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration getStreamsConfig() {
Map<String, Object> props = new HashMap<>();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, APP_ID);
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS,false);
KafkaStreamsConfiguration streamsConfig = new KafkaStreamsConfiguration(props);
return streamsConfig;
}
}
My application.yml has the right syntax etc.
spring:
  kafka:
    bootstrap-servers: localhost:9092
    json:
      value:
        default:
          type: true
    streams:
      properties:
        default:
          value:
            serde: org.springframework.kafka.support.serializer.JsonSerde
    admin:
      security:
        protocol: SSL
    ssl:
      trust-store-location: ${TRUSTSTORE_LOCATION}
      trust-store-password: ${TRUSTSTORE_PASSWORD}
      key-store-location: ${KEYSTORE_LOCATION}
      key-store-password: ${KEYSTORE_PASSWORD}
      key-password: ${KEY_PASSWORD}
topics:
  MATRIXX_ADJ_EVENT_TOPIC: input-matrixx-adj-event
  OUTPUT_MNVO_KGN_ADJ_EVENT_TOPIC: output-KGN-adj-event
  OUTPUT_MNVO_LEB_ADJ_EVENT_TOPIC: output-LEB-adj-event
  EVENTS_NO_MVNO: events-no-mvno-spec
My main class
package com.acme.rtc;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.EnableKafkaStreams;
@SpringBootApplication
@EnableKafkaStreams
public class StreamProcessing {
public static void main(String[] args) {
SpringApplication.run(StreamProcessing.class, args);
}
}
I am not sure if I am missing any context when autowiring my ConfigProps class, or if I need further annotations on my test class.
For JUnit4 you need @RunWith(SpringJUnit4ClassRunner.class) alongside the @ContextConfiguration.
For JUnit5, use @SpringJUnitConfig.
For proper loading of properties, though, you need @SpringBootTest.
Boot 2.4 uses JUnit5.
And you should not have @ConfigurationProperties on the test.
EDIT
I just tested it with no problems.
@Configuration
public class Config {
@Bean
String str() {
return "str";
}
}
#ConfigurationProperties(prefix = "foo")
#Component
public class MyProps {
String bar;
public String getBar() {
return this.bar;
}
public void setBar(String bar) {
this.bar = bar;
}
@Override
public String toString() {
return "MyProps [bar=" + this.bar + "]";
}
}
@SpringBootApplication
public class So67078244Application {
public static void main(String[] args) {
SpringApplication.run(So67078244Application.class, args);
}
}
@SpringBootTest
class So67078244ApplicationTests {
@Autowired
MyProps props;
@Test
void contextLoads() {
System.out.println(this.props);
}
}
foo.bar=baz
MyProps [bar=baz]
I have a React-based application on an AWS EC2 instance and a Java Spring Boot application on another AWS EC2 instance. I need to send POST requests via AWS SQS from the React application. Once the messages are sent, I need to retrieve them in the Java Spring application hosting the API endpoints. Guidance on how to accomplish the send and receive operations would be appreciated.
I used the below code for fetching messages from SQS.
MessagingConfiguration.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.es.spring.messaging.MessageReceiver;
@Configuration
public class MessagingConfiguration {
@Value("${queue.endpoint}")
private String endpoint;
@Value("${queue.name}")
private String queueName;
@Autowired
private MessageReceiver messageReceiver;
@Bean
public DefaultMessageListenerContainer jmsListenerContainer() {
SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
.withAWSCredentialsProvider(awsCredentialsProvider)
.withEndpoint(endpoint)
.withNumberOfMessagesToPrefetch(10).build();
DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
dmlc.setConnectionFactory(sqsConnectionFactory);
dmlc.setDestinationName(queueName);
dmlc.setMessageListener(messageReceiver);
return dmlc;
}
@Bean
public JmsTemplate createJMSTemplate() {
SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
.withAWSCredentialsProvider(awsCredentialsProvider)
.withEndpoint(endpoint)
.withNumberOfMessagesToPrefetch(10).build();
JmsTemplate jmsTemplate = new JmsTemplate(sqsConnectionFactory);
jmsTemplate.setDefaultDestinationName(queueName);
jmsTemplate.setDeliveryPersistent(false);
return jmsTemplate;
}
private final AWSCredentialsProvider awsCredentialsProvider = new AWSCredentialsProvider() {
@Override
public AWSCredentials getCredentials() {
return new BasicAWSCredentials("accessKey", "Secretkey");
}
@Override
public void refresh() {
}
};
}
MessageReceiver.java
import java.io.PrintWriter;
import java.io.StringWriter;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import com.fasterxml.jackson.databind.ObjectMapper;
@Component
public class MessageReceiver implements MessageListener {
Logger LOG = LoggerFactory.getLogger(MessageReceiver.class);
@Override
public void onMessage(Message message) {
try {
TextMessage textMessage = (TextMessage) message;
System.out.println("message:"+textMessage.getText());
CustomClass activeMq =new ObjectMapper().readValue(textMessage.getText(),CustomClass.class);
LOG.info("Application : Active Mq : {}",activeMq);
} catch (Exception e) {
e.printStackTrace();
}
}
}
Sending a message to SQS:
AmazonSQS awsSqs = new AwsSqsClient().getAWSSqsclient();
awsSqs.sendMessage(new SendMessageRequest().withDelaySeconds(60)
.withQueueUrl("https://sqs-url/TestQueue").withMessageBody(input));
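AwsSqsClient here is a custom wrapper that is not shown; a minimal sketch of building the underlying client with the AWS SDK v1 (region is an assumption, credentials come from the default provider chain):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

AmazonSQS awsSqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_1) // assumed region
        .build();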
Hope the above answer helps you.
I want to confirm rollback behavior, but I'm having trouble getting it to work.
I have a postgres DB with the following tables:
select * from cat;
pkid | name
--------------------------------------+-------
c75d6e8b-6aff-4214-ad45-d17db254857b | Abbey
select * from toy;
pkid | name | description
--------------------------------------+---------------+-------------------------------------------
dda72782-a1aa-4c0e-9cf6-a408db58a1ae | Laser pointer | Red laser.
f4d7e67d-1b26-4d8d-bb98-1a5c69f3cb49 | String | Colored string attached to a plastic rod.
select * from cattoy;
pkid | fkcat | fktoy
------+-------+-------
I have created a CatService implementation with the idea that you can create a cat, create a toy, and associate that toy with that cat. If any one of the three operations fails, I want them all to roll back.
DefaultCatService.java:
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.dao.CatDao;
import com.bonkeybee.dao.CatToyDao;
import com.bonkeybee.dao.ToyDao;
import com.bonkeybee.querydsl.Cat;
import com.bonkeybee.querydsl.Cattoy;
import com.bonkeybee.querydsl.Toy;
@Service
public class DefaultCatService implements CatService {
private static final Logger LOG = LoggerFactory.getLogger(DefaultCatService.class);
@Inject
private CatDao catDao;
@Inject
private CatToyDao catToyDao;
@Inject
private ToyDao toyDao;
@Override
@Transactional
public UUID createCat(final Cat cat){
LOG.debug("Creating cat");
UUID catPkid = catDao.createCat(cat);
Toy toy = new Toy();
toy.setName("Box");
toy.setDescription("Cardboard box.");
toy.setPkid(toyDao.createToy(toy));
Cattoy catToy = new Cattoy();
catToy.setFkcat(catPkid);
catToy.setFktoy(toy.getPkid());
catToyDao.createCatToy(catToy);
return catPkid;
}
}
I have created DAOs and their implementations for each table with basic CRUD operations.
CatDaoJdbc.java:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.Cat;
import com.bonkeybee.querydsl.QCat;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimplePath;
import com.querydsl.core.types.dsl.StringPath;
import com.querydsl.sql.SQLQueryFactory;
@Repository
public class CatDaoJdbc implements CatDao {
private static final QCat CAT = QCat.cat;
private static final SimplePath<Object> CAT_PKID = CAT.pkid;
private static final StringPath CAT_NAME = CAT.name;
@Inject
private SQLQueryFactory sqlQueryFactory;
@Override
public UUID createCat(final Cat cat) {
UUID catPkid = UUID.randomUUID();
sqlQueryFactory.insert(CAT)
.columns(CAT_PKID, CAT_NAME)
.values(catPkid, cat.getName())
.execute();
return catPkid;
}
}
ToyDaoJdbc.java:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.QToy;
import com.bonkeybee.querydsl.Toy;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimplePath;
import com.querydsl.core.types.dsl.StringPath;
import com.querydsl.sql.SQLQueryFactory;
@Repository
public class ToyDaoJdbc implements ToyDao {
private static final QToy TOY = QToy.toy;
private static final SimplePath<Object> TOY_PKID = TOY.pkid;
private static final StringPath TOY_NAME = TOY.name;
private static final StringPath TOY_DESCRIPTION = TOY.description;
@Inject
private SQLQueryFactory sqlQueryFactory;
@Override
public UUID createToy(Toy toy) {
UUID toyPkid = UUID.randomUUID();
sqlQueryFactory.insert(TOY)
.columns(TOY_PKID, TOY_NAME, TOY_DESCRIPTION)
.values(toyPkid, toy.getName(), toy.getDescription())
.execute();
return toyPkid;
}
}
CatToyDaoJdbc.java:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import javax.inject.Inject;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.Cattoy;
import com.bonkeybee.querydsl.QCat;
import com.bonkeybee.querydsl.QCattoy;
import com.bonkeybee.querydsl.QToy;
import com.bonkeybee.querydsl.Toy;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;
import com.querydsl.core.types.dsl.SimplePath;
import com.querydsl.sql.SQLQueryFactory;
@Repository
public class CatToyDaoJdbc implements CatToyDao {
@Override
public UUID createCatToy(Cattoy catToy) {
throw new RuntimeException("Simulating exception");
}
}
Main.java:
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.transaction.annotation.Transactional;
import com.bonkeybee.querydsl.Cat;
import com.bonkeybee.querydsl.Cattoy;
import com.bonkeybee.querydsl.Toy;
import com.bonkeybee.service.CatService;
import com.bonkeybee.service.CatToyService;
import com.bonkeybee.service.ToyService;
public class Main {
private static final String DORA = "Dora"; // constant value assumed; not shown in the original snippet
public static void main(String[] args) {
try (ConfigurableApplicationContext applicationContext = new AnnotationConfigApplicationContext(ApplicationConfiguration.class)) {
CatService catService = applicationContext.getBean(CatService.class);
Cat newCat = new Cat();
newCat.setName(DORA);
newCat.setPkid(catService.createCat(newCat));
}
}
}
ApplicationConfiguration.java:
import java.beans.PropertyVetoException;
import javax.inject.Inject;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import com.mchange.v2.c3p0.ComboPooledDataSource;
import com.querydsl.sql.PostgreSQLTemplates;
import com.querydsl.sql.SQLQueryFactory;
@Configuration
@EnableTransactionManagement
@ComponentScan("com.bonkeybee")
public class ApplicationConfiguration {
@Bean
public DataSource getDataSource() throws PropertyVetoException {
ComboPooledDataSource comboPooledDataSource = new ComboPooledDataSource();
comboPooledDataSource.setDriverClass(JDBC_DRIVER);
comboPooledDataSource.setJdbcUrl(JDBC_URL);
comboPooledDataSource.setUser(USER);
comboPooledDataSource.setPassword(PASSWORD);
comboPooledDataSource.setMinPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setInitialPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setMaxPoolSize(MAX_POOL_SIZE);
return comboPooledDataSource;
}
@Bean
@Inject
public PlatformTransactionManager getPlatformTransactionManager(final DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
@Bean
public com.querydsl.sql.Configuration getQueryDslConfiguration() {
return new com.querydsl.sql.Configuration(PostgreSQLTemplates.builder().build());
}
@Bean
@Inject
public SQLQueryFactory getSQLQueryFactory(final com.querydsl.sql.Configuration configuration, final DataSource dataSource) {
return new SQLQueryFactory(configuration, dataSource);
}
}
When Main calls catService.createCat(), the cat and toy are created, and then a RuntimeException is thrown as expected; however, inspecting the tables afterward shows the new cat and new toy were created instead of being rolled back. Please SO, help me ensure no cat goes toyless >:3
EDIT: Adding imports as requested
Solved it after more searching; there were two configuration issues.
First: the transaction manager bean Spring looks for is named "transactionManager" by default; otherwise you have to set the name explicitly.
Second: I added a dependency on the "querydsl-sql-spring" artifact and changed my SQLQueryFactory to use a SpringConnectionProvider instead of the DataSource bean (found in this example from the querydsl people). Below is the final configuration:
@Configuration
@EnableTransactionManagement
@ComponentScan("com.bonkeybee")
public class ApplicationConfiguration {
@Bean
public DataSource getDataSource() throws PropertyVetoException {
ComboPooledDataSource comboPooledDataSource = new ComboPooledDataSource();
comboPooledDataSource.setDriverClass(JDBC_DRIVER);
comboPooledDataSource.setJdbcUrl(JDBC_URL);
comboPooledDataSource.setUser(USER);
comboPooledDataSource.setPassword(PASSWORD);
comboPooledDataSource.setMinPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setInitialPoolSize(MIN_POOL_SIZE);
comboPooledDataSource.setMaxPoolSize(MAX_POOL_SIZE);
return comboPooledDataSource;
}
@Inject
@Bean(name = "transactionManager")
public PlatformTransactionManager getPlatformTransactionManager(final DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
@Bean
public com.querydsl.sql.Configuration getQueryDslConfiguration() {
return new com.querydsl.sql.Configuration(PostgreSQLTemplates.builder().build());
}
@Inject
@Bean
public SQLQueryFactory getSQLQueryFactory(final com.querydsl.sql.Configuration configuration, final DataSource dataSource) {
// Provider is javax.inject.Provider and Connection is java.sql.Connection;
// SpringConnectionProvider comes from the querydsl-sql-spring artifact.
Provider<Connection> provider = new SpringConnectionProvider(dataSource);
return new SQLQueryFactory(configuration, provider);
}
}
Thanks querydsl people for such a cool lib.
Since there is more than one database request involved in your transaction, you have to specify
the persistence context as PersistenceContextType.EXTENDED, which means that it can survive multiple requests.
You have to have an entity manager and then get the transaction object from it.
After getting the transaction, begin it, do all your database operations, and then commit. Here is a sample:
EntityManagerFactory emf = ...
EntityManager em = emf.createEntityManager(PersistenceContextType.EXTENDED);
Magazine mag1 = em.find(Magazine.class, magId);
Magazine mag2 = em.find(Magazine.class, magId);
em.getTransaction().begin();
// ... database operations ...
em.getTransaction().commit();