I was working with the Karate framework to test my REST services and it works great. However, I now have a service that consumes a message from a Kafka topic, persists it in Mongo, and finally notifies Kafka again.
I wrote a Java producer in my Karate project; it is called from JS so it can be used by a feature.
I also have a consumer to check the message.
Feature:
* def kafkaProducer = read('../js/KafkaProducer.js')
JS:
function(kafkaConfiguration){
var Producer = Java.type('x.y.core.producer.Producer');
var producer = new Producer(kafkaConfiguration);
return producer;
}
Java:
public class Producer {
private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);
private static final String KEY = "C636E8E238FD7AF97E2E500F8C6F0F4C";
private KafkaConfiguration kafkaConfiguration;
private ObjectMapper mapper;
private AESEncrypter aesEncrypter;
public Producer(KafkaConfiguration kafkaConfiguration) {
kafkaConfiguration.getProperties().put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
kafkaConfiguration.getProperties().put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
this.kafkaConfiguration = kafkaConfiguration;
this.mapper = new ObjectMapper();
this.aesEncrypter = new AESEncrypter(KEY);
}
public String produceMessage(String payload) {
// Just notify Kafka with the payload and return the id of the payload
}
}
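For context, here is a minimal sketch of what produceMessage might look like, purely for illustration, assuming a hypothetical topic name my-topic, an AESEncrypter.encrypt(String) method returning byte[], and a generated UUID as the returned id (none of these come from the original code):
public String produceMessage(String payload) {
    // sketch only: topic name, encrypter API and id scheme are assumptions
    String id = UUID.randomUUID().toString();
    try (KafkaProducer<String, byte[]> kafkaProducer =
            new KafkaProducer<>(kafkaConfiguration.getProperties())) {
        byte[] value = aesEncrypter.encrypt(payload);
        kafkaProducer.send(new ProducerRecord<>("my-topic", id, value));
        kafkaProducer.flush();
    }
    return id;
}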
Other class
public class KafkaConfiguration {
private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);
private Properties properties;
public KafkaConfiguration(String host) {
try {
properties = new Properties();
properties.put(BOOTSTRAP_SERVERS_CONFIG, host);
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "karate-integration-test");
properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "offset123");
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
} catch (Exception e) {
LOGGER.error("Fail creating the consumer...", e);
throw e;
}
}
public Properties getProperties() {
return properties;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
}
I would like to use the producer code with an annotation, the way Cucumber does, for example:
#Then("^Notify kafka with payload (-?\\d+)$")
public void validateResult(String payload) throws Throwable {
new Producer(kafkaConfiguration).produceMessage(payload);
}
and in the feature use
Then Notify kafka with payload "{example:value}"
I want to do this because I want to reuse that code in a base project so it can be included in other projects.
If annotations don't work, maybe you can suggest another way to do it.
The answer is simple, use normal Java / Maven concepts. Move the common Java code to the "main" packages (src/main/java). Now all you need to do is build a JAR and add it as a dependency to any Karate project.
The last piece of the puzzle is this: use the classpath: prefix to refer to any features or JS files in the JAR. Karate will be able to pick them up.
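For example, a feature in the consuming project could then do something like this (a sketch; the package path inside the JAR is an assumption):
Feature: reuse the shared Kafka producer
Scenario: produce a message
* def kafkaProducer = read('classpath:com/mycompany/kafka/KafkaProducer.js')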
EDIT: Sorry Karate does not support Cucumber or step-definitions. It has a much simpler approach. Please read this for details: https://github.com/intuit/karate/issues/398
Related
I'm working on a Spring Boot project that follows a microservice architecture, and I use Kafka as an event bus to exchange data between some of the services. I also have JUnit tests: some test parts of the application that don't require the bus, and others require it and use an embedded Kafka broker.
The problem is that when I launch all my tests they take a long time and they fail, because each of them tries to connect to the embedded Kafka broker (connection not available) even though they don't need the Kafka bus to do their job.
Is it possible to disable the loading of the Kafka components for these tests and only enable it for the ones that require it?
This is how I usually write my JUnit test classes so that they don't connect to the Kafka brokers for each test.
Mock the REST API, if your Kafka client (producer/consumer) is integrated with a REST API:
public class MyMockedRESTAPI {
public MyMockedRESTAPI() {
}
public APIResponseWrapper apiResponseWrapper(parameters..) throws RestClientException {
if (throwException) {
throw new RestClientException(....);
}
return new APIResponseWrapper();
}
}
A factory class to generate an incoming KAFKA Event and REST API request and response wrappers
public class MockFactory {
private static final Gson gson = new Gson();
public static KAFKAEvent generateKAFKAEvent() {
KAFKAEvent kafkaEvent = new KAFKAEvent();
kafkaEvent.set...
kafkaEvent.set...
kafkaEvent.set...
return kafkaEvent;
}
public static ResponseEntity<APIResponse> createAPIResponse() {
APIResponse response = new APIResponse();
return new ResponseEntity<>(response, HttpStatus.OK);
}
}
A Test Runner Class
@RunWith(SpringJUnit4ClassRunner.class)
public class KAFKAJUnitTest {
// your test methods and assertions go here
}
You can also refer to https://www.baeldung.com/spring-boot-kafka-testing
A good practice is to avoid sending messages to Kafka while testing code in your isolated microservice scope, but when you run an integration test (many microservices at the same time) you sometimes need to activate the Kafka messages.
So my proposal is:
1 - Activate/deactivate loading the Kafka configuration as required
#ConditionalOnProperty(prefix = "my.kafka.consumer", value = "enabled", havingValue = "true", matchIfMissing = false)
#Configuration
public class KafkaConsumerConfiguration {
...
}
#ConditionalOnProperty(prefix = "my.kafka.producer", value = "enabled", havingValue = "true", matchIfMissing = false)
#Configuration
public class KafkaProducerConfiguration {
...
}
and then you will be able to activate/deactivate loading the consumer and the producer as you need.
Examples:
@SpringBootApplication
@Import(KafkaConsumerConfiguration.class)
public class MyMicroservice_1 {
public static void main(String[] args) {
SpringApplication.run(MyMicroservice_1.class, args);
}
}
or
@SpringBootApplication
@Import(KafkaProducerConfiguration.class)
public class MyMicroservice_2 {
public static void main(String[] args) {
SpringApplication.run(MyMicroservice_2.class, args);
}
}
or maybe a microservice that needs both configurations
@SpringBootApplication
@Import(value = { KafkaProducerConfiguration.class, KafkaConsumerConfiguration.class })
public class MyMicroservice_3 {
public static void main(String[] args) {
SpringApplication.run(MyMicroservice_3.class, args);
}
}
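For a test that does not need the bus at all, you could then switch both flags off (a sketch; the property names follow the prefixes defined above and the test class name is made up):
@RunWith(SpringRunner.class)
@SpringBootTest(properties = {
        "my.kafka.consumer.enabled=false",
        "my.kafka.producer.enabled=false"
})
public class OrderServiceWithoutKafkaTest {
    // with both conditional configurations disabled, no Kafka beans are loaded
    // and no connection to a broker is attempted
}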
2 - You also need to make sending messages depend on the current Spring profile. To do that you can override the doSend method of the KafkaTemplate object:
#ConditionalOnProperty(prefix = "my.kafka.producer", value = "enabled", havingValue = "true", matchIfMissing = false)
#Configuration
public class KafkaProducerConfiguration {
...
@Resource
Environment environment;
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory()) {
@Override
protected ListenableFuture<SendResult<String, String>> doSend(ProducerRecord<String, String> producerRecord) {
if (Arrays.asList(environment.getActiveProfiles()).contains("test")) {
return null;
}
return super.doSend(producerRecord);
}
};
}
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> props = new HashMap<>();
...
return new DefaultKafkaProducerFactory<>(props);
}
}
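Since doSend() above short-circuits when the test profile is active, an integration test that must not publish anything could simply activate that profile (a sketch; the test class name is made up):
@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("test")
public class MyMicroserviceIT {
    // with the "test" profile active, kafkaTemplate.send(...) returns null
    // instead of reaching a broker
}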
I have an integration flow that reads files from a specific directory, transforms them to POJOs and saves them in a list.
Config class:
@Configuration
@ComponentScan
@EnableIntegration
@IntegrationComponentScan
public class IntegrationConfig {
@Bean
public MessageChannel fileChannel(){
return new DirectChannel();
}
@Bean
public MessageSource<File> fileMessageSource(){
FileReadingMessageSource readingMessageSource = new FileReadingMessageSource();
CompositeFileListFilter<File> compositeFileListFilter= new CompositeFileListFilter<>();
compositeFileListFilter.addFilter(new SimplePatternFileListFilter("*.csv"));
compositeFileListFilter.addFilter(new AcceptOnceFileListFilter<>());
readingMessageSource.setFilter(compositeFileListFilter);
readingMessageSource.setDirectory(new File("myFiles"));
return readingMessageSource;
}
@Bean
public CSVToOrderTransformer csvToOrderTransformer(){
return new CSVToOrderTransformer();
}
@Bean
public IntegrationFlow convert(){
return IntegrationFlows.from(fileMessageSource(),source -> source.poller(Pollers.fixedDelay(500)))
.channel(fileChannel())
.transform(csvToOrderTransformer())
.handle("loggerOrderList","processOrders")
.channel(MessageChannels.queue())
.get();
}
}
Transformer:
public class CSVToOrderTransformer {
@Transformer
public List<Order> transform(File file){
List<Order> orders = new ArrayList<>();
Pattern pattern = Pattern.compile("(?m)^(\\d*);(WAITING_FOR_PAYMENT|PAYMENT_COMPLETED);(\\d*)$");
Matcher matcher = null;
try {
matcher = pattern.matcher(new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8));
} catch (IOException e) {
e.printStackTrace();
}
while (!matcher.hitEnd()){
if(matcher.find()){
Order order = new Order();
order.setOrderId(Integer.parseInt(matcher.group(1)));
order.setOrderState(matcher.group(2).equals("WAITING_FOR_PAYMENT")? OrderState.WAITING_FOR_PAYMENT:OrderState.PAYMENT_COMPLETED);
order.setOrderCost(Integer.parseInt(matcher.group(3)));
orders.add(order);
}
}
return orders;
}
}
OrderState enum :
public enum OrderState {
CANCELED,
WAITING_FOR_PAYMENT,
PAYMENT_COMPLETED
}
Order :
public class Order {
private int orderId;
private OrderState orderState;
private int orderCost;
}
LoggerOrderList service:
@Service
public class LoggerOrderList {
private static final Logger LOGGER = LogManager.getLogger(LoggerOrderList.class);
public List<Order> processOrders(List<Order> orderList){
orderList.forEach(LOGGER::info);
return orderList;
}
}
1) How can I make the flow start only when I invoke the gateway method?
2) How can I read the passed message in the inbound channel adapter (in my case the FileReadingMessageSource)?
The FileReadingMessageSource is based on polling the directory provided in the configuration. This is the beginning of the flow and it cannot be used in the middle of some logic.
You didn't explain what your gateway is, but probably you would like similar logic to get the content of the dir passed as the payload of the sent message. However, such logic doesn't look like a fit for that message source anyway: its goal is to poll the dir for new content all the time. If you want something similar for several dirs, you may consider dynamically registered flows for the provided dirs: https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-runtime-flows
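A rough sketch of that dynamic-registration idea, using IntegrationFlowContext (the method name and the idea of passing the dir in are assumptions; this would live in some Spring-managed service):
@Autowired
private IntegrationFlowContext flowContext;

public void registerFlowFor(File dir) {
    IntegrationFlow flow = IntegrationFlows
            .from(Files.inboundAdapter(dir).patternFilter("*.csv"),
                    e -> e.poller(Pollers.fixedDelay(500)))
            .transform(new CSVToOrderTransformer())
            .handle("loggerOrderList", "processOrders")
            .get();
    // each call registers (and starts) an independent polling flow for that dir
    flowContext.registration(flow).register();
}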
Otherwise you need to consider a plain service activator which would call listFiles() on the provided dir, simply because without the "wait for new content" feature it does not make sense to abuse the FileReadingMessageSource.
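One way that could look, wired to a gateway so the logic only runs when you call it (a sketch; the gateway, channel and flow names are made up):
@MessagingGateway
public interface OrdersGateway {

    @Gateway(requestChannel = "ordersOnDemandChannel")
    List<Order> loadOrders(File dir);
}

// this bean would go into the existing IntegrationConfig
@Bean
public IntegrationFlow onDemandOrdersFlow(CSVToOrderTransformer transformer) {
    return IntegrationFlows.from("ordersOnDemandChannel")
            // no FileReadingMessageSource: just list the CSV files of the dir passed via the gateway
            .<File, List<Order>>transform(dir -> Arrays
                    .stream(dir.listFiles((d, name) -> name.endsWith(".csv")))
                    .map(transformer::transform)
                    .flatMap(List::stream)
                    .collect(Collectors.toList()))
            .handle("loggerOrderList", "processOrders")
            .get();
}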
I'm building a Kafka Streams application with spring-kafka to group records by key and apply some business logic. I'm following the configuration stated in the spring-kafka streams documentation, but the problem is that when I want to retrieve a value from the local store I get the following error:
org.apache.kafka.streams.errors.InvalidStateStoreException: The state store, user-data-response-count, may have migrated to another instance.
at org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:60)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1053)
at com.umantis.management.service.UserDataManagementService.broadcastUserDataRequest(UserDataManagementService.java:121)
Here is my KafkaStreamsConfiguration:
@Configuration
@EnableConfigurationProperties(EventsKafkaProperties.class)
@EnableKafka
@EnableKafkaStreams
public class KafkaConfiguration {
@Value("${app.kafka.streams.application-id}")
private String applicationId;
// This contains both the bootstrap servers and the schema registry url
@Autowired
private EventsKafkaProperties eventsKafkaProperties;
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public StreamsConfig streamsConfig() {
Map<String, Object> props = new HashMap<>();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, this.eventsKafkaProperties.getBrokers());
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, this.eventsKafkaProperties.getSchemaRegistryUrl());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return new StreamsConfig(props);
}
@Bean
public KGroupedStream<String, UserDataResponse> responseKStream(StreamsBuilder streamsBuilder, TopicUtils topicUtils) {
final Map<String, String> serdeConfig = Collections.singletonMap("schema.registry.url", this.eventsKafkaProperties.getSchemaRegistryUrl());
final Serde<UserDataResponse> valueSpecificAvroSerde = new SpecificAvroSerde<>();
valueSpecificAvroSerde.configure(serdeConfig, false);
return streamsBuilder
.stream("myTopic", Consumed.with(Serdes.String(), valueSpecificAvroSerde))
.groupByKey();
}
}
And here is my service code failing on getKafkaStreams().store:
@Slf4j
@Service
public class UserDataManagementService {
private static final String RESPONSE_COUNT_STORE = "user-data-response-count";
@Autowired
private StreamsBuilderFactoryBean streamsBuilderFactory;
public UserDataResponse broadcastUserDataRequest() {
this.responseGroupStream.count(Materialized.as(RESPONSE_COUNT_STORE));
if (!this.streamsBuilderFactory.isRunning()) {
throw new KafkaStoreNotAvailableException();
}
// here we should have a single running kafka instance
ReadOnlyKeyValueStore<String, Long> countStore =
this.streamsBuilderFactory.getKafkaStreams().store(RESPONSE_COUNT_STORE, QueryableStoreTypes.keyValueStore());
...
}
Context: I'm running the app as a single instance in a Spring Boot test and I'm ensuring the Kafka instance is in a running state. I've searched the Apache documentation on this issue, but my case does not appear to match.
Can anyone point out what I'm doing wrong and suggest a possible solution?
I'm quite new to Kafka Streams, so any help would be highly appreciated.
OK, I just saw it: I was checking whether the streams factory was running, but I wasn't checking whether the Kafka Streams instance itself was actually running.
Polling streamsBuilderFactory.getKafkaStreams().state() solved the issue.
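In other words, something along these lines before querying the store (a sketch; it assumes waiting for the RUNNING state, and the timeout value is arbitrary):
KafkaStreams streams = this.streamsBuilderFactory.getKafkaStreams();
long deadline = System.currentTimeMillis() + 10_000;
// wait until the Kafka Streams instance is actually RUNNING, not just until the factory has started
while (streams.state() != KafkaStreams.State.RUNNING) {
    if (System.currentTimeMillis() > deadline) {
        throw new KafkaStoreNotAvailableException();
    }
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new KafkaStoreNotAvailableException();
    }
}
ReadOnlyKeyValueStore<String, Long> countStore =
        streams.store(RESPONSE_COUNT_STORE, QueryableStoreTypes.keyValueStore());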
The KafkaProperties Javadoc:
/**
* What to do when there is no initial offset in Kafka or if the current offset
* does not exist any more on the server.
*/
private String autoOffsetReset;
I have a hello world application which contains application.properties:
spring.kafka.consumer.group-id=foo
spring.kafka.consumer.auto-offset-reset=latest
In this case the @KafkaListener method is invoked for all entries, but the expected result was that the @KafkaListener method would be invoked only for the latest 3 messages I sent. I tried the other option:
spring.kafka.consumer.auto-offset-reset=earliest
But the behaviour is the same.
Can you explain this?
P.S.
code sample:
@SpringBootApplication
public class Application implements CommandLineRunner {
public static Logger logger = LoggerFactory.getLogger(Application.class);
public static void main(String[] args) {
SpringApplication.run(Application.class, args).close();
}
@Autowired
private KafkaTemplate<String, String> template;
private final CountDownLatch latch = new CountDownLatch(3);
@Override
public void run(String... args) throws Exception {
this.template.send("spring_kafka_topic", "foo1");
this.template.send("spring_kafka_topic", "foo2");
this.template.send("spring_kafka_topic", "foo3");
latch.await(60, TimeUnit.SECONDS);
logger.info("All received");
}
@KafkaListener(topics = "spring_kafka_topic")
public void listen(ConsumerRecord<?, ?> cr) throws Exception {
logger.info(cr.toString());
latch.countDown();
}
}
Update:
The behaviour doesn't depend on
spring.kafka.consumer.auto-offset-reset
(tested with spring.kafka.consumer.auto-offset-reset=earliest); it only depends on
spring.kafka.consumer.enable-auto-commit
If I set spring.kafka.consumer.enable-auto-commit=false, I see all records.
If I set spring.kafka.consumer.enable-auto-commit=true, I see only the 3 last records.
Please clarify the meaning of the spring.kafka.consumer.auto-offset-reset property.
The KafkaProperties in Spring Boot does this:
public Map<String, Object> buildProperties() {
Map<String, Object> properties = new HashMap<String, Object>();
if (this.autoCommitInterval != null) {
properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,
this.autoCommitInterval);
}
if (this.autoOffsetReset != null) {
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
this.autoOffsetReset);
}
This buildProperties() is used by buildConsumerProperties(), which in turn is used in the:
@Bean
@ConditionalOnMissingBean(ConsumerFactory.class)
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
return new DefaultKafkaConsumerFactory<Object, Object>(
this.properties.buildConsumerProperties());
}
So, if you use your own ConsumerFactory bean definition be sure to reuse those KafkaProperties: https://docs.spring.io/spring-boot/docs/1.5.7.RELEASE/reference/htmlsingle/#boot-features-kafka-extra-props
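For example, a custom factory could be built on top of the Boot properties instead of a hand-rolled map (a sketch against Boot 1.5.x; the override shown is just an illustration):
@Bean
public ConsumerFactory<Object, Object> kafkaConsumerFactory(KafkaProperties properties) {
    // start from everything configured under spring.kafka.consumer.* ...
    Map<String, Object> consumerProps = properties.buildConsumerProperties();
    // ... and only then apply your own overrides
    consumerProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
    return new DefaultKafkaConsumerFactory<>(consumerProps);
}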
UPDATE
OK. I see what's going on.
Try to add this property:
spring.kafka.consumer.enable-auto-commit=false
This way we won't have async auto-commits based on some commit interval.
The logic in your application is based on exiting after the latch.await(60, TimeUnit.SECONDS); when we get the 3 expected records we exit. This way the async auto-commit from the consumer might not have happened yet, so the next time you run the application the consumer polls data from the uncommitted offset.
When we turn off auto-commit we get AckMode.BATCH, which is performed synchronously, and we are able to really see the latest records in the topic for this foo consumer group.
I have a problem with customizing the API Gateway domain for my RESTful app deployed on AWS Lambda. The customized domain works this way: depending on the basePath it chooses a different API, which finally hits the Lambda. For example:
api.mycustomdomain.com/view/ping -> goes to application view with path /view/ping
api.mycustomdomain.com/admin/ping -> goes to application admin with path /admin/ping
I am using this example as a boilerplate: https://github.com/awslabs/aws-serverless-java-container/tree/master/samples/spring/pet-store
What I would like to achieve is a handler which, depending on the Host header, strips the prefix from the request path.
I have prepared the following application.yml file:
server:
  contextPath: "/view"
  productionHost: "api.mycustomdomain.com"
The problem/question is: how can I now load those into my Lambda function? Here is my naive try:
public class LambdaHandler implements RequestHandler<AwsProxyRequest, AwsProxyResponse> {
SpringLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;
boolean isinitialized = false;
#Value("${server.contextPath}")
private String prefix;
#Value("${server.productionHost}")
private String productionHost;
public AwsProxyResponse handleRequest(AwsProxyRequest awsProxyRequest, Context context) {
if(awsProxyRequest.getHeaders().get("Host").equals(productionHost))
awsProxyRequest.setPath(awsProxyRequest.getPath().substring(prefix.length()));
if (!isinitialized) {
isinitialized = true;
try {
handler = SpringLambdaContainerHandler.getAwsProxyHandler(PingPongApp.class);
} catch (ContainerInitializationException e) {
e.printStackTrace();
return null;
}
}
return handler.proxy(awsProxyRequest, context);
}
}
Obviously this doesn't work; the LambdaHandler runs outside the Spring context.
Any ideas how I can deal with that?
It seems you cannot load those properties that way. Follow either of the two options given below.
1> You can add the following bean to your configuration; that way you can autowire Strings and use them the way you already do:
@Bean
public static PropertySourcesPlaceholderConfigurer propertyConfigInDev() {
return new PropertySourcesPlaceholderConfigurer();
}
2> You can autowire the Spring Environment and read the properties from it:
public class LambdaHandler ... {
@Autowired
private Environment env;
...
public AwsProxyResponse handleRequest(AwsProxyRequest awsProxyRequest, Context context) {
...
String contextPath = env.getRequiredProperty("server.contextPath");
...
}
}