Combine Mono with Flux - Java

I want to create a service that combines results from two reactive sources: one produces a Mono, the other a Flux. For the merge I need the same Mono value paired with every element the Flux emits.
For now I have something like this:
Flux.zip(
        service1.getConfig(),        // produces a Flux
        service2.getContext()        // produces a Mono
                .cache()
                .repeat()
)
This gives me what I need:
- service2 is called only once
- the context is provided for every configuration
- the resulting flux has as many elements as there are configurations
But I have noticed that repeat() is emitting a massive amount of elements after the context is cached. Is this a problem?
Is there something I can do to limit the number of repeats to the number of received configurations, yet still make both requests simultaneously? Or is this not an issue, and I can safely ignore those extra emitted elements?
I tried to use combineLatest, but depending on timing some configuration elements can get lost and never processed.
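One idea I am considering, under the assumption that Reactor's zipWith overload with an explicit prefetch behaves as documented: a small prefetch should bound how far repeat() can run ahead, because zip only requests that many elements at a time from each source.

// Sketch under that assumption: prefetch 1 caps the over-production of
// cache().repeat() to roughly one element per request cycle; both sources are
// still subscribed together, and cache() keeps service2 at a single call.
// handle(...) is a placeholder for the real combinator.
service1.getConfig()                        // Flux
        .zipWith(service2.getContext()      // Mono
                        .cache()
                        .repeat(),
                1,                          // prefetch
                (config, context) -> handle(config, context));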
EDIT
Looking at the suggestions from @Ricard Kollcaku, I have created a sample test that shows why this is not what I'm looking for.
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Stream;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.test.StepVerifier;

public class SampleTest
{
    Logger LOG = LoggerFactory.getLogger(SampleTest.class);
    AtomicLong counter = new AtomicLong(0);

    Flux<String> getFlux()
    {
        return Flux.fromStream(() -> {
            LOG.info("flux started");
            sleep(1000);
            return Stream.of("a", "b", "c");
        }).subscribeOn(Schedulers.parallel());
    }

    Mono<String> getMono()
    {
        return Mono.defer(() -> {
            counter.incrementAndGet();
            LOG.info("mono started");
            sleep(1000);
            return Mono.just("mono");
        }).subscribeOn(Schedulers.parallel());
    }

    private void sleep(final long millis)
    {
        try
        {
            Thread.sleep(millis);
        }
        catch (final InterruptedException e)
        {
            e.printStackTrace();
        }
    }

    @Test
    void test0()
    {
        final Flux<String> result = Flux.zip(
                getFlux(),
                getMono().cache().repeat()
                        .doOnNext(n -> LOG.warn("signal on mono: {}", n)),
                (s1, s2) -> s1 + " " + s2
        );
        assertResults(result);
    }

    @Test
    void test1()
    {
        final Flux<String> result =
                getFlux().flatMap(s -> Mono.zip(Mono.just(s), getMono(),
                        (s1, s2) -> s1 + " " + s2));
        assertResults(result);
    }

    @Test
    void test2()
    {
        final Flux<String> result = getFlux().flatMap(s -> getMono().map(s1 -> s + " " + s1));
        assertResults(result);
    }

    void assertResults(final Flux<String> result)
    {
        StepVerifier.create(result)
                .expectNext("a mono")
                .expectNext("b mono")
                .expectNext("c mono")
                .verifyComplete();
        Assertions.assertEquals(1L, counter.get());
    }
}
Looking at the test results for test1 and test2:
2020-01-20 12:55:22.542 INFO [] [] [ parallel-3] SampleTest : flux started
2020-01-20 12:55:24.547 INFO [] [] [ parallel-4] SampleTest : mono started
2020-01-20 12:55:24.547 INFO [] [] [ parallel-5] SampleTest : mono started
2020-01-20 12:55:24.548 INFO [] [] [ parallel-6] SampleTest : mono started
expected: <1> but was: <3>
I need to reject your proposal. In both cases getMono is:
- invoked as many times as there are items in the flux
- invoked only after the first element of the flux arrives
Those are exactly the interactions I want to avoid: my services make HTTP requests under the hood, and they may be time-consuming.
My current solution does not have this problem, but if I add a logger to my zip I get this:
2020-01-20 12:55:20.505 INFO [] [] [ parallel-1] SampleTest : flux started
2020-01-20 12:55:20.508 INFO [] [] [ parallel-2] SampleTest : mono started
2020-01-20 12:55:21.523 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.528 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.529 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.529 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.529 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.529 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.530 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.530 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.530 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.530 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.531 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.531 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.531 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.531 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.531 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.532 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.532 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.532 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.532 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.533 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.534 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.534 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.534 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.534 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.534 WARN [] [] [ parallel-2] SampleTest : signal on mono
2020-01-20 12:55:21.535 WARN [] [] [ parallel-2] SampleTest : signal on mono
As you can see, combining cache().repeat() emits a lot of extra elements. I want to know whether this is an issue and, if so, how to avoid it (while keeping the single invocation of the mono and the parallel invocation of both sources).

I think what you are trying to achieve could be done with Flux.join. Here is some example code:
Flux<Integer> flux = Flux.concat(
        Mono.just(1).delayElement(Duration.ofMillis(100)),
        Mono.just(2).delayElement(Duration.ofMillis(500))).log();
Mono<String> mono = Mono.just("a").delayElement(Duration.ofMillis(50)).log();

// Flux.never() as the window-end selector on both sides keeps every join
// window open indefinitely, so the mono's single value is paired with each
// flux element while both sources are subscribed exactly once.
List<String> list = flux
        .join(mono, v1 -> Flux.never(), v2 -> Flux.never(), (x, y) -> x + y)
        .collectList()
        .block();
System.out.println(list);
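With the timings above, the printed list should be [1a, 2a]: the single mono value is paired with both flux elements, and each source is subscribed exactly once.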

Libraries like Project Reactor and RxJava try to provide as many combinations of their capabilities as possible, but they do not expose the underlying instruments for combining those capabilities yourself. As a result, there are always corner cases which are not covered.
My own DF4J is, as far as I know, the only asynchronous library which provides the means to combine capabilities. For example, this is how a user can zip a Flux and a Mono (of course, this class is not part of DF4J itself):
import org.df4j.core.dataflow.Actor;
import org.df4j.core.port.InpFlow;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

abstract class ZipActor<T1, T2> extends Actor {
    InpFlow<T1> inpFlow = new InpFlow<>(this);
    InpFlow<T2> inpScalar = new InpFlow<>(this);

    ZipActor(Flux<T1> flux, Mono<T2> mono) {
        flux.subscribe(inpFlow);
        mono.subscribe(inpScalar);
    }

    @Override
    protected void runAction() throws Throwable {
        if (inpFlow.isCompleted()) {
            stop();
            return;
        }
        T1 element1 = inpFlow.removeAndRequest();
        T2 element2 = inpScalar.current();
        runAction(element1, element2);
    }

    protected abstract void runAction(T1 element1, T2 element2);
}
and this is how it can be used:
@Test
public void ZipActorTest() {
    Flux<Integer> flux = Flux.just(1, 2, 3);
    Mono<Integer> mono = Mono.just(5);
    ZipActor<Integer, Integer> actor = new ZipActor<Integer, Integer>(flux, mono) {
        @Override
        protected void runAction(Integer element1, Integer element2) {
            System.out.println("got:" + element1 + " and:" + element2);
        }
    };
    actor.start();
    actor.join();
}
The console output is as follows:
got:1 and:5
got:2 and:5
got:3 and:5

You can do it with just a simple change:

getFlux()
        .flatMap(s -> Mono.zip(Mono.just(s), getMono(), (s1, s2) -> s1 + " " + s2))
        .subscribe(System.out::println);

Flux<String> getFlux() {
    return Flux.just("a", "b", "c");
}

Mono<String> getMono() {
    return Mono.just("mono");
}
That is if you want to use zip, but you can achieve the same results using flatMap:

getFlux()
        .flatMap(s -> getMono()
                .map(s1 -> s + " " + s1))
        .subscribe(System.out::println);

Flux<String> getFlux() {
    return Flux.just("a", "b", "c");
}

Mono<String> getMono() {
    return Mono.just("mono");
}
In both cases the result is:
a mono
b mono
c mono
EDIT
OK, now I understand it better. Can you try this solution?
getMono()
        .flatMapMany(s -> getFlux().map(s1 -> s1 + " " + s))
        .subscribe(System.out::println);

Flux<String> getFlux() {
    return Flux.defer(() -> {
        System.out.println("init flux");
        return Flux.just("a", "b", "c");
    });
}

Mono<String> getMono() {
    return Mono.defer(() -> {
        System.out.println("init Mono");
        return Mono.just("sss");
    });
}
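If you also need the two requests to start at the same time with a single call to the mono, one more variant (a sketch on my side; the eager subscribe is fire-and-forget, so errors from the mono only surface when the cache is consumed):

// cache() keeps the mono to one invocation; the eager subscribe() kicks the
// request off immediately so it runs in parallel with the flux.
Mono<String> context = getMono().cache();
context.subscribe();

Flux<String> result = getFlux()
        .flatMapSequential(cfg -> context.map(ctx -> cfg + " " + ctx)); // preserves flux order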

Related

Apache Flink batch mode FileSink to S3 can't finish in JetBrains IDEA

What we are trying to do: we are evaluating Flink to perform batch processing using DataStream API in BATCH mode.
Minimal application to reproduce the issue:
FileSystem.initialize(GlobalConfiguration.loadConfiguration(System.getenv("FLINK_CONF_DIR")))

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setRuntimeMode(RuntimeExecutionMode.BATCH)

val inputStream = env.fromSource(
  FileSource.forRecordStreamFormat(new TextLineFormat(), new Path("s3://testtest/2022/04/12/")).build(),
  WatermarkStrategy.noWatermarks()
    .withTimestampAssigner(new SerializableTimestampAssigner[String]() {
      override def extractTimestamp(element: String, recordTimestamp: Long): Long = -1
    }),
  "MySourceName"
)
  .map(str => {
    val jsonNode = JsonUtil.getJSON(str)
    val log = JsonUtil.getJSONString(jsonNode, "log")
    if (StringUtils.isNotBlank(log)) {
      log
    } else {
      ""
    }
  })
  .filter(StringUtils.isNotBlank(_))

val sink: FileSink[BaseLocation] = FileSink
  // .forBulkFormat(new Path("/Users/temp/flinksave"), AvroWriters.forSpecificRecord(classOf[BaseLocation]))
  .forBulkFormat(new Path("s3://testtest/avro"), AvroWriters.forSpecificRecord(classOf[BaseLocation]))
  .withRollingPolicy(OnCheckpointRollingPolicy.build())
  .withOutputFileConfig(config)
  .build()

inputStream.map(data => {
  val baseLocation = new BaseLocation()
  baseLocation.setRegion(data)
  baseLocation
}).sinkTo(sink)

inputStream.print("input:")
env.execute()
Flink version: 1.14.2
The program executes normally when the path is local. The program does not give an error when the path is changed to s3://, but I do not see any files being written to S3 either.
This problem does not exist in stand-alone mode, only in the local development environment (JetBrains IDEA). Is it because I lack some configuration? I have already configured flink-config.yaml like:
s3.access-key: test
s3.secret-key: test
s3.endpoint: http://127.0.0.1:39000
Log:
18:42:25,524 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Finished reading split(s) [0000000002]
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Finished reading split(s) [0000000001]
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager [] - Closing splitFetcher 0 because it is idle.
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager [] - Closing splitFetcher 0 because it is idle.
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Shutting down split fetcher 0
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Shutting down split fetcher 0
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Split fetcher 0 exited.
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Split fetcher 0 exited.
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - Subtask 11 (on host '') is requesting a file source split
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - No more splits available for subtask 11
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - Subtask 8 (on host '') is requesting a file source split
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - No more splits available for subtask 8
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Reader received NoMoreSplits event.
18:42:25,526 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Reader received NoMoreSplits event.

Word count number is always changing when using Flink

I am trying to create a word count example with Flink. Here is the link for the words data (this is the example from Flink's GitHub account).
When I count the words with a simple Java program:
public static void main(String[] args) throws Exception {
    int count = 0;
    for (String eachSentence : WordCountData.WORDS) {
        String[] splittedSentence = eachSentence.toLowerCase().split("\\W+");
        for (String eachWord : splittedSentence) {
            count++;
        }
    }
    System.out.println(count);
    // result is 287
}
Now, when I do this with Flink, first I split the sentences into words:
DataStream<Tuple2<String, Integer>> readWordByWordStream = splitSentenceWordByWord(wordCountDataSource);

//...

public DataStream<Tuple2<String, Integer>> splitSentenceWordByWord(DataStream<String> wordDataSourceStream)
{
    DataStream<Tuple2<String, Integer>> wordByWordStream = wordDataSourceStream.flatMap(new TempTransformation());
    return wordByWordStream;
}
Here is my TempTransformation class:
public class TempTransformation extends RichFlatMapFunction<String, Tuple2<String, Integer>> {

    @Override
    public void flatMap(String input, Collector<Tuple2<String, Integer>> collector) throws Exception
    {
        String[] splittedSentence = input.toLowerCase().split("\\W+");
        for (String eachWord : splittedSentence)
        {
            collector.collect(new Tuple2<String, Integer>(eachWord, 1));
        }
    }
}
Now I am going to count the words by converting the stream to a KeyedStream (keyed by word):
public SingleOutputStreamOperator<String> keyedStreamExample(DataStream<Tuple2<String, Integer>> wordByWordStream)
{
    return wordByWordStream.keyBy(0).timeWindow(Time.milliseconds(1)).apply(new TempWindowFunction());
}
TempWindowFunction():
public class TempWindowFunction extends RichWindowFunction<Tuple2<String, Integer>, String, Tuple, TimeWindow> {

    private Logger logger = LoggerFactory.getLogger(TempWindowFunction.class);
    private int count = 0;

    @Override
    public void apply(Tuple tuple, TimeWindow window, Iterable<Tuple2<String, Integer>> input, Collector<String> out) throws Exception
    {
        logger.info("Key is:' {} ' and collected element for that key and count: {}", (Object) tuple.getField(0), count);
        StringBuilder builder = new StringBuilder();
        for (Tuple2 each : input)
        {
            String key = (String) each.getField(0);
            Integer value = (Integer) each.getField(1);
            String tupleStr = "[ " + key + " , " + value + "]";
            builder.append(tupleStr);
            count++;
        }
        logger.info("All tuples {}", builder.toString());
        logger.info("Exit method");
        logger.info("----");
    }
}
After running this job in Flink's local environment, the outputs are always changing. Here are a few samples:
18:09:40,086 INFO com.sampleFlinkProject.transformations.TempWindowFunction - Key is:' rub ' and collected element for that key and count: 86
18:09:40,086 INFO TempWindowFunction - All tuples [ rub , 1]
18:09:40,086 INFO TempWindowFunction - Exit method
18:09:40,086 INFO TempWindowFunction - ----
18:09:40,086 INFO TempWindowFunction - Key is:' for ' and collected element for that key and count: 87
18:09:40,086 INFO TempWindowFunction - All tuples [ for , 1]
18:09:40,086 INFO TempWindowFunction - Exit method
18:09:40,086 INFO TempWindowFunction - ----
// another running outputs:
18:36:21,660 INFO TempWindowFunction - Key is:' for ' and collected element for that key and count: 103
18:36:21,660 INFO TempWindowFunction - All tuples [ for , 1]
18:36:21,660 INFO TempWindowFunction - Exit method
18:36:21,660 INFO TempWindowFunction - ----
18:36:21,662 INFO TempWindowFunction - Key is:' coil ' and collected element for that key and count: 104
18:36:21,662 INFO TempWindowFunction - All tuples [ coil , 1]
18:36:21,662 INFO TempWindowFunction - Exit method
18:36:21,662 INFO TempWindowFunction - ----
Lastly, here is the execution setup:
//...
final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(1);
//...
Why is Flink giving different outputs for each execution?
One source of non-determinism in your application is the processing time windows (which are 1 ms long). Whenever you use processing time for windowing, then the windows end up containing whatever events happen to show up and get processed during the time interval. (Event time windows do behave deterministically, since they are based on timestamps in the events.) Having the windows be so short is going to exaggerate this effect.
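For reference, a sketch of the event-time alternative, assuming a Flink version with the WatermarkStrategy API (1.11+) and a hypothetical extractEventTime(...) helper, since the word data itself carries no timestamps:

// Event-time windows group by the timestamps carried in the events, so
// repeated runs over the same data produce the same window contents.
DataStream<Tuple2<String, Integer>> counts = wordByWordStream
        .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<String, Integer>>forMonotonousTimestamps()
                        .withTimestampAssigner((tuple, ts) -> extractEventTime(tuple))) // hypothetical helper
        .keyBy(t -> t.f0)                                      // key by the word itself
        .window(TumblingEventTimeWindows.of(Time.seconds(1)))
        .sum(1);                                               // sum the per-word counts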

Log doesn't display all the data

I use Spring Boot 2.
I have a method that uses a scheduler.
I try to log some info:
@Scheduled(cron = "0 52 8 * * *")
private void sendEmailOrders() throws MessagingException {
    List<FactoryEmailNCDto> factoryEmails = prepareDataNoncompliantSampling();
    log.info("start sendEmailOrders: " + factoryEmails != null ? String.valueOf(factoryEmails.size()) : "0");
    for (FactoryEmailNCDto factoryEmail : factoryEmails) {
        String message = mailContentBuilder.build(factoryEmail);
        if (factoryEmail.getEmails() != null && !factoryEmail.getEmails().isEmpty()) {
            log.info("prepare to sent email to : " + factoryEmail.getFactoryName());
            mailService.sendHtmlMail(factoryEmail.getEmails(), "Order", message);
        }
    }
    log.info("end sendEmailOrders");
}
I get:
2019-03-19 08:52:00.379  INFO 17831 --- [ scheduling-1] com.mermacon.facade.OrdersFacade : 2
2019-03-19 08:52:00.760  INFO 17831 --- [ scheduling-1] com.mermacon.facade.OrdersFacade : prepare to sent email to : NY
2019-03-19 08:52:03.898 ERROR 17831 --- [ scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected e
Why don't I get the string "start sendEmailOrders" in the log?
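One likely explanation, judging from the ": 2" visible in the log above (my note, not from the original thread): in Java, + binds more tightly than !=, so the ternary's condition is the whole concatenated string compared against null. That is always true, and the logged message collapses to just String.valueOf(factoryEmails.size()). Parenthesizing the ternary restores the intended text:

// Sketch of the fix: evaluate the null check first, then concatenate.
log.info("start sendEmailOrders: "
        + (factoryEmails != null ? String.valueOf(factoryEmails.size()) : "0"));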

spring-boot-starter-integration 1.4.3 performance degradation

I noticed a significant performance degradation when upgrading Spring-Boot from 1.4.2 to 1.4.3 while using spring-integration.
I could reproduce the problem with the following program:
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <!--<version>1.4.2.RELEASE</version>-->
        <version>1.4.3.RELEASE</version>
    </parent>
    <artifactId>performance-test</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-integration</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
Application.java:
package performance.test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(final String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
UsersManager.java:
package performance.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.handler.annotation.Payload;

@MessageEndpoint
public class UsersManager {

    private static final Logger LOGGER = LoggerFactory.getLogger(UsersManager.class);

    @ServiceActivator(inputChannel = "testChannel")
    public void process(@Payload String payload) {
        LOGGER.debug("payload: {}", payload);
    }
}
IntegrationTest.java:
package performance.test;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class IntegrationTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IntegrationTest.class);

    @Autowired
    private MessageChannel testChannel;

    @Test
    public void basic_test() {
        for (int i = 1; i <= 10; i++) {
            final long start = System.currentTimeMillis();
            test(1000);
            final long end = System.currentTimeMillis();
            LOGGER.info("test run: {} took: {} msec", i, end - start);
        }
    }

    private void test(int count) {
        for (int i = 0; i < count; i++) {
            testChannel.send(MessageBuilder.withPayload("foo").setHeader("monkey", "do").build());
        }
    }
}
When I use Spring-Boot 1.4.2, I get the following results:
2016-12-27 14:55:39.331 INFO 4939 --- [ main] performance.test.IntegrationTest : Started IntegrationTest in 0.975 seconds (JVM running for 1.52)
2016-12-27 14:55:39.468 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 1 took: 123 msec
2016-12-27 14:55:39.552 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 2 took: 83 msec
2016-12-27 14:55:39.632 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 3 took: 80 msec
2016-12-27 14:55:39.696 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 4 took: 63 msec
2016-12-27 14:55:39.763 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 5 took: 67 msec
2016-12-27 14:55:39.842 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 6 took: 78 msec
2016-12-27 14:55:39.895 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 7 took: 53 msec
2016-12-27 14:55:39.950 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 8 took: 55 msec
2016-12-27 14:55:40.004 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 9 took: 54 msec
2016-12-27 14:55:40.057 INFO 4939 --- [ main] performance.test.IntegrationTest : test run: 10 took: 53 msec
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.776 sec - in performance.test.IntegrationTest
However, when I use Spring-Boot 1.4.3, there is a significant performance degradation:
2016-12-27 14:57:41.705 INFO 5122 --- [ main] performance.test.IntegrationTest : Started IntegrationTest in 1.002 seconds (JVM running for 1.539)
2016-12-27 14:57:41.876 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 1 took: 156 msec
2016-12-27 14:57:42.031 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 2 took: 153 msec
2016-12-27 14:57:42.251 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 3 took: 220 msec
2016-12-27 14:57:42.544 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 4 took: 293 msec
2016-12-27 14:57:42.798 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 5 took: 254 msec
2016-12-27 14:57:43.111 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 6 took: 312 msec
2016-12-27 14:57:43.544 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 7 took: 432 msec
2016-12-27 14:57:44.112 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 8 took: 567 msec
2016-12-27 14:57:44.807 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 9 took: 695 msec
2016-12-27 14:57:45.620 INFO 5122 --- [ main] performance.test.IntegrationTest : test run: 10 took: 813 msec
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.989 sec - in performance.test.IntegrationTest
I have no idea what could be the cause of this.
Is anyone else having the same issue?
This might be related to SPR-14929.
In this case, removing @Payload from the message parameter solves the issue (it is not needed here; you only need a payload annotation if there are multiple parameters).
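A minimal sketch of that change, applied to the UsersManager above:

// With a single method parameter, Spring Integration treats it as the
// payload automatically, so the @Payload annotation can simply be dropped.
@ServiceActivator(inputChannel = "testChannel")
public void process(String payload) {
    LOGGER.debug("payload: {}", payload);
}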

Why does my Java concurrency test fail?

I have a simple class:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DummyService {

    private final Logger logger = LoggerFactory.getLogger(getClass());
    private boolean dataIndexing = false;

    public boolean isDataIndexing() {
        logger.info("isDataIndexing: {}", dataIndexing);
        return dataIndexing;
    }

    public void cancelIndexing() {
        logger.info("cancelIndexing: {}", dataIndexing);
        dataIndexing = false;
    }

    public void createIndexCorp() {
        logger.info("createIndexCorp: {}", dataIndexing);
        createIndex();
    }

    public void createIndexEntr() {
        logger.info("createIndexEntr: {}", dataIndexing);
        createIndex();
    }

    private void createIndex() {
        logger.info("createIndex: {}", dataIndexing);
        if (dataIndexing)
            throw new IllegalStateException("Service is busy!");
        dataIndexing = true;
        try {
            while (dataIndexing) {
                Thread.sleep(100);
                logger.debug("I am busy...");
            }
            logger.info("Indexing canceled");
        } catch (InterruptedException e) {
            logger.error("Error during sleeping", e);
        } finally {
            dataIndexing = false;
        }
    }
}
and a unit test with which I want to test the object's behavior:
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class CommonUnitTest
{
    private final Logger logger = LoggerFactory.getLogger(CommonUnitTest.class);

    @Test
    public void testCreateIndexWithoutAsync() throws InterruptedException {
        final long sleepMillis = 500;
        final DummyService indexService = new DummyService();
        assertFalse(indexService.isDataIndexing());

        new Thread(() -> indexService.createIndexCorp()).start();
        Thread.sleep(sleepMillis);
        assertTrue(indexService.isDataIndexing());

        // TaskExecutor should fail here
        new Thread(() -> {
            indexService.createIndexEntr();
            logger.error("Exception expected but not occurred");
        }).start();

        assertTrue(indexService.isDataIndexing());
        indexService.cancelIndexing();
        Thread.sleep(sleepMillis);
        assertFalse(indexService.isDataIndexing());
    }
}
The intended behaviour of the object is: if createIndexCorp or createIndexEntr is called by one thread, then another thread must get an exception when trying to call either of these methods. But this does not happen! Here is the log:
2015-10-15 17:15:06.277 INFO --- [ main] c.c.o.test.DummyService : isDataIndexing: false
2015-10-15 17:15:06.318 INFO --- [ Thread-0] c.c.o.test.DummyService : createIndexCorp: false
2015-10-15 17:15:06.319 INFO --- [ Thread-0] c.c.o.test.DummyService : createIndex: false
2015-10-15 17:15:06.419 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:06.524 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:06.624 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:06.724 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:06.818 INFO --- [ main] c.c.o.test.DummyService : isDataIndexing: true
2015-10-15 17:15:06.820 INFO --- [ main] c.c.o.test.DummyService : isDataIndexing: true
2015-10-15 17:15:06.820 INFO --- [ Thread-1] c.c.o.test.DummyService : createIndexEntr: true
2015-10-15 17:15:06.820 INFO --- [ main] c.c.o.test.DummyService : cancelIndexing: true
2015-10-15 17:15:06.820 INFO --- [ Thread-1] c.c.o.test.DummyService : createIndex: true
2015-10-15 17:15:06.824 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:06.921 DEBUG --- [ Thread-1] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:06.924 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.021 DEBUG --- [ Thread-1] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.024 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.121 DEBUG --- [ Thread-1] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.124 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.221 DEBUG --- [ Thread-1] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.224 DEBUG --- [ Thread-0] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.321 DEBUG --- [ Thread-1] c.c.o.test.DummyService : I am busy...
2015-10-15 17:15:07.321 INFO --- [ main] c.c.o.test.DummyService : isDataIndexing: true
You can see that the second thread can start the process, but it should get the exception. Also, the last assertion in the test fails. How can that happen? I don't understand this behavior. I tried to use the volatile and synchronized keywords, but nothing helps. What is wrong with DummyService?
You have 3 threads: t0, t1 and tm (main).
The order of operations is like this:
- tm starts t0
- t0 checks the dataIndexing flag - false - sets it to true and goes into the loop
- tm sleeps
- tm starts t1
- tm sets the indexing flag to false
- t1 checks the dataIndexing flag - false - sets it to true and goes into the loop
- t0 continues the loop because it missed the brief period when indexing was cancelled

If you sleep in the main thread tm before setting the indexing flag to false, then t1 will get the exception.
You need to synchronize access to variables shared between multiple threads, i.e. checking the state of the flag and changing it needs to be done while holding a mutex.
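A minimal sketch of that advice (my illustration, not code from the thread): make the check-and-set a single atomic step under the object's monitor.

// The test and the state change happen under the same lock, so two threads
// can no longer both observe dataIndexing == false and proceed.
private boolean dataIndexing = false;

private synchronized void claimIndexing() {   // hypothetical helper name
    if (dataIndexing) {
        throw new IllegalStateException("Service is busy!");
    }
    dataIndexing = true;
}

public synchronized void cancelIndexing() {
    dataIndexing = false;
}

public synchronized boolean isDataIndexing() {
    return dataIndexing;
}

The busy-wait loop in createIndex should then read the flag through the synchronized isDataIndexing() (or the field should be volatile) so the cancellation becomes visible to it.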
It seems you're hitting the difference between logging and the actual execution. The threads can conceivably run cancelIndexing and createIndex in the space between the logging and the exception, the second thread thus slipping by and preventing cancellation of both the first and the second.
It is not advisable to allow simultaneous changes to a shared resource, namely private boolean dataIndexing. There are (at least) two solutions:
1. A synchronized method to allow for changes of the shared resource (thus limiting access to only one thread at a time):

private synchronized void setDataIndexing(boolean value) {
    dataIndexing = value;
}

2. Guarding each change of this value in a synchronized section (in both the = true and the = false places):

synchronized (this) {
    dataIndexing = /* the relevant value */;
}

I would advise a separate method, but it is good to know the alternatives.
Not an answer to your question, but this is completely unsynchronized:
if (dataIndexing)
    throw new IllegalStateException("Service is busy!");
dataIndexing = true;
Is the service busy if your execution reaches the throw statement? Not necessarily! Another thread could have changed the value of dataIndexing from true to false in between the test and the throw.
What's worse, maybe much worse, is that two threads might both reach the statement after the throw at the same time:

Thread A                                    Thread B
tests dataIndexing, finds it to be false
                                            tests dataIndexing, finds it to be false
sets dataIndexing = true                    sets dataIndexing = true
...                                         ...
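One standard way to close that window (a sketch on my part, not from this thread) is an atomic test-and-set:

import java.util.concurrent.atomic.AtomicBoolean;

private final AtomicBoolean dataIndexing = new AtomicBoolean(false);

private void createIndex() {
    // compareAndSet performs the test and the update as one atomic step:
    // exactly one thread wins; every other caller gets the exception.
    if (!dataIndexing.compareAndSet(false, true)) {
        throw new IllegalStateException("Service is busy!");
    }
    // ... indexing loop, with dataIndexing.set(false) in a finally block ...
}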
Also, this is unreliable, and it takes time.
Thread.sleep(sleepMillis);
assertTrue(indexService.isDataIndexing());
Better to design your classes for testability. If your test needs to wait until isDataIndexing(), then your class should provide a means for the test to wait()...
Also, don't underestimate the importance of making tests that complete in the least amount of time possible. When you have a system that has thousands or tens of thousands of test cases, the seconds really start to add up.
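For example (a hypothetical sketch of that suggestion), expose a latch the test can await instead of sleeping:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class DummyService {
    private final CountDownLatch indexingStarted = new CountDownLatch(1);

    // called at the top of createIndex() to signal observers
    private void signalStarted() {
        indexingStarted.countDown();
    }

    // the test blocks here instead of calling Thread.sleep()
    public boolean awaitIndexingStarted(long timeout, TimeUnit unit) throws InterruptedException {
        return indexingStarted.await(timeout, unit);
    }
}

The test then waits exactly as long as needed:

assertTrue(indexService.awaitIndexingStarted(1, TimeUnit.SECONDS));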
