I have two microservices (images and comments) that communicate with each other (discovered via the Eureka discovery service) using Spring Cloud Stream with a RabbitMQ broker. I want to send a message from the images microservice to the comments microservice.
The problem is that the @StreamListener method save (in the comment microservice's CommentService, shown below) is never called.
Image Microservice -> CommentController.java:
public CommentController(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
this.flux = Flux.<Message<Comment>>create(
emitter -> this.commentSink = emitter,
FluxSink.OverflowStrategy.IGNORE)
.publish()
.autoConnect();
}
@PostMapping("/comments")
public Mono<String> addComment(Mono<Comment> newComment) {
if (commentSink != null) {
return newComment
.map(comment -> {
commentSink.next(MessageBuilder
.withPayload(comment)
.setHeader(MessageHeaders.CONTENT_TYPE,
MediaType.APPLICATION_JSON_VALUE)
.build());
return comment;
})
.flatMap(comment -> {
meterRegistry
.counter("comments.produced", "imageId", comment.getImageId())
.increment();
return Mono.just("redirect:/");
});
} else {
return Mono.just("redirect:/");
}
}
@StreamEmitter
@Output(Source.OUTPUT)
public void emit(FluxSender output) {
output.send(this.flux);
}
Comment Microservice -> CommentService.java
@StreamListener
@Output(Processor.OUTPUT)
public Flux<Void> save(@Input(Processor.INPUT) Flux<Comment> newComment) {
return repository
.saveAll(newComment)
.flatMap(comment -> {
meterRegistry
.counter("comments.consumed", "imageId", comment.getImageId())
.increment();
return Mono.empty();
});
}
CommentService.java:
@Service
@EnableBinding(Processor.class)
public class CommentService { ...
The code is from the repository I cloned (chapter 7, part 1).
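One thing worth checking when a @StreamListener is never invoked (a general Spring Cloud Stream point on my part, not something taken from the repository): the producer's output channel and the consumer's input channel only meet if both bind to the same destination. A sketch of the kind of configuration both services need, where the destination name "comments" is my assumption:
# images microservice, application.properties
spring.cloud.stream.bindings.output.destination=comments
# comments microservice, application.properties
spring.cloud.stream.bindings.input.destination=comments
spring.cloud.stream.bindings.input.group=comments-service
With RabbitMQ, mismatched destinations mean the producer publishes to one exchange while the consumer binds its queue to another, so the listener is silently never called.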
I am new to Vert.x and async programming.
I have two verticles communicating via the event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private Integer defaultPort;
private void sendSearchRequest(RoutingContext routingContext) {
final JsonObject requestMessage = routingContext.getBodyAsJson();
final EventBus eventBus = vertx.eventBus();
eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
if (reply.succeeded()) {
Logger.info("Search Result = " + reply.result().body());
routingContext.response()
.putHeader("content-type", "application/json")
.setStatusCode(200)
.end((String) reply.result().body());
} else {
Logger.info("Document Search Request cannot be processed");
routingContext.response()
.setStatusCode(500)
.end();
}
});
}
@Override
public void start() throws Exception {
Logger.info("Starting the Gateway service (Event Sender) verticle");
// Create a Router
Router router = Router.router(vertx);
//Added bodyhandler so we can process json messages via the event bus
router.route().handler(BodyHandler.create());
// Mount the handler for incoming requests
// Find documents
router.post("/api/search/docs/*").handler(this::sendSearchRequest);
// Create an HTTP Server using default options
HttpServer server = vertx.createHttpServer();
// Handle every request using the router
server.requestHandler(router)
//start listening on port 8083
.listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
Logger.info("*************** Search Gateway Server started on "
+ server.actualPort() + " *************");
});
}
@Override
public void stop(){
//house keeping
}
}
//Below is the target verticle, which should make the multiple web client calls and merge the responses
@Component
public class SolrCloudVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private SearchRepository searchRepositoryService;
@Override
public void start() throws Exception {
Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
super.start();
ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
.setConfig(new JsonObject().put("path", "conf/config.json"));
ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
.addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
configRetriever.getConfig(ar -> {
if (ar.succeeded()) {
JsonObject configJson = ar.result();
EventBus eventBus = vertx.eventBus();
eventBus.<JsonObject>consumer(GET_USEARCH_DOCS).handler(getDocumentService(searchRepositoryService, configJson));
Logger.info("Completed search service event processing");
} else {
Logger.error("Failed to retrieve the config");
}
});
}
private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
return requestMessage -> vertx.<String>executeBlocking(future -> {
try {
//I need to incorporate the logic here that adds futures to a list and composes the CompositeFuture
/*
//Below is my logic to populate the future list
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
.expect(ResponsePredicate.SC_OK)
.sendJsonObject(requestMessage.body())
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
futureList.add(future1);
}
//Below is the CompositeFuture logic, but the logic and construct do not make sense to me. What goes in the first and second arguments of the executeBlocking method?
/*CompositeFuture.join(futureList)
.onSuccess(result -> {
result.list().forEach( x -> {
if(x != null){
requestMessage.reply(result.result());
}
}
);
})
.onFailure(error -> {
System.out.println("We should not fail");
})
*/
future.complete("DAO returns a Json String");
} catch (Exception e) {
future.fail(e);
}
}, result -> {
if (result.succeeded()) {
requestMessage.reply(result.result());
} else {
requestMessage.reply(result.cause()
.toString());
}
});
}
}
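On the question in the comment above about executeBlocking's two arguments (a general Vert.x point; longRunningCall is a hypothetical stand-in): the first argument is the blocking body, run on a worker thread, which completes or fails the future it is handed; the second is the result handler, called back on the event loop with the outcome. A minimal sketch in the same style as the code above:
vertx.<String>executeBlocking(future -> {
    // first argument: runs on a worker thread; do the blocking work here
    future.complete(longRunningCall());
}, result -> {
    // second argument: runs back on the event loop once the body completes or fails
    if (result.succeeded()) {
        System.out.println(result.result());
    } else {
        result.cause().printStackTrace();
    }
});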
I was able to compose my search result from multiple web client calls using org.springframework.web.reactive.function.client.WebClient, as opposed to using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately I could not get Vert.x's CompositeFuture to work here:
//This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository,JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
JsonObject configJson,
JsonObject requestMessage)
throws SolrServerException, IOException {
List<JsonObject> searchResultList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
}
return aggregateMultiCollectionSearchResults(searchResultList);
}
public String doSearchPerCollection(String collection, String message) {
org.springframework.web.reactive.function.client.WebClient client =
org.springframework.web.reactive.function.client.WebClient.create();
return client.post()
.uri("http://127.0.0.1:8983/solr/" + collection + "/query")
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromValue(message.toString()))
.retrieve()
.bodyToMono(String.class)
.block();
}
private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList){
//TODO: Search result aggregation
return searchList;
}
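As a side note, the same fan-out can be done without block() by merging the per-collection calls. Below is only a sketch under my assumptions (same Solr endpoint, a pre-fetched collection list; findByQueryParamsNonBlocking is a name I made up), where a failed call is logged and dropped so the other responses still merge:
private Mono<List<JsonObject>> findByQueryParamsNonBlocking(List<String> collections, String message) {
    org.springframework.web.reactive.function.client.WebClient client =
        org.springframework.web.reactive.function.client.WebClient.create();
    // One request per collection, merged as results arrive; onErrorResume logs
    // and drops a failed call so the remaining calls still complete.
    return Flux.fromIterable(collections)
        .flatMap(collection -> client.post()
            .uri("http://127.0.0.1:8983/solr/" + collection + "/query")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(message))
            .retrieve()
            .bodyToMono(String.class)
            .map(JsonObject::new)
            .onErrorResume(error -> {
                System.out.println(error.getMessage());
                return Mono.empty();
            }))
        .collectList();
}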
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I have been looking at Vert.x's CompositeFuture, but have made no headway and found no useful example yet.
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
if (ar.succeeded()) {
// All succeeded
} else {
// All completed and at least one failed
}
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join takes several future arguments (up to six) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed.
Using join, you wait for all futures to complete; the issue is that if one of them fails, you will not be able to obtain the responses from the others, because the CompositeFuture will be failed. To avoid this, you should add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures, in which you log the error and pass an empty response so that the future does not fail.
Here is a short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
CompositeFuture.join(response2, response1)
.onSuccess(result -> {
result.list().forEach(x -> {
if(x != null) {
System.out.println(x);
}
});
})
.onFailure(error -> {
System.out.println("We should not fail");
});
Edit 1:
The limit for CompositeFuture.join(Future...) is six futures; in case you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), to which you can pass an unlimited number of futures.
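Putting the two pieces together, here is a sketch of the list variant combined with the recover pattern from above (host, port, and paths are the same placeholders as in the earlier example):
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (String path : Arrays.asList("work", "error", "work")) {
    futureList.add(client.post(8887, "localhost", path)
        .expect(ResponsePredicate.SC_OK)
        .send()
        .map(HttpResponse::bodyAsString)
        // recover logs the failure and substitutes an empty result,
        // so one failed call cannot fail the whole composition
        .recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        }));
}
CompositeFuture.join(futureList).onSuccess(result ->
    result.list().forEach(x -> {
        if (x != null) {
            System.out.println(x);
        }
    }));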
I don't understand the behavior of Spring Integration with JobLaunchingGateway. I have this example config:
public SftpInboundChannelAdapterSpec sftpInboundChannelAdapterSpec() {
return Sftp.inboundAdapter(ftpFileSessionFactory())
.preserveTimestamp(true)
.deleteRemoteFiles(false)
.remoteDirectory(integrationProperties.getRemoteDirectory())
.filter(sftpFileListFilter())
.localDirectory(new File(integrationProperties.getLocalDirectory()));
}
public PollerSpec pollerSpec() {
PollerSpec cron = Pollers.cron(integrationProperties.getPollerCron());
cron.maxMessagesPerPoll(integrationProperties.getMessagePerPoll());
return cron;
}
@Bean
public IntegrationFlow sftpInboundFlow() {
return IntegrationFlows.from(sftpInboundChannelAdapterSpec(), pc -> pc.poller(pollerSpec()))
.transform(fileMessageToJobRequest())
.handle(jobLaunchingGateway())
.handle(message -> {
logger.info("Handle message: {}", message.getPayload());
})
.get();
}
@Bean
public JobLaunchingGateway jobLaunchingGateway() {
SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
simpleJobLauncher.setJobRepository(jobRepository);
simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
return jobLaunchingGateway;
}
private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter = new ChainFileListFilter<>();
chainFileListFilter.addFilter(new SftpSimplePatternFileListFilter("*.xlsx"));
chainFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(metadataStore(), "INT"));
return chainFileListFilter;
}
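For context, the fileMessageToJobRequest() transformer referenced in the flow is not shown; a typical implementation, adapted from the Spring Batch Integration reference (the parameter name input.file.name is an assumption), looks like this:
public class FileMessageToJobRequest {
    private Job job;
    private String fileParameterName = "input.file.name"; // assumed job parameter name
    public void setJob(Job job) { this.job = job; }
    public void setFileParameterName(String fileParameterName) { this.fileParameterName = fileParameterName; }
    @Transformer
    public JobLaunchRequest toRequest(Message<File> message) {
        // Each polled file becomes one JobLaunchRequest; the file path job
        // parameter is what distinguishes one JobInstance from another.
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
        jobParametersBuilder.addString(fileParameterName, message.getPayload().getAbsolutePath());
        return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
    }
}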
If I set polling to every minute, a job is created every minute, and I don't see any new records in the MetadataStore.
When I comment out the line with .handle(jobLaunchingGateway()):
@Bean
public IntegrationFlow sftpInboundFlow() {
return IntegrationFlows.from(sftpInboundChannelAdapterSpec(), pc -> pc.poller(pollerSpec()))
.transform(fileMessageToJobRequest())
// .handle(jobLaunchingGateway())
.handle(message -> {
logger.info("Handle message: {}", message.getPayload());
})
.get();
}
Everything works as expected.
I expected the SFTP adapter to fetch new file(s) and then create a new job for each file.
I don't understand why I don't see records in the MetadataStore when the JobLaunchingGateway is enabled.
Can you help me and explain this?
I am new to Spring Integration and Batch.
I am trying to implement an integration flow for an SQS queue using a void async service activator, but the handling logic is never triggered.
The message is received in the flow and successfully converted by my custom transformer, but the async handling never completes.
This is my configuration class:
@Configuration
public class SqsConfiguration {
/**
...
...
**/
@Bean("amazonSQSClientConfiguration")
ClientConfiguration getAmazonSQSClientConfiguration() {
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setConnectionTimeout(connectionTimeout);
clientConfiguration.setMaxConnections(maxConnections);
clientConfiguration.setSocketTimeout(socketTimeout);
clientConfiguration.setMaxConsecutiveRetriesBeforeThrottling(maxConsecutiveRetriesBeforeThrottling);
return clientConfiguration;
}
@Bean("amazonSQSAsync")
AmazonSQSAsync getAmazonSQSAsync() {
return AmazonSQSAsyncClientBuilder.standard()
.withClientConfiguration(getAmazonSQSClientConfiguration())
.withRegion(this.region)
.build();
}
@Bean("amazonSQSRequestListenerContainerConsumerPool")
protected ThreadPoolTaskExecutor amazonSQSRequestListenerContainerConsumerPool() {
int maxSize = (int) Math.round(concurrentHandlers * poolSizeFactor);
int queueCapacity = (int) Math.round(concurrentHandlers * poolQueueSizeFactor);
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(concurrentHandlers);
taskExecutor.setMaxPoolSize(maxSize);
taskExecutor.setKeepAliveSeconds(poolKeepAliveTimeSeconds);
taskExecutor.setQueueCapacity(queueCapacity);
taskExecutor.setThreadFactory(new NamedDaemonThreadFactory("AmazonSQSRequestHandler"));
taskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
log.info(
String.format(
"Amazon SQS request handler pool settings: {coreSize: %d, maxSize: %d, queueCapacity: %d}",
concurrentHandlers,
maxSize,
queueCapacity
)
);
return taskExecutor;
}
@Bean("sqsMessageDrivenChannelAdapter")
public MessageProducerSupport sqsMessageDrivenChannelAdapter() {
SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(getAmazonSQSAsync(), this.queueName);
adapter.setMaxNumberOfMessages(this.maxNumberOfMessages);
adapter.setVisibilityTimeout(this.visibilityTimeout);
adapter.setSendTimeout(this.sendTimeout);
adapter.setWaitTimeOut(this.waitTimeOut);
adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.ON_SUCCESS);
adapter.setTaskExecutor(amazonSQSRequestListenerContainerConsumerPool());
return adapter;
}
@Bean
@SuppressWarnings("unchecked")
IntegrationFlow sqsRequestIntegrationFlow() {
SqsEventHandlerDispatcher commandHandler = applicationContext.getBean(SqsEventHandlerDispatcher.class);
return IntegrationFlows.from(sqsMessageDrivenChannelAdapter())
.transform(converter::toEvent)
.log()
.handle(commandHandler, "handle", a -> a.async(true))
.log()
.get();
}
}
This is my handler:
@Slf4j
@Component
@MessageEndpoint
public class SqsEventHandlerDispatcher {
/**
...
...
**/
public ListenableFuture<?> handle(EventMessage event) {
return new ListenableFutureTask<Void>(() -> doHandle(event), null);
}
private void doHandle(EventMessage event) {
//my handling logic
}
}
The logic in the doHandle() method is never reached.
The same integration flow with a sync handler that returns void works perfectly:
@Bean
@SuppressWarnings("unchecked")
IntegrationFlow sqsRequestIntegrationFlow() {
SqsEventHandlerDispatcher commandHandler = applicationContext.getBean(SqsEventHandlerDispatcher.class);
return IntegrationFlows.from(sqsMessageDrivenChannelAdapter())
.transform(converter::toEvent)
.log()
.handle(commandHandler, "handle")
.log()
.get();
}
===============================================================================
@Slf4j
@Component
@MessageEndpoint
public class SqsEventHandlerDispatcher {
public void handle(EventMessage event) {
//my handling logic
}
}
Am I missing something? Or can I achieve this by using Mono?
I don't have much experience with either Spring Integration or async processing.
I found a solution using reactive Java (Project Reactor).
This is how my service activator looks now:
public Mono handle(EventMessage event, @Header(AwsHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
return Mono.fromRunnable(() -> doHandle(event)).subscribeOn(Schedulers.elastic())
.doOnSuccess(r -> {
log.trace("Message successfully processed. Will delete it now!");
acknowledgment.acknowledge();
});
}
private void doHandle(EventMessage event) {
//my handling logic
}
I've also updated the SQS message deletion policy to NEVER, and I manually acknowledge once a message has been successfully processed and can be deleted:
adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.NEVER);
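For completeness, my reading of why the original ListenableFuture version never ran (a diagnosis on my part, not from the post): org.springframework.util.concurrent.ListenableFutureTask extends FutureTask, i.e. it is a Runnable that only completes when something actually executes it, and returning it from the service activator does not execute it. A sketch that keeps the ListenableFuture signature would submit the task before returning it (the injected taskExecutor is an assumption):
public ListenableFuture<?> handle(EventMessage event) {
    // ListenableFutureTask is a Runnable; it must be run to ever complete.
    ListenableFutureTask<Void> task = new ListenableFutureTask<>(() -> doHandle(event), null);
    taskExecutor.execute(task); // e.g. the consumer pool from the configuration class
    return task;
}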
I am trying to get a GraphQL subscription working with Java/Vert.x and to have the results shown in GraphiQL. I see all the System.out.println statements in the console, but GraphiQL is not displaying any results because the server is generating an 'Internal Server Error' message.
Schema:
type Subscription {
test: String
}
Vert.x Verticle
private RuntimeWiring getRuntimeWiring() {
return new RuntimeWiring()
.type("Subscription", builder -> builder
.dataFetcher("test", getTestDataFetcher()))
.build();
}
private VertxDataFetcher<Publisher<String>> getTestDataFetcher() {
return new VertxDataFetcher<>((env, future) -> future.complete(doTest()));
}
private Publisher<String> doTest() {
AtomicReference<Subscription> ar = new AtomicReference<>();
Observable<String> obs = Observable.just("Hello");
Publisher<String> pub = obs.toFlowable(BackpressureStrategy.BUFFER);
pub.subscribe(new Subscriber<String>() {
@Override
public void onSubscribe(Subscription s) {
System.out.println("SUBSCRIBE");
ar.set(s);
s.request(1);
}
@Override
public void onNext(String s) {
System.out.println("NEXT="+s);
ar.get().request(1);
}
@Override
public void onError(Throwable t) {
System.out.println("ERROR");
}
@Override
public void onComplete() {
System.out.println("COMPLETE");
}
});
return pub;
}
If I run the subscription using GraphiQL and look at my Vert.x server's console, the output is:
SUBSCRIBE
NEXT=Hello
COMPLETE
The GraphiQL output window says "Internal Server Error", and the server sends a 500 error code.
If I modify the DataFetcher to exactly what is shown at the bottom of the first link, I also receive "Internal Server Error":
private DataFetcher<Publisher<String>> getTestDataFetcher() {
return env -> doTest();
}
I do not see any stack traces for the 500 error in the Vert.x console, so maybe this is a bug?
Side note: if I try using a CompletionStage as shown below (based on the bottom of the second link), I get an error message saying 'You data fetcher must return a publisher of events when using graphql subscriptions':
private DataFetcher<CompletionStage<String>> getTestDataFetcher() {
Single<String> single = Single.create(emitter -> {
new Thread(()-> {
try {
emitter.onSuccess("Hello");
} catch(Exception e) {
emitter.onError(e);
}
}).start();
});
return environment -> single.to(SingleInterop.get());
}
I have used the following sources as references to get this far:
https://www.graphql-java.com/documentation/v9/subscriptions/
https://vertx.io/docs/vertx-web-graphql/java/
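One detail that may explain the 500, based on the Vert.x documentation linked above (an assumption, not confirmed in this post): vertx-web-graphql serves subscriptions over the Apollo websocket protocol rather than plain HTTP, so an ApolloWSHandler route has to be registered alongside the regular GraphQLHandler. A minimal sketch, assuming graphQL is the built GraphQL instance:
// Subscriptions travel over the websocket route; queries and mutations stay on HTTP.
router.route("/graphql").handler(ApolloWSHandler.create(graphQL));
router.route("/graphql").handler(GraphQLHandler.create(graphQL));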
How do I create an Angular 4 client for a Java Project Reactor reactive Flux API? The sample below has two APIs: a Mono API and a Flux API. Both work from curl, but in Angular 4 (4.1.2) only the Mono API works. Any ideas how to get Angular 4 to work with the Flux API?
Here's a trivial Spring Boot 2.0.0-SNAPSHOT application with a Mono API and a Flux API:
@SpringBootApplication
@RestController
public class ReactiveServiceApplication {
@CrossOrigin
@GetMapping("/events/{id}")
public Mono<Event> eventById(@PathVariable long id) {
return Mono.just(new Event(id, LocalDate.now()));
}
@CrossOrigin
@GetMapping(value = "/events", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Event> events() {
Flux<Event> eventFlux = Flux.fromStream(
Stream.generate(
()->new Event(System.currentTimeMillis(), LocalDate.now()))
);
Flux<Long> durationFlux = Flux.interval(Duration.ofSeconds(1));
return Flux.zip(eventFlux, durationFlux).map(Tuple2::getT1);
}
public static void main(String[] args) {
SpringApplication.run(ReactiveServiceApplication.class);
}
}
with a Lombok-ed event:
@Data
@AllArgsConstructor
public class Event {
private final long id;
private final LocalDate when;
}
These reactive APIs work from curl as I'd expect:
jan@linux-6o1s:~/src> curl -s http://localhost:8080/events/123
{"id":123,"when":{"year":2017,"month":"MAY","monthValue":5,"dayOfMonth":15,"dayOfWeek":"MONDAY","era":"CE","dayOfYear":135,"leapYear":false,"chronology":{"calendarType":"iso8601","id":"ISO"}}}
and similarly for the non-terminating Flux API:
jan@linux-6o1s:~/src> curl -s http://localhost:8080/events
data:{"id":1494887783347,"when":{"year":2017,"month":"MAY","monthValue":5,"dayOfMonth":15,"dayOfWeek":"MONDAY","era":"CE","dayOfYear":135,"leapYear":false,"chronology":{"calendarType":"iso8601","id":"ISO"}}}
data:{"id":1494887784348,"when":{"year":2017,"month":"MAY","monthValue":5,"dayOfMonth":15,"dayOfWeek":"MONDAY","era":"CE","dayOfYear":135,"leapYear":false,"chronology":{"calendarType":"iso8601","id":"ISO"}}}
data:{"id":1494887785347,"when":{"year":2017,"month":"MAY","monthValue":5,"dayOfMonth":15,"dayOfWeek":"MONDAY","era":"CE","dayOfYear":135,"leapYear":false,"chronology":{"calendarType":"iso8601","id":"ISO"}}}
...
The similarly trivial Angular 4 client with RxJS:
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit, OnDestroy {
title = 'app works!';
event: Observable<Event>;
subscription: Subscription;
constructor(
private _http: Http
) {
}
ngOnInit() {
this.subscription = this._http
.get("http://localhost:8080/events/322")
.map(response => response.json())
.subscribe(
e => {
this.event = e;
}
);
}
ngOnDestroy() {
this.subscription.unsubscribe();
}
}
works fine for the Mono API:
"http://localhost:8080/events/322"
but the Flux API:
"http://localhost:8080/events"
never triggers the event handler, unlike curl.
Here's a working Angular 4 SSE example as Simon describes in his answer. This took a while to piece together so perhaps it'll be useful to others. The key piece here is Zone -- without Zone, the SSE updates won't trigger Angular's change detection.
import { Component, NgZone, OnInit, OnDestroy } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import { BehaviorSubject } from 'rxjs/BehaviorSubject';
import { Subscription } from 'rxjs/Subscription';
import 'rxjs/add/operator/map';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
event: Observable<MyEvent>;
private _eventSource: EventSource;
private _events: BehaviorSubject<MyEvent> = new BehaviorSubject<MyEvent>(null);
constructor(private _http: Http, private _zone: NgZone) {}
ngOnInit() {
this._eventSource = this.createEventSource();
this.event = this.createEventObservable();
}
private createEventObservable(): Observable<MyEvent> {
return this._events.asObservable();
}
private createEventSource(): EventSource {
const eventSource = new EventSource('http://localhost:8080/events');
eventSource.onmessage = sse => {
const event: MyEvent = new MyEvent(JSON.parse(sse.data));
this._zone.run(()=>this._events.next(event));
};
eventSource.onerror = err => this._events.error(err);
return eventSource;
}
}
The corresponding HTML is simply:
<b>Observable of sse</b>
<div *ngIf="(event | async); let evt; else loading">
<div>ID: {{evt.id}} </div>
</div>
<ng-template #loading>Waiting...</ng-template>
The event is trivial:
export class MyEvent {
id: number;
when: any;
constructor(jsonData) {
Object.assign(this, jsonData);
}
}
and since my TS does not include EventSource or Callback, I stubbed them in:
interface Callback { (data: any): void; }
declare class EventSource {
onmessage: Callback;
onerror: Callback;
addEventListener(event: string, cb: Callback): void;
constructor(name: string);
close: () => void;
}
The Flux-based controller is producing Server-Sent Events (SSE). I don't think the Http client from Angular 2 lets you consume SSE...
Edit: it looks like EventSource is what you need; see this similar question/answer: https://stackoverflow.com/a/36815231/1113486
I am going to guess that the URL for /events is the problem, because it should produce JSON to be handled:
@SpringBootApplication
@RestController
public class ReactiveServiceApplication {
@CrossOrigin
@GetMapping("/events/{id}")
public Mono<Event> eventById(@PathVariable long id) {
return Mono.just(new Event(id, LocalDate.now()));
}
@CrossOrigin
@GetMapping(value = "/events", produces = MediaType.APPLICATION_JSON_VALUE)
public Flux<Event> events() {
Flux<Event> eventFlux = Flux.fromStream(
Stream.generate(
()->new Event(System.currentTimeMillis(), LocalDate.now()))
);
Flux<Long> durationFlux = Flux.interval(Duration.ofSeconds(1));
return Flux.zip(eventFlux, durationFlux).map(Tuple2::getT1);
}
public static void main(String[] args) {
SpringApplication.run(ReactiveServiceApplication.class);
}
}