I need help writing a unit test for the class NotificationHandler. I made a NotificationHandlerTest (using JUnit 4), but I don't know how to determine what I should expect as a result versus what the actual result is, so one or more simple tests (for some of its methods) would help me a lot!
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;
import java.util.List;
import java.util.stream.Collectors;
@Component
class NotificationHandler {
private static Logger LOG = LoggerFactory.getLogger(NotificationHandler.class);
@Autowired
private NotificationRoutingRepository routingRepository;
@Autowired
private SendNotificationGateway gateway;
@Autowired
private AccessService accessService;
@Autowired
private EndpointService endpointService;
@ServiceActivator(inputChannel = Channels.ASSET_MODIFIED_CHANNEL, poller = @Poller("assetModifiedPoller"), outputChannel = Channels.NULL_CHANNEL)
public Message<?> handle(Message<EventMessage> message) {
final EventMessage event = message.getPayload();
LOG.debug("Generate notification messages: {}, {}", event.getOriginType(), event.getType());
routingRepository.findByOriginTypeAndEventType(event.getOriginType(), event.getType()).stream()
.filter(routing -> routing.getOriginId() == null || routing.getOriginId() == event.getOriginId())
.map(routing -> getNotificationMessages(event, routing))
.flatMap(List::stream)
.forEach(notificationMessage -> {
LOG.debug("Sending message {}", notificationMessage);
gateway.send(notificationMessage);
});
return message;
}
private List<NotificationMessage> getNotificationMessages(EventMessage event, NotificationRouting routing) {
switch (routing.getDestinationType()) {
case "USERS":
LOG.trace("Getting endpoints for users");
return getEndpointsByUsers(routing, event.getOrigin(), event.getOriginType()).stream()
.map(endpoint -> new NotificationMessage(event.getOriginType(), event.getOrigin(), endpoint))
.collect(Collectors.toList());
default:
LOG.trace("Getting default endpoints");
return getEndpoints(routing, event.getOrigin(), event.getOriginType()).stream()
.map(endpoint -> new NotificationMessage(event.getOriginType(), event.getOrigin(), endpoint))
.collect(Collectors.toList());
}
}
private List<Endpoint> getEndpoints(NotificationRouting routing, Object origin, String originType) {
final Asset asset = getAssetForObject(origin, originType);
final List<Long> userIds = accessService.list(asset).stream()
.map(ResourceAccess::getUser)
.map(AbstractEntity::getId)
.collect(Collectors.toList());
userIds.add(asset.getCreatorId());
LOG.trace("getEndpoints usersIds {}", userIds);
final List<Endpoint> endpoints = endpointService.getEndpoints(userIds, routing.getEndpointType());
LOG.trace("Endpoints {}", endpoints.stream().map(Endpoint::getId).collect(Collectors.toList()));
return endpoints;
}
private List<Endpoint> getEndpointsByUsers(NotificationRouting routing, Object origin, String originType) {
final Asset asset = getAssetForObject(origin, originType);
final List<Long> userIds = accessService.list(asset).stream()
.map(ResourceAccess::getUser)
.map(AbstractEntity::getId)
.filter(routing.getDestinations()::contains)
.collect(Collectors.toList());
routing.setDestinations(userIds);
routingRepository.save(routing);
LOG.trace("getEndpointsByUsers usersIds {}", userIds);
final List<Endpoint> endpoints = endpointService.getEndpoints(userIds, routing.getEndpointType());
LOG.trace("Endpoints {}", endpoints.stream().map(Endpoint::getId).collect(Collectors.toList()));
return endpoints;
}
private Asset getAssetForObject(Object origin, String originType) {
switch (originType) {
case EventMessage.POINT:
return (Point) origin;
case EventMessage.FEED:
return ((Feed) origin).getPoint();
case EventMessage.ACTUATOR:
return ((Actuator)origin).getPoint();
case EventMessage.DEVICE:
return (Device) origin;
case EventMessage.ALARM:
return ((Alarm) origin).getPoint();
default:
throw new IllegalArgumentException("Unsupported type: " + originType);
}
}
}
I'd say start with a simple test if you're not sure what to test: one that verifies you don't get any exception if you send null as an argument.
E.g.
@Test
public void shouldNotThrowAnyExceptionIfArgumentIsNull() {
// given
NotificationHandler handler = new NotificationHandler();
// when
handler.handle(null);
// then no exception is thrown.
}
After that, you can analyze line by line what the method handle is doing and write tests that verify its behavior.
You can, for example, verify that gateway.send(...) was or was not executed, depending on what you passed in as the parameter.
For dependency mocking and behavior verification, I'd recommend you use Mockito or a similar tool.
You can follow this tutorial to learn how to do it.
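For example, with Mockito you could verify that nothing is sent when the repository returns no routings. The sketch below assumes Mockito 2 with JUnit 4, that the mocks can be injected into the @Autowired fields via @InjectMocks, and that the test class lives in the same package as the (package-private) NotificationHandler; adjust names and types to your real code.
import static org.junit.Assert.assertSame;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Collections;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@RunWith(MockitoJUnitRunner.class)
public class NotificationHandlerTest {

    @Mock
    private NotificationRoutingRepository routingRepository;
    @Mock
    private SendNotificationGateway gateway;
    @Mock
    private AccessService accessService;
    @Mock
    private EndpointService endpointService;

    @InjectMocks
    private NotificationHandler handler;

    @Test
    public void shouldNotSendWhenNoRoutingMatches() {
        // given: no routing is configured for this origin/event type
        when(routingRepository.findByOriginTypeAndEventType(any(), any()))
                .thenReturn(Collections.emptyList());
        EventMessage event = mock(EventMessage.class);
        Message<EventMessage> message = MessageBuilder.withPayload(event).build();

        // when
        Message<?> result = handler.handle(message);

        // then: the gateway is never called and the original message is returned
        verify(gateway, never()).send(any());
        assertSame(message, result);
    }
}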
Is it possible to concisely implement a single endpoint that serves both HAL-JSON and plain JSON in Spring Boot 2? The goal is to have:
curl -v http://localhost:8091/books
return this application/hal+json result:
{
"_embedded" : {
"bookList" : [ {
"title" : "The As",
"author" : "ab",
"isbn" : "A"
}, {
"title" : "The Bs",
"author" : "ab",
"isbn" : "B"
}, {
"title" : "The Cs",
"author" : "cd",
"isbn" : "C"
} ]
}
and for this (and/or the HTTP Accept header since this is a REST API):
curl -v http://localhost:8091/books?format=application/json
to return the plain application/json result:
[ {
"title" : "The As",
"author" : "ab",
"isbn" : "A"
}, {
"title" : "The Bs",
"author" : "ab",
"isbn" : "B"
}, {
"title" : "The Cs",
"author" : "cd",
"isbn" : "C"
} ]
with minimal controller code. These endpoints work as expected:
#GetMapping("/asJson")
public Collection<Book> booksAsJson() {
return _books();
}
#GetMapping("/asHalJson")
public CollectionModel<Book> booksAsHalJson() {
return _halJson(_books());
}
#GetMapping
public ResponseEntity<?> booksWithParam(
#RequestParam(name="format", defaultValue="application/hal+json")
String format) {
return _selectedMediaType(_books(), format);
}
#GetMapping("/asDesired")
public ResponseEntity<?> booksAsDesired() {
return _selectedMediaType(_books(), _format());
}
with the following helpers:
private String _format() {
// TODO: something clever here...perhaps Spring's content-negotiation?
return MediaTypes.HAL_JSON_VALUE;
}
private static <T> CollectionModel<T> _halJson(Collection<T> items) {
return CollectionModel.of(items);
}
private static <T> ResponseEntity<?> _selectedMediaType(
Collection<T> items, String format) {
return ResponseEntity.ok(switch(format.toLowerCase()) {
case MediaTypes.HAL_JSON_VALUE -> _halJson(items);
case MediaType.APPLICATION_JSON_VALUE -> items;
default -> throw _unknownFormat(format);
});
}
but the booksWithParam implementation is too messy to duplicate for each endpoint. Is there a way to get to, or close to, something like the booksAsDesired implementation or something similarly concise?
One way you could tell Spring that you want to support plain JSON is by adding a custom converter for such media types. This can be done by overriding the extendMessageConverters method of WebMvcConfigurer and adding your custom converters there, as in the sample below:
import ...PlainJsonHttpMessageConverter;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.web.config.EnableSpringDataWebSupport;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import java.util.List;
import javax.annotation.Nonnull;
@Configuration
@EnableSpringDataWebSupport
public class WebMvcConfiguration implements WebMvcConfigurer {
@Override
public void extendMessageConverters(@Nonnull final List<HttpMessageConverter<?>> converters) {
converters.add(new PlainJsonHttpMessageConverter());
}
}
The message converter itself is also no rocket science, as can be seen in the PlainJsonHttpMessageConverter sample below:
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.springframework.hateoas.RepresentationModel;
import org.springframework.http.MediaType;
import org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter;
import org.springframework.stereotype.Component;
import javax.annotation.Nonnull;
@Component
public class PlainJsonHttpMessageConverter extends AbstractJackson2HttpMessageConverter {
public PlainJsonHttpMessageConverter() {
super(new ObjectMapper(), MediaType.APPLICATION_JSON);
// add support for date and time format conversion to ISO 8601 and others
this.defaultObjectMapper.registerModule(new JavaTimeModule());
// return JSON payload in pretty format
this.defaultObjectMapper.enable(SerializationFeature.INDENT_OUTPUT);
}
@Override
protected boolean supports(@Nonnull final Class<?> clazz) {
return RepresentationModel.class.isAssignableFrom(clazz);
}
}
This should enable plain JSON support besides HAL-JSON without you having to do any further branching or custom media-type specific conversion within your domain logic or service code.
I.e. let's take a simple task as example case. Within a TaskController you might have a code like this
@GetMapping(path = "/{taskId:.+}", produces = {
MediaTypes.HAL_JSON_VALUE,
MediaType.APPLICATION_JSON_VALUE,
MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE
})
public ResponseEntity<?> task(@PathVariable("taskId") String taskId,
@RequestParam(required = false) Map<String, String> queryParams,
HttpServletRequest request) {
if (queryParams == null) {
queryParams = new HashMap<>();
}
Pageable pageable = RequestUtils.getPageableForInput(queryParams);
final String caseId = queryParams.get("caseId");
...
final Query query = buildSearchCriteria(taskId, caseId, ...);
query.with(pageable);
List<Task> matches = mongoTemplate.find(query, Task.class);
if (!matches.isEmpty()) {
final Task task = matches.get(0);
return ResponseEntity.ok()
.eTag(Long.toString(task.getVersion()))
.body(TASK_ASSEMBLER.toModel(task));
} else {
if (request.getHeader("Accept").contains(MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE)) {
return ResponseEntity.status(HttpStatus.NOT_FOUND)
.contentType(MediaTypes.HTTP_PROBLEM_DETAILS_JSON)
.body(generateNotFoundProblem(request, taskId));
} else {
final String msg = "No task with ID " + taskId + " found";
throw new ResponseStatusException(HttpStatus.NOT_FOUND, msg);
}
}
}
which simply retrieves an arbitrary task via its unique identifier and returns the representation for it according to the one specified in the Accept HTTP header. The TASK_ASSEMBLER here is just a custom Spring HATEOAS RepresentationModelAssembler<Task, TaskResource> class that converts task objects to task resources by adding links for certain related things.
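For completeness, a minimal sketch of what such an assembler could look like is shown below; the TaskResource type, its constructor, and the taskId getter are assumptions here rather than code from the answer, and the two links match the jsonPath assertions in the tests that follow:
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import org.springframework.hateoas.server.RepresentationModelAssembler;

class TaskResourceAssembler implements RepresentationModelAssembler<Task, TaskResource> {

    @Override
    public TaskResource toModel(Task task) {
        // copy the entity's fields onto the resource (mapping omitted for brevity)
        TaskResource resource = new TaskResource(task);
        // "self" points at the single-task endpoint, "up" at the collection
        resource.add(linkTo(methodOn(TaskController.class)
                .task(task.getTaskId(), null, null)).withSelfRel());
        resource.add(linkTo(TaskController.class).withRel("up"));
        return resource;
    }
}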
This can now be easily tested via Spring MVC tests such as
@Test
public void halJson() throws Exception {
given(mongoTemplate.find(any(Query.class), eq(Task.class)))
.willReturn(setupSingleTaskList());
final ResultActions result = mockMvc.perform(
get("/api/tasks/taskId")
.accept(MediaTypes.HAL_JSON_VALUE)
);
result.andExpect(status().isOk())
.andExpect(content().contentType(MediaTypes.HAL_JSON_VALUE));
// uncomment the line below to see the raw payload received
// System.err.println(result.andReturn().getResponse().getContentAsString());
verifyHalJson(result);
}
@Test
public void plainJson() throws Exception {
given(mongoTemplate.find(any(Query.class), eq(Task.class)))
.willReturn(setupSingleTaskList());
final ResultActions result = mockMvc.perform(
get("/api/tasks/taskId")
.accept(MediaType.APPLICATION_JSON_VALUE)
);
result.andExpect(status().isOk())
.andExpect(content().contentType(MediaType.APPLICATION_JSON_VALUE));
// uncomment the line below to see the raw payload received
// System.err.println(result.andReturn().getResponse().getContentAsString());
verifyPlainJson(result);
}
...
private void verifyHalJson(final ResultActions action) throws Exception {
action.andExpect(jsonPath("taskId", is("taskId")))
.andExpect(jsonPath("caseId", is("caseId")))
...
.andExpect(jsonPath("_links.self.href", is(BASE_URI + "/tasks/taskId")))
.andExpect(jsonPath("_links.up.href", is(BASE_URI + "/tasks")));
}
private void verifyPlainJson(final ResultActions action) throws Exception {
action.andExpect(jsonPath("taskId", is("taskId")))
.andExpect(jsonPath("caseId", is("caseId")))
...
.andExpect(jsonPath("links[0].rel", is("self")))
.andExpect(jsonPath("links[0].href", is(BASE_URI + "/tasks/taskId")))
.andExpect(jsonPath("links[1].rel", is("up")))
.andExpect(jsonPath("links[1].href", is(BASE_URI + "/tasks")));
}
Note how links are presented here differently depending on which media type you've selected.
I have a Spring Batch job that is executed with web parameters like this:
https://localhost:8443/batch/async/orz003A?id=123&name=test
I've added these parameters (id and name) to my ExecutionContext.
I'm having trouble accessing them in my Setup Tasklet, seen below.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.stereotype.Component;
import com.yrc.mcc.app.online.NTfp211;
import com.yrc.mcc.core.batch.tasklet.AbstractSetupTasklet;
@Component
public class Tfp211SetupTasklet extends AbstractSetupTasklet {
final static Logger LOGGER = LoggerFactory.getLogger(Tfp211SetupTasklet.class);
String gapId;
@Override
protected RepeatStatus performTask(ExecutionContext ec) {
//TODO create the map, check the params for the needed params
// throw an error if the param doesn't exist, because the param
// is necessary to run the job. If the param does exist, set the specific param
if (ec.isEmpty()) {
LOGGER.info("this shit is empty");
}
//setg on GAPID
gapId = ec.toString();
ec.get(BATCH_PROGRAM_PARAMS);
LOGGER.info(gapId);
ec.put(AbstractSetupTasklet.BATCH_PROGRAM_NAME, NTfp211.class.getSimpleName());
return RepeatStatus.FINISHED;
}
}
Any suggestions?
Edit:
Here is a piece from my AbstractSetupTasklet:
Map<String, String> params = new HashMap<>();
if (!ec.containsKey(BATCH_PROGRAM_PARAMS)) {
ec.put(BATCH_PROGRAM_PARAMS, params);
}
Within each job's SetupTasklet I want to specify the parameters needed for that job.
Edit: I have this tasklet that I believe launches my jobs:
@Component
public class CallM204ProgramTasklet implements Tasklet {
private static final Logger LOGGER = LoggerFactory.getLogger(CallM204ProgramTasklet.class);
@Autowired
private CommonConfig commonConfig;
@Autowired
private ProgramFactory programFactory;
@Autowired
private MidusService midusService;
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
ExecutionContext ec = chunkContext.getStepContext().getStepExecution().getJobExecution().getExecutionContext();
JobParameters jobParameters = chunkContext.getStepContext().getStepExecution().getJobParameters();
jobParameters.getParameters();
String progName = ec.getString(AbstractSetupTasklet.BATCH_PROGRAM_NAME);
Random randomSession = new Random();
String sessionId = "000000" + randomSession.nextInt(1000000);
sessionId = sessionId.substring(sessionId.length() - 6);
SessionData sessionData = new SessionDataImpl("Batch_" + sessionId, commonConfig);
IOHarness io = new BatchIOHarnessImpl(midusService, commonConfig.getMidus().getSendToMidus());
sessionData.setIOHarness(io);
sessionData.setUserId("mccBatch");
Program program = programFactory.createProgram(progName, sessionData);
String progResult = null;
// Create necessary globals for flat file handling.
@SuppressWarnings("unchecked")
Map<String, MccFtpFile> files = (Map<String, MccFtpFile>) ec.get(AbstractSetupTasklet.BATCH_FTP_FILES);
if (files != null) {
for (MccFtpFile mccFtpFile : files.values()) {
program.setg(mccFtpFile.getGlobalName(), mccFtpFile.getLocalFile());
}
}
@SuppressWarnings("unchecked")
Map<String, String> params = (Map<String, String>) ec.get(AbstractSetupTasklet.BATCH_PROGRAM_PARAMS);
//put params into globals
if (params != null) {
params.forEach((k, v) -> program.setg(k, v));
}
try {
program.processUnthreaded(sessionData);
progResult = io.close(sessionData);
} catch (Exception e) {
progResult = "Error running renovated program " + progName + ": " + e.getMessage();
LOGGER.error(progResult, e);
chunkContext.getStepContext().getStepExecution().setExitStatus(ExitStatus.FAILED);
} finally {
String currResult = ec.getString(AbstractSetupTasklet.BATCH_PROGRAM_RESULT).trim();
// Put the program result into the execution context.
ec.putString(AbstractSetupTasklet.BATCH_PROGRAM_RESULT, currResult + "\r" + progResult);
}
return RepeatStatus.FINISHED;
}
}
You need to set up a job launcher and pass the parameters as described in the docs here: https://docs.spring.io/spring-batch/4.0.x/reference/html/job.html#runningJobsFromWebContainer.
After that, you can get access to job parameters in your tasklet from the chunk context. For example:
class MyTasklet implements Tasklet {
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
JobParameters jobParameters = chunkContext.getStepContext().getStepExecution().getJobParameters();
// get id and name from jobParameters
// use id and name to do the required work
return RepeatStatus.FINISHED;
}
}
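To make that comment concrete, reading the two web parameters from the original question could look like this (a sketch; it assumes id and name were registered as job parameters when the job was launched):
// inside execute(), after obtaining jobParameters
String id = jobParameters.getString("id");     // "123" in the example URL
String name = jobParameters.getString("name"); // "test" in the example URL
if (id == null || name == null) {
    throw new IllegalArgumentException("Job parameters 'id' and 'name' are required");
}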
I am trying to understand CompletableFuture in Java 8. As a part of it, I am trying to make some REST calls to solidify my understanding. I am using this library to make REST calls: https://github.com/AsyncHttpClient/async-http-client.
Please note, this library returns a Response object for the GET call.
Following is what I am trying to do:
1. Call this URL, which gives the list of users: https://jsonplaceholder.typicode.com/users
2. Convert the Response to a list of User objects using GSON.
3. Iterate over each User object in the list, get the userID, and then get the list of Posts made by the user from the following URL: https://jsonplaceholder.typicode.com/posts?userId=1
4. Convert each post response to a Post object using GSON.
5. Build a collection of UserPosts objects, each of which has a User object and the list of posts made by the user.
public class UserPosts {
private final User user;
private final List<Post> posts;
public UserPosts(User user, List<Post> posts) {
this.user = user;
this.posts = posts;
}
@Override
public String toString() {
return "user = " + this.user + " \n" + "post = " + posts+ " \n \n";
}
}
I currently have it implemented as follows:
package com.CompletableFuture;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.asynchttpclient.Response;
import com.http.HttpResponse;
import com.http.HttpUtil;
import com.model.Post;
import com.model.User;
import com.model.UserPosts;
/**
* Created by vm on 8/20/18.
*/
class UserPostResponse {
private final User user;
private final Future<Response> postResponse;
UserPostResponse(User user, Future<Response> postResponse) {
this.user = user;
this.postResponse = postResponse;
}
public User getUser() {
return user;
}
public Future<Response> getPostResponse() {
return postResponse;
}
}
public class HttpCompletableFuture extends HttpResponse {
private Function<Future<Response>, List<User>> userResponseToObject = user -> {
try {
return super.convertResponseToUser(Optional.of(user.get().getResponseBody())).get();
} catch (Exception e) {
e.printStackTrace();
return null;
}
};
private Function<Future<Response>, List<Post>> postResponseToObject = post -> {
try {
return super.convertResponseToPost(Optional.of(post.get().getResponseBody())).get();
} catch (Exception e) {
e.printStackTrace();
return null;
}
};
private Function<UserPostResponse, UserPosts> buildUserPosts = (userPostResponse) -> {
try {
return new UserPosts(userPostResponse.getUser(), postResponseToObject.apply(userPostResponse.getPostResponse()));
} catch (Exception e) {
e.printStackTrace();
return null;
}
};
private Function<User, UserPostResponse> getPostResponseForUser = user -> {
Future<Response> resp = super.getPostsForUser(user.getId());
return new UserPostResponse(user, resp);
};
public HttpCompletableFuture() {
super(HttpUtil.getInstance());
}
public List<UserPosts> getUserPosts() {
try {
CompletableFuture<List<UserPosts>> usersFuture = CompletableFuture
.supplyAsync(() -> super.getUsers())
.thenApply(userResponseToObject)
.thenApply((List<User> users)-> users.stream().map(getPostResponseForUser).collect(Collectors.toList()))
.thenApply((List<UserPostResponse> userPostResponses ) -> userPostResponses.stream().map(buildUserPosts).collect(Collectors.toList()));
List<UserPosts> users = usersFuture.get();
System.out.println(users);
return users;
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
}
However, I am not sure if the way I am doing this is right. More specifically, in the userResponseToObject and postResponseToObject functions I am calling get() on the Future, which blocks.
Is there a better way to implement this?
If you plan to use CompletableFuture, you should use the ListenableFuture from the async-http-client library; it can be converted to a CompletableFuture.
The advantage of using CompletableFuture is that you can write logic that deals with the Response object without having to know anything about futures or threads. Suppose you wrote the following four methods, two to make requests and two to parse responses:
ListenableFuture<Response> requestUsers() {
}
ListenableFuture<Response> requestPosts(User u) {
}
List<User> parseUsers(Response r) {
}
List<UserPost> parseUserPosts(Response r, User u) {
}
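For illustration only, the two request methods and a parser could be backed by async-http-client and Gson roughly like the fragment below; the URLs come from the question, while the client and gson fields and the method bodies are assumptions:
// the shared AsyncHttpClient should be closed when the application shuts down
private final AsyncHttpClient client = Dsl.asyncHttpClient();
private final Gson gson = new Gson();

ListenableFuture<Response> requestUsers() {
    return client.prepareGet("https://jsonplaceholder.typicode.com/users").execute();
}

ListenableFuture<Response> requestPosts(User u) {
    return client.prepareGet("https://jsonplaceholder.typicode.com/posts?userId=" + u.getId())
            .execute();
}

List<User> parseUsers(Response r) {
    // Gson maps the JSON array straight onto a List<User>
    return gson.fromJson(r.getResponseBody(), new TypeToken<List<User>>() {}.getType());
}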
Now we can write a non-blocking method that retrieves posts for a given user:
CompletableFuture<List<UserPost>> userPosts(User u) {
return requestPosts(u)
.toCompletableFuture()
.thenApply(r -> parseUserPosts(r, u));
}
and a blocking method to read all posts for all users:
List<UserPost> getAllPosts() {
// issue all requests
List<CompletableFuture<List<UserPost>>> postFutures = requestUsers()
.toCompletableFuture()
.thenApply(userRequest -> parseUsers(userRequest)
.stream()
.map(this::userPosts)
.collect(toList())
).join();
// collect the results
return postFutures.stream()
.map(CompletableFuture::join)
.flatMap(List::stream)
.collect(toList());
}
Depending on the policy you want to use to manage the blocking call, you can explore at least these options:
1) Invoking CompletableFuture's overloaded get method with a timeout:
List<UserPosts> users = usersFuture.get(timeout, unit);
From the documentation:
Waits if necessary for at most the given time for this future to
complete, and then returns its result, if available.
2) Using the alternative method getNow:
List<UserPosts> users = usersFuture.getNow(valueIfAbsent);
Returns the result value (or throws any encountered exception) if
completed, else returns the given valueIfAbsent.
3) Using CompletableFuture instead of Future, you can manually force a blocked get to return by calling complete:
usersFuture.complete(Collections.emptyList()); // or any manually built List<UserPosts> result
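For example, bounding the get() call from the question's getUserPosts() could look like this (a sketch; the 10-second limit and the empty-list fallback are arbitrary choices):
try {
    // wait at most 10 seconds for the whole pipeline to complete
    return usersFuture.get(10, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    // not finished in time: fall back to an empty result (or retry, or fail)
    return usersFuture.getNow(Collections.emptyList());
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
    return null;
}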
When I run the following piece of code:
import io.vertx.core.*;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.core.json.JsonObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
class TestVerticle extends AbstractVerticle {
final Logger logger = LoggerFactory.getLogger(TestVerticle.class.getName());
public static final String ADDRESS = "oot.test";
public void start(Future<Void> startFuture) {
logger.info("starting test verticle");
MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
consumer.handler(message -> {
final JsonObject body = message.body();
logger.info("received: " + body);
JsonObject replyMessage = body.copy();
replyMessage.put("status", "processed");
message.reply(replyMessage);
});
logger.info("started test verticle");
}
}
public class Scratchpad {
private final static Logger logger = LoggerFactory.getLogger(Scratchpad.class);
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
logger.info("deploying test verticle");
Handler<AsyncResult<String>> completionHandler = result -> {
System.out.println("done");
if (result.succeeded()) {
logger.info("deployment result: " + result.result());
} else {
logger.error("failed to deploy: " + result);
}
};
TestVerticle testVerticle = new TestVerticle();
vertx.deployVerticle(testVerticle, completionHandler);
logger.info("deployment completed");
}
}
I expect the CompletionHandler content to be executed and therefore I should get something sent to stdout (at least "done", although the logging should be working as well) but nothing happens. All the other logging information shows up correctly on my screen. What am I doing wrong?
After:
logger.info("started test verticle");
Add:
startFuture.complete();
See the Asynchronous Verticle start and stop section of the docs about the asynchronous start method:
This version of the method takes a Future as a parameter. When the method returns the verticle will not be considered deployed.
Some time later, after you've done everything you need to do (e.g. start other verticles), you can call complete on the Future (or fail) to signal that you're done.
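Putting it together, the start method from the question then looks like this:
public void start(Future<Void> startFuture) {
    logger.info("starting test verticle");
    MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
    consumer.handler(message -> {
        final JsonObject body = message.body();
        logger.info("received: " + body);
        JsonObject replyMessage = body.copy();
        replyMessage.put("status", "processed");
        message.reply(replyMessage);
    });
    logger.info("started test verticle");
    // signal that the verticle is fully started so the completion handler fires
    startFuture.complete();
}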
I am using the Spring Cloud AWS framework to poll for S3 event notifications on a queue, using the QueueMessagingTemplate. I want to be able to set the max number of messages and the wait time for polling; see: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html.
import com.amazonaws.services.s3.event.S3EventNotification;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.aws.messaging.core.QueueMessagingTemplate;
public class MyQueue {
private QueueMessagingTemplate queueMsgTemplate;
@Autowired
public MyQueue(QueueMessagingTemplate queueMsgTemplate) {
this.queueMsgTemplate = queueMsgTemplate;
}
@Override
public S3EventNotification poll() {
S3EventNotification s3Event = queueMsgTemplate
.receiveAndConvert("myQueueName", S3EventNotification.class);
return s3Event;
}
}
Context
@Bean
public AWSCredentialsProviderChain awsCredentialsProviderChain() {
return new AWSCredentialsProviderChain(
new DefaultAWSCredentialsProviderChain());
}
@Bean
public ClientConfiguration clientConfiguration() {
return new ClientConfiguration();
}
@Bean
public AmazonSQS sqsClient(ClientConfiguration clientConfiguration,// UserData userData,
AWSCredentialsProviderChain credentialsProvider) {
AmazonSQSClient amazonSQSClient = new AmazonSQSClient(credentialsProvider, clientConfiguration);
amazonSQSClient.setEndpoint("http://localhost:9324");
return amazonSQSClient;
}
@Bean
public QueueMessagingTemplate queueMessagingTemplate(AmazonSQS sqsClient) {
return new QueueMessagingTemplate(sqsClient);
}
Any idea how to configure these? Thanks
QueueMessagingTemplate.receiveAndConvert() is based on the QueueMessageChannel.receive() method, where you can find the relevant code:
@Override
public Message<String> receive() {
return this.receive(0);
}
@Override
public Message<String> receive(long timeout) {
ReceiveMessageResult receiveMessageResult = this.amazonSqs.receiveMessage(
new ReceiveMessageRequest(this.queueUrl).
withMaxNumberOfMessages(1).
withWaitTimeSeconds(Long.valueOf(timeout).intValue()).
withAttributeNames(ATTRIBUTE_NAMES).
withMessageAttributeNames(MESSAGE_ATTRIBUTE_NAMES));
if (receiveMessageResult.getMessages().isEmpty()) {
return null;
}
com.amazonaws.services.sqs.model.Message amazonMessage = receiveMessageResult.getMessages().get(0);
Message<String> message = createMessage(amazonMessage);
this.amazonSqs.deleteMessage(new DeleteMessageRequest(this.queueUrl, amazonMessage.getReceiptHandle()));
return message;
}
So, as you see, withMaxNumberOfMessages(1) is hardcoded to 1, and that is correct because receive() can only poll one message. The withWaitTimeSeconds(Long.valueOf(timeout).intValue()) is exactly the provided timeout, but you can't supply it when calling receiveAndConvert().
Consider using QueueMessageChannel.receive(long timeout) and the messageConverter from the QueueMessagingTemplate, like it is done in:
public <T> T receiveAndConvert(QueueMessageChannel destination, Class<T> targetClass) throws MessagingException {
Message<?> message = destination.receive();
if (message != null) {
return (T) getMessageConverter().fromMessage(message, targetClass);
} else {
return null;
}
}
You can reach an appropriate QueueMessageChannel via code:
String physicalResourceId = this.destinationResolver.resolveDestination(destination);
new QueueMessageChannel(this.amazonSqs, physicalResourceId);
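A poll method built on that could then look like the sketch below. It is an illustration, not the framework's API: it assumes your class keeps the AmazonSQSAsync client, a DestinationResolver, and a suitable MessageConverter (for example a Jackson-based one) as fields, and the 20-second wait is an arbitrary choice:
public S3EventNotification poll() {
    // resolve the logical queue name to its physical URL, as shown above
    String physicalResourceId = this.destinationResolver.resolveDestination("myQueueName");
    QueueMessageChannel channel = new QueueMessageChannel(this.amazonSqs, physicalResourceId);

    // long-poll for up to 20 seconds; receive() always asks SQS for a single message
    Message<String> message = channel.receive(20);
    if (message == null) {
        return null;
    }
    return (S3EventNotification) this.messageConverter.fromMessage(message, S3EventNotification.class);
}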