I want to draw five different routes with the Google Maps API v3 in GWT 2.5.1. I initialize each route, which sets its DirectionDisplay and DirectionsRequest, in this class.
When I start my web project, sometimes only my first route is shown and sometimes all five, so I decided to add a System.out.print(m); to the callback.
The results:
01234 -> as expected, all routes shown
10234 -> error, only first route shown.
Why does Google serve my second request before my first? I tried using Thread.sleep(1000) to ensure that my requests have time to come back in order, and also Timer/TimerTask, without success. Any ideas?
DirectionsService o = DirectionsService.newInstance();
for (int i = 0; i < 5; i++) { // routes.size()
    final int m = i;
    final Route route = new Route("Route " + i);
    route.initRoute(m, getRoutingPresenter(), adressData, addressIndex);
    // here I initialize the DirectionsRequests and their Displays, which
    // I set in this class after execution.
    o.route(directionsRequest, new DirectionsResultHandler() {
        @Override
        public void onCallback(DirectionsResult result, DirectionsStatus status) {
            if (status == DirectionsStatus.OK) {
                System.out.print(m);
                ...
            }
        }
    });
}
Google can take as long as they like to handle your requests and you should code accordingly. This is true of any HTTP traffic. Even if the remote server guaranteed a fixed service time for all requests, the Internet does not and your requests could be taking any old route through it.
You can either write your handling code so that the response order doesn't matter, or write it so that it waits until all responses are back and then sorts out the order itself.
I would recommend the first unless there are very specific and important reasons not to.
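For illustration, here is a minimal sketch of the order-independent approach: each callback captures its own index m and hands the result straight to that route's own display, so it no longer matters which response arrives first (the results array and the getDisplay()/setDirections(...) calls are assumptions based on the question's description, not a known API):

final DirectionsResult[] results = new DirectionsResult[5]; // one slot per route

for (int i = 0; i < 5; i++) {
    final int m = i;
    final Route route = new Route("Route " + i);
    route.initRoute(m, getRoutingPresenter(), adressData, addressIndex);
    o.route(directionsRequest, new DirectionsResultHandler() {
        @Override
        public void onCallback(DirectionsResult result, DirectionsStatus status) {
            if (status == DirectionsStatus.OK) {
                results[m] = result;                      // arrival order no longer matters
                route.getDisplay().setDirections(result); // hypothetical: render on this route's own display
            }
        }
    });
}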
I am trying to build a live-streaming example app where I can live-update a list in the browser. I want to return all existing elements and then keep listening (not close the stream) so that when a new item is added to the database it is shown in the browser as well. My current solution prints all items over and over (second by second), but I think there is a better solution, where I can either a) find the difference between the list from the last repository.findAll() and return only currList - prevList, or b) listen to some kind of event, like an insert into the table, and add the new item to the still-open stream.
Here is my current code:
@RestController
@RequestMapping("/songs")
public class SongController {

    private final SongRepository songRepository;

    public SongController(SongRepository songRepository) {
        this.songRepository = songRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Song> getAllSongs() {
        return Flux.interval(Duration.ofSeconds(1))
                .flatMap(x -> songRepository.findAll());
    }

    @PostMapping
    public Mono<Song> addSong(@RequestBody Song song) {
        return songRepository.save(song);
    }
}
Here is how it looks now:
As you can see, it's obviously looped, and I just need a plain list with 7 elements at the beginning and then +1 element every time I POST a new song (via addSong()).
I don't need an entire ready-made solution, I just don't know what I should use.
Thank you in advance, cheers
In my experience there are three options that have different pros and cons.
You could create a WebSocket connection from the browser to your backend service. This creates a bi-directional connection that allows you to push updates from the server to your browser. In this case, whenever you add a song you would write that song to the WebSocket connection and handle it on the browser side, adding it to the list in the browser.
The con is that, in my experience, WebSocket connections are finicky and aren't the most stable or reliable.
You could use server-sent events. I haven't used this personally, but I have heard it can be a viable option for pushing events from the server to the browser (a server-side sketch of this idea, adapted to your Flux code, follows the options below): https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
You could poll the endpoint. I know this approach has gotten a lot of hate in recent years, but it is a viable option. The benefit of polling the endpoint is that it is resilient to failures: if your backend is overloaded and can't respond to one request, it will likely be able to respond to a subsequent one. Also, there are ways of improving commonly used endpoints so you're not hammering your database, such as a cache or something of that nature.
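To make the streaming idea concrete for the code in the question, here is a rough server-side sketch using Reactor's Sinks (the songSink field and the concatWith wiring are my own additions, not something the repository provides): the GET stream first replays what is already in the table, then stays open and receives every song pushed by addSong().

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;

@RestController
@RequestMapping("/songs")
public class SongController {

    private final SongRepository songRepository;

    // broadcasts every newly saved song to all currently open streams
    private final Sinks.Many<Song> songSink = Sinks.many().multicast().onBackpressureBuffer();

    public SongController(SongRepository songRepository) {
        this.songRepository = songRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Song> getAllSongs() {
        // emit everything currently in the table, then keep the stream open for new inserts
        return songRepository.findAll().concatWith(songSink.asFlux());
    }

    @PostMapping
    public Mono<Song> addSong(@RequestBody Song song) {
        // push the saved song to every open stream as a side effect
        return songRepository.save(song).doOnNext(songSink::tryEmitNext);
    }
}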
I'm writing a Spring Boot based project where I have some synchronous (e.g. REST API calls) and asynchronous (JMS) pieces of code (the broker I use is a dockerized instance of ActiveMQ, in case there's some kind of trick/workaround).
One of the problems I'm currently struggling with: my application receives a REST API call (I'll call it "a sync call"), does some processing and then sends a JMS message to a queue (async), which is then handled and processed (let's say I have a heavy load to perform, which is why I want it to be async).
Everything works fine when running the application; async messages are enqueued and dequeued as expected.
When I'm writing tests (and I'm testing the whole service, which includes the sync and async calls in rapid succession), it happens that the test code is too fast and the message is still waiting to be dequeued (we are talking about milliseconds, but that's the problem).
Basically, as soon as I receive the response from the API call, the message is still in the queue, so if, for example, I make a query to check for its existence -> ka-boom, the test fails because (obviously) it doesn't find the object (which probably, in the meantime, is being processed and created).
Is there any way, or any pattern, I can use to make my test wait for that async message to be dequeued? I can attach code from my implementation if needed; it's a bachelor's degree thesis project.
One obvious solution I'm temporarily using is adding a hundred-millisecond sleep between the method call and the assert section (hoping everything is done and persisted), but honestly I dislike this solution because it seems so non-deterministic to me. Also, creating a latch shared between development code and testing doesn't sound really good to me.
Here's the code I use as an entry point to all the mess I explained before:
public TransferResponseDTO transfer(Long userId, TransferRequestDTO transferRequestDTO) {
    // Preconditions.checkArgument(transferRequestDTO.amount.compareTo(BigDecimal.ZERO) < 0);
    Preconditions.checkArgument(userHelper.existsById(userId));
    Preconditions.checkArgument(walletHelper.existsByUserIdAndSymbol(userId, transferRequestDTO.symbol));

    TransferMessage message = new TransferMessage();
    message.userId = userId;
    message.symbol = transferRequestDTO.symbol;
    message.destination = transferRequestDTO.destination;
    message.amount = transferRequestDTO.amount;
    messageService.send(message);

    TransferResponseDTO response = new TransferResponseDTO();
    response.status = PENDING;
    return response;
}
And here's the code that receives the message (although you wouldn't need it):
public void handle(TransferMessage transferMessage) {
    Wallet source = walletHelper.findByUserIdAndSymbol(transferMessage.userId, transferMessage.symbol);
    Wallet destination = walletHelper.findById(transferMessage.destination);

    try {
        walletHelper.withdraw(source, transferMessage.amount);
    } catch (InsufficientBalanceException ex) {
        String u = userHelper.findEmailByUserId(transferMessage.userId);
        EmailMessage email = new EmailMessage();
        email.subject = "Insufficient Balance in your account";
        email.to = u;
        email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been DECLINED due to insufficient balance.";
        messageService.send(email);
        return; // without this, the deposit and the ACCEPTED email below would still run
    }

    walletHelper.deposit(destination, transferMessage.amount);

    String u = userHelper.findEmailByUserId(transferMessage.userId);
    EmailMessage email = new EmailMessage();
    email.subject = "Transfer executed";
    email.to = u;
    email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been ACCEPTED.";
    messageService.send(email);
}
I'm sorry if the code looks a little sketchy or wrong; it's a primordial implementation.
I'm willing to write a utility to share with you all if that's the case, but, as you've probably noticed, I'm low on ideas right now.
I'm an ActiveMQ developer working mainly on ActiveMQ Artemis (the next-gen broker from ActiveMQ). We run into this kind of problem all the time in our test-suite given the asynchronous nature of the broker, and we developed a little utility class that automates & simplifies basic polling operations.
For example, starting a broker is asynchronous so it's common for our tests to include an assertion to ensure the broker is started before proceeding. Using old-school Java 6 syntax it would look something like this:
Wait.assertTrue(new Condition() {
    @Override
    public boolean isSatisfied() throws Exception {
        return server.isActive();
    }
});
Using a Java 8 lambda would look like this:
Wait.assertTrue(() -> server.isActive());
Or using a Java 8 method reference:
Wait.assertTrue(server::isActive);
The utility is quite flexible as the Condition you use can test anything you want as long as it ultimately returns a boolean. Furthermore, it is deterministic unlike using Thread.sleep() (as you noted) and it keeps testing code separate from the "product" code.
In your case you can check to see if the "object" being created by your JMS process can be found. If it's not found then it can keep checking until either the object is found or the timeout elapses.
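If you don't want to pull in the broker's test utilities, the idea is easy to reproduce by hand. A minimal sketch (class name, timeout and poll interval are mine):

import java.util.function.BooleanSupplier;

public final class Wait {

    // polls the condition until it returns true or the timeout elapses
    public static void assertTrue(BooleanSupplier condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return;
            }
            Thread.sleep(pollMillis);
        }
        throw new AssertionError("Condition not satisfied within " + timeoutMillis + " ms");
    }
}

In your test that would be something like Wait.assertTrue(() -> queryThatFindsTheTransferredObject(), 5_000, 50);, where the condition is whatever existence check your assertion was already doing (the query name here is just a placeholder).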
I have the following for loop in a function that I intended to parallelize, but I'm not sure whether the overhead of multiple threads will outweigh the benefit of concurrency.
All I need is to send different log files to corresponding receivers. For the time being, let's say the number of receivers won't be more than 10. Instead of sending the log files back to back, is it more efficient to send them all in parallel?
for (int i = 0; i < receiversList.size(); i++) {
    String receiverURL = serverURL + receiversList.get(i);
    HttpPost method = new HttpPost(receiverURL);
    String logPath = logFilesPath + logFilesList.get(i);
    messagesList = readMsg(logPath);
    for (String message : messagesList) {
        StringEntity entity = new StringEntity(message);
        log.info("Sending message:");
        log.info(message + "\n");
        method.setEntity(entity);
        if (receiverURL.startsWith("https")) {
            processAuthentication(method, username, password);
        }
        httpClient.execute(method).getEntity().getContent().close();
    }
    Thread.sleep(500); // Waiting time for the message to be sent
}
Also, please tell me how I can make it parallel if it is going to work. Should I do it manually or use an ExecutorService?
All I need is to send different log files to corresponding receivers. For the time being lets say number of receivers won't be more than 10. Instead of sending log files back to back, is it more efficient if I send them all parallel?
There are a lot of questions to be asked before we can determine whether doing this in parallel will buy you anything. You mentioned "receivers", but are you really talking about different receiving servers at different web addresses, or are all threads sending their log files to the same server? If it is the latter, then chances are you will get very little improvement in speed from concurrency. A single thread should be able to fill the network pipeline just fine.
Also, you probably would get no speedup if the messages are small. Only large messages take any real time and would give you any true savings if they were sent in parallel.
I'm most familiar with the ExecutorService classes. You could do something like:
ExecutorService threadPool = Executors.newFixedThreadPool(10);
...
threadPool.submit(new Runnable() {
    // you could create your own Runnable class if each one needs its own httpClient
    @Override
    public void run() {
        StringEntity entity = new StringEntity(message);
        ...
        // we assume that the client is some sort of pooling client
        httpClient.execute(method).getEntity().getContent().close();
    }
});
What would work well is to queue up these messages and send them from a background thread so you don't slow down your program. You could submit the messages to the threadPool and keep moving, or you could put them in a BlockingQueue<String> and have a thread taking from the BlockingQueue and calling httpClient.execute(...), as in the sketch below.
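A rough sketch of that queue-and-background-thread idea (the class name and the send stub are mine; the send logic would be the HttpPost code from your question):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BackgroundSender {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public BackgroundSender() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String message = queue.take(); // blocks until a message is available
                    send(message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // let the thread exit on shutdown
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // called from your main code path; returns immediately
    public void enqueue(String message) {
        queue.offer(message);
    }

    private void send(String message) {
        // build the StringEntity, set it on the HttpPost and execute it here
    }
}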
More implementation details from this good ExecutorService tutorial.
Lastly, how about putting all of your messages into one entity and dividing them on the server? That would be the most efficient approach, although you might not control the server handler code.
An ExecutorService is certainly an option. You have four ways to do it in Java:
Using Threads directly (this exposes too many details and makes it easy to make mistakes)
ExecutorService, as you have already mentioned, which comes from Java 5
Here is a tutorial demonstrating ExecutorService: http://tutorials.jenkov.com/java-util-concurrent/executorservice.html
The Fork/Join framework, which comes from Java 7
Parallel streams, which come from Java 8; below is a solution using parallel streams
Going for a higher-level API will spare you some errors you might otherwise make.
IntStream.range(0, receiversList.size())
        .parallel()
        .forEach(i -> {
            // stream over indices so each receiver stays paired with its own log file
            String receiverURL = serverURL + receiversList.get(i);
            String logPath = logFilesPath + logFilesList.get(i);
            try {
                for (String message : readMsg(logPath)) {
                    // assumes httpClient is thread-safe (e.g. a pooling CloseableHttpClient)
                    HttpPost method = new HttpPost(receiverURL);
                    StringEntity entity = new StringEntity(message);
                    log.info("Sending message:");
                    log.info(message + "\n");
                    method.setEntity(entity);
                    if (receiverURL.startsWith("https")) {
                        processAuthentication(method, username, password);
                    }
                    httpClient.execute(method).getEntity().getContent().close();
                }
            } catch (Exception e) {
                log.error("Failed to send messages to " + receiverURL, e);
            }
        });
I am using Spring @RequestMapping for synchronous REST services consuming and producing JSON. I now want to add asynchronous responses where the client sends a list of ids and the server sends back the details as it gets them, to only that one client.
I've been searching for a while and have not found what I am looking for. I have seen two different approaches for Spring. The most common is a message broker approach, where it appears that everybody gets every message by subscribing to a queue or topic. This is VERY unacceptable since this is private data. I also have a finite number of data points to return. The other approach is a Callable, AsyncResult or DeferredResult. This appears to keep the data private, but I want to send multiple responses.
I have seen something similar to what I want, but it uses Jersey SSE on the server. I would like to stick with Spring.
This is what I currently have, in pseudo-code:
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public GetAccountBalancesResponse getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    GetAccountBalancesResponse ret = new GetAccountBalancesResponse();
    ret.setBalances(synchronousService.getBalances(request.getIds()));
    return ret;
}
This is what I am looking to do. It is rather rough since I have no clue of the details. Once I figure out the sending, I will work on the asynchronous part, but I would take any suggestions.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public ???<BalanceDetails> getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    final ???<BalanceDetails> ret = new ???<>();
    new Thread(new Runnable() {
        public void run() {
            List<Future<BalanceDetails>> futures = asynchronousService.getBalances(request.getIds());
            while (!stillWaiting(futures)) {
                // Probably use something like a Condition to block until there are some details.
                ret.send(getFinishedDetails(futures));
            }
            ret.close();
        }
    }).start();
    return ret;
}
Thanks, Wes.
It doesn't work like this: you are using plain Spring controller actions, which are intended to be processed in a single thread that possibly blocks until the full request is computed. You don't create threads inside controllers - or at least not in that way.
If the computation takes really long and you want to give your users visual feedback, these are the steps:
optimize the procedure :) use indices, caching, whatever
if that still isn't enough, the computation still takes forever and users demand feedback, you'll have two options
poll with JavaScript and show visual feedback (easier). Basically you submit the task to a thread pool and return immediately, and there's another controller method that reads the current state of the computation and returns it to the user. This method is called by JavaScript every 10 seconds or so (a rough sketch follows this list)
use a backchannel (server push, WebSocket) - not so easy, because you have to implement both the client and the server part. Obviously there are libraries and protocols that will make this only a handful of lines of code, but if you have never tried it before, you'll spend some time understanding the setup - plus debugging WebSockets is not as easy as regular HTTP because of the debugging tools
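A rough sketch of the polling option (the controller name, paths and worker method are illustrative, not taken from your code): one endpoint submits the work to a thread pool and returns a token immediately, another reports the state of that computation so the page can poll it.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/tasks")
public class TaskController {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> tasks = new ConcurrentHashMap<>();

    // submit the long-running computation and return a token immediately
    @RequestMapping(method = RequestMethod.POST)
    public String submit() {
        String id = UUID.randomUUID().toString();
        tasks.put(id, pool.submit(this::doExpensiveWork));
        return id;
    }

    // the JavaScript on the page polls this every few seconds
    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public String status(@PathVariable String id) throws Exception {
        Future<String> future = tasks.get(id);
        if (future == null) {
            return "UNKNOWN";
        }
        return future.isDone() ? future.get() : "RUNNING";
    }

    private String doExpensiveWork() {
        // placeholder for the real computation
        return "the computed result";
    }
}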
Took a lot of digging, but it looks like Spring Web 4.2 does support server-sent events. I was using Spring Boot 1.2.7, which uses Spring Web 4.1.7. Switching to Spring Boot 1.3.0.RC1 adds the SseEmitter.
Here is my pseudo code.
@RequestMapping(value = BASE_PATH + "/getAccountBalances", method = RequestMethod.GET)
public SseEmitter getAccountBalances(@QueryParam("accountId") Integer[] accountIds) {
    final SseEmitter emitter = new SseEmitter();
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                for (int xx = 0; xx < accountIds.length; xx++) {
                    Thread.sleep(2000L + rand.nextInt(2000));
                    BalanceDetails balance = new BalanceDetails();
                    ...
                    emitter.send(SseEmitter.event().name("accountBalance").id(String.valueOf(accountIds[xx]))
                            .data(balance, MediaType.APPLICATION_JSON));
                }
                emitter.send(SseEmitter.event().name("complete").data("complete"));
                emitter.complete();
            } catch (Exception ee) {
                ee.printStackTrace();
                emitter.completeWithError(ee);
            }
        }
    }).start();
    return emitter;
}
Still working out closing the channel gracefully and parsing the JSON object using Jersey EventSource but it is a lot better than a message bus.
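For what it's worth, consuming the stream with Jersey 2's EventSource could look roughly like this (this assumes the jersey-media-sse module on the classpath; the URL and the println are placeholders):

Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
WebTarget target = client.target("http://localhost:8080/base/getAccountBalances?accountId=1&accountId=2");

EventSource eventSource = EventSource.target(target).build();
eventSource.register(inboundEvent ->
        System.out.println(inboundEvent.getName() + ": " + inboundEvent.readData(String.class)));
eventSource.open();
// ... later, once the "complete" event has been seen or the client is shutting down:
// eventSource.close();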
Also spawning a new thread and using a sleep are just for the POC. I wouldn't need either since we already have an asynchronous process to access a slow back end system.
Wes.
I am developing a REST based web application which will call a third system asynchronously for some data (using websockets). So:
Browser -> REST -> My WebApp -> Another App -> My WebApp -> Browser
The communication between My WebApp and Another App is asynchronous, and I can only track the responses for a request using some identifiers.
So I send request C as <counter>.C and the response will be <counter>.Response, where both counters are the same.
To map the response to the request, I set the command, counter and flag on a bean. I keep a while loop that keeps checking whether the flag has been set. Once I get the response, the flag is set, the while loop exits and I know that the data is available.
Is this the right way? Is there a way I can make this better? I feel (I might be wrong!) that keeping an open while loop is incorrect.
The bean is set like below:
public void setAllProperties() {
    bean.setCommand(commandString);
    bean.setCounter(counter);
    bean.hasResponse(false);
}
The snippet in the web service is:
bean.setAllProperties();
sendToApplication(bean);
int checkCounter = 0;
// busy-wait until the response flag is set by the code handling <counter>.Response
while (!bean.hasResponse) {
    checkCounter++;
    // loggers and other logic here
}
The loop defeats a lot of the value of the asynchronous operation. It also consumes a significant amount of CPU time (try adding a delay - a quick sleep - when using such a loop).
I recommend using "wait()" and "notify()"/"notifyAll()" instead.
In the code that's waiting for a response, do something like this:
synchronized ( bean ) {
    while ( ! bean.hasResponse ) {
        bean.wait();
    }
}
In the code that processes the response and updates the bean:
synchronized ( bean ) {
    bean.hasResponse = true;
    bean.notifyAll();
}
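For completeness, a small sketch of how the bean could encapsulate this, including a timeout so the web service thread can never hang forever (the method names are illustrative, not part of your existing bean):

public class ResponseBean {

    private boolean hasResponse;

    // called by the web service thread; blocks until the response arrives or the timeout elapses
    public synchronized boolean awaitResponse(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!hasResponse) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false; // timed out
            }
            wait(remaining);
        }
        return true;
    }

    // called by the code that receives <counter>.Response from the other application
    public synchronized void markResponse() {
        hasResponse = true;
        notifyAll();
    }
}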