Retrofit: invoking an asynchronous call synchronously - java

This looks weird, but I ended up in this situation. I implemented RESTful API calls asynchronously using Retrofit. Now there is a sudden requirement change and I have to call the APIs one after the other (one at a time), so that in the second API call I can send the session token received from the previous response. One way is to make every API call synchronous, but that change takes time to implement.
I have tried:
1. Using setExecutor(Executors.newSingleThreadExecutor(), new MainThreadExecutor()) on RestAdapter.Builder. This didn't work since the API calls are asynchronous: the second call is made before the response to the previous call arrives, so the second request has an invalid session token.
2. Using Executors.newSingleThreadExecutor() in the class where I have implemented all the RESTful web services; this also didn't work, for the same reason.
Could anybody suggest how to resolve this with minimal changes?
The WebServiceManager is below; it is partial and there are many more APIs like login:
public class WebServiceManager {
    private static final String ROOT_PATH = Urls.REST_ROOT_URL;

    RestAdapter restAdapter;
    WebServiceInterface webServiceInterface;
    private String requestKey;
    private String sessionId;
    private Context context;

    public WebServiceManager(Context context) {
        this.context = context;
        initializeWebServiceAdapter();
    }

    private void initializeWebServiceAdapter() {
        restAdapter = new RestAdapter.Builder()
                .setEndpoint(ROOT_PATH)
                .setLogLevel(RestAdapter.LogLevel.FULL)
                .build();
        webServiceInterface = restAdapter.create(WebServiceInterface.class);
    }

    private void setHeaderValues(BaseModel model) {
        SessionManager sm = context.getApplicationContext().getSessionManager();
        model.getRequestHeader().setRequestKey(sm.getRequestKey());
        model.getRequestHeader().setSessionId(sm.getSessionId());
    }

    public void login(String emailID, String passwd, final WebServiceCallback loginModelWebServiceCallback) {
        LoginModel model = RestRequest.getLoginModel(emailID, passwd);
        setHeaderValues(model);
        webServiceInterface.login(model, new Callback<LoginModel>() {
            @Override
            public void success(LoginModel loginModel, Response response) {
                if (loginModelWebServiceCallback != null) {
                    SessionManager sm = context.getApplicationContext().getSessionManager();
                    sm.setSessionDetails(response.getRequestKey(), response.getSessionId());
                    loginModelWebServiceCallback.success(loginModel);
                }
            }

            @Override
            public void failure(RetrofitError error) {
                if (loginModelWebServiceCallback != null) {
                    loginModelWebServiceCallback.failure(error);
                }
            }
        });
    }
}

The Executor doesn't matter since you're always invoking the Retrofit service with the Callback argument, which makes it asynchronous. If you want your Retrofit call to be synchronous then the service call method needs a return type, not void. You can read this in the docs.
Once you make your API calls synchronous and ordered how you want, you can wrap them in a Runnable and let an Executor handle the threading for you.
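For illustration, a minimal sketch of that approach, assuming Retrofit 1.x; SyncWebServiceInterface, SessionResponse, ProfileResponse and the endpoint paths are hypothetical placeholders, not part of the original code:
// Sketch only (Retrofit 1.x annotations from retrofit.http; the types and paths are placeholders).
public interface SyncWebServiceInterface {
    @POST("/login")
    SessionResponse login(@Body LoginModel model);                      // blocks and returns the body

    @GET("/profile")
    ProfileResponse getProfile(@Header("Session-Id") String sessionId); // reuses the session token
}
The ordered, blocking calls can then be wrapped in a Runnable and handed to a single-threaded Executor (java.util.concurrent.Executors), which also keeps the network work off the Android main thread:
final SyncWebServiceInterface syncService = restAdapter.create(SyncWebServiceInterface.class);
Executors.newSingleThreadExecutor().execute(new Runnable() {
    @Override
    public void run() {
        SessionResponse session = syncService.login(model);                       // first call completes...
        ProfileResponse profile = syncService.getProfile(session.getSessionId()); // ...before the second starts
        // post the results back to the main thread if the UI needs them
    }
});
getSessionId() on the hypothetical SessionResponse is likewise an assumption; the point is only that each call returns before the next one is issued.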

The second request can be made from the success callback of the first API, when some parameter from the first response is needed for the second API call. Have a look at the sample:
public void login(String emailID, String passwd, final WebServiceCallback loginModelWebServiceCallback) {
    LoginModel model = RestRequest.getLoginModel(emailID, passwd);
    setHeaderValues(model);
    webServiceInterface.login(model, new Callback<LoginModel>() {
        @Override
        public void success(LoginModel loginModel, Response response) {
            if (loginModelWebServiceCallback != null) {
                makeSecondAPIcall();
            }
        }

        @Override
        public void failure(RetrofitError error) {
            if (loginModelWebServiceCallback != null) {
                loginModelWebServiceCallback.failure(error);
            }
        }
    });
}

Related

Apache Avro IPC server side async

I'm trying to implement an async server application using Avro IPC. As far as I've researched, it is possible to make asynchronous client calls by calling the method of the generated Callback protocol class like this:
My generated protocol class:
@org.apache.avro.specific.AvroGenerated
public interface TestProtocol {
    public static final org.apache.avro.Protocol PROTOCOL = /* omitted */;

    ResponseTest send(MessageTest TestMsg);

    @org.apache.avro.specific.AvroGenerated
    public interface Callback extends TestProtocol {
        public static final org.apache.avro.Protocol PROTOCOL = TestProtocol.PROTOCOL;

        void send(MessageTest TestMsg, org.apache.avro.ipc.Callback<ResponseTest> callback) throws java.io.IOException;
    }
}
And the implementation of this protocol:
public final class TestProtocolImplAsync implements TestProtocol.Callback {
    @Override
    public @NotNull ResponseTest send(@NotNull MessageTest TestMsg) {
        return new ResponseTest("Sync call");
    }

    @Override
    public void send(@NotNull MessageTest TestMsg,
                     @NotNull org.apache.avro.ipc.Callback<ResponseTest> callback) {
        callback.handleResult(new ResponseTest("Async call"));
    }
}
The implementation of TestProtocol is bound to TestProtocolImpl on the server side. However, when calling it on the client side this way:
SpecificRequestor.getClient(TestProtocol.Callback.class, client).send(new MessageTest(/* params */), new Callback<ResponseTest>() {
    @Override
    public void handleResult(@NotNull ResponseTest result) {
        System.out.println(result.toString());
    }

    @Override
    public void handleError(@NotNull Throwable error) {
        // whatever
    }
});
I keep getting the sync server method called. I haven't found any info about this in the documentation, but am I right that the callback method is only meant to be implemented on the async client side, not the server, and that it is impossible to process the request asynchronously on the server this way and invoke the callback method from the server side? Or am I missing something in my server settings?

How to access incoming HTTP requests in X-Ray SegmentListener?

Issue
I use the AWS X-Ray SDK for Java to enable X-Ray tracing for my Spring Boot microservices.
With the following snippet I am able to attach a custom SegmentListener:
final AWSXRayRecorder recorder = AWSXRayRecorderBuilder
.standard()
.withPlugin(new EcsPlugin())
.withSegmentListener(new SLF4JSegmentListener())
.withSegmentListener(new MyHttpHeaderSegementListener())
.build();
AWSXRay.setGlobalRecorder(recorder);
In MyHttpHeaderSegementListener I try to inject an X-Ray annotation based on an incoming HTTP request header (from the frontend):
public class MyHttpHeaderSegementListener implements SegmentListener {

    // snippet source: https://stackoverflow.com/a/54349178/6489012
    public static Optional<HttpServletRequest> getCurrentHttpRequest() {
        return Optional.ofNullable(RequestContextHolder.getRequestAttributes())
                .filter(ServletRequestAttributes.class::isInstance)
                .map(ServletRequestAttributes.class::cast)
                .map(ServletRequestAttributes::getRequest);
    }

    public MyHttpHeaderSegementListener() {}

    @Override
    public void onBeginSegment(final Segment segment) {
        final var httpContext = MyHttpHeaderSegementListener.getCurrentHttpRequest();
        httpContext.ifPresent(context -> segment.putAnnotation("Origin", context.getHeader("Origin")));
    }
}
The segment listener's onBeginSegment is triggered as expected, but MyHttpHeaderSegementListener.getCurrentHttpRequest() always returns an empty Optional.
Questions
1. Is there a possibility to inspect incoming HTTP requests (as they were received by a Controller) within a SegmentListener?
2. Does aws-xray-sdk-java maybe even support a native way to do so?
3. Why is the request retrieved from RequestContextHolder always empty?
4. (A bit off-topic, but is it even good practice to set an annotation based on an HTTP header?)
I have no answer to questions 2 and 3, but I found an answer to question 1.
For incoming requests you need to add a Spring Filter to configure AWS X-Ray. As filters have access to the HTTP request, I just wrapped my own filter around AWS's com.amazonaws.xray.javax.servlet.AWSXRayServletFilter:
public class XRayServletFilter extends AWSXRayServletFilter {

    public XRayServletFilter(String fixedSegmentName) {
        super(fixedSegmentName);
    }

    @Override
    public void doFilter(final ServletRequest request, final ServletResponse response, final FilterChain chain) throws IOException, ServletException {
        this.addHttpRequestToContext(request);
        super.doFilter(request, response, chain);
    }

    private void addHttpRequestToContext(final ServletRequest request) {
        final Optional<HttpServletRequest> httpServletRequest = HttpRequestUtils.castToHttpRequest(request);
        if (httpServletRequest.isPresent()) {
            final ServletRequestAttributes requestAttributes = new ServletRequestAttributes(httpServletRequest.get());
            RequestContextHolder.setRequestAttributes(requestAttributes);
        }
    }
}
It uses a static helper class that I wrote:
public final class HttpRequestUtils {

    public static Optional<HttpServletRequest> getCurrentHttpRequest() {
        return Optional.ofNullable(RequestContextHolder.getRequestAttributes())
                .filter(ServletRequestAttributes.class::isInstance)
                .map(ServletRequestAttributes.class::cast)
                .map(ServletRequestAttributes::getRequest);
    }

    public static Optional<HttpServletRequest> castToHttpRequest(ServletRequest request) {
        try {
            return Optional.of((HttpServletRequest) request);
        } catch (ClassCastException classCastException) {
            return Optional.empty();
        }
    }
}
This custom filter basically stores the HTTP request in the RequestContextHolder. After that you can use it in your segment listeners:
public class MyHttpHeaderSegementListener implements SegmentListener {

    public MyHttpHeaderSegementListener() {}

    @Override
    public void onBeginSegment(final Segment segment) {
        final Optional<HttpServletRequest> request = HttpRequestUtils.getCurrentHttpRequest();
        request.map(req -> req.getHeader("Origin")).ifPresent(origin -> segment.putAnnotation("client_origin", origin));
    }
}
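To make the wrapped filter actually run, it still has to be registered. A minimal Spring Boot sketch using a FilterRegistrationBean; the segment name "my-service" and the class name XRayFilterConfig are placeholders, not taken from the answer above:
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XRayFilterConfig {

    @Bean
    public FilterRegistrationBean<XRayServletFilter> xRayServletFilter() {
        // Register the custom filter so every incoming request passes through it.
        FilterRegistrationBean<XRayServletFilter> registration =
                new FilterRegistrationBean<>(new XRayServletFilter("my-service"));
        registration.addUrlPatterns("/*");
        return registration;
    }
}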

Spring REST caching RSS AbstractRssFeedView

I am developing a REST-based web service using Spring to serve an RSS feed. Updates to the RSS are very rare (a couple of times a week), and hence I want to cache the RSS feed rather than build it every time someone requests it. Here is my code. The first request after starting my web server hits the getRssFeed() method in the SubscriptionEventHandler class and then goes into SubscriptionRssFeedView and calls the buildFeedMetadata, buildFeedItems methods and so on, which is correct. But when I make the second request, it skips the getRssFeed() method in SubscriptionEventHandler, BUT the buildFeedMetadata and buildFeedItems methods in SubscriptionRssFeedView still get called, which in turn call getIncidents() and build the RSS again from scratch. Is there a way I can avoid this and cache the RSS until I call @CacheEvict?
Here is my SubscriptionRssFeedView
#Component("subscriptionRssView")
public class SubscriptionRssFeedView extends AbstractRssFeedView
{
private String base_Url=”http://mycompany.com/”;
private final String feed_title = "My RSS Title ";
private final String feed_desc = "RSS feed desc";
private final String feed_type = "rss_2.0";
#Override
protected void buildFeedMetadata(Map<String, Object> model, Channel feed, HttpServletRequest request)
{
feed.setTitle(feed_title);
feed.setDescription(feed_desc);
feed.setLink(base_Url);
feed.setFeedType(feed_type);
super.buildFeedMetadata(model, feed, request);
}
#Override
protected List<Item> buildFeedItems(Map<String, Object> model, HttpServletRequest request,
HttpServletResponse response) throws Exception
{
List<Message> messageList = new ArrayList(Arrays.asList(getIncidents()));
List<Item> itemList = new ArrayList<Item>(messageList.size());
for (Message message : messageList)
{
itemList.add(createItem(message));
}
return itemList;
}
private Message[] getIncidents()
{
RestTemplate restTemplate = new RestTemplate();
Message[] message = restTemplate.getForObject("http://xxxxx.com/api/message", Message[].class);
return message;
}
private Item createItem(Message message)
{
Item item = new Item();
item.setLink(getFeedItemURL(message));
item.setTitle(prepareFeedItemTitle(message));
item.setDescription(createDescription(message));
item.setPubDate(getLocalizedDateTimeasDate(message.getT()));
return item;
}
}
My SubscriptionEventHandler
#Component("SubscriptionService")
public class SubscriptionEventHandler implements SubscriptionService
{
#Autowired
private SubscriptionRssFeedView subscriptionRssFeedView;
#Override
#Cacheable("rssFeedCache")
public SubscriptionRssFeedView getRssFeed()
{
return subscriptionRssFeedView;
}
}
My SubscriptionService
@Service
public interface SubscriptionService
{
    SubscriptionRssFeedView getRssFeed();
}
My SubscriptionController
@Controller
@RequestMapping("/subscription")
public class SubscriptionController
{
    @Autowired
    private SubscriptionService subscriptionService;

    @RequestMapping(value = "/rss", method = RequestMethod.GET)
    public SubscriptionRssFeedView getRSS() throws Exception
    {
        return subscriptionService.getRssFeed();
    }
}
When rendering the response of your SubscriptionController, the render method of your SubscriptionRssFeedView will always get called. This method is the one triggering the calls to buildFeedMetadata, buildFeedItems and so on. The sequence is as follows:
AbstractView.render => AbstractFeedView.renderMergedOutputModel => SubscriptionRssFeedView.buildFeedMetadata and SubscriptionRssFeedView.buildFeedItems
You can check the parent class methods AbstractView.render and AbstractFeedView.renderMergedOutputModel if you want to see in more detail what triggers the calls to those methods.
If you want to avoid recalculating the RSS, you can cache the SubscriptionRssFeedView.getIncidents() method instead of SubscriptionEventHandler.getRssFeed().
I suggest adding a key to your cache; otherwise all calls to getIncidents will always return the same value, which will be undesirable when you have multiple feeds.
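A minimal sketch of that suggestion, assuming annotation-driven caching is enabled (@EnableCaching) and a cache named "rssFeedCache" is configured; IncidentService and the feedId key are hypothetical, since the original getIncidents() is a private, parameterless method inside the view:
@Service
public class IncidentService
{
    // Cache the expensive remote call; the key keeps separate entries per feed.
    @Cacheable(value = "rssFeedCache", key = "#feedId")
    public Message[] getIncidents(String feedId)
    {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.getForObject("http://xxxxx.com/api/message", Message[].class);
    }

    // Evict the cached entry whenever the feed content changes.
    @CacheEvict(value = "rssFeedCache", key = "#feedId")
    public void evictIncidents(String feedId)
    {
        // the next getIncidents(feedId) call rebuilds the cache entry
    }
}
With Spring's default proxy-based caching, the annotated method has to be invoked through the Spring proxy, so moving getIncidents() into its own bean and injecting it into SubscriptionRssFeedView keeps @Cacheable effective (a self-invocation from buildFeedItems would bypass it).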

Replying multiple times over web-socket without spring authentication

Note: see update at the bottom of the question for what I eventually concluded.
I need to send multiple responses to a request over the web socket that sent the request message, the first one quickly, and the others after the data is verified (somewhere between 10 and 60 seconds later, from multiple parallel threads).
I am having trouble getting the later responses to stop broadcasting over all open web sockets. How do I get them to only send to the initial web socket? Or should I use something besides Spring STOMP (because, to be honest, all I want is the message routing to various functions, I don't need or want the ability to broadcast to other web sockets, so I suspect I could write the message distributor myself, even though it is reinventing the wheel).
I am not using Spring Authentication (this is being retrofitted into legacy code).
On the initial return message, I can use @SendToUser, and even though we don't have a user, Spring only sends the return value to the websocket that sent the message (see this question).
With the slower responses, though, I think I need to use SimpMessagingTemplate.convertAndSendToUser(user, destination, message), but I can't, because I have to pass in the user, and I can't figure out what user @SendToUser used. I tried to follow the steps in this question, but didn't get it to work when not authenticated (principal.getName() returns null in this case).
I've simplified this considerably for the prototype, so don't worry about synchronizing threads or anything. I just want the web sockets to work correctly.
Here is my controller:
@Controller
public class TestSocketController
{
    private SimpMessagingTemplate template;

    @Autowired
    public TestSocketController(SimpMessagingTemplate template)
    {
        this.template = template;
    }

    // This doesn't work because I need to pass something for the first parameter.
    // If I just use convertAndSend, it broadcasts the response to all browsers.
    void setResults(String ret)
    {
        template.convertAndSendToUser("", "/user/topic/testwsresponse", ret);
    }

    // this only sends "Testing Return" to the browser tab hooked to this websocket
    @MessageMapping(value = "/testws")
    @SendToUser("/topic/testwsresponse")
    public String handleTestWS(String msg) throws InterruptedException
    {
        (new Thread(new Later(this))).start();
        return "Testing Return";
    }

    public class Later implements Runnable
    {
        TestSocketController Controller;

        public Later(TestSocketController controller)
        {
            Controller = controller;
        }

        public void run()
        {
            try
            {
                java.lang.Thread.sleep(2000);
                Controller.setResults("Testing Later Return");
            }
            catch (Exception e)
            {
            }
        }
    }
}
For the record, here is the browser side:
var client = null;

function sendMessage()
{
    client.send('/app/testws', {}, 'Test');
}

// hooked to a button
function test()
{
    if (client != null)
    {
        sendMessage();
        return;
    }
    var socket = new SockJS('/application-name/sendws/');
    client = Stomp.over(socket);
    client.connect({}, function(frame)
    {
        client.subscribe('/user/topic/testwsresponse', function(message)
        {
            alert(message);
        });
        sendMessage();
    });
}
And here is the config:
@Configuration
@EnableWebSocketMessageBroker
public class TestSocketConfig extends AbstractWebSocketMessageBrokerConfigurer
{
    @Override
    public void configureMessageBroker(MessageBrokerRegistry config)
    {
        config.setApplicationDestinationPrefixes("/app");
        config.enableSimpleBroker("/queue", "/topic");
        config.setUserDestinationPrefix("/user");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry)
    {
        registry.addEndpoint("/sendws").withSockJS();
    }
}
UPDATE: Due to the security issues involved with the possibility of information being sent over websockets other than the originating socket, I ended up recommending to my group that we not use the Spring 4.0 implementation of STOMP over WebSockets. I understand why the Spring team did it the way they did, and it is more power than we needed, but the security restrictions on our project were severe enough, and the actual requirements simple enough, that we decided to go a different way. That doesn't invalidate the answers below, so make your own decision based on your project's needs. At least we have hopefully all learned the limitations of the technology, for good or bad.
Why don't you use a separate topic for each client?
Client generates a session id.
var sessionId = Math.random().toString(36).substring(7);
Client subscribes to /topic/testwsresponse/{sessionId}, then sends a message to '/app/testws/{sessionId}'.
In your controller you use @MessageMapping(value="/testws/{sessionId}") and remove @SendToUser. You can use @DestinationVariable to access sessionId in your method.
The controller sends further responses to /topic/testwsresponse/{sessionId}.
Essentially Spring does a similar thing internally when you use user destinations. Since you don't use Spring Authentication you cannot rely on this mechanism but you can easily implement your own as I described above.
var client = null;
var sessionId = Math.random().toString(36).substring(7);

function sendMessage()
{
    client.send('/app/testws/' + sessionId, {}, 'Test');
}

// hooked to a button
function test()
{
    if (client != null)
    {
        sendMessage();
        return;
    }
    var socket = new SockJS('/application-name/sendws/');
    client = Stomp.over(socket);
    client.connect({}, function(frame)
    {
        client.subscribe('/topic/testwsresponse/' + sessionId, function(message)
        {
            alert(message);
        });
        // Need to wait until subscription is complete
        setTimeout(sendMessage, 1000);
    });
}
Controller:
@Controller
public class TestSocketController
{
    private SimpMessagingTemplate template;

    @Autowired
    public TestSocketController(SimpMessagingTemplate template)
    {
        this.template = template;
    }

    void setResults(String ret, String sessionId)
    {
        template.convertAndSend("/topic/testwsresponse/" + sessionId, ret);
    }

    @MessageMapping(value = "/testws/{sessionId}")
    public void handleTestWS(@DestinationVariable String sessionId, @Payload String msg) throws InterruptedException
    {
        (new Thread(new Later(this, sessionId))).start();
        setResults("Testing Return", sessionId);
    }

    public class Later implements Runnable
    {
        TestSocketController Controller;
        String sessionId;

        public Later(TestSocketController controller, String sessionId)
        {
            Controller = controller;
            this.sessionId = sessionId;
        }

        public void run()
        {
            try
            {
                java.lang.Thread.sleep(2000);
                Controller.setResults("Testing Later Return", sessionId);
            }
            catch (Exception e)
            {
            }
        }
    }
}
Just tested it, works as expected.
This is not a full answer, just a general consideration and suggestion. You cannot do different kinds of work or types of connection over the same socket, so why not have different sockets for different work? Some with authentication and some without; some for quick tasks and some for long-running execution.
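As an illustration, a minimal sketch of what registering separate endpoints could look like, reusing the question's config style; the paths "/securews" and "/publicws" and the class name are placeholders, not part of the original setup:
@Configuration
@EnableWebSocketMessageBroker
public class SplitEndpointConfig extends AbstractWebSocketMessageBrokerConfigurer
{
    @Override
    public void configureMessageBroker(MessageBrokerRegistry config)
    {
        config.setApplicationDestinationPrefixes("/app");
        config.enableSimpleBroker("/queue", "/topic");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry)
    {
        registry.addEndpoint("/securews").withSockJS();   // reserve this endpoint for authenticated clients
        registry.addEndpoint("/publicws").withSockJS();   // open endpoint for quick, unauthenticated tasks
    }
}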

How to TDD for Restful client code example

I have done some TDD before, but it was straightforward and simple.
However, now I will implement a RESTful client that invokes a third-party RESTful API (Twitter, or Jira).
I used the RESTEasy client framework to implement it. The code is:
public void invokePUT() {
    ClientRequest request =
            new ClientRequest("http://example.com/customers");
    request.accept("application/xml");
    ClientResponse<Customer> response = request.put(Customer.class);
    try {
        if (response.getStatus() != 201)
            throw new RuntimeException("Failed!");
    } finally {
        response.releaseConnection();
    }
}
If I want to write a test for this method (I should write the test before implementing the method), what kind of code should I write?
For GET, I can test that the returned entity equals my expected entity, and for POST, I can test that the created entity's id is not null.
But what about PUT and DELETE? Thanks.
Try the REST Assured testing framework. It is a great tool for testing REST services. On their website you'll find tons of examples of how to use it. Just use it together with JUnit or TestNG to check assertions and you are done.
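For instance, a minimal sketch of PUT and DELETE tests with REST Assured; the URL, XML payload and expected status codes are assumptions, not taken from the question (in older REST Assured versions the static import comes from the com.jayway.restassured package instead of io.restassured):
import static io.restassured.RestAssured.given;

import org.junit.Test;

public class CustomerApiTest {

    // Assumed endpoint of the resource under test.
    private static final String CUSTOMER_URL = "http://example.com/customers/1";

    @Test
    public void putUpdatesCustomer() {
        given()
            .contentType("application/xml")
            .body("<customer><name>Bill</name></customer>")
        .when()
            .put(CUSTOMER_URL)
        .then()
            .statusCode(200); // or 201/204, depending on the API contract
    }

    @Test
    public void deleteRemovesCustomer() {
        given()
        .when()
            .delete(CUSTOMER_URL)
        .then()
            .statusCode(204); // assuming the API returns No Content on delete
    }
}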
Here's how I'd go about the problem in the short term:
1) Extract the request into a parameter to the method. invokePUT() now becomes:
public void invokePUT(ClientRequest request) {
    request.accept("application/xml");
    ClientResponse<Customer> response = request.put(Customer.class);
    try {
        if (response.getStatus() != 201)
            throw new RuntimeException("Failed!");
    } finally {
        response.releaseConnection();
    }
}
2) In your test, use a stubbed version of ClientRequest
@Test
public void sendsPayloadAsXml() {
    StubbedClientRequest request = new StubbedClientRequest(new StubbedResponse());
    restApi.invokePUT(request);
    assertEquals("application/xml", request.acceptHeader);
}

@Test
public void makesTheCallUsingPut() {
    StubbedClientRequest request = new StubbedClientRequest(new StubbedResponse());
    restApi.invokePUT(request);
    assertTrue(request.putWasCalled);
}

@Test
public void releasesTheConnectionWhenComplete() {
    StubbedResponse success = new StubbedResponse();
    StubbedClientRequest request = new StubbedClientRequest(success);
    restApi.invokePUT(request);
    assertTrue(success.connectionWasClosed);
}

@Test(expected = RuntimeException.class)
public void raisesAnExceptionWhenInvalidResponseReceived() {
    StubbedClientRequest request = new StubbedClientRequest(new StubbedResponse(400));
    restApi.invokePUT(request);
}

private static class StubbedClientRequest extends ClientRequest {
    public String acceptHeader = "";
    public boolean putWasCalled;
    public ClientResponse response;

    public StubbedClientRequest(ClientResponse response) {
        this.response = response;
    }

    @Override
    public ClientResponse put(Class klass) {
        putWasCalled = true;
        return response;
    }

    @Override
    public void accept(String header) {
        acceptHeader += header;
    }
}

private static class StubbedResponse extends ClientResponse {
    public boolean connectionWasClosed;
    public int status = 201;

    public StubbedResponse(int status) {
        this.status = status;
    }

    public StubbedResponse() { }

    // Stubbed accessors so the method under test and the assertions see the stub's state.
    @Override
    public int getStatus() {
        return status;
    }

    @Override
    public void releaseConnection() {
        connectionWasClosed = true;
    }
}
This may not be a perfect design (handing the ClientRequest to the class and exposing the RESTEasy types to the outside world), but it's a start.
Hope that helps!
Brandon
I would inject mocked classes that test whether put and delete were called as intended (with the expected parameters and so on). EasyMock or something similar is good for that (same with post and get).
EDIT:
In case you want to test the REST client, use dependency injection to inject the request, then use EasyMock to mock it like this (for example, to test whether delete is called properly):
@Test
public void myTest() {
    ClientRequest mock = EasyMock.createMock(ClientRequest.class);
    mock.delete(2); // test if the resource with id=2 is deleted, or something similar
    EasyMock.replay(mock);
    invokeDelete(mock);
    EasyMock.verify(mock);
}
