Camel route loops after a specific number of requests - java

I've been trying to solve a problem with a Camel route for days now.
As I'm not a specialist in this area, I may have left out information you need; please ask and I'll add it.
First, the goal of my route is to connect to a remote server via an endpoint, sending XML requests and receiving XML responses via JAXB marshalling.
There is nothing special in the route, which is the following:
routeX.from("direct:requeteObjects)
.setHeader("element")
.constant(element)
.to("bean:importStructureI2VGestionnaireImpl?method=sendRequestObjects)
.to(endpoint)
.to("bean:importGestionnaireImpl?method=" + getObjects)
The server sends its responses in pieces: for example, when I'm expecting 3000 objects, they arrive in batches of 30.
My request-building method ("sendRequest") is then called again to ask for the next batch of objects.
After processing a response, I send the new request back to the endpoint, which fetches the next response, which is in turn passed to the getObjects method that processes it.
Everything works well and I get my responses. But after 5 requests/responses, once the exchange enters the endpoint, nothing happens. Debugging the code, it looks like there's a loop inside my route: execution keeps going round the AsyncProcessor class, in the process method, and so on. No logs, nothing; it never stops.
I have no idea why this happens. I thought it might be because I was reusing the same route definition, so I created a new route for each request, stopping and removing the old one. With this I get to 6 responses, but then the same problem occurs.
I tried setting the context's maximum endpoint cache size and maximum cache pool size, multiplying the default values by 10. I checked that the values were taken into account: they were. But I still get the same problem.
Also, the exchange object is always a new one, so my responses are never piled up inside one big exchange object.
Do you know where the problem could be? Can the context become too large? Or the endpoint? Where should I look?
Thank you for your answers. If you need any more information, I'll be pleased to add it.
PS: I tried applying the answer from this topic: Camel route inside routeContext executing infinitely, but it made no difference :-(

Related

How is request (socket connections) bean scoping handled in absence of http requests?

I'm building a back-end service which needs to handle 100,000 requests per day (MVP) and up to 1 million thereafter.
Our requests are not HTTP requests (due to the high demand); a request arrives in an industry-standard format (assume a fixed-length text file), is converted to a Java object, and that object is later written to a socket which my app receives.
Traditionally I would have assumed that all beans should be request scoped, since that is essentially what I want, but since the requests are not HTTP I'm very confused about how to scope this correctly. Each socket transmission should get its own set of beans and should not interfere with the previous or following transmission.
Could you kindly point me in the right direction? HTTP- and request-aware annotations (@RequestScope) seem not to apply in my case, yet that's very close to what I want to achieve. Likewise, I'm unable to research this meaningfully since I'm unsure what vocabulary to use. Thank you very much in advance.
How about introducing your own scope, as described here? You can use ThreadLocal storage to keep the beans, or even use the thread scope (see here).
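If you go the custom-scope route, a minimal sketch could look like the following, assuming each socket transmission is handled on its own thread (the class and scope names here are illustrative, not taken from your code):

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

// Illustrative custom scope: one set of beans per handling thread,
// cleared explicitly when a transmission finishes.
public class TransmissionScope implements Scope {

    private final ThreadLocal<Map<String, Object>> beans =
            ThreadLocal.withInitial(HashMap::new);

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        return beans.get().computeIfAbsent(name, n -> objectFactory.getObject());
    }

    @Override
    public Object remove(String name) {
        return beans.get().remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        // Destruction callbacks are omitted in this sketch.
    }

    @Override
    public Object resolveContextualObject(String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        return String.valueOf(Thread.currentThread().getId());
    }

    // Call this when a transmission ends so the next one starts clean.
    public void clear() {
        beans.get().clear();
    }
}

You would register it once on the bean factory (e.g. beanFactory.registerScope("transmission", new TransmissionScope())) and call clear() at the end of each transmission.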

Jersey response containing incorrect data

Apologies: I don't have a simple test case that reproduces this problem, as it happens very intermittently. However, I would greatly appreciate some help regarding how to even begin diagnosing the issue.
I have a Jersey server running on Tomcat.
When the client makes a request, sometimes a response from a totally different request is mixed in with the correct response.
The "correct" request can be of any kind, but the "bad" response which gets mixed in is always from an SSE stream (EventOutput) or an AsyncResponse.
For example, this is the output received by a client through a normal request:
event: message_sent
id: 1
data: {"value":"hello world"}
{"event-id":"13"}event: message_sent
id: 2
data: {"value":"hello world"}
The genuine response {"event-id":"13"} is present... but surrounding that there are two erroneous SSE events.
The method to handle this request returns simply:
return Response.created(uri).entity(eventId).build();
So I don't understand at which point the unwanted data gets sent (unless Response.created() is returning a response object which had already been used for an SSE stream).
The server logs always show the correct output. We know the client is not at fault, however, as we used a packet sniffer to confirm the responses are malformed.
Notes:
For SSE streams, I always check that the EventOutput is not closed before writing to them
When writing to AsyncResponse objects, I always check isSuspended() first (and they are injected with the @Suspended annotation)
Again, any hints or pointers would be such a great help. I've run out of ideas!
After a lot of research, I've concluded that my problem must be (of course) a user error, even though I couldn't reproduce it once I removed the Apache proxy from the equation. In my case, I had to be more careful about when to consider an EventOutput closed, since you can only tell whether the connection is still open by trying to write to it. In general though, my findings are:
A bug of this kind occurs when you keep references to response objects around and then write to them after Tomcat has reused them for another request. This could be a list of e.g. AsyncResponse or EventOutput objects which you need to keep around to resume or write to at a later time.
However, if the bug is difficult to track down and you need a workaround, there is a Tomcat setting which disables the reuse of these objects, at the cost of performance, as described here: https://tomcat.apache.org/tomcat-8.0-doc/security-howto.html
Setting the org.apache.catalina.connector.RECYCLE_FACADES system property to true will cause a new facade object to be created for each request. This reduces the chances of a bug in an application exposing data from one request to another.
(Don't be confused by the name; RECYCLE_FACADES=true means that the facades will not get reused, and new ones will get created as they are needed).
There are examples of application bugs like this only manifesting when behind an apache proxy, so if the bug disappears if you access Tomcat directly it doesn't necessarily mean that apache is at fault.
As I say, it's unlikely for this to be caused by a bug in Tomcat, Apache or the mod_proxy_ajp... but if you are an unlucky one, you can try using another connector (mod_jk, for example).
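For illustration, the kind of write guard described above might look roughly like this (a sketch assuming Jersey 2.x SSE; the class and field names are illustrative): a failed write is treated as the signal that the connection is gone, and the reference is dropped so it is never written to again.

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.glassfish.jersey.media.sse.EventOutput;
import org.glassfish.jersey.media.sse.OutboundEvent;

// Illustrative broadcaster: a failed write means the client is gone,
// so the EventOutput reference is dropped immediately.
public class SseOutputs {

    private final Queue<EventOutput> outputs = new ConcurrentLinkedQueue<>();

    public void register(EventOutput output) {
        outputs.add(output);
    }

    public void broadcast(String eventName, String json) {
        OutboundEvent event = new OutboundEvent.Builder()
                .name(eventName)
                .data(String.class, json)
                .build();
        for (EventOutput output : outputs) {
            try {
                if (!output.isClosed()) {
                    output.write(event);
                }
            } catch (IOException e) {
                // The connection is dead; forget this EventOutput so it is
                // never touched once Tomcat recycles the underlying objects.
                outputs.remove(output);
            }
        }
    }
}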

Restlet URL/path pattern mismapping

I'm developing a REST API using Restlet.
So far everything has been working just fine. However, I've now encountered an issue with the Router mapping of URLs to ServerResources.
I've got the following scenario:
GET /car returns a list of all cars
GET /car/{id} returns details about the car with the given id
GET /car/advancedsearch?param1=test should run a search across all cars with some parameters
The first two calls work without any problems. If I try to hit the third call though, the Restlet Router somehow maps it to the second one instead. How can I tell Restlet to instead use the third case?
My mapping is defined as follows:
router.attach("/car", CarListResource.class);
router.attach("/car/{id}", CarResource.class);
router.attach("/car/advancedsearch", CarSearchResource.class);
CarSearchResource is never invoked, but rather the request ends up in CarResource.
The router's default matching mode is set to Template.MODE_EQUALS, so that can't be causing it.
Does anyone have any further suggestions how I could fix it?
Please don't suggest using /car with the parameters instead, as there's already another kind of search in place at that level. Also, I'm not in control of the API structure, so it has to remain as it is.
You need to add .setMatchingQuery(true) to that route in order for it to recognize that there is a query at the end of it.
Router router = (Router) super.createInboundRoot();
TemplateRoute route1 = router.attach("/car/advancedsearch?{query_params}", MyResource.class);
route1.setMatchingQuery(true);
return router;
Note that this pattern must follow the exact order you defined in the route, i.e. advancedsearch comes first and query_params comes after.
I was able to solve this by simply reordering the attach statements:
router.attach("/car/advancedsearch", CarSearchResource.class);
router.attach("/car", CarListResource.class);
router.attach("/car/{id}", CarResource.class);

Pausing and notifying particular threads in a Java Webservice

I'm writing a Java web service with CXF. I have the following problem: a client calls a method on the web service. The web service has to do two things in parallel and starts two threads. One of the threads needs some additional information from the client. It is not possible to include this information in the original web service call, because it depends on the calculation done in the web service. I cannot redesign the web service because it is part of a course assignment, and the assignment states that I have to do it this way. I want to pause the thread and notify it when the client delivers the additional information. Unfortunately it is not possible in Java to notify a particular thread, and I can't find any other way to solve my problem.
Has anybody a suggestion?
I've edited my answer after thinking about this some more.
You have a fairly complex architecture and if your client requires information from the server in order to complete the request then I think you need to publish one or more 'helper' methods.
For example, you could publish (without all the Web Service annotations):
MyData validateMyData(MyData data);
boolean processMyData(MyData data);
The client would then call validateMyData() as many times as it liked, until it knew it had complete information. The server can modify (through calculation, database look-up, or whatever) the variables in MyData in order to help complete the information and pass it back to the client (for updating the UI, if there is one).
Once the information is complete the client can then call processMyData() to process the complete request.
This has the advantage that the server methods can be implemented without the need for background threads as they should be able to do their thing using the request-thread supplied by the server environment.
The only caveat to this is if MyData can get very large and you don't want to keep passing it back and forth between client and server. In that case you would need to come up with a smaller class that contains just the changes the server wants to make to MyData and excludes data that doesn't need correcting.
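A minimal sketch of how those two operations could be declared as a JAX-WS interface for CXF to expose (MyData and the method names come from the example above; everything else is an assumption):

import javax.jws.WebMethod;
import javax.jws.WebService;

// Illustrative service contract for the validate/process pattern above.
@WebService
public interface MyDataService {

    // Called repeatedly by the client; the server fills in or corrects
    // whatever it can and returns the updated object.
    @WebMethod
    MyData validateMyData(MyData data);

    // Called once the client considers the data complete; runs on the
    // container's request thread, so no background thread is needed.
    @WebMethod
    boolean processMyData(MyData data);
}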
IMO it's pretty odd for a web service request to effectively be incomplete. Why can't the request pass all the information in one go? I would try to redesign your service like that, and make it fail if you don't pass in all the information required to process the request.
EDIT: Okay, if you really have to do this, I wouldn't actually start a new thread when you receive the first request. I would store the information from the first request (whether in a database or just in memory if this is just a dummy one) and then when the second request comes in, launch the thread.

Get status of servlet request before the response is returned

Good evening,
I am in the process of writing a Java servlet application (Struts 2, Tomcat, JSP, etc.) which is capable of doing some fairly complex simulations. These can take up to 2 minutes to complete on the server and will return a graph of the results. It is trivial to calculate the percentage of the simulation completed, because the process works by repeating the same calculations thousands of times.
I would be interested to know if anyone has ever tried to use client-side technology to provide an estimate of the percentage complete, i.e. query the servlet during processing to get the number of cycles completed at various points throughout the simulation. This could then be displayed as a progress bar in the client browser.
Any thoughts, advice, resources would be much appreciated.
Thanks,
Alex
In your database, have a table to maintain a list of simulations along with their server-calculated progress.
In your web-application, use AJAX to query the server every few seconds (1-20 depending on load is what I'd go with) and update a graphical progress bar. Most javascript libraries have simple timer-based AJAX functions to do exactly this sort of thing.
There are a few details to figure out, such as whether or not completed simulations remain in the DB (could be useful logging info), but overall this should be fairly simple.
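A rough sketch of the polling endpoint (names are illustrative): the simulation thread writes its progress into a shared map, and the browser polls this servlet every few seconds.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative progress endpoint for the AJAX polling described above.
public class ProgressServlet extends HttpServlet {

    // simulationId -> cycles completed, updated by the simulation thread
    static final Map<String, Integer> PROGRESS = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String id = req.getParameter("simulationId");
        int completed = (id == null) ? 0 : PROGRESS.getOrDefault(id, 0);
        resp.setContentType("application/json");
        resp.getWriter().write("{\"completedCycles\":" + completed + "}");
    }
}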
You could encapsulate your response in a MIME multipart message and send updates until the full response is done.
Bugzilla uses this in its search to show a "Searching..." screen until the search result is ready.
If you want to use plain Struts2, you should take a look at the ExecuteAndWait Interceptor.
It works by the old refresh-with-timeout method. Sure, it has lower coolness factor than some AJAX thing, but it's simple and works (I've used it).
Struts2 takes care (by using this interceptor) of executing the action method, which would typically take a long time, in a separate thread, and it returns the special result wait until the work is completed. You just have to code a JSP that shows a "waiting..." message for this result and make it refresh to the same action repeatedly, with (say) two or three seconds of timeout. The JSP has access to the action properties, of course, so you can code a getProgress() method to show a progress message or bar.
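To make that concrete, a long-running action sitting behind execAndWait could look roughly like this (a sketch; the simulation details are placeholders):

import com.opensymphony.xwork2.ActionSupport;

// Illustrative long-running action for the execAndWait pattern above.
public class SimulationAction extends ActionSupport {

    private static final int TOTAL_CYCLES = 1000;
    private volatile int completedCycles;

    @Override
    public String execute() throws Exception {
        for (int i = 0; i < TOTAL_CYCLES; i++) {
            runOneCycle();              // placeholder for one simulation step
            completedCycles = i + 1;
        }
        return SUCCESS;
    }

    // The "wait" JSP can call this to render a progress bar.
    public int getProgress() {
        return (completedCycles * 100) / TOTAL_CYCLES;
    }

    private void runOneCycle() {
        // placeholder simulation work
    }
}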
AJAX is the way to go here.
