I have an application that creates routes to connect to a REST endpoint and process the responses for several vendors. Each route is triggered by a quartz2 timer. Recently, when the timer fires, it creates multiple exchanges instead of just one, and I cannot determine what is causing it.
The method that creates the routes is here:
public String generateRoute(String vendorId) {
    routeBuilders.add(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            System.out.println("Building REST input route for vendor " + vendorId);
            String vendorCron = vendorProps.getProperty(vendorId + ".rest.cron");
            String vendorEndpoint = vendorProps.getProperty(vendorId + ".rest.endpoint");
            String vendorAuth = vendorProps.getProperty(vendorId + ".rest.auth");
            int vendorTimer = Integer.valueOf(vendorId) * 10000;
            GsonDataFormat format = new GsonDataFormat(RestResponse.class);
            from("quartz2://timer" + vendorId + "?cron=" + vendorCron)
                .routeId("Rte-vendor" + vendorId)
                .streamCaching()
                .log("Starting route " + vendorId)
                .setHeader("Authorization", constant(vendorAuth))
                .to("rest:get:" + vendorEndpoint)
                .to("direct:processRestResponse")
                .end();
        }
    });
    return "direct:myRoute." + vendorId;
}
and a sample 'vendorCron' string is
"*+5+*+*+*+?&trigger.timeZone=America/New_York".
When the quartz route fires I see this type of output in the log
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
when I should (and used to) see only one of these.
Any ideas what would cause this?
Thanks!
This is because of your vendorCron.
If the cron trigger fires every 5 seconds, then you see this log line every 5 seconds.
If the cron trigger fires every 5 minutes/hours, you see these log lines every 5 minutes/hours.
I was staring so hard I missed the obvious. I need a 0 in the seconds place of the cron expression.
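For reference, a minimal sketch of the corrected endpoint URI. The vendor id and time zone are taken from the example above; the exact schedule intended is an assumption:

```java
public class CronFix {
    public static void main(String[] args) {
        // Original: "*+5+*+*+*+?" decodes to "* 5 * * * ?", which matches
        // EVERY second during minute 5 of every hour, producing one exchange
        // per second. A 0 in the seconds field fires exactly once per match:
        String vendorCron = "0+5+*+*+*+?&trigger.timeZone=America/New_York";
        String uri = "quartz2://timer4?cron=" + vendorCron;
        System.out.println(uri);
    }
}
```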
Thank you for the time.
I expect all invocations of the server to be processed in parallel, but that is not the case.
Here is simple example.
RSocket version: 1.1.0
Server
public class ServerApp {
    private static final Logger log = LoggerFactory.getLogger(ServerApp.class);

    public static void main(String[] args) throws InterruptedException {
        RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
                Mono.fromCallable(() -> {
                    log.debug("Start of my business logic");
                    sleepSeconds(5);
                    return DefaultPayload.create("OK");
                })))
                .bind(WebsocketServerTransport.create(15000))
                .block();
        log.debug("Server started");
        TimeUnit.MINUTES.sleep(30);
    }

    private static void sleepSeconds(int sec) {
        try {
            TimeUnit.SECONDS.sleep(sec);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Client
public class ClientApp {
    private static final Logger log = LoggerFactory.getLogger(ClientApp.class);

    public static void main(String[] args) throws InterruptedException {
        RSocket client = RSocketConnector.create()
                .connect(WebsocketClientTransport.create(15000))
                .block();

        long start1 = System.currentTimeMillis();
        client.requestResponse(DefaultPayload.create("Request 1"))
                .doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start1))
                .subscribe();

        long start2 = System.currentTimeMillis();
        client.requestResponse(DefaultPayload.create("Request 2"))
                .doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start2))
                .subscribe();

        TimeUnit.SECONDS.sleep(20);
    }
}
In the client logs, we can see that both requests were sent at the same time, and both responses were received at the same time, after 10 seconds (each request was processed in 5 seconds).
In the server logs, we can see that the requests were executed sequentially, not in parallel.
Could you please help me understand this behavior?
Why did we receive the first response after 10 seconds and not 5?
How do I create the server correctly if I want all requests to be processed in parallel?
If I replace Mono.fromCallable with Mono.fromFuture(CompletableFuture.supplyAsync(() -> myBusinessLogic(), executorService)), it resolves 1.
If I replace Mono.fromCallable with Mono.delay(Duration.ZERO).map(ignore -> myBusinessLogic()), it resolves 1. and 2.
If I replace Mono.fromCallable with Mono.create(sink -> sink.success(myBusinessLogic())), it does not resolve my issues.
Client logs:
2021-07-16 10:39:46,880 DEBUG [reactor-tcp-nio-1] [/] - sending ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:46,952 DEBUG [main] [/] - sending ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:46,957 DEBUG [main] [/] - sending ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,043 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10120ms
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10094ms
Server Logs:
2021-07-16 10:39:46,965 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:47,021 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:47,027 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:52,037 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:57,039 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
You shouldn't mix asynchronous code like reactive Mono operations with blocking code like:
private static void sleepSeconds(int sec) {
    try {
        TimeUnit.SECONDS.sleep(sec);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
I suspect the central issue here is that a framework like rsocket-java doesn't want to run everything on new threads, at the cost of excessive context switching, so it generally relies on you to run long-running CPU or IO operations on an appropriate scheduler yourself.
You should look at the various async delay operators instead: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#delayElement-java.time.Duration-
If your delay is meant to simulate a long-running operation, then you should look at subscribing on a different scheduler, like https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html#boundedElastic--
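The principle can be illustrated with plain JDK types (class, method names, and timings here are illustrative, not from the question): offloading each blocking handler to an executor, which is what Mono.fromFuture(CompletableFuture.supplyAsync(..., executorService)) or subscribeOn(Schedulers.boundedElastic()) achieves, lets two blocking calls overlap instead of serializing on the single event-loop thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelRequests {
    // Stand-in for the blocking business logic from the question.
    static String handle(String request) {
        try {
            TimeUnit.MILLISECONDS.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "OK:" + request;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        long start = System.currentTimeMillis();
        // Each request runs on its own pool thread, so they overlap.
        CompletableFuture<String> r1 = CompletableFuture.supplyAsync(() -> handle("Request 1"), pool);
        CompletableFuture<String> r2 = CompletableFuture.supplyAsync(() -> handle("Request 2"), pool);
        System.out.println(r1.get() + " | " + r2.get());
        long elapsed = System.currentTimeMillis() - start;
        // Two 500 ms handlers finish in ~500 ms of wall time, not ~1000 ms.
        System.out.println("ranInParallel=" + (elapsed < 900));
        pool.shutdown();
    }
}
```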
I have the following function to query users from an AD server:
public List<LDAPUserDTO> getUsersWithPaging(String filter)
{
    List<LDAPUserDTO> userList = new ArrayList<>();
    try (LDAPConnection connection = new LDAPConnection(config.getHost(), config.getPort(),
            config.getUsername(), config.getPassword()))
    {
        SearchRequest searchRequest = new SearchRequest("", SearchScope.SUB, filter, null);
        ASN1OctetString resumeCookie = null;
        while (true)
        {
            searchRequest.setControls(
                    new SimplePagedResultsControl(100, resumeCookie));
            SearchResult searchResult = connection.search(searchRequest);
            for (SearchResultEntry e : searchResult.getSearchEntries())
            {
                LDAPUserDTO tmp = new LDAPUserDTO();
                tmp.distinguishedName = e.getAttributeValue("distinguishedName");
                tmp.name = e.getAttributeValue("name");
                userList.add(tmp);
            }
            LDAPTestUtils.assertHasControl(searchResult,
                    SimplePagedResultsControl.PAGED_RESULTS_OID);
            SimplePagedResultsControl responseControl =
                    SimplePagedResultsControl.get(searchResult);
            if (responseControl.moreResultsToReturn())
            {
                resumeCookie = responseControl.getCookie();
            }
            else
            {
                break;
            }
        }
        return userList;
    } catch (LDAPException e) {
        logger.error(e.getExceptionMessage());
        return null;
    }
}
However, this breaks when I try to search on the RootDSE.
What I've tried so far:
baseDN = null
baseDN = "";
baseDN = RootDSE.getRootDSE(connection).getDN()
baseDN = "RootDSE"
All resulting in various exceptions or empty results:
Caused by: LDAPSDKUsageException(message='A null object was provided where a non-null object is required (non-null index 0).
2020-04-01 10:42:22,902 ERROR [de.dbz.service.LDAPService] (default task-1272) LDAPException(resultCode=32 (no such object), numEntries=0, numReferences=0, diagnosticMessage='0000208D: NameErr: DSID-03100213, problem 2001 (NO_OBJECT), data 0, best match of:
''
', ldapSDKVersion=4.0.12, revision=aaefc59e0e6d110bf3a8e8a029adb776f6d2ce28')
I really spent a lot of time on this. It is possible to query the RootDSE, after a fashion, but it's not as straightforward as one might think.
I mainly used Wireshark to see what the folks at Softerra are doing with their LDAP Browser.
It turns out I wasn't that far away:
As you can see, the baseObject is empty here.
Also, there is one additional Control with the OID LDAP_SERVER_SEARCH_OPTIONS_OID and the ASN.1 String 308400000003020102.
So what does this 308400000003020102 (more readable: 30 84 00 00 00 03 02 01 02) actually do?
First of all, we decode this into something we can read: in this case, the int 2.
In binary, this gives us: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
As we know from the documentation, we have the following notation:
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 |
|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|-------|-------|
| x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | SSFPR | SSFDS |
or we just take the int values from the documentation:
1 = SSFDS -> SERVER_SEARCH_FLAG_DOMAIN_SCOPE
2 = SSFPR -> SERVER_SEARCH_FLAG_PHANTOM_ROOT
So, in my example, we have SSFPR which is defined as follows:
For AD DS, instructs the server to search all NC replicas except
application NC replicas that are subordinate to the search base, even
if the search base is not instantiated on the server. For AD LDS, the
behavior is the same except that it also includes application NC
replicas in the search. For AD DS and AD LDS, this will cause the
search to be executed over all NC replicas (except for application NCs
on AD DS DCs) held on the DC that are subordinate to the search base.
This enables search bases such as the empty string, which would cause
the server to search all of the NC replicas (except for application
NCs on AD DS DCs) that it holds.
NC stands for Naming Context and those are stored as Operational Attribute in the RootDSE with the name namingContexts.
The other value, SSFDS does the following:
Prevents continuation references from being generated when the search
results are returned. This performs the same function as the
LDAP_SERVER_DOMAIN_SCOPE_OID control.
Someone might ask why I even do this. As it turns out, I have a customer with several sub-DCs under one DC. If I tell the search to handle referrals, the execution time is far too long, so that wasn't really an option for me. But when I turned referral handling off, I wasn't getting all the results when I defined the BaseDN as the group whose members I wanted to retrieve.
Searching via the RootDSE option in Softerra's LDAP Browser was way faster and returned the results in less than one second.
I personally have no clue why this is so much faster; Active Directory without any interface or tool from Microsoft is kind of black magic to me anyway. But to be frank, that's not really my area of expertise.
In the end, I used the following Java code:
SearchRequest searchRequest = new SearchRequest("", SearchScope.SUB, filter, null);
[...]
Control globalSearch = new Control("1.2.840.113556.1.4.1340", true, new ASN1OctetString(Hex.decode("308400000003020102")));
searchRequest.setControls(new SimplePagedResultsControl(100, resumeCookie, true),globalSearch);
[...]
The used Hex.decode() is the following: org.bouncycastle.util.encoders.Hex.
A huge thanks to the folks at Softerra, who more or less put my journey into the abyss of AD to an end.
You can't query users from the RootDSE.
Use either a domain, or, if you need to query users across domains in a forest, use the global catalog (which runs on different ports, not the default 389/636 for LDAP(S)).
The RootDSE only contains metadata. This question should probably be asked elsewhere for more information, but first read up on the documentation from Microsoft, e.g.:
https://learn.microsoft.com/en-us/windows/win32/ad/where-to-search
https://learn.microsoft.com/en-us/windows/win32/adschema/rootdse
E.g.: namingContexts attribute can be read to find which other contexts you may want to query for actual users.
Maybe start with this nice article as introduction:
http://cbtgeeks.com/2016/06/02/what-is-rootdse/
I'm developing the Connect4 game in Java and I'm having a problem with the Logger. I don't know why it prints in a different place, in between other kinds of prints.
public void setPlacement(Move lastMove) {
    Logger.getGlobal().info("Player" + lastMove.getPlayerIndex()
            + " placed a checker in position : " + lastMove.toString());
    display();
}
The method display() just prints the grid of the game. Here's the output of the above method :
| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| - | - | - | - | - | - | - |
| - | - | - | - | - | - | - |
| - ago 01, 2019 6:10:24 PM it.unicam.cs.pa.connectFour.GameViewsetPlacement
INFO: Player1 placed a checker in position : column 4, row 5
| - | - | - | - | - | - |
| - | - | - | - | - | - | - |
| - | O | O | O | - | - | - |
| O | X | X | X | - | - | - |
*****************************
Can someone explain to me why the logger acts like this?
The answer to this question concerning System::out and System::err may help you. Specifically, your console is displaying two streams of output at the same time, with no ordering guarantees between messages sent to different streams.
One way to make Loggers write to a standard output stream is to create your own StreamHandler and configure your Logger to send records to that handler instead. You may also have to disable parent handlers to avoid duplicate output, and you may want to ensure that output is proactively flushed to the desired stream, like so:
Handler h = new StreamHandler(System.out, formatter)
{
    @Override
    public void publish(LogRecord record)
    {
        if (record == null)
            return;
        super.publish(record);
        super.flush();
    }
};
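A self-contained sketch of wiring this up (the logger name and messages are illustrative): routing records through a flushing StreamHandler on the same stream as ordinary println output, and disabling the parent handlers, keeps log lines in order.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class OrderedLogging {
    public static void main(String[] args) {
        // Capture the stream so the ordering can be checked; in the game this
        // would simply be System.out.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(buf, true);

        Logger logger = Logger.getLogger("connectFour");
        logger.setUseParentHandlers(false); // stop the default System.err handler
        logger.addHandler(new StreamHandler(out, new SimpleFormatter()) {
            @Override
            public synchronized void publish(LogRecord record) {
                if (record == null)
                    return;
                super.publish(record);
                super.flush(); // push each record out immediately
            }
        });

        logger.info("Player1 placed a checker in position : column 4, row 5");
        out.println("| - | - | - | - | - | - | - |"); // the grid printed by display()

        String text = buf.toString();
        // Both outputs share one stream, so the log record precedes the grid.
        System.out.println("ordered=" + (text.indexOf("Player1") < text.indexOf("| - |")));
    }
}
```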
I have a setup where multiple servers run a @Scheduled method which runs a Spring Batch job that sends out emails to users. I want to make sure that only one instance of this job is run across the servers.
Based on this question, I have implemented some logic to see if it's possible to solve this using only Spring Batch.
To run a job, I created a helper class JobRunner with the following methods:
public void run(Job job) {
    try {
        jobLauncher.run(job, new JobParameters());
    } catch (JobExecutionAlreadyRunningException e) {
        // Check if the job is inactive and stop it if so.
        stopIfInactive(job);
    } catch (JobExecutionException e) {
        ...
    }
}
The stopIfInactive method:
private void stopIfInactive(Job job) {
    for (JobExecution execution : jobExplorer.findRunningJobExecutions(job.getName())) {
        Date createTime = execution.getCreateTime();
        DateTime now = DateTime.now();
        // Get running seconds for more info.
        int seconds = Seconds
                .secondsBetween(new DateTime(createTime), now)
                .getSeconds();
        LOGGER.debug("Job '{}' already has an execution with id: {} with age of {}s",
                job.getName(), execution.getId(), seconds);
        // If the job start time exceeds the execution window, stop the job.
        if (createTime.before(now.minusMillis(EXECUTION_DEAD_MILLIS).toDate())) {
            LOGGER.warn("Execution with id: {} is inactive, stopping",
                    execution.getId());
            execution.setExitStatus(new ExitStatus(BatchStatus.FAILED.name(),
                    String.format("Stopped due to being inactive for %d seconds", seconds)));
            execution.setStatus(BatchStatus.FAILED);
            execution.setEndTime(now.toDate());
            jobRepository.update(execution);
        }
    }
}
And then the jobs are run by the following on all servers:
@Scheduled(cron = "${email.cron}")
public void sendEmails() {
    jobRunner.run(emailJob);
}
Is this a valid solution for a multiple server setup? If not, what are the alternatives?
EDIT 1
I've done a bit more testing: I set up two applications which run a @Scheduled method every 5 seconds that initiates a job using the helper class I created. It seems that my solution does not solve the problem. Here is the data from the batch_job_execution table used by Spring Batch:
job_execution_id | version | job_instance_id | create_time | start_time | end_time | status | exit_code | exit_message | last_updated | job_configuration_location
------------------+---------+-----------------+-------------------------+-------------------------+-------------------------+-----------+-----------+--------------+-------------------------+----------------------------
1007 | 2 | 2 | 2016-08-25 14:43:15.024 | 2016-08-25 14:43:15.028 | 2016-08-25 14:43:16.84 | COMPLETED | COMPLETED | | 2016-08-25 14:43:16.84 |
1006 | 1 | 2 | 2016-08-25 14:43:15.021 | 2016-08-25 14:43:15.025 | | STARTED | UNKNOWN | | 2016-08-25 14:43:15.025 |
1005 | 2 | 2 | 2016-08-25 14:43:10.326 | 2016-08-25 14:43:10.329 | 2016-08-25 14:43:12.047 | COMPLETED | COMPLETED | | 2016-08-25 14:43:12.047 |
1004 | 2 | 2 | 2016-08-25 14:43:10.317 | 2016-08-25 14:43:10.319 | 2016-08-25 14:43:12.03 | COMPLETED | COMPLETED | | 2016-08-25 14:43:12.03 |
1003 | 2 | 2 | 2016-08-25 14:43:05.017 | 2016-08-25 14:43:05.02 | 2016-08-25 14:43:06.819 | COMPLETED | COMPLETED | | 2016-08-25 14:43:06.819 |
1002 | 2 | 2 | 2016-08-25 14:43:05.016 | 2016-08-25 14:43:05.018 | 2016-08-25 14:43:06.811 | COMPLETED | COMPLETED | | 2016-08-25 14:43:06.811 |
1001 | 2 | 2 | 2016-08-25 14:43:00.038 | 2016-08-25 14:43:00.042 | 2016-08-25 14:43:01.944 | COMPLETED | COMPLETED | | 2016-08-25 14:43:01.944 |
1000 | 2 | 2 | 2016-08-25 14:43:00.038 | 2016-08-25 14:43:00.041 | 2016-08-25 14:43:01.922 | COMPLETED | COMPLETED | | 2016-08-25 14:43:01.922 |
999 | 2 | 2 | 2016-08-25 14:42:55.02 | 2016-08-25 14:42:55.024 | 2016-08-25 14:42:57.603 | COMPLETED | COMPLETED | | 2016-08-25 14:42:57.603 |
998 | 2 | 2 | 2016-08-25 14:42:55.02 | 2016-08-25 14:42:55.023 | 2016-08-25 14:42:57.559 | COMPLETED | COMPLETED | | 2016-08-25 14:42:57.559 |
(10 rows)
I also tried the method provided by @Palcente; I got similar results.
Spring Integration's latest release added some functionality around distributed locks. This is really what you want to use to make sure that only one server fires the job (only the server that obtains the lock should launch it). You can read more about Spring Integration's locking capabilities in the documentation here: http://projects.spring.io/spring-integration/
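The shape of that fix can be sketched with plain JDK types. This is an in-JVM stand-in: in a real deployment the ReentrantLock would be replaced by a distributed lock, such as one obtained from Spring Integration's lock registry backed by the shared database, and the class and method names here are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class SingleJobLauncher {
    static final ReentrantLock lock = new ReentrantLock(); // stand-in for a distributed lock
    static final CountDownLatch tick = new CountDownLatch(1);
    static final AtomicInteger launches = new AtomicInteger();

    // Every "server" runs this on its schedule; only the lock holder launches.
    static void sendEmails() throws InterruptedException {
        tick.await(); // make both schedulers fire at the same instant
        if (lock.tryLock()) {
            try {
                launches.incrementAndGet(); // launch the batch job here
                TimeUnit.MILLISECONDS.sleep(200); // simulate the job running
            } finally {
                lock.unlock();
            }
        } // the loser skips this tick instead of launching a duplicate job
    }

    public static void main(String[] args) throws Exception {
        ExecutorService servers = Executors.newFixedThreadPool(2);
        Future<?> a = servers.submit(() -> { sendEmails(); return null; });
        Future<?> b = servers.submit(() -> { sendEmails(); return null; });
        tick.countDown(); // both "servers" hit the schedule simultaneously
        a.get();
        b.get();
        System.out.println("launches=" + launches.get());
        servers.shutdown();
    }
}
```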
I have some strange behaviour with a Restlet server and I can't understand what's happening!
I have a service that does basic string concatenation with data from a MySQL database.
My code is:
private static void testWS() throws Exception {
    Client client = new Client(Protocol.HTTP);
    for (String id : listIds) {
        long startTime = System.nanoTime();
        Request request = new Request(Method.GET, REST_SERVICE + id);
        Response response = client.handle(request);
        long endTime = System.nanoTime();
        System.out.println("Duration of WS call:" + ((endTime - startTime) / 1000000) + " ms");
    }
}
When I run this batch I get something like this:
Duration of WS call:128 ms
Duration of WS call:1015 ms
Duration of WS call:1069 ms
But when this same batch runs on two different computers at the same time, I get, for both batches, something like this:
Duration of WS call:90 ms
Duration of WS call:92 ms
Duration of WS call:81 ms
The responses are 10x faster when two programs are querying the server instead of one!
Real example: the same batch running on two different computers:
+-----------------------------+----------------------------+
| Batch 1 | Batch 2 |
+-----------------------------+----------------------------+
| Duration of WS call:128 ms | | Start of Batch1
| Duration of WS call:1015 ms | |
| Duration of WS call:1010 ms | |
| Duration of WS call:1012 ms | |
| Duration of WS call:1031 ms | |
| Duration of WS call:1036 ms | |
| Duration of WS call:834 ms | |
| Duration of WS call:90 ms | Duration of WS call:75 ms | Start of Batch2
| Duration of WS call:92 ms | Duration of WS call:82 ms |
| Duration of WS call:81 ms | Duration of WS call:85 ms |
| Duration of WS call:89 ms | Duration of WS call:82 ms |
| Duration of WS call:146 ms | Duration of WS call:90 ms |
| Duration of WS call:92 ms | Duration of WS call:85 ms |
| Duration of WS call:85 ms | Duration of WS call:76 ms |
| Duration of WS call:28 ms | Duration of WS call:96 ms |
| Duration of WS call:165 ms | Duration of WS call:88 ms |
| Duration of WS call:78 ms | Duration of WS call:84 ms |
| Duration of WS call:85 ms | Duration of WS call:63 ms |
| Duration of WS call:103 ms | Duration of WS call:37 ms |
| Duration of WS call:129 ms | Duration of WS call:74 ms |
| Duration of WS call:73 ms | Duration of WS call:140 ms | Batch2 manually stopped
| Duration of WS call:1058 ms | |
| Duration of WS call:1016 ms | |
| Duration of WS call:1006 ms | |
| Duration of WS call:1020 ms | |
| Duration of WS call:1055 ms | |
| Duration of WS call:958 ms | |
| Duration of WS call:1003 ms | | End of Batch1
+-----------------------------+----------------------------+
Is there an explanation for this?
Thanks in advance.
I was using the default HTTP server in org.restlet.jar, which seems to produce strange results. Since I switched to Jetty, everything works as expected (average response time is 50-70 ms).
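For anyone hitting the same issue: switching Restlet to its Jetty connector is typically just a matter of putting the Jetty extension on the classpath, since Restlet discovers registered connector helpers automatically. A sketch of the Maven dependency, assuming a Maven build (the version shown is illustrative and should match your Restlet version):

```
<dependency>
    <groupId>org.restlet.jse</groupId>
    <artifactId>org.restlet.ext.jetty</artifactId>
    <version>2.4.3</version> <!-- use the same version as your org.restlet dependency -->
</dependency>
```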