Axis Client Timeout - java

I'm using a 3rd party web service that's implemented as SOAP web services.
Per their instructions I used eclipse to generate java stub classes from the WSDL.
After experiencing some long-running requests with them, I dug into the generated classes and found where an org.apache.axis.client.Call was being created and invoked. I set a configurable timeout on the Call object.
I can test this by setting the timeout to something unrealistic, like 10 milliseconds. When I do this every request times out as expected.
In production, I'm seeing calls to them take longer than the timeout; for example, the timeout is 3 seconds but the execution takes over a minute.
Is there something I'm missing? Maybe I need to dust off my TCP/IP Illustrated books and reacquaint myself with the finer points, or maybe it's just something under the covers of the Axis code.
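For reference, the timeout is applied roughly like this (a simplified sketch of the change I made around the generated stub, not the exact generated source; the endpoint URL is made up):

import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class TimeoutExample {
    // Builds a Call the same way the generated stub does, then applies the
    // configurable timeout before the Call is invoked.
    public static Call createCallWithTimeout(int timeoutMillis) throws Exception {
        Service service = new Service();
        Call call = (Call) service.createCall();
        call.setTargetEndpointAddress("http://example.com/thirdparty/Service"); // made-up endpoint
        call.setTimeout(Integer.valueOf(timeoutMillis)); // e.g. 3000 for a 3-second timeout
        return call;
    }
}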

I would suggest finding out how long this 3rd-party web service actually takes to respond to clients, rather than digging into your code, since the problem might be on the server side.
For this you don't actually need to write a client yourself; you can use a tool like SOAP UI and try sending a request. There is plenty of detail about SOAP UI on the internet if you need to find out how to send a request. Please refer here to get started.

Related

Monitor database with GWT

Maybe I'm overthinking this but I'd like some advice. Customers can place an order inside my GWT application, and on a secondary computer I want to monitor those submittals inside the GWT application and flash an alarm every time an order is submitted, provided the user has OK'd this. I can't figure out the best way to do this. Orders are submitted to a MySQL database, if that makes any difference. Does anyone have a suggestion on what to do or try?
There are two options: 1) polling, or 2) pushing, which would allow your server (in the servlet handling the GWT request) to notify you after the order is successfully placed.
In 1) polling, the client (meaning the browser you are using to monitor the app) will periodically call the server to see if there is data waiting. It may be more resource intensive, as many calls are made for infrequent data, and it may also be slower due to the delay between calls. If only your monitoring client is calling, though, it wouldn't be so resource intensive (a minimal polling sketch is below).
In 2) pushing, the client will make a request and the request will be held open until there is data. It is less resource intensive and can be faster. Once data is returned, the client sends another request (this is long polling). Alternatively, streaming is an option where the server never sends a complete response and just keeps sending data. This streaming option requires a client-/browser-specific implementation though. If it's just you monitoring, though, you know the client and could set it up specifically for that.
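If it really is just your one monitoring client, option 1 can be as simple as a repeating GWT Timer on the monitoring page (a rough sketch; the OrderServiceAsync RPC interface and its getNewOrders method are hypothetical, standing in for a service that queries your MySQL orders table):

import java.util.List;
import com.google.gwt.user.client.Timer;
import com.google.gwt.user.client.rpc.AsyncCallback;

public class OrderMonitor {

    // Hypothetical GWT-RPC async interface; the matching RemoteService on the
    // server would query the MySQL orders table for anything new.
    interface OrderServiceAsync {
        void getNewOrders(AsyncCallback<List<String>> callback);
    }

    private final OrderServiceAsync orderService;

    public OrderMonitor(OrderServiceAsync orderService) {
        this.orderService = orderService;
    }

    public void start() {
        Timer poller = new Timer() {
            @Override
            public void run() {
                orderService.getNewOrders(new AsyncCallback<List<String>>() {
                    public void onSuccess(List<String> newOrders) {
                        if (!newOrders.isEmpty()) {
                            // flash the alarm in the monitoring UI here
                        }
                    }
                    public void onFailure(Throwable caught) {
                        // log it and keep polling
                    }
                });
            }
        };
        poller.scheduleRepeating(10000); // check every 10 seconds
    }
}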
See the demo project in GWT Event Service
Here is the documentation (user manual) for it.
Also see GWT Server Push FAQ
There are other ways of doing it other than GWT Event Service of course. Just google "GWT server push" and you'll find comet, DWR, etc., and if you are using Google's App Engine the Channel API

Sending message from server to client with Java

I'm working with Apache Tomcat 7 (JSP and servlets). In my application, I need to send some messages from the server to the client. Below, I'll explain a little bit about what I'm working on.
Brief explanation: The application brings up a login page every time a user who isn't logged in wants to connect to the internet. After the user has logged in successfully and his time is about to end, I need to send the client a message with the remaining time (for example, during the last few minutes). Another requirement could be to open an advertising popup at a specific time.
I know about JMS, but I don't know how well it fits my scenario. I also read in other posts that WebSocket could be an option.
I'm running the server on CentOS 6.2.
Question: For this scenario, do you have some thoughts on how to treat it with Java technologies? If you have some other ideas, feel free to expose!
N.B. I found good answers to related SO questions for JavaScript and PHP. I'm interested specifically in how to solve this with Java technologies.
http://jwebsocket.org/
Maybe this fits your needs.
You will not be able to initiate an HTTP connection from the server to the client. One solution would be to use a WebSocket/Comet framework. Unfortunately WebSockets are not really widespread (server + browser) for now, so I would suggest using a framework to fill the gap: https://github.com/Atmosphere/atmosphere
I don't understand your obsession with us implementing the solution in Java - any valid solution should be portable across different serverside languages. However if the termination is to occur without synchronous user-driven interaction, then you're just creating load on your server by trying to handle it here. If you want somebody to write the code for you then this isn't the right forum.
I know about JMS ... CentOS 6.2.
Not much help here.
The thing we really need to know is what you mean by:
After the user has logged in successfully and his time is about to end
(I assume you mean the session time is going to end, unless you've written some software which predicts when people will die).
How do you determine when the session will be ended?
1. Is it a time limit per page?
2. Is it a fixed time from when they log in?
3. Is it when the session is garbage collected by Java?
In case 1, the easiest way to achieve this would be to use JavaScript to set a timeout on the page (when the user navigates to a new screen the timeout will be discarded), e.g.
setTimeout(function() {
    alert('5 minutes has expired since you got here - about to be logged out');
}, 300000); // 5 minutes
In case 2 you'd still use the method above, but reduce the JavaScript timeout by the time already spent, as recorded on the server (generate the JavaScript from Java, or drop a cookie containing the login timestamp); a small servlet sketch follows below.
In case 3 you don't really have any way of knowing when the user will be logged out.
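For case 2, one way to generate that JavaScript from the server is a small servlet along these lines (just a sketch; the "loginTime" session attribute and the 30-minute allowance are made up):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CountdownScriptServlet extends HttpServlet {
    private static final long ALLOWED_MILLIS = 30 * 60 * 1000L; // example 30-minute allowance

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // "loginTime" is a hypothetical attribute set when the user logs in.
        Long loginTime = (Long) req.getSession().getAttribute("loginTime");
        long elapsed = (loginTime == null) ? 0 : System.currentTimeMillis() - loginTime;
        long remaining = Math.max(0, ALLOWED_MILLIS - elapsed);

        // Emit the countdown script with the remaining time computed server-side.
        resp.setContentType("application/javascript");
        PrintWriter out = resp.getWriter();
        out.println("setTimeout(function() {");
        out.println("    alert('Your time is almost up - about to be logged out');");
        out.println("}, " + remaining + ");");
    }
}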

SocketTimeoutException occurring non-stop after app has run for a while, but are instantly-solved by restarting it

I have a crawler Java application which is supposed to connect to some HTTP servers, download the HTML content of their pages, then move on to other HTTP servers. For this task, I've used the Apache HTTP library.
For the first few hours of the run, things seem to work rather smoothly (there are some connection-related exceptions thrown from time to time, but that's to be expected).
Yet after a while, it seems like I keep getting SocketTimeoutException on every request I send out. The exception does not occur on the HttpClient class's "execute" method, but rather when I try to get the content of the Entity (which I retrieve from the HttpResponse object), or when I try to write that content to a file.
Then, if I stop the application and start it over again, things seem to go back to working fine - even though it picks up from where it stopped, meaning it's interacting with the same servers that were giving me SocketTimeoutExceptions before.
I tried looking for all kinds of possible clean-ups that I might be missing and might be essential when using this library, but couldn't find anything.
Any help would be greatly appreciated.
Thanks.
This sounds like the kind of thing which could be caused by connection pools where you're not closing things when you're done with them, if the timeout occurs while the client library waits to retrieve a pooled connection. Are you sure you're closing everything properly (in finally blocks)?
If you run Wireshark to monitor your traffic, what network traffic occurs while it's "broken"?
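For what it's worth, the cleanup pattern with the Apache HTTP library usually looks something like this (a sketch assuming HttpClient 4.1+; with a pooled connection manager, an entity you never consume keeps its connection checked out, and later requests can block or time out waiting for one):

import java.io.InputStream;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class CrawlerFetch {
    private final HttpClient client = new DefaultHttpClient();

    public void fetch(String url) throws Exception {
        HttpGet get = new HttpGet(url);
        HttpResponse response = client.execute(get);
        HttpEntity entity = response.getEntity();
        try {
            if (entity != null) {
                InputStream in = entity.getContent();
                try {
                    // ... read the HTML and write it to a file ...
                } finally {
                    in.close();
                }
            }
        } finally {
            // Releases the underlying connection back to the pool; without this,
            // later requests can hang waiting for a free connection.
            EntityUtils.consume(entity);
        }
    }
}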
Make sure you're not making a lot of HTTP requests at the same time. For example, send 5 HTTP requests and wait for the first response before making another, and so on. It looks like your HTTP requests are opening too many sockets.

server push or client push is better?

I am developing a chat website using JSP/servlets. I will be hosting my website on Google App Engine. Now I have some doubts regarding whether to use server push or client pull technology.
1) If I use server push and I don't close the servlet response, will it cause the server to go slow? How many simultaneous connections can a typical Tomcat server handle if I keep the socket open for the entire chat session between 2 clients?
2) Would server push or client pull be better?
If you are using a servlet (prior to 3.0), then I guess you'll have to go with pull because of the servlet programming model. However, there ARE advantages in using a push model: primarily, avoiding wasted load on the server and the latency penalty of polling. That's why there are technologies such as Comet. Servlet 3.0 also supports a push model (there's a minimal sketch at the end of this answer). These are commonly used in Ajax-based apps.
In fact I believe a push model is better suited for a chat app because of the faster response time (= better user experience) it can provide.
If you use an NIO-based implementation for the push model, you can support thousands or even more than 10k concurrent connections (obviously, your mileage varies).
If you use a conventional IO-based implementation, it will likely be in the range of hundreds of concurrent connections (don't take this estimate too seriously though; I'm just giving these numbers to convey a very, very rough feeling).
As for Tomcat, last time I checked, people were saying that it wouldn't have good push-model support until version 7.0. But I'm not following the current status, so I'm not sure (sorry, perhaps somebody else can help you on this). If that is the case, you might want to check out the Comet support in Jetty.
Grizzly and Netty are also good NIO-based network frameworks, but if you want to use JSP and find that Tomcat is not sufficient, I guess Jetty would be the best bet.
edit: (some additional info)
In this "push models", it's not like the server opens a connection to the client. The connection will be kept alive, and the server will push messages as it sees fit.
Also, it's not like there are only "push" and "pull" models. You can have a hybrid, like long polling.
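For completeness, here's roughly what long polling looks like with the Servlet 3.0 async API mentioned above (a minimal sketch; in real code you'd also register an AsyncListener to drop timed-out clients):

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    // Clients currently parked, waiting for the next chat message.
    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<AsyncContext>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(25000); // give up after 25s and let the client re-poll
        waiting.add(ctx);
    }

    /** Called by whatever receives a new chat message. */
    public void broadcast(String message) throws IOException {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            ctx.getResponse().getWriter().write(message);
            ctx.complete(); // finish the held request; the client polls again
        }
    }
}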
I don't know how you are thinking of achieving server push here. As far as I can see, the server needs a request to respond to over HTTP. So when there is a request, the server will respond to it.
If I use server push and I don't close the servlet response, will it cause the server to go slow?
App Engine will not let you do that. You have to finish your response within thirty seconds, or it will be killed. The thirty seconds is also an edge case, most calculations they do (for quota and such) are based on a 75 millisecond response time.
How many simultaneous connections can a typical Tomcat server
Tomcat? I thought you are planning to use App Engine?
Pull. Always pull.
I know it's a manufacturing-oriented book but the advice from Lean Thinking (Womack & Jones) is invaluable in any context (roughly, from memory):
Start by defining value,
line up the activities that create value in the value-stream,
create flow across the value-stream,
let customers pull value from the value-stream,
compete against perfection rather than other organizations
If I misquoted them, I apologize. Anyway, all of those principles can easily be applied in the development of any software product, just as they could in the production of any physical product, but the one that matters for you is pull.
Letting consumers of a service pull rather than pushing to them not only makes your programming model easier, it aligns activity with demand. You can still use queuing to load-level over time, if you have to, just the way you could with push but, this way, you have complete visibility into what, exactly happens in any given transaction.
I don't quite get your first question but the answer is still pull.
The answer to your query depends on what underlying protocol you wish to use.
Since you have mentioned JSP/servlets, your app will be implemented over the HTTP protocol.
HTTP is a protocol over TCP. TCP is connection-oriented and a connection stays alive until it is closed. Plain HTTP connections, however, are only held for the duration of a request-response exchange (plus any keep-alive window); the server does not keep a dedicated socket open for the whole chat session. That should answer your doubt about how many socket connections a typical Tomcat server will be able to handle: the connections won't be held open for the entire chat, only for the duration of each HTTP request-response cycle.
Given this basic idea, I would suggest you use a client pull strategy to implement your app.
Even with "server push" over HTTP, it is always the web client that initiates the request; the server either answers regular polls or holds the request open, which just gives an illusion of server push. The HTTP specification mandates that the client makes a request to which the server responds.
I have considerable experience in developing chat applications (both mobile and web).
Let me know if you need any assistance. I will be more than willing to help.

php comet with quercus

If I write the Comet push in PHP but run this code on a Java server via Quercus, will that solve the one-process-per-request problem that Apache had, and scale well with a lot of users using my chat?
Yes, Quercus solves the one-process-per-request Apache bottleneck. However, you need to understand the possible bottlenecks of the JVM. In my opinion, though, you should write the service or app in C/C++ using something like libevent, in Erlang, in Google Go, or simply as a Java servlet for portability's sake.
Well, Quercus runs on the JVM, so it can run alongside other code that can start threads. But why do you need threads to do chat? You simply set the timeout on a vanilla PHP request to 0 (no timeout) and wait for there to be something to send back to the user.
That something will come in response to someone else's request (i.e. A says "hello", which interrupts B's wait for something to happen). That doesn't require multithreading.
Also, you could keep using Apache/PHP, do the above, and instead connect to a Java (or other) service via something like XML-RPC, which could wait forever. That server could run multiple threads or do whatever it needs to.
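On the Java side, that "wait until there is something to send back" idea is basically a blocking queue per waiting client, something like this sketch (class and method names are made up for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ChatMailbox {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<String>();

    /** Called when someone else posts a message (A says "hello"). */
    public void deliver(String message) {
        pending.offer(message);
    }

    /** Called by the handler serving B's held request; blocks until a message arrives. */
    public String awaitNextMessage(long maxWaitSeconds) throws InterruptedException {
        // Returns null if nothing arrived in time, so the client can simply re-poll.
        return pending.poll(maxWaitSeconds, TimeUnit.SECONDS);
    }
}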
