I'll get right into the subject.
I have a server that runs a music recommendation system (for some kind of application).
The server has a very large database, so I made a singleton constructor for the recommendation system.
My problem is:
The first time this constructor runs, it has to train on the data, and it connects to the database a lot, which is a time-consuming operation.
This only has to happen the first time, thanks to my singleton object; afterwards it can use the results of the constructor right away.
My problem is that on the first HTTP request from my PC to the server, the browser times out and the singleton object is never created on the server.
I think my solution would be to extend the browser's wait time until the server finishes the computation and returns a result; however,
if someone has a better solution, I'd be greatly in their debt.
I really need an easily applicable solution that requires minimal effort, because the delivery deadline is closing in and I need to wrap up the project as fast as possible.
Thanks again
A few comments/suggestions:
Increasing the timeout is one way, but it's not a sure-fire way of solving the problem. The time taken by the recommendation system may not always be the same.
I suggest another approach. Not sure if it's an option for you, but would it be possible to create the recommendation system asynchronously in a separate thread, so that server startup is not held back by it?
If you can do that, then provide a flag which indicates whether the recommendation system has started.
Meanwhile, if you receive any request, first check the flag; if it indicates that the recommendation system is not yet ready, return some meaningful message/status.
This way you will get a response immediately, and based on the response you can work out retries on the client side.
Please note that this will be a substantial change on the server side. Just an opinion to improve things further and a foolproof way of avoiding the timeout.
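The flag-based approach above could be sketched roughly like this. Everything here is a hypothetical stand-in for your real classes (`Recommender` represents the expensive-to-construct recommendation system); the "flag" is simply an `AtomicReference` that is null until construction finishes:

```java
import java.util.concurrent.atomic.AtomicReference;

public class RecommenderHolder {
    // Hypothetical stand-in for the real recommendation system.
    public static class Recommender {
        Recommender() {
            // Imagine the expensive training / database work happening here.
        }
        String recommend(String user) { return "song-for-" + user; }
    }

    private static final AtomicReference<Recommender> INSTANCE = new AtomicReference<>();

    // Kick off construction in the background, e.g. at server startup.
    public static void initAsync() {
        Thread t = new Thread(() -> INSTANCE.set(new Recommender()), "recommender-init");
        t.setDaemon(true);
        t.start();
    }

    // Request handlers call this: null means "still warming up".
    public static Recommender getIfReady() {
        return INSTANCE.get();
    }
}
```

A request handler would then check `getIfReady()` and, while it returns null, immediately respond with something like HTTP 503 plus a Retry-After header instead of blocking.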
You can increase the connection timeout using the code below. Note that if the server is slow to respond (rather than slow to accept the connection), it is the socket (read) timeout that fires, so set that too:
HttpResponse response = null;
final HttpParams httpParams = new BasicHttpParams();
// 60-second connection timeout (time allowed to establish the TCP connection)
HttpConnectionParams.setConnectionTimeout(httpParams, 60000);
// 60-second socket timeout (time allowed to wait for data once connected)
HttpConnectionParams.setSoTimeout(httpParams, 60000);
HttpClient httpClient = new DefaultHttpClient(httpParams);
Related
I have a REST API created in Java with the Spark framework, but right now a lot of work is being done on the request thread, which is significantly slowing down requests.
I want to solve this by creating some kind of background worker/queue that will do all the needed work off of the request thread. The response from the server contains data that the client needs (it's data that will be displayed). In these examples the client is a web browser.
Here's what the current cycle looks like:
API request from client to server
Server does blocking work; response from server after several seconds/minutes
Client receives response; it has all the data it needs
Here's what I would like
API request from client to server
Server does work off-thread
Client receives response from server almost instantly, but it doesn't have the data it needs. This response will contain some ID (Integer or UUID), which can be used to check the progress of the work being done
Client regularly checks the status of the work being done, the response will contain a status (like a percentage or time estimate). Once the work is done, the response will also contain the data we need
What I dislike about this approach is that it will significantly complicate my API. If I want to get any data, I will have to make two requests. One to initiate the blocking work, and another to check the status (and get the result of the blocking work). Not only will the API become more complicated, but the backend will too.
Is this efficient, or is there a better way to implement what I want to accomplish?
Neither way is more efficient than the other, since the same amount of work is done in either case. In the first case it is done on the request thread: the client does not know the progress, and the request takes as long as the task takes to run. This has the client wait on the reply.
In the second case you need to add complexity, but you get progress status and possibly other advantages depending on the task. This has the client poll for the reply.
You can use async processing to perform work on non-request threads, but that probably won't make any difference if most of your requests are long-running ones. So it's up to you to decide what you want; the client will have to wait the same amount of time either way.
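The submit-then-poll pattern described in the question can be sketched independently of any web framework; the two public methods below correspond to the "start work" and "check status" endpoints you would expose (all names and the four-thread pool size are illustrative assumptions, not Spark APIs):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class JobQueue {
    // Status object the polling endpoint would serialize back to the client.
    public static final class Job {
        public volatile String status = "PENDING"; // PENDING -> RUNNING -> DONE/FAILED
        public volatile String result = null;
    }

    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    private final Map<String, Job> jobs = new ConcurrentHashMap<>();

    // Called by the "start work" endpoint: returns an ID immediately.
    public String submit(Callable<String> work) {
        String id = UUID.randomUUID().toString();
        Job job = new Job();
        jobs.put(id, job);
        workers.submit(() -> {
            job.status = "RUNNING";
            try {
                job.result = work.call();
                job.status = "DONE";
            } catch (Exception e) {
                job.status = "FAILED";
            }
        });
        return id;
    }

    // Called by the "check status" endpoint.
    public Job poll(String id) {
        return jobs.get(id);
    }
}
```

In a real service the `jobs` map would also need eviction (completed entries expire after some TTL), otherwise it grows without bound.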
I'm working on a component in an android/java app, responsible (currently) for sending GET requests to a remote server. My code is based on this sample:
HTTP Client Template.
I've used the setConnectTimeout() and setReadTimeout() methods from the URLConnection class, but I lack a full understanding of their impact. Say I specify a value of 10 seconds for both:
Does it mean it should give up after 10 seconds of inability to start a connection? and will never timeout if the connection is open & active?
Or giving up after 10 seconds from the moment of the call? even if say the connection was actually started successfully after 2 seconds, and it could not finish all data transfer during the next 8 seconds?
Or is it even another different case?
Also, the concept is clear for timing out a connect attempt, but how does a receive timeout occur? As far as I know, the OS will automatically receive and hold data sent to you in its local buffer even before you ask to receive it, since the data could arrive before you make the call in your code; the OS does this to guarantee that data isn't lost.
So is my timeout value for receiving passed on to the OS for it to handle?
Thanks in advance; I hope I did my part well in the question.
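For reference, per the URLConnection javadoc the connect timeout bounds establishing the connection only, and the read timeout bounds each individual blocking read (a SocketTimeoutException is raised if no data arrives within that window) rather than the total transfer time; so a connection that keeps delivering data never trips the read timeout, which matches the first interpretation in the question. A minimal sketch of setting both (the URL is a placeholder; no network I/O happens until you actually connect or read):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutDemo {
    public static HttpURLConnection configure(String address) throws Exception {
        // openConnection() does not touch the network yet.
        HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
        // Max time to establish the TCP connection.
        conn.setConnectTimeout(10_000);
        // Max time a single read may block waiting for data,
        // i.e. the gap between packets -- not a cap on the whole transfer.
        conn.setReadTimeout(10_000);
        return conn;
    }
}
```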
I finished coding a Java application that uses 25 different threads; each thread is an infinite loop where an HTTP request is sent and the (small) JSON object that is returned is processed. It is crucial that the time between two requests sent by a specific thread is less than 500ms. However, I did some benchmarking on my program and that time is well over 1000ms. So my question is: is there a better way to handle multiple connections other than creating multiple threads?
I am in desperate need for help so I'm thankful for any advice you may have !
PS: I have a decent internet connection ( my ping to the destination server of the requests is about 120ms).
I'd suggest looking at Apache HttpClient.
Specifically, you'll be interested in constructing a client that has a pooling connection manager; you can then share the same client across all of your threads. Note that the pool defaults to only two connections per host, so raise the per-route limit as well as the total:
PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
connectionManager.setMaxTotal(number);
connectionManager.setDefaultMaxPerRoute(number);
HttpClient client = new DefaultHttpClient(connectionManager);
Here's a specific example that handles your use-case:
PoolingConnectionManager example
I am currently creating a program that, based on constantly changing variables, connects to a website and gathers information. It must connect to the website up to 400 times. The subject website seems to display a blank screen after a certain number of connections, about 10-30. Does anyone know the best way to find how long to wait between connections?
public static String pullString(int id) {
    // TODO: fetch and parse the page for the given id
    return null;
}
I can't get there from work, but google runescape api. They have one here, and I'll bet they expect you to use it.
Once it starts blocking you, does it eventually let you reconnect? You might be able to do some sort of algorithm to dynamically find how fast to try again.
You could consider something similar to what TCP congestion control does: start with some wait time between connections. When one completes successfully, decrease the wait time by a constant. When you get an error, double the wait time (or multiply by a constant).
It's very likely, though, that they're doing something more complicated than just rate-limiting connections. Without knowing what you have to get around, it's hard to know how to get around it.
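The congestion-control idea above (decrease the wait by a constant on success, multiply it on error) could be sketched like this; the constants are illustrative guesses, not tuned values:

```java
public class AimdBackoff {
    // Hypothetical tuning constants.
    private static final long MIN_WAIT_MS = 100;    // never go below this
    private static final long DECREASE_MS = 50;     // additive decrease on success
    private static final long INCREASE_FACTOR = 2;  // multiplicative increase on error

    private long waitMs;

    public AimdBackoff(long initialWaitMs) {
        this.waitMs = initialWaitMs;
    }

    // Call after a successful fetch: probe a slightly faster rate.
    public void onSuccess() {
        waitMs = Math.max(MIN_WAIT_MS, waitMs - DECREASE_MS);
    }

    // Call when the site blocks you: back off sharply.
    public void onError() {
        waitMs *= INCREASE_FACTOR;
    }

    // Sleep this long before the next connection attempt.
    public long currentWaitMs() {
        return waitMs;
    }
}
```

The fetch loop would then sleep `currentWaitMs()` between requests and call `onSuccess()`/`onError()` depending on whether the page came back blank.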
If the website gives you a random block time after you reach a certain limit, it's almost impossible to find the best wait time. I think your best bet is to use a pool of HTTP proxies to access the website in a round-robin way. That's not very nice... but technically it should be the best way to access a website programmatically if it blocks you after a certain amount of traffic.
Here is a link about how to use a proxy: http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html
You can also use HttpClient, which is simpler.
Try googling around and you can find plenty of free proxy server lists.
I've been reading the Apache javadocs and have concluded that the correct way to handle a heavily multithreaded application that uses HttpClient is a static singleton MultiThreadedHttpConnectionManager with a global HttpClient, with each thread having its own instance of HttpState for its cookies and its own HostConfiguration. Is this correct? Obviously I want to avoid each thread interfering with the others' state.
Also, one of the problems I've noticed is that when these threads end, the sockets associated with the HttpClient remain (checking /proc/###/fd), even after releaseConnection() is called in the finally block. This is a problem, as the default limit for fds is 1024 and it will be breached after a few days. What's the best way to get rid of these leftover sockets?
The only applicable method I can see for closing these sockets is MultiThreadedHttpConnectionManager.closeIdleConnections(timeout) - is this correct?
This means I'm going to have to either start another thread to periodically call this method, or just call it every time a thread is killed manually, which seems quite clunky. Any advice?
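A periodic cleanup thread doesn't have to be clunky: a single daemon ScheduledExecutorService can do it. Below is a generic sketch; the cleanup action is passed in as a Runnable so the class stays library-agnostic, and the comment shows where the closeIdleConnections call from the question would go (the 30-second idle threshold there is an assumption):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleConnectionReaper {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "idle-connection-reaper");
                t.setDaemon(true); // don't keep the JVM alive just for cleanup
                return t;
            });

    // cleanup would typically be:
    //   () -> connectionManager.closeIdleConnections(30_000)
    public void start(Runnable cleanup, long periodMillis) {
        scheduler.scheduleAtFixedRate(cleanup, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

Started once at application startup, this removes the need to hook cleanup into each worker thread's shutdown path.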
Thanks a lot