I'm working on a WSDL-based web service and using Apache Axis2. I'm not an expert on web services, and the person I'm working with claims that for this particular web service to work, two calls have to be made on the same connection, i.e. using HTTP keep-alive (there's basically a "commit transaction" method that needs to be called after the "save" method). This seems like it would be a pretty common issue, but I haven't found anything on Google.
I'm wondering if there's a way to explicitly tell Axis2 to do this. Also, how could I verify whether or not two calls are indeed being made on the same connection? I imagine some HTTP monitoring software like Wireshark might be able to tell me this, but I haven't installed it yet.
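For what it's worth, the kind of thing I am hoping exists looks roughly like the sketch below (myStub and the operation names are placeholders for the generated stub, and I am not sure REUSE_HTTP_CLIENT is the right switch):
import org.apache.axis2.Constants;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.transport.http.HTTPConstants;

// Sketch only - myStub and the two operations are placeholders.
ServiceClient serviceClient = myStub._getServiceClient();
Options options = serviceClient.getOptions();
// keep the underlying HttpClient (and its connections) across calls
options.setProperty(HTTPConstants.REUSE_HTTP_CLIENT, Constants.VALUE_TRUE);
// myStub.save(...);
// myStub.commitTransaction(...);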
The person you are working with is wrong. HTTP can be optimized by using keep-alive to process multiple requests over a single TCP connection, but this optimization is transparent to the caller and the callee: it should not matter whether a client makes two requests one after the other on a keep-alive connection or uses two separate connections.
The Java libraries involved (HttpURLConnection on the client side and the servlet API on the server side) do not even expose this information, so the code using them cannot know how the HTTP requests are actually carried out.
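To illustrate (a sketch only; the URL is a placeholder): two back-to-back requests with HttpURLConnection look exactly the same in code whether or not the JVM reuses the TCP connection underneath. Reuse is controlled internally (see the http.keepAlive system property, "true" by default), not by anything the caller writes.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: whether these two requests share one TCP connection is decided
// inside the JVM's HTTP handling, not by anything visible in this code.
public class TwoCalls {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 2; i++) {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.com/service").openConnection();
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) {
                    // drain the body so the connection is eligible for reuse
                }
            }
        }
    }
}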
You can use nmap to see what is actually running on each port.
But if you are making two calls at the same time, Axis2 will throw a "port already bound" error. A port can't handle two requests at the same time (my opinion). Maybe you can queue them and make the calls consecutively. But do confirm this with other sources as well.
Well, I am new to this and I don't know how to do it, so senior fellows, please help!
There is a situation described below:
An HTTP client sends a request (the request can be of any type; I'm not concerned with the request type) that directly hits a load balancer. The load balancer then redirects the traffic, based on load, towards a "Gateway" (GW) system running on two V440 servers. The GW logic is written in Java, and it logically routes each request towards two other server nodes which actually process it.
Now the scene is something like this: several parallel connections are established with this Gateway from several HTTP clients, one connection per client. It has been observed that, while connections are being made to this GW, for some clients the CPU utilization goes to 98-99%.
Each client creates one connection with the GW on a particular port; the GW opens a server socket and accepts it:
ServerSocket _ss = new ServerSocket(_port); // listen on the GW port
Socket s = _ss.accept();                    // block until a client connects
and then the GW waits for input from the client.
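For context, here is a rough sketch of how such an accept loop is commonly structured (this is not the actual GW code; the port and the handling logic are placeholders, and each accepted client is handled on its own thread with blocking reads):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch only: one handler thread per client, blocking reads, no busy-wait.
public class Gateway {
    public static void main(String[] args) throws Exception {
        try (ServerSocket ss = new ServerSocket(9000)) { // placeholder port
            while (true) {
                Socket s = ss.accept();              // blocks until a client connects
                new Thread(() -> handle(s)).start(); // one handler thread per client
            }
        }
    }

    private static void handle(Socket s) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) { // blocking read
                // route the request towards a processing node here
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}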
Now my questions are:
1. Why is this happening? Everything seems fine for the rest of the clients and their connections; only a few of the clients connecting to the GW cause this situation.
2. Is there any way we can track these clients' IPs, so that we can tell whether it is the same clients every time?
3. Is there any resolution for this?
Since it is not happening for all the clients, we are certainly not going to find an immediate answer. However, this is what my limited research on the question yields:
Firstly, Question 2
Configure your F5 to capture the client's IP. Since it is HTTP, there are multiple ways of tracking the requests. One is to sniff the X-Forwarded-For header, which will give the client's IP address.
Or try adding this rule to your logging engine:
when CLIENT_ACCEPTED { log local0. "clientIP:[IP::client_addr] accessed" }
If you also need other data, such as the resources accessed, you can use one of the other events, such as HTTP_REQUEST:
when HTTP_REQUEST {
    log local0. "clientIP:[IP::client_addr] accessed [HTTP::host][HTTP::uri]"
}
Refer to the F5 iRules documentation for the above.
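And if the GW's own Java code ever sees the HTTP request (an assumption on my part - the question shows raw sockets, so this only applies if a servlet layer is available somewhere in the chain), reading that header looks roughly like this sketch:
import javax.servlet.http.HttpServletRequest;

// Sketch: return the original client IP the load balancer forwards,
// falling back to the peer address if the header is absent.
public class ClientIpUtil {
    public static String clientIp(HttpServletRequest request) {
        String forwarded = request.getHeader("X-Forwarded-For");
        if (forwarded != null && !forwarded.isEmpty()) {
            // may be a comma-separated chain: client, proxy1, proxy2, ...
            return forwarded.split(",")[0].trim();
        }
        return request.getRemoteAddr();
    }
}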
Secondly, Question 1
For this, you need to look at the traffic-statistics mechanisms available to you. I read this, this and this. Enable the statistics, monitor them live, test with mock requests, and analyze the output. I do not know of any other options right now.
Another option, if you can modify your Java program, is to include some sort of performance-logging mechanism for each request. But that means a lot of development effort, which I do not recommend at all.
Thirdly, Question 3
This is primarily opinion based. As far as I can tell, once you figure out the problem, you can resolve it.
I'm creating a small utility which receives a lot of HTTP requests. It is written in Java and uses embedded Jetty to handle requests via HTTPS.
I have a load-testing tool for it, but when it has been running for some time it starts to throw exceptions:
java.net.BindException: Address already in use: connect
(note: this is on the sender's side, not in my project)
As I understand it, this means no more free sockets (local ports) were available in the system when another connect was attempted. Throughput is about 1000 requests per second, and failures start to appear somewhere after 20,000 to 50,000 requests.
However, when I use the same load-testing tool with another program (a kind of simple consumer written in Scala using Netty by a colleague - it simply receives all requests and returns an empty OK response), there is no such problem with sockets (though the typical speed is 1.5-2 times slower).
I wonder if this could be fixed by telling Jetty to close connections immediately after the response is sent; each new request is sent over a new connection anyway. I tried to play with Connector#setIdleTimeout - it seems to be 30000 by default - but did not succeed.
What can I do to fix this, or at least to investigate the matter more deeply and find its cause (in case my guess above is wrong)?
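For reference, this is roughly where that idle timeout lives (a sketch assuming the Jetty 9 ServerConnector API; SSL setup omitted and the port is a placeholder):
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

// Sketch: the idle timeout mentioned above only closes a connection
// after it has sat idle for that long.
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(8443);         // placeholder port
connector.setIdleTimeout(30000); // milliseconds; the default mentioned above
server.addConnector(connector);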
UPD: Thanks for the suggestions. I don't think I am allowed to post the source, but I get the idea that I should study the client's code (this will keep me busy for some time, since it is written in Scala).
I found that there really was a problem with the client - it sends requests with Connection: Keep-Alive in the header, yet creates a new HttpURLConnection for each request and calls its disconnect() method afterwards.
To solve this on the server side, it was sufficient to send Connection: close in the response header, since I am not allowed to change the testing utility.
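A minimal sketch of that server-side fix, assuming the Jetty 9 handler API (the class name is mine):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;

// Sketch: answer every request with "Connection: close" so Jetty tears the
// connection down after the response instead of keeping it alive.
public class CloseConnectionHandler extends AbstractHandler {
    @Override
    public void handle(String target, Request baseRequest,
                       HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setHeader("Connection", "close");
        response.setStatus(HttpServletResponse.SC_OK);
        // real request handling goes here
        baseRequest.setHandled(true);
    }
}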
For the past few weeks, I have been scouring the internet, the minds of computer programmers, and just a few random people over the situation I am looking to overcome. Basically, what I am trying to do is write an AntiJoinBot "plugin" (if you will) for the popular game Minecraft. It would be like all the others in that it blocks IPs based on whether they are using a proxy, but this AntiJoinBot runs on a different VPS than the actual server.
This is the best graph I can make of the situation (it's not that good):
(non-Minecraft server) Connection -> Proxy check -> Redirect to the server -> Minecraft
The only problem is, I need to be able to redirect the IP and close the connection so that the player's real IP is the one that connects to the server. If the connection cannot be closed, it would cause real problems because of some of the plugins we are running.
If you have a solution or a better way to do this, please help me.
Redirection of connections along the lines that you want requires support from the (application) protocol. TCP/IP does not support it. AFAIK, SOCKS does not support it either. Unless the Minecraft application protocol (and by implication, Minecraft clients and servers) includes support for redirection, you are out of luck.
(FWIW - that's how HTTP redirection works. HTTP has a "protocol element" that allows the server to tell the client to redirect, and where to redirect to. The client then resends the original request to a new address.)
But that doesn't mean that you can't deal with the pests. It just means that the redirection approach is not viable. Try a custom proxy or an IP filter / redirector instead.
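To illustrate the HTTP case mentioned above, this is roughly what server-side redirection looks like with the servlet API (a sketch; the target URL is a placeholder). The point is that the redirect is part of the HTTP protocol itself, and Minecraft's protocol would have to define something comparable on its own:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: the client receives a 302 with a Location header and re-sends its
// request to the new address; raw TCP has no equivalent mechanism.
public class RedirectServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.sendRedirect("http://other-host.example/path"); // placeholder target
    }
}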
You are trying to save the server's resources at the cost of increased traffic.
I am not sure of the answer, but by looking into the concept of an LBS (load balancing server) you may find it.
An LBS is defined and controlled entirely by you, so you can manage the resources of two servers using one load balancing server.
I created a game and I want to put it online. I want to buy a website (I'll probably use GoDaddy to buy a domain name and use them as the web host) to use as the server that handles gameplay. Because I would need a separate server for each game, I would need each game's server to exist on a different port. So this leads to my question: is it possible to access these ports on my future web server? (I wrote the program in Java, so I assume I would access the ports on the server side by choosing a port for a ServerSocket, and on the client side by using the IP address from the website and the chosen port for a Socket.)
(note: also, I am aware that it may be easier to simply use one port and run the servers on different threads instead, but I am just curious to have my question answered)
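For clarity, the setup I have in mind looks roughly like this sketch (the host name and port numbers are placeholders):
import java.net.ServerSocket;
import java.net.Socket;

// on the web server, one listener per game:
ServerSocket game1 = new ServerSocket(25001);
Socket player = game1.accept();

// in the client program:
Socket toGame1 = new Socket("www.example.com", 25001);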
thanks a lot,
Ian
Technically it is possible to use different ports, but I don't think that a web host like GoDaddy will let you run a Java process that binds to a custom port.
If you mean that you are going to create your own TCP server, you can obviously create as many instances of your server as you like and configure them to listen on different ports. But it is 2011 now; that solution was OK in the early 90s.
I'd suggest you use a RESTful API that works over HTTP. In this case you can forward calls to the server side of each application using the URL, e.g.
http://www.lan.com/foo/login?user=u123&password=123456 - log into application foo
http://www.lan.com/bar/login?user=u123&password=123456 - log into application bar
In this case you need only one server (the web server) listening on one socket (port 80).
Your server-side implementation could be done using various web technologies (PHP, Java, ASP.NET, etc.) of your choice.
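A rough sketch of that idea with the servlet API (class and parameter names are placeholders; any of the technologies above would do the same job):
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: both applications share port 80 and the container dispatches by URL
// path; a second servlet mapped to "/bar/login" would serve the other game.
@WebServlet("/foo/login")
public class FooLoginServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String user = req.getParameter("user");
        String password = req.getParameter("password");
        // authenticate against application "foo" here
        resp.getWriter().println("logged in to foo as " + user);
    }
}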
Yes, that should work. The security manager permits connections to a different port on the same IP address that the applet was loaded from.
You can run a Java server on whatever port you want. Each server will accept incoming requests on one port.
The correct way is simply to run on one port; each request is then serviced by the servlet in its own thread from the container's pool. You usually don't need to run separate ports or worry about concurrency, especially if all the state that's shared between connections (e.g. multiple players in one game) is handled through database reads and writes.
Your host (GoDaddy) will have to allow you the use of those ports, but if they are providing proper hosting (not virtual hosting) and have given you your own IP, there's no reason why you shouldn't be able to.
Your solution may work in theory, and I like AlexR's solution. But providers like GoDaddy don't let you run a Java server on ANY port. You will need to find somebody who does. What I found is that the cost goes up from $5/mo to about $20/mo, but you get a much better (read: faster) machine. Good wishes, - MS.
I am looking for a good TCP connection library for Java with the following facilities:
1. Retry on failed publishes
2. Multiple connections
Which library have you successfully used?
EDIT: Based on the comments, I changed the question to reflect which type of connection library.
Maybe Apache MINA will help you. Have a look.
I'm not sure this really makes sense. You're talking about retrying failed publishes, yet TCP doesn't have a concept of publishing, merely message transfer. So you could be publishing, or you could be requesting info.
e.g. HTTP over TCP has the verbs GET/PUT/POST (amongst others). All of these run over TCP. Only two actually write something (PUT/POST), and only PUT is supposed to be idempotent (that is to say, you should be able to perform the same operation again and again with the same result). If you POSTed repeatedly, I'd expect to republish something and create a new version on the server for every POST.
And the above are only recommendations for how PUT/POST are implemented. I wouldn't want an HTTP library to assume this on my behalf.
So the concept of retrying messages at the TCP layer is mistaken (note that TCP itself will resend the packets etc. that make up a message). Retrying is a higher-level function, which may use TCP at a lower level, e.g. I've written my own wrappers around HTTPClient to retry PUTting when my remote server becomes temporarily unavailable or reports an error (I'm not sure a retrying HTTP library exists).
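As an illustration of that kind of wrapper (a sketch only - my real code wraps HTTPClient; this one uses plain HttpURLConnection, and the names and retry policy are placeholders):
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of an application-level retry around an idempotent PUT; real code
// would add backoff and limit which errors are retried.
public class RetryingPut {
    public static void putWithRetry(String url, byte[] body, int maxAttempts) throws IOException {
        IOException last = new IOException("no attempts made");
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestMethod("PUT");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body); // send the payload
                }
                int status = conn.getResponseCode();
                if (status >= 200 && status < 300) {
                    return; // success, stop retrying
                }
                last = new IOException("HTTP " + status + " on attempt " + attempt);
            } catch (IOException e) {
                last = e; // network failure: safe to retry because PUT is idempotent
            }
        }
        throw last; // all attempts failed
    }
}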
Maybe this will help others. Try this library called socketal. It's pure Java, uses ServerSocket and Socket, is pretty simple, and doesn't have any unnecessary features.
This library is capable of:
Autoreconnect on disconnection
Handling Connecting/Disconnected/Connected events
Pretty simple to send String, Object or File
Set your own authentication code and verification, like a login password
It's similar to Netty, but it doesn't have a lot of complicated setup and features.
It's compatible with Android and Java.