I have a Windows application (written in Java) that connects to the server (Spring + httpd) via websockets. Another websocket connection is established as soon as a user authenticates to the same server from a web browser. If the server notices that both clients have the same IP address, it "pairs" them so both applications can talk to each other.
The challenge I'm currently facing is that when multiple Windows applications start up, each of them establishes a new websocket connection, which exceeds httpd's limit of 255 active connections and brings the server down.
I'm looking for a feasible solution that would not overwhelm the server. A perfect scenario: a user logs into the system using a web browser, the server then reaches out to the Windows application running on the client's machine, and everyone is happy.
Do you have any idea how to achieve this?
What I've tried already is to not create a new websocket connection on Windows application startup, but instead send a GET request to the server and wait for the response that arrives after a user authenticates from a web browser. Hanging GET requests still consume resources, though, and httpd keeps a separate process for each of them. It also turned out that httpd has a 5-minute timeout for hanging requests and sends a 502 back once it is reached.
I thought it might be possible to handle such GET requests in Spring with only one process/thread, but I haven't found any information on that.
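For the record, something like Spring MVC's DeferredResult is the sort of thing I have in mind, so that a hanging GET does not tie up a servlet thread while it waits. Below is only a rough sketch, not working code: the class, endpoint and map names are made up, and it assumes Spring MVC 4.3+ with async support enabled.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.http.HttpServletRequest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class PairingController {

    // One entry per waiting Windows client, keyed by its IP address.
    private final Map<String, DeferredResult<String>> pendingClients =
            new ConcurrentHashMap<>();

    // The Windows application calls this on startup; the servlet thread is
    // released while the DeferredResult waits for a value or times out.
    @GetMapping("/wait-for-pairing")
    public DeferredResult<String> waitForPairing(HttpServletRequest request) {
        String ip = request.getRemoteAddr();
        DeferredResult<String> result = new DeferredResult<>(240_000L, "TIMEOUT");
        pendingClients.put(ip, result);
        result.onCompletion(() -> pendingClients.remove(ip));
        return result;
    }

    // Called when a browser user with the same IP authenticates.
    @PostMapping("/browser-authenticated")
    public void browserAuthenticated(HttpServletRequest request) {
        DeferredResult<String> waiting = pendingClients.get(request.getRemoteAddr());
        if (waiting != null) {
            waiting.setResult("PAIRED"); // releases the hanging GET with a response
        }
    }
}

As far as I understand, though, this only reduces thread usage inside Spring; httpd would still hold a connection per waiting client and apply its own proxy timeout.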
Another limitation worth noting is that the Windows application runs on customer machines, and the customer's security policy may not allow any clever tricks.
The problem:
I am seeing some strange behaviour from a Jetty server (REST over HTTPS) when some client connections are closed (client-side) before the server has had time to reply. Normally this is well managed and expected by a web server / application server, but in one specific case something breaks and the server stops replying.
I am trying to reproduce the issue programmatically and locally by opening a client connection and closing it before the server has had time to reply, but I do not have much experience with this kind of situation; the clients I normally write are not expected to die immediately.
I am not particular about the language/tool I have to use to replicate my case; it can be a Java program, a netcat command, telnet, dotnet core... The only constraint is that it should run in a Kubernetes pod, if possible.
I am trying to use Java to open a socket and then close it immediately, or to create an HTTP client and stop it immediately after the request is sent, but with no luck so far.
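Roughly, what I have been trying looks like the sketch below (host, port and path are placeholders, and it speaks plain HTTP; the real endpoint is HTTPS, so it would have to go through an SSLSocketFactory instead of a plain Socket):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class AbruptClient {
    public static void main(String[] args) throws Exception {
        // Open a plain TCP connection to the server (host and port are placeholders).
        Socket socket = new Socket("my-jetty-host", 8080);

        // Send the start of an HTTP request...
        OutputStream out = socket.getOutputStream();
        out.write("GET /api/resource HTTP/1.1\r\nHost: my-jetty-host\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII));
        out.flush();

        // ...then close the connection immediately, without waiting for the reply.
        // SO_LINGER with timeout 0 makes close() send a TCP RST instead of a normal
        // FIN, which is closer to a client dying abruptly.
        socket.setSoLinger(true, 0);
        socket.close();
    }
}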
At the same time I am looking at netcat, but I fear it's too low-level for a REST request.
So I wrote a program to connect to a clustered WebLogic server behind a VIP with 4 servers and 4 queues that are all connected (I think they call them distributed queues). When I run the program from my local machine and just get JMS connections, look for messages and disconnect, it works great, and by that I mean it:
iteration #1
connects to server 1.
looks for a message
disconnects
iteration #2
connects to server 2.
looks for a message
disconnects
and so on.
When I run it on the server though, the application picks a server and sticks to it. It never picks a new server, so the queues on the other servers never get worked, like with a "sticky session" setup... My OS is Win7, the server OS is Win2008r2, and the JDK is identical on both machines. How is this configured client-side? The server implementation uses "Apache Procrun" to run it as a service, but I haven't seen too many issues with that part...
Is there a session cookie getting written out somewhere?
Any ideas?
Thanks!
Try disabling 'Server Affinity' on the JMS connection factory. If you are using the Default Connection Factory, define your own and disable Server Affinity.
EDIT:
Server Affinity is a server-side setting, but it controls how messages are routed to consumers after a WebLogic JMS server receives the message. The other option is to use round-robin DNS and send to only one hostname that resolves to a different IP (managed server) each time, so that each connection goes to a different server.
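If it helps, the client side of that looks roughly like the sketch below once you have created your own connection factory with Server Affinity unchecked in the console; the JNDI names and the URL are placeholders:

import java.util.Hashtable;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class QueueChecker {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // The VIP in front of the cluster; hostname and port are placeholders.
        env.put(Context.PROVIDER_URL, "t3://my-vip-host:7001");
        InitialContext ctx = new InitialContext(env);

        // Look up YOUR connection factory (the one with Server Affinity disabled),
        // not the default one.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyNoAffinityCF");
        Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");

        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);

        // Look for a message, then disconnect. Each new connection should now be
        // load balanced across the managed servers instead of sticking to one.
        System.out.println(consumer.receive(1000));
        connection.close();
        ctx.close();
    }
}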
I'm pretty sure this is the setting you're looking for :)
I have a central load balancing server and several application servers running Apache Tomcat. The load balancing server receives requests and forwards them to the application servers in round-robin fashion. If one of these application servers goes down, the load balancing server should stop forwarding requests to it.
My current solution for this is to ping the application servers every few minutes and if I don't receive a response, remove them from a list of available servers. Is there a better way to monitor the status of these servers? Should I ping more often or should the application servers constantly inform the load balancing server?
Execute a null transaction on it regularly. Pinging really isn't enough: it only exercises the TCP/IP stack, and I have seen operating systems in states where TCP/IP was up but no applications were, and not even parts of the OS stack itself. Executing a transaction exercises everything. Include the database in the null transaction.
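As a rough sketch of what that can look like with Tomcat (the servlet name, JNDI resource and query below are placeholders): each application server exposes a trivial endpoint that runs a no-op statement against the database, and the load balancer polls that endpoint with a short timeout instead of pinging.

import java.io.IOException;
import java.sql.Connection;

import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// Deployed on each Tomcat application server. Returns 200 only if a trivial
// database statement succeeds, so it exercises the whole stack, not just TCP/IP.
public class HealthCheckServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/appDataSource"); // placeholder JNDI name
            try (Connection c = ds.getConnection()) {
                c.createStatement().execute("SELECT 1"); // the "null transaction"
            }
            resp.setStatus(HttpServletResponse.SC_OK);
        } catch (Exception e) {
            resp.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        }
    }
}

The load balancer then just does an HTTP GET on that URL every few seconds with a connect/read timeout of a second or two, and drops the server from its list on anything other than a 200.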
First, make sure your server is protected against DDoS attacks. Then, depending on your application's average connection time, adjust the keep-alive timeout.
You should also read up on the prefork MPM; I think it will give you the best solution.
I would like to inform all logged-in users that the server will shut down. This would be especially nice in an ajaxified application (RIA).
What are the possible solutions? What are the best practice solutions?
There are two possible end scenarios:
Send a text $x to the server and thus to all users. ("The server will not be available for some minutes.")
Send a key $y to the server, which will be used to generate a (custom) text for all users. ("SERVER_SHUTDOWN")
Environment: Tomcat (6/7), Spring 3+
Messaging to users: with polling or pseudo-pushing via an async servlet.
Ideas
1. Context.destroy(): implementing a custom ContextListener's contextDestroyed()
I don't think blocking within destroy() is a good solution -- blocking, because we would have to wait about 5-10 seconds to make sure that all logged-in users receive the message. (A rough sketch of what I mean follows the list below.)
2. JMX Beans
This would mean that any server service operation (start, stop) would have to invoke a special program which sends the message.
3. Another messaging queue, like AMQP or ActiveMQ
Same as 2.
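To make idea 1 concrete, this is roughly what I picture; it is only a sketch, and NotificationBoard is a made-up placeholder for whatever the polling / async-servlet side reads messages from:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Sketch for idea 1: publish a shutdown notice when the context is destroyed,
// then give the pollers a few seconds to pick it up before the app goes away.
public class ShutdownNoticeListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        NotificationBoard.publish("SERVER_SHUTDOWN");
        try {
            // The part I dislike: blocking here for up to 10 seconds so that
            // clients polling every few seconds still see the message.
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

// Made-up placeholder: in the real application this would be whatever the
// polling or async-servlet layer reads messages from.
class NotificationBoard {
    private static volatile String lastMessage;

    static void publish(String message) {
        lastMessage = message;
    }

    static String read() {
        return lastMessage;
    }
}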
Unless the server shuts down regularly and the shutdown has a significant impact on users (e.g. they lose any unsubmitted work - think halfway through editing a big post on a page), notifying them of server shutdown won't really be of much benefit.
There are a couple of things you could do.
First, if the server is going to be shut down for planned maintenance, you could include a message on web pages like:
Server will be unavailable Monday 22nd Aug 9pm - 6am for planned maintenance. Contact knalli@example.com for more information.
Second, before shutting down the server, redirect requests to a static holding page (just change your web server config). This holding page should have information on why the server is down and when it will be available again.
With both options, it's also important to plan server downtime. It's normal to have maintenance windows outside of normal working hours. Alternatively, if you have more than one server you can cluster them. This allows you to take individual servers out of the cluster to perform maintenance without having any server downtime at all.
I am reading a lot about HTML5, and I like web sockets in particular because they facilitate bi-directional communication between the web server and the web browser.
But we keep reading about Chrome, Opera, Firefox and Safari getting ready for HTML5. Which web servers are ready to use the web sockets feature? I mean, are web servers capable of initiating subsequent communication as of today? How about Google's own App Engine?
How can I write a sample web application that takes advantage of this feature in Java?
Bi-directional communication between web servers and browsers is nothing new. Stack Overflow does it today if a new answer is posted to a question you're reading. There are a few different strategies for implementing socket-style behavior using existing technologies:
AJAX short polling: Connect to the server and ask if there are any new messages. If not, disconnect immediately and ask again after a short interval. This is useful when you don't want to leave a lot of long-running, idle connections open to the server, but it means that you will only receive new messages as fast as your polling interval, and you incur the overhead of establishing a new HTTP connection every time you poll.
AJAX long polling: Connect to the server and leave the connection open until a new message is available. This gives you fast delivery of new messages and less frequent HTTP connections, but it results in more long-running idle processes on the server.
Iframe long polling: Same as above, only with a hidden iframe instead of an XHR object. Useful for getting around the same-origin policy when you want to do cross-site long polling.
Plugins: Flash's XMLSocket, Java applets, etc. can be used to establish something closer to a real low-level persistent socket to a browser.
HTML5 sockets don't really change the underlying strategies available. Mostly they just formalize the strategies already in use, and allow persistent connections to be explicitly identified and thus handled more intelligently. Let's say you want to do web-based push messaging to a mobile browser. With normal long-polling, the mobile device needs to stay awake to persist the connection. With WebSockets, when the mobile device wants to go to sleep, it can hand off the connection to a proxy, and when the proxy receives new data it can wake up the device and pass back the message.
The server-side is wide open. To implement the server-side of a short polling application, you just need some kind of a chronological message queue. When clients connect they can shift new messages off the queue, or they can pass an offset and read any messages that are newer than their offset.
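To make that concrete, the queue can be as simple as the sketch below (illustrative only; in a real application it would live in a datastore or cache rather than in a single server's memory):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal chronological message queue for short polling: clients remember the
// offset of the last message they saw and ask for everything newer.
public class MessageLog {
    private final List<String> messages = new ArrayList<>();

    public synchronized int append(String message) {
        messages.add(message);
        return messages.size() - 1; // offset of the new message
    }

    // Called by the polling handler: "give me everything from offset N onwards",
    // where N is the first offset the client has not seen yet.
    public synchronized List<String> readFrom(int fromOffset) {
        if (fromOffset >= messages.size()) {
            return Collections.emptyList(); // nothing new; the client polls again later
        }
        return new ArrayList<>(messages.subList(fromOffset, messages.size()));
    }
}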
Implementing server-side long polling is where your choices start to narrow. Most HTTP servers are designed for short-lived requests: connect, request a resource, and then disconnect. If 300 people visit your site in 10 minutes, and each takes 2 seconds to connect and download HTTP resources, your server will have an average of 1 HTTP connection open at any given time. With a long polling app, you're suddenly maintaining 300 times as many connections.
If you're running your own dedicated server you may be able to handle this, but on shared hosting platforms you're likely to bump up against resource limits, and App Engine is no exception. App Engine is designed to handle a high volume of low latency requests, e.g. short polling. You can implement long polling on App Engine, but it's ill-advised; requests that run for longer than 30 seconds will get terminated, and the long running processes will eat up your CPU quota.
App Engine's solution for this is the upcoming Channel API. The channel API implements long polling using Google's existing robust XMPP infrastructure.
Brett Bavar and Moishe Lettvin's Google I/O talk lays out the usage pattern as follows:
App Engine apps create a channel on a remote server, and are returned a channel ID which they pass off to the web browser.
from google.appengine.api import channel
from google.appengine.ext import webapp

class MainPage(webapp.RequestHandler):
    def get(self):
        # 'key' is the application-chosen client ID for this user
        id = channel.create_channel(key)
        self.response.out.write({'channel_id': id})
The web browser passes the channel ID to the same remote server to establish a connection via iframe long polling:
<script src='/_ah/channel/jsapi'></script>
<script>
  var channelId = '{{ channel_id }}';
  var channel = new goog.appengine.Channel(channelId);
  var socket = channel.open();
  socket.onmessage = function(evt) {
    alert(evt.data);
  };
</script>
When something interesting happens, the App Engine app can push a message to the user's channel, and the browser's long poll request will immediately receive it:
class OtherPage(webapp.RequestHandler):
    def get(self):
        # something happened
        channel.send_message(key, 'bar')
Jetty, for example, has supported this feature since version 7: Jetty Websocket Server
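For a flavour of what the server side can look like today, here is a minimal echo endpoint using the standard javax.websocket (JSR-356) API, which newer Jetty versions (9+) and other Java containers support; note that this is not the original Jetty 7 API the link above refers to:

import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Minimal WebSocket echo endpoint. Deploy it in a JSR-356 capable container;
// the container discovers the annotation and exposes the endpoint at
// ws://host:port/<context>/echo.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Client connected: " + session.getId());
    }

    @OnMessage
    public String onMessage(String message, Session session) {
        // Returning a value sends it back to the client over the same connection.
        return "echo: " + message;
    }
}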
Google App Engine has plans for this as well. They even had a working demo of it at Google I/O 2010, but it's not in production yet. See ticket #377