Service availability issue - Java

We are facing two general issues in our production environment and would like to get some recommendations.
We are using a cluster of nodes running JBoss, with an Apache web server for load balancing.
The two problems are:
1. All the server nodes normally work fine; however, suddenly, within a minute, one of the nodes reaches the maximum DB connection limit (say, going from 30 connections to the 100 maximum) and starts throwing errors ("Unable to get managed connection").
2. Sometimes we get a large number of simultaneous, identical web service calls from one user: for instance, more than 1000 calls to the same service by the same user within a minute. It looks like the user may be stuck in some kind of repetitive loop in the browser (not sure).
Regarding the first problem, I have verified that we don't have a connection-leak issue. Mostly we found that the service response time becomes very high while the load balancer keeps sending equal traffic to each node, so that node can exhaust its connection pool. One solution I was considering is to time out any service call that takes more than a certain amount of time, but I am not sure whether that is a good idea. Any thoughts on what recommendations/practices are available to tackle such a situation?
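For illustration, a minimal sketch of that fail-fast idea (callBackendService is a hypothetical stand-in for the slow downstream work, and the 5-second budget is made up):

import java.util.concurrent.*;

public class BoundedCall {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(50);

    // Hypothetical stand-in for the slow work (DB query, remote call, ...).
    static String callBackendService() throws Exception {
        return "ok";
    }

    public static String callWithTimeout() throws Exception {
        Future<String> f = POOL.submit(BoundedCall::callBackendService);
        try {
            return f.get(5, TimeUnit.SECONDS); // fail fast instead of queuing up
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt and free the worker thread
            throw e;        // caller maps this to an error response / load shedding
        }
    }
}

On the database side, java.sql.Statement.setQueryTimeout(int seconds) puts a similar cap on each query, so a slow statement returns its pooled connection instead of holding it.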
Regarding the second problem, I think the application should not have to defend against or check for such a large number of service calls; that should happen at a higher level, such as the firewall or the web server. However, I would like to know your thoughts on this.
I hope my question makes sense, but if it doesn't, please feel free to ask.

Related

Why is an instance on Google App Engine still alive even though there has never been an incoming request?

I work on Google Cloud Platform, and there seems to be something strange with basic scaling on my Google App Engine service. The instance documentation says, "Instances are created on demand to handle requests". But as you can see in the figure, 3 instances were created even though they have no incoming requests. Even when there is a high-demand request, App Engine creates a new instance instead of using the 3 active instances.
Also, based on the configuration I have set, the instances should shut down after being idle for 5 minutes. But the figure's clock shows 09:15, and counting from each instance's start time, that 5 minutes should already have been exceeded. I rarely check the graph of these instances, but lately service outages often occur due to this suboptimal instance usage. Does anyone know what's going on? Thank you.

QuickFIX - Receive and send orders from different algorithms (sources)

I built a FIX initiator application using the QuickFIX/J library to send orders to my broker. If you don't know what a FIX application is, consider that my program is an application that sends messages to a server through a TCP connection.
To pick up and send the orders created by multiple algorithms, I have a directory watcher (WatchService) that watches for modifications in a local directory synchronized with an S3 bucket via the AWS CLI.
This approach works well, except that I have to wait about 6-8 seconds before a file appears in my local directory so that I can parse it into FIX orders and send them to the broker's FIX app. I would really like to decrease this delay between an order's creation and the moment it is sent to the broker.
These are the possible solutions I thought of:
1) Reading directly from the S3 bucket, without using the AWS CLI
2) Opening a different FIX session for each algorithm
3) Instead of reading from a bucket, polling a database (MySQL) for new orders; the algos would generate table rows instead of files
4) Having an API between my FIX application and the algorithms, so the algos can connect directly to my application.
Solution (1) didn't improve the order-receiving time, because it takes about the same time to list the S3 objects, get each summary, and filter for the desired file.
Solution (2) I haven't tried, but I don't think it is the best one. If I have, for example, 100 different strategies, I would have to open 100 different connections, and I am not sure my broker's app can handle that. But I may be wrong.
Solution (3) I also haven't tried.
Solution (4) is what I believe is ideal, but I don't know how to implement it. I tried to create a REST API, but I don't know if that is conceptually correct. Supposing my FIX application is currently connected to the broker's server, my idea was to (i) create a new webapp exposing a REST API, (ii) receive order info through that API, (iii) find the currently alive session, and (iv) send the order to the broker's server using that session. Unfortunately, I was not able to find the current session by ID using the following in a class other than the one running the FIX application:
SessionID sessionID = new SessionID("FIX.4.4", "CLIENT1", "FixServer");
Session session = Session.lookupSession(sessionID);
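Note: Session.lookupSession only finds sessions registered in the same JVM, which is the usual reason this lookup returns null. A minimal sketch of the common workaround (assuming the REST layer runs in the same process as the initiator; FixRouter and sendOrder are hypothetical names) is to capture the SessionID in the Application callbacks and reuse it:

import quickfix.*;

public class FixRouter implements Application {
    // Captured when the broker session logs on; read by the REST layer.
    private static volatile SessionID activeSession;

    @Override public void onLogon(SessionID sessionId)  { activeSession = sessionId; }
    @Override public void onLogout(SessionID sessionId) { activeSession = null; }

    // Remaining callbacks are not needed for this sketch.
    @Override public void onCreate(SessionID sessionId) {}
    @Override public void toAdmin(Message m, SessionID s) {}
    @Override public void fromAdmin(Message m, SessionID s) {}
    @Override public void toApp(Message m, SessionID s) {}
    @Override public void fromApp(Message m, SessionID s) {}

    // Called by whatever receives orders from the algos.
    public static void sendOrder(Message order) throws SessionNotFound {
        SessionID sid = activeSession;
        if (sid == null) throw new IllegalStateException("broker session not logged on");
        Session.sendToTarget(order, sid);
    }
}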
What I would like to hear from you:
What do you think is the best solution to send FIX orders created by multiple sources?
If I want to create an API to connect 2 different applications, what are the steps that I can follow?
I am sorry if I was a bit confusing. Let me know if you need further clarification.
Thank you
Q: What do you think is the best solution to send FIX orders created by multiple sources?
Definitely (4), i.e., consolidate your multiple sources of decisions and interface with the broker-side FIX protocol gateway from a single point.
Reasons:
- isolation of concerns in design/implementation/operations
- a single point of authentication and of latency-motivated colocation for the FIX protocol channel
- minimised costs of FIX protocol gateway acceptance testing (without this, Tier-1 market participants will not let you do business with them, so the expenses of FIX protocol E2E mutual-cooperation compliance testing do matter, both cost-wise and time-wise)
Q: What are the steps that I can follow?
Follow your own use case, which defines all the MVP features that need to be ready before going into testing.
Do not try to generalise your needs into any "new-Next-Gen-API"; your trading is all about latency, so specialise on the MVP definition and do not design or implement anything beyond an MVP with minimum latency (overhead) on a point-to-point basis. Using a stable professional framework, such as nanomsg or ZeroMQ, may avoid spending time reinventing already-invented wheels for low-latency trading messaging/signalling tools. Using REST is rather an anti-pattern in the third-millennium, low-latency-motivated, high-performance distributed-computing ecosystem for trading.
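For illustration, a minimal point-to-point fan-in sketch using JeroMQ, the pure-Java ZeroMQ implementation (the endpoint, socket pattern, and message format are all assumptions): each algo connects a PUSH socket, and the FIX application drains a single PULL socket:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class OrderFanIn {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket inbox = ctx.createSocket(SocketType.PULL);
            inbox.bind("tcp://*:5555"); // each algo connects a PUSH socket here
            while (!Thread.currentThread().isInterrupted()) {
                String rawOrder = inbox.recvStr(); // blocking receive of one serialized order
                // parse rawOrder and hand it to the live QuickFIX/J session here
            }
        }
    }
}

The same fan-in works across machines, so the algos would no longer need the S3/file hop at all.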

How to commit a DB transaction in Java to multiple DBs via a middle tier

I have this scenario:
3 databases that need to be kept identical
3 Java app servers that handle requests and write to the databases
each Java app server connects to a single database, as well as to the other app servers.
I am looking for a solution where I can have a transaction that either commits or rolls back on all 3 databases; however, all the solutions I find seem to be for a single app server connecting to all 3 databases. Normally I would implement some RPC logic so that the fact that a database sits behind a second app server is transparent, but I would like to ask the following:
Is there a better way of doing this? (I don't see having each app server connect to all the databases as a better solution, simply due to the sheer number of extra connections.)
If not, what object should I pass with the RPC call? A Connection? A ConnectionFactory?
More context:
This is intended to be an HA solution where, if a single app server or DB goes down, transactions can still occur on the remaining 2 nodes, and when the failed node comes back up, it re-syncs and then comes back "online". Each app server/DB pair is in a separate datacenter, and cross-datacenter connections should be kept to a minimum. The writes are inevitably going to be cross-datacenter in order to keep the DBs in sync, but the reads (the main use case) don't need to be, since an app server that is "online" can be fairly confident that its data is an identical copy of the others'. I haven't found a good way to do this in the DB layer, as MMR seems to be a very restrictive PITA. Also, any solution should scale such that if the node count increases to 4/5/6/etc., the changes are limited to configuration rather than code as much as possible.
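For contrast, here is the single-app-server pattern the question says all the found solutions use: a minimal XA/two-phase-commit sketch with the standalone Atomikos JTA manager (datasource names, URLs, and the PostgreSQL XA driver are placeholder assumptions, and real code must check transaction status before rolling back):

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.jdbc.AtomikosDataSourceBean;
import java.sql.Connection;

public class TwoPhaseWrite {
    public static void main(String[] args) throws Exception {
        AtomikosDataSourceBean ds1 = xaDataSource("db1", "jdbc:postgresql://dc1/app");
        AtomikosDataSourceBean ds2 = xaDataSource("db2", "jdbc:postgresql://dc2/app");

        UserTransactionImp utx = new UserTransactionImp();
        utx.begin();
        try (Connection c1 = ds1.getConnection();
             Connection c2 = ds2.getConnection()) {
            c1.createStatement().executeUpdate("INSERT INTO t VALUES (1)");
            c2.createStatement().executeUpdate("INSERT INTO t VALUES (1)");
            utx.commit(); // 2PC: both databases commit, or both roll back
        } catch (Exception e) {
            utx.rollback(); // sketch only; check status in real code
            throw e;
        }
    }

    private static AtomikosDataSourceBean xaDataSource(String name, String url) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName(name);
        ds.setXaDataSourceClassName("org.postgresql.xa.PGXADataSource"); // driver-specific
        ds.getXaProperties().setProperty("url", url); // property names depend on the driver
        return ds;
    }
}

Caveat: classic 2PC blocks while a participant is down, which is exactly the tension with the HA requirement described above.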

Sending message from server to client with Java

I'm working with Apache Tomcat 7 (JSP and servlets). In my application, I need to send some messages from the server to the client. Below, I'll explain a little bit about what I'm working on.
Brief explanation: Whenever a user wants to connect to the internet, the application will bring up a login page if they aren't logged in. After the user has logged in successfully and their time is about to end, I need to send the client a message with the remaining time (for example, during the last few minutes). Another requirement may be to open an advertising popup at a specific time.
I know about JMS, but I don't know how well it fits my scenario. I also read in other posts that WebSocket can be an option.
I'm running the server on CentOS 6.2.
Question: For this scenario, do you have some thoughts on how to handle it with Java technologies? If you have other ideas, feel free to share them!
N.B. For JavaScript and PHP I found good answers in other SO questions. I'm interested specifically in how to solve this issue with Java technologies.
http://jwebsocket.org/
Maybe this fits your needs.
You will not be able to initiate an HTTP connection from the server to the client. One solution is to use a WebSocket/Comet framework. Unfortunately, WebSockets are not yet widespread (server + browser), so I suggest using a framework to fill the gap: https://github.com/Atmosphere/atmosphere
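If you do go the WebSocket route, a minimal JSR-356 sketch (supported by later Tomcat 7 builds, 7.0.47 and up; the endpoint path and JSON payload are assumptions) looks like this:

import java.io.IOException;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/session-timer") // hypothetical path
public class TimerEndpoint {
    @OnOpen
    public void onOpen(Session session) throws IOException {
        // Real code would store the session and push from a scheduler
        // when the user's remaining time crosses a threshold.
        session.getBasicRemote().sendText("{\"minutesLeft\": 5}");
    }
}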
I don't understand your obsession with implementing the solution in Java; any valid solution should be portable across different server-side languages. However, if the termination is to occur without synchronous user-driven interaction, then you're just creating load on your server by trying to handle it there. And if you want somebody to write the code for you, this isn't the right forum.
I know about JMS ... CentOS 6.2.
Not much help here.
The thing we really need to know is what you mean by:
After the user has logged in successfully and their time is about to end
(I assume you mean the session time is about to end, unless you've written some software that predicts when people will die.)
How do you determine when the session will end?
1. Is it a time limit per page?
2. Is it a fixed time from when they log in?
3. Is it when the session is garbage-collected by Java?
In case 1, the easiest way to achieve this would be to use JavaScript to set a timeout on the page (when the user navigates to a new screen, the timeout is discarded), e.g.:
setTimeout(function() {
    alert('5 minutes has expired since you got here - about to be logged out');
}, 300000); // 5 minutes
In case 2, you'd still use the method above, but reduce the JavaScript timeout by the time already spent on the server (generate the JavaScript value server-side, or drop a cookie containing the login timestamp at login).
In case 3, you don't really have any way of knowing when the user will be logged out.

How to optimize number of database connections?

We have a Java (Spring) web application running in a Tomcat servlet container.
We have something like a blog, but the blog must load its posts dynamically with Ajax: the client's Ajax script checks for new posts every second.
That is, Ajax must ask the server for new posts every second, which will be very heavy on the database.
And what if we have hundreds of thousands of clients connected simultaneously?
I think we could retrieve all posts with a cron job every second and then cache them somewhere. But where? The main idea is to take the load off the database.
Any ideas about architecture?
Thanks in advance!
There are other polling architectures that could be more optimal, depending on the case:
Long polling
Long polling is a variation of the traditional polling technique and allows emulation of an information push from a server to a client. With long polling, the client requests information from the server in a similar way to a normal poll. However, if the server does not have any information available for the client, instead of sending an empty response, the server holds the request and waits for some information to be available. Once the information becomes available (or after a suitable timeout), a complete response is sent to the client. The client will normally then immediately re-request information from the server, so that the server will almost always have an available waiting request that it can use to deliver data in response to an event. In a web/AJAX context, long polling is also known as Comet programming. (Source: Long Polling)
An example implementation of this technique:
Push Server
You could also use the observer pattern to register the pending requests and notify them when an update occurs; a minimal sketch follows.
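For illustration, a long-polling sketch on the Servlet 3.0 async API (class and method names are made up; error handling is elided): requests park until the publisher pushes a post or the 30-second timeout fires, after which the client simply re-polls:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/posts/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    // Parked requests; notified observer-style when a post arrives.
    private static final Queue<AsyncContext> WAITERS = new ConcurrentLinkedQueue<AsyncContext>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30000); // empty response after 30 s; client re-polls
        WAITERS.add(ctx);
    }

    // Call this from the code that saves a new post.
    public static void publish(String postJson) {
        AsyncContext ctx;
        while ((ctx = WAITERS.poll()) != null) {
            try {
                ctx.getResponse().setContentType("application/json");
                ctx.getResponse().getWriter().write(postJson);
            } catch (IOException ignored) {
                // client went away; nothing to do
            } finally {
                ctx.complete();
            }
        }
    }
}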
Hundreds of thousands of concurrent users all polling your site every second makes for a huge amount of traffic. If you truly expect this load, you are going to have to design your platform accordingly, probably by clustering multiple web, application, and DB servers.
Remember that with a database connection pool, you don't need a DB connection for every user. I'm not as familiar with Tomcat, but in WebSphere we can set up connection pools to keep a certain number of connections prepared.
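For Tomcat specifically, a minimal sketch with the bundled tomcat-jdbc pool (URL, credentials, and sizes are placeholders):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class BlogPool {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/blog"); // placeholder
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setUsername("app");
        p.setPassword("secret");
        p.setInitialSize(10); // warmed-up connections
        p.setMaxActive(50);   // hard cap shared by all users
        p.setMaxWait(2000);   // ms to wait for a connection before failing fast
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}

With a cap like this, even hundreds of thousands of pollers share a bounded set of DB connections; the pool, not the user count, determines database load.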
Also, are you mainly worried about reads, or about an equal volume of writes?
Plus, you may also want to "split" the database by region, etc. That way there is no single heavy load across the entire database; it can be partitioned and even load-balanced.
There are also the "NoSQL" databases to look into. Maybe something to consider; just ideas to help out.
