I am using the QuickFIX library to connect to my broker's FIX server and send orders. My application (a QuickFIX Initiator) receives order messages from different algorithms by consuming a RabbitMQ queue and routes them to my broker using the FIX protocol, as outlined by the diagram below:
I had to implement a custom order execution that must check the bid and ask of a specific security before deciding if the order to be sent is a LIMIT or a STOP LIMIT. Before having access to market data through FIX, I used to make a request to an internal API built upon the Bloomberg API:
// Snapshot of the instrument at the moment the order is sent
InstrumentData instrumentData = bbgApi.getInstrumentData(this.params.getSymbol());
String side = this.params.getSide();
if (side.equals(Side.BUY)) {
    Double ask = instrumentData.getAsk();
    // ... decide between LIMIT and STOP LIMIT based on the ask ...
    Session.sendToTarget(this.getOrder(), sessionID);
} else if (side.equals(Side.SELL)) {
    Double bid = instrumentData.getBid();
    // ... decide between LIMIT and STOP LIMIT based on the bid ...
    Session.sendToTarget(this.getOrder(), sessionID);
}
I just have to check the bid and ask at the specific moment that I am sending the order.
Now that I have access to market data through FIX, I would like to consume the bid and ask from it, because the prices are closer to the real prices of the securities at each moment. But because of the way that FIX works, I don't know how I can make a call like
instrumentData = fixApi.getInstrumentData(this.params.getSymbol());
because when I send a FIX message, it does not return me a "promise" that I can wait on to complete before continuing my code execution. I am used to the way that JavaScript and REST APIs work, so I am a little bit stuck. I am wondering what is the best way to consume and produce market data that is received through FIX.
My ideas
1. Create a Market Data FIX application (Initiator) that will subscribe to securities data. The data for each security will be put into a RabbitMQ queue. For each order received from my FIX Order Application, the Order Execution Object will consume from the specific security's queue, react to the first market data message received, and send an order.
2. Also create a Market Data FIX application (Initiator) that will subscribe to securities data. The data for each security will be put into a MySQL table. Then I would create an API like the bbgApi that I mentioned above, which consults MySQL and gets the most recent data for the required security.
3. Create a Market Data Application that is composed of an Initiator and an Acceptor. The Initiator will connect to my broker's market data app, and the Acceptor will be used to accept new connections from internal applications. My Order Execution Object will request market data through the Acceptor and wait for a message containing the required data.
In my opinion, solution 3 seems ideal, but it would require multiple connections to my Acceptor, which could slow down (and delay) order execution. It would be ideal if I had only one object connected to the FIX market data app that would send the request, with the request returning a promise that, when completed, delivers the required data.
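Something like the sketch below is what I am imagining, assuming QuickFIX/J and FIX 4.4 (MarketDataGateway, getInstrumentData, and the pending map are names I made up, not QuickFIX APIs): each request is keyed by its MDReqID, and the matching snapshot completes the future.

import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import quickfix.FieldNotFound;
import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;
import quickfix.field.*;
import quickfix.fix44.MarketDataRequest;
import quickfix.fix44.MarketDataSnapshotFullRefresh;

public class MarketDataGateway {
    private final SessionID marketDataSessionID;
    private final ConcurrentMap<String, CompletableFuture<MarketDataSnapshotFullRefresh>> pending =
            new ConcurrentHashMap<>();

    public MarketDataGateway(SessionID marketDataSessionID) {
        this.marketDataSessionID = marketDataSessionID;
    }

    // "Promise"-style call: send a MarketDataRequest and return a future keyed by MDReqID.
    public CompletableFuture<MarketDataSnapshotFullRefresh> getInstrumentData(String symbol)
            throws SessionNotFound {
        String reqId = UUID.randomUUID().toString();
        MarketDataRequest request = new MarketDataRequest(
                new MDReqID(reqId),
                new SubscriptionRequestType(SubscriptionRequestType.SNAPSHOT),
                new MarketDepth(1)); // top of book is enough for bid/ask
        MarketDataRequest.NoMDEntryTypes bid = new MarketDataRequest.NoMDEntryTypes();
        bid.set(new MDEntryType(MDEntryType.BID));
        request.addGroup(bid);
        MarketDataRequest.NoMDEntryTypes offer = new MarketDataRequest.NoMDEntryTypes();
        offer.set(new MDEntryType(MDEntryType.OFFER));
        request.addGroup(offer);
        MarketDataRequest.NoRelatedSym sym = new MarketDataRequest.NoRelatedSym();
        sym.set(new Symbol(symbol));
        request.addGroup(sym);

        CompletableFuture<MarketDataSnapshotFullRefresh> future = new CompletableFuture<>();
        pending.put(reqId, future);
        Session.sendToTarget(request, marketDataSessionID);
        return future;
    }

    // Called from the Application's message cracker when the snapshot arrives.
    public void onSnapshot(MarketDataSnapshotFullRefresh snapshot) throws FieldNotFound {
        CompletableFuture<MarketDataSnapshotFullRefresh> future =
                pending.remove(snapshot.getMDReqID().getValue());
        if (future != null) {
            future.complete(snapshot);
        }
    }
}

The Order Execution Object would then call getInstrumentData(symbol).get(...) with a timeout, or chain thenAccept(...), to read the bid/ask before choosing between LIMIT and STOP LIMIT, all over a single market data connection.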
I appreciate your opinion about which way is better to consume market data. If you have a different opinion or suggestion, please let me know. Thank you very much for your help.
Edit:
RabbitMQ won't be a good option because maybe I would like to have multiple consumers for the same message. Maybe Kafka would be ideal.
Related
I have 2 applications:
desktop (java)
web (symfony)
I have some data in the desktop app that must be consistent with the data in the web app.
So basically I send a POST request from the desktop app to the web app to update online data.
But the problem is that the internet may not always be available when I send my request, and at the same time I can't prevent the user from updating the desktop data.
So far, this is what I have in mind to make sure data is synchronized when the internet is available.
Am I headed in the right direction or not?
If not, I hope you can put me on the right path to achieve my goal in a professional way.
Any links about this kind of topic will be appreciated.
In this case the useful pattern is to assume that sending data is asynchronous by default. The data, after being collected, are stored in some intermediate structure and wait for a suitable moment to be sent. I think a queue could be useful because it can be backed by a database, which prevents data loss in case the sending server fails. A separate thread (e.g. a job) checks for data in the queue and, if any exist, reads them and tries to send them. If sending completes correctly, the data are removed from the queue. If a failure occurs, the data stay in the queue and another attempt will be made to send them next time.
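A minimal sketch of that drain job; PendingQueue is a hypothetical interface you would implement on top of a database table, and the sender callback is where the actual transfer happens:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical persistent queue, e.g. implemented on top of a database table.
interface PendingQueue<T> {
    List<T> peekBatch(int max);   // read without removing
    void remove(T item);          // delete only after a successful send
}

class QueueDrainJob<T> {
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    void start(PendingQueue<T> queue, java.util.function.Consumer<T> sender) {
        executor.scheduleWithFixedDelay(() -> {
            for (T item : queue.peekBatch(50)) {
                try {
                    sender.accept(item);  // e.g. the POST to the web app
                    queue.remove(item);   // delete only once the send succeeded
                } catch (RuntimeException e) {
                    return; // leave items queued; retry on the next run
                }
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}

Because items are removed only after a successful send, they survive crashes and failed attempts and are simply retried on the next run.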
This is a typical scenario where you want to send a message to a non-transactional external system from within a transaction, and you need to guarantee that the data will be transferred to the external system as soon as possible without losing it.
Two solutions come to mind; maybe the second fits your architecture better.
Use case 1)
You can use a message queue plus a redelivery-limit setting with the dead letter pattern. In that case you need to have an application server.
Here you can read details about the Dead letter pattern.
This document explains how the redelivery limit works on WebLogic Server.
Use case 2)
You can create an interface table in the database of the desktop application. Then insert your original data into the database and insert a new record into the interface table as well (all in the same transaction). The data that you want to POST needs to be inserted into the interface table too. The status flag of the new record in the interface table can be "ARRIVED". Then create an independent timer in your desktop app which periodically searches for records in the interface table with status "ARRIVED". This timer-controlled process will try to POST the data to the web service. If the HTTP response is 200, it updates the status of the record to "SENT".
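A rough JDBC sketch of the transactional part and the status flow (table and column names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// The business row and the interface-table row are committed atomically,
// so a record with status 'ARRIVED' always has matching original data.
void saveWithInterfaceRecord(String jdbcUrl, long id, String payload) throws SQLException {
    try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
        conn.setAutoCommit(false);
        try (PreparedStatement data = conn.prepareStatement(
                     "INSERT INTO orders (id, payload) VALUES (?, ?)");
             PreparedStatement iface = conn.prepareStatement(
                     "INSERT INTO interface_table (order_id, payload, status) VALUES (?, ?, 'ARRIVED')")) {
            data.setLong(1, id);
            data.setString(2, payload);
            data.executeUpdate();
            iface.setLong(1, id);
            iface.setString(2, payload);
            iface.executeUpdate();
            conn.commit(); // both rows or neither
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
// The timer then SELECTs rows WHERE status = 'ARRIVED', POSTs each payload,
// and on HTTP 200 runs UPDATE interface_table SET status = 'SENT' WHERE order_id = ?.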
Both can work like a charm.
You can solve this in many ways. Here are two:
1. You can use the circuit breaker pattern. You can get a link about it from here.
2. You can use JMS to manage this.
I'm developing an app on Android using Firebase that should embed a chat.
My issue is that to fetch messages, I need to query all the messages which have my uid in either the sender or receiver field. This would be extremely easy to do in MySQL, but in Firebase (I must stick with Firebase) it looks like a pain.
I cannot just filter them like that. And since receiver and sender are fields of the object inside chat, I cannot even filter them when I'm using a Firebase URL like firebase.myapp.io/chat.
So the only possible solution in this model is fetching all the chats and filtering them client-side. This is anything but a good way to do the job. Also, when the messages become many, as they hopefully will, everything could become extremely slow.
So I thought about different ways to achieve the result:
In chat, I use keys corresponding to the user uid. Among the values, I either get the chats from that user's point of view, or I use the uids of my receivers as keys, with the messages inside.
But I don't like this very much, since it can be extremely redundant: every message would have to be inserted twice in the DB.
Another way would be storing message keys in another object, like chatmessages, where I keep chat message keys as keys and user uids as values.
What would be the best NoSQL way to manage multiple, private chat conversations?
I currently have the same problem with a chat application I am creating. I recently found a solution:
1) I basically used the two users' unique IDs and converted them to numbers using ID.getBytes().
2) Then I added the two numbers together (addition) and converted the result to a base64 number again.
3) I used the resulting number as the ID for the chatroom.
It worked, but since I am doing all this for an Android app, it shows me a warning that I am doing too much work on the main thread.
Let me know if anyone knows a better way to do that!
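A hedged alternative sketch: ordering the two uids before combining makes the room ID deterministic for both participants, and a single hash is cheap enough that no background thread is needed (roomIdFor and the SHA-256 choice are illustrative, not a Firebase requirement):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import android.util.Base64;

// Deterministic chatroom ID: both users compute the same value regardless of
// who initiates, because the uids are sorted before hashing.
public static String roomIdFor(String uidA, String uidB) {
    String joined = uidA.compareTo(uidB) < 0 ? uidA + "_" + uidB : uidB + "_" + uidA;
    try {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(joined.getBytes(StandardCharsets.UTF_8));
        // URL_SAFE/NO_WRAP keeps the result usable as a Firebase key
        return Base64.encodeToString(digest, Base64.URL_SAFE | Base64.NO_WRAP);
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException(e);
    }
}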
I have a collection of apps that total almost 1 million users. I am now adding a push notification system using Google Cloud Messaging to create alerts. My database contains an entity with the GcmId and app name (e.g. "myApp1").
Now, I want to send a GCM message to all users of "myApp1". The Objectify documentation does not describe the .limit() function well, though. For example, from the GCM demo app:
List<RegistrationRecord> records = ofy().load().type(RegistrationRecord.class).limit(10).list();
will send to the first 10 entries. But I need all entries that match appType="myApp1". This is harder because the query can be large, potentially matching half a million users, and I need to send the GCM push to all of them.
How is such a large query performed?
EDIT: I am currently using
List<RegistrationRecord> records = ofy().load().type(RegistrationRecord.class).filter("app","myApp1").list();
for testing, and it is working fine. However, when pushed live the dataset is huge, and I don't know what the repercussions are.
I believe you are looking at it from the wrong angle.
Objectify and the low-level App Engine datastore deal very well with paginated results using cursors, so you would need to process results in chunks. I won't go into details on how to do that, because it would end up costing you a lot of $ for all those datastore reads, and you would need task queues.
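For completeness, the cursor-based chunking would look roughly like this (a sketch following Objectify's cursor example; enqueueing the actual push is left as a placeholder):

import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.api.datastore.QueryResultIterator;
import com.googlecode.objectify.cmd.Query;

// Process all matching records in chunks of 500, resuming from a cursor each pass.
Query<RegistrationRecord> query = ofy().load().type(RegistrationRecord.class)
        .filter("app", "myApp1").limit(500);
Cursor cursor = null;
boolean more = true;
while (more) {
    if (cursor != null) {
        query = query.startAt(cursor);
    }
    QueryResultIterator<RegistrationRecord> it = query.iterator();
    more = false;
    while (it.hasNext()) {
        RegistrationRecord record = it.next();
        // enqueue a GCM push for this record's registration id here
        more = true;
    }
    cursor = it.getCursor();
}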
Instead, look at topics in Google Cloud Messaging:
https://developers.google.com/cloud-messaging/topic-messaging
The user (the client-side app) subscribes to the topic (the app ID in your case). You then send a single topic push, which is much easier from an App Engine frontend instance (limited to a 30-second response and such).
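On the server side, the single topic push is one HTTP POST to GCM's endpoint (a sketch based on the GCM topic-messaging docs; apiKey and the payload contents are placeholders):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sends one message to every device subscribed to /topics/myApp1.
static int pushToTopic(String apiKey) throws Exception {
    URL url = new URL("https://gcm-http.googleapis.com/gcm/send");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", "key=" + apiKey); // your GCM server key
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    String body = "{\"to\":\"/topics/myApp1\",\"data\":{\"message\":\"your alert here\"}}";
    try (OutputStream out = conn.getOutputStream()) {
        out.write(body.getBytes("UTF-8"));
    }
    return conn.getResponseCode(); // 200 means GCM accepted the message
}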
I found this blog post to be a great example of a full implementation and how to properly handle possible errors:
https://blog.pushbullet.com/2014/02/12/keeping-google-cloud-messaging-for-android-working-reliably-techincal-post/
The only issue I can see is that a push from the server is documented to take up to 30 seconds. An App Engine frontend instance also has a 30-second total limit, so while it waits for the GCM push to complete, the servlet itself can time out. One way to solve this is to send the push from a task queue, which gives you 60 seconds for URL fetch calls (I assume that limit applies to any API call as well): https://cloud.google.com/appengine/docs/java/urlfetch/
I have a scenario where my Java program has to continuously communicate with a database table; for example, it has to fetch the table's new rows as they are added at runtime. There should be continuous communication between my program and the database.
If the table has 10 rows initially and 2 rows are added by the user, it must detect this and return the rows.
My program shouldn't use AJAX and timers.
If the database you are using is Oracle, consider using triggers that call a Java stored procedure that notifies your client of changes in the DB (using JMS, RMI, or whatever you want).
Without Ajax and timers, it does not seem possible to do this task.
I have also faced the same issue, where I needed to push some data from the server to the client when it changed.
For this, you can use server push, AKA "Comet" programming.
In Comet:
We make a channel between client and server, where the client subscribes to a particular channel.
The server puts its data into the channel when it has some.
When the client reads the channel, it gets all the data in the channel, and the channel is emptied.
So every time the client reads from the channel, it gets only new data (see the sketch below).
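A minimal long-polling sketch of such a channel, assuming the Servlet API; the channel here is just a shared BlockingQueue, and publish() is an illustrative hook for the server-side events:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ChannelServlet extends HttpServlet {
    private static final BlockingQueue<String> channel = new LinkedBlockingQueue<>();

    // Server side: called when something changes (e.g. new rows detected in the DB).
    public static void publish(String data) {
        channel.offer(data);
    }

    // Client side: each GET blocks until data arrives, then drains (empties) the channel.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        List<String> batch = new ArrayList<>();
        try {
            String first = channel.poll(30, TimeUnit.SECONDS); // long-poll timeout
            if (first != null) {
                batch.add(first);
                channel.drainTo(batch); // the channel is emptied, as described above
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resp.setContentType("text/plain");
        resp.getWriter().print(batch);
    }
}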
Also, to monitor DB changes, you can have two things:
A trigger/timer (check out Quartz Scheduler)
An event-based mechanism, which pushes data into the channel on particular events.
Basically, the client can't know what is happening on the server side, so you must push some data or an event to tell the client "I have some new data, please call some method." It's a kind of notification. So please look into Comet/server push with event notification.
Hope this helps.
Thanks.
Not the simplest problem, really.
Let's divide it into two smaller problems:
1) how to enable reloading without timers and Ajax
2) how to implement the server side
There is no way to notify clients from the server. So you need to use Flash, Silverlight, JavaFX, or applets to create a thick client. If the problem with Ajax is that you don't know how to use it for this problem, then you can investigate some ready-to-use libraries of JSP tags or JSF components with Ajax support.
If you have only one server, then just add a cache. If there are several servers, then consider using distributed caches.
If you have a low-traffic database, you could implement a thread that rapidly checks for updates to the DB (polling).
If you have a high-traffic DB, I wouldn't recommend that, because polling creates a lot of additional traffic.
The server notifying clients is not a good idea (consider a scenario with 1000 clients). Do you use some persistence layer, or do you have to stick to pure JDBC?
If you have binary logs turned on in MySQL, you can see all of the transactions that occur in the database.
A portable way to do this is adding a timestamp column (create date) which indicates when the row was added to the table. After the initial load of the content, you simply poll for new content with a WHERE clause like create_date >= :last_poll_time. In case rows could have identical timestamps, you need to filter out duplicates before adding them.
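A minimal sketch of that polling loop over JDBC (table, column names, and the interval are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

// Repeatedly selects rows newer than the last create_date seen and hands them off.
static void pollForNewRows(String jdbcUrl) throws Exception {
    Timestamp lastSeen = new Timestamp(0L);
    String sql = "SELECT id, payload, create_date FROM my_table "
               + "WHERE create_date >= ? ORDER BY create_date";
    try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
        while (true) {
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, lastSeen);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // filter duplicates here if several rows share a timestamp
                        lastSeen = rs.getTimestamp("create_date");
                        // ... handle the new row ...
                    }
                }
            }
            Thread.sleep(1000L); // poll interval
        }
    }
}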
When you store a message in a queue, isn't it more metadata, so that whoever pulls from the queue knows how to process the data? The actual item in the queue doesn't always hold all the information.
Say you have an app like Twitter: whenever someone posts a message, you would still need to store the actual message text in the database, correct?
The queue would be more used to broadcast to other subscribers that a new message has arrived, and then those services could take further action.
Or could you actually store the tweet text in the queue also? (or you COULD, but that would be silly?)
Could a queue message have status fields, which subscribers can change as they process their part of the work flow? (or would you do that in the db?)
Just trying to get some clarification of when you would use a queue versus db.
When a process wants to farm data, and the processing of that data, out to another process (possibly on a different host), there are two strategies:
1. Stuff all your data into the queue item and let the receiving app worry about storing it in the database, along with whatever other processing.
2. Update your database, and then queue a tiny message to the other process just to notify it that there's new data to be massaged.
There are a number of factors that can be used to decide on which strategy:
If your database is fully ACID (one would hope) but your queueing system (QS) is not, your data would be safer in the DB. Even if the queue message gets lost in a server crash, you could run a script to process unprocessed data found in the DB. This would be a case for option 2.
If your data is quite large (say, 1 MB or more) then it might be cruel to burden your QS with it. If it's persistent, you'll end up writing the data twice, first to the QS's persister and later to the DB. This could be a drag on performance and influence you to go for option 1.
If your DB is slow or not even accessible to your app's front end, then option 1 it is.
If your second process is going to do something with the data but not store it in a DB, then option 1 may be the way to go.
Can't think of any more, but I hope you get the idea.
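A minimal sketch of option 2 with JMS (the queue name and the row-id payload are illustrative): the database keeps the full payload, and the queued message is just a pointer to it.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Option 2: the full payload was already INSERTed into the database;
// the message carries only the row id, and the consumer reads the payload from the DB.
void notifyNewData(ConnectionFactory factory, long rowId) throws JMSException {
    Connection conn = factory.createConnection();
    try {
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("newData"));
        producer.send(session.createTextMessage(String.valueOf(rowId)));
    } finally {
        conn.close();
    }
}

Keeping the message tiny sidesteps the double-write concern from the second factor above: the QS persists only a row id, never the full payload.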
In general, a queue is used to 'smooth' out publish rate versus consume rate, by buffering incoming requests that can't be handled immediately. A queue is usually backed by some sort of non-volatile storage (such as a database table). So the distinction is not so clear cut.
Use a database when you want to perform many searches against your 'queue', or provide rich reporting.
I recommend that you look at Gregor Hohpe's book, Enterprise Integration Patterns, which explains many different patterns for messaging-based approaches.
We used JMS extensively at my last job where we were passing data around from machine to machine. In the end, we were both sending and storing the data at the same time; however, we stored far less data than we sent out. We had a lot of metadata surrounding the real values.
We used JMS as simply a messaging service and it worked very well for that. But, you don't want to use JMS to store your data as it has no persistence (aside from being able to log and replay the messages perhaps).
One of the main advantages that JMS gives you is the ability to send out your messages in the correct and appropriate order and ensure that everybody receives them in that order. This makes synchronization easy since the majority of the message handling is done for you.
My understanding is that Twitter would use both the DB and JMS in conjunction. First, when a tweet is written, it is stored in the database, and that is how it is displayed on the message board. However, since this is a publisher/subscriber model, when the tweet is published it is then sent to the subscribers. So both are used.
I think your twitter example is good. You want the database for long term data. There wouldn't be much point in putting the tweet in the message because it has to go in the database. However, if you were running a chat room then you could go ahead and put the message in the JMS queue because you're not storing it long term anywhere.
It's not that you can't put the tweet in the JMS queue; it's that you need to put it in the database anyway.
I would use the queue whenever you can utilize a "fire-and-forget" pattern. In your Twitter example, I would use the queue to post the message from the client. The queue processor can then store it to the database when it gets to it.
If you require some sort of immediate success/failure status, then the message queue isn't for you.