I have 2 applications:
desktop (java)
web (symfony)
I have some data in the desktop app that must be consistent with the data in the web app.
So basically I send a POST request from the desktop app to the web app to update online data.
But the problem is that an internet connection is not always available when I send the request, and at the same time I can't prevent the user from updating the desktop data.
So far, this is what I have in mind to make sure the data gets synchronized once the internet is available.
Am I going in the right direction or not?
If not, I hope you can put me on the right path to achieve my goal in a professional way.
Any links about this kind of topic would be appreciated.
In this case the useful pattern is to assume that sending data is asynchronous by default. After being collected, the data is stored in some intermediate structure where it waits for a suitable moment to be sent. I think a queue could be useful because it can be backed by a database, which prevents data loss in case the sending server fails. A separate thread (e.g. a job) checks for data in the queue and, if any exists, reads it and tries to send it. If the send completes correctly, the data is removed from the queue. If a failure occurs, the data stays in the queue and another attempt is made to send it next time.
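As a minimal sketch of this idea (the `SyncQueue` name and the `trySend` callback are illustrative, not an existing API; a production version would back the queue with a local database table so pending items survive a restart):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Predicate;

public class SyncQueue {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Called wherever the user updates desktop data; never blocks on the network.
    public void enqueue(String payload) {
        pending.add(payload);
    }

    // One pass of the sender job: try to send each pending item once,
    // keep it in the queue if the send fails. Returns how many were sent.
    public int flush(Predicate<String> trySend) {
        int sent = 0;
        int n = pending.size();
        for (int i = 0; i < n; i++) {
            String item = pending.poll();
            if (item == null) {
                break;
            }
            if (trySend.test(item)) {
                sent++;              // delivered: drop it
            } else {
                pending.add(item);   // failed: retry on the next pass
            }
        }
        return sent;
    }

    public int pendingCount() {
        return pending.size();
    }
}
```

The separate thread mentioned above would simply call `flush` on a schedule, passing a predicate that performs the actual POST and returns whether it succeeded.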
This is a typical scenario where you want to send a message to a non-transactional external system from within a transaction, and you need to guarantee that the data will be transferred to the external system as soon as possible without losing it.
Two solutions come to mind; maybe the second fits your architecture better.
Use case 1)
You can use a message queue plus a redelivery-limit setting with the dead letter pattern. In that case you need to have an application server.
Here you can read details about the Dead letter pattern.
This document explains how the redelivery limit works on WebLogic Server.
Use case 2)
You can create an interface table in the database of the desktop application. Insert your original data into the database and, in the same transaction, insert a new record into the interface table containing the data you want to POST. Set the status flag of the new record in the interface table to "ARRIVED". Then create an independent timer in your desktop app which periodically searches the interface table for records with status "ARRIVED". This timer-controlled process tries to POST the data to the web service, and if the HTTP response is 200 it updates the record's status to "SENT".
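A rough in-memory model of use case 2 (the `Row`, `Status`, and `postToWebService` names are made up for illustration; in the real system the rows live in a database table and the inserts share one transaction):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class InterfaceTable {
    public enum Status { ARRIVED, SENT }

    public static final class Row {
        final String payload;
        Status status = Status.ARRIVED;
        Row(String payload) { this.payload = payload; }
    }

    private final List<Row> rows = new ArrayList<>();

    // In the real system this insert shares a transaction with the
    // insert of the original business data.
    public void insert(String payload) {
        rows.add(new Row(payload));
    }

    // Timer-driven job: POST every ARRIVED row; on success (HTTP 200)
    // flip the row's status to SENT so it is not posted again.
    public long postArrived(Predicate<String> postToWebService) {
        long sent = 0;
        for (Row r : rows) {
            if (r.status == Status.ARRIVED && postToWebService.test(r.payload)) {
                r.status = Status.SENT;
                sent++;
            }
        }
        return sent;
    }

    public long countArrived() {
        return rows.stream().filter(r -> r.status == Status.ARRIVED).count();
    }
}
```

Rows that fail to POST simply keep the "ARRIVED" status and are picked up again on the next timer tick, which is what gives the at-least-once delivery guarantee.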
Both can work like a charm.
You can solve this in many ways. Here are two:
1. You can use the circuit breaker pattern. You can find a link about it here.
2. You can use JMS to manage this.
I am using the QuickFIX library to connect to my broker's FIX server and send orders. My application (a QuickFIX Initiator) receives order messages from different algorithms by consuming a RabbitMQ queue and routes them to my broker using the FIX protocol, as outlined by the diagram below:
I had to implement a custom order execution that must check the bid and ask of a specific security before deciding whether the order to be sent is a LIMIT or a STOP LIMIT. Before having access to market data through FIX, I used to make a request to an internal API built upon the Bloomberg API:
instrumentData = bbgApi.getInstrumentData(this.params.getSymbol());
String side = this.params.getSide();
if (side.equals(Side.BUY)) {
    Double ask = instrumentData.getAsk();
    // ...
    Session.sendToTarget(this.getOrder(), sessionID);
} else if (side.equals(Side.SELL)) {
    Double bid = instrumentData.getBid();
    // ...
    Session.sendToTarget(this.getOrder(), sessionID);
}
I just have to check the bid and ask at the exact moment that I am sending the order.
Now that I have access to market data through FIX, I would like to consume the bid and ask from it, because the prices are closer to the real price of the securities at each moment. But because of the way FIX works, I don't know how I can make a call like
instrumentData = fixApi.getInstrumentData(this.params.getSymbol());
because when I send a FIX message, it does not return a "promise" that I can wait on before continuing my code execution. I am used to the way JavaScript and REST APIs work, so I am a little bit stuck. I am wondering what the best way is to consume and produce market data received through FIX.
My ideas
1. Create a Market Data FIX application (Initiator) that will subscribe to securities data. The data for each security will be put on a RabbitMQ queue. For each order received by my FIX Order Application, the Order Execution Object will consume from that security's queue, react to the first market data message received, and send the order.
2. Create a Market Data FIX application (Initiator) that will subscribe to securities data. The data for each security will be put into a MySQL table. Then I would create an API like the bbgApi mentioned above that queries MySQL and gets the most recent data for the required security.
3. Create a Market Data application composed of an Initiator and an Acceptor. The Initiator connects to my broker's market data app, and the Acceptor accepts new connections from internal applications. My Order Execution Object would request market data through the Acceptor and wait for a message containing the required data.
In my opinion, solution 3 seems ideal, but it would require multiple connections to my Acceptor, which could slow down (and delay) order execution. Ideally I would have only one object connected to the FIX market data app; it would send the request, and the request would return a promise that, when completed, yields the required data.
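The single-connection, promise-style call described here could be sketched with a CompletableFuture registry keyed by request id. `MarketDataGateway` is a hypothetical name; with QuickFIX/J, `request` would also send a MarketDataRequest carrying a fresh MDReqID, and `complete` would be invoked from the `fromApp` callback when the matching snapshot arrives:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class MarketDataGateway {
    private final Map<String, CompletableFuture<Double>> inFlight =
            new ConcurrentHashMap<>();

    // Order-execution side: register a future for this request id and
    // (in the real gateway) send the FIX MarketDataRequest here.
    public CompletableFuture<Double> request(String mdReqId) {
        CompletableFuture<Double> future = new CompletableFuture<>();
        inFlight.put(mdReqId, future);
        return future;
    }

    // FIX-callback side: when a snapshot for mdReqId arrives, complete
    // the matching future so the waiting code can continue.
    public void complete(String mdReqId, double price) {
        CompletableFuture<Double> future = inFlight.remove(mdReqId);
        if (future != null) {
            future.complete(price);
        }
    }
}
```

The order execution code can then block with `future.get(timeout, unit)` or chain a callback with `thenAccept`, which bridges the asynchronous FIX session to the promise style the question asks for.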
I appreciate your opinion about which way is better to consume market data. If you have a different opinion or suggestion, please let me know. Thank you very much for your help.
Edit:
RabbitMQ won't be a good option because I may want multiple consumers for the same message. Maybe Kafka would be ideal.
I am a Java developer and my application runs on iOS and Android. I have created a web service for it using the Restlet framework, with JDBC for DB connectivity.
My problem is that I have three types of data, together called an intersection: current + past + future. An intersection contains a list of users as its data. There is a single web service that returns all the users in a device owner's intersection. I have implemented pagination, but the server still has to process all of the user's intersections and then return only the (start-end) slice to the device. I did this because a past user may also become current. That is the whole logic.
But as the intersections in a profile grow, the server has to process every user, so it becomes slow, which is to be expected. The device also calls this web service every 5 minutes.
Please suggest a better way to handle this scenario.
Thanks in advance.
Ketul Rathod
It's a little hard to follow your logic, but it sounds like you can probably benefit from caching your results on the server.
If it makes sense, every time you process the user's data on the server, save the results (to a file, to a database table, whatever). Then, 5 minutes later, if there are no changes, simply return the same results. If there were changes, retrieve the results from the cache (optionally invalidating it in the process), append the changes to what was cached, re-save the results in the cache, and return them.
If this is applicable to your workflow, your server-side processing time will be significantly less.
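A minimal sketch of that server-side cache (class and method names are illustrative, not from the question's codebase):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ResultCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // Return the cached value, computing it only on the first call
    // (or the first call after an invalidate).
    public V get(K key, Function<K, V> compute) {
        return cache.computeIfAbsent(key, compute);
    }

    // Call this when the underlying data for the key changes.
    public void invalidate(K key) {
        cache.remove(key);
    }
}
```

Here the expensive intersection computation runs once per user; the 5-minute polling calls then hit the cache until an update invalidates that user's entry.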
I started to develop an application which consists of two parts. The whole system acts as a digital signage application. Everything is located at one physical machine.
First component is an administration backend for the content management. Users can schedule and upload multimedia files. This is web based thing available via browser.
The second part is a media player. Its task is basically to load data which people uploaded and display them. Metadata for those are saved in a database (e.g. how long should it display this picture, what goes next ...) and physical data in the filesystem since it is all on the same machine.
All related data is stored in a PostgreSQL database: basically schedule information plus the filesystem paths of the content, not its binary form.
There is a socket connection over the local loopback between the two components, with some really simple communication and command parsing (status check, exit, refresh content). If a user uploads and schedules new content, or changes the current content via the web-based backend, a message is sent over the socket to the media player telling it to build a new schedule.
As soon as the player receives a message to check for new content, it loads the scheduling data from the database.
I would like to know if this is considered as a Database as an IPC antipattern? If yes what would be better ways of solving it?
Does not sound like it at all to me. The scheduling data is something persistent, no? Schedules can be recurring, last for a long time? Getting them from the database is fine.
"Database as IPC" would be if you sent "messages" by inserting say, an "exit"-row in the database and the media player decided when to exit by querying that table and checking for an "exit message" in it every 15 seconds or something.
There's nothing wrong with Process A inserting data, then sending a message to Process B that says "I have created new data you may be interested in", as long as the data you created is actually persistent data that belongs in a database. It's only a problem if what you're putting in the database isn't actually persistent data and you're just using it as a transient intermediate step.
I have a scenario where my Java program has to continuously communicate with the database table, for example my Java program has to get the data of my table when new rows are added to it at runtime. There should be continuous communication between my program and database.
If the table has 10 rows initially and 2 rows are added by the user, it must detect this and return the rows.
My program shouldn't use AJAX and timers.
If the database you are using is Oracle, consider using triggers that call a Java stored procedure, which notifies your client of changes in the DB (using JMS, RMI, or whatever you want).
Without Ajax and timers, this does not seem achievable.
I have also faced the same issue, where I needed to push some data from the server to the client when it changed.
For this, you can use server push, AKA "Comet" programming.
In Comet,
we make a channel between client and server, where the client subscribes to a particular channel.
The server puts its data into the channel when it has some.
When the client reads the channel, it gets all the data in the channel, and the channel is emptied.
So every time the client reads from the channel, it gets only new data.
Also, to monitor DB changes, you can use two things:
a trigger/timer (check out Quartz Scheduler), or
an event-based mechanism, which pushes data into the channel on particular events.
Basically, the client can't know what is happening on the server side, so you must push some data or an event to tell the client "I have some new data, please call some method." It's a kind of notification. So please look into Comet/server push with event notification.
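A toy model of the channel semantics described above (publish fills it, a read drains it empty), assuming a single consumer per channel; a real Comet setup would do this per subscribed client over a held-open HTTP connection:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Channel<T> {
    private final BlockingQueue<T> events = new LinkedBlockingQueue<>();

    // Server side: put data in the channel when there is some.
    public void publish(T event) {
        events.add(event);
    }

    // Client side: one read returns everything pending and empties the
    // channel, so the next read only ever sees newer data.
    public List<T> read() {
        List<T> out = new ArrayList<>();
        events.drainTo(out);
        return out;
    }
}
```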
Hope this helps.
Thanks.
Not the simplest problem, really.
Let's divide it into 2 smaller problems:
1) How to enable reloading without timers and Ajax
2) How to implement the server side
With plain HTTP there is no way for the server to notify clients. So you need to use Flash, Silverlight, JavaFX, or applets to create a thick client. If the problem with Ajax is that you don't know how to use it for this, then you can investigate ready-to-use libraries of JSP tags or JSF components with Ajax support.
If you have only one server, just add a cache. If there are several servers, consider using a distributed cache.
If you have a low-traffic database you could implement a thread that rapidly checks for updates to the DB (polling).
If you have a high-traffic DB I wouldn't recommend that, because polling creates a lot of additional traffic.
Having the server notify clients is not a good idea (consider a scenario with 1000 clients). Do you use a persistence layer, or do you have to stick to pure JDBC?
If you have binary logs turned on in MySQL, you can see all of the transactions that occur in the database.
A portable way to do this is to add a timestamp column (create date) which indicates when the row was added to the table. After the initial load of the content, you simply poll for new content with a WHERE clause like create_date >= last_poll_time. Since rows can have identical timestamps, you need to filter out duplicates before adding them.
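A small sketch of that polling logic (`Row` and the in-memory "table" are illustrative; in the real system the timestamp filter would be the WHERE clause and only the duplicate check would run in application code):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeltaPoller {
    public static final class Row {
        final long id;
        final long createDate;
        public Row(long id, long createDate) { this.id = id; this.createDate = createDate; }
    }

    private long lastSeen = Long.MIN_VALUE;
    private final Set<Long> delivered = new HashSet<>();

    // One poll: keep rows at or after the newest timestamp seen so far,
    // and drop ids already delivered (several rows can share a timestamp).
    public List<Row> poll(List<Row> table) {
        List<Row> fresh = new ArrayList<>();
        for (Row r : table) {
            if (r.createDate >= lastSeen && delivered.add(r.id)) {
                fresh.add(r);
                lastSeen = Math.max(lastSeen, r.createDate);
            }
        }
        return fresh;
    }
}
```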
When you store a message in a queue, isn't it more metadata, so that whoever pulls from the queue knows how to process the data? The actual message in the queue doesn't always hold all the information.
Say you have an app like Twitter, whenever someone posts a message, you would still need to store the actual message text in the database correct?
The queue would be more used to broadcast to other subscribers that a new message has arrived, and then those services could take further action.
Or could you actually store the tweet text in the queue also? (or you COULD, but that would be silly?)
Could a queue message have status fields, which subscribers can change as they process their part of the work flow? (or would you do that in the db?)
Just trying to get some clarification of when you would use a queue versus db.
When a process wants to farm data, and the processing of that data, out to another process (possibly on a different host), there are two strategies:
1. Stuff all your data into the queue item and let the receiving app worry about storing it in the database, along with whatever other processing.
2. Update your database, and then queue a tiny message to the other process just to notify it that there's new data to be massaged.
There are a number of factors that can be used to decide on which strategy:
If your database is fully ACID (one would hope) but your queueing system (QS) is not, your data would be safer in the DB. Even if the queue message gets lost in a server crash, you could run a script to process unprocessed data found in the DB. This would be a case for option 2.
If your data is quite large (say, 1 MB or more) then it might be cruel to burden your QS with it. If it's persistent, you'll end up writing the data twice, first to the QS's persister and later to the DB. This could be a drag on performance and influence you to go for option 1.
If your DB is slow or not even accessible to your app's front end, then option 1 it is.
If your second process is going to do something with the data but not store it in a DB, then option 1 may be the way to go.
Can't think of any more, but I hope you get the idea.
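Option 2 above can be sketched in a few lines (`NotifyQueue` and the in-memory maps are illustrative stand-ins for a real database and message queue):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class NotifyQueue {
    private final Map<Long, String> db = new ConcurrentHashMap<>();
    private final BlockingQueue<Long> queue = new LinkedBlockingQueue<>();

    // Producer: durable write first, then the small notification.
    public void produce(long id, String payload) {
        db.put(id, payload);
        queue.add(id);
    }

    // Consumer: take a notification (if any) and fetch the real data
    // from the "database" by id; returns null when nothing is pending.
    public String consumeOne() {
        Long id = queue.poll();
        return id == null ? null : db.get(id);
    }
}
```

Because the payload is already safe in the database before the notification is queued, losing a queue message only delays processing; a sweep over unprocessed rows can recover it later.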
In general, a queue is used to 'smooth' out publish rate versus consume rate, by buffering incoming requests that can't be handled immediately. A queue is usually backed by some sort of non-volatile storage (such as a database table). So the distinction is not so clear cut.
Use a database when you want to perform many searches against your 'queue', or provide rich reporting.
I recommend that you look at Gregor Hophe's book, Enterprise Integration Patterns, which explains many different patterns for messaging-based approaches.
We used JMS extensively at my last job where we were passing data around from machine to machine. In the end, we were both sending and storing the data at the same time; however, we stored far less data than we sent out. We had a lot of metadata surrounding the real values.
We used JMS as simply a messaging service and it worked very well for that. But, you don't want to use JMS to store your data as it has no persistence (aside from being able to log and replay the messages perhaps).
One of the main advantages that JMS gives you is the ability to send out your messages in the correct and appropriate order and ensure that everybody receives them in that order. This makes synchronization easy since the majority of the message handling is done for you.
My understanding is that Twitter would use both a DB and JMS in conjunction. When a tweet is written, it is stored in the database, and that is how it is displayed on the message board. However, since this is a publisher/subscriber model, when the tweet is published it is then sent to the subscribers. So both are used.
I think your Twitter example is good. You want the database for long-term data. There wouldn't be much point in putting the tweet in the message, because it has to go in the database anyway. However, if you were running a chat room, then you could go ahead and put the message in the JMS queue, because you're not storing it long-term anywhere.
It's not that you can't put the tweet in JMS; it's that you need to put it in the database anyway.
I would use the queue whenever you can utilize a "fire-and-forget" pattern. In your Twitter example, I would use the queue to post the message from the client. The queue processor can then store it to the database when it gets to it.
If you require some sort of immediate success/failure status, then the message queue isn't for you.