I started to develop an application which consists of two parts. The whole system acts as a digital signage application. Everything is located on one physical machine.
The first component is an administration backend for content management. Users can schedule and upload multimedia files. This part is web based and available via a browser.
The second part is a media player. Its task is basically to load the data which people uploaded and display it. The metadata (e.g. how long a picture should be displayed, what comes next, ...) is saved in a database, and the physical data in the filesystem, since it is all on the same machine.
All related data is stored in a PostgreSQL database: basically scheduling information plus the filesystem paths of the content, not its binary form.
There is a socket connection over the local loopback between those two, with some really simple communication and command parsing (status check, exit, refresh content). If a user uploads and schedules new content, or changes the current content via the web-based backend, a message is sent over the socket to the media player telling it to build a new schedule.
As soon as the player receives a message to check for new content, it loads the scheduling data from the database.
I would like to know if this is considered an instance of the Database-as-IPC antipattern. If yes, what would be better ways of solving it?
It does not sound like it at all to me. The scheduling data is something persistent, no? Schedules can be recurring and last for a long time. Getting them from the database is fine.
"Database as IPC" would be if you sent "messages" by inserting say, an "exit"-row in the database and the media player decided when to exit by querying that table and checking for an "exit message" in it every 15 seconds or something.
There's nothing wrong with Process A inserting data, then sending a message to Process B that says "I have created new data you may be interested in", as long as the data you created is actually persistent data that belongs in a database. It's only a problem if what you're putting in the database isn't actually persistent data and you're just using it as a transient intermediate step.
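To make the distinction concrete, here is a minimal sketch of the "notify, then read" pattern described above. The socket carries only short commands; the schedule itself stays in the database. The class name, port handling and command set are illustrative, not taken from the asker's code.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

public class PlayerNotifier {

    // Backend side: after committing new schedule rows, send a one-line
    // command over the loopback socket. The data itself is NOT sent here.
    public static void notifyPlayer(String host, int port, String command) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(command);   // e.g. "REFRESH"
        }
    }

    // Player side: map a received command to an action. On REFRESH the
    // player re-queries the schedule tables rather than expecting any
    // payload on the socket -- the database stays the source of truth.
    public static String handle(String command) {
        switch (command.trim().toUpperCase()) {
            case "STATUS":  return "status-report";
            case "REFRESH": return "reload-schedule-from-db";
            case "EXIT":    return "shutdown";
            default:        return "unknown-command";
        }
    }
}
```

The socket message is purely a wake-up signal; if it were lost, the player could still recover the correct state by re-reading the database, which is what keeps this out of antipattern territory.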
Related
I have 2 applications:
desktop (java)
web (symfony)
I have some data in the desktop app that must be consistent with the data in the web app.
So basically I send a POST request from the desktop app to the web app to update online data.
But the problem is that the internet connection is not always available when I send my request, and at the same time I can't prevent the user from updating the desktop data.
So far, this is what I have in mind to make sure the data is synchronized when the internet is available.
Am I going in the right direction or not?
If not, I hope you guys can put me on the right path to achieve my goal in a professional way.
Any link about this kind of topic will be appreciated.
In this case the useful pattern is to assume that sending data is asynchronous by default. After the data is collected, it is stored in some intermediate structure and waits for a suitable moment to be sent. I think a queue would be useful here, because it can be backed by a database, which prevents data loss if the sending server fails. A separate thread (e.g. a job) checks for data in the queue and, if any exists, reads it and tries to send it. If the send succeeds, the data is removed from the queue. If it fails, the data stays in the queue and another attempt is made next time.
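The retry loop described above can be sketched roughly as follows. This is a simplified in-memory stand-in (a real implementation would persist the queue in a database table, as the answer suggests); the class and method names are my own, not from any framework.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

public class OutboundQueue {
    private final Deque<String> pending = new ArrayDeque<>();

    public void enqueue(String payload) { pending.addLast(payload); }

    public int size() { return pending.size(); }

    // Called periodically by a background job: try to send each payload,
    // remove it only when the send succeeds, keep it for the next run
    // otherwise. `send` returns true on success, false on failure.
    public void drain(Predicate<String> send) {
        int n = pending.size();
        for (int i = 0; i < n; i++) {
            String payload = pending.pollFirst();
            if (!send.test(payload)) {
                pending.addLast(payload);   // failed: retry on the next run
            }
        }
    }
}
```

The key property is that an item leaves the queue only after a confirmed successful send, so a network outage merely delays delivery instead of losing data.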
This is a typical scenario where you want to send a message to a non-transactional external system from within a transaction, and you need to guarantee that the data will be transferred to the external system as soon as possible without losing it.
Two solutions come to mind; maybe the second fits your architecture better.
Use case 1)
You can use a message queue plus a redelivery-limit setting with the dead letter pattern. In that case you need an application server.
Here you can read details about the Dead letter pattern.
This document explains how the redelivery limit works on WebLogic Server.
Use case 2)
You can create an interface table in the database of the desktop application. Insert your original data into the database and insert a new record into the interface table as well (all in the same transaction). The data you want to POST also goes into the interface table. The status flag of the new record in the interface table can be "ARRIVED". Then create an independent timer in your desktop app which periodically searches for records in the interface table with status "ARRIVED". This timer-controlled process tries to POST the data to the web service. If the HTTP response is 200, update the status of the record to "SENT".
Both can work like a charm.
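The interface-table approach from use case 2 boils down to a simple status-flag state machine. Here is a hedged sketch of what one timer tick does, with the table modeled as an in-memory list (a real implementation would run a SELECT/UPDATE over JDBC); the `Row` class and the `post` callback are hypothetical names I introduced for illustration.

```java
import java.util.List;
import java.util.function.Predicate;

public class InterfaceTableSync {
    public static final String ARRIVED = "ARRIVED";
    public static final String SENT = "SENT";

    // One row of the hypothetical interface table: id + payload + status flag.
    public static class Row {
        public final long id;
        public final String payload;
        public String status = ARRIVED;
        public Row(long id, String payload) { this.id = id; this.payload = payload; }
    }

    // What the periodic timer does each tick: find ARRIVED rows, try to
    // POST each one, and flip the flag to SENT only when the HTTP call
    // reports success (i.e. response 200). Returns how many rows were sent.
    public static int tick(List<Row> table, Predicate<String> post) {
        int sent = 0;
        for (Row r : table) {
            if (ARRIVED.equals(r.status) && post.test(r.payload)) {
                r.status = SENT;   // in the real app: UPDATE ... SET status='SENT'
                sent++;
            }
        }
        return sent;
    }
}
```

Because the flag only moves forward on a confirmed 200 response, a failed POST simply leaves the row in ARRIVED state for the next timer run, which is exactly the at-least-once guarantee the answer is describing.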
You can solve this in many ways. Here are two:
1. You can use the circuit breaker pattern. You can find a link about it here.
2. You can use JMS to manage this.
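For the first suggestion, a circuit breaker in its simplest form is just a failure counter that stops further calls to the remote side once a threshold is tripped. The sketch below is a bare-bones illustration under my own naming, not a production library (real implementations such as resilience4j also add half-open states and recovery timeouts).

```java
public class CircuitBreaker {
    private final int threshold;   // consecutive failures before tripping
    private int failures = 0;
    private boolean open = false;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    // Callers check this before attempting the remote request.
    public boolean allowRequest() { return !open; }

    public void recordSuccess() { failures = 0; }

    public void recordFailure() {
        if (++failures >= threshold) {
            open = true;   // trip: stop hammering the unavailable web app
        }
    }

    // In practice this would be triggered by a recovery timeout.
    public void reset() { failures = 0; open = false; }
}
```

In the desktop-to-web scenario this keeps the app from repeatedly timing out against an unreachable server while the connection is down.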
I was reading this thread, but my problem seems more serious.
the thread: Google fit Recording Api Delay
How long is the delay usually?
According to the documentation, if I subscribe to the dataSources for the dataTypes I want to record and store in the Google Fitness Store, the Recording API will take care of it for me. However, I keep querying the data, and no data is returned.
Secondly, for now I would like to record live data within a workout session. I have a method that is called when the user clicks the "Start Workout Session" button; it starts a new session using the Sessions API, and stops the session when "End Workout Session" is clicked. The session ends up inserted without any dataset: I query the data using the start and end times of that session, assuming that the Recording API stored data automatically during the session, but no data is returned when I do the querying.
Am I doing it wrong?
I have 4 different Raspberry Pis running the same program on each; the program sends information to a MySQL DB to be inserted into a table.
Is it possible for this to happen, or what problems will occur?
e.g.
Rpi:1 accessed -> sends info to DB
Rpi:2 accessed -> sends info to DB
Rpi:3 accessed -> sends info to DB
Can these happen simultaneously?
I don't have 4 devices at the minute, that's why I haven't tried it, but I'm just wondering how this would work or if it is possible.
Revised: Cheers for the responses guys. Each of the RPis is connected to an RFID module, so when a fob gets read it sends the timestamp to a DB, and that's the same with all 4 devices! Each device will be used at a random time when someone wants to access the system; will this cause problems?
Thanks :)
As @mastah indicates, it really depends on what you want to do.
The answer is yes, it can be done, but some things are more complex than others. E.g. if you want the devices to record temperature in different places, then each device simply creates a new record every few minutes along with the location name. The name and time of the record would be the unique key. No problem.
If, say, you want to be able to change any record in the database on any device, you need to think about how two people changing the same record will be reconciled.
It also depends on what you mean by "simultaneously". In general, database writes are done sequentially in "transactions". So you may need to consider whether "simultaneously" means "very quickly one after another" or not. Does the order of the writes matter?
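To illustrate why plain concurrent inserts are fine: the database serializes the writes itself, so four devices inserting at the same time just produce four rows. The sketch below simulates this with an in-memory thread-safe queue standing in for the MySQL table (a real Pi would run an `INSERT` with the fob id and timestamp over JDBC); all names here are illustrative.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentWriteDemo {
    // Stand-in for the database table; a real MySQL table serialises
    // concurrent INSERTs the same way this queue serialises additions.
    static final ConcurrentLinkedQueue<String> table = new ConcurrentLinkedQueue<>();

    // Simulate `devices` Pis each writing `readsPerDevice` fob reads
    // concurrently; returns the final row count.
    public static int simulate(int devices, int readsPerDevice) {
        table.clear();
        ExecutorService pool = Executors.newFixedThreadPool(devices);
        for (int d = 1; d <= devices; d++) {
            final int id = d;
            pool.submit(() -> {
                for (int i = 0; i < readsPerDevice; i++) {
                    // In the real setup: INSERT INTO access_log (device, ts) VALUES (?, ?)
                    table.add("rpi-" + id + " read#" + i);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return table.size();
    }
}
```

No writes are lost regardless of interleaving, which mirrors the point above: since each fob read is an independent append-only insert and no two devices update the same row, ordering and reconciliation concerns simply don't arise.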
I am a Java developer and my application runs on iOS and Android. I have created a web service for it, built on the Restlet framework with JDBC for DB connectivity.
My problem: I have three types of data, called intersections (current + past + future), and each intersection contains a list of users as its data. There is a single web service that returns all the users in a device's intersections. I have implemented pagination, but the server has to process all of the intersections and then return only the (start-end) slice to the device. I did it this way because a past user may also become current. That is the whole logic.
But as the intersections in a profile grow, the server has to process every user, so it becomes slow, which is to be expected. The device also calls this web service every 5 minutes.
Please suggest a better way to handle this scenario.
Thanks in advance.
Ketul Rathod
It's a little hard to follow your logic, but it sounds like you can probably benefit from caching your results on the server.
If it makes sense, every time you process the user's data on the server, save the results (to a file, a database table, whatever). Then, in 5 minutes, if there are no changes, simply return the same results. If there were changes, retrieve the cached results (optionally invalidating the cache in the process), append the changes to what is cached, re-save the results in the cache, and return them.
If this is applicable to your workflow, your server-side processing time will be significantly less.
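The caching idea can be sketched as a version-guarded memoization: recompute the expensive intersection processing only when the underlying data has actually changed, otherwise serve the stored result to the 5-minute polls. The class and the notion of a "data version" are my own illustrative framing, not from the asker's code.

```java
import java.util.function.Supplier;

public class ResultCache<T> {
    private T cached;
    private long version = -1;   // version of the data the cache was built from

    // Recompute only when the underlying data changed (signalled by a new
    // version number, e.g. a row count or last-modified timestamp);
    // otherwise return the stored result so polling calls stay cheap.
    public synchronized T get(long currentVersion, Supplier<T> compute) {
        if (version != currentVersion) {
            cached = compute.get();   // expensive intersection processing
            version = currentVersion;
        }
        return cached;
    }
}
```

With this in place, most of the every-5-minutes requests hit the cached result, and the full recomputation cost is paid only when an intersection actually changes.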
I have a scenario where my Java program has to continuously communicate with a database table; for example, it has to get new rows of the table as they are added at runtime. There should be continuous communication between my program and the database.
If the table has 10 rows initially and 2 rows are added by the user, it must detect this and return the rows.
My program shouldn't use AJAX and timers.
If the database you are using is Oracle, consider using triggers that call a Java stored procedure, which notifies your client of changes in the DB (using JMS, RMI or whatever you want).
Without Ajax or timers, this task does not seem feasible.
I have also faced the same issue, where I needed to push some data from server to client when it changed.
For this, you can use server push, AKA "Comet" programming.
In Comet, we make a channel between client and server, where the client subscribes to a particular channel.
The server puts its data into the channel when it has some.
When the client reads the channel, it gets all the data in the channel and the channel is emptied,
so every time the client reads from the channel, it gets only the new data.
Also, to monitor DB changes, you can have two things:
a trigger/timer (check out the Quartz Scheduler), or
an event-based mechanism which pushes data into the channel on particular events.
Basically, the client can't know what is happening on the server side, so you must push some data or an event to tell the client "I have some new data, please call some method". It's a kind of notification. So please look into Comet/server push with event notification.
Hope this helps.
Thanks.
Not the simplest problem, really.
Let's divide it into 2 smaller problems:
1) how to enable reloading without timers and ajax
2) how to implement server side
There is no way for the server to notify clients over plain HTTP. So you need to use Flash, Silverlight, JavaFX or applets to create a thick client. If the problem with Ajax is that you don't know how to use it for this case, then you can investigate some ready-to-use libraries of JSP tags or JSF components with Ajax support.
If you have only one server, then just add a cache. If there are several servers, then consider using a distributed cache.
If you have a low-traffic database, you could implement a thread that repeatedly checks for updates to the DB (polling).
For a high-traffic DB I wouldn't recommend that, because polling creates a lot of additional traffic.
Having the server notify clients is not a good idea (consider a scenario with 1000 clients). Do you use a persistence layer, or do you have to stick to pure JDBC?
If you have binary logs turned on in MySQL, you can see all of the transactions that occur in the database.
A portable way to do this is to add a timestamp column (create date) which indicates when the row was added to the table. After the initial load of the content, you simply poll for new content with a WHERE clause like create_date >= :last_seen. Since rows can have identical timestamps, you need to filter out duplicates before adding them.
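The watermark-plus-deduplication logic in that last answer can be sketched like this, with the query result modeled as a plain list (in the real app it would come from a JDBC `SELECT ... WHERE create_date >= ?`). The class and field names are illustrative; the seen-id set is kept unbounded here for simplicity, where a real poller would prune it.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NewRowPoller {
    public static class Row {
        public final long id;
        public final long createDate;   // epoch millis of the timestamp column
        public Row(long id, long createDate) { this.id = id; this.createDate = createDate; }
    }

    private long lastSeen = 0;                          // newest create_date handled so far
    private final Set<Long> seenIds = new HashSet<>();  // filters rows sharing a timestamp

    // One polling pass: keep only rows at or after the watermark that we
    // have not returned before, then advance the watermark. The >= (not >)
    // comparison is what makes the duplicate filter necessary.
    public List<Row> poll(List<Row> candidates) {
        List<Row> fresh = new ArrayList<>();
        for (Row r : candidates) {
            if (r.createDate >= lastSeen && seenIds.add(r.id)) {
                fresh.add(r);
                lastSeen = Math.max(lastSeen, r.createDate);
            }
        }
        return fresh;
    }
}
```

Running this every few seconds gives the "detect newly added rows" behavior the question asks for, at the cost of polling traffic, which is why the earlier answers recommend it only for low-traffic databases.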