I searched Google and Stack Overflow for my problem but couldn't find a good solution. Below is the description.
Our Java web application displays search results combined from our local database and from external web service API calls, so the search logic has to merge these results and display them on the result page. The problem is that the external API calls return results more slowly than our local DB calls. Performance is crucial for our search, and the results must be live, i.e. we should not cache or persist the external results in our local DB. Right now we are spawning two threads, one for the DB call and another for the external API, then combining the results and displaying them on the screen. But this kills the performance of our application, particularly when we call more than one external API.
Is there any architectural solution for this problem?
Any help would be greatly appreciated.
Thanks.
You cannot display data before you have it.
1) You can display your local data first and, as the other data arrives, add it via AJAX.
2) If there are repeated queries, you could cache the external answers for a short time (displaying them with a warning that they are old and will be replaced by a fresh answer), and push the fresh answer as soon as it arrives.
With at least 1), the system will be responsive; with 2), a usable answer can be available immediately, even if it's not current.
By the way, if the external sources take long to answer, are you sure their answers are not stale (e.g. if they gather some data and then wait for the rest, what they gathered so far can go stale)? So maybe (and maybe not) short-term persisting is not as bad as you think.
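The two-thread approach itself is sound; what usually hurts is waiting unconditionally for the slowest source. One common pattern is to run every source through an `ExecutorService` and cap the wait with a time budget, rendering whatever arrived in time and back-filling the rest via AJAX as suggested in 1). A rough sketch, where the two callables are stand-ins for your real DB and API calls:

```java
import java.util.*;
import java.util.concurrent.*;

public class FederatedSearch {

    // Run each source concurrently; keep only results that arrive within the budget.
    static List<String> search(Map<String, Callable<String>> sources, long budgetMs)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(sources.size());
        Map<String, Future<String>> futures = new LinkedHashMap<>();
        for (Map.Entry<String, Callable<String>> e : sources.entrySet()) {
            futures.put(e.getKey(), pool.submit(e.getValue()));
        }
        List<String> merged = new ArrayList<>();
        long deadline = System.currentTimeMillis() + budgetMs;
        for (Map.Entry<String, Future<String>> e : futures.entrySet()) {
            long remaining = Math.max(0, deadline - System.currentTimeMillis());
            try {
                merged.add(e.getValue().get(remaining, TimeUnit.MILLISECONDS));
            } catch (TimeoutException | ExecutionException slow) {
                e.getValue().cancel(true);   // too slow or failed: back-fill via AJAX later
            }
        }
        pool.shutdownNow();
        return merged;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Callable<String>> sources = new LinkedHashMap<>();
        sources.put("db", () -> "local results");  // fast local DB stand-in
        sources.put("api", () -> { Thread.sleep(5000); return "external results"; });
        System.out.println(search(sources, 200));  // only the fast source makes the cut
    }
}
```

The page renders as soon as the budget expires, so the slowest external API no longer dictates your response time.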
I have two SQLite databases in my Android project: one stores users' profile details and the other stores booked appointments. Currently the two do not link, but they work fine. I might sound a bit stupid, but is it possible to connect the two, so that when someone clicks on the booked appointment of a certain user it goes to their profile? I cannot see how this is possible, and after spending quite a while on it I've not been proven right.
If it is possible, could I please get some pointers on how to go about it? I have unique IDs in both that increment, and the names would potentially be the same in both, but I have no measures in place to check for links, etc.
Sorry if I have not made myself clear; I am quite lost with this. Having spent several days on it, the only way I can think of doing it is redesigning the whole thing: instead of having separate databases, have one database for both, but only send the relevant content to the activities that need it, i.e. profile details to one and appointments to the other. I'm not sure whether this will work either, and I have already spent a very long time designing what I have!
I suggest exporting the table(s) from one database into the other; it is easy and, compared to what you are trying to do, a piece of cake. You can do this in SQL Server (I assumed that is your database) without having to code. But I stand corrected: if you only need some of your data, you must do it with queries.
I'm trying to integrate orders from Amazon Marketplace into our system. I did this before with Magento and thought it would be just as easy, but somehow I got stuck.
I downloaded the Java APIs from Amazon and started playing around with the examples.
So far so good - I was able to get them running.
But playing with the Reports API and the Orders API, I started to wonder which one to use if I only want to fetch the unshipped orders and put them into our system.
1. Doing this with the Reports API seems very complicated and involves a lot of calls to MWS. This is documented by Amazon here.
2. Using the Orders API seems pretty straightforward: I only have to create a ListOrdersRequest, define what type of orders I want, and finally get them via a ListOrders call.
So my question is: What is the reason to choose the Reports API over the Orders API?
Amazon seems to recommend the Reports API, but I really don't understand why it should be so complicated. Why should I fetch reports when I can get the orders directly?
Both approaches can work. Here's why I would pick the Reports API:
Reports are more scalable. I believe MWS reports can return an unlimited number of records, while ListOrders can return a maximum of 100 orders. You can get more using ListOrdersByNextToken, but that brings throttling into play, and it is not clear whether you're just paging by an offset (which could cause lost or duplicate orders) or iterating over a snapshot.
You can acknowledge reports and filter on unacknowledged reports. Orders can be acknowledged too, but I don't think there is a way of filtering ListOrders based on acknowledgement status.
Reports can be scheduled to auto-generate at an interval as short as every 15 minutes. This means it may not take as many calls as you think: really, it's only three per interval: one to list unacknowledged order reports, one to pull the report you want, and one to acknowledge it.
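That three-call cycle can be sketched as a simple polling loop. Note that `MwsReports` below is a hypothetical stand-in interface, not the real MWS client API (the actual class and operation names in Amazon's SDK differ); only the control flow per interval is the point:

```java
import java.util.*;

public class ReportPoller {

    // Hypothetical stand-in for the MWS client: the real SDK's class and
    // method names differ; only the three-step control flow is illustrated.
    interface MwsReports {
        List<String> listUnacknowledgedOrderReports(); // call 1: list report ids
        String getReport(String reportId);             // call 2: download one report
        void acknowledgeReport(String reportId);       // call 3: mark it processed
    }

    // One polling interval: list unacknowledged reports, import each, acknowledge it.
    static int pollOnce(MwsReports client, List<String> importedOrders) {
        int processed = 0;
        for (String id : client.listUnacknowledgedOrderReports()) {
            importedOrders.add(client.getReport(id)); // import into our own system
            client.acknowledgeReport(id);             // so it is never re-imported
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        List<String> pending = new ArrayList<>(List.of("R1", "R2"));
        MwsReports stub = new MwsReports() {  // in-memory stub instead of real MWS
            public List<String> listUnacknowledgedOrderReports() { return new ArrayList<>(pending); }
            public String getReport(String id) { return "orders from " + id; }
            public void acknowledgeReport(String id) { pending.remove(id); }
        };
        List<String> imported = new ArrayList<>();
        System.out.println(pollOnce(stub, imported) + " reports imported"); // 2 reports imported
    }
}
```

Because acknowledged reports drop out of the listing, a crashed run simply re-imports on the next interval instead of losing orders.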
I'm an Android developer at a small company in France. I've never faced this problem before now.
Explanation:
We have developed an Android application which has to work both with and without a network connection (implying both upward and downward synchronization).
The user has to log in, then we query the web service to access the information he needs to do his work. But he needs to fetch ~2500 rows (unmarshalled by Jackson into our objects). This synchronization takes nearly 3 minutes over 3G and more than 5 minutes over EDGE. He then does his work and sends the information back to the server when he regains a network connection.
MySQL and our web services respond in good time (~0.05 s per request for MySQL, and ~105 ms per request when accessing the web service from a web page). We currently need 10-15 requests to fetch all the needed information.
Is there any way to reduce this, or a solution to improve/refactor our coding methods?
In fact, I guess we didn't design the application the right way when I look at the Google Drive mobile or Facebook Messenger apps, which are really, really fast.
So I'm looking for a solution, especially since we have a client that will need ~50,000 rows per user in the next few months...
Thanks to all,
I have some issues with my part of a final year project. We are implementing a plagiarism detection framework, and I'm working on the internet-sources detection part. My internet search algorithm is currently complete, but I need to enhance it so that the internet search delay is reduced.
My idea is like this:
First, the user is prompted to insert some web links as the initial knowledge feed for the system.
Then it crawls through the internet and expands its knowledge.
Once the knowledge is fetched, the system doesn't need to query the internet again. Can someone give me some guidance on implementing this? We are using Java, but any abstract detail will surely help me.
If the server-side programming is in your hands, you can keep a table in the database with a boolean that shows whether the details were read before. Every time your client connects to the server, it checks the boolean first: if the boolean is false, updates need to be sent to the client; otherwise no updates are sent.
The boolean becomes true every time the client downloads any data from the server, and becomes false whenever the database is updated.
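That flag is essentially a per-client dirty bit. A minimal in-memory sketch of the protocol (in a real system the flag would be a database column per client, and all names here are invented for illustration):

```java
import java.util.Optional;

public class SyncFlag {
    private boolean upToDate = false;  // the boolean column from the answer above
    private String data = "initial";

    // Server side: called whenever the database row changes.
    synchronized void update(String newData) {
        data = newData;
        upToDate = false;              // flag goes false on every update
    }

    // Client connect: send data only if the client has not seen it yet.
    synchronized Optional<String> fetchIfStale() {
        if (upToDate) return Optional.empty();  // nothing new to send
        upToDate = true;               // flag goes true once the client downloads
        return Optional.of(data);
    }

    public static void main(String[] args) {
        SyncFlag row = new SyncFlag();
        System.out.println(row.fetchIfStale()); // Optional[initial]
        System.out.println(row.fetchIfStale()); // Optional.empty
        row.update("changed");
        System.out.println(row.fetchIfStale()); // Optional[changed]
    }
}
```

A timestamp or version number works the same way and additionally lets the client fetch only the rows changed since its last sync, instead of everything.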
I'm not quite sure that I understand what you're asking. Anyway:
if you're looking for a Java web crawler, then I recommend that you read this question
if you're looking for Java libraries to build a knowledge base (KB), then it really depends on (1) what kind of properties your KB should have, and (2) what kind of reasoning capabilities you expect from it. One option is to use the Jena framework, but this requires that you be comfortable with Semantic Web formalisms.
Good luck!
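For the crawling part specifically, the core of any crawler is a frontier queue plus a visited set; for a first prototype, links can be extracted with a regex, though a real crawler should use an HTML parser such as jsoup. A sketch with an injectable fetcher so it runs without touching the network (the tiny in-memory "web" in `main` is made up):

```java
import java.util.*;
import java.util.function.Function;
import java.util.regex.*;

public class MiniCrawler {
    // Naive link extraction; fine for a prototype, not for arbitrary HTML.
    private static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");

    // BFS over pages: 'fetch' maps a URL to its HTML (e.g. an HTTP client).
    static Set<String> crawl(String seed, Function<String, String> fetch, int maxPages) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> frontier = new ArrayDeque<>(List.of(seed));
        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;           // already seen this page
            Matcher m = LINK.matcher(fetch.apply(url));
            while (m.find()) frontier.add(m.group(1)); // enqueue outgoing links
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, String> web = Map.of(  // stand-in for real HTTP fetches
            "http://a", "<a href=\"http://b\">b</a> <a href=\"http://c\">c</a>",
            "http://b", "<a href=\"http://a\">back</a>",
            "http://c", "no links here");
        System.out.println(crawl("http://a", u -> web.getOrDefault(u, ""), 10));
    }
}
```

The `maxPages` bound and the visited set are what keep the expansion finite; persisting `visited` plus the fetched text gives you the "no need to query the internet again" knowledge store described in the question.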
I'm looking for an elegant way to create a queue for serving up batch-compiled applets.
I have hacked together a SQL-and-PHP script to handle this, but it chokes under mild load.
Is there an existing system that can take a list from SQL and serve the applets in descending order each time one is requested? I'm also trying to handle all of this server-side.
The trick would be getting file001, then file002, etc., served each time a web page is loaded. I'm batch-creating applets that each have a slightly modified background, and I'm trying to serve a never-been-used applet waiting in the queue each time a page is requested.
Is there an applet server I can tweak, or does this look like something that needs to be built?
No, I have never heard of a "batch compile applet server".
Could you maybe explain in more detail why you feel this is necessary?
Why don't you just use the same class and pass parameters to it?
That said, you can do compilation on demand quite well with e.g. Ant and/or CruiseControl. You could put the pre-compiled applets into a directory; then your PHP frontend just needs to keep track of which applet it delivered last and fetch the next one on the next request.
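Tracking "which applet was delivered last" under concurrent page loads is exactly where a hand-rolled SQL/PHP script tends to choke: two simultaneous requests can both read the same counter. The claim must be atomic, whether via an atomic `UPDATE` in SQL or, in a Java server, an `AtomicLong`. A sketch of the latter (the file-name scheme is taken from the question, the rest is invented):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AppletQueue {
    private final AtomicLong next = new AtomicLong(1);

    // Each request atomically claims the next pre-compiled applet, so two
    // concurrent page loads can never be handed the same file.
    String claimNext() {
        return String.format("file%03d.jar", next.getAndIncrement());
    }

    public static void main(String[] args) {
        AppletQueue q = new AppletQueue();
        System.out.println(q.claimNext()); // file001.jar
        System.out.println(q.claimNext()); // file002.jar
    }
}
```

If you stay with SQL/PHP, the equivalent is a single statement that increments and reads the counter in one round trip, rather than a SELECT followed by an UPDATE.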
Still, this sounds rather complicated to me; so maybe you could explain your motivation.
In particular, why do you want a new applet on every reload? Would that not be rather confusing? Why not offer a link for each variant, so the user can choose?