What is the best way to post large data for processing within enterprise applications?
Data will be of size up to 1 GB.
Data, when consumed, can be removed (if processed); it need not be persisted.
Can we look at JMS technologies or a Kafka cluster to receive and distribute the data? The data has to be consumed fully, exactly once, and can't be shared (partitioned) across multiple consumers.
What other options could be explored?
Apache Kafka is designed to handle real-time streams of data rather than large one-off data transfers. Also, messages are not deleted on consumption; the consumer just commits its offset. Exactly-once processing would need to be implemented by you, and Kafka by itself is not JTA-aware. I would recommend against using large message sizes.
No matter what other queuing technology you use, you will need message sizes smaller than 1 GB (i.e., you will need to chunk your data and reassemble it, or make your processing stream-like instead of bulk). A sketch of the chunking approach follows.
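Broker-agnostic, a minimal sketch of that chunking idea might look like this; the 1 MB chunk size is an illustrative assumption, not a recommendation for any particular broker:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PayloadChunker {

    // Illustrative chunk size; real limits depend on your broker's configuration.
    private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB

    // Producer side: split the large payload into broker-friendly chunks.
    public static List<byte[]> split(InputStream in) throws IOException {
        List<byte[]> chunks = new ArrayList<>();
        byte[] buffer = new byte[CHUNK_SIZE];
        int read;
        // readNBytes blocks until the buffer is full or the stream ends.
        while ((read = in.readNBytes(buffer, 0, CHUNK_SIZE)) > 0) {
            chunks.add(Arrays.copyOf(buffer, read));
        }
        return chunks;
    }

    // Consumer side: write chunks back out in order; streaming to an
    // OutputStream avoids holding the whole 1 GB payload in memory at once.
    public static void reassemble(List<byte[]> chunks, OutputStream out) throws IOException {
        for (byte[] chunk : chunks) {
            out.write(chunk);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "pretend this is a very large payload".getBytes();
        List<byte[]> chunks = split(new ByteArrayInputStream(payload));
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        reassemble(chunks, restored);
        System.out.println(restored.toString());
    }
}
```

In practice each chunk would also carry a correlation ID, a sequence number, and a total count, so the consumer can detect missing or out-of-order chunks before reassembling.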
I'm looking for the optimal way to handle the following scenario, preferably with an existing implementation for something like this.
There are two systems (Spring Boot web apps) that are supposed to communicate with each other through a REST gateway. Transfers shall be handled with JSON over HTTP.
The first system is the Projects part, the second is the Persons part, and each implements its own persistent SQL database. Both systems depend on each other's data, and it cannot be expected that both systems are online at all times.
What would be the best way to ensure that both systems' data is always up to date? Is there any plugin you could recommend to handle the synchronization process that also covers scenarios like one system shutting down while sending, or the other way round?
Thanks in advance!
If you can't expect both systems to be online at all times, and you don't want any downtime when one of them is down, I think the best way is to share a common database. This has some problems of its own, and you should consider whether it's worth it; you might be better off with two completely independent services that rely on each other, and being ready to replicate one of them if needed.
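If you do go the shared-database route, the wiring in each Spring Boot app can be as simple as pointing both at the same instance. A minimal sketch using HikariCP (Spring Boot's default connection pool), with a hypothetical PostgreSQL host and placeholder credentials:

```java
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SharedDataSourceConfig {

    // Both the Projects app and the Persons app declare the same connection,
    // so each one sees the other's writes without any synchronization protocol.
    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://shared-db-host:5432/appdata"); // hypothetical host
        config.setUsername("app");    // placeholder
        config.setPassword("secret"); // placeholder
        return new HikariDataSource(config);
    }
}
```

The trade-off is that the database itself becomes the single point of failure, which is part of why the two-independent-services option may still be worth considering.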
I have a question about messaging systems.
There are two Java applications, A and B. A runs constantly and checks a resource. In some cases it needs to notify B to start. It seems there will be no need to scale this messaging up later: there will always be just these two components.
What is the most elegant way to implement this? JMS? Spring Integration somehow?
Any other options?
Do I understand correctly that in any case B needs to busy-wait?
IMO it is better to use Apache ActiveMQ. It is open source and supports JMS 1.1 and J2EE 1.4.
Since you are using two applications, A can add a message to an ActiveMQ queue and B can listen on that queue. Once B receives a message, it can perform whatever operations you require. B does not need to busy-wait: it can register an asynchronous listener, as in the sketch below.
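A minimal sketch of that setup with the ActiveMQ client library; the broker URL and queue name are placeholders:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class StartSignal {

    private static final String BROKER_URL = "tcp://localhost:61616"; // placeholder broker address
    private static final String QUEUE_NAME = "start.signal";          // hypothetical queue name

    // Application A: fire the notification and close the connection.
    public static void notifyB() throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue(QUEUE_NAME);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("start"));
        } finally {
            connection.close();
        }
    }

    // Application B: no busy-waiting; the JMS provider invokes the listener.
    public static void listenForStart() throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(QUEUE_NAME));
        consumer.setMessageListener(message -> System.out.println("Received start signal, beginning work"));
    }
}
```

The queue also decouples the two lifecycles: if B is down when A sends, the message waits on the broker until B reconnects.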
My client program wants to send a huge file to the server, and in return the server returns a file two or three times that size.
My question is: which approach should I use, TCP or UDP?
You could utilize FTP (File Transfer Protocol) for your use case. It is very common, and you can use it from Java to download files from or upload files to an FTP server.
Also take a look at this question on SO: File upload in Java through FTP
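For reference, a minimal upload sketch with Apache Commons Net (one common Java FTP client); the host, credentials, and file name are placeholders:

```java
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUpload {
    public static void main(String[] args) throws IOException {
        FTPClient ftp = new FTPClient();
        try {
            ftp.connect("ftp.example.com");        // placeholder host
            ftp.login("user", "pass");             // placeholder credentials
            ftp.setFileType(FTP.BINARY_FILE_TYPE); // binary mode, so arbitrary files survive intact
            ftp.enterLocalPassiveMode();           // usually needed behind firewalls/NAT
            try (FileInputStream in = new FileInputStream("huge-input.bin")) {
                ftp.storeFile("huge-input.bin", in); // upload; retrieveFile() goes the other way
            }
            ftp.logout();
        } finally {
            if (ftp.isConnected()) {
                ftp.disconnect();
            }
        }
    }
}
```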
If you still want to implement it yourself, I would recommend using TCP, since it offers you some services (see the sketch after this list):
Ordered data transfer: the destination host rearranges segments according to their sequence numbers
Retransmission of lost packets: any unacknowledged part of the stream is retransmitted
Error-free data transfer
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Data_transfer
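To make the TCP option concrete, here is a minimal sketch of a client streaming a file to a server and receiving the response file back over the same socket; the host, port, and file names are placeholders, and real code would add framing so each side knows how large each file is:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.Socket;

public class FileExchangeClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000); // placeholder host/port
             FileInputStream fileIn = new FileInputStream("huge-input.bin");
             FileOutputStream fileOut = new FileOutputStream("result.bin")) {

            // Stream the file to the server; TCP guarantees ordering and
            // retransmits lost segments, so no manual reliability layer is needed.
            fileIn.transferTo(socket.getOutputStream());
            socket.shutdownOutput(); // signal end-of-file to the server

            // Read the (larger) response file the server streams back.
            socket.getInputStream().transferTo(fileOut);
        }
    }
}
```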
This question is too broad, but the answer is probably TCP; if you're needing to transfer a file, TCP provides ordering and retransmission services that UDP doesn't, and there's no reason to reinvent the wheel.
Along those lines, though, why reinvent HTTP? This sounds like a classic case for using a Web server.
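With Java 11's built-in HttpClient, for example, the whole exchange collapses to a few lines; the endpoint URL and file names here are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class HttpFileExchange {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/process")) // hypothetical endpoint
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("huge-input.bin")))
                .build();
        // Stream the server's (larger) response straight to disk instead of buffering it.
        HttpResponse<Path> response =
                client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("result.bin")));
        System.out.println("Status " + response.statusCode() + ", saved to " + response.body());
    }
}
```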
You could use UDP, but it will be difficult to implement reliably.
What I am trying to do is cache all the data that I have written into MongoDB, so that all client requests are served from the cache. Should I consider Ehcache or Memcached?
Note that the MongoDB data is queried a lot, which is why I thought to cache all of it at server start time; no writes are permitted to this data. I am using Java for the application.
It makes very little sense to use a cache in front of MongoDB if you are using it for reads only. An extra cache is just going to take up more memory. MongoDB uses memory-mapped files, and the operating system will keep the most requested data in memory. If all of your data fits in memory, then MongoDB will return all the documents straight from it - just like an additional cache would.
I'm developing a Java application that gathers data from the Internet and creates charts and plots from it.
I need 2 tables (each about 50 columns), and each can gain up to 2000 rows of data per day (maybe fewer, but not more).
Currently I'm using SQLite, but I doubt it will be able to handle six months' worth of data.
Which DBMS would you suggest I use?
Thanks
You probably won't have to worry about this. Any major DBMS should be able to handle that much data. Use whatever you feel most comfortable with.
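For scale: 2000 rows per day over six months is roughly 2000 × 183 ≈ 366,000 rows per table, which is a small table by any modern DBMS's standards; even SQLite routinely handles far more than that.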
Unless you've got corporate reasons to choose otherwise, MySQL will do the job for you. 2000 rows a day won't tax it in the slightest.
You can use any DBMS of your choice; all of them support and can handle large amounts of data. You do not have to worry about this: any DBMS is built to handle such a task effectively and efficiently.