Spring Boot REST API - input file + endpoints - Java

I have a general question about the architecture I should use for my specific problem.
I have a .TSV file with some information, and my task is to create a REST API app that consumes this .TSV file and exposes 3 REST API endpoints. Each endpoint will return JSON data I processed from the .TSV file.
My question is: should I create a POST method that uploads the TSV file, save it (e.g. to the session), and then do the logic using the API endpoints?
Or should I POST the content of the TSV file as JSON in every request to the specific endpoint?
I don't know how to glue it all together.
There is no requirement for a DB. The program will be tested just with numerous requests through the API, and I don't know how to process or store the .TSV content in my app so that one user could call all three endpoints sequentially over the same data without re-uploading the TSV file.

It's better to upload the file once and then do the processing on the server. The file is uploaded in a single request, which is better than sending its content along with multiple requests.
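As a rough illustration of that approach, here is a minimal sketch of a Spring Boot controller that accepts the TSV upload once, keeps the parsed rows in memory under a generated ID, and lets the processing endpoints work on that parsed data. The class name, the in-memory map, and the example /row-count endpoint are assumptions for illustration, not part of the question:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

@RestController
@RequestMapping("/api")
public class TsvController {

    // uploaded files parsed into rows, keyed by a generated upload ID (in-memory, single instance only)
    private final Map<String, List<String[]>> uploads = new ConcurrentHashMap<>();

    @PostMapping("/upload")
    public String upload(@RequestParam("file") MultipartFile file) throws Exception {
        List<String[]> rows = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(file.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                rows.add(line.split("\t", -1)); // TSV: split on tabs, keep empty columns
            }
        }
        String id = UUID.randomUUID().toString();
        uploads.put(id, rows);
        return id; // the client passes this ID to the processing endpoints
    }

    @GetMapping("/{id}/row-count")
    public Map<String, Integer> rowCount(@PathVariable String id) {
        // one example processing endpoint; the real three endpoints would do the required logic
        return Map.of("rows", uploads.getOrDefault(id, List.of()).size());
    }
}
```

With this shape, the client uploads the file once, gets an ID back, and then calls the three real endpoints with that ID over the same data without re-uploading the file. The trade-off is the one raised below: the data lives in one server's memory, so it does not scale beyond a single instance.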

I believe the solution will depend on the size of the file. Storing the file in memory may not be a good approach if the file is very large. Saving the file in a session may also be a problem, because if you need to scale your service in the future, you will not be able to. Even storing the file in a /tmp directory can be a bad approach, because the solution is still not scalable.
It would be a good idea to use a storage service like AWS S3, Google Firebase, or anything similar. When one of your three REST endpoints is called, your application checks whether that file has already been processed; if not, it reads the file, does whatever processing you want, and saves the result to your S3 bucket (if you don't want to keep the processed files, you can use a retention policy on S3 to delete them after X period of time).
Only after this do you return the result. As you can see, this is a synchronous solution.
If the file processing needs a lot of CPU and takes a long time, you will need an asynchronous solution. In that case, instead of processing the files directly when the REST API is called, you create another application that reads the file from S3, processes it, and saves the result, all asynchronously. Your REST API then only fetches the processed file from S3 and returns it.
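A minimal sketch of the synchronous variant described above, assuming the AWS SDK for Java v1; the bucket name, key layout, and the processTsv placeholder are illustrative assumptions, not part of the answer:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.util.IOUtils;

public class S3BackedProcessing {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    private final String bucket = "my-tsv-bucket"; // hypothetical bucket name

    /** Returns the processed result for the given upload, processing it on first access. */
    public String getProcessedResult(String uploadKey) throws Exception {
        String processedKey = "processed/" + uploadKey;
        if (!s3.doesObjectExist(bucket, processedKey)) {
            // first request for this file: read the raw TSV, process it, store the result
            String rawTsv = IOUtils.toString(s3.getObject(bucket, uploadKey).getObjectContent());
            s3.putObject(bucket, processedKey, processTsv(rawTsv));
        }
        return IOUtils.toString(s3.getObject(bucket, processedKey).getObjectContent());
    }

    private String processTsv(String rawTsv) {
        // stand-in for whatever the three endpoints actually compute
        return "{\"rows\": " + rawTsv.split("\n", -1).length + "}";
    }
}
```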

Related

Download huge data using StreamingOutput available in JAX-RS

I have a requirement where a large volume [approx. 20,000+ records] of employee details and their respective contact details must be downloaded as one spreadsheet. The current system is not capable of handling this much data and responds with an internal server error. Since my application is built with Java + JAX-RS, I am working with StreamingOutput and binding the data to one spreadsheet. What is the best way to handle such a request? How can I break it into chunks and finally concatenate them to create one single file?
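One way to sketch the StreamingOutput idea is to fetch and write the records page by page, so the full result never sits in memory at once. The resource below writes CSV for simplicity (a real spreadsheet would typically go through a streaming writer such as Apache POI's SXSSFWorkbook); the paging helper and page size are assumptions for illustration:

```java
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;

@Path("/employees")
public class EmployeeExportResource {

    @GET
    @Path("/export")
    @Produces("text/csv")
    public Response export() {
        StreamingOutput stream = output -> {
            try (Writer writer = new BufferedWriter(
                    new OutputStreamWriter(output, StandardCharsets.UTF_8))) {
                writer.write("name,phone\n");
                int page = 0;
                List<String[]> batch;
                // fetch and write 1000 records at a time so memory use stays flat
                while (!(batch = findEmployeePage(page++, 1000)).isEmpty()) {
                    for (String[] row : batch) {
                        writer.write(String.join(",", row));
                        writer.write('\n');
                    }
                    writer.flush(); // push the chunk to the client before fetching the next page
                }
            }
        };
        return Response.ok(stream)
                .header("Content-Disposition", "attachment; filename=\"employees.csv\"")
                .build();
    }

    private List<String[]> findEmployeePage(int page, int size) {
        // placeholder: a real implementation would query the employee/contact tables with paging
        return List.of();
    }
}
```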

How to retrieve files from Amazon EMR?

My Apache Spark application takes various input files and stores the results and logs in other files. The input files are provided along with the application which is supposed to run on the Amazon cloud (EMR seemed preferable to EC2).
Now, I know that I'm supposed to create an uber-jar containing my input files and the application that accesses them. However, how do I retrieve the generated files from the cloud, once the execution finishes?
As additional information, the files are created and written using relative paths in the code.
Assuming you mean that you want to access the output generated by the Spark application outside the cluster, the usual thing to do is to write it to S3. Then you may of course read the data directly from S3 from outside the EMR cluster.
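As a rough sketch of that, the Spark job would read and write S3 locations instead of relative local paths; on EMR the s3:// scheme is handled by EMRFS. The bucket and paths below are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class S3OutputJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("S3OutputJob").getOrCreate();

        // read the input from S3 instead of a relative local path (bucket/paths are placeholders)
        Dataset<Row> input = spark.read().option("header", "true").csv("s3://my-bucket/input/data.csv");

        // ... whatever transformations the application performs ...
        Dataset<Row> results = input;

        // write results (and anything else you need to keep) to S3 so they can be
        // fetched from outside the cluster after the EMR job finishes
        results.write().mode("overwrite").csv("s3://my-bucket/output/run-1/");

        spark.stop();
    }
}
```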

How to download a file in chunks using Camel?

I want to download a very large file using camel, but I don't want to hold the entire file in memory and THEN save it to file.
I want to stream the file in and save or write to a file in chunks.
Is this possible with Camel, and if so, how do I do this?
Note: is it possible that the endpoint I am downloading the file from does not support streaming/chunking? If yes, how can I verify this?
Camel's HTTP component uses Netty to make the request. Netty reads the entire response into memory, so there is no way to do what you are asking for.
You would need to implement your own endpoint for Camel that utilizes another HTTP library which has support for HTTP response streaming.
More documentation is available here:
https://cwiki.apache.org/confluence/display/CAMEL/Netty4+HTTP
You have 3 options to download the file, i.e. using:
ftp://[username@]hostname[:port]/directoryname[?options]
sftp://[username@]hostname[:port]/directoryname[?options]
ftps://[username@]hostname[:port]/directoryname[?options]
There is a streamDownload option for these components.
For more check out http://camel.apache.org/ftp.html
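For illustration, a minimal route using the FTP component with streamDownload enabled might look like the sketch below; the host, credentials, and directories are placeholders, not values from the answer:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FtpStreamDownloadExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // host, credentials and directories are placeholders
                from("ftp://user@ftp.example.com/downloads?password=secret"
                        + "&streamDownload=true"        // stream the remote file instead of loading it into memory
                        + "&localWorkDirectory=/tmp")   // spool to a temporary file while downloading
                    .to("file:/data/incoming");         // write the downloaded file to a local directory
            }
        });
        context.start();
        Thread.sleep(60_000); // keep the JVM alive long enough for the route to poll and download
        context.stop();
    }
}
```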

Polling data from REST API to HDFS

I have a blog that offers a REST API to download data. The API gives the list of topics (in JSON). It's possible to iterate over the list to download the messages of each topic. I want to download all messages of the forum every day and store them in HDFS.
I was thinking about writing a Java program that calls the API to get the data and stores it on HDFS using the Hadoop API. I could run the Java program within a daily Oozie batch.
Is there a better way of doing this? Maybe store the data on the local file system and put the file on HDFS at the end. I was wondering whether Flume could be used in this case and what its added value would be.
Thanks in advance
This seems to be quite a "simple" program. You can use any language/tool to read JSON from a REST API and then upload the content to HDFS.
You also need a scheduler to schedule the job.
Oozie with a Java or shell action provides better tracking in terms of job history. I would go for this if Oozie is already available.
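A minimal sketch of such a program, using the Hadoop FileSystem API to stream the REST response straight into an HDFS file; the API URL and HDFS path are placeholders, and this would be the body of the daily Oozie Java action:

```java
import java.io.InputStream;
import java.net.URI;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class RestToHdfs {
    public static void main(String[] args) throws Exception {
        String apiUrl = "https://blog.example.com/api/topics";  // placeholder REST endpoint
        String hdfsTarget = "hdfs://namenode:8020/data/blog/topics-" + System.currentTimeMillis() + ".json";

        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(URI.create(hdfsTarget), conf);
             InputStream in = new URL(apiUrl).openStream();
             FSDataOutputStream out = fs.create(new Path(hdfsTarget))) {
            // stream the HTTP response directly into the HDFS file, 4 KB at a time
            IOUtils.copyBytes(in, out, 4096, false);
        }
    }
}
```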

Creating a Listening Service In Java

I have a webapp with an architecture I'm not thrilled with. In particular, I have a servlet that handles a very large file upload (via commons-fileupload), then processes the file, passing it to a service/repository layer.
What has been suggested to me is that I simply have my servlet upload the file, and a service on the backend do the processing. I like the idea, but I have no idea how to go about it. I do not know JMS.
Other details:
- App is a GWT app split into the recommended client/server/shared subpackages, using an MVP architecture.
- Currently, I am only running in GWT hosted mode, but am planning to move to Tomcat in the very near future.
I'm perfectly willing to learn whatever I need to in order to get this working (in fact, that's the point of writing the app). I'm not expecting anyone to write code for me, but can someone point me in the right direction to get started?
There are many options for this scenario, but the simplest may be to just copy the uploaded file to a known location on the file system and have a background daemon monitor that location and process the file when it finds it.
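A minimal sketch of that background daemon using Java NIO's WatchService; the directory and the processing step are placeholders, not part of the answer:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class UploadDirectoryWatcher {
    public static void main(String[] args) throws Exception {
        Path uploadDir = Paths.get("/var/uploads"); // the "known location" the servlet copies files to
        WatchService watcher = FileSystems.getDefault().newWatchService();
        uploadDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something appears in the directory
            for (WatchEvent<?> event : key.pollEvents()) {
                Path newFile = uploadDir.resolve((Path) event.context());
                // note: the create event can fire before the upload is fully written,
                // so a real daemon should wait until the file stops growing before processing
                process(newFile);
            }
            key.reset();
        }
    }

    private static void process(Path file) {
        // placeholder for handing the uploaded file to the service/repository layer
        System.out.println("Processing " + file);
    }
}
```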
@Jason, there are many ways to solve your problem.
i) Dump your file data into a database column of type BLOB and have a DB polling thread (running at some interval) poll the table for newly inserted files.
ii) Dump the file into the file system and have a file monitoring process.
The benefit of i) over ii) is that the DB is a centralized and fast resource, whereas file systems are generally slow and non-centralized in nature.
So basically the servlet would dump either to the DB or to the file system. Now, about who will process that dumped file: a) it could be a monitoring process as discussed above, or b) you can use JMS, which is asynchronous in nature, meaning the servlet would put a trigger event in a queue that asynchronously triggers a new processing thread.
That said, don't introduce JMS into your system unnecessarily if you are OK with a monitoring process.
This sounds interesting and familiar to me :). We do it in a similar way.
We have four projects, and all four include file upload and file processing (image/video/PDF/docs) etc. So we created a single project to handle all file processing; it works something like this:
All four projects and the File Processor use Amazon S3/our file storage for file storage, so storage is shared among all five projects.
We make a request to the File Processor, providing details in XML via an HTTP request, including the file path on S3/storage, AWS authentication details, and file conversion/processing parameters. The File Processor does the processing, puts the processed files on S3/storage, constructs XML with the processed files' details, and sends the XML back in the response.
We use the Spring Framework and Tomcat.
Since this is foremost a learning exercise, you need to pick an easy to use JMS provider. This discussion suggested FFMQ just one year ago.
Since you are starting with a simple processor, you can keep it simple and use a JMS Queue.
In the simplest form, each message sent by the servlet has to correspond to a single job. You can either put the entire payload of the upload in the message, or just send a filename as a reference to the content. These are details you can refactor later.
On the processor side, if you are using Java EE, you can use a MessageBean. If you are not, then I would suggest a 3-JVM solution -- one each for Tomcat, the JMS server, and the message processor. This article includes the basics of a message-consuming client.
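As a hedged sketch of that simplest form, the servlet side could publish the uploaded file's path to a queue and the processor JVM could consume it roughly as below. ActiveMQ is used here only as an example broker (the answer above mentions FFMQ), and the broker URL, queue name, and placeholder processing are assumptions:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class UploadJobQueue {

    private static final String BROKER_URL = "tcp://localhost:61616"; // placeholder broker address
    private static final String QUEUE_NAME = "upload.jobs";

    /** Servlet side: publish a job for each uploaded file, referencing it by path. */
    public static void publish(String uploadedFilePath) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue(QUEUE_NAME));
            // send only a reference to the file; the processor JVM reads it from shared storage
            producer.send(session.createTextMessage(uploadedFilePath));
        } finally {
            connection.close();
        }
    }

    /** Processor side (separate JVM): consume jobs and hand them to the service layer. */
    public static void listen() throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(QUEUE_NAME);
        session.createConsumer(queue).setMessageListener(message -> {
            try {
                String path = ((TextMessage) message).getText();
                System.out.println("Processing upload at " + path); // placeholder for real processing
            } catch (javax.jms.JMSException e) {
                e.printStackTrace();
            }
        });
    }
}
```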
