I have a requirement where a huge volume [approx. 20000+ records] of employee details and their respective contact details must be downloaded in one spreadsheet. The current system cannot handle this much data and responds with an internal server error. Since my application is built with Java + JAX-RS, I am working with StreamingOutput and binding the data to one spreadsheet. What is the best way to handle such a request? How can I break it into chunks and finally concatenate them into one single file?
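For context, here is a minimal sketch of the direction I am exploring: JAX-RS StreamingOutput combined with Apache POI's streaming SXSSFWorkbook (POI is my assumption for the spreadsheet library; fetchEmployees() is a placeholder for the real DAO call). The streaming workbook keeps only a small window of rows in memory, so 20000+ rows never sit on the heap at once:

    import java.util.Arrays;
    import java.util.List;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Response;
    import javax.ws.rs.core.StreamingOutput;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.xssf.streaming.SXSSFWorkbook;

    @Path("/employees")
    public class EmployeeExportResource {

        @GET
        @Produces("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
        public Response export() {
            StreamingOutput stream = output -> {
                // keep only 100 rows in memory; older rows are flushed to a temp file
                try (SXSSFWorkbook workbook = new SXSSFWorkbook(100)) {
                    Sheet sheet = workbook.createSheet("Employees");
                    int rowNum = 0;
                    for (String[] employee : fetchEmployees()) { // placeholder DAO call
                        Row row = sheet.createRow(rowNum++);
                        row.createCell(0).setCellValue(employee[0]); // name
                        row.createCell(1).setCellValue(employee[1]); // contact
                    }
                    workbook.write(output); // streams the workbook to the client
                }
            };
            return Response.ok(stream)
                    .header("Content-Disposition", "attachment; filename=\"employees.xlsx\"")
                    .build();
        }

        private List<String[]> fetchEmployees() {
            // placeholder: fetch from the service/DAO layer, ideally page by page
            return Arrays.asList(new String[] {"Jane Doe", "jane@example.com"});
        }
    }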
My Apache Spark application takes various input files and stores the results and logs in other files. The input files are provided along with the application which is supposed to run on the Amazon cloud (EMR seemed preferable to EC2).
Now, I know that I'm supposed to create an uber-jar containing my input files and the application that accesses them. However, how do I retrieve the generated files from the cloud, once the execution finishes?
As additional info, the files are created and written using relative paths from the code.
Assuming you mean that you want to access the output generated by the Spark application outside the cluster, the usual thing to do is to write it to S3. Then you may of course read the data directly from S3 from outside the EMR cluster.
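For example, a minimal sketch (the bucket name, paths, and the choice of Parquet are placeholders; on EMR the s3:// scheme is handled by EMRFS):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ResultWriter {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("MyJob").getOrCreate();

            // read input (path is a placeholder)
            Dataset<Row> results = spark.read().json("s3://my-bucket/input/");

            // write the output to S3 instead of a path relative to the driver;
            // anything written to S3 survives the cluster and can be read externally
            results.write().parquet("s3://my-bucket/output/");

            spark.stop();
        }
    }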
I want to download a very large file using Camel, but I don't want to hold the entire file in memory and THEN save it to a file.
I want to stream the file in and save or write to a file in chunks.
Is this possible with Camel, and if so, how do I do this?
Note: is it possible that the endpoint I am downloading the file from does not support streaming/chunking? If so, how can I verify this?
Camel's HTTP component uses Netty to make the request. Netty reads the entire response into memory, so there is no way to do what you are asking for.
You would need to implement your own endpoint for Camel that utilizes another HTTP library which has support for HTTP response streaming.
More documentation is available here:
https://cwiki.apache.org/confluence/display/CAMEL/Netty4+HTTP
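As a sketch of what such a custom endpoint could delegate to, here is plain Apache HttpClient (4.x) streaming a response body to disk in small chunks; the URL and target path are placeholders:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import org.apache.http.HttpEntity;
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    public class StreamingDownload {
        public static void main(String[] args) throws Exception {
            try (CloseableHttpClient client = HttpClients.createDefault();
                 CloseableHttpResponse response =
                         client.execute(new HttpGet("https://example.com/big.bin"))) {
                HttpEntity entity = response.getEntity();
                try (InputStream in = entity.getContent()) {
                    // copies in small buffered chunks; the full file is never in memory
                    Files.copy(in, Paths.get("/tmp/big.bin"),
                            StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }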
You have 3 options to download the file, i.e. using:
ftp://[username@]hostname[:port]/directoryname[?options]
sftp://[username@]hostname[:port]/directoryname[?options]
ftps://[username@]hostname[:port]/directoryname[?options]
There is a streamDownload option on these endpoints.
For more check out http://camel.apache.org/ftp.html
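A minimal route sketch (host, credentials, and directories are placeholders; streamDownload is typically combined with stepwise=false):

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class FtpStreamDownload {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // streamDownload=true streams the remote file to the route
                    // instead of reading it fully into memory first
                    from("ftp://user@ftp.example.com/downloads"
                            + "?password=secret&streamDownload=true&stepwise=false")
                        .to("file:/tmp/downloads");
                }
            });
            context.start();
            Thread.sleep(60_000); // give the consumer time to poll, then shut down
            context.stop();
        }
    }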
I have a blog that offers a REST API to download data. The API gives the list of topics (in JSON). It's possible to iterate on the list to download the messages of each topic. I want to download all messages of the forum every day and store them in HDFS.
I was thinking about writing a Java program that calls the API to get the data and stores it on HDFS using the Hadoop API. I can run the Java program within a daily Oozie batch.
Is there a better way of doing this? Maybe store the data on the local file system and put the file on HDFS at the end. I was wondering if Flume could be used in this case and what its added value would be?
Thanks in advance
This seems to be a fairly "simple" program. You can use any language/tool to read JSON from a REST API and then upload the content to HDFS.
And you also need a scheduler to schedule the job.
With Oozie + a Java/shell action, you get better tracking in terms of job history. I would go for this if Oozie is already available.
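For example, the Java action could be a minimal sketch like this (the API URL and HDFS path are placeholders; assumes fs.defaultFS is configured on the cluster):

    import java.io.InputStream;
    import java.net.URI;
    import java.net.URL;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class BlogToHdfs {
        public static void main(String[] args) throws Exception {
            URL api = new URL("https://blog.example.com/api/topics"); // placeholder
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(URI.create("hdfs:///"), conf);
                 InputStream in = api.openStream();
                 FSDataOutputStream out = fs.create(
                         new Path("/data/blog/topics-" + System.currentTimeMillis() + ".json"))) {
                // stream the JSON straight into HDFS, no local staging file needed
                IOUtils.copyBytes(in, out, 4096, false);
            }
        }
    }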
I have a webapp with an architecture I'm not thrilled with. In particular, I have a servlet that handles a very large file upload (via commons-fileupload), then processes the file, passing it to a service/repository layer.
What has been suggested to me is that I simply have my servlet upload the file, and have a service on the backend do the processing. I like the idea, but I have no idea how to go about it. I do not know JMS.
Other details:
- App is a GWT app split into the recommended client/server/shared subpackages, using an MVP architecture.
- Currently, I am only running in GWT hosted mode, but am planning to move to Tomcat in the very near future.
I'm perfectly willing to learn whatever I need to in order to get this working (in fact, that's the point of writing the app). I'm not expecting anyone to write code for me, but can someone point me in the right direction to get started?
There are many options for this scenario, but the simplest may be just copying the uploaded file to a known location on the file system, and having a background daemon monitor that location and process files as they appear.
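A minimal sketch of such a daemon using the JDK's WatchService (the drop directory is a placeholder):

    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;

    public class UploadWatcher {
        public static void main(String[] args) throws Exception {
            // the directory your servlet writes uploads into (placeholder)
            Path uploads = Paths.get("/var/app/uploads");
            WatchService watcher = FileSystems.getDefault().newWatchService();
            uploads.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
            while (true) {
                WatchKey key = watcher.take(); // blocks until an event arrives
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path file = uploads.resolve((Path) event.context());
                    System.out.println("Processing " + file);
                    // hand the file to your processing logic here
                }
                key.reset();
            }
        }
    }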
@Jason, there are many ways to solve your problem.
i) Dump your file data into a database column of type BLOB, and have a DB polling thread poll the table (after a particular time period) for newly inserted files.
ii) Dump the file into the file system and have a file monitoring process.
The benefit of i) over ii) is that a DB is a centralized and fast resource, whereas file systems are generally slow and non-centralized in nature.
So basically the servlet would dump either to the DB or to the file system. Now, about who will process that dumped file: a) it could be a monitoring process as discussed above, or b) you can use JMS, which is asynchronous in nature, meaning the servlet would put a trigger event in a queue, which asynchronously triggers a new processing thread.
Don't introduce JMS into your system unnecessarily if you are OK with a monitoring process.
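If you do go with JMS, the servlet-side trigger can be as small as this sketch (JMS 1.1 API; the queue name and factory wiring are placeholders):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class UploadNotifier {
        private final ConnectionFactory factory; // obtained via JNDI or the provider API

        public UploadNotifier(ConnectionFactory factory) {
            this.factory = factory;
        }

        // called by the servlet after the upload has been dumped to the DB/file system
        public void notifyUploaded(String fileReference) throws Exception {
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        session.createProducer(session.createQueue("upload.queue"));
                // send only a reference; the processor fetches the payload itself
                producer.send(session.createTextMessage(fileReference));
            } finally {
                connection.close();
            }
        }
    }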
This sounds interesting and familiar to me :). We do it in a similar way.
We have four projects, and all four include file upload and file processing (image/video/PDF/docs) etc. So we created a single project to handle all file processing; it works something like this:
All four projects and the File Processor use Amazon S3/our file storage for file storage, so storage is shared among all five projects.
We make a request to the File Processor, providing details in XML via an HTTP request, including the file path on S3/storage, AWS authentication details, and file conversion/processing parameters. The File Processor does the processing, puts the processed files on S3/storage, constructs XML with the processed files' details, and sends the XML back in the response.
We use the Spring Framework and Tomcat.
Since this is foremost a learning exercise, you need to pick an easy-to-use JMS provider. This discussion suggested FFMQ just one year ago.
Since you are starting with a simple processor, you can keep it simple and use a JMS Queue.
In the simplest form, each message sent by the servlet has to correspond to a single job. You can either put the entire payload of the upload in the message, or just send a filename as a reference to the content. These are details you can refactor later.
On the processor side, if you are using Java EE, you can use a MessageBean. If you are not, then I would suggest a 3-JVM solution -- one each for Tomcat, the JMS server, and the message processor. This article includes the basics of a message-consuming client.
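To make the consumer side concrete, here is a minimal sketch of a standalone message-consuming client (the queue name and the JNDI lookup are placeholders and depend on the provider you pick):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class UploadProcessor implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                String fileReference = ((TextMessage) message).getText();
                System.out.println("Processing " + fileReference);
                // file-processing logic goes here
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) throws Exception {
            // how the factory is obtained depends on the provider (FFMQ, ActiveMQ, ...)
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("upload.queue"));
            consumer.setMessageListener(new UploadProcessor());
            connection.start(); // delivery begins; the listener runs on a provider thread
        }
    }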