I am trying to upload a file. My front-end application is in PHP and the backend engine is in Java; they communicate through the PHP/Java Bridge.
My first step was: when a file is posted to the PHP page, it retrieves the file's contents.
$filedata = file_get_contents($tmpUploadedLocation);
and then pass this data to the Java EJB façade, which accepts a byte array: saveFileContents(byte[] contents).
Here is how I converted $filedata into a byte array in PHP:
$bytearrayData = unpack("C*",$filedata);
and finally called the Java service (the Java service object was retrieved using the PHP/Java Bridge):
$javaService->saveFileContents($bytearrayData);
This works fine for small files, but once the size exceeds about 2.9 MB I receive an error and the file contents are not saved to disk:
Fatal error: Allowed memory size of 134217728 bytes exhausted // This is a PHP-side error caused by unpack
I am not sure how to do this correctly; the above method clearly doesn't scale. I have a few constraints:
The engine (Java) is responsible for saving and retrieving the contents.
PHP/HTML is the front-end application; it could be anything, but for now it is PHP.
PHP communicates with Java using the PHP/Java Bridge.
The EJB's methods are called from PHP to save and retrieve information.
Everything was working fine with the above combination, but now it is a matter of uploading and saving documents. The EJB (the application engine's access point) will be used by any front-end application, whether PHP or another Java application via a remote interface (lookups).
My question is: how can file contents be sent from PHP to Java without breaking anything (memory)?
Instead of converting the file into an array, I'd try to pass it as a string: encode it as Base64 in PHP and decode it into a byte array in Java.
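On the Java side, the Base64 string can be decoded back into the byte[] the façade already expects. A minimal sketch, assuming a hypothetical String-accepting overload (PHP would send base64_encode($filedata)):

```java
import java.util.Base64;

public class FileService {
    // Hypothetical overload accepting the Base64 string sent from PHP.
    public byte[] saveFileContents(String base64Contents) {
        byte[] contents = Base64.getDecoder().decode(base64Contents);
        // ... hand the decoded bytes to the existing saveFileContents(byte[]) here ...
        return contents;
    }

    public static void main(String[] args) {
        // "aGVsbG8=" is what base64_encode("hello") produces on the PHP side.
        byte[] decoded = new FileService().saveFileContents("aGVsbG8=");
        System.out.println(new String(decoded)); // prints "hello"
    }
}
```

Base64 inflates the payload by about a third, so this eases the unpack problem but does not remove the memory ceiling entirely for very large files.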
Another option is to pass the file through the filesystem. Some Linux systems have /dev/shm or /run/shm mounted as tmpfs, which is often a good way to pass temporary data between programs without incurring hard-drive overhead. A typical tmpfs workflow is: 1) create a folder; 2) remove old files from it (e.g. files older than a minute); 3) save the new file; 4) pass the file path to Java; 5) remove the file. Step 2 is important so that RAM is not wasted if steps 3-5 are not completed for some reason.
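The Java side of steps 4-5 could then be as simple as reading the path it is handed and deleting the file afterwards. A sketch (the temp file below stands in for a path under /dev/shm):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TmpfsConsumer {
    // Steps 4-5: read the file the PHP side dropped into tmpfs, then remove it.
    static byte[] consume(Path file) throws IOException {
        byte[] contents = Files.readAllBytes(file);
        Files.deleteIfExists(file); // step 5: free the RAM-backed storage promptly
        return contents;
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("upload", ".bin"); // stand-in for /dev/shm/upload/...
        Files.write(file, "payload".getBytes());
        byte[] data = consume(file);
        System.out.println(new String(data) + " " + Files.exists(file)); // prints "payload false"
    }
}
```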
I am trying to find a way to read just the end of a huge, constantly growing log file (say the last 20-30 lines) via SFTP from a server, to save the position up to which I have read, and, if I need more lines, to read further back from that position.
Everything I've tried takes too long. I tried copying the file to the local machine and reading it from the end with ReversedLinesFileReader, but that class needs a File object, while via SFTP you only get an InputStream, and downloading the whole file takes too long.
I also tried counting the lines and reading from line n, but that is also slow and sometimes throws an exception because the file is modified in the meantime. Another approach was to connect via SSH and run tail -100, which gives the desired result, but only once: on the next call I get the newest lines again, whereas I need to move further back. Is there a fast way to get the end of the file, save that position, and later read backwards from it?
You don't say what SFTP library you're using, but the most widely used Java SSH/SFTP library is JSch, so I'll assume you're using that.
The SFTP protocol has operations to perform random-access I/O on remote files. Unfortunately, the JSch SFTP client doesn't expose the full range of operations. However, it does have versions of the get operation (for getting a file from the remote server) which permit skipping over the first part of the remote file. You can use one of these operations to read for example the last 10 KB of a file.
Several of the JSch get operations return an InputStream. You can read the contents of the remote file from the input stream. If you want to access the remote file line-by-line, you can convert it to a Reader using InputStreamReader.
So, a process might do the following:
Call stat() on the remote file to get its size.
Figure out where in the file you want to start reading from. You could keep track of where you stopped reading last time, or you could guess based on the amount of data you're willing to download and the expected size in bytes of these last 20-30 lines.
Call get() with that offset to start reading.
Process data read from the InputStream returned by the get() call.
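The offset arithmetic and the partial-first-line issue those steps raise can be sketched without a server. Here a ByteArrayInputStream stands in for the stream that JSch's get(src, monitor, skip) would return, and stat().getSize() for the file size:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class TailReader {
    // Offset to pass as the skip argument: read at most window bytes from the end.
    static long tailOffset(long fileSize, long window) {
        return Math.max(0, fileSize - window);
    }

    // Read the remaining lines from a stream positioned at that offset.
    // If we skipped into the middle of the file, the first line is probably
    // partial (we likely landed mid-line), so drop it.
    static List<String> tailLines(InputStream in, boolean startedMidFile) throws IOException {
        BufferedReader r = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
        List<String> lines = new ArrayList<>();
        String line;
        while ((line = r.readLine()) != null) lines.add(line);
        if (startedMidFile && !lines.isEmpty()) lines.remove(0);
        return lines;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "line1\nline2\nline3\nline4\n".getBytes(StandardCharsets.UTF_8);
        long size = data.length;          // what stat() would report
        long skip = tailOffset(size, 12); // ask for roughly the last 12 bytes
        InputStream in = new ByteArrayInputStream(data, (int) skip, (int) (size - skip));
        System.out.println(tailLines(in, skip > 0)); // prints "[line4]"
    }
}
```

Saving the offset at which you stopped lets the next call pass a smaller skip value and read further back, which is the "save the point" part of the question.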
The best solution would be rotating log files, possibly with compression.
However, rsync does unidirectional synchronisation and can transmit only the changed parts of a file: for a log, the newly appended end.
I am not sure whether it performs well enough in your case, and SSH is a prerequisite.
We are using the Commons FileUpload API for handling file uploads. We use a disk item factory, so the file is written to a temp location; we then get an InputStream from the file item to encrypt the file and write it to its final location. My question is about the encryption: run as a standalone application it takes 25 seconds (for a 1 GB file), but the same code in the web application takes 12 minutes. Stranger still, it works fine on a different server (there, both the standalone and web application take the same time to encrypt). So, is there any issue with the FileUpload API that causes some kind of lock on the file even after it is completely written to the temp location, which in turn slows down our encryption?
The issue was that the encryption block of the code had log statements: a log entry was written for every chunk that was encrypted. Once those were commented out, it was really fast.
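A common way to keep such statements without paying their cost is to guard per-chunk logging with a level check, so nothing is formatted when the level is disabled. A sketch with java.util.logging, where a trivial XOR stands in for the real cipher:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ChunkCrypt {
    static final Logger LOG = Logger.getLogger(ChunkCrypt.class.getName());

    // XOR "encryption" stands in for the real cipher; the point is the logging guard.
    static int encrypt(byte[] data, byte key) {
        int chunks = 0;
        for (int off = 0; off < data.length; off += 8192) {
            int end = Math.min(off + 8192, data.length);
            for (int i = off; i < end; i++) data[i] ^= key;
            chunks++;
            if (LOG.isLoggable(Level.FINE)) { // guard: no string concatenation per chunk unless FINE is on
                LOG.fine("encrypted chunk ending at " + end);
            }
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] data = new byte[20000];
        System.out.println(encrypt(data, (byte) 0x5A)); // prints "3" (three chunks of <= 8 KB)
    }
}
```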
I'm using Oracle MAF for mobile app development (Android and iOS). I have a requirement to capture media (image, audio and video) in the application and store it in an Oracle DB, usually in a CLOB column.
So far I have converted the captured media into a base64 string (using commons-codec-1.10.jar) and passed it through a REST web service (accepting JSON/XML) to store it in the DB.
For image and audio the length of the base64 string is fine, but for video it runs to around 6.4 million characters even for a 2-second clip (2 MB, rear camera), which slows the application down and results in a Java heap space error.
Is there another way in Java to convert the media content into a String that gives a feasible solution?
If it is really necessary to save the videos in a CLOB/BLOB column of a DB table, then first save the content to a file and have an asynchronous scheduler store it in the DB. Ideally this scheduler would run in a separate JVM so it does not interfere with the app server running your server side.
If you can, save all binary content as binary in a file and store only the path in the DB. File systems are the best solution for serving binary data anyway.
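A sketch of that file-plus-path approach (directory layout and naming are illustrative); only the returned path string goes into the DB column:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MediaStore {
    // Write the raw media bytes to disk and return the path;
    // the DB stores target.toString() instead of the content itself.
    static Path saveMedia(byte[] content, Path dir, String name) throws IOException {
        Files.createDirectories(dir);
        Path target = dir.resolve(name);
        Files.write(target, content);
        return target;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("media"); // stand-in for the real media directory
        Path stored = saveMedia("video-bytes".getBytes(), dir, "clip.mp4");
        System.out.println(Files.exists(stored)); // prints "true"
    }
}
```

This also avoids the base64 inflation entirely, since the bytes never have to travel as characters.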
I have a Java client/server desktop application where the communication between client and server is based on sockets, and the messages exchanged are serialized objects (message objects that encapsulate requests and responses).
Now I need to make the client able to upload a file from the local computer to the server, but I can't send the file through the buffer, since the buffer is already used for exchanging the message objects.
Should I open another stream to send the file, or is there a better way to upload a file in my situation?
I need to make the client able to upload a file from the local computer to the server
- Open a solely dedicated connection to the server for file uploading.
- Use the File Transfer Protocol to ease your work; Apache's Commons Net library makes file upload and download quite easy and reliable.
See this link:
http://commons.apache.org/net/
You really only have two options:
Open another connection dedicated to the file upload and send it through that.
Make a message object representing bits of a file being uploaded, and send the file in chunks via these message objects.
The former seems simpler & cleaner to me, requiring less overhead and less complicated code.
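If you do go with the second option, it can be sketched as follows. FileChunk is a hypothetical message type, and the byte array below simulates the wire in place of a real socket:

```java
import java.io.*;
import java.util.Arrays;

// Hypothetical message type: the file travels as a sequence of
// serialized chunk objects over the existing object stream.
class FileChunk implements Serializable {
    final String name;
    final byte[] data;
    final boolean last;
    FileChunk(String name, byte[] data, boolean last) {
        this.name = name; this.data = data; this.last = last;
    }
}

public class ChunkSender {
    static void sendFile(ObjectOutputStream out, String name, InputStream src) throws IOException {
        byte[] buf = new byte[8192];
        int n;
        while ((n = src.read(buf)) != -1) {
            out.writeObject(new FileChunk(name, Arrays.copyOf(buf, n), false));
            out.reset(); // stop the stream from caching every chunk object (it would otherwise hold them all)
        }
        out.writeObject(new FileChunk(name, new byte[0], true)); // end-of-file marker
        out.flush();
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream wire = new ByteArrayOutputStream(); // simulated socket
        sendFile(new ObjectOutputStream(wire), "demo.txt",
                 new ByteArrayInputStream("hello over the message channel".getBytes()));

        // Receiver side: reassemble the chunks until the end marker arrives.
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(wire.toByteArray()));
        ByteArrayOutputStream rebuilt = new ByteArrayOutputStream();
        FileChunk c;
        do {
            c = (FileChunk) in.readObject();
            rebuilt.write(c.data);
        } while (!c.last);
        System.out.println(rebuilt.toString()); // prints "hello over the message channel"
    }
}
```

The reset() call matters: ObjectOutputStream remembers every object it has written, so without it a large file would accumulate in memory on the sending side.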
You can keep your current design and pass the file content inside a message object, for example as a String: use Base64 encoding (or similar) of the content if it contains troublesome characters.
We are using REST web services (WS) to upload files whose size is between 10 and 50 MB.
At the moment we use Java, JAX-RS and CXF for this.
The behavior of this stack is to buffer the uploaded files by writing them to a temporary file (because we have large files). This is fine for most users.
Is it possible to stream directly from the socket input?
(not from a whole file in memory nor from a temporary file)
My purpose is to reduce IO and CPU overhead (each file is currently written twice: once to the buffer and once to the final location). The WS only has to write the files (sometimes several in the same HTTP request) to a path that I compute from the HTTP query string.
Thanks for your attention
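For reference, the single-write core being asked about would look like this once the container hands over the raw entity stream (a JAX-RS resource method parameter can be an InputStream; whether CXF buffers it to disk first depends on its attachment-threshold configuration, which is outside this sketch). A plain-Java sketch of the copy itself:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class StreamingSink {
    // Copy the request entity stream straight to its final destination:
    // one write, no intermediate temp file.
    static long store(InputStream entity, Path target) throws IOException {
        Files.createDirectories(target.getParent());
        return Files.copy(entity, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // The target path would be computed from the HTTP query string.
        Path target = Files.createTempDirectory("uploads").resolve("report.pdf");
        long written = store(new ByteArrayInputStream("pdf-bytes".getBytes()), target);
        System.out.println(written); // prints "9"
    }
}
```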