I am using CentOS with kernel version 2.6.32. I plan to run a test with and without transferTo() (sendfile) using NIO. The test copies a 1 GB file from one directory to another. However, I didn't see any significant performance improvement from using transferTo(). Can someone tell me whether file-to-file sendfile actually works in the Linux kernel, or whether only file-to-socket works? Do I need to enable anything for sendfile?
Sample code:
private static void doCopyNIO(String inFile, String outFile) {
    FileInputStream fis = null;
    FileOutputStream fos = null;
    FileChannel cis = null;
    FileChannel cos = null;
    long len = 0;
    try {
        fis = new FileInputStream(inFile);
        cis = fis.getChannel();
        fos = new FileOutputStream(outFile);
        cos = fos.getChannel();
        len = cis.size();
        /*
        // Chunked variant: transferTo() may transfer fewer bytes than
        // requested, so advance the position by its return value.
        long pos = 0;
        while (pos < len) {
            pos += cis.transferTo(pos, 1024 * 1024 * 10, cos); // 10 MB chunks
        }
        */
        // Note: transferTo() may transfer fewer bytes than requested, so
        // production code should loop on its return value, as in the
        // commented-out variant above.
        cis.transferTo(0, len, cos);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // FileOutputStream.flush() is a no-op; closing the channels and
        // streams is what actually releases the resources.
        try { if (cis != null) cis.close(); } catch (IOException e) { e.printStackTrace(); }
        try { if (cos != null) cos.close(); } catch (IOException e) { e.printStackTrace(); }
        try { if (fis != null) fis.close(); } catch (IOException e) { e.printStackTrace(); }
        try { if (fos != null) fos.close(); } catch (IOException e) { e.printStackTrace(); }
    }
}
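For comparison, you could also benchmark against java.nio.file.Files.copy (Java 7+), which lets the JDK pick whatever copy mechanism it considers fastest on the platform. A minimal sketch with placeholder paths (substitute your 1 GB test file):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyBaseline {
    public static void main(String[] args) throws IOException {
        // Placeholder paths; substitute the files used in the benchmark.
        Path src = Paths.get("/tmp/in.dat");
        Path dst = Paths.get("/tmp/out.dat");
        long start = System.nanoTime();
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        System.out.printf("copied in %.2f s%n", (System.nanoTime() - start) / 1e9);
    }
}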
Related
I have a Java application which uses Spring REST to upload a jar file, but the uploaded file is corrupted and I am not able to access the jar from the server. Please help.
fileloc = fileloc.replace("$", "/");
String filename = uploadedFileRef.getOriginalFilename();
String path = fileloc + filename;
byte[] buffer = new byte[1000];
File outputFile = new File(path);
FileInputStream reader = null;
FileOutputStream writer = null;
int totalBytes = 0;
try {
    outputFile.createNewFile();
    reader = (FileInputStream) uploadedFileRef.getInputStream();
    writer = new FileOutputStream(outputFile);
    int bytesRead = 0;
    while ((bytesRead = reader.read(buffer)) != -1) {
        writer.write(buffer);
        totalBytes += bytesRead;
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        reader.close();
        writer.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
You call

writer.write(buffer);

so you always write the full buffer, regardless of how much was actually read. Imagine the last read returns only 10 bytes: you will still write all 1000 bytes of the buffer. Use

writer.write(buffer, 0, bytesRead);

Or just check the question.
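Put together, a minimal corrected version of the loop might look like this (Java 7+ try-with-resources; uploadedFileRef and path are the question's own variables):

byte[] buffer = new byte[1000];
int totalBytes = 0;
try (InputStream reader = uploadedFileRef.getInputStream();
        OutputStream writer = new FileOutputStream(path)) {
    int bytesRead;
    while ((bytesRead = reader.read(buffer)) != -1) {
        // Write only the bytes actually read, not the whole buffer.
        writer.write(buffer, 0, bytesRead);
        totalBytes += bytesRead;
    }
} catch (IOException e) {
    e.printStackTrace();
}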
I am reading a file and writing the same data back out, but the problem is that the resulting file is 2 KB larger than the original input file.
Here is a piece of the code:
@Override
public void run() {
    try {
        BufferedInputStream bis;
        ArrayList<byte[]> al = new ArrayList<byte[]>();
        File file = new File(Environment.getExternalStorageDirectory(), "test.mp3");
        byte[] bytes = new byte[2048];
        bis = new BufferedInputStream(new FileInputStream(file));
        OutputStream os = socket.getOutputStream();
        int read;
        int fileSize = (int) file.length();
        int readlen = 1024;
        while (fileSize > 0) {
            if (fileSize < 1024) {
                readlen = fileSize;
                System.out.println("Hello.........");
            }
            bytes = new byte[readlen];
            read = bis.read(bytes, 0, readlen);
            fileSize -= read;
            al.add(bytes);
        }
        ObjectOutputStream out1 = new ObjectOutputStream(new FileOutputStream(Environment.getExternalStorageDirectory() + "/newfile.mp3"));
        for (int ii = 1; ii < al.size(); ii++) {
            out1.write(al.get(ii));
            // out1.flush();
        }
        out1.close();
        File file1 = new File(Environment.getExternalStorageDirectory(), "newfile.mp3");
Don't use an ObjectOutputStream. Just use the FileOutputStream, or a BufferedOutputStream wrapped around it. An ObjectOutputStream writes serialization headers and block markers around your data, which is exactly where the extra bytes come from.
The correct way to copy streams in Java is as follows:
byte[] buffer = new byte[8192]; // or more, or even less, anything > 0
int count;
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
out.close();
Note that you don't need a buffer the size of the input, and you don't need to read the entire input before writing any of the output.
Wish I had $1 for every time I've posted this.
I think you should use a ByteArrayOutputStream, not an ObjectOutputStream.
I believe this is not the raw code but parts of it taken from different procedures; otherwise it would be meaningless.
For example, suppose you want to stream some data from a file, process it, and then write it to another file.
BufferedInputStream bis = null;
ByteArrayOutputStream al = new ByteArrayOutputStream();
FileOutputStream out1 = null;
byte[] bytes;
try {
    File file = new File("testfrom.mp3");
    bis = new BufferedInputStream(new FileInputStream(file));
    int fileSize = (int) file.length();
    int readLen = 1024;
    bytes = new byte[readLen];
    while (fileSize > 0) {
        if (fileSize < readLen) {
            readLen = fileSize;
        }
        // read() may return fewer bytes than requested, so use its
        // return value rather than assuming readLen bytes arrived
        int n = bis.read(bytes, 0, readLen);
        if (n < 0) {
            break; // unexpected end of file
        }
        al.write(bytes, 0, n);
        fileSize -= n;
    }
    bis.close();
} catch (IOException e) {
    e.printStackTrace();
}
// process the data from al here
// ...
// finished processing
try {
    out1 = new FileOutputStream("testto.mp3");
    al.writeTo(out1);
    out1.close();
} catch (IOException e) {
    e.printStackTrace();
}
Don't forget to use try-catch blocks where needed.
http://codeinventions.blogspot.ru/2014/08/creating-file-from-bytearrayoutputstrea.html
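For reference, the read phase can also be written with try-with-resources (Java 7+) so the stream is closed even if an exception is thrown; a minimal sketch using the same placeholder file name as above:

ByteArrayOutputStream al = new ByteArrayOutputStream();
File file = new File("testfrom.mp3");
try (BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file))) {
    byte[] bytes = new byte[1024];
    int n;
    // read() returns -1 at end of stream
    while ((n = bis.read(bytes)) != -1) {
        al.write(bytes, 0, n);
    }
} catch (IOException e) {
    e.printStackTrace();
}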
I have a problem with the time it takes to decompress .gz files using the following code:
import java.io.*;
import java.util.zip.*;
public class UnGunzipClass {
    public static boolean ungunzip(String compressedFile, String decompressedFile) {
        try {
            // in
            FileInputStream fileIn = new FileInputStream(compressedFile);
            GZIPInputStream gZipIn = new GZIPInputStream(fileIn);
            BufferedInputStream in = new BufferedInputStream(gZipIn);
            // out
            FileOutputStream fileOut = new FileOutputStream(decompressedFile);
            BufferedOutputStream out = new BufferedOutputStream(fileOut);
            int n = 0;
            int len = 1024 * 1024 * 1024;
            byte[] buffer = new byte[len];
            while ((n = in.read(buffer, 0, len)) > 0) {
                out.write(buffer, 0, n);
            }
            gZipIn.close();
            fileOut.close();
            return true;
        } catch (IOException e) {
            e.printStackTrace();
            return false;
        }
    }
}
Note: the files are up to 100 MB, but decompressing still takes tens of minutes, so I am trying to find something faster. Speed is good :)
You created your BufferedInputStream from the GZIPInputStream; for performance you would do that in the reverse order, so the decompressor reads from the buffer rather than from the raw file. Also, I suggest you shrink your buffer size (and actually use your buffered streams). Finally, I would use a try-with-resources statement, and that might look something like
public static boolean ungunzip(String compressedFile, String decompressedFile) {
    final int BUFFER_SIZE = 32768;
    try (InputStream in = new GZIPInputStream(new BufferedInputStream(
            new FileInputStream(compressedFile)), BUFFER_SIZE);
            OutputStream out = new BufferedOutputStream(
                    new FileOutputStream(decompressedFile), BUFFER_SIZE)) {
        int n = 0;
        byte[] buffer = new byte[BUFFER_SIZE];
        while ((n = in.read(buffer, 0, BUFFER_SIZE)) > 0) {
            out.write(buffer, 0, n);
        }
        return true;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return false;
}
I am trying to write Java code that reads 100 KB from a file and divides it into blocks, where each block is 256 bits (32 bytes). I also want to convert each block to a binary or integer format. The following is what I have; any suggestions are welcome.
public static void main(String[] args) {
    ReadFileExample newclass = new ReadFileExample();
    System.out.println("-----------Welcome to ECC ENCRYPTION NEW--------");
    File clearmsg = new File("F:/java_projects/clearmsg.txt");
    File ciphermsg = new File("F:/java_projects/ciphermsg.txt");
    byte[] block = new byte[32];
    try {
        FileInputStream fis = new FileInputStream(clearmsg);
        FileOutputStream fos = new FileOutputStream(ciphermsg);
        // Note: CipherOutputStream requires an initialized Cipher, e.g.
        // new CipherOutputStream(fos, cipher); the no-Cipher constructor
        // is protected and cannot be called here.
        System.out.println("Total file size to read (in bytes) : " + fis.available());
        int i;
        // read() returns the number of bytes read, or -1 at end of file;
        // the last block may be shorter than 32 bytes
        while ((i = fis.read(block)) != -1) {
            // Print the block contents (java.util.Arrays), not the array reference.
            System.out.println(Arrays.toString(block));
            fos.write(block, 0, i);
        }
        fos.close();
        fis.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
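To get an integer or binary form for each 32-byte block, one option is java.math.BigInteger. A minimal sketch that would go inside the read loop, using the question's own block and i variables (requires java.math.BigInteger and java.util.Arrays):

// Interpret the bytes actually read as an unsigned big-endian integer.
BigInteger value = new BigInteger(1, Arrays.copyOf(block, i));
System.out.println(value);             // integer (decimal) form
System.out.println(value.toString(2)); // binary (base-2) form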
I have been messing with this for some time and it's getting better and better, but it's still a little slow for me. Can anyone help speed this up / make the design better, please?
Also, the file names must contain only numbers and must end with the ".dat" extension.
I never added those checks because I didn't feel it was necessary.
public void preloadModels() {
    try {
        File directory = new File(signlink.findcachedir() + "raw", File.separator);
        File[] modelFiles = directory.listFiles();
        for (int modelIndex = modelFiles.length - 1;; modelIndex--) {
            String modelFileName = modelFiles[modelIndex].getName();
            byte[] buffer = getBytesFromInputStream(new FileInputStream(new File(directory, modelFileName)));
            Model.method460(buffer, Integer.parseInt(modelFileName.replace(".dat", "")));
        }
    } catch (Throwable e) {
        return;
    }
}

public static final byte[] getBytesFromInputStream(InputStream inputStream) throws IOException {
    byte[] buffer = new byte[32 * 1024];
    int bufferSize = 0;
    for (;;) {
        int read = inputStream.read(buffer, bufferSize, buffer.length - bufferSize);
        if (read == -1) {
            return Arrays.copyOf(buffer, bufferSize);
        }
        bufferSize += read;
        if (bufferSize == buffer.length) {
            buffer = Arrays.copyOf(buffer, bufferSize * 2);
        }
    }
}
I would do the following.
public void preloadModels() throws IOException {
    File directory = new File(signlink.findcachedir() + "raw");
    for (File file : directory.listFiles()) {
        if (!file.getName().endsWith(".dat")) continue;
        byte[] buffer = getBytesFromFile(file);
        Model.method460(buffer, Integer.parseInt(file.getName().replace(".dat", "")));
    }
}

public static byte[] getBytesFromFile(File file) throws IOException {
    byte[] buffer = new byte[(int) file.length()];
    try (DataInputStream dis = new DataInputStream(new FileInputStream(file))) {
        dis.readFully(buffer);
        return buffer;
    }
}
If this is still too slow, the limitation is most likely the speed of the hard drive.
How about using the Apache Commons IO IOUtils class?
IOUtils.toByteArray(InputStream input)
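For example, a minimal sketch, assuming commons-io is on the classpath:

import org.apache.commons.io.IOUtils;

try (InputStream in = new FileInputStream(file)) {
    byte[] buffer = IOUtils.toByteArray(in);
    Model.method460(buffer, Integer.parseInt(file.getName().replace(".dat", "")));
}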
I think the easiest way is to add all of the directory's contents to an archive. Have a look at java.util.zip. It had some bugs with file names before Java 7. There is also an Apache Commons implementation.
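A minimal sketch of packing a directory of files into one archive with java.util.zip (the method name and arguments are placeholders):

import java.io.*;
import java.util.zip.*;

public static void zipDirectory(File directory, File archive) throws IOException {
    byte[] buffer = new byte[8192];
    try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(archive))) {
        for (File file : directory.listFiles()) {
            if (!file.isFile()) continue;
            // One entry per file, named after the file itself.
            zos.putNextEntry(new ZipEntry(file.getName()));
            try (InputStream in = new FileInputStream(file)) {
                int n;
                while ((n = in.read(buffer)) > 0) {
                    zos.write(buffer, 0, n);
                }
            }
            zos.closeEntry();
        }
    }
}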