org.apache.sshd.common.SshException: Channel has been closed - java

I'm trying to send in-memory data as a file to a remote server using org.apache.sshd.client.
My code so far:
// Connect to remote server code
sftp = session.createSftpClient();
// h below is the remote file handle, e.g. from sftp.open(remotePath, OpenMode.Write, OpenMode.Create)
StringBuilder sb = new StringBuilder();
for (int j = 0; j < 1000; j++) {
    sb.append("a");
}
byte[] data = sb.toString().getBytes(StandardCharsets.UTF_8);
sftp.write(h, 0, data, 0, data.length);
Everything is fine until I make the input data bigger. I set the loop count from 1000 to 1000000, and it keeps throwing the following exception:
org.apache.sshd.common.SshException: Channel has been closed
I tried setting the SFTP channel open timeout much higher, but that did not resolve the problem.
Then I realized that any time the data size is greater than 256KB, the exception is thrown. I tried changing the IO buffer size, the write buffer, and the read buffer, but the problem is still there.
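One workaround I'm considering is writing the data in chunks below that threshold instead of one big write; a minimal sketch, reusing the same sftp client and handle h as above (the 32KB chunk size is my assumption):
// Write the payload in 32KB chunks instead of one large write (sketch)
int chunkSize = 32 * 1024;
for (int offset = 0; offset < data.length; offset += chunkSize) {
    int len = Math.min(chunkSize, data.length - offset);
    sftp.write(h, offset, data, offset, len);
}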
Is there any way to configure this data size limit, or any other way to solve the problem? Thank you all in advance.

Related

Sending large data over TCP/IP socket

I have a small project running a server in C# and a client in Java. The server sends images to the client.
Some images are quite big (up to 10MiB sometimes), so I split the image bytes and send it in chunks of 32768 bytes each.
My C# Server code is as follows:
using (var stream = new MemoryStream(ImageData))
{
    for (int j = 1; j <= dataSplitParameters.NumberOfChunks; j++)
    {
        byte[] chunk;
        if (j == dataSplitParameters.NumberOfChunks)
            chunk = new byte[dataSplitParameters.FinalChunkSize];
        else
            chunk = new byte[dataSplitParameters.ChunkSize];
        int result = stream.Read(chunk, 0, chunk.Length);
        string line = DateTime.Now + ", Status OK, " + ImageName + ", ImageChunk, " + j + ", " + dataSplitParameters.NumberOfChunks + ", " + chunk.Length;
        //write read params
        streamWriter.WriteLine(line);
        streamWriter.Flush();
        //write the data
        binaryWriter.Write(chunk);
        binaryWriter.Flush();
        Console.WriteLine(line);
        string deliveryReport = streamReader.ReadLine();
        Console.WriteLine(deliveryReport);
    }
}
And my Java Client code is as follows:
long dataRead = 0;
for (int j = 1; j <= numberOfChunks; j++) {
    String line = bufferedReader.readLine();
    tokens = line.split(", ");
    System.out.println(line);
    int toRead = Integer.parseInt(tokens[tokens.length - 1]);
    byte[] chunk = new byte[toRead];
    int read = inputStream.read(chunk, 0, toRead);
    //do something with the data
    dataRead += read;
    String progressReport = pageLabel + ", progress: " + dataRead + "/" + dataLength + " bytes.";
    bufferedOutputStream.write((progressReport + "\n").getBytes());
    bufferedOutputStream.flush();
    System.out.println(progressReport);
}
The problem is when I run the code, either the client crashes with an error saying it is reading bogus data, or both the client and the server hang. This is the error:
Document Page 1, progress: 49153/226604 bytes.
�9��%>�YI!��F�����h�
Exception in thread "main" java.lang.NumberFormatException: For input string: .....
What am I doing wrong?
The basic problem.
Once you wrap an InputStream in a BufferedReader, you must stop accessing the InputStream directly. That BufferedReader is buffered: it will read as much data as it wants to; it is NOT limited to reading exactly up to the next newline symbol(s) and stopping there.
The BufferedReader on the Java side has read a lot more than that one line, so it has already consumed a whole bunch of image data, and there's no way out from here. By making that BufferedReader, you've made the job impossible, so you can't do that.
The underlying problem.
You have a single TCP/IP connection. On this, you send some irrelevant text (the page, the progress, etc), and then you send an unknown amount of image data, and then you send another irrelevant progress update.
That's fundamentally broken. How can an image parser possibly know that halfway through sending an image, you get a status update line? Text is just binary data too, there is no magic identifier that lets a client know: This byte is part of the image data, but this byte is some text sent in-between with progress info.
The simple fix.
You'd think the simple fix is.. well, stop doing that then! Why are you sending this progress? The client is perfectly capable of knowing how many bytes it read, there is no point sending that. Just.. take your binary data. open the outputstream. send all that data. And on the client side, open the inputstream, read all that data. Don't involve strings. Don't use anything that smacks of 'works with characters' (so, BufferedReader? No. BufferedInputStream is fine).
... but now the client doesn't know the title, nor the total size!
So make a wire protocol. It can be near trivial.
This is your wire protocol:
4 bytes, big endian: SizeOfName
SizeOfName number of bytes. UTF-8 encoded document title.
4 bytes, big endian: SizeOfData
SizeOfData number of bytes. The image data.
And that's if you actually want the client to be able to render a progress bar and to know the title. If that's not needed, don't do any of that, just straight up send the bytes, and signal that the file has been completely sent by.. closing the connection.
Here's some sample java code:
try (InputStream in = ....) {
    int nameSize = readInt(in);
    byte[] nameBytes = in.readNBytes(nameSize);
    String name = new String(nameBytes, StandardCharsets.UTF_8);
    int dataSize = readInt(in);
    try (OutputStream out =
            Files.newOutputStream(Paths.get("/Users/TriSky/image.png"))) {
        byte[] buffer = new byte[65536];
        while (dataSize > 0) {
            int r = in.read(buffer);
            if (r == -1) throw new IOException("Early end-of-stream");
            out.write(buffer, 0, r);
            dataSize -= r;
        }
    }
}

public int readInt(InputStream in) throws IOException {
    byte[] b = in.readNBytes(4);
    return ByteBuffer.wrap(b).getInt();
}
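For completeness, the sending side of this wire protocol is the mirror image; a sketch in Java (the C# server would do the equivalent, and the names here are illustrative):
// Write name-length, name, data-length, data - the four fields of the wire protocol
public void send(OutputStream out, String name, byte[] imageData) throws IOException {
    byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
    out.write(ByteBuffer.allocate(4).putInt(nameBytes.length).array());
    out.write(nameBytes);
    out.write(ByteBuffer.allocate(4).putInt(imageData.length).array());
    out.write(imageData);
    out.flush();
}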
Closing notes
Another bug in your app is that you're using the wrong method. Java's read(bytes) method will NOT (necessarily) fully fill that byte array. All read(byte[]) will do is read at least 1 byte (unless the stream is closed, in which case it reads none and returns -1). The idea is: read will read the optimal number of bytes: exactly as many as are ready to give you right now. How many is that? Who knows - if you ignore the return value of in.read(bytes), your code is necessarily broken, and you're doing just that. What you really want is, for example, readNBytes, which guarantees that it fully fills that byte array (or reads until the stream ends, whichever happens first).
Note that in the transfer code above, I also use the basic read, but there I don't ignore the return value.
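On older Java versions without readNBytes, a fill loop along these lines does the same job; a sketch:
// Keep calling read until the buffer is full or the stream ends
static int readFully(InputStream in, byte[] buf) throws IOException {
    int total = 0;
    while (total < buf.length) {
        int r = in.read(buf, total, buf.length - total);
        if (r == -1) break; // end of stream
        total += r;
    }
    return total; // less than buf.length only if the stream ended early
}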
Your Java code seems to be using a BufferedReader. It reads data into a buffer of its own, meaning it is no longer available in the underlying socket input stream - that's your first problem. You have a second problem with how inputStream.read is used - it's not guaranteed to read all the bytes you ask for, you would have to put a loop around it.
This is not a particularly easy problem to solve. When you mix binary and text data in the same stream, it is difficult to read it back. In Java, there is a class called DataInputStream that can help a little - it has a readLine method to read a line of text, and also methods to read binary data:
DataInputStream dataInput = new DataInputStream(inputStream);
for (int j = 1; j <= numberOfChunks; j++) {
    String line = dataInput.readLine();
    ...
    byte[] chunk = new byte[toRead];
    dataInput.readFully(chunk); // readFully returns void and always fills the whole array
    ...
}
DataInputStream has limitations: the readLine method is deprecated because it assumes the text is encoded in latin-1, and does not let you use a different text encoding. If you want to go further down this road you'll want to create a class of your own to read your stream format.
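Such a class could, for instance, read single bytes up to the newline and decode them with an explicit charset, without buffering past the line; a rough sketch:
// Read one text line from a mixed binary/text stream without over-reading
static String readLine(InputStream in, Charset charset) throws IOException {
    ByteArrayOutputStream line = new ByteArrayOutputStream();
    int b;
    while ((b = in.read()) != -1 && b != '\n') {
        if (b != '\r') line.write(b); // tolerate \r\n line endings
    }
    return new String(line.toByteArray(), charset);
}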
Some images are quite big (up to 10MiB sometimes), so I split the image bytes and send it in chunks of 32768 bytes each.
You know this is totally unnecessary right? There is absolutely no problem sending multiple megabytes of data into a TCP socket, and streaming all of the data in on the receiving side.
When you send an image, another option is to open it as a normal file, split it into chunks, and Base64-encode each chunk before sending; the client then decodes it on arrival. Image data is not normal text data, so the Base64 encoding turns those raw bytes into plain characters like AfHM65Hkgf7MM.
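For what it's worth, java.util.Base64 can do that encoding and decoding; a minimal sketch (imageBytes is assumed to hold the chunk's raw bytes):
// Encode raw image bytes to plain text for sending; the client decodes them back
String encoded = Base64.getEncoder().encodeToString(imageBytes);
byte[] decoded = Base64.getDecoder().decode(encoded);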

Socket programming in android - Reading bytes from input stream is very slow over Wifi

I am using Sockets for transferring data over Wifi in my android app, I have set the Buffer size to around 10MB and here is my code for sending data.
// Sending data from a file in chunks, PREFERRED_CHUNK_SIZE is [1024 * 1024 * 10]
var fileSize = fileList[index].totalSize
val buffer = ByteArray(SocketConstants.PREFERRED_CHUNK_SIZE)
var length: Int
do {
    length = stream.read(buffer, 0, min(buffer.size, fileSize))
    bos.write(buffer, 0, length)
    fileSize -= length
} while (fileSize != 0)
stream.close()
I am reusing the same code for multiple file transfers. The condition fileSize != 0 makes sure I read only that many bytes for a single file, which is why I use the min function: say I want to send 36 MB, it is sent as 10 MB (10,485,760 bytes), 10 MB, 10 MB, then 6 MB.
Below is my code for receiver:
var fileSize = file.totalSize
var current: Int
var offset = 0
val byteArray = ByteArray(SocketConstants.PREFERRED_CHUNK_SIZE)
val bufferedInputStream = BufferedInputStream(inputStream)
do {
    current = bufferedInputStream.read(byteArray, 0, min(byteArray.size, fileSize))
    // outputStream.write(byteArray, 0, current) To ignore file write for now
    fileSize -= current
    offset += current
    println("Length: $current")
    file.bytesDownloaded = offset
    updateList(offset)
} while (fileSize != 0)
outputStream.flush()
outputStream.close()
When reading, I am getting really small chunks of bytes: 1358, 1358, 1358, 1358, 1358.
This is really slow, and I don't understand what is causing inputStream.read() to return such small reads.
I have already set the send and receive buffer sizes on both the ServerSocket and Socket instances, but it made no difference in the results.
After looking for a solution for a few days and trying multiple ways to increase the speed of reads from the input stream, I finally discovered the solution.
Thanks to Gerd in the comments, who got me thinking about the strength of the Wifi connection.
What I found was that if you connect to Wifi using the Android WifiManager, the connection established is a slow one, and you won't even see in the status bar that you are connected to an actual Wifi network.
To overcome this, I connected to the hotspot manually through the phone's Wifi settings, and that gives much better speed.

How to read a large json from a text file in android app?

I have a text file in my Android app which contains JSON. I need to read and parse that JSON. The file size is 21 MB. I am using the following code to read the file:
StringBuilder stringBuilder = new StringBuilder();
InputStream input = getAssets().open(filename);
int size = input.available();
byte[] buffer = new byte[size];
byte[] tempBuffer = new byte[1024];
int tempBufferIndex = 0;
for (int i = 0; i < size; i++) {
    if (i == 0) {
        tempBuffer[tempBufferIndex] = buffer[i];
    } else {
        int mod = 1024 % i;
        if (mod == 0) {
            input.read(tempBuffer);
            stringBuilder.append(new String(tempBuffer));
            tempBufferIndex = 0;
        }
        tempBuffer[tempBufferIndex] = buffer[i];
    }
}
input.close();
The size int is 20949874 in the real case. After the loop is done, the stringBuilder length is always 11264 even if I change the range of the for loop. I tried to make one String from the InputStream without using a loop, but it always gives me an OutOfMemoryError. I also get "Grow heap (frag case) to 26.668MB for 20949890-byte allocation" in my logs. I searched here and tried different solutions but could not make it work. Any idea how I should solve this issue? Thanks in advance.
For big JSON files you should use a streaming (SAX-style) parser and not DOM. For example, JsonReader.
DOM (“Document Object Model”) loads the entire content into memory and permits the developer to query the data as they wish. SAX presents the data as a stream: the developer waits for their desired pieces of data to appear and saves only the parts they need. DOM is considered easier to use but SAX uses much less memory.
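A streaming read over the same asset with android.util.JsonReader might look roughly like this (a sketch; the array structure and the "name" field are made-up assumptions about your JSON):
// Stream the 21 MB asset instead of loading it into one String
JsonReader reader = new JsonReader(new InputStreamReader(getAssets().open(filename), "UTF-8"));
try {
    reader.beginArray();
    while (reader.hasNext()) {
        reader.beginObject();
        while (reader.hasNext()) {
            if (reader.nextName().equals("name")) {
                String value = reader.nextString(); // keep only the parts you need
            } else {
                reader.skipValue();
            }
        }
        reader.endObject();
    }
    reader.endArray();
} finally {
    reader.close();
}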
You can try to split the file into several parts, so that during processing the app hopefully doesn't run out of memory.
You should also consider using "largeHeap" flag in your manifest
(See http://developer.android.com/guide/topics/manifest/application-element.html)
I don't know your file, but maybe if you use smaller JSON tags, you can reduce storage as well.

Out of memory java heap space

I am trying to send chunks of files from a server to more than one client. When I try to send a file of size 700 MB, it throws an "OutOfMemory java heap space" error. I am using NetBeans 7.1.2.
I also tried the VM options in the project properties, but the same error still happens. I think there is some problem with reading the entire file. The code below works for files up to 300 MB. Please give me some suggestions.
Thanks in advance
public class SplitFile {
    static int fileid = 0;

    public static DataUnit[] getUpdatableDataCode(File fileName) throws FileNotFoundException, IOException {
        int i = 0;
        DataUnit[] chunks = new DataUnit[UAProtocolServer.singletonServer.cloudhosts.length];
        FileInputStream fis;
        long Chunk_Size = (fileName.length()) / chunks.length;
        int cursor = 0;
        long fileSize = (long) fileName.length();
        int nChunks = 0, read = 0;
        long readLength = Chunk_Size;
        byte[] byteChunk;
        try {
            fis = new FileInputStream(fileName);
            //StupidTest.size = (int)fileName.length();
            while (fileSize > 0) {
                System.out.println("loop" + i);
                if (fileSize <= Chunk_Size) {
                    readLength = (int) fileSize;
                }
                byteChunk = new byte[(int) readLength];
                read = fis.read(byteChunk, 0, (int) readLength);
                fileSize -= read;
                // cursor += read;
                assert (read == byteChunk.length);
                long aid = fileid;
                aid = aid << 32 | nChunks;
                chunks[i] = new DataUnit(byteChunk, aid);
                // Lister.add(chunks[i]);
                nChunks++;
                ++i;
            }
            fis.close();
            fis = null;
        } catch (Exception e) {
            System.out.println("File splitting exception");
            e.printStackTrace();
        }
        return chunks;
    }
}
Reading in the whole file will definitely trigger an OutOfMemoryError as file sizes grow. Tuning -Xmx1024M may be fine as a temporary fix, but it's definitely not the right/scalable solution. Also, no matter how you move your variables around (like creating the buffer outside the loop instead of inside it), you will get an OutOfMemoryError sooner or later. The only way to avoid an OutOfMemoryError is to not read the complete file into memory.
If you have to use just memory, then an approach is to send off chunks to the client so you don't have to keep all the chunks in memory:
instead of:
chunks[i] = new DataUnit(byteChunk,aid);
do:
sendChunkToClient(new DataUnit(byteChunk, aid));
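Fleshed out, that approach reads one chunk at a time and ships it immediately, so only one chunk is ever in memory; a sketch (sendChunkToClient is the hypothetical send routine from above):
// Read a chunk, wrap it, send it, and let it be garbage collected
try (FileInputStream fis = new FileInputStream(fileName)) {
    byte[] byteChunk = new byte[(int) Chunk_Size];
    int nChunks = 0;
    int read;
    while ((read = fis.read(byteChunk)) != -1) {
        long aid = ((long) fileid) << 32 | nChunks++;
        sendChunkToClient(new DataUnit(Arrays.copyOf(byteChunk, read), aid));
    }
}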
But the above solution has the drawback that if some error happened in-between chunk sending, you may have hard time trying to resume/recover from the error point.
Saving the chunks to temporary files like Ross Drew suggested is probably better and more reliable.
How about creating the
byteChunk = new byte[(int)readLength];
outside of the loop and just reuse it instead of creating an array of bytes over and over if it's always the same.
Alternatively
You could write incoming data to a temporary file as it comes in instead of maintaining that huge array then process it once it's all arrived.
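A sketch of that, assuming the chunks arrive over an InputStream named socketIn:
// Spool incoming bytes to a temp file instead of a giant in-memory array
Path tmp = Files.createTempFile("chunks", ".bin");
try (OutputStream out = Files.newOutputStream(tmp)) {
    byte[] buf = new byte[8192];
    int r;
    while ((r = socketIn.read(buf)) != -1) {
        out.write(buf, 0, r);
    }
}
// process the file from disk once it has all arrived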
Also
If you are using it multiple times as an int, you should probably just cast readLength to an int outside the loop as well:
int len = (int)readLength;
And Chunk_Size is a variable right? It should begin with a lower case letter.

Java Read File Larger than 2 GB (Using Chunking)

I'm implementing a file transfer server, and I've run into an issue with sending a file larger than 2 GB over the network. The issue starts when I get the File I want to work with and try to read its contents into a byte[]. I have a for loop :
for (long i = 0; i < fileToSend.length(); i += PACKET_SIZE) {
    fileBytes = getBytesFromFile(fileToSend, i);
where getBytesFromFile() reads a PACKET_SIZE amount of bytes from fileToSend which is then sent to the client in the for loop. getBytesFromFile() uses i as an offset; however, the offset variable in FileInputStream.read() has to be an int. I'm sure there is a better way to read this file into the array, I just haven't found it yet.
I would prefer to not use NIO yet, although I will switch to using that in the future. Indulge my madness :-)
It doesn't look like you're reading data from the file properly. When reading data from a stream in Java, it's standard practice to read data into a buffer. The size of the buffer can be your packet size.
File fileToSend = //...
InputStream in = new FileInputStream(fileToSend);
OutputStream out = //...
byte buffer[] = new byte[PACKET_SIZE];
int read;
while ((read = in.read(buffer)) != -1) {
    out.write(buffer, 0, read);
}
in.close();
out.close();
Note that the size of the buffer array remains constant. But if the buffer cannot be filled (like when it reaches the end of the file), the remaining elements of the array will contain stale data from the previous read, so you must ignore those elements (this is what passing read to out.write() does in my code sample).
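As an aside, if you genuinely need to read a chunk starting at an offset past 2 GB rather than streaming from the start, RandomAccessFile takes a long position; a sketch (not the poster's code):
// Seek to a long offset, then read up to one packet's worth of bytes
try (RandomAccessFile raf = new RandomAccessFile(fileToSend, "r")) {
    raf.seek(i); // i can be a long, unlike FileInputStream.read's int offset
    byte[] packet = new byte[PACKET_SIZE];
    int read = raf.read(packet, 0, packet.length);
    // send the first 'read' bytes of packet
}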
Umm, realize that your handling of the variable i is not correct:
Iteration 0: i=0
Iteration 1: i=PACKET_SIZE
...
...
Iteration n: i=PACKET_SIZE*n
