Writing data into a VB (Variable Block) file using FTP/FTPS - Java

package base;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PrintWriter;
import org.apache.commons.net.PrintCommandListener;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPConnectionClosedException;
import org.apache.commons.net.ftp.FTPReply;
import org.apache.commons.net.ftp.FTPSClient;
import com.ibm.jzos.ZFile;
public class FTPSVB {
public static void main(String[] args) {
BufferedInputStream binp=null;
BufferedOutputStream bout=null;
String server, username, password, fileTgt, fileSrc;
String protocol = "TLS"; // SSL/TLS
FTPSClient ftps = new FTPSClient(protocol);
FTPSClient ftps2 = new FTPSClient(protocol);
server="***";
username="***";
password="***";
fileSrc="ABC00T.SMP.SAVE.ULRL";
fileTgt="ABC00T.SMP.SAVE.OUT.ULRL";
try
{
ftps.connect(server);
ftps2.connect(server);
if (!FTPReply.isPositiveCompletion(ftps.getReplyCode())
|| !FTPReply.isPositiveCompletion(ftps2.getReplyCode())) {
System.err.println("FTP server refused connection.");
System.exit(-1);
}
ftps.setBufferSize(200);
ftps.setFileType(FTP.BINARY_FILE_TYPE);
if (!ftps.login(username, password))
{
ftps.logout();
System.out.println("ERROR..");
System.exit(-1);
}
ftps.execPBSZ(0);
ftps.execPROT("P");
ftps.enterLocalPassiveMode();
ftps.setAutodetectUTF8(true);
ftps.site("RDW");// site() already sends the SITE prefix
ftps2.setBufferSize(200);
ftps2.setFileType(FTP.BINARY_FILE_TYPE);
if (!ftps2.login(username, password))
{
ftps2.logout();
System.out.println("ERROR..");
System.exit(-1);
}
ftps2.execPBSZ(0);
ftps2.execPROT("P");
ftps2.enterLocalPassiveMode();
ftps2.setAutodetectUTF8(true);
ftps2.site("RDW");// SITE commands must follow login; site() already sends the SITE prefix
ftps2.site("recfm=VB lrecl=106 blksize=27998");
binp=new BufferedInputStream(ftps.retrieveFileStream(fileSrc));
bout=new BufferedOutputStream(ftps2.storeFileStream(fileTgt));
final byte []bufLen= new byte[4];
int readLen=binp.read(bufLen, 0, 4);// Read len
int recCounter=1;
while(readLen!=-1){
ByteArrayInputStream ba2=new ByteArrayInputStream (bufLen,0,4);
int z=ba2.read();
int reclen=0;
int li=0;
while(z!=-1){
if(li==0)
reclen+=z*256;
else if(li==1)
reclen+=z;
li++;
z=ba2.read();
}
ba2.close();
reclen-=4;
byte []buf=new byte[reclen];
int got=0;
boolean isEOF=false;
while(got<reclen) {
int nextLen=binp.read(buf, got, reclen-got);
if(nextLen==-1){// End of file reached mid-record.
isEOF=true;
break;
}
got+=nextLen;
}
String a=new String(buf, ZFile.DEFAULT_EBCDIC_CODE_PAGE);
StringBuilder str=new StringBuilder(a);
//str.append(System.getProperty("line.separator"));
System.out.println(""+str);
//pad the record with spaces until it matches the target record length
if(str.length()<102) {
for (int i = str.length(); i < 102; i++) {
str.append(" ");
}
}
byte []outBytes=str.toString().getBytes(ZFile.DEFAULT_EBCDIC_CODE_PAGE);
bout.write(outBytes);
if(isEOF){
break;
}
readLen=binp.read(bufLen, 0, 4);// Read length- RDW
recCounter++;
}
bout.flush();
bout.close();
binp.close();
ftps.completePendingCommand();
ftps2.completePendingCommand();
ftps.logout();
ftps2.logout();
}
catch (FTPConnectionClosedException e)
{
System.err.println("Server closed connection.");
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
if (ftps.isConnected())
{
try
{
ftps.disconnect();
}
catch (IOException f)
{
f.printStackTrace();
}
}
if (ftps2.isConnected())
{
try
{
ftps2.disconnect();
}
catch (IOException f)
{
f.printStackTrace();
}
}
}
}
}
I am using the above code to read and write a VB file. I am able to read the variable-block records, but while writing, if I don't pad each record with spaces to match the file's record length, the data gets jumbled. If I do pad it, the job consumes a lot of memory. Am I missing something here? How can I solve this issue?

I guess your issue is with ftps2.setFileType(FTP.BINARY_FILE_TYPE);.
Windows/Unix/Linux files don't know anything about "records"; every file is just a stream of bytes. When working with text files, that stream may contain end-of-line characters (0x0A, 0x0D, or 0x0D0A in ASCII). These may be interpreted as the end of one record and the start of a new one, so most FTP tools will start a new record on the z/OS side when they encounter one of these in text mode (and, vice versa, add one at the end of each record when transferring from z/OS).
With binary files things are a bit different, since x0D and x0A are not treated in any special way but are just two byte values among others.
So to get what you want, you have these possibilities:
Transfer the file in text mode, but this will likely result in some sort of codepage conversion being done. If possible, you might configure a custom translation table that does no conversion at all.
Transfer the file in binary mode to some FB dataset and write a tool that splits the continuous byte stream at the correct line-termination character and writes the resulting records to a VB dataset.
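If the store side also honors SITE RDW for uploads in binary mode (an assumption worth verifying against your z/OS FTP server), a variation on the binary route is to copy each record together with its 4-byte Record Descriptor Word, so no padding is needed at all. A rough sketch, with illustrative names:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch (assumes both transfers run in binary with SITE RDW in effect,
// so each record arrives and leaves with its 4-byte RDW): copy records
// RDW and all, without padding to a fixed length.
public class RdwCopy {
    static int copyRdwRecords(InputStream in, OutputStream out) throws IOException {
        DataInputStream din = new DataInputStream(in);
        byte[] rdw = new byte[4];
        int records = 0;
        while (true) {
            try {
                din.readFully(rdw);                 // big-endian length including the RDW itself
            } catch (EOFException e) {
                break;                              // clean end of dataset
            }
            int reclen = (((rdw[0] & 0xFF) << 8) | (rdw[1] & 0xFF)) - 4;
            byte[] rec = new byte[reclen];
            din.readFully(rec);                     // loops over short reads, unlike read()
            out.write(rdw);                         // keep the RDW on the way out
            out.write(rec);
            records++;
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        // one 2-byte record ("AB" in EBCDIC): RDW length 6 includes the RDW
        byte[] one = {0, 6, 0, 0, (byte) 0xC1, (byte) 0xC2};
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int n = copyRdwRecords(new ByteArrayInputStream(one), out);
        System.out.println(n + " record(s), " + out.size() + " bytes"); // 1 record(s), 6 bytes
    }
}
```

This also sidesteps the memory concern: no per-record StringBuilder or padding, just a pass-through of the bytes the server already framed.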

Related

Java ZipInputStream extraction errors

Below is some code which extracts a file from a zip file containing only a single file. However, the extracted file does not match the same file extracted via WinZip or other zip utility. I expect that it might be off by a byte if the file contains an odd number of bytes (because my buffer is size 2 and I just abort once the read fails). However, when analyzing (using WinMerge or Diff) the file extracted with code below vs. file extracted via Winzip, there are several areas where bytes are missing from the Java extraction. Does anyone know why or how I can fix this?
package zipinputtest;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.ZipInputStream;
public class test2 {
public static void main(String[] args) {
try {
ZipInputStream zis = new ZipInputStream(new FileInputStream("C:\\temp\\sample3.zip"));
File outputfile = new File("C:\\temp\\sample3.bin");
OutputStream os = new BufferedOutputStream(new FileOutputStream(outputfile));
byte[] buffer2 = new byte[2];
zis.getNextEntry();
while(true) {
if(zis.read(buffer2) != -1) {
os.write(buffer2);
}
else break;
}
os.flush();
os.close();
zis.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
I was able to produce the error using this image (save it and zip as sample3.zip and run the code on it), but any binary file of sufficient size should show the discrepancies.
while (true) {
if(zis.read(buffer2) != -1) {
os.write(buffer2);
}
else break;
}
Usual problem. You're ignoring the count. Should be:
int count;
while ((count = zis.read(buffer2)) != -1)
{
os.write(buffer2, 0, count);
}
NB:
A buffer size of 2 is ridiculous. Use 8192 or more.
flush() before close() is redundant.
You can use a more explicit way to check whether all bytes are read and written, e.g. a method like
public int extract(ZipInputStream in, OutputStream out) throws IOException {
byte[] buffer = new byte[BUFFER_SIZE];
int total = 0;
int read;
while ((read = in.read(buffer)) != -1) {
total += read;
out.write(buffer, 0, read);
}
return total;
}
If the read count is not passed to write(), the call assumes the entire buffer should be written out, which is not correct when the buffer is not completely filled.
The OutputStream can be flushed and closed inside or outside the extract() method. Calling close() should be enough, since it also calls flush().
In any case, Java's "standard" I/O code, like the java.util.zip package, has been tested and used extensively, so it is highly unlikely to contain a bug so fundamental that bytes would go missing this easily.
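To see why ignoring the count corrupts the output, here is a self-contained demonstration (the class and method names are mine, not from the question) that simulates a stream returning short reads, as ZipInputStream routinely does:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// A stream that returns at most one byte per read() call forces short reads.
// Ignoring the count corrupts the copy; honoring it does not.
public class ShortReadDemo {
    static class OneByteStream extends ByteArrayInputStream {
        OneByteStream(byte[] b) { super(b); }
        @Override public int read(byte[] buf, int off, int len) {
            return super.read(buf, off, Math.min(1, len)); // force short reads
        }
    }

    static byte[] copyIgnoringCount(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[2];
        while (in.read(buffer) != -1) {
            out.write(buffer);                  // BUG: also writes stale bytes
        }
        return out.toByteArray();
    }

    static byte[] copyHonoringCount(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[2];
        int count;
        while ((count = in.read(buffer)) != -1) {
            out.write(buffer, 0, count);        // writes only what was read
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {10, 20, 30};
        System.out.println(copyIgnoringCount(new OneByteStream(data)).length); // 6, corrupted
        System.out.println(copyHonoringCount(new OneByteStream(data)).length); // 3, correct
    }
}
```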

Java: how to synchronize file modification by threads

Only one instance of my Java application can run at a time. It runs on Linux. I need to ensure that one thread doesn't modify the file while the other thread is using it.
I don't know which file locking or synchronization method to use. I have never done file locking in Java and I don't have much Java or programming experience.
I looked into java NIO and I read that "File locks are held on behalf of the entire Java virtual machine. They are not suitable for controlling access to a file by multiple threads within the same virtual machine." Right away I knew that I needed expert help because this is production code and I have almost no idea what I'm doing (and I have to get it done today).
Here's a brief outline of my code to upload some stuff (archive files) to a server. It gets the list of files to upload from a file (call it "listFile") -- and listFile can be modified while this method is reading from it. I minimize the chances of that by copying listFile to a temp file and using that temp file thereafter. But I think I need to lock the file during this copy process (or something like that).
package myPackage;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import com.example.my.FileHelper;
import com.example.my.Logger;
public class BatchUploader implements Runnable {
private volatile boolean stopRunning;
private volatile boolean isUploading;
private int filesUploadedCount;
private String fileToUploadName;
private int processUploads() {
File copyOfListFile = null;
try {
copyOfListFile = new File("/path/to/temp/workfile");
File origFile = new File("/path/to/listFile"); //"listFile" - the file that contains a list of files to upload
DataWriter.copyFile(origFile, copyOfListFile); //see code below
} catch (IOException ex) {
Logger.log(ex);
}
try {
BufferedReader input = new BufferedReader(new FileReader(copyOfListFile));
try {
while (!stopRunning && (fileToUploadName = input.readLine()) != null) {
upload(new File(fileToUploadName));
filesUploadedCount++;
}
} finally {
input.close();
isUploading = false;
}
} catch (IOException ex) {
Logger.log(ex);
}
return filesUploadedCount;
}
}
Here is the code that modifies the list of files to be uploaded used in the above code:
public class DataWriter {
public void modifyListOfFilesToUpload(String uploadedFilename) {
StringBuilder content = new StringBuilder();
try {
File listOfFiles = new File("/path/to/listFile"); //file that contains a list of files to upload
if (!listOfFiles.exists()) {
//some code
}
BufferedReader input = new BufferedReader(new FileReader(listOfFiles));
try {
String line = "";
while ((line = input.readLine()) != null) {
if (!line.isEmpty() && line.endsWith(FILE_EXTENSION)) {
if (!line.contains(uploadedFilename)) {
content.append(String.format("%1$s%n", line));
} else {
//some code
}
} else {
//some code
}
}
} finally {
input.close();
}
this.write("/path/to/", "listFile", content.toString(), false, false, false);
} catch (IOException ex) {
Logger.debug("Error reading/writing uploads logfile: " + ex.getMessage());
}
}
public static void copyFile(File in, File out) throws IOException {
FileChannel inChannel = new FileInputStream(in).getChannel();
FileChannel outChannel = new FileOutputStream(out).getChannel();
try {
inChannel.transferTo(0, inChannel.size(), outChannel);
} finally {
inChannel.close();
outChannel.close();
}
}
}
private void write(String path, String fileName, String data, boolean append, boolean addNewLine, boolean doLog) {
try {
File file = FileHelper.getFile(fileName, path);
BufferedWriter bw = new BufferedWriter(new FileWriter(file, append));
bw.write(data);
if (addNewLine) {
bw.newLine();
}
bw.flush();
bw.close();
if (doLog) {
Logger.debug(String.format("Wrote %1$s%2$s", path, fileName));
}
} catch (java.lang.Exception ex) {
Logger.log(ex);
}
}
}
May I suggest a slightly different approach. AFAIR, on Linux the file rename (mv) operation is atomic on local disks, so there is no chance for one process to see a half-written file.
Let XXX be a sequence number with three (or more) digits. You could let your DataWriter append to a file called listFile-XXX.prepare and write a fixed number N of filenames into it. When N names are written, close the file and rename it (atomic, see above) to listFile-XXX. With the next filename, start writing to listFile-YYY where YYY=XXX+1.
Your BatchUploader may at any time check whether it finds files matching the pattern listFile-XXX, open them, upload the named files, then close and delete them. There is no chance for the threads to mess up each other's files.
Implementation hints:
Make sure the polling mechanism in BatchUploader waits a second or more when it does not find a file ready for upload (to avoid busy-waiting).
You may want to sort the listFile-XXX files by XXX to keep the uploads in sequence.
Of course you could vary the protocol for when listFile-XXX.prepare is closed. If DataWriter has nothing to do for a longer time, you don't want files ready for upload hanging around just because there are not yet N names in the current one.
Benefits: no locking (which will be a pain to get right), no copying, easy overview over the work queue and it state in the file system.
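The hand-off described above can be sketched with java.nio.file (paths and names here are illustrative); Files.move with ATOMIC_MOVE is the programmatic equivalent of the atomic mv:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;

// Sketch of the rename hand-off: the writer fills listFile-XXX.prepare,
// then publishes it with an atomic move, so a reader can never observe a
// half-written list.
public class RenameHandoff {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("uploadQueue");
        Path prepare = dir.resolve("listFile-001.prepare");
        Path ready = dir.resolve("listFile-001");

        Files.write(prepare, Arrays.asList("a.zip", "b.zip"));        // writer fills .prepare
        Files.move(prepare, ready, StandardCopyOption.ATOMIC_MOVE);   // atomic publish

        System.out.println(Files.readAllLines(ready));                // [a.zip, b.zip]
    }
}
```

ATOMIC_MOVE throws if the filesystem cannot guarantee atomicity (e.g. across mount points), which is a useful fail-fast property here.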
Here is a slightly different suggestion. Assuming your file names don't contain '\n' characters (a big assumption on Linux, I know, but your writer can check for that), why not read only complete lines and ignore the incomplete ones? By incomplete lines, I mean lines that end at EOF without a trailing \n.
Edit: see more suggestions in comments below.
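A minimal sketch of this complete-lines filter (class and method names are mine, not from the question):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Only lines terminated by '\n' are returned; an unterminated tail
// (a record still being written) is ignored and picked up next pass.
public class CompleteLines {
    static List<String> completeLines(Reader r) throws IOException {
        List<String> lines = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int c;
        while ((c = r.read()) != -1) {
            if (c == '\n') {
                lines.add(current.toString());
                current.setLength(0);
            } else {
                current.append((char) c);
            }
        }
        return lines; // 'current' may hold an incomplete line; it is discarded
    }

    public static void main(String[] args) throws IOException {
        List<String> lines = completeLines(new StringReader("a.zip\nb.zip\nc.zi"));
        System.out.println(lines); // [a.zip, b.zip] -- "c.zi" is incomplete
    }
}
```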

copying XML file from URL returns incomplete file

I am writing a small program to retrieve a large number of XML files. The program sort of works, but no matter which solution from stackoverflow I use, every XML file I save locally misses the end of the file. By "the end of the file" I mean approximately 5-10 lines of xml code. The files are of different length (~500-2500 lines) and the total length doesn't seem to have an effect on the size of the missing bit. Currently the code looks like this:
package plos;
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.util.logging.Level;
import java.util.logging.Logger;
import static org.apache.commons.io.FileUtils.copyURLToFile;
public class PlosXMLfetcher {
public PlosXMLfetcher(URL u, File f) {
try {
copyURLToFile(u, f);
} catch (IOException ex) {
Logger.getLogger(PlosXMLfetcher.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
I have tried using BufferedInputStream and ReadableByteChannel as well. I have tried running it in threads, I have tried using read and readLine. Every solution gives me an incomplete XML file as return.
In some of my tests (I can't remember which, sorry), I got a socket connection reset error - but the above code executes without error messages.
I have manually downloaded some of the XML files as well, to check if they are actually complete on the remote server - which they are.
I'm guessing that somewhere along the way a BufferedWriter or BufferedOutputStream has not had flush() called on it.
Why not write your own copy function to rule out FileUtils.copyURLToFile(u, f)?
public void copyURLToFile(URL u, File f) throws IOException {
InputStream in = u.openStream();
try {
FileOutputStream out = new FileOutputStream(f);
try {
byte[] buffer = new byte[1024];
int count;
while ((count = in.read(buffer)) > 0) {
out.write(buffer, 0, count);
}
out.flush();
} finally {
out.close();
}
} finally {
in.close();
}
}

Java Heap Memory error from socket

I am trying to open a socket and listen. Clients written in PHP will then send XML requests. At the moment I am just sending the string "test" to it, and I am getting a heap memory error.
Here is my java code for the server:
import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
public class main {
/**
* #param args
*/
public static void main(String[] args) {
server();
}
public static void server() {
ServerSocket MyService = null;
try {
MyService = new ServerSocket(3030);
}
catch (IOException e) {
System.out.println(e);
}
Socket serviceSocket = null;
try {
serviceSocket = MyService.accept();
}
catch (IOException e) {
System.out.println(e);
}
DataInputStream in;
try {
in = new DataInputStream(serviceSocket.getInputStream());
System.out.println("DEV STEP 1");
int len = in.readInt();
System.out.println(len);
byte[] xml = new byte[len];
in.read(xml, 0, len);
//System.out.print(xml.toString());
//Document doc = builder.parse(new ByteArrayInputStream(xml));
}
catch (IOException e) {
System.out.println(e);
}
}
}
The error I am getting is:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at main.server(main.java:39)
at main.main(main.java:12)
I have done a search and there are plenty of explanations of this error on here; however, I cannot work out why, when I am sending a 4-letter string, len is 1952805748.
Well you are getting the out of memory error because the len is so huge. If you are sending the data as characters and then doing a readInt() on it, then that's what's causing your problem. You need to read the data as characters.
Your numeric value is probably the binary for the string "test". You should just read a string from the InputStream; I'm not sure why you need a DataInputStream, as that supports reading binary data, which is not what you are doing. Just use a BufferedInputStream and then do a normal read on it.
To expand on Francis Upton's answer, you are getting a heap exception because you are trying to read n bytes from the incoming socket stream, where n is the totally arbitrary integer you read at the beginning of your processing loop. And the reason I call it totally arbitrary is that you never actually sent a separate int in your client code. So your code is simply reading an int from whatever is in the first 4 bytes of the input stream, which could be anything at all. (In fact, the four ASCII bytes of "test" are 0x74657374, which interpreted as an int is exactly 1952805748.)
Take a look at IOUtils in Apache Commons IO, it contains nice methods for reading an entire data stream in one shot (toByteArray, toString, etc).
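A sketch of matching framing on both ends (the class and method names are illustrative, not from the original code): the client writes the length with writeInt() before the payload, and the server pairs readInt() with readFully():

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Length-prefixed framing: a 4-byte length followed by the payload, so the
// server's readInt() sees a real length instead of the first payload bytes.
public class FramedMessage {
    // client side: length first, then the bytes
    static void writeFrame(DataOutputStream out, byte[] body) throws IOException {
        out.writeInt(body.length);
        out.write(body);
        out.flush();
    }

    // server side: read the length, sanity-check it before allocating,
    // then read exactly that many bytes (readFully loops over short reads)
    static byte[] readFrame(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > 10 * 1024 * 1024) {
            throw new IOException("implausible frame length: " + len);
        }
        byte[] body = new byte[len];
        in.readFully(body);
        return body;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        writeFrame(new DataOutputStream(wire), "test".getBytes(StandardCharsets.US_ASCII));
        byte[] body = readFrame(new DataInputStream(new ByteArrayInputStream(wire.toByteArray())));
        System.out.println(new String(body, StandardCharsets.US_ASCII)); // prints "test"
    }
}
```

The sanity check on len is what turns a garbage prefix into a clean error instead of an OutOfMemoryError. The PHP client would have to send the same 4-byte big-endian length prefix for this to work.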

FTP server output and accents

I've written this little test class to connect up to an FTP server.
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
public class FTPTest {
public static void main(String[] args) {
URL url = null;
try {
url = new URL("ftp://anonymous:Password@127.0.0.1");
} catch (MalformedURLException e) {
e.printStackTrace();
}
URLConnection conn = null;
try {
conn = url.openConnection();
} catch (IOException e) {
e.printStackTrace();
}
InputStream in = null;
try {
in = conn.getInputStream();
} catch (IOException e) {
e.printStackTrace();
}
BufferedInputStream bin = new BufferedInputStream(in);
int b;
try {
while ((b = bin.read()) != -1) {
System.out.print((char) b);
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Here's the output:
-rw-r--r-- 1 ftp ftp 4700 Apr 30 2007 premier.java
-rw-r--r-- 1 ftp ftp 88576 Oct 23 2007 Serie1_1.doc
-rw-r--r-- 1 ftp ftp 1401 Nov 21 2006 tp20061121.txt
drwxr-xr-x 1 ftp ftp 0 Apr 23 20:04 répertoire
Notice the name of the directory at the end of the list. There should be an "é" (e with acute accent) instead of the double character "é".
This reminds me of an issue encountered previously with JSF where there was a mix-up between standards. I have little experience with character-encoding though so I'm not sure what's happening. I'm supposing that the server output is in ASCII so how do I adapt the output so it appears correctly in the console?
You're brute-force converting bytes from the input stream into chars using
char c = (char) b;
This is definitely not the Good Housekeeping approved form.
Streams deliver bytes, and you want chars. Readers deliver chars and will do character set translation for you in an automatic and controlled way.
You should wrap an InputStreamReader around the InputStream. The constructor for InputStreamReader allows you to specify a CharSet, which will let you control the translation.
Reading from the InputStreamReader will of course yield "real" chars. Another benefit is that you can wrap a BufferedReader around the InputStreamReader and then read entire lines at a time (into a String) using readLine.
EDIT: To illustrate what I mean by "wrap around," here's some (untested!) coding to illustrate the idea:
BufferedReader br = new BufferedReader(new InputStreamReader(bin, "US-ASCII"));
...
String line = br.readLine();
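For what it's worth, the "é"-style output is exactly what appears when UTF-8 bytes are decoded as a single-byte charset, so the server listing is probably UTF-8 rather than ASCII. A small self-contained demonstration (names are mine):

```java
import java.nio.charset.StandardCharsets;

// The bytes of "é" (U+00E9) encoded as UTF-8 but decoded as Latin-1
// produce two characters of mojibake; decoding as UTF-8 restores them.
public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "r\u00e9pertoire";
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
        String wrong = new String(utf8, StandardCharsets.ISO_8859_1);
        String right = new String(utf8, StandardCharsets.UTF_8);
        System.out.println(wrong);   // prints "rÃ©pertoire"
        System.out.println(right);   // prints "répertoire"
    }
}
```

So passing "UTF-8" instead of "US-ASCII" to the InputStreamReader above is likely the fix for the accented directory name.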
