All I'm trying to do is drop a log file on the IFS.
Here is my code:
# JRuby, calling the jt400 classes directly
def write(target_filename, data)
  stream = com.ibm.as400.access.IFSFileOutputStream.new(AS400.sys, target_filename)
  stream.write(data.to_java_bytes)
  stream.flush
  stream.close
end
When I read it back through the jt400 library, it comes out OK.
But when I go through QShell or WRKLNK, the file seems empty.
Any ideas why? Is it the CCSID?
Found the issue. I was using IFSFileOutputStream to write what is really text as a binary stream. Switching to IFSTextFileOutputStream resolved the problem.
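For reference, a minimal Java sketch of the fix (the original code above is JRuby; the host, user, and password here are placeholder connection details):

import com.ibm.as400.access.AS400;
import com.ibm.as400.access.IFSTextFileOutputStream;

public class IfsLogWriter {
    public static void write(String targetFilename, String data) throws Exception {
        AS400 system = new AS400("host", "user", "password"); // placeholders
        IFSTextFileOutputStream stream = new IFSTextFileOutputStream(system, targetFilename);
        try {
            // write(String) performs the text conversion, so the file
            // comes out readable in QShell and WRKLNK
            stream.write(data);
        } finally {
            stream.close();
        }
    }
}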
This is my first post, and I'm new to Java. I'm working on a file parser. I've tried to identify whether the input is CSV or another standard format, but it doesn't look like one. I'm working on an Apache Camel solution (my first and last idea :( ), but maybe some of you recognize this kind of file format? Additionally, I've got an .imp file for my output.
Here is my example input:
NrDok:FS-2222/17/W
Data:12.02.2017
SposobPlatn:GOT
NazwaWystawcy:MAAKAI Gawron
AdresWystawcy:33-123 bABA
KodWystawcy:33-112
MiastoWystawcy:bABA
UlicaWystawcy:czysfa 8
NIPWystawcy:123-19-85-123
NazwaOdbiorcy:abc abc-HANDLOWO-USŁUGOWE
AdresOdbiorcy:33-123 fghd
KodOdbiorcy:33-123
MiastoOdbiorcy:Tdsfs
UlicaOdbiorcy:dfdfdA 39
NIPOdbiorcy:82334349
TelefonOdbiorcy:654-522-124
NrOdbiorcyWSieciSklepow:efdsS-sffgsA
IloscLinii:1
Linia:Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Asortyment{dfgv}Sww{}PKWIU{10.12.10}Ilosc{3.40}Cena{n3.21}Wartosc{n11.83}IleWOpak{1}CenaSp{b0.00}
DoZaplaty:252.32
And here is my example output file:
FH 2015.07.31 2015.07.31 F04443 Gotowka
FO 812-123-45-11 P.a.b.Uc"fdad" abcd deffF UL.fdfgdfdA 12/33 33-123 afvdf
FS 779-19-06-082 badfdf S.A. ul. Wisniowa 89 60-003 Poznan
FP 00218746 CHRZAN TARTY EXTRA POLONAISE 180G SZT 32.00 2.21 8 10.39.17.0 32.00 5900138000055
Is there any easy way to convert the first file to the second file format? Maybe you know the type of this file? In the meantime, I'm continuing my work with Apache Camel.
Thanks in advance for your time and help!
I suggest you play with https://tika.apache.org/1.1/detection.html#Mime_Magic_Detection
It's a very good library for file type recognition.
There's a simple example at https://www.tutorialspoint.com/tika/tika_document_type_detection.htm.
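In short, detection through the Tika facade looks like this (the file name is a placeholder):

import java.io.File;
import org.apache.tika.Tika;

public class TypeDetector {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        // Detection combines magic bytes with the file name
        String mimeType = tika.detect(new File("input.imp")); // placeholder path
        System.out.println(mimeType);
    }
}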
Your file can be read as a standard Java .properties file; that format allows both = and : as key/value separators. However, the fact that it contains characters outside ISO-8859-1, like the Polish Ć, may prevent Java from parsing it correctly unless you load it through a Reader with the right encoding.
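For example, a sketch that loads it with an explicit encoding (the file name and the UTF-8 charset are assumptions):

import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

Properties props = new Properties();
try (Reader reader = new InputStreamReader(new FileInputStream("input.imp"), StandardCharsets.UTF_8)) {
    props.load(reader); // the Reader overload sidesteps the ISO-8859-1 default
}
System.out.println(props.getProperty("NrDok")); // FS-2222/17/W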
This line
Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Asortyment{dfgv}Sww{}PKWIU{10.12.10}Ilosc{3.40}Cena{n3.21}Wartosc{n11.83}IleWOpak{1}CenaSp{b0.00}
seems to be some custom serialization of an object in the form
key1{value1}key2{value2}...
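If you need to break that line apart, a simple hypothetical parser for the key{value} pattern could look like this:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LiniaParser {
    // Matches one key{value} pair; values may be empty, as in Sww{}
    private static final Pattern PAIR = Pattern.compile("(\\w+)\\{([^}]*)\\}");

    public static Map<String, String> parse(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = PAIR.matcher(line);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }
}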
Your output file contains lots of data that is not present in the input, which makes me think some data is queried from external systems to build the output. You should investigate that yourself; there is no way anyone can guess the transformation from the provided input alone.
I am re-implementing a subset of pdftk (pdftk fails with newer versions of PDF), and one of its features is the ability to write an interactive PDF file to standard output (for piping purposes). I currently do that with:
if("".equals(output)){
File tmp=new File("tmp.pdf");
doc.save(tmp);
output= new String(Files.readAllBytes(Paths.get("tmp.pdf")), "UTF-8");
tmp.delete();
}
System.out.println(output);
The problem is that when I pipe this to out.pdf and open it, only the form fields are in the new PDF file. My first thought was that the second line was the faulty one, but tmp.pdf is the full PDF file, suggesting that the problem is in the line where I read the PDF into a String. Any suggestions?
Edit:
I found a different way that mostly works, using /dev/stdout or CON (OS dependent). This way is better as it doesn't create temp files, but on Windows it doesn't pipe correctly. Any way to make it pipe?
if("".equals(output)){
if("W".equals(System.getProperty("os.name").substring(0,1)))
doc.save(new File("CON"));
else
doc.save(new File("/dev/stdout"));
System.out.println(output);
As discussed in the comments - instead of saving to a temp file, you can save to System.out:
doc.save(System.out);
I've never tested whether System.out keeps the content intact for this purpose, though, so I'd recommend a binary comparison between the original PDF and what comes out of the pipe.
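If System.out turns out to mangle the bytes (on Windows, for instance), an alternative sketch is to write to the raw stdout descriptor; doc is assumed to be the loaded document from the question:

import java.io.BufferedOutputStream;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.OutputStream;

// Bypasses the PrintStream wrapper around stdout
try (OutputStream out = new BufferedOutputStream(new FileOutputStream(FileDescriptor.out))) {
    doc.save(out);
}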
We have a folder where we dump a lot of files. Our program needs to read one specific file with the latest version. The file name is something like "2016-03-04-12-46-48_ABC_123456_1.xml".
Instead of reading all the files and then iterating to find the exact one, I have used the following code with a regular expression:
File folder = new File("C:\\some_folder");
folder.listFiles((FilenameFilter) new AwkFilenameFilter(
        "(\\d){4}-(\\d){2}-(\\d){2}-(\\d){2}-(\\d){2}-(\\d){2}_ABC_" + <ID_String> + "_(\\d){1,2}"));
But for some reason the regular expression is not working. Can someone please help with this?
It seems you are missing the file extension in the regex:
"(\\d){4}-(\\d){2}-(\\d){2}-(\\d){2}-(\\d){2}-(\\d){2}_ABC_" + <ID_String> + "_(\\d){1,2}\\.xml"
Try out this one:
"\\d{4}-\\d{2}-\\d{2}-\\d{2}-\\d{2}-\\d{2}_ABC_" + <ID string> + "_\\d{1,2}\\.xml"
It works perfectly for me.
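In case AwkFilenameFilter isn't on your classpath, here is a sketch using only the JDK; the folder path and the "123456" ID are placeholders:

import java.io.File;
import java.util.regex.Pattern;

public class LatestFileFinder {
    public static void main(String[] args) {
        String id = "123456"; // placeholder for <ID_String>
        Pattern p = Pattern.compile(
                "\\d{4}-\\d{2}-\\d{2}-\\d{2}-\\d{2}-\\d{2}_ABC_" + Pattern.quote(id) + "_\\d{1,2}\\.xml");
        File folder = new File("C:\\some_folder");
        // Keep only the names that match the full pattern
        File[] matches = folder.listFiles((dir, name) -> p.matcher(name).matches());
        if (matches != null) {
            for (File f : matches) {
                System.out.println(f.getName());
            }
        }
    }
}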
I am having a DataHandler problem.
I am trying to write its contents to a file. The file gets created with the size of the allocated buffer, but it is damaged and has no readable content, so I can't get anything written to it.
This is the code I'm using.
Important: ciDoc is a javax.activation.DataHandler.
// Read the DataHandler's stream fully, then dump the bytes to the target file
byte[] buffer = org.apache.commons.io.IOUtils.toByteArray(ciDoc.getInputStream());
org.apache.commons.io.FileUtils.writeByteArrayToFile(fileItemUCM.getFile(), buffer);
item.setFile(fileItemUCM.getFile());
The file from fileItemUCM.getFile() is always damaged; really, nothing is written into it.
Finally I solved this issue. The trouble happened because someone had enabled MTOM transfers on the server side. Modifying the Spring WebServiceTemplate to work with MTOM resolved my problem.
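A sketch of one common way to enable MTOM on the Spring WS client side (the JAXB context path is a placeholder; your configuration may differ):

import org.springframework.oxm.jaxb.Jaxb2Marshaller;
import org.springframework.ws.client.core.WebServiceTemplate;

Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
marshaller.setContextPath("com.example.generated"); // placeholder JAXB package
marshaller.setMtomEnabled(true);                    // handle MTOM-optimized attachments

WebServiceTemplate template = new WebServiceTemplate();
template.setMarshaller(marshaller);
template.setUnmarshaller(marshaller);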
Thanks!
I'm using POI 3.7, and the uploaded file is an .xlsx.
The console shows:
org.apache.poi.POIXMLException: org.apache.poi.openxml4j.exceptions.InvalidFormatException: Package should contain a content type part [M1.13]
at org.apache.poi.util.PackageHelper.open(PackageHelper.java:41)
at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(XSSFWorkbook.java:186)
at poi.POITest.ReadAndPrintExcelFile(POITest.java:15)
at poi.POITest.main(POITest.java:59)
Caused by: org.apache.poi.openxml4j.exceptions.InvalidFormatException: Package should contain a content type part [M1.13]
at org.apache.poi.openxml4j.opc.ZipPackage.getPartsImpl(ZipPackage.java:147)
at org.apache.poi.openxml4j.opc.OPCPackage.getParts(OPCPackage.java:592)
at org.apache.poi.openxml4j.opc.OPCPackage.open(OPCPackage.java:222)
at org.apache.poi.util.PackageHelper.open(PackageHelper.java:39)
... 3 more
Just use org.apache.poi.ss.usermodel.WorkbookFactory instead of creating instances with new HSSFWorkbook() or new XSSFWorkbook():
Workbook exWorkBook = WorkbookFactory.create(excelInputStream);
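For completeness, a self-contained sketch (the file name is a placeholder):

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class WorkbookOpener {
    public static void main(String[] args) throws Exception {
        try (InputStream in = new FileInputStream("upload.xlsx")) { // placeholder path
            // WorkbookFactory sniffs the container and returns the right implementation
            Workbook workbook = WorkbookFactory.create(in);
            System.out.println(workbook.getNumberOfSheets());
        }
    }
}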
Not the answer you may want to hear, but I found I get this error when the password is wrong.
Check whether the call to Decryptor.verifyPassword() returns true in your code. If so, the password should be fine.
In my case I got false, the code ignored it and tried to read the file anyway, and then I got the "Package should contain a content type part [M1.13]" error.
Once I typed in the correct password, I got true back and the file was decrypted.
Hope this helps.
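A sketch of that check, assuming an encrypted .xlsx; the path and password are placeholders, and the exact class names can vary a little between POI versions:

import java.io.FileInputStream;
import org.apache.poi.poifs.crypt.Decryptor;
import org.apache.poi.poifs.crypt.EncryptionInfo;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ProtectedWorkbookReader {
    public static void main(String[] args) throws Exception {
        POIFSFileSystem fs = new POIFSFileSystem(new FileInputStream("protected.xlsx"));
        EncryptionInfo info = new EncryptionInfo(fs);
        Decryptor decryptor = Decryptor.getInstance(info);
        // Bail out instead of reading on; ignoring a false here leads to the M1.13 error
        if (!decryptor.verifyPassword("secret")) {
            throw new IllegalStateException("Wrong password");
        }
        XSSFWorkbook workbook = new XSSFWorkbook(decryptor.getDataStream(fs));
        System.out.println(workbook.getNumberOfSheets());
    }
}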
I use POI, and sometimes when this happens you just have to experiment to pinpoint the problem. Here are things I have done in the past to help figure out what the problem is:
Convert the file to .xls format and see if it loads. If it does, resave as .xlsx and try again.
If the file has multiple sheets, try saving each sheet as a separate file and see if they load.
If you narrow it down to a specific sheet, load parts of the sheet and see which part causes the problem.
Usually, if you use this "divide and conquer" approach, you can figure out the problem pretty quickly.
Formulas and macros can be particularly problematic.
If you are using OpenOffice and saving the file in .xlsx format, you will still get the error; using XSSF doesn't solve it either. You need to save the sheet with Microsoft Office Excel to avoid the error.
I found that I had corrupted my .xlsx file, so I had to delete it and recreate it.