I want to write to a password-protected file at a remote location in Java, so I am using the SMB API (JCIFS).
However, I am seeing inconsistencies in the data written to the destination file.
E.g.
If my intention is to write the data as,
AB
CD
EF
GH
When I write the data to the file, the API misses some characters. Sometimes it writes,
AB
CDE   // 'EF' is not on its own line, and 'F' is dropped by the API
GH
and sometimes it writes,
AB
CD
EFH
I am using SmbFileOutputStream to write the file line by line:
smbOutputStream.write(myString.getBytes());
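For reference, here is a minimal sketch of line-by-line writing with explicit newlines and a flush before the stream closes. It is shown against a plain OutputStream (ByteArrayOutputStream for demonstration); jcifs' SmbFileOutputStream is also an OutputStream, so the same pattern would apply. The class and method names are my own.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class LineWriter {
    // Writes each line followed by an explicit newline. SmbFileOutputStream
    // is just another OutputStream, so the same pattern applies to it.
    static void writeLines(OutputStream out, String... lines) throws IOException {
        for (String line : lines) {
            out.write(line.getBytes(StandardCharsets.UTF_8));
            out.write('\n');
        }
        out.flush(); // push any buffered bytes before the stream is closed
    }

    public static void main(String[] args) throws IOException {
        // try-with-resources guarantees close() runs even on exceptions
        try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            writeLines(out, "AB", "CD", "EF", "GH");
            System.out.print(out.toString("UTF-8"));
        }
    }
}
```

If the characters only go missing under concurrent writes from several threads, the write calls would also need external synchronization, since interleaved write() calls can produce exactly this kind of mixed output.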
This is my first post; I'm new to Java. I'm working on a file parser. I've tried to identify whether the file is CSV or another standard format, but it doesn't look like any standard one. I'm working on an Apache Camel solution (my first and last idea :( ), but maybe some of you recognize this file format? Additionally, I've got an .imp file for my output.
Here is my example input:
NrDok:FS-2222/17/W
Data:12.02.2017
SposobPlatn:GOT
NazwaWystawcy:MAAKAI Gawron
AdresWystawcy:33-123 bABA
KodWystawcy:33-112
MiastoWystawcy:bABA
UlicaWystawcy:czysfa 8
NIPWystawcy:123-19-85-123
NazwaOdbiorcy:abc abc-HANDLOWO-USŁUGOWE
AdresOdbiorcy:33-123 fghd
KodOdbiorcy:33-123
MiastoOdbiorcy:Tdsfs
UlicaOdbiorcy:dfdfdA 39
NIPOdbiorcy:82334349
TelefonOdbiorcy:654-522-124
NrOdbiorcyWSieciSklepow:efdsS-sffgsA
IloscLinii:1
Linia:Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Asortyment{dfgv}Sww{}PKWIU{10.12.10}Ilosc{3.40}Cena{n3.21}Wartosc{n11.83}IleWOpak{1}CenaSp{b0.00}
DoZaplaty:252.32
And here is my example output file:
FH 2015.07.31 2015.07.31 F04443 Gotowka
FO 812-123-45-11 P.a.b.Uc"fdad" abcd deffF UL.fdfgdfdA 12/33 33-123 afvdf
FS 779-19-06-082 badfdf S.A. ul. Wisniowa 89 60-003 Poznan
FP 00218746 CHRZAN TARTY EXTRA POLONAISE 180G SZT 32.00 2.21 8 10.39.17.0 32.00 5900138000055
Is there any easy way to convert the first file to the second format? Maybe you know the type of this file? In the meantime, I'm continuing my work with Apache Camel.
Thanks in advance for your time and help!
I suggest you experiment with https://tika.apache.org/1.1/detection.html#Mime_Magic_Detection
It's a very good library for file type recognition.
There is a simple example at https://www.tutorialspoint.com/tika/tika_document_type_detection.htm.
Your file can be read as a standard Java .properties file: that format allows both = and : as key/value separators. However, the fact that it contains non-ISO-8859-1 characters like the Polish Ć may prevent Java from parsing it correctly, since the InputStream-based Properties.load assumes ISO-8859-1.
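As a quick check, the input does load as properties when read through a Reader, which accepts any Unicode text (unlike the InputStream overload). This is a sketch; the key names come from the sample input above:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropsDemo {
    public static void main(String[] args) throws IOException {
        // Two lines lifted from the sample input; ':' acts as the separator
        String input = "NrDok:FS-2222/17/W\nData:12.02.2017\n";
        Properties p = new Properties();
        p.load(new StringReader(input)); // Reader overload handles non-Latin-1 text
        System.out.println(p.getProperty("NrDok")); // FS-2222/17/W
        System.out.println(p.getProperty("Data"));  // 12.02.2017
    }
}
```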
This line
Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Jm{kg.}Asortyment{dfgv}Sww{}PKWIU{10.12.10}Ilosc{3.40}Cena{n3.21}Wartosc{n11.83}IleWOpak{1}CenaSp{b0.00}
seems to be some custom serialization format of an object, in the form
key1{value1}key2{value2}...
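A small hand-rolled parser can split that form into key/value pairs. This is a sketch, not the original serializer; the class name, regex, and sample values are my own choices:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LiniaParser {
    // Splits a line like "Nazwa{ĆWIARTKA KG}Kod{C1}..." into key/value pairs,
    // preserving field order. Braces are assumed not to nest.
    static Map<String, String> parse(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = Pattern.compile("([^{}]+)\\{([^{}]*)\\}").matcher(line);
        while (m.find()) {
            fields.put(m.group(1), m.group(2)); // empty braces yield ""
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> f = parse("Nazwa{ĆWIARTKA KG}Kod{C1}Vat{5}Sww{}");
        System.out.println(f); // {Nazwa=ĆWIARTKA KG, Kod=C1, Vat=5, Sww=}
    }
}
```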
Your output file contains a lot of data that is not present in the input, which makes me think the output is built by querying external systems. You should investigate that yourself; there is no way anyone can guess the transformation from the provided input alone.
I have a string "Përshëndetje botë!" in a .java file and I am trying to print it using System.out.println(). The file is in ISO-8859-1 encoding. In cmd I do
chcp 28591
to change the encoding to ISO-8859-1 (per the list).
Then I compile a .java file using
javac -encoding ISO-8859-1 C:\...\Hello.java
and run it using
java -Dfile.encoding=ISO-8859-1 packagename.Hello
In this case the ë characters are replaced with spaces. I also tried
java -Dfile.encoding=ISO88591 packagename.Hello
and the ë were replaced with wrong foreign symbols.
How would I get it running?
Actual answer
Per the OP's comment, the actual issue was that the font cmd was using didn't have the relevant symbols.
Original post
I'm posting this as an answer because what I want to say is too long for comments :) .
First, please edit your question to include a minimal example of the printing code. For example, if you could write a separate Java program that did nothing but print the message, that would be much easier to debug. (Maybe packagename.Hello is such an example, but I can't tell.)
Second, please try the below, and edit your question to include the results of each step.
Check the actual bytes in your source file to confirm its encoding, then edit your question to include that information. You can use, e.g., the FileFormat.info hex dumper (I am not affiliated with it). For example, here is the output for your string, pasted into a UTF-8 text file:
file name: foo.txt
mime type:
0000-0010: 50 c3 ab 72-73 68 c3 ab-6e 64 65 74-6a 65 20 62 P..rsh.. ndetje.b
0000-0017: 6f 74 c3 ab-21 0d 0a ot..!..
^^ ^^ ^^
Note, at the ^^ markers, that ë in UTF-8 is 0xc3 0xab.
By contrast, in ISO 8859-1 (aka "latin1" in vim), the same text is:
file name: foo.txt
mime type:
0000-0010: 50 eb 72 73-68 eb 6e 64-65 74 6a 65-20 62 6f 74 P.rsh.nd etje.bot
0000-0014: eb 21 0d 0a .!..
^^ ^
Note that ë is now 0xeb.
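You can confirm the same byte values from Java itself. This sketch (my own helper names) prints the encoded bytes of ë under both charsets:

```java
import java.nio.charset.StandardCharsets;

public class EncodingCheck {
    // Renders a byte array as space-separated lowercase hex, e.g. "c3 ab"
    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x ", b));
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        String s = "\u00eb"; // ë, written as an escape to avoid source-encoding issues
        System.out.println("UTF-8:      " + hex(s.getBytes(StandardCharsets.UTF_8)));      // c3 ab
        System.out.println("ISO-8859-1: " + hex(s.getBytes(StandardCharsets.ISO_8859_1))); // eb
    }
}
```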
Try running your command as java packagename.Hello, without any -D option. In my answer that you read, the -D option to java was not necessary.
Try code page 1250, as in the earlier question.
I want to get the page size of a document, such as A4, A5, A6, etc.
The solution I found is to parse the PostScript text and extract the string A6 from
featurebegin{
%%BeginFeature: *PageSize A6
<</DeferredMediaSelection true /PageSize [298 420] /ImagingBBox null /MediaClass null>> setpagedevice
%%EndFeature
}featurecleanup
but this works slowly...
How can I do this better? Do any libraries exist for getting full document information?
I would prefer a Java solution, if one exists.
Your solution only works for a file that conforms to the DSC (Document Structuring Conventions). While many files do conform, others do not. It also only works because the PostScript file happens to contain that comment (% introduces a comment in PostScript).
You could instead override the setpagedevice operator and have it print the requested media size if present.
/Oldsetpagedevice /setpagedevice load def
/setpagedevice {
  dup /PageSize known {
    dup /PageSize get
    dup 0 get 20 string cvs exch 1 get 20 string cvs exch
    (Requested Media Size is ) print print (points by ) print print (points\n) print
  } if
  Oldsetpagedevice
} bind def
What do you mean by 'full document information' ? By the way, you need to be aware that (unlike PDF) PostScript files are programs, not documents. So the only way to know what's really going on is to interpret the program.
You could use Ghostscript, but it does not have a Java interface, and you would need to be much more specific about the information you want.
If you run the PostScript through Ghostscript with -sDEVICE=bbox, it reports the corners of a rectangle that crops the rendered output, which may be (close to) what you want.
The info is usually printed to stderr in the DSC %%BoundingBox: x0 y0 x1 y1 format.
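If you go that route, the stderr line can be picked apart with a small helper like this (a sketch; BBoxParser is a hypothetical name, and you would feed it the line captured from Ghostscript's stderr):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BBoxParser {
    // Parses a "%%BoundingBox: x0 y0 x1 y1" line as printed by gs -sDEVICE=bbox.
    // Returns {x0, y0, x1, y1} in points, or null if the line does not match.
    static int[] parse(String line) {
        Matcher m = Pattern.compile(
                "%%BoundingBox:\\s+(-?\\d+)\\s+(-?\\d+)\\s+(-?\\d+)\\s+(-?\\d+)").matcher(line);
        if (!m.find()) return null;
        return new int[]{
                Integer.parseInt(m.group(1)), Integer.parseInt(m.group(2)),
                Integer.parseInt(m.group(3)), Integer.parseInt(m.group(4))};
    }
}
```

The stderr text itself would come from running something like `gs -q -dBATCH -dNOPAUSE -sDEVICE=bbox file.ps` via ProcessBuilder and reading the process's error stream.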
I want to read and write data on an SLE4442 smart card.
I have an ACR38U-i1 smart card reader.
For writing I use this command APDU:
byte[] cmdApduPutCardUid = new byte[]{(byte)0xFF, (byte)0xD0, (byte)0x40,(byte)0x00, (byte)4,(byte)6,(byte)2,(byte)6,(byte)2};
And for reading data:
byte[] cmdApduGetCardUid = new byte[]{(byte)0xFF,(byte)0xB0,(byte)0x40,(byte)0x00,(byte)0xFF};
Both execute and return SW = 9000, but no data comes back in the response APDU. For example, I write the data 6262, but it is not returned when I read.
I also use a select command before the write and read commands. The select command is:
byte[] cmdApduSlcCardUid = new byte[]{(byte)0xFF,(byte)0xA4,(byte)0x00,(byte)0x00,(byte)0x01,(byte)0x06};
Does anyone have working Java code to read and write an SLE4442 smart card?
APDU commands for working with memory cards can differ between readers and their implemented support; the example linked is for an OmniKey reader.
Take a look at your ACR reader's specification and use its specific pseudo-APDU commands to work with the SLE 4442.
For your question:
4.6.1 SELECT_CARD_TYPE: "FF A4 00 00 01 06", where 0x06 in the data means "Infineon SLE 4432 and SLE 4442".
4.6.2 READ_MEMORY_CARD: "FF B0 00 [Bytes Address] [MEM_L]", where
[Bytes Address]: is the memory address location of memory card
[MEM_L]: Length of data to be read from the memory card
4.6.5 WRITE_MEMORY_CARD: "FF D0 00 [Bytes Address] [MEM_L] [Data]"
[Data]: data to be written to the memory card
You used P1 = 0x40 and this could be an issue.
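Putting the quoted commands together, the APDUs might be built like this. This is a sketch following the manual excerpts above; the address and data values are illustrative, and the class name is my own:

```java
public class Sle4442Apdus {
    // SELECT_CARD_TYPE: FF A4 00 00 01 06 (0x06 = Infineon SLE 4432/4442)
    static byte[] selectCardType() {
        return new byte[]{(byte) 0xFF, (byte) 0xA4, 0x00, 0x00, 0x01, 0x06};
    }

    // READ_MEMORY_CARD: FF B0 00 [address] [length] -- note P1 = 0x00, not 0x40
    static byte[] readMemory(int address, int length) {
        return new byte[]{(byte) 0xFF, (byte) 0xB0, 0x00, (byte) address, (byte) length};
    }

    // WRITE_MEMORY_CARD: FF D0 00 [address] [length] [data...]
    static byte[] writeMemory(int address, byte[] data) {
        byte[] apdu = new byte[5 + data.length];
        apdu[0] = (byte) 0xFF;
        apdu[1] = (byte) 0xD0;
        apdu[2] = 0x00;               // P1 = 0x00 per the manual excerpt
        apdu[3] = (byte) address;     // memory address on the card
        apdu[4] = (byte) data.length; // MEM_L
        System.arraycopy(data, 0, apdu, 5, data.length);
        return apdu;
    }
}
```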
I'm receiving a base64-encoded zip file (in the form of a string) from a SOAP request.
I can decode the string successfully using a stand-alone program, b64dec.exe, but I need to do it in a java routine. I'm trying to decode it (theZipString) with Apache commons-codec-1.7.jar routines:
import org.apache.commons.codec.binary.Base64;
import org.apache.commons.codec.binary.StringUtils;
StringUtils.newString(Base64.decodeBase64(theZipString), "ISO-8859-1");
Zip file readers open the resulting file and show the list of content files but the content files have CRC errors.
I compared the result of my java routine with the result of the b64dec.exe program (using UltraEdit) and found that they are identical with the exception that eight different byte-values, where ever they appear in the b64dec.exe result, are replaced by 3F ("?") in mine. The values and their ISO-8859-1 character names are A4 ('currency'), A6 ('broken bar'), A8 ('diaeresis'), B4 ('acute accent'), B8 ('cedilla'), BC ('vulgar fraction 1/4'), BD ('vulgar fraction 1/2'), and BE ('vulgar fraction 3/4').
I'm guessing that the StringUtils.newString function is not translating those eight values to the string output, because I tried other 8-bit character sets: UTF-8, and cp437. Their results are similar but worse, with many more 3F, "?" substitutions.
Any suggestions? What character set should I use for the newString function to convert a .zip string? Is the Apache function incapable of this translation? Is there a better way to do this decode?
Thanks!
A zip file is not a string. It's not encoded text. It may contain text files, but that's not the same thing. It's just binary data.
If you treat arbitrary binary data as a string, bad things will happen. Instead, you should use streams or byte arrays. So this is fine:
byte[] zipData = Base64.decodeBase64(theZipString);
... but don't try to convert that to a string. If you write out that byte[] to a file (probably with FileOutputStream or some utility method) it should be fine.
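For example, with the JDK's built-in decoder (java.util.Base64, available since Java 8, as an alternative to commons-codec), the decode-and-write step stays byte-safe end to end. The class and method names here are my own:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class ZipDecoder {
    // Decodes a base64 string straight to bytes and writes them to disk.
    // No String/charset round-trip, so no bytes get mangled into '?'.
    static void decodeToFile(String base64, Path target) throws IOException {
        byte[] zipData = Base64.getMimeDecoder().decode(base64); // tolerates line breaks
        Files.write(target, zipData);
    }
}
```

The MIME decoder is used because base64 payloads from SOAP messages often contain line breaks, which the basic decoder rejects.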