java.util.zip.ZipException: invalid general purpose flag: 9

I get this error on Android 6.0:
java.util.zip.ZipException: Invalid General Purpose Bit Flag: 9
java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:253)
And this is my code:
ZipInputStream zin = new ZipInputStream(getAppContext().getContentResolver().openInputStream(uri));
What does it mean? What am I doing wrong?

Here's the ZIP file specification: https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html
Flags (general purpose bit flag):
Bit 00: encrypted file
Bit 01: compression option
Bit 02: compression option
Bit 03: data descriptor
Bit 04: enhanced deflation
Bit 05: compressed patched data
Bit 06: strong encryption
Bit 07-10: unused
Bit 11: language encoding
Bit 12: reserved
Bit 13: mask header values
Bit 14-15: reserved
So a GPBF value of 9 (binary 1001) has both the "encrypted file" bit (bit 00) and the "data descriptor" bit (bit 03) set.
A peek at the Android source code here:
https://chromium.googlesource.com/android_tools/+/9e9b6169a098bc19986e44fbbf65e4c29031e4bd/sdk/sources/android-22/java/util/zip/ZipFile.java
(an older version, but I suspect this hasn't changed) shows this:
static final int GPBF_ENCRYPTED_FLAG = 1 << 0;
[...]
/**
 * Supported General Purpose Bit Flags Mask.
 * Bit mask of bits not supported.
 * Note: The only bit that we will enforce at this time
 * is the encrypted bit. Although other bits are not supported,
 * we must not enforce them as this could break some legitimate
 * use cases (See http://b/8617715).
 */
static final int GPBF_UNSUPPORTED_MASK = GPBF_ENCRYPTED_FLAG;
[...]
// At position 6 we find the General Purpose Bit Flag.
int gpbf = Short.reverseBytes(is.readShort()) & 0xffff;
if ((gpbf & ZipFile.GPBF_UNSUPPORTED_MASK) != 0) {
    throw new ZipException("Invalid General Purpose Bit Flag: " + gpbf);
}
So your ZIP file claims to contain an encrypted file (bit 00 of the GPBF is set), and Android's ZipFile/ZipInputStream implementation doesn't support reading encrypted entries.
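If you want to detect this condition up front instead of catching the ZipException, you could inspect the first local file header yourself and test bit 0 of the GPBF. A minimal sketch, assuming the stream is positioned at the start of the archive (the class and method names here are made up for illustration):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public final class ZipEncryptionCheck {
    /**
     * Returns true if the first local file header in the stream has the
     * "encrypted file" bit (bit 0) of the General Purpose Bit Flag set.
     */
    public static boolean firstEntryIsEncrypted(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        byte[] header = new byte[8];
        din.readFully(header);
        // Local file header signature 0x04034b50, stored little-endian ("PK\3\4")
        if (header[0] != 0x50 || header[1] != 0x4b || header[2] != 0x03 || header[3] != 0x04) {
            return false; // not a local file header
        }
        // Bytes 4-5 hold "version needed to extract"; bytes 6-7 hold the GPBF (little-endian)
        int gpbf = (header[6] & 0xff) | ((header[7] & 0xff) << 8);
        return (gpbf & 1) != 0;
    }
}

Java's built-in ZipInputStream simply has no decryption support, so if the archive really is password-protected you'll need a third-party library such as zip4j, or a re-packaged archive without encryption.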

Related

Generating and validating a signature with ED25519 expanded private key

I am building an encrypted messaging app over the Tor network, and I'm currently struggling to use a Tor-generated ed25519 private key to sign and verify a message.
The piece of code below works with a 32-byte key; however, after skipping the 32 header bytes of hs_ed25519_secret_key, it fails to verify the signature in the following cases:
1 - secret: left half of the remaining 64 bytes, public: right half
2 - secret: left half of the remaining 64 bytes, public: last 32 bytes of hs_ed25519_public_key after removing the header
3 - secret: all 64 bytes, public: last 32 bytes of hs_ed25519_public_key
I found a Python library, PyNaCl, that seems to handle this, but I'm not very familiar with Python.
Is there something I am doing wrong, or does BouncyCastle not support expanded 64-byte private keys?
import org.bouncycastle.crypto.Signer;
import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;
import org.bouncycastle.crypto.params.Ed25519PublicKeyParameters;
import org.bouncycastle.crypto.signers.Ed25519Signer;

import java.nio.charset.StandardCharsets;

public class ED25519 {
    public static void main(String[] args) throws Exception {
        byte[] message = "a msg to be signed".getBytes(StandardCharsets.UTF_8);

        Signer signer = new Ed25519Signer();
        signer.init(true, new Ed25519PrivateKeyParameters(KeysUtil.myPrivKey, 0));
        signer.update(message, 0, message.length);

        Signer verifier = new Ed25519Signer();
        verifier.init(false, new Ed25519PublicKeyParameters(KeysUtil.myPubKey, 0));
        verifier.update(message, 0, message.length);

        boolean validSig = verifier.verifySignature(signer.generateSignature());
    }
}
BouncyCastle uses the RFC 8032 definition of the private key, which is essentially a 32-byte seed. That seed is fed into SHA-512, which produces 64 bytes consisting of an 'internal' 32-byte secret ("s") and an additional 32-byte pseudo-random value ("h"). Tor treats these 64 bytes (the SHA-512 output) as the secret key, so the two formats are incompatible.
Of course it would be relatively straightforward to provide a way to work with these keys (at least in low-level utilities), but it doesn't exist yet.
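For reference, the expansion itself is easy to reproduce. The 64 bytes after the 32-byte header of hs_ed25519_secret_key should be SHA-512(seed) with the RFC 8032 clamping applied to the first half (at least, that is my reading of Tor's format). A minimal sketch of that expansion, useful for checking whether a seed and an expanded key correspond:

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Ed25519Expand {
    // Expands a 32-byte RFC 8032 seed into the 64-byte form (s || h)
    // described above: SHA-512(seed) with the first half clamped.
    public static byte[] expand(byte[] seed) throws NoSuchAlgorithmException {
        byte[] h = MessageDigest.getInstance("SHA-512").digest(seed);
        h[0] &= (byte) 0xF8;  // clear the low 3 bits
        h[31] &= (byte) 0x7F; // clear the top bit
        h[31] |= (byte) 0x40; // set the second-highest bit
        return h;
    }
}

The expansion is one-way (you cannot recover the seed from SHA-512 output), which is exactly why BouncyCastle's seed-based Ed25519PrivateKeyParameters cannot load Tor's expanded keys.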

javax.smartcardio case 4 APDU vanishing - 6700 response - warning

Using javax.smartcardio classes for smartcard programming, I encountered a persistent error - getting back 6700 (invalid length) and similar error codes from the card when the code looked fine. Example code:
req = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid, 0x00);
This is supposed to construct a case 4 APDU. Why does the card respond as if I were missing something?
Short answer
Use aid, 0x100 instead of aid, 0x00.
Long answer (better get some coffee):
That's because of the confusion between Ne and Le. Ne is the maximum number of bytes that can be returned to the terminal; it is just a number, without a specific representation. Le, however, is the encoding (the representation in bytes) of Ne.
Now for ISO/IEC 7816-4 there is a little trick: Le is absent (no bytes) for an ISO case 1 or 3 command, i.e. one without response data (RDATA). So defining Le = 00 to mean "no response data" would be spurious. Instead, 7816-4 uses Le = 00 to mean Ne = 256. Similarly, Le = 0000 (or Le = 000000) means Ne = 65536, i.e. 2^16. The double- and triple-byte encodings are only used for extended-length APDUs.
In the CommandAPDU constructor, however, you have to specify Ne, not Le. Passing 0x00 is therefore the same as saying there is no response data, so the APDU will not be interpreted as an ISO case 4 command and it will fail (correctly, in this case: 6700 is exactly what you should expect).
So just specify how many bytes you expect. If the value is larger than 256 then an extended-length APDU will be required (or command chaining, but that's a topic in itself). Ne < 0 or Ne > 65536 is of course not supported.
Note that many protocol descriptions, including the Java Card API, got the distinction between Ne and Le wrong (this has been fixed in Java Card API v3.0.5, by the way). That's kind of strange, as there are many, many issues with 7816-4, but this is not one of them; it's specified pretty clearly.
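For completeness, a sketch of the corrected construction (the AID bytes below are a placeholder, not taken from the question):

import javax.smartcardio.CommandAPDU;

public class SelectByAid {
    public static void main(String[] args) {
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01}; // placeholder AID
        // Pass Ne = 0x100 (256): "up to 256 response bytes expected".
        // The constructor takes Ne (a number) and encodes it on the wire as Le = 00.
        CommandAPDU select = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid, 0x100);
        System.out.printf("Nc=%d, Ne=%d%n", select.getNc(), select.getNe());
    }
}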

ASC Visual Basic for Java

I need a function in Java that does the same as the Asc function in Visual Basic. I've been looking for it on the internet, but I can't find a solution.
The string whose character codes I need was created in Visual Basic, using ISO 8859-1 / Microsoft Windows Latin-1 characters. The Asc function in Visual Basic knows those codes, but in Java I can't find a function that does the same thing.
In Java I know this much:
String myString = "ÅÛ–ßÕÅÝ•ÞÃ";
int first = (int) myString.charAt(0);  // "Å" - VB and Java return: 197
int second = (int) myString.charAt(1); // "Û" - VB and Java return: 219
int third = (int) myString.charAt(2);  // "–" - VB returns: 150, Java returns: 8211
With the first two characters I have no problem, but the third is not an ASCII code.
How can I get the same codes in VB and Java?
First of all, note that ISO 8859-1 != Windows Latin-1. (See http://en.wikipedia.org/wiki/Windows-1252)
The problem is that Java encodes characters as UTF-16, so casting a char to int will generally give you its Unicode value.
To get the Latin-1 encoding of a char, first convert it to a Latin-1 encoded byte array:
import java.nio.charset.Charset;

public class Encoding {
    public static void main(String[] args) {
        // Cp1252 is Windows codepage 1252 (Windows Latin-1)
        byte[] bytes = "ÅÛ–ßÕÅÝ•ÞÃ".getBytes(Charset.forName("Cp1252"));
        for (byte b : bytes) {
            // bytes are signed in Java; mask to get the unsigned 0-255 value
            System.out.println(b & 255);
        }
    }
}
prints:
197
219
150
223
213
197
221
149
222
195
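If you want a drop-in equivalent of VB's Asc for a single character, a small helper along these lines should work (the name vbAsc is made up for this sketch; characters that don't exist in Cp1252 come back as '?', i.e. 63):

import java.nio.charset.Charset;

public class AscHelper {
    private static final Charset WINDOWS_1252 = Charset.forName("windows-1252");

    // Returns the Windows-1252 code (0-255) of c, like VB's Asc.
    static int vbAsc(char c) {
        byte[] encoded = String.valueOf(c).getBytes(WINDOWS_1252);
        return encoded[0] & 0xFF;
    }

    public static void main(String[] args) {
        System.out.println(vbAsc('Å')); // 197
        System.out.println(vbAsc('–')); // 150
    }
}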

String.getBytes("UTF-32") returns different results on JVM and Dalvik VM

I have a 48-character AES-192 encryption key which I'm using to decrypt an encrypted database.
However, it tells me the key length is invalid, so I logged the results of getBytes().
When I execute:
final String string = "346a23652a46392b4d73257c67317e352e3372482177652c";
final byte[] utf32Bytes = string.getBytes("UTF-32");
System.out.println(utf32Bytes.length);
Using BlueJ on my mac (Java Virtual Machine), I get 192 as the output.
However, when I use:
Log.d(C.TAG, "Key Length: " + String.valueOf("346a23652a46392b4d73257c67317e352e3372482177652c".getBytes("UTF-32").length));
I get 196 as the output.
Does anybody know why this is happening, and where Dalvik is getting an additional 4 bytes from?
You should specify the endianness on both machines:
final byte[] utf32Bytes = string.getBytes("UTF-32BE");
Note that "UTF-32BE" is a different encoding, not a special .getBytes parameter. It has fixed endianness and doesn't need a BOM, which is also where your extra 4 bytes come from: Dalvik's "UTF-32" encoder prepends a 4-byte byte order mark, while the desktop JVM's does not. More info: http://www.unicode.org/faq/utf_bom.html#gen6
Why would you UTF-32 encode a plain hexadecimal number? That's 8x larger than it needs to be. :P
String s = "346a23652a46392b4d73257c67317e352e3372482177652c";
byte[] bytes = new BigInteger(s, 16).toByteArray();
String s2 = new BigInteger(1, bytes).toString(16);
System.out.println("Strings match is "+s.equals(s2)+" length "+bytes.length);
prints
Strings match is true length 24
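Those 24 bytes are exactly the size an AES-192 key needs (24 x 8 = 192 bits). As a follow-up sketch, they can be wrapped in a SecretKeySpec for JCE; note that BigInteger.toByteArray() happens to give 24 bytes for this particular value, but in general it can add or drop a leading byte, so a real hex decoder is safer:

import java.math.BigInteger;
import javax.crypto.spec.SecretKeySpec;

public class AesKeyFromHex {
    public static void main(String[] args) {
        String hex = "346a23652a46392b4d73257c67317e352e3372482177652c";
        byte[] keyBytes = new BigInteger(hex, 16).toByteArray(); // 24 bytes for this value
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");  // usable for AES-192
        System.out.println("Key length in bits: " + keyBytes.length * 8); // 192
    }
}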

Java Text File Encoding

I have a text file and it can be ANSI (with ISO-8859-2 charset), UTF-8, UCS-2 Big or Little Endian.
Is there any way to detect the encoding of the file to read it properly?
Or is it possible to read a file without giving the encoding? (and it reads the file as it is)
(There are several programs that can detect and convert the encoding/format of text files.)
Yes, there are a number of methods for doing character encoding detection, specifically in Java. Take a look at jchardet, which is based on the Mozilla algorithm. There are also cpdetector and a project by IBM called ICU4j. I'd take a look at the last one, as it seems to be more reliable than the other two. They work by statistical analysis of the bytes in the file, and ICU4j will also provide a confidence level for the character encoding it detects, so you can use this in the case above. It works pretty well.
UTF-8 and UCS-2/UTF-16 can be distinguished reasonably easily via a byte order mark at the start of the file. If this exists then it's a pretty good bet that the file is in that encoding - but it's not a dead certainty. You may well also find that the file is in one of those encodings, but doesn't have a byte order mark.
I don't know much about ISO-8859-2, but I wouldn't be surprised if almost every file were a valid text file in that encoding. The best you'll be able to do is check it heuristically. Indeed, the Wikipedia page talking about it suggests that only byte 0x7f is invalid.
There's no such thing as reading a file "as it is" and yet getting text out: a file is a sequence of bytes, so you have to apply a character encoding in order to decode those bytes into characters.
You can use ICU4J (http://icu-project.org/apiref/icu4j/)
Here is my code:
String charset = "ISO-8859-1"; //Default chartset, put whatever you want
byte[] fileContent = null;
FileInputStream fin = null;
//create FileInputStream object
fin = new FileInputStream(file.getPath());
/*
* Create byte array large enough to hold the content of the file.
* Use File.length to determine size of the file in bytes.
*/
fileContent = new byte[(int) file.length()];
/*
* To read content of the file in byte array, use
* int read(byte[] byteArray) method of java FileInputStream class.
*
*/
fin.read(fileContent);
byte[] data = fileContent;
CharsetDetector detector = new CharsetDetector();
detector.setText(data);
CharsetMatch cm = detector.detect();
if (cm != null) {
int confidence = cm.getConfidence();
System.out.println("Encoding: " + cm.getName() + " - Confidence: " + confidence + "%");
//Here you have the encode name and the confidence
//In my case if the confidence is > 50 I return the encode, else I return the default value
if (confidence > 50) {
charset = cm.getName();
}
}
Remember to add whatever try/catch handling you need.
I hope this works for you.
If your text file is a properly created Unicode text file, then the byte order mark (BOM) should tell you all the information you need. See here for more details about the BOM.
If it's not, then you'll have to use an encoding-detection library.
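Since both this answer and the one above lean on the BOM, here is a minimal BOM-sniffing sketch covering only the encodings named in the question (UTF-8 and UCS-2/UTF-16 big or little endian); a real implementation would also push the consumed bytes back (e.g. with PushbackInputStream) before decoding:

import java.io.IOException;
import java.io.InputStream;

public class BomSniffer {
    // Returns the charset name indicated by the BOM, or null if none was found.
    static String sniffBom(InputStream in) throws IOException {
        byte[] bom = new byte[3];
        int n = in.read(bom);
        if (n >= 3 && bom[0] == (byte) 0xEF && bom[1] == (byte) 0xBB && bom[2] == (byte) 0xBF) {
            return "UTF-8";
        }
        if (n >= 2 && bom[0] == (byte) 0xFE && bom[1] == (byte) 0xFF) {
            return "UTF-16BE";
        }
        if (n >= 2 && bom[0] == (byte) 0xFF && bom[1] == (byte) 0xFE) {
            return "UTF-16LE";
        }
        return null; // no BOM: fall back to heuristic detection (e.g. ICU4J)
    }
}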
