For a class, I was given a file of Base64-encoded, salted, SHA-256-hashed passwords.
The file is in the form:
username:base64 encoded sha256 password:salt
My original thought was to base64 decode the hash so I would be left with:
username:salted hashed password:salt
then run it through JTR or hashcat to crack the passwords.
My problem is in the base64 decoding process.
My code looks like:
public static byte[] decode(String string) {
    try {
        return new BASE64Decoder().decodeBuffer(string);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
public static void splitLine(String strLine) throws Exception {
    StringTokenizer st = new StringTokenizer(strLine, ":");
    if (st.hasMoreTokens())
        userName = st.nextToken();
    if (st.hasMoreTokens())
        password = st.nextToken();
    if (st.hasMoreTokens())
        salt = st.nextToken();
}
public static void main(String[] argv) {
    String line = null;
    String pwdFile = null;
    int count = 0;
    try {
        pwdFile = argv[0];
        BufferedReader br = new BufferedReader(new FileReader(pwdFile));
        line = br.readLine();
        while (line != null) {
            splitLine(line);
            /* alternative #1: generates a lot of non-printable characters for the hash */
            System.out.println(userName + ":" + new String(decode(password)) + ":" + salt);
            /* alternative #2: gives a list of the decimal values for each byte of the hash */
            System.out.println(userName + ":" + Arrays.toString(decode(password)) + ":" + salt);
            count++;
            line = br.readLine();
        }
        br.close();
        System.err.println("total lines read: " + count);
    } catch (Exception e) {
        e.printStackTrace();
        System.exit(-1);
    }
}
With alternative #1, I end up with 50,000 more lines in my output file than were in the input file, so I assume some of the decoded strings contain newline characters, which I need to fix as well.
How do I recover and print the original hash value for the password in a format that either hashcat or JTR will recognize as salted SHA-256?
Problem: You are trying to work with Base64-encoded password hashes, and when they are decoded, they contain unprintable characters.
Background: When a value is hashed, the bytes are all changed according to a hashing algorithm and the resulting bytes are often beyond the range of printable characters. Base64 encoding is simply an alphabet that maps ALL bytes into printable characters.
Solution: work with the bytes that the Base64 decode returns instead of trying to turn them into a String. Convert those raw bytes to a hex representation (Base16) before you print them or hand them to hashcat or JTR. In short, you need to do something like the following (this example happens to use the Guava library):
String hex = BaseEncoding.base16().encode(bytesFromEncodedString);
This is condensed from a longer answer I posted.
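If you would rather not pull in Guava, a minimal sketch using only the JDK's java.util.Base64 (Java 8+) could look like this; HashDecoder and base64ToHex are illustrative names, not from the question's code:

```java
import java.util.Base64;

public class HashDecoder {

    // Decode a Base64 string to raw bytes with the JDK decoder, then render
    // those bytes as lowercase hex so hashcat/JTR can read them as text.
    public static String base64ToHex(String encoded) {
        byte[] raw = Base64.getDecoder().decode(encoded);
        StringBuilder sb = new StringBuilder(raw.length * 2);
        for (byte b : raw) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "aGVsbG8=" is Base64 for the bytes of "hello"
        System.out.println(base64ToHex("aGVsbG8="));  // 68656c6c6f
    }
}
```

Because the output is hex, it never contains newlines or unprintable characters, which avoids the extra-lines problem from alternative #1.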
Related
I have to read a file called test.p2b with the following content:
I tried reading it like this:
static void branjeIzDatoteke(String location) {
    byte[] allBytes = new byte[10000];
    try {
        InputStream input = new FileInputStream(location);
        int byteRead;
        int j = 0;
        while ((byteRead = input.read()) != -1) {
            allBytes[j] = (byte) input.read();
        }
        String str = new String(allBytes, "UTF-8");
        for (int i = 0; i <= str.length() - 8; i += 8) {
            //int charCode = Integer.parseInt(str.substring(i, i + 8), 2);
            //System.out.println((char) charCode);
            int drek = (int) str.charAt(i);
            System.out.println(Integer.toBinaryString(drek));
        }
    } catch (IOException ex) {
        Logger.getLogger(Slika.class.getName()).log(Level.SEVERE, null, ex);
    }
}
I tried just printing out the string (when I created String str = new String(allBytes,"UTF-8");), but all I get is a square at the beginning and then 70+ blank lines with no text.
Then I tried the int charCode = Integer.parseInt(str.substring(i,i+8),2); and printing out each individual character, but then I got a NumberFormatException.
I even tried just converting
Finally, I tried the Integer.toBinaryString call I have at the end, but in that case I get 1s and 0s. That's not what I want; I need to read the actual text, but no method seems to work.
I've actually read a binary file before using the method I already tried:
int charCode = Integer.parseInt(str.substring(i,i+8),2);
System.out.println((char)charCode);
but like I said, I get a NumberFormatException.
I don't understand why these methods won't work.
If you want to read all the bytes, you can use the java.nio.file.Files utility class:
Path path = Paths.get("test.p2b");
byte[] allBytes = Files.readAllBytes(path);
String str = new String(allBytes, "UTF-8");
System.out.print(str);
Your iteration over the str content might not work either. Certain Unicode characters are expressed as surrogate pairs: code points that span more than one char (as explained here). Since you are using UTF-8, you should use the String#codePoints() method to iterate over code points instead of chars.
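The char-vs-code-point distinction can be sketched like this; U+20000 is just an arbitrary supplementary character chosen for illustration:

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // "a" followed by U+20000 (a supplementary CJK character encoded as
        // the surrogate pair D840 DC00): 3 chars, but only 2 code points.
        String s = "a\uD840\uDC00";
        System.out.println(s.length());                       // 3 (UTF-16 code units)
        System.out.println(s.codePointCount(0, s.length()));  // 2 (actual characters)
        // Iterate by code point, not by char, so the pair stays intact.
        s.codePoints().forEach(cp -> System.out.printf("U+%04X%n", cp));
    }
}
```

Indexing such a string with charAt would hand you half a surrogate pair, which is why byte- or char-wise loops break on this kind of input.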
I am parsing an XML document in UTF-8 encoding with Java using VTD-XML.
A small excerpt looks like:
<literal>𠀀</literal>
<literal>𠀀</literal>
<literal>𠀢</literal>
I want to iterate through each literal and print it out to the console. However, what I get is:
¢
I am correctly navigating to each element. The way that I get the text value is by calling:
private static String toNormalizedString(String name, int val, final VTDNav vn)
        throws NavException {
    String strValue = null;
    if (val != -1) {
        strValue = vn.toNormalizedString(val);
    }
    return strValue;
}
I've also tried vn.getXPathStringVal(); however, it yields the same result.
I know that the literals above aren't just strings of length one. Rather, they are Unicode characters composed of two chars. I am able to correctly parse and output kanji characters whose length is just one.
My question is - how can I correctly parse and output these characters using VTD-XML? Is there a way to get the underlying bytes of the text between the literal tags so that I can parse the bytes myself?
EDIT
Code to process each line of the XML, converting it to a byte array and then back to a String:
try (BufferedReader br = new BufferedReader(new FileReader("res/sample.xml"))) {
    String line;
    while ((line = br.readLine()) != null) {
        byte[] myBytes = null;
        try {
            myBytes = line.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
            System.exit(-1);
        }
        System.out.println(new String(myBytes));
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
You are probably trying to get a string involving characters whose code points are greater than 0x10000. That bug is known and is in the process of being addressed... I will notify you once the fix is out.
This question may be identical to this one...
Map supplementary Unicode characters to BMP (if possible)
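Separately from the VTD-XML bug, note that the EDIT code above reads the file with FileReader, which (before Java 11) uses the platform default charset rather than UTF-8. A hedged sketch of a round trip that makes the charset explicit, using an arbitrary supplementary character (U+20000) as the test payload:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Utf8Read {
    public static void main(String[] args) throws IOException {
        // Write a <literal> element containing U+20000 as UTF-8, then read it
        // back with an explicit charset so no platform default can interfere.
        Path p = Files.createTempFile("literal", ".xml");
        Files.write(p, "<literal>\uD840\uDC00</literal>".getBytes(StandardCharsets.UTF_8));
        String content = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
        // "<literal>" occupies indices 0-8, so the character starts at index 9.
        System.out.println(content.codePointAt(9) == 0x20000); // true
    }
}
```

If the default charset is not UTF-8, the same round trip through FileReader would mangle the surrogate pair before VTD-XML ever sees it.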
Say I open a text file like this:
public static void main(String[] args) throws IOException {
    String file_name = "file.txt";
    try {
        ReadFile file = new ReadFile(file_name);
        String[] Lines = file.openFile();
        for (int i = 0; i < Lines.length; i++) {
            System.out.println(Lines[i]);
        }
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
Now I want to change the result to binary (for further conversion into AMI coding). I suppose I should first turn it into ASCII, though I'm not 100% certain that's necessary. Should I convert it to chars, or is there an easier way?
Please, mind that I'm just a beginner.
Do you happen to know for sure that the files will be ASCII encoded? Assuming they are, you can just use the getBytes() method of String:
byte[] lineDefault = line.getBytes();
There is a second form of .getBytes() as well, if you don't want to use the default encoding. I often use:
byte[] lineUtf8 = line.getBytes("UTF-8");
which gives byte sequences equivalent to ASCII for characters whose code points are below 0x80.
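From there, getting to a binary representation (as a precursor to AMI coding) is a small step. A sketch, where BinaryLines and toBinary are illustrative names:

```java
import java.nio.charset.StandardCharsets;

public class BinaryLines {

    // Render each byte as an 8-bit binary string, zero-padded on the left.
    public static String toBinary(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            // b & 0xFF widens to int without sign extension;
            // %8s plus replace(' ', '0') left-pads to exactly 8 digits.
            sb.append(String.format("%8s", Integer.toBinaryString(b & 0xFF))
                            .replace(' ', '0'));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary("A".getBytes(StandardCharsets.US_ASCII)));
        // 01000001
    }
}
```

The & 0xFF mask matters: without it, negative bytes would produce 32-digit strings because Java bytes are signed.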
I have the output of PHP's hash_file (over a UTF-8 file) that I need to reproduce and check in my Java client. Based on the hash_file manual, I extract the contents of the file and create the MD5 hex hash in Java, but I can't make them match. I tried the suggestions on [this question] without success.
Here's how I do it on Java:
public static String calculateStringHash(String text, String encoding)
        throws NoSuchAlgorithmException, UnsupportedEncodingException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    return getHex(md.digest(text.getBytes(encoding)));
}
My results match the ones from this page.
For example:
String jake: 1200cf8ad328a60559cf5e7c5f46ee6d
From my Java code: 1200CF8AD328A60559CF5E7C5F46EE6D
But when trying on files it doesn't work. Here's the code for the file function:
public static String calculateHash(File file) throws NoSuchAlgorithmException,
        FileNotFoundException, IOException {
    BufferedReader br = null;
    StringBuilder sb = new StringBuilder();
    try {
        String sCurrentLine;
        br = new BufferedReader(new FileReader(file));
        while ((sCurrentLine = br.readLine()) != null) {
            sb.append(sCurrentLine);
        }
    } catch (IOException ex) {
        LOG.log(Level.SEVERE, null, ex);
    } finally {
        try {
            if (br != null) {
                br.close();
            }
        } catch (IOException ex) {
            LOG.log(Level.SEVERE, null, ex);
        }
    }
    return calculateStringHash(sb.toString(), "UTF-8");
}
I verified that on the PHP side hash_file is used and UTF-8 is the encoding. Any ideas?
Your reading method removes all the line terminators from the file: readLine() returns a line without its line terminator. Print the contents of the StringBuilder and you'll see the problem.
Moreover, a hashing algorithm is a binary operation. It operates on bytes and returns bytes. Why are you transforming the bytes in the file into a String, only to transform the String back into an array of bytes in order to hash it? Just read the file as a byte array, using an InputStream, instead of reading it as a String. Then hash this byte array. This also avoids using the wrong file encoding (your code uses the platform default encoding, which might not be the encoding used to create the file).
I guess you are missing the newline characters from the file, since you call br.readLine().
It is better to read the file into a byte array and pass that to md.digest(...).
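Both answers point the same way; a minimal sketch of hashing the raw file bytes (FileMd5 and md5Hex are illustrative names, and the "jake" hash below is the value quoted earlier in the question):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class FileMd5 {

    // Hash the file's raw bytes directly: no readLine(), no line terminators
    // lost, and no charset round-trip to corrupt the input.
    public static String md5Hex(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("demo", ".txt");
        Files.write(p, "jake".getBytes("UTF-8"));
        System.out.println(md5Hex(p)); // 1200cf8ad328a60559cf5e7c5f46ee6d
    }
}
```

Because PHP's hash_file also operates on the raw bytes of the file, this should match its output for the same file content.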
Here is the code for my class:
public class Md5tester {

    private String licenseMd5 = "?jZ2$??f???%?";

    public Md5tester() {
        System.out.println(isLicensed());
    }

    public static void main(String[] args) {
        new Md5tester();
    }

    public boolean isLicensed() {
        File f = new File("C:\\Some\\Random\\Path\\toHash.txt");
        if (!f.exists()) {
            return false;
        }
        try {
            BufferedReader read = new BufferedReader(new InputStreamReader(new FileInputStream(f)));
            // get line from txt
            String line = read.readLine();
            // output what line is
            System.out.println("Line read: " + line);
            // get utf-8 bytes from line
            byte[] lineBytes = line.getBytes("UTF-8");
            // declare messagedigest for hashing
            MessageDigest md = MessageDigest.getInstance("MD5");
            // hash the bytes of the line read
            String hashed = new String(md.digest(lineBytes), "UTF-8");
            System.out.println("Hashed as string: " + hashed);
            System.out.println("LicenseMd5: " + licenseMd5);
            System.out.println("Hashed as bytes: " + hashed.getBytes("UTF-8"));
            System.out.println("LicenseMd5 as bytes: " + licenseMd5.getBytes("UTF-8"));
            if (hashed.equalsIgnoreCase(licenseMd5)) {
                return true;
            } else {
                return false;
            }
        } catch (FileNotFoundException e) {
            return false;
        } catch (IOException e) {
            return false;
        } catch (NoSuchAlgorithmException e) {
            return false;
        }
    }
}
Here's the output I get:
Line read: Testing
Hashed as string: ?jZ2$??f???%?
LicenseMd5: ?jZ2$??f???%?
Hashed as bytes: [B@5fd1acd3
LicenseMd5 as bytes: [B@3ea981ca
false
I'm hoping someone can clear this up for me, because I have no clue what the issue is.
The byte[] returned by an MD5 digest is an arbitrary byte[], therefore you cannot treat it as a valid representation of a String in some encoding.
In particular, the ?s in ?jZ2$??f???%? correspond to bytes that cannot be represented in your output encoding. This means the content of your licenseMd5 is already damaged, so you cannot compare your MD5 hash with it.
If you want to represent your byte[] as String for further comparison, you need to choose a proper representation for arbitrary byte[]s. For example, you can use Base64 or hex strings.
You can convert byte[] into hex string as follows:
public static String toHex(byte[] in) {
    StringBuilder out = new StringBuilder(in.length * 2);
    for (byte b : in) {
        out.append(String.format("%02X", b));
    }
    return out.toString();
}
Also note that byte[] uses the default implementation of toString(). Its result (such as [B@5fd1acd3) is not related to the content of the byte[], so it is meaningless in your case.
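Putting it together, a hedged sketch of how the comparison could work once both sides are hex strings (LicenseCheck is an illustrative name; in the real program the expected value would be a stored hex constant rather than computed in place):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class LicenseCheck {

    // Render digest bytes as uppercase hex: printable, stable, and comparable.
    public static String toHex(byte[] in) {
        StringBuilder out = new StringBuilder(in.length * 2);
        for (byte b : in) {
            out.append(String.format("%02X", b));
        }
        return out.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        // The digest of a given input is deterministic, so hashing the same
        // line twice yields identical hex strings; comparing mangled Strings
        // built from raw digest bytes does not have that guarantee.
        String expected = toHex(md.digest("Testing".getBytes(StandardCharsets.UTF_8)));
        String actual = toHex(md.digest("Testing".getBytes(StandardCharsets.UTF_8)));
        System.out.println(expected.equalsIgnoreCase(actual)); // true
    }
}
```

Storing licenseMd5 as the hex string (32 characters for MD5) instead of raw digest bytes makes equalsIgnoreCase a safe comparison.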
The ? symbols in the printed representation of hashed aren't literal question marks; they're unprintable characters.
You get this problem when your Java source file is not saved in UTF-8 while you encode the string using UTF-8. Try removing "UTF-8" and the MD5 will produce a different result; you can copy that into the string constant and the comparison will return true.
Another way is to set the source file encoding to UTF-8; the encoded string will then be different as well.