I am learning about JWS for authentication and tried to implement it using the jjwt Java library. I created a JWT token string and appended a single character to its end. To my surprise, the jjwt library parsed it without throwing any exception. I don't know whether the issue is with the library or with the algorithm used. I tested the same token with the jwt.io debugger and it works as expected there (it reports an invalid token).
CODE :
import java.security.Key;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;

public class TestJwt {

    // private static final String JWT_SECRET_KEY = "qdsfkjbwfjn323rwefwdef3kewrwerv5236v56d56w1xweec3wdn3i432oi"; // WORKING !!!
    // private static final String JWT_SECRET_KEY = "qdsfkjbwfjn323rwefwdef3kewrwerv52f36v56d56w1xweec3wdn3i432oi"; // WORKING !!!
    // private static final String JWT_SECRET_KEY = "qdsfkjbwfjn323rwefwdef3kewrwerv52f36345345weec3wdn3i432oi"; // WORKING !!!
    // private static final String JWT_SECRET_KEY = "qdsfkjbwfjn323rwefwdef3kewrwerv52f36345432oi"; // NOT WORKING
    // private static final String JWT_SECRET_KEY = "qdsfkjbwfjn323rwefwdef3kewrwerv52f3632222222245y6454524524tef45432oi"; // NOT WORKING
    private static final String JWT_SECRET_KEY = "qdsfkjbwfjn323rwefwdef3kewrwerv5236v56d56w1xweec3wdn3i432oi"; // WORKING

    private static final Key key = Keys.hmacShaKeyFor(JWT_SECRET_KEY.getBytes());

    public static void main(String[] args) {
        String token = Jwts.builder()
                .claim("name", "RANDOM")
                .claim("surname", "ANOTHER_RANDOM")
                .signWith(key)
                .compact();

        System.out.println("TOKEN BEFORE");
        System.out.println(token);

        // Adding a single character to the JWT and it still parses!
        token = token + "7";

        System.out.println("TOKEN AFTER");
        System.out.println(token);

        Claims claims = Jwts.parserBuilder()
                .setSigningKey(key).build()
                .parseClaimsJws(token).getBody();

        claims.forEach((k, v) -> {
            System.out.println("________________________");
            System.out.println(k + " : " + v);
        });
        System.out.println("________________________");
    }
}
OUTPUT :
TOKEN BEFORE
eyJhbGciOiJIUzM4NCJ9.eyJuYW1lIjoiUkFORE9NIiwic3VybmFtZSI6IkFOT1RIRVJfUkFORE9NIn0.evntuAcZ0Urnv-5QniShmENKNBSzrjoxeNWN0uW-sy-qXzC-G2PJyi316m9LqQH9
TOKEN AFTER
eyJhbGciOiJIUzM4NCJ9.eyJuYW1lIjoiUkFORE9NIiwic3VybmFtZSI6IkFOT1RIRVJfUkFORE9NIn0.evntuAcZ0Urnv-5QniShmENKNBSzrjoxeNWN0uW-sy-qXzC-G2PJyi316m9LqQH97
________________________
name : RANDOM
________________________
surname : ANOTHER_RANDOM
________________________
I was expecting a SignatureException. I have tested with a few keys and random claims; some of them behave as expected, but others parse the tampered token without an issue (see the commented-out keys). Should I use more complex keys?
It's not a problem with the secret, but with how the Base64Url decoder is implemented.
The original signature is 64 characters long, which in Base64Url encoding (6 bits per character) means that 64 * 6 = 384 bits are encoded. This is exactly what you expect when you use the HS384 algorithm.
384 bits divided by 6 bits per character is exactly 64, so the signature can be encoded in a 64-character Base64Url string with no unused bits. When you add another character, you add 6 more bits. The decoder used by your Java code seems to simply ignore these 6 bits, because they are not enough information for a complete byte. Technically, however, it is an invalid Base64Url string and would require at least one more character. The decoder used on jwt.io appears to be stricter in this regard.
See also "JWT token decoding even when the last character of the signature is changed", which explains why you can sometimes change the last character of the signature without invalidating it.
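A small sketch of that arithmetic (illustrative only; it uses the JDK's own Base64 classes rather than jjwt's internal decoder, and assumes a lenient decoder would simply drop the dangling bits):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class TrailingBitsDemo {
    public static void main(String[] args) throws Exception {
        // Any 48-byte (384-bit) value stands in for an HS384 signature here.
        byte[] signature = MessageDigest.getInstance("SHA-384")
                .digest("demo".getBytes(StandardCharsets.UTF_8));

        String encoded = Base64.getUrlEncoder().withoutPadding().encodeToString(signature);
        System.out.println(encoded.length()); // 64 characters: 384 bits fit exactly, no bits left over

        // Appending one character adds only 6 bits, which never completes another byte.
        String tampered = encoded + "7";
        try {
            Base64.getUrlDecoder().decode(tampered); // the JDK's strict decoder rejects the dangling character
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        // A lenient decoder would drop those 6 bits, yield the same 48 bytes,
        // and the signature check would still pass.
    }
}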
I have seen something interesting while splitting a properties string using a regex, and I am not able to find the root cause.
I have a string that contains text in a properties-like key=value format.
I have a regex that splits the string into keys and values at the position of the first = on each line; the value itself may also contain =.
I tried using two different ways in Java to do it.
using Scanner.findAll() method
This does not behave as expected. It should extract and print all keys based on the pattern, but it behaves oddly. I have one key-value pair as below:
SectionError.ErrorMessage=errorlevel=Warning {HelpMessage:This is very important message This is very important .....}
The key that should be extracted is SectionError.ErrorMessage=, but it also treats errorlevel= as a key.
The interesting point is that if I remove a single character from the properties string passed in, it behaves fine and extracts only the SectionError.ErrorMessage= key.
using Matcher.results() method
This works fine, no matter what we put in the properties string.
Sample code I tried :
import java.util.Scanner;
import java.util.regex.MatchResult;
import java.util.regex.Pattern;

import static java.util.regex.Pattern.MULTILINE;

public class MessageSplitTest {

    static final Pattern pattern = Pattern.compile("^[a-zA-Z0-9._]+=", MULTILINE);

    public static void main(String[] args) {
        final String properties =
                "SectionOne.KeyOne=first value\n" + // removing one char from here would make the scanner method print expected keys
                "SectionOne.KeyTwo=second value\n" +
                "SectionTwo.UUIDOne=379d827d-cf54-4a41-a3f7-1ca71568a0fa\n" +
                "SectionTwo.UUIDTwo=384eef1f-b579-4913-a40c-2ba22c96edf0\n" +
                "SectionTwo.UUIDThree=c10f1bb7-d984-422f-81ef-254023e32e5c\n" +
                "SectionTwo.KeyFive=hello-world-sample\n" +
                "SectionThree.KeyOne=first value\n" +
                "SectionThree.KeyTwo=second value additional text just to increase the length of the text in this value still not enough adding more strings here n there\n" +
                "SectionError.ErrorMessage=errorlevel=Warning {HelpMessage:This is very important message This is very important message This is very important messageThis is very important message This is very important message This is very important message This is very important message This is very important message This is very important message This is very important message This is very important messageThis is very important message This is very important message This is very important message This is very important message This is very important message}\n" +
                "SectionFour.KeyOne=sixth value\n" +
                "SectionLast.KeyOne=Country";

        printKeyValuesFromPropertiesUsingScanner(properties);
        System.out.println();
        printKeyValuesFromPropertiesUsingMatcher(properties);
    }

    private static void printKeyValuesFromPropertiesUsingScanner(String properties) {
        System.out.println("===Using Scanner===");
        try (Scanner scanner = new Scanner(properties)) {
            scanner
                .findAll(pattern)
                .map(MatchResult::group)
                .forEach(System.out::println);
        }
    }

    private static void printKeyValuesFromPropertiesUsingMatcher(String properties) {
        System.out.println("===Using Matcher===");
        pattern.matcher(properties).results()
                .map(MatchResult::group)
                .forEach(System.out::println);
    }
}
Output printed:
===Using Scanner===
SectionOne.KeyOne=
SectionOne.KeyTwo=
SectionTwo.UUIDOne=
SectionTwo.UUIDTwo=
SectionTwo.UUIDThree=
SectionTwo.KeyFive=
SectionThree.KeyOne=
SectionThree.KeyTwo=
SectionError.ErrorMessage=
errorlevel=
SectionFour.KeyOne=
SectionLast.KeyOne=
===Using Matcher===
SectionOne.KeyOne=
SectionOne.KeyTwo=
SectionTwo.UUIDOne=
SectionTwo.UUIDTwo=
SectionTwo.UUIDThree=
SectionTwo.KeyFive=
SectionThree.KeyOne=
SectionThree.KeyTwo=
SectionError.ErrorMessage=
SectionFour.KeyOne=
SectionLast.KeyOne=
What could be the root cause of this? Does Scanner's findAll work differently from Matcher's results?
Please let me know if any more info is required.
Scanner's documentation mentions the word "buffer" a lot. This suggests that a Scanner does not know about the entire string from which it is reading, and only holds a small part of it in a buffer at any time. This makes sense, because Scanners are designed to read from streams as well; reading everything from a stream might take a long time (or forever!) and would use a lot of memory.
In the source code of Scanner, there is indeed a CharBuffer:
// Internal buffer used to hold input
private CharBuffer buf;
Because of the length and contents of your string, the Scanner has decided to load everything up to...
SectionError.ErrorMessage=errorlevel=Warning {HelpMessage:This is very...
                              ^
                        somewhere here
(it could be anywhere in the word "errorlevel")
...into the buffer. Then, after that first part of the string has been read, the remaining part of the string starts like this:
errorlevel=Warning {HelpMessage:This is very...
errorlevel= is now at the start of the buffer, so the ^ anchor matches and the pattern treats it as a key.
Matcher doesn't use a buffer. It stores the whole string against which it is matching in the field:
/**
* The original string being matched.
*/
CharSequence text;
So this behaviour is not observed in Matcher.
Sweeper's answer got it right: this is an issue of the Scanner's buffer not containing the entire string. We can simplify the example to trigger the issue deliberately:
static final Pattern pattern = Pattern.compile("^ABC.", Pattern.MULTILINE);

public static void main(String[] args) {
    String testString = "\nABC1\nXYZ ABC2\nABC3ABC4\nABC4";
    String properties = "X".repeat(1024 - testString.indexOf("ABC4")) + testString;
    String s1 = usingScanner(properties);
    System.out.println("Using Scanner: " + s1);
    String m = usingMatcher(properties);
    System.out.println("Using Matcher: " + m);
    if(!s1.equals(m)) System.out.println("mismatch");
    if(s1.equals(usingScannerNoStream(properties)))
        System.out.println("Not a stream issue");
}

private static String usingScanner(String source) {
    return new Scanner(source)
        .findAll(pattern)
        .map(MatchResult::group)
        .collect(Collectors.joining(" + "));
}

private static String usingScannerNoStream(String source) {
    Scanner s = new Scanner(source);
    StringJoiner sj = new StringJoiner(" + ");
    for(;;) {
        String match = s.findWithinHorizon(pattern, 0);
        if(match == null) return sj.toString();
        sj.add(match);
    }
}

private static String usingMatcher(String source) {
    return pattern.matcher(source).results()
        .map(MatchResult::group)
        .collect(Collectors.joining(" + "));
}
which prints:
Using Scanner: ABC1 + ABC3 + ABC4 + ABC4
Using Matcher: ABC1 + ABC3 + ABC4
mismatch
Not a stream issue
This example prepends a prefix with as many X characters as needed to align the beginning of the false-positive match with the buffer's size. The Scanner's initial buffer size is 1024, though it may get enlarged when needed.
Since findAll ignores the scanner's delimiters, just like findWithinHorizon, this code also shows that looping with findWithinHorizon manually exhibits the same behavior; in other words, this is not an issue with the Stream API.
Since Scanner will enlarge the buffer when needed, we can work around the issue by using a match operation that forces the entire contents to be read into the buffer before performing the intended match operation, e.g.
private static String usingScanner(String source) {
    Scanner s = new Scanner(source);
    s.useDelimiter("(?s).*").hasNext();
    return s
        .findAll(pattern)
        .map(MatchResult::group)
        .collect(Collectors.joining(" + "));
}
This specific hasNext() call with a delimiter that consumes the entire string forces complete buffering of the string without advancing the position. The subsequent findAll() operation ignores both the delimiter and the result of the hasNext() check, but no longer suffers from the issue because the buffer is now completely filled.
Of course, this destroys the advantage of Scanner when parsing an actual stream.
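Applied to the Scanner-based method from the question, the same trick would look like this (a sketch; printKeyValuesFromPropertiesUsingScanner and pattern are the ones defined in the question):

private static void printKeyValuesFromPropertiesUsingScanner(String properties) {
    System.out.println("===Using Scanner===");
    try (Scanner scanner = new Scanner(properties)) {
        // Force the entire string into the Scanner's buffer before matching,
        // so the ^ anchor can no longer fire at an arbitrary buffer boundary.
        scanner.useDelimiter("(?s).*").hasNext();
        scanner
            .findAll(pattern)
            .map(MatchResult::group)
            .forEach(System.out::println);
    }
}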
I'm upgrading from Spring Security 4.x to 5.x.
The ReflectionSaltSource from Spring Security 4 lets us configure a custom salt, but it has been removed in Spring Security 5. I then found out that I should use MessageDigestPasswordEncoder. It has a long, detailed Javadoc, but unfortunately the doc reads like a bag of words without conveying any structured information (I tried multiple times; my bad if I was ignorant).
Anyway, based on my limited understanding, I thought I should do the following.
Old system with 4.x: myEncodedPassword and mySalt are passed separately to the encoder.
New system with 5.x: pass one field with the value {mySalt}myEncodedPassword to the MessageDigestPasswordEncoder.
However, that did not work. The problem is that when MessageDigestPasswordEncoder sees {mySalt}encodedPassword, it uses {mySalt} (with the braces) as the salt instead of mySalt. I'm confused.
Here's a coding example. I used Groovy to reduce noise.
@Grab(group='org.springframework.security', module='spring-security-core', version='5.1.4.RELEASE')
import org.springframework.security.crypto.password.MessageDigestPasswordEncoder
String password = 'myPassword'
String salt_1 = 'mySalt'
String salt_2 = '{mySalt}'
// http://www.lorem-ipsum.co.uk/hasher.php generated below hashes
String encodedPasswordWithSalt_1 = '57bc828628811a10496215e217b7ae9b714c859fc7a8b1c678c9a0cc40aac422'
String encodedPasswordWithSalt_2 = 'a18b53fc58843def1e08e00a718f40d6f8eda0b97ef97824b5078c1fad93c0c5'
MessageDigestPasswordEncoder encoder = new MessageDigestPasswordEncoder('SHA-256')
println "expected=true, actual=" + encoder.matches(password, "{${salt_1}}${encodedPasswordWithSalt_1}") // <--- expected to match but did not
println "expected=false, actual=" + encoder.matches(password, "{${salt_1}}${encodedPasswordWithSalt_2}") // <--- why does this match?
The output is
expected=true, actual=false
expected=false, actual=true
I'm hoping to find a way to support SHA-256 with a custom, separate salt for each user password.
If anyone's interested, I created a ticket on GitHub - https://github.com/spring-projects/spring-security/issues/6594 . No solution so far. I will update here if there's any. So this is still an unanswered question.
I guess the issue is in the org.springframework.security.crypto.password.MessageDigestPasswordEncoder class.
Debugging it shows that in the method private String extractSalt(String prefixEncodedPassword) the salt is extracted by returning prefixEncodedPassword.substring(start, end + 1), where start is the index of the prefix { and end is the index of the suffix }, stopping at the first suffix it matches. So what happens in your code?
This happens:
MessageDigestPasswordEncoder encoder = new MessageDigestPasswordEncoder('SHA-256')
println "expected=true, actual=" + encoder.matches(password, "{${salt_1}}${encodedPasswordWithSalt_1}") //It's not matched because the extracted salt will be {mySalt} and not mySalt
println "expected=false, actual=" + encoder.matches(password, "{${salt_1}}${encodedPasswordWithSalt_2}") //It's matched because the extracted salt will be {mySalt} and not mySalt
I don't know whether this is a bug or not; in any case, for your scenario it should be enough to investigate and adjust this method accordingly:
private String extractSalt(String prefixEncodedPassword) {
    int start = prefixEncodedPassword.indexOf(PREFIX);
    if (start != 0) {
        return "";
    }
    int end = prefixEncodedPassword.indexOf(SUFFIX, start);
    if (end < 0) {
        return "";
    }
    return prefixEncodedPassword.substring(start, end + 1);
}
This method is inside the org.springframework.security.crypto.password.MessageDigestPasswordEncoder class
I hope it's useful
Angelo
The code below is intended to encrypt and decrypt messages entered by the user. When I encrypt and decrypt data, it sometimes works and at other times doesn't. The following is an example of the problems I am experiencing.
Encryption (screenshot)
Decryption (screenshot)
As you can see, when I try to decrypt, my program terminates and some gibberish is printed to the console. What is the issue with my code?
I'm unsure if mentioning this helps, but I have Eclipse's file encoding set to UTF-8.
Please excuse any poor code. I'm still very much a beginner with Java and I'm puzzled as to why this is happening.
import java.util.Scanner;

public class Transcrypt {

    static String mode = "",
                  msg,
                  key;

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        while (!mode.equals("e") && !mode.equals("d")) { // Ask for mode until equal to "e" or "d"
            System.out.print("Encrypt or decrypt? (e/d) ");
            mode = input.nextLine().toLowerCase();
        }
        System.out.print("Message: "); // Ask for message
        msg = input.nextLine();
        System.out.print("Passkey: "); // Ask for key
        key = input.nextLine();
        input.close();
        System.out.println(transcrypt(msg, key, mode.equals("d"))); // Transcrypt and output
    }

    public static String transcrypt(String msg, String key, boolean decode) {
        String result = "";
        for (int i = 0; i < msg.length(); i++) {
            // Add or subtract the Unicode value of key.charAt(i % key.length()) to/from msg.charAt(i) and convert back to a character
            result += (char) ((int) msg.charAt(i) + ((int) key.charAt(i % key.length())) * (decode ? -1 : 1));
        }
        return result;
    }
}
Your encoded message looks like "¼ÊßàãÊãæÑ×", but actually, it is "¼Êßàã\u0085ÊãæÑ×\u0095".
Most notably, it contains the control character 0x85 in the middle, which has "new line" semantics. So when you copy that string, you copy the control characters with it, and when you paste it into the console at your application's prompt for the message, you are effectively entering only ¼Êßàã as the message; the input is committed by the "new line" control character, and the subsequent prompt for the passkey consumes the trailing ÊãæÑ× characters.
The garbage you see right after the Passkey: output is the result of your attempt to decode ¼Êßàã using the key ÊãæÑ×; no newline had to be entered at that point, since the characters already sitting in the console's buffer were consumed.
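To see what the encoded string really contains (rather than what the console happens to render), you can dump its code points; a small sketch, reusing the transcrypt method from the question:

String encoded = transcrypt("myMessage", "myPasskey", false);
encoded.chars().forEach(c -> System.out.printf("U+%04X ", c));
System.out.println(); // control characters such as U+0085 become visible here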
Generally, as Nándor Előd Fekete already said in a comment, you should not write characters to the console that are actually binary data, like the encoded string.
By the way, you shouldn't declare variables as static fields when they are actually local to a method, i.e. your main method. Further, you don't need to cast char to int when doing computations; char values are already a subset of int values.
I am running a GWT application on Google App Engine which passes text input from the GUI via GWT-RPC/servlet to an API. But umlauts like ä, ö, ü are misinterpreted by the API, which shows only a ? instead of each umlaut.
I am pretty sure the problem is the default character encoding on Google App Engine, which is US-ASCII: US-ASCII does not know any umlauts.
Using umlauts with the API from JUnit tests on my local machine works. The default character encoding there is UTF-8.
The problem does not come from GWT or the encoding of any HTML file; I used a constant Java string containing some umlauts within the application and passed it to the API, and the problem only appears when the application is deployed on Google App Engine.
Is there any way to change the character encoding on Google App Engine? Or does anyone know another solution to my problem?
Storing umlauts from the GUI in the GAE Datastore and bringing them back to the GUI works, funnily enough.
I was having the same problem: the default charset of a web application deployed to Google App Engine was set to US-ASCII, but I needed it to be UTF-8.
After a bit of head scratching, I found that adding:
<system-properties>
<property name="appengine.file.encoding" value="UTF-8" />
</system-properties>
to appengine-web.xml correctly sets the charset to UTF-8. More details can be found on Google Issue Tracker - Setting of default encoding.
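To verify the setting actually took effect, you can log the runtime's default charset, for example from a servlet or a startup hook (a small sketch):

// Somewhere in application startup or a servlet's init():
System.out.println("file.encoding   = " + System.getProperty("file.encoding"));
System.out.println("default charset = " + java.nio.charset.Charset.defaultCharset()); // expect UTF-8 after the change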
Workaround (safe)
I wrote this class to encode UTF strings to ASCII strings (replacing every char that is not in the ASCII table with its numeric value, preceded and followed by a marker character), using AsciiEncoder.encode(yourUtfString).
The string can then be decoded back to UTF with AsciiEncoder.decode(yourAsciiEncodedUtfString) wherever UTF is supported.
package <your_package>;

import java.util.ArrayList;

/**
 * Created by Micha F. aka Peracutor.
 * 04.06.2017
 */
public class AsciiEncoder {

    public static final char MARK = '%'; // use whatever ASCII char you like (should not occur often in regular text)

    public static String encode(String s) {
        StringBuilder result = new StringBuilder(s.length() + 4 * 10); // buffer for 10 special characters (4 additional chars for every special char that gets replaced)
        for (char c : s.toCharArray()) {
            if ((int) c > 127 || c == MARK) {
                result.append(MARK).append((int) c).append(MARK);
            } else {
                result.append(c);
            }
        }
        return result.toString();
    }

    public static String decode(String s) {
        int lastMark = -1;
        ArrayList<Character> chars = new ArrayList<>();
        try {
            //noinspection InfiniteLoopStatement
            while (true) {
                String charString = s.substring(lastMark = s.indexOf(MARK, lastMark + 1) + 1, lastMark = s.indexOf(MARK, lastMark));
                char c = (char) Integer.parseInt(charString);
                chars.add(c);
            }
        } catch (IndexOutOfBoundsException | NumberFormatException ignored) {}

        for (char c : chars) {
            s = s.replace("" + MARK + ((int) c) + MARK, String.valueOf(c));
        }
        return s;
    }
}
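A quick usage sketch (assuming the class above is on the classpath; the sample string is arbitrary):

String original = "Grüße aus München";
String ascii = AsciiEncoder.encode(original);    // non-ASCII chars become %<code>%
String roundTrip = AsciiEncoder.decode(ascii);
System.out.println(ascii);
System.out.println(original.equals(roundTrip));  // true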
Hope this helps someone.
If you (like me) are using the Java flexible environment on Google App Engine, the default encoding can "simply" be fixed by setting the file.encoding system property through your app.yaml (via an environment variable that is automatically picked up by the runtime), like this:
env_variables:
JAVA_USER_OPTS: -Dfile.encoding=UTF-8
I have to read a file (an existing format not under my control) that contains an XML document and encoded data. Unfortunately this file includes MQ-related data around it, including hex zeros (end-of-file markers).
So, using Java, how can I read this file, stripping or ignoring the "garbage" I don't need, to get at the XML and encoded data? I believe an acceptable solution is to just leave out the hex zeros (are there other values that will stop my reading?), since I don't need the MQ information (the RFH header) anyway and the counts are meaningless for my purposes.
I have searched a lot and only found really heinously complicated "solutions". There must be a better way...
What worked was to pull out the XML documents - Groovy code:
public static final String REQUEST_XML = "<Request>";
public static final String REQUEST_END_XML = "</Request>";

/**
 * @param xmlMessage
 * @return 1-N EncodedRequests for those I contain
 */
private void extractRequests( String xmlMessage ) {
    int start = xmlMessage.indexOf(REQUEST_XML);
    int end = xmlMessage.indexOf(REQUEST_END_XML);
    end += REQUEST_END_XML.length();

    while( start >= 0 ) { // each <Request>
        requests.add(new EncodedRequest(xmlMessage.substring(start, end)));
        start = xmlMessage.indexOf(REQUEST_XML, end);
        end = xmlMessage.indexOf(REQUEST_END_XML, start);
        end += REQUEST_END_XML.length();
    }
}
and then decode the base64 portion:
public String getDecodedContents() {
    if( decodedContents == null ) {
        byte[] decoded = Base64.decodeBase64(getEncodedContents().getBytes());
        String newString = new String(decoded);
        decodedContents = newString;
        decodedContents = decodedContents.replace('\r', '\t');
    }
    return decodedContents;
}
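On Java 8 or later, the same decode step can also be done with the built-in java.util.Base64, passing an explicit charset when building the String (a sketch; it assumes the encoded payload is UTF-8 text and uses the MIME decoder in case the Base64 block contains line breaks):

byte[] decoded = java.util.Base64.getMimeDecoder().decode(getEncodedContents());
String decodedContents = new String(decoded, java.nio.charset.StandardCharsets.UTF_8)
        .replace('\r', '\t');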
I've hit this issue before (well... something similar). Have a look at my FilterInputStream for a file filter; you should be able to modify it to your needs.
Essentially it implements a push-back buffer that chucks away anything you don't want.
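For illustration only (this is not the filter linked above), a minimal FilterInputStream along those lines could drop the 0x00 bytes and pass everything else through:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Skips NUL (0x00) bytes so the downstream reader only sees the useful content.
public class NulStrippingInputStream extends FilterInputStream {

    public NulStrippingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b;
        do {
            b = super.read();
        } while (b == 0);           // drop hex zeros, pass everything else through
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n <= 0) {
            return n;
        }
        int kept = off;
        for (int i = off; i < off + n; i++) {
            if (buf[i] != 0) {
                buf[kept++] = buf[i];
            }
        }
        // If the whole chunk was zeros, try again rather than returning 0.
        return kept - off == 0 ? read(buf, off, len) : kept - off;
    }
}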