I am trying to encrypt a file in Python and decrypt it in Java, but I am getting an error.
I am using cryptography.fernet in Python to encrypt the file, and when I decrypt it with the fernet-java8 library it throws an exception.
Here is my Python code:
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher_suite = Fernet(key)

with open("key.txt", "wb") as f:
    f.write(key)

with open("read_plain_text_from_here.txt", "r") as f:
    encoded_text = f.read().encode()

cipher_text = cipher_suite.encrypt(encoded_text)

with open("write_cipher_text_here.txt", "wb") as f:
    f.write(cipher_text)

with open("write_cipher_text_here.txt", "rb") as f:
    cipher_text = f.read()

with open("key.txt", "rb") as f:
    decryption_key = f.read()

with open("write_plain_text_here.txt", "wb") as f:
    cipher_suite = Fernet(decryption_key)
    f.write(cipher_suite.decrypt(cipher_text))
Here is my Java code:
package encryptapp;

import com.macasaet.fernet.*;

public class Decrypt
{
    public static void main(String[] args)
    {
        final Key key = new Key("***key i got from python***");
        final Token token = Token.fromString("***cipher text i got from python***");
        final Validator<String> validator = new StringValidator() {};
        final String payload = token.validateAndDecrypt(key, validator);
        System.out.println("Payload is " + payload);
    }
}
The error in Java that I get is:
Exception in thread "main" com.macasaet.fernet.TokenExpiredException: Token is expired
at com.macasaet.fernet.Token.validateAndDecrypt(Token.java:240)
at com.macasaet.fernet.Validator.validateAndDecrypt(Validator.java:104)
at com.macasaet.fernet.Token.validateAndDecrypt(Token.java:218)
at encryptapp.Decrypt.main(Decrypt.java:60)
LINKS for docs:
Python: https://cryptography.io/en/latest/
Java: https://github.com/l0s/fernet-java8/blob/master/README.md
The fernet-java8 library does not have an explicit TTL argument for decryption like the Python class does; instead, it uses a default of 60 seconds. You need to override the getTimeToLive() method of the Validator interface to specify a custom TTL. If you want to set the TTL to "forever", which is equivalent to the keyword argument ttl=None in Python's Fernet, do something like this:
import java.time.Duration;
import java.time.Instant;
import java.time.temporal.TemporalAmount;
.
.
.
final Validator<String> validator = new StringValidator() {
    @Override
    public TemporalAmount getTimeToLive() {
        return Duration.ofSeconds(Instant.MAX.getEpochSecond());
    }
};
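Putting it together with the code from the question, here is a minimal sketch of the full decrypt path (assuming the same fernet-java8 API used above, with the key and token strings produced by the Python script; the string literals are placeholders):

package encryptapp;

import java.time.Duration;
import java.time.Instant;
import java.time.temporal.TemporalAmount;

import com.macasaet.fernet.*;

public class Decrypt
{
    public static void main(String[] args)
    {
        // Key and token are the Base64 strings written by the Python script
        // (key.txt and write_cipher_text_here.txt); the literals below are placeholders.
        final Key key = new Key("***key i got from python***");
        final Token token = Token.fromString("***cipher text i got from python***");

        // Anonymous validator with an effectively unlimited TTL, mirroring ttl=None in Python.
        final Validator<String> validator = new StringValidator() {
            @Override
            public TemporalAmount getTimeToLive() {
                return Duration.ofSeconds(Instant.MAX.getEpochSecond());
            }
        };

        final String payload = token.validateAndDecrypt(key, validator);
        System.out.println("Payload is " + payload);
    }
}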
I have implemented X25519 Diffie-Hellman key agreement in Java, and I have to reimplement it in Node.js so that I can do key agreement between a JS client and the Java backend.
I used the Node crypto module, but the length of the shared key is not the same as the one produced by the Java implementation.
Here is my Java code; could anybody help me with the Node.js code? Thanks.
package com.demo;
import java.util.Base64;
import javax.crypto.KeyAgreement;
import java.security.*;
import java.security.spec.ECGenParameterSpec;
import java.security.spec.X509EncodedKeySpec;
public class Main {
public static void main(String[] args) {
// write your code here
System.out.println("Hello world");
String peerPub = "MCowBQYDK2VuAyEAfMePklV88QMhq8qlVxLI6RK1pV4cFUrMwJgPmrXLyVU=";
try {
buildSecret(peerPub);
}
catch (Exception e) {
}
}
public static void buildSecret(String peerPub) throws Exception {
KeyPairGenerator kpgen = KeyPairGenerator.getInstance("XDH");
kpgen.initialize(new ECGenParameterSpec("X25519"));
KeyPair myKP = kpgen.generateKeyPair();
byte[] pp = Base64.getDecoder().decode(peerPub);
PublicKey peerKey = bytesToPublicKey(pp);
KeyAgreement ka = KeyAgreement.getInstance("XDH");
ka.init(myKP.getPrivate());
ka.doPhase(peerKey, true);
// System.out.println( myKP.getPublic().getEncoded().length );
String publicKey = Base64.getEncoder().encodeToString(myKP.getPublic().getEncoded());
// System.out.println( ka.generateSecret().length );
String sharedKey = Base64.getEncoder().encodeToString(ka.generateSecret());
System.out.println(publicKey);
System.out.println(sharedKey);
}
private static PublicKey bytesToPublicKey(byte[] data) throws Exception {
KeyFactory kf = KeyFactory.getInstance("X25519");
return kf.generatePublic(new X509EncodedKeySpec(data));
}
}
And here is the Node.js code (not working):
const crypto = require('crypto');
const ecdhKeyagreement = () => {
const CURVE = 'x25519';
let m_privateKey;
let m_publicKey;
let m_sharedKey;
const generatePublicAndPrivateKeys = () => {
const {publicKey, privateKey} = crypto.generateKeyPairSync('x25519', {
modulusLength: 4096,
publicKeyEncoding: {
type: 'spki',
format: 'pem'
},
privateKeyEncoding: {
type: 'pkcs8',
format: 'pem'
}
})
m_privateKey = privateKey
m_publicKey = publicKey
}
const computeSharedKey = (peerPub) => {
// console.log(m_publicKey)
// console.log(m_privateKey)
const bob = crypto.createDiffieHellman(512)
bob.setPrivateKey(m_privateKey)
m_sharedKey = bob.computeSecret(peerPub).toString('base64')
console.log(m_sharedKey)
};
return {
generatePublicAndPrivateKeys,
computeSharedKey,
};
};
const my_obj = ecdhKeyagreement();
my_obj.generatePublicAndPrivateKeys()
const peerPub = "MCowBQYDK2VuAyEAME2NXThH2T+PMTV2R2YGo5hYiVFhu7nbQGY0R89aYFE="
my_obj.computeSharedKey(peerPub)
You don't show the part of your Node.js code where the (presumed) problem is, which would be the accepted practice on Stack Overflow, but since you came halfway:
With your Java code, modified only to use the public half of one of my static test keypairs, and run on Java 16 (Java 11-15 apparently produce an AlgorithmIdentifier with parameters, which violates RFC 8410 and is rejected by OpenSSL, and therefore by Node's crypto module, which uses OpenSSL), plus the following straightforward JS code using the corresponding private key (run on Node v14.15.5), I get exactly the same agreement result as Java:
const crypto = require('crypto'), fs = require('fs')
const peerb64 = "MCowBQYDK2VuAyEAZWJZEjPzc6E4UUSyOcMmxj2cRqqmDhE4/VfyPyfe7j4="
console.log("sRbfKaWmO9u2eKWSfY25i8Z2YFNNiLYeVcoh6DOI2ik=") // expected agreement
const myprv = crypto.createPrivateKey(fs.readFileSync('n:certx/x2.pem'))
const peerpub = crypto.createPublicKey({key:Buffer.from(peerb64,'base64'),format:'der',type:'spki'})
console.log( crypto.diffieHellman({privateKey:myprv,publicKey:peerpub}) .toString('base64') )
Edit: I discovered this other question from a few years back (How to populate the cache in CachedSchemaRegistryClient without making a call to register a new schema?). It mentions that the CachedSchemaRegistryClient needs to register the schema with the actual registry for it to be cached, and that there is no workaround for this yet. So I am leaving my question here, but I wanted to point that out as well.
I am working on a program that pulls a byte array from Kafka, decrypts it (so it is secure while on Kafka), converts the bytes to a string, parses the JSON string into a JSON object, looks up the schema in the schema registry (using CachedSchemaRegistryClient), converts the JSON bytes to a GenericRecord using the schema from the retrieved registry metadata, and then serializes that GenericRecord into Avro bytes.
After running some tests, it seems that the CachedSchemaRegistryClient is the major performance drain, but from what I can tell this is the best way to get the schema metadata. Have I implemented something poorly, or is there some other way to do this that works for my use case?
Here is the code that handles everything after the decryption:
package org.apache.flink;
import avro.fullNested.FinalMessage;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import serializers.AvroFinishedMessageSerializer;
import tech.allegro.schema.json2avro.converter.JsonAvroConverter;
public class JsonToAvroBytesParser implements FlatMapFunction<String, byte[]> {
private transient CachedSchemaRegistryClient schemaRegistryClient;
private transient AvroFinishedMessageSerializer avroFinishedMessageSerializer;
private String schemaUrl;
private Integer identityMaxCount;
public JsonToAvroBytesParser(String passedSchemaUrl, int passedImc){
schemaUrl = passedSchemaUrl;
identityMaxCount = passedImc;
}
private void ensureInitialized() {
if (schemaUrl.equals("")) {
schemaUrl = "https://myschemaurl.com/";
}
if(identityMaxCount == null){
identityMaxCount = 5;
}
if(schemaRegistryClient == null){
schemaRegistryClient = new CachedSchemaRegistryClient(schemaUrl, identityMaxCount);
}
if(avroFinishedMessageSerializer == null){
avroFinishedMessageSerializer = new AvroFinishedMessageSerializer(FinalMessage.class);
}
}
@Override
public void flatMap(String s, Collector<byte[]> collector) throws Exception {
ensureInitialized();
Object obj = new JSONParser().parse(s);
JSONObject jsonObject = (JSONObject) obj;
try {
String headers = jsonObject.get("headers").toString();
JSONObject body = (JSONObject) jsonObject.get("requestBody");
if(headers != null && body != null){
String kafkaTopicFromHeaders = "hard_coded_name-value";
//NOTE: this schema lookup has serious performance issues.
SchemaMetadata schemaMetadata = schemaRegistryClient.getLatestSchemaMetadata(kafkaTopicFromHeaders);
//TODO: need to implement recovery method if schema cannot be reached.
JsonAvroConverter converter = new JsonAvroConverter();
GenericRecord specificRecord = converter.convertToGenericDataRecord(body.toJSONString().getBytes(), new Schema.Parser().parse(schemaMetadata.getSchema()));
byte[] bytesToReturn = avroFinishedMessageSerializer.serializeWithSchemaId(schemaMetadata, specificRecord);
collector.collect(bytesToReturn);
}
else {
System.out.println("json is incorrect.");
}
} catch (Exception e){
System.out.println("json conversion exception caught");
}
}
}
Thanks for any help in advance!
It appears the getLatestSchemaMetadata method does not use the cache. If you want your calls to use the cache to improve performance, perhaps you can reorganize your program to use one of the other methods that does use the cache, for example looking up the schema by ID or registering the schema by name with a definition string.
I'm having trouble locating documentation for Java (or Python, or C++) that confirms this is how the Schema Registry client works (I tried here). But the .NET docs say that, at least in that client API, the getLatest method is not cached.
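If reorganizing around the cached lookups is not practical, one possible workaround is to memoize the getLatestSchemaMetadata result yourself. The following is only a sketch of such a local cache (the class name and refresh interval are illustrative, not part of the registry client):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import io.confluent.kafka.schemaregistry.client.SchemaMetadata;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

// Caches the latest SchemaMetadata per subject and refreshes it after ttlMillis,
// so the registry is hit at most once per subject per interval.
public class LatestSchemaCache {

    private static final class Entry {
        final SchemaMetadata metadata;
        final long fetchedAt;

        Entry(SchemaMetadata metadata, long fetchedAt) {
            this.metadata = metadata;
            this.fetchedAt = fetchedAt;
        }
    }

    private final SchemaRegistryClient client;
    private final long ttlMillis;
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public LatestSchemaCache(SchemaRegistryClient client, long ttlMillis) {
        this.client = client;
        this.ttlMillis = ttlMillis;
    }

    public SchemaMetadata getLatest(String subject) throws Exception {
        long now = System.currentTimeMillis();
        Entry entry = cache.get(subject);
        if (entry == null || now - entry.fetchedAt > ttlMillis) {
            entry = new Entry(client.getLatestSchemaMetadata(subject), now);
            cache.put(subject, entry);
        }
        return entry.metadata;
    }
}

The ttlMillis value is a trade-off between how quickly new schema versions are picked up and how often the registry is actually called.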
I have a Java AWS Lambda function (handler), AHandler, that does some stuff, e.g. it is subscribed to SNS events, parses each SNS event, and logs the relevant data to the database.
I have another Java AWS Lambda function, BHandler. The objective of BHandler is to receive a request from AHandler and provide a response back to AHandler, because BHandler returns some JSON data that AHandler uses.
Could I see a clear example of how to do such a thing?
I saw these examples: call lambda function from a java class and Invoke lambda function from java.
My question is about the situation where one AWS Java Lambda function (handler) calls another AWS Java Lambda function, with both in the same region, the same account, the same VPC execution setup, and the same rights. In that case, can one AWS Java Lambda function directly call (invoke) the other, or does it still have to provide the AWS key, region, etc. (as in the links above)? A clear example/explanation would be very helpful.
EDIT
The AHandler that is calling the other Lambda function (BHandler) exists on the same account and has been given complete AWSLambdaFullAccess, including e.g.
"iam:PassRole",
"lambda:*",
Here is the calling code.
Note: the code below works when I call the same function, with everything else the same, from a normal Java main method. But it does not work when calling from one Lambda function to another (ALambdaHandler calling BLambdaHandler as a function call). It does not even return an exception; it just times out, getting stuck at the call to lambdaClient.invoke.
String awsAccessKeyId = PropertyManager.getSetting("awsAccessKeyId");
String awsSecretAccessKey = PropertyManager.getSetting("awsSecretAccessKey");
String regionName = PropertyManager.getSetting("regionName");
String geoIPFunctionName = PropertyManager.getSetting("FunctionName");
Region region;
AWSCredentials credentials;
AWSLambdaClient lambdaClient;
credentials = new BasicAWSCredentials(awsAccessKeyId,
awsSecretAccessKey);
lambdaClient = (credentials == null) ? new AWSLambdaClient()
: new AWSLambdaClient(credentials);
region = Region.getRegion(Regions.fromName(regionName));
lambdaClient.setRegion(region);
String returnGeoIPDetails = null;
try {
InvokeRequest invokeRequest = new InvokeRequest();
invokeRequest.setFunctionName(geoIPFunctionName);
invokeRequest.setPayload(ipInput);
returnGeoIPDetails = byteBufferToString(
lambdaClient.invoke(invokeRequest).getPayload(),
Charset.forName("UTF-8"),logger);
} catch (Exception e) {
logger.log(e.getMessage());
}
EDIT
I did everything as suggested by others and followed all of it. In the end I reached out to AWS Support, and the problem turned out to be related to some VPC configuration, which has now been solved. If you have encountered something similar, check your security and VPC configuration.
We have achieved this by using com.amazonaws.services.lambda.model.InvokeRequest.
Here is a code sample.
import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambdaAsyncClient;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;
import org.json.JSONObject;

public class LambdaInvokerFromCode {
public void runWithoutPayload(String functionName) {
runWithPayload(functionName, null);
}
public void runWithPayload(String functionName, String payload) {
AWSLambdaAsyncClient client = new AWSLambdaAsyncClient();
client.withRegion(Regions.US_EAST_1);
InvokeRequest request = new InvokeRequest();
request.withFunctionName(functionName).withPayload(payload);
InvokeResult invoke = client.invoke(request);
System.out.println("Result invoking " + functionName + ": " + invoke);
}
public static void main(String[] args) {
String KeyName ="41159569322017486.json";
String status = "success";
String body = "{\"bucketName\":\""+DBUtils.S3BUCKET_BULKORDER+"\",\"keyName\":\""+KeyName+"\", \"status\":\""+status+"\"}";
System.out.println(body);
JSONObject inputjson = new JSONObject(body);
String bucketName = inputjson.getString("bucketName");
String keyName = inputjson.getString("keyName");
String Status = inputjson.getString("status");
String destinationKeyName = keyName+"_"+status;
LambdaInvokerFromCode obj = new LambdaInvokerFromCode();
obj.runWithPayload(DBUtils.FILE_RENAME_HANDLER_NAME,body);
}
}
Make sure the role your Lambda function executes with has the lambda:InvokeFunction permission.
Then use the AWS SDK to invoke the second function. (Doc: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/lambda/AWSLambdaClient.html#invoke(com.amazonaws.services.lambda.model.InvokeRequest))
Edit: For such a scenario, consider using Step Functions.
We had a similar problem and tried to gather various implementations to achieve this. It turned out to have nothing to do with the code.
A few basic rules:
Ensure a proper policy and role for your Lambda function, at minimum:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Have the functions in the same region.
No VPC configuration is needed. If your application uses a VPC, make sure your Lambda function has the appropriate role policy (see AWSLambdaVPCAccessExecutionRole).
Most important (and primarily why it was failing for us): set the right timeouts and heap sizes. The calling Lambda waits until the called one has finished, so a simple rule of 2x the called Lambda's values works. Also, this only happened with a Java Lambda function calling another Java Lambda function; a Node.js Lambda function calling another Lambda function did not have this issue.
Following are some implementations that work for us:
Using service interface
import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambdaAsyncClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
public class LambdaFunctionHandler implements RequestHandler<Object, String> {
@Override
public String handleRequest(Object input, Context context) {
context.getLogger().log("Input: " + input);
FineGrainedService fg = LambdaInvokerFactory.builder()
.lambdaClient(
AWSLambdaAsyncClientBuilder.standard()
.withRegion(Regions.US_EAST_2)
.build()
)
.build(FineGrainedService.class);
context.getLogger().log("Response back from FG" + fg.getClass());
String fgRespone = fg.callFineGrained("Call from Gateway");
context.getLogger().log("fgRespone: " + fgRespone);
// TODO: implement your handler
return "Hello from Gateway Lambda!";
}
}
import com.amazonaws.services.lambda.invoke.LambdaFunction;
public interface FineGrainedService {
@LambdaFunction(functionName="SimpleFineGrained")
String callFineGrained(String input);
}
Using invoker
import java.nio.ByteBuffer;
import com.amazonaws.services.lambda.AWSLambdaClient;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
public class LambdaFunctionHandler implements RequestHandler<Object, String> {
@Override
public String handleRequest(Object input, Context context) {
context.getLogger().log("Input: " + input);
AWSLambdaClient lambdaClient = new AWSLambdaClient();
try {
InvokeRequest invokeRequest = new InvokeRequest();
invokeRequest.setFunctionName("SimpleFineGrained");
invokeRequest.setPayload("From gateway");
context.getLogger().log("Before Invoke");
ByteBuffer payload = lambdaClient.invoke(invokeRequest).getPayload();
context.getLogger().log("After Inoke");
context.getLogger().log(payload.toString());
context.getLogger().log("After Payload logger");
} catch (Exception e) {
// TODO: handle exception
}
// TODO: implement your handler
return "Hello from Lambda!";
}
}
Note: AWSLambdaClient should be created via AWSLambdaClientBuilder rather than by calling the constructor directly, which is deprecated.
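For example, a minimal sketch with the builder (the region value is only an example):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;

// Builds a client that picks up credentials from the Lambda execution role.
AWSLambda lambdaClient = AWSLambdaClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();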
You can use LambdaClient (AWS SDK for Java v2) to invoke a Lambda function asynchronously by passing the InvocationType.EVENT parameter. Here is an example:
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvocationType;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

LambdaClient lambdaClient = LambdaClient.builder().build();
InvokeRequest invokeRequest = InvokeRequest.builder()
.functionName("functionName")
.invocationType(InvocationType.EVENT)
.payload(SdkBytes.fromUtf8String("payload"))
.build();
InvokeResponse response = lambdaClient.invoke(invokeRequest);
I am having a problem making Jasypt work in this scenario:
StrongTextEncryptor textEncryptor = new StrongTextEncryptor();
textEncryptor.setPassword("myPassword");
String myEncryptedParam = textEncryptor.encrypt("myClearMessage");
myObject.setCallbackUrl("http://myhost/notification?myparam="+myEncryptedParam);
When I receive the callback URL and try to decrypt the parameter 'myparam' provided in the URL with the same StrongTextEncryptor used in the request, it raises an exception:
org.jasypt.exceptions.EncryptionOperationNotPossibleException
at org.jasypt.encryption.pbe.StandardPBEByteEncryptor.decrypt(StandardPBEByteEncryptor.java:1055)
at org.jasypt.encryption.pbe.StandardPBEStringEncryptor.decrypt(StandardPBEStringEncryptor.java:725)
at org.jasypt.util.text.StrongTextEncryptor.decrypt(StrongTextEncryptor.java:118)
at com.softlysoftware.caligraph.util.Util.decryptMessage(Util.java:30)
Digging a bit more in the exception I get:
BadPaddingException: Given final block not properly padded
If I test the encryption/decryption process without the HTTP request, it works fine.
The problem is that StrongTextEncryptor uses StandardPBEStringEncryptor, which in turn uses Base64 to encode the ciphertext, and standard Base64 uses the + and / characters, which are not URL-safe. When you try to decrypt, the parameter parser that you use probably mangles those characters (for example, decoding + as a space), which corrupts the ciphertext.
The easiest solution is probably to replace the offending characters with replaceAll (strings are immutable, so remember to assign the result):
myEncryptedParam = myEncryptedParam.replaceAll("/", "_").replaceAll("\\+", "-");
and back again before you try to decrypt:
receivedParam = receivedParam.replaceAll("_", "/").replaceAll("-", "\\+");
This transforms the encoding from the normal Base64 encoding to the "URL and Filename safe" Base 64 alphabet.
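As an alternative to the manual replacement, java.util.Base64 (Java 8+) provides a URL-safe codec. A sketch, assuming the Jasypt output is standard Base64 (which StandardPBEStringEncryptor produces):

import java.util.Base64;

// Re-encode standard Base64 ciphertext with the URL- and filename-safe alphabet.
static String toUrlSafe(String standardBase64) {
    return Base64.getUrlEncoder()
            .encodeToString(Base64.getDecoder().decode(standardBase64));
}

// Convert back to standard Base64 before handing the value to Jasypt for decryption.
static String fromUrlSafe(String urlSafeBase64) {
    return Base64.getEncoder()
            .encodeToString(Base64.getUrlDecoder().decode(urlSafeBase64));
}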
Building on Artjom's answer, here is a Jasypt text encryptor wrapper:
import org.jasypt.util.text.TextEncryptor;
public class UrlSafeTextEncryptor implements TextEncryptor {
private TextEncryptor textEncryptor; // thread safe
public UrlSafeTextEncryptor(TextEncryptor textEncryptor) {
this.textEncryptor = textEncryptor;
}
public String encrypt(String string) {
String encrypted = textEncryptor.encrypt(string);
return encrypted.replaceAll("/", "_").replaceAll("\\+", "-");
}
public String decrypt(String encrypted) {
encrypted = encrypted.replaceAll("_", "/").replaceAll("-", "\\+");
return textEncryptor.decrypt(encrypted);
}
}
and a corresponding test case:
import org.jasypt.util.text.StrongTextEncryptor;
import org.jasypt.util.text.TextEncryptor;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
public class UrlSafeTextEncryptorTest {
private String password = "12345678";
protected TextEncryptor encryptor;
protected UrlSafeTextEncryptor urlSafeEncryptor;
@Before
public void init() {
StrongTextEncryptor encryptor = new StrongTextEncryptor(); // your implementation here
encryptor.setPassword(password);
this.encryptor = encryptor;
this.urlSafeEncryptor = new UrlSafeTextEncryptor(encryptor);
}
@Test
public void scramble_roundtrip_urlSafe() {
int i = 0;
while(true) {
String key = Integer.toString(i);
String urlSafeEncrypted = urlSafeEncryptor.encrypt(key);
Assert.assertFalse(urlSafeEncrypted, urlSafeEncrypted.contains("/"));
Assert.assertEquals(key, urlSafeEncryptor.decrypt(urlSafeEncrypted));
if(urlSafeEncrypted.contains("_")) {
break;
}
i++;
}
}
}
I've implemented the basic OpenID Connect flow in my Java application and it seems to work fine.
I'd like to use an existing Java library to verify the ID token, as detailed here on a Salesforce page about implementing OpenID Connect.
Are there any existing libraries that implement this well? I've got the response parsed, I just need to find some simple way to verify the id token is valid.
The following example will validate an id_token from an OAuth2 call for Salesforce, without any 3rd party libraries. Note that you'll have to supply a valid id_token below to test this out.
package jwt_validate_signature_sf_no_third_party;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.RSAPublicKeySpec;
import org.apache.commons.codec.binary.Base64;
public class Main
{
// Sample id_token that needs validation. This is probably the only field you need to change to test your id_token.
// If it doesn't work, try making sure the MODULUS and EXPONENT constants are what you're using, as detailed below.
public static final String id_token = "YOUR_ID_TOKEN_HERE";
public static final String[] id_token_parts = id_token.split("\\.");
// Constants that come from the keys your token was signed with.
// Correct values can be found from using the "kid" value and looking up the "n (MODULUS)" and "e (EXPONENT)" fields
// at the following url: https://login.salesforce.com/id/keys
// MAJOR NOTE: This url will work for 90% of your use cases, but for the other 10%
// you'll need to make sure you get the "kid" value from the instance url that
// the api responses from Salesforce suggest for your token, as the kid values *will* be different.
// e.g. Some users would need to get their kid values from https://na44.salesforce.com/id/keys for example.
// The following 2 values are hard coded to work with the "kid=196" key values.
public static final String MODULUS = "5SGw1jcqyFYEZaf39RoxAhlq-hfRSOsneVtsT2k09yEQhwB2myvf3ckVAwFyBF6y0Hr1psvu1FlPzKQ9YfcQkfge4e7eeQ7uaez9mMQ8RpyAFZprq1iFCix4XQw-jKW47LAevr9w1ttZY932gFrGJ4gkf_uqutUny82vupVUETpQ6HDmIL958SxYb_-d436zi5LMlHnTxcR5TWIQGGxip-CrD7vOA3hrssYLhNGQdwVYtwI768EvwE8h4VJDgIrovoHPH1ofDQk8-oG20eEmZeWugI1K3z33fZJS-E_2p_OiDVr0EmgFMTvPTnQ75h_9vyF1qhzikJpN9P8KcEm8oGu7KJGIn8ggUY0ftqKG2KcWTaKiirFFYQ981PhLHryH18eOIxMpoh9pRXf2y7DfNTyid99ig0GUH-lzAlbKY0EV2sIuvEsIoo6G8YT2uI72xzl7sCcp41FS7oFwbUyHp_uHGiTZgN7g-18nm2TFmQ_wGB1xCwJMFzjIXq1PwEjmg3W5NBuMLSbG-aDwjeNrcD_4vfB6yg548GztQO2MpV_BuxtrZDJQm-xhJXdm4FfrJzWdwX_JN9qfsP0YU1_mxtSU_m6EKgmwFdE3Yh1WM0-kRRSk3gmNvXpiKeVduzm8I5_Jl7kwLgBw24QUVaLZn8jC2xWRk_jcBNFFLQgOf9U";
public static final String EXPONENT = "AQAB";
public static final String ID_TOKEN_HEADER = base64UrlDecode(id_token_parts[0]);
public static final String ID_TOKEN_PAYLOAD = base64UrlDecode(id_token_parts[1]);
public static final byte[] ID_TOKEN_SIGNATURE = base64UrlDecodeToBytes(id_token_parts[2]);
public static String base64UrlDecode(String input)
{
byte[] decodedBytes = base64UrlDecodeToBytes(input);
String result = new String(decodedBytes, StandardCharsets.UTF_8);
return result;
}
public static byte[] base64UrlDecodeToBytes(String input)
{
Base64 decoder = new Base64(-1, null, true);
byte[] decodedBytes = decoder.decode(input);
return decodedBytes;
}
public static void main(String args[])
{
dumpJwtInfo();
validateToken();
}
public static void dump(String data)
{
System.out.println(data);
}
public static void dumpJwtInfo()
{
dump(ID_TOKEN_HEADER);
dump(ID_TOKEN_PAYLOAD);
}
public static void validateToken()
{
PublicKey publicKey = getPublicKey(MODULUS, EXPONENT);
byte[] data = (id_token_parts[0] + "." + id_token_parts[1]).getBytes(StandardCharsets.UTF_8);
try
{
boolean isSignatureValid = verifyUsingPublicKey(data, ID_TOKEN_SIGNATURE, publicKey);
System.out.println("isSignatureValid: " + isSignatureValid);
}
catch (GeneralSecurityException e)
{
e.printStackTrace();
}
}
public static PublicKey getPublicKey(String MODULUS, String EXPONENT)
{
byte[] nb = base64UrlDecodeToBytes(MODULUS);
byte[] eb = base64UrlDecodeToBytes(EXPONENT);
BigInteger n = new BigInteger(1, nb);
BigInteger e = new BigInteger(1, eb);
RSAPublicKeySpec rsaPublicKeySpec = new RSAPublicKeySpec(n, e);
try
{
PublicKey publicKey = KeyFactory.getInstance("RSA").generatePublic(rsaPublicKeySpec);
return publicKey;
}
catch (Exception ex)
{
throw new RuntimeException("Cant create public key", ex);
}
}
private static boolean verifyUsingPublicKey(byte[] data, byte[] signature, PublicKey pubKey) throws GeneralSecurityException
{
Signature sig = Signature.getInstance("SHA256withRSA");
sig.initVerify(pubKey);
sig.update(data);
return sig.verify(signature);
}
}
Note: if you're not opposed to using a third-party library, I'd definitely suggest using this one, as it works great. I couldn't use it for business reasons, but I was glad to find it, as it helped me understand how this process works, and it validates an id_token in what I'm sure is a much more robust way.
Also, to be certain this request was signed by the same client, ensure the aud parameter in the payload matches your own client key given to you by Salesforce.
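As an illustration of that aud check, a rough sketch (assuming a JSON parser such as org.json is on the classpath; the expected value is the consumer key of your connected app and is only a placeholder here):

import org.json.JSONObject;

// ID_TOKEN_PAYLOAD is the decoded payload string from the class above.
JSONObject claims = new JSONObject(ID_TOKEN_PAYLOAD);
String expectedClientId = "YOUR_CONNECTED_APP_CONSUMER_KEY"; // placeholder
// Note: per OIDC, aud may also be an array of strings; handle that case if it applies.
if (!expectedClientId.equals(claims.optString("aud"))) {
    throw new SecurityException("aud claim does not match this client");
}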
As part of Spring Security OAuth, the Spring team has developed a library called Spring Security JWT that allows manipulation of JWTs, including decoding and verifying tokens.
See the following helper class for example:
https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-jwt/src/main/java/org/springframework/security/jwt/JwtHelper.java
The library is at version 1.0.0-RELEASE and is available in the Maven repository.
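A rough sketch of how that helper could be used to verify an id_token signature (class and method names as found in spring-security-jwt 1.0.x; the PEM key is a placeholder for the issuer's RSA public key):

import org.springframework.security.jwt.Jwt;
import org.springframework.security.jwt.JwtHelper;
import org.springframework.security.jwt.crypto.sign.RsaVerifier;

public class IdTokenCheck {
    public static void main(String[] args) {
        String idToken = "YOUR_ID_TOKEN_HERE"; // the id_token from the OAuth2 response
        String issuerPublicKeyPem = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"; // placeholder

        // Throws an exception if the RSA signature does not verify; claims come back as raw JSON.
        Jwt jwt = JwtHelper.decodeAndVerify(idToken, new RsaVerifier(issuerPublicKeyPem));
        System.out.println(jwt.getClaims()); // still check iss, aud, and exp yourself
    }
}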