We are trying to secure our API with JWT. We are using WebFlux for our app development. Our requirement is to read the JWT token coming from the consumer, extract the certificate from the JWT, and validate it.
We are already using a filter for the trace ID in our application. What is the best approach to JWT authentication?
Using another filter? Or can I chain it with the existing TraceId filter? Does Spring provide any solution for JWT authentication?
Here is the code I am using for the TraceId filter.
@Component
@Slf4j
public class TraceIdFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        Map<String, String> headers = exchange.getRequest().getHeaders().toSingleValueMap();
        return chain.filter(exchange)
                .subscriberContext(context -> {
                    var traceId = "";
                    if (headers.containsKey("X-B3-TRACEID")) {
                        traceId = headers.get("X-B3-TRACEID");
                        MDC.put("X-B3-TraceId", traceId);
                    } else if (!exchange.getRequest().getURI().getPath().contains("/actuator")) {
                        log.warn("TRACE_ID not present in header: {}", exchange.getRequest().getURI());
                    }
                    // simple hack to provide the context with the exchange, so the whole chain can get the same trace id
                    Context contextTmp = context.put("X-B3-TraceId", traceId);
                    exchange.getAttributes().put("X-B3-TraceId", traceId);
                    return contextTmp;
                });
    }
}
When working with JWT in a Java project, it is recommended to use this dependency:
// https://mvnrepository.com/artifact/com.auth0/java-jwt
compile group: 'com.auth0', name: 'java-jwt', version: '3.11.0'
I personally use implementation instead of compile and tend to use "+" for the version.
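For reference, the same dependency written with the implementation configuration (pinning the version, as shown, keeps builds reproducible, but that's a matter of taste):
implementation group: 'com.auth0', name: 'java-jwt', version: '3.11.0'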
Since you're using RSA, you'll need to declare a variable like this near the code you're working with, ideally within the TraceIdFilter class from your code snippet.
RSAPublicKey publicKey;
Assuming you're able to get the contents of the public key into a String called publicKeyStr, you'll want to initialize your public key object like so:
try {
    X509EncodedKeySpec pubKeySpec = new X509EncodedKeySpec(Base64.getDecoder().decode(publicKeyStr));
    publicKey = (RSAPublicKey) KeyFactory.getInstance("RSA").generatePublic(pubKeySpec);
} catch (InvalidKeySpecException | NoSuchAlgorithmException e) {
    // Handle the exception accordingly
}
In your filter code, you can explore the JWT contents like so (using token as the String holding your JWT):
DecodedJWT decodedJwt = null;
try {
    decodedJwt = JWT.require(Algorithm.RSA512(publicKey, null))
            .build()
            .verify(token);
} catch (JWTVerificationException e) {
    // If your key is valid, then the token is invalid, so handle accordingly
}
Note that JWT tokens often contain registered date claims such as iat (issued at) and exp (expiration); if the current time is after exp or before iat, verification will fail.
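If clock skew between the token issuer and your service is a concern, java-jwt can tolerate some drift when validating those date claims; a small sketch of the verifier built with leeway:
DecodedJWT decodedJwt = JWT.require(Algorithm.RSA512(publicKey, null))
        .acceptLeeway(5)       // accept 5 seconds of drift on all date claims (iat, exp, nbf)
        .acceptExpiresAt(10)   // optionally override the leeway for exp alone
        .build()
        .verify(token);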
Should verification succeed and you have a valid DecodedJWT object, you can inspect the various claims available in the token.
For instance, if you are looking for a "valid" claim and expect it to be of boolean type, you can access it like so:
Claim idClaim = decodedJwt.getClaim("valid");
Boolean idBool = idClaim.asBoolean();
Bear in mind that if no such claim is in the token, the lookup comes back empty (depending on the library version, idClaim may be null or an empty "null claim"; check Claim#isNull()), and if the value is not a Boolean (or can't be converted to one), idBool will be null.
Here is a link to more of the Payload interface, which DecodedJWT extends:
JWT Payload (this link works better in Firefox than in Edge Chromium)
Not knowing what the certificate within the JWT token is supposed to look like, hopefully you'll be able to find it using the claims feature of this dependency.
Most of my answer focuses on the JWT aspect, but I found a Stack Overflow question that might inspire you regarding the WebFilter aspect of your query.
Here are some of the imports used for the code:
import java.security.KeyFactory;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPublicKey;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;
import com.auth0.jwt.interfaces.Claim;
import com.auth0.jwt.interfaces.DecodedJWT;
import com.auth0.jwt.interfaces.RSAKeyProvider;
My example is hantsy/spring-reactive-jwt-sample.
Use a filter to populate the SecurityContextHolder, and register it in the security filter chain.
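A minimal sketch of that idea on Spring Security's reactive stack, reusing the java-jwt verification from above (the constructor-injected publicKey and the ROLE_USER authority are illustrative assumptions, not part of the sample repo):

import java.security.interfaces.RSAPublicKey;
import java.util.List;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.context.ReactiveSecurityContextHolder;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;
import com.auth0.jwt.interfaces.DecodedJWT;

@Component
public class JwtAuthenticationFilter implements WebFilter {

    private final RSAPublicKey publicKey; // built from your key material as shown earlier

    public JwtAuthenticationFilter(RSAPublicKey publicKey) {
        this.publicKey = publicKey;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        String header = exchange.getRequest().getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
        if (header == null || !header.startsWith("Bearer ")) {
            // no token: continue unauthenticated and let the security rules reject it downstream
            return chain.filter(exchange);
        }
        try {
            DecodedJWT jwt = JWT.require(Algorithm.RSA512(publicKey, null))
                    .build()
                    .verify(header.substring("Bearer ".length()));
            var authentication = new UsernamePasswordAuthenticationToken(
                    jwt.getSubject(), null, List.of(new SimpleGrantedAuthority("ROLE_USER")));
            // populate the reactive SecurityContext for the rest of the chain
            // (use contextWrite instead of subscriberContext on Reactor 3.4+)
            return chain.filter(exchange)
                    .subscriberContext(ReactiveSecurityContextHolder.withAuthentication(authentication));
        } catch (JWTVerificationException e) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
    }
}

This can live alongside your TraceIdFilter (order the two with @Order if it matters), or be registered explicitly in the SecurityWebFilterChain via ServerHttpSecurity#addFilterAt.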
I can create a ProjectApiRoot using the Java SDK and perform requests with that using the following code:
private static ProjectApiRoot createProjectClient() {
    ProjectApiRoot apiRoot = ApiRootBuilder.of()
            .defaultClient(ClientCredentials.of()
                            .withClientId(System.getenv("CTP_CLIENT_ID"))
                            .withClientSecret(System.getenv("CTP_CLIENT_SECRET"))
                            .build(),
                    ServiceRegion.GCP_EUROPE_WEST1)
            .build(System.getenv("CTP_PROJECT_KEY"))
    return apiRoot
}
However, I would like to authorize as a specific customer (email and password) and interact with the Commercetools API using the customer. The following code throws an error:
private static ProjectApiRoot createCustomerClient() {
    def tokenUri = "https://auth.europe-west1.gcp.commercetools.com/oauth/*CTP_PROJECT_KEY*/customers/token"
    def projectKey = System.getenv("CTP_PROJECT_KEY")
    def scopes = System.getenv("CTP_SCOPES")
    def credentials = ClientCredentials.of()
            .withClientId("*email*")
            .withClientSecret("*password*")
            .withScopes(scopes)
            .build()
    def apiRootBuilder = ApiRootBuilder.of()
            .withApiBaseUrl("https://api.europe-west1.gcp.commercetools.com")
            .withClientCredentialsFlow(credentials, tokenUri)
    return apiRootBuilder.build(projectKey)
}
Error:
io.vrap.rmf.base.client.oauth2.AuthException: detailMessage: Unauthorized
"message" : "Please provide valid client credentials using HTTP Basic Authentication.",
Use withGlobalCustomerPasswordFlow instead of withClientCredentialsFlow; it authenticates the customer before performing the request.
But I would advise doing this only in a context where the customer logs in every time. Any other context, e.g. a remembered login, needs a more sophisticated approach: you have to store the bearer token and refresh token yourself, and you can't easily use the middleware approach for authenticating the customer, but instead have to do it outside of an auth-flow middleware.
Please see also https://github.com/commercetools/commercetools-sdk-java-v2/tree/main/commercetools/commercetools-sdk-java-api/src/integrationTest/java/commercetools/me
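A minimal sketch of that flow, assuming a withGlobalCustomerPasswordFlow overload that takes the API client's credentials plus the customer's email, password, and the customers/token endpoint (check your SDK version for the exact signature). The key point, given the 403 above, is that the API client's id/secret stay in ClientCredentials, while the customer's email/password go to the flow itself:

private static ProjectApiRoot createCustomerClient(String email, String password) {
    String projectKey = System.getenv("CTP_PROJECT_KEY");
    String tokenUri = "https://auth.europe-west1.gcp.commercetools.com/oauth/"
            + projectKey + "/customers/token";

    // the API client's credentials, NOT the customer's email/password
    ClientCredentials credentials = ClientCredentials.of()
            .withClientId(System.getenv("CTP_CLIENT_ID"))
            .withClientSecret(System.getenv("CTP_CLIENT_SECRET"))
            .withScopes(System.getenv("CTP_SCOPES"))
            .build();

    return ApiRootBuilder.of()
            .withApiBaseUrl("https://api.europe-west1.gcp.commercetools.com")
            // assumed parameter order: credentials, customer email, customer password, token endpoint
            .withGlobalCustomerPasswordFlow(credentials, email, password, tokenUri)
            .build(projectKey);
}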
I have been researching a working example for this but haven't found any.
I referred to the following links:
Stackoverflow Link and Google Official Docs
From that documentation I understood that I need to implement this:
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.CacheInvalidationRule;
import com.google.api.services.compute.model.Operation;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
public class ComputeExample {

    public static void main(String[] args) throws IOException, GeneralSecurityException {
        // Project ID for this request.
        String project = "my-project"; // TODO: Update placeholder value.

        // Name of the UrlMap scoping this request.
        String urlMap = "my-url-map"; // TODO: Update placeholder value.

        // TODO: Assign values to desired fields of `requestBody`:
        CacheInvalidationRule requestBody = new CacheInvalidationRule();

        Compute computeService = createComputeService();
        Compute.UrlMaps.InvalidateCache request =
                computeService.urlMaps().invalidateCache(project, urlMap, requestBody);

        Operation response = request.execute();

        // TODO: Change code below to process the `response` object:
        System.out.println(response);
    }

    public static Compute createComputeService() throws IOException, GeneralSecurityException {
        HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();

        GoogleCredential credential = GoogleCredential.getApplicationDefault();
        if (credential.createScopedRequired()) {
            credential =
                    credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
        }

        return new Compute.Builder(httpTransport, jsonFactory, credential)
                .setApplicationName("Google-ComputeSample/0.1")
                .build();
    }
}
But as you can see, this example only has placeholder values in place.
If I wanted to flush the cache of a page called https://mywebsite.com/homepage.html, where would I enter this information in the above code?
Do I add it here:
credential.createScoped(Arrays.asList("https://mywebsite.com/homepage.html"));
Or should I add it in UrlMaps? This is very confusing.
It should go in the request body. The request body contains data with the following structure:
JSON representation
{
  "path": string,
  "host": string
}
Both fields take a string: path is the path to invalidate, and host, if set, restricts the invalidation rule to requests with a Host header matching host.
You need to create the request body object first:
CacheInvalidationRule requestBody = new CacheInvalidationRule();
This creates a CacheInvalidationRule object and assigns it to requestBody. Then set the two properties; both setters take a String argument:
requestBody.setHost("mywebsite.com");
and
requestBody.setPath("/homepage.html");
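Putting it together with the createComputeService() helper from your own snippet, the call for your example URL would look like this:

Compute computeService = createComputeService();

CacheInvalidationRule requestBody = new CacheInvalidationRule();
requestBody.setHost("mywebsite.com");   // only invalidate requests whose Host header matches
requestBody.setPath("/homepage.html");  // the path to flush

Operation response = computeService.urlMaps()
        .invalidateCache(project, urlMap, requestBody)
        .execute();
System.out.println(response);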
Hope it helps, good luck
I would also suggest checking the Compute.UrlMaps.InvalidateCache class on developers.google.com and reading the method summary descriptions; I believe it will help you understand this class and how to incorporate it in your code. It contains method details and parameter descriptions, for example:
Constructor Detail
InvalidateCache
protected InvalidateCache(java.lang.String project,
java.lang.String urlMap,
CacheInvalidationRule content)
Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap. Creates a request for the method "urlMaps.invalidateCache". This request holds the parameters needed by the compute server. After setting any optional parameters, call the AbstractGoogleClientRequest.execute() method to invoke the remote operation.
InvalidateCache#initialize(com.google.api.client.googleapis.services.AbstractGoogleClientRequest)
must be called to initialize this instance immediately after invoking the constructor.
Parameters:
project - Project ID for this request.
urlMap - Name of the UrlMap scoping this request.
content - the CacheInvalidationRule
Method detail, for example for set:
set
public Compute.UrlMaps.InvalidateCache set(java.lang.String parameterName,
java.lang.Object value)
JavaScript has an amazing library called morgan that logs all incoming HTTP requests. I'm wondering if there's an equivalent library in Java/Kotlin for logging Spring Boot WebFlux requests.
This is what ended up working for me after having a look at this repo.
I tried all night to get the request body as well, but kept getting errors about the request body allowing only a single subscriber per request. It's either impossible, pretty darn difficult, or not recommended in the case of requests with a large body (since it could block your server). So I would highly recommend you convert your @RequestBody objects to @RequestParam query params instead if you want to log variables in POST requests.
package com.example.demo

import org.slf4j.LoggerFactory
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import org.springframework.web.server.WebFilter
import org.springframework.web.server.WebFilterChain
import reactor.core.publisher.Mono

@Component
class LogFilter : WebFilter {

    private val logger = LoggerFactory.getLogger(LogFilter::class.java)

    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        val startTime = System.currentTimeMillis()
        val request = exchange.request
        val path = request.uri.path

        val requestPrintMap = mutableMapOf<Any, Any>()
        requestPrintMap["headers"] = request.headers
        requestPrintMap["uri"] = request.uri
        requestPrintMap["params"] = request.queryParams
        logger.info("Incoming request: {}", requestPrintMap)

        return chain
            .filter(exchange)
            .doAfterTerminate {
                logger.info("Served '{}' as {} in {} msec",
                    path,
                    exchange.response.statusCode,
                    System.currentTimeMillis() - startTime)
            }
    }
}
Using CuratorFramework, could someone explain how I can:
Create a new path
Set data for this path
Get this path
Using username foo and password bar? Those who don't know this user/pass should not be able to do anything.
I don't care about SSL or passwords being sent via plaintext for the purpose of this question.
ACLs in Apache Curator are for access control. ZooKeeper does not provide an authentication mechanism in the sense that clients without the correct password cannot connect to ZooKeeper or cannot create ZNodes. What it can do is prevent unauthorized clients from accessing particular ZNodes. To do that, set up the CuratorFramework instance as described below. Remember, this guarantees that a ZNode created with a given ACL can be accessed again by the same client, or by a client presenting the same authentication information.
First, build the CuratorFramework instance as follows. Here, connectString is a comma-separated list of ip:port pairs of the ZooKeeper servers in your ensemble.
CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder()
        .connectString(connectString)
        .retryPolicy(new ExponentialBackoffRetry(retryInitialWaitMs, maxRetryCount))
        .connectionTimeoutMs(connectionTimeoutMs)
        .sessionTimeoutMs(sessionTimeoutMs);

/*
 * If authorization information is available, it is added to the client. NOTE: This auth info is
 * for access control, so no authentication happens when the client is started. The info is only
 * required when a client accesses an already created ZNode. For a client on another node to make
 * use of a ZNode created by this node, it must provide the same auth info.
 */
if (zkUsername != null && zkPassword != null) {
    String authenticationString = zkUsername + ":" + zkPassword;
    builder.authorization("digest", authenticationString.getBytes())
            .aclProvider(new ACLProvider() {
                @Override
                public List<ACL> getDefaultAcl() {
                    return ZooDefs.Ids.CREATOR_ALL_ACL;
                }

                @Override
                public List<ACL> getAclForPath(String path) {
                    return ZooDefs.Ids.CREATOR_ALL_ACL;
                }
            });
}

CuratorFramework client = builder.build();
Now you have to start it.
client.start();
Creating a path.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Here, the CreateMode specifies what type of node you want to create. Available types are PERSISTENT, EPHEMERAL, EPHEMERAL_SEQUENTIAL, PERSISTENT_SEQUENTIAL, and CONTAINER. Java Docs
If you are not sure whether the parent path up to /your/ZNode already exists, you can have it created as well.
client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Set Data
You can either set data when you are creating the ZNode or later. If you are setting data at creation time, pass the data as a byte array as the second parameter to the forPath() method.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path","your data as String".getBytes());
If you are doing it later (data should be given as a byte array):
client.setData().forPath("/your/ZNode/path",data);
Finally
I don't understand what you mean by "get this path". Apache Curator is a Java client (more than that, with Curator Recipes) which uses Apache ZooKeeper in the background and hides the edge cases and complexities of ZooKeeper. In ZooKeeper, data is stored in ZNodes. You can think of them as the Linux directory structure: all ZNode paths should start with / (the root), and you can nest directory-like paths as you like, e.g. /someName/another/test/sample.
ZNodes are organized in a tree structure, and each ZNode can store up to 1 MB of data. Therefore, if you want to retrieve data stored in a ZNode, you need to know the path to that ZNode (just like you need to know the table and column of a database in order to retrieve data).
If you want to retrieve the data at a given path:
client.getData().forPath("/path/to/ZNode");
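Since getData() hands back the raw byte[], decoding is up to you; for the String data written above:

byte[] bytes = client.getData().forPath("/path/to/ZNode");
String value = new String(bytes, java.nio.charset.StandardCharsets.UTF_8);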
That's all you have to know when you want to work with Curator.
One more thing
ACLs in Apache Curator are for access control. That is, if you set the ACLProvider as follows,
new ACLProvider() {
    @Override
    public List<ACL> getDefaultAcl() {
        return ZooDefs.Ids.CREATOR_ALL_ACL;
    }

    @Override
    public List<ACL> getAclForPath(String path) {
        return ZooDefs.Ids.CREATOR_ALL_ACL;
    }
}
only the client with credentials identical to the creator's will later be given access to the corresponding ZNode. Authorization details are set as follows (see the client-building example above). There are other ACL modes available, like OPEN_ACL_UNSAFE, which does no access control if you set it as the ACLProvider.
authorization("digest", authorizationString.getBytes())
They will be used later to control access to a given ZNode.
In short, if you want to prevent others from interfering with your ZNodes, set the ACLProvider to return CREATOR_ALL_ACL and set the authorization to digest as shown above. Only CuratorFramework instances using the same authorization string ("username:password") will be able to access those ZNodes. It will not, however, prevent others from creating ZNodes in paths that don't interfere with yours.
Hope you found what you want :-)
It wasn't part of the original question, but I thought I would share a solution I came up with in which the credentials used determine the access level.
I didn't have much luck finding any examples and kept ending up on this page so maybe it will help someone else. I dug through the source code of Curator Framework and luckily the org.apache.curator.framework.recipes.leader.TestLeaderAcls class was there to point me in the right direction.
So in this example:
One generic client used across multiple apps which only needs to read data from ZK.
Another admin client has the ability to read, delete, and update nodes in ZK.
Read-only or admin access is determined by the credentials used.
FULL-CONTROL ADMIN CLIENT
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.ACLProvider;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
public class AdminClient {

    protected static CuratorFramework client = null;

    public void initializeClient() throws NoSuchAlgorithmException {
        String zkConnectString = "127.0.0.1:2181";
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        final List<ACL> acls = new ArrayList<>();

        // full-control ACL
        String zkUsername = "adminuser";
        String zkPassword = "adminpass";
        String fullControlAuth = zkUsername + ":" + zkPassword;
        String fullControlDigest = DigestAuthenticationProvider.generateDigest(fullControlAuth);
        ACL fullControlAcl = new ACL(ZooDefs.Perms.ALL, new Id("digest", fullControlDigest));
        acls.add(fullControlAcl);

        // read-only ACL
        String zkReadOnlyUsername = "readuser";
        String zkReadOnlyPassword = "readpass";
        String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
        String readOnlyDigest = DigestAuthenticationProvider.generateDigest(readOnlyAuth);
        ACL readOnlyAcl = new ACL(ZooDefs.Perms.READ, new Id("digest", readOnlyDigest));
        acls.add(readOnlyAcl);

        // create the client with full-control access
        client = CuratorFrameworkFactory.builder()
                .connectString(zkConnectString)
                .retryPolicy(retryPolicy)
                .authorization("digest", fullControlAuth.getBytes())
                .aclProvider(new ACLProvider() {
                    @Override
                    public List<ACL> getDefaultAcl() {
                        return acls;
                    }

                    @Override
                    public List<ACL> getAclForPath(String string) {
                        return acls;
                    }
                })
                .build();
        client.start();
        // Now create, read, delete ZK nodes
    }
}
READ-ONLY CLIENT
import java.security.NoSuchAlgorithmException;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
public class ReadOnlyClient {

    protected static CuratorFramework client = null;

    public void initializeClient() throws NoSuchAlgorithmException {
        String zkConnectString = "127.0.0.1:2181";
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        String zkReadOnlyUsername = "readuser";
        String zkReadOnlyPassword = "readpass";
        String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;

        client = CuratorFrameworkFactory.builder()
                .connectString(zkConnectString)
                .retryPolicy(retryPolicy)
                .authorization("digest", readOnlyAuth.getBytes())
                .build();
        client.start();
        // Now read ZK nodes
    }
}
I am building a Java application to index some JSON files using Amazon CloudSearch. I think I've followed the AWS documentation correctly, but I can't make my application work.
package com.myPackage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsRequest;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsResult;
public class App {

    public static final String ACCESS_KEY = "myAccessKey";
    public static final String SECRET_KEY = "mySecretKey";
    public static final String ENDPOINT = "myDocumentEndpoint";

    public static void main(String[] args) throws FileNotFoundException {
        AWSCredentials credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);
        AmazonCloudSearchDomainClient domain = new AmazonCloudSearchDomainClient(credentials);
        domain.setEndpoint(ENDPOINT);

        File file = new File("path to my file");
        InputStream docAsStream = new FileInputStream(file);

        UploadDocumentsRequest req = new UploadDocumentsRequest();
        req.setDocuments(docAsStream);
        System.out.print(file.length());
        UploadDocumentsResult result = domain.uploadDocuments(req); // here I get the exception
        System.out.println(result.toString());

        // SearchRequest searchReq = new SearchRequest().withQuery("my Search request");
        // SearchResult s_res = domain.search(searchReq);
        // System.out.println(s_res);
    }
}
The problem is that I get the following errors:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to unmarshall error response (Unable to parse error response: '<html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html>'). Response Code: 403, Response Text: Forbidden
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1071)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient.invoke(AmazonCloudSearchDomainClient.java:527)
at com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient.uploadDocuments(AmazonCloudSearchDomainClient.java:310)
at gvrhtyhuj.dfgbmn.App.main(App.java:31)
Caused by: com.amazonaws.AmazonClientException: Unable to parse error response: '<html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html>'
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:55)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:29)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1045)
... 6 more
Caused by: com.amazonaws.util.json.JSONException: A JSONObject text must begin with '{' at 1 [character 2 line 1]
at com.amazonaws.util.json.JSONTokener.syntaxError(JSONTokener.java:422)
at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:196)
at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:323)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:53)
This is the json file:
{"a":123,"b":"4 5 6"}
First: Please don't put your credentials in your code. It's way too easy to accidentally check credentials into version control, or otherwise post them. If you have your credentials in the default location, you can just do:
AmazonCloudSearchDomainClient client = new AmazonCloudSearchDomainClient();
And the SDK will find them.
One common cause of 403s is getting the endpoint wrong. Make sure you don't have /documents/batch on the end of your endpoint string. The SDK will add that.
One other thing to try is setting the content length:
req.setContentLength(file.length());
My code has that, and works, and is otherwise the same as yours.
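For reference, a minimal sketch combining those fixes (default credential chain, bare document endpoint, explicit content length; the endpoint string here is a placeholder):

AmazonCloudSearchDomainClient domain = new AmazonCloudSearchDomainClient();
domain.setEndpoint("doc-mydomain-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com"); // no /documents/batch suffix

File file = new File("path to my file");
UploadDocumentsRequest req = new UploadDocumentsRequest();
req.setDocuments(new FileInputStream(file));
req.setContentLength(file.length());
req.setContentType("application/json"); // note: CloudSearch expects a JSON array of add/delete operations, not a bare object

UploadDocumentsResult result = domain.uploadDocuments(req);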
You're getting a 403 Forbidden error from CloudSearch, meaning that you don't have permission to upload documents to that domain.
Are you literally using "myAccessKey" as your access key value or did you redact it when you posted this? If you never set it, then you need to set your access key; otherwise check the access policies on your CloudSearch domain through the AWS web console, since it may be configured to accept/reject submissions based on IP address or some other set of conditions.