Using Play 1.2.3 I am trying to implement the Secure Module across multiple controllers.
I have added play -> secure to my dependencies and the secure module appears in my project. I have imported the default secure routes (* / module:secure), customised the authentication method, and annotated all of my controllers with @With(Secure.class).
The problem I am having is that when I move between controllers I receive a NullPointerException thrown from the secure module at line 193: return Java.invokeStaticOrParent(security, m, args);
A second issue is that when methods are called from within the same controller, some calls can take up to 20 seconds to complete, where they would normally be instant without the secure module included. Edit: this was unrelated; the secure module has no visible effect on loading time.
My question is: has anyone else implemented the secure module in Play across multiple controllers, and if so, did they come across any of these issues?
Edit
The problem was down to the use of a tag in my template, not the implementation of the secure module. See below for the reason and how to resolve it.
The NullPointerException was being thrown because in my template I was calling:
#{secure.check}
<li>Logout</li>
#{/secure.check}
I had oddly thought this was a template tag to check whether security was enabled, but it actually needs to be followed by a 'profile' type, which is not something I specified or implemented (hence the exception).
To get around this, I added a basic template tag that checks whether the security-associated session is in use:
#{if session.username}
#{doBody /}
#{/if}
And can then simply use it as follows:
#{secure.secure}
<li>Logout</li>
#{/secure.secure}
Related
I have an application where a single user can work in the contexts of multiple companies. We call such a connection (user <-> company) a permit. Each of these permits can have a different set of permissions/roles. We want the user to log in just once and then simply switch permits within the application without entering a password again.
Until now we had only one application and kept this whole permission model in our own DB. Unfortunately we now have to support a second application which should inherit those permits. I was wondering whether it is possible to move that model to Keycloak so we don't have to replicate it to every single DB and keep it in sync manually.
I have searched the Keycloak documentation on this topic but found no information at all, which seems quite odd, because I don't think we are the first ones working with a multiple-context application.
So now I'm asking: is it possible to configure our model in Keycloak, and if so, how? Are there other options? I guess I could provide that model as a claim with a JSON structure, but that doesn't feel right to me. I was also thinking about a custom IdP which could provide such claims based on the DB, so there are no spelling errors and less repetition, but I feel there should be a better way.
You could try to write your own Keycloak provider (SPI). There is a built-in mechanism that allows you to expose a REST endpoint on Keycloak: https://github.com/keycloak/keycloak/tree/master/examples/providers/domain-extension
That REST endpoint can be called from an authorized context only, for example by passing an Access-Token (an Authorization header with a Bearer value). At the provider level (through implementations of org.keycloak.services.resource.RealmResourceProviderFactory and org.keycloak.services.resource.RealmResourceProvider) you have access to the user's Keycloak session and the UserModel object, as in the following code:
AuthenticationManager.AuthResult authResult = new AppAuthManager().authenticateBearerToken(keycloakSession, keycloakSession.getContext().getRealm());
UserModel userModel = authResult.getUser();
The UserModel class has methods for getting and setting attributes, so information that indicates the current permit/company ID can be stored there. You can use the REST methods exposed on Keycloak to modify the model within the 'session' (represented by the Access-Token).
The GitHub example also shows how to use another Keycloak provider (e.g. the built-in JPA provider) from your custom provider, so using that approach you could connect to the database holding your permit/company information. Of course, the datasource representing your database should also be registered as a Keycloak datasource.
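On the client side, calling such a custom realm resource could look like the sketch below. The host, realm name, and endpoint path are purely hypothetical placeholders; only the Bearer-token header mirrors what is described above.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical client-side sketch: call the custom realm-resource endpoint
// with the user's Access-Token so the provider can authenticate the bearer.
// Host, realm, and path are made-up placeholders, not real Keycloak routes.
class PermitClient {

    static HttpRequest switchPermitRequest(String accessToken, String permitId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://keycloak.example.com/auth/realms/myrealm"
                        + "/my-domain-extension/permit/" + permitId))
                .header("Authorization", "Bearer " + accessToken)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }
}
```

The provider would then resolve the user via authenticateBearerToken, as shown in the snippet above, and update the permit attribute on the UserModel.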
Here a solution is described for handling redirects to a custom URL based on a condition, via use of AccessStrategy.
This, however, is part of the unauthorized-login logical flow, and therefore results in a still-not-logged-in user arriving at the end URL we redirect to (via getUnauthorizedUrl).
If we want to redirect the user based on a condition, say by injecting an action into the webflow, how can we change the return URL to a custom one?
WebUtils.getService(requestContext) includes getters for the source/original URL, but no obvious way to set/manipulate that value from an action bean.
P.S. Currently using CAS version 5.3.x.
Responses for normal web applications from CAS are built using WebApplicationServiceResponseBuilder.
If you examine this block you will find that the final response is built using the WebApplicationServiceResponseBuilder bean. It is only created conditionally, if an existing bean with the same name is not already found in the context. So to provide your own, you just need to register a bean with the same name in your own @Configuration class.
@Bean
public ResponseBuilder<WebApplicationService> webApplicationServiceResponseBuilder() {
    return new MyOwnWebApplicationServiceResponseBuilder(...);
}
...and then proceed to design your own MyOwnWebApplicationServiceResponseBuilder, perhaps by extending WebApplicationServiceResponseBuilder and overriding what you need to build the final redirect logic conditionally.
To learn how @Configuration classes work in general, you can:
Review this post
or this post
or consult the documentation for Spring and/or Spring Boot.
Let's say I have this configuration in my XML:
<int-sftp:outbound-channel-adapter id="sftpOutbound"
channel="sftpChannel"
auto-create-directory="true"
remote-directory="/path/to/remote/directory/"
session-factory="cachingSessionFactory">
<int-sftp:request-handler-advice-chain>
<int:retry-advice />
</int-sftp:request-handler-advice-chain>
</int-sftp:outbound-channel-adapter>
How can I retrieve the attributes, e.g. remote-directory, in a Java class?
I tried context.getBean("sftpOutbound"), but it returns an EventDrivenConsumer, which doesn't have methods to get the configuration.
I'm using spring-integration-sftp v 4.0.0.
I am actually more concerned with why you want to access it. The remote directory and other attributes come with the headers of each message, so you will have access to them at the Message level, but not at the level of the EventDrivenConsumer; that is by design, hence my question.
I have a business requirement that as part of some processing, we enable a "plugin" functionality for an externally-managed code.
It was decided that the best approach would be to @Reference a list of "ranked" (ordered according to an irrelevant algorithm) services. More or less like this:
public interface ExternalProcessor {
    void doSomething();
}

@Component
@Service(ImportantTaskManager.class)
public class ImportantTaskManager {

    @Reference(cardinality = ReferenceCardinality.OPTIONAL_MULTIPLE, referenceInterface = ExternalProcessor.class)
    protected List<ExternalProcessor> processors;

    public void doImportantStuff() {
        for (ExternalProcessor processor : processors) {
            processor.doSomething();
        }
    }
}
To keep it short, I've omitted as much boilerplate as I could, including the bind/unbind method pair.
Now, another business requirement is that we don't perform any processing whatsoever if there are services implementing the ExternalProcessor interface that are not bound to our main processor (for any reason: unresolved dependencies, a crash during activation, missing required configuration, etc.). I have a feeling this goes against OSGi principles (OSGi provides only the available services, not information about the unavailable ones), but how can I achieve it?
So far I've come up with the following candidates for solutions:
Ask the external team to provide a count of the services we're supposed to be expecting, then compare that against what we get from OSGi - it's unreliable
Crawl through all the installed bundles and their metadata XML taken from the bundles' headers, looking for service definitions - it's... well, time-consuming, for a start.
grep through the logs looking for service registrations and/or failures - this one seems just... wrong.
Is any of the above a proper solution to this? Are there better solutions? How else can I tackle this problem? What am I missing here?
I had a similar requirement for security plugins. The code calling the plugins should not run when the necessary security plugins were missing.
I solved it by defining a service property, like id. Each plugin would have a unique id. In the config of your main code, you specify the list of security plugin ids that you need.
The code then checks each service for its id and only activates the main component when all mandatory plugins are present.
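A minimal, framework-free sketch of that id check might look as follows. The IdentifiedProcessor and PluginGate names and the id() method are illustrative assumptions; in a real DS component, bind/unbind would be the reference's bind/unbind method pair, the id would come from a service property, and requiredIds from the component configuration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative restatement of the question's interface, with an added id()
// so each plugin can identify itself (in OSGi this would be a service property).
interface IdentifiedProcessor {
    String id();
    default void doSomething() { }
}

// Stand-in for the DS component: bind/unbind mimic the reference's
// bind/unbind methods, requiredIds mimics the mandatory-plugin config.
class PluginGate {

    private final Set<String> requiredIds;
    private final List<IdentifiedProcessor> bound = new ArrayList<>();

    PluginGate(Set<String> requiredIds) {
        this.requiredIds = requiredIds;
    }

    void bind(IdentifiedProcessor p) { bound.add(p); }

    void unbind(IdentifiedProcessor p) { bound.remove(p); }

    // Process only when every mandatory plugin id is currently bound.
    boolean allMandatoryPresent() {
        List<String> boundIds = bound.stream().map(IdentifiedProcessor::id).toList();
        return boundIds.containsAll(requiredIds);
    }
}
```

doImportantStuff() would then simply return early (or signal an error) while allMandatoryPresent() is false.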
I'm looking to return per request debug information in a JSON HTTP response body. This debug information would be collected throughout my application, including data such as DB queries and how long they took, external calls made, whether certain conditions were met, general messages showing application flow etc.
I've looked into Java logging frameworks; the Play Framework, which I am using, has a built-in logger that works great for logging information to files. But I can't seem to find any that handle request-level logging, i.e. storing debug messages for a particular HTTP request, returning them with that request, and then destroying them.
I could of course create a Debug class, instantiate it, and pass it around throughout my application for each request, but this doesn't seem like a nice way to handle this, as I would need to pass it into a lot of classes.
Are there any better ways/design patterns/libraries out there that can do what I'm looking for without having to pass a Debug object round my entire application?
This is not a common use case, so I do not think you will find a product implementing it. You basically have two possibilities:
fully implement a custom logger and use it throughout your application
use a well-known logging API (SLF4J, Apache Commons Logging) and implement a dedicated back end
Either way, the back-end part should:
initiate a buffer to collect logs at request reception (probably in a filter) and put it in a ThreadLocal variable
collect all logs during the request, thanks to the ThreadLocal variable
release the ThreadLocal variable at the end of the request (in the same filter that allocated it)
And all servlets or controllers should be modified to add the logging content to the JSON response body.
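The steps above can be sketched with a minimal ThreadLocal-backed buffer. All names here are illustrative, not from any framework:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the ThreadLocal-backed buffer described above.
class RequestDebugLog {

    private static final ThreadLocal<List<String>> BUFFER = new ThreadLocal<>();

    // Called by the filter when the request arrives.
    static void open() { BUFFER.set(new ArrayList<>()); }

    // Called from anywhere in the application during the request.
    static void add(String message) {
        List<String> buf = BUFFER.get();
        if (buf != null) {          // silently ignore logging outside a request
            buf.add(message);
        }
    }

    // Drained by the controller when building the JSON response body.
    static List<String> drain() {
        List<String> buf = BUFFER.get();
        return buf == null ? List.of() : List.copyOf(buf);
    }

    // Called by the same filter in a finally block, to avoid leaking the
    // buffer into the next request served by this pooled thread.
    static void close() { BUFFER.remove(); }
}
```

A filter would call open() before passing the request on and close() in a finally block; the controller drains the buffer into the JSON body before returning.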
I found a solution: the Play Framework provides request-level storage for arbitrary objects via Http.Context.current().args. I'm using this HashMap to store my custom debug storage class, so I can access it anywhere in my application throughout a request.
Instead of passing a Debug object from layer to layer, why don't you use Http.Context? It is defined in the Controller class:
public abstract class Controller extends Results implements Status, HeaderNames {
    ...
    public static Context ctx() {
        return Http.Context.current();
    }
}
Usage:
ctx().args.put(key, value);
ctx().args.get(key);
You don't need a logging framework for this. The best approach to return your debug info in the JSON response is to use the same method you use to return the rest of the JSON.
This way, you can set it up to work in debug mode via -Dsomevar=debug or via an HTTP request parameter like debug=true.
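A rough sketch of that toggle, assuming the somevar property and debug parameter names from the example above (the class itself is hypothetical):

```java
// Hypothetical sketch: debug output is produced only when the JVM was started
// with -Dsomevar=debug, or when the request carried debug=true.
class DebugToggle {

    static boolean enabled(String debugRequestParam) {
        return "debug".equals(System.getProperty("somevar"))
                || "true".equals(debugRequestParam);
    }
}
```

The controller would consult this toggle and only attach the debug section to the JSON body when it returns true.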