Restrict execution of a Method with Java Annotations

Do you know if there is a way to check who is calling a method and to restrict whether they are allowed to execute it, using Java annotations?
For example, suppose you have a client and a server. There are several users with different roles, and they log into the client. Then the same client, with different users, wants to call a get method on the server.
Can I restrict who is allowed to call this method with Java annotations?
Like:
@Role(role = "AllowedRole")
public ReturnType getMethod() {
    ...
}

Well, I used to achieve this with Seam/DeltaSpike on a JBoss server. It's pretty straightforward.
Basically, you have a method which you annotate with your annotation. For example, mine is @User:
public class MyClass {

    @User
    public Object getMethod() {
        // implementation
    }
}
Next, you need a class where you define how you check your annotations:
public class Restrictions {

    @Secures
    @User
    public boolean isOk(Identity identity) {
        return "Peter".equals(identity.getUsername());
    }
}
That's it! Of course, you need some libraries and you have to declare the interceptor bits in certain XML files (like beans.xml), but that can easily be done with a little googling.
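For reference, a minimal sketch of how the @User annotation itself could be declared, assuming DeltaSpike's @SecurityBindingType meta-annotation (the package name is from the DeltaSpike security module and may differ in your version):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.apache.deltaspike.security.api.authorization.SecurityBindingType;

// Marks methods so that the @Secures check above runs before they are invoked
@SecurityBindingType
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD })
public @interface User {
}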
Start from these links:
Seam framework
Questions I asked on JBoss community when I was starting with this

This seems to be a good case for Method Security of Spring Security.
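For illustration, a minimal sketch of what that could look like (the service class, return value and role name are placeholders, not from the question; assumes Spring Security's config and core modules are on the classpath):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.stereotype.Service;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true) // @EnableMethodSecurity in newer Spring Security versions
class MethodSecurityConfig {
}

@Service
class ReportService {

    // Only callers authenticated with ROLE_AllowedRole may invoke this method;
    // everyone else gets an AccessDeniedException.
    @PreAuthorize("hasRole('AllowedRole')")
    public String getMethod() {
        return "only for AllowedRole";
    }
}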

Annotations do not contain code and are not processed magically. They just define metadata, so you need some kind of engine that processes the annotations and performs the access validation.
There are a lot of frameworks and tools that do this; for example, AspectJ, the Spring Framework, and Java EE support similar annotations.
You can also implement this logic yourself using a dynamic proxy, bytecode engineering, or some other technique.
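As a rough sketch of the dynamic proxy route (all names here are made up for illustration, not an existing API):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Role {
    String role();
}

interface PeopleService {
    @Role(role = "AllowedRole")
    String getMethod();
}

class RoleCheckingHandler implements InvocationHandler {

    private final Object target;
    private final String currentUserRole;

    RoleCheckingHandler(Object target, String currentUserRole) {
        this.target = target;
        this.currentUserRole = currentUserRole;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Read the annotation metadata and enforce it before delegating to the real object
        Role required = method.getAnnotation(Role.class);
        if (required != null && !required.role().equals(currentUserRole)) {
            throw new SecurityException("Requires role " + required.role());
        }
        return method.invoke(target, args);
    }

    @SuppressWarnings("unchecked")
    static <T> T secure(T target, Class<T> iface, String currentUserRole) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new RoleCheckingHandler(target, currentUserRole));
    }
}

// Usage (assuming some PeopleServiceImpl exists):
// PeopleService secured = RoleCheckingHandler.secure(new PeopleServiceImpl(), PeopleService.class, "AllowedRole");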
So please explain in more detail what kind of application you are implementing, and we can probably give you better advice.

Related

Is it possible to create necessary / required interfaces?

I have a little knot in my brain about structuring our code. We have a REST backend based on Spring Boot. To handle requests with regard to security checks, we use HandlerInterceptors. In some specific cases we need a specific interceptor instead of our default one. The default one is registered in a 3rd-party lib, so no one can forget it. But I want all coders to have to think about this specific interceptor.
At the moment, I just tell them to do so.
Here's my question: is there an option to create required (or necessary) interfaces which must be implemented? This would be a way to ship our security code in the lib and to be sure that every coder has implemented our specific interface (even if he just does nothing with it).
pseudo code:
public interface ThinkForIt {

    // SecBean specificSecBean; -- some security bean
    void methodToThinkOn();
}

public class SecImpl implements ThinkForIt {

    @Override
    public void methodToThinkOn() {
        // I thought about it, but I do not need to do anything!
    }
}
If the interface ThinkForIt had an annotation like @Required, users could get a warning or an error if they did not implement it...
Looking for a solution, and thanks for your comments in advance!
Your overall design is questionable; you are reinventing security code, which is always a red flag. Use Spring Security instead.
However, there's a simple way to ensure that "some bean of type Foo" has been registered with the context:
@Component
@RequiredArgsConstructor
public class ContextConfigurationVerifier {

    final Foo required;
}
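Without Lombok, the same idea looks like the sketch below (Foo stands for whatever interface your lib requires; the names are assumptions):

import org.springframework.stereotype.Component;

@Component
public class ContextConfigurationVerifier {

    private final Foo required;

    // Constructor injection: if no bean of type Foo has been registered,
    // the application context refuses to start (UnsatisfiedDependencyException),
    // so nobody can forget to provide an implementation.
    public ContextConfigurationVerifier(Foo required) {
        this.required = required;
    }
}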

Springfox duplicating controllers

I am trying to work with both swagger-codegen and Springfox to save time during web service development.
I am facing an issue where an endpoint is created for my annotated interface classes as well as for my controller implementations, as you can see below:
I found a workaround by adding the tag where the controller should be (e.g. @Api(tags = { "Player" })) in my controller, but I am looking for a better one, because if I am using code generation it is precisely to avoid this kind of situation where you have to add stuff to your code.
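For reference, the workaround mentioned above just repeats the tag on the implementation class (the tag name "Player" is from my example; Springfox then groups both under one tag instead of creating a second controller entry):

import io.swagger.annotations.Api;
import org.springframework.web.bind.annotation.RestController;

@Api(tags = { "Player" })
@RestController
public class PlayerApiImpl implements PlayerApi {
    // ... endpoint implementations as shown below ...
}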
With swagger-codegen, I just have to write a RestController (PlayerApiImpl) like this:
@RestController
public class PlayerApiImpl implements PlayerApi {

    @Override
    public ResponseEntity<Player> playerIdGet(String id) {
        PlayerDTO ret = service.getOne(Long.parseLong(id));
        if (ret == null) {
            throw new PlayerNotFoundException();
        }
        return ResponseEntity.ok(mapper.toModel(ret));
    }
}
Everything else is generated into an interface (here PlayerApi), so I would like to keep things as simple as possible.
I decided to push my workaround a bit further and found this solution:
It's actually possible to add the @Api annotation directly in the generated interface, so you don't have to add it in your implementation. For that you need to work with the mustache template files, which you can place in the directory where your swagger file is located.
It might not be the best solution, so if you have a better proposal, feel free to comment.

When and how cglib-proxied component instance is created

I'd like to learn whether there are rules / conditions under which a Spring component is wrapped (proxied) by CGLIB. For example, take this case:
@Component
public class TestComponent {
}

@Service
// @Transactional(rollbackFor = Throwable.class)
public class ProcessComponent {

    @Autowired
    private TestComponent testComponent;

    public void doSomething(int key) {
        // try to debug "testComponent" instance here ...
    }
}
If we leave it like this and debug the testComponent field inside the method, we'll see that it's not wrapped by CGLIB.
Now if we uncomment the @Transactional annotation and debug again, we'll find that the instance is wrapped: it's of type ProcessComponent$$EnhancerByCGLIB$$14456 or something like that. That's clearly because Spring needs to create a proxy class to handle the transaction support.
But I'm wondering: is there any way to detect how and when this wrapping happens? For example, some specific locations in Spring's source code to debug into to find more information, or some documentation on the rules of how Spring decides to create a proxy.
For your information, I need to know about this because I'm facing a situation where some component (not @Transactional; the above example is just for demonstration purposes) in my application suddenly becomes proxied (I found a revision a bit in the past where it is not). The most important issue is that this affects components that also contain public final methods, and another issue (also of importance) is that there must have been some unexpected change in the design / structure of the classes. For this kind of issue, of course, we must try to find out what happened / which change led to this, etc.
One note: we have just upgraded our application from Spring Boot 2.1.0.RELEASE to 2.1.10.RELEASE, and checking the code revision by revision up to now is not feasible, because there have been quite a lot of commits.
Any kind of help would be appreciated, thanks in advance.
You could debug into org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(Class, String, TargetSource).
If any advisor is found, the bean will be proxied.
If you use a @Lookup method injection it will also proxy the component class.
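If it helps while bisecting, a quick way to see at runtime whether a given bean is proxied is a sketch like this (it uses Spring's own AopUtils / AopProxyUtils helpers; the bean name is the one from the example above):

import org.springframework.aop.framework.AopProxyUtils;
import org.springframework.aop.support.AopUtils;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProxyDiagnostics {

    @Bean
    ApplicationRunner reportProxy(ProcessComponent processComponent) {
        return args -> {
            // Prints whether the injected instance is an AOP/CGLIB proxy and what its real class is
            System.out.println("AOP proxy?   " + AopUtils.isAopProxy(processComponent));
            System.out.println("CGLIB proxy? " + AopUtils.isCglibProxy(processComponent));
            System.out.println("Target class " + AopProxyUtils.ultimateTargetClass(processComponent));
        };
    }
}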

How can I manage multiple method calls as one transaction

I have a problem with establishing a transaction manager/scope for my REST API (Java). My API back end performs the functions below, and I want to execute all of them as one transaction:
Call third party WS end point
Decrypt the response
Save the response in to DB1
Save the response in to DB2
I want to make sure that either all of the above steps complete, or everything is rolled back if any one of them fails. I have enough information to do the rollback, but I have no idea what the best practice would be for implementing a proper transaction management mechanism, because the steps above happen in 3 separate classes per API call.
This is pseudo code for my class structure:
class CallWS {
    public People callThWS() {
        // functions related to calling the third party WS and decryption (steps 1, 2)
    }
}

class PeopleServices {
    public People getPeopleData() {
        CallWS ppl = new CallWS();
        People pplObj = ppl.callThWS();
        // save to DB1 (step 3)
        return pplObj;
    }
}

class PeopleContr {
    public People getAllPeople() {
        PeopleServices ppSer = new PeopleServices();
        People pplObj2 = ppSer.getPeopleData();
        // save to DB2 (step 4)
        return pplObj2;
    }
}
Please help me on this,
Thanks
What you need is distributed transactions (XA). Check for examples of the various transaction managers which support XA. Check this article for using an XA provider in standalone applications (warning: old article).
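A rough sketch of programmatic JTA demarcation, assuming an XA-capable transaction manager and XA data sources are already configured in your environment (class and field names are placeholders):

import javax.annotation.Resource;
import javax.transaction.UserTransaction;

public class PeopleTransactionalFlow {

    @Resource
    private UserTransaction utx;

    public void runAsOneTransaction() throws Exception {
        utx.begin();
        try {
            // steps 3 and 4: both saves enlist in the same XA transaction;
            // the external WS call in steps 1-2 cannot be enlisted
            // ... save to DB1, save to DB2 ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}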
If you control the sources of all the classes listed and you can refactor your code so that you have a single entry point, you can do it quite easily, except for the call to an external web service. The pseudo code is below.
Here we have to agree that all the resources you are calling in your methods are transactional. As I mentioned earlier, a call to an external WS does not fall into that category, because calls to external web services are not transactional by their nature. Again, if you do not change data within the call to the external service, you may consider just leaving it outside the transaction. You still have a bit of control, like rolling back the transaction in case the call to the external service was unsuccessful; and as long as you have not changed anything on the other side, you may not care about rolling back a transaction there.
However, you still have some options for a transactional call to an external WS, like Web Services Atomic Transactions, but I bet you would need control of the sources and maybe even the environment on the other side. In such lucky circumstances you would rather want to achieve it by avoiding the WS call.
class RestAPIEntryPointResource {

    @Inject
    CallWS callWS;
    @Inject
    PeopleServices peopleServices;
    @Inject
    PeopleContr peopleContr;

    /* Put some transaction demarcation here.
       If your class is an EJB, it is already done for you.
       With Spring you have various options to mark the method transactional.
       You may also want to take manual control, but that looks redundant here. */
    public void entryPointMethod() {
        callWS.callThWS();
        peopleServices.getPeopleData();
        peopleContr.getAllPeople();
    }
}
class CallWS {
    public People callThWS() {
        // functions related to calling the third party WS and decryption (steps 1, 2)
    }
}

class PeopleServices {
    public People getPeopleData() {
        ..........
    }
}

class PeopleContr {
    public People getAllPeople() {
        .........
    }
}
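With Spring, the transaction demarcation mentioned in the comment above could also be declarative. A sketch, assuming the repositories exist and both data sources are wired into the same (e.g. JTA/XA) transaction manager; otherwise only one database's changes are rolled back:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PeopleOrchestrator {

    private final CallWS callWS;
    private final Db1Repository db1;   // placeholder repository names
    private final Db2Repository db2;

    public PeopleOrchestrator(CallWS callWS, Db1Repository db1, Db2Repository db2) {
        this.callWS = callWS;
        this.db1 = db1;
        this.db2 = db2;
    }

    // Both saves join the same transaction; any exception rolls them back.
    // The external WS call itself cannot be rolled back, as discussed above.
    @Transactional(rollbackFor = Exception.class)
    public People fetchAndStore() {
        People people = callWS.callThWS();  // steps 1-2 (non-transactional)
        db1.save(people);                   // step 3
        db2.save(people);                   // step 4
        return people;
    }
}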

Implementing secure native Play Framework 2.3.x (Java style) authentication

First of all, I am fully aware of the authentication modules that are available for Play. That said, I am unable to implement even the simplest example code from, let's say, SecureSocial. With a little research it became clear that a lot of things in their example code provided here broke when the Play Framework updated to version 2.3.x.
With the help of the online docs and the excellent video tutorial by Philip Johnson on implementing standard (unsafe) authentication, I did successfully implement the following:
// Class which is used by the @Security annotation
public class Secured extends Security.Authenticator {

    @Override
    public String getUsername(Context ctx) {
        return ctx.session().get("auth");
    }

    @Override
    public Result onUnauthorized(Context ctx) {
        return redirect(routes.Application.login());
    }
}

// Controller class that serves routes
public class Application extends Controller {

    @Security.Authenticated(Secured.class)
    public static Result index() {
        return ok(index.render("Your new application is ready."));
    }

    public static Result login() {
        session().clear();
        session("auth", "a1234"); // dummy data simulating a successful login
        return redirect(routes.Application.index());
    }
}
I ultimately need to implement a safe login system to authenticate users.
My question is two-sided: which would be better, 'reinventing the wheel' (at least partly) by taking this working code base and improving it, or giving one of the authentication modules another shot?
None of us likes reinventing the wheel; that said, it seems I have a much better chance of successfully compiling something I made myself.
I am aware that for wholesome security in depth (a.k.a. layered security) a secure connection is also needed (HTTPS with TLS 1.2 at the time of writing). This is beyond the scope of my question.
I don't know if there's a right answer to this question. Whether to build your own framework or to try an existing framework (which might not work perfectly) is a matter for your own judgement. Personally, I'd probably use SecureSocial as a starting point but then write my own code if I couldn't get it working. It sounds like this is the approach you've already tried.
To use SecureSocial you'd probably need to check out the master branch and build from source. It might be hard to use if the examples are out of date, but then again writing your own auth code is difficult too.
