I am now comparing Spring SAML and pac4j SAML. Generally speaking, I think pac4j is easier to implement than Spring SAML, but there is one thing I cannot figure out.
See this config code:
@Configuration
public class Pac4jConfig {

    @Bean
    public Config config() {
        // keystore path, keystore password, private key password, IdP metadata
        final SAML2ClientConfiguration cfg = new SAML2ClientConfiguration(
            "resource:samlKeystoreNgcsc.jks",
            "juniper",
            "juniper",
            "resource:metadata-okta.xml"
        );
        cfg.setMaximumAuthenticationLifetime(3600);
        cfg.setServiceProviderEntityId("http://localhost:8080/callback?client_name=SAML2Client");
        cfg.setServiceProviderMetadataPath("sp-metadata.xml");
        final SAML2Client saml2Client = new SAML2Client(cfg);
        final Clients clients = new Clients("http://localhost:8080/callback", saml2Client);
        final Config config = new Config(clients);
        //config.addAuthorizer("admin", new RequireAnyRoleAuthorizer("ROLE_ADMIN"));
        //config.addAuthorizer("custom", new CustomAuthorizer());
        return config;
    }
}
From this sample code, we already have the IdP metadata; that is fine, we just ask the IdP to provide its metadata and we can use it directly.
But where does the sp-metadata.xml come from? We need to generate it and provide it to the IdP for integration purposes.
If I were using Spring SAML, it provides a UI to generate this metadata; we just need to download it and send it over to the IdP. But for pac4j SAML I do not see this utility at all. So can anyone tell me what the best way to generate the SP metadata would be?
Thanks
saml2Client.init() does all the work of generating the SP metadata; just make sure that you have sufficient permissions to create the file on the specified path.
// point the configuration at a writable location for the generated SP metadata
saml2Client.getConfiguration().setServiceProviderMetadataResource(new FileSystemResource(new File("C:\\sp-metadata.xml").getAbsolutePath()));
// init() builds the SP metadata and writes it to that resource
saml2Client.init();
// the metadata is also available as a String
String spMetadata = saml2Client.getServiceProviderMetadataResolver().getMetadata();
I somehow managed to generate it by using this setting in the SecurityModule configuration. This might not be the best way; I am still figuring that out.
cfg.setServiceProviderMetadataPath(new File("yourPath", "fileName.xml").getAbsolutePath());
Note that the SP metadata is only generated when a SAML request actually happens.
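If you want the file to exist before the first SAML request arrives, a minimal sketch is to initialize the client yourself at startup, along the lines of the first answer. This assumes a pac4j version where init() takes no arguments (as in the snippet above); the callback URL and file path are just example values.

// Hedged sketch: force SP metadata generation at startup instead of waiting
// for the first SAML request (same SAML2Client/SAML2ClientConfiguration as above).
cfg.setServiceProviderMetadataPath(new File("yourPath", "fileName.xml").getAbsolutePath());
final SAML2Client saml2Client = new SAML2Client(cfg);
saml2Client.setCallbackUrl("http://localhost:8080/callback"); // normally set for you by Clients
saml2Client.init(); // builds the SP metadata and writes it to the configured path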
If you come across this issue when using pac4j and TestShib, make sure your Identity Provider metadata is up-to-date, i.e., update your local testshib-providers.xml with the one from the TestShib website.
I want to find the actual Java class that serves the Spring Actuator endpoint (/actuator).
It's similar to this question in a way, but that person wanted to call it via a network HTTP call. Ideally, I can call it within the JVM to save on the cost of setting up an HTTP connection.
The reason for this is that we have two metrics frameworks in our system. We have a legacy metrics framework built on OpenCensus, and we migrated to Spring Actuator (Prometheus metrics based on Micrometer). I think the Spring one is better, but I didn't realize how much infrastructure my company had built around the old one. For example, we leverage internal libraries that use OpenCensus, and the infra team depends on OpenCensus-based metrics from our app. So the idea is to try to merge and report both sets of metrics.
I want to create my own metrics endpoint that pulls in data from OpenCensus's endpoint and Actuator's endpoint. I could make an HTTP call to each, but I'd rather call them within the JVM to save resources and reduce latency.
Or perhaps I'm thinking about it wrong. Should I simply be using MeterRegistry.forEachMeter() in my endpoint?
In any case, I thought that if I found the Spring Actuator endpoint class, I could see how it is implemented and mimic it, even if I don't call it directly.
Bonus: I'll need to track down the OpenCensus handler that serves its endpoint too, and will probably make another post for that, but if you know the answer to that as well, please share!
I figured it out and posting this for anyone else interested.
The key finding: the MeterRegistry that is @Autowired is actually a PrometheusMeterRegistry if you enable the Prometheus metrics.
Once you cast it to a PrometheusMeterRegistry, you can call its .scrape() method to get the exact same metrics printout you would get when you hit the HTTP endpoint.
I also need to get the same info from OpenCensus and I found a way to do that too.
Here's the snippet of code for getting metrics from both frameworks:
// OpenCensus metrics: read everything registered in the default Prometheus CollectorRegistry
Enumeration<MetricFamilySamples> openCensusSamples = CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
StringWriter writer = new StringWriter();
TextFormat.write004(writer, openCensusSamples);
String openCensusMetrics = writer.toString();

// Micrometer metrics: the autowired MeterRegistry is really a PrometheusMeterRegistry
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String micrometerMetrics = registry.scrape();

return openCensusMetrics.concat(micrometerMetrics);
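To serve this as your own endpoint, one option is a custom actuator endpoint along these lines. This is only a sketch: the endpoint id "mergedmetrics" and the class name are made up for illustration, and it assumes the Prometheus registry is enabled so a PrometheusMeterRegistry bean exists.

// Hedged sketch: expose the merged OpenCensus + Micrometer output as a custom
// actuator endpoint (id and class name are hypothetical).
import java.io.IOException;
import java.io.StringWriter;
import java.util.Enumeration;

import com.google.common.collect.ImmutableSet;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.Collector.MetricFamilySamples;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.common.TextFormat;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "mergedmetrics")
public class MergedMetricsEndpoint {

    private final PrometheusMeterRegistry registry;

    public MergedMetricsEndpoint(PrometheusMeterRegistry registry) {
        this.registry = registry;
    }

    @ReadOperation
    public String metrics() throws IOException {
        // OpenCensus side, as in the snippet above
        Enumeration<MetricFamilySamples> samples =
                CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
        StringWriter writer = new StringWriter();
        TextFormat.write004(writer, samples);
        // Micrometer side
        return writer.toString().concat(registry.scrape());
    }
}

Remember that custom actuator endpoints have to be exposed explicitly (e.g. via management.endpoints.web.exposure.include).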
I found out another interesting way of doing this.
The other answer I gave works, but it has one issue: it contains duplicate results. When I looked into it, I realized that both OpenCensus and Micrometer were reporting the same results.
It turns out that the PrometheusScrapeEndpoint implementation uses the same CollectorRegistry that OpenCensus does, so both sets of metrics were being added to the same registry.
You just need to make sure to provide these beans:
@PostConstruct
public void openCensusStats() {
    // register the OpenCensus Prometheus exporter with the default CollectorRegistry
    PrometheusStatsCollector.createAndRegister();
}

@Bean
public CollectorRegistry collectorRegistry() {
    // make Spring Boot's PrometheusScrapeEndpoint use that same default registry
    return CollectorRegistry.defaultRegistry;
}
I have a Keycloak extension (Custom Endpoints, SPI). Now I want to add sending of AdminEvents, which I implemented as follows:
private void logAdminEvent(ClientConnection clientConnection, UserRepresentation rep, OperationType operation, ResourceType resource) {
    RealmModel realm = session.getContext().getRealm();
    // beware: clientConnection must not be null because of a missing null check in Keycloak
    ClientModel client = realm.getClientByClientId(ROLE_ATTRIBUTE_CLIENT);
    AdminAuth adminAuth = new AdminAuth(realm, authResult.getToken(), authResult.getUser(), client);
    AdminEventBuilder adminEvent = new AdminEventBuilder(realm, adminAuth, session, clientConnection);
    adminEvent
        .operation(operation)
        .resource(resource)
        .authIpAddress(authResult.getSession().getIpAddress())
        .authClient(client)
        .resourcePath(session.getContext().getUri())
        .representation(rep);
    adminEvent.success();
}
I am aware that the admin event logging must be activated in the Keycloak admin console, which I have done.
Maybe it is relevant that the logged-in user has no administration privileges, but it also did not work when I granted admin privileges.
I need ideas or hints as to what I am doing wrong here. Documentation and web research unfortunately did not help.
Take a look at the Keycloak sources, especially something like RootAdminResource. As far as I remember, all admin resources (e.g. controllers) create events via a builder that is cloned from the builder injected via the constructor by the parent resource. You may be missing some initialization tricks.
OK, we found it.
First, for update/delete, we had to add the realm to the adminEvent.
Second, for create, we had the event logging run after
session.getTransactionManager().commit();
took place. Moving the commit to after adminEvent.success() fixed the issue.
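Putting both fixes together, a rough sketch of the working order (same builder and variables as in the question's snippet; the realm(...) call is the addition mentioned above) looks like this:

// Hedged sketch of the fixed ordering described above.
AdminEventBuilder adminEvent = new AdminEventBuilder(realm, adminAuth, session, clientConnection);
adminEvent
    .realm(realm)                          // 1) add the realm (needed for update/delete)
    .operation(operation)
    .resource(resource)
    .resourcePath(session.getContext().getUri())
    .representation(rep)
    .success();                            // 2) log the admin event first ...
session.getTransactionManager().commit();  // ... 3) and commit afterwards (relevant for create)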
Maybe this can help someone.
I am generating some endpoints and they work correctly. However, I would like to keep one session per client so that I do not have to send the email and password with every request, but I am not sure how to do it.
This is an example of one of my endpoints:
@Api(name = "test")
public class MyApi {

    @ApiMethod(name = "printHi", httpMethod = "POST")
    public Message imprimirHola(Input input) {
        Message message = new Message();
        if (datosCorrectos(input.getMail(), input.getPassword()))
            message.setMessage("Hi");
        else
            message.setMessage("authentication failed");
        return message;
    }
}
After doing some research on the topic you are having issues with, I found the following information that may be useful for what you are trying to achieve.
Google Cloud Platform offers the Cloud Endpoints service, which has some available frameworks for user authentication. You can find detailed information and procedures about that in the documentation but, in short, you can use Firebase Auth, Auth0 or Google Accounts to authenticate a user against your endpoints (this link will help you decide which option suits you better).
However, in order to operate with one of those options, you will need your API to be managed by Cloud Endpoints, so you will have to follow this walkthrough to Add API Management to your API using OpenAPI.
Finally, here you have a working example of how to do that using Java.
I know it is a lot of information, but I think you will be able to solve the authentication issue with your Java API just by reading the more detailed information on that last documentation page and moving step by step through the "Getting Started" menu on the left of the page.
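For a rough idea of what the endpoint looks like once Cloud Endpoints handles authentication, here is a sketch based on the Endpoints Frameworks for Java approach. The authenticator/issuer configuration (which depends on whether you pick Firebase, Auth0 or Google Accounts) is omitted, Message is the question's own class, and the method name is just an example; the point is that the framework injects a User instead of you checking mail/password yourself.

// Hedged sketch (Cloud Endpoints Frameworks for Java): the framework
// authenticates the caller and injects a User, so no credentials in the body.
import com.google.api.server.spi.auth.common.User;
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.response.UnauthorizedException;

@Api(name = "test")
public class MyApi {

    @ApiMethod(name = "printHi", httpMethod = "POST")
    public Message printHi(User user) throws UnauthorizedException {
        if (user == null) {
            // no valid token was presented with the request
            throw new UnauthorizedException("Invalid credentials");
        }
        Message message = new Message();
        message.setMessage("Hi " + user.getEmail());
        return message;
    }
}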
We have an API which uses Spring JPA and provides access to some data in our database via REST. This API is exposed in a HATEOAS fashion (we are using the Spring implementation).
We are now considering whether to stick with this approach or code our own REST interface manually. Now, I have read a lot of articles about HATEOAS, but I am not sure what the big advantage of using it is. Sure, I understand that I can navigate through the API using links, but I still have to know about the existence of the links at each level, right?
To illustrate my problem, let's say that I have the following structure:
server.com/
- /store
- /users/
server.com/users
- /managers/
- /other/
server.com/managers
- list of entities with ids
I want to consume this API and get all 'manager' entities (located under server.com/users/managers).
What is the correct way to do so when using Spring Boot links?
Option one:
RequestEntity<Void> request = RequestEntity.get("server.com/users/managers").accept(HAL_JSON).build();
final Resources<Manager> managers = restTemplate.exchange(request, new ResourcesType<Manager>() {
}).getBody();
Option two:
// global endpoint
RequestEntity<Void> request = RequestEntity.get("server.com").accept(HAL_JSON).build();
final Resource<Object> rootLinks = restTemplate.exchange(request, new ResourceType<Object>() {
}).getBody();
Links links = new Links(rootLinks.getLinks());
final Link userLink = links.getLink("users").expand();
// users endpoint
request = RequestEntity.get(URI.create(userLink.getHref())).accept(HAL_JSON).build();
final Resource<Object> managerLinks = restTemplate.exchange(request, new ResourceType<Object>() {
}).getBody();
links = new Links(managerLinks.getLinks());
final Link managerLink = links.getLink("managers").expand();
// managers endpoint
request = RequestEntity.get(URI.create(managerLink.getHref())).accept(HAL_JSON).build();
final Resources<Manager> resourceAccounts = restTemplate.exchange(request, new ResourcesType<Manager>() {
}).getBody();
The first option seems straightforward, and I can get all the entities with a single request. However, I fail to see how HATEOAS is beneficial if I just use this approach. The Spring documentation states that using hardcoded links is not recommended.
The second approach seems to be more in the HATEOAS fashion, but it creates three requests just to get to a resource whose location I already know. That doesn't seem right either.
I know it's probably a dumb question, but can somebody explain to me what the great idea behind HATEOAS is that I am clearly missing?
With HATEOAS, the server can guide a client through the provided links. The contract between server and client is the link's relation type and the media type. By providing or not providing links on the same resource representation, the server can tell the client whether a resource is in a state where editing is enabled, whether the user is authorized for some operation on the resource, and so on. The server can change URLs without breaking the contract.
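To make that concrete, here is a small sketch of such a server-side decision using the same older Spring HATEOAS types as the question's snippets (Resource, ControllerLinkBuilder). ManagerController and the canEdit(...) check are made-up names for illustration.

// Hedged illustration (ControllerLinkBuilder.linkTo/methodOn statically imported):
// the server only advertises the "edit" relation when the caller is allowed to
// edit, so the client looks for the link instead of hard-coding the URL.
Resource<Manager> resource = new Resource<>(manager);
resource.add(linkTo(methodOn(ManagerController.class).get(manager.getId())).withSelfRel());
if (canEdit(currentUser, manager)) {
    resource.add(linkTo(methodOn(ManagerController.class).update(manager.getId(), null)).withRel("edit"));
}
return resource;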
I am using the ServletTester class provided by Jetty to test one of my servlets.
The servlet reads the body of the request using InputStream.read() to construct a byte[], which is then decoded and acted on by the servlet.
The ServletTester class provides a method getResponses(ByteArrayBuffer), but I'm unsure how to create one of these in the correct format, since it would also need to contain things like headers (e.g. "Content-Type: application/octet-stream").
Can anyone show me an easy way to construct this, preferably using an existing library, so that I can use it in a similar way to the HttpTester class?
If there is a "better" way to test servlets (ideally using a local connector rather than going via the TCP stack), I'd like to hear that as well.
Many thanks,
Why use a mock at all? Why not test the servlet by running it in Jetty?
Servlet servlet = new MyServlet();
String mapping = "/foo";
String contextPath = "/";                  // was undefined in the original snippet
Server server = new Server(0);             // 0 = pick a free port
Context servletContext = new Context(server, contextPath, Context.SESSIONS);
servletContext.addServlet(new ServletHolder(servlet), mapping);
server.start();
URL url = new URL("http", "localhost", server.getConnectors()[0].getLocalPort(), "/foo?bar");
// get the url ... assert what you want
// finally server.stop();
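Since the servlet in the question reads an application/octet-stream body, a minimal sketch of exercising it over that local port could look like this (plain JDK HttpURLConnection, reusing the url from the snippet above; the byte values are just an example payload):

// Hedged sketch: POST a raw byte[] body to the embedded Jetty server above.
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "application/octet-stream");
conn.getOutputStream().write(new byte[] {0x01, 0x02, 0x03}); // the body your servlet decodes
int status = conn.getResponseCode(); // assert on this and/or read conn.getInputStream()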
Edit: just wanting to reassure people that this is very fast. It's also a very reliable indicator of what your code will actually do, because it is in fact doing it.
Spring MVC provides a small set of "mock" classes for the various javax.servlet interfaces, such as HttpServletRequest, HttpSession, and so on. This makes it easy to unit test a servlet: you just pass mocks into, e.g., the doGet() method.
Even if you don't use Spring itself on the server, you can still use the mocks from that library, just for your tests.
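As a rough sketch of that approach for this particular servlet, using spring-test's MockHttpServletRequest/MockHttpServletResponse inside a test method that declares throws Exception (MyServlet is the hypothetical servlet under test, and the byte values are just an example payload):

// Hedged sketch: drive the servlet directly with Spring's servlet mocks,
// no container or TCP stack involved.
MockHttpServletRequest request = new MockHttpServletRequest("POST", "/foo");
request.setContentType("application/octet-stream");
request.setContent(new byte[] {0x01, 0x02, 0x03});   // the raw body the servlet will read

MockHttpServletResponse response = new MockHttpServletResponse();
new MyServlet().service(request, response);

// assert on response.getStatus(), response.getContentAsByteArray(), etc.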
You can use HttpClient to simplify testing somewhat. Take a look at the following article:
http://roberthanson.blogspot.com/2007/12/testing-servlets-with-junit.html
That, in combination with ServletTester, should give you what you want unit-test-wise.