How to set up OIDC connection to Keycloak in Quarkus on Kubernetes - java

Has somebody succeeded in setting up an OIDC connection to Keycloak in a Quarkus app deployed in a Kubernetes cluster?
Could you clarify how connection-delay (and other related parameters) works?
(Here is the documentation I tried to follow.)
In our environment (Quarkus 1.13.3.Final, Keycloak 12.0.4) we have the following config:
quarkus.oidc.connection-delay: 6M
quarkus.oidc.connection-timeout: 30S
quarkus.oidc.tenant-id: testTenant-01
And these messages appear in the pod's log during startup:
2021-07-26 14:44:22,523 INFO [main] [OidcRecorder.java:264] - Connecting to IDP for up to 180 times every 2 seconds
2021-07-26 14:44:24,142 DEBUG [vert.x-eventloop-thread-1] [OidcRecorder.java:115] - 'testTenant-01' tenant initialization has failed: 'OpenId Connect Provider configuration metadata is not configured and can not be discovered'. Access to resources protected by this tenant will fail with HTTP 401.
(... the following log output comes later, while the pod is running ...)
2021-07-27 06:11:54,261 DEBUG [vert.x-eventloop-thread-0] [DefaultTenantConfigResolver.java:112] - Tenant 'null' is not initialized
2021-07-27 06:11:54,262 ERROR [vert.x-eventloop-thread-0] [QuarkusErrorHandler.java:101] - HTTP Request to /q/health/live failed, error id: 89f83d1d-894c-4fed-9995-0d42d60cec17-2: io.quarkus.oidc.OIDCException: Tenant configuration has not been resolved
    at io.quarkus.oidc.runtime.OidcAuthenticationMechanism.resolve(OidcAuthenticationMechanism.java:61)
    at io.quarkus.oidc.runtime.OidcAuthenticationMechanism.authenticate(OidcAuthenticationMechanism.java:40)
    at io.quarkus.oidc.runtime.OidcAuthenticationMechanism_ClientProxy.authenticate(OidcAuthenticationMechanism_ClientProxy.zig:189)
    at io.quarkus.vertx.http.runtime.security.HttpAuthenticator.attemptAuthentication(HttpAuthenticator.java:100)
    at io.quarkus.vertx.http.runtime.security.HttpAuthenticator_ClientProxy.attemptAuthentication(HttpAuthenticator_ClientProxy.zig:157)
    at io.quarkus.vertx.http.runtime.security.HttpSecurityRecorder$2.handle(HttpSecurityRecorder.java:101)
    at io.quarkus.vertx.http.runtime.security.HttpSecurityRecorder$2.handle(HttpSecurityRecorder.java:51)
    at io.vertx.ext.web.impl.RouteState.handleContext(RouteState.java:1038)
Questions:
Is there any way to find out which metadata is missing?
Can I somehow change the 2-second period between connection attempts?
Is there any relation between connection-delay and connection-timeout?
It failed after about 2 seconds - does that mean it failed immediately on the first attempt, or did it finish all 180 attempts that fast?
Does DefaultTenantConfigResolver get the tenant from a different source than OidcRecorder does during initialization, i.e. should the tenant be configured in multiple places?

Finally made it work. The cause was an incorrect auth-server-url, which is not clear at all from the log messages. The working config:
quarkus.oidc.client-id: my-app
quarkus.oidc.enabled: true
quarkus.oidc.connection-delay: 6M
quarkus.oidc.connection-timeout: 30S
quarkus.oidc.tenant-id: testTenant-01
quarkus.oidc.auth-server-url: ${keycloak.url}/auth/realms/${quarkus.oidc.tenant-id}
The URL format is emphasized in the Quarkus doc: note that if you work with a Keycloak OIDC server, make sure the base URL is in the following format: https://host:port/auth/realms/{realm}, where {realm} has to be replaced by the name of the Keycloak realm.
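For illustration, a resolved URL might look like this (keycloak.example.com is a placeholder host; the realm segment must match the Keycloak realm name):
quarkus.oidc.auth-server-url: https://keycloak.example.com/auth/realms/testTenant-01
As for the retry questions above: judging from the startup log, the retry interval appears to be fixed at 2 seconds, with the attempt count derived from connection-delay (6 minutes / 2 s = 180 attempts), so connection-delay seems to bound how long startup keeps retrying, while connection-timeout appears to bound each individual connection attempt.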

Related

Accessing Neo4j/GrapheneDB (Dev free plan) on Heroku from Micronaut Java app fails: Connection to database terminated

Currently I'm struggling with Neo4j/GrapheneDB (Dev free plan) on the Heroku platform.
Launching my app locally via "heroku local" works fine; it connects (Neo4j Java Driver 4) to a Neo4j 3.5.18 (running from the Docker image "neo4j:3.5").
My app is built with the Micronaut framework, using its Neo4j support. Launching the app on the Heroku platform succeeds; I'm using the Gradle Heroku plugin for this task.
But accessing the database with business operations (and health checks) fails with an exception like this:
INFO Driver - Direct driver instance 1523082263 created for server address hobby-[...]ldel.dbs.graphenedb.com:24787
WARN RetryLogic - Transaction failed and will be retried in 1032ms
org.neo4j.driver.exceptions.ServiceUnavailableException: Connection to the database terminated. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0.
at org.neo4j.driver.internal.util.Futures.blockingGet(Futures.java:143)
at org.neo4j.driver.internal.InternalSession.beginTransaction(InternalSession.java:163)
at org.neo4j.driver.internal.InternalSession.lambda$transaction$4(InternalSession.java:147)
at org.neo4j.driver.internal.retry.ExponentialBackoffRetryLogic.retry(ExponentialBackoffRetryLogic.java:101)
at org.neo4j.driver.internal.InternalSession.transaction(InternalSession.java:146)
at org.neo4j.driver.internal.InternalSession.readTransaction(InternalSession.java:112)
at org.neo4j.driver.internal.InternalSession.readTransaction(InternalSession.java:106)
at PersonController.logInfoOf(PersonController.java:57)
at PersonController.<init>(PersonController.java:50)
at $PersonControllerDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1814)
[...]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Suppressed: org.neo4j.driver.internal.util.ErrorUtil$InternalExceptionCause: null
at org.neo4j.driver.internal.util.ErrorUtil.newConnectionTerminatedError(ErrorUtil.java:52)
at org.neo4j.driver.internal.async.connection.HandshakeHandler.channelInactive(HandshakeHandler.java:81)
[...]
at org.neo4j.driver.internal.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 common frames omitted
I'm sure the login credentials from the OS environment variables GRAPHENEDB_BOLT_URL, GRAPHENEDB_BOLT_USER, and GRAPHENEDB_BOLT_PASSWORD are injected into the app correctly; I've verified it with some debug log statements:
State changed from starting to up
INFO io.micronaut.runtime.Micronaut - Startup completed in 2360ms. Server Running: http://localhost:7382
INFO Application - Neo4j Bolt URIs: [bolt://hobby-[...]ldel.dbs.graphenedb.com:24787]
INFO Application - Neo4j Bolt encrypted? false
INFO Application - Neo4j Bolt trust strategy: TRUST_SYSTEM_CA_SIGNED_CERTIFICATES
INFO Application - Changed trust strategy to: TRUST_ALL_CERTIFICATES
INFO Application - Env.: GRAPHENEDB_BOLT_URL='bolt://hobby-[...]ldel.dbs.graphenedb.com:24787'
INFO Application - Env.: GRAPHENEDB_BOLT_USER='app1[...]hdai'
INFO Application - Env.: GRAPHENEDB_BOLT_PASSWORD of length 31
I've also tried restarting the GrapheneDB instance via the Heroku plugin website, but with the same negative result.
What's going wrong here? Are there any ways to further nail down the root cause?
Thanks
Christian
I had a closer look at this, and it seems that you need driver encryption turned on for GrapheneDB instances. This can be configured in application.yml as below:
neo4j:
  encryption: true
For reference, here is a sample project https://github.com/aldrinm/micronaut-neo4j-heroku
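If you are configuring the driver programmatically instead, a minimal sketch with Neo4j Java Driver 4 might look like this (it mirrors the application.yml setting above; the trust-all line matches the TRUST_ALL_CERTIFICATES strategy the question already switched to):

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Config;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

public class DriverFactory {
    public static Driver create() {
        // Turn on TLS for the Bolt connection, as GrapheneDB requires.
        Config config = Config.builder()
                .withEncryption()
                // Same trust strategy the question's debug output shows.
                .withTrustStrategy(Config.TrustStrategy.trustAllCertificates())
                .build();
        return GraphDatabase.driver(
                System.getenv("GRAPHENEDB_BOLT_URL"),
                AuthTokens.basic(System.getenv("GRAPHENEDB_BOLT_USER"),
                                 System.getenv("GRAPHENEDB_BOLT_PASSWORD")),
                config);
    }
}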

Kafka Streams is not detecting a renewed Kerberos ticket after the initial ticket's expiry

I've found some similar questions, but they're not quite the same situation as this.
I have a Kafka Streams application which authenticates with brokers using Kerberos ticket details found within a Credential Cache.
The application works great until the original ticket's expiry is reached; then I get the following error:
04:21:45.630 [kafka-producer-network-thread | sample-app-StreamThread-1-producer] ERROR org.apache.kafka.clients.NetworkClient - [Producer clientId=sample-app-StreamThread-1-producer] Connection to node 2 (<Hostname>/<ipAddress>:<Port>) failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTHENTICATION_FAILED state.
Now, that would all seem expected, but my ticket is renewed every 2 hours by another system, and yet, the Kafka Streams application isn't detecting that the ticket has been renewed. Querying the ticket using 'klist' tells me that there is a valid ticket at the time when the error occurs.
Ticket cache: FILE:/var/ABC/SYSTEM_ACCOUNT/cc/krb5cc_12345
Default principal: 12345@EXCHAD.ABC123.com

Valid starting       Expires              Service principal
04/02/20 02:28:02    04/02/20 12:28:02    krbtgt/EXCHAD.ABC123.com@EXCHAD.ABC123.com
        renew until 04/08/20 08:28:04
Oddly, I can bounce my application and it will work again, but only until the current ticket's expiry is reached in approximately 10 hours.
Why isn't Kafka Streams picking up the latest ticket? Is this potentially a bug within Kafka Streams itself? I can't find any other settings related to this beyond the initial JAAS configuration:
com.sun.security.auth.module.Krb5LoginModule required
    refreshKrb5Config=true
    useKeyTab=false
    useTicketCache=true
    renewTGT=true
    doNotPrompt=true
    ticketCache="/var/ABC/SYSTEM_ACCOUNT/cc/krb5cc_12345"
    principal="12345@EXCHAD.ABC123.com";
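The equivalent inline setting via Kafka's standard sasl.jaas.config client property would presumably look like the sketch below (the security.protocol and service-name values are assumptions; adjust them to your cluster):

import java.util.Properties;

Properties props = new Properties();
// ... application.id, bootstrap.servers and the other Streams settings ...
props.put("security.protocol", "SASL_PLAINTEXT"); // assumption; use SASL_SSL if TLS is enabled
props.put("sasl.kerberos.service.name", "kafka"); // assumption; must match the brokers' service principal
props.put("sasl.jaas.config",
        "com.sun.security.auth.module.Krb5LoginModule required"
        + " refreshKrb5Config=true useKeyTab=false useTicketCache=true"
        + " renewTGT=true doNotPrompt=true"
        + " ticketCache=\"/var/ABC/SYSTEM_ACCOUNT/cc/krb5cc_12345\""
        + " principal=\"12345@EXCHAD.ABC123.com\";");
// props is then passed to new KafkaStreams(topology, props).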
I'm using Java 8 and Kafka Streams 2.4.0.
As always, any help or guidance would be greatly appreciated.
Thanks!

SSLHandshakeErrorTracker : SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired

There are two servers: the first one is the production environment and the second is my WebSphere (mock) server. I have installed an application in WebSphere which is not able to process HTTPS/HTTP requests from the client server.
When the client is configured to access HTTPS (port 9443) on my mock server, I get the following error during request processing:
ERROR 13172876 --- [bContainer : 13] c.i.w.s.c.impl.SSLHandshakeErrorTracker : SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
When the client is configured for HTTP (port 9080), I get the following error:
WARN 13172876 --- [ebContainer : 7] o.s.web.servlet.PageNotFound : Request method 'GET' not supported
ERROR 13172876 --- [ebContainer : 7] o.s.boot.web.support.ErrorPageFilter : Cannot forward to error page for request [/] as the response has already been committed. As a result, the response may have the wrong status code. If your application is running on WebSphere Application Server you may be able to resolve this problem by setting com.ibm.ws.webcontainer.invokeFlushAfterService to false
I have configured the WireMock application for service virtualization in my application. My application has one RestController with two request methods (POST) and one class to configure/run the WireMock service. I do not get any error when I test the application directly from the mock server, but when the client hits my server I get the errors above.
For the HTTPS issue, the certificate configured at the Application Server (WebSphere) level may have expired.
For the HTTP issue, there may be some other firewall or configuration that handles the request before it reaches the server; otherwise, it would cause the same issue in your local setup as well.
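One note on the first error: "Unrecognized SSL message, plaintext connection?" generally means the server received non-TLS bytes on a TLS port, i.e. the client may be sending plain HTTP to port 9443. A minimal probe like the sketch below (the host name is a placeholder) can confirm whether that port completes a TLS handshake at all:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsProbe {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // Placeholder host; use the mock server's real host name.
        try (SSLSocket socket = (SSLSocket) factory.createSocket("mockserver.example.com", 9443)) {
            // Throws if the port does not speak TLS; a certificate-validation failure
            // here would still prove that the port speaks TLS.
            socket.startHandshake();
            System.out.println("TLS OK, protocol: " + socket.getSession().getProtocol());
        }
    }
}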

How to define JWT configuration in JHipster?

I have two microservices: the first one is a web API, the other is a gateway.
I've already started both, but I cannot authenticate with any user, neither admin nor user.
When I try to log in, I receive this message on the API server:
InsufficientAuthenticationException: Full authentication is required to access this resource
2018-02-23 13:05:31.373 WARN 7144 --- [ XNIO-2 task-3] o.z.p.spring.web.advice.AdviceTrait : Unauthorized: Full authentication is required to access this resource
Both microservices use the default JHipster configuration. The gateway was created with the --skip-server option.
I believe the problem is SERVER_API_URL, because all calls to the API are directed to port 9000 but the API is running on 8081.
I got this from Chrome Dev tools:
Request URL:http://localhost:9000/api/profile-info?cacheBuster=1519408025933
The :9000 is wrong; the port must be 8081. Where can I change that?
I have already changed webpack.common.js:
SERVER_API_URL: 'http://localhost:8081/'
But nothing works.

Kafka ACL - LEADER_NOT_AVAILABLE

I have an issue producing messages to a Kafka topic (named secure.topic) secured with ACL.
My Groovy-based producer throws this error:
Error while fetching metadata with correlation id 9 : {secure.topic=LEADER_NOT_AVAILABLE}
Some notes about the configuration:
1 Kafka server, version 2.11_1.0.0 (both server and Java client libs)
topic ACL is set to All (also tested with --producer) and the user is the full name specified in the certificate
client auth enabled using self generated certificates
Additional server config:
security.inter.broker.protocol = SSL
ssl.client.auth = required
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
If I remove the authorizer.class.name property, then my client can produce messages (so, no problem with SSL and certificates).
Also, the kafka-authorizer.log produces the following message:
[2018-01-25 11:57:02,779] INFO Principal = User:CN= User,OU=XXX,O=XXX,L=XXX,ST=Unknown,C=X is Denied Operation = ClusterAction from host = 127.0.0.1 on resource = Cluster:kafka-cluster (kafka.authorizer.logger)
Any idea what can cause the LEADER_NOT_AVAILABLE error when enabling ACL?
From the authorizer logs, it looks like the Authorizer denied ClusterAction on the Cluster resource.
If you check your topic status (for example using kafka-topics.sh), I'd expect to see it without a leader (-1).
When you enable authorization, it applies to all Kafka API messages reaching your cluster, including inter-broker messages like StopReplica, LeaderAndIsr, ControlledShutdown, etc. So it looks like you only added ACLs for your client but forgot the ACLs required for the brokers to function.
So you need to at least add an ACL granting ClusterAction on the Cluster resource for your broker's principals. IIRC that's the only required ACL for inter-broker messages.
Following that, your cluster should be able to correctly elect a leader for the partition enabling your client to produce.
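For illustration, granting that ACL with the kafka-acls.sh tool shipped with Kafka might look like this (the ZooKeeper address is a placeholder, and the principal must match the broker certificate's DN exactly):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal "User:CN=broker,OU=XXX,O=XXX,L=XXX,ST=Unknown,C=X" \
  --operation ClusterAction --cluster

Alternatively, listing the brokers' principals in the super.users broker setting also exempts inter-broker traffic from ACL checks.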
