Getting Kerberos ticket expired error when accessing a service - java

My application accesses a service which authenticates using Kerberos. This service is contacted from two classes in my application. After I start the application it runs fine, but after a certain amount of time, mostly near midnight, we start seeing the error below:
Caused by: com.dstc.security.kerberos.gssapi.GSSKrbException: Failure unspecified at GSS-API level (Mechanism level: com.dstc.security.kerberos.KerberosError: Ticket expired
KrbError:
Error code: 32
Error message: null
Client name: null
Client realm: null
Client time: null
Server name: HTTP/orderstore-sit.xyz.com
Server realm: INTRANET.XYZ.COM
Server time: Fri Oct 14 22:12:02 UTC 2022)
at com.dstc.security.kerberos.gssapi.GSSKrbException.create(GSSKrbException.java:208) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.GSSContext.initSecContext(GSSContext.java:316) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.GSSContext.initSecContext(GSSContext.java:286) ~[vsj-standard-2.1.2.jar!/:?]
at com.xyz.security.providers.KerberosQSJSession.getToken(KerberosQSJSession.java:134) ~[best-2.6.jar!/:?]
... 42 more
Caused by: com.dstc.security.kerberos.KerberosError: Ticket expired
KrbError:
Error code: 32
Error message: null
Client name: null
Client realm: null
Client time: null
Server name: HTTP/orderstore-sit.xyz.com
Server realm: INTRANET.XYZ.COM
Server time: Fri Oct 14 22:12:02 UTC 2022
at com.dstc.security.kerberos.Kerberos.getKrbTGSRepFromKDC(Kerberos.java:1356) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.Kerberos.requestServiceTicket(Kerberos.java:1309) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.Kerberos.requestServiceTicket(Kerberos.java:1333) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.DefaultCredentialManager.requestServiceTicket(DefaultCredentialManager.java:194) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.ClientHandShaker.getServiceTicket(ClientHandShaker.java:706) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.ClientHandShaker.huntServiceTicket(ClientHandShaker.java:286) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.ClientHandShaker.handle(ClientHandShaker.java:192) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.GSSContext.initSecContext(GSSContext.java:307) ~[vsj-standard-2.1.2.jar!/:?]
at com.dstc.security.kerberos.gssapi.GSSContext.initSecContext(GSSContext.java:286) ~[vsj-standard-2.1.2.jar!/:?]
at com.xyz.security.providers.KerberosQSJSession.getToken(KerberosQSJSession.java:134) ~[best-2.6.jar!/:?]
... 42 more
The service we are trying to access requires an SPN parameter. Two features in our application contact the same service with two different SPN values. The service provides a token which is used in further API calls. We use the following two SPN values:
HTTP/orderstore-sit.xyz.com
HTTP/addresses-sit.xyz.com
If we access this service from only one feature, everything works fine. But if we use both features, the aforementioned error occurs after some time and keeps recurring for a long while. It then seems to affect other parts of the application as well: for example, we have a separate thread running a Kafka consumer, which also authenticates with the Kafka cluster using the Kerberos mechanism, and that Kafka consumer crashes too.
We use the JAAS config below. Please help; we are completely stuck on this issue.
SUN_JDK_KRB5.com.xyzpatterns.security.IIdentity.create_OBSCURED_SECRET_PRINCIPAL_devSvcUser_REALM_INTRANET.XYZ.COM
{
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    principal="devSvcUser"
    keyTab="/apps/XyzSvc/Applications/ssl/devSvcUser.keytab"
    debug=true
    storeKey=true
    realm="INTRANET.XYZ.COM"
    doNotPrompt=true;
};
SUN_JDK_KRB5.com.xyzpatterns.security.IIdentity.create_CACHED_CREDENTIALS
{
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    principal="devSvcUser"
    keyTab="/apps/XyzSvc/Applications/ssl/devSvcUser.keytab"
    debug=true
    storeKey=true
    realm="INTRANET.XYZ.COM"
    doNotPrompt=true;
};
QUEST_VSJ.Identity.create_CACHED_CREDENTIALS
{
    com.dstc.security.kerberos.jaas.KerberosLoginModule required
    useKeyTab=true
    principal="devSvcUser"
    keyTab="/apps/XyzSvc/Applications/ssl/devSvcUser.keytab"
    debug=true
    storeKey=true
    realm="INTRANET.XYZ.COM"
    doNotPrompt=true;
};
QUEST_VSJ.Identity.create_USERNAME_PASSWORD
{
    com.dstc.security.kerberos.jaas.KerberosLoginModule required
    useKeyTab=true
    principal="devSvcUser"
    keyTab="/apps/XyzSvc/Applications/ssl/devSvcUser.keytab"
    debug=true
    storeKey=true
    realm="INTRANET.XYZ.COM"
    doNotPrompt=true;
};
QUEST_VSJ.Identity.create_OBSCURED_SECRET_PRINCIPAL_devSvcUser_REALM_INTRANET.XYZ.COM
{
    com.dstc.security.kerberos.jaas.KerberosLoginModule required
    useKeyTab=true
    principal="devSvcUser"
    keyTab="/apps/XyzSvc/Applications/ssl/devSvcUser.keytab"
    debug=true
    storeKey=true
    realm="INTRANET.XYZ.COM"
    doNotPrompt=true;
};
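For comparison, here is a minimal sketch of the same token acquisition using the stock JDK JAAS/JGSS classes rather than the Quest VSJ API, with a forced re-login, which is the usual mitigation when a cached TGT has expired. It reuses the SUN_JDK_KRB5 entry name from the config above; the class name, method name, and the login-per-call strategy are illustrative assumptions, not our actual code:

import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class KerberosTokenSource {

    // JAAS entry name taken from the config above (Sun Krb5LoginModule, keytab-based)
    private static final String JAAS_ENTRY =
            "SUN_JDK_KRB5.com.xyzpatterns.security.IIdentity.create_CACHED_CREDENTIALS";

    // Logs in from the keytab again and builds a fresh GSS token for the given SPN,
    // e.g. "HTTP/orderstore-sit.xyz.com".
    public static byte[] freshToken(String spn) throws Exception {
        LoginContext lc = new LoginContext(JAAS_ENTRY);
        lc.login(); // a fresh login() acquires a new TGT instead of reusing an expired one
        Subject subject = lc.getSubject();
        return Subject.doAs(subject, (PrivilegedExceptionAction<byte[]>) () -> {
            GSSManager manager = GSSManager.getInstance();
            // JGSS host-based service names are written "HTTP@host", not "HTTP/host"
            GSSName server = manager.createName(
                    spn.replaceFirst("/", "@"), GSSName.NT_HOSTBASED_SERVICE);
            Oid krb5Mech = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism OID
            GSSContext ctx = manager.createContext(
                    server, krb5Mech, null, GSSContext.DEFAULT_LIFETIME);
            byte[] token = ctx.initSecContext(new byte[0], 0, 0);
            ctx.dispose();
            return token;
        });
    }
}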

Related

SCDF: Error handling when pod failed to start

I'm working on a service that calls Spring Cloud Data Flow (SCDF) to spin off a new k8s pod for a Spring Batch job.
Map<String, String> properties = Map.of("testApp.cpu", cpu, "testApp.memory", memory);
LOGGER.info("Create task '{}' with definition '{}'", taskName, taskDefinition);
taskOperations.create(taskName, taskDefinition);
LOGGER.info("Launching task '{}' with properties {} and arguments '{}'", taskName, properties, args);
return taskOperations.launch(taskName, properties, args);
Everything works fine. The problem is that whenever we pull a non-existent image (e.g. due to some connection issue), the pod fails to start AND we end up with pending tasks (with NO batch jobs created whatsoever).
For example, we will have tasks in the task_execution table (an SCDF table) with an empty end time,
but no related jobs in the batch_job_execution table.
It seems fine at first, since no pod is created and we don't consume any resources. But as the number of "pending jobs" reaches 20, we get the famous error:
Cannot launch task testApp. The maximum concurrent task executions is at its limit [20]
I'm trying to find a way to detect that the pod spin-off has failed (and hence that we should mark the task as an error), but to no avail.
Is there a way to detect that the task launch has failed when that task launches a new k8s pod?
UPDATE
Not sure if it is relevant, but we are using SCDF 1.7.3.RELEASE.
kubectl describe output for the failed pod:
Name: podname-lp2nyowgmm
Namespace: my-namespace
Priority: 1000
Priority Class Name: test-cluster-default
Node: some-ip.compute.internal/XX.XXX.XXX.XX
Start Time: Thu, 14 Jan 2021 18:47:52 +0700
Labels: role=spring-app
spring-app-id=podname-lp2nyowgmm
spring-deployment-id=podname-lp2nyowgmm
task-name=podname
Annotations: iam.amazonaws.com/role: arn:aws:iam::XXXXXXXXXXXX:role/svc-XXXX-XXX-XX-XXXX-X-XXX-XXX-XXXXXXXXXXXXXXXXXXXX
kubernetes.io/psp: eks.privileged
Status: Pending
IP: XX.XXX.XXX.XXX
IPs:
IP: XX.XXX.XXX.XXX
Containers:
podname-lp2nyowgmm:
Container ID:
Image: image_host:XXX/mysystem/myapp:notExist
Image ID:
Port: <none>
Host Port: <none>
Args:
--spring.datasource.username=postgres
--spring.cloud.task.name=podname
--spring.datasource.url=jdbc:postgresql://...
--spring.datasource.driverClassName=org.postgresql.Driver
--spring.datasource.password=XXXX
--fileId=XXXXXXXXXXX
--spring.application.name=app-name
--fileName=file_name.csv
...
--spring.cloud.task.executionid=3
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 8Gi
Requests:
cpu: 2
memory: 8Gi
Environment:
ELASTIC_SEARCH_PORT: 80
ELASTIC_SEARCH_PROTOCOL: http
SPRING_RABBITMQ_PORT: ${RABBITMQ_SERVICE_PORT}
ELASTIC_SEARCH_URL: elasticsearch
SPRING_PROFILES_ACTIVE: kubernetes
CLIENT_SECRET: ${CLIENT_SECRET}
SPRING_RABBITMQ_HOST: ${RABBITMQ_SERVICE_HOST}
RELEASE_ENV_NAME: QA_TEST
SPRING_CLOUD_APPLICATION_GUID: ${HOSTNAME}
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx(ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-xxxxx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xxxxx
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m22s default-scheduler Successfully assigned my-namespace/podname-lp2nyowgmm to some-ip.compute.internal
Normal Pulling 103s (x4 over 3m21s) kubelet Pulling image "image_host:XXX/mysystem/myapp:notExist"
Warning Failed 102s (x4 over 3m19s) kubelet Failed to pull image "image_host:XXX/mysystem/myapp:notExist": rpc error: code = Unknown desc = Error response from daemon: manifest for image_host:XXX/mysystem/myapp:notExist not found: manifest unknown: manifest unknown
Warning Failed 102s (x4 over 3m19s) kubelet Error: ErrImagePull
Normal BackOff 88s (x6 over 3m19s) kubelet Back-off pulling image "image_host:XXX/mysystem/myapp:notExist"
Warning Failed 73s (x7 over 3m19s) kubelet Error: ImagePullBackOff
1.7.3 is a very old release. We just released 2.7. The original logic used the task execution tables instead of the pod status. If the version you are using is subject to that, then it would explain what you are seeing. I strongly recommend an upgrade.
Thanks for the question. Looking at the source code, we don't include Pending pods when calculating the current number of executing tasks, so it may be that something else is going on. 1) Could you run kubectl describe pod on a pod when it's in this state and post the result (status details)? 2) Is the deployer configured to create a job for each task? (false by default)
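A possible stopgap while on the old version is to check the launched pod's container state yourself and fail the task when the image cannot be pulled. A sketch using the Fabric8 Kubernetes client (an assumption; any Kubernetes client works), keyed on the task-name label that is visible in the describe output above:

import io.fabric8.kubernetes.api.model.ContainerStatus;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class TaskPodChecker {

    // Returns true if any pod launched for the given task is stuck pulling its image.
    public static boolean hasImagePullFailure(String namespace, String taskName) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            for (Pod pod : client.pods()
                    .inNamespace(namespace)
                    .withLabel("task-name", taskName) // label SCDF puts on launched pods
                    .list().getItems()) {
                for (ContainerStatus cs : pod.getStatus().getContainerStatuses()) {
                    if (cs.getState().getWaiting() != null) {
                        String reason = cs.getState().getWaiting().getReason();
                        if ("ErrImagePull".equals(reason) || "ImagePullBackOff".equals(reason)) {
                            return true; // caller can then mark the task execution as failed
                        }
                    }
                }
            }
            return false;
        }
    }
}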

Camunda External client: TASK/CLIENT-02002 Exception while establishing connection for request

We have started implementing a Camunda-based workflow solution.
At the moment the setup is like this:
A Spring Boot application with an embedded Camunda BPM engine (via camunda-bpm-spring-boot-starter-rest and camunda-bpm-spring-boot-starter-webapp)
A Spring Boot application with an external task client (via camunda-external-task-client)
Everything is working fine so far. Our workflow is running and the external client is doing its job...
But after a while (when there is nothing for the external client to do) I see an exception in the log of the external task client:
15:49:09.692 [E] [TopicSubscripti] client.logError:70 - TASK/CLIENT-03001 Exception while fetch and lock task.
org.camunda.bpm.client.impl.EngineClientException: TASK/CLIENT-02002 Exception while establishing connection for request 'POST http://localhost:8080/enrichmentservice/api/rest/1.0/rest/external-task/fetchAndLock HTTP/1.1'
at org.camunda.bpm.client.impl.EngineClientLogger.exceptionWhileEstablishingConnection(EngineClientLogger.java:36)
at org.camunda.bpm.client.impl.RequestExecutor.executeRequest(RequestExecutor.java:101)
at org.camunda.bpm.client.impl.RequestExecutor.postRequest(RequestExecutor.java:74)
at org.camunda.bpm.client.impl.EngineClient.fetchAndLock(EngineClient.java:72)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.fetchAndLock(TopicSubscriptionManager.java:135)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.acquire(TopicSubscriptionManager.java:101)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.run(TopicSubscriptionManager.java:87)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.NoHttpResponseException: localhost:8080 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:141)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:221)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:165)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:140)
at org.camunda.bpm.client.impl.RequestExecutor.executeRequest(RequestExecutor.java:88)
at org.camunda.bpm.client.impl.RequestExecutor.postRequest(RequestExecutor.java:74)
at org.camunda.bpm.client.impl.EngineClient.fetchAndLock(EngineClient.java:72)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.fetchAndLock(TopicSubscriptionManager.java:135)
What could be the reason for this? Maybe a configuration error in the server or client?
One remark: the execution of the external task is slow (around 10-30 seconds).
Update:
I created a complete example: https://c.gmx.net/#505442085592110443/BIItJGdwTcuWwk7_XqNOXw
To create the error scenario you have to:
Start the ExampleApplication inside the spring-boot project
Start the ExternalClientApp inside the Spring-Boot-Client project
Wait a few minutes
The log output of the external client should look like this:
Subscribe client for: approveLoan
Subscription done
Subscribe client for: waitTask
Subscription done
pojo before: ObjectValue [value=ExamplePojo [num=123, textVal=some text], isDeserialized=true, serializationDataFormat=application/x-java-serialized-object, objectTypeName=org.camunda.bpm.example.tasks.ExamplePojo, serializedValue=156 chars, isTransient=false]
pojo changed: ObjectValue [value=ExamplePojo [num=123, textVal=external changed], isDeserialized=true, serializationDataFormat=application/x-java-serialized-object, objectTypeName=org.camunda.bpm.example.tasks.ExamplePojo, serializedValue=156 chars, isTransient=false]
The External Task 28 has been completed!
The External Task 32 has been completed! (done = false)
The External Task 39 has been completed! (done = false)
The External Task 46 has been completed! (done = true)
149038 [TopicSubscriptionManager] ERROR org.camunda.bpm.client - TASK/CLIENT-03001 Exception while fetch and lock task.
org.camunda.bpm.client.impl.EngineClientException: TASK/CLIENT-02002 Exception while establishing connection for request 'POST http://localhost:8080/rest/external-task/fetchAndLock HTTP/1.1'
at org.camunda.bpm.client.impl.EngineClientLogger.exceptionWhileEstablishingConnection(EngineClientLogger.java:36)
at org.camunda.bpm.client.impl.RequestExecutor.executeRequest(RequestExecutor.java:101)
at org.camunda.bpm.client.impl.RequestExecutor.postRequest(RequestExecutor.java:74)
at org.camunda.bpm.client.impl.EngineClient.fetchAndLock(EngineClient.java:72)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.fetchAndLock(TopicSubscriptionManager.java:135)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.acquire(TopicSubscriptionManager.java:101)
at org.camunda.bpm.client.topic.impl.TopicSubscriptionManager.run(TopicSubscriptionManager.java:87)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.NoHttpResponseException: localhost:8080 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:141)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:221)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:165)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:140)
at org.camunda.bpm.client.impl.RequestExecutor.executeRequest(RequestExecutor.java:88)
... 6 more
After testing it on different machines/JDKs/OSes, it looks like the problem only occurs on Windows 7 machines.
I think the cause of the error is described here: Apache HttpClient Interim Error: NoHttpResponseException
So the server is killing the HTTP connection because it has not been used for a longer time!
My workaround is to configure a different backoff strategy in the client, and so far I have had no errors anymore.
tl;dr: I think it is a bug in the external client that only happens on Win7, but I found a workaround:
ExternalTaskClient client = ExternalTaskClient.create()
        .baseUrl(baseUrl)
        .backoffStrategy(new ExponentialBackoffStrategy(500L, 2, 30000L))
        .build();
PS: I reported it as a bug: https://app.camunda.com/jira/browse/CAM-10526

Connect to multiple Kafka servers using springboot

In a Spring Boot application, I want to connect to 2 different Kafka servers simultaneously. I am using KafkaAdmin and AdminClient to make the connections and perform CRUD operations.
@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    System.setProperty("java.security.krb5.conf", krb5Location);
    System.setProperty("java.security.auth.login.config", jaasConfigLocation);
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("ssl.truststore.location", sslTruststoreLocation);
    configs.put("ssl.truststore.password", sslTruststorePassword);
    return new KafkaAdmin(configs);
}

@Bean
@PostConstruct
public AdminClient config() {
    return AdminClient.create(kafkaAdmin.getConfig());
}
Server 2 is configured similarly in the same Spring Boot application.
If I load the configuration of both Kafka servers at once during app initialization, the following error is displayed:
>>>KRBError:
cTime is Sun Jun 03 14:23:02 IST 2001 991558382000
sTime is Tue Nov 20 10:46:53 IST 2018 1542691013000
suSec is 512097
error code is 7
error Message is Server not found in Kerberos database
cname is config1@servername.com
sname is config2@servernname.com
msgType is 30
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:361)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:359)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:269)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:206)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:474)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1006)
at java.lang.Thread.run(Thread.java:748)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
... 22 more
2018-11-20 10:46:53.605 ERROR 8672 --- [| adminclient-4] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-4] Connection to node -1 failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTHENTICATION_FAILED state.
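The trace shows the cname from one configuration paired with the sname from the other, which points at the JVM-global java.security.auth.login.config being shared by both clients. A hedged sketch of the usual way around this: Kafka clients from 0.10.2 onward accept a per-client JAAS entry through the sasl.jaas.config property (KIP-85), so each AdminClient can carry its own principal without touching global system properties. Bean and field names below are illustrative:

@Bean
public KafkaAdmin kafkaAdminClusterB() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, serverB);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("sasl.mechanism", "GSSAPI");
    configs.put("sasl.kerberos.service.name", "kafka");
    // Per-client JAAS entry: no global java.security.auth.login.config needed,
    // so a second KafkaAdmin can use a different principal/keytab.
    configs.put("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true doNotPrompt=true "
            + "keyTab=\"/path/to/clusterB.keytab\" principal=\"svcUserB@REALM.B\";");
    configs.put("ssl.truststore.location", sslTruststoreLocationB);
    configs.put("ssl.truststore.password", sslTruststorePasswordB);
    return new KafkaAdmin(configs);
}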

Unable to authenticate with apache http client 4.5 using kerberos ticket cache

I am executing an HTTPS request to a Kerberos-authenticated REST service. All is fine if I am using the keytab. However, I have a requirement to use the Kerberos ticket cache file which is created when one logs in to the workstation with a password.
I'll replace the domain with MY_DOMAINE.COM
So, klist shows:
Ticket cache: FILE:/tmp/krb5cc_210007
Default principal: dragomira@MY_DOMAINE.COM
Valid starting Expires Service principal
05/15/18 07:21:51 05/15/18 17:21:51 krbtgt/MY_DOMAINE.COM@MY_DOMAINE.COM
renew until 05/22/18 06:18:22
Using curl like this works ok:
curl -k --negotiate -u : 'my_url' -v
Now, let's go back to the code. My login.conf is like this:
com.sun.security.jgss.login {
    com.sun.security.auth.module.Krb5LoginModule required
    client=TRUE
    doNotPrompt=true
    useTicketCache=true;
};
com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    client=TRUE
    doNotPrompt=true
    useTicketCache=true;
};
com.sun.security.jgss.accept {
    com.sun.security.auth.module.Krb5LoginModule required
    client=TRUE
    doNotPrompt=true
    useTicketCache=true;
};
The relevant Java code for my HTTP client, which is set up for Kerberos, is:
try {
    SSLContext sslContext = new SSLContextBuilder()
            .loadTrustMaterial(null, (chain, authType) -> true).build();
    HostnameVerifier hostnameVerifier = new NoopHostnameVerifier();
    Registry<AuthSchemeProvider> authSchemeRegistry = RegistryBuilder.<AuthSchemeProvider>create()
            .register(AuthSchemes.SPNEGO, new SPNegoSchemeFactory())
            .build();
    Credentials dummyCredentials = new NullCredentials();
    CredentialsProvider credProv = new BasicCredentialsProvider();
    credProv.setCredentials(new AuthScope(null, -1, null), dummyCredentials);
    this.httpClient = HttpClientBuilder.create()
            .setDefaultAuthSchemeRegistry(authSchemeRegistry)
            .setDefaultCredentialsProvider(credProv)
            .setSSLContext(sslContext)
            .setSSLHostnameVerifier(hostnameVerifier)
            .build();
} catch (NoSuchAlgorithmException | KeyStoreException | KeyManagementException e) {
    throw new RuntimeException(e.getMessage(), e);
}
Before this, I am setting these Java properties:
java.security.auth.login.config=/home/dragomira/kerberos/login.conf
java.security.krb5.conf=/etc/krb5.conf
sun.security.krb5.debug=true
javax.security.auth.useSubjectCredsOnly=false
The output of the kerberos log is:
Loaded from Java config
>>>KinitOptions cache name is /tmp/krb5cc_210007
>>>DEBUG <CCacheInputStream> client principal is dragomira@MY_DOMANIN.COM
>>>DEBUG <CCacheInputStream> server principal is krbtgt/MY_DOMANIN.COM@MY_DOMANIN.COM
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Tue May 15 06:18:22 EDT 2018
>>>DEBUG <CCacheInputStream> start time: Tue May 15 07:21:51 EDT 2018
>>>DEBUG <CCacheInputStream> end time: Tue May 15 17:21:51 EDT 2018
>>>DEBUG <CCacheInputStream> renew_till time: Tue May 22 06:18:22 EDT 2018
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL; PRE_AUTH;
>>>DEBUG <CCacheInputStream> client principal is dragomira@MY_DOMANIN.COM
>>>DEBUG <CCacheInputStream> server principal is HTTP/configuration.prd.int.MY_DOMANIN.COM@MY_DOMANIN.COM
>>>DEBUG <CCacheInputStream> key type: 23
>>>DEBUG <CCacheInputStream> auth time: Tue May 15 06:18:22 EDT 2018
>>>DEBUG <CCacheInputStream> start time: Tue May 15 07:57:49 EDT 2018
>>>DEBUG <CCacheInputStream> end time: Tue May 15 17:21:51 EDT 2018
>>>DEBUG <CCacheInputStream> renew_till time: Tue May 22 06:18:22 EDT 2018
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; PRE_AUTH;
>>> unsupported key type found the default TGT: 18
So it would seem to me that the ticket is read, but no credentials are extracted from it, since I receive a 401 in the end.
Must I do something special with Apache HttpClient 4.5 in order to use the ticket cache?
Kind regards
Based on the error:
unsupported key type found the default TGT: 18
Type 18 = aes-256-cts-hmac-sha1-96 (See IANA Kerberos Parameters)
I think you are using a JRE with the limited-strength JCE policy and have to install the unlimited-strength JCE policy files.
On the Oracle downloads site for the Oracle JRE, check under Additional Resources for the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8:
Oracle Java SE downloads
See also: Oracle Java SE 8 technotes jgss
NOTE: The JCE framework within JDK includes an ability to enforce restrictions regarding the cryptographic algorithms and maximum cryptographic strengths available to applications. Such restrictions are specified in "jurisdiction policy files." The jurisdiction policy files bundled in Java SE limit the maximum key length. Hence, to use the AES256 encryption type, you will need to install the JCE crypto policy with the unlimited version to allow AES with 256-bit key.
Testing your policy (source):
jrunscript -e 'print (javax.crypto.Cipher.getMaxAllowedKeyLength("AES") >= 256);'
As of the start of 2018, Oracle JDK in all supported versions is beginning to ship with default unlimited strength JCE policy:
https://bugs.openjdk.java.net/browse/JDK-8189377
Also see these interesting workarounds with reflection, and a possible override setting for JRE9:
https://stackoverflow.com/a/22492582/2824577
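One more option worth noting: from Java 8u151 onward, the unlimited policy can also be switched on programmatically, before any JCE class is initialized, instead of swapping in the policy JARs (a sketch; the self-check mirrors the jrunscript test above):

public class UnlimitedCrypto {
    public static void main(String[] args) throws Exception {
        // Must run before any JCE class is initialized (Java 8u151+)
        java.security.Security.setProperty("crypto.policy", "unlimited");
        // Self-check, equivalent to the jrunscript test above
        System.out.println(javax.crypto.Cipher.getMaxAllowedKeyLength("AES") >= 256);
    }
}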
mmm...
Default principal: dragomira@MY_DOMAINE.COM
DEBUG client principal is dragomira@MY_DOMANIN.COM
DOMANIN?
I am doing the same thing in a Spring Boot application. I am able to make the REST call using the cached ticket (users/conf/krb5_xyz) and it authenticates properly.
My working client:
import java.util.HashMap;
import java.util.Map;

import org.springframework.security.kerberos.client.KerberosRestTemplate;

public class Test {
    public static void main(String[] args) {
        Map<String, Object> loginOption = new HashMap<>();
        loginOption.put("refreshKrb5Config", "true");
        loginOption.put("useTicketCache", "true");
        loginOption.put("ticketCache", "h:/config/krb5cc_xyz");
        loginOption.put("doNotPrompt", "true");
        loginOption.put("debug", "true");

        /* option 1: using a keytab
        KerberosRestTemplate restTemplate = new KerberosRestTemplate(
                "C:\\Users\\xyz\\kerberos\\kerberos\\src\\main\\resources\\xyz.keytab", "wdd@sd.sd.sd"); */

        /* option 2: using the ticket cache */
        KerberosRestTemplate restTemplate = new KerberosRestTemplate(null, "-", loginOption);
        String response = restTemplate.getForObject("http://host:13080/xyz", String.class);
        System.out.println("Result: " + response);
    }
}

JMeter test on Netty-based impl produces error for every second request

I've implemented an HTTP service based on the HTTP server example as provided by the netty.io project.
When I execute a GET request to the service URL from command-line (wget) or from a browser, I receive a result as expected.
When I perform a load test using ApacheBench (ab -n 100000 -c 8 http://localhost:9000/path/to/service), I experience no errors (neither on the service side nor on the ab side) and see fair numbers for request-processing duration.
Afterwards, I set up a test plan in JMeter having a thread group with 1 thread and a loop count of 2. I inserted an HTTP request sampler where I simply added the server name localhost, the port number 9000 and the path /path/to/service. Then I also added a View Results Tree and a Summary Report listener.
Finally, I executed the test plan and received one valid response and one error showing the following content:
Thread Name: Thread Group 1-1
Sample Start: 2015-06-04 09:23:12 CEST
Load time: 0
Connect Time: 0
Latency: 0
Size in bytes: 2068
Headers size in bytes: 0
Body size in bytes: 2068
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: org.apache.http.NoHttpResponseException
Response message: Non HTTP response message: The target server failed to respond
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
The associated exception found in the response data tab showed the following content:
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:61)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
at org.apache.jmeter.protocol.http.sampler.MeasuringConnectionManager$MeasuredConnection.receiveResponseHeader(MeasuringConnectionManager.java:201)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:517)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:331)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1146)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1135)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:434)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:261)
at java.lang.Thread.run(Thread.java:745)
As I already have a similar service running which receives and processes web tracking data and shows no errors, it might be a problem within my test plan or JMeter, but I am not sure :-(
Did anyone experience similar behavior? Thanks in advance ;-)
The issue can be related to Keep-Alive management.
Read these:
https://bz.apache.org/bugzilla/show_bug.cgi?id=57921
https://wiki.apache.org/jmeter/JMeterSocketClosed
So your solution is one of the following.
If you're sure it's a keep-alive issue:
Try a JMeter nightly build from http://jmeter.apache.org/nightly.html:
Download the _bin and _lib files
Unpack the archives into the same directory structure
The other archives are not needed to run JMeter.
Then adapt the value of httpclient4.idletimeout.
A workaround is to increase the retry count or add a connection stale check, as per:
https://wiki.apache.org/jmeter/JMeterSocketClosed
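For reference, the relevant property entries look roughly like this (a sketch; the property names come from JMeter's HttpClient 4 sampler and the wiki page above, the values are illustrative):

# user.properties: retry once instead of failing immediately
httpclient4.retrycount=1
# user.properties: close connections JMeter kept idle longer than the server's keep-alive timeout (ms)
httpclient4.idletimeout=30000
# hc.parameters: re-validate pooled connections before reuse
http.connection.stalecheck$Boolean=true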
