We are trying to access a Kerberized Hadoop cluster (Cloudera distribution) from Java code, but we are getting the exception below.
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
    at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
We have set the property "hadoop.security.authentication" to kerberos and fs.defaultFS to hdfs://devha:8020, and we pass the keytab file path to UserGroupInformation.
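In code, that setup looks roughly like the following sketch (the principal and keytab path are placeholders, not our real values):
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginAttempt {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("fs.defaultFS", "hdfs://devha:8020");

        // Tell UserGroupInformation that the cluster is Kerberized ...
        UserGroupInformation.setConfiguration(conf);
        // ... and log in from the keytab (placeholder principal and path).
        UserGroupInformation.loginUserFromKeytab("user@DEV.EXAMPLE.COM", "/path/to/user.keytab");
    }
}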
First, read the comments on your question. Good stuff.
Taking a step back, since that information can be overwhelming: there are two possible ways to authenticate to a Hadoop cluster. A user will normally use a username (principal) and password, while an application will normally use a principal and a keytab file. A keytab file is created by the Kerberos administrator using the 'kadmin' application.
Furthermore, there's the concept of a "Login" user (an application-wide default) versus a "Current" user that can be specific to your current need. You'll often use the former to access resources on your local cluster and the latter to access resources on an external cluster.
Since I'm using the latter I can give you a quick code snippet to get you started. For initialization:
UserGroupInformation.setConfiguration(configuration);
where "configuration" is either read from the standard location (/etc/hadoop) or generated on the fly. Note - that sets a static value so you need to be very careful!
For the individual user (application) I use
UserGroupInformation user = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytabFile);
There are several variants of this method - e.g., do they take a username or a keytab file? Do they set the "Login" user or do they return a new UserGroupInformation object? Be careful that you understand the consequences of the one you're using, since some set global values. For instance, see the contrast sketched below.
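As an illustration, here is a rough contrast between two of the keytab variants (the principal and keytab path are placeholders):
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiLoginVariants {
    public static void main(String[] args) throws IOException {
        // Sets the process-wide "Login" user - a global side effect.
        UserGroupInformation.loginUserFromKeytab("svc@EXAMPLE.COM", "/etc/security/keytabs/svc.keytab");

        // Returns a separate UGI without replacing the global "Login" user.
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "svc@EXAMPLE.COM", "/etc/security/keytabs/svc.keytab");
        System.out.println("Logged in as: " + ugi.getUserName());
    }
}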
You now have to wrap your calls to the cluster in a doAs() call:
user.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
        // do all of your Hadoop calls here
        return null;
    }
});
I don't recall whether you need to do this if you're always using the "Login" user. We need to support both local and external clusters, and for us it's easiest to always wrap everything like this. It means we only need to set "user" once, at the start of the action.
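Putting the pieces together, a minimal end-to-end sketch (assuming the Hadoop config files are on the classpath, and with a placeholder principal and keytab) might look like this:
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedHdfsAccess {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads core-site.xml/hdfs-site.xml from the classpath
        UserGroupInformation.setConfiguration(conf); // static - see the warning above

        UserGroupInformation user = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "svc@EXAMPLE.COM", "/etc/security/keytabs/svc.keytab");

        user.doAs((PrivilegedExceptionAction<Void>) () -> {
            // All cluster calls happen inside doAs() so they run as "user".
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            return null;
        });
    }
}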
See the resources mentioned above if you want details on user impersonation, using SSL encryption (rpc.privacy), etc.
I have a Java application running in ECS in which I want to read data from a table in account 1 (source_table) and write it to a table in account 2 (destination_table). I created two DynamoDB clients with different credential providers: for the source_table client I'm using an STSAssumeRoleSessionCredentialsProvider with the ARN of a role in account 1; for the destination client I'm using the DefaultAWSCredentialsProviderChain. Roughly, the setup looks like the sketch below.
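A sketch of that client setup (AWS SDK for Java v1; the role ARN, session name, and region are placeholders):
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DualAccountClients {
    public static void main(String[] args) {
        // Client for account 1: assumes a role in the source account.
        AmazonDynamoDB sourceClient = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new STSAssumeRoleSessionCredentialsProvider.Builder(
                        "arn:aws:iam::111111111111:role/source-read-role", "source-session").build())
                .withRegion("us-east-1")
                .build();

        // Client for account 2: uses the task's own credentials (default chain).
        AmazonDynamoDB destinationClient = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                .withRegion("us-east-1")
                .build();

        // sourceClient.getItem(...) / destinationClient.putItem(...) would follow here.
    }
}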
The assume-role bit works and I'm able to read using the source client, but the destination client does not work - it still tries to use the assumed-role credentials when writing to destination_table and fails with an unauthorized error (assumed-role is not authorized to perform PutItem).
I tried using EC2ContainerCredentialsProviderWrapper on the destination client, but I get the same error.
Should this work? Or are the credentials shared under the hood which makes it impossible to have two different AWSCredentialProviders running simultaneously like this?
I noticed this answer, which uses static credentials and apparently works, so I'm at a loss as to why this doesn't.
I figured it out with some help from AWS support. It was a problem with my IAM configuration on the role in account 2. I was misled by the error message, which said 'assumed-role is not authorized to perform PutItem', when in fact my original account 2 role itself was unable to do so.
I am looking for Single Sign-On authentication in a Java client.
Since I am logged in to Windows through AD, the main goal is that I do not have to enter a username and password again. I want Java to use the ticket I received at Windows login. This code is the best I have for the purpose:
LoginContext lc = new LoginContext("com.sun.security.jgss.krb5.initiate", new DialogCallbackHandler());
lc.login();
Subject.doAs(lc.getSubject(), (PrivilegedExceptionAction<Void>) () -> {
System.out.println("This is privileged");
return null;
});
I've set the java.security.krb5.conf and java.security.auth.login.config properties to the corresponding conf files, but a dialog asking for username and password still pops up.
I also tried working with GSSName, but GSSManager.createCredential() also asks for a username and password (probably using a TextCallbackHandler).
I tried to get along with Waffle, but did not get it working. Most examples and explanations are server-side (I only found one example combining server and client side, but I was not able to split it up).
I know there are similar questions (e.g. this one), but I did not get those working without entering a password.
PS: I know that DialogCallbackHandler is deprecated; I use it for test purposes only.
OK, after several tries I found a solution. The problem was not in the code, but in the registry. As stated on this page, since Java 7 you can't access the Windows ticket natively. To change this, you have to set an additional registry key. To do so, go to the registry path
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Kerberos\Parameters
and add the key
Value Name: AllowTgtSessionKey
Value Type: REG_DWORD
Value: 0x01
To fully make this work you will need some additional settings:
The JAAS configuration file
In the JAAS configuration file you have to set up which security modules JAAS should use. The part in front of the braces names your configuration. If you use the GSS libraries you must name it com.sun.security.jgss.krb5.initiate. When you use the LoginContext you just pass the name of the configuration as the first parameter. My jaas.conf looks as follows:
com.sun.security.jgss.krb5.initiate {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache = true;
};
The Kerberos configuration
You will also need a configuration for the Kerberos module. This mainly contains the realm address, but can hold additional information. A minimal working example:
[realms]
YOUR.REALM.COM = {
    kdc = your.realm.com:88
    default_domain = REALM.COM
}
Note that this is case-sensitive!
The System Properties
Finally, you have to set up Java to find these files. You do this either by passing the properties on startup or by calling System.setProperty():
System.setProperty("java.security.krb5.conf", "src/resources/krb5.conf");
System.setProperty("java.security.auth.login.config", "src/resources/jaas.conf");
I'm looking for a simple way of verifying an arbitrary Azure Table connection string that uses a SAS, such as the one below, using the Azure Storage Java SDK:
https://example.table.core.windows.net/example?sig=aaabbbcccdddeeefffggghhh%3D&se=2020-01-01T00%3A00%3A00Z&sv=2015-04-05&tn=example&sp=raud
I tried a bunch of different methods exposed by the CloudTable API, but none of them work:
CloudTable.exists() throws a StorageException, regardless of whether the credentials are valid
getName(), getStorageUri(), getUri(), and other getters - all work locally, regardless of the credentials
getServiceClient().downloadServiceProperties() and getServiceClient().getServiceStats() also throw various exceptions, while getServiceClient().getEndpoint() and getServiceClient().getCredentials() and others always work locally.
Why don't I just query the Table for a row or two? Well, in many cases I need to verify a SAS that grants only write or update permissions (without delete or read permissions), and I do not want to execute a statement that changes something in the table just to check the credentials.
To answer your questions:
CloudTable.exists() throws a StorageException, regardless of whether the credentials are valid
I believe there's a bug in the SDK when using this method with a SAS token. I remember running into the same issue some time back.
getName(), getStorageUri(), getUri(), and other getters - all work locally, regardless of the credentials
These will work because they don't make a network call. They simply use the data already available to them in the various instance variables and return it.
getServiceClient().downloadServiceProperties() and getServiceClient().getServiceStats() also throw various exceptions, while getServiceClient().getEndpoint() and getServiceClient().getCredentials() and others always work locally.
In order for getServiceClient().someMethod() to work using SAS, you would need an Account SAS instead of a Service SAS (which is what you're using right now).
Why don't I just query the Table for a row or two? Well, in many cases I need to verify a SAS that grants only write or update permissions (without delete or read permissions), and I do not want to execute a statement that changes something in the table just to check the credentials.
One possible way to check the validity of a SAS token for write operations is to perform a write operation which you know will fail with an error. For example, you can try to insert an entity which is already there; in that case you should get a Conflict (409) error. Another thing you could try is an optimistic write that specifies a random ETag value, and check for a Precondition Failed (412) error. If you get a 403 or 404 error, that would indicate there's something wrong with your SAS token.
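A rough sketch of the ETag variant using the legacy com.microsoft.azure.storage SDK (the partition/row keys and the stale ETag are made-up probe values; how you map the remaining status codes is up to you):
import java.net.URI;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.DynamicTableEntity;
import com.microsoft.azure.storage.table.TableOperation;

public class SasWriteProbe {
    // Returns true if the SAS-authenticated write request got past authorization.
    public static boolean sasAllowsWrites(String tableUriWithSas) throws Exception {
        CloudTable table = new CloudTable(new URI(tableUriWithSas));

        DynamicTableEntity probe = new DynamicTableEntity();
        probe.setPartitionKey("sas-probe-pk");
        probe.setRowKey("sas-probe-rk");
        // Deliberately stale ETag so a real entity would not actually be overwritten.
        probe.setEtag("W/\"datetime'2000-01-01T00%3A00%3A00.0000000Z'\"");

        try {
            table.execute(TableOperation.replace(probe));
            return true; // very unlikely: the probe entity existed with exactly this ETag
        } catch (StorageException e) {
            int status = e.getHttpStatusCode();
            if (status == 412) {
                return true;  // Precondition Failed: the write was authorized, only the ETag mismatched
            }
            if (status == 403) {
                return false; // the SAS token itself was rejected
            }
            // Other codes (e.g. 400, 404) are ambiguous here and need separate handling.
            throw e;
        }
    }
}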
I'm using version 1.4.3 of the Java client and am attempting to connect to the Couchbase server I have running locally, but I'm getting auth errors. After looking through the code (isn't open source great?) to see how their client library uses these variables across its classes, I've come to the conclusion that if I want to connect to a "bucket", I have to create a user for each "bucket" with the same user name as that bucket. This makes no sense to me. I have to be wrong, aren't I? There has to be another way. What is that way?
For reference, here is what I'm using to create a connection (it's Scala but would look nearly identical in Java):
val cf = new CouchbaseConnectionFactoryBuilder()
.setViewTimeout(opTimeout)
.setViewWorkerSize(workerSize)
.setViewConnsPerNode(conPerNode)
.buildCouchbaseConnection(nodes, bucket, password)
new CouchbaseClient(cf)
which follows directly from their examples.
Their Code
If I look at the code where they connect to the "view" itself, I see the following:
public ViewConnection createViewConnection(
List<InetSocketAddress> addrs) throws IOException {
return new ViewConnection(this, addrs, bucket, pass);
}
which is then passed to a constructor:
public ViewConnection(final CouchbaseConnectionFactory cf,
final List<InetSocketAddress> seedAddrs, final String user,
final String password) //more code...
and that user variable is actually used in HTTP Basic Auth to form the Authorization header. That user variable is, of course, the same as the bucket variable in the CouchbaseConnectionFactory.
You are correct - each bucket is authenticated with the bucket name as the user. However, there aren't any users to 'create' - you're just using whatever (bucket) name and password you set up when you created the bucket in the Cluster UI.
Note that people usually use one bucket per application (don't think bucket == table, think bucket == database) and so you wouldn't typically need more than a couple of buckets for most applications.
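For example, with the 1.4.x client, connecting with the bucket name as the "user" looks roughly like this (node URI, bucket name, and password are placeholders):
import java.net.URI;
import java.util.Arrays;
import java.util.List;
import com.couchbase.client.CouchbaseClient;

public class BucketConnectExample {
    public static void main(String[] args) throws Exception {
        List<URI> nodes = Arrays.asList(URI.create("http://127.0.0.1:8091/pools"));
        // The bucket name doubles as the "user"; the password is whatever was set
        // when the bucket was created in the Cluster UI (often the empty string).
        CouchbaseClient client = new CouchbaseClient(nodes, "myBucket", "myBucketPassword");
        client.shutdown();
    }
}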
I am obtaining a kerberos ticket with the following code:
String client = "com.sun.security.jgss.krb5.initiate";
LoginContext lc = new LoginContext(client, new CallbackHandler() {
    @Override
    public void handle(Callback[] arg0) throws IOException, UnsupportedCallbackException {
        System.out.println("CB: " + arg0);
    }
});
lc.login();
System.out.println("SUBJ: " + lc.getSubject());
This code works fine; I get a subject that shows my user ID. The problem is that I now need to know whether the user belongs to a certain group in AD. Is there a way to do this from here?
I've seen code that gets user groups using LDAP, but it requires logging in with a username/password; I need to do it the SSO way.
You cannot actually do this with the kind of ticket you get at login. The problem is that the Windows PAC (which contains the group membership information) is in the encrypted part of the ticket. Only the domain controller knows how to decrypt that initial ticket.
It is possible to do with a service ticket.
So, you could set up a keytab, use JGSS to authenticate to yourself, and then decrypt the ticket, find the PAC, decode the PAC, and then process the SIDs. I wasn't able to find code for most of that in Java, although it is available in C. Take a look at this for how to decrypt the ticket.
Now, at this point you're talking about writing or finding an NDR decoder, reading all the specs about how the PAC and SIDs are put together, or porting the C code to Java.
My recommendation would be to take a different approach.
Instead, use Kerberos to sign into LDAP. Find an LDAP library that supports Java SASL and you should be able to use a Kerberos ticket to log in.
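For example, with plain JNDI you can bind over SASL GSSAPI using the ticket from a LoginContext like the one in the question (the LDAP URL and the user DN below are placeholders):
import java.security.PrivilegedExceptionAction;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;

public class KerberosLdapLookup {
    public static void main(String[] args) throws Exception {
        LoginContext lc = new LoginContext("com.sun.security.jgss.krb5.initiate");
        lc.login();

        Subject.doAs(lc.getSubject(), (PrivilegedExceptionAction<Void>) () -> {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://dc.example.com:389"); // placeholder domain controller
            env.put(Context.SECURITY_AUTHENTICATION, "GSSAPI");         // Kerberos via SASL

            DirContext ctx = new InitialDirContext(env);
            // Placeholder DN: read the memberOf attribute of the user in question.
            Attributes attrs = ctx.getAttributes("CN=Some User,CN=Users,DC=example,DC=com",
                    new String[] { "memberOf" });
            System.out.println(attrs.get("memberOf"));
            ctx.close();
            return null;
        });
    }
}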
If your application wants to know the groups the user belongs to in order to populate menus and stuff like that, you can just log in as the user.
However, if you're going to decide what access the user has, don't log in as the user to gain access to LDAP. The problem is that with Kerberos, an attacker can cooperate with the user to impersonate the entire infrastructure to your application unless you confirm that your ticket comes from the infrastructure.
That is, because the user knows their password, and because that's the only secret your application knows about, the user can cooperate with someone to pretend to be the LDAP server and claim to have any access they want.
Instead, your application should have its own account to use when accessing LDAP. If you do that, you can just look up the group list.
I do realize this is all kind of complex.