I am working on the Go version of the fabcar smart contract and am implementing a Java SDK client that enrolls an admin, registers a user, and performs query/update operations, based on https://github.com/hyperledger/fabric-samples/tree/master/fabcar/java
I have successfully set up a 3-org, 9-peer blockchain network, and installed, instantiated, and invoked the chaincode on the peers.
However, while implementing the corresponding client, I am only able to query the blockchain database successfully; update transactions fail with a "Could not meet endorsement policy for chaincode mycc" error.
The endorsement policy is "OR ('Org1MSP.member','Org2MSP.member','Org3MSP.member')".
Should the registered user somehow get an Org1/Org2/Org3 member attribute? Any leads would be appreciated!
Like @Ikar Pohorský said, for me this got resolved after I used the correct method name. Also, make sure you delete the 'wallet' folder so that the user is regenerated if your HLF network was recreated.
@Test
public void testMyMethodToBeInvoked() throws Exception {
    deleteDirectory(".\\wallet");
    EnrollAdmin.main(null);
    RegisterUser.main(null);

    // Load a file-system-based wallet for managing identities.
    final Path walletPath = Paths.get("wallet");
    final Wallet wallet = Wallet.createFileSystemWallet(walletPath);

    // Load a connection profile (CCP).
    final Path networkConfigPath = Paths
            .get("C:\\sw\\hlf146-2\\fabric-samples\\first-network\\connection-org1.yaml");

    final Gateway.Builder builder = Gateway.createBuilder();
    builder.identity(wallet, "user1").networkConfig(networkConfigPath).discovery(true);

    // Create a gateway connection.
    try (Gateway gateway = builder.connect()) {
        final Network network = gateway.getNetwork("mychannel");
        final Contract contract = network.getContract("mycc");
        String myJSONString = "{\"a\":\"b\"}";
        byte[] result;

        // The following did NOT work. Control goes directly to 'invoke' when 'submitTransaction'
        // is called, so 'invoke' need not be mentioned here.
        // result = contract.submitTransaction("invoke", myJSONString);

        // The following DID work. In my chaincode (written in Java) I had a method named
        // 'myMethodToBeInvoked'. The chaincode was written similar to
        // https://github.com/hyperledger/fabric-samples/blob/release-1.4/chaincode/chaincode_example02/java/src/main/java/org/hyperledger/fabric/example/SimpleChaincode.java
        result = contract.submitTransaction("myMethodToBeInvoked", myJSONString);
        System.out.println(new String(result));
    }
}
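The deleteDirectory helper is not part of the SDK; a minimal sketch of it (an assumption, since the original helper is not shown) could look like this, using java.nio:

// Recursively deletes the wallet folder so the identities are regenerated on the next run.
private static void deleteDirectory(String path) throws IOException {
    Path root = Paths.get(path);
    if (Files.exists(root)) {
        Files.walk(root)
             .sorted(Comparator.reverseOrder())   // delete children before parents
             .map(Path::toFile)
             .forEach(File::delete);
    }
}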
EDIT: Also, please remember that if your chaincode returns an error response, you can hit this endorsement failure as well. So check that your chaincode itself is working without any issues.
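For illustration, this is roughly what the invoke entry point looks like in a Java chaincode modeled on the SimpleChaincode example linked above (a sketch only, inside a class extending ChaincodeBase; 'myMethodToBeInvoked' is the method name assumed in the test):

@Override
public Response invoke(ChaincodeStub stub) {
    try {
        String func = stub.getFunction();
        List<String> params = stub.getParameters();
        if (func.equals("myMethodToBeInvoked")) {
            // ... do the actual work with params here ...
            return newSuccessResponse();
        }
        return newErrorResponse("Invalid invoke function name: " + func);
    } catch (Throwable e) {
        // Any error returned here fails the proposal response, which the
        // client then sees as an endorsement failure.
        return newErrorResponse(e);
    }
}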
I am trying to set up the Google Cloud Vision API. I defined the application credentials variable through CMD by using set GOOGLE_APPLICATION_CREDENTIALS PathToJSON, but this still does not let me connect to the Google Cloud Vision API for OCR.
I have also tried to set it manually through the Windows UI, but still no luck. I created a project in the Google Cloud console and generated a credential key; when it asked "Are you planning to use this API with App Engine or Compute Engine?", I selected No.
I am currently using Google's boilerplate code:
import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectText {
    public static void main(String args[]) {
        try {
            detectText();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void detectText() throws IOException {
        // TODO(developer): Replace these variables before running the sample.
        String filePath = "C:\\Users\\Programming\\Desktop\\TextDetection\\Capture.PNG";
        detectText(filePath);
    }

    // Detects text in the specified image.
    public static void detectText(String filePath) throws IOException {
        List<AnnotateImageRequest> requests = new ArrayList<>();
        ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));
        Image img = Image.newBuilder().setContent(imgBytes).build();
        Feature feat = Feature.newBuilder().setType(Feature.Type.TEXT_DETECTION).build();
        AnnotateImageRequest request =
                AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
        requests.add(request);

        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests. After completing all of your requests, call
        // the "close" method on the client to safely clean up any remaining background resources.
        try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
            BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
            List<AnnotateImageResponse> responses = response.getResponsesList();

            for (AnnotateImageResponse res : responses) {
                if (res.hasError()) {
                    System.out.format("Error: %s%n", res.getError().getMessage());
                    return;
                }
                // For a full list of available annotations, see http://g.co/cloud/vision/docs
                for (EntityAnnotation annotation : res.getTextAnnotationsList()) {
                    System.out.format("Text: %s%n", annotation.getDescription());
                    System.out.format("Position : %s%n", annotation.getBoundingPoly());
                }
            }
        }
    }

    static void authExplicit(String jsonPath) throws IOException {
        // Not implemented in this sample.
    }
}
I am not using a server or a Google Compute Engine virtual machine.
Can someone please explain what the problem is and how I would go about fixing it?
Stack Trace
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:119)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:91)
at com.google.api.gax.core.GoogleCredentialsProvider.getCredentials(GoogleCredentialsProvider.java:67)
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:142)
at com.google.cloud.vision.v1.stub.GrpcImageAnnotatorStub.create(GrpcImageAnnotatorStub.java:117)
at com.google.cloud.vision.v1.stub.ImageAnnotatorStubSettings.createStub(ImageAnnotatorStubSettings.java:156)
at com.google.cloud.vision.v1.ImageAnnotatorClient.<init>(ImageAnnotatorClient.java:136)
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:117)
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:108)
at DetectText.detectText(DetectText.java:54)
at DetectText.detectText(DetectText.java:36)
at DetectText.main(DetectText.java:25)
Based on your error message, it seems that the GOOGLE_APPLICATION_CREDENTIALS environment variable is not being found.
On one hand, before trying to run the text detection sample code, follow the steps outlined in the documentation in order to rule out that any of them has been skipped.
On the other hand, if you are using an IDE such as IntelliJ or Eclipse, you have to set the GOOGLE_APPLICATION_CREDENTIALS environment variable globally through the Windows System Properties so that the IDE can pick it up. Note that when testing, I had to close and reopen the IDE for the change to take effect; after that, the aforementioned error no longer appeared.
Additionally, there is also a way to specify the location of the JSON file within the code, as shown in this example. However, hard-coding it that way is not advisable; it is best to use the environment variable.
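For reference, a minimal sketch of that in-code approach (an assumption based on the standard client settings API, not necessarily the exact linked example): load the service account JSON explicitly and pass it to the client settings.

import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageAnnotatorSettings;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Collections;

public class ExplicitCredentials {
    // jsonPath is a hypothetical path to the downloaded service account key file.
    static ImageAnnotatorClient createClient(String jsonPath) throws IOException {
        GoogleCredentials credentials = GoogleCredentials
                .fromStream(new FileInputStream(jsonPath))
                .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"));
        ImageAnnotatorSettings settings = ImageAnnotatorSettings.newBuilder()
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build();
        return ImageAnnotatorClient.create(settings);
    }
}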
I'm trying to implement a custom Keycloak Authenticator SPI to authenticate against an external identity provider. The users already exist in the Keycloak store; I only need the custom SPI to connect out and authenticate them.
I'm following section 8.3 of the official guide https://www.keycloak.org/docs/latest/server_development/index.html#_auth_spi_walkthrough, which is very similar to what I need.
The problem I'm running into is that after the authentication flow reaches the "action" method of the custom Authenticator, an exception is thrown from the AuthenticationProcessor class, which after inspection comes from the following check:
// org.keycloak.authentication.AuthenticationProcessor - line 876
if (authenticationSession.getAuthenticatedUser() == null) {
    throw new AuthenticationFlowException(AuthenticationFlowError.UNKNOWN_USER);
}
After seeing this, my idea for solving it was to fetch the user (already verified against the external identity provider) from the Keycloak store and push it into the AuthenticationSession, like this:
// Connect against the external service provider
// and assume "USER_ID" represents an already validated user.
// The AuthenticationFlowContext afc is given as a parameter.
UserFederationManager ufm = afc.getSession().users(); // <-- PROBLEM
UserModel userFound = ufm.getUserById("USER_ID", afc.getRealm());
if (userFound != null) {
    // get a reference to the authentication session
    AuthenticationSessionModel asm = afc.getAuthenticationSession();
    // set the authenticated user on the session
    asm.setAuthenticatedUser(userFound);
    return true;
}
return false;
The problem with the above code is that a java.lang.NoSuchMethodError is thrown for the users() method of the org.keycloak.models.KeycloakSession class, like this:
11:26:32,628 ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-14) Uncaught server error: java.lang.NoSuchMethodError: org.keycloak.models.KeycloakSession.users()Lorg/keycloak/models/UserFederationManager;
Any suggestion that you could make to help me solve this would be greatly appreciated!
It seems the problem was that I was using an org.keycloak.models.UserFederationManager instance instead of an org.keycloak.models.UserProvider instance. UserFederationManager implements UserProvider, and it seems the more general type works better than the more specific one under the injection mechanism Keycloak is using:
// UserFederationManager ufm = afc.getSession().users(); // <-- PROBLEM
UserProvider ufm = afc.getSession().users();              // <-- WORKS
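For completeness, a minimal sketch of the corrected lookup inside the authenticator (same assumed context and "USER_ID" placeholder as in the question):

// afc is the AuthenticationFlowContext passed to the authenticator's action method.
UserProvider users = afc.getSession().users();
UserModel userFound = users.getUserById("USER_ID", afc.getRealm());
if (userFound != null) {
    // attach the externally authenticated user to the authentication session
    afc.getAuthenticationSession().setAuthenticatedUser(userFound);
    return true;
}
return false;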
Even though it works now, both of your suggestions are valid, because my build version is indeed different from the one at runtime; I'll fix that to avoid further bugs.
Thanks for your input, guys!
As Henry stated, it's likely to be a version conflict. I had a similar problem which was solved with this thread's help. It suggests downgrading some dependency versions, but in our case we solved it by changing our server back to Tomcat.
I have created two demo Kerberos clients using the GSS-API.
One in Python 3, the second in Java.
Both clients seem to be broadly equivalent, and both "work" in that I get a service ticket that is accepted by my Java GSS-API Service Principal.
However on testing I noticed that the Python client saves the service ticket in the kerberos credentials cache, whereas the Java client does not seem to save the ticket.
I use "klist" to view the contents of the credential cache.
My clients are running on a Lubuntu 17.04 Virtual Machine, using FreeIPA as the Kerberos environment. I am using OpenJDK 8 u131.
Question 1: Does the Java GSS-API not save service tickets to the credentials cache? Or can I change my code so it does so?
Question 2: Is there any downside to the fact that the service ticket is not saved to the cache?
My assumption is that cached service tickets reduce interaction with the KDC, but comments on "How to save Kerberos Service Ticket using a Windows Java client?" suggest that is not the case, whereas this Microsoft technote says "The client does not need to go back to the KDC each time it wants access to this particular server".
Question 3: The cached service tickets from the python client vanish after some minutes - long before the expiry date. What causes them to vanish?
Python code
#!/usr/bin/python3.5
import gssapi
from io import BytesIO
server_name = 'HTTP/app-srv.acme.com@ACME.COM'
service_name = gssapi.Name(server_name)
client_ctx = gssapi.SecurityContext(name=service_name, usage='initiate')
initial_client_token = client_ctx.step()
Java Code
System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
GSSManager manager = GSSManager.getInstance();
GSSName clientName;
GSSContext context = null;
//try catch removed for brevity
GSSName serverName =
manager.createName("HTTP/app-srv.acme.com#ACME.COM", null);
Oid krb5Oid = new Oid("1.2.840.113554.1.2.2");
//use default credentials
context = manager.createContext(serverName,
krb5Oid,
null,
GSSContext.DEFAULT_LIFETIME);
context.requestMutualAuth(false);
context.requestConf(false);
context.requestInteg(true);
byte[] token = new byte[0];
token = context.initSecContext(token, 0, token.length);
Edit:
While the original question focuses on the use of the Java GSS-API to build a Java Kerberos client, GSS is not a must. I am open to other Kerberos approaches that work in Java. Right now I am experimenting with the Apache Kerby kerb-client.
So far the Java GSS-API seems to have two problems:
1) It uses the credentials cache to get the TGT (OK), but not to cache service tickets (not OK).
2) It cannot access credential caches of type KEYRING (confirmed by behaviour, by debugging the Java runtime security classes, and by comments in that code). For the Lubuntu / FreeIPA combination I am using, KEYRING was the out-of-the-box default. This won't apply to Windows, and may not apply to other Linux Kerberos combinations.
Edit 2:
The question I should have asked is: how do I stop my KDC from being hammered by repeated SGT requests, given that Java GSS is not using the credentials cache?
I leave my original answer in place at the bottom, because it largely focuses on the original question.
After another round of deep debugging and testing, I have found an acceptable solution to the root problem.
Using the Java GSS API with JAAS, as opposed to "pure" GSS without JAAS as in my original solution, makes a big difference!
Yes, existing service tickets (SGTs) that may be in the credentials cache are still not loaded,
nor are any newly acquired SGTs written back to the cache; however, the KDC is not constantly hammered (the real problem).
Both pure GSS, and GSS with JAAS use a client principal subject. The subject has an in-memory privateCredentials set,
which is used to store TGTs and SGTs.
The key difference is:
"pure GSS": the subject + privateCredentials is created within the GSSContext, and lives only as long as the GSSContext lives.
GSS with JAAS: the subject is created by JAAS, outside the GSSContext, and thus can live for the life of the application,
spanning many GSSContexts during the life of the application.
The first GSSContext established will query the subject's privateCredentials for a SGT, not find one,
then request a SGT from the KDC.
The SGT is added to the subject's privateCredentials, and as the subject lives longer than the GSSContext,
it is available, as is the SGT, when following GSSContexts are created. These will find the SGT in the subject's privateCredentials, and do not need to hit the KDC for a new SGT.
So seen in the light of my particular Java Fat Client, opened once and likely to run for hours, everything is ok.
The first GSSContext created will hit the KDC for a SGT which will then be used by all following GSSContexts created until the client is closed.
The credentials cache is not being used, but that does not hurt.
For a much shorter-lived client, reopened many times, and perhaps in parallel, the use / non-use of the credentials cache might be a more serious issue.
private void initJAASandGSS() {
    LoginContext loginContext = null;
    TextCallbackHandler cbHandler = new TextCallbackHandler();
    try {
        // "wSOXClientGSSJAASLogin" refers to an entry in the JAAS login configuration file.
        loginContext = new LoginContext("wSOXClientGSSJAASLogin", cbHandler);
        loginContext.login();
        mySubject = loginContext.getSubject();
    } catch (LoginException e) {
        e.printStackTrace();
    }

    gssManager = GSSManager.getInstance();
    try {
        //TODO: LAMB: This name should be got from config / built from config / serviceIdentifier
        serverName = gssManager.createName("HTTP/app-srv.acme.com@ACME.COM", null);
        // krb5Oid is a field, since it is reused in getGSSwJAASServiceToken() below.
        krb5Oid = new Oid("1.2.840.113554.1.2.2");
    } catch (GSSException e) {
        e.printStackTrace();
    }
}
private String getGSSwJAASServiceToken() {
    byte[] token = null;
    String encodedToken = null;

    token = Subject.doAs(mySubject, new PrivilegedAction<byte[]>() {
        public byte[] run() {
            try {
                System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");

                GSSContext context = gssManager.createContext(serverName,
                        krb5Oid,
                        null,
                        GSSContext.DEFAULT_LIFETIME);
                context.requestMutualAuth(false);
                context.requestConf(false);
                context.requestInteg(true);

                byte[] ret = new byte[0];
                ret = context.initSecContext(ret, 0, ret.length);
                context.dispose();
                return ret;
            } catch (Exception e) {
                Log.log(Log.ERROR, e);
                throw new otms.util.OTMSRuntimeException("Start Client (Kerberos) failed, cause: " + e.getMessage());
            }
        }
    });

    encodedToken = Base64.getEncoder().encodeToString(token);
    return encodedToken;
}
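For context, "wSOXClientGSSJAASLogin" above names an entry in the JAAS login configuration file supplied via -Djava.security.auth.login.config. A minimal sketch of such an entry (the entry name and options are assumptions; adjust to your environment):

wSOXClientGSSJAASLogin {
    com.sun.security.auth.module.Krb5LoginModule required
        useTicketCache=true
        doNotPrompt=true;
};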
End Edit 2: Original answer below:
Question 1: Does the Java GSS-API not save service tickets to the credentials cache? Or can I change my code so it does so?
Edit: Root Cause Analysis.
After many hours debugging the sun.security.* classes, I now understand what GSS and Java Security code is doing / not doing - at least in Java 8 u 131.
In this example we have a credential cache, of a type Java GSS can access, containing a valid Ticket Granting Ticket (TGT) and a valid Service Ticket (SGT).
1) When the client principal Subject is created, the TGT is loaded from the cache (Credentials.acquireTGTFromCache()), and stored in the privateCredentials set of the Subject. --> (OK)
Only the TGT is loaded, SGTs are NOT loaded and saved to the Subject privateCredentials. -->(NOT OK)
2) Later, deep in the GSSContext.initSecContext() process, the security code actually tries to retrieve a Service Ticket from the privateCredentials of the Subject. The relevant code is Krb5Context.initSecContext() / KrbUtils.getTicket() / SubjectComber.find()/findAux(). However as SGTs were never loaded in step 1) an SGT will not be found! Therefore a new SGT is requested from the KDC and used.
This is repeated for each Service request.
Just for fun, and strictly as a proof-of-concept hack, I added a few lines of code between the login, and the initSecContext() to parse the credentials cache, extract the credentials, convert to Krb Credentials, and add them to the Subject’s private credentials.
This done, in step 2) the existing SGT is found and used. No new SGT is requested from the KDC.
I will not post the code for this hack as it calls sun internal classes that we should not be calling, and I don’t wish to inspire anybody else to do so. Nor do I intend to use this hack as a solution.
--> The root cause is not that the service tickets are not SAVED to the cache, but rather:
a) that SGTs are not LOADED from the credentials cache into the Subject of the client principal,
and
b) that there is no public API or configuration setting to do so.
This affects GSS-API both with and without JAAS.
So where does this leave me?
i) Use the Java GSS-API / GSS-API with JAAS "as is", with each SGT request hitting the KDC --> not good.
ii) As suggested by Samson in the comments below, use the Java GSS-API only for the initial login of the application, and then use an alternative security mechanism (a kind of self-built Kerberos-light) based on tokens or cookies for subsequent calls.
iii) Consider alternatives to GSS-API such as Apache Kerby kerb-client. This has implications outside the scope of this answer, and may well prove to be jumping from the proverbial frying pan to the fire.
I have submitted a Java Feature Request to Oracle, suggesting that SGTs should be retrieved from the cache and stored in the Subject credentials (as already the case for TGTs).
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8180144
Question 2: Is there any downside to the fact that the service ticket is not saved to the cache?
Using the credentials cache for Service Tickets reduces interaction between the client and the KDC. The corollary to this is that where service tickets are not cached, each request will require interaction with the KDC, which could lead to the KDC being hammered.
I have a requirement to add group members to an IBM Domino group through Java code. I am using Notes.jar to connect to IBM Domino v9.0, and my Java code runs on a different machine than the Domino server.
From the Domino documentation I found that the AdministrationProcess class needs to be used to add a member to a group. But when I try to create the AdministrationProcess object by calling session.createAdministrationProcess("IBMDominoServer"), I get the error "Restricted operation on a server".
My test code is as follows
import lotus.domino.AdministrationProcess;
import lotus.domino.NotesException;
import lotus.domino.NotesFactory;
import lotus.domino.Session;

public class LotusDomino {
    public static void main(String args[]) throws Exception {
        String[] argv = {"192.168.2.111", "Administrator", "<password>"};
        deleteUser(argv[0], argv[1], argv[2]);
    }

    private static void deleteUser(String host, String userName, String password) throws Exception {
        Session s = NotesFactory.createSession(host, userName, password);
        try {
            AdministrationProcess process = s.createAdministrationProcess("IBMDominoServer.xanadufinancials.com");
        } catch (NotesException e) {
            // prints: exception --- 4183:Restricted operation on a server:null
            System.err.println("exception --- " + e.id + ":" + e.text + ":" + e.internal);
        }
    }
}
The code shows the same error irrespective of what I pass in as the server name, so it shouldn't be a code issue. I searched a bit and found that the Administrator should have Editor access on admin4.nsf; I verified that this access is present.
Please let me know what the issue could be. Thanks in advance.
Using the Administration Process is one way to add a user to a group, and it is the safest way when you have no knowledge of how directory services on the Domino server have been configured. But in most basic configurations, adding a user to a group is very simple: you open the names.nsf database, open the Groups view, locate the document for the group, and add the name to the list stored in the Members item. The one catch is that if the Members list gets too long, you may have to write code capable of dividing it into subgroups (and/or code to detect the pattern of existing subgroups and add to them instead).
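A minimal sketch of that direct approach with Notes.jar (the server, group, and member names are placeholders, and error handling and subgroup splitting are omitted):

import java.util.Vector;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.Session;
import lotus.domino.View;

public class AddGroupMember {
    @SuppressWarnings("unchecked")
    static void addMember(Session s, String member) throws Exception {
        // Open the Domino Directory on the server and locate the group document.
        Database names = s.getDatabase("IBMDominoServer/Xanadu", "names.nsf");
        View groups = names.getView("Groups");
        Document groupDoc = groups.getDocumentByKey("MyGroup", true);
        if (groupDoc != null) {
            Vector<Object> members = groupDoc.getItemValue("Members");
            if (!members.contains(member)) {
                members.add(member);
                groupDoc.replaceItemValue("Members", members);
                groupDoc.save(true, false);
            }
        }
    }
}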
Regarding the NotesAdministrationProcess class: if we can trust that the error message means what it says, then your problem is that the user ID you are using does not have permission to run restricted operations on the server. Here is a link to info about server configuration for agent permissions. If you're using NCSO.jar (see my question above), there may be a separate configuration for users permitted to perform restricted operations over IIOP, but I'm not sure, and my server is down at the moment so I can't check.
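Once the permissions are sorted out, the AdministrationProcess route might look roughly like the sketch below (the addGroupMembers call and all names used here are assumptions; check the Domino Java API documentation for your version):

import java.util.Vector;
import lotus.domino.AdministrationProcess;
import lotus.domino.Session;

public class AddViaAdminP {
    static void addMember(Session s, String member) throws Exception {
        // Requires the connecting user to be allowed to run restricted operations on the server.
        AdministrationProcess ap = s.createAdministrationProcess("IBMDominoServer/Xanadu");
        Vector<String> members = new Vector<String>();
        members.add(member);
        // Queues an AdminP request to add the member to the group ("MyGroup" is a placeholder).
        ap.addGroupMembers("MyGroup", members);
    }
}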
I'm working on the OAuth 1 Sparklr and Tonr sample apps and I'm trying to create a two-legged call. Hypothetically, the only thing you're supposed to do is change the consumer details service from (I'm omitting the iGoogle consumer info to simplify):
<oauth:consumer-details-service id="consumerDetails">
<oauth:consumer name="Tonr.com" key="tonr-consumer-key" secret="SHHHHH!!!!!!!!!!"
resourceName="Your Photos" resourceDescription="Your photos that you have uploaded to sparklr.com."/>
</oauth:consumer-details-service>
to:
<oauth:consumer-details-service id="consumerDetails">
<oauth:consumer name="Tonr.com" key="tonr-consumer-key" secret="SHHHHH!!!!!!!!!!"
resourceName="Your Photos" resourceDescription="Your photos that you have uploaded to sparklr.com."
requiredToObtainAuthenticatedToken="false" authorities="ROLE_CONSUMER"/>
</oauth:consumer-details-service>
That is, adding requiredToObtainAuthenticatedToken and authorities, which causes the consumer to be trusted and therefore the whole validation process to be skipped.
However, I still get the login and confirmation screens from the Sparklr app. The current state of the official documentation is pretty precarious, considering that the project is being absorbed by Spring, so it is filled with broken links and ambiguous instructions. As far as I understand, no changes are required in the client code, so I'm basically running out of ideas. I have even found people claiming that Spring OAuth clients don't support 2-legged access (which I find hard to believe).
The only way I have found to do it was by creating my own ConsumerSupport:
private OAuthConsumerSupport createConsumerSupport() {
    CoreOAuthConsumerSupport consumerSupport = new CoreOAuthConsumerSupport();
    consumerSupport.setStreamHandlerFactory(new DefaultOAuthURLStreamHandlerFactory());
    consumerSupport.setProtectedResourceDetailsService(new ProtectedResourceDetailsService() {
        public ProtectedResourceDetails loadProtectedResourceDetailsById(String id)
                throws IllegalArgumentException {
            SignatureSecret secret = new SharedConsumerSecret(CONSUMER_SECRET);

            BaseProtectedResourceDetails result = new BaseProtectedResourceDetails();
            result.setConsumerKey(CONSUMER_KEY);
            result.setSharedSecret(secret);
            result.setSignatureMethod(SIGNATURE_METHOD);
            result.setUse10a(true);
            result.setRequestTokenURL(SERVER_URL_OAUTH_REQUEST);
            result.setAccessTokenURL(SERVER_URL_OAUTH_ACCESS);
            return result;
        }
    });
    return consumerSupport;
}
and then reading the protected resource:
consumerSupport.readProtectedResource(url, accessToken, "GET");
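For reference, a rough sketch of how the pieces above can be wired together for the two-legged call (a fragment; the URL is a placeholder, and using an empty unauthenticated OAuthConsumerToken is my assumption, matching the trusted-consumer setup above):

OAuthConsumerSupport consumerSupport = createConsumerSupport();

// A two-legged call has no user authorization step, so the token carries no
// user-specific value; the custom details service above ignores the resource id.
OAuthConsumerToken accessToken = new OAuthConsumerToken();

URL url = new URL("http://localhost:8080/sparklr/photos?format=json"); // placeholder URL
InputStream response = consumerSupport.readProtectedResource(url, accessToken, "GET");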
Has someone actually managed to make this work without boilerplate code?