Hello, I have created an RDS instance on AWS and created a policy with this permission, based on this link:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
]
}
]
}
I have a regular database user defined with a specific password. I tried to log in with that user, but instead of the password I tried to use an auth token, following this guide:
private static Properties setMySqlConnectionProperties() {
    Properties mysqlConnectionProperties = new Properties();
    mysqlConnectionProperties.setProperty("verifyServerCertificate", "true");
    mysqlConnectionProperties.setProperty("useSSL", "true");
    mysqlConnectionProperties.setProperty("user", DB_USER);
    // DB_REGION, DB_HOSTNAME and DB_PORT are assumed class constants alongside DB_USER
    mysqlConnectionProperties.setProperty("password", generateAuthToken(DB_REGION, DB_HOSTNAME, DB_PORT, DB_USER));
    return mysqlConnectionProperties;
}
public static String generateAuthToken(String region, String hostName, int port, String username) {
RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
.credentials(new DefaultAWSCredentialsProviderChain())
.region(region)
.build();
String authToken = generator.getAuthToken(
GetIamAuthTokenRequest.builder()
.hostname(hostName)
.port(port)
.userName(username)
.build());
return authToken;
}
In my case I'm using PostgreSQL, and it results in this error:
"FATAL: password authentication failed for user \"root\"","error.stack_trace":"org.postgresql.util.PSQLException: FATAL: password authentication failed for user \"root\"
My root user is supposed to be enabled for IAM authentication; what can I validate in order to fix this?
Below you can see from AWS that my policy is defined.
First of all, I had a bug: I used the DB name instead of the DbiResourceId.
This is the expected format:
arn:aws:rds-db:region:account-id:dbuser:DbiResourceId/db-user-name
And here is the Terraform code:
data "aws_iam_policy_document" "policy_fooweb_job" {
statement {
actions = [
"rds-db:connect"
]
effect = "Allow"
resources = [
"arn:aws:rds-db:${var.region}:${data.aws_caller_identity.current.account_id}:dbuser:${data.aws_db_instance.database.resource_id}/someUser"
]
}
}
## get the db instance
data "aws_db_instance" "database" {
db_instance_identifier = "company-oltp1"
}
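Beyond the ARN format, two PostgreSQL-specific points are worth validating (a sketch under assumptions, not something from the original post): the database role must have been granted rds_iam, and the JDBC connection must use SSL with the generated token as the password. A minimal sketch reusing the generateAuthToken method above, assuming a recent PostgreSQL JDBC driver; the host, port and user name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

// Hypothetical helper; host, port and user name are placeholders.
static Connection connectWithIamToken() throws SQLException {
    // Prerequisite, run once in psql as the master user (not shown in the post):
    //   CREATE USER db_user WITH LOGIN;  GRANT rds_iam TO db_user;
    Properties props = new Properties();
    props.setProperty("user", "db_user"); // must match the db-user-name in the policy ARN
    props.setProperty("password",
            generateAuthToken("us-east-2", "mydb.abc123.us-east-2.rds.amazonaws.com", 5432, "db_user"));
    props.setProperty("sslmode", "require"); // IAM authentication only works over SSL
    return DriverManager.getConnection(
            "jdbc:postgresql://mydb.abc123.us-east-2.rds.amazonaws.com:5432/postgres", props);
}

Also note that the auth token is only valid for 15 minutes, so generate a fresh one per connection rather than caching it.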
I am new to using Swagger Codegen, so I assume I am doing something completely wrong. I have a remote swagger.json that I am using to generate a Java client. Everything works and looks good, but the example usages in the readme are not referencing any API key.
I don't have access to the remote API that I am using; I'm just trying to create a nice Java SDK for interfacing with it.
This is what I am using to generate the code:
java -jar C:/Development/codegen/swagger-codegen-cli.jar generate
-a "client_id:XXXXXXXXXX,client_secret:YYYYYYYYYYYYY"
-i https://BLAH/swagger/v1/swagger.json -l java
-o C:/Development/workspace/JavaClient
-c C:/Development/workspace/JavaClient/java-genconfig.json
Then the example that gets generated in the readme looks something like this:
public static void main(String[] args) {
BusinessGroupApi apiInstance = new BusinessGroupApi();
Integer businessGroupId = 56; // Integer | The ID of the business group to fetch.
String apiVersion = "1.0"; // String | The requested API version
try {
BusinessGroup result = apiInstance.getBusinessGroupAsync(businessGroupId, apiVersion);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling BusinessGroupApi#getBusinessGroupAsync");
e.printStackTrace();
}
}
Side Note
When I try running the example, I get an error saying
Failed to connect to localhost/0:0:0:0:0:0:0:1:443
Is there some setting that I need to set so it looks for the service at a remote location instead of locally?
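(For what it's worth, the generated Java client normally exposes a base-path setter; the following is a sketch, assuming the default okhttp-gson client, with the URL as a placeholder taken from the host/basePath values further down:)

ApiClient defaultClient = Configuration.getDefaultApiClient();
// point the generated client at the remote API instead of the default localhost base path
defaultClient.setBasePath("https://apitest.removed.com/removed/api");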
EDIT
I realized that I needed to modify their swagger as they might not have everything fleshed out.
Swagger Codegen Version = 2.4.5
swagger.json (paths and definitions excluded). I also downloaded theirs locally and added some more information to make it generate more. I added host, basePath, schemes, consumes, produces, security and securityDefinitions:
{
"swagger": "2.0",
"info": {
"version": "1.0",
"title": "removed"
},
"host": "apitest.removed.com",
"basePath": "/removed/api",
"schemes": [
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"security": [
{
"clientId": []
},
{
"clientSecret": []
}
],
"securityDefinitions": {
"clientId": {
"type": "apiKey",
"in": "header",
"name": "client_id"
},
"clientSecret": {
"type": "apiKey",
"in": "header",
"name": "client_secret"
}
}
}
This actually updated the readme to look the way I would expect it to:
ApiClient defaultClient = Configuration.getDefaultApiClient();
// Configure API key authorization: client_id
ApiKeyAuth client_id = (ApiKeyAuth) defaultClient.getAuthentication("client_id");
client_id.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//client_id.setApiKeyPrefix("Token");
// Configure API key authorization: client_secret
ApiKeyAuth client_secret = (ApiKeyAuth) defaultClient.getAuthentication("client_secret");
client_secret.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//client_secret.setApiKeyPrefix("Token");
BusinessGroupApi apiInstance = new BusinessGroupApi();
Integer businessGroupId = 56; // Integer | The ID of the business group to fetch.
String apiVersion = "1.0"; // String | The requested API version
try {
BusinessGroup result = apiInstance.getBusinessGroupAsync(businessGroupId, apiVersion);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling BusinessGroupApi#getBusinessGroupAsync");
e.printStackTrace();
}
Yet when I plug in my client_id and client_secret and try to run it, I get an Exception in thread "main" java.lang.NullPointerException
at removed.Tester.main(Tester.java:18). I think I might not have the security definitions set up correctly.
It's not OAuth, but they require a client_id and client_secret to be passed as header parameters.
EDIT 2
Exception when calling BusinessGroupApi#getBusinessGroupAsync
package.client.ApiException:
at package.client.ApiClient.handleResponse(ApiClient.java:927)
at package.client.ApiClient.execute(ApiClient.java:843)
at package.client.api.BusinessGroupApi.getBusinessGroupAsyncWithHttpInfo(BusinessGroupApi.java:148)
at package.client.api.BusinessGroupApi.getBusinessGroupAsync(BusinessGroupApi.java:133)
at Tester.main(Tester.java:30)
Line 30 -> BusinessGroup result = apiInstance.getBusinessGroupAsync(businessGroupId, apiVersion);
EDIT 3
I enabled debugging with
defaultClient.setDebugging(true);
It looks like it's working, but for some reason it doesn't like my client_id and client_secret, despite them both being the ones I used in Postman.
--> GET <removed>/api/api/businessGroups/56?api-version=1.0 HTTP/1.1
Accept: application/json
client_secret: <removed>
client_id: <removed>
User-Agent: Swagger-Codegen/1.0-SNAPSHOT/java
--> END GET
<-- HTTP/1.1 403 (393ms)
Content-Length: 80
Content-Type: application/json
Date: Mon, 10 Jun 2019 20:05:07 GMT
QL-LB-Appliance: <removed>
QL-LB-Pool: <removed>
QL-LB-Server: <removed>
OkHttp-Sent-Millis: <removed>
OkHttp-Received-Millis: <removed>
{ "error": "invalid_client", "description": "wrong client_id or client_secret" }
<-- END HTTP (80-byte body)
Exception when calling BusinessGroupApi#getBusinessGroupAsync
<removed>.client.ApiException:
at <removed>.client.ApiClient.handleResponse(ApiClient.java:927)
at <removed>.client.ApiClient.execute(ApiClient.java:843)
at <removed>.client.api.BusinessGroupApi.getBusinessGroupAsyncWithHttpInfo(BusinessGroupApi.java:148)
at <removed>.client.api.BusinessGroupApi.getBusinessGroupAsync(BusinessGroupApi.java:133)
at Tester.main(Tester.java:32)
I am using the Android AWS dependency com.amazonaws:aws-android-sdk-s3:2.6.+.
While uploading an image I get the error below:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
Request ID: XXXXXXXXXXX), S3 Extended Request ID:XXXXXXXXXXXX
Here is the code for uploading the image:
private void beginUpload(String filePath, final String mediaCaption, Message message,
                         boolean isThumb, final UploadFileToStorageCompletionListener listener) {
    getLogger().log(Strings.TAG, "########## 3: " + filePath);
    // construct a bucket path
    final String fullBucketPath = constructBucketPath(message.getMediaType(), message.getId(), isThumb);
    File file = new File(filePath);
    mObserver = mTransferUtility.upload(fullBucketPath, mediaCaption, file);
    mObserver.setTransferListener(new TransferListener() {
        @Override
        public void onStateChanged(int id, TransferState state) {
            getLogger().log(Strings.TAG, " onStateChanged() " + state);
            if (state.equals(TransferState.COMPLETED)) {
                listener.onUploadSuccess(fullBucketPath);
            }
        }

        @Override
        public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
            getLogger().log(Strings.TAG, "onProgressChanged() " + bytesCurrent + "/" + bytesTotal);
            dismissProgressDialog();
        }

        @Override
        public void onError(int id, Exception ex) {
            listener.onDatabaseError(new FirebaseFailure(ex));
            getLogger().log(Strings.TAG, "onError() " + ex.getMessage());
        }
    });
}
First, you need to check the permissions on the S3 bucket: go to the bucket policy and check the JSON object, which holds the permissions for PUT, GET and POST.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::{FILE NAME}/*"
}
]
}
Try the above permissions.
You also need to check whether the user [Access Key & Secret Key] in the current configuration you are using has permission to use S3. You can check the details, or go to IAM to change the permissions; for details regarding IAM visit this.
For a start, try with S3 full access.
Hope this helps.
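For reference, a minimal identity-based policy for that IAM user might look like the sketch below (the bucket name is a placeholder; adjust the actions to what the app actually needs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}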
I update my data by passing an object to my service (I'm using AngularJS and Spring).
The data is user details.
The thing is, I can't update other info without changing/typing the password again.
To get a clearer picture please check this photo.
I update them separately, but when I edit the personal details the password being saved is the already-encrypted password from the database, which the setter then encodes again.
My Java getter and setter for the password are:
public String getPassword() {
return password;
}
public void setPassword(String password) {
BCryptPasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
this.password = passwordEncoder.encode(password);
}
and the object I pass is:
var profile = {
"id": $scope.userData.id,
"firstName": $scope.first,
"middleName": $scope.mid,
"lastName": $scope.last,
"emailAddress": $scope.mail,
"bday": $scope.bday,
"contactNo": $scope.num,
"address":$scope.add,
"gender":$scope.gender,
"username": $scope.userData.username,
"password": $scope.userData.password,
"role": $scope.userData.role
}
where $scope.userData is the current user's data and the other scopes are user input.
Is there a way to update my table without touching the password?
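One common fix, sketched below under assumptions (the entity and repository names are hypothetical; this is not the poster's code): keep the entity setter a plain assignment and encode only in the service layer, and only when a new raw password was actually typed.

// Entity: plain assignment, no encoding here
public void setPassword(String password) {
    this.password = password;
}

// Service layer (hypothetical names), when updating profile details:
public void updateProfile(User incoming) {
    User existing = userRepository.findOne(incoming.getId());
    String submitted = incoming.getPassword();
    if (submitted == null || submitted.isEmpty() || submitted.equals(existing.getPassword())) {
        // nothing new was typed: keep the stored (already encoded) hash untouched
        incoming.setPassword(existing.getPassword());
    } else {
        // a new raw password was typed: encode it once, here
        incoming.setPassword(new BCryptPasswordEncoder().encode(submitted));
    }
    userRepository.save(incoming);
}

Alternatively, simply omit the password field from the profile object Angular sends and have the server ignore null passwords.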
I'm trying to check user permissions from a Keycloak server via the Keycloak authz client, but I'm failing constantly; by now I'm not sure whether I have some misconceptions about the process.
AuthzClient authzClient = AuthzClient.create();
String eat = authzClient.obtainAccessToken("tim", "test123").getToken();
AuthorizationResource resource = authzClient.authorization(eat);
PermissionRequest request = new PermissionRequest();
request.setResourceSetName("testresource");
String ticket = authzClient.protection().permission().forResource(request).getTicket();
AuthorizationResponse authResponse = resource.authorize(new AuthorizationRequest(ticket));
System.out.println(authResponse.getRpt());
The last call, authResponse.getRpt(), fails with a 403 Forbidden.
But the following settings in the admin console evaluate to Permit:
(screenshot: Keycloak evaluation settings)
The client config is:
{
"realm": "testrealm",
"auth-server-url": "http://localhost:8080/auth",
"ssl-required": "external",
"resource": "tv",
"credentials": {
"secret": "d0c436f7-ed19-483f-ac84-e3b73b6354f0"
},
"use-resource-role-mappings": true
}
The following code:
AuthzClient authzClient = AuthzClient.create();
String eat = authzClient.obtainAccessToken("tim", "test123").getToken();
EntitlementResponse response = authzClient.entitlement(eat).getAll("tv");
String rpt = response.getRpt();
TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt);
if (requestingPartyToken.getActive()) {
for (Permission granted : requestingPartyToken.getPermissions()) {
System.out.println(granted.getResourceSetId()+" "+granted.getResourceSetName()+" "+granted.getScopes());
}
}
This just gives me the "Default Resource":
7d0f10d6-6f65-4866-816b-3dc5772fc465 Default Resource []
But even when i put this Default Resource in the first code snippet
...
PermissionRequest request = new PermissionRequest();
request.setResourceSetName("Default Resource");
...
it gives me a 403. Where am I wrong?
Kind regards
Keycloak Server is 3.2.1.Final.
keycloak-authz-client is 3.2.0.Final.
Minutes after posting, I found the problem. Sorry. I had to perform an EntitlementRequest:
AuthzClient authzClient = AuthzClient.create();
String eat = authzClient.obtainAccessToken("tim", "test123").getToken();
PermissionRequest request = new PermissionRequest();
request.setResourceSetName("testresource");
EntitlementRequest entitlementRequest = new EntitlementRequest();
entitlementRequest.addPermission(request);
EntitlementResponse entitlementResponse = authzClient.entitlement(eat).get("tv", entitlementRequest);
String rpt = entitlementResponse.getRpt();
TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt);
if (requestingPartyToken.getActive()) {
for (Permission granted : requestingPartyToken.getPermissions()) {
System.out.println(granted.getResourceSetId()+" "+granted.getResourceSetName()+" "+granted.getScopes());
}
}
outputs:
27b3d014-b75a-4f52-a97f-dd01b923d2ef testresource []
Kind regards
Does anyone have an example of using Apache Qpid within a standalone JUnit test?
Ideally I want to be able to create a queue on the fly to which I can put/get messages within my test.
I'm not testing Qpid itself in my test (I'll use integration tests for that), but it would be very useful to be able to test message-handling methods without having to mock out a load of services.
Here is the setup method I use for Qpid 0.30 (I use this in a Spock test, but it should be portable to Java/JUnit with no problems). It supports SSL connections and the HTTP management plugin, and uses only in-memory startup; startup time is sub-second. Configuration for Qpid is awkward compared to using ActiveMQ for the same purpose, but Qpid is AMQP compliant and allows for smooth, vendor-neutral testing of AMQP clients (obviously the use of exchanges cannot mimic RabbitMQ's implementation, but for basic purposes it is sufficient).
First I created a minimal test-config.json which I put in the resources folder:
{
"name": "${broker.name}",
"modelVersion": "2.0",
"defaultVirtualHost" : "default",
"authenticationproviders" : [ {
"name" : "passwordFile",
"type" : "PlainPasswordFile",
"path" : "${qpid.home_dir}${file.separator}etc${file.separator}passwd",
"preferencesproviders" : [{
"name": "fileSystemPreferences",
"type": "FileSystemPreferences",
"path" : "${qpid.work_dir}${file.separator}user.preferences.json"
}]
} ],
"ports" : [ {
"name" : "AMQP",
"port" : "${qpid.amqp_port}",
"authenticationProvider" : "passwordFile",
"keyStore" : "default",
"protocols": ["AMQP_0_10", "AMQP_0_8", "AMQP_0_9", "AMQP_0_9_1" ],
"transports" : [ "SSL" ]
}, {
"name" : "HTTP",
"port" : "${qpid.http_port}",
"authenticationProvider" : "passwordFile",
"protocols" : [ "HTTP" ]
}],
"virtualhostnodes" : [ {
"name" : "default",
"type" : "JSON",
"virtualHostInitialConfiguration" : "{ \"type\" : \"Memory\" }"
} ],
"plugins" : [ {
"type" : "MANAGEMENT-HTTP",
"name" : "httpManagement"
}],
"keystores" : [ {
"name" : "default",
"password" : "password",
"path": "${qpid.home_dir}${file.separator}keystore.jks"
}]
}
I also needed to create a keystore.jks file for localhost, because the Qpid broker and the RabbitMQ client do not like to communicate over an unencrypted channel. I also added a file called "passwd" in "integTest/resources/etc" with this content:
guest:password
Here is the code from the unit test setup:
class level variables:
def tmpFolder = Files.createTempDir()
Broker broker
def amqpPort = PortFinder.findFreePort()
def httpPort = PortFinder.findFreePort()
def qpidHomeDir = 'src/integTest/resources/'
def configFileName = "/test-config.json"
code for the setup() method:
def setup() {
broker = new Broker();
def brokerOptions = new BrokerOptions()
File file = new File(qpidHomeDir)
String homePath = file.getAbsolutePath();
log.info(' qpid home dir=' + homePath)
log.info(' qpid work dir=' + tmpFolder.absolutePath)
brokerOptions.setConfigProperty('qpid.work_dir', tmpFolder.absolutePath);
brokerOptions.setConfigProperty('qpid.amqp_port',"${amqpPort}")
brokerOptions.setConfigProperty('qpid.http_port', "${httpPort}")
brokerOptions.setConfigProperty('qpid.home_dir', homePath);
brokerOptions.setInitialConfigurationLocation(homePath + configFileName)
broker.startup(brokerOptions)
log.info('broker started')
}
code for cleanup()
broker.shutdown()
To make an AMQP connection from a RabbitMQ client:
ConnectionFactory factory = new ConnectionFactory();
factory.setUri("amqp://guest:password@localhost:${amqpPort}");
factory.useSslProtocol()
log.info('about to make connection')
def connection = factory.newConnection();
//get a channel for sending the "kickoff" message
def channel = connection.createChannel();
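From there, exercising a queue inside the test is straightforward; a small sketch (the queue name and payload are arbitrary, and GetResponse comes from com.rabbitmq.client):

// declare a transient, auto-delete queue, publish one message, and read it back
channel.queueDeclare("test.queue", false, false, true, null);
channel.basicPublish("", "test.queue", null, "hello from the test".getBytes());
GetResponse response = channel.basicGet("test.queue", true);
assert "hello from the test".equals(new String(response.getBody()));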
The Qpid project has a number of tests that use an embedded broker. Whilst we use a base class to handle startup/shutdown, you could do the following to integrate a broker within your tests:
public void setUp()
{
int port=1;
// Config is actually a Configuration File App Registry object, or Configuration Application Registry.
ApplicationRegistry.initialise(config, port);
TransportConnection.createVMBroker(port);
}
public void test()
{...}
public void tearDown()
{
TransportConnection.killVMBroker(port);
ApplicationRegistry.remove(port);
}
Then for the connection you need to specify the connectionURL for the broker, i.e. brokerlist='vm://1'.
My solution, on qpid-broker 6.1.1: add the following to pom.xml:
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>qpid-broker</artifactId>
<version>6.1.1</version>
<scope>test</scope>
</dependency>
The Qpid config file:
{
"name" : "${broker.name}",
"modelVersion" : "6.1",
"defaultVirtualHost" : "default",
"authenticationproviders" : [ {
"name" : "anonymous",
"type" : "Anonymous"
} ],
"ports" : [ {
"name" : "AMQP",
"port" : "${qpid.amqp_port}",
"authenticationProvider" : "anonymous",
"virtualhostaliases" : [ {
"name" : "defaultAlias",
"type" : "defaultAlias"
} ]
} ],
"virtualhostnodes" : [ {
"name" : "default",
"type" : "JSON",
"defaultVirtualHostNode" : "true",
"virtualHostInitialConfiguration" : "{ \"type\" : \"Memory\" }"
} ]
}
Code to start the Qpid server:
Broker broker = new Broker();
BrokerOptions brokerOptions = new BrokerOptions();
// I use a fixed port number
brokerOptions.setConfigProperty("qpid.amqp_port", "20179");
brokerOptions.setConfigurationStoreType("Memory");
// work_dir for Qpid's logs, configs and persisted data
System.setProperty("qpid.work_dir", "/tmp/qpidworktmp");
// initial config of Qpid: relative path for a classloader resource, absolute path otherwise
System.setProperty("qpid.initialConfigurationLocation", "qpid/qpid-config.json");
brokerOptions.setStartupLoggedToSystemOut(false);
broker.startup(brokerOptions);
Code to stop the Qpid server:
broker.shutdown();
Since I use anonymous mode, the client should do something like this:
SaslConfig saslConfig = new SaslConfig() {
public SaslMechanism getSaslMechanism(String[] mechanisms) {
return new SaslMechanism() {
public String getName() {
return "ANONYMOUS";
}
public LongString handleChallenge(LongString challenge, String username, String password) {
return LongStringHelper.asLongString("");
}
};
}
};
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
factory.setPort(20179);
factory.setSaslConfig(saslConfig);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
That's all.
A little more on how to do this with other versions: you can download the qpid-broker binary package from the official site. After downloading and unzipping it, you can run it as a standalone server to test against your case. Once your case connects to that server correctly, use the command line to generate (or just copy) the initial config file from QPID_WORK, remove the useless id field, and use it for the embedded server as shown above.
The most complicated part is authentication. You can choose PLAIN mode, but then you have to add the username and password to the initial config. I chose anonymous mode, which needs a little extra code when connecting. For other authentication modes you have to specify a password file or key/cert store, which I didn't try.
If it is still not working, you can read the qpid-broker docs and the Main class code in the qpid-broker artifact, which show how the command line works for each setting.
The best I could figure out was:
PropertiesConfiguration properties = new PropertiesConfiguration();
properties.addProperty("virtualhosts.virtualhost.name", "test");
properties.addProperty("security.principal-databases.principal-database.name", "testPasswordFile");
properties.addProperty("security.principal-databases.principal-database.class", "org.apache.qpid.server.security.auth.database.PropertiesPrincipalDatabase");
ServerConfiguration config = new ServerConfiguration(properties);
ApplicationRegistry.initialise(new ApplicationRegistry(config) {
@Override
protected void createDatabaseManager(ServerConfiguration configuration) throws Exception {
Properties users = new Properties();
users.put("guest","guest");
users.put("admin","admin");
_databaseManager = new PropertiesPrincipalDatabaseManager("testPasswordFile", users);
}
});
TransportConnection.createVMBroker(ApplicationRegistry.DEFAULT_INSTANCE);
With a URL of:
amqp://admin:admin@/test?brokerlist='vm://:1?sasl_mechs='PLAIN''
The big pain is with configuration and authorization. Mileage may vary.