I'm trying to send emails from a Java application running in a container on Fargate. My containers run in a VPC behind an API Gateway, and connections to external services are made through VPC endpoints.
All of that infrastructure is deployed with Terraform. The Java app runs fine locally but not when deployed to AWS, so I suspect a configuration is missing.
The Java app follows the AWS guidelines found here:
https://docs.aws.amazon.com/ses/latest/dg/send-email-raw.html
Following are some snippets of the Terraform code:
# SECURITY GROUPS
resource "aws_security_group" "security_group_containers" {
  name   = "security_group_containers_${var.project_name}_${var.environment}"
  vpc_id = var.vpc_id

  ingress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
    self             = true
  }

  egress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "security_group_containers_${var.project_name}_${var.environment}"
  }
}

resource "aws_security_group" "security_group_ses" {
  name   = "security_group_ses_${var.project_name}_${var.environment}"
  vpc_id = var.vpc_id

  ingress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "security_group_ses_${var.project_name}_${var.environment}"
  }
}
# VPC
resource "aws_vpc" "main" {
  cidr_block           = var.cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[0]
  availability_zone = "us-east-1b"

  tags = {
    Name = "private_subnet_${var.project_name}_${var.environment}"
  }
}
# VPC ENDPOINT
resource "aws_vpc_endpoint" "ses_endpoint" {
  security_group_ids  = [aws_security_group.security_group_ses.id]
  service_name        = "com.amazonaws.${var.aws_region}.email-smtp"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true

  tags = {
    "Name" = "vpc_endpoint_ses_${var.project_name}_${var.environment}"
  }

  vpc_id = aws_vpc.main.id
}
If any important service is missing, tell me so I can add it.
As you can see, I'm keeping all traffic open, so the solution found here doesn't work for me. When the app tries to send an email I get the following error:
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Connect to email.us-east-1.amazonaws.com:443 [email.us-east-1.amazonaws.com/52.0.170.238, email.us-east-1.amazonaws.com/54.234.96.52, email.us-east-1.amazonaws.com/34.239.37.81, email.us-east-1.amazonaws.com/18.208.125.60, email.us-east-1.amazonaws.com/52.204.223.71, email.us-east-1.amazonaws.com/18.235.72.5, email.us-east-1.amazonaws.com/18.234.10.182, email.us-east-1.amazonaws.com/44.194.249.132] failed: connect timed out
I think I'm missing some config to make the Java AWS SDK use the VPC endpoint.
Edit 01 - adding execution policies:
arn:aws:iam::aws:policy/AmazonSESFullAccess
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ses:*"
      ],
      "Resource": "*"
    }
  ]
}
arn:aws:iam::aws:policy/AmazonECS_FullAccess (too large)
arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
Edit 02 - changed to use an SMTP library:
The code used can be found here.
Everything worked fine with SMTP.
You've created a VPC endpoint for the SES SMTP interface, but the error message you are getting (email.us-east-1.amazonaws.com:443) is for the AWS SES service API. You can see the two sets of endpoints here. If you are using the AWS SDK to interact with SES in your Java application, then you need to change the VPC endpoint to service_name = "com.amazonaws.${var.aws_region}.email"
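With the endpoint switched to the email service and private_dns_enabled = true, the SDK client needs no special endpoint configuration, because the default hostname email.us-east-1.amazonaws.com then resolves to the interface endpoint from inside the VPC. A minimal sketch with the v2 SesClient (addresses are placeholders; the same applies to sendRawEmail):
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;

public class SesApiSender {
    public static void main(String[] args) {
        // No endpointOverride needed: private DNS on the "email" interface endpoint
        // makes email.us-east-1.amazonaws.com resolve to the endpoint ENIs.
        try (SesClient ses = SesClient.builder().region(Region.US_EAST_1).build()) {
            SendEmailRequest request = SendEmailRequest.builder()
                    .source("sender@example.com")
                    .destination(Destination.builder().toAddresses("recipient@example.com").build())
                    .message(Message.builder()
                            .subject(Content.builder().data("Test from Fargate").build())
                            .body(Body.builder().text(Content.builder().data("Hello").build()).build())
                            .build())
                    .build();
            ses.sendEmail(request);
        }
    }
}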
Your current endpoint configuration would work if you were configuring your Java application to use SMTP (such as with the JavaMail API).
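For that SMTP route (which matches the existing email-smtp endpoint), here is a minimal JavaMail sketch, assuming SES SMTP credentials have already been created; the host, credentials and addresses are placeholders:
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SesSmtpSender {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.transport.protocol", "smtp");
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.auth", "true");

        Session session = Session.getDefaultInstance(props);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("sender@example.com"));
        message.setRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
        message.setSubject("Test from Fargate");
        message.setContent("Hello from SES over the VPC endpoint", "text/plain");

        Transport transport = session.getTransport();
        try {
            // With private DNS enabled, this hostname resolves to the email-smtp interface endpoint.
            transport.connect("email-smtp.us-east-1.amazonaws.com", "SMTP_USERNAME", "SMTP_PASSWORD");
            transport.sendMessage(message, message.getAllRecipients());
        } finally {
            transport.close();
        }
    }
}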
I want to migrate from Zuul to Spring Cloud Gateway, and I don't want to change the config of my previous apps. I want to know how to handle URLs of the form "/api/" + serviceId and route them to lb://serviceId.
The previous Zuul config:
zuul:
  prefix: /api
There are lots of services registered with Eureka, and I don't want to configure a route for each one.
E.g. the route auto-generated by org.springframework.cloud.gateway.discovery.DiscoveryClientRouteDefinitionLocator:
{
  "route_id": "CompositeDiscoveryClient_APIGATEWAY",
  "route_definition": {
    "id": "CompositeDiscoveryClient_APIGATEWAY",
    "predicates": [
      {
        "name": "Path",
        "args": {
          "pattern": "/apigateway/**"
        }
      }
    ],
    "filters": [
      {
        "name": "RewritePath",
        "args": {
          "regexp": "/apigateway/(?<remaining>.*)",
          "replacement": "/${remaining}"
        }
      }
    ],
    "uri": "lb://APIGATEWAY",
    "order": 0
  }
}
What I want is:
{
  "route_id": "CompositeDiscoveryClient_APIGATEWAY",
  "route_definition": {
    "id": "CompositeDiscoveryClient_APIGATEWAY",
    "predicates": [
      {
        "name": "Path",
        "args": {
          "pattern": "/api/apigateway/**"
        }
      }
    ],
    "filters": [
      {
        "name": "RewritePath",
        "args": {
          "regexp": "/api/apigateway/(?<remaining>.*)",
          "replacement": "/${remaining}"
        }
      }
    ],
    "uri": "lb://APIGATEWAY",
    "order": 0
  }
}
How can I configure my routes to get what I want?
I also found this in the source code:
public static List<PredicateDefinition> initPredicates() {
    ArrayList<PredicateDefinition> definitions = new ArrayList<>();
    // TODO: add a predicate that matches the url at /serviceId?
    // add a predicate that matches the url at /serviceId/**
    PredicateDefinition predicate = new PredicateDefinition();
    predicate.setName(normalizeRoutePredicateName(PathRoutePredicateFactory.class));
    predicate.addArg(PATTERN_KEY, "'/'+serviceId+'/**'");
    definitions.add(predicate);
    return definitions;
}
the "'/'+serviceId+'/**'" is there without a prefix
[2019-01-10] UPDATE
I think @spencergibb's suggestion is a good solution, but I ran into new trouble with the SpEL expressions.
I tried many ways:
args:
  regexp: "'/api/' + serviceId.toLowerCase() + '/(?<remaining>.*)'"
  replacement: '/${remaining}'
args:
  regexp: "'/api/' + serviceId.toLowerCase() + '/(?<remaining>.*)'"
  replacement: "'/${remaining}'"
Startup failed with:
Origin: class path resource [application.properties]:23:70
Reason: Could not resolve placeholder 'remaining' in value "'${remaining}'"
When I use an escape "\" like
args:
  regexp: "'/api/' + serviceId.toLowerCase() + '/(?<remaining>.*)'"
  replacement: '/$\{remaining}'
it starts successfully, but I get an exception at runtime:
org.springframework.expression.spel.SpelParseException: Expression [/$\{remaining}] #2: EL1065E: unexpected escape character.
at org.springframework.expression.spel.standard.Tokenizer.raiseParseException(Tokenizer.java:590) ~[spring-expression-5.0.5.RELEASE.jar:5.0.5.RELEASE]
at org.springframework.expression.spel.standard.Tokenizer.process(Tokenizer.java:265) ~[spring-expression-5.0.5.RELEASE.jar:5.0.5.RELEASE]
UPDATE 2
I found that in org.springframework.cloud.gateway.filter.factory.RewritePathGatewayFilterFactory there is a replacement that deals with "\":
...
@Override
public GatewayFilter apply(Config config) {
    String replacement = config.replacement.replace("$\\", "$");
    return (exchange, chain) -> {
...
but by the time the SpelParseException is thrown, that replacement has not been applied.
You can customize the automatic filters and predicates used via properties.
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
          predicates:
            - name: Path
              args:
                pattern: "'/api/'+serviceId.toLowerCase()+'/**'"
          filters:
            - name: RewritePath
              args:
                regexp: "'/api/' + serviceId.toLowerCase() + '/(?<remaining>.*)'"
                replacement: "'/${remaining}'"
Note that the values (i.e. args.pattern or args.regexp) are all Spring Expression Language (SpEL) expressions, hence the single quotes, the +, etc.
If different routes need to have different prefixes, you'd need to define each route in properties.
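Alternatively, routes with different prefixes can be defined programmatically with the Java DSL instead of properties. A minimal sketch, assuming two hypothetical services (the ids, paths and URIs below are placeholders):
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RouteConfig {

    @Bean
    public RouteLocator customRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                // /api/service-a/** -> lb://SERVICE-A, with the /api/service-a prefix stripped
                .route("service-a", r -> r.path("/api/service-a/**")
                        .filters(f -> f.rewritePath("/api/service-a/(?<remaining>.*)", "/${remaining}"))
                        .uri("lb://SERVICE-A"))
                // /internal/service-b/** -> lb://SERVICE-B, using a different prefix
                .route("service-b", r -> r.path("/internal/service-b/**")
                        .filters(f -> f.rewritePath("/internal/service-b/(?<remaining>.*)", "/${remaining}"))
                        .uri("lb://SERVICE-B"))
                .build();
    }
}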
I have the following Activiti 6 applications running from the officially provided .WAR files, and I have successfully deployed them to my localhost:
activiti-app - http://localhost:8080/activiti-app/
activiti-admin - http://localhost:8080/activiti-admin/
activiti-rest - http://localhost:8080/activiti-rest/
So far I can use activiti-app to produce BPMN files and start applications using the interface. So far so good.
However, what I'm looking to do is write my own Spring apps and still be able to view them running in the Activiti UI apps.
Looking at the Baeldung Activiti tutorial, you can start a process:
@GetMapping("/start-process")
public String startProcess() {
    runtimeService.startProcessInstanceByKey("my-process");
    return "Process started. Number of currently running process instances = "
            + runtimeService.createProcessInstanceQuery().count();
}
The above returns an incremented value every time the endpoint is hit.
My question is this:
Using the Activiti tools (running on localhost:8080), how can I view the processes? How do I link the standalone Java application (running on localhost:8081) with the Activiti UI interfaces?
That's pretty easy if you have activiti-rest configured and running. The REST API is documented here.
So you just need to make a web service call to the correct API endpoint. For example, to list all of the processes you need to do a GET request to the repository/process-definitions endpoint.
Note: the REST API uses Basic Auth.
public void loadProcesses() {
    // the username and password to access the REST API (same as for the UI)
    String plainCreds = "username:p#ssword";
    byte[] plainCredsBytes = plainCreds.getBytes();
    byte[] base64CredsBytes = Base64.getEncoder().encode(plainCredsBytes);
    String base64Creds = new String(base64CredsBytes);

    HttpHeaders headers = new HttpHeaders();
    headers.add("Authorization", "Basic " + base64Creds);

    RestTemplate restTemplate = new RestTemplate();
    HttpEntity<String> request = new HttpEntity<>(headers);
    ResponseEntity<String> responseAsJson = restTemplate.exchange(
            "http://localhost:8080/activiti-rest/repository/process-definitions",
            HttpMethod.GET, request, String.class);
}
The response to this API call will be JSON like:
{
"data": [
{
"id" : "oneTaskProcess:1:4",
"url" : "http://localhost:8182/repository/process-definitions/oneTaskProcess%3A1%3A4",
"version" : 1,
"key" : "oneTaskProcess",
"category" : "Examples",
"suspended" : false,
"name" : "The One Task Process",
"description" : "This is a process for testing purposes",
"deploymentId" : "2",
"deploymentUrl" : "http://localhost:8081/repository/deployments/2",
"graphicalNotationDefined" : true,
"resource" : "http://localhost:8182/repository/deployments/2/resources/testProcess.xml",
"diagramResource" : "http://localhost:8182/repository/deployments/2/resources/testProcess.png",
"startFormDefined" : false
}
],
"total": 1,
"start": 0,
"sort": "name",
"order": "asc",
"size": 1
}
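If you want to work with that response in Java rather than as a raw string, here is a minimal sketch using Jackson (assuming it is on the classpath) to pull out the process definition keys and names; it could be called with responseAsJson.getBody() from loadProcesses():
import java.io.IOException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical helper: prints "key - name" for each process definition in the JSON above
private void printProcessDefinitions(String json) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode root = mapper.readTree(json);
    for (JsonNode definition : root.get("data")) {
        System.out.println(definition.get("key").asText() + " - " + definition.get("name").asText());
    }
}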
Set<String> graphNames = JanusGraphFactory.getGraphNames();
for (String name : graphNames) {
    System.out.println(name);
}
The above snippet produces the following exception
java.lang.IllegalStateException: Gremlin Server must be configured to use the JanusGraphManager.
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.janusgraph.core.JanusGraphFactory.getGraphNames(JanusGraphFactory.java:175)
at com.JanusTest.controllers.JanusController.getPersonDetail(JanusController.java:66)
my.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cql
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
gremlin-server.yaml
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
ConfigurationManagementGraph: conf/my.properties,
}
plugins:
- janusgraph.imports
scriptEngines: {
gremlin-groovy: {
imports: [java.lang.Math],
staticImports: [java.lang.Math.PI],
scripts: [scripts/empty-sample.groovy]}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
consoleReporter: {enabled: true, interval: 180000},
csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
jmxReporter: {enabled: true},
slf4jReporter: {enabled: true, interval: 180000},
gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
The answer to this is similar to this other question.
The call to JanusGraphFactory.getGraphNames() needs to be sent to the remote server. If you're working in the Gremlin Console, first establish a remote sessioned connection then set remote console mode.
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
==>Configured localhost/127.0.0.1:8182
gremlin> :remote console
==>All scripts will now be sent to Gremlin Server - [localhost:8182]-[5206cdde-b231-41fa-9e6c-69feac0fe2b2] - type ':remote console' to return to local mode
Then as described in the JanusGraph docs for "Listing the Graphs":
ConfiguredGraphFactory.getGraphNames() will return a set of graph names for which you have created configurations using the ConfigurationManagementGraph APIs.
JanusGraphFactory.getGraphNames() on the other hand returns a set of graph names for which you have instantiated and the references are stored inside the JanusGraphManager.
If you are not using the Gremlin Console, then you should be using a remote client, such as the TinkerPop gremlin-driver (Java), to send your requests to the Gremlin Server.
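A minimal sketch of that last approach with the Java gremlin-driver, using the host and port from the gremlin-server.yaml above; the script string is evaluated on the server, where the JanusGraphManager is configured:
import java.util.List;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

public class ListGraphNames {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.build("127.0.0.1").port(8182).create();
        Client client = cluster.connect();
        try {
            // Submit the call as a script so it runs server-side, not in the local JVM
            List<Result> results = client.submit("JanusGraphFactory.getGraphNames()").all().get();
            for (Result result : results) {
                System.out.println(result.getString());
            }
        } finally {
            client.close();
            cluster.close();
        }
    }
}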
I'm creating a Java app that will capture Google Drive changes, using the Java client for the Google Drive V3 API. The code below shows how we call the Changes.List method to return a list of Drive changes.
Following https://developers.google.com/drive/v3/reference/changes/list, requesting page token 3411 gives this list:
{
"kind": "drive#changeList",
"newStartPageToken": "3420",
"changes": [
{
"kind": "drive#change",
"type": "file",
"time": "2017-06-11T10:23:44.740Z",
"removed": false,
"fileId": "0B5nxCVMvw6oHaGNXZnlIb1I1OEE",
"file": {
"kind": "drive#file",
"id": "0B5nxCVMvw6oHaGNXZnlIb1I1OEE",
"name": "NewsLetters",
"mimeType": "application/vnd.google-apps.folder"
}
},
{
"kind": "drive#change",
"type": "file",
"time": "2017-06-11T10:23:49.982Z",
"removed": false,
"fileId": "0B5nxCVMvw6oHeWdTYzlsOWpFOEU",
"file": {
"kind": "drive#file",
"id": "0B5nxCVMvw6oHeWdTYzlsOWpFOEU",
"name": "Copy of Copy of learning11.txt",
"mimeType": "text/plain"
}
},
But when using this code:
AppIdentityCredential credential = new AppIdentityCredential(
        Collections.singleton(DriveScopes.DRIVE_METADATA));
driveService = new Drive.Builder(HTTP_TRANSPORT_REQUEST, JSON_FACTORY, credential)
        .setApplicationName(APPLICATION_NAME)
        .build();

String pageToken = "3411";
while (pageToken != null) {
    ChangeList changes = driveService.changes().list(pageToken).execute();
    Log.info("changes.getChanges 3411 " + changes.getChanges().size());
    for (Change change : changes.getChanges()) {
        // Process change
        System.out.println("Change found for file: " + change.getFileId());
    }
    if (changes.getNewStartPageToken() != null) {
        // Last page, save this token for the next polling interval
        savedStartPageToken = changes.getNewStartPageToken();
    }
    pageToken = changes.getNextPageToken();
}
The log line
Log.info("changes.getChanges 3411 " + changes.getChanges().size());
reports a size of 0.
I even tried
driveService.changes().list("3411").setFields("changes").execute()
with the same result: 0.
I am running on Google App Engine.
I would like to get the list of changes in a folder (folderID).
What mistake am I making? Any pointers? Please help.
Is this because of "Google Drive API through Google App Engine: Service Accounts are not supported by the Drive SDK due to its security model"? If App Identity isn't working with the Drive API, wouldn't that be a bug?
Yet with AppIdentity I am able to read the files in a folder:
result = service.files().list()
        .setQ("'" + locdriveFolderID + "' in parents")
        .setPageSize(10)
        .setFields("nextPageToken, files(id, name, description, mimeType, modifiedTime)")
        .setOrderBy("modifiedTime")
        .execute();
Why does changes.getChanges() return 0 when the API shows a list of more than one change? Please correct me.
changes.getChanges() does return the list if I use:
AuthorizationCodeFlow authFlow = initializeFlow();
Credential credential = authFlow.loadCredential(getUserId(req));
driveService = new Drive.Builder(HTTP_TRANSPORT_REQUEST, JSON_FACTORY, credential)
        .setApplicationName(APPLICATION_NAME)
        .build();

ChangeList changes = driveService.changes().list("3411").execute();
Log.info("changes.getChanges " + changes.getChanges().size());
Log output
changes.getChanges 10
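As a side note, instead of hard-coding "3411", the initial token can be fetched from the API with the same user-credential driveService; a minimal sketch:
import com.google.api.services.drive.model.StartPageToken;

// Fetch a fresh start token once, then poll changes().list(pageToken) from there
StartPageToken tokenResponse = driveService.changes().getStartPageToken().execute();
String pageToken = tokenResponse.getStartPageToken();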
I'm currently stuck on a problem I can't seem to solve. Maybe someone can either clarify what I'm doing wrong or give me better insight into what's happening.
I have a file/folder in Google Drive that is shared (1) at the domain level with link only and (2) with some specific users. In my current application, written in Java, I would like to get all the permissions that are currently set on this file/folder.
Obviously my first point of entry was to test out the permissions list call (Google Developers). The following scopes were granted to test out the REST call:
drive
drive.appdata
drive.apps.readonly
drive.file
drive.readonly
The result of my call contains permissions granted to both users and the domain (type). Here's the response:
200 OK
{
"kind": "drive#permissionList",
"etag": "\"xxxxxxxxxx\"",
"selfLink": "https://www.googleapis.com/drive/v2/files/xxxxxxxxxx/permissions",
"items": [
{
"kind": "drive#permission",
"etag": "\"xxxxxxxxxx\"",
"id": "xxxxxxxxxx",
"selfLink": "xxxxxxxxxx",
"name": "Owner Name",
"emailAddress": "owner#random-domain.com",
"domain": "random-domain.com",
"role": "owner",
"type": "user"
},
{
"kind": "drive#permission",
"etag": "\"xxxxxxxxxx\"",
"id": "xxxxxxxxxx",
"selfLink": "xxxxxxxxxx",
"name": "User Name",
"emailAddress": "user#random-domain.com",
"domain": "random-domain.com",
"role": "writer",
"type": "user",
"photoLink": "xxxxxxxxxx"
},
{
"kind": "drive#permission",
"etag": "\"xxxxxxxxxx\"",
"id": "xxxxxxxxxx",
"selfLink": "xxxxxxxxxx",
"name": "Domain Name",
"domain": "random-domain.com",
"role": "reader",
"type": "domain",
"withLink": true
}
]
}
So far so good. In the response above you can observe that the following permissions have been returned: owner, user, domain sharing with link. So now I'm trying to do the same in my project.
public static void myAwesomeMethod(String fileId) {
Drive service = DriveDirectoryServiceManager.getDriveService("owner#random-domain.com");
PermissionList permissions = service.permissions().list(fileId).execute();
List<Permission> permissionList = permissions.getItems();
...
}
For those who want to know what's happening behind DriveDirectoryServiceManager, here it is:
public class DriveDirectoryServiceManager {

    /** Email of the Service Account */
    private static final String SERVICE_ACCOUNT_EMAIL = "xxxxxxxxxx";
    private static final String APPLICATION_NAME = "xxxxxxxxxx";
    /** Path to the Service Account's Private Key file */
    private static final String PKCS = "/xxxxxxxxxx";

    /**
     * Builds and returns a Drive service object authorized with the service
     * account that acts on behalf of the given user.
     *
     * @return Drive service object that is ready to make requests.
     */
    public static Drive getDriveService(String user)
            throws GeneralSecurityException, IOException, URISyntaxException {
        HttpTransport httpTransport = new NetHttpTransport();
        JacksonFactory jsonFactory = new JacksonFactory();

        List<String> scope = new ArrayList<>();
        scope.add(DriveScopes.DRIVE);
        scope.add(DriveScopes.DRIVE_FILE);
        scope.add(DriveScopes.DRIVE_APPDATA);
        scope.add(DriveScopes.DRIVE_APPS_READONLY);
        scope.add(DriveScopes.DRIVE_READONLY);

        InputStream keyStream = DriveDirectoryServiceManager.class.getResourceAsStream(PKCS);
        PrivateKey key = SecurityUtils.loadPrivateKeyFromKeyStore(
                SecurityUtils.getPkcs12KeyStore(), keyStream, "notasecret", "privatekey", "notasecret");

        GoogleCredential credential = new GoogleCredential.Builder()
                .setTransport(httpTransport)
                .setJsonFactory(jsonFactory)
                .setServiceAccountId(SERVICE_ACCOUNT_EMAIL)
                .setServiceAccountScopes(scope)
                .setServiceAccountUser(user)
                .setServiceAccountPrivateKey(key)
                .build();

        return new Drive.Builder(httpTransport, jsonFactory, null)
                .setHttpRequestInitializer(credential)
                .setApplicationName(APPLICATION_NAME)
                .build();
    }
}
One would expect that the permissionList would now contain 3 permissions but this is not the case. It seems that only 2 permissions are returned. The permission that is granted on domain level is not in the result list. Here's the output when I inspect it in the debugger.
permissionList = {ArrayList#6468} size = 2
0 = {Permission#6472} size = 9
0 = {DataMap$Entry#6481} "domain" -> "random-domain.com"
1 = {DataMap$Entry#6482} "emailAddress" -> "owner#random-domain.com"
2 = {DataMap$Entry#6483} "etag" -> ""xxxxxxxxxx""
3 = {DataMap$Entry#6484} "id" -> "xxxxxxxxxx"
4 = {DataMap$Entry#6485} "kind" -> "drive#permission"
5 = {DataMap$Entry#6486} "name" -> "Owner Name"
6 = {DataMap$Entry#6487} "role" -> "owner"
7 = {DataMap$Entry#6488} "selfLink" -> "xxxxxxxxxx"
8 = {DataMap$Entry#6489} "type" -> "user"
1 = {Permission#6473} size = 10
0 = {DataMap$Entry#6556} "domain" -> "random-domain.com"
1 = {DataMap$Entry#6557} "emailAddress" -> "user#random-domain.com"
2 = {DataMap$Entry#6558} "etag" -> ""xxxxxxxxxx""
3 = {DataMap$Entry#6559} "id" -> "xxxxxxxxxx"
4 = {DataMap$Entry#6560} "kind" -> "drive#permission"
5 = {DataMap$Entry#6561} "name" -> "User Name"
6 = {DataMap$Entry#6562} "photoLink" -> "xxxxxxxxxx"
7 = {DataMap$Entry#6563} "role" -> "writer"
8 = {DataMap$Entry#6564} "selfLink" -> "xxxxxxxxxx"
9 = {DataMap$Entry#6565} "type" -> "user"
So far I've not found a real reason why the result list differs.
I've found the root of the problem. After trying this, I found that it did find the permission if I manually entered the file ID and permission ID into the Java permissions get method.
Apparently a different 'fileId' value was passed to 'myAwesomeMethod' when the Drive folder was created compared to when it was updated. In this particular case the parent folder of the folder I was inspecting with the REST call had exactly the same sharing permissions except for the domain sharing, hence the confusing data.
The only tip I can give is to always check it with a simple Get() on the permission itself. This did the trick for me as it pointed me in the right direction.
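For reference, a minimal sketch of that Get() check with the Drive v2 client used above; fileId and permissionId are placeholders for the values being verified:
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.Permission;

public class PermissionCheck {
    public static void checkPermission(Drive service, String fileId, String permissionId) throws Exception {
        // Fetch a single permission directly; if the ID was taken from the REST call,
        // this confirms whether the Java client is looking at the same file.
        Permission permission = service.permissions().get(fileId, permissionId).execute();
        System.out.println(permission.getType() + " / " + permission.getRole());
    }
}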