How to set up a user policy for a MinIO bucket using S3Client? - java

We use MinIO as our backend service, but we communicate with it through
software.amazon.awssdk.services.s3.S3Client
I see that this class contains the method putBucketPolicy,
but I don't see any method that allows assigning a policy to a user. Is there any way to assign a user policy using S3Client?

Edited Answer:
Your updated question helped me determine what you were looking for.
You need to create a policy and assign it to a role. You can then assign that role to your user. The AWS SDK for Java 2.x provides support for all of these actions with IAM.
Here's what we can do:
1- Creating a policy
To create a new policy, provide the policy’s name and a JSON-formatted policy document in a CreatePolicyRequest to the IamClient’s createPolicy method.
Imports
import software.amazon.awssdk.core.waiters.WaiterResponse;
import software.amazon.awssdk.services.iam.model.CreatePolicyRequest;
import software.amazon.awssdk.services.iam.model.CreatePolicyResponse;
import software.amazon.awssdk.services.iam.model.GetPolicyRequest;
import software.amazon.awssdk.services.iam.model.GetPolicyResponse;
import software.amazon.awssdk.services.iam.model.IamException;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;
import software.amazon.awssdk.services.iam.waiters.IamWaiter;
Code
public static String createIAMPolicy(IamClient iam, String policyName) {
    try {
        // Create an IamWaiter object.
        IamWaiter iamWaiter = iam.waiter();

        CreatePolicyRequest request = CreatePolicyRequest.builder()
            .policyName(policyName)
            .policyDocument(PolicyDocument) // JSON policy document; see the PolicyDocument constant in the full example below
            .build();

        CreatePolicyResponse response = iam.createPolicy(request);

        // Wait until the policy is created.
        GetPolicyRequest polRequest = GetPolicyRequest.builder()
            .policyArn(response.policy().arn())
            .build();

        WaiterResponse<GetPolicyResponse> waitUntilPolicyExists = iamWaiter.waitUntilPolicyExists(polRequest);
        waitUntilPolicyExists.matched().response().ifPresent(System.out::println);
        return response.policy().arn();

    } catch (IamException e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    return "";
}
You can check out CreatePolicy.java for a complete example.
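For reference, here's a minimal sketch of wiring up the client and calling the helper above (the policy name is an illustrative assumption; PolicyDocument is the JSON constant shown in the full scenario below). Note that IamClient talks to the IAM service, so this presumes an IAM-compatible endpoint; a plain MinIO server does not implement the AWS IAM API.
// Minimal usage sketch; also requires software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider.
IamClient iam = IamClient.builder()
    .region(Region.AWS_GLOBAL) // IAM is a global service
    .credentialsProvider(ProfileCredentialsProvider.create())
    .build();

String policyArn = createIAMPolicy(iam, "MyBucketPolicy"); // policy name is an assumption
System.out.println("Created policy: " + policyArn);
iam.close();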
2- Attach a role policy
You can attach a policy to an IAM role by calling the IamClient’s attachRolePolicy method.
Imports
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;
import software.amazon.awssdk.services.iam.model.IamException;
import software.amazon.awssdk.services.iam.model.AttachRolePolicyRequest;
import software.amazon.awssdk.services.iam.model.AttachedPolicy;
import software.amazon.awssdk.services.iam.model.ListAttachedRolePoliciesRequest;
import software.amazon.awssdk.services.iam.model.ListAttachedRolePoliciesResponse;
import java.util.List;
Code
public static void attachIAMRolePolicy(IamClient iam, String roleName, String policyArn) {
    try {
        ListAttachedRolePoliciesRequest request = ListAttachedRolePoliciesRequest.builder()
            .roleName(roleName)
            .build();

        ListAttachedRolePoliciesResponse response = iam.listAttachedRolePolicies(request);
        List<AttachedPolicy> attachedPolicies = response.attachedPolicies();

        // Ensure that the policy is not already attached to this role.
        for (AttachedPolicy policy : attachedPolicies) {
            if (policy.policyArn().compareTo(policyArn) == 0) {
                System.out.println(roleName + " policy is already attached to this role.");
                return;
            }
        }

        AttachRolePolicyRequest attachRequest = AttachRolePolicyRequest.builder()
            .roleName(roleName)
            .policyArn(policyArn)
            .build();

        iam.attachRolePolicy(attachRequest);
        System.out.println("Successfully attached policy " + policyArn + " to role " + roleName);

    } catch (IamException e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    System.out.println("Done");
}
You can check out AttachRolePolicy.java for a complete example.
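Since your original question is about assigning a policy to a user, note that IamClient also exposes attachUserPolicy, which attaches a managed policy directly to an IAM user without going through a role. A minimal sketch, assuming an existing user (the user name is illustrative):
import software.amazon.awssdk.services.iam.model.AttachUserPolicyRequest;

// Attach the managed policy straight to a user instead of a role.
AttachUserPolicyRequest userRequest = AttachUserPolicyRequest.builder()
    .userName("my-user") // assumption: an existing IAM user
    .policyArn(policyArn)
    .build();
iam.attachUserPolicy(userRequest);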
Bonus Content
Scenario: create a user and assume a role
The following code example shows how to:
Create a user who has no permissions.
Create a role that grants permission to list Amazon S3 buckets for the account.
Add a policy to let the user assume the role.
Assume the role and list Amazon S3 buckets using temporary credentials.
Delete the policy, role, and user.
/*
 * To run this Java V2 code example, set up your development environment, including your credentials.
 * For information, see this documentation topic:
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 *
 * This example performs these operations:
 * 1. Creates a user that has no permissions.
 * 2. Creates a policy that grants Amazon S3 permissions.
 * 3. Creates a role.
 * 4. Grants the user permissions.
 * 5. Gets temporary credentials by assuming the role. Creates an Amazon S3 service client object with the temporary credentials.
 * 6. Deletes the resources.
 */
public class IAMScenario {
    public static final String DASHES = new String(new char[80]).replace("\0", "-");

    public static final String PolicyDocument =
        "{" +
        " \"Version\": \"2012-10-17\"," +
        " \"Statement\": [" +
        " {" +
        " \"Effect\": \"Allow\"," +
        " \"Action\": [" +
        " \"s3:*\"" +
        " ]," +
        " \"Resource\": \"*\"" +
        " }" +
        " ]" +
        "}";

    public static void main(String[] args) throws Exception {
        final String usage = "\n" +
            "Usage:\n" +
            "    <username> <policyName> <roleName> <roleSessionName> <fileLocation> <bucketName> \n\n" +
            "Where:\n" +
            "    username - The name of the IAM user to create. \n\n" +
            "    policyName - The name of the policy to create. \n\n" +
            "    roleName - The name of the role to create. \n\n" +
            "    roleSessionName - The name of the session required for the assumeRole operation. \n\n" +
            "    fileLocation - The file location to the JSON required to create the role (see Readme). \n\n" +
            "    bucketName - The name of the Amazon S3 bucket from which objects are read. \n\n";

        if (args.length != 6) {
            System.out.println(usage);
            System.exit(1);
        }

        String userName = args[0];
        String policyName = args[1];
        String roleName = args[2];
        String roleSessionName = args[3];
        String fileLocation = args[4];
        String bucketName = args[5];

        Region region = Region.AWS_GLOBAL;
        IamClient iam = IamClient.builder()
            .region(region)
            .credentialsProvider(ProfileCredentialsProvider.create())
            .build();

        System.out.println(DASHES);
        System.out.println("Welcome to the AWS IAM example scenario.");
        System.out.println(DASHES);

        System.out.println(DASHES);
        System.out.println("1. Create the IAM user.");
        Boolean createUser = createIAMUser(iam, userName);
        System.out.println(DASHES);

        if (createUser) {
            System.out.println(userName + " was successfully created.");

            System.out.println(DASHES);
            System.out.println("2. Creates a policy.");
            String polArn = createIAMPolicy(iam, policyName);
            System.out.println("The policy " + polArn + " was successfully created.");
            System.out.println(DASHES);

            System.out.println(DASHES);
            System.out.println("3. Creates a role.");
            String roleArn = createIAMRole(iam, roleName, fileLocation);
            System.out.println(roleArn + " was successfully created.");
            System.out.println(DASHES);

            System.out.println(DASHES);
            System.out.println("4. Grants the user permissions.");
            attachIAMRolePolicy(iam, roleName, polArn);
            System.out.println(DASHES);

            System.out.println(DASHES);
            System.out.println("*** Wait for 1 MIN so the resource is available.");
            TimeUnit.MINUTES.sleep(1);
            System.out.println("5. Gets temporary credentials by assuming the role.");
            System.out.println("Perform an Amazon S3 Service operation using the temporary credentials.");
            assumeGivenRole(roleArn, roleSessionName, bucketName);
            System.out.println(DASHES);

            System.out.println(DASHES);
            System.out.println("6. Getting ready to delete the AWS resources.");
            deleteRole(iam, roleName, polArn);
            deleteIAMUser(iam, userName);
            System.out.println(DASHES);

            System.out.println(DASHES);
            System.out.println("This IAM Scenario has successfully completed.");
            System.out.println(DASHES);
        } else {
            System.out.println(userName + " was not successfully created.");
        }
    }

    public static Boolean createIAMUser(IamClient iam, String username) {
        try {
            // Create an IamWaiter object.
            IamWaiter iamWaiter = iam.waiter();
            CreateUserRequest request = CreateUserRequest.builder()
                .userName(username)
                .build();

            CreateUserResponse response = iam.createUser(request);

            // Wait until the user is created.
            GetUserRequest userRequest = GetUserRequest.builder()
                .userName(response.user().userName())
                .build();

            WaiterResponse<GetUserResponse> waitUntilUserExists = iamWaiter.waitUntilUserExists(userRequest);
            waitUntilUserExists.matched().response().ifPresent(System.out::println);
            return true;

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return false;
    }

    public static String createIAMRole(IamClient iam, String rolename, String fileLocation) throws Exception {
        try {
            JSONObject jsonObject = (JSONObject) readJsonSimpleDemo(fileLocation);
            CreateRoleRequest request = CreateRoleRequest.builder()
                .roleName(rolename)
                .assumeRolePolicyDocument(jsonObject.toJSONString())
                .description("Created using the AWS SDK for Java")
                .build();

            CreateRoleResponse response = iam.createRole(request);
            System.out.println("The ARN of the role is " + response.role().arn());
            return response.role().arn();

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return "";
    }

    public static String createIAMPolicy(IamClient iam, String policyName) {
        try {
            // Create an IamWaiter object.
            IamWaiter iamWaiter = iam.waiter();
            CreatePolicyRequest request = CreatePolicyRequest.builder()
                .policyName(policyName)
                .policyDocument(PolicyDocument)
                .build();

            CreatePolicyResponse response = iam.createPolicy(request);

            // Wait until the policy is created.
            GetPolicyRequest polRequest = GetPolicyRequest.builder()
                .policyArn(response.policy().arn())
                .build();

            WaiterResponse<GetPolicyResponse> waitUntilPolicyExists = iamWaiter.waitUntilPolicyExists(polRequest);
            waitUntilPolicyExists.matched().response().ifPresent(System.out::println);
            return response.policy().arn();

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return "";
    }

    public static void attachIAMRolePolicy(IamClient iam, String roleName, String policyArn) {
        try {
            ListAttachedRolePoliciesRequest request = ListAttachedRolePoliciesRequest.builder()
                .roleName(roleName)
                .build();

            ListAttachedRolePoliciesResponse response = iam.listAttachedRolePolicies(request);
            List<AttachedPolicy> attachedPolicies = response.attachedPolicies();

            // Ensure that the policy is not already attached to this role.
            for (AttachedPolicy policy : attachedPolicies) {
                if (policy.policyArn().compareTo(policyArn) == 0) {
                    System.out.println(roleName + " policy is already attached to this role.");
                    return;
                }
            }

            AttachRolePolicyRequest attachRequest = AttachRolePolicyRequest.builder()
                .roleName(roleName)
                .policyArn(policyArn)
                .build();

            iam.attachRolePolicy(attachRequest);
            System.out.println("Successfully attached policy " + policyArn + " to role " + roleName);

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }

    // Invoke an Amazon S3 operation using the assumed role.
    public static void assumeGivenRole(String roleArn, String roleSessionName, String bucketName) {
        StsClient stsClient = StsClient.builder()
            .region(Region.US_EAST_1)
            .build();

        try {
            AssumeRoleRequest roleRequest = AssumeRoleRequest.builder()
                .roleArn(roleArn)
                .roleSessionName(roleSessionName)
                .build();

            AssumeRoleResponse roleResponse = stsClient.assumeRole(roleRequest);
            Credentials myCreds = roleResponse.credentials();
            String key = myCreds.accessKeyId();
            String secKey = myCreds.secretAccessKey();
            String secToken = myCreds.sessionToken();

            // List all objects in the Amazon S3 bucket using the temporary credentials.
            Region region = Region.US_EAST_1;
            S3Client s3 = S3Client.builder()
                .credentialsProvider(StaticCredentialsProvider.create(AwsSessionCredentials.create(key, secKey, secToken)))
                .region(region)
                .build();

            System.out.println("Created a S3Client using temp credentials.");
            System.out.println("Listing objects in " + bucketName);
            ListObjectsRequest listObjects = ListObjectsRequest.builder()
                .bucket(bucketName)
                .build();

            ListObjectsResponse res = s3.listObjects(listObjects);
            List<S3Object> objects = res.contents();
            for (S3Object myValue : objects) {
                System.out.println("The name of the key is " + myValue.key());
                System.out.println("The owner is " + myValue.owner());
            }

        } catch (StsException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void deleteRole(IamClient iam, String roleName, String polArn) {
        try {
            // The policy needs to be detached first.
            DetachRolePolicyRequest rolePolicyRequest = DetachRolePolicyRequest.builder()
                .policyArn(polArn)
                .roleName(roleName)
                .build();

            iam.detachRolePolicy(rolePolicyRequest);

            // Delete the policy.
            DeletePolicyRequest request = DeletePolicyRequest.builder()
                .policyArn(polArn)
                .build();

            iam.deletePolicy(request);
            System.out.println("*** Successfully deleted " + polArn);

            // Delete the role.
            DeleteRoleRequest roleRequest = DeleteRoleRequest.builder()
                .roleName(roleName)
                .build();

            iam.deleteRole(roleRequest);
            System.out.println("*** Successfully deleted " + roleName);

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }

    public static void deleteIAMUser(IamClient iam, String userName) {
        try {
            DeleteUserRequest request = DeleteUserRequest.builder()
                .userName(userName)
                .build();

            iam.deleteUser(request);
            System.out.println("*** Successfully deleted " + userName);

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }

    public static Object readJsonSimpleDemo(String filename) throws Exception {
        FileReader reader = new FileReader(filename);
        JSONParser jsonParser = new JSONParser();
        return jsonParser.parse(reader);
    }
}
Original Answer:
PutBucketPolicy
If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.
You can check out for more from AWS API Reference: PutBucketPolicy
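For what it's worth, bucket policies (as opposed to user policies) do work against MinIO through S3Client. A minimal sketch, assuming a local MinIO endpoint with the default credentials and an existing bucket (endpoint, credentials, and bucket name are all illustrative assumptions):
import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;
import software.amazon.awssdk.services.s3.model.PutBucketPolicyRequest;

S3Client s3 = S3Client.builder()
    .endpointOverride(URI.create("http://localhost:9000")) // assumption: local MinIO
    .region(Region.US_EAST_1) // MinIO generally accepts any region
    .credentialsProvider(StaticCredentialsProvider.create(
        AwsBasicCredentials.create("minioadmin", "minioadmin"))) // assumption: default creds
    .serviceConfiguration(S3Configuration.builder()
        .pathStyleAccessEnabled(true) // MinIO is typically addressed path-style
        .build())
    .build();

// A bucket policy that allows anonymous reads of the bucket's objects.
String policyJson = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\","
    + "\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::my-bucket/*\"}]}";

s3.putBucketPolicy(PutBucketPolicyRequest.builder()
    .bucket("my-bucket")
    .policy(policyJson)
    .build());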

Related

Cannot write parquet to amazon s3 bucket using AvroParquetWriter in Java

Hi, I am trying to write parquet files to an Amazon S3 bucket/key using Java but am getting an [org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "file"].
Cannot write
[s3a://nprd-pr-snd-rtgrev-edp/sndrtgrev/out/jcl-0.snappy.parquet]:
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for
scheme "file"
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3443)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:496)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:316)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:393)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:165)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:1019)
at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:816)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:204)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.&lt;init&gt;(S3ABlockOutputStream.java:182)
at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:1369)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1195)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1175)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1064)
at org.apache.parquet.hadoop.ParquetFileWriter.&lt;init&gt;(ParquetFileWriter.java:244)
at org.apache.parquet.hadoop.ParquetWriter.&lt;init&gt;(ParquetWriter.java:273)
at org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:494)
at com.jclconsultants.parquet.tool.AwsParquetProcessor.writeParquetWithAvroToAwsBucket(AwsParquetProcessor.java:210)
at com.jclconsultants.parquet.tool.AwsParquetProcessor.run(AwsParquetProcessor.java:81)
at com.jclconsultants.parquet.tool.ToolProcessParquetsFromAwsBucket.launchAwsParquetProcessor(ToolProcessParquetsFromAwsBucket.java:269)
at com.jclconsultants.parquet.tool.ToolProcessParquetsFromAwsBucket.launchThreads(ToolProcessParquetsFromAwsBucket.java:251)
at com.jclconsultants.parquet.tool.ToolProcessParquetsFromAwsBucket.processParquetsFromBucket(ToolProcessParquetsFromAwsBucket.java:160)
at com.jclconsultants.parquet.tool.ToolProcessParquetsFromAwsBucket.main(ToolProcessParquetsFromAwsBucket.java:127)
Here is how I am writing (the code was gathered from different posts):
private void writeParquetWithAvroToAwsBucket(String outputParquetName, List<GenericData.Record> records) {
URI awsURI = null;
try {
awsURI = new URI("s3a://" + bucketName + "/" + outputParquetName );
} catch (URISyntaxException e1) {
e1.printStackTrace();
return;
}
Path dataFile = new Path(awsURI);
Configuration config = new Configuration();
config.set("fs.s3a.access.key", accesskey);
config.set("fs.s3a.secret.key", secretAccessKey);
config.set("fs.s3a.endpoint", "s3." + Regions.CA_CENTRAL_1.getName() + ".amazonaws.com");
config.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
config.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
config.set("fs.s3a.server-side-encryption-algorithm", S3AEncryptionMethods.SSE_KMS.getMethod());
config.set("fs.s3a.connection.ssl.enabled", "true");
config.set("fs.s3a.impl.disable.cache", "true");
config.set("fs.s3a.path.style.access", "true");
try (ParquetWriter<GenericData.Record> writer = AvroParquetWriter.<GenericData.Record>builder(dataFile)//
.withSchema(getSchema())//
.withConf(config)//
.withCompressionCodec(SNAPPY)//
.withWriteMode(OVERWRITE)//
.build()) {
for (GenericData.Record record : records) {
writer.write(record);
}
} catch (Exception e) {
e.printStackTrace();
}
}
and the schema is as follows:
private Schema getSchema() {
String json = "{\"type\":\"record\",\r\n" + " \"name\":\"spark_schema\",\r\n" + " \"fields\":[\r\n"
+ " {\"name\":\"PolicyVersion_uniqueId\",\"type\":[\"null\",\"string\"],\"default\":null},\r\n"
+ " {\"name\":\"agreementNumber\",\"type\":[\"null\",\"string\"],\"default\":null},\r\n"
+ " {\"name\":\"epoch_time\",\"type\":[\"null\",\"string\"],\"default\":null},\r\n"
+ " {\"name\":\"xml\",\"type\":[\"null\",\"string\"],\"default\":null},\r\n"
+ " {\"name\":\"filename\",\"type\":[\"null\",\"string\"],\"default\":null},\r\n"
+ " {\"name\":\"message_header\",\"type\":[\"null\",{\"type\":\"map\",\"values\":[\"null\",\"string\"]}],\"default\":null},\r\n"
+ " {\"name\":\"tracking_number\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"fullTermPremium\",\"type\":[\"null\",\"string\"],\"default\":null},\r\n"
+ " {\"name\":\"epoch_date\",\"type\":[\"null\",{\"type\":\"int\",\"logicalType\":\"date\"}],\"default\":null},\r\n"
+ " {\"name\":\"collection_date\",\"type\":[\"null\",{\"type\":\"int\",\"logicalType\":\"date\"}],\"default\":null}\r\n"
+ "]}";
return new Schema.Parser().parse(json);
}
I've also tried different credential providers (AnonymousAWSCredentialsProvider and TemporaryAWSCredentialsProvider) but still did not get it to work. I have a feeling that my config is missing something, as it fails during the AvroParquetWriter build().
What am I doing wrong?
Notice that I can read parquets from S3 with a similar configuration.
I can also write a parquet file to my local drive and then upload the parquet file to S3 as follows:
private Path writeGenericRecordsToLocalDrive() {
Path dataFile = new Path(OUTPUT_DIR + "/" + parquetName);
Configuration config = new Configuration();
config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
try (ParquetWriter<GenericData.Record> writer = AvroParquetWriter.<GenericData.Record>builder(dataFile)//
.withSchema(getSchema())//
.withConf(config)//
.withCompressionCodec(SNAPPY)//
.withWriteMode(OVERWRITE)//
.build()) {
for (GenericData.Record parquet : parquetRecords) {
writer.write(parquet);
nbRecordsInParquet++;
}
} catch (IOException e) {
e.printStackTrace();
}
return dataFile;
}
private void uploadLocalParquetToAwsBucket(Path dataFile) {
AmazonS3 s3Client = null;
try {
File fileToUpload = new File(OUTPUT_DIR + "/" + dataFile.getName());
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setHeader(Headers.SERVER_SIDE_ENCRYPTION, "aws:kms");
objectMetadata.setContentLength(fileToUpload.length());
BasicAWSCredentials creds = new BasicAWSCredentials(accessKey, secretAccessKey);
s3Client = AmazonS3ClientBuilder.standard().withRegion(Regions.CA_CENTRAL_1)
.withCredentials(new AWSStaticCredentialsProvider(creds)).build();
String bucketDestination = fileToUpload.getName();
com.amazonaws.services.s3.model.PutObjectRequest putRequest = new com.amazonaws.services.s3.model.PutObjectRequest(
bucketName + "/" + bucketFolder, bucketDestination, new FileInputStream(fileToUpload),
objectMetadata);
putRequest.putCustomRequestHeader(Headers.SERVER_SIDE_ENCRYPTION, "aws:kms");
PutObjectResult putResult = s3Client.putObject(putRequest);
} catch (Exception e) {
e.printStackTrace();
} finally {
if (s3Client != null) {
s3Client.shutdown();
}
}
}
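One detail stands out from the stack trace above: the S3A writer buffers blocks to local temp files before uploading, so the local "file" scheme must be resolvable from the same Configuration used for the S3A write. A hedged sketch of the extra settings, mirroring what the writeGenericRecordsToLocalDrive() config above already does:
// Hedged sketch: register the local (and HDFS) filesystem implementations that
// the S3A block writer needs for its temporary buffer files.
config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());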

Broadcasting device using jmDNS

I'm currently trying to have my device (program) show up as a mock Chromecast on the network. The application itself can see the Mock-Caster as a Chromecast, but if I try searching for it using YouTube or any other sender app, the "Mock-Caster" does not appear in the list.
I registered a service on jmDNS with all the criteria that would be available on an actual chromecast as well.
Is there something that I am missing or getting wrong?
Here's what I have so far.
public void postConstruct() throws Exception {
InetAddress discoveryInterface = InetAddress.getByName("192.168.1.1");
CAST_DEVICE_MONITOR.startDiscovery(discoveryInterface, "Mock-Server");
CAST_DEVICE_MONITOR.registerListener(new DeviceDiscoveryListener() {
@Override
public void deviceDiscovered(CastDevice castDevice) {
System.out.println("New chrome cast detected: " + castDevice);
}
@Override
public void deviceRemoved(CastDevice castDevice) {
System.out.println("Chromecast removed: " + castDevice);
}
});
JmDNS jmdns = JmDNS.create(discoveryInterface, "Mock-Server");
final String name = "Mock-Caster";
final String id = UUID.randomUUID().toString();
// Register a service
HashMap<String, String> properties = new HashMap<>();
//values scraped from chromecast or another java project
properties.put("sf", "1");
properties.put("id", id);
properties.put("md", name);
properties.put("fd", name);
properties.put("s#", "1");
properties.put("ff", "0");
properties.put("ci", "1");
properties.put("c#", Integer.toString(1));
properties.put("pv", "1.1");
properties.put("cd", "E465315D08CFDEF2742E1264D78F6035");
properties.put("rm", "ED724E435DA8115C");
properties.put("ve", "05");
properties.put("ic", "/setup/icon.png");
properties.put("ca", "201221");
properties.put("st", "0");
properties.put("bs", "FA8FCA771881");
properties.put("nf", "1");
properties.put("rs", "");
ServiceInfo serviceInfo = ServiceInfo.create("_googlecast._tcp.local.", name, 8009, 1, 1, properties);
jmdns.registerService(serviceInfo);
}
//This is where i know that the mock-caster is registered and available
@Bean
public IntegrationFlow tcpServer() throws Exception {
TcpNioServerConnectionFactory factory = new TcpNioServerConnectionFactory(8009);
DefaultTcpSSLContextSupport defaultTcpSSLContextSupport = new DefaultTcpSSLContextSupport(keystore, keystore, password, password);
defaultTcpSSLContextSupport.setProtocol("TLS");
factory.setTcpNioConnectionSupport(new DefaultTcpNioSSLConnectionSupport(defaultTcpSSLContextSupport));
factory.setTcpSocketSupport(new DefaultTcpSocketSupport(false));
factory.setDeserializer(TcpCodecs.lengthHeader4());
factory.setSerializer(TcpCodecs.lengthHeader4());
TcpInboundGatewaySpec inboundGateway = Tcp.inboundGateway(factory);
return IntegrationFlows
.from(inboundGateway)
.handle(message -> {
String ip_address = message.getHeaders().get("ip_address").toString();
if(ip_address.equalsIgnoreCase("192.168.1.1")) {
System.out.println("Self IP Address received");
System.out.println("Payload: " + message.getPayload());
System.out.println("MessageHeaders: " + message.getHeaders());
}else{
System.out.println("Other IP address received: " + ip_address);
}
})
.get();
}
//Mock-Caster detected here
@Scheduled(fixedRate = 15000)
public void checkCasters() throws Exception {
Set<CastDevice> casters = CAST_DEVICE_MONITOR.getCastDevices();
System.out.println("Current CCs: " + casters.size());
for (CastDevice device : casters) {
System.out.println("CC (" + device.getDisplayName() + ")");
var receiver = new CastEvent.CastEventListener() {
@Override
public void onEvent(@Nonnull CastEvent<?> castEvent) {
System.out.println("Event: " + castEvent);
}
};
var appId = DEFAULT_MEDIA_RECEIVER_APP;
try {
if (!device.isConnected()) {
device.connect();
}
device.addEventListener(receiver);
} catch (Exception ex) {
ex.printStackTrace();
} finally {
device.removeEventListener(receiver);
device.disconnect();
}
}

Is it possible to operate a VM using Azure Functions managed ID?

Is it possible to operate a VM using Azure Functions managed ID?
I used a service principal to write code to operate a VM from my PC.
/**
* Main function which runs the actual sample.
*
* @param azure instance of the azure client
* @return true if sample runs successfully
*/
public static boolean runSample(Azure azure) {
final String rgName1 = "rgName";
final String linuxVMName = "vmName";
try {
VirtualMachine virtualMachine = azure.virtualMachines().getByResourceGroup(rgName1, linuxVMName);
System.out.println("Running Command");
List<String> commands = new ArrayList<>();
commands.add("whoami");
commands.add("touch /tmp/tmp.txt");
RunCommandInput runParams = new RunCommandInput()
.withCommandId("RunShellScript")
.withScript(commands);
RunCommandResult runResult = azure.virtualMachines().runCommand(virtualMachine.resourceGroupName(), virtualMachine.name(), runParams);
for (InstanceViewStatus response : runResult.value()) {
System.out.println("code : " + response.code());
System.out.println("status : " + response.displayStatus());
System.out.println("message : " + response.message());
}
return true;
} catch (Exception e) {
System.out.println(e.getMessage());
e.printStackTrace();
} finally {
System.out.println("final");
}
return false;
}
/**
* Main entry point.
*
* @param args the parameters
*/
public static void main(String[] args) {
try {
// Authenticate
String clientId = "XXXXXXXXX";
String domain = "XXXXXXXXXX";
String secret = "XXXXXXXXXX";
//MSICredentials credentials = new MSICredentials();
AzureTokenCredentials credentials = new ApplicationTokenCredentials(clientId, domain, secret, AzureEnvironment.AZURE);
Azure azure = Azure
.configure()
.withLogLevel(LogLevel.NONE)
.authenticate(credentials)
.withDefaultSubscription();
// Print selected subscription
System.out.println("Selected subscription: " + azure.subscriptionId());
runSample(azure);
} catch (Exception e) {
System.out.println(e.getMessage());
e.printStackTrace();
}
}
I would like to modify some of this code to run on Azure Functions.
Is it possible to operate a VM using an Azure Functions managed ID without using a service principal?
According to my understanding, you want to use MSI to use the Azure VM run-command feature in an Azure Function. If so, please refer to the following steps:
Enable Azure MSI in the Azure Function.
Assign an Azure RBAC role to the MSI.
Running a command requires the Microsoft.Compute/virtualMachines/runCommand/action permission. The Virtual Machine Contributor role and higher levels have this permission.
Code. I use the package com.microsoft.azure:azure:1.38.0
String subscriptionId="";
AppServiceMSICredentials appServiceMsiCredentials = new AppServiceMSICredentials(AzureEnvironment.AZURE);
Azure azure = Azure
.configure()
.withLogLevel(LogLevel.NONE)
.authenticate(appServiceMsiCredentials)
.withSubscription(subscriptionId);
final String rgName1 = "testlinux_group";
final String linuxVMName = "testlinux";
try {
VirtualMachine virtualMachine = azure.virtualMachines().getByResourceGroup(rgName1, linuxVMName);
System.out.println("Running Command");
List<String> commands = new ArrayList<>();
commands.add("echo 1");
RunCommandInput runParams = new RunCommandInput()
.withCommandId("RunShellScript")
.withScript(commands);
RunCommandResult runResult = azure.virtualMachines().runCommand(virtualMachine.resourceGroupName(), virtualMachine.name(), runParams);
for (InstanceViewStatus res : runResult.value()) {
context.getLogger().info("code : " + res.code());
context.getLogger().info("status : " + res.displayStatus());
context.getLogger().info("message : " + res.message());
}
} catch (Exception e) {
System.out.println(e.getMessage());
e.printStackTrace();
} finally {
System.out.println("final");
}

Elasticsearch create index and post

I am trying to perform actions in ES. So far I believe that I have established the connection correctly using Jest (HTTP requests), and now I am trying to create a new index and post some information so it will be visible through the elasticsearch-head plugin. When I run my code I don't receive any exception, but nothing happens either.
public class ElasticSearch {
private String ES_HOST = "localhost";
private String ES_PORT = "9200";
private static JestClient jestClient = null;
public JestClient getElasticSearchClient() {
return jestClient;
}
public void connectToElasticSearch() {
try {
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(
new HttpClientConfig.Builder("http://" + ES_HOST + ":" + ES_PORT)
.multiThreaded(true)
// //Per default this implementation will create no more than 2 concurrent
// connections per given route
// .defaultMaxTotalConnectionPerRoute(<YOUR_DESIRED_LEVEL_OF_CONCURRENCY_PER_ROUTE>)
// // and no more 20 connections in total
// .maxTotalConnection(<YOUR_DESIRED_LEVEL_OF_CONCURRENCY_TOTAL>)
.build());
jestClient = factory.getObject();
} catch (Exception e) {
e.printStackTrace();
}
}
public void createIndex(String indexName, String indexType) throws IOException {
// jestClient.execute(new CreateIndex.Builder(indexName).build());
PutMapping putMapping = new PutMapping.Builder(
indexName,
indexType,
"{ \"my_type\" : { \"properties\" : { \"message\" : {\"type\" : \"string\", \"store\" : \"yes\"} } } }"
).build();
jestClient.execute(putMapping);
}
public void postInES() throws IOException {
String source = jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", "date")
.field("message", "trying out Elastic Search")
.endObject().string();
}
public static void main(String[] args) throws IOException {
ElasticSearch es = new ElasticSearch();
es.connectToElasticSearch();
es.getElasticSearchClient();
es.createIndex("ES TEST", "TEST");
es.postInES();
}
I am using:
<dependency>
<groupId>io.searchbox</groupId>
<artifactId>jest</artifactId>
<version>5.3.3</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>6.2.4</version>
</dependency>
I will appreciate your help, thanks.
I found a few problems in my code above and was able to fix them. First, when using the Java transport client the port has to be 9300, not 9200. I ended up changing my entire code and using TransportClient instead of JestClient, which solved it. In case anyone else needs this or has a similar problem, I will share my code here; hope it helps others.
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse;
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.DeleteByQueryAction;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.sort.FieldSortBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import java.io.IOException;
import java.net.InetAddress;
import java.util.Map;
/**
* @author YoavT @Date 6/26/2018 @Time 9:20 AM
*/
public class ElasticSearch{
private String ES_HOST = "localhost";
private int ES_PORT = 9300;
private TransportClient client = null;
protected boolean connectToElasticSearch(String clusterName) {
boolean flag = false;
try {
Settings settings =
Settings.builder()
.put("cluster.name", clusterName)
.put("client.transport.ignore_cluster_name", true)
.put("client.transport.sniff", true)
.build();
// create connection
client = new PreBuiltTransportClient(settings);
client.addTransportAddress(new TransportAddress(InetAddress.getByName(ES_HOST), ES_PORT));
System.out.println(
"Connection " + clusterName + "#" + ES_HOST + ":" + ES_PORT + " established!");
flag = true;
} catch (Exception e) {
e.printStackTrace();
flag = false;
}
return flag;
}
/**
* Check the health status of the cluster
*/
public boolean isClusterHealthy(String clusterName) {
connectToElasticSearch(clusterName);
final ClusterHealthResponse response =
client
.admin()
.cluster()
.prepareHealth()
.setWaitForGreenStatus()
.setTimeout(TimeValue.timeValueSeconds(2))
.execute()
.actionGet();
if (response.isTimedOut()) {
System.out.println("The cluster is unhealthy: " + response.getStatus());
return false;
}
System.out.println("The cluster is healthy: " + response.getStatus());
return true;
}
/**
* Previous step is (check if cluster is healthy) The cluster is ready now and we can start with
* creating an index. Before that, we check that the same index was not created previously.
*/
public boolean isIndexRegistered(String indexName, String clusterName) {
connectToElasticSearch(clusterName);
// check if index already exists
final IndicesExistsResponse ieResponse =
client.admin().indices().prepareExists(indexName).get(TimeValue.timeValueSeconds(1));
// index not there
if (!ieResponse.isExists()) {
return false;
}
System.out.println("Index already created!");
return true;
}
/**
* If the index does not exist already, we create the index. *
*/
public boolean createIndex(String indexName, String numberOfShards, String numberOfReplicas, String clusterName) {
connectToElasticSearch(clusterName);
try {
CreateIndexResponse createIndexResponse =
client
.admin()
.indices()
.prepareCreate(indexName.toLowerCase())
.setSettings(
Settings.builder()
.put("index.number_of_shards", numberOfShards)
.put("index.number_of_replicas", numberOfReplicas))
.get();
if (createIndexResponse.isAcknowledged()) {
System.out.println(
"Created Index with "
+ numberOfShards
+ " Shard(s) and "
+ numberOfReplicas
+ " Replica(s)!");
return true;
}
} catch (Exception e) {
e.printStackTrace();
}
return false;
}
public static void main(String[] args) throws IOException {
ElasticSearch elasticSearch = new ElasticSearch();
elasticSearch.connectToElasticSearch("elasticsearch");
boolean isHealthy = elasticSearch.isClusterHealthy("elasticsearch");
System.out.println("is cluster healthy= " + isHealthy);
boolean isIndexExist = elasticSearch.isIndexRegistered("Test", "elasticsearch");
System.out.println("is index exist = " + isIndexExist);
boolean createIndex = elasticSearch.createIndex("TestIndex", "3", "1", "elasticsearch");
System.out.println("Is index created = " + createIndex);
boolean bulkInsert = elasticSearch.bulkInsert("TestIndex", "Json", "elasticsearch");
System.out.println("Bulk insert = " + bulkInsert);
long deleteBulk = elasticSearch.deleteBulk("TestIndex", "name", "Mark Twain", "elasticsearch");
System.out.println("Delete bulk = " + deleteBulk);
}
/**
* We basically want to index a JSON array consisting of objects with the properties name and age. We use a bulk insert to insert all the data at once.
* In our tests it happened that the cluster health status was not ready when we tried to run a search/delete query directly after the insert. Consequently,
* we added the setRefreshPolicy( RefreshPolicy.IMMEDIATE ) method to signalize the server to refresh the index after the specified request.
* The data can now be queried directly after.
*
* @param indexName
* @param indexType
* @return
* @throws IOException
*/
public boolean bulkInsert(String indexName, String indexType, String clusterName) throws IOException {
connectToElasticSearch(clusterName);
boolean flag = true;
BulkRequestBuilder bulkRequest = client.prepareBulk();
// for (int i = 0; i < listOfParametersForInsertion.length; i++) {
bulkRequest
.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
.add(
client
.prepareIndex(indexName, indexType, null)
.setSource(
XContentFactory.jsonBuilder()
.startObject()
.field("name", "Mark Twain")
.field("age", 75)
.endObject()));
// }
BulkResponse bulkResponse = bulkRequest.get();
if (bulkResponse.hasFailures()) {
// process failures by iterating through each bulk response item
System.out.println("Bulk insert failed!");
flag = false;
}
return flag;
}
/**
* After successfully querying data, we try to delete documents using a key-value pair to get
* deeper into the Elasticsearch behavior.
*/
public long deleteBulk(String indexName, String key, String value, String clusterName) {
connectToElasticSearch(clusterName);
BulkByScrollResponse response =
DeleteByQueryAction.INSTANCE
.newRequestBuilder(client)
.filter(QueryBuilders.matchQuery(key, value))
.source(indexName)
.refresh(true)
.get();
System.out.println("Deleted " + response.getDeleted() + " element(s)!");
return response.getDeleted();
}
/**
* To query the data, we use a SearchResponse in combination with a scroll. A scroll is basically
* the Elasticsearch counterpart to a cursor in a traditional SQL database. Using that sort of
* query is quite an overkill for our example and just for demonstration purposes. It is rather
* used to query large amounts of data (not like five documents in our case) and not intended for
* real-time user requests.
*
* @param indexName
* @param from
* @param to
*/
public void queryResultsWithFilter(String indexName, int from, int to, String clusterName, String filterField) {
connectToElasticSearch(clusterName);
SearchResponse scrollResp =
client
.prepareSearch(indexName)
// sort order
.addSort(FieldSortBuilder.DOC_FIELD_NAME, SortOrder.ASC)
// keep results for 60 seconds
.setScroll(new TimeValue(60000))
// filter for age
.setPostFilter(QueryBuilders.rangeQuery(filterField).from(from).to(to))
// maximum of 100 hits will be returned for each scroll
.setSize(100)
.get();
// scroll until no hits are returned
do {
int count = 1;
for (SearchHit hit : scrollResp.getHits().getHits()) {
Map<String, Object> res = hit.getSourceAsMap();
// print results
for (Map.Entry<String, Object> entry : res.entrySet()) {
System.out.println("[" + count + "] " + entry.getKey() + " --> " + entry.getValue());
}
count++;
}
scrollResp =
client
.prepareSearchScroll(scrollResp.getScrollId())
.setScroll(new TimeValue(60000))
.execute()
.actionGet();
// zero hits mark the end of the scroll and the while loop.
} while (scrollResp.getHits().getHits().length != 0);
}
}

How to get the auth token for Dropbox directly into a Java program so that the user doesn't need to copy-paste it

I am developing a Java desktop application where I have to upload files to and download files from a Dropbox account. I have used the sample code and was able to run it without any glitch. But I want to bypass the step where the user needs to copy the auth code from the browser into the application. How can that be done? I want my application to get the auth token directly, because this is causing overhead for the user.
All the experts please help me.
Here's the code I have Implemented.
import com.dropbox.core.*;
import java.awt.Desktop;
import java.io.*;
import java.util.Locale;
public class Main {
public static void main(String[] args) throws IOException, DbxException {
// Get your app key and secret from the Dropbox developers website.
final String APP_KEY = "xxxxxxxxxxxxxxxx";
final String APP_SECRET = "xxxxxxxxxxxxxxxx";
DbxAppInfo appInfo = new DbxAppInfo(APP_KEY, APP_SECRET);
DbxRequestConfig config = new DbxRequestConfig("JavaTutorial/1.0",
Locale.getDefault().toString());
DbxWebAuthNoRedirect webAuth = new DbxWebAuthNoRedirect(config, appInfo);
// Have the user sign in and authorize your app.
String authorizeUrl = webAuth.start();
System.out.println("1. Go to: " + authorizeUrl);
System.out.println("2. Click \"Allow\" (you might have to log in first)");
System.out.println("3. Copy the authorization code.");
Desktop.getDesktop().browse(java.net.URI.create(authorizeUrl));
String code = new BufferedReader(new InputStreamReader(System.in)).readLine().trim();
// This will fail if the user enters an invalid authorization code.
DbxAuthFinish authFinish = webAuth.finish(code);
DbxClient client = new DbxClient(config, authFinish.accessToken);
System.out.println("Linked account: " + client.getAccountInfo().displayName);
File inputFile = new File("OpenCv_Video_display_link.txt");
FileInputStream inputStream = new FileInputStream(inputFile);
try {
DbxEntry.File uploadedFile = client.uploadFile("/OpenCv_Video_display_link.txt",
DbxWriteMode.add(), inputFile.length(), inputStream);
System.out.println("Uploaded: " + uploadedFile.toString());
} finally {
inputStream.close();
}
DbxEntry.WithChildren listing = client.getMetadataWithChildren("/");
System.out.println("Files in the root path:");
for (DbxEntry child : listing.children) {
System.out.println(" " + child.name + ": " + child.toString());
}
FileOutputStream outputStream = new FileOutputStream("121verbs.pdf");
try {
DbxEntry.File downloadedFile = client.getFile("/121verbs.pdf", null,
outputStream);
System.out.println("Metadata: " + downloadedFile.toString());
} finally {
outputStream.close();
}
}
}
Go to https://www.dropbox.com/developers/apps
Click on the app. On this page, under OAuth 2, look for Generated access token.
Click on Generate.
This is your access token. This authenticates that the user is you, so you don't have to go through the standard authorization flow every time.
I have commented out the unnecessary code; you will only need the last line.
/* // Have the user sign in and authorize your app.
String authorizeUrl = webAuth.start();
System.out.println("1. Go to: " + authorizeUrl);
System.out.println("2. Click \"Allow\" (you might have to log in first)");
System.out.println("3. Copy the authorization code.");
String code = new BufferedReader(new InputStreamReader(System.in)).readLine().trim();
// This will fail if the user enters an invalid authorization code.
DbxAuthFinish authFinish = webAuth.finish(code); */
String accessToken = "Your access token goes here";
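With the token in hand, the client can be created directly; this mirrors the DbxClient lines from the original code:
// Create the client directly with the generated token; no auth flow needed.
DbxClient client = new DbxClient(config, accessToken);
System.out.println("Linked account: " + client.getAccountInfo().displayName);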
Based on this example from the Dropbox blog, I made this example in Java. It will open a WebView where you can enter your Dropbox credentials; after that, Dropbox will redirect you, and the redirect URL will contain the token. We then extract the token from the URL and print it to the console.
public class AnotherClass extends Application {
public static void main(String[] args) {
java.net.CookieHandler.setDefault(new com.sun.webkit.network.CookieManager());
launch(args);
}
//This can be any URL. But you have to register in your dropbox app
final String redirectUri = "https://www.dropbox.com/1/oauth2/redirect_receiver";
final String AppKey = "YOUR APP KEY ";
final String url = "https://www.dropbox.com/1/oauth2/authorize?client_id=" + AppKey + "&response_type=token&redirect_uri=" + redirectUri;
@Override
public void start(Stage pStage) throws Exception {
Stage primaryStage = pStage;
primaryStage.setTitle("Dropbox Sign In");
WebView webView = new WebView();
WebEngine webEngine = webView.getEngine();
webEngine.load(url);
webEngine.locationProperty().addListener(new ChangeListener<String>() {
@Override
public void changed(ObservableValue<? extends String> arg0, String oldLocation, String newLocation) {
try {
if (newLocation.startsWith(redirectUri)) {
ArrayMap<String, String> map = parseQuery(newLocation.split("#")[1]);
if (map.containsKey("access_token")) {
System.out.print("Token: " + map.get("access_token"));
}
}
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
}
});
StackPane stackPane = new StackPane();
stackPane.getChildren().add(webView);
Scene rootScene = new Scene(stackPane);
primaryStage.setScene(rootScene);
primaryStage.show();
}
ArrayMap<String, String> parseQuery(String query) throws UnsupportedEncodingException {
ArrayMap<String, String> params = new ArrayMap<String, String>();
for (String param : query.split("&")) {
String[] pair = param.split("=");
String key = URLDecoder.decode(pair[0], "UTF-8");
String value = URLDecoder.decode(pair[1], "UTF-8");
params.put(key, value);
}
return params;
}
}
Based on my development experience on the Android platform, you may insert JavaScript code to scan the web page and extract the auth code, like below:
@Override
public void onPageFinished(final WebView webView, final String url) {
if (AUTHORIZE_SUBMIT_URL.equalsIgnoreCase(url)
&& AUTHORIZE_SUBMIT_TITLE.equalsIgnoreCase(webView.getTitle())) {
webView.loadUrl("javascript:HtmlViewer.showHTML"
+ "('<html>'+document.getElementById('auth-code').innerHTML+'</html>');");
}
}
