AmazonEC2Client describeInstances() returns zero Reservations in Java

Running "aws ec2 describe-instances" on the command line lists all EC2 instances, but with the Java AWS SDK it returns zero Reservations. Please see the code snippet below:
AmazonEC2 ec2 = null;
if (ec2 == null) {
    AWSCredentialsProviderChain credentialsProvider = new AWSCredentialsProviderChain(
            new InstanceProfileCredentialsProvider(),
            new ProfileCredentialsProvider("default"));
    ec2 = new AmazonEC2Client(credentialsProvider);
}
for (Reservation reservation : ec2.describeInstances().getReservations()) {
    for (Instance instance : reservation.getInstances()) {
        System.out.println("TAG" + instance.getInstanceId());
    }
}

The most likely cause is that it's not looking in the correct region.
Another possibility is that it throws an exception that you don't see. To rule this out, insert some logging statements: at the very least, one before and one after the for loop.
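A sketch that covers both checks at once, assuming SDK v1 on the classpath; the region constant is an assumption, use whichever region the CLI reports instances in:

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Reservation;

public class DescribeWithRegion {
    public static void main(String[] args) {
        // Pin the client to an explicit region instead of the SDK default
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1) // assumption: adjust to your region
                .build();
        try {
            System.out.println("Calling describeInstances...");
            for (Reservation r : ec2.describeInstances().getReservations()) {
                System.out.println("Reservation: " + r.getReservationId());
            }
            System.out.println("Done.");
        } catch (Exception e) {
            // Surface any exception instead of silently losing it
            e.printStackTrace();
        }
    }
}
```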

This is the Java 8 code I use to describe all instances from all regions:
amazonEC2.describeRegions().getRegions().forEach(region -> {
    System.out.println("Region : " + region.getRegionName());
    amazonEC2 = AmazonEC2ClientBuilder.standard()
            .withCredentials(awsprovider)
            .withRegion(region.getRegionName())
            .build();
    amazonEC2.describeInstances().getReservations().forEach(reservation -> {
        reservation.getInstances().forEach(instance -> {
            System.out.println(instance.getInstanceId());
        });
    });
});
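Note also that describeInstances pages its results, so a complete listing should follow the next token. A sketch, assuming SDK v1 with credentials and region picked up from the environment:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.DescribeInstancesResult;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

public class ListAllInstances {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        DescribeInstancesRequest request = new DescribeInstancesRequest();
        DescribeInstancesResult result;
        do {
            result = ec2.describeInstances(request);
            for (Reservation reservation : result.getReservations()) {
                for (Instance instance : reservation.getInstances()) {
                    System.out.println(instance.getInstanceId());
                }
            }
            // Follow the pagination token until the listing is exhausted
            request.setNextToken(result.getNextToken());
        } while (result.getNextToken() != null);
    }
}
```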
Thanks,
Akshay


ComboAnalyzer - AttributeImpl not found in AttributeSource

The OpenNLPAnalyzer from this blog post, based on the OpenNLPTokenizer in the opennlp package that ships with Lucene, works as promised. I am now trying to use it inside a ComboAnalyzer (part of an ES plugin to combine multiple analyzers; see link below) in the following way:
ComboAnalyzer analyzer = new ComboAnalyzer(new EnglishAnalyzer(), new OpenNLPAnalyzer());
TokenStream stream = analyzer.tokenStream("fieldname", new StringReader(text));
stream is a ComboTokenStream. On calling stream.incrementToken(), I get the following exception at line 105 here:
Exception in thread "main": State contains AttributeImpl of type org.apache.lucene.analysis.tokenattributes.OffsetAttributeImpl that is not in in this AttributeSource
Here is what the called method restoreState does.
public final void restoreState(State state) {
    if (state == null) return;
    do {
        AttributeImpl targetImpl = attributeImpls.get(state.attribute.getClass());
        if (targetImpl == null) {
            throw new IllegalArgumentException("State contains AttributeImpl of type " +
                state.attribute.getClass().getName() + " that is not in in this AttributeSource");
        }
        state.attribute.copyTo(targetImpl);
        state = state.next;
    } while (state != null);
}
This hints that one of the TokenStreams has an OffsetAttribute but the other does not. Is there a clean way to fix this?
I tried to add the line addAttribute(OffsetAttribute.class) in the same file here. I still get the same exception.
The problem was here:
Tokenizer source = new OpenNLPTokenizer(
AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, sentenceDetectorOp, tokenizerOp);
The fix is to pass in TokenStream.DEFAULT_TOKEN_ATTRIBUTE_FACTORY instead of AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY. The former uses PackedTokenAttributeImpl for implementing OffsetAttribute (and many other attributes) and the latter picks OffsetAttributeImpl.

EMR cluster bootstrap failure (timeout) occurs most of the time I initialize a cluster

I'm writing an app that consists of 4 chained MapReduce jobs, which run on Amazon EMR. I'm using the JobFlow interface to chain the jobs. Each job is contained in its own class and has its own main method. All of these are packed into a .jar which is saved in S3, and the cluster is initialized from a small local app on my laptop, which configures the JobFlowRequest and submits it to EMR.
For most of the attempts I make to start the cluster, it fails with the error message Terminated with errors: On the master instance (i-<cluster number>), bootstrap action 1 timed out executing. I looked up info on this issue, and all I could find is that this exception is thrown if the combined bootstrap time of the cluster exceeds 45 minutes. However, this occurs only ~15 minutes after the request is submitted to EMR, regardless of the requested cluster size, be it 4 EC2 instances, 10, or even 20. This makes no sense to me at all; what am I missing?
Some tech specs:
- The project is compiled with Java 1.7.79
- The requested EMR image is 4.6.0, which uses Hadoop 2.7.2
- I'm using the AWS SDK for Java v. 1.10.64
This is my local main method, which sets up and submits the JobFlowRequest:
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.ec2.model.InstanceType;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
import com.amazonaws.services.elasticmapreduce.model.*;

public class ExtractRelatedPairs {
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("Usage: ExtractRelatedPairs: <k>");
            System.exit(1);
        }
        int outputSize = Integer.parseInt(args[0]);
        if (outputSize < 0) {
            System.err.println("k should be positive");
            System.exit(1);
        }
        AWSCredentials credentials = null;
        try {
            credentials = new ProfileCredentialsProvider().getCredentials();
        } catch (Exception e) {
            throw new AmazonClientException(
                    "Cannot load the credentials from the credential profiles file. " +
                    "Please make sure that your credentials file is at the correct " +
                    "location (~/.aws/credentials), and is in valid format.",
                    e);
        }
        AmazonElasticMapReduce mapReduce = new AmazonElasticMapReduceClient(credentials);
        HadoopJarStepConfig jarStep1 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase1")
                .withArgs("s3://datasets.elasticmapreduce/ngrams/books/20090715/eng-gb-all/5gram/data/", "hdfs:///output1/");
        StepConfig step1Config = new StepConfig()
                .withName("Phase 1")
                .withHadoopJarStep(jarStep1)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        HadoopJarStepConfig jarStep2 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase2")
                .withArgs("hdfs:///output1/", "hdfs:///output2/");
        StepConfig step2Config = new StepConfig()
                .withName("Phase 2")
                .withHadoopJarStep(jarStep2)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        HadoopJarStepConfig jarStep3 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase3")
                .withArgs("hdfs:///output2/", "hdfs:///output3/", args[0]);
        StepConfig step3Config = new StepConfig()
                .withName("Phase 3")
                .withHadoopJarStep(jarStep3)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        HadoopJarStepConfig jarStep4 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase4")
                .withArgs("hdfs:///output3/", "s3n://dsps162assignment2benasaf/output4");
        StepConfig step4Config = new StepConfig()
                .withName("Phase 4")
                .withHadoopJarStep(jarStep4)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        JobFlowInstancesConfig instances = new JobFlowInstancesConfig()
                .withInstanceCount(10)
                .withMasterInstanceType(InstanceType.M1Small.toString())
                .withSlaveInstanceType(InstanceType.M1Small.toString())
                .withHadoopVersion("2.7.2")
                .withEc2KeyName("AWS")
                .withKeepJobFlowAliveWhenNoSteps(false)
                .withPlacement(new PlacementType("us-east-1a"));
        RunJobFlowRequest runFlowRequest = new RunJobFlowRequest()
                .withName("extract-related-word-pairs")
                .withInstances(instances)
                .withSteps(step1Config, step2Config, step3Config, step4Config)
                .withJobFlowRole("EMR_EC2_DefaultRole")
                .withServiceRole("EMR_DefaultRole")
                .withReleaseLabel("emr-4.6.0")
                .withLogUri("s3n://dsps162assignment2benasaf/logs/");
        System.out.println("Submitting the JobFlow Request to Amazon EMR and running it...");
        RunJobFlowResult runJobFlowResult = mapReduce.runJobFlow(runFlowRequest);
        String jobFlowId = runJobFlowResult.getJobFlowId();
        System.out.println("Ran job flow with id: " + jobFlowId);
    }
}
A while back, I encountered a similar issue, where even a vanilla EMR 4.6.0 cluster was failing to get past startup and thus threw a timeout error on the bootstrap step.
I ended up creating a cluster on a different/new VPC in a different region and it worked fine, which led me to believe there may be a problem with either the original VPC itself or the software in 4.6.0.
Also, regarding the VPC: it was specifically having an issue setting and resolving DNS names for the newly created cluster nodes, even though older versions of EMR did not have this problem.
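If the VPC is suspect, one concrete thing to verify is that DNS support and DNS hostnames are enabled on it, since the cluster nodes need to resolve each other. A sketch with the SDK v1 EC2 client; the VPC id is a placeholder, and as far as I recall each ModifyVpcAttribute call may set only one attribute:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.ModifyVpcAttributeRequest;

public class EnableVpcDns {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        String vpcId = "vpc-12345678"; // placeholder: the VPC the EMR cluster launches into

        // EC2 accepts only one attribute per ModifyVpcAttribute call
        ec2.modifyVpcAttribute(new ModifyVpcAttributeRequest()
                .withVpcId(vpcId)
                .withEnableDnsSupport(true));
        ec2.modifyVpcAttribute(new ModifyVpcAttributeRequest()
                .withVpcId(vpcId)
                .withEnableDnsHostnames(true));
    }
}
```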

Team foundation server getting users of a project using Java SDK

I'm trying to get all the users that belong to a project through the SDK for Java version 11.0.0, but I'm stuck.
With this code I retrieve the collections and the projects:
TeamFoundationServerEntity teamFoundationServer = configurationServer.getTeamFoundationServerEntity(true);
if (teamFoundationServer != null)
{
    ProjectCollectionEntity[] projectCollections = teamFoundationServer.getProjectCollections();
    for (ProjectCollectionEntity pce : projectCollections) {
        System.out.println("Collection: " + pce.getDisplayName() + " " + pce.getDescription());
        TeamProjectEntity[] tpe = pce.getTeamProjects();
        for (TeamProjectEntity teamProjectEntity : tpe) {
            System.out.println("  teamProjectEntity: " + teamProjectEntity.getDisplayName() + " * " + teamProjectEntity.getProjectURI());
        }
    }
}
The following code, taken from the example in the downloaded zip, also retrieves the group information:
GUID[] resourceTypes = new GUID[] {
    CatalogResourceTypes.PROJECT_COLLECTION
};
CatalogResource[] resources =
    configurationServer.getCatalogService().queryResourcesByType(resourceTypes, CatalogQueryOptions.NONE);
if (resources != null)
{
    for (CatalogResource resource : resources)
    {
        String instanceId = resource.getProperties().get("InstanceId");
        TFSTeamProjectCollection tpc = configurationServer.getTeamProjectCollection(new GUID(instanceId));
        System.out.println("TFSTeamProjectCollection");
        System.out.println("\tName: " + tpc.getName().toString());
        System.out.println("\tURI: " + tpc.getBaseURI());
        ProjectCollection pc = tpc.getWorkItemClient().getProjects();
        for (Project project : pc) {
            System.out.println("---" + project.getName() + " * " + project.getID());
            String[] grps = tpc.getWorkItemClient().getGroupDataProvider(project.getName()).getGroups();
        }
    }
}
I have found the class IdentityManagementService:
IdentityManagementService ims = new IdentityManagementService(configurationServer);
but I don't know how to use the listApplicationGroups and readIdentities methods, which may be useful to find a solution.
Does anyone have an idea how to get the users in every project group?
A few more trials after @Cece-MSFT's answer, and after looking at the blog and the book Microsoft Team Foundation Server 2015 Cookbook. Using this code:
TeamFoundationIdentity[] appGroups = ims.listApplicationGroups(project.getURI(), ReadIdentityOptions.EXTENDED_PROPERTIES);
for (TeamFoundationIdentity group : appGroups)
{
    System.out.println(group.getDisplayName());
    TeamFoundationIdentity[] groupMembers = ims.readIdentities(new IdentityDescriptor[] { group.getDescriptor() }, MembershipQuery.EXPANDED, ReadIdentityOptions.EXTENDED_PROPERTIES);
    for (TeamFoundationIdentity member : groupMembers)
    {
        for (IdentityDescriptor memberID : member.getMembers())
        {
            TeamFoundationIdentity memberInfo = ims.readIdentity(IdentitySearchFactor.IDENTIFIER, memberID.getIdentifier(), MembershipQuery.EXPANDED, ReadIdentityOptions.EXTENDED_PROPERTIES);
            System.out.println(memberInfo.getDisplayName());
        }
    }
}
the variable appGroups is always empty. Maybe the method project.getURI() is not suitable? If I pass null:
TeamFoundationIdentity[] tfi = ims.listApplicationGroups(null, ReadIdentityOptions.INCLUDE_READ_FROM_SOURCE);
for (TeamFoundationIdentity teamFoundationIdentity : tfi) {
    System.out.println(teamFoundationIdentity.getDisplayName());
    System.out.println(teamFoundationIdentity.getDescriptor().getIdentityType());
    IdentityDescriptor[] mbs = teamFoundationIdentity.getMembers();
    for (IdentityDescriptor mb : mbs) {
        TeamFoundationIdentity mbi = ims.readIdentity(mb, MembershipQuery.EXPANDED, ReadIdentityOptions.EXTENDED_PROPERTIES);
        System.out.println(mbi.getProperties());
    }
}
the output is
[DefaultCollection]\Project Collection Administrators
Microsoft.TeamFoundation.Identity
[DefaultCollection]\Project Collection Build Administrators
Microsoft.TeamFoundation.Identity
[DefaultCollection]\Project Collection Build Service Accounts
Microsoft.TeamFoundation.Identity
[DefaultCollection]\Project Collection Proxy Service Accounts
Microsoft.TeamFoundation.Identity
[DefaultCollection]\Project Collection Service Accounts
Microsoft.TeamFoundation.Identity
[DefaultCollection]\Project Collection Test Service Accounts
Microsoft.TeamFoundation.Identity
[DefaultCollection]\Project Collection Valid Users
Microsoft.TeamFoundation.Identity
Why can't I get the Contributors, Readers, and the other groups with project.getURI() in the listApplicationGroups method? I can only get them with:
String[] grps = tpc.getWorkItemClient().getGroupDataProvider(project.getName()).getGroups();
Check this blog. In it, the author used the IGroupSecurityService service to get the list of application groups and the details of which groups a user is a member of.
But now IGroupSecurityService is obsolete. You need to use IIdentityManagementService or ISecurityService instead.
The code snippet should look like:
var sec = tfs.GetService<IIdentityManagementService>();
Identity[] appGroups = sec.ListApplicationGroups(scopeUri);
foreach (Identity group in appGroups)
{
    Identity[] groupMembers = sec.ReadIdentities(SearchFactor.Sid, new string[] { group.Sid }, QueryMembership.Expanded);
    foreach (Identity member in groupMembers)
    {
        var groupM = new GroupMembership { GroupName = member.DisplayName, GroupSid = member.Sid };
        if (member.Members != null)
        {
            foreach (string memberSid in member.Members)
            {
                Identity memberInfo = sec.ReadIdentity(SearchFactor.Sid, memberSid, QueryMembership.Expanded);
                var userName = memberInfo.Domain + "\\" + memberInfo.AccountName;
            }
        }
    }
}
For detailed steps, you can check the blog.
For anyone who is interested, these two links contain the answer:
get groups of a project and get members of a group

How to use JCo connection without creating *.JcoDestination file

I'm trying to connect to SAP ECC 6.0 using JCo. I'm following this tutorial. However, there is a Note saying:
For this example the destination configuration is stored in a file that is called by the program. In practice you should avoid this for security reasons.
And that is reasonable and understood. But there is no explanation of how to set up a secure destination provider.
I found a solution in this thread that creates a custom implementation of DestinationDataProvider, and that works on my local machine. But when I deploy it on the Portal, I get an error saying that there is already a registered DestinationDataProvider.
So my question is:
How to store destination data in SAP Java EE application?
Here is my code to further clarify what I'm trying to do.
public static void main(String... args) throws JCoException {
    CustomDestinationProviderMap provider = new CustomDestinationProviderMap();
    com.sap.conn.jco.ext.Environment.registerDestinationDataProvider(provider);
    Properties connectProperties = new Properties();
    connectProperties.setProperty(DestinationDataProvider.JCO_ASHOST, "host.sap.my.domain.com");
    connectProperties.setProperty(DestinationDataProvider.JCO_SYSNR, "00");
    connectProperties.setProperty(DestinationDataProvider.JCO_CLIENT, "100");
    connectProperties.setProperty(DestinationDataProvider.JCO_USER, "user");
    connectProperties.setProperty(DestinationDataProvider.JCO_PASSWD, "password");
    connectProperties.setProperty(DestinationDataProvider.JCO_LANG, "en");
    provider.addDestination(DESTINATION_NAME1, connectProperties);
    connect();
}

public static void connect() throws JCoException {
    String FUNCTION_NAME = "BAPI_EMPLOYEE_GETDATA";
    JCoDestination destination = JCoDestinationManager.getDestination(DESTINATION_NAME1);
    JCoContext.begin(destination);
    JCoFunction function = destination.getRepository().getFunction(FUNCTION_NAME);
    if (function == null) {
        throw new RuntimeException(FUNCTION_NAME + " not found in SAP.");
    }
    //function.getImportParameterList().setValue("EMPLOYEE_ID", "48");
    function.getImportParameterList().setValue("FSTNAME_M", "ANAKIN");
    function.getImportParameterList().setValue("LASTNAME_M", "SKYWALKER");
    try {
        function.execute(destination);
    } catch (AbapException e) {
        System.out.println(e.toString());
        return;
    }
    JCoTable table = function.getTableParameterList().getTable("PERSONAL_DATA");
    for (int i = 0; i < table.getNumRows(); i++) {
        table.setRow(i);
        System.out.println(table.getString("PERNO") + '\t' + table.getString("FIRSTNAME") + '\t' + table.getString("LAST_NAME")
                + '\t' + table.getString("BIRTHDATE") + '\t' + table.getString("GENDER"));
    }
    JCoContext.end(destination);
}
Ok, so I got this up and going and thought I'd share my research.
You need to add your own destination in the Portal. To achieve that, go to the NetWeaver Administrator, located at host:port/nwa, e.g. sapportal.your.domain.com:50000/nwa.
Then go to Configuration -> Infrastructure -> Destinations and add your destination there. You can leave most of the fields, like Message Server, empty. The important parts are the destination name, which is how you will retrieve it, and the destination type, which in my case should be set to RFC Destination. Try pinging your newly created destination to check that it's up and running.
Finally, you should be able to get the destination by simply calling JCoDestination destination = JCoDestinationManager.getDestination(DESTINATION_NAME);, as it is added to your Portal environment and managed from there.
Take a look at the CustomDestinationDataProvider in the JCo examples of the JCo connector download. The important parts are:
static class MyDestinationDataProvider implements DestinationDataProvider
...
com.sap.conn.jco.ext.Environment.registerDestinationDataProvider(new MyDestinationDataProvider());
Then you can simply do:
instance = JCoDestinationManager.getDestination(DESTINATION_NAME);
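For reference, a minimal in-memory implementation of such a provider might look like the sketch below, written against the sapjco3 DestinationDataProvider interface; in a real deployment the properties should come from secure storage rather than a hard-coded map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import com.sap.conn.jco.ext.DestinationDataEventListener;
import com.sap.conn.jco.ext.DestinationDataProvider;

// Sketch: serves destination properties from an in-memory map.
public class MyDestinationDataProvider implements DestinationDataProvider {
    private final Map<String, Properties> destinations = new HashMap<>();

    // Register or replace a destination's connection properties.
    public void addDestination(String name, Properties properties) {
        destinations.put(name, properties);
    }

    @Override
    public Properties getDestinationProperties(String destinationName) {
        Properties properties = destinations.get(destinationName);
        if (properties == null) {
            throw new RuntimeException("Destination " + destinationName + " is not configured");
        }
        return properties;
    }

    @Override
    public void setDestinationDataEventListener(DestinationDataEventListener listener) {
        // No change events for a static, in-memory configuration.
    }

    @Override
    public boolean supportsEvents() {
        return false;
    }
}
```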
Btw. you may also want to check out http://hibersap.org/ as they provide nice ways to store the config as well.

Java ProgramCall.run hangs

I'm trying to call an RPG function from Java and got this example from JamesA, but now I am having trouble. Here is my code:
AS400 system = new AS400("MachineName");
ProgramCall program = new ProgramCall(system);
try
{
    // Initialise the name of the program to run.
    String programName = "/QSYS.LIB/LIBNAME.LIB/FUNNAME.PGM";
    // Set up the two parameters.
    ProgramParameter[] parameterList = new ProgramParameter[2];
    // First parameter is to input a name.
    AS400Text OperationsItemId = new AS400Text(20);
    parameterList[0] = new ProgramParameter(OperationsItemId.toBytes("TestID"));
    AS400Text CaseMarkingValue = new AS400Text(20);
    parameterList[1] = new ProgramParameter(CaseMarkingValue.toBytes("TestData"));
    // Set the program name and parameter list.
    program.setProgram(programName, parameterList);
    // Run the program.
    if (program.run() != true)
    {
        // Report failure.
        System.out.println("Program failed!");
        // Show the messages.
        AS400Message[] messagelist = program.getMessageList();
        for (int i = 0; i < messagelist.length; ++i)
        {
            // Show each message.
            System.out.println(messagelist[i]);
        }
    }
    // Else no error, get output data.
    else
    {
        AS400Text text = new AS400Text(50);
        System.out.println(text.toObject(parameterList[0].getOutputData()));
        System.out.println(text.toObject(parameterList[1].getOutputData()));
    }
}
catch (Exception e)
{
    //System.out.println("Program " + program.getProgram() + " issued an exception!");
    e.printStackTrace();
}
// Done with the system.
system.disconnectAllServices();
The application hangs at the line if (program.run() != true); I wait for about 10 minutes and then terminate the application.
Any idea what I am doing wrong?
Edit
Here is the message on the job log:
Client request - run program QSYS/QWCRTVCA.
Client request - run program LIBNAME/FUNNAME.
File P6CASEL2 in library *LIBL not found or inline data file missing.
Error message CPF4101 appeared during OPEN.
Cannot resolve to object YOBPSSR. Type and Subtype X'0201' Authority
FUNNAME inserts a row into table P6CASEPF through a view called P6CASEL2. P6CASEL2 is in a different library, let's say LIBNAME2. Is there a way to set the job description?
Are you sure FUNNAME.PGM is terminating and not hung with a MSGW? Check QSYSOPR for any messages.
Class ProgramCall:
NOTE: When the program runs within the host server job, the library list will be the initial library list specified in the job description in the user profile.
So I saw that my problem is that my library list is not set up, and for some reason the user we are using does not have a job description. To overcome this, I added the following code before calling program.run():
CommandCall command = new CommandCall(system);
command.run("ADDLIBLE LIB(LIBNAME)");
command.run("ADDLIBLE LIB(LIBNAME2)");
This simply adds LIBNAME and LIBNAME2 to the user's library list.
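One caveat: CommandCall.run() returns false on failure without throwing, so a silently failing ADDLIBLE would reproduce the original symptom. A sketch that surfaces those failures, assuming the same JTOpen classes as above; the host name and library names are placeholders:

```java
import com.ibm.as400.access.AS400;
import com.ibm.as400.access.AS400Message;
import com.ibm.as400.access.CommandCall;

// Sketch: add libraries to the job's library list and print the
// job log messages if any ADDLIBLE fails (e.g. library not found).
public class AddLibraryListEntries {
    public static void main(String[] args) throws Exception {
        AS400 system = new AS400("MachineName"); // placeholder host
        CommandCall command = new CommandCall(system);
        for (String lib : new String[] { "LIBNAME", "LIBNAME2" }) {
            if (!command.run("ADDLIBLE LIB(" + lib + ")")) {
                for (AS400Message msg : command.getMessageList()) {
                    System.out.println(msg.getID() + ": " + msg.getText());
                }
            }
        }
        system.disconnectAllServices();
    }
}
```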
Oh yes, the problem is the library list not being set. Take a look at this discussion on Midrange.com; there are different workarounds:
http://archive.midrange.com/java400-l/200909/msg00032.html
