I'm trying to grab a list of snapshots from our Storage Gateway and put them into a JTable. However, when I use the AWS Java API to retrieve a list of snapshots, I am only able to retrieve what appear to be the public snapshots published by Amazon. When I set DescribeSnapshotsRequest.setOwnerIds() to include "self", the list is empty.
Here is the offending code:
private void createTable() {
    Object[][] data = null;
    String[] columns = new String[]{"Snapshot ID", "Owner ID", "Snapshot Date"};
    DescribeSnapshotsRequest req = new DescribeSnapshotsRequest();
    req.setOwnerIds(Arrays.<String>asList("self"));
    try {
        snapshots = ec2Client.describeSnapshots(req).getSnapshots();
        data = new Object[snapshots.size()][3];
        int i = 0;
        for (Snapshot item : snapshots) {
            data[i][0] = item.getSnapshotId();
            data[i][1] = item.getOwnerId();
            data[i][2] = item.getStartTime();
            i++;
        }
    } catch (Exception e) {
        System.out.println("Invalid Credentials!");
    }
    table = new JTable(data, columns);
    table.setAutoCreateRowSorter(true);
}
The List of snapshots is empty unless I either remove the DescribeSnapshotsRequest or set the owner ID to "amazon".
Long story short, why can't I access my private snapshots from the Storage Gateway?
Figured it out. Turns out you have to explicitly define the EC2 endpoint. Somehow I missed that step.
Here is the list of endpoints:
http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region
ec2Client.setEndpoint("<Endpoint URL>"); // setEndpoint is an instance method on AmazonEC2Client
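For reference, a minimal sketch of how the fix fits together, assuming the AWS SDK for Java v1 client and credentials from the question; the us-east-1 endpoint is only an example, so substitute the region your gateway stores snapshots in:
// Sketch only: point the client at the region that holds your snapshots.
// The endpoint below is an example value; see the endpoint list linked above.
AmazonEC2Client ec2Client = new AmazonEC2Client(credentials);
ec2Client.setEndpoint("https://ec2.us-east-1.amazonaws.com");

// With the endpoint set, "self" returns the account's own snapshots.
DescribeSnapshotsRequest req = new DescribeSnapshotsRequest().withOwnerIds("self");
List<Snapshot> snapshots = ec2Client.describeSnapshots(req).getSnapshots();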
We have an application that loads all contacts stored in an account using the Microsoft Graph API. The initial call we issue is https://graph.microsoft.com/v1.0/users/{userPrincipalName}/contacts?$count=true&$orderBy=displayName%20ASC&$top=100, but we use the Java SDK to do that. Then we iterate over all pages and store all loaded contacts in a Set (local cache).
We do this every 5 minutes using an account with over 3000 contacts, and sometimes the count of contacts we receive via $count does not match the number of contacts we loaded and stored in the local cache.
Verifying the numbers manually, we can say that the count was always correct, but there are contacts missing.
We use the following code to achieve this.
public List<Contact> loadContacts() {
    Set<Contact> contacts = new TreeSet<>((contact1, contact2) -> StringUtils.compare(contact1.id, contact2.id));
    List<QueryOption> requestOptions = List.of(
            new QueryOption("$count", true),
            new QueryOption("$orderBy", "displayName ASC"),
            new QueryOption("$top", 100)
    );
    ContactCollectionRequestBuilder pageRequestBuilder = null;
    ContactCollectionRequest pageRequest;
    boolean hasNextPage = true;
    while (hasNextPage) {
        // initialize page request
        if (pageRequestBuilder == null) {
            pageRequestBuilder = graphClient.users(userId).contacts();
            pageRequest = pageRequestBuilder.buildRequest(requestOptions);
        } else {
            pageRequest = pageRequestBuilder.buildRequest();
        }
        // load
        ContactCollectionPage contactsPage = pageRequest.get();
        if (contactsPage == null) {
            throw new IllegalStateException("request returned a null page");
        } else {
            contacts.addAll(contactsPage.getCurrentPage());
        }
        // handle next page
        hasNextPage = contactsPage.getNextPage() != null;
        if (hasNextPage) {
            pageRequestBuilder = contactsPage.getNextPage();
        } else if (contactsPage.getCount() != null && !Objects.equals(contactsPage.getCount(), (long) contacts.size())) {
            throw new IllegalStateException(String.format("loaded %d contacts but response indicated %d contacts", contacts.size(), contactsPage.getCount()));
        } else {
            // done
        }
    }
    log.info("{} contacts loaded using graph API", contacts.size());
    return new ArrayList<>(contacts);
}
Initially, we did not put the loaded contacts in a Set by ID but just in a List. With the List we very often got more contacts than $count. My idea was that there is some caching going on and some pages get fetched multiple times. Using the Set we can make sure that we only have unique contacts in our local cache.
But using the Set, we sometimes have fewer contacts than $count, meaning some pages got skipped and we end up in the condition that throws the IllegalStateException.
Currently, we use microsoft-graph 5.8.0 and azure-identity 1.4.2.
Have you experienced similar issues, and can you help us solve this problem?
Or do you have any idea what could be causing these inconsistent results?
Your help is very much appreciated!
I'm sorry this question header is not 100% correct; because of that, I'll explain my scenario here.
I created a function to merge four data sets into one return format, because that's the format the front-end side needs. This is working fine now.
public ReturnFormat makeThribleLineChart(List<NameCountModel> totalCount, List<NameCountModel> p1Count, List<NameCountModel> p2Count, List<NameCountModel> average) {
    ReturnFormat returnFormat = new ReturnFormat(null, null);
    try {
        String[] totalData = new String[totalCount.size()];
        String[] p1Data = new String[p1Count.size()];
        String[] p2Data = new String[p2Count.size()];
        String[] averageData = new String[average.size()];
        String[] lableList = new String[totalCount.size()];
        for (int x = 0; x < totalCount.size(); x++) {
            totalData[x] = totalCount.get(x).getCount();
            p1Data[x] = p1Count.get(x).getCount();
            p2Data[x] = p2Count.get(x).getCount();
            averageData[x] = average.get(x).getCount();
            lableList[x] = totalCount.get(x).getName();
        }
        FormatHelper<String[]> totalFormatHelper = new FormatHelper<String[]>();
        totalFormatHelper.setData(totalData);
        totalFormatHelper.setType("line");
        totalFormatHelper.setLabel("Uudet");
        totalFormatHelper.setyAxisID("y-axis-1");
        FormatHelper<String[]> p1FormatHelper = new FormatHelper<String[]>();
        p1FormatHelper.setData(p1Data);
        p1FormatHelper.setType("line");
        p1FormatHelper.setLabel("P1 päivystykseen heti");
        FormatHelper<String[]> p2FormatHelper = new FormatHelper<String[]>();
        p2FormatHelper.setData(p2Data);
        p2FormatHelper.setType("line");
        p2FormatHelper.setLabel("P2 päivystykseen muttei yöllä");
        FormatHelper<String[]> averageFormatHelper = new FormatHelper<String[]>();
        averageFormatHelper.setData(averageData);
        averageFormatHelper.setType("line");
        averageFormatHelper.setLabel("Jonotusaika keskiarvo");
        averageFormatHelper.setyAxisID("y-axis-2");
        List<FormatHelper<String[]>> formatHelpObj = new ArrayList<FormatHelper<String[]>>();
        formatHelpObj.add(totalFormatHelper);
        formatHelpObj.add(p1FormatHelper);
        formatHelpObj.add(p2FormatHelper);
        formatHelpObj.add(averageFormatHelper);
        returnFormat.setData(formatHelpObj);
        returnFormat.setLabels(lableList);
        returnFormat.setMessage(Messages.Success);
        returnFormat.setStatus(ReturnFormat.Status.SUCCESS);
    } catch (Exception e) {
        returnFormat.setData(null);
        returnFormat.setMessage(Messages.InternalServerError);
        returnFormat.setStatus(ReturnFormat.Status.ERROR);
    }
    return returnFormat;
}
So, as you can see here, all the formatting is hardcoded. My question is how to generalize this code for any number of datasets. Let's assume next time I have to create chart formatting for five datasets; then I would have to write yet another function for it. That's the duplication I want to reduce. I hope you can understand my question.
Thank you.
You're trying to solve the more general problem of composing a result object (in this case ReturnFormat) based on dynamic information. In addition, there's some metadata being set up along with each dataset: the type, label, etc. In the example you've posted, you've hardcoded the relationship between a dataset and this metadata, but you'd need some way to establish this relationship dynamically if you have a variable number of parameters.
Therefore, you have a couple of options:
Make makeThribleLineChart a varargs method that accepts a variable number of parameters representing your data. Now you have the problem of associating metadata with your parameters; the best option is probably to wrap the data and metadata together in some new object that is provided as each param of makeThribleLineChart.
So you'll end up with a signature that looks a bit like ReturnFormat makeThribleLineChart(DataMetadataWrapper... allDatasets), where DataMetadataWrapper contains everything required to build one FormatHelper instance.
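For illustration, a minimal sketch of that first option; DataMetadataWrapper and its fields are hypothetical names, and FormatHelper, ReturnFormat, NameCountModel, and Messages are assumed to behave exactly as in the question:
// Hypothetical wrapper pairing one dataset with its presentation metadata.
class DataMetadataWrapper {
    final List<NameCountModel> data;
    final String type;
    final String label;
    final String yAxisId; // may be null when the dataset has no dedicated axis

    DataMetadataWrapper(List<NameCountModel> data, String type, String label, String yAxisId) {
        this.data = data;
        this.type = type;
        this.label = label;
        this.yAxisId = yAxisId;
    }
}

public ReturnFormat makeThribleLineChart(DataMetadataWrapper... allDatasets) {
    ReturnFormat returnFormat = new ReturnFormat(null, null);
    List<FormatHelper<String[]>> formatHelpObj = new ArrayList<>();
    String[] labelList = null;
    for (DataMetadataWrapper dataset : allDatasets) {
        String[] counts = new String[dataset.data.size()];
        String[] names = new String[dataset.data.size()];
        for (int i = 0; i < dataset.data.size(); i++) {
            counts[i] = dataset.data.get(i).getCount();
            names[i] = dataset.data.get(i).getName();
        }
        if (labelList == null) {
            labelList = names; // labels taken from the first dataset, as in the original
        }
        FormatHelper<String[]> helper = new FormatHelper<String[]>();
        helper.setData(counts);
        helper.setType(dataset.type);
        helper.setLabel(dataset.label);
        if (dataset.yAxisId != null) {
            helper.setyAxisID(dataset.yAxisId);
        }
        formatHelpObj.add(helper);
    }
    returnFormat.setData(formatHelpObj);
    returnFormat.setLabels(labelList);
    returnFormat.setMessage(Messages.Success);
    returnFormat.setStatus(ReturnFormat.Status.SUCCESS);
    return returnFormat;
}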
Use a builder pattern, similar to the collection builders in Guava, for example something like this:
class ThribbleLineChartBuilder {
    List<FormatHelper<String[]>> formatHelpObj = new ArrayList<>();

    ThribbleLineChartBuilder addDataSet(String describeType, String label, String yAxisId, List<NameCountModel> data) {
        String[] dataArray = ... ; // build your array of data
        FormatHelper<String[]> formatHelper = new FormatHelper<String[]>();
        formatHelper.setData(dataArray);
        formatHelper.setType(describeType);
        ... // set any other parameters that the FormatHelper requires here
        formatHelpObj.add(formatHelper);
        return this;
    }

    ReturnFormat build() {
        ReturnFormat returnFormat = new ReturnFormat(null, null);
        returnFormat.setData(this.formatHelpObj);
        ... // set up any other fields you need in ReturnFormat
        return returnFormat;
    }
}

// usage:
new ThribbleLineChartBuilder()
    .addDataSet("line", "Uudet", "y-axis-1", totalCount)
    .addDataSet("line", "P1 päivystykseen heti", null, p1Count)
    ... // add your other data sources
    .build()
I am trying to update BigQuery table data using Java. WriteDisposition is an option according to my research. I am a bit of a novice and couldn't get through it; kindly help.
That being said, I have tried to insert using WriteChannelConfiguration, which worked fine. I need to make changes to this code to update the table.
public class BigQryAPI {
    public static void explicit() {
        // Load credentials from JSON key file. If you can't set the GOOGLE_APPLICATION_CREDENTIALS
        // environment variable, you can explicitly load the credentials file to construct the
        // credentials.
        try {
            GoogleCredentials credentials;
            File credentialsPath = new File(BigQryAPI.class.getResource("/firstprojectkey.json").getPath()); // TODO: update to your key path.
            FileInputStream serviceAccountStream = new FileInputStream(credentialsPath);
            credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);
            // Instantiate a client
            BigQuery bigquery =
                    BigQueryOptions.newBuilder().setCredentials(credentials).build().getService();
            System.out.println("Datasets:");
            for (Dataset dataset : bigquery.listDatasets().iterateAll()) {
                System.out.printf("%s%n", dataset.getDatasetId().getDataset());
            }
            // Load into table
            TableId tableId = TableId.of("firstdataset", "firsttable");
            WriteChannelConfiguration writeChannelConfiguration =
                    WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv()).build();
            TableDataWriteChannel writer = bigquery.writer(writeChannelConfiguration);
            String csvdata = "zzzxyz,zzzxyz";
            // Write data to writer
            try {
                writer.write(ByteBuffer.wrap(csvdata.getBytes(Charsets.UTF_8)));
            } finally {
                writer.close();
            }
            // Get load job
            Job job = writer.getJob();
            job = job.waitFor();
            LoadStatistics stats = job.getStatistics();
            System.out.println("These are my stats: " + stats);
            String query = "SELECT Name,Phone FROM `firstproject-256319.firstdataset.firsttable`;";
            QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
            // Print the results.
            for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
                for (FieldValue val : row) {
                    System.out.printf("%s,", val.toString());
                }
                System.out.printf("\n");
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
We can set the write disposition while building the WriteChannelConfiguration:
WriteChannelConfiguration writeChannelConfiguration = WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv())
        .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE).build();
Details can be found in the BigQuery API docs.
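Wired into the loader from the question, that looks roughly like the following; a sketch only, reusing the bigquery client, tableId, and CSV payload defined above, with WRITE_TRUNCATE replacing existing rows (WRITE_APPEND would add to them instead):
// Sketch: same CSV load as in the question, but with an explicit write disposition.
WriteChannelConfiguration writeChannelConfiguration =
        WriteChannelConfiguration.newBuilder(tableId)
                .setFormatOptions(FormatOptions.csv())
                .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
                .build();
TableDataWriteChannel writer = bigquery.writer(writeChannelConfiguration);
try {
    writer.write(ByteBuffer.wrap(csvdata.getBytes(Charsets.UTF_8)));
} finally {
    writer.close();
}
Job job = writer.getJob().waitFor(); // the completed load job applies the disposition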
Could someone help me out with how to get data from a table in the database by sending a list?
List<CampaignStructure> campIdList = new ArrayList<>();
try {
    JSONArray clientjson = new JSONArray(campaignids);
    Set<Long> list = new HashSet<>();
    for (int i = 0; i < clientjson.length(); i++) {
        list.add(Long.parseLong(clientjson.getJSONObject(i).getString("id")));
    }
    for (Long id : list) {
        CampaignStructure c = new CampaignStructure();
        c.setCampaignId(id);
        campIdList.add(c);
    }
} catch (JSONException e) {
}
Here campaignids contains a list of IDs (which can contain duplicates).
campIdList contains the distinct IDs. I want to send these IDs and get the data from the database.
A derived query such as findByCampaignIdIn(Collection<Long> campaignIds) should work.
All JPA repository query keywords are listed in the current documentation.
It shows that IsIn is equivalent (if you prefer the verb for readability) and that JPA also supports NotIn and IsNotIn.
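For illustration, a minimal sketch of such a repository; CampaignRepository and the Campaign entity are hypothetical names, and it assumes you pass the distinct IDs themselves rather than the CampaignStructure wrappers:
import java.util.Collection;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical Spring Data JPA repository; Campaign is assumed to be the entity
// mapping the table you want to read, with a campaignId attribute backing the keyword.
public interface CampaignRepository extends JpaRepository<Campaign, Long> {
    List<Campaign> findByCampaignIdIn(Collection<Long> campaignIds);
}
Calling campaignRepository.findByCampaignIdIn(list) with the Set<Long> built from the JSON then returns all matching rows in a single query.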
Can anyone give me advice on how to read the general ledger using SuiteTalk, the SOAP API from NetSuite?
For example, if you look at an account or a transaction on the NetSuite UI, there is an option to select "GL Impact". This produces a list of relevant general ledger entries.
However, I couldn't figure out a way to get the same list using SuiteTalk. One initially promising SOAP operation I tried calling was getPostingTransactionSummary(), but that is just a summary and lacks detail such as transaction dates. Another way is to call search() passing a TransactionSearchBasic object. That returns too many types of transaction and I'm not sure which of those actually have an impact on the general ledger.
I'm using Java and Axis toolkit for the SOAP operations, but examples in any language whatsoever (or raw SOAP XML) would be appreciated.
You are on the right track with your transaction search.
You are looking for transactions where posting is true and where the line has an account.
However, I'd set this up in the saved search editor, at least until you've figured out how you are going to filter down to a manageable number of lines. Then use TransactionSearchAdvanced with savedSearchId to pull that info via SuiteTalk.
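For illustration, a rough sketch of that second step in Java; the class and setter names come from Axis-generated SuiteTalk stubs and may differ in your bindings, and the saved search internal ID and port variable are placeholders:
// Rough sketch, assuming Axis-generated SuiteTalk stubs; netSuitePort is the
// already logged-in NetSuitePortType from your existing session.
TransactionSearchAdvanced advancedSearch = new TransactionSearchAdvanced();
advancedSearch.setSavedSearchId("123"); // placeholder: internal ID of your saved search
SearchResult result = netSuitePort.search(advancedSearch);
// The rows returned by the saved search are then available on the result's searchRowList.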
I am able to search GL transactions with the code below (C#); this could help you.
public void GetTransactionData()
{
    DataTable dtData = new DataTable();
    string errorMsg = "";
    LoginToService(ref errorMsg);

    TransactionSearch objTransSearch = new TransactionSearch();
    TransactionSearchBasic objTransSearchBasic = new TransactionSearchBasic();

    SearchEnumMultiSelectField semsf = new SearchEnumMultiSelectField();
    semsf.@operator = SearchEnumMultiSelectFieldOperator.anyOf;
    semsf.operatorSpecified = true;
    semsf.searchValue = new string[] { "Journal" };
    objTransSearchBasic.type = semsf;
    objTransSearchBasic.postingPeriod = new RecordRef() { internalId = "43" };
    objTransSearch.basic = objTransSearchBasic;

    // Set search preferences
    SearchPreferences _searchPreferences = new SearchPreferences();
    Preferences _prefs = new Preferences();
    _serviceInstance.preferences = _prefs;
    _serviceInstance.searchPreferences = _searchPreferences;
    _searchPreferences.pageSize = 1000;
    _searchPreferences.pageSizeSpecified = true;
    _searchPreferences.bodyFieldsOnly = false;

    try
    {
        SearchResult result = _serviceInstance.search(objTransSearch);
        List<JournalEntry> lstJEntry = new List<JournalEntry>();
        List<JournalEntryLine> lstLineItems = new List<JournalEntryLine>();
        if (result.status.isSuccess)
        {
            for (int i = 0; i <= result.recordList.Length - 1; i += 1)
            {
                JournalEntry JEntry = (JournalEntry)result.recordList[i];
                lstJEntry.Add((JournalEntry)result.recordList[i]);
                if (JEntry.lineList != null)
                {
                    foreach (JournalEntryLine line in JEntry.lineList.line)
                    {
                        lstLineItems.Add(line);
                    }
                }
            }
        }
        try
        {
            _serviceInstance.logout();
        }
        catch (Exception ex)
        {
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}