Get Items in a PurchaseOrder using SuiteTalk - java

I am attempting to get the items and some of the related information from a Purchase Order with SuiteTalk. I am able to get the desired Purchase Orders with TransactionSearch using the following in Scala:
val transactionSearch = new TransactionSearch
val search = new TransactionSearchBasic
...
search.setLastModifiedDate(searchLastModified) //Gets POs modified in the last 10 minutes
transactionSearch.setBasic(search)
val result = port.search(transactionSearch)
I am able to cast each result to a record as an instance of the PurchaseOrder class.
if (result.getStatus().isIsSuccess()) {
println("Transactions: " + result.getTotalRecords)
for (i <- 0 until result.getTotalRecords) {
try {
val record = result.getRecordList.getRecord.get(i).asInstanceOf[PurchaseOrder]
record.get<...>
}
catch {...}
}
}
From here I am able to use the getters to access the individual fields, except for the ItemList.
I can see in the NetSuite web interface that there are items attached to the Purchase Orders. However, calling getItemList on the result record always returns null.
Any thoughts?

I think you have not set search preferences, and that is why you are not able to fetch the purchase order line items. You will have to use the following search preferences in your code:
SearchPreferences preference = new SearchPreferences();
preference.bodyFieldsOnly = false;
_service.searchPreferences = preference;
The following is a working example (in C#) using the above preferences:
private void SearchPurchaseOrderByID(string strPurchaseOrderId)
{
TransactionSearch tranSearch = new TransactionSearch();
TransactionSearchBasic tranSearchBasic = new TransactionSearchBasic();
RecordRef poRef = new RecordRef();
poRef.internalId = strPurchaseOrderId;
poRef.type = RecordType.purchaseOrder;
poRef.typeSpecified = true;
RecordRef[] poRefs = new RecordRef[1];
poRefs[0] = poRef;
SearchMultiSelectField poID = new SearchMultiSelectField();
poID.searchValue = poRefs;
poID.@operator = SearchMultiSelectFieldOperator.anyOf;
poID.operatorSpecified = true;
tranSearchBasic.internalId = poID;
tranSearch.basic = tranSearchBasic;
InitService();
SearchResult results = _service.search(tranSearch);
if (results.status.isSuccess && results.status.isSuccessSpecified)
{
Record[] poRecords = results.recordList;
PurchaseOrder purchaseOrder = (PurchaseOrder)poRecords[0];
PurchaseOrderItemList poItemList = purchaseOrder.itemList;
PurchaseOrderItem[] poItems = poItemList.item;
if (poItems != null && poItems.Length > 0)
{
for (var i = 0; i < poItems.Length; i++)
{
Console.WriteLine("Item Line On PO = " + poItems[i].line);
Console.WriteLine("Item Quantity = " + poItems[i].quantity);
Console.WriteLine("Item Descrition = " + poItems[i].description);
}
}
}
}
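Since the question is in Scala against the Java (Axis) bindings rather than C#, here is a minimal sketch of the same idea for that stack. It is an assumption on my part, not part of the answer above: it presumes an Axis-generated NetSuiteBindingStub and a 2014_1-era WSDL, so the stub class name and the header namespace URN may differ in your generated code.
// Hedged sketch: send bodyFieldsOnly=false in the searchPreferences SOAP header
// so that item lists are returned, then run the existing TransactionSearch.
SearchPreferences prefs = new SearchPreferences();
prefs.setBodyFieldsOnly(Boolean.FALSE);
NetSuiteBindingStub stub = (NetSuiteBindingStub) port;   // assumed Axis stub class
stub.clearHeaders();
stub.setHeader("urn:messages_2014_1.platform.webservices.netsuite.com",   // match your WSDL version
        "searchPreferences", prefs);
SearchResult result = stub.search(transactionSearch);
// getItemList() on the PurchaseOrder records should now return the item list instead of null.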


Get number of entries in properties file apache commons

I'm creating a list of IP addresses to ping, and a user can add to the list, which is then saved to a properties file in the form site.name1 = ..., site.name2 = ..., and so on.
Currently I have a for loop with a fixed count. Is there a way to get the number of entries in a properties file, so I can set the loop bound instead of waiting for an exception?
PropertiesConfiguration config = configs.properties(new File("IPs.properties"));
//initially check for how many values there are - set to max increments for loop
for (int i = 0; i < 3; i++) { //todo fix
siteName = config.getString("site.name" + i);
siteAddress = config.getString("site.address" + i);
SiteList.add(i, siteName);
IPList.add(i, siteAddress);
}
I've looked through the documentation and other questions but they seem to be unrelated.
Based on the documentation, it looks to me like you should be able to use PropertiesConfiguration#getLayout#getKeys to get a Set of all keys as Strings.
I had to modify the code a bit to use apache-commons-configuration-1.10
PropertiesConfiguration config = new PropertiesConfiguration("ips.properties");
PropertiesConfigurationLayout layout = config.getLayout();
String siteName = null;
String siteAddress = null;
for (String key : layout.getKeys()) {
String value = config.getString(key);
if (value == null) {
throw new IllegalStateException(String.format("No value found for key: %s", key));
}
if (key.equals("site.name")) {
siteName = value;
} else if (key.equals("site.address")) {
siteAddress = value;
} else {
throw new IllegalStateException(String.format("Unsupported key: %s", key));
}
}
System.out.println(String.format("name=%s, address=%s", siteName, siteAddress));
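Since the keys in the question are numbered (site.name1, site.name2, ...), a small sketch for sizing the loop could also just count the keys with that prefix. This is only an illustration built on the question's key layout; it assumes commons-configuration 1.10 and consecutive numbering starting at the same index the original loop uses.
import java.util.Iterator;
import org.apache.commons.configuration.PropertiesConfiguration;
...
PropertiesConfiguration config = new PropertiesConfiguration("IPs.properties");
// Count the site.nameN entries so the loop bound is no longer hard-coded.
int entryCount = 0;
for (Iterator<?> it = config.getKeys(); it.hasNext();) {
    if (String.valueOf(it.next()).startsWith("site.name")) {
        entryCount++;
    }
}
for (int i = 0; i < entryCount; i++) {   // assumes numbering starts at 0, as in the original loop
    SiteList.add(i, config.getString("site.name" + i));
    IPList.add(i, config.getString("site.address" + i));
}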

Why is my Spark driver program so slow?

My problem: I have a model engine that takes a list of parameter configurations and evaluates a double value corresponding to the metric associated with that configuration. I have six parameters, and each of them can vary according to a list. I want to find, by brute force, the best parameter configuration, i.e. the combination that produces the highest value of the output metric. Since I'm learning Spark, I realized that with the cartesian product operation I can easily generate the combinations and split the RDD to be processed in parallel. So, I came up with this driver program:
public static void main(String[] args) {
String scriptName = "model.mry";
String scriptStr = null;
try {
scriptStr = new String(Files.readAllBytes(Paths.get(scriptName)));
} catch (IOException ex) {
Logger.getLogger(BruteForceDriver.class.getName()).log(Level.SEVERE, null, ex);
System.exit(1);
}
final String script = scriptStr;
SparkConf conf = new SparkConf()
.setAppName("wordCount")
.setSparkHome("/home/danilo/bin/spark-2.2.0-bin-hadoop2.7")
.setJars(new String[]{"/home/danilo/NetBeansProjects/SparkHello1/target/SparkHello1-1.0.jar",
"/home/danilo/.m2/repository/org/modcs/mercury/4.7/mercury-4.7.jar"})
.setMaster("spark://danilo-desktop:7077");
String baseDir = "/home/danilo/NetBeansProjects/SimulationOptimization/workspace/";
JavaSparkContext sc = new JavaSparkContext(conf);
final int NUM_SERVICES = 6;
final int QTD = 3;
JavaRDD<Service>[] providers = new JavaRDD[NUM_SERVICES];
for (int i = 1; i <= NUM_SERVICES; i++) {
providers[i - 1] = sc.textFile(baseDir + "provider"
+ i
+ ".mat")
.filter((t1) -> !t1.contains("#") && !t1.trim().isEmpty())
.map(Service.createParser("" + i))
.zipWithIndex().filter((t1) -> {
return t1._2 < QTD;
}).keys();
}
JavaPairRDD c = null;
JavaRDD<Service> p = providers[0];
for (int i = 1; i < NUM_SERVICES; i++) {
if (c == null) {
c = p.cartesian(providers[i]);
} else {
c = c.cartesian(providers[i]);
}
}
JavaRDD<List<Service>> cartesian = c.map(new FlattenTuple<>());
final Broadcast<ModelEvaluator> model = sc.broadcast(new ModelEvaluator(script));
JavaPairRDD<Double, List<Service>> results = cartesian.mapToPair(
(t) -> {
try {
double val = model.value().evaluateModel(t);
System.out.println(val);
return new Tuple2<>(val, t);
} catch (Exception ex) {
return null;
}
}
);
results.sortByKey().collect().forEach((t) -> {
System.out.println(t._1 + ", " + t._2);
});
sc.close();
}
The "QTD" variable allows me to control the size the interval which each parameter will vary. For QTD = 3, I'll have 3^6 = 729 combinations. The problem is that it is taking so long to compute all those combinations. I wrote a implementations using only normal Java threads, and the runtime is about 40 seconds. Using my Spark driver program, the runtime more than 6 minutes. Why is my Spark program so slow compared to the plain Java multi-thread program?
Edit:
I put:
results = results.cache();
before sorting the results and now the runtime is 2.5 minutes.
Edit 2:
I created an RDD with the cartesian product of the parameters by hand instead of using the operation provided by the framework; a sketch of the idea is below. Now my runtime is 1'25''. It does make sense now, since there is some overhead to start the driver and ship the jars to the workers.
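For illustration, a minimal sketch of building the combinations by hand and parallelizing them once could look like the code below. This is not the exact code from the edit; it assumes the six provider files are small enough to be read into plain lists on the driver (providerLists is a hypothetical List<List<Service>> built that way) and reuses the Service class, the broadcast ModelEvaluator and the JavaSparkContext from the program above.
// Hedged sketch: enumerate all combinations locally, then parallelize them once.
List<List<Service>> combos = new ArrayList<>();
combos.add(new ArrayList<>());                    // start from a single empty combination
for (List<Service> options : providerLists) {     // one option list per parameter
    List<List<Service>> next = new ArrayList<>();
    for (List<Service> partial : combos) {
        for (Service option : options) {
            List<Service> extended = new ArrayList<>(partial);
            extended.add(option);
            next.add(extended);
        }
    }
    combos = next;                                // grows to 3^6 = 729 lists for QTD = 3
}
JavaRDD<List<Service>> cartesian = sc.parallelize(combos, sc.defaultParallelism());
JavaPairRDD<Double, List<Service>> results = cartesian.mapToPair(
        t -> new Tuple2<>(model.value().evaluateModel(t), t));
Compared to the chained cartesian() calls, which multiply the partition count at every step, this keeps a single, evenly sized RDD of 729 elements.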

How to extract key phrases from a given text with OpenNLP?

I'm using Apache OpenNLP and I'd like to extract the key phrases of a given text. I'm already gathering entities, but I would like to have key phrases.
The problem I have is that I can't use TF-IDF, because I don't have models for that and I only have a single text (not multiple documents).
Here is some code (prototyped, not so clean):
public List<KeywordsModel> extractKeywords(String text, NLPProvider pipeline) {
SentenceDetectorME sentenceDetector = new SentenceDetectorME(pipeline.getSentencedetecto("en"));
TokenizerME tokenizer = new TokenizerME(pipeline.getTokenizer("en"));
POSTaggerME posTagger = new POSTaggerME(pipeline.getPosmodel("en"));
ChunkerME chunker = new ChunkerME(pipeline.getChunker("en"));
ArrayList<String> stopwords = pipeline.getStopwords("en");
Span[] sentSpans = sentenceDetector.sentPosDetect(text);
Map<String, Float> results = new LinkedHashMap<>();
SortedMap<String, Float> sortedData = new TreeMap(new MapSort.FloatValueComparer(results));
float sentenceCounter = sentSpans.length;
float prominenceVal = 0;
int sentences = sentSpans.length;
for (Span sentSpan : sentSpans) {
prominenceVal = sentenceCounter / sentences;
sentenceCounter--;
String sentence = sentSpan.getCoveredText(text).toString();
int start = sentSpan.getStart();
Span[] tokSpans = tokenizer.tokenizePos(sentence);
String[] tokens = new String[tokSpans.length];
for (int i = 0; i < tokens.length; i++) {
tokens[i] = tokSpans[i].getCoveredText(sentence).toString();
}
String[] tags = posTagger.tag(tokens);
Span[] chunks = chunker.chunkAsSpans(tokens, tags);
for (Span chunk : chunks) {
if ("NP".equals(chunk.getType())) {
int npstart = start + tokSpans[chunk.getStart()].getStart();
int npend = start + tokSpans[chunk.getEnd() - 1].getEnd();
String potentialKey = text.substring(npstart, npend);
if (!results.containsKey(potentialKey)) {
boolean hasStopWord = false;
String[] pKeys = potentialKey.split("\\s+");
if (pKeys.length < 3) {
for (String pKey : pKeys) {
for (String stopword : stopwords) {
if (pKey.toLowerCase().matches(stopword)) {
hasStopWord = true;
break;
}
}
if (hasStopWord == true) {
break;
}
}
}else{
hasStopWord=true;
}
if (hasStopWord == false) {
int count = StringUtils.countMatches(text, potentialKey);
results.put(potentialKey, (float) (Math.log(count) / 100) + (float)(prominenceVal/5));
}
}
}
}
}
sortedData.putAll(results);
System.out.println(sortedData);
return null;
}
What it basically does is give me the nouns back, sorted by a prominence value (where does the phrase appear in the text?) and by count.
But honestly, this doesn't work so well.
I also tried it with a Lucene analyzer, but the results were not good either.
So, how can I achieve what I want to do? I already know of KEA/Maui-indexer etc. (but I'm afraid I can't use them because of the GPL :( )
Also interesting: which other algorithms can I use instead of TF-IDF?
Example:
This text: http://techcrunch.com/2015/09/04/etsys-pulling-the-plug-on-grand-st-at-the-end-of-this-month/
Good output in my opinion: Etsy, Grand St., solar chargers, maker marketplace, tech hardware
Finally, I found something:
https://github.com/srijiths/jtopia
It uses the POS taggers from OpenNLP/Stanford NLP and has an ASL2 (Apache 2.0) license. I haven't measured precision and recall yet, but in my opinion it delivers great results.
Here is my code:
Configuration.setTaggerType("openNLP");
Configuration.setSingleStrength(6);
Configuration.setNoLimitStrength(5);
// if tagger type is "openNLP" then give the openNLP POS tagger path
//Configuration.setModelFileLocation("model/openNLP/en-pos-maxent.bin");
// if tagger type is "default" then give the default POS lexicon file
//Configuration.setModelFileLocation("model/default/english-lexicon.txt");
// if tagger type is "stanford "
Configuration.setModelFileLocation("Dont need that here");
Configuration.setPipeline(pipeline);
TermsExtractor termExtractor = new TermsExtractor();
TermDocument topiaDoc = new TermDocument();
topiaDoc = termExtractor.extractTerms(text);
//logger.info("Extracted terms : " + topiaDoc.getExtractedTerms());
Map<String, ArrayList<Integer>> finalFilteredTerms = topiaDoc.getFinalFilteredTerms();
List<KeywordsModel> keywords = new ArrayList<>();
for (Map.Entry<String, ArrayList<Integer>> e : finalFilteredTerms.entrySet()) {
KeywordsModel keyword = new KeywordsModel();
keyword.setLabel(e.getKey());
keywords.add(keyword);
}
I modified the Configuration file a bit so that the POSModel is loaded from the pipeline instance.

How to get the last modified credit memo record from NetSuite

I am using Java and the NetSuite web services to get the last modified credit memo transactions or refund transactions for all customers, but there is no specific searchBasic class to do it. If anyone has done this before, please share a suggestion or a concrete answer; as I am new to NetSuite, I don't know all the details.
If you know how to do this for the balance rather than the credit memo, that would also be helpful to me.
public ArrayList<CreditMemo> searchRecentCreditMemos()
throws Exception {
TransactionSearch transactionsSearch = new TransactionSearch();
TransactionSearchBasic transactionSearchBasic = new TransactionSearchBasic();
CustomerSearchBasic custSearchBasic = new CustomerSearchBasic();
Calendar startDate = Calendar.getInstance();
startDate.add(Calendar.DAY_OF_MONTH, -1);
Calendar endDate = Calendar.getInstance();
// Create criteria
com.netsuite.webservices.platform.core_2014_1.SearchDateField searchDateField = new com.netsuite.webservices.platform.core_2014_1.SearchDateField();
searchDateField
.setOperator(com.netsuite.webservices.platform.core_2014_1.types.SearchDateFieldOperator.within);
searchDateField.setSearchValue(startDate);
searchDateField.setSearchValue2(endDate);
transactionSearchBasic.setLastModifiedDate(searchDateField);
transactionsSearch.setBasic(transactionSearchBasic);
transactionsSearch.setCustomerJoin(custSearchBasic);
SearchResult result = port.search(transactionsSearch);
ArrayList<CreditMemo> creditMemoList = new ArrayList<>();
if (result.getStatus().isIsSuccess()) {
RecordList recordList = result.getRecordList();
Record[] records = recordList.getRecord();
if (records != null && records.length != 0) {
for (int i = 0; i < records.length; i++) {
if (records[i] instanceof CreditMemo) {
CreditMemo creditMemo = (CreditMemo) records[i];
creditMemoList.add(creditMemo);
}
}
}
}
return creditMemoList;
}
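One refinement worth noting, as an assumption on my side rather than part of the code above: the search can be narrowed to credit memos on the server with a type filter on TransactionSearchBasic, so the instanceof check only acts as a safety net. A minimal sketch against the same 2014_1 classes (field and enum names may differ slightly in other WSDL versions):
// Hedged sketch: only return credit memo transactions from the search itself.
SearchEnumMultiSelectField typeFilter = new SearchEnumMultiSelectField();
typeFilter.setOperator(SearchEnumMultiSelectFieldOperator.anyOf);
typeFilter.setSearchValue(new String[] { "_creditMemo" });
transactionSearchBasic.setType(typeFilter);
// Combined with setLastModifiedDate(...) above, the result list should then
// contain only credit memos modified within the given date range.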

BIRT: How to remove a dataset parameter programmatically

I want to modify an existing *.rptdesign file and save it under a new name.
The existing file contains a Data Set with a template SQL select statement and several DS parameters.
I'd like to use an actual SQL select statement which uses only part of the DS parameters.
However, the following code results in the exception:
Exception in thread "main" java.lang.RuntimeException: The structure is floating, and its handle is invalid!
at org.eclipse.birt.report.model.api.StructureHandle.getStringProperty(StructureHandle.java:207)
at org.eclipse.birt.report.model.api.DataSetParameterHandle.getName(DataSetParameterHandle.java:143)
at org.eclipse.birt.report.model.api.DataSetHandle$DataSetParametersPropertyHandle.removeParamBindingsFor(DataSetHandle.java:851)
at org.eclipse.birt.report.model.api.DataSetHandle$DataSetParametersPropertyHandle.removeItems(DataSetHandle.java:694)
--
OdaDataSetHandle dsMaster = (OdaDataSetHandle) report.findDataSet("Master");
HashSet<String> bindVarsUsed = new HashSet<String>();
...
// find out which DS parameters are actually used
HashSet<String> bindVarsUsed = new HashSet<String>();
...
ArrayList<OdaDataSetParameterHandle> toRemove = new ArrayList<OdaDataSetParameterHandle>();
for (Iterator iter = dsMaster.parametersIterator(); iter.hasNext(); ) {
OdaDataSetParameterHandle dsPara = (OdaDataSetParameterHandle)iter.next();
String name = dsPara.getName();
if (name.startsWith("param_")) {
String bindVarName = name.substring(6);
if (!bindVarsUsed.contains(bindVarName)) {
toRemove.add(dsPara);
}
}
}
PropertyHandle paramsHandle = dsMaster.getPropertyHandle( OdaDataSetHandle.PARAMETERS_PROP );
paramsHandle.removeItems(toRemove);
What is wrong here?
Has anyone used the DE API to remove parameters from an existing Data Set?
I had a similar issue. I resolved it by calling removeItem multiple times, and I also had to re-evaluate parametersIterator every time.
protected void updateDataSetParameters(OdaDataSetHandle dataSetHandle) throws SemanticException {
int countMatches = StringUtils.countMatches(dataSetHandle.getQueryText(), "?");
int paramIndex = 0;
do {
paramIndex = 0;
PropertyHandle odaDataSetParameterProp = dataSetHandle.getPropertyHandle(OdaDataSetHandle.PARAMETERS_PROP);
Iterator parametersIterator = dataSetHandle.parametersIterator();
while(parametersIterator.hasNext()) {
Object next = parametersIterator.next();
paramIndex++;
if(paramIndex > countMatches) {
odaDataSetParameterProp.removeItem(next);
break;
}
}
if(paramIndex < countMatches) {
paramIndex++;
OdaDataSetParameter dataSetParameter = createDataSetParameter(paramIndex);
odaDataSetParameterProp.addItem(dataSetParameter);
}
} while(countMatches != paramIndex);
}
private OdaDataSetParameter createDataSetParameter(int paramIndex) {
OdaDataSetParameter dataSetParameter = StructureFactory.createOdaDataSetParameter();
dataSetParameter.setName("param_" + paramIndex);
dataSetParameter.setDataType(DesignChoiceConstants.PARAM_TYPE_INTEGER);
dataSetParameter.setNativeDataType(1);
dataSetParameter.setPosition(paramIndex);
dataSetParameter.setIsInput(true);
dataSetParameter.setIsOutput(false);
dataSetParameter.setExpressionProperty("defaultValue", new Expression("<evaluation script>", ExpressionType.JAVASCRIPT));
return dataSetParameter;
}
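For completeness, a small sketch of how this fits the original goal of modifying an existing *.rptdesign file and saving it under a new name. The engine/session boilerplate is the standard DE API setup, and the file names are placeholders, not taken from the question.
// Hedged sketch: open the design, fix up the data set parameters, save as a new file.
SessionHandle session = new DesignEngine(new DesignConfig()).newSessionHandle(ULocale.ENGLISH);
ReportDesignHandle report = session.openDesign("existing.rptdesign");
OdaDataSetHandle dsMaster = (OdaDataSetHandle) report.findDataSet("Master");
updateDataSetParameters(dsMaster);           // method from the answer above
report.saveAs("modified.rptdesign");         // writes the modified design to a new file
report.close();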
