Pentaho Java API: is a repository file a Job or a Transformation?

I am currently using the Java API to connect to a Pentaho repository. I want to know whether there is a method to tell if a particular Pentaho file is a Job or a Transformation.
I am using the sample code below. Notice that I am manually creating a JobMeta or TransMeta.
Is there an API to get the Pentaho object type?
Repository repository = new PentahoContext().initialize(repositoryName, userName, password);
RepositoryDirectoryInterface directoryPublic = repository.loadRepositoryDirectoryTree();
RepositoryDirectoryInterface directoryPublic1 = directoryPublic.findDirectory("/home");
JobMeta jobMeta = repository.loadJob(jobName, directoryPublic1, null, null);

I don't think you can query for a single file, but you can use Repository.getJobAndTransformationObjects() or RepositoryDirectoryInterface.getRepositoryObjects() to return a list of objects in the specified directory. You can then iterate over that list looking for the object with that name, then call getObjectType() to see if it is equal to RepositoryObjectType.TRANSFORMATION or RepositoryObjectType.JOB:
// pseudo-Java code
List<RepositoryElementMetaInterface> repositoryObjects =
    repository.getJobAndTransformationObjects(directoryPublic1.getObjectId(), false);
for (RepositoryElementMetaInterface object : repositoryObjects) {
    if (object.getName().equals("myFile")) {
        if (object.getObjectType().equals(RepositoryObjectType.TRANSFORMATION)) {
            TransMeta transMeta = repository.loadTransformation(...);
        } else if (object.getObjectType().equals(RepositoryObjectType.JOB)) {
            JobMeta jobMeta = repository.loadJob(...);
        }
    }
}


Update/replace table data in Google BigQuery via Java

I am trying to update BigQuery table data using Java. According to my research, WriteDisposition is an option, but I am a bit of a novice and couldn't get it working. Kindly help.
That said, I have managed to insert data using WriteChannelConfiguration, which worked fine. I need to change this code so that it updates the table.
public class BigQryAPI {
    public static void explicit() {
        // Load credentials from a JSON key file. If you can't set the
        // GOOGLE_APPLICATION_CREDENTIALS environment variable, you can explicitly
        // load the credentials file to construct the credentials.
        try {
            File credentialsPath =
                new File(BigQryAPI.class.getResource("/firstprojectkey.json").getPath()); // TODO: update to your key path.
            FileInputStream serviceAccountStream = new FileInputStream(credentialsPath);
            GoogleCredentials credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);

            // Instantiate a client
            BigQuery bigquery =
                BigQueryOptions.newBuilder().setCredentials(credentials).build().getService();
            System.out.println("Datasets:");
            for (Dataset dataset : bigquery.listDatasets().iterateAll()) {
                System.out.printf("%s%n", dataset.getDatasetId().getDataset());
            }

            // Load data into the table
            TableId tableId = TableId.of("firstdataset", "firsttable");
            WriteChannelConfiguration writeChannelConfiguration =
                WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv()).build();
            TableDataWriteChannel writer = bigquery.writer(writeChannelConfiguration);
            String csvdata = "zzzxyz,zzzxyz";

            // Write data to the writer
            try {
                writer.write(ByteBuffer.wrap(csvdata.getBytes(Charsets.UTF_8)));
            } finally {
                writer.close();
            }

            // Get the load job and wait for it to complete
            Job job = writer.getJob();
            job = job.waitFor();
            LoadStatistics stats = job.getStatistics();
            System.out.println("Load statistics: " + stats);

            // Run a test query and print the results.
            String query = "SELECT Name, Phone FROM `firstproject-256319.firstdataset.firsttable`;";
            QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
            for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
                for (FieldValue val : row) {
                    System.out.printf("%s,", val.toString());
                }
                System.out.printf("%n");
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
We can set the write disposition while building the WriteChannelConfiguration:
WriteChannelConfiguration writeChannelConfiguration =
    WriteChannelConfiguration.newBuilder(table.tableId)
        .setFormatOptions(FormatOptions.csv())
        .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
        .build();
Details can be found in the BigQuery API docs.
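As a side note, the csvdata string in the question holds a single CSV row; to load several rows in one write, they have to be newline-separated before being wrapped in the ByteBuffer the writer consumes. A minimal plain-Java sketch with no BigQuery dependency (CsvPayload and toCsvBuffer are hypothetical names; StandardCharsets.UTF_8 is the JDK equivalent of Guava's Charsets.UTF_8):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class CsvPayload {
    // Joins rows with newlines so a CSV parser sees one record per line, then
    // wraps the UTF-8 bytes as required by TableDataWriteChannel.write(ByteBuffer).
    static ByteBuffer toCsvBuffer(List<String> rows) {
        String csv = String.join("\n", rows) + "\n";
        return ByteBuffer.wrap(csv.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        ByteBuffer buf = toCsvBuffer(List.of("Alice,123", "Bob,456"));
        System.out.println(buf.remaining()); // 18 bytes: "Alice,123\nBob,456\n"
    }
}
```

The resulting buffer can be passed to writer.write(...) exactly as the single-row string is in the code above.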

Alfresco buildonly indexer for searching properties created on the fly

I am using the latest version of Alfresco, 5.1.
One of my requirements is to create properties (key/value pairs) where the user enters both the key and the value.
So I have done that like this:
Map<QName, Serializable> props = new HashMap<QName, Serializable>();
props.put(QName.createQName("customProp1"), "prop1");
props.put(QName.createQName("customProp2"), "prop2");
ChildAssociationRef associationRef = nodeService.createNode(nodeService.getRootNode(storeRef), ContentModel.ASSOC_CHILDREN, QName.createQName(GUID.generate()), ContentModel.TYPE_CMOBJECT, props);
Now what I want to do is search the nodes with these newly created properties. I was able to search the newly created property like this.
public List<NodeRef> findNodes() throws Exception {
    authenticate("admin", "admin");
    StoreRef storeRef = new StoreRef(StoreRef.PROTOCOL_WORKSPACE, "SpacesStore");
    List<NodeRef> nodeList = null;
    Map<QName, Serializable> props = new HashMap<QName, Serializable>();
    props.put(QName.createQName("customProp1"), "prop1");
    props.put(QName.createQName("customProp2"), "prop2");
    ChildAssociationRef associationRef = nodeService.createNode(nodeService.getRootNode(storeRef),
            ContentModel.ASSOC_CHILDREN, QName.createQName(GUID.generate()),
            ContentModel.TYPE_CMOBJECT, props);
    NodeRef nodeRef = associationRef.getChildRef();
    String query = "#cm\\:customProp1:\"prop1\"";
    SearchParameters sp = new SearchParameters();
    sp.addStore(storeRef);
    sp.setLanguage(SearchService.LANGUAGE_LUCENE);
    sp.setQuery(query);
    try {
        ResultSet results = serviceRegistry.getSearchService().query(sp);
        nodeList = new ArrayList<NodeRef>();
        for (ResultSetRow row : results) {
            nodeList.add(row.getNodeRef());
            System.out.println(row.getNodeRef());
        }
        System.out.println(nodeList.size());
    } catch (Exception e) {
        e.printStackTrace();
    }
    return nodeList;
}
The alfresco-global.properties indexer configuration is
index.subsystem.name=buildonly
index.recovery.mode=AUTO
dir.keystore=${dir.root}/keystore
Now my questions are:
Is it possible to achieve the same using the solr4 indexer?
Or is there any way to use the buildonly indexer for a particular query?
In your query
String query = "#cm\\:customProp1:\"prop1\"";
remove cm: you are building the QName on the fly, so the property does not fall under the cm (ContentModel) namespace. Your query then becomes
String query = "#\\:customProp1:\"prop1\"";
Hope this works for you.
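The corrected query string can be sketched in plain Java (AlfrescoQueryBuilder and propertyQuery are hypothetical names; this only shows the string construction, with the namespace prefix dropped because the QName was created without one):

```java
public class AlfrescoQueryBuilder {
    // Builds the Lucene property query for a QName created with no namespace,
    // e.g. propertyQuery("customProp1", "prop1") yields  #\:customProp1:"prop1"
    static String propertyQuery(String localName, String value) {
        return "#\\:" + localName + ":\"" + value + "\"";
    }

    public static void main(String[] args) {
        System.out.println(propertyQuery("customProp1", "prop1"));
    }
}
```

Whether a property created this way is actually indexed and searchable still depends on your content model and indexer configuration; this sketch only mirrors the escaping used in the answer above.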
First, double check if you're simply experiencing eventual consistency, as described below. If you are, and if this presents a problem for you, consider switching to CMIS queries while staying on SOLR.
http://docs.alfresco.com/5.1/concepts/solr-event-consistency.html
Other than this, check if the node has been indexed at all. If it has, take a closer look at how you build your query.

Using ElasticSearch's script_upsert to create a document

According to the official documentation (Update API - Upserts), one can use scripted_upsert to handle an update (for an existing document) or an insert (for a new document) from within the script. The thing is, the docs never show what the script should look like to do that, and the Java Update API documentation doesn't have any information on how scriptedUpsert is used.
This is the code I'm using:
// My function to build and use the upsert
public void scriptedUpsert(String key, String parent, String scriptSource,
        Map<String, ? extends Object> parameters) {
    Script script = new Script(scriptSource, ScriptType.INLINE, null, parameters);
    UpdateRequest request = new UpdateRequest(index, type, key);
    request.scriptedUpsert(true);
    request.script(script);
    if (parent != null) {
        request.parent(parent);
    }
    this.bulkProcessor.add(request);
}
//A test call to validate the function
String scriptSource = "if (!ctx._source.hasProperty(\"numbers\")) {ctx._source.numbers=[]}";
Map<String, List<Integer>> parameters = new HashMap<>();
List<Integer> numbers = new LinkedList<>();
numbers.add(100);
parameters.put("numbers", numbers);
bulk.scriptedUpsert("testUser", null, scriptSource, parameters);
And I'm getting the following exception when the "testUser" document doesn't exist:
DocumentMissingException[[user][testUser]: document missing
How can I make scriptedUpsert work from my Java code?
This is what a scripted_upsert command (together with its script) should look like:
POST /sessions/session/1/_update
{
"scripted_upsert": true,
"script": {
"inline": "if (ctx.op == \"create\") ctx._source.numbers = newNumbers; else ctx._source.numbers += updatedNumbers",
"params": {
"newNumbers": [1,2,3],
"updatedNumbers": [55]
}
},
"upsert": {}
}
If you call the above command and the index doesn't exist, it will be created, together with a new document holding the newNumbers values. If you then call the exact same command again, the numbers values will become 1,2,3,55.
And in your case, you are missing the "upsert": {} part.
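The two branches of that inline script can be sketched as a plain-Java stand-in (UpsertScriptLogic and apply are hypothetical names, not Elasticsearch API; existing == null stands in for ctx.op == "create"):

```java
import java.util.ArrayList;
import java.util.List;

public class UpsertScriptLogic {
    // Mirrors the inline script: on create, seed the document with newNumbers;
    // on update, append updatedNumbers to the existing list.
    static List<Integer> apply(List<Integer> existing,
                               List<Integer> newNumbers,
                               List<Integer> updatedNumbers) {
        if (existing == null) {                // ctx.op == "create"
            return new ArrayList<>(newNumbers);
        }
        List<Integer> merged = new ArrayList<>(existing);
        merged.addAll(updatedNumbers);         // ctx._source.numbers += updatedNumbers
        return merged;
    }

    public static void main(String[] args) {
        List<Integer> first = apply(null, List.of(1, 2, 3), List.of(55));
        List<Integer> second = apply(first, List.of(1, 2, 3), List.of(55));
        System.out.println(first);   // [1, 2, 3]
        System.out.println(second);  // [1, 2, 3, 55]
    }
}
```

This reproduces the behavior described above: the first call seeds 1,2,3 and the second call appends 55.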
As Andrei suggested I was missing the upsert part, changing the function to:
public void scriptedUpsert(String key, String parent, String scriptSource,
        Map<String, ? extends Object> parameters) {
    Script script = new Script(scriptSource, ScriptType.INLINE, null, parameters);
    UpdateRequest request = new UpdateRequest(index, type, key);
    request.scriptedUpsert(true);
    request.script(script);
    request.upsert("{}"); // <--- The change
    if (parent != null) {
        request.parent(parent);
    }
    this.bulkProcessor.add(request);
}
fixed it.

How to perform an Amazon CloudSearch query with .NET code?

I am learning Amazon CloudSearch but I couldn't find any sample code in either C# or Java (I am working in C#, but if I can get code in Java I can try converting it to C#).
This is the only C# sample I found: https://github.com/Sitefinity-SDK/amazon-cloud-search-sample/tree/master/SitefinityWebApp.
This is one method from that code:
public IResultSet Search(ISearchQuery query)
{
    AmazonCloudSearchDomainConfig config = new AmazonCloudSearchDomainConfig();
    config.ServiceURL = "http://search-index2-cdduimbipgk3rpnfgny6posyzy.eu-west-1.cloudsearch.amazonaws.com/";
    AmazonCloudSearchDomainClient domainClient = new AmazonCloudSearchDomainClient("AKIAJ6MPIX37TLIXW7HQ", "DnrFrw9ZEr7g4Svh0rh6z+s3PxMaypl607eEUehQ", config);

    SearchRequest searchRequest = new SearchRequest();
    List<string> suggestions = new List<string>();
    StringBuilder highlights = new StringBuilder();
    highlights.Append("{\'");

    if (query == null)
        throw new ArgumentNullException("query");

    foreach (var field in query.HighlightedFields)
    {
        if (highlights.Length > 2)
        {
            highlights.Append(", \'");
        }
        highlights.Append(field.ToUpperInvariant());
        highlights.Append("\':{} ");

        SuggestRequest suggestRequest = new SuggestRequest();
        Suggester suggester = new Suggester();
        suggester.SuggesterName = field.ToUpperInvariant() + "_suggester";
        suggestRequest.Suggester = suggester.SuggesterName;
        suggestRequest.Size = query.Take;
        suggestRequest.Query = query.Text;
        SuggestResponse suggestion = domainClient.Suggest(suggestRequest);
        foreach (var suggest in suggestion.Suggest.Suggestions)
        {
            suggestions.Add(suggest.Suggestion);
        }
    }
    highlights.Append("}");

    if (query.Filter != null)
    {
        searchRequest.FilterQuery = this.BuildQueryFilter(query.Filter);
    }
    if (query.OrderBy != null)
    {
        searchRequest.Sort = string.Join(",", query.OrderBy);
    }
    if (query.Take > 0)
    {
        searchRequest.Size = query.Take;
    }
    if (query.Skip > 0)
    {
        searchRequest.Start = query.Skip;
    }
    searchRequest.Highlight = highlights.ToString();
    searchRequest.Query = query.Text;
    searchRequest.QueryParser = QueryParser.Simple;

    var result = domainClient.Search(searchRequest).SearchResult;
    return new AmazonResultSet(result, suggestions);
}
I have already created a domain in Amazon CloudSearch using the AWS console and uploaded documents using Amazon's predefined configuration option, i.e. the IMDb movies JSON file that Amazon provides for the demo.
But I don't understand how to use this method. For example, if I want to search by director name, how do I pass that in, given that the method parameter is of type ISearchQuery?
I'd suggest using the official AWS CloudSearch .NET SDK. The library you were looking at seems fine (although I haven't looked at it in any detail), but the official version is more likely to expose new CloudSearch features as soon as they're released, will be supported if you need to talk to AWS support, etc.
Specifically, take a look at the SearchRequest class -- all its params are strings so I think that obviates your question about ISearchQuery.
I wasn't able to find an example of a query in .NET but this shows someone uploading docs using the AWS .NET SDK. It's essentially the same procedure as querying: creating and configuring a Request object and passing it to the client.
EDIT:
Since you're still having a hard time, here's an example. Bear in mind that I am unfamiliar with C# and have not attempted to run or even compile this, but I think it should at least be close to working. It's based on the docs at http://docs.aws.amazon.com/sdkfornet/v3/apidocs/
// Configure the Client that you'll use to make search requests
string queryUrl = @"http://search-<domainname>-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com";
AmazonCloudSearchDomainClient searchClient = new AmazonCloudSearchDomainClient(queryUrl);
// Configure a search request with your query
SearchRequest searchRequest = new SearchRequest();
searchRequest.Query = "potato";
// TODO Set your other params like parser, suggester, etc
// Submit your request via the client and get back a response containing search results
SearchResponse searchResponse = searchClient.Search(searchRequest);

FileNet: storing and getting objects from an object store - Java step processor

I'm implementing a FileNet Java step processor. I have to get a document from an object store, modify it, and store the modified version back to the same object store. I have a working applet for tests. I can't ask the user to additionally send a login and password; I have to work with the VWSession object. The document I have to get is sent as a step attachment, and I know how to get the attached document's ID. How do I get a document by ID from a Java step processor, using the VWSession object as my connection point to the object store?
Get an ObjectStore instance (get the domainId from the VWSession, and construct the Domain using the Connection from the VWSession as well), then pass the attachment to the method below:
private String getIdFromAttachment(VWAttachment att, ObjectStore os) {
    if (att.getLibraryType() != 3) {
        String libtype = String.valueOf(att.getLibraryType());
        throw new VWException("hk.ibm.ecm.cpa.uwfcustomcomponent.InvalidLibraryType",
                "Cannot convert object from non-CE repository, type {0}.", libtype);
    }
    String attId = att.getId();
    switch (att.getType()) {
    case 3:
    case 4:
        VersionSeries vs = (VersionSeries) os.getObject("VersionSeries", attId);
        vs.refresh();
        String ver = att.getVersion();
        return getIdBasedOnVersion(vs, ver);
    default:
        return attId;
    }
}

private String getIdBasedOnVersion(VersionSeries vs, String ver) {
    if (ver == null) {
        Document current = (Document) vs.get_CurrentVersion();
        current.refresh();
        return current.get_Id().toString();
    } else if ("-1".equals(ver)) {
        Document released = (Document) vs.get_ReleasedVersion();
        released.refresh();
        return released.get_Id().toString();
    } else {
        return ver;
    }
}
Getting the user credentials is not that difficult. In the step processor Java module, just do the following:
String userName = controller.getDataStore().getServerCredentials().getUserId();
String userPwd = controller.getDataStore().getServerCredentials().getPassword();
For getting the document you mean, normally you will not get that over a VWConnection but over a Content Engine connection.
Getting your document by ID:
Factory.Document.fetchInstance(objectStore, guid, null);
Hope it helps.
