I am trying to get all the assets inside a smart collection in AEM, given the path of the smart collection.
I could do this for a normal collection by getting the node paths under sling:members.
But how do I get all the assets of a Smart Collection?
The data under sling:members is empty for a Smart Collection, which is why my code works only for normal collections.
I expect to get all the assets of a Smart Collection, given its path, in Java.
Here is a simple snippet you can run with AEM Groovy Console:
// https://helpx.adobe.com/experience-manager/6-4/sites/developing/using/reference-materials/javadoc/com/day/cq/dam/api/collection/SmartCollection.html
import com.day.cq.dam.api.collection.SmartCollection;
import com.day.cq.dam.api.Asset;
def SMART_COLLECTION_PATH = "/content/dam/collections/J/Jx4h69ABp_KoLbZJ-8dq/test-collection"
def smartCollectionResource = getResource(SMART_COLLECTION_PATH)
def smartCollection = smartCollectionResource.adaptTo(SmartCollection.class)
smartCollection
    .getQuery()
    .getResult()
    .getNodes()
    .each {
        def assetResource = getResource(it.path)
        def asset = assetResource.adaptTo(Asset.class)
        println asset.path
    }
The basic gist is that you can get the smart collection resource and adapt it to a SmartCollection. From there you can call getQuery, execute the query, get the nodes, and adapt them to Asset objects (or just process the nodes directly). In the code above, I print the asset paths.
Even though the code above is Groovy, it is simple enough that you could convert it to Java very quickly; a sketch of such a conversion follows.
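A minimal Java sketch of the same steps, assuming you already have a ResourceResolver in scope and that SmartCollection.getQuery() returns a com.day.cq.search.Query as in the Javadoc linked above (the method name printSmartCollectionAssets is just for illustration):

import java.util.Iterator;
import javax.jcr.Node;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import com.day.cq.dam.api.Asset;
import com.day.cq.dam.api.collection.SmartCollection;

public void printSmartCollectionAssets(ResourceResolver resolver, String collectionPath) throws Exception {
    Resource collectionResource = resolver.getResource(collectionPath);
    SmartCollection collection = collectionResource.adaptTo(SmartCollection.class);

    // Run the saved query behind the smart collection and walk the result nodes.
    Iterator<Node> nodes = collection.getQuery().getResult().getNodes();
    while (nodes.hasNext()) {
        Resource assetResource = resolver.getResource(nodes.next().getPath());
        Asset asset = assetResource.adaptTo(Asset.class);
        System.out.println(asset.getPath());
    }
}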
I have a 3-level nested Java POJO that looks like this in the schema file:
struct FPathSegment {
  originIata:ushort;
  destinationIata:ushort;
}

table FPathConnection {
  segments:[FPathSegment];
}

table FPath {
  connections:[FPathConnection];
}
When I try to serialize the Java POJO to its FlatBuffers equivalent, I pretty much get a "nested serialization is not allowed" error every time I try to use a common FlatBufferBuilder to build the entire object graph.
There is no clue in the docs as to whether I should have a single builder for the entire graph or a separate one for every table/struct, and if separate, how do you import the child objects into the parent?
There are all these methods to create/start/add various vectors, but no explanation of which builders go in there. Painfully complicated.
Here is my Java code where I attempt to serialize my Java POJO into its FlatBuffers equivalent:
private FPath convert(Path path) {
    FlatBufferBuilder bld = new FlatBufferBuilder(1024);
    // build the Flatbuffer object
    FPath.startFPath(bld);
    FPath.startConnectionsVector(bld, path.getConnections().size());
    for (Path.PathConnection connection : path.getConnections()) {
        FPathConnection.startFPathConnection(bld);
        for (Path.PathSegment segment : connection.getSegments()) {
            FPathSegment.createFPathSegment(bld,
                stringCache.getPointer(segment.getOriginIata()),
                stringCache.getPointer(segment.getDestinationIata()));
        }
        FPathConnection.endFPathConnection(bld);
    }
    FPath.endFPath(bld);
    return FPath.getRootAsFPath(bld.dataBuffer());
}
Every start() method throws a "FlatBuffers: object serialization must not be nested" exception; I can't figure out the right way to do this.
You use a single FlatBufferBuilder, but you must finish serializing children before starting the parents.
In your case, that requires you to move FPath.startFPath to the end, and FPath.startConnectionsVector to just before that. This means you need to store the offsets for each FPathConnection in a temp array.
This will make the nesting error go away.
The reason for this inconvenience is to allow the serialization process to proceed without any temporary data structures.
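Here is a corrected version of the convert method, as a sketch. It assumes the generated API follows FlatBuffers' standard naming for this schema (startSegmentsVector, addSegments, createConnectionsVector, and so on); stringCache is the helper from the question.

private FPath convert(Path path) {
    FlatBufferBuilder bld = new FlatBufferBuilder(1024);

    // Children first: build each connection (and its segments vector) bottom-up,
    // remembering the table offsets in a temp array.
    int[] connectionOffsets = new int[path.getConnections().size()];
    int c = 0;
    for (Path.PathConnection connection : path.getConnections()) {
        List<Path.PathSegment> segments = connection.getSegments();
        // Structs are written inline into the vector; the builder prepends,
        // so iterate in reverse to keep the original element order.
        FPathConnection.startSegmentsVector(bld, segments.size());
        for (int i = segments.size() - 1; i >= 0; i--) {
            Path.PathSegment segment = segments.get(i);
            FPathSegment.createFPathSegment(bld,
                stringCache.getPointer(segment.getOriginIata()),
                stringCache.getPointer(segment.getDestinationIata()));
        }
        int segmentsVec = bld.endVector();

        FPathConnection.startFPathConnection(bld);
        FPathConnection.addSegments(bld, segmentsVec);
        connectionOffsets[c++] = FPathConnection.endFPathConnection(bld);
    }

    // Parents last: the connections vector, then the root table.
    int connectionsVec = FPath.createConnectionsVector(bld, connectionOffsets);
    FPath.startFPath(bld);
    FPath.addConnections(bld, connectionsVec);
    bld.finish(FPath.endFPath(bld));

    return FPath.getRootAsFPath(bld.dataBuffer());
}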
I have the following objects:
ITeamRepository repo;
IProjectArea projArea;
ITeamArea teamArea;
The process of obtaining the projArea and the teamArea is quite straightforward (despite the quantity of objects involved). However, I can't seem to find a direct way to obtain a list of all the work items associated with these objects. Is this possible, perhaps via the IQueryClient objects?
This 2012 thread (so it might have changed since) suggests:
I used the following code to get the work items associated with each project area:
auditableClient = (IAuditableClient) repository.getClientLibrary(IAuditableClient.class);
IQueryClient queryClient = (IQueryClient) repository.getClientLibrary(IQueryClient.class);
IQueryableAttribute attribute = QueryableAttributes.getFactory(IWorkItem.ITEM_TYPE).findAttribute(currProject, IWorkItem.PROJECT_AREA_PROPERTY, auditableClient, null);
Expression expression = new AttributeExpression(attribute, AttributeOperation.EQUALS, currProject);
IQueryResult<IResolvedResult<IWorkItem>> results = queryClient.getResolvedExpressionResults(currProject, expression, IWorkItem.FULL_PROFILE);
In my code, currProject would be the IProjectArea pointer to the current project as you loop through the List of project areas p in your code.
The IQueryResult object 'results' then contains a list of IResolvedResult records with all of the work items for that project, which you can iterate through to read properties of each work item.
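Iterating those results might look like this (a hedged sketch; it assumes that passing a null IProgressMonitor is acceptable in your client environment):

while (results.hasNext(null)) {
    IResolvedResult<IWorkItem> resolved = results.next(null);
    IWorkItem workItem = resolved.getItem();
    // Print the id and summary of each work item.
    System.out.println(workItem.getId() + ": " + workItem.getHTMLSummary());
}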
I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)
print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY)
My question is: how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do that by creating Java classes from the proto files using gRPC.
The documentation is very sketchy, which is why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly you need to do the following four steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
For more details look at the tutorial itself.
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
Which means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess the recommendation is that input_fn()-type things return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes
def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)

    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # these are not known in serving!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.
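Since the question also asks about a Java client: once you have generated Java classes from the TensorFlow Serving protos (prediction_service.proto, predict.proto, model.proto, plus the tensor protos under tensorflow/core/framework) with the protobuf and gRPC plugins, a minimal blocking client could look roughly like the sketch below. The class and method names follow the standard protobuf/gRPC codegen conventions; the host, port, model name "wide_n_deep", and the way you obtain the serialized tf.Example are all placeholders to adapt.

import com.google.protobuf.ByteString;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.TensorProto;
import org.tensorflow.framework.TensorShapeProto;
import tensorflow.serving.Model;
import tensorflow.serving.Predict;
import tensorflow.serving.PredictionServiceGrpc;

public class WideDeepClient {

    // serializedExample: a tf.Example serialized to bytes, e.g. built with the
    // org.tensorflow.example.Example protobuf builder.
    public static Predict.PredictResponse predict(ByteString serializedExample) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9000)  // placeholder host/port
                .usePlaintext(true)
                .build();
        PredictionServiceGrpc.PredictionServiceBlockingStub stub =
                PredictionServiceGrpc.newBlockingStub(channel);

        // A rank-1 DT_STRING tensor holding the serialized tf.Example, keyed
        // by the same name the serving_input_fn exposed ("examples").
        TensorProto examples = TensorProto.newBuilder()
                .setDtype(DataType.DT_STRING)
                .setTensorShape(TensorShapeProto.newBuilder()
                        .addDim(TensorShapeProto.Dim.newBuilder().setSize(1)))
                .addStringVal(serializedExample)
                .build();

        Predict.PredictRequest request = Predict.PredictRequest.newBuilder()
                .setModelSpec(Model.ModelSpec.newBuilder().setName("wide_n_deep"))
                .putInputs("examples", examples)
                .build();

        return stub.predict(request);
    }
}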
So I am trying to write a little utility in Scala that constantly listens on a bunch of directories for file system changes (deletes, creates, modifications, etc.) and rsyncs them immediately across to a remote server (https://github.com/Khalian/LockStep).
My configurations are stored in JSON as follows:
{
    "localToRemoteDirectories": {
        "/workplace/arunavs/third_party": {
            "remoteDir": "/remoteworkplace/arunavs/third_party",
            "remoteServerAddr": "some Remote server address"
        }
    }
}
This configuration is stored in a Scala Map (key = localDir, value = (remoteDir, remoteServerAddr)). The tuple is represented as a case class:
sealed case class RemoteLocation(remoteDir:String, remoteServerAddr:String)
I am using an actor from a third party (https://github.com/lloydmeta/schwatcher/blob/master/src/main/scala/com/beachape/filemanagement/FileSystemWatchMessageForwardingActor.scala) that listens on these directories (e.g. /workplace/arunavs/third_party) and then outputs a Java 7 WatchEvent kind (ENTRY_CREATE, ENTRY_MODIFY, etc.). The problem is that the events sent carry absolute paths (for instance, if I create a file helloworld in the third_party dir, the message sent by the actor is (ENTRY_CREATE, /workplace/arunavs/third_party/helloworld)).
I need a way to write a getter that gets the nearest prefix from the configuration map stored above. The obvious way to do it is to filter on the map:
def getRootDirsAndRemoteAddrs(localDir: String): Map[String, RemoteLocation] =
    localToRemoteDirectories.filter(e => localDir.startsWith(e._1))
This simply returns the subset of keys that are a prefix of localDir (in the above example, this method is called with localDir = /workplace/arunavs/third_party/helloworld). While this works, this implementation is O(n), where n is the number of items in my configuration. I am looking for better computational complexity (I looked at radix and Patricia tries, but they don't cut it, since I am feeding in a string and trying to get the keys that are prefixes of it; tries solve the opposite problem).
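One cheap way to beat the linear scan, sketched in Java for consistency with the rest of this page (the Scala equivalent is analogous; the value type is generic so it works for the RemoteLocation map above): probe the map with successively shorter '/'-delimited prefixes of the event path. Lookup then costs O(path depth) hash probes instead of O(n) over the configuration.

import java.util.Map;

// Returns the deepest configured ancestor of `path`, or null if none exists.
static <V> String nearestConfiguredPrefix(Map<String, V> config, String path) {
    String candidate = path;
    while (!candidate.isEmpty()) {
        if (config.containsKey(candidate)) {
            return candidate;
        }
        // Trim the last '/'-delimited segment and try again.
        int slash = candidate.lastIndexOf('/');
        candidate = slash <= 0 ? "" : candidate.substring(0, slash);
    }
    return null;
}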
I am new to BO. I need to find the universe name and the corresponding metadata information (table names, column names, join conditions, etc.). I am unable to find a proper way to start; I have looked at the Data Access SDK and the Semantic SDK.
Can anyone please provide sample code or a procedure for getting started?
I have googled a lot but am unable to find any sample examples.
I looked into the link below, but that code will work only on an R2 server.
http://www.forumtopics.com/busobj/viewtopic.php?t=67088
Help is highly appreciated.
Assuming you're talking about IDT-based universes, you'll need to code some Java. The JavaDoc for the API is available here.
In a nutshell, you do something like this:
SlContext context = SlContext.create();
LocalResourceService service = context.getService(LocalResourceService.class);
String blxFile = service.retrieve("universe.unx", "output directory");
RelationalBusinessLayer businessLayer = (RelationalBusinessLayer) service.load(blxFile);
RootFolder rootFolder = businessLayer.getRootFolder();
Once you have a hook on the rootFolder, you can use the getChildren() method to drill into the folder structure and access the various subfolders/business objects available.
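For instance, a recursive walk over the business layer might look like this (a sketch; it assumes the SL SDK's Folder and BlItem types as documented in the JavaDoc linked above):

import com.sap.sl.sdk.authoring.businesslayer.BlItem;
import com.sap.sl.sdk.authoring.businesslayer.Folder;

// Recursively print every business-layer item, indented by depth.
// Usage: dump(rootFolder, "")
void dump(BlItem item, String indent) {
    System.out.println(indent + item.getName());
    if (item instanceof Folder) {
        for (BlItem child : ((Folder) item).getChildren()) {
            dump(child, indent + "  ");
        }
    }
}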
You may also want to check the CmsResourceService class to access universes stored on the repository.
Getting the information you are after will require a two-part solution. Part 1: use the Rebean SDK, looking at WebI reports for the universe and the object names being used within it.
Part 2: break out your favorite COM programming tool (since I try to avoid COM, I use the Excel macro editor) and access the BusinessObjects Designer library. The main code snippets that I currently have are:
Dim boUniv As Designer.Universe
Dim tbl As Designer.Table

' Assumes boUniv has already been assigned to an open universe.
For Each tbl In boUniv.Tables
    Debug.Print tbl.Name
Next tbl
This prints all of the tables in a universe.
You will need to combine the two parts on your own to build a dependency list between WebI reports and universes.