I have to extract the geometry of an IFC file in Java. My problem is that I don't know how to do it.
I tried to use OpenIfcTools, but the documentation is really bad. For now I have the IFC file loaded, but I cannot get the geometry out of the model.
Does anyone have experience with IFC model loading?
Thanks in advance.
EDIT: This is what I've done so far:
try {
    IfcModel ifcModel = new IfcModel();
    ifcModel.readStepFile(new File("my-project.ifc"));
    Collection<IfcClass> ifcObjects = ifcModel.getIfcObjects();
    System.out.println(ifcObjects.iterator().next());
} catch (Exception e) {
    e.printStackTrace();
}
This correctly loads the IFC file, but I don't know what to do with this information.
I also tried to use IfcOpenShell, but the provided jar didn't work either. At the moment I'm trying to build IfcOpenShell myself.
I'm kinda desperate because everything is very poorly documented and I really need to load and parse the IFC geometry.
Depending on what you want to do with the geometry, how deep you want to delve into the IFC standard, and what performance you need for your solution, you have two different options:
Extract the implicit geometry on your own
Use an external geometry engine
If you go for the first option, you'd have to study the IFC schema intensively. You would only be interested in IfcProducts, because only those can have geometry. Using OpenIfcTools you could do something like:
Collection<IfcProduct> products = model.getCollection(IfcProduct.class);
for (IfcProduct product : products) {
    List<IfcRepresentation> representations = product.getRepresentation().getRepresentations();
    assert !representations.isEmpty();
    assert representations.get(0) instanceof IfcShapeRepresentation;
    Collection<IfcRepresentationItem> repr = representations.get(0).getItems();
    assert !repr.isEmpty();
    IfcRepresentationItem representationItem = repr.iterator().next();
    assert representationItem instanceof IfcFacetedBrep;
    for (IfcFace face : ((IfcFacetedBrep) representationItem).getOuter().getCfsFaces()) {
        for (IfcFaceBound faceBound : face.getBounds()) {
            IfcLoop loop = faceBound.getBound();
            assert loop instanceof IfcPolyLoop;
            for (IfcCartesianPoint point : ((IfcPolyLoop) loop).getPolygon()) {
                point.getCoordinates();
            }
        }
    }
}
However, there are a lot of different GeometryRepresentations, which you'd have to cover, probably doing triangulation and stuff on your own. I've shown one special case and made a lot of assertions. And you'd have to fiddle with coordinate transformations, because these may be nested recursively.
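As a hedged sketch of what the placement handling might look like, you could walk up the chain of IfcLocalPlacements to the root. The accessor names below mirror the IFC attribute names (RelativePlacement, PlacementRelTo); whether OpenIfcTools generates exactly these getters is an assumption you'd need to verify:
// Sketch only: collects the placement chain of a product from leaf to root.
// Assumes OpenIfcTools generates getPlacementRelTo()/getRelativePlacement()
// from the IfcLocalPlacement attributes -- verify against the actual API.
List<IfcAxis2Placement> placementChain(IfcProduct product) {
    List<IfcAxis2Placement> chain = new ArrayList<>();
    IfcObjectPlacement placement = product.getObjectPlacement();
    while (placement instanceof IfcLocalPlacement) {
        IfcLocalPlacement local = (IfcLocalPlacement) placement;
        chain.add(local.getRelativePlacement());   // local coordinate system
        placement = local.getPlacementRelTo();     // parent placement, null at the root
    }
    return chain; // apply these transformations from root to leaf to get world coordinates
}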
If you go for the second option, the geometry engines I know of are all written in C/C++ (IfcOpenShell, RDF IfcEngine), so you'd have to cope with native library integration. The jar package provided with IfcOpenShell is intended to be used as a BIMserver plugin, so you can't use it without the respective dependencies. However, you can grab the native binaries from this package. In order to use the engine you can draw some inspiration from the BIMserver plugin source. The key native methods you're gonna use are
boolean setIfcData(byte[] ifc) to parse the ifc data
IfcGeomObject getGeometry() to access the extracted geometry successively.
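A rough sketch of how a wrapper around those two native methods might be driven (the IfcGeomJni wrapper class and the library name are placeholders of mine; only the two method signatures are taken from the plugin):
// Hypothetical wrapper: only setIfcData/getGeometry come from the plugin;
// class name and library name are placeholders.
public class IfcGeomJni {
    static { System.loadLibrary("IfcJni"); } // native binaries from the BIMserver plugin package
    public native boolean setIfcData(byte[] ifc);
    public native IfcGeomObject getGeometry();
}

byte[] ifc = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("my-project.ifc"));
IfcGeomJni engine = new IfcGeomJni();
if (engine.setIfcData(ifc)) {                      // parse the IFC data
    IfcGeomObject obj;
    while ((obj = engine.getGeometry()) != null) { // iterate the extracted geometry successively
        // process obj: vertices, indices, placement, ...
    }
}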
I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)
print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY)
My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do this by creating Java classes from the proto files using gRPC.
The documentation is very sketchy, hence why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly you need to do the following four steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
For more details look at the tutorial itself.
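Since the question also asks about Java: once you've generated Java classes from the TensorFlow Serving protos (predict.proto, prediction_service.proto, model.proto, plus example.proto) with the protobuf/gRPC compilers, a minimal client could look roughly like this. The outer class names (Predict, Model) and the model/feature names are assumptions that depend on your codegen setup and exported model:
// Sketch of a Java gRPC client for tensorflow_model_server.
// Assumes stubs generated from the tensorflow_serving/apis protos;
// package/class names may differ depending on how you generate them.
ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 9000)
        .usePlaintext(true)
        .build();
PredictionServiceGrpc.PredictionServiceBlockingStub stub =
        PredictionServiceGrpc.newBlockingStub(channel);

// Serialize one tf.Example -- exactly what a parsing serving_input_fn expects.
// "age" is just an example feature from the wide & deep tutorial.
Example example = Example.newBuilder()
        .setFeatures(Features.newBuilder()
                .putFeature("age", Feature.newBuilder()
                        .setFloatList(FloatList.newBuilder().addValue(25f))
                        .build()))
        .build();

TensorProto examples = TensorProto.newBuilder()
        .setDtype(DataType.DT_STRING)
        .setTensorShape(TensorShapeProto.newBuilder()
                .addDim(TensorShapeProto.Dim.newBuilder().setSize(1)))
        .addStringVal(example.toByteString())
        .build();

Predict.PredictRequest request = Predict.PredictRequest.newBuilder()
        .setModelSpec(Model.ModelSpec.newBuilder().setName("wide_n_deep")) // assumed model name
        .putInputs("examples", examples)
        .build();

Predict.PredictResponse response = stub.predict(request);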
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
Which means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess it's recommended that input_fn()-type things are supposed to return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)

    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # these are not known in serving!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.
I'm using commons-imaging to extract TIFF/JPG image metadata.
I know this lib can't be considered "stable", but it's the best alternative I've found so far.
Previously, our app was using ImageMagick with a system call, which was really bad in terms of performance.
But I have a problem with ICC profiles.
I would like to extract the data property from a particular IccTag (where itdt is "DESC_TYPE") using code similar to this:
IccProfileParser iccParser = new IccProfileParser();
IccProfileInfo iccProfileInfo = iccParser.getICCProfileInfo(
        metadataItem.get().getTiffField().getByteArrayValue());
for (IccTag tag : iccProfileInfo.getTags()) {
    // no getters in tag!
    LOG.debug(tag);
}
But that's not natively possible, because this class has no getters.
I've extracted the commons-imaging source to add getters to the IccTag class. Is there a better way?
https://commons.apache.org/proper/commons-imaging/apidocs/org/apache/commons/imaging/icc/IccTag.html
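For what it's worth, a reflection-based workaround might avoid patching the library. This is only a sketch: the field name data is taken from the question itself, and whether it is reachable this way is an assumption to verify against the IccTag source:
// Sketch: read an IccTag field via reflection instead of patching the source.
// The field name "data" is an assumption based on the question.
import java.lang.reflect.Field;

static byte[] readTagData(IccTag tag) throws ReflectiveOperationException {
    Field dataField = IccTag.class.getDeclaredField("data");
    dataField.setAccessible(true); // may fail under a security manager
    return (byte[]) dataField.get(tag);
}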
I am new to BO. I need to find the universe name and the corresponding metadata information (table names, column names, join conditions, etc.). I am unable to find the proper way to start. I looked at the Data Access SDK and the Semantic SDK.
Can anyone please provide me with sample code or a procedure for getting started?
I googled a lot but I am unable to find any sample examples.
I looked into this link but that code will work only on R2 Server.
http://www.forumtopics.com/busobj/viewtopic.php?t=67088
Help is highly appreciated.
Assuming you're talking about IDT based universes, you'll need to code some Java. The JavaDoc for the API is available here.
In a nutshell, you do something like this:
SlContext context = SlContext.create();
LocalResourceService service = context.getService(LocalResourceService.class);
String blxFile = service.retrieve("universe.unx", "output directory");
RelationalBusinessLayer businessLayer = (RelationalBusinessLayer) service.load(blxFile);
RootFolder rootFolder = businessLayer.getRootFolder();
Once you have a hook on the rootFolder, you can use the getChildren() method to drill into the folder structure and access the various subfolders/business objects available.
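As a hedged sketch of that drill-down (the Folder/BlItem type names and getName() accessor are assumptions of mine; check the SL SDK JavaDoc for the exact types):
// Sketch only: recursively print the business layer tree.
// Type names are assumptions -- consult the SL SDK JavaDoc.
void dump(Folder folder, String indent) {
    for (BlItem child : folder.getChildren()) {
        System.out.println(indent + child.getName());
        if (child instanceof Folder) {
            dump((Folder) child, indent + "  "); // drill into subfolders
        }
    }
}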
You may also want to check the CmsResourceService class to access universes stored on the repository.
To get the information you are after will require a two-part solution. Part 1: use the Rebean SDK, looking at WebI reports for the universe and the object names used within it.
Part 2: break out your favorite COM programming tool (since I try to avoid COM, I use the Excel macro editor) and access the BusinessObjects Designer library. The main code snippet I currently have is:
Dim boUniv As Designer.Universe
Dim tbl As Designer.Table
' boUniv must first be set to an opened universe (not shown here)
For Each tbl In boUniv.Tables
    Debug.Print tbl.Name
Next tbl
This prints all of the tables in a universe.
You will need to combine the two parts on your own to build a dependency list between WebI reports and universes.
The requirement I received is to model some existing content, available in a SQL Server database, using Alfresco content management. So I created my new content model, and it seems to be working fine. But I have a problem with multiple languages: I know that in Alfresco it is possible to add multiple languages to one node (how can I do that using Java for a massive load?), but I also used some aspects that need to be translated.
What do you usually do in that case? I thought to follow these steps:
Create the English content and add aspects
Create a new translated child and add aspects
Is this correct? How can I make a node multilingual programmatically (in Java), and how can I add the newly translated content with aspects? I took a look at the Alfresco documentation but didn't find it. Could you help me find some documentation or a tutorial about that?
UPDATE:
I'm trying to make a content node multilingual:
void makeTranslation(Reference contentNodeRef, Locale locale) throws AlfrescoRuntimeException, Exception
{
    try {
        NodeRef nodeRef = new NodeRef("workspace://SpacesStore/" + contentNodeRef.getUuid());
        MultilingualContentServiceImpl multilingualContentServiceImpl = new MultilingualContentServiceImpl();
        multilingualContentServiceImpl.makeTranslation(nodeRef, locale);
    } catch (org.alfresco.error.AlfrescoRuntimeException ex) {
        throw new AlfrescoRuntimeException(ex.getMessage());
    } catch (Exception ex) {
        throw new Exception(ex.getMessage());
    }
}
but makeTranslation raises a NullPointerException because MultilingualContentServiceImpl is not initialized correctly. Any suggestion on how to initialize it? I have to use Spring, but how?
Any suggestion or reply will be very helpful!
Thanks,
Andrea
You can use MultilingualContentService to add translations. But! I guess your properties should be of type d:mltext (like cm:title and cm:description are) to support multilingual content.
This means that if you access Alfresco using a browser set to English you will see a different description than someone using German language settings in their browser. This can be a little confusing because in Share there is (was?) no indication that the property is multilingual.
If you want your translations to appear everywhere, no matter what browser language people are using, then the better approach is to define an aspect (for example ex:translatable) with as many properties as you need translations. Then you can programmatically (using Java or JavaScript) use the search service to find the nodes you want and add the aspect to them. Finally, you add the properties (translations) of that aspect to the nodes.
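Regarding the NullPointerException in the question: the service should be obtained from Alfresco's Spring context (e.g. via the ServiceRegistry) rather than instantiated with new. A rough sketch, assuming your code already has a ServiceRegistry injected:
// Sketch: obtain MultilingualContentService from the ServiceRegistry
// instead of new MultilingualContentServiceImpl(). How you get hold of
// the ServiceRegistry depends on your deployment (Spring injection assumed).
import java.util.Locale;
import org.alfresco.service.ServiceRegistry;
import org.alfresco.service.cmr.ml.MultilingualContentService;
import org.alfresco.service.cmr.repository.NodeRef;

void makeTranslation(ServiceRegistry serviceRegistry, NodeRef nodeRef, Locale locale) {
    MultilingualContentService mlService = serviceRegistry.getMultilingualContentService();
    mlService.makeTranslation(nodeRef, locale);           // mark the node as the pivot translation
    // to attach a further translation node:
    // mlService.addTranslation(translationNodeRef, nodeRef, otherLocale);
}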
I hope this helps to clear things a bit... :)
I would like to read a pom.xml in Java code. I wonder if there is a library for that, so I can have an iterator over the different sections, e.g. dependencies, plugins, etc. I want to avoid building a parser by hand.
You can try MavenXpp3Reader which is part of maven-model. Sample code:
import java.io.FileReader;
import org.apache.maven.model.Model;
import org.apache.maven.model.io.xpp3.MavenXpp3Reader;

MavenXpp3Reader reader = new MavenXpp3Reader();
Model model = reader.read(new FileReader(mypom));
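From there the Model exposes the sections the question asks about directly, e.g. (a small sketch):
// Iterate dependencies and plugins from the parsed model.
for (org.apache.maven.model.Dependency d : model.getDependencies()) {
    System.out.println(d.getGroupId() + ":" + d.getArtifactId() + ":" + d.getVersion());
}
if (model.getBuild() != null) { // the <build> section is optional
    for (org.apache.maven.model.Plugin p : model.getBuild().getPlugins()) {
        System.out.println(p.getKey()); // groupId:artifactId
    }
}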
Firstly, I'm assuming you are not already running inside a Maven plugin, as there are easier ways to achieve that with the available APIs there.
The MavenXpp3Reader solution posted earlier will allow you to read the POM easily; however, it does not take into account inheritance from the parent or interpolation of expressions.
For that, you would need to use the ModelBuilder class.
Use of this is quite simple; for example, from Archiva there is this code fragment:
ModelBuildingRequest req = new DefaultModelBuildingRequest();
req.setProcessPlugins( false );
req.setPomFile( file );
req.setModelResolver( new RepositoryModelResolver( basedir, pathTranslator ) );
req.setValidationLevel( ModelBuildingRequest.VALIDATION_LEVEL_MINIMAL );

Model model;
try
{
    model = builder.build( req ).getEffectiveModel();
}
catch ( ModelBuildingException e )
{
    ...
}
You must do two things to run this, though:
instantiate and wire an instance of ModelBuilder, including its private fields (see the sketch below)
use one of Maven's resolvers for finding the parent POMs, or write your own (as is the case in the above snippet)
How best to do that depends on the DI framework you are already using, or on whether you want to just embed Maven's default container.
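If you're not using a container at all, maven-model-builder ships a factory that wires a default instance; a minimal sketch:
// Minimal wiring without a DI container, using maven-model-builder's factory.
import org.apache.maven.model.building.DefaultModelBuilderFactory;
import org.apache.maven.model.building.ModelBuilder;

ModelBuilder builder = new DefaultModelBuilderFactory().newInstance();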
This depends on what you're trying to achieve. If you just want to treat it as a plain XML file, go with the suggestions already offered.
If you are looking to implement some form of Maven functionality in your app, you could try the new Aether library. I haven't used it, but it looks simple enough to integrate and should offer Maven functionality with little effort on your part.
BTW, this library is a Maven 3 lib, not Maven 2 (as specified in your tag). I don't know if that makes much difference to you.