Accessing "AllocatedFrom" compartment from Rhapsody Java API - java

I need to customise a generic internal block diagram (IBD) to show customer-specific variants. A simple tag-based discrimination mechanism is used to suppress non-relevant flows and allocated operations.
I have written a Java plug-in to iterate over the IBD’s encapsulated IRPGraphElements, examining their associated type and acting appropriately.
Manipulation of the flows is working fine, but I am having a lot of problems with the allocated operations. In summary, I have two problems:
I cannot get a handle to the “AllocatedFrom” compartment
And therefore I cannot get to the IRPCollection containing the references to the actual operations.
Problem 1.
I have examined both the Rhapsody Java API documentation (!!!) and the Java objects at runtime, trying to discover the appropriate method(s) to invoke.
As it is a purely presentational issue (I do not want to suppress the underlying allocations between model elements), I guess it is some kind of graphical property, and I had thought it would be ObjectModelGe-oriented.
I have looked at the properties mentioned in the Diagram package of the SysML profile.
Within General::Graphics I can see the AdditionalCompartments property mentions (amongst other things) AllocatedFrom
However within ObjectModelGe::Object I see that the Compartments property only mentions Operation – do I need to add AllocatedFrom to this property?
Problem 2.
Even if I get access to the compartment, I am not sure what methods will be available to access the collection – there appears to be no interface defined for compartments. Looking in the .sbs file I can see that it is of type IRPYRawContainer, but I cannot find any documentation on this.

How does MicroStream (de)serialization work?

I was wondering how the serialization of MicroStream works in detail.
Since it is described as "Super-Fast", it has to rely on code generation, right? Or is it based on reflection?
How would it perform in comparison to Protobuf serialization, which relies on code generation that directly reads the Java fields and writes them into a ByteBuffer, and vice versa?
Using reflection would drastically decrease performance when serializing objects on a huge scale, wouldn't it?
I'm looking for a fast way to transmit and persist objects for a multiplayer game, and every millisecond counts. :)
Thanks in advance!
PS: Since I don't have enough reputation, I can not create the "microstream"-tag. https://microstream.one/
I am the lead developer of MicroStream.
(This is not an alias account. I really just created it. I'm reading on StackOverflow for 10 years or so but never had a reason to create an account. Until now.)
On every initialization, MicroStream analyzes the current runtime's versions of all required entity and value type classes and derives optimized metadata from them.
The same is done when encountering a class at runtime that was unknown so far.
The analysis is done per reflection, but since it is only done once for every handled class, the reflection performance cost is negligible.
The actual storing and loading or serialization and deserialization is done via optimized framework code based on the created metadata.
If a class layout changes, the type analysis creates a mapping from the field layout that the class' instances are stored in to that of the current class.
Automatically if possible (unambiguous changes or via some configurable heuristics), otherwise via a user-provided mapping. Performance stays the same since the JVM does not care if it (simplified speaking) copies a loaded value #3 to position #3 or to position #5. It's all in the metadata.
ByteBuffers are used, more precisely direct ByteBuffers, but only as an anchor for off-heap memory to work on via direct "Unsafe" low-level operations. If you are not familiar with "Unsafe" operations, a short and simple notion is: "It's as direct and fast as C++ code.". You can do anything you want very fast and close to memory, but you are also responsible for everything. For more details, google "sun.misc.Unsafe".
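To illustrate the direct-ByteBuffer part, here is a minimal sketch of the general JDK mechanism (not MicroStream's actual code): `allocateDirect` hands you off-heap memory that you read and write with primitive operations, no serialization framework involved.

```java
import java.nio.ByteBuffer;

public class OffHeapSketch {
    // write a long and an int into off-heap memory and read them back
    static long[] roundTrip() {
        // allocateDirect reserves memory outside the normal Java heap;
        // the ByteBuffer object is just a handle ("anchor") to it
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        buf.putLong(0, 0xCAFEBABEL); // raw primitive write at an absolute offset
        buf.putInt(8, 42);
        return new long[] { buf.getLong(0), buf.getInt(8) };
    }

    public static void main(String[] args) {
        long[] r = roundTrip();
        System.out.println(Long.toHexString(r[0]) + " " + r[1]); // cafebabe 42
    }
}
```

MicroStream goes one level lower than this (via "Unsafe" operations on the buffer's memory), but the direct buffer as an anchor for off-heap memory is the same idea.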
No code is generated. No byte code hacking, tacit replacement of instances by proxies or similar monkey business is used. On the technical level, it's just a Java library (including "Unsafe" usage), but with a lot of properly devised logic.
As a side note: reflection is not as slow as it is commonly considered to be. Not any more. It was, but it has been optimized pretty much in some past Java version(s?).
It's only slow if every operation has to do all the class analysis, field lookups, etc. anew (which an awful lot of frameworks seem to do because they are just badly written). If the fields are collected (set accessible, etc.) once and then cached, reflection is actually surprisingly fast.
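A minimal sketch of the caching pattern described here – resolve the Field once (lookup plus setAccessible), then keep only the cheap cached access on the hot path. Class and field names are made up for illustration:

```java
import java.lang.reflect.Field;

public class CachedReflection {
    static class Point { int x = 7; }

    // done ONCE per class: field lookup + setAccessible, then cached
    private static final Field X_FIELD;
    static {
        try {
            X_FIELD = Point.class.getDeclaredField("x");
            X_FIELD.setAccessible(true);
        } catch (NoSuchFieldException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // the hot path only performs the cached field access,
    // not the expensive class analysis
    static int readX(Point p) throws IllegalAccessException {
        return X_FIELD.getInt(p);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readX(new Point())); // 7
    }
}
```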
Regarding the comparison to Protobuf-Serialization:
I can't say anything specific about it since I haven't used Protocol Buffers and I don't know how it works internally.
As usual with complex technologies, a truly meaningful comparison might be pretty difficult to do since different technologies have different optimization priorities and limitations.
Most serialization approaches give up referential consistency and only store "data" (i.e. if two objects reference a third one, deserialization will create TWO instances of that third object.
Like this: A->C<-B ==serialization==> A->C1 B->C2.
This basically breaks/ruins/destroys object graphs and makes serialization of cyclic graphs impossible, since it creates an endlessly cascading replication. See JSON serialization, for example. Funny stuff.)
Even Brian Goetz' draft for a Java "Serialization 2.0" includes that limitation (see "Limitations" at http://cr.openjdk.java.net/~briangoetz/amber/serialization.html) (and another one which breaks the separation of concerns).
MicroStream does not have that limitation. It handles arbitrary object graphs properly without ruining their references.
Keeping referential consistency intact is by far not "trying to do too much", as he writes. It is "doing it properly". One just has to know how to do it properly. And it even is rather trivial if done correctly.
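The shared-reference problem and its fix can be sketched with a hypothetical naive copier versus an identity-tracking one (illustrative code only, not MicroStream's implementation):

```java
import java.util.IdentityHashMap;
import java.util.Map;

public class SharedReferenceSketch {
    static class Node {
        final String name;
        Node ref;
        Node(String name) { this.name = name; }
    }

    // "naive serializer": recursively copies per object,
    // so a shared target gets duplicated (A->C1, B->C2)
    static Node naiveCopy(Node n) {
        if (n == null) return null;
        Node copy = new Node(n.name);
        copy.ref = naiveCopy(n.ref);
        return copy;
    }

    // identity-aware copy: each original maps to exactly one copy,
    // so the A->C<-B shape survives (and cycles terminate)
    static Node graphCopy(Node n, Map<Node, Node> seen) {
        if (n == null) return null;
        Node copy = seen.get(n);
        if (copy != null) return copy;
        copy = new Node(n.name);
        seen.put(n, copy); // register BEFORE recursing to handle cycles
        copy.ref = graphCopy(n.ref, seen);
        return copy;
    }

    public static void main(String[] args) {
        Node c = new Node("C");
        Node a = new Node("A"); a.ref = c;
        Node b = new Node("B"); b.ref = c;

        System.out.println(naiveCopy(a).ref == naiveCopy(b).ref); // false: C duplicated

        Map<Node, Node> seen = new IdentityHashMap<>();
        System.out.println(graphCopy(a, seen).ref == graphCopy(b, seen).ref); // true: C stays shared
    }
}
```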
So, depending on how many limitations Protobuf-Serialization has ("pacts with the devil"), it might be hardly or even not at all comparable to MicroStream in general.
Of course, you can always create some performance comparison tests for your particular requirements and see which technology suits you best. Just make sure you are aware of the limitations a certain technology imposes on you (ruined referential consistency, forbidden types, required annotations, required default constructor / getters / setters, etc.).
MicroStream has none*.
(*) within reason: Serializing/storing system-internals (e.g. Thread) or non-entities (like lambdas or proxy instances) is, while technically possible, intentionally excluded.

Prevent debuggers to see variable value

Is there a way to configure the properties of my JPA entity (I am using Hibernate as the implementation) such that no one can see their values while debugging?
The property is transient and, for security reasons, I don't want anyone to see it while debugging. The jar/war of my application will be used by a third party.
Assuming you're running your program on an Oracle JVM, and allowing people to attach to that JVM via a debugger -- no, you can't hide certain fields.
The interface that the debuggers will use to talk to the Java process is JDI 1, and it gives pretty much all of the information that the JVM has about your code. Specifically:
If a person has an ObjectReference to the object that contains your sensitive data, they can get its ReferenceType.
They can call ReferenceType::allFields to list all of the fields, including transient ones, in the class:
All declared and inherited fields are included, regardless of whether they are hidden or multiply inherited.
Back on the ObjectReference, they can call ObjectReference::getValue(Field) to get the field's value. Note that the documentation doesn't say anything about an IllegalAccessException, or anything like that.
Even if you could lock down certain fields, it wouldn't do you much good; the debugger would be able to see the value when it's in a local variable (either when you read the field, or when you're about to write to it). What you really want is to lock down certain values, not fields. And that's also not in the JDI.
1 Actually JDWP under the hood, but JDI is built on top of that and easier to discuss here.
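You can observe the same "nothing is hidden" effect inside a single JVM with plain reflection, which – like JDI – is not stopped by private or transient. The Entity class below is a made-up stand-in for the JPA entity:

```java
import java.lang.reflect.Field;

public class NoHidingSketch {
    // hypothetical entity: the field is both private AND transient,
    // yet neither keyword hides the value from reflection (or a debugger)
    static class Entity {
        private transient String secret = "s3cr3t";
    }

    static String peek() throws Exception {
        Entity e = new Entity();
        Field f = Entity.class.getDeclaredField("secret");
        f.setAccessible(true); // reflection's version of the debugger's unrestricted view
        return (String) f.get(e);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(peek()); // s3cr3t
    }
}
```

A debugger attached via JDWP has at least this much access, without even needing `setAccessible`.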

How can I test for the presence of an object in a List managed by Protocol Buffers?

I have generated a Java API from some Protocol Buffers source code. You can see it here: https://microbean.github.io/microbean-helm/apidocs/ Things in the hapi.* package hierarchy are generated. The semantics of this API don't matter for this question.
As part of working with this API, I need to assemble a Chart.
With Protocol Buffers, when you are creating or mutating things, you work with "builders". These builders have a build() method that returns the (immutable) object that they are responsible for building. That's fine. Consequently, in this generated API a Chart.Builder builds Charts.
In this generated API, a Chart.Builder has "sub-" Chart.Builders that it contains. The list of such sub-builders is known as its dependencies and is represented via a smattering of generated methods. I'll point out two interesting ones:
addDependencies that takes a Chart.Builder
addDependenciesBuilder that takes no arguments and returns the current Chart.Builder
(I'll start by saying I have no idea what the second method is good for. It appears to stuff a new builder into an internal list and then...that's it.)
In the first method's case, if I hand it a Chart.Builder, this should add it. Just what I needed!
OK, but now I don't want to add a Chart.Builder if it's already present. How can I reliably check for the presence of a sub-builder in a top-level builder? More generally, in a Protocol Buffers Java API, how can I check for a builder inside a List of builders?
(I understand I can roll my own "equality" checker, but I'm wary that perhaps I'm off in the wrong patch of weeds altogether: maybe this isn't how you construct an object graph using Protocol Buffers-generated code?)
You would think that I could call the getDependenciesBuilderList() method, and see if it contains my candidate builder. But I've found this doesn't work: java.util.List#contains(Object) will return false for any builder passed in.
Surely this very simple treat-a-list-as-a-set pattern has a solution when working with Java APIs generated from .proto files? If so, I'm failing to see it. How can I do what I want?
(For further reading, I've gone through https://developers.google.com/protocol-buffers/docs/javatutorial which treats the subject in a fairly cursory fashion.)
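The symptom can be reproduced without protobuf at all: as far as I can tell from the generated Java code, built messages get value-based equals()/hashCode(), while builders inherit Object's identity equality, so List#contains never matches a *different* builder instance. A minimal stand-in (hypothetical names, not the generated hapi.* API) – the usual workaround is to compare the built messages, not the builders:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class BuilderEqualitySketch {
    // stand-in for a generated immutable message: value-based equals
    static final class Chart {
        final String name;
        Chart(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof Chart && ((Chart) o).name.equals(name);
        }
        @Override public int hashCode() { return Objects.hash(name); }
    }

    // stand-in for a generated builder: inherits Object's identity equals
    static final class Builder {
        String name;
        Builder setName(String name) { this.name = name; return this; }
        Chart build() { return new Chart(name); }
    }

    // identity-based: an equivalent but distinct builder is never "contained"
    static boolean containsAsBuilder() {
        List<Builder> builders = new ArrayList<>();
        builders.add(new Builder().setName("redis"));
        return builders.contains(new Builder().setName("redis"));
    }

    // value-based: comparing the built messages works as expected
    static boolean containsAsMessage() {
        List<Builder> builders = new ArrayList<>();
        builders.add(new Builder().setName("redis"));
        List<Chart> built = new ArrayList<>();
        for (Builder b : builders) built.add(b.build());
        return built.contains(new Builder().setName("redis").build());
    }

    public static void main(String[] args) {
        System.out.println(containsAsBuilder()); // false
        System.out.println(containsAsMessage()); // true
    }
}
```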

How to analyze MAT with eclipse

My web application is running in Apache Tomcat.
The classloader/component org.apache.catalina.loader.WebappClassLoader # 0x7a199fae8 occupies 1,70,86,32,104 (88.08%) bytes.
The memory is accumulated in one instance of java.util.concurrent.ConcurrentHashMap$Segment[] loaded by <system class loader>.
I got this result while analyzing a heap dump. How can I analyze it further?
You provide very little information, so I can only provide very little advice… ;-)
First you need to find out who is using the largest objects (the HashMap in your case). Try to look at the contents of the HashMap so you can find out what it is used for. You should also look at where these objects are referenced.
Then you can try to limit its size. Depending on whether it is used by a framework or by your own code, this can be easy (e.g. a configuration change for a framework's cache), medium (e.g. you need to refactor your own code) or difficult (e.g. it is deeply buried in a library you have no control over).
Often the culprit is not the one you expect: just because an object instance (in your case the HashMap) accumulates a lot of memory does not mean the "owner" of this object is the root cause of the problem. You may well have to look some levels above or below in the object tree, or even in a completely different location. In most cases it is crucial that you know your application very well.
Update: You can try to inspect the contents of a HashMap by right-clicking it and selecting Java Collections, Hash Entries. For general objects you can use List objects, with incoming references (to list all objects that reference the selected object) or with outgoing references (to list all objects that are referenced by the selected object).
Memory analysis is not an easy task and can require a lot of time, at least if you are not used to it…
If you need further assistance you need to provide more details about your application, the frameworks you use and how the heap looks like in MAT.

Why do some APIs (like JCE, JSSE, etc) provide their configurable properties through singleton maps?

For example:
Security.setProperty("ocsp.enable", "true");
And this is used only when a CertPathValidator is used. I see two options for improvement:
again singleton, but with getter and setter for each property
an object containing the properties relevant to the current context:
CertPathValidator.setValidatorProperties(..) (it already has a setter for PKIXParameters, which is a good start, but it does not include everything)
Some reasons might be:
setting the properties from the command line - a simple transformer from command-line to default values in the classes suggested above would be trivial
allowing additional custom properties by different providers - they can have public Map getProviderProperties(), or even public Object .. with casting.
I'm curious, because these properties are not always in the most visible place, and instead of discovering them while using the API, you have to go through dozens of Google results before (if lucky) finding them. Because – in the first place – you don't always know what exactly you are looking for.
Another fatal drawback I just observed is that this is not thread-safe. For example, if two threads want to check a revocation via OCSP, they each have to set the ocsp.responderURL property – and may override each other's settings.
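The last-writer-wins behavior is easy to demonstrate, since Security properties form a single global map for the whole JVM (the URLs below are placeholders):

```java
import java.security.Security;

public class GlobalPropertySketch {
    static String lastWriterWins() {
        // thread/context 1 configures "its" OCSP responder...
        Security.setProperty("ocsp.responderURL", "http://ocsp.a.example");

        // ...but context 2 silently overwrites the single global slot
        Security.setProperty("ocsp.responderURL", "http://ocsp.b.example");

        // context 1 would now validate against the wrong responder
        return Security.getProperty("ocsp.responderURL");
    }

    public static void main(String[] args) {
        System.out.println(lastWriterWins()); // http://ocsp.b.example
    }
}
```

Sequential writes are shown for determinism; with two real threads the interleaving just makes the outcome unpredictable rather than merely wrong.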
This is actually a great question that forces you to think about design decisions you may have made in the past. Thanks for asking a question that should have occurred to me years ago!
It sounds like the objection is not so much the singleton aspect of this (although an entirely different discussion could occur about that) - but the use of string keys.
I've worked on APIs that used this sort of scheme, and the reasons you outline above were definitely the driving factors - it makes it crazy simple to parse a command line or properties file, and it allows for 3rd party extensibility without impact to the official API.
In our library, we actually had a class with a bunch of static final String entries for each of the official parameters. This gave us the best of both worlds - the developer could still use code completion where it made sense to do so. It also becomes possible to construct hierarchies of related settings using inner classes.
All that said, I think that the first reason (easy parsing of command line) doesn't really cut it. Creating a reflection driven mechanism for pushing settings into a bunch of setters would be fairly easy, and it would prevent the cruft of String->object transformation from drifting into the main application classes.
Extensibility is a bit trickier, but I think it could still be handled using a reflection driven system. The idea would be to have the main configuration object (the one with all the setters in it) also have a registerExtensionConfiguration(xxx) method. A standard notation (probably dot separated) could be used to dive into the resultant acyclic graph of configuration objects to determine where the setter should be called.
The advantage of the above approach is that it puts all of the command line argument/properties file parsing exception handling in one place. There isn't a risk of a mis-formatted argument floating around for weeks before it gets hit.
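A sketch of that reflection-driven idea (all class and setting names are invented for illustration): map key=value arguments onto the typed setters of one configuration object, so the String-to-object conversion and its error handling live in a single place.

```java
import java.lang.reflect.Method;

public class ReflectiveConfigSketch {
    // the configuration object with ordinary typed setters
    public static class Config {
        private boolean ocspEnabled;
        private int timeoutMillis;
        public void setOcspEnabled(boolean b) { this.ocspEnabled = b; }
        public void setTimeoutMillis(int t) { this.timeoutMillis = t; }
        public boolean isOcspEnabled() { return ocspEnabled; }
        public int getTimeoutMillis() { return timeoutMillis; }
    }

    // "ocspEnabled=true" -> config.setOcspEnabled(true); all String->type
    // conversion (and any mis-formatted argument) surfaces right here
    static void apply(Config config, String arg) throws Exception {
        String[] kv = arg.split("=", 2);
        String setter = "set" + Character.toUpperCase(kv[0].charAt(0)) + kv[0].substring(1);
        for (Method m : Config.class.getMethods()) {
            if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                Class<?> p = m.getParameterTypes()[0];
                Object value =
                    p == boolean.class ? Boolean.parseBoolean(kv[1]) :
                    p == int.class     ? Integer.parseInt(kv[1])     : kv[1];
                m.invoke(config, value);
                return;
            }
        }
        throw new IllegalArgumentException("unknown setting: " + kv[0]);
    }

    public static void main(String[] args) throws Exception {
        Config c = new Config();
        apply(c, "ocspEnabled=true");
        apply(c, "timeoutMillis=2500");
        System.out.println(c.isOcspEnabled() + " " + c.getTimeoutMillis()); // true 2500
    }
}
```

The dot-separated extension lookup mentioned above would slot in where the setter is resolved: split the key on dots first, walk the registered sub-configuration objects, then apply the final segment as shown.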