I am programming an extension for analyzing the files included when change sets are delivered to a stream.
It is an advisor, because if the analysis fails then nothing can be delivered.
In addition I have read the articles:
https://jazz.net/library/article/1000
https://rsjazz.wordpress.com/2013/02/28/setting-up-rational-team-concert-for-api-development/
https://jazz.net/wiki/bin/view/Main/CustomPreconditionsTable
But I still have some doubts.
I have created a plugin project with the extension point ID com.ibm.team.scm.server.deliver and a Java class, but I don't know how to get the paths of the files included in the deliver so I can analyze them:
import org.eclipse.core.runtime.IProgressMonitor;
import com.ibm.team.process.common.IProcessConfigurationElement;
import com.ibm.team.process.common.advice.AdvisableOperation;
import com.ibm.team.process.common.advice.IAdvisorInfoCollector;
import com.ibm.team.process.common.advice.runtime.IOperationAdvisor;
import com.ibm.team.repository.common.TeamRepositoryException;
import com.ibm.team.repository.service.AbstractService;
public class CheckBadCharacterAdvisor extends AbstractService implements IOperationAdvisor {

    @Override
    public void run(AdvisableOperation operation,
            IProcessConfigurationElement advisorConfiguration,
            IAdvisorInfoCollector collector, IProgressMonitor monitor)
            throws TeamRepositoryException {
        Object data = operation.getOperationData();
        // what else here?
    }
}
How could I get the change sets included in the delivery?
Or: what Javadoc or steps do you follow to get this information?
I don't have the reputation for all the links yet....
These posts show some SCM API that you should look at, in order to approach your problem:
https://rsjazz.wordpress.com/2013/10/15/extracting-an-archive-into-jazz-scm-using-the-plain-java-client-libraries/
http://thescmlounge.blogspot.de/2013/08/getting-your-stuff-using-rtc-sdk-to-zip.html
Unfortunately the answers are in the wrong order...
And more posts I have found useful for RTC SCM API:
https://rsjazz.wordpress.com/2014/09/02/reading-and-writing-files-directly-from-and-to-an-rtc-scm-stream/
This page has pointers to more API examples that could come in handy as well: https://rsjazz.wordpress.com/interesting-links/
I have been relatively successful at finding usages in the RTC SDK using PluginSpy, YARI, and plain Java Search, e.g. searching for references to classes or methods I found. Sometimes just guessing a method name and searching with an asterisk helps a lot.
Good luck with your efforts.
I have only done a little bit with the SCM APIs. Here is an example for an advisor. Most of it is common to a follow-up action/participant, so this could be a good starting point: https://rsjazz.wordpress.com/2012/11/01/restrict-delivery-of-changesets-to-workitem-types-advisordelivery-of-changesets-associated-to-wrong-work-item-types-advisor/
You want to use com.ibm.team.scm.service.internal.AbstractScmService instead of AbstractService, because it is the entry point into the SCM API.
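As a rough illustration of how the pieces fit together, here is a minimal sketch modeled on that advisor example. The exact class and method names (DeliverOperationData, getChangeSetHandles(), and the internal packages they live in) are assumptions to verify against your RTC SDK version, since this is internal API:

import java.util.List;

import org.eclipse.core.runtime.IProgressMonitor;

import com.ibm.team.process.common.IProcessConfigurationElement;
import com.ibm.team.process.common.advice.AdvisableOperation;
import com.ibm.team.process.common.advice.IAdvisorInfoCollector;
import com.ibm.team.process.common.advice.runtime.IOperationAdvisor;
import com.ibm.team.repository.common.TeamRepositoryException;
import com.ibm.team.scm.common.IChangeSetHandle;
import com.ibm.team.scm.service.internal.AbstractScmService;
import com.ibm.team.scm.service.internal.process.DeliverOperationData;

public class CheckBadCharacterAdvisor extends AbstractScmService implements IOperationAdvisor {

    @Override
    public void run(AdvisableOperation operation,
            IProcessConfigurationElement advisorConfiguration,
            IAdvisorInfoCollector collector, IProgressMonitor monitor)
            throws TeamRepositoryException {
        Object data = operation.getOperationData();
        if (!(data instanceof DeliverOperationData)) {
            return; // not a deliver operation, nothing to check
        }
        DeliverOperationData deliverData = (DeliverOperationData) data;
        // The change sets being delivered; resolve them through the SCM
        // services to walk their changes and get at the affected files.
        List<IChangeSetHandle> changeSetHandles = deliverData.getChangeSetHandles();
        // ... resolve the change sets, inspect their file items, and report
        //     problems via the collector to block the deliver ...
    }
}

From the change set handles you can fetch the change sets and their versionable handles via the SCM services, which is where the linked posts on reading file content from a stream come in.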
First of all, Java is not my usual language, so I'm quite basic at it. I need to use it for this particular project, so please be patient, and if I have omitted any relevant information, please ask for it, I will be happy to provide it.
I have been able to implement CoreNLP and, seemingly, have it working right, but it is generating lots of messages like:
ene 20, 2017 10:38:42 AM edu.stanford.nlp.process.PTBLexer next
ADVERTENCIA: Untokenizable: 【 (U+3010, decimal: 12304)
After some research (documentation, Google, other threads here), I think (sorry, I don't know how to tell for sure) CoreNLP is finding the slf4j-api.jar in my classpath and logging through it.
Which JVM properties can I use to set the logging level of the messages that will be printed out?
Also, in which .properties file could I set them? (I already have a commons-logging.properties, a simplelog.properties and a StanfordCoreNLP.properties in my project's resource folder to set properties for other packages.)
Om’s answer is good, but two other possibly useful approaches:
If it is just these warnings from the tokenizer that are annoying you, you can (in code or in StanfordCoreNLP.properties) set a property so they disappear: props.setProperty("tokenize.options", "untokenizable=noneKeep");.
If slf4j is on the classpath, then, by default, our own Redwoods logger will indeed log through slf4j. So, you can also set the logging level using slf4j.
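For instance, if the binding on your classpath is slf4j-simple (an assumption; logback or log4j bindings have their own configuration), a minimal sketch would be to raise its default log level before the pipeline is created:

import java.util.Properties;

import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class QuietPipeline {
    public static void main(String[] args) {
        // Only errors from slf4j-simple; could also be passed on the command
        // line as -Dorg.slf4j.simpleLogger.defaultLogLevel=error
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "error");

        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    }
}

The same key can also live in a simplelogger.properties file on the classpath, as described in the slf4j-simple documentation.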
If I understand your problem, you want to disable all Stanford NLP logging messages while the program is executing.
You can disable the logging messages. Stanford NLP uses the Redwood logging framework. First clear Redwood's default configuration (which displays the log messages), then create the StanfordCoreNLP pipeline:
import edu.stanford.nlp.util.logging.RedwoodConfiguration;
RedwoodConfiguration.current().clear().apply();
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
Hope it helps.
In accordance with Christopher Manning's suggestion, I followed this link
How to configure slf4j-simple
I created a file src/simplelogger.properties with the line org.slf4j.simpleLogger.defaultLogLevel=warn.
I was able to solve it by setting a blank output stream as the system error stream while the annotators are loaded:
PrintStream originalErr = System.err;                      // keep a reference to the real stderr
System.setErr(new PrintStream(new BlankOutputStream()));   // set blank error stream
// ... Add annotators ...
System.setErr(originalErr);                                // reset to the default error stream
The accompanying class is:
import java.io.IOException;
import java.io.OutputStream;

public class BlankOutputStream extends OutputStream {
    @Override
    public void write(int b) throws IOException {
        // Do nothing
    }
}
Om's answer disables all logging. However, if you wish to still log errors then use:
RedwoodConfiguration.errorLevel().apply();
I also use JDK logging instead of slf4j logging, to avoid loading the slf4j dependencies, as follows:
RedwoodConfiguration.javaUtilLogging().apply();
Both options can be used together and in any order. Required import is:
import edu.stanford.nlp.util.logging.RedwoodConfiguration;
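Putting it together, a minimal sketch (the annotator list is just a placeholder) might look like:

import java.util.Properties;

import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.logging.RedwoodConfiguration;

public class ErrorsOnlyPipeline {
    public static void main(String[] args) {
        // Route Redwood through java.util.logging and keep only errors,
        // as described above; order of the two calls does not matter.
        RedwoodConfiguration.javaUtilLogging().apply();
        RedwoodConfiguration.errorLevel().apply();

        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    }
}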
I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)

print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY)
My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do that by creating Java classes from the proto files using gRPC.
The documentation is very sketchy, which is why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly you need to do the following steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
For more details look at the tutorial itself.
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
Which means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess it's recommended that input_fn()-type things are supposed to return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)

    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # these are not known in serving!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.
I am trying to use JACOB 1.17 (the latest stable version) to access a 64-bit in-process COM server, i.e. MyObject-x64.dll.
My CoClass has two dual interfaces: IFoo (the default) and IBar. IFoo contains foo_method() and IBar contains bar_method(). Both methods have a dispatch ID of 1.
My Java code is:
import com.jacob.activeX.ActiveXComponent;
import com.jacob.com.Dispatch;
import com.jacob.com.LibraryLoader;
import com.jacob.com.Variant;
// ...
ActiveXComponent my_object = new ActiveXComponent("MyObject.MyClass"); // OK
Dispatch.call(my_object, "foo_method"); // OK
Dispatch ibar = my_object.QueryInterface("{DE3FF217-120B-4F1E-BEF5-098B8ABDEC1F}"); // OK
Dispatch.call(ibar, "bar_method"); // Exception - "Can't map names to dispid:bar_method"
Dispatch.getIDOfName(ibar, "bar_method"); // Exception - "Can't map names to dispid:bar_method"
Dispatch.call(ibar, "foo_method"); // OK, executes foo_method
Dispatch.call(ibar, 1); // OK, executes foo_method
So it seems that either QueryInterface has returned the wrong interface, or the call on ibar is going to the default interface instead of the result of the QueryInterface.
I have had a quick look through the JNI source code for jacob-1.17-x64.dll and can't see any obvious problem with the QueryInterface or call implementations, although I haven't looked at JNI code before, so I may be missing something obvious.
There is a sample that comes with JACOB, samples/com/jacob/samples/atl, which accesses multiple interfaces, and it uses QueryInterface the same way I have. However, I can't run this sample as it requires a MultiFace.dll which is not provided. (Source is provided, but it is MSVC++-specific source, and I don't use MSVC++.)
The IID in QueryInterface is definitely correct, and my object definitely isn't broken; I can access IBar fine using a free trial of one of the commercial Java-COM bridges, as well as from Visual Basic.
Is JACOB bugged, or am I doing something wrong?
I am using JRE 1.7.0_51-b13.
Actually, JACOB is OK. The problem is that C++Builder XE5 has a bugged implementation of IDispatch. If you QueryInterface for IDispatch plus the IID of the interface you want, you get a valid pointer, but it actually points to the original interface you queried from, not the new one.
The other access methods must all be using vtable binding, so they did not encounter the problem.
Leaving this answer here in case anyone else has the same issue and searches.
So far, I have not discovered a workaround.
Basically, I have started updating a lot of Heroes spells to 1.7.2, and this update broke .getHealth() and .getMaxHealth(). I am trying to fix it but I do not know how. If anyone has some advice or samples I will be in their debt. Below is some code where I use the .getHealth() method.
This is a link to the error: http://puu.sh/7BrEP.png. It says the method is ambiguous for that type.
public void tickHero(Hero hero) {
    if (hero.getPlayer().getHealth() - damage > 1) {
        addSpellTarget(hero.getPlayer(), plugin.getCharacterManager().getHero(caster));
        damageEntity(hero.getPlayer(), caster, damage, DamageCause.MAGIC);
        //hero.getPlayer().damage(damage, caster);
    }
}
As of 1.7.2, there are two getHealth() and getMaxHealth() methods. This is because of the way Bukkit handled Minecraft changing the way entity health is stored in 1.6. You can read more about this here.
If you aren't using any NMS code, you should use the bukkit.jar in your build path as opposed to craftbukkit.jar. This should resolve your issue easily enough.
If you do need NMS code, you need to have both bukkit.jar AND craftbukkit.jar in your build path. Furthermore, you have to have bukkit.jar above craftbukkit.jar in the build path for it to work.
I'm wondering if this is possible to achieve with Apache Camel. What I would like to do, is have Camel look at a directory of files, and only copy the ones whose "Last Modified" date is more recent than a certain date. For example, only copy files that were modified AFTER February 7, 2014. Basically I want to update a variable for the "Last Run Date" every time Camel runs, and then check if the files were modified after the Last Run.
I would like to use the actual timestamp on the file, not anything provided by Camel... it is my understanding that there is a deprecated method in Camel that used to stamp files when Camel looked at them, and then that would let you know whether they have been processed already or not. But this functionality is deprecated so I need an alternative.
Apache recommends moving or deleting the file after processing to know whether it has been processed, but this is not an option for me. Any ideas? Thanks in advance.
SOLVED (2014-02-10):
import java.util.Date;

import org.apache.camel.builder.RouteBuilder;

public class TestRoute extends RouteBuilder {

    static final long A_DAY = 86400000;

    @Override
    public void configure() throws Exception {
        Date yesterday = new Date(System.currentTimeMillis() - A_DAY);

        from("file://C:\\TestOutputFolder?noop=true").
            filter(header("CamelFileLastModified").isGreaterThan(yesterday)).
            to("file://C:\\TestInputFolder");
    }
}
No XML configuration required. Thanks for the answers below.
Yes, you can implement a filter and return true or false depending on whether you want to include the file. In that logic you can check the file's modification time and see whether the file is more than X days old, etc.
See the Camel file docs at
http://camel.apache.org/file2
And look for the filter option, e.g. where you implement the org.apache.camel.component.file.GenericFileFilter interface.
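As a rough sketch (how the cutoff, i.e. your "last run date", is stored and updated between runs is left to you), such a filter could look like this:

import java.io.File;

import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

// Accepts only files whose last-modified timestamp is after a cutoff.
public class ModifiedSinceFilter implements GenericFileFilter<File> {

    private final long cutoffMillis;

    public ModifiedSinceFilter(long cutoffMillis) {
        this.cutoffMillis = cutoffMillis;
    }

    @Override
    public boolean accept(GenericFile<File> file) {
        return file.getLastModified() > cutoffMillis;
    }
}

You would then register an instance of it in the registry and reference it on the endpoint, for example file://C:/TestOutputFolder?filter=#modifiedSinceFilter (the bean name here is just an example).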
Take a look at Camel's File Language. Looks like file:modified might be what you are looking for.
example:
filterFile=${file:modified} < ${date:now-24h}
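For instance, that expression can go straight on the consumer endpoint in a route, assuming a Camel version that supports the filterFile option. This is a sketch based on the accepted route above; note that < selects files older than 24 hours, so use > if, as in the original question, you want files modified more recently than the cutoff:

import org.apache.camel.builder.RouteBuilder;

public class FilterFileRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Only pick up files modified within the last 24 hours.
        // Depending on your Camel version you may need to URL-encode the
        // spaces in the option value.
        from("file://C:\\TestOutputFolder?noop=true&filterFile=${file:modified} > ${date:now-24h}").
            to("file://C:\\TestInputFolder");
    }
}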