NetLogo API Controller - Get Table View - Java

I am using the NetLogo API controller with Spring Boot.
This is my code (I got it from this link):
HeadlessWorkspace workspace = HeadlessWorkspace.newInstance();
try {
    workspace.open("models/Residential_Solar_PV_Adoption.nlogo", true);
    workspace.command("set number-of-residences 900");
    workspace.command("set %-similar-wanted 7");
    workspace.command("set count-years-simulated 14");
    workspace.command("set number-of-residences 500");
    workspace.command("set carbon-tax 13.7");
    workspace.command("setup");
    workspace.command("repeat 10 [ go ]");
    workspace.command("reset-ticks");
    workspace.dispose();
}
catch (Exception ex) {
    ex.printStackTrace();
}
I got this result in the console:
But I want to get the table view and save it to a database. Which command can I use to get the table view?
Table view:
Any help, please?

If you can clarify why you're trying to generate the data this way, I or others might be able to give better advice.
There is no single NetLogo command or NetLogo API method to generate that table; you have to use BehaviorSpace to get it. Here are some options, listed in rough order from simplest to hardest.
Option 1
If possible, I'd recommend just running BehaviorSpace experiments from the command line to generate your table. This will get you exactly the output you're looking for. You can find information on how to do that in the NetLogo manual's BehaviorSpace guide. If necessary, you can run headless NetLogo from the command line from within a Java program; look for resources on calling external programs from Java, for example with ProcessBuilder (a sketch follows below).
If you're running from within Java in order to set up and change the parameters of your BehaviorSpace experiments in a way that you cannot do from within the program, you could instead generate experiment XML files in Java to pass to NetLogo at the command line. See the docs on the XML format.
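For example, a minimal ProcessBuilder sketch might look like the following; the script path, experiment name, and output file name are placeholders to adapt to your installation, while the --model, --experiment, and --table flags are the ones documented for headless NetLogo:

import java.io.IOException;

public class RunBehaviorSpace {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Invoke NetLogo's headless runner as an external process.
        ProcessBuilder pb = new ProcessBuilder(
                "/path/to/netlogo/netlogo-headless.sh",
                "--model", "models/Residential_Solar_PV_Adoption.nlogo",
                "--experiment", "my-experiment",
                "--table", "results-table.csv");
        pb.inheritIO(); // stream NetLogo's console output to this process
        int exitCode = pb.start().waitFor();
        System.out.println("NetLogo exited with code " + exitCode);
    }
}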
Option 2
You can recreate the contents of the table by using the CSV extension in your model and adding a few more commands to generate the data. This will not create exactly the same table, but it will get your data output in a computer- and human-readable format.
In pure NetLogo code, you'd want something like the code below. Note that you can control more of the behavior (like file names or the desired variables) by running other pre-experiment commands from your Java code before running setup or go. You could also run the CSV-specific file code from Java using the controlling API and leave the model unchanged, but then you'll need to write your own NetLogo version of the csv:to-row primitive; see the Java sketch after the code below.
extensions [ csv ]

globals [
  ;; your model globals here
  output-variables
]

to setup
  clear-all
  ;;; your model setup code here
  file-open "my-output.csv"
  ;; the given variables should be valid reporters for the NetLogo model
  set output-variables [ "ticks" "current-price" "number-of-residences" "count-years-simulated" "solar-PV-cost" "%-lows" "k" ]
  file-print csv:to-row output-variables
  reset-ticks
end

to go
  ;;; the rest of your model code here
  file-print csv:to-row map [ v -> runresult v ] output-variables
  file-flush
  tick
end
Option 3
If you really need to reproduce the BehaviorSpace table export exactly, you can try to run a BehaviorSpace experiment directly from Java. The table is generated by this code, but as you can see it's tied in with the LabProtocol class, meaning you'll have to set up and run your model through BehaviorSpace instead of step-by-step through a workspace as in your sample code.
A good example of this might be the Main.scala object, which extracts some experiment settings from the expected command-line arguments and then uses them with the lab.run() method to run the BehaviorSpace experiment and generate the output. That's Scala code rather than Java, but hopefully it isn't too hard to translate. You'd similarly have to set up an org.nlogo.nvm.LabInterface.Settings instance and pass it to HeadlessWorkspace.newLab.run() to get things going.

Related

How to run a GPR file in the Java API and run a GAMS model

I have a model with the GMS extension. When I run that model with GAMS Studio, it runs perfectly and I obtain the expected results.
I have tried to run the GMS model with the GAMS IDE but I got a lot of errors, so I tried something different: I opened a file with the GPR extension, then imported the GMS model, and everything worked perfectly when I ran the project.
I think I need to do the same thing using the GAMS Java API, but I don't know how to import a GPR file into my workspace.
At the moment I just have the following code:
GAMSWorkspace workspace = new GAMSWorkspace();
workspace.setDebugLevel(DebugLevel.KEEP_FILES);
GAMSJob jobGams = workspace.addJobFromFile("fileModelGms");
jobGams.run();
When I run that code, I obtain an error:
GAMS process returns unsuccessfully with return code : 2 [there was a compilation error]. Check [_gams_java_gjo1.lst] for more details.
The gpr file has a format that is only understood by the GAMSIDE; you cannot pass it to any API. If you get errors calling your model from the API but not from the GAMSIDE, you probably have set certain options in the IDE which you should now set through the API as well. Still, without seeing the exact error, it is hard to give further hints.
I have solved the problem with Lutz's help. I needed to include a directory with the input that the model uses.
This is my code, commented line by line, to show how the GAMS API works. I used a specific workspace because the API creates a folder in the temp files when you run a new job. I also used a GDX database to run my model.
// Specific workspace information is created, example: C:/Desktop/Workspace
GAMSWorkspaceInfo workspaceInfo = new GAMSWorkspaceInfo();
workspaceInfo.setWorkingDirectory("specificPathWorkspace");
// A new workspace is created with workspaceInfo.
GAMSWorkspace workspace = new GAMSWorkspace(workspaceInfo);
workspace.setDebugLevel(DebugLevel.KEEP_FILES);
// Options where you're going to set the input data directory.
GAMSOptions options = workspace.addOptions();
// Set the path with the input data, example: C:/Desktop/InputData
options.IDir.add("PathWithInputData");
// Use a database holding the data to be processed, example: db.gdx
GAMSDatabase gdxdb = workspace.addDatabaseFromGDX("db.gdx");
// Create a job to execute the model.
GAMSJob jobGams = workspace.addJobFromFile(entradasModeloGamsDTO.getPathModeloGams());
// Run the model.
jobGams.run(options, gdxdb);
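If you then need the results back in Java, a minimal follow-up sketch could read the job's output database; the symbol name "x" is a placeholder for a variable declared in your GMS model, and the exact method names should be checked against the com.gams.api docs:

import com.gams.api.GAMSDatabase;
import com.gams.api.GAMSVariable;
import com.gams.api.GAMSVariableRecord;

// ... after jobGams.run(options, gdxdb);
GAMSDatabase outDB = jobGams.OutDB();
GAMSVariable x = outDB.getVariable("x"); // "x" is a placeholder symbol name
for (GAMSVariableRecord record : x) {
    // Print each record's keys and solution level.
    System.out.println("x(" + String.join(",", record.getKeys())
            + ") = " + record.getLevel());
}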

How to create a TensorFlow Serving client for the 'wide and deep' model?

I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)
print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY
)
My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do it by creating Java classes from the proto files using gRPC.
The documentation is very sketchy, hence why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly you need to do the following steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
For more details look at the tutorial itself.
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
That means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess it's recommended that input_fn()-type things return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)
    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # these are not known in serving!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.
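For the Java side of the original question: generating Java classes from the TensorFlow Serving protos with gRPC is indeed the usual route. Below is a hedged sketch of a blocking Predict client, assuming stubs generated from the prediction_service, predict, and model protos; the generated package and class names may differ depending on your proto options, and wide_and_deep, port 9000, and the empty tf.Example are placeholders:

import com.google.protobuf.ByteString;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.TensorProto;
import org.tensorflow.framework.TensorShapeProto;
import tensorflow.serving.Model;
import tensorflow.serving.Predict;
import tensorflow.serving.PredictionServiceGrpc;

public class WideAndDeepClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9000)
                .usePlaintext()
                .build();
        PredictionServiceGrpc.PredictionServiceBlockingStub stub =
                PredictionServiceGrpc.newBlockingStub(channel);

        // Wrap one serialized tf.Example in a DT_STRING tensor of shape [1].
        TensorProto examples = TensorProto.newBuilder()
                .setDtype(DataType.DT_STRING)
                .setTensorShape(TensorShapeProto.newBuilder()
                        .addDim(TensorShapeProto.Dim.newBuilder().setSize(1)))
                .addStringVal(ByteString.copyFrom(buildExampleBytes()))
                .build();

        Predict.PredictRequest request = Predict.PredictRequest.newBuilder()
                .setModelSpec(Model.ModelSpec.newBuilder().setName("wide_and_deep"))
                .putInputs("examples", examples) // key must match the serving input
                .build();

        Predict.PredictResponse response = stub.predict(request);
        System.out.println(response);
        channel.shutdown();
    }

    // Placeholder: build and serialize a tf.Example for your feature columns.
    private static byte[] buildExampleBytes() {
        return new byte[0]; // replace with a real serialized tf.Example
    }
}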

Using Java code in the Django framework

Okay, so I have a simple interface that I designed with the Django framework that takes natural language input from a user and stores it in a table.
Additionally, I have a pipeline that I built in Java using the cTAKES library to do named entity recognition, i.e. it will take the text input submitted by the user and annotate it with relevant UMLS tags.
What I want to do is take the input given by the user and then, once it's submitted, direct it into my Java cTAKES pipeline and feed the annotated output back into the database.
I am pretty new to the web development side of this and can't really find anything on integrating scripts in this sense, so if someone could point me to a useful resource or just in the general right direction, that would be extremely helpful.
=========================
UPDATE:
Okay, so I have figured out that subprocess is the module I want to use in this context, and I have tried implementing some simple code based on the documentation, but I am getting an
Exception Type: OSError
Exception Value: [Errno 2] No such file or directory
Exception Location: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py in _execute_child, line 1335
A brief overview of what I'm trying to do:
This is the code I have in views. Its intent is to take text input from the model form, POST that to the DB, and then pass the input to my script, which produces an XML file that is stored in another column in the DB. I'm very new to Django, so I'm sorry if this is a simple fix, but I couldn't find any helpful documentation relating Django to subprocess.
def queries_create(request):
    if not request.user.is_authenticated():
        return render(request, 'login_error.html')
    form = QueryForm(request.POST or None)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        p = subprocess.Popen([request.POST['post'], './path/to/run_pipeline.sh'])
        p.save()
    context = {
        "title": "Create",
        "form": form,
    }
    return render(request, "query_form.html", context)
Model code snippet:
class Query(models.Model):
    problem/intervention = models.TextField()
    updated = models.DateTimeField(auto_now=True, auto_now_add=False)
    timestamp = models.DateTimeField(auto_now=False, auto_now_add=True)
UPDATE 2:
Okay so the code is no longer breaking by changing the subprocess code as below
def queries_create(request):
    if not request.user.is_authenticated():
        return render(request, 'login_error.html')
    form = QueryForm(request.POST or None)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        p = subprocess.Popen(['path/to/run_pipeline.sh'], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
        (stdoutdata, stderrdata) = p.communicate()
        instance.processed_data = stdoutdata
        instance.save()
    context = {
        "title": "Create",
        "form": form,
    }
    return render(request, "query_form.html", context)
However, I am now getting a "Could not find or load main class pipeline.CtakesPipeline" error that I don't understand, since the script runs fine from the shell in this working directory. This is the script I am trying to call with subprocess:
#!/bin/bash
INPUT=$1
OUTPUT=$2
CTAKES_HOME="full/path/to/CtakesClinicalPipeline/apache-ctakes-3.2.2"
UMLS_USER="####"
UMLS_PASS="####"
CLINICAL_PIPELINE_JAR="full/path/to/CtakesClinicalPipeline/target/CtakesClinicalPipeline-0.0.1-SNAPSHOT.jar"
[[ $CTAKES_HOME == "" ]] && CTAKES_HOME=/usr/local/apache-ctakes-3.2.2
CTAKES_JARS=""
for jar in $(find ${CTAKES_HOME}/lib -iname "*.jar" -type f)
do
    CTAKES_JARS+=$jar
    CTAKES_JARS+=":"
done
current_dir=$PWD
cd $CTAKES_HOME
java -Dctakes.umlsuser=${UMLS_USER} -Dctakes.umlspw=${UMLS_PASS} \
    -cp ${CTAKES_HOME}/desc/:${CTAKES_HOME}/resources/:${CTAKES_JARS%?}:${current_dir}/${CLINICAL_PIPELINE_JAR} \
    -Dlog4j.configuration=file:${CTAKES_HOME}/config/log4j.xml -Xms512M -Xmx3g \
    pipeline.CtakesPipeline $INPUT $OUTPUT
cd $current_dir
I'm not sure how to go about fixing this error so any help is appreciated.
If I understand you correctly, you want to pipe the value of request.POST['post'] to the program run_pipeline.sh and store the output in a field of your instance.
You are calling subprocess.Popen incorrectly. It should be:
p = subprocess.Popen(['/path/to/run_pipeline.sh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
Then pass in the input and read the output:
(stdoutdata, stderrdata) = p.communicate(input=request.POST['post'])
Then save the data, e.g. in a field of your instance
instance.processed_data = stdoutdata
instance.save()
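On the Java side of that pipe, the pipeline's entry point just needs to read the text from stdin and write the annotated result to stdout so that p.communicate() can capture it. A minimal sketch, with the annotation step left as a placeholder for your actual cTAKES call:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class PipelineMain {
    public static void main(String[] args) throws Exception {
        // Read everything Django pipes in via p.communicate().
        BufferedReader in = new BufferedReader(
                new InputStreamReader(System.in, StandardCharsets.UTF_8));
        String inputText = in.lines().collect(Collectors.joining("\n"));
        // Placeholder: run the cTAKES annotation over inputText here.
        String annotatedXml = "<annotations>" + inputText + "</annotations>";
        // Whatever is printed to stdout becomes stdoutdata in the Django view.
        System.out.println(annotatedXml);
    }
}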
I suggest you first make sure you can get the call to the subprocess working in a Python shell, and then integrate it into your Django app.
Please note that creating a (potentially long-running) subprocess in a request is really bad practice and can lead to a lot of problems. The best practice is to delegate long-running tasks to a job queue. For Django, Celery is probably the most commonly used; there is a bit of setup involved, though.

Extract details from a syslog message with Java Grok patterns

I am working on a Java project to parse logs, in which I am using the Java Grok library for pattern recognition. I have defined the pattern as follows:
%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\\[%{POSINT:syslog_pid}\\])?: %{GREEDYDATA:syslog_message}
When I try parsing the line,
Dec 23 14:30:01 louis CRON[619]: (www-data) CMD (php /usr/share/cacti/site/poller.php >/dev/null 2>/var/log/cacti/poller-error.log)
it gives the following output:
syslog_timestamp=Dec 23 14:30:01
syslog_hostname=louis
syslog_program=CRON
syslog_pid=619
syslog_message=(www-data) CMD (php /usr/share/cacti/site/poller.php >/dev/null 2>/var/log/cacti/poller-error.log)
I want to extract further details from the syslog_message to get things like facility and severity. How can I improve the grok pattern to get these details?
I am not an expert on Java Grok. You can use www-data as your facility and error as your severity.
Change your pattern to get this information:
syslog_facility=www-data
syslog_severity=error
Regards,
Ivo
As far as I can see, you can't get the severity and facility from that message, because they're simply not there. Assuming this is written by a syslog daemon, and assuming that syslog daemon is rsyslog, you can change the way things are written down by altering the template and selecting the properties you want.
You can, for example, write one JSON object per line to a file with a template like this (basically copy-pasted from this rsyslog->Elasticsearch blog post):
template(name="jsonPerLine" type="list") {
    constant(value="{")
    constant(value="\"#timestamp\":\"")   property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"")      property(name="hostname")
    constant(value="\",\"severity\":\"")  property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")  property(name="syslogfacility-text")
    constant(value="\",\"tag\":\"")       property(name="syslogtag" format="json")
    constant(value="\",\"message\":\"")   property(name="msg" format="json")
    constant(value="\"}\n")
}
This way, in your application you can parse this JSON and get all these fields, which is much cheaper than using regular expressions (which is what grok does).
You'd use this template to write to files later on in the configuration, for example:
action(
    type="omfile"
    file="/var/log/json_messages"
    template="jsonPerLine"
)
If you get errors from the syntax above, it may be that your distro ships a really, really old version of rsyslog. If that's so, you can get packages from the official repositories.
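Back on the Java side, parsing those JSON lines is then straightforward. A minimal sketch, assuming the jsonPerLine template above and the Jackson library on the classpath (the sample line is made up for illustration):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SyslogJsonParser {
    public static void main(String[] args) throws Exception {
        // One JSON object per line, as produced by the jsonPerLine template.
        String line = "{\"#timestamp\":\"2016-12-23T14:30:01+00:00\","
                + "\"host\":\"louis\",\"severity\":\"info\",\"facility\":\"cron\","
                + "\"tag\":\"CRON[619]:\",\"message\":\" (www-data) CMD (...)\"}";
        JsonNode node = new ObjectMapper().readTree(line);
        System.out.println("severity: " + node.get("severity").asText());
        System.out.println("facility: " + node.get("facility").asText());
    }
}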

Using Inline::Java in Perl with threads

I am writing a trading program in Perl with the newest Finance::InteractiveBrokers::TWS module. I kick off a command-line interface in a separate thread at the beginning of my program, but then when I try to create a TWS object, my program exits with this message:
As of Inline v0.30, use of the Inline::Config module is no longer supported or
allowed. If Inline::Config exists on your system, it can be removed. See the
Inline documentation for information on how to configure Inline. (You should
find it much more straightforward than Inline::Config :-)
I have the newest versions of Inline and Inline::Java. I looked at TWS.pm and it doesn't seem to be using Inline::Config. I set 'SHARED_JVM => 1' in the 'use Inline()' and 'Inline->bind()' calls in TWS.pm, but that did not resolve the issue...
My Code:
use Finance::InteractiveBrokers::TWS;
use threads;
use threads::shared;
our $callback;
our $tws;
my $interface = UserInterface->new();
share($interface);
my $t = threads->create(sub{$interface->runUI()});
$callback= TWScallback->new();
$tws = Finance::InteractiveBrokers::TWS->new($manager); #This is where the program fails
So is Inline::Config installed on your system or not? A cursory inspection of the code is not sufficient to tell whether Perl is loading a module or not. There are too many esoteric ways (some intentional and some otherwise) to load a package or otherwise populate a namespace.
The error message in question comes from this line of code in Inline.pm:
croak M14_usage_Config() if %main::Inline::Config::;
so something in your program is populating the Inline::Config namespace. You should do what the message instructs you to do: find out where Inline/Config.pm is installed on your system (somewhere in your @INC path) and delete it.
