My mobile app has quite a lot of hard-coded data that I want to share between the native Android and iOS versions of the app.
I don't want to maintain a separate settings file for each app: I would like to have a single place where the information is stored.
My first thought was to embed a JSON file in my app and decode it at runtime.
Instead, my goal here is to deserialize the app's data at compile time, in order to:
catch potential errors in the JSON file or the decoding code before shipping the app: the build will fail instead
avoid being slowed down by deserialization at startup time
avoid leaving unencrypted app data lying around as JSON files (.ipa/.apk files are zip archives from which resources can easily be extracted); I'd rather have it obfuscated in code
I'm looking for a command-line tool that I could add to my build scripts which, given a JSON file, infers a schema (and thus classes) AND instantiates an object with all the app settings.
For instance given the following settings.json file:
{
  "gravity": 9.81,
  "scientists": ["Kepler", "Einstein", "Newton"],
  "planets": [
    {
      "name": "Mars",
      "radius": 3390
    },
    {
      "name": "Venus",
      "radius": 6052
    }
  ]
}
I would like to automatically generate a Settings.swift file that could look like:
struct Settings {
    struct Planet {
        var name: String
        var radius: Int
    }

    var gravity: Double
    var scientists: [String]
    var planets: [Planet]

    static func content() -> Settings {
        return Settings(gravity: 9.81, scientists: ["Kepler", "Einstein", "Newton"], planets: [Planet(name: "Mars", radius: 3390), Planet(name: "Venus", radius: 6052)])
    }
}
I could then include the generated file into my project and call Settings.content() once, keep it in a global variable and use it throughout my project.
I want to achieve the same with Android as well.
Tools like quicktype or json2swift do half the job: they don't generate the object-instantiation part, which still needs to happen at runtime.
Any idea?
I have created an open-source NodeJS tool, mobile_app_data, to achieve what I wanted.
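The core of such a generator is small. Below is a rough Python sketch (not the actual mobile_app_data implementation, just an illustration of the idea) of the literal-emitting half: it maps decoded JSON values to Swift literal expressions. A real tool would also infer and emit the matching struct declarations; the `Struct` type name below is a placeholder.

```python
import json

def swift_literal(value):
    """Render a decoded JSON value as a Swift literal expression.
    Handles the scalar/array/object shapes from the question; a real
    tool would also emit the matching struct declarations."""
    if isinstance(value, bool):                # check bool before int!
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return repr(value)
    if isinstance(value, str):
        return json.dumps(value)               # Swift string literals match JSON's
    if isinstance(value, list):
        return "[" + ", ".join(swift_literal(v) for v in value) + "]"
    if isinstance(value, dict):                # rendered as a memberwise initializer
        args = ", ".join(f"{k}: {swift_literal(v)}" for k, v in value.items())
        return f"Struct({args})"               # placeholder type name
    raise TypeError(f"unsupported JSON value: {value!r}")

settings = json.loads('{"gravity": 9.81, "scientists": ["Kepler"]}')
print(swift_literal(settings["gravity"]))      # -> 9.81
print(swift_literal(settings["scientists"]))   # -> ["Kepler"]
```

Walking the JSON tree this way is what lets the generated `content()` function contain the fully instantiated object rather than a decoding step.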
A possible solution would be as follows:
1. Save the .json file in the project directory
2. In the AppDelegate's application(_:didFinishLaunchingWithOptions:), read the above file into a Data object (see "Read JSON file with Swift 3")
3. Make your Settings type conform to Decodable (your inner Planet type will need to conform to Decodable as well)
4. Call JSONDecoder().decode() and provide the data you obtained in step 2
5. You can then save this value anywhere you want
I just noticed that you need the generation to happen when Settings.content() is called. Follow the above steps, but move step 2 into the content() function.
I am using the NetLogo API controller with Spring Boot.
This is my code (I got it from this link):
HeadlessWorkspace workspace = HeadlessWorkspace.newInstance();
try {
    workspace.open("models/Residential_Solar_PV_Adoption.nlogo", true);
    workspace.command("set number-of-residences 900");
    workspace.command("set %-similar-wanted 7");
    workspace.command("set count-years-simulated 14");
    workspace.command("set number-of-residences 500");
    workspace.command("set carbon-tax 13.7");
    workspace.command("setup");
    workspace.command("repeat 10 [ go ]");
    workspace.command("reset-ticks");
    workspace.dispose();
}
catch (Exception ex) {
    ex.printStackTrace();
}
I got this result in the console:
But I want to get the table view and save it to a database. Which command can I use to get the table view?
Table view:
Any help please?
If you can clarify why you're trying to generate the data this way, I or others might be able to give better advice.
There is no single NetLogo command or NetLogo API method to generate that table, you have to use BehaviorSpace to get it. Here are some options, listed in rough order of simplest to hardest.
Option 1
If possible, I'd recommend just running BehaviorSpace experiments from the command line to generate your table. This will get you exactly the same output you're looking for. You can find information on how to do that in the NetLogo manual's BehaviorSpace guide. If necessary, you can run NetLogo headless from the command line from within a Java program, just look for resources on calling out to external programs from Java, maybe with ProcessBuilder.
If you're running from within Java in order to set up and change the parameters of your BehaviorSpace experiments in a way that you cannot do from within the program, you could instead generate experiment XML files in Java to pass to NetLogo at the command line. See the docs on the XML format.
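If you go the generated-XML route, the scaffolding is mechanical. Here is a rough sketch (in Python just for brevity; the same string building is straightforward from Java). The element and attribute names follow the BehaviorSpace experiment XML format described in the NetLogo manual, but verify them against your NetLogo version's docs before relying on them.

```python
# Sketch: generate a BehaviorSpace experiment XML file programmatically,
# so parameters can be varied from code rather than edited by hand.
import xml.etree.ElementTree as ET

def experiment_xml(name, setup, go, metrics, params, repetitions=1):
    root = ET.Element("experiments")
    exp = ET.SubElement(root, "experiment",
                        name=name,
                        repetitions=str(repetitions),
                        runMetricsEveryStep="true")
    ET.SubElement(exp, "setup").text = setup
    ET.SubElement(exp, "go").text = go
    for m in metrics:                      # reporters measured each step
        ET.SubElement(exp, "metric").text = m
    for var, values in params.items():     # parameter sweep values
        evs = ET.SubElement(exp, "enumeratedValueSet", variable=var)
        for v in values:
            ET.SubElement(evs, "value", value=str(v))
    return ET.tostring(root, encoding="unicode")

xml = experiment_xml(
    name="solar-pv",
    setup="setup", go="go",
    metrics=["count turtles"],
    params={"number-of-residences": [500, 900], "carbon-tax": [13.7]},
)
print(xml)
```

You would write this string to a file and pass it to headless NetLogo with the `--setup-file` command-line option.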
Option 2
You can recreate the contents of the table using the CSV extension in your model and adding a few more commands to generate the data. This will not create the exact same table, but it will get your data output in a computer and human readable format.
In pure NetLogo code, you'd want something like the below. Note that you can control more of the behavior (like file names or the desired variables) by running other pre-experiment commands before running setup or go in your Java code. You could also run the CSV-specific file code from Java using the controlling API and leave the model unchanged, but you'll need to write your own NetLogo code version of the csv:to-row primitive.
extensions [ csv ]  ; the csv extension is required for csv:to-row

globals [
  ;; your model globals here
  output-variables
]

to setup
  clear-all
  ;; your model setup code here
  file-open "my-output.csv"
  ;; the given variables should be valid reporters for the NetLogo model
  set output-variables [ "ticks" "current-price" "number-of-residences" "count-years-simulated" "solar-PV-cost" "%-lows" "k" ]
  file-print csv:to-row output-variables
  reset-ticks
end

to go
  ;; the rest of your model code here
  file-print csv:to-row map [ v -> runresult v ] output-variables
  file-flush
  tick
end
Option 3
If you really need to reproduce the BehaviorSpace table export exactly, you can try to run a BehaviorSpace experiment directly from Java. The table is generated by this code, but as you can see it's tied in with the LabProtocol class, meaning you'll have to set up and run your model through BehaviorSpace instead of step-by-step through a workspace as you've done in your sample code.
A good example of this is the Main.scala object, which extracts some experiment settings from the expected command-line arguments and then uses them with the lab.run() method to run the BehaviorSpace experiment and generate the output. That's Scala code, not Java, but hopefully it isn't too hard to translate. You'd similarly have to set up an org.nlogo.nvm.LabInterface.Settings instance and pass that off to HeadlessWorkspace.newLab.run() to get things going.
I am trying to build a Jenkins post-build plugin where I have to process a JSON file (containing test results) and show it in tabular format in Jenkins once the build is executed.
Here are the steps done so far:
Created the Jenkins plugin
Able to retrieve the JSON file content and read it as a Google Gson JsonElement.
Built a BuildAction (extends Action) to show the results.
In index.jelly (the view corresponding to BuildAction), trying to show each record in the JSON file as a row.
JSON File sample:
{
  "records": [
    {
      "objectProps": {
        "OTYPE": "TEST",
        "NAME": "testMethodError"
      }
    },
    {
      "objectProps": {
        "OTYPE": "TEST",
        "NAME": "testMethodFail"
      }
    }
  ]
}
BuildAction class:
public class BuildAction implements Action {
    private JsonElement results;
    private Run<?, ?> build;
    TaskListener listener;

    // this value is referred to as `it.results` in `index.jelly`
    public JsonArray getResults() {
        return results.getAsJsonObject().get("records").getAsJsonArray();
    }
}
current index.jelly for above BuildAction class
<?jelly escape-by-default='true'?>
<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:l="/lib/layout">
    <l:layout>
        <st:include it="${it.build}" page="sidepanel.jelly"/>
        <l:main-panel>
            <table> Test - Wise Results
                <j:forEach items="${it.results}" var="i">
                    <tr><td>Test case name: ${i}</td></tr>
                </j:forEach>
            </table>
        </l:main-panel>
    </l:layout>
</j:jelly>
Actual behaviour:
As of now, the ${results} value is of JsonArray type. Using forEach in Jelly, I am able to iterate over it and get each record via var i (syntax ${i}); i refers to each record in the records JsonArray. Now I want to access the objectProps.NAME field through i, but I don't know the Jelly syntax to achieve this.
Expected behaviour:
I want to iterate through the records array in the JSON file and render each child jsonObject as one table row (with its values as the corresponding columns).
something similar to this (to access NAME value):
<j:forEach items="${it.results}" var="i">
    <tr><td>Test case name: ${i}."objectProps"."NAME"</td></tr>
</j:forEach>
I need help building the table out of the JSON using Jelly. Any other way of achieving the same is also welcome (please post code samples when suggesting it).
Note: Groovy-based answers are also welcome, as Jenkins supports both Jelly and Groovy for views.
I am interested in solving your problem, but might not have a 100% certain answer as I can't test locally.
Have you tried ${i.objectProps.NAME} or ${i."objectProps"."NAME"} instead of ${i}."objectProps"."NAME" in your example?
You could also see if g:evaluate is available, as Jelly might not evaluate your variable without being explicitly told to do so. You can find some documentation on g:evaluate here.
I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)
print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY)
My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do this by creating Java classes from the proto files using gRPC.
Documentation is very sketchy, hence why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly you need to do the following steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
For more details look at the tutorial itself.
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
This means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess it's recommended that input_fn()-type functions return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)

    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # these are not known in serving!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.
So I am trying to write a little utility in Scala that constantly listens to a bunch of directories for file-system changes (deletes, creates, modifications, etc.) and rsyncs them immediately across to a remote server (https://github.com/Khalian/LockStep).
My configuration is stored in JSON as follows:
{
  "localToRemoteDirectories": {
    "/workplace/arunavs/third_party": {
      "remoteDir": "/remoteworkplace/arunavs/third_party",
      "remoteServerAddr": "some Remote server address"
    }
  }
}
This configuration is stored in a Scala Map (key = localDir, value = (remoteDir, remoteServerAddr)). The tuple is represented as a case class:
sealed case class RemoteLocation(remoteDir: String, remoteServerAddr: String)
I am using an actor from a third party:
https://github.com/lloydmeta/schwatcher/blob/master/src/main/scala/com/beachape/filemanagement/FileSystemWatchMessageForwardingActor.scala)
that listens on these directories (e.g. /workplace/arunavs/third_party) and then outputs a Java 7 WatchEvent kind (ENTRY_CREATE, ENTRY_MODIFY, etc.). The problem is that the events sent contain the absolute path (for instance, if I create a file helloworld in the third_party dir, the message sent by the actor is (ENTRY_CREATE, /workplace/arunavs/third_party/helloworld)).
I need a way to write a getter that finds the nearest prefix from the configuration map stored above. The obvious way to do it is to filter the map:
def getRootDirsAndRemoteAddrs(localDir: String): Map[String, RemoteLocation] =
  localToRemoteDirectories.filter(e => localDir.startsWith(e._1))
This simply returns the subset of keys that are a prefix of localDir (in the above example this method is called with localDir = /workplace/arunavs/third_party/helloworld). While this works, the implementation is O(n), where n is the number of entries in my configuration. I am looking for better computational complexity (I looked at radix and Patricia tries, but they don't cut it, since I am feeding in a string and trying to get the keys that are prefixes of it; tries solve the opposite problem).
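Since the keys are directory paths, one option is to skip generic prefix structures entirely: walk the query path upward one component at a time and test each ancestor with a hash lookup. The cost is then O(d) in the path depth rather than O(n) in the configuration size. A minimal sketch of the idea in Python (it ports directly to Scala using a Map and Paths.get(...).getParent):

```python
import os.path

def nearest_prefix(config, path):
    """Return (key, value) for the longest config key that is a path
    prefix of `path`, or None. Does O(depth) dict lookups instead of
    scanning every key in the configuration."""
    cur = path
    while True:
        if cur in config:
            return cur, config[cur]
        parent = os.path.dirname(cur)
        if parent == cur:        # reached the filesystem root (or empty path)
            return None
        cur = parent

config = {"/workplace/arunavs/third_party": ("remoteDir", "addr")}
print(nearest_prefix(config, "/workplace/arunavs/third_party/helloworld"))
# -> ('/workplace/arunavs/third_party', ('remoteDir', 'addr'))
```

Note this returns the single nearest (longest) matching prefix, which is usually what a watcher wants; if several nested directories can be configured at once, collect matches as you walk up instead of returning on the first hit.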
I come from a Java/Spring background and I've just recently moved to Python/Django. I'm working on a new project from scratch with Django. I was wondering how Django handles common string messages. Is there a single common file, say in a resources folder, that can be used? For example, in Spring we have MessageSource, a key/value-pair properties file that is global to most of the app. Is there something similar in Django? If so, how does it work for both the normal app side and the unit-test side?
You could take a look at Django's messages framework.
Also, you can use key-value pairs in Python, with dicts:
# Upper case because it is a constant
LOGIN_ERRORS = {
    'login_error_message': 'message here',
    ...
}
You could put this in a file (you can even name it message_source.py) inside your app and import it when you need it:
For example, in your view:
# views.py
...
from myapp.message_source import LOGIN_ERRORS
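A minimal self-contained sketch of the pattern (the module, key, and helper names here are just examples, not anything Django provides):

```python
# message_source.py: a plain module of constant messages, importable anywhere,
# including from unit tests. Using .get() gives a safe fallback for unknown keys.
LOGIN_ERRORS = {
    'invalid_credentials': 'Wrong username or password.',
    'account_locked': 'Your account is locked.',
}

def login_error(key):
    # fall back to a generic message for unknown keys
    return LOGIN_ERRORS.get(key, 'Login failed.')

print(login_error('account_locked'))  # -> Your account is locked.
print(login_error('no_such_key'))     # -> Login failed.
```

Because it's just a module, your unit tests import the same constants and can assert against them directly, so messages never get duplicated between app code and tests.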
Django uses the standard gettext + .po files for internationalization/translation. Check out the Translation docs for all the steps needed: https://docs.djangoproject.com/en/1.9/topics/i18n/translation/