Disclaimer: I am the author of Jsonix and the Jsonix Schema Compiler, and I'm trying to figure out the canonical way the Jsonix Schema Compiler should be integrated into an NPM package.json.
The jsonix-schema-compiler NPM package provides a Java-based tool for code generation. If jsonix-schema-compiler is installed as a dependency, it can be used to generate XML<->JS mappings. The invocation looks something like:
java -jar node_modules/jsonix-schema-compiler/lib/jsonix-schema-compiler-full.jar schema.xsd
This generates a JavaScript file like Mappings.js which is basically part of the module's code.
Ideally, the jsonix-schema-compiler invocation above (java -jar ... and so on) should be executed during the module build, but it must run after the module's dependencies are installed (otherwise node_modules/jsonix-schema-compiler will be missing).
My question is: where should I canonically configure code generation in NPM packages?
Right now I'm doing it in a postinstall script, something like:
{
  ...
  "dependencies": {
    "jsonix": "x.x.x",
    "jsonix-schema-compiler": "x.x.x"
  },
  "devDependencies": {
    "nodeunit": "~0.8.6"
  },
  "scripts": {
    "postinstall": "java -jar node_modules/jsonix-schema-compiler/lib/jsonix-schema-compiler-full.jar schema.xsd",
    "test": "nodeunit src/test/javascript/tests.js"
  }
}
However, having read this:
tl;dr Don't use install. Use a .gyp file for compilation, and
prepublish for anything else.
You should almost never have to explicitly set a preinstall or install
script. If you are doing this, please consider if there is another
option.
I am now unsure whether postinstall is also OK.
All I want to do is to be able to execute a certain command-line command after dependencies are installed but before other things (like tests or publish). How should I canonically do it?
Typically people run things like CoffeeScript-to-JavaScript compilers, ECMAScript 6-to-5 transpilers, and minifiers as a build step, which is what it sounds like you're doing.
The difference between doing it pre-publish and post-install is that a prepublish script is probably going to be run in your checked-out directory, so it's reasonable to assume that java is available and various dev-dependencies are available; while the post-install script would be run after every install, and will fail if java (etc.) is not available, as on a minimalist docker image. So you should put your build step in a prepublish or similar script.
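For example, here is a sketch of the package.json from the question with the same command simply moved to prepublish (versions kept as placeholders):

{
  ...
  "dependencies": {
    "jsonix": "x.x.x",
    "jsonix-schema-compiler": "x.x.x"
  },
  "devDependencies": {
    "nodeunit": "~0.8.6"
  },
  "scripts": {
    "prepublish": "java -jar node_modules/jsonix-schema-compiler/lib/jsonix-schema-compiler-full.jar schema.xsd",
    "test": "nodeunit src/test/javascript/tests.js"
  }
}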
Personally, what I like to do is define a script 'mypublish' in package.json that ensures all tests pass, runs the build, ensures build artefacts exist, and then runs npm publish. I find this more intuitive than prepublish, which is meant to be used as an "I'm about to publish" hook, not as a "do the build before publishing" step.
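A sketch of that idea (the script name mypublish and the build command are illustrative, not taken from the linked project):

"scripts": {
  "build": "java -jar node_modules/jsonix-schema-compiler/lib/jsonix-schema-compiler-full.jar schema.xsd",
  "mypublish": "npm test && npm run build && npm publish"
}

Publishing is then always npm run mypublish, which refuses to publish if the tests or the build fail.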
Here is a package.json that uses this setup: https://github.com/reid/node-jslint/blob/master/package.json and here's the Makefile with the prepublish target: https://github.com/reid/node-jslint/blob/master/Makefile
Let me know if you have more questions; I'm kind of rambling because there are many legitimate ways to get it done, as long as you avoid postinstall scripts. ;-)
Related
I've been trying to find the Java equivalent of NPM's scripts and have had no luck.
This makes me wonder if it's even possible, but I assume it is; it's probably just very hard to find.
My main goal is to keep long commands out of my Readme.md file; I would very much like something similar to npm scripts, which would trigger a certain command based on a keyword.
For example, I would like to create something like this:
"smoke" : "mvn clean test -Dsmoke="src/test/smokeSuite.xml""
or
"smoke" : "clean test -Dsmoke="src/test/smokeSuite.xml""
Which would then be run by a simple mvn smoke command.
Is it possible?
I would like to execute my Gatling simulation from within Java code, and not with a Maven or Gradle command. Is it possible to run the tests/scenarios directly from Java code?
Option 1:
If you want to run Gatling from code you can invoke this class:
io.gatling.app.Gatling
Source code:
https://github.com/gatling/gatling/blob/master/gatling-app/src/main/scala/io/gatling/app/Gatling.scala
I probably wouldn't call main directly but rather the start function, or a custom start function modeled on it.
Something like this (copied from the link above):
import io.gatling.app.Gatling
import io.gatling.core.config.GatlingPropertiesBuilder
object Engine extends App {
  val props = new GatlingPropertiesBuilder
  props.simulationClass("your.simulation.class.goes.here")
  props.dataDirectory("path.to.data.directory") // optional
  props.resultsDirectory("path.to.results.directory") // optional
  props.bodiesDirectory("path.to.template.directory") // optional
  props.binariesDirectory("path.to.binaries.directory") // optional
  Gatling.fromMap(props.build)
}
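Since the question asks for Java specifically: Gatling is a Scala object, so the same call should be reachable from Java via the generated static forwarders, roughly like this (a sketch; the entry points vary between Gatling versions, so verify the exact method names against your version):

import io.gatling.app.Gatling;
import io.gatling.core.config.GatlingPropertiesBuilder;

public class JavaEngine {
    public static void main(String[] args) {
        GatlingPropertiesBuilder props = new GatlingPropertiesBuilder();
        props.simulationClass("your.simulation.class.goes.here");
        props.resultsDirectory("path.to.results.directory"); // optional
        // props.build yields the property map that Gatling.fromMap expects
        Gatling.fromMap(props.build());
    }
}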
Option 2:
(Compilation phase) Use the Maven archetype to generate helper classes (you probably need to compile your Java anyway). Docs. This will generate an Engine class (code) and other classes which you can run. This is similar to Option 1 but helps resolve paths if you are working from a Maven project. It makes sense if you use Maven to build your project.
Option 3:
Invoke gatling.sh or gatling.bat as a process from Java with Runtime.getRuntime().exec() or similar.
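A minimal sketch of that (the script location and the -s simulation flag are assumptions to adapt to your installation):

import java.io.IOException;

public class GatlingLauncher {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "/path/to/gatling/bin/gatling.sh",      // assumed install location
                "-s", "your.simulation.class.goes.here" // simulation to run
        );
        pb.inheritIO(); // forward Gatling's console output to this process
        int exitCode = pb.start().waitFor();
        System.out.println("Gatling exited with code " + exitCode);
    }
}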
Bear in mind:
Gatling tests need to be compiled before they are executed. This is basically what gatling.[sh|bat] is doing:
# Run the compiler
"$JAVA" $COMPILER_OPTS -cp "$COMPILER_CLASSPATH" io.gatling.compiler.ZincCompiler $EXTRA_COMPILER_OPTIONS "$@" 2> /dev/null
# Run Gatling
"$JAVA" $DEFAULT_JAVA_OPTS $JAVA_OPTS -cp "$GATLING_CLASSPATH" io.gatling.app.Gatling "$@"
If you call Scala code from Java, there is definitely interop available. Make sure the Scala code is compiled first in your case, or can be loaded easily on the Java classpath. Create a Java-friendly wrapper on the Scala side if needed.
I would start with Option 1 or 2 if you want tight integration, and with Option 3 if you just want to glue things together and don't mind the startup/init time.
Pay attention to the classpath needed for Gatling - this will depend on where its classes are located (in your project or outside it) and how you invoke it.
You can definitely pass test names; just see how the arguments are used in those classes - for example, simulationClass in props. All the available methods are there (simulationsDirectory, simulationClass, etc.).
I'm sure it will take a bit of trial and error, but it can definitely be done.
I have a CMakeLists.txt which builds a Java project with Maven into a war file when running make, but when I run make install, it rebuilds the project again before copying it to the web application's installation folder.
How can I build the Java project only once with make, and not again with make install? Here is the CMakeLists.txt:
add_custom_target(JavaProject ALL
  COMMAND ${MAVEN_EXECUTABLE} package
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
  VERBATIM)
install(FILES "${JAVA_PROJECT_TARGET_DIR}/java_project.war"
  DESTINATION ${WAR_DIR})
As the documentation of add_custom_target() says, custom targets are always considered out of date, which means they will re-build with each invocation of make which includes them.
What you want instead is a custom command to produce the .war file:
add_custom_command(
  OUTPUT "${JAVA_PROJECT_TARGET_DIR}/java_project.war"
  COMMAND ${MAVEN_EXECUTABLE} package
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
  VERBATIM
)
This tells CMake how the file named "${JAVA_PROJECT_TARGET_DIR}/java_project.war" is produced when someone requests it. For files, CMake can generate dependency checks just fine, so it will not re-build needlessly. Note that you will probably also want to include some DEPENDS in that add_custom_command(), otherwise it will never rebuild once built (1).
Then, you need one more thing: a driver for the custom command. That is something that will depend on the command's OUTPUT and actually cause it to be built. So you'll add a custom target:
add_custom_target(
  JavaProject ALL
  DEPENDS "${JAVA_PROJECT_TARGET_DIR}/java_project.war"
)
Then, the sequence will be as follows:
During a make, JavaProject will be considered out of date (since it's a custom target) and will be built. This means its dependencies will be checked for up-to-datedness, and re-built if they're not up to date. That's what the custom command is for. After that, the custom target itself would run its COMMAND, but it doesn't have any, so nothing else happens.
On a subsequent make invocation, JavaProject will again be considered out of date and will thus be built. Its dependencies are checked again, but this time, they're up to date (since the .war already exists). It's therefore not built again. The custom target still has no COMMAND, so nothing further happens.
This "custom target as driver for custom commands" approach is very a idiomatic piece of CMake code, and you will see it in many projects which produce additional files which do not participate in further build steps (such as documentation).
(1) If the list of dependencies is very large, you may want to move it to a separate file and include that. Something like this:
In CMakeLists.txt:
include(files.cmake)
add_custom_command(
  OUTPUT "${JAVA_PROJECT_TARGET_DIR}/java_project.war"
  COMMAND ${MAVEN_EXECUTABLE} package
  DEPENDS ${MyFiles}
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
  VERBATIM
)
In files.cmake:
set(MyFiles
  a/file1.java
  a/file2.java
  a/b/file1.java
  a/c/file1.java
  # ... list all files as necessary
)
This keeps the CMakeLists.txt itself readable, while allowing you to explicitly depend on everything you need.
Although Angew's answer is excellent, unfortunately it does not work as I expected (i.e., when I update the source code folder and run make, it will not build the war again).
Here is the way to solve what I wanted:
set(CMAKE_SKIP_INSTALL_ALL_DEPENDENCY TRUE)
Then make will do the build, and make install will just copy the result to the installation folder.
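So, keeping the always-out-of-date custom target from the question and adding that line, the relevant part of the CMakeLists.txt becomes:

set(CMAKE_SKIP_INSTALL_ALL_DEPENDENCY TRUE)

add_custom_target(JavaProject ALL
  COMMAND ${MAVEN_EXECUTABLE} package
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
  VERBATIM)

install(FILES "${JAVA_PROJECT_TARGET_DIR}/java_project.war"
  DESTINATION ${WAR_DIR})

This way make always reruns Maven (so source changes are picked up), while make install no longer re-drives the all target before installing.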
I work on a Scala project built with sbt that uses C++ code. Once compiled, this C++ code is imported into Scala through Java code that uses JNA.
Currently the Java wrappers are written manually, and I would like to automate this. I've found JNAerator, which can do that, but I don't know how I should use it from sbt.
I see two general approaches:
use the command line, such as java -jar jnaerator ..., but I don't know how to set up such a command-line task in sbt. Also, I would need to know the typical project structure to follow: where should the JNAerator-generated code go?
Use the JNAerator Maven plugin through sbt, if that is possible?
This might take some iteration until we get it to do what you need.
For the first approach, here is how you can run a custom system command from sbt (you essentially solve this using Scala code). Add the following to your build.sbt file:
lazy val runJnaerator = taskKey[Unit]("This task generates libraries from native code")

runJnaerator := {
  import sys.process._
  Seq("java", "-jar", "jnaerator", "...").!
}
To execute:
>sbt runJnaerator
Now the question is: where do these files need to go? And finally, how do you want to invoke everything?
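If, for example, you want the generated sources to land in a regular source directory and be compiled with everything else, one possible wiring (an untested sketch; point JNAerator's output at src/main/java or a managed source directory, adjusting its flags as needed) is to make compile depend on the task, still in build.sbt:

compile in Compile := (compile in Compile).dependsOn(runJnaerator).value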
I'm using Grails 2.1.0, and I have a Groovy class that I've written that's not dependent on services, controllers, or any of the other Grails goodness. It uses some .jar libraries and other classes that are already in the Grails classpath.
I want to:
Run the Groovy class (or a Java class, it shouldn't matter) using the other libraries/classes that Grails already has on its classpath (not services, not controllers, none of that).
Be able to access the command line arguments [this is required]
Not require bootstrapping the entire Grails environment (I need the classpath, obviously, but nothing else)
Ideally, I'd like to be able to do something like this:
java -classpath (I_HAVE_NO_IDEA_HOW_TO_DETERMINE_THIS) com.something.MyClass param1 param2 param3
Things I've already looked into:
Using "grails create-script" which results in a Gant script.
Using "grails run-script"
The first one (using a Gant script) seems horribly wrong to me. Using a Gant script as some sort of intermediary wrapper seems to require bootstrapping the whole Grails environment, plus I have to figure out how to get a reference to the actual class I want to call, which seems to be difficult (but I Am Not A Gant Expert, so enlighten me). =)
The second one (using run-script) sort of works... I've used this approach to call service methods before, but it has two problems: first, it bootstraps the entire Grails environment; second, there does not appear to be any way to pass the command-line arguments easily.
Really, I just want the stuff in the classpath (and my command-line parameters) and to be able to call the main() method of my class with minimal frustration. That being said, if you can come up with a working example of any sort that solves the issue (even if it involves some intermediary Gant or other class), I'll be happy to use that approach.
Thanks.
Update: A solution that works with a Gant task, still open to better ideas if anyone has any...
In scripts/FooBar.groovy
includeTargets << grailsScript("_GrailsInit")
target(main: "Runs a generic script and passes parameters") {
def myclass = classLoader.loadClass('com.whatever.scripting.GenericRunScript')
myclass.execute(args);
}
setDefaultTarget(main)
In src/groovy/com/whatever/scripting/GenericRunScript.groovy
package com.whatever.scripting
class GenericRunScript {
  public static execute(def args) {
    println "args=" + args.inspect()
  }
}
Then from the command line, at while in the root directory of the Grails project:
$ grails compile
| Environment set to development.....
| Compiling 2 source files.
$ grails foo-bar test one two
| Environment set to development....
args='test\none\ntwo'
Note 1: When I first did this, I kept forgetting the compile statement, so I added that in.
Note 2: Yes, the args are separated by newlines; a possible fix is sketched below.
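One possible fix (an untested sketch) is to split the newline-separated string in the Gant script before passing it on:

target(main: "Runs a generic script and passes parameters") {
  def myclass = classLoader.loadClass('com.whatever.scripting.GenericRunScript')
  // args arrives as one newline-separated string; split it into individual params
  myclass.execute(args ? args.split('\n') : [])
}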
The way described above works, but all the Grails facilities will be gone, including domains and dependencies.
If you require everything that you have defined in your Grails project, the run-script command will do the trick:
grails run-script [path to your groovy file]
http://grails.org/doc/latest/ref/Command%20Line/run-script.html
As described in http://grails.org/doc/latest/guide/commandLine.html, you can include targets _GrailsClasspath and _GrailsArgParsing, and whatever else you need. For example, if you want to parse command-line arguments without creating a second script:
$ grails create-script ArgsScript
| Created file scripts/ArgsScript.groovy
Now edit the script scripts/ArgsScript.groovy as follows:
includeTargets << grailsScript("_GrailsArgParsing") // grailsScript("_GrailsInit")
target(main: "The description of the script goes here!") {
println argsMap
for (p in argsMap['params'])
println p
}
setDefaultTarget(main)
See the result:
$ grails args-script one two three=four
| Environment set to development....
[params:[one, two, three=four]]
one
two
three=four
Update: well, it is not as easy as I thought. Basically, you can either run a script as a Gant task, e.g. by doing grails myscript, or as a script, e.g. by doing grails run-script src/groovy/MyScript.groovy. In the first case you have access to parameters, as I already explained, but you still miss some of the Grails environment, which is, perhaps, a bug. For example, you can't really access scripts or classes defined in src/groovy/ from a Gant task. On the other hand, as was already discussed, if you use run-script, you can't get the arguments.
However, you can use System.getProperty to pass command-line arguments with the -Dproperty=value syntax. Also see: Java system properties and environment variables.
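For example (a sketch; the property name foo is arbitrary, and the exact flag placement may vary by Grails version):

$ grails -Dfoo=bar run-script src/groovy/MyScript.groovy

and inside the script:

println System.getProperty('foo') // prints "bar"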