Java, Maven: Need help embedding the j2mod library into a Java plugin

I need some help using j2mod, a Java library for Modbus TCP/RTU, in a plugin for Universal Robots.
I am using the maven-bundle-plugin.
What I have tried:
Including dependencies one by one (a never-ending dependency tree)
Using the <Embed-Dependency> instruction to include just the j2mod package in the jar (the program starts with a requires 'jSerialComm' error)
Using the <Embed-Dependency> instruction to include all dependencies of scope compile and runtime (the program starts with a requires 'org.slf4j.impl' error, and I can't seem to figure out a way to adapt slf4j to log4j)
I know from snooping around the PolyScope simulator's files that the simulator seems to use log4j-api-2.3.jar and a modified log4j-core-1.11.6.jar.
Any insights on using j2mod or a similar Java Modbus TCP/RTU library in an embedded environment would be helpful.
Thank you
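For context, a minimal maven-bundle-plugin configuration of the kind described above might look like the sketch below. The instruction values are assumptions, and the log4j-slf4j-impl binding is just one way to provide the missing org.slf4j.impl package when a log4j 2.x backend is present; this is not a verified PolyScope setup.

<!-- sketch: embed j2mod and its compile/runtime dependencies into the bundle -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- pull j2mod, jSerialComm, slf4j, ... into the jar -->
      <Embed-Dependency>*;scope=compile|runtime</Embed-Dependency>
      <Embed-Transitive>true</Embed-Transitive>
    </instructions>
  </configuration>
</plugin>

<!-- sketch: bind slf4j to a log4j 2.x backend (for log4j 1.x, org.slf4j:slf4j-log4j12 would be the binding instead) -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>2.3</version>
</dependency>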

Related

How can Gradle add code and resources from another sub-project to a sub-project?

I have a Minecraft Fabric mod project. This is its structure:
airgame-api-parent:
airgame-api-common:
airgame-api-client:
airgame-api-server:
airgame-api-all:
Initially, I built them as a single project, but as the code and features grew, I added some more dependencies, such as mysql-connector and HikariCP.
These are only needed on the server side, because the client does not need to connect to MySQL.
But mysql-connector is too big: it increased my jar file size from ~100 KB to over 4 MB.
I think that's unbearable.
So I split the project up.
The project airgame-api-common is environment-independent code: it can run on both client and server.
The project airgame-api-client is client-side-only code: it can only run on the client, and it depends on api-common.
The project airgame-api-server is server-side-only code: it can only run on the server, and it also depends on api-common.
api-server includes some server-side dependencies, such as mysql-connector and HikariCP.
Finally, api-all includes all the code of api-common, api-client and api-server. This way, I don't need to import api-client and api-server at the same time when coding other projects. (Actually, I can't do that, because api-client and api-server use the same mod_id. If I import both, the test run environment contains both dependencies and then crashes due to the mod_id conflict.)
Okay, first I tried to use api project(":airgame-api-common") in api-client, but it did not work: other projects that depend on api-client still cannot see api-common. I guess the fabric-loom plugin may have changed Gradle's build or dependency logic.
The fabric-loom docs say that I need to use modApi. I tried, but it doesn't seem usable for importing my own sub-projects.
OK, I'm sorry for saying a lot of things that have nothing to do with the problem, but I just want to show that I've done my best to solve it.
So now I guess there's one way left: add the classpath and resources from api-common to the other projects before Gradle starts compiling. I think modifying build.gradle can do it, but I don't know what to write.
I tried to read Gradle's documentation, but I really don't know the software well, so I couldn't find much useful information. Can someone tell me?
I need the api-client build output to contain both its own code and the api-common code, and the api-common code needs to be visible to projects that depend on api-client. (The same is required for api-server and api-all, but I think if you teach me how to configure api-client, I should be able to configure the others.)
Finally, my English is not very good, but I am trying my best to express my intention. I don't mean any harm to anyone; if I offend you, please forgive me.
OK, I found the answer in another answer: Gradle: common resource dependency for multiple java projects
It's:
sourceSets.main {
    java.srcDirs += [
        project(':api-common').sourceSets.main.java,
        project(':api-common-server').sourceSets.main.java
    ]
    resources.srcDirs += [
        project(':api-common').sourceSets.main.resources,
        project(':api-common-server').sourceSets.main.resources
    ]
}

Spark, Alternative to Fat Jar

I know at least two ways to get my dependencies into a Spark EMR job. One is to create a fat jar, and another is to specify which packages you want in spark-submit using the --packages option.
The fat jar takes quite a long time to build (~10 minutes). Is that normal? Is it possible that we have it configured incorrectly?
The command line option is fine, but error prone.
Are there any alternatives? I'd like it if there already existed a way to include the dependency list in the jar with Gradle, then have it download them. Is this possible? Are there other alternatives?
Update: I'm posting a partial answer. One thing I didn't make clear in my original question is that I also care about the case where you have dependency conflicts because the same jar appears in different versions.
Update
Thank you for the responses relating to cutting back the number of dependencies or using provided where possible. For the sake of this question, let's say we have the minimal number of dependencies necessary to run the jar.
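For context, the --packages route mentioned above passes Maven coordinates to spark-submit, roughly like this (the class name, coordinates, and jar name below are placeholders):

$ spark-submit --class com.example.MyJob \
    --packages org.apache.commons:commons-csv:1.4,com.example:some-other-dep:1.0 \
    my-job.jar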
SparkLauncher can be used if the Spark job has to be launched from another application. With the help of SparkLauncher you can configure your jar path, and there is no need to create a fat jar to run the application.
With a fat-jar you have to have Java installed and launching the Spark application requires executing java -jar [your-fat-jar-here]. It's hard to automate it if you want to, say, launch the application from a web application.
With SparkLauncher you're given the option of launching a Spark application from another application, e.g. the web application above. It is just much easier.
import org.apache.spark.launcher.SparkLauncher

object Launcher extends App {
    // configure and launch the Spark application via spark-submit
    val spark = new SparkLauncher()
        .setSparkHome("/home/knoldus/spark-1.4.0-bin-hadoop2.6")
        .setAppResource("/home/knoldus/spark_launcher-assembly-1.0.jar")
        .setMainClass("SparkApp")
        .setMaster("local[*]")
        .launch()
    // wait for the launched process to finish
    spark.waitFor()
}
Code:
https://github.com/phalodi/Spark-launcher
Here:
setSparkHome("/home/knoldus/spark-1.4.0-bin-hadoop2.6") is used to set the Spark home, which is used internally to call spark-submit.
.setAppResource("/home/knoldus/spark_launcher-assembly-1.0.jar") is used to specify the jar of our Spark application.
.setMainClass("SparkApp") is the entry point of the Spark program, i.e. the driver program.
.setMaster("local[*]") sets the address of the master where it starts; here we run it on the local machine.
.launch() simply starts our Spark application.
What are the benefits of SparkLauncher vs java -jar fat-jar?
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-SparkLauncher.html
https://spark.apache.org/docs/2.0.0/api/java/org/apache/spark/launcher/SparkLauncher.html
http://henningpetersen.com/post/22/running-apache-spark-jobs-from-applications
For example, on Cloudera clusters there is a set of libraries already available on all nodes, which will be on the classpath for drivers and executors.
Those are e.g. spark-core, spark-hive, hadoop, etc.
Versions are grouped by Cloudera, so e.g. you have spark-core-cdh5.9.0, where the cdh5.9.0 suffix means that all libraries with that suffix were actually verified by Cloudera to work together properly.
The only thing you should do is use libraries with the same group suffix, and you shouldn't have any classpath conflicts.
What that allows is:
set dependencies configured in an app as Maven's scope provided, so they will not be part of the fat jar, but resolved from the classpath on the nodes.
You didn't write what kind of cluster you have, but maybe you can use a similar approach.
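For illustration, marking one of those libraries as provided might look like this in the pom (the coordinates are placeholders; on a CDH cluster you would use the matching cdh-suffixed versions):

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.0</version>
  <!-- provided: resolved from the cluster's classpath, not packed into the fat jar -->
  <scope>provided</scope>
</dependency>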
The maven-shade-plugin may be used to create the fat jar; it additionally allows you to specify which libraries you want to include in the jar, and those not in the list are not included.
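A rough sketch of that kind of maven-shade-plugin configuration (the include pattern is a placeholder):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <!-- only artifacts matching these patterns are packed into the shaded jar -->
          <includes>
            <include>com.example:*</include>
          </includes>
        </artifactSet>
      </configuration>
    </execution>
  </executions>
</plugin>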
I think something similar is described in this answer, Spark, Alternative to Fat Jar, but using S3 as the dependency storage.
HubSpot has a (partial) solution: SlimFast. You can find an explanation here http://product.hubspot.com/blog/the-fault-in-our-jars-why-we-stopped-building-fat-jars and you can find the code here https://github.com/HubSpot/SlimFast
Effectively it stores all the jars it'll ever need on S3, so when it builds it does so without packaging the jars, and when it needs to run it fetches them from S3. So your builds are quick, and downloads don't take long.
I think if this also had the ability to shade the jars' paths on upload, in order to avoid conflicts, then it would be a perfect solution.
The fat jar indeed takes a lot of time to create. I was able to optimize a little bit by removing the dependencies which were not required at runtime. But it is really a pain.

Why has the Java runtime reported a java.lang.NoSuchMethodError on a Hex.encodeHex() method call?

I had to include a BitTorrent Java library in my Android project. My workspace: Android Studio 1.0.2 (OS X) and JDK 8. I added its Maven artifact (ttorrent:1.4) with Gradle, and after starting to use its main classes and features I got an error:
java.lang.NoSuchMethodError: No static method encodeHex([BZ)[C in class Lorg/apache/commons/codec/binary/Hex; or its super classes (declaration of 'org.apache.commons.codec.binary.Hex' appears in /system/framework/ext.jar).
I went into the library's code and found that it uses org.apache.commons.codec, from which ttorrent imports encodeHex and calls it. It looks like the encodeHex method is gone! Or it never existed. But I went into commons-codec's code and found encodeHex in its place, with the arguments I was looking for. How come? Why? My Android Studio found it, but the Java runtime did not.
In fact, the solution was more difficult than I thought. Let's start with the fact that I came across an article (Dieser Beitrag), from which it is clear that I was not the only one with similar problems. It turns out that the Android operating system already ships with some libraries that take priority over the ones loaded as dependencies along with the application. Among them is my org.apache.commons.codec.
Yes, such things happen.
There are two ways to solve the problem: either take the library's source code, use some tool to rename the package (i.e. org.apache.commons.codec to org.apache.commons.codec.android), build it into a .jar file, include that .jar in the project, and in your code import the necessary classes only from "our" library; or just copy the required classes into your project and don't pull in megabytes of unneeded code. In the end, that is what I did.
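As an illustration of the first option (renaming the package), a relocation step with the Gradle Shadow plugin might look roughly like this for a plain Java module. The plugin version and target package name are assumptions, and whether this fits an Android module as-is is not verified:

plugins {
    id 'java'
    id 'com.github.johnrengelman.shadow' version '6.1.0'
}

shadowJar {
    // move the bundled commons-codec classes so they no longer clash with the
    // copy that ships inside the Android framework
    relocate 'org.apache.commons.codec', 'repackaged.org.apache.commons.codec'
}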
Thanks for help!

RemoteActorRefProvider ClassNotFound

I'm struggling trying to get remote actors set up in Scala. I'm running Scala 2.10.2 and Akka 2.2.1.
I compile using [I've shortened the paths in the classpath argument for clarity's sake]:
$ scalac -classpath "akka-2.2.1/lib:akka-2.2.1/lib/scala-library.jar:akka-2.2.1/lib/akka:akka-2.2.1/lib/akka/scala-reflect-2.10.1.jar:akka-2.2.1/lib/akka/config-1.0.2.jar:akka-2.2.1/lib/akka/akka-remote_2.10-2.2.1.jar:akka-2.2.1/lib/akka/akka-kernel_2.10-2.2.1.jar:akka-2.2.1/lib/akka/akka-actor_2.10-2.2.1.jar:." [file.scala]
I've continuously added new libraries trying to debug this - I'm pretty sure all I really need to include is akka-remote, but the others shouldn't hurt.
No issues compiling.
I attempt to run like this:
$ scala -classpath "[same as above]" [application]
And I receive a NoSuchMethod exception:
java.lang.NoSuchMethodException: akka.remote.RemoteActorRefProvider.<init>(java.lang.String, akka.actor.ActorSystem$Settings, akka.event.EventStream, akka.actor.Scheduler, akka.actor.DynamicAccess)
at java.lang.Class.getConstructor0(Class.java:2810)
at java.lang.Class.getDeclaredConstructor(Class.java:2053)
...
Looking into the source code, it appears that Akka 2.2.x's flavor of this constructor takes 4 arguments (the Scheduler is removed), but in Akka < 2.2.x the constructor takes 5 args.
Thus, I'm thinking my classpath isn't set up quite right: at compile time, Scala must be finding the < 2.2.x flavor. I don't even know where it would find it, since I only have Akka 2.2.1 installed.
Any suggestions!? Thanks! (Please don't say to use SBT).
The problem here is that the Scala distribution contains akka-actor 2.1.0 and helpfully puts that on the boot classpath for you. We very strongly recommend using a dependency manager like sbt or Maven when building anything which goes beyond the most trivial projects.
As noted in another answer, the problem is that scala puts a different version of Akka onto the boot classpath.
To answer your question more directly (since you said you don't want to use sbt): you can execute your program with java instead of scala. You just have to put the appropriate Scala jars on the classpath.
Here is a spark-dev message about the problem. The important part is: "the workaround is to use java to launch the application instead of scala. All you need to do is to include the right Scala jars (scala-library and scala-compiler) in the classpath."
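A rough sketch of that invocation (the jar locations are assumed; the wildcard picks up the Akka jars from the distribution's lib directory):

$ java -classpath "akka-2.2.1/lib/akka/*:akka-2.2.1/lib/scala-library.jar:/path/to/scala-compiler.jar:." [application]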

Java compiled with gcj using javax.comm api. Possible?

I have a Java program that I'm required to compile into a Linux native program using gcj-4.3. This program requires serial port access. The javax.comm API provides serial port access, but I'm not sure how to get my compiled Java program to use it.
The target box has Java installed, but of course my compiled program isn't running in the JRE... so I'm not exactly sure how I can link in the comm.jar file, or how that file can find the .properties file it requires.
I wonder if I can just compile comm.jar along with my .jar file and link the two object files together. Can my code then reference the classes in comm.jar?
Thanks in advance for your help!
I'm not a GCJ expert, but I have some suggestions (I'm not providing the exact syntax; that will require some exploration that I haven't performed):
first, I think that you'll have to compile comm.jar into a (shared) library,
then, you'll have to link your code against the library,
finally, use the GCJ_PROPERTIES environment variable to pass properties to the program at invocation time.
The following pointers might be helpful to implement this:
GCJ---The GNU Compiler for Java (a great resource IMO, covers all the steps mentioned above)
GCJ – Getting Started (more of an intro but still nice)
Compile ActiveMQ with GCJ (more use cases, but I don't think they apply here)
And of course, the official documentation :)
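A rough, untested sketch of the three steps above (file names, paths, and the property name are assumptions):

# 1. compile comm.jar into a shared library
gcj-4.3 -shared -fPIC -o libcomm.so comm.jar

# 2. compile the application and link it against that library
gcj-4.3 --main=com.example.MainClass -o myapp myapp.jar -L. -lcomm

# 3. pass system properties at invocation time (the property name here is a placeholder)
GCJ_PROPERTIES="javax.comm.properties=/etc/javax.comm.properties" ./myapp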
