I have a database-heavy, distributable Java application that's currently only 400 KB. We need improved database query building as well as support for a few specific database dialects.
jOOQ has to be shaded into our JAR, and it balloons the JAR up to 1.6 MB, even when using the minimizeJar element of the Shade plugin.
Is there a way I can do a custom build or strip out the components of jOOQ that we have no use for right now? Dialects, non insert/select/delete query classes, other features we don't need?
I thought about trying to identify every imported class that we're using and setting Maven to shade only those, but I'd also need to handle the classes jOOQ uses internally, and I don't know how reliant jOOQ is on everything.
If I could strip it down to a few hundred KB, I'd be sold on continuing to use it.
jOOQ is a domain-specific language implemented according to the principles explained here:
http://blog.jooq.org/2012/01/05/the-java-fluent-api-designer-crash-course/
This means that every "production" or "primary" of the DSL specification generates a Java interface, with all the overhead this may cause in the class loader. Additionally, since jOOQ 3.0, record and row types with degrees 1 to 22 were introduced (e.g. org.jooq.Row1, org.jooq.Row2, ... org.jooq.Row22). All of these elements are part of the API, which probably cannot be stripped down any further.
Of course, you can try to manually strip down the jOOQ API and implementation, removing all the row types from it. Another entire statement that you might not need is the MERGE statement, which also has an extensive API. Then, there are the tools packages, which aren't strictly needed, specifically:
org.jooq.tools.csv
org.jooq.tools.json
org.jooq.types
org.jooq.util.[dialect]
Also, you can try to remove a couple of classes from the org.jooq.impl package. The class names should be fairly straightforward to help you decide whether something is needed.
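If you want to experiment with that, one way to do the stripping without repackaging jOOQ itself is an exclude filter in the Shade plugin the question already uses. This is only a sketch: the org.jooq:jooq coordinates and the exact exclude list are assumptions you'd have to verify against what your code (and jOOQ internally) actually references.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <minimizeJar>true</minimizeJar>
        <filters>
            <filter>
                <!-- assumed artifact coordinates -->
                <artifact>org.jooq:jooq</artifact>
                <excludes>
                    <exclude>org/jooq/tools/csv/**</exclude>
                    <exclude>org/jooq/tools/json/**</exclude>
                    <exclude>org/jooq/types/**</exclude>
                    <exclude>org/jooq/util/**</exclude>
                </excludes>
            </filter>
        </filters>
    </configuration>
</plugin>

Run your test suite against the shaded artifact afterwards; if jOOQ needs one of the excluded packages internally for your queries, you'll only find out at runtime.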
It would be interesting to see how far you get with such measures. This might be useful for Android users, too.
I'm using RDF4J because I was drawn in by the advertised implementation of GeoSPARQL (which I didn't find in other RDF frameworks). I followed the basic guides and tutorials, but unfortunately I haven't been able to perform basically any of the advertised queries.
I read and followed all the documentation at http://docs.rdf4j.org/programming/#_geosparql, and all the examples at http://graphdb.ontotext.com/documentation/standard/geosparql-support.html and at https://portal.opengeospatial.org/files/?artifact_id=47664. The only spatial function that seemed to work in a SPARQL query is geof:distance; all the others produce no results.
So I ultimately dug into the code in the package org.eclipse.rdf4j.query.algebra.evaluation.function.geosparql, only to gather that there are some classes and interfaces that I should probably implement and extend, e.g. SpatialAlgebra, SpatialSupport, SpatialSupportInitializer. It looks like many of the functions are only partially implemented (or not at all) in the spatial logic. Apparently, there is a DefaultSpatialAlgebra which returns a lot of notSupported. Anyway, it's quite a mess (and undocumented) trying to understand the right procedure to get GeoSPARQL working properly. They only say that you can implement your own SpatialSupportInitializer, but how to use it afterwards is a mystery.
From the documentation, there apparently is also a way to do this using other SAILs, but again, nothing is clear about that.
Can anybody provide me with some guidance, or at least a snippet of code showing how to actually pass a SpatialAlgebra, SpatialSupport, or SpatialSupportInitializer other than the default one to the engine? Or is there an existing SAIL which implements all these methods, and how can I use it? Thanks.
PS: I'm actually relying on the 2.4.0-M2 version of RDF4J, which doesn't seem to include the org.eclipse.rdf4j.query.algebra.evaluation.function.geosparql package (which I imported manually). I also tried version 2.3.1, but I had the same issue.
Update: since RDF4J 2.4.0-M3, GeoSPARQL function support is a lot more comprehensive. The improved documentation gives a full list of all supported functions, as well as, hopefully, a better explanation of how to get started with GeoSPARQL. The short and sweet of it is that all you need to do is add this Maven module:
<dependency>
    <groupId>org.eclipse.rdf4j</groupId>
    <artifactId>rdf4j-queryalgebra-geosparql</artifactId>
    <version>2.4.0-M3</version>
</dependency>
and you're good to go: you can use GeoSPARQL on any kind of RDF4J repository.
There are several other GeoSPARQL functions supported by RDF4J out of the box: apart from distance, the functions union, intersection, symDifference, difference, convexHull, boundary, envelope, and getSRID are also supported at a minimum. sfContains is currently not part of the default set, unfortunately. This is mostly due to a licensing issue RDF4J had with a previous version of the JTS library (required for polygon support). However, more recent JTS releases are done as part of the LocationTech project, and those license issues have cleared up, so we should hopefully be able to extend this in the near future (there's an issue tracking this at https://github.com/eclipse/rdf4j-storage/issues/89).
In the meantime you will indeed need to create your own `SpatialAlgebra` class, which you can add to RDF4J by means of a `SpatialSupportInitializer`. This is a bit of a workaround hack: you create a class with `org.eclipse.rdf4j.query.algebra.evaluation.function.geosparql.SpatialSupportInitializer` as its fully-qualified name, make sure it extends the `org.eclipse.rdf4j.query.algebra.evaluation.function.geosparql.SpatialSupport` abstract class, and override its `getSpatialContext` and `getSpatialAlgebra` methods to provide your custom variants. Add it to your classpath and restart, and RDF4J will pick it up and use your `SpatialAlgebra` implementation instead of its own.
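To make that concrete, here is a minimal sketch of such a class. The method names come from the description above, but the exact signatures (visibility and return types) and the spatial4j import are assumptions you should check against the RDF4J version you build against; MySpatialAlgebra is a hypothetical implementation of the SpatialAlgebra interface.

// Must use exactly this package and class name so RDF4J picks it up.
package org.eclipse.rdf4j.query.algebra.evaluation.function.geosparql;

// Import may differ depending on the spatial4j version RDF4J pulls in.
import org.locationtech.spatial4j.context.SpatialContext;

public class SpatialSupportInitializer extends SpatialSupport {

    @Override
    protected SpatialContext getSpatialContext() {
        // e.g. return a JTS-backed context here for full polygon support
        return SpatialContext.GEO;
    }

    @Override
    protected SpatialAlgebra getSpatialAlgebra() {
        // MySpatialAlgebra: your own implementation of the functions that
        // DefaultSpatialAlgebra reports as notSupported.
        return new MySpatialAlgebra();
    }
}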
The bottom line is: this is all very beta. To be frank, if you think you could handle implementing additional GeoSPARQL functions using the workaround I mentioned above, then we would love to have your input (and if possible also your help) in actually adding this to RDF4J itself.
IDEs like NetBeans allow generation of entity classes through a persistence context. If you had access to the underlying generation mechanism (I'm not sure if it is an external tool or part of the IDE), could you generate database entity classes dynamically at runtime? The idea is that you could then hook into the entity classes using reflection.
I know you can go the other way and generate the database from the entity classes, but due to permission issues in my work environment that would be a no-go. However, if you reverse the process and pull the classes from the database, it may be feasible in my environment. The idea is that the database would serve as a single point of configuration/control.
It's theoretically possible, but what would be the point? Java is statically typed, so you would only be able to use the generated classes via reflection, and you would have no way of giving them behaviour, which removes the whole point of object-relational mapping. Loading the data into Maps or just using SQL record sets would be more convenient.
If you have an existing schema you can write classes that act in the way your application needs and declaratively map them onto the schema. That way the code is the simplest expression of your application logic and is persistence-agnostic.
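To illustrate, a hand-written class mapped declaratively onto an existing table might look like this (the table and column names are invented for the example):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hand-written entity: behaviour lives here, and the mapping just points
// at an existing table instead of generating one.
@Entity
@Table(name = "EMPLOYEE")
public class Employee {

    @Id
    @Column(name = "EMPLOYEE_ID")
    private Long id;

    @Column(name = "FULL_NAME")
    private String name;

    public Long getId() { return id; }

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    // Application behaviour can live alongside the mapping.
    public boolean hasName() { return name != null && name.length() > 0; }
}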
You can find a tool on the JBoss website that does reverse engineering from a database to Java objects.
The source code is available, so you should dig in!
https://www.jboss.org/tools/download/stable.html
Assuming you're using Hibernate, you might be able to use Hibernate Tools to generate the database schema. Although primarily designed for Eclipse and Ant, it's theoretically possible to link it in and invoke it like any other JAR.
This is for an Android application but I'm broadening the question to Java as I don't know how this is usually implemented.
Assume you have a project that targets a specific SDK version, and a new release of the SDK is backward incompatible and requires changing three lines in one class.
How is this managed in Java without duplicating any code (or by duplicating the least amount)?
I don't want to create two projects for only 3 lines that are different.
What I'm trying to achieve in the end is a single executable that'll work for both versions. In C/C++, you'd have a #define based on the version. How do I achieve the same thing in Java?
Edit: after reading the comments about #define, I realized there were two issues I was merging into one:
The first issue is: how do I avoid duplicating code? What construct in Java is the equivalent of a #define in C?
The second one is: is it possible to bundle everything in the same executable? (This is less of a concern than the first one.)
It depends heavily on the incompatibility. If it is simply behavior, you can check the java.version system property and branch the code accordingly (for three lines, something as simple as an if statement).
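For instance (doItTheOldWay and doItTheNewWay are just placeholders for your three differing lines):

// Branch on the runtime version; the "1.5" prefix is only an example.
String version = System.getProperty("java.version");
if (version.startsWith("1.5")) {
    doItTheOldWay();
} else {
    doItTheNewWay();
}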
If, however, it is a lack of a class or something similar that will throw an error when the class is loaded or when the code gets closer to execution (not necessarily something you can reasonably avoid by checking before calling), then the solution gets a lot harder. The notion of having a separate version is the cleanest from a code point of view, but it does mean you have to distribute two versions.
Another solution is reflection. Don't reference the class directly, call it via reflection (test for the methods or classes to determine what environment you are currently running in and execute the methods). This is probably the "official" approach in that reflection exists to deal with classes that you don't have or don't know you will have at compile time. It is just being applied to libraries within the JDK. It gets very ugly very fast, however. For three lines of code, it's ok, but doing anything extensive is going to get bad.
The last thing I can think of is to write common-denominator code, that is, code that gets the job done in both versions by finding another way to do it that doesn't trigger the problematic class or method.
I would isolate the code that needs to be different in a separate class (or multiple classes if necessary), and include / exclude them when building the project for the different versions.
So I would have something like src/java/org/myproj/Foo.java, which is the common stuff, and then oldversion/java/org/myproj/Bar.java and newversion/java/org/myproj/Bar.java, which are the different implementations of the class that uses the changed API.
Then I compile either "src/java and oldversion/java" or "src/java and newversion/java".
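To illustrate with made-up names, both source trees contain a Bar with the same signature, so Foo compiles against either one:

// oldversion/java/org/myproj/Bar.java
package org.myproj;

public class Bar {
    public static boolean isBlank(String s) {
        // pre-Java-6 runtime: String.isEmpty() doesn't exist yet
        return s == null || s.trim().length() == 0;
    }
}

// newversion/java/org/myproj/Bar.java
package org.myproj;

public class Bar {
    public static boolean isBlank(String s) {
        // same signature, but implemented against the newer API
        return s == null || s.trim().isEmpty();
    }
}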
Possibly a similar situation: I had a method which wasn't available in the previous version of the JDK, but I wanted to call it if it was there; I didn't want to force people to use the more recent version, though. I used reflection to look for the method: if it was there I called it, if it wasn't I didn't.
Pretty hacky, but it might give you what you want.
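A rough sketch of what that looks like ("newMethod" is a placeholder for whatever only exists in the newer version):

import java.lang.reflect.Method;

public class OptionalCall {

    // Invokes target.newMethod() if this runtime has it, otherwise falls back.
    public static void callIfAvailable(Object target) {
        try {
            Method m = target.getClass().getMethod("newMethod");
            m.invoke(target);
        } catch (NoSuchMethodException e) {
            // Older runtime: the method isn't there, run the fallback instead.
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}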
Addressing Java in general, I see two primary approaches.
1) Refactor the specific code into its own library and have different versions of that library. Effectively your app is creating an abstraction above the different SDKs. Heavyweight for 3 lines of code, but perhaps quite reasonable for larger-scale problems.
2) Injection using annotations. Write your own annotation processor to manage the appropriate injection. More work, but maybe more fun.
Separate the changing code into different classes with the same interface. Place the classes in the same JAR. Use the factory design pattern to instantiate one class or the other depending on the SDK version.
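A small sketch of that idea (Worker, OldSdkWorker, NewSdkWorker and the version threshold are all invented names for illustration):

// Common interface shared by both implementations.
public interface Worker {
    void doWork();
}

class OldSdkWorker implements Worker {
    public void doWork() { /* code written against the old API */ }
}

class NewSdkWorker implements Worker {
    public void doWork() { /* code written against the new API */ }
}

final class WorkerFactory {
    static Worker create(int sdkVersion) {
        // 11 is just an example cut-off between "old" and "new".
        return sdkVersion >= 11 ? new NewSdkWorker() : new OldSdkWorker();
    }
}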
My project is slowly implementing Java annotations. Half of the developers - myself included - find that doing anything complex with annotations seems to add to our overall maintenance burden. The other half of the team thinks they're the bee's knees.
What's your real-world experience with teams of developers being able to maintain annotated code?
My personal experience is that, on average, dealing with annotations is far easier for most developers than dealing with your standard Java XML configuration hell. For things like JPA and Spring testing they are absolute life-savers.
The good thing about annotations is that they make configuration on your classes self-documenting. Now, instead of having to search through a huge XML file to try and figure out how a framework is using your class, your class tells you.
Usually the issue with changes like this is that getting used to them simply takes time. Most people, including developers, resist change. I remember when I started working with Spring. For the first few weeks I wondered why anyone would put up with the headaches associated with it. Then, a few weeks later, I wondered how I'd ever lived without it.
I feel this breaks down into two uses of annotations: annotations that provide a 'description' of a class vs. annotations that provide a 'dependency' of the class.
I'm fine with a 'description' use of annotations on the class - that's something that belongs on the class and the annotation helps to make a shorthand version of that - JPA annotations fall under this.
However, I don't really like the 'dependency' annotations - if you're putting the dependency directly on the class - even if it's determined at runtime from an annotation rather than at compile time in the class - isn't that breaking dependency injection? (perhaps in spirit rather than in rule...)
It may be personal preference, but I like the one big XML file that contains all the dependency information of my application - I view this as 'application configuration' rather than 'class configuration'. I'd rather search through the one known location than searching through all the classes in the app.
It depends highly on IDE support. I feel that annotations should be kept in sync with the code via checks in the IDE, but that support for this is somewhat lacking.
E.g. an older version of IDEA would warn if you overrode a function without @Override, but wouldn't remove the @Override tag if you changed the method signature (or the superclass signature, for that matter) and broke the relation.
Without support I find them a cumbersome way to add metadata to code.
I absolutely love annotations. I use them for Hibernate/JPA, Seam, JAXB... anything that I can. IMO there's nothing worse than having to open up an XML file just to find out how a class is handled.
To my eye, annotations allow a class to speak for itself. Also, annotations are (hopefully) part of your IDE's content assist, whereas with XML config you are usually on your own.
However, it may come down to how the XML configs and annotations are actually used by any particular library (as most offer both), and what sort of annotation is used. I can imagine that annotations that define something build-specific (e.g. file/URL paths) may actually be easier to manage as XML config.
I personally feel that the specific use case you mentioned (auto-generating web forms) is a great use case for annotations. Any sort of "framework" scenario where you can write simplified code and let the framework do the heavy (often repetitive) lifting based on a few suggestions (aka annotations) is, I think, the ideal use case for annotations.
I'm curious why you don't like annotations in this situation, and what you consider to be the "maintenance burden"? (And I'm not trying to insult your position, just to understand it.)
I am currently using Hibernate Tools 3.1; I customized the naming convention and the DAO templates. The database (SQL Server 2005) is in an early development phase and I'm in charge of rebuilding the mappings, entities, DAOs, configuration, whatever. Each time I have to reverse-engineer the tables, and so I lose every customization I made to the mappings (*.hbm.xml files), like adjusting the identity columns or picking the fields used in equals and toString. I was considering writing the diff XML to a file and then "merging" that onto the generated mapping (see my related question), but I was wondering... is there any best practice/tool for dealing with these annoying, unavoidable, critical tasks?
I'd strongly recommend against continual reverse engineering. Reverse engineering is a great one time thing, but changes need to be managed as changes to both the hbm and the database.
We use migrations to manage db changes, and we include the associated changes in the hbm files. If Hibernate has it (I believe it does), you may want to look into annotations instead of hbm files; they can be quite a bit easier to maintain.
This is two and a half years late, but I'll offer a dissenting opinion. You should be able to make any customizations you need to the mapping files through the hibernate.reveng.xml file or a custom ReverseEngineeringStrategy. For the classes themselves, you should always generate to base classes and extend them with classes containing custom code.
For example, generate com.company.vo.generated.CustomerGenerated and extend it with com.company.vo.custom.Customer. Code generation should overwrite all classes in the generated package but never in the custom package (although you can have Hibernate Tools generate these custom classes in the target directory so that you can copy and paste blanks into the custom directory as needed). This way you can override methods for equals, toString, etc in the custom classes and not lose your changes when you regenerate. Also note that the best practice is to not check in generated code into SCM.
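A bare-bones sketch of that layout, using the package names from the example above (the fields are invented):

// com/company/vo/generated/CustomerGenerated.java -- regenerated every time,
// never edited by hand, not checked into SCM.
package com.company.vo.generated;

public abstract class CustomerGenerated {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// com/company/vo/custom/Customer.java -- written once, survives regeneration.
package com.company.vo.custom;

import com.company.vo.generated.CustomerGenerated;

public class Customer extends CustomerGenerated {

    @Override
    public String toString() {
        return "Customer[" + getId() + ", " + getName() + "]";
    }
}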
There are some great examples on this site of how to achieve this using Maven, the Hibernate3 plugin, and the build helper plugin. Most of these have very helpful answers by Pascal Thivent. This method is working beautifully for me, and while there is a bit of a learning curve it's a wonderful thing to be able to propagate database changes to the app with a single Maven command.