I have a Java library project that depends on the Guava library. Guava has nearly 11k methods, and I expect most of my users to come from the Android community. On Android there is a 65k limit on the number of methods per dex file...
The total method count of my library is about 11,400, even though my own code is under 200 lines, so nearly all of those methods come from Guava.
I was able to download and shrink a Guava jar using ProGuard, reducing the method count to about 1k. But now the project has to reference this shrunk jar instead of the remote repository where Guava is hosted, and any local jar added to the project is discarded by Maven when the library is published to a remote repository as an artifact. As a result, the Guava classes cannot be resolved and the client application would ultimately crash.
Guava itself advises against using ProGuard when your “application” is actually a library, and recommends leaving it to the users of your library to deal with this situation by running ProGuard themselves to shrink Guava. But I don’t like this idea, because I would like to offer an easy, ready-made configuration.
As far as I know, the output ProGuard produces is some sort of executable (jar, apk, etc.), so if I shrink my own library, the final output is a jar, and that jar, again, cannot be published as an artifact because it gets discarded (I have tried several times).
Is there any way to run ProGuard in my own Java library project and pass the resulting output down the build chain so that it gets published to a remote repository as a proper artifact, not shipped as a loose jar?
By the way, I’m using Gradle to build my project, but at this point I would be willing to move to Maven if that solves the problem.
Thanks.
Do one of the following:
Publish your shrunk version of Guava as a separate Maven artifact and let your library depend on it like any other dependency (a Gradle sketch of this is shown after the list)
Do not shrink the library at all and use a multidex build - this is the documented way around the 65k method limitation:
http://developer.android.com/tools/building/multidex.html
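For the first option, the consumption side in Gradle is just an ordinary dependency declaration; the coordinates below are hypothetical and stand for wherever you end up publishing the shrunk artifact:

    // Hypothetical coordinates for a separately published, shrunk Guava;
    // your library depends on it like any other remote artifact.
    dependencies {
        compile 'com.example:guava-shrunk:18.0'
    }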
Anyway, as far as ease of configuration goes, you should not run ProGuard on your library, for a simple reason: users will have to add the dependency to their projects anyway. And what happens if some users start getting ClassNotFoundException because you stripped out code you didn't expect to be used?
Anyone programming for Android will sooner or later run into ProGuard, and I think sooner is better.
So for ease of configuration, I would rather state in the documentation that if users want to avoid the 65k limitation (because your library already pushes them over it), they can apply ProGuard using an example configuration you provide, along the lines of the sketch below.
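Such an example configuration can be as small as a couple of rules; the package name below is a placeholder for your own library:

    # Rough sketch of consumer-side ProGuard rules; com.example.mylib is a placeholder.
    # Keep the public API of the library and let unused Guava code be stripped.
    -keep class com.example.mylib.** { *; }
    # Suppress warnings about Guava references to classes that are absent on Android.
    -dontwarn com.google.common.**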
I'm starting a new project in Scala that heavily depends on source files in another Java project. At first I thought about creating a new package/module in that project, but that seemed messy. However, I don't want to duplicate the Java code in my new project either. The best solution I came up with is to reference the Java project through an SVN external dependency. Another option is creating a jar file from the original project and attaching it as a library to the new Scala project, but that makes keeping up to date more inconvenient. Which route is better?
SVN external:
Easy to update
Easy to commit
Inconvenient setup
Jar file as library:
Easy to setup
Old code isn't in active development (stable, bug fixes only)
Multi-step update
Need to open old project to make changes
You have your Scala project, and it depends on parts of your Java project. To me, that sounds like the Java project should be a library, with a version number, which is simply referenced by your Scala project. Or perhaps those parts that are shared across the two projects should be separated into a library. Using build tools like Maven will keep it clear which version is being used. The Java project can then evolve separately, or if it needs to change for the sake of the Scala project, you can bring out a new version and keep using an older one in other contexts if you're afraid of breakage.
The only exception I can think of, where you would go beyond binary dependencies, is if the Java code itself is processed at compile time in some way that is specific to the Scala project - say, annotation processing.
Using SVN externals could be a solution. Just be sure to work with branches and snapshots so that an update to your Java code on the trunk doesn't suddenly make your Scala project inoperable.
Whether you have mixed Scala/Java or not is irrelevant.
You probably want to use external dependencies (jars) if both projects are very distinct and have different release cycles.
Otherwise, you can use sbt and have your two projects be sub-projects of the same build (your sbt build file would have the Scala project depend on the Java project). Or even, if they are really so intertwined, just have one project with both Java and Scala source files; sbt handles that just fine.
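A minimal sketch of such a build (the project and directory names are made up) might look like this in build.sbt:

    // Sketch of an sbt multi-project build; "java-core" and "scala-app" are hypothetical names.
    lazy val javaCore = (project in file("java-core"))
      .settings(
        crossPaths := false,         // publish a plain Java artifact
        autoScalaLibrary := false    // don't pull in the Scala standard library
      )

    // The Scala project compiles against the Java project's classes.
    lazy val scalaApp = (project in file("scala-app"))
      .dependsOn(javaCore)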
Maven is an option too.
You can very easily set up a mixed Scala/Java project using Maven. Have a look at the scala-maven-plugin:
https://github.com/davidB/scala-maven-plugin
Eclipse users may not be too pleased, though, due to the poor Maven integration.
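For reference, the plugin wiring in the POM looks roughly like this (the version number is only an example; check the project page for the current release):

    <!-- Sketch: compiling Scala (and mixed Scala/Java) sources with the scala-maven-plugin. -->
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.2.2</version>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>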
I want to use the Jackson JSON parser library in my Android project. I saw this library in the Maven repository, but I don't know how to use it. I've downloaded the sources and the Jackson jars from the Maven repository and attached the sources to the jars, but in logcat I see a NoClassDefFoundError. While googling I read that I have to declare the Jackson dependencies in a pom.xml file. I'm a newbie in Java development, so I don't know what all of this means, and I have some questions:
1. How do I write a pom.xml for the Jackson library?
2. Where do I put this pom.xml?
3. Do I really need to install Maven if I just want to use the library?
4. What else do I need to begin working with the library?
No, you do not need to write a pom file, unless you are using Maven for building (in which case you need it regardless of Jackson).
What you need are just Jackson jars -- there is more than one, since some projects only need some pieces. This page:
http://wiki.fasterxml.com/JacksonDownload
should show what you need, and where to get them from. If you are starting from scratch, I would strongly recommend using Jackson 2.1 (not 1.9). You will then most likely need 3 jars (jackson-annotations, jackson-databind, jackson-core) -- although the minimal set is just jackson-core, if you use the so-called "streaming API" (low-level, highest performance, but more work).
The benefit of using Maven would simply be that you can declare a logical dependency (the group and artifact id of the jar), and Maven resolves it to the physical jar, along with the jars it in turn depends on.
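If you ever do move to Maven, the declaration is short; a sketch (the version shown is illustrative) would be:

    <!-- Sketch: jackson-databind pulls in jackson-core and jackson-annotations transitively. -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.1.0</version>
    </dependency>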
I'm new to Maven, using the m2e plugin for Eclipse. I'm still wrapping my head around Maven, but it seems like whenever I need to import a new library, like java.util.List, now I have to manually go through the hassle of finding the right repository for the jar and adding it to the dependencies in the POM. This seems like a major hassle, especially since some jars can't be found in public repositories, so they have to be uploaded into the local repository.
Am I missing something about Maven in Eclipse? Is there a way to automatically update the POM when Eclipse automatically imports a new library?
I'm trying to understand how using Maven saves time/effort...
You picked a bad example: classes like java.util.List come with the Java standard runtime and are available regardless of any Maven configuration.
With that in mind, if you wanted to add something external, say Log4j, then you would need to add a project dependency on Log4j. Maven would then take the dependency information and create a "signature" to search for, first in the local cache, and then in the external repositories.
Such a signature might look like
groupId:artifactId:version
or perhaps
groupId:artifactId:version:classifier
This identifies a Maven "module", which will then be downloaded and configured into your system. Once in place, it adds all of the classes within the module to your configured project.
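As a concrete sketch, the Log4j dependency mentioned above would be declared in the POM like this (the version is just an example):

    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>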
Maven principally saves time in downloading and organizing JAR files in your build. By defining a "standard" project layout and a "standard" build order, Maven eliminates a lot of the guesswork in the "why isn't my project building" sweepstakes. Also, you can use neat commands like "mvn dependency:tree" to print out a list of all the JARs your project depends on, recursively.
Warning note: If you are using the M2E plugin and Eclipse, you may also run into problems with the plugin itself. The 1.0 version (hosted at eclipse.org) was much less friendly than the previous 0.12 version (hosted at Sonatype). You can get around this to some extent by downloading and installing the "standalone" version of Maven from apache (maven.apache.org) and running Maven from the command line. This is actually much more stable than trying to run Maven inside Eclipse (in my personal experience) and may save you some pain as you try to learn about Maven.
While downloading Google Guice I noticed two main "types" of artifacts available on their downloads page:
guice-3.0.zip; and
guice-3.0-src.zip
Upon downloading them both and inspecting their contents, they seem to be two totally different "perspectives" of the Guice 3.0 release.
The guice-3.0.zip just contains the Guice jar and its dependencies. The guice-3.0-src.zip, however, does not contain the actual Guice jar, but it does contain all sorts of other goodness: javadocs, examples, etc.
So it got me thinking: there must be different "configurations" of jars that get released inside Java projects. Crossing this idea with what little I know from build tools like Ivy (which has the concept of artifact configurations) and Maven (which has the concept of artifact scopes), I am wondering what the relation is between artifact configuration/scope and the end deliverable (the jar).
Let's say I was making a utility jar called my-utils.jar. In its Ivy descriptor, I could cite log4j as a compile-time dependency, and junit as a test dependency. I could then specify which of these two "configurations" to resolve against at build time.
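Concretely, the Ivy descriptor I have in mind would look roughly like this (the organisation, revisions and configuration mappings are just illustrative):

    <ivy-module version="2.0">
      <info organisation="com.example" module="my-utils"/>
      <configurations>
        <conf name="compile" description="needed to compile and run"/>
        <conf name="test" extends="compile" description="needed only to run the tests"/>
      </configurations>
      <dependencies>
        <dependency org="log4j" name="log4j" rev="1.2.17" conf="compile->default"/>
        <dependency org="junit" name="junit" rev="4.11" conf="test->default"/>
      </dependencies>
    </ivy-module>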
What I want to know is: what is the "mapping" between these configurations and the content of the jars that are produced in the end result?
For instance, all of my compile-configuration dependencies might wind up in the main my-utils.jar, but would there ever be a reason to package my test dependencies into a my-utils-test.jar? And what kind of dependencies would go into a my-utils-src.jar?
I know these are a lot of tiny questions, so I guess you can sum everything up as follows:
For a major project, what are the typical varieties of jars that get released (such as guice-3.0.zip vs guice-3.0-src.zip, etc.), what are the typical contents of each, and how do they map back to the concept of Ivy configurations or Maven scopes?
The one you need to run is guice-3.0.zip. It has the .class files in the correct package structure.
The other archive, guice-3.0-src.zip, has the .java source files and other things that you might find useful. A smart IDE, like IntelliJ, can use the sources to allow you to step into the Guice code with a debugger and see what's going on.
You can also learn a lot by reading the Guice source code. It helps to see how developers who are smarter than you and me write code.
I'd say that the best example I've found is the Efficient Java Matrix Library at Google Code. That has an extensive JUnit test suite that's available along with the source, the docs, and everything else that you need. I think it's most impressive. I'd like to emulate it myself.
I'm trying to check out slf4j-simple-1.6.2 from a trusted repository (preferably, SLF4J's official repo) and pull it down into an Eclipse project. I'm doing this because I need to tweak SLF4J Simple's code so that it binds to my own logging implementation.
I'm hoping there is a way to do this without having to use Maven, because I've never used Maven before and feel much more comfortable running Ant builds.
Nevertheless, I've searched SLF4J's site high and low and cannot find any trusted links to their repository.
Even once I get the project imported into Eclipse, I still need to figure out how to get it building with Ant.
Could someone please help me:
Find the repo
Confirm whether an Ant build is possible
Thanks in advance!
The zip download here also contains the sources.
The official source code repository is hosted on GitHub. However, I believe you are going about this the wrong way.
The idea of SLF4J is to depend on slf4j-api and let the developer add exactly one binding. Instead of tweaking the original bindings, just write your own. Of course you can use the simple binding as a starting point, but modifying existing open-source libraries and maintaining patched versions is a lot of work.
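To give an idea of the shape of a binding, here is a very rough sketch of the entry point an SLF4J 1.x binding has to provide; the package and class name are dictated by SLF4J, while MyLoggerFactory is a hypothetical placeholder for your own ILoggerFactory implementation:

    // Sketch only: SLF4J 1.x looks up this exact class, org.slf4j.impl.StaticLoggerBinder.
    package org.slf4j.impl;

    import org.slf4j.ILoggerFactory;
    import org.slf4j.spi.LoggerFactoryBinder;

    public class StaticLoggerBinder implements LoggerFactoryBinder {

        // Checked by LoggerFactory at runtime; deliberately not final.
        public static String REQUESTED_API_VERSION = "1.6.99";

        private static final StaticLoggerBinder SINGLETON = new StaticLoggerBinder();

        // MyLoggerFactory is a hypothetical name for your own ILoggerFactory implementation.
        private final ILoggerFactory loggerFactory = new MyLoggerFactory();

        public static StaticLoggerBinder getSingleton() {
            return SINGLETON;
        }

        public ILoggerFactory getLoggerFactory() {
            return loggerFactory;
        }

        public String getLoggerFactoryClassStr() {
            return MyLoggerFactory.class.getName();
        }
    }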
As you said, slf4j is present in the official Maven repository.
So basically, you have 2 simple solutions without using Maven:
Download the JAR / sources / javadocs from this Maven repository and copy them into your own project directory.
Use Ivy. This is an extension of Ant that gives you better dependency management. It can connect to Maven repositories, so you will be able to retrieve your slf4j dependency without having to use Maven; an example ivy.xml entry is sketched below.
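For the Ivy route, the relevant entry in the dependencies section of ivy.xml is just this (the revision is the one from the question):

    <!-- Sketch: lets Ivy pull slf4j-simple (and slf4j-api transitively) from a Maven repository. -->
    <dependency org="org.slf4j" name="slf4j-simple" rev="1.6.2"/>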