I am considering switching my application's existing WFS/WMS "SDK" to GeoServer. However, my application has a few special requirements.
I must remain in full control of the application's entry point (main(...)).
I cannot introduce any additional interfaces (such as the GeoServer GUI).
Essentially my application just needs an SDK which exposes our data over an HTTP "/wfs" path. We need to avoid any other interfaces or code being added or exposed. This is unfortunately an unavoidable requirement. Also, I have little experience with the GeoServer source code so far, as we have been using a different toolset. I am of course combing through the source, but am having trouble finding the right classes to get started.
In our existing SDK, I am able to programmatically create a Jetty server with a WFS Servlet assigned to our desired path. A class provided during the Servlet's initialisation handles communication between our code and the Servlet.
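For reference, the kind of setup described above can be sketched with embedded Jetty along these lines. This is only a sketch assuming the Jetty 9 and Servlet APIs are on the classpath; WfsServlet here is a stand-in for the SDK's real servlet, not a GeoServer class:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class EmbeddedWfs {
    // Placeholder standing in for the SDK's actual WFS servlet
    static class WfsServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("application/xml");
            resp.getWriter().write("<!-- WFS response would go here -->");
        }
    }

    public static void main(String[] args) throws Exception {
        // main() stays under the application's control, as required
        Server server = new Server(8080);
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new WfsServlet()), "/wfs");
        server.setHandler(context);
        server.start();
        server.join();
    }
}
```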
In order to get a similar setup using GeoServer, I am assuming:
I must add org.geoserver.gs-wfs to my pom.xml dependencies
I must run my own Jetty server in my Main function, and programmatically add the WFS module somehow
I do not yet know:
How to initialise and add the gs-wfs module to my own programmatically created Jetty server
How to get a shared instance of the Catalog to add / remove the configured data
With specific focus on points 1 and 2 above, how do I initialise an instance of "just" the GeoServer WFS endpoint?
The path you're taking is too complicated (besides, there is no WFS servlet to start with)... the GeoServer war is the packaging of a modular application, with a selection of commonly used modules included in it.
If you want to remove the GUI, you simply go into the packaged war file and remove any jar that starts with "gs-web". Since you want full control, you probably also want to remove the administrative REST interface, so remove all jars starting with "gs-rest" as well. That should get you close to an application that can start and run.
I say "close" because this operation is not commonly attempted, and there might be some unintended cross-module dependency preventing it from working.
Another possibility is to check out (or clone) GeoServer, go into src/web/app and edit the pom.xml file, removing all dependencies you don't want... rebuild and you'll get a minimized war file with only the necessary jars.
GeoServer is a lot more complex than just a pick and mix bag of jars. If you want to create a single jar WFS server you will need to start with a copy of the specification and probably an understanding of how GeoTools (the underlying library of GeoServer) works, and about a year or two of development time.
Or you could read the GeoServer manual and turn off the GeoServer GUI. Then all you need to do is master the REST API to load data into it.
When distributing a Java application to others, it can be deployed as a JAR file for easy execution.
But is there a way to change a Java class / part of the code after deployment without having to rebundle the whole application again?
Say you have an app with 10 classes, where 9 are finalized but one needs to be adjusted for each individual case. What would be the easiest way to change just that one class in the app?
You probably want to use Java Web Start. If your user starts the application via Java Web Start, it is automatically updated whenever updates are available.
EDIT
It does not provide class-level granularity, but I believe that is not the real issue. It does provide jar-level granularity, i.e. a newer version of a jar is downloaded only if it has changed.
No, there's not.
You should either repackage, or design the class that needs adjusting to be configurable at runtime. Modifying it through a configuration database and a factory is the only way to do it without repackaging.
In theory you could create another jar for the customized classes and put it on the classpath before the old jar, and the JVM will load the customized classes. But this is simply looking for trouble...
Better to build two jars, one with the unchanging classes and another with the customized classes, and rebuild the latter when you need to.
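The "configurable at runtime" approach mentioned above can be sketched with plain reflection. The property name app.adjustable.impl and the ArrayList default are placeholders invented for this example:

```java
public class ConfigurableFactory {
    /**
     * Loads an implementation class named in a system property (or any other
     * configuration source), so the deployed jar never has to be rebuilt
     * just to swap out the one adjustable class.
     */
    public static Object create(String propertyName, String defaultClassName)
            throws Exception {
        String className = System.getProperty(propertyName, defaultClassName);
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // With no override configured, the default implementation is used
        Object impl = create("app.adjustable.impl", "java.util.ArrayList");
        System.out.println(impl.getClass().getName());
    }
}
```

Each customer's configuration then names its own implementation class, and only that class's jar needs rebuilding.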
This may be a very rudimentary question, but please help me out if this is well-known and has been solved elsewhere.
I have a multi-WAR setup (all Maven modules), say kilo-webapp1 and kilo-webapp2, as two WARs that I need to deploy on a Tomcat instance. Both webapps use services from a common service jar, say kilo-common-services.jar. That jar has its own Spring context, which is loaded by its users, viz. kilo-webapp1 and kilo-webapp2 in this case. Initialization of the services in kilo-common-services takes a long time, so I want it to happen only once (to keep instance startup time down), which also lets me use it as a second-level cache that is kept current in the JVM instance. To do this, we resorted to the following steps:
Modify the catalina.properties of CATALINA_BASE in tomcat to have shared.loader as ${catalina.base}/shared/lib
Copied the kilo-common-services.jar and all of its dependent jars to the CATALINA_BASE/shared/lib. [Manual step]
Copied Spring-related jars to the CATALINA_BASE/shared/lib location [Manual step]
Created a beanRefContext.xml file in kilo-common-services.jar. Defined a new ClassPathXmlApplicationContext there, whose constructor was given the location of the Spring context file for the common services.
Marked kilo-common-services.jar and every other shared dependency (like the Spring jars) with provided scope in the kilo-webapp1 and kilo-webapp2 POM files. For Spring this is needed to ensure that classpath scanning is not triggered twice; it also prevents various ClassCastExceptions (for log4j, let's say) that occur if the jars are not excluded via the provided scope.
web.xml for kilo-webapp1 and kilo-webapp2 indicated that their parentContext is the servicesContext defined in kilo-common-services.jar.
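The shared-context step above is often implemented as a lazy, JVM-wide singleton holder living in the shared jar. A sketch assuming Spring is on the shared classpath; the servicesContext.xml file name is a placeholder:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class SharedServicesContext {
    private static volatile ApplicationContext context;

    private SharedServicesContext() {}

    /**
     * Builds the shared context at most once per JVM. Both wars see the same
     * instance because this class sits in shared/lib and is therefore loaded
     * by the common (shared) class loader, not by either webapp's loader.
     */
    public static ApplicationContext get() {
        if (context == null) {
            synchronized (SharedServicesContext.class) {
                if (context == null) {
                    context = new ClassPathXmlApplicationContext("servicesContext.xml");
                }
            }
        }
        return context;
    }
}
```

The double-checked locking on a volatile field guarantees the slow initialization runs only once even if both webapps start concurrently.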
I was able to verify that only one instance of the kilo-common-services services exists, but, as you might imagine, the setup is painful. If someone has best practices for such a setup in an IDE like Eclipse, I would really appreciate it. My problems are as below:
#2 is becoming a challenge. I am currently running mvn dependency:copy-dependencies on kilo-common-services to copy dependent jars from target/dependency to shared/lib, which is a woefully manual step. Time and again I forget to regenerate the dependencies and have to redeploy.
#3 is also not straightforward, as time and again there are new common dependencies, and we always have to remember to copy them to shared/lib to avoid ClassCastExceptions.
#5 is again a maintenance nightmare.
Also, as time progresses, there will be more such disparate common jars that need to be shared, and each of them will involve the same pain. Feel free to critique the setup and propose a better one in its place that is easier to use (from an IDE as well). I would be happy to provide any other details.
Thanks in advance!
The problem is that your architecture is broken (and that's why you're struggling with the solution). You have two solutions:
1) If you want to share a service that takes a long time (to initialise) between two war applications, make that a separate service completely and access it via rest or any kind of remoting.
2) Merge both webapps into one.
Having the common library in the shared lib folder is going to bring you lots of headaches, and you'll end up rolling it back.
My (personal) approach would be to merge both applications, but keep the packages separate enough and have separate spring configurations. In this way, at least you still keep the logic separation of both webapps.
Also since both run on the same container, there's little gain from having 2 separate wars (unless you're planning to move them to different containers very soon).
About the IDE, you can use the maven-cargo-plugin to start up a tomcat with several web applications with (almost) any configuration you want.
We are developing a RESTful SOA with Spring and Tomcat, utilizing Domain-Driven Design (well, that's the plan anyway). There is a migrationProject and an initial basic search service: two separate WAR files, with two separate POMs. Both utilize the same domain objects.
So I will have a separate project containing just the domain objects. I will wrap them up into a jar, and then, using Maven and/or Jenkins, it will deploy automatically whenever I configure it to (for example, when pushed to a specific repository).
Having two copies of the same jar sounds like a much worse idea to me. It's not your architecture that is broken; it's your deployment and development process that needs improvement, imho.
(my kind of related question).
Our long term plan is to have one project as the restful interface, with multiple Controllers that have service classes and repositories injected into them from their dependencies.
I have an API that is being written for a large group of 40 or so applications to share.
My problem is that currently the plan is to include the API as a simple library in each WAR file for each program. The problem that's going to occur is when two apps run on the same instance with different versions of the API library. I've had a lot of problems with this in the past.
I seem to remember, a while ago, something where I could wrap my library into an EAR file or something and deploy it to Tomcat to make it global. Simply including it in the lib folder won't work because it includes Hibernate components that have to be deployed to allow the API methods to access the database. Then in each application I would have an interface I could implement that allows me to call those API methods. Very similar to local EJB3, but not as complex, and it didn't require an enterprise-level server to implement.
Does anyone else remember something like this, or was it a bad dream on my part?
You will have problems if you use a single jar shared by all the webapps, since it will then be impossible for two apps to use a different version of a library. But if each webapp has its own version of the library in its WEB-INF/lib, the container shouldn't have any problem: each webapp has its own classloader, which doesn't see the libraries of other webapps.
I have a Java project that has both server and client packages. In addition I have a library package.
I use Eclipse and have put everything in a single Java project; each section (server, client and library) is in a separate package. The problem is that when I export, everything gets added to the Jar file.
So I suppose I need two different projects, client and server, but what about the shared library files? What do I do about them? Do I actually need three different projects? It will become a little unwieldy as everything is actually related and I would like to keep them together.
I use eclipse and have put everything in a single java project, each section server, client and library are in separate packages, the problem is that when I export, everything gets added to the Jar file.
This is the part that intrigued me, why are you exporting something that has both the client and the server? From a client-server perspective they are going to be distributed separately.
Do I actually need three different projects? It will become a little unwieldy as everything is actually related and I would like to keep them together.
Thanks to how IDEs can now manage dependencies across projects/modules, I don't think it looks as bad as you picture it. For example you can work simultaneously on the server code, and use its classes and interfaces from your client code, and reference JARs produced by the server project.
I'd also like to add that a 'Project' isn't the broadest encapsulation of code either; there is still a 'Workspace' that can contain a number of related 'Projects'. Other IDEs use other wordings, like 'Module' instead of 'Project'.
Closing thoughts:
For the path of least friction, I think you should separate the client and the server parts into two projects, and do the same thing for the shared library in case you are compiling it from source, i.e., it is not a 3rd-party JAR.
So at the end of the day you will have 3 'products' from the compilation process and distribute them where they belong, with the 'library' duplicated on both distribution sides.
You can have a separate project for your shared code, and create a library (i.e. jar file) for that. Then, your client and server projects can both use the shared library.
Even better, you can use this shared library for other projects in the future.
Note:
Eclipse is just going to compile the source files into their respective class files and put them in the bin folder, or wherever you have the output folder set in the project properties. It doesn't create a jar file by default.
If you want to create jar files, the best way is to use a tool like ant. Then you would be able to create whatever jars you need, and structure it however you like.
Here's a link for reference:
Create Multiple JARs from Eclipse Project
You can create separate projects for the client and server sides; the shared package can be attached in the classpath definition.
... the problem is that when I export, everything gets added to the Jar file.
Is that really a problem? Maybe the shared code is an asset rather than a liability. Perhaps you should optimize the developer issues before worrying about the deployment problems that, around here, we've decided aren't problems after all.
So I suppose I need two different projects, client and server, but what about the shared library files? What do I do about them? Do I actually need three different projects? It will become a little unwieldy as everything is actually related and I would like to keep them together.
We have a similar situation here and chose to embrace the shared code. Everyone gets the same code and chooses what mode and configuration they need at start-up.
If you check out our large-ish system (a bit over 5000 classes), you get the code for the servers (two main flavors), the clients (another two types), shared content (third party jars, visual assets, etc.) and site specific material (configuration files, start-up scripts and example data).
The result is that, after one checkout, you have the complete package for all of our primary locations, build scripts and Netbeans and Eclipse launch configs. As a result, you can go from an empty machine (with just an IDE) to a working client-server combination in about five minutes.
Double-click the server icon and you launch a server process running the site-specific configuration; double-click the client and you launch a client process that's ready to connect to the server you just made.
Punchline: don't make development and deployment harder on yourself unless there's a very good reason. In our case, it was simpler, cheaper and easier to maintain the situation where we gave every installation the exact same package.
I am just investigating the idea of this so have no example code. I've been digging around on the web but I'm not really sure what I should actually be looking for so I'm not finding much. So I thought I'd ask and see if anyone knew if a) This is possible. b) How I should do it. (Or at least what I should be looking to learn about to do it.)
I'm building a web app using JSP pages on the client, with a JBoss server running J2EE; in the middle there is a Tomcat web server.
Basically the app contains different sections that will be rolled out over time as they're developed, and customers may be using different combinations of sections. The tidiest way of deploying this I can think of is to build each section into its own jar. Then, depending on the combination of sections relevant for the customer, install only the required jars on the JBoss server for deployment.
To support this I'd like the client navigation menu to show only the sections available based on what is deployed on the JBoss server. Is it possible for my servlet to find out what is deployed on the server? I'd like the solution to be as 'dumb' as possible, i.e. I don't want to tell it what to look for (other than a prefix to identify our jars), as I don't yet know everything we might build.
My current solution is to have a table in the database to hold a list of the installed sections. But this is going to require update scripts etc during install and I'm sure we should be able to do this by just deploying jars on the server.
Thanks in advance.
You could add this information to the MANIFEST.MF file and then enumerate all files in your webapp (see this answer for how to list all manifests) when you start.
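The manifest-enumeration idea can be sketched with the standard library alone. The X-App-Section attribute name is invented for illustration; use whatever key you add to your jars' MANIFEST.MF files:

```java
import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public class ManifestScanner {
    /**
     * Collects the value of a custom manifest attribute from every jar on
     * the class path; jars that don't define the attribute are skipped.
     */
    public static List<String> findSections(String attributeName) throws Exception {
        List<String> sections = new ArrayList<>();
        Enumeration<URL> manifests =
                ManifestScanner.class.getClassLoader().getResources("META-INF/MANIFEST.MF");
        while (manifests.hasMoreElements()) {
            try (InputStream in = manifests.nextElement().openStream()) {
                Attributes attrs = new Manifest(in).getMainAttributes();
                String value = attrs.getValue(attributeName);
                if (value != null) {
                    sections.add(value);
                }
            }
        }
        return sections;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findSections("X-App-Section"));
    }
}
```

Run once at startup, the resulting list can drive the navigation menu without any database table of installed sections.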
I do something similar by configuring a "Plugin Directory" setting for my application. It then scans that directory regularly for Jars. It looks for specific metadata in the manifest to determine whether the Jar is actually a valid plugin, and what class to load from it (the static initializer of that class registers the plugin with the application).
Then all you need to do is place a new Jar into that directory to add its functionality.
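The plugin-directory approach described above might look roughly like this. The "Plugin-Class" manifest attribute is a made-up name for this sketch; use whatever metadata key your application defines:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class PluginScanner {
    /**
     * Scans a directory for jars whose manifest declares a plugin class and
     * loads that class. Loading with initialize=true runs the class's static
     * initializer, which can register the plugin with the application.
     */
    public static List<Class<?>> loadPlugins(File dir) throws Exception {
        List<Class<?>> plugins = new ArrayList<>();
        File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
        if (jars == null) {
            return plugins; // not a directory, or an I/O error
        }
        for (File jar : jars) {
            String className = null;
            try (JarFile jarFile = new JarFile(jar)) {
                Manifest manifest = jarFile.getManifest();
                if (manifest != null) {
                    className = manifest.getMainAttributes().getValue("Plugin-Class");
                }
            }
            if (className != null) {
                // Deliberately left open: the loaded class stays in use
                URLClassLoader loader = new URLClassLoader(
                        new URL[] { jar.toURI().toURL() },
                        PluginScanner.class.getClassLoader());
                plugins.add(Class.forName(className, true, loader));
            }
        }
        return plugins;
    }
}
```

Dropping a new jar into the scanned directory is then enough to add its functionality on the next scan.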