Did anybody run Apache Sling with an enabled Java SecurityManager? That'd need a special java.policy file to allow the actions done by all deployed bundles, and it'd be extremely helpful to have a basic version that already allows what's needed by the bundles provided with the basic Sling Starter, and to which one could add policies for additional deployed code.
I'd also be interested to hear whether employing the SecurityManager is infeasible in a Sling setting, perhaps due to its design properties (such as the ability to add JSPs to the JCR at runtime).
Background: If you run code from several tenants on one server, separating their code from each other might be necessary. While OSGi does have some mechanisms to separate bundles from each other, it'd be trivial for malicious code to use e.g. Java reflection to grab internal state from services provided by other bundles. An enabled security manager might at least make that much more difficult.
(I do realize that even with a security manager it's probably quite possible for malicious code to exploit bugs and design flaws to get access to resources of other users on the system, and that probably the only way to seriously separate code from different tenants would be using different servers. But at least one can try to make it hard.)
In order to favor certain service implementations over others, I wrote a customizable version of java.util.ServiceLoader (adds priority and enabled/disabled flag to implementations via preference files for non-OSGi code).
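The customization described above might be sketched roughly as follows, with the preference file represented by a `Properties` object. All names (`PrioritizedLoader`, the `.priority`/`.enabled` key convention) are hypothetical, not the actual implementation:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Properties;

// Sketch: order service implementations by a priority read from a preference
// file, and skip implementations flagged as disabled. The key convention
// "<implName>.priority" / "<implName>.enabled" is an illustrative assumption.
public class PrioritizedLoader {

    /** Returns the names of enabled implementations, highest priority first. */
    public static List<String> enabledByPriority(Properties prefs, List<String> implNames) {
        List<String> enabled = new ArrayList<>();
        for (String name : implNames) {
            // Implementations are enabled by default unless the file says otherwise.
            if (Boolean.parseBoolean(prefs.getProperty(name + ".enabled", "true"))) {
                enabled.add(name);
            }
        }
        // Missing priority defaults to 0; sort descending so the preferred
        // implementation comes first, as java.util.ServiceLoader itself
        // offers no ordering guarantee.
        enabled.sort(Comparator.comparingInt(
                (String name) -> Integer.parseInt(prefs.getProperty(name + ".priority", "0")))
                .reversed());
        return enabled;
    }
}
```

In the real version, the names returned here would then be matched against the implementations that `java.util.ServiceLoader.load(...)` discovered.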
The client was pleased and wanted the same customization for OSGi service implementations.
The devised solution is based on calling getServiceReferences(Class<S> clazz, String filter) on BundleContext and uses a null filter to retrieve all implementations.
Nevertheless, fiddling around with OSGi at such a low level leaves a bad taste. There's much boilerplate code (e.g. mandatory subtypes of BundleActivator), and this approach will also hinder a smooth upgrade to Declarative Services at some point in time.
I also read about the SERVICE_RANKING property, but compared to the preference files from the approach above, it has the drawback that each implementation sets its own ranking property and it's not possible to change the ranking afterwards.
So my question is: What are good arguments against this low-level approach? Why should declarative services be used instead?
At its core, OSGi is a dynamic environment. Bundles and services can (theoretically) come and go at any moment. So the only way to cope with this environment is to react to changes rather than to wait for something to happen.
For example a declarative services component will come up once all its mandatory services are present and will vanish if one goes away.
A solution based on ServiceLoader or similar will actively fetch the services that are currently available. If such a service is mandatory, you will have to block until it becomes available, which can easily lead to deadlocks in the application.
Of course, in practice the application is normally not that dynamic; in most cases this only affects startup. So in many cases the blocking behaviour can work, but it produces an application that is inherently fragile.
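The reactive pattern that DS implements can be illustrated without any OSGi types at all. This is only a minimal plain-Java sketch of the idea, not DS itself; all names are made up:

```java
// Sketch of the reactive lifecycle a Declarative Services component has:
// the component is only active while its mandatory dependency is present,
// instead of blocking until one shows up.
public class ReactiveComponent<S> {
    private S service;        // the mandatory dependency, null while absent
    private boolean active;

    // Called by the registry when a matching service is registered.
    public void serviceAppeared(S s) {
        service = s;
        active = true;        // the component "comes up"
    }

    // Called when the service is unregistered.
    public void serviceVanished() {
        service = null;
        active = false;       // the component "goes away" instead of blocking
    }

    public boolean isActive() { return active; }

    public S getService() { return service; }
}
```

In real DS, the SCR runtime plays the role of the registry here and calls the component's activate/deactivate methods for you.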
On the other hand, if your application needs to run both inside and outside of OSGi, then DS is problematic, as it relies on OSGi being present.
Typical examples are Apache CXF and Apache Camel. Neither uses DS; instead they invented their own abstractions for usage in OSGi, and both sometimes have problems in OSGi exactly because of this. Still, it would be difficult to improve this, as they need to work outside of OSGi too.
This is a rather high-level question so apologies if it's off-topic. I'm new to the enterprise Java world.
Suppose I have written some individual Java packages that do things like parse data feeds and store the parsed information to a queue. Another package might read from that queue and ingest those entries into a rules engine package. Tripped alerts get fed into another queue, which is polled by an alerting service (assume it's written in Python) that reads from the queue and issues emails.
As it stands I have to manually run each jar file and stick it in the background. While I could probably daemonize some or all of these services for resiliency or write some kind of service manager to do the same, this strikes me as being very amateur. Especially since I'd have to start a dozen services for this single workflow at boot.
I feel like I'm missing something, but I don't know what I don't know. Short of writing one giant, monolithic application, what should I be looking into to help me manage all these discrete components and be able to (conceptually) deliver a holistic application? I'd like to end up with some sort of hypervisor where I can click one button, it starts/stops all the above services, provides me some visibility into their status and makes sure the services are running when they should.
Is this where frameworks come into play? I see a number of them but don't know if that's just overkill, especially if I'm not actively developing a solution for that framework.
It seems you architected a system with a lot of components, and then after some time you decided to aggregate some of them because they happen to share the same programming language: Java. So, first a warning: this is not the best way to wire components together.
Also, it seems you don't know Java very well, because you mix up terms like package, jar and executable, which are distinct concepts.
However, let's assume that the current state of the art is the best possible and is immutable. Your current requirement is building a graphical interface (I guess HTTP/HTML based) to manage all the distinct components of the system written in Java. I suggest you use a single JVM, write your components as EJBs (essentially a start(), a stop() and a method to query the component state that returns a custom object), and finally wire everything up with the Spring framework, which has nice annotation-driven configuration via @Bean.
Spring Boot also has an actuator module that simplifies exposing objects. You may also find it useful to register your beans as JMX managed beans, and to use the Hawtio console to administer them (via a Jolokia agent).
I am not sure if you're actually using J2EE (i.e. the Java Enterprise Edition). It is possible to write enterprise software in J2SE as well. J2SE does not offer much off the shelf for this, but there are plenty of micro-frameworks such as Ninja, and full-stack frameworks such as the Play framework, which work quite well, are much easier to program against, and perform much better than J2EE.
If you're not using J2EE, then you can go as simple as:
make one new Java project
add all the jars as dependency to that project (see the comment on Maven above by NimChimpsky)
start the classes in the jars by simply calling their constructor
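The steps above can be sketched as a single launcher class. The component classes here are hypothetical stand-ins for the entry classes inside your jars:

```java
// Naive wiring sketch: one project, one main() that constructs each
// component directly and starts it, in dependency order.
public class Launcher {

    interface Component {
        String start();   // returns a status line, for illustration
    }

    // Stand-ins for the entry classes of the individual jars.
    static class FeedParser implements Component {
        public String start() { return "feed-parser: started"; }
    }

    static class RulesEngine implements Component {
        public String start() { return "rules-engine: started"; }
    }

    public static void main(String[] args) {
        Component[] pipeline = { new FeedParser(), new RulesEngine() };
        for (Component c : pipeline) {
            System.out.println(c.start());
        }
    }
}
```

For long-running components you would start each one on its own thread instead of calling it inline, but the wiring idea is the same.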
This is quite a naive approach, but can serve you at this point. Of course, if you're aiming for a scalable platform, there is a lot more you need to learn first. For scalability, I suggest the Play! framework as a good start. Alternatively you can use Vert.x which has its own message queue implementation as well as support for high performance distributed caches.
The standard J2EE approach is doable (and considered "de facto" in many old-school enterprises), but it has fundamental "flaws" or "differences" which make for a very steep learning curve and a rather non-scalable application.
It seems like you're writing your application in a microservice architecture.
You need an orchestrator.
If you are running everything on a single machine, a simple orchestrator that is probably already running is systemd. You write systemd service descriptions, and systemd will maintain your services according to them. You can specify the order in which services should be brought up based on dependencies between them, the restart policy if a service goes down unexpectedly, logging for stdout/stderr, etc. Note that this is the same systemd that runs the startup sequence of most modern Linux distros.
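A minimal unit file for one of the services might look like this; the paths, the unit name, and the `myapp` user are placeholders you would replace with your own:

```ini
# /etc/systemd/system/feed-parser.service (hypothetical example)
[Unit]
Description=Feed parser service
After=network.target

[Service]
# Run the jar; systemd captures stdout/stderr into the journal.
ExecStart=/usr/bin/java -jar /opt/myapp/feed-parser.jar
# Restart automatically if the process dies unexpectedly.
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl enable --now feed-parser` starts it and registers it for boot, and `systemctl status feed-parser` gives you the visibility you asked about.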
If you're running multiple machines, you can still keep using a single-machine orchestrator like systemd, but usually the requirements for the orchestrator become more complex. With multiple machines, you now have to take into account things like moving services between machines, phased roll-outs, etc. For these setups there is software that adapts systemd for multi-machine orchestration, like CoreOS's fleet, and there are also standalone multi-machine orchestrators like Kubernetes. Both use Docker as the application container mechanism.
None of what I've described here is Java specific, which means you can use the same orchestration for Java as you used for Python or other languages or architecture.
You have to choose. As Raffaele suggested, you can write all your requirements into one app/service. That seems like a feasible mission using Java EJBs, or using Spring Integration's AmqpTemplate (you can write to a queue with AmqpTemplate and receive the messages with a dedicated listener).
Or choose a microservices architecture: write one service that pushes to the queue, another that contains the listener, etc. That is a task that can be done easily with Spring Boot.
"One button to control them all": in the case of a monolithic app, it's easy.
If you choose a microservices architecture, it depends on your needs. If it's just start/stop operations, I guess starting and stopping your Tomcat (or other server) will do. For other metrics there is a variety of solutions; again, it depends on your needs.
I'm writing a JavaFX2 application that accepts arbitrary code to be loaded from remote locations. For me, using a custom SecurityManager, ClassLoader and ProtectionDomain was the way to go. Unfortunately this seems to be the same setup that's used to sandbox applets, which has caused a lot of security exploits, and that in turn has persuaded people to fear the Java Web plugin and remove it from their OS entirely.
Is the Java sandbox a secure environment in which to run untrusted code, or is it just the Java Web plugin as a whole that is insecure?
The security manager provides your app with exactly as much protection as it provided the plug-in. Which was, given the security bugs, 'not much'.
It currently plugs the known security holes (AFAIU). But as in any complex plug-in, there are probably more, yet to be discovered, or possibly to be introduced in new versions or new APIs.
So basically, your code should go somewhat beyond a standard security manager: black-list entire packages and (if need be) provide utility methods through which to perform the activity normally handled by those packages.
But then, that advice is just the first point of a 20+ point list; off-hand I might be able to name 2 or 3 of the things an app might need to guard against when running untrusted code. Though that is not the question..
Is the Java sandbox a secure environment in which to run untrusted code..
No. Java security might provide a good starting point for defence against untrusted code, but it would need to be extended specifically for the app, and have other elements, in order to be suited to the task. Even then, there are the 'unknown security bugs' (in both the JRE and your own security efforts) to consider.
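The package black-listing suggestion above might be sketched like this. The blocked package names are illustrative, this is nowhere near a complete sandbox, and note that SecurityManager is deprecated in recent JDKs:

```java
// Sketch of a SecurityManager that black-lists entire packages.
// A real implementation would also keep the standard checks
// (super.checkPackageAccess) and restrict many other operations;
// this only shows the black-listing idea itself.
public class BlacklistingSecurityManager extends SecurityManager {

    private static final String[] BLOCKED = { "sun.misc", "jdk.internal" };

    @Override
    public void checkPackageAccess(String pkg) {
        for (String blocked : BLOCKED) {
            if (pkg.equals(blocked) || pkg.startsWith(blocked + ".")) {
                throw new SecurityException("access denied to package: " + pkg);
            }
        }
    }
}
```

Once installed via `System.setSecurityManager`, the JVM calls `checkPackageAccess` whenever untrusted code tries to load a class from such a package.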
First and foremost: I want to state that this is mostly a personal exercise. There are plenty of containers and servers out there (Tomcat, Jetty, Winstone) that satisfy the needs of the market.
The other day I came across the Akka project and, having had a lot of fun with Erlang in the past, decided that it would be really cool to use it to build a functional web server.
Then I started daydreaming. What if I could use modern frameworks and build, in a code golf way, a web server that almost completely "stood on the shoulders of giants"? That is to say: how much of other people's work could I manage to use.
Ideally the requirements would resemble something like:
Fault tolerant, clusterable, distributed
Easy to configure
Supports HTTP, HTTPS, and AJP on configurable ports
Supports interface binding and multiple domains
Supports JSP, Jython, etc. through a pluggable interface
Supports modules that allow implementation of things like WebDAV, proxy, and URL rewrite
My biggest stumbling block at this juncture is how on earth do you use Jasper, Jetty, GlassFish or anything to interpret JSPs without worrying about all the other stuff, like networking, that they bring?
Any other suggestions for features would be highly awesome. I'm also investigating non-traditional configuration methods to see if there's anything out there that I like more than XML or properties files. For those of you who are familiar with Apache, sometimes you need a little scripting and sometimes you just need key/value pairs.
So, in any case, hit me up with your suggestions.
At least Tomcat has implemented its JSP engine as a module. It's not released separately, and it might require some work to fully decouple it from the rest of the Tomcat code.
It's got a separate name (Jasper) and its own Howto. It's found in the org.apache.jasper package (and below).
Is it sufficient to secure a Java web application with the rights of the user running the application server process, or is it reasonable to also use a SecurityManager with a suitable policy file?
I have usually done the former and not the latter, but some customers would like us to also use a SecurityManager that would explicitly grant permissions to every third-party component, to be sure there isn't any evil code lurking there.
I've seen some Servlet containers, like Resin, propose not using a SecurityManager because it slows things down. Any thoughts?
While I hate to ever recommend not using a security feature, it's my opinion that a SecurityManager is mainly intended for situations where untrusted or third-party code is executing in the JVM. Think applets, or a hosted, shared app-server scenario. If you have complete control over the app server and are not running anybody else's code, I think it's redundant. Enabling the SecurityManager does have a significant performance impact in my experience.
There is no simple yes/no answer to your question, because it really depends: what do you want to secure, and what do you want to secure it from?
For example, I've used the SecurityManager to implement IP filtering and allow only whitelisted IP addresses to connect to my application. If you just want to disallow access to disk files, running the application as a user with lesser privileges may be a better solution.
If you don't trust third-party plugins at all, remember that once you allow execution of plugin code, that plugin can crash your application if it wants to, even if you use a SecurityManager. If your application loads plugins, maybe whitelisting plugins and checking the list before loading is a better solution.
If you decide to use it, you will take a performance hit (since the JVM will do more checks), but how big the hit is really depends on the code/configuration that does the checks. My IP whitelist was pretty fast since it involved only a single list lookup; if your checks include invoking a remote web service and accessing a database, you can slow things down a lot. On the other hand, even that may not matter if you have enough hardware and few concurrent users (in other words, if you can afford it).
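The IP-filtering use mentioned above might look roughly like this; the class name and addresses are made up for illustration, and SecurityManager is deprecated in recent JDKs:

```java
import java.util.Set;

// Sketch: a SecurityManager whose checkAccept only allows white-listed peer
// addresses. The JVM calls checkAccept when a ServerSocket accepts a
// connection, so this amounts to the "single list lookup" described above.
public class IpWhitelistSecurityManager extends SecurityManager {

    private final Set<String> whitelist;

    public IpWhitelistSecurityManager(Set<String> whitelist) {
        this.whitelist = whitelist;
    }

    @Override
    public void checkAccept(String host, int port) {
        if (!whitelist.contains(host)) {
            throw new SecurityException("connection from " + host + " rejected");
        }
    }
}
```

A real configuration would combine this with a policy file so that the rest of the application keeps the permissions it needs.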
Correctly configuring the SecurityManager in Java can be hard. For instance, if you do not restrict the security manager itself, one can bypass all security just by setting the security manager to null.
Using a security manager only makes sense if your JVM will run untrusted code; otherwise it is a pain to set up, because you'll have to know beforehand which permissions you should set for each feature (e.g. RMI, sockets, I/O) and for each client.
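A per-component policy file of the kind those customers ask for might contain grant entries like the following. The jar path and the individual permissions are placeholders; the point is that each third-party jar gets only what it demonstrably needs:

```
// Hypothetical java.policy fragment: scope permissions to one jar via codeBase.
grant codeBase "file:/opt/app/lib/third-party.jar" {
    // Allow reading (but not writing) system properties.
    permission java.util.PropertyPermission "*", "read";
    // Allow outbound connections to one known host only.
    permission java.net.SocketPermission "api.example.com:443", "connect";
};
```

The file is then activated with `-Djava.security.manager -Djava.security.policy=/path/to/java.policy`; anything not granted here is denied for code loaded from that jar.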