Should client-server code be written in one "project" or two?

I've started building a client-server application. At first I naturally created two projects in Eclipse, two source control repositories, etc. But I'm quickly seeing that there is a fair amount of shared code between the two that would benefit from being shared (in the same project or in a shared library) instead of copied.
In addition, I've been learning and trying test-driven development, and it seems to me that it would be easier to test against real client components rather than having to set up a huge amount of code just to mock something, when the code lives mostly in the client. In this case it seems sensible to have the client and server together, in one project, thinly separated by root packages (org.myapp.client.* and org.myapp.server.*, maybe org.myapp.shared.* too).
My biggest concern in merging the client and server, however, is security; how do I ensure that the server pieces of the code never reach a user's computer? When Eclipse bundles a JAR, I'd have to pick out the server-specific bits and hope I don't miss any, right?
So especially if you are writing client-server applications yourself (and especially in Java, though this can turn into a language-agnostic question if you'd like to share your experience with this in other languages), what sort of separation do you keep between your client and server code? Are they just in different packages/namespaces or completely different binaries using shared libraries, or something else entirely? How do you test the code together and yet ship separately?

A lot of this will depend on your specific implementation, but I typically find that a project like this produces at least three assemblies (binaries):
A common DLL that contains shared functionality used by both the client and the server
The DLL/EXE for the client
The DLL/EXE for the server
Using this approach you have your shared items, and you ensure that server-specific items never end up in a distribution sent to the client workstations.

Neither; it should be three (common, client, and server). However, they don't necessarily need to be three "projects". Using Maven, I create three sub-modules under a master project. You can do something similar using Ant.
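For illustration, a minimal parent POM for that layout might look like this (the group and module names are placeholders, not from the answer):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
        <modelVersion>4.0.0</modelVersion>
        <groupId>org.myapp</groupId>
        <artifactId>myapp-parent</artifactId>
        <version>1.0-SNAPSHOT</version>
        <packaging>pom</packaging>
        <modules>
            <module>myapp-common</module>
            <module>myapp-client</module>
            <module>myapp-server</module>
        </modules>
    </project>

The client and server modules would each declare a dependency on myapp-common, so the shared code is built once and packaged into both distributions, while server classes never land in the client JAR (which addresses the security concern in the question).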

I have found that at least one project per finished artifact (server deployment, client binary, etc.) works well with e.g. Hudson. Then you can keep shared code in a base project available to all of them.

Related

What is the difference between the microservices and the monolithic approach for the provided use case

So I started reading about different software architectures and inevitably came across the microservices architecture. Yet, I am wondering how these architectures actually differ from each other. In a monolithic approach I would, for example, modify a pom.xml to take my different layers and pack them into one application to deploy. I'd say this might even be the most common way to set up a simple application.
Now, as I understand microservices, you separate each service from the others and let them run independently. For me that means every service is deployed on its own (so you basically have a Tomcat running with quite a lot of .war files on it). But I think I'm missing the difference to a monolithic application.
I'll try to give a (quite common) example:
You have a frontend (e.g. Angular) and a Spring Boot backend communicating via REST services. Now I take a pom.xml and do the following steps:
build the frontend application
include the necessary JS files in my Spring application
create a WAR file from the project
As a result I get one single WAR file that I can deploy, but it contains two services: backend and frontend. I would still call this a monolithic approach.
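For concreteness, step 2 can be wired with the standard maven-resources-plugin, copying the built frontend into the WAR's static resources (the directories here are assumptions, not from the post):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <executions>
            <execution>
                <id>copy-frontend</id>
                <phase>prepare-package</phase>
                <goals>
                    <goal>copy-resources</goal>
                </goals>
                <configuration>
                    <outputDirectory>${project.build.directory}/classes/static</outputDirectory>
                    <resources>
                        <resource>
                            <!-- wherever the Angular build drops its output -->
                            <directory>${project.basedir}/frontend/dist</directory>
                        </resource>
                    </resources>
                </configuration>
            </execution>
        </executions>
    </plugin>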
Now suppose I instead deploy the Angular application directly on my Tomcat and deploy a WAR file of the Spring Boot part of the application (without the integrated frontend). That way I would have two deployed services running on the same server, interacting with each other, each replaceable without touching the other. By definition I would not call that a monolithic approach (same code, different deployment) but a microservice architecture, right? But this cannot be the whole story, since literally every article I read lists the same advantages and disadvantages for the two architectures, and I cannot see any difference except the possibility to exchange frontend and backend independently (which I have in both cases, except that in the first case I would need to redeploy the full application).
Microservices are just a set of guidelines that describe how to design your application so that it is scalable, manageable, and adapts to a fast development pace. It is not just about how you deploy your application.
Over the years, we have learned that when you try to build one big application as a monolith, it initially gives you pace: the different modules in your monolith have complete visibility of each other and can access and tweak things as they wish, so even a change that should affect one module may leak into other classes where it does not belong. While this helps you prototype, the code becomes less and less maintainable. You can of course put in effort to keep your code clean, but that effort grows as the app grows.
Also, you as a developer are required to know the whole product; it is difficult to work in a silo without worrying about the whole architecture, which makes it hard for new people to join and make changes.
Next, when you deploy, especially nowadays, scale is important and you need to adapt to traffic. Not all of your modules will see high traffic 24/7. But with a monolith, even if only one module is being used by hundreds of users, your whole application has to scale for those hundreds of users.
Microservices simply draw on these lessons and define some guidelines:
You should break down your app based on business domains. Every service is responsible for one aspect only. Services talk to each other via a contract (API or events), and as long as the contract stands you can do what you want within your service. A new developer needs to learn just one service to start with.
Scaling becomes easy, because if you have load on one service, only that service has to scale. Modules deployed independently can each scale with the load specific to them.
By keeping each service small you can build fast and make changes rapidly. Having no shared database ensures that each service decides what it wants to save, how it wants to save it, and how it wants to change it.
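As an illustration of "talking via a contract", a service's API in Spring Boot can be as small as this (the names and endpoint are hypothetical, just a sketch):

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    // The contract is the HTTP API: as long as GET /orders/{id} keeps its
    // shape, the internals of this service can change freely.
    @RestController
    public class OrderController {

        @GetMapping("/orders/{id}")
        public Order getOrder(@PathVariable String id) {
            return new Order(id, "NEW");
        }

        record Order(String id, String status) {}
    }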
For your case, just deploy it the way you think is best. But once you start to grow and have some 50-odd services (or a project of that size), you will see the benefits of divide and conquer.
Splitting the frontend from the backend is not the best/canonical example of microservice deployment; that is the equivalent of having layers in a monolith. Think instead about how you'd split your monolith by (sub)domain into modules, each module having both frontend and backend responsibilities; then each module can become a microservice, if needed.
The canonical MS architecture for a web-based app is a Gateway that assembles (in parallel!) HTML responses from different MSs. So an individual MS would respond with HTML, CSS, and JS instead of JSON or some other incomplete form of data. This is the Tell, Don't Ask principle applied to MSs. It gives you a real MS architecture, in which you can very easily replace one MS with another.
If the Gateway cannot assemble the individual responses in parallel because they depend on one another, then the split is wrong and you need to refactor.
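A minimal sketch of such parallel assembly, using java.net.http and CompletableFuture (the fragment URLs are hypothetical):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class HtmlGateway {
        private final HttpClient client = HttpClient.newHttpClient();

        // Fetch one microservice's HTML fragment asynchronously.
        private CompletableFuture<String> fragment(String url) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                         .thenApply(HttpResponse::body);
        }

        // Assemble the page from independent fragments, fetched in parallel.
        public String assemblePage() {
            CompletableFuture<String> header  = fragment("http://header-service/fragment");
            CompletableFuture<String> catalog = fragment("http://catalog-service/fragment");
            CompletableFuture<String> cart    = fragment("http://cart-service/fragment");
            // join() blocks only after all three requests are already in flight
            return header.join() + catalog.join() + cart.join();
        }
    }

Because all three requests are started before any join(), the page latency is roughly that of the slowest fragment rather than the sum of all three.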
The most notable difference between a modular monolith and microservices is that microservices run in separate processes.
If you create your monolith using location transparency, then you can deploy components as microservices without touching the other components' code. For example, if you use CQRS, you could deploy a read model as a microservice just by cutting and pasting its code from the monolith to the microservice.
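A minimal sketch of what location transparency means here, with hypothetical names; callers program against the interface, so swapping the local implementation for a remote one does not touch their code:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The read-model port: callers depend only on this interface,
    // never on where the implementation actually runs.
    interface ProductReadModel {
        String findNameById(String productId);
    }

    // In the monolith: an in-process implementation backed by a local map.
    class LocalProductReadModel implements ProductReadModel {
        private final Map<String, String> store = new ConcurrentHashMap<>();
        public String findNameById(String productId) {
            return store.getOrDefault(productId, "unknown");
        }
    }

    // After extraction: the same interface, now fulfilled by a remote call
    // to the read-model microservice (endpoint deliberately left unimplemented).
    class RemoteProductReadModel implements ProductReadModel {
        public String findNameById(String productId) {
            // e.g. GET http://readmodel-service/products/{id} would go here
            throw new UnsupportedOperationException("remote call goes here");
        }
    }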

How to effectively manage a bunch of jar files and their plumbing?

This is a rather high-level question so apologies if it's off-topic. I'm new to the enterprise Java world.
Suppose I have written some individual Java packages that do things like parse data feeds and store the parsed information in a queue. Another package might read from that queue and ingest those entries into a rules engine package. Tripped alerts get fed into another queue, which is polled by an alerting service (assume it's written in Python) that issues emails.
As it stands I have to manually run each jar file and stick it in the background. While I could probably daemonize some or all of these services for resiliency or write some kind of service manager to do the same, this strikes me as being very amateur. Especially since I'd have to start a dozen services for this single workflow at boot.
I feel like I'm missing something, but I don't know what I don't know. Short of writing one giant, monolithic application, what should I be looking into to help me manage all these discrete components and (conceptually) deliver a holistic application? I'd like to end up with some sort of hypervisor where I can click one button to start/stop all the above services, get some visibility into their status, and make sure the services are running when they should.
Is this where frameworks come into play? I see a number of them but don't know if that's just overkill, especially if I'm not actively developing a solution for that framework.
It seems you architected a system with a lot of components, and then after some time you decided to aggregate some of them because they happen to share the same programming language: Java. So, first a warning: this is not the best way to wire components together.
Also, it seems you don't know Java very well, because you mix up terms like package, jar, and executable, which are distinct concepts.
However, let's assume that the current state of the art is the best possible and is immutable. Your current requirement is building a graphical interface (I guess HTTP/HTML based) to manage all the distinct components of the system written in Java. I suggest you use a single JVM, write your components as EJBs (essentially a start(), a stop(), and a method that queries the component state and returns a custom object), and finally wire everything up with the Spring framework, which has a nice annotation-driven configuration for @Bean's.
Spring Boot also has an actuator module that simplifies exposing such objects. You may also find it useful to register your beans as managed beans (MBeans) and administer them with the Hawtio console (via a Jolokia agent).
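A minimal sketch of the single-JVM idea, assuming a hypothetical start()/stop()/status() contract and plain Spring rather than actual EJBs:

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // The lifecycle contract described above: start, stop, query state.
    interface ManagedComponent {
        void start();
        void stop();
        String status();
    }

    // Hypothetical stand-in for one of the jar components.
    class FeedParser implements ManagedComponent {
        private volatile boolean running;
        public void start()  { running = true;  /* spin up worker threads here */ }
        public void stop()   { running = false; }
        public String status() { return running ? "RUNNING" : "STOPPED"; }
    }

    @Configuration
    class Components {
        // Spring calls start() after creation and stop() on context shutdown.
        @Bean(initMethod = "start", destroyMethod = "stop")
        FeedParser feedParser() {
            return new FeedParser();
        }
    }

    public class Launcher {
        public static void main(String[] args) {
            try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext(Components.class)) {
                System.out.println(ctx.getBean(FeedParser.class).status());
            }
        }
    }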
I am not sure whether you're actually using J2EE (i.e. Java Enterprise Edition). It is possible to write enterprise software in J2SE as well. J2SE doesn't offer much off the shelf for this, but on the other hand there are plenty of micro-frameworks such as Ninja, and full-stack frameworks such as the Play framework, which work quite well, are much easier to program against, and perform much better than J2EE.
If you're not using J2EE, then you can go as simple as:
make one new Java project
add all the jars as dependencies of that project (see the comment on Maven above by NimChimpsky)
start the classes in the jars by simply calling their constructors
This is quite a naive approach, but it can serve you at this point. Of course, if you're aiming for a scalable platform, there is a lot more you need to learn first. For scalability, I suggest the Play framework as a good start. Alternatively you can use Vert.x, which has its own message queue implementation as well as support for high-performance distributed caches.
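A minimal sketch of that naive launcher, with hypothetical stand-ins for the classes inside your existing jars:

    // Hypothetical stand-ins for classes shipped in your jars; each one
    // starts its own worker thread from the constructor.
    class FeedParserService {
        FeedParserService() {
            new Thread(() -> System.out.println("feed parser running"), "feed-parser").start();
        }
    }

    class RulesEngineService {
        RulesEngineService() {
            new Thread(() -> System.out.println("rules engine running"), "rules-engine").start();
        }
    }

    public class AllInOneLauncher {
        public static void main(String[] args) {
            // With all jars on the launcher's classpath, "starting" a
            // component is just constructing it, as the list above describes.
            new FeedParserService();
            new RulesEngineService();
        }
    }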
The standard J2EE approach is doable (and considered the de facto choice in many old-school enterprises) but has fundamental flaws, or "differences", that make for a very steep learning curve and a very much non-scalable application.
It seems like you're writing your application in a microservice architecture.
You need an orchestrator.
If you are running everything on a single machine, a simple orchestrator that you are probably already running is systemd. You write a systemd service description, and systemd maintains your services according to it. You can specify the order in which services are brought up based on the dependencies between them, a restart policy for when a service goes down unexpectedly, logging for stdout/stderr, etc. Note that this is the same systemd that runs the startup sequence of most modern Linux distros.
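For example, a unit file for one of the jar components might look like this (the paths, names, and user are assumptions):

    [Unit]
    Description=Feed parser component (hypothetical example)
    After=network.target

    [Service]
    ExecStart=/usr/bin/java -jar /opt/myapp/feed-parser.jar
    Restart=on-failure
    User=myapp

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now feed-parser.service; systemctl status and journalctl -u feed-parser then give you the start/stop control and visibility the question asks for.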
If you're running multiple machines, you can keep using a single-machine orchestrator like systemd, but usually the requirements on the orchestrator become more complex: with multiple machines you now have to take into account things like moving services between machines, phased roll-outs, etc. For these setups there is software that adapts systemd to multi-machine orchestration, like CoreOS's fleet, and there are also standalone multi-machine orchestrators like Kubernetes. Both use Docker as the application container mechanism.
None of what I've described here is Java-specific, which means you can use the same orchestration for Java as you would for Python or any other language or architecture.
You have to choose. As Raffaele suggested, you can write all your requirements into one app/service. That seems a feasible mission using Java EJBs, or using Spring Integration's AmqpTemplate (you can write to a queue with AmqpTemplate and receive the messages with a dedicated listener; see the example).
Or choose an implementation with a microservices architecture: write one service that pushes to the queue, another that contains the listener, and so on; a task that can be done easily with Spring Boot.
"One button to control them all" - in the case of a monolithic app - it's easy.
If you choose a microservices architecture, it depends on what your needs are. If it's just the "start"/"stop" operations, I guess starting and stopping your Tomcat (or other server) will do. For other metrics there is a variety of solutions; again, it depends on your needs.

Automatically detecting external calls from Java

I work for an enterprise with a code base of many millions of lines of Java. Unfortunately, very poor practices were put in place for tracking when one Java EAR calls another EAR on another system. The problem gets even worse: we run DB2, and all the DB2 schemas run over the same data connection, so there is no standard way to look at a config file or database connection to even tell which databases an application accesses. The problem extends to other systems too, since we have REST data services, MQ systems, JMS, EJB RMI, etc. Trying to do impact analysis is a nightmare.
Is there a tool, maybe a FindBugs plugin, that I can run on an application to generate a report of the systems the application accesses?
If not, if I turn on TRACE logging for java.io and java.nio to log everything, would that capture any network connections that Java attempts to make through the app server?
My ultimate goal, if I can't find a static analysis system that can help with these problems, is to write some AOP app that would live between the EAR and WebSphere and log all outbound (and possibly inbound) connections to the EAR's resources.
Is this possible?
Tricky one ;-)
FindBugs can help you identify all the communication-related places in the Java code. But you have to do some work for that:
Identify all the kinds of connections you want to flag (e.g. DB connections, EJB communication, REST client code, ...)
Once you have that, you need to write your own FindBugs plugin that detects those places. That may sound complicated, but depending on how many kinds of places you want to identify, a versed developer can do it in 2-3 days, I would guess. As a starting point, have a look at the source code of the available bug patterns in FindBugs, look for a similar one, and use that as a template; there are also lots of tutorials on the web on how to write your own bug pattern (a sketch of such a detector follows this list).
Configure FindBugs to use only your bug pattern and run it on your code base (otherwise all the other bugs will clutter the result, especially if your code base is this huge).
FindBugs will then generate a report showing you all the "communication" places.
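A sketch of such a detector against the classic FindBugs 3.x API (the bug type "EXTERNAL_SYSTEM_CALL" and the flagged package prefixes are made up for this example, and the detector must also be registered in the plugin's findbugs.xml/messages.xml; constant and method names can shift slightly between FindBugs and SpotBugs versions):

    import edu.umd.cs.findbugs.BugInstance;
    import edu.umd.cs.findbugs.BugReporter;
    import edu.umd.cs.findbugs.BytecodeScanningDetector;

    public class ExternalCallDetector extends BytecodeScanningDetector {

        private final BugReporter bugReporter;

        public ExternalCallDetector(BugReporter bugReporter) {
            this.bugReporter = bugReporter;
        }

        @Override
        public void sawOpcode(int seen) {
            // Only method invocations are interesting here; the opcode
            // constants and NORMAL_PRIORITY are inherited from the base classes.
            if (seen != INVOKEVIRTUAL && seen != INVOKEINTERFACE && seen != INVOKESTATIC) {
                return;
            }
            String owner = getClassConstantOperand(); // e.g. "java/net/Socket"
            // Flag calls into packages that imply external communication.
            if (owner.startsWith("java/net/") || owner.startsWith("java/sql/")
                    || owner.startsWith("javax/jms/") || owner.startsWith("javax/ws/rs/")) {
                bugReporter.reportBug(
                        new BugInstance(this, "EXTERNAL_SYSTEM_CALL", NORMAL_PRIORITY)
                                .addClassAndMethod(this)
                                .addSourceLine(this));
            }
        }
    }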

Java - Network application - Real-Time

What are recommended strategies for building a Java application that will run on the "desktop", not in a browser? Characteristics of the application would be:
1. Multiple application instances would be running on different machines
2. Applications must communicate in real-time (if one user makes changes, the data must be refreshed in the other applications)
Do you want to create a networking application, maybe based on sockets and so on? Regarding your two points: I implemented that scenario some time ago, and I am working on something similar for my job; it is not complex at all. I will answer according to the two issues that concern you.
Multiple application instances running on different machines.
If you are going to install an instance of the application on people's desktops, I'd suggest being very careful with paths: do not hard-code any path, since resource loading will be dynamic.
Check carefully what network architecture your application will be installed into. Maybe it is just a LAN, or maybe it will run on a big network accessed through a VPN, etc. Check what the scenario is.
Once you have made sure your application works fine on different machines without any path or resource-loading conflicts, you can export your jar, generating it with Maven, Ant, etc.
Also, if you want to go further, you can create an installer using any install-wizard creator, plus a launcher (a .exe or .bat file for Windows, a .sh script for Linux distributions). But these are only suggestions for the installation stage.
On the other hand, if you want to run the application as a Java desktop app but launch it from a URL, you can take a look at JNLP.
Applications must communicate in real-time (if one user makes changes, the others will be able to see them).
If you want that, you will certainly need a server to provide and store information. The server can be a physical machine set up in the office or a remote one.
You have two options here:
Use Java networking: create an application that works as a server, providing and saving the information (it should be a concurrent environment, since many people will perform transactions or queries against it). Check how to create a basic server-client application using sockets to understand how this works; then you will have no problem adding the complexity your environment's requirements demand (see the sketch after this list).
Or, more simply, develop a Java REST-based application and make your client application connect to the machine (or machines, if you plan to implement load balancing) and consume those REST services. You can take a look at the Jersey libraries to implement this scenario. Make sure to add security to these web services and keep the server accessible only from the network in which your application instances will run.
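A minimal sketch of the socket option (the port is arbitrary): a server that accepts many clients, one thread each, and acknowledges every line it receives.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class MiniServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5000)) {
                while (true) {
                    Socket client = server.accept();
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    // In a real app this is where you'd broadcast the change
                    // to the other connected instances.
                    out.println("ack: " + line);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }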
Well, that's what I can tell you regarding the scenario you are trying to implement, based on what I've done and what I'm doing now.
If you need any further information, reply in the comments; it will be great to help you.
Regards and happy coding :)
You want to look into using sockets, TCP or UDP, and also figure out whether you want a central authoritative server (if two users change the same thing in different ways, whose data is saved?).
Read this article from Oracle/Java: Java Custom Networking.

Suggestion needed for code sharing onsite/offsite

I am new to a project where developers still share code by sending files by mail.
We are using Eclipse and CVS.
Developers from offsite send their code for review to onsite, where other developers take the files one by one from their mail and replace them in Eclipse. That is OK for 2 or 3 files, but as the number of files keeps increasing, this task really becomes a pain.
We cannot put the source files into CVS, as untested code from offsite can crash our build server.
Here my question begins:
What are better ways to share code?
We don't want to create branches for each change, because in that case we would end up with 10-12 branches every day.
Code should be tested via continuous integration, especially in your situation, where your programmers are scattered literally across the world. Your offshore people should be using unit/integration tests to ensure that they don't break the build. You should institute a process whereby they verify the integrity of the build before they finish for the day.
If they are not, they are not worth the money you are paying them.
I suggest you give the offsite developers the ability to perform the same tests as your build server. There is no reason they should be sending you code that they cannot test (or at least verify runs without crashing).
Is there any reason they cannot access your systems via VPN? That way they could test the code against your build server (or a second one) and merge the code themselves.
