What are good reasons for using Camel XML routes? [closed] - java

Closed 6 years ago. This question is opinion-based and is not currently accepting answers.
The Camel Java DSL provides type-safety, code completion and proper support for refactoring. It also makes it easy to modularize and unit-test your code.
As for the Camel XML syntax, I only see the advantage of being able to modify and reload routes at runtime (e.g. via hawtio).
Obviously I'm really missing something here - so what is the rationale behind the use of Camel XML routes today?

In-place editing of routes (although I would discourage doing that)
Quick-and-dirty one-off routes (e.g. routing from a test to a QA environment) or very simple projects, when you have a container like Karaf or ServiceMix: no need to fire up your IDE or compile anything, just write the XML and drop it into the deploy folder.
Possibly easier for non-developers
It is a matter of taste and preference.

I have used both, and I have to say the Java DSL is by far the easier and more powerful of the two.
But the best approach is to combine them, especially if you are deploying to an OSGi environment like Karaf.
Use Blueprint to define your beans and RouteBuilder beans and bind them. The actual implementation is done in RouteBuilder classes. In Blueprint you can define properties and do a few other things as well, but the actual behavior of the routes is written in Java, as in the sketch below.
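For illustration, here is a minimal sketch of what such a RouteBuilder class might look like; the endpoint URIs and route id are placeholders invented for the example, not taken from any real project:

import org.apache.camel.builder.RouteBuilder;

// A minimal Java DSL route; Blueprint (or Spring) XML only needs to declare this class
// as a bean and hand it to the CamelContext, while the routing logic stays in Java.
public class OrderRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:orders/in")          // placeholder source endpoint
            .routeId("orders")
            .log("Processing ${file:name}")
            .to("jms:queue:orders");    // placeholder target endpoint
    }
}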

First off, when you say XML do you mean Spring XML DSL or Blueprint XML DSL? While they do share most of their syntax, they are not identical. Blueprint XML DSL is the preferred way of defining Camel routes for an OSGi environment (i.e. Apache Karaf runtimes) while Spring XML DSL is nowadays more or less a legacy of the times when you could only use Spring through XML.
Having said that, I think it really boils down to personal preference: a lot of developers still prefer XML over Java for defining routes and find it easier to read and follow. I myself have always preferred the Java DSL since it's more flexible and powerful, but I have to admit that XML provides a better overview of the routes.

Related

How to go about inserting domain objects in db at start of API [closed]

Closed 2 years ago. This question is opinion-based and is not currently accepting answers.
I'm building a REST API in Java with Jersey and Jetty. I have a complex domain in which I have a class (Workers). I want to be able to dynamically add workers through a POST. However, the business logic requires me to have a few default workers with fixed values. So at the start of the API, I need to add them to my db (right now it's in memory). In terms of clean code, what's the best way to go about that?
I thought about initializing my repository with these default workers, but I feel like that violates the SRP for the WorkerRepo class. I feel like that should be the job of the application layer, as it's specific to this application, not to the domain, if that makes sense. Where should I move the logic for this initialization? Thanks!
From my perspective, I would design this just as I would design any other use case, e.g. with a SetupWorkerInteractor.
I would use an ApplicationRunner that encapsulates the application startup logic: parse the command line args, build the application context, call the initialization process and run the application. Of course I would also separate these aspects into different classes, but I guess you get the picture of what I mean.
In my case I would use the ApplicationLog as the "presenter" of the setup use case's output.
For simplicity I omitted the entity and request/response models.
If I do it this way, it doesn't matter if the SetupWorkerInputBoundary is called from the ApplicationRunner, a RestService or e.g. a messaging system. I can also test the setup just like any other use case.
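To make that concrete, here is a rough sketch of the boundary/interactor/runner split described above. Apart from the names mentioned in the answer (SetupWorkerInputBoundary, SetupWorkerInteractor, ApplicationRunner), every other type and method name is invented for illustration, and each type would normally live in its own file:

import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins so the sketch compiles on its own; the real types come from the domain.
class Worker {
    final String name;
    Worker(String name) { this.name = name; }
}

interface WorkerRepo {
    void add(Worker worker);
}

class InMemoryWorkerRepo implements WorkerRepo {
    private final List<Worker> workers = new ArrayList<>();
    @Override
    public void add(Worker worker) { workers.add(worker); }
}

// Input boundary for the setup use case.
interface SetupWorkerInputBoundary {
    void setUpDefaultWorkers();
}

// Use-case interactor: knows which default workers the business requires.
class SetupWorkerInteractor implements SetupWorkerInputBoundary {
    private final WorkerRepo repo;

    SetupWorkerInteractor(WorkerRepo repo) {
        this.repo = repo;
    }

    @Override
    public void setUpDefaultWorkers() {
        repo.add(new Worker("default-worker-1"));
        repo.add(new Worker("default-worker-2"));
    }
}

// Startup logic lives here, not in the repository or the REST resources.
class ApplicationRunner {
    public static void main(String[] args) {
        WorkerRepo repo = new InMemoryWorkerRepo();
        SetupWorkerInputBoundary setup = new SetupWorkerInteractor(repo);
        setup.setUpDefaultWorkers();      // run the setup use case before serving requests
        // ... build the rest of the application context and start Jersey/Jetty here
    }
}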
After a bit of thinking, I moved the setup into the application layer, where I actually instantiate all my dependencies (in a class named ApplicationContext).
I created an interface WorkerContext and a concrete implementation WorkerContextX, where X = NAMEOFTHEAPP. This context class contains all the default values and uses the injected repo to add them to the repository. So at the startup of the API, in the ApplicationContext class, I call the WorkerContext method that sets up my workerRepo.
This way, I can easily change the setup strategy in the blink of an eye, and it no longer violates the SRP in the repo. I also now respect the DIP, as the repo (which lives in the domain) doesn't rely on things dictated by the application layer.
I posted this as I thought it was a decent solution and could help other people, feel free to critique or improve this solution.
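A compact sketch of that arrangement, reusing the Worker and WorkerRepo stand-ins from the sketch above (the concrete context name is a placeholder for "WorkerContextX"):

interface WorkerContext {
    void setUpWorkers(WorkerRepo repo);
}

// "X = NAMEOFTHEAPP"; holds the application's fixed default workers.
class WorkerContextMyApp implements WorkerContext {
    @Override
    public void setUpWorkers(WorkerRepo repo) {
        repo.add(new Worker("default-worker-1"));
        repo.add(new Worker("default-worker-2"));
    }
}

// Inside the ApplicationContext class, where the dependencies are instantiated:
//   new WorkerContextMyApp().setUpWorkers(workerRepo);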

How to reuse code in microservices architecture [closed]

Closed 4 years ago. This question is opinion-based and is not currently accepting answers.
I have 2 services (service1 and service2), and both services use the same data model "studentModel". I'm wondering how to share the studentModel between the two services.
1. Build a studentModel.jar, and have all the services refer to this jar
2. Copy & paste the code
Please help me understand how to reuse code in a microservices architecture.
I would recommend going even further. From my experience, the best approach would be the following:
to build a separate module with all models for the microservice
to build a separate client library (module) for the microservice
Following this approach, you can release a new client library each time you change your microservice; it will be easy to maintain and manage.
In addition, it will help you to save a lot of time when your system grows. Just imagine, you're going to use your core service (e.g. user service or profile service) as a dependency for all other services. Copy-paste is definitely not an option in this case.
Update. Currently, we have such things as OpenAPI and GraphQL in our toolsets. It's enough to design a good schema for the supplier service and simply use code generation tools for consumers.
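As a rough illustration of that layout: the module and type names below (student-model, student-client, StudentClient) are invented, since the question only mentions studentModel, and each class would live in its own module.

// student-model module: only the data contract, shared by service1 and service2.
public class StudentModel {
    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// student-client module: a thin, versioned client that consumers depend on instead of
// copy-pasting model code or hand-rolling HTTP calls against the student service.
public interface StudentClient {
    StudentModel findById(String id);
    void save(StudentModel student);
}

Each consuming service then depends only on these two artifacts, and a new client version is released whenever the supplier service changes.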
When it comes to microservices, it's OK to keep duplicated code, because sharing libraries can leave you with a distributed monolith. Remember the Bounded Context from DDD and use your own judgment. No shared library means no coupling.
But then again, DRY (Don't Repeat Yourself) says you should not duplicate code - so to what extent should you share?
A failure in one shared library should not cause all the microservices using that library to fail; otherwise the whole purpose of microservices is defeated.
There are tools to share code among microservices; you can have a look at https://bitsrc.io/
These are just my thoughts; there may well be a better way.
For better version control, I would recommend building a jar and adding it as a dependency to your microservices. Alternatively, you can explore git submodules: put the duplicated code in a submodule and use it from each microservice module.

What is good about writing Java code in XML format as in Spring configuration? [closed]

Closed 8 years ago. This question is opinion-based and is not currently accepting answers.
As a .NET developer who started Java development less than a year ago, one thing that puzzles me is the wide usage of Spring configuration files. Let me clarify:
In the case of an IoC container, I haven't seen the community on any platform other than Java being interested in setting up their Catalog/Module/etc. through XML config.
The XML configuration is usually a highly verbose alternative to calling constructors/factory methods. This is clearly a disadvantage over code, as it is not type-safe, is too verbose, and is not indexable by IDEs (e.g. Find Usages of a Method).
Other IoC frameworks such as Autofac support XML configuration, but on those non-Java platforms XML config is unpopular.
My question:
Is there a best practice, design principle, etc. backing this choice of XML configuration for IoC, or is it merely a historical habit?
XML and Properties files do not need to be re-compiled, allowing you to make in-deployment changes on a server environment.
For example, you can have 2 bean implementations and swap which one is injected:
<bean id="impl1" ... />
<bean id="impl2" ... />
<bean id="dependent" ... >
<constructor-arg ref="impl1"/>
</bean>
You can add or remove items in Collection-type beans:
<util:set id="some_set">
<value>value #1</value>
</util:set>
Also, for controlling environments for unit testing: if you have a set of XML files, one of which defines the DB connections, then that's the one XML file you replace for your unit tests, which need an in-memory DB and can't connect to the real McCoy at dev time:
src/main/resources/
- META-INF/spring/
service-context.xml
dao-context.xml
datasource-context.xml
src/test/resources/
- META-INF/spring/
datasource-context.xml // this is the test version of that context
When we are talking about Spring, you are not limited to XML configuration. Since version 3 of the Spring framework, you can configure everything using Java annotations. However, you are then more constrained: if you want to change something, you have to recompile your application.
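As a point of comparison, a minimal annotation-based equivalent of the XML above might look like the sketch below; SomeDependency, FirstImpl, SecondImpl and SomeService are placeholder types defined only to keep it self-contained, and switching the wiring here does require a recompile:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Placeholder types, included only so the sketch compiles on its own.
interface SomeDependency {}
class FirstImpl implements SomeDependency {}
class SecondImpl implements SomeDependency {}
class SomeService {
    SomeService(SomeDependency dep) {}
}

@Configuration
public class AppConfig {

    @Bean
    public SomeDependency impl1() {
        return new FirstImpl();
    }

    @Bean
    public SomeDependency impl2() {
        return new SecondImpl();
    }

    @Bean
    public SomeService dependent() {
        // Swapping to impl2() means editing and recompiling, unlike the XML version.
        return new SomeService(impl1());
    }
}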
You can find useful hints about how to design concise configuration using XML.
http://gordondickens.com/wordpress/2012/07/30/enterprise-spring-framework-best-practices-part-3-xml-config/
I found this summary elsewhere:
1. Configuration is centralized; it's not scattered among all the different components, so you have a nice overview of the beans and their wiring in a single place.
2. If you need to split your files, no problem, Spring lets you do that. It then reassembles them at runtime through internal <import> tags or external context-file aggregation.
3. Only XML configuration allows for explicit wiring - as opposed to autowiring. Sometimes the latter is a bit too magical for my own taste. Its apparent simplicity hides real complexity: not only do we need to switch between by-type and by-name autowiring, but more importantly, the strategy for choosing the relevant bean among all eligible ones escapes all but the most seasoned Spring developers. Profiles seem to make this easier, but they are relatively new and known to few.
4. Last but not least, XML is completely orthogonal to the Java file: there is no coupling between the two, so the class can be used in more than one context with different configurations.
For more details refer to this link
The XML allows you to alter your configuration without recompiling your program, sacrificing type safety for flexibility.
Some Java organizations also see an added advantage in letting non-programmers, such as field technicians or customers, modify the configuration setup.
Another possible advantage is building user-oriented customization tools which allow GUI-based creation or modification of the XML files based on user choices.
Personally I don't like the world of XML based programming. It is too prone to run time errors, and difficult to debug using standard debuggers.

Struts2 Annotated or XML based ? which is more easier to manage and uncomplicated? [closed]

Closed 9 years ago. This question is opinion-based and is not currently accepting answers.
Which is the easier and more organized way to use Struts2: with annotations or with XML files?
If with annotations, then with which kind of annotations? With struts2-convention-plugin you can even completely avoid writing annotations such as @Results or @Action.
What benefits will annotations give over not writing them?
I've always used XML, and I've started recently using Convention.
I would now say that you can still use XML, but it would be better to use Annotations.
The facts in support of this are that, with the Convention plugin:
less code is needed: since the Convention plugin will scan some packages looking for Actions, you don't need to declare any action anymore. The same applies to method names, result declarations, and so on: you specify only the things that differ from the standard behavior, while with XML you have to write "the obvious" each time;
the knowledge is decentralized (or distributed): the configuration now lives where it is meant to be; if you are inside an Action, you don't need to open the struts.xml configuration file and find the action element (among many others) to discover how it is configured; you can simply look at the annotations inside the class to understand immediately how it works. The same applies to validation (inside the class rather than in the actionName-validation.xml file).
This will help you achieve more granularity, cleaner code, smaller files and almost no configuration.
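For illustration, a convention-based action might look like the sketch below; the package, class and JSP names are invented, and only the result location, which differs from the defaults, needs an annotation:

package com.example.actions;

import com.opensymphony.xwork2.ActionSupport;
import org.apache.struts2.convention.annotation.Action;
import org.apache.struts2.convention.annotation.Result;

// With the Convention plugin on the classpath, a class in an "actions" package whose name
// ends in "Action" is mapped automatically; no entry in struts.xml is needed.
public class OrderAction extends ActionSupport {

    @Action(value = "submit-order",
            results = { @Result(name = SUCCESS, location = "/WEB-INF/jsp/order-confirmation.jsp") })
    public String submit() {
        // business logic would go here
        return SUCCESS;
    }
}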
Both configurations, using XML files and using annotations, are equivalent. In my opinion, it's just a matter of taste and learning curve.
The major benefit of using annotations is that you can a priori get rid of the struts.xml file (yeah, with all the frameworks relying on XML declarative architecture, removing one of them can lighten the classpath content). But from my limited experience, you can't really do without the struts.xml file for large projects. Also, as you mentioned, using annotations you can see the power of inheriting the intelligent defaults provided by the framework. Therefore, some annotations are not mandatory; Struts 2 will automatically use a default configuration (this is called zero configuration, or convention over configuration).
The major benefit of using the XML file is that you have a centralized way to manage your application's architectural components, even though you can modularize your component declarations into multiple XML files (which are referenced from the struts.xml file). I guess using the XML file eases the learning curve. The XML file also allows you to inherit intelligent defaults.

Java Jersey Framework RESTful Webservices Best Practices [closed]

Closed 4 years ago. This question is opinion-based and is not currently accepting answers.
I am working on a project to provide a RESTful API for hospital-related data transactions. And I am using Jersey as the server-side framework.
However, apart from the accepted notion of dividing the code into resources, models and data access, I can't find information that provides helpful best practices on the subject.
Any useful suggestions?
I'll try to compile some best practices that I learnt into some topics.
JPA and ORM
If you use an ORM, then use JPA. It helps to keep your ORM of choice and the application loosely coupled, i.e. you can easily switch between ORMs.
Dependency Injection
This is an awesome way, again, to keep your application as loosely coupled as possible. Use Guice or Spring. Basically, with this you can inject generic instances into your classes without coupling them to their concrete implementation.
This is useful with DAOs: you can inject a GenericDao (an interface) into your JAX-RS classes, while its true implementation is, for instance, a JpaDao (see the sketch below).
Also, this is awesome for quickly switching to test environments. When testing some logic in your application, you probably don't want to use the database but just a dummy implementation of your GenericDao, for example. I consider using DAOs in itself another important best practice.
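A hedged sketch of that DAO-injection idea: GenericDao, Patient and PatientResource are illustrative names defined here only to keep the example self-contained, and the binding of GenericDao to a JpaDao or an in-memory test double is left to Guice, Spring or HK2.

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal stand-ins so the sketch compiles on its own.
interface GenericDao<T, ID> {
    T findById(ID id);
}

class Patient {
    public Long id;
    public String name;
}

@Path("/patients")
public class PatientResource {

    // The resource depends only on the interface; the DI container decides whether the
    // implementation is a JpaDao or a dummy used in unit tests.
    @Inject
    private GenericDao<Patient, Long> patientDao;

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Patient get(@PathParam("id") Long id) {
        return patientDao.findById(id);
    }
}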
Security
I have some questions about this on my profile, but basically use OAuth or HTTP Basic/Digest + SSL (HTTPS). It's a bit hard to accomplish security the way you want, surprisingly. You can use the security mechanisms your Servlet Container may provide or something internal to your application like Apache Shiro, Spring Security or even manually defining your security filters.
HATEOAS (and other REST constraints)
Most "RESTful" APIs aren't REST. People often misunderstand this: REST implies a set of constraints. When these constraints aren't met, it's simply an HTTP API, which is also OK. In any case, I advise you to link your resource representations so that the client can navigate through your API. This is called HATEOAS, and I have merely scratched the surface of the matter here. Please read more about REST if you want a true REST API with all its benefits.
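As a small example of what linking representations can look like with the standard JAX-RS 2.0 Link API (the URIs, relation names and the JSON placeholder body are invented for illustration):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/patients")
public class PatientLinkResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@PathParam("id") long id, @Context UriInfo uriInfo) {
        // Link headers the client can follow instead of hard-coding URI templates.
        Link self = Link.fromUri(uriInfo.getAbsolutePath()).rel("self").build();
        Link appointments = Link.fromUriBuilder(uriInfo.getAbsolutePathBuilder().path("appointments"))
                                .rel("appointments")
                                .build();

        String body = "{\"id\": " + id + "}";   // placeholder representation
        return Response.ok(body).links(self, appointments).build();
    }
}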
Maven
This is a special best practice, not related to the application itself but to its development. Indeed, Maven increases productivity a lot, especially thanks to its dependency management capabilities. I couldn't live without it.
I don't know if this information was useful to you. I hope it was.
If you need information about any other topic, I'll edit the answer if I know it.
In addition to the above answers, designing resources so that HTTP verbs stay out of your base URLs, and carefully selecting among the @PathParam, @QueryParam, @FormParam and @FormDataParam annotations, are things I strongly emphasize.
For error handling, I return a Response object with HTTP response codes to convey the error to the client calling my API.
return Response.status(Response.Status.BAD_REQUEST)   // or whichever HTTP error status applies
               .entity("Error msg here or my Error Bean object as an argument")
               .build();
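If the same error handling starts repeating across resources, this can also be centralized with a JAX-RS ExceptionMapper; a rough sketch follows, where PatientNotFoundException is a hypothetical domain exception defined only to keep the example self-contained:

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Hypothetical domain exception.
class PatientNotFoundException extends RuntimeException {
    PatientNotFoundException(String message) { super(message); }
}

// Translates the exception into a 404 for every resource method, so they no longer need
// to build error Responses by hand.
@Provider
public class PatientNotFoundMapper implements ExceptionMapper<PatientNotFoundException> {

    @Override
    public Response toResponse(PatientNotFoundException ex) {
        return Response.status(Response.Status.NOT_FOUND)
                       .entity("{\"error\": \"" + ex.getMessage() + "\"}")
                       .type(MediaType.APPLICATION_JSON)
                       .build();
    }
}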
Using documentation tools like Swagger helps a lot in developer testing.
Brian Mulloy's Web API Design eBook and Vinay Sahni's post have been handy resources for me to review and correct my design.
