I'm building an automation tool with a Java automation framework under the hood, and users will not have access to the main pom.xml file. Sometimes users need to add a custom Java function that requires additional dependencies / repositories. Presently I have to change the main POM file to accommodate such requests. Users only have access to the "src/test/java/com/script/custom" folder to write custom scripts / functions. I have explored options like parent/child POMs, plugin management, profiles, etc., but the examples are mainly for multi-project setups. I'm a NodeJS/Angular person, so I'm a beginner at Java.
Project
|
|-- src/test/java/com/script/custom
|   |
|   |-- custom_code.java
|   |-- custom_pom.xml
|
|-- pom.xml
Users should only enter additional dependencies / repos in custom_pom.xml. Parent pom.xml will still hold the main dependencies/repos of the project.
Running code (apart from tests) is against the core concept of Maven as a build tool. There are, however, ways to execute arbitrary code at build time:
Exec Maven Plugin
or, without an additional plugin (and with cleanly separated projects):
+- project
|  +- pom.xml
+- custom
   +- src/main/java/com/script/custom
   |  +- CustomCode.java ... convention for Java class names is CamelCase
   +- src/test/java/com/script/custom
   |  +- CustomCodeTest.java ... instantiates and runs CustomCode
   +- pom.xml ... containing <parent><relativePath>../project
For <parent> see Introduction to the POM #Project Inheritance. See also Maven: Lifecycle vs. Phase vs. Plugin vs. Goal for further basics.
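A minimal sketch of the custom project's pom.xml under these conventions (group ID, artifact IDs and version are assumptions, not taken from your project):

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- inherit the tool's main dependencies/repositories; coordinates are assumed -->
    <parent>
        <groupId>com.example</groupId>
        <artifactId>project</artifactId>
        <version>1.0.0</version>
        <relativePath>../project</relativePath>
    </parent>

    <artifactId>custom</artifactId>

    <!-- users add their extra dependencies (and <repositories>) here only -->
    <dependencies>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.12.0</version>
        </dependency>
    </dependencies>
</project>

Built from the custom directory, Maven resolves the parent via relativePath and merges the inherited and the user-added dependencies, so the main pom.xml stays untouched.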
I scanned my Spring Boot app using Snyk and the scan reported some vulnerabilities. Because of this I need to update snakeyaml, but as far as I know it is a transitive dependency of spring-boot-starter-web.
Here is the dependency tree for my project:
[INFO] +- org.springframework.boot:spring-boot-starter-web:jar:2.7.5:compile
[INFO] | +- org.springframework.boot:spring-boot-starter:jar:2.7.5:compile
[INFO] | | +- org.springframework.boot:spring-boot-starter-logging:jar:2.7.5:compile
[INFO] | | | +- ch.qos.logback:logback-classic:jar:1.2.11:compile
[INFO] | | | | \- ch.qos.logback:logback-core:jar:1.2.11:compile
[INFO] | | | +- org.apache.logging.log4j:log4j-to-slf4j:jar:2.17.2:compile
[INFO] | | | | \- org.apache.logging.log4j:log4j-api:jar:2.17.2:compile
[INFO] | | | \- org.slf4j:jul-to-slf4j:jar:1.7.36:compile
[INFO] | | +- jakarta.annotation:jakarta.annotation-api:jar:1.3.5:compile
[INFO] | | \- org.yaml:snakeyaml:jar:1.30:compile
In this situation, how can I update snakeyaml? Should I add an exclusion under spring-boot-starter-web (see the sketch after the dependency below) and then add the following dependency in pom.xml?
I know the latest version also has a vulnerability, but I just want to know what I should do in this kind of situation (assume the latest version has no vulnerability). Any ideas?
<!-- https://mvnrepository.com/artifact/org.yaml/snakeyaml -->
<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>1.33</version>
</dependency>
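By an exclusion I mean something along these lines (just a sketch of what I have in mind, not sure it is the right way):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <!-- drop the transitive snakeyaml so my own version is used instead -->
        <exclusion>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
        </exclusion>
    </exclusions>
</dependency>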
First of all, don't panic and step back. Although the dependency mentioned has a vulnerability, if you don't actually use it (i.e. no YAML in your application) it doesn't apply to you. Those dependency scans are pretty dumb, as they can only see the dependencies, not whether you actually use them.
That being said, the fix is to upgrade the Spring Boot version of your application. Assuming you are using Spring Boot as a parent, update that version:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.8</version>
</parent>
NOTE: Don't mix modules of different versions of Spring Boot (or any framework/library for that matter) as that will lead to problems. So don't put a version tag in your dependencies (at least not the spring-boot-starter-* ones!).
Spring Boot 2.7.8 still manages Snake YAML 1.30; if you can, you could also upgrade to Spring Boot 3.0.2, which manages the 1.33 version.
To upgrade only Snake YAML you can just override the version property used by Spring Boot to manage the dependencies. The version properties are documented in the Spring Boot Documentation. How to override those versions is documented in the Spring Boot Plugin Documentation.
In short, you would need to override the snakeyaml.version property with the version you wish to use (this is only needed for 2.7.x; if you upgrade to 3.0.2 you don't need it):
<properties>
    <snakeyaml.version>1.33</snakeyaml.version>
</properties>
This will pull in that version. You don't need to exclude anything or use dependency management for this. The drawback: when a newer version of Snake YAML comes out and Spring Boot pulls it in, this property will still override it, so take care when updating later.
This all being said, as mentioned in the beginning the fact that a certain jar/dependency is included doesn't mean it is a risk. If not used it won't pose a risk, so instead of panicking on Snyk warnings you should properly investigate them.
Try to update whatever brings in the dependency (here: spring-boot-starter-web).
If that is not possible: add an entry to <dependencyManagement> with the newer version and put a comment into the pom.xml explaining why you did this (vulnerability XY), as sketched below.
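A minimal sketch of such an entry (the version and the comment are placeholders):

<dependencyManagement>
    <dependencies>
        <!-- Pinned to 1.33 because of vulnerability XY in the 1.30 pulled in by
             spring-boot-starter-web; remove once the starter ships a fixed version -->
        <dependency>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
            <version>1.33</version>
        </dependency>
    </dependencies>
</dependencyManagement>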
I'm trying to organise my maven project.
Let's say my project is called "awesome". "awesome" has several artifacts, each of them built differently (e.g., some may be built with one plugin, others with other plugins). In general these build configurations are finite and limited (let's say there are at most 3 different ways to build an artifact); however, each artifact is built in exactly one way (e.g., the utility artifact is built with maven-jar-plugin configured in a particular way, while the client-ui artifact is built with maven-war-plugin configured in a particular way).
Now, I know I could organize the maven project as follows:
awesome-root
|---jars
| |--- utility
| |--- client-model
| |--- task-model
| |--- supplier-model
| |--- client-logic
| |--- task-logic
| ---- supplier-logic
|---wars
|--- client-ui
|--- task-ui
---- supplier-ui
This way, each particular build configuration can be put inside the build --> plugins section of the jars and wars projects, while general properties / dependency management / plugin management can be put in awesome-root.
Problem:
I quickly realized that the developers generate artifacts that are closely related to each other but have different builds. In the previous example, we can see that the artifacts can also be grouped this other way:
awesome-root
|--- tasks
| |--- task-model
| |--- task-logic
| ---- task-ui
|--- clients
| |--- client-model
| |--- client-logic
| ---- client-ui
|--- supplier
| |--- supplier-model
| |--- supplier-logic
| ---- supplier-ui
|--- others
|--- utility
The main advantage of this grouping is that tasks, clients and suppliers are 3 different, independent software sectors. When a developer needs to make a change in, let's say, the clients sector, she has everything she needs in a small part of the file system (or of the project explorer tab in an IDE, like Eclipse). Conversely, in the first mapping, the clients software sector is scattered all over the project repository.
While this may not be a big deal, if the "awesome" project starts to get really big, with a lot of artifacts and so on, finding all the related parts of the clients sector starts to get annoying (not impossible; IDEs offer searches for this purpose).
I'd say the second structure is much better, developer wise.
However, it seems difficult to implement this strategy in Maven: the main difficulty is where to put the different build configurations for each artifact (e.g., *-ui needs to be built differently from *-model).
One may be tempted to put such configurations in client-ui, client-logic, client-model, but this would mean duplicating the build configuration everywhere (e.g., client-ui, supplier-ui and task-ui have the same build configuration): if a build configuration needs to change, you need to change all the copies;
Another solution might be to declare pluginManagement in awesome-root and then write the plugin definition in each artifact (see the sketch after this list): while this seems better, it still suffers from the same duplication problem as option 1;
Use an archetype to generate POMs with the correct build configuration: same as above;
Profiles: profiles are not inherited and they depend only on system properties, not Maven's;
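To illustrate option 2, here is a rough sketch of what I mean (plugin version and configuration values are just assumptions): awesome-root would hold the shared configuration in pluginManagement, and each *-ui artifact would only reference the plugin.

In awesome-root/pom.xml:

<build>
    <pluginManagement>
        <plugins>
            <!-- shared build configuration for every *-ui (war) artifact -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>3.3.2</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>

In client-ui/pom.xml (and likewise task-ui, supplier-ui):

<packaging>war</packaging>
<build>
    <plugins>
        <!-- picks up version and configuration from awesome-root's pluginManagement -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
        </plugin>
    </plugins>
</build>

The duplication left in each *-ui module is only the plugin reference itself, but it is still duplication, which is exactly my concern.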
My questions are:
Is the second structure impossible to achieve in Maven? Is there a way?
If not, do I need to bite the bullet and set on the first structure?
Is there any alternative? (I'm trying not to pose an XY problem; any alternative is appreciated.)
Additional information:
OS: Ubuntu 18.04.3 (bionic), 64 bit
java version: openjdk 11.0.4 2019-07-16
IDE: Eclipse 4.10.0
m2e plugin: 1.10.0.20181127-2120
Thanks for any kind of reply.
I'm setting up a Git repo with different modules of a project. It has some legacy code, some C/C++ for Arduino, and a JavaFX project with some dependencies and Kotlin files in it.
What I actually need is to build that JavaFX project on pull requests targeting the develop branch.
I already have an empty cloudbuild.yaml in my repository root. What I want is a non-Docker continuous integration setup, so on a pull request I need an artifact build, so the executable can be downloaded by the other project members. GitHub and Google Cloud are already connected; only the config is needed.
What is also specific is that I want to build with jdk8u201 (because of the licensing).
The folder structure is something like that:
+- legacy
+- Arduino_codes
+- JavaFX_project
| +- FILES...
+- cloudbuild.yaml
+- .git
If possible, it would be great if the built version were downloadable, or stored in a specific place in the repository.
Try adding :3.5.0-jdk-8 to gcr.io/cloud-builders/mvn if you use Maven, resulting in gcr.io/cloud-builders/mvn:3.5.0-jdk-8.
It helped me. You can find more info and examples here.
We have a microservice architecture where the entire project looks as follows:
+- Utilities
+- Service A REST
+- Service A Backend
+- Service B REST
+- Service B Backend
...
+- Service X REST
+- Service X Backend
Each of these is an independent Maven project that can be independently developed.
It so happens, of course, that some of the projects may have to use classes from another one (e.g. to be able to give back corresponding Exception classes in error messages).
Thus one of the projects may have the following dependencies:
Service A backend
+- dependency 1
+- dependency 2
...
\- Utilities
\- Service B backend
In a standard deployment, we would use a Maven repository and simply add the latest jars as dependencies. The problem is during development: if we make a change in Service B backend, a dependency of Service A backend, we cannot simply do mvn compile Service-A-backend because Service B backend will not be recompiled. Doing this for every single project during development is extremely error-prone.
Eclipse might be able to work around this by having the project on the build path, but we do not want to bind ourselves to an IDE and would like to ideally be able to solve the issue with Maven itself.
Can you use Maven in the above scenario so that you can list Service B backend above as a source dependency where, if we compile or package Service A backend, its local source dependencies also get recompiled if there have been any changes? If not, can you do it with gradle or ant?
You should consider looking at the Maven modules feature. Here is an example of use in a POM file:
<modules>
    <module>A</module>
    ....
</modules>
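Applied to the scenario in the question, an aggregator POM at the repository root could look roughly like this sketch (coordinates and module/directory names are assumptions):

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>services-aggregator</artifactId>
    <version>1.0-SNAPSHOT</version>
    <!-- aggregator POMs must use pom packaging -->
    <packaging>pom</packaging>

    <modules>
        <module>utilities</module>
        <module>service-a-backend</module>
        <module>service-b-backend</module>
        <!-- ... further REST/backend modules -->
    </modules>
</project>

The reactor orders the modules by their inter-dependencies, so mvn install run from the root rebuilds Service B backend before Service A backend. You can also build a single service together with its local dependencies using mvn -pl service-a-backend -am install.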
Personally, in addition to Maven modules (for compilation), I use git submodules to pull/push the code base across multiple projects.
I have a multi-module Maven project in a Subversion repository with many developers working on it with Eclipse + M2Eclipse. Now if a developer adds a module, others need to do an SVN update from the command line (as Eclipse doesn't see the common root of the Maven project), and import the new module manually as an Eclipse project.
Is there a way to do this automatically?
My project structure looks like this:
Working Copy              Eclipse Workspace
working copy root   -X->
+- parent           --->  +- parent
|  \- pom.xml             |  \- pom.xml
+- child1           --->  +- child1
|  \- pom.xml             |  \- pom.xml
+- child2           --->  +- child2
   \- pom.xml                \- pom.xml
You can have a pom in the root that has parent, child1, child2, etc. as modules. After an SVN update, if a new module was added, you can run
mvn eclipse:clean eclipse:m2eclipse
from the eclipse tools button (right of the debug and run buttons)
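Such a root POM can be as small as this sketch (coordinates are assumptions):

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>workspace-root</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <modules>
        <module>parent</module>
        <module>child1</module>
        <module>child2</module>
    </modules>
</project>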
If you are using TortoiseSVN you can set a post-update client side hook, but each developer will have to set it independently.
Maybe the Buckminster project can help; check its FAQ. Hope it helps.
Is there a way to do this automatically?
To do what? To avoid importing the new module manually as an Eclipse project? AFAIK, this is currently not supported; you'll have to add it manually (it should be possible to do it programmatically though, there is such a request for the Maven Eclipse plugin - MECLIPSE-75 - I couldn't find one for Maven Integration for Eclipse).
That said, does adding a module really happen so often? Your situation might be different but, in my experience, you'll reach a stable point quite fast and adding a module will become something unusual.
Nevertheless, good team communication is the best solution I have found to deal with this. When a developer adds a new module, it is his duty to let the other team members know that he introduced a change and to describe the required steps to take the modification into account. Nobody is omniscient, nobody can read others' minds; active communication is the key to good collaboration.
You can also use the maven-eclipse-plugin:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-eclipse-plugin</artifactId>
    <configuration>
        <wtpversion>2.0</wtpversion>
        <projectNameTemplate>[artifactId]-[version]</projectNameTemplate>
    </configuration>
</plugin>
Refresh the project after importing it from SVN,
or you can also right-click the project and click 'Enable Dependency Management'.