mvn jgitflow: push fails when no JIRA number is available in jgitflow commits

We have placed a hook on Stash that requires a JIRA issue number at the start of every commit message.
But when we use jgitflow, it does not put any JIRA number in its commits, so pushing to Stash later fails.
Question: how can we pass a JIRA number to jgitflow while releasing, to avoid this problem?

The release-start goal provides the scmCommentPrefix property for such a purpose:
The message prefix to use for all SCM changes. Will be appended as is. e.g. getScmMessagePrefix() + the_message;
You could hence invoke it as:
mvn jgitflow:release-start -DscmCommentPrefix=JIRA-123
The release-finish goal provides the same scmCommentPrefix property:
mvn jgitflow:release-finish -DscmCommentPrefix=JIRA-123
It's an optional property in both cases, so there is no need to provide it when not required, but it is very useful in cases like this (commit hooks) indeed.
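If you would rather not type the prefix on every invocation, it can also be set once in the POM's plugin configuration. A minimal sketch, assuming the jgitflow-maven-plugin coordinates (the version shown is illustrative; use whichever release you already depend on):

```xml
<plugin>
  <groupId>external.atlassian.jgitflow</groupId>
  <artifactId>jgitflow-maven-plugin</artifactId>
  <!-- version is illustrative -->
  <version>1.0-m5.1</version>
  <configuration>
    <!-- prepended as-is to every SCM commit message; note the trailing space -->
    <scmCommentPrefix>JIRA-123 </scmCommentPrefix>
  </configuration>
</plugin>
```

Since the prefix is appended as-is, include a trailing space (or separator) yourself so it does not run into the generated message.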


How to solve AttributeNotSupportedException in Hybris

Every time we add a new attribute to items.xml, we have to execute a hybris update; otherwise we get an error like: JaloItemNotFoundException: no attribute Cart.newAttribute
But, sometimes after executing an update, instead of getting JaloItemNotFoundException, we get something like:
de.hybris.platform.servicelayer.exceptions.AttributeNotSupportedException: cannot find attribute newAttribute
In this second case, restarting the server after the update always fixes it.
Is there any other way to fix that besides restarting the server after the update?
I worked for a company years ago that added this restart as a "deploy step" after the update. I am trying to avoid that here.
I tried to execute several updates and clean type cache. But no luck.
Platform Update with "Update Running System" is usually enough. If you have localization, impex, or some other changes, you might need to include the other options or extensions.
If you have a clustered environment, make sure all nodes have been updated / refreshed as well.
Make sure that your build and deploy process is something like:
Build
Deploy
Restart the server. Stop/start it manually (or by script), or let Hybris restart itself when it detects changes from the deployment.
Run Platform Update
You can also try to update the platform directly from the command line after the build (i.e. "ant updatesystem") before starting the server.
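As a sketch of that ordering, assuming a standard hybris directory layout (paths are illustrative):

```
cd hybris/bin/platform
. ./setantenv.sh          # make ant available in this shell
ant clean all             # build
ant updatesystem          # run the system update while the server is down
./hybrisserver.sh start   # only then start the server
```

Running updatesystem before startup means the Spring context is built against the already-updated type system, which avoids the stale-bean problem described below.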
The restart after deploy is a pretty common step (in case the system update is performed with the server started).
I believe one of the reasons the restart is needed is that the Spring context must be reinitialized, since some of the beans need the new type system information.
For example, let's say you need to create a new type and an interceptor for that newly created type. When deploying this change you do the following:
Change the binaries and start the server
Perform a system update so that the database gets the latest columns and so on
Now if you check whether the interceptor is working, you will see that it does not, because when its Spring bean was instantiated (during server startup) the type it is supposed to handle was not yet present in the database.
That is why the interceptor works as expected after a restart.
PS: The interceptor problem described above may have been fixed in recent hybris versions.

Azure release pipeline not picking up variable from nestedStack.yml file

I'm configuring our Release pipeline so that Integration Tests are automatically run after pull requests are merged to master and deployed to Dev environment.
I'm currently getting a connection error, specifically java.net.UnknownHostException, and it looks like one of the output variables from my nestedStack.yml code is not being imported/read properly:
my-repo/cloud-formation/nestedStack.yml
You can see there is a property there "ApiGatewayInvokeUrl" which is marked as an Output. It is used on Azure DevOps for the "Integration Testing" task in my "Deploy to Dev" stage. It is written as $(ApiGatewayInvokeUrl) as that's how variables on Azure DevOps are used.
This Deploy to Dev will "succeed", but when I further inspect the Integration Tests, I see none actually ran and there was a connection error immediately. I can see it is outputting the variable as $(ApiGatewayInvokeUrl), so it looks like it just reads it as a String and never substitutes the correct URL value:
I was going off the way another team set up their Integration Tests on a similar pipeline, but I might have missed something. Do I need to define $(ApiGatewayInvokeUrl) somewhere in my codebase, or somewhere on Azure? Or am I missing something? I checked the other team's code and didn't see them define it anywhere else; that's why I am ultra confused.
Update: I went into AWS API Gateway, copied the invoke URL, and hard-coded it into the Azure DevOps Maven (integration testing) goal path, and now it's connecting. So it's 100% just not importing that variable for some reason.
You need to define/create the variable in order to use it (unless it's an automatic variable, and this one is definitely not an automatic variable). That variable isn't getting substituted because it doesn't exist (AFAIK).
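One common way to define it is a script step in the deploy stage that reads the CloudFormation stack output with the AWS CLI and publishes it as a pipeline variable via the ##vso logging command. A sketch, not the other team's exact setup: the stack name, AWS credential configuration, and step placement are assumptions:

```yaml
- script: |
    # read the stack output (stack name is illustrative)
    INVOKE_URL=$(aws cloudformation describe-stacks \
      --stack-name my-nested-stack \
      --query "Stacks[0].Outputs[?OutputKey=='ApiGatewayInvokeUrl'].OutputValue" \
      --output text)
    # make it available to later tasks as $(ApiGatewayInvokeUrl)
    echo "##vso[task.setvariable variable=ApiGatewayInvokeUrl]$INVOKE_URL"
  displayName: Export ApiGatewayInvokeUrl
```

Any task after this step in the same job can then reference $(ApiGatewayInvokeUrl) and get the real URL instead of the literal string.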

Way to disallow Stash merges for failed pull request builds with Stash pullrequest builder plugin

Is there a way to disallow Stash merges for failed pull request builds with the Stash pullrequest builder plugin?
To answer your question: yes, but not automatically or programmatically.
As you know, when a developer creates a pull request, they mention some users as reviewers. You can show the reviewers whether the build failed by adding it as a comment. It is then the reviewer's job to reject the merge; you are not able to reject the merge automatically.
Also, if you are using SonarQube, you can show the SonarQube result in the pull request as well.

MessageGroupStoreReaper fails with "was expecting a single payload" IllegalStateException

I have a simple Spring Integration 2.0.1 aggregator setup, using a SimpleMessageStore along with a regular Spring MessageGroupStoreReaper defined, in order to be able to implement a grouping timeout mechanism (which worked simply as an aggregator 'timeout' parameter in Spring Integration 1.0.4).
I already debugged and verified that the messages are aggregated correctly within SimpleMessageStore based on groupId, but upon timeout the MessageGroupStoreReaper fails with an "Unable to access property 'messages' through getter" AccessException. The precise error is located within the MessagingMethodInvokerHelper class, which asserts that the messages field is not null. It seems that somehow the aggregated messages are not available during execution of the invoker, resulting in the "Invalid method parameter for messages: was expecting a single payload." IllegalStateException.
What could be the cause of this issue and how to resolve it?
I already tried updating to 2.0.6, but the issue remains.
EDIT: I updated my SI dependencies to 2.2.5, but this did not resolve my problem. I use Spring 3.0.7.
The only way to fix it is to upgrade to the latest version of Spring Integration, 2.2.6.
2.0 is now out of support. Sorry.
I resolved the issue. It seems the previous setup that worked in SI 1.0.4 required a modification on the service activator side: the output channel was expecting List<Message<?>>, while the actual aggregated type was Message<List<Message<?>>>. After modifying the activator's method signature to match the latter type, SI was able to match the signature candidate properly. This could possibly also be fixed by modifying the aggregator's add method to operate on a specific Message instead of a List, with no modifications to the activator.
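The signature change can be sketched as follows (method and type names are illustrative, not the actual activator):

```java
// Before: expected the bare aggregated list - no longer matches
// what the aggregator produces
public void handleGroup(List<Message<?>> messages) {
    // ...
}

// After: accept the whole message whose payload is the aggregated list,
// so Spring Integration can resolve the method signature
public void handleGroup(Message<List<Message<?>>> message) {
    List<Message<?>> messages = message.getPayload();
    // process the aggregated messages...
}
```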

How to order executions in one phase?

I have two <execution>s attached to the same phase, deploy. The first execution is tomcat:redeploy; the second is a custom one that makes an HTTP request to the production server to validate that the application really works.
How can I instruct maven to execute them in this particular order?
Check http://jira.codehaus.org/browse/MNG-2258 and try Maven 3: since Maven 3.0.3, executions bound to the same phase are invoked in the order they are declared in the POM.
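A sketch of what that looks like in the POM; the second plugin's coordinates are placeholders for your custom HTTP-check plugin:

```xml
<plugins>
  <!-- declared first, so it runs first within the deploy phase -->
  <plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <executions>
      <execution>
        <phase>deploy</phase>
        <goals><goal>redeploy</goal></goals>
      </execution>
    </executions>
  </plugin>
  <!-- declared second, so it runs second: the custom HTTP validation -->
  <plugin>
    <groupId>com.example</groupId>
    <artifactId>http-check-maven-plugin</artifactId>
    <executions>
      <execution>
        <phase>deploy</phase>
        <goals><goal>check</goal></goals>
      </execution>
    </executions>
  </plugin>
</plugins>
```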
