How to order executions in one phase? - java

I have two <execution>s attached to the same deploy phase. The first execution is tomcat:redeploy; the second is a custom one that makes an HTTP request to the production server to validate that the application really works.
How can I instruct Maven to execute them in this particular order?

Check http://jira.codehaus.org/browse/MNG-2258 and try Maven 3: once that issue was fixed (in the 3.0.x line), executions bound to the same phase of the same POM should run in the order in which they are declared.

Related

How to solve AttributeNotSupportedException in Hybris

Every time we add a new attribute to items.xml, we have to execute a Hybris update, otherwise we get an error like: JaloItemNotFoundException: no attribute Cart.newAttribute
But, sometimes after executing an update, instead of getting JaloItemNotFoundException, we get something like:
de.hybris.platform.servicelayer.exceptions.AttributeNotSupportedException: cannot find attribute newAttribute
For this second case, it always works if we restart the server after the update.
Is there any other way to fix that besides restarting the server after the update?
I worked for a company years ago that added this restart as a "deploy step" after the update. I am trying to avoid that here.
I have tried executing several updates and clearing the type cache, but no luck.
Platform Update with "Update Running System" is usually enough. If you have localization, impex, or some other changes, you might need to include the other options or extensions.
If you have a clustered environment, make sure all nodes have been updated / refreshed as well.
Make sure that your build and deploy process is something like:
Build
Deploy
Restart the server. You can stop/start it manually (or by script), or let Hybris restart itself when it detects changes from the deployment.
Run Platform Update
You can try updating the platform directly after the build from the command line (i.e. "ant updatesystem") before starting the server.
The restart after deploy is a pretty common step (in case the update system is performed with the server started).
I believe one of the reasons the restart is needed is that the Spring context has to be reinitialized, since some of the beans need the new type system information.
For example, let's say you need to create a new type and an interceptor for that newly created type. When deploying this change, you do the following:
Change the binaries and start the server
Perform an update system in order for the database to get the latest columns and so on
Now if you check whether the interceptor is working, you will see that it does not, because when its Spring bean was instantiated (during server startup) the type it is supposed to handle was not yet present in the database.
Because of that, the interceptor only works as expected after a restart.
PS: The interceptor problem described above might have been fixed in the latest Hybris versions.
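For illustration, a rough sketch of such an interceptor (assuming a platform version where ValidateInterceptor is generic; NewTypeModel and its attribute are made-up names for the newly added type, and the InterceptorMapping bean registration in the extension's Spring config is omitted):

import de.hybris.platform.servicelayer.interceptor.InterceptorContext;
import de.hybris.platform.servicelayer.interceptor.InterceptorException;
import de.hybris.platform.servicelayer.interceptor.ValidateInterceptor;

// Sketch only: NewTypeModel stands for the generated model of the newly added item type.
// This bean is created at server startup, which is why it behaves correctly only once
// the type it handles actually exists in the type system.
public class NewTypeValidateInterceptor implements ValidateInterceptor<NewTypeModel>
{
    @Override
    public void onValidate(final NewTypeModel model, final InterceptorContext ctx) throws InterceptorException
    {
        if (model.getNewAttribute() == null)
        {
            throw new InterceptorException("newAttribute must not be empty");
        }
    }
}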

How to use Modules in Google App Engine and add a target to them using Task Queue (Java)?

I have a task that exceeds the 10-minute deadline of the Task Queue. Going through the documentation, I found that with modules I could run an instance that processes the long-running task, but preferably even that should be driven by the task queue. I had used backends, but they are deprecated.
My question is: how do I introduce modules into my existing App Engine project, and how do I use them to run long-running tasks?
Following is the piece of code:
Queue queue = QueueFactory.getQueue("myqueue");
TaskOptions task = TaskOptions.Builder.withUrl("/submitworker").method(Method.POST);
queue.add(task);
What changes do I have to make in the above code to add a long-running task using a module? ["submitworker" is the servlet that performs the actual long-running task]
I referred to this link, but I am unable to get past the third step:
3. Add service declaration elements to the appengine-application.xml file.
Also, even if I successfully add a module to my project, how can I target this module using the Task Queue?
I have gone through this question, but it is a Python implementation; mine is in Java.
I am looking for a step-by-step process for using "target" with modules and for using it when adding tasks to the task queue.
Even if I route the long-running task to the module via the task queue, will the execution still be terminated after 10 minutes, or will the task run to completion even after the task queue deadline expires?
Please suggest.
Modules and Services are the same thing, they're similar to the old backends (which still work, but are deprecated).
There are two basic ways of getting modules to work:
Create an EAR and deploy that
Deploy services independently as WAR files (which is probably what you're doing now to the default module)
The second option is probably easier, because it's just a matter of changing your appengine-web.xml. You could have a repo or branch per module, or just a build process that changes the module you're targeting.
Right now your appengine-web.xml probably has something like this:
<application>#appId#</application>
<version>#appVersion#</version>
<module>default</module>
Change it to something like this:
<application>#appId#</application>
<version>#appVersion#</version>
<module>long-running-service</module>
<instance-class>B1</instance-class>
<manual-scaling>
<instances>1</instances>
</manual-scaling>
You configure the queue itself to target a specific module in queue.xml. See here.
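If you would rather route the task from code instead of queue.xml, one possible sketch (an assumption on my part, not part of the answer above: it relies on the com.google.appengine.api.modules API, and "long-running-service" is just the example module name from above) is to resolve the module's hostname and set it as the task's Host header:

import com.google.appengine.api.modules.ModulesService;
import com.google.appengine.api.modules.ModulesServiceFactory;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;

public class LongTaskEnqueuer {

    // Assumed module name: use the <module> value from that module's appengine-web.xml.
    private static final String MODULE = "long-running-service";

    public static void enqueueSubmitWorker() {
        ModulesService modules = ModulesServiceFactory.getModulesService();
        // Resolve the hostname of the target module's default version.
        String hostname = modules.getVersionHostname(MODULE, modules.getDefaultVersion(MODULE));

        // Overriding the Host header routes the task request to that module
        // instead of the module that enqueued it.
        Queue queue = QueueFactory.getQueue("myqueue");
        TaskOptions task = TaskOptions.Builder.withUrl("/submitworker")
                .method(Method.POST)
                .header("Host", hostname);
        queue.add(task);
    }
}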
Disclaimer: this answer is based exclusively on the docs (I'm actually using Python; the same concepts apply, just with different config files).
To make a service/module allow long-running tasks, you have to configure it for basic or manual scaling. From Scaling types and instance classes (the Deadline row in the table):
in the Manual scaling column:
Requests can run indefinitely. A manually-scaled instance can choose
to handle /_ah/start and execute a program or script for many hours
without returning an HTTP response code. Tasks can run up to 24 hours.
in the Basic scaling column:
Same as manual scaling.
The module scaling configs, done via the respective module's appengine-web.xml file, are described in Scaling elements:
<manual-scaling>:
Optional. The element enables manual scaling for a
module and sets the number of instances for a module.
<basic-scaling>:
Optional. The element sets the number of instances
for a module.
As for the actual conversion to modules, complement the guide you pointed to with the Configuration Files (includes an example) and the appengine-web.xml Syntax (see module and service configs).
About appengine-application.xml, from Configuration Files:
The META-INF directory has two configuration files:
appengine-application.xml and application.xml. The
appengine-application.xml file contains general information used by
App Engine tools when your app is deployed...
...
Note that while every appengine-web.xml file must contain the
<application> tag, the name you supply there is ignored. The name of
the application is taken from the <application> tag in the
appengine-application.xml file.
To direct a certain queue to a certain service/module you use the queue.xml file. From Syntax:
<target> (push queues):
Optional. A string naming a module/version, a frontend version, or a
backend, on which to execute all of the tasks enqueued onto this
queue.
The string is prepended to the domain name of your app when
constructing the HTTP request for a task. For example, if your app ID
is my-app and you set the target to my-version.my-service, the
URL hostname will be set to
my-version.my-service.my-app.appspot.com.
If target is unspecified, then tasks are invoked on the same version
of the application where they were enqueued. So, if you enqueued a
task from the default application version without specifying a target
on the queue, the task is invoked in the default application version.
Note that if the default application version changes between the time
that the task is enqueued and the time that it executes, then the task
will run in the new default version.
If you are using modules along with a dispatch file, your task's
HTTP request might be intercepted and re-routed to another module.

mvn jgitflow -- Push fails when no JIRA number available in jgitflow commits

We have placed a hook on Stash that requires a JIRA number at the start of every commit message.
But when we use jgitflow, it does not put any JIRA number in its commits, so pushing to Stash later fails.
Question: How can we pass JIRA number to jgitflow while releasing to avoid this problem?
The release-start goal provides the scmCommentPrefix property for such a purpose:
The message prefix to use for all SCM changes. Will be appended as is. e.g. getScmMessagePrefix() + the_message;
You could hence invoke it as:
mvn jgitflow:release-start -DscmCommentPrefix=JIRA-123
The same property, scmCommentPrefix, is also available for the release-finish goal:
mvn jgitflow:release-finish -DscmCommentPrefix=JIRA-123
It's an optional property in both cases, so there is no need to provide it if not required, but it is indeed very useful in situations like this one (hooks).

Shutting down spring batch jobs gracefully in tomcat

We run several spring batch jobs within tomcat in the same web application that serves up our UI. Lately we have been adding many more jobs and we are noticing that when we patch our app, several jobs may get stuck in a STARTING or STARTED status. Many of those jobs ensure that another job is not running before they start up, so this means after we patch the server, some of our jobs are broken until we manually run SQL to update the statuses of the jobs to ABANDONED or STOPPED.
I have read here that JobScope and StepScope jobs don't play nicely with shutting down.
That article suggests not using JobScope or StepScope, but I can't help thinking this is a solved problem and that people must be doing something when the application exits to prevent it.
Are there some best practices for handling this scenario? What are you doing in your applications?
We are using spring-batch version 3.0.3.RELEASE
I will give you an idea of how to solve this scenario. It is not necessarily a Spring Batch-specific solution.
Every time I need to add jobs to an application, I do the following:
Create a table to control the jobs (queue, priority, status, etc.)
Create a JobController class to manage all jobs
Define every job by a status: R (running), F (finished), Q (queued); you can add more as needed, such as aborted or cancelled (the jobs themselves control these statuses)
The JobController must be loaded only once; you can define it as a Spring bean for this
Add a boolean attribute to the JobController that records whether the jobs have already been checked after instantiation; initialize it to false
Check for jobs with the R status, which means they were running when the server last stopped; update every such job to Q and increase its priority so it gets executed first after the server restarts. This check sits inside the if for that boolean attribute; after the check, set the attribute to true.
That way, the first time you call the JobController after a server crash, any unfinished jobs are reset to a status in which they can be executed again, and the check happens only once because of the boolean attribute.
One thing to be careful about is job priority; if you manage it wrong you may run into a starvation problem.
You can easily adapt this solution to Spring Batch; a rough sketch of the startup check follows.
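A minimal sketch of that startup check, assuming a plain job_control table with status and priority columns (the table and column names are made up for illustration; they are not part of Spring Batch):

import org.springframework.jdbc.core.JdbcTemplate;

public class JobController {

    private final JdbcTemplate jdbcTemplate;
    private boolean startupCheckDone = false;

    public JobController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public synchronized void requeueInterruptedJobs() {
        if (!startupCheckDone) {
            // Jobs still marked R were interrupted by the last shutdown or crash:
            // move them back to the queue and bump their priority so they run first.
            jdbcTemplate.update(
                    "UPDATE job_control SET status = 'Q', priority = priority + 1 WHERE status = 'R'");
            startupCheckDone = true;
        }
    }
}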
Hope it helps.

What causes duplicate requests to occur using Spring, Tomcat and Hibernate

I'm working on a project in Java using the spring framework, hibernate and tomcat.
Background:
I have a form page which takes data, validates it, processes it and ultimately persists it using Hibernate. In processing the data I do some special command (model) manipulation prior to persisting with Hibernate.
Problem:
For some reason my onSubmit method is being called twice. The first time through, things are processed properly; the second time through they are not, and incorrect information is persisted.
I've also noticed that on other pages, which simply pull information from the database and display it on screen, double requests are happening too.
Is there something misconfigured, or am I not using Spring properly? Any help on this would be great!
Additional Information:
The app is still being developed. In testing the app I'm running into this problem. I'm using the app as I would expect it to be used (single clicks, valid data, etc.).
If you are testing in IE, note that some versions of IE sometimes submit two requests. Which browsers are you testing the app in?
There is also the JavaScript issue: if an onclick handler is attached to the submit button, calls submit(), and does not return false to cancel the event bubble, the form gets submitted twice.
It could be as simple as users clicking a link twice, re-submitting a form while the server is still processing the first request, or hitting refresh on a POSTed page.
Are you doing anything on the server side to account for duplicate requests such as these from your users?
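One common server-side mitigation for the re-submit and refresh cases is the POST-redirect-GET pattern, so that a refresh repeats a harmless GET instead of replaying the POST. A rough sketch with an annotation-based Spring MVC controller (the URLs, view names and OrderForm class are made-up illustrations, not taken from the question):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class OrderController {

    @RequestMapping(value = "/order", method = RequestMethod.POST)
    public String submit(@ModelAttribute("order") OrderForm form) {
        // ... validate and persist the command object exactly once here ...
        return "redirect:/order/confirmation";   // PRG: the browser follows with a GET
    }

    @RequestMapping(value = "/order/confirmation", method = RequestMethod.GET)
    public String confirmation() {
        return "orderConfirmation";               // view that only renders the result
    }
}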
This is a very common problem for someone who is starting off and is not yet sure about the application ecosystem.
To deploy a Spring app, we build the WAR file, put it inside Tomcat's 'webapps' folder, run the Tomcat instance from a terminal (I am presuming a Linux system), and set up the environment in that terminal.
The problem arises when we set up the environment for a Spring application where there is more than one WAR file to deploy.
We must then make sure that the environment is exclusive to a specific WAR file.
To achieve this, we can create a separate environment file for every WAR (e.g. war_1.sh, war_2.sh, ..., war_n.sh).
Now we can source the particular environment file for the WAR we are about to deploy. This way we can keep the multiple WARs (applications) and their environments separate.
