Aside from adding a custom timer to measure the beginning and end of a controller's action, is there an easier or more helpful way to show how long a page really takes to load (i.e. show at the bottom of the page: this page was generated in 30.5 seconds)? Note that in Grails there's the concept of taglibs, wherein you can add additional logic after all the processing in the controller is done.
I'm actually not yet sure how controllers and taglibs work, or how the whole page is rendered in Grails; perhaps they are processed in parallel? Feel free to enlighten me on this too.
Thanks!
If you just want the time spent in the action and the time to render your GSP (with all its tags), you can use a simple filter to measure that. Take a look at this blog post: Profiling web requests in your Grails application (disclaimer: I'm the author).
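For illustration, here is a minimal sketch of that idea as a plain servlet filter (the blog post uses Grails filters; the class name and log message below are my own invention). Note it only logs the server-side time; printing it at the bottom of the page itself would need something like the taglib approach you mentioned.

import java.io.IOException;
import javax.servlet.*;

public class TimingFilter implements Filter {

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            // Runs the controller action and the GSP rendering.
            chain.doFilter(req, res);
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("Request took " + elapsed + " ms");
        }
    }

    public void destroy() {}
}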
Regards,
Deluan
There are several ways to get the timing.
The easiest way is to configure your server to write access logs that include the page generation time.
You could inject your timing into, for instance, a security filter and output it at the end of your page - but as you already mentioned, this would mean reinventing the wheel.
Have you checked the plugins?
The debug plugin gives you timing info on the console: http://www.grails.org/plugin/debug
perf4j also helps you profile your pages: http://www.grails.org/plugin/perf4j
But if you want to present the timing to your users, I would suggest downloading the debug plugin, unzipping it and checking where the timing is measured. You can easily copy this code and use it to output the timing on the page.
I used Spring Insight with STS. It's absolutely awesome for a Grails application in development. Modifying a Tomcat for use in production makes it a bit tricky, though.
But you can drill down to the duration of each select from Hibernate, and you get timing metrics in real time through the whole application stack.
Not really what you asked for (sorry), but maybe of interest is the JavaMelody plugin for Grails:
The goal of JavaMelody is to monitor Java or Java EE application servers in QA and production environments. It is not a tool to simulate requests from users; it is a tool to measure and calculate statistics on the real operation of an application depending on the usage of the application by users.
I've not tried it myself, but it looks useful.
I'm developing a scheduling system where the user must be able to go online, log in to the website, and feed in schedules as PDFs. These will be received by OptaPlanner, which will schedule the resources and return a grid that the user can interact with dynamically. My question is: how can I integrate my website with Planner? I will appreciate your insight.
There are numerous approaches on how to tackle this. Check out the web examples and their source. One of the approaches for integrating OptaPlanner in a web app could be:
create a Java EE web app (for deploying on an application server, e.g. WildFly, WebSphere, ...) with OptaPlanner as a dependency (see the docs for more info)
create a few Servlets that handle login, uploading the PDF schedules, and storing them
create an EJB to convert the PDFs into an OptaPlanner problem description you implement on the back end (see the integration chapter in the docs and how our examples handle the problem)
create another EJB to handle the actual solving (run the solver, wait for the results, notify someone; see the sketch below)
create a few Servlets to interact with the solution
Do note, this is just a general outline: a lot of things are left out for brevity (security, persistence details, etc.). Also, there are currently efforts to build an OptaPlanner execution server, but it's definitely not production ready yet (as of March 2016).
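As a rough illustration of the solving EJB, here is a minimal sketch (the Schedule class and the config path are hypothetical; Schedule stands for the planning solution class you implement yourself, and the exact Solver API details depend on your OptaPlanner version - check the docs):

import javax.ejb.Stateless;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;

@Stateless
public class SchedulingService {

    // Schedule is your own @PlanningSolution class (hypothetical name).
    public Schedule solve(Schedule problem) {
        // Build a solver from an XML config on the classpath
        // (the resource path is an assumption; adjust it to your project).
        SolverFactory solverFactory = SolverFactory.createFromXmlResource(
                "com/example/scheduling/solverConfig.xml");
        Solver solver = solverFactory.buildSolver();
        // solve() blocks until the termination configured in solverConfig.xml,
        // so in practice you would run this asynchronously (e.g. @Asynchronous)
        // and notify the user when the best solution is available.
        solver.solve(problem);
        return (Schedule) solver.getBestSolution();
    }
}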
I'm converting a Struts1 application to Struts2. As a beginning I only ported a few of the actions to see how they behave in Struts2. One of these actions serves an Ajax request sent by the clients once every second. In the current Struts1 implementation the request takes about 10-15 ms to execute, which I can see with Firebug. The Struts2 version now takes over 250 ms. I added the profiling interceptor to the action and I can see that most of this time is spent in setting up the execution of the action. The time spent in the interceptors is negligible.
Is it expected?
Thanks in advance for any help.
Follow these steps:
Turn off development mode (struts.devMode = false).
Create your own default interceptor stack specific to your project and remove the unnecessary interceptors you are not using.
For further information, refer to the following link: struts.apache.org/2.2.3/docs/performance-tuning.html
You can also find the action execution time using the timer interceptor (named, simply, timer).
I also tried JSTL and OGNL tags to compare JSP page rendering performance; in my case OGNL gave the best performance.
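As a sketch, the first two steps might look like this in struts.xml (the stack below is only an example; keep just the interceptors your project actually needs):

<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
    "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
    <!-- step 1: turn off development mode -->
    <constant name="struts.devMode" value="false" />

    <package name="default" extends="struts-default">
        <interceptors>
            <!-- step 2: a trimmed-down stack with only what is actually used -->
            <interceptor-stack name="leanStack">
                <interceptor-ref name="timer" /> <!-- logs action execution time -->
                <interceptor-ref name="params" />
            </interceptor-stack>
        </interceptors>
        <default-interceptor-ref name="leanStack" />
    </package>
</struts>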
There are different aspects to benchmarking an application for performance. The one mentioned by you seems very alarming, as the difference is roughly 25x.
Not sure what you mean by "setting up the execution of the action", so it's really a bit hard to suggest anything in particular.
We have around 9-10 S2 applications and none of them have any performance issues as of now.
My suggestion is to use some profiling tool and get the information on which specific block is causing the application to slow down; besides, you can always follow the tips suggested in the other answer.
Which version of S2 are you using?
We have an application at work that we'd like to monitor for performance. Actually, what we want to monitor is not our app's performance, but things like response time for external web services we invoke.
Years ago, using ATG Dynamo, you could instrument your code with something like...
Performance.monitorStart("my.operation");
try {
    // code goes here
}
finally {
    Performance.monitorEnd("my.operation");
}
This generated a nice report of the time spent in the various operations, in a friendlier way than hprof. Ideally, the times should be persisted (to a db or otherwise).
I recall seeing somewhere (here? Dzone? TSS?) about a new library that does this, but googling reveals nothing.
Thoughts?
Alex
What you're describing sounds a lot like Perf4J.
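For example, the ATG snippet above translates to something like this (a minimal sketch; the class wrapper is mine, and the logging configuration that produces Perf4J's aggregated reports is omitted):

import org.perf4j.LoggingStopWatch;
import org.perf4j.StopWatch;

public class MyService {

    public void myOperation() {
        // Starts timing; the tag groups all timings of this operation.
        StopWatch stopWatch = new LoggingStopWatch("my.operation");
        try {
            // code goes here
        } finally {
            // Logs the elapsed time under the "my.operation" tag.
            stopWatch.stop();
        }
    }
}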
SpringSource tc Server (which is a Tomcat++) with Insight enabled has been helpful to me.
It will time your entire call stack and give you nice reports. Here's a screencast: http://www.youtube.com/watch?v=nBqSh7nVNzc
Since you are already showing an example where code changes are involved, you could simply roll your own using the logging facility you probably already have.
Another option would be JMX beans for live statistics - this option is often used together with a 'professional' monitoring facility which aggregates these statistics.
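As a bare-bones illustration of the JMX route, something like this exposes a timing statistic that JConsole, VisualVM or a monitoring facility can read live (all names here are made up; a real setup would aggregate more than a single value):

import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

// In ResponseTimeStatsMBean.java. Standard MBean convention: the
// interface name is the implementation class name plus "MBean".
public interface ResponseTimeStatsMBean {
    long getLastResponseTimeMillis();
}

// In ResponseTimeStats.java.
public class ResponseTimeStats implements ResponseTimeStatsMBean {

    private volatile long lastResponseTimeMillis;

    public void record(long millis) {
        this.lastResponseTimeMillis = millis;
    }

    public long getLastResponseTimeMillis() {
        return lastResponseTimeMillis;
    }

    public static void main(String[] args) throws Exception {
        ResponseTimeStats stats = new ResponseTimeStats();
        // Register the bean so it is visible over JMX while the app runs.
        ManagementFactory.getPlatformMBeanServer().registerMBean(
                stats, new ObjectName("com.example:type=ResponseTimeStats"));
        stats.record(42); // e.g. set after timing a web service call
    }
}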
Can you separate components of an IceFaces application so they can be tested in isolation instead of using something like Selenium or HttpUnit on the assembled application?
Backing beans can be easily isolated (if written to be testable) but I am interested in testing the template/display parts of the application while using as little of the rest of the application as possible. Can this be done? How?
Is there a way to render an IceFaces object as text using "dummy data" that I can then run through traditional unit tests?
I can think of ways to do all of this, but they involve creating multiple applications (one for each component I wish to test). However, this seems like a sub-optimal way of doing things.
If I understand your question correctly, then it ought to be a simple matter of creating special dummy backing beans for your pages, and then creating a test JSF configuration file mapping those beans to the .jspx files. The dummy beans, of course, won't touch any business logic or back-end services -- they'll just be simple sets of data that will be easy to verify in your tests.
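For instance, such a dummy bean could be as trivial as this (the names are invented; the comment shows the usual JSF managed-bean mapping you'd put in the test config file):

// A dummy backing bean: hardcoded data, no business logic or back-end
// services, so the rendered output is easy to verify in a test.
//
// Mapped in the test faces-config.xml along these lines:
//   <managed-bean>
//     <managed-bean-name>orderBean</managed-bean-name>
//     <managed-bean-class>com.example.test.DummyOrderBean</managed-bean-class>
//     <managed-bean-scope>request</managed-bean-scope>
//   </managed-bean>
public class DummyOrderBean {

    public String getCustomerName() {
        return "Jane Doe";
    }

    public int getItemCount() {
        return 3;
    }
}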
Create an ant script to substitute in your dummy backing beans and the test config file. Run your tests. If you don't want something as heavy as HTTPUnit, and if you're using Spring in your app, look at this blog post for an excellent way to mock up a full web context without a web server. Your tests will probably need to sniff the raw HTML output to verify the results. This is going to be tricky, because IceFaces loves to munge DIV IDs and other relevant parts of the DOM tree that you may want to sniff for. (This alone may be the reason why very few JSF developers try to unit test JSF output.)
Once your tests are verified, swap the regular beans and config file back into the app.
Voila! You've just unit-tested your JSF components.
Mind you, the whole business of swapping out beans and config files is messy. It would be much, much easier if IceFaces used Spring to match backing beans to JSF pages -- then you could simply define the test-beans in an application.xml with the relevant test classes. But such is life.
Good luck, and let me know how it works out for you!
This is not exactly what you are asking for, but JSFUnit (which uses JUnit, Cactus, HtmlUnit, and HttpUnit) seems to be a serious candidate for testing in the JSF land. Did you consider this option? Maybe have a look at the JSFUnit Wiki and its Getting Started Guide.
Please note that the FAQ reports some problems with IceFaces, but it's pretty old (early 2009) and the situation might have changed since then (there are some demo projects like jboss-jsfunit-examples-icefaces or icefaces-demo-address in the JBoss repository, so it may be worth asking about the exact status on either the JSFUnit or IceFaces mailing lists).
EDIT: As mentioned in a comment, the OP is looking for something less "high level". Maybe have a look at the Shale Test Framework:
The Shale Test Framework provides mock object libraries, plus base classes for creating your own JUnit TestCases. Mock objects are provided in package org.apache.shale.test.mock for the following container APIs:
JavaServer Faces
Servlet
Disclaimer: Apache Shale moved into the Attic in May 2009 (i.e. it has reached its end of life), but I don't know of any other "mature" mock framework for JSF, so I'm mentioning it anyway (the code is still there). I'll follow this thread with very high interest for other solutions :)
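For what it's worth, a minimal Shale-based test might look like this (a sketch only, with invented names; it exercises a component against the mock FacesContext rather than a fully rendered page):

import javax.faces.component.html.HtmlOutputText;
import org.apache.shale.test.base.AbstractJsfTestCase;

public class OutputTextTest extends AbstractJsfTestCase {

    public OutputTextTest(String name) {
        super(name);
    }

    public void testDummyValue() {
        // setUp() in AbstractJsfTestCase has already created mock objects
        // such as facesContext, application and request as protected fields.
        HtmlOutputText text = new HtmlOutputText();
        text.setValue("dummy data");
        assertEquals("dummy data", text.getValue());
    }
}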
When using DWR on an intranet, will disadvantages like performance or security issues occur?
Direct Web Remoting is a tool which uses Ajax requests to contact a server from a JS file.
One thing I would watch out for is that your server will most likely get hit by more HTTP requests than with (normal) full-page HTTP delivery.
Let me explain. When your web page is AJAX-enabled, your clients will end up creating more HTTP requests for (say) form filling, page-fragment regeneration etc. I've seen scenarios where developers have gone AJAX-crazy, and made the web page a largely dynamic document. This results in a great user experience (if done well), but every request results in a server hit, leading to scalability and latency issues.
Note - this isn't particular to DWR, but is an AJAX issue. I've used DWR, and it works nicely. Unfortunately, I found that it worked so well, and so easily, that everything becomes a candidate for remoting, and you can end up with huge numbers of small requests.
I worked on a project with DWR - a really nice tool.
I'm not convinced about the pace of development, though. They did post on the development log that they're working on getting 3.0 out the door, but the last stable release - 2.0 - was out in summer 2006. That's a bit worrying from a support perspective - bug fixes especially.
The main problem I've experienced is trying to script a load test on a system where the main bulk of the work is done via DWR calls. The format of the calls is difficult to replicate compared with just replaying a bunch of URLs with changing parameters.
Still, DWR is an excellent framework and makes implementing JavaScript -> Java RPC pretty damn easy.
One shortcoming of current DWR 3.x that any user should take good care about: when an instance of a bean has properties with a NULL value, those properties will still be injected into the JSON, and this redundant data DOES affect performance.
When a property has the value NULL, it usually should not be sent to the front end.
Details of problem: http://dwr.2114559.n2.nabble.com/Creating-Custom-bean-converter-td6178318.html
DWR is a great tool when your site has a lot of ajax calls.
Each page that makes DWR RPC calls needs to include:
a) an interface file corresponding to the calls being made,
and
b) a JS file bundled with DWR that contains the DWR engine code that makes these calls possible, e.g. <script src="/dwr/engine.js" ></script>
One technique that is frequently used when optimizing web applications is to make the browser cache a resource (like a JS file) as much as possible when it has not changed on the server.
engine.js is something that will never change unless you upgrade your DWR to a newer version. But, by default, engine.js is not a static file served by your webserver; it's bundled as part of the DWR tool itself and is served by the DWR controller/servlet. This doesn't aid client-side caching.
So, it is beneficial to save engine.js under the document root of your webserver and let the webserver serve it as a static file.
The biggest difference from other solutions for transferring objects (marshaling) is object references.
For instance, if you use it to transfer a tree:
A
|-B
|-C
in a list {A, B, C}:
B.parent = A
C.parent = A
then A is the same object in JavaScript!
On the bad side, if you have complex structures with circular dependencies and lots of objects (A<-B, B<-C, C<-B, C<-A, ...), it could crash.
Anyway, I use it in a real project, used by many hundreds of companies in production, to transfer thousands of objects to a single HTML page in order to draw a complex graph, and it works nicely with good performance.