I have a web application that is using a framework where I have to implement an interface named Plot:
interface Plot {
Image getImage();
String getTitle();
}
I know the framework calls the getImage() before the getTitle(). In some cases, I need the results from the image generation in order to create the title.
I know if I do something naive like this:
class MyNaivePlot implements Plot {
    private String title;
    public Image getImage() {
        Image image = /* ... generate the image ... */ null;
        title = "..."; // derived from the results of the image generation
        return image;
    }
    public String getTitle() { return title; }
}
Then I could introduce a race condition. It seems I can fix this by using a ThreadLocal but I haven't seen enough examples to know if my solution is correct (and these sorts of things are hard to test with certainty). So here's what I've come up with:
class MyThreadLocalPlot implements Plot {
    private final ThreadLocal<String> title = new ThreadLocal<String>();
    public Image getImage() {
        Image image = /* ... generate the image ... */ null;
        title.set("..."); // derived from the results of the image generation
        return image;
    }
    public String getTitle() {
        return title.get();
    }
}
Is this sufficient? Am I using ThreadLocal correctly? Note that I only need the title to hang around long enough until it is called for by getTitle(). I don't care what its value is after that, nor before getImage() is called.
Also note that I believe the framework "long lives" the MyPlot object, and a new one isn't created for each request / thread, otherwise this would be a non-issue.
Thanks!
To directly answer your question - it sounds ok.
However, I would consider some additional points:
(1) If you have a hook for the beginning/end of a request, you might want to clear the ThreadLocal at the end of each such request (e.g. if it's a servlet I'd use a filter; a sketch follows after these points). That's for two reasons: it releases the value for garbage collection, and it covers error cases (so that if the next request runs into some parsing error, it sees an empty title and not the previous user's).
(2) Make sure your framework indeed guarantees a single thread (and the same machine) across those two calls. Perhaps also check whether that will still hold in upcoming versions, and with horizontal scaling/clusters.
(3) As a side note, one might also consider other solutions - e.g. a cache (which would help you as a side effect). Obviously this requires some thought as to cache size, periodic clearing/updating, etc.
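For point (1), here is a minimal sketch of such a cleanup filter, assuming a plain servlet filter; clearCurrentTitle() is a hypothetical helper that would call remove() on the ThreadLocal:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class PlotCleanupFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } finally {
            // Always clear the per-thread title, even on errors, so a pooled thread
            // never carries a previous user's value into the next request.
            MyThreadLocalPlot.clearCurrentTitle(); // hypothetical helper calling title.remove()
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}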
Your code is quite right; you don't have a setter method, but I guess there is a typo and instead of getImage() you meant to write setTitle().
ThreadLocal also has a remove() method that you should invoke when you don't need the title attribute anymore. You can find some usage examples here and here.
Before deploying a ThreadLocal-based version of Plot, I suggest you check whether your framework creates one or more instances: simply create a regular class with a counter and increase the counter value in the get method; you can log it to see how the counter value changes across calls. If you use a logging framework such as log4j or logback, I suggest putting the thread name in the log so you can check how/if the counter value changes across threads.
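For example, a throwaway diagnostic might look like this (System.out logging and java.awt.Image are stand-ins; use your logging framework and whatever Image type the framework's Plot interface actually declares):
import java.awt.Image;
import java.util.concurrent.atomic.AtomicInteger;

// Throwaway diagnostic: logs the instance id, call count and thread name so the log
// shows whether the framework reuses one Plot instance across threads/requests.
class CountingPlot implements Plot {
    private static final AtomicInteger INSTANCES = new AtomicInteger();
    private final int instanceId = INSTANCES.incrementAndGet();
    private final AtomicInteger calls = new AtomicInteger();

    public Image getImage() {
        System.out.printf("instance=%d call=%d thread=%s%n",
                instanceId, calls.incrementAndGet(), Thread.currentThread().getName());
        return null; // placeholder; a real implementation would render the image
    }

    public String getTitle() {
        return "instance " + instanceId;
    }
}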
I also suggest testing it with multiple clients concurrently; with a "serial" client you may end up always hitting the same server thread, especially if you are using a dedicated test instance.
Background
An access token is used by my system to access a platform API.
I have a Thread A to update the access token every 2 min.
Then the other threads who handle front-end requests will use the access token to make API calls.
The platform who provides the access token itself has implemented some token overlap mechanism. That is, an old token will still be available for 30 seconds after a new token is generated.
Thoughts
I have an interface like below:
public interface AccessTokenService {
String fetchAccessToken();
void refreshAccessToken();
}
Apparently, the easiest and most error-free way to handle it is to make these two methods synchronized. However, since the token-providing platform has token overlap built in, I don't think synchronized is needed for business correctness. In addition, since my system relies heavily on the API calls, making the methods synchronized would result in a performance drop.
Question
My question is: how, under the hood, does the JVM handle multi-threaded String writes/reads?
Say a string accessToken is being changed from "abc" to "def". Meanwhile, a few threads are trying to read the value of accessToken. What are the possible outputs of the read?
abc (possible)
def (possible)
dbc (?)
messy content in between the change (?)
null (?)
Since strings in Java are immutable, your reading threads can only get either the old token or the new token, but never something mixed together.
Updates to references are always atomic (here too: your reading threads can either read the old reference or the new reference, but not something mixed together).
There is only one point to observe: the field that stores the current token has to be declared as volatile, otherwise the reading threads may cache some value that they have seen.
A dummy implementation of your AccessTokenService could look like this:
public class DummyAccessTokenService implements AccessTokenService {
private volatile String currentToken = null;
@Override
public String fetchAccessToken() {
return currentToken;
}
private int tokenNumber = 0;
@Override
public void refreshAccessToken() {
currentToken = String.format("Token-%d", tokenNumber++);
}
}
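To complete the picture, the 2-minute refresh could be wired up with a ScheduledExecutorService; this is only a sketch, assuming a background refresher fits your setup (the class name is made up):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TokenRefresher {
    public static void main(String[] args) {
        AccessTokenService service = new DummyAccessTokenService();
        service.refreshAccessToken(); // make sure a token exists before the first request

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Refresh every 2 minutes; the provider's 30-second overlap covers readers
        // that picked up the old token just before the swap.
        scheduler.scheduleAtFixedRate(service::refreshAccessToken, 2, 2, TimeUnit.MINUTES);

        // Request-handling threads simply read the volatile field:
        String token = service.fetchAccessToken();
        System.out.println("Current token: " + token);
    }
}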
A lot of times while writing applications, I wish to profile and measure the time taken for all methods in a stacktrace. What I mean is say:
Method A --> Method B --> Method C ...
A method internally calls B and it might call another. I wish to know the time taken to execute inside each method. This way in a web application, I can precisely know the percentage of time being consumed by what part of the code.
To explain further, most of the times in spring application, I write an aspect which collects information for every method call of a class. Which finally gives me summary. But I hate doing this, its repetitive and verbose and need to keep changing regex to accommodate different classes. Instead I would like this:
@Monitor
public void generateReport(int id){
...
}
Adding an annotation to a method would trigger the instrumentation API to collect statistics on the time taken by this method and by any methods it subsequently calls. When this method exits, the collection stops. I think this should be relatively easy to implement.
The question is: are there any reasonable alternatives that let me do this for general Java code? Or any quick way of collecting this information? Or even a Spring plugin for Spring applications?
PS: Exactly like XRebel, which generates beautiful summaries of the time taken by the security, DAO, service, etc. parts of the code. But it costs a bomb. If you can afford it, you should definitely buy it.
You want to write a Java agent. Such an agent allows you to redefine a class when it is loaded. This way, you can implement an aspect without polluting your source code. I have written a library, Byte Buddy, which makes this fairly easy.
For your monitor example, you could use Byte Buddy as follows:
new AgentBuilder.Default()
    .rebase(declaresMethod(isAnnotatedWith(Monitor.class)))
    .transform((builder, type) -> builder
        .method(isAnnotatedWith(Monitor.class))
        .intercept(MethodDelegation.to(MonitorInterceptor.class)));
class MonitorInterceptor {
    @RuntimeType
    public static Object intercept(@Origin String method,
                                   @SuperCall Callable<?> zuper) throws Exception {
        long start = System.currentTimeMillis();
        try {
            return zuper.call();
        } finally {
            System.out.println(method + " took " + (System.currentTimeMillis() - start));
        }
    }
}
The agent built above can then be installed on an instance of the Instrumentation interface, which is provided to any Java agent.
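A minimal sketch of that entry point might look like this; the class name and the buildMonitorAgent() helper are made up for illustration, and the jar would need a Premain-Class manifest entry naming the class:
import java.lang.instrument.Instrumentation;
import net.bytebuddy.agent.builder.AgentBuilder;

public class MonitorAgent {
    public static void premain(String agentArgs, Instrumentation instrumentation) {
        // buildMonitorAgent() stands for the AgentBuilder chain shown above;
        // the exact Transformer lambda signature depends on the Byte Buddy version in use.
        AgentBuilder agentBuilder = buildMonitorAgent();
        agentBuilder.installOn(instrumentation);
    }

    private static AgentBuilder buildMonitorAgent() {
        return new AgentBuilder.Default(); // placeholder; add the .rebase()/.transform() calls from the answer
    }
}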
As an advantage over using Spring, the above agent will work for any Java instance, not only for Spring beans.
I don't know if there's already a library doing this, nor can I give you ready-to-use code. But I can give you a description of how you can implement it on your own.
First of all, I assume it's no problem to include AspectJ in your project. Then create an annotation, e.g. @Monitor, which acts as a marker for the time measurement of whatever you like.
Then create a simple data structure holding the information you want to track.
An example of this could be the following:
import java.util.ArrayList;
import java.util.List;

public class OperationMonitoring {
    boolean active = false;
    List<MethodExecution> methodExecutions = new ArrayList<>();
}

public class MethodExecution {
    MethodExecution invoker;
    List<MethodExecution> invocations = new ArrayList<>();
    long startTime;
    long endTime;
}
Then create an Around advice for all methods. On execution, check whether the called method is annotated with your Monitoring annotation. If yes, start monitoring each method execution in this thread. Simple example code could look like this:
import java.lang.reflect.Method;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;

@Aspect
public class MonitoringAspect {

    private ThreadLocal<OperationMonitoring> operationMonitorings = new ThreadLocal<>();

    @Around("execution(* *.*(..))")
    public Object monitoring(ProceedingJoinPoint pjp) throws Throwable {
        Method method = extractMethod(pjp);
        if (method != null) {
            OperationMonitoring monitoring = null;
            if (method.isAnnotationPresent(Monitoring.class)) {
                monitoring = operationMonitorings.get();
                if (monitoring != null) {
                    if (!monitoring.active) {
                        monitoring.active = true;
                    }
                } else {
                    // Create a new OperationMonitoring object and set it on the ThreadLocal
                }
            }
            if (monitoring == null) {
                // this method is not annotated, but is the tracking already active?
                monitoring = operationMonitorings.get();
            }
            if (monitoring != null && monitoring.active) {
                // record a MethodExecution (start/end time, invoker) around the call
            } else {
                // no monitoring for this call
            }
            // Stop the monitoring by setting monitoring.active = false if this method
            // was annotated with @Monitoring (and it started the monitoring).
        }
        return pjp.proceed(); // the actual invocation of the advised method
    }

    private Method extractMethod(JoinPoint joinPoint) {
        if (joinPoint.getKind().equals(JoinPoint.METHOD_EXECUTION)
                && joinPoint.getSignature() instanceof MethodSignature) {
            return ((MethodSignature) joinPoint.getSignature()).getMethod();
        }
        return null;
    }
}
The code above is just a how-to. I would also restructure the code, but I've written it in a text field, so please be aware of architectural flaws. As mentioned in the comment at the end, this solution does not support multiple annotated methods along the way, but it would be easy to add that.
A limitation of this approach is that it fails when you start additional threads during a tracked path. Adding support for threads started within a monitored thread is not that easy. That's also the reason why IoC frameworks have their own features for handling threads, so that they can still track this.
I hope you understand the general concept of this, if not feel free to ask further questions.
This is the exact reason why I built the open source tool stagemonitor, which uses Byte Buddy to insert profiling code. If you want to monitor a web application you don't have to alter or annotate your code. If you have a standalone application, there is a @MonitorRequests annotation you can use.
You say you want to know the percentage of time taken within each routine on the stack.
I assume you mean inclusive time.
I also assume you mean wall-clock time, on the theory that if one of those lower-level callees happens to do some I/O, locking, etc., you don't want to be blind to that.
So a stack-sampling profiler that samples on wall-clock time will be getting the right kind of information.
The percentage time that A takes is the percentage of samples containing A, same for B, etc.
To get the percentage of A's time used by B, it is the percentage of samples containing A that happen to have B at the next level below.
The information is all in the stack samples, but it may be hard to get the profiler to extract just the information you want.
You also say you want precise percentage.
That means you also need a large number of stack samples.
For example, if you want to shrink the uncertainty of your measurements by a factor of 10, you need 100 times as many samples, because the uncertainty shrinks roughly with the square root of the number of samples.
In my experience finding performance problems, I am willing to tolerate an uncertainty of 10% or more, because my goal is to find big wastage, not to know with precision how bad it is.
So I take samples manually, and look at them manually.
In fact, if you look at the statistics, you only have to see something wasteful on as few as two samples to know it's bad, and the fewer samples you take before seeing it twice, the worse it is.
(Example: If the problem wastes 30% of time, it takes on average 2/30% = 6.67 samples to see it twice. If it wastes 90% of time, it only takes 2.2 samples, on average.)
I have a small question I came up with while reading this thread:
Why are static variables considered evil?
In my application I have a really massive amount of, let's say, configuration variables, e.g. fonts, colors, cached images, ... As most of them never change, I considered making them static. Nevertheless, my application is an applet, and once the client has executed the applet, some of this static information may change, because a given configuration may have been changed. So let's say this kind of data changes rarely, but cannot be considered final.
In addition as the amount of information i handle that way is huge, i mapped them onto own Enums like that:
public enum Fonts {
COLOR_CHOOSER, MAP_META_DATA;
private Font localFont;
public Font getValue() {
return localFont;
}
private void setValue(Font newFont) {
localFont = newFont;
}
}
protected static void initFonts() {
Fonts.COLOR_CHOOSER.setValue(new Font("Arial", Font.PLAIN, 15));
Fonts.MAP_META_DATA.setValue(font_race.deriveFont(Font.BOLD, 11));
}
By using enums like that, I was able to identify the value I am looking for pretty easily, while maintaining all of them in just one place.
Someone may say that, since this is static, I could alternatively place the values within the objects they are used in anyway. Nevertheless, I consider the current approach easier to read.
Besides that, the initFonts() method will be replaced in the future by a mapping method which reads the currently hard-coded values from an external source like JSON or XML. So if I worked purely object-oriented, this would mean forwarding all of the incoming data to the corresponding objects, which I do NOT consider easy to read.
To come up with my question:
How would you map/cache halfway-final parameters (I also considered a HashMap with an enum as the key), e.g. for images, fonts, colors, pixel margins, etc.? Is the way I am using these enums appropriate, or should I consider them evil, since they are static? If so, what would be an appropriate way that remains easy to read and easy to maintain?
I considered my solution a possible way to go, but I am going to rethink the whole design after reading the above-mentioned thread.
Thanks a lot for any advice.
Kind regards.
Enums are alright to represent static final data, which is why I don't think it's proper to have such an initFonts() method, modifying the content of the enum values, even if it's using a private method. What you should have is more like:
public enum Fonts {
COLOR_CHOOSER(new Font("Arial", Font.PLAIN, 15)),
MAP_META_DATA(new Font("Arial", Font.BOLD, 11)); // No reference to font_race, of course
private final Font localFont;
private Fonts(Font font) {
localFont = font;
}
public Font getValue() {
return localFont;
}
}
I don't see why you say that your kind of configuration data is not final: it doesn't seem to change while the application is running.
However, if the runtime initialization needs to load values that may change, as opposed to being determined at compile time, an enum may not be the correct model anymore, if it's unable to initialize its values itself in a simple manner (calling too much stuff from the static initializer of an enum might not be the best thing to do). In that case, I'd replace the direct use of enums by an intermediate service, for example holding an immutable Map<Fonts, Font> (which might be an EnumMap<Fonts, Font> wrapped with Collections.unmodifiableMap()) initialized once and for all using the proper method, and instead of calling Fonts.COLOR_CHOOSER.getValue(), you'd call FontService.getFont(Fonts.COLOR_CHOOSER).
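A rough sketch of such an intermediate service, assuming the Fonts enum is reduced to plain keys and the hard-coded values stand in for whatever your JSON/XML source would provide (FontService and the loading code are illustrative, not prescribed):
import java.awt.Font;
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;

// Illustrative sketch: the enum keeps only the keys, the service owns the values.
public final class FontService {
    private static final Map<Fonts, Font> FONTS;

    static {
        // In practice these values would come from your JSON/XML source;
        // they are hard-coded here only to keep the sketch self-contained.
        EnumMap<Fonts, Font> fonts = new EnumMap<>(Fonts.class);
        fonts.put(Fonts.COLOR_CHOOSER, new Font("Arial", Font.PLAIN, 15));
        fonts.put(Fonts.MAP_META_DATA, new Font("Arial", Font.BOLD, 11));
        FONTS = Collections.unmodifiableMap(fonts);
    }

    private FontService() { }

    public static Font getFont(Fonts key) {
        return FONTS.get(key);
    }
}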
When you are modelling your page objects, how would you deal with a page which has form and about 50 input fields on it? What is the best practice here?
Would you create a page object and write a separate function for each input action, or would you write one function to which the field name and text are passed and which enters the text?
e.g.
public void enterFirstName(String firstName) {
driver.type("firstNameField", firstName);
}
public void enterSecondName(String secondName) {
driver.type("secondNameField", secondName);
}
or
public void fillInForm(String inputFieldName, String text) {
driver.type(inputFieldName, text);
}
I can see in the first model, when writing tests, the tests are more descriptive, but if the page contains too many input fields, creating the page object becomes cumbersome.
This post is also quite interesting in structuring selenium tests in Page Objects
Functional Automated Testing Best Practices with Selenium WebDriver
The idea behind the page object model is that it abstracts the implementation away from the caller. In the first mechanism, you are successfully doing that because the caller doesn't need to know if the html input field name changes from "firstName" to "user_first_name", whereas in your second implementation any changes to the actual page would have to be trickled out to all callers of your page object.
While it may be more work up front to create your page object, if you maintain the encapsulation it'll save work in the long run when the real html page inevitably changes.
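As a hedged illustration of that point (the page, field and locator names are made up): if the html attribute changes, only the locator constant inside the page object changes, and the tests calling enterFirstName() stay untouched.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class RegistrationPage {
    private static final By FIRST_NAME = By.name("firstName"); // the only place to update on page changes
    private final WebDriver driver;

    public RegistrationPage(WebDriver driver) {
        this.driver = driver;
    }

    public void enterFirstName(String firstName) {
        driver.findElement(FIRST_NAME).sendKeys(firstName);
    }
}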
I always like to break things up into groups of related information. For instance, if I have a user class I might break that up into a few smaller classes: LoginCredentials, ProfileInfo, Settings, etc, but I would still usually have a top level User class that contains these sub classes.
One thing I would certainly recommend would be to pass in an object to one FillForm function rather than all of those individual functions. There are some great advantages using this approach. One, you could have some "common" pre-configured objects that you use for many of your test cases. For instance:
public class FormInfo
{
    public string Domain;
    public string Name;
    public string Category;
    // etc...

    public FormInfo(string domain, string name, string category)
    {
        Domain = domain;
        Name = name;
        Category = category;
        // etc...
    }
}
// Somewhere in your initialization code
public static FormInfo Info1 = new FormInfo("myDomain1", "myName1", "myCategory1");
public static FormInfo Info2 = new FormInfo("myDomain2", "myName2", "myCategory2");
You can still update one of your common merchants if you need to do something one-off:
// In your test case:
Info1.Category = "blah";
FormPage.FillForm(Info1);
OR, you can create a brand new merchant object for a specific test case if necessary. You can also do things like field validation either using these objects, or what I normally do is break the page object pattern for specific field validation, so if I am validating the merchant domain field I might do this:
Info1.Domain = null; //This should make the FillForm function skip doing anything with this field.
FormPage.FillForm(Info1);
FormPage.DomainTextBox.Text = "field validation string";
Another important advantage of this approach is that if the page is ever updated to add, remove or modify fields, you would only need to update your FormInfo object and FillForm function, and would not need to modify the specific test cases that call the FillForm function - assuming they are using one of your common FormInfo objects. Another possibility, to get more coverage, would be to set up one of your common FormInfo objects to generate random strings for each of the fields that comply with the min/max lengths and cycle through all of the allowed characters. This lets you get some additional testing out of the same set of tests, although it could also add some noise if you start getting failure results only for specific strings, so be careful.
In addition to your enterWhatever() methods, I usually also create a createWhatever(field1, field2, ...) method, which I can use as a fast path to creating whatever the form builds, for use when the real purpose of the test is something else. Thus, if I need to create a Customer in order to test submitting a Ticket, the test goes to the CreateACustomer page and just invokes createCustomer(firstName, lastName, emailAddress, ...), and then continues on to the more-nuanced task of creating a Ticket using that Customer.
I am answering an old question for the benefit of readers.
Along with other good answers here, I would like to add few suggestions here for those who are new to POM.
Page objects is a well-known design pattern, widely accepted by automation engineers, for creating a separate class file for each page of the application to group all the elements as properties and their behaviors / business functionalities as methods of the class. But it has a few issues when creating a class for a page - especially when the page has many / different sets of elements, or complex elements like a grid, a calendar widget, an HTML table, etc.
The class might end up with too many responsibilities to handle. It should be restructured and broken into smaller classes, i.e. following the Single Responsibility Principle.
Check the image here for the idea.
That is, create reusable page fragments & let the main page object serve the page fragments.
Check here for more info.
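As a rough sketch of the idea (class names, locators and the billingAddress() accessor are made up for illustration):
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

// Hypothetical reusable fragment: the same address block appears on several pages,
// so each page object delegates to it instead of repeating dozens of field methods.
public class AddressFragment {
    private final WebElement root;

    public AddressFragment(WebElement root) {
        this.root = root;
    }

    public void enterStreet(String street) {
        root.findElement(By.name("street")).sendKeys(street);
    }

    public void enterCity(String city) {
        root.findElement(By.name("city")).sendKeys(city);
    }
}
A page object then serves the fragment rather than owning every field itself, e.g. checkoutPage.billingAddress().enterCity("Berlin").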
The way I do this in my forms is to get a list of all of the inputs on the page, then remove any input elements that are not displayed. After that I can enter valid or invalid text into each of the inputs. From there I check the validation summary to make sure I am getting the correct error or not. If not, I log an exception.
This allows me to input text into as many inputs as there are on the page, and it still allows me to log exceptions and send them via e-mail. I also catch textareas and password fields in my list, and I have a separate list for checkbox fields and options since there are different things that I usually want to do with those.
What it boils down to is that all I have to do to test a page is this:
for (int i = 0; i < inputs.Count(); i++)
{
//This captures the error message string created in the input validation method
//nextButton is the IWebElement of the button to click to submit the form for validation
//ErrorMessageID is the ID of the Validation Summary display box (i.e. ErrorMessageID = "FormSummary" <asp:ValidationSummary ID="FormSummary" runat="server" CssClass="errorMessage" />
string InputValidationText = utilities.InputValidation(driver, inputs, i, nextButton, ErrorMessageID);
if(InputValidationText != string.Empty)
{
//LogError
}
}
Please have a look at the readme, https://github.com/yujunliang/seleniumcapsules
I'm trying to write a construct which allows me to run computations in a given time window. Something like:
def expensiveComputation(): Double = //... some intensive math
val result: Option[Double] = timeLimited( 45 ) { expensiveComputation() }
Here timeLimited will run expensiveComputation with a timeout of 45 minutes. If it reaches the timeout it returns None; otherwise it wraps the result in Some.
I am looking for a solution which:
Is pretty cheap in performance and memory;
Will run the time-limited task in the current thread.
Any suggestion ?
EDIT
I understand my original problem has no solution. Say I can create a thread for the calculation (but I prefer not to use a threadpool/executor/dispatcher). What's the fastest, safest and cleanest way to do it?
Runs the given code block or throws an exception on timeout:
@throws(classOf[java.util.concurrent.TimeoutException])
def timedRun[F](timeout: Long)(f: => F): F = {
import java.util.concurrent.{Callable, FutureTask, TimeUnit}
val task = new FutureTask(new Callable[F]() {
def call() = f
})
new Thread(task).start()
task.get(timeout, TimeUnit.MILLISECONDS)
}
Only an idea: I am not so familiar with Akka futures, but perhaps it's possible to pin the future-executing thread to the current thread and use Akka futures with timeouts?
To the best of my knowledge, either you yield (the computation calls out to some scheduler) or you use a thread, which gets manipulated from the "outside".
If you want to run the task in the current thread and if there should be no other threads involved, you would have to check whether the time limit is over inside of expensiveComputation. For example, if expensiveComputation is a loop, you could check for the time after each iteration.
If you are OK with the code of expensiveComputation checking Thread.interrupted() frequently, it's pretty easy. But I suppose you are not.
I don't think there is any solution that will work for arbitrary expensiveComputation code.
The question is what constraints you are prepared to put on expensiveComputation.
You also have the deprecated and quite unsafe Thread.stop(Throwable). If your code does not modify any objects but those it created itself, it might work.
I saw a pattern like this work well for time-limited tasks (Java code):
try {
setTimeout(45*60*1000); // 45 min in ms
while (not done) {
checkTimeout();
// do some stuff
// if the stuff can take long, again:
checkTimeout();
// do some more stuff
}
return Some(result);
}
catch (TimeoutException ex) {
return None;
}
The checkTimeout() function is cheap to call; you add it to the code so that it is called reasonably often, but not too often. All it does is check the current time against the start time recorded by setTimeout() plus the timeout value. If the current time exceeds that, checkTimeout() raises a TimeoutException.
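A minimal Java sketch of what those helpers could look like (the ThreadLocal deadline and the class name are my assumptions, not part of the original pattern):
import java.util.concurrent.TimeoutException;

// Sketch only: setTimeout() records a per-thread deadline, checkTimeout() compares
// the current time against it and raises TimeoutException once it is exceeded.
final class Timeouts {
    private static final ThreadLocal<Long> DEADLINE = new ThreadLocal<>();

    static void setTimeout(long millis) {
        DEADLINE.set(System.currentTimeMillis() + millis);
    }

    static void checkTimeout() throws TimeoutException {
        Long deadline = DEADLINE.get();
        if (deadline != null && System.currentTimeMillis() > deadline) {
            throw new TimeoutException("time limit exceeded");
        }
    }
}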
I hope this logic can be reproduced in Scala, too.
For a generic solution (without having to litter each of your expensiveComputations with checkTimeout() code) perhaps use Javassist.
http://www.csg.is.titech.ac.jp/~chiba/javassist/
You can then insert checkTimeout() calls dynamically.
Here is the intro text on their website:
Javassist (Java Programming Assistant) makes Java bytecode manipulation simple. It is a class library for editing bytecodes in Java; it enables Java programs to define a new class at runtime and to modify a class file when the JVM loads it. Unlike other similar bytecode editors, Javassist provides two levels of API: source level and bytecode level. If the users use the source-level API, they can edit a class file without knowledge of the specifications of the Java bytecode. The whole API is designed with only the vocabulary of the Java language. You can even specify inserted bytecode in the form of source text; Javassist compiles it on the fly. On the other hand, the bytecode-level API allows the users to directly edit a class file as other editors.
Aspect Oriented Programming: Javassist can be a good tool for adding new methods into a class and for inserting before/after/around advice at the both caller and callee sides.
Reflection: One of applications of Javassist is runtime reflection; Javassist enables Java programs to use a metaobject that controls method calls on base-level objects. No specialized compiler or virtual machine are needed.
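As a rough sketch of how that could look with Javassist (the target class name and the checkTimeout() helper are placeholders; ClassPool, CtClass and CtMethod are the actual Javassist classes):
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

// Placeholder example: weave a checkTimeout() call into the start of each method
// of com.example.ExpensiveComputation. All names are illustrative.
public class TimeoutWeaver {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass target = pool.get("com.example.ExpensiveComputation");
        for (CtMethod method : target.getDeclaredMethods()) {
            method.insertBefore("com.example.Timeouts.checkTimeout();");
        }
        target.writeFile(); // or target.toClass() to load the modified class directly
    }
}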
In the currentThread?? Phhhew...
Check after each step in computation
Well, if your "expensive computation" can be broken up into multiple steps or has iterative logic, you could capture the time when you start and then check it periodically between your steps. This is by no means a generic solution, but it will work.
For a more generic solution you might make use of aspects or annotation processing, which automatically litters your code with these checks. If the "check" tells you that your time is up, return None.
I'll sketch a solution in Java below using annotations and an annotation processor...
public abstract class Answer {}

public class Some extends Answer {
    Double answer = null;
    public Some(double answer) { this.answer = answer; }
}

public class None extends Answer {}

// This is the method before annotation processing
@TimeLimit(45)
public Answer calculateQuestionToAnswerOf42() {
    double fairydust = Math.PI * 1.618;
    double moonshadowdrops = Math.pow(222.21, 5);
    double thedevil = 222 * 3;
    return new Some(fairydust + moonshadowdrops + thedevil);
}
// After annotation processing
public Answer calculateQuestionToAnswerOf42() {
    Date start = new Date(); // added via annotation processing
    double fairydust = Math.PI * 1.618;
    if (checkTimeout(start, 45)) return new None(); // added via annotation processing
    double moonshadowdrops = Math.pow(222.21, 5);
    if (checkTimeout(start, 45)) return new None(); // added via annotation processing
    double thedevil = 222 * 3;
    if (checkTimeout(start, 45)) return new None(); // added via annotation processing
    return new Some(fairydust + moonshadowdrops + thedevil);
}
If you're very seriously in need of this, you could create a compiler plugin that inserts check blocks into loops and conditions. These check blocks can then check Thread.currentThread().isInterrupted() and throw an exception to escape.
You could possibly use an annotation, i.e. @interruptible, to mark the methods to enhance.