Are there any practical uses of anonymous code blocks in Java?
public static void main(String[] args) {
// in
{
// out
}
}
Please note that this is not about named blocks, i.e.
name: {
if ( /* something */ )
break name;
}
.
They restrict variable scope.
public void foo()
{
{
int i = 10;
}
System.out.println(i); // Won't compile.
}
In practice, though, if you find yourself using such a code block that's probably a sign that you want to refactor that block out to a method.
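For instance, a minimal sketch of that refactoring (the helper name here is made up):
public void foo() {
    printTen();
}

// hypothetical helper extracted from the block above
private void printTen() {
    int i = 10;
    System.out.println(i);
}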
@David Seiler's answer is right, but I would contend that code blocks are very useful, should be used frequently, and don't necessarily indicate the need to factor code out into a method. I find them particularly useful for constructing Swing component trees, e.g.:
JPanel mainPanel = new JPanel(new BorderLayout());
{
JLabel centerLabel = new JLabel();
centerLabel.setText("Hello World");
mainPanel.add(centerLabel, BorderLayout.CENTER);
}
{
JPanel southPanel = new JPanel(new FlowLayout(FlowLayout.LEFT, 0,0));
{
JLabel label1 = new JLabel();
label1.setText("Hello");
southPanel.add(label1);
}
{
JLabel label2 = new JLabel();
label2.setText("World");
southPanel.add(label2);
}
mainPanel.add(southPanel, BorderLayout.SOUTH);
}
Not only do the code blocks limit the scope of variables as tightly as possible (which is always good, especially when dealing with mutable state and non-final variables), but they also illustrate the component hierarchy in much the same way as XML/HTML, making the code easier to read, write and maintain.
My issue with factoring out each component instantiation into a method is that:
The method will only be used once, yet is exposed to a wider audience, even if it is a private instance method.
It's harder to read: imagining a deeper, more complex component tree, you'd have to drill down to find the code you're interested in, and then lose the visual context.
In this Swing example, I find that when complexity really does grow beyond manageability it indicates that it's time to factor out a branch of the tree into a new class rather than a bunch of small methods.
It's usually best to make the scope of local variables as small as possible. Anonymous code blocks can help with this.
I find this especially useful with switch statements. Consider the following example, without anonymous code blocks:
public String manipulate(Mode mode) {
switch(mode) {
case FOO:
String result = foo();
tweak(result);
return result;
case BAR:
String result = bar(); // Compiler error
twiddle(result);
return result;
case BAZ:
String rsult = bar(); // Whoops, typo!
twang(result); // No compiler error
return result;
}
}
And with anonymous code blocks:
public String manipulate(Mode mode) {
switch(mode) {
case FOO: {
String result = foo();
tweak(result);
return result;
}
case BAR: {
String result = bar(); // No compiler error
twiddle(result);
return result;
}
case BAZ: {
String rsult = bar(); // Whoops, typo!
twang(result); // Compiler error
return result;
}
}
}
I consider the second version to be cleaner and easier to read. And it reduces the scope of variables declared within the switch to the case in which they were declared, which in my experience is what you want 99% of the time anyway.
Be warned, however, that it does not change the behavior for case fall-through - you'll still need to remember to include a break or return to prevent it!
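A quick sketch of what that warning means, reusing the hypothetical helpers from above: without a break or return at the end of the FOO block, control falls straight through into the BAR block.
switch (mode) {
    case FOO: {
        String result = foo();
        tweak(result);
        // no break or return here, so execution continues into case BAR
    }
    case BAR: {
        String result = bar(); // a separate block, so re-declaring result is fine
        twiddle(result);
        return result;
    }
}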
I think you and/or the other answers are confusing two distinct syntactic constructs; namely Instance Initializers and Blocks. (And by the way, a "named block" is really a Labeled Statement, where the Statement happens to be a Block.)
An Instance Initializer is used at the syntactic level of a class member; e.g.
public class Test {
final int foo;
{
// Some complicated initialization sequence; e.g.
int tmp;
if (...) {
...
tmp = ...
} else {
...
tmp = ...
}
foo = tmp;
}
}
The Initializer construct is most commonly used with anonymous classes, as per @dfa's example. Another use-case is doing complicated initialization of final attributes; e.g. see the example above. (However, it is more common to do this using a regular constructor. The pattern above is more commonly used with static initializers.)
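For comparison, a minimal sketch of the static-initializer variant (the class name and map contents are purely illustrative):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class Defaults {
    static final Map<String, String> SETTINGS;

    // A static initializer runs once, when the class is initialized,
    // and is allowed to assign the static final field.
    static {
        Map<String, String> m = new HashMap<String, String>();
        m.put("host", "localhost");
        m.put("port", "8080");
        SETTINGS = Collections.unmodifiableMap(m);
    }
}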
The other construct is an ordinary block and appears within a code block, such as a method body; e.g.
public void test() {
int i = 1;
{
int j = 2;
...
}
{
int j = 3;
...
}
}
Blocks are most commonly used as part of control statements to group a sequence of statements. But when you use them above, they (just) allow you to restrict the visibility of declarations; e.g. j in the above.
This usually indicates that you need to refactor your code, but it is not always clear cut. For example, you sometimes see this sort of thing in interpreters coded in Java. The statements in the switch arms could be factored into separate methods, but this may result in a significant performance hit for the "inner loop" of an interpreter; e.g.
switch (op) {
case OP1: {
int tmp = ...;
// do something
break;
}
case OP2: {
int tmp = ...;
// do something else
break;
}
...
};
You may use it as a constructor for anonymous inner classes.
Like this:
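A minimal sketch of that usage, assuming the usual anonymous ArrayList subclass (the contents are purely illustrative):
import java.util.ArrayList;
import java.util.List;

public class AnonymousInitExample {
    public static void main(String[] args) {
        // The instance initializer of the anonymous subclass runs while
        // the object is being constructed.
        List<String> names = new ArrayList<String>() {
            {
                add("Alice");
                add("Bob");
            }
        };
        System.out.println(names); // [Alice, Bob]
    }
}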
This way you can initialize your object, since the free block is executed during object construction.
It is not restricted to anonymous inner classes; it applies to regular classes too:
public class SomeClass {
public List data;
{
data = new ArrayList();
data.add(1);
data.add(1);
data.add(1);
}
}
Anonymous blocks are useful for limiting the scope of a variable as well as for double brace initialization.
Compare
Set<String> validCodes = new HashSet<String>();
validCodes.add("XZ13s");
validCodes.add("AB21/X");
validCodes.add("YYLEX");
validCodes.add("AR2D");
with
Set<String> validCodes = new HashSet<String>() {{
add("XZ13s");
add("AB21/X");
add("YYLEX");
add("AR5E");
}};
Instance initializer block:
class Test {
// this line of code is executed whenever a new instance of Test is created
{ System.out.println("Instance created!"); }
public static void main(String[] args) {
new Test(); // prints "Instance created!"
new Test(); // prints "Instance created!"
}
}
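A slightly extended sketch (the second constructor is added here purely for illustration) showing that the same block runs for every constructor, before the constructor body:
class Test {
    { System.out.println("Instance created!"); }

    Test() { }

    Test(String name) { System.out.println("Named: " + name); }

    public static void main(String[] args) {
        new Test();      // prints "Instance created!"
        new Test("abc"); // prints "Instance created!" and then "Named: abc"
    }
}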
Anonymous initializer block:
class Test {
class Main {
public void method() {
System.out.println("Test method");
}
}
public static void main(String[] args) {
new Test().new Main() {
{
method(); // prints "Test method"
}
};
{
//=========================================================================
// which means you can even create a List using double brace
List<String> list = new ArrayList<>() {
{
add("el1");
add("el2");
}
};
System.out.println(list); // prints [el1, el2]
}
{
//==========================================================================
// you can even create your own methods for your anonymous class and use them
List<String> list = new ArrayList<String>() {
private void myCustomMethod(String s1, String s2) {
add(s1);
add(s2);
}
{
myCustomMethod("el3", "el4");
}
};
System.out.println(list); // prints [el3, el4]
}
}
}
Variable scope restriction:
class Test {
public static void main(String[] args) {
{ int i = 20; }
System.out.println(i); // error
}
}
You can use a block to initialize a final variable from the parent scope. This is a nice way to limit the scope of variables that are only used to initialize a single variable.
public void test(final int x) {
final ClassA a;
final ClassB b;
{
final ClassC parmC = getC(x);
a = parmC.getA();
b = parmC.getB();
}
//... a and b are initialized
}
In general it's preferable to move the block into a method, but this syntax can be nice for one-off cases when multiple variables need to be returned and you don't want to create a wrapper class.
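For comparison, a hedged sketch of that method-based alternative, which needs a small holder type (ClassA, ClassB, ClassC and getC are the same placeholder names as above; InitResult is made up):
// hypothetical holder so a single helper method can hand back both values
final class InitResult {
    final ClassA a;
    final ClassB b;

    InitResult(ClassA a, ClassB b) {
        this.a = a;
        this.b = b;
    }
}

// and inside the enclosing class:
private InitResult initFromC(final int x) {
    final ClassC parmC = getC(x);
    return new InitResult(parmC.getA(), parmC.getB());
}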
I use anonymous blocks for all the reasons explained in the other answers, which boil down to limiting the scope of variables. I also use them to properly delimit pairs of methods that belong together.
Consider the following excerpt:
jg.writeStartObject();
{
jg.writeStringField("fieldName", ((JsonFormFieldDependencyData.FieldLocator) valueOrLocator).getFieldName());
jg.writeStringField("kind", "field");
}
jg.writeEndObject();
Not only can you see at a glance that the methods are properly paired, but doesn't it also kind of look like the output, too?
Just be careful to not abuse it and end up in-lining methods ^^
Refer to
Are fields initialized before constructor code is run in Java?
for the order of execution used in this discussion.
Instance init blocks in Java address an initialization problem other languages have been grappling with - what if we need to enforce or perform an instance initialization that must be run, but only after all the static initializers and constructors have completed?
Consider that this issue is more prevalent in C# with WPF components, where the sequence of component initialization is tighter. In a Java backend such issues are usually resolved by some redesign, so this remains an overly simplistic illustration of the issue.
public class SessionInfo {
public String uri;
..blah, blah ..
}
abstract public class Session {
final public SessionInfo sessInf;
final public Connection connection;
public Session(SessionInfo sessInf) {
this.sessInf = sessInf;
this.connection = connect(sessInf.uri);
}
abstract Connection connect(String uri) throws NullPointerException;
}
sessInf.uri is looked up by each implementation in its own constructor:
abstract public class SessionImpl extends Session {
public SessionImpl (SessionInfo sessInf) {
super(sessInf);
sessInf.uri = lookUpUri();
}
..blah, blah ..
}
If you trace the flow, you will find that SessionImpl would hit an NPE, simply because
the SessionImpl constructor runs after the constructor of the parent Session,
therefore sessInf.uri will still be null,
and connect(sessInf.uri) in the parent constructor will hit the NPE.
The solution for such a requirement, to get the initialization cycle right, is:
abstract public class Session {
final public SessionInfo sessInf;
final public Connection connection;
public Session(SessionInfo sessInf) {
this.sessInf = sessInf;
}
abstract Connection connect(String uri) throws NullPointerException;
// This block runs after all the constructors and static members have completed
{
// instance init blocks can initialize final instance objects.
this.connection = connect(sessInf.uri);
}
}
In this way, you will be able to enforce getting a connection in all extension classes, and have it done after all constructors have completed.
Describe a task, either with a comment or inherently due to the structure of your code and the identifiers chosen, and then use code blocks to create a hierarchical relationship there where the language itself doesn't enforce one. For example:
public void sendAdminMessage(String msg) throws IOException {
MessageService service; {
String senderKey = properties.get("admin-message-server");
service = MessageService.of(senderKey);
if (!service.available()) {
throw new MessageServiceException("Not available: " + senderKey);
}
}
/* workaround for issue 1298: Stop sending passwords. */ {
final Pattern p = Pattern.compile("^(.*?)\"pass\":.*(\"stamp\".*)$");
Matcher m = p.matcher(msg);
if (m.matches()) msg = m.group(1) + m.group(2);
}
...
}
The above is just some sample code to explain the concept. The first block is 'documented' by what immediately precedes it: that block serves to initialize the service variable. The second block is documented by a comment. In both cases, the blocks provide 'scope' for the comment/variable declaration: they show where that particular process ends. It's an alternative to this much more common style:
public void sendAdminMessage(String msg) throws IOException {
// START: initialize service
String senderKey = properties.get("admin-message-server");
MessageService service = MessageService.of(senderKey);
if (!service.available()) {
throw new MessageServiceException("Not available: " + senderKey);
}
// END: initialize service
// START: workaround for issue 1298: Stop sending passwords.
final Pattern p = Pattern.compile("^(.*?)\"pass\":.*(\"stamp\".*)$");
Matcher m = p.matcher(msg);
if (m.matches()) msg = m.group(1) + m.group(2);
// END: workaround for issue 1298: Stop sending passwords.
...
}
The blocks are better, though: they let you use your editor tooling to navigate more efficiently ('go to end of block'), they scope the local variables used within the block so that they cannot escape, and most of all, they align with the concept of containment. You are already familiar, as a Java programmer, with containment: for blocks, if blocks, method blocks - they are all expressions of hierarchy in code flow. Containment for documentary reasons instead of technical ones is still containment. Why use a different mechanism? Consistency is useful. Less mental load.
NB: most likely the best design is to isolate the initialisation of the MessageService object into a separate method. However, this does lead to spaghettification: at some point, isolating a simple and easily understood task into a method makes it harder to reason about method structure. By isolating it, you've turned the job of initializing the MessageService into a black box (at least until you look at the helper method), and to read the code fully in the order it flows, you need to hop around all over your source files. That's usually still the better choice (the alternative is very long methods that are hard to test, or to reuse parts of), but there are times when it's not. For example, if your block references a significant number of local variables, a helper method would need all of them passed in. A method is also not control-flow or local-variable transparent (a helper method cannot break out of a loop in the main method, and it cannot see or modify the main method's local variables). Sometimes that's an impediment.
To me, this if-else if chain is too long and hard to read.
Is there some table-driven approach I can use to map an input value to a specific method call for that value?
For example:
for(String id:ids){
Test test = new Test();
if(id == "101") {
test.method1();
}else if(id=="102") {
test.method2();
}else if(id=="103"){
test.method3();
...
...
static Map<String, Method> methods = new HashMap<String, Method>();
...
static {
    try {
        methods.put("101", Test.class.getMethod("method1"));
        methods.put("102", Test.class.getMethod("method2"));
        ...
    } catch (NoSuchMethodException e) {
        // getMethod is a checked call, so the static block has to deal with it
        throw new ExceptionInInitializerError(e);
    }
}
...
for (String id : ids) {
    Test test = new Test();
    Method m = methods.get(id);
    if (m != null) {
        m.invoke(test);
    }
}
Of course, all this really does is trade the pain of the if/else chain for the pain of initializing the hash with all the methods.
You can use reflection to call the method. Construct the method name from the string, and use the Method class from the java.lang.reflect package to invoke the method and pass parameters.
Something like this
Method theMethod = yourClass.getClass().getMethod("methodName");
theMethod.invoke(yourClass);
As it is, your code is not quite correct, as it's using == to check equality of strings.
You should use the equals method:
for(String id: ids){
Test test = new Test();
if(id.equals("101")) {
test.method1();
}else if(id.equals("102")) {
test.method2();
}else if(id.equals("102")){
test.method3();
// etc.
}
}
Other answers have suggested creating a method map Map<String,Method> and using reflection. This will work and may be perfectly reasonable for your use, but it loses some compile-time type safety checks.
You can get those checks back by defining an interface and populating a map with instances of that interface:
interface TestMethod {
void execute(Test test);
}
private static HashMap<String, TestMethod> methodMap = new HashMap<String, TestMethod>();
static {
methodMap.put("101", new TestMethod(){
@Override
public void execute(Test test) {
test.method1();
}
});
methodMap.put("102", new TestMethod(){
@Override
public void execute(Test test) {
test.method2();
}
});
methodMap.put("103", new TestMethod() {
@Override
public void execute(Test test) {
test.method3();
}
});
// etc.
}
and then your loop can be coded as
for (String id: ids) {
Test test = new Test();
methodMap.get(id).execute(test);
}
As it is here, this unfortunately makes the code longer and leaves it at least as difficult to read. But if you're using Java 8, you can use lambda expressions to populate the map, which would make the static initializer block look more like:
static {
    methodMap.put("101", t -> t.method1());
    methodMap.put("102", t -> t.method2());
    methodMap.put("103", t -> t.method3());
    // etc.
}
and then it might actually be worth doing.
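For what it's worth, a hedged sketch of that Java 8 variant using the built-in Consumer interface instead of a hand-rolled TestMethod (the Test stub below only mirrors the class from the question so the sketch compiles):
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class Dispatch {
    // stand-in for the Test class from the question
    static class Test {
        void method1() { System.out.println("method1"); }
        void method2() { System.out.println("method2"); }
        void method3() { System.out.println("method3"); }
    }

    static final Map<String, Consumer<Test>> METHOD_MAP = new HashMap<>();
    static {
        METHOD_MAP.put("101", Test::method1);
        METHOD_MAP.put("102", Test::method2);
        METHOD_MAP.put("103", Test::method3);
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("101", "103");
        for (String id : ids) {
            Consumer<Test> action = METHOD_MAP.get(id);
            if (action != null) {
                action.accept(new Test());
            }
        }
    }
}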
However, if this sort of conditional based on some kind of string "code" is common in your application, and especially if you have multiple conditionals depending on the same codes, you might want a much broader redesign to encapsulate the different actions done for the different codes in a class hierarchy and use polymorphism instead of either if-else (or switch) or a method map lookup. You're still likely to need some conditionals to build the instances of your classes, but the code might be greatly improved.
I am a newbie to development and to unit tests in particular.
I guess my requirement is pretty simple, but I am keen to know others thoughts on this.
Suppose I have two classes like so -
public class First {
Second second ;
public First(){
second = new Second();
}
public String doSecond(){
return second.doSecond();
}
}
class Second {
public String doSecond(){
return "Do Something";
}
}
Let's say I am writing a unit test for the First.doSecond() method. However, suppose I want to mock the Second class, like so. I am using Mockito to do this.
public void testFirst(){
Second sec = mock(Second.class);
when(sec.doSecond()).thenReturn("Stubbed Second");
First first = new First();
assertEquals("Stubbed Second", first.doSecond());
}
I am seeing that the mocking does not take effect and the assertion fails.
Is there no way to mock the member variables of a class that I want to test?
You need to provide a way of accessing the member variables so you can pass in a mock (the most common ways would be a setter method or a constructor which takes a parameter).
If your code doesn't provide a way of doing this, it's incorrectly factored for TDD (Test Driven Development).
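For example, a minimal sketch of constructor injection applied to the classes from the question (using the usual JUnit and Mockito static imports):
public class First {
    private final Second second;

    // The collaborator is passed in, so a test can hand over a mock instead.
    public First(Second second) {
        this.second = second;
    }

    public String doSecond() {
        return second.doSecond();
    }
}
and the test then becomes:
@Test
public void testFirst() {
    Second sec = mock(Second.class);
    when(sec.doSecond()).thenReturn("Stubbed Second");
    First first = new First(sec);
    assertEquals("Stubbed Second", first.doSecond());
}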
This is not possible if you can't change your code. But I like dependency injection and Mockito supports it:
public class First {
@Resource
Second second;
public First() {
second = new Second();
}
public String doSecond() {
return second.doSecond();
}
}
Your test:
@RunWith(MockitoJUnitRunner.class)
public class YourTest {
@Mock
Second second;
@InjectMocks
First first = new First();
@Test
public void testFirst(){
when(second.doSecond()).thenReturn("Stubbed Second");
assertEquals("Stubbed Second", first.doSecond());
}
}
This is very nice and easy.
If you look closely at your code you'll see that the second property in your test is still an instance of Second, not a mock (you don't pass the mock to first in your code).
The simplest way would be to create a setter for second in First class and pass it the mock explicitly.
Like this:
public class First {
Second second ;
public First(){
second = new Second();
}
public String doSecond(){
return second.doSecond();
}
public void setSecond(Second second) {
this.second = second;
}
}
class Second {
public String doSecond(){
return "Do Something";
}
}
....
public void testFirst(){
Second sec = mock(Second.class);
when(sec.doSecond()).thenReturn("Stubbed Second");
First first = new First();
first.setSecond(sec);
assertEquals("Stubbed Second", first.doSecond());
}
Another would be to pass a Second instance as First's constructor parameter.
If you can't modify the code, I think the only option would be to use reflection:
public void testFirst() throws Exception {
Second sec = mock(Second.class);
when(sec.doSecond()).thenReturn("Stubbed Second");
First first = new First();
Field privateField = First.class.getDeclaredField("second");
privateField.setAccessible(true);
privateField.set(first, sec);
assertEquals("Stubbed Second", first.doSecond());
}
But you probably can, as it's rare to have to test code you don't control (although one can imagine a scenario where you have to test an external library because its author didn't :))
You can mock member variables of a Mockito Mock with ReflectionTestUtils
ReflectionTestUtils.setField(yourMock, "memberFieldName", value);
If you can't change the member variable, then another way around this is to use PowerMock and call:
Second second = mock(Second.class)
when(second.doSecond()).thenReturn("Stubbed Second");
whenNew(Second.class).withAnyArguments().thenReturn(second);
Now the problem is that ANY call to new Second will return the same mocked instance. But in your simple case this will work.
I had the same issue where a private value was not set because Mockito does not call super constructors. Here is how I augment mocking with reflection.
First, I created a TestUtils class that contains many helpful utils including these reflection methods. Reflection access is a bit wonky to implement each time. I created these methods to test code on projects that, for one reason or another, had no mocking package and I was not invited to include it.
public class TestUtils {
// get a static class value
public static Object reflectValue(Class<?> classToReflect, String fieldNameValueToFetch) {
try {
Field reflectField = reflectField(classToReflect, fieldNameValueToFetch);
reflectField.setAccessible(true);
Object reflectValue = reflectField.get(classToReflect);
return reflectValue;
} catch (Exception e) {
fail("Failed to reflect "+fieldNameValueToFetch);
}
return null;
}
// get an instance value
public static Object reflectValue(Object objToReflect, String fieldNameValueToFetch) {
try {
Field reflectField = reflectField(objToReflect.getClass(), fieldNameValueToFetch);
Object reflectValue = reflectField.get(objToReflect);
return reflectValue;
} catch (Exception e) {
fail("Failed to reflect "+fieldNameValueToFetch);
}
return null;
}
// find a field in the class tree
public static Field reflectField(Class<?> classToReflect, String fieldNameValueToFetch) {
try {
Field reflectField = null;
Class<?> classForReflect = classToReflect;
do {
try {
reflectField = classForReflect.getDeclaredField(fieldNameValueToFetch);
} catch (NoSuchFieldException e) {
classForReflect = classForReflect.getSuperclass();
}
} while (reflectField == null && classForReflect != null);
reflectField.setAccessible(true);
return reflectField;
} catch (Exception e) {
fail("Failed to reflect "+fieldNameValueToFetch +" from "+ classToReflect);
}
return null;
}
// set a value with no setter
public static void refectSetValue(Object objToReflect, String fieldNameToSet, Object valueToSet) {
try {
Field reflectField = reflectField(objToReflect.getClass(), fieldNameToSet);
reflectField.set(objToReflect, valueToSet);
} catch (Exception e) {
fail("Failed to reflectively set "+ fieldNameToSet +"="+ valueToSet);
}
}
}
Then I can test a class with a private variable like this. This is also useful for mocking deep in class trees that you have no control over.
@Test
public void testWithReflectiveMock() throws Exception {
// mock the base class using Mockito
ClassToMock mock = Mockito.mock(ClassToMock.class);
TestUtils.refectSetValue(mock, "privateVariable", "newValue");
// and this does not prevent normal mocking
Mockito.when(mock.somethingElse()).thenReturn("anotherThing");
// ... then do your asserts
}
I modified my code from my actual project here, on the page, so there could be a compile issue or two. I think you get the general idea. Feel free to grab the code and use it if you find it useful.
If you want an alternative to ReflectionTestUtils from Spring in mockito, use
Whitebox.setInternalState(first, "second", sec);
Lots of others have already advised you to rethink your code to make it more testable - good advice and usually simpler than what I'm about to suggest.
If you can't change the code to make it more testable, PowerMock: https://code.google.com/p/powermock/
PowerMock extends Mockito (so you don't have to learn a new mock framework), providing additional functionality. This includes the ability to have a constructor return a mock. Powerful, but a little complicated - so use it judiciously.
You use a different Mock runner. And you need to prepare the class that is going to invoke the constructor. (Note that this is a common gotcha - prepare the class that calls the constructor, not the constructed class)
@RunWith(PowerMockRunner.class)
@PrepareForTest({First.class})
Then in your test set-up, you can use the whenNew method to have the constructor return a mock
whenNew(Second.class).withAnyArguments().thenReturn(mock(Second.class));
Yes, this can be done, as the following test shows (written with the JMockit mocking API, which I develop):
@Test
public void testFirst(@Mocked final Second sec) {
new NonStrictExpectations() {{ sec.doSecond(); result = "Stubbed Second"; }};
First first = new First();
assertEquals("Stubbed Second", first.doSecond());
}
With Mockito, however, such a test cannot be written. This is due to the way mocking is implemented in Mockito, where a subclass of the class to be mocked is created; only instances of this "mock" subclass can have mocked behavior, so you need to have the tested code use them instead of any other instance.
This is the second time I found myself writing this kind of code, and decided that there must be a more readable way to accomplish this:
My code tries to figure something out that's not exactly well defined, or where there are many ways to accomplish it. I want my code to try several ways to figure it out, until it succeeds or runs out of strategies. But I haven't found a way to make this neat and readable.
My particular case: I need to find a particular type of method from an interface. It can be annotated for explicitness, but it can also be the only suitable method around (per its arguments).
So, my code currently reads like so:
Method candidateMethod = getMethodByAnnotation(clazz);
if (candidateMethod == null) {
candidateMethod = getMethodByBeingOnlyMethod(clazz);
}
if (candidateMethod == null) {
candidateMethod = getMethodByBeingOnlySuitableMethod(clazz);
}
if (candidateMethod == null) {
throw new NoSuitableMethodFoundException(clazz);
}
There must be a better way…
Edit: The methods return a method if found, null otherwise. I could switch that to try/catch logic, but that hardly makes it more readable.
Edit2: Unfortunately, I can accept only one answer :(
To me it is readable and understandable. I'd simply extract the ugly part of the code to a separate method (following some basic principles from Robert C. Martin's "Clean Code") and add some javadoc (and apologies, if necessary) like that:
//...
try {
    Method method = MethodFinder.findMethodIn(clazz);
} catch (NoSuitableMethodException oops) {
    // handle exception
}
and later on in MethodFinder.java
/**
* Will find the most suitable method in the given class or throw an exception if
* no such method exists (...)
*/
public static Method findMethodIn(Class<?> clazz) throws NoSuitableMethodException {
// all your effort to get a method is hidden here,
// protected with unit tests and no need for anyone to read it
// in order to understand the 'main' part of the algorithm.
}
I think for a small set of methods what you're doing is fine.
For a larger set, I might be inclined to build a Chain of Responsibility, which captures the base concept of trying a sequence of things until one works.
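A rough sketch of what that chain could look like here (NoSuitableMethodFoundException and the getMethodBy* helpers are the ones from the question; concrete links would each wrap one of those helpers):
import java.lang.reflect.Method;

// Each link tries one strategy and otherwise defers to the next link in the chain.
abstract class MethodFinderLink {
    private final MethodFinderLink next;

    protected MethodFinderLink(MethodFinderLink next) {
        this.next = next;
    }

    public Method find(Class<?> clazz) throws NoSuitableMethodFoundException {
        Method m = tryFind(clazz);
        if (m != null) {
            return m;
        }
        if (next != null) {
            return next.find(clazz);
        }
        throw new NoSuitableMethodFoundException(clazz);
    }

    protected abstract Method tryFind(Class<?> clazz);
}
The caller builds the chain once (annotation finder, then only-method finder, then only-suitable-method finder) and simply calls find(clazz).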
I don't think that this is such a bad way of doing it. It is a bit verbose, but it clearly conveys what you are doing, and is easy to change.
Still, if you want to make it more concise, you can wrap the methods getMethod* into a class which implements an interface ("IMethodFinder") or similar:
public interface IMethodFinder{
public Method findMethod(...);
}
Then you can create instances of you class, put them into a collection and loop over it:
...
Method candidateMethod = null;
findLoop:
for (IMethodFinder mf: myMethodFinders){
candidateMethod = mf.findMethod(clazz);
if (candidateMethod!=null){
break findLoop;
}
}
if (candidateMethod!=null){
// method found
} else {
// not found :-(
}
While arguably somewhat more complicated, this will be easier to handle if you e.g. need to do more work between calling the findMethods* methods (such as more verification that the method is appropriate), or if the list of ways to find methods is configurable at runtime...
Still, your approach is probably OK as well.
I'm sorry to say, but the method you use seems to be the widely accepted one. I see a lot of code like that in the code base of large libraries like Spring, Maven etc.
However, an alternative would be to introduce a helper interface that can convert from a given input to a given output. Something like this:
public interface Converter<I, O> {
boolean canConvert(I input);
O convert(I input);
}
and a helper method
public static <I, O> O getDataFromConverters(
final I input,
final Converter<I, O>... converters
){
O result = null;
for(final Converter<I, O> converter : converters){
if(converter.canConvert(input)){
result = converter.convert(input);
break;
}
}
return result;
}
So then you could write reusable converters that implement your logic. Each of the converters would have to implement the canConvert(input) method to decide whether its conversion routines will be used.
Actually: what your request reminds me of is the Try.these(a,b,c) method in Prototype (Javascript).
Usage example for your case:
Let's say you have some beans that have validation methods. There are several strategies to find these validation methods. First we'll check whether this annotation is present on the type:
// retention, target etc. stripped
public @interface ValidationMethod {
String value();
}
Then we'll check whether there's a method called "validate". To make things easier I assume that all methods define a single parameter of type Object. You may choose a different pattern. Anyway, here's sample code:
// converter using the annotation
public static final class ValidationMethodAnnotationConverter implements
Converter<Class<?>, Method>{
@Override
public boolean canConvert(final Class<?> input){
return input.isAnnotationPresent(ValidationMethod.class);
}
@Override
public Method convert(final Class<?> input){
final String methodName =
input.getAnnotation(ValidationMethod.class).value();
try{
return input.getDeclaredMethod(methodName, Object.class);
} catch(final Exception e){
throw new IllegalStateException(e);
}
}
}
// converter using the method name convention
public static class MethodNameConventionConverter implements
Converter<Class<?>, Method>{
private static final String METHOD_NAME = "validate";
@Override
public boolean canConvert(final Class<?> input){
return findMethod(input) != null;
}
private Method findMethod(final Class<?> input){
try{
return input.getDeclaredMethod(METHOD_NAME, Object.class);
} catch(final SecurityException e){
throw new IllegalStateException(e);
} catch(final NoSuchMethodException e){
return null;
}
}
@Override
public Method convert(final Class<?> input){
return findMethod(input);
}
}
// find the validation method on a class using the two above converters
public static Method findValidationMethod(final Class<?> beanClass){
return getDataFromConverters(beanClass,
new ValidationMethodAnnotationConverter(),
new MethodNameConventionConverter()
);
}
// example bean class with validation method found by annotation
#ValidationMethod("doValidate")
public class BeanA{
public void doValidate(final Object input){
}
}
// example bean class with validation method found by convention
public class BeanB{
public void validate(final Object input){
}
}
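Hypothetical usage of the converters defined above:
Method validateA = findValidationMethod(BeanA.class); // found via the @ValidationMethod annotation
Method validateB = findValidationMethod(BeanB.class); // found via the "validate" naming convention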
You may use the Decorator design pattern to compose the different ways of finding the method:
public interface FindMethod
{
public Method get(Class clazz);
}
public class FindMethodByAnnotation implements FindMethod
{
private final FindMethod findMethod;
public FindMethodByAnnotation(FindMethod findMethod)
{
this.findMethod = findMethod;
}
private Method findByAnnotation(Class clazz)
{
return getMethodByAnnotation(clazz);
}
public Method get(Class clazz)
{
Method r = null == findMethod ? null : findMethod.get(clazz);
return r == null ? findByAnnotation(clazz) : r;
}
}
public class FindMethodByOnlyMethod implements FindMethod
{
private final FindMethod findMethod;
public FindMethodByOnlyMethod(FindMethod findMethod)
{
this.findMethod = findMethod;
}
private Method findByOnlyMethod(Class clazz)
{
return getMethodOnlyMethod(clazz);
}
public Method get(Class clazz)
{
Method r = null == findMethod ? null : findMethod.get(clazz);
return r == null ? findByOnlyMethod(clazz) : r;
}
}
Usage is quite simple
FindMethod finder = new FindMethodByOnlyMethod(new FindMethodByAnnotation(null));
finder.get(clazz);
... I could switch that to try/catch logic, but that hardly makes it more readable.
Changing the signature of the get... methods so you can use try / catch would be a really bad idea. Exceptions are expensive and should only be used for "exceptional" conditions. And as you say, the code would be less readable.
What is bothering you is the repeating pattern used for flow control--and it should bother you--but there isn't too much to be done about it in Java.
I get really annoyed at repeated code & patterns like this, so for me it would probably be worth it to extract the repeated copy & paste control code and put it in its own method:
public Method findMethod(Class clazz) {
    int i = 0;
    Method candidateMethod = null;
    while (candidateMethod == null) {
        switch (i++) {
            case 0:
                candidateMethod = getMethodByAnnotation(clazz);
                break;
            case 1:
                candidateMethod = getMethodByBeingOnlyMethod(clazz);
                break;
            case 2:
                candidateMethod = getMethodByBeingOnlySuitableMethod(clazz);
                break;
            default:
                throw new NoSuitableMethodFoundException(clazz);
        }
    }
    return candidateMethod;
}
Which has the disadvantage of being unconventional and possibly more verbose, but the advantage of not having as much repeated code (fewer typos) and reading more easily, because there is a little less clutter in the "meat".
Besides, once the logic has been extracted into its own class, verbosity doesn't matter at all; what matters is clarity for reading/editing, and for me this gives that (once you understand what the while loop is doing).
I do have this nasty desire to do this:
case 0: candidateMethod = getMethodByAnnotation(clazz); break;
case 1: candidateMethod = getMethodByBeingOnlyMethod(clazz); break;
case 2: candidateMethod = getMethodByBeingOnlySuitableMethod(clazz); break;
default: throw new NoSuitableMethodFoundException(clazz);
To highlight what's actually being done (in order), but in Java this is completely unacceptable--you'd actually find it common or preferred in some other languages.
PS. This would be downright elegant (damn I hate that word) in groovy:
actualMethod = getMethodByAnnotation(clazz) ?:
getMethodByBeingOnlyMethod(clazz) ?:
getMethodByBeingOnlySuitableMethod(clazz) ?:
throw new NoSuitableMethodFoundException(clazz) ;
The elvis operator rules. Note, the last line may not actually work, but it would be a trivial patch if it doesn't.
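For comparison, a roughly equivalent sketch in Java 8+ (assuming the getMethodBy* helpers from the question, which return null when nothing is found, and the question's NoSuitableMethodFoundException):
import java.lang.reflect.Method;
import java.util.Objects;
import java.util.function.Supplier;
import java.util.stream.Stream;

public Method findMethod(Class<?> clazz) throws NoSuitableMethodFoundException {
    // each supplier is one strategy; the first non-null result wins
    return Stream.<Supplier<Method>>of(
                () -> getMethodByAnnotation(clazz),
                () -> getMethodByBeingOnlyMethod(clazz),
                () -> getMethodByBeingOnlySuitableMethod(clazz))
            .map(Supplier::get)
            .filter(Objects::nonNull)
            .findFirst()
            .orElseThrow(() -> new NoSuitableMethodFoundException(clazz));
}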