I need suggestions on the right approach to applying conditions in Java.
I have 100 conditions, based on which I have to change the value of a String variable that is displayed to the user.
An example condition: a<5 && (b>0 && c>8) && d>9 || x!=4
There are more conditions, but the variables are more or less the same.
I am doing this right now:
if(condition1)
else if(condition2)
else if(condition3)
...
A switch-case alternative would obviously end up nested within the if-elses, i.e.
if(condition1)
switch(x)
{
case y:
blah-blah
}
else if(condition2)
switch(x)
{
case y:
blah-blah
}
else if(condition3)
...
But I am looking for a more elegant solution, such as using an interface with polymorphic support. What could I do to reduce the lines of code, and what would be the right approach?
---Edit---
I actually require this on an Android device, but it is more of a Java construct here.
This is a small snapshot of the conditions I have. More will be added depending on whether a few pass or fail, which would obviously require more if-elses, with or without nesting. In that case, would the processing become slow?
As of now I am storing the messages as static String variables in a separate class, so if a condition evaluates to true I pick the corresponding static variable from that single class and display it. Would that be the right way to store the resulting messages?
Depending on the number of conditional inputs, you might be able to use a look-up table, or even a HashMap, by encoding all inputs or even some relatively simple complex conditions in a single value:
int key = 0;
key |= a?(1):0;
key |= b?(1<<1):0;
key |= (c.size() > 1)?(1<<2):0;
...
String result = table[key]; // Or result = map.get(key);
This paradigm has the added advantage of constant time (O(1)) complexity, which may be important on some occasions. Depending on the complexity of the conditions, you might even have fewer branches in the code path on average, as opposed to full-blown if-then-else spaghetti code, which might lead to performance improvements.
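For example, a minimal sketch of the HashMap variant, assuming boolean inputs a and b and a collection c as placeholders (the keys and messages here are made up, not from the question):
// Assumes java.util.Map and java.util.HashMap are imported; a, b and c are placeholder inputs.
Map<Integer, String> messages = new HashMap<>();
messages.put(0b000, "Nothing applies");
messages.put(0b011, "a and b hold");
messages.put(0b101, "a holds and c has more than one element");
// ...one entry per combination you care about

int key = 0;
key |= a ? 1 : 0;
key |= b ? (1 << 1) : 0;
key |= (c.size() > 1) ? (1 << 2) : 0;

String result = messages.getOrDefault(key, "No matching message"); // getOrDefault (Java 8) supplies a fallback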
We might be able to help you more if you added more context to your question. Where are the condition inputs coming from? What are they like?
And the more important question: What is the actual problem that you are trying to solve?
There are a lot of possibilities here. Without knowing much about your domain, I would create something like this (you can think of better names :P):
public interface UserFriendlyMessageBuilder {
    boolean meetCondition(FooObjectWithArguments args);
    String transform(String rawMessage);
}
In this way, you can create a Set of UserFriendlyMessageBuilder objects and just iterate through them, using the first one that meets the condition to transform your raw message.
public class MessageProcessor {

    private final Set<UserFriendlyMessageBuilder> messageBuilders;

    public MessageProcessor(Set<UserFriendlyMessageBuilder> messageBuilders) {
        this.messageBuilders = messageBuilders;
    }

    public String get(FooObjectWithArguments args, String rawMsg) {
        for (UserFriendlyMessageBuilder msgBuilder : messageBuilders) {
            if (msgBuilder.meetCondition(args)) {
                return msgBuilder.transform(rawMsg);
            }
        }
        return rawMsg;
    }
}
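A hedged usage sketch might look like this; FooObjectWithArguments, getBalance() and the concrete builder below are hypothetical, purely to show how the pieces fit together:
// One concrete condition/message pair.
public class LowBalanceMessageBuilder implements UserFriendlyMessageBuilder {
    @Override
    public boolean meetCondition(FooObjectWithArguments args) {
        return args.getBalance() < 5; // getBalance() is a hypothetical accessor
    }

    @Override
    public String transform(String rawMessage) {
        return "Low balance: " + rawMessage;
    }
}

// Wiring it up (assumes java.util.Set and java.util.LinkedHashSet are imported).
Set<UserFriendlyMessageBuilder> builders = new LinkedHashSet<>(); // LinkedHashSet keeps a predictable evaluation order
builders.add(new LowBalanceMessageBuilder());
// builders.add(new SomeOtherMessageBuilder()); // and so on for the other conditions

MessageProcessor processor = new MessageProcessor(builders);
String message = processor.get(args, "Operation completed");
Using an ordered Set matters if several conditions can be true at once, since only the first match wins.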
It seems to me that you have given very little importance to designing the product in modules,
which is the main benefit of using an OOP language.
E.g. if you have 100 conditions and you are able to make 4 modules, then theoretically, to choose anything, you need about 26 conditions.
This is an additional possibility that may be worth considering.
Take each comparison, and calculate its truth, then look the resulting boolean[] up in a truth table. There is a lot of existing work on simplifying truth tables that you could apply. I have a truth table simplification applet I wrote many years ago. You may find its source code useful.
The cost of this is doing all the comparisons, or at least the ones that are needed to evaluate the expression using the simplified truth table. The advantage is an organized system for managing a complicated combination of conditions.
Even if you do not use a truth table directly in the code, consider writing and simplifying one as a way of organizing your code.
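As a rough sketch of that idea (the comparisons, the array name and its contents below are illustrative only, not from the question):
// Evaluate each comparison once and record its truth value.
boolean[] inputs = { a < 5, b > 0, x != 4 };

// Turn the boolean vector into a row index of the truth table.
int row = 0;
for (boolean input : inputs) {
    row = (row << 1) | (input ? 1 : 0);
}

// messages is a String[] with 2^inputs.length entries, one per row of the (possibly simplified) table.
String message = messages[row];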
Related
I'm trying to make a game, and I have a Selection class that holds a String field named str. I apply the following code to my selection objects every 17 milliseconds.
if(s.Str == "Upgrade") {
}else if(s.Str == "Siege") {
}else if(s.Str == "Recruit") {
}
In other words, these selection objects will do different jobs according to their types (upgrade, siege, etc.). I am using the str variable elsewhere. My question is:
Would it be more optimized if I assign the types to an integer when I first create the objects?
if(s.type == 1) {
}else if(s.type == 2) {
}else if(s.type == 3) {
}
This would make me write extra lines of code (since I have to separate the objects by type when I first create them) and make the code harder to understand, but would there be a difference between comparing integers and comparing strings?
If you compare strings that way, there is probably no performance difference.
However, that is the WRONG WAY to compare strings. The correct way is to use the equals(Object) method. For example:
if (s.Str.equals("Upgrade")) {
Read this:
How do I compare strings in Java?
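To see why == can mislead here, a small illustration (the values are made up):
String a = "Upgrade";
String b = new String("Upgrade"); // same characters, but a distinct object

System.out.println(a == b);      // false: compares references
System.out.println(a.equals(b)); // true: compares character content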
I apply the following code to my selection objects every 17 milliseconds.
The time that it will take to test two strings for equality is probably in the order of tens of NANOseconds. So ... basically ... the difference between comparing strings or integers is irrelevant.
This illustrates why premature optimization is a bad thing. You should only optimize code when you know that it is going to be worthwhile to spend your time on it; i.e. when you know there is going to be a pay-off.
So should I optimize after I write and finish all the code? Is that what 'not doing premature optimization' means?
No it doesn't exactly mean that. (Well .. not to me anyway.) What it means to me is that you shouldn't optimize until:
you have a working program whose performance you can measure,
you have determined specific (quantifiable) performance criteria,
you have a means of measuring the performance; e.g. appropriate benchmarks involving real or realistic use-cases, and
you have a good means of identifying the actual performance hotspots.
If you try to optimize before you have the above, you are likely to optimize the wrong parts of the code for the wrong reasons, and your effort (programmer time) is likely to be spent inefficiently.
In your specific case, my gut feeling is that if you followed the recommended process you would discover1 that this String vs int (vs enum) choice is irrelevant to your game's observable performance2.
But if you want to be more scientific than "gut feeling", you should wait until you have 1 through 4 settled, and then measure to see if the actual performance meets your criteria. Only then should you decide whether or not to optimize.
1 - My prediction assumes that your characterization of the problem is close enough to reality. That is always a risk when people try to identify performance issues "by eye" rather than by measuring.
2 - It is relevant to other things; e.g. code readability and maintainability, but I'm not going to address those in this Answer.
The Answer by Stephen C is correct and wise. But your example code is ripe for a different solution entirely.
Enum
If you want performance, type-safety, easier-to-read code, and want to ensure valid values, use enum objects rather than mere strings or integers.
public enum Action { UPGRADE , SIEGE , RECRUIT }
You can use a switch for the various enum possible objects.
Action action = Action.SIEGE ;
…
switch ( action )
{
    case UPGRADE:
        doUpgradeStuff() ;
        break;
    case SIEGE:
        doSiegeStuff() ;
        break;
    case RECRUIT:
        doRecruitStuff() ;
        break;
    default:
        doDefaultStuff() ;
        break;
}
Using enums this way will get even better in the future. See JEP 406: Pattern Matching for switch (Preview).
See Java Tutorials by Oracle on enums. And for an example, see their tutorial using enums for month, day-of-week, and text style.
See also this Question, linked to others.
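As a further hedged sketch (not part of the answer above), the switch can also be avoided entirely by giving each constant its own behavior through constant-specific methods; the act() bodies here are placeholders:
public enum Action {
    UPGRADE { @Override void act() { /* upgrade work */ } },
    SIEGE   { @Override void act() { /* siege work */ } },
    RECRUIT { @Override void act() { /* recruit work */ } };

    abstract void act(); // each constant supplies its own implementation
}

// Caller: no switch needed, the call dispatches to the constant's own body.
Action action = Action.SIEGE;
action.act();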
Comparing primitive numbers like int will definitely be faster than comparing Strings in Java. It will give you better performance if you are executing it every 17 milliseconds.
Yes, there is a difference. String is an object and int is a primitive type. When you write object == "string", it compares references (addresses). You need to use the equals method to check for an exact match.
In many cases, mostly when you are looping through an array and assigning values to its elements, there is scope to use the post-increment operator. Is it considered good practice?
For example, in the following code where the copying is done, which one is better?
int [] to_assign;
int [] to_include;
int [] from_assign;
// Version 1
int count = 0;
while(i < <some_value>){
if(to_include[i]==1)
to_assign[count++] = from_assign[i];
}
// Version 2
int count = 0;
while(i < <some_value>){
if(to_include[i]==1)
{
to_assign[count] = from_assign[i];
count++;
}
}
It's purely a matter of style. Personally, I'd use whichever one makes the most logical sense. If the increment is logically part of the operation, then use the post-increment. If not, use a separate increment operation.
Also, when you use an increment operator alone, it is generally preferred to use a pre-increment. While it won't matter with simple types like integers, with more complex types, it can be much more efficient in languages like C++ that have operator overloading because a pre-increment doesn't need two instances to be around at the same time. There's no performance impact with Java, because it doesn't have operator overloading, but if you want a consistent style rule, it should be pre-increment rather than post-increment.
I'd argue that the second solution is perhaps slightly cleaner to read. I.e. the eye observes that an assignment is being made, and then that there is an increment. Having them both together on the same line makes it slightly less easy to read at a glance. So, I'd prefer solution two.
That said, this is a matter of preference. I don't think one can speak of an established best or better practice here.
In the days when every last ounce of performance mattered, the first version would have been preferred, because the compiler had a higher chance of emitting slightly more optimal assembly for the first solution.
Any good compiler will optimize this for you anyway. All are good as long as they are human-readable.
"++" is a leftover from the days of pointer arithmetic -- there are some of us who prefer "+= 1" for integers. But the compiler should manage simple cases like this correctly, regardless.
Modern compilers optimize this kind of code anyway, so in the end it doesn't matter how and where you're incrementing the variable.
From a style perspective, the first version is smaller and, for some, easier to read.
From a code comprehension point of view, the second version is easier to understand for beginner developers.
From a performance point of view, ignoring compiler optimizations, this is even faster:
// Version 3
int count = -1;
while(i < <some_value>){
if(to_include[i]==1)
{
to_assign[++count] = from_assign[i];
}
}
It's faster because, in theory, count++ creates a temporary copy of the value before the increment, while ++count increments and uses the same variable. But again, this kind of premature optimization is not needed any more, since compilers detect such frequent cases and optimize the generated code.
Apart from readability, are there any differences in performance or compile time when a single-line loop / conditional statement is written with and without brackets?
For example, are there any differences between the following:
if (a > 10)
a = 0;
and
if (a > 10)
{
a = 0;
}
?
Of course there is no difference in performance. But there is a difference in the possibility of introducing errors:
if (a>10)
a=0;
If somebody later extends the code and writes,
if (a>10)
a=0;
printf ("a was reset\n");
This will always be printed because of the missing braces. Some people recommend that you always use braces to avoid this kind of error.
Contrary to several answers, there is a finite but negligible performance difference at compile time. There is zero difference of any kind at runtime.
No, there is no difference, the compiler will strip out non-meaningful braces, line-breaks etc.
The compile time will be marginally different, but so marginally that you have already lost far more time reading this answer than you will get back in compile speed. As compute power increases, this cost goes down yet further, but the cost of reducing readability does not.
In short, do what is readable, it makes no useful difference in any other sense.
Machine code does not contain such braces; after compilation, there are no more {}. Use the most readable form.
Well, there is of course no difference between them at runtime.
But you should certainly use the second form for the sake of maintainability of your code.
The reason is that if, in the future, you need to add more lines to your if-else block, then with the first style you would have to add the braces before adding the new code, which you would not need to do in the second case.
So it is far easier to add code with the second style in the future than with the first.
Also, with the first style you are prone to typing errors, such as a semicolon after your if, like this:
if (a > 0);
System.out.println("Hello");
So you can see that Hello will always get printed. These errors are easy to avoid if you attach curly braces to your if.
It depends on the rest of the coding guidelines. I don't see any problem dropping the braces if the opening brace is always on a line by itself. If the opening brace is at the end of the if line, however, I find it too easy to overlook when adding to the contents. So I'd go for either:
if ( a > 10 ) {
a = 0;
}
regardless of the number of lines, or:
if ( a > 10 )
{
// several statements...
}
with:
if ( a > 10 )
a = 0;
when there is just one statement. The important thing, however, is that all of the code be consistent. If you're working on an existing code base which uses several different styles, I'd always use braces in new code, since you can't count on the code style to ensure that, if the braces were there, they'd be in a highly visible location.
In the near future we might have a rule enforced whereby we cannot have any hard-coded numbers in our Java source code; all hard-coded numbers must be declared as final variables.
Even though this sounds great in theory, it is really hard/tedious to implement in code, especially legacy code. Should it really be considered "best practice" to declare the numbers in the following code snippets as final variables?
//creating excel
cellnum = 0;
//Declaring variables.
Object[] result = new Object[2];
//adding dash to ssn
return ssn.substring(1, 3)+"-"+ssn.substring(3, 5)+"-"+ssn.substring(5, 9);
Above are just some of the examples I could think of, but in these (and other) cases, where would you, as a developer, say enough is enough?
I wanted to make this question a community wiki but couldn't see how...?
Definitely not. Literal constants have their place, especially low constants such as 0, 1, 2, ...
I don't think anyone would think
double[] pair = new double[PAIR_COUNT];
makes more sense than
double[] pair = new double[2];
I'd say use final variables if
...it increases readability,
...the value may change (and is used in multiple places), or
...it serves as documentation
A related side note: As always with coding standards / conventions: very few (if any) rules should be followed strictly.
Replacing numbers by constants makes sense if the number carries a meaning that is not inherently obvious by looking at its value alone.
For instance,
productType = 221; // BAD: the number needs to be looked up somewhere to understand its meaning
productType = PRODUCT_TYPE_CONSUMABLE; // GOOD: the constant is self-describing
On the other hand,
int initialCount = 0; // GOOD: in this context zero really means zero
int initialCount = ZERO; // BAD: the number value is clear, and there's no need to add a self-referencing constant name if there's no other meaning
Generally speaking, if a literal has a special meaning, it should be given a unique name rather than assuming things. I'm not sure why it is "practically" hard/tedious to do the same.
Object[] result = new Object[2]; => seems like a good candidate for using a Pair class (see the sketch after these points)
cellnum = 0; => cellnum = FIRST_COLUMN; especially since you might end up using an API which treats 1 as the starting index, or maybe you want to process an Excel sheet in which columns start from 2.
return ssn.substring(1, 3)+"-"+ssn.substring(3, 5)+"-"+ssn.substring(5, 9) => If you have code like this littered throughout your codebase, you have bigger problems. If this code exists in a single location and is shielded by a sane API, I don't really see a problem here.
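For the first point, a minimal sketch of what such a Pair class might look like (the names are illustrative; libraries such as Apache Commons Lang also ship a Pair type):
public final class Pair<A, B> {
    private final A first;
    private final B second;

    public Pair(A first, B second) {
        this.first = first;
        this.second = second;
    }

    public A getFirst()  { return first; }
    public B getSecond() { return second; }
}

// Instead of new Object[2], the two results get names and types:
Pair<String, Integer> result = new Pair<>("total", 42);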
I've seen folks consider 0 and 1 accepted exceptions.
The idea is that you want to document why you have two Objects, as in the example above.
I agree with you about the dashes in SSN. The comment describes it better than 4 named constants.
In general, I like the idea of no magic numbers, but as with every rule, there are pragmatics involved. Legacy code, brings its own issues. It's a lot of work without a lot of productivity in terms of changed behavior to bring old code up to date this way. I would consider doing it in an evolutionary fashion: when you have to edit an old file, bring it up to date.
It really depends on the context, doesn't it? If there are numbers in the code that do not indicate why they exist, then naming them makes the code more readable. If you see the number 3.14 in code, is it PI? Is there any way to tell, or is that just a coincidence? Naming it PI will clear up the mystery.
In your example, why is cellnum = 2? Why not 10, or 20? It should be named something, say INITIAL_CELL or MAX_CELL, especially if the same number, meaning the same thing, appears again in the code.
It depends on whether it needs to be changed, or, for that matter, whether it can be changed.
If you only need 2 objects (say, for a pair like aioobe mentioned) then that isn't a magic number, it's the correct number. If it's for a variable tuple that, at this moment, is 2, then you probably should abstract it out into a constant.
I personally like the exclusive or, ^, operator when it makes sense in the context of boolean checks because of its conciseness. I much prefer to write
if (boolean1 ^ boolean2)
{
//do it
}
than
if((boolean1 && !boolean2) || (boolean2 && !boolean1))
{
//do it
}
but I often get confused looks from other experienced Java developers (not just the newbies), and sometimes comments about how it should only be used for bitwise operations.
I'm curious as to the best practices regarding the usage of the ^ operator.
You can simply use != instead.
I think you've answered your own question - if you get strange looks from people, it's probably safer to go with the more explicit option.
If you need to comment it, then you're probably better off replacing it with the more verbose version and not making people ask the question in the first place.
I find that I have similar conversations a lot. On the one hand, you have a compact, efficient method of achieving your goal. On the other hand, you have something that the rest of your team might not understand, making it hard to maintain in the future.
My general rule is to ask if the technique being used is something that it is reasonable to expect programmers in general to know. In this case, I think that it is reasonable to expect programmers to know how to use boolean operators, so using xor in an if statement is okay.
As an example of something that wouldn't be okay, take the trick of using xor to swap two variables without using a temporary variable. That is a trick that I wouldn't expect everybody to be familiar with, so it wouldn't pass code review.
I think it'd be okay if you commented it, e.g. // ^ == XOR.
You could always just wrap it in a function to give it a verbose name:
public static boolean XOR(boolean A, boolean B) {
return A ^ B;
}
But it seems to me that it wouldn't be hard for anyone who didn't know what the ^ operator is for to Google it really quickly. It's not going to be hard to remember after the first time. Since you asked for other uses, it's common to use XOR for bit masking.
You can also use XOR to swap the values in two variables without using a third temporary variable.
// Swap the values in A and B
A ^= B;
B ^= A;
A ^= B;
Here's a Stack Overflow question related to XOR swapping.
if((boolean1 && !boolean2) || (boolean2 && !boolean1))
{
//do it
}
IMHO this code could be simplified:
if(boolean1 != boolean2)
{
//do it
}
With code clarity in mind, my opinion is that using XOR in boolean checks is not typical usage for the XOR bitwise operator. From my experience, bitwise XOR in Java is typically used to implement a mask flag toggle behavior:
flags = flags ^ MASK;
This article by Vipan Singla explains the usage case more in detail.
If you need to use bitwise XOR as in your example, comment why you use it, since it's likely to require even a bitwise-literate audience to stop in their tracks to understand why you are using it.
I personally prefer the "boolean1 ^ boolean2" expression due to its succinctness.
If I was in your situation (working in a team), I would strike a compromise by encapsulating the "boolean1 ^ boolean2" logic in a function with a descriptive name such as "isDifferent(boolean1, boolean2)".
For example, instead of using "boolean1 ^ boolean2", you would call "isDifferent(boolean1, boolean2)" like so:
if (isDifferent(boolean1, boolean2))
{
//do it
}
Your "isDifferent(boolean1, boolean2)" function would look like:
private boolean isDifferent(boolean boolean1, boolean boolean2)
{
    return boolean1 ^ boolean2;
}
Of course, this solution entails the use of an ostensibly extraneous function call, which in itself is subject to Best Practices scrutiny, but it avoids the verbose (and ugly) expression "(boolean1 && !boolean2) || (boolean2 && !boolean1)"!
If the usage pattern justifies it, why not? Even if your team doesn't recognize the operator right away, with time they could. Humans learn new words all the time. Why not in programming?
The only caution I might state is that "^" doesn't have the short circuit semantics of your second boolean check. If you really need the short circuit semantics, then a static util method works too.
public static boolean xor(boolean a, boolean b) {
return (a && !b) || (b && !a);
}
As a bitwise operator, xor is much faster than any other means to replace it. So for performance critical and scalable calculations, xor is imperative.
My subjective personal opinion: it is absolutely forbidden, for any purpose, to use equality (== or !=) for booleans. Using it shows a lack of basic programming ethics and fundamentals. Anyone who gives you confused looks over ^ should be sent back to the basics of boolean algebra (I was tempted to write "to the rivers of belief" here :) ).
!= is OK to compare two variables. It doesn't work, though, with multiple comparisons.
str.contains("!=") ^ str.startsWith("not(")
looks better to me than
str.contains("!=") != str.startsWith("not(")