It's really easy to run into use cases where a SonarQube rule is important for some files but completely useless for others. Here are a couple of examples:
The rule "Missing translations should be added" is really great for finding missing keys in all kinds of messages_xx.properties files. However, each group of properties files also contains an empty messages_en.properties (at least for us, where English is the default locale). For these files, adding keys would actually be a bug.
Another example is "String literals should not be duplicated": in normal Java files, duplicated strings invite bugs (because you might change one occurrence and not the other). In test files, enforcing this rule leads to unreadable code, because the duplication is usually in the initialization of the objects under test and/or in the messages printed when a test fails.
I could easily go on with how test cases differ from "real" Java classes. Even though test code should be held to the same quality standards, in practice it is quite different.
The question is now: How to handle these rules in Sonar?
The trivial answers I already discarded:
remove the rules entirely (they are quite useful)
fix the issues (in the first example, "fixing" them would even introduce bugs into the code)
mark the issues as "won't fix" (it's just too much work)
So I guess I want to change the Quality Profile based on the project (e.g. exclude org.acme.project.it) or a file name (e.g. exclude *Test.java). Or maybe enable rules only for some file name patterns.
What is the best way to handle SonarQube rules that only work on a specific group of files?
You want to set up some exclusions. Exclusions allow you to ignore certain files completely, or to conversely ignore all but the specified set of files. You can set up exclusions for coverage or duplications. And most pertinently to your question, you can set up multiple flavors of issue exclusions.
I want to modify the rule, or make it target only public interfaces (not public classes etc.). Is this possible? I'm using this rule in Java code, but it's too strict for my project, and I would love to know if there is a way to change it a little bit.
Link for rule: https://rules.sonarsource.com/java/RSPEC-1213
For an existing ruleset on SonarQube, talk to your SonarQube administrator about changing the rules that are enforced on the code and removing that particular one from global enforcement.
There have been a few times I've gone to the admins of the tool for the install that I use and said "this rule isn't one that I care about or will enforce and only makes it confusing" and had them remove that rule from the globally run ruleset.
Is it possible to write your own rule?
Yes, it is possible. As SonarQube's docs on adding coding rules describe, you have some options: either you can write a plugin for SonarQube and add it to your instance (docs), or you can write an external application that analyzes the code and whose results SonarQube consumes.
If you don't have your own instance of SonarQube, or aren't up to writing the associated plugin or external tooling, you might want to look at PMD instead (site).
For PMD, writing a custom rule can be much simpler (docs). One of the ways PMD works is by 'compiling' the Java code into an XML representation of its abstract syntax tree and then running XPath queries against that XML (tutorial).
The XPath rule can then be included in the project's configuration.
What about turning it off for the code that I'm working on?
If a specific rule is one that you don't want to invoke, you could suppress it with @SuppressWarnings("java:S106") (that particular rule key is for System.out.println use, but the same structure can be used for other warnings) or by adding // NOSONAR (optionally with a short reason, e.g. // NOSONAR - too strict) on the offending line. There are spots where following a rule for a particular piece of code is problematic, and I suppress it for that line, method, or class - with a comment about why that is done.
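For example (the class and method names here are only for illustration):
public class ConsoleBanner {

    // Method-level suppression: "java:S106" is the key of the rule that flags System.out use.
    @SuppressWarnings("java:S106")
    void printBanner() {
        System.out.println("=== my tool ===");
    }

    // Line-level suppression: SonarQube ignores issues raised on a line carrying a NOSONAR comment.
    void debugDump(String state) {
        System.out.println(state); // NOSONAR - intentional console output in this debug helper
    }
}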
That particular rule... I'm going to agree with the Java (and now Oracle) guidelines and follow it. The reason is that if anyone else works on the code, they'll expect it to follow that convention. Having a consistent understanding of where things should be in the code, so that another developer doesn't need to dig through an entire file to find the constructor when it is expected to be at the top (below the field definitions), is a good thing. What's more, it limits the future cases where a developer goes through the code to make things consistent with conventions, which otherwise results in a lot of "style: updating code to follow style guide" commits later.
The question is whether the functionality I describe below already exists, or whether I need to make an attempt at creating it myself. I am aware that I am probably looking at a lot of work if it does not exist yet, and I am also sure that others have already tried. I am nevertheless grateful for comments such as "project A tried this, but..." or "dude D already failed because...". If somebody has an overall more elegant solution, that would of course be welcome as well.
I want to change the way I develop (private) Java code by introducing a multiplexing layer. What I mean by that is that I want to be able to create library-like, parameterizable AST snippets, which I can insert into my code via some sort of placeholder (such as an annotation). I am aware of https://projectlombok.org/ and have found that, while it is useful for small applications, it does not generally suit my requirements, as it does not seem possible to insert one's own snippets without forking the entire project and making major modifications. Also, Lombok only ever modifies a single file at a time, while I am looking for a solution that needs to 'know' multiple files at a time.
I imagine a structure like this (a rough sketch follows the list):
Source S: (Parameterizable) AST-snippets that can be included via some sort of reference in Source A.
Source A: Regular Java code, in which I can reference snippets from Source S. This code will not be compiled directly, as it is lacking the referenced snippets and would thus produce a lot of compile-time errors.
Source T: Target Source, which is an AST-equivalent copy of Source A, except that all references of AST-Snippets have been replaced by their respective Snippet from Source S. It needs to be mappable to the original Source A as well as the resolved snippets from Source S, where applicable, as most development will happen there.
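To make that concrete, here is a purely hypothetical sketch; the @Snippet annotation, its parameters, and the snippet id are all invented for illustration and do not refer to any existing tool:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical placeholder annotation used in Source A to reference a snippet from Source S.
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
@interface Snippet {
    String id();                    // which AST snippet from Source S to splice in
    String[] params() default {};   // parameters the snippet is instantiated with
}

// Source A: ordinary Java code referencing a snippet. On its own it would not compile once callers
// expect the members the snippet contributes; the fused Source T is what would actually be compiled.
@Snippet(id = "lazyField", params = {"type=java.util.List", "name=cache"})
class Consumer {
}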
I see several challenges with this concept, not the least of which are debuggability, source mapping, and compatibility with other frameworks/APIs. Also, it seems a challenge to work around the one-file-at-a-time limitation, memory-wise.
The advantage over Lombok would be flexibility: Lombok only provides a fixed set of snippets for specific purposes, whereas this would enable devs to write their own snippets or modify getters, setters, etc. Also, Lombok hooks into the compilation step and does not output the 'fused' source, AFAIK.
I want to target at least javac and eclipse's ecj compilers.
Recently, a colleague of mine said something along these lines: "consecutive APKs (executables) produced by build server from the same source code might not be the same". The context for this discussion was whether QA performed on build X also applies to build Y, which was performed by the same build server (configured the same way) from the same source code.
I think that generated executables might not be identical due to various factors (e.g. different timestamp), but the question is whether they can be functionally different.
The only scenario I can think of in which the same source code could produce different functionality is a multi-threading issue: with incorrectly synchronized multi-threaded code, different reordering/optimization decisions made at compile time could affect that poorly synchronized code and change its functional behavior.
My questions are:
Is it true that consecutive builds performed by the same build server from the same source code can be functionally different?
If #1 is true, are these differences limited to incorrectly synchronized multi-threaded code?
If #2 is false, what are the other parts that can change?
Links to any related material will be appreciated.
It's certainly possible in a few cases. I'll assume you are using Gradle to build your Android app.
Case 1: You are using a 3rd party dependency that's included with a version wildcard, such as:
compile 'com.example:somelib:1.+'
It's possible for the dependency to change in this case, which is why it's highly recommended to use explicit dependency versions.
Case 2: You're injecting environment information into your app using Gradle's buildConfigFields. These values will be injected into your app's BuildConfig class. Depending on how you use those values, the app behavior could vary on consecutive builds.
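For instance, suppose the build script injects the build timestamp into a BuildConfig constant via buildConfigField (the field name BUILD_TIME and the logic below are hypothetical); any code branching on that value then depends on when the APK was built, not on its source:
// Stand-in for the Android-generated BuildConfig class; in a real project the value would be
// baked in at compile time by a buildConfigField entry in build.gradle.
final class BuildConfig {
    static final long BUILD_TIME = 1700000000000L; // differs between two otherwise identical builds
}

public final class FeatureGate {
    private static final long ONE_WEEK_MS = 7L * 24 * 60 * 60 * 1000;

    // Whether the "trial" is considered expired depends on the build timestamp.
    public static boolean isTrialExpired(long nowMs) {
        return nowMs - BuildConfig.BUILD_TIME > ONE_WEEK_MS;
    }
}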
Case 3: You update the JDK on your CI in between consecutive builds. It's possible, though I'd assume highly unlikely, that your app behavior could change depending on how it's compiled. For example, you might be hitting an edge case in the JDK that gets fixed in a later version, causing code that previously worked to act differently.
I think this answers your first and second questions.
edit: sorry, I think I missed some important info from your OP. My case 2 is an example of your "e.g. different timestamp", and case 3 violates your "configured the same way". I'll leave the answer here, though.
I think that different functionality can only be caused by discrepancies in the environment, or perhaps you are using a snapshot version of some 3rd-party library that was updated in the meantime.
Some advice:
If it is possible to rebuild it, use the verbose mode of your build tool (-X in Maven, for example) and compare the output line by line with a diff program.
If the same source code could produce different results on the same machine / configuration, programming as we know it would probably not be possible.
There is always a chance that things break when the language level, operating system, or some other dependency changes. If all that changes is the time of the build, you would have to be doing something fundamentally wrong.
Using Android/Gradle, one possible cause of different behavior, or of errors in general, is using + for library versions in your build.gradle file. This is why you should avoid doing so: a consecutive build could fetch a newer/different version, so you would effectively be building different source code, and thus it could create a functionally different executable.
A good build should always be repeatable. This means given the same configuration it should have the same results. If it isn't, you could never rely on anything and would have to do total regression testing on everything.
[...] consecutive builds performed by the same build server from the same source code can be functionally different
No. As described above, if you use the same versions, the same source code, it should produce the same behavior. Unless you do something very wrong.
[...] are these differences limited to incorrectly synchronized multi-threaded code?
This would imply a bug with your compiler. While this is possible, it is extremely unlikely.
[...] what are the other parts that can change?
Besides the timestamp and the build number nothing else should change, given the same source code and configuration.
It is always a good idea to include unit (and other) tests in your build. This way you can test specific behavior to be the same with each build.
They should be identical, except when:
there are threading/optimization issues in the build system
there are hardware failures (CPU/RAM/HDD issues) in the build environment
there is time- or platform-dependent code in the build system itself or in the build scripts
So if you are building the exact same code on the exact same hardware, using the exact same version of the build system and the same OS version, and your code does not specifically depend on the build time, the result should be the same. The outputs should even have exactly the same checksums and size.
Also, the results are only the same if your code does not depend on external modules that are downloaded from the Internet at build time, as Gradle/Maven do - you can't guarantee those libraries are the same because they are not under your version control. Moreover, a dependency's version may be specified inexactly (like 2.0.+), so if the maintainer updates that module, your build system will use the updated one - so your builds are basically generated from different source code.
As somebody mentioned, running unit tests on the build server is good practice to make sure your build is stable and doesn't contain obvious bugs.
While this question addresses Java/Android, Jon Skeet blogged about different C# parsers treating some Unicode characters differently, mostly due to changes in the Unicode character database.
In his examples, the Mongolian Vowel Separator (U+180E) is considered either a whitespace character or a character allowed within an identifier, yielding different results in variable assignments.
It is definitely possible. You can construct an example program that behaves differently every time you start it up.
Imagine a strategy design pattern that lets you choose between algorithms at runtime, and you load one algorithm based on an RNG, as in the sketch below.
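A minimal, contrived sketch of that idea (all names are invented for illustration):
import java.util.Random;
import java.util.function.IntBinaryOperator;

// The concrete strategy is picked at random on startup, so two runs of the very same
// executable can produce different results.
public class RandomStrategyDemo {
    public static void main(String[] args) {
        IntBinaryOperator add = (a, b) -> a + b;        // strategy 1
        IntBinaryOperator multiply = (a, b) -> a * b;   // strategy 2
        IntBinaryOperator[] strategies = { add, multiply };

        IntBinaryOperator chosen = strategies[new Random().nextInt(strategies.length)];
        System.out.println(chosen.applyAsInt(2, 3));    // prints 5 or 6 depending on the run
    }
}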
I need to temporary ignore rule "Insufficient branch coverage by unit tests" (common-java:InsufficientBranchCoverage).
Reading http://docs.sonarqube.org/display/SONAR/Frequently+Asked+Questions I see that SuppressWarnings should work for all rules.
But any combination of
#SuppressWarnings("common-java:InsufficientBranchCoverage")
#SuppressWarnings("InsufficientBranchCoverage")
#SuppressWarnings("java:InsufficientBranchCoverage")
does not work for me.
I use Sonar 5.0, Sonar Java plugin 3.0.
Edit:
This warning may be suppressed (removed) from the Sonar UI. I see two solutions:
disable the rule 'Insufficient branch coverage by unit tests' for my quality profile. The drawback is that the rule is then disabled for the whole project, not just for a single class
mark the issue as ignored when browsing the issues drilldown. This ignores only a single occurrence of the issue. The drawback is that the issue needs to be marked in every Sonar project (we have a project per branch). When I need to remove the warning, I must do this again in the Sonar UI, for each project.
Unfortunately, it is not possible.
The InsufficientBranchCoverage rule applies directly at file level and is consequently not linked to any particular line in the file. To remove issues related to a given rule key using @SuppressWarnings, the rule has to apply at class or method level (as you can read in the documentation).
Note that, to guarantee consistency of the analysis results, we cannot disable the issue at file level, as that could end up hiding issues which would have been perfectly legitimate (take, for instance, a Java file containing multiple classes).
I have an array of configuration values that, while they may possibly change in the future, will most likely never have to be changed.
If any are missing or incorrect then a certain feature of my system will not work correctly.
Should these still be retrieved from some sort of config (XML, a database, etc.) and made available for the end user to change - or is this a situation where it makes more sense to hard-code them in the class that uses them?
I have spent a long time changing my mind over and over on this.
A designer's estimate of the likelihood of something needing to change is not a reliable criterion for making a decision, because real-world use of our programs has its peculiar ways of proving us wrong.
Instead of asking yourself "how likely is something to change?", ask yourself "does it make sense for an end-user to make a change?" If the answer is "yes", make it user-changeable; otherwise, make it changeable only through your code.
The particular mechanism through which you make something changeable (a database, a configuration file, a custom XML file, and so on) does not matter much. The important thing is to have good defaults for settings that are missing, so that your end users have a harder time breaking your system by supplying partial configurations.
Best practice is to use some kind of config or properties file, with default values as a failsafe in case the file is damaged or missing. This approach has the following advantages:
it can easily be recognised as a config file, meaning another dev does not need to dig through your classes to change a parameter
property files can be written by build tools like Ant, so if you have e.g. a test server address and a production server address, the Ant task can change the content accordingly
thanks to the default values, it works even without the file
The disadvantage is the added complexity.
Yes, it's almost certainly a bad idea to hard-code them; if nothing else, it can make testing (whether automated or manual) a lot more difficult than it needs to be. It's easy to include a .properties file in your jar with the usual defaults, and changing them in the future would just require overriding them at runtime. Dependency injection is usually an even better choice if you have the flexibility to arrange it.
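A minimal sketch of that approach; the resource name app-defaults.properties and the class are assumptions, not a fixed convention:
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class AppConfig {
    private final Properties bundled = new Properties();

    public AppConfig() {
        // Defaults ship inside the jar; a missing or unreadable file simply means we fall back
        // to the hard-coded fallbacks passed to get().
        try (InputStream in = AppConfig.class.getResourceAsStream("/app-defaults.properties")) {
            if (in != null) {
                bundled.load(in);
            }
        } catch (IOException ignored) {
        }
    }

    // Lookup order: -D system property, then the bundled default, then the caller's fallback.
    public String get(String key, String fallback) {
        return System.getProperty(key, bundled.getProperty(key, fallback));
    }
}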
If the configs are never going to change, as you said, then it's fine to declare those properties as constants in an interface or a separate class and use them throughout the program.
Separate property files are only needed when a property value is not fixed and depends on the environment, like the database name, username, password, etc. Other properties are fixed and do not depend on the environment you deploy to, like the port number or table names, if any.
It depends on your application. As a baseline, it's good design to use static variables to hold data that your program needs, instead of hard-coding strings and integers all over the place. This means any future change (e.g. an application-wide font color) only requires a single edit and a compile cycle, and you're good to go.
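For instance (the names here are purely illustrative):
import java.awt.Color;

// Application-wide values gathered in one place instead of scattered literals; changing the
// font color later means editing a single constant and recompiling.
public final class UiDefaults {
    public static final Color FONT_COLOR = new Color(0x33, 0x33, 0x33);
    public static final int DEFAULT_FONT_SIZE = 12;

    private UiDefaults() {
    }
}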
However, if these settings are user-configurable, then they cannot be hard-coded; instead they need to be read from an external source, and how you do that is a matter of design, complexity, and security.
Plain-text files are good for a small application where security is lax. The Sublime Text and Notepad++ editors do this for their theme settings, and it works well. (I believe it was plain text; perhaps they have moved to XML by now.)
A better option is XML, as it is structured and easier to read/parse/write, and lots of projects use it. One thing to look out for is corrupt files: the user might close the program, or the JVM might exit unexpectedly, while you are reading/writing, so you may want to look at things like buffering. Also handle FileNotFoundException in case the text/XML file is missing.
Another option is a database of some sort. It's a bit more secure, you can add application-level encryption, and you have a multitude of options. Large programs that already use a DB backend, like MySQL, have a database to hand, so you can create a new table and store the config there. Small applications can look at SQLite as an option.
Never ever hard-code things if they "might" change, or you might be sorry later and make others mad (very likely in big and/or open-source projects). If the config will never change, it is not a config any more but a constant.
Only use hard-coding when experimenting with code.
If you want to save simple values, you can use Java properties.
Look HERE for an example.
good luck.
There are some properties you can change without having to retest the software: properties you have tested over a range of values, or that you are sure are safe to change with at most a restart. These can be made configurable.
There are other properties which you cannot assume will just work without retesting the software. In that case it is better to hard-code them, IMHO. This encourages you to go through the release process when you change such a value. Values which you never expect to change are good candidates for this.