I am new to SonarQube and want to write a custom plugin that defines some custom rules and measures.
My project contains a JSON file (report.json) in which all the important measures of the project are listed. I want to take one of these measures and have SonarQube check whether its value is above or below a certain threshold.
I've already checked out this template, but I'm not sure which steps I need to take and which SonarQube API classes I have to use to make it work.
In detail it looks as follows:
I have a JSON file (report.json) and want to take the value of the key numericValue:
...
"numericValue": 251.84199999999998,
...
Now I want SonarQube to check whether this value is greater than or lower than 250. Based on this, a custom rule in SonarQube should pass or fail.
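For illustration, a minimal sensor sketch (assuming the plugin API's Sensor extension point; the repository/rule keys are placeholders that would have to match a rule declared in your RulesDefinition, the regex-based parsing stands in for a real JSON parser, and context.project() assumes a recent plugin API):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.sonar.api.batch.sensor.Sensor;
import org.sonar.api.batch.sensor.SensorContext;
import org.sonar.api.batch.sensor.SensorDescriptor;
import org.sonar.api.batch.sensor.issue.NewIssue;
import org.sonar.api.rule.RuleKey;

public class ReportJsonSensor implements Sensor {

  // Placeholder keys: they must match a rule declared in your RulesDefinition.
  private static final RuleKey RULE = RuleKey.of("my-json-repo", "numeric-value-threshold");
  private static final double THRESHOLD = 250.0;

  @Override
  public void describe(SensorDescriptor descriptor) {
    descriptor.name("report.json threshold sensor");
  }

  @Override
  public void execute(SensorContext context) {
    try {
      String json = new String(Files.readAllBytes(
          Paths.get(context.fileSystem().baseDir().getPath(), "report.json")));
      // Naive extraction of numericValue; a real plugin would use a JSON parser.
      Matcher m = Pattern.compile("\"numericValue\"\\s*:\\s*([0-9.]+)").matcher(json);
      if (m.find() && Double.parseDouble(m.group(1)) > THRESHOLD) {
        NewIssue issue = context.newIssue().forRule(RULE);
        issue.at(issue.newLocation()
            .on(context.project())   // recent API; older versions use context.module()
            .message("numericValue " + m.group(1) + " exceeds " + THRESHOLD));
        issue.save();
      }
    } catch (IOException e) {
      throw new IllegalStateException("Could not read report.json", e);
    }
  }
}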
I have already looked at the documentation (here and here) but it didn't help me much.
Related
In our client project we have six applications: three Java and three .NET.
We have the code in the same repository in different folders for each app.
We need to compare the artifacts being built with those from previous builds to know whether changes have been made.
If changes are made, deploy; otherwise, do not deploy.
Also, deployment should depend on which apps were touched. Say I work on one Java and one .NET app; I will want to deploy only these two, and the remaining four apps need not be deployed unnecessarily.
How can we achieve this without manual intervention?
Please suggest a solution.
You can try to set up a build pipeline and corresponding release pipeline for each app.
In the build pipeline for each app, you can use a path filter (the 'paths' key in the trigger) to specify which file paths trigger the build pipeline.
In the release pipeline for each app, you can try setting up a step that calls the "Builds - Get Changes Between Builds" API to get the changes between the current build and the previous build. If no changes were made between the two builds, skip the subsequent deployment step.
[UPDATE]
Is there any equivalent API that could report the difference between two artifacts even if a build runs manually without any changes? Then, if the result shows no changes, we could cancel the release pipeline.
To avoid deploying artifacts that have the same source version, as I mentioned above, you can try using the "Builds - Get Changes Between Builds" API. The response body of this API returns the list of commits between the two specified builds.
If there are changes between the two builds, the value of the "count" property is the number of commits, and the commits are listed in the "value" property (an array).
If there are no changes between the two builds, "count" is 0 and "value" is an empty array.
In your case, you just need to check whether the value of "count" is greater than 0. If greater than 0, start the deployment. If equal to 0, skip the deployment to avoid re-deployment.
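As a hedged illustration of that check, a small Java program calling the endpoint (the URL shape and api-version follow the public documentation of "Builds - Get Changes Between Builds"; the organization, project, build IDs, and PAT environment variable are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ChangesBetweenBuilds {
  public static void main(String[] args) throws Exception {
    String org = "my-org";                      // placeholder organization
    String project = "my-project";              // placeholder project
    int fromBuildId = 100, toBuildId = 101;     // placeholder build IDs
    String pat = System.getenv("AZDO_PAT");     // personal access token (assumed env var)

    String url = String.format(
        "https://dev.azure.com/%s/%s/_apis/build/changes?fromBuildId=%d&toBuildId=%d&api-version=6.0",
        org, project, fromBuildId, toBuildId);
    // PATs use basic auth with an empty user name.
    String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());

    HttpResponse<String> resp = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(URI.create(url))
            .header("Authorization", "Basic " + auth)
            .GET()
            .build(),
        HttpResponse.BodyHandlers.ofString());

    // Naive extraction of the "count" property; a real script would use a JSON parser.
    Matcher m = Pattern.compile("\"count\"\\s*:\\s*(\\d+)").matcher(resp.body());
    int count = m.find() ? Integer.parseInt(m.group(1)) : 0;
    System.out.println(count > 0 ? "Changes found - deploy" : "No changes - skip deployment");
  }
}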
I am currently working on a Java project where I am trying to use the QuickFIX engine.
But every time I get the message below:
MsgSeqNum too low, expecting 3 but received 2
For security reasons, I can't share the whole Java file and config, but portions of the code can be shared in an anonymized form.
What I am looking for is a Java sample using QuickFIX in which the above error has been fixed.
NB:
Apologies if the same question already exists; please help me find that one as well.
You can set the sequence numbers manually using the session API (each setter takes the next expected sequence number):
Session.lookupSession(session_).setNextSenderMsgSeqNum(nextSenderSeqNum);
Session.lookupSession(session_).setNextTargetMsgSeqNum(nextTargetSeqNum);
You can also refer to: How to set sequence numbers manually in QuickFixJ?
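As a hedged sketch of how those calls fit together in QuickFIX/J (the SessionID values and the chosen sequence numbers are placeholders; lookupSession returns null if the session is not registered yet):

import java.io.IOException;
import quickfix.Session;
import quickfix.SessionID;

public class SeqNumReset {

  // Resets the stored sequence numbers for a registered session.
  public static void resetSeqNums(SessionID sessionId, int nextSender, int nextTarget)
      throws IOException {
    Session session = Session.lookupSession(sessionId);
    if (session == null) {
      throw new IllegalStateException("Session not registered: " + sessionId);
    }
    session.setNextSenderMsgSeqNum(nextSender); // next MsgSeqNum this side will send
    session.setNextTargetMsgSeqNum(nextTarget); // next MsgSeqNum expected from the counterparty
  }

  public static void main(String[] args) throws IOException {
    // Placeholder session identifiers; they must match your session config.
    resetSeqNums(new SessionID("FIX.4.4", "SENDER", "TARGET"), 3, 3);
  }
}

Alternatively, setting ResetOnLogon=Y in the session configuration resets both sequence numbers to 1 at each logon, which avoids the mismatch at the cost of losing message replay.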
I am setting up SonarQube code analysis for my existing projects. I want to focus only on issues in new code and ignore already existing issues. Is there a way to export the list of existing defects and use it as a baseline of defects that should be ignored?
I could create the project and mark all issues as "cannot be fixed"/ignored, but I would have to do that for every release version, and we have several release versions.
Thanks in advance.
If I understood correctly, you need to flag some of your Sonar issues so you can exclude them from the results when a new analysis is performed.
You can do it by creating an Action Plan (Important: this feature has been removed from Sonar > 5.3) and assigning the issues to this Action Plan (call it "baseline").
Then, from the Issues view, you can filter by project and Action Plan, selecting all non-baseline Action Plans.
In case you're using Sonar > 5.3 (no Action Plan feature), you can instead add a "baseline" Tag to your baseline issues (do it by bulk-changing them).
Once the issues have been tagged, you can (from the Issues view) filter by tag, selecting all existing tags except the "baseline" tag, and save this filter so you don't need to recreate it every time.
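If tagging hundreds of issues by hand is impractical, the bulk change can also be scripted against the web API. A hedged Java sketch (the api/issues/bulk_change endpoint and its issues/add_tags parameters are taken from the web-API documentation; the server URL, token, and issue keys are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TagBaselineIssues {
  public static void main(String[] args) throws Exception {
    String server = "http://localhost:9000";            // placeholder server URL
    String token = System.getenv("SONAR_TOKEN");        // token is sent as the basic-auth user
    String issueKeys = "ISSUE-KEY-1,ISSUE-KEY-2";       // placeholders, e.g. from api/issues/search

    String body = "issues=" + issueKeys + "&add_tags=baseline";
    String auth = Base64.getEncoder().encodeToString((token + ":").getBytes());

    HttpResponse<String> resp = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(URI.create(server + "/api/issues/bulk_change"))
            .header("Authorization", "Basic " + auth)
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build(),
        HttpResponse.BodyHandlers.ofString());
    System.out.println(resp.statusCode() + " " + resp.body());
  }
}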
This is exactly what the SonarQube Leak Period is for.
You don't mention your SonarQube server version, so I'll assume the latest: 6.3.
Set Administration > General > Leak > Leak Period to the appropriate value, whether that's a date, previous_version, or a number of days. Then focus solely on "new" metrics, such as Coverage on New Code.
The SonarQube interface is designed to help you focus on Leak Period values by pulling them out into a separate section of the project home page.
How can I tell eclipse to inform me when the number of lines in my method exceeds a certain number?
I've tried researching but ended up with nothing.
Any help will be appreciated.
Thanks folks!
An Eclipse plugin, perhaps?
You can use code metrics plugins for Eclipse which analyze your code and calculate statistics.
An example:
http://metrics.sourceforge.net/
The statistics include the average and maximum number of lines per method.
I found it:
An Eclipse plugin - http://metrics.sourceforge.net/
It will show the metrics in a separate panel but will not raise a warning in Eclipse, though; you need to look at the metrics yourself.
Install CodePro Analytix. Configure the issue detection for method length in CodePro Analytix to your number of lines. Set the issue level to "Error". Then use the dynamic auditing mode of CodePro Analytix. It will always scan the currently opened editor files for violations. So on saving a file containing a long method, you get a violation error immediately.
Maybe you want to consider plugins like PMD or FindBugs to improve the code quality.
CheckStyle can check for style issues in your code, like a method that is too long (confer MethodLength). There is an Eclipse Plugin where you can configure which checks to do, what your limits are (e.g., with how many lines to report a method as "too long"), and how to display that the check failed (e.g., as an error, as a warning,...).
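For instance, a minimal Checkstyle configuration fragment for this check (the 50-line limit and the "error" severity are example values you would adapt):

<module name="Checker">
  <module name="TreeWalker">
    <!-- Flag methods longer than 50 lines; the severity makes Eclipse show an error marker. -->
    <module name="MethodLength">
      <property name="max" value="50"/>
      <property name="severity" value="error"/>
    </module>
  </module>
</module>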
I am currently helping a member of my team to get in grip with our new project and the tools we are using. We use Java as a primary language. A particularity of my colleague is that he is blind. He's working primarily with Emacs, and he runs maven targets in a terminal.
After I'm done implementing, I find it very useful to check my test coverage. I'd like my colleague to be able to check coverage as well. I have two ways of getting this information:
Use IntelliJ's integrated test coverage (it uses EMMA and shows a green, red, or yellow color next to each line). Very convenient, as I can see this information immediately after having run the tests, with no further interaction.
This won't work for my colleague as he can't use IntelliJ, and it would probably not work anyway as there is no textual representation of the coverage info
Use Cobertura reports. They use the same concept of green/red lines. They are fine for macro information like the overall coverage of a class, but not for checking which line has not been covered.
Actually, he could dig into the HTML source of the report and find out which lines have the class nbHitsUncovered, but that seems very impractical.
I would really like to show him how to get his coverage data quickly. Does anybody know of a tool that shows coverage without relying on colors? Or do we have to write our own? (by transforming the HTML report, for instance)
I’m a totally blind developer who does my work on Windows with the Jaws for Windows screen reader so this won’t map exactly to the developer you work with. With a little programming it looks like cobertura test results are the easiest to deal with. Based on the following sample XML report it shouldn’t be difficult to throw together a quick Perl script to check for lines with a hit count of 0.
https://raw.github.com/jenkinsci/cobertura-plugin/master/src/test/resources/hudson/plugins/cobertura/coverage-with-data.xml
I was able to find out that line 24 was the only one executed 0 times with a quick find for
Hits="0"
Although I was able to find out what line wasn’t executed I had to scroll up quite a bit to figure out what class and method the line was located in. A quick Perl script could eliminate the need to scroll back and provide the package, class, and method the line is located in more efficiently.
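Something along those lines, sketched in Java rather than Perl to match the rest of the thread (a hedged, minimal version; it assumes the element and attribute names of the sample report linked above and disables DTD loading so the report's DOCTYPE isn't fetched):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class UncoveredLines {
  public static void main(String[] args) throws Exception {
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    // Skip fetching the DTD referenced by the report's DOCTYPE.
    factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
    Document doc = factory.newDocumentBuilder().parse(new File(args[0]));

    NodeList classes = doc.getElementsByTagName("class");
    for (int c = 0; c < classes.getLength(); c++) {
      Element cls = (Element) classes.item(c);
      NodeList methods = cls.getElementsByTagName("method");
      for (int m = 0; m < methods.getLength(); m++) {
        Element method = (Element) methods.item(m);
        NodeList lines = method.getElementsByTagName("line");
        for (int l = 0; l < lines.getLength(); l++) {
          Element line = (Element) lines.item(l);
          if ("0".equals(line.getAttribute("hits"))) {
            // Report package.Class, method, and line number in one pass, no scrolling needed.
            System.out.printf("%s.%s: line %s not covered%n",
                cls.getAttribute("name"),
                method.getAttribute("name"),
                line.getAttribute("number"));
          }
        }
      }
    }
  }
}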
I took a look at a sample Emma HTML report using Google Chrome and it was pretty accessible. I could tell which methods were fully tested and which weren't. Figuring out which lines were executed and which weren't was more difficult. I could tell a method wasn't 100% executed and would then navigate to it in the report. I then had to use the keystroke provided by my screen reader to announce the color of each line of code. I forget the exact color names, but I could tell which lines were and weren't executed since my screen reader listed them as having different colors. This worked but was slow, since I had to manually check each line of every method that wasn't completely executed; my screen reader can't automatically announce color changes. I'm not sure how your developer would do the equivalent since I don't know his exact assistive technology setup.
I have had a dig around, Antoine, as I also use SONAR and Cobertura on my projects and am intrigued by your problem. From what I can see, when you tell the ANT task to generate "html" output, you get all the line information you want, but as you've pointed out, it's not an easily parseable format (and possibly subject to change).
With SONAR I tell Cobertura to output "xml" which gives me a file named coverage.xml with the output. Unfortunately it does not include line-by-line data and I cannot see any ANT task parameters to include it from the Cobertura docs.
It makes sense to me that the file named cobertura.ser contains all of the data you require, but only the HTML report displays it for you. I believe the answer to your question may lie in trying to extract the required serialised data from cobertura.ser.
Looking at the source code I can see the following classes
net.sourceforge.cobertura.reporting.html.HTMLReport
net.sourceforge.cobertura.reporting.xml.XMLReport
What I suspect you can try to do is take a copy of HTMLReport as a base and write the same output as XML, which you can then parse for your own purposes (or just add the same method calls used by HTMLReport to XMLReport). I can see the string nbHitsUncovered in HTMLReport, so hopefully you only have one class to write.
I've googled around and can't see anyone having done this, but it looks like a useful enhancement.
How about using a GreaseMonkey script that searches for all the lines having the class nbHitsUncovered and adds a list/table containing the wanted information to the report?
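In the same spirit, but offline and matching the asker's own idea of transforming the HTML report: a hedged Java sketch using the third-party jsoup library (the CSS class name is the nbHitsUncovered mentioned above; the report's exact table layout may vary between Cobertura versions):

import java.io.File;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class HtmlReportUncovered {
  public static void main(String[] args) throws Exception {
    // args[0]: path to a Cobertura per-class HTML report page.
    Document doc = Jsoup.parse(new File(args[0]), "UTF-8");
    // "nbHitsUncovered" is assumed to mark the uncovered hit-count cells, as noted above.
    for (Element cell : doc.select(".nbHitsUncovered")) {
      // The enclosing row carries line number, hit count, and source text.
      System.out.println(cell.parent().text());
    }
  }
}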