What have we covered so far?
In my previous post I covered the reasons why software quality metrics should be collected and why improvements to the code should be made based on those metrics. In this post I’ll be illustrating how Sonar can fulfill the job of collecting metrics and driving decisions.
Sonar goes beyond just collecting and displaying metrics:
- Sonar can answer the following questions:
  - What are our most critical code quality issues?
  - Where is the highest concentration of code issues?
  - How many working hours will it take to fix the issues?
  - What does the metrics trend look like over the past year?
- Sonar can be used to track work tickets assigned to team members.
In short, Sonar helps us analyze the situation, take actions, and quantify the improvement.
Publicly available Sonar instances
Before I start enumerating and explaining some useful Sonar features, it’s worth noting that there are several running instances of Sonar that are publicly available. For example, you can try the nemo instance or the apache instance. These instances hold software metrics for real open-source projects and are updated periodically. Of course, organizations can restrict access to their own Sonar instances so that only authorized personnel can see those metrics.
Please note that wherever required I’ll be using the Apache Abdera project from the nemo instance to illustrate some features and functionality available in Sonar. To find the Apache Abdera project, you can do one of the following:
- Type Apache Abdera in the search box in the top-right corner of the nemo instance’s main page, then select the project from the drop-down list.
- You can use this link.
Some high-level features of Sonar
Sonar talks in terms of projects, not individual developers
Sonar measures the quality of software projects but it is not a “blaming tool”. As far as I know, you cannot use Sonar directly to blame individual developers for violations and/or poor software quality. You should never use Sonar or metrics to point the finger because that’s not what the tool is about.
Sonar and its metrics are about improving the code quality. That is, they are for positive action. After all, it is not an individual’s fault that you have a bad code base; it’s the whole team’s responsibility to detect and correct any “badness”.
Sonar leverages graphical representations
There is a wide variety of graphical representations that Sonar uses to present metrics in an intuitive manner. For example, Sonar uses pie charts to break down technical debt by category, timeline graphs to show quality trends over a period of time, treemaps to compare projects and packages along two metric values, word clouds to show the biggest risks or quickest wins, and so on.
Drilling down through metrics
This is my favorite feature. It enables me to drill down through metrics starting from the project level, through the library, package, and class levels.
If you open the Apache Abdera project in Sonar and click on the number of violations, you will see a neat drill-down arrangement. Click on a severity level and the other sections refresh accordingly; click on a violated rule and the remaining sections update as well. The same happens when you click on a library, package, or class.
I find this helpful when I want to solve a recurring rule violation. All I have to do is drill down to that particular rule, and Sonar will tell me all the libraries, packages, and classes affected. If I want to focus on one library, then I can drill down to it and ignore the rest. Drill down is also available for comments count, complexity, package tangle, code coverage, etc.
Wide variety of plugins
Thanks to the abundance of free plugins for Sonar, you can start using it effectively with little effort. If you check the plugin library, you will find plugins for analyzing different programming languages and for integrating with other tools like Crowd, Fortify, Eclipse, LDAP, and Hudson/Jenkins.
Timeline is a plugin that provides a powerful graphical representation of trends. For example, it may show you that your project held a consistent rules compliance of around 80% for about two months, then dropped sharply to 65% one week ago. You can even overlay different metrics on top of each other to see how they reacted to deadlines and milestones. If some metrics got worse around those events, you can come up with a strategy to avoid that dip next time.
This plugin is very useful to find cause-and-effect relationships between events and what they do to software quality.
Many ways to gather metrics
Currently, Sonar can gather metrics on your project by being invoked through Maven, Ant, command line (Sonar Runner), or CI Engines (e.g. Bamboo, Hudson, Jenkins, TeamCity, etc). You can find a complete list of those options and how to implement them by referring to the Sonar documentation.
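As a minimal sketch of the command-line route, a Sonar Runner analysis needs only a small sonar-project.properties file in the project root (the project key, name, and source directory below are illustrative, not from any real project):

```properties
# sonar-project.properties -- all values illustrative
sonar.projectKey=org.example:myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src/main/java
```

With the file in place, running the sonar-runner command from the project root performs the analysis and publishes the results to your Sonar instance.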
Quality Profiles and Alerts
Sonar has Quality Profiles that let you define “a set of source code requirements” against which your projects are evaluated. Each profile has an Alerts section where you can set up alerts on specific metrics.
For example, you can configure an alert that “warns” you if your project’s complexity per method exceeds 4 but raises an “error” if it exceeds 10. Warnings and errors are shown as yellow and red icons, respectively, next to the project name.
If you collect your metrics as part of a Maven or Ant build, you can configure the build to fail whenever Sonar raises an “error” alert after the analysis. That way, unless the Sonar analysis passes without errors, the build does not succeed and the code cannot be deployed.
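As a sketch for a Maven project (the server URL below is hypothetical), pointing the build at your Sonar server is a one-property affair in pom.xml:

```xml
<!-- pom.xml: tell the Sonar Maven plugin where the server lives
     (URL is illustrative) -->
<properties>
  <sonar.host.url>http://sonar.example.com:9000</sonar.host.url>
</properties>
```

Running `mvn clean install sonar:sonar` then triggers the analysis; with a build-breaking setup in place (e.g. the Build Breaker plugin on the server), an “error” alert makes that command fail, which is exactly what stops the deployment.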
Not just for Java
As the plugin library mentioned above suggests, Sonar is not limited to Java: language plugins let it analyze projects written in other programming languages as well.
Eclipse IDE integration
You can hook Eclipse up to your Sonar instance to get violation highlighting, synced-up metrics for each project, and more. You can find all the necessary information in the Sonar documentation.
Excellent documentation
When you bring a new tool into your workplace, it is vital to have resources and support throughout the lifetime of that tool. By now, you must have noticed how much I rely on the Sonar documentation; that’s how good it is.
The Sonar team has been doing an awesome job documenting everything you need to know about installing, running, managing, and upgrading Sonar. Moreover, you can stay up to date with new Sonar releases, discussions, and enhancements by following the team’s blog.
Useful metrics for Java projects
Sonar has lots of different metrics available, so you might want to focus your team’s effort on a manageable number of them. For example, you can summarize the metrics you care about in the following format:
- Unit test coverage always >= 80%,
- Complexity per method always <= 10,
- Package tangle <= 5 %, etc.
Moreover, you can use those rules in the Alerts section of your Sonar Quality Profiles so you know when you have crossed the “warning” or “error” thresholds.
Now let me enumerate some of the Sonar metrics that I find very useful for my Java projects. I won’t go into their details because people much smarter than me have sufficiently covered the importance of each of these metrics and how to improve them in your code. I’ll just ask you to take the time to explore what each metric means (Wikipedia is always a good place to start). However, I’ll link to the Sonar documentation on how those metrics are incorporated into Sonar.
- Cohesion and complexity measurements like LCOM4 and Cyclomatic Complexity.
- Unit tests statistics like Line/branch coverage and Success rate.
- Duplication percentages
- Rules compliance, which can cover potential bugs discovered by PMD, Checkstyle, FindBugs, etc.
- Architectural measurements like coupling between packages and Quality Index.
- Dependency management, which is not technically a metric per se but is still very helpful for cataloging and managing your third-party dependencies and the different versions used by analyzed projects. Unfortunately, it requires Maven dependency management, so it won’t work with Ivy dependencies.
- Comments statistics like the number and percentage of documented and undocumented API.
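To make the first bullet concrete, cyclomatic complexity counts the independent paths through a method. The hypothetical method below (not from any real project) has a complexity of 4: one for the method entry, plus one each for the for loop, the if, and the && operator:

```java
public class ComplexityDemo {

    // Cyclomatic complexity = 1 (entry) + 1 (for) + 1 (if) + 1 (&&) = 4
    public static int countPositiveEvens(int[] values) {
        int count = 0;
        for (int v : values) {            // decision point: loop
            if (v > 0 && v % 2 == 0) {    // decision points: if, &&
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 2 and 4 are the only positive even values here
        System.out.println(countPositiveEvens(new int[] {1, 2, 3, 4, -6})); // prints 2
    }
}
```

Methods whose complexity drifts past your threshold (10 in the list above) are prime candidates for extracting smaller, single-purpose methods.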
Sonar and Continuous Integration Pipeline
For a successful incorporation of Sonar (or any other metrics tool) into your software development shop, one issue is worth discussing at the outset: how to add Sonar to your continuous integration pipeline or development cycle. This is crucial because it will determine whether your team sticks with Sonar in the long run.
A typical development cycle involves pulling changes from source control, updating unit tests, making changes, making sure the unit tests pass, doing a complete build of the project on a CI server, deploying the artifact, and running acceptance tests, if applicable.
I’d argue that Sonar should be part of the “build” step, before you deploy. That is, configure the build process to invoke the Sonar analysis in such a way that the build breaks if the metric values are not good enough. Implementing the “breaking” logic is easy with the Quality Profiles covered earlier in this post. That way, you ensure the team isn’t introducing poor-quality code with every CI build.
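A hypothetical CI build step might chain the analysis and the deployment like this (deploy.sh stands in for whatever your actual deploy step is, and the build-breaking behavior assumes a setup like the one described above):

```shell
# Build, test, and analyze; if the analysis trips an "error"
# alert, mvn exits non-zero, the && short-circuits, and the
# deploy step never runs.
mvn clean install sonar:sonar && ./deploy.sh
```

The ordering is the point: quality gates sit between “compiles and passes tests” and “reaches an environment.”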
Until you start treating code quality violations the same way you treat failed unit tests, you will always be playing catch-up. Instead of going back to improve code quality after months of development, push good-quality code with every build.
Be careful, though: when defining thresholds in Quality Profiles, don’t go overboard with unrealistic values. Instead, define realistic thresholds that most of your projects already meet, so the few that don’t can be brought up to speed with ease.
The next post will cover how to incorporate Sonar into an Ant-based Java project.