Wednesday, October 26, 2016

Measuring Software Value Using a Team Health Assessment



Software development is a team effort. Agile software development, in particular, depends on a high level of communication between team members. To improve the business value they deliver, software development teams need to conduct regular self-assessments. By taking the time to conduct an in-depth assessment of the key areas that impact team performance and health, an organization can make modifications to its processes to enable continual improvement that can lead to increased business value.

In Agile, teams typically rely on sprint retrospectives to analyze their performance for continuous improvement. The challenge is that these events are team- and sprint-specific and often become wasteful ceremonies in that they don’t add any new value.

It is common for a team to reach a point where they have discussed and fixed the things they can fix, while the things they can't fix require organizational intervention, which is outside their span of control. It is easy, and probably correct, for teams in this situation to conclude that sprint retrospectives should be abandoned because, from a lean perspective, they are no longer adding value and so represent waste to be removed.

Over the years, our team has leveraged the AgilityHealth℠ Radar (AHR) TeamHealth Assessment as an event to review team dynamics on a quarterly basis. This structured, facilitated event is an opportunity for a more strategic review than the sprint retrospective typically allows.

There are five vital areas that can impact the health of an Agile team: Clarity, Performance, Leadership, Culture, and Foundation. Each should be carefully evaluated to help the team identify its strengths, areas of improvement, and top impediments to growth. From there, a growth plan outlining the target outcomes for the following few months can be developed.

The true value of an assessment like this comes from the open and honest conversations that take place, enabling the team to evaluate its performance and outcomes and continually improve its processes for the future.

Does your software development team regularly assess the team’s performance and make adjustments for future growth?  If so, is there a specific methodology your organization uses?

Mike Harris
CEO

This blog was originally posted at https://www.softwarevalue.com/insights/blog/posts/2016/october/measuring-software-value-using-a-team-health-assessment/.

Monday, October 10, 2016

How can I use SNAP to improve my estimation practices?



Scope of Report
This month's report will focus on how to improve estimation practices by incorporating the Software Non-functional Assessment Process (SNAP), developed by the International Function Point Users Group (IFPUG), into the estimation process.

Software Estimation
The Issue
Software development estimation is not an easy or straightforward activity. Software development is not like making widgets, where every deliverable is the same and the process is identical every time it is executed. Software development varies from project to project in how requirements are defined and what needs to be delivered. Projects can also vary in the processes and methodologies used, as well as in the technology itself. Given these variations, it can be difficult to come up with a standard, efficient, and accurate way of estimating all software projects.

The Partial Solution
Software estimation approaches have improved, but the improvements have not been widely adopted; many organizations still rely on a bottom-up approach based on expert knowledge. This technique involves looking at all of the tasks that need to be completed and using Subject Matter Experts (SMEs) to determine how much time each activity will require. Organizations often ask experts for input separately, but a Delphi method is also common. The Delphi method was developed in the 1950s by the RAND Corporation. Per RAND, "The Delphi method solicits the opinions of experts through a series of carefully designed questionnaires interspersed with information and feedback in order to establish a convergence of opinion." As the group converges, the theory goes, the estimate range narrows and becomes more accurate. This technique, like the similar Agile planning poker, is still used, but it often relies on expert opinion rather than data.
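To make the convergence idea concrete, here is a minimal Python sketch (the estimates below are invented for illustration; a real Delphi exercise uses structured questionnaires and qualitative feedback between rounds):

```python
import statistics

# Invented effort estimates (person-days) from five experts across
# three Delphi rounds; each round follows feedback on the group's answers.
rounds = [
    [20, 45, 80, 35, 60],   # round 1: wide disagreement
    [30, 42, 65, 38, 55],   # round 2: outliers move toward the group
    [38, 42, 50, 40, 48],   # round 3: opinions converge
]

for i, estimates in enumerate(rounds, start=1):
    spread = max(estimates) - min(estimates)
    print(f"round {i}: median = {statistics.median(estimates)} days, "
          f"spread = {spread}")
# The narrowing spread is the "convergence of opinion" RAND describes.
```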

As software estimation became more critical, other techniques began to emerge. In addition to the bottom-up method, organizations began to utilize a top-down approach, which involves identifying the total cost and dividing it among the various activities that need to be completed. Initially this approach, too, was based more on opinion than fact.

In both of the above cases the estimates were based on tasks and costs rather than on the deliverable. Most industries quantify what needs to be built/created and then based on historical data determine how long it will take to reproduce. For example, it took one day to build a desk yesterday so the estimate for building the same desk today will also be one day.

The software industry needed a way to quantify deliverables in a consistent manner across different types of projects, a measure that could be used along with historical data to obtain more accurate estimates. The invention of Function Points (FPs) made this possible. Per the International Function Point Users Group (IFPUG), FPs are a unit of measure that quantifies the functional work product of software development, expressed in terms of functionality seen by the user and measured independently of technology. That means FPs can be used to quantify software deliverables independently of the tools, methods, and personnel used on the project. They provide a consistent measure, allowing data to be collected, analyzed, and used for estimating future projects.

With FPs available, top-down methodologies improved. The technique involves quantifying the FPs for the intended project, then looking at historical data for projects of similar size to identify the average productivity rate (FP/Hour) and derive the estimate for the new project. However, as mentioned above, not every software development project is the same, so additional information is required to produce the most accurate estimate.
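A minimal sketch of that top-down calculation follows; the historical data and the ±30% similarity window are invented for illustration, not a prescribed method:

```python
# Hypothetical history: (project size in FPs, delivery rate in FP/Hour).
history = [(180, 0.11), (220, 0.09), (250, 0.10), (600, 0.05)]

def estimate_hours(new_fp, history, window=0.3):
    """Average the FP/Hour rates of past projects within +/-30% of the
    new project's size, then convert size to effort."""
    similar = [rate for size, rate in history
               if abs(size - new_fp) <= window * new_fp]
    if not similar:
        raise ValueError("no comparable projects in the history")
    avg_rate = sum(similar) / len(similar)   # FP per hour
    return new_fp / avg_rate                 # hours of effort

print(round(estimate_hours(200, history)))  # 2000 hours, from the three
                                            # similarly sized projects
```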

Although FPs provide an important missing piece of data to assist in estimation, they do not magically make estimation simple. In addition to FP size, the type of project (enhancement or new development) and the technology (web, client-server, etc.) have a strong influence on productivity. It is important to segment historical productivity data by FP size, type, and technology to ensure that the correct comparisons are being made. Beyond the deliverable itself, the methodology (waterfall, agile), the experience of personnel, the tools used, and the organizational environment can all influence the effort estimate. Most estimation tools have developed a series of questions surrounding these 'soft' attributes that raise or lower the estimate based on the answers. For example, if highly productive tools and reuse are available, then the productivity rate should be higher than average and thus require less effort; however, if the staff are new to the tools, the full benefit may not be realized. Most estimation tools adjust for these variances, which are otherwise intrinsic to an organization's historical data.
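As a hedged sketch of how such adjustment questions might feed an estimate (the attribute names and multipliers here are invented; real tools calibrate these factors against the organization's own data):

```python
# Illustrative soft-attribute multipliers (invented): values below 1.0
# mean higher-than-average productivity, values above 1.0 mean lower.
ADJUSTMENTS = {
    "high_reuse_available": 0.85,
    "team_new_to_tools":    1.15,
    "stable_requirements":  0.95,
}

def adjusted_effort(base_hours, answers):
    """Scale a baseline effort estimate by each applicable factor."""
    effort = base_hours
    for attribute, applies in answers.items():
        if applies:
            effort *= ADJUSTMENTS[attribute]
    return effort

# Reuse helps, but a team new to the tools gives some of that benefit back.
print(round(adjusted_effort(2000, {"high_reuse_available": True,
                                   "team_new_to_tools": True})))  # 1955
```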

At this point we have accounted for the functional deliverables and the tools, methods, and personnel involved. So what else is needed?

The Rest of the Story
Although FPs are a good measure of the functionality that is added, changed, or removed in a software development or enhancement project, there is often project work, separate from the functionality measured by FPs, that cannot be counted under the IFPUG rules. These are typically items defined as non-functional requirements. As stated in the IFPUG SNAP Assessment Practices Manual (APM), ISO/IEC 24765, Systems and Software Engineering Vocabulary defines non-functional requirements as "a software requirement that describes not what the software will do but how the software will do it. Examples include software performance requirements, software external interface requirements, software design constraints, and software quality constraints. Non-functional requirements are sometimes difficult to test, so they are usually evaluated subjectively."

IFPUG saw an opportunity to fill this estimation gap and developed the Software Non-functional Assessment Process (SNAP) as a method to quantify non-functional requirements.

SNAP

History
IFPUG began the SNAP project in 2008 by developing an overall framework for measuring non-functional requirements. Beginning in 2009, a team began to define rules for counting SNAP, and in 2011 the first release of the APM was published. Various organizations beta tested the methodology and provided data and feedback to the IFPUG team for statistical analysis. The current version, APM 2.3, includes definitions, rules, and examples. As with the early development of FPs, the rules will be adjusted as more SNAP data becomes available, improving accuracy and consistency.

SNAP Methodology
The SNAP methodology is a standalone process; however, rather than re-invent the wheel, the IFPUG team reused common definitions and terminology from the IFPUG FP Counting Practices Manual within the SNAP process. This also makes SNAP easier to understand for those who are already familiar with FPs.

The SNAP framework is comprised of non-functional categories that are divided into sub-categories and evaluated using specific criteria. Although SNAP is a standalone process it can be used in conjunction with FPs to enhance a software project estimate.

The following are the SNAP categories and subcategories assessed (per APM 2.3):

  1. Data Operations: Data Entry Validations; Logical and Mathematical Operations; Data Formatting; Internal Data Movements; Delivering Added Value to Users by Data Configuration
  2. Interface Design: User Interfaces; Help Methods; Multiple Input Methods; Multiple Output Methods
  3. Technical Environment: Multiple Platforms; Database Technology; Batch Processes
  4. Architecture: Component-Based Software; Multiple Input/Output Interfaces

Each subcategory has its own definition and assessment calculation, meaning each subcategory should be assessed independently of the others to determine its SNAP points. After all relevant subcategories have been assessed, the SNAP points are added together to obtain the total SNAP points for the project.

Keep in mind that a non-functional requirement may be implemented using one or more subcategories, and a subcategory can be used for many types of non-functional requirements. So the first step in the process is to examine the non-functional requirements and determine which categories/subcategories apply; only those are then assessed for the project.

With different assessment criteria for each subcategory, it is impossible to review them all in this report; however, the following is an example of how to assess subcategory 3.3, Batch Processes:
Definition: Batch jobs that are not considered functional requirements (they do not qualify as transactional functions) can be considered in SNAP. This subcategory allows for the sizing of batch processes that are triggered within the boundary of the application and do not result in any data crossing the boundary.

SNAP Counting Unit (SCU): User-identified batch job

Complexity Parameters:
  1. The number of Data Elements (DETs) processed by the job
  2. The number of Logical Files (FTRs) referenced or updated by the job
SNAP Points calculation: the number of FTRs determines the complexity level (Low, Average, or High), and each level assigns a rate of SNAP Points (SP) per DET; the full rate table is in the APM.

Result: The scheduling batch job uses 2 FTRs, so it is High complexity: 10 SP per DET × 25 DETs = 250 SP.
Each non-functional requirement is assessed in this manner for the applicable subcategories and the SP results are added together for the total project SNAP points.
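A rough Python illustration of that mechanic follows. Only the High rate of 10 SP per DET comes from the example above; the FTR threshold and the lower rate are invented placeholders, not APM values, so consult the APM for the authoritative table:

```python
# Sketch of assessing subcategory 3.3 (Batch Processes). The High rate
# (10 SP per DET) matches the article's example; the threshold and the
# lower rate are stand-ins for the APM's real complexity table.
def batch_process_snap_points(ftrs: int, dets: int) -> int:
    rate = 10 if ftrs >= 2 else 4   # High vs. assumed lower complexity
    return rate * dets

# The scheduling job from the example (2 FTRs, 25 DETs) plus a second,
# invented job; subcategory results are summed into the project total.
jobs = [("scheduling job", 2, 25), ("cleanup job", 1, 8)]
total_sp = sum(batch_process_snap_points(f, d) for _, f, d in jobs)
print(total_sp)   # 250 + 32 = 282 SNAP points from this subcategory
```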

SNAP and Estimation
Once the SNAP points have been determined, they are ready to be used in the software project estimation model. SNAP is used in the historical top-down method of estimating, similar to FPs. The estimator should take the total SNAP points for the project and look at historical organizational data, if available, or industry data for projects with similar SNAP points to determine the average productivity rate for non-functional requirements (SNAP/Hour). Once the SNAP/Hour rate is selected, the effort can be calculated by dividing the SNAP points by the SNAP/Hour productivity rate. It is important to note that this figure is just the effort for developing/implementing the non-functional requirements. The estimator will still need to develop an effort estimate for the functional requirements, by dividing the FPs by the selected FP/Hour productivity rate. Once these two figures are calculated, they can be added together to give the total effort estimate for the project.

Estimate example:
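Here is a minimal sketch of the arithmetic in Python; every size and productivity rate below is invented for illustration, where a real estimate would draw these rates from organizational or industry data:

```python
# Illustrative sizes and productivity rates -- not industry data.
fp_size   = 200    # function points counted for the project
snap_size = 282    # SNAP points assessed for the project

fp_per_hour   = 0.10   # historical FP/Hour productivity rate
snap_per_hour = 0.25   # historical SNAP/Hour productivity rate

fp_effort   = fp_size / fp_per_hour       # 2,000 hours (functional)
snap_effort = snap_size / snap_per_hour   # 1,128 hours (non-functional)

# Add the effort hours -- never the FP and SNAP sizes themselves.
print(f"total estimate: {fp_effort + snap_effort:,.0f} hours")
```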


Note that the SNAP points and the FPs are not added together, just the effort hours. SNAP and FPs are two separate metrics and should never be added together. It is also important to make sure that the same functionality is not counted under both SNAP and FPs, as that would be double counting. So, for example, if multiple input/output methods are counted in FPs, they should not be counted in SNAP.

This initial estimate is a good place to start; however, it is also good to understand the details behind the SNAP points and FPs to determine if the productivity rate should be adjusted. For instance, with FPs, an enhancement project that is mostly adding functionality would be more productive than a project that is mostly changing existing functionality. Similarly, with SNAP, different categories/subcategories may achieve higher or lower productivity rates. For example, a non-functional requirement for adding Multiple Input Methods would probably be more productive than non-functional requirements related to Data Entry Validations. These are the types of analyses that an organization should conduct with their historical data so that it can be used in future project estimations.

FPs have been around for over 30 years, so there has been plenty of time for data collection and analysis by organizations and consultants to develop industry trends; but it had to start somewhere. SNAP is a relatively new methodology and therefore has limited industry data available to organizations. As more companies implement SNAP, more data will become available to the industry to develop trends. That doesn't mean an organization needs to wait for industry data, however: an individual company can start implementing SNAP today, collecting its own historical data, conducting its own analyses, and improving its estimates. Organizational historical data is typically more useful for estimating projects anyway.

Conclusion:
An estimate is only as good as the information and data available at the time of the estimate. Given this, it is always recommended to use multiple estimation methods (e.g., bottom-up, top-down, Delphi, historical/industry data) to find a consensus for a reasonable estimate. Having historical and/or industry data on which to base an estimate is a huge advantage over 'guessing' what a result may be. Both FP/Hour and SNAP/Hour productivity rates can be used in this fashion to enhance the estimation process. Although the estimation process still isn't automatic and requires some analysis, having data is always better than not having data. Being able to document an estimate with supporting data is also useful when managing projects throughout the life cycle and assessing results after implementation.

Sources:
  • RAND Corporation, http://www.rand.org/topics/delphi-method
  • Counting Practices Manual (CPM), Release 4.3.1; International Function Point Users Group (IFPUG), https://www.ifpug.org/
  • SNAP Assessment Practices Manual (APM), Release 2.3; International Function Point Users Group (IFPUG), https://www.ifpug.org/

This blog was originally posted at https://www.softwarevalue.com/insights/blog/posts/2016/october/how-can-i-use-snap-to-improve-my-estimation-practices/.

Monday, October 3, 2016

The Magic Quadrant for Software Test Automation


One of the most fundamental questions test engineers ask before starting a new project is what tools they should use to help create their automated tests. Luckily, Gartner issues a yearly report to address this issue. This report, "Magic Quadrant for Software Test Automation," focuses specifically on functional software test automation and the UI automation facilities of tools. The use cases the report considers for each tool include:
  • They must support mobile applications
  • They must support responsive design
  • They must support packaged applications
With those use cases as evaluation criteria, Gartner evaluated 12 major vendors:
  1. Automation Anywhere
  2. Borland
  3. Hewlett Packard Enterprise
  4. IBM
  5. Oracle
  6. Original Software
  7. Progress
  8. Ranorex
  9. SmartBear
  10. TestPlant
  11. Tricentis
  12. Worksoft
As part of its analysis, Gartner placed each vendor in one of four categories:
  1. Leaders – Those who support all three use cases.
  2. Challengers – Those who have strong execution but typically only support two of the use cases.
  3. Visionaries – Those who generally focus on a particular test automation problem or class of user.
  4. Niche Players – Those who provide unique functions to a specific market or use case.
Beyond that, the vendors were assessed by their ability to execute and their completeness of vision. In short, ability to execute is ultimately the ability of the organization to meet its goals and commitments. Completeness of vision is the ability of the vendor to understand buyers’ wants and needs and successfully deliver against them.


The resulting quadrant graphic is presented in the report itself. It's important to mention that Gartner notes that most organizations typically have more than one automation tool provider. In addition, many of the solutions are still maturing and will continue to mature over time.

Gartner updates the report on an annual basis, and it's valuable to any organization that does testing. Testing, as we often say at DCG, is a key part of the development process, but one that is often overlooked. The information in this report can enable organizations to make educated choices about software vendors, resulting in improved software quality and execution.

Read the article: “Magic Quadrant for Software Test Automation.”

Mike Harris
CEO

This blog was originally posted at https://www.softwarevalue.com/insights/blog/posts/2016/october/the-magic-quadrant-for-software-test-automation/.