Thursday, March 23, 2017

Using Software Value to Drive Organizational Transformation



I was delighted to read a thought leadership article from McKinsey recently, “How to start building your next-generation operating model,” that emphasizes some key themes that I have been pushing for years (the quotes below are from the article):
  • The importance of orienting the organization around value streams to maximize the flow of business value – “One credit-card company, for example, shifted its operating model in IT from alignment around systems to alignment with value streams within the business.”
  • Perfection is the enemy of good enough – “Successful companies prioritize speed and execution over perfection.”
  • Continuous improvement relies on metrics to identify which incremental, experimental improvements work and which don’t.  Benchmarking and trend analysis help to prioritize areas where process improvement can offer the most business value – “Performance management is becoming much more real time, with metrics and goals used daily and weekly to guide decision making.”
  • Senior leaders “hold themselves accountable for delivering on value quickly, and establish transparency and rigor in their operations.”
  • “Leading technology teams collaborate with business leaders to assess which systems need to move faster.”
There is one “building block” for transformation in the article to which I am a recent convert, so kudos to the McKinsey team for including it in this context.  Their “Building Block #2” is “Flexible and modular architecture, infrastructure and software delivery.”  We are all familiar with the flexible infrastructure that cloud provides, but I have been learning a lot recently about the flexible, modular architecture and software delivery for application development and application integration that is provided by microservices frameworks such as the Anypoint Platform™ from MuleSoft.

While they promote organizing IT around business value streams, the McKinsey authors identify a risk to be mitigated: value streams will need to build up software, tools and skills specific to each value stream.  This may run contrary to the tendency in many organizations to make life easier for IT by picking a standard set of software, tools and skills across the whole organization.  I agree that it would be a shame indeed if the agile and lean principles that started life in IT software development were constrained by legacy IT attitudes as those principles roll out into the broader organization.

There are a lot more positive ideas for organizational transformation in the article, so I recommend that you take a few minutes to read it.  My only small gripe is that while the authors emphasize organizing around value throughout, they do not mention prioritizing by business value.  Maybe at the high level at which McKinsey operates within organizations, that concept is taken for granted.  My experience is that as soon as you move away from the top level, if business value priorities are not explicit, then managers and teams will use various other criteria for prioritization and the overall results may be compromised.

This blog was originally posted at https://www.softwarevalue.com/insights/blog/posts/2017/march/using-software-value-to-drive-organizational-transformation/.

Thursday, March 16, 2017

Algorithms: What are They Worth and What Might They Cost You?

Every so often, I read an article that gets me thinking in a different way about software value and software risk.  Danilo Doneda of Rio de Janeiro State University and Virgilio Almeida of Harvard University recently published an article entitled “What is Algorithm Governance?”[1]
Doneda and Almeida suggest that the time may have come to apply governance to algorithms because of the growing risks of intentional or unintentional, “… manipulation, biases, censorship, social discrimination, violations of privacy and property rights and more,” through the dynamic application of a relatively static algorithm to a relatively dynamic data set.  
By way of example, we have probably all experienced the unintended consequences of the application of a reasonably well understood algorithm to new data.  We all have a basic grasp of what the Google search algorithm will do for us but some of you might have experienced embarrassment like mine when I typed in a perfectly innocent search term without thinking through the possible alternative meanings of that set of words (No, I’m not going to share).  At the other end of the spectrum from the risk of relatively harmless misunderstandings, there is a risk that algorithms can be intentionally manipulative – the VW emission control algorithm that directed different behavior when it detected a test environment is a good example. 
For those of us who deal with outsourcing software development, it is impossible to test every delivered algorithm against every possible set of data and then validate the outcomes. 

If we consider software value from a governance perspective, it should be desirable to understand how many algorithms we own and what they are worth.  Clearly, the Google search algorithm is worth more than my company.  But are there any algorithms in your company’s software that represent trade secrets or even simple competitive differentiators?  Which are the most valuable?  How could their value be improved?  Are they software assets that should be inventoried and managed?  Are they software assets that could be sold or licensed?  If data can be gathered and sold, then why not algorithms?
From a software metrics perspective, it should be easy to identify and count the algorithms in a piece of software.  Indeed, function point analysis might be a starting point using its rules for counting unique transactions, each of which presumably involves one or more algorithms, though it would be necessary to identify those algorithms that are used by many unique transactions (perhaps as a measure of the value of the algorithm?).  Another possible perspective on the value of an algorithm might be based on the nature of the data it processes.  Again, function points might offer a starting point here, but Doneda and Almeida offer a slightly different perspective.  They mention three characteristics of the data that feeds “Big Data” algorithms, “… the 3 V’s: volume (more data are available), variety (from a wider number of sources), and velocity (at an increasing pace, even in real time).”  It seems to me that these characteristics could be used to form a parametric estimate of the risk and value associated with each algorithm.
It is interesting to me that these potential software metrics appear to scale similarly for software value and software risk.  That is, algorithms that are used more often are more valuable yet carry with them more risk.  The same applies to algorithms that are potentially exposed to more data. 
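To make that idea a little more concrete, here is a minimal sketch of how an algorithm inventory could be scored parametrically on the 3 V’s plus reuse.  The weights, the 1-to-5 scales and the example algorithm names are my own assumptions for illustration, not anything proposed by Doneda and Almeida.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmProfile:
    """Hypothetical inventory record for one algorithm."""
    name: str
    volume: int    # 1-5: how much data it processes
    variety: int   # 1-5: how many distinct data sources feed it
    velocity: int  # 1-5: how close to real time it runs
    usage: int     # 1-5: how many transactions/applications reuse it

def parametric_score(a: AlgorithmProfile) -> dict:
    # Illustrative assumption: value and risk scale with the same drivers,
    # so one weighted sum is reported for both.
    raw = 0.3 * a.volume + 0.2 * a.variety + 0.2 * a.velocity + 0.3 * a.usage
    return {"algorithm": a.name, "value_score": raw, "risk_score": raw}

portfolio = [
    AlgorithmProfile("credit-scoring", volume=4, variety=3, velocity=2, usage=5),
    AlgorithmProfile("search-ranking", volume=5, variety=4, velocity=5, usage=4),
]

# Rank the inventory so the most valuable (and riskiest) algorithms surface first.
for entry in sorted((parametric_score(a) for a in portfolio),
                    key=lambda s: s["value_score"], reverse=True):
    print(entry)
```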
[1] Doneda, Danilo & Almeida, Virgilio A.F. “What is Algorithm Governance?” IEEE Computer Edge, December 2016.

Mike Harris, CEO

Monday, February 27, 2017

How Does Cybersecurity Drive the Business Value of Software?



Software brings tremendous value to organizations, but in today’s day and age, it also carries significant risk.  Malicious cyberattacks continue to rise at a rapid pace.  According to the Identity Theft Resource Center and CyberScout, data breaches increased by 40 percent in 2016 – and that was after a record year in 2015.  With the ongoing upsurge in data breaches, software can be seen by many as a potential liability for an organization.  We are such a data-driven economy today that criminals have realized they can cause serious damage to companies, governments and other entities by hacking into their information systems and stealing, corrupting or deleting valuable data.  These breaches are extremely costly to organizations – not only financially, but also to their reputations.
Just look at Target.  In 2013, hackers stole payment card and personal data from as many as 110 million customers, costing the retail giant approximately $162 million, in addition to a decrease in sales and a black eye to its reputation (for a short period of time).
It’s no wonder that “94 percent of CISOs are concerned about breaches in their publicly facing assets in the next 12 months, particularly within their applications,” according to a January 2017 Bugcrowd study.  However, despite these concerns, another survey of over 500 IT decision makers found that 83 percent of the respondents actually release their code before testing it or resolving known weaknesses (Veracode, September 2016).
Software is typically at the foundation of cybersecurity attacks.  In fact, the Software Engineering Institute stated that 90 percent of reported security incidents result from exploits against defects in the design or code of software.  If a network router is hacked, most likely the hacker went through the router’s software, not its hardware.  These breaches can pose such a significant threat to an organization’s value that software developers must make application security an integral part of the software development lifecycle.
By finding and fixing vulnerabilities early in the software development lifecycle, there is less risk to the business and more potential for increased business value from the software.  For example, Adobe Flash Player is a product used by many websites to enable interactivity and multimedia.  In 2015, it had more than 300 patches (TechBeacon’s Application Security Buyer’s Guide).  Developing these patches is a resource drain (both time and money).  On balance, though, the risk Adobe would run by not providing these patches could be significant and could negatively impact Adobe’s value as well as the value of the organizations using its product.
So, if an application has, let’s say, 500 known weaknesses, the organization may not have the time or money to fix all of them before an imminent release.  They need to collaborate with the business unit to determine which vulnerabilities pose the highest risk to the business (negative business value) and which, if remediated, will deliver the most value to the business.  It is not unusual for developers to fix those vulnerabilities that are easiest to resolve; however, it is critical to take a step back and prioritize identified vulnerabilities based on business value.
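To make that prioritization concrete, here is a minimal sketch of one way to rank fixes by business risk reduced per unit of effort and then fill the time available before release.  The vulnerability names, risk scores, effort figures and capacity are invented for illustration, not drawn from any particular tool or scoring standard.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    """Hypothetical record for one known weakness in an application."""
    ident: str
    business_risk: int    # 1-10: potential negative business value if exploited
    fix_effort_days: float

def prioritize(vulns, capacity_days):
    """Order fixes by risk reduced per day of effort, then fill the time available."""
    ranked = sorted(vulns, key=lambda v: v.business_risk / v.fix_effort_days,
                    reverse=True)
    planned, remaining = [], capacity_days
    for v in ranked:
        if v.fix_effort_days <= remaining:
            planned.append(v.ident)
            remaining -= v.fix_effort_days
    return planned

backlog = [
    Vulnerability("SQLi-checkout", business_risk=9, fix_effort_days=3),
    Vulnerability("XSS-help-page", business_risk=4, fix_effort_days=0.5),
    Vulnerability("weak-hash-legacy", business_risk=7, fix_effort_days=8),
]
print(prioritize(backlog, capacity_days=5))  # fix the highest-value risks first
```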
This post originally appeared at https://www.softwarevalue.com/insights/blog/posts/2017/february/how-does-cybersecurity-drive-the-business-value-of-software/

Monday, February 13, 2017

Function Points and Agile

I participated in an interesting conversation on Function Points and Agile with members of the software development group at a federal agency recently.  We, the DCG team, were explaining how we would start the process of measuring how much value they are delivering from software development by measuring how much functionality they are delivering in function points.  For an organization with immature metrics and, perhaps, a lack of user trust, this starting point takes the question of productivity off the table and allows us to move on to end-user value delivery.
All of the participants in the meeting quickly recognized the value of having a standard metric, function points, to measure the size of the functionality being delivered (and, with SNAP, the non-functional size too), but I could see on their faces the sort of trusting disbelief that might be associated with my pulling a rabbit out of my bag.  Some of the participants in the meeting were not familiar with function points and asked for a quick, five-minute explanation.  I get asked this a lot, so here it is (before I get inundated with corrections – I know – this is an over-simplification):
Imagine a software application and draw an imaginary boundary around it in your mind to include all its functionality but not the functionality of other applications that it communicates with or draws information from.  Now consider the diagram below.
[Diagram: an application boundary showing the transaction and data function types described below]
From a user perspective, looking at the application from outside the application boundary, I can interact with the application in three ways, called transaction functions: external inputs (EIs), external outputs (EOs) and external inquiries (the same as an input and output but with no change of data or state – EQs).  From within the application, I can access data in two places – inside the application boundary or outside the application boundary.  My interactions with these files are the two types of data functions: internal logical files (ILFs), where data is stored within the application boundary, and external interface files (EIFs), where data is stored outside our application boundary.
Most of you will be able to easily imagine that these five types of user interaction with our application can be more or less complex.  If I want to produce a function point count, the next step is to consider the first of the transactions that the user wishes to perform on the application (as defined in the requirements, user stories or whatever) and to identify how many of each of the five function types is involved in the transaction and how complex that involvement is (low, average or high).  Predetermined rules govern the weights that apply to each function type based on the complexity of the function in this transaction.  With all this information gathered, we can calculate the number of function points using the simple table shown below.
Function Point Counting Weights
Type    Low         Average      High         Total
EI      __ x 3   +  __ x 4   +   __ x 6   =   ____
EO      __ x 4   +  __ x 5   +   __ x 7   =   ____
EQ      __ x 3   +  __ x 4   +   __ x 6   =   ____
ILF     __ x 7   +  __ x 10  +   __ x 15  =   ____
EIF     __ x 5   +  __ x 7   +   __ x 10  =   ____
                                 Total    =   ____
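For readers who prefer code to a worked table, here is a minimal sketch of the same arithmetic: count the functions identified at each complexity level, multiply by the weights above and sum to get the unadjusted function point count.  The example counts are invented for illustration, and the sketch ignores the value adjustment factor.

```python
# IFPUG weights by function type and complexity, matching the table above.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},
    "EO":  {"low": 4, "average": 5,  "high": 7},
    "EQ":  {"low": 3, "average": 4,  "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(counts: dict) -> int:
    """counts maps function type -> {complexity: number of functions identified}."""
    return sum(WEIGHTS[ftype][cplx] * n
               for ftype, by_cplx in counts.items()
               for cplx, n in by_cplx.items())

# Invented counts for a small user story, purely for illustration.
story_counts = {
    "EI":  {"low": 2, "average": 1, "high": 0},
    "EO":  {"low": 1, "average": 0, "high": 1},
    "EQ":  {"low": 1, "average": 0, "high": 0},
    "ILF": {"low": 1, "average": 0, "high": 0},
    "EIF": {"low": 0, "average": 1, "high": 0},
}
print(unadjusted_function_points(story_counts))  # 38
```

The same calculation could be run story by story and the results summed, which is the aggregation-for-management use case discussed below.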

One of the participants offered a very perceptive observation: “Isn’t this a lot of work for every user story in Agile?”  It could be.  In practice, though, by the time a user story is defined and understood, the software problem has been decomposed to the point where identifying the FPA functions and complexities is fairly simple.  That said, we don’t recommend this for all agile team members.  Story points work fine for most agile teams.  Where function points (and SNAP) can and must be used for Agile is where there is a need to aggregate the delivered functionality (and non-functional elements) into higher-level metrics for reporting to management.  This level of function point analysis is often better done by a separate, central team rather than by the agile team members themselves.
This post was originally published at https://www.softwarevalue.com/insights/blog/posts/2017/january/function-points-and-agile/

Wednesday, February 8, 2017

Software Vendor Management and Code Quality

Outsourcing software development projects requires vigilance in order to realize the anticipated gains.  The hard-fought negotiations to ensure a little bit less cost for the client with a worthwhile profit for the vendor are over for another year or two and the actual work can (re)commence.
What impact will the new software development outsourcing contract have on the behavior of the vendor? 
Probably the vendor will be looking to save costs to regain lost margin.  With the best intentions in the world, this probably means quality is at risk, even if only in the short term.  Why?  Because the vendor will probably choose to do one, or all, of the following: push more work through the same team; introduce new, cheaper resources to the team; or cut back on testing.
How can a client monitor for these software vendor management changes? 
First and foremost, you need good data.  It is not helpful to start to gather data after you think you might have detected a problem with delivered code.  The only data that will be useful in a discussion about diminishing quality from development outsourcing is trend data (I will return to this point at the end).  That means that the client must be capturing and analyzing data continuously – even in the good times.  If you tell me that the quality of my code has dropped off recently, I will not believe you unless you can show me concrete data showing when and how it was better before.
What sort of data? 
The level of defects found by severity in any acceptance testing should be included.  However, with many clients these days having only limited software development capabilities themselves, I would also recommend that all delivered code be passed through a reputable static code analysis tool such as CAST, SonarQube or Klocwork.  These tools provide a deeper analysis of the quality of the code, new metrics and, by comparison with previous runs on previous code deliveries, the impact of the current code delivery – did it improve or diminish the overall quality of the application being worked on?  Clearly, the former is desirable and the latter is a cause for discussion.  Some care needs to be taken before diving headlong into an untried static code analyzer.  Poor examples of the breed tend to generate many false positives – sometimes so many that the credibility and value of the tool is lost.
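As a minimal sketch of the trend-data point, the fragment below compares the latest delivery’s quality metric against the average of earlier deliveries.  The quarterly figures, the metric (high-severity findings per thousand lines changed) and the 25 percent tolerance are all invented for illustration; in practice the numbers would come from your acceptance testing and static analysis tooling.

```python
from statistics import mean

# Hypothetical per-delivery quality data: high-severity findings per 1,000
# lines changed, gathered continuously from acceptance testing and/or
# static analysis, delivery after delivery.
deliveries = {
    "2016-Q1": 1.8, "2016-Q2": 2.1, "2016-Q3": 1.9, "2016-Q4": 2.0,
    "2017-Q1": 3.4,   # first delivery under the new contract
}

baseline = mean(list(deliveries.values())[:-1])   # trend before the new contract
latest_label, latest = list(deliveries.items())[-1]

print(f"Baseline: {baseline:.2f} high-severity findings per KLOC changed")
print(f"{latest_label}: {latest:.2f}")
if latest > 1.25 * baseline:   # illustrative tolerance, not a contractual SLA
    print("Quality has degraded beyond the agreed expectation; "
          "raise it in the vendor review armed with this trend data.")
```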
From personal experience, I also like to see the results of formal code reviews carried out on the code by the developer and one of his colleagues.  To quote a Rogue Wave white paper, “The value of code review is unarguable, which explains why they’re mandated by 53% of today’s software development teams.”  Many years ago, during my time managing a large software development group at Sanchez Computer Associates (now part of FIS), we faced the challenge of maintaining and improving code quality on our complex core product while increasing the number of developers to meet demand.  Code reviews seemed to be a good answer because we had a group of very experienced developers who could teach and mentor the newcomers.  The problem was that the old hands were just as much in demand to get code out of the door, so they didn’t have the time to review all the code being produced by everyone else.
They, not I, came up with a good compromise.  They devised a set of programming standards in the form of a checklist that every programmer, including the most experienced developers, would apply to their code before unit test.  This caught a lot of minor problems through the simple repetitive reminder exercise.  Next, the programmer would do a quick review of their checklist and code with a colleague who could do quick “spot checks.”  Finally, if any coding defects were discovered in subsequent test or use, the lessons from these were captured in an updated checklist.  From a software vendor management perspective, I see the collection and review of these checklists as a form of commitment from individual team members that their code is “done.”
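Part of such a checklist can be automated so the repetitive reminders cost nothing.  The sketch below shows one hypothetical way to wire a few illustrative rules into a Git pre-commit hook; the rules themselves are invented examples, not the Sanchez checklist, and the judgment-call items would still be reviewed with a colleague as described above.

```python
#!/usr/bin/env python3
# Hypothetical subset of a team coding-standards checklist, run automatically
# on every commit (saved as .git/hooks/pre-commit and made executable).
import os
import re
import subprocess
import sys

CHECKS = [
    ("no stray debug prints", re.compile(r"\bprint\(['\"]DEBUG")),
    ("no TODOs without an owner, e.g. TODO(name)", re.compile(r"#\s*TODO(?!\()")),
    ("no lines over 120 characters", re.compile(r"^.{121,}$")),
]

def staged_python_files():
    """List the Python files staged for this commit."""
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f.endswith(".py") and os.path.exists(f)]

failures = []
for path in staged_python_files():
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for rule, pattern in CHECKS:
                if pattern.search(line):
                    failures.append(f"{path}:{lineno}: {rule}")

if failures:
    print("\n".join(failures))
    sys.exit(1)   # block the commit until the checklist items are addressed
```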
Returning to my point about trend data being the only currency for a software vendor management discussion: in my experience, these discussions proceed very differently if the data collected before the contract (re)negotiation are used to set some expectations in the contract.  Not necessarily service level agreements (SLAs), because these may be reserved for more important issues such as cost, productivity or customer satisfaction, but certainly the recording of an expectation that quality metrics will meet or exceed some average expectations based on prior performance from this software vendor (or the one they are replacing).
This post was originally published at https://www.softwarevalue.com/insights/blog/posts/2017/january/software-vendor-management-and-code-quality/

Wednesday, February 1, 2017

Microservices in Software Architecture

Software value can take many forms but the ability to respond quickly and flexibly to new business challenges separates “just so” software architecture from high-value software architecture.  To this end, over the past 20 years, we have seen many steps down the path from monolithic applications to client-server to service-oriented architectures (SOA).  Now, organizations seeking to maximize the business value of their software architectures are adopting microservices architectures.

Microservices, as the name suggests, should represent the smallest unit of functionality that aligns to a core business capability.

That’s not to say that each business process or transaction is a single microservice but rather that business processes and transactions are “composed” using microservices.  Sounds like SOA?  Well, yes, it did to me too, at first.  The major difference, I think, is that this time the industry has got out ahead of the curve, learned from the challenges that we all had/have with SOA and built the necessary infrastructure to standardize and support the microservices from the beginning.  For example:
  • Microservices APIs are standardized.
  • Microservices are natively able to communicate with each other through industry-wide adoption of pre-existing standards like HTTP and JSON (a minimal sketch of such a service follows this list).
  • Microservices can be formally defined using standards like the RESTful API Modeling Language (RAML) so that developers reusing the microservices can depend on the functionality contained within the microservice and resist the urge to rewrite their own version “just in case.”  Indeed, a collaboration hub like MuleSoft’s Anypoint Exchange encourages merit-based reuse of microservices by capturing the reviews and ratings of other developers who have used that microservice.
  • Microservices can be implemented in different programming languages.
  • Tools are available to manage the complexity of microservices e.g. Mulesoft Anypoint Platform.
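As a rough illustration of the second bullet, here is a minimal sketch of a single-capability microservice that exposes its one function over HTTP with a JSON payload, using only the Python standard library.  The currency-conversion capability, the endpoint and the exchange rate are invented for illustration, and this is not intended to represent the MuleSoft approach.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Minimal sketch of a single-capability microservice: it owns one small piece
# of functionality (currency conversion, invented for illustration) and exposes
# it over plain HTTP with a JSON payload, so other services can compose it.
RATES = {"USD_EUR": 0.92}   # hypothetical data owned by this service

class ConvertHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /convert?amount=100  ->  {"amount_eur": 92.0}
        query = parse_qs(urlparse(self.path).query)
        amount = float(query.get("amount", ["0"])[0])
        body = json.dumps({"amount_eur": amount * RATES["USD_EUR"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ConvertHandler).serve_forever()
```

Composing a business transaction then becomes a matter of calling several such services, which is where the management tooling in the last bullet earns its keep.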
The last bullet in the list above hints at some of the challenges of a microservices architecture.  Development needs to be highly automated, with continuous integration and automated deployment, to keep track of all the microservices that need to be composed into a particular application.  However, the adoption of a microservices approach also requires strong discipline from developers and the DevOps team.  Fortunately, the “small is beautiful” nature of most microservices means that the development teams can (and should) be small, so team discipline and communication can be maximized.

Implementing a microservices architecture is not something to try on your own for the first time.

There are a number of companies that have already developed strong experience in architecting and developing microservices, including our own Spitfire Group, which has completed a number of implementations, including a back-office upgrade for a real estate firm.
I believe that organizations should seriously consider enhancing the business value of their software by implementing a microservices architecture for their “leading edge” products or services.  By “leading edge,” I mean those software-based products or services that are most subject to change as the business environment changes.  They are probably customer-facing applications which have to respond to competitive changes in weeks, not months.  They are probably going to be applications whose software value rests on their being fit for purpose all the time.
This post was originally published at https://www.softwarevalue.com/insights/blog/posts/2017/january/microservices-in-software-architecture/

Saturday, January 28, 2017

How Software Estimation Impacts Business Value

Software estimation, in simple terms, is the prediction of the cost, effort and/or duration of a software development project based on some foundation of knowledge.  Once an estimate is created, a budget is generated from the estimate and the flow of activity (the planning process) runs from the budget.

Software estimation can significantly impact business value because it impacts business planning and budgeting.

One challenge is that most organizations have a portfolio of software development work that is larger than they can accomplish, so they need a mechanism to prioritize the projects based on the value they deliver to the business.  This is where estimation can help: estimates predict the future value of a project to the business and the cost of the project in resources and time.  Unfortunately, the estimates are often created by the people performing the actual day-to-day work, not by estimation experts.  Worse, new estimates from the people doing the work are typically based on their recall of previous estimates, not on previous project actuals – very few organizations take the time to report the actuals after a project is completed.  To most accurately estimate a software development project’s future business value, it is best to generate the estimate from the actuals of similar past projects and statistical modelling of the parameters that are different for the next project.
Of course, an estimate is only an estimate, no matter who develops it.  You can’t predict all the factors that may require modifications to the plan.  This is where the estimation cone of uncertainty comes in.  The cone starts wide because there is quite a bit of uncertainty at the beginning around the requirements of a project.  As decisions are made and the team discovers some of the unknown challenges that a project presents, the cone of uncertainty narrows towards the final estimate.
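As a rough illustration, the sketch below applies the commonly cited cone-of-uncertainty multipliers (after Boehm and McConnell) to a single-point effort estimate to show how wide the plausible range is at each stage; the 400-day baseline is invented for illustration.

```python
# Commonly cited cone-of-uncertainty multipliers (after Boehm/McConnell);
# the baseline effort figure below is invented for illustration.
CONE = {
    "initial concept":       (0.25, 4.0),
    "approved definition":   (0.50, 2.0),
    "requirements complete": (0.67, 1.5),
    "design complete":       (0.80, 1.25),
    "detailed design":       (0.90, 1.10),
}

baseline_effort_days = 400   # hypothetical single-point estimate

for phase, (low, high) in CONE.items():
    low_days = baseline_effort_days * low
    high_days = baseline_effort_days * high
    print(f"{phase:<22} {low_days:6.0f} to {high_days:6.0f} days")
```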

In regards to business value, the cone of uncertainty is significant because of the impact that the rigid adoption of early estimates can have on the budgeting and planning processes, especially if the software development effort is outsourced.

I see software estimation as both a form of planning and an input to the business planning process.  However, there is a significant cross-section of the development community that believes #NoEstimates is the wave of the future.  This is a movement within the Agile community based on the premise that software development is a learning process that will always involve discovery and be influenced by rapid external change.  They believe that this dynamic environment of ongoing change makes detailed, up-front plans a waste of time because software estimates can never be accurate.  Using #NoEstimates techniques requires breaking down stories into manageable, predictable chunks so that teams can deliver value predictably.  That predictable delivery gives organizations a tool to forecast delivery.  In my view, the #NoEstimates philosophy isn’t really “no estimating” – it is just estimating differently.
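As a rough illustration of that throughput-based forecasting, here is a minimal sketch that projects how many iterations remain from historical story throughput.  The throughput history and backlog size are invented for illustration, and #NoEstimates practitioners often use richer techniques such as Monte Carlo simulation.

```python
from statistics import mean

# Hypothetical throughput history: stories completed in each recent two-week
# iteration, with stories already broken down into similarly sized chunks.
throughput_per_iteration = [7, 9, 8, 6, 8, 9]
remaining_stories = 46

avg = mean(throughput_per_iteration)
worst = min(throughput_per_iteration)

print(f"Likely finish:  {remaining_stories / avg:.1f} iterations")
print(f"Conservative:   {remaining_stories / worst:.1f} iterations")
```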
Whether you use classic estimation methodologies that leverage plans and performance against those plans to generate feedback and guidance, or follow the #NoEstimates mindset that uses working software and throughput measures for the same purpose, the goal is usually the same.  Both are a form of planning and an input to the business planning processes that aim to drive the business value of each software development initiative.
This post originally appeared at https://www.softwarevalue.com/insights/blog/posts/2017/january/how-software-estimation-impacts-business-value/