Response to Stop 29119 Petition

ISO 29119 Background

Until last year there was no comprehensive set of international software testing standards. There were standards that touched upon software testing, but many of these overlapped and contained what appeared to be contradictory requirements, with conflicts in definitions, processes and procedures. There were some useful IEEE testing standards (e.g. IEEE 829, IEEE 1028) and national standards (e.g. BS 7925-1/-2), but there were large gaps in the coverage of software testing, such as organizational-level testing, test management and non-functional testing, where no useful standards existed at all. This meant that consumers of software testing services (and testers themselves) had no single source of information on good testing practice.

Given these conflicts and gaps, developing an integrated set of international software testing standards with far wider coverage of the testing discipline was a pragmatic way to help organizations and testers. Ideally, this initiative would not re-invent the wheel, but build upon the best of the available standards; hence the motivation for the ISO/IEC/IEEE 29119 set of standards.

In May 2007 ISO formed a working group to develop new standards on software testing - a new area for ISO. The scope of this initial work (ISO/IEC/IEEE 29119 parts 1, 2, 3 & 4) was largely defined by the existing IEEE and BSI standards (which they would replace), although it was clear from the outset that a completely new 'Test Processes' standard would be required, in particular to ensure that agile life cycles and exploratory testing were considered, as well as more traditional approaches to software projects and to testing. Subsequently, work started on a 'Test Assessment' standard (ISO/IEC 33063) based on the 'Test Processes' standard. Much later (in 2012) a standard on Keyword-Driven Testing (ISO/IEC/IEEE 29119-5) was begun, and a proposal for a standard on 'Work Product Reviews' is currently under ballot.

The current set of standards handled by WG26, the joint ISO/IEC/IEEE working group on software testing, is shown below:

  • ISO/IEC/IEEE 29119-1, Concepts and Definitions

  • ISO/IEC/IEEE 29119-2, Test Processes

  • ISO/IEC/IEEE 29119-3, Test Documentation

  • ISO/IEC/IEEE 29119-4, Test Techniques

  • ISO/IEC/IEEE 29119-5, Keyword-Driven Testing

  • ISO/IEC 33063, Test Process Assessment

  • Proposed standard on 'Work Product Reviews' (currently under ballot)

The first three of these standards were published over a year ago, and were available as drafts for several years prior to that. Both the IEEE and BSI contributed existing standards, which were themselves developed by consensus over many years, as source documents to the project (these standards will be retired as the new standards are published).

The availability of these international testing standards provides a number of potential benefits:

  • Improved communication - through a common terminology.

  • Definition of good practice in the testing industry - a guideline for testing professionals, a benchmark for those buying testing services and a basis for future improvements to testing practices. Note that we do not claim that these standards define 'best practice', which will change based on the specific context.

  • A baseline for the testing discipline - for instance, the standard on test techniques and measures provides an ideal baseline for comparing the effectiveness of test design techniques (e.g. by academics performing research) and a means of ensuring consistency of test coverage measurement (which is useful for tool developers); see the illustrative sketch after this list.

  • A starting point - for any organization that is looking to define their testing processes, templates, techniques or concepts for the first time.
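
As a rough illustration of what 'test design techniques' and 'coverage measurement' mean in practice, the sketch below applies two widely used techniques (equivalence partitioning and boundary value analysis) to a hypothetical discount function and reports a simple partition-coverage figure. The function, the test values and the coverage calculation are invented for this sketch; the standard itself defines the techniques and measures, not this code.

    # Illustrative sketch only: the discount() rules and partition choices are
    # hypothetical and not taken from ISO/IEC/IEEE 29119-4.

    def discount(order_total: float) -> float:
        """Return the discount rate for an order total (invented rules)."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        if order_total < 100:
            return 0.0    # partition 1: no discount
        if order_total < 500:
            return 0.05   # partition 2: small discount
        return 0.10       # partition 3: large discount

    # Equivalence partitioning: one representative value per partition.
    partition_tests = {1: 50, 2: 250, 3: 1000}

    # Boundary value analysis: values on and either side of each boundary.
    boundary_tests = [99, 100, 101, 499, 500, 501]

    def run() -> None:
        exercised = set()
        for partition, value in partition_tests.items():
            discount(value)
            exercised.add(partition)
        for value in boundary_tests:
            discount(value)
        # A simple coverage measure: partitions exercised / partitions identified.
        coverage = len(exercised) / len(partition_tests)
        print(f"equivalence partition coverage: {coverage:.0%}")

    if __name__ == "__main__":
        run()

Measuring coverage in this way (items exercised divided by items identified) is what allows different tools and studies to report comparable figures.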

Petition and Response

In August 2014 the following online petition was created; the response below is from the convenor of the ISO Software Testing Working Group (WG26), Dr Stuart Reid.

"To the President of the International Organization for Standardisation,
We, the undersigned, hereby petition the International Organization for Standardisation to suspend publication of ISO/IEC/IEEE 29119-4 and ISO/IEC/IEEE 29119-5, and to withdraw ISO/IEC/IEEE 29119-1, ISO/IEC/IEEE 29119-2 and ISO/IEC/IEEE 29119-3.
It is our view that significant disagreement and sustained opposition exists amongst professional testers as to the validity of these standards, and that there is no consensus (per definition 1.7 of ISO/IEC Guide 2:2004) as to their content."

Members of the ISO Software Testing Working Group (WG26) are well-versed in the definition of consensus. The six years spent in gaining consensus on the published testing standards provided us all with plenty of experience in the discussion, negotiation and resolution of technical disagreements - if nothing else, we are now experts at compromise and reaching consensus.

However, as a Working Group (WG) we can only gain consensus when those with substantial objections raise them via the ISO/IEC or IEEE processes. The petition talks of sustained opposition, yet a petition initiated a year after the publication of the first three standards (after over six years' development) represents input to the standards after the fact; such input can now only be included in future maintenance versions of the standards as they evolve.

It is unclear whether 'significant disagreement' refers to the number of professional testers who dislike these standards or to their level of dissatisfaction with the content; however, the petition comments raise a number of issues that deserve a considered response:

"The standards are not free."  Most ISO/IEC/IEEE standards cost money (ISO is partially funded by the sale of standards), and the testing standards are no different in this respect. Personally, I would prefer all standards to be made freely-available, but I am not in a position to make this change - and do not know where the costs of development would come from. I can see that the charge made for the standards has forced a large proportion of those commenting on them to base their commentary on second-hand descriptions of the standards - and with fuller access and better information I expect many of these people would have refrained from 'signing' the petition and making incorrect statements about the standards.

"The standards 'movement' is politicized and driven by big business to the exclusion of others."  A large proportion of the members of WG26 are listed at our About WG26 page along with their employers. This list does not support the assertion in the petition. The seven editors (who do the majority of the work) are from a government department, a charity, two small testing consultancies, a mid-size testing consultancy, a university and one is semi-retired. All WG members give their time freely and many use their own money to attend meetings. As all received comments have their resolution fully documented anyone who submits a comment on a draft standard can easily see how their suggested change was handled - thus even those who cannot afford the time to come to WG meetings can easily influence the content of the standards. IEEE balloting also extends the consensus. For example, the IEEE balloting group for ISO/IEC/IEEE 29119-4 contains 76 people: 5 identify themselves as academic; 11 consultants; 9 government and/or military; 5 process management; 7 software producers; 5 project managers; 12 software product users/acquirers, and 21 with general interest.

"The methods advocated haven't been tried and the standards do not emerge from practice."  The number of years' and range of testing experience on the Working Group (and the number of countries represented) shows that a wide range of experiences have been drawn on to create the standards. Early drafts were widely distributed and many organizations started (and continue) to use the standards - and so provided important feedback on their use to allow improvements to be made. At least one academic study at PhD level has been performed on the use of these early drafts within 14 different software organizations by Jussi Kasurinen of Lappeenranta University of Technology. Three of the main contributions of the thesis were described as:

  • The concepts presented in the ISO/IEC 29119 test process model enable better end-product quality.
  • The ISO/IEC 29119 test standard is a feasible process model for a practical organization with some limitations.
  • The ISO/IEC 29119 test standard is a feasible foundation for a test process development framework.

"The standards represent an old-fashioned view and do not address testing on agile projects."  The standards were being continually updated until 2013 and so are inclusive of most development life cycles, including agile. As an example, the test documentation standard (ISO/IEC/IEEE 29119-3) is largely made up of example test documentation and for each defined document type example documentation for both traditional and agile projects is provided. The standards will be regularly reviewed and changes based on feedback from use have already been documented for the next versions.

"The standards require users to create too much documentation."  Unlike the IEEE 829 standard, which it replaces, the test documentation standard, ISO/IEC/IEEE 29119-3, does not require documentation to follow specific naming conventions nor does it require a specific set of documents - so users can decide how many documents to create and what to call them. The standard is information-based and simply requires the necessary test information to be recorded somewhere (e.g. on a Wiki). As stated above, it is fully aligned with agile development approaches and so users taking a lean approach to documentation can be fully compliant with the standard.

"The existence of these standards forces testers to start using them."  According to ISO, standards are "Guideline documentation that reflects agreements on products, practices, or operations by nationally or internationally recognized industrial, professional, trade associations or governmental bodies".

They are guideline documents and are therefore not compulsory unless mandated by an individual or an organization - it is up to the organization (or individual) whether following the standards is required, either in part or in their entirety. However, if specified in a contract they can define requirements on the testing, but as with all contracts this depends on the signatories. Because the standards allow their requirements to be applied across a range of life cycles, including agile, being required to use them does not prevent a company from taking an agile approach.

My view of the testing standards is that they will be of most use to testers who want to see a definition of good practice and how close to (or far from) it they are. They can then decide whether they wish to change their practices; if they wish to adopt the standards, they are free to tailor them to suit their needs. One of the most common uses I make of the testing standards is as checklists - I then have more confidence that I haven't accidentally ignored an important aspect of the testing.

"The Testing Standards are simply another way of making money through certification."  There is currently no certification scheme associated with the testing standards, and I am not aware of one being developed. There is also no link between the ISO/IEC/IEEE Testing Standards and the ISTQB tester certification scheme. The ISTQB scheme could align with the body of knowledge represented by the ISO/IEC/IEEE testing standards, but that is a decision for those who run ISTQB.

"The Testing Standards do not allow exploratory testing to be used."  Exploratory testing is explicitly included as a valid approach to testing in the standards. The following is a quote from part 1:

"When deciding whether to use scripted testing, unscripted testing or a hybrid of both, the primary consideration is the risk profile of the test item. For example, a hybrid practice might use scripted testing to test high risk test items and unscripted testing to test low risk test items on the same project."

If using exploratory testing (and also wanting to show compliance with the standard), a simple rationale for not documenting tests in advance would be required. The standard also does not require the tests executed during an exploratory testing session to be documented - it does require that they be "recorded", but this could be with a screen recorder or as notes in a session sheet, and even this requirement can be tailored out if an organization deems such recording unnecessary.
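
As a rough sketch of how lightweight such "recording" could be, the example below models exploratory session notes as a small data structure; the field names are illustrative (loosely inspired by session-based test management) and are not mandated by the standard - equivalent notes on a wiki page or a paper session sheet would serve just as well.

    # Illustrative sketch only: field names and content are hypothetical,
    # not a template defined by ISO/IEC/IEEE 29119-3.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ExploratorySession:
        charter: str                      # mission for the session
        tester: str
        duration_minutes: int
        areas_touched: List[str] = field(default_factory=list)
        notes: List[str] = field(default_factory=list)           # what was tried and observed
        issues_raised: List[str] = field(default_factory=list)

    session = ExploratorySession(
        charter="Explore the checkout flow for rounding errors in discounts",
        tester="A. Tester",
        duration_minutes=90,
        areas_touched=["cart", "checkout", "payment gateway stub"],
        notes=[
            "Tried totals around the discount boundaries",
            "Currency conversion not examined (out of scope for this session)",
        ],
        issues_raised=["DISC-42: rounding error at a total of exactly 100"],
    )
    print(session)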

"No-one knew about the standards and the Working Group worked in isolation."  From the outset the development of the testing standards has been well-publicised worldwide by members of the Working Group at conferences, in magazine articles and on the web. Early in the development, in 2008, workshops were run at both the IEEE International Conference in Software Testing and the EuroSTAR conference where the content and structure of the set of standards were discussed and improved. The working group also went to great lengths to invite the broader testing community to comment on the standard and to voice their opinions at meetings of our software testing standards working group, by placing information on the softwaretestingstandard.org website on how individuals can get involved in the development of the standard (see our How to Get Involved page). Since then I have lost count of the number of testing conferences where the standards were presented on and discussed. At every one of my presentations on the standards I have invited the audience to become involved in their development. Of course, I am not the only person who has spoken about the standards - experts from many countries have spoken about the standards and invited participation.

Testing experts from around the world were invited to take part in the development of the standards - these included a number of prominent members of the AST who were personally approached and asked to contribute to the standards early in their development, but they declined to take part. Other members of the AST have provided input, such as Jon Hagar, who is the IEEE-appointed Project Editor - and he has presented on the standards at the CAST conference.

"No-one outside the Working Group is allowed to participate."  There are a number of options for those interested in contributing to the standards - and these are all still open. The Working Group (WG) meets twice a year for 6 days - and is made up of experts acting in a personal capacity, appointed by national standards bodies, such as ANSI, or liaising organizations (e.g. IEEE). Most experts at Working Group meetings are supported by a 'mirror panel' in their home country who act as reviewers and submit comments on the drafts. Many of these mirror panel members represent specific industry areas (e.g. financial testing), testing roles (e.g. independent consultant) or testing interest groups. These comments are then collated and moderated by the mirror panel to remove duplicates and contradictory suggestions before being submitted to the WG. The Working Group also accepts comments directly from any individual with an interest in software testing. Each comment is individually handled and the response to the comment is agreed by the WG and is fully documented. As an example, the WG itself resolved well over 3000 comments on the 'Test Processes' standard. All comment resolution files are then circulated back to each country's mirror panels, so that individual contributors can see how their comments were handled by the WG. This makes it a very transparent process that is accessible to any tester, anywhere.

"Context-Driven Testing isn't covered by the standards."  I fully agree with the seven basic principles of the 'Context-Driven School' at http://context-driven-testing.com. To me most of them are truisms, and I can see that to those new to software testing they are a useful starting point. I also have no problem with context-driven testers declaring themselves as a 'school', however I am unhappy when they assign other testers to other deprecated 'schools' of their own making.

Jon Hagar, the Project Editor for the ISO/IEC/IEEE 29119 set of standards, is a supporter of the context-driven school and he, along with the rest of the Working Group, ensured that many of the context-driven perspectives were considered. For instance, the following statement appears early in part 1: "Software testing is performed as a context-managed process."

The standards do, however, also mandate that a risk-based approach is used, as can be seen in the following quote (also from part 1): "A key premise of this standard is the idea of performing the optimal testing within the given constraints and context using a risk-based approach." I see no problem in following both risk-based and context-driven approaches simultaneously, as I believe the context and the risk profile to be parts of the same big picture, which I would use to determine my approach to the testing.


Dr Stuart Reid
Convenor, WG26
8th September 2014