The PNSQC Board and Officers, in conjunction with the Rose City Software Process Improvement Network (SPIN), held a Tuesday evening panel discussion at the 2006 PNSQC.
The evening panel allowed attendees to discuss and share best practices with fellow members of the software community. The moderator, Jean Richardson, set the stage for the 60-minute discussion.
A lot of time and energy gets spent discussing methodologies and deciding which process is better. Really, all the methodologies are different ways to describe, categorize, and communicate the same low-level actions, communications, and activities. This panel focused on methods to implement your process – any process – in a successful manner. Each panelist had 5 minutes to introduce their background in software development and describe what has worked for them. The panel then took questions from the audience.
The focus was on sharing specific best-of-breed implementations of techniques that show up on any project – whether it uses traditional or agile development processes.
Moderator: Jean Richardson
What the Evening Panelists Have to Say
We start with the “What”; experience gives us the “How.” Applying our knowledge of process, developing a technique for implementing it, and adapting that implementation to the constraints of the project and culture bring success. Project methodologies are not static and must adapt. Critical chain, agile, and classic waterfall all have their place, and a blend of them can be used on the same project.

When determining what to do to improve project deployment, you often have to start by bringing things back to a common baseline: implementing all or part of a classical project management process. This has the effect of “draining the lake” so that the rocks are visible. From there you can look at implementing a different approach.

It is desirable to implement a new philosophy from the start, but that requires the knowledge that something is wrong with the current methodology. More likely the attitude is that it is the project that is “bad.” Blaming the project lays the blame on one entity and not on the organization’s methodology. Subtle, even covert, implementation of different processes on a project, followed by reporting on the success and citing other areas where the technique will work, helps evolve the management style.
The process that’s required for developing software is determined by the attributes of the quality required. Those attributes are determined by what the users (who they are, what their needs are, and the environments in which they will use the product) require for their usage to be beneficial, successful, and/or enjoyable. This means that if you don’t have an in-depth understanding of the users and how they will use the product, you can’t make claims about quality or about what process to follow to provide it.
Software processes and methodologies are tools software professionals use to build products that satisfy their user base. It is important to choose the right tool for the job, considering the complexity of the project, the experience level of the people using the tool, and its track record. Processes and methodologies give us an avenue to build successful products, but they are not as important as building a cohesive team focused on a common goal.
What do we mean by “mature” software development processes? Software-intensive organizations need the ability to control, manage, and optimize product development, to be sure; but maturity entails more than this. Maturity also implies that an organization’s development processes are well-adapted to its operational and strategic objectives, and that these objectives may be achieved with a minimum of internal conflict, confusion, stress, and frustration. The Technical perspective (e.g., metrics and measurement; process definition; V&V) is necessary, but not sufficient, for achieving software quality. Success requires that the Organizational perspective (e.g., innovation; creativity; adaptation to market conditions) and the Personal perspective (e.g., jelled teams; working styles and communication skills; trust, advice, and friendship) also be taken into account. Skillful blending and balancing of T+O+P considerations is the hallmark of a truly mature software organization.
My work on improving software quality focuses on the “real” developers–the people actually writing the code. As a result, I’m interested in tools, techniques, and methodologies that these people can use to improve the quality of the code they produce. Regardless of the development process being used, these people need to know what they’re supposed to produce and the constraints under which it must be created (requirements), need to have a vision for how they’ll implement it (design), need to produce something executable (coding), need to have a way to verify that they’ve implemented what they’re supposed to (testing), and need to satisfy a host of additional requirements that are often left unstated (e.g., source code comprehensibility, modifiability, and portability). They must also typically balance pieces of “best practice” advice that may not be fully compatible, e.g., encapsulation vs. testability, speed vs. loose coupling, graceful failure vs. minimal footprint. I’m especially interested in the issues facing library developers, who can never know all the ways in which their software is used and who must typically ensure backwards compatibility for almost every release. Finally, because I often work with C++ developers whose code bases are very large, were developed over a decade or more, are extremely performance-sensitive, and are mission-critical, I’m interested in the practicality of quality improvement recommendations for software systems of this nature.
Each of the words – quality, competitive, and advantage – implies value: something we may not talk about but which significantly affects us, whether we are a tester on the front lines or the customer.
The luncheon panel discussion focused on two value contexts — project and product. In the project context, a possible question might be “What kinds of dashboard metrics have been valuable to convey concerns from Test?” In the product context, the question might be “What kinds of bugs have you seen that made the product less valuable, and why?”
Before lunch was served, a short form was passed out for attendees to submit a written question or a comment to the panel. The moderator, Jon Bach, read the questions during lunch.
Moderator: Jon Bach, Quardev Laboratories
I am honored again this year to host a panel of experts on a topic I think may be important in our community. With this year’s conference theme, “Quality: A Competitive Advantage,” the conference organizers let me choose a sidebar on a congruent topic, so I chose something very few of us stop to think about – value.

There are many definitions of quality, but my favorite is from Jerry Weinberg: “Quality is value to some person.” Value is not about price; it is about what you get for the cost. Something given for free may wind up “costing” you a lot of time and money. Conversely, something that is expensive may prove to pay for itself very quickly.

It is a subjective concept that intrigued me enough to want to choose colleagues who I thought would bring value to the topic of Value, so consider this an invitation to spend some time exploring it with us at lunch on Wednesday!
I have been strongly influenced by Jerry Weinberg in my thinking about quality and value. He defines quality as “value to some person,” and value as “what someone will do or pay to have his or her requirements met.” He also points out that, like quality, “purpose” is not an intrinsic property of a product, but rather a relationship between the product and a person. Finally, that which we call a product is a system built out of smaller systems and part of some larger system.

For some testers, these ideas are unsettling. “Surely quality, purpose, and value aren’t subjective!” is a common reaction. “How can we testers make decisions about quality if everyone’s idea of quality is different?” After years of thinking about it, I am convinced that quality, purpose, and value are subjective, and that testing therefore involves a good deal of uncertainty. We testers can live with that uncertainty, though, and even embrace it. First, our role is to provide information to managers and to the rest of the project community, not to make decisions about the product or the project; that is properly a management task. Second, I believe we do better testing when we consider the system, instead of the product; multiple users, instead of “the user”; possible purposes, instead of “the purpose”; and varieties of values, instead of “the value.” If we embrace subjectivity – or diversity – we can anticipate more risks, find more bugs, and provide more value to more people.
Brian believes that if software quality professionals want to not merely survive but thrive in an increasingly competitive world, they need to understand how value is determined by customers and by business stakeholders.
The concept of “value” as it applies to software or other products or commodities is a difficult one. While value can be measured in objective units, such as dollars and cents, in reality it is largely subjective. Unlike quality, for instance, which is measurable against specific standards or a set of objectives, the value of a product is typically defined in terms of what a customer is willing to exchange for it. In a perfect world, high quality always equals high value, but as we see every day, that is not always the case. While quality frequently drives value, especially over the longer term, customers are still the final arbiters. Customers are influenced by marketing, packaging, reputation, pricing, and accessibility – in addition to their own objective standards.

The key to producing something of value, as it pertains to software, is deceptively simple – understand not only what your customers need, but what they want, and then design and build a quality product to those standards. It gets more deceptive and less simple when you recognize the danger of listening to your customers too closely, forgetting that they are paying you to build them a product for a reason – often because they lack the skills or the resources to build it themselves. All too often testers are confined to a strict quality assurance role – they verify quality all day, testing output against input, a product against a design. However, testers are first and foremost customer advocates. In the rare cases where they are given the voice to test the design itself, a tester evolves beyond “simply” verifying quality and can begin truly testing a product’s value.
The value that I have been most concerned about for the past six years has been educational value. Most software testing techniques were developed during a time when a big program was 10,000 lines and written in COBOL, a language almost anyone can read. An enterprising tester could read the entire program, list all the relevant variables, identify most of the interesting combinations, and lovingly handcraft an appropriate set of tests. Today, we snap together massive programs – my cell phone has over 1,000,000 lines of code. Programmer productivity has skyrocketed while tester productivity has increased linearly. At its best, (slightly) automated regression testing increases tester productivity a little more. When programmer productivity rises sharply faster than tester productivity, our impact on projects declines. To the extent that testing is valuable (and I do think it is very valuable), the diminishing significance on projects that is a natural consequence of differential productivity becomes a serious problem.

One of the reasons programming evolves so rapidly is the strong educational support for programmers at university. New ideas spread quickly, into new courses or modifications of the standard courses. Educational support for testers is not so strong. There are no degree programs in testing – most universities offer zero or one testing course. Even the handful that offer two or three courses can only teach so much in that small amount of time. As a result, most training in testing will probably continue to be industrial – learning on the job by yourself, through in-house classes, or from a commercial trainer.

I went back to university because I felt, as a successful commercial trainer, that the short-course format offered little potential for skill development or for the development of a real appreciation of new ideas.
There just isn’t enough time in the short-course format: not enough time in class, not enough time for homework, and not enough calendar time for someone to think over a new idea, try it on her own projects, then come back to the course and comment on how it worked (or didn’t) in her situation.

My question is this: “How can we adapt the university instructional models to industrial training, in order to promote skill development and propagation of new ideas and technologies?”

If testers are going to continue to add serious value to projects, we need educational support that can help foster and spread deep change, rather than teaching and testing students (certification candidates) on a superficial rehashing of the same stuff people were learning in the 1980s.
Jean Richardson, Moderator, Tuesday evening panel
An experienced writer, trainer, and public speaker, Jean Richardson has been working in hardware and software development environments since 1989. During that time, she has designed and implemented a number of communications programs, managed dozens of projects, built large and small co-located and distributed teams, led process improvement initiatives, and led professional development and education efforts for software developers in all specialties.
As a businessperson, she is firmly aware of the value – as well as the cost – of excellent customer service. She cautions fellow consultants against too strictly applying the adage “it’s just business,” because business is done by human beings. She has learned that basic human issues are at the root of most conflicts and most customer/vendor, employer/employee, or client/consultant disputes. Jean believes that if we ignore this basic fact we dehumanize ourselves and imperil our society.
Her client list boasts a wide range of businesses including ADP, Chrome Systems, Intel, Freightliner, Kaiser Permanente, Kryptiq Corporation, Mentor Graphics, and US Bank.
Jon Meads became concerned with the need to design interactive systems that meet users’ perceptual and cognitive needs as well as their practical and functional requirements. As a software engineer and manager, he was instrumental in pioneering and developing interactive windowing systems (pre-Macintosh). He then started to focus more directly on user-centered design and methodologies for developing usable systems. He has been a software engineer and manager at Tektronix, Intel, and Four-Phase Systems and served with Bell Northern Research’s Corporate Design Group as an in-house consultant on human-computer interaction.
Currently Jon is president of Usability Architects, Inc., a consulting and contracting firm, specializing in designing the user experience and providing support for the full usability engineering lifecycle from product definition and user studies to the specification, design, and development of usable systems through user-centered methodologies and practices.
Jon is a past Chair of ACM/SIGGRAPH, a co-founder of the Annual SIGGRAPH conferences, has served on the Advisory Board for the ACM’s Special Interest Group on Computer-Human Interaction (SIGCHI), was a Co-Chair for CHI ’90, the 1990 Conference on Human Factors in Computing Systems, and was an ACM National Lecturer lecturing on user interface concepts and techniques.
Scott Meyers is an independent author and consultant with over three decades of experience in software development practice and research. His perennially best-selling “Effective C++” books (Effective C++, More Effective C++, and Effective STL) defined a new genre in technical publishing, and his Effective C++ CD introduced several innovations in the web-based presentation of technical material.
Scott is Consulting Editor for Addison Wesley’s Effective Software Development Series and an inaugural member of the Advisory Board for the online journal, The C++ Source (http://www.artima.com/cppsource).
His current interests focus on domain- and language-independent principles for improving software quality. He received his Ph.D. in Computer Science from Brown University.
Ben Waldron, Chief Technology Officer at Pop Art, Inc. has over 10 years of technology industry experience as a developer, consultant, and systems architect. Ben spent the majority of his career with Microsoft as a Senior Consultant architecting broad-based solutions for corporate and government customers where, in 2003, he was awarded Consultant of the Year for Microsoft Federal. Ben later served as Chief Architect for Learning.com providing web-based online instructional material and assessments to students worldwide.
Ben is a frequent author of featured articles in technical publications such as MSDN Magazine and .NET Developer’s Journal, typically focusing on improving software quality and predictability. He holds a Bachelor’s degree in Computer Science from Saint Joseph’s College, IN, and a Master’s degree in Systems Engineering from Virginia Tech.
Todd Williams is President of eCameron, Inc., which specializes in managing projects that are high risk and recovering projects that are in trouble. He has provided Information Technology and Manufacturing businesses with Project Management and Analysis/Design services for over 20 years. He works internationally and locally helping companies like Hewlett-Packard, Brooks Automation, and Digital Equipment.
Todd publishes a monthly newsletter on Project Management, available at www.ecaminc.com.
During a career spanning more than two decades, Brent Zenobia has managed or performed virtually every aspect of software development for a wide range of organizations. Brent has organized divisional SEPGs for Sharp Laboratories of America and Sharp Software Development India, and designed enterprise SPI communication and training campaigns for use throughout Intel.
Brent is a member of IEEE, a founding member of the Rose City SPIN, and holds a Master of Software Engineering from OMSE, where he teaches a course on software process improvement. Brent is currently completing his Ph.D. in Engineering and Technology Management at Portland State University, where he is using software engineering techniques to construct an agent-based model of technology adoption.
Jon Bach, Moderator, Wednesday luncheon panel
Jon Bach is Corporate Intellect Manager and Senior Test Consultant for Quardev Laboratories (www.quardev.com) – a Seattle test lab specializing in rapid, exploratory testing. He is most known for being co-inventor (with brother James) of Session-Based Test Management – a way to manage and measure exploratory testing.
In his ten-year career, he has led projects for many corporations, including Microsoft, where he was a test manager on Systems Management Server 2.0 and feature lead on Flight Simulator 2004.
Jon is a writer, philosopher, and test practitioner. He has presented at many national and international conferences. Currently he is President of the 2007 Conference of the Association for Software Testing.
Michael Bolton, founder of DevelopSense, a Toronto-based consultancy, has over 15 years of experience in the computer industry, testing, developing, managing, and writing about software.
Michael teaches James Bach’s Rapid Software Testing course in countries all over the world, and writes a regular column about testing and software quality in Better Software Magazine. He also contributes to Quality Software, the magazine of the Toronto Association of System and Software Quality, and sporadically produces his own newsletter.
Michael has been an invited participant at Cem Kaner and James Bach’s Workshop on Teaching Software Testing in Melbourne, Florida, 2003, 2005, and 2006, and was a member of the first Exploratory Testing Research Summit in 2006. He is Program Chair for the Toronto Association of System and Software Quality, a presenter at this year’s Amplifying Your Effectiveness conference in Phoenix, and a member of Gerald M. Weinberg’s SHAPE Forum.
Brian Branagan has 19 years of quality engineering and software testing experience in start-ups and Fortune 500 corporations including Adobe Systems, Getty Images, and RealNetworks. He studied systems effectiveness with Jerry Weinberg and the practices of adaptive management with Robert Dunham.
Hal Bryan, Microsoft
Hal Bryan, Flight Simulator Community Evangelist, is a notary public, former police officer, and now, a former software test engineer in Microsoft’s Games Studios. He has more than 9 years’ experience in testing, as both a contractor and full-time employee at Microsoft, working on Windows 98, Flight Simulator, Combat Flight Simulator, and other game/simulation titles.
Having recently transitioned out of Test into a full-time role as an Evangelist, Hal now finds himself having to answer to customers directly, confirming what he had long suspected from his years in Test – customers expect value for their money.
Cem Kaner is Professor of Software Engineering at the Florida Institute of Technology and the head of Florida Tech’s Center for Software Testing Education Research.
He teaches a variety of courses on black box and programmer testing (has written a few books on testing) and is working on creating free web-based courseware that other instructors can use at other schools or at their companies. The goal is to provide the instructional support for broad improvement in the skill of working software testers and other developers who create tests.