P-7 – The “Swim” System for User-Oriented Presentation of Test-Case Results
P-8 – Managing Change in the Software Test Environment
P-10 – Ten Years of Tester Tendencies
P-11 – Process Optimization by Design
P-14 – 7 Words You Can Never Use in a Requirements Specification
P-15 – Ultra Lightweight Software Test Automation (ULSTA) in an Agile Environment
P-16 – Building Quality In – A Model for Achieving High Quality on Any Type of Project
P-17 – Critical Success Factors for Team Software Process (TSP) Adoption
P-20 – A Tool to Aid Software Practitioners in Selecting a Best-Fit Project Methodology
P-25 – Tick-the-Code Inspection: Empirical Evidence
P-26 – Approaching Write Once Run Anywhere: Maximizing Code Reuse for Cross Platform Development
P-28 – Selecting and Adapting the Ceremony of Software Configuration Management Processes
P-30 – The Devil’s in the Decisions
P-32 – Testable Software Architectures
P-33 – Facilitating Effective Retrospectives
P-36 – Maintainability in Testing
P-37 – Timeline: Getting And Keeping Control Over Your Project
P-38 – Implementing a System to Manage Software Engineering Process Knowledge
P-40 – Applying Selective Revalidation Techniques at Microsoft
P-41 – Mapping for Quality – Past, Present, and Future
P-42 – Scaling Quality
P-43 – Building Quality in from the Beginning using Lean Quality Assurance Practices
P-44 – Efficient Software Testing Using Statistical Methods
P-47 – Building for a Better Automation Future: One Company’s Journey
P-51 – Introduction to Software Risk Management
P-54 – Software Process Improvement: Ten Traps to Avoid
P-58 – Agile Quality Management
P-59 – AJAX Testing – How to test the Asynchronous?
P-60 – Feedback, Analysis and Synthesis based Testing (FAST)
P-62 – User Acceptance Testing – A Context-Driven Perspective
P-63 – An Exploratory Tester’s Notebook
P-64 – Two Futures of Software Testing
P-65 – Customer Interaction and the Role of the Tester – Going Beyond the Release
P-67 – State-driven Testing
P-70 – Testing Web Services
P-72 – Project Intelligence
P-73 – Flexing Your Mental Muscle to Obtain Defect-Free Products
P-74 – A Graphical Display of Testing Status for Complex Configurations
V-01 – “Lights Out Testing” for ERP Applications
|P-7||The “Swim” System for User-Oriented Presentation of Test-Case Results|
Ward Cunningham, Bjorn Freeman-Benson, Karl Matthias
We describe “Swim,” an agile functional testing system inspired by FIT, the Framework for Integrated Test, and exceeding FIT’s capabilities through the inclusion of an end-user-accessible results-rendering engine. Its output allows developers, testers, and end-users to explore business logic and consequent output from multiple points of view and over extended periods of time. We have incorporated this engine in the Eclipse Foundation portal to support situated reasoning about foundation processes and their automation.
Howard G. “Ward” Cunningham is the American computer programmer who developed the first wiki. A pioneer in both design patterns and Extreme Programming, he started programming the software WikiWikiWeb in 1994 and installed it on the website of his software consultancy, Cunningham & Cunningham (commonly known by its domain name, c2.com) on March 25, 1995, as an add-on to the Portland Pattern Repository. He currently lives in Beaverton, Oregon.
|P-8||Managing Change in the Software Test Environment|
Diane Manlove & Jerry Parris
|P-10||Ten Years of Tester Tendencies|
As PNSQC looks back on its 25-year history, I will look back as well. For the past ten years, I have been a hiring test manager on many projects. In this paper, I will show test ideas from over 400 test “auditions” – practical interviews I have staged for testers to show me what they can do. They were given a version of a Triangle program like the one referenced in Glenford Myers’ classic book “The Art of Software Testing.” The application, built in VB, has a logging algorithm to record mouse clicks and keystrokes. The names of the testers were not preserved, just the data on what tests they ran and the order in which they ran them. I have analyzed this data and have seen several interesting patterns of behavior I call “tendencies.” The talk that accompanies this paper will show the pathologies that have trapped candidates in several ways. I reveal the top ten tendencies and demonstrate ways the audience can avoid those traps on a project.
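For readers unfamiliar with the audition exercise, the Triangle program from Myers’ book accepts three side lengths and reports the triangle’s type. A minimal sketch of such a program, in Python rather than the VB used in the auditions, shows the classification boundaries that testers typically probe (the probe list is illustrative, not from the paper):

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths.

    Returns "equilateral", "isosceles", "scalene", or "invalid".
    """
    sides = sorted([a, b, c])
    # Reject non-positive sides and violations of the triangle inequality.
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A few probes a tester might try first: one of each valid class,
# then a degenerate (1, 2, 3) and a zero-length boundary case.
probes = [(3, 3, 3), (3, 3, 5), (3, 4, 5), (1, 2, 3), (0, 4, 5)]
```

The degenerate case `(1, 2, 3)` – sides that collapse to a line – is one of the classic inputs that separates careful testers from hasty ones.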
Jon Bach is lead consultant and corporate intellect manager for Quardev – an outsource test lab in Seattle, Washington. He is co-inventor (with his brother James) of “Session-Based Test Management” – a technique for managing (and measuring) exploratory testing. For ten years, Jon has worked as a test contractor, full-time test manager, and consultant for companies such as Microsoft, Rational, Washington Mutual, and Hewlett-Packard. He has written articles for Computer magazine and Better Software (formerly STQE Magazine). At Quardev, Jon manages testing projects ranging from a few days to several months using Rapid Testing techniques (like SBTM). He is the speaker chairman for Washington Software Alliance’s Quality Assurance SIG, as well as Vice President for the Association of Software Testing.
|P-11||Process Optimization by Design|
David N. Card
The “Lean” paradigm for process improvement is becoming increasingly popular in software and systems engineering, as well as in manufacturing. However, the Lean influence arrives most often as redesign and rework in the form of a Kaizen event. This article explains how Lean principles can be used to help design good processes from the start. Moreover, since Lean is based on queuing theory, processes designed in this manner can be optimized through simulation. The article demonstrates the application of these concepts with results from a simulation study of an industrial software process. It also discusses the relationship of Lean concepts to Agile and the CMMI.
David N. Card is a fellow of Q-Labs, a subsidiary of Det Norske Veritas. Previous employers include the Software Productivity Consortium, Computer Sciences Corporation, Lockheed Martin, and Litton Bionetics. He spent one year as a Resident Affiliate at the Software Engineering Institute and seven years as a member of the NASA Software Engineering Laboratory research team. He has worked extensively with high maturity organizations where quantitative and statistical methods are essential. Recent clients have included Siemens, Bosch, Lockheed Martin, Rockwell Collins, and BAE Systems. Mr. Card is the author of Measuring Software Design Quality (Prentice Hall, 1990), co-author of Practical Software Measurement (Addison Wesley, 2002), and co-editor ISO/IEC Standard 15939: Software Measurement Process (International Organization for Standardization, 2002). Mr. Card also serves as Editor-in-Chief of the Journal of Systems and Software. He is a Senior Member of the American Society for Quality.
|P-14||7 Words You Can Never Use in a Requirements Specification|
A requirements specification is the one place where all stakeholders of a project have a vested interest. Unfortunately, requirements often are not written clearly for the people who depend on them. This paper will help people to use precise language in writing their requirements, detect problems when reviewing requirements, and avoid misunderstandings when using requirements. Actual Tektronix requirements will be used as examples, and tools to help write better requirements will be covered.
Les Grove is a software engineer at Tektronix, Inc., in Beaverton, Oregon. Les has worked at Tektronix for 10 years and has 20 years of experience in software development, testing, and process improvement. He holds a Bachelor of Science in Computer Science from California Polytechnic State University, San Luis Obispo, and a Master’s degree in Software Engineering from the University of Oregon through the Oregon Master of Software Engineering program.
|P-15||Ultra Lightweight Software Test Automation (ULSTA) in an Agile Environment|
Dr. James McCaffrey
Creating software test automation is frequently difficult in an Agile setting. In a fast-paced environment, the time required to create much traditional software test automation can render the automation obsolete before the automation can be deployed. A growing trend in software quality assurance is the increasing use of ultra lightweight software test automation (ULSTA). Ultra lightweight software test automation is characterized by being script-based (Perl, PowerShell, etc.), short (under two pages), quick to write (under two hours), and disposable (typical lifespan is two weeks). This paper presents concrete examples of four types of software test automation where ULSTA has proven to be especially effective: automated unit testing, application UI test automation, automated Web application HTTP request-response testing, and Web application UI test automation.
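The defining ULSTA traits above – short, quick to write, disposable – are easiest to see in a concrete script. The following sketch (in Python, standing in for the Perl or PowerShell the paper mentions) shows a throwaway harness for automated unit testing of a single function; the function under test, `parse_version`, is a hypothetical stand-in for real product code:

```python
# A disposable harness in the ULSTA spirit: script-based, well under two
# pages, written in minutes, and expected to be thrown away within weeks.

def parse_version(s):
    """Hypothetical product code under test: '1.2.3' -> (1, 2, 3)."""
    return tuple(int(part) for part in s.split("."))

def run_cases(cases):
    """Run (input, expected) pairs; return (passed, failed) counts."""
    passed = failed = 0
    for raw, expected in cases:
        try:
            actual = parse_version(raw)
        except ValueError:
            actual = None  # malformed input is expected to map to None
        if actual == expected:
            passed += 1
        else:
            failed += 1
            print("FAIL: %r -> %r, expected %r" % (raw, actual, expected))
    return passed, failed

cases = [("1.2.3", (1, 2, 3)), ("10.0", (10, 0)), ("abc", None)]
```

Because the whole harness fits on one screen, it is cheap to discard and rewrite when the code under test changes – which is the point of the ULSTA approach.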
Dr. James McCaffrey works for Volt Information Sciences, Inc., where he manages technical training for software engineers working at Microsoft’s Redmond, Washington campus. He has worked on several Microsoft products including Internet Explorer and MSN Search. James has a doctorate from the University of Southern California, an M.S. in Information Systems from Hawaii Pacific University, a B.A. in Mathematics from California State University at Fullerton, and a B.A. in Psychology from the University of California at Irvine. James is a Contributing Editor for Microsoft MSDN Magazine and is the author of “.NET Test Automation Recipes” (Apress, 2006). He can be reached at firstname.lastname@example.org or email@example.com.
|P-16||Building Quality In – A Model for Achieving High Quality on Any Type of Project|
We’ve all heard that it’s better to “build in quality” than to test in quality. Have you ever wondered how exactly quality is built into a software product? This paper explains the fundamental principles behind “building quality in,” and demonstrates how to apply those principles to your business and your organization. We take a look at identifying what types of mistakes are most frequently made in your organization, and at choosing from a toolkit of prevention and detection methods to either prevent or detect these specific classes of mistakes as early as possible. We also consider how to analyze your software development process and quickly see where it would be most useful to add a particular detection or prevention mechanism.
Kathy Iberle is a senior software quality engineer at Hewlett-Packard, currently working at the HP site in Vancouver, Washington. Over the past twenty years, she has been involved in software development and testing for products ranging from medical test result management systems to inkjet printer drivers to Internet applications. Kathy has worked extensively on training new test engineers, researching appropriate development and test methodologies for different situations, and developing processes for effective and efficient software testing. Kathy has an M.S. in Computer Science from the University of Washington.
|P-17||Critical Success Factors for Team Software Process (TSP) Adoption|
Adopting TSP/PSP can have major benefits in driving improved quality and productivity. The critical success factors for TSP/PSP adoption are analyzed and shared based on lessons learned from 20 Intuit TSP/PSP projects. Factors include the level of support required at all levels and how to drive the required mindset changes. To discover these factors, both employee surveys and software metrics were analyzed.
Jim Sartain is a Director responsible for Software Quality and the Engineering Process at Intuit. His team ensures a highly effective work environment for 700 software professionals by driving software process improvement, delivering technical education, and providing software process consulting and software tools. Prior to Intuit, Mr. Sartain held a variety of product development positions at Hewlett-Packard. His last position at Hewlett-Packard was leading the development and support of an Airline Reservation System servicing leading low-cost airlines worldwide. He also led commercial software development at Hewlett-Packard for database management systems, compilers, computer networking and application development tools. Mr. Sartain earned his bachelor’s degree in computer science and psychology from the University of Oregon, and his M.S. in Management of Technology from the National Technological University.
|P-20||A Tool to Aid Software Practitioners in Selecting a Best-Fit Project Methodology|
I Nengah Mustika and Robert F. Roggio
This paper discusses a web-based application designed to assist methodology selection. Through a short series of questions, answers relating to project characteristics (team size, criticality of the development, need for heavy documentation, and more) are used to compute a metric that suggests a best-fit or near-best-fit methodology for the project. Further, while the application contains default weights assigned to questions and answers, a practitioner may alter these weights to more accurately capture the strength of some of the characteristics. Previous results of the exercise (text and graphics) may be stored and compared to new results as weights are altered by a user. The latest version of this application also provides the ability to supplement meaningful questions, assign weights, and use these additional questions and answers in computing a more meaningful nearly-best-fit set of metrics.
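The weighted-metric idea described above can be sketched as follows. The methodology profiles, question scales, and weights here are invented for illustration; the tool’s actual questions, defaults, and scoring formula are not given in this abstract:

```python
# Hedged sketch: score questionnaire answers against per-methodology
# profiles on the same 1-5 scales, using a weighted distance; the
# lowest-scoring methodology is the suggested "best fit".

PROFILES = {
    "XP":        {"team_size": 1, "criticality": 2, "documentation": 1},
    "Scrum":     {"team_size": 2, "criticality": 2, "documentation": 2},
    "RUP":       {"team_size": 4, "criticality": 4, "documentation": 4},
    "Waterfall": {"team_size": 5, "criticality": 5, "documentation": 5},
}

def best_fit(answers, weights):
    """Return (methodology, score) with the lowest weighted distance."""
    def distance(profile):
        return sum(weights[q] * abs(answers[q] - profile[q]) for q in answers)
    scored = {name: distance(p) for name, p in PROFILES.items()}
    winner = min(scored, key=scored.get)
    return winner, scored[winner]
```

Letting the practitioner edit the `weights` dictionary mirrors the paper’s point that default weights can be overridden to better reflect a project’s character.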
I Nengah Mustika received a Master of Science in Computer and Information Sciences from the University of North Florida in April 2007. Since 2004, he has been working as a Solution Developer for Idea Integration, Inc. in Jacksonville, Florida. Idea Integration is a consulting and technology solutions firm specializing in application development, business intelligence, infrastructure, information security, and interactive marketing.
|P-25||Tick-the-Code Inspection: Empirical Evidence|
The paper starts with a hypothetical scenario of software development. It shows how bugs can come into being essentially from nothing. Current ways of producing software leave much to be desired. There is a lot of inadvertent complexity in the software produced by the industry, and it is both possible and feasible to get rid of it. The paper presents the results of four experiments as evidence. All experiments use the Tick-the-Code method to check source code. The experiments show that both the developers and the source code they produce can be significantly improved. The results indicate that developers can easily find and suggest numerous improvements. It becomes clear that it is feasible to use Tick-the-Code often and on a regular basis. In one of the experiments, the software engineers created almost 140 improvement suggestions in just an hour of effort. Ticking code often and on a regular basis makes a software organization more mature. As long as the organization has to waste time reworking requirements and careless coding, maturity of operation is unachievable.
Miska Hiltunen teaches Tick-the-Code to software developers. After developing the method and the training course for it, he founded Qualiteers (www.qualiteers.com). Mr. Hiltunen is on a mission to raise the average software quality in the industry. He has been in the software industry since 1993, mostly working with embedded software. Before starting his freelance career, he worked for eight years in R&D at Nokia. He graduated from Tampere University of Technology in Finland with a Master of Science in Computer Science in 1996. Mr. Hiltunen lives and works with his wife in Bochum, Germany.
|P-26||Approaching Write Once Run Anywhere: Maximizing Code Reuse for Cross Platform Development|
There are multiple solutions for creating graphical, user-interactive applications that run on multiple platforms. Each solution has its own set of drawbacks that are unacceptable to Wacom, from UI widgets that do not look native to virtual machines that do not provide the level of integration with the operating system that this product requires. Yet independent platform development is too expensive. Instead, Wacom approaches the concept of “write once, run anywhere” as a goal worth striving for, but ultimately unattainable. This paper takes a look at the technique Wacom uses to reach for this goal while writing software for multiple platforms.
Raleigh Ledet is a Macintosh Software Engineer with Wacom Technology. He holds a B.S. in Computer Science from the University of Southwestern Louisiana and is working towards an M.S. in Software Engineering from Portland State University. He has been at Wacom Technology for 6 years as a Macintosh Software Engineer and was previously a Programmer at EoZ Bis.
|P-28||Selecting and Adapting the Ceremony of Software Configuration Management Processes|
Software processes are most efficient and productive when process ceremony is aligned with the characteristics of the project. This paper examines process ceremony in the context of Software Configuration Management (SCM), in particular, change request, artifact development, and release management. The paper summarizes the work and results of a graduate project at Portland State University conducted by the authors.
Dan Brook is a software engineer at Rohde & Schwarz, Inc., where he currently develops software for cell phone test devices. He is a recent graduate of the Oregon Master of Software Engineering (OMSE) program at Portland State University in Portland, Oregon, and holds a bachelor’s degree from Whitman College in Walla Walla, Washington.
|P-30||The Devil’s in the Decisions|
Robert Goatham & Jim Brosseau
In a paper that looks at software development from a new perspective, the authors ask us to consider the central role that decision-making plays in software projects. Teams are aware of the key decisions made in a project, but many will not have considered that decision-making is the pervasive thought process that resolves even the smallest detail of a project’s outcome. Given that teams that consistently make good decisions are likely to succeed, while teams that make bad decisions are likely to fail, the paper considers the factors that affect the ability of a team to make effective decisions. An assessment tool is provided that allows teams to assess the issues that could prevent them from making effective decisions. By understanding the factors that drive effective decision-making and the relationship between those elements, project teams will be better able to identify weaknesses in their decision-making capabilities and address those weaknesses before the project is compromised by a set of bad decisions.
Robert Goatham, B.Sc (Hons), PMP, has 20 years in the IT industry and a broad range of experience in both Project Management and Quality Management, and has long been an advocate for quality. Robert became a Certified Quality Analyst in 1999 and established the IT Quality function for Singapore Airlines. In addition, he has played an active role in quality programs in many organizations. Always pragmatic, Robert’s mix of fundamentals and from-the-trenches experience leaves audiences with a new and distinct perspective on the key issues affecting the IT industry.
|P-32||Testable Software Architectures|
It is an established best practice in modern software engineering that rigorous use of automated unit testing frameworks can dramatically increase the quality of software. What is often more important, but less frequently discussed, is that software that can be rigorously tested in this manner is typically more maintainable and more extensible, all the while consistently retaining high levels of quality. These factors combine to drive down the Total Cost of Ownership (TCO) for software that is rigorously testable. Modern enterprise-class software systems can have life spans of 3, 5, even 10 years. With this kind of longevity, it is not uncommon for long-term maintenance and extension costs to far outweigh the initial development costs of enterprise software systems. When this fact is combined with the benefits described above, it becomes clear that testability should be a first-class driver when crafting software architecture. What is also clear is that crafting high-quality software architectures that can demonstrably drive down TCO over a long period, while maintaining high quality, hasn’t traditionally been the IT industry’s strong suit. While the Agile practice of Test-Driven Development (TDD), by definition, assists in the process of developing testable software, it alone does not guarantee that rigorously testable (and hence desirable) software architectures will be produced. In this paper, we will cover some of the core concepts that typify testable software architecture. We will also discuss, through the development of an example software architecture, how the use of Design Patterns produces software architectures that embody these core concepts of testability. Lastly, we will illustrate these concepts with working C# and NUnit code.
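One core concept the abstract alludes to – designing around substitutable dependencies so that units can be tested in isolation – can be illustrated briefly. This sketch is in Python rather than the paper’s C#/NUnit, and the class and method names are invented for the example:

```python
# Testability through dependency injection: the billing logic depends on
# an abstract payment gateway, so tests can substitute a fake for the
# real remote service.

class PaymentGateway:
    """Abstract seam: production code supplies a real implementation."""
    def charge(self, account, amount):
        raise NotImplementedError

class BillingService:
    def __init__(self, gateway):
        self.gateway = gateway  # injected, never constructed internally

    def bill(self, account, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(account, amount)

class FakeGateway(PaymentGateway):
    """Test double: records charges instead of calling a remote service."""
    def __init__(self):
        self.charges = []

    def charge(self, account, amount):
        self.charges.append((account, amount))
        return True
```

Because `BillingService` never constructs its own gateway, a unit test can verify the billing rules without any network or external service – exactly the property that makes an architecture "rigorously testable."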
Bio not available at this time.
|P-33||Facilitating Effective Retrospectives|
Many software development teams are looking for ways to continuously improve their processes. Process improvement efforts are always difficult, and additional challenges compound them: geographically dispersed teams spanning several time zones, which leaves very little overlap in working hours, and cultural differences. At Intel, very few teams are located in the same state, much less in the same building. We have teams spread out among 290 locations in approximately 45 countries. The challenge: How do you facilitate an effective retrospective in spite of all these obstacles? This paper will share the key learnings from delivering over 80 retrospectives at Intel Corporation.
Debra Lavell has over 11 years’ experience in quality engineering and currently works as a Program Manager in the Corporate Platform Office at Intel. Her role is to engage with product development teams to uncover opportunities to share best practices. Prior to her work in quality, Debra spent 8 years managing an IT department responsible for a 500+ node network for ADC Telecommunications. Debra is a member of Portland, Oregon’s Rose City Software Process Improvement Network (SPIN) steering committee, where she coordinates the monthly speakers and communicates meeting information. Over the past 6 years, Debra has served on various PNSQC committees and has served as both Vice-President and President.
|P-36||Maintainability in Testing|
Is your organization focused on short term deliverables or sustained success? How resistant are your tests to breaking changes? If you go on vacation, could someone realistically take your tests, execute them, and analyze the results? What will become of your tests after you ship? Each of the previous questions relates to a specific concern about maintainability in testing. While these are all important questions, we often overlook them in the rush to finish a product release. To properly plan for maintainability, however, we must address all of these questions and more.
Brian Rogers is a software design engineer in test at Microsoft. He is a strong proponent of engineering excellence within the test discipline, and has designed and developed static analysis tools for detecting defects in test artifacts. Brian has a degree in Computer Engineering from the University of Washington.
|P-37||Timeline: Getting And Keeping Control Over Your Project|
Many projects deliver late and over budget. The only way to do something about this is to change our way of working, because if we keep doing things as we did, there is no reason to believe that things will magically improve. They won’t. The Evolutionary Project Management (Evo) approach is about continuously introducing small changes in the way we do things, constantly improving the performance and the results of what we do. Because we can imagine the effect of each change, it can be biased towards improvement, rather than being random. One technique of the Evo way of working is TimeLine, for getting and keeping the timing of projects under control while still improving the results, using just-enough estimation techniques and calibration to reality. TimeLine does not stop at establishing that a project will be late. Instead of accepting the apparent outcome of a TimeLine exercise, we have many opportunities to do something about it. One of the most rewarding ways of doing something about it is saving time.
Niels Malotaux is an independent Project Coach specializing in optimizing project performance. He has over 30 years of experience in designing hardware and software systems, at Delft University, in the Dutch Army, at Philips Electronics, and 20 years leading his own systems design company. Since 1998 he has devoted his expertise to helping projects deliver Quality On Time: delivering what the customer needs, when he needs it, to enable customer success. To this effect, Niels developed an approach for effectively teaching Evolutionary Project Management (Evo) Methods, Requirements Engineering, and Review and Inspection techniques. Since 2001, he has coached some 80 projects in 20+ organizations in the Netherlands, Belgium, Ireland, India, Japan and the US, which has led to a wealth of experience in which approaches work better and which work less well in practice. He is a frequent speaker at conferences and has published four booklets related to the subject of this paper.
|P-38||Implementing a System to Manage Software Engineering Process Knowledge|
Information management and even knowledge management can be easy to accomplish within a small group. Trying to achieve this same capability across multiple software engineering groups that are distributed world-wide and accustomed to managing information locally within one team is a different story. The ability to share information and knowledge in this circumstance, much less retain it and effectively apply it, becomes a daunting task. This paper discusses a solution that was developed to enable software engineers to easily share and access best practices through a centralized corporate knowledge base. The key objectives of this solution were to reduce duplication of effort in software engineering process improvement, increase reuse of best practices, and improve software quality across the company by enabling practitioners corporate-wide to quickly access the tools, methodologies, and knowledge to get their job done more effectively and efficiently.
Rhea Stadick works in the Intel Software Quality group under the Corporate Quality Network at Intel Corporation. Currently, she is a platform software quality engineer for next generation ultra mobile products. Rhea has spent a significant portion of her two years at Intel researching and developing information and knowledge management systems. During this time, she has implemented and supported Intel’s software engineering process knowledge management system that is used by the worldwide software engineering community at Intel. Rhea holds a bachelor’s in computer science.
|P-40||Applying Selective Revalidation Techniques at Microsoft|
The Internet Explorer (IE) test team faces two key testing challenges:
At the same time, the team also wants to maintain the quality bar across the range of different IE versions, the Windows platforms on which IE runs and its 32-/64-bit binary releases. All these factors are considered in the context of shorter development test cycles. Our approach to addressing these challenges is to apply so-called selective revalidation techniques, which leverage our existing code coverage data. Regression test selection enables a potentially significant reduction in the number of regression tests that need to be rerun in response to a given code modification. Test set optimization provides us with a mechanism to better prioritize our large test suite and identify potential redundancy. We anticipate that both of these techniques will not only encourage more systematic and efficient testing procedures, but also significantly streamline our current test development cycle.
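The regression test selection technique described above can be sketched at its core: given per-test code coverage data, rerun only the tests whose recorded coverage intersects the modified code. The coverage map and function names below are invented for illustration; the IE team’s actual data and tooling are far larger:

```python
# Simplified coverage-based regression test selection: each test maps to
# the set of functions it was recorded covering on a previous full run.
coverage = {
    "test_render":  {"parse_html", "layout", "paint"},
    "test_cookies": {"http_get", "cookie_jar"},
    "test_history": {"history_add", "history_prune"},
}

def select_tests(coverage, modified):
    """Return the tests whose coverage intersects the modified functions."""
    return sorted(t for t, covered in coverage.items() if covered & modified)
```

A change touching only `layout` would then trigger a single test instead of the full suite, which is the source of the "potentially significant reduction" the abstract describes; test set optimization applies the same data in the other direction, flagging tests whose coverage is wholly subsumed by others.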
Jean Hartmann currently holds the position of Test Architect in the Internet Explorer (IE) team. Her main responsibility includes driving the concept of software quality throughout the IE development lifecycle. Jean’s technical interests, while diverse, have gravitated over the years towards three main areas of interest, namely testing, requirements engineering and architectural analysis. Previously, she was the Manager for Software Quality at Siemens Corporate Research for twelve years. She earned her Ph.D. in Computer Science in 1992 researching the topic of selective regression testing strategies, whilst working with the British Telecom Research Labs (BTRL) in Ipswich, U.K.
|P-41||Mapping for Quality – Past, Present, and Future|
Human beings (as opposed to human doings) do not think in straight lines. In other words, few of us can just jump into the flow of a software process without being affected by things that influence us in our daily work. Understanding your own maps will be essential for building a better future and for driving explosive improvements in quality within your organization for years to come. To start the process, we will demonstrate that how you perceive your software world affects how you view quality and the processes that support quality. The difficulty of creating a map is, in itself, a revealing element of the mapping exercise. This session will demonstrate how to create your map and how you can work to improve the processes that influence it. An interactive session will be included in this presentation, and the group will compare their maps and discuss some elements of what we call “Process Intelligence”. Understanding how communities need to come together to create a world view that effectively integrates the people and processes while delivering quality products will determine how well the software industry will deliver products in the next 25 years.
Celeste Yeakley has collaborated at small start-up companies as well as large corporate giants such as Dell and Motorola. She has contributed to her profession by serving on the UT Software Quality Institute board from 1993-2003 as well as in community quality assessments. Celeste is a graduate of the University of Texas at Austin with a Master’s degree in Science & Technology Commercialization.
|P-42||Scaling Quality|
When we develop software to support our business’ customers, we hope to make our products successful. We want to build software that will: be used, make users’ lives easier, do what it is supposed to do, and perform acceptably. If we are fortunate enough to be successful, the user base grows, and we have to learn how to scale the systems to support the increasing load. Everyone knows the three key subjects to study to deliver scalable software: design, coding and testing. However, there’s a fourth scalability subject that’s often overlooked: scaling the development and quality teams themselves. With more cooks in the kitchen, the development environment needs to evolve. Communication within the organization has to move past the ‘prairie-dogs popping over the cubicle wall’ style. The code base has to be organized so that functional teams can build, test and deploy independently. Most importantly, the new process needs to stay as nimble as the process that fostered your current success. The pace of business innovation and deployment of systems to support it cannot be bogged down by process and bureaucracy. Software tools that deliver operational insight and continuous testing aid and inform how the product is developed and delivered at all levels: product management, design, server engineering, production engineering, operations, and of course, quality assurance. This paper examines how Netflix has navigated between the Scylla of growing system demands and the Charybdis of chaos at the hands of ever-larger teams.
Rob Fagen is the Web QA Manager at Netflix. In his professional career of just over 20 years, he has been a developer, a system and database administrator, several flavors of software contractor, and a publisher. Through it all, he has always gravitated toward a software quality role.
|P-43||Building Quality in from the Beginning using Lean Quality Assurance Practices|
From the beginning of software, we have been dealing with defects, which cost us billions of dollars each year. We have tried many ways to detect and remove these defects, but quality has not improved. We can learn from Lean principles to instead “Build Quality In.” The role of Quality Assurance, then, should be to prevent defects from happening. We need to develop a quality process that builds quality into the code from the beginning. By preventing defects from getting into the hands of testers and ultimately our customers, and by helping ensure we are building the right product, we will indeed reduce the costs of defects and better delight our customers. This paper explores “Lean” and how we can apply seven lean quality assurance practices to improve our quality significantly.
Jean McAuliffe (firstname.lastname@example.org) is a senior consultant and trainer for Net Objectives. She was a Senior QA Manager for RequisitePro at Rational Software and has been a Product Manager for two agile start-ups. She has over 20 years of experience in all aspects of software development (defining, developing, testing, training, and support) for software products, bioengineering, and aerospace companies. For the last five years she has been actively and passionately involved in lean agile software development. She has a Master’s in Electrical Engineering from the University of Washington. Jean is a member of the Agile Alliance and a charter member of the Agile Project Leadership Network.
|P-44||Efficient Software Testing Using Statistical Methods|
As modern software grows in size and complexity, testing costs increase as well, calling for efficient methods of software testing. This paper presents an approach to the efficient allocation of testing resources that ensures high quality of a software product at lower testing cost. The large software system is decomposed into smaller parts – software components – for which various software metrics are collected, including a metric that quantifies the dependencies between components. Statistical methods, such as neural networks, are used to predict the fault proneness of each component. Given the collected metrics and the predicted fault proneness, a model based on a constraint satisfaction problem has been developed to achieve an optimal allocation for a given set of resources and risk constraints. The paper concludes with a case study conducted at Microsoft in the Windows Serviceability group. Results are presented in which a reduction in test effort is achieved with minimal risk to product quality.
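As a hedged illustration of the idea (not the paper's actual model), fault proneness can be predicted per component from its metrics and a fixed testing budget spent in proportion to the predicted risk. All component names, metrics, and weights below are invented for the sketch, and a simple logistic function stands in for a trained statistical model such as a neural network:

```python
import math

# Hypothetical per-component metrics: (lines of code, cyclomatic
# complexity, number of dependencies). Values are invented for illustration.
components = {
    "kernel":  (12000, 45, 30),
    "net":     (8000, 30, 22),
    "ui":      (5000, 12, 8),
    "logging": (1500, 5, 3),
}

def fault_proneness(loc, complexity, deps):
    """Toy stand-in for a trained model (e.g. a neural network):
    a logistic function of a weighted metric sum, yielding a 0..1 score."""
    z = 0.0001 * loc + 0.02 * complexity + 0.03 * deps - 2.0
    return 1.0 / (1.0 + math.exp(-z))

def allocate(budget_hours):
    """Split the test budget in proportion to each component's predicted
    fault proneness -- a greedy simplification of the paper's
    constraint-satisfaction formulation."""
    scores = {name: fault_proneness(*m) for name, m in components.items()}
    total = sum(scores.values())
    return {name: round(budget_hours * s / total, 1)
            for name, s in scores.items()}

if __name__ == "__main__":
    print(allocate(100))
```

The riskiest component receives the most hours, so the fixed budget buys the largest reduction in expected escaped defects.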
Alex (Oleksandr) Tarvo is a Software Development Engineer in Test on the Windows Serviceability team at Microsoft. He works on statistical models for risk prediction of software systems and develops software systems for the associated data mining. His professional interests include machine learning and software reliability. Oleksandr received both his BS and MS degrees in Computer Science from Chernigov State Technological University, Ukraine.
|P-47||Building for a Better Automation Future: One Company’s Journey|
Is automated testing one of your pain points? Have you ever wondered how to get from where you are to where you need to be? At Intuit, we have a long history of pursuing test automation. We started with ad hoc, “click and record” methods, which resulted in fragile testware and mixed results. We then progressed to a more structured approach based on software engineering principles, along with organizational changes to reinforce accountability and ownership. We focused on improving the automation infrastructure, improving test reliability, and reducing false negatives. This has resulted in where we are today: a centralized infrastructure team, federated testing, improved efficiency and productivity, and an organizational mindset that values and relies on test automation.
Todd Fitch has held a variety of roles in QA, development and management, in several industries. Todd is currently the Platform Quality Leader for the Small Business Division at Intuit. He holds two patents in test automation and is a member of the Department Advisory Council for the Computer Engineering Department at San Jose State University. He received a B.S. in Computer Science from San Jose State University and an MBA from the Berkeley-Columbia MBA program.
|P-51||Introduction to Software Risk Management|
Karl E. Wiegers
Know your enemy! Risk management has become recognized as a critical success factor in software projects. This presentation provides an overview of software risk management. Risk management is the process of identifying, addressing, and controlling potential problems before they threaten the success of a software project. The benefits of managing risks formally are described, along with five ways that organizations may choose to respond to their risks. The fundamental components of software risk management are outlined: risk assessment, risk avoidance, and risk control (risk management planning, resolution, and monitoring). Many common types of risks that threaten the success of software projects are summarized, and a simple form for documenting risks and planning mitigation approaches is presented. The session concludes with several recommendations for how to begin implementing risk management on any software project.
Karl Wiegers is Principal Consultant at Process Impact, author of many books and articles, and a frequent conference speaker.
|P-54||Software Process Improvement: Ten Traps to Avoid|
Karl E. Wiegers
Even well-planned software process improvement initiatives can be derailed by one of the many risks that threaten such programs. This presentation describes ten common traps that can undermine a software process improvement program. The symptoms of each trap are described, along with several suggested strategies for preventing and dealing with the trap. By staying alert to the threat of these process improvement killers, those involved with leading the change effort can head them off at the pass before they bring your software process improvement program to a screeching halt.
Karl Wiegers is Principal Consultant at Process Impact, author of many books and articles, and a frequent conference speaker.
|P-58||Agile Quality Management|
Agile processes are becoming increasingly popular as they promise frequent deliveries of working code. Traditional Quality Management methods appear to be old-fashioned and unsuitable for this new era of self-organizing teams, emergent systems, and empirical project management. How can we build a bridge between high-level quality management and agile methodologies? How can we incorporate QA experts into agile teams as fully respected and responsible members? Agile thinking and Quality Management grew from the same roots. Many Quality Management experts have already started to adapt their practices to Agile, since Agile provides a far better environment for the creation of high-quality software than legacy approaches do. More and more companies are starting to use agile processes to cope with stuck projects and missed deadlines. Many of them are asking how to keep or gain a high organizational maturity level in this new situation. Some have already managed to implement process frameworks like Scrum in organizations at CMMI Level 3 and higher. We will see who is on our side and how agile processes contribute to better software quality.
Andreas Schliep is a Certified Scrum Practitioner. He works as a passionate Scrum coach and retrospective facilitator for SPRiNT iT, Ettlingen, Germany. He has 8 years of leadership experience as a team and group leader. His development experience is based on projects in the fields of video conferencing, VoIP, and instant messaging, focusing on C++, Java, databases, and network protocols.
|P-59||AJAX Testing – How to test the Asynchronous?|
Manoharan S. Vellalapalayam
Manoharan Vellalapalayam is a Software Architect with the Information Technology group at Intel. Mano’s primary areas of focus are .NET, database, and security application architectures. He has over 15 years of experience in software development and testing. He has created innovative test tool architectures that are widely used for the validation of large and complex CAD software used in CPU design projects at Intel, and has chaired validation technical forums and working groups. He has provided consultation and training to various CPU design project teams in the US, Israel, India, and Russia. He has published many papers and is a frequent speaker at conferences including PNSQC, PQST, and DTTC.
|P-60||Feedback, Analysis and Synthesis based Testing (FAST)|
The computational power of modern-day computers has increased dramatically, making test generation solutions attractive. The traditional test automation approach relies on handcrafting test scenarios, which makes test development a time-consuming process. The FAST methodology solves this problem by enabling the generation of highly effective tests with full verification, which makes it very attractive for use in functional testing. The feedback and analysis systems are unique components of FAST that take the generation technique to the next level by adding directed generation and integrated verification. Furthermore, FAST enables a system where tests are reusable in multiple contexts, giving extensive coverage of cross-component and cross-feature testing at little additional cost. This presentation describes the FAST test generation technique and how this approach enables efficient test generation, illustrated with a real-world example from the Visual C# team, and discusses the benefits and how it can be applied to other test domains.
Vijay Upadya has been involved in software testing for over 9 years, the last 7 at Microsoft. He currently works in the Microsoft Visual C# group, focusing primarily on test strategy, test tools development, and test process improvements for the team. He spoke at the 2006 QAI International Quality Conference in Toronto.
|P-62||User Acceptance Testing – A Context-Driven Perspective|
User Acceptance Testing is a part of most test plans, yet few people talk about what it means and what it requires. Is it obvious what user acceptance testing means? Is there no effective difference between user acceptance testing and other testing activities? Or might there be so many possible interpretations of “user acceptance” that the term is effectively meaningless? Hang around a software development project for long enough and you will hear two sentences: “We need to keep the customer satisfied” and “The customer doesn’t know what he wants.” A more thoughtful approach might be to begin by asking a question: “Who IS the customer of the testing effort?” In this presentation, Michael Bolton will establish that there is far more to that question than many testing groups consider, and will show that “user acceptance testing” is not meaningless if people achieve consensus on a contextual framework and understand what they mean by user, acceptance, and testing.
Michael Bolton is the co-author (with senior author James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael has over 17 years of experience in the computer industry testing, developing, managing, and writing about software. He was with Quarterdeck Corporation for eight years, delivering the company’s flagship products and directing project and testing teams worldwide. Michael has been teaching software testing and presenting at conferences around the world for eight years. He is Program Chair for the Toronto Association of System and Software Quality, and a co-founder of the Toronto Workshops on Software Testing. He has a regular column in Better Software Magazine and writes for Quality Software (the magazine published by TASSQ).
|P-63||An Exploratory Tester’s Notebook|
Explorers and investigators throughout history have made plans, kept records, written logbooks, and drawn maps, and have used these techniques to report to their sponsors and to the world. Skilled exploratory testers use similar approaches to describe observations, to record progress, to capture new test ideas, and to relate the testing story and the product story to the project community. By focusing on what actually happens, rather than what we hope will happen, exploratory testing records can tell us more about the product than traditional pre-scripted approaches do. In this presentation, Michael Bolton invites you on a tour of his exploratory testing notebook and demonstrates more formal approaches to documenting exploratory testing. The tour includes a look at an informal exploratory testing session, simple diagramming techniques, and a Session-Based Test Management session sheet – techniques that can help testers to demonstrate that testing has been performed diligently, thoroughly, and accountably.
See Michael’s bio for P-62.
|P-64||Two Futures of Software Testing|
In one possible future of software testing, testers are gatekeepers of product quality. Testing follows a rigorously controlled process, investigative testing is banned, and change to the product is resisted. This is a dark vision of the future. The scary part? It looks a lot like today. In another view, testers are active investigators, critical thinkers, and skilled, valued members of the project team. They provide important, timely, credible information to managers so that THEY can make sound and informed business decisions. Most importantly, testers embrace challenge and change, adapting practices and strategies thoughtfully to fit the testing mission and its context. Where are we now, and where are we going? In this interactive one-hour presentation, Michael shares his visions of the futures of software testing, and the roles of the tester in each of them. This entertaining presentation includes dialogue and a brief exercise, encouraging discussion and debate from the floor.
See Michael’s bio for P-62.
|P-65||Customer Interaction and the role of the Tester – Going beyond the release|
Most of us understand and appreciate the role of the tester in the life cycle of the product – from conceptualization to final shipment. However, the most important phase of the product has only just begun: after the formal product lifecycle, when customers actually start to use it and begin giving feedback, asking questions, and running into issues. This paper explores this final and crucial phase of the product and the role of the tester, based on the author's own experiences while working on the Windows Collaboration Technologies (Peer-Peer) team. The author will talk about how his team records and plans to use information from their interaction with customers when planning and developing future product releases. Armed with a breadth (and not just depth) of knowledge about product internals, known issues, and tested real-world scenarios, they found that tester participation in community forums, blogs, and support calls is not only valuable but also appreciated by the customer. The paper discusses a few low-cost ways to get involved with customers. It also explains that, as with other product cycle activities, it is important to follow standard product cycle processes such as tracking, prioritizing, scheduling, and regularly reviewing/analyzing results for long-term planning.
Bio: Not currently available.
|P-67||State-driven Testing|
It is a widely known fact that the cost of a defect increases enormously as it progresses through the phases of the software lifecycle. Catching defects early is therefore very important and beneficial. State-driven testing is a technique that we have successfully used to minimize defect leakage from the early stages of software development. In this presentation, we will explain what state-driven testing is and how it can be applied in real-life situations with great benefit.
The presentation will define state-driven testing, describe the technique and its aspects in detail, explain its benefits, challenges, and recommendations, and walk through some examples with actual demonstrations of the technique.
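As a generic, hypothetical sketch of the idea (not the presenters' actual tool), the product can be modeled as a finite state machine and a test sequence derived for every transition, so that unreachable states and untestable transitions surface before any code is written. The editor states and events below are invented for illustration:

```python
from collections import deque

# Hypothetical state model of a document editor:
# (source state, event) -> destination state.
transitions = {
    ("closed", "open"): "clean",
    ("clean", "edit"): "dirty",
    ("dirty", "save"): "clean",
    ("dirty", "close"): "prompt_save",
    ("prompt_save", "save"): "closed",
    ("prompt_save", "discard"): "closed",
    ("clean", "close"): "closed",
}

def transition_cover(start):
    """Breadth-first search from the start state, returning one event path
    per transition. Each path is a candidate test case that drives the
    product through that transition; transitions never reached reveal
    dead or unreachable states early."""
    paths = {}
    seen = {start: []}          # shortest event path to each state
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, event), dst in transitions.items():
            if src != state:
                continue
            path = seen[state] + [event]
            paths.setdefault((src, event), path)
            if dst not in seen:
                seen[dst] = path
                queue.append(dst)
    uncovered = set(transitions) - set(paths)
    return paths, uncovered

if __name__ == "__main__":
    paths, uncovered = transition_cover("closed")
    for (src, event), path in sorted(paths.items()):
        print(f"{src} --{event}-->  via  {' -> '.join(path)}")
    print("uncovered transitions:", uncovered)
```

Here every transition is reachable, so the model yields seven short test sequences; a non-empty `uncovered` set would flag a design defect before testing even begins.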
Sandeep has over 11 years of software engineering experience in various roles including development, QA, and management of Development and QA teams. His domain experience includes healthcare, semiconductor, finance, supply chain event management (SCEM) and shrink-wrap software. Sandeep currently holds the position of Quality Leader for an organization in Small Business Division at Intuit. He holds a patent in software development (SCEM). He has a B.S. in Computer Science from Regional Engineering College, Kurukshetra, India and an M.S. in Software Engineering from San Jose State University.
|P-70||Testing Web Services|
There are many aspects of testing a service that differ from the traditional testing of shrink-wrapped products or internal applications, and many web-service test techniques are simply not feasible for shrink-wrapped products. I will describe these techniques and indicate how they are used. Observe how users behave when given different interfaces on the real production system: you can fork your production traffic as a perfect user profile, or selectively extract it for an accelerated user-profile test. Understand how to test the monitoring of your service and how service monitoring can help testing improve. To keep services continuously available, you must be able to upgrade services even as the older versions continue to run. Testers must understand the implications for both messages and data stores when operating with multiple versions during upgrades.
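One mixed-version concern the abstract raises can be illustrated with a hedged sketch (the message shapes are assumed for illustration, not Microsoft's actual protocol): a "tolerant reader" that accepts messages from both the old and the new service version by ignoring unknown fields and defaulting missing ones, which is what testers must exercise during a rolling upgrade:

```python
# Hypothetical v1 and v2 message schemas (field name -> default value);
# field names are invented for illustration.
V1_FIELDS = {"user_id": None, "query": ""}
V2_FIELDS = {"user_id": None, "query": "", "locale": "en-US"}

def tolerant_read(message, known_fields):
    """Accept a message from any service version: drop fields this
    version does not know about, and fill absent fields with defaults.
    This lets old and new versions run side by side during an upgrade."""
    result = dict(known_fields)          # start from this version's defaults
    for key, value in message.items():
        if key in known_fields:          # ignore unknown (newer) fields
            result[key] = value
    return result

# A v2 sender talking to a v1 reader: the extra 'locale' field is dropped.
v2_msg = {"user_id": 42, "query": "cats", "locale": "fr-FR"}
print(tolerant_read(v2_msg, V1_FIELDS))

# A v1 sender talking to a v2 reader: the missing 'locale' is defaulted.
v1_msg = {"user_id": 7, "query": "dogs"}
print(tolerant_read(v1_msg, V2_FIELDS))
```

Tests for an upgrade would then replay traffic in all four sender/reader version pairings and assert that each reader produces a valid message.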
Keith plans, designs, and reviews software architecture and software tests for Microsoft. He is currently a Test Architect on the Protocol Tools and Test Team (PT3). Keith joined Microsoft after working on distributed systems in Silicon Valley for 20 years. Keith was a Principal Software Development Engineer in Test (SDET) on the Live Search team. Keith’s experiences in the Live Search group, running the MSN Test Patent group, and serving as track leader on Testing Web Services for Test Day at Microsoft (where he met many other testers with perspectives on this topic) form the basis for this paper. With over 25 years in the field, Keith is a leader in testing methodology, tools technology, and quality process. Keith has been active in the software task group of ASQ, participated in various standards on test methods, published several articles, and presented at many quality and testing conferences. Keith has a BS in computer science from Cornell University.
Modern software development is mostly a cooperative team effort, generating large amounts of data in disparate tools built around the development lifecycle. Making sense of this data to gain a clear understanding of project status and direction has become a time-consuming, high-overhead, and messy process. This paper shows how we have applied Business Intelligence (BI) techniques to address some of these issues. We built a real-time data warehouse to host project-related data from different systems. The data is cleansed, transformed, and sometimes rolled up to facilitate easier analytics operations. We built a web-based data visualization and dashboard system to give project stakeholders an accurate, real-time view of project status. In practice, we saw that participating teams gained a better understanding of their projects and improved their project quality over time.
Rong Ou is a Software Engineer at Google working on engineering tools to help other engineers build better software faster. Before coming to Google, he was a Principal Software Architect at Sabre Airline Solutions, where he had been an enthusiastic proponent and active practitioner of Extreme Programming (XP) since 2000. He has an MS in CS from The University of Texas at Austin and a BA in Physics from Peking University.
|P-73||Flexing Your Mental Muscle to Obtain Defect-Free Products|
Much time is spent on improving our engineering processes; however, we have seen only small gains over the past twenty years in improving our on-time delivery and the quality of our ever more complex products. Somewhere, the key ingredients for winning in product development have escaped us. This paper will explore the unlocking of our mental abilities to improve product delivery. Taking lessons from sports psychology, we will explore relatively simple concepts like getting better every day, the power of belief, visualization, goal setting, motivation, and the negative power of excuses. We will apply some of these concepts to product development and briefly discuss some of the processes that we should be spending more time on to better achieve defect-free products.
Rick Anderson is a Senior Software Engineering Manager at Tektronix, Inc., a world leader since 1946 in test and measurement equipment located in Beaverton, Oregon. In this role, Rick leads a large multi-site team of Software Engineers and Software Quality Engineers in the pursuit of defect-free oscilloscope software that “wows” our customers. Prior to this, Rick worked for a small start-up doing interactive television systems and a large telephone switch manufacturer. Rick is a member of the IEEE and ASQ. He also serves on the Industry Advisory Boards at Oregon State University and Arizona State University. In his spare time, he coaches highly competitive youth sports. Rick can be reached at email@example.com
|P-74||A Graphical Display of Testing Status for Complex Configurations|
Representing the status of software under test is complex and difficult, and is compounded when there are many interacting subsystems and combinations that must be tracked. This paper describes a method developed for a one-page representation of the test space for a large and complex set of product components. The latest project this was applied to had 10 interdependent variables and over 250 components. Once the components are identified and grouped, the spreadsheet can be used to show the configurations to be tested, record test outcomes, and represent the overall state of testing coverage and outcomes. The paper uses a sanitized example modified from an actual test configuration.
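The one-page idea can be sketched in a hedged, much-simplified form (the paper's actual spreadsheet handled 10 variables and 250+ components; the variables and statuses below are invented): enumerate every configuration from the interdependent variables and render each with its recorded outcome, so coverage gaps show up as unmarked rows.

```python
from itertools import product

# Hypothetical configuration variables (trimmed for illustration).
variables = {
    "os": ["linux", "windows"],
    "db": ["oracle", "mysql"],
    "locale": ["en", "ja"],
}

# Test outcomes recorded per configuration; untested entries default to '-'.
status = {
    ("linux", "oracle", "en"): "PASS",
    ("windows", "mysql", "ja"): "FAIL",
}

def coverage_grid():
    """Render a one-page text grid of every configuration and its test
    outcome, analogous to the paper's spreadsheet representation."""
    names = list(variables)
    header = "  ".join(f"{n:<8}" for n in names) + "  status"
    rows = [header]
    for combo in product(*variables.values()):
        mark = status.get(combo, "-")
        rows.append("  ".join(f"{v:<8}" for v in combo) + f"  {mark}")
    return "\n".join(rows)

if __name__ == "__main__":
    print(coverage_grid())
```

With 2 x 2 x 2 values the grid has eight configuration rows; the six rows still marked '-' are the untested combinations a reviewer can spot at a glance.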
Douglas has more than thirty years of experience as a consultant, manager, and engineer in the computer and software industries. He is currently a Software QA Program Manager for Hewlett-Packard. He is extremely active in quality communities: he has been Chair and Program Chair for several local, national, and international quality conferences, and has been a speaker at numerous conferences, including PNSQC. Douglas is a Past Chair of the Santa Clara Section of the American Society for Quality (ASQ) and of the Santa Clara Valley Software Quality Association (SSQA), a task group of the ASQ. He is a founding member and past Member of the Board of Directors of the Association for Software Testing (AST), and a member of the ACM and IEEE.
|V-01||“Lights Out Testing” for ERP Applications|
Sarat Addanki
The testing approach for an ERP application is very different from that for a custom-built application. The ubiquity and sheer scale of an ERP system, along with the business process knowledge it demands, make quality assurance of an ERP system a highly complex undertaking. When operating within the constraints of time, effort, and cost, it is a tough challenge to ensure the quality of the system at the highest level. A strategy called “Lights Out Testing” addresses these challenges; it relies on the diligent use of automation in test execution and validation to improve the scope of testing and reduce testing cycle time. The solution is a three-pronged approach that focuses on Knowledge Management, the QA Process, and Test Automation, which can be tackled in the form of automated and configurable execution and validation.
Sarat Addanki is the Vice President, Strategic ERP Solutions. He has 13 years of experience in the development, implementation, and testing of ERP applications; for the past eight years he has focused strictly on SAP. Sarat has pioneered the development of testing strategies and test automation solutions for complex SAP implementations, including at McKesson Corporation. At Arsin Corporation, he designed a suite of SAP test automation solutions to accelerate the testing of SAP implementations. He is a Project Management Professional (PMP), as certified by the Project Management Institute. Sarat graduated from Osmania University with a degree in Computer Science and Engineering.
World Trade Center
121 SW Salmon St.
Portland, OR 97204
Copyright PNSQC 2020