|P-4||Data Mining for Process Improvement|
Paul Below, EDS
What do you do if you want to improve a process and you have 100 factors that are candidate predictors? How do you decide where to direct your causal analysis effort? Similarly, what if you want to create an estimating model or a simulation, and you have so many factors you do not know where to start? Data mining techniques have been used to filter many variables down to a vital few in order to focus causal analysis and build model-based estimates. Specific software engineering examples are provided in four categories: classification, regression, clustering, and association.
When creating a predictive model to understand a process, the primary challenge is where to start. Regardless of the variable being estimated (e.g., effort, cost, duration, quality, staff, productivity, risk, size), there are many factors that influence the actual value and many more that could be influential. One or more large datasets of historical data can be viewed as both a blessing and a curse: the existence and accessibility of the data are necessary for prediction and learning, but traditional analysis techniques do not provide optimum methods for identifying key independent (predictor) variables from a large pool of variables. Unfortunately, the Lean Six Sigma body of knowledge does not include data mining as a subject area. Data mining techniques can be used to help thin out the forest, so that we can examine the important trees.
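As a minimal illustration of the kind of variable filtering described above, a random forest's feature importances can surface a vital few predictors from a large pool; the dataset, factor count, and target below are invented for illustration, not the author's data or method.

    # Hypothetical sketch: filter 100 candidate predictors down to a
    # "vital few" via random-forest feature importances. All data is
    # synthetic for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 100))     # 500 projects, 100 candidate factors
    y = 3 * X[:, 7] - 2 * X[:, 42] + rng.normal(size=500)  # effort driven by factors 7 and 42

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    ranked = np.argsort(model.feature_importances_)[::-1]
    print("Top candidate predictors:", ranked[:5])  # factors 7 and 42 should surface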
Paul Below has over 25 years of experience in measurement technology, statistical analysis, forecasting, Lean Six Sigma, and data mining. He has provided innovative engineering solutions, as well as teaching and mentoring internationally, in support of multiple industries. He serves as an analyst for EDS, an HP Company, where he provides executive leaders and clients with statistical analysis of operational performance, helping strengthen competitive position through process improvement and predictability.
Mr. Below is a Certified Software Quality Analyst and a past Certified Function Point Specialist. He is a Six Sigma Black Belt. He has been a course developer and instructor for Estimating, Lean Six Sigma, Metrics Analysis, and Function Point Analysis, as well as for statistics in the Master of Software Engineering program at Seattle University. He is a member of the IEEE Computer Society, the American Statistical Association, the American Society for Quality, and the Seattle Area Software Quality Assurance Group, and has served on the Management Reporting Committee of the International Function Point Users Group. He has one US patent and two pending.
|P-8||Retrospective Analysis and Prioritization Areas for Beta Release Planning Improvement|
Ajay Jain, Adobe Systems Inc
The beta release of a product is an official prerelease version of the product. At this stage, the product is generally considered to be fully functional and in a “close-to-release” state. Beta participation can be limited to a small group of select users invited by the product company, or opened to public scrutiny.
This paper presentation provides a retrospective analysis, taking a project and product as a case study, and evaluates the testing effort invested by the testing team in beta or prerelease testing, making a statistical comparison with the regular feature-testing effort during a complete product cycle. The data variables used include resources, time spent in bug triaging, beta readiness certification, and documentation. The data was analyzed to measure the effectiveness of the beta or prerelease testing program in terms of the product quality improvement achieved. Post-analysis recommendations are provided to help a beta testing program achieve a high-quality product without overshooting the testing cost.
The strategies discussed in the paper presentation offer a test manager the ability to achieve a higher ROI (Return on Investment) from a beta release program, not only in terms of testing effort spent, but also in improving product functionality and optimizing organizational cost.
Ajay Jain has over 9 years of industry experience in the mobile and desktop publishing software test and validation domain. He is currently working with Adobe Systems Incorporated as a Quality Engineering Manager, leading and managing the Instantiation (Deployment, Provisioning, and Services) QE team for the Adobe Creative Suite family of products. Prior to Adobe, Ajay worked with industry majors like Lucent Technologies (Bell Labs Development Center) and Skyworks Inc., where his experience ranged from building a start-up testing team from scratch to a resource-optimized, efficiency-driven team certifying multiple product lines.
Ajay has an active interest in knowledge sharing on best processes and practices. He has written and published several papers for Quality Matters, the Adobe Quality Newsletter (internal), and has hosted the birds-of-a-feather session at the Adobe QE Summit. Two of his papers, “Power of Glide Path: Statistical approach for controlling and adding Predictability in a testing project” and “Well processed, well done – Suggesting 5 light-weight processes for optimizing software test management,” earned him speaker invitations to the 8th All India Software Testing Conference 2008 and the SPICE conference 2009. He holds a Meritorious Disclosure award for a patent invention on a mobile call-handling feature. Ajay holds a B.Tech (Engineering) degree from the Delhi Institute of Technology and a Specialized Diploma in Business Administration from the Institute of Management Technology.
|P-9||New Challenges to Quality in the 24×7 Enterprise IT Shop: Post-Integrated Business Automation Systems|
Al Hooton
For several decades, the 24×7 IT shop has been considered the ugly stepsister of software product development in the eyes of most technology professionals. There are numerous differences in expectations, including a need for only “second-string” technical professionals, the lack of need to utilize formal techniques, and a lack of desire or need to innovate. These differences are often very real for many reasons, the primary one being “pre-integration” in traditional IT environments. Due to the lack of integration points at the edges of legacy systems, and the proprietary integration architectures of mainstream business application systems, successful 24×7 IT shops selected a single hardware vendor and a single software vendor; many times they were the same vendor. These vendors provided everything the company needed, ensuring all the pieces worked together, often at high cost due to engaging professional services. IT professionals did not need to get too far under the hood, nor could they in most cases; they were relegated to the role of managing these systems, calling the vendor at the first hint of trouble or when functionality changes were required. The most skilled person in the shop was often the DBA, whose mastery of the all-important database was considered near voodoo by the rest of the team.
Reality has been changing quickly over the last several years, and the IT Engineer now needs to be just as competent across the breadth of computer science knowledge as the Product Development Engineer. There are several reasons for this, most importantly the combined forces of Service-Oriented Architectures (SOAs) and other new approaches to system integration, Software as a Service (SaaS) delivery mechanisms, free open-source software (FOSS) driving shifts in total cost of ownership, and the recognition that IT shops need to approach core service delivery and non-core service delivery in different ways. IT professionals must now integrate business application systems from a variety of vendors and deliver value from these combined systems more quickly than in the past. Delivering systems in this post-purchase integration world requires the same skills and tools utilized by successful Product Development organizations. Unfortunately, many IT professionals are not familiar with the large body of knowledge available to be leveraged, nor are they yet sure how to apply this knowledge.
This paper presentation will discuss the primary forces driving change for 24×7 IT departments. It will then focus on why these forces have emerged, and will examine current-day product development approaches that may be applicable. Finally, the author provides several specific suggestions and examples of ways 24×7 IT departments can start down the path to their future as highly regarded technology teams.
Al Hooton has been active professionally in software technology for almost 30 years.
John Ruberto, Intuit
Moving Quality Forward can be a daunting task. Making changes to the status quo is already difficult, and that is before you consider the complexity and interconnectedness of your existing practices. When pressed to make changes, it can seem like an overwhelming proposition.
This paper presentation will show how our team responded to such a challenge: to cut the system test duration from 12 weeks down to 4 weeks. When initially presented with this challenge, we resisted. Our brainstorming sessions turned into justifications of the status quo and explanations of why it was an impossible task. We always needed every minute of those 12 weeks, so of course it was impossible to remove 67% of the time. The breakthrough came when we built a model of the time we spent in the system test phase. That model allowed us to break the large, complex problem into a series of smaller, easier-to-implement solutions. The model we developed expressed the testing duration as a function of:
Now, instead of racking our brains on how to reduce the duration from 12 weeks, we could focus on a single variable at a time. For example, “how can we reduce the number of test cases executed, while maintaining the same quality levels?” By asking these questions we were more productive, and it led to implementing a number of changes. In the end, we succeeded in achieving a 4-week system test cycle, which enabled our company to release 5 versions of our product each year, instead of 1 release every 9 months. This improvement resulted in delivering more value, faster, to our customers.
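The paper's actual model variables are not reproduced here; as a hedged sketch with invented variables, a duration model of this general shape turns one big problem into several independently attackable ones.

    # Illustrative only: invented variables showing the shape of such a
    # model, where each argument becomes a separate improvement target.
    def system_test_weeks(num_tests, hours_per_test, testers,
                          regression_passes, fix_turnaround_weeks):
        execution_weeks = (num_tests * hours_per_test * regression_passes) / (testers * 40.0)
        return execution_weeks + fix_turnaround_weeks

    # E.g., cutting num_tests (while holding quality constant) attacks one
    # variable; adding testers or shortening fix turnaround attacks others.
    print(system_test_weeks(num_tests=2400, hours_per_test=0.5,
                            testers=10, regression_passes=2,
                            fix_turnaround_weeks=2))   # 8.0 weeks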
This paper presentation provides context about our industry, process, and business situation; describes how we created the model based on our processes and practices and how we used the model to create a series of smaller, easier problems to solve; and finally describes the techniques utilized, including:
John Ruberto has been developing software in a variety of roles for 23 years. He is currently managing a software quality organization at Intuit, Inc.
|P-13||Web Security Testing with Ruby and Watir|
Jim Knowlton, McAfee
To verify the quality of web applications today, security testing is a necessity. But how to cover it all? SQL injection, cross-site scripting, buffer overflow… the list goes on. Automating some of this testing would be great, but where to start?
Ruby combined with Watir makes a great toolset for security testing of web apps:
This paper presentation will provide:
Participants attending this presentation will come away with practical ideas for implementing security web testing in their own environments.
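As a hedged illustration of the kind of automated probe such a toolset can drive (rendered in Python here for consistency with the other sketches in this program, rather than Ruby/Watir; the URL, form fields, and payloads are invented):

    # Sketch: probe a login form for naive SQL injection handling.
    # Run only against systems you are authorized to test.
    import requests

    PROBES = ["' OR '1'='1", "'; DROP TABLE users;--"]

    def probe_login(base_url):
        for payload in PROBES:
            resp = requests.post(f"{base_url}/login",
                                 data={"username": payload, "password": "x"},
                                 timeout=10)
            # A database error leaking into the page, or a server error,
            # suggests input is not being sanitized.
            if "SQL" in resp.text or resp.status_code == 500:
                print(f"Possible injection point with payload {payload!r}")

    probe_login("http://test.example.com")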
Jim Knowlton is a QA Automation Engineer with McAfee, where he drives test automation efforts on McAfee ePolicy Orchestrator. He has over 18 years of experience in the software industry, including clients such as Novell, Symantec, and Nike. He is the author of Python: Create, Modify, Reuse (Wrox, 2008). Follow his blog at www.agilerubytester.com.
|P-14||Distributed Team Collaboration|
Kathy Milhauser, Portland State University
Working in distributed teams is becoming increasingly common as companies extend and diversify their operations across geographic boundaries. Learning to form and sustain high-performing distributed teams, with members in multiple locations and time zones who represent diverse cultural perspectives, requires new skills and new approaches to project team collaboration.
This paper presentation outlines the drivers causing this transformation in project work, introduces a variety of models that represent distributed team configurations, summarizes some of the challenges inherent in leading and contributing to distributed teams through a set of case studies, and suggests practices that are emerging to optimize distributed team performance.
Kathy Milhauser has been an adjunct faculty member of Portland State University’s Oregon Master of Science in Software Engineering program (OMSE) since 2003. Kathy is also the Program Director for the MS programs in Technology Management and Project Management at City University of Seattle. Kathy’s professional background includes 20 years at Nike, Inc., where she led projects and teams in the global IT and HR organizations. Starting with Nike in the late 1980s, Kathy had the opportunity to ride a wave of phenomenal growth both in the company and in technology. Beginning as a member of a small IT team, Kathy participated in the installation of the first dozen PCs in the company, configuring software and teaching end users how to use new tools like email and file sharing on the company’s first networks. She eventually built and managed Nike’s first corporate Help Desk, expanding it to provide 24×7 coverage to employees in offices in Nike’s U.S., Asian, and European headquarters. As the organization grew, Kathy transitioned to applications development, eventually leading teams that designed custom applications to manage Nike’s proprietary product data. Kathy’s project management skills were developed during a stretch assignment with a small product team that had the bright idea of allowing consumers to create customized shoes on the Internet during the e-commerce boom, leading to what is now a successful business called NIKEiD.com.
Kathy’s passion for learning led her to pursue her graduate degree from Pepperdine University’s award-winning Online Master of Arts in Educational Technology in 2003. She then moved into Nike’s HR organization on a mission to introduce technology-enabled learning to a globally distributed workforce. Her accomplishments included leading an e-learning initiative to capture product process knowledge for future product innovators and engineers, implementing PeopleSoft’s Enterprise Learning Management System in Nike HR offices globally, leading Nike’s Lean Enterprise training initiative, and facilitating the development of a collaborative community of practice that included trainers in 7 countries in Nike’s partner manufacturing offices in Asia. Kathy has been active in Elliott Masie’s learning consortium since the early 1990s, sharing best practices and participating in research with over 200 global organizations. Kathy holds a PMP certification from the Project Management Institute, an MA in Educational Technology from Pepperdine, and is currently working on a Doctor of Management degree at George Fox University, where her research focuses on collaboration models for distributed teams. She has published in the Software Association of Oregon’s online journal and in an edited book on learning in virtual settings.
|P-15||ProdTest: A Production Test Framework For SAAS Deployment At Salesforce.com|
Bhavana Rehani, Salesforce.com
Kei Tang, Salesforce.com
Salesforce.com provides Software-As-A-Service (SAAS) for CRM applications and is an innovator for the Platform-As-A-Service (PAAS) model. These on-demand models present unique challenges during new feature deployment. For example, the R&D group needs to ensure very high levels of availability for our customers, without which it would be impossible for them to conduct their critical business operations. However, deployment for each major release requires some system downtime for upgrade, installation, and sanity testing of that release.
Previously, sanity testing was a time-consuming manual process involving numerous QA engineers. To shorten downtime, we needed to reduce the testing time of a deployment. The QA organization developed a new automation framework, ProdTest, for executing tests on the production environment. This framework allows engineers to write both API and UI tests that can easily be run on both internal and production environments. Since its initial rollout, ProdTest has gradually evolved into a complex and sophisticated tool for writing automated tests. It has become an integral part of our deployment process, providing both measurable and intangible benefits, including a reduction in downtime and fewer people involved in a deployment.
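ProdTest itself is not shown in this abstract; as an invented sketch of the underlying idea (not Salesforce.com's actual framework), a single read-only sanity test can be parameterized over internal and production endpoints.

    # Hedged sketch: one test, two environments. The URLs and the
    # TARGET_ENV variable are illustrative assumptions.
    import os
    import urllib.request

    ENVIRONMENTS = {"internal": "https://internal.example.com",
                    "production": "https://www.example.com"}

    def sanity_check(base_url):
        # Production-safe, read-only check: the login page must respond.
        with urllib.request.urlopen(base_url + "/login", timeout=10) as resp:
            assert resp.status == 200, f"{base_url} failed sanity check"

    sanity_check(ENVIRONMENTS[os.environ.get("TARGET_ENV", "internal")])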
This paper presentation discusses the technical, process, and people challenges faced while building ProdTest, the benefits to the organization, and insight into our future direction.
Bhavana Rehani is a Senior Quality Assurance Engineer at Salesforce.com.
|P-17||Improving Your Quality Process|
John Balza, Balza Consulting
Organizations are often faced with the problem of improving their quality process and justifying the investments for inspections and improved testing. This paper presentation focuses on how Hewlett-Packard developed an improved quality process by:
John Balza is currently a software quality consultant and teacher, working with software companies to improve their software quality using metrics and defect analysis. Previously, John was the Quality and Productivity Manager at Hewlett-Packard in Fort Collins, Colorado, where he was responsible for overall product quality for an organization of 1500 engineers in six geographic areas developing HP’s version of UNIX, HP-UX. In this role, John reduced customer defects by over 80% and improved productivity by 30%. These changes were made by gaining management sponsorship for key process improvements and then facilitating teams to make these improvements. This included creating management metrics to track quality throughout the lifecycle, applying agile methods to large complex projects, and assisting the lab organizations to achieve levels 2 and 3 of the Capability Maturity Model.
John is a frequent presenter at software quality conferences including the Pacific Northwest Software Quality Conference, International Software Quality Conference, World Congress for Software Quality, Rocky Mountain Quality Conference, Denver SQUAD Conference, and PSQT.
|P-18||Reconfiguring the Box: Thirteen Key Practices for Successful Change Management|
Leesa Hicks, Tektronix
Existing processes help organizations work in an organized and predictable way, but they also provide resistance points for improving how they work. Tektronix “standardized” on IBM Rational ClearCase for configuration management years ago, but because it had no built-in process automation, each product team developed their own customized processes and automation for using ClearCase.
Ironically, the lack of process automation in the early versions of ClearCase was actually part of its initial appeal. However, both ClearCase and development organizations have matured greatly since then, and now ClearCase provides automated processes with Unified Change Management (UCM). More recently, Tektronix adopted IBM Rational ClearQuest for change tracking, which adds yet more value when integrated with ClearCase UCM. Even with all the new capabilities and automation that UCM provides, motivating development teams to change how they use ClearCase has been challenging, in spite of the issues they have been experiencing with their existing ClearCase usage. They still need to be convinced that the value added by adopting UCM outweighs the cost of changing how they work, even though they are using the same tools.
This paper presentation describes how we have helped development teams break out of the constraints of their existing configuration management processes and improve their development processes through successful adoptions of UCM, including 13 key practices used to bring about these beneficial changes.
Leesa Hicks is a Principal Engineer in the Software Engineering Services Group at Tektronix. She has worked in many different industries in various software engineering roles, but a common thread amongst all her varied positions includes work in process improvement. Leesa currently specializes in configuration management and change tracking tools. She has a Master’s degree in Computer Science from UCSD.
|P-19||Moving to an Agile Testing Environment:|
What Went Right, What Went Wrong
Ray Arell, Intel
About a year ago, I went to my software staff and declared, “We are going Agile!” On a long flight to India, I had read an Agile project management book, and like all good reactionary development managers, I was sold! Since then, our adaptation of the Scrum framework has taken shape, but not without strain on our development, test, and other QA processes.
This paper presentation focuses on a retrospective of what went right and, more importantly, what went wrong as we evolved to our new development/test process, and the effect on our team. This includes:
Perhaps our success can persuade you that the shift to Agile is the way to go, by giving your organization a little insight into what you may experience.
Ray is a Senior Engineering Manager and Agilist at Intel. He has over 21 years of hardware and software development, validation, and management experience. During his tenure, he has worked on a variety of teams focused on CPU, chipset, and graphics system-level testing. Today he manages an Agile software engineering team in Intel’s Digital Office Platform Divisions, and he is a leading force in the Agilization of Intel. Ray is also co-author of Change-Based Test Management: Improving the Software Validation Process (ISBN: 0971786127), has delivered keynotes at STANZ 2008 Wellington/Sydney and QA&Test 2005/6 Bilbao, and has spoken at many other events.
|P-21||Why Tests Don’t Pass (or Fail)|
Douglas Hoffman, Software Quality Methods, LLC
When we run a test, the result is either finding a bug (the software failed the test) or not finding a bug (the software passed the test). We have something to investigate and log into the bug database, or we do not. Unfortunately, experience repeatedly shows us that passing a test does not represent an absence of bugs. It is possible for tests to miss bugs. It is possible not to notice an error, even though a test surfaces it. Passing a test really means that we did not detect anything out of the ordinary.
Likewise, failing a test is no guarantee of the presence of bugs. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable reasons that are not due to anything wrong with the software under test. Typically, failing means that something noticed warrants further investigation and possible reporting.
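A minimal sketch of this framing, with invented names, replaces the binary verdict with outcomes that say what was actually observed.

    # Sketch: report what was observed, not a false certainty.
    from enum import Enum

    class Outcome(Enum):
        NO_ANOMALY = "no anomaly detected (not proof of correctness)"
        INVESTIGATE = "anomaly observed; investigate before reporting a bug"

    def run_check(actual, expected):
        return Outcome.NO_ANOMALY if actual == expected else Outcome.INVESTIGATE

    print(run_check(2 + 2, 4).value)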
This paper presentation explores some of the implications of this view, suggests some ways to benefit from this new way of thinking about test outcomes, and concludes with an examination of how to use this viewpoint to better prepare tests and report results.
Douglas Hoffman has over 30 years of experience as a consultant, manager, and engineer in the computer and software industries, built on a solid foundation in computer science and electrical engineering. He provides organizational assessments, strategic quality planning, and test planning services. His recent technical work has focused on test oracles and advanced automation architectures. He is an ASQ Fellow, member of ACM and IEEE, holds ASQ Certificates in Software Quality Engineering and Manager of Quality/Organization Excellence, and has been a registered ISO Lead Auditor. He holds credentials for teaching Computer Science at the college level and has done so at the University of San Francisco, UC Santa Cruz Extension, and Howard University.
Douglas is a Past Chair of the American Society for Quality (ASQ) Silicon Valley Section and the Santa Clara Valley Software Quality Association (SSQA), a task group of the ASQ. He is a member of the Board of Directors in the Association for Software Testing (AST) and committee member for the Pacific Northwest Software Quality Conference (PNSQC). He is also a regular speaker at software quality conferences including PNSQC, STPCon, STAR, Quality Week, and others.
|P-25||Reducing Test Case Bloat|
Lanette Creamer, Adobe Systems
We may think we are ready to move on to new and innovative features. However, if we do not deal with the past, it can easily come back to haunt us, slowing down new projects and robbing our testing time unexpectedly, often to the point that testing becomes the bottleneck that slows innovation to a crawl.
For those of us who work on software that already exists, exciting new functionality and improvements, along with compatibility with new platforms, are the main things that drive upgrades. However, if end users cannot trust the quality of the legacy features they rely on, they will be reluctant to upgrade, or worse, your new versions will get a reputation for being unstable, harming overall adoption. In some cases users may downgrade their software to an earlier version because they are unhappy with the quality of the newer release, or request an earlier version they feel is reliable.
This paper presentation is about the subjective and difficult part of testing, which has no provably correct mathematical answer. It is about risk management, test planning, cost, value, and being thoughtful about which tests to run in the context of your specific project. The discussion covers identifying and reducing test case bloat, when it can be done, and who does it, along with a few examples used in practice. It also covers one as-yet-unproven theory under test, and experience shared from significantly reducing test cases while covering more than three times as many applications after the test team shrank from sixteen to four testers. When facing increasingly complex and growing software, we must balance testing the existing features that customers rely on every day with testing new features and interactions. When balanced in a sensible way, the best of the legacy test cases can be maintained, using existing knowledge to reduce risk as much as possible.
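One way to make such pruning thoughtful rather than arbitrary is to rank candidates by value per unit cost; the scoring fields below are invented for illustration, not the author's actual criteria.

    # Hypothetical sketch: lowest-scoring tests surface as pruning candidates.
    test_cases = [
        {"name": "print_legacy_formats", "risk_covered": 8, "minutes": 30, "found_bug_recently": True},
        {"name": "splash_screen_colors", "risk_covered": 1, "minutes": 15, "found_bug_recently": False},
        {"name": "open_v1_documents",    "risk_covered": 9, "minutes": 20, "found_bug_recently": True},
    ]

    def value_score(tc):
        bonus = 2.0 if tc["found_bug_recently"] else 1.0
        return tc["risk_covered"] * bonus / tc["minutes"]

    for tc in sorted(test_cases, key=value_score):
        print(f'{tc["name"]}: {value_score(tc):.2f}')  # review the low scorers first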
Lanette Creamer has been working in quality for Adobe Systems since 2000. She presented at PNSQC 2008, winning Best Paper.
|P-26||Testing IPv6 Enabled Applications|
Travis Luke, Microsoft
Each year IPv6 gains wider adoption around the world, both as a replacement for traditional TCP/IP and in side-by-side usage. IPv6 transition technologies are enabled by default on many modern operating systems and applications have already begun to take advantage of them. However, very few testing guidelines exist for this emerging technology. It is important that we move quality forward in this ecosystem.
This paper presentation will begin with an overview of the current state of IPv6 deployment, then focus on the most popular transition technologies (6to4, Teredo, and ISATAP), describing how each technology acts as a bridge to a full IPv6 deployment and what software developers and testers need to know about it. The presentation will describe how to build a test lab that simulates various IPv6 environments such as the home, the Internet café, and the enterprise. Lastly, the author will outline practical test cases for IPv6-enabled applications.
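As a small, hedged example of the kind of check such a test lab enables (the host and port are illustrative), a tester can first verify that a service is reachable over IPv6 at all.

    # Sketch: does the host publish an IPv6 address, and does it accept
    # a TCP connection over IPv6?
    import socket

    def reachable_over_ipv6(host, port):
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                       socket.SOCK_STREAM)
        except socket.gaierror:
            return False            # no AAAA record / no IPv6 address
        family, socktype, proto, _, sockaddr = infos[0]
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(5)
            return s.connect_ex(sockaddr) == 0

    print(reachable_over_ipv6("ipv6.example.com", 80))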
Travis Luke has been working at Microsoft as a Software Development Engineer in Test for ten years. For the past five years, he has worked in the Windows Networking division on Peer-To-Peer network technologies. Prior to Microsoft, Travis worked as a Network Administrator and consultant.
|P-27||The Elephant in the Room:|
Using Brain Science to Enhance Working Relationships
Sharon Buckmaster, FutureWorks Consulting
Diana Larsen, FutureWorks Consulting
Learn how increasing gender intelligence strengthens our ability to maximize the contributions of all members of a team. Gender intelligence is one of the emerging concepts in progressive organizations. The brain science knowledge that results from fusing findings in social neuroscience, positive psychology, and advanced imaging techniques gives us new tools for understanding and enhancing the ability of men and women to work together.
Companies like Deloitte & Touche, IBM, and PricewaterhouseCoopers have seen immediate financial results, including increased retention of women, by training their managers to use gender intelligence in the workplace. Using the principles of brain science regarding gender can have a positive impact on corporate culture and organizational success.
Sharon Buckmaster and Diana Larsen, principal consultants with FutureWorks Consulting, coach and consult with leaders and Agile teams. They bring focus to the human side of organizations, teams, and projects. They work with clients who strive to create workplaces that are economically, ethically, and socially sustainable.
|P-29||Holding our Feet to the Fire|
Jim Brosseau, Clarrus Consulting Group Inc.
In recent times, there have been several movements in software development that would suggest that we wait until the latest possible moment to make decisions, to avoid or delay the associated costs of change that would seem inevitable if we decide too early. As we have all seen in this industry, though, as movements such as Agile or Lean Software Development gain popularity, there is often something lost in the translation to the masses, and this innocuous statement of deferring decisions becomes embraced a little too tightly. That “last responsible moment” is often overtaken by the system deciding for us, and rarely in our favor.
In this paper presentation, the suggestion is a balanced approach. Some decisions must be made early, where deferral becomes extremely costly to the organization and results in diminished value to the end user. Balancing the appropriate timing of decisions with appropriate management of change becomes the optimal way of driving a project to successful completion. The author will describe different types of decisions to be made, offer heuristics for recognizing reasonable decision points along the way, and identify the decisions we absolutely need in order to move forward in the lifecycle and ensure that success means creation of the value we actually intended to deliver.
Jim has worked with more than 100 teams worldwide in the past 10 years with a goal of increasing collaborative effectiveness.
|P-30||Managing Software Debt|
Chris Sterling, SolutionsIQ
Many software developers will have to deal with legacy code at some point during their careers. Seemingly simple changes are turned into frustrating endeavors with code that is hard to read and unnecessarily complex. Test scripts and requirements are lacking, and at the same time are out of sync with the existing system. The build is cryptic, minimally sufficient, and difficult to successfully configure and execute. It is almost impossible to find the proper place to make a requested change without breaking unexpected portions of the application. The people who originally worked on the application are long gone.
How did the software get like this? It is almost certain the people who developed this application did not intend to create such a mess. This paper presentation will highlight ways teams can work with stakeholders to manage software debt over the delivery life cycle of the product.
Chris Sterling is an Agile Coach, Certified Scrum Trainer, and Technology Consultant for SolutionsIQ. He has been involved in many diverse projects and organizations and has extensive experience with bleeding edge and established technology solutions. He has been a coordinator of multiple Puget Sound area groups including International Association of Software Architects (IASA), Seattle Scrum Users Group, and most recently the Beyond Agile group.
Chris has been a speaker at many conferences and group meetings including Agile 2007 & 2008, SD West, Scrum Gathering, and others. In his consulting and speaking engagements, Chris brings his real world experience and deep passion for software development enabling others to grasp the points and take away something of value. Chris has also contributed to and created multiple open source projects. He is currently teaching the “Advanced Topics in Agile Software Development” class at the University of Washington Agile Developer Certificate extension program and writing a book with publisher Addison-Wesley on software architecture.
|P-32||Software Pedigree Analysis: Trust but Verify|
Susan Courtney, SLC Software LLC
Barbara Frederiksen, Johnson-Laird, Inc.
Marc Visnick, Johnson-Laird, Inc.
Quality software is defined not just by technical measurements, but also by the presence of a well-documented pedigree that itemizes all known use of third-party materials and documents the license terms under which such materials may be used. Use of third-party materials is common in today’s software development environment, and failure to document their use, including licensing restrictions, may expose your company to unexpected legal liability. These liabilities can manifest in the context of copyright, patent, trade secret, or license litigation. Further, an unknown software pedigree may also impede the sale of your software or reduce its valuation in an acquisition scenario. Companies should develop and enforce guidelines for using third-party materials and, if necessary, perform periodic forensic code audits. Forensic code audits serve to demonstrate compliance with usage and documentation guidelines and to identify any third-party use that falls outside such guidelines.
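A full forensic audit goes far deeper (for example, into code similarity analysis), but as a hedged sketch, even a crude scan for well-known license signatures is a useful first pass; the markers and directory below are illustrative.

    # Sketch: flag files whose headers mention common licenses.
    import os

    LICENSE_MARKERS = ["GNU General Public License", "Apache License",
                       "Mozilla Public License", "BSD license"]

    def scan_tree(root):
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as f:
                        head = f.read(4096)   # license headers sit near the top
                except OSError:
                    continue
                for marker in LICENSE_MARKERS:
                    if marker in head:
                        print(f"{path}: found '{marker}'")

    scan_tree("src")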
Susan Courtney is a forensic software analyst and has worked as a consultant for Johnson-Laird, Inc. She has done forensic software analysis for patent and copyright cases, performed electronic evidence analysis, and assisted with data preservation and discovery. Ms. Courtney is also the owner of SLC Software LLC, a software consulting firm that provides a variety of software-related services, including business systems analysis, quality assurance, and project management.
Barbara Frederiksen-Cross is the Senior Managing Consultant for Johnson-Laird, Inc., in Portland, Oregon. Barbara is a forensic software analyst specializing in the analysis of computer-based evidence for copyright, patent, and trade secret litigation. She is also an expert in computer software design and development, the recovery, preservation, and analysis of computer-based evidence, and computer systems’ capacity issues. Barbara began her career as a computer programmer in 1974. She first began working as a forensic software analyst in 1987. She received her training in software forensics while working as an independent consultant to Johnson-Laird, Inc. Mrs. Frederiksen-Cross was appointed as Court Data System Advisor to the Honorable Marvin J. Garbis, in the U.S. District Court for the District of Maryland in December 2000, and has provided forensic analysis services in cases such as eBay v. Bidder’s Edge, Symantec v. McAfee, Rentrak v. Hollywood Entertainment, Telecomm Technical Services Inc., et al., v. Rolm Company, Compuware Corporation v. International Business Machines Corp., VMware, Inc. v. Connectix Corporation and Microsoft Corporation. She has assisted with data preservation and discovery in cases such as the Vioxx Product Liability Litigation, Propulsid Product Liability Litigation, Rezulin Product Liability Litigation, and Bridgestone/Firestone, Inc., ATX, ATX II, and Wilderness Tires, Products Liability Litigation.
Marc Visnick is a forensic software analyst and attorney based in Portland, Oregon, and a senior consultant with Johnson-Laird, Inc. He specializes in forensic software analysis for patent, copyright and trade secret litigation, as well as software due diligence, independent development project design and supervision, and electronic evidence preservation, recovery, and analysis. Over the past 5 years, Mr. Visnick has participated in hundreds of forensic audits for software-related mergers, acquisitions, and source licensing transactions. Mr. Visnick is past Chair of the Oregon State Bar Computer and Internet Law Section.
|P-33||Leveraging Code Coverage Data to Improve Test Suite Efficiency and Effectiveness|
Jean Hartmann, Microsoft
During each release of Visual Studio, substantial time and resources are expended in test case development, execution, and verification. Thousands of new tests are added to existing test suites without any kind of review regarding their unique contribution to test suite effectiveness or impact on test suite efficiency. In the past, such unbridled growth in test collateral was sustainable without significantly affecting product release, often offset by increased machine and staff resources. With the growing number of test configurations in which these tests need to be run, this is no longer feasible – it is time to clean up!
This paper presentation describes how we leveraged existing code coverage data, together with optimization techniques, to help each test team analyze its test suite and to guide teams in improving effectiveness and efficiency. The analysis focused on identifying groups of test cases given certain goals, for example, increasing overall test suite stability and reliability, reducing test suite execution time, and minimizing test suite redundancy. The guidance focused on a set of best practices that teams can adopt to achieve those goals.
The paper presentation reflects on some of the benefits and challenges faced as part of this case study. It also outlines the tools developed to conduct the analysis and support the best practices, using examples and data taken from the case study to illustrate and emphasize key points.
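The paper's tooling is not reproduced here; as an invented sketch of one underlying idea, greedy set-cover selection keeps the smallest subset of tests that preserves the suite's block coverage.

    # Sketch: minimize a suite while preserving its coverage. The
    # coverage map is made up for illustration.
    coverage = {
        "test_a": {1, 2, 3, 4},
        "test_b": {3, 4},        # fully redundant with test_a
        "test_c": {5, 6},
        "test_d": {4, 5},
    }

    def minimize(cov):
        goal = set().union(*cov.values())
        chosen, covered = [], set()
        while covered != goal:
            best = max(cov, key=lambda t: len(cov[t] - covered))
            chosen.append(best)
            covered |= cov[best]
        return chosen

    print(minimize(coverage))   # ['test_a', 'test_c']; test_b and test_d add nothing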
NOTE: A paper for PNSQC 2008 examined the regression test selection problem (retesting in the presence of code churn). This 2009 paper presentation focuses on a pilot program, currently underway, to “slice and dice” and optimize the test suite as a whole without considering code churn, given that test suites are exploding in size and a systematic approach is necessary to deal with that growth.
Jean Hartmann is currently a Principal Test Architect in Microsoft’s Developer Division, with previous experience as Test Architect for Internet Explorer; her main responsibility is driving the concept of software quality throughout the product development lifecycle. Prior to Microsoft, Jean spent twelve years at Siemens Corporate Research as Manager for Software Quality. She earned a Ph.D. in Computer Science in 1993, while researching the topic of selective regression test strategies.
|P-34||Developing Requirements for Legacy Systems: A Case Study|
Bill Baker, Sage
Todd Gentry, Harland Financial Solutions
Many legacy systems were created without documented requirements. Over the years, changes were made, often without adequate documentation. Software quality suffers as the system becomes more and more complex. This paper describes a case study of bringing requirements management and other related process improvements to a legacy software product — one which has been successful for over twenty years.
A cross-functional management team undertook a project to significantly improve the processes related to requirements. This paper presentation describes the lessons learned while undertaking this project. Among these are:
Bill Baker is a software development manager at Sage in Beaverton, Oregon. He has been involved in software development, project management, and process improvement for a number of years. While at Harland Financial Solutions, Bill led the improvements outlined in this paper. Bill has a Ph.D. in Electrical Engineering from Washington State University and an excessive collection of other degrees from Washington State University and Michigan State University.
Todd Gentry has been a software developer for 25 years. He is currently a Senior Manager in Product Development for Harland Financial Solutions supervising a software development team working in the .NET framework upgrading the technology of a legacy system. Todd is a Certified Scrum Master. He holds a B.S. in Computer Science and Mathematics from the University of Oregon.
|P-35||Where are You in UI Design?|
Kelcie Anderson, PMP
As a QA professional, how are you contributing to the usability of your product? The usability, or user experience, of a product has steadily gained in importance over the last decade. Once, only a few people knew the phrase, but now the term usability is bandied about readily. With a solid understanding of a good UI design process, you can effectively collaborate with the UI team throughout the development life cycle. In addition, you are in a unique position to overcome the following barriers to usability: (1) production code is not implemented per the UI design, (2) the UI design cannot be implemented as specified, causing the developer to change the design “on the fly,” or (3) the company is not operating with a UI design process like the one described above.
Kelcie Anderson is a Project Management Professional with a background in program management and usability engineering management. She spent 9 years in the User Experience group at Tektronix. She is currently streamlining the product development process at Acumed.
|P-36||I Have Two Managers!: One Company’s Model for a Consultative Testing Team and Matrix Management|
Amy Yosowitz, Apollo Group, Inc
Apollo Group Inc., parent company of University of Phoenix, faces an interesting challenge in providing homegrown software to a staff of thousands who serve more than 350,000 students. With 30+ integrated applications being developed continuously by almost as many development teams, we have chosen to go the route of having a consultative testing team. Our Software Quality Analysts (SQAs) have dual identities. Each SQA is a member of a larger QA organization where individuals learn skills, exchange knowledge with other testers, and receive mentoring from QA management. Each SQA is also a highly-valued member of a development team where they interact with their development lead, developers, and business contacts. This paradigm has resulted in well-rounded SQAs who are fulfilled in their careers and have a deep sense of ownership over the applications they work with. This paper presentation reports on the many aspects of this model, including roles and responsibilities, the elements involved, and the challenges faced.
Amy Yosowitz has over 10 years of experience in the software field. She is currently a Senior Information Technology Manager for Quality Assurance at Apollo Group, Inc (the parent company of University of Phoenix), where she manages a group of 25+ software testers that focus on testing business applications. She was formerly a Senior Quality Assurance Engineer at Alogent Corporation (now Goldleaf Financial Solutions) and a Consultant at HBO & Company (now McKesson Corporation). During her tenure at Alogent, Amy was a lead tester on a state-of-the-art banking teller system. At HBO & Company, Amy performed implementation, testing, and design tasks for an innovative hospital information system. Amy holds a Bachelor of Arts degree in Mathematics with a French minor from Emory University in Atlanta, Georgia.
|P-37||My Experience of Adopting an Agile Software Development Approach|
John Bartholomew, Nethra Imaging
This paper summarizes my experience adopting an agile software development approach, which yielded significant improvements in product quality and in the reliability of release delivery dates. A brief overview of the traditional Scrum development methodology is presented, focusing on the concept of an iteration, on team roles, and on the function of the burndown chart. Our group’s choice of iteration length, the development/test/documentation team interactions, and the unique differences in our code development/testing and quality management processes will be discussed, followed by the resulting release process and the release-date reliability achieved. In short, we were able to hit our three-month release cycle deadline within one week for the last several software releases in 2008.
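As a minimal sketch of the burndown idea mentioned above (all numbers invented), the chart simply tracks remaining work against an ideal line across the iteration.

    # Sketch: a text-mode burndown for a 15-day iteration.
    iteration_days = 15
    remaining = [120, 118, 110, 100, 96, 85, 70, 66, 50, 44, 30, 22, 15, 6, 0]

    for day, left in enumerate(remaining):
        ideal = 120 * (1 - day / (iteration_days - 1))
        print(f"day {day:2d}  ideal {ideal:5.1f}  actual {left:3d}  " + "#" * (left // 4))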
The benefits of our team’s use of Scrum and our resulting observations are presented, including:
John Bartholomew is an M.I.T. Electrical Engineering/Computer Science graduate (MS and BS degrees) with over 20 years of experience in EDA and semiconductor tool development, including two startup companies in the Portland, OR area. He has developed and taught classes in Object-Oriented Programming and Object-Oriented Analysis and Design at Oregon Institute of Technology and Portland Community College. John has also managed small software teams across multiple development sites.
|P-38||Can’t Travel? Virtual Retrospectives Can Be Effective!|
Debra Lavell, Intel
Tick, tock. Tick, tock. The clock is ticking. It is the end of a long software development project, and the geographically dispersed team wants to look back and capture what worked well and what needs to be done differently next time, so they can improve future projects. You need a process to gather learnings, but you have a zero-dollar travel budget. What to do? This paper presentation explores specific key takeaways and Best Known Methods (BKMs) you can use to prepare for and successfully execute a virtual retrospective.
Debra has over 10 years of experience in quality engineering. She currently works as a Program Manager in the Corporate Platform Office at Intel Corporation, focusing on Retrospectives and Organizational Learning. Since January 2003, Debra has delivered over 200 Project and Milestone Retrospectives for Intel worldwide. Prior to her work in quality, Debra spent 8 years managing an IT department responsible for a 500+-node network for ADC Telecommunications. Debra is a member of the Rose City Software Process Improvement Network Steering Committee. She is currently the President of the Pacific Northwest Software Quality Conference, Portland, Oregon. She holds a Bachelor of Arts degree in Management with an emphasis on Industrial Relations.
|P-40||An Empirical Study of Data Mining Code Defect Patterns in Large Software Repositories|
Kingsum Chow, Intel Corporation
Xuezhi Xing, Intel Corporation
Zhongming Wu, Intel Corporation
Zhidong Yu, Intel Corporation
There has been growing interest in mining software code defect patterns and using the knowledge to detect problems and fix them early to reduce cost. This paper presentation evaluates the effectiveness of such approaches by applying them to several large software repositories, e.g., Apache Harmony. Several tools address common code defect patterns such as buffer overflow and null pointer dereferencing. However, application-specific bugs can be difficult to find, so some approaches try to mine the patterns of specific applications automatically, while others provide description-based methods that express these patterns as assertion statements or contracts. These description-based approaches are often powerful at describing patterns, but they require manually constructing the specifications. This paper presentation describes an empirical study evaluating the effectiveness of data mining software code defects and offers insights into the characteristics of common code defect patterns.
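The tools the paper evaluates are far more sophisticated; as a hedged, invented sketch of the pattern-detection idea, even a few lines of AST analysis can flag one classic defect pattern: dereferencing a value that may be None.

    # Sketch: flag attribute access on the result of a .get() call,
    # which may legitimately return None.
    import ast

    SOURCE = '''
    conn = pool.get("primary")
    conn.close()
    '''

    tree = ast.parse(SOURCE)
    maybe_none = set()
    for node in ast.walk(tree):
        if (isinstance(node, ast.Assign) and isinstance(node.value, ast.Call)
                and isinstance(node.value.func, ast.Attribute)
                and node.value.func.attr == "get"
                and isinstance(node.targets[0], ast.Name)):
            maybe_none.add(node.targets[0].id)
        if (isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name)
                and node.value.id in maybe_none):
            print(f"line {node.lineno}: '{node.value.id}' may be None")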
Kingsum Chow and Zhidong Yu are principal and staff engineers, respectively, in the Software Solutions Group at Intel Corporation. Xuezhi Xing and Zhongming Wu are Intel interns.
|P-44||Build Robust Test Automation Solutions for Web Applications|
Wei Liu, Enterprise Rent-A-Car Company
Dawn Wilkins, The University of Mississippi
Test automation scripts for web applications are often fragile and lead to a high rate of false positive errors, resulting in a large amount of analysis time and maintenance effort. This paper presentation shares experience gained from efforts to improve test automation scripts at Enterprise Rent-A-Car Company and documents a successful lowering of the false positive (Type I) error rate, which consequently reduced analysis time and maintenance effort. The causes that make test automation scripts fragile are analyzed and solutions are discussed. Notable findings include the great benefit to test automation of following simple rules upstream in the software development life cycle, such as in the design and development phases.
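One such fragility fix, sketched here with invented helper names, is replacing fixed sleeps with bounded polling so that timing jitter stops producing false failures.

    # Sketch: an explicit wait instead of a hard-coded sleep.
    import time

    def wait_until(condition, timeout=30.0, interval=0.5):
        """Poll `condition` until it is true or the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if condition():
                return True
            time.sleep(interval)
        return False

    def page_has_element(selector):
        return True               # stand-in for a real browser-driver query

    assert wait_until(lambda: page_has_element("#checkout"), timeout=60), \
        "checkout button never appeared"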
Wei Liu is a Senior Software Engineer with Enterprise Rent-A-Car Company.
|P-50||Visualizing Software Quality|
Marlena Compton, Equifax
Moving software quality forward will require better methods of assessing quality quickly for large software systems. Assessing how much to test a software application is a consistent challenge for software testers, especially when requirements are less than clear and deadlines are constrained. In assessing the quality of large-scale software systems, Edward Tufte’s data visualization principles can be used as an aid. Visualizations based on these principles, such as treemaps, can show complexity in a system, coverage of system or unit tests, where tests are passing vs. failing, and which areas of a system contain the most frequent and severe defects.
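As a hedged sketch of the treemap idea (module names, sizes, and severity colors invented), area can encode module size while color encodes defect severity.

    # Sketch: a tiny alternating slice-and-dice treemap.
    import matplotlib.pyplot as plt
    from matplotlib.patches import Rectangle

    modules = [("billing", 400, "red"), ("auth", 250, "orange"),
               ("reports", 200, "yellow"), ("ui", 150, "green")]

    fig, ax = plt.subplots()
    x, y, w, h, horiz = 0.0, 0.0, 1.0, 1.0, True
    remaining = sum(size for _, size, _ in modules)
    for name, size, color in modules:
        frac = size / remaining
        rw, rh = (w * frac, h) if horiz else (w, h * frac)
        ax.add_patch(Rectangle((x, y), rw, rh, facecolor=color, edgecolor="black"))
        ax.text(x + rw / 2, y + rh / 2, name, ha="center", va="center")
        if horiz:
            x, w = x + rw, w - rw
        else:
            y, h = y + rh, h - rh
        remaining -= size
        horiz = not horiz

    ax.set_axis_off()
    plt.show()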
Marlena Compton automates tests and performs some manual testing in a distributed systems group at Equifax, where she has worked for the past 5 years. She will complete her Master’s in Software Engineering in December 2009. Her blog covers technical testing and data visualization topics.
|P-51||Score One for Quality: Using Games to Improve Product Quality|
Joshua Williams, Microsoft
Ross Smith, Microsoft
Dan Bean, Microsoft
Researching the generation gap between current managers (from the Baby Boomer era) and the incoming group of Gen X, Gen Y, and Millennials, we find a lot of work demonstrating the effect of video games on younger employees. Taking that slant, we set out to improve the legacy concept of a bug bash or simple leaderboard-driven, single-task-oriented game into something richer that would help drive greater engagement among all employees.
What we found, however, was a very powerful mechanism for communicating organizational priorities effectively and quickly. Not only can we help people feel engaged, we can also drive behaviors, using games that help improve both product quality and morale. This leads to a virtuous cycle where standard productivity metrics also improve as engagement improves. Our latest game is driving a new level of quality into our localized products by leveraging Microsoft’s diverse worldwide employee base. We predict that “Games at Work” or “Productivity Games” carry huge potential for influencing not just the software engineering workplace, but all types of companies and work.
Joshua Williams, Test Architect, Windows Defect Prevention at Microsoft Corporation, has loved the personal computer for over 20 years and has worked to improve the user experience on PCs for the past 15 years. His work testing Windows has spanned releases from Windows 95 through Windows 7, and nearly everything in between. His work has ranged from globalization efforts to improve the quality of non-English versions of Windows, to improving driver quality for Universal Serial Bus support, to designing and implementing large-scale test automation systems. Three years ago, Joshua changed focus to work on strategies to improve software quality throughout the entire software lifecycle and on projects focused on making work more enjoyable. His work with 42Projects (www.42projects.org and on Facebook) has certainly brought “buzz back to the hallways” he inhabits. Most recently, working with productivity games and exploring how games and fun can help get work done motivates him to learn a little more each day.
|P-58||Too Much Automation Or Not Enough? When To Automate Testing|
Keith Stobie, Microsoft
Fundamentally, test automation is about Return on Investment (ROI). This paper presentation explores the factors that influence getting better quality for less money by choosing when to automate and when to shun automation. Factors include:
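As a worked illustration of this ROI framing (all numbers invented), automation pays off only when build-plus-maintenance cost undercuts repeated manual execution.

    # Sketch: the break-even arithmetic behind "when to automate".
    manual_minutes_per_run = 20
    runs_over_lifetime = 150
    build_minutes = 600
    upkeep_minutes = 300          # script maintenance across the lifetime

    manual_cost = manual_minutes_per_run * runs_over_lifetime   # 3000 minutes
    automated_cost = build_minutes + upkeep_minutes             # 900 minutes
    print("ROI positive:", automated_cost < manual_cost)        # True here; often it is not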
As a test architect at Microsoft, Keith Stobie plans, designs, and reviews software architecture, processes, and tests for Protocol Engineering, with a focus on model-based testing. Prior work included Live Search and Windows Communication Foundation. Over the past 25 years, Keith has focused on testing distributed software systems, including Tandem Fault Tolerant systems, the Informix Parallel database, and transactional and collaborative software at BEA Systems. Keith provides training on inspections and quality process, and on test strategy, methodology, design, tools, and automation. Keith has mentored and coached hundreds of professionals in the field. He writes and speaks at conferences around the world on software engineering, SQA, and testing.
|P-59||Moving Software Quality Upstream|
The Positive Impact of Lightweight Peer Code Review
Julian Ratcliffe, Advanced Micro Devices
As schedules tighten and product launch deadlines remain fixed, software developers have to rise to the challenge without compromising quality. Coordinating a workforce with a wide range of experience and expertise, often spread across several continents, brings enormous challenges, including the need to make excruciatingly efficient use of resources. Many software development teams end up with a review process that is either chaotic or cumbersome. The addition of a lightweight peer code review process introduces a powerful new weapon into the battle against bugs. At AMD, we improved software quality and built a culture of defect prevention by integrating such a process into the development workflow. The approach to process transition was effective in cultivating cultural acceptance, leading developers to adopt the changes almost voluntarily. Further, by closely integrating code review, revision control, and issue tracking, it is possible to design a process that helps rather than hinders development.
Julian G. Ratcliffe is a Senior Member of Technical Staff at Advanced Micro Devices. He has a background in parallel computing and holds a BSc. Hons. in Electronic Engineering from the University of Southampton, United Kingdom. His software experience spans more than two decades and in that time he has dealt with all processors great and small. He moved to Portland, Oregon in 1996 intending to spend a couple of years “getting the American thing out of his system,” but forgot to leave.
|P-62||Improve Quality By Making Clear Requests and Commitments and Avoiding the “I’ll Try” Trap|
Pam Rechel, Brave Heart Consulting
When unclear requests, for example, “Can you get me an update to the project schedule as soon as possible?”, are met with unclear commitments, e.g., “Sure, I’ll try to get that to you when I have a chance,” the end result is often frustration, extra time, low trust, and, most important of all, missed deadlines and poor quality, because of the lack of clarity about both the request and the commitment. Sometimes the response is “I’ll try” when the person is actually aware that there is currently not enough time to get it done, but saying “no, I can’t get that done by the deadline” is seen as “not being a team player.” This lack of specific requests and commitments, and the hesitance to say no, is prevalent in projects, work teams, families, and communities. The resulting sloppy or vague requests often cause multiple ineffective conversations about one issue.
The goal of the paper presentation is to present practical steps for making clear commitments and requests and for learning to overcome the enemies and barriers of saying no. By the end of the session, participants will know the six components of an effective request and commitment, and some strategies for overcoming the barriers to saying “no.”
Pam Rechel, Principal of Brave Heart Consulting in Seattle and Portland, is an executive coach and organization consultant working with leaders, managers and their teams. Creating accountable organizations where it is easy to communicate and get results is her primary focus. She has an M.A. degree in Coaching and Consulting in Organizations from the Leadership Institute of Seattle (LIOS), an M.B.A. in Information Systems from George Washington University and an M.S. from Syracuse University, and additional coaching credentials from the Newfield Network in Boulder, Colorado. Pam holds the highest certification for Myers-Briggs Type Indicator® practitioners, including the MBTI® Step II. She facilitates workshops on Accountability for teams.
|P-67||A Distributed Requirements Collaboration Process|
Brandon Rydell, PGE
Sean Eby, PGE
Carl Seaton, Arris
Distributed project teams struggle to accurately document the negotiations and trade-offs required to lead teams to concise, prioritized requirements. The main reasons for this challenge are: 1) a lack of objective criteria to evaluate requirements and 2) a lack of freely available tools to channel negotiations into objectively characterized requirements that can be comparatively evaluated. While there are numerous publications and tools addressing the challenges of software requirements engineering, many organizations today still manage requirements as tabular lists. Individual requirements are often modified during negotiations between groups such as engineering, marketing, and management. Frequently, much of this negotiation takes place informally in numerous face-to-face meetings, where much of the rationale for changes and trade-offs is not documented and subsequently lost.
This paper presentation outlines a new requirements collaboration process to address these issues, based on work done by Karl Wiegers in his paper Prioritizing Requirements. We extend Wiegers’ approach with a method for proposing alternate requirements, a means of documenting the negotiation of priority among interested parties, and a way to rationally select priority requirements based on an objective measure of their relative merit. We also introduce an application prototype, the DRCT, built for use by distributed teams to support our requirements negotiation process and to capture rationale as requirements are negotiated. Finally, we will discuss our experience using this prototype tool to negotiate and define the requirements for a second, more functional version of the DRCT, analyzing the effectiveness of the process and the ability of the tool to support it in a distributed manner.
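By way of background, the Wiegers scheme the process builds on rates each requirement’s relative benefit, penalty, cost, and risk, and combines the ratings into a single priority score. The following Python sketch shows that style of calculation; the weights, requirement names, and ratings are hypothetical illustrations, not the model actually used in the DRCT.

    # A minimal sketch of a Wiegers-style prioritization calculation.
    # All weights, requirements, and ratings are hypothetical.
    requirements = {
        "export-to-csv": {"benefit": 7, "penalty": 3, "cost": 4, "risk": 2},
        "sso-login":     {"benefit": 9, "penalty": 8, "cost": 7, "risk": 6},
        "audit-trail":   {"benefit": 3, "penalty": 1, "cost": 2, "risk": 1},
    }

    BENEFIT_W, PENALTY_W = 2.0, 1.0  # weights for the value numerator
    COST_W, RISK_W = 1.0, 0.5        # weights for the divisor

    def priorities(reqs):
        """Priority = value% / (cost% * COST_W + risk% * RISK_W)."""
        total_value = sum(BENEFIT_W * r["benefit"] + PENALTY_W * r["penalty"]
                          for r in reqs.values())
        total_cost = sum(r["cost"] for r in reqs.values())
        total_risk = sum(r["risk"] for r in reqs.values())
        scores = {}
        for name, r in reqs.items():
            value_pct = 100 * (BENEFIT_W * r["benefit"] + PENALTY_W * r["penalty"]) / total_value
            cost_pct = 100 * r["cost"] / total_cost
            risk_pct = 100 * r["risk"] / total_risk
            scores[name] = value_pct / (COST_W * cost_pct + RISK_W * risk_pct)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    for name, score in priorities(requirements):
        print(f"{name:15s} priority {score:.2f}")

Requirements with high value but low cost and risk float to the top, giving negotiators an objective starting point rather than a gut ranking.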
Brandon Rydell, a certified Project Management Professional (PMP), is a software engineering supervisor at Portland General Electric (PGE) in Portland, Oregon. His experience over the last two decades includes working as a software requirements engineer and project manager (PM) on a wide variety of software engineering projects for PGE, PacifiCorp, Nike, and United Parcel Service. Brandon has a Masters of Software Engineering (MSE) degree from Portland State University and a B.A. in Business Administration from the University of Washington.
Sean Eby is currently a software engineer at PGE in Portland, Oregon. In his 11+ years of experience, he has worked in all areas of software engineering for Coaxis, Emery Worldwide, Merant, and NCD. Sean specializes in designing and developing simple, elegant software solutions. Sean has an MSE degree from Portland State University.
Carl Seaton is a Staff Systems Software Engineer at Arris, currently developing high-performance video-on-demand servers at the Beaverton, Oregon site. Over the past 16 years, Carl has done everything from customer support to software development to engineering management, specializing in highly reliable distributed systems. Carl holds B.S. degrees in Computer Engineering and Computer Science from Oregon State University, and will complete his Master of Software Engineering degree at Portland State University this fall.
|P-68||Best Practices for Security Testing: Top 10 Recommended Practices|
Aarti Agarwal, McAfee
From a technical and project management perspective, security test activities are performed primarily to identify vulnerabilities within the system. From a business perspective, security test activities are often conducted to reduce overall project costs, protect an organization’s reputation or brand, reduce litigation expenses, or conform to regulatory requirements. Identifying and addressing software security vulnerabilities prior to product deployment supports all of these business goals. Security issues are among the highest concerns of many organizations; despite this, security testing is often the least understood and least defined task.
There are two major aspects of security testing: testing security functionality to ensure that it works, and testing the system in the face of malicious attack. Security testing probes undocumented assumptions and areas of particular complexity to determine how a program can be broken. In addition to demonstrating the presence of vulnerabilities, security tests can also uncover symptoms that suggest vulnerabilities might exist.
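By way of illustration, the attack-oriented aspect often takes the form of feeding hostile inputs to a component and checking that its assumptions hold. Below is a minimal Python sketch along these lines; the sanitize_filename function under test and the payload list are hypothetical, not examples drawn from the paper.

    # A minimal sketch of attack-oriented security testing: probe a
    # (hypothetical) filename sanitizer with hostile inputs that target
    # undocumented assumptions. Payloads are standard traversal and
    # injection strings.
    import os.path

    def sanitize_filename(name: str) -> str:
        """Hypothetical function under test: reduce input to a safe basename."""
        return os.path.basename(name.replace("\\", "/")).replace("\x00", "")

    MALICIOUS_INPUTS = [
        "../../etc/passwd",           # path traversal
        "..\\..\\windows\\system32",  # Windows-style traversal
        "report.txt\x00.exe",         # null-byte truncation
        "<script>alert(1)</script>",  # markup injection if echoed back
    ]

    for payload in MALICIOUS_INPUTS:
        result = sanitize_filename(payload)
        assert ".." not in result and "/" not in result and "\\" not in result, \
            f"traversal survived sanitization: {payload!r} -> {result!r}"
        assert "\x00" not in result, f"null byte survived: {payload!r}"
    print("all malicious inputs neutralized")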
This paper presentation outlines and defines the processes, procedures, and methods that should be used to evaluate a product from a security perspective and move quality forward. The author will prioritize the “Top 10” recommended security practices for building confidence in a product. While these guidelines are not comprehensive, they focus on the most critical areas every enterprise needs to adopt.
Aarti Agarwal is a Software Quality Assurance Engineer on McAfee’s Enterprise Security team. Her testing and quality assurance experience includes a focus on security testing of client and server applications. In her present role, she is part of McAfee’s testing team for the Host Intrusion Prevention System, which protects desktops and servers from attack with signature and behavioral protection. Her work involves identifying architectural, design, and implementation risks; information gathering; user interface, file system, design, and implementation attacks; test planning; threat model preparation; research on security tools; penetration testing; execution and analysis of results; and report metrics preparation. Previously, she was with Accenture Services Pvt. Ltd., where she gained expertise in data warehouse quality assurance for a financial organization project.
|P-73||Quality Cost Management — Manage Your Quality Costs or Let Them Manage You|
Ian Savage, McAfee
Many of us QA professionals spend our entire careers managing defects, and we are getting pretty good at it. However, if defect management is our universe, we are not really assuring quality. We need to move away from finding, reproducing, documenting, and prioritizing defects and validating fixes. We need to prevent defects.
Rising above the defects takes commitment to process improvement. What is stopping you, dear reader, from making serious software process improvements? My bet is that your answer is “Time.” If you are a software quality professional, the only other valid reason is “Knowledge.” Gaining process improvement knowledge is easy – great resources exist … if only you had time.
This paper presentation addresses the time issue. The answer is money. That’s it. It is that simple and it is that hard. Getting money to improve processes requires convincing business arguments. Executives understand money more than they understand software quality. Your task, should you choose to accept it, is to convince your management that preventing problems is more profitable than finding and fixing them. Only then will they fund your improvement efforts. Here is your starting point: Proactively managing quality costs is a sounder business practice than reacting to product failures.
This paper will provide the theoretical foundation (à la Juran, Crosby, Feigenbaum, Krasner, and Emery) and will cite data from several well-known companies that are using quality cost systems to increase profits and reduce chaos. My goal is to help you convince your executives to invest in defect avoidance… with dedicated resources driven by data. Quality cost management, based on Crosby’s cost-of-quality foundation, shows you how to reduce overall quality costs by eliminating sources of waste and failure.
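For readers new to the model, cost-of-quality arithmetic sums four buckets: prevention, appraisal, internal failure, and external failure. The business case is that dollars shifted into prevention shrink the far larger failure buckets. A minimal Python sketch with invented figures (not data from the paper) shows the shape of the argument.

    # A sketch of Crosby-style cost-of-quality arithmetic. All figures
    # are invented for illustration, in $K per year.
    def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
        """Total CoQ = cost of conformance + cost of nonconformance."""
        return (prevention + appraisal) + (internal_failure + external_failure)

    # A reactive shop: little prevention, heavy failure costs.
    reactive = cost_of_quality(prevention=50, appraisal=400,
                               internal_failure=900, external_failure=1200)

    # The same shop after funding defect avoidance: prevention spending
    # rises while failure costs fall.
    proactive = cost_of_quality(prevention=300, appraisal=350,
                                internal_failure=400, external_failure=300)

    print(f"reactive CoQ:   ${reactive:,}K")              # $2,550K
    print(f"proactive CoQ:  ${proactive:,}K")             # $1,350K
    print(f"annual savings: ${reactive - proactive:,}K")  # $1,200K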
We need not swim in defects forever. There is a better way.
A quality/productivity evangelist and practitioner, Ian is a veteran software developer, quality assurance engineer, manager and executive with experience in the high-tech manufacturing, financial services, construction, and security domains. Since 1979, Ian has improved productivity and software quality through structured development methods and through adaptive-agile methods.
|P-75||Some Observations on the Transition to Automated Testing|
Robert Zakes, Oregon Secretary of State
We started our test automation effort several years ago after being embarrassed by a bug in one of our big production systems. We had developed manual test scripts but chose not to run them all, since we had only changed a few lines of code. Besides, it takes days to run all those scripts…
After some research we decided to use Selenium, an open-source Firefox browser add-on, as our test automation tool and started scripting. Lisa Crispin’s workshop at the 2007 PNSQC gave us some valuable insights, and several attendees validated our choice of Selenium. We now have extensive regression scripts developed for most of our systems and run them religiously for every build.
This paper presentation relates our observations along the way and some of the practices we have adopted. We found test automation had many uses besides building regression scripts, and some novel spin-offs will be described. A typical script will be dissected to display its organization, types of tests, dynamic web page tests, switches for flexibility, loops for testing paths, documentation, and more. Script maintenance can overwhelm or even put an end to test automation; we describe how our agile approach makes maintenance part of the development process, ferrets out missing requirements, and provides scripts ready for the scheduled release promotion. We will examine some lessons learned, for example, where ad hoc testing fits. Finally, we will present some stunning metrics of manual versus automated testing.
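As a sketch only (these are not the office’s actual scripts), a regression check of the style the paper dissects might look like the following using Selenium’s Python WebDriver bindings. The URL, element IDs, and inputs are hypothetical.

    # A hypothetical regression check illustrating an environment switch,
    # a dynamic-page assertion, and a loop over test paths.
    import os
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Switch for flexibility: point the same script at any environment.
    BASE_URL = os.environ.get("APP_URL", "http://test.example.com")

    def check_registry_search(driver, business_name):
        """Search a (hypothetical) registry and verify results render."""
        driver.get(BASE_URL + "/search")
        driver.find_element(By.ID, "businessName").send_keys(business_name)
        driver.find_element(By.ID, "searchButton").click()
        # Dynamic web page test: the results table only exists post-search.
        rows = driver.find_elements(By.CSS_SELECTOR, "table#results tr")
        assert rows, f"no results rendered for {business_name!r}"

    if __name__ == "__main__":
        driver = webdriver.Firefox()
        try:
            # Loop for testing paths: rerun the same check over many inputs.
            for name in ["Acme LLC", "O'Malley & Sons", "A" * 120]:
                check_registry_search(driver, name)
            print("all search paths passed")
        finally:
            driver.quit()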
Robert A. (Bob) Zakes is currently the Requirements and Testing Manager for the Office of the Oregon Secretary of State. Bob has over forty years of experience in project management, software requirements, and testing. He has worked for IBM as an instructor and as a marketing and systems engineering manager, for National Retail Systems West as a product manager, for Hanna International as Engineering and Quality Assurance Manager, and for the State of Oregon as a project manager and in his current position. He has been responsible for several successful system implementations at the State. His current primary project is Oregon’s Central Business Registry.
|P-77||The Effect Of Highly-Experienced Testers On Product Quality|
Alan Page, Microsoft
How can testers address the growing complexity and challenges of software development when the testers in most organizations are junior in experience to their development counterparts? A small group of highly experienced senior testers at Microsoft is working to advance the state of the art in testing. Leveraging years of software testing experience and the respect of their peers, they have solved extremely difficult testing problems, enabling entire organizations to improve their testing efforts and product quality and helping their teams succeed.
This paper presentation discusses three case studies showing how highly experienced testers solved big problems at Microsoft and helped their teams make quality improvements that could not have been made without their efforts, as well as the effect a growing group of highly experienced testers can have on an organization.
Alan Page began his career as a tester in 1993. He joined Microsoft in 1995, and is currently the Director of Test Excellence, where he oversees the technical training program for testers and works on various other activities focused on improving testers, testing, and test tools. In his career at Microsoft, Alan has worked on various versions of Windows, Internet Explorer, and Windows CE. Alan writes about testing on his blog (http://blogs.msdn.com/alanpa), was the lead author on How We Test Software at Microsoft (Microsoft Press, 2008, http://www.hwtsam.com), and recently contributed a chapter to Beautiful Testing (O’Reilly Press, 2009).
|P-84||The Search for Software Robustness|
Dawn Haynes, PerfTestPlus, Inc.
IEEE Std 610.12-1990 defines robustness as the degree to which a system operates correctly in the presence of exceptional inputs or stressful environmental conditions. This is an important and often overlooked area of testing, especially when test teams are over-tasked and under-resourced. Testing to verify that requirements have been met is necessary and usually commands the highest priority, but it often leads teams to focus on positive or “happy-path” tests. Positive testing tends to exercise only a small portion of the system’s code base, leaving the rest virtually untested. Targeting tests at how the system handles errors and failures can go a long way toward shrinking the amount of untested code.
If you seek to evaluate your software’s robustness, not just its happy-path features, you may find some valuable information that can have a significant impact on the success of your software implementations. Join me in exploring several techniques for enhancing your robustness testing.
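As a taste of one such technique, exceptional-input testing can be table-driven: enumerate the inputs the happy path never sends and assert that the system fails cleanly rather than crashing. A minimal Python sketch against a hypothetical parse_quantity function (not an example from the paper):

    # A minimal sketch of exceptional-input robustness testing. The
    # function under test and the inputs are hypothetical.
    def parse_quantity(text: str) -> int:
        """Hypothetical function under test: parse a positive item count."""
        value = int(text.strip())
        if value <= 0:
            raise ValueError(f"quantity must be positive: {value}")
        return value

    # Inputs a happy-path suite would never send.
    EXCEPTIONAL_INPUTS = ["", "   ", "-1", "0", "2.5", "1e6", "NaN",
                          "42; DROP TABLE orders"]

    for bad in EXCEPTIONAL_INPUTS:
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # a clean, expected rejection is robust behavior
        else:
            print(f"unhandled exceptional input: {bad!r}")
    print("robustness sweep complete")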
Dawn Haynes is a Senior Trainer and Consultant for PerfTestPlus, and a Director of the Association for Software Testing. A highly regarded trainer of software testers, she blends experience and humor to provide testers of all levels with tools and techniques to help them generate new approaches to common and complex software testing problems. In addition to training, Dawn is particularly passionate about improving the state of testing across the industry. She has more than 20 years of experience supporting, administering, developing and testing software and hardware systems, from small business operations to large corporate enterprises. Dawn holds a BSBA in MIS with a minor in programming from Northeastern University.
|P-87||Half-Baked Ideas for Rapid Test Management|
Jon Bach, Consultant
Whether you are a tester or a test manager, Jon Bach assumes you have no time to do the things you want to do, when even the things you absolutely must do today form their own list of competing priority 1 items. He has ideas to help you cope. They are truly “half-baked,” as in still in the oven: they are still being tested. This is not about time management; it is about where you and your team focus your energy. This paper presentation will share some of the half-baked ideas for test management that have worked for him as a test manager over the years. His ideas are meant to solve common problems in test execution, reporting, measurement, and personnel, all at low or no cost and relatively easy to implement.
Jon Bach has been a software tester for 14 years, and is currently a freelance consultant. He speaks frequently about exploratory and rapid testing, and is the co-inventor of Session-Based Test Management — a way to manage and measure exploratory testing.