|P-11||Software Quality Management System|
In software development projects, investment in quality improvement needs to be optimized in a way that does not adversely affect cost and schedule. However, as currently practiced in the industry, software artifacts are treated as equal in their significance and risk to the software life cycle with respect to quality improvement activities. Investment in defect detection and removal is distributed evenly across a software product's artifacts without taking into consideration the risk and significance of those artifacts. Some modules contain risky and significant architectural components that must be of high quality and low defect density; other modules do not require a similar level of quality. Defects originating in highly critical modules can contribute to project failure more than those in less significant modules. In this paper, we propose a model that helps project managers optimize the investment in QA activities on the basis of the risk associated with the development process.
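As a minimal sketch of the kind of allocation such a model might produce, the following Python fragment distributes a fixed QA budget in proportion to a combined risk-and-significance weight rather than evenly; the module names, scores and budget are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: distribute a fixed QA budget across modules in
# proportion to a combined risk/significance weight, rather than evenly.

modules = {
    # name: (risk, significance), both on a 0..1 scale (illustrative values)
    "payment_engine": (0.9, 0.95),
    "report_export":  (0.3, 0.40),
    "ui_themes":      (0.1, 0.20),
}

qa_budget_hours = 400  # total hours available for defect detection/removal

weights = {name: risk * sig for name, (risk, sig) in modules.items()}
total = sum(weights.values())

for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    hours = qa_budget_hours * w / total
    print(f"{name:15s} weight={w:.2f} -> {hours:6.1f} QA hours")
```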
|P-12||Visualize Software-Quality (demo features the Firefox code base)|
When software projects reach a considerable size (multi-person, multi-man-year) they are difficult to maintain, which results in high maintenance effort, code duplication, bugs and other related problems. The consequence is a limited software life cycle, especially in areas where software evolves rapidly. For example, in an industry setting software is usually re-implemented every 3-5 years. This is commonly considered the only foolproof strategy currently available. I consider this a huge waste of resources and truly a sign of software crisis.
Quality initiatives could make a significant difference, but suffer from tight budgets. Since testing is never finished, it is desirable to direct testing effort toward the parts of the software product that would benefit the most.
The root cause of the software crisis is that we cannot see or touch software the way we can see or touch the work products of other engineering disciplines, such as construction or mechanical engineering. You can instantly see whether a car is missing the bumper, a door or the windshield. The quality of a software end product is not as obviously apparent.
What if we could make the quality of software visible? This would govern testing initiatives and could help to decide whether we are really ready for go-live. The talk will provide in-depth insights on how to visualize different aspects of software quality. References to valuable literature are given (e.g. work from the University of Maryland, Professor Edward Tufte, etc.).
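As a toy illustration of the idea (the talk's actual visualizations of the Firefox code base are far richer), one might plot module size against defect density so quality hot spots become visible at a glance; all data here is invented.

```python
# Toy sketch: render per-module size vs. defect density as a scatter plot,
# so quality "hot spots" become visible at a glance. Data is invented.
import matplotlib.pyplot as plt

modules = ["parser", "layout", "network", "js_engine", "ui"]
loc = [12000, 45000, 30000, 80000, 25000]          # lines of code
defects_per_kloc = [0.8, 2.4, 1.1, 3.0, 0.5]       # invented densities

plt.scatter(loc, defects_per_kloc, s=120)
for name, x, y in zip(modules, loc, defects_per_kloc):
    plt.annotate(name, (x, y))
plt.xlabel("module size (LOC)")
plt.ylabel("defects per KLOC")
plt.title("Where should testing effort go?")
plt.show()
```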
Mark has years of professional experience in the areas of continuous integration, performance testing and functional testing. He started programming in Python in 2005 when he needed tools for bulk file analysis. Python is his first choice for programming testing tools and utilities.
Mark Fink currently works as a performance test engineer at a Swiss bank.
|P-13||Performance Testing and Improvements using Six Sigma – 5 steps for faster pages on Web and Mobile|
Mukesh Jain, Microsoft
Quality is not just about having a defect-free product that meets the requirements. If your product or service is slow in responding to user actions, it will directly impact product adoption and user satisfaction. Performance is an implied need: it is not normally stated explicitly, but users expect it to be there, and its absence will impact your business. With global users and mobile devices, performance testing becomes much more important and challenging.
The gauge I try to use is this: if the user feels the product is slow, your product is slow, no matter what process or tools you have used for performance; i.e. the customer is always right. If you ignore your customers, they will ignore your product.
There is no silver bullet; you can build a product that meets the performance expectations of users by planning for it upfront, managing with the right set of metrics and leveraging Six Sigma techniques. It is about understanding your audience/users/customers: your product is used by experts, novices and executives, not one or the other, and it should take care of performance for all of them.
In this presentation, Mukesh will talk about Six Sigma techniques that can be used to improve the performance of your web and mobile applications, and how you can plan the right thing, do the right thing and deliver the right things to the right user at the right time, every time. He will provide demos, tools and techniques, along with metrics for managing the performance of your applications.
He will also share his experience on how he used Six Sigma techniques in Microsoft to improve performance of Bing, MSN, Messenger, Mobile, Hotmail and Outlook.
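As a hedged sketch of the kind of metric such an approach tracks (not taken from the talk), one can treat every page load slower than a user-tolerance threshold as a defect and compute a defects-per-million-opportunities figure alongside percentiles; all numbers below are simulated.

```python
# Hedged sketch: count page loads slower than a threshold as "defects" and
# compute percentiles plus DPMO, a Six Sigma-style rate tracked over time.
import random, statistics

random.seed(1)
load_times_ms = [random.lognormvariate(6.5, 0.4) for _ in range(10_000)]
threshold_ms = 2000  # illustrative user-tolerance threshold

defects = sum(t > threshold_ms for t in load_times_ms)
dpmo = defects / len(load_times_ms) * 1_000_000  # defects per million opportunities

print(f"median: {statistics.median(load_times_ms):7.0f} ms")
print(f"p95:    {statistics.quantiles(load_times_ms, n=100)[94]:7.0f} ms")
print(f"DPMO above {threshold_ms} ms: {dpmo:,.0f}")
```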
Prior to Microsoft, he was associated with several multinational corporations, leading project, quality and program management. Along with a bachelor's degree in computer engineering and science, he has achieved various certifications. In 2006, he was honored as “Best Six Sigma Black Belt” by iSixSigma magazine. He is the author of the book Delivering Successful Projects Using TSP and Six Sigma and is currently writing a book on Web Performance Improvements.
|P-16||Scaling Agile Teams; Replicating the Successes Achieved with Small Teams on Larger Ones|
Don Hanson II, McAfee
Have you ever played the telephone game where a sentence is whispered to the next person in line? It’s amazing how much the sentence changes as more people are added to the line.
This game illustrates how working with larger groups of people introduces new challenges, in this instance communication issues. Likewise, larger software development projects can introduce new challenges for agile teams to overcome.
In this paper we’ll look at different techniques for addressing three challenges commonly associated with larger projects:
These approaches may help address similar issues on your projects.
|P-17||Software Quality Assurance in the Physical World|
Kingsum Chow, Intel
Software testing on a computer system starts with a known initial state. Testing is conducted to assure the functional quality of the software in the environment that is controlled by the computer. Software written for a robot faces additional challenges. The software on the robot needs to function in a physical environment where the initial state may not be precisely known and the behavior is not precisely controllable. On top of these problems, the software on the robot faces a complex environment that may have many factors that are changing as the robot is tested.
This paper explores the challenges of testing software on a robot in the physical world. Through the case study of running a LEGO robot with motors and sensors on a FIRST LEGO League competition table, it characterizes the environmental factors, some of which are not controllable. Through the sensors available on the robot, it characterizes the uncertainties in the sensor readings and in the location of the robot. It then describes a software testing process for testing whether the robot can perform missions reliably given the less-than-perfect environment and sensors.
This software testing approach in the physical world has only scratched the surface of the complexity of quality assurance for a robot. Early results from the case study demonstrate that taking many sensor readings, and establishing a relationship between the sensor data and the results of the experiments, can simulate testing across the equivalent of many different environments and reduce the number of runs needed to catch failures.
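A minimal sketch of that idea follows, with invented sensor characteristics rather than the paper's data: take many readings, characterize the spread, and let the mission check allow for sensor error.

```python
# Illustrative sketch (invented numbers, not the paper's data): take many
# readings from a noisy distance sensor and characterize the uncertainty,
# so a mission's pass/fail check can allow for sensor error.
import random, statistics

random.seed(7)
true_distance_cm = 30.0

def read_sensor():
    # Simulated ultrasonic sensor: true value plus noise and occasional spikes.
    noise = random.gauss(0, 1.5)
    spike = random.choice([0] * 19 + [15])  # ~5% chance of a large outlier
    return true_distance_cm + noise + spike

readings = [read_sensor() for _ in range(200)]
m = statistics.median(readings)           # median resists outlier spikes
sd = statistics.pstdev(readings)
print(f"estimate: {m:.1f} cm, spread: {sd:.1f} cm")

# A mission check then uses a tolerance band instead of an exact match:
assert abs(m - true_distance_cm) < 3.0, "robot is not where we expected"
```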
The contributions of this paper are:
Kingsum Chow is a principal engineer from the Intel Software and Services Group (SSG). He has been working for Intel since receiving his Ph.D. in Computer Science and Engineering from the University of Washington in 1996. He has published more than 40 technical papers and patents. Outside of work, he has coached a FIRST LEGO League team since 2006.
|P-22||User Experience Grading via Kano Categories|
Matt Primrose, Intel Corporation
A consistent challenge in product development is determining whether a product meets its usability requirements. In particular, accurate, meaningful usability information is hard to obtain before product release. This paper describes a method to classify use cases, features and requirements into Kano model categories and then grade each based on how well it has been implemented from a usability perspective. Using this method during product development can provide early, regular data on the status of product usability, help determine where resources should be spent, and support competitive analysis based on product features and usages. Augmenting traditional development and validation methods with the method described in this paper can provide the additional information needed to make design and implementation decisions easier and to enhance product usability.
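A small sketch of the general mechanism (the categories, weights and grades below are invented for illustration): classify each feature into a Kano category, grade its implementation, and weight shortfalls so misses on basic expectations hurt the score most.

```python
# Hedged sketch of the general idea (categories and scores invented):
# classify each feature into a Kano category, grade its usability 0-10,
# and weight shortfalls by category so "basic" misses hurt the most.

WEIGHTS = {"basic": 3.0, "performance": 2.0, "delighter": 1.0}

features = [
    # (feature, kano_category, usability_grade out of 10)
    ("install flow", "basic",       6),
    ("search speed", "performance", 8),
    ("themes",       "delighter",   9),
]

penalty = sum(WEIGHTS[cat] * (10 - grade) for _, cat, grade in features)
max_penalty = sum(WEIGHTS[cat] * 10 for _, cat, _ in features)
score = 100 * (1 - penalty / max_penalty)

print(f"overall usability score: {score:.0f}/100")
for name, cat, grade in features:
    print(f"  {name:15s} [{cat}] grade {grade}/10")
```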
Matt Primrose is a usage model developer and technologist for Intel Corporation.
|P-24||Adaptive Application Security Testing Model (AASTM)|
Ashish Khandelwal, McAfee
Gunankar Tyagi, McAfee
Security testing takes a close look at well-known attack classes such as XSS, SQL injection, key logging, backdoors and phishing. However, a lack of skills and experimentation in Application Security Testing prevents its practical implementation, and most project teams do not know where or how to start.
This paper follows the experimental journey of our team of functional testers and their new approach to Application Security Testing. As a team, we faced many challenges from the beginning: understanding the Threat Model approach, implementing it in a time-constrained manner, and dealing with failures to implement it in a way that yielded value. We subsequently created a self-adjusting model, which we have named the “Adaptive Application Security Testing Model”.
This model follows a systematic ladder approach in which your expertise, skill set and result-oriented strategy are enhanced. It breaks free from the traditional approach and instigates an elementary, adaptable methodology. It bridges the gap by identifying security defects early in the test cycle without compromising the rigorous Threat Modeling approach. Overall, it covers security efforts from developing the abuse cases and creating an attack model to carrying out attacks on the product and the subsequent follow-up.
From our experience, we have broken the Adaptive model down into two sequential testing objectives: Peripheral and Adversarial.
Just as with functional defects, initial security issues can be found with minimal effort. Peripheral Security Testing exploits this and takes the lead in finding early security defects. This specially designed black-box approach, with the help of attack models, exposes surface-level but crucial vulnerabilities.
Adversarial Security Testing, on the other hand, requires the tester to delve inside the product with a set of prerequisites: research on historical vulnerabilities, code base knowledge, architectural understanding and application security testing expertise. This hard-to-break approach, when followed rigorously, gives you extra mileage in highlighting product vulnerabilities.
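As an illustrative sketch of the Peripheral, black-box style of testing (the endpoint, parameter and payload list here are hypothetical, not the paper's attack model), canned payloads can be driven through an input while watching for unencoded reflections or server errors:

```python
# Illustrative black-box sketch in the spirit of Peripheral Security Testing.
# The target URL, parameter and payloads are hypothetical examples.
import requests  # third-party; pip install requests

TARGET = "http://localhost:8080/search"   # hypothetical app under test
PAYLOADS = [
    "<script>alert(1)</script>",          # reflected XSS probe
    "' OR '1'='1",                        # SQL injection probe
    "../../etc/passwd",                   # path traversal probe
]

for p in PAYLOADS:
    r = requests.get(TARGET, params={"q": p}, timeout=5)
    if p in r.text:
        print(f"POSSIBLE ISSUE: payload reflected unencoded: {p!r}")
    elif r.status_code >= 500:
        print(f"POSSIBLE ISSUE: server error on payload: {p!r}")
```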
The snapshot below explains the process followed by the two approaches. It is important to adapt these approaches and follow them in a sequential way.
The Adaptive model aims to maximize the efforts of an Application Security tester. Under hard deadlines and resource constraints, this paper gives you a strategy to achieve application security without compromise.
To conclude, here are some of the strategic points, which an application security tester can leverage:
|P-25||Turning Complexity into Simplicity: an Experience Report|
Jon Bach, Quardev
Can complexity be made simple?
Webster’s definition: “Something complex” or “The quality or state of being complex.” Look up “complex” and you get “hard to separate, analyze or solve”.
This defines a lot of what software testing sometimes feels like to me, and after some research with colleagues, I find I am not alone.
To study this further, I enlisted the Weekend Tester (India) crew last week (http://weekendtesting.com/archives/1031) to have them test what I thought was a complex application: Mifos, a microfinancing application by the Grameen Foundation.
I gave them this charter: Read the new user manual – http://en.flossmanuals.net/bin/view/Mifos/WebHome. Using this manual, read the welcome page (http://en.flossmanuals.net/bin/view/Mifos/Welcome) to learn a bit about the purpose of Mifos, pick one of the following sections, and then test the application to discover new bugs, usability improvements and user manual improvements.
I wanted to discuss the notion of the product's complexity in the debrief. But what I didn't account for was that the conversation was heightened by the fact that we used a very SIMPLE supporting application to collaborate on our test notes in real time (www.typewith.me). This application was SO simple that it provided a nice contrast to Mifos and spurred some great conversation, leading me to conclude the following:
With this paper, I intend to share how these four ideas can be explored to make something that may seem complex simpler.
|P-29||Inspiring, Enabling and Driving Quality Improvement|
Jim Sartain, Adobe Systems
This session will discuss how to inspire, enable and drive continuous and significant quality improvement across large organizations. Participants will learn what has worked and not worked at several well-known commercial software companies. Key success factors are outlined as well as an overall methodology for driving the improvement effort.
An overall approach to garnering leadership support, identifying key improvement opportunities and driving progress is shared. Specific success metrics are shared as well as data on what constitutes world-class, what is typical for commercial software organizations and how several large software companies are delivering for their customers in a world-class manner. Specific suggestions are discussed on how to help organizations become more customer focused to ensure they are working on what matters most to external and internal customers.
This discussion includes the use of the Net Promoter concept to discover what is delighting customers and causing them to recommend the product or service to others, as well as how to identify detractors and what needs to be most improved with the software.
The benefits of a number of key engineering best practices, including personal reviews, peer reviews, unit testing and the Team Software Process and Personal Software Process, are discussed, as well as key barriers to their adoption. A roadmap is shared for approaching quality improvement initiatives with a multi-year view, starting with eager (innovative) adopters and ensuring they are rewarded and recognized for their initiative. Next come the early adopters, attracted by the innovators, who promote the changes and supply proof points that the changes pay off for employees, customers and ultimately the business.
Self-directed teams adopting engineering best practices including TSP and PSP were able to ensure that quality, schedule, scope and cost were not strict trade-offs. Through a combination of better planning and estimation, improved project execution and effective use of quality best practices, such as unit testing and peer reviews, these teams delivered high quality software on schedule and budget, with good work-life balance for the team. Knowledge gained from retrospectives and improved metrics helped drive continuous and significant improvement in product and process. The biggest change was in the mindset that a team owns its work processes and has primary responsibility for defining and improving it.
The goals and metrics used to define, track and drive improvement will be discussed. Establishing what is important, along with mechanisms such as relevant metrics at the right altitude and precision, is often key to progress. These metrics assess progress from a customer, employee and business stakeholder perspective and demonstrate the continuous improvement required to ensure continued individual and organizational commitment. Lastly, the long-term results from one large organization are shared as a proof point for the value of adopting these best practices from the viewpoint of all key stakeholders, especially employees and customers.
|P-32||Improvement Processes that Create High Quality Embedded Software|
Jay Abraham, The MathWorks
The development of embedded software encompasses a wide range of best practices and development methodologies. For quality-critical projects intended for highly reliable applications, delivery of high quality software is an absolute requirement. In these situations, development and test teams must complete code reviews, perform unit and regression tests and test on the target system. But is that enough? What if a critical defect escapes to software deployment and then to production? Mathematical techniques based on formal methods may alleviate some of the doubt. Applying formal-methods-based code verification can give software engineering teams the precision to know which parts of the code will not fail, and to isolate the parts that will fail or are most likely to fail. This paper will discuss the practical application of these techniques for the verification of software and, as part of this improvement process, explore how they enable the creation of high quality embedded software.
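As a toy sketch of the idea behind such verification, interval-style abstract interpretation can prove an operation safe for every input in a range rather than for sampled test values; this illustrates the principle only and is not any particular vendor's tool.

```python
# Toy sketch of the idea behind abstract interpretation: track value ranges
# instead of concrete values, and prove an operation cannot fail for ANY
# input in the range (here: proving a division can never divide by zero).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def contains_zero(self):
        return self.lo <= 0 <= self.hi

# Suppose a sensor value is known to lie in [10, 50] and an offset in [-5, 5].
sensor = Interval(10, 50)
offset = Interval(-5, 5)
divisor = sensor + offset          # result range: [5, 55]

# Division by `divisor` is provably safe: zero is not in its range.
print("division provably safe:", not divisor.contains_zero())
```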
Jay Abraham is currently Technical Manager at The MathWorks. His area of expertise is in software tools for the verification of critical embedded applications. He has 20 years software and hardware design experience. Jay began his career as a microprocessor designer at IBM followed by engineering and design positions at hardware, software tools, and embedded operating systems companies such as Wind River Systems. He has held chairmanships in IEEE standards committees and has presented at prestigious conferences such as the Design Automation Conference and Embedded Systems Conference. Jay has a MS in Computer Engineering from Syracuse University and a BS in Electrical Engineering from Boston University.
|P-33||Streamlining Test Automation using White-box Driven Approach|
Sushil Karwa, McAfee
Sasmita Panda, McAfee
To improve the end user software experience, one must identify better means of assessing quality quickly in the Software Development Lifecycle.
Test automation has so far been perceived as something that caters to regression or smoke tests and, if time permits, to functional tests. How often have you been challenged by such an approach? How often have you thought of complementing it with a white-box-driven automation approach? In this paper, we propose to bring innovation to this age-old approach to test automation. The paper substantiates its ideas with real implementations from projects where the concepts were applied. This experience leads us to conclude that a test automation strategy including both white box and black box techniques can streamline the achievement of more effective and efficient testing.
This paper talks about various white box automation ideas that can be implemented across the different stages of the software development life cycle that complement the black box automation efforts. Some novel ways of white box testing and how it can be automated are also discussed. It describes when white box automation can be more effective and essential than black box automation.
In today's complex software environments, maintaining black box automation scripts can become very cumbersome given the frequency of changes to the application under test. Because no direct API call is involved, UI changes can make black box automated test scripts fragile. This paper does not insist on replacing black box automation completely; rather, it proposes complementing it with white box test automation.
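A minimal illustration of the contrast, with hypothetical names: a white-box test exercises the product's API directly, so a cosmetic UI change cannot break it the way it breaks a recorded UI script.

```python
# Minimal illustration (names are hypothetical): a white-box test calls the
# product's API directly, making it immune to cosmetic UI changes that
# routinely break recorded UI scripts.
import unittest

def apply_discount(price, percent):      # stand-in for a product API
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountApiTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```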
Another important aspect of test automation is functional validation and verification. Employing the right approach for automating the validation and verification step as part of test automation is critical and will also be discussed in this paper.
The concepts shared are exemplified using our observations and the practices and processes we adopted to get more out of our white box automation. Finally, we share what we learned while implementing white box automation.
Sasmita Panda is a Senior QA Engineer at McAfee and works with the Host Intrusion Prevention System product team. She has a Master’s Degree in Computer Application from Madras University, India. Her testing and QA experience include a focus on white box testing automation, security testing and identification of different techniques for measuring code coverage.
|P-36||Document Your Software Project|
Ian Dees, Tektronix
Software projects are growing in complexity. At the same time, demands on our productivity are increasing. So it’s vital to come up to speed quickly on any given body of code, whether it’s a new third-party library or a neglected legacy subsystem.
Many projects lean on API documents automatically extracted from source code comments. Other teams construct elaborate diagrams that explain every detail of the architecture. While both can be helpful as references, they do little to answer the basic questions about the code: What's this for? What file should I look in first? What's this project's equivalent of a “Hello, world!” program?
In this presentation, we’ll use the metaphor of a magazine article to think about ways to help someone learn an unfamiliar code base. That person may be a new hire today, or it may be you, five years from now.
Getting the emphasis and level of detail right are crucial. Everything else, especially the specific writing tool you choose, is secondary. Even so, we’ll spend some time looking at a couple of open-source software packages that may be of help on your quest to produce great documentation.
|P-38||Affecting Printer Installation Success in the Consumer Market|
Kathleen Naughton, Hewlett-Packard
I just bought a new printer, needing to get my [term paper, resume, wedding invitations, soccer team flier] printed like yesterday. I unbox the device, follow the single-sheet “how to” instructions and I'm good to go. But wait! My PC can't find the printer on my wireless network. My security software keeps giving me pop-up notifications and warnings. And now I have gotten a fatal error from the installer! On the third attempt, and with assistance from the “dreaded” call-center technician, I am finally ready to print the essential paper that initiated this adventure into technology setup and configuration frustration and aggravation.
This scenario is more common than we as technology companies want to admit. It is an especially challenging experience in the consumer and small business marketplace, where PC and network environments are not managed by IT department policies and processes. At HP, we have categorized call-center calls to help us quantify the cost of install failures in the field. We knew the data: at one point 70-75% of calls were categorized as “install” calls. We developed tools to help us mine what the failures were, but we could not seem to effectively change the install call-center rates in the consumer marketplaces. That has been changing!
This paper will describe the design and development of a test lab that has been effecting change in the R&D requirements and development processes to reduce the install call rates while supporting the adoption of wireless technology advances in the customer segment. The paper will:
|P-39||Using Static Code Analysis to Find Bugs Before They Become Failures|
Brian Walker, Tektronix
Automated static code analysis tools have evolved considerably from the original lint tool. They are now more comprehensive, provide more relevant results and produce fewer false-positive reports. The Video Product Line organization at Tektronix has integrated static source code analysis into its nightly build process. The analysis tool has identified several errors that produced faulty behavior or crashes that could have been avoided. It has also identified subtle errors that were overlooked by human reviewers or simply not covered by functional testing. In concert with automated functional testing, static source code analysis has enabled the Video Product Line to achieve better quality and devote more time to feature development.
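As a hedged sketch of wiring static analysis into a nightly build (the Video Product Line code base is embedded C/C++; pyflakes on Python sources stands in here purely for illustration), the build can be failed whenever findings appear:

```python
# Hedged sketch: run a static analyzer as a nightly-build step and fail the
# build on findings. pyflakes over Python sources is a stand-in here; the
# same shape applies to C/C++ analyzers.
import pathlib, subprocess, sys

sources = [str(p) for p in pathlib.Path("src").rglob("*.py")]
result = subprocess.run(["pyflakes", *sources], capture_output=True, text=True)

if result.stdout:
    print("static analysis findings:")
    print(result.stdout)
    sys.exit(1)  # fail the nightly build so findings are triaged immediately
print("static analysis clean")
```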
Brian Walker is a Senior Software Engineer at Tektronix with over 15 years of experience in embedded software development.
|P-40||Large-Scale Integration Testing at Microsoft|
Jean Hartmann, Microsoft
Given Visual Studio's large number of components and dependencies, its teams expend substantial time and resources integrating and validating these components into the product.
In the past, the flow of code was often hampered by component teams not efficiently and effectively qualifying their code contributions or reverse integrations (RIs). As a result, dependent component teams started stalling, as breaking changes proliferated with respect to product and test code – they paid a heavy price in terms of failure analysis. Those dependent teams also became wary of merging those unstable code contributions into their own code bases as part of their forward integrations (FIs). In short, the flow of code became unpredictable, resulting in a code base that was often in an unstable state.
In this paper, we describe some of the challenges we faced in reinvigorating the code flow while ensuring that code quality was not compromised. We examine more closely some of the issues that resulted in stagnant code flow, discuss some of the lessons gleaned from a test perspective and focus on the new strategies, processes and tools we are developing to make integration testing more efficient and effective. The paper uses examples and data from this “fearless FI” initiative to illustrate and emphasize key points.
|P-41||Assessing The Health Of Your QA Organization|
Michael L. Hoffman
The aspiration of any software company or organization is the delivery of software products within defined goals of scope and constraints of schedule and resources. The competitive nature of business requires that organizations live up to these goals, and work within these constraints more effectively and efficiently over product lifecycle generations.
What constitutes improving over time for a software QA organization varies depending on perspective:
The ability to consolidate so many perspectives into a comprehensive evaluation of improvement at the organizational level is a complex challenge. A QA organization needs to regularly ask itself these questions:
Determining quantitatively whether your software QA organization lives up to the dynamic needs of your business is not as simple as looking at defect trends over time. An appropriate evaluation involves various aspects of the software QA function, including operational behaviors, talent, customers and budgets.
This paper chronicles an approach taken by one software QA organization to evaluate its own health through establishing goals, benchmarking, defining organization-level metrics, and on-going self-assessment.
|P-42||Lessons Learned About Distributed Software Team Collaboration|
Kal Toth, PSU
Raleigh Ledet, Apple, Inc.
Software engineering teams are increasingly working at a distance from each other, and are often globally dispersed. Operating in distributed teams significantly increases the complexity of already complex software engineering tasks. This paper on distributed software teams is an experience report summarizing the conduct and outcomes of a practicum project completed under the auspices of the Oregon Master of Software Engineering (OMSE) Program at Portland State University during the winter and spring of 2010. The students in this program are software professionals with considerable hands-on experience developing software products and services, often working in distributed teams. They are well aware of the challenges of working in such distributed teams – that productivity is seldom as high for distributed teams as for collocated ones. Their industry experience, combined with the software engineering processes they learned, provided a unique opportunity for these mature students to learn more about how software teams collaborate effectively when they are geographically dispersed. By way of thoughtful and systematic experimentation they were able to progressively evolve a practical framework for defining and evaluating a kernel of distributed software processes.
The overall goal of this practicum project was to provide advice and guidance to distributed software teams, and to offer suggestions for further work in this field. The primary objective was to specify and evaluate selected software engineering processes adapted to support collaborative software teams. A secondary goal was to evaluate the processes using open source or freeware tools. The project focus was explicitly on processes over tools, the software tools being the means rather than the ends for learning about team collaboration. Although this experience report focuses on how the project was conducted, it also summarizes the principal results.
The practicum team consisted of eight students split into two sub-teams with slightly different distribution characteristics and distinct, but related, responsibilities. Similar team-of-teams structures of varying sizes are commonly found in practice, which offered a unique opportunity to realistically emulate day-to-day activities, such as project management meetings, specification document reviews and code inspections, while performing the various stages of the software engineering lifecycle.
The project started with a clean slate, without limitations as to which processes would be explored and defined or which tools would be used. The project team was tasked to develop a project specification and development plan to meet the practicum objectives and to utilize tools to shape and evaluate the fruit of their work.
The emergent and evolutionary nature of the project informed the targeted results. More specifically, the students experimented with different collaboration processes and tools during the specification and planning phase of their work, which fed the implementation phase. This focused their attention on some of the distributed team collaboration building blocks – both base processes and representative foundation tools. Some of these were rejected, while others were kept and incorporated into their emergent processes. In other words, the students “ate their own dog food” in an evolutionary fashion as they progressively evolved towards their final work products.
Kal Toth is Executive Director of the Oregon Master of Software Engineering Program (OMSE) and Associate Professor in the Maseeh College of Engineering and Computer Science, PSU. Kal was formerly with Hughes Aircraft of Canada, CGI Group, Intellitech, National Defence Canada, and Datalink Systems Corp.
|P-43||Testing the Mobile Application’s Performance – a Case Study on Windows Mobile Devices|
Rama Krishna Pagadala, Microsoft
Mobile software usage is growing quickly, and an expanding number of consumers expect a consistent experience when moving from PC to mobile devices and back again. One of the key factors enabling wide deployment and adoption of mobile applications is careful usage of system resources. High quality mobile applications must use system resources, such as CPU, memory and, most importantly, battery, sparingly. It is crucial that mobile application design treats frugal and optimal resource usage as an essential aspect (direct porting of desktop applications to mobile platforms is most often a recipe for failure). Knowing and understanding the factors that influence performance on mobile devices is extremely beneficial in planning a successful testing approach. Equally challenging is the task of measuring application performance and reporting the performance data in a useful and actionable manner.
We are going to present a case study on performance testing of the Microsoft Office Communicator Mobile application for Windows Mobile 6.x phones. We will discuss our approach to performance testing, the complexity of automating performance tests and how architectural improvements have contributed to improved application performance and better resource usage. Our automated tests measure 6 unique performance metrics for 20 different scenarios. We will also provide details on the performance metrics and tools used to gather and present this data – and how we’ve used this data to make important decisions throughout the product cycle.
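The shape of such a harness might resemble the following sketch; the scenarios, metrics and numbers are invented stand-ins, not the team's actual measurements.

```python
# Illustrative harness shape only (scenarios and numbers invented): run each
# scenario several times, record multiple metrics, and report medians so the
# data stays actionable across builds.
import random, statistics, time

def run_scenario(name):
    # Stand-in for launching the app and driving one scenario on a device;
    # returns (elapsed_seconds, peak_memory_kb) from simulated measurements.
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.03))   # pretend to exercise `name`
    return time.perf_counter() - start, random.randint(9000, 11000)

SCENARIOS = ["sign_in", "send_message", "presence_update"]
for scenario in SCENARIOS:
    runs = [run_scenario(scenario) for _ in range(5)]
    lat = statistics.median(r[0] for r in runs)
    mem = statistics.median(r[1] for r in runs)
    print(f"{scenario:16s} median latency {lat*1000:5.1f} ms, peak mem {mem} KB")
```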
Prior to Microsoft, Rama Krishna worked as a Software Developer at Cisco Systems for four years. He earned a Master’s degree in Computer Science in 2001.
|P-44||Don’t Test Too Much! (or Too Little – Lessons Learned the Hard Way)|
Keith Stobie, Microsoft
Over-testing can actually be as bad for you as under-testing. Whether it is testing to your own lofty expectations (versus those of the project), verifying too much in stress testing or verifying too much in one test, you might end up with a less useful result than other approaches would give. You must also be careful not to under-test sequences, states or aspects of the software that are simply hard to verify.
This experience report relates parables of mistakes I made and what I learned that you should apply to avoid them. These lessons learned are also cross-indexed as examples for parts of the book: Lessons Learned in Software Testing (Kaner, Bach, and Pettichord).
In the many times I've taught various aspects of software testing, I have found that many students learn best by hearing stories that demonstrate a principle in action. Having tested for many years, I have gathered a number of scars that I attempt to help newer converts to the discipline avoid. Many of my lessons were learned before the book Lessons Learned in Software Testing (Kaner 2001) was ever published, but in reading the book I could see how my experiences related to those in it and correlate them as lesson #. The majority of the lessons are from my mistakes around over-testing!
You can test too much in one test case, causing unnecessary blocking due to bugs. You can test too much for a release by exceeding its requirements. You can verify too much during stress testing, resulting in less stress testing or overly expensive stress testing; one lesson here is to verify stress tests out of band.
Testing is a business activity with a cost. Using your resources wisely to most effectively cover the risks requires:
|P-46||Incorporating a Squad of Robots into Your Testing|
Jerry Yang, McAfee
Are you running out of time to test your products? Are you overloaded with too many work assignments? Are you repeatedly doing the same type of work day after day? Consider building your own robot squad to help reduce your workload. Harry Robinson, Microsoft Principal Software Design Engineer, presented the topic “How to Build Your Own Robot Army” in 2006, a presentation that resonated with me. I have created a robot squad to help me not only in executing tests, but also in configuring system and application settings, executing test tools/utilities, collecting data logs and analyzing test results. When I incorporated the robot squad into my testing, I realized time savings by offloading mundane tasks to the robots, allowing me to focus on the more complex portions of my work assignments.
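A minimal sketch of the squad idea, with hypothetical chores: each robot is a small function that does one mundane task, and a runner executes the squad and reports results while the tester works on harder problems.

```python
# Sketch of the "robot squad" idea (tasks are hypothetical): each robot is
# a small function doing one mundane chore; a runner executes the squad
# and collects results.
import datetime

def configure_settings():
    return "applied test configuration"

def collect_logs():
    # e.g. archive device logs to a dated folder (path is illustrative)
    stamp = datetime.date.today().isoformat()
    return f"logs archived under results/{stamp}/"

def summarize_results():
    return "0 failures in smoke suite"

SQUAD = [configure_settings, collect_logs, summarize_results]

for robot in SQUAD:
    print(f"[{robot.__name__}] {robot()}")
```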
This paper presentation will describe:
|P-54||Testing Concurrency Runtime via a Stochastic Stress Framework|
Atilla Gunal, Microsoft
Rahul Patil, Microsoft
The non-linear interaction of many software components makes quality assurance a hard problem even for traditional serial code. Concurrency and interaction from multiple threads adds an additional temporal dimension to software complexity. This extra dimension introduces unique bug types such as deadlocks, livelocks and race conditions.
In this paper, we will describe how a solid stress framework, complete with integrated structured randomization and methodical meddling with temporal properties, makes practical software quality assurance possible. Specifically, we discuss the methods and practices applied to provide solid assurance for a critical commercial component: the native Concurrency Runtime stack from Microsoft. First, by applying random distributions in individual tests and integrating such individual tests via a statistically fair scheduler, we describe how to cope with traversing the seemingly infinite interaction patterns. Second, we show how such testing helps identify hangs stemming from deadlocks and livelocks. Third, we discuss methodically injecting randomization into the temporal properties of the software system, and how that gives us a reasonable probabilistic expectation of finding bugs. We conclude with a brief survey of the effectiveness of our stochastic stress framework compared with other tools.
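A minimal sketch of temporal randomization (not Microsoft's framework): injecting random sleeps around an unsynchronized read-modify-write varies the thread interleavings between runs, making the latent race far more likely to surface.

```python
# Minimal sketch of temporal randomization: random sleeps around a shared-
# state update vary thread interleavings between runs, so a latent race
# condition shows up as lost updates far more often.
import random, threading, time

counter = 0

def worker():
    global counter
    for _ in range(1000):
        time.sleep(random.uniform(0, 0.0001))  # meddle with timing
        tmp = counter                          # unsynchronized read...
        tmp += 1
        counter = tmp                          # ...and write: a race

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

# Without the race, counter would be 4000; lost updates reveal the bug.
print(f"expected 4000, got {counter}")
```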
Rahul Patil is a senior QA lead for Microsoft’s new native concurrency runtime technologies. As a QA lead in charge of a brand new concurrency platform, Rahul has had to address reporting quality on new risks introduced by non-determinism inherent in concurrent software. He also holds a Master’s degree in Software Engineering and has been working at Microsoft for the past 6 years testing various SDKs and platforms.
|P-56||Test Environment Configuration in a Complex World|
Ed French, Microsoft Corporation
Liu Hong, Microsoft Corporation
Maxim Markin, Microsoft Corporation
Imagine you need to reproduce a complex test environment that consists of Windows Server and client machines and requires several Active Directory domains, with DNS and/or DHCP roles installed on some of the server machines. How do you implement this in a way that is robust across all available versions, editions and languages of Windows Server? With the advent of PowerShell, Windows gained command-line usability and tools for local and remote management of Windows Server that allow tackling these kinds of problems. This presentation offers an approach that captures a complex test configuration in an XML file and executes a test scenario configuration using PowerShell scripting and PowerShell remoting.
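The presentation's implementation is PowerShell; purely as a language-neutral sketch of the same data-driven idea, the following Python fragment parses a hypothetical topology XML and lists the per-machine steps a remoting layer would then execute.

```python
# Language-neutral sketch (the actual approach uses PowerShell remoting):
# parse a hypothetical topology XML and print the per-machine configuration
# steps a remoting layer would carry out.
import xml.etree.ElementTree as ET

topology_xml = """
<testbed>
  <machine name="srv1" os="WindowsServer"><role>ActiveDirectory</role><role>DNS</role></machine>
  <machine name="srv2" os="WindowsServer"><role>DHCP</role></machine>
  <machine name="cli1" os="WindowsClient"/>
</testbed>
"""

root = ET.fromstring(topology_xml)
for machine in root.iter("machine"):
    roles = [r.text for r in machine.findall("role")]
    print(f"{machine.get('name')} ({machine.get('os')}): install {roles or 'no roles'}")
```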
Ed French has worked at Microsoft Corporation for 10 plus years as a Software Developer in Test. He is in the Windows Server division testing management products and has one US patent for software development.
Liu Hong is a Lead SDET in the Microsoft Platform and Tools division at Microsoft Corporation. She has worked at Microsoft since May 2006. She holds a Ph.D. in Civil Engineering from Clemson University. Before her career at Microsoft, she worked at Aon Corporation as a research engineer, then as a software developer at companies such as Infomove and Applied Inference in the Puget Sound area and, immediately before joining Microsoft, as an independent contractor for Gray Hills Solutions.
Maxim Markin is a Software Design Engineer in Test at Microsoft Corporation.
|P-57||Issues in Verifying Reliability and Integrity of Data Intensive Web Services|
Anand Chakravarty, Microsoft
With the rapid proliferation of web services architected in different flavours, providing users with a diverse range of features and supporting traffic at scales far higher than traditional applications, verifying server-side components has become an essential deliverable for many test organizations. The test scenarios, their priorities, and the execution and verification of web services are necessarily different from those used for desktop/standalone applications. Efficiently verifying web services requires an understanding of the key differences between these two classes of products and basing the test approach on that understanding. The transition from the waterfall to the agile model of product development also requires a recalibration of test case definition, development and execution.
In this paper, we share our experiences in shipping a set of web-services that use large volumes of data and Artificial-Intelligence algorithms to translate text between human languages. These web-services are part of Microsoft’s Machine Translation products, shipped by an incubation team under Microsoft Research.
The areas of performance and stress testing yield bugs that present unique challenges in their discovery, investigation, fixing and regression. In the early days of testing services, much of the necessary infrastructure and tooling was written by individual teams, geared towards the perceived unique characteristics of each service. Over time, these have evolved into now publicly available automation libraries that handle many commonly performed functions, such as results summarization and performance monitoring. While these tools are beneficial and add value to a test team's quality coverage, it is essential to use them in a targeted manner, with the priority being the measurement of quality metrics and the tools being a means to obtain them. Using the MSR Machine Translation system as a case study, we present a test approach based on real-world scenarios and practical solutions that have helped ship a set of web services that receives traffic from millions of users a day, with continuously improving quality and performance.
A big lesson from our experiences has been the value of ‘Keeping It Simple’, regardless of the complexity of the systems under test. A proper understanding of user scenarios to help correctly prioritize test cases, combined with focused automation, helps achieve the goals of software quality and agile development schedules, leading to successful products and passionate users.
|P-59||Quality Pedigree Programs: or How to Mitigate Risks and Cover your Assets|
Susan Courtney, Johnson-Laird, Inc.
Barbara Frederiksen-Cross, Johnson-Laird, Inc.
Marc Visnick, Johnson-Laird, Inc.
Forensic software analysts routinely review source code in the context of litigation or internal software audits to assess whether, and to what degree, a body of software uses or references third-party materials. These references may include source code examples incorporated directly into a program, source code routines that are statically linked as part of the program, the use of binary libraries that are dynamically referenced when a program is executed or URL-based citations to third-party materials, such as an article on a website. While third-party materials are obviously invaluable to software development, third-party materials may introduce a variety of legal or security risks into software and expose a company to unexpected legal liability and/or negative publicity. Thus, quality software is defined not just by technical measurements, but also by the presence of a comprehensive set of policies and procedures that help mitigate these potential risks.
We believe it is essential that companies proactively establish a baseline pedigree for their software via a forensic code audit. The successful completion of a forensic code audit represents a moment in time where all known third-party materials are appropriately catalogued, and risks associated with those materials are fully understood by the company’s relevant business and legal stakeholders. But a one-time pedigree analysis alone is not sufficient to prevent downstream problems. A forensic code audit should be part of a comprehensive quality pedigree program that includes a set of well-defined prophylactic policies and procedures surrounding the use of third-party materials. These policies and procedures take into account the entire software lifecycle, including any customer support obligations that may remain once a program is deprecated. A company that proactively implements a quality pedigree program is better positioned to respond to customer requests, react to lawsuits or potential licensing problems, or to justify a particular valuation of their intellectual property in the context of a merger or acquisition.
Building upon the authors’ 2009 presentation, we explore the practical mechanics of a forensic code audit, and discuss the other policies and procedures that can be used to manage a quality pedigree program as a part of your overall software quality plan.
Barbara Frederiksen-Cross is the Senior Managing Consultant for Johnson-Laird, Inc. in Portland, OR. She is a forensic software analyst specializing in the analysis of computer-based evidence for copyright, patent and trade secret litigation. She began her career as a computer programmer in 1974 and began working as a forensic software analyst in 1987. She was appointed as Court Data System Advisor to the Honorable Marvin J. Garbis (US District Court, District of Maryland) in Dec. 2000, and has provided forensic analysis services and assisted with data preservation and discovery services on many cases.
Marc Visnick is a forensic software analyst and attorney based in Portland, OR, and a senior consultant with Johnson-Laird, Inc. He specializes in forensic software analysis for patent, copyright and trade secret litigation, as well as software due diligence, independent development project design and supervision, and electronic evidence preservation, recovery and analysis. Over the past 6 years he has participated in hundreds of forensic audits for software-related mergers, acquisitions and source licensing transactions. He is Chair-elect of the Oregon State Bar Computer and Internet Law Section.
|P-61||Using Live Labs Pivot to Make Sense of the Chaos|
Max Slade, Microsoft
Today’s methods of organizing and searching data cannot keep pace with the exponential rate of information growth. This information overload presents an opportunity to expose new value in the aggregate, where the data interactions provide greater ability to act on previously unexplored insights.
Pivot will change the way people perceive and explore the information that surrounds them by visualizing information in new ways, exposing hidden relationships and making it easier to act on these newly discovered insights. Pivot makes it easier to interact with massive amounts of data in ways that are powerful, informative and fun. Please view the following 2010 TED talk, which shows Pivot running and includes a description from Gary Flake, the founder of Live Labs: www.ted.com/talks/gary_flake_is_pivot_a_turning_point_for_web_exploration.
Pivot is a tool you can use in your business immediately.
|P-63||Bridging the Cultural Gap|
Katherine Alexander, Vertex Business Services
The ever-increasing globalization of the workplace and geographical dispersion of employees create many challenges for today's testers, including communicating across different time zones, overcoming language barriers and maintaining a consistent team framework. One unique challenge that should not be overlooked is cultural diversity and the stereotyping that can and does occur. Whether it is Americans working with East Indians, Europeans working with Asians, or even east and west coasters in the United States, there are cultural differences through which participants can inadvertently hinder a project. Because perceptions affect interactions, the negative results range from frustrated communication among team members to delayed projects.
These obstacles need not be a hindrance to the testing process. Learn what the common stereotypes are and how to overcome pre-conceived notions, devise efficient work practices and create a more cohesive testing team across the globe.
|P-64||Simulating Real-world Load Patterns when Playback Just won’t Cut It|
Wayne Roseberry, Microsoft
One of the challenges with performance and stress load tests on servers is to construct patterns of usage and data that mimic what will happen in real-world deployments. Artificial load patterns are valuable and often efficient means to find flaws, but they fall short of determining if the product will behave as desired under customer expectations. Some playback solutions exist that allow for relatively simple load testing, but these are not well adapted to server software having complex, dynamic states required to construct valid operations and requests.
This paper describes how the SharePoint 2010 team solved this problem by creating a tool capable of sampling real-world usage data and building a model that describes and generates the load test necessary to simulate real-world traffic patterns. The tool was designed to build load tests that run using a shipped web-testing product, Microsoft Visual Studio Test System 2008, thus making the tests and data portable to any other environment where this product is available. The solution allowed us to incorporate new load patterns relatively quickly, both from in-house deployments and from customer deployments of our server product. Adding such workloads to our test suites had previously been so expensive and difficult as to be infeasible. With the new solution in place, bug discovery rates increased, as did fix ratios, because we knew the load tests were based on expected traffic patterns. We were able to add pre-production validation testing to our internal deployment processes, providing costing and stability predictions, something that was not possible before. Finally, the test tools were released to market to allow customers to do their own capacity planning and management via a load test kit able to adapt to their own workload needs.
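A conceptual sketch of the sampling idea, far simpler than the actual tool: count which operations real users perform, then draw synthetic load from the observed distribution instead of a uniform one. The operation names and counts below are invented.

```python
# Conceptual sketch only: sample the operation mix from real usage logs,
# then generate synthetic load by drawing from the observed distribution
# rather than hitting every operation uniformly.
import random
from collections import Counter

sampled_log = (["view_page"] * 70 + ["edit_item"] * 20 +
               ["search"] * 8 + ["admin_op"] * 2)   # invented usage sample

freq = Counter(sampled_log)
ops, weights = zip(*freq.items())

random.seed(0)
synthetic_load = random.choices(ops, weights=weights, k=1000)
print("generated mix:", Counter(synthetic_load))
```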
|P-69||Code Coverage Case Study: Covering the Last 9%|
Cristina Manu, Microsoft
Pooja Nagpal, Microsoft
Donny Amalo, Microsoft
Roy Patrick Tan, Microsoft
Code coverage is a good technique for understanding what code has been exercised by an existing test bed. How many resources should be invested in increasing code coverage is often debated. This paper is a case study of the code coverage effort we undertook during the testing cycle for a component of .NET Framework 4.
At the end of the test cycle we had a focused initiative to increase the block code coverage from 91% to 100% in order to measure the return on investment (ROI) of such an effort.
We calculated the ROI based on the number and importance of the issues found, the time invested, the increase in code coverage and the bug yield in comparison with other test activities. Our results showed that it is not prohibitively expensive to achieve effectively 100% code coverage, nor did we find an exponential increase in the number of bugs at higher code coverage, as suggested in some papers in the field. Although we did find some systemic holes in our test bed, they did not uncover any major issues in the product. In this paper we discuss what parts of our development process (such as the propensity of testers to develop more complex scenarios first) may have contributed to this lack of bugs.
Pooja Nagpal is a Software Development Engineer in Test for the Parallel Computing group at Microsoft.
Donny Amalo is a Software Development Engineer in Test for the Parallel Computing group at Microsoft.
Roy Patrick Tan is a Software Development Engineer in Test at Microsoft. He got his PhD in Computer Science from Virginia Tech in 2007.
|P-72||Peering into the White Box: A Tester's Approach to Code Reviews|
Alan Page, Microsoft
Code reviews (including peer reviews, inspections and walkthroughs) are consistently recognized as an effective method of finding many types of software bugs early, yet many software teams struggle to get good value (or consistent results) from their code reviews. Furthermore, code reviews are mostly considered an activity tackled by developers, not one that typically falls within the realm of the test team. Code reviews, however, are an activity that questions software code, and many testers who conduct code reviews question the code differently than their peers in development.
This paper will present how a test team at Microsoft used code reviews as a method to improve code quality and more importantly as a learning process for the entire test team. The paper will also discuss how smart and consistent application of lightweight root cause analysis and the creation of code review checklists led the path to success – and how any team can use these principles to reach these same levels of success.
|P-73||Scripting vs. Coding for Rapid Test Case Automation|
Sam Bedekar, Microsoft
We will introduce the Simple Language for Scenarios (SLS). In SLS, we constrained users to writing test cases in a particular text-based format (not XML based). This reduced coding-style flexibility, but gave us a single paradigm for writing test cases and a point of injection via the interpreter. We used this point to achieve several goals: we were able to automatically marshal calls between different thread models, adapt existing cases for API parameter testing and enable security fuzzing without requiring each tester to write code to do so. For string injection, we wrote an API string-testing class where the script engine could automatically call each method in a loop and pass in all possible parameters within the context of an existing case, simply by annotating the “start” and “end” points in the test case where the engine needed to loop. Additionally, with API testing, we were able to run a case to a particular point and then bring up a UI to allow ad hoc testing. The steps executed by the tester in the UI would then be written back in the script language and could be incorporated into the test case library.
Ultimately, we landed on a hybrid model that leveraged the strengths of both approaches. Along the journey, we considered several facets, including the development time for a test case, reducing the cost of failure analysis, and the adaptability and extensibility of a test case to new environments. This paper will go into the technical details of our findings and how other software projects can benefit from our lessons learned to deploy automated test cases more effectively.
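A toy interpreter in the spirit of SLS (the verbs and annotations below are invented, not the actual SLS syntax) shows how a plain-text format gives the engine an injection point: a fuzz_start/fuzz_end pair marks a region the engine re-runs with varied input.

```python
# Toy interpreter (invented syntax, not actual SLS): plain-text steps
# dispatch to handlers, and a fuzz_start/fuzz_end pair marks a region the
# engine loops over with varied inputs, much like the annotated "start" and
# "end" points described above.
SCRIPT = """\
launch app
fuzz_start
send_text hello
fuzz_end
close app
"""

FUZZ_INPUTS = ["hello", "", "A" * 64, "\u0000"]

def run(line, text_override=None):
    verb, _, arg = line.partition(" ")
    if verb == "send_text" and text_override is not None:
        arg = text_override
    print(f"  exec: {verb} {arg!r}")

lines = SCRIPT.splitlines()
i = 0
while i < len(lines):
    if lines[i] == "fuzz_start":
        end = lines.index("fuzz_end", i)
        for value in FUZZ_INPUTS:          # engine loops the annotated region
            for step in lines[i + 1:end]:
                run(step, text_override=value)
        i = end + 1
    else:
        run(lines[i])
        i += 1
```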
|P-75||Driving Product Quality towards Release Goals|
Bhushan Gupta, Nike, Inc.
Data mining is exciting because it provides answers to questions that are essential for making business decisions. The testing activity generates a wealth of data, ranging from product quality to testing productivity, for improving testing operations. With some extra effort, it is possible to inspire software development groups to seek better development practices.
This paper describes how the data collected by a test group at Hewlett-Packard provides decision support to the program management team, insight into test productivity and improvement opportunities for the software development teams. The data mining goes beyond the normal collection of defect patterns during product development and includes test execution productivity, defects found per hundred test cases, defect aging patterns and code volatility. The data has provided insight into the performance of the test teams, a comparison of effectiveness between manual and automated testing, and support for the natural rhythms observed in product development. An organization may choose to limit the use of test data to supporting release decisions, but in reality the data holds rich information to guide improvements across the overall software development lifecycle.
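As a small sketch of the kind of mining described (the records are invented), one of the paper's measures, defects found per hundred test cases, can be computed per team per week:

```python
# Small sketch with invented records: compute defects found per hundred
# test cases executed, broken down by team and week.
from collections import defaultdict

# (week, team, tests_executed, defects_found) -- illustrative records
records = [
    (1, "team_a", 240, 12), (1, "team_b", 180, 21),
    (2, "team_a", 260, 8),  (2, "team_b", 200, 15),
]

by_key = defaultdict(lambda: [0, 0])
for week, team, executed, defects in records:
    by_key[(week, team)][0] += executed
    by_key[(week, team)][1] += defects

for (week, team), (executed, defects) in sorted(by_key.items()):
    rate = 100 * defects / executed
    print(f"week {week} {team}: {rate:.1f} defects per 100 test cases")
```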
From 1995-97, Bhushan Gupta worked as a Systems Analyst at Consolidated Freightways, where he contributed to the design and development of a Windows-based logistics management system. Bhushan was a faculty member of the Software Engineering Technology department at Oregon Institute of Technology from 1985 to 1995.
As a change agent, Bhushan volunteers his time and energy for the organizations that promote software quality. He has been a Vice President, a Program Co-Chair, and a member of the board of directors of Pacific Northwest Software Quality Conference.
Bhushan Gupta has a MS degree in Computer Science from New Mexico Institute of Mining and Technology, Socorro, New Mexico, 1985.
|P-79||Increase Development Efficiency by Utilizing Outsourced Testing|
Hao Zhao, Expedia
In a global environment, outsourcing development has become a standard and popular practice to add development capacity and reduce cost. This paper presentation will focus on how to maximize the productivity of offshore test teams.
This paper summarizes my experience working with several outsourced testing teams. Following a brief overview of the differences between onshore and offshore testing, the paper presents the strengths of traditional offshore test teams and how to utilize them to improve test efficiency and increase development capacity, as well as the challenges and risks that test managers will confront when using outsourced testing. Finally, based on real-world experience, it provides several specific suggestions, strategies and examples for managing a test team composed of both onshore and offshore engineers.
This presentation offers test managers the ability to use outsourced testing resources to increase development capacity efficiently while keeping development costs at a manageable level.
|P-80||Incorporating User Scenarios into Software Testing Lifecycle|
April Ritscher, Microsoft
How many of us have tested an application and certified that it met the requirements stated in the functional specification, only to find out that it does not meet the business need?
Many times as software test engineers we are brought on to the project after the requirements have been gathered. This gives us very little visibility into the early discussions on what the user needs to accomplish with an application solution.
Therefore we test and certify the application based on the functional documentation provided to us, without really understanding how the business intends to use the application in production.
This required us to change our approach to writing test cases and shift our focus from traditional functional testing to user-scenario-focused testing. To achieve that objective, we used visual representations of possible user actions and application responses that align with the user scenarios. Using visual representations enabled us to easily identify gaps in the functional requirements and allowed other disciplines to review the test cases more easily.
This paper will describe how we break out our functionality into individual test cases and use these as building blocks to test the end-to-end user scenarios. It will also describe how we extract information from the flowcharts to be used during manual and automated testing.
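As a rough illustration of extracting test cases from such flowcharts, the sketch below models a scenario flowchart as a directed graph and enumerates every end-to-end path; the node names are hypothetical, and real flowcharts are of course richer:

```python
# Hypothetical user-scenario flowchart: node -> possible next steps.
flow = {"login":       ["search", "browse"],
        "search":      ["view_item"],
        "browse":      ["view_item"],
        "view_item":   ["add_to_cart", "logout"],
        "add_to_cart": ["checkout"],
        "checkout":    ["logout"],
        "logout":      []}

def end_to_end_paths(graph, node, path=()):
    """Each path from the start node to a terminal node is one
    end-to-end scenario test case."""
    path = path + (node,)
    if not graph[node]:
        yield path
    for nxt in graph[node]:
        yield from end_to_end_paths(graph, nxt, path)

for case in end_to_end_paths(flow, "login"):
    print(" -> ".join(case))
```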
|P-81||Web Test Automation Framework with Open Source Tools powered by Google WebDriver|
Nikhil Bhandari, Intuit
Kapil Bhalla, Intuit
Amid nails, nuts and bolts, the hammer is not enough.
TeKila is an aggregation of several Open Source powers: Google WebDriver, HtmlUnit, FitNesse, TestNG and Selenium. It offers a toolkit for testing web applications at different levels and in different modes.
Often, the search for a silver-bullet automation tool ends in compromise. In demanding times, when everything is changing rapidly, speed and flexibility cannot be compromised.
The ever-rising burden of releasing features fast, with high levels of quality, punctuality and performance, demands automation. Writing a test automation framework that does more than UI testing for rapidly growing web-based applications is challenging. Many find it tormenting, some attempt it, and only a few succeed.
A bridal suit has to be tailored; off-the-rack ones would just not fit. Similarly, along with creative ideas and innovative approaches, the engines of automation frameworks need to be powered with multiple cylinders. Numerous tools are available for use; this introduces a new paradox, the paradox of choice: what to choose, where to choose from, and how to choose are questions everyone needs answered.
A few of the many challenges are:
In our attempt to combat these automation challenges, we came up with TeKila. TeKila is an aggregation of the best of various Open Source powers, enabling us to do:
What did we achieve through TeKila:
Separation of concerns:
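As a small, hedged illustration of the separation-of-concerns idea (this is not TeKila’s code; the URL and locators are invented), a page-object layer keeps locators and page mechanics out of the tests, so that tests state intent and outcomes only:

```python
# Sketch using the Selenium WebDriver Python bindings; hypothetical page.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: owns the locators and interactions for one page."""
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")
        return self

    def sign_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_login_shows_dashboard():
    driver = webdriver.Firefox()
    try:
        LoginPage(driver).open().sign_in("demo", "secret")
        assert "Dashboard" in driver.title   # the test knows outcomes, not locators
    finally:
        driver.quit()
```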
Nikhil Bhandari holds an engineering degree and has over 9.5 years of experience as a Lead Software Engineer in QA. He currently works with Intuit, and previously worked with companies such as Oracle, McAfee and Satyam Computers in Bangalore, India. He has used various testing tools and scripting languages to develop test automation frameworks for both desktop and online applications on various platforms. He has spoken at the STARWEST Conference 2008 (USA), the Free-Test Conference 2009 and 2010 (Norway), and a Step-In Forum Evening Talk (India).
Kapil Bhalla has a Master’s degree in Computer Applications from NIT Karnataka, India, and has been working with Intuit India for the last 2+ years. Over the past year he has played a key role in developing an Open Source automation testing framework.
|P-82||Managing Polarities in Pursuit of Quality|
Denise Holmes, Edge-Leadership Consulting
One of the questions for this conference is “can complexity be managed or are we destined for complete chaos?” The key concept in that phrase is managing complexity, versus being at the whims of a complex system. Sometimes a sense of powerlessness results from treating interdependent factors as independent of one another and then being surprised at the negative impacts that occur because we weren’t aware of their interrelatedness. For example, taking the stance “we need to focus on quality no matter what it costs or how long it takes” could result in a wildly over-budget project that is obsolete or of no interest to the customer by the time it is released. This is more likely when costs and time to market aren’t proactively managed at some level. The opposite also holds true: going for speed at the expense of quality, or cutting cost at the expense of quality, might mean losing customers’ trust and business, beginning a downward slide that finishes the organization. The reality is that all of these factors are important and influence each other to some degree: quality AND cost AND time to develop. These are just a few dynamics that may be competing with each other in our world.
So, what does this mean for the quality professional of today?
Managing complexity means recognizing differing and critical needs, experiencing the upsides of each need, avoiding the downsides, and doing it all intentionally, with awareness of the choices being made. The skills to accomplish this, as covered in the paper, include the ability to:
This paper introduces the process of polarity management as a way of seeing, thinking and communicating around opposing dynamics to help you become (or remain) a proactive player in support of your software quality efforts.
|P-86||ATDD in Scrum – Improving Quality, User Focus and Fun – All in One|
Testing as an integral part of the SCRUM process is a big challenge: the cycles are short, regression testing is necessary more than ever, and quality should be part of the definition of “done” for each story. Under these conditions, testers need maximum flexibility and the best tools for dealing with SCRUM as a development method.
The Acceptance Test Driven Development (ATDD) methodology elevates TDD concepts one level higher and provides a quick solution for integrating testing with SCRUM. Automated acceptance test cases are designed and developed in parallel with, or even before, the features. ATDD improves both development and testing by enhancing focus on users’ requirements. It also makes better use of testers’ time, including faster tester ramp-up. Last but not least, it’s really fun!
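As a minimal sketch of the idea (the `Cart` feature and its API are hypothetical), an ATDD acceptance test is written in given/when/then terms before or alongside the feature it specifies:

```python
class Cart:
    """The feature under development; in ATDD the test below exists first."""
    def __init__(self):
        self.items = {}
    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty
    def total_items(self):
        return sum(self.items.values())

def test_customer_can_add_the_same_item_twice():
    # Given an empty cart
    cart = Cart()
    # When the customer adds the same item twice
    cart.add("SKU-42")
    cart.add("SKU-42")
    # Then the cart holds two of that item
    assert cart.total_items() == 2
```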
I would like to share the experience of our SCRUM team, which consists of developers and testers working together on a big software project for a year and a half. In this complex environment, after fifteen sprints, we realized that a change was needed and adopted the ATDD methodology.
I will show, with graphs, the increase in tests and bugs found through automated acceptance tests, and the improvement in the team’s satisfaction after the change. I will also show the importance of combining manual tests with automated acceptance tests, taking the limitations of automation into account. The ATDD methodology solved some of our key problems and helped us improve our project quality and our SCRUM process. It may help you too.
|P-87||Lean System Integration at HP|
Kathleen Iberle, Hewlett-Packard
Discover how HP is applying lean principles to drive the integration of large systems, resulting in both higher quality and higher productivity.
In the HP printer business, lean integration is:
This paper will include an introduction to Lean software and systems development and a reference list. Lean systems development is a superset of agile software development. Lean can handle situations that the most commonly known agile methods do not address, including large, complex and partially waterfall systems, by applying methods derived from queuing theory and statistics.
The paper will then demonstrate the methods with progress reports and/or results from actual projects in the HP Inkjet and LaserJet businesses. There is enough data as of this writing to demonstrate what we are doing, why, and how well it is (or isn’t) working. The paper will show management methods built on cumulative flow diagrams and the power of this approach. Higher quality and productivity will be demonstrated with examples and anecdotes.
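For readers unfamiliar with cumulative flow diagrams, the sketch below computes the data behind one: for each day, the number of work items that have reached each state. The dates, states and items are invented for illustration:

```python
from datetime import date, timedelta

STATES = ["defined", "in_progress", "done"]
items = [  # hypothetical: the date each item entered each state (None = not yet)
    {"defined": date(2011, 5, 2), "in_progress": date(2011, 5, 3), "done": date(2011, 5, 6)},
    {"defined": date(2011, 5, 2), "in_progress": date(2011, 5, 5), "done": None},
    {"defined": date(2011, 5, 4), "in_progress": None,             "done": None},
]

day, end = date(2011, 5, 2), date(2011, 5, 7)
while day <= end:
    counts = {s: sum(1 for it in items if it[s] and it[s] <= day) for s in STATES}
    print(day, counts)   # plotting these as stacked bands yields the CFD
    day += timedelta(days=1)
```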
I’m very excited about the Lean methods – in my opinion, Lean is the next big thing in software engineering – and I hope to be able to share it with the Pacific Northwest Software Quality Conference audience.
Kathy has an M.S. in Computer Science from the University of Washington and an excessive collection of degrees in Chemistry from the University of Washington and the University of Michigan.
|P-88||Effective Testing Techniques for Untold Stories in Story-Driven Development|
Erbil Yilmaz, Microsoft
As part of story-driven development, in each sprint feature crews (teams) take a few stories and implement them. The goal, at the end of each sprint, is to complete and demonstrate the customer experience according to the stories’ design. This is a great software development methodology for proving that the software being built can deliver the customer experience.
However, story-driven development brings unique challenges for software testing. As stories that rely on each other are implemented, a complex software system emerges, capable of doing more than just what the stories told. With the addition of each story, the testing surface and the capabilities (and defects) of the software grow exponentially.
Usually, different sub-teams develop different parts of the software, making it very difficult to manage and understand the complete set of capabilities of the software. As a result, at the end of each sprint the software product consists of: (1) features told by the developed stories and (2) features and defects emerging from untold stories, arising from interactions among the implemented stories. Sometimes these emergent features are as designed, but often they hide significant integration bugs.
This paper describes various techniques developed and used by the Visual Studio Architecture Tools team over a few releases, through experiments in testing within agile development. It also presents case studies to illustrate and emphasize key points, ranging from using architecture diagrams in test planning to targeting particular test coverage across the test spectrum. The guidance focuses on a set of best practices that teams can adopt to achieve optimal test coverage over the test spectrum.
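One common way to keep interaction testing tractable, sketched below under stated assumptions (the story names are invented, and pairwise coverage is a general technique rather than necessarily this team’s method), is to cover every pair of stories instead of every combination:

```python
from itertools import combinations

stories = ["create_diagram", "validate_layer", "generate_code",
           "import_solution", "share_model"]

# 2**5 - 1 = 31 non-empty story combinations, but only C(5, 2) = 10 pairs.
for a, b in combinations(stories, 2):
    print(f"interaction test: {a} + {b}")
```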
|P-89||Using Customer Driven Quality to Manage Complexity|
John Ruberto, Intuit, Inc.
Today, developing software is much more complex than in the past. Our applications are no longer standalone applications but multi-tiered systems. Sometimes they are groups of systems working in concert, which we call ecosystems. The systems interface with other systems, thousands of them. Our software is no longer the product of a single team, but is built by many individuals in several countries. New client platforms and browsers come to the market every day.
In this complex world, how do we define quality? We define quality goals with an ever-increasing number of “ilities”. We perform all kinds of testing: acceptance, performance, functional, concurrency, stress, exploratory, stability, build verification and, finally, regression testing.
Given all of this complexity, it can be difficult as a quality team to make decisions:
The answers to these questions lie with our ultimate quality consultants – our customers.
One of our values at Intuit is “Customers Define Quality”, and this paper will share how we interact with our customers at every stage of the lifecycle to understand their definition of quality.
The paper will describe how we include the customer in our requirements definition phase to understand the most important customer problems, generate ideas to solve those problems, and test these hypotheses instead of relying solely on the opinions of HiPPOs (Highly Paid Persons with Opinions). Since errors and defects injected during the requirements phase can be the most expensive, investing in quality at this stage is especially productive.
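As a hedged sketch of testing such a hypothesis with data rather than opinions (the conversion counts are invented, and a simple two-proportion z-test is used here only for illustration; the paper does not prescribe a specific statistic):

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)        # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))     # standard error
    return (p_b - p_a) / se

z = two_proportion_z(success_a=120, n_a=1000, success_b=150, n_b=1000)
print(f"z = {z:.2f}  (|z| > 1.96 is significant at the 5% level)")
```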
We use several customer driven processes during the design and coding phases. Design for Delight goes hand-in-hand with the requirements formation & testing process, with the goal of ensuring that we build the product that solves important problems for our customers. We also introduce the customer advocacy role, usually part of the Quality Assurance team. The paper will also describe how we build customer empathy in every developer.
During the testing phase, our customers help us prioritize our testing, both manual and automated. Our customers also help us understand how they use our products, which guides our testing process. Finally, we involve our customers directly in testing itself.
After release, our support and feedback processes allow us to react very quickly to customer issues, sometimes when only a single customer is affected. Customer feedback also helps calibrate our development and test processes and provides an even deeper understanding of our customers.
Customer Driven Quality helps focus our quality assurance activities on our most important stakeholder, our customers.
|P-90||Working with Complex Data|
Engin Uzuncaova, Microsoft
Software testing, in general, is a human-centric process in which creativity and technical skills mix and mingle in unique ways to both understand and evaluate software systems. When we are faced with new and original testing challenges, it is usually the human element in the process that makes it possible to repackage existing tools and methods in new and original ways to produce viable solutions. As the complexity of a system under test increases, this task becomes harder to accomplish. What do we do when the level of understanding required to properly analyze and test such systems is beyond our comfort zone?
Testing geo-spatial data and route-planning algorithms is a good example of this: the inherent complexity of geo-spatial data, the representation of that data, and the novel algorithms that use it make testing a daunting (or fun?) task at both levels, understanding and evaluating. This paper presents a collective testing approach that we developed at Bing Maps to attack the complexity of our route-planning product, focusing on the following areas:
This approach has provided improved coverage in our testing and enabled us to identify more quality issues during development. We also benefited from a more efficient evaluation mechanism for complex issues found during testing. As we head towards providing far more geo-spatial coverage and enhanced features to our customers, our testing approach will have to adapt and evolve accordingly; this paper presents some pointers in this area as well.
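One concrete kind of oracle that fits such testing, sketched here as an assumption rather than Bing Maps’ actual method, is an invariant check: a planned route can never be shorter than the great-circle distance between its endpoints:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def route_length_km(points):
    return sum(haversine_km(*p, *q) for p, q in zip(points, points[1:]))

# Hypothetical route (Portland -> waypoint -> Seattle, lat/lon pairs).
route = [(45.52, -122.68), (45.60, -122.60), (47.61, -122.33)]
assert route_length_km(route) >= haversine_km(*route[0], *route[-1])
print("route respects the great-circle lower bound")
```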
|IS||Three Must Haves to Pass the PMP Exam|
Balbinder K. Banga, PMP
The Project Management Professional certification is an important designation to carry in a Project Manager’s toolbox. Those three little letters, PMP, have been known to open doors and provide opportunities for advancement in the field of project management.
While most agree the certification is important, what does it really take to pass the exam? Does one need a photographic memory, an affinity for interpreting a book that reads like a dictionary, or the skills of a mathematics wizard? Join Balbinder K. Banga as she aims to dispel the myths surrounding what it takes to pass the exam. The presentation will focus on the top three “must haves” for passing the PMP exam.
Balbinder has coached hundreds of students to successfully pass the exam and will share three common findings that she has discovered to be consistent among successful test takers.
Balbinder has served as a judge for the PMI Excellence awards for the last four years.
Balbinder has 10 years of experience in Program and Project Management.
Balbinder is a PMP and a Microsoft Certified Professional. She has a Master’s degree in Engineering Management with a focus on Project Management. She volunteers with PMI and is the Director of Academic Outreach for the Portland PMI Chapter.
|EX||Testing in The Cloud|
Paul Trompeter, Headstrong, Managing Consultant
Oh, so you know what Cloud Computing is – great! But did you also know that the benefits of Cloud Computing can be enormous, and so can the costs? Do you know what the real-world implementations of Cloud Computing are, and what challenges they bring? You may know what Cloud Computing is by definition, but how would you test in that vast, ever-changing, shared environment? QA needs to know how to plan for, design, and test within this new, challenging environment.
This webinar will answer these questions and show you how to strategize your testing for the Cloud. It will also discuss: