|T-41||Delivering Relevant Results For Search Queries with Location Specific Intent|
Anand Chakravarty, Microsoft Corporation
John Guthrie, Microsoft Corporation
With a still-growing number of users and an ever-growing volume of information available, the Internet presents many interesting and continuously morphing challenges in the area of Search. In addition to the scale of data to be processed, the results of user search queries are expected to be highly relevant and delivered speedily. Presenting these results to users in a manner that enables them to decide and act on the information provided is a primary characteristic of a good Search engine. Measuring the accuracy and relevance of Search results is thus an important area in the testing of Search engines. Considering the high volume of information to be processed, the extremely diverse nature of query intent, and the growing expectation of high relevance from Search users, testing Search engines requires the traditional QA characteristic of passion for quality, combined with a high level of comfort with ambiguity and strong automation skills.
When we consider search queries that have location-specific intent, other factors become important alongside the usual technical problems. Most queries with local intent carry a higher degree of immediacy: the user intends to act on the results returned. There is thus a greater expectation that results be fresh and accurate. With the growing market for mobile devices, searches with local intent are becoming more popular. For a tester asked to measure how well a Search engine performs on such queries, it is important to understand the scale and variety of queries involved. Because there is a high level of ambiguity and variation in search queries and results, statistical metrics are a natural tool for measuring Search engine quality.
In this paper we cover the methods used to obtain metrics for measuring the relevance of search queries with local intent. Testing is done while fundamental components are in a dynamic state: rankers, intent detectors, content, location identifiers, and so on. A QA team that devises good metrics for measuring the quality of search results in such scenarios increases the quality of the Search experience delivered to its users, and helps to evolve the quality of the solutions implemented in this extremely challenging problem space.
Anand Chakravarty is a Software Design Engineer in Test at Microsoft Corporation. He has been testing online services for most of the past decade, and is currently on the Bing Local Search Team.
As a serial entrepreneur, John Guthrie founded, sold, and/or participated in the IPO of companies in various fields. He has led teams to develop, test, and patent technology for Digital Rights Management, Data Storage, and Web Site Modifications. Now at Microsoft, he is working on ensuring the quality of the results for local-intent queries at Bing.
|T-30||An Introduction to Customer Focused Test Design|
Alan Page, Microsoft
Test design, simply put, is what testers do. We design and execute tests to understand the software we’re testing, and to identify risks and issues in that software. Good test design comes from skilled testers using a toolbox of test ideas drawn from presentations, articles, books, and hands-on experience with test design. Great testers have great test designs because they have a generous test design toolbox.
One significant drawback of the majority of test design ideas used by many testers is the heavy emphasis on functional testing of the software. While functional testing is a critical aspect of software testing, many test design ideas fail to include high-priority plans for testing areas such as performance, reliability, or security. Test teams frequently delegate these testing areas to specialists, and ignore the importance of early testing. Furthermore, when testers manage to design tests for these areas, the testing often occurs late in the product cycle, when it may be too late to fix these types of bugs.
Our team at Microsoft has introduced the concept of Customer Focused Test Design. This test design approach includes an emphasis on testing end-to-end scenarios, real-time customer feedback, future customer trends, and, most importantly, a shift of emphasis away from functional tests, and towards early testing approaches for quality attributes that have the biggest influence on the customer’s perception of quality.
This paper discusses the individual pieces of this approach and the success we’ve had so far, and provides examples of how this change in approach has affected the overall project outcome.
Alan Page began his career as a tester in 1993. He joined Microsoft in 1995, and is currently a Principal SDET on the Office Lync team. In his career at Microsoft, Alan has worked on various versions of Windows, Internet Explorer, and Windows CE, and has served as Microsoft’s Director of Test Excellence. Alan is a frequent speaker at industry testing and software engineering conferences, a board member of the Seattle Area Software Quality Assurance Group (SASQAG), and occasionally publishes articles on testing and quality on testing and software engineering web sites and in magazines. Alan writes about testing on his blog (http://angryweasel.com/blog), was the lead author of How We Test Software at Microsoft (Microsoft Press, 2008), and contributed a chapter on large-scale test automation to Beautiful Testing (O’Reilly Press, 2009).
|T-11||Testing Services in Production|
Keith Stobie, Microsoft
There are many benefits to be realized by Testing Services in Production when the risks are properly mitigated. Testing in production finds problems at a scale that most groups can’t afford to duplicate with a test environment. Using production systems for testing is critical to business success of an effective software service. This paper describes and demonstrates several different approaches to using production systems for testing including: when each approach is appropriate, what prerequisites are needed, and how each approach would be used.
Monitoring of services, controlled experiments, and analysis of production data are well-known forms of testing. You can also use tracers to follow service flow, do destructive testing (killing services, networks, etc.), and even do load, capacity, performance, and stress testing in production. In short, almost all kinds of testing can be done in production, but how do you mitigate the risk? Testing in production allows customers to benefit (and occasionally to suffer) from the most current advances. Throttling requests or work, exposure control, incremental rollout, and especially superb monitoring are all needed to control the risk of production testing.
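To make one of those mitigations concrete, here is a minimal C++ sketch of exposure control; the function name and the hash-bucket scheme are our illustration, not anything prescribed by the paper.

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>

    // Sketch of exposure control: route a stable fraction of users to the
    // code path under test, widening the fraction only while monitoring
    // stays healthy. All names here are illustrative.
    bool InExposureGroup(const std::string& user_id, uint32_t rollout_percent) {
        // Hashing the user id keeps each user in the same group across
        // requests, so any problem affects a bounded, consistent slice.
        const uint32_t bucket = std::hash<std::string>{}(user_id) % 100;
        return bucket < rollout_percent;
    }

    int main() {
        // Start at 1% exposure; increase only as the monitors stay green.
        for (const char* user : {"alice", "bob", "carol"}) {
            std::cout << user << " -> "
                      << (InExposureGroup(user, 1) ? "new path" : "stable path")
                      << "\n";
        }
    }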
Keith Stobie is a Principal Software Development Engineer in Test working as a Knowledge Engineer in the Engineering Excellence group at Microsoft. Previously he was Test Architect for Bing Infrastructure, where he planned, designed, and reviewed software architecture and tests. Keith worked in the Protocol Engineering Team on the Protocol Quality Assurance Process, including model-based testing (MBT), to develop test frameworks, harnessing, and model patterns. With twenty-five years of distributed systems testing experience, Keith’s interests are in testing methodology, tools technology, and quality process. Keith has a BS in computer science from Cornell University. He is an ASQ Certified Software Quality Engineer, holds the ASTQB Foundation Level certification, and is a member of ACM, IEEE, and ASQ.
|T-29||Exploring Cross-Platform Testing Strategies at Microsoft|
Jean Hartmann, Microsoft
The Office Productivity Suite including applications, such as Word, Excel, PowerPoint and Outlook, has now been available for Windows PCs and Apple Macintoshes for many years. During these years, the respective product and test code bases have grown significantly, with increasing numbers of features being added and requiring validation. When validating PC-based products, testers leveraged some of the benefits of the Windows platform including the availability of the .NET framework to implement their tests using managed programming languages. For test execution, they used Windows-supported mechanisms, such as COM/RPC, to communicate in- and out-of-process with the application under test. Thus, when teams needed to deliver related Office products on a different platform, such as the Apple Macintosh, test teams were faced with a dilemma – either attempt to port test cases, together with the required test infrastructure, or create new tests using the platform-preferred development/test environment. Both were time-consuming.
With the advent and rapid evolution of mobile platforms, such strategies are becoming more difficult to justify – implementing test suites from scratch is just too costly and slow. New test strategies, tools and processes are needed that promote the construction of portable tests and test libraries and enable testers to quickly retarget a given test case for different platforms and devices. This approach is particularly valuable when validating the common or ‘core’ application logic of each Office product for different platforms and devices, resulting in a more consistent level of quality for core functionality. It also gives test teams more time to focus on validating those product features that are unique to a specific platform or device.
This paper chronicles our ongoing exploration of platform-agnostic testing strategies during the current shipping cycle. It highlights the challenges that we have faced so far and attempts to illustrate and emphasize key concepts of our work using examples.
Jean Hartmann is a Principal Test Architect in Microsoft’s Office Division, with previous experience as Test Architect for Internet Explorer and the Developer Division. His main responsibility is driving the concept of software quality throughout the product development lifecycle. He spent twelve years at Siemens Corporate Research as Manager for Software Quality, having earned a Ph.D. in Computer Science in 1993 while researching selective regression test strategies.
|T-32||A Distributed Randomization Approach to Functional Testing|
Ilgin Akin, Microsoft
Sam Bedekar, Microsoft
Traditionally, much of software testing focuses on controlled environments, a predefined set of steps, and expected results. This approach works very well for tests designed to exercise targeted functionality of a product. However, even when test automation results are all green, have you wondered what other bugs might be hidden in the product? Have you hit a point in your test automation where you felt you were no longer finding new bugs, only regressions?
These are some of the questions we had been thinking about at Microsoft that led to the AutoBugbash project. A standard bugbash is a focused period of time in which the whole team gets together in a room and pounds on the product. Bugbashes are a highly effective way to find a lot of bugs: there are more eyes on the product, and people tend to perform many random actions that may not be part of their day-to-day structured testing. However, bugbashes are expensive in terms of manual labor.
AutoBugbash is a form of bugbash automation that incorporates the elements of decentralization and randomization. There are two main components to this approach. The first is a set of standalone test clients that autonomously take actions on their own, verify expected behavior locally, and log their observations, with minimal state checking and no predefined script. The second is a post-run component called the reconciler, which reconstructs the sequence of events by parsing logs and matching events to actions. In this paper, we describe how the AutoBugbash project helped uncover crashes and other hard-to-find bugs in the Microsoft Lync product.
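As a rough illustration of the first component, a random-action client could take the following shape; the action names, log format, and C++ rendering are our own sketch, not the AutoBugbash implementation itself.

    #include <cstddef>
    #include <fstream>
    #include <functional>
    #include <random>
    #include <string>
    #include <vector>

    // Illustrative random-action client: pick actions at random, run them,
    // and log one line per event for a later reconciler pass to parse.
    struct Action {
        std::string name;
        std::function<bool()> run;  // returns false if a local check failed
    };

    int main() {
        std::vector<Action> actions = {
            {"sign_in",      [] { /* drive the product here */ return true; }},
            {"send_message", [] { return true; }},
            {"sign_out",     [] { return true; }},
        };
        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<std::size_t> pick(0, actions.size() - 1);
        std::ofstream log("client.log");
        for (int step = 0; step < 100; ++step) {
            const Action& a = actions[pick(rng)];
            // Each line records what was attempted and what was observed, so
            // a reconciler can match events to actions across all clients.
            log << step << "\t" << a.name << "\t"
                << (a.run() ? "ok" : "FAIL") << "\n";
        }
    }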
Ilgin Akin has been working in the Unified Communications group at Microsoft as a Software Development Engineer in Test since 2006. Prior to Microsoft, she worked at Pitney Bowes Corporation as a Software Developer.
Sam Bedekar is a Test Manager at Microsoft. He has worked in the Unified Communications space for nine years. Sam is very passionate about test and the impact it can have on product quality and the industry. Sam believes that Test Engineering has vast potential to grow and is excited about leading efforts to bring the state of the art forward.
|T-36||Application Security for QA Managers – Pain or Gain|
Dr. Ravi Kiran Yerra, COE Security LLC
It is often the case that developers and software vendors are not fully aware of application security vulnerabilities such as cross-site scripting, injection flaws, and cross-site request forgery. In many cases, these vulnerabilities can be prevented with training, more consistent and standardized software development practices, software acquisition protocols, and appropriate use of manual and/or automated security vulnerability testing and manual and/or automated security code reviews.
Dr. Ravi Kiran Yerra is an internationally known speaker who holds a doctoral degree in internet security management. He has over fifteen years of real-world experience delivering information security solutions around applications, cloud, virtualization, products, and databases, along with risk assessments and software testing, across the globe in numerous industries and verticals. Since 1995, Dr. Yerra has been involved in multiple security projects and has played a vital role in establishing various private and government information security initiatives.
|T-57||Testing in Production: Enhancing Development and Test Agility in Sandbox Environment|
Xiudong Fei, Microsoft
Sira Rao, Microsoft
Developing and testing an application hosted in a sandbox environment presents unique challenges, and development and testing agility slows when such applications must be validated in a production environment. Production environments are complex, restrict permissions for modifying and using the environment, are costly in terms of deployment and upgrades, and make it difficult for applications to capture relevant logging information and to troubleshoot in real time. When testing such applications in production, the problems are aggravated when validating implementation changes, gathering information such as logs, or troubleshooting the application at runtime, because these tasks would require updating the binaries on the back end or having the developer perform in-process debugging. The same problems arise when the Test and Ops teams need to deploy test environments to validate the application: repeatedly setting up test environments is costly, and it is hard to simulate a true production environment. All of these constraints slow product development, testing, and troubleshooting, and thereby delay achieving the quality levels needed for release.
This paper introduces a detour concept for an application hosted in a sandbox environment. We share our experience validating the Lync Web application in the context of a Silverlight sandbox. We discuss the tool and framework created to address the above-mentioned challenges, and how we used the tool to intercept Silverlight binaries, modify them, and redirect them to a browser session hosting the application under test. Additionally, the tool can remotely manipulate objects in the Silverlight application through the use of scripts. With those objects at hand, we can programmatically manipulate them for out-of-process debugging and also use this mechanism as an alternate UI automation framework. The tool can be used to enable logging, inflate log size, and exercise product error handling and security through fault injection. Based on our use of this tool in test and production environments, and on the cost of deploying even simple test environments, we conservatively estimate a savings of two person-days per month. The paper concludes with lessons learned by our team throughout the development and deployment of this tool, and includes practices other teams can implement to achieve similar results.
Xiudong Fei has been a test engineer in the Microsoft Lync group for the last four years. His passion is to create new ways of testing that have business impact, and to have fun.
Sira Rao is a Test Lead at Microsoft. He has worked on the Unified Communications products at Microsoft for over seven years. He is passionate about building high-quality products that excite customers.
|T-21||Application Monitor: Putting games into Crowdsourced Testing|
Vivek Venkatachalam, Microsoft
Marcelo Nery Dos Santos, Microsoft
Harry Emil, Microsoft
Software test teams around the world are grappling with the problem of testing increasingly complex software with smaller budgets and tighter deadlines. In this tough environment, crowdsourced testing can (and does) play a critical role in the overall test mission of delivering a quality product to the customer.
At Microsoft, we firmly believe in using internal crowdsourced testing to help us reach high levels of test and scenario coverage. To achieve this goal, we use the dogfood program, in which employees volunteer to use pre-release versions of our products and give us their feedback on a regular basis. However, for a volunteer-based crowdsourced testing effort to be really effective, one needs the ability to direct the crowd to exercise certain scenarios more than others, and the ability to adjust this mix on demand. What if one could devise a mechanism that provides the right incentives for the crowd to adopt the desired behaviors in real time?
This paper describes how we conceived of, designed, and implemented Application Monitor, a tool that runs on a user’s machine and allows us to detect usage patterns of Microsoft Lync (the software product this team of authors worked on) in near real time. The paper then describes a simple game we incorporated into the tool with the goal of making testing fun for the crowd. The game also gave us the ability to direct the crowd’s efforts toward testing high-risk features by appropriately changing incentives.
One learning point was that even crowd behaviors that attempted to game the system served the ultimate purpose, which was to increase testing of specific scenarios. The paper will discuss this and other takeaways as well as point out key issues that other teams wishing to start similar efforts should consider.
Vivek Venkatachalam is a software test lead on the Microsoft Lync team. He joined Microsoft in 2003 and worked on the Messenger Server test team before moving to the Lync client team in 2007. He is passionate about working on innovative techniques to tackle software testing problems.
Marcelo Nery dos Santos is a software engineer on the Microsoft Lync test team. As a remote employee for 15 months, he was an active dogfood user of the product himself. He has also worked on tools to help test performance for the Lync product. His main interests are working on innovative tools and approaches for enabling more efficient testing.
Harry Emil has worked at Microsoft on the Windows and Office testing teams since 1989. His research focuses on the wisdom of crowds, workplace productivity, and real-time multilingual communications.
|T-7||No Test Levels Needed in Agile Software Development Environment!|
Leo Van der Aalst, Fontys University of Applied Sciences
Testing is not only a vital element of agile projects; it is an integral part of agile software development. In traditional software development environments, test levels like system test, user acceptance test, and production acceptance test are commonly executed. In an agile software development environment (ASDE), all team members have to work together to get the work done. All disciplines have to work together and support each other when necessary. There are no separate designer, developer, user, or test teams in an agile project. In such an environment it does not make much sense to talk about system test and/or acceptance test teams. Instead, all team members have to accept the feature (or user story, use case, etc.) with their own acceptance criteria in mind. For instance, an end user should test the suitability and user-friendliness of the feature; operations should test, for instance, the performance, manageability, and continuity of the feature; and the designer should test the functionality of the feature. In short, in an ASDE test levels are replaced by combinations of acceptors and the quality characteristics important to each acceptor, per feature. This approach requires a different mind-set from the team members, a different product risk analysis approach, and a different view on establishing the test strategy.
Leo van der Aalst has almost 25 years of testing experience and has developed, among other things, services for the implementation of test organizations, agile testing, test outsourcing, software testing as a service (STaaS), risk-based testing, calculation of the added value of testing, and test governance.
Leo is lector "Software Quality and Testing" at Fontys University of Applied Sciences (Eindhoven, The Netherlands), and he is co-author of the books TMap NEXT® for result-driven testing and TMap NEXT® Business Driven Test Management. He is also a member of the Dutch Innovation Board Testing and of the Research and Development unit of Sogeti Netherlands.
Besides all this, Leo is a much sought-after teacher of international test training, a regular speaker at national and international conferences, and the author of several articles.
He has spoken at conferences including STARWEST and PNSQC (both USA); Test Congress, Iqnite, and Test Expo (all UK); Quality Week Europe (Belgium); SQC/Iqnite and TAV (both Germany); Swiss Testing Day (Switzerland); and ExpoQA and QA&TEST (both Spain).
Cecile Davis is co-author of this paper. She is a test consultant and has been involved in several agile test projects. She is a certified agile tester, is RUP-certified, co-authored TPI NEXT®, and founded the SIG on agile testing within Sogeti. She is involved in several national SIGs related to this subject.
|T-15||Application Compatibility Framework: Building Software Synergy|
Ashish Khandelwal, McAfee
Shishira Rao, McAfee
Amrita Desai, McAfee
Building a healthy inter-product alliance in a software ecosystem requires a great deal of effort. Our paper focuses on simplifying product compatibility testing and helping testers find compatibility defects early in the testing life cycle. It demonstrates a framework-driven approach for improving product quality from a compatibility standpoint, one that has been proven, tested, and sustained over releases of a product.
As a standard practice, we follow the compatibility model described in our paper and make sure that our product passes the product compatibility testing procedure. With the three testing types described in the paper in place, we have not only identified risky areas early in the test cycle but also succeeded in influencing product management decisions based on our results and analysis. The paper outlines several strategic points that a compatibility tester can leverage.
Ashish Khandelwal has more than 6.5 years of Software Testing experience. He holds a B.Tech degree from IIT Kanpur and works as a Senior QA Engineer with the McAfee Host DLP product solution group. He has contributed to multiple international conferences and published papers on Compatibility & Security Testing.
Shishira Rao has more than 7 years of Software Testing experience. She holds a B.E. degree from VTU and works as a Senior QA Engineer with the McAfee Host DLP product solution group. Her testing and QA experience includes a focus on Process improvement, Compatibility testing and Strategic planning.
Amrita Desai is a QA Engineer at McAfee and works with the Host DLP Product solution group with more than 3 years of experience. She has a Bachelor’s Degree in Computer Science from RGPV University. Her testing and QA experience includes a focus on Black box testing, Compatibility testing and Soak testing.
David Lee, SoftSource Consulting
David Lee is a Senior Software Developer at SoftSource Consulting (www.sftsrc.com), a consulting firm with a proven track record of helping companies engineer custom software solutions on the Microsoft platform. David has been developing and architecting software solutions for the past 13 years. Most of his experience has been in designing and maintaining enterprise portal applications in the financial/professional services industry. David graduated from Oregon State University with a BS in Liberal Studies.
|T-9||YES! You CAN Bring Test Automation to Your Company!|
Alan Ark, Compli
No money? No support from your supervisor? No experience in test automation? No problem.
Even with these barriers to entry, you too can reap the rewards of automating test activities in your web application. Sometimes a formal framework is overkill, and shooting for the moon rarely works when there are so many unknowns. And a less-than-optimal implementation of test automation could leave your project at risk.
This presentation will provide you with a framework of ideas to help your test efforts along. The key is to start small and learn as you go. By utilizing Ruby (a free scripting language) and the Watir (Web Application Testing in Ruby) library, among other tools, you take zero hit to your budget. By building on cumulative successes, you can present the business case for supporting automated test efforts in your organization.
As someone who has brought in test automation to several organizations, I will share with you my keys to success, so that you may also reap the many rewards of automated testing on your projects.
Alan Ark is the QA Manager at Compli, in Portland, Oregon. Alan has gained tremendous experience working for Unicru, Switchboard.com, and Thomson Financial – First Call. Mr. Ark has previously presented ‘Euro: An Automated Solution to Currency Conversion’ at Quality Week ’99, and ‘Collaborative Quality: One Company’s Recipe for Software Success’ at PNSQC 2008. At Compli, he is using Ruby to solve problems both large and small. His LinkedIn profile can be viewed at http://www.linkedin.com/in/arkie
|T-19||Unit-Testing a C++ Database Application with Mock Objects|
Ray Lischner, Proteus Technologies
Sometimes, an idea isn’t valuable because it’s new but because it’s old. Unit testing isn’t new. Databases aren’t new. Database client libraries aren’t new. Writing a database client library that directly supports unit testing via mock database classes and the injection of mock results—that’s valuable.
Unit testing is not new, and has long been a common element of software testing. Various design and development techniques, such as extreme programming and test-driven design, have reinvigorated unit testing as a best practice. In order to unit-test an application effectively, the developer must be able to isolate the code under test. The most common method of isolating code for unit testing is to use mock classes or objects to stand in for other classes, especially external services and interfaces.
Java and similar languages offer language support for easy definition and construction of mock objects. C++ presents additional challenges. Some C++ utilities help you write mock classes, but in order to unit-test effectively, a C++ library must be designed for testability. Given the evident value of unit testing, it is unfortunate that so many important libraries are not designed with unit testing in mind.
This paper presents a new C++ database client library, one that has been designed from the start with unit testing as a primary design goal. The library uses abstract interfaces so that the developer can program against a clean database interface, substituting a mock implementation for unit testing and a real implementation for production use. The application code needs to be compiled only once, so you can be sure that you are unit-testing the real application code, even when it is linked with the mock database library.
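A minimal sketch of that dependency-injection shape, with an interface we invented for illustration (the paper's actual library differs):

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // Application code depends only on this abstract interface, so it
    // compiles once and links against either a mock (for unit tests) or
    // the real client library (for production).
    class Database {
    public:
        virtual ~Database() = default;
        virtual std::vector<std::string> Query(const std::string& sql) = 0;
    };

    // The test injects whatever result set it wants the code under test
    // to observe, with no live database anywhere in sight.
    class MockDatabase : public Database {
    public:
        explicit MockDatabase(std::vector<std::string> rows)
            : rows_(std::move(rows)) {}
        std::vector<std::string> Query(const std::string&) override {
            return rows_;
        }
    private:
        std::vector<std::string> rows_;
    };

    // Code under test, written against the interface only.
    std::size_t CountActiveUsers(Database& db) {
        return db.Query("SELECT name FROM users WHERE active = 1").size();
    }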
The resulting library enabled delivery of a database application with zero known defects. The paper presents the design decisions and shows how testability drove the library design, and how the library and resulting applications benefited from the emphasis on unit testing.
Ray Lischner has been designing and developing software for about three decades, starting at the California Institute of Technology, where he earned his B.S. in computer science, and subsequently at large and small companies on both coasts of the United States. Meanwhile, he earned his M.S., also in computer science, from Oregon State University and performed stints as an author (C++ in a Nutshell, Exploring C++, Shakespeare for Dummies, and other books), consultant, teacher, and stay-at-home dad. Many years ago, Ray served on the PNSQC board and various committees. He currently develops software for Proteus Technologies, where he holds the title of Distinguished Member of Technical Staff. Ray is a senior member of IEEE and a member of ACM.
|T-23||Playback Testing Using Log Files|
Vijaya Upadya, Microsoft
Most testers are familiar with record/playback testing. A variation of this approach is log playback testing. Instead of explicitly recording user actions with a recorder, this technique relies on the information in the log file produced by the system under test to play back the scenario. This is extremely useful for reproducing issues reported by customers, which are typically hard to reproduce otherwise due to a lack of detailed repro steps. This approach also lends itself well to creating data-driven tests and can be used to augment existing test automation.
Testing and investigating failures in a non-deterministic software system can be very costly and painful. In most cases, the only source of information about which code path was exercised is the set of log files produced by the application. This paper describes how the information in log files can be used to play back a sequence of actions to test specific code paths and to reproduce bugs. We also explore an extension of this approach in which log files serve as a source for data-driven testing, simply by varying the sequencing of the actions that are played back. The paper illustrates the technique using a real-world example, and concludes by discussing the benefits of the approach and how it can be applied to different test domains.
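A toy sketch of such a playback loop, assuming an invented two-field log format (a real implementation would parse whatever the product actually logs):

    #include <fstream>
    #include <functional>
    #include <map>
    #include <sstream>
    #include <string>

    // Treat each log line as a recorded action ("ACTION<tab>argument") and
    // dispatch it to a handler that re-drives the system under test. The
    // action names and handlers are illustrative placeholders.
    int main() {
        std::map<std::string, std::function<void(const std::string&)>> handlers = {
            {"OPEN", [](const std::string&) { /* reopen the named document */ }},
            {"EDIT", [](const std::string&) { /* replay the recorded edit  */ }},
            {"SAVE", [](const std::string&) { /* trigger a save            */ }},
        };

        std::ifstream log("customer.log");
        std::string line;
        while (std::getline(log, line)) {
            std::istringstream fields(line);
            std::string action, arg;
            if (std::getline(fields, action, '\t') && std::getline(fields, arg)) {
                auto it = handlers.find(action);
                if (it != handlers.end()) it->second(arg);  // replay this step
            }
        }
    }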
Vijaya Upadya is a senior software development engineer in test at Microsoft, currently working at the main Microsoft campus in Redmond, Washington. Over the past decade, he has been involved in software development and testing for products ranging from compilers, to libraries such as LINQ to SQL, Silverlight, and RIA Services, to Windows Live Mesh. Vijay primarily focuses on test strategy, test frameworks, and tools for the team.
Vijay has a Master’s degree in Systems Analysis and Computer Science from Mangalore University, India.
|T-42||Green Lantern Automation Framework|
Sridevi Pinjala, IBM
Once upon a time, the average life for software was 7 years. Some software lasted more than 7 years; some lasted less. But no matter how long the software lasted, its code was updated, improved, changed, and tweaked frequently. To handle the tweaks and to make sure the rest of the code was not broken, regression testing came into being. Automated tools were invented to handle regression. But most automated tests also needed to be tweaked when the application code was tweaked, and when the application interface changed completely, the automated scripts died.
Some companies had several interfaces for one back-end system. They had to pay the price of having all these interfaces tested with the same tests yet maintained separately. There was no way out: even if one particular interface already had hundreds of automated scripts written, those scripts could not be reused on the other interfaces.
Some companies had frequent changes to the user interface of the application. Since the automated test scripts needed to be changed every time the user interface changed, automation did not add any value, nor did it decrease the need for manpower. So those companies had to rely on manual testing alone.
Many test automation tools came onto the market to help out the needy companies. Some tools relied on the objects used for testing, some relied on the user interface, and some relied on the Document Object Model. Almost all of these tools shared one feature: the ability to change the logical name of an element, giving it a unique identification to avoid conflict.
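One common realization of that logical-name idea, sketched here with invented names (the Green Lantern framework itself is not shown here and may differ):

    #include <map>
    #include <stdexcept>
    #include <string>

    // Tests refer to stable logical names; a per-interface map resolves each
    // name to the concrete locator for that UI. When the UI changes, only
    // the map changes and the test scripts survive.
    class LocatorMap {
    public:
        void Define(const std::string& logical, const std::string& concrete) {
            map_[logical] = concrete;
        }
        const std::string& Resolve(const std::string& logical) const {
            auto it = map_.find(logical);
            if (it == map_.end())
                throw std::runtime_error("unknown element: " + logical);
            return it->second;
        }
    private:
        std::map<std::string, std::string> map_;
    };

    // Usage: the same test can drive web and desktop front ends by
    // swapping maps, e.g.
    //   LocatorMap web;      web.Define("login_button", "//button[@id='signin']");
    //   LocatorMap desktop;  desktop.Define("login_button", "Window1.BtnSignIn");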
Sridevi Pinjala is an automation expert at IBM and now works on IBM test automation tools. She came up with the Green Lantern framework while working with Everest Consultants. She is a Certified Scrum Product Owner and is currently pursuing an MS in Information Management at Aspen University. She considers Jon Bach her mentor, without whose encouragement she wouldn’t be here to present her ideas. She identifies herself as the American in the Saree. (http://sriluballa.wordpress.com)
|T-27||Parameterized Random Test Data Generation|
Bj Rollison, Microsoft
Testing is the most challenging job in software development. The software tester must select a small number of tests from countless possible tests and perform them within a limited period of time, often with too few resources. Additionally, tests usually employ only a fraction of the possible data that may be used by customers or by malicious users. Whether we are unit testing, testing an API, or executing end-to-end user scenarios or acceptance tests, the test data is usually the keystone of many functional tests.
Testers often craft test data representing typical customer inputs, as well as invalid data, for a given input control or parameter. But defining a broad set of test data from all possible inputs, for either positive or negative testing, is often a non-trivial part of the testing effort. And while static test data is useful, its effectiveness wears out with repeated use across subsequent iterations of a test.
One possible way to increase the breadth of test data coverage is to use random test data. But random test data is sometimes disregarded because it may not “look like” customer data, or because random data may generate false positives, indicating a failure in the system due to invalid constructs in the random test data itself. This paper explains the fundamental principles of parameterized random test data generation, which can be used to overcome many of the problems associated with random test data. It also demonstrates how parameterized random test data can increase test coverage and expose unexpected issues in software.
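As a hedged illustration of the principle (not of the paper's actual tooling): constrain the randomness with parameters so the data stays valid for the input under test, and log the seed so any failure is reproducible. The parameter choices below are invented.

    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <string>

    // Parameterized random "name": length bounds and alphabet are the
    // parameters that keep the output random yet structurally valid.
    std::string RandomName(std::mt19937& rng, std::size_t min_len,
                           std::size_t max_len) {
        static const std::string alphabet =
            "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ '-";
        std::uniform_int_distribution<std::size_t> len(min_len, max_len);
        std::uniform_int_distribution<std::size_t> pick(0, alphabet.size() - 1);
        std::string out;
        for (std::size_t i = 0, n = len(rng); i < n; ++i)
            out += alphabet[pick(rng)];
        return out;
    }

    int main() {
        const unsigned seed = std::random_device{}();
        std::cout << "seed=" << seed << "\n";  // record seed for reproduction
        std::mt19937 rng(seed);
        for (int i = 0; i < 5; ++i)
            std::cout << RandomName(rng, 1, 32) << "\n";
    }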
Bj Rollison is a Principal SDET Lead at Microsoft, currently leading a team responsible for testing the foundation services that integrate social networking features on the Windows Phone. Bj started his professional career in the computer industry in 1991, building custom hardware solutions for small and medium-sized businesses for an OEM company in Japan. In 1994, he joined Microsoft’s Windows 95 international team, and later moved into a test management role in the Internet division, working on Internet Explorer 3.0 and 4.0 and several web-client products.
As the Director of Test and a Test Architect in Microsoft’s Engineering Excellence group, Bj has taught thousands of testers and developers around the world. Bj also teaches software testing courses at the University of Washington, is a frequent speaker at international software testing conferences, and is co-author of the book How We Test Software At Microsoft. Bj is currently interested in the application of random test data generation to increase the effectiveness of test designs through variance in test data; some of his tools can be found at http://www.testingmentor.com.
|T-17||Reliability Before You Ship|
Wayne Roseberry, Microsoft
Reliability is one of the most difficult areas in which to establish confidence before shipping a product. There is always that nagging question: “Will it be reliable enough for real-world load and demand? Did what we built this time get better than what we had before?”
This paper will describe how the Microsoft SharePoint 2010 team used reliability and monitoring tools in lab and real-world environments to substantially improve service availability and performance. The presentation will discuss what our key definitions were for availability, failure and performance targets, and show how we used those to establish confidence in reliability before the product shipped.
Wayne Roseberry is a Principal Design Engineer in Test at Microsoft Corporation, where he has been working since June of 1990. His software testing experience ranges from the first release of The Microsoft Network (MSN) to Microsoft Commercial Internet Services, Site Server, and all versions of SharePoint. Before testing, Wayne worked in Microsoft Product Support Services, assisting customers of Microsoft Office.
Before working for Microsoft, Wayne did contract work as a software illustrator for Shopware Educational Systems.
In his spare time, Wayne writes, illustrates and self-publishes children’s literature.
|T-12||Creating a Lean, Mean Client Integration Machine|
Aaron Akzin, WebMD
Shelley Blouin, WebMD
Kenny Tran, WebMD
Supriya Joshi, WebMD
Sudha Sudunagunta, WebMD
Aaron Medina, WebMD
Reorganizations are all too common in the modern business environment, particularly at technology organizations. This paper chronicles the motivation for undertaking such an effort. We are all aware of the disruption a reorganization inflicts upon employees, clients, and business objectives, and of the inherent risk in undertaking one; despite this, these actions are often necessary. Our intent is to detail for you, the reader, what was done and how it was done, and to close with a retrospective on what was successful and what requires additional improvement effort. WebMD Health Services is a rapidly changing, dynamic organization; remaining static and dormant is something we adamantly oppose, both in our culture and as a trait in the employees we hire.
The authors of this paper are all members of a team within the Integrations department, under the Technology umbrella at WebMD Health Services. We serve customers in the health care vertical.
Aaron Akzin is a Quality Assurance Analyst, Shelley Blouin is a Manager of Software Development, Kenny Tran is an Integrations Developer, and Supriya Joshi, Sudha Sudunagunta, and Aaron Medina are Quality Assurance Analysts on the team.
Collectively, we have decades of experience in technology across all of these roles. We enjoy our interactions in this new structure and are extremely interested in sharing our experience of the successful transition.
|T-50||Design For Delight Applied to Software Process Improvement|
John Ruberto, Intuit, Inc.
New product designers use a variety of techniques, blending art and science, to design the latest gadgets. Laptops, cell phones, kitchen utensils, and automobile dashboards are all examples of products that have benefited from the design process. My company uses a methodology called Design for Delight to create new services and offerings for our customers.
These creative methods also work for software process improvement. This paper shows how our team applied Design for Delight (D4D) to software process improvement. The paper provides an overview of Design for Delight, tells the story of how we applied it to improve a key software process, and describes the benefits and limitations of using Design for Delight for process improvement.
Our experience shows that D4D works well to identify customers of a process, work with them to learn the pain points, and identify improvements that focus on solving these pain points. The team is engaged and brings lots of creativity to problem solving.
On the other hand, we experienced several limitations of using this methodology for Software Process Improvement; for example, the results were highly dependent on the individuals selected as representative customers.
With these limitations in mind, our experience is that using product design techniques to improve software processes is a useful practice for applying creativity and innovation in software process improvement.
John Ruberto has been developing software in a variety of roles for 25 years. Currently, he is the Quality Leader for QuickBooks Online, a web application that helps small business owners manage their finances. John has a B.S. in Computer and Electrical Engineering from Purdue University, an M.S. in Computer Science from Washington University, and an MBA from San Jose State University.
|T-73||Releasing Software (How Do You Know When You are Done?)|
Doug Whitney, McAfee
Releasing software: how do you know when you are done? Many items can be added to a release checklist, but the last of them may take so long that it delays shipment. Did you do the legal check? Have you completed an internal deployment? Release criteria apply to project management, program management, development, quality assurance, publications, localization, and support. Release work happens throughout the entire lifecycle and should not wait until the project moves into the final testing phases. There are also methodologies that go beyond the release itself, such as internal deployments, phased rollout of the release to a small set of customers, and product supportability. There are many more questions, and we will never get it perfect, but the aim of this paper is to help you find out what you can do to enhance your release process.
Doug Whitney is a development manager for one engineering team and program manager for four other teams at McAfee. He has presented papers at PNSQC and Quality Week on various topics. He has 19 years of engineering management experience and has managed QA teams at both McAfee and Intel.
|T-4||Inspiring, Enabling and Driving Quality Improvement|
Jim Sartain, McAfee
This paper discusses some approaches used at McAfee, Adobe Systems, and Intuit to drive continuous, significant quality and engineering process improvement through the adoption of engineering best practices including Team Software Process (TSP), Scrum, peer reviews, unit testing, and static code analysis. It covers what has worked well and some of the challenges encountered, and outlines key success factors and an overall methodology.
Jim Sartain is a Senior Vice President responsible for Worldwide Product Quality at McAfee, the world’s largest dedicated security software company. He leads a team responsible for inspiring, driving, and enabling continuous quality improvement across McAfee worldwide. Prior to joining McAfee, Jim held quality and engineering process improvement positions at both Adobe Systems and Intuit, where he helped drive significant quality improvements in products such as Acrobat, Photoshop, and QuickBooks. Jim also worked at Hewlett-Packard for seventeen years in a variety of software engineering and product development roles; his last job at HP was as CTO/CIO for an airline reservation systems business that serviced low-cost airlines including JetBlue and RyanAir. Jim received a bachelor’s degree in computer science and psychology from the University of Oregon and an M.S. degree in management of technology from Walden University.
|T-28||Software Technology Readiness for the Smart Grid|
Cristina Tugurlan, PNNL
Harold Kirkham, PNNL
David Chassin, PNNL
Budget and schedule overruns in product development due to the use of immature technologies constitute an important matter for program managers. Moreover, unexpected lack of technology maturity is also a problem for buyers. Both managers and buyers would benefit from an unbiased measure of technology maturity. This paper presents the use of a software maturity metric called Technology Readiness Level (TRL), in the milieu of the smart grid. (The smart grid adds increasing levels of communication and control to the electricity grid.) For most of the time they have been in existence, power utilities have been protected monopolies, guaranteed a return on investment on anything they could justify adding to the rate base. Such a situation did not encourage innovation, and instead led to widespread risk-avoidance behavior in many utilities. The situation changed at the end of the last century, with a series of regulatory measures, beginning with the Public Utility Regulatory Policy Act of 1978. However, some bad experiences have actually served to strengthen the resistance to innovation by some utilities. Some aspects of the smart grid, such as the addition of computer-based control to the power system, face an uphill battle. Consequently, the addition of TRLs to the decision-making process for smart grid power-system projects might lead to an environment of more confident adoption.
Cristina Tugurlan joined Pacific Northwest National Laboratory as a software test engineer in August 2010. Before joining PNNL, she held a software engineering position at IBM in Oregon. She is responsible for the software testing and validation of projects modeling wind generation integration, power system operation, and the smart grid. She has a Ph.D. in Applied Mathematics from Louisiana State University, Baton Rouge, LA.
Harold Kirkham received the PhD degree from Drexel University, Philadelphia, PA, in 1973, and then joined American Electric Power, where he was responsible for the instrumentation at the Ultra-High Voltage research station. He was at the Jet Propulsion Laboratory (JPL), Pasadena, CA, from 1979 until 2009, in a variety of positions. In 2009 he joined the Pacific Northwest National Laboratory, where he is now engaged in research on power systems. His research interests include both power and measurements.
David Chassin has been at Pacific Northwest National Laboratory for 19 years and has more than 25 years of experience in the research and development of computer applications software for the architecture, engineering, and construction industry. His research focuses on non-linear system dynamics, high-performance simulation and modeling of energy systems, controls, and diagnostics. He is the principal investigator and project manager of DOE’s SmartGrid simulation environment, called GridLAB-D, and was the architect of the Olympic Peninsula SmartGrid Demonstration’s real-time pricing system.
|T-56||Watch Your STEP!|
Prabu Chelladurai, Polaris Software Lab Canada
Timely delivery of high-quality software within budget is no longer a nicety but a necessity. Software testing, with a significant stake in enabling it, compels enterprises to focus on its improvement. However, the volatile nature of today’s enterprises, buffeted by factors like recession, attrition, technology change, competition, and diverse cultures, impedes any improvement effort and makes both the course and the post-implementation phase of improvement feel more fragile than agile.
This paper details a framework named STEP (Software Test Enhancement Paradigm) that provides ample, adoptable guidance for improving and fortifying software testing. This proven framework has been enriched with best practices from industry-standard frameworks like CMMI, TMM, TMMi, and TPI. The paper also outlines nine proven, cost-effective solutions based on technology and software process (with a case study) to remove fragility and induce agility in the journey of software test improvement. These aspects, blended with STEP’s unique ability to accommodate two flavors of test process improvement (staged and continuous) and its granularly calibrated measurement of maturity, enable easier adoption, focused improvement, and quicker realization of ROI, thereby guaranteeing delivery of quality!
Prabu Chelladurai is a Senior Project Manager at Polaris Software Lab Canada Inc. and is involved in software test management and process improvement for Polaris and its clients. He has managed large testing initiatives, calibrated software processes practiced by Polaris’ clients, and implemented STEP/TMM practices for the company’s testing division, Polaris Application Certification Enterprise (PACE). He is a certified practitioner of CMMI, ITIL, and Function Points (CFPS). Prabu has a Master’s degree in Software Engineering from Carnegie Mellon University (CMU, Pittsburgh). While at CMU, he researched and implemented the best practices of Extreme Programming and Scrum in various software projects.
|T-88||Managing the Deluge of Third-Party Devices and Apps in the Enterprise|
Anil Parambath, CSS Corp
Bryan Segale, DeviceAnywhere
Personal mobile devices are creeping further and further into enterprise networks, in some cases as secondary devices alongside enterprise-owned mobile products but, in many instances, as a result of the growing trend of allowing users to purchase and own their own mobile devices and leverage them for business productivity (and enterprise cost savings). The question becomes: how do enterprise IT departments manage all these mobile devices and their associated applications, when the devices span a broad spectrum of manufacturers, operating systems, platforms, and carrier networks?
Currently, Apple iOS alone has over 200,000 applications in its App Store, and Google Android recently announced that it has crossed the 100,000-application barrier. What this means for IT managers is an increasingly complex management challenge, in which they have to ensure not only network and data security, but also the deployment and management of enterprise applications across hundreds or even thousands of mobile devices.
This session will discuss best practices for addressing the explosion of devices and applications, both personal and business, in the enterprise environment, including the integration of mobile testing and QA practices to minimize the risks and management challenges associated with this mobile phenomenon, ultimately allowing IT managers to take back ownership of the mobility arena in an efficient manner.
Anil Parambath is the Vice President of the Application Technology Practice at CSS Corp. A career technologist for nearly 15 years, Anil has been working at CSS Corp for the past ten years. His passion is developing business solutions that simplify and leverage next-generation technologies. He is currently focused on exploring the convergence of cloud and mobile technologies, developing solutions that leverage the reach of mobility and the on-demand cloud. Prior to joining CSS, Anil was part of the Internet banking technology group at Citibank, where he was on the team that developed the Internet banking concept 15 years ago. He also had a stint as an entrepreneur, developing solutions for small businesses in third-world countries. He holds a Master’s in Business Administration and a Bachelor’s in Electronics Engineering.
Bryan Segale is DeviceAnywhere’s VP of Solutions, responsible for professional services and customer support. In this consultative role, he has had the opportunity to work alongside all types of customers in the mobile space, helping them design and implement manual and automated testing strategies that get higher-quality applications to market faster. Along the way, he has been exposed to many mobile technologies and many different types of customers, and has gained a real understanding of the challenges of developing, testing, and supporting mobile applications. Bryan has over 12 years of test automation consulting experience, with 6 of those years focused on mobile.
|T-74||Reverse Engineering: Vulnerabilities and Solutions|
Barbara Frederiksen-Cross, Johnson-Laird Inc.
Susan Courtney, Johnson-Laird Inc.
The same characteristics that provide for cross-platform deployment in many modern software development languages also render software written in these languages extremely vulnerable to reverse engineering. At the same time, reverse engineering tools and techniques have become much more sophisticated. The convergence of these two developments creates substantial risk for software developers, with respect both to the security of their software and to the protection of the trade secrets and intellectual property embodied in it.
Fortunately, the risks that reverse engineering poses to your intellectual property, competitive edge, and bottom line can be mitigated if you take proactive measures to protect your software against reverse engineering.
This paper first examines the ways in which software is vulnerable to reverse engineering and then explains techniques that may be incorporated into your software quality program to help protect your software assets against reverse engineering. The paper also discusses factors that must be considered and weighed when deciding which anti-reversing techniques to apply.
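The paper's specific techniques are its own; as a generic example of the genre (not necessarily one the paper recommends), one widely cited measure is to keep sensitive literals out of the binary's plain-text string table:

    #include <cstddef>
    #include <iostream>
    #include <string>

    // Store a sensitive literal XOR-encoded and decode it at runtime. This
    // only raises the bar against casual `strings`-style inspection of the
    // binary; it is obfuscation, not encryption.
    std::string Decode(const unsigned char* data, std::size_t len,
                       unsigned char key) {
        std::string out(len, '\0');
        for (std::size_t i = 0; i < len; ++i)
            out[i] = static_cast<char>(data[i] ^ key);
        return out;
    }

    int main() {
        // "license-check" XOR 0x5A, precomputed offline so the plain text
        // never appears in the source or in the compiled binary.
        static const unsigned char kEncoded[] = {
            0x36, 0x33, 0x39, 0x3F, 0x34, 0x29, 0x3F,
            0x77, 0x39, 0x32, 0x3F, 0x39, 0x31};
        std::cout << Decode(kEncoded, sizeof(kEncoded), 0x5A) << "\n";
    }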
Barbara Frederiksen-Cross is the Senior Managing Consultant for Johnson-Laird, Inc., in Portland, Oregon.
|T-64||Dirty Tricks in the Name of Quality|
Ian Dees, Tektronix
We join software projects with grand ideas of tools, techniques, and processes we’d like to try. But we don’t write code in a vacuum. Except on the rare occasions when we’re starting from scratch, we’re confronted with legacy code we may not understand and team members who have been quite productive for years without the silver bullets we’re pushing.
How do we get a toehold on a mountain of untested code? How can we get our software to succeed despite itself? Sometimes, we have to get our hands dirty. We may have to break code to fix it again. We may have to put ungainly scaffolding in place to hold the structure together long enough to finish construction. We may have to look to seemingly unrelated languages and communities for inspiration.
This introductory-level talk is a discussion of counterintuitive actions that can help improve software quality. We’ll begin with source code, zoom out to project organization, and finally consider our personal roles as contributors to quality.
Ian Dees was first bitten by the programming bug in 1986 on a Timex Sinclair 1000, and has been having a blast in his software apprenticeship ever since.
Since escaping Rice University in 1996 with engineering and German degrees, he has debugged assembler with an oscilloscope, written web applications nestled comfortably in high-level frameworks, and seen everything in between. He currently hacks embedded C++ application code, automates laboratory hardware, and writes test scripts for Tektronix, a test equipment manufacturer near Portland, Oregon.
When he’s not coding for work or for friends, you’re most likely to find Ian chasing his family around on bicycles, plinking away at his guitar, or puzzling at the knobs on the espresso machine while some impromptu meal simmers on the stove nearby.
Ian is the author of Scripted GUI Testing With Ruby and co-author of Using JRuby, both published by the Pragmatic Programmers.
|T-86||Delivering Quality One Weekend at a Time: Lessons Learned from Weekend Testing|
Michael Larsen, Sidereel.com
For many testers, the steps taken to learn and grow our craft are often ad-hoc and random. We learn from books, we ask questions and receive answers on online forums and web sites, we attend conferences and confer with peers, and we may talk to fellow testers at work. But what if those resources are not enough?
What if we had an organization that testers could turn to, where they could practice the craft of testing and learn from mentors from all over the world? What if such a group tackled interesting problems and encouraged both novices and experienced testers to participate? What if I said such a group was available completely free of cost to participants? Sound like fantasy? It’s not; it’s happening here and now. The group that does this is called “Weekend Testing”. While the concept of Weekend Testing has been around for a few years, the power is in the methods and principles themselves, not the name or even the people behind it.
This paper explains the methods we use in Weekend Testing, and how you can participate in the sessions that Weekend Testing chapters arrange. More importantly, even if you never participate in an official Weekend Testing session, you can apply the concepts and methods within your own group or organization. I will explain how to facilitate a Weekend Testing session, share some lessons learned from dozens of sessions over the past couple of years, and show you how to take these same ideas and use them in your own organization.
Michael Larsen is a Senior Tester at Sidereel.com in San Francisco, California. Over the past seventeen years, he has been involved in software testing for products ranging from network routers and switches to virtual machines, capacitance touch devices, video games, and distributed database applications that serve the legal and entertainment industries.
Michael is a co-founder and facilitator of the Americas chapter of Weekend Testing, an instructor with the Association for Software Testing, a Black Belt in the Miagi-Do School of Software Testing, and the producer of the Software Test Professionals podcast “This Week in Software Testing”. He also contributed the chapter “Trading Money for Time” to the book “How to Reduce the Cost of Software Testing”, scheduled for release in October 2011. Michael writes the TESTHEAD blog (http://mkltesthead.blogspot.com) and can be found on Twitter at @mkltesthead.
|T-25||Volunteer Armies Can Deliver Quality Too: Achieving a Successful Result in Open Source, Standards Organizations, and Other Volunteer Projects|
Julie Fleischer, Intel
Delivering quality can be straightforward in an organization that creates multiple products and has a series of time-tested processes and procedures in place to guarantee success. However, at some point in your career, you may be asked to deliver a quality result with a team that has self-defined or loosely defined processes, includes volunteers whose “day job” is not your project, and relies on a consensus process among individuals from diverse, and sometimes competing, backgrounds and agendas to make decisions. This paper is written by a Program Manager and QA Lead who has spent the better part of the past decade delivering quality test and certification programs, specifications, and projects in various open source programs and standards development organizations. We’ll cover tips and techniques for achieving success in those volunteer-based environments.
Julie Fleischer has worked at Intel for the past thirteen years and has held a variety of roles in program/project management, QA leadership, and software development. She is currently a Technical Program Manager for Linux driver development and also briefly held a position as the Technical Program Manager for the Yocto Project (www.yoctoproject.org), an open source project for creating embedded Linux distributions. Prior to those roles, she worked as the Chair of the Test and Certification Working Group for Continua Health Alliance, where she led the team to define and implement a certification program for v1.0 and v1.5 of the Continua Design Guidelines. She also chaired the team that created the USB Personal Healthcare Device Class standard and project-managed the creation of the v1.0 Continua Design Guidelines. She has spent over a decade in open source and standards organizations and has led many “volunteer armies” to successful results.
Julie received her BS and MS in Computer Science from Case Western Reserve University and has presented at PNSQC in the past on technical and interpersonal topics.
|T-26||Sabotaging Quality: Sacrificing Long-term for Short-term Goals|
Joy Shafer, Quardev
From the very beginning of the project, your team has had discussions about quality. You have unanimously agreed it’s a top priority. Everyone wants to deliver a product of which they can be proud. Six months later, you find yourself neck-deep in bugs and fifty percent behind schedule. The project team decides to defer fixing half of the bugs to a future release. What happened to making quality a priority?
One of your partners is discontinuing support for a product version on which your online service is dependent. You have known this was coming for years; you are actually four releases behind your partner’s current version. The upgrade of your online service has been put off repeatedly. Now the team is scrambling to get the needed changes done before your service is brought down by the drop in support. You are implementing the minimum number of features required to support a newer version. In fact, you’re not even moving to the most current version—it was deemed too difficult and time-consuming to tackle at this point. You are still going to be a release behind. Are you ever going to catch up? Is minimal implementation always going to be the norm? Where is your focus on quality?
Do these scenarios sound familiar? Why is it often so difficult to efficiently deliver a high-quality product? What circumstances sabotage our best intentions for quality? And, more importantly, how can we deliver quality products in spite of these saboteurs?
One of the most common and insidious culprits is the habit of sacrificing long-term goals for short-term goals. This can lead to myriad long-standing issues on a project. It is also one of the most difficult problems to eradicate. There are other saboteurs: competing priorities, resource deprivation, dysfunctional team dynamics, and misplaced reward systems, to name a few. In this paper I’ll focus on the quality saboteur of sacrificing long-term goals for short-term goals.
I will discuss the benefits that can be gained when you make the right long-term investments and the types of problems you’ll see if your team is solely pursuing short-term goals. I will show you practical strategies for slaying this saboteur, or at least mitigating its effects.
Joy Shafer is currently a Consulting Test Lead at Quardev on assignment at Washington Dental Service. She has been a software test professional for almost twenty years and has managed testing and testers at diverse companies, including Microsoft, NetManage and STLabs. She has also consulted and provided training in the area of software testing methodology for many years. Joy is an active participant in community QA groups. She holds an MBA in International Business from Stern Graduate School of Business (NYU). For fun she participates in King County Search and Rescue efforts and writes Sci-Fi and Fantasy.
|T-37||Increasing Software Quality with Agile Experiences in a Non-Technically-Focused Organization|
Aaron Hockley, Multnomah County, Oregon
Agile development methodologies are well documented, but most of the textbook examples and anecdotes found on the Internet describe the use of agile methodologies in technically focused organizations such as software development contractors, retail software companies, or hardware/systems organizations with software teams that support the company’s technical products.
Multnomah County (Oregon) is not such an organization. The county’s business consists of providing services such as health services, jails, taxation, licensing, animal control, and managing individuals on parole and probation. A few small software teams build and maintain tools that support the county’s business, but the organization is decidedly focused on non-technical ventures. In a non-technically-focused organization, the role of the product owner (internal customer) becomes challenging: participation in software development competes with their other (usual) job activities, and product owners are often unfamiliar with the software development process.
Over the past three years, a variety of agile practices have been introduced at Multnomah County; lessons learned by the county’s Public Safety Development Team have resulted in software that better meets the customer’s needs. Experience has shown that quality improves when the development team has frequent access to business personnel even though the ideal co-located customer scenario cannot be achieved. Communication with the business product owners is key; the county’s software teams tried a variety of agile work tracking and communications systems before finding one that works well for all parties. No tool is perfect. The team concluded that given the challenges of customer time and participation, the tool which provides the best customer experience is probably the best tool.
Experience demonstrated that it wasn’t feasible to use a textbook Scrum approach due to customer availability challenges. Project teams and customers eventually settled into a development process that’s a hybrid between a Scrum approach and a Kanban-style system. This nimble development cycle is working well for all involved parties.
Better software means that core business services are better delivered. Software that supports the staff providing day-to-day county services is important; a nimble development approach enables the creation of applications that best meet staff and resident needs.
Aaron Hockley is a QA development analyst with the Public Safety software team at Multnomah County in Portland, Oregon. He is actively involved in both day-to-day software testing (both manual and automated) as well as process improvement for software development in the public safety business areas. Over the past ten years, he has worked for a variety of software and technology companies with a
|T-3||Unusual Testing; Lessons Learned from Being a Casualty Simulation Victim|
Nathalie Rooseboom de Vries van Delft, Capgemini
Hobbies can be an inspiration for many analogies in software and system testing, but it can also work the other way around. I have been a so-called casualty simulation victim for a couple of years now, playing a patient in hospitals, a victim who needs help from a first aider (in both first-aid lessons and ambulance training), and a casualty at disaster re-enactments. I used my knowledge of the software testing process to become better and more structured in my casualty simulation work. In return, I gathered a whole bunch of tips and lessons learned that I could use in my job as a software tester. Many lessons are particularly useful for software testing, but there are also lessons that are beneficial for all other disciplines in software and system development.
Nathalie Rooseboom de Vries van Delft is the Community of Practice leader for Testing, a CTO office advisor, and a Managing Consultant at Capgemini Netherlands, responsible for thought leadership and testing competence development. She fulfills the roles of test manager and advisor with various clients. She speaks at national and international test events on a regular basis, writes for specialist publications, and participates in the Dutch Standardization Body (NEN) workgroup for software and system development. She is very passionate about (software) testing in general, but her favorite subjects are data warehouse testing, E2E testing, standardization, ethics/philosophy, and test architecture (frameworks).
|T-31||Kanban – What Is It and Why Should I Care?|
Landon Reese, Hewlett Packard
Kathy Iberle, Hewlett Packard
Kanban is gaining popularity in the software development world and deserves to be considered as a means to manage software development. Kanban is a lightweight agile model that provides visibility into work in process, the capacity of a given resource pool, and the current workflow. The Core Test Strategy Lab at Hewlett Packard has adopted Kanban and has seen concrete evidence of its efficacy. Drawing on our experience in the Core Test Strategy Lab, this paper introduces Kanban and describes how to apply it to software development work.
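As a rough illustration of the mechanic that makes Kanban work (the column names, WIP limits, and work items below are hypothetical examples, not details from the paper), here is a minimal sketch of a board that enforces work-in-process limits; it is the limit that makes capacity and bottlenecks visible:

```python
from collections import deque

class KanbanBoard:
    """Toy Kanban board with per-column work-in-process (WIP) limits.

    Illustrative sketch only; the columns and limits are hypothetical,
    not the process used by the Core Test Strategy Lab.
    """

    def __init__(self, wip_limits):
        # e.g. {"todo": None, "in_progress": 2, "done": None}; None = unlimited
        self.wip_limits = wip_limits
        self.columns = {name: deque() for name in wip_limits}

    def add(self, column, item):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(f"WIP limit ({limit}) reached for '{column}'")
        self.columns[column].append(item)

    def pull(self, source, target):
        # Work is *pulled* into the next column only when capacity exists;
        # a refused pull is the visible signal of an overloaded resource.
        self.add(target, self.columns[source].popleft())

board = KanbanBoard({"todo": None, "in_progress": 2, "done": None})
for task in ["fix build script", "write perf test", "review results"]:
    board.add("todo", task)
board.pull("todo", "in_progress")
board.pull("todo", "in_progress")
# A third pull would raise an error: the queue backs up in "todo",
# making the bottleneck immediately visible.
print({name: list(items) for name, items in board.columns.items()})
```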
Landon Reese is currently a project manager at Hewlett Packard in the Core Test Strategy Lab. In his current role he has implemented a Kanban process to manage tool development and the build of a data center at the Boise site. Prior to this role, Landon spent two years as a firmware engineer on enterprise-class laser printers, responsible for scan ASIC turn-on and testing of the scanner interface.
Landon has a B.S. in Electrical Engineering from Santa Clara University.
Kathy Iberle is a senior software quality engineer at Hewlett-Packard, currently working at the HP site in Boise, Idaho. Over the past twenty-five years, she has been involved in software development and testing for products ranging from medical test result management systems to printer drivers to Internet applications. Kathy has worked extensively on training new test engineers, researching appropriate development and test methodologies for different situations, and developing processes for effective and efficient requirements management and software testing.
Kathy has an M.S. in Computer Science from the University of Washington and an excessive collection of degrees in Chemistry from the University of Washington and the University of Michigan.
|T-6||Audit Effectiveness – Assuring Customer Satisfaction|
Diane Clegg, Freescale Semiconductor Inc
Simon Lang, Freescale Semiconductor Inc
Jeff Fiebrich, Freescale Semiconductor Inc
Whether a company is large or small, audits are important factors in continually improving business practices. Ultimately, improved business practices assure customer satisfaction and drive project effectiveness. Audits can be used to identify best practices to be shared across the company, as well as areas needing improvement. Good planning is imperative to a successful audit. Audit planning requires not only creating a schedule, but also ensuring the appropriate people are available to be audited. Preparation is the key element in planning an effective and efficient audit; it includes scheduling, recruiting the team, training, and kick-off meetings. Next, determine the skill set needed to fill the auditor role, which is critical to preparing auditees, reporting properly, avoiding audit challenges, and, if necessary, managing third-party audits. No matter what combination of audits is right for your organization, planning and monitoring of the system is key. Drive for an understanding of the value your overall audit program brings to your organization and your customers. Align your Key Process Indicators with your company’s strategies, track them, change things when necessary, and, above all, maintain the integrity of your audit program.
Jeff Fiebrich is a Software Quality Manager for Freescale Semiconductor Inc. He is a member of the American Society for Quality (ASQ), has received ASQ certifications in Quality Auditing and Software Quality Engineering, and is a RABQSA International Certified Lead Auditor. A graduate of Texas State University with a degree in Computer Science and Mathematics, he served on the University of Texas Software Quality Institute subcommittee in 2003–2004. He has addressed national and international audiences on topics from software development to process modeling. Jeff is the co-author of the book ‘Collaborative Process Improvement’ (Wiley-IEEE Press, 2007).
Simon Lang is a Software Quality Manager for Freescale Semiconductor. Simon has 10+ years of experience in the high-tech industry. He has most recently led various improvement efforts for one of the software organizations using CMMI methodologies and Lean tools. Simon has degrees in business management and is a certified internal auditor, a Lean facilitator, and a Six Sigma Green Belt.
Diane Clegg is a Quality Systems Engineer for Freescale Semiconductor, Inc. She is an American Society for Quality certified Internal Quality Management System (QMS) Auditor. With fifteen years of experience in the semiconductor industry, Diane served as the Project Manager for the Freescale Document Management System (DMS) project implemented via SAP.
|T-87||How to Deal with an Abrasive Boss (aka “Bully”)|
Pam Rechel, Brave Heart Consulting
Context: There are bosses who are not as competent as we would like, some who are annoying but tolerable, and some who are abrasive. Abrasive bosses (sometimes known as “bullies”) behave in such a negative manner that the workplace is impacted: motivation is low or nonexistent, results begin to slide downward, and employees hunker down to stay out of the way or begin their exit strategy. Employees, who have no authority over the boss, are often so afraid of the abrasive boss, or so wary of triggering a reaction, that nothing is said or done.
Why isn’t the situation hopeless? In this session we’ll discuss why people behave in abrasive ways, which strategies can work to deal with them, and which strategies to avoid.
The goal of this session is to present practical steps for dealing with an abrasive boss.
Why is this important? It’s important to take control of your work life and to be able to change how you approach someone whose behavior is unacceptable. The skill of communicating with difficult individuals is rarely taught by our parents or schools, yet it is a key competency for surviving and thriving at work.
Summary: By the end of the session, participants will learn two strategies for successfully dealing with workplace bullies, two strategies to avoid, and what is in it for them when they develop these skills.
Pam Rechel is an executive coach and organization consultant with Brave Heart Consulting in Portland specializing in coaching abrasive managers and helping busy teams increase accountability and results. She has an M.A. in Coaching and Consulting in Organizations from the Leadership Institute of Seattle (LIOS), an M.B.A. in Information Systems from George Washington University and an M.S. from Syracuse University and is a Newfield Network certified coach.
|T-66||Before We Can Test, We Must Talk: Creating a Culture of Learning Within Quality Teams|
Kristina Sitarski, Menlo Innovations
Learning is a process you do, not a process that is done to you. Before the heated discussion about which automated functional testing tool to lobby for, or the fun game of ‘who can cause the exception first,’ we must learn how to communicate with one another. In my three years working in a paired quality assurance environment, I have been partnered with several different people. Although my partners have all been problem-solving software destructors like myself, they have had very distinct learning styles. This brings me to a truth sometimes overlooked: to teach is to learn. To help teach not only my partner, but also customers and our Agile shop-tour attendees, I look to Neil Fleming’s model of learning styles. Using this model as a guideline, along with some important items ranging from jellybeans to yarn, I have improved communication on concepts such as functional test writing, object modeling, and estimation.
Understanding Mr. Fleming’s model is important, but it is imperative to stay constantly aware of the audience you are communicating with and to be unafraid to create your own teaching style. To deliver quality communication, sometimes about quality itself, we must learn how to effectively teach and learn from one another.
Kristina Sitarski has worked at Menlo Innovations as a quality advocate since 2008. Before she became a quality advocate, she worked in quality assurance for Xerox. Currently at Menlo, she brings her experience, as well as a degree in anthropology, to provide an objective, user-centric view of the design and implementation of software tests. She enjoys working and living in the city of Ann Arbor, Michigan.
|T-67||The Ladder of Unmanaged Conflict|
Jean Richardson, Azure Gate Consulting
Because building software is a social process, navigating conflict in the software development process occupies a large part of team members’ time. Dealing with conflict seems perennially scary, yet NOT dealing with conflict has a very high cost that is often unacknowledged; it can destroy teams and entire organizations. Using proven research and anecdotes drawn from the speaker’s experience, this paper examines how unmanaged conflict escalates and what attendees can take away to address it on their own teams.
A software development professional since 1989, Jean Richardson is experienced in adaptive and predictive project management, writing, training, public speaking, and requirements and business analysis. She holds a B.A. in English with minors in economics and administrative systems management and is completing an M.A. in Organizational Communication in Winter 2011. Her master’s thesis is titled In Your Own Hands: Personal Integrity and the Individual’s Experience of Work Life and focuses on personal integrity through the lens of agile methods, specifically Scrum. She holds PMP, CSM, CSPO, and ITIL certifications, has met the Oregon Department of Justice standard for court-based mediators, and has been mediating in the court system for over 10 years.
As a consultant, her client list boasts a wide range of businesses including ADP, Chrome Systems, CoreLogic, Intel, Freightliner, Kaiser Permanente, Kryptiq Corporation, Mentor Graphics, Oregon Health Authority, The Regence Group, Tripwire, and US Bank. EPHT GEORGE, a project Jean managed for the Public Health Division of the Oregon State Department of Human Services, won the Project Management Institute 2009 Project of the Year Award for the Portland Chapter.
|T-61||Learning Software Engineering Online|
Kal Toth
It is apparent that software practitioners seeking advanced education and professional development demand programs that blend with their obligations at work and at home as well as yield immediate benefits. Ideally, software engineers should be able to integrate their learning with how they work. Acculturated by their social and business networks, they would prefer to use tools and interfaces that are similar to what they already use. The learning processes and tools they are expected to use when learning should blend with their day-to-day processes to the extent possible.
Offering classroom courses in the edge hours – evenings and weekends – is an approach that meets the needs of many working software professionals seeking advanced education. However, the time such practitioners can devote to the campus commute to attend a few hours of classes is rapidly shrinking. Learning by way of online methods and tools has therefore become increasingly popular over the last several years. E-learning over the web offers a great deal of flexibility compared with traditional learning modes. Several variants and hybrids of these models can be, and have been, implemented, each offering distinct benefits but also limitations.
My experiences to date suggest that better, more cost-effective, and more relevant learning can be achieved by hybridizing face-to-face and online learning modes. For example, hybrid learning can increase the depth of engagement through discussion forums, provide access for dispersed learners, and also satisfy the demand for a modicum of face-time for those students who are prepared to attend classes on campus. However, “one size does not appear to fit all” – for example, discussion forums appear to be less effective for programming courses, while demonstration videos walking through coding and testing samples seem to be very effective. And online teleconferencing mechanisms supporting shared visual space and jointly authored documents are very effective for work group activities – much like those experienced on-the-job.
This paper describes my experiences with, and the evolution of, various learning models and hybrids that I have used to blend critical elements of traditional face-to-face and online learning approaches. A spin-off benefit of e-learning is that classrooms are freed up for other purposes, reducing the demand for costly physical facilities. I explore several blended learning model variants that I have used to teach software engineering courses, highlight the observed benefits and limitations of these learning models and their support tools, and raise several outstanding questions and issues for further consideration.
Kal Toth holds a Ph.D. from Carleton University, Ottawa, Canada and is a registered professional engineer in British Columbia with a software engineering designation. He has worked for a range of companies and universities including Hughes Aircraft, CGI Group, Intellitech, Datalink Systems Corp., Portland State University, Oregon State University, and the Technical University of British Columbia. His areas of specialization and R&D interest include software engineering, project management, information security, and identity management.
|T-47||Hard Lessons About Soft Skills – Understanding the Psyche of the Software Tester|
Marlena Compton, Mozilla
Testers are often characterized as the conscience of a project. This session will dive deeply, and uncomfortably, into the psyche of software testing and of those of us who claim it as our profession, examining attitudes that are deeply embedded in our testing culture and how effective they really are.
The positive role that emotions can play in testing is also discussed, and crucial conversations are introduced to help testers understand how to argue their point without bullying. This presentation will show why testers must act with conscience and have emotional fluency in order to test software effectively. We will show that, despite being erroneously called soft skills, the effective care and handling of emotions is the hardest skill in testing.
Marlena Compton has been testing software since 2007 and has an innate talent for making developers feel angry, inadequate and ashamed. After noticing that happy developers make better software, she embarked on a journey to learn how to be an effective tester and help the developers feel supported, at the same time. This led her to research psychology and communication skills as they apply to testing with friend and licensed counselor Gordon Shippey.
Gordon Shippey has completed degrees at Emory University, the Georgia Institute of Technology, and Argosy University. A Licensed Associate Professional Counselor in the state of Georgia, Gordon provides psychotherapy to individuals, couples, and families. Gordon is particularly interested in work/life balance and organizational psychology.
|T-69||Building a Culture Around Quality|
Tracy Beeson, Menlo Innovations
Have you ever said to yourself, “I want unhappy people working on my team”? Of course not, but why?
Have you ever said to yourself, “I want someone on my team who is not invested in the work we do”? Of course not, but why?
Certainly an unhappy person can work hard, and just because a person is not as invested in the project doesn’t mean they can’t add value. Yet despite that value, the prospect somehow leaves us with an uneasy feeling. Why?
A person who is neither happy nor invested is a person unable to deliver quality work at a long-term sustainable pace.
This experience report is about how our team built a culture centered on delivering exceptional quality by fostering trust, respect, and joy in the workplace. It demonstrates how our team has incorporated the teachings of Dr. Deming, Patrick Lencioni’s “The Five Dysfunctions of a Team”, Peter Senge’s “The Fifth Discipline: The Art and Practice of the Learning Organization”, and others into everything that we do, because we believe that the culture of an organization has a direct effect on the quality of its deliverables.
Tracy Beeson is a senior quality advocate at Menlo Innovations in Ann Arbor, Michigan. Over the past ten years, Tracy has worked in the quality assurance field, spending the last five helping to build and integrate the quality assurance process into the agile process at Menlo. She has written and taught Menlo’s “Building Agile Quality” class and has presented at several conferences, including the Agile 2008 Conference in Toronto and the PMI Global Congress 2009 in Orlando, FL.
Tracy has a bachelor of arts degree from Middlebury College in Middlebury, Vermont where she majored in Computer Science and earned her secondary teaching license in Math and Computer Science.
|T-40||Delivering Quality in Software Performance and Scalability Testing|
Khun Ban, Intel
Robert Scott, Intel
Kingsum Chow, Intel
Huijun Yan, Intel
We are all witnessing the amazing growth of today’s computing industry. Servers are many times faster and smarter than those of just a few years ago. The need to ensure that applications run well with large numbers of users on the Internet continues, and it is getting more attention as the move to cloud computing gains momentum. To address the question of whether a software system can perform under load, we look to performance and load testing. Performance testing is executed to determine how fast a system performs, typically using a particular workload; it can also serve to verify other quality attributes, such as scalability, reliability, and resource usage. Load testing is primarily concerned with a system’s ability to operate under high load, such as large quantities of data or a large number of users. We use performance and load testing together to evaluate the scalability of a software system.
In this paper, we characterize the challenges in conducting application performance tests and describe a systematic approach to overcoming them. We present a step-by-step, data-driven approach to identifying performance and scalability issues and ensuring software quality through performance and scalability testing. The approach includes how to analyze multiple pieces of data from multiple sources, and how to determine the configuration changes needed to successfully complete stress tests for scalability. The reader will find it easy to apply our approach for immediate improvement in performance and scalability testing.
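As a rough, generic illustration of the kind of data-driven scalability check the authors describe (the workload numbers, the linear-scaling baseline, and the 80% efficiency floor below are assumptions for the example, not figures from the paper), one might compare measured throughput at increasing load against ideal linear scaling and flag the load levels where the system stops scaling:

```python
# Hypothetical sketch: flag load levels where measured throughput falls
# below a chosen fraction of ideal linear scaling. The sample data and
# the 80% threshold are illustrative assumptions, not the paper's data.

# (users, requests/sec) pairs gathered from successive load-test runs
measurements = [(10, 950), (20, 1900), (40, 3600), (80, 5200), (160, 5900)]

base_users, base_throughput = measurements[0]
EFFICIENCY_FLOOR = 0.80  # below this fraction of linear scaling, investigate

for users, throughput in measurements:
    ideal = base_throughput * (users / base_users)  # perfect linear scaling
    efficiency = throughput / ideal
    status = "OK" if efficiency >= EFFICIENCY_FLOOR else "INVESTIGATE"
    print(f"{users:4d} users: {throughput:6.0f} req/s "
          f"({efficiency:6.1%} of linear) -> {status}")
```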
Khun Ban is a staff performance engineer working with the System Software Division of the Software and Services Group at Intel. He has over ten years of enterprise software development experience. He received his B.S. degree in Computer Science and Engineering from the University of Washington in 1995.
Robert Scott is a staff performance engineer working with the System Software Division of the Software and Services Group. Bob has over ten years of experience with Intel improving the performance of enterprise class applications. Prior to Intel, he enjoyed twenty years engaged in all aspects of software development, reliability and testing, and deployment.
Kingsum Chow is a principal engineer working with the System Software Division of the Intel Software and Services Group. He has been working for Intel since receiving his Ph.D. in Computer Science and Engineering from the University of Washington in 1996. He has published more than 40 technical papers and has been issued more than 10 patents.
Huijun Yan is a senior performance engineer working with the System Software Division of the Software and Services Group. Huijun has over seven years of enterprise software performance experience. She received her M.S. degree in Computer Science from Brigham Young University in 1991.
|T-43||Perf Cells: A Case Study in Achieving Clean Performance Test Results|
Vivek Venkatachalam, Microsoft
Shirley Tan, Microsoft
Marcelo Nery dos Santos, Microsoft
A key indicator of the quality of a software product is its responsiveness and performance, as this enables users to get tasks done quickly and efficiently. Test teams realize this and typically set aside time and resources for performance testing. Unfortunately, the effort inevitably runs into the dreaded “high unexplainable variability in performance test results”. Since fixing this is hard, a typical workaround is to raise the acceptable-loss threshold so that only large performance losses trigger corrective action. However, this causes a “death-by-a-thousand-cuts” effect in which numerous small performance losses (that are all real) are allowed into the product. By the time the product is ready to ship, these small losses have accumulated and the product exhibits poor performance with no easy fix in sight.
How does one get close to the ideal of a performance test system that can reliably detect even small performance losses on a build-over-build basis while minimizing false positives? This paper describes our attempt to tackle this problem while running performance tests for Microsoft Lync.
The bulk of the variation in performance results when testing a distributed system typically comes from network traffic variations and varying load on the servers. Our approach to eliminating this variability was to design a system in which the performance test runs on a virtualized distributed system that we call a Perf Cell: essentially a single physical machine that houses all the individual components on separate virtual machines, connected via a virtual network and otherwise isolated from all external networks. In the paper, we present our design and implementation, as well as results indicating that variability is indeed reduced. We believe this paper will be useful reading for engineers responsible for designing, implementing, or running a performance engineering system.
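As a generic sketch of the kind of build-over-build check this enables (this is not the authors’ Lync harness; the latency samples and the 0.01 significance level are invented for illustration), one can compare repeated runs from two builds with Welch’s t-test; detecting such small losses reliably only becomes practical once Perf-Cell-style isolation shrinks run-to-run variance:

```python
# Hypothetical sketch of build-over-build regression detection.
# Sample data and significance level are illustrative assumptions.
from statistics import mean
from scipy.stats import ttest_ind

baseline  = [102.1, 101.8, 102.4, 101.9, 102.2]  # ms, previous build
candidate = [103.0, 102.7, 103.3, 102.9, 103.1]  # ms, new build

# Welch's t-test (does not assume equal variances); convert the
# two-sided p-value to one-sided: is the new build *slower*?
t_stat, p_two_sided = ttest_ind(candidate, baseline, equal_var=False)
p_slower = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

loss = (mean(candidate) - mean(baseline)) / mean(baseline)
if p_slower < 0.01:
    print(f"Regression: {loss:.1%} slower (p = {p_slower:.4f})")
else:
    print(f"No significant change (p = {p_slower:.4f})")
```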
Vivek Venkatachalam is a software test lead on the Microsoft Lync team. He joined Microsoft in 2003 and worked on the Messenger Server test team before moving to the Lync client team in 2007. He is passionate about working on innovative techniques to tackle software testing problems.
Marcelo Nery dos Santos is a software engineer on the Microsoft Lync test team. He has primarily worked on tools to help test performance for the Lync product. His main interests are working on innovative tools and approaches for enabling more efficient testing.
Shirley Tan is a software engineer on the Microsoft Lync test team and recently completed 5 years at Microsoft. Apart from her work on performance testing, she is interested in Model Based Testing and Code Churn Analysis.