P-123 – Implementing Lean and Six Sigma in Software Application Development
P-152 – Exploratory Testing As Competitive Sport
P-112 – Taking Ownership for Software Development
P-113 – Benchmarking for the Rest of Us
P-104 – The Challenge of Productivity Measurement
V-02 – Requirements-Driven Development: The Thread that Keeps Development Connected with Users
V-01 – Peeking inside Google’s Innovation Factory
P-118 – CyberHunters: Deep Diving into the Mind of the Test Engineer
P-140 – Pairwise Testing in the Real World: Practical Extensions to Test Case Generators
P-158 – Test Case Maps in support of Exploratory Testing
P-150 – Evolutionary Methods (Evo) at Tektronix: A Case Study
P-129 – Quantifying Software Quality – Making Informed Decisions
P-147 – Using Social Engineering to Drive Quality Upstream
P-157 – Adopting and Adapting Agile Development Practices in the Real World
P-117 – Leveraging Model-Driven Testing Practices to Improve Software Quality at Microsoft
P-101 – Accelerating Performance Testing – A Team Approach
P-165 – Software Testing in an Agile World
P-105 – Defining Test Data and Data-Centric Application Testing
P-108 – Planning for Highly Predictable Results with TSP/PSP, Six Sigma & Poka-Yoke
P-159 – Performance Testing: How to Compile, Analyze, and Report Results
P-160 – Four Behaviors that Hold Testers Back
P-161 – Step Away from the Tests: Take a Quality Break
P-153 – Know Your Code: Stay in Control of Your Open Source
P-149 – Making the Most of Community Feedback
V-03 – Achieving Tangible ROI From Your Quality Assurance Test Organization
P-102 – MAGIQ: A Simple Method to Determine the Overall Quality of a System
P-111 – Key Measurements for Testers
V-04 – Use Cases/Test Cases: Two Sides of the Same Coin
P-125 – Understanding the Imagination Factor
P-162 – Improving Test Code Quality
P-099 – Front End Requirements Traceability for Small Systems
P-146 – Smart Result Analysis: A Key Competitive Advantage
P-133 – How to Test Requirements
P-100 – Techniques That Inspired Workplace Improvement
P-093 – Insights in Real Test-Driven Development
|P-123||Implementing Lean and Six Sigma in Software Application Development|
David Anderson is the author of “Agile Management for Software Engineering,” published by Prentice Hall in 2003. He is a recognized expert in agile software development and management methodologies and in the application of management science techniques such as the Theory of Constraints, Lean, and Six Sigma to software engineering problems. David works for Microsoft as the architect for the Microsoft Solutions Framework (MSF) methodology and designed the MSF for CMMI Process Improvement process template – the first mass-market solution to bring together agile and CMMI. David is a regular conference speaker and has published widely on software development management, productivity, quality assurance, and continuous improvement.
|P-152||Exploratory Testing As Competitive Sport|
Two decades after the term “exploratory testing” was invented, testers and managers still do not understand it completely. Worse, experts consider it an art, implying that there is no way to teach it or improve one’s skill at it. This presentation will try to shatter that myth and describe specific skills and tactics that testers can use to improve their unscripted testing ability while gaining credibility and confidence.
In his ten-year career in testing, Jon Bach has led projects for many corporations, including Microsoft, before starting with Quardev – an onshore test outsourcing company in Seattle. As Manager for Corporate Intellect, he manages testing projects ranging from a few days to several months using Rapid Testing techniques (like SBTM). He is the speaker chair for Seattle’s Quality Assurance SIG, as well as a regular guest speaker for Seattle-area testing forums and universities. In 2000, Jon and his brother James created “Session-Based Test Management” – a technique for managing (and measuring) exploratory testing, for which he is a recognized expert.
|P-112||Taking Ownership for Software Development|
Software development in a team environment needs to be consciously managed to be effective, and we all need to recognize our role in this effort.
Jim Brosseau has a career spanning more than 20 years in a variety of roles and responsibilities. He has held successively more responsible positions in military and defense contracting, commercial software development, and training and consulting. He has worked in the QA role, and has acted as team lead, project manager, and director. In addition, Jim has worked with more than 60 organizations in the past 7 years with a goal of reducing business inefficiencies. An integral part of this effort has been a focus on techniques for measurement of productivity gains and ROI for refined development and management practices.
|P-113||Benchmarking for the Rest of Us |
While commonly used external benchmarks can provide some insights, it is important to balance them with internal information to gain a complete, objective picture. See P-112 for bio.
|P-104||The Challenge of Productivity Measurement|
In an era of tight budgets and increased outsourcing, getting a good measure of an organization’s productivity is a persistent management concern. Unfortunately, experience shows that no single productivity measure applies to all situations. This article discusses the key considerations for defining an effective productivity measure.
David Card is a fellow of Q-Labs. Previous employers include the Software Productivity Consortium, Computer Sciences Corporation, Lockheed Martin, and Litton Bionetics. He spent one year as a Resident Affiliate at the Software Engineering Institute and seven years as a member of the NASA Software Engineering Laboratory research team. David is the author of Measuring Software Design Quality (Prentice Hall, 1990), co-author of Practical Software Measurement (Addison Wesley, 2002), and co-editor of ISO/IEC Standard 15939: Software Measurement Process (International Organization for Standardization, 2002). He also serves as Editor-in-Chief of the Journal of Systems and Software, chairs the revision project for IEEE Standard 1044, Classification of Software Anomalies, and is a Senior Member of the American Society for Quality.
|V-02||Requirements-Driven Development: The Thread that Keeps Development Connected with Users|
In companies today, over 80% of processes are automated in software, which means that the performance of your organization can be linked directly to the performance of your software. If you deliver the wrong software, or the software doesn’t perform as anticipated, you’ve got problems. Requirements (any request for change) are essential to the success of development projects. This presentation will share insights and best practices of companies taking a requirements-driven development approach to ensure requirements serve as a proxy for the customer (both external and internal) throughout the development process (define, design, simulate, develop, validate, deliver) in the context of their business.
John Carrillo is Senior Director, Strategic Solutions with Telelogic. The Telelogic Solutions Group provides domain expertise, thought leadership, and expert opinion to its global customer base. With over 15 years in the commercial and defense industries, John’s experience includes roles in technical sales, management, process consulting, systems engineering, product development, and program management. John holds degrees in Electrical Engineering and Sociology from Loyola Marymount University and California State University at Long Beach, and he completed EE graduate studies at California State University, Long Beach.
|V-01||Peeking inside Google’s Innovation Factory |
Come and peek inside the doors of the world’s fastest-moving innovation factory. For every service Google releases, there are dozens in various stages of experimentation. This highly charged atmosphere creates many challenges for our engineers in test. From our approaches to automation to the way we divide the work within the project team, we have developed many innovative adaptations to common testing models. This talk will offer a brief glimpse of our ideas and will be of interest to people who need to “think differently” about quality.
Patrick Copeland is currently Director of Quality Assurance at Google where he develops innovative approaches to testing. Previously he managed the SQL Programming Model QA team while at Microsoft Corporation. As an undergraduate, Patrick attended the University of Arizona; he received an MS from the University of Southern California.
|P-118||CyberHunters: Deep Diving into the Mind of the Test Engineer|
Effective software testing gives companies a competitive edge. One way to improve testing is to understand better the mind of the tester. Drawing upon his research in human behavior, the author guides us through a “deep dive” into the inner world of the software tester.
John Copp, a once-upon-a-time psychologist, has advanced degrees in psychology, anthropology, and computer science. Originally a researcher and former Guggenheim Fellow, John made the switch to the computer industry 20 years ago where he has been a developer, database administrator, and test engineer, his current role at McAfee.
|P-140||Pairwise Testing in the Real World: Practical Extensions to Test Case Generators|
This presentation introduces PICT, a pairwise test case generation tool recently released to the public by Microsoft. PICT is one of the most powerful combinatorial test generators available; it is also one of the simplest and most usable.
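PICT consumes a plain-text model of parameters and values and emits a compact set of test cases. As a rough illustration of what any pairwise generator must accomplish – this is not PICT’s actual algorithm, and the model below is hypothetical – here is a minimal greedy 2-wise generator in Python:

```python
from itertools import combinations, product

def pairwise(parameters):
    """Greedy pairwise (2-wise) generator: repeatedly pick the candidate
    test case covering the most not-yet-covered value pairs.
    parameters maps parameter name -> list of values."""
    names = list(parameters)
    # Every value pair, across every pair of parameters, still to cover.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    tests = []
    while uncovered:
        best, best_covered = None, set()
        # Exhaustive candidate scan: exponential, fine only for tiny models.
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            covered = {((a, case[a]), (b, case[b]))
                       for a, b in combinations(names, 2)} & uncovered
            if len(covered) > len(best_covered):
                best, best_covered = case, covered
        tests.append(best)
        uncovered -= best_covered
    return tests

if __name__ == "__main__":
    # Hypothetical model, loosely echoing the disk-configuration examples
    # often used to explain pairwise testing.
    model = {
        "Type": ["Single", "Span", "Stripe"],
        "Format": ["Quick", "Slow"],
        "File system": ["FAT", "FAT32", "NTFS"],
    }
    for case in pairwise(model):
        print(case)
```

A production tool such as PICT layers practical extensions of the kind the title alludes to – for example, constraints between parameter values – on top of this basic covering idea.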
Jacek Czerwonka works in one of Microsoft’s test organizations. For the last few years, he has been involved in designing and implementing pairwise-related tools and evangelizing pairwise testing at Microsoft.
|P-158||Test Case Maps in support of Exploratory Testing|
In the 2000s, ad hoc testing came of age and was beautifully articulated and defined as exploratory testing. But test procedures and testing tables were too heavy for this light, highly agile, and wonderfully effective form of testing. Instead, test case maps should be considered a complement to exploratory testing. With test case maps, testers can give management the objectivity it needs to track the testing process while still achieving the high productivity and efficiency realized with exploratory testing.
Claudia Dencker is President of Software SETT Corporation, a company specializing in hosted test case management and tracking for QA professionals. Ms. Dencker has taught software testing and global team management classes worldwide through the IEEE, Software SETT and the University of California, Santa Cruz. She has over 20 years in software QA and testing.
|P-150||Evolutionary Methods (Evo) at Tektronix: A Case Study |
This presentation discusses the use of Evo (Evolutionary Method) to deliver quality products on time. The method enables teams to detect early signs of schedule trouble and take corrective action.
Frank Goovaerts has engineering degrees from the University of Leuven (Belgium) and the University of Cincinnati. He has worked in the software industry since he graduated in 1981. For the last 16 years he has been with Tektronix in a variety of roles, from developer to project lead to functional manager; he is currently Director of Software Engineering for the performance oscilloscope group. Tektronix is a world leader in test, measurement, and monitoring equipment. Frank is an advocate for process improvement and continues to drive his organization to create better software, on time.
|P-129||Quantifying Software Quality – Making Informed Decisions|
Software quality can be quantified and measured to provide a desired customer value. The quality attributes Installability, Learnability, Performance, Reliability, and Scalability, along with functionality, provide a sound basis to define the Total Customer Experience.
Bhushan Gupta has 20 years of experience in software engineering, 10 in the software industry. Currently a Process Engineering Architect at Hewlett-Packard, he joined the company as a software quality engineer in 1997 where he was responsible for identifying key process areas to reduce rework. Based upon Hewlett-Packard quality parameters, he has developed quality goals, quality plans, and software validation strategies for several products. In his current position, he has customized the evolutionary lifecycle for the Digital Publishing Division; contributed to software validation planning and execution processes; developed metrics for software problem reports (defects), project retrospectives, and software process improvements.
|P-147||Using Social Engineering to Drive Quality Upstream|
This presentation chronicles how one small team at Microsoft used social engineering to create their own test-centric development strategy, spread adoption of the strategy in a Waterfall community, deliver an innovative product, and not go crazy, all within a short 12 months.
Thomas Gutschmidt has been professionally involved in the computer industry for the past seven years. He currently works for Microsoft in Redmond, Washington. He is also a freelance author and writer and has been involved in several open source game projects and module development projects.
|P-157||Adopting and Adapting Agile Development Practices in the Real World|
The benefits of agile development practices are well established. A myriad of books describe agile development. However, these texts leave unanswered questions around transitioning from traditional models in real-world environments. This presentation will answer these questions and provide descriptions of what worked.
Don Hanson founded his first commercial software company in the early 90’s, developing an evolving line of animation plug-in products. He has since developed commercial software products for the enterprise market and led user-interface development for a mobile wireless navigation startup. Currently, Don is the development manager for the integration platform used to manage security products with $500 million in annual sales at McAfee, Inc.
|P-117||Leveraging Model-Driven Testing Practices to Improve Software Quality at Microsoft|
The Internet Explorer team at Microsoft is exploring the use of model-driven testing techniques and tools to increase quality and make the testing process more systematic and efficient. This presentation will describe the introduction of such technology.
Jean Hartmann has been working in the field of QA and testing for nearly twenty years. While his early interests focused on smarter regression testing strategies, his focus since 1998 has been model-based testing techniques and tools. At Siemens Corporate Research, his R&D team developed a UML-based test generation environment, which is being used successfully within a number of the Siemens operating companies. With a recent move to Microsoft as an IE Test Architect, he continues to pursue this topic and his passion for making testing a more systematic and efficient discipline.
|P-101||Accelerating Performance Testing – A Team Approach|
This presentation offers a strategy for engaging key members of the development and deployment staffs to assist test teams in planning and executing performance tests – dramatically accelerating the process and minimizing downtime during testing cycles.
Dawn Haynes is a consultant providing software quality, testing, and training services for companies such as Quality Tree Software and SQE. She has over twelve years of experience in manual, automated, and performance testing of software systems, and she has also delivered advanced technical training, developed curricula and courses, and managed a training department. Dawn’s career has included successful technical positions at Xerox, Rational Software, SoftBridge Microsystems, and Ipswitch, Inc. Dawn is a member of ASQ and ASTD and has participated in several invitation-only industry initiatives. She is a contributing author of the book “Quality Web Systems: Performance, Security & Usability.” Dawn holds a BSBA in MIS from Northeastern University.
|P-165||Software Testing in an Agile World|
This presentation discusses real-world lessons and guidance for a testing organization, gained by applying agile development techniques on multiple software projects, from the perspective of a test manager or test lead responsible for planning and executing feature- and system-level testing on an agile project.
Paul Hemson has spent the past 15 years in various QA leadership and management roles at Mentor Graphics, Intersolv, Cadence, and for the past 4 years as a QA Director at McAfee, Inc. in Beaverton. At McAfee he is responsible for a large, global QA organization. Prior to his QA roles, Paul spent 6 years in software support and development positions at Mentor Graphics. He holds a Bachelor of Science degree in Electrical Engineering from Swansea University (UK).
|P-105||Defining Test Data and Data-Centric Application Testing|
As more and more projects are developed using the various agile methodologies, it is important to the success of these projects that the testers involved know how their role fits into the development pattern being used. This presentation seeks to define that role within the framework of agile development.
Chris Hetzler received both a B.S. in Computer Science and a M.S. in Software Engineering from NDSU. He currently works in the Fargo development office of Microsoft Business Solutions where he has been involved with the testing of numerous legacy and next-generation products, including Microsoft Dynamics GP & BP, Dynamics eConnect, and Dynamics Web Services, where he was the lead technical tester.
|P-108||Planning for Highly Predictable Results with TSP/PSP, Six Sigma & Poka-Yoke |
Historically, application development is often estimated and tracked based on a gut feeling. A schedule is developed from the rough estimate, and the team then struggles to meet those deadlines. Corners get cut, and often the last phase – stabilization – suffers the full impact of poor planning and estimating. Customers are dissatisfied, teams are exhausted, and sometimes quality has been compromised in the drive to meet the promised date. The Team Software Process (TSP) shifts the focus from testing as the ‘find it and fix it’ stage to each individual engineer acting to prevent defects throughout the project lifecycle. Team members record data during project execution, tracking key metrics and taking corrective action as soon as the project deviates from the plan. Each engineer performs a self-review to ensure the quality of their own output before it goes to the next phase. This brings a high level of predictability in schedule, effort, and quality.
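As a loose sketch of the plan-versus-actual tracking described above – not TSP’s prescribed forms or calculations, and with hypothetical task data and tolerance – the following Python fragment flags a project as soon as logged effort drifts past its plan:

```python
def effort_deviation(tasks):
    """Compare cumulative actual vs. planned hours on completed tasks.
    Each task: {"planned_hours": float, "actual_hours": float, "done": bool}."""
    planned = sum(t["planned_hours"] for t in tasks if t["done"])
    actual = sum(t["actual_hours"] for t in tasks if t["done"])
    return (actual - planned) / planned if planned else 0.0

if __name__ == "__main__":
    # Hypothetical per-engineer task log recorded during execution.
    tasks = [
        {"name": "design", "planned_hours": 10, "actual_hours": 12, "done": True},
        {"name": "code",   "planned_hours": 20, "actual_hours": 26, "done": True},
        {"name": "review", "planned_hours": 5,  "actual_hours": 0,  "done": False},
    ]
    dev = effort_deviation(tasks)
    print(f"effort deviation: {dev:+.0%}")
    if dev > 0.10:  # the 10% tolerance is an arbitrary illustration
        print("project is deviating from plan -> take corrective action")
```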
Mukesh Jain leads Six Sigma, TSP/PSP, SDLC, process, and business intelligence initiatives at Microsoft. He has over 10 years of experience in positions including developer, QA, project manager, and quality manager. He has led multinational companies in India through TSP/PSP, ITIL, Six Sigma, ISO 9000, and SEI CMM Level 3-5 implementation and certification. He holds an engineering degree in computer science. He is an SEI-authorized TSP Launch Coach, PSP Instructor, PSP Engineer, ISO 9000 Internal Auditor, CQA, CQIA, CSQA, CSTE, and Master Microsoft Office Specialist. He served the American Society for Quality (ASQ) as Secretary (2000-2003) and the International Society for Performance Improvement (ISPI) as Vice-President (2001). As an executive member, he organized TUG Asia 2005 in India in March 2005. He has written several papers on software quality and project management and presented them at Microsoft and other companies and at international conferences, including IEEE, QAI, ASQ, SPIN, PMI, and SEI.
|P-159||Performance Testing: How to Compile, Analyze, and Report Results|
Performance testing can generate volumes of output. Learn how to organize your test results in a way that helps you understand and then interpret the test output. Learn how to present and communicate performance test information to management in a helpful, instructive, and insightful manner.
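As one minimal way to start – certainly not the only approach the presentation covers, and with hypothetical sample data – raw response times can be reduced to the handful of numbers a status report usually leads with:

```python
import statistics

def summarize(samples_ms):
    """Condense raw response-time samples (milliseconds) into
    reportable summary statistics."""
    data = sorted(samples_ms)
    cuts = statistics.quantiles(data, n=100)  # 99 percentile cut points
    return {
        "count": len(data),
        "min": data[0],
        "median": statistics.median(data),
        "p90": cuts[89],   # 90th percentile
        "p95": cuts[94],   # 95th percentile
        "max": data[-1],   # the outlier management will ask about
    }

if __name__ == "__main__":
    # Hypothetical samples from one load-test run.
    samples = [120, 135, 118, 240, 130, 125, 900, 140, 128, 132]
    for name, value in summarize(samples).items():
        print(f"{name:>6}: {value}")
```

Percentiles tell a more honest story than averages: the single 900 ms outlier above barely moves the median but dominates the mean.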
Karen Johnson, Quality Assurance Manager at Bacon’s Information, has 21 years of experience in IT and extensive experience in all aspects of quality assurance across a variety of software applications. Karen has spoken at the StarEast and StarWest conferences and has been published in Software Testing & Quality Engineering magazine – now known as Better Software. She is the tutorial chair for the 2006 Conference of the Association for Software Testing. Karen is also a member of WOPR (the Workshop on Performance and Reliability), LAWST (the Los Altos Workshop on Software Testing), and AWTA (the Austin Workshops on Test Automation).
|P-160||Four Behaviors that Hold Testers Back|
Testers sometimes worry too much about what others think. This presentation will identify four behaviors that hold testers and their products back – asking permission to open bugs; fretting over feature, team, and discipline boundaries; neglecting to interpret data and recognize trends; and commiserating instead of improving.
John Lambert is a test technical lead at Microsoft and works on the next-generation web services runtime (“Indigo”). He helps with test automation and test methodologies for the team. John has bachelor of science and master of science degrees in computer science from Case Western Reserve University; his thesis was on using stack traces to automatically identify failed executions in a distributed system. John spent a summer as a program manager intern at Microsoft working on server appliances and a summer as a research intern at Cigital investigating malicious software detection techniques. He has presented at PNSQC and STAR.
|P-161||Step Away from the Tests: Take a Quality Break|
Designing, implementing, and running tests are critically important tasks – but sometimes we need a break. This presentation will describe four non-testing techniques that improve quality in a short time slice – watching bugs, helping developers, talking to other testers, and increasing positive interactions. See P-160 for bio.
|P-153||Know Your Code: Stay in Control of Your Open Source|
Product developers are aware of the opportunities and obligations associated with open source and third-party software. However, many developers do not realize they may be violating intellectual property rights. With source code increasingly available for free, you must retain control of your product development process and manage intellectual property compliance.
Prior to founding Black Duck, Doug Levin served as the CEO of MessageMachines and X-Collaboration Software Corporation, two VC-backed companies based in Boston. From 1995 to 1999, he worked as an interim executive or consultant to CMGI Direct, IBM/Lotus Development Corporation, Oracle Software Corporation, Solbright Software, Mosaic Telecommunications, Bright Tiger Technologies, Best!Software and several other software companies. From 1987 to 1995, Doug held various senior management positions with Microsoft Corporation including heading up worldwide licensing for corporate purchases of non-OEM Microsoft software products. Previously, he held senior management positions with two startups in California and served as an IT and financial consultant to an overseas development company. Doug is an adjunct professor of Entrepreneurship and Management at the Kenan-Flagler Business School at his alma mater, the University of North Carolina at Chapel Hill. He also holds a certificate in international economics from the College d’Europe in Bruges, Belgium.
|P-149||Making the Most of Community Feedback|
The Mozilla Project has a long history of relying on community testing and other quality-related feedback from our users. Learn how the Mozilla Project encourages and processes this feedback, and how you can apply the lessons learned to your own projects.
Dave Liebreich has almost 20 years of experience developing and supporting the development of software. Currently employed at Mozilla Corporation, he is trying to figure out how to improve and increase the testing performed in this huge open-source project.
|V-03||Achieving Tangible ROI From Your Quality Assurance Test Organization|
Testing, an often-underappreciated activity in the software development industry, is finally being recognized as a core function for any organization developing or using software. Even as potential areas for process improvement and cost savings become more limited, there is still plenty of ROI to be realized in the quality assurance and testing domain. Whether through judicious use of test automation, more mature processes, or utilization of offshore resources, it is possible to decrease overall spending while improving quality. Find out how through real-world examples that can be applied to your quality assurance test (QAT) organization.
Chris Manuel is Director, Quality Assurance Test (QAT) Practice at Rapidigm (now a Fujitsu Consulting Company). Previously he held several positions at Rapidigm, including Practice Manager and E-Business Associate Director. Chris started his career with American Management Systems (AMS), a global consulting firm based in Fairfax, VA. With 12 years of experience in IT consulting and professional services, Chris has led initiatives for Fortune Global 50 organizations including Microsoft, General Motors, and Wal-Mart. Chris has contributed to various publications, including the periodical Software Business, and has spoken at software development leadership forums in California, Washington, and Ohio. He graduated with a Bachelor of Science in Management from Rensselaer Polytechnic Institute.
|P-102||MAGIQ: A Simple Method to Determine the Overall Quality of a System|
Learn MAGIQ – a powerful but quick and easy technique to evaluate the overall quality of any type of software system. Use MAGIQ to determine how your systems stand up against competitors’ systems, and for build quality analysis.
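MAGIQ, as McCaffrey has described the technique elsewhere, rests on rank order centroid weights: you rank the quality attributes by importance, and the ranking alone yields numeric weights for a weighted-sum score. Here is a minimal Python sketch under that assumption; the attribute names and ratings are hypothetical:

```python
def roc_weights(n):
    """Rank order centroid weights for n attributes ranked from most
    important (rank 1) to least: w_k = (1/n) * sum_{i=k..n} 1/i."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n
            for k in range(1, n + 1)]

def magiq_score(ratings):
    """Overall quality = weighted sum of per-attribute ratings,
    given in descending order of attribute importance."""
    return sum(w * r for w, r in zip(roc_weights(len(ratings)), ratings))

if __name__ == "__main__":
    # Hypothetical 0.0-1.0 ratings, most important attribute first:
    # functionality, performance, usability, security.
    print(round(magiq_score([0.80, 0.70, 0.90, 0.60]), 3))
```

The appeal is that stakeholders find ranking attributes far easier than assigning weights directly, yet the resulting score is stable enough for comparing builds or competing systems.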
Dr. James McCaffrey works for Volt Information Sciences, Inc. where he manages technical training for software engineers working at Microsoft. He holds a doctorate from the University of Southern California, a bachelor’s in mathematics from California State University at Fullerton, and a bachelor’s in psychology from the University of California at Irvine. He worked as a lead software engineer at Microsoft on key products such as Internet Explorer and MSN Search. James is the author of “.NET Test Automation Recipes: A Problem Solution Approach” (Apress), and is a contributing editor for MSDN magazine.
|P-111||Key Measurements for Testers|
What more can we measure than the number of open and closed defects or test cases run? There are several measures that help manage and predict the testing process and answer questions like “How much time do we need for testing?” and “Is the software good enough to release?”
Pamela Perrott is a Senior Quality Architect at Construx Software. She has been in the IT industry for 23 years as a programmer, systems programmer, analyst, project manager for tools implementations, and instructor. Pam is an expert in quality practices, such as implementing inspections, and has a deep knowledge of software process improvement, testing, and software project management. Prior to working at Construx, she integrated new technologies, implemented inspections, and performed complex requirements management at Verizon Wireless. Pam has an AB in Biology from Bryn Mawr College, an MA in Biochemistry from Cambridge University, a Certificate in Data Processing from North Seattle Community College, and a Master’s in Software Engineering from Seattle University. She is also a Certified Function Point Specialist (CFPS) and a Certified Software Test Engineer (CSTE).
|V-04||Use Cases/Test Cases: Two Sides of the Same Coin|
Use Cases are a very powerful technique for capturing and communicating requirements for a software project. Yet use cases are often misunderstood, underused, and even avoided. In this session, you will learn how to write precise use cases and how they can do double duty – leveraging the use case specifications to create test case specifications automatically.
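As a toy illustration of that double duty – not the session’s actual tooling, and with every name and flow hypothetical – a structured use case can be mechanically unfolded into one test case per path:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    main_flow: list                       # ordered steps of the success scenario
    alternate_flows: dict = field(default_factory=dict)  # branch step -> replacement steps

def derive_test_cases(uc):
    """One test per path: the main success scenario, plus one test per
    alternate flow (main flow replayed up to the branch, then diverging).
    Branch keys must name a step that appears in the main flow."""
    tests = [{"id": f"{uc.name}-TC1", "steps": uc.main_flow}]
    for i, (branch, steps) in enumerate(uc.alternate_flows.items(), start=2):
        idx = uc.main_flow.index(branch)
        tests.append({"id": f"{uc.name}-TC{i}",
                      "steps": uc.main_flow[:idx] + steps})
    return tests

if __name__ == "__main__":
    login = UseCase(
        name="Login",
        main_flow=["enter credentials", "submit", "see dashboard"],
        alternate_flows={"submit": ["submit", "see 'invalid password' error"]},
    )
    for tc in derive_test_cases(login):
        print(tc["id"], "->", tc["steps"])
```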
Ashu Potnis works for TechnoSolutions Corporation in Vancouver, Washington. He began his career 20 years ago typing COBOL code on mainframe “character mode” screens. He has been through it all – 3GL to Web Services, Waterfall to iterative processes. The Software Development Lifecycle remains near and dear to his heart. In 1996, he co-authored a popular software development tool for Oracle databases, SQL Navigator, which currently enjoys an installed base of over 100,000 users.
|P-125||Understanding the Imagination Factor |
The first step to identifying bugs is to imagine their existence. A tester’s imagination is key to identifying issues with requirements, finding bugs in software, and reporting them in ways that will grab stakeholders’ attention. Learn how to leverage and develop your own imagination, and that of your fellow testers.
Mike Roe is a Senior QA Engineer for Symyx Technologies, a chemical research company based in Santa Clara, CA. He works with a software development team in Bend, Oregon. He has tested software for about eight years in various industries, including chemical research, government licensing, accounting, aerospace, and software. For the last three years he has tested scientific electronic lab notebook (iELN) software and the systems in which it is managed.
|P-162||Improving Test Code Quality|
A test team, while responsible for ensuring the quality of a feature or product, is not infallible. Just like production code, test code suffers from common problems. This talk will describe tools and a low-overhead process for identifying and removing these problems.
Brian Rogers is a tester at Microsoft, working on the next-generation web services runtime (“Indigo”) where he improves test code quality across his team.
|P-099||Front End Requirements Traceability for Small Systems|
This presentation provides a case study centering on tracing functional requirements from Needs to Features, to Use Cases, and to an Analysis Model using forward and backward traceability matrices. Developers included two teams of graduate students using the Rational Unified Process as a methodology and, to a lesser degree, Rational RequisitePro as a tool.
Bob Roggio is currently a professor of computer and information sciences in the Department of Computer and Information Sciences at the University of North Florida. Previous academic positions include Associate Professor of Computer Science and Engineering at Auburn University (1984-1987), Chair of the Computer and Information Sciences Department at the University of Mississippi (1987-1991), and Dean of the College of Computing Sciences and Engineering at the University of North Florida (1991-1994). In mid-1994, he returned to full-time instruction and research with a primary emphasis in software engineering. During his academic career, he has presented and authored or co-authored over forty scholarly publications in journals or conference proceedings worldwide. Prior to his academic career, Bob spent twenty years in the U.S. Air Force, both at home and abroad, where his principal duties centered on the definition, design, development, and implementation of a wide range of computing applications. He holds a B.S. degree in mathematics from the University of Oklahoma, an M.S. degree in computer science from Purdue University, and a Ph.D. in computer engineering from Auburn University.
|P-146||Smart Result Analysis: A Key Competitive Advantage|
Modern software applications that are rich in features and large in size need innovative approaches to test result analysis in order to gain competitive advantage. This presentation offers an innovative and practical approach to result analysis that has been successfully applied to various large CAD software projects at Intel.
Manoharan Vellalapalayam is a software architect in the Information Technology group at Intel. He has over 15 years of experience in software development and testing. He has created innovative test tool architectures that are widely used for validation of the large, complex CAD software used in CPU design projects at Intel, and he has chaired validation technical forums and working groups. He has provided consultation and training to various CPU design project teams in the US, Israel, India, and Russia. He specializes in automated distributed test job execution, smart result analysis, GUI testing, and cross-platform test architectures.
|P-133||How to Test Requirements|
As a tester you can make a significant difference in the quality of requirements. The learning curve needed to make this happen is relatively small. This presentation will equip you with techniques and tips to get you started.
Kelly Whitmill has more than 20 years of experience in software testing. For most of that time his role has been that of a team lead, with responsibility for finding and implementing effective methods and tools to accomplish the required tests. He is particularly interested in practical approaches that can be effective in environments with limited resources. He has a strong interest in test automation and requirements. He currently works for IBM in Boulder, Colorado.
|P-100||Techniques That Inspired Workplace Improvement|
Learn how we inspired the workforce to accept change. We discovered an unfamiliar core reason for resistance and will reveal a simple technique that alters the approach to finding real issues and solutions. We inspired the passion in people to quickly buy in and realize the benefits before they knew what happened.
George Yamamura has a 30-year career developing space vehicle guidance and control software, including work in an organization assessed at SEI CMM Level 5. He has been the subject of articles in papers, journals, and magazines, and he recently wrote a book titled The 10th Inning: Winning Strategies in Baseball and Business. He was awarded the AIAA Technical Management Award, the Pacific NW Software Excellence Award, and the National Association of Asian American Professionals Lifetime Achievement Award. He holds BS and MS degrees in Aeronautics and Astronautics from the University of Washington and in Applied Mathematics from the University of Santa Clara.
|P-093||Insights in Real Test-Driven Development|
Learn about the benefits, challenges, and limitations of test-driven development based on experiences gained in real-world projects and learn how to successfully introduce it in your organization.
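For readers new to the practice, here is the bare red-green-refactor rhythm in Python’s unittest; the slugify example is hypothetical and independent of the projects discussed:

```python
import unittest

# Step 1 (red): write the tests first; they fail because slugify
# does not exist yet or is not implemented.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_clean_input_is_unchanged(self):
        self.assertEqual(slugify("abc"), "abc")

# Step 2 (green): write the simplest code that makes the tests pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): restructure freely, rerunning the tests as a safety net.

if __name__ == "__main__":
    unittest.main()
```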
Peter Zimmerer is a Senior Principal Engineer at Siemens AG, Corporate Technology, in Munich, Germany. He studied computer science at the University of Stuttgart, Germany, and received his M.Sc. degree (Diplom-Informatiker) in 1991. He is an ISTQB™ Certified Tester, Full Advanced Level. For more than 14 years he has been working in the field of software testing and quality engineering for object-oriented (C++, Java), distributed, component-based, and embedded software. He was also involved in the design and development of several Siemens in-house testing tools for component and integration testing. At Siemens, he consults on testing strategies, testing methods, testing processes, test automation, and testing tools in real-world projects, and he is responsible for the research activities in this area. He is co-author of several journal and conference contributions and a speaker at international testing conferences, e.g., Quality Week, the Conference on Testing Computer Software, the Conference on Quality Engineering in Software Technology (Conquest), OOP, PSQT/PSTT, QAI’s Software Testing Conference, EuroSTAR, and STARWEST.