P-5 |
Extending the Use of Code Coverage Measurements to Improve Test Design - A Collaborative Approach
Michael Huber, Schlumberger Information Solutions Designing tests and determining their overall coverage can be a tough challenge for test engineers. When performing manual tests, engineers usually do not have sufficient visibility into the features that have already been covered by unit tests. Test engineers also have no means to confidently measure the amount of code that has been executed by their own testing efforts, neither for an individual function nor for the software application as a whole. To improve test design, code quality and collaboration, a software development team needs to have a common means of evaluating and discussing test completeness. Although software developers have long used code coverage measurements to understand, communicate and increase the scope of their unit tests, manual testers still tend to be locked firmly into a world of scripted tests in which metrics are typically based on simply counting test cases. Little to no analysis is performed on the true coverage of these manual tests. This poster presentation introduces a process that encourages and enables all members of the software development team (developers, architects, manual testers and test automation engineers) to collect code coverage data consistently during their respective test activities. The coverage data is compiled into a single file that visually illustrates the areas of the code that are covered by at least one testing approach, as well as the areas that remain completely uncovered. The software development team analyzes and discusses this information at regular intervals during the development cycle. With all players at the same table, this approach results in a healthier overall view of the testing strategy and provides clearer insight into how each member of the software development team can contribute to a higher quality software product. 
In addition to providing more accurate coverage metrics, this process results in the development of a spirit of joint ownership in relation to code quality, better communication and collaboration between the groups and skill development for all members of the team, especially the manual testers. A case study of a real software development project in which this workflow was successfully implemented will be presented. The technical realization of this process in Visual Studio will be explained in detail so that conference participants will be able to easily implement this approach in their organizations. A live demo is also planned, making the presentation a practical, hands-on experience for the audience. All authors work for Schlumberger Information Solutions in Houston, TX, creating software for the upstream oil and gas industry. They work in the same product portfolio: Michael Huber as QA Manager; Steven Loucks, Andrea Pound and Deian Tabakov as Software Developers. They have collaboratively designed, refined and implemented the approach described in this paper. |
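The merging workflow described in this abstract (combining coverage data from unit, manual and automated testing into a single view of covered and uncovered code) can be sketched in outline. This is a hypothetical illustration, not the authors' Visual Studio implementation; the per-file sets of covered lines are assumed inputs:

```python
def merge_coverage(runs):
    """Union per-file covered-line sets from several test activities
    (e.g. unit tests, manual tests, automated UI tests)."""
    merged = {}
    for run in runs:
        for path, lines in run.items():
            merged.setdefault(path, set()).update(lines)
    return merged


def uncovered(merged, executable_lines):
    """Lines that no testing approach touched, per file."""
    return {path: executable_lines[path] - merged.get(path, set())
            for path in executable_lines}
```

The sketch only shows why a per-file union yields the "covered by at least one testing approach" view the abstract describes; the team's actual process uses Visual Studio's coverage tooling and file formats.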
P-35 |
In-story Verification/Validation
Subhasis Bera, Tektronix In-story verification/validation by all stakeholders reduces the number of testing cycles and helps get the product to market faster than usual. Performing cycles of “Expert User testing” and “in-story Product Owner testing” reduces the testing cycle by ~30%. Performance Experiments: The following execution idea was practiced over 3-5 sprint cycles:
Performance changes:
Subhasis Bera works as a Senior Project Lead at Tektronix, practicing Scrum. He holds a Bachelor's in Electronics & Instrumentation and has close to 10 years of experience in data acquisition and virtual instrumentation. Email: Subhasis.bera@tek.com |
P-39 |
Applying Agile/Scrum Methodology in Performance Testing to Deliver Quality
Emily Ren, Symetra In many companies, methodology in performance testing is not as clear and mature as in functional testing. Performance testing is an extremely challenging job. It is very different from functional testing and can be highly frustrating. Much of performance testing is an uphill battle and requires stellar cooperation and coordination from all involved. It is essential for the performance testing team and the project team to work together and set up a well thought-out performance testing process with proper methodology. Agile/Scrum methodology is actually a natural fit for a performance test process because:
This paper will discuss the common mistakes in a performance test process, and how to apply Agile/Scrum methodology so the team can work effectively to optimize the outcome of performance testing and tuning activities. Emily Ren has 12 years of experience in QA, specializing in performance testing processes and automation tools. She has worked with many companies on large applications. Emily prides herself on her drive to use cutting-edge technologies, as well as on her insistence on applying best practices to set up strategic and efficient test processes. Currently, she works as a Performance Testing consultant at Symetra Financial, conducting performance testing assessments and developing an overall roadmap. Prior to Symetra, she was a Principal Consultant and Manager of the testing practice at Sogeti USA (formerly Ernst & Young and Cap Gemini). She consulted for many Fortune 500 companies, including Microsoft, Boeing, E*Trade, HP, Johnson & Johnson, Pfizer, Expedia, BlueCross, Dell, LexisNexis and many others. She also managed and conducted software testing for companies like T-Mobile and Washington Mutual. While working in T-Mobile’s IT department, she set up a standard performance test process and managed all performance test projects for in-house applications, such as handset software, SAP, Retail, IT Governance and HR applications. She has made numerous presentations and conducted training about performance testing processes, including presentations on “Implement Efficient Performance Testing Process” at SASQAG (Seattle Area Software Quality Assurance Group) and an IEEE conference. She is also a certified PMP. |
P-52 |
Continuous Integration Testing with Code Coverage
George Shin, Hewlett-Packard Company Finding new, never-before-covered code is a challenge in a continuous integration testing environment, which requires merging and preserving code coverage measurements from each build version across different build targets. Amid heavy code churn and frequent rebuilding, developers and testers must rely on automation and tools to detect and minimize coverage gaps between test iterations. In an environment where random testing methodology is used, cumulative code coverage measurement, reporting and analysis become part of the quality release process. Between integration testing iterations, the emphasis is placed on measuring how effectively the tests cover modified code, down to the minimal subset (for instance, a function in a high-level language). This emphasis is particularly important for developers in test, who must stay continuously integrated with the daily commits to the nightly builds. Product developers, in their continuous integration within a given (private) branch, are required to maintain high-quality commits (to the baseline or feature branch) by focusing unit test coverage on modified code using both full and incremental builds. This paper discusses a system environment that delivers the automation and tools that help develop and test a better quality product for our customers, and that minimizes the risk of shipping uncovered code that can become customer-visible failures. George Shin currently works at Hewlett Packard’s Data Center Development Unit in Boise, Idaho. In his current role as QA/Test Manager, which began in late 2009, he leads a team of developers in test responsible for integration test coverage for the Enterprise Virtual Array (EVA) product, with the main focus being QA coverage of controller firmware. 
Prior to his QA/Test Manager role, George worked as a firmware development engineer at HP, starting with the Virtual Array product and later transitioning to EVA. Before HP, he worked as a firmware engineer on other storage product platforms at Seagate HDD, Quantum HDD and IBM ADSTAR. He has managed start-up software development teams focused on providing hard disk drive test solutions for design, quality and reliability validations. George has 18+ years of R&D experience in I/O technology and storage platforms. He holds bachelor’s degrees in electrical engineering and computer science from Washington State University, and a master’s degree in electrical engineering from Cal Poly. |
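The cumulative-coverage and modified-code gap analysis described in this abstract can be illustrated with a minimal sketch. This is a hypothetical outline, not HP's tooling: builds are represented here as sets of covered function names, and the list of modified functions is assumed to come from a source diff:

```python
def merge_builds(per_build_coverage):
    """Cumulative set of functions covered across build iterations,
    merging and preserving measurements from each build version."""
    covered = set()
    for functions in per_build_coverage:
        covered |= set(functions)
    return covered


def coverage_gap(modified_functions, cumulative_covered):
    """Modified functions not yet exercised by any test iteration;
    these are the coverage gaps to close before release."""
    return set(modified_functions) - set(cumulative_covered)
```

The function-name granularity mirrors the "minimal subset" the abstract mentions; a real system would key on build targets and symbol addresses rather than bare names.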
P-54 |
Embedding Code Coverage Measurement in a Multi-Platform Storage Device Without a File System
George Shin, Hewlett-Packard Company This paper describes a method and apparatus for managing and presenting code coverage information for embedded software running in a storage target device with multiple controllers and no file system. The embedded software that is the target of coverage measurement runs on multiple platforms from a single code base, and code coverage requires separate measurements, one for each unique target platform for which the run-time executable is built. Each platform's run-time executable produces its own coverage result, which is portable across all other platforms (as long as the same source location is used to build the executables), so that merging and preserving the measurements from each platform can be carried out at the storage target device without host intervention. Coverage results are presented and managed between the host and the storage device platforms using standard SCSI commands. The coverage result is managed independently on each storage controller as memory-mapped files that emulate files in a disk-based file system. The storage and retrieval access methods for coverage results, part of the management interface, are abstracted to be generic, supporting both disk-based and memory-mapped file systems. A host in the described system interacts with the storage target device through well-defined management interfaces for coverage result presentation, controlling the granularity of the coverage measurements. George Shin currently works at Hewlett Packard’s Data Center Development Unit in Boise, Idaho. In his current role as QA/Test Manager, which began in late 2009, he leads a team of developers in test responsible for integration test coverage for the Enterprise Virtual Array (EVA) product, with the main focus being QA coverage of controller firmware. 
Prior to his QA/Test Manager role, George worked as a firmware development engineer at HP, starting with the Virtual Array product and later transitioning to EVA. Before HP, he worked as a firmware engineer on other storage product platforms at Seagate HDD, Quantum HDD and IBM ADSTAR. He has managed start-up software development teams focused on providing hard disk drive test solutions for design, quality and reliability validations. George has 18+ years of R&D experience in I/O technology and storage platforms. He holds bachelor’s degrees in electrical engineering and computer science from Washington State University, and a master’s degree in electrical engineering from Cal Poly. |
P-55 |
Quality Challenges Faced in the Test and Measurement (T&M) Industry and How We Overcome Them Using Scrum
Anuradha Vasudeva, Tektronix Quality in the T&M industry has its own challenges; they make it hard to maintain quality and, at the same time, make the work interesting. Test and measurement applications provide measurement and compliance solutions for upcoming technologies. Design and validation teams working on these technologies require a tool/application to verify their designs and validate their products. Developing these applications poses the following quality challenges to T&M developers:
Anuradha Vasudeva is a Technical Lead at Tektronix, India. She holds a bachelor's degree in Electronics and Communication. She started her career as a software engineer at Ocwen Financial Solutions Pvt. Ltd., a web product company, working on web page development and SQL for about two years. Later, she worked as a Design Engineer at GE Healthcare on a vascular product. She is familiar with Six Sigma, FMEA and CAPA concepts. She has spent the last 4.5 years at Tektronix, developing measurement algorithms and leading domain activities. She is a trained Certified Scrum Master (CSM) and has performed the roles of Scrum Master, Project Lead and Technical Lead in her current project. She performs ‘Expert User Testing’ before the final release of the product and is actively involved in presenting and demoing products to customers. |
P-58 |
Testing in Production: Enhancing Development and Test Agility in a Sandbox Environment
Xiudong Fei, Microsoft Developing and testing an application in a sandbox environment presents unique challenges. Sandbox environments can limit how we validate changes to the implementation and how we deploy and set up the application, can slow down or limit troubleshooting, and can make it hard to obtain the relevant logs needed to resolve end-user issues. If the application within a sandbox environment needs to be validated in a production environment, whether on the premises of an enterprise or hosted in the cloud, the problems are further aggravated, due to the additional quality gates the application must pass before being deployed on the backend system. This process is usually done with “near” release quality (or sometimes Beta quality) builds of the application. Additionally, when testing and “dog-fooding” an application in production, it is sometimes necessary to gather additional information such as logs or configuration data, or to troubleshoot the application at runtime; this requires updating the binaries on the backend or in-process debugging by the responsible developer. The same problems are observed during product development, when the QA/Test and Ops teams need to test product features and deploy test topologies on an ongoing basis. However, setting up a private server repeatedly is costly, and it is hard to simulate a production environment with complex, advanced topology and network configuration and several thousand users who use and test the system in interesting ways. All of these constraints slow down product development, testing and troubleshooting. This paper introduces a Detour concept for an application hosted in a sandbox environment. We share the experience of the Lync Web application in the context of a Silverlight sandbox environment. 
In this paper we will discuss the tool and framework we created to address the above-mentioned challenges. We will discuss how we used this tool to intercept Silverlight binaries in an HTTP(S) stream, replace them with custom binaries and redirect the result to a browser session hosting the application under test. We further discuss how a connection established with this tool can be used to remotely manipulate objects in the Silverlight application through scripts (such as PowerShell). This enables an alternative UI automation strategy driven by scripts that manipulate UI objects programmatically. With the objects at hand, we can debug in an out-of-process mode at runtime without attaching a debugger, including setting breakpoints based on request/response, which is not possible in traditional debuggers. We also discuss how we use the connection to dynamically enable or disable relevant logs, and to extend the size of logs by making the entire disk available for log storage rather than the in-memory buffer the application is limited to. Another use of this framework and tool is validating product security using man-in-the-middle-style fault injection. In the course of developing and testing the Lync Web application we used both test topologies and production environments (on-premise and cloud-hosted topologies). We observed that setting up even a simple test topology during product development is costly; having used this tool and framework on both test and production environments over the course of a year, we conservatively estimate a savings of 0.5 days/week for one person to validate code changes, including testing and troubleshooting of applications in a sandbox environment. 
The paper will conclude with lessons learned by our team throughout the development and deployment of this tool, and will include practices any team can adopt to achieve similar results. Xiudong Fei has been a test engineer in Microsoft's Lync group for the last four years. His passion is creating new ways of testing that have business impact, and having fun. Sira Rao is a Test Lead at Microsoft. He has worked on Unified Communications products at Microsoft for over seven years. He is passionate about building high quality products that excite customers. |
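The binary-interception step at the heart of the Detour concept (swapping Silverlight binaries inside an HTTP(S) response before it reaches the browser) can be sketched in a simplified form. The function below and its content-type keying are assumptions for illustration; the actual tool operates on a live proxied stream, not on pre-parsed header dictionaries:

```python
def detour_response(headers: dict, body: bytes, replacements: dict):
    """If the response carries a binary we want to swap (keyed here,
    hypothetically, by Content-Type), substitute the custom bytes and
    fix Content-Length so the browser accepts the modified stream.
    Responses that do not match pass through unchanged."""
    ctype = headers.get("Content-Type", "")
    if ctype in replacements:
        body = replacements[ctype]
        headers = dict(headers, **{"Content-Length": str(len(body))})
    return headers, body
```

Recomputing Content-Length is the easy-to-forget detail: a browser that receives the original length with a differently sized body will truncate or stall the download.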
P-59 |
Troubleshooting for Fortune and Glory
Chris Blain, Tripwire The lines between what “testers” do and what “developers” do for a software project are starting to blur. Skills from both sides are bleeding over to the other as teams look to become more adaptable and flexible. There is a rich literature and folklore for “troubleshooting skills”, which I hope to bring to a wider audience. An exploratory tester with the skills (networking, debugging, along with traditional testing skills) to follow wherever their inquiries lead them can be a valuable team member indeed. This talk will present both methods and tools to help expand the reach of anyone wanting to find the root cause of issues that trouble their code. Chris Blain is an escalation engineer and software tester for Tripwire, Inc. He has been involved in the software industry for over 14 years at companies such as Rational Software and a number of small software companies around Portland. He has experience in a variety of roles from technical support, development, test management and escalation engineering. He is a member of the Context-Driven school of software testing and is lucky to be able to learn from the other members of this group. |
P-62 |
Dirty Tricks in the Name of Quality
Ian Dees, Tektronix We join software projects with grand ideas of tools, techniques and processes we’d like to try. But we don’t write code in a vacuum. Except on the rare occasions when we’re starting from scratch, we’re confronted with legacy code whose history we may not know and team members who have been quite productive for years without the silver bullets we’re pushing. How do we get a toehold on a mountain of untested code? How can we get our software to succeed despite itself? Sometimes, we have to get our hands dirty. We may have to break code to fix it again. We may have to put ungainly scaffolding in place to hold the structure together long enough to finish construction. We may have to look to seemingly unrelated languages and communities for inspiration. This paper is a discussion of counterintuitive actions that can help improve software quality. We’ll begin with source code, zoom out to project organization and finally, consider our roles as individual software developers. Ian Dees saw his first Timex Sinclair 1000 over 20 years ago, and was instantly hooked. Since then, he’s debugged embedded assembly code using an oscilloscope, written desktop apps in C++, and joyfully employed scripting languages to make testing less painful. Ian currently writes GUI code for handheld instruments as a Software Engineer at Tektronix. Ian is the author of “Scripted GUI Testing With Ruby” and co-author of “Using JRuby.” |
P-75 |
Delivering Quality in a Determined Time
Mohan Das Gandhi Gandhi, Tektronix Engg. Dept.(P) Ltd In today’s uncertain economic climate, it has become imperative to get products to market quickly. Time to market is valuable in fast-moving industries. To beat the competition and satisfy customers, the following changes are inevitable:
There are times when delivering a quality product is more important than delivering the ‘perfect’ product. Due to competition in the market, the complexity of software increases every day, and the difficulty of software testing stems from that complexity. Software testing is not just about uncovering defects; it must be performed to ensure that a software program, application or product sufficiently meets all the intended business and technical requirements. Ongoing testing is essential to provide constant feedback and help products release on time. Two challenges in delivering a quality product are:
Apart from that, testers have to overcome the following challenges and still release the product within the constrained time:
We cannot shortcut our test strategy and test methods due to time pressure. Testers must still be committed to delivering the best quality and to ensuring user confidence, as well as that the software will perform as promised. To overcome all these issues and deliver better quality on time, testers must adopt a pro-active, innovative and smart way of testing. In our company, we have implemented the following strategy to cover all the tests and complete on time: Mohan Das Gandhi holds a master’s degree in applied electronics from PSG CAS, Coimbatore, Tamilnadu, India and is currently pursuing a doctoral program (PhD) in software quality engineering at Hindustan University, Chennai. He is presently employed by a US-based company, Tektronix Engg Devp. (I) Pvt Ltd in Bangalore, as a Software Quality Leader. He has worked for Tektronix for the past 11 years and, overall, has 13 years of experience in the IT industry. He has extensive experience in GUI test automation tool development and has done substantial research in GUI test automation. Last year, his “Achieve Quality through GUI Test automation in Complex environment” paper was selected as a poster paper. So far he has submitted three poster papers to PNSQC and all of them were selected (“Improvements in GUI testing of TekScope application for DPO/DSA Tektronix Oscilloscope Series”, “Automated GUI Stress Testing using GUIStressRobo” and “Achieve Quality through GUI Test automation in Complex environment”). |
P-77 |
User Experience and the Agile Process
Jessica Walker, Intel Traditionally, user experience research had its own place in the development lifecycle: researchers had months to prepare, run and report findings from studies. With the Agile model, traditional research methods need to adapt in order to fit into the sprint process. The purpose of this presentation is to show how user experience research can be applied to the Agile process to improve software applications. This presentation will provide user experience research methods that teams can apply to their own software projects, along with real-world examples. Key Topics:
By incorporating one or many of these methodologies into the Agile process, teams will see a vast improvement in the usability of the software product. These techniques will also produce user stories that are more precise and directly aligned with users’ needs. Jessica Walker is a human factors engineer at Intel. She has worked at Intel for five years conducting user research on various projects in health, mobile, emerging markets and business client. Jessica has been on an agile software development team for two years as a researcher and a scrum master. |
P-78 |
A Brighter Future for Web Browser Application Testing
Moji Friedhoff, Jeppesen This paper will describe how our online application test team evolved from a ‘traditional’ Waterfall life cycle, which pitted reliability against time, to a model that allows us to properly prioritize and focus on the most important aspects of our application and to establish the proper metrics to conclude whether adequate testing has been performed. Moji Friedhoff is a Software Test Engineer at Jeppesen with, overall, 16 years of testing and technical support experience. Moji’s testing and QA experience includes a focus on black-box testing, white-box test automation and providing project test schedules with well-defined tasks, deliverables, time estimates and required resources. Moji has an associate degree in mathematics from Portland Community College, a bachelor of science degree in health and human services from the University of Oregon, and a bachelor of science in electrical and computer engineering (Integrated Circuit Test, Verification and Validation Certificate) from Portland State University. |
P-79 |
Not Just Defect Tracking: Using Your Defect Tracking Tool for Managing Workflow for Many Work Item Types
Bruce Kovalsky, Capgemini Many test organizations purchase a defect tracking tool (e.g., Quality Center, ClearQuest, Bugzilla) to report, track and manage workflow on defects only. This paper will show those who are defect tool administrators how to expand the usage of the tool for managing workflow for many other types of work items, such as features, issues and change requests, so that the entire organization (development, project management office) can fully utilize the investment made in the tool. Defect tools generally provide a “standard” defect workflow, such as New -> Assigned -> Resolved -> Verified -> Closed. Many organizations use this standard workflow for defects successfully, but do not realize that the workflow can be customized to use different statuses than the standard ones (e.g., Deferred, NeedsInfo, Cancelled), and that the tool can be customized to manage workflow for several other work item types. Once these work items are stored in the tool, they can be easily searched and updated by everyone who has access to the tool; and update access can be configured to limit access to certain teams or individuals. This paper will show organizations how to customize their defect tracking tools to support other types of workflow, such as:
I have found that a small investment up front, customizing workflow for the work items your organization needs managed, can pay off in big ways by giving the entire organization the ability to search for current status, the history of work done and visibility into the work planned. Bruce Kovalsky has been a Quality Assurance/Test Manager, Consultant and Automation expert at various Seattle-area companies since 1992. He joined Capgemini as a Test Manager in 2010. After receiving a bachelor’s degree in computer science from the University of California at Berkeley, he spent eight years developing software in the Aerospace industry, and then began focusing his career on Quality Assurance and Testing. He has presented papers at several quality conferences, including QAI (1998), Rational User Conference (2000, 2005), and PNSQC (1998). |
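The customizable workflow this abstract describes can be sketched as a small state machine. The transition table below uses the standard statuses quoted in the abstract plus the custom ones it mentions (Deferred, NeedsInfo, Cancelled); the exact allowed transitions are hypothetical, since a real tool would let each organization configure its own:

```python
# Hypothetical transition table: status -> set of allowed next statuses.
STANDARD_FLOW = {
    "New": {"Assigned"},
    "Assigned": {"Resolved", "Deferred", "NeedsInfo"},
    "Resolved": {"Verified", "Assigned"},  # reopen on a failed verify
    "Verified": {"Closed"},
    "NeedsInfo": {"Assigned", "Cancelled"},
    "Deferred": {"Assigned", "Cancelled"},
    "Closed": set(),
    "Cancelled": set(),
}


class WorkItem:
    """A defect, feature, issue or change request moving through a
    configurable workflow; each kind could carry its own table."""
    def __init__(self, kind, workflow=STANDARD_FLOW):
        self.kind = kind
        self.workflow = workflow
        self.status = "New"

    def transition(self, new_status):
        if new_status not in self.workflow[self.status]:
            raise ValueError(f"{self.status} -> {new_status} not allowed")
        self.status = new_status
```

Swapping in a different table per work item type is the customization step the paper advocates: the tool enforces whichever transitions the administrator defines.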
P-81 |
Unusual Testing: Lessons Learned from Being a Casualty Simulation Victim
Nathalie Rooseboom de Vries van Delft, Capgemini I’ve been a Casualty Simulation victim for a couple of years now, and in this role I have participated in numerous drills, exercises and exams for first aiders, medical teams and first-responders. Besides the tremendous fun I have being a ‘victim’, I have also learned a lot from these ‘tests’, and I have applied those lessons in my daily work as a software and system tester. This is the poster paper that accompanies the presentation in which I describe my experiences as a casualty simulation victim and how I applied the lessons learned in my job as a software tester, specifically in my project of end-to-end testing done at NS Hispeed for their commercial systems of Fyra. I have condensed the material into a convenient two-page overview. The two-pager shows, in basic terms, how a familiar testing process is applied within Casualty Simulation, gives some background on Casualty Simulation and teases the ‘four gems’ from the Lessons Learned. The paper does not go into the presentation material too deeply and contains a different example than will be shown during the presentation itself, to keep it interesting for visitors who are able to attend the track; its sole purpose is to prompt interaction and discussion around the subject. Note: this poster paper contains imagery which COULD be gruesome to some people; all the casualties shown are NOT REAL (simulated), but could nevertheless be shocking to some. Nathalie Rooseboom de Vries van Delft is Community of Practice leader IT Testing at Capgemini, responsible for thought leadership and testing competence development. She fulfills the roles of test manager and advisor with various clients. She speaks at national and international test events on a regular basis, writes in specialist publications and participates in the Dutch Standardization Body (NEN) workgroup for Software and System development. 
She is very passionate about (software) testing in general, but the subjects of Data Warehouse Testing, Chaintesting, Standardization, Ethics/Philosophy and Test Architecture (Framework) are her favorites. Nathalie was also a member of the EuroSTAR 2010 program committee. |
P-82 |
How to Satisfy the Customer without Sacrificing the Team
Supriya Joshi, WebMD Health Services One of the most challenging tasks all client-facing companies face is how to deal with day-to-day client demands and handle them efficiently. Often, work comes in one item after another with no understanding of where to start. It’s frustrating to start the day looking at a pile of work with no prioritization. You don’t always have a choice about whether to deal with these daily demands, but there are things you can do to make the situation less stressful. This paper outlines the approach WebMD Health Services adopted to overcome these challenges, with the goal of achieving higher client satisfaction and delivering higher quality features in a shortened timeframe. Supriya Joshi is a Quality Assurance Analyst at WebMD Health Services and has worked there for the past 2-1/2 years. She has worked as a Software Engineer at various companies and holds an MS in computer science. |
P-84 |
Shifts and Sparks – Factor in the Future
Chermaine Li, Microsoft If you could have anticipated the rise of social media such as Facebook and Twitter, would you have built your product differently? Our businesses operate in a rapidly changing environment. How can Microsoft improve its ability to anticipate future disruptions and opportunities, account for them early and throughout its engineering processes, and enhance the future-readiness of its products? In 2009, the Microsoft Envisioning Lab started gathering information about future forces of change and consolidating that information into 18 key “Shifts” that will impact information work and business productivity over the next five to ten years. Building on that work to make it more actionable, the Lync engineering team developed a workshop methodology and a set of “Sparks” for each Shift to trigger creative, future-focused, out-of-the-box thinking that can be leveraged at different stages of the engineering process. The goal is to encourage engineers to factor in the future when planning, designing, developing and testing products. When Microsoft engineers, designers and planners analyze scenarios and features through the lens of these “Shifts and Sparks”, their thinking expands; they consider a broader array of alternatives and become more enthusiastic customer advocates. “Shifts and Sparks” represents a change in mindset and offers a practical approach for Microsoft to build products that are truly ready for the future. Shwetha Nagaraj is an engineer on the Lync client team at Microsoft working on products for Mobile platforms. Shwetha loves working on tough problems with smart people and creating innovative solutions. Her current interest is learning about customer-focused engineering and the various ways and tools by which it can be adopted in our product processes. Shifts and Sparks is an example of one such tool that can help in building products that your customers love. 
Chermaine Li is an engineer on the Lync team at Microsoft. Chermaine has been on the Lync team for the past five years. She is currently interested in learning how to become a successful customer advocate in her team, and ensure that products are built with the customer in mind. |
P-85 |
Audit Effectiveness – Assuring Customer Satisfaction
Jeff Fiebrich, Freescale Semiconductor Inc. Whether a company is large or small, audits are an important factor in continually improving business practices. Audits can be used to identify best practices to be shared across the company, as well as to identify areas needing improvement. Good planning is imperative to a successful audit. Audit planning requires not only creating a schedule, but also ensuring that the appropriate people are available to be audited. Preparation is the key element in planning an effective and efficient audit. Planning for an audit begins with selecting the business process(es) to be audited. This may be determined by an external standard with which the company is required to comply, or by processes that management has deemed in need of improvement. Many processes are cross-functional or involve several departments. The auditor should build the audit around open-ended questions, which reveal more about each process. It is the auditor’s responsibility to ensure that the auditee is well trained and prepared. This is especially true for external or third-party audits. Coaching by the auditor improves the efficiency of the audit and is a good learning tool for all. This coaching enables the auditor to better understand the process they will be reviewing, and it enables the auditee to understand the requirements to which they must adhere. At first glance, gauging the effectiveness of your internal audit program may seem easy: if you have accomplished your objectives, the audit program is effective. But as you start to list the objectives of the organization, the department and the audit program, things start to get fuzzy. What methods will you use, and what measures do you need to monitor? An internal audit program, coupled with a properly deployed strategic plan and a well-designed system of indicators, can help an organization execute its strategy and achieve its performance goals. 
Annual assessments typically focus on very high-level processes (macro-processes) and systems, while quality management system audits typically focus on processes and their associated sub-processes. Strategic auditing is not meant to replace assessment tools, but rather to enhance and sustain their effects. Traditional verification audits are part of a robust risk-management process to mitigate potential unacceptable losses. The frequency of verification audits depends on the degree of risk and on performance history. But do not rule out other, non-traditional audit techniques. Appreciative inquiry is a discovery method that includes “the art and practice of asking questions that strengthen a system’s capacity to apprehend, anticipate, and heighten positive potential”. An appreciative inquiry audit helps to reveal and enhance what’s correct, or to discard what’s not. This creates a value-added experience that encourages the workforce by building solutions based on the fundamental truth that in every company, department or project, something works. Jeff Fiebrich is a Software Quality Manager for Freescale Semiconductor Inc. He is a member of the American Society for Quality (ASQ), has received ASQ certifications in Quality Auditing and Software Quality Engineering, and is a RABQSA International Certified Lead Auditor. A graduate of Texas State University with a degree in computer science and mathematics, he served on the University of Texas Software Quality Institute subcommittee in 2003–2004. He has addressed national and international audiences on topics ranging from software development to process modeling. Jeff has over twenty years of quality experience as an engineer, project manager, and software process improvement leader. He has led efforts in ISO certification and Software Engineering Institute (SEI) Maturity and SPICE assessments. Jeff has worked extensively on efforts in the United States, Israel, Europe, India, and Asia. 
Jeff is the co-author of the book ‘Collaborative Process Improvement’, Wiley-IEEE Press, 2007. Simon Lang is a Software Quality Manager for Freescale Semiconductor. Simon has 10+ years of experience in the high-tech industry. He has most recently led various improvement efforts for one of the software organizations using CMMI methodologies and Lean tools. Simon holds degrees in business management and is a certified internal auditor, a Lean facilitator and a Six Sigma green belt. Diane Clegg is a Quality Systems Engineer for Freescale Semiconductor, Inc. She is an American Society for Quality certified Internal Quality Management System (QMS) Auditor. With fifteen years of experience in the semiconductor industry, Diane served as the Project Manager for the Freescale Document Management System (DMS) project implemented via SAP. |
Copyright PNSQC 2020