These poster papers were presented at PNSQC 2010.
ID: 10 |
Achieve Quality through GUI Test Automation in Complex Environments Mohan Das Gandhi G, Tektronix Software testing becomes more crucial by the day, since applications are developed in different languages on different operating systems, and the complexity of software keeps growing. The difficulty of software testing stems from that complexity. Regardless of complexity, we need to achieve high quality in our products. Test automation is one of the most robust and fastest solutions for achieving quality in a complex environment. Automated testing can provide several benefits when it is implemented appropriately. The significant benefits of automated tests are:
Due to the growing complexity of the software, there are multiple challenges in GUI test automation:
What if the testing tool has limitations that prevent it from supporting all of the above challenges? Is it possible to overcome all the challenges in a single automation tool? The answer is yes. In our organization, we have developed our own GUI automation tool called TekRobo. This paper describes:
Mohan Das Gandhi holds a Master’s degree in Applied Electronics from PSG CAS, Coimbatore, Tamil Nadu, India. He is presently employed by a US-based company, Tektronix Engg Devp. (I) Pvt Ltd in Bangalore, as a Software Quality Leader. He has worked for Tektronix for the past 10 years and, overall, has 12 years of experience in the IT industry. He has vast experience in GUI test automation tool development and has done a lot of research in GUI test automation. Last year he submitted two papers ("Improvements in GUI testing of TekScope application for DPO/DSA Tektronix Oscilloscope Series" and "Automated GUI Stress Testing using GUIStressRobo") to PNSQC, both of which were selected as poster papers. |
ID: 18 |
Software Quality Assurance in the Physical World Kingsum Chow, Intel Software testing on a computer system starts with a known initial state. Testing is conducted to assure the functional quality of the software in an environment that is controlled by the computer. Software written for a robot faces additional challenges. The software on the robot needs to function in a physical environment where the initial state may not be precisely known and the behavior is not precisely controllable. On top of these problems, the software on the robot faces a complex environment. This paper explores the challenges of testing software on a robot in the physical world. Through the case study of running a LEGO robot with motors and sensors on a FIRST LEGO League competition table, it characterizes the environmental factors, some of which are not controllable. Through the sensors available on the robot, it characterizes the uncertainties in the sensor readings and in the location of the robot. It then describes how a software testing process is employed to test the robot. The software testing approach in the physical world has only scratched the surface of the complexity of quality assurance for a robot. Early results from the case study demonstrate that taking many sensor readings and establishing a relationship between the sensor data and the results of the experiments can simulate testing on the equivalent of many different environments and reduce the number of runs needed to catch failures. The contributions of this paper are:
Kingsum Chow is a principal engineer from the Intel Software and Services Group (SSG). He has been working for Intel since receiving his Ph.D. in Computer Science and Engineering from the University of Washington in 1996. He has published more than 40 technical papers and patents. Outside of work, he has coached a FIRST LEGO League team since 2006. |
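The abstract above proposes characterizing sensor uncertainty from repeated readings. A minimal sketch of that idea — the light sensor here is simulated and all names are illustrative, not taken from the paper:

```python
import random
import statistics

def characterize_sensor(read_fn, samples=100):
    """Take repeated readings and summarize their uncertainty."""
    readings = [read_fn() for _ in range(samples)]
    return statistics.mean(readings), statistics.stdev(readings)

def within_tolerance(reading, mean, stdev, k=3):
    """Accept a reading that falls within k standard deviations of the mean."""
    return abs(reading - mean) <= k * stdev

# Simulated light sensor: true value 50 with Gaussian noise.
random.seed(42)
light_sensor = lambda: random.gauss(50, 2)

mean, stdev = characterize_sensor(light_sensor)
assert within_tolerance(51, mean, stdev)
```

With the uncertainty characterized once, a test run can flag readings outside the tolerance band instead of re-running the robot in many physical environments.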
ID: 20 |
Abandoning Spreadsheets for Test Data – Case Study with TestLink At the end of 2009, our group was struggling with lots of test data and spreadsheets that could not capture the information. To grow our capabilities, before we chucked the spreadsheets, we installed and configured the open-source tool TestLink to help manage our data. So far, it has worked great, even though we can’t use many of the automation capabilities because of the nature of our testing. It has, however, enabled us to better organize our testing as well as consolidate to one repository. There are still some gaps in TestLink, but the active development and improvement lists show promising growth. This is a case study, with data from more than a year of use. Richard Vireday is a long-time PNSQC contributor and volunteer. |
ID: 47 |
Hacker’s Envy, Tester’s Pride: Top 10 Commandments for Securing your Web Applications Most of us are aware of the consequences that can occur when a security flaw is exploited by a malicious user. Adding security to our test agenda has gained a lot of momentum in recent years. Web applications face ubiquitous and ever-growing security threats from nasty users, insiders, hackers and others. This paper talks about how to test the security of web applications in a comprehensive manner. There is a plethora of resources trying to address questions like “Why security?”, “What to test for security?”, “Who should test security?”, “When to start security?” or “Where to look for security?”. In this paper you will find the answer to one of the basic yet very vital questions: “How to test the security of web applications?”. In this paper presentation the authors have framed guidelines that security testers can adopt while performing security testing for web applications. With the help of these guidelines, one would be able to produce repeatable, consistent tests that fit into the overall testing strategy and address the security side of web applications. Building security tests using these guidelines will help you to discover flaws in the source itself: faults in how it executes its business functions. The guidelines explained in this paper are quite comprehensive but by no means exhaustive. This paper explains the top 10 ways to test a web application in a meticulous manner. These include both manual and automated ways of testing web applications’ security, and, more importantly, we will focus on “how” to test these security areas of a web application with real examples and demos. Here is a glimpse of what we will be discussing in this paper as part of the top 10 ways to test web applications:
Sushil Karwa is a QA Technical Lead at McAfee. He has a Master’s degree in Quality Management from BITS Pilani University, India. His testing and QA experience includes a focus on white box test automation for a web-based product. He has been associated with McAfee for the last 7 years. His role involves identification of security risks, threat model preparation, security code audits and test planning for a security management framework. At McAfee, he has been involved in initiating and implementing the white box test automation process for various product teams. Sushil is a Certified Ethical Hacker and Security Analyst from EC-Council, USA. Sasmita Panda is a Senior QA Engineer at McAfee and works with the Host Intrusion Prevention System product team. She has a Master’s Degree in Computer Application from Madras University, India. Her testing and QA experience includes a focus on white box testing automation, security testing and identification of different techniques for measuring code coverage. |
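One of the “how to test” areas abstracts like this typically cover is cross-site scripting. A minimal, hedged sketch of an automated XSS probe — the render function and payload list are invented for illustration, and a real test would exercise the application’s actual HTTP responses rather than a local function:

```python
import html

# Hypothetical page-rendering function under test; stands in for
# fetching the application's response to the same payload.
def render_greeting(user_input):
    return f"<p>Hello, {html.escape(user_input)}!</p>"

# A tiny illustrative payload list; real suites use far larger ones.
XSS_PAYLOADS = [
    "<script>alert(1)</script>",
    '"><img src=x onerror=alert(1)>',
]

def is_vulnerable(render_fn):
    """Return True if any payload survives into the output unescaped."""
    return any(payload in render_fn(payload) for payload in XSS_PAYLOADS)

assert not is_vulnerable(render_greeting)          # escaped output passes
assert is_vulnerable(lambda s: f"<p>{s}</p>")      # raw echo fails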
ID: 48 |
Agile Performance Testing Vikram Gopinath, McAfee India Private Limited Uday Ravi Kempegowda, McAfee India Private Limited Achieving quality in a complex environment can be very difficult. A main factor in the “complex” environment is that there is so much to focus on but so little time. With methodologies like agile development gaining significance by the minute, it is very hard for QA to keep up with the rapid pace of the project. Similarly, development is also finding it hard to focus on the non-functional aspects of the product, like performance and security. This paper outlines the approach that we at McAfee adopted to overcome these challenges with the goal of achieving quality in a complex environment. The paper primarily focuses on a few practices that can reduce the time taken for performance testing during the agile project life cycle while providing equal if not better test coverage as compared to the traditional project life cycle. The practices the paper will explore include:
The key contents of this paper are along these lines:
Vikram Gopinath works for the enterprise system security solutions team at McAfee India Private Limited. Vikram has experience in areas like performance, security and functional testing. Uday Ravi Kempegowda works for the enterprise system security solutions team at McAfee India Private Limited. Uday has experience in areas like performance, security and functional testing. |
ID: 50 |
Web Services Testing Made Easy Meera Subbarao, Cigital, Inc. Although the concepts behind service-oriented architecture (SOA) have been around for over a decade, SOA has gained extreme popularity of late due to web services. If you’re in the market for a superb tool for testing web services, SoapUI will fit the bill. SoapUI supports functional testing of web services by providing a TestCase metaphor in which a number of TestSteps can be executed in sequence. It also allows easy content transfer between responses and requests. The paper presentation will include four parts:
Meera Subbarao works as a Senior Software Consultant for Cigital, Inc., which specializes in software security and quality. Meera has eighteen years of software programming experience. Prior to joining Cigital, Meera was a Senior Software Consultant for Stelligent, where she helped customers create production-ready software using Agile practices. She is a Sun Certified Java Programmer as well as a Sun Certified Web Component Developer. Meera has articles published regularly in online publications and frequently speaks at conferences like SD West 2009 and Software Test and Performance Conference. |
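SoapUI drives such tests from a GUI, but the underlying mechanics — composing a SOAP envelope and pulling values out of the response — can be sketched with the standard library. The stock-quote service namespace and element names below are invented for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(operation, params, service_ns="http://example.com/stock"):
    """Build a SOAP 1.1 request envelope as a string."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

def extract_result(response_xml, tag):
    """Pull a single element's text out of a SOAP response."""
    tree = ET.fromstring(response_xml)
    elem = tree.find(f".//{{*}}{tag}")  # wildcard namespace (Python 3.8+)
    return elem.text if elem is not None else None
```

A test step then asserts on the extracted value, much as a SoapUI assertion would, e.g. `extract_result(response, "Price") == "34.5"` for a canned response.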
ID: 65 |
Monitoring Server Reliability in Production Systems as a Metric for Driving Software Quality Wayne Roseberry, Microsoft Server products operate under complex conditions with unpredictable workload characteristics and concurrency rates. One way to solve part of this problem is to establish monitoring tools and methods that map an end-user view of software quality to a product-engineering view of system flaws and failures. By basing the metrics on experiences that map directly to impact, you can focus the product team’s time on issues that will truly matter to the customer. By building a monitoring solution that can be applied both in production systems and in test environments, you enable a quality feedback loop that facilitates improved test engineering, product design, investigational methodology and ship-schedule management. This paper presentation will describe how the SharePoint 2010 team deployed such a solution for monitoring both in-house production environments and performance and stress testing lab environments. It will describe the components in the solution that span in-product instrumentation, custom solutions built on existing product features, as well as team methods and processes for monitoring and triaging issues coming from production systems. It will demonstrate the gains we achieved in bug discovery yield, improved fix rates and more successful pre-release production runs. Wayne Roseberry has 20 years of experience working in the software industry. He is currently employed as a software test lead at Microsoft. Projects at Microsoft include MSN 1.0 – 2.0, Site Server and all releases of SharePoint from version 1 through 2010. |
ID: 71 |
The Automation Endeavour: Notes from the Venture David Paul, Vertex Business Services Test automation has the potential to greatly increase the efficiency and effectiveness of a test team’s resources. However, it also has the potential to be a substantial drain on these same resources if not implemented strategically. It was with this in mind that we set out to utilize test automation in order to boost the efficiency of our software testing effort, increase the quality of our product, support the development effort and meet our on-time delivery requirements. The following paper is a case study that describes our journey towards developing and implementing our department’s first automated test suite. I will describe the decisions and actions that we took along the way, including considerations of what to automate, how we determined the approach and an explanation of the life cycle of our pilot test automation project. From there, I will explain the transition that took place as our automation project progressed into what it is today: useful both as a testing and development tool. |
ID: 77 |
5 Ways to Improve Software Quality using Continuous Integration Meera Subbarao, Cigital, Inc. One of the primary goals of every developer should be to prevent, or drastically limit, the number of bugs or defects introduced into their source code. It is also our responsibility to write good, extensible, testable and maintainable code. This, however, seems like a herculean task to many. There is a wide array of metrics tools that can be run on your projects, covering code coverage, complexity, coupling, bugs, tests, suspicious code, style violations, copy/paste detection, performance measurements, dependency analysis and more. Running these tools every time we write code becomes a nightmare. However, there is an easy solution for all this. This paper presentation shows how to integrate all these tools with your IDE. If that isn’t sufficient, the paper also shows how to automate these tools to run using a Continuous Integration (CI) server. Once these tools are automated and integrated with the CI server, it can be a struggle to get a single, simple view of the project and, even more importantly, of how the project’s metrics are changing over time. Also covered in this paper presentation is how to create a simple dashboard which can produce a single-page view of all these metrics, including trending graphs and analysis of the entire codebase. Meera Subbarao works as a Senior Software Consultant for Cigital, Inc., which specializes in software security and quality. Meera has eighteen years of software programming experience. Prior to joining Cigital, Meera was a Senior Software Consultant for Stelligent, where she helped customers create production-ready software using Agile practices. She is a Sun Certified Java Programmer as well as a Sun Certified Web Component Developer. Meera has articles published regularly in online publications and frequently speaks at conferences like SD West 2009 and Software Test and Performance Conference. |
ID: 78 |
Continuous Integration in a Nutshell Meera Subbarao, Cigital, Inc. Do you know what’s in production today? Can you reproduce the same set of binaries and database from the SCM every time? Are you capable of running a complete compilation without an IDE? Does the automated build provide an automated capability for updating an existing database? Are developers able to take a new, clean machine and type ant to get working software? Is there a capability to roll back a deployment (binary artifacts and database)? Is it possible for anyone to click a button in a build server to run an automated integration build that builds and deploys software to a remote environment? If you answered NO to all of the above, then this paper presentation will help you understand the benefits of using Continuous Integration (CI). The presentation also focuses on other aspects of CI, like Continuous Testing, Continuous Deployment, Continuous Inspection, Continuous Documentation and Continuous Feedback. Meera Subbarao works as a Senior Software Consultant for Cigital, Inc., which specializes in software security and quality. Meera has eighteen years of software programming experience. Prior to joining Cigital, Meera was a Senior Software Consultant for Stelligent, where she helped customers create production-ready software using Agile practices. She is a Sun Certified Java Programmer as well as a Sun Certified Web Component Developer. Meera has articles published regularly in online publications and frequently speaks at conferences like SD West 2009 and Software Test and Performance Conference. |
ID: 92 |
Mining for Gold: Bug Isolation Jean Ann Harrison, CardioNet, Inc. Molly Mahai The task of writing bug reports is a familiar one for software testers. Testers see an error message or witness unacceptable behavior in an application and create bug reports in their bug-tracking tool. But is that error message the real bug, or was the code written to indicate something went wrong? What about slow performance of a web page loading based on one input? Is the input the bug? Is the reaction from the application due to the input or a bug? This presentation will not only address general testing practices in finding bugs; testers will learn to mine the bounty and dig deeper. Go beyond symptoms like error messages and explore behavior patterns to discover gold mines of information. Molly & Jean Ann will expand on the many benefits gained when software testing resources are dedicated to providing more descriptive information about problems to Development. Attendees will learn how to recognize symptoms and be shown testing techniques for exposing root causes of bugs. Real-life situations will be shared with attendees, along with step-by-step exercises to expand attendees’ skill set. When an error condition exists, Molly & Jean Ann will not only repeat the steps to confirm repeatability but also explain what kinds of other variables can be added to or removed from those steps to expose more information about the behavior. Why did one error appear when steps 1, 2, 3, 4 and 5 were implemented but not when steps 1, 3, 4 and 5 were implemented? Finally, what knowledge does a gold-mining bug reporter need to be successful? Throughout the session, Molly & Jean Ann will draw on personality traits and helpful technical skills to further expand upon the golden nuggets of bug reporting capabilities. Jean Ann Harrison is a Lead Quality Assurance Engineer at CardioNet, Inc., providing ambulatory cardiac monitoring service for physicians’ patients.
Jean Ann is currently the software quality assurance lead on the next generation mobile heart monitor device and has been the lead on all embedded software testing at CardioNet. Jean Ann’s background also includes a variety of projects of large multi-configured applications for client/server, web, Unix and mainframe systems. Her experience is primarily manual testing with occasional automation and a strong focus on building quality into design. Constantly working to perfect her craft, Jean Ann attends and presents at conferences, takes courses, networks and actively participates in software testing forums. She believes software testing takes daily practice to contribute to a project’s success. |
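The step-elimination question above (why does the bug appear with steps 1–5 but not with 1, 3, 4, 5?) can be automated; the idea is in the spirit of delta debugging. A minimal sketch, with a toy bug predicate standing in for a real replay of the application:

```python
def minimize_steps(steps, triggers_bug):
    """Greedily drop steps that are not needed to reproduce the failure.

    `triggers_bug` replays a step sequence and returns True if the bug
    still appears; in real testing this would drive the application.
    """
    needed = list(steps)
    changed = True
    while changed:
        changed = False
        for step in list(needed):
            candidate = [s for s in needed if s != step]
            if triggers_bug(candidate):
                needed = candidate
                changed = True
    return needed

# Toy failure: the bug reproduces whenever steps 1 and 4 are both present.
bug = lambda seq: 1 in seq and 4 in seq
assert minimize_steps([1, 2, 3, 4, 5], bug) == [1, 4]
```

The minimized sequence is exactly the descriptive detail a “gold mining” bug report wants: the smallest set of steps that still exposes the failure.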
ID: 97 |
Scrum for Validation Teams Mitali Monalisa, Intel Srivathsan Bhargavi, Intel The scrum process was developed for teams that handle the different functions of software product development as one organization. With the scope and size of products growing, these pieces are handled by distinct teams. It becomes imperative to understand how scrum processes can be tailored to best fit individual validation teams, which are primarily consumers of the deliverables from the development teams. This paper talks about the best-known methods identified during the adoption of scrum by product validation teams. It provides a list of suggested tools, processes and communication mechanisms to maximize the benefits of scrum. Results/Impacts and Lessons Learned:
Mitali Monalisa is a software engineer with eight years of industry experience. She has been with Intel since 2005 in various roles of implementing and leading software projects. |
ID: 99 |
Virtual Extreme Programming Workbench: a Support Tool for Practitioners of Extreme Programming in a Distributed Environment Richard Nieuwenhuis, Lex Bijlsma, Frans Mofers The Extreme Programming (XP) software development methodology relies heavily on the co-location of team members. The Extreme Programming team, according to originator Kent Beck, should sit close to one another in an open environment where the whole team can see each other. This enables easy face-to-face communication between team members, which in turn improves team awareness, the team’s knowledge management and the social atmosphere. Nowadays, outsourcing and teleworking are becoming more common, meaning that XP practitioners need to adapt their daily XP practices for a distributed setting. XP in a distributed setting is called Distributed Extreme Programming (DXP). Research on DXP is growing but is still relatively scarce compared to research on XP itself. In most cases, distributed XP teams have their own interpretation of DXP. These teams use (existing) tools that are a direct translation of a practice, without considering that the practice may work better in a distributed environment in an adapted form. Other problems, which are mostly not tackled, are time-zone and cultural differences. The social aspect, one of the key characteristics of XP, is also almost completely ignored most of the time. There is evidence, however, that adhering to certain graphical rules and functional flows has a positive effect on the social feeling a person has with software. This means that software users can have a positive social state towards a tool, making it ideal to incorporate these types of rules in a DXP tool. The Virtual Extreme Programming Workbench (VXPW) is a proposed tool that incorporates the Extreme Programming practices for a distributed environment and adheres to the XP philosophy of being social.
The social aspect of the VXPW, and its tackling of the problems distributed XP practitioners face, will have a positive effect on the software quality DXP teams produce. Richard Nieuwenhuis started his academic career at Utrecht University, where he finished his master’s degree in Software Technology at the beginning of 2006. After two years of employment as a Java application developer, Richard is currently employed at an interactive media bureau, where he is responsible for the corporate online applications. For over a year, Richard has been involved part-time with the Open University for his PhD research. In his research, Richard tries to find a suitable adaptation of the Extreme Programming software development methodology for distributed environments.
ID: 100 |
Technique to Reduce a Set of Test Cases to a Minimal Subset without Loss of Code Coverage Rohit Kulshreshtha, Adobe Systems The size of a test suite for a given functionality increases with the introduction of new test files. Often, test files are introduced without considering the redundancy they add in terms of code coverage. It is possible that we may get similar code coverage with a smaller set of files. The codebase may contain a large number of ‘points’ of code coverage that are touched during the execution of the code. Also, the test suite itself may contain a large number of test cases, each exercising a different set of code coverage points. We describe a technique that enables us to find the smallest subset of test cases that exercises the exact same set of code coverage points as the test suite itself. We also demonstrate a prototype of this application that uses code coverage data supplied by a commercially available code coverage tool. The immediate benefits of such an exercise are:
The subset can be identified based on multiple criteria:
Rohit Kulshreshtha is a Member of Technical Staff at Adobe Systems (Noida, India). He is a developer in CoreTech – a team that creates shared components that form the building blocks of most of Adobe’s products. Rohit has worked on several key performance enhancements in PDF creation workflow. He has also been responsible for designing innovative architectures for components. Rohit is a member of the “Tiger Security Team” responsible for performing security reviews of CoreTech components. Rohit takes a keen interest in helping improve quality processes. Prior to joining Adobe, Rohit worked as an independent freelancer. |
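The core of such a technique can be framed as a set-cover problem over coverage points. A greedy sketch — this is a standard approximation for the NP-hard exact minimum, so the paper’s own algorithm may differ, and the test names and coverage points below are invented:

```python
def minimize_suite(coverage):
    """Greedily pick test cases until every coverage point is covered.

    `coverage` maps test-case name -> set of code-coverage points it hits.
    Greedy set cover is not guaranteed to find the true minimum, but it
    preserves the exact same set of covered points as the full suite.
    """
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        # Pick the test that covers the most still-uncovered points.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

suite = {
    "t1": {"a", "b", "c"},
    "t2": {"b", "c"},
    "t3": {"c", "d"},
    "t4": {"d"},
}
assert set(minimize_suite(suite)) == {"t1", "t3"}
```

Here t2 and t4 add no new coverage points, so two of four tests suffice; in practice the `coverage` map would be populated from the code coverage tool’s data.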
ID: 101 |
The Evolution of a Test Engineer Becki Bloch, Vertex Business Systems Complex software systems are no longer confined to the business and professional realms. As systems have become integrated into our daily lives, the need for simplicity is often in inverse proportion to the “behind the scenes” complexity. Becki Bloch has over 20 years of experience in the software industry. She has held positions as an application developer, systems programmer, Test Engineer and Quality Control Manager. In her current position as a Senior Test Engineer at Vertex Business Systems, she performs a variety of roles, from business analyst and solution lead to test lead and tester. Becki is a certified scrum master and recently obtained a degree in Web Development. She is passionate about software testing, working collaboratively with team members and clients, and continuing to learn and positively influence the quality of software solutions developed at Vertex. |
ID: 103 |
Developing an Automated Testing Framework: Process and Challenges Laura Bright, McAfee Anand Iyer, McAfee Test automation poses many challenges. The automation tools and framework need to be robust to ensure that tests are executed in a timely manner and results are accurate. The choice of automation tools is important. UI-based tools are useful for testing the end-user experience and executing functionality not testable through the back end, but they may be slow and error-prone. Internally developed back-end tools are useful for testing underlying functionality and are robust to UI changes and unexpected errors, but they need to be thoroughly tested and maintained to ensure they test all required functionality and report accurate results. Successfully automating a test suite requires determining which automation tool is most appropriate for each test case, and ensuring that the testing tools are sufficiently accurate and robust to implement each test case. Seamlessly integrating automation tools and components and achieving full end-to-end automation present additional challenges. A final challenge is establishing best practices and coordinating efforts across multiple geographic locations. This poster presents our experiences addressing these challenges in an end-to-end automation framework. Our framework includes a back-end automation tool (VTAF) developed internally using C#, and a widely used UI automation tool, QTP. Our framework integrates with MAGI, a company-wide test automation framework. The MAGI framework includes mechanisms to automatically launch tests on new builds, build the test rig, execute tests, and report results to a web server. The framework also includes a shared code base maintained by a core team and leveraged by all automation teams within the organization. We present an overview of the design of our automation framework and present the processes we established to implement this framework.
We treat the entire automation framework as a product and follow a well-defined software process for developing and maintaining this product, including all of the following:
We present the design of our integrated automation framework and outline the details of each of these practices as we developed the framework. We summarize our experiences and lessons learned, and discuss future directions. Laura Bright is a Senior QA Engineer at McAfee in Beaverton, Oregon, where she specializes in test automation. She has worked in the area of software quality for several years. Previously she worked as a research associate at Portland State University, where she specialized in scientific data management. She holds a BA from Dartmouth College and MS and PhD degrees in Computer Science from the University of Maryland at College Park. |
ID: 104 |
Exploring Touch Testing: A Hands-on Experience Elizabeth Marley, Omni Group Although touchscreens have been around for decades, their capabilities and prevalence are rapidly expanding. They’ve moved from huge, expensive, special-purpose business kiosks in airport terminals and banks to mobile devices that fit in our pockets and toddlers’ hands. They’re still used for business, but also for entertainment, games and phone calls. As touchscreens move from niche markets to the mainstream, more testers will be testing software targeted at these devices. Test strategies based on physical buttons — on keyboards, mice and cellphones — need to be updated or replaced with new approaches that account for new interaction models and the variations in human hands. This poster paper will highlight differences between traditional monitor/keyboard/mouse computers and mobile/touchscreen devices. Many of the issues discussed will be accompanied by hands-on examples. While the author provides examples based on her own experience with Apple’s iPhone OS devices, the principles should be easily generalizable to other mobile touchscreen devices. Liz Marley tests the Omni Group’s Mac, iPhone and iPad productivity software. Liz has 7 years of professional testing experience, and 4 years of debugging homework assignments while earning a CS degree at Harvey Mudd College. Outside work, Liz accidentally finds bugs in knitting patterns and video games. |