The stakes for Microsoft, which was outlining its Office 2010 product strategy, were extremely high. According to Microsoft's earnings statements, the Microsoft Office productivity suite generates more revenue than any other business division, says Gregg Keizer, who covers Microsoft and general technology news for Computerworld. Months before Microsoft released the Office 2010 productivity suite, 9 million people downloaded the beta version to test the office automation software and provide feedback. Through this program, Microsoft collected 2 million valuable comments and insights from those testers.
Denise Carlevato, a Microsoft usability engineer for 10 years, and her colleagues from Microsoft's Virtual Research Lab observed how people used the new features. Their objective was to make Microsoft Office fit the way millions of people used the product and to help them work better. It was a massive, controlled crowdsourcing project.
Developing a new software product is always exciting, especially watching ideas take form and become reality. Sometimes a fresh perspective or an innovative use case is all it takes to turn a product from good to great. When it comes to testing, however, we often find ourselves in uncharted waters, wondering whether the product will actually work across diverse customer environments. It is virtually impossible to test the vast number of devices and software configurations that web-based software can run on today. Truly robust testing is time consuming, and ensuring that every possible permutation and combination of features, localizations, and platforms works as intended is nearly impossible.
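To make the combinatorial point concrete, the short sketch below (a hypothetical illustration with made-up configuration dimensions, not tied to any particular product) enumerates a modest test matrix with Python's itertools and shows how quickly the number of configurations grows.

```python
from itertools import product

# Hypothetical configuration dimensions for a web-based product.
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
operating_systems = ["Windows 11", "macOS 14", "Ubuntu 22.04", "Android 14", "iOS 17"]
locales = ["en-US", "de-DE", "fr-FR", "ja-JP", "pt-BR", "ar-SA"]
screen_sizes = ["mobile", "tablet", "desktop"]
feature_flags = ["flag_on", "flag_off"]

# The full cross-product of configurations that would each need testing.
configurations = list(product(browsers, operating_systems, locales,
                              screen_sizes, feature_flags))
print(f"Configurations to cover exhaustively: {len(configurations)}")
# 4 * 5 * 6 * 3 * 2 = 720 combinations from just five small dimensions;
# real products add devices, versions, and features, so the matrix explodes.
```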
Oftentimes, comprehensive testing is a challenge and buggy code is delivered to the customer. For example, if a Software-as-a-Service (SaaS) application does not render in a particular browser, or a critical software tool fails to deliver its intended functionality, a bug fix or a patch is promised and the vicious cycle starts all over again. Either way, the customer bears the brunt of inadequate testing, especially when faced with the escalating costs of software maintenance and degraded performance. For the software development company, the ramifications include damage to brand image, perceived quality, customer relationships, trust, and potential future projects.
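As a hedged illustration of the kind of check that slips through, here is a minimal cross-browser smoke test using Selenium WebDriver (assuming Chrome and Firefox are installed locally; the URL is a placeholder, not a real product under test). It only verifies that a page loads and a key element renders, which is exactly the sort of check a single in-house environment can miss on other browsers.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

URL = "https://example.com"  # placeholder URL for illustration only

def smoke_test(driver_factory, name):
    """Load the page and confirm a key element renders; report pass/fail."""
    driver = driver_factory()
    try:
        driver.get(URL)
        heading = driver.find_element(By.TAG_NAME, "h1")
        ok = heading.is_displayed() and bool(driver.title)
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
    except Exception as exc:  # missing element, render failure, timeout, ...
        print(f"{name}: FAIL ({exc.__class__.__name__})")
    finally:
        driver.quit()

# Run the same check in two browsers; a page that renders fine in one
# may still fail in the other.
smoke_test(webdriver.Chrome, "Chrome")
smoke_test(webdriver.Firefox, "Firefox")
```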
Welcome to the new world of crowdsourced testing, an emerging trend in software engineering that exploits the effectiveness and efficiency of crowdsourcing and the cloud platform for software quality assurance and control. With this form of software testing, the product is put to the test on diverse platforms, which makes the testing more representative, reliable, cost-effective, and fast, and the product, above all, far closer to bug-free.
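One way to see what testing on diverse platforms buys you is in how results from many testers are aggregated. The sketch below (plain Python with made-up report data, purely illustrative) groups crowd-submitted test reports by platform so that platform-specific failures stand out, something a single test lab would be unlikely to surface.

```python
from collections import defaultdict

# Hypothetical reports from crowd testers: (tester, platform, test_case, passed)
reports = [
    ("tester_01", "Windows 11 / Edge",      "login",    True),
    ("tester_02", "macOS 14 / Safari",      "login",    False),
    ("tester_03", "Android 14 / Chrome",    "checkout", True),
    ("tester_04", "macOS 14 / Safari",      "checkout", False),
    ("tester_05", "Ubuntu 22.04 / Firefox", "login",    True),
]

failures_by_platform = defaultdict(list)
for tester, platform, test_case, passed in reports:
    if not passed:
        failures_by_platform[platform].append((test_case, tester))

for platform, failures in failures_by_platform.items():
    print(f"{platform}: {len(failures)} failure(s) -> {failures}")
# Both failures cluster on macOS 14 / Safari, a platform the
# in-house team may never have covered.
```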
Crowdsourced testing, conceived around a Testing-as-a-Service (TaaS) framework, helps companies reach out to a community to solve problems and remain innovative. When it comes to testing software applications, crowdsourcing helps companies reduce expenses, shorten time to market, expand testing resources, manage a wide range of testing projects, meet test-competence needs, respond quickly to high defect rates, and use a third party's test environment to meet project requirements.
It differs from traditional testing methods in that the testing is carried out by many different testers from across the globe rather than by locally hired consultants and professionals. In other words, crowdsourced testing is a way of outsourcing software testing, a time-consuming activity, to testers around the world, enabling small startups to use ad-hoc quality-assurance teams even though they could not afford traditional quality-assurance teams of their own.
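To make the distribution mechanics concrete, here is a small sketch (hypothetical tester names and test cases) of how an ad-hoc crowd might be assigned work round-robin, the kind of lightweight coordination that lets a startup run a globally distributed QA effort without a dedicated team.

```python
from itertools import cycle

# Hypothetical pool of crowd testers and the test cases to distribute.
testers = ["ana_br", "wei_cn", "priya_in", "tomas_cz"]
test_cases = ["signup_flow", "password_reset", "checkout", "invoice_pdf",
              "locale_switch", "mobile_layout", "api_rate_limit"]

# Round-robin assignment: each tester receives roughly the same share of cases.
assignments = {tester: [] for tester in testers}
for tester, case in zip(cycle(testers), test_cases):
    assignments[tester].append(case)

for tester, cases in assignments.items():
    print(f"{tester}: {cases}")
```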