Issues in the Automated Testing of Web Applications using AJAX

Course:
Academic Writing MS (SPM)

Presented to:
Miss Sumaira Sarfraz

Presented by:
Muhammad Mohsin Imtiaz (13L-5461)
Zeeshan Alam Ansari (13L-5462)
Muhammad Ali Yousaf (13L-5467)

National University of Computer and Emerging Sciences Lahore, Pakistan

ABSTRACT
The challenges faced by the QA departments of software houses have grown considerably as web applications move to the Web 2.0 architecture. When AJAX (Asynchronous JavaScript and XML) is used in a web page, the page's complexity increases, and testing it demands more skill than testing pages built without AJAX. The added complexity has two causes. First, the state flow graph of the whole website is difficult to extract and is seldom included in the requirement specifications; even when it is, usually only a rudimentary version is given. Second, the effort a human tester needs to extract an exhaustive state flow graph is a considerable portion of the entire testing effort. The advances made by test automation tools offer little help for Web 2.0 testing precisely because the state flow graph of AJAX-based web pages is so difficult to obtain. This paper addresses automated testing of Web 2.0 pages by proposing a crawler designed specifically for the Web 2.0 architecture that extracts an exhaustive list of states of the website's interface, and by suggesting a test suite customized for the new platform.
CRAWLER
The underlying architecture of the internet has gone through a substantial change in the last 10 years. The new architecture is commonly called ‘Web 2.0’, and the term refers to the enhanced role JavaScript now plays in rendering web pages. User interactivity has increased, and content is frequently shaped dynamically, customized to the preferences (explicit and implicit) and requirements of the individual user. This is achieved by manipulating the Document Object Model (DOM) and by communicating asynchronously with the web server (a minimal sketch of this mechanism is given after the list below). Some of the most common and pervasive examples are Facebook, Gmail and the auxiliary products of Facebook Inc. and Google Inc. While this change has greatly improved the user experience, it has created hitherto absent challenges for the testers and developers of Web 2.0 pages, which include:
- “Search-ability”. This refers to the meta-tags and contextual information that every search engine extracts and stores for every website, irrespective of its architecture (Web 1.0 or Web 2.0). The first challenge in obtaining the meta-tags and contextual information for Web 2.0 pages is crawling. Almost all search engines rely on custom-built crawlers that extract meta-tag information from every web page. The problem with extracting this information from Web 2.0 pages is that a substantial part of the functionality is not hard-coded in the page source (and is therefore not reachable by traditional crawlers); it lies in the layer between the JavaScript and the web server and in on-the-fly changes to the DOM.
- “Testability”. Testability refers to the extent to which a piece of software, irrespective of its architecture and platform, can be tested by human or automated testers. Finding all the execution paths of a traditional web page is relatively easy, but for Web 2.0 pages this information is extremely difficult to obtain. One possibility is to use a more sophisticated and intelligent crawler that can drive the JavaScript to uncover all states of the user interface elements, e.g. buttons, drop-downs and lists (a sketch of such a crawler is given below). Search engine experts have been referring to the growing crisis of the “hidden web”, i.e. the millions of web pages that do not appear in any search results because, among other reasons, they use Web 2.0 elements and crawlers have not evolved enough to pull meta-tags from them. The crisis is worsening because web designers are increasingly switching from the traditional web to Web 2.0.
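
To make the DOM-and-asynchronous-communication mechanism concrete, the following sketch (in TypeScript, for a browser context) shows a typical AJAX interaction: the page fetches data from the server without a full page reload and rewrites part of the DOM in place. The endpoint /api/profile and the element ids are hypothetical and not taken from this paper; the point is that the inserted markup exists only at run time and never appears in the static HTML source that a traditional crawler downloads.

// Minimal AJAX sketch (hypothetical endpoint and element ids).
// The page asks the server for data asynchronously and patches the
// DOM in place; the resulting markup exists only at run time.

interface Profile {
  name: string;
  bio: string;
}

async function loadProfile(userId: string): Promise<void> {
  // Asynchronous request to the web server (no full page reload).
  const response = await fetch(`/api/profile?id=${encodeURIComponent(userId)}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const profile: Profile = await response.json();

  // Manipulate the DOM directly: this content never appears in the
  // static HTML that a traditional (Web 1.0) crawler would fetch.
  const panel = document.getElementById("profile-panel");
  if (panel) {
    panel.innerHTML = `<h2>${profile.name}</h2><p>${profile.bio}</p>`;
  }
}

// Triggered by user interaction rather than by navigating to a new URL,
// so the UI state changes without the address bar changing.
document.getElementById("load-button")?.addEventListener("click", () => {
  void loadProfile("42");
});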
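
The testability challenge suggests the kind of crawler this paper argues for: one that drives the page in a real JavaScript engine, fires events on candidate UI elements and records every distinct DOM state it reaches. The sketch below is one possible realization of that idea using the Puppeteer headless-browser library; the paper does not prescribe any particular tool, and a full crawler would also need back-tracking, form-input generation and a proper state flow graph rather than the flat list of states collected here.

// Sketch of a Web 2.0-aware crawler: load the page in a headless
// browser, click candidate UI elements, and record each distinct DOM
// state that results. Puppeteer is used purely for illustration.
import puppeteer from "puppeteer";
import { createHash } from "crypto";

async function crawlStates(url: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const seen = new Set<string>();   // hashes of DOM snapshots already visited
  const states: string[] = [];      // serialized DOM of each new state

  const record = async () => {
    const dom = await page.content();               // current DOM as HTML
    const hash = createHash("sha1").update(dom).digest("hex");
    if (!seen.has(hash)) {
      seen.add(hash);
      states.push(dom);
    }
  };

  await page.goto(url, { waitUntil: "networkidle0" });
  await record();                                    // initial state

  // Candidate clickables are approximated by a fixed selector list
  // (a simplification; real crawlers also inspect attached event handlers).
  const clickables = await page.$$("a, button, [onclick]");
  for (const element of clickables) {
    try {
      await element.click();
      // Give asynchronous requests and DOM updates a moment to settle.
      await new Promise((resolve) => setTimeout(resolve, 500));
      await record();
    } catch {
      // The element may have become detached after a DOM mutation; skip it.
    }
  }

  await browser.close();
  return states;
}

// Example usage:
// crawlStates("https://example.com").then((s) => console.log(s.length, "states found"));

The list of serialized DOM snapshots returned by such a crawler is exactly the raw material that both challenges above lack: it gives a search engine the dynamically generated content to index, and it gives a tester the set of user interface states from which a state flow graph and a test suite can be derived.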