Eight days into the 'mother of all exams', reports of hardware problems, software errors, poor administration, repeated questions and other testing issues are still coming in.
It is time the directors of the IIMs acknowledged that this experiment has failed and called for a re-test - in the old-fashioned paper-and-pencil format. A high-stakes exam like the CAT is an act of faith. When faith in its fair delivery and its character as a test of merit is in doubt, it no longer serves its purpose.
Just as in November 2003, when the paper physically leaked and the exam had to be re-administered.
But the issue goes beyond this year. At some level we have all accepted that computer-based testing is better and more efficient than traditional methods. Is that really the case?
The issue is not just hardware or software - it is more fundamental. This exercise is a 'CAT and mouse' game - and not just because the exam went online this time.
On the one hand you have a couple of hundred IIM professors, a handful of whom sit down, rack their brains and come up with the CAT paper. On the other hand, you have hundreds of 'experts' preparing mock CATs, analysing past papers and predicting future patterns. And many of these experts are IIM grads themselves.
When it came to setting one paper a year, the CAT could somehow pull it off. It remained distant, difficult, unpredictable - the Mount Everest of all exams. Putting it online - with a poorly set question bank - has taken it down to the level of a Sahyadri foothill.
This destroys everything the CAT has stood for - all these years.
Now, one can dispute whether the CAT actually selects people with the best managerial potential in the first place. An article posted on fairtest.org notes that in 1985 Harvard Business School (HBS) decided to eliminate the GMAT from its admissions process.
John Lynch, the Admissions Director at the time, gave several compelling reasons. In a blind test, Harvard found that admissions decisions made with and without the GMAT were essentially the same. Success at Harvard depended on intangibles such as motivation, interpersonal skills, perseverance and hard work – all factors not measured by GMAT.
Looking at undergraduate grade-point average (UGPA), ethics, leadership, community activities, prior work experience and the interview made GMAT scores "superfluous".
However, 11 years later HBS reinstated use of the test. The point is, as long as you are using a GMAT or CAT, let there be no doubts about the administration and standards of that test.
Which brings me back to the question of computer-based testing. In 2008 there was a GMAT cheating scandal involving Scoretop, where 'live' questions were posted on a members-only website. An article published at the time revealed some interesting facts:
1) The item pool is periodically refreshed, but the same questions are reused for at least several weeks.
2) The testing industry was well aware of the vulnerability of computer adaptive tests to what it calls "pre-disclosure." Before the 1993 introduction of the computer adaptive Graduate Record Exam (GRE), two researchers at the Educational Testing Service (ETS) wrote about their "fear [that examinees] will remember questions and reveal them to their friends or to a coaching school" and that "a group of examinees [might] memoriz[e] subsets of the pool and combin[e] their knowledge."
3) To expose the problem, staff from the Stanley Kaplan Education Center took the computerized GRE, compiled a list of items they had memorized, and presented it to ETS officials. ETS, which then administered the GMAT as well as the GRE, responded by suing Kaplan for copyright violations, even though the questions were never made public (see http://www.fairtest.org/ets-and-test-cheating).
4) After this incident, test-makers said they began using much larger item pools and changing them more frequently, but there is no proof of this claim. In 2006, ETS lost the GMAT testing contract after a series of administrative and scoring errors. The test is now run by the global conglomerate Pearson.
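The arithmetic behind points 1 and 2 is worth making concrete. Here is a back-of-the-envelope sketch (all pool sizes and leak counts are invented for illustration, not actual GMAT or CAT figures) of how quickly a slowly refreshed item pool becomes compromised: if a coaching group has memorised and shared even a modest fraction of the pool, the chance that any given candidate's test contains at least one leaked question is computed with a simple hypergeometric tail.

```python
from math import comb

def p_sees_leaked_item(pool_size: int, leaked: int, questions_per_test: int) -> float:
    """Probability that a test drawing `questions_per_test` items uniformly
    at random from a pool of `pool_size` items contains at least one of the
    `leaked` (memorised and shared) items.

    P(at least one leaked) = 1 - C(pool_size - leaked, q) / C(pool_size, q)
    """
    none_leaked = comb(pool_size - leaked, questions_per_test) / comb(pool_size, questions_per_test)
    return 1 - none_leaked

# Illustrative numbers only (hypothetical): 50 leaked items, 60-question test.
small_pool = p_sees_leaked_item(pool_size=500, leaked=50, questions_per_test=60)
large_pool = p_sees_leaked_item(pool_size=5000, leaked=50, questions_per_test=60)
print(f"small pool (500 items):  {small_pool:.3f}")
print(f"large pool (5000 items): {large_pool:.3f}")
```

With a 500-item pool, a candidate who has seen the 50 leaked items is almost certain to meet at least one of them on test day; enlarging the pool tenfold cuts that probability sharply, which is exactly why pool size and refresh rate are the crux of the integrity argument.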
Under 'Indian conditions', where the stakes are so high for both students and coaching classes, I think it will be far more difficult to maintain the integrity of the question bank!
In conclusion, computerised testing can work - but it requires herculean effort and a genuine partnership between the IIMs and the testing agency. It is not an exercise the IIMs can simply sub-contract, like housekeeping!
Brand IIM is like the venerable banyan tree, and the CAT forms the mighty roots of that tree. Destroy those roots and the very tree will start withering...
The IIMs must reclaim the CAT immediately, or they will lose the ground beneath their feet.
For more on the issues related to the effectiveness and fairness of the SAT, GRE and GMAT, see the articles on www.fairtest.org.