As promised, I want to raise a couple of items from last week’s conference that I found very interesting.
On a panel session billed as “Ask the Experts”, one of the panelists launched into a justification of the MCQ-style questions on the basis that they were at least as good as the “old”, essay-style exams. The rationale went as follows.
He had marked a Service Delivery paper where the candidate had achieved 0 marks for a question on Financial Management and 1 mark for a question on Capacity. However, they had scored sufficient marks on each of the other 3 questions to take them to the required 50% level. This apparently proved something; I guess that he considered the candidate didn’t deserve to pass?
To me this proves nothing. First, we don’t know whether the candidate read the questions and decided that their best chance of passing lay in answering the 3 questions about which they were really confident in an exemplary manner, rather than spending time on providing 2 poor answers; or whether one or both of the “poor” answers simply resulted from the candidate misreading the question (it happens frequently!). This is therefore not prima facie evidence that the candidate knew nothing about the subject(s). The fact is that the requirement for passing the exam was to achieve 50% on the paper, and they did just that.
Second, what is the difference with the MCQ scenario? There are 8 questions on the Intermediate papers. A candidate could get 2 questions completely wrong (the 0 point option) and another nearly right (3 pointer) and still achieve the 28 marks needed to pass. Supposing that those 2 wrong answers were on Finance and Capacity, the situation is no better than in the essay style. Actually, I think it is probably worse, because the candidate may have merely guessed the right answers to one or more of the others; so there is even less evidence that they really do know the subject. This is without taking into account the other factors about relevance that I discussed last week.
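For the sceptical, the arithmetic of that marginal pass is easy to check. A minimal sketch, assuming (as the scoring described above suggests, though the post doesn’t spell it out) that each of the 8 questions is marked on a 5/3/1/0 gradient, giving a 40-mark paper with a 28-mark (70%) pass threshold:

```python
# Sanity check of the marginal-pass scenario: two zero-scoring answers,
# one "nearly right" answer, and five perfect answers.
# Assumptions: 8 questions marked 5/3/1/0, pass mark 28 out of 40.

QUESTIONS = 8
PASS_MARK = 28

def mark_paper(scores):
    """Return (total, passed) for a list of per-question scores."""
    assert len(scores) == QUESTIONS
    total = sum(scores)
    return total, total >= PASS_MARK

# 2 completely wrong (0 each), 1 nearly right (3), 5 exemplary (5 each)
total, passed = mark_paper([0, 0, 3, 5, 5, 5, 5, 5])
print(total, passed)  # 28 True
```

So a candidate really can scrape a pass while scoring nothing at all on two whole topic areas, exactly as in the essay-style example.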
The reality is that virtually no qualification has a 100% knowledge requirement, though some which have a direct bearing upon people’s lives, such as doctors and pilots, have a much greater breadth and depth tested as well as having a higher pass mark to attain. What any test should be doing is evaluating the depth of knowledge and the ability to apply that knowledge in an appropriate way and at a suitable level.
Incidentally, the fact that no ITIL qualification indicates a complete mastery of the subject matter is one of the key reasons why exam institutes are so keen on training organizations producing proper session plans. Although the course designer may hold a particular qualification, there is no guarantee that they actually understand all aspects of the syllabus correctly, and nor does having a copy of a diagram from a book in the slide deck guarantee that a topic will be explained correctly.
One of the biggest disappointments in the ITIL scheme seems to be the Managing across the Lifecycle course which suffers from a bizarre syllabus that doesn’t really match the title and for which MCQ testing seems the most inappropriate. Several people spoke out about it at the conference and many more voiced their concerns to me in private conversations. Whether anything happens is a moot point, but at least none of the occupants of “Castle ITIL” can be in any doubt about the swell of negative opinion surrounding the scheme and this qualification in particular.
Speculation has also been mounting about the renewal of the OGC contracts, which were awarded for a 5 year term with the option for OGC to extend by up to 5 years without retendering. Indications are that regardless of their satisfaction or otherwise with their partners’ performance, other pressures within OGC and the UK government in general mean that any review is unlikely to take place before 2012.
Meanwhile, as I stated last week, the plethora of new qualifications – the figure of 30 was frequently bandied about – will make the poor old consumer completely bewildered as to the best route to take through the qualification maze. It is said that competition brings greater choice to the consumer and leads to lower prices, but it is also true that greater complexity often leads to confusion. It is difficult to make real comparisons when one isn’t assessing like against like – and in other spheres it is common for one group to pay for the savings that another achieves. One only has to try choosing the best utility provider to realize how difficult it is – different standing charges, usage rates, discounts, etc., all make it nigh on impossible for many to get the optimum solution.
Maybe APMG should stop trying to dictate all aspects of the scheme, simply focusing on defining some sensible testing criteria, producing and managing meaningful exams that match them, and leaving it to the market to define educational and training events that may lead candidates to these without the two being inextricably linked. I hope to expand on some of these ideas next week.
PS. For a number of years I have used the label of Service Management Evangelist, but now that so many are using the Evangelist tag – and some of them are actually Fundamentalists – I’ve decided on a change. I think that the thread running through my ramblings is a championing of Commonsense and Competence, so that’s the new label.
Any feedback and comments are always welcome!
21st November 2010
I agree that OGC has other pressures, but knowing the EU procurement set-up, it’s likely that the initial term of the contract will end in January 2012, so any extension would need to be notified to APMG by July 2011 – long before any 2012 ‘review’. Tricky, isn’t it? But I’m sure OGC won’t be rushed into making any ill-considered decisions, regardless of any Cabinet Office or other, general government issues.
(ITIL® is a Registered Trade Mark of the Office of Government Commerce in the United Kingdom and other countries.)