Why and how the SQE can work
The example of the qualified lawyers transfer scheme shows it is possible to create high-quality exams catering for large numbers of prospective solicitors, writes Eileen Fry
The solicitors qualifying exam is not a step into the unknown. A large-scale high-quality centralised assessment governing admission as a solicitor of England and Wales and similar to the proposed SQE already exists: the qualified lawyers transfer scheme. QLTS has been run by Kaplan since 2011 on behalf of the SRA. The QLTS assessments provide a route whereby lawyers from other jurisdictions and barristers of England and Wales can become solicitors.
Like the SQE, stage 1 of QLTS is an extensive multiple choice test. The QLTS MCT is a computer-based test delivered worldwide assessing applied legal knowledge. The questions test the application of fundamental legal principles to realistic fact patterns, not esoteric topics or the ability to cite cases or isolated facts. Well-designed multiple choice questions are particularly appropriate for such a high-stakes professional exam because they can ensure coverage of the syllabus and consistent standards across examinees. Good MCQs are hard to write: most of the criticism of MCQs assumes badly drafted questions.
Also like the SQE, stage 2 of QLTS (the objective structured clinical examination) tests law and skills. The OSCE tests interviewing and advising, advocacy and oral presentation, research, writing and drafting in the subject areas of business, property, and probate, and litigation. Many of the techniques used are proposed for the SQE, for instance one-to-one assessments and the use of professional actors to play clients.
The Law Society in its recent response to the SQE consultation says: ‘It is essential… that the SQE assessments are both reliable and valid, and do what they set out to do. The data generated to show the reliability of the assessments should be published, analysed, and evaluated to determine the success of the SQE.’ We agree. Prerequisites for confidence in the SQE are that it ensures that the desired levels of competence are consistently achieved. Test reliability is therefore a key quality indicator.
What is meant by reliability? An exam is reliable if its results are replicable: if the same cohort of candidates took a similar paper and the paper ranked them in the same order, the paper would be regarded as perfectly reliable. Reliability is most commonly measured statistically by Cronbach’s alpha, a measure of internal consistency. In a high-stakes professional exam, an alpha coefficient of more than .9 is commonly seen as a desirable target for an MCT, and more than .8 for a skills-based assessment.
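For readers unfamiliar with the statistic, Cronbach’s alpha can be computed directly from item-level marks. The sketch below uses invented marks for five hypothetical candidates on four questions, purely for illustration; the formula is the standard one, α = (k/(k−1))(1 − Σ item variances / total-score variance).

```python
# Illustrative only: Cronbach's alpha from a small, made-up score matrix
# (rows = candidates, columns = items; 1 = correct, 0 = incorrect).

def cronbach_alpha(scores):
    """Internal-consistency reliability of a set of item scores."""
    k = len(scores[0])                       # number of items
    def variance(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical marks for five candidates on four questions
marks = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(marks), 2))  # -> 0.8
```

A paper whose items consistently rank candidates in the same order, as in this toy data, scores highly; items that rank candidates inconsistently drag alpha down.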
Despite the innovative assessment methodologies used in QLTS, the exams have achieved remarkable reliability, more than meeting contemporary test quality requirements for such a high-stakes assessment.
QLTS has shown that a high-quality SQE is possible. However, QLTS also points to improvements which can be made in the SQE specification.
Multiple choice test
As proposed, stage 1 of the SQE includes six MCT exams, five with 120 questions and one with 80 questions, 680 questions in total. That many questions are not necessary to test the areas envisaged accurately and reliably. The QLTS MCT tests the seven foundation subjects and some LPC subjects in 180 questions. If it were not sampling adequately from the syllabus it would not be a reliable exam: luck would play too big a role in the questions that came up. But it is an extremely reliable exam, with an alpha coefficient consistently higher than .9.
A question bank of over 2,000 questions would be necessary for the exams outlined in the SQE consultation. However, the SQE should test the application of fundamental legal principles. Our experience suggests that to reach the required number of questions it would be necessary to include ones which test more esoteric areas or are unlikely to be of concern to a day-one solicitor.
In addition, fail judgements resulting from these exams would not be readily defensible. By using a statistical formula to extrapolate from QLTS, we know that papers with 80 questions are unlikely to meet contemporary test quality requirements as regards reliability. A paper with 120 items may come closer to such requirements but is by no means certain to meet them. On the other hand, a paper with 180 well-drafted MCQs does meet such quality requirements.
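The article does not name the statistical formula used to extrapolate from QLTS; the standard tool for predicting how reliability changes when a test is shortened or lengthened is the Spearman-Brown prophecy formula, which is assumed in this sketch. Starting from the QLTS figure of alpha above .9 on 180 questions, it predicts what comparably drafted shorter papers would achieve.

```python
# A hedged illustration: the Spearman-Brown prophecy formula (an assumption;
# the article says only "a statistical formula") predicts reliability when
# a test is resized while keeping item quality constant.

def spearman_brown(reliability, old_len, new_len):
    """Predicted reliability of a paper resized from old_len to new_len items."""
    n = new_len / old_len
    return n * reliability / (1 + (n - 1) * reliability)

# From a 180-question paper with alpha = .9 (the QLTS figure):
print(round(spearman_brown(0.9, 180, 80), 2))   # 80-question paper  -> 0.8
print(round(spearman_brown(0.9, 180, 120), 2))  # 120-question paper -> 0.86
```

On these assumed inputs, an 80-question paper falls well short of the .9 target and a 120-question paper comes closer without reaching it, which is consistent with the pattern the article describes.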
We suggest an absolute maximum of two MCT exams with 200 questions each.
The SQE consultation proposes single assessment points which candidates pass or fail. A candidate who passes the MCT but fails the stage 1 skills assessment, or who passes the whole of the stage 2 skills assessment except for one ten-minute advocacy exercise, may well feel something unfair has happened. That candidate would be correct according to contemporary test quality requirements: a single assessment point is not reliable. Indeed, since Cronbach’s alpha measures internal consistency, it is not even possible to measure the alpha coefficient of a single assessment point.
How many assessment points are needed to make a skills assessment sufficiently reliable? The QLTS OSCE has 18 separate assessment points combining to form a single assessment with a single pass mark. This produces an exam which meets contemporary test quality requirements as to reliability and accuracy.
The stage 1 skills assessment would have to be expanded very considerably to justify pass-fail judgements, and it may be a hindrance to diversity. For those doing their legal practice outside the traditional firm environment, these skills could be learnt later during their work experience. Testing skills at this stage may be better done as part of the interview and selection process. Our conclusion is that the stage 1 skills assessment should be abandoned.
The stage 2 skills assessment needs to be adapted so that assessment points are combined into a single exam with a single pass mark. How this is done will depend on the number of practice areas candidates are examined in at stage 2.
The SQE consultation proposes that at stage 2 candidates choose two out of five practice areas. Again, data from QLTS sheds light on this proposal. The correlation between candidates’ overall marks on similar tasks in the different practice areas in the QLTS OSCE (typically around 0.6) is not high enough to justify it. The argument in support of the proposal, that if candidates practise in an area in which they are not competent they will be in breach of their professional duties, is not a good one: taken to its logical extreme, it would remove the need for a qualifying exam altogether. As long as the solicitor qualification is a generic one, we recommend that, as a minimum, the stage 2 exam samples from across the reserved areas.
We needn’t be afraid of the SQE. QLTS has shown that with robust design, appropriate and detailed development of questions, detailed training of actors, assessors, and markers, together with appropriate legal and psychometric expertise, it is possible to create similar exams to stage 1 and stage 2 of the SQE catering for large numbers and of a very high standard. QLTS also provides the evidence base to suggest improvements to the draft assessment specification. However, these suggested improvements should not be seen as detracting from the basic point: the SQE can work.
Eileen Fry is the head of Kaplan QLTS