September 2014


Some Thoughts on Reports

Peter Findlay - Assistant Director at the Quality Assurance Agency for Higher Education (United Kingdom)

Article published in ENQA Occasional Papers 21, "Transparency of European higher education through public quality assurance reports (EQArep)"

Quality assurance questions for reports

Some of the perennial and well-tried questions for quality assurance also apply, of course, to the quality of our agency reports:

  • What is it intended for? (aims, readership)
  • Why are we doing it that way? (method and format adopted)
  • Is that the best way? (evaluation of the method)
  • How do we know it works? (achieving the aims, impact of reports)
  • How could it be improved? (reviewing, evaluating, planning)

A report is a message

A report is, in terms of standard communication theory, just another kind of message. Its effective communication will depend on three things: the sender (the expert team, the agency), the medium (the form of the report, the means of its publication) and the receiver (the university, and other stakeholders such as students and employers).

Messages are also governed by the rule that the purpose, the content and the readership or audience are three inter-related factors which will determine each other. Any report has to take into account its most important aims, its likely readership, and the best form in which to address these.

Communication theory uses the terms "encoding" and "decoding" with regard to messages. For our considerations, "encoding" means:

  • knowing the purpose of the report.
  • knowing what you need to write about.
  • knowing how it will be written and the stages of production.
  • knowing who will / should read it.
  • knowing how and when they will read it.  

And "decoding" implies:

  • that the report is received and read in a managed process.
  • that the reader understands the process involved and the conventions of the report.
  • that the reader knows what is expected from the report in terms of actions to be taken or information to be used.

Who is it for?

An agency therefore needs to be very clear in deciding about the aims of its reports, and about their intended readership and users. But that is often not a simple matter. It would be reasonably straightforward if reports were written simply for the exclusive attention of higher education institutions. But there are other interested parties – government, students, employers – who may also be among the target readership. A review report may need to address all of these interests, but can a single report do that successfully, with such different expectations and needs involved? A major question for our reports is how to meet the needs of these different stakeholders.

Several answers to this knotty problem have been tried:

  • Separate reports for separate stakeholders
  • A summary section of the report with clear outcomes, separated from the more detailed main body of the report
  • Different parts of the report for different readers
  • A short report on the website and a longer report for the institution
  • A "checklist" format for the key criteria, with a comments box for the details, where required.

The report is the result

We need to keep in mind that an agency's report is the end result, or outcome, of a whole process: the character and quality of the report will therefore depend on the quality of many other contributing aspects:

  • Agency management, policy and method
  • Agency officer and review team
  • Documentation and information provided, access to institutional information
  • Visit and confirmation
  • Drafting the report
  • Editing the report
  • Finalising and agreeing the report

It can easily be forgotten that completing the report is the primary aim of all these different parts of the review process. It is both the end product and the focus of the whole process. Meeting other reviewers or managing the team is worthwhile and rewarding; the site visit is likely to be exciting and interesting; the preparation of the report, on the other hand, is isolated and desk-based, and will often be a demanding and even tedious task. It can all too easily be neglected during the earlier stages of the process.
At every stage, therefore, we need to think ahead to the report: keeping notes, making early drafts, checking evaluation against the facts, working towards the judgements. A good report results from a planned process leading up to its final production.

Who owns what?

There must from the very start of the process be clarity about who "owns" the report – that is, who makes the final decision about what is included in it or not, and how judgements are made. Sometimes ownership is shared and then the various responsibilities must be carefully defined. Does ownership lie with:

  • The expert panel?
  • The member of the panel who writes the report?
  • The agency officer who edits the report?
  • The agency committee that approves the report?
  • The agency as a legal entity?
  • The institution?

Most importantly, what person or body has the right to modify the report and/or its judgements?

Can different participants in the process own different parts of the report? For instance it might be argued that the judgements in a report are owned by the experts as peer reviewers, but the form and content of the report are owned by the agency that has ultimate responsibility for the quality of publication.

Keep it simple, ******!

It can be argued that complicated and opaque reports which are difficult to read may serve a purpose, which would be to limit the understanding and reception of the report to a limited readership who understand the conventions and codes in which it is written. However, if the aim is to reach as wide and various a readership as possible, then the simpler the better.
By simplicity is not meant reducing the significance or complexity of the content, but ensuring that it is conveyed in a very clear structure, and in language which can be easily followed and understood.
Some desirable and undesirable features of such a report might be:

To aim for:

  • Length – as short as possible.
  • A well sign-posted structure with clear headings.
  • Style – simple, precise, clear, focused.
  • Well structured paragraphs – "beginning, middle, end".
  • Sentences which are simple and clear (never longer than two lines of text!).
  • Tone – neutral, measured, objective.
  • Content – minimum description, maximum analysis and evaluation. (It is difficult to see the usefulness of large amounts of description of institutional procedures – the institution itself knows all about them already, and other report readers probably don't really want to know very much about them, except whether they are effective.)
  • Arguments clearly based on evidence.
  • Regular reminders of the relevance to the review questions/agenda/ framework.
  • Regular summaries of each section.
  • Clear explanation of basis for judgements and recommendations.

To avoid:

  • Too much reference to the process (the expert team, the review visit, etc.).
  • Over-complex sentences.
  • Jargon and cliché (especially the jargon of quality assurance experts).
  • Emotive or strongly judgemental words.
  • Obscure words.
  • Ambiguity.
  • Too much unnecessary detail.
  • Speculation about future developments in the institution.
  • Subjective comments drawn from experts' personal experience.

The politics of language

Reports in the national languages of small nations will not have the same breadth of impact, nor allow international comparability; but the national language stays within the culture, and reflects more exactly the ownership and location of the report.

Translation into the lingua franca (English) brings risks of mistranslation, confusion, and misunderstanding. There are many varieties of English - a "Euroglish" is in development in quality assurance reports! We don't realise how varied our understanding of key terms in English can be (accreditation, expert, validation, assessment, standards – some key terms that have different meanings in different contexts). You can't always be sure what English means to the reader!


Timelines and deadlines

A timeline for the development and submission of the report is essential. It should include a clear indication of what each contributor is expected to provide, and the iterations needed for editing.

Deadlines are the plague, and sometimes the agony, of report-writers. Always make your deadlines very realistic, and then add a few more days to allow greater flexibility in dealing with the problems that will certainly occur.

Transparency… I can see clearly now

Transparency is generally regarded as a virtue and a goal of a good report. But transparency can harm as well as help – what exactly is the aim of complete clarity? Is the main aim of the report to regulate, control, and compare; or is it to improve, enhance, and develop? If we seek to improve, and to help an institution to succeed, then is it always a good idea to publish in full facts which might damage its reputation? To publish in detail, for the public to read, all that is wrong with an institution? But is the alternative acceptable, that is, for an agency's publication policy to be adjusted taking into account the message delivered by the report? Is there, then, a tension between transparency and enhancement?

Tight-rope walking – the tension between peer review and regulatory body

Most agencies use a reporting method which involves an expert team. Either the experts will write the early drafts of the report themselves, or their views and opinions are gathered and incorporated into a report by the agency officer. This approach follows the principle of peer review. In its purest form, a peer review approach using the informed opinion of the experts would mean that those experts would own the review and write the report. In such a method, individual experts might even be allowed to submit a minority, dissenting statement. But agencies will usually want a higher level of control over the report. They will require experts to reach a consensus on the report judgements, and may impose a common framework for all reports.

Consider the potential spectrum of possibilities for the format of the report. At one extreme, pure peer review is close to the character of a personal narrative about impressions gained by experts; at the other end of the spectrum is something like a highly controlled checklist, pre-defined by the agency, into which the experts place their views and decisions. A question for all report processes is where to position the form of the report between these two extremes. The signs of agency control will be found in the amount of shaping structure given to the report: specified standards and criteria; required headings; checklists for completion; word counts; tables; pre-defined forms of judgement statements. There is a tension here, because the higher the level of control that is exerted by the agency, the less the experts may feel that their views and the expression of them are taken seriously. It is also questionable whether a simple "tick-box" conformity approach can accurately reflect the complexity of institutional systems and their different approaches.
So it is a matter of balancing freedom of expression (a high value in our academic context) against the level of bureaucratic control, and the levels of consistency and comparability between reports that are required by the agency.

It is also worth noting that the higher the level of agency control, the more thorough the training and briefing of experts will need to be, so that they fully understand what is expected of them in their work on the report.

Show me the evidence….

We follow the principle of "evidence-based assessment", but what exactly do we mean by evidence here and how do we use it?

Do we only use evidence provided by the institution?

Does the team have the right to request other evidence?

Can we request from the institutions evidence that demonstrates or confirms the occurrence of bad practice?

How do we ensure that evidence is recorded and retained? (Photocopying during the visit? Access to internal web pages?)

Who takes care of the evidence and ensures safekeeping? Can individual experts be relied on to do this?

Is the evidence published with the report (which probably makes it too long), or kept in reserve? Maybe the evidence-base can be embedded in early, unpublished, versions of the report as comments, or footnotes, or parentheses.

It is worth noting that institutions usually only require sight of the evidence when a criticism is made, not when the assessment is positive: in that case, do we only need to retain evidence relating to critical points or problematic matters?

Who reads it, who needs it?

It seems at least arguable that very few people actually read through a full institutional quality assurance report from end to end. That is, of course, with the exception of the ever-resilient agency officer responsible for the report, who probably reads it through from end to end many times!

What evidence does an agency have of the eventual reading and the actual impact of its reports? We should reflect carefully on the likely readership for reports. If we really are dealing with a small number of readers (more than ten, fewer than 50? Or even fewer than that?), then surely we need also to think about the economy of our report production: the expenditure of effort in producing it, relative to the actual pattern of reception and use. Remember also that research shows that generally, in reading reports like ours, the attention span of the reader reduces with every page that is turned…

Maybe many more people will read the summary of the report, or a short version published on the website. Is there a case for a relatively short published report and then separately an unpublished more detailed report for the institution? 

Usually evidence of impact comes in the form of follow-up reports and action plans. Maybe, then, all we need for our readership are summaries, judgements, recommendations, and action plans? Could we improve the accessibility of reports by using more visual signs in the layout (graphics, tables, highlighting, even pictures)?

Finally, we need to remember that our readers too may need advice and even training in reading and using reports in the most productive way. Guidance documents about the procedure and the main features and aims of reports are very worthwhile.

Last word (it's always good to arrive at it!)

The perfect report is more likely to be an aspiration than a realisation: the best that you can do, in the time that you have available, has to be good enough!



© 2014 AQU Catalunya - Legal number B-21.910-2008