Using Representative Opinion Surveys in the African Peer Review Mechanism Process

About SAIIA
The South African Institute of International Affairs (SAIIA) has a long and proud record as
South Africa’s premier research institute on international issues. It is an independent, non-governmental
think tank whose key strategic objectives are to make effective input into public
policy, and to encourage wider and more informed debate on international affairs with
particular emphasis on African issues and concerns. It is both a centre for research excellence
and a home for stimulating public engagement. SAIIA’s occasional papers present
topical, incisive analyses, offering a variety of perspectives on key policy issues in Africa and
beyond. Core public policy research themes covered by SAIIA include good governance
and democracy; economic policy-making; international security and peace; and new
global challenges such as food security, global governance reform and the environment.
Please consult our website www.saiia.org.za for further information about SAIIA’s work.
This paper is the outcome of research commissioned by SAIIA’s Governance and APRM
Programme.

About  the  Governance and APRM Programme
Since 2002, SAIIA’s Governance and APRM Programme has promoted public debate and
scholarship about critical governance and development questions in Africa and beyond.
The programme seeks to improve public policymaking by linking governments, citizens and
researchers through a variety of publications, training workshops and research fellowships.
The programme has worked on the African Peer Review Mechanism and governance in almost
20 African countries. SAIIA welcomes original governance-related manuscripts to consider
for publication in this series.

Series editor: Steven Gruzd steven.gruzd@wits.ac.za
The Governance and APRM Programme thanks Shaun de Waal, Dianna Games, John
Gaunt, Rex Gibson, Barbara Ludman, Richard Steyn and Pat Tucker for editorial assistance
on these papers.

SAIIA gratefully acknowledges the Royal Netherlands Embassy in South Africa, which
has generously supported the Governance and APRM Programme and this series.
This publication is also available in French. Translations by www.alafrench.com and Beullens
Consulting fabien@bconsult.co.za. Faten Aggad from SAIIA is thanked for proofreading
the French versions.
© SAIIA. November 2008

All rights are reserved. No part of this publication may be reproduced or utilised in any form by any
means, electronic or mechanical, including photocopying and recording, or by any information
storage and retrieval system, without permission in writing from the publisher. Opinions expressed are
the responsibility of the individual authors and not of SAIIA.

Abstract

The opinions of the general public are as important as those of the elite if a country wishes
to achieve a comprehensive self-assessment process in terms of the African Peer Review
Mechanism (APRM). But gathering and measuring the opinion of ordinary people is not a
simple matter.

The author of this paper, Robert Mattes, Professor in the Department of Political Studies
at the University of Cape Town, has immense experience in planning and conducting
opinion surveys in Africa, notably in his role as co-founder of Afrobarometer. Here he warns
against the traps and pitfalls awaiting the unwary.
The first of these traps is the belief that a more representative assessment of public
opinion can be obtained by contacting an ever larger number of people. The law of diminishing
returns comes into play, and the cost of increasing the sample size can outweigh
the benefits. Professor Mattes argues that, while a representative survey is an irreplaceable
element of the national self-review process, relatively small random probability samples of
ordinary citizens can produce accurate and cost-effective results.
However, other elements must be in place to ensure the survey’s credibility. These
include freedom to travel for fieldworkers; the availability of accurate census data; and the
avoidance of inappropriate mechanisms, like polling heads of households instead of the
people who reside in them.
He warns, too, that it is important to establish what can be learnt from ordinary citizens
– and what is out of their domain.

About the Author

Robert Mattes is Professor in the Department of Political Studies and Director of the Democracy
in Africa Research Unit and Centre for Social Science Research at the University of
Cape Town, and co-founder of and senior advisor to the Afrobarometer. An earlier version
of this paper was produced for ‘APRM Lessons Learned – A Workshop for Practitioners,
Researchers and Civil Society’ hosted by the South African Institute of International Affairs in
Johannesburg on 12–13 September 2006.

Introduction

Any national self-review would be incomplete if it included only the assessments of
elites (whether government officials, technocratic experts or civil society stakeholders)
and excluded the opinions of the mass public. The true state of political and economic
governance in a country cannot be assessed simply on the basis of an objective analysis
of the rules, resources and behaviour of the economy, government institutions and large
corporations.

Competent business people would never draw a final conclusion about the quality
of their company and product simply by investigating the company charter, its internal
processes or the assembly line. They would also need to know whether consumers were
actually buying their product and, more importantly, whether they were satisfied with it,
and likely to keep on buying it. In much the same way, the actual state of political governance
and, especially, democratic politics is at least partially in the eye of the beholder.
But exactly how the values, awareness, evaluations and experiences of ordinary people
are to be gathered is not a simple matter. On one hand, a country may wish to instill a
sense of public ownership of the project and encourage the participation of as wide a
cross-section of ordinary citizens as possible. On the other hand, any self-assessment that
aims to provide a true reflection of the state of affairs in the country would want to be as
accurate, and therefore as representative as possible. The difficulty is that, for a range of
methodological, pragmatic and socio-political reasons, it is rarely possible to maximise
both these goals at the same time.

'Participatory' Consultations of Public Opinion

One apparent way to consult public opinion and simultaneously instill a sense of awareness
and public ownership is to run as broad a consultative process as possible in which
enumerators speak to ordinary citizens in their homes or in public meetings and record
their responses, either through structured responses to structured questionnaires, or
through transcripts of semi-structured or unstructured discussions and debates.
Public discussions have many advantages. Most importantly, they allow people to set
the agenda, name their problems and frame the issues and range of potential solutions
in their own words rather than having them structured by the questionnaire designers.
Moreover, they are deliberative, meaning that people can persuade each other to change
their opinions in the course of discussion.

However, public consultations also have disadvantages. First, most people are not political
animals. Family life, friends, social activities and the need to earn a living compete with
public affairs for people’s attention and limit their willingness to take part in political events.
Thus even the most well-funded public consultation exercise may engage the attention of
only a small fraction of ordinary citizens, let alone get them to participate – especially if
people do not see any real incentive to do so. A recent South African exercise is a classic
case.1 The January–February 2006 South African Afrobarometer survey found that just one
in 20 people (6%) said they had even heard about the APRM process, one in 33 (3%) had
attended a public meeting, and just one in 50 had filled out a questionnaire.2
Second, virtually any process of public consultation means that citizens have to take
the initiative to make their voices heard. If there is one thing we have learned from 50
years of studying political behaviour, it is that not all people are equally willing to take the
time to talk about politics, or have the capacity to do so. Thus, consultative campaigns
may not only fail to reach meaningful numbers of ordinary citizens, they may also fail to
reach a representative cross-section. Again, we see clear evidence of this in the Afrobarometer
examination of the South African APRM process. Not only were the better educated,
frequent newspaper readers and active members of civil society organisations and trade
unions more likely to have heard about the process, they were also more likely to have
attended a meeting and filled out a questionnaire.3 This then is inherently not a representative
cross-section. People who have more access to education and the mass media, who
are more attentive to politics, and who are more actively engaged in civil society, are likely
to have significantly different values and opinions from citizens who are not.

Thus the desire to consult a broad cross-section of ordinary citizens, and allow them
to participate in these important processes and so gain a sense of national ownership, can
easily backfire – failing to reach a significant section of the public and producing a potentially
biased view of public opinion.

Representative and Accurate Assessments of Public Opinion

Ironically, the problem of obtaining a representative assessment is not solved by contacting
ever larger numbers. Rather, the solution is in the method by which citizens are selected,
much more than in how many are selected.
In other words, the solution is in sampling citizens, rather than attempting to create a
mini census. The representativeness of a sample (the extent to which it produces estimates
of public opinion or experience that mirror those of the total population) depends on two
criteria. First, the process of selecting individuals must be random, rather than allowing
people to participate on their own initiative (which produces the well-known biases outlined
above). Second, every citizen must have an equal and known chance (or probability)
of being selected.

The accuracy of any estimate taken from a sample does, however, depend to some
extent on numbers. A wealth of past experience shows us that even a random probability
sample of 300 people can produce estimates that are accurate (95% of the time) to within
a margin of sampling error of about five percentage points. However, very few analysts
would be satisfied with knowing, for example, that the fact that 45% of respondents say
they are satisfied with the performance of the President means that presidential approval
in the total population lies somewhere between 40% and 50%.
While we can increase accuracy by spending more money and contacting more people,
the law of diminishing returns that lies behind the mathematical basis of sampling means
there is no one-to-one return. To reduce the margin of sampling error by one percentage
point, we need to double the sample size. 

Sample size    Sampling error
300            +/– 5 points
600            +/– 4 points
1,200          +/– 3 points
2,400          +/– 2 points
4,800          +/– 1 point
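
These figures follow the conventional normal-approximation formula for a simple random sample, MoE = z·√(p(1−p)/n), evaluated at the conservative worst case p = 0.5. The table reports rounded rule-of-thumb values; the exact formula gives slightly different figures (for example, 5.7 points at n = 300) but the same diminishing-returns pattern. A minimal sketch, purely for illustration, and not accounting for the additional 'design effect' that multi-stage samples carry:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a simple
    random sample of size n at an observed proportion p (p = 0.5 is the
    conservative worst case)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (300, 600, 1200, 2400, 4800):
    print(f"n = {n:>5,}: +/- {margin_of_error(n):.1f} points")
# Doubling n shrinks the margin by a factor of sqrt(2), not by half --
# the diminishing returns described in the text.
```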

Thus we soon reach a point where the added costs of massively increasing sample sizes
(say by contacting 4,800 respondents rather than 2,400) bring only marginal returns in
increased accuracy. This is why the large majority of socio-political surveys use sample
sizes somewhere between 1,000 and 2,500. We are generally satisfied with knowing that
satisfaction with presidential performance hovers somewhere between 44% and 48%. We
do not get too concerned over whether 18% or 22% of all citizens actually contacted their
MP in the previous year as long as we can draw a broad inference that approximately
one in five people did so. While we would like to be more accurate, it simply costs too
much.

In contrast, national statistical agencies often run much larger household-based surveys
because they place much greater emphasis on statistical precision for development
policy. It really does matter whether the actual rate of unemployment is 40% or 41%.
Larger sample sizes also allow more accurate inferences from smaller subgroups. Is there
a difference, for example, in job-seeking strategies between young, unmarried urban men
and young, unmarried urban women?
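
The same formula shows why. A subgroup that makes up only a fraction of the sample is, in effect, a much smaller sample; reusing the margin_of_error() sketch above with invented subgroup sizes:

```python
# A subgroup of 150 respondents within a 1,200-person sample (e.g. young,
# unmarried urban women) carries a much wider margin than the full sample.
print(f"full sample of 1,200: +/- {margin_of_error(1200):.1f} points")  # ~2.8
print(f"subgroup of 150:      +/- {margin_of_error(150):.1f} points")   # ~8.0
```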

There is a bit of a paradox here. The attempt to consult more and more people and
allow them to participate in a national self-review may ultimately fail to contact a meaningfully
large number of people and, more importantly, is almost guaranteed to produce a
biased picture of public opinion. In contrast, surveys of relatively small but representative
random probability samples of ordinary citizens can produce accurate and cost-effective
estimates of public opinion.

Thus surveys of random probability samples of ordinary citizens are an essential part
of the process of national self-review. It is true, however, that opinion surveys are largely
based on structured questionnaires, allowing the designers to set the agenda, name the
issues and frame the allowed responses. But even with these drawbacks, they can be
defended on the same basic principles that seem to necessitate a consultative, participatory
process. That is, representative surveys by their nature treat all citizens equally and offer
everyone an equal and known chance of being selected to participate and so influence the
self-review process (even if participation means simply answering questions).

Mechanics of Representative Surveys: Essentials

While representative surveys are an irreplaceable element of the national self-review
process, several essential elements have to be in place to ensure their credibility.

Absence of widespread civil conflict

First, the freedom to travel and visit people virtually anywhere in the country is a prerequisite
for fieldworkers seeking a nationally representative sample. This means an absence of
widespread civil conflict, politically hostile ‘no-go’ zones, crime and other obstacles, such
as natural disasters or large tracts of unmapped minefields, that could compromise the
safety of fieldworkers. But how much is too much? In general, there is no simple statistical
answer. The key factor is the degree to which excluding these areas would compromise
our ability to generalise from the other responses.

Accurate and recent census data

Another prerequisite for credible surveys based on representative samples is the availability
of recent, accurate census data that is sufficiently detailed to allow disaggregation
to quite small areas, even to the level of the basic census enumerator area. This is important
because we begin the sampling process (a multi-stage approach is discussed below)
by disaggregating the census into a list of its smallest geographic units (e.g. enumerator
areas) and then picking from this list a sample of these units. Because these units often
differ in size, we need to know the actual population size of each to weight its probability
of selection. If each unit has an equal probability of selection regardless of population, the
sample would no longer be representative. This principle of sampling is what we know as
“probability proportionate to population size” (PPPS).

However, simply picking a sample of geographic units from one national list may
randomly and unintentionally fail to include politically important areas or groups, or fail
to reflect important variations across the population. Thus the census data should also be
sufficiently detailed to allow us to stratify, or cluster, these units into a larger number of
sub-lists that reflect politically relevant lines: rural-urban differences, religious or linguistic
differences, districts and provinces. The principle of PPPS also means that the census
must tell us the relative population size of each of these strata, or sub-lists, so that we do
not select too many or too few from each.

Finally, the census should be detailed enough to enable us to examine demographic
data that can be collected only once we interview a respondent (e.g. age, marital status,
income, education). This data has to be compared to actual population figures to assess
the representativeness of the sample and decide whether it is necessary to weight it to
conform to national demographics.
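
As a simple illustration of that final weighting step, post-stratification assigns each respondent a weight equal to the group’s census share divided by its sample share. The figures below are invented for illustration only; real weights would come from the actual census and the realised sample, typically across crossed cells (e.g. region by gender by age) rather than a single variable:

```python
# Hypothetical figures, for illustration only.
census_share = {"male": 0.48, "female": 0.52}   # population proportions
sample_counts = {"male": 660, "female": 540}    # interviews achieved

n = sum(sample_counts.values())
weights = {g: census_share[g] / (sample_counts[g] / n) for g in census_share}

# Each respondent in group g counts weights[g] times in the analysis, so
# the weighted sample matches the census distribution.
print(weights)  # {'male': 0.87..., 'female': 1.15...}
```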

Multi-stage sampling

As mentioned above, the sampling process is a multi-stage one. Few countries – fortunately
– have a national list of all citizens, or at least one that they would share with a
survey firm. So we have to sample citizens by first sampling the things in which we know
they live – households. But we also rarely have a single unified list of all households.
Thus:

  • Stage 1 consists of the process outlined in the previous section: that is, randomly sampling small geographic units from a national, stratified list of all those units, based on the principle of PPPS.4
  • Stage 2 consists of sampling households within the selected geographic units.
  • Stage 3 consists of sampling individuals within the selected households.
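
A minimal sketch of Stage 1 follows, using systematic selection on cumulative population totals, which is one standard way of implementing PPPS. The enumerator-area names and populations below are invented; a real frame would first be stratified along the lines discussed in the previous section, and Stages 2 and 3 would be carried out in the field (see below):

```python
import random

# Invented frame: enumerator areas with their census populations.
eas = [(f"EA-{i:03d}", random.randint(200, 2000)) for i in range(500)]

def pps_systematic(frame, k):
    """Stage 1: select k units with probability proportionate to
    population size, by walking a fixed sampling interval along the
    cumulative population totals from a random start."""
    total = sum(pop for _, pop in frame)
    interval = total / k
    start = random.uniform(0, interval)
    targets = [start + i * interval for i in range(k)]
    chosen, cum, t = [], 0, 0
    for name, pop in frame:
        cum += pop
        # A unit whose population spans several targets is drawn more
        # than once; very large units are often taken with certainty.
        while t < k and targets[t] < cum:
            chosen.append(name)
            t += 1
    return chosen

# e.g. 150 EAs x 8 interviews per EA = a sample of 1,200 respondents
selected_eas = pps_systematic(eas, k=150)
```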

A sample of heads of households is not a sample of citizens

The proper implementation of Stage 3 is imperative if we want to say that our survey
results are representative. We want a sample of people, not of households. (As explained 
above, households are simply a convenient place to find people.)
This means that we should guard against uncritically accepting the standard sampling
procedures of national census or statistical institutions. The social issues that generally
interest these institutions are traditionally addressed by economists and sociologists
through household surveys because they have defined the household as a critical unit of
analysis. The head of household is then conventionally selected to act as an informant
about the status, activities and experiences of the household, and basic demographic data
is collected for all individuals in the household.

Households have properties that are important to economists, sociologists and development
planners. But they are simply not a factor for those interested in issues surrounding
democratic citizenship. When it comes to democracy and governance, the individual
citizen, not the household, is the proper unit of analysis.5 The very theory on which
democracy is premised stresses that all legal citizens should have as equal an influence as
possible on the affairs of government, including participating in the national self-review.
But this is more than an issue of democratic ideology. As I shall discuss in greater detail
below, any survey instrument designed to enable analysts to complete the self-review questionnaire
would typically ask about a wide range of evaluations and preferences, as well
as behaviours and knowledge. I can think of only a very small set of questions – usually
relating to household finances – about which the head of household might have superior
knowledge to others living there. For the vast remainder of questionnaire items, there
would be no reason to privilege the experiences, behaviours or opinions of the head over
others in the household.

Finally, this is also an issue of representativeness and accuracy. Because heads of
household are more likely to be older, employed and male, and because they have more
responsibilities that may lead them to look at the world differently, interviewing only
heads of household is very likely to provide biased and misleading results.
Thus, the uncritical use of household survey methodologies in the national self-review
process may end up wasting huge amounts of money because the results will only be generalisable
to heads of households, not all citizens.

Accurate translations

To be representative, and to enable all citizens to have an equal influence on the overall
results, it is imperative that all respondents are able to hear and respond in the language in
which they feel most comfortable. Any survey instrument used in Africa should be translated
– word for word, not just key concepts – into all relevant home languages.

Minimum sample size

For reasons discussed above, any survey claiming to be national in scope should interview
at least 1,200 respondents, which provides estimates for the national public accurate
to within plus or minus three percentage points.

Mechanics of Representative Surveys: Desirables
There is a range of factors that should ideally be in place to carry out a credible and representative
survey, but these can be seen as ‘desirables’ rather than ‘essentials’.

Household lists and maps

The ‘international gold standard’ for survey research requires that samples be selected
using PPPS at all stages.6 Any decent census should enable African survey researchers
to select enumerator areas or other geographic units based on probability. But selecting
households based on probability requires that we have an up-to-date list of all households
in the selected sampling unit and, if possible, information about the size of each. However,
many African censuses cannot provide this detailed information, and where they can, it is often
hopelessly out of date. In that case, there is the option of having fieldworkers arrive in the
sampling unit ahead of time to construct the map themselves. This drives up survey costs.

I do not regard this level of precision as absolutely essential. But survey researchers
can and should at least satisfy a ‘silver standard’: that is, as long as all enumerator areas are
chosen based on PPPS, it is reasonable to select households and respondents by random
methods which are strictly monitored by field supervisors and over which the fieldworker
has no control (such as randomised starting points in the EA, randomised walk paths
stopping at every nth house with the interval varied randomly each day, and a random
rule for selecting among eligible household members).7
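
The sketch below illustrates one such ‘silver standard’ field routine. It is a stand-in, not a prescription: the interval bounds are arbitrary, and the respondent rule is a crude simplification of devices such as the Kish grid:

```python
import random

def household_walk(n_households):
    """Random starting dwelling in the EA, then every nth dwelling,
    with the interval re-randomised each day so that the fieldworker
    has no control over which households are selected."""
    interval = random.randint(5, 10)        # re-drawn daily in the field
    stop = random.randrange(n_households)   # random starting point
    while True:
        yield stop % n_households           # in practice the walk ends
        stop += interval                    # once the EA quota is met

def select_respondent(eligible_members):
    """Random rule for choosing among eligible (adult citizen) household
    members -- a crude stand-in for a Kish grid."""
    return random.choice(eligible_members)

walk = household_walk(n_households=240)
first_stops = [next(walk) for _ in range(3)]   # first three dwellings
respondent = select_respondent(["head", "spouse", "adult child"])
```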

Substitution

Again, the ‘international gold standard’ holds that survey researchers should not allow
any substitutions of selected respondents or households who refuse or are unable to be
interviewed; this guards against ending up with a biased sample that under-represents
economically active people, or groups who do not feel comfortable talking about their
social and political attitudes.8

If researchers are worried about large rates of non-response, they can either draw overly
large samples ahead of time or, depending on the level of non-response, draw new and
separate smaller samples after the fact and interview the entire sample. But the first option
presumes fairly sophisticated knowledge about past response rates that is rare because survey
research is a recent phenomenon in most African countries. The second option often
entails an intolerably large increase in fieldwork costs.

It is not clear whether allowing substitution necessarily results in major biases. Again,
survey researchers in Africa may reasonably keep costs under control yet satisfy a ‘silver
standard’ if they allow substitution of one household for another (but never substitution
within households) and then only after at least two or three attempts to reach the targeted
household and respondent. In that case they must keep accurate data that would allow a
post hoc comparison of the responses of substituted and non-substituted respondents.9
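
One simple way to run that post hoc comparison is a two-proportion z-test on key items, contrasting substituted with non-substituted respondents. The fieldwork figures below are invented purely for illustration:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two sample proportions;
    |z| > 1.96 flags a difference significant at the 95% level."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented example: satisfaction with the president among original
# respondents versus substituted ones.
z = two_proportion_z(hits_a=540, n_a=1100, hits_b=55, n_b=100)
print(f"z = {z:.2f}")  # about -1.18 here: no significant difference
```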

Survey timing

Finally, it is desirable though not essential that survey designers plan ahead to conduct
self-review surveys in as politically neutral a period as possible. Essentially, this means
not conducting surveys immediately before or after elections, and trying to avoid any
other times in which the national mood might be artificially but predictably optimistic or
pessimistic.

Questionnaire Content

We need to approach the design of an APRM-related public opinion questionnaire with
a sensible theory of governance and democracy and of the role of citizens within it. It
should begin with an examination of what the principle of fundamental equality and equal
influence means for the content of a questionnaire aimed at citizens. Yet this should be
balanced by a keen sense of what citizens are and are not able to tell us.

There is a range of political issues about which citizens have a right to express opinions
whether or not they are based on real experience or other information – for example,
evaluations of elected leaders and most public institutions. In this area, perception is a very
large part of the reality that a national self-assessment process needs to measure. Regardless
of whether a given government department is actually a hotbed of nepotism, the
popular perception that it is may matter more than the actual state of affairs.

It is less clear whether this logic applies to other institutions covered by the APRM
questionnaire such as the Reserve Bank, or other areas such as corporate governance. It
might be important to measure whether ordinary citizens see the private sector, especially
big businesses, as corrupt, and/or more or less corrupt than state agencies and elected
officials. Beyond that, however, it is not clear what more citizens can really tell us about
corporate governance.

There are also issues where it is important to distinguish those who have had some
experience with an institution or have heard of some issue because examining their
experiences can tell us about the performance of the institutions (e.g. an experience of
victimisation by bureaucrats or elected officials). On the other hand, lack of knowledge
or experience may also be important to measure because it tells us how many citizens are
being included in, or excluded from, key policy debates and access to public institutions.
But there are also issues in the APRM questionnaire where the vast majority of citizens
have too little experience to justify allocating scarce survey resources to them. The
area of corporate governance springs to mind. Does it make sense to ask people who are
shareholders in a large corporation (in all probability, a very small minority themselves)
about their experiences in annual general meetings, or their knowledge of the company
finances?

The APRM self-assessment questionnaire should certainly provide the guide for designing
a public opinion questionnaire to support the APRM process. But it is not necessary to
have ordinary people attempt to provide answers to the exact questions in the APRM questionnaire.
Quite simply, citizens cannot tell us about everything. We should not overload
the questionnaire in an attempt to match the APRM instrument exactly. Rather, we need
to decide what people can tell us (in terms of their experiences, awareness, behaviours,
values, evaluations or preferences) that can help the review process.

Need for Clear Definitions of Other Target Groups

Surveys of other representative samples – such as firms, civil society leaders, bureaucrats
or technical experts – may be appropriate tools to add to the national self-review.
However, it is not clear that the APRM process has sufficiently clear definitions of each
of these groups. What are the defining characteristics of a firm, a civil society group or
a government official, let alone of more ambiguous terms like ‘role-players’ or ‘experts’,
that would help us know who qualifies? Only with a working definition can we evaluate the representativeness
of any attempt to sample these groups and begin to compare each group with
the others, with the citizens, and with their counterparts in other APRM countries.

Planning, Timing and Costs

Besides all the components above, conducting credible citizen surveys takes advance planning
to avoid rushing the survey instrument and sampling strategy. Last-minute planning
is likely to result in adopting existing questionnaires that might not be maximally appropriate.
It might also allow survey companies or national statistics offices to impose their
own operating procedures, appropriate or not. Some of the steps presented below can be
done in parallel, rather than sequentially. But based on my experience in the Afrobarometer,
country teams should allow at least five to six months between deciding to conduct research
and receiving usable results.

Questionnaire design                                      4 weeks
Advertising and awarding bids to research provider        3 weeks
Questionnaire translation                                 1 week
In-house pilot of questionnaire and redesign              2 weeks
Sample design, sample drawing                             2 weeks
Training fieldworkers                                     2 weeks
Field pilot                                               1 week
Fieldwork                                                 4 weeks
Data entry, cleaning, presentation of marginal results    4 weeks

Based on my experience, nationally representative surveys in Africa are expensive compared
with those on other continents. Costs may vary widely depending on the size and infrastructure
of the country, and whether one selects a for-profit or not-for-profit research firm. In
general, national teams should anticipate spending anywhere between US$85,000 and
US$125,000 for a survey of 1,200 respondents, again depending on the country and the
fieldwork provider.

On the other hand, depending on the country and the timing of the exercise, a significant
amount of public opinion data may exist, covering a wide range of APRM topics
(especially in the areas of socio-economic and political governance). The Afrobarometer
has just finished its most recent round of surveys of nationally representative samples of
citizens in 18 African countries, the largest survey project ever conducted on the continent.
It took place between March 2005 and February 2006. The countries included
were:

  • West Africa: Benin, Cabo Verde, Ghana, Mali, Nigeria, Senegal
  • East Africa: Kenya, Madagascar, Tanzania, Uganda
  • Southern Africa: Botswana, Lesotho, Malawi, Mozambique, Namibia, South Africa, Zambia, Zimbabwe

In addition, we have now conducted three separate surveys each in 12 countries that
provide the first evidence ever collected about trends spanning a six-year period (circa
2000, circa 2003, circa 2005 in Botswana, Ghana, Lesotho, Malawi, Mali, Nigeria, Namibia,
South Africa, Tanzania, Uganda, Zambia, Zimbabwe).

Finally, Afrobarometer plans to conduct new surveys in these 18 countries (with possible
additions) beginning in 2008. Single countries, or groups of countries, that plan to undergo
self-review in 2008 and 2009 may be able to obtain survey data far more
cheaply than if they did it themselves by contributing to Afrobarometer fieldwork costs,
and/or paying for additional questions.

ENDNOTES

1 ‘Public Participation in South Africa’s African Peer Review Mechanism: Results from the January–
February 2006 Afrobarometer – South Africa’, Presented to ‘APRM Lessons Learned – A
Workshop for Practitioners, Researchers and Civil Society’, 12–13 September 2006, Johannesburg,
South Africa.

2 Ibid.

3 It should be noted that we may from time to time use disproportionate sampling to provide
reliable estimates of small but socially or politically relevant groups, so long as they are subsequently
weighted back down to their true proportion of the population.

4 Some surveys may insert a prior stage to reduce travel costs. First they create a list of larger geographical
units, such as counties or districts, that are not too large, but relatively numerous and
which group enumerator areas into fairly homogenous clusters. This list should also be
stratified along rural–urban, or provincial lines. Once a small list of these larger clusters is drawn, a
sample of enumerator areas can then be drawn.

5 Some of the early APRM national surveys (e.g. Kenya) appear to suffer from this problem, as
well as apparently all the UNECA household surveys. A simple explanation is that those who
designed these surveys had strong backgrounds in socio-economic household surveys and simply
copied the sample design without thinking through the consequences.

6 Heath A, S Fisher and S Smith, ‘The Globalisation of Public Opinion Research’, Annual Review of
Political Science 8 (2005): 297–333.

7 Mattes R, ‘Public Opinion in Emerging Democracies: Are the Processes Different?’ in Handbook
of Public Opinion Research, Donsbach W & M Traugott (eds), Sage, 2008.

8 Heath, Fisher and Smith, ‘The Globalisation of Opinion Research’, 2005.

9 Mattes, ‘Public Opinion in Emerging Democracies’, 2008.

More information about the project and contact details can be found at
www.afrobarometer.org.

OTHER PUBLICATIONS

The African Peer Review Mechanism: Lessons from the Pioneers is the first in-depth study of
the APRM, examining its practical, theoretical and diplomatic challenges. Case studies of
Ghana, Kenya, Rwanda, Mauritius and South Africa illustrate difficulties faced by civil society
in making their voices heard. It offers 80 recommendations to strengthen the APRM.

The APRM Toolkit DVD-ROM is an electronic library of resources for academics, diplomats
and activists. In English and French, it includes video interviews, guides to participatory
accountability mechanisms and surveys, a complete set of the official APRM documents,
governance standards and many papers and conference reports. It is included with the
Pioneers book.

APRM Governance Standards: An Indexed Collection contains all the standards and codes
mentioned in the APRM that signatory countries are meant to ratify and implement, in a
single 600-page volume. Also available in French.

Planning an Effective Peer Review: A Guidebook for National Focal Points outlines the principles
for running a robust, credible national APRM process. It provides practical guidance
on forming institutions, conducting research, public involvement, budgeting and the media.
Also available in French and Portuguese.

Influencing APRM: A Checklist for Civil Society gives strategic and tactical advice to civil
society groups on how to engage with the various players and institutions in order to have
policy impact within their national APRM process. Also available in French and Portuguese.

To order publications, please contact SAIIA publications department at pubs@saiia.org.za
South African Institute of International Affairs
Jan Smuts House, East Campus, University of the Witwatersrand
PO Box 31596, Braamfontein 2017, Johannesburg, South Africa
Tel +27 11 339-2021 • Fax +27 11 339-2154
www.saiia.org.za • info@saiia.org.za
