
Bridging the Gap Between Theory and Practice in Integrated Assessment




Published by:

IMPACT ASSESSMENT RESEARCH CENTRE

WORKING PAPER SERIES

Paper No: 7

Bridging the Gap Between Theory and Practice in Integrated Assessment

Norman Lee, Institute for Development Policy & Management

September 2004

IARC Administrator, Institute for Development Policy and Management, University of Manchester, Harold Hankins House, Precinct Centre, Oxford Road, Manchester M13 9QH

Tel: +44-161 275 2808   Fax: +44-161 275 0423

Email: iarc@man.ac.uk Web: http://www.man.ac.uk/idpm/iarc

Bridging the Gap Between Theory and Practice in Integrated Assessment [1]

ABSTRACT

There is growing support for the use of integrated assessments (IAs)/sustainability impact assessments (SIAs) at different government levels and geographic scales of policy-making and planning, both nationally and internationally. However, delivering good quality IAs/SIAs in the near future could be challenging. This paper focuses mainly upon one area of concern: the difference between the research and other technical contributions intended to strengthen assessment methodologies and the types of assessment methods considered usable by practitioners. To help in addressing this concern, the development of a common assessment framework is proposed, based on a shared practitioner-researcher-stakeholder understanding of what constitutes a satisfactory integrated/sustainability impact assessment. The paper outlines a possible structure for this framework, which contains three interconnected elements – the planning context in which the assessment is to be carried out; the process by which the assessment is to be undertaken and its findings used; and the methods, technical and consultative, by which impacts are to be assessed. It concludes with suggested ‘next steps’, addressed to researchers, practitioners and other stakeholders, by which the assessment framework might be tested and improved, and its subsequent use supported.

[1] An earlier version of this paper, which was more concerned with regional/local planning level assessments, was presented at the EU REGIONET Workshop on Evaluation of Regional Sustainable Development, held at the University of Manchester in June 2003. The author is grateful for comments received on that draft, particularly from Rodrigo Jiliberto and Aleg Cherp, and two anonymous reviewers.


1. INTRODUCTION

For the purposes of this paper integrated assessment (IA) covers three types of integration:

• Vertical integration of assessments, i.e., linking together separate impact assessments, which are undertaken at different stages in the policy, planning and project cycle (hereafter, the planning cycle).

• Horizontal integration of assessments: i.e. bringing together different types of impacts – economic, environmental and social – into a single, overall assessment at one or more stages in the planning cycle. It may also involve horizontal co-ordination between contemporaneous assessments for separate, but inter-related, planning cycles.

• Integration of assessments into decision-making, i.e. integrating assessment findings into different decision-making stages in the planning cycle (Lee, 2002).

The main focus is on strategic-level integrated assessments applied to policies, plans and/or programmes (PPPs). These PPPs are very diverse and, for example, may vary in their geographic scope between the international/national and regional/local scale. The particular form of integrated assessment, with which the paper is primarily concerned, is sustainability impact assessment (SIA). Therefore, it assumes that economic, environmental and social impacts are to be assessed according to criteria consistent with sustainable development. It also assumes that the assessment process incorporates ex ante appraisal and ex post evaluation i.e. it covers all principal stages in the preparation, approval and implementation of the PPP being assessed.

It draws upon the author’s own research and practical experience in integrated assessment. Additionally, several EU [2] and UK [3] assessment studies and guidance documents have been consulted. However, no attempt is made to present a comprehensive review of this broad and diverse literature or a critique of individual studies.

[2] See ECOTEC, 1997; ERM, 1998; Moss and Fichter, 2000; GHK, 2002; European Commission, 2002a,b.

[3] See DETR, 2000a-d; ODPM, 2003; NWRA, 2000; DTLR, 2002; NWRA, 2002.

The paper is primarily concerned with the following problem:

• On the one hand, IA (and, more particularly, SIA) is a potentially valuable assessment instrument in the promotion of sustainable development and, currently at least, there is some influential support for its practical application (see EC, 2002a; DETR, 2000a; DTLR, 2002).

• On the other hand, securing good quality IA/SIA application is likely to prove very challenging and, if this cannot be achieved within a reasonable period, the window of opportunity for its widespread use is likely to close.

First, it explores, in Section 2, different causes of this problem, highlighting one of these for more detailed examination. This is the gap between the nature of many recent and on-going research contributions to strategic-level IA/SIA methodology and the types of assessment methods and approaches which planning practitioners mostly seem able and/or willing to use. [4]

This is followed by an outline of one possible approach to ‘bridging the gap’, which involves a linked consideration, within a common framework, of: the context of the assessment (Section 3), the assessment process (Section 4) and the assessment methods to be used (Section 5). The paper concludes with some practical suggestions on the implementation of this approach and other supporting measures (Section 6).

2. THE ASSESSMENT PROBLEM: ITS POSSIBLE CAUSES AND REMEDIES

The difficulties experienced in integrated/sustainability impact assessment have several possible causes. These include:

• Limited working experience in undertaking impact assessment of PPPs, and in the more strategic levels of appraisal and evaluation which these require, compared to impact assessment experience at the project level.

[4] The gap is not complete. Some researchers are involved in both developing and applying new methods and some practitioners are engaged in some method development. However, these appear to be fairly limited, minority activities at present.

• Limited experience in linking economic, environmental and social impact assessments within integrated and sustainability assessments.

• Limited commitment to apply and/or use integrated/sustainability assessments.

• Limited time, data and other resource constraints within which assessments have to be completed.

• The complexity of the policy, planning and decision-making environment (abbreviated, hereafter, to the ‘planning environment’), within which assessments have to be undertaken.

• Major differences between the IA/SIA methods and tools, which researchers and consultants have developed, and the simpler assessment methods which are often used by practitioners.

Each of these is briefly examined below; the last of these is the main focus of attention in the remainder of the paper.

The first two types of difficulty are often experienced when new assessment requirements are initially established. Overcoming these takes time, which may be reduced by appropriate written guidance and training/awareness-raising. The third difficulty is not unusual where existing PPP requirements and their assessment criteria are being changed. Involving the main participants in the process of change can help in raising the level of commitment.

The problem of limited time and other resources may be addressed by a combination of more realistic specifications of the priority tasks to be undertaken; more effective use of existing resources; and securing some extra resources where these can be justified. There is also a case for reviewing the management and decision-making procedures for the planning process itself to see whether rationalising and rescheduling these can enhance the overall effectiveness of the assessment process. [5] Proposals to substitute IA/SIA for the increasing number of more specialised assessment procedures may be partly justified as a response to the dangers of over-assessment and the need for more effective use of existing assessment resources (see, for example, European Commission, 2002a).

[5] This may be considered as part of the wider issue of the role of management, planning and decision-making procedures (and of the support these receive from the management, planning and decision-making sciences) in strengthening strategic-level integrated assessment practice. See Rayner and Malone (1998) and Parsons (1995) for overviews of some of the literature on this wider issue.

The complexity of the planning environment, viewed from an assessment perspective, is evident in several ways – for example, the multi-sectoral character and broadly defined content of many of the plans to be assessed; the relative importance of complex impacts (indirect, induced and cumulative); the spatial and temporal complexity of their distribution; their multiple links, horizontal and vertical, to impacts from other PPPs. The challenge posed by complexity is very real in all forms of strategic-level assessment. Mostly, it can only be handled satisfactorily by appropriate simplification.

Complex vs simple assessment methods

The existence of significant differences between the more sophisticated assessment methods developed by researchers and technical specialists, and the simpler methods often used by practitioners in planning studies (see footnotes 2 and 3), is already familiar to those previously involved in the initial periods of EIA implementation. First generation EIA textbooks and guides listed many technically sophisticated methods (often borrowed and adapted from different specialist disciplines). However, reviews of early EIA practice revealed that relatively little use was made of these kinds of methods [6] (VROM, 1984). The most commonly used methods were checklists and simple matrices, readily accessible data relating to the main characteristics of the project and the conditions of the surrounding environment, and expert judgment and simple prediction techniques (augmented by consultation with authorities and interested groups) to assess the significant impacts of projects. Only later, and then mainly in the case of larger scale developments, were some more sophisticated methods used.

A substantial majority of the EIA reports, produced in those early years, were judged to be of unsatisfactory quality (CEC, 1993). However, in many cases, this was probably more due to the unsatisfactory application of simple methods than to the lack of use of more complex methods. Current evidence on the quality of strategic environmental assessments for planning studies is much more limited but points to the likelihood of a serious, initial quality problem which is probably due to similar causes (Lee et al, 1999). It is likely that similar conclusions apply to many first-generation IAs/SIAs, although the precise nature and extent of these deficiencies need to be examined further through systematic quality review studies. A number of quality review studies of IAs/SIAs are ongoing, for example, relating to the quality of Extended Impact Assessment reports prepared by the European Commission (see Wilkinson et al, 2004; Lee and Kirkpatrick, 2004).

[6] In an investigation, undertaken on behalf of two Dutch ministries in the early 1980s, consultants identified nearly 350 different methods for predicting environmental impacts in the North American and Western European literature. However, surveys of EIA prediction practice showed that less formal approaches, notably the use of expert opinion, were most frequently used. Also, where more formal methods were applied, it was the simpler versions of these that were most commonly used (VROM, 1984).

**************************************************************************

If real progress is to be made in addressing several of the above difficulties, those engaged in these strategic assessment and planning processes – decision-makers, planning practitioners, researchers, consultants, other stakeholders – will need to develop a better shared understanding of what constitutes a satisfactory integrated/sustainability impact assessment. For this, an agreed assessment framework (rather than a rigid blueprint) is probably required. In Sections 3-5, the outlines of one such framework are developed, which contains three inter-related components [7] (see Figure 1). These are:

• the planning and assessment context within which the IA/SIA is to be undertaken;

• the process by which the assessment is to be undertaken and used for policy-planning and decision-making purposes; and

• the methods, technical and consultative, by which the impacts are to be assessed.

The mechanisms by which this shared understanding may be developed – and training provided in its application – are considered further in Section 6.

[7] The interdependencies between, and within, each of the three components are important. For example, the regulatory context influences both the assessment process which is followed and the assessment methods which are used. The assessment process conditions the assessment methods used, and vice versa. Over time, changes in assessment process and/or methods may have a feedback effect on the planning and assessment regulatory systems within which they operate.

FIGURE 1: COMMON ASSESSMENT FRAMEWORK
[Figure 1 is a diagram linking the three components of the framework: Assessment Context, Assessment Process and Assessment Methods.]

3. PLANNING AND ASSESSMENT CONTEXT

Assessment processes and methodology are often described, in the research and guidance literature, according to their general properties. In practice, the specification of their form and application needs to be related to the type of case and context under consideration. Therefore, in elaborating the tasks at each stage in the assessment process (Section 4), and in selecting assessment methods to use for each task (Section 5), their appropriateness to the specific planning and assessment context should be a fundamental consideration.

These contextual considerations should be at the core of any shared understanding of what constitutes a satisfactory assessment for any specific PPP. At the commencement of the assessment process, there should be a brief, collective attempt to identify the key contextual conditions, which may need to be taken into account in shaping the assessment process and the methodology to be used. Some of this basic information may be used again at subsequent stages in the assessment (eg. scoping). It can then serve as a continuing ‘reality check’ during the remainder of the assessment process. The types of information, to be considered, may be grouped into three categories:


1. The regulatory and institutional context within which the assessment is to be undertaken. This may cover:

• The key requirements and constraints which the regulatory framework imposes on the assessment (eg. relating to the objectives of the PPP, the formal stages and procedures in its implementation, provisions for consultation, time schedules, etc).

• The main authorities, other agencies and stakeholders likely to be involved in the preparation and/or assessment of the PPP.

• Other PPPs, within the overall regulatory framework, whose formulation and implementation may need to be taken into account in the assessment of the PPP under consideration.

2. The characteristics of the PPP to be assessed. This may cover:

• Is it large or small scale, single or multi-sector, policy or investment-oriented?

• Is it to be assessed at an early or late stage in the overall planning process?

• What is the geographic extent of the area likely to be impacted and the extent of its economic, social and environmental diversity?

3. The resources available for the completion of the assessment. This may include:

• The staffing resources, other expertise and financial support likely to be available for undertaking the assessment.

• The extent of the relevant data likely to be available for the assessment.

• The time constraints within which the assessment must be completed.

The information provided under the first two headings can be used to form a first impression of the likely nature, duration, potential scope, level of detail and complexity of the assessment work to be undertaken. The information provided under the third heading should indicate the types and scale of resources likely to be available to complete the assessment. A comparison between the two sets of information provides a first opportunity for a ‘reality check’ in developing a shared understanding of how the assessment may be best undertaken.

This initial activity may require, in straightforward cases, little more than a single meeting of the parties concerned, based on a short (2-3 pages) briefing note summarising key information under the above three headings. [8] The purpose of the meeting would be to verify this information and make a preliminary identification of any potential difficulties where assessment requirements might exceed the available assessment resources. This should then be taken into consideration at the scoping stage of the assessment process, which is examined in the next section.

4. ASSESSMENT PROCESS

The focus of this section is on the main stages of the assessment process to be followed and their relationship to the main stages of the planning process for the PPP being assessed. The guiding principle on the timing of stages in the assessment process is to try to ensure that the relevant assessment findings are available for use at the key decision-points in the corresponding stages of the planning process to which they relate. These timings will often be context-specific.

The assessment process should begin, preferably, at or near the commencement of the planning process. It then becomes an ongoing activity, which helps to shape the development of the PPP at each decision point in the planning process. Too often the commencement of the assessment is delayed, sometimes until the PPP is almost completed. Where this happens, its contribution to the planning process, and its influence on the final version of the PPP, are likely to be very limited. In extreme cases, it may do little more than provide an ex-post justification for a planning decision which, de facto, has already been taken. Whilst it is very desirable that the assessment process is closely co-ordinated with the planning process, it is also important that it preserves its own independence and integrity. This may be supported by provisions for independent auditing of both the assessment process and the assessment findings (eg. by stakeholder and peer review).

[8] This meeting should be held at an early stage in the preparation of the PPP and its assessment process.

There is already considerable practical experience in ‘staging’ assessment activities within the overall assessment process (eg. for environmental assessments). These, with adaptations for any significant differences in their contextual characteristics, may also be used initially in IA/SIA. (Thereafter, they may be refined on the basis of more direct experience.) This is illustrated below, distinguishing between the principal stages of screening, scoping, preparing preliminary or detailed assessments, and monitoring/ex post evaluation.

1. Screening

This determines, at the beginning of the assessment process, which PPPs are to be subject to assessment. Exemptions may be granted where:

• The implementation of the PPP, by virtue of its nature, size or location, is unlikely to have significant impacts; or

• The PPP may be more appropriately appraised at a different phase in the planning cycle; or

• Separate provision has already been made for an alternative type of assessment.

Screening under the first of these categories may be undertaken:

• Either using lists of types of PPPs, which indicate those types that do (positive lists) or do not (negative lists) require assessment;

• Or, on a case-by-case basis, using the above criteria.

Some screening systems also make a distinction between PPPs which only require a preliminary assessment and those which require a detailed assessment. Preliminary assessments may have simpler impact assessment requirements, briefer reporting requirements and fewer process requirements (eg. relating to consultation and the need to publish assessment reports). On the other hand, some assessment systems make no distinction at this stage, but achieve a similar outcome at the scoping stage by setting less or more demanding terms of reference for different PPPs.
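As a purely illustrative aid (not part of the original paper), the screening logic just described can be sketched in a few lines of Python. The list contents, category labels and decision rules below are invented placeholders; an actual screening system would be defined by the relevant regulatory framework.

```python
# Illustrative screening sketch (hypothetical lists and rules): positive and
# negative lists are checked first, then the case-by-case criteria, and a
# distinction is drawn between preliminary and detailed assessment.

POSITIVE_LIST = {"regional development plan", "national transport programme"}  # hypothetical
NEGATIVE_LIST = {"minor administrative circular"}                              # hypothetical

def screen(ppp_type: str, significant_impacts_likely: bool,
           better_assessed_at_other_stage: bool, alternative_assessment_exists: bool) -> str:
    """Return 'exempt', 'preliminary assessment' or 'detailed assessment'."""
    if ppp_type in NEGATIVE_LIST:
        return "exempt"
    if better_assessed_at_other_stage or alternative_assessment_exists:
        return "exempt"  # assessed elsewhere in the planning cycle, or by another instrument
    if ppp_type in POSITIVE_LIST:
        return "detailed assessment"
    if significant_impacts_likely:
        return "preliminary assessment"  # some systems defer this choice to scoping
    return "exempt"      # unlikely to have significant impacts

print(screen("regional development plan", True, False, False))  # -> detailed assessment
```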

2. Scoping

This establishes the Terms of Reference (TORs) for the assessment, and should normally be completed in a relatively short period of time using existing information and some consultations. The TORs may include:


• A set of goals which the PPP is expected to pursue, as well as a description of the problem to be addressed.

• A listing of the components of the PPP and the types of resulting impacts, which should be assessed.

• A definition of the system boundaries within which the assessment should be undertaken.

• A list of alternatives to the proposed PPP which should be investigated, and types of mitigating and enhancing measures, which may also need to be examined.

• An indication of the levels of detail in which different parts of the assessment should be undertaken.

• An indication of the types of methods (including those relating to consultation and data collection) to be used in the assessment.

Taken together, the screening and scoping stages are crucial in ensuring that the resources available for the impact assessment are used most effectively within the assessment process. In effect, they also provide a second ‘reality check’ on the assessment’s feasibility. Because of its importance, there is often a case for publishing the scoping report and providing stakeholders with an opportunity to comment upon it before the Terms of Reference are formally adopted and the assessment proceeds.

3. Preliminary and Detailed Assessments

The additional data collection and analysis which is required for the Preliminary Assessment, beyond what has already been used at the scoping stage, may be quite limited. Little or no new data may be required and the assessment methods to be used should still fall into the ‘simple’ category. Some suggest that, in the case of these PPPs, the assessment process may be streamlined by combining the scoping and preliminary assessment stages.

Further analyses are required in Detailed Assessments but it is still important not to overstate their additional requirements. The Terms of Reference, prepared during scoping, should have indicated what needs to be included and at what level of detail. In the majority of cases the assessment is quite likely to rely mainly on the effective use of simpler methods. Additional new data may be needed in some cases, but only within prescribed limits and for selected purposes.


Preliminary assessment reports are likely to be quite short; possibly their principal findings may be summarised in as little as two pages. Detailed assessment reports are likely to be considerably longer but not to the extent that they cease to be useful for consultation and decision-making purposes. They should include short, non-technical summaries. The findings of these Preliminary and Detailed Assessment reports, together with the consultation findings based on these, should be integrated into the decision-making stage/s of the planning process for the PPPs concerned.

4. Monitoring and Ex Post Evaluation

These are among the least developed stages in many assessment systems, despite being central to their long-term overall effectiveness.

Preliminary and Detailed assessment reports should include recommendations for monitoring the implementation of the PPP and for evaluating its impacts. These ex-post requirements should be proportionate to the scale and complexity of the PPP and its expected impacts. They are likely to be significantly less for Preliminary than Detailed assessments.

The recommendations for monitoring and ex-post evaluation should be an integral component of the final PPP proposal, which is submitted to the authority responsible for its approval and subsequent implementation. Their overall purpose is:

• To check whether the approved PPP, and its accompanying mitigatory and enhancing measures, have been satisfactorily implemented.

• To assess whether the type and level of impacts, positive and negative, predicted to occur, have resulted (and whether any unexpected significant impacts have occurred).

• To recommend, in the light of the above, where either the implementation of the PPP needs to be strengthened or where it may need to be amended.

The effectiveness of monitoring and ex post evaluation provisions is likely to be greater where the principal stakeholders have been involved in formulating their requirements and where the ex post findings are made public.


5. ASSESSMENT METHODS

The set of methods, which are used to carry out a particular assessment, form its overall methodology. This should be ‘tailor made’ to the requirements of the individual assessment. The methods may be quantitative and/or qualitative, technical and/or participative, etc. Different methods will be chosen to serve different kinds of tasks within the overall methodology; for example to:

• Describe existing economic, environmental and social conditions within the study area.

• Predict likely future conditions under the ‘policy-off’ (i.e. no new policy) scenario.

• Formulate planning goals, taking into consideration any major problems and their root causes, which may have been identified within the ‘policy-off’ scenario.

• Identify alternative strategies to address these problems and contribute to the attainment of the goals.

• Assess the likely extent to which each of the options will address these problems and help to achieve these goals.

• Compare the likely outcomes for each option.

• Present the assessment findings in a suitable form for use by decision-makers and other stakeholders.

For each assessment task, there will often be a number of alternative methods, which could be used. Since the choices between them may have a major influence on the quality of the overall assessment, their selection needs to be made in a systematic manner; for example, through a tasks-methods analysis. Factors to take into account when choosing between alternative methods may include:

• The nature of the assessment task.

• The level of detail and degree of accuracy with which the task needs to be performed.

• The consistency of each method selected with the other assessment methods to be included within the methodology.

• The data, expertise, time and other resource requirements of each method.


• The transparency, intelligibility and credibility of each method as perceived by decision-makers and other stakeholders.

In turn, each of these may be influenced by Context and Process considerations within the Assessment Framework.

This tasks-methods analysis is initially undertaken during the scoping stage but may need refining as the assessment proceeds. Undertaking this analysis provides a further opportunity to develop a shared understanding among stakeholders of what will constitute a satisfactory impact assessment for the PPP in question. It also provides another opportunity for a ‘reality check’. Tasks-methods matrices and decision trees [9] are useful tools in systematising these analyses.

The findings of tasks-methods analyses highlight the main differences, in methodology and data requirements, between preliminary and detailed assessments. The former are much more likely to rely on simple methods and existing data; the latter are more likely to supplement these with some more complex methods which require new data. Within both categories there will also be differences in the extent and level of analysis, depending on the importance of the impacts and level of stake-holder concerns, the availability of appropriate data and the severity of other constraints (time, expertise, resources, etc) within the assessment process.
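By way of illustration only, a tasks-methods matrix of the kind referred to above can be represented as a simple cross-tabulation of candidate methods against assessment tasks, with each cell holding a qualitative suitability judgement. The tasks, methods and ratings in the following Python sketch are hypothetical examples, not recommendations from the paper.

```python
# Illustrative tasks-methods matrix (hypothetical tasks, methods and ratings).
# Each cell holds a qualitative judgement of how well a method suits a task,
# given factors such as detail required, data/time demands and transparency.

tasks = ["describe baseline", "predict impacts", "compare options"]
methods = ["expert judgement", "simple matrix", "systems model"]

matrix = {   # ratings to be agreed among practitioners, researchers and stakeholders
    ("describe baseline", "expert judgement"): "good",
    ("describe baseline", "simple matrix"):    "fair",
    ("describe baseline", "systems model"):    "poor",   # data and time demands too high
    ("predict impacts",   "expert judgement"): "fair",
    ("predict impacts",   "simple matrix"):    "fair",
    ("predict impacts",   "systems model"):    "good",   # only if data and expertise exist
    ("compare options",   "expert judgement"): "fair",
    ("compare options",   "simple matrix"):    "good",
    ("compare options",   "systems model"):    "fair",
}

print("task".ljust(20) + "".join(m.ljust(20) for m in methods))
for t in tasks:
    print(t.ljust(20) + "".join(matrix[(t, m)].ljust(20) for m in methods))
```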

**************************************************************************

The remainder of this section briefly reviews certain of the tasks [10] to be undertaken at different stages in the assessment process and particular methods [11] which may assist in carrying out different assessment tasks. It attempts to reflect the ‘context’ and ‘process’ considerations presented in the earlier sections of the article, as well as the tasks-methods selection criteria already mentioned in this section. The tasks and methods which are reviewed include: a range of foundation tasks; the use of checklists, matrices and causal chain analyses; methods for forecasting impacts and handling uncertainty; options analysis; and consultations and stakeholder participation.

[9] Kirkpatrick and Lee (2002) illustrate the use of a decision tree approach in constructing a case-specific SIA methodology for the WTO trade negotiations.

[10] Data gathering methods are not included, due to limitations of space. Data banks and data intelligence units can make an important contribution to increasing the stock of existing data and their accessibility for planning and assessment purposes. A wide range of surveying, interviewing and observational methods may be used to gather new data. However, doing this satisfactorily, in a short period, can be problematic because of time, resource and specific skill constraints.

[11] The literature on IA/SIA methods is very extensive and only a small proportion of references, mainly those reviewing a range of different methods, have been directly used in preparing this article. They include Bond et al, 2001; Caratti et al, 2004, forthcoming; GIWA, 2002; Hill, 1968; Jiliberto et al, 2002; Lee and Kirkpatrick, 2001; Lee, 2002; Lichfield, 1996; McCulloch et al, 2001; Parsons, 1995; Ravetz, 1998, 2000; Rayner and Malone, 1998; Rotmans, 1998; Rotmans and Dowlatabadi, 1998; Toth and Hizsnyik, 1998. These are in addition to the following, previously cited, references: DETR, 2000b-d; DTLR, 2002; EC, 2002b; ECOTEC, 1997; ERM, 1998; GHK, 2002; Moss and Fichter, 2000; NWRA, 2002.

1. Foundation tasks

These are a set of basic tasks which, undertaken with appropriate methods, should collectively establish the foundations for a well-focused, realistic assessment. The tasks may include: defining system boundaries; problem identification, goal and target setting; options identification; specification of impact indicators. Foundation tasks are relevant to both preliminary and detailed assessments but are undertaken in a simpler way in preliminary assessments. The tasks are mainly undertaken at the scoping stage but, in the case of detailed assessments, the findings may be refined as the assessment proceeds.

The definition of system boundaries may cover the geographic extent and time horizon of the impact assessment, and the extent to which exogenous influences, originating from outside the study area, are to be reflected in the analysis. These matters need to be clarified at an early stage in the assessment to minimise ambiguities and inconsistencies in the later analysis and to keep the overall assessment within achievable limits.

Problems, goals and targets are inter-related. A problem arises from an unfulfilled goal; and a target may be a stepping-stone to achieving that goal. As in the case of most of these foundation tasks, consultation (see 5. Consultation and Participation) is likely to be central to their initial identification but simple consistency checks should also be undertaken between these three inter-related elements. Initially, targets may be expressed in approximate, qualitative terms; only when the assessment is more advanced can the realism and appropriateness of more precisely defined targets be checked.

Options identification can also only be attempted in a preliminary manner at an early stage in the assessment. However, it is a useful discipline to stimulate the identification of broad options [12], early in the planning process, recognising that the options list will need to be amended and refined as the assessment proceeds.

[12] The initial options list should not be defined too narrowly or conservatively; some thinking ‘outside the box’ should be encouraged.

Another important foundation task is the construction of a set of impact indicators to use in assessing the likely contribution of different policy options towards achieving specified targets and goals (target indicators) or strengthening the process for achieving those targets and goals (process indicators). This can be one of the most challenging foundation tasks. Assuming the overarching goal of the PPP is to promote sustainable development, the following issues may need to be considered:

• What definition of sustainable development is to be used as a basis for formulating these indicators? For example, in the case of target indicators, the adoption of an inter-generational equity goal may point to the use of economic, environmental and social capital stock indicators, whilst acceptance of an intra-generational equity goal would imply the use of distributional indicators.

• How are global/national target indicators to be translated into lower-level target indicators for use in regional and local assessments? Targets and indicators may need to be set for PPPs with more limited geographic boundaries, which cross-refer to SD targets and indicators formulated at more aggregate levels in the planning process (George, 1997; DETR, 2000 continuing).

• Can the preferred indicators be satisfactorily measured in practice? Measures of economic, environmental and social capital, at different levels of spatial aggregation, are notoriously difficult to construct. There are also considerable difficulties in constructing intra-generational equity measures. In many cases, the search (not always successful) becomes one for suitable proxy measures to use as substitutes.

• Can suitable process indicators be constructed? Two types of indicators may be particularly relevant – those recording progress in a) implementing key policies consistent with sustainability principles and b) strengthening governance capacities to adopt and implement sustainability measures. These indicators do not readily lend themselves to measurement and changes are often expressed in qualitative terms.

• How many SD indicators, quantitative and qualitative, should be used in IA/SIA studies? The practical difficulties in using large numbers of indicators are increasingly recognised. Possibly, no more than fifteen headline indicators should normally be used in any assessment. In more detailed assessments, some headline indicators may be sub-divided into a limited number of narrower, second tier indicators (Kirkpatrick and Lee, 2002).

Each of the above foundation tasks should be simpler and less demanding for preliminary assessments than detailed assessments. However, in neither case should these involve the use of complex assessment methods or extensive data collection and empirical analysis. The main requirements are likely to be for analytical clarity and well-targeted consultations. The biggest challenge is likely to be the selection of appropriate SD indicators, especially where there are no closely relevant existing indicator studies on which to draw.
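The indicator structure discussed above – a small set of headline indicators, some sub-divided into second-tier indicators, mixing target and process types – could be recorded in a simple data structure. The sketch below is a hedged illustration; the indicator names, types and units are invented and do not come from any official indicator set or from the paper.

```python
# Illustrative SD indicator set (hypothetical names, types and units): a few
# headline indicators, each optionally broken into second-tier indicators.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    kind: str                      # 'target' or 'process'
    unit: str = "qualitative"      # e.g. 'jobs', 'tonnes CO2e', or 'qualitative'
    second_tier: list = field(default_factory=list)

headline = [
    Indicator("regional employment", "target", "jobs",
              second_tier=[Indicator("employment of low-income groups", "target", "jobs")]),
    Indicator("greenhouse gas emissions", "target", "tonnes CO2e"),
    Indicator("governance capacity for sustainability", "process"),
]

assert len(headline) <= 15, "keep the number of headline indicators manageable"
for ind in headline:
    tiers = ", ".join(i.name for i in ind.second_tier) or "none"
    print(f"{ind.name} ({ind.kind}, {ind.unit}); second tier: {tiers}")
```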

2. Checklists, matrices and causal chains

Checklists and matrices are amongst the most frequently mentioned assessment methods in guidance for practitioners. Checklists, although used at several stages in the assessment process, are chiefly associated with sensitising assessors to the potentially important components of the PPP and baseline environmental conditions, and the main types of their resulting impacts, which may need to be considered in the initial stages of the assessment. Often, however, they require little more than ‘ticking’ successive items on a list (indicating that they have been considered) although, in some cases, they may require providing some additional information or comment.

Matrices can be regarded as a combination of two checklists. For example, they may list the main components of a proposed PPP on one axis and the main types of impacts it may cause on the other. The cells within the matrix are then filled by recording the likely direction and level of significance of each expected impact for each component of the PPP. However, there are many different formulations of the matrix which enable it to be used for differing tasks at several stages in the assessment process.

Matrices are potentially useful tools for organising, summarising and presenting different types of impact assessment information. They are relatively simple to use in both preliminary and detailed assessments. However, their relative importance within the overall assessment can be over-stated. The fundamental limitation of the matrix (and of the checklist) is that it supplies no insight or information on how the findings, summarised in the cells, have been derived. Sometimes a ‘justification’ column is added to the matrix but, in practice, only very brief substantiation may be provided.
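A minimal sketch of such an impact matrix, including a brief justification entry for each cell, is given below. It is an illustration only: the PPP components, impact categories and cell entries are invented, and a real matrix would be populated through the consultative and analytical steps described in this paper.

```python
# Illustrative impact matrix (hypothetical entries): PPP components on one
# axis, impact types on the other; each cell records the expected direction,
# significance and a brief justification for the judgement.

components = ["road upgrade", "business park"]
impact_types = ["economic", "environmental", "social"]

matrix = {
    ("road upgrade", "economic"):       ("+", "moderate",    "improved access to markets"),
    ("road upgrade", "environmental"):  ("-", "significant", "habitat fragmentation along the route"),
    ("road upgrade", "social"):         ("+", "minor",       "shorter journey times"),
    ("business park", "economic"):      ("+", "significant", "new local employment"),
    ("business park", "environmental"): ("-", "moderate",    "loss of greenfield land"),
    ("business park", "social"):        ("?", "uncertain",   "depends on local skills match"),
}

for component in components:
    for impact in impact_types:
        direction, significance, justification = matrix[(component, impact)]
        print(f"{component:<14} {impact:<14} {direction:<2} {significance:<12} {justification}")
```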

Causal Chain Analysis (CCA) is less well known, and less used, in assessment practice than checklists and matrices, although it has a relatively long history under different names, such as network analysis, root cause analysis or cause-effect analysis. Its main purpose is to identify, in a structured manner, the significant sections of the causal chain which link a) the problems to be addressed by the PPP to their root causes, and b) the PPP (and any alternatives to be assessed) to their sustainability impacts (GIWA, 2002). Thus, it aims to provide an analytical explanation of the impacts that are likely to result from the PPP and an analytical structure for their empirical assessment. In effect, it should provide the causal explanation (sometimes called the ‘missing middle’) which is not obtainable from the matrix.

Each PPP is likely to have some direct economic, environmental and/or social impacts and, because these are most readily recognised, they may be identified first of all. However, there may also be some significant indirect impacts – for example, where a direct economic impact induces a significant indirect environmental or social impact (or vice versa) – which should also be identified. There are obvious dangers in analysing causal chains in great detail. Therefore, it is important to focus on the main skeleton (i.e. the significant sections) of the causal chain, which should, in any case, be more simply examined in preliminary assessments than detailed assessments. In the case of multi-sectoral, or multi policy, detailed assessments, it may be easier to prepare separate, simple causal chains for each major sector or policy before attempting to link these together. Mostly, CCA will be undertaken using qualitative criteria for the identification of significant sections in causal chains, based upon a combination of literature review, expert opinion and other consultations.
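Causal chain analysis of this kind can be pictured as a small directed graph linking a PPP through intermediate effects to its sustainability impacts, with only the significant links retained in the skeleton used for the assessment. The following Python fragment is an illustrative sketch; the nodes, links and significance judgements are hypothetical.

```python
# Illustrative causal chain (hypothetical nodes and links). Each link carries a
# qualitative significance judgement; only 'significant' links are kept in the
# skeleton used for the assessment.

edges = [
    ("trade liberalisation PPP", "expansion of export agriculture", "significant"),
    ("expansion of export agriculture", "increased rural incomes", "significant"),
    ("expansion of export agriculture", "increased water abstraction", "significant"),
    ("increased water abstraction", "reduced wetland area", "significant"),
    ("increased rural incomes", "increased demand for consumer goods", "minor"),
]

def skeleton(all_edges):
    """Keep only the significant sections of the causal chain."""
    return [(a, b) for a, b, significance in all_edges if significance == "significant"]

def downstream(node, links, seen=None):
    """Trace the impacts reachable from a node along significant links."""
    seen = seen if seen is not None else set()
    for a, b in links:
        if a == node and b not in seen:
            seen.add(b)
            downstream(b, links, seen)
    return seen

links = skeleton(edges)
print(sorted(downstream("trade liberalisation PPP", links)))
```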

3. Forecasting future impacts and handling uncertainty

The tasks associated with forecasting, and prediction more generally, are among the most problematic in the IA/SIA process and the least satisfactorily performed at present.


These tasks occur at several stages in the assessment process, from scoping through to the completion of detailed assessments. The reasons for the difficulties include:

• The uncertainty that surrounds the prediction of future impacts and their determinants – hence the need to strengthen ways of handling uncertainties within assessments. [13]

• The complexity of the interdependent systems (economic, environmental and social) through which PPP impacts on sustainable development are generated and transmitted – hence the need for causal chain analysis to help in identifying the most significant sections within these interdependent systems.

A fundamental predictive task is determining the baseline against which the PPP’s likely impacts are to be assessed. (The baseline is not the current economic, environmental and social situation, but the likely future situation should the PPP, or its alternatives, not be implemented.) This involves considering:

• Key differences between current and baseline conditions, which are relevant to the PPP’s assessment; and

• Key differences, between the causal chain findings, under current and baseline conditions, which are also relevant to the PPP’s assessment.

Where there are significant uncertainties relating to either of these, scenario and sensitivity analyses may be helpful in exploring the baseline implications of alternative futures.

The determination of baseline conditions should not be a major exercise, in the case of preliminary assessments, provided that only a small number of alternative scenarios are to be investigated. The analysis may be largely qualitative, i.e. assessing the likely direction and significance of changes in baseline conditions, in respect of each SD indicator being used, compared with the current situation. It should indicate which aspects of the current problem, to which the proposed PPP is addressed, are likely to get worse or better, and to what degree, if it is not implemented.

[13] Uncertainty arises in a number of different ways within the assessment process – for example, due to uncertainty about future, exogenously determined conditions (e.g. relating to technological, socio-economic or environmental conditions); due to limitations in current scientific and other forms of knowledge (e.g. as reflected in the specification of dose-response functions or, more generally, in causal-chain analyses); and due to conflict and the lack of stability in values held within society. Responses to these uncertainties range between those which are technical (e.g. incorporating some form of probability analysis into the assessment methodology) and those which are process and procedural (e.g. specifying the procedures, and the role of stakeholders and other consultees within these, by which uncertainties are to be handled within the assessment process). These two approaches are sometimes presented as rival solutions but the preferred option may be a simplified hybrid of the two. This subsection is mainly confined to technical aspects of forecasting, prediction and uncertainty, whilst the role of stakeholders and other consultees within the assessment process, and their contribution to handling uncertainty, is discussed in the ‘Consultation and Participation’ subsection. Different approaches and methods for handling uncertainty are further examined in de Bruijn and ten Heuvelhof, 2002; Holling, 1978; Jiliberto et al, 2002; Parsons, 1995; and Rayner and Malone, 1998.

A similar approach could then be followed when assessing the likely impact of the PPP itself (and any alternatives) in terms of the likely direction and significance of changes in SD indicators compared to the baseline situation. Simple sensitivity tests may be applied to assess the likelihood of the assessment findings changing significantly between different scenarios. Some practical assistance should be available from the authorities who are constructing economic, social and environmental forecasts for their own planning purposes. They may only require checking for their appropriateness and coverage, supplemented by some limited gap-filling. This may be undertaken drawing upon existing data and simple methods of analysis, supported by consultations with experts and other stakeholders.
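A largely qualitative baseline-and-options comparison of the kind described in the preceding paragraphs might be organised along the lines of the sketch below. The indicators, scenarios and ratings are invented placeholders, and the simple sensitivity check merely flags indicators whose assessed rating changes between scenarios.

```python
# Illustrative qualitative comparison of baseline scenarios and a PPP option.
# Ratings are direction/significance judgements ('++', '+', '0', '-', '--', '?');
# all indicators, scenarios and values are hypothetical.

indicators = ["regional employment", "greenhouse gas emissions", "access to services"]
scenarios = ["low-growth scenario", "high-growth scenario"]

baseline = {   # likely change in each indicator if the PPP is NOT implemented
    "low-growth scenario":  {"regional employment": "-", "greenhouse gas emissions": "0",
                             "access to services": "-"},
    "high-growth scenario": {"regional employment": "0", "greenhouse gas emissions": "+",
                             "access to services": "0"},
}
with_ppp = {   # likely change relative to each baseline scenario if it IS implemented
    "low-growth scenario":  {"regional employment": "+", "greenhouse gas emissions": "+",
                             "access to services": "+"},
    "high-growth scenario": {"regional employment": "+", "greenhouse gas emissions": "++",
                             "access to services": "+"},
}

# Simple sensitivity check: flag indicators whose assessed rating with the PPP
# differs between scenarios; these deserve closer attention in the assessment.
for ind in indicators:
    base = {s: baseline[s][ind] for s in scenarios}
    impact = {s: with_ppp[s][ind] for s in scenarios}
    flag = "sensitive to scenario" if len(set(impact.values())) > 1 else "robust across scenarios"
    print(f"{ind:<26} baseline {base}  with PPP {impact}  -> {flag}")
```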

However, there is likely to be a need for more substantial forecasting and prediction studies in those detailed assessments where:

• The indirect impacts of the PPP are complex, highly significant and difficult to measure;

• Several scenarios and PPP alternatives are to be investigated; and/or

• Quantitative estimates of major impacts are sought.

In some of these cases, a formal, systems modelling approach to impact prediction may be desirable – but it may not always be feasible. Modelling of economic systems is more developed than of environmental and social systems, and linked economic-environmental-social system modelling is the least developed of all. There are also significant differences in modelling developments according to type of country, governmental/administrative level, type of PPP and sectoral coverage. Even where a suitable model exists or can be soon developed, other practical obstacles may be faced. These include insufficient data, time, financial resources and/or expertise for its satisfactory application. These difficulties may be progressively reduced over time (see Section 6 on recommended improvements) but, in the meantime, second-best solutions are needed. These are likely to involve the use of less ambitious predictive approaches such as hybrid predictive methods (e.g. linking the quantitative modelling of economic impacts to more qualitative analyses of their consequential environmental and social impacts), more refined forms of the simpler predictive methods used in preliminary assessments, and more extensive use of expert and stakeholder consultations (see Kirkpatrick and Lee, 2002 for further details).

4. Assessment summaries, option comparisons and decision-making

Different kinds of appraisal methods and criteria have been developed to summarise assessment findings in a form suitable for use by stakeholders and decision-makers. These may be used at intermediate stages in the assessment process (e.g. when discarding less preferred options) as well as when the final version of the preferred PPP is to be approved. In some cases, the assessment procedures and criteria are specified in regulations or have evolved over time to conform to the prevailing planning and decision-making ‘culture’. Some decision-making approaches are more technocratic, where explicitly defined assessment criteria are applied by professional experts. Others are more participative, where the structural form of the decision-making process (including the consultative process embedded within it) is the more dominant influence.

The appraisal of a PPP, whose overall objective is to promote sustainable development, might be expected to use assessment criteria and sustainability indicators that are consistent with this overall goal. However, in practice, many PPPs are also expected to serve more specific objectives of an economic, environmental or social nature. The relationship between these more specific objectives and more general sustainable development objectives should be clarified at the scoping stage, before the assessment criteria and indicators are finalised.

Additional considerations, when presenting assessment summaries for consultation and decision-making purposes, are whether or not the findings should be quantified and/or aggregated.

• Quantitative vs qualitative findings. At strategic levels of assessment (and particularly in preliminary assessments) the presentation of findings in a quantitative form may create an exaggerated impression of accuracy. This may be counterproductive as well as misleading, since a high level of precision in impact estimates is rarely needed for strategic-level decision-making. Furthermore, quantified estimates, where these are expressed in a variety of units with which decision-makers and stakeholders are unfamiliar, may be confusing rather than informative. On the other hand, where qualitative measures are used, the categorisation of the severity of the impacts, and the criteria upon which the categories are based, need to be very clearly stated and consistently applied. Whichever of these approaches (or a hybrid of the two) is selected will have important implications for the assessment methods to be used and the type and quantity of data required, at several stages in the assessment process. The choice between these approaches should initially be made at the scoping stage.

• Aggregated vs disaggregated findings. Presenting the estimated impacts of a PPP (and of its alternatives) in an aggregated form may, in one sense, simplify the assessment findings to be considered by decision-makers and strengthen their consistency with PPP objectives. Two of the most frequently mentioned aggregated forms of assessment are cost-benefit analysis (CBA) and scoring-weighting analysis, which is one form of multi-criteria analysis (MCA). However, both face a number of difficulties. For example, the assumed goals of CBA may not correspond to those of sustainable development (e.g. in relation to inter-generational and intra-generational equity), major difficulties may be encountered in the monetary valuation of key impacts, and CBA findings may not be accepted, as a basis for decision-making, by some influential stakeholder groups.

Aggregated forms of scoring-weighting are also criticised, particularly where the basis for the scores and weights is not made clear. More disaggregated forms of MCA may be more acceptable, particularly where the main stakeholders have had the opportunity to develop a shared understanding of the particular form it should take. Two kinds of options analysis matrix, which have been successfully used in earlier planning studies and may be adapted for SIA purposes (an illustrative sketch follows the list below), are:

• A goal-achievement matrix which indicates the extent to which the preferred PPP (and each of the main alternatives to this) is likely to contribute to each of the sustainability goals/targets that has been set (Hill, 1968).

• A planning balance sheet, which shows the expected distribution of impacts (positive and negative) between major stakeholder categories, for each option investigated (Lichfield, 1996).
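The sketch below illustrates, in hedged and simplified form, how a goal-achievement matrix in the spirit of Hill (1968) and a planning balance sheet in the spirit of Lichfield (1996) might be laid out; the options, goals, stakeholder groups and qualitative entries are invented for the example and are not drawn from either source.

```python
# Illustrative goal-achievement matrix and planning balance sheet. Entries are
# qualitative contributions ('++', '+', '0', '-', '--'); all options, goals and
# stakeholder groups are hypothetical.

options = ["preferred PPP", "alternative A"]
goals = ["employment target", "emissions target", "equity goal"]
stakeholders = ["local residents", "businesses", "environmental groups"]

goal_achievement = {          # option x goal: likely contribution to each goal/target
    ("preferred PPP", "employment target"): "++", ("preferred PPP", "emissions target"): "-",
    ("preferred PPP", "equity goal"): "+",
    ("alternative A", "employment target"): "+",  ("alternative A", "emissions target"): "0",
    ("alternative A", "equity goal"): "+",
}

balance_sheet = {             # option x stakeholder group: expected distribution of impacts
    ("preferred PPP", "local residents"): "+", ("preferred PPP", "businesses"): "++",
    ("preferred PPP", "environmental groups"): "-",
    ("alternative A", "local residents"): "+", ("alternative A", "businesses"): "+",
    ("alternative A", "environmental groups"): "0",
}

print("Goal-achievement matrix")
for opt in options:
    print(" ", opt, {g: goal_achievement[(opt, g)] for g in goals})

print("Planning balance sheet (distribution of impacts)")
for opt in options:
    print(" ", opt, {s: balance_sheet[(opt, s)] for s in stakeholders})
```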


Disaggregated qualitative impact assessment findings can be prepared and presented, using relatively simple methods, for preliminary studies and, including some quantified estimates, for detailed studies. However, a general requirement for quantification, and especially for monetisation, of impact estimates in detailed studies may be neither feasible nor desirable.

5. Consultation and Participation

Consultation and Participation (C&P) is integral to a number of assessment stages and tasks in the IA/SIA process (see Section 3), as well as being an important component of the IA methodology reviewed in this section. However, it can also make a major contribution to ‘bridging the gap’ in integrated assessment by bringing together technical specialists and researchers, along with public and other stakeholder representatives, in multi-stakeholder assessment groups engaged in C & P activities.

A number of challenges, relating to the effective consultation and participation of the public, other stakeholders and technical experts, have to be faced, as indicated below:

• A fundamental challenge is how the general public, as well as a wide range of business, social and environmental interests, can be effectively involved in C&P where, for example, the proposed PPP may cover a large geographic area and only address planning issues at a broad strategic level. In such circumstances, more success may be achieved when efforts are made to engage representatives of the public, other stakeholder groups and technical specialists/researchers in joint C&P assessment group meetings. Those invited to participate should be sufficiently representative of the different communities and socio-economic groups within the affected area, as well as of stakeholders and technical experts covering each of the three pillars (economic, environmental and social) of sustainable development.

• A second challenge is to determine an appropriate role for a multi-stakeholder assessment group in relation to the specific tasks to be performed at different stages in the assessment process. This needs to be defined realistically, having regard to the nature, scale and complexity of the tasks concerned, the participants’ capacity and willingness to become involved, and the time and resource constraints of the planning and assessment schedule.

• A third challenge is to devise an effective method of group working and reporting. Simple suggestions for strengthening assessment group practice include: select experienced facilitators for each meeting; ensure that well-prepared briefing papers, including supporting data, are distributed in advance to participants; and select experienced rapporteurs to record key outcomes and to distribute the written record of these in a timely manner. In some circumstances, basic guidance and/or training in group assessment techniques – for example, simulation gaming and policy options analysis (Parson and Ward, 1998) – will be helpful.

If multi-stakeholder assessment groups function effectively, they should make a significant contribution to the quality of the IA/SIA process, the IA/SIA report and the resulting PPP. Other positive spin-offs should also result, such as:

• A shared understanding of what constitutes a sound and practical IA/SIA methodology. As this occurs, the methodological gap should be progressively reduced.

• The commitment among stakeholders to the use of IA/SIA should grow. This would be an important development given the limited practical use of IA/SIA methodological guidance to date.

6. CONCLUSIONS

There is growing support for the use of IA/SIA as a method of impact assessment for policy-making and planning, to promote sustainable development. However, delivering good quality IA/SIAs in the near future will be challenging. A number of difficulties have been identified (see Section 2) of which one has received most attention. This is the potential gap between the kinds of contributions, which researchers and technical experts are making to the development of assessment methodologies, and the types of assessment methods that planning practitioners seem most able/willing to use. How can this gap be bridged and does it require some changes in approach by several of the parties involved?


One possible solution, explored in this paper, involves the interested parties (planning practitioners, researchers, technical specialists, stakeholders and community representatives) engaging in a joint initiative, mainly conducted at the individual case assessment level, to develop a shared understanding of what constitutes a satisfactory IA/SIA. This would use a common assessment framework, containing the following linked components:

• The planning context within which the assessment is to take place.

• The process by which the assessment is to be undertaken; and

• The methods, technical and consultative, by which the impacts are to be assessed.

Each of these components has been reviewed (see Sections 3-5) and some of the main conclusions are listed below:

• The importance of an early contextual analysis, to ensure that the assessment methodology, which is to be used, is relevant, focused and feasible.

• The assessment should, ideally, commence at the beginning of the planning process and then contribute to all subsequent major decision points in that process. • Screening and scoping are key early stages in the assessment process, particularly in determining which PPPs should be assessed and to what extent and level of detail. A distinction is drawn between the requirements of Preliminary and Detailed assessments.

• The choice of the specific methods to be used in each assessment, and of their data requirements, should be made on a case-by-case basis, using tasks-methods analysis (a simple illustrative sketch follows this list). Comments on some of the main tasks, and their associated methods, are contained in Section 5. A broad distinction is drawn between 'simple' methods (which are relatively straightforward to apply and which mainly use existing data) and more 'complex' methods (which may require specialist skills and the collection of additional data). Overall, greater emphasis is placed on the more effective use of simpler assessment methods, relying mainly on existing data, and on the selective use of more complex methods, which may require significant amounts of new data.

• Consultation and participation, particularly where they involve the representatives of major stakeholders and other interest groups (as well as technical experts) in joint assessment group meetings, are likely to be of integral importance to the assessment process. They should also play a central role in helping to develop, test and apply case-specific IA/SIA methodologies which are sufficiently rigorous and practical. In so doing, they should make a significant contribution to bridging the gap between the theory and practice of IA/SIA.
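
To illustrate the tasks-methods analysis, and the emphasis above on using simpler methods first and more complex methods selectively, the following sketch (in Python) shows one hypothetical way of recording candidate methods for each assessment task and selecting among them. The task names, method names and selection rule are the sketch's own assumptions; they are not drawn from any prescribed IA/SIA procedure and would need to be adapted case by case.

    # Hypothetical tasks-methods analysis: each assessment task has candidate
    # methods, classified as 'simple' (existing data) or 'complex' (specialist
    # skills and/or new data collection).
    CANDIDATE_METHODS = {
        "impact identification": [
            {"name": "checklist / impact matrix", "complexity": "simple", "needs_new_data": False},
            {"name": "causal chain analysis", "complexity": "complex", "needs_new_data": True},
        ],
        "impact prediction": [
            {"name": "expert judgement using existing statistics", "complexity": "simple", "needs_new_data": False},
            {"name": "integrated assessment model", "complexity": "complex", "needs_new_data": True},
        ],
    }

    def select_method(task, detailed, has_specialist_skills, can_collect_new_data):
        """Default to a 'simple' method; escalate to a 'complex' one only for a
        Detailed assessment where the necessary skills and data are available."""
        options = CANDIDATE_METHODS[task]
        if detailed and has_specialist_skills:
            for method in options:
                if method["complexity"] == "complex" and (
                        can_collect_new_data or not method["needs_new_data"]):
                    return method["name"]
        for method in options:
            if method["complexity"] == "simple":
                return method["name"]
        return "no suitable method identified - revisit scoping"

    # Example: a Detailed assessment without scope for new data collection
    # falls back to the simpler, existing-data method.
    print(select_method("impact prediction", detailed=True,
                        has_specialist_skills=True, can_collect_new_data=False))

The same logic can, of course, be applied by hand in a tasks-methods table; the sketch merely makes the selection rule explicit.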

These provisional findings need to be further tested and elaborated. This should be accommodated within an IA/SIA development strategy that addresses not only the 'gap' problem but also other related causes of deficiencies in existing IA/SIA practice (see Section 2). The 'next steps' might include the following initiatives, in which representatives of practitioners, major stakeholders and interest groups, as well as technical experts, researchers and trainers, should participate.

• A programme of ex ante trial runs and ex post studies of IA/SIAs. The former should test the provisional findings in this paper, particularly those relating to the use of multi-stakeholder assessment groups on a case-specific basis. Ex post case studies should evaluate, also on a case-specific basis, the IA/SIA process that was followed, the IA/SIA report that was produced and, subsequently, their influence on PPP approval and the resulting impact outcomes.

• A review of data requirements for IA/SIA use, based upon a tasks-methods analysis, identifying where the most important deficiencies exist and making realistic proposals on the most cost-effective means of remedying these.

• A review of IA/SIA research priorities. This review should probably include further investigation of: (i) the context-process-method framework for impact assessment; (ii) the relationship between the assessment process, procedure and decision-making; and (iii) the strengths and weaknesses of existing 'simple' and more 'complex' IA methods.

• Preparation of practical assessment guidance (covering the context, process and methods of assessment), supported by case study examples of its application.

• Development of short training courses in the application of IA/SIA, using the above guidance and case materials.


