
International Conference

Nuclear Energy for New Europe 2009

Bled / Slovenia / September 14-17

Use of Nuclear Safety Performance Indicators in EU Member States

Tea Bilic Zabric
INKO Consulting
Kolezijska 5A, SI-1000 Ljubljana, Slovenia
inko@siol.net

Maciej Kulig
ENCONET Consulting
Auhofstrasse 58, A-1130 Vienna, Austria
m.kulig@enconet.com

ABSTRACT

Safety performance indicators (SPIs) are used by the nuclear industry to monitor operational safety performance. The benefits of using such indicators to assess the level of safety and to direct corrective actions before real safety concerns arise are recognised by both operators and regulators. Consequently, both are trying to select indicators to be monitored and used in their work.

Although many organisations have adopted safety performance indicators developed by international organisations (IAEA, OECD/NEA, and WANO), a variety of other SPIs have been established and are used by specific NPPs or regulatory organisations. The development of SPIs, whether for operator use or for regulatory purposes, is a complex process, and country-specific methods vary significantly.

Specific objectives of the article include:

• Overview and comparison of SPIs used by operators and regulators in the EU Member States

• Safety benefits of using the SPIs and their relation to the practices and specifics of the countries evaluated

• Identification of a common set of SPIs used in EU Member States.

The SPIs described include all those used or considered by operators and regulators for monitoring technical parameters and the safety culture of operating NPPs, as well as the SPIs proposed by international organisations.

A distinction is made between the practices of operators and regulators.

The article covers the Member States that operate nuclear power plants. Readily available sources were used: public domain reports and publications of international organisations (IAEA, OECD/NEA, WANO), papers presented at international conferences on the subject, working material from meetings of various working groups, etc.


1 INTRODUCTION

The benefits of using safety performance indicators (SPIs) are recognised by both operators and regulators. The safety performance indicators used by utilities focus on the safety of the nuclear energy production process. SPIs of this type help measure the overall health of the utility/plant organisation and the working environment of its employees. The SPI system supports all the main objectives of the safety management process: control of the process, process self-assessment, continuous improvement, and management assessment.

Regulators apply this tool to measure the safety performance of licensees, but also to assess the effectiveness of their own oversight process. The increasing interest in the use of safety performance indicators by regulators is consistent with a visible shift from prescriptive to performance-oriented regulation. In the performance-oriented approach the regulatory process does not (only) follow the established rules and standards, but also takes into consideration the actual ‘safety performance’ of a plant. The ‘performance’ is then used to adjust the regulatory oversight. An important part of ‘performance’ is the capability of the regulated entity to detect and act upon emerging performance problems before they lead to adverse consequences. SPI programmes are recognised as a useful tool to support this goal.

Work on the development of SPIs for use by nuclear operators and regulatory organisations has been conducted over the last 10-20 years by several international organisations (IAEA, OECD/NEA, and WANO) as well as by individual users (operators and regulators). The methods and approaches developed in these studies, and the practical experience gained in pilot projects, stimulated regular applications in both the utility and regulatory environments.

The project (TREN/H2/403-2006) [1] was launched to provide EC TREN with appropriate information on the current status of the SPI area in the Member States (MSs). Such a study was needed because the information on the subject available to the EC was far from complete and up to date.

2 COLLECTION OF INFORMATION

The identification of all available information sources on the subject and the analysis of their contents was the first step of the preparatory work. This undertaking was intended to assist in establishing the general concept for the data-gathering task, which was considered an essential step.

A comprehensive list of public domain documents that provide information on the subject was compiled. This list comprised about 30 references issued in the period 1998-2006, including reports of international organisations (IAEA, OECD/NEA, WANO), papers presented at international conferences and meetings, guidelines, and other publications.

2.1 Questionnaire

Two Questionnaires were prepared: one for the survey of utilities and another for the survey of regulatory organisations. The scope and profile of the survey were adjusted to the purpose and main objectives of the project. Insights from the review of earlier studies on the subject provided relevant input.

The questions covered in the Questionnaires and the suggested format for responses were designed taking into account the results of similar surveys conducted earlier by the IAEA and OECD/NEA. References [2] (IAEA) and [3] (OECD/NEA) proved to be the most useful in this context.

Both Questionnaires had the same structure. Each of them was composed of three sections:

(1) Policy level and fundamentals - general information on the objectives of the SPI system(s) in use at the respondent’s organisation and on the major stakeholder groups involved in the development of these SPIs;

(2) Characteristics of the SPI system - the most important characteristics of the individual SPIs in use at the respondent’s organisation, including each SPI’s definition and main characteristics: the experience of usage, the data acquisition process, the reporting mode and frequency, the use of thresholds, external reporting, and specific uses of the indicator;

(3) Feedback of experience in using the SPIs - information on the feedback of experience in using each of the SPIs at the respondent’s organisation, including the user’s opinions on the potential strengths of each indicator and eventual pitfalls or concerns associated with its use (related to its definition, the required data, the possibility to establish thresholds, and the attitude of staff to SPI monitoring).
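The per-indicator information collected under section (2) maps naturally onto a simple record structure. The following is a minimal sketch, in Python, of how one survey entry could be represented; all field names are hypothetical and merely mirror the characteristics listed above, not the project’s actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SpiRecord:
    """One indicator, as described in section (2) of the Questionnaire.

    Field names are illustrative; the survey collected these
    characteristics as free-form questionnaire answers.
    """
    name: str                           # SPI name as used by the organisation
    definition: str                     # precise definition of the indicator
    organisation_type: str              # "regulator" or "operator"
    years_in_use: Optional[int] = None  # experience of usage
    data_acquisition: str = ""          # how the input data are collected
    reporting_frequency: str = ""       # e.g. "monthly", "quarterly"
    uses_thresholds: bool = False       # pre-defined thresholds in use?
    external_reporting: bool = False    # reported outside the organisation?
    specific_uses: List[str] = field(default_factory=list)

# Example entry (values invented for illustration):
scrams = SpiRecord(
    name="No. of unplanned scrams",
    definition="Number of unplanned automatic and manual reactor scrams",
    organisation_type="operator",
    years_in_use=5,
    reporting_frequency="quarterly",
    uses_thresholds=True,
    specific_uses=["trend monitoring", "management review"],
)
```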

The extent of the survey by questionnaire was limited to a sample of agencies. It was targeted to include regulatory and utility organisations in the 15 MSs that operate NPPs. The questionnaire was sent to one regulatory organisation (RB or TSO) and to one operator (utility or NPP) from each MS. Generally, the respondents were willing to contribute to the survey.

However, in several cases the respondents declined to provide information. In the end, responses were obtained from 14 regulatory organisations and 11 utilities. Four operators (from Belgium, France, Finland, and the United Kingdom) and one regulator (from the Netherlands) did not contribute to the survey.

2.2 Results of data collection

The total number of indicators to be analysed ran to several hundred in each group: 514 indicators for regulatory use and 608 for utility applications.

Information gathered included: (i) specifications for the indicators currently in use at the agencies covered by the survey and (ii) information on the users’ feedback.

The specifications for the indicators include (i) the fundamental concepts of usage, (ii) the definitions of the individual indicators and their basic characteristics, and (iii) the implementation approaches and practices. The users’ feedback includes observations regarding: (i) the strengths of individual indicators, (ii) the identified difficulties and concerns, and (iii) the overall ranking of the indicators by the users.

The information on the feedback from experience was rather sparse, in addressing both the positive and the negative aspects of the indicators. This may be due to differences in organisation specifics. In spite of these shortcomings, the feedback information was an essential input for the evaluation of the SPIs with regard to their value.

The information gathered in the survey does not cover all the MSs. However, it provides a reasonable sample of the current practices at regulatory agencies and in operators’ environments. This information was found sufficient for identifying differences and similarities among the MSs and for indicating good practices worth considering by MS agencies in enhancing their SPI programmes.


3 ANALYSIS AND COMPARISON OF FINDINGS

The information on the SPI systems currently applied by nuclear regulatory authorities and NPP operators in the EU MSs was systematically analysed and compared in order to identify the commonalities and differences in the SPI programmes.

The analysis was conducted in several steps, briefly highlighted below.

(1) General policies and fundamentals - the analysis focused on the objectives for the establishment of the SPI system as well as the main contributors to the system development.

(2) Comparison of SPI characteristics among MSs - comparison of the SPI systems used at each of the MS organisations, addressing the size and structure of the SPI system, the organisation’s experience in using it, practices related to the collection and processing of data, the use of pre-defined thresholds, the SPI reporting frequency, external reporting practice, and the range of specific uses.

(3) Identification of similar practices - the original SPIs (as defined by the respondents) were reviewed, and identical or very similar indicators were combined together (aggregated). The analysis was based on a systematic examination of all specific indicators included in the database. The original SPI list included a relatively large number of identical SPIs that had been named differently by different users. It also contained ‘similar’ indicators that monitor the same safety factor/aspect but have slightly different definitions. Identical or similar SPIs were identified by analysing the SPI characteristics (the coding, SPI name, and definition). To facilitate the process, the original list of SPIs containing the specific indicators from all MSs was sorted by the IAEA logic framework code, and the analysis was conducted for each SPI category (see the sketch after this list). The result of this step was an ‘integrated list’ of specific indicators to which each of the ‘original SPIs’ defined by the MSs (as identified by the survey conducted within the project) can be assigned. The list included 163 specific indicators. It should be noted that some items in the ‘integrated SPI list’ cover more than one indicator from the original list; e.g. “Cancelled/delayed periodic Test & Maintenance, non-compliance with the test frequency or test rules” is grouped as one indicator.

(4) Comparison of regulator and utility practices - comparison of utility and regulator practice at the MS level. Indicators (of the indirect type) commonly used by both the utility and the regulator were identified. These common indicators were also analysed with regard to their distribution over the SPI logic framework categories.
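To picture step (3), referenced above, here is a minimal sketch of the aggregation idea. The framework codes, indicator names, and the crude name normalisation are illustrative assumptions rather than the project’s actual matching procedure, which also relied on expert review of the definitions.

```python
from collections import defaultdict

# Minimal sketch of step (3): sort specific SPIs by an IAEA logic framework
# code and merge identically-defined indicators within each category.
# Codes and names below are illustrative, not taken from the project database.

original_spis = [
    ("OE1", "Number of unplanned scrams"),
    ("OE1", "No. of unplanned scrams"),       # same indicator, renamed
    ("OE1", "Unplanned automatic scrams"),    # similar but distinct definition
    ("MA2", "Cancelled/delayed periodic T&M"),
    ("MA2", "Non-compliance with test frequency or test rules"),
]

def normalise(name: str) -> str:
    """Crude name normalisation so trivially renamed SPIs compare equal."""
    return name.lower().replace("no. of", "number of").strip()

# Group by framework category, then merge indicators whose normalised names
# coincide; merely 'similar' SPIs would still need expert review.
integrated: dict[str, set[str]] = defaultdict(set)
for code, name in original_spis:
    integrated[code].add(normalise(name))

for code, names in sorted(integrated.items()):
    print(code, sorted(names))
```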

4 IDENTIFICATION OF GOOD PRACTICES

The identification of good practices worth recommending to other SPI users was the most difficult task, so this issue was given appropriate attention.

The process of identifying ‘good practices’ included two basic elements:

(1) Selection of SPI candidates for a broader use - a top-level list was established of indicators that are candidates for a broader use in EU MS organisations (regulators and operators). The majority of the list was compiled based on the ‘broad usage’ criterion (‘the SPI is used by more than 30% of the organisations of the same type (regulators or operators) covered in the survey’). Several indicators that do not fulfil this criterion were added to the list to ensure that each of the strategic indicators based on the IAEA logic framework, ref. [2], was covered. It should be noted that many of these SPIs cover several similar indicators. The use of an SPI by many agencies (with all the effort behind its implementation) is a strong argument to consider it a ‘good indicator’.

(2) Assessment of SPIs with regard to their value - the SPIs selected in the previous step were assessed with regard to their value for a specific user. The assessment was conducted using a number of ‘quality attributes’ that are recognised within the nuclear community as important features of a ‘good and reliable SPI’. These attributes were derived within the project based on earlier studies (mostly by the IAEA); see Table 1.

Table 1: Quality attributes used for the evaluation of SPIs

Essential:
• Relationship with safety
• Clear, concise and precise definition
• Stimulation of appropriate actions by licensee/regulator

Important:
• Capability to differentiate between levels of performance
• Capability of input data being objectively measured
• Good representation of performance within the area

Desirable:
• Capability of identifying undesirable trends
• Compatibility with the existing means of performance measurements
• Applicability and comparability among licensees
• Capability to provide feedback for the evaluation of corrective actions

The aggregated SPIs selected in the previous steps were further analysed, taking into account the users’ feedback obtained in the survey. The evaluation generated a measure of the ‘value’ of a specific indicator that could support the decisions of potential users regarding its implementation. The first assessment was based on the users’ opinions regarding the SPI strengths and drawbacks (further called ‘observations’).

The value of the SPI was assessed using pre-specified criteria. They were based on a number of features (attributes) that a ‘good’ indicator is expected to possess. The set of attributes and their specifications were derived from earlier studies and projects devoted to this subject, mostly by the IAEA [2,4,5]. The attributes were considered to have different importance; three importance groups - ‘essential’, ‘important’, and ‘desirable’ - were introduced. An algorithm was established for assessing the attributes and calculating an overall ‘worth’ of the indicator. The observations obtained from the users in the survey were mapped to the above ‘quality attributes’. Generally, an attribute could be associated with both positive features of the SPI (‘strengths’ or ‘merits’) and negative features (‘drawbacks’ or ‘concerns’). Among the observations addressed in the questionnaire there were 6 merits and 13 concerns that could be associated with one or more of the attributes. An observation was considered applicable to the SPI if the number of users who made it (O) was large enough compared to the total number of responses (R); the score of observation i was assigned by the rule S_i = 1 if O_i/R > P, otherwise S_i = 0. The thresholds P_m (for merits) and P_c (for concerns) were both set to 0.300. A score SA_j was then assigned to each attribute j from the scores of its k applicable observations, using weights w_i: SA_j = Σ S_i · w_i. The weights depend on the observation type: positive (0 < w_i < 1) for merits and negative (-1 < w_i < 0) for concerns. The attribute weights w_Aj depend on the attribute importance (‘essential’, ‘important’, or ‘desirable’).

The final score (‘worth’) of the indicator is calculated as the weighted sum of the scores determined for each of the 10 attributes, normalised to have an absolute value of less than 1: W = Σ SA_j · w_Aj / Σ w_Aj. The algorithm thus comprised formulas for accepting the observations, for merging the individual observations associated with an attribute and calculating the corresponding scores, and for combining the scores assigned to the attributes, together with a number of parameters (weights) that had to be set by the analyst performing the evaluation. The outcome of the evaluation was a quantitative index (‘worth’) assigned to each of the specific indicators.
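To make the scoring algorithm concrete, the following is a minimal sketch in Python. The thresholds (P_m = P_c = 0.300) and the sign conventions for the weights come from the text above; the particular weight values and the mapping of observations to attributes are illustrative assumptions, since these parameters were set by the analysts and are not reproduced in the article.

```python
# Minimal sketch of the SPI 'worth' algorithm described above.
# P (= 0.300) and the weight sign conventions come from the text; the
# specific weight values and the observation-to-attribute mapping below
# are illustrative assumptions.

P_MERIT = 0.300    # acceptance threshold for merit observations
P_CONCERN = 0.300  # acceptance threshold for concern observations

# Hypothetical attribute-importance weights w_Aj for the three groups.
W_ATTR = {"essential": 1.0, "important": 0.6, "desirable": 0.3}

def observation_score(count: int, responses: int, threshold: float) -> int:
    """S_i = 1 if O_i / R > P, otherwise 0."""
    return 1 if responses > 0 and count / responses > threshold else 0

def attribute_score(observations: list[tuple[int, float]], responses: int) -> float:
    """SA_j = sum of S_i * w_i over the observations mapped to attribute j.

    Each observation is (count, weight): w_i in (0, 1) for merits,
    w_i in (-1, 0) for concerns.
    """
    total = 0.0
    for count, weight in observations:
        threshold = P_MERIT if weight > 0 else P_CONCERN
        total += observation_score(count, responses, threshold) * weight
    return total

def spi_worth(attributes: list[tuple[str, list[tuple[int, float]]]],
              responses: int) -> float:
    """W = sum(SA_j * w_Aj) / sum(w_Aj), normalised so that |W| < 1."""
    numerator = sum(attribute_score(obs, responses) * W_ATTR[importance]
                    for importance, obs in attributes)
    denominator = sum(W_ATTR[importance] for importance, _ in attributes)
    return numerator / denominator

# Illustrative evaluation of one indicator, with R = 14 responses.
# Attribute names follow Table 1; observation counts and weights are made up.
example = [
    ("essential", [(8, 0.8), (2, -0.5)]),  # relationship with safety
    ("important", [(6, 0.6)]),             # differentiates performance levels
    ("desirable", [(1, 0.4)]),             # identifies undesirable trends
]
print(f"worth = {spi_worth(example, responses=14):.3f}")  # -> worth = 0.611
```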

The second assessment, based on independent expert judgment, was conceptually similar. It used the same set of attributes, but instead of the users’ observations there was a set of questions to be answered by the experts performing the evaluation. The formulas used to combine the scores assigned to the individual questions and to the attributes, and the related parameters, were similar to those used in the first assessment.

The results of the two assessments obtained for each of the 69 indicators were compared. Based on these values the indicators were also categorised with regard to their value, into three categories: A, B, and C. Category A includes the SPIs for which both ‘worths’ are greater than 0.5. Category B includes the SPIs for which one of the ‘worths’ is greater than 0.5 and the second is not less than 0.3. Category C includes the SPIs for which one of the ‘worths’ is lower than 0.3 and the second is lower than 0.5. The assigned ‘worth’ and/or ‘category’ were intended to assist potential users in the development and/or enhancement of SPI programmes. The evaluation algorithm was implemented in the Excel environment.
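As a worked illustration, the sketch below assigns a category from the two assessment ‘worths’. Because the brief definitions above do not cover every combination appearing in Table 2 (row 1, for instance, is Category B although neither ‘worth’ exceeds 0.5), the rule implemented here (A when both worths exceed 0.5, C when either falls below 0.3, B otherwise) is an inference from the tabulated values, not the authors’ stated criterion.

```python
def spi_category(worth1: float, worth2: float) -> str:
    """Assign Category A/B/C from the two assessment 'worths'.

    Assumed rule, inferred from the values in Table 2: A if both worths
    exceed 0.5, C if either worth falls below 0.3, otherwise B.
    """
    lowest = min(worth1, worth2)
    if lowest > 0.5:
        return "A"
    if lowest < 0.3:
        return "C"
    return "B"

# Spot checks against rows of Table 2:
assert spi_category(0.712, 0.795) == "A"  # 17: No. of unplanned scrams
assert spi_category(0.338, 0.397) == "B"  # 1: Unit capability factor
assert spi_category(0.338, 0.281) == "C"  # 2: Unplanned power changes
```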

The final results of the evaluation are presented in Table 2, which includes the ‘worths’ assigned in the two assessments and the resulting category.

Table 2: Summary results of the SPI evaluation with regard to their value

TL# | Specific SPI Name | Assess. 1 Worth | Assess. 2 Worth | Category
1 | Unit capability factor | 0.338 | 0.397 | B
2 | Unplanned power changes | 0.338 | 0.281 | C
3 | Capability loss | 0.314 | 0.397 | B
4 | No. of work orders (WO) for TS components (preventive, corrective) | 0.401 | 0.493 | B
5 | Number of pending (open) work orders | 0.475 | 0.493 | B
6 | Ratio of prev. to corrective WO for TS comp., rel. volume of prev. maintenance | 0.658 | 0.534 | A
7 | Process time for corrective work orders for safety components | 0.256 | 0.425 | C
8 | Ratio of performed and planned maintenance activities | 0.136 | 0.233 | C
9 | Coolant chemistry index (primary / secondary) | 0.384 | 0.651 | B
10 | Fuel reliability index, RCS specific activity, RCS contamination | 0.521 | 0.671 | A
11 | Number of leaking fuel assemblies, fuel failure index | 0.137 | 0.390 | C
12 | RCS leakage (identified, unidentified) | 0.384 | 0.507 | B
13 | Containment leakage | 0.521 | 0.644 | A
14 | Safety significant events / reportable events | 0.584 | 0.856 | A
15 | Safety significant events, reportable events during plant shutdown | 0.483 | 0.747 | B
16 | No. of less significant events | 0.384 | 0.240 | C
17 | No. of unplanned scrams | 0.712 | 0.795 | A
18 | No. of safety system actuations | 0.475 | 0.240 | C
19 | No. of unplanned scrams with loss of normal heat removal | 0.425 | 0.397 | B
20 | No. of RPS/ESFAS failures | 0.384 | 0.308 | B
22 | No. of safety system failures | 0.712 | 0.836 | A
23 | SS failures or malfunctions revealed by testing / surveillance | 0.497 | 0.397 | B
24 | Training of technically qualified personnel | 0.311 | 0.493 | B
25 | Successful / unsuccessful exams, examination pass rate | 0.469 | 0.486 | B
26 | Errors due to training deficiency | 0.195 | 0.240 | C
27 | Emergency response training / exercises / drills | 0.475 | 0.555 | B
28 | Findings / corrective actions from ER drills | 0.452 | 0.349 | B
29 | Response to emergency and drill exercises | 0.414 | 0.534 | B
30 | No. of fire events | 0.521 | 0.418 | B
31 | No. of fire alarms / malfunctions of fire detection system | 0.521 | 0.459 | B
32 | Risk during power operation / CDF | 0.712 | 0.836 | A
33 | Risk significance of TS related events | 0.612 | 0.836 | A
34 | Number of TS violations | 0.384 | 0.438 | B
35 | Risk during shutdown / CDF | 0.384 | 0.630 | B
36 | Collective radiation exposure to workers | 0.521 | 0.452 | B
37 | No. of workers receiving dose above limits | 0.360 | 0.452 | B
38 | Significant radiological events / radiation protection event reports | 0.688 | 0.603 | A
39 | Dispersion of contamination (relative contamination/exposure area) | 0.521 | 0.479 | B
40 | Public dose (calculated off-site dose) | 0.712 | 0.726 | A
41 | Gaseous releases (activity vs. allowed limits) | 0.521 | 0.562 | A
42 | Liquid releases (activity vs. limits) | 0.521 | 0.562 | A
43 | Radioactive solid waste | 0.521 | 0.705 | A
44 | Liquid radioactive waste | 0.575 | 0.521 | A
45 | Volume of low-level radioactive waste | 0.521 | 0.705 | A
46 | Industrial safety accident rate | 0.233 | 0.425 | C
47 | Number of registered industrial safety events | 0.369 | 0.295 | C
48 | Ratio of working time loss due to accidents | 0.274 | 0.247 | C
49 | Security system performance | 0.438 | 0.377 | B
50 | Number of TS violations / deviations | 0.658 | 0.514 | A
51 | Number of exemptions from TS | 0.658 | 0.575 | A
52 | Temporary modifications | 0.429 | 0.281 | C
53 | Configuration control deviations | 0.459 | 0.473 | B
54 | Cancelled/delayed periodic T&M | 0.519 | 0.445 | B
55 | No. of deviations from procedures | 0.370 | 0.445 | B
56 | No. of HF related reportable events | 0.137 | 0.171 | C
57 | No. of events due to procedure deficiencies | 0.521 | 0.555 | A
58 | No. of events due to training deficiencies | 0.314 | 0.171 | C
59 | Resolution of reportable events / safety issues in backlog | 0.658 | 0.562 | A
60 | Participation ratio in safety related training / number of courses | 0.151 | 0.219 | C
61 | No. of independent internal audits | 0.475 | 0.356 | B
62 | No. of findings from internal audits | 0.475 | 0.418 | B
63 | External review findings missed by internal reviews/audit | 0.195 | 0.308 | C
64 | Audit schedule adherence | 0.475 | 0.452 | B
65 | Number of recurrent events | 0.521 | 0.534 | A
66 | Number of OpEx external events investigated | 0.483 | 0.541 | B
67 | No. of OpEx events subject to RCA | 0.521 | 0.555 | A
68 | Events caused by modification process deficiencies | 0.521 | 0.527 | A
69 | Investment in facilities | 0.589 | 0.575 | A

5 LIMITATIONS AND STRENGTHS OF THE EVALUATION

The quantitative ‘worths’ and ‘categories’ can be helpful in selecting the most reliable indicators for use by the regulatory and utility organisations in the EU MSs. However, these results need to be used with care. The evaluation inevitably involves subjective judgment. The potential user may ask how the final ‘worth’ of the indicator was established, what practical value the assessment has, how much subjectivity is involved, and what uncertainties were introduced. Although there are no definitive answers to the question of subjectivity, the SPI ‘worth’ was developed in a systematic manner based on well-defined rules, the process was documented, and the results are reviewable and reproducible. A clear display of the contributing scores may also help in establishing practical ways of resolving eventual weaknesses of an SPI. These are unquestionable strengths of the approach.

The practical value of the ‘worth’ (or ‘category’) assigned in the evaluation does not lie in its absolute value; the appropriate way to use the ‘worth’ (‘category’) is to support a comparison of different indicators.

It should be noted that the ‘worth’ of an indicator should not be the only criterion for justifying its use. The user may have additional arguments (preferences) for including a specific SPI that is not ‘the best’ in terms of its ‘worth’.

It should also be noted that the answers to the questions included in the evaluation model depend on the type of user organisation (i.e. regulator or operator) and on organisation-specific conditions (e.g. the capabilities and protocols of data information systems, internal procedures or practices, etc.). Thus, to ensure a high ‘quality’ of this ranking, it is best performed by the potential user, who is able to take into consideration all the relevant specifics of the organisation.

The evaluation conducted here was of a ‘generic’ type, as it did not necessarily consider all the organisation-specific conditions. Therefore, the proposed approach should rather be considered a tool to be used independently by potential SPI users to assist in the selection of the most appropriate indicators.

On the other hand, SPI programmes are only one element of the safety management process. Users may have other means of supporting the plant safety oversight process, and they may decide not to use any indicators in an area that is already well monitored by other tools.

The generated list of the most useful indicators should not be considered a definitive proposal for the harmonisation of SPI approaches within the EU. The final decision on the usefulness and practical applicability of a particular indicator should, and will, depend on organisation-specific conditions.

6 INFORMATION ON THE MOST RECENT DEVELOPMENTS OF SPI

The indicators that address the management, organisation and safety culture (MOSC) aspects are recognised as useful anticipatory measures, indicating deteriorated safety conditions before they turn into serious problems. Indicators related to the management and organisational aspects are easier to formulate; SPIs for measuring Safety Culture (SC) are the most difficult to define.

The existing models focus on the aspects of SC that are easier to monitor. These are the declared aspirations of the organisation about the way it wants to be (so-called ‘Espoused Values’). They include the following targets: ‘priority given to safety’, ‘striving for safety improvement’, ‘no tolerance of safety deficiencies’, ‘effectiveness of communication among employees’, ‘blame-free work environment’, and ‘considering problems and errors as learning opportunities’.

It should be noted that the only practical way forward is to identify a set of indicators that measure the important characteristics of a positive Safety Culture. It is worth noting that the SPI programmes currently used in the MSs (which have been derived based on the IAEA logic framework, ref. [2]) include a number of indicators designed to measure some of the aspects mentioned above.

7 CONCLUSION

The evaluation described above showed that there are many differences among the MSs. These differences concern (i) the characteristics of the SPI programmes (mainly their size and structure) and (ii) the implementation practices (approaches to data gathering and processing, reporting practices, use of thresholds, external reporting, etc.). In spite of these differences, there are similarities in the main concept of using the SPIs (e.g. the most typical uses), in the coverage of specific safety areas by low-level indicators, and even in the definitions of individual indicators. This is one of the important general conclusions from the survey.

The SPI programmes are relatively ‘young’. The majority of users implemented their systems only a couple of years ago; only a few users have relatively long experience.

Dissemination of ‘good practices’ is a process that requires some time. At many organisations the development or enhancement of SPI programmes is under way.

The existing SPI programmes implemented in the different MS organisations are not uniform. An important question is whether the SPI programmes in use at MS organisations need to be more uniform (harmonised) and, if so, what the scope of harmonisation should be and what its benefits would be.

The use of SPIs at supranational (EU) level is not well established.

REFERENCES

[1] ENCONET CONSULTING Ges.m.b.H., “Nuclear Safety Performance Indicators - Project (6 reports)”, prepared for EC Directorate H - Nuclear Energy within the framework of the project TREN/H2/403-2006T, Vienna, 2007-2008.

[2] INTERNATIONAL ATOMIC ENERGY AGENCY, “Operational Safety Performance Indicators for Nuclear Power Plants”, IAEA-TECDOC-1141, IAEA, Vienna (2000).

[3] NUCLEAR ENERGY AGENCY, “Direct Indicators of Nuclear Regulatory Efficiency and Effectiveness - Pilot Project Results”, NEA Publication No. 3669, OECD, Paris (2004), ISBN 92-64-02061-6.

[4] INTERNATIONAL ATOMIC ENERGY AGENCY, “Development and Implementation of Safety Performance Indicators at Nuclear Power Plants”, Final Report of a Coordinated Research Project 2000-2003, IAEA-J4-RC-802.1 and IAEA-J4-RC-802.2, Working materials.

[5] INTERNATIONAL ATOMIC ENERGY AGENCY, “Development of NPP safety performance indicators for use by regulatory bodies and communication to stakeholders”, to be issued in the Safety Reports Series in 2007 (draft of July 2005).
