
EVALUATING INTELLECTUAL ASSETS IN UNIVERSITY LIBRARIES: A MULTI-SITE CASE STUDY FROM THAILAND
SHEILA CORRALL† and SOMSAK SRIBORISUTSAKUL
Department of Information Studies, University of Sheffield, Regent Court
Sheffield, S1 4DP, United Kingdom
†E-mail: s.m.corrall@sheffield.ac.uk
Intellectual assets are strategic resources that libraries can use to add value to services, but their intangible attributes make them hard to evaluate. An exploratory case study used document analysis, interviews and a questionnaire to develop and test indicators of intellectual assets and related performance measures at three university libraries in Thailand. The study demonstrated the feasibility of applying an intellectual capital perspective and a scorecard process model to design a workable system for evaluating library intangibles, particularly where libraries have a pre-existing interest in knowledge management and a culture of assessment.
1. Introduction
Library evaluation cannot be separated from its context. If the operating environment changes, libraries need new measures to monitor their performance under new conditions (Rowley, 2005). For example, electronic metrics and impact indicators have been devised to measure library performance in digital environments and evaluate the customer service experience as libraries respond to advances in information technology and high expectations of users (Brophy, 2006). The knowledge-based economy is pushing organizations towards adoption of knowledge management (KM) as a means of creating organizational value, on the basis that KM initiatives can help to create benefits that customers desire. A need to assess the intellectual assets (IAs) of libraries as another bottom-line indicator is emerging from this context of value-oriented services (White, 2007).
Some writers have encouraged library practitioners to consider organizational knowledge in libraries and information services as IAs, intellectual capital (IC), or intangibles (Koenig, 1998; Huotari and Iivonen, 2005). Evaluating IAs can be seen as a stepping-stone towards managing knowledge, but it is not easy to launch this idea in academic libraries. Librarians in higher education institutions (HEIs) do not always recognize that recent developments in performance measurement (PM) have made them more accountable for the knowledge used in service delivery, in addition to their use of tangible assets, such as equipment and buildings. They are also less familiar with managing IAs than with other KM-related processes, such as work on knowledge access and repositories (Townley, 2001).
A review of library research on intangible assessment reveals that the literature has engaged strongly with service quality but has given less attention to IAs. Applications of intangible measurements in libraries have concentrated on library scorecards, adopting Kaplan and Norton’s (1996) Balanced Scorecard (BSC) model to group both financial and non-financial measures under four perspectives: finance, internal process, customer, and innovation and learning (Ceynowa, 2000; Cribb, 2005; Self, 2003). There have been few empirical studies in the broad area of knowledge assets assessment for academic libraries, although Barron (1995) and Dakers (1998) examined staff skills and competence to audit tacit knowledge in human resources (HR) in public and national libraries respectively, and Van Deventer (2002) implemented IC management for an information services unit in a large research organization to disclose intangible stocks and activities in an IC report.
The present multi-case study explores the feasibility of IA evaluation in academic libraries through an investigation of three universities in Thailand. The central research question was ‘how do Thai university libraries, as representatives of developing-nation libraries, develop performance indicators (PIs) to evaluate their organizational IAs?’ The study was guided by the following four sub-questions derived from this question:
• What are the most important IAs for Thai academic libraries?
• Why do library administrators want to evaluate library IAs?
• How do libraries choose PIs as proxies to demonstrate their IAs?
• What PIs are suitable for evaluating library IAs?
This paper argues that a specific model for evaluation of these assets can help libraries exploit them to add high value to services and bring future benefits to information supply operations. It demonstrates that library administrators are interested in intangibles; that IC theory can be adapted for identifying knowledge resources in academic libraries; and that the methodology described is appropriate for developing PIs related to library IAs. The paper presents a review of the conceptual framework, description of the research methodology, analysis of the case background and discussion of the main findings.
2. Theoretical Framework
This study utilises two paradigms to underpin identification and assessment of IAs in academic libraries: the resource-based view (RBV) and the IC perspective. First, taking the RBV, today’s organizations realize that their knowledge base and intangible assets represent a strategic resource. Such resources are characterized as strategic by four distinguishing features: they are valuable, rare, inimitable and non-substitutable. In contrast, all tangible assets, such as budgets or premises, can easily be acquired by rivals. An organization can accordingly claim sustained competitive advantage over others in its domain or sector if it possesses IAs (Barney, 1991; Meso and Smith, 2000).
Secondly, using the IC perspective, organizations regard their knowledge base and intangible assets as good long-term investments, similar to other capital assets, which will enable them to create value in products and services for stakeholders. The term ‘corporate memory’ is often used in this context: when an organization plans to evaluate its corporate memory, it is attempting to measure its stocks of intangibles and assess its learning activities (Stewart, 1997; Marr, 2005).

2.1 Intellectual Assets
IAs have been given various names, definitions and components, because this specialist field involves several disciplines, such as strategic management, accounting and HR (Marr and Moustaghfir, 2005). In this study the terms ‘intellectual assets’, ‘intangible assets’, ‘intangibles’ and ‘knowledge resources’ are used interchangeably to denote knowledge-based items, or manifestations of the existence of knowledge, owned (or held) by an organization, whose value can be extracted and used to increase organizational effectiveness in accordance with its strategy (Green, 2007).
IAs can be distinguished from ‘intellectual capital’: Bukowitz and Williams (2000), describing practice in PricewaterhouseCoopers, explain that IC resembles ‘raw knowledge’, which is not yet articulated and converted into IAs; thus, tacit knowledge belongs to each employee and may not serve any purpose for the organization. In other words, ownership and strategic alignment differentiate organizational IAs from IC.
For corporate purposes, it is commonly accepted that there are three areas of intangible strategic resources, comprising HR, structural capital and relational capital:
• human resources are collective capabilities derived from individuals in firms, which include capacities, experience, motivation, and staff satisfaction;
• structural capital is organizational competence in the forms of databases, technology, routines and culture;
• relational capital signifies the networks developed by organizations with customers, suppliers, partners and stakeholders (OECD, 2006: 9).
In the library world, many academics and practitioners have classified knowledge resources into groups through a strategic management and accounting lens. Kaplan and Norton’s (1996; 2004) BSC and related Strategy Map is a popular reference point. Another approach is Sveiby’s (2001) Dynamic Intangible Assets Monitor (IAM), which uses accounting theory for disclosing stocks of intangible assets parallel to tangible assets. Libraries seem typically to use a four-fold categorization of IAs, introducing collection and service assets as an additional area alongside those used in the corporate sector, scoping their categories as follows:
• human assets include expertise, core competencies and learning;
• structural assets embrace a diverse range of library management systems such as organizational structure, management information and work processes;
• relationship assets include customer relationships, reputation and image;
• collection and service assets emphasise unique collections of information materials, added-value services and new products (Koenig, 1998; Pierce and Snyder, 2003; White, 2004; Cribb, 2005).


2.2 Indicator Development for Intangible Evaluation
Contemporary academic libraries have to communicate their strategic impact to their parent institutions by maximizing appreciation of library roles. IA measurement is a potential tool which HE libraries can initiate as part of KM programmes within larger management systems (Huotari and Iivonen, 2005). White (2007) points out the benefits of intangible assessment, in that it helps libraries to:
• expand the scope of traditional evaluation towards a library’s worth;
• align library management’s ability with the parent organization’s IC strategy;
• utilise information on IAs to make decisions about the maintenance and improvement of organizational knowledge.
Evaluation models for knowledge resources in the business context have usually begun with an extended balance-sheet approach to show value for money. However, the scorecard method tends to be the preferred approach to indicator development for reporting intellectual performance, since this model lets organizations design ‘fit-for-purpose’ indicators in the form of a feedback loop. Scorecard measures can be revised or changed when organizations analyze the causes and effects of previous assessments (Rylander et al., 2000; Shulver et al., 2000). This method also provides the foundation for well-known guidelines on disclosing intangible assets, including those of the European Union (MERITUM, 2002), the Danish Ministry of Science, Technology and Innovation (Denmark, 2003) and the Japanese Ministry of Economy, Trade and Industry (Japan, 2005). For the library sector, White (2004) has suggested that the organizational knowledge of academic libraries could be assessed by the scorecard method.
The scorecard process model for developing PIs typically has three main steps, which shaped the conceptual framework and practical design of the study (a brief illustrative sketch follows the list):
(i) linking stakeholders’ expectations to key success factors (KSFs) relying on IA components,
(ii) building PIs based on these KSFs to describe qualitative targets for knowledge resources,
(iii) translating each prospective indicator into quantitative measures of intangible stocks and learning activities (Probst et al., 2000; Rylander et al., 2000).
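To make the three steps concrete, the following minimal sketch (in Python) shows one way a key success factor, its performance indicators and their quantitative measures could be represented. The class names, fields and example values are illustrative assumptions, not an instrument used in the study.

from dataclasses import dataclass, field
from typing import List

# Illustrative data model for the three-step scorecard process:
# key success factor -> performance indicator -> quantitative measures.
@dataclass
class Measure:
    name: str          # e.g. "Total costs of staff training"
    measure_type: str  # "input", "process" or "output"
    value: float = 0.0

@dataclass
class PerformanceIndicator:
    statement: str                                        # qualitative target for a knowledge resource
    measures: List[Measure] = field(default_factory=list)

@dataclass
class KeySuccessFactor:
    description: str                                      # derived from stakeholders' expectations
    asset_category: str                                    # human, structural, relationship, collection/service
    indicators: List[PerformanceIndicator] = field(default_factory=list)

# Step (i): a KSF tied to an intellectual-asset category
ksf = KeySuccessFactor("Competent and motivated library staff", "human assets")

# Step (ii): a PI expressing a qualitative target for that KSF
pi = PerformanceIndicator("Encourage library personnel to develop their job skills and capabilities")
ksf.indicators.append(pi)

# Step (iii): surrogate measures quantifying intangible stocks and learning activities
pi.measures.append(Measure("Total costs of staff training", "input", 120000.0))
pi.measures.append(Measure("Number of knowledge-sharing meetings held", "process", 14))

The point of the structure is simply that each indicator remains traceable back to a key success factor and forward to its measures, which is what allows the feedback loop of revision that the scorecard method assumes.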
3. Research Methods
The project employed a mixed methodology, selecting the case study design as a flexible research strategy, enabling the use of varied data sources (Eisenhardt, 1989). Library practitioners have often favoured a qualitative methodology for PI projects, such as BSC implementations (Ceynowa, 2000; Cribb, 2005; Self, 2003). This helps to generate indicators which meet local needs but are less amenable to inter-institutional comparison. Others have combined qualitative and quantitative methods (Cotta-Schonberg and Line, 1994; Cullen, 2006), developing indicators that are both meaningful and robust, by adopting a ‘pragmatist’ philosophy (Tashakkori and Teddlie, 2003). Mixed methods are more useful than employing only qualitative or quantitative approaches when researchers want to examine the complex results of a distinctive situation and normalize them by comparing findings with other organizations (Petty and Guthrie, 2000). A mixed methodology is also a pragmatic choice for studies with both theoretical and practical aims.
The case approach is particularly appropriate for researching areas where there have been few previous studies (Benbasat et al., 1987) and was widely used to generate theories, find indicators of intellectual performance and diversify the context of measurement when the field of IC measurement emerged in the 1990s (Petty and Guthrie, 2000; Marr and Chatzkel, 2004), reinforcing its suitability for researching this area in Thai university libraries, where there has been no prior work in the field. Case studies are well suited to answering ‘why’ and ‘how’ questions, having been used in France to answer such questions in relation to intangible indicator development (MERITUM, 2002). They are also well suited to examining elaborate phenomena in natural settings (Yin, 2003), thus supporting the necessary investigation here of issues such as the institutional context of libraries and the opinions of different stakeholders.
4. Data Collection and Analysis
Fieldwork was carried out in two stages over five months. The first stage (July to August 2007) was used to test and refine the methodology by conducting a single-case pilot at a Thai university library chosen as a representative site, using Yin’s (2003) criteria of convenience, ease of access, proximity to the field researcher’s normal workplace and the availability of experts willing to make suggestions about the research design.
The second stage (June to August 2008) collected data for the main multiple-case study involving three Thai university libraries. Selection of sites for the main study was informed by prior research on IC measurement and Yin’s (2003) replication logic. The first criterion was library size, cited by Pors et al. (2004) as a significant determinant of the number of management tools deployed, with implementation of IC measurement tending to be associated with large numbers of staff (Wang, 2006). Another criterion was readiness for intangible assessment, indicated by adoption of management models such as BSC, Total Quality Management and benchmarking tools, such schemes being thought to aid understanding of IC measurement (Roberts, 2003). The final criterion was an active interest in intangibles or KM. Case sites were selected after browsing the websites of 39 libraries as potential participants.
Data for the pilot were derived from three sequential methods: document analysis, semi-structured interviews and a self-administered questionnaire survey. In the main study, the self-administered questionnaire was replaced by the researcher administering the survey instrument to groups of staff collectively, as a result of participant feedback about difficulties in interpreting some questions. Purposive sampling was used to select interview participants.
Both quantitative and qualitative content analysis have been used in business studies of IC measurement (OECD, 1999; MERITUM, 2002). A qualitative approach was used here to examine strategy, policy and other administrative documentation as a pre-interview procedure in the qualitative phase of the study. Document analysis was used to familiarize the researcher with the sites, to capture official requirements for evaluating strategic resources and to compare existing elements of PM with the language of the IC movement, thus facilitating communication with library personnel.
Semi-structured interviews have been widely used in empirical IC research, typically with the other modes of data-gathering used here (OECD, 1999; MERITUM, 2002). The semi-structured interviews were conducted with senior managers/associate directors at each site to verify the categorization of intangibles and framework for evaluation proposed in the conceptual model; to explore administrators’ attitudes towards their libraries’ IAs; and to identify KSFs as a basis for formulating draft indicators of intellectual performance, thereby linking organizational strategy to knowledge assets measures (Bontis et al., 1999).
Self-administered questionnaires and interviews have both been used in library settings to test developed measures (King Research, 1990; Cotta-Schonberg and Line, 1994; Lithgow and Hepworth, 1993), along with other techniques, such as Delphi panels (Harer and Cole, 2005) and focus groups (Cullen, 2006). The structured questionnaire-based group interviews formed the quantitative phase of the study here and were used to test the relevance and transparency of the proposed indicators and sample measures with middle managers and specialist staff as potential users of the indicators at the operational level. Senior managers previously interviewed were asked to review the questions derived from the qualitative phase and to suggest additions or changes.
Qualitative data from the library documentation and in-depth interviews were analysed line-by-line and coded using specialist software (NVivo7) to generate themes and compare categories. Quantitative data were analysed using a spreadsheet (Excel) to generate descriptive statistics, such as the mean values for respondents’ ratings of the understandability and importance of the proposed indicators. The data from each site were analysed and written-up as individual case reports in a standard format, first describing the contextual influences on PM, represented by each library’s strategy, organizational structure and institutional model for service evaluation; second, presenting the findings from documentary sources and key informants on library IAs, in terms of their identification and classification, and the motives and criteria for their evaluation; and, third, reporting the results of the user acceptance tests for the proposed indicators and measures, conducted via structured interviews. Finally, evidence from the three cases was systematically compared to identify similarities and differences in relation to the four themes of the research questions, prior to synthesising the findings from the cross-case analysis for comparison with the related literature to support formulation of theoretical propositions from the cases.

5. Case Background
The formal strategies, governance structures and steering models for service evaluation of the case libraries are important contextual dimensions underlying the process of developing PIs for their IAs. Table 1 summarizes and compares key elements of the organizational context explored at the three sites.
Table 1. Organizational contexts of the case libraries (elements identified across the three sites, K, SW and T)

Strategy: mission contents
• Contributions to institutional goals (teaching, study and research)
• Provision of information resources and services
• Interventions on lifelong learning/information literacy
• Library staff, technology and administration
• User focus
• Information access

Strategy: objectives contents
• Supply electronic resources and provide users with remote access
• Develop and train library staff
• Improve library premises/facilities
• Manage library operations and evaluate its performance
• Sustain relationships with other organizations
• Know users and respond to their needs
• Ensure that library collections meet the university curricula

Organization structure
• Bureaucratic hierarchy
• Library director sharing authority through a standing advisory committee

Steering model of library evaluation
• Use of the QA system and standards required by the parent organization

Service quality evaluation elements
• Strategic and operational planning
• The effectiveness of learning support services
• Administration/management responsibilities
• Finance and budgeting
• The mechanism for auditing internal QA
• Continuous improvement and organizational development
• Preservation of art and culture
• Organizational information systems

Number of QA measures: Library K, 35; Library SW, 30; Library T, 18

Evaluation criteria
• Measuring the library’s QA progress based on the PDCA cycle
• Overall library performance determined by the examiners’ judgements

5.1 Library Strategies
The three strategies had many common elements. Their mission statements all acknowledged their contributions to institutional goals, in addition to the provision of information resources and services. Two libraries also highlighted their roles in lifelong learning/information literacy, but there were some elements mentioned at only one site (e.g. the SW mission specifically mentioned user focus). Similarly, their strategic objectives all emphasized delivery of electronic resources and the development and training of library staff, but other issues, such as library premises and collections, did not feature in all cases.
5.2 Management Structures
The organization structures of the libraries are quite similar, reflecting their shared institutional status as public universities. They all work within a governance structure characterized by institutional rules and regulations, standardized procedures for library staff and a hierarchy of authority, with co-ordination and delegation of work by senior staff to lower levels. However, although decision-making is centralized, they all have a standing committee that enables library managers to participate in administration and in addition they use project teams with membership drawn from different divisions to implement action plans and encourage co-operation among groups.
5.3 Performance Evaluation
All universities in Thailand are obliged to meet standards specified by the Office for National Educational Standards and Quality Assessment and the three case libraries accordingly each work within a formal institutional quality assurance (QA) framework that has a strong influence on their approach to performance evaluation.
The libraries’ information supply or service delivery chains are identified as a sub-system in the monitoring of university performance, which is based on the input–process–output–outcome model, shown in the following examples:
• Inputs – annual budget, workforce, office equipment, leadership, plans;
• Processes – management processes, work processes for producing information products, procedures for delivering services;
• Outputs – the quantity and quality of library collections and services;
• Outcome – user satisfaction.
Interestingly, library K differs slightly in its definition of inputs, separating intangible inputs (e.g. strategies, plans and leadership) from tangible inputs (e.g. finance and workforce) and then categorizing its intangibles as the managerial context that precedes the tangible inputs.
The libraries undertake internal quality audits of their operations in accordance with their institutional QA standards for learning support systems, which define specific evaluation elements (as shown in Table 1) and also specify the QA measures to be used for evaluation, which are of four types, reflecting the input–process–output–outcome model. The evaluation elements are similar across the cases, but there is significant variation in the number and nature of the measures used, with library T having only 18, compared to 30 and 35 for the other two libraries. Examples of measures include size of professional staff (input), throughput for library activities (process) and use of library collections (output). Only library T claims to measure outcomes, via the results of its user satisfaction surveys.
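As a simple illustration of this measure typing, the sketch below tallies QA measures by the four types of the input–process–output–outcome model. The measure names echo the examples given in the text, but the selection is hypothetical rather than any library's actual measure set.

from collections import Counter

# Illustrative only: QA measures typed by the input-process-output-outcome model.
qa_measures = [
    ("Annual budget", "input"),
    ("Size of professional staff", "input"),
    ("Throughput for library activities", "process"),
    ("Use of library collections", "output"),
    ("User satisfaction survey results", "outcome"),
]

# Summarize the balance between inputs, processes, outputs and outcomes.
print(Counter(measure_type for _, measure_type in qa_measures))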
The evaluation process involves producing a self-assessment report, incorporating documentation and performance data; hosting a visit by university auditors, gathering direct evidence to substantiate the report; and then receiving and responding to the audit findings. The auditors are all formally trained in the use of the Plan–Do–Check–Act (PDCA) cycle, underlining the formality and rigour of the process.
6. Case Findings
Qualitative data from the document review and semi-structured interviews with a total of 12 library administrators across the three sites were analysed to identify the core IAs of the three libraries, classify these assets into the four predefined categories identified above (in section 2.1), explore the administrators’ motives for intangible evaluation and then develop a draft set of PIs.
6.1 Core Intellectual Assets
Documentation associated with the libraries’ QA systems was used to explore existing service quality evaluation elements and related performance measures relevant to assessment of intangible aspects of library performance, such as measures used to evaluate the effectiveness of their administration, their services and their strategic and operational planning. The assumption here was that IAs were already included in the current evaluation process, but not recognized as such, because they were hidden behind the measured QA elements; so the aim was to identify these hidden assets and map them onto the four-fold framework described. For example, user satisfaction surveys conducted by the SW library provided staff with knowledge of user experiences, which could be categorized as a relationship asset.
Library T’s strategy documentation was a particularly fruitful source for identifying potential intangible assets, as it had adopted Kaplan and Norton’s (2004) Strategy Map tool as a means of depicting its vision, mission, strategic priorities, desired outcomes and key PIs from the four BSC perspectives (external stakeholder, innovation and learning, financial and internal). So this library had already identified crucial intangible resources alongside tangible resources; for example, in addition to specifying ‘first-class facilities’ (a tangible asset) as a desired outcome, it specified ‘effective teams’ (an intangible human asset) as a desired outcome associated with the learning and growth perspective.
Although the QA and strategy documentation was valuable in the initial identification of IAs, some examples of assets essential to quality service delivery could only be specified in detail after the interviews with library administrators. Table 2 shows the range of IAs identified at the case sites, arranged in the four categories. Many examples conform to the broad IC taxonomy found in national guidelines (e.g. Denmark, 2003; Japan, 2005) and business literature reporting companies’ IC, which classify the concept into three categories: human capital, structural capital and relational capital (OECD, 2006). However, the fourth category of collection and service assets is library-specific and distinctive in the way that it combines assets from the other categories.
Table 2. Intellectual assets of the case libraries (identified across libraries K, SW and T)

Human assets
• Service mindset
• Mental agility
• Expertise
• Skills
• Team spirit
• Commitment to library goals
• Adaptability skills
• Group participation/teamwork
• Commitment to library strategy
• Education and training
• Competence development

Structural assets
• Minutes of knowledge sharing meetings
• Reports of working groups
• Quality control records
• Management information system
• Quality assurance documentation, e.g. handbooks, self-assessment reports and work procedures
• Output from knowledge management projects, e.g. best practices, success stories and lessons learned

Relationship assets
• Relationships with key stakeholders
• Users’ feedback
• Relationships with university executives
• Public image of the library
• Marketing communications
• Interaction between library workers and users

Collection and service assets
• Frequently used services
• Users’ praise at service points
• Information resources frequently requested
• Digital collections
• In-house databases
• Core course materials
• New search tools
• Electronic archives
• New/value-added services
• Collections and services that satisfy users
• Information resources requested by target users
• Top-ranking services
• New services

Previous work on intangible assets in the library sector has restricted the identification and classification of IC in academic libraries to the three recognized categories of the IC taxonomy (Van Deventer, 2002; Pierce and Snyder, 2003; Iivonen and Huotari, 2007). However, the results here suggest that it is necessary for academic libraries to add the ‘collection and service assets’ category to the classification of library IAs. Collection and service assets are the end-products of core knowledge-based processes in libraries, such as collection development, service enhancement and innovations in library and information work. They form a distinct fourth category in being essentially derived from a combination of human, structural and relationship assets. Identification of this additional category contributes to our further understanding of library services and information resources as library assets not wholly embraced in the broad IC taxonomy.
Human, structural and relationship assets are crucial to the internal procedures of library operations, but such procedures are less important strategically in shaping users’ perceptions of the value of information services, as users only perceive and take interest in the resources and services that are the end-products of library operations (Saracevic and Kantor, 1997). Consequently, library stakeholders’ perceptions of value are essentially connected with collection and services assets, rather than with other categories of library IAs, with this fourth category reflecting the distinctive identity of academic libraries, whose mission is to provide library services and information resources to users in support of teaching, learning and research in HEIs (Brophy, 1991). Moreover, these distinctive assets are directly relevant to the working practices of staff at all levels of library organizations, in addition to being experienced, recognized and appreciated by library stakeholders, which underlines their significance as a strategic resource.
6.2 Motives for Evaluation
The interviews with administrators also explored their motives for evaluating the libraries’ IAs, in terms of the incentives for gathering information on their knowledge resources and reasons for identifying them specifically. All the libraries had established KM projects and although their programmes were at different stages of development, the administrators all recognized a need to monitor and measure their progress. Libraries K and T wanted to demonstrate the effects of their KM projects, which had been initiated a few years ago; the SW library was at an earlier stage of KM development, but also recognized information about its KM activities had potential value in getting messages across to university executives.
Similarly, all the libraries saw the evaluation of intangibles as complementing their existing QA procedures, going beyond operational performance to more strategic concerns. Libraries SW and T both mentioned the need for library-specific measures that went beyond the standard university performance evaluation; one SW administrator wanted to differentiate the library and position it ahead of other university support services: ‘Every support unit uses the same list of mandatory QA measures. If we have new performance indicators to augment our QA measures, we may show our distinctive quality that causes us to be in front when compared with other subsidiaries in the university community’.
The director of library T made a similar point, linking this to use of the BSC: ‘the existing QA measures used in the Office produce the management data that reflects the overall performance of the University rather than the specific results of the library operations… We want a particular type of measure chosen from the BSC framework to prove the value of our library and information work contributing to the University’s academic excellence’.
The administrators identified several different stakeholder groups they wanted to target with information about their IAs. The director of library T suggested that the development of such measures could raise awareness of important intangibles among library staff: ‘Intangibles such as proactive services, value added collections and staff commitment to organizational change are very important to the whole organization… In our current evaluation of library services, it’s hard to make the library personnel become aware of these intangibles if we don’t have any new indicators for assessing them’.
The interviews thus identified two main motives for evaluating intangibles, namely to monitor the effectiveness of the libraries’ KM activities and to communicate the libraries’ value to stakeholders. The libraries’ KM-related motives are in line with findings from other sectors: Mouritsen et al.’s (2004) survey of Danish companies found 85 per cent of respondents had used IC statements to underpin KM implementation and other sources confirm this association of IA evaluation with KM processes (Marr et al., 2002; Denmark, 2003; Thorleifsdottir and Claessen, 2006). The library respondents’ desire to find new ways to communicate their contributions to their universities similarly reflects other commentators’ recognition of the need to go beyond tangible assessment to demonstrate library impact (Abels et al., 2004; White, 2007). The link between surfacing information on IAs and reporting performance via BSCs has also been acknowledged previously (Koenig, 1998; Kaplan and Norton, 2004).
6.3 Indicator Development Process
The interviews also established the framework for assessment of the libraries’ knowledge assets by exploring the administrators’ measurement viewpoints and evaluation criteria. At each site, existing QA standards and processes were seen as the starting-point for measuring intangibles. Essentially, they all wanted to integrate new PIs for intangible assets into their existing QA measures, offering both conceptual and practical reasons; for example, to build on staff familiarity with the QA system and harmonize with existing measures to avoid perceptions of increased workload, as well as the logic of treating intellectual resources in the same way as other library resources. But they also wanted to advance their assessment activity strategically: for example, an associate director at library K argued that their existing audit only helped them “Plan” and “Do” operational tasks, whereas using the BSC could help them ‘to “Check” and “Act” strategically’; library T was already using the scorecard approach to relate evaluation to its strategic objectives (via its Strategy Map) and welcomed the opportunity simply to extend this with new intangible measures.
On evaluation criteria, as intangible evaluation was a novel idea for them, all three groups of administrators emphasized simplicity as an essential criterion to facilitate widespread introduction and willing participation. In addition, they again wanted to harmonize with the existing evaluation criteria of their QA systems, by using the input–process–output model. Library T’s use of the BSC meant that it was already linking its measures to its mission, strategic priorities and desired outcomes, and the director gave examples of relevant measures already used (e.g. percentage of clients satisfied with services and numbers of best-practice documents created).
Development of initial PIs for the three libraries was guided by the three-step process model outlined above: defining KSFs, identifying PIs and choosing measures (quantifiable inputs, processes and outputs) associated with library IAs. The documented strategic objectives of the libraries were used to identify possible KSFs related to intangibles, which were then analyzed to identify the types of measures (efficiency, effectiveness, etc.) required to assess the library’s performance.
One of the investigators acted as facilitator during the indicator development process. Within each library, the process facilitator interpreted the library administrators’ interview data, which yielded further insights into their strategic objectives to supplement the data extracted from strategy documents. He next designed the PIs as broad statements articulating expectations for intellectual performance and converted the libraries’ existing QA measures to surrogate measures for quantifying their intellectual assets and activities. He then asked the library administrators responsible for overseeing the formal evaluation of library operations and services to review the initial PIs and measures, to determine whether they fitted the library contexts. After the reviews, the facilitator incorporated the proposed indicators and measures in questionnaires for acceptance testing with users through small-scale surveys, as described in the next section (6.4).
Table 3. Comparative classification of key success factors

Human assets (factor category: human; evaluation aspect: efficiency and effectiveness)
• Library K: Competent and ambitious workers
• Library SW: Library staff training and development
• Library T: HR linked to value-based management

Structural assets (factor categories: managerial and technological; evaluation aspect: efficiency and effectiveness)
• Library K: Managing and directing the library systematically (managerial)
• Library SW: Effective use of information systems and technology in library work (technological)
• Library T: Enhanced enterprise in managing library operations (managerial)

Relationship assets (factor category: social; evaluation aspect: sustainability)
• Library K: Enduring collaborations with other institutions
• Library SW: Understanding of the community served
• Library T: Sustainable partnership

Collection and service assets (factor category: marketing; evaluation aspect: quality)
• Library K: Quality of collections and efficiency of services
• Library SW: Library services that meet users’ needs
• Library T: User-oriented provision of collections and services

Table 3 shows that each library placed considerable emphasis on human, social and marketing factors, with similarities evident in the human, relationship and collection/service assets, but striking differences in their structural assets, where libraries K and T stressed different aspects of management, while the SW library identified usage of information systems and technology as key to successful strategy implementation. Table 3 also shows that in each library the complete set of KSFs covered four aspects of evaluation, although there were some variations in both the types of assets and specific examples identified in each case.
The administrators all agreed that the indicators should take the form of statements articulating an expected level of intellectual performance, composed mainly of action verbs and key activities.
Table 4. Proposed performance indicators for evaluating intellectual assets

Human assets
• Library K: Develop personal competencies and skills suitable for modernized work in a learning centre; build up staff loyalty, motivation and team morale
• Library SW: Encourage library personnel regularly to develop their job skills and capabilities; support exchange of personal knowledge among library workers; give library and information professionals a chance to demonstrate competencies outside the workplace
• Library T: Enhance staff expertise in library and information work; foster loyalty and increase teamwork skills of staff members

Structural assets
• Library K: Enable a learning environment through managerial systems
• Library SW: Establish efficient processes and procedures for managing library operations; use practical knowledge recorded in QA documents to improve supply of information products and services; apply information technology in harness with information access improvement and service quality enhancement
• Library T: Implement KM activities to promote knowledge sharing through daily work; have success in disseminating collective knowledge to library staff and sharing it with other organizations

Relationship assets
• Library K: Promote sustainable cooperation by dealing with other organizations in a win-win situation
• Library SW: Give priority to user satisfaction; initiate culture preservation projects as a part of social responsibility
• Library T: Promote library programmes/events to increase client awareness and secure adequate funding

Collection and service assets
• Library K: Put a high value on core collections in response to readers’ needs; place a high value on core services in response to users’ needs
• Library SW: Deal with users promptly on the service counters; improve the quality of learning space for users in the library premises
• Library T: Provide library collections and services that users need; increase user satisfaction by improving the service delivery process
Table 4 shows that the number of indicators suggested ranged from six to ten, with staff development emerging as the most prominent shared concern. As it was difficult to find direct input, process and output measures for the four abstract areas of evaluation (efficiency, effectiveness, sustainability and quality), surrogate or proxy measures that indirectly demonstrated the growth or decline of IAs were identified (a brief illustrative sketch follows the list). The surrogate measures most often selected by the three libraries were:
Input measures
• Total costs of staff development, education and training
• Investments in knowledge-based infrastructure (e.g. database systems)
Process measures
• Number of team meetings arranged to enable knowledge exchange
• Frequency of staff satisfaction surveys
• Frequency of user satisfaction surveys and focus groups
Output measures
• Level of staff satisfaction
• Number of new quality management documents produced (e.g. best practices)
• Number of visits to the library and its website
• Number of suggestions from users
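Because these proxies only indirectly signal the growth or decline of IAs, what matters in practice is their movement between reporting periods rather than their absolute values. The short sketch below illustrates this kind of period-on-period comparison; the figures are invented for demonstration.

# Illustrative only: proxy measures signal growth or decline of intellectual assets
# through their movement between reporting periods. All figures are invented.
proxy_measures = {
    "Number of team meetings arranged to enable knowledge exchange": (24, 31),
    "Number of new quality management documents produced": (12, 9),
    "Number of visits to the library and its website": (180_000, 210_500),
}

for name, (previous, current) in proxy_measures.items():
    change = (current - previous) / previous * 100
    trend = "growth" if current >= previous else "decline"
    print(f"{name}: {previous} -> {current} ({change:+.1f}%, {trend})")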
The use of Kaplan and Norton’s (1996) BSC framework for the indicator development process is significant here: although a growing number of libraries are now adopting this approach, designing scorecards for their particular circumstances is still seen as a new challenge (Matthews, 2008), since library practitioners have not generally been good at developing indicators that connect their activities with organizational strategies (Ford, 2002). The internal focus of the libraries’ approach to measurement, shown by their concentration on inputs, throughputs and outputs (but not outcomes or impacts), is consistent with the focus of the MERITUM (2002) and Danish guidelines (Denmark, 2003) on IC reporting, which are also based on scorecard methods.
The libraries’ selection of efficiency, effectiveness and quality as key dimensions for monitoring and evaluation is in line with established practice in the sector. However, the Thai cases also emphasise sustainability or stability as a fourth key dimension, which arguably reflects the bureaucratic culture and hierarchical structure of the Thai HE sector and supports Kaarst-Brown et al.’s (2004) and Pors’s (2008) claims that the stability associated with hierarchical cultures in libraries enables them to have efficient operations, easy control of daily tasks and secure financial support from their parent organizations.
6.4 Practicality of Indicators
The quantitative survey tested acceptability of the proposed indicators and measures with staff who would be expected to use them. Respondents were asked to rate the indicators and sample measures proposed for their particular library for understandability and importance, using a four-point Likert scale in each case (where, for understandability, 4 meant very easy to understand and 1 very difficult).
Overall, the indicators related to human assets were seen as easiest to understand and indicators relating to relationship assets (“sustainable cooperation”, “social responsibility” and “promotion and marketing of library programmes”) attracted the lowest ratings; but none of the indicators was judged as difficult to understand, with only two out of the total set of 23 having mean scores below 3 (2.80 and 2.89).
The importance ratings also recorded high mean scores, with only one value below 3 (“social responsibility”, recorded as 2.67 at the SW library). However, the ranking of similar indicators varied slightly across the sites, with staff loyalty and teamwork gaining the highest score (a maximum rating of 4.0) at library T and being ranked equal top at library K, while user satisfaction and prompt service were ranked above the staff-oriented indicators at SW library; but it must be noted that the differences here were minimal.
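For illustration, the following sketch computes the kind of mean ratings reported above from hypothetical 4-point responses and flags any indicator whose mean falls below 3. The indicator names are taken from Table 4, but the response values are invented.

from statistics import mean

# Hypothetical 4-point ratings (4 = very easy to understand / very important) for
# two of the proposed indicators; the response values are invented for illustration.
ratings = {
    "Foster loyalty and increase teamwork skills of staff members": {
        "understandability": [4, 4, 3, 4, 4],
        "importance": [4, 4, 4, 3, 4],
    },
    "Initiate culture preservation projects as a part of social responsibility": {
        "understandability": [3, 2, 3, 3, 3],
        "importance": [3, 2, 3, 3, 2],
    },
}

for indicator, scores in ratings.items():
    u = mean(scores["understandability"])
    i = mean(scores["importance"])
    # Flag indicators whose mean falls below 3, the threshold discussed in the findings.
    note = " <- review wording or relevance" if min(u, i) < 3 else ""
    print(f"{indicator}: understandability {u:.2f}, importance {i:.2f}{note}")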
These findings need to be related to their organizational and operational context. Although the concept of IA evaluation was new to all the libraries, they all had established systems for evaluating library performance and the survey respondents had all been involved in this process. Harer and Cole (2005) emphasise the significance of library professionals’ previous knowledge of PM in reaching a comprehensible set of indicators. The libraries thus had a ‘culture of assessment’, which encouraged staff to pay attention to the results they produced and how these would be perceived by stakeholders (Lakos and Phipps, 2004).
Another key factor which probably helped to make the indicators easy to understand was the deliberate use of words and phrases found in the libraries’ existing strategy and quality documents or of terms used by the administrators in their interviews (such as ‘user satisfaction surveys’, ‘staff development’ and ‘knowledge-sharing activities’). The importance of relating and mapping institutional use of language to the terms and categories of IA evaluation practice is stressed in published guidelines (MERITUM, 2002; Roberts, 2003; Thorleifsdottir and Claessen, 2006). Bukowitz and Williams (2000) argue that establishing these links helps to create a shared understanding and avoid confusion over meaning and nomenclature.
The nature of the survey sample is also significant here. The participants were all staff with operational line management roles, responsible for ensuring the quality of the services delivered in their areas, attending to the development of their team members’ abilities as needed, but with little stakeholder interaction beyond their immediate clients. This may explain why indicators for evaluating human assets had high mean scores, while indicators designed to assess longer-term relationship assets (e.g. sustained collaboration, social responsibility) had lower scores.
Finally, the high importance ratings show that when indicators are directly tied to a library or other organisation’s strategic intent they can be made more relevant to participants, as asserted by Franceschini et al. (2007: 8-9), ‘Indicators and strategies are tightly and inevitably linked to each other. A strategy without indicators is useless; indicators without a strategy are meaningless.’
7. Conclusion
In a knowledge-based economy, libraries should consider the value of their knowledge resources as organizational assets enabling the development and provision of value-added products and services. Library practitioners need to extend their existing measurement systems to cover intangible resources, moving beyond the assessment of service quality to the evaluation of IAs.
The case study presented describes the successful application of intangible asset measurement using a mixed-methods approach in a real-world context. Models and tools devised by strategists and accountants for the corporate world offer a viable framework for developing IA indicators and corresponding performance measures related to the KSFs of library and information services. However, for this sector, the standard IC taxonomy needs to be expanded beyond human, structural and relationship assets to reflect the distinctive contribution of library collection and service assets and thus communicate their value to stakeholders.
The evidence from the case suggests that the proposed developmental model of IA indicators is compatible with the quality management systems operated by many library and information services and that there are broad similarities between the assets of different libraries, but with variations in the details and types of assets. The findings also suggest that identification of intangible resources may be facilitated by prior experience of service assessment and engagement with KM, and in addition that institutional culture and terminology have an influence on the implementation of PM. More generally, the investigation affirmed the importance of explicitly linking the evaluation of intangible knowledge resources to institutional strategic objectives.
Acknowledgements
The authors gratefully acknowledge the cooperation and contribution of the library staff at the universities that participated in this study. They are particularly indebted to the directors of the libraries for giving permission to conduct the fieldwork.
References
Abels, E.G., Cogdill, K.W. and Zach, L. (2004) “Identifying and communicating the contributions of library and information services in hospitals and academic health sciences centers”, Journal of the Medical Library Association, 92(1): 46–55.
Barney, J. (1991) “Firm resources and sustained competitive advantage”, Journal of Management, 17(1): 99–120.
Barron, D.D. (1995) “Staffing rural public libraries: the need to invest in intellectual capital”, Library Trends 44(1): 77–87.
Benbasat, I., Goldstein, D.K. and Mead, M. (1987) “The case research strategy in studies of information systems”, MIS Quarterly, 11(3): 369–386.
Bontis, N., Dragonetti, N.C., Jacobsen, K. and Roos, G. (1999) “The knowledge toolbox: a review of the tools available to measure and manage intangible resources”, European Management Journal, 17(4): 391–402.
Brophy, P. (1991) “The mission of the academic library”, British Journal of Academic Librarianship, 6(3): 135-148.
Brophy, P. (2006) Measuring Library Performance: Principles and Techniques. London: Facet.
Bukowitz, W.R. and Williams, R.L. (2000) The Knowledge Management Fieldbook, rev. ed. London: Pearson Education.
Ceynowa, K. (2000) “Managing academic information provision with the balanced scorecard: a project of the German Research Association”, Performance Measurement and Metrics, 1(3): 157–164.
Cotta-Schonberg, M. and Line, M.B. (1994) “Evaluation of academic libraries: with special reference to the Copenhagen Business School”, Journal of Librarianship and Information Science, 26(2): 55–69.
Cribb, G. (2005) “Human resource development: impacting on all four perspectives of the balanced scorecard”, World Library and Information Congress: the 71st IFLA General Conference and Council, 14-18 August 2005, Oslo, Norway. The Hague: International Federation of Library Associations and Institutions. http://www.ifla.org/IV/ifla71/papers/075e-Cribb.pdf
Cullen, R. (2006) “Operationalising the Focus/Values/Purpose Matrix: a tool for libraries to measure their ability to deliver service quality”, Performance Measurement and Metrics, 7(2): 83–99.
Dakers, H. (1998) “Intellectual capital: auditing the people assets”, INSPEL, 32(4): 234-242.
Denmark (2003) Ministry of Science, Technology and Innovation. Intellectual Capital Statements – The New Guideline. Copenhagen: Danish Ministry of Science, Technology and Innovation.
Eisenhardt, K.M. (1989) “Building theories from case study research”, Academy of Management Review, 14(4): 532–550.
Ford, G. (2002) “Strategic uses of evaluation and performance measurement”, In: Stein, J., Kyrillidou, M. and Davis, D. (eds) Proceedings of the 4th Northumbria International Conference on Performance Measurement in Libraries and Information Services, 12 to 16 August 2001, Pittsburgh, USA, pp. 19–30. Washington, DC: Association of Research Libraries.
Franceschini, F., Galetto, M. and Maisano, D. (2007) Management by Measurement: Designing Key Indicators and Performance Measurement Systems. Berlin: Springer.
Green, A. (2007) “Intangible assets in plain business language”, VINE, 37(3): 238–248.
Harer, J.B. and Cole, B.R. (2005) “The importance of the stakeholder in performance measurement: critical processes and performance measures for assessing and improving academic library services and programs”, College and Research Libraries, 66(2): 149–170.
Huotari, M.-L. and Iivonen, M. (2005) “Knowledge processes: a strategic foundation for the partnership between the university and its library”, Library Management, 26(6/7): 324–335.
Iivonen, M. and Huotari, M.-L. (2007) “The university library’s intellectual capital”, Advances in Library Administration and Organization, 25: 83-96.
Japan (2005) Ministry of Economy, Trade and Industry, Guidelines for Disclosure of Intellectual Assets Based Management. Tokyo: Japanese Ministry of Economy, Trade and Industry.
Kaarst-Brown, M.L., Nicholson, S., Stanton, H.M. and von Dran, G.M. (2004) “Organizational cultures of libraries as a strategic resource”, Library Trends, 53(1): 33–53.
Kaplan, R.S. and Norton, D.P. (1996) The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business School Press.
Kaplan, R.S. and Norton, D.P. (2004) Strategy Maps: Converting Intangible Assets into Tangible Outcomes. Boston, MA: Harvard Business School Press.
King Research (1990) Keys to Success: Performance Indicators for Public Libraries, London: Office of Arts and Libraries.
Koenig, M.E.D. (1998) “From intellectual capital to knowledge management: what are they talking about?”, INSPEL, 32(4): 222–233.
Lakos, A. and Phipps, S. (2004) “Creating a culture of assessment: a catalyst for organizational change”, Portal: Libraries and the Academy, 4(3): 345–361.
Lithgow, S.D. and Hepworth, J.B. (1993) “Performance measurement in prison libraries: research methods, problems and perspectives”, Journal of Librarianship and Information Science, 25(2): 61–69.
Marr, B. (2005) “Strategic management of intangible value drivers”, Handbook of Business Strategy, 6(1): 147–154.
Marr, B. and Chatzkel, J. (2004) “Intellectual capital at the crossroads: managing, measuring, and reporting of IC”, Journal of Intellectual Capital, 5(4): 224–229.
Marr, B. and Moustaghfir, K. (2005) “Defining intellectual capital: a three-dimensional approach”, Management Decision, 43(9): 1114–1128.
Marr, B., Schiuma, G. and Neely, A. (2002) “Assessing strategic knowledge assets in e-business”, International Journal of Business Performance Management, 4(2/3/4): 279–295.
Matthews, J.R. (2008) Scorecards for Results: a Guide for Developing a Library Balanced Scorecard. Westport, CT: Libraries Unlimited.
MERITUM (2002) Guidelines for Managing and Reporting on Intangibles (Intellectual Capital Report). Brussels: European Commission, Measuring Intangibles to Understand and Improve Innovation Management Project.
Meso, P. and Smith, R. (2000) “A resource-based view of organizational knowledge management systems”, Journal of Knowledge Management, 4(3): 224–234.
Mouritsen, J., Bukh, P.N. and Marr, B. (2004) “Reporting on intellectual capital: why, what and how?”, Measuring Business Excellence, 8(1): 46–54.
OECD (1999) The OECD International Symposium on Measuring and Reporting Intellectual Capital: Experiences, Issues, and Prospects. Amsterdam: Organisation for Economic Co-operation and Development.
OECD (2006) Intellectual Assets and Value Creation: Implications for Corporate Reporting. Paris: Organisation for Economic Co-operation and Development, Directorate for Financial and Enterprise Affairs, Corporate Affairs Division.
Petty, R. and Guthrie, J. (2000) “Intellectual capital literature review: measurement, reporting and management”, Journal of Intellectual Capital, 1(2): 155–176.
Pierce, J.B. and Snyder, H. (2003) “Measuring intellectual capital: a valuation strategy for library and information centers”, Library Administration and Management, 17(1): 28–32.
Pors, N.O. (2008) “Management tools, organizational culture and leadership: an explorative study”, Performance Measurement and Metrics, 9(2): 138–152.
Pors, N.O., Dixon, P. and Robson, H. (2004) “The employment of quality measures in libraries: cultural differences, institutional imperatives and managerial profiles”, Performance Measurement and Metrics, 5(1): 20–27.
Probst, G., Raub, S. and Romhardt, K. (2000) Managing Knowledge: Building Blocks for Success. Chichester: John Wiley & Sons.
Roberts, H. (2003) “Rules of thumb in building indicators for knowledge transfer and how to use these indicators”, In: How to Develop and Monitor Your Company's Intellectual Capital: Tools and Actions for the Competency-Based Organisation, pp. 26–29. Oslo: Nordic Industrial Fund, The Frame Project.
Rowley, J. (2005) “Making sense of the quality maze: perspectives for public and academic libraries”, Library Management, 26(8/9): 508–518.
Rylander, A., Jacobsen, K. and Roos, G. (2000) “Towards improved information disclosure on intellectual capital”, International Journal of Technology Management, 20(5/6/7/8): 715–741.
Saracevic, T. and Kantor, P.B. (1997) “Studying the value of library and information services, Part 1: Establishing a theoretical framework”, Journal of the American Society for Information Science, 48(6): 527–542.
Self, J. (2003) “From values to metrics: implementation of the balanced scorecard at a university library”, Performance Measurement and Metrics, 4(2): 57–63.
Shulver, M., Lawrie, G., Andersen, H. and Cobbold, I. (2000) The Soft Side of the Balanced Scorecard: Developing Strategically Relevant Measures of Intellectual Capital, 2GC Working Paper. Maidenhead: 2GC Ltd. http://www.2gc.co.uk/pdf/2GC-WP-Intellectual-Capital-090311.pdf
Stewart, T.A. (1997) Intellectual Capital: the New Wealth of Organizations. London: Nicholas Brealey.
Sveiby, K.E. (2001) “A knowledge-based theory of the firm to guide in strategy formulation”, Journal of Intellectual Capital, 2(4): 344–358.
Tashakkori, A. and Teddlie, C. (2003) Handbook of Mixed Methods in Social and Behavioral Research, Thousand Oaks, CA: Sage.
Thorleifsdottir, A. and Claessen, E. (2006) Nordic Harmonized Knowledge Indicators: Putting IC into Practice. Oslo: Nordic Innovation Centre.
Townley, C.T. (2001) “Knowledge management and academic libraries”, College and Research Libraries, 62(1): 44–55.
Van Deventer, M.J. (2002) Introducing Intellectual Capital Management in an Information Support Services Environment. PhD, University of Pretoria.
Wang, H. (2006) “From ‘user’ to ‘customer’: TQM in academic libraries?”, Library Management, 27(9): 606–620.
White, L.N. (2007) “Unseen measures: the need to account for intangibles”, The Bottom Line: Managing Library Finances, 20(2): 77–84.
White, T. (2004) “Knowledge management in an academic library case study: KM within Oxford University Library Services (OULS)”, World Library and Information Congress: the 70th IFLA General Conference and Council, 22-27 August 2004, Buenos Aires, Argentina. The Hague: International Federation of Library Associations and Institutions. http://www.ifla.org/IV/ifla70/papers/089e-White.pdf
Yin, R.K. (2003) Case Study Research: Design and Methods, 3rd ed. Thousand Oaks, CA: Sage.
