
Understanding cost data collection tools to improve economic evaluations of health interventions
  1. John M Chapel,
  2. Guijing Wang
  1. Division for Heart Disease and Stroke Prevention, CDC, Atlanta, Georgia, USA
  1. Correspondence to Dr Guijing Wang; gbw9{at}cdc.gov

Abstract

Sound economic evaluations of health interventions provide valuable information for justifying resource allocation decisions, planning for implementation, and enhancing the sustainability of the interventions. However, the quality of intervention cost estimates is seldom addressed in the literature. Reliable cost data forms the foundation of economic evaluations, and without reliable estimates, evaluation results, such as cost-effectiveness measures, could be misleading. A common method to derive both fixed and variable costs of an intervention involves collecting data from the bottom up for each resource consumed (micro-costing). In this project, we identified data collection tools often used to obtain reliable data for estimating the costs of interventions that prevent and manage chronic conditions and considered practical applications to promote their use. We scanned economic evaluation literature published in 2008–2018 and identified the micro-costing data collection tools used. We categorised the identified tools and discuss their practical applications in an example study of health interventions, including their potential strengths and weaknesses. Micro-costing data collection tools often used in the literature include standardised comprehensive templates, targeted questionnaires, activity logs, on-site administrative databases and direct observation. These tools are not mutually exclusive and are often used in combination. Each tool has unique merits and limitations, and some may be more applicable than others under different circumstances. Proper application of micro-costing tools can produce quality cost estimates and enhance the usefulness of economic evaluations to inform resource allocation decisions.

  • economics
  • intervention

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Background

Healthcare expenditures in the USA have been rising rapidly in recent decades and are higher on a per capita basis than in all other comparable high-income countries.1–3 A primary driver of these expenditures is the growing burden of chronic health conditions.4 Developing, identifying and scaling interventions that efficiently prevent and manage chronic conditions are therefore important for controlling healthcare costs, and decision makers, faced with limited public resources, are increasingly requesting information on the economic costs and benefits of health interventions to make evidence-based programming and resource allocation decisions.5 6 Sound economic evaluations of health interventions, such as cost-effectiveness analysis and cost-benefit analysis, can provide valuable information to support these decisions.7 8

Many types of economic evaluations are available to provide valuable information for resource allocation decisions, and each approach offers different measures of the economic aspects of a health intervention (figure 1). Regardless of the type of analysis chosen, reliable intervention cost data forms the foundation of all economic evaluations. Without reliable cost estimates, the evaluation results may be misleading.7 9 Moreover, intervention cost information is essential for informing the replication planning and scale-up of interventions found to be effective and efficient. A detailed understanding of the inputs and costs of an intervention is needed to move from more controlled research settings to real-world implementation contexts.10

Figure 1

Economic evaluations of health interventions: types and components.

Despite the importance of health intervention cost estimates, the inclusion and quality of these estimates are frequently lacking and seldom addressed in the literature.9 11–13 Considerable effort and focus are often placed on the rigorous measurement of the health outcomes of interventions, whereas methods for measuring intervention costs have been comparatively neglected.

Several guidelines, checklists and standards have been developed for conducting, reporting and reviewing economic evaluations.5 14 However, few have provided adequate focus on methods for intervention cost estimation,15–17 and none has been developed with sufficient detail for thorough direct measurement techniques such as micro-costing,18 the most accurate and precise costing method.7 While defined consensus standards and guidelines have yet to be developed for conducting micro-costing studies, the principles are well established and discussed in the literature.7 8 10 19 20 Even so, discussion of the available data collection tools and methods used to conduct micro-costing studies has been limited and tends to focus on clinical settings and health technology,17 21 with less attention placed on public health interventions such as those for the prevention and management of chronic conditions.10 19

This study adds to the field by scanning literature published in the last decade to identify and describe data collection tools often used to obtain reliable data for estimating the costs of health interventions. Another objective is to discuss considerations for the practical application of cost data collection tools and methods, to promote their use and improve intervention cost estimates.

Intervention costs and costing methods

An economic evaluation of a health intervention comprises the costs of the intervention and the consequences that result from it.7 8 Intervention costs refer to the costs of the inputs required to develop and implement the intervention (in other words, the costs of resources consumed in developing, implementing, operating and delivering the intervention). The consequences of the intervention are the effects that follow from its consumption, such as changes in health utility, or increased consumption or savings of resources that occur as a result of the intervention. In this paper, we focus primarily on intervention costs, although the intervention consequences (outcomes) are an equally important component of economic evaluation. Several terms and concepts help in understanding and assessing intervention costs (table 1).

Table 1

Common terms in cost assessment of health interventions

Costing methods in economic evaluation generally fall on a spectrum between a bottom-up, micro-costing approach and a top-down, gross costing approach,7 22 each with trade-offs between accuracy, precision and research burden.15 The choice of method determines the cost estimates and may change the results of an economic evaluation considerably.23 24 In gross costing, a highly aggregated cost of service is used, such as the cost per inpatient stay or the lump sum of funding provided for a programme. A more accurate and detailed alternative is to estimate intervention costs using a micro-costing approach. Micro-costing can provide the most precise method of deriving intervention costs because it involves direct enumeration and costing of each intervention input. Commonly, a hybrid of approaches is appropriate under given feasibility constraints, using micro-costing to estimate the intervention input costs and gross costing to estimate the cost consequences.8 We focus on micro-costing to estimate the costs of intervention inputs, although the methods we describe below could potentially be applied to measuring cost consequences as well.

At its core, micro-costing entails enumerating each input unit used in an intervention and deriving the total cost by assigning a unit cost value to the inputs and aggregating them. Inputs could include labour inputs such as nurse or pharmacist time and capital inputs such as facilities space or programme educational materials. Units of input can be defined based on the intervention and could include, for example, an hour of nurse time or a percentage of a nurse's total time for labour, square feet for facilities space and individual booklets for programme educational materials. Micro-costing comprises five primary steps (other investigators have itemised the method in three to six steps,10 19 20 25 but the major principles remain the same): (1) define the intervention production processes and the study perspective; (2) identify the intervention inputs; (3) quantify the units of each input; (4) assign a cost value to each input unit and aggregate; and (5) conduct sensitivity analysis. Under the umbrella of micro-costing methods, different approaches with varying degrees of precision exist. The measurement and valuation of inputs can involve bottom-up or top-down micro-costing,24 and costs can be measured at different levels of analysis (eg, cost per patient served, intervention session or site).26
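As a concrete illustration of steps 3 and 4, the minimal sketch below quantifies a handful of inputs, assigns each a unit cost and aggregates them into a total cost. All input names, quantities and unit costs are hypothetical, chosen only for illustration and not drawn from any study reviewed here.

```python
# Hypothetical micro-costing worksheet: steps 3 and 4 only.
inputs = [
    # (input, units consumed, cost per unit in dollars)
    ("nurse time (hours)",            120, 45.00),
    ("pharmacist time (hours)",        30, 60.00),
    ("facility space (sq ft-months)", 200,  2.50),
    ("educational booklets",          500,  1.25),
]

total = 0.0
for name, quantity, unit_cost in inputs:
    line_cost = quantity * unit_cost  # step 4: value each input...
    total += line_cost                # ...and aggregate
    print(f"{name}: {quantity} x ${unit_cost:.2f} = ${line_cost:,.2f}")

print(f"Total intervention cost: ${total:,.2f}")
```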

Micro-costing is also a foundational principle of activity-based costing (ABC).26 In ABC, resource use is identified and quantified for each of a defined set of mutually exclusive and exhaustive activities to determine how costs are allocated across the activities. For the purposes of this paper, we use an inclusive definition of micro-costing that covers these variations; the data collection tools and discussions that follow are applicable to them all. An evaluator can weigh the balance of precision and burden, and thus judge which variation or combination of variations of micro-costing is appropriate for their intervention and the purpose of their study.
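The ABC variation can be sketched in the same hedged spirit: hypothetical nurse hours are logged against a mutually exclusive, exhaustive set of activities, and labour cost is allocated across those activities in proportion to recorded time. The activities, hours and wage below are invented for illustration.

```python
# Hypothetical ABC allocation: nurse hours logged against a mutually
# exclusive, exhaustive set of intervention activities.
WAGE_PER_HOUR = 45.00  # assumed fully loaded hourly wage

hours_by_activity = {
    "patient education": 60,
    "follow-up calls": 25,
    "documentation": 20,
    "travel": 15,
}

total_hours = sum(hours_by_activity.values())
for activity, hours in hours_by_activity.items():
    print(f"{activity}: ${hours * WAGE_PER_HOUR:,.2f} "
          f"({hours / total_hours:.0%} of labour cost)")
```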

Identifying cost data collection tools

We scanned literature published from 2008 through 2018 to identify the types of data collection tools often used to conduct micro-costing studies of public health and preventive interventions. Details of the literature selection process are available online (supplementary appendix figure 1). Briefly, we searched databases including MEDLINE, PsycINFO and EconLit for cost studies of interventions to prevent and manage chronic conditions, including mental and behavioural health risk factors. The searches returned 2082 records; 306 studies remained after screening out reviews, commentaries and studies that did not address the prevention and management of non-infectious chronic conditions (evaluations of clinical operations, treatments or medicines were not included) or did not address intervention cost estimation through micro-costing. Of those, we included 93 economic evaluation studies that focused on examining intervention costs and provided sufficient detail to identify the specific processes by which cost data for at least one input category were collected.


We reviewed the 93 studies and abstracted the original descriptions of the cost data collection tools used in each study, as well as information on how each tool was used (eg, mode, main user) when available. The described cost data collection tools were then listed and combined into emergent categories and subcategories. Information on the study type, study perspective, intervention setting, intervention type and health targets was also abstracted and subsequently combined into emergent categories. Counts of the number of studies that employed each type of tool were then used to summarise the frequency with which they had been used, overall (table 2) and by categories of study and intervention characteristics (online supplementary appendix).


Table 2

Cost data collection tools used in 93 studies surveyed in literature published in 2008–2018

Results

Data collection methods employed in the recent literature varied widely. We identified five major types of data collection tools (table 2):

  1. Standardised comprehensive templates (eg, web-based cost assessment tool, standardised data collection instrument). These templates are comprehensive in that they are used to collect resource unit quantity and unit cost data for most or all aspects of an intervention. They are standardised in the sense that, while they are often developed specifically for or in conjunction with a study, they have been (or can be) generalised to be made publicly available or used for multiple studies. For example, in the field of substance abuse there are multiple publicly available standardised comprehensive templates that have been used in original or adapted form, such as the Drug Abuse Treatment Cost Analysis Program (www.datcap.org). The templates are often completed retrospectively by a lead user (eg, operations manager, researcher) who might use a variety of data sources to fill out the template, such as financial records, programme documents and consultation with other staff. The templates can be administered via interview (ie, completed by the researcher by interviewing staff and reviewing records) or via a survey (eg, email, web-based) for a representative from the intervention being studied to complete.

  2. Targeted questionnaires and interviews (eg, survey of participants, manager’s survey, staff interviews). Targeted questionnaires are similar to the templates described above but are narrower in scope, targeting specific cost categories, and are study specific or less formal and standardised. This category covers questionnaires conducted via survey or via interview (structured or unstructured). For example, a questionnaire could be created to survey intervention staff to estimate the amount of time they spent on intervention activities, or a researcher could conduct staff interviews, with a structured or unstructured interview guide, to estimate this information. Questionnaires are often administered retrospectively, once or periodically (eg, every 6 months) during a study.

  3. Activity logs (eg, staff time sheet, time diary). Activity logs are logs or forms completed prospectively by intervention staff or participants. They are most often used by intervention staff to record their time spent on intervention-related activities as it occurs. For example, a staff member could carry a form in which they record the activity that represents their work time for each increment (eg, 15 min periods) of the work day, as illustrated in an example form provided by Findorff et al27 and in the sketch after this list. However, activity logs can include the collection of materials or supply costs as well.25 Logs can be developed and administered as paper-based forms, computer-based forms or through a smartphone app or other hand-held electronic device. Similar logs can also be administered to intervention participants to prospectively record time or resource use; in these cases, they are often described as diaries.

  4. Direct observation (time and motion study). Direct observation involves researchers or trained observers prospectively recording intervention resource use through in-person observation of the intervention processes. For example, a researcher could follow a provider of the intervention throughout the day to observe and record quantities of all resources used and the time spent on each intervention-related activity by the provider.

  5. On-site databases and routine records (eg, intervention database, cost accounting system, financial records). On-site databases refer to data systems housed on-site to collect resource use information specific to the site (not to be confused with broader databases such as an insurer’s administrative database). Databases can be set up specifically for data collection for a study (study specific databases), or they may already exist for a site’s normal operations (in-place databases), such as a cost-accounting system for a hospital cost centre.21 They can be developed or customised to record information specific to a study’s interests, such as options to identify the specific activity a certain resource was devoted to. In this category, we include records that are already routinely being collected (other routine records), even if they are not specifically described as being housed in a specific database, such as ‘programme records’ or ‘financial records’ that are not described further.
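As referenced in the activity log description above, the following minimal sketch shows one way that log entries recorded in fixed increments might be rolled up into a labour cost allocation. The entries, increment length, wage and activity labels are all hypothetical, invented for illustration rather than taken from any study reviewed here.

```python
# Hypothetical activity log for one staff member: one entry per 15 min
# increment of the workday, each tagged with the activity performed.
INCREMENT_HOURS = 0.25   # one 15 min block
HOURLY_WAGE = 40.00      # assumed fully loaded wage

log_entries = [
    ("09:00", "intervention: home visit"),
    ("09:15", "intervention: home visit"),
    ("09:30", "other duties"),
    ("09:45", "intervention: charting"),
    ("10:00", "other duties"),
]

intervention_blocks = [a for _, a in log_entries if a.startswith("intervention")]
share = len(intervention_blocks) / len(log_entries)
allocated_cost = len(intervention_blocks) * INCREMENT_HOURS * HOURLY_WAGE

print(f"Share of logged time on the intervention: {share:.0%}")
print(f"Labour cost allocated to the intervention: ${allocated_cost:.2f}")
```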

The tools defined above are not mutually exclusive. Frequently, multiple methods are used in combination in a study. In our 93 reviewed studies, 51 (55%) described the use of just one category of tool, while 42 (45%) described the use of two or more. Nor is the above list necessarily exhaustive; other, less frequently used methods may exist.

In our sample of surveyed literature, on-site administrative databases and records were the most commonly used tools. However, the high frequency of their use was driven by studies that did not describe the process by which records had been collected, instead simply referring to ‘programme records’ or ‘financial records’, which were commonly used as a data source in conjunction with other data collection tools. Specifically defined electronic databases that were created to collect data for the study, or that were already in place, were described in only 9% and 8% of studies, respectively. Activity logs; targeted questionnaires and interviews; and standardised comprehensive templates were the second, third and fourth most common tools, respectively, and were used at similar rates (31%–38%). Direct observation was the least common category (10%). Standardised comprehensive templates were more commonly administered via survey than via interview, whereas targeted questionnaires were more often in interview form than survey form. Targeted questionnaires via interview were frequently not well described, often referred to simply as ‘staff interviews’ with little to no detail on the development or content of the questions asked.

Case example: cost analysis of a community health worker programme in rural Vermont

To provide more detailed examples of the identified cost data collection tools in practical use, we summarise a case example of a well-conducted costing study in the field of prevention and public health that reasonably balances rigorous data collection against feasible research burden (table 3).

Table 3

Cost data collection tools employed in a sample study

Mirambeau et al28 conducted a cost analysis of a community health worker (CHW) programme for the Northeastern Vermont Regional Hospital service area in rural Vermont and estimated the fixed and variable costs of implementing the programme for 1 year. To inform the development of a standardised cost collection template, the researchers conducted a 2-day site visit, administered in-person and telephone interviews with staff and reviewed programme documents and the literature. They then used the template to collate cost data for 1 year (2010–2011) from the public health perspective: the hospital administrator compiled the data by examining records from the hospital’s administrative database and speaking to relevant staff. To allocate labour costs to the programme, the researchers created an activity log to track CHWs’ time spent on the programme. For a 2-week period, each CHW used the form to prospectively record the activity that reflected their time spent for each 30 min increment of their workday.

A strength of this study was the authors’ use of an activity log to collect the data used to allocate labour cost to the programme. By administering the activity logs prospectively for a 2-week period with multiple staff members, the authors were able to generate estimates of the programme’s labour costs that were likely more accurate than if the CHWs or a supervisor had estimated their time retrospectively. Although administering the activity log for a 2-week period is fairly strong, an alternative of using the tool for a sample of time periods across the 1-year study period could improve confidence that the time estimates represent average work time. This alternative approach could have helped ensure the time estimates reflected any variations in labour use over time resulting from changes in levels of ‘production’ (eg, patient load), which might not be captured in a single 2-week period.

A second strength of this study was the information collected to inform the development of the standardised template. The detailed study of the intervention production process prior to data collection helped ensure the data collected represented the entirety of the costs involved in operating the intervention. The authors were able to distinguish between non-labour start-up costs and ongoing operational costs in their presentation of data, as well as present a sensitivity analysis with different cost scenarios, which can assist future planning (the cost analysis results table is available in the online supplementary appendix). The study could have been improved by identifying which ongoing operational and personnel costs were fixed and which were variable. Additionally, the authors could have incorporated activity-based costing in their data collection tools to describe how costs were allocated in the programme and to identify the primary cost drivers.

Discussion

Numerous data collection tools are available to collect and estimate the fixed and variable costs of health interventions. Researchers should weigh a number of considerations when deciding on the process they will use to collect cost data for their evaluation, including the size and scale of the intervention, its setting, the time horizon and the purpose of the study. One important concern is the balance between the required precision of the data collected for a study and the acceptable level of research burden.

The investigator should aim to collect all cost data prospectively during a study. However, prospective cost collection can face feasibility constraints. In an ideal scenario, existing data systems (an in-place database) could be used to track resource use prospectively with minimal additional burden to staff. But many preventive interventions are delivered in community settings that might not have the data infrastructure found in healthcare settings, which could limit the feasibility of using existing data systems to track resource use. Moreover, many prevention interventions are delivered in multiple sites with varying contexts that can influence costs,29 which further necessitates a standard primary data collection strategy. Direct observation could produce very accurate and precise data as trained observers watch the intervention processes and consistently record the resources used. However, the research cost could be fairly high: the observer’s time would need to be dedicated to research activities throughout the observation period, and they would most likely require training or retraining to ensure observation recording is consistent. Additionally, staff or patients might find the presence of observers intrusive in some cases.26 We suspect the potentially high research cost of direct observation is a primary reason it was by far the least commonly used data collection method in our sample of cost studies.

Activity logs can be a well-balanced option for collecting detailed, prospective data with a reasonable level of research burden, but potential limitations remain. While not as costly as direct observation, activity logs still carry additional research burden: staff must complete the logs during a workday in which they may already be working at capacity, and they could require training in the logs’ use. And because the logs rely on multiple staff to input reliable data, results depend on individuals’ agreement to participate in the data collection and on their level of buy-in and effort, which can lead to missing data.29

The research burden of prospective methods can be mitigated by using sample time periods for data collection rather than collecting data throughout the study period, as mentioned above. Data collection could occur during a one-time sample period (eg, every day for two consecutive weeks) or during a sample of multiple time points across the study period (eg, 1 week during the first half of the study period and 1 week in the second half, or a random sample of days across the study period). Careful consideration should be given to choosing a sampling period that will be representative of the intervention as a whole. For instance, if there is a learning curve in the uptake of new protocols, choosing a period at the beginning of the intervention might overestimate time costs.26 27 Similarly, an intervention could have varying levels of production across time that should be considered when selecting a sampling period.
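One simple way to operationalise the random-sample option is to draw collection days across the whole study period, which spreads data collection over any seasonal or production-related variation. The sketch below shows such a draw; the dates, sample size and seed are illustrative assumptions, not a prescription.

```python
# Drawing a random sample of data collection days across a 1-year study
# period; the seed, dates and sample size are illustrative assumptions.
import random
from datetime import date, timedelta

random.seed(42)  # fixed seed so the sampling plan is reproducible
study_start = date(2024, 1, 1)
study_days = 365
n_collection_days = 10

offsets = random.sample(range(study_days), n_collection_days)
collection_days = sorted(study_start + timedelta(days=o) for o in offsets)
for day in collection_days:
    print(day.isoformat())
```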

Prospective data collection methods can ideally be used to track all resources used in an intervention. But in contexts where this might not be feasible for all cost categories and prioritisations need to be made (eg, limited research budget, time or site buy-in), some costs might be more important to track in detail than others. For example, interventions to prevent and manage chronic conditions tend to be heavily labour intensive, so labour cost is typically a key category for understanding programme costs.29 30 Community-based interventions employ labour from the settings in which they are delivered (eg, a school or community centre), and staff members may have multiple job duties, related and unrelated to the intervention; a detailed understanding of how their time is spent is therefore essential to allocate an appropriate amount of their time to the intervention and avoid overestimating its costs. Additionally, variable costs such as intervention materials may be less predictable and harder to track than fixed costs such as site rent and utilities, which are often readily available from an examination of existing records. Therefore, we recommend prioritising labour and other variable costs for detailed prospective data collection.

When prospective data collection is not feasible, or is deemed unnecessary for certain cost categories, retrospective approaches (standardised comprehensive templates, targeted questionnaires, retrospective examination of records) can provide lower-burden options for data collection. Standardised comprehensive templates and targeted questionnaires can be used in a survey format to collect data mainly at one time rather than prospectively throughout the intervention, reducing the burden on intervention staff. However, using them in a one-time, retrospective survey format can create accuracy issues resulting from response biases (eg, recall bias). In addition, with a survey format, the accuracy of the data collected will depend on the respondents’ level of effort to provide accurate data.26 31

The mode of a questionnaire can have an important influence on data accuracy. For example, questionnaires conducted via in-person interviews might encourage more complete reporting, place a lower cognitive burden on the respondent or mitigate recall bias, but they can also require additional coordination and could introduce bias (eg, interviewer bias). Conversely, questionnaires via self-administered surveys can be easier to conduct, especially if a large sample is required, but they tend to have lower response rates, less complete information and a higher cognitive burden.32 Similarly, for retrospective surveys, the recall period can affect the accuracy of the data.33–35

Collecting detailed micro-costing data provides a number of benefits. Most notably, as discussed above, direct measurement of intervention costs through micro-costing is likely to produce the most accurate and precise measure of an intervention’s costs.7 The detailed accounting of resource unit quantities also facilitates sensitivity analyses and the translation of intervention costs to other contexts. Because data are collected on a micro-level to document the quantities of each resource used, sensitivity analyses can easily be employed to examine how intervention costs could change depending on contextual differences in resource use and resource unit costs.10 For example, costs for inputs such as rent for facilities or certain costs of labour can easily be adjusted to account for potential differences in rents or wages across regions.28 Researchers can also examine cost differences that may result from substituting inputs—such as capital for certain types of labour or using lower cost labour for certain activities—to help optimise efficiency.10 35 36 In general, when intervention costs are presented, resource unit quantities and unit costs should be presented separately to enhance the usefulness of results to inform future planning for replication and scale-up by allowing other investigators to conduct similar types of sensitivity analysis.10 21 37 An emphasis on this type of transparency and assessment of contextual factors can help a cost analysis align with and contribute to broader evaluation and planning priorities focused on facilitating the successful translation and dissemination of effective interventions, such as those described in the Reach, Effectiveness, Adoption, Implementation, Maintenance framework.36 38 39
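To illustrate why keeping unit quantities and unit costs separate pays off, the sketch below recomputes a hypothetical total cost under alternative wage assumptions, mimicking a one-way sensitivity analysis across regions. All items and figures are invented for illustration.

```python
# Because unit quantities and unit costs are recorded separately, a wage
# assumption can be varied without recollecting data. All figures are
# hypothetical.
quantities = {"nurse hours": 120, "facility sq ft-months": 200}
base_unit_costs = {"nurse hours": 45.00, "facility sq ft-months": 2.50}

def total_cost(unit_costs):
    return sum(quantities[item] * unit_costs[item] for item in quantities)

print(f"Base case: ${total_cost(base_unit_costs):,.2f}")
for scenario, wage in [("low-wage region", 35.00), ("high-wage region", 60.00)]:
    adjusted = {**base_unit_costs, "nurse hours": wage}  # swap the wage only
    print(f"{scenario}: ${total_cost(adjusted):,.2f}")
```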

Additionally, studies should describe their data collection tools and methods in sufficient detail for readers to assess the data collection process and judge the likely accuracy of the resulting estimates. A standardised taxonomy for micro-costing data collection tools and methods used in public health and prevention science could improve the transparency of, and confidence in, intervention cost estimates. For example, Ridyard et al have proposed a similar taxonomy for resource use measures employed in clinical trials in a UK and European context.40 The studies we reviewed rarely described their tools and methods at this level of detail, indicating work is still needed in this area. Moreover, while scanning the literature to identify the data collection tools used, we found that many studies described using a micro-costing method but provided no detail on the process by which their data were collected.

We scanned the recent literature to identify the types of tools commonly used in economic evaluations of chronic disease prevention and management interventions and provided a snapshot of how often they are used. However, this literature scan was not meant to provide a full systematic review of the costing literature, and caution should be taken when interpreting the frequencies of tool use reported here. A rigorous systematic and critical review of micro-costing studies could provide useful information for assessing topics such as the quality of current methods,21 and future research should address this need.41 Similarly, the development of a standardised checklist for the conduct, reporting and appraisal of micro-costing studies, such as the one previously proposed by Ruger and Reiff,18 could be of great benefit in promoting the standardisation of such methods and improving the comparability of estimates from different studies. There have also been calls for future research to directly compare the use of various tools and examine their comparative accuracy, reliability or validity.21 Although this type of research would be of great benefit to the field, to our knowledge none has been conducted and published.

Conclusions

Researchers who want to estimate the costs of public health and preventive interventions focused on chronic conditions can apply the tools we have identified. The considerations we have discussed require careful forethought, and proper application can produce quality cost estimates, which in turn will enhance the usefulness of economic evaluations to inform resource allocation decisions, planning and sustainability for effective preventive health interventions. Future research can address the standardisation, validity and reporting of such tools, which can further improve confidence in, and the utility of, intervention cost estimates in practice.

References

Footnotes

  • Contributors JMC completed the draft and revised the manuscript. GW provided guidance and modifications.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; internally peer reviewed.