Evaluating the relevance and clarity of the heart failure eMeasure implementation toolkit by using a web-based survey instrument

Background: The impact on workflow is an important component in determining whether a Health Information Technology (HIT) implementation will be successful. Workflow is, unfortunately, a concept that is often ignored when implementing HIT, and the literature about workflow in the domains of quality improvement, system implementation, and process improvement has not been adequate. HIT is not always designed to fit the workflow of a given organization, making it difficult to truly assess HIT impact on outcomes or processes. The context and purpose of the study: The purpose of this study was to establish an initial, generalizable toolkit to assess the impact of implementation on the workflow of quality improvement and information technology professionals and their teams in the inpatient hospital setting. The toolkit, in the form of an implementation guide, is a compilation of resources such as checklists, forms, and planning documents that provide a template for workflow analysis. Results: The expert evaluators rated the tools as moderately relevant and moderately clear. The post-implementation tools were rated highest for relevance. Comments predominantly highlighted areas of the toolkit that needed additional depth or detail, rather than suggesting additional tools. Main findings: This study provided a methodology for identifying information needs, detecting conflicts, and implementing possible workflow solutions; thus, the toolkit could serve as a pragmatically useful implementation guide for assessing the workflow of quality improvement and information technology professionals and their teams during the implementation of eMeasures in an inpatient hospital setting. Conclusions: The toolkit provided a useful collection of tools in the form of checklists, forms, and planning documents to enhance workflow during the implementation of heart failure (HF) eMeasures.
Brief summary: We developed an initial eMeasure Implementation Toolkit for the heart failure (HF) eMeasure to allow quality improvement (QI) and information technology (IT) professionals and their teams to assess the impact of implementation on workflow. During the development of the toolkit, we conducted stakeholder interviews with VA key informants and subject matter experts, who provided valuable information for understanding the workflow during the implementation of eMeasures. The information obtained from stakeholder engagement highlighted areas and tools that were essential to the toolkit development phase. The final phase involved the evaluation of the eMeasure Implementation Toolkit for relevance and clarity by non-VA experts. Via a survey, the non-VA subject matter experts predominantly evaluated the sections of the toolkit that contained the tools for evaluating workflow during eMeasure implementation. Any potential implications: This eMeasure Implementation Toolkit should continue to be refined. First, the toolkit should undergo further evaluation with a variety of subject matter experts from various job categories and medical centers. Second, the toolkit should be tailored to the level of cognition of each user, with attention to the syntax used to describe the sections of the toolkit. Finally, the toolkit could be extended to other quality measures, such as stroke, diabetes, and pneumonia, based on the research methodology used in the development of the existing toolkit.

Research Article

Megha Kalsy1-4*, Katherine Sward4,5, Bruce Bray3-5, Andrew Redd3,6, Karen Eilbeck4,7 and Jennifer H Garvin1-4

1VA Center for Health Information and Communication, Richard L. Roudebush VAMC, 1481 W 10th St, Indianapolis, IN 46202, USA
2Division of Health Information Management and Systems, School of Health and Rehabilitation Sciences, The Ohio State University, 453 W 10th Ave, Columbus, OH 43210, USA
3Informatics, Decision-Enhancement, and Analytic Sciences Center (IDEAS 2.0), VA Salt Lake City Health Care System, 500 Foothill Dr, Salt Lake City, UT 84148, USA
4Department of Biomedical Informatics, University of Utah, 421 Wakara Way #140, Salt Lake City, UT 84108, USA
5College of Nursing, University of Utah, 10 N 2000 E, Salt Lake City, UT 84112, USA
6Division of Epidemiology, University of Utah School of Medicine, 295 Chipeta Way, Salt Lake City, UT 84132, USA
7Department of Human Genetics, University of Utah School of Medicine, 15 N 2030 E, Salt Lake City, UT 84132, USA

Received: 21 May, 2019; Accepted: 03 August, 2019; Published: 05 August, 2019

*Corresponding author: Megha Kalsy, PhD, Division of Health Information Management and Systems, School of Health and Rehabilitation Sciences, The Ohio State University, 453 W 10th Ave, Columbus, OH 43210, USA, Tel: 4409858404; Email:


Background
Technology is rapidly transforming healthcare by enabling the sharing of real-time health information across institutions to support patient care, administration, and research [1].
Health Information Technology (HIT) tools are being used as a component of interventions to improve the quality of care and to reduce costs [2]. Given this capacity, informatics methods are integral to healthcare quality metric assessment and reporting. Quality Improvement (QI) activities (e.g., data gathering) draw on sources such as electronic health records (EHRs), data warehouses, and decision support to facilitate the evaluation of quality metrics [3]. Although HIT support for QI activities is increasing, little research has been done on the workflow involved in the automation of quality metric assessment and reporting [4][5][6].
The impact on workflow is an important component in determining whether an HIT implementation will be successful [7][8][9]. Workflow is, unfortunately, a concept that is often ignored when implementing HIT, and the literature about workflow in the domains of quality improvement, system implementation, and process improvement has not been adequate [10,11]. HIT is not always designed to fit the workflow of a given organization, making it difficult to truly assess HIT impact on outcomes or processes [12][13][14]. The literature demonstrates inadequate sophistication in studies of the role of workflow in the adoption of HIT in the Quality Improvement (QI) domain, due to the absence of formal workflow design and methodologies, a lack of comprehensive knowledge about the system, and a lack of interest among quality improvement staff in using the new technology [15,16].
Conducting a comprehensive workflow analysis is a critical step in HIT implementation. Workflow analysis allows health centers to critically analyze the work environment.
Workflow is loosely defined as a set of tasks that can be grouped chronologically into processes, together with the set of people or resources needed for those tasks in order to accomplish an end goal [16]. One solution to implementation challenges such as the lack of attention to workflow is an implementation toolkit [17,18]. An implementation toolkit is an assembly of instruments such as checklists, forms, and planning documents. Implementation toolkits are intended to provide guidance or assistance; they may provide a template or blueprint for what to do, when to do it, and how to do it. Users can apply an implementation toolkit in its entirety, or apply only the portions that are informative for their needs. In this research, the workflow comprises the set of processes needed to transition from a manual approach to data collection to an automated one, the set of people or resources available to perform this transition, and the human-technology interactions between them. To ensure that HIT successfully integrates with workflow, it is essential to understand the current system before implementing the new technology [19,20]. Therefore, an implementation toolkit that supports workflow evaluation for HIT-enabled QI efforts needs to include evaluation of both the current workflow and the potential impact of the new system on workflow.
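The idea that users can apply an implementation toolkit whole or in part can be made concrete with a minimal sketch. The section and tool names below are hypothetical illustrations, not the toolkit's actual contents:

```python
# A minimal sketch, with hypothetical section and tool names, of how an
# implementation toolkit can be organized so that users apply it in its
# entirety or select only the portions relevant to their needs.
toolkit = {
    "pre-implementation": ["planning checklist",
                           "stakeholder interview template",
                           "workflow flowcharts"],
    "implementation": ["eMeasure document analysis",
                       "SQL extraction tools"],
    "post-implementation": ["barriers and facilitators assessment",
                            "process improvement checklist"],
}

def select_sections(toolkit, wanted):
    """Return only the toolkit sections a user chooses to apply."""
    return {name: tools for name, tools in toolkit.items() if name in wanted}

# A team interested only in planning applies the pre-implementation tools.
subset = select_sections(toolkit, {"pre-implementation"})
print(subset)
```

The point of the sketch is simply that a toolkit is a loosely coupled collection: each section stands alone, so partial adoption does not break the rest.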
The purpose of this study was to establish a generalizable toolkit to assess the impact of implementing electronic Quality Improvement (QI) reporting on the workflow of quality improvement professionals and their teams in the inpatient hospital setting.

Theoretical Framework
The study was guided by the Promoting Action on Research Implementation in Health Services (PARIHS) model as a unified overarching research framework [21]. This framework, which was developed in 1998 to explain the process of implementing research into practice, has helped explain the variable success of many types of implementation projects. The framework suggests that three elements (evidence, context, and facilitation) exist on a continuum from weak to strong in terms of the extent to which each supports successful implementation.
Evidence examines the science behind the innovation to be implemented.

Generalizability of the Assessment Toolkit
The survey was developed using participants who worked at the VA. We assessed the generalizability of the eMeasure Implementation Toolkit by surveying a different set of subject matter experts (SMEs) from VA and non-VA settings. We asked the experts to evaluate the toolkit for relevance and clarity.

Analysis
Descriptive Statistics: We obtained survey data from experts to evaluate the eMeasure Implementation Toolkit. We used descriptive statistics to summarize the survey results for the items rating relevance and clarity.
Content Validity: Evidence for content validity was supported by having QI, HIT, and measure automation experts review the content of the toolkit. Each item was rated for relevance to the underlying constructs (on a scale of 1-5, with 1 being "very relevant") and for clarity (on a scale of 1-5, with 1 being "very clear"). Agreement on the assessment toolkit was measured via similarities in answers across the various non-VA settings. A qualitative research report documented the findings from this analysis.
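As a minimal sketch of this analysis (using hypothetical ratings rather than the study's data), the summary statistics and the relevance-clarity correlation for a single tool could be computed as follows:

```python
from statistics import mean, stdev

# Hypothetical 1-5 ratings from five expert evaluators for one tool
# (1 = very relevant / very clear).
relevance = [1, 2, 2, 3, 2]
clarity = [2, 2, 3, 3, 2]

def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"relevance: mean={mean(relevance):.2f}, sd={stdev(relevance):.2f}")
print(f"clarity:   mean={mean(clarity):.2f}, sd={stdev(clarity):.2f}")
print(f"relevance-clarity correlation: {pearson(relevance, clarity):.2f}")
```

The same per-tool summary (mean, spread, and relevance-clarity correlation) is what the tables in the Results section report descriptively.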

Results
We evaluated the eMeasure Implementation Toolkit by using a survey in REDCap. The non-VA subject matter experts evaluated the eMeasure Implementation Toolkit for relevance and clarity. The PDF of the REDCap survey can be found in the Appendix.
The subject matter experts predominantly evaluated the sections of the toolkit that contained the tools for evaluating workflow during eMeasure implementation. The total time of participation in the evaluation phase was approximately 20-30 minutes: 10-15 minutes to review the toolkit and 10-15 minutes to complete the questionnaire. Table 1 lists the tools under each section of the eMeasure Implementation Toolkit that were evaluated by the non-VA subject matter experts.

Summary of the descriptive statistics results of the survey
The expert evaluators were asked to review the tools in the eMeasure Implementation Toolkit and provide their perspectives on them. The experts were asked to rate each item in the toolkit for relevance and clarity and to add any additional comments at the end of the survey. Each item was rated on a 1 to 5 scale, with 1 representing "very relevant" (or "very clear") and 5 representing "not at all relevant" (or "not at all clear"). For each tool, the correlation between the relevance and clarity ratings was also assessed.

Section A: Determine pre-implementation requirements

Table 3 shows results for pre-implementation requirements. This section of the toolkit featured three tools: the planning checklist, a template for stakeholder interviews, and flowcharts. Each of the tools in this section was rated moderately relevant and moderately clear, and there appears to be a strong correlation between the relevance and clarity ratings for each tool. Completeness of the toolkit could be a concern, as comments included "Pre-implementation planning checklist - concerning the jobs lists diary knowing how busy the floor staff can be I'm not sure they would take the time to fill out the form in total". Other comments included "I was a little unclear how I would use the planning checklist. A few more details or links to instructions would be valuable".

Section B: Tools for implementing a single eMeasure

Table 4 displays survey findings for the four tools for implementing a single eMeasure: analysis of the eMeasure document, standard terminology and data sources for implementing an eMeasure, structured query language tools, and natural language processing tools.
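Of the structured query language tools listed above, a minimal hypothetical sketch can illustrate the kind of query such a tool supports. The schema, table and column names, and ICD-10-style codes below are illustrative assumptions, not the study's actual queries:

```python
import sqlite3

# Hypothetical, simplified schema for illustration only; real eMeasure
# logic is defined by the measure specification and the site's own
# data warehouse schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inpatient_stays (
    patient_id INTEGER,
    discharge_dx TEXT,      -- principal discharge diagnosis code
    discharge_date TEXT
);
INSERT INTO inpatient_stays VALUES
    (1, 'I50.9',  '2019-03-01'),  -- heart failure, unspecified
    (2, 'J18.9',  '2019-03-02'),  -- pneumonia (not in denominator)
    (3, 'I50.22', '2019-03-05');  -- chronic systolic heart failure
""")

# Denominator-style query: inpatient stays with a principal HF diagnosis.
hf_stays = conn.execute("""
    SELECT patient_id
    FROM inpatient_stays
    WHERE discharge_dx LIKE 'I50%'
    ORDER BY patient_id
""").fetchall()

print([row[0] for row in hf_stays])
```

The sketch shows only the flavor of the task: translating an eMeasure's population criteria into queries against structured EHR or warehouse data.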
The analysis of the eMeasure document, the standard terminology and data sources, and the structured query language tools were each rated moderately for relevance and clarity. The natural language processing tools were rated moderately for relevance; for their clarity, the evaluators took a more neutral stance. There appears to be a strong correlation between the relevance and clarity ratings for each tool. On average, the tools in this section were seen as moderately relevant and moderately clear. Completeness could be a concern, as comments included "NLP section needs further work as this is an area that may hold the key to capturing data from a large portion of medical records".

Section C: Tools for managing team activity

Each of the tools in this section was rated moderately relevant and moderately clear. There appears to be a strong correlation between the relevance and clarity ratings for each tool.

On average, the tools in this section were seen as moderately relevant and moderately clear. Comments included "HIT terminology is unfamiliar to me so clarity is difficult to rate". Completeness of the tools for managing team activity could be a concern, as other comments suggested that additional tools in each of these tool categories might be helpful. The number of eMeasures being implemented simultaneously, as well as the characteristics of the team, could affect eMeasure implementation workflow.

Section D: Determine post-implementation requirements

Table 6 displays results for the three tools for post-implementation requirements. The tools for the assessment of barriers and facilitators, and for process improvement requirements, were rated very relevant and moderately clear. The post-implementation assessment to finalize workflows was rated moderately relevant and moderately clear. There appears to be a strong correlation between the relevance and clarity ratings for each tool. On average, the tools in this section were seen as very relevant and moderately clear.

Findings summary
In general, the expert evaluators felt the toolkit was a useful collection of tools for assessing workflow during the implementation of eMeasures. Overall, the reviewers rated the tools in the toolkit as moderately relevant and moderately clear. The post-implementation tools were rated highest for relevance. Comments predominantly highlighted areas of the toolkit that needed additional depth or detail, rather than suggesting additional tools.

Discussion
The impact on workflow is an important component in determining whether an HIT implementation will be successful. Workflow is, unfortunately, a concept that is often ignored when implementing HIT, and the literature about workflow in the domains of quality improvement, system implementation, and process improvement has not been adequate. HIT is not always designed to fit the workflow of a given organization, making it difficult to truly assess HIT impact on outcomes or processes [16]. The literature demonstrates inadequate sophistication in studies of the role of workflow in the adoption of HIT in the QI domain, due to the absence of formal workflow design and methodologies, a lack of comprehensive knowledge about the system, and a lack of interest among quality improvement staff in using the new technology [16].
One solution to implementation challenges such as the lack of attention to workflow is an implementation toolkit.
An implementation toolkit is an assembly of instruments such as checklists, forms, and planning documents. Implementation toolkits are intended to provide guidance or assistance; they may provide a template or blueprint for what to do, when to do it, and how to do it. Users can apply an implementation toolkit in its entirety, or apply only the portions that are informative for their needs.
To ensure that HIT successfully integrates with workflow, it is essential to understand the current system before implementing the new technology [23,24]. Therefore, an implementation toolkit that supports workflow evaluation for HIT-enabled QI efforts needs to include evaluation of both the current workflow and the potential impact of the new system on workflow [25].
The purpose of this study was to establish an initial toolkit, which would be generalizable to assess the impact of implementing eMeasures on workflow. The study was guided by the Promoting Action on Research Implementation in Health Services (PARIHS) model [21], in combination with a socio-technical model of HIT [22], which strengthened the overall design of the study.

Conclusion
The purpose of this study was to establish a generalizable toolkit to assess the impact of implementing eMeasures on the workflow of quality improvement and information technology professionals and their teams in the inpatient hospital setting.
The toolkit developed during this research was guided by a strong, unified, overarching implementation framework, the Promoting Action on Research Implementation in Health Services (PARIHS) model [21], in combination with a socio-technical model of HIT [22], which strengthened the overall design of our study. The toolkit provided a useful collection of tools in the form of checklists, forms, and planning documents to enhance workflow during the implementation of eMeasures.
The final phase involved the evaluation of the eMeasure Implementation Toolkit for relevance and clarity by non-VA experts. Via a survey, the non-VA subject matter experts predominantly evaluated the sections of the toolkit that contained the tools for evaluating workflow during eMeasure implementation. During the evaluation phase, the expert evaluators rated the tools as moderately relevant and moderately clear. The post-implementation tools were rated highest for relevance. Comments predominantly highlighted areas of the toolkit that needed additional depth or detail, rather than suggesting additional tools.

Future directions
There are numerous future directions that may be drawn from this study. Some relate to the findings of the study, while others concern the potential applications and uses of the developed eMeasure Implementation Toolkit. Each is enumerated in detail below.