
Currently submitted to: JMIR Research Protocols

Date Submitted: Feb 29, 2024
Open Peer Review Period: Mar 1, 2024 - Apr 26, 2024
(closed for review, but you can still tweet)

NOTE: This is an unreviewed preprint

Warning: This is an unreviewed preprint (What is a preprint?). Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (a note "no longer under consideration" will appear above).

Peer-review me: Readers with interest and expertise are encouraged to sign up as a peer reviewer if the paper is within an open peer-review period (in this case, a "Peer-Review Me" button to sign up as a reviewer is displayed above). All preprints currently open for review are listed here. Outside of the formal open peer-review period, we encourage you to tweet about the preprint.

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings, but the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.

Submit: To post your own preprint, simply submit to any JMIR journal and choose the appropriate settings to expose your submitted version as a preprint.

Warning: This is an author submission that is not peer-reviewed or edited. Preprints, unless they are shown as "accepted," should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

Evaluation methods, indicators, and outcomes in learning health systems: a jurisdictional scan

  • Shelley Vanderhout; 
  • Marissa Bird; 
  • Antonia Giannarakos; 
  • Balpreet Panesar; 
  • Carly Whitmore

ABSTRACT

Background:

In learning health systems (LHS), real-time evidence, informatics, patient-provider partnerships and experiences, and organizational culture are combined to conduct “learning cycles” that support improvements in care. Although the concept of an LHS is fairly well established in the literature, evaluation methods, mechanisms, and indicators are less consistently described. Further, LHS often use “usual care” or the “status quo” as a benchmark for comparing new approaches to care, but disentangling usual care from the multifarious care modalities found across settings is challenging. There is a need to identify which evaluation methods are used within LHS, describe how LHS growth and maturity are conceptualized, and determine what tools and measures are being used to evaluate LHS at the system level.

Objective:

To 1) identify international examples of LHS and describe their evaluation approaches, frameworks, indicators, and outcomes; and 2) describe common characteristics, emphases, assumptions, or challenges in establishing counterfactuals in LHS.

Methods:

A jurisdictional scan will be conducted according to modified PRISMA guidelines. LHS will be identified through a search of peer-reviewed and grey literature using the Ovid MEDLINE, EBSCO CINAHL, Ovid Embase, Clarivate Web of Science, and PubMed (non-MEDLINE) databases, as well as the web. To gain a comprehensive understanding of each LHS, including details specific to evaluation, self-identified LHS will be included if they are described according to ≥4 of 11 pre-specified criteria (core functionalities, analytics, use of evidence, co-design/implementation, evaluation, change management/governance structures, data sharing, knowledge sharing, training/capacity building, equity, and sustainability). Search results will be screened, extracted, and analyzed to inform two descriptive reviews pertaining to our two main objectives. Evaluation methods and approaches, both within learning cycles and at the system level, as well as frameworks, indicators, and target outcomes, will be identified and summarized descriptively. Across evaluations, common challenges, assumptions, contextual factors, and mechanisms will be described.
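To make the screening rule concrete, the following is a minimal illustrative sketch in Python of the ≥4-of-11 inclusion threshold described above. The criterion labels mirror the list in the Methods, but the record structure and function name are hypothetical and are not the study's actual extraction forms.

    # Hypothetical illustration of the inclusion rule: a self-identified LHS
    # is included if its description covers at least 4 of the 11
    # pre-specified criteria listed in the Methods.
    CRITERIA = [
        "core_functionalities", "analytics", "use_of_evidence",
        "co_design_implementation", "evaluation",
        "change_management_governance", "data_sharing", "knowledge_sharing",
        "training_capacity_building", "equity", "sustainability",
    ]

    def meets_inclusion_threshold(record: dict, threshold: int = 4) -> bool:
        # Count how many criteria this LHS description covers.
        described = sum(1 for criterion in CRITERIA if record.get(criterion, False))
        return described >= threshold

    # Example: an LHS described on five criteria would be included.
    example = {"analytics": True, "evaluation": True, "data_sharing": True,
               "equity": True, "sustainability": True}
    assert meets_inclusion_threshold(example)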

Results:

NA

Conclusions:

This research will characterize the current landscape of LHS evaluation approaches and provide a foundation for developing consistent and scalable metrics of LHS growth, maturity, and success.


 Citation

Please cite as:

Vanderhout S, Bird M, Giannarakos A, Panesar B, Whitmore C

Evaluation methods, indicators, and outcomes in learning health systems: a jurisdictional scan

JMIR Preprints. 29/02/2024:57929

DOI: 10.2196/preprints.57929

URL: https://preprints.jmir.org/preprint/57929


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.
