Advancing implementation science through measure development and evaluation: a study protocol

Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) unknown psychometric and pragmatic strength of existing measures. Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: Develop reliable, valid, and pragmatic measures of three critical implementation outcomes: acceptability, appropriateness, and feasibility. Aim 3: Identify Consolidated Framework for Implementation Research and Implementation Outcome Framework-linked measures that demonstrate both psychometric and pragmatic strength. For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and complete a literature review to populate pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data.

The study outputs of each aim are expected to have a positive impact as they will establish and guide a comprehensive measurement-focused research agenda for implementation science and provide empirically supported measures, tools, and methods for accomplishing this work.

Journal Title: Implementation Science: IS, 22 July 2015, Vol. 10, p. 102
Main Author: Lewis, Cara C
Other Authors: Weiner, Bryan J; Stanick, Cameo; Fischer, Sarah M
Format: Electronic Article
Language: English
Subjects: Program Development -- Methods; Program Evaluation -- Methods
ID: E-ISSN 1748-5908; PMID 26197880; DOI 10.1186/s13012-015-0287-0
Link: http://pubmed.gov/26197880