Background: Clinical dashboards are increasingly used in aged care systems to support performance review and improve outcomes for older adults receiving care.
Objective: Our aim was to explore evidence from studies of the acceptability and usability of clinical dashboards, including their visual features and functionalities, in aged care settings.
Methods: A systematic review was conducted using 5 databases (MEDLINE, Embase, PsycINFO, Cochrane Library, and CINAHL) from inception to April 2022. Studies were included in the review if they were conducted in aged care environments (home-based community care, retirement villages, and long-term care) and reported a usability or acceptability evaluation of a clinical dashboard for use in aged care environments, including specific dashboard visual features (eg, a qualitative summary of individual user experience or metrics from a usability scale). Two researchers independently reviewed the articles and extracted the data. Data synthesis was performed via narrative review, and the risk of bias was measured using the Mixed Methods Appraisal Tool.
Results: In total, 14 articles reporting on 12 dashboards were included. The quality of the articles varied. There was considerable heterogeneity in implementation setting (home care 8/14, 57%), dashboard user groups (health professionals 9/14, 64%), and sample size (range 3-292). Dashboard features included a visual representation of information (eg, medical condition prevalence), analytic capability (eg, predictive), and others (eg, stakeholder communication). Dashboard usability was mixed (4 dashboards rated as high), while dashboard acceptability was high for 9 dashboards. Most users considered dashboards to be informative, relevant, and functional, and reported that they used or intended to use this resource in the future. Dashboards with one or more of the following features were found to be highly acceptable: bar charts, radio buttons, checkboxes or other symbols, interactive displays, and reporting capabilities.
Conclusions: A comprehensive summary of clinical dashboards used in aged care is provided to inform future dashboard development, testing, and implementation. Further research is required to optimize visualization features, usability, and acceptability of dashboards in aged care.
Health information technologies are increasingly being used in the health care sector, including in aged care, owing to their capacity to improve workflow efficiencies and quality of care. A technology rapidly gaining momentum in health care is the electronic clinical dashboard. These dashboards typically provide a summary of vital clinical data relating to individual patients to increase users’ understanding of their health care needs and care, display trends in patient-reported clinical outcomes, and support decision-making. Limited examples of clinical dashboards currently exist within aged care.
Aged care has a diverse workforce with varying levels of health and digital literacy. To address the needs of older adults (defined as individuals aged 65 years and older) in care, their families, and the workforce, dashboards should ideally be designed to support the perspectives and requirements of all relevant stakeholders in aged care. However, there is limited research on how best to present data to support the interpretation of resident outcomes. Furthermore, while the use of visual information can help reduce information overload and improve understanding of data for users in general, it is unclear how different types of visual displays used in dashboards may affect comprehension and decision-making for aged care users.
It has been shown that the way in which information is presented (eg, icon displays vs tables, pie charts, and bar graphs) can impact the accuracy of decisions made by health professionals, but limited work has examined whether interpretation of visual information depends on the expertise, knowledge, and experience of various dashboard users. Aged care organizations are being encouraged to adopt dashboards to improve the quality of care and resident safety; however, dashboards can be used to communicate information to different users, including patients, clinicians, and others.
The aim of this review was thus to identify the visual features of clinical dashboards that are usable and acceptable to the varied users in aged care settings, in order to help guide future development, design, and implementation of dashboards in aged care.
Adhering to recommended procedures for systematic reviews (ie, the PRISMA [Preferred Reporting Items for Systematic Reviews and Meta-Analyses] guidelines), we conducted a literature search for peer-reviewed empirical studies published until April 27, 2022, using a predefined search strategy in the following databases: MEDLINE, Embase, Scopus, PsycINFO, and CINAHL. Primary search terms were dashboard, aged population, aged care, acceptability, and usability; papers were limited to those published from 2000 to April 2022, involving human subjects, and written in English (see search strategy in Table S1). To increase the comprehensiveness of the search, we scanned the reference lists and cited documents of included peer-reviewed articles (ie, snowballing) to identify any relevant articles missed by the searches.
Inclusion and Exclusion Criteria
We included peer-reviewed primary studies reporting a usability or acceptability evaluation of a clinical dashboard for use in aged care environments, including home-based community care, retirement villages, and long-term care (Table S2). All study designs were included.
All potential studies were exported into a reference citation manager and duplicates were removed. The primary author (JS) removed additional duplicates. A random selection of 10% of the abstracts (n=200) was then screened by 2 authors (JS and FS). Interreviewer agreement was high (100%), with no disagreement on which papers should proceed to full-text screening. FS conducted the remainder of the abstract review. Full-text articles were then obtained for screening by JS and FS, with AN moderating the final list of articles. See the PRISMA diagram for a detailed summary.
Data extraction was completed independently by 2 reviewers (JS and LD) and checked by an additional reviewer (AN). The data extraction tool was piloted to ensure complete documentation of the qualitative and quantitative components of the included studies. Once finalized, data were extracted on study general characteristics (eg, year, country, type of dashboards, participants, and study design), sample characteristics (eg, age and gender), dashboard visual features (eg, charts), acceptability and usability ratings, study findings, and recommendations.
Acceptability was defined as the users’ judgment on the appropriateness of the dashboard and its design features, which included sensitivity to their needs as well as usage levels and utility. Adopting the theoretical framework of acceptability, perceived user acceptability was explored for the overall dashboard as well as for specific design features as described by the study (eg, bar charts). Detailed examples of acceptability scoring are shown below.
Briefly, acceptability was categorized according to technology acceptability statements in validated technology usability tools or through other in-house developed surveys focused on users’ responses to acceptability. For example, statements such as “I found the system unnecessarily complex” in the System Usability Scale; “I think the visual perception of the dashboard is rich” in the Questionnaire for User Interaction Satisfaction; and “Using this dashboard would enable me to accomplish tasks more quickly” in the Technology Acceptance Model (TAM) were used to rate the acceptability of the dashboards or their features. Acceptability was scored according to the confirmed metrics of these tools and was classified as low, medium, or high for each scale. For example, with the TAM, acceptability was defined as low (<50% agreement), medium (50%-70% agreement), and high (>70% agreement).
In-house surveys typically used a 5-point Likert scale of agreement (1=highly disagree to 5=highly agree) with specific statements on the usefulness of the dashboard, its value, and its necessity (eg, Lee and Huebner), and responses were scored as low (1-2), medium (2-4), and high (4-5) acceptability.
For qualitative articles, general and specific dashboard features that were perceived positively by all stakeholders in a single study were coded as high acceptability; features that received a mix of positive and negative stakeholder feedback were coded as medium acceptability; and features that were perceived to provide minimal to no added value for stakeholders (eg, low staff engagement or requiring significant improvements) were categorized as low acceptability.
| Study design and measurement | Low | Medium | High |
| --- | --- | --- | --- |
| System Usability Scale | <25 | 25-35 | >35 |
| Questionnaire for User Interaction Satisfaction | <5 | 5-7 | >7 |
| Technology Acceptance Model | <50% agreement | 50%-70% agreement | >70% agreement |
| In-house survey on the overall dashboard (eg, “the anticoagulation dashboard is necessary for high-quality home health patient care”) and specific dashboard features (eg, “The graph combining edema status and weight is useful”) | <50% agreement | 50%-70% agreement | 70%-100% agreement |
| Participant feedback | Negative appraisals (eg, “The tablet is extra work, and for people with dementia, it’s very important for me to give them extra time.”) | A mix of both negative and positive comments (eg, “On the right track but not quite there.”; “Whether the system really works remains to be seen. At least it is [better] than nothing.”) | |

| Study design and measurement | Low | Medium | High |
| --- | --- | --- | --- |
| System Usability Scale | <50 | 50-70 | >70 |
| Technology Acceptance Model | <50% agreement | 50%-70% agreement | >70% agreement |
| In-house survey (eg, “The CHF dashboard provides the ease of reviewing necessary patient information at one time.”) | <50% agreement | 50%-70% agreement | 80%-100% agreement |
| Participant feedback | Negative appraisals (eg, “there are no options that we might like to have clicked, that the clients are, for example, chronically or acutely confused.”; “The staff struggled with the challenge of responding to acute events versus detecting trends and patterns of behavioural decline and determining how to integrate such monitoring into their daily schedules”) | Mix of appraisals (eg, “We had difficulty logging into the system in the beginning.”; “The system has a learning curve, so training is necessary” but “we can identify fixable usability challenges using scenario based training”) | Positive appraisals (eg, “Oh, I love it. I have a sense of being cared for!”; “The electronic form flows nicely. It is set up just like the paper form, is easy to follow and is one less thing on my desk.”) |

aAcceptability subscores of the quantitative scales were used to compute the overall acceptability of the dashboards.
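As a minimal illustration of the banded scoring rule above (function and constant names are ours, not from any included study; the cutoffs are taken from the overall dashboard scoring rows):

```python
# Threshold bands per instrument: scores below the first cutoff are
# "low", scores above the second are "high", anything between is "medium".
BANDS = {
    "SUS": (50, 70),  # System Usability Scale raw score
    "TAM": (50, 70),  # Technology Acceptance Model, % agreement
}

def classify(instrument: str, score: float) -> str:
    """Map a raw instrument score to a low/medium/high rating."""
    low_cut, high_cut = BANDS[instrument]
    if score < low_cut:
        return "low"
    if score > high_cut:
        return "high"
    return "medium"

print(classify("SUS", 73.2))  # a SUS score of 73.2 falls in the high band
print(classify("TAM", 66))    # 66% agreement falls in the medium band
```

The same pattern extends to the feature-level bands (eg, SUS <25/25-35/>35) by adding entries to the band table.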
Usability was defined as the extent to which the dashboard could be used by the specified users to achieve their goals effectively, efficiently, and with satisfaction. Usability was also rated for overall dashboard use and for specific dashboard features using the previously described methods, focusing on usability items in the tools (eg, System Usability Scale, Questionnaire for User Interaction Satisfaction, and TAM) for assessing low, medium, and high usability (eg, Dowding et al, Lanzarone et al). These items typically focused on the dashboard’s effectiveness (ie, whether stakeholders can achieve their goals) and efficiency (ie, the amount of effort and resources required to achieve their goals). For further information, refer to the scoring system described above.
For qualitative studies, acceptability and usability were synthesized using a thematic analysis, in which main themes regarding the acceptability or usability of the dashboard (including its individual visual features) were first identified independently by JS and LD. Any discrepancies that arose were resolved through discussion with the third member of the review team (AN). Themes were reviewed and amended by the review team and subsequently organized into overarching topics for clarity and conciseness. A similar process was adopted for identifying recommendations to improve acceptability and usability. Where possible, synthesis was performed according to different dashboard user types (eg, resident, caregiver, and health care professional).
A narrative synthesis of quantitative articles was used to specify whether clinical dashboards and their features were considered acceptable and usable. Interreviewer disagreement on data extracted was resolved through discussion among the research team. The review team included academics with backgrounds in psychology (JS), aged care (LD and KS), public health (FS and MR), epidemiology (JW, MR, and KS), digital health (JW, AN, MR, and MB), pharmacy (KS, MR, and NW), human factors (MB), and data science (NW). The results were synthesized as a narrative review.
Study quality was assessed using the Mixed Methods Appraisal Tool (MMAT) by 3 authors (JS, KS, and MR). This tool allows appraisal of the methodological quality of 5 categories of studies: qualitative research, randomized controlled trials, nonrandomized studies, quantitative descriptive studies, and mixed methods studies. Each study category has 5 assessment criteria, which are scored as either “yes—criterion met,” “no—criterion not met,” or “unclear/can’t tell whether criterion met.” Mixed methods studies are assessed against the relevant study categories, as well as the mixed methods studies category.
Two reviewers independently scored each study, and disagreements were discussed with a third reviewer to reach consensus on the rating. An overall quality score was assigned to each study following the method described by the MMAT. The score was the overall percentage of quality criteria met for an individual study. For multimethod studies, the overall quality score was the score of the lowest-scoring component.
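The scoring rule above amounts to a simple computation; a minimal sketch (function names are ours), assuming each component is scored against 5 yes/no criteria:

```python
def mmat_score(criteria_met: list[bool]) -> float:
    """Overall quality score: percentage of MMAT criteria met for a study."""
    return 100 * sum(criteria_met) / len(criteria_met)

def multimethod_score(component_scores: list[float]) -> float:
    """For multimethod studies, the overall score is that of the
    lowest-scoring component."""
    return min(component_scores)

# A study meeting 3 of 5 criteria scores 60%.
print(mmat_score([True, True, True, False, False]))  # 60.0
# A multimethod study with components scoring 80%, 60%, and 100%
# receives the lowest component score, 60%.
print(multimethod_score([80.0, 60.0, 100.0]))        # 60.0
```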
After excluding duplicates, our search strategy identified 2575 potentially relevant articles. After excluding articles that did not meet our inclusion criteria, a total of 14 peer-reviewed articles were included, although 2 articles reported on the same dashboard and were described collectively. Articles were most frequently excluded because they did not report an evaluation of a clinical dashboard.
Study Quality Assessment
The quality of studies ranged from 20% (n=3) to 100% (n=6) on the MMAT checklist (Table S3). Although more than half of the studies (n=8) received scores greater than 60%, over a third of the studies (n=5) had scores less than 40%, indicating a mix of low, moderate, and excellent quality.
Characteristics of Studies
Study characteristics are summarized in the table below. Studies were conducted mostly in the United States (6/12), with 1 study each conducted in Australia, China, Sweden, Italy, Canada, and Europe. The majority of studies adopted a mixed methods design (8/12), followed by a quantitative approach (3/12), and 2 used a qualitative design. Studies were carried out mostly in a home care setting (6/12), which encompasses domiciliary care, community care, or other social care provided within the home in which the older adult is living, or in long-term care (6/12), which refers to residential aged care, nursing homes, or other care facilities that provide permanent accommodation for persons who require consistent and ongoing services to assist with activities of daily living. Studies had varied sample sizes of users (median 32, range 3-292). Most studies described the experiences of health professionals, including nurses (9/12), aged care staff (5/12), and physicians (3/12), and 5 studies included a mix of older adults in home or community care, respite care, and long-term care; staff; and health care professionals.
A summary of the methodological frameworks and theories used to develop or evaluate the dashboards is provided in Table S4. Most dashboards (8/12) used a developmental framework, including feedback intervention theory, and most also used an evaluation framework (7/12), the most common being the TAM and the UK’s Medical Research Council complex intervention evaluation framework.
|Author (year), country||Study designa||Dashboard type||Platform||Software used||Focusb||Study setting||System users|
|HCc||Rd||LTCe||Of||Sample size, n||Age (years), mean (SD)||Sex (female), %|
|Algilani et al (2016), Sweden||MM||Clinical||ICTg application||In-house||Health status||✓||Older adults: 8||77.6 (—h)||60|
|Bail et al  (2022), Australia||Qual||Clinical||ICT application||Humanetix||Administrative, health status||✓||Staff: 65||—||—|
|Bell et al (2020), USA||Quant||Clinical||Web-based||Unclear||Medication and prescribing practices||✓||✓||✓i||Older adults: 112||83j (—)||0|
|Cui et al  (2018), China||Quant||Clinical prototype||Mobile app||Unclear||Administrative, health status||✓||✓k||Nurses: 18||—||100|
|Dowding et al  (2018), USA||Quant||Clinical prototype||Paper-based||In-house||Administrative, health status||✓||Nurses: 195||49 (11)||89.7|
|Dowding et al  (2018), USA||MM||Clinical prototype||Computer||Morae software (Techsmith)||Health status||✓||Nurses: 292||—||—|
|Dowding et al  (2019), USA||MM||Clinical prototype||Web-based||Morae software (Techsmith)||Health status||✓||Nurses: 32||51l (10)||91|
|Kramer et al  (2016), USA||Quant||Clinical simulation||Computer||In-house||Medication and prescribing practices||✓||Physicians: 19||39.8 (6.1)||57.9|
|Lanzarone et al  (2017), Italy||MM||Administrative||DiamondTouch table||Geodan||Administrative||✓||Staff/otherm||—||—|
|Lee and Huebner  (2017), USA||MM||Clinical prototype||Computer||MS Excel||Administrative, health status||✓||Nurse: 14||—||—|
|Mei et al  (2013), USA||MM||Clinical||Computer||MS InfoPath, Sharepoint||Adverse events||✓||Nurse: 4||—||—|
|Papaioannou et al (2010), Canada||MM||Clinical, MEDeINRn||Web-based||In-house||Medication and prescribing practices||✓||Older adults: 128||85.9 (8)||75|
|Shiells et al  (2020), Belgium, Czech Republic, and Spain||Qual||EHRo||Computer, tablet||Unclear||Administrative, health status||✓||Staff (21)k||—||90.5|
|Wild et al (2021), USA||MM||Clinical, ambient||Web-based||ZigBee||Administrative||✓||Older adults: 95||86.4 (7.4)||80|
aStudy design (MM: mixed methods; Quant: quantitative; Qual: qualitative).
bFocus of dashboard (Health status: vital signs, physiological, and functional status, eg, weight, blood pressure; Medication and prescribing practices: medication discrepancies, appropriate prescribing practices; Administrative: includes care pathways and changes to services or care an older adult is receiving; Falls: refers to the incidence of older adult falls).
cHC: home or community care. Refers to in-home care, domiciliary care, community care, and social care provided within the home in which the older adult is living (compared to care provided in group accommodation, clinics, and nursing homes), as well as 3 independent living retirement communities.
dR: respite care. Refers to planned or unplanned short-term care for older adults to provide a temporary break for caregivers.
eLTC: long-term care. Refers to those in residential aged care, nursing homes, or long-term care facilities that provide permanent accommodation for those who require consistent and ongoing services to assist with activities of daily living.
gICT: information and communication technology.
iRefers to short stay/transitional care and palliative care.
jOnly at-risk older adults receiving care (n=21) data were reported.
kRefers to a community hospital.
lAge reported for usability component only.
mIncluding home care planners, experts, and nonexperts of home care providers. Sample size is not provided.
nMEDeINR: an electronic decision support system based on a validated algorithm for warfarin dosing.
oEHR: electronic health record.
Dashboard Purpose and Features
An overview of dashboard type and purpose is shown in the table above. Dashboards were either already established in existing information systems (8/12) or were prototypes (4/12). Most dashboards were accessed through a computer (5/12), specialized hardware (eg, a DiamondTouch table), or a web-based platform (4/12).
The main purpose of dashboards was grouped into 4 categories: (1) health status (8/12), which included monitoring of vital signs and physiological and functional status such as weight and blood pressure; (2) medication and prescribing practices (3/12), which referred to medication discrepancies and appropriate prescribing practices; (3) administrative (7/12), which included exploring and viewing older adult care pathways as well as changes to services or care that the older adult is receiving; and (4) adverse events (1/12), which refers to the specific incidence of falls or other behavior-related events.
Dashboard features are described in the table below and were broadly categorized into information, analytic capability, and other functionalities. Most information depicted on dashboards included health condition prevalence and incidence (9/12) and medication use patterns (6/12), which could be displayed over time (8/12). Other information included geographical location (2/12), hospitalization data (2/12), and linkage to additional resources of complementary information and guidelines (2/12).
Analytic capability referred to the dashboard’s ability to display data in a meaningful way (eg, wound record, medical status, and medication administration and use), either through descriptive analysis (12/12), predictive ability (7/12), or prescriptive capability (7/12) (ie, recommending what action should be completed according to available guidelines), which was supported by visual exploration of the data through charts or other graphical means (6/12).
Other functionalities included interactive forms dedicated to client assessment and service planning (11/12), which included initial assessments, transitions in client care, and client-level monitoring (eg, vital signs), as well as the management and coordination of aged care service operations to suit clients’ needs. The ability for stakeholders to communicate and interact was also described (6/12).
|Author (year)||Visual representation of information||Analytic capability||Features|
|General||Specific||Descriptive||Predictivea||Prescriptiveb||Visual explorationc||Epidemiologic monitoring or surveillance||Client assessment and service planningd||Stakeholder communication and interactione|
|Prevalence/incidencef||Spatialg||Resourcesh||Events over timei||Medication use patterns||Hospitalization|
|Algilani et al  (2016)||✓||✓||✓||✓||✓|
|Bail et al  (2022)||✓||✓||✓||✓||✓|
|Bell et al  (2020)||✓||✓j||✓||✓||✓||✓||✓||✓||✓||✓k|
|Cui et al  (2018)||✓||✓|
|Dowding et al  (2018)||✓||✓||✓||✓|
|Dowding et al  (2018)||✓||✓||✓||✓||✓||✓||✓|
|Dowding et al  (2019)||✓||✓||✓||✓||✓||✓||✓|
|Kramer et al  (2016)||✓||✓||✓||✓|
|Lanzarone et al  (2017)||✓||✓||✓||✓||✓||✓||✓l|
|Lee and Huebner  (2017)||✓||✓||✓||✓||✓||✓||✓||✓||✓|
|Mei et al  (2013)||✓||✓||✓||✓|
|Papaioannou et al  (2010)||✓||✓||✓||✓||✓|
|Shiells et al  (2020)||✓||✓|
|Wild et al  (2021)||✓||✓||✓||✓||✓||✓||✓||✓||✓||✓|
aRefers to dashboard/application capability of predicting what could happen (eg, dashboard triggers alerts on older adults with high risk based on risk assessment modeling of older adult health concerns).
bRefers to dashboard/application capability of recommending what should be done according to guidelines (eg, decision support).
cRefers to any graphical representation of data (eg, charts, graphs, and maps).
dIncludes initial assessment and transitions in older adult care, monitoring (eg, vital signs), and the management and coordination of aged care service operations to suit older adult needs.
eIncludes capability of communicating between users of the dashboard and data sharing.
fRefers to whether the dashboard/tool provided prevalence or incidence data or indicated the potential to compute these data for reporting purposes.
gRefers to visual applications that directly or indirectly provide geographical area or location (eg, of staff and clients).
hRefers to whether the dashboard/application provided links to additional physical resources or complementary information, guidelines, and recommendations outside that of the information within the dashboard/application (eg, through links to external websites/files).
iRefers to whether the dashboard/application had the capability to display changes in events over time.
jPhysical resource was a pharmacist to prescribe or deprescribe based on evidence-based guidelines.
kAdvised the pharmacist of “actionable older adults receiving care” and recommended appropriate prescribing with the provider.
lInvolved reorganization and allocation of staff and dispatch of emergency vehicles.
Overall Acceptability and Usability of Dashboards
A summary of the users’ overall perceived acceptability and usability of the dashboards is presented in the table below. Using the criteria described in the Methods, perceived usability was mixed, with 4 studies reporting low, 5 medium, and 4 high usability. Discrepancies between studies related to whether the dashboard was easy to learn, operate, and navigate, with some stakeholders feeling very confident using the dashboard and others reporting difficulties with dashboard functionalities.
In terms of acceptability, most studies reported medium to high acceptance (10/11), with only 1 study revealing low acceptance. While most respondents were willing to use the dashboard in their workplace (eg, 94.4% agreement), uptake was low (eg, across 3 years, more than half of staff members logged in less than once) and initial enthusiasm declined over time.
There was no distinct pattern of dashboard type (eg, clinical and administrative), platform (eg, ICT application and computer), or focus area (eg, health status, administration, and medication) in relation to reported dashboard usability or acceptability. Older adults tended to report usability as low (3/4 studies), while other user groups (eg, aged care staff) reported dashboard usability as medium to high (8/9). There were no noticeable differences between users on dashboard acceptability.
|Author (year)||Dashboard type||User group (n)||Usabilitya||Key findings||Acceptabilityb||Key findings|
|Algilani et al  (2016)||Clinical||Low||Interviews: Barriers to navigation and access, documentation and monitoring, and subject matter.||High||Interviews: Reported acceptability and management of own care.|
|Bail et al (2022)||Clinical||Medium||Interviews, focus groups, and surveyc: Users reported positively on use of the application across multiple devices, ease of access, scheduling and documentation of information at point of care (formatting and structure of alerts), and the instantaneity of changes to the care plan (rather than waiting hours to weeks). Some users felt that the app interfered with the rhythm of care (eg, repetitive information), noted a lack of training and logins for agency staff (resulting in workarounds and missing data), and suggested offering different styles of alerts and flagging (eg, for different adverse events and health conditions).||High||Interviews, focus groups, and surveyc: Users reported reduced time spent on information retrieval and documentation; reduced errors by omission and missed documentation; improved staff and resident satisfaction; built consistency working with clinical treatment protocols; and assisted management decisions and allocation of resources.|
|Bell et al  (2020)||Clinical||Low||Surveyc: Little preference for using dashboard to receive prescribing notifications over traditional methods; user satisfaction, tool integration, and interface intuitiveness.||Medium||Surveyc: Percentage of time of prescribing recommendations accepted by skilled nursing facilities was adequate (66% uptake).|
|Cui et al (2018)||Clinical prototype||High||Survey: TAMMd results showed that a large proportion of participants found the dashboard easy to learn, use, and navigate (89%) and were satisfied with the component (100%).||High||Survey: TAMM results highlighted the considerable perceived usefulness of the dashboard in improving assessment quality, collecting data, and standardizing information (100% of users).|
|Dowding et al (2018)||Clinical prototype||Medium||Surveyc: A large percentage of users were able to use the dashboard immediately (91%) and use icons to switch between data types (96%). Heuristic evaluation and task analysis: Time taken to complete tasks differed (eg, 5.7 minutes for nurses vs 1.4 minutes for expert users).||High||Survey: High SUSe (73.2) and QUISf (6.1) scores for overall user reactions.|
|Dowding et al (2019)||Clinical prototype||Medium||Surveyc: >50% of participants had difficulty navigating the dashboard and interpreting its data due to interoperability issues.||High||Survey: High SUS (73.2) scores. Interviews: Users valued the ability to see trends for vital signs over time.|
|Kramer et al  (2016)||Clinical simulation||Medium||Survey: High SUS (86.5) scores, however, reported improvements in accuracy (ie, number of medication reconciliation discrepancies using electronic dashboard vs paper) and amount of time to complete cases (ie, efficiency; reported similar completion time for paper-based process vs electronic dashboard) was mixed.||High||Surveyc: Majority preferred the electronic module compared to paper-based processes (89.5% of users).|
|Lanzarone et al (2017)||Administrative||Medium||Surveyc: Low task completion times and increased distance traveled; however, there was minimal change in the nurses allocated to visits (ie, good satisfaction among older adults receiving care) and low numbers of overloaded nurses.||Medium||Surveyc: Mixed reports on the satisfaction of older adults receiving care, the applicability of tool integration, and the visualization of information, with multiple recommendations.|
|Lee and Huebner  (2017)||Clinical prototype||High||Interviews: Users provided positive responses regarding the module’s ability to locate laboratory findings quickly, review information easily, and access decision support.||High||Surveyc: High user ratings of clinical dashboard usefulness and necessity data (100%) particularly for supporting high-quality home health care.|
|Mei et al (2013)||Clinical||High||Survey: High TAMg scores for system usability (eg, time taken to complete tasks and the proportion of participants reporting ease of use; 100%).||High||Surveyc: High user agreement for improving job performance and accomplishing more work following system implementation.|
|Papaioannou et al  (2010)||Clinical, MEDeINR||High||Surveyc: 100% of users found the platform was easy/very easy to use with improvements in therapeutic range and time in sub/supratherapeutic ranges.||Medium to high||Surveyc: 75% of users agreed platform decreased workload and 92% felt communication was better. Interviews: feedback found decreased anxiety around prescribing and emphasized improvements for training.|
|Shiells et al  (2020)||EHRh||Low||Interviews: Users reported the absence of core assessment scales in the records, systems being not interoperable, and frustration with organizational support for system access and training.||Low||Interviews: Users reported a low preference for the device (preferring traditional methods of a desktop computer and paper) and its functionality, perceiving it as more work.|
|Wild et al  (2021)||Clinical, ambient||Low||Surveyc: Low proportion of users who logged into the dashboard (44%). Interviews: users reported technical difficulties and continued unfamiliarity with the system.||Medium||Interviews: Users reported some enthusiasm about interest areas (eg, sleep and medication adherence) and appreciated real-time metrics (eg, sleep duration) being captured.|
aUsability refers to the extent to which the dashboard could be used by the specified users to achieve their goals effectively and efficiently.
bAcceptability refers to the satisfaction with the dashboard and future adoption by specified users.
cSurvey developed in-house by researchers.
dTAMM: Technology Acceptance Model for Mobile.
eSUS: System Usability Scale.
fQUIS: Questionnaire for User Interaction Satisfaction.
gTAM: Technology Acceptance Model.
hEHR: electronic health record.
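Several of the usability ratings above derive from the System Usability Scale (SUS; footnote e). As a point of reference, SUS scoring follows a fixed formula: odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. The sketch below is a generic illustration of that formula, not code from any included study.

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses, item 1 first.

    Odd items contribute (r - 1); even items contribute (5 - r);
    the total is scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees with every positive (odd) item and
# strongly disagrees with every negative (even) item scores the maximum:
print(sus_score([5, 1] * 5))  # -> 100.0
```

Against this scale, the 86.5 reported by Kramer et al sits well above the commonly cited average of 68, consistent with the "high usability" rating.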
An overview of the key dashboard features and their perceived acceptability is provided in. The median number of features used in the dashboards was 6 and ranged from 4 [ ] to 11 [ ]. Displaying an alert (10/13) [ - , , , , , , - ] was the most common, followed by customizable displays (8/12) [ - , , , , ] and the presence of color coding (7/12) [ , , , , , , , ]. One-third of the dashboards used symbols and icons (4/12) [ , , , , ]. Visual graphs such as bar charts (2/12) [ , , , ] and line graphs (3/12) [ , , , , ] were less frequently used in the dashboards. Functional aspects, including radio buttons (4/12) [ , , , , ] and checkboxes (2/12) [ , ], were not used frequently.
The ability to update, alert, and generate reports for primary stakeholders was the most frequently used feature and was reported to be highly acceptable across all dashboard types. In general, features with high acceptability were bar charts, tables, icons, symbols, images, and color coding to organize and display information, as well as the use of radio buttons, the ability to expand and collapse information, and multiple displays to facilitate easy customization of the dashboard for different users. A small number of studies also described positional coding, checkboxes, and a completeness bar, which had high acceptability. One study of 195 nurses used a dashboard with spider and radar graphs, which were reported as too complex.
There was only 1 study, in home care, exploring older adults’ acceptability of line graphs, icons, and displays, all of which were rated as medium. Nurses tended to report communication features (eg, the ability to converse with other users in the system) as low to medium [, ], whereas older adults reported them as high [ ]. Compared to other user groups, older adults’ acceptability of alert features was variable, ranging from low to high.
Problems Identified With Dashboard Acceptability and Usability
Thirteen studies described problems hindering user acceptability and usability of dashboards. The main issues that decreased the overall acceptability and usability of the clinical dashboards included hardware problems, display options, and training. For older adults in home, respite, and long-term care, accessibility of a smart tablet was hindered by the tablet being locked, having an incorrect PIN code, and forgetting to charge the device. Older adults within each care setting also appreciated a larger text display size and found the 3-step question design difficult when inputting information for a dashboard (eg, yes/no and subsequent questions, as they had to recall the previous answer) [ ]. For registered nurses, the existing workload prevented daily log-ins despite instructions [ , ]. Similarly, reliance on agency or outsourced workers meant that many staff did not have log-ins, which prevented use of the dashboard [ ].
Training on how to use and navigate the dashboard was provided for most dashboard users; however, participant feedback on training ranged from low [, ] to high satisfaction across studies [ , , , , ]. In some papers, 3 classroom training sessions were sufficient [ ], and in others, “on-the-job” training was preferred as an alternative to classroom-based learning [ ]. In 1 study, new staff requested more training, with suggestions for a designated nursing staff member to lead the training session, which could be recorded to enable easy dissemination [ ].
Suggested areas for improvement across papers mostly related to reducing user workloads, ensuring the security and privacy of resident data, and strengthening decision support and communication features. Ensuring data remain private, particularly data on medication and prescribing patterns, was an emerging area for improvement, with a focus on making data available only to the relevant user [, ]. Furthermore, inputting reasons for medication use would support nurses’ and clinicians’ decision-making on medication administration, identification of discrepancies, and reconciliation of errors.
Although dashboards could be used to support interactions between different users (eg, staff, providers, and older adults), 1 study showed that users valued traditional methods of communication, particularly in relation to medication practices (eg, receiving pharmacist notifications separately), rather than logging into the dashboard. This was because users reported spending more time searching for appropriate medication-related information on the dashboard compared to routine practice (ie, predashboard) [ , ] and thus preferred alternative mediums (eg, sourced from electronic notes [ ], phone calls [ ], and face-to-face conversations [ ]) to clarify discrepancies. Suggestions for dashboard functionalities to improve communication and reduce workload included (1) easy-to-navigate workflows [ , , ]; (2) visual features that allow for better interpretability and usefulness (ie, simple graphs, customizable alerts, and appropriately positioned icons) [ , , , , ]; and (3) timely responses between users to facilitate efficiency and confidence in medication reconciliation and management [ , , , ].
The aim of this review was to assess current evidence about the acceptability and usability of clinical dashboard features and functionalities in aged care environments. In general, users reported high acceptability but mixed opinions on usability, with dashboards focused on administrative activities having high acceptability. Dashboards that featured updates, alerts, and reports and those with simple visual elements (eg, bar charts, tables, and symbols) were considered highly acceptable, while those with complex features (eg, spider and radar graphs) had low acceptability.
Clinical dashboards are relatively new in aged care settings, despite such applications being widely used within population health and health services. In our review, dashboards were developed to support a wide range of clinical and administrative purposes, with no distinct pattern of usability or acceptability by dashboard type or platform. Rather, our results suggest that the capabilities of the dashboards and how information is displayed to end users are more likely to influence the acceptability and usability of dashboards.
Previous studies reporting on the usefulness of other dashboard visualization features in health care settings may inform future dashboard design in aged care. For instance, clinicians prefer data tables as they perceive numbers as less “biased” than data presented in graphics [- ]. Although not explored in the studies included in this review, visual aids such as league charts, caterpillar plots, or funnel plots can offer substantial benefits, particularly if the purpose of the dashboard includes institutional performance comparisons (eg, comparing several aged care facilities on certain adverse health events). League charts are often desired because of their familiarity and simplicity [ , ]. Caterpillar plots and funnel plots, types of statistical process control techniques, are widely used visual aids for comparing the performance of institutions on a given performance indicator against a benchmark value [ ]. Research shows that health care providers prefer caterpillar and funnel plots once they are taught how to read them [ ]. A dashboard that includes specific values, as well as organizational comparisons on certain performance indicators, may improve service processes and the delivery of quality aged care [ ]. Thus, when designing dashboards, data visualization approaches need to consider the target audience as well as the dashboard purpose.
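To illustrate the funnel-plot construction described above: for a proportion-type indicator (eg, rate of an adverse event), control limits are drawn around the benchmark rate and widen as a facility's denominator shrinks, so small facilities are not flagged for chance variation. The sketch below is a generic statistical-process-control computation, not drawn from any included study; the benchmark rate and facility sizes are hypothetical.

```python
import math

def funnel_limits(benchmark, n, z=1.96):
    """Control limits around a benchmark proportion for a facility of size n.

    Uses the normal approximation: benchmark +/- z * sqrt(p(1-p)/n);
    z=1.96 gives approximate 95% limits.
    """
    se = math.sqrt(benchmark * (1 - benchmark) / n)
    return benchmark - z * se, benchmark + z * se

def flag_outlier(events, n, benchmark, z=1.96):
    """True if a facility's observed rate falls outside its funnel limits."""
    lo, hi = funnel_limits(benchmark, n, z)
    rate = events / n
    return rate < lo or rate > hi

# Hypothetical example: benchmark fall rate of 10%. A small facility (n=30)
# with 5 falls (16.7%) sits inside its wide funnel, while a large facility
# (n=400) with the same observed rate (67/400) falls outside its narrow
# funnel and would be flagged for review.
print(flag_outlier(5, 30, 0.10))    # -> False
print(flag_outlier(67, 400, 0.10))  # -> True
```

This size-adjusted flagging is the property that makes funnel plots attractive for benchmarking aged care facilities of very different sizes.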
The perceived usefulness and acceptability of dashboards and their features may differ between end users. For instance, in this review, there were differences between older adults and other end users in the perceived usefulness of dashboards, with older adults tending to report usability as low, while other users reported it as medium to high. Such variability in the perceived usefulness of dashboards across end users can be minimized through customizable design, that is, engaging and considering the needs of end users (eg, clients, staff members, and family) in the dashboard development process. A user-centered design approach would enable designers to gain an in-depth understanding of end user experiences, expectations, and needs for clinical dashboards, which are critical to addressing usability and acceptability issues and enhancing the likelihood of an impactful and sustainable dashboard [ , ].
Implications and Recommendations for Future Dashboard Development
The findings of this study have important implications to guide future dashboard development. Dashboards often focused on 1 aspect of care (eg, clinical or administrative). While clinical outcomes are an important aspect of aged care quality, there is increasing understanding that a holistic resident or client trajectory should be key to understanding quality. Future dashboards thus need to construct an inclusive picture of resident or client needs to support the care continuum from entry into the system.
Our results found that dashboards typically used in-house collected data, with some using real-time reporting of information [, , , ]. As reporting of quality indicators becomes mandatory in the aged care sectors of many countries, the use of a dashboard could streamline and automate this process. This may relieve aged care staff of the significant time burden of collating and reporting these data [ ]. It could also mean that reported data are more accurate, as automation removes some opportunities for human error and reports in real time.
Given that dashboards present data visually and aim to support users’ decision-making, the use of in-built decision support within a dashboard provides another opportunity for improved quality care. Recommendations in response to information presented in the dashboard could prompt end users to take appropriate actions to improve clinical care [, , , , , ]. This review suggests that certain dashboard features are associated with increased usability and acceptability: for example, reducing user workload through customizability and interoperability of the dashboard, providing visual features to support timely interpretation and response, and including links to complementary information to strengthen confidence in clinical decision-making. Extending such decision support to enhance quality care could include alerts for allergies, special care needs, and medication errors such as duplications and interactions, as well as links to published guidelines to make users aware of appropriate care pathways. Implementing evidence-based decision support to inform better care could be highly beneficial within the aged care sector, where health literacy levels vary greatly [ , ].
There are several limitations to our review. The exclusion of gray literature, the small number of studies fulfilling the inclusion criteria, and the poor quality of the included studies are current drawbacks. Furthermore, most of the studies included in this review did not explore the potential effect of their dashboards on outcomes and care processes (eg, documentation of care processes and better health outcomes). Due to the nature of reporting in each study’s findings and the variation in type and size of end user groups, it was not feasible to determine differences in usability and acceptability between individual groups; thus, our findings are a summary of all respondents. Future research should focus on how the introduction of different types of clinical dashboards could support adherence to quality guidelines and should seek to understand dashboard design and usability in terms of mixed versus specific user groups. Identification of areas where dashboards could be most appropriately introduced to target specific initiatives should also be considered (eg, older adults with dementia and home care) to help improve the quality of care. Further work is needed to explore how users understand and interpret dashboard features, their preferences for information presentation, and how the information is used to support care or service planning, decision support, and user behavior.
Users found dashboards in aged care generally highly acceptable, particularly those with simple visual elements and with update, alert, and reporting functionalities. This review highlights the variability in the usability of dashboards and identified certain design features that are associated with increased usability and acceptability. Four potentially advantageous features and functionalities for future dashboard development within aged care are emphasized: customizability and interoperability to account for different end user preferences; incorporating numerical (tables) and graphical (league and caterpillar charts) presentations of data to facilitate accurate individual assessment and comparison (benchmarking), respectively; integrating changes to client care preferences with real-time clinical outcomes for a holistic representation of the care journey; and building in recommendations and alerts for best practice clinical decision-making to reduce error and support appropriate care pathways. However, further research on the development, testing, and implementation of visualization dashboard solutions to support outcome improvement for older adults is required.
All authors contributed to the conception of the review. JS led the design of the review, with input from all authors. JS, FS, and LD reviewed the retrieved references, with AN acting as fourth reviewer. Papers were retrieved by JS, with assistance from LD and AN. JS reviewed the full-text papers, with LD and AN acting as second reviewers. JS, KS, and MR conducted the risk of bias assessment. Data extraction and quality assessment were undertaken by JS and LD. JS led the writing of the manuscript, with all authors contributing or providing feedback and approving the final version of the manuscript.
Conflicts of Interest
Supplementary material. DOCX File, 24 KB
- Xie CX, Maher C, Machado GC. Digital health dashboards for improving health systems, healthcare delivery and patient outcomes: a systematic review protocol. OSF. 2020. URL: https://osf.io/8me96/ [accessed 2023-05-16]
- Dendere R, Samadbeik M, Janda M. The impact on health outcomes of implementing electronic health records to support the care of older people in residential aged care: a scoping review. Int J Med Inform 2021;151:104471 https://linkinghub.elsevier.com/retrieve/pii/S1386505621000976. [CrossRef] [Medline]
- Stadler JG, Donlon K, Siewert JD, Franken T, Lewis NE. Improving the efficiency and ease of healthcare analysis through use of data visualization dashboards. Big Data 2016;4(2):129-135 [https://www.liebertpub.com/doi/10.1089/big.2015.0059] [CrossRef] [Medline]
- Peyman T, Moritz L. Designing user-adaptive information dashboards: considering limited attention and working memory. 2019 Presented at: 27th European Conference on Information Systems (ECIS); June 2019; Stockholm & Uppsala, Sweden
- Kapadia V, Ariani A, Li J, Ray PK. Emerging ICT implementation issues in aged care. Int J Med Inform 2015;84(11):892-900 [https://www.sciencedirect.com/science/article/abs/pii/S1386505615300198?via%3Dihub] [CrossRef] [Medline]
- Seaman KL, Jorgensen ML, Raban MZ, Lind KE, Bell JS, Westbrook JI. Transforming routinely collected residential aged care provider data into timely information: current and future directions. Australas J Ageing 2021;40(3):e262-e268 [https://onlinelibrary.wiley.com/doi/10.1111/ajag.12985] [CrossRef] [Medline]
- Robertson H, Nicholas N, Dhagat A, Travaglia J. A spatial dashboard for Alzheimer's disease in New South Wales. Stud Health Technol Inform 2017;239:126-132 [https://opus.lib.uts.edu.au/bitstream/10453/128937/1/SHTI239-0126.pdf] [Medline]
- Barnett K, Livingstone A, Margelis G, Tomlins G, Gould G, Capamagian L, et al. Innovation Driving Care Systems Capability: Final Report. Murarrie, Queensland: Aged Care Industry I.T Company; 2020.
- Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6(7):e1000097 [https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1000097] [CrossRef] [Medline]
- Qiu D, Hu M, Yu Y, Tang B, Xiao S. Acceptability of psychosocial interventions for dementia caregivers: a systematic review. BMC Psychiatry 2019;19(1):23 [https://bmcpsychiatry.biomedcentral.com/articles/10.1186/s12888-018-1976-4] [CrossRef] [Medline]
- Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res 2017;17(1):88 [https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-017-2031-8] [CrossRef] [Medline]
- Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum Comput Stud 2008;24(6):574-594 [https://www.tandfonline.com/doi/abs/10.1080/10447310802205776] [CrossRef]
- Hortman PA, Thompson CB. Evaluation of user interface satisfaction of a clinical outcomes database. Comput Inform Nurs 2005;23(6):301-307 [https://journals.lww.com/cinjournal/Abstract/2005/11000/Evaluation_of_User_Interface_Satisfaction_of_a.4.aspx] [CrossRef] [Medline]
- Harper BD, Slaughter LA, Norman KL. Questionnaire administration via the WWW: a validation and reliability study for a user satisfaction questionnaire. 1997 Presented at: WebNet97; November 3, 1997; Toronto, ON URL: http://www.uselab.tu-berlin.de/wiki/images/c/c5/Harper_et_al._(1997).pdf
- Chin JP, Diehl VA, Norman KL. Development of an instrument measuring user satisfaction of the human-computer interface. New York, NY: Association for Computing Machinery; 1988 Presented at: SIGCHI Conference on Human Factors in Computing Systems; May 15-19, 1988; Washington, DC p. 213-218 URL: https://dl.acm.org/doi/abs/10.1145/57167.57203 [CrossRef]
- Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989;13(3):319-340 [https://www.jstor.org/stable/249008] [CrossRef]
- Lee S, Huebner T. A home care practice scenario using clinical dashboards. Home Healthc Now 2017;35(2):83-95 [https://journals.lww.com/homehealthcarenurseonline/Abstract/2017/02000/A_Home_Care_Practice_Scenario_Using_Clinical.4.aspx] [CrossRef] [Medline]
- Wild K, Sharma N, Mattek N, Karlawish J, Riley T, Kaye J. Application of in-home monitoring data to transition decisions in continuing care retirement communities: usability study. J Med Internet Res 2021;23(1):e18806 [https://www.jmir.org/2021/1/e18806] [CrossRef] [Medline]
- Shiells K, Diaz Baquero AA, Štěpánková O, Holmerová I. Staff perspectives on the usability of electronic patient records for planning and delivering dementia care in nursing homes: a multiple case study. BMC Med Inform Decis Mak 2020;20(1):159 [https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-020-01160-8] [CrossRef] [Medline]
- Kramer HS, Gibson B, Livnat Y, Thraen I, Brody AA, Rupper R. Evaluation of an electronic module for reconciling medications in home health plans of care. Appl Clin Inform 2016;7(2):412-424 [https://www.thieme-connect.de/products/ejournals/abstract/10.4338/ACI-2015-11-RA-0154] [CrossRef] [Medline]
- van Hoof J, Kort HSM, Rutten PGS, Duijnstee MSH. Ageing-in-place with the use of ambient intelligence technology: perspectives of older users. Int J Med Inform 2011;80(5):310-331 [https://www.sciencedirect.com/science/article/pii/S1386505611000566?via%3Dihub] [CrossRef] [Medline]
- Algilani S, Langius-Eklöf A, Kihlgren A, Blomberg K. An interactive ICT platform for early assessment and management of patient-reported concerns among older adults living in ordinary housing - development and feasibility. J Clin Nurs 2017;26(11-12):1575-1583 [https://onlinelibrary.wiley.com/doi/10.1111/jocn.13468] [CrossRef] [Medline]
- Mei YY, Marquard J, Jacelon C, DeFeo AL. Designing and evaluating an electronic patient falls reporting system: perspectives for the implementation of health information technology in long-term residential care facilities. Int J Med Inform 2013;82(11):e294-e306 [https://www.sciencedirect.com/science/article/abs/pii/S1386505611000748?via%3Dihub] [CrossRef] [Medline]
- Ottaviani AC, Monteiro DQ, Oliveira D, Gratão ACM, Jacinto AF, Campos CRF, et al. Usability and acceptability of internet-based interventions for family carers of people living with dementia: systematic review. Aging Ment Health 2021;26(10):1922-1932 [https://www.tandfonline.com/doi/abs/10.1080/13607863.2021.1975095?journalCode=camh20] [CrossRef] [Medline]
- Technical Committee ISO/TC 159 and S.S. Ergonomics. ISO 9241-110:2020: ergonomics of human-system interaction — part 110: interaction principles. International Organization for Standardization. 2020. URL: https://www.iso.org/standard/75258.html [accessed 2023-05-16]
- Dowding D, Merrill JA, Barrón Y, Onorato N, Jonas K, Russell D. Usability evaluation of a dashboard for home care nurses. Comput Inform Nurs 2019;37(1):11-19 [https://journals.lww.com/cinjournal/Abstract/2019/01000/Usability_Evaluation_of_a_Dashboard_for_Home_Care.3.aspx] [CrossRef] [Medline]
- Lanzarone E, Masclet C, Noël F. A multi-user tool for enhancing the daily replanning and control of visits in home care services. Product Plan Control 2017;28(3):202-219 [https://www.tandfonline.com/doi/abs/10.1080/09537287.2016.1248869] [CrossRef]
- Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3(2):77-101 [https://www.tandfonline.com/doi/abs/10.1191/1478088706QP063OA] [CrossRef]
- Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The mixed methods appraisal tool (MMAT) version 2018 for information professionals and researchers. Educ Inf 2018;34(4):285-291 [https://content.iospress.com/articles/education-for-information/efi180221] [CrossRef]
- Dowding D, Merrill J, Russell D. Using feedback intervention theory to guide clinical dashboard design. AMIA Annu Symp Proc 2018;2018:395-403 [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6371234/] [Medline]
- Dowding D, Merrill JA, Onorato N, Barrón Y, Rosati RJ, Russell D. The impact of home care nurses' numeracy and graph literacy on comprehension of visual display information: implications for dashboard design. J Am Med Inform Assoc 2018;25(2):175-182 [https://academic.oup.com/jamia/article-abstract/25/2/175/3778216?redirectedFrom=fulltext&login=false] [CrossRef] [Medline]
- Bell K, Hartmann C, Baughman AW. A pharmacist-led pilot using a performance dashboard to improve psychotropic medication use in a skilled nursing facility. BMJ Open Qual 2020;9(3):e000997 [https://bmjopenquality.bmj.com/content/9/3/e000997] [CrossRef] [Medline]
- Bail K, Gibson D, Hind A, Strickland K, Paterson C, Merrick E, et al. 'It enables the carers to see the person first': qualitative evaluation of point-of-care digital management system in residential aged care. J Clin Nurs 2023;32(1-2):174-190 [https://onlinelibrary.wiley.com/doi/10.1111/jocn.16285] [CrossRef] [Medline]
- Cui Y, Gong D, Yang B, Chen H, Tu MH, Zhang C, et al. Making the CARE comprehensive geriatric assessment as the core of a total mobile long term care support system in China. Stud Health Technol Inform 2018;247:770-774 [https://ebooks.iospress.nl/doi/10.3233/978-1-61499-852-5-770] [Medline]
- Papaioannou A, Kennedy CC, Campbell G, Stroud JB, Wang L, Dolovich L, Improving Prescribing in Long Term Care Investigators. A team-based approach to warfarin management in long term care: a feasibility study of the MEDeINR electronic decision support system. BMC Geriatr 2010;10:38 [https://bmcgeriatr.biomedcentral.com/articles/10.1186/1471-2318-10-38] [CrossRef] [Medline]
- Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, Medical Research Council Guidance. Developing and evaluating complex interventions: the new medical research council guidance. BMJ 2008;337:a1655 [https://www.bmj.com/content/337/bmj.a1655] [CrossRef] [Medline]
- Glasson J, Chang E, Chenoweth L, Hancock K, Hall T, Hill-Murray F, et al. Evaluation of a model of nursing care for older patients using participatory action research in an acute medical ward. J Clin Nurs 2006;15(5):588-598 [https://onlinelibrary.wiley.com/doi/10.1111/j.1365-2702.2006.01371.x] [CrossRef] [Medline]
- Kaasinen E, Mattila E, Lammi H, Kivinen T, Välkkynen P. Technology acceptance model for mobile services as a design framework. In: Lumsden J, editor. Human-computer Interaction and Innovation in Handheld, Mobile, and Wearable Technologies. Hershey, PA: IGI Global; 2011:80-107
- Zhang J, Walji MF. TURF: toward a unified framework of EHR usability. J Biomed Inform 2011;44(6):1056-1067 [https://www.sciencedirect.com/science/article/pii/S1532046411001328?via%3Dihub] [CrossRef] [Medline]
- Nielsen J. Interactive technologies. In: Usability Engineering. San Francisco, CA: Elsevier Science & Technology; 1994.
- Holtzblatt K, Wendell JB, Wood S. Rapid contextual design: a how-to guide to key techniques for user-centered design. In: The Morgan Kaufmann Series in Interactive Technologies. San Francisco, CA: Elsevier Science & Technology; 2004.
- Forsberg K, Mooz H. The relationship of systems engineering to the project cycle. Eng Manag J 2015;4(3):36-43 [https://www.tandfonline.com/doi/abs/10.1080/10429247.1992.11414684] [CrossRef]
- Zahabi M, Kaber DB, Swangnetr M. Usability and safety in electronic medical records interface design: a review of recent literature and guideline formulation. Hum Factors 2015;57(5):805-834 [https://journals.sagepub.com/doi/10.1177/0018720815576827] [CrossRef] [Medline]
- Straus SE, Tetroe J, Graham I. Defining knowledge translation. Can Med Assoc J 2009 Aug 04;181(3-4):165-168 [http://www.cmaj.ca/cgi/pmidlookup?view=long&pmid=19620273] [CrossRef] [Medline]
- Graham ID, Logan J, Harrison M, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006;26(1):13-24 [https://journals.lww.com/jcehp/Abstract/2006/26010/Lost_in_knowledge_translation__Time_for_a_map_.3.aspx] [CrossRef] [Medline]
- Sockolow PS, Bowles KH, Rogers M. Health information technology evaluation framework (HITREF) comprehensiveness as assessed in electronic point-of-care documentation systems evaluations. Stud Health Technol Inform 2015;216:406-409 [Medline]
- Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull 1996;119(2):254-284 [https://psycnet.apa.org/buy/1996-02773-003] [CrossRef]
- Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of medical research council guidance. BMJ 2021;374:n2061 [https://www.bmj.com/content/374/bmj.n2061] [CrossRef] [Medline]
- Khairat SS, Dukkipati A, Lauria HA, Bice T, Travers D, Carson SS. The impact of visualization dashboards on quality of care and clinician satisfaction: integrative literature review. JMIR Hum Factors 2018;5(2):e22 [https://humanfactors.jmir.org/2018/2/e22/] [CrossRef] [Medline]
- Anell A, Hagberg O, Liedberg F, Ryden S. A randomized comparison between league tables and funnel plots to inform health care decision-making. Int J Qual Health Care 2016;28(6):816-823 [https://academic.oup.com/intqhc/article/28/6/816/2607817] [CrossRef] [Medline]
- Allwood D, Hildon Z, Black N. Clinicians' views of formats of performance comparisons. J Eval Clin Pract 2013;19(1):86-93 [https://onlinelibrary.wiley.com/doi/10.1111/j.1365-2753.2011.01777.x] [CrossRef] [Medline]
- Noël G, Joy J, Dyck C. Improving the quality of healthcare data through information design. Inf Design J 2022 Jul 5;23(1):104-122 [https://www.jbe-platform.com/content/journals/10.1075/idj.23.1.11noe] [CrossRef]
- Petit-Monéger A, Saillour-Glénisson F, Nouette-Gaulain K, Jouhet V, Salmi LR. Comparing graphical formats for feedback of clinical practice data. a multicenter study among anesthesiologists in France. Methods Inf Med 2017;56(1):28-36 [https://www.thieme-connect.de/products/ejournals/abstract/10.3414/ME15-01-0163] [CrossRef] [Medline]
- Ben-Tovim D, Woodman R, Harrison JE, Pointer S, Hakendorf P, Henley G. Measuring and Reporting Mortality in Hospital Patients. Canberra: Australian Institute of Health and Welfare; 2009.
- Dugstad Wake J, Rabbi F, Inal Y, Nordgreen T. User-centred design of clinical dashboards for guided iCBT. Innovations Syst Softw Eng 2022:1-17 [https://link.springer.com/article/10.1007/s11334-022-00454-6] [CrossRef]
- Brunner J, Chuang E, Goldzweig C, Cain CL, Sugar C, Yano EM. User-centered design to improve clinical decision support in primary care. Int J Med Inform 2017;104:56-64 [https://www.sciencedirect.com/science/article/abs/pii/S1386505617301119?via%3Dihub] [CrossRef] [Medline]
- De Vito Dabbs A, Myers BA, Mc Curry KR, Dunbar-Jacob J, Hawkins RP, Begey A, et al. User-centered design and interactive health technologies for patients. Comput Inform Nurs 2009;27(3):175-183 [https://journals.lww.com/cinjournal/Abstract/2009/05000/User_Centered_Design_and_Interactive_Health.11.aspx] [CrossRef] [Medline]
- Royal commission into aged care quality and safety. In: Royal Commission Into Aged Care Quality and Safety Final Report. Commonwealth of Australia: Australian Government - Attorney General's Department; 2021.
- Ludlow K, Westbrook J, Jorgensen M, Lind KE, Baysari MT, Gray LC, et al. Co-designing a dashboard of predictive analytics and decision support to drive care quality and client outcomes in aged care: a mixed-method study protocol. BMJ Open 2021;11(8):e048657 [https://bmjopen.bmj.com/content/11/8/e048657] [CrossRef] [Medline]
- Stacey D, Légaré F, Lewis K, Barry MJ, Bennett CL, Eden KB, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2017;4(4):CD001431 [https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD001431.pub5/full] [CrossRef] [Medline]
- Palesy D, Jakimowicz S. Health literacy training for Australian home care workers: enablers and barriers. Home Health Care Serv Q 2019;38(2):80-95 [https://www.tandfonline.com/doi/abs/10.1080/01621424.2019.1604458?journalCode=whhc20] [CrossRef] [Medline]
- Health Literacy Taking Action to Improve Safety and Quality. Sydney, New South Wales, Australia: Australian Commission on Safety and Quality in Health Care; 2014.
|ICT: information and communication technology|
|MMAT: Mixed Methods Appraisal Tool|
|PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses|
|TAM: Technology Acceptance Model|
Edited by J Wang; submitted 29.08.22; peer-reviewed by J Ray, P Hirway, M Topaz; comments to author 26.02.23; revised version received 12.03.23; accepted 12.05.23; published 19.06.23
Copyright
©Joyce Siette, Laura Dodds, Fariba Sharifi, Amy Nguyen, Melissa Baysari, Karla Seaman, Magdalena Raban, Nasir Wabe, Johanna Westbrook. Originally published in JMIR Aging (https://aging.jmir.org), 19.06.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Aging, is properly cited. The complete bibliographic information, a link to the original publication on https://aging.jmir.org, as well as this copyright and license information must be included.