Published in Vol 5, No 4 (2022): Oct-Dec

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/38464.
Evolving Hybrid Partial Genetic Algorithm Classification Model for Cost-effective Frailty Screening: Investigative Study


Original Paper

1Torrens University, Ultimo, Australia

2College of Engineering, Information Technology and Environment, Charles Darwin University, Haymarket, NSW, Australia

3Torrens University, Adelaide, Australia

4Lutheran Services, Milton, Australia

*these authors contributed equally

Corresponding Author:

Niusha Shafiabady, BEng (Hon), MSc, PhD

College of Engineering, Information Technology and Environment

Charles Darwin University

815 George Street

Haymarket, NSW, 2000

Australia

Phone: 61 80474147

Email: niusha.shafiabady@cdu.edu.au


Background: A commonly used method for measuring frailty is the accumulation of deficits expressed as a frailty index (FI). FIs can be readily adapted to many databases, as the parameters to use are not prescribed but rather reflect a subset of extracted features (variables). Unfortunately, the structure of many databases does not permit the direct extraction of a suitable subset, requiring additional effort to determine and verify the value of features for each record and thus significantly increasing cost.

Objective: Our objective is to describe how an artificial intelligence (AI) optimization technique called partial genetic algorithms can be used to refine the subset of features used to calculate an FI and favor features that have the least cost of acquisition.

Methods: This is a secondary analysis of a residential care database compiled from 10 facilities in Queensland, Australia. The database comprises routinely collected administrative data and unstructured patient notes for 592 residents aged 75 years and over. The primary study derived an electronic frailty index (eFI) calculated from 36 suitable features. We then structurally modified a genetic algorithm to find an optimal predictor of the calculated eFI (0.21 threshold) from 2 sets of features. Partial genetic algorithms were used to optimize 4 underlying classification models: logistic regression, decision trees, random forest, and support vector machines.

Results: Among the underlying models, logistic regression was found to produce the best models in almost all scenarios and feature set sizes. The best models were built using all the low-cost features and as few as 10 high-cost features, and they performed well enough (sensitivity 89%, specificity 87%) to be considered candidates for a low-cost frailty screening test.

Conclusions: In this study, a systematic approach for selecting an optimal set of features with a low cost of acquisition and performance comparable to the eFI for detecting frailty was demonstrated on an aged care database. Partial genetic algorithms have proven useful in offering a trade-off between cost and accuracy to systematically identify frailty.

JMIR Aging 2022;5(4):e38464

doi:10.2196/38464




Genetic algorithms (GAs) are a general-purpose computational optimization method inspired by the mechanism of evolution in nature. They are among the most popular metaheuristic search algorithms and have been used for a variety of applications, including synthetic data generation, feature selection, and solving complex equations [1]. In this study, genetic algorithms were applied to identify features that offer a suitable trade-off between cost and accuracy.

Within the context of global population aging, the number of older people who will live a significant proportion of their lives with frailty is growing rapidly [2]. Frailty is problematic for older people and the societies in which they live due to the elevated risks associated with the syndrome, including poor health outcomes [3] and additional use of health and aged care services [4-7], leading to inflated health care costs [8-10]. However, emerging research suggests that frailty is a highly dynamic [11,12] and potentially modifiable state with appropriate intervention [13,14]. Screening for early detection is proposed to increase the likelihood that the worst impacts of frailty can be lessened [4,15,16].

There are 2 main approaches to identifying frailty: the frailty phenotype (FP) and the frailty index (FI) [17]. However, these established approaches have known drawbacks, requiring significant time investment, face-to-face interaction, and specific data items to be collected [18]. Recently, an electronic frailty index (eFI) was proposed [19] that has the potential to achieve greater efficiencies over face-to-face models when applied to administrative data sets, but the need to ensure a minimum set of items adhering to prespecified criteria remains a barrier to implementation. For example, previous research has shown that although it is possible to calculate and construct an eFI based on an aged care administrative data set, a significant proportion of the items require manual calculation to ensure accuracy and improve quality [20]. Clearly, it would be preferable to identify automated techniques capable of delivering comparable accuracy and quality but with greater efficiency. Consequently, this study aimed to apply a sophisticated genetic algorithm technique to identify an optimal predictor of the calculated eFI.


Study Design, Participants, and Setting

This retrospective study utilized a data set previously compiled [21] from the administrative database of 10 residential aged care facilities located in Queensland, Australia. Participants were included in the study if they were aged 75 years or older and had completed an Aged Care Funding Instrument (ACFI) assessment within the previous 3 years.

Ethical Considerations

A waiver of consent for the initial study was obtained from the Human Research Ethics Committee of Torrens University Australia (application H11/19), which declared the study exempt under National Statement 5.1.22 (secondary use of deidentified administrative data) due to the pragmatic nature of the study. Because this is a secondary study of the same data, the approval extends to this study. Moreover, this study adheres to the Australian National Statement on Ethical Conduct in Human Research.

Frailty Outcome Measure

An eFI was previously calculated for these data [21] based on a formulation originally specified by Clegg et al [22]. Care was taken to ensure the included deficits adhered to the criteria recommended by Searle and colleagues [23], which resulted in 32 of the 35 deficits being extracted from unstructured patient notes and only 3 being derived from the ACFI data. The binary frailty classification was derived using a threshold of 0.21 (ie, frailty defined as an FI >0.21) [24].
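The deficit-accumulation calculation itself is simple: the FI is the proportion of measured deficits that are present, and the binary label applies the 0.21 cutoff. A minimal sketch (the function and deficit names are illustrative, not the study's code):

```python
def frailty_index(deficits):
    """Deficit-accumulation frailty index: the share of measured deficits present.

    `deficits` maps a deficit name to 0 (absent) or 1 (present); items that were
    not measured for a resident are simply left out of the denominator.
    """
    if not deficits:
        raise ValueError("no deficits measured")
    return sum(deficits.values()) / len(deficits)

def is_frail(deficits, threshold=0.21):
    """Binary classification used in this study: frail if FI > 0.21."""
    return frailty_index(deficits) > threshold

# Illustrative resident: 8 of 35 deficits present -> FI = 8/35 ~= 0.23 -> frail
resident = {f"deficit_{i}": int(i < 8) for i in range(35)}
```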

Screening Test Construction

Genetic algorithms are an optimization technique [1] applied in machine learning to filter the set of features used to construct a classification model. During training, a classification algorithm is tuned on a training set, and its ability to generalize is then verified by measuring the classification errors on a test set.

Genetic algorithms leverage the observation that classification models often perform better when they are trained on a subset of the available features. Which subset of features to use, however, is not obvious. Genetic algorithms start with a population of randomly generated subsets of features, or chromosomes, that are all independently used to generate classification models. The chromosomes from the population that generated the best performing models are allowed to combine, or breed, to form a new generation of the population, while the worst performing ones are removed completely. The process continues until either a predefined number of generations have been trained or the performance of the models has plateaued. Once training is complete, the best-performing model is deployed using only the naturally selected subset of the available features.
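The selection-breeding loop described above can be sketched in a few lines. This is a toy illustration, not the study's implementation: the fitness function is a stand-in for cross-validated model accuracy, rewarding chromosomes that keep a known "informative" subset of features and penalizing extra noise features.

```python
import random

random.seed(0)

N_FEATURES = 12
INFORMATIVE = {1, 4, 7}  # toy ground truth: only these features help the model

def fitness(chromosome):
    """Stand-in for cross-validated model accuracy."""
    chosen = {i for i, bit in enumerate(chromosome) if bit}
    hits = len(chosen & INFORMATIVE)
    noise = len(chosen - INFORMATIVE)
    return hits - 0.1 * noise

def crossover(a, b):
    # Single-point crossover between two parent chromosomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chromosome, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in chromosome]

def evolve(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # best half breeds...
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                 # ...worst half is replaced
    return max(pop, key=fitness)

best = evolve()
```

Once the loop plateaus, the surviving bitmask is the "naturally selected" feature subset used to build the deployed model.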

While genetic algorithms are good at selecting an optimal subset of features, they select the features based on maximizing the classification accuracy of a generated model. The cost of acquiring the various features is not factored into the choice of features, even if the performance of less expensive features is close to that of their more expensive counterparts. In this study, the cost of a feature is the combination of the effort, monetary cost, and patient risk involved in capturing the values. We want to minimize the number of expensive features chosen to form the model but allow as many low-cost features to be used as is necessary to gain acceptable performance of the model.

To achieve the inclusion of low-cost features in the classification model, the standard genetic algorithm training configuration illustrated in Figure 1 is modified as illustrated in Figure 2.

Figure 1. Genetic algorithm configuration for training a single member of the population.
Figure 2. Partial genetic algorithm configuration for training a single member of the population.

This modification is applied every time a model is trained for a member of the population trialed by the genetic algorithm. When the genetic algorithm trains a model, it passes a subset of the available training records to the classification model's training algorithm. The low-cost feature values for each record are added to the selected training records before training commences. The genetic algorithm trains the classification model for each chromosome multiple times with different subsets of the training records, determining the performance of each model using records not used to train that instance. As with the training records, the low-cost features are added to the records used to evaluate a model's performance. The performance of the chromosome is calculated as the average performance of all the models built from the different subsets of the training records. This process is called n-fold cross-validation, where n is the number of models built. In this study, 3-fold cross-validation was used because it offered a good balance between performance and the time taken to build the models.
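To make the modification concrete, the following toy sketch (not the study's code; the data generator and the threshold "classifier" are stand-ins) shows the two ingredients: every low-cost feature is appended to every record before both training and testing, and a chromosome's fitness is the mean accuracy over 3-fold cross-validation.

```python
import random
from statistics import mean

random.seed(1)

# Toy data: each record is (low_cost_features, high_cost_features, label).
# In the partial GA, the chromosome spans only the high-cost features;
# the low-cost features are always included.
def make_record():
    low = [random.random() for _ in range(4)]    # always included
    high = [random.random() for _ in range(8)]   # GA selects a subset
    label = int(high[2] + low[0] > 1.0)          # toy generative rule
    return low, high, label

RECORDS = [make_record() for _ in range(90)]

def train_and_score(train, test, high_idx):
    """Stand-in classifier: threshold on the sum of the selected features."""
    def feats(rec):
        low, high, _ = rec
        return low + [high[i] for i in high_idx]  # low-cost always appended
    cut = mean(sum(feats(r)) for r in train)      # "training" picks a threshold
    correct = sum((sum(feats(r)) > cut) == bool(r[2]) for r in test)
    return correct / len(test)

def chromosome_fitness(high_idx, n_folds=3):
    """Average accuracy over n-fold cross-validation, as in the study."""
    folds = [RECORDS[i::n_folds] for i in range(n_folds)]
    scores = []
    for k in range(n_folds):
        test = folds[k]
        train = [r for j, fold in enumerate(folds) if j != k for r in fold]
        scores.append(train_and_score(train, test, high_idx))
    return mean(scores)

score = chromosome_fitness([2, 5])  # a chromosome selecting 2 high-cost features
```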

Four types of classification models were optimized using partial genetic algorithms: logistic regression, support vector machines, random forest, and decision trees. These algorithms are popular choices for classification because they have proven successful in generating generalized models for a wide range of applications [20]. Logistic regression is a statistical modeling technique in which a linear combination of the input features, found during training, models the logarithm of the odds that a binary outcome is in the true state. A support vector machine (SVM) aims to learn a hyperplane in a multidimensional feature space that separates the classes of the training records. Predictions are made by placing a candidate record in the same space and determining which side of the hyperplane it falls on. SVMs were developed in the 1990s and have since enjoyed success in many real-world applications, including pattern recognition [25], text classification [26], and bioinformatics. Decision trees employ a divide-and-conquer strategy. A tree is formed of nodes, and each node compares a single input feature against a threshold (if the feature is continuous) or a state (if it is discrete). The outcome of the comparison determines the choice of the next node, which either performs a new comparison or terminates the tree with a given classification. During training, the set of training records is used to find the comparison at each node that gains the most information, that is, the one that reduces the entropy of the outcomes by the greatest amount. Predictions are then made by feeding records into the root node and reading off the classification of the terminating node where the record exits the tree. Random forest is an ensemble of decision trees, where the output is determined by a vote among many trees; the trees are built on different random subsets of the records and features to ensure they are not replicas of each other.
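The entropy-reduction criterion used to grow decision trees can be written down directly. A small self-contained sketch (the feature name and records are illustrative only):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a binary label set, in bits."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def information_gain(records, labels, feature, threshold):
    """Entropy reduction from splitting records on feature <= threshold."""
    left = [lab for rec, lab in zip(records, labels) if rec[feature] <= threshold]
    right = [lab for rec, lab in zip(records, labels) if rec[feature] > threshold]
    n = len(labels)
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - child

# A split that separates the classes perfectly recovers the full parent entropy:
records = [{"fi": 0.1}, {"fi": 0.15}, {"fi": 0.3}, {"fi": 0.4}]
labels = [0, 0, 1, 1]
gain = information_gain(records, labels, "fi", 0.21)
```

At each node, training simply evaluates candidate (feature, threshold) pairs and keeps the one with the largest gain.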

The software was written in Python and the models were built using the sklearn module (version 1.0.2) and the genetic_selection module from sklearn-genetic (version 0.5.1).


Model Generation

Of the 69 features considered, 34 were extracted directly from the ACFI assessment and 35 were the values used to calculate the eFI. Two of the ACFI features, the Psychogeriatric Assessment Scales (PAS) score and the Cornell Scale, were excluded because they had a high percentage of missing values (PAS score 36%, Cornell Scale 42%). The remaining 32 ACFI assessment features had no missing values and were categorized as low-cost features. Of the 35 features used to calculate the eFI, 32 were extracted by an automated search for keywords in the unstructured patient notes, followed by manual inspection and verification by a clinician. These were categorized as high-cost features. The remaining 3 features used to calculate the eFI were direct combinations of ACFI features; as their calculation could be fully automated, they were included with the low-cost features. A total of 4 sets of low-cost features were considered: (1) the ACFI features plus the low-cost eFI features; (2) the low-cost eFI features; (3) no low-cost features; and (4) a set chosen from the low-cost features using genetic algorithms, with a different set found for each classification algorithm.

Sixteen scenarios were trialed, comprising each of the 4 sets of low-cost features crossed with each of the 4 classification algorithms. For each scenario, the partial genetic algorithm was used to optimize the classification algorithm with different limits placed on the number of high-cost features. The limits were varied sequentially from 1 to 32, the number of candidate high-cost features. The performance of each of the 32 models generated for each scenario was plotted on a single graph; the graphs for the scenarios are shown in Figures 3-6.

Comparing the graphs for each classification model, logistic regression outperformed decision trees in every scenario and SVM and random forest in almost all scenarios. Tables 1-3 present a numeric comparison of the 16 scenarios when 5, 10, and 15 of the high-cost features were used.
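For reference, the headline metrics in Tables 1-5 are derived from a confusion matrix in the usual way (PPA and NPA are agreement measures against the eFI reference and are not reproduced here). The counts below are hypothetical, chosen only so that sensitivity and specificity reproduce the best 10-feature result (89.3% and 86.7%):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                    # true positive rate
    specificity = tn / (tn + fp)                    # true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

# Hypothetical counts: 75 frail and 60 non-frail residents in the test split
metrics = classification_metrics(tp=67, fp=8, tn=52, fn=8)
```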

The option of “No low-cost” features was provided to determine how much predictive value the low-cost features were adding to the classification. As expected, this option performed the worst for all the classification algorithms, confirming that the low-cost features were adding value. Next, models were built using only the 3 low-cost eFI features as fixed features. This improved the accuracy of the logistic regression algorithm to 97% when almost all the eFI features were included (Table 4). Although this is a good outcome, a model built using so many of the high-cost features was not the goal of this study.

A genetic algorithm works by selecting an optimal subset of all the features made available to it. This characteristic was the motivation behind building a version of the models in 2 stages. In the first stage, a standard, nonpartial, genetic algorithm was used on the low-cost features to find an optimal combination. These models performed so poorly (Table 5) that they could not be used without further improvement. The combination of features used to generate these models (Multimedia Appendices 1-3) was then employed as the fixed features in the partial genetic algorithm during the second stage. The models in the second stage performed surprisingly poorly, showing no difference from the models built without any low-cost features, regardless of the classification model used.

Using all the low-cost features in a partial genetic algorithm yielded the best overall results and matched the 97% accuracy achieved by the models that used the low-cost eFI features when the model was able to select most of the high-cost eFI features. At 10 features, however, the extra low-cost features allowed the algorithm to increase its sensitivity from 82.7% to 89.3% and specificity from 81.7% to 86.7%.

Figure 3. Logistic regression optimized with a partial genetic algorithm. ACFI: Aged Care Funding Instrument; EFI: electronic frailty index; GA: Genetic algorithm; LR: logistic regression; npa: negative percent agreement; ppa: positive percent agreement.
Figure 4. Support vector machine optimized with a partial genetic algorithm. ACFI: Aged Care Funding Instrument; EFI: electronic frailty index; GA: Genetic algorithm; npa: negative percent agreement; ppa: positive percent agreement; SVM: support vector machine.
Figure 5. Decision tree optimized with a partial genetic algorithm. ACFI: Aged Care Funding Instrument; DT: decision tree; EFI: electronic frailty index; GA: Genetic algorithm; npa: negative percent agreement; ppa: positive percent agreement.
Figure 6. Random forest optimized with a partial genetic algorithm. ACFI: Aged Care Funding Instrument; EFI: electronic frailty index; GA: Genetic algorithm; npa: negative percent agreement; ppa: positive percent agreement; RF: random forest.
Table 1. Performance of the 16 scenarios with 5 high-cost features.

| Features | Sensitivity | Specificity | PPA | NPA | Accuracy | F1 |
|---|---|---|---|---|---|---|
| ACFI + low-cost eFI | | | | | | |
| Logistic regression | 76 | 75 | 71.4 | 79.2 | 75.6 | 73.2 |
| Support vector machine | 76 | 61.7 | 67.3 | 71.3 | 69.6 | 64.3 |
| Decision tree | 73.3 | 63.3 | 65.5 | 71.4 | 68.8 | 64.4 |
| Random forest | 80 | 71.7 | 74.1 | 77.9 | 76.3 | 72.9 |
| Low-cost eFI | | | | | | |
| Logistic regression | 80 | 81.7 | 76.6 | 84.5 | 80.7 | 79 |
| Support vector machine | 74.7 | 66.7 | 67.8 | 73.7 | 71.1 | 67.2 |
| Decision tree | 76 | 53.3 | 64.0 | 67.1 | 65.9 | 58.2 |
| Random forest | 72 | 70 | 66.7 | 75 | 71.1 | 68.3 |
| No low-cost features | | | | | | |
| Logistic regression | 78.7 | 66.7 | 71.4 | 74.7 | 73.3 | 69 |
| Support vector machine | 82.6 | 55 | 71.2 | 69.7 | 70.4 | 62.2 |
| Decision tree | 84 | 55 | 73.3 | 70 | 71.1 | 62.9 |
| Random forest | 76 | 65 | 68.4 | 73.1 | 71.1 | 66.7 |
| Genetically selected low-cost features | | | | | | |
| Logistic regression | 80 | 66.7 | 72.7 | 75 | 74.1 | 69.6 |
| Support vector machine | 82.7 | 58.3 | 72.9 | 71.3 | 71.9 | 64.8 |
| Decision tree | 76 | 68.3 | 69.5 | 75 | 77.2 | 68.9 |
| Random forest | 81.3 | 75 | 76.3 | 80.3 | 78.5 | 75.6 |

PPA: positive percent agreement. NPA: negative percent agreement. F1: F-score. ACFI: Aged Care Funding Instrument. eFI: electronic frailty index.

Table 2. Performance of the 16 scenarios with 10 high-cost features.

| Features | Sensitivity | Specificity | PPA | NPA | Accuracy | F1 |
|---|---|---|---|---|---|---|
| ACFI + low-cost eFI | | | | | | |
| Logistic regression | 89.3 | 86.7 | 86.7 | 89.3 | 88.1 | 86.7 |
| Support vector machine | 85.3 | 80.0 | 81.4 | 84.2 | 83 | 80.7 |
| Decision tree | 65.3 | 61.7 | 58.7 | 68.1 | 63.7 | 60.2 |
| Random forest | 80 | 70 | 73.9 | 76.9 | 75.6 | 71.8 |
| Low-cost eFI | | | | | | |
| Logistic regression | 82.7 | 81.7 | 79 | 84.9 | 82.2 | 80.3 |
| Support vector machine | 81.3 | 81.7 | 77.8 | 84.7 | 81.5 | 79.7 |
| Decision tree | 81.3 | 50 | 68.2 | 67 | 67.4 | 57.7 |
| Random forest | 84 | 63.3 | 76 | 74.1 | 74.8 | 69.1 |
| No low-cost features | | | | | | |
| Logistic regression | 72 | 86.7 | 71.2 | 87.1 | 78.5 | 78.2 |
| Support vector machine | 77.3 | 78.3 | 73.4 | 81.7 | 77.8 | 75.8 |
| Decision tree | 81.3 | 70 | 75 | 77.2 | 76.3 | 72.4 |
| Random forest | 80 | 66.7 | 72.7 | 75 | 74.1 | 69.6 |
| Genetically selected low-cost features | | | | | | |
| Logistic regression | 82.6 | 80 | 78.6 | 83.8 | 81.5 | 79.3 |
| Support vector machine | 78.7 | 78.3 | 74.6 | 81.9 | 78.5 | 76.4 |
| Decision tree | 77.3 | 60 | 67.9 | 70.7 | 69.6 | 63.7 |
| Random forest | 78.7 | 66.7 | 71.4 | 74.7 | 73.3 | 69 |

PPA: positive percent agreement. NPA: negative percent agreement. F1: F-score. ACFI: Aged Care Funding Instrument. eFI: electronic frailty index.

Table 3. Performance of the 16 scenarios with 15 high-cost features.

| Features | Sensitivity | Specificity | PPA | NPA | Accuracy | F1 |
|---|---|---|---|---|---|---|
| ACFI + low-cost eFI | | | | | | |
| Logistic regression | 85.3 | 85.0 | 82.3 | 87.7 | 85.2 | 83.6 |
| Support vector machine | 84.0 | 86.7 | 81.3 | 88.7 | 85.1 | 83.9 |
| Decision tree | 69.3 | 61.7 | 61.7 | 69.3 | 65.9 | 61.7 |
| Random forest | 84.0 | 66.7 | 76.9 | 75.9 | 76.3 | 71.4 |
| Low-cost eFI | | | | | | |
| Logistic regression | 85.3 | 81.7 | 81.7 | 85.3 | 83.7 | 81.7 |
| Support vector machine | 86.7 | 81.7 | 83.1 | 85.5 | 84.4 | 82.4 |
| Decision tree | 76.0 | 58.3 | 66.0 | 69.5 | 68.1 | 61.9 |
| Random forest | 86.7 | 65.0 | 79.6 | 75.6 | 77.0 | 71.6 |
| No low-cost features | | | | | | |
| Logistic regression | 80.0 | 83.3 | 76.9 | 85.7 | 81.5 | 80.0 |
| Support vector machine | 73.3 | 75.0 | 69.2 | 78.6 | 74.1 | 72.0 |
| Decision tree | 78.7 | 55.0 | 67.3 | 68.6 | 68.1 | 60.6 |
| Random forest | 77.3 | 76.6 | 73.0 | 80.6 | 77.0 | 74.8 |
| Genetically selected low-cost features | | | | | | |
| Logistic regression | 81.3 | 80.0 | 77.4 | 83.5 | 80.7 | 78.7 |
| Support vector machine | 80.0 | 76.7 | 75.4 | 81.1 | 78.5 | 76.0 |
| Decision tree | 69.3 | 61.7 | 61.7 | 69.3 | 65.9 | 61.7 |
| Random forest | 84.0 | 66.7 | 76.9 | 75.9 | 76.3 | 71.4 |

PPA: positive percent agreement. NPA: negative percent agreement. F1: F-score. ACFI: Aged Care Funding Instrument. eFI: electronic frailty index.

Table 4. Performance of models based on all features.

| Algorithm | Sensitivity | Specificity | PPA | NPA | Accuracy | F1 |
|---|---|---|---|---|---|---|
| LR | 97.3 | 96.7 | 96.7 | 97.3 | 97.0 | 96.7 |
| SVM | 86.7 | 95.0 | 85.1 | 95.6 | 90.4 | 89.8 |
| DT | 76.0 | 63.3 | 67.9 | 72.1 | 70.4 | 65.5 |
| RF | 88.0 | 75.0 | 83.3 | 81.5 | 82.2 | 78.9 |

PPA: positive percent agreement. NPA: negative percent agreement. F1: F-score. LR: logistic regression. SVM: support vector machine. DT: decision tree. RF: random forest.

Table 5. Performance of models based only on low-cost features.

| Algorithm | Sensitivity | Specificity | PPA | NPA | Accuracy | F1 |
|---|---|---|---|---|---|---|
| LR | 77.3 | 63.3 | 69.1 | 72.5 | 71.1 | 66.1 |
| SVM | 77.3 | 58.3 | 67.3 | 69.9 | 68.9 | 62.5 |
| DT | 61.3 | 70.0 | 59.2 | 71.9 | 65.2 | 64.1 |
| RF | 77.3 | 58.3 | 67.3 | 69.9 | 68.9 | 62.5 |

PPA: positive percent agreement. NPA: negative percent agreement. F1: F-score. LR: logistic regression. SVM: support vector machine. DT: decision tree. RF: random forest.


Principal Findings

With AI techniques, cost-effective screening tests for frailty are possible for aged care databases that contain an ACFI assessment and unstructured patient notes. This study has shown that the ACFI assessment alone does not provide sufficient information to determine whether a patient is frail. However, when ACFI data are augmented by as few as 10 additional features, an AI model can be derived that performs well enough to be used as a screening test. In clinical practice, this means that older people with frailty can be rapidly and accurately identified in residential care using our novel AI-derived model. Rapid identification of frailty is crucial to optimally manage the condition [27]. Indeed, the recent Australian Royal Commission into Aged Care Quality and Safety highlighted the importance of early identification of aged care residents with frailty, who require additional support [28].

The value of any AI-derived model for frailty screening can be judged by how much it reduces the cost of acquiring the features required to determine the value of the deficits used to construct a frailty index. Features that are routinely collected and stored in a database in a format that can be fed directly into a classification model have a low cost of acquisition. Unfortunately, as shown in this study (Table 5) and others [20], models built only on such features lack both the sensitivity and specificity to be useful screening tests. At the other extreme, models that include all the deficit features used to calculate the eFI perform extremely well [20] (Table 4), but their value is marginal, as nearly all of the expensive features must still be acquired.

To be useful for a screening test, a model must be acceptably accurate and significantly reduce the cost of acquisition of the features required to implement a frailty index. If a model cannot be developed with acceptable accuracy without including at least some high-cost features, it is desirable to determine the optimal minimum set of high-cost features required to achieve an acceptable performance. Genetic algorithms perform well at determining the optimal subset of features required to maximize the performance of a model. Furthermore, their choice of a subset can be limited to any number of features, up to and including all the available features. This allows the trade-off between the number of features and the performance of the derived models to be determined.

This study found that if a genetic algorithm was permitted to choose any number of features from all the available features, regardless of their cost, it most frequently chose subsets that included only high-cost features. This motivated the development of the previously mentioned partial genetic algorithm, which forces the algorithm to include low-cost features as well. However, this raises the question of whether the low-cost features add any value at all. To answer this question, the results include both a fixed set with no low-cost features and a set including only the low-cost features used to calculate the eFI. For logistic regression models with 10 high-cost features, including all the low-cost features yielded an improvement of 17 percentage points in sensitivity (89% versus 72%) over using no low-cost features, without compromising specificity, which remained stable at 87%. This improvement is significant and possibly represents the difference between a clinically useful screening test and an inadequate one. Even when the comparison is made against models that include only the low-cost features used in the eFI calculation, models built on all the low-cost features show a 6-point improvement in sensitivity (89% versus 83%) and a 5-point improvement in specificity (87% versus 82%).

Although the partial genetic algorithm–built models with 10 high-cost features use less than a third of all the high-cost features, they still require those 10 features to be extracted by screening patient notes. Recent advances in natural language processing (NLP) show promise for automating this extraction process. It is plausible that NLP could extract all the features required to calculate the eFI, but this would require a much larger data set than the one used in this study. In the meantime, the cost of acquisition of at least 10 features from every patient record remains the cost of implementing a screening test on any database similar to ours that contains an ACFI assessment and unstructured patient notes.

Partial genetic algorithms can be used to derive classification models from any database where the cost of acquisition of some parameters is higher than it is for others. Although they have been demonstrated in this study on an aged care database to predict frailty, they could be used in any domain. They are well suited to permit AI models to be trained to implement screening tests in domains where costs are important and there is a difference in the cost of acquisition of candidate features.

Limitations

Because this study reuses the data from a previous study [20], it shares the limitations associated with those data. In particular, the data were sourced from a single aged care provider, and the data set was relatively small. This study further filtered patients based on the availability of an ACFI assessment. It is plausible that these criteria gave a skewed representation of the population to which a screening test would be applied, resulting in different model performance. The reproducibility of AI results remains controversial [29,30] within medicine, so further studies should aim to reproduce these results with different data sets. A further limitation is the changing model of aged care in Australia, with a new funding model set to replace the ACFI in the next 2 years.

Conclusion

The value of screening tests lies in their cost-effective application. The main cost of applying a model-based screening test lies in acquiring the measures fed into the model. To derive useful screening tests using AI techniques, algorithms must be employed that favor cheaper features over those that require more effort or patient risk to acquire. What all aged care providers and their clinical advisers need is a screening tool that allows the efficient planning of evidence-based interventions for the frail older people who will benefit most from them. At a time when the aged care sector and all providers are being asked by governments and national quality agencies to focus on this vulnerable group, it is crucial that we employ an efficient screening tool.

This paper has shown how partial genetic algorithms can be used to determine an optimal subset of high-cost features to use with cheap features to derive AI models to classify frailty, both in terms of which parameters to use and how many to use. This technique can be applied to any database. It does not guarantee that an adequate model will be found from any database, but it does give a good indication of whether there is sufficient information in the data to derive a model.

Partial genetic algorithms were demonstrated in this paper to derive a cost-effective screening test for frailty, but the method can be applied to any screening tests where there is a disparity in the cost of measuring the required features. The outcome of this study will aid health care providers in screening for frailty with better accuracy through the proposed cost-effective method, which strikes a good balance between accuracy and cost.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Full list of features.

DOCX File , 14 KB

Multimedia Appendix 2

Selected features.

DOCX File , 41 KB

Multimedia Appendix 3

Low-cost features selected for models built with GA-selected subset.

DOCX File , 13 KB

  1. Yee CY, Shafiabady N, Isa D. Optimal sizing supercapacitor-battery hybrid energy storage system in solar application using the genetic algorithms. IJRM 2014 Jun 29;1(1):44-52. [CrossRef]
  2. Ambagtsheer RC, Beilby JJ, Visvanathan R, Dent E, Yu S, Braunack-Mayer AJ. Should we screen for frailty in primary care settings? A fresh perspective on the frailty evidence base: A narrative review. Prev Med 2019 Feb;119:63-69. [CrossRef] [Medline]
  3. Chatindiara I, Allen J, Hettige D, Senior S, Richter M, Kruger M, et al. High prevalence of malnutrition and frailty among older adults at admission to residential aged care. J Prim Health Care 2020;12(4):305. [CrossRef]
  4. Turner G, Clegg A. Best practice guidelines for the management of frailty: a British Geriatrics Society, Age UK and Royal College of General Practitioners report. Age Ageing 2014 Nov;43(6):744-747. [CrossRef] [Medline]
  5. Vermeiren S, Vella-Azzopardi R, Beckwée D, Habbig A, Scafoglieri A, Jansen B, et al. Frailty and the prediction of negative health outcomes: a meta-analysis. J Am Med Dir Assoc 2016 Dec 01;17(12):1163.e1-1163.e17. [CrossRef] [Medline]
  6. Dent E, Kowal P, Hoogendijk EO. Frailty measurement in research and clinical practice: A review. Eur J Intern Med 2016 Jun;31:3-10. [CrossRef] [Medline]
  7. Theou O, Sluggett J, Bell J, Lalic S, Cooper T, Robson L, et al. Frailty, hospitalization, and mortality in residential aged care. J Gerontol A Biol Sci Med Sci 2018 Jul 09;73(8):1090-1096. [CrossRef] [Medline]
  8. Hajek A, Bock J, Saum K, Matschinger H, Brenner H, Holleczek B, et al. Frailty and healthcare costs-longitudinal results of a prospective cohort study. Age Ageing 2018 Mar 01;47(2):233-241. [CrossRef] [Medline]
  9. Sirven N, Rapp T. The cost of frailty in France. Eur J Health Econ 2017 Mar 25;18(2):243-253. [CrossRef] [Medline]
  10. Dent E, Lien C, Lim WS, Wong WC, Wong CH, Ng TP, et al. The Asia-Pacific Clinical Practice Guidelines for the Management of Frailty. J Am Med Dir Assoc 2017 Jul 01;18(7):564-575. [CrossRef] [Medline]
  11. Thompson MQ, Theou O, Adams RJ, Tucker GR, Visvanathan R. Frailty state transitions and associated factors in South Australian older adults. Geriatr Gerontol Int 2018 Nov 16;18(11):1549-1555. [CrossRef] [Medline]
  12. Lang P, Michel J, Zekry D. Frailty syndrome: a transitional state in a dynamic process. Gerontology 2009 Apr 4;55(5):539-549. [CrossRef] [Medline]
  13. Puts MTE, Toubasi S, Andrew MK, Ashe MC, Ploeg J, Atkinson E, et al. Interventions to prevent or reduce the level of frailty in community-dwelling older adults: a scoping review of the literature and international policies. Age Ageing 2017 May 01;46(3):383-392 [FREE Full text] [CrossRef] [Medline]
  14. Hoogendijk EO, Afilalo J, Ensrud KE, Kowal P, Onder G, Fried LP. Frailty: implications for clinical practice and public health. Lancet 2019 Oct;394(10206):1365-1375. [CrossRef]
  15. Gobbens RJ, Luijkx KG, Wijnen-Sponselee MT, Schols JM. Toward a conceptual definition of frail community dwelling older people. Nurs Outlook 2010 Mar;58(2):76-86. [CrossRef] [Medline]
  16. Gobbens RJJ, Maggio M, Longobucco Y, Barbolini M. The validity of the SUNFRAIL tool: a cross-sectional study among Dutch community-dwelling older people. J Frailty Aging 2020;9(4):219-225. [CrossRef] [Medline]
  17. Orkaby AR, Hshieh TT, Gaziano JM, Djousse L, Driver JA. Comparison of two frailty indices in the physicians' health study. Arch Gerontol Geriatr 2017 Jul;71:21-27 [FREE Full text] [CrossRef] [Medline]
  18. Ambagtsheer RC, Archibald MM, Lawless M, Kitson A, Beilby J. Feasibility and acceptability of commonly used screening instruments to identify frailty among community-dwelling older people: a mixed methods study. BMC Geriatr 2020 Apr 22;20(1):152 [FREE Full text] [CrossRef] [Medline]
  19. Clegg A, Bates C, Young J, Ryan R, Nichols L, Teale EA, et al. Development and validation of an electronic frailty index using routine primary care electronic health record data. Age Ageing 2018 Mar 01;47(2):319 [FREE Full text] [CrossRef] [Medline]
  20. Ambagtsheer R, Shafiabady N, Dent E, Seiboth C, Beilby J. The application of artificial intelligence (AI) techniques to identify frailty within a residential aged care administrative data set. Int J Med Inform 2020 Apr;136:104094. [CrossRef] [Medline]
  21. Ambagtsheer RC, Beilby J, Seiboth C, Dent E. Prevalence and associations of frailty in residents of Australian aged care facilities: findings from a retrospective cohort study. Aging Clin Exp Res 2020 Sep 04;32(9):1849-1856. [CrossRef] [Medline]
  22. Clegg A, Bates C, Young J, Ryan R, Nichols L, Ann Teale E, et al. Development and validation of an electronic frailty index using routine primary care electronic health record data. Age Ageing 2016 May 03;45(3):353-360 [FREE Full text] [CrossRef] [Medline]
  23. Searle SD, Mitnitski A, Gahbauer EA, Gill TM, Rockwood K. A standard procedure for creating a frailty index. BMC Geriatr 2008 Sep 30;8(1):24 [FREE Full text] [CrossRef] [Medline]
  24. Hoover M, Rotermann M, Sanmartin C, Bernier J. Validation of an index to estimate the prevalence of frailty among community-dwelling seniors. Health Reports 2013;24(9):7.
  25. Shafiabady N. ST (Shafiabady-Teshnehlab) optimization algorithm. In: Swarm Intelligence: Innovation, New Algorithms and Methods. 2018. URL: https://digital-library.theiet.org/content/books/10.1049/pbce119g_ch4 [accessed 2022-09-29]
  26. Shafiabady N, Lee L, Rajkumar R, Kallimani V, Akram NA, Isa D. Using unsupervised clustering approach to train the Support Vector Machine for text classification. Neurocomputing 2016 Oct;211:4-10. [CrossRef]
  27. Dent E, Martin FC, Bergman H, Woo J, Romero-Ortuno R, Walston JD. Management of frailty: opportunities, challenges, and future directions. Lancet 2019 Oct;394(10206):1376-1386. [CrossRef]
  28. Pagone G, Briggs L. Final report: care, dignity and respect. Royal Commission into Aged Care Quality and Safety. URL: https://agedcare.royalcommission.gov.au/sites/default/files/2021-03/final-report-volume-1.pdf [accessed 2022-05-01]
  29. Stupple A, Singerman D, Celi LA. The reproducibility crisis in the age of digital medicine. NPJ Digit Med 2019 Jan 29;2(1). [CrossRef]
  30. Jeganathan J, Knio Z, Amador Y, Hai T, Khamooshian A, Matyal R, et al. Artificial intelligence in mitral valve analysis. Ann Card Anaesth 2017;20(2):129. [CrossRef]


ACFI: Aged Care Funding Instrument
AI: artificial intelligence
eFI: electronic frailty index
FI: frailty index
FP: frailty phenotype
GA: genetic algorithm
NLP: natural language processing
PAS: Psychogeriatric Assessment Scales
SVM: support vector machine


Edited by J Wang, T Leung; submitted 03.04.22; peer-reviewed by Y Longobucco, M Kraus; comments to author 21.06.22; revised version received 01.07.22; accepted 30.07.22; published 07.10.22

Copyright

©John Oates, Niusha Shafiabady, Rachel Ambagtsheer, Justin Beilby, Chris Seiboth, Elsa Dent. Originally published in JMIR Aging (https://aging.jmir.org), 07.10.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Aging, is properly cited. The complete bibliographic information, a link to the original publication on https://aging.jmir.org, as well as this copyright and license information must be included.