Published in Vol 8 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/68826.
Risk Factors for Community-Dwelling Older Adults Dropping Out of Self-Guided, Remote, and Web-Based Longitudinal Research: Predictive Modeling of Data from the Web-LABrainS Platform


1Pennington Biomedical Research Center, Institute for Dementia Research and Prevention, 6400 Perkins Rd, Baton Rouge, LA, United States

2Pennington Biomedical Research Center, Computing Services, Baton Rouge, LA, United States

Corresponding Author:

Jeffrey N Keller, BS, PhD


Background: Little is currently known regarding the feasibility of using a self-guided, remote, web-based platform as the basis for a longitudinal study of aging in community-dwelling older adults (OAs). This study describes the feasibility and risk factors for participant dropout found when using this approach as part of the web-based Louisiana Aging Brain Study (web-LABrainS).

Objective: This study used data from 402 participants in the web-LABrainS effort to determine the feasibility of using a self-guided, remote, and web-based platform as the basis for conducting longitudinal research in community-dwelling older adults. Additionally, we sought to determine the risk factors associated with participant dropout over a 12-month period in web-LABrainS and determine whether the same risk factors associated with dropout from in-clinic longitudinal studies were observed in web-LABrainS dropouts.

Methods: Participants were enrolled in web-LABrainS on a rolling basis using word-of-mouth promotional efforts. Participants used the web-LABrainS platform to provide electronic consent, demographic information, and health information; answer questionnaires; and complete assessments as part of a self-guided, web-based effort conducted off-site of the clinic (remote). Following completion of the baseline battery, participants were contacted by email every 6 months to complete another round of the web-LABrainS battery. The data in this study were collected from 402 participants, 217 (54.0%) of whom completed baseline, 6-month, and 12-month assessments (adherent participants) and 185 (46.0%) of whom completed only the baseline battery and no subsequent web-LABrainS batteries (dropout participants).

Results: Our study indicates that even with limited participant outreach and retention efforts, it is feasible to conduct longitudinal clinical research studies in community-dwelling OAs using a self-guided, remote, and web-based approach. In contrast to traditional in-clinic longitudinal studies, dropouts from web-LABrainS were not observed to be significantly different with respect to age, education, gender, marital status, or living alone (P=.67, .16, .29, .051, .31). Similar to traditional longitudinal studies, dropouts from web-LABrainS had significantly higher use of depression medication, decreased self-reported mobility, and decreased delayed recall performance (P=.007, .007, .004). Interestingly, no differences in technology use, comfort with technology, time of assessment, or consent to be contacted about future research were observed between adherents and dropouts (P=.17, .36, .47, .40). Predictive binary logistic regression yielded a moderately accurate model and further supported a negative association between cognitive ability and dropout (OR 0.77, 95% CI 0.61-0.96).

Conclusions: Our study provides some of the first clinical evidence for the feasibility of conducting longitudinal human research using a self-guided, remote, and web-based approach. Additionally, these data highlight the similarities and differences in key factors associated with participant dropout using this type of approach compared to traditional longitudinal study formats. The findings from this study may help guide the design and deployment of future longitudinal studies of older adults focused on self-guided, remote, or web-based approaches.

JMIR Aging 2025;8:e68826

doi:10.2196/68826


Longitudinal studies of aging have proven invaluable in helping to identify the underlying clinical factors involved in the onset and progression of a diverse array of chronic conditions, including dementia, frailty, and cardiovascular disease [1-4]. The vast majority of human longitudinal studies conducted to date have relied on in-clinic study measures and the in-person administration of pencil-and-paper assessments and questionnaires [5]. Traditional onsite administration of assessments and measures offers multiple advantages for longitudinal studies. For example, traditional onsite evaluations significantly increase control of the study environment, increase the rigor of assessments, and promote the centralization of clinical expertise and resources. There are also multiple disadvantages to relying on onsite visits for longitudinal human studies, including the high cost of conducting studies in a clinical setting and the limited availability of study visit slots within the typical daily clinic workflow. Additionally, the reliance on in-clinic visits selects for participants who have the resources and flexibility to travel to the clinic (a socioeconomic and geographical burden) and raises the overall barrier to participation in clinical research.

The dropout rate for in-clinic longitudinal studies involving OAs has been reported to range from 15% to 60% [6-23]. Previous studies have identified a diverse array of risk factors associated with participant dropout from in-clinic longitudinal studies [6-23]. Factors reported to drive participant dropout in traditional longitudinal studies include advanced age, lower education, gender, and the presence of chronic health conditions [6-23].

In the last decade, there has been a dramatic increase in the development, validation, and use of self-guided, remote, and web-based assessments as part of clinical research [24-33]. While these efforts preceded COVID-19, the pandemic created urgency and awareness of the need to quickly develop viable alternatives for conducting large-scale clinical research independent of traditional research settings [34-38]. Together, these novel approaches have focused on self-guided or web-based collection of demographics and health information and the delivery of questionnaires and assessments outside of the clinic. As a consequence, there is today a growing scientific literature outlining self-guided, remote, or web-based assessments for cognitive, mobility, and mental health studies in OAs.

Very little is currently known in terms of the feasibility of using a self-guided, remote, and web-based platform for conducting longitudinal research in OAs. Previously, we validated a self-guided, remote, and web-based platform for the collection of demographics, health information, and delivery of questionnaires and clinical assessments [39]. We then initiated efforts to determine the feasibility of using this platform to conduct a self-guided, remote, and web-based longitudinal aging study in community-dwelling OAs. This study was defined as the web-based Louisiana Aging Brain Study (web-LABrainS).

In this research effort, we report on the feasibility of deploying web-LABrainS as a potential format for conducting longitudinal research in community-dwelling OAs. Feasibility was evaluated using extremely limited outreach (word-of-mouth) and limited retention efforts (2 email reminders to encourage participation) in order to define performance under minimal support conditions. We observed that even in this restricted paradigm, the web-LABrainS platform was a feasible option for conducting an initial 12-month research follow-up. We also observed similarities and differences between the factors associated with study dropout in web-LABrainS and those reported in previous studies using traditional approaches for longitudinal research. Additionally, we describe the relationships between technology use, comfort with technology, time of day for assessments, other novel factors, and adherence in web-LABrainS. Taken together, these efforts allowed us to achieve our goal of determining the feasibility of using a largely automated, self-guided, and web-based approach for onboarding, assessing, reporting, and conducting follow-up assessments as part of longitudinal studies involving community-dwelling OAs.


Study Participants

All data in the current study were obtained from participants enrolled in web-LABrainS. The web-LABrainS study is a self-guided, web-based battery of questionnaires (demographics, mobility, medication use, and health history) and assessments [39]. Web-LABrainS has been continually enrolling participants since 2020. Participants who heard about web-LABrainS via word-of-mouth (no promotions or outreach efforts for web-LABrainS have been conducted) contacted the Pennington Biomedical Research Center (PBRC) to be enrolled. Participants were then sent an email containing a hyperlink that took them to the web-LABrainS site to consent to and complete the web-LABrainS battery. Individuals were sent an additional email and hyperlink every 6 months to repeat the web-LABrainS battery.

Because subjects enroll in and complete the web-LABrainS battery of assessments on a rolling basis, for the purposes of this analysis, we pulled participant data from our servers on October 17, 2024. Data from participants who enrolled in the study after this date, as well as assessment data collected after this date from already-enrolled participants, were not included in this analysis.

Dropout and Adherent Classification

Subjects were classified as either dropouts or adherents of web-LABrainS. A participant was classified as a dropout if they completed their baseline assessment and had no subsequent assessments. A minimum of 9 months had to have passed since their baseline assessment for a participant to be considered a dropout. If a participant completed a second assessment—even if it was more than 9 months after their baseline assessment—they were not classified as a dropout. Participants were classified as adherents if they completed assessments at a minimum of 3 time points (baseline, 6-month, and 12-month assessments). Participants were recruited to the study via minimal outreach efforts that largely relied on word-of-mouth and grassroots promotion by the participants in web-LABrainS. Participants were retained in the study by equally minimal retention efforts that were based on 2 email prompts for those individuals who did not open the email link to complete their web-LABrainS assessment at baseline, 6, or 12 months postbaseline. This approach was undertaken to evaluate the feasibility in a low recruitment and retention environment.
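The classification rules above can be sketched as a small function. This is an illustrative sketch, not the study's code: the exact implementation of the 9-month cutoff (approximated here as 274 days) and the handling of edge cases such as exactly 2 completed assessments are assumptions.

```python
from datetime import date

def classify_participant(assessment_dates, pull_date):
    """Classify a participant per the study's stated rules (sketch).

    Adherent: completed assessments at >=3 time points.
    Dropout: completed only the baseline assessment, with >=9 months
    (approximated here as 274 days, an assumption) elapsed by pull_date.
    Anything else (eg, exactly 2 assessments, or a baseline completed
    <9 months ago) is left unclassified (None).
    """
    if len(assessment_dates) >= 3:
        return "adherent"
    if len(assessment_dates) == 1 and (pull_date - assessment_dates[0]).days >= 274:
        return "dropout"
    return None

# Baseline only, data pulled well over 9 months later
print(classify_participant([date(2023, 1, 10)], date(2024, 10, 17)))  # dropout
```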

Components of Web-LABrainS

The components of the web-LABrainS assessment have been described previously in a study of the feasibility of this self-guided, web-based paradigm [39]; we briefly summarize the components used in this investigation here. First, participants were presented with demographic questions about their date of birth, gender identification, racial and ethnic background, marital status, and education. Participants were then asked about their living situation (including whether they live alone and their type of residence), the types of technology they use, their self-rated comfort with technology, and their driving status and frequency. Participants rated their own mobility and completed a short questionnaire adapted from the Life-Space Assessment [40]. Participants then provided their level of concern for their memory and whether they rate it worse than that of others their age. Participants were asked about the number and type of prescription medications they use and their ability to read a medication label, and they provided their personal medical history, including the presence of current chronic health conditions. To conclude the survey portion of the web-LABrainS battery, participants were asked whether they wish to be contacted about future research.

In the next section of the web-LABrainS battery, participants completed multiple validated cognitive assessments, including an 8-item orientation assessment and 4-item immediate and delayed recall assessments. Finally, they completed an assessment of acute symptoms of depression and anxiety.

Statistical Analysis

We used 2 types of statistical analysis in this study. We first implemented a between-group comparison of dropouts and adherents. This involved 1-way chi-square tests of independence to assess the significance of differences in the observed and expected frequencies of all categorical dependent variables examined. It also involved Mann-Whitney U tests to assess the significance of differences in the distributions of all numerical dependent variables examined. Exact P values were reported; a significance threshold (α) of .05 was used to identify statistically significant results.
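As a sketch of these between-group tests using scipy (part of the paper's reported analysis stack): the 2x2 table below reuses the living-situation counts from Table 1, while the numeric samples are simulated stand-ins for real group scores.

```python
import numpy as np
from scipy import stats

# Chi-square test of independence on a 2x2 contingency table:
# rows = dropout/adherent, columns = living alone / not living alone
# (counts taken from Table 1's living-situation rows)
table = np.array([[49, 136],
                  [47, 170]])
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

# Mann-Whitney U test on two simulated numeric samples
# (eg, self-rated mobility scores for each group)
rng = np.random.default_rng(0)
dropout_scores = rng.normal(3.7, 0.9, size=185)
adherent_scores = rng.normal(3.9, 0.9, size=217)
u_stat, p_num = stats.mannwhitneyu(dropout_scores, adherent_scores)

alpha = 0.05  # significance threshold used in the study
print(f"chi-square P={p_cat:.3f}; Mann-Whitney P={p_num:.3f}; "
      f"significant: {p_num < alpha}")
```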

Our second form of statistical analysis was a binary logistic regression model, a predictive model trained on our data from which we calculated odds ratios (ORs) and CIs to determine the relative strength of our examined variables as predictors of dropout status. We trained and cross-validated 4 different models. We used different sets of predictors in each of these models and iteratively tuned their hyperparameters for optimal performance. Depending on the model, this involved adjusting the optimization algorithm, regularization method, regularization strength, size trim tolerance for predictor coefficients, convergence tolerance, or number of iterations over multiple runs of the model and selecting the configuration that resulted in the strongest classification performance. We assessed each model’s classification ability using k-fold cross-validation. We first shuffled the dataset and split it into 5 separate folds. Four of the folds were used to train the model, while the last fold was withheld as the test set. This process was repeated with each fold being used as the test set once. The trained model was used to predict the outcomes of the unseen subjects in each test set. For each iteration of the model, we generated a classification report that included the model’s precision, recall, and F1-scores to evaluate performance for each class (dropout and adherent). The weighted and unweighted averages of these scores across the 2 classes were also provided, along with the total model accuracy and the support values for these averages. The 5 classification reports produced for each iteration were then averaged.
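A minimal sketch of this 5-fold cross-validation loop with sklearn (also part of the reported stack), on synthetic stand-in data; the actual predictors, and the hyperparameters tuned per model, differed. Note that sklearn expresses regularization strength as its inverse, C, so C=100 is only a rough stand-in for the reported strength of 0.01.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Synthetic stand-in: 402 "participants", 5 numeric predictors,
# binary outcome (1 = dropout)
rng = np.random.default_rng(42)
X = rng.normal(size=(402, 5))
y = (X[:, 0] + rng.normal(scale=2.0, size=402) > 0).astype(int)

# Shuffle and split into 5 folds; each fold serves as the test set once
kf = KFold(n_splits=5, shuffle=True, random_state=42)
reports = []
for train_idx, test_idx in kf.split(X):
    model = LogisticRegression(penalty="l1", C=100.0, solver="liblinear")
    model.fit(X[train_idx], y[train_idx])
    y_pred = model.predict(X[test_idx])
    # Per-fold precision, recall, F1, support, and accuracy
    reports.append(classification_report(y[test_idx], y_pred, output_dict=True))

# Average accuracy across the 5 folds, as in the cross-validation tables
mean_acc = np.mean([r["accuracy"] for r in reports])
print(f"mean accuracy: {mean_acc:.2f}")
```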

We then selected the best-performing model from the cross-validation phase. Of the 4 models tested, our chosen model was tied for the highest overall accuracy but exhibited more consistent performance across validation folds and better classification of the minority class. This model included the variables that significantly differed between groups from our first analysis (self-rated mobility score, delayed recall performance, total number of prescription medications taken, use of depression medication, and quality-of-life self-rating) along with demographic variables (age, race, ethnicity, gender, and years of education). With that subset of data selected, we preprocessed our data in the same way we did during our cross-validation stage. This involved applying 1-hot encoding to all categorical variables, converting them into a binary format that the model was able to use. It also involved converting all continuous variables to a standard scale with a mean of 0 and SD of 1 to ensure equal contribution of these variables to our model. Finally, we added a constant term to the dataset to be used as a reference by the model.

The model itself was based on a logit function. The coefficients of this function were estimated using a convex optimization algorithm with L1, or Lasso (Least Absolute Shrinkage and Selection Operator), regularization. We applied a regularization strength of 0.01 to the coefficient estimation and set a trim tolerance of 0.05 for coefficients with a small absolute value. After the model was fit to the full dataset, we retrieved the coefficients of each predictor and calculated the corresponding ORs. The ORs show how the odds of the outcome (in this case, dropping out) change with an increase of 1 SD in a numerical variable or the presence of a binary variable.

All statistical analysis was performed in Python [41] using functions from the numpy, scipy, sklearn, and statsmodels packages [42-45]. Outputs from the web-LABrainS tool were stored in data structures and manipulated using objects and functions from the pandas Python package [46]. Data visualization was accomplished using the matplotlib and seaborn packages [47,48].

Ethical Considerations

All study procedures were approved by the PBRC Institutional Review Board (IRB). Informed consent was provided for all participants prior to the initiation of study procedures. Participants were informed that participation was voluntary and that they could opt out at any time. Informed consent included the ability of PBRC researchers to conduct secondary analysis without additional consent. All data were stored in a deidentified manner to maintain participant confidentiality. No compensation was provided for study participation as outlined in the PBRC IRB-approved study protocol. All data used for this study were obtained from web-LABrainS. IRB approval for the web-LABrainS study was obtained from the PBRC IRB (FWA # 00006218) prior to the initiation of web-LABrainS research efforts. The PBRC IRB approval number is 2020-044-PBRC Web-LABrainS. The study complied with ethical standards outlined in the Belmont Report and Declaration of Helsinki.


In this study, a total of 402 OA participants in web-LABrainS were examined. The demographics of all 402 participants at their baseline assessment are provided in Table 1. The mean baseline age of the entire group was 65.3 (SD 11.6) years. In terms of representation of different races, genders, and levels of education, our overall sample was overwhelmingly White (355/402, 88.3%), female (298/402, 74.1%), and highly educated (mean 16.8, SD 2.5 years of education). Of the 402 participants, 217 (54.0%) completed the web-LABrainS battery at baseline, 6 months, and 12 months; these participants were classified as adherent. The remaining 185 (46.0%) completed only the baseline assessment; these participants were classified as dropouts.

Table 1. Demographics and between-group comparisons for dropouts and adherents.
| Measure | Total, N=402 | Dropout, n=185 | Adherent, n=217 | P value |
| --- | --- | --- | --- | --- |
| Year of baseline assessment, n (%) | | | | .27 |
|  2021 | 100 (24.9) | 44 (23.8) | 56 (25.8) | |
|  2022 | 68 (16.9) | 33 (17.8) | 35 (16.1) | |
|  2023 | 231 (57.5) | 105 (56.8) | 126 (58.1) | |
|  2024 | 3 (0.7) | 3 (1.6) | 0 (0.0) | |
| Time of baseline assessment (HH:MM), mean (SD) | 14:00 (4:00) | 14:12 (4:00) | 13:48 (4:00) | .47 |
| Age (years), mean (SD) | 65.3 (11.6) | 65.5 (12.0) | 65.1 (11.2) | .67 |
| Age quintiles, n (%) | | | | .52 |
|  <50 | 54 (13.4) | 27 (14.6) | 27 (12.4) | |
|  50‐60 | 64 (15.9) | 28 (15.1) | 36 (16.6) | |
|  60‐70 | 127 (31.6) | 54 (29.2) | 73 (33.6) | |
|  70‐80 | 117 (29.1) | 53 (28.6) | 64 (29.5) | |
|  >80 | 40 (10.0) | 23 (12.4) | 17 (7.8) | |
| Gender, n (%) | | | | .29 |
|  Men | 103 (25.6) | 42 (22.7) | 61 (28.1) | |
|  Women | 298 (74.1) | 143 (77.3) | 155 (71.4) | |
|  Prefer to self-describe | 1 (0.2) | 0 (0.0) | 1 (0.5) | |
| Race, n (%) | | | | .27 |
|  White | 355 (88.3) | 157 (84.9) | 198 (91.2) | |
|  Black or African American | 30 (7.5) | 19 (10.3) | 11 (5.1) | |
|  Asian | 6 (1.5) | 4 (2.2) | 2 (0.9) | |
|  American Indian or Alaska Native | 0 (0.0) | 0 (0.0) | 0 (0.0) | |
|  Other | 6 (1.5) | 3 (1.6) | 3 (1.4) | |
| Ethnicity, n (%) | | | | .16 |
|  Hispanic or Latino | 5 (1.2) | 2 (1.1) | 3 (1.4) | |
|  Non-Hispanic | 344 (85.6) | 165 (89.2) | 179 (82.5) | |
|  Other | 53 (13.2) | 18 (9.7) | 35 (16.1) | |
| Marital status, n (%) | | | | .05 |
|  Married | 257 (63.9) | 106 (57.3) | 151 (69.6) | |
|  Never married | 28 (7.0) | 11 (5.9) | 17 (7.8) | |
|  Common-law partner | 5 (1.2) | 4 (2.2) | 1 (0.5) | |
|  Divorced | 70 (17.4) | 39 (21.1) | 31 (14.3) | |
|  Widowed | 39 (9.7) | 23 (12.4) | 16 (7.4) | |
| Years of education, mean (SD) | 16.8 (2.5) | 16.6 (2.6) | 16.9 (2.5) | .16 |
| Highest level of education, n (%) | | | | .35 |
|  High school or GED | 15 (3.7) | 10 (5.4) | 5 (2.3) | |
|  Some college | 68 (16.9) | 35 (18.9) | 33 (15.2) | |
|  Associate’s degree | 20 (5.0) | 7 (3.8) | 13 (6.0) | |
|  Bachelor’s degree | 126 (31.3) | 60 (32.4) | 66 (30.4) | |
|  Master’s degree | 125 (31.1) | 52 (28.1) | 73 (33.6) | |
|  Doctorate degree | 48 (11.9) | 21 (11.4) | 27 (12.4) | |
| Living situation, n (%) | | | | .31 |
|  Living alone | 96 (23.9) | 49 (26.5) | 47 (21.7) | |
|  Not living alone | 306 (76.1) | 136 (73.5) | 170 (78.3) | |
| Housing, n (%) | | | | .74 |
|  Single residence house | 369 (91.8) | 168 (90.8) | 201 (92.6) | |
|  Assisted living | 1 (0.2) | 1 (0.5) | 0 (0.0) | |
|  Apartment complex | 13 (3.2) | 7 (3.8) | 6 (2.8) | |
|  Stand-alone apartment | 5 (1.2) | 3 (1.6) | 2 (0.9) | |
|  Other | 14 (3.5) | 6 (3.2) | 8 (3.7) | |
| Average technologies used, mean (SD) | 4.0 (1.0) | 3.9 (1.1) | 4.1 (0.9) | .17 |
| Individual technologies used, n (%) | | | | |
|  Smartphone use | 390 (97.0) | 179 (96.8) | 211 (97.2) | ≥.99 |
|  Tablet use | 307 (76.4) | 135 (73.0) | 172 (79.3) | .17 |
|  Laptop use | 342 (85.1) | 156 (84.3) | 186 (85.7) | .80 |
|  Desktop use | 330 (82.1) | 148 (80.0) | 182 (83.9) | .38 |
|  Wearable use | 252 (62.7) | 109 (58.9) | 143 (65.9) | .18 |
| Comfort with computers score, mean (SD) | | | | |
|  Comfort with computers score, 1‐5 | 3.8 (1.5) | 3.8 (1.5) | 3.9 (1.5) | .36 |
| Comfort with computers, n (%) | | | | .58 |
|  Very comfortable | 212 (52.7) | 92 (49.7) | 120 (55.3) | |
|  Slightly comfortable | 54 (13.4) | 28 (15.1) | 26 (12.0) | |
|  I’m okay | 63 (15.7) | 29 (15.7) | 34 (15.7) | |
|  Slightly uncomfortable | 9 (2.2) | 6 (3.2) | 3 (1.4) | |
|  Very uncomfortable | 64 (15.9) | 30 (16.2) | 34 (15.7) | |
| Mobility, mean (SD) | | | | |
|  Mobility self-score, 1‐5 | 3.8 (1.0) | 3.7 (0.9) | 3.9 (0.9) | .007 |
|  Life-space mobility score, 0‐6 | 3.7 (1.2) | 3.7 (1.1) | 3.7 (1.2) | .96 |
| Falls in last year, n (%) | | | | .86 |
|  Fall | 80 (19.9) | 38 (20.5) | 42 (19.4) | |
|  No fall | 322 (80.1) | 147 (79.5) | 175 (80.6) | |
| Driving status, n (%) | | | | .12 |
|  Regularly drive | 383 (95.3) | 173 (93.5) | 210 (96.8) | |
|  Occasionally drive | 16 (4.0) | 9 (4.9) | 7 (3.2) | |
|  Rarely drive | 0 (0.0) | 0 (0.0) | 0 (0.0) | |
|  Do not drive | 3 (0.7) | 3 (1.6) | 0 (0.0) | |
| Driving frequency, mean (SD) | | | | |
|  Driving frequency self-score | 4.4 (0.7) | 4.4 (0.8) | 4.5 (0.6) | .28 |
| Cognition, mean (SD) | | | | |
|  Orientation score, 0‐8 | 7.8 (0.4) | 7.8 (0.4) | 7.8 (0.4) | .42 |
|  Immediate recall score, 0‐4 | 4.0 (0.1) | 4.0 (0.1) | 4.0 (0.1) | .53 |
|  Delayed recall score, 0‐4 | 3.8 (0.6) | 3.7 (0.7) | 3.9 (0.5) | .004 |
| Memory compared to others their age, n (%) | | | | .30 |
|  Worse | 77 (19.2) | 40 (21.6) | 37 (17.1) | |
|  Not worse | 325 (80.8) | 145 (78.4) | 180 (82.9) | |
| Memory concern score, mean (SD) | | | | |
|  Memory concern self-score | 1.8 (0.6) | 1.8 (0.6) | 1.8 (0.6) | .42 |
| Memory concern, n (%) | | | | .22 |
|  Extremely concerned | 4 (1.0) | 1 (0.5) | 3 (1.4) | |
|  Very concerned | 32 (8.0) | 20 (10.8) | 12 (5.5) | |
|  Some concern | 253 (62.9) | 113 (61.1) | 140 (64.5) | |
|  No concern | 113 (28.1) | 51 (27.6) | 62 (28.6) | |
| Depression, mean (SD) | | | | |
|  Depression score, 0‐64 | 12.7 (9.8) | 13.0 (9.6) | 12.5 (10.0) | .39 |
| Quality of life, mean (SD) | | | | |
|  Quality of life self-score | 79.0 (16.5) | 76.5 (17.2) | 81.1 (15.5) | .003 |
| Future contact for research, n (%) | | | | .40 |
|  Consented | 387 (96.3) | 176 (95.1) | 211 (97.2) | |
|  Did not consent | 15 (3.7) | 9 (4.9) | 6 (2.8) | |
| Total prescription medications, mean (SD) | 3.4 (2.7) | 3.9 (2.9) | 2.9 (2.4) | .001 |
| Prescription medication types, n (%) | | | | |
|  Acid suppression | 92 (22.9) | 49 (26.5) | 43 (19.8) | .14 |
|  Cholesterol | 181 (45.0) | 86 (46.5) | 95 (43.8) | .66 |
|  Diabetes | 39 (9.7) | 22 (11.9) | 17 (7.8) | .23 |
|  Sleep aids | 83 (20.6) | 46 (24.9) | 37 (17.1) | .07 |
|  Depression | 78 (19.4) | 47 (25.4) | 31 (14.3) | .007 |
|  Anxiety | 90 (22.4) | 47 (25.4) | 43 (19.8) | .22 |
| Medical conditions, n (%) | | | | |
|  Diabetes | 31 (7.7) | 16 (8.6) | 15 (6.9) | .64 |
|  High blood pressure | 185 (46.0) | 93 (50.3) | 92 (42.4) | .14 |
|  High cholesterol | 197 (49.0) | 92 (49.7) | 105 (48.4) | .87 |
|  Thyroid deficiency | 80 (19.9) | 42 (22.7) | 38 (17.5) | .24 |
|  Cancer | 65 (16.2) | 28 (15.1) | 37 (17.1) | .70 |
|  Alcohol abuse | 19 (4.7) | 11 (5.9) | 8 (3.7) | .41 |
|  Anxiety | 102 (25.4) | 49 (26.5) | 53 (24.4) | .72 |
|  Stroke | 8 (2.0) | 2 (1.1) | 6 (2.8) | .40 |
|  B12 deficiency | 27 (6.7) | 13 (7.0) | 14 (6.5) | .98 |
|  Sleep apnea | 66 (16.4) | 28 (15.1) | 38 (17.5) | .61 |
|  Depression | 121 (30.1) | 63 (34.1) | 58 (26.7) | .14 |
|  Concussion or TBI | 10 (2.5) | 6 (3.2) | 4 (1.8) | .56 |
|  TIA | 7 (1.7) | 3 (1.6) | 4 (1.8) | ≥.99 |
|  Atrial fibrillation | 21 (5.2) | 10 (5.4) | 11 (5.1) | ≥.99 |
|  Neurological disease | 13 (3.2) | 8 (4.3) | 5 (2.3) | .39 |
|  Heart attack | 9 (2.2) | 6 (3.2) | 3 (1.4) | .36 |
|  Drug abuse | 4 (1.0) | 1 (0.5) | 3 (1.4) | .73 |
|  Parkinson disease | 0 (0.0) | 0 (0.0) | 0 (0.0) | ≥.99 |

Our between-group comparison identified many metrics on which dropout and adherent participants did not differ significantly. These included basic demographic variables such as average age, distribution of participants within age quintiles, gender, race, ethnicity, average years of education, and highest level of education (P=.67, .52, .29, .27, .16, .16, and .35). Differences in marital status approached but did not reach significance (P=.051). Dropouts and adherents had statistically similar frequencies of living alone, housing situations, driving statuses, and self-rated driving frequency (P=.31, .74, .12, and .28). Results from the Life-Space mobility scale did not differ significantly between groups (P=.96). Next, we compared the 2 groups in terms of technology use and comfort with technology. Self-rated comfort with computers and average comfort with computers score (P=.58 and P=.36), average number of technologies used (P=.17), and the frequency of smartphone, tablet, laptop, desktop, and wearable technology use (P≥.99, P=.17, P=.80, P=.38, and P=.18) were not significantly different between the 2 groups. Cognitively, dropouts and adherents did not display significantly different performance on our orientation or immediate recall assessments (P=.42 and .53). There was also no significant difference in depressive symptoms between groups (P=.39). There were no significant between-group differences in whether participants rated their memory as worse than that of others their age, in their self-rated memory concern, or in their average memory concern score (P=.30, .22, and .42). Use of medications for acid suppression, cholesterol, diabetes, sleep aid, and anxiety did not differ significantly between groups (P=.14, .66, .23, .07, and .22).
Medical history of diabetes, high blood pressure, high cholesterol, thyroid deficiency, cancer, alcohol abuse, anxiety, stroke, B12 deficiency, sleep apnea, depression, concussion or traumatic brain injury, transient ischemic attack, atrial fibrillation, neurological disease, heart attack, drug abuse, and Parkinson disease did not differ significantly between groups (P=.64, .14, .87, .24, .70, .41, .72, .40, .98, .61, .14, .56, ≥.99, ≥.99, .39, .36, .73, and ≥.99). Interestingly, consent to be contacted for future research did not differ significantly between groups (P=.40).

The metrics in which we observed differences between dropout and adherent groups ranged across multiple domains of health. The dropout group had significantly lower self-rated mobility scores (P=.007), significantly lower delayed recall scores (P=.004), higher total prescription medications taken (P=.001), higher self-reported use of depression medication (P=.007), and lower self-rated quality of life (P=.003) as compared to the adherent group.

With initial findings from our between-group comparisons, we proceeded to the modeling stage of our analysis. We constructed 4 binary logistic regression models consisting of different sets of predictors chosen based on the results of the between-group tests. The footnote of Table 2 defines the predictor set of each model. Hyperparameters were tuned for maximum classification strength according to the protocols described in the Methods section above. Ultimately, Model 3, a model with a predictor set consisting of all significant between-group variables and basic demographic variables, was tied for the highest overall accuracy but exhibited more consistent performance across validation folds and better classification of the minority class. The classification metrics from the full model comparison are shown in Table 2.

Table 2. Binary logistic regression model performance in the cross-validation stage.
| Metric | Model 1a | Model 2b | Model 3c | Model 4d |
| --- | --- | --- | --- | --- |
| Adherent classification metrics, mean (SD) | | | | |
|  Precision | 0.59 (0.06) | 0.61 (0.05) | 0.61 (0.04) | 0.57 (0.03) |
|  Recall | 0.64 (0.05) | 0.76 (0.09) | 0.73 (0.07) | 0.7 (0.08) |
|  F1-score | 0.61 (0.05) | 0.68 (0.06) | 0.67 (0.05) | 0.63 (0.04) |
|  Support | 54.25 (0.5) | 54.25 (0.5) | 54.25 (0.5) | 54.25 (0.5) |
| Dropout classification metrics, mean (SD) | | | | |
|  Precision | 0.53 (0.09) | 0.61 (0.1) | 0.6 (0.07) | 0.53 (0.06) |
|  Recall | 0.48 (0.11) | 0.44 (0.07) | 0.46 (0.05) | 0.39 (0.06) |
|  F1-score | 0.5 (0.1) | 0.51 (0.07) | 0.52 (0.06) | 0.45 (0.04) |
|  Support | 46.25 (0.5) | 46.25 (0.5) | 46.25 (0.5) | 46.25 (0.5) |
| Classification metrics across classes, mean (SD) | | | | |
|  Unweighted average precision | 0.56 (0.07) | 0.61 (0.07) | 0.6 (0.06) | 0.55 (0.04) |
|  Unweighted average recall | 0.56 (0.07) | 0.6 (0.06) | 0.6 (0.05) | 0.55 (0.03) |
|  Unweighted average F1-score | 0.56 (0.08) | 0.59 (0.07) | 0.59 (0.05) | 0.54 (0.03) |
|  Unweighted average support | 100.5 (0.58) | 100.5 (0.58) | 100.5 (0.58) | 100.5 (0.58) |
|  Weighted average precision | 0.56 (0.07) | 0.61 (0.07) | 0.61 (0.05) | 0.55 (0.04) |
|  Weighted average recall | 0.56 (0.07) | 0.61 (0.07) | 0.61 (0.05) | 0.56 (0.04) |
|  Weighted average F1-score | 0.56 (0.07) | 0.6 (0.06) | 0.6 (0.05) | 0.55 (0.03) |
|  Weighted average support | 100.5 (0.58) | 100.5 (0.58) | 100.5 (0.58) | 100.5 (0.58) |
| Overall model accuracy, mean (SD) | | | | |
|  Accuracy | 0.56 (0.07) | 0.61 (0.07) | 0.61 (0.05) | 0.56 (0.04) |

aModel with predictor set consisting of all nonredundant metrics collected, L1 regularization with convex optimization algorithm, regularization strength of 0.01, size trim tolerance of 0.1 for coefficients, convergence tolerance of 1×10−10, 100 iterations.

bModel with predictor set consisting of only variables significant in between-group comparison, Newton-Raphson root-finding algorithm for optimization, convergence tolerance of 1×10−10, 100 iterations.

cModel with predictor set consisting of variables significant in between-group comparison and basic demographic variables, L1 regularization with convex optimization algorithm, regularization strength of 0.01, size trim tolerance of 0.05 for coefficients, convergence tolerance of 1×10−10, 100 iterations.

dModel with predictor set consisting of variables significant in between-group comparison and basic demographic variables as well as all second-degree interactions, L1 regularization with convex optimization algorithm, regularization strength of 0.01, size trim tolerance of 0.1 for coefficients, convergence tolerance of 1×10−10, 100 iterations.

Our logistic regression model, based primarily upon these significantly different between-group findings, showed several predictors with varying influence on the odds of dropout (Table 3). Older age, identifying as female, having a Black or African American or Asian racial background, and the use of depression medication were all associated with slightly increased odds, but the CI for each encompasses 1, indicating limited significance. A higher delayed recall score was the most notable of the significant associations we found (OR 0.77, 95% CI 0.61-0.96), suggesting that higher delayed recall ability may reduce the odds of dropping out. Additionally, taking more prescription medications showed a significant association with increased odds of dropping out (OR 1.3, 95% CI 1.01-1.68). Identifying as an ethnicity other than Hispanic or Latino or non-Hispanic was also shown to reduce dropout odds (OR 0.78, 95% CI 0.62-0.97). Overall, while some predictors hint at associations, few reach strong statistical significance.

Table 3. Binary logistic regression model results.
| Predictor | Coefficient | Odds ratio (95% CI) |
| --- | --- | --- |
| Constant term | −0.18 | 0.83 (0.62-1.12) |
| Age | 0.13 | 1.14 (0.9-1.44) |
| Gender | | |
|  Women | 0.13 | 1.14 (0.91-1.43) |
|  Prefer to self-describe | −0.34 | 0.71 (0.01-51.46) |
| Race | | |
|  Black | 0.17 | 1.18 (0.95-1.47) |
|  Asian | 0.2 | 1.22 (0.98-1.52) |
|  Other | 0.06 | 1.06 (0.84-1.34) |
|  Two or more races | Trimmed | N/Aa |
| Ethnicity | | |
|  Hispanic or Latino | −0.05 | 0.95 (0.77-1.17) |
|  Other | −0.25 | 0.78 (0.62-0.97) |
| Years of education | −0.07 | 0.93 (0.75-1.15) |
| Self-rated mobility score | −0.14 | 0.87 (0.69-1.11) |
| Delayed recall score | −0.27 | 0.77 (0.61-0.96) |
| Total prescription medications | 0.26 | 1.3 (1.01-1.68) |
| Use of depression medication | 0.17 | 1.19 (0.95-1.49) |
| Quality of life | −0.2 | 0.82 (0.65-1.03) |

aNot available.
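The odds ratios in Table 3 are the exponentiated model coefficients. A quick check (illustrative only; the coefficients below are the rounded values printed in the table, so the results match the table only approximately):

```python
import math

# Exponentiating a logistic regression coefficient gives the odds ratio.
# Rounded Table 3 coefficients are used, so expect small rounding gaps.
for name, coef in [("Total prescription medications", 0.26),
                   ("Delayed recall score", -0.27)]:
    print(f"{name}: OR ~ {math.exp(coef):.2f}")
# prints 1.30 and 0.76, vs 1.30 and 0.77 in the table (rounding)
```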

As part of our feasibility efforts, we next sought to determine whether there were significant differences in the time of day at which adherents and dropouts completed their baseline web-LABrainS battery. We observed that dropouts and adherents largely completed their respective assessment batteries at the same times of day (Figure 1). The mean time of day at which participants completed their assessments was 2:00 PM (SD 4 hours), and the distributions of completion times did not differ significantly between dropouts and adherents (Table 1), an indication of the convenience of a web-based assessment tool that is always available to the user. Interestingly, none of the 402 participants took their assessment battery between midnight and 3:00 AM local time, while every other time of day had some representation.

Figure 1. Kernel density estimation function and histogram for web-based Louisiana Aging Brain Study assessment start times. (A) Density plots showing the temporal distribution of baseline assessments for dropout (blue) and adherent (orange) participants. (B) The number of participants who completed their baseline assessment within each hour-long bin in a 24-hour day is depicted in histogram form. Distributions for both dropouts (blue) and adherents (orange) were displayed together at partial opacity, with overlapping parts of the distributions shown in gray. All times were reported relative to each participant’s time zone.
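Distributions like those in Figure 1 can be computed from raw start times with a Gaussian kernel density estimate and hour-long histogram bins. A minimal sketch using SciPy [43] and NumPy [42], with simulated times whose mean and spread mimic the reported 2:00 PM mean and 4-hour SD (the actual participant data are not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Simulated assessment start times in fractional hours; illustrative only.
# The study reported a ~14:00 mean, ~4 h SD, and no starts from 00:00-03:00.
dropout_times = np.clip(rng.normal(14, 4, 185), 3, 23.99)
adherent_times = np.clip(rng.normal(14, 4, 217), 3, 23.99)

# Panel A analogue: kernel density estimates over the 24-hour day
grid = np.linspace(0, 24, 241)
dropout_density = gaussian_kde(dropout_times)(grid)
adherent_density = gaussian_kde(adherent_times)(grid)

# Panel B analogue: hour-long histogram bins
counts, _ = np.histogram(adherent_times, bins=np.arange(0, 25))
print(counts.sum())  # 217: every adherent falls in exactly one bin
```

For the rendered figure, matplotlib [47] or seaborn [48] can draw the density curves and the partial-opacity overlaid histograms from these same arrays.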

Principal Findings

Our results in this study support our initial hypothesis that a self-guided, remote, and web-based battery can be used to conduct longitudinal research in community-dwelling OAs. We identified that demographic characteristics such as age, education, gender, marital status, and living alone were not significantly correlated with participant dropout. Similarly, reported comfort level with technology, amount of technology use, interest in research, and time of assessment were not correlated with participant dropout. Conversely, we identified a diverse array of medical-related issues that were significantly correlated with study dropout, including use of depression medication, decreased mobility, and impairments in delayed recall performance. The importance of each of these findings is discussed below.

While the level of dropout in this study was high, it is in line with what has been reported by in-clinic longitudinal studies. It is important to point out that recruitment of participants to web-LABrainS in this feasibility study did not involve efforts to target specific populations or to conduct large-scale recruitment, as recruitment was largely restricted to word of mouth among participants and a byproduct of correspondence with our institute. Additionally, participants received only 2 reminder emails to complete their 6- or 12-month assessment batteries, with limited or no additional follow-up to maintain retention. Given these extremely restricted outreach and retention efforts, we believe that even though our feasibility effort extended only to 12 months, it demonstrates that this approach is feasible for maintaining the participation of community-dwelling OAs in longitudinal research involving repeated assessments over time.

Many in-clinic longitudinal studies have investigated which of the variables collected in their demographic surveys, health questionnaires, and physical or cognitive assessments are significant predictors of participant attrition. One of the most commonly identified variables reported in the literature is age, although some studies claim older age makes a subject more prone to attrition [12,14,18,19,21] while others suggest younger subjects have a greater risk of study dropout [7-9,23]. Past research also shows variability in other predictors of study dropout including gender and education. Several studies identify being male as a significant predictor of study dropout [8,12,21], while others have shown that being female increases the risk for study dropout [16,19]. Likewise, past research has found both low levels of education [7,8,10,15,16,20,22,23] and high levels of education [21] to be significant predictors of study dropout. Taken together, data from this study and existing literature suggest that the retention of participants in longitudinal studies may be impacted by some factors (age, gender, and education) that are study-specific and therefore challenging to control for or successfully address.

Other predictors of participant attrition did not vary in directionality. These variables associated with participant dropout included having lower socioeconomic or job status, lower cognitive function, and lower self-reported or general health [7,10,12,15,19,21,22]. Medical conditions such as chronic stress, cardiovascular disease, coronary artery disease, heart attack, respiratory dysfunction, diabetes, depression, and anxiety were commonly identified predictors of study dropout when surveyed for or directly assessed [9-11,19,22]. Finally, lifestyle factors such as smoking, low physical activity, and low social and community involvement were also commonly associated with study dropout [7,10,12,21]. Living situations played an interesting role in the risk of attrition in previous studies. The presence of familial conflict or young children was identified as a significant predictor of study dropout [9]. Living alone or having fewer potential caretakers also significantly predicted attrition [12,18,19]. Based on the consistency of these findings, future studies should set out to fully address the potential impact of each of these aspects on participant retention. For example, future studies (including web-LABrainS 2.0) should consider incorporating strategies that minimize the negative impacts of lower socioeconomic status (providing participant stipends and maximizing flexibility for when study visits occur), impaired cognition (consider incorporating study partner and simplifying assessment screens), and poor health (allow study staff to assist with evaluation if requested, provide alternative types of assessments for individuals with physical limitations).

Our study identified several differences between the factors associated with dropout in this web-LABrainS feasibility effort and the findings from studies using traditional longitudinal study approaches. For example, we observed that dropout in the web-LABrainS effort was not correlated with age, education, gender, marital status, or living alone. In contrast, dropouts in web-LABrainS had significantly higher levels of depression, decreased self-reported mobility, and decreased memory performance (delayed recall). In addition, we identified 2 novel factors (decreased quality of life and increased number of prescribed medications) that were associated with study dropout. Interestingly, no differences in technology use, comfort with technology, or time of assessment were observed between adherent participants and dropouts. Another surprising finding was that an extremely high percentage (greater than 95%) of both adherent and dropout participants in web-LABrainS confirmed a willingness to be contacted about future research studies. These data suggest that alternative factors, including the perceived value of participation in the specific study, play a role in participant retention. For example, some participants likely continue in a longitudinal study solely for personal reasons not discussed above (a family member impacted by the disease, or dedication to a university, clinician, or researcher). Alternatively, participants may differentially stay in a longitudinal study based on the perceived value of the study results they receive. It should be noted that for web-LABrainS, we routinely receive participant correspondence saying they value the opportunity to participate because of a family member who had dementia or because they highly value the study summaries they receive at the conclusion of each assessment.

Limitations

There are multiple limitations in this feasibility study. For example, our web-LABrainS participants are not a representative sample of the community, and in future research efforts, it will be critical to optimize outreach such that a representative community sample is reliably recruited. With a more representative sample and a larger sample size, we could also expect predictive modeling of dropout risk to improve in accuracy beyond the binary logistic regression model shared in this study. Additionally, dropout in this web-LABrainS feasibility study was expectedly elevated due to the extremely limited participant retention efforts (2 email reminders). In future studies, it will be key to identify and implement features of our platform that maximize the retention of study participants. We believe, based on incidental feedback from web-LABrainS participants over the course of this feasibility study, that the major reason for study participation is that each participant receives a detailed and automated report on their cognitive function at the conclusion of the web-LABrainS assessment. Continuing to optimize feedback and related collaterals to study participants will likely be critical to ensuring long-term retention of study participants for years or even decades in longitudinal research efforts.

Conclusions

Taken together, the findings in this study provide a framework for the design and implementation of longitudinal research studies that use a self-guided, remote, and web-based approach. Additionally, the data in this study provide important findings on how different demographic, health, and technology use aspects correlate with adherence or dropout in the web-LABrainS approach. In addition to being feasible, we believe this new paradigm for longitudinal research will significantly decrease participant burden, dramatically lower research costs, and allow for significantly greater recruitment of participants across a wide geographic area. In future studies, it will be critical to develop methodologies and approaches that facilitate participation by a more racially and socioeconomically diverse array of study participants. These efforts will likely include outreach on social media platforms and collaborations with community stakeholders who align with racially and socioeconomically diverse populations. It will likely be important in these future efforts to make the outreach and onboarding of participants as frictionless as possible by optimizing the user interface and user experience. In the interest of improving the experience of the community stakeholders involved in future research efforts, optimizing the application programming interface of the research platform will be a high priority. These efforts will have a significantly greater clinical impact if they can begin to be incorporated into clinical care as screening, patient engagement, and caregiver support tools. Our future efforts will attempt to incorporate each of these aspects as well as develop pathways for web-LABrainS participants to be connected quickly with study staff and health care professionals when mild cognitive impairment or dementia is identified as part of their longitudinal assessment.

Acknowledgments

The authors would like to thank the participants of web-LABrainS for their participation in this feasibility study. We thank John Ruth and Aimee Stewart for retrieving the time-of-day data from the web-LABrainS assessment tool. Finally, we appreciate Robbie Beyl for his statistics consultation.

Funding

We would like to thank the Keller-Lamar Health Foundation for providing the funding to validate the web-LABrainS platform as part of a previous research effort [39].

Conflicts of Interest

None declared.

  1. Newman AB. An overview of the design, implementation, and analyses of longitudinal studies on aging. J Am Geriatr Soc. Oct 2010;58 Suppl 2(Suppl 2):S287-S291. [CrossRef] [Medline]
  2. Marengoni A, Angleman S, Melis R, et al. Aging with multimorbidity: a systematic review of the literature. Ageing Res Rev. Sep 2011;10(4):430-439. [CrossRef] [Medline]
  3. Bektas A, Schurman SH, Sen R, Ferrucci L. Aging, inflammation and the environment. Exp Gerontol. May 2018;105:10-18. [CrossRef] [Medline]
  4. Dawber TR, Kannel WB. The Framingham study. An epidemiological approach to coronary heart disease. Circulation. Oct 1966;34(4):553-555. [CrossRef] [Medline]
  5. Wild K, Howieson D, Webbe F, Seelye A, Kaye J. Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement. Nov 2008;4(6):428-437. [CrossRef] [Medline]
  6. Deeg DJH, van Tilburg T, Smit JH, de Leeuw ED. Attrition in the Longitudinal Aging Study Amsterdam. The effect of differential inclusion in side studies. J Clin Epidemiol. Apr 2002;55(4):319-328. [CrossRef] [Medline]
  7. Young AF, Powers JR, Bell SL. Attrition in longitudinal studies: who do you lose? Aust N Z J Public Health. Aug 2006;30(4):353-361. [CrossRef] [Medline]
  8. Nguyen T, Thomas AJ, Kerr P, et al. Recruiting and retaining community-based participants in a COVID-19 longitudinal cohort and social networks study: lessons from Victoria, Australia. BMC Med Res Methodol. Feb 27, 2023;23(1):54. [CrossRef] [Medline]
  9. Ng SK, Scott R, Scuffham PA. Contactable non-responders show different characteristics compared to lost to follow-up participants: insights from an Australian Longitudinal Birth Cohort Study. Matern Child Health J. Jul 2016;20(7):1472-1484. [CrossRef] [Medline]
  10. Katsuno N, Li PZ, Bourbeau J, et al. Factors associated with attrition in a longitudinal cohort of older adults in the community. Chronic Obstr Pulm Dis. Apr 27, 2023;10(2):178-189. [CrossRef] [Medline]
  11. Huguet N, Kaufmann J, O’Malley J, et al. Using electronic health records in longitudinal studies: estimating patient attrition. Med Care. Jun 2020;58 Suppl 6 Suppl 1(Suppl 6 1):S46-S52. [CrossRef] [Medline]
  12. Jacobsen E, Ran X, Liu A, Chang CCH, Ganguli M. Predictors of attrition in a longitudinal population-based study of aging. Int Psychogeriatr. Aug 2021;33(8):767-778. [CrossRef] [Medline]
  13. Davies K, Kingston A, Robinson L, et al. Improving retention of very old participants in longitudinal research: experiences from the Newcastle 85+ study. PLoS One. 2014;9(10):e108370. [CrossRef] [Medline]
  14. Chatfield MD, Brayne CE, Matthews FE. A systematic literature review of attrition between waves in longitudinal studies in the elderly shows a consistent pattern of dropout between differing studies. J Clin Epidemiol. Jan 2005;58(1):13-19. [CrossRef] [Medline]
  15. Brilleman SL, Pachana NA, Dobson AJ. The impact of attrition on the representativeness of cohort studies of older people. BMC Med Res Methodol. Aug 5, 2010;10:71. [CrossRef] [Medline]
  16. Van Beijsterveldt CEM, van Boxtel MPJ, Bosma H, Houx PJ, Buntinx F, Jolles J. Predictors of attrition in a longitudinal cognitive aging study: the Maastricht Aging Study (MAAS). J Clin Epidemiol. Mar 2002;55(3):216-223. [CrossRef] [Medline]
  17. Helliwell B, Aylesworth R, McDowell I, Baumgarten M, Sykes E. Correlates of nonparticipation in the Canadian Study of Health and Aging. Int Psychogeriatr. 2001;13 Supp 1(S1):49-56. [CrossRef] [Medline]
  18. Zunzunegui MV, Béland F, Gutiérrez-Cuadra P. Loss to follow-up in a longitudinal study on aging in Spain. J Clin Epidemiol. May 2001;54(5):501-510. [CrossRef] [Medline]
  19. Dapp U, Anders J, von Renteln-Kruse W, Golgert S, Meier-Baumgartner HP, Minder CE. The Longitudinal Urban Cohort Ageing Study (LUCAS): study protocol and participation in the first decade. BMC Geriatr. Jul 9, 2012;12(1):35. [CrossRef] [Medline]
  20. Matthews FE, Chatfield M, Freeman C, McCracken C, Brayne C, CFAS M. Attrition and bias in the MRC cognitive function and ageing study: an epidemiological investigation. BMC Public Health. Apr 27, 2004;4(1):12. [CrossRef] [Medline]
  21. Mein G, Johal S, Grant RL, Seale C, Ashcroft R, Tinker A. Predictors of two forms of attrition in a longitudinal health study involving ageing participants: an analysis based on the Whitehall II study. BMC Med Res Methodol. Oct 29, 2012;12(1):164. [CrossRef] [Medline]
  22. Jacomb PA, Jorm AF, Korten AE, Christensen H, Henderson AS. Predictors of refusal to participate: a longitudinal health survey of the elderly in Australia. BMC Public Health. 2002;2(1):4. [CrossRef] [Medline]
  23. Hübner S, Haijen E, Kaelen M, Carhart-Harris RL, Kettner H. Turn on, tune in, and drop out: predictors of attrition in a prospective observational cohort study on psychedelic use. J Med Internet Res. Jul 28, 2021;23(7):e25973. [CrossRef] [Medline]
  24. Goetz ME, Hanfelt JJ, John SE, et al. Rationale and design of the Emory Healthy Aging and Emory Healthy Brain Studies. Neuroepidemiology. 2019;53(3-4):187-200. [CrossRef] [Medline]
  25. Fitzgerald D, Hockey R, Jones M, Mishra G, Waller M, Dobson A. Use of online or paper surveys by Australian women: longitudinal study of users, devices, and cohort retention. J Med Internet Res. Mar 14, 2019;21(3):e10672. [CrossRef] [Medline]
  26. Ashford MT, Jin C, Neuhaus J, et al. Participant completion of longitudinal assessments in an online cognitive aging registry: The role of medical conditions. Alzheimers Dement (N Y). Jan 2024;10(1):e12438. [CrossRef] [Medline]
  27. Weiner MW, Nosheny R, Camacho M, et al. The Brain Health Registry: an internet-based platform for recruitment, assessment, and longitudinal monitoring of participants for neuroscience studies. Alzheimers Dement. Aug 2018;14(8):1063-1076. [CrossRef] [Medline]
  28. Laidra K, Reile R, Havik M, et al. Estonian National Mental Health Study: design and methods for a registry-linked longitudinal survey. Brain Behav. Aug 2023;13(8):e3106. [CrossRef] [Medline]
  29. Ruano L, Sousa A, Severo M, et al. Development of a self-administered web-based test for longitudinal cognitive assessment. Sci Rep. Jan 8, 2016;6:19114. [CrossRef] [Medline]
  30. Staguhn ED, Kirkhart T, Allen L, Campbell CM, Wegener ST, Castillo RC. Predictors of participation in online self-management programs: a longitudinal observational study. Rehabil Psychol. May 2024;69(2):102-109. [CrossRef] [Medline]
  31. Chen Y, Ji H, Shen Y, Liu D. Chronic disease and multimorbidity in the Chinese older adults’ population and their impact on daily living ability: a cross-sectional study of the Chinese Longitudinal Healthy Longevity Survey (CLHLS). Arch Public Health. Feb 1, 2024;82(1):17. [CrossRef] [Medline]
  32. Testad I, Aakre JA, Gjestsen MT, et al. Web-based assessment of cognition and dementia risk factors in over 3000 Norwegian adults aged 50 years and older: cross-sectional PROTECT Norge Study. JMIR Aging. Aug 25, 2025;8:e69867. [CrossRef] [Medline]
  33. Erb MK, Calcagno N, Brown R, et al. Longitudinal comparison of the self-administered ALSFRS-RSE and ALSFRS-R as functional outcome measures in ALS. Amyotroph Lateral Scler Frontotemporal Degener. Aug 2024;25(5-6):570-580. [CrossRef] [Medline]
  34. Taquet M, Skorniewska Z, De Deyn T, et al. Cognitive and psychiatric symptom trajectories 2-3 years after hospital admission for COVID-19: a longitudinal, prospective cohort study in the UK. Lancet Psychiatry. Sep 2024;11(9):696-708. [CrossRef] [Medline]
  35. Yu T, Chen J, Gu NY, Hay JW, Gong CL. Predicting panel attrition in longitudinal HRQoL surveys during the COVID-19 pandemic in the US. Health Qual Life Outcomes. Jul 6, 2022;20(1):104. [CrossRef] [Medline]
  36. Hansen T, Nilsen TS, Yu B, et al. Locked and lonely? A longitudinal assessment of loneliness before and during the COVID-19 pandemic in Norway. Scand J Public Health. Nov 2021;49(7):766-773. [CrossRef] [Medline]
  37. Ramiz L, Contrand B, Rojas Castro MY, et al. A longitudinal study of mental health before and during COVID-19 lockdown in the French population. Global Health. Mar 22, 2021;17(1):29. [CrossRef] [Medline]
  38. Yu E, Hagens S. Socioeconomic disparities in the demand for and use of virtual visits among senior adults during the COVID-19 pandemic: cross-sectional study. JMIR Aging. Mar 22, 2022;5(1):e35221. [CrossRef] [Medline]
  39. Calamia M, Weitzner DS, De Vito AN, Bernstein JPK, Allen R, Keller JN. Feasibility and validation of a web-based platform for the self-administered patient collection of demographics, health status, anxiety, depression, and cognition in community dwelling elderly. PLoS One. 2021;16(1):e0244962. [CrossRef] [Medline]
  40. Peel C, Sawyer Baker P, Roth DL, Brown CJ, Brodner EV, Allman RM. Assessing mobility in older adults: the UAB Study of Aging Life-Space Assessment. Phys Ther. Oct 2005;85(10):1008-1119. [CrossRef] [Medline]
  41. Van Rossum G, Drake FL. Python 3 Reference Manual. CreateSpace; 2009.
  42. Harris CR, Millman KJ, van der Walt SJ, et al. Array programming with NumPy. Nature. Sep 2020;585(7825):357-362. [CrossRef] [Medline]
  43. Virtanen P, Gommers R, Oliphant TE, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods. Mar 2020;17(3):261-272. [CrossRef] [Medline]
  44. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. arXiv. Preprint posted online on Jun 5, 2018. [CrossRef]
  45. Seabold S, Perktold J. Statsmodels: econometric and statistical modeling with Python. Presented at: 9th Python in Science Conference; Jun 28 to Jul 3, 2010:57-61; Austin, Texas, USA. [CrossRef]
  46. McKinney W. Data structures for statistical computing in Python. Presented at: Proceedings of the 9th Python in Science Conference; Jun 28 to Jul 3, 2010:56-61; Austin, Texas, USA. [CrossRef]
  47. Hunter JD. Matplotlib: a 2D graphics environment. Comput Sci Eng. 2007;9(3):90-95. [CrossRef]
  48. Waskom ML. seaborn: statistical data visualization. J Open Source Softw. 2021;6(60):3021. [CrossRef]


IRB: institutional review board
Lasso: Least Absolute Shrinkage and Selection Operator
OR: odds ratio
PBRC: Pennington Biomedical Research Center
web-LABrainS: web-based Louisiana Aging Brain Study


Edited by Darren Liu; submitted 14.Nov.2024; peer-reviewed by Lyudmila Bovsh, Michael Moore, Wan-Tai Au-Yeung; final revised version received 21.Sep.2025; accepted 20.Oct.2025; published 13.Nov.2025.

Copyright

© Luke Daniel Braun, H Raymond Allen, Jeffrey N Keller. Originally published in JMIR Aging (https://aging.jmir.org), 13.Nov.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Aging, is properly cited. The complete bibliographic information, a link to the original publication on https://aging.jmir.org, as well as this copyright and license information must be included.