Objectives: To gather preliminary data on correlations among psycholinguistic measures, self-report of cognitive function, and performance on neurocognitive tests in breast cancer survivors.
Sample & Setting: Participants were breast cancer survivors who reported issues with cognitive function after completion of chemotherapy. This secondary analysis used data from participants in parent studies at two National Cancer Institute–designated cancer centers.
Methods & Variables: Qualitative interview transcripts (N = 52) underwent psycholinguistic analyses for grammatical and semantic complexity. Relationships among six psycholinguistic variables, self-report of cognitive function, and performance on neurocognitive tests were examined.
Results: Three grammatical complexity variables had significant positive correlations with self-report of cognitive function. One semantic complexity variable had a significant positive correlation with performance on delayed recall neurocognitive tests.
Implications for Nursing: Results suggest that psycholinguistic analysis may be used to assess cognitive function among breast cancer survivors. Confirmatory studies are needed to establish the correlation between psycholinguistic measures, self-report of cognitive function, and domain-specific tests of neurocognitive performance, as well as to evaluate longitudinal sensitivity to change.
Diagnosis and treatment of non–central nervous system malignancies have a profound impact on survivors’ cognitive function and health-related quality of life (QOL) (Allen et al., 2018). Individuals who are treated for breast cancer report significant changes in cognitive function that lead to negative professional, domestic, and social outcomes (Myers, 2012; Von Ah et al., 2016). Incongruity between self-report of cognitive function and performance on standard neuropsychologic tests confounds assessment of cognitive function among cancer survivors (Hermelink et al., 2010).
A full battery of neurocognitive tests and functional neuroimaging studies may not always be clinically accessible and can be mentally, physically, and financially burdensome for patients (Williams et al., 2021). Development of a sensitive, succinct, and clinically accessible measure of cognitive function is highly desirable for the population of cancer survivors. Interestingly, language in everyday speech depends on cognitive processes and reflects changes in cognitive function (Kemper et al., 1989). Psycholinguistic research has demonstrated that patterns of change in grammar complexity, vocabulary, and idea density are representative of changes in cognitive function related to normal aging, mild cognitive impairment, vascular dementia, and other neurodegenerative disorders (Aramaki et al., 2016; Chen et al., 2009; Kemper et al., 1989, 2001). Thus, psycholinguistic analysis of language complexity in speech may be useful for assessing changes in cognitive function without the burden of extensive neuropsychologic testing or neuroimaging.
The authors have begun to explore the use of psycholinguistic analysis to assess cognitive function in cancer survivors. An initial substudy with men receiving androgen deprivation therapy for prostate cancer demonstrated feasibility for recording, transcribing, and coding short segments of participants’ speech during clinical interviews (Williams et al., 2021). Correlations between psycholinguistic measures and neurocognitive tests were strongest for participants’ answers to open-ended questions requiring reflection (Williams et al., 2021).
Among breast cancer survivors who report changes in cognitive function, one of the most common concerns relates to verbal fluency, specifically, complaints about difficulties with “word-finding” (Myers, 2012). This impedes individuals’ ability to participate in conversations and can impair their ability to function in social and work settings. Likewise, issues with short-term memory, delays in processing speed, and decreases in executive function negatively affect social and work performance and can cause frustration. Lack of correlation between survivors’ self-report and performance on neurocognitive tests makes assessment difficult. Despite survivors’ reports of significant cognitive difficulties compared to before their diagnosis and treatment, most patients perform well within normal limits on neurocognitive tests.
The current secondary analysis was conducted to gather preliminary data on correlations among psycholinguistic measures, self-report of cognitive function, and performance on neuropsychologic tests for breast cancer survivors. The theoretical basis for this work was derived from the study of psychological and neurobiologic linkages between cognitive function and language production, grounded in the theories of psycholinguistics and neurolinguistics (Balamurugan, 2018; Fernández & Cairns, 2010). The authors postulated that psycholinguistic changes in speech complexity and vocabulary would correlate with participants’ self-report of cognitive function and be in the anticipated direction for neurocognitive test performance (see Figure 1).
Following institutional review board approval and execution of a data use agreement, de-identified transcripts of 52 qualitative interviews were analyzed. These interviews were derived from two parent studies conducted at the University of Kansas Cancer Center in Kansas City (n = 30, 6 months to 5 years postchemotherapy) and Indiana University in Indianapolis (n = 22, 12 months or more postchemotherapy). In both parent studies, eligible participants met the following criteria: (a) female, (b) postmenopausal, (c) diagnosed with nonmetastatic disease, (d) at least six months postchemotherapy, and (e) had self-reported issues with cognitive function (participants answered “yes” when asked if they were experiencing any difficulties with memory and/or concentration that negatively affected self-esteem or interfered with daily life). Participants who scored less than 24 on a Mini-Mental State Examination (severe cognitive impairment) or who were diagnosed with Alzheimer disease, related dementias, or other neurologic conditions that would preclude informed consent were excluded.
For this pilot secondary analysis, the authors selected available data that closely matched in topic and context (breast cancer survivorship experiences with cognitive dysfunction). The interviews from both studies included slightly varied questions and wording but focused on similar topics, with equivalent length and depth of responses. The similarities in participant eligibility, qualitative interviewing, and availability of written transcripts, as well as congruence for self-report instrumentation, made it possible to combine the two parent study transcript samples for secondary analysis, providing a larger sample for analysis than from either study alone. Because of variation in neurocognitive tests used in the two parent studies, pooled analysis was not feasible, but it was possible to explore correlations with psycholinguistic measures for smaller sample subsets.
Both parent studies used similar sets of open-ended questions about participants’ experiences with cognitive issues during and after diagnosis and treatment for breast cancer (see Figure 2). Of note, comparisons of language complexity are not contingent on interview question content or results of the qualitative analysis for the parent study. Participants’ answers were audio recorded and transcribed verbatim. These transcripts were coded by trained graduate research assistants to assess grammatical complexity and semantic complexity. Grammatical complexity included the following variables: (a) mean length of utterance (MLU), meaning the average number of words in each utterance; (b) right-branching clauses, meaning clauses following the main clause; (c) left-branching clauses, meaning clauses preceding the main clause; (d) main clauses, meaning the clause containing the primary noun–verb for the utterance; and (e) incomplete utterances, meaning sentence fragments or abandoned utterances. Semantic complexity included type–token ratio (TTR), meaning the ratio of unique root words (types) to total words (tokens), a measure of vocabulary diversity.
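To make the two semantic and length measures concrete, the following is a minimal sketch of how mean length of utterance and type–token ratio could be computed from a list of transcribed utterances. It is illustrative only: the study itself used SALT software, which also lemmatizes words to their roots before counting types, whereas this sketch counts surface word forms. The sample utterances are invented for illustration.

```python
import re


def mlu_and_ttr(utterances):
    """Compute mean length of utterance (MLU) and type-token ratio (TTR)
    for a list of transcribed utterances. Illustrative sketch only; the
    study used SALT software, which also reduces words to root forms."""
    tokens = []       # every word across all utterances
    lengths = []      # word count per non-empty utterance
    for utt in utterances:
        words = re.findall(r"[a-z']+", utt.lower())
        if words:
            lengths.append(len(words))
            tokens.extend(words)
    mlu = sum(lengths) / len(lengths)          # average words per utterance
    ttr = len(set(tokens)) / len(tokens)       # unique words / total words
    return mlu, ttr


# Hypothetical sample utterances (not from the study transcripts)
sample = [
    "I forget words in the middle of a sentence.",
    "It happens at work.",
    "My memory was sharper before the chemotherapy.",
]
mlu, ttr = mlu_and_ttr(sample)
```

A higher TTR indicates more diverse vocabulary; repeated words lower the ratio, which is why transcript length should be comparable across participants when TTR is used for comparison.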
Self-report of cognitive function in both parent studies was evaluated using scores on the Functional Assessment of Cancer Therapy–Cognitive Function (FACT-Cog) perceived cognitive impairments (PCI) (range of 0–72), perceived cognitive abilities (PCA) (range of 0–28), and QOL (range of 0–16) subscales. Higher subscale scores indicate better perceived cognitive function. The neurocognitive tests used in both studies have been extensively validated and are psychometrically robust (see Table 1) (Strauss et al., 2006). Neurocognitive test variables for the University of Kansas Cancer Center participants were the Trail Making Test (TMT) Parts A (processing speed) and B (executive function). The TMT Parts A and B are timed tests in which participants are instructed to connect a series of circles filled with numbers (in numeric order) and letters (alternating between numeric and alphabetical order) as quickly as they can. Lower scores (time in seconds to complete) indicate better cognitive function. Neurocognitive test variables for the Indiana University participants included scores on the Rey Auditory Verbal Learning Test (sum of scores from trials 1–5 to recall a list of 15 unrelated words, immediate memory), and the Rivermead Behavioural Paragraph Recall Test (long-term delay, delayed memory), as well as the total score (ability to correctly pair a series of specific numbers and geometric figures in 90 seconds) and errors (divided attention) on the Symbol Digit Modalities Test (SDMT). Higher scores (correct responses) indicate better cognitive function for all tests except SDMT errors and TMT times in seconds.
Training for coding practice transcripts was conducted for four research assistants until 90% or greater agreement was achieved for each psycholinguistic metric. Intercoder reliability was checked for 5% of the sample to confirm ongoing reliability above 85% agreement.
The transcripts were segmented into utterances, meaning statements that typically reflect sentences or sentence fragments (Kemper et al., 1989). Complete and incomplete utterances were labeled. Each utterance was coded for the type of noun–verb clause(s) it contained. Brackets were used to identify each main, left, and right subordinate embedded clause. Main clauses have a subject and predicate and represent the primary idea of the utterance. Compound and complex utterances contain multiple noun–verb clauses, mainly right-branching clauses, which come after the main clause, and left-branching clauses, which come before the main clause. Left-branching clauses are more cognitively demanding because the idea conveyed in the left-branching clause must be held in working memory while the remaining main clause is interpreted. Main, right-branching, and left-branching clauses in a sentence might be coded as follows: “And so when I’m on [LEFT] a text message I can [MAIN] look back and see exactly what I [RIGHT] said.”
Coded utterances were tabulated using Systematic Analysis of Language Transcripts software, version 20 (Miller & Chapman, 2022). MLU and TTR were computed using the automated program. Measures of specific clauses (main, left-branching, and right-branching) were computed as means (e.g., the mean number of left-branching clauses per utterance within each transcript).
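The tabulation step above can be sketched as follows. This sketch assumes the bracket-tag convention shown in the coding example ("[MAIN]", "[LEFT]", "[RIGHT]"); the actual tabulation was performed in SALT, and the coded utterances here are invented for illustration.

```python
def clause_means(coded_utterances):
    """Mean clauses per utterance from bracket-coded transcripts.
    Assumes the [MAIN]/[LEFT]/[RIGHT] tag convention shown in the
    coding example; the study's tabulation used SALT software."""
    tags = ("MAIN", "LEFT", "RIGHT")
    counts = {t: 0 for t in tags}
    for utt in coded_utterances:
        for t in tags:
            counts[t] += utt.count(f"[{t}]")   # tally each clause tag
    n = len(coded_utterances)
    return {t: counts[t] / n for t in tags}    # mean per utterance


# Hypothetical coded utterances (first one adapted from the text's example)
coded = [
    "And so when I'm on [LEFT] a text message I can [MAIN] "
    "look back and see exactly what I [RIGHT] said.",
    "I [MAIN] lose my train of thought.",
]
means = clause_means(coded)
```

Computing means per utterance, rather than raw counts, keeps transcripts of different lengths comparable across participants.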
Pearson’s correlation coefficient at significance level 0.05 was used to examine the relationship between each psycholinguistic measure and each neurocognitive measure. Psycholinguistic correlations for the FACT-Cog self-report subscales were calculated for the transcripts from all 52 participants. Psycholinguistic correlations with neurocognitive test scores were conducted separately for each parent study.
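As a sketch of the analysis just described, the following computes Pearson's r with a 95% confidence interval via the Fisher z transform, matching the r and 95% CI reporting format in the results. The study's analyses were run in standard statistical software, not this code, and the example data are hypothetical.

```python
import math


def pearson_r_ci(x, y):
    """Pearson's r with a 95% CI via the Fisher z transform.
    Illustrative sketch; the study used standard statistical software."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    # Fisher z = atanh(r) is approximately normal with SE = 1/sqrt(n - 3)
    z, se = math.atanh(r), 1 / math.sqrt(n - 3)
    zcrit = 1.96  # two-sided critical value for a 95% CI
    return r, (math.tanh(z - zcrit * se), math.tanh(z + zcrit * se))


# Hypothetical paired scores (e.g., a psycholinguistic measure vs. PCI)
r, (lo, hi) = pearson_r_ci([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
```

Note that with very small n, as in the subset analyses here, the Fisher interval is wide; a CI spanning zero corresponds to a nonsignificant correlation at the 0.05 level.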
All participants were female; most were well educated, non-Hispanic, White, diagnosed with stage II breast cancer, and working full-time. Participants’ sociodemographic characteristics are outlined in Table 2. No significant differences were noted between the two parent studies. Significant positive correlations were observed between participants’ self-report of PCI and three measures of speech complexity: mean length of utterance in words (r = 0.38, 95% confidence interval [CI] [0.12, 0.59]), mean number of main clauses (r = 0.39, 95% CI [0.14, 0.6]), and mean number of right-branching clauses (r = 0.39, 95% CI [0.13, 0.6]). PCI was negatively correlated with the percentage of incomplete utterances (r = –0.315, 95% CI [–0.54, –0.05]). Significant correlations with PCA mirrored these results (r = 0.403, 95% CI [0.15, 0.61]; r = 0.359, 95% CI [0.1, 0.58]; and r = 0.419, 95% CI [0.17, 0.62], respectively), although there was no correlation with incomplete utterances. QOL positively correlated with the mean number of right-branching clauses (r = 0.342, 95% CI [0.08, 0.56]). Four significant correlations were noted between psycholinguistic variables and neurocognitive test scores. TTR (vocabulary diversity) positively correlated with immediate and delayed recall (Rivermead; r = 0.533, 95% CI [0.14, 0.78]; and r = 0.563, 95% CI [0.19, 0.8], respectively) and negatively correlated with SDMT errors (r = –0.423, 95% CI [–0.72, –0.002]). Mean left-branching clauses per utterance negatively correlated with SDMT total scores (r = –0.527, 95% CI [–0.78, –0.14]).
Significant correlations were observed between participants’ self-report of cognitive function and psycholinguistic measures of speech complexity, and all correlations were in the expected direction. Higher levels of self-reported cognitive function (PCI, PCA, QOL) positively correlated with measures of higher speech complexity and negatively correlated with measures of lower speech complexity (percent of incomplete utterances).
Associations between psycholinguistic variables and performance on neurocognitive tests were mixed. Three of the correlations were in the expected direction. Higher levels of vocabulary diversity (TTR) were positively correlated with immediate and delayed recall (Rivermead) and negatively correlated with SDMT errors. Surprisingly, although left-branching clauses are the most grammatically complex of the clauses measured, the mean of left-branching clauses per utterance was negatively correlated with the SDMT score.
The low number of neurocognitive correlations and the counterintuitive direction of correlation in one variable may be explained by the smaller sample size for the analysis (n = 22 and n = 30, as opposed to N = 52). These secondary analysis results differed somewhat from the authors’ earlier feasibility pilot work with men receiving androgen deprivation therapy for prostate cancer, in which no significant correlation was found between psycholinguistic variables and self-report in the study’s very small sample (n = 13). One possible explanation for this difference may be a tendency for women with breast cancer to report more issues with cognitive function on patient-reported outcome questionnaires than men with prostate cancer.
Limitations of this study include the secondary analysis design, small sample size, number of statistical tests performed, and the various types of neurocognitive testing used to collect data for psycholinguistic analysis comparisons. Variability in the available neurocognitive tests between the two parent study samples limited the ability to pool the data for analyses.
The results of this study provide preliminary evidence that psycholinguistic analyses may be a useful surrogate measure of cognitive function for breast cancer survivors. Confirmatory studies are required to further test the relationships among psycholinguistic measures, self-report of cognitive function, and domain-specific tests of neurocognitive performance, and to test longitudinal sensitivity to change. Although current processes for psycholinguistic coding to determine language complexity are time- and personnel-intensive, advances in voice transcription and natural language processing have potential for automation. In the future, an application with transcription and complexity analysis functions could provide cost-effective, immediate feedback on spoken language complexity for clinical use. The digitization of speech analysis algorithms using mobile devices in the clinic setting (either in person or through telehealth platforms) would be a welcome tool for nurses in oncology settings who are searching for sensitive and practical methods to assess cognitive function in their patients. Implementation of a validated mobile application in practice settings would require clinicians to be educated about how cognition affects language, and educational materials explaining how language reflects cognitive changes would be important for patient education.
Jamie S. Myers, PhD, RN, AOCNS®, FAAN, is a research associate professor in the School of Nursing at the University of Kansas in Edwardsville; Diane Von Ah, PhD, RN, FAAN, is a distinguished professor of cancer research and the director of cancer research in the College of Nursing at the Ohio State University in Columbus; Jianghua He, PhD, is an associate professor, and Jaromme Kim, MA, is a PhD student and graduate research assistant, both in the Department of Biostatistics and Data Science at the University of Kansas Medical Center in Kansas City; Mika Miyashita, PhD, RN, is a professor in the Graduate School of Biomedical Health Sciences at Hiroshima University in Japan; Yuki Asakura, PhD, RN, ACHPN®, ACNS-BC, OCN®, is a clinical nurse specialist and palliative care advanced practice nurse at Centura Health-St. Francis Health Services and Parker Adventist Hospital in Highlands Ranch, CO; and Kristine Williams, RN, PhD, APRN-BC, FGSA, FAAN, is the E. Jean Hill Professor in the School of Nursing at the University of Kansas in Lawrence. This work was funded, in part, by the Center for Enhancing Quality of Life, Indiana University School of Nursing (principal investigator: Von Ah). Williams’ E. Jean Hill Professorship provided support for psycholinguistic analyses. Myers, Von Ah, Asakura, and Williams contributed to the conceptualization and design. Myers, Von Ah, and Asakura completed the data collection. He and Kim provided statistical support. Myers, Von Ah, and Williams provided the analysis. Myers, Von Ah, Miyashita, and Williams contributed to the manuscript preparation. Myers can be reached at firstname.lastname@example.org, with copy to ONFEditor@ons.org. (Submitted November 2021. Accepted March 20, 2022.)
Allen, D.H., Myers, J.S., Jansen, C.E., Merriman, J.D., & Von Ah, D. (2018). Assessment and management of cancer- and cancer treatment-related cognitive impairment. Journal for Nurse Practitioners, 14(4), 217–224. https://doi.org/10.1016/j.nurpra.2017.11.026
Aramaki, E., Shikata, S., Miyabe, M., & Kinoshita, A. (2016). Vocabulary size in speech may be an early indicator of cognitive impairment. PLOS ONE, 11(5), e0155195. https://doi.org/10.1371/journal.pone.0155195
Balamurugan, K. (2018). Introduction to psycholinguistics—A review. Studies in Linguistics and Literature, 2(2), 110. https://doi.org/10.22158/sll.v2n2p110
Chen, C., Xue, G., Mei, L., Chen, C., & Dong, Q. (2009). Cultural neurolinguistics. Progress in Brain Research, 178, 159–171. https://doi.org/10.1016/s0079-6123(09)17811-1
Fernández, E.M., & Cairns, H.S. (2010). Fundamentals of psycholinguistics. Wiley-Blackwell.
Hermelink, K., Küchenhoff, H., Untch, M., Bauerfeind, I., Lux, M.P., Bühner, M., . . . Münzel, K. (2010). Two different sides of ‘chemobrain’: Determinants and nondeterminants of self-perceived cognitive dysfunction in a prospective, randomized, multicenter study. Psycho-Oncology, 19(12), 1321–1328. https://doi.org/10.1002/pon.1695
Kemper, S., Kynette, D., Rash, S., O’Brien, K., & Sprott, R. (1989). Life-span changes to adults’ language: Effects of memory and genre. Applied Psycholinguistics, 10(1), 49–66. https://doi.org/10.1017/S0142716400008419
Kemper, S., Marquis, J., & Thompson, M. (2001). Longitudinal change in language production: Effects of aging and dementia on grammatical complexity and propositional content. Psychology and Aging, 16(4), 600–614. https://doi.org/10.1037/0882-7974.16.4.600
Miller, J., & Chapman, R. (2022). Systematic analysis of language transcripts (SALT). SALT Software.
Myers, J.S. (2012). Chemotherapy-related cognitive impairment: The breast cancer experience. Oncology Nursing Forum, 39(1), E31–E40. https://doi.org/10.1188/12.ONF.E31-E40
Strauss, E., Sherman, E.M.S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). Oxford University Press.
Von Ah, D., Storey, S., Crouch, A., Johns, S.A., Dodson, J., & Dutkevitch, S. (2016). Relationship of self-reported attentional fatigue to perceived work ability in breast cancer survivors. Cancer Nursing, 40(6), 464–470. https://doi.org/10.1097/ncc.0000000000000444
Williams, K., Myers, J.S., Hu, J., Manson, A., & Maliski, S.L. (2021). Psycholinguistic screening for cognitive decline in cancer survivors: A feasibility study. Oncology Nursing Forum, 48(5), 474–480. https://doi.org/10.1188/21.ONF.474-480