OBJECTIVE: To study the association between COVID-19 vaccination and the risk of post-COVID-19 cardiac and thromboembolic complications. METHODS: We conducted a staggered cohort study based on national vaccination campaigns using electronic health records from the UK, Spain and Estonia. Vaccine rollout was grouped into four stages with predefined enrolment periods. Each stage included all individuals eligible for vaccination, with no previous SARS-CoV-2 infection or COVID-19 vaccine at the start date. Vaccination status was used as a time-varying exposure. Outcomes included heart failure (HF), venous thromboembolism (VTE) and arterial thrombosis/thromboembolism (ATE) recorded in four time windows after SARS-CoV-2 infection: 0-30, 31-90, 91-180 and 181-365 days. Propensity score overlap weighting and empirical calibration were used to minimise observed and unobserved confounding, respectively. Fine-Gray models estimated subdistribution hazard ratios (sHR). Random-effects meta-analyses were conducted across staggered cohorts and databases. RESULTS: The study included 10.17 million vaccinated and 10.39 million unvaccinated people. Vaccination was associated with reduced risks of acute (30-day) and post-acute COVID-19 VTE, ATE and HF: for example, meta-analytic sHRs of 0.22 (95% CI 0.17 to 0.29), 0.53 (0.44 to 0.63) and 0.45 (0.38 to 0.53), respectively, for 0-30 days after SARS-CoV-2 infection, while in the 91-180 days window the sHRs were 0.53 (0.40 to 0.70), 0.72 (0.58 to 0.88) and 0.61 (0.51 to 0.73), respectively. CONCLUSIONS: COVID-19 vaccination reduced the risk of post-COVID-19 cardiac and thromboembolic outcomes. These effects were more pronounced for acute COVID-19 outcomes, consistent with known reductions in disease severity following breakthrough versus unvaccinated SARS-CoV-2 infection.
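The propensity score overlap weighting used in the study above has a simple closed form: each treated individual is weighted by 1 − e(x) and each control by e(x), where e(x) is the estimated propensity score, so the hardest-to-classify individuals contribute most. A minimal sketch with hypothetical scores (the function name and data are illustrative, not from the study):

```python
# Overlap weighting: treated subjects get weight 1 - e(x), controls get e(x),
# where e(x) is the estimated propensity score (probability of treatment).
def overlap_weights(propensity, treated):
    return [(1.0 - e) if t else e for e, t in zip(propensity, treated)]

# Hypothetical propensity scores for three treated and three control subjects
ps = [0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
tr = [1, 1, 1, 0, 0, 0]
weights = overlap_weights(ps, tr)
# Subjects whose treatment status was most predictable (e.g. e = 0.8 treated,
# e = 0.2 control) are down-weighted; those near e = 0.5 dominate.
```

A convenient property of these weights is that, when e(x) is fitted by logistic regression, the weighted covariate means are exactly balanced between the two groups.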
BACKGROUND: Benzene affects human health through environmental exposure in addition to occupational contact. However, few studies have examined the associations between long-term exposure to low-level ambient benzene and mortality risks in non-occupational settings. METHODS: This prospective cohort study consists of 393,042 participants without stroke, myocardial infarction, or cancer at baseline from the UK Biobank. Annual average concentrations of benzene for each year during follow-up were measured using air dispersion models. The main outcomes were all-cause mortality as well as mortality from specific causes. Cox proportional hazards models with time-varying exposure measurements were used to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs) for mortality risks. Restricted cubic spline models were used to estimate the exposure-response relationships. RESULTS: With each interquartile range increase in the average annual concentrations of benzene, the adjusted HRs and 95% CIs of mortality risk from all-cause, cardiovascular disease, cancer, and respiratory disease were 1.26 (1.24 to 1.27), 1.24 (1.21 to 1.28), 1.27 (1.25 to 1.29), and 1.25 (1.20 to 1.30), respectively. The monotonically increasing exposure-response curves showed no threshold and no plateau within the observed concentration range. Furthermore, the effect of benzene exposure on mortality persisted across different subgroups and was somewhat stronger in younger and white people (P for interaction <0.05). CONCLUSIONS: Long-term exposure to low-level ambient benzene significantly increases mortality risk in the general population. Ambient benzene represents a potential threat to public health, and further investigations are needed to support timely pollution regulation and health protection.
WHAT IS ALREADY KNOWN ABOUT THIS TOPIC?: Respiratory diseases (RDs) are the primary cause of death in older adults in China. However, there is limited evidence regarding the disparity in mortality rates of RDs between urban and rural areas among the elderly population. WHAT IS ADDED BY THIS REPORT?: The age-standardized mortality rate (ASMR) due to RDs in the elderly population in both urban and rural areas of China has shown a consistent decrease. This trend is observed in both males and females. However, there was no significant change in the average annual percent change of the ASMR for pneumonia among the urban elderly population and rural elderly men throughout the study period. WHAT ARE THE IMPLICATIONS FOR PUBLIC HEALTH PRACTICE?: Efforts should be made in China to reduce mortality from chronic lower respiratory disease and pneumonia among the elderly, particularly in urban populations.
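The ASMR cited in this report is a direct standardization: age-specific death rates are weighted by a fixed standard population, so urban-rural comparisons are not distorted by differing age structures. A minimal sketch with hypothetical rates and weights (not the report's data):

```python
def asmr(age_specific_rates, standard_pop):
    # Direct standardization: weight each age-specific rate (per 100,000)
    # by the share of that age band in a fixed standard population.
    total = sum(standard_pop)
    return sum(r * p for r, p in zip(age_specific_rates, standard_pop)) / total

# Hypothetical RD death rates per 100,000 in three elderly age bands
rates = [120.0, 480.0, 1500.0]       # 65-74, 75-84, 85+
standard = [60000, 30000, 10000]     # hypothetical standard population
print(round(asmr(rates, standard), 1))  # → 366.0
```

Because the same standard population is applied to every subgroup, differences in ASMR between urban and rural populations reflect differences in age-specific rates rather than age composition.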
OBJECTIVES: Early-onset osteoarthritis (OA) is an emerging health issue amidst the escalating prevalence of overweight and obesity. However, there are scant data on its disease burden, economic burden and the burden attributable to high body mass index (BMI). METHODS: Using data from the Global Burden of Diseases Study 2019, we examined the numbers of incident cases, prevalent cases and years lived with disability (YLDs), and the corresponding age-standardised rates, for early-onset OA (diagnosis before age 55) from 1990 to 2019. The case definition was symptomatic and radiographically confirmed OA in any joint. The average annual percentage changes (AAPCs) of the age-standardised rates were calculated to quantify changes. We estimated the economic burden of early-onset OA and the burden attributable to high BMI. RESULTS: From 1990 to 2019, the global incident cases, prevalent cases and YLDs of early-onset OA doubled, and 52.31% of incident OA cases in 2019 were under 55 years. The age-standardised rates of incidence, prevalence and YLDs increased globally and for countries in all Sociodemographic Index (SDI) quintiles (all AAPCs>0, p<0.05), with the fastest increases in low-middle SDI countries. 98.04% of countries exhibited increasing trends in all age-standardised rates. Early-onset OA accounted for US$46.17 billion in healthcare expenditure and US$60.70 billion in productivity loss in 2019. The attributable proportion of high BMI for early-onset OA increased globally from 9.41% (1990) to 15.29% (2019). CONCLUSIONS: Early-onset OA is a growing global health problem, causing substantial economic costs in most countries. Targeted implementation of cost-effective policies and preventive interventions is required to address this growing health challenge.
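The AAPC metric used above is conventionally derived from a log-linear fit of the age-standardised rate on calendar year: if ln(rate) = a + b·year, then AAPC = (e^b − 1) × 100. A self-contained sketch (illustrative data, not GBD estimates):

```python
import math

def aapc(years, rates):
    # Log-linear regression ln(rate) = a + b*year by ordinary least squares;
    # AAPC is the slope back-transformed to a percent change per year.
    n = len(years)
    ybar = sum(years) / n
    lbar = sum(math.log(r) for r in rates) / n
    b = (sum((y - ybar) * (math.log(r) - lbar) for y, r in zip(years, rates))
         / sum((y - ybar) ** 2 for y in years))
    return (math.exp(b) - 1) * 100

# A rate series growing exactly 2% per year recovers AAPC = 2.0
rates = [100 * 1.02 ** t for t in range(5)]
print(round(aapc(list(range(5)), rates), 2))  # → 2.0
```

An AAPC above zero with p < 0.05, as reported for every SDI quintile, corresponds to a fitted slope b significantly greater than zero on the log scale.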
BACKGROUND: Accurate prognostication of oncological outcomes is crucial for the optimal management of patients with renal cell carcinoma (RCC) after surgery. Previous prediction models were developed mainly from retrospective data in Western populations, and their predictive accuracy remains limited in contemporary, prospective validation. We aimed to develop contemporary RCC prognostic models for recurrence and overall survival (OS) using prospective population-based patient cohorts and to compare their performance with the most widely used existing models. METHODS: In this prospective analysis and external validation study, the development set included 11,128 consecutive patients with non-metastatic RCC treated at a tertiary urology center in China between 2006 and 2022, and the validation set included 853 patients treated at 13 medical centers in the USA between 1996 and 2013. The primary outcome was progression-free survival (PFS), and the secondary outcome was OS. Multivariable Cox regression was used for variable selection and model development. Model performance was assessed by discrimination [Harrell's C-index and time-dependent areas under the curve (AUC)] and calibration (calibration plots). Models were validated internally by bootstrapping and externally by examining their performance in the validation set. The predictive accuracy of the models was compared with that of validated models commonly used in clinical trial designs and with recently developed models without extensive validation. RESULTS: Of the 11,128 patients included in the development set, 633 PFS and 588 OS events occurred over a median follow-up of 4.3 years [interquartile range (IQR) 1.7-7.8]. Six common clinicopathologic variables (tumor necrosis, size, grade, thrombus, nodal involvement, and perinephric or renal sinus fat invasion) were included in each model.
The models demonstrated similar C-indices in the development set (0.790 [95% CI 0.773-0.806] for PFS and 0.793 [95% CI 0.773-0.811] for OS) and in the external validation set (0.773 [0.731-0.816] and 0.723 [0.731-0.816]). A relatively stable predictive ability of the models was observed in the development set (PFS: time-dependent AUC 0.832 at 1 year to 0.760 at 9 years; OS: 0.828 at 1 year to 0.794 at 9 years). The models were well calibrated, and their predictions correlated with the observed outcomes at 3, 5, and 7 years in both the development and validation sets. The present models outperformed existing prognostic models, whose C-indices ranged from 0.722 to 0.755 (all P < 0.0001) for PFS and from 0.680 to 0.744 (all P < 0.0001) for OS. The predictive accuracy of the current models was robust in patients with clear-cell and non-clear-cell RCC. CONCLUSIONS: Based on a prospective population-based patient cohort, the newly developed prognostic models were externally validated and outperformed the currently available models for predicting recurrence and survival in patients with non-metastatic RCC after surgery. The current models have the potential to aid in clinical trial design and facilitate clinical decision-making for both clear-cell and non-clear-cell RCC patients at varying risk of recurrence and survival.
BACKGROUND: Although vaccines have proved effective to prevent severe COVID-19, their effect on preventing long-term symptoms is not yet fully understood. We aimed to evaluate the overall effect of vaccination to prevent long COVID symptoms and assess the comparative effectiveness of the most used vaccines (ChAdOx1 and BNT162b2). METHODS: We conducted a staggered cohort study using primary care records from the UK (Clinical Practice Research Datalink [CPRD] GOLD and AURUM), Catalonia, Spain (Information System for Research in Primary Care [SIDIAP]), and national health insurance claims from Estonia (CORIVA database). All adults who were registered for at least 180 days as of Jan 4, 2021 (the UK), Feb 20, 2021 (Spain), and Jan 28, 2021 (Estonia) comprised the source population. Vaccination status was used as a time-varying exposure, staggered by vaccine rollout period. Vaccinated people were further classified by vaccine brand according to their first dose received. The primary outcome, long COVID, was defined as having at least one of 25 WHO-listed symptoms between 90 and 365 days after the date of a PCR-positive test or clinical diagnosis of COVID-19, with no history of that symptom 180 days before SARS-CoV-2 infection. Propensity score overlap weighting was applied separately for each cohort to minimise confounding. Sub-distribution hazard ratios (sHRs) were calculated to estimate vaccine effectiveness against long COVID, and empirically calibrated using negative control outcomes. Random-effects meta-analyses across staggered cohorts were conducted to pool overall effect estimates. FINDINGS: A total of 1,618,395 (CPRD GOLD), 5,729,800 (CPRD AURUM), 2,744,821 (SIDIAP), and 77,603 (CORIVA) vaccinated people and 1,640,371 (CPRD GOLD), 5,860,564 (CPRD AURUM), 2,588,518 (SIDIAP), and 302,267 (CORIVA) unvaccinated people were included.
Compared with unvaccinated people, overall HRs for long COVID symptoms in people vaccinated with a first dose of any COVID-19 vaccine were 0.54 (95% CI 0.44-0.67) in CPRD GOLD, 0.48 (0.34-0.68) in CPRD AURUM, 0.71 (0.55-0.91) in SIDIAP, and 0.59 (0.40-0.87) in CORIVA. A slightly stronger preventative effect was seen for the first dose of BNT162b2 than for ChAdOx1 (sHR 0.85 [0.60-1.20] in CPRD GOLD and 0.84 [0.74-0.94] in CPRD AURUM). INTERPRETATION: Vaccination against COVID-19 consistently reduced the risk of long COVID symptoms, which highlights the importance of vaccination to prevent persistent COVID-19 symptoms, particularly in adults. FUNDING: National Institute for Health and Care Research.
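The per-database estimates above are combined by random-effects meta-analysis. A common approach, DerSimonian-Laird pooling (which may differ in detail from the authors' implementation), combines per-database log hazard ratios with inverse-variance weights inflated by a between-study variance τ². A sketch with hypothetical variances:

```python
import math

def pool_random_effects(log_hrs, variances):
    """DerSimonian-Laird random-effects pooling of log hazard ratios."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    return math.exp(pooled)                 # back to the HR scale

# The four database HRs from the abstract, with hypothetical variances
hrs = [0.54, 0.48, 0.71, 0.59]
variances = [0.010, 0.032, 0.017, 0.038]   # hypothetical, not reported values
pooled_hr = pool_random_effects([math.log(h) for h in hrs], variances)
```

When the estimates disagree more than their within-database variances can explain, τ² grows and the pooled confidence interval widens accordingly.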
BACKGROUND: Stress is a universal phenomenon and one of the most common precipitants of insomnia. However, not everyone develops insomnia after experiencing a stressful life event. This study aims to test aspects of Spielman's '3P model of insomnia' during adolescence by exploring the extent to which: (a) insomnia symptoms are predicted by polygenic scores (PGS); (b) life events predict insomnia symptoms; (c) the interaction between PGS and life events contributes to the prediction of insomnia symptoms; and (d) gene-environment interaction effects remain after controlling for sex. METHODS: The sample comprised 4,629 twins aged 16 from the Twin Early Development Study who reported on their insomnia symptoms and life events. PGS for insomnia were calculated. To test the main hypothesis of this study (a significant interaction between PGS and negative life events), we fitted a series of mixed-effect regressions. RESULTS: The best fit was provided by the model including sex, PGS for insomnia, negative life events, and their interactions (AIC = 26,158.7). Our results show that the association between insomnia symptoms and negative life events is stronger for those with a higher genetic risk for insomnia. CONCLUSIONS: This work sheds light on the complex relationship between genetic and environmental factors implicated in insomnia. This study has tested for the first time the interaction between genetic predisposition (PGS) for insomnia and environmental stressors (negative life events) in adolescents. This work represents a direct test of components of Spielman's 3P model of insomnia, which is supported by our results.
STUDY OBJECTIVES: Digital technology use is associated with poor sleep quality in adolescence and young adulthood, although research findings have been mixed. No studies have addressed the association between the two using a genetically informative twin design, which could extend our understanding of the etiology of this relationship. This study aimed to test: (1) the association between adolescents' perceived problematic use of digital technology and poor sleep quality, (2) whether the association between problematic use of technology and poor sleep quality remains after controlling for familial factors, and (3) genetic and environmental influences on the association between problematic use of technology and poor sleep quality. METHODS: Participants were 2232 study members (18-year-old twins) of the Environmental Risk (E-Risk) Longitudinal Twin Study. The sample was 48.9% male, 90% white, and 55.6% monozygotic. We conducted regression and twin difference analyses and fitted twin models. RESULTS: Twin differences for problematic use of technology were associated with differences for poor sleep quality in the whole sample (p < 0.001; B = 0.15) and also when we limited the analyses to identical twins only (p < 0.001; B = 0.21). We observed a substantial genetic correlation between problematic use of technology and sleep quality (rA = 0.31), whereas the environmental correlation was lower (rE = 0.16). CONCLUSIONS: Adolescent-reported problematic use of digital technology is associated with poor sleep quality, even after controlling for familial factors including genetic confounds. Our results suggest that the association between adolescents' sleep and problematic digital technology use is not accounted for by shared genetic liability or familial factors but could reflect a causal association. This robust association needs to be examined in future research designed to test causal associations.
BACKGROUND: Periprosthetic fractures are rare but serious complications of unicompartmental knee arthroplasty (UKA). Although cementless UKA has a lower risk of loosening than cemented UKA, there are concerns that tibial fracture risk may be higher given the reliance on interference fit for primary stability. The risk of fracture and the effect of surgical fixation are currently unknown. We compared the periprosthetic fracture rate following cemented and cementless UKA surgery. METHODS: A total of 14,122 medial mobile-bearing UKAs (7,061 cemented and 7,061 cementless) from the National Joint Registry and Hospital Episode Statistics database were propensity score-matched. Cumulative fracture rates were calculated and Cox regressions were used to compare fixation groups. RESULTS: The three-month periprosthetic fracture rates were similar (P = .80), being 0.10% in the cemented group and 0.11% in the cementless group. The fracture rates were highest during the first three months postoperatively, but then decreased and remained constant between one and 10 years after surgery. The one-year cumulative fracture rates were 0.2% (confidence interval [CI]: 0.1 to 0.3) for cemented and 0.2% (CI: 0.1 to 0.3) for cementless cases. The 10-year cumulative fracture rates were 0.8% (CI: 0.2 to 1.3) and 0.8% (CI: 0.3 to 1.3), respectively. The hazard ratio over the whole study period was 1.06 (CI: 0.64 to 1.77; P = .79). CONCLUSIONS: The periprosthetic fracture rate following mobile-bearing UKA surgery is low, being about 1% at 10 years. There were no significant differences in fracture rates between cemented and cementless implants after matching. We surmise that surgeons are aware of the higher theoretical risk of early fracture with cementless components and take care with tibial preparation. LEVEL OF EVIDENCE: III.
BACKGROUND: Myocardial inflammation and injury occur during coronary artery bypass graft (CABG) surgery. We aimed to characterise these processes during routine CABG surgery to inform the diagnosis of type 5 myocardial infarction. METHODS: We assessed 87 patients with stable coronary artery disease who underwent elective CABG surgery. Myocardial inflammation, injury and infarction were assessed using plasma inflammatory biomarkers, high-sensitivity cardiac troponin I (hs-cTnI) and cardiac magnetic resonance imaging (CMR) with both late gadolinium enhancement (LGE) and ultrasmall superparamagnetic particles of iron oxide (USPIO). RESULTS: Systemic humoral inflammatory biomarkers (myeloperoxidase, interleukin-6, interleukin-8 and C-reactive protein) increased in the post-operative period, with C-reactive protein concentrations plateauing by 48 h (median area under the curve (AUC) 7530 [interquartile range (IQR) 6088 to 9027] mg/L/48 h). USPIO-defined cellular myocardial inflammation ranged from normal to levels associated with type 1 myocardial infarction (median 80.2 [IQR 67.4 to 104.8] /s). Plasma hs-cTnI concentrations rose by ≥50-fold from baseline and exceeded 10-fold the upper limit of normal in all patients. Two distinct patterns of peak cTnI release were observed, at 6 and 24 h. After CABG surgery, new LGE was seen in 20% (n = 18) of patients, although clinical peri-operative type 5 myocardial infarction was diagnosed in only 9% (n = 8). LGE was associated with the delayed 24-h peak in hs-cTnI, and its magnitude correlated with AUC plasma hs-cTnI concentrations (r = 0.33). CONCLUSIONS: Routine CABG surgery is associated with myocardial inflammation and a rise in plasma hs-cTnI to >10-fold the 99th centile upper limit of normal that is not attributable to inflammatory or ischemic injury alone. Peri-operative type 5 myocardial infarction is often unrecognised and is associated with a delayed 24-h peak in plasma hs-cTnI concentrations.
BACKGROUND: High-sensitivity cardiac troponin assays can help to identify patients who are at low risk of myocardial infarction in the emergency department. We aimed to determine whether the addition of clinical risk scores would improve the safety of early rule-out pathways for myocardial infarction. METHODS: In 1935 patients with suspected acute coronary syndrome, we evaluated the safety and efficacy of 2 rule-out pathways alone or in conjunction with low-risk TIMI (Thrombolysis In Myocardial Infarction) (0 or 1), GRACE (Global Registry of Acute Coronary Events) (≤108), EDACS (Emergency Department Assessment of Chest Pain Score) (<16), or HEART (History, ECG, Age, Risk factors, Troponin) (≤3) scores. The European Society of Cardiology 3-hour pathway uses a single diagnostic threshold (99th percentile), whereas the High-STEACS (High-Sensitivity Troponin in the Evaluation of Patients With Acute Coronary Syndrome) pathway applies different thresholds to rule out (<5 ng/L) and rule in (>99th percentile) myocardial infarction. RESULTS: Myocardial infarction or cardiac death during the index presentation or at 30 days occurred in 14.3% of patients (276/1935). The European Society of Cardiology pathway ruled out 70% of patients, with 27 missed events giving a negative predictive value of 97.9% (95% CI, 97.1-98.6). The addition of a HEART score ≤3 reduced the proportion ruled out by the European Society of Cardiology pathway to 25% but improved the negative predictive value to 99.7% (95% CI, 99.0-100; P<0.001). The High-STEACS pathway ruled out 65%, with 3 missed events for a negative predictive value of 99.7% (95% CI, 99.4-99.9). No risk score improved the negative predictive value of the High-STEACS pathway, but all reduced the proportion ruled out (to 24%-47%; P<0.001 for all).
CONCLUSIONS: Clinical risk scores significantly improved the safety of the European Society of Cardiology 3-hour pathway, which relies on a single cardiac troponin threshold at the 99th percentile to rule in and rule out myocardial infarction. Where lower thresholds are used to rule out myocardial infarction, as applied in the High-STEACS pathway, risk scores halve the proportion of patients ruled out without improving safety. CLINICAL TRIAL REGISTRATION: URL: https://www.clinicaltrials.gov. Unique identifier: NCT01852123.
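The safety metric throughout this study is the negative predictive value of a rule-out decision: the fraction of ruled-out patients who did not go on to have myocardial infarction or cardiac death. A sketch with hypothetical counts (not the trial's exact denominators):

```python
def negative_predictive_value(ruled_out, missed_events):
    # NPV = true negatives / all ruled out; a missed event is a false negative.
    return (ruled_out - missed_events) / ruled_out

# Hypothetical: a pathway rules out 1350 patients, 27 of whom have an event
print(round(100 * negative_predictive_value(1350, 27), 1))  # → 98.0
```

The trade-off reported above follows directly from this definition: adding a risk score shrinks the ruled-out denominator, which raises NPV only if it removes proportionally more of the missed events.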
RATIONALE: In response to blood vessel wall injury, aberrant proliferation of vascular smooth muscle cells (SMCs) causes pathological remodeling. However, the controlling mechanisms are not completely understood. OBJECTIVE: We recently showed that the human long noncoding RNA SMILR promotes vascular SMC proliferation by a hitherto unknown mechanism. Here, we assess the therapeutic potential of SMILR inhibition and detail its molecular mechanism of action. METHODS AND RESULTS: We used deep RNA sequencing of human saphenous vein SMCs stimulated with IL (interleukin)-1α and PDGF (platelet-derived growth factor)-BB, with SMILR knockdown (siRNA) or overexpression (lentivirus), to identify SMILR-regulated genes. This revealed a SMILR-dependent network essential for cell cycle progression. In particular, using the fluorescent ubiquitination-based cell cycle indicator viral system, we found that SMILR regulates the late mitotic phase of the cell cycle and cytokinesis, with SMILR knockdown resulting in a ≈10% increase in binucleated cells. SMILR pulldowns further revealed its potential molecular mechanism, which involves an interaction with the mRNA of the late mitotic protein CENPF (centromere protein F) and the regulatory Staufen1 RNA-binding protein. SMILR and this downstream axis were also found to be activated in the human ex vivo vein graft pathological model, in primary human coronary artery SMCs and in atherosclerotic plaques obtained at carotid endarterectomy. Finally, to assess the therapeutic potential of SMILR, we used a novel siRNA approach in the ex vivo vein graft model (within the 30-minute clinical time frame that would occur between harvest and implant) to assess the reduction of proliferation by EdU incorporation. SMILR knockdown led to a marked decrease in proliferation, from ≈29% in controls to ≈5% with SMILR depletion.
CONCLUSIONS: Collectively, we demonstrate that SMILR is a critical mediator of vascular SMC proliferation via direct regulation of mitotic progression. Our data further reveal a potential SMILR-targeting intervention to limit atherogenesis and adverse vascular remodeling.
Exposure to urban particulate matter has been associated with an increased risk of cardiovascular disease and thrombosis. We studied the effects of transient exposure to diesel exhaust particles on fibrin clot structure in 16 healthy individuals (aged 21-44 years). The subjects were randomly exposed to diesel exhaust and filtered air on two separate occasions. Blood samples were collected before exposure and 2 and 6 hours after exposure. There were no significant changes in clot permeability, maximum turbidity, lag time, fibre diameter, fibre density or fibrinogen level between samples taken after diesel exhaust exposure and samples taken after filtered air exposure. These data show that there are no prothrombotic changes in fibrin clot structure in young, healthy individuals exposed to diesel exhaust.
Fine particulate air pollution <2.5 μm in diameter (PM2.5) is a major environmental threat to global public health. Multiple national and international medical and governmental organizations have recognized PM2.5 as a risk factor for cardiopulmonary diseases. A growing body of evidence indicates that several personal-level approaches that reduce exposure to PM2.5 can lead to improvements in health endpoints. Novel and forward-thinking strategies, including randomized clinical trials, are important to validate key aspects (e.g., feasibility, efficacy, health benefits, risks, burden, costs) of the various protective interventions, in particular among real-world susceptible and vulnerable populations. This paper summarizes the discussions and conclusions from an expert workshop, Reducing the Cardiopulmonary Impact of Particulate Matter Air Pollution in High Risk Populations, held on May 29 to 30, 2019, and convened by the National Institutes of Health, the U.S. Environmental Protection Agency, and the U.S. Centers for Disease Control and Prevention.