CARDIOLOGY AND VASCULAR

Measuring the benefit of CVD risk prediction

Dr Geoff Chadwick, Consultant Physician, St Columcille’s Hospital, Dublin

July 12, 2016

    Cardiovascular disease (CVD) remains a major cause of morbidity and mortality worldwide, despite declining incidence and case fatality rates for myocardial infarction and stroke. Having emerged as a leading cause of death in the early 20th century, CVD peaked in incidence in the 1960s. The development of preventive interventions (both pharmaceutical and lifestyle) has since led to numerous models for predicting those at risk, which were summarised in a recent systematic review by Damen et al1 with an accompanying editorial commentary by Tim Holt.2

    The prevention of CVD relies on timely identification of those at increased risk so that effective dietary, lifestyle and drug interventions can be targeted. Formal prediction of CVD risk began in 1948 with recruitment to the seminal Framingham study, which eventually produced risk equations combining the suspected risk factors. At that time, the relative contributions of individual risk factors still had to be confirmed and measured, and individuals at risk of CVD identified. For example, was smoking or blood pressure as important a risk as a raised cholesterol level? How did these factors interact with each other, and with the age and sex of the person?

    Important trials of antihypertensive and lipid-lowering drugs followed the Framingham study, and it was increasingly recognised that such interventions confer greater benefit on those at higher baseline risk. As Holt points out, relative risk reduction is reasonably consistent across different levels of absolute risk, so a 25% relative reduction confers more actual benefit if the risk starts at 40% than if it starts at 10%.2 Identifying baseline CVD risk is therefore important in order to target interventions optimally. Individuals also benefit from knowing their personal risk, irrespective of any decisions on treatment or lifestyle. Risk modelling is also used by life insurance companies to estimate the risk of applicants, and by health economists and policy makers engaged in public health initiatives.
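
    To make that arithmetic concrete, the short sketch below applies a 25% relative risk reduction to two hypothetical baseline risks (40% and 10%) and derives the absolute risk reduction and the number needed to treat; the figures are illustrative only and are not drawn from the cited papers.

        # Illustrative calculation (hypothetical figures): the same relative risk
        # reduction yields more absolute benefit at a higher baseline risk.
        def absolute_benefit(baseline_risk, relative_reduction):
            """Return (absolute risk reduction, number needed to treat)."""
            arr = baseline_risk * relative_reduction   # absolute risk reduction
            nnt = 1.0 / arr                            # number needed to treat
            return arr, nnt

        for baseline in (0.40, 0.10):                  # 40% vs 10% baseline CVD risk
            arr, nnt = absolute_benefit(baseline, 0.25)    # 25% relative reduction
            print(f"baseline {baseline:.0%}: ARR {arr:.1%}, NNT {nnt:.0f}")

        # Output: baseline 40%: ARR 10.0%, NNT 10
        #         baseline 10%: ARR 2.5%,  NNT 40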

    With the objective of providing an overview of prediction models for CVD risk in the general population, Damen et al identified 363 models reported in 212 papers,1 mostly derived from cohort study data in Europe, the US or Canada. Only 36% of the developed models were externally validated, and in many cases performance measures were heterogeneous, poorly defined or simply missing. Over the years it became apparent that the Framingham model performed suboptimally in contemporary populations outside the US: dominated by white, middle-class participants, the Framingham cohort was not sufficiently representative of 21st-century Europe.

    For some people, the identification of CVD risk might trigger healthier behaviour or access to preventive drug treatment. For others, however, it risks medicalising normality, turning people into patients and adversely affecting self-image and life insurance premiums.2 It may also expose a large number of people to drug side-effects for the benefit of just a few. To date, programmes aimed at systematic assessment of the CVD risk of healthy people have proven ineffective at improving hard clinical outcomes such as CVD incidence.

    Damen et al conclude that head-to-head comparisons of promising algorithms would be more beneficial than deriving new ones, and that as novel risk factors are identified, researchers should evaluate what they add to existing models. The aim is to translate CVD risk recognition into tangible and measurable clinical benefit for patients and the general public.

    References
    1. Damen J, Hooft L, Schuit E, et al. Prediction models for cardiovascular disease risk in the general population: systematic review. BMJ 2016;353:i2416
    2. Holt T. Predicting cardiovascular disease. BMJ 2016;353:i2621
    © Medmedia Publications/Hospital Doctor of Ireland 2016