Big Data and critical care

A session blog from Day 2 of State of the Art, London, December 2015.

Blogger Nick Plummer (@nickopotamus)

RCTs and Big Data – future fusion?

Derek Angus

The problem with “thick data” (highly granular data from small trials) is that once an intervention has been approved we are often left in a “data-poor, opinion-rich” environment, because the RCTs providing the evidence don’t generally reflect real-world practice. “Big data” analytics, on the other hand, allows inferences about optimal care and enables real-time informatics (essentially forming a “just in time” cohort study to influence patient care there and then), but because it rests on purely observational studies it doesn’t allow researchers to assign causality without the benefit of randomisation (Longhurst 2014).
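
To make the “just in time” cohort idea concrete, here is a toy sketch (my own illustration, not a system described in the talk): given a new patient’s features, retrieve the most similar historical patients and summarise their outcomes under each treatment. All names and numbers are invented.

```python
import math

# Toy "just in time" cohort: for a new patient, find the k most similar
# historical patients (Euclidean distance over two features) and summarise
# their outcomes by treatment. All data are made up for illustration.

historical = [
    # (age, lactate, treatment, survived)
    (70, 4.1, "drug_a", False),
    (65, 3.8, "drug_a", True),
    (68, 4.0, "drug_b", True),
    (72, 4.5, "drug_b", True),
    (50, 1.2, "drug_a", True),
]

def similar_patients(new_patient, k=4):
    """Return the k historical patients closest to the new one."""
    distance = lambda p: math.dist(new_patient, (p[0], p[1]))
    return sorted(historical, key=distance)[:k]

def outcomes_by_treatment(cohort):
    """Summarise survival within the bedside cohort, per treatment."""
    totals = {}
    for _, _, treatment, survived in cohort:
        n, s = totals.get(treatment, (0, 0))
        totals[treatment] = (n + 1, s + survived)
    return {t: f"{s}/{n} survived" for t, (n, s) in totals.items()}

# New patient: 69 years old, lactate 4.2 – assemble their cohort on the spot.
print(outcomes_by_treatment(similar_patients((69, 4.2))))
```

A real system would need standardised features and far richer data, and – as the speaker stressed – the resulting cohort is still observational, so it informs rather than proves.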

Two models have tried to build on this. Point of Care (POC) clinical trials use the EHR to alert clinicians to patients who can be recruited, but they still lack the scale of big data analytics and retain the problem that recruited patients are in effect guinea pigs. Platform trials are essentially adaptive RCTs, using techniques such as response-adaptive randomisation, in which the current statistical model is used to change the randomisation odds – or even to introduce new trial arms on the fly – so that each newly recruited patient gets the best available treatment based on the best available evidence (a minimal sketch follows below). Platform trials, however, are limited by small samples and by poorly understood methodology and reporting, and tend to be confined to the pre-approval setting.
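
To make “response-adaptive randomisation” concrete, here is a minimal sketch (my own, not from the talk) using Thompson sampling with Beta–Bernoulli posteriors, one common way of implementing it: the allocation odds drift towards whichever arm the accumulating data suggest is best. The arm names and response rates are illustrative assumptions.

```python
import random

# Minimal response-adaptive randomisation via Thompson sampling: each arm
# keeps a Beta(successes + 1, failures + 1) posterior over its response
# rate, and the next patient goes to the arm that wins a posterior draw,
# so allocation shifts towards the better-performing arm as data accrue.

class Arm:
    def __init__(self, name):
        self.name = name
        self.successes = 0
        self.failures = 0

    def sample_posterior(self):
        # Beta(1, 1) prior, i.e. initially uniform over the response rate
        return random.betavariate(self.successes + 1, self.failures + 1)

def allocate(arms):
    """Randomise the next patient, weighted by the current evidence."""
    return max(arms, key=lambda arm: arm.sample_posterior())

def record_outcome(arm, responded):
    if responded:
        arm.successes += 1
    else:
        arm.failures += 1

# Illustrative run: arm B is truly better (60% vs 40% response), so over
# time more patients are randomised to it.
random.seed(0)
true_rates = {"A": 0.4, "B": 0.6}
arms = [Arm("A"), Arm("B")]
for _ in range(500):
    arm = allocate(arms)
    record_outcome(arm, random.random() < true_rates[arm.name])
for arm in arms:
    n = arm.successes + arm.failures
    print(f"Arm {arm.name}: {n} patients, {arm.successes} responses")
```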

REMAP (randomised embedded multifactorial adaptive platform) trials try to blend POC and platform trials, providing the benefits of big data analytics while retaining the ability to make causal inferences – an example being the REMAP Severe Pneumonia program (Angus 2015).


The potential and pitfalls of Big Data in critical care

Nazir Lone

Big data can help fill in the “translation gaps” in bench-to-bedside research (Khoury 2012); however, just because its n ranges from “huge” to “all” doesn’t mean it is free of the pitfalls that afflict all epidemiological data. The major and obvious pitfall is mistaking association for causation, which cannot be resolved without randomisation (and yes, even comparing two big populations isn’t good enough); but it also suffers from the same problems as all “classical” observational studies:

  • Chance: A huge n does not abolish the play of chance – at a conventional significance threshold the risk of a type I error (false positive) remains 5%, however large the dataset (see the simulation sketch after this list)
  • Confounders: Huge datasets produce precise estimates, but this precision is a false security – confounding is baked into the dataset, and not all the confounders that should be adjusted for are actually recorded (see Freemantle 2015 and its #notsafenotfair aftermath for a classic story of association misascribed as causation)
  • Selection bias: n may equal “all”, but how did the patients get into the dataset in the first place? Critical care datasets carry an implicit bias – these patients must have been judged sick but potentially salvageable – which can modify or distort any association and limits generalisability to patients outside this group (e.g. potential referrals to ICU on the wards).
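
As a toy illustration of the “chance” point (my own sketch, not from the talk): simulate two groups drawn from the same distribution and test for a difference – the false-positive rate sits near the 5% significance threshold however large n becomes.

```python
import math
import random
import statistics

# When there is truly no difference between two groups, the proportion of
# "significant" (p < 0.05) results stays near 5% regardless of sample size.

def two_sample_z_pvalue(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 200
for n in (100, 10_000):
    false_positives = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]  # same distribution
        if two_sample_z_pvalue(a, b) < 0.05:
            false_positives += 1
    print(f"n = {n}: false-positive rate ~ {false_positives / trials:.1%}")
```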

A further concern with big data lies in the interpretation of dynamic systems, where earlier results feed back into and distort later estimates. This is a risk with both thick data (see the polling in the run-up to the Scottish referendum) and big data, e.g. Google Flu Trends (Lazer 2014). The major strength of big data comes where it is combined with thick data to give robust generalisation towards causation, but users of the data must stay aware of the biases lurking behind the seductive n = all.


Big Data in critical care: the ethics

Sarah Cunningham-Burley

We face a significant problem with public acceptability of, and trust in, big data analysis. This may be due to a generalised erosion of trust in data, science, and large-scale programmes as a whole; however, it is probably less significant than we as researchers think (as highlighted in the Q&A regarding the difference between public perception of health big data and surveillance big data). “Public engagement” is likely to be a major part of the solution, but it takes many different forms and approaches to achieve a range of different outcomes (Aitken 2010).

There are clearly a number of ethical concerns regarding the use of big data. Four of these are “obvious”: consent for use of the data; patient confidentiality once the data are widely available; whether the routine acquisition of such data is a breach of privacy; and how to effectively anonymise the data (especially where rare diseases and biomarkers are concerned). There are also four less obvious issues: who controls the data; how to assess the public benefit of making this personal information available; the risks of commercialisation and who profits from it; and, linked to that, the concept of benefit sharing – deciding who benefits from the use of the information, and how.
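
On the anonymisation point, k-anonymity is one standard way of formalising it: every combination of quasi-identifiers should be shared by at least k records. This toy sketch (my own, with invented data) shows why rare diseases defeat naive anonymisation – they shrink those groups to a single, re-identifiable patient.

```python
from collections import Counter

# A record is re-identifiable when its combination of quasi-identifiers
# (here: age band, postcode district, diagnosis) is shared by fewer than
# k records. Rare diagnoses collapse these groups to size 1.

records = [  # invented data for illustration
    {"age_band": "60-69", "district": "EH1", "diagnosis": "pneumonia"},
    {"age_band": "60-69", "district": "EH1", "diagnosis": "pneumonia"},
    {"age_band": "60-69", "district": "EH1", "diagnosis": "pneumonia"},
    {"age_band": "60-69", "district": "EH1", "diagnosis": "Erdheim-Chester"},
]

def k_anonymity_violations(records, quasi_identifiers, k=3):
    """Return quasi-identifier combinations shared by fewer than k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return {combo: n for combo, n in groups.items() if n < k}

print(k_anonymity_violations(records, ["age_band", "district", "diagnosis"]))
# {('60-69', 'EH1', 'Erdheim-Chester'): 1} – the rare disease singles one patient out
```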

On the whole we still face a deficit of public trust in the use of big data in healthcare (see the problems implementing, and benefiting from, care.data); we need to rebuild this trust with transparency and a well-thought-out, ongoing, dynamic public engagement programme.


Learning from the MIMIC II database

Leo Celi

MIMIC is a “medical information mart” composed of ICU physiological and clinical data, hospital data, and post-discharge data. This allows observational studies to be performed on a huge number of patients.
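
For a flavour of what working with such a mart looks like, here is a minimal sketch of a cohort pull (my own, not from the talk). Table and column names follow the publicly documented MIMIC-III schema; the connection string and the question asked are assumptions for illustration.

```python
import pandas as pd
import sqlalchemy

# Pull an ICU cohort from a local MIMIC-III PostgreSQL install and take a
# first observational look at hospital mortality by ICU length of stay.

engine = sqlalchemy.create_engine("postgresql://mimicuser@localhost/mimic")

query = """
SELECT ie.subject_id,
       ie.icustay_id,
       ie.los                   AS icu_los_days,
       adm.hospital_expire_flag AS died_in_hospital
FROM   icustays ie
JOIN   admissions adm ON adm.hadm_id = ie.hadm_id
WHERE  ie.los IS NOT NULL
"""

cohort = pd.read_sql(query, engine)

# Crude descriptive cut: mortality by length-of-stay band (days).
cohort["los_band"] = pd.cut(cohort["icu_los_days"], bins=[0, 2, 7, 14, 365])
print(cohort.groupby("los_band", observed=True)["died_in_hospital"].mean())
```

Any association surfaced this way is, as the next paragraph stresses, observational only.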

Leo once again noted that causation can never be proved by an observational study, but highlighted that we can often be “selective purists”: the causal links between smoking and cancer, or CO2 emissions and climate change, are widely accepted despite the lack of RCTs, yet we’re quick to slate observational claims of a link between vaccines and autism! Multiple studies have shown significant agreement between the outcomes of RCTs and observational studies in the wider healthcare context (Ioannidis 2001), in the Cochrane setting (Anglemyer 2014), and in the critical care literature (Kitsios). Since there are too many questions in intensive care to run an RCT for each one (even if that were always possible, which it isn’t), big data has the opportunity to provide high-quality evidence in these cases, so long as we remember that it is still observational data.

MIMIC is a huge dataset, but it still suffers from limitations. As a single-centre database its results could be argued to be of limited value, as they may not be generalisable to other settings; MIMIC III aims to address this by collecting data from multiple international centres. The researchers also faced barriers to data sharing and transparent peer review, and it was noted that there is a divide between how data scientists and clinicians use the data: there is a large body of machine learning literature showcasing the optimisation of diagnostic algorithms on big data, but it isn’t always clear what these mean in the real world for a patient on ICU (Wagstaff 2012).

The outcomes from the MIMIC studies are more than just the data, although the data themselves are freely available for analysis. They include the group’s “lab notebook” of techniques and search strategies, and the software needed to share information and replicate results. This highlights the key take-home message: the value of big data doesn’t just come from its results, but hinges on the ability to share the data, the methodologies, and the findings.


Q&A

  1. How can aspiring researchers develop the core skills needed to work with big data? There are multiple epidemiology courses at universities across the UK; you need a grounding in the fundamentals of observational studies, then arrange collaboration with engineers and statisticians for the data analysis (such as at the Farr Institute). The most difficult part is finding the time and funding for an OOPE.
  2. Why are we happy for medical research groups to use our data, but not, for example, government surveillance services? This comes down to the interests of the individual versus the interests of the public, and how these bear on the eight ethical elements discussed by Sarah. The panel also thought there may be a generational component – are the younger generation happier with their data being “owned” (e.g. by Google and Facebook), or are they more aware of the hazards?
  3. How can we tie this in with randomisation to provide solid information on causation? The key in any big data research is to randomise in advance where you can, to avoid confounding. Government or health policies are an example: randomise, then assess the impact, rather than enforcing universally and performing a retrospective observational review. There are limits to this (you can’t randomise where a weekend falls), but that means reframing the question: learn what you can from the observational study, then, rather than a knee-jerk blanket implementation of a single policy, trial multiple different randomised responses to the problem and compare them prospectively.