Sunday, December 18, 2005

Outbreak Simulation

I. Relevance

In the absence of syndromic data that include a bioterrorism-related disease outbreak, it is hard to evaluate the detection ability of different algorithms. This is on top of the complication that the data do contain natural disease outbreaks, which are not easily labeled (when exactly did the last flu season start and end in a given geographical location?).

One approach has been to try and simulate signatures of such attacks and inject them into real, but attack-less, data. A second approach has been to simulate the attack-less data as well. Yet another approach has been to model the consequences of a bio-agent release using meteorological and atmospheric models and to use those to simulate an attack.

In this posting we concentrate on temporal data streams and algorithms. However, similar issues arise in spatial and spatio-temporal data and approaches.

II. Examples
In Goldenberg et al. (2002) we injected linearly increasing outbreaks, spread over a 3-day period, into cough medication sales. We tried different slopes and different magnitudes.

Stoto et al. (2004) “seeded” real ER data with a “fast outbreak”, constructed as a 3-day linear increase in cases (adding 3, 6, and 9 cases on the first, second, and third days, respectively), or a “slow outbreak”, constructed as a 9-day step function (adding 1, 1, 1, 2, 2, 2, 3, 3, 3 cases on the first through ninth days).
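As a concrete illustration, such linear and step-function signatures are easy to generate and inject. The sketch below (Python/NumPy) adds both to a hypothetical Poisson background series; the background mean, series length, and injection day are arbitrary choices for illustration, not values from either paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attack-less background: 60 days of counts with mean 20.
background = rng.poisson(lam=20, size=60).astype(float)

# "Fast" outbreak: 3-day linear increase (3, 6, 9 extra cases).
fast = np.array([3.0, 6.0, 9.0])
# "Slow" outbreak: 9-day step function of extra cases.
slow = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3], dtype=float)

def inject(series, outbreak, start):
    """Return a copy of `series` with the outbreak signature added at `start`."""
    seeded = series.copy()
    seeded[start:start + len(outbreak)] += outbreak
    return seeded

seeded_fast = inject(background, fast, start=30)
seeded_slow = inject(background, slow, start=30)
```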

Burkom et al. (2005) simulated background (=attack-less) counts from a Poisson distribution. They then injected counts drawn randomly from a lognormal distribution, based on the lognormal distribution of incubation periods of infectious diseases.
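A sketch of that construction: the lognormal median of 3.5 days follows Burkom et al., while the lognormal dispersion, total outbreak size, background mean, and release day below are illustrative assumptions. Each simulated case becomes symptomatic after a lognormally distributed delay, and binning the delays by day yields the epidemic curve to inject.

```python
import numpy as np

rng = np.random.default_rng(1)

# Attack-less background simulated as Poisson counts, as in Burkom et al.
background = rng.poisson(lam=30, size=90)

# Incubation periods: lognormal with median 3.5 days (mu = ln 3.5);
# sigma and the total outbreak size are illustrative assumptions.
mu, sigma = np.log(3.5), 0.5
total_cases = 100

# Delay (in days) from release to symptom onset for each case;
# counting cases per day gives the daily epidemic curve.
delays = rng.lognormal(mean=mu, sigma=sigma, size=total_cases)
epicurve = np.bincount(delays.astype(int), minlength=30)

release_day = 40  # assumed day of the release
seeded = background.copy()
seeded[release_day:release_day + len(epicurve)] += epicurve
```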

III. Determining the outbreak structure
The main issue is that we do not really know what the signature of a bioterrorist attack disease outbreak would look like in medication sales, ER admissions, etc. In particular:
o Different types of outbreaks can lead to different signatures
o Different data streams might have different “reactions” to outbreaks
What we do have is some knowledge of disease progression. The lognormal curve derives from such information. But what does it measure when it comes to syndromic data streams? According to Burkom et al. (2005),
“The incubation period distribution was used to estimate the idealized curve for the expected number of new symptomatic cases on each outbreak day. The lognormal parameters were chosen to give a median incubation period of 3.5 days, consistent with the symptomatology of known weaponized diseases and a temporal case dispersion consistent with previously observed outbreaks”
The question is what can be inferred from disease progression about the manifestation of an outbreak in pre-diagnosis data. There will clearly be large effects of media coverage, word-of-mouth, and mass psychology. Can these be integrated to some degree?

Another approach has been to model behavior at the individual level. Wong et al. (2005) consider the fact that “the majority of the background knowledge of the characteristics of respiratory anthrax disease is at an individual rather than a population level.” They therefore build a model based on “person-level” activity for detecting infectious but noncontagious diseases such as anthrax.

IV. Implications
Given that outbreak simulation is used to evaluate the performance of detection algorithms, the main issue with simulating a pre-defined outbreak shape is that we can then design the monitoring algorithm that is most efficient at detecting that particular simulated shape!

For instance, it can be shown that the Shewhart chart is most efficient at detecting a (large) single spike, the Cusum chart at detecting a step function, the EWMA chart at detecting an exponential increase, etc. (Box & Luceño, 1997).
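To make this matching concrete, here is a minimal sketch of the three charts under the standard textbook assumptions (known in-control mean and standard deviation); the reference values k, h, lambda, and L below are conventional illustrative choices, not values from any of the papers discussed.

```python
import numpy as np

def shewhart(x, mu, sigma, k=3.0):
    """Flag any single observation more than k sigma above the mean."""
    return (np.asarray(x) - mu) / sigma > k

def cusum(x, mu, sigma, k=0.5, h=4.0):
    """One-sided upper CUSUM: accumulate standardized excesses over k."""
    s, flags = 0.0, np.zeros(len(x), dtype=bool)
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - mu) / sigma - k)
        flags[i] = s > h
    return flags

def ewma(x, mu, sigma, lam=0.2, L=3.0):
    """EWMA chart: flag when the smoothed series exceeds its control limit."""
    z, flags = mu, np.zeros(len(x), dtype=bool)
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z
        limit = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
        flags[i] = z - mu > limit
    return flags

# Two toy series: a one-time 5-sigma spike vs. a sustained 1.5-sigma step.
mu, sigma = 20.0, 4.0
spike = np.full(30, mu); spike[15] = mu + 5 * sigma
step = np.full(30, mu); step[15:] = mu + 1.5 * sigma
```

On these two series the Shewhart rule flags the spike but never the modest step, while the Cusum and EWMA accumulate the sustained shift and eventually alarm, which is exactly the chart/shape matching described above.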

In the recent Bio-ALIRT competition, a group of medical and epidemiological experts examined the datasets and, by eyeballing them and using a Cusum chart, determined when outbreaks occurred:
“Using visual and statistical techniques, ODG found evidence of disease outbreaks in the data” (Siegrist & Pavlin, 2004)

The participating groups were then asked to detect those outbreaks. Clearly, those who used a Cusum, or algorithms that mimic human visual inspection, were most likely to do “best”.

V. Injecting simulated outbreaks
Another issue with outbreak simulation is how to inject the simulated outbreak into the no-outbreak data. Clearly there are periods when it is more likely to be detected than others.

In Goldenberg et al. (2002) we injected the simulated outbreak at every point in the series, and then evaluated an overall rate of how many times it was detected (as well as false alarms). A similar approach was taken in Stoto et al. (2004).
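The every-start-day evaluation can be sketched as follows. The background series, outbreak signature, and detector (a simple 3-sigma rule) are simplified stand-ins chosen for illustration, not the data or algorithms used in those papers.

```python
import numpy as np

rng = np.random.default_rng(2)

mu = 20.0
sigma = np.sqrt(mu)
background = rng.poisson(lam=mu, size=200).astype(float)
outbreak = np.array([3.0, 6.0, 9.0])  # 3-day linear signature

def alarms(series):
    """Toy detector: alarm on any day exceeding mu + 3*sigma."""
    return series > mu + 3 * sigma

# Slide the outbreak start over every feasible day and count the
# injections that trigger an alarm within the outbreak window.
starts = range(len(background) - len(outbreak) + 1)
hits = 0
for s in starts:
    seeded = background.copy()
    seeded[s:s + len(outbreak)] += outbreak
    if alarms(seeded)[s:s + len(outbreak)].any():
        hits += 1

detection_rate = hits / len(starts)
# False-alarm rate: fraction of alarm days on the untouched background.
false_alarm_rate = alarms(background).mean()
```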

In contrast, Burkom et al. (2005) injected the simulated outbreak at a randomly chosen start day (recall that their background data are themselves Poisson-simulated).

VI. Some solutions
Since we really do not know the shape or magnitude of an outbreak, one approach is to simulate a range of different outbreaks and then evaluate algorithms over all the different types. This will most likely give preference to algorithms that are not very tightly coupled with a certain outbreak type (e.g., wavelets or other multi-resolution methods).
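One way to operationalize this is to score each algorithm by its average detection rate over a whole library of outbreak shapes. The shapes and magnitudes below are illustrative assumptions, and the 3-sigma rule again stands in for whatever algorithm is being evaluated.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 20.0
sigma = np.sqrt(mu)
background = rng.poisson(lam=mu, size=200).astype(float)

# A library of assumed outbreak shapes (magnitudes are illustrative).
shapes = {
    "spike": np.array([12.0]),
    "linear": np.array([3.0, 6.0, 9.0]),
    "step": np.array([1, 1, 1, 2, 2, 2, 3, 3, 3], dtype=float),
    "exponential": 1.5 ** np.arange(1, 7),
}

def detection_rate(shape):
    """Fraction of start days at which a 3-sigma rule alarms in the window."""
    n, m = len(background), len(shape)
    hits = 0
    for s in range(n - m + 1):
        seeded = background.copy()
        seeded[s:s + m] += shape
        if (seeded[s:s + m] > mu + 3 * sigma).any():
            hits += 1
    return hits / (n - m + 1)

# Score over the whole library, so an algorithm tuned to any single
# assumed shape gains less of an unfair advantage.
per_shape = {name: detection_rate(shape) for name, shape in shapes.items()}
overall = float(np.mean(list(per_shape.values())))
```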

A practical consideration is to choose an outbreak duration no longer than the period within which we would act. For instance, if an anthrax attack is detected only 3 days after it begins, it is too late to act. In that sense we can restrict attention to the outbreak signature in its first three days.

VII. References
Box, G. and Luceño, A. (1997). Statistical Control: By Monitoring and Feedback Adjustment. Wiley-Interscience, 1st edition.

Burkom, H, Murphy, S, Coberly, J, and Hurt-Mullen K (2005), Public Health Monitoring Tools for Multiple Data Streams, MMWR 54 (suppl), 55-62.

Goldenberg A, Shmueli G, Caruana RA and Fienberg SE (2002). Early Statistical Detection of Anthrax Outbreaks by Tracking Over-the-Counter Medication Sales. PNAS, 99 (8), 5237-5240.

Siegrist, D and Pavlin, J (2004), Bio-ALIRT Biosurveillance Detection Algorithm Evaluation, MMWR 53 (suppl), 152-158.

Stoto MA, Schonlau M, Mariano LT (2004). Syndromic Surveillance: Is it Worth the Effort? Chance, 17 (1), 19-24.

Wong, W-K, Cooper, G, Dash, D, Levander, J., Dowling, J, Hogan, W, and Wagner M (2005), Use of Multiple Data Streams to Conduct Bayesian Biologic Surveillance, MMWR 54 (suppl), 63-69.