# criterion performance measurements

## overview

Want to understand this report? See the "understanding this report" section below.

## deepseq/10^5 elems

| | lower bound | estimate | upper bound |
|---|---|---|---|
| Mean execution time | 750.04 μs | 750.16 μs | 750.29 μs |
| Standard deviation | 1.2513 μs | 1.4233 μs | 1.6727 μs |

Outlying measurements have no (0.002%) effect on the estimated standard deviation.

## deepseq/10^6 elems

| | lower bound | estimate | upper bound |
|---|---|---|---|
| Mean execution time | 8.2031 ms | 8.2039 ms | 8.2049 ms |
| Standard deviation | 8.3894 μs | 10.518 μs | 16.902 μs |

Outlying measurements have no (0.002%) effect on the estimated standard deviation.

## deepseq/10^7 elems

| | lower bound | estimate | upper bound |
|---|---|---|---|
| Mean execution time | 81.170 ms | 81.173 ms | 81.180 ms |
| Standard deviation | 16.212 μs | 42.234 μs | 95.590 μs |

Outlying measurements have no (0.002%) effect on the estimated standard deviation.
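As a quick sanity check, the mean estimates above can be compared across input sizes: fully evaluating a list with `deepseq` should scale roughly linearly in the number of elements, so each tenfold increase in size should multiply the mean time by roughly ten. The snippet below simply re-reads the mean estimates from the tables above:

```python
# Mean execution-time estimates copied from the report above, in seconds.
means = {
    10**5: 7.501572019175477e-4,  # deepseq/10^5 elems
    10**6: 8.203858221204662e-3,  # deepseq/10^6 elems
    10**7: 8.117262061897067e-2,  # deepseq/10^7 elems
}

# Each ratio should be close to 10 if evaluation cost is linear in size.
ratios = [means[10**6] / means[10**5], means[10**7] / means[10**6]]
print([round(r, 2) for r in ratios])  # → [10.94, 9.89]
```

Both ratios are within about 10% of the ideal factor of 10, consistent with linear scaling.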

## understanding this report

In this report, each function benchmarked by criterion is assigned
a section of its own. In each section, we display two charts, each
with an *x* axis that represents measured execution time.
These charts are active; if you hover your mouse over data points
and annotations, you will see more details.

- The chart on the left is a kernel density estimate (also known as a KDE) of time measurements. This graphs the probability of any given time measurement occurring. A spike indicates that a measurement of a particular time occurred; its height indicates how often that measurement was repeated.
- The chart on the right is the raw data from which the kernel density estimate is built. Measurements are displayed on the *y* axis in the order in which they occurred.
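A kernel density estimate like the one in the left-hand chart can be sketched in a few lines. The Gaussian kernel and the fixed bandwidth below are illustrative assumptions (criterion selects its own kernel and bandwidth), and the sample times are invented:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a function estimating the probability density of `samples`.

    Each sample contributes a Gaussian bump of width `bandwidth`; the
    density at x is the normalized sum of all bumps evaluated at x, so
    regions where many measurements cluster show up as peaks.
    """
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Hypothetical measurements clustered near 750 us, with one straggler:
# the estimated density peaks near 750 us.
times_us = [749.8, 750.0, 750.1, 750.2, 750.4, 753.0]
kde = gaussian_kde(times_us, bandwidth=0.5)
```

Evaluating `kde` on a grid of times and plotting the result gives a curve of the same shape as the left-hand chart.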

Under the charts is a small table displaying the mean and standard deviation of the measurements. We use a statistical technique called the bootstrap to provide confidence intervals on our estimates of these values. The bootstrap-derived upper and lower bounds on the mean and standard deviation let you see how accurate we believe those estimates to be. (Hover the mouse over the table headers to see the confidence levels.)
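The bootstrap itself is straightforward to sketch. The percentile method below is an illustrative simplification (criterion uses a more refined bootstrap variant), and the measurement values are invented:

```python
import random
import statistics

def bootstrap_ci(samples, stat, resamples=1000, confidence=0.95, seed=0):
    """Percentile-bootstrap confidence interval for `stat(samples)`.

    Repeatedly resample the data with replacement, compute the statistic
    on each resample, and read the interval off the sorted results.
    """
    rng = random.Random(seed)
    n = len(samples)
    stats = sorted(stat([rng.choice(samples) for _ in range(n)])
                   for _ in range(resamples))
    lo = stats[int((1 - confidence) / 2 * resamples)]
    hi = stats[int((1 + confidence) / 2 * resamples)]
    return lo, stat(samples), hi

# Hypothetical measurements in milliseconds.
times_ms = [8.19, 8.20, 8.20, 8.21, 8.20, 8.22, 8.19, 8.21, 8.20, 8.23]
lo, est, hi = bootstrap_ci(times_ms, statistics.mean)
```

The tighter the data, the narrower the `[lo, hi]` interval around the estimate, which is exactly what the bound columns in the tables above convey.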

A noisy benchmarking environment can cause some or many measurements to fall far from the mean. These outlying measurements can have a significant inflationary effect on the estimate of the standard deviation. We calculate and display an estimate of the extent to which the standard deviation has been inflated by outliers.
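The inflation itself is easy to demonstrate. The comparison below only illustrates the effect; it is not criterion's actual outlier-classification or outlier-variance formula, and the measurement values are invented:

```python
import statistics

# Most measurements sit near 750 us; two noisy ones do not.
clean = [749.9, 750.0, 750.1, 750.0, 750.2, 749.8, 750.1, 750.0]
noisy = clean + [780.0, 820.0]

# Two outliers out of ten measurements inflate the standard deviation
# by orders of magnitude relative to the clean data.
sd_clean = statistics.stdev(clean)
sd_noisy = statistics.stdev(noisy)
```

When the report says outliers had "no" effect, as in the sections above, the measured standard deviation is essentially free of this kind of inflation.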