Randomized algorithms and probabilistic data structures are remarkably efficient and can provide shockingly good practical results. I will give a practical introduction, with live demos and bad jokes, to this fascinating algorithmic niche. I will conclude with some discussion of how our group has applied this to large sequencing data sets (although this will not be the focus of the talk).
I propose to start with Python implementations of most of the data structures & algorithms mentioned in this excellent blog post:
and also discuss skip lists and any other random algorithms that catch my fancy. I'll put everything together in an IPython notebook and add visualizations as appropriate.
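As a taste of the skip-list material, here's a toy implementation (my own sketch for this writeup, not code from the notebook; the simplified insert-and-search-only design and all names are my choices). Each key gets a random tower height, and search drops down a level whenever the next key overshoots:

```python
# Toy skip list: insert + membership only (illustration, not production code).
import random

class Node:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height   # one forward pointer per level

class SkipList:
    MAX_HEIGHT = 16

    def __init__(self):
        # sentinel head node, conceptually smaller than every key
        self.head = Node(None, self.MAX_HEIGHT)

    def _random_height(self):
        # flip coins: each extra level survives with probability 1/2
        h = 1
        while h < self.MAX_HEIGHT and random.random() < 0.5:
            h += 1
        return h

    def insert(self, key):
        # find, at every level, the last node whose key is < key
        update = [self.head] * self.MAX_HEIGHT
        node = self.head
        for level in range(self.MAX_HEIGHT - 1, -1, -1):
            while node.next[level] and node.next[level].key < key:
                node = node.next[level]
            update[level] = node
        new = Node(key, self._random_height())
        for level in range(len(new.next)):
            new.next[level] = update[level].next[level]
            update[level].next[level] = new

    def __contains__(self, key):
        node = self.head
        for level in range(self.MAX_HEIGHT - 1, -1, -1):
            while node.next[level] and node.next[level].key < key:
                node = node.next[level]
        candidate = node.next[0]
        return candidate is not None and candidate.key == key
```

The randomness is the whole trick: the coin-flipped tower heights give expected O(log n) search without any rebalancing logic.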
I'll finish with some discussion of how we've put these approaches to work in my lab's research, which focuses on compressive approaches to large data sets (and is regularly featured in my Python-ic blog, http://ivory.idyll.org/blog/).
The Hacker News (oops!) source for the quote I attributed to reddit, about putting a gun to someone's head and asking them to write a log-time algorithm for storing stuff: https://news.ycombinator.com/item?id=2670632
Aggregate Knowledge's EXCELLENT blog post on HyperLogLog. The section on Big Pattern Observables is truly fantastic :)
Flajolet et al. is the original HyperLogLog paper. It gets a bit technical in the middle, but the discussions are great.
Vasily Evseenko's git repo https://github.com/svpcom/hyperloglog, forked from Nelson Goncalves's git repo, https://github.com/goncalvesnelson/Log-Log-Sketch, served as the source for my IPython Notebook.
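For reference, the core of HyperLogLog fits on one page. Here's a minimal sketch of my own (loosely in the spirit of those repos, but not taken from them; the class name, parameters, and hash choice are all my assumptions): hash each item, use the low p bits to pick a register, and record the longest run of leading zeros seen in the remaining bits.

```python
# Minimal HyperLogLog sketch (illustration only).
import hashlib
import math

class HyperLogLog:
    def __init__(self, p=12):
        self.p = p
        self.m = 1 << p                 # number of registers
        self.registers = [0] * self.m
        # bias-correction constant, valid for m >= 128
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) & ((1 << 64) - 1)
        j = h & (self.m - 1)            # low p bits choose a register
        w = h >> self.p                 # remaining 64 - p bits
        # rank = 1-based position of the leftmost 1-bit in w
        rank = (64 - self.p) - w.bit_length() + 1
        self.registers[j] = max(self.registers[j], rank)

    def cardinality(self):
        # harmonic mean of 2**register across all registers
        est = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        # small-range correction: fall back to linear counting
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:
            est = self.m * math.log(self.m / zeros)
        return est
```

With p=12 (4096 registers, a few KB of memory) the standard error is roughly 1.04/sqrt(4096), i.e. under 2%, which is the "shockingly good" part.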
The Wikipedia page is pretty good.
Everything I know about Bloom filters comes from my research.
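For readers who haven't met one, a Bloom filter is small enough to sketch inline. This is a toy version of my own for illustration (a plain bit list with double-hashed probes; it is not the code we use in our research):

```python
# Minimal Bloom filter sketch (illustration only).
import hashlib

class BloomFilter:
    def __init__(self, size=10000, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indexes(self, item):
        # Kirsch-Mitzenmacher double hashing: derive k probe positions
        # from two 64-bit halves of a single SHA-256 digest.
        digest = hashlib.sha256(str(item).encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, item):
        for ix in self._indexes(item):
            self.bits[ix] = True

    def __contains__(self, item):
        # no false negatives; false positives at a tunable rate
        return all(self.bits[ix] for ix in self._indexes(item))
```

The asymmetry is the point: "not present" answers are always right, and the false-positive rate for "present" answers is a simple function of the bit-array size, the number of hashes, and how many items you've added.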
I briefly mentioned the CountMin Sketch, which extends the basic Bloom filter approach to counting the frequency distribution of objects.
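The idea fits in a few lines. Here's a minimal Count-Min sketch of my own (not the talk's notebook code; the grid dimensions and hashing scheme are my assumptions): a depth-by-width grid of counters, one independent hash per row, with the frequency estimate taken as the minimum over rows.

```python
# Minimal Count-Min sketch (illustration only).
import hashlib

class CountMinSketch:
    def __init__(self, width=1000, depth=5):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # salt the hash with the row number to get depth independent hashes
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        # collisions can only inflate a counter, so the minimum over rows
        # never under-counts the true frequency
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))
```

Like the Bloom filter, the error is one-sided: estimates are never too low, and the over-count is bounded by the table dimensions.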
Other nifty things to look at
Set operations on HyperLogLog counters, again over at Aggregate Knowledge.
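The key trick from that post can be sketched in a couple of lines (my own toy code, not theirs): two HyperLogLog counters built with the same register count can be unioned losslessly by taking the elementwise max of their registers, giving you the cardinality of the union without ever revisiting the raw data.

```python
# Union of two HyperLogLog register arrays (illustration only).
def hll_union(registers_a, registers_b):
    """Merge two same-sized HLL register arrays by elementwise max."""
    assert len(registers_a) == len(registers_b)
    return [max(a, b) for a, b in zip(registers_a, registers_b)]
```

Intersections are less tidy: they're typically estimated via inclusion-exclusion on the unioned counters, which the Aggregate Knowledge post discusses.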
In addition to our published paper on using Bloom filters to store de Bruijn graphs, you might be interested in:
Our preprint on streaming lossy compression of sequencing data (aka Digital Normalization)