Description
Randomized algorithms and probabilistic data structures are algorithmically efficient and can provide shockingly good practical results. I will give a practical introduction, with live demos and bad jokes, to this fascinating algorithmic niche. I will conclude with some discussion of how our group has applied this to large sequencing data sets (although this will not be the focus of the talk).
Abstract:
I propose to start with Python implementations of most of the data structures and algorithms mentioned in this excellent blog post:
http://highlyscalable.wordpress.com/2012/05/01/probabilistic-structures-web-analytics-data-mining/
and also discuss skip lists and any other random algorithms that catch my fancy. I'll put everything together in an IPython notebook and add visualizations as appropriate.
I'll finish with some discussion of how we've put these approaches to work in my lab's research, which focuses on compressive approaches to large data sets (and is regularly featured in my Python-ic blog, http://ivory.idyll.org/blog/).
Misc talk links
Github repo with IPython Notebooks in it
Overviews and linkfoo
Wikipedia's category page for Probabilistic Data Structures
The Highly Scalable Blog on Probabilistic Data Structures for Web Analytics and Data Mining
Specific References
SkipLists:
John Shipman's excellent writeup
William Pugh's original article
The Hacker News (oops! not reddit, as I said in the talk) source for the quote about putting a gun to someone's head and asking them to write a log-time algorithm for storing stuff: https://news.ycombinator.com/item?id=2670632
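For reference, here is a minimal skip list sketch in Python (not the implementation from the notebook above, just an illustration of the idea): each node gets a randomly chosen tower height, so search and insert take expected O(log n) time without any rebalancing.

```python
import random

class SkipNode:
    """A node holding a value and a tower of forward pointers."""
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level

class SkipList:
    """Minimal skip list: expected O(log n) search and insert."""
    MAX_LEVEL = 16
    P = 0.5  # probability of promoting a node one more level

    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        # flip coins to pick a tower height
        level = 1
        while random.random() < self.P and level < self.MAX_LEVEL:
            level += 1
        return level

    def insert(self, value):
        # find the rightmost node before `value` at each level
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
            update[i] = node
        level = self._random_level()
        self.level = max(self.level, level)
        new = SkipNode(value, level)
        for i in range(level):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def __contains__(self, value):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].value < value:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.value == value
```

The randomization is the whole trick: on average half the nodes have height 1, a quarter height 2, and so on, which gives the same expected shape as a balanced tree with none of the rotation bookkeeping.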
HyperLogLog:
HyperLogLog counter IPython Notebook
Aggregate Knowledge's EXCELLENT blog post on HyperLogLog. The section on Big Pattern Observables is truly fantastic :)
Flajolet et al. is the original paper. It gets a bit technical in the middle but the discussions are great.
Nick Johnson's blog post on cardinality estimation
MetaMarkets' blog post on cardinality counting
More High Scalability blog posts, this one by Matt Abrams
The obligatory Stack Overflow Q&A
Vasily Evseenko's git repo https://github.com/svpcom/hyperloglog, forked from Nelson Goncalves's git repo, https://github.com/goncalvesnelson/Log-Log-Sketch, served as the source for my IPython Notebook.
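To make the core idea concrete, here is a stripped-down HyperLogLog sketch (my own illustration, not the code from the repos above; it implements only the basic estimator from Flajolet et al., with no small- or large-range corrections): hash each item, use the first b bits to pick a register, and keep the longest run of leading zero bits seen in the rest.

```python
import hashlib

class HyperLogLog:
    """Minimal HyperLogLog cardinality estimator (basic estimator only)."""
    def __init__(self, b=10):
        self.b = b                  # use m = 2**b registers
        self.m = 1 << b
        self.registers = [0] * self.m
        # alpha_m bias-correction constant (Flajolet et al., m >= 128)
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        # 64 bits of a cryptographic hash, for a cheap uniform hash
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], 'big')
        j = h & (self.m - 1)        # low b bits choose the register
        w = h >> self.b             # remaining 64 - b bits
        # rho = 1-indexed position of the leftmost 1-bit in w
        rho = (64 - self.b) - w.bit_length() + 1
        self.registers[j] = max(self.registers[j], rho)

    def count(self):
        # harmonic mean of 2**register across all registers
        z = 1.0 / sum(2.0 ** -r for r in self.registers)
        return self.alpha * self.m * self.m * z
```

With b=10 (1024 registers, i.e. about 1 KB of state) the standard error is roughly 1.04/sqrt(1024), or about 3%, no matter how many distinct items you feed it.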
Bloom Filters:
The Wikipedia page is pretty good.
Everything I know about Bloom filters comes from my research.
I briefly mentioned the CountMin Sketch, which extends the basic Bloom filter approach to counting frequency distributions of objects.
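For completeness, a minimal Bloom filter sketch (again my own illustration, not our khmer implementation): set k hashed bit positions on add, and report membership only if all k bits are set. You get no false negatives and a tunable false-positive rate; the CountMin Sketch mentioned above follows the same pattern with small counters in place of single bits.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter sized for n expected items at a target
    false-positive rate. No false negatives; false positives possible."""
    def __init__(self, n, fp_rate=0.01):
        # standard sizing formulas: m bits, k hash functions
        self.m = math.ceil(-n * math.log(fp_rate) / (math.log(2) ** 2))
        self.k = max(1, round(self.m / n * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _indices(self, item):
        # double hashing: derive k bit positions from two base hashes
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], 'big')
        h2 = int.from_bytes(d[8:16], 'big') | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indices(item))
```

Storing a de Bruijn graph this way means you never miss a k-mer you inserted; you only occasionally hallucinate one that was never there, and you size m and k to make that as rare as you can afford.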
Other nifty things to look at
Rapidly-exploring random trees
StackOverflow question on Bloom-filter like structures that go the other way
A survey of probabilistic data structures
K-Minimum Values over at Aggregate Knowledge again
Set operations on HyperLogLog counters, again over at Aggregate Knowledge.
Using SkipLists to calculate an efficient running median
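The K-Minimum Values idea linked above is simple enough to sketch in a few lines (my own illustration, based on the standard estimator): keep the k smallest normalized hash values seen; if k distinct items occupy the interval [0, kth-smallest], the overall density estimates the distinct count.

```python
import bisect
import hashlib

class KMinValues:
    """K-Minimum Values distinct-count sketch: track the k smallest
    hash values, normalized to [0, 1)."""
    def __init__(self, k=256):
        self.k = k
        self.mins = []  # sorted, at most k distinct values in [0, 1)

    def add(self, item):
        # hash to an (approximately) uniform value in [0, 1)
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], 'big')
        x = h / 2.0 ** 64
        i = bisect.bisect_left(self.mins, x)
        if i < len(self.mins) and self.mins[i] == x:
            return                      # duplicate item, already counted
        if len(self.mins) < self.k:
            self.mins.insert(i, x)
        elif x < self.mins[-1]:
            self.mins.insert(i, x)      # displace the current k-th minimum
            self.mins.pop()

    def count(self):
        if len(self.mins) < self.k:
            return float(len(self.mins))  # exact below k distinct items
        return (self.k - 1) / self.mins[-1]
```

A nice property, covered in the Aggregate Knowledge posts, is that two KMV sketches can be merged or intersected, which gives you set operations on the underlying streams for free.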
My research
A fairly readable (?) grant on Big Data in sequencing data sets
A less readable ;) grant on "infinite assembly"
In addition to our published paper on using Bloom filters to store de Bruijn graphs, you might be interested in:
Our preprint on streaming lossy compression of sequencing data (aka Digital Normalization)
Our use of these techniques to assemble the heck out of large metagenomic data from soil
A chapter on optimizing our khmer software.
--titus