Daniel M. Roy

Assistant Professor

Department of Computer and Mathematical Sciences
University of Toronto Scarborough

Department of Statistical Sciences
Department of Computer Science  (by courtesy)
University of Toronto

Jump to:   Teaching | Research Articles | News | Probabilistic Programming | Curriculum Vitæ | Contact

join toronto

I am seeking students at all levels with strong quantitative backgrounds who are interested in foundational problems at the intersection of statistics and computer science. I am also seeking qualified postdoctoral researchers for two- and three-year positions. Please send me your CV and your best research paper or project as PDFs.


My research blends computer science, statistics, and probability theory; I study "probabilistic programming" and develop computational perspectives on fundamental ideas in probability theory and statistics. I am particularly interested in: representation theorems that connect computability, complexity, and probabilistic structures; stochastic processes, the use of recursion to define stochastic processes, and applications to nonparametric Bayesian statistics; and the complexity of probabilistic and statistical inference, especially in the context of probabilistic programming. Ultimately, I am motivated by the long-term goal of making lasting contributions to our understanding of complex adaptive systems and especially Artificial Intelligence.


teaching

STA D68   (Winter 2015)
Advanced Machine Learning and Data Mining

STA 4513  (Fall 2014)
Statistical models of networks, graphs, and other relational structures


news

AUG 2014
I am co-organizing the 3rd NIPS Workshop on Probabilistic Programming, which will be held this December 13 in Montreal.

MAY 2014
I have accepted a tenure-track assistant professorship in Statistics at the University of Toronto. I will be starting this Fall.

JAN 2014
I will be giving an invited talk at the Mathematical Foundations of Programming Semantics conference this June at Cornell.

OCT 2013
I will be serving on the program committee for Computability and Complexity in Analysis, which will be held in Darmstadt this coming July.


probabilistic programming

Programs can be used to give compact representations of distributions: in order to represent a distribution, one simply gives a program that would generate an exact sample were the random number generator to produce realizations of independent and identically distributed random variables. This approach to representing distributions by probabilistic programs works not only for simple distributions on numbers like Poissons, Gaussians, etc., and combinations thereof, but also for more exotic distributions on, e.g., phrases in natural language, rendered 2D images of 3D scenes, and climate sensor measurements.
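As a concrete, purely illustrative sketch of this idea, the following ordinary Python identifies a distribution with a function that samples from it; the function names are ours, not those of any particular probabilistic programming system:

```python
import random

# A probabilistic program *is* its sampler: calling the function draws
# an exact sample from the distribution the program represents.

def flip(p=0.5):
    """A Bernoulli(p) draw."""
    return random.random() < p

def geometric(p):
    """Recursion defines the distribution: count flips until success."""
    return 1 if flip(p) else 1 + geometric(p)

def mixture():
    """Composition: a simple two-component Gaussian mixture."""
    if flip(0.3):
        return random.gauss(-2.0, 1.0)
    return random.gauss(2.0, 1.0)
```

Note how `geometric` uses recursion, rather than a closed-form density, to specify its distribution; richer recursive programs define correspondingly richer stochastic processes.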

Probabilistic programming systems support statistical inference on models defined by probabilistic programs. By constraining some variables of a program (e.g., simulated sensor readings in some climate model) and studying the conditional distribution of other variables (e.g., the parameters of the climate model), we can identify plausible variable settings that agree with the constraints. Conditional inferences like this would allow us to, e.g., build predictive text systems for mobile phones, guess the 3D shape of an object from only a photograph, or study the underlying mechanisms driving observed climate measurements.
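The simplest way to realize such conditioning is rejection sampling: run the program repeatedly and keep only the executions that satisfy the constraint. The toy rain/wet-grass model and its numbers below are our own illustration, not part of any existing system:

```python
import random

def flip(p):
    return random.random() < p

def model():
    """Latent 'rain' and observed 'wet grass' in a toy weather model."""
    rain = flip(0.2)
    wet = flip(0.9) if rain else flip(0.1)
    return rain, wet

def condition(model, constraint, n=100000):
    """Rejection sampling: keep only runs agreeing with the constraint,
    yielding samples from the conditional distribution."""
    return [s for s in (model() for _ in range(n)) if constraint(s)]

samples = condition(model, lambda s: s[1])  # condition on wet == True
posterior_rain = sum(r for r, _ in samples) / len(samples)
# P(rain | wet) = 0.18 / 0.26, approximately 0.69
```

Rejection sampling is exact but wasteful when the constraint is rarely satisfied, which is one reason practical systems turn to more sophisticated inference algorithms.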

Probabilistic programming systems for machine learning and statistics are still in their infancy, and there are many interesting theoretical and applied problems yet to be tackled. My own work focuses on theoretical questions around representing stochastic processes and on the computational complexity of sampling-based approaches to inference. I was involved in the definition of the probabilistic programming language Church and its first implementation, MIT-Church, a Markov chain Monte Carlo algorithm operating on the space of execution histories of an interpreter. Some of my key theoretical work includes a study of the computability of conditional probability and of de Finetti measures, both central notions in Bayesian statistics. Readers looking for an overview of these results are directed to the introduction of my doctoral dissertation. A less technical description of a probabilistic programming approach to artificial intelligence can be found in a recent book chapter on the legacies of Alan Turing, co-authored with Freer and Tenenbaum.
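To give a flavor of sampling over executions, without claiming to reproduce MIT-Church itself, the following minimal sketch runs an independence Metropolis-Hastings chain on a single random choice of a toy model: it resimulates the choice from its prior and accepts or rejects based on the likelihood of the observation. All names and numbers are our own illustration:

```python
import random

P_RAIN = 0.2

def lik_wet(rain):
    """P(wet | rain) in the toy model."""
    return 0.9 if rain else 0.1

def mh_rain_given_wet(iters=50000):
    """Independence MH targeting P(rain | wet = True): resample the
    latent choice from its prior, accept with the likelihood ratio."""
    rain = random.random() < P_RAIN          # initial execution
    samples = []
    for _ in range(iters):
        proposal = random.random() < P_RAIN  # resimulate from the prior
        accept = min(1.0, lik_wet(proposal) / lik_wet(rain))
        if random.random() < accept:
            rain = proposal
        samples.append(rain)
    # Empirical frequency approximates P(rain | wet) = 0.18/0.26
    return sum(samples) / len(samples)
```

Trace-based systems generalize this pattern: the state of the chain is an entire execution history, and proposals resimulate parts of that history.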

More information on probabilistic programming can be found on probabilistic-programming.org, a wiki that I maintain. In particular, look at the list of research articles and papers on probabilistic programming and the tutorials.

NIPS Workshop on Probabilistic Programming

I co-organized the first workshop on probabilistic programming for statistics and machine learning at NIPS*2008 (with Vikash Mansinghka, John Winn, David McAllester and Josh Tenenbaum). Four years later, I co-organized the second workshop on probabilistic programming at NIPS*2012 (with Vikash Mansinghka and Noah Goodman).


My name is Daniel M. Roy (or simply, Dan). Before joining the University of Toronto, I was a Research Fellow of Emmanuel College at the University of Cambridge and a member of the Machine Learning Group, headed by Zoubin Ghahramani, part of the Computational and Biological Learning Lab in the Department of Engineering.

I am a graduate of the EECS PhD program at the Massachusetts Institute of Technology, where I worked in the Computer Science and Artificial Intelligence Laboratory (CSAIL), advised by Leslie Kaelbling. I also collaborated with members of Josh Tenenbaum's Computational Cognitive Science group.

You may find my abbreviated curriculum vitæ online.


Nate Ackerman, William Beebee, Keith Bonawitz, Cristian Cadar, Hal Daumé III, Brian Demsky, Daniel Dumitran, Cameron Freer, Zoubin Ghahramani, Noah Goodman, Creighton Heaukulani, Eric Jonas, Leslie Kaelbling, Charles Kemp, Balaji Lakshminarayanan, Tudor Leu, James Lloyd, Vikash Mansinghka, Peter Orbanz, Ryan Rifkin, Martin Rinard, Virginia Savova, Lauren Schmidt, David Sontag, Yee Whye Teh, Josh Tenenbaum, David Wingate

chalk-talk-style slides

Several template .tex files for producing slides for mathematical talks that more closely mimic the chalk-talk aesthetic.


preprints and working papers

Convergence of Sequential Monte Carlo-based Sampling Methods
(with Jonathan Huggins)

The continuum-of-urns scheme,
generalized beta and Indian buffet processes,
and hierarchies thereof

Gibbs-type Indian Buffet Processes
(with Creighton Heaukulani)
Preprint available upon request.

Computability and a.e.-continuous disintegration
(with Nate Ackerman and Cameron Freer)
Preprint available upon request.

The combinatorial structure of negative binomial processes
(with Creighton Heaukulani)

On the computability of conditional probability
(with Nate Ackerman and Cameron Freer)

Chapter 5. Distributions on data structures: a case study
(Mondrian process theory)

Those seeking a more formal presentation of the Mondrian process than the NIPS paper should see Chapter 5 of my dissertation (linked above). A revised version of these results, together with additional results, is in preparation.

Note: Articles distinguished by "(with ...)" have alphabetical author lists, as is the convention in mathematics and theoretical computer science.


Computability, inference and modeling in probabilistic programming
Daniel M. Roy
Ph.D. thesis, Massachusetts Institute of Technology, 2011.
MIT/EECS George M. Sprowls Doctoral Dissertation Award

research articles and publications

Particle Gibbs for Bayesian Additive Regression Trees
Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh
Proc. Artificial Intelligence and Statistics (AISTATS), 2015.
[ preprint ]

Mondrian Forests: Efficient Online Random Forests
Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh
Adv. Neural Information Processing Systems 27 (NIPS), 2014.
[ preprint on arXiv:1406.2673, code ]

Bayesian Models of Graphs, Arrays and Other Exchangeable Random Structures
(with Peter Orbanz)
IEEE Trans. Pattern Anal. Mach. Intelligence (PAMI), 2014.
Slides for talk at CIMAT

Top-down particle filtering for Bayesian decision trees
Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh
Proc. Int. Conf. on Machine Learning  (ICML), 2013.
[ code ]

Towards common-sense reasoning via conditional simulation:
Legacies of Turing in Artificial Intelligence

(with Cameron Freer and Josh Tenenbaum)
Turing's Legacy (ASL Lecture Notes in Logic), 2012.

Random function priors for exchangeable arrays with applications to graphs and relational data
James Lloyd, Peter Orbanz, Zoubin Ghahramani, Daniel M. Roy
Adv. Neural Information Processing Systems 25 (NIPS), 2012.

Computable de Finetti measures
(with Cameron Freer)
Annals of Pure and Applied Logic, 2012.

Complexity of Inference in Latent Dirichlet Allocation
David Sontag and Daniel Roy
Adv. Neural Information Processing Systems 24 (NIPS), 2011.

On the computability and complexity of Bayesian reasoning
NIPS Philosophy and Machine Learning Workshop, 2011.

Noncomputable conditional distributions
(with Nate Ackerman and Cameron Freer)
Proc. Logic in Computer Science (LICS), 2011.

Probabilistically Accurate Program Transformations
Sasa Misailovic, Daniel M. Roy, and Martin C. Rinard
Proc. Int. Static Analysis Symp. (SAS), 2011.

Bayesian Policy Search with Policy Priors
David Wingate, Noah D. Goodman, Daniel M. Roy, Leslie P. Kaelbling, and Joshua B. Tenenbaum
Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI), 2011.

When are probabilistic programs probably computationally tractable?
(with Cameron Freer and Vikash Mansinghka)
NIPS Workshop on Monte Carlo Methods for Modern Applications, 2010.

Posterior distributions are computable from predictive distributions
(with Cameron Freer)
Proc. Artificial Intelligence and Statistics (AISTATS), 2010.

Complexity of Inference in Topic Models
David Sontag and Daniel Roy
NIPS Workshop on Applications for Topic Models: Text and Beyond, 2009.

The Infinite Latent Events Model
David Wingate, Noah D. Goodman, Daniel M. Roy, and Joshua B. Tenenbaum
Proc. Uncertainty in Artificial Intelligence (UAI), 2009.

Computable exchangeable sequences have computable de Finetti measures
(with Cameron Freer)
Proc. Computability in Europe (CiE), 2009.

Exact and Approximate Sampling by Systematic Stochastic Search
Vikash Mansinghka, Daniel M. Roy, Eric Jonas, and Joshua Tenenbaum
Proc. Artificial Intelligence and Statistics (AISTATS), 2009.

The Mondrian Process
(with Yee Whye Teh)
Adv. Neural Information Processing Systems 21 (NIPS), 2009.

Video animation of the Mondrian process as one zooms into the origin (under a beta Lévy rate measure at time t=1.0). See also the time evolution of a Mondrian process on the plane as we zoom in at a rate proportional to time. In both cases, the colors are chosen at random from a palette. These animations were produced by Yee Whye Teh in MATLAB. For now, we reserve copyright, but please email me and we will most likely be happy to let you use them.

A stochastic programming perspective on nonparametric Bayes
Daniel M. Roy, Vikash Mansinghka, Noah Goodman, and Joshua Tenenbaum
ICML Workshop on Nonparametric Bayes, 2008.

Church: a language for generative models
Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua Tenenbaum
Proc. Uncertainty in Artificial Intelligence (UAI), 2008.

Bayesian Agglomerative Clustering with Coalescents
Yee Whye Teh, Hal Daumé III, and Daniel M. Roy
Adv. Neural Information Processing Systems 20 (NIPS), 2008.

Discovering Syntactic Hierarchies
Virginia Savova, Daniel M. Roy, Lauren Schmidt, and Joshua B. Tenenbaum
Proc. Cognitive Science (COGSCI), 2007.

AClass: An online algorithm for generative classification
Vikash K. Mansinghka, Daniel M. Roy, Ryan Rifkin, and Joshua B. Tenenbaum
Proc. Artificial Intelligence and Statistics (AISTATS), 2007.

Efficient Bayesian Task-level Transfer Learning
Daniel M. Roy and Leslie P. Kaelbling
Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI), 2007.

Learning Annotated Hierarchies from Relational Data
Daniel M. Roy, Charles Kemp, Vikash Mansinghka, and Joshua B. Tenenbaum
Adv. Neural Information Processing Systems 19 (NIPS), 2007.

Clustered Naive Bayes
MEng thesis, Massachusetts Institute of Technology, 2006.

Enhancing Server Availability and Security Through Failure-Oblivious Computing
Martin Rinard, Cristian Cadar, Daniel Dumitran, Daniel M. Roy, Tudor Leu, and William S. Beebee, Jr.
Proc. Operating Systems Design and Implementation (OSDI), 2004.

A Dynamic Technique for Eliminating Buffer Overflow Vulnerabilities (and Other Memory Errors)
Martin Rinard, Cristian Cadar, Daniel Dumitran, Daniel M. Roy, and Tudor Leu
Proc. Annual Computer Security Applications Conference (ACSAC), 2004.

Efficient Specification-Assisted Error Localization
Brian Demsky, Cristian Cadar, Daniel M. Roy, and Martin C. Rinard
Proc. Workshop on Dynamic Analysis (WODA), 2004.

Efficient Specification-Assisted Error Localization and Correction
Brian Demsky, Cristian Cadar, Daniel M. Roy, and Martin C. Rinard
MIT CSAIL Technical Report 927. November, 2003.

Implementation of Constraint Systems for Useless Variable Elimination
(advised by Mitchell Wand)
Research Science Institute. August, 1998.


I believe that errata, clarifications, missed citations, links to follow-on work, retractions, and other "marginalia" are important, but underappreciated, contributions to the scientific literature. Ultimately, the nature of a scientific document needs to be rethought, but until then I am slowly collecting marginalia in a simple wiki. I encourage everyone to host similar wikis, or contribute to this one.

Marginalia wiki.



contact

recruiters: no, thank you.

(research-only email)
droy, utstat toronto edu

(teaching-only email)
UTSC: daniel.roy, utsc utoronto ca
 StG: daniel.roy, utoronto ca

(UTSC office)
Dept. of Computer and
    Mathematical Sciences
Univ. of Toronto Scarborough
IC Building, Room 462
1265 Military Trail
Toronto, ON
M1C 1A4
t: +1 (416) 287 5653

(StG office)
Dept. of Statistical Sciences
Univ. of Toronto
Sidney Smith Hall, 6026C
100 St. George St.
Toronto, ON
M5S 3G3
t: +1 (416) 978 4455
f: +1 (416) 978 5133

(mobile phone)
CA: +1 (647) 993 6380
UK: +44 (0) 7552 784 664
US: +1 (310) 913 6380