A longstanding goal of Bayesian machine learning research is to separate model description from inference implementation while keeping pace with the tremendous growth in the size and complexity of models and datasets. Advances in three areas over the last decade (automatic differentiation, Monte Carlo integration, and stochastic variational inference) have enabled unprecedented progress toward that goal in the form of deep probabilistic programming languages like Pyro. Pyro allows Bayesian models to be specified generatively as Python functions that invoke random samplers, and it provides both scalable black-box inference algorithms applicable to a wide variety of models and modular components for implementing custom inference algorithms. This talk will explain what deep probabilistic programming with Pyro is, and when and how you should use it. Our running example will be a semi-supervised labeling problem from single-cell transcriptomics: scANVI.