We'll start with a visual introduction to Morse theory, which relates the topology (shape) of a manifold (space) to the behavior of smooth, real-valued functions on that manifold. We'll then apply this relationship in both directions. First, we'll consider the function on the space of k-planes in R^m given by squared distance to a fixed point cloud, leading to a visceral understanding of the gradient dynamics of PCA as learned by a linear autoencoder. Second, we'll consider the loss function of a deep neural network. We'll explain how the Morse homology of Euclidean space forces geometric relationships between critical points, establishing a theoretical foundation for fast geometric ensembling that in turn suggests new algorithms.
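The first application can be made concrete with a small numerical sketch (all variable names are illustrative, not from the source): a linear autoencoder with tied weights, trained by plain gradient descent on squared reconstruction error, flows toward the top-k principal subspace of a point cloud, which is the gradient dynamics of PCA alluded to above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 10, 2, 500  # ambient dim, code dim, number of points

# Point cloud in R^m with k dominant directions (an assumed toy dataset).
basis = np.linalg.qr(rng.standard_normal((m, m)))[0]
scales = np.array([5.0, 3.0] + [0.1] * (m - k))
X = (rng.standard_normal((n, m)) * scales) @ basis.T  # rows are points

# Tied-weight linear autoencoder: x -> W^T x -> W W^T x.
# Loss L(W) = (1/n) * ||X W W^T - X||_F^2; we descend its exact gradient.
W = 0.1 * rng.standard_normal((m, k))
lr = 1e-3
for _ in range(3000):
    E = X @ W @ W.T - X                      # reconstruction residual
    grad = 2 * (X.T @ E @ W + E.T @ X @ W) / n
    W -= lr * grad

# Compare the learned column span of W with the top-k PCA subspace
# via principal angles (singular values of Q^T U, all near 1 if aligned).
C = X.T @ X / n
_, evecs = np.linalg.eigh(C)
U = evecs[:, -k:]                            # top-k eigenvectors
Q = np.linalg.qr(W)[0]                       # orthonormal basis for span(W)
alignment = np.linalg.svd(Q.T @ U, compute_uv=False).min()
print(f"subspace alignment: {alignment:.4f}")
```

At the critical points of this loss, the columns of W span invariant subspaces of the data covariance; gradient descent selects the top-k subspace, so the printed alignment should be close to 1.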