Massive datasets are becoming increasingly common. What useful computations can be performed on a dataset when reading all of it is prohibitively expensive? This question, fundamental to several fields, is at the heart of the research area called sublinear-time algorithms, which has provided important insights into fast approximate computation.

In this talk, we will consider three types of computational tasks central to sublinear-time algorithms: testing, learning, and approximation. We will see examples of sublinear-time algorithms in several domains. The algorithms themselves are typically simple and efficient, but their analysis requires insights into basic combinatorial, algebraic, and geometric questions. We will also discuss new directions in sublinear-time algorithms, including new computational tasks, new measures for accuracy guarantees, and new models for data access. These directions enable applications of sublinear-time algorithms in privacy, in the analysis of real-valued data, and in situations where the data is noisy or incomplete.
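To make the idea of sublinear-time approximation concrete, here is a minimal sketch (not from the talk, and with hypothetical names) of a classic sampling-based estimator: approximating the mean of a huge array of values in [0, 1] by reading only a number of entries that depends on the accuracy parameters, not on the dataset size. The sample-size bound follows from Hoeffding's inequality.

```python
import math
import random

def approx_mean(data, eps=0.05, delta=0.01):
    """Estimate the mean of values in [0, 1] by random sampling.

    Reads only O(log(1/delta) / eps^2) entries, independent of len(data).
    By Hoeffding's inequality, the estimate is within eps of the true
    mean with probability at least 1 - delta.
    """
    # Number of samples needed for an additive eps-approximation
    # with failure probability at most delta.
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    # Sample uniformly at random with replacement; only m array reads.
    sample = [data[random.randrange(len(data))] for _ in range(m)]
    return sum(sample) / m
```

The running time is independent of the input size, which is the defining feature of a sublinear-time algorithm: accuracy is traded for the ability to avoid reading most of the data.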