- 9:00-9:20: Prof. Vladimir Braverman:
- Title: An Optimal Algorithm for Large Frequency Moments Using O(n^{1-2/k}) Bits
- Abstract: TBA

- 9:25-9:45: Prof. David Mount:
- Title: A New Algorithm for Approximating the Euclidean Minimum Spanning Tree
- Abstract: The Euclidean minimum spanning tree is a fundamental structure defined by a set of n points in d-dimensional space. Given the high computational complexity of computing this structure exactly, the approximate version is well studied. Given a positive error parameter epsilon, the objective is to output a spanning tree whose weight is within a factor (1+epsilon) of the weight of the optimal tree. Existing approximation algorithms have running times that grow rapidly with 1/epsilon, particularly in higher dimensions. In this talk, I will present a new approximation algorithm whose dependency on epsilon is much smaller, and in fact is independent of the dimension of the space.
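As a point of reference (this is the textbook baseline, not Prof. Mount's new algorithm), the exact Euclidean MST can be computed by running Prim's algorithm on the implicit complete graph over the points; a minimal Python sketch:

```python
import math

def euclidean_mst_weight(points):
    """Exact Euclidean MST weight via Prim's algorithm on the
    implicit complete graph: O(n^2) distance evaluations, the
    baseline that approximation algorithms aim to beat."""
    n = len(points)
    if n < 2:
        return 0.0
    in_tree = [False] * n
    best = [math.inf] * n          # cheapest known edge into the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        # relax edges from u to every remaining vertex
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], math.dist(points[u], points[v]))
    return total
```

For the unit square `[(0,0), (0,1), (1,0), (1,1)]` this returns an MST weight of 3.0 (three unit edges).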

- 9:50-10:10: Prof. Samir Khuller:
- Title: To Do or Not to Do: Scheduling to Minimize Energy
- Abstract: Traditional scheduling algorithms, especially those involving job scheduling on parallel machines, assume that the machines are always available and try to schedule jobs to minimize specific job-related metrics. Since modern data centers consume massive amounts of energy, we consider job scheduling problems that take energy consumption into account by turning machines off, especially during periods of low demand. The ensuing problems relate very closely to classical covering problems such as capacitated set cover, and we discuss several recent results in this regard. This is a survey talk based on several papers with Jessica Chang, Koyel Mukherjee, and Frederic Koehler. No prior background beyond undergraduate algorithms is assumed.

- 11:00-12:00: **Distinguished Invited Speaker:** Prof. Avrim Blum, Carnegie Mellon University:
- Title: Reconstructing Preferences and Priorities from Opaque Transactions
- Abstract: There has been significant work on learning the utility functions of single agents from observing their behavior in isolation. In this talk, I will discuss the problem of learning about the utilities of a collection of agents when the only information available is some kind of overall outcome of a joint interaction or opaque transaction. For example, consider an auction of a single item where n agents each draw values from their own personal probability distributions D_i, and the only information that can be observed is the identity of the winner (highest bidder). From repeated observations, plus the ability to enter the auction (and win or lose) yourself, can you reconstruct the relevant parts of the individual distributions? Or consider a setting with multiple items where agents have combinatorial preferences, and where a seller is running a mechanism that you do not know. From observing the results of a sequence of these interactions, can you learn both the preferences of the buyers *and* the mechanism of the seller? In this talk I will discuss algorithms for both of these problems. In the process we will see connections to decision-list learning in learning theory and Kaplan-Meier estimators in medical statistics. This is joint work with Yishay Mansour and Jamie Morgenstern.
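To make the single-item setting concrete, here is a toy Monte Carlo sketch (an illustration, not an algorithm from the talk): by entering the auction with a fixed bid and recording only win/lose feedback, you estimate P(max_i V_i < bid), the CDF of the highest competing value. The `sample_others` sampler and the uniform competitor distributions in the usage note are assumptions for illustration.

```python
import random

def estimate_max_cdf(sample_others, bid, trials=20000, seed=0):
    """Estimate P(max_i V_i < bid), the CDF of the highest competing
    value, purely from win/lose feedback after entering the auction
    with a fixed bid. `sample_others` draws one profile of the other
    agents' values; their distributions stay hidden from the estimator."""
    rng = random.Random(seed)
    wins = sum(max(sample_others(rng)) < bid for _ in range(trials))
    return wins / trials
```

For example, with two competitors drawing values uniformly from [0,1], a bid of 0.5 wins with probability 0.5^2 = 0.25, and the estimate concentrates around that value.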

- 1:00-2:00: **Distinguished Invited Speaker:** Prof. Sanjeev Arora, Princeton University:
- Title: Overcoming the Intractability Bottleneck in Unsupervised Learning
- Abstract: Unsupervised learning (learning with unlabeled data) is increasingly important given today's data deluge. Most natural problems in this domain, for models such as mixture models, HMMs, graphical models, topic models, and sparse coding/dictionary learning, are NP-hard. Therefore researchers in practice use either heuristics or convex relaxations with no concrete approximation bounds. Several nonconvex heuristics work well in practice, which is also a mystery. Recently, a sequence of results has shown that rigorous approaches leading to polynomial running time are possible for several of these problems. These involve sidestepping worst-case complexity via special assumptions on the input. Some of this work (e.g., for topic models) even leads to practical running times (50x faster than previous approaches). It has even become possible to analyze nonconvex optimization heuristics such as alternating minimization or k-SVD. The talk will be a survey of these new results, including topic modeling, sparse coding, and deep learning.

- 2:30-2:50: Prof. Elaine Shi:
- Title: Circuit ORAM and the Tightness of the Goldreich-Ostrovsky Lower Bound
- Abstract: Oblivious RAM (ORAM) constructions have traditionally been measured by their bandwidth overhead. While the bandwidth overhead metric best characterizes an ORAM's performance in secure processor and cloud outsourcing scenarios, we observe that in other cryptographic applications, such as secure multi-party computation, the most relevant metric is an ORAM's circuit complexity. We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM achieves omega(D log N) total circuit size (over all protocol interactions) for memory words of D = Omega(log^2 N) bits, while achieving a negligible failure probability. Circuit ORAM outperforms all known ORAM schemes in terms of circuit complexity, both asymptotically and in practice. Empirical results suggest that Circuit ORAM yields circuits that are 30x to 60x smaller than Path ORAM for datasets of roughly 8GB; the speedup is even greater for larger data sizes. Circuit ORAM also suggests that certain stronger interpretations of the Goldreich-Ostrovsky ORAM lower bound are tight, giving an initial answer to a problem that has intrigued the community for 27 years.

- 2:55-3:15: David Harris:
- Title: The Moser-Tardos Framework with Partial Resampling
- Abstract: The resampling algorithm of Moser and Tardos is a powerful approach to developing constructive versions of the Lovasz Local Lemma. We develop a *partial* resampling approach motivated by this methodology: when a bad event holds, we resample an appropriately random *subset* of the variables that define this event, rather than the entire set as in Moser and Tardos. This is particularly useful when the bad events are determined by sums of random variables, and it leads to several improved algorithmic applications in scheduling, graph transversals, packet routing, etc.
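For contrast with the partial variant described above, here is a minimal sketch of the original full-resampling Moser-Tardos algorithm on a toy SAT instance (the bad event "clause c is violated" triggers a resample of all of c's variables; the partial-resampling variant would resample only a random subset):

```python
import random

def moser_tardos_sat(clauses, n_vars, seed=0):
    """Moser-Tardos resampling for SAT: while some clause is violated,
    resample *all* of its variables (the partial-resampling variant
    would resample only a subset). Clauses are lists of signed
    literals: +i means x_i, -i means (not x_i); variables are 1..n."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    while True:
        bad = next((c for c in clauses if not satisfied(c)), None)
        if bad is None:
            return assign
        for lit in bad:                      # full resample of the bad event
            assign[abs(lit)] = rng.random() < 0.5
```

On a small satisfiable instance such as `[[1, 2], [-1, 2], [1, -2]]` the loop terminates quickly with a satisfying assignment; the Lovasz Local Lemma conditions guarantee fast termination in the general sparse regime.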

- 3:20-3:40: Prof. MohammadTaghi Hajiaghayi:
- Title: Streaming Algorithms for Estimating the Matching Size in Planar Graphs and Beyond
- Abstract: We consider the problem of estimating the size of a maximum matching when the edges are revealed in a streaming fashion. Consider a graph G=(V,E) with n vertices and m edges. The input stream is a permutation of the edges S=(e_1,...,e_m) chosen by an adversary, and the goal is to output an estimate of the size of a maximum matching. The algorithm is only allowed to use a small amount of memory (much smaller than n). When the underlying graph is planar, we present a simple and elegant streaming algorithm that, with high probability, estimates the size of a maximum matching within a constant factor using O-tilde(n^(2/3)) space. The approach generalizes to the family of graphs with bounded arboricity, which includes, among other families, graphs with an excluded constant-size minor. To the best of our knowledge, this is the first result for estimating the size of a maximum matching in the adversarial-order streaming model (as opposed to the random-order streaming model). We circumvent the barriers inherent in the adversarial-order model by exploiting several structural properties of planar graphs, and more generally, graphs with bounded arboricity. We hope that this approach finds applications in estimating other properties of graphs in the adversarial-order streaming model. We further reduce the required memory to O-tilde(sqrt(n)) in three restricted settings: (i) when the underlying graph is a forest; (ii) when we have two passes over the stream of edges of a graph with bounded arboricity; and (iii) when the edges arrive in random order and the underlying graph has bounded arboricity. This is joint work with Esfandiari, Liaghat, Monemizadeh, and Onak, and is to appear in SODA'15.
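As a baseline (not the talk's sublinear-space algorithm), the classic one-pass greedy maximal matching 2-approximates the maximum matching size but stores up to Theta(n) matched vertices, which is exactly the memory bound the O-tilde(n^(2/3))-space result improves on:

```python
def greedy_matching_size(edge_stream):
    """One-pass greedy maximal matching: keep an edge iff both of its
    endpoints are still unmatched. Any maximal matching is at least
    half the size of a maximum matching, so this 2-approximates the
    matching size, but the `matched` set can grow to O(n) vertices."""
    matched = set()
    size = 0
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            size += 1
    return size
```

On the path 1-2-3-4 streamed as `[(1, 2), (2, 3), (3, 4)]`, greedy keeps (1,2) and (3,4) and returns 2, which here equals the maximum matching size.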

- 1:30-2:30: Prof. Michael Dinitz, Johns Hopkins University:
- Title: Explicit Expanding Expanders
- Abstract: Expanders are one of the most extensively studied classes of graphs, and they play a key role in a broad variety of applications, ranging from computational complexity to generating good coding schemes. We study the following question, motivated by recent advances in computer networking: construct an infinite sequence of expanders G_0, G_1, ..., such that for every two consecutive graphs G_i and G_{i+1}, G_{i+1} can be obtained from G_i by adding a single vertex and inserting/removing a small number of edges, which we call the expansion cost of transitioning from G_i to G_{i+1}. This question is very natural, e.g., in the context of datacenter networks, where the vertices represent racks of servers and the expansion cost captures the amount of rewiring needed when adding another rack to the network. We present an explicit construction of d-regular expanders with expansion cost at most 5d/2, for any d >= 6. Our construction leverages the notion of a "2-lift" of a graph. This operation was first analyzed by Bilu and Linial, who repeatedly applied 2-lifts to construct an infinite family of expanders that double in size from one expander to the next. Our construction can be viewed as a way to "interpolate" between Bilu-Linial expanders with low expansion cost while preserving good edge expansion throughout. Joint work with Michael Schapira and Asaf Valadarsky.
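The 2-lift operation itself is simple to state; a minimal sketch, where the +1/-1 edge signs are taken as input (Bilu and Linial show that signs preserving expansion always exist):

```python
def two_lift(edges, signs):
    """2-lift of a graph: each vertex v splits into (v, 0) and (v, 1).
    A +1-signed edge {u, v} lifts to the two parallel edges
    (u,0)-(v,0) and (u,1)-(v,1); a -1-signed edge lifts to the two
    crossing edges (u,0)-(v,1) and (u,1)-(v,0). Either way, every
    lifted vertex keeps the degree of its original, so d-regularity
    is preserved while the graph doubles in size."""
    lifted = []
    for (u, v), s in zip(edges, signs):
        if s == +1:
            lifted.append(((u, 0), (v, 0)))
            lifted.append(((u, 1), (v, 1)))
        else:
            lifted.append(((u, 0), (v, 1)))
            lifted.append(((u, 1), (v, 0)))
    return lifted
```

For example, lifting a triangle with signs `[+1, -1, +1]` yields a 2-regular graph on 6 vertices; the expansion of the lift depends entirely on the choice of signs, which is where the Bilu-Linial analysis comes in.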