Date: 4th December, 3:30 pm (IST).
Algorithms for Massive Datasets: Efficiency and Corruption-Resilience
University of Haifa.
Harvesting and analyzing the massive amount of information that is generated and stored daily has the potential to deepen and enhance our understanding of major aspects of human society and the economy. Since faulty datasets are ubiquitous, the resilience of algorithms to imperfections in data is as crucial as their speed, memory efficiency, and accuracy. In this talk, I shall present some of my recent work on corruption-resilient and efficient algorithms for massive datasets.
I will begin the talk with an overview of my research on this topic. I will then focus on a new sensitivity definition for graph algorithms. Our definition is motivated by the observation that input representations of large graphs may not always contain full information about those graphs. This necessitates the design of graph algorithms that, despite being given only a (large) subgraph as input, output solutions close to those they would output given the whole graph. Our sensitivity definition formalizes this desirable feature. After discussing the basic properties of the definition, I will give an overview of our main results and present the key ideas used in the design and analysis of our low-sensitivity algorithm for the global minimum cut problem.
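To make the notion concrete, here is a minimal, self-contained sketch (not the speaker's algorithm or definition) that probes how the global minimum cut can change when a single edge is absent from the input. The brute-force `global_min_cut` function and the example graph are illustrative assumptions chosen for simplicity; practical low-sensitivity algorithms are far more sophisticated.

```python
from itertools import combinations

def global_min_cut(vertices, edges):
    """Brute-force global minimum cut: try every bipartition (S, V\\S)
    and count the crossing edges. Exponential time; illustration only."""
    vertices = list(vertices)
    best = None
    # Fix one vertex on one side so each cut is counted once.
    anchor, rest = vertices[0], vertices[1:]
    for r in range(len(rest) + 1):
        for subset in combinations(rest, r):
            S = {anchor, *subset}
            if len(S) == len(vertices):
                continue  # S must be a proper subset of the vertex set
            crossing = sum(1 for (u, v) in edges if (u in S) != (v in S))
            if best is None or crossing < best:
                best = crossing
    return best

# Hypothetical example: two triangles joined by a single "bridge" edge.
V = {1, 2, 3, 4, 5, 6}
E = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]

full_cut = global_min_cut(V, E)  # cutting the bridge edge (3, 4)
# Subgraph: same graph, but the bridge edge is missing from the input.
sub_cut = global_min_cut(V, [e for e in E if e != (3, 4)])

print(full_cut, sub_cut)
```

Here dropping one edge changes the minimum cut value from 1 to 0, yet the optimal bipartition ({1, 2, 3} versus {4, 5, 6}) is unchanged; a sensitivity definition for graph algorithms quantifies how much the *output solution* may move under such perturbations of the input.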