Avi Wigderson

Thursday, March 6th

Avi Wigderson, Institute for Advanced Study, Princeton

“Randomness: a computational complexity view”

Wigderson’s research interests lie in Randomness and Computation, Algorithms and Optimization, Complexity Theory, Circuit Complexity, Proof Complexity, Quantum Computation and Communication, Cryptography, and Distributed Computation.

Abstract:

Man has grappled with the meaning and utility of randomness for centuries. Research in the Theory of Computation in the last thirty years has enriched this study considerably. I’ll describe two main aspects of this research on randomness, demonstrating its power and weakness respectively.

- Randomness is paramount to computational efficiency:

The use of randomness can dramatically enhance computation (and do other wonders) for a variety of problems and settings. In particular, examples will be given of probabilistic algorithms (with tiny error) for natural tasks in different areas of mathematics, which are exponentially faster than their (best known) deterministic counterparts.
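To give a flavor of such algorithms, here is a minimal Haskell sketch of polynomial identity testing via the Schwartz-Zippel lemma, a standard example of this phenomenon (not necessarily one of the talk’s own examples). Deciding whether two arithmetic expressions denote the same polynomial can take exponential work by symbolic expansion, while evaluating both at a random point decides it, with small one-sided error, in time linear in the expression size. The sketch assumes the random package for System.Random.

    import Control.Monad (replicateM)
    import System.Random (randomRIO)   -- from the "random" package

    -- Arithmetic expressions over variables x0 .. x(n-1).
    data Expr = Var Int | Const Integer
              | Add Expr Expr | Mul Expr Expr

    eval :: [Integer] -> Expr -> Integer
    eval env (Var i)   = env !! i
    eval _   (Const c) = c
    eval env (Add p q) = eval env p + eval env q
    eval env (Mul p q) = eval env p * eval env q

    -- Schwartz-Zippel: if two expressions of total degree <= d denote
    -- different polynomials, a point drawn uniformly from {0..s-1}^n
    -- exposes the difference except with probability <= d/s.
    pitTrial :: Int -> Integer -> Expr -> Expr -> IO Bool
    pitTrial n s p q = do
      env <- replicateM n (randomRIO (0, s - 1))
      pure (eval env p == eval env q)

    -- k independent trials shrink the error to (d/s)^k.
    pit :: Int -> Int -> Integer -> Expr -> Expr -> IO Bool
    pit k n s p q = and <$> replicateM k (pitTrial n s p q)

No deterministic polynomial-time algorithm for this problem is known.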

- Computational efficiency is paramount to understanding randomness:

I will explain the computationally motivated definition of “pseudorandom” distributions, namely ones that cannot be distinguished from the uniform distribution by any efficient procedure from a given class. I will then show how such pseudorandomness can be generated deterministically from (appropriately) computationally difficult problems. Consequently, randomness is probably not as powerful as it seems above.
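For reference, the definition alluded to can be stated precisely: a distribution D on n-bit strings is ε-pseudorandom against a class C of tests if

    \left|\, \Pr_{x \sim D}[T(x) = 1] \;-\; \Pr_{x \sim U_n}[T(x) = 1] \,\right| \le \varepsilon \quad \text{for every test } T \in \mathcal{C},

where U_n is the uniform distribution on {0,1}^n. The abstract leaves C and ε open; in the computational setting C is typically the class of small circuits and ε is taken to be negligible.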

I’ll conclude with the power of randomness in other computational settings, primarily probabilistic proof systems. I will discuss the remarkable properties of Zero-Knowledge proofs and of Probabilistically Checkable proofs.
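For context, the latter come with a famously succinct characterization, the PCP theorem:

    \mathrm{NP} = \mathrm{PCP}(O(\log n),\, O(1)),

that is, every NP statement has a proof that a probabilistic verifier can check by reading only a constant number of its bits, using O(log n) random coins.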

Talk Time & Place:
3:30pm - Biological Sciences Learning Center (room 115, 1st floor)
924 East 57th St.
Chicago, IL 60637

Prabhakar Raghavan

Tuesday, April 1st

Prabhakar Raghavan, Yahoo! Research

“New sciences for a new web”

Raghavan has been Head of Yahoo! Research since 2005. His research interests include text and web mining, and algorithm design. He is a Consulting Professor of Computer Science at Stanford University and Editor-in-Chief of the Journal of the ACM.

Abstract:

The web has made a widely hailed transition from its original incarnation to a putative state of “Web 2.0”. This transition has stemmed from the clever use of AJAX and efficient grid computing to enhance a user’s perception of responsiveness and interaction. In the process, the web experience has changed from a single human interacting with a browser to a plethora of social media experiences. One consequence is that moving beyond the current notion of Web 2.0 demands research advances that straddle the boundaries between the computational and social sciences, the latter including microeconomics, cognitive psychology, and sociology. It also raises difficult questions about the use of data, ranging from the algorithmic to the societal.

This lecture will attempt to chart this interdisciplinary research agenda, arguing that the most influential research will require heavy interaction between these “hard” and “soft” sciences.

Talk Time & Place:
3:30pm - Biological Sciences Learning Center (room 109, 1st floor)
924 East 57th St.
Chicago, IL 60637

Simon Peyton Jones

Thursday, June 12th

Simon Peyton Jones, Microsoft Research - Cambridge

“Exploiting Multicores with Nested Data Parallelism in Haskell”

Simon Peyton Jones has been at Microsoft Research (Cambridge) since 1998. His main research interest is in functional programming languages, their implementation, and their application. He has led a succession of research projects focused on the design and implementation of production-quality functional-language systems for both uniprocessors and parallel machines. He was a key contributor to the design of the now-standard functional language Haskell, and is the lead designer of the widely-used Glasgow Haskell Compiler (GHC). He has written two textbooks about the implementation of functional languages.

More generally, he is interested in language design, rich type systems, software component architectures, compiler technology, code generation, runtime systems, virtual machines, and garbage collection. He is particularly motivated by the direct application of principled theory to practical language design and implementation, one reason he loves functional programming so much.

His home page is at http://research.microsoft.com/~simonpj

Abstract:

There are many approaches to exploiting multi-cores, but a particularly promising one is the “data-parallel” paradigm, because it combines massive parallelism (on both shared and distributed memory) with a simple, single-control-flow programming model. Indeed, I think that data parallelism is the only way we will be able to exploit tens or hundreds of processors effectively.

Alas, data-parallel programming is usually restricted to “flat” data parallelism, which is good for implementers but bad for programmers. Instead, I’ll describe the “nested” data-parallel programming model, first developed in the 1990s by Blelloch and Sabot. It is great for programmers but much harder to implement; as a result, it’s virtually unknown in practice. It’s really only feasible to support nested data parallelism in a purely functional language, so we are building a high-performance implementation in Haskell.
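To make the flat/nested distinction concrete, the canonical example (due to Blelloch) is sparse matrix-vector multiplication, where rows have irregular lengths. Below is a sketch in ordinary sequential Haskell, not the parallel-array syntax of the speaker’s project: in the nested data-parallel model both the outer comprehension over rows and the inner dot products run in parallel, with the compiler flattening the irregular nesting into flat, load-balanced vector operations.

    -- A sparse matrix: each row is a list of (column index, value)
    -- pairs, so different rows have different lengths.
    type SparseRow    = [(Int, Double)]
    type SparseMatrix = [SparseRow]

    -- Sparse matrix-vector product.  Flat data parallelism handles
    -- only the regular inner loop; nested data parallelism lets BOTH
    -- comprehension levels run in parallel despite the irregular
    -- row lengths.
    smvm :: SparseMatrix -> [Double] -> [Double]
    smvm m v = [ sum [ x * (v !! i) | (i, x) <- row ] | row <- m ]

For example, smvm [[(0,2.0)], [(0,1.0),(1,3.0)]] [1.0, 2.0] yields [2.0, 7.0].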

In this talk I’ll explain what nested data parallelism is, why it’s important, and what progress we have made. Fear not: I won’t assume you know any Haskell. Yet.

Talk Time & Place:
3:30pm - Biological Sciences Learning Center (room 115, 1st floor)
924 East 57th St.
Chicago, IL 60637