Fulton-Watson-Copp Chair and Professor, Computer Science Dept., University of Illinois Urbana-Champaign
Abstract: Intrinsic images are maps of surface properties. A classical problem is to recover an intrinsic image, typically a map of surface lightness, from an image. The topic has mostly dropped from view, likely for three reasons: training data is mostly synthetic; evaluation is somewhat uncertain; and clear applications for the resulting albedo are missing. The decline of this topic has a consequence: mostly, we don't understand and can't mitigate the effects of lighting.
I will show the results of simple experiments that suggest that very good modern depth and normal predictors are strongly sensitive to lighting – if you relight a scene in a reasonable way, the reported depth will change. This is intolerable. To fix this problem, we need to be able to produce many different lightings of the same scene. I will describe a method to do so. First, one learns a method to estimate albedo from images without any labelled training data (which turns out to perform well under traditional evaluations). Then, one forces an image generator to produce many different images that have the same albedo – with care, these are relightings of the same scene. Finally, a GAN inverter allows us to apply the process to real images. I will show some interim results suggesting that learned relightings might genuinely improve estimates of depth, normal and albedo.
Bio: I am the Fulton-Watson-Copp Chair in Computer Science at the University of Illinois Urbana-Champaign, a position I have held since 2014; I moved there from U.C. Berkeley, where I was a full professor. I have published over 170 papers on computer vision, computer graphics, and machine learning. I have served as program co-chair or general co-chair for vision conferences on many occasions. I received an IEEE Technical Achievement Award in 2005 for my research. I became an IEEE Fellow in 2009 and an ACM Fellow in 2014. My textbook, “Computer Vision: A Modern Approach” (joint with J. Ponce and published by Prentice Hall), was widely adopted as a course text. My recent textbook, “Probability and Statistics for Computer Science”, is in the top quartile of Springer computer science chapter downloads. A further textbook, “Applied Machine Learning”, has just appeared in print. I have served two terms as Editor in Chief of IEEE TPAMI. I serve on a number of scientific advisory boards.
Host: Greg Shakhnarovich
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=60b3f587-be8f-428a-8706-afbd012ab9bc
Professor, Johns Hopkins University, ACL Fellow
Abstract: Large autoregressive language models have been amazingly successful. Nonetheless, should they be integrated with older AI techniques such as explicit knowledge representation, planning, and inference? I’ll discuss three possible reasons:
As possible directions, I’ll outline some costly but interesting extensions to the standard autoregressive language models – neural FSTs, lookahead models, and nested latent-variable models. Much of this work is still in progress, so the focus will be on designs rather than results. Collaborators include Chu-Cheng Lin, Weiting (Steven) Tan, Li (Leo) Du, Zhichu (Brian) Lu, and Hongyuan Mei.
Bio: Jason Eisner is a Professor of Computer Science at Johns Hopkins University, as well as Director of Research at Microsoft Semantic Machines. He is a Fellow of the Association for Computational Linguistics. At Johns Hopkins, he is also affiliated with the Center for Language and Speech Processing, the Mathematical Institute for Data Science, and the Cognitive Science Department. His goal is to develop the probabilistic modeling, inference, and learning techniques needed for a unified model of all kinds of linguistic structure. His 150+ papers have presented various algorithms for parsing, machine translation, and weighted finite-state machines; formalizations, algorithms, theorems, and empirical results in computational phonology; and unsupervised or semi-supervised learning methods for syntax, morphology, and word-sense disambiguation. He is also the lead designer of Dyna, a declarative programming language that provides an infrastructure for AI algorithms. He has received two school-wide awards for excellence in teaching, as well as recent Best Paper Awards at ACL 2017, EMNLP 2019, and NAACL 2021 and an Outstanding Paper Award at ACL 2022.
Host: Karen Livescu
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6355939c-49d5-496a-9d9e-af8e0117f336
Professor, Paul G. Allen School of Computer Science & Engineering, University of Washington, MacArthur Fellow, ACL Fellow, Distinguished Research Fellow at Institute for Ethics in AI at Oxford
Abstract: In this talk, I will question if there can be possible impossibilities of large language models (i.e., the fundamental limits of transformers, if any) and the impossible possibilities of language models (i.e., seemingly impossible alternative paths beyond scale, if at all).
Bio: Yejin Choi is the Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2 overseeing the project Mosaic and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates if (and how) AI systems can learn commonsense knowledge and reasoning, if machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and vision, including neuro-symbolic integration, language grounding with vision and interactions, and AI for social good. She is a co-recipient of 2 Test of Time Awards (at ACL 2021 and ICCV 2021), 7 Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI’s 10 to Watch in 2016.
Host: Nati Srebro
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=bbae3879-b4c4-4cb0-b9ae-b0820139b35c
Lenore Blum - Distinguished Career Professor Emerita of Computer Science at Carnegie Mellon University, and Visiting Chair Professor at Peking U.
Manuel Blum - Bruce Nelson Professor Emeritus of Computer Science at Carnegie Mellon University, Professor Emeritus of EECS at UC Berkeley, and Visiting Chair Professor at Peking U.
Abstract: The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We have defined the Conscious Turing Machine (CTM) for the purpose of investigating a Theoretical Computer Science (TCS) approach to consciousness. For this, we have hewn to the TCS demand for simplicity and understandability. The CTM is consequently and intentionally a simple machine. It is not a model of the brain, though its design has greatly benefited - and continues to benefit - from cognitive neuroscience, in particular the global (neuronal) workspace theory. Although it was developed to understand consciousness, the CTM offers a thoughtful and novel guide to the creation of an Artificial General Intelligence (AGI). For example, the CTM has an enormous number of powerful processors, some with specialized expertise, others unspecialized but poised to develop an expertise. For whatever problem must be dealt with, the CTM has an excellent way to utilize those processors that have the required knowledge, ability, and time to work on the problem, even if it, the CTM, is not aware of which of the processors these may be.
Bios: Lenore Blum (PhD, MIT) is Distinguished Career Professor Emerita of Computer Science at Carnegie Mellon University, and Visiting Chair Professor at Peking U. Lenore’s research, from her early work in model theory and differential fields (logic and algebra) to her work in developing a theory of computation and complexity over the reals (mathematics and computer science), has focused on merging seemingly unrelated areas. Her book, Complexity and Real Computation, written with Felipe Cucker, Mike Shub and Steve Smale, develops a theoretical basis for scientific computation in continuous domains akin to the Turing-based theory for discrete domains. Her current research with Manuel and Avrim Blum, inspired by theoretical computer science and major advances in cognitive neuroscience, lays out designs for a conscious AI. Lenore is internationally known for her work in increasing the participation of girls and women in STEM and is proud that CMU has gender parity in its undergraduate CS program. Over the years, she has been active in the mathematics community: as President of the Association for Women in Mathematics (AWM), Vice-President of the American Mathematical Society (AMS), Chair of the Mathematics Section of the American Association for the Advancement of Science (AAAS), Deputy Director of the Mathematical Sciences Research Institute (MSRI), and as inaugural and current President of the Association for Mathematical Consciousness Science (AMCS). She is a Fellow of the AAAS, the AMS, and the AWM.
Manuel Blum (PhD, MIT) is Bruce Nelson Professor Emeritus of Computer Science at Carnegie Mellon University, Professor Emeritus of EECS at UC Berkeley, and Visiting Chair Professor at Peking U. Manuel has been motivated to understand the mind/body problem since he was in second grade, when his teacher told his mom she should not expect him to get past high school. As an undergrad at MIT, he spent a year studying Freud and then apprenticed himself to the great anti-Freud neurophysiologist, Dr. Warren S. McCulloch, who became his intellectual mentor. When he told Warren (McCulloch) and Walter (Pitts) that he wanted to study consciousness, he was told in no uncertain terms that he was verboten to do so - and why (there was no fMRI at the time). As a graduate student, he asked Marvin Minsky to be his thesis advisor, and Minsky agreed. Manuel is one of the founders of complexity theory, a Turing Award winner, and has mentored many in the field who have charted new directions including computational learning, cryptography, zero knowledge, interactive proofs, proof checkers, and human computation. He is a Fellow of the AAAS, the American Academy of Arts and Sciences, the NAS, and the NAE.
Host: David McAllester
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=3c9f9fc5-5c91-405f-8ecb-b109010c39ec
Professor of Computer Science and member of the Data Science Institute at Columbia University
Abstract: Turing-complete blockchain protocols approximate the idealized abstraction of a “computer in the sky” that is open access, runs in plain view, and, in effect, has no owner or operator. This technology can, among other things, enable stronger notions of ownership of digital possessions than we have ever had before. Building the computer in the sky is hard (and scientifically fascinating), and in this talk, I’ll highlight three threads in my recent research on this challenge:
1. Possibility and impossibility results for permissionless consensus protocols (i.e., implementing an “ownerless” computer).
2. Incentive-compatible transaction fee mechanism design (i.e., making an “open-access” computer sustainable and welfare-maximizing).
3. A Black-Scholes-type formula for quantifying adverse selection in automated market makers (some of the most popular “programs” running on the computer in the sky).
The talk will emphasize the diversity of mathematical tools necessary for understanding blockchain protocols and their applications (e.g., distributed computing, game theory, mechanism design, and continuous-time stochastic processes) and the immediate practical impact that mathematical work on this topic has had (e.g., Ethereum’s EIP-1559 and LVR for automated market makers).
Bio: Tim Roughgarden is a Professor in the Computer Science Department at Columbia University and the Founding Head of Research at a16z crypto. Prior to joining Columbia, he spent 15 years on the computer science faculty at Stanford, following a PhD at Cornell and a postdoc at UC Berkeley. His research interests include the many connections between computer science and economics, as well as the design, analysis, applications, and limitations of algorithms.
For his research, he has been awarded the ACM Grace Murray Hopper Award, the Presidential Early Career Award for Scientists and Engineers (PECASE), the Kalai Prize in Computer Science and Game Theory, the Social Choice and Welfare Prize, the Mathematical Programming Society’s Tucker Prize, the INFORMS Lanchester Prize, and the EATCS-SIGACT Gödel Prize. He was an invited speaker at the 2006 International Congress of Mathematicians and the Shapley Lecturer at the 2008 World Congress of the Game Theory Society. He is a Fellow of the Guggenheim Foundation, the ACM, the Game Theory Society, and the Society for the Advancement of Economic Theory. He has written or edited ten books and monographs, including Twenty Lectures on Algorithmic Game Theory (2016), Beyond the Worst-Case Analysis of Algorithms (2020), and the Algorithms Illuminated book series (2017-2020).
Host: Avrim Blum
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6ca90811-1c40-45a3-99e5-b0820141ea88
All talks will be held at TTIC in room #530, located at 6045 South Kenwood Avenue (intersection of 61st Street and Kenwood Avenue).
Parking: Street parking, or in the free lot on the corner of 60th St. and Stony Island Avenue.
For questions and comments contact Nati Srebro.