Name
Safety and Robustness
Date & Time
Wednesday, October 25, 2023, 11:00 AM - 12:30 PM
Speakers
Hoda Heidari, Carnegie Mellon University On The Ethical & Societal Ramifications Of LLMs: Opportunities, Risks, And What’s Ahead I will begin with a quick recap of large language models (LLMs); keeping in mind how this technology works helps us better grasp its practical opportunities and limitations. Next, we will talk about opportunities: if a task satisfies certain criteria, I believe it makes a promising candidate for the fruitful use of current LLMs. In other domains, we need to carefully weigh the risks of harm against the potential benefits and tread cautiously. The bulk of this presentation will focus on risks and potential harms. We will see recent examples where the use of LLMs has gone awry. The risks of using LLMs stem from technological limitations, lack of transparency and access, and economic incentives to misrepresent their capabilities. I will briefly mention ongoing efforts to govern this technology and wrap up by offering several recommendations on how we can promote its responsible research, development, and use.
Himabindu Lakkaraju, Harvard University
Sunoo Park, NYU Courant Institute of Mathematical Sciences Some Emerging Challenges In AI And Law I will discuss some emerging challenges in AI and law, including ongoing copyright litigation.
Hossein Bateni, Google Research, NYC Attention-based Model Structure Optimization We propose a feature selection algorithm called Sequential Attention that achieves state-of-the-art empirical results for neural networks. This algorithm is based on an efficient one-pass implementation of greedy forward selection and uses attention weights at each step as a proxy for feature importance. We give theoretical insights into our algorithm for linear regression by showing that an adaptation to this setting is equivalent to the classical Orthogonal Matching Pursuit (OMP) algorithm, and thus inherits all of its provable guarantees. Our theoretical and empirical analyses offer new explanations for the effectiveness of attention and its connections to overparameterization, which may be of independent interest.
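For context on the baseline referenced in this abstract, below is a minimal NumPy sketch of classical Orthogonal Matching Pursuit (OMP), the greedy forward-selection procedure to which the talk's linear-regression adaptation of Sequential Attention is equivalent. This is an illustrative reference implementation on synthetic data, not code from the talk or paper; the function name omp and the example setup are assumptions made here for demonstration.

```python
import numpy as np

def omp(X, y, k):
    """Greedy forward selection via Orthogonal Matching Pursuit.

    At each step, pick the unselected feature most correlated with the
    current residual, refit least squares on all selected features, and
    update the residual.
    """
    selected = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        scores = np.abs(X.T @ residual)   # proxy for feature importance
        scores[selected] = -np.inf        # never re-pick a chosen feature
        selected.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ coef
    return selected, coef

# Tiny usage example on synthetic data with three truly relevant features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = X[:, [2, 7, 15]] @ np.array([3.0, -2.0, 1.5]) + 0.01 * rng.normal(size=200)
    print(omp(X, y, k=3)[0])  # typically recovers features 2, 7, and 15
```

Per the abstract, Sequential Attention replaces this residual-correlation score with attention weights learned at each step as a proxy for feature importance, and its adaptation to linear regression selects features equivalently to the OMP procedure above.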
Location Name
Kline Tower: 14th Floor
Full Address
219 Prospect St
New Haven, CT 06511
United States
Session Type
Workshop