**Name**

Privacy/Security

**Date & Time**

Wednesday, October 25, 2023, 1:30 PM - 3:00 PM

**Speakers**

Lin Chen, Google

**Harmonizing Bias and Variance and Harmonizing Instance-level and Bag-level Losses**

Classical machine learning theory states that the generalization error of a model can be decomposed into bias and variance, and that these two terms exhibit a trade-off. In this talk, we challenge this view and show that for an ensemble of deep learning models, bias and variance are aligned at a sample level: squared bias is approximately equal to variance for correctly classified sample points. We present empirical and theoretical evidence for this phenomenon.
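The per-sample decomposition behind this claim can be sketched numerically. The ensemble and data below are synthetic stand-ins, not the talk's experimental setup; the point is only that the bias–variance decomposition is exact sample by sample, so alignment can be checked per point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K noisy predictors for a C-class task with one-hot labels.
# For each sample, the expected squared error decomposes exactly as
#   E||f(x) - y||^2 = ||E f(x) - y||^2 + E||f(x) - E f(x)||^2
#   (total error)     (squared bias)     (variance)
# computed here per sample rather than averaged over the dataset.
K, N, C = 50, 8, 3                                     # ensemble size, samples, classes
labels = np.eye(C)[rng.integers(0, C, size=N)]         # one-hot labels, shape (N, C)
preds = labels + 0.1 * rng.standard_normal((K, N, C))  # K predictions per sample

mean_pred = preds.mean(axis=0)                                  # ensemble mean, (N, C)
sq_bias = ((mean_pred - labels) ** 2).sum(axis=1)               # per-sample squared bias
variance = ((preds - mean_pred) ** 2).sum(axis=2).mean(axis=0)  # per-sample variance
total = ((preds - labels) ** 2).sum(axis=2).mean(axis=0)        # per-sample total error

# The decomposition holds exactly for every sample point.
assert np.allclose(total, sq_bias + variance)
```

The talk's alignment claim is then a statement about `sq_bias` and `variance` being approximately equal, coordinate-wise over correctly classified points, for real deep ensembles.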
In the second part of the talk, we discuss the harmonization of bag-level and instance-level losses in learning from aggregate labels. This is a common problem in privacy-preserving machine learning, where the training data is aggregated before being shared with the learner. We show that the instance-level loss can be perceived as a regularized form of the bag-level loss. This allows us to compare the two approaches with respect to bias and variance, and to introduce a novel interpolating estimator which combines the two approaches. We provide a theoretical analysis of the risk of the interpolating estimator and derive the optimal bag size for differentially private learning from aggregate labels.
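The bag-level versus instance-level distinction can be sketched for linear regression with averaged labels. The bagging scheme, the instance-level surrogate (assigning each instance its bag's average label), and the interpolation weight `alpha` below are illustrative assumptions, not the talk's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical aggregate-label setup: instances are grouped into disjoint bags
# of size k, and the learner observes only each bag's average label.
n, d, k = 120, 5, 4
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

bags = np.arange(n).reshape(-1, k)   # disjoint bags of size k
X_bag = X[bags].mean(axis=1)         # per-bag average features, (n // k, d)
y_bag = y[bags].mean(axis=1)         # per-bag average labels (the observations)

# Bag-level least squares: regress average labels on average features.
w_bag = np.linalg.lstsq(X_bag, y_bag, rcond=None)[0]

# Instance-level surrogate: give every instance its bag's average label.
y_assigned = np.repeat(y_bag, k)

# An interpolating estimator in the spirit of the talk: minimize a convex
# combination of the two squared losses, solved via its normal equations.
alpha = 0.5
A = alpha * X.T @ X + (1 - alpha) * X_bag.T @ X_bag
b = alpha * X.T @ y_assigned + (1 - alpha) * X_bag.T @ y_bag
w_interp = np.linalg.solve(A, b)
```

Setting `alpha` to 0 or 1 recovers the pure bag-level or pure instance-level fit; the talk's analysis concerns how the optimal mixture and bag size trade off bias, variance, and differential privacy.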

Umar Syed, Google Research, New York

Ruiqi Zhang, University of California, Berkeley

**Trained Transformers Learn Linear Models In-Context**

Attention-based neural networks such as transformers have demonstrated a remarkable ability to exhibit in-context learning (ICL): given a short prompt sequence of tokens from an unseen task, they can formulate relevant per-token and next-token predictions without any parameter updates. By embedding a sequence of labeled training data and unlabeled test data as a prompt, transformers can behave like supervised learning algorithms. Indeed, recent work has shown that when transformer architectures are trained over random instances of linear regression problems, these models' predictions mimic those of ordinary least squares. Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of ICL in transformers with a single linear self-attention layer trained by gradient flow on linear regression tasks. We show that despite non-convexity, gradient flow with a suitable random initialization finds a global minimum of the objective function. At this global minimum, when given a test prompt of labeled examples from a new prediction task, the transformer achieves prediction error competitive with the best linear predictor over the test prompt distribution. We additionally characterize the robustness of the trained transformer to a variety of distribution shifts and show that although many shifts are tolerated, shifts in the covariate distribution of the prompts are not. Motivated by this, we consider a generalized ICL setting where the covariate distributions can vary across prompts. We show that although gradient flow succeeds at finding a global minimum in this setting, the trained transformer remains brittle under mild covariate shifts. We complement this finding with experiments on large, nonlinear transformer architectures, which we show are more robust under covariate shifts.
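The least-squares reference behavior described above can be sketched directly: a prompt is a short sequence of labeled pairs from an unseen linear task plus an unlabeled query, and the trained transformer's in-context prediction is claimed to mimic ordinary least squares fit on that prompt. The dimensions and the noiseless prompt below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ICL prompt: n_prompt labeled pairs (x_i, y_i) from one unseen
# linear task, followed by one unlabeled query x_q.
d, n_prompt = 3, 20
w_task = rng.standard_normal(d)                # unseen task's weights
X = rng.standard_normal((n_prompt, d))         # prompt covariates
y = X @ w_task                                 # noiseless prompt labels
x_q = rng.standard_normal(d)                   # query token

# Reference predictor: ordinary least squares over the prompt alone.
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
pred = x_q @ w_ols                             # the "in-context" prediction

# With a noiseless, well-specified prompt, OLS recovers the task exactly.
assert np.allclose(w_ols, w_task)
```

The talk's results concern when gradient flow on a single linear self-attention layer converges to a solution whose predictions match this reference, and when covariate shift between training and test prompts breaks that match.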

Hamed Hassani, University of Pennsylvania

**SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks**

Despite efforts to align large language models (LLMs) with human values, widely used LLMs such as GPT, Llama, Claude, and PaLM are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content. To address this vulnerability, we propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on LLMs. Based on our finding that adversarially generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt and then aggregates the corresponding predictions to detect adversarial inputs. SmoothLLM reduces the attack success rate on numerous popular LLMs to below one percentage point, avoids unnecessary conservatism, and admits provable guarantees on attack mitigation. Moreover, our defense uses exponentially fewer queries than existing attacks and is compatible with any LLM.
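The perturb-and-aggregate scheme described in the abstract can be sketched as follows. `query_llm`, `is_jailbroken`, and all parameter values are hypothetical stand-ins for a real model call and a real jailbreak judge, and the majority vote is one simple aggregation choice:

```python
import random
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def perturb(prompt: str, q: float, rng: random.Random) -> str:
    """Randomly substitute a fraction q of the prompt's characters."""
    chars = list(prompt)
    for i in range(len(chars)):
        if rng.random() < q:
            chars[i] = rng.choice(ALPHABET)
    return "".join(chars)

def smooth_llm(prompt, query_llm, is_jailbroken, n_copies=10, q=0.1, seed=0):
    """Query the model on perturbed copies and majority-vote the judgments."""
    rng = random.Random(seed)
    votes = Counter(
        is_jailbroken(query_llm(perturb(prompt, q, rng)))
        for _ in range(n_copies)
    )
    # Flag the prompt as adversarial only if most perturbed copies
    # still elicit objectionable output.
    return votes[True] > n_copies // 2
```

The mechanism relies on the brittleness finding: an adversarial suffix rarely survives character-level substitution intact, so most perturbed copies fail to jailbreak the model and the aggregate verdict comes out benign, while a natural prompt's behavior is largely unchanged.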

**Location Name**

Kline Tower: 14th Floor

**Full Address**

219 Prospect St

New Haven, CT 06511

United States

**Session Type**

Workshop