Presented By: Department of Statistics

Department Seminar Series: Banghua Zhu, PhD Candidate, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley

"Towards Principled Post-Training of Large Language Models"

Abstract: Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique that aligns large language models (LLMs) with human-centric values and has helped create several leading LLMs, including GPT-4, Claude, and Llama 2. The first step of RLHF learns human values by fitting a reward model to ranking data. In practice, the reward model's performance degrades after one epoch of training, and over-optimizing the language model against the learned proxy reward model hinders the true objective. This talk examines these issues, leveraging insights from statistical decision theory to design improved reward-learning algorithms. We also introduce advanced prompting techniques that generate high-quality synthetic ranking datasets for RLHF. By combining the high-quality RLHF dataset with our improved reward-learning algorithms, we created the open-source language model Starling-7B, which ranks first among all 7B models by human evaluation in Chatbot Arena.
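
For readers unfamiliar with the reward-learning step mentioned in the abstract, it is commonly implemented by fitting a Bradley-Terry model to pairwise preference data. The sketch below is illustrative only and is not the speaker's algorithm; the function name `pairwise_reward_loss` and the dummy scores are hypothetical, and PyTorch is assumed.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_chosen: torch.Tensor,
                         reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry negative log-likelihood for pairwise ranking data.

    reward_chosen / reward_rejected are scalar reward-model outputs for the
    preferred and dispreferred responses to the same prompt (shape: [batch]).
    """
    # P(chosen preferred over rejected) = sigmoid(r_chosen - r_rejected);
    # minimizing -log of that probability fits the reward model to rankings.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Usage with dummy reward scores for a batch of three comparisons.
if __name__ == "__main__":
    r_chosen = torch.tensor([1.2, 0.3, 2.0])
    r_rejected = torch.tensor([0.4, 0.5, 1.1])
    print(pairwise_reward_loss(r_chosen, r_rejected))
```
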
