Preface

Large language models are everywhere right now: the media hype is relentless, and "LLM fine-tuning" shows up on every candidate's resume. This post uses the examples from the Hugging Face trl RLHF notebooks as an entry point to explain where RLHF sits in the overall training pipeline and what it actually contributes, to make it easier to understand and apply later.

Code Analysis

The Hugging Face trl notebooks folder contains three RLHF examples:

  1. gpt2-sentiment.ipynb
  2. gpt2-sentiment-control.ipynb
  3. best_of_n.ipynb

They are analyzed below in the same order.

I. gpt2-sentiment.ipynb

Goal: this notebook shows how to use RLHF to teach a model to generate positive movie reviews.

1. Load IMDB dataset

The dataset has two fields by default, text and label: a user's review of a movie and that review's sentiment (positive or negative).

The text field is cut off at a randomly sampled length n and everything after that point is dropped. For example: text = "这个电影我觉得很棒。" ("I think this movie is great.") becomes query = "这个电影" ("This movie").
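A minimal sketch of this query construction, following the notebook's approach (the "gpt2" tokenizer name and the length range here are assumptions; the notebook samples n per example with LengthSampler):

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from trl.core import LengthSampler

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # tokenizer of the model to be trained
input_size = LengthSampler(2, 8)                   # n is sampled per example, not fixed

def build_query(sample):
    # keep only the first n tokens of the review as the query
    sample["input_ids"] = tokenizer.encode(sample["text"])[: input_size()]
    sample["query"] = tokenizer.decode(sample["input_ids"])
    return sample

ds = load_dataset("imdb", split="train").map(build_query, batched=False)
print(ds[0]["query"])  # a short review prefix such as "I rented I AM"
```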

2. Model and Ref Model

GPT-2 is used as the model being trained, and the ref model starts out as an identical copy. For now, think of the model as the one that gets updated and the ref model as the frozen reference it is compared against (it supplies the KL penalty that keeps the policy from drifting too far).
The ref model is an indispensable part of RLHF training, in the same spirit as attaching a ValueHead layer on top of the generation model; the finer-grained reinforcement learning details are skipped in this post.
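In trl this pairing looks roughly like the following sketch ("gpt2" stands in for whichever base checkpoint is used):

```python
from trl import AutoModelForCausalLMWithValueHead, create_reference_model

# policy to be optimized: a causal LM with an extra scalar value head
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
# frozen copy used as the reference for the KL penalty
ref_model = create_reference_model(model)
```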

3. reward model

The distilbert-imdb model is used as the reward (scoring) model: given a review, it outputs a positive score and a negative score.
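For example, scoring a review through a sentiment-analysis pipeline (model id as used in the notebook; the raw positive logit is what later serves as the reward):

```python
from transformers import pipeline

sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

# return both raw logits instead of a softmaxed top-1 label
out = sentiment_pipe("This movie was really good!!", return_all_scores=True, function_to_apply="none")
print(out)  # [[{'label': 'NEGATIVE', 'score': ...}, {'label': 'POSITIVE', 'score': ...}]]
```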

4. Training

The model generates up to max_new_tokens of text from each query, the reward model scores the result, and the positive score is used as the optimization target; over time the model learns to continue any user-provided opening into a positive review.
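Compressed into code, one optimization round looks roughly like this (a sketch against the trl API; variable names are illustrative, ppo_trainer, sentiment_pipe, generation_kwargs and sent_kwargs are assumed to be set up as in the full training script later in this post, and max_new_tokens is fixed at 32 here for brevity):

```python
import torch

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]

    # let the policy continue each query for up to max_new_tokens tokens
    response_tensors = [
        ppo_trainer.generate(q, max_new_tokens=32, **generation_kwargs).squeeze()[-32:]
        for q in query_tensors
    ]
    batch["response"] = [tokenizer.decode(r) for r in response_tensors]

    # score query + response with the reward model; use the positive score as the reward
    texts = [q + r for q, r in zip(batch["query"], batch["response"])]
    rewards = [torch.tensor(o[1]["score"]) for o in sentiment_pipe(texts, **sent_kwargs)]

    # one PPO update of the policy against the frozen reference
    stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
```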

The max_new_tokens here is also interesting; it can be read in two different ways:

  1. A single generated text will not be very long
  2. Something like a discount factor

The second reading is the more interesting one to me. In RLHF you can either assign a score at every step, or assign a single score to the whole trajectory once it finishes. Couldn't max_new_tokens then be understood as a middle ground between the two?
It avoids the training inefficiency of scoring every single step, while also limiting the larger bias that a single whole-trajectory score carries when some individual decisions along the way were poor.

That wraps up this notebook.

II. gpt2-sentiment-control.ipynb

Goal: control the sentiment of the generated review by prepending a prompt.

There are three kinds of prompts: positive, negative, and neutral. Since neutral is something the reward model itself cannot really judge, feel free to skip that part.

Queries are constructed like this. If the query is

```python
query = "[positive]这个电影很"   # "[positive] This movie is so ..."
```

then the expected continuation carries positive sentiment (something like 好看, "great"); whereas for

```python
query = "[negative]这个电影很"
```

the expected continuation carries negative sentiment (something like 不好, "not good").
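A simplified sketch of how the control prompt and the reward interact (the sign flip for [negative] is the essential part; the notebook's reward shaping for [neutral] is more involved than the placeholder shown here):

```python
import random
import torch

CTRL_PROMPTS = ["[positive]", "[negative]", "[neutral]"]

def add_control_prompt(sample):
    # prepend a randomly chosen control token to the query
    task = random.choice(CTRL_PROMPTS)
    sample["task"] = task
    sample["query"] = task + sample["query"]
    return sample

def task_reward(pos_logit: float, task: str) -> torch.Tensor:
    # reward the sentiment that the control prompt asks for
    if task == "[positive]":
        return torch.tensor(pos_logit)
    if task == "[negative]":
        return torch.tensor(-pos_logit)
    # [neutral]: reward logits close to zero (simplified shaping)
    return torch.tensor(-abs(pos_logit))
```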

The rest of the pipeline is the same as in the previous notebook, so it is omitted here.

III. best_of_n.ipynb

Goal: RLHF is supposed to push past the original model's ceiling, so this notebook takes the best of n samples from the ref model and compares it against the RLHF-trained model.
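The sampling itself boils down to something like this sketch (function and argument names are mine; ref_model, the tokenizer and the reward pipeline are assumed to be set up as in the training script below):

```python
import torch

def best_of_n(query_tensor, ref_model, tokenizer, reward_pipe, gen_kwargs, n=4, max_new_tokens=32):
    """Draw n continuations for one query and keep the one the reward model scores highest."""
    candidates, texts = [], []
    for _ in range(n):
        out = ref_model.generate(query_tensor.unsqueeze(0), max_new_tokens=max_new_tokens, **gen_kwargs)
        response = out.squeeze()[-max_new_tokens:]
        candidates.append(response)
        texts.append(tokenizer.decode(query_tensor) + tokenizer.decode(response))
    # index 1 is the positive score, as in the training script below
    scores = [o[1]["score"] for o in reward_pipe(texts, return_all_scores=True, function_to_apply="none")]
    return candidates[int(torch.tensor(scores).argmax())]
```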

Overall, the reward model plays a crucial role: it largely determines how well RLHF works, so it deserves attention.

For more detail, just read the original code; the overall flow is not complicated. (Yes, another quick, low-effort post, I admit.)

Experiments

1. Dataset

ChnSentiCorp is used as the sentiment classification dataset.
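The scripts below read it as tab-separated files (train.tsv / dev.tsv) with a text_a column holding the review and a 0/1 label column; roughly this layout (illustrative rows, not actual dataset content):

```
label	text_a
1	房间干净,服务态度也不错
0	屏幕有坏点,不推荐购买
```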

2. reward model

train_score.py

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

checkpoint = 'chinese-roberta-wwm-ext'

# binary (negative / positive) classifier that will later act as the reward model
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_dataset = load_dataset(
    'csv',
    data_files={
        "train": "train.tsv",
        "dev": "dev.tsv",
    },
    delimiter='\t'
)


def data_collator(batch_data):
    # tokenize the raw text_a column and attach the 0/1 labels
    text_a = [_['text_a'] for _ in batch_data]
    data = tokenizer.batch_encode_plus(text_a, max_length=510, truncation=True, return_tensors='pt', padding=True)
    data.update({"labels": torch.tensor([_['label'] for _ in batch_data]).reshape(-1, 1)})
    return data


def compute_metrics(data):
    # print a per-class report and return the F1 used for model selection
    from umetrics.macrometrics import MacroMetrics
    macro = MacroMetrics(labels=[0, 1])
    predictions = data.predictions.argmax(-1).tolist()
    labels = data.label_ids.flatten().tolist()
    macro.step(y_trues=labels, y_preds=predictions)
    macro.classification_report(print)
    return {"f1": macro.f1_score()}


args = TrainingArguments(
    output_dir='score_model',
    remove_unused_columns=False,
    seed=1,
    do_train=True,
    do_eval=True,
    evaluation_strategy='epoch',
    learning_rate=1e-5,
    num_train_epochs=10,
    per_device_train_batch_size=16,
    fp16=True,
    save_total_limit=1,
    metric_for_best_model='f1',
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=data_collator,
    train_dataset=raw_dataset['train'],
    eval_dataset=raw_dataset['dev'],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)
trainer.train()
```


After training, the F1 score reaches 95%, so that's it for the reward model.

3. train RLHF

```python
import pandas as pd
import torch
from datasets import load_dataset
from tqdm import tqdm
from transformers import AutoTokenizer, set_seed, pipeline
from trl import AutoModelForCausalLMWithValueHead, create_reference_model, PPOConfig, PPOTrainer
from trl.core import LengthSampler

model_name = "Wenzhong-GPT2-110M"
config = PPOConfig(
    model_name=model_name,
    learning_rate=1.41e-5,
    log_with="tensorboard",
    batch_size=64,
    gradient_accumulation_steps=8,
    mini_batch_size=8,
)
sent_kwargs = {"return_all_scores": True, "function_to_apply": "none", "batch_size": 16}

gpt2_tokenizer = AutoTokenizer.from_pretrained(config.model_name)
gpt2_tokenizer.pad_token = gpt2_tokenizer.eos_token

set_seed(1)


# ################ build the query dataset

def build_dataset(input_min_text_length=2, input_max_text_length=8):
    ds = load_dataset(
        'csv',
        data_files={
            "train": "train.tsv",
            "dev": "dev.tsv",
        },
        delimiter='\t'
    )['train']
    ds = ds.rename_columns({"text_a": "review"})
    ds = ds.filter(lambda x: len(x['review']) > 10, batched=False)

    input_size = LengthSampler(input_min_text_length, input_max_text_length)

    def tokenize(sample):
        # keep a random-length prefix of the review as the query
        sample["input_ids"] = gpt2_tokenizer.encode(sample["review"][: input_size()])
        sample["query"] = gpt2_tokenizer.decode(sample["input_ids"])
        return sample

    ds = ds.map(tokenize, batched=False)
    ds.set_format(type="torch")
    return ds


dataset = build_dataset()


# ################## data collator

def collator(data):
    return dict((key, [d[key] for d in data]) for key in data[0])


# ################## reward model trained by train_score.py

sentiment_pipe = pipeline(
    "sentiment-analysis",
    model='/score_model/checkpoint-1000',
    device='cuda:1'
)

text = "今天天气很好"
print(sentiment_pipe(text, **sent_kwargs))
# [[{'label': 'LABEL_0', 'score': -2.200057029724121}, {'label': 'LABEL_1', 'score': 2.2879886627197266}]]
text = '这个电影主角演的真心一般般'
print(sentiment_pipe(text, **sent_kwargs))
# [[{'label': 'LABEL_0', 'score': 4.0778069496154785}, {'label': 'LABEL_1', 'score': -4.0350117683410645}]]

# ################## PPO setup and training

output_min_length = 16
output_max_length = 32
output_length_sampler = LengthSampler(output_min_length, output_max_length)

model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = create_reference_model(model)

ppo_trainer = PPOTrainer(config, model, ref_model, gpt2_tokenizer, dataset, data_collator=collator)

generation_kwargs = {
    "min_length": -1,
    "top_k": 0,
    "top_p": 1,
    "do_sample": True,
    "pad_token_id": gpt2_tokenizer.eos_token_id,
    "bos_token_id": gpt2_tokenizer.bos_token_id,
    "eos_token_id": gpt2_tokenizer.eos_token_id,
}
device = 'cuda:0'

for i in range(3):
    for epoch, batch in tqdm(enumerate(ppo_trainer.dataloader)):
        query_tensors = batch['input_ids']
        response_tensors = []
        for query in query_tensors:
            gen_len = output_length_sampler()
            response = ppo_trainer.generate(query, **{**generation_kwargs, "max_new_tokens": gen_len})
            response_tensors.append(response.squeeze()[-gen_len:])
        batch["response"] = [gpt2_tokenizer.decode(r.squeeze()) for r in response_tensors]

        #### Compute sentiment score
        texts = [q + r for q, r in zip(batch["query"], batch["response"])]
        pipe_outputs = sentiment_pipe(texts, **sent_kwargs)
        rewards = [torch.tensor(output[1]["score"]) for output in pipe_outputs]  # LABEL_1 = positive

        #### Run PPO step
        stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
        ppo_trainer.log_stats(stats, batch, rewards)

#### get a batch from the dataset
bs = 16
game_data = dict()
dataset.set_format("pandas")
df_batch = dataset[:].sample(bs)
game_data["query"] = df_batch["query"].tolist()
query_tensors = df_batch["input_ids"].tolist()

response_tensors_ref, response_tensors = [], []

#### get response from gpt2 and gpt2_ref
for i in range(bs):
    gen_len = output_length_sampler()
    output = ref_model.generate(
        torch.tensor(query_tensors[i]).unsqueeze(dim=0).to(device), max_new_tokens=gen_len, **generation_kwargs
    ).squeeze()[-gen_len:]
    response_tensors_ref.append(output)
    output = model.generate(
        torch.tensor(query_tensors[i]).unsqueeze(dim=0).to(device), max_new_tokens=gen_len, **generation_kwargs
    ).squeeze()[-gen_len:]
    response_tensors.append(output)

#### decode responses
game_data["response (before)"] = [gpt2_tokenizer.decode(response_tensors_ref[i]) for i in range(bs)]
game_data["response (after)"] = [gpt2_tokenizer.decode(response_tensors[i]) for i in range(bs)]

#### sentiment analysis of query/response pairs before/after
texts = [q + r for q, r in zip(game_data["query"], game_data["response (before)"])]
game_data["rewards (before)"] = [output[1]["score"] for output in sentiment_pipe(texts, **sent_kwargs)]

texts = [q + r for q, r in zip(game_data["query"], game_data["response (after)"])]
game_data["rewards (after)"] = [output[1]["score"] for output in sentiment_pipe(texts, **sent_kwargs)]

# store results in a dataframe
df_results = pd.DataFrame(game_data)
print(df_results)

print("mean:")
print(df_results[["rewards (before)", "rewards (after)"]].mean())
print()
print("median:")
print(df_results[["rewards (before)", "rewards (after)"]].median())

model.save_pretrained("gpt2-imdb-pos-v2", push_to_hub=False)
gpt2_tokenizer.save_pretrained("gpt2-imdb-pos-v2", push_to_hub=False)
```


4. eval RLHF

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path1 = 'Wenzhong-GPT2-110M'
model_path2 = 'gpt2-imdb-pos-v2'

for model_path in (model_path1, model_path2):
    model = AutoModelForCausalLM.from_pretrained(model_path).to("cuda")

    tokenizer = AutoTokenizer.from_pretrained(model_path)

    generation_kwargs = {
        "min_length": -1,
        "top_k": 0,
        "top_p": 1,
        "do_sample": True,
        "pad_token_id": tokenizer.eos_token_id,
        "bos_token_id": tokenizer.bos_token_id,
        "eos_token_id": tokenizer.eos_token_id,
        "max_new_tokens": 100,
    }

    for query in ("这个电影", "今天要下雨,心情太", "今天天气很差", '这个手机屏幕很差,手感'):
        input_ids = torch.tensor(tokenizer.encode(query)).to("cuda").reshape(1, -1)
        response = model.generate(input_ids, **generation_kwargs)[0]
        print(tokenizer.decode(response))
```



Sample generations from the model after RLHF training:

```
这个电影,见证点深的艺术价值。<|endoftext|>
今天要下雨,心情太好了,很舒服了。整个人还算可以,心情的状态都还可以的好。所以,周末的时候接触我的朋友们都会给我一种�
今天天气很差,但是连续8天都是安安静静休息,也很好!水的这出入口很好,很适合我们的婚了。<|endoftext|>
这个手机屏幕很差,手感显示是手机配置高品质,但真是非常棒!<|endoftext|>
```

The first two prompts are easy cases, and the model readily produces the desired positive reviews. The third and fourth prompts, however, start out negative (the weather is bad, the phone screen is bad), yet the generated continuations still turn positive, which shows that the reinforcement learning step really did have the intended effect.

As for fluency: Wenzhong-GPT2 is fairly weak at producing coherent sentences in the first place, but after several rounds of training the model picks up more than just the positive-review behavior (the text reads somewhat more fluently too), which is also quite nice.