Training with PEFT Adapters

Sentence Transformers has been integrated with PEFT (Parameter-Efficient Fine-Tuning), which allows you to finetune embedding models without finetuning all of the model parameters. Instead, with PEFT methods you only finetune a small number of (extra) model parameters, with only a minor hit in performance compared to full model finetuning.

PEFT adapter models can be loaded just like any other model, e.g. tomaarsen/bert-base-uncased-gooaq-peft, which does not contain a model.safetensors but only a tiny adapter_model.safetensors:

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-uncased-gooaq-peft")
# Run inference
sentences = [
    "is toprol xl the same as metoprolol?",
    "Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.",
    "Metoprolol starts to work after about 2 hours, but it can take up to 1 week to fully take effect. You may not feel any different when you take metoprolol, but this doesn't mean it's not working. It's important to keep taking your medicine"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings[0], embeddings[1:])
print(similarities)
# tensor([[0.7913, 0.4976]])

Compatibility Methods

SentenceTransformer supports 7 methods for interacting with PEFT adapters: active_adapters(), add_adapter(), disable_adapters(), enable_adapters(), get_adapter_state_dict(), load_adapter(), and set_adapter().
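
For example, an adapter can be toggled or inspected at runtime. Below is a minimal sketch, assuming the pretrained adapter model introduced above; the adapter name reported by active_adapters() depends on how the adapter was saved:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tomaarsen/bert-base-uncased-gooaq-peft")

# Disable the adapter to embed with only the base model weights
model.disable_adapters()
base_embeddings = model.encode(["This is an example sentence"])

# Re-enable the adapter; the adapted embeddings will differ from the base ones
model.enable_adapters()
adapter_embeddings = model.encode(["This is an example sentence"])

# Inspect which adapter(s) are currently active
print(model.active_adapters())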

Adding a New Adapter

Adding a fresh adapter to a model is as simple as calling add_adapter() with a (subclass of a) PeftConfig on an initialized Sentence Transformer model. In the following example, we use a LoraConfig instance.

from sentence_transformers import SentenceTransformer, SentenceTransformerModelCardData
from peft import LoraConfig, TaskType

# 1. Load a model to finetune with 2. (Optional) model card data
model = SentenceTransformer(
    "all-MiniLM-L6-v2",
    model_card_data=SentenceTransformerModelCardData(
        language="en",
        license="apache-2.0",
        model_name="all-MiniLM-L6-v2 adapter finetuned on GooAQ pairs",
    ),
)

# 3. Create a LoRA adapter for the model & add it
peft_config = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    inference_mode=False,
    r=64,
    lora_alpha=128,
    lora_dropout=0.1,
)
model.add_adapter(peft_config)

# Proceed as usual... See https://sbert.hugging-face.cn/docs/sentence_transformer/training_overview.html
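
From here, training works just like a full finetune, except that only the adapter parameters receive gradient updates. Below is a minimal sketch using the standard SentenceTransformerTrainer API with a hypothetical two-pair toy dataset; in practice you would load a real dataset such as GooAQ:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Hypothetical toy dataset of (question, answer) pairs
train_dataset = Dataset.from_dict({
    "question": [
        "is toprol xl the same as metoprolol?",
        "are bananas berries?",
    ],
    "answer": [
        "Metoprolol succinate is also known by the brand name Toprol XL.",
        "Botanically speaking, bananas are berries.",
    ],
})
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,  # the adapter-equipped model from above
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()

# Saving afterwards writes only the small adapter weights (an assumption,
# mirroring how the adapter-only repository below was produced)
model.save_pretrained("models/all-MiniLM-L6-v2-gooaq-peft")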

Loading a Pretrained Adapter

We have created a small adapter model called tomaarsen/bert-base-uncased-gooaq-peft on top of the bert-base-uncased base model.

The adapter_model.safetensors file is only 9.44MB, just 2.14% of the size of the base model's model.safetensors. To load an adapter model like this one, you can load the adapter directly:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tomaarsen/bert-base-uncased-gooaq-peft")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(embeddings.shape)
# (2, 768)

Or you can load the base model and then load the adapter into it:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-uncased")
model.load_adapter("tomaarsen/bert-base-uncased-gooaq-peft")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(embeddings.shape)
# (2, 768)

In most cases, the former is easiest, as it works whether or not the model is an adapter model.

Training Script

For a full example of how PEFT can be used with Sentence Transformers, see the example training script described below.

This script was used to train tomaarsen/bert-base-uncased-gooaq-peft, a model that reaches 0.4705 NDCG@10 on the NanoBEIR benchmark, only slightly behind tomaarsen/bert-base-uncased-gooaq, which scores 0.4728 NDCG@10 and was trained with a modified script that performs full model finetuning.
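
To reproduce such a score, the sketch below uses the NanoBEIREvaluator that ships with recent Sentence Transformers releases; the evaluator defaults are assumptions, and the exact numbers may differ slightly from those reported above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("tomaarsen/bert-base-uncased-gooaq-peft")

# Evaluate on the NanoBEIR datasets; returns a dict of metrics including NDCG@10
evaluator = NanoBEIREvaluator()
results = evaluator(model)
print(results[evaluator.primary_metric])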