Finding Skill Neurons in Pre-trained Transformers via Prompt Tuning

Published in EMNLP 2022

Xiaozhi Wang*, Kaiyue Wen*, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li

This paper presents our discovery of a set of neurons inside pre-trained language models that encode task skills: the activations of these neurons, after prompt tuning (a form of delta tuning), or even without any training, can be used to predict labels on some downstream tasks. We further show that these skill neurons are crucial for downstream delta tuning.
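To make the idea concrete, below is a minimal sketch (not our released code) of how one might score how predictive each neuron's activation is of a task label. It assumes a binary task and a precomputed activation matrix (e.g., FFN activations collected at a soft-prompt token position); the function name `neuron_predictivity` and the threshold-at-the-mean heuristic are illustrative choices, not the exact procedure in the paper.

```python
import numpy as np

def neuron_predictivity(activations: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Score each neuron by how well thresholding its activation predicts the label.

    activations: (num_examples, num_neurons) array of neuron activations.
    labels: (num_examples,) array of binary labels in {0, 1}.

    For each neuron, use its mean activation as a decision threshold and
    take max(acc, 1 - acc) so that both positively and negatively
    correlated neurons count as predictive.
    """
    thresholds = activations.mean(axis=0)             # (num_neurons,)
    preds = (activations > thresholds).astype(int)    # (num_examples, num_neurons)
    acc = (preds == labels[:, None]).mean(axis=0)     # per-neuron accuracy
    return np.maximum(acc, 1.0 - acc)

# Hypothetical usage: rank neurons and inspect the most predictive ones.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(256, 3072))               # fake activations
    y = rng.integers(0, 2, size=256)
    acts[:, 7] += 2.0 * y                             # plant one predictive neuron
    scores = neuron_predictivity(acts, y)
    print("top neurons:", np.argsort(scores)[::-1][:5])
```

Highly predictive neurons under a scoring scheme like this would be the candidate "skill neurons".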