HanBitSa Featured Paper
Heiko H. Schütt 1,2,4,*, Dongjae Kim 1,3,4 & Wei Ji Ma 1
1Center for Neural Science and Department of Psychology, New York University, New York, NY, USA.
2Department of Behavioural and Cognitive Sciences, Université du Luxembourg, Esch-Belval, Luxembourg.
3Department of AI-Based Convergence, Dankook University, Yongin, Republic of Korea.
4These authors contributed equally: Heiko H. Schütt, Dongjae Kim.
*Corresponding author: Heiko H. Schütt
Abstract
We use efficient coding principles borrowed from sensory neuroscience to derive the optimal neural population to encode a reward distribution. We show that the responses of dopaminergic reward prediction error neurons in mouse and macaque are similar to those of the efficient code in the following ways: the neurons have a broad distribution of midpoints covering the reward distribution; neurons with higher thresholds have higher gains, more convex tuning functions and lower slopes; and their slope is higher when the reward distribution is narrower. Furthermore, we derive learning rules that converge to the efficient code. The learning rule for the position of the neuron on the reward axis closely resembles distributional reinforcement learning. Thus, reward prediction error neuron responses may be optimized to broadcast an efficient reward signal, forming a connection between efficient coding and reinforcement learning, two of the most successful theories in computational neuroscience.
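The distributional-reinforcement-learning-like rule mentioned in the abstract can be illustrated with a minimal sketch. This is an illustrative toy, not the paper's actual model or parameters: each simulated neuron keeps a position on the reward axis and updates it asymmetrically, weighting positive and negative prediction errors differently, so the population of positions spreads out to cover the reward distribution (as in expectile-style distributional RL). The asymmetry values, learning rate, and reward distribution below are all assumed for illustration.

```python
import random

def update_positions(thetas, taus, reward, lr=0.05):
    """Asymmetric update: each neuron's position theta_i moves toward the
    observed reward, with a step scaled by tau_i when the reward exceeds
    theta_i and by (1 - tau_i) when it falls below. At convergence,
    theta_i approximates the tau_i-expectile of the reward distribution,
    so neurons with different tau_i tile the reward axis."""
    return [
        theta + lr * (tau if reward > theta else (1 - tau)) * (reward - theta)
        for theta, tau in zip(thetas, taus)
    ]

random.seed(0)
taus = [0.1, 0.5, 0.9]          # per-neuron asymmetries (assumed values)
thetas = [0.0, 0.0, 0.0]        # initial positions on the reward axis
for _ in range(20000):
    r = random.gauss(0.0, 1.0)  # rewards drawn from a standard normal
    thetas = update_positions(thetas, taus, r)
print(thetas)  # positions end up spread from low to high across the reward range
```

Running the loop, the "pessimistic" neuron (tau = 0.1) settles below the mean reward, the balanced one near it, and the "optimistic" one (tau = 0.9) above it, giving the broad distribution of midpoints the abstract describes.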