Computational modeling of epiphany learning

Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.

Journal Title: Proceedings of the National Academy of Sciences of the United States of America 02 May 2017, Vol.114(18), pp.4637-4642
Main Author: Chen, Wei James
Other Authors: Krajbich, Ian
Format: Electronic Article
Language: English
Subjects: Beauty Contest ; Decision Making ; Epiphany Learning ; Eye Tracking ; Pupil Dilation ; Computer Simulation ; Models, Neurological ; Eye Movements -- Physiology ; Learning -- Physiology
ID: ISSN: 0027-8424 ; E-ISSN: 1091-6490 ; PMID: 28416682 ; DOI: 10.1073/pnas.1618161114
Link: http://pubmed.gov/28416682