Thursday, February 14, 2013

Journal Alert: TOPICS IN COGNITIVE SCIENCE

> Title:
> Why Formal Learning Theory Matters for Cognitive Science
>
> Authors:
> Fulop, S; Chater, N
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):3-12; JAN 2013
>
> Abstract:
> This article reviews a number of different areas in the foundations of
> formal learning theory. After outlining the general framework for formal
> models of learning, the Bayesian approach to learning is summarized.
> This leads to a discussion of Solomonoff's Universal Prior Distribution
> for Bayesian learning. Gold's model of identification in the limit is
> also outlined. We next discuss a number of aspects of learning theory
> raised in contributed papers, related to both computational and
> representational complexity. The article concludes with a description of
> how semi-supervised learning can be applied to the study of cognitive
> learning models. Throughout this overview, the specific points raised by
> our contributing authors are connected to the models and methods under
> review.
>
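The Gold paradigm mentioned here is easy to see in miniature. Below is a toy sketch of my own (not from the article): a learner over a tiny finite class of languages that always conjectures the smallest language consistent with the positive data seen so far, and so identifies the target in the limit.

```python
# Illustrative sketch: Gold-style identification in the limit over a small
# finite class of languages, each represented as a set of strings.
LANGUAGES = {
    "L1": {"a"},
    "L2": {"a", "ab"},
    "L3": {"a", "ab", "abb"},
}

def conjecture(data_seen):
    """Return the name of the smallest language containing all data so far."""
    consistent = [name for name, lang in LANGUAGES.items()
                  if data_seen <= lang]
    return min(consistent, key=lambda n: len(LANGUAGES[n]))

def learn(text):
    """Feed a text (a sequence of sentences from the target language) to the
    learner, returning its sequence of conjectures."""
    seen, guesses = set(), []
    for sentence in text:
        seen.add(sentence)
        guesses.append(conjecture(seen))
    return guesses

# On a text for L3, the guesses converge to "L3" and never change again.
print(learn(["a", "a", "ab", "abb", "a"]))  # ['L1', 'L1', 'L2', 'L3', 'L3']
```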
> ========================================================================
>
>
> *Pages: 13-34 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300003
>
> Title:
> Tuning Your Priors to the World
>
> Authors:
> Feldman, J
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):13-34; JAN 2013
>
> Abstract:
> The idea that perceptual and cognitive systems must incorporate
> knowledge about the structure of the environment has become a central
> dogma of cognitive theory. In a Bayesian context, this idea is often
> realized in terms of tuning the prior, widely assumed to mean adjusting
> prior probabilities so that they match the frequencies of events in the
> world. This kind of ecological tuning has often been held up as an ideal
> of inference, in fact defining an ideal observer. But widespread as this
> viewpoint is, it directly contradicts Bayesian philosophy of
> probability, which views probabilities as degrees of belief rather than
> relative frequencies, and explicitly denies that they are objective
> characteristics of the world. Moreover, tuning the prior to observed
> environmental frequencies is subject to overfitting, meaning in this
> context overtuning to the environment, which leads (ironically) to poor
> performance in future encounters with the same environment. Whenever
> there is uncertainty about the environment (which there almost always is), an
> agent's prior should be biased away from ecological relative frequencies
> and toward simpler and more entropic priors.
>
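Feldman's overfitting point can be illustrated with a toy calculation of my own (the numbers are invented, not the paper's): a prior tuned exactly to a small sample of environmental frequencies assigns zero probability to an unseen but real event, while a prior smoothed toward uniform (more entropic) does better on future data by expected log-loss.

```python
# Toy illustration: exact frequency-matching overfits; a more entropic,
# smoothed prior generalizes better against the true distribution.
import math

def expected_log_loss(true_p, prior_q):
    """Cross-entropy of the agent's prior against the true distribution."""
    return sum(-p * (math.log(q) if q > 0 else -math.inf)
               for p, q in zip(true_p, prior_q) if p > 0)

true_p = [0.5, 0.3, 0.2]   # true category frequencies in the world
counts = [4, 2, 0]         # a small sample: category 3 never observed

n = sum(counts)
empirical = [c / n for c in counts]                       # tuned to the sample
smoothed = [(c + 1) / (n + len(counts)) for c in counts]  # add-one smoothing

print(expected_log_loss(true_p, empirical))  # inf: zero mass on a real event
print(expected_log_loss(true_p, smoothed))   # finite: better in the long run
```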
> ========================================================================
>
>
> *Pages: 35-55 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300004
>
> Title:
> Language Learning From Positive Evidence, Reconsidered: A Simplicity-Based Approach
>
> Authors:
> Hsu, AS; Chater, N; Vitanyi, P
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):35-55; JAN 2013
>
> Abstract:
> Children learn their native language by exposure to their linguistic and
> communicative environment, but apparently without requiring that their
> mistakes be corrected. Such learning from positive evidence has been
> viewed as raising logical problems for language acquisition. In
> particular, without correction, how is the child to recover from
> conjecturing an over-general grammar, which will be consistent with any
> sentence that the child hears? There have been many proposals concerning
> how this logical problem can be dissolved. In this study, we review
> recent formal results showing that the learner has sufficient data to
> learn successfully from positive evidence, if it favors the simplest
> encoding of the linguistic input. Results include the learnability of
> linguistic prediction, grammaticality judgments, language production,
> and form-meaning mappings. The simplicity approach can also be scaled
> down to analyze the learnability of specific linguistic constructions,
> and it is amenable to empirical testing as a framework for describing
> human language acquisition.
>
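A rough two-part-code illustration of the simplicity idea (my own construction; the paper's formal results are far more general): with enough positive-only data, a minimum-description-length criterion prefers a tight grammar over an over-general one, even though the over-general grammar is never contradicted by the data.

```python
# Hypothetical MDL sketch: total cost = bits to state the grammar plus bits
# to pick each observed sentence out of the grammar's sentences of its length.
import math

def code_length(grammar_cost, strings_of_length, data):
    return grammar_cost + sum(math.log2(strings_of_length(len(s)))
                              for s in data)

# Over-general grammar: any string over {a, b}; cheap to state, costly per datum.
overgeneral = lambda length: 2 ** length
# Specific grammar: only (ab)^n; costlier to state, but each datum is free.
specific = lambda length: 1

data = ["ab", "abab", "ababab"] * 5   # positive evidence only

print(code_length(2, overgeneral, data))   # 2 + (2+4+6)*5 = 62 bits
print(code_length(10, specific, data))     # 10 + 0 = 10 bits: simplicity wins
```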
> ========================================================================
>
>
> *Pages: 56-88 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300005
>
> Title:
> On the Necessity of U-Shaped Learning
>
> Authors:
> Carlucci, L; Case, J
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):56-88; JAN 2013
>
> Abstract:
> A U-shaped curve in a cognitive-developmental trajectory refers to a
> three-step process: good performance followed by bad performance
> followed by good performance once again. U-shaped curves have been
> observed in a wide variety of cognitive-developmental and learning
> contexts. U-shaped learning seems to contradict the idea that learning
> is a monotonic, cumulative process and thus constitutes a challenge for
> competing theories of cognitive development and learning. U-shaped
> behavior in language learning (in particular in learning English past
> tense) has become a central topic in the Cognitive Science debate about
> learning models. Antagonist models (e.g., connectionism versus nativism)
> are often judged on their ability to model or account for U-shaped
> behavior. The prior literature is mostly occupied with explaining how
> U-shaped behavior occurs. Instead, we are interested in the necessity of
> this kind of apparently inefficient strategy. We present and discuss a
> body of results in the abstract mathematical setting of (extensions of)
> Gold-style computational learning theory addressing a mathematically
> precise version of the following question: Are there learning tasks that
> require U-shaped behavior? All notions considered are learning in the
> limit from positive data. We present results about the necessity of
> U-shaped learning in classical models of learning as well as in models
> with bounds on the memory of the learner. The pattern emerges that, for
> parameterized, cognitively relevant learning criteria, beyond very few
> initial parameter values, U-shapes are necessary for full learning
> power! We discuss the possible relevance of the above results for the
> Cognitive Science debate about learning models as well as directions for
> future research.
>
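For readers new to the phenomenon, here is a toy simulation of my own of the classic English past-tense U-shape the abstract refers to; it illustrates the good-bad-good trajectory, not the paper's necessity proofs.

```python
# Three learner stages; accuracy on irregular verbs goes good -> bad -> good.
IRREGULARS = {"go": "went", "sing": "sang", "eat": "ate"}

def stage1(verb):   # rote memorization of attested forms
    return IRREGULARS[verb]

def stage2(verb):   # the "-ed" rule, over-applied to irregulars
    return verb + "ed"

def stage3(verb):   # rule plus memorized exceptions
    return IRREGULARS.get(verb, verb + "ed")

def accuracy(stage):
    return sum(stage(v) == t for v, t in IRREGULARS.items()) / len(IRREGULARS)

print([accuracy(s) for s in (stage1, stage2, stage3)])  # [1.0, 0.0, 1.0]
```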
> ========================================================================
>
>
> *Pages: 89-110 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300006
>
> Title:
> Complexity in Language Acquisition
>
> Authors:
> Clark, A; Lappin, S
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):89-110; JAN 2013
>
> Abstract:
> Learning theory has frequently been applied to language acquisition, but
> discussion has largely focused on information-theoretic problems, in
> particular on the absence of direct negative evidence. Such arguments
> typically neglect the probabilistic nature of cognition and learning in
> general. We argue first that these arguments, and analyses based on
> them, suffer from a major flaw: they systematically conflate the
> hypothesis class and the learnable concept class. As a result, they do
> not allow one to draw significant conclusions about the learner. Second,
> we claim that the real problem for language learning is the
> computational complexity of constructing a hypothesis from input data.
> Studying this problem allows for a more direct approach to the object of
> study (the language acquisition device) rather than the learnable class of
> languages, which is epiphenomenal and possibly hard to characterize. The
> learnability results informed by complexity studies are much more
> insightful. They strongly suggest that target grammars need to be
> objective, in the sense that the primitive elements of these grammars
> are based on objectively definable properties of the language itself.
> These considerations support the view that language acquisition proceeds
> primarily through data-driven learning of some form.
>
> ========================================================================
>
>
> *Pages: 111-131 (Review)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300007
>
> Title:
> What Complexity Differences Reveal About Domains in Language
>
> Authors:
> Heinz, J; Idsardi, W
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):111-131; JAN 2013
>
> Abstract:
> An important distinction between phonology and syntax has been
> overlooked. All phonological patterns belong to the regular region of
> the Chomsky Hierarchy, but not all syntactic patterns do. We argue that
> the hypothesis that humans employ distinct learning mechanisms for
> phonology and syntax currently offers the best explanation for this
> difference.
>
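The regular/non-regular contrast is concrete: a typical long-distance phonological constraint can be checked with a fixed, tiny memory, whereas nested syntactic dependencies cannot. A toy checker of my own (the segments and the harmony pattern are invented for illustration):

```python
# A sibilant-harmony constraint ("a word may not mix the sibilants s and S")
# is regular: scanning left to right, we only remember which sibilant, if
# any, has been seen, i.e., a three-state finite automaton.
def harmonic(word):
    state = "none"
    for segment in word:
        if segment in "sS":
            if state == "none":
                state = segment
            elif state != segment:
                return False      # clash: both sibilants occurred
    return True

print(harmonic("sokosu"))   # True
print(harmonic("sokoSu"))   # False: mixes s and S
```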
> ========================================================================
>
>
> *Pages: 132-172 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300008
>
> Title:
> Human Semi-Supervised Learning
>
> Authors:
> Gibson, BR; Rogers, TT; Zhu, XJ
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):132-172; JAN 2013
>
> Abstract:
> Most empirical work in human categorization has studied learning in
> either fully supervised or fully unsupervised scenarios. Most real-world
> learning scenarios, however, are semi-supervised: Learners receive a
> great deal of unlabeled information from the world, coupled with
> occasional experiences in which items are directly labeled by a
> knowledgeable source. A large body of work in machine learning has
> investigated how learning can exploit both labeled and unlabeled data
> provided to a learner. Using equivalences between models found in human
> categorization and machine learning research, we explain how these
> semi-supervised techniques can be applied to human learning. A series of
> experiments are described which show that semi-supervised learning
> models prove useful for explaining human behavior when exposed to both
> labeled and unlabeled data. We then discuss some machine learning models
> that do not have familiar human categorization counterparts. Finally, we
> discuss some challenges yet to be addressed in the use of
> semi-supervised models for modeling human categorization.
>
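Self-training is one of the simplest semi-supervised techniques in the machine-learning literature the authors draw on. This small sketch of mine (invented data, nearest-centroid categories) shows the core effect: unlabeled items move a category boundary, changing how a borderline item is classified.

```python
# One self-training round: pseudo-label unlabeled points with the current
# categories, then re-estimate the category centroids from everything.
def nearest(x, centroids):
    return min(centroids, key=lambda c: abs(x - centroids[c]))

labeled = {"A": [0.0], "B": [6.0]}
unlabeled = [1.0, 1.5, 2.0, 5.5]

# Supervised-only centroids, estimated from the labeled items alone.
centroids = {c: sum(xs) / len(xs) for c, xs in labeled.items()}
print(nearest(3.2, centroids))            # "B": the boundary sits at 3.0

pools = {c: list(xs) for c, xs in labeled.items()}
for x in unlabeled:
    pools[nearest(x, centroids)].append(x)
centroids = {c: sum(xs) / len(xs) for c, xs in pools.items()}
print(nearest(3.2, centroids))            # "A": the boundary moved past 3.2
```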
> ========================================================================
>
>
> *Pages: 173-184 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300009
>
> Title:
> Knowledge and Implicature: Modeling Language Understanding as Social Cognition
>
> Authors:
> Goodman, ND; Stuhlmuller, A
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):173-184; JAN 2013
>
> Abstract:
> Is language understanding a special case of social cognition? To help
> evaluate this view, we can formalize it as the rational speech-act
> theory: Listeners assume that speakers choose their utterances
> approximately optimally, and listeners interpret an utterance by using
> Bayesian inference to invert this model of the speaker. We apply this
> framework to model scalar implicature (*some* implies *not all*, and *N*
> implies *not more than N*). This model predicts an interaction between the
> speaker's knowledge state and the listener's interpretation. We test
> these predictions in two experiments and find good fit between model
> predictions and human judgments.
>
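The speaker-inversion recursion can be sketched concretely. The state space and numbers below are my own toy choices, not the paper's experiments: a literal listener, a speaker who chooses among true utterances, and a pragmatic listener who inverts the speaker by Bayes' rule.

```python
# Rational speech-act sketch: hearing "some", the pragmatic listener infers
# "probably not all" without that being part of the literal meaning.
STATES = [0, 1, 2, 3]                      # how many of 3 apples were eaten
MEANING = {"none": lambda s: s == 0,
           "some": lambda s: s >= 1,
           "all":  lambda s: s == 3}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):
    """Literal listener: uniform over states where u is true."""
    return normalize({s: float(MEANING[u](s)) for s in STATES})

def S1(s):
    """Speaker: picks a true utterance in proportion to how well the
    literal listener recovers the state from it."""
    return normalize({u: L0(u)[s] for u in MEANING if MEANING[u](s)})

def L1(u):
    """Pragmatic listener: Bayesian inversion of the speaker (uniform prior)."""
    return normalize({s: S1(s).get(u, 0.0) for s in STATES if MEANING[u](s)})

print(L0("some")[3])   # 1/3: literally, "some" is compatible with "all"
print(L1("some")[3])   # 1/9: pragmatically, "some" implicates "not all"
```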
> ========================================================================
>
>
> *Pages: 185-199 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300010
>
> Title:
> Sources of Uncertainty in Intuitive Physics
>
> Authors:
> Smith, KA; Vul, E
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):185-199; JAN 2013
>
> Abstract:
> Recent work suggests that people predict how objects interact in a
> manner consistent with Newtonian physics, but with additional
> uncertainty. However, the sources of uncertainty have not been examined.
> In this study, we measure perceptual noise in initial conditions and
> stochasticity in the physical model used to make predictions.
> Participants predicted the trajectory of a moving object through
> occluded motion and bounces, and we compared their behavior to an ideal
> observer model. We found that human judgments cannot be captured by
> simple heuristics and must incorporate noisy dynamics. Moreover, these
> judgments are biased consistently with a prior expectation on object
> destinations, suggesting that people use simple expectations about
> outcomes to compensate for uncertainty about their physical models.
>
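The "Newtonian physics plus uncertainty" idea can be mocked up as a small simulation; everything below (walls, velocities, noise level) is my invention, not the study's stimuli. Deterministic physics run forward from noisy estimates of the initial conditions yields a distribution over destinations rather than a point prediction.

```python
# Noisy-Newton sketch: exact dynamics, uncertain percepts of the start state.
import random

def roll(position, velocity, steps, lo=0.0, hi=10.0):
    """Deterministic physics: constant velocity with elastic wall bounces."""
    for _ in range(steps):
        position += velocity
        if position > hi:
            position, velocity = 2 * hi - position, -velocity
        elif position < lo:
            position, velocity = 2 * lo - position, -velocity
    return position

def predict(position, velocity, steps, perceptual_noise, samples=1000):
    """Sample destination predictions from noisy percepts of the start state."""
    rng = random.Random(0)   # fixed seed so the sketch is reproducible
    return [roll(rng.gauss(position, perceptual_noise),
                 rng.gauss(velocity, perceptual_noise), steps)
            for _ in range(samples)]

guesses = predict(2.0, 1.5, 12, perceptual_noise=0.2)
mean = sum(guesses) / len(guesses)
spread = max(guesses) - min(guesses)
print(round(mean, 2), round(spread, 2))  # central tendency plus real spread
```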
> ========================================================================
>
>
> *Pages: 200-213 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300011
>
> Title:
> Actively Learning Object Names Across Ambiguous Situations
>
> Authors:
> Kachergis, G; Yu, C; Shiffrin, RM
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):200-213; JAN 2013
>
> Abstract:
> Previous research shows that people can use the co-occurrence of words
> and objects in ambiguous situations (i.e., containing multiple words and
> objects) to learn word meanings during a brief passive training period
> (Yu & Smith, 2007). However, learners in the world are not completely
> passive but can affect how their environment is structured by moving
> their heads, eyes, and even objects. These actions can indicate
> attention to a language teacher, who may then be more likely to name the
> attended objects. Using a novel active learning paradigm in which
> learners choose which four objects they would like to see named on each
> successive trial, this study asks whether active learning is superior to
> passive learning in a cross-situational word learning context. Finding
> that learners perform better in active learning, we investigate the
> strategies and discover that most learners use immediate repetition to
> disambiguate pairings. Unexpectedly, we find that learners who repeat
> only one pair per trial (an easy way to infer this pair) perform worse than
> those who repeat multiple pairs per trial. Using a working memory
> extension to an associative model of word learning with uncertainty and
> familiarity biases, we investigate individual differences that correlate
> with these assorted strategies.
>
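The baseline mechanism behind cross-situational learning is simple co-occurrence tallying. This minimal sketch of mine (invented words and objects) disambiguates word-object pairings across ambiguous trials; the authors' full model adds uncertainty, familiarity, and working-memory components on top of something like this.

```python
# Accumulate word-object co-occurrence counts across ambiguous trials, then
# guess each word's referent as its strongest associate.
from collections import defaultdict

def train(trials):
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return counts

# Each trial presents two words and two objects with no within-trial pairing cue.
trials = [
    (["dax", "wug"], ["DOG", "CUP"]),
    (["dax", "fep"], ["DOG", "BALL"]),
    (["wug", "fep"], ["CUP", "BALL"]),
]

counts = train(trials)
lexicon = {w: max(counts[w], key=counts[w].get) for w in counts}
print(lexicon)   # each word resolves to its consistently co-occurring object
```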
> ========================================================================
>
>
> *Pages: 214-221 (Article)
> *View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000313754300012
>
> Title:
> What Should Be the Data Sharing Policy of Cognitive Science?
>
> Authors:
> Pitt, MA; Tang, Y
>
> Source:
> *TOPICS IN COGNITIVE SCIENCE*, 5 (1):214-221; JAN 2013
>
> Abstract:
> There is a growing chorus of voices in the scientific community calling
> for greater openness in the sharing of raw data that lead to a
> publication. In this commentary, we discuss the merits of sharing,
> common concerns that are raised, and practical issues that arise in
> developing a sharing policy. We suggest that the cognitive science
> community discuss the topic and establish a data-sharing policy.
>
>
