Realize a limited mind from incomplete models

2008/04/25 at 12:05 pm | Posted in Cognition, Infrastructure, Research

    This is a demonstration of building a computational model of word recognition through the approach of nested modeling. Noting the weaknesses of previous computational models (the triangle model, DRC, and CDP) in simulating human performance, Perry et al. merged the interactive-activation (IA) lexical network of DRC with the association network of CDP in the CDP+ model. The lexical and sub-lexical routes inherit the strengths of their predecessors. The sub-lexical route processes the stimulus input stored in the graphemic buffer according to trained association strengths. On these principles, they tested the performance of CDP+ on data sets that had challenged the previous models.
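The division of labor between the two routes can be sketched informally. The following Python toy is not the authors' implementation: the lexicon entries, association weights, and fallback control flow are all illustrative assumptions (in CDP+ itself the two routes operate in parallel and pool activation in a phoneme output buffer). It only shows the idea of a whole-word lexical lookup alongside a sub-lexical route that assembles pronunciation from grapheme-phoneme association strengths:

```python
# Toy sketch of a dual-route architecture in the spirit of CDP+.
# All words, phoneme codes, and weights below are hypothetical.

# Lexical route: direct orthography-to-phonology lookup.
LEXICON = {
    "pint": "/paInt/",   # irregular word: only the lexical route gets it right
    "mint": "/mInt/",
}

# Sub-lexical route: grapheme-to-phoneme association strengths,
# standing in for the trained association network.
ASSOCIATIONS = {
    "p": {"p": 1.0},
    "m": {"m": 1.0},
    "i": {"I": 0.8, "aI": 0.2},   # inconsistent grapheme: two candidates
    "n": {"n": 1.0},
    "t": {"t": 1.0},
}

def sublexical_route(word):
    """Assemble a pronunciation grapheme by grapheme from the buffer."""
    phonemes = []
    for grapheme in word:                 # graphemic buffer, left to right
        candidates = ASSOCIATIONS[grapheme]
        # Choose the phoneme with the strongest association.
        phonemes.append(max(candidates, key=candidates.get))
    return "/" + "".join(phonemes) + "/"

def read_aloud(word):
    """Simplification: lexical lookup wins for known words,
    otherwise the sub-lexical route assembles the output."""
    if word in LEXICON:
        return LEXICON[word]
    return sublexical_route(word)

print(read_aloud("pint"))   # known irregular word: lexical lookup
print(read_aloud("nint"))   # nonword: assembled sub-lexically
```

Note how the sub-lexical route regularizes: fed the irregular word "pint" directly, it would output the consistent pronunciation "/pInt/", which is exactly why neutralizing the lexical route exposes the sub-lexical origin of consistency effects.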

    To demonstrate the adequacy of CDP+, they propose the notion of "strong inference testing," which verifies model performance through both factorial-design analyses and large-scale human data. If one of these models has descriptive adequacy in simulating human data, by their logic, both analyses would rank that model highest.

    My comment on CDP+ concerns its simulation of the consistency effect. Indeed, this model successfully simulated Jared's findings, which both DRC and the triangle model failed to capture, in the analysis of variance. The authors also convince readers of the superior fit of CDP+ with the regression method. Without doubt, this model currently has the highest adequacy in describing the consistency effect. In a subsequent test weakening the contribution of the lexical route, they report that the consistency effect is a product of the sub-lexical route. This is clearly contrary to DRC, which assumes the consistency effect is a product of the lexical route.

This simulation case makes me rethink the level at which "adequacy" is addressed in scientific research. The first scientific theory to consider "adequacy" was the theory of grammar proposed by Chomsky. When the term is used in the discipline of computational modeling, it refers to the range of data sets a model accounts for. The first level, observational adequacy, means the capability of a model to handle the data on a specific topic, for instance, the consistency effect. The second level, descriptive adequacy, refers to the scope of a model in covering studies of a general issue, such as orthography-to-phonology mapping. The final level, explanatory adequacy, suggests the potential of a model to generalize its theoretical implications beyond its default scope.

    Like their predecessors, Perry et al. position CDP+ at the level of descriptive adequacy, and accumulated behavioral data support their standing there. Their conclusion that the sub-lexical route produces the consistency effect builds on a simulation in which the operation of the lexical route is neutralized. However, this claim is still debated among behavioral studies, which means it is debatable whether observational adequacy has been achieved.

    For a researcher who plans to build a computational model of Chinese word recognition, the advantage, as well as the disadvantage, is that fewer behavioral cases are available for simulation. It is a disadvantage because we cannot consider as many questions as English studies do. It is also an advantage because we can seriously consider what the constituents of observational adequacy for a given topic are. Once this adequacy is satisfied, the nested-modeling approach can support the development of computational models toward descriptive adequacy. For now, concentrating on a specific topic is the principle that developers of Chinese models should follow.

 

Perry, C., Ziegler, J. C., & Zorzi, M. (2007). Nested incremental modeling in the development of computational theories: The CDP+ model of reading aloud. Psychological Review, 114, 273-315.
