“THE WORDS WE CHOOSE DENOTE THE CONCEPTS WE THINK?”:

THE EFFECTS OF SYNERGIC CONCEPTS ON LEXICAL COMPLEXITY


IN WRITTEN DATA OF LATE BILINGUALS

Independent Study with Professor Istvan Kecskes

Hanh Dinh

LEXICAL COMPLEXITY MEASUREMENTS

          The measurement of complexity, accuracy, and fluency (CAF) has been associated with assessing the proficiency of second language learners' language outcomes (Ortega, 2003; Polio, 2001). Comprehensive quantifiable measures for written performance were reviewed more than a decade ago by Wolfe-Quintero, Inagaki, and Kim (1998). Given the limited scope of this working paper, I focus only on lexical complexity.

There are two different ways to define the concept of complexity in SLA research: one defines complexity as linguistic complexity, the other as cognitive complexity (DeKeyser, 1998; Housen & Kuiken, 2009; Housen, Van Daele & Pierrard, 2005). Cognitive complexity is a relative, subjective psycholinguistic construct referring to the difficulty with which language elements are processed during L2 performance, as determined by learners' individual backgrounds (aptitude, motivation, stage of L2 development, and L1 background). Hence, cognitive complexity refers to the scope of expanding or restructuring conceptual knowledge under the access to, and control of, conceptual representations. Linguistic complexity, on the other hand, is a component of cognitive complexity but does not coincide with it. It is an objective given, independent of the learner, referring to the intrinsic formal and semantic-functional properties of L2 elements (form-meaning mappings) or to properties of L2 systems.

For written data from bilingual participants, the purpose of these types of measures so far has been to determine the lexical characteristics of more proficient writers and to delve into “…the complexity of the underlying interlanguage system developed” (Skehan, 2003, p. 8). Several of these studies have also used lexical complexity measures to compare L2 writers with native ones (Harley & King, 1989; Hyltenstam, 1988; Linnarud, 1986; McClure, 1991).

Lexical complexity means that a wide variety of basic and sophisticated words is available and can be accessed quickly, whereas a lack of complexity means only a narrow range of basic words is accessible. Lexical complexity is manifested in writing in terms of the range (lexical variation) and size (lexical sophistication) of a second language writer's productive vocabulary. In 1986, Linnarud added the aspect of lexical density: the extent to which lexical (content) words figure in the written text relative to the total number of words. Lexical complexity can therefore be divided into three sub-aspects for investigation: lexical variation, lexical density, and lexical sophistication (Skehan, 2003; Bulté et al., 2008; Wolfe-Quintero, Inagaki, & Kim, 1998).

Almost no research has been done using frequency measures of lexical complexity. The only exception is Harley and King (1989), who compared French-English bilingual sixth-grade students with native writers on the number of verb types in their narratives and letters (the total number of sophisticated (infrequent) verb types divided by the total number of verbs). Unsurprisingly, they found a significant difference on this measure, indicating that native writers have more verbs available in their lexical repertoire. They also found that bilingual writers used significantly fewer verb types than native writers on timed writing tasks. Since then, ratio measures have proved more valid than frequency measures (Wolfe-Quintero et al., 1998).

Specifically, the most traditional and common way of measuring lexical variation is to compute the ratio of types to tokens (Cumming & Mellow, 1996). Another formula was suggested by McClure (1991), whose study used the ratio of the number of word types to the total number of words to compare the word variation of Spanish-English bilinguals and English monolinguals. That study found a significant difference in modifier types per lexical word between bilingual and monolingual school children. However, such a ratio does not differentiate between a writer who uses a few types in a short composition and a writer who uses more types in a longer one, because it is sensitive to the length of the writing samples. Therefore, Carroll (1967) suggested a new formula, the number of word types divided by the square root of twice the total number of words, which is not affected by the length of the writing production. Recent studies (Malvern, Richards, Chipere & Durán, 2004; Milton, 2009) have favored what is called the “D-value” (devised by Malvern & Richards, 2002), the rate at which new words are introduced in increasingly longer text samples. The D value is also independent of the length of the text.
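As a concrete illustration, the two ratio formulas above can be sketched in a few lines of Python. This is a minimal sketch with names of my own choosing; a real study would first lemmatize and clean the tokens:

```python
import math

def lexical_variation(tokens):
    """Return the raw type/token ratio and Carroll's (1967) corrected TTR,
    i.e. word types divided by the square root of twice the token count."""
    types = {t.lower() for t in tokens}
    ttr = len(types) / len(tokens)
    cttr = len(types) / math.sqrt(2 * len(tokens))
    return ttr, cttr

sample = "the cat sat on the mat and the dog sat by the door".split()
ttr, cttr = lexical_variation(sample)
```

Unlike the raw ratio, the corrected version falls much less sharply as a text grows longer, which is the motivation for it and for the later length-independent D-value.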

Regarding lexical density measurement, many scholars have argued that it neither differentiates natives from second language learners (Linnarud, 1986; Hyltenstam, 1988) nor demonstrates the writers' language proficiency level (Linnarud, 1986; Nihalani, 1981; Engber, 1995), since it may depend on the grammatical system of the language. However, lexical density measurement has raised interest in investigating the denseness of lexical (content) words in particular types of writers. One of the earliest studies, McClure (1991), compared Spanish-English bilingual and English monolingual 4th and 9th grade school children on a variety of lexical word variation type/token measures. The researcher found that the monolinguals had a significantly lower noun variation ratio than the bilinguals, indicating that the bilinguals had access to proportionately more nouns in their writing. Linnarud (1986) also noticed that native writers used many more original words than bilingual writers. Her study used a measure of lexical individuality, intended to indicate the level of lexical development: first, the word types unique to a single writer are identified; then that quantity is divided by the writer's total number of lexical words. She investigated the correlation between lexical individuality and holistic ratings of picture descriptions and found it to be significant, though she did not report the means. She also compared the lexical individuality of the learners with that of native speakers, though without a statistical test of the difference. However, the validity of her measure as an index of lexical development is doubtful, not only because originality of word choice may not be a valid construct for language development, but also because uniqueness varies with the particular corpus and with the writers' cultural backgrounds.
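To make the two measures in this paragraph concrete, here is a minimal Python sketch. The stop-word list and all names are illustrative placeholders of my own, not the instruments Linnarud or McClure actually used (those relied on proper part-of-speech classification of content words):

```python
# Toy function-word list; real studies use a POS tagger to isolate content words.
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "on", "in",
                  "by", "of", "to", "is", "was"}

def lexical_density(tokens):
    """Proportion of lexical (content) words among all words in the text."""
    content = [t for t in tokens if t.lower() not in FUNCTION_WORDS]
    return len(content) / len(tokens)

def lexical_individuality(writer_tokens, other_writers):
    """Linnarud-style index: word types unique to this writer,
    divided by the writer's total number of lexical words."""
    others = {t.lower() for text in other_writers for t in text}
    lexical = [t.lower() for t in writer_tokens
               if t.lower() not in FUNCTION_WORDS]
    unique_types = {t for t in lexical if t not in others}
    return len(unique_types) / len(lexical)
```

The second function makes the corpus-dependence objection visible: the score changes whenever the comparison corpus changes, even if the writer's text does not.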

For lexical sophistication, Harley and King (1989) found a significant difference in the ratio of sophisticated verb types to total verbs between sixth-grade native and bilingual learners. Laufer and Nation (1995), in contrast, counted the rate of rare vocabulary, which they called the Lexical Frequency Profile. Another index, Lambda, is an index of lexical sophistication obtainable with the P_Lex software, computed with a formula devised by Meara and Miralpeix (2005), who compared the written data of two NNS groups and an NS group. It was also used by Skehan and Foster (2012), who reported that native participants produced higher Lambda values than nonnative ones.
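The frequency-band logic behind the Lexical Frequency Profile can be sketched roughly as follows. The band lists here are tiny toy stand-ins for the 1,000- and 2,000-word frequency lists the real instrument uses, and the function name is my own:

```python
def lexical_frequency_profile(tokens, bands):
    """Proportion of tokens falling into each frequency band;
    tokens matching no band count as 'offlist' (the rare vocabulary)."""
    counts = {name: 0 for name in bands}
    counts["offlist"] = 0
    for t in tokens:
        for name, words in bands.items():
            if t.lower() in words:
                counts[name] += 1
                break
        else:
            counts["offlist"] += 1
    total = len(tokens)
    return {name: n / total for name, n in counts.items()}

toy_bands = {
    "first_1000": {"the", "dog", "ran", "to", "house"},
    "second_1000": {"garden", "quickly"},
}
profile = lexical_frequency_profile(
    "the dog ran quickly to the arboretum".split(), toy_bands)
```

A more sophisticated writer shifts mass out of the first band and into the later bands and the off-list remainder, which is the quantity the LFP (and, with a different model, Lambda) tries to capture.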

          The Gap

Linguistic complexity is established based on what complexity is (theoretical), how it is deployed in actual language performance (observational), and how those behavioral displays can be quantified (operational) (Housen et al., 2012). However, as the review above shows, researchers have been working diligently at the operational/quantifiable level rather than looking into the internal construct of complexity to explain why language performance emerges in a particular way (at the cognitive level). In other words, CAF measures have rarely been adequately defined at a more theoretical level as a cognitive construct, nor connected to the linguistic one. They have not been applied much in linguistics or psycholinguistics, although complexity is “the primary epiphenomena of the psycholinguistic processes and mechanisms underlying the acquisition, representation and processing of L2 systems” (Housen et al., 2005, p. 2). Therefore, the challenge in this study is to establish a relationship between the representations, processes, and mechanisms defined from the perspective of psycholinguistic SLA and the performance outcomes.

2.    The Synergic Concepts and Ways to Identify Words Reflecting Synergic Concepts: The Use of Lexical Labels to Denote the Blending Concepts

In order to produce the right words in the right place, a learner needs to have proceduralized three different kinds of mental representation. First, the learner needs to acquire linguistic competence as defined by Chomsky (1968) in his theory of Universal Grammar; competence in his account comprises the abstract mental representations needed for grammatical concepts to emerge through the syntax and morphology of the world's languages. Second is learned linguistic knowledge, which results from explicit, specific learning of, for example, the lexicon, formulaic utterances, pragmatics, and the rules of written language (Schwartz, 1993; White, 2003). The last kind concerns the processing of language in real time, allowing concepts to be retrieved and operated on more economically and effectively during the production phase (Anderson, 1983, 1995; Anderson & Lebiere, 1998; Anderson et al., 2004).


          The psycholinguistic model elaborated in this study is developed from the hypothesis of Synergic Concepts (Kecskes, …..). We then try to show how the elements of that model relate to bilingual students' outcomes with regard to lexical complexity. Finally, we hope to carry out empirical investigations related to the model that might help us probe these constructs. Such an empirical study needs to demonstrate both how conceptualization might be represented in the mind and how that knowledge can activate the language channels (especially the lexical ones) toward the desired performance outcomes.


Levelt (1989, 1999) elaborated on this…..


3.    Lexicons in Narrative Writing Task Analysis

The native participants displayed a wider range of infrequent lexis, so they may have had richer and more accessible lexical resources. They also demonstrated a higher rate of lexical sophistication in the Narrative task than in the other writing tasks in the study: the cartoon narrative task generated higher Lambdas than the other writing tasks, including the Decision-making and Personal tasks. Skehan (2009) explained this by stating that the narrative writing task stimulated particular lexical resources which the writers could not avoid producing in order to articulate the story input, such as the events of the story.