Consensus Conference on the Translation of Children's Picture Books into Symbols
Asano M, Imai M, Kita S, Kitajo K, Okada H, Thierry G
Sound symbolism scaffolds language development in preverbal infants
A fundamental question in language development is how infants start to assign meaning
to words. Here, using three Electroencephalogram (EEG)-based measures of brain activity,
we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences
between language sounds and concepts, that is, to sound symbolism. In each
trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed
by a novel spoken word that either sound-symbolically matched ("moma") or mismatched
("kipi") the shape. Amplitude increase in the gamma band showed perceptual
integration of visual and auditory stimuli in the match condition within 300 msec of word
onset. Furthermore, phase synchronization between electrodes at around 400 msec
revealed intensified large-scale, left-hemispheric communication between brain regions in
the mismatch condition as compared to the match condition, indicating heightened processing
effort when integration was more demanding. Finally, event-related brain potentials
showed an increased adult-like N400 response, an index of semantic integration
difficulty, in the mismatch as compared to the match condition. Together, these findings
suggest that 11-month-old infants spontaneously map auditory language onto visual
experience by recruiting a cross-modal perceptual processing system and a nascent semantic
network within the first year of life.
ref: Cortex, Volume 63, February 2015, Pages 196-205, doi:10.1016/j.cortex.2014.08.025
Bishop K, Rankin J, Mirenda P
Impact of graphic symbol use on reading acquisition
The purpose of this paper is to speculate about the relationship between the use of graphic symbols by many individuals with severe communication disorders and the acquisition of beginning reading skills. In particular, the skills and processes necessary for individuals to be able to look at written words and determine their spoken counterparts are considered here. This discussion is based on the literature related to reading acquisition in normally developing young children, with logical inferences made to the population of individuals who are augmentative and alternative communication (AAC) users. Arguments are presented that suggest that the use of graphic symbols may facilitate specific components of print and word awareness, but that the overall impact of these symbol sets/systems on beginning reading may be minimal. Conclusions made are speculative in nature; future research is warranted.
ref: Augment Altern Commun. 1994, Vol. 10, No. 2, Pages 113-125 (doi:10.1080/07434619412331276820)
Bornman J, Alant E, Du Preez A
Translucency and Learnability of Blissymbols in Setswana-speaking Children: An Exploration
Although the importance of iconicity in the learning of symbols has been widely acknowledged, there have been few systematic investigations into the influence of culture on the ratings of symbol iconicity. The purposes of this study were two-fold: to determine (a) the translucency ratings of specific Blissymbols as rated by 6- to 7-year-old Setswana-speaking children (Setswana is one of South Africa's 11 official languages); and (b) whether the ratings changed after second and third exposures, in order to determine the learnability of these symbols. This study is partially based on the study by Quist et al. (1998), which utilized Dutch and American participants. Thirty-four Setswana children were exposed to 93 selected Blissymbols. A 3-point semantic differential scale consisting of three faces accompanied each Blissymbol, without the written gloss. This procedure was repeated over a period of 3 days. The results indicated that the majority of Blissymbols were rated as having high translucency ratings. The research further demonstrated significant differences in translucency between first and second exposures, suggesting that learning of the symbols had occurred. The comparison between the results of the current study and the results reported in the Quist et al. study reveals that the translucency ratings of the majority of the selected Blissymbols ranged from moderate to high for all three samples, but that the distribution of symbols across the ratings appears to be different.
ref: Augment Altern Commun. December 2009, Vol. 25, No. 4, Pages 287-298
Bouma H, Legein CP
Foveal and Parafoveal Recognition of Letters and Words by Dyslexics and by Average Readers.
Bouma and Legein reported that a phenomenon called "crowding" limits letter recognition more strongly in dyslexic readers than in normal readers. The visual span (the number of letters recognized in a fixation) is smaller in dyslexics than in normal readers, and hence reading speed is slower because speed depends on visual span. Crowding (difficulty recognizing letters) occurs in the parafovea of the retina of the eye when visual objects are too close together in relation to their distance from the center of vision in the fovea (termed "eccentricity").
ref: Neuropsychologia 1977, Vol. 15, pp. 69-80
Burroughs JA, Albritton E, Eaton B, Montague J
A comparative study of language delayed preschool children's ability to recall symbols from two symbol systems
This study compared the acquisition of two graphic symbol systems, Rebus and Bliss, with language-delayed preschool children. Subjects were 26 black children between the ages of 4 and 6 with language delays ranging from 7 months to 2 years, 5 months. Training in Rebus and Bliss was administered to each subject using a crossover design. The Rebus pre- and post-test scores suggested that the more iconic Rebus symbols were easier to identify initially than were the more ideographic Blissymbols. However, with training, a greater amount of improvement occurred from Bliss pre- to post-test. Rebus and Bliss scores were not affected by the order of test administration.
ref: Augment Altern Commun. 1990, Vol. 6, No. 3, Pages 202-206 (doi:10.1080/07434619012331275464)
Carmeli S, Shen Y
Semantic transparency and translucency in compound blissymbols
Blissymbolics is a graphic symbol system used for communication by individuals whose speech is nonfunctional. The transparency and translucency of Blissymbolics have been viewed in the context of the visual relationship between symbols and their referents. This article suggests a new, semantic-conceptual perspective on the study of Blissymbolic transparency and translucency. At present, only compound symbols are discussed. Semantic transparency/translucency is conceived in this article as representing the relationship between the composite meaning of symbol components and the symbol referent. This relationship is measured by guessability, and by subject rating of the degree of agreement between the composite meaning of symbol components and the symbol referent. We hypothesized that semantic transparency/translucency is affected by referent prototypicality or uniqueness, and by the interpretation of thematic relationships of symbol components. In the present study, we investigated the effect of referent prototypicality. An experiment administered to nondisabled adult subjects demonstrated the contribution of referent prototypicality to semantic transparency/translucency. Implications for Blissymbol codability are discussed.
ref: Augment Altern Commun. 1998, Vol. 14, No. 3, Pages 171-183 (doi:10.1080/07434619812331278346)
Carreiras M, Armstrong BC, Perea M, Frost R.
The what, when, where, and how of visual word recognition.
A long-standing debate in reading research is whether printed words are perceived in a feedforward manner on the basis of orthographic information, with other representations such as semantics and phonology activated subsequently, or
whether the system is fully interactive and feedback from these representations
shapes early visual word recognition. We review recent evidence from behavioral,
functional magnetic resonance imaging, electroencephalography,
magnetoencephalography, and biologically plausible connectionist modeling
approaches, focusing on how each approach provides insight into the temporal flow of information in the lexical system. We conclude that, consistent with
interactive accounts, higher-order linguistic representations modulate early
orthographic processing. We also discuss how biologically plausible interactive
frameworks and coordinated empirical and computational work can advance theories
of visual word recognition and other domains (e.g., object recognition).
ref: Trends Cogn Sci. 2014 Feb;18(2):90-8. doi: 10.1016/j.tics.2013.11.005. Epub 2013 Dec 25.
Orthographic Coding: Brain Activation for Letters, Symbols, and Digits.
The present experiment investigates the input coding mechanisms of 3 common
printed characters: letters, numbers, and symbols. Despite research in this area,
it is yet unclear whether the identity of these 3 elements is processed through
the same or different brain pathways. In addition, some computational models
propose that the position-in-string coding of these elements responds to general
flexible mechanisms of the visual system that are not character-specific, whereas
others suggest that the position coding of letters responds to specific processes
that are different from those that guide the position-in-string assignment of
other types of visual objects. Here, in an fMRI study, we manipulated character
position and character identity through the transposition or substitution of 2
internal elements within strings of 4 elements. Participants were presented with
2 consecutive visual strings and asked to decide whether they were the same or
different. The results showed: 1) that some brain areas responded more to letters
than to numbers and vice versa, suggesting that processing may follow different
brain pathways; 2) that the left parietal cortex is involved in letter identity,
and critically in letter position coding, specifically contributing to the early
stages of the reading process; and that 3) a stimulus-specific mechanism for
letter position coding is operating during orthographic processing.
ref: Cereb Cortex. 2014 Jul 30. pii: bhu163. [Epub ahead of print]
Chang YN, Furber S, Welbourne S.
Modelling normal and impaired letter recognition: implications for understanding pure alexic reading.
Letter recognition is the foundation of the human reading system. Despite this,
it tends to receive little attention in computational modelling of single word
reading. Here we present a model that can be trained to recognise letters in
various spatial transformations. When presented with degraded stimuli the model
makes letter confusion errors that correlate with human confusability data.
Analyses of the internal representations of the model suggest that a small set of learned visual feature detectors support the recognition of both upper case and lower case letters in various fonts and transformations. We postulated that a damaged version of the model might be expected to act in a similar manner to
patients suffering from pure alexia. Summed error score generated from the model
was found to be a very good predictor of the reading times of pure alexic
patients, outperforming simple word length, and accounting for 47% of the
variance. These findings are consistent with a hypothesis suggesting that
impaired visual processing is a key to understanding the strong word-length
effects found in pure alexic patients.
ref: Neuropsychologia. 2012 Oct;50(12):2773-88. doi: 10.1016/j.neuropsychologia.2012.07.031. Epub 2012 Jul 27.
Cleave PL, Kay-Raining Bird E, Trudeau N, Sutton A.
Syntactic bootstrapping in children with Down syndrome: the impact of bilingualism.
The purpose of the study was to add to our knowledge of bilingual learning in children with Down syndrome (DS) using a syntactic bootstrapping task.
Four groups of children and youth matched on non-verbal mental age participated. There were 14 bilingual participants with DS (DS-B, mean age 12;5), 12 monolingual participants with DS (DS-M, mean age 10;10), 9 bilingual typically developing children (TD-B; mean age 4;1) and 11 monolingual typically developing children (TD-M; mean age 4;1). The participants completed a computerized syntactic bootstrapping task involving unfamiliar nouns and verbs. The syntactic cues employed were "a" for the nouns and "-ing" for the verbs.
Performance was better on nouns than verbs. There was also a main effect for group. Follow-up t-tests revealed that there were no significant differences between the TD-M and TD-B or between the DS-M and DS-B groups. However, the DS-M group performed more poorly than the TD-M group with a large effect size. Analyses at the individual level revealed a similar pattern of results.
There was evidence that Down syndrome impacted performance; there was no evidence that bilingualism negatively affected the syntactic bootstrapping skills of individuals with DS. These results from a dynamic language task are consistent with those of previous studies that used static or product measures. Thus, the results are consistent with the position that parents should be supported in their decision to provide bilingual input to their children with DS.
Readers of this article will identify (1) research evidence regarding bilingual development in children with Down syndrome and (2) syntactic bootstrapping skills in monolingual and bilingual children who are typically developing or who have Down syndrome.
ref: J Commun Disord. 2014 May-Jun;49:42-54. doi: 10.1016/j.jcomdis.2014.02.006. Epub 2014 Feb 22.
Dada S, Huguet A, Bornman J.
The iconicity of picture communication symbols for children with English additional language and mild intellectual disability.
The purpose of this study was to examine the iconicity of 16 Picture
Communication Symbols (PCS) presented on a themed bed-making communication
overlay for South African children with English as an additional language and
mild intellectual disability. The survey involved 30 participants. The results
indicated that, overall, the 16 symbols were relatively iconic to the
participants. The authors suggest that the iconicity of picture symbols could be
manipulated, enhanced, and influenced by contextual effects (other PCS used
simultaneously on the communication overlay). In addition, the selection of
non-target PCS for target PCS is discussed in terms of postulated differences
in distinctiveness. Potential clinical implications and limitations of
the study, as well as recommendations for future research, are discussed.
ref: Augment Altern Commun. 2013 Dec;29(4):360-73. doi: 10.3109/07434618.2013.849753.
DePaul R, Yoder RE
Iconicity in Manual Sign Systems for the Augmentative Communication User: Is That All There Is?
The last 15 years have been characterized by substantial
contributions of new technology to the communication
problems of nonspeaking individuals. However,
according to survey data (Fristoe & Lloyd, 1978;
Goodman, Wilson, & Bornstein, 1978), manual signs
are the most commonly used augmentative systems
for nonspeaking individuals. These studies also indicated
that teachers/clinicians reportedly chose the form
of the sign system (e.g., American Sign Language
[ASL], signed English, etc.) on the basis of familiarity,
rather than with regard to what might be optimal for a
given client. The basis for these clinical choices may
have changed due to the intensive continuing education
programs over the last 10 years. Yet it seems that if
clinicians choose a manual sign system primarily on the
basis of incidental familiarity, the choice of initial lexicons
might be based on such criteria as well. These
kinds of clinical decisions naturally concerned researchers
who were invested in providing the most viable
augmentative system for language-delayed nonspeaking
populations: particularly the mentally retarded. One
factor that was singled out by these researchers was
the role of iconicity in learning these nonspeech symbol systems.
ref: Augment Altern Commun. 1986, Pages 1-10
Iconicity as structure mapping.
Linguistic and psycholinguistic evidence is presented to support the use of structure-mapping theory as a framework for understanding effects of iconicity on sign language grammar and processing. The existence of structured mappings between phonological form and semantic mental representations has been shown to explain the nature of metaphor and pronominal anaphora in sign languages. With respect to processing, it is argued that psycholinguistic effects of iconicity may only be observed when the task specifically taps into such structured mappings. In addition, language acquisition effects may only be observed when the relevant cognitive abilities are in place (e.g. the ability to make structural comparisons) and when the relevant conceptual knowledge has been acquired (i.e. information key to processing the iconic mapping). Finally, it is suggested that iconicity is better understood as a structured mapping between two mental representations than as a link between linguistic form and human experience.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130301. doi: 10.1098/rstb.2013.0301.
Fuller D, DePaul R, Yoder D
Iconicity may not be everything, but it seems to be something: A comment on DePaul and Yoder (1986)
ref: Augment Altern Commun. 1988, Vol. 4, No. 2, Pages 125-125 (doi:10.1080/07434618812331274717)
Fuller DR, Lloyd LL
Toward a Common Usage of Iconicity Terminology
One problem facing the field of augmentative and alternative communication (AAC) is inconsistent
terminology. This may be due in part to the international and transdisciplinary nature of the
field. Perhaps this inconsistency is most apparent when one considers the many terms that are
used to describe iconicity. This paper discusses the various terms that have been used to describe
this variable and proposes the adoption of a consistent terminology for iconicity. Further discussion
is provided on other inconsistencies in terminology that have arisen in recent years.
ref: Augment Altern Commun. September 1991, Vol. 7, No. 3, Pages 215-220
Fuller DR, Stratton MM
Representativeness versus Translucency: Different Theoretical Backgrounds, but Are They Really Different Concepts? A Position Paper
As the field of augmentative and alternative communication (AAC) continues to broaden its base
on both international and transdisciplinary levels, it becomes increasingly important that researchers,
educators, and other professionals subscribe to the same terminology. Two terms have been
used in recent years to describe what appears to be the same concept. Yovetich has defined and
quantified a variable called "representativeness," which is associated with Dual Coding Theory.
This variable is similar to translucency, a variable that describes an aspect of iconicity. Although
proponents of Dual Coding Theory state that the two variables are distinctly different concepts,
evidence suggests that the two actually describe the same phenomenon. This belief is based upon
the following findings: (1) representativeness and translucency are defined and quantified in the
same manner, (2) both variables have the same effect on the learning of Blissymbols, (3) the two
have been found to influence symbol learnability more than any other variable studied to date, and
(4) high correlations have been found between the two variables. The similarity between representativeness
and translucency is discussed relative to the desire to reduce redundant terminology in
the AAC literature.
ref: Augment Altern Commun. March 1991, Vol. 7, No. 1, Pages 51-58
Widening the lens: what the manual modality reveals about language, learning and cognition.
The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture's ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130295. doi: 10.1098/rstb.2013.0295.
Harris MD, Reichle J
The Impact of Aided Language Stimulation on Symbol Comprehension and Production in Children With Moderate Cognitive Disabilities
Over the past decade, aided language
stimulation has emerged as a strategy to
promote both symbol comprehension and
symbol production among individuals who use
graphic mode communication systems. During
aided language stimulation, an interventionist
points to a graphic symbol while simultaneously
producing the corresponding spoken word
during natural communicative exchanges. The
purpose of this study was to determine the
impact of aided language stimulation on children
with moderate cognitive disabilities. Three
preschool children with moderate cognitive
disabilities who were functionally nonspeaking
participated in the investigation. The investigator
implemented a multiple-probe design across
symbol sets/activities. Elicited probes were used
to determine whether the children increased their
comprehension and production of graphic
symbols. Results indicated that all 3 children
displayed increased symbol comprehension and
production following the implementation of aided
access the full articleref:
Hartley C, Allen ML.
Iconicity influences how effectively minimally verbal children with autism and ability-matched typically developing children use pictures as symbols in a search task.
Previous word learning studies suggest that children with autism spectrum
disorder may have difficulty understanding pictorial symbols. Here we investigate
the ability of children with autism spectrum disorder and language-matched
typically developing children to contextualize symbolic information communicated
by pictures in a search task that did not involve word learning. Out of the
participant's view, a small toy was concealed underneath one of four unique
occluders that were individuated by familiar nameable objects or unfamiliar
unnamable objects. Children were shown a picture of the hiding location and then
searched for the toy. Over three sessions, children completed trials with color
photographs, black-and-white line drawings, and abstract color pictures. The
results reveal zero group differences; neither children with autism spectrum
disorder nor typically developing children were influenced by occluder
familiarity, and both groups' errorless retrieval rates were above-chance with
all three picture types. However, both groups made significantly more errorless
retrievals in the most-iconic photograph trials, and performance was universally
predicted by receptive language. Therefore, our findings indicate that children
with autism spectrum disorder and young typically developing children can
contextualize pictures and use them to adaptively guide their behavior in real
time and space. However, this ability is significantly influenced by receptive
language development and pictorial iconicity.
ref: Autism. 2014 Jun 10. pii: 1362361314536634. [Epub ahead of print]
Hjelmquist E, Dahlgren Sandberg A, Hedelin L
Linguistics, AAC, and metalinguistics in communicatively handicapped adolescents
This study had two aims: first, to give an overview of current research on metalinguistic skills and their development and to point to the relevance of this research to the augmentative and alternative communication (AAC) field; and second, to demonstrate how methods based on the reasoning in the metalinguistic field can be applied to the communicative situation of a group of Bliss-using persons. Since a characteristic of many AAC users is that they lack productive spoken communication, studies of their metalinguistic skills put special demands on the tests and methodologies used. At the same time, it is evident that the metalinguistic skills of persons with severe communication disorders are highly relevant from both a theoretical and an applied point of view. The perspective outlined in the overview was used in a study of eight Bliss-using subjects who were tested with respect to comprehension of oral language, metalinguistic functions, and reading/writing variables. The results point to the importance of developing the subjects' expressive linguistic abilities, since their metalinguistic abilities indicate such potentials. The study also served as a test of a new methodology for studying language functions among people with severe communication disorders, which in this respect seems promising.
ref: Augment Altern Commun. 1994, Vol. 10, No. 3, Pages 169-183 (doi:10.1080/07434619412331276880)
Huurneman B, Boonstra FN, Cox RF, Cillessen AH, van Rens G.
A systematic review on 'Foveal Crowding' in visually impaired children and perceptual learning as a method to reduce Crowding.
BACKGROUND: This systematic review gives an overview of foveal crowding (the
inability to recognize objects due to surrounding nearby contours in foveal
vision) and possible interventions. Foveal crowding can have a major effect on
reading rate and deciphering small pieces of information from busy visual scenes.
Three specific groups experience more foveal crowding than adults with normal
vision (NV): 1) children with NV, 2) visually impaired (VI) children and adults,
and 3) children with cerebral visual impairment (CVI). The extent and magnitude
of foveal crowding as well as interventions aimed at reducing crowding were
investigated in this review. The twofold goal of this review is: [A] to compare
foveal crowding in children with NV, VI children and adults, and CVI children;
and [B] to compare interventions to reduce crowding.
METHODS: Three electronic databases were used to conduct the literature search:
PubMed, PsycINFO (Ovid), and Cochrane. Additional studies were identified by
contacting experts. Search terms included visual perception, contour interaction,
crowding, crowded, and contour interactions.
RESULTS: Children with normal vision show an extent of contour interaction over
an area 1.5-3× as large as that seen in adults with NV. The magnitude of contour
interaction normally ranges between 1-2 lines on an acuity chart, and this
magnitude is even larger when stimuli are arranged in a circular configuration.
Adults with congenital nystagmus (CN) show interaction areas that are 2× larger than those seen in adults with NV. The magnitude of the crowding effect is also 2× as large in individuals with CN as in individuals with NV. Finally, children with CVI experience a magnitude of the crowding effect that is 3× the size of that experienced by adults with NV.
CONCLUSIONS: The methodological heterogeneity and the diversity in paradigms used
to measure crowding made it impossible to conduct a meta-analysis. This is the
first systematic review to compare crowding ratios and it shows that charts with
50% interoptotype spacing were most sensitive to capture crowding effects. The
groups that showed the largest crowding effects were individuals with CN, VI
adults with central scotomas and children with CVI. Perceptual Learning seems to
be a promising technique to reduce excessive foveal crowding effects.
ref: BMC Ophthalmol. 2012 Jul 23;12:27. doi: 10.1186/1471-2415-12-27.
Imai M, Kita S.
The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130298. doi: 10.1098/rstb.2013.0298.
Katzir T, Hershko S, Halamish V
The Effect of Font Size on Reading Comprehension on Second and Fifth Grade Children: Bigger Is Not Always Better
Research on reading development has focused on the linguistic, cognitive, and recently, metacognitive skills children
must master in order to learn to read. Less focus has been devoted to how the text itself, namely the
perceptual features of the words, affects children's learning and comprehension. In this study, we manipulated
perceptual properties of text by presenting reading passages in different font sizes, line lengths, and line spacing to
100 children in the second and fifth grades. For second graders (Experiment 1), decreasing font size, as well as
increasing line length, yielded significantly lower comprehension scores. Line spacing had no effect on performance.
For fifth graders (Experiment 2), decreasing font size yielded higher comprehension scores, yet there were no effects
for line length and line spacing. Results are discussed within a "desirable difficulty" approach to reading.
ref: PLOS ONE, 1 September 2013, Volume 8, Issue 9, e74061
Semiotic diversity in utterance production and the concept of 'language'.
Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130293. doi: 10.1098/rstb.2013.0293.
Kuchinke L, Krause B, Fritsch N, Briesemeister BB.
A familiar font drives early emotional effects in word recognition.
The emotional connotation of a word is known to shift the process of word
recognition. Using the electroencephalographic event-related potentials (ERPs)
approach it has been documented that early attentional processing of
high-arousing negative words is shifted at a stage of processing where a
presented word cannot have been fully identified. Contextual learning has been
discussed to contribute to these effects. The present study shows that a
manipulation of the familiarity with a word's shape interferes with these
earliest emotional ERP effects. Presenting high-arousing negative and neutral
words in a familiar or an unfamiliar font results in very early emotion
differences only in case of familiar shapes, whereas later processing stages
reveal similar emotional effects in both font conditions. Because these early
emotion-related differences predict later behavioral differences, it is suggested
that contextual learning of emotional valence comprises more visual features than
previously expected to guide early visual-sensory processing.
ref: Brain Lang. 2014 Oct;137:142-7. doi: 10.1016/j.bandl.2014.08.007. Epub 2014 Sep 16.
Langer S, Hickey M
Augmentative and alternative communication and natural language processing: current research activities and prospects
Historically, there has been little research into the use of natural language processing (NLP) within the context of electronic augmentative and alternative communication (AAC) systems. This is despite the fact that key aspects of AAC research are concerned with the treatment of natural language, and that communication aids appear to represent an ideal means of applying advanced NLP techniques. The lack of NLP research in relation to AAC is partially due to the tendency to focus NLP activities on solving particular problems from constructed examples, rather than the treatment of unrestricted language. Today, however, the face of NLP research has changed significantly, thanks to the increasing availability of and need to process larger corpora. This has prompted a quest for robust solutions to treat unrestricted text, which, in turn, has had two key results: (a) an influx of statistical techniques and (b) the emergence of comprehensive, language-related resources such as broad coverage electronic dictionaries. This paper describes current AAC research that uses NLP and comments on future research directions. Included is a brief survey of AAC systems and research prototypes involving NLP techniques, which is followed by an overview of resources emerging from NLP research that may be applicable to AAC.
ref: Augment Altern Commun. 1999, Vol. 15, No. 4, Pages 260-268 (doi:10.1080/07434619912331278795)
Legge GE, Bigelow CA.
Does print size matter for reading? A review of findings from vision science and typography.
The size and shape of printed symbols determine the legibility of text. In this
paper, we focus on print size because of its crucial role in understanding
reading performance and its significance in the history and contemporary practice
of typography. We present evidence supporting the hypothesis that the
distribution of print sizes in historical and contemporary publications falls
within the psychophysically defined range of fluent print size--the range over
which text can be read at maximum speed. The fluent range extends over a factor
of 10 in angular print size (x-height) from approximately 0.2° to 2°. Assuming a
standard reading distance of 40 cm (16 inches), the corresponding physical
x-heights are 1.4 mm (4 points) and 14 mm (40 points). We provide new data on the
distributions of print sizes in published books and newspapers and in
typefounders' specimens, and consider factors influencing these distributions. We
discuss theoretical concepts from vision science concerning visual size coding
that help inform our understanding of historical and modern typographical
practices. While economic, social, technological, and artistic factors influence
type design and selection, we conclude that properties of human visual processing
play a dominant role in constraining the distribution of print sizes in common
ref: J Vis. 2011 Aug 9;11(5):8. doi: 10.1167/11.5.8.
Levinson SC, Holler J.
The origin of human multi-modal communication.
One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins--especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the 'gesture-first hypothesis' with that of gesture and speech having evolved together, hand in hand--or hand in mouth, rather--as one system.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130302. doi: 10.1098/rstb.2013.0302.
Two sources of meaning in infant communication: preceding action contexts and act-accompanying characteristics.
How do infants communicate before they have acquired a language? This paper supports the hypothesis that infants possess social-cognitive skills that run deeper than language alone, enabling them to understand others and make themselves understood. I suggest that infants, like adults, use two sources of extralinguistic information to communicate meaningfully and react to and express communicative intentions appropriately. In support, a review of relevant experiments demonstrates, first, that infants use information from preceding shared activities to tailor their comprehension and production of communication. Second, a series of novel findings from our laboratory shows that in the absence of distinguishing information from preceding routines or activities, infants use accompanying characteristics (such as prosody and posture) that mark communicative intentions to extract and transmit meaning. Findings reveal that before infants begin to speak they communicate in meaningful ways by binding preceding and simultaneous multisensory information to a communicative act. These skills are not only a precursor to language, but also an outcome of social-cognitive development and social experience in the first year of life.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130294. doi: 10.1098/rstb.2013.0294.
Luftig R, Bersani H
An investigation of two variables influencing Blissymbol learnability with nonhandicapped adults
Two variables, perceived translucency and component complexity, were hypothesized to influence the learnability of Blissymbols. Translucency was posited to facilitate symbol learning while component complexity was posited to retard learning. Results indicated that both of these hypotheses were confirmed. Additionally, results indicated that, for Bliss naive learners, translucency may be a more potent learnability variable than component complexity. Finally, translucency was found to most greatly affect Blissymbol learning in early rather than late learning trials. Results are discussed in terms of teaching Blissymbols to handicapped learners.
ref: Augment Altern Commun. 1985, Vol. 1, No. 1, Pages 32-37 (doi:10.1080/07434618512331273501)
Helping dyslexic children attend to letters within visual word forms
Learning to read visual words aloud requires a novel integration of two distinct neurocognitive systems: a visual system that allows one to recognize a visual word from a crowd of letter features and a phonological language system that allows one to recognize and produce spoken words from a crowd of phonetic features (1). Integrating these two systems through the alphabetic principle bestows skilled readers with the ability to appreciate how each letter feature within a crowded visual word form specifically influences each corresponding nuance in its spoken form (e.g., trails vs. traits). Children with developmental dyslexia, a condition that affects as many as 10% of school children (2), face profound challenges in fluently integrating their visual and phonological systems in the service of reading (3). As a result, reading is slow and error prone, which can have severe cascading influences on a child's life. Thus, a central focus in cognitive investigations of dyslexia has been to gain insight into how individual differences in the development of phonological and/or visual processing systems influence the reading acquisition process. Leveraging such insights to improve reading acquisition has remained a central exemplar for the potential of basic cognitive and developmental sciences to bestow translational benefits to education and society.
ref: PNAS, vol. 109, no. 28, 11064-11065, doi: 10.1073/pnas.1209921109
McNaughton S, Lindsay P
Approaching literacy with AAC graphics
This paper examines the possible impact on beginning reading of graphic representational system (GRS) instruction and usage during the emergent literacy years of young children with severe speech and physical impairments (SSPI). The unique development of children who use GRS symbols for communication is discussed within the context of the research literature on the reading process, visual processing, graphic representational processing, and language development. The position taken argues that one should remain open to the possibility of a differential impact upon beginning reading due to use of different types of GRSs. The objectives of this paper are to refine the questions relating to the relationship between the use of a GRS during the preschool years and the child's ultimate reading acquisition.
ref: Augment Altern Commun. 1995, Vol. 11, No. 4, Pages 212-228 (doi:10.1080/07434619512331277349)
Mo C, Yu M, Seger C, Mo L.
Holistic neural coding of Chinese character forms in bilateral ventral visual system.
How are Chinese characters recognized and represented in the brain of skilled
readers? Functional MRI fast adaptation technique was used to address this
question. We found that neural adaptation effects were limited to identical
characters in bilateral ventral visual system while no activation reduction was
observed for partially overlapping characters regardless of the spatial location
of the shared sub-character components, suggesting highly selective neuronal
tuning to whole characters. The consistent neural profile across the entire
ventral visual cortex indicates that Chinese characters are represented as
mutually distinctive wholes rather than combinations of sub-character components,
which presents a salient contrast to the left-lateralized, simple-to-complex
neural representations of alphabetic words. Our findings thus revealed the
cultural modulation effect on both local neuronal activity patterns and
functional anatomical regions associated with written symbol recognition.
Moreover, the cross-language discrepancy in written symbol recognition mechanism
might stem from the language-specific early-stage learning experience.
ref: Brain Lang. 2015 Feb;141:28-34. doi: 10.1016/j.bandl.2014.11.008. Epub 2014 Dec 18.
Monaghan P, Shillcock RC, Christiansen MH, Kirby S.
How arbitrary is language?
It is a long established convention that the relationship between sounds and meanings of words is essentially arbitrary--typically the sound of a word gives no hint of its meaning. However, there are numerous reported instances of systematic sound-meaning mappings in language, and this systematicity has been claimed to be important for early language development. In a large-scale corpus analysis of English, we show that sound-meaning mappings are more systematic than would be expected by chance. Furthermore, this systematicity is more pronounced for words involved in the early stages of language acquisition and reduces in later vocabulary development. We propose that the vocabulary is structured to enable systematicity in early language learning to promote language acquisition, while also incorporating arbitrariness for later language in order to facilitate communicative expressivity and efficiency.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130299. doi: 10.1098/rstb.2013.0299.
Myers LJ, Liben LS.
The role of intentionality and iconicity in children's developing comprehension and production of cartographic symbols.
The contribution of intentionality understanding to symbolic development was
examined. Actors added colored dots to a map, displaying either symbolic or
aesthetic intentions. In Study 1, most children (5-6 years) understood actors'
intentions, but when asked which graphic would help find hidden objects, most
selected the incorrect (aesthetic) one whose dot color matched referent color. On
a similar task in Study 2, 5- and 6-year-olds systematically picked incorrectly,
9- and 10-year-olds picked correctly, and 7- and 8-year-olds showed mixed
performance. When referent color matched neither symbolic nor aesthetic dot
colors, children performed better overall, but only the oldest children
universally selected the correct graphic and justified choices with
intentionality. Results bear on theory of mind, symbolic understanding, and map
ref: Child Dev. 2008 May-Jun;79(3):668-84. doi: 10.1111/j.1467-8624.2008.01150.x.
Nakamura K, Newell A, Alm N, Waller A
How do members of different language communities compose sentences with a picture-based communication system? - a cross-cultural study of picture-based sentences constructed by English and Japanese speakers
A number of picture-based communication systems are in use by nonspeaking people. They are not widely used in Japan. This may be because the systems, although pictorial in nature, tend to be based on English sentence formation. This study was conducted to provide a basis for a discussion about the use by people in non-English-speaking cultures of graphic-based communication aids developed in English-speaking countries. Subjects (80 Japanese and 43 English speakers) were asked to compose picture-based sentences using a computer-based system. The order of graphic symbols on the screen and the effects of syntax markers were investigated as independent variables. The results show that syntax markers and the symbol order had an important effect on the sentences produced by Japanese-speaking subjects. In addition, both the Japanese and the English speakers omitted words when using the picture-based communication system as compared to using speech.
ref: Augment Altern Commun. 1998, Vol. 14, No. 2, Pages 71-80 (doi:10.1080/07434619812331278226)
O'Brien BA, Mansfield JS, Legge GE
The Effect of Print Size on reading speed in dyslexia
This study on the effect of print size on dyslexic reading found that dyslexic children need larger print sizes than normal readers to achieve their maximum reading speed. As background, several studies summarized by Legge (2007) found that for normal and low-vision readers, there is a "critical print size" at which readers achieve their maximum speed. Increases above the critical print size do not significantly increase reading speed, but decreases below it substantially reduce reading speed.
ref: Journal of Research in Reading, Vol. 28, No. 3, pp. 332-349
Perniss P, Vigliocco G.
The bridge of iconicity: from a world of experience to the experience of language.
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130300. doi: 10.1098/rstb.2013.0300.
Turning visual shapes into sounds: early stages of reading acquisition revealed in the ventral occipitotemporal cortex.
The exact role of the left ventral occipitotemporal cortex (VOTC) during the
initial stages of reading acquisition is a hotly debated issue, especially
regarding the comparative effect of learning on early stimulus-dependent vs.
later task-dependent processes. We show that this controversy can be solved with
high-temporal resolution intracerebral EEG recordings of the VOTC. We measured
High-Frequency Activity (50-150 Hz) as a proxy of population-level spiking
activity while participants learned Japanese Katakana symbols, and found that
learning primarily affects top-down/task-dependent neural processing, after a few
minutes only. In contrast, early bottom-up/stimulus-dependent processing takes
several days to adapt and provides the basis for fluent reading. Such evidence
that two consecutive stages of neural processing, stimulus- and task-dependent,
are differentially affected by learning can reconcile seemingly opposite
hypotheses on the role of the VOTC during reading acquisition.
ref: Neuroimage. 2014 Apr 15;90:298-307. doi: 10.1016/j.neuroimage.2013.12.027. Epub 2013 Dec 24.
Nomenclature of category levels in graphic symbols, Part II: role of similarity in categorization
In a companion paper, the nomenclatures at distinct taxonomic category levels were examined in Blissymbolics and Picture Communication Symbols (PCS). A systematic relationship (a.k.a. convergence) has been shown to facilitate concept formation in spoken language. This paper addresses "similarity" as another feature of the nomenclature at distinct taxonomic category levels that has been found to influence concept formation. The role of similarity has been central to many attempts to explain categorization in the continued debate of psychological models of categorization. Thus, the purpose of this paper is to discuss the role of visuo-graphic similarity within and across the nomenclature of category levels in facilitating concept formation of graphic symbol users. The visuo-graphic similarity of the nomenclature within taxonomic category levels and the visuo-graphic links across category levels are described for Blissymbolics and PCS. Based on this description, the author presents several directions for future research relative to the role of visuo-graphic similarity in concept formation involving Blissymbolics and PCS. It is also argued that this area of inquiry will not only advance research on the role of visuo-graphic similarity in concept formation by graphic symbol users but also further our understanding of the categorization process in general.
ref: Augment Altern Commun. 1997, Vol. 13, No. 1, Pages 14-29 (doi:10.1080/07434619712331277808)
Nomenclature of category levels in graphic symbols, Part I: is a flower a flower a flower?
This paper extends the hypothesis that convergence is inherent in language and independent of modality (auditory-vocal, visuo-motor) to the visuo-graphic modality. Convergence is defined as systematic relationships of category levels with respective nomenclature. First, the literature is summarized regarding the superordinate, basic, and subordinate levels as three distinct taxonomic categories. Second, it is documented that these category levels are systematically related ("convergent") to their respective nomenclature in both spoken language and signed language. The lexicons of Blissymbolics and Picture Communication Symbols (PCS) were then compared in terms of their nomenclature at each of these taxonomic levels. The findings indicate that Blissymbolics exhibits expected nomenclature at expected category levels corroborating the hypothesis. However, it was also found that the linguistic device of compounding is prevalent at unexpected lower levels (basic, subordinate). PCS were found to show expected nomenclature at the superordinate and basic levels but not at the subordinate level. Given that convergences in spoken language facilitate concept formation, these findings speak for the need to study the effects of the identified relationships of category levels and nomenclature on concept formation by users of Blissymbolics and PCS, respectively. Several directions for future research are presented.
ref: Augment Altern Commun. 1997, Vol. 13, No. 1, Pages 4-13 (doi:10.1080/07434619712331277798)
Schneps MH, Thomson JM, Sonnert G, Pomplun M, Chen C, Heffner-Wong A
Shorter Lines Facilitate Reading in Those Who Struggle
We found that texts displayed on handheld devices (iPod, iPad) were read up to 27% faster by dyslexics when the text lines were very short in terms of number of characters, on the order of 16-18 characters per line, compared to 60-65 characters per line as recommended in traditional print book typography. It should be emphasized that the very short lines were composed ragged-right, not justified. Justified short lines of text would likely nullify the beneficial effects and probably would retard reading speed because of the large, variable, and unpredictable word spaces that are consequences of justification of short lines.
ref: PLoS ONE 8(8).
Origin of symbol-using systems: speech, but not sign, without the semantic urge.
Natural language--spoken and signed--is a multichannel phenomenon, involving facial and body expression, and voice and visual intonation that is often used in the service of a social urge to communicate meaning. Given that iconicity seems easier and less abstract than making arbitrary connections between sound and meaning, iconicity and gesture have often been invoked in the origin of language alongside the urge to convey meaning. To get a fresh perspective, we critically distinguish the origin of a system capable of evolution from the subsequent evolution that system becomes capable of. Human language arose on a substrate of a system already capable of Darwinian evolution; the genetically supported uniquely human ability to learn a language reflects a key contact point between Darwinian evolution and language. Though implemented in brains generated by DNA symbols coding for protein meaning, the second higher-level symbol-using system of language now operates in a world mostly decoupled from Darwinian evolutionary constraints. Examination of Darwinian evolution of vocal learning in other animals suggests that the initial fixation of a key prerequisite to language into the human genome may actually have required initially side-stepping not only iconicity, but the urge to mean itself. If sign languages came later, they would not have faced this constraint.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130303. doi: 10.1098/rstb.2013.0303.
Sevcik R, Romski MA, Wilkinson K
Roles of graphic symbols in the language acquisition process for persons with severe cognitive disabilities
Symbols play dual roles for nonspeaking persons learning language via instruction. They are the medium by which internal representations of the world are expressed and they provide an inference about how individuals perceive their world. This paper reviews and synthesizes the current empirical literature on symbols and suggests future research directions.
ref: Augment Altern Commun. 1991, Vol. 7, No. 3, Pages 161-170 (doi:10.1080/07434619112331275873)
Shepherd T, Haaf R
Comparison of two training methods in the learning and generalization of blissymbolics
The purpose of this study was to examine two methods for teaching Blissymbols with 40 nonhandicapped 6- and 12-year-olds. In the first method, subjects were taught composite symbols by paired association, while in the second method, they were taught both the composite meaning of each symbol as well as the meaning of the elements of which it is composed. Mean scores for learning (trials to criterion) and for generalization to the identification of novel symbols were analyzed. Analysis of variance indicated significant differences for both age and teaching method on learning and generalization scores. Results demonstrated that regardless of age, subjects learned more quickly when the meanings of symbol elements were included in training. Furthermore, subjects who received training on symbol elements were better able to generalize their symbol knowledge to the identification of novel stimuli. When retested 8 weeks after the initial training, subjects taught the meanings of symbol elements still performed significantly better than subjects trained by paired association. Clinical implications are discussed with reference to the training of Blissymbols as a graphic communication system with individuals who have speech impairments.
ref: Augment Altern Commun. 1995, Vol. 11, No. 3, Pages 154-164 (doi:10.1080/07434619512331277279)
Echoes of the spoken past: how auditory cortex hears context during speech perception.
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130297. doi: 10.1098/rstb.2013.0297.
Soto G, Toro-Zambrana W
Investigation of Blissymbol use from a language research paradigm
This paper presents the findings of a study that analyzed the morphosyntactic complexity of the language output of three congenitally physically disabled Spanish individuals who use Blissymbolics as their primary means of expression. Three methods were used to collect Blissymbol output by each individual over a 1-week period. All Blissymbol output collected for each subject was analyzed in terms of its morphosyntactic complexity. Results indicate that these individuals were able to convey a wide variety of messages using different morphosyntactic structures. Implications of the findings with respect to language issues in AAC are discussed.
ref: Augment Altern Commun. 1995, Vol. 11, No. 2, Pages 118-130 (doi:10.1080/07434619512331277219)
Spinelli D et al.
Crowding Effects on Word Identification in Developmental Dyslexia
The effect of crowding on the identification of words was examined in normal readers
and subjects with developmental dyslexia. In Experiment 1, a matching task was used.
Words were presented either alone or embedded in other words. Vocal reaction times (RT)
of dyslexics were slower and more sensitive to the presence of the surrounding stimuli than
those of control subjects. Similar results were obtained in a control experiment using the
same task for strings of symbols (isolated or crowded) instead of words. These data indicate
that differences in crowding in control and dyslexic subjects arise at a pre-linguistic level.
In Experiment 2, vocal RTs to word reading were measured. Two conditions putatively
reducing the effect of crowding were tested: increasing inter-letter spacing and blurring. A
moderate increase of inter-letter spacing produced faster vocal RTs in dyslexics, while no
effect was present in normal controls. Moderate blurring of stimuli did not change dyslexics'
RTs, while normal readers became slower. Group and individual results are discussed to
evaluate the extent to which crowding contributes to the genesis of developmental dyslexia.
ref: Cortex, Volume 38, Issue 2
Iconicity in the development of picture skills: typical development and implications for individuals with severe intellectual disabilities.
The iconicity of graphic symbols and the iconicity hypothesis are theoretical concepts that have had an impact on the use of augmentative and alternative communication strategies for people with severe intellectual disabilities. This article reviews some of the recent literature on the impact of iconicity on symbol recognition and use by typically developing children and relates those findings to people with severe disability. It seems that although iconicity may have some impact on symbol learning, there are other variables that are likely to be much more important. It is likely that iconicity is not helpful to those learning graphic symbols who have little or no comprehension of spoken language.
ref: Augment Altern Commun. 2009;25(3):187-201. doi: 10.1080/07434610903031133.
Sutton A, Soto G, Blockberger S
Grammatical Issues in Graphic Symbol Communication
In this article, issues and concepts related to the study of production, comprehension, and
acquisition of syntax and morphology by children who need augmentative and alternative communication
(AAC) systems are reviewed. The use of graphic symbols when vocal speech is
severely limited presents significant challenges to the typical process of language acquisition.
A conceptual and theoretical context is presented, and concerns that seem unique to AAC are
explored. Productive lines of research are proposed to address language acquisition issues and
to improve AAC system designs and intervention programs for language development in children
who require AAC.
ref: Augment Altern Commun. September 2002, Vol. 18, No. 3, Pages 192-204
Sutton A, Trudeau N, Morford J, Rios M, Poirier MA.
Preschool-aged children have difficulty constructing and interpreting simple utterances composed of graphic symbols.
Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner - only one in transposition, and only one-third of participants in interpretation. Individual response strategies in some cases led to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.
ref: J Child Lang. 2010 Jan;37(1):1-26. doi: 10.1017/S0305000909009477. Epub 2009 Mar 27.
Trudeau N, Sutton A, Morford JP
An Investigation of Developmental Changes in Interpretation and Construction of Graphic AAC Symbol Sequences through Systematic Combination of Input and Output Modalities
While research on spoken language has a long tradition of studying and contrasting language production and comprehension, the study of graphic symbol communication has focused more on production than comprehension. As a result, the relationships between the ability to construct and to interpret graphic symbol sequences are not well understood. This study explored the use of graphic symbol sequences in children without disabilities aged 3;0 to 6;11 (years;months) (n = 111). Children took part in nine tasks that systematically varied input and output modalities (speech, action, and graphic symbols). Results show that in 3- and 4-year-olds, attributing meaning to a sequence of symbols was particularly difficult even when the children knew the meaning of each symbol in the sequence. Similarly, while even 3- and 4-year-olds could produce a graphic symbol sequence following a model, transposing a spoken sentence into a graphic sequence was more difficult for them. Representing an action with graphic symbols was difficult even for 5-year-olds. Finally, the ability to comprehend graphic-symbol sequences preceded the ability to produce them. These developmental patterns, as well as memory-related variables, should be taken into account in choosing intervention strategies with young children who use AAC.
ref: Augment Altern Commun. September 2014, Vol. 30, No. 3, Pages 187-199 (doi:10.3109/07434618.2014.940465)
Strategies in construction and interpretation of graphic-symbol sequences by individuals who use AAC systems.
Given the frequent use of graphic symbols in augmentative and alternative communication (AAC) systems, some individuals who use AAC may have greater familiarity with constructing graphic-symbol sequences than do speaking individuals without disabilities. Whether this increased familiarity has an impact on the interpretation of such sequences or on the relationship between construction and interpretation is fundamental to our understanding of the mechanisms underlying communication using graphic symbols. In this study, individuals who use graphic-symbol AAC systems were asked to construct and interpret graphic-symbol sequences representing the same target content (simple and complex propositions). The majority of participants used stable response patterns on both tasks; a minority were inconsistent on both tasks. Asymmetrical patterns (stable on one task but not the other) were rare, suggesting that neither channel (construction or interpretation) preceded the other, in contrast to earlier findings with participants without disabilities (i.e., novice users of graphic symbols). Furthermore, there were differences between stable and less stable responders on measures of syntactic comprehension and cognitive level but not on chronological age, receptive vocabulary, or AAC system characteristics and length of use.
ref: Augment Altern Commun. 2010 Dec;26(4):299-312. doi: 10.3109/07434618.2010.529619.
Van Balkom H, Verhoeven L.
Literacy learning in users of AAC: A neurocognitive perspective.
The understanding of written or printed text or discourse - depicted either in
orthographical, graphic-visual or tactile symbols - calls upon both bottom-up
word recognition processes and top-down comprehension processes. Different
architectures have been proposed to account for literacy processes. Research has
shown that the first steps in perceiving, processing and deriving conceptual
meaning from words, graphic symbols, manual signs, and co-speech gestures or
tactile manual signing and tangible symbols can be seen as identical and
collectively (sub)activated. Results from recent brain research and
neurolinguistics have revealed new insights in the reading process of typical and
atypical readers and may provide verifiable evidence for improved literacy
assessment and the validation of early intervention programs for AAC users.
ref: Augment Altern Commun. 2010 Sep;26(3):149-57. doi: 10.3109/07434618.2010.505610.
Vigliocco G, Perniss P, Vinson D.
Language as a multimodal phenomenon: implications for language learning, processing and evolution.
Our understanding of the cognitive and neural underpinnings of language has
traditionally been firmly based on spoken Indo-European languages and on language
studied as speech or text. However, in face-to-face communication, language is
multimodal: speech signals are invariably accompanied by visual information on
the face and in manual gestures, and sign languages deploy multiple channels
(hands, face and body) in utterance construction. Moreover, the narrow focus on
spoken Indo-European languages has entrenched the assumption that language is
composed wholly of an arbitrary system of symbols and rules. However, iconicity
(i.e. resemblance between aspects of communicative form and meaning) is also
present: speakers use iconic gestures when they speak; many non-Indo-European
spoken languages exhibit a substantial amount of iconicity in word forms and,
finally, iconicity is the norm, rather than the exception in sign languages. This
introduction provides the motivation for taking a multimodal approach to the
study of language learning, processing and evolution, and discusses the broad
implications of shifting our current dominant approaches and assumptions to
encompass multimodal expression in both signed and spoken languages.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130292. doi: 10.1098/rstb.2013.0292.
Wilkinson KM, McIlvane WJ.
Perceptual factors influence visual search for meaningful symbols in individuals with intellectual disabilities and Down syndrome or autism spectrum disorders.
Augmentative and alternative communication (AAC) systems often supplement oral communication for individuals with intellectual and communication disabilities. Research with preschoolers without disabilities has demonstrated that two visual-perceptual factors influence speed and/or accuracy of finding a target: the internal color and spatial organization of symbols. Twelve participants with Down syndrome and 12 with autism spectrum disorders (ASDs) completed two search tasks. In one, the symbols were clustered by internal color; in the other, the identical symbols had no arrangement cue. Visual search was superior in participants with ASDs compared to those with Down syndrome. In both groups, responses were significantly faster when the symbols were clustered by internal color.
Further studies have examined whether the influence of color cuing applies when the color
is in the symbol background. These studies examined whether search efficiency as measured
through mouse access differed for line drawings with backgrounds color-coded to cue
taxonomic category as compared to line drawings of the same concepts on white
backgrounds. The studies included categories such as fruits and vegetables (Thistle &
Wilkinson, 2009; n = 30), animals (Wilkinson & Coombs, 2010; n = 10), and emotion labels
(Wilkinson & Snell, 2011; n = 30). Contrary to predictions, background color had either no
effect (in older preschool children) or interfered with performance (in younger preschool
children). These data offer hints that the facilitating role of color cuing may not operate the
same way when the color cue is in the symbol background as compared to within the symbol itself.
ref: Am J Intellect Dev Disabil. 2013 Sep;118(5):353-64. doi: 10.1352/1944-7558-118.5.353.
Yovetich W, Young T
Twenty student volunteers, naive to Blissymbols, were asked to "guess" the meaning of 64 Blissymbols, each presented without its word gloss. The symbols and their verbal labels were each varied orthogonally on two dimensions of representativeness of the symbol (high/low) and concreteness of the word they were designed to represent (high/low). The representativeness values were obtained from the norms reported by Yovetich and Paivio (1980), while the concreteness values were obtained from the norms reported by Paivio, Yuille, and Madigan (1968). The subjects' responses were subsequently scored as either "exact/synonymous" or "other." Results of the analysis of subjects' mean responses for the two dimensions of Blissymbols, using paired t-tests, revealed that the guessability of a symbol's gloss was significantly affected by the dimension of representativeness. The results have implications for understanding the psychological attributes of the graphic representations which are used in clinical and/or research methodologies, and they support the findings of earlier research dealing with Blissymbols as well as natural language logographs (i.e., Chinese and Japanese).
ref: Augment Altern Commun. 1988, Vol. 4, No. 1, Pages 35-39 (doi:10.1080/07434618812331274587)
Zangari C, Lloyd L, Vicker B
Augmentative and alternative communication: An historic perspective
During the past 3 decades, the field of augmentative and alternative communication (AAC) has emerged as a major development for the benefit of individuals with little or no functional speech. This paper attempts to document the social and historic events that led to the emergence of the discipline of AAC and to identify some major milestones in its development. The paper outlines the trends and transitions that have occurred in the areas of aided and unaided communication, intervention, service delivery, consumer issues, and professional development. Although abundant information was only available about the course of development in a few countries, the authors have attempted to use available resources to present the major international events and developments that influenced the evolution of AAC from a North American perspective.
ref: Augment Altern Commun. 1994, Vol. 10, No. 1, Pages 27-59 (doi:10.1080/07434619412331276740)
Hearing and seeing meaning in speech and gesture: insights from brain and behaviour.
As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, information coming from both channels is integrated, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
ref: Philos Trans R Soc Lond B Biol Sci. 2014 Sep 19;369(1651):20130296. doi: 10.1098/rstb.2013.0296.