The new science of computational lexicology and lexicography has arisen through contact and collaboration between representatives of three hitherto distinct disciplines: lexicography, linguistics, and computer science. The Pisa International Summer Schools on Computational Lexicology and Lexicography have played a crucial role, providing a regular forum for inter-disciplinary contact. In this volume, which had its origins in the fifth summer school, distinguished scholars provide a broad perspective on the field. In their overview paper Sue Atkins, Beth Levin, and Antonio Zampolli trace the development of computational lexicography and its links to theoretical linguistics. The sections which follow discuss the collection and pre-processing of textual data, the theoretical infrastructure of lexical analysis, and current tools and methodologies.
This volume of newly commissioned essays examines current theoretical and computational work on polysemy, the term used in semantic analysis for words with more than one meaning or function, whose meanings are sometimes related (as in plain) and sometimes not (as in bank). Such words present few difficulties in everyday language, but they pose central problems for linguists and lexicographers, especially those working in lexical semantics and computational modelling. The contributors to this book, leading researchers in theoretical and computational linguistics, consider the implications of these problems for grammatical theory and how they may be addressed by computational means. The theoretical essays examine polysemy as an aspect of a broader theory of word meaning. Three theoretical approaches are presented: the Classical (or Aristotelian), the Prototypical, and the Relational. Their authors describe the nature of polysemy, the criteria for detecting it, and its manifestations across languages. They examine the issues arising from the regularity of polysemy and the theoretical principles proposed to account for the interaction of lexical meaning with the semantics and syntax of the context in which it occurs. Finally, they consider formal representations of meaning in the lexicon and their implications for dictionary construction. The computational essays address the challenge polysemy poses for automatic sense disambiguation: identifying the intended meaning of a word occurrence. The approaches presented include the exploitation of lexical information in machine-readable dictionaries, machine learning based on patterns of word co-occurrence, and hybrid approaches that combine the two.
As a whole, the volume shows how on the one hand theoretical work provides the motivation and may suggest the basis for computational algorithms, while on the other computational results may validate, or reveal problems in, the principles set forth by theories.
Ruslan Mitkov's highly successful Oxford Handbook of Computational Linguistics has been substantially revised and expanded in this second edition. Alongside updated accounts of the topics covered in the first edition, it includes 17 new chapters on subjects such as semantic role-labelling, text-to-speech synthesis, translation technology, opinion mining and sentiment analysis, and the application of Natural Language Processing in educational and biomedical contexts, among many others. The volume is divided into four parts that examine, respectively: the linguistic fundamentals of computational linguistics; the methods and resources used, such as statistical modelling, machine learning, and corpus annotation; key language processing tasks including text segmentation, anaphora resolution, and speech recognition; and the major applications of Natural Language Processing, from machine translation to author profiling. The book will be an essential reference for researchers and students in computational linguistics and Natural Language Processing, as well as those working in related industries.
The past fifteen years have seen great changes in the field of language acquisition. New experimental methods have yielded insights into the linguistic knowledge of ever younger children, and interest has grown in the phonological, syntactic, and semantic aspects of the lexicon. Computational investigations of language acquisition have also changed, reflecting, among other things, the profound shift in the field of natural language processing from hand-crafted grammars to grammars that are learned automatically from samples of naturally occurring language. Each of the four research papers in this book takes a novel formal approach to a particular problem in language acquisition. In the first paper, J. M. Siskind looks at developmentally inspired models of word learning. In the second, M. R. Brent and T. A. Cartwright look at how children could discover the sounds of words, given that word boundaries are not marked by any acoustic analog of the spaces between written words. In the third, P. Resnik measures the association between verbs and the semantic categories of their arguments, which children likely use as clues to verb meanings. Finally, P. Niyogi and R. C. Berwick address the setting of syntactic parameters such as headedness (for example, whether the direct object comes before or after the verb).
The First International Conference on Computational Methods (ICCM04), organized by the Department of Mechanical Engineering, National University of Singapore, was held with great success in Singapore, December 15-17, 2004. These conference proceedings contain some 290 papers from more than 30 countries and regions. The papers cover a broad range of topics, such as meshfree particle methods, generalized and extended finite element methods, and inverse analysis and optimization methods. Computational methods for geomechanics, machine learning, vibration, shock, impact, health monitoring, material modelling, fracture and damage mechanics, multi-physics and multi-scale simulation, sports, and the environment are also included. All papers were reviewed before being accepted for publication in these proceedings, which will provide an informative, timely, and invaluable resource for engineers and scientists working in the important areas of computational methods.
Corpora and Language Education critically examines key concepts and issues in corpus linguistics, with a particular focus on the expanding interdisciplinary nature of the field and the role that written and spoken corpora now play in the fields of professional communication, teacher education, translation studies, lexicography, literature, critical discourse analysis, and forensic linguistics. The book also presents a series of corpus-based case studies illustrating central themes and best practices in the field.