You are searching for the phrase "typology" by criterion: Subject


Showing 1-4 of 4
Title:
Modeling morphological learning, typology, and change: What can the neural sequence-to-sequence framework contribute?
Authors:
Elsner, Micha
Sims, Andrea D.
Erdmann, Alexander
Hernandez, Antonio
Jaffe, Evan
Jin, Lifeng
Booker Johnson, Martha
Karim, Shuan
King, David L.
Lamberti Nunes, Luana
Oh, Byung-Doh
Rasmussen, Nathan
Shain, Cory
Antetomaso, Stephanie
Dickinson, Kendra V.
Diewald, Noah
McKenzie, Michelle
Stevens-Guille, Symon
Links:
https://bibliotekanauki.pl/articles/103835.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Instytut Podstaw Informatyki PAN
Subjects:
morphology
computational modeling
typology
Description:
We survey research using neural sequence-to-sequence models as computational models of morphological learning and learnability. We discuss their use in determining the predictability of inflectional exponents, in making predictions about language acquisition and in modeling language change. Finally, we make some proposals for future work in these areas.
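The abstract above frames inflection as a sequence-to-sequence problem. As a rough illustration (a toy encoding of our own, not taken from the paper), one paradigm cell can be turned into character-level source and target sequences, with morphosyntactic features appended to the input as special tags:

```python
# Toy sketch of the sequence-to-sequence framing of inflection:
# source = lemma characters + feature tags, target = inflected form characters.
# The tag format (<V>, <PST>) is an illustrative assumption, not the paper's.
def encode_example(lemma, features, form):
    """Turn one inflection-table cell into a (source, target) token pair."""
    source = list(lemma) + [f"<{f}>" for f in features]
    target = list(form)
    return source, target

src, tgt = encode_example("walk", ["V", "PST"], "walked")
print(src)  # ['w', 'a', 'l', 'k', '<V>', '<PST>']
print(tgt)  # ['w', 'a', 'l', 'k', 'e', 'd']
```

A neural encoder-decoder trained on such pairs then has to learn which exponents realize which feature bundles, which is what makes predictability of exponents measurable.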
Source:
Journal of Language Modelling; 2019, 7, 1; 53-98
2299-856X
2299-8470
Appears in:
Journal of Language Modelling
Content provider:
Biblioteka Nauki
Article
Title:
Investigating the effects of i-complexity and e-complexity on the learnability of morphological systems
Authors:
Johnson, Tamar
Gao, Kexin
Smith, Kenny
Rabagliati, Hugh
Culbertson, Jennifer
Links:
https://bibliotekanauki.pl/articles/2061409.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Instytut Podstaw Informatyki PAN
Subjects:
morphological
complexity
learning
neural networks
typology
Description:
Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman and Malouf (2013) argue that there is one dimension on which languages do not differ widely: predictive structure. Predictive structure in a paradigm describes the extent to which forms predict each other, called i-complexity. Ackerman and Malouf (2013) show that although languages differ on measures of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity. Here, we evaluate the hypothesis that language learners are more sensitive to i-complexity than e-complexity by testing how well paradigms which differ on only these dimensions are learned. This could explain the typological findings Ackerman and Malouf (2013) report if even paradigms with very high e-complexity are relatively easy to learn, so long as they have low i-complexity. First, we summarize recent work by Johnson et al. (2020) suggesting that both neural networks and human learners may actually be more sensitive to e-complexity than i-complexity. We then build on this work, reporting a series of experiments which confirm that, across a range of paradigms varying in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and of other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity on learning at all. Finally, we analyse a large number of randomly generated paradigms and show that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. We discuss what these findings might mean for Ackerman and Malouf's hypothesis, as well as the role of ease of learning versus generalization to novel forms in the evolution of paradigms.
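As a rough illustration of the two measures contrasted in the abstract (the toy data and function names are ours, not the authors'), e-complexity can be counted as the number of distinct exponents per paradigm cell, while i-complexity can be estimated as the average conditional entropy of one cell's exponent given another's, in the spirit of Ackerman and Malouf (2013):

```python
import math
from collections import Counter

# Hedged sketch: rows are inflection classes, columns are paradigm cells,
# entries are exponents. Exact formulations in the paper differ in detail.
def e_complexity(paradigms):
    """Total number of distinct exponents, summed over cells."""
    return sum(len(set(cell)) for cell in zip(*paradigms))

def conditional_entropy(pairs):
    """H(Y | X) over observed (x, y) exponent pairs, in bits."""
    joint, marg, n = Counter(pairs), Counter(x for x, _ in pairs), len(pairs)
    return -sum((c / n) * math.log2((c / n) / (marg[x] / n))
                for (x, _), c in joint.items())

def i_complexity(paradigms):
    """Average conditional entropy over all ordered pairs of cells."""
    cells = list(zip(*paradigms))
    scores = [conditional_entropy(list(zip(xs, ys)))
              for i, xs in enumerate(cells)
              for j, ys in enumerate(cells) if i != j]
    return sum(scores) / len(scores)

# Toy system: two classes whose cells fully predict each other.
classes = [("-a", "-i"), ("-a", "-i"), ("-o", "-u")]
print(e_complexity(classes))  # 4 (two distinct exponents in each of two cells)
print(i_complexity(classes))  # 0.0 (each cell fully predicts the other)
```

The toy system has nonzero e-complexity but zero i-complexity, which is exactly the configuration the learning experiments manipulate.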
Source:
Journal of Language Modelling; 2021, 9, 1; 97-150
2299-856X
2299-8470
Appears in:
Journal of Language Modelling
Content provider:
Biblioteka Nauki
Article
Title:
20 years of the Grammar Matrix: cross-linguistic hypothesis testing of increasingly complex interactions
Authors:
Zamaraeva, Olga
Curtis, Chris
Emerson, Guy
Fokkens, Antske
Goodman, Michael Wayne
Howell, Kristen
Trimble, Thomas J.
Bender, Emily M.
Links:
https://bibliotekanauki.pl/articles/24201227.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Instytut Podstaw Informatyki PAN
Subjects:
HPSG
grammar engineering
typology
hypothesis testing
Description:
The Grammar Matrix project is a meta-grammar engineering framework expressed in Head-driven Phrase Structure Grammar (HPSG) and Minimal Recursion Semantics (MRS). It automates grammar implementation and is thus a tool and a resource for linguistic hypothesis testing at scale. In this paper, we summarize how the Grammar Matrix grew in the last decade and describe how new additions to the system have made it possible to study interactions between analyses, both monolingually and cross-linguistically, at new levels of complexity.
Source:
Journal of Language Modelling; 2022, 10, 1; 49-137
2299-856X
2299-8470
Appears in:
Journal of Language Modelling
Content provider:
Biblioteka Nauki
Article
Title:
Monotonicity as an effective theory of morphosyntactic variation
Authors:
Graf, Thomas
Links:
https://bibliotekanauki.pl/articles/103811.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Instytut Podstaw Informatyki PAN
Subjects:
monotonic functions
syncretism
typology
*ABA generalization
Person Case Constraint
Gender Case Constraint
Description:
One of the major goals of linguistics is to delineate the possible range of variation across languages. Recent work has identified a surprising number of typological gaps in a variety of domains. In morphology, this includes stem suppletion, person pronoun syncretism, case syncretism, and noun stem allomorphy. In morphosyntax, only a small number of all conceivable Person Case Constraints and Gender Case Constraints are found. While various proposals have been put forward for each individual domain, few attempts have been made to give a unified explanation of the limited typology across all domains. This paper presents a novel account that deliberately abstracts away from the usual details of grammatical description in order to provide a domain-agnostic explanation of the limits of typological variation. This is achieved by combining prominence hierarchies, e.g. for person and case, with mappings from those hierarchies to the relevant output forms. As the mappings are required to be monotonic, only a fraction of all conceivable patterns can be instantiated.
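One consequence of the monotonicity requirement described above can be sketched as follows (a simplified formulation of ours, not the paper's formalism): a monotonic map from a linearly ordered prominence hierarchy must assign each output form a contiguous block of the hierarchy, which rules out *ABA patterns, where the same form surfaces at the top and bottom of the hierarchy but not in the middle:

```python
# Toy check (our simplification): a syncretism pattern over a linearly
# ordered hierarchy is realizable by a monotonic map only if every form
# occupies one contiguous block of positions.
def is_monotonic_pattern(pattern):
    """True iff no form re-appears after its block has ended (no ABA)."""
    closed = set()  # forms whose contiguous block has already ended
    prev = None
    for form in pattern:
        if form != prev:
            if form in closed:
                return False  # the form reappears after a gap: an ABA shape
            if prev is not None:
                closed.add(prev)
            prev = form
    return True

# Hierarchy positions 1 < 2 < 3 (e.g. person); A and B are surface forms.
print(is_monotonic_pattern(["A", "A", "B"]))  # True  (AAB: attested)
print(is_monotonic_pattern(["A", "B", "B"]))  # True  (ABB: attested)
print(is_monotonic_pattern(["A", "B", "A"]))  # False (*ABA: excluded)
```

This is why only a fraction of the conceivable syncretism patterns survive the monotonicity requirement.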
Source:
Journal of Language Modelling; 2019, 7, 2; 3-47
2299-856X
2299-8470
Appears in:
Journal of Language Modelling
Content provider:
Biblioteka Nauki
Article