Title:
Investigating the effects of i-complexity and e-complexity on the learnability of morphological systems
Authors:
Johnson, Tamar
Gao, Kexin
Smith, Kenny
Rabagliati, Hugh
Culbertson, Jennifer
Links:
https://bibliotekanauki.pl/articles/2061409.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Instytut Podstaw Informatyki PAN
Subjects:
morphological
complexity
learning
neural networks
typology
Description:
Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman and Malouf (2013) argue that there is one dimension on which languages do not differ widely: in predictive structure. Predictive structure in a paradigm describes the extent to which forms predict each other, a property called i-complexity. Ackerman and Malouf (2013) show that although languages differ widely on a measure of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity. Here, we evaluate the hypothesis that language learners are more sensitive to i-complexity than e-complexity by testing how well paradigms which differ on only these dimensions are learned. This could result in the typological findings Ackerman and Malouf (2013) report if even paradigms with very high e-complexity are relatively easy to learn, so long as they have low i-complexity. First, we summarize recent work by Johnson et al. (2020) suggesting that both neural networks and human learners may actually be more sensitive to e-complexity than i-complexity. Then we build on this work, reporting a series of experiments which confirm that, indeed, across a range of paradigms that vary in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and other measures associated with size and diversity of forms). In human learners, we fail to find any effect of i-complexity on learning at all. Finally, we analyse a large number of randomly generated paradigms and show that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. We discuss what these findings might mean for Ackerman and Malouf's hypothesis, as well as the role of ease of learning versus generalization to novel forms in the evolution of paradigms.
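The i-complexity discussed in the abstract is grounded in conditional entropy: how uncertain one paradigm cell's form is once another cell's form is known. A minimal sketch of that idea, assuming a hypothetical two-cell paradigm (the suffixes and inflection classes below are invented for illustration, not data from the article):

```python
from collections import Counter
from math import log2

# Toy paradigm: each row is an inflection class, each column a paradigm cell.
# Suffixes are hypothetical, chosen only to illustrate predictive structure.
paradigm = [
    ("-a", "-i"),
    ("-a", "-i"),
    ("-o", "-e"),
    ("-o", "-u"),
]

def conditional_entropy(pairs):
    """H(cell2 | cell1): how unpredictable cell2 is given cell1, in bits."""
    joint = Counter(pairs)
    marginal = Counter(c1 for c1, _ in pairs)
    n = len(pairs)
    h = 0.0
    for (c1, c2), count in joint.items():
        p_joint = count / n            # P(cell1, cell2)
        p_cond = count / marginal[c1]  # P(cell2 | cell1)
        h -= p_joint * log2(p_cond)
    return h

# Low values mean one form predicts the other well (low i-complexity);
# here "-a" fully predicts "-i", while "-o" leaves two options open.
print(conditional_entropy(paradigm))  # → 0.5
```

In this toy paradigm, knowing the first suffix is "-a" removes all uncertainty about the second, while "-o" leaves a coin flip, giving 0.5 bits overall; a paradigm with high i-complexity would leave that uncertainty large across many cell pairs.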
Source:
Journal of Language Modelling; 2021, 9, 1; 97--150
2299-856X
2299-8470
Appears in:
Journal of Language Modelling
Content provider:
Biblioteka Nauki
Article