Abstract:
Owing to the emergence of language, human beings have been able to understand the intentions of others, form common concepts, and extend them to new concepts. Artificial intelligence researchers have not only predicted words and sentences statistically through machine learning but have also created language systems in which machines communicate with one another. However, current studies exhibit strong constraints: the models depend on task settings, supervisory signals, or rewards, which hinders the emergence of languages resembling those in the real world. In this study, we improved on the research of Batali and Choi et al. and attempted language emergence under weakly constrained conditions similar to human language generation. We developed a new language emergence agent that combines a language module with a visual module, and incorporated a bias that exists in humans into the new emergence algorithm as a "reflection function." We used the MNIST dataset for language emergence. Regardless of whether the reflection function was used, the agent could generate messages corresponding to the MNIST labels. However, through qualitative and quantitative analyses, we confirmed that the reflection function induced pattern structuring in the messages. This result suggests that the reflection function is effective for creating a grounded language from raw images in an under-constrained setting resembling human language generation.