Alexa, are you real? Children’s understanding of artificially intelligent smart speakers
Written by Zach Hamzagic
Artificially intelligent smart speakers and voice assistants like Siri, Alexa, and Google Home are becoming increasingly popular in homes. An estimated 35% of North Americans over the age of twelve have a smart speaker in their home, and 49% of owners report using them to keep children entertained (Dey, 2025). Of course, teenagers and adults understand that the voice coming from a smart speaker's virtual assistant is the product of human programming. But how do young children understand these smart speakers?
Several studies have found that by the age of six, children know that smart speakers are not living beings (Girouard‐Hallam et al., 2021; Xu & Warschauer, 2020). Yet, children still tend to attribute human-like qualities to smart speakers. Younger children around kindergarten age attribute more experiences, mental abilities (like being smart and able to remember things), and deservedness of moral treatment (like being spoken to and handled nicely) to smart speakers than older children do (Flanagan et al., 2023). For example, Xu and Warschauer (2020) interviewed three-to-six-year-olds about their perceptions of a smart speaker and found that upwards of 85% of the children thought it had mental abilities, and upwards of 65% thought it had emotional abilities (for example, liking things and showing emotions). Children often attributed these abilities to the smart speaker's capacity to converse with them and show reciprocal feelings. However, most children acknowledged the non-human physical features of the smart speaker (like being a small device) and attributed its human-like mental and emotional abilities to being programmed by people.
Researchers who have examined older children (ages six to eleven) similarly find that children are aware that smart speakers are made and programmed by humans, yet believe they are intelligent (even more so than they actually are; Andries & Robertson, 2023). Even these older children seem unsure whether smart speakers truly have feelings and emotions (Andries & Robertson, 2023; Girouard‐Hallam et al., 2021). Perhaps perceived emotional abilities explain why children think smart speakers have social capabilities (like being friends) and moral rights (like not being destroyed or spoken to rudely; Andries & Robertson, 2023; Flanagan et al., 2023; Girouard‐Hallam et al., 2021).
The research described above indicates that children do not view smart speakers as purely inanimate devices. The perception of intelligence and social ability conveyed through natural conversation may foster a sense of trust in children (Andries & Robertson, 2023). This raises two potential concerns: overreliance on artificially intelligent technology and data privacy. Children may learn early on to rely on easily accessible "trustworthy" information from artificial intelligence, which may reduce their opportunities to develop creativity and critical thinking skills. Additionally, children tend not to consider privacy when conversing with smart speakers, so it may be wise for parents to educate their children about the privacy of these conversations (Andries & Robertson, 2023; Wang et al., 2022).
Overall, children are aware that smart speakers are not living beings with their own thoughts and emotions. Yet young preschool-age children still attribute human-like properties to these speakers, such as social and emotional abilities and moral qualities. As artificial intelligence (AI) continues to develop, children may begin to interact with increasingly realistic and human-like AI virtual assistants. Therefore, it is important to continuously monitor how children perceive and interact with AI virtual assistants.
If you enjoyed this blog post, check out our other blog posts!
References
Andries, V., & Robertson, J. (2023). Alexa doesn't have that many feelings: Children's understanding of AI through interactions with smart speakers in their homes. Computers and Education: Artificial Intelligence, 5, 100176. https://doi.org/10.1016/j.caeai.2023.100176
Dey, M. (2025). Smart speaker statistics by market size, share, region, usage, adoption, shipments, ownership, trends and facts. Retrieved from https://electroiq.com/stats/smart-speaker-statistics/
Flanagan, T., Wong, G., & Kushnir, T. (2023). The minds of machines: Children's beliefs about the experiences, thoughts, and morals of familiar interactive technologies. Developmental Psychology, 59(6), 1017–1031. https://doi.org/10.1037/dev0001524
Girouard‐Hallam, L. N., Streble, H. M., & Danovitch, J. H. (2021). Children's mental, social, and moral attributions toward a familiar digital voice assistant. Human Behavior and Emerging Technologies, 3(5), 1118–1131. https://doi.org/10.1002/hbe2.321
Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2022). Informing age-appropriate AI: Examining principles and practices of AI for children. In Proceedings of the 2022 CHI conference on human factors in computing systems (pp. 1–29).
Xu, Y., & Warschauer, M. (2020, April). What are you talking to?: Understanding children's perceptions of conversational agents. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–13).
Disclaimer
The blog posts are for informational and educational purposes only. The posts should not be considered as any type of advice (medical, mental health, legal, and/or religious advice). All blog posts have been researched, written, and edited by the undergraduate students and alumni of the Lifespan Cognition Lab. As a teaching and research-based lab, we encourage all lab members to help make knowledge more accessible to all communities through these posts.