Daniela is a 14-year-old eighth-grade student living with her mother and younger sister in an urban area of Chile. She uses voice assistants — Siri and Alexa — in exactly the moments they are most convenient: when her hands are wet in the bath, or too full to type. That pragmatic relationship captures something essential about how she thinks. She approaches AI with clear eyes: “I believe that they are neither good nor bad, because good and bad are very closed words. I think it will help more than it will hurt — however, there is always a risk in everything.”
Technology, in Its Place
Daniela draws a firm conceptual line between AI as software and robots as hardware. She thinks the term "artificial intelligence" refers to the intangible side of programming — code, logic, systems — while robots are the physical form that code can inhabit. She likes the image of an R2-D2-style automated butler rolling through the house, tray in hand, responding when called. She would like an intelligent floor vacuum. What she does not want is a robot that looks like a person or an animal: "I prefer a robot to look like a robot." The distinction matters to her — she wants to know what she is dealing with.
A Society That Learns From Its Mistakes
When Daniela describes her hopes for 2050, the word she reaches for is awareness. She wants people to be "more intellectually advanced" — to have learned from history, to be more considerate of the environment, to be "more awake." Technology is not absent from her picture of the future, but it is not the centrepiece either. She is interested in AI applications in security, education, and home assistance. She notices small signs of automation already in place — the metro card recharge machines that replaced human attendants — and she reads them as the leading edge of a larger shift.
Where Robots Can and Cannot Go
Daniela’s approach to specific applications is measured. She accepts robot assistance with children in principle — “if it could really do it, I think it would be wonderful” — but immediately adds that a robot will never fully match the function of a person in that role. She is similarly mixed on teachers: she is clear that teaching requires emotional skills that robots lack — “at the end of the day we are all living together as human beings” — but she accepts AI assistance in grading. On nursing homes, she sees genuine value: Alexa-style devices that allow elderly residents to call for help are a straightforward good. For healthcare and decision-making in general, she draws a hard line — robots should handle minor tasks that make life easier, not judge character or make high-stakes choices.
On accountability, she takes an unusual position: when a machine fails, she says, the responsibility lies with the user, not the company. “It’s just that you take a risk when you buy these things.” That framing is rare among her peers, who tend to assign responsibility to manufacturers.
The Limits of Going Digital
Daniela wants to learn how AI works — she sees it as an inevitable part of life going forward, something school will eventually have to teach. But she holds her ground on the value of what currently exists: “I still like the traditional way, because I don’t like a world full of robots and all that, because I still like what is natural.” Tests without paper, classes without teachers, work without humans — these prospects make her uneasy, not excited. She has a particular fondness for manual art over automated art: “I like manual art more. That is, human art.”
A Future That Still Makes Sense
On work, Daniela is direct about the consequences of automation: human labour will be replaced by machines that are cheaper and more efficient, and that is a real problem. “If humans are replaced by robots, it’s like nothing makes sense anymore — life, like lying down all day.” What she values about work is not just income but purpose: the act of fighting for things, of contributing. Her solution is not to slow AI development but to find the places machines cannot go — the “soft areas,” as she calls them, the zones of critical thinking and emotional reasoning that remain beyond what any current robot can do. She is cautious, not catastrophising; hopeful that better technology can bring better opportunities, but unwilling to pretend that the transition will be painless.