Last week, I flew to LA to attend the Children’s Digital Privacy Summit hosted by Denise Tayloe and her team at Privo. I’ve known Denise since the early days of IIW, and it was great to meet her team for the first time.
They put on a great show that began with a talk by James Thomason, “Child Safety in the Age of AI: Navigating the Fine Line Between Innovation and Influence.”
I learned a new term, “pseudanthropy”: the act of AI systems impersonating humans, which poses ethical and societal risks and has prompted calls for regulation to prevent deceptive behavior.
James outlined three significant risks of pseudanthropy:
- Cognitive Distortions: Pseudanthropy can lead to a distorted understanding of human connections, as the AI cannot reciprocate real human emotions.
- Psychological Dependence: Forming an emotional attachment to a pseudo-human entity can replace or hinder real human interactions and emotional development.
- Technological Determinism: AI that mimics human intimacy yet lacks the depth and complexity of real human experience could redefine individuality in society.
He then described some of the dangers of AI and how it could harm children, laying out guidelines for AI safety with children:
- No simulated human faces: AI should not use a human face or visually human-like representations to prevent children from mistaking it for a real person.
- No simulated human emotions: AI should not simulate human emotions or pretend to have feelings, as this could lead children to form emotional attachments or misunderstandings about the nature of AI.
- No simulated human behaviors: AI should avoid imitating human-like behavior or mannerisms that could confuse or mislead children about its capabilities and intentions.
- No simulated human companionship: AI should not use personalization strategies that mimic human friendship or companionship to prevent over-identification or inappropriate reliance.
- Clear artificial identity: AI interactions should be designed to clearly communicate their artificial nature, helping children distinguish between AI and human entities.
- No learning: AI should not collect sensitive information from children, such as personal data, preferences, or behavioral patterns, to protect their privacy.
He ended with a quote from Peter-Paul Verbeek:

> The design of technology is, in fact, doing ethics by other means.
Next was a panel on the current landscape of privacy protections for children and teens. Nichole Rocha, who leads the 5Rights Foundation (curious about the 5 Rights? You can find them on page 6 of this PDF), spoke about the recent preliminary injunction against the Age Appropriate Design Code passed in California and modeled after a similar code in the UK.
Later, Carl Szabo, Vice President & General Counsel of NetChoice, which spearheaded the case NetChoice v. Bonta to get the injunction, explained why they did so. The key reason they assert is that the bill’s requirement that sites be “age appropriate” restricts both company speech and children’s speech. The case is currently on appeal in the 9th Circuit. If the injunction is allowed to stand, it could have quite an impact on states’ ability to regulate in the digital world.
One of the questions that arose was about how regulation happens: Is the internet one big publisher? No. It comprises many different actors, all making many design choices.
We had a special guest speaker: a young man in his early 20s whom Denise Tayloe, CEO of Privo, had met on the plane on her way to the summit. He lived in Maryland but had grown up in Los Angeles and was returning for a visit. On the plane, he and Denise started talking about what she did, and he opened up to her: as a tween, he had been exposed to pornography on the internet and developed an addiction to it. He was still working to heal the damage it had done to his ability to relate to other people and be whole. His experience made it clear to the adults sitting in the audience why we were in the room caring about how kids navigate the digital world.
The big news was the announcement by Privo and Trust Elevate of the Verifiable Parental Consent (VPC) Trust Alliance. Their mission is to:
- Define the state of the art in verifiable parental consent
- Forge consensus on standardized assurance levels
- Develop robust auditing and certification processes
- Tailor solutions to regional and cultural contexts
- Disseminate our findings and recommendations
In the US, COPPA requires verifiable parental consent to collect data about kids; other countries have similar requirements or could soon.
I also learned some new language from the world of age checking, PAS 1296:
- Age determination – an indication established that a citizen has a particular age stated to a specified level of confidence and by reference to information related to that citizen
- Age categorization – an indication established that a citizen is of an age that is within a category of ages, over a certain age, or under a certain age to a specified level of confidence and by reference to information or factors related to that citizen
- Age estimation – an indication by estimation that a citizen is likely to fall within a category of ages, over a certain age or under a certain age to a specified level of confidence by reference to inherent features or behaviors related to that citizen
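To make the distinctions between these three concrete, here is a minimal sketch of them as a data model. This is purely illustrative; the class and field names are my own assumptions, not taken from PAS 1296 or any real age-checking API.

```python
from dataclasses import dataclass

# Illustrative sketch of the three age-check result types.
# All names here are hypothetical, not from the PAS 1296 standard.

@dataclass
class AgeDetermination:
    """A particular age, established to a stated level of confidence
    by reference to information related to the citizen."""
    age: int            # e.g. 14
    confidence: float   # specified level of confidence, e.g. 0.99

@dataclass
class AgeCategorization:
    """Membership in a category of ages (or over/under a certain age),
    by reference to information or factors related to the citizen."""
    category: str       # e.g. "under 13", "13-17", "18+"
    confidence: float

@dataclass
class AgeEstimation:
    """A likely age category, estimated from inherent features or
    behaviors (e.g. facial analysis) rather than documents."""
    category: str
    confidence: float

# A site gating content often needs only a categorization or an
# estimation -- never a determination of the user's exact age.
check = AgeCategorization(category="under 13", confidence=0.95)
print(check.category)  # -> under 13
```

The design point the vocabulary captures: the three types answer progressively weaker questions (exact age, age bracket from records, age bracket from inference), so a service can ask for the least invasive check that meets its obligation.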
Overall, it was a great day. I’m looking forward to what they bring together next year to discuss these critical issues!