TikTok is set to introduce new age-verification technology across the European Union in the coming weeks, as authorities push for stronger measures to protect children from online harm. The social media giant has been quietly piloting the system over the past year, analyzing profile information, posted videos, and behavioral signals to predict whether an account may belong to a user under 13.
The technology assesses not only user-provided information but also behavior such as the type of content published and other on-platform interactions. If an account is flagged by the system, it will be reviewed by specialized moderators rather than face automatic removal. Users can then appeal against this decision if they believe an error has been made, with options including facial age estimation, credit card verification, or government-approved identification.
The rollout of this technology comes amid growing scrutiny of how social media platforms verify users' ages under data protection regulations. European authorities have expressed concerns about the impact of online activities on minors and are pushing for stricter measures to safeguard young people.
TikTok asserts that its system complies with relevant laws, using age prediction solely to guide human moderators and improve technology. The company has developed this tool in collaboration with Ireland's Data Protection Commission, a key regulatory body in the EU.
This new initiative follows a string of incidents highlighting the risks faced by young users online, including tragic cases where minors have suffered harm or even lost their lives as a result of online challenges gone wrong. In response, some countries are moving towards stricter regulations, with Australia implementing a ban on social media for people under 16 and Denmark aiming to restrict access to platforms for those under 15.
As the European Parliament presses for age limits on social media platforms, TikTok's move signals an effort by major tech companies to take responsibility for protecting young users. While some critics argue that stricter regulations may push teenagers towards darker corners of the internet, many are calling for greater accountability and transparency in how these platforms verify user ages and address potential harm.