In a bid to keep children safer online, Instagram is ramping up its efforts to detect users who lie about their age.

Meta, the parent company of Instagram, said Monday that it has begun testing AI tools that can proactively detect situations in which users may have misrepresented their age on the platform and automatically switch them to more restrictive privacy settings.

While Meta has been using AI to estimate users' ages for some time, the company said it will now "proactively" look for teen accounts on the photo and video sharing app, even if they may have provided incorrect dates of birth to register on the platform.

If a user is found to have misrepresented their age, their account will automatically be converted to a teen account, which comes with stricter privacy and safety settings, the company said.

Stronger security controls

Teen accounts are private by default, and messages are limited to people the teen already follows or is connected to. Meta will also restrict “sensitive content,” including fight videos.

Teens will receive reminder alerts if they spend more than 60 minutes on the app, and a built-in “sleep mode” will disable notifications and send automatic replies to messages between 10 p.m. and 7 a.m.

The tech giant, which also owns Facebook and WhatsApp, has trained its artificial intelligence to look for signals that suggest an account is being used by a teenager. These include the type of content the user engages with, profile information and the date the account was created.

Meta, along with other social media companies, has long supported shifting the responsibility for age verification to app stores – a position it has taken amid criticism that these platforms are not doing enough to protect children or to prevent those under 13 from registering on their apps.

The popular photo and video sharing app will also begin sending notifications to parents, offering guidance on how to talk to their teenagers about the importance of providing accurate age information online, the company said.

Meta under scrutiny

Instagram introduced "Teen Accounts" in September 2024, aiming to increase safety and privacy for users aged 13 to 17.

These accounts are automatically set to private, restrict messages to approved connections only, and limit exposure to sensitive content.

This initiative aligns with similar efforts by platforms like YouTube to create safer online environments for younger audiences.

Earlier this month, Meta expanded its “Teen Accounts” features on Facebook and Messenger, as part of its broader response to growing regulatory pressure to increase online safety for children.

The company also announced that teens under 16 will now need parental permission before going live, or before disabling a feature that automatically blurs images suspected of containing nudity in direct messages.

The measures come at a time when social media platforms face mounting pressure from regulators, lawmakers and parents over their impact on the mental health, well-being and safety of young users.

Several lawmakers have expressed their intention to move forward with proposed legislation aimed at protecting children from the harms of social media, including the Kids Online Safety Act (KOSA).

Although the Republican-controlled House of Representatives did not bring KOSA to a vote last year, lawmakers indicated during a committee hearing last month that efforts to introduce new protections for children online remain a legislative priority.


(AA/BalkanWeb)

© BalkansWeb