About our SID activities
Safety is a core priority, and we’ve taken a multi-layered approach to strengthening teen safety across ChatGPT. Most recently, we updated our Model Spec, which outlines the intended behaviour for our models, with new Under-18 principles to guide how ChatGPT should provide a safe, age-appropriate experience for teens.
In recent months, we've introduced additional product safeguards and family-support features, developed with guidance from experts. Our parental controls are designed to support families and put teen well-being first by helping parents tailor their teen's experience.
To support conversations between parents and teens about healthy and responsible AI use, we’ve added new expert-vetted resources to the parents' resource hub, including a Family Guide to Help Teens Use AI Responsibly and tips for parents on how to talk with their kids about AI, which were reviewed by ConnectSafely and members of our Expert Council on Well-Being and AI.
We’re also in the early stages of rolling out an age prediction model on ChatGPT consumer plans. This will help us automatically apply teen safeguards when we believe an account belongs to a minor. If we are not confident about someone’s age or have incomplete information, we’ll default to an under-18 experience and give adults ways to verify their age.
Strengthening teen safety is ongoing work. We’ll continue to improve parental controls and model capabilities, expand resources for parents, and work with organisations, researchers, and expert partners, including the Expert Council on Well-Being and AI and the Global Physician Network.
What we are doing to create a better internet
Safety is core to how we build and deploy AI. We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts. We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities.
We’re committed to providing strong teen protections, including age-appropriate safeguards, parental controls, content policies, and collaboration with experts, and to improving them over time to better support teens and families.
Support for the Digital Services Act in the EU
OpenAI’s mission is to ensure that artificial general intelligence benefits everyone, and this commitment guides how we design, deploy, and govern our technologies. We know that ensuring the security, safety and privacy of all our users, including in Europe, is central to achieving that goal. OpenAI supports the objectives of the EU Digital Services Act and a risk-based approach to online safety and service accountability. We will continue to meet our regulatory obligations and also strive to set new standards through innovative solutions and by collaborating with governments, civil society, and the wider digital safety community.
We have published a number of transparency materials and explanatory resources that describe our approach to safety-by-design and governance.
We will update this section as SID 2026 approaches.
About us
OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits everyone. We push the boundaries of AI capabilities and seek to deploy these systems safely to the world through our products.