OpenAI has started rolling out its first live test of ads in ChatGPT for users in the United States, marking a major shift in how the world’s most popular AI assistant is funded.
Code changes in the ChatGPT web app suggest that OpenAI is preparing to launch "ChatGPT Jobs," a dedicated dashboard designed to assist users with job hunting, resume building, and career training. The feature follows the recent rollout of ChatGPT Health and aligns with OpenAI’s goal to certify 10 million Americans with AI skills by 2030.
OpenAI has introduced a personalized "Year in Review" feature for ChatGPT users, providing data on their most frequent prompts, topics, and interactions throughout 2025. The tool follows the format of popular social media summaries but has received mixed feedback regarding the accuracy and categorization of its automated insights.
OpenAI has admitted that its new Atlas browser is facing constant attacks from hackers using a technique called "prompt injection." While the company is using AI to fight back, they warn that these security risks may never be fully solved as long as AI agents are used to browse the web.
OpenAI plans to launch an "adult mode" for ChatGPT in early 2026, using an AI-based age prediction system to allow more mature conversations while keeping younger users safe.
OpenAI CEO Sam Altman has issued an internal code red alert, directing teams to pause work on ads, shopping features, health agents, and the Pulse personal assistant to focus entirely on making ChatGPT faster, more reliable, and capable of handling a wider range of questions. The move comes as Google’s Gemini 3 model gains ground on benchmarks and user metrics, three years after ChatGPT’s launch prompted Google’s own code red response.
As ChatGPT marks its third anniversary, OpenAI has shared new usage data revealing how people actually use the world’s most popular AI tool. The findings show a strong preference for practical assistance over creative experimentation.
Offering ChatGPT for free to teachers is not a goodwill move aimed at education alone. It is a structural attempt to shape how AI enters classrooms, how teachers retain authority over its use, and how schools avoid fragmented, unsupervised adoption. The decision reflects constraints around trust, misuse, training load, and institutional inertia rather than a simple push for scale.











