OpenAI launches "Safety Evaluations Hub" to regularly publish model safety performance data
PANews reported on May 15 that, according to an official announcement from OpenAI, the company has launched the "Safety Evaluations Hub" to improve the transparency of model safety. The hub will continuously publish results on how its models perform against harmful content, jailbreak attacks, hallucination, and instruction hierarchy. Unlike system cards, which disclose data only once at a model's release, the hub will be updated periodically as models are updated and supports comparisons across different models, with the aim of strengthening the community's understanding of AI safety and regulatory transparency. Currently, GPT-4.5 and GPT-4o perform best in resisting jailbreak attacks and in factual accuracy.