EU AI Office faces scrutiny over access to Anthropic hacking model as calls grow for more staff and stronger oversight

The European Commission’s AI Office is facing fresh questions over whether it has the staff, expertise and access needed to evaluate the newest generation of high-end AI models built for cybersecurity tasks.

Concerns intensified after reports that EU officials have not secured full access to an advanced Anthropic model described as capable of assisting with vulnerability discovery, a sensitive area where misuse could accelerate real-world attacks.

Why access to the model matters

AI safety advocates argue that regulators cannot credibly assess risks if they cannot test the most capable systems under controlled conditions. They say this gap is especially serious for models that may help identify and exploit software weaknesses.

A coalition of AI safety groups has urged the Commission to strengthen the AI Office’s resources, warning that oversight could lag behind rapid advances in so-called frontier AI and the growing number of deployments across Europe.

Staffing, hierarchy and enforcement tensions

The AI Office is a relatively new unit and is still building technical capacity, including teams focused on model evaluation and compliance. Critics say the safety-focused group needs more experienced engineers and clearer authority to act quickly during a fast-moving incident.

Some policy experts also point to structural challenges inside the Commission, arguing that additional layers of decision-making can slow responses when urgent technical assessments or security coordination are required.

The debate lands as the EU prepares implementation of the AI Act, which sets obligations for high-risk systems and potential penalties for violations. Industry observers say companies may be more cautious about sharing highly sensitive models if they fear immediate enforcement consequences.

The Commission has signaled plans to expand staffing, including additional hires tied to upcoming AI Act work. For critics, the central question remains whether the AI Office can secure timely access to the most capable models while keeping pace with the cybersecurity risks they may amplify.