METR
METR (Model Evaluation and Threat Research) is a nonprofit AI safety organization that evaluates frontier AI models for dangerous capabilities. It develops standardized evaluations to assess whether AI systems can autonomously perform tasks relevant to biosecurity, cybersecurity, and self-replication.