
Cybersecurity Alert: Malicious ML Models Found on Hugging Face


Hugging Face, a prominent platform for artificial intelligence (AI) and machine learning (ML), has been found to host potentially harmful ML models. A cybersecurity research firm uncovered two models containing code capable of packaging and distributing malware to users who download them. Researchers say the malicious actors abused Pickle file serialization, a common technique for storing ML models, to embed harmful software within the models. Following the discovery, the researchers reported their findings, and the malicious models were removed from Hugging Face.

Researchers Uncover Malware-Laden ML Models on Hugging Face

ReversingLabs, the cybersecurity research firm behind the discovery, detailed how threat actors are exploiting Hugging Face, a platform that hosts a wealth of open-source AI models shared by numerous developers and organizations.

The exploit in question revolves around Pickle file serialization, a technique for storing ML models so that they can be shared and reused. Pickle, a Python module, is commonly used to serialize and deserialize model data. However, it is known to be an insecure format, because arbitrary Python code can execute during deserialization.
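The risk is easy to demonstrate. The sketch below is a minimal, self-contained illustration of the underlying weakness, not the code found in the flagged models: a Python object can define __reduce__ to tell Pickle what to call when the data is loaded, so merely deserializing the bytes runs the attacker's command.

```python
import os
import pickle

# Minimal illustration of Pickle's weakness (not the actual nullifAI payload):
# __reduce__ tells Pickle which callable to invoke during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # An attacker could place any command here; this one just prints.
        return (os.system, ("echo 'code executed during unpickling'",))

data = pickle.dumps(MaliciousPayload())

# Simply loading the bytes runs the embedded command -- the victim never
# has to call anything on the resulting object.
pickle.loads(data)
```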

In closed or tightly controlled systems, Pickle files are loaded only from trusted sources. Hugging Face's open-source nature, by contrast, encourages the widespread sharing of these files, giving attackers the opportunity to conceal malware within harmless-looking models, as sketched below.
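One common way such restrictions are enforced, sketched here under the assumption that only a handful of built-in types are ever needed, is to override Pickle's find_class hook so that deserialization can reference only an explicit allowlist of globals. This follows the pattern in Python's own pickle documentation, not any specific Hugging Face mechanism.

```python
import io
import pickle

# A restricted unpickler in the style of the Python documentation's example.
# The allowlist here is illustrative; a real policy depends on the data.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        # Every global the pickle stream references passes through here.
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data round-trips fine; a stream that references os.system is
# rejected at load time instead of executing.
print(restricted_loads(pickle.dumps([1, 2, 3])))
```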

The investigation revealed two models on Hugging Face that harbored malicious code yet passed the platform's security checks without being flagged as hazardous. The researchers dubbed the insertion technique "nullifAI," a name that refers to how it nullifies existing protective measures within the AI community.

The identified models were stored in PyTorch format, which is essentially a compressed archive of Pickle files. The researchers discovered that the models had been compressed with 7z rather than PyTorch's default ZIP format, which meant they could not be loaded with PyTorch's torch.load() function and also escaped detection by Hugging Face's Picklescan tool.
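A quick magic-byte check makes the distinction concrete. The sketch below is an illustrative helper, not ReversingLabs' tooling: a standard torch.save() checkpoint begins with the ZIP signature, while a 7z archive carries a different one, so a scanner keyed to the expected container can miss or refuse the unusual format.

```python
# Illustrative container check (an assumed helper, not ReversingLabs' method):
# torch.save() writes a ZIP archive by default; the flagged models used 7z.
ZIP_MAGIC = b"PK\x03\x04"             # standard PyTorch checkpoint container
SEVENZ_MAGIC = b"7z\xbc\xaf\x27\x1c"  # 7z archive signature

def container_format(path: str) -> str:
    with open(path, "rb") as fh:
        header = fh.read(6)
    if header.startswith(ZIP_MAGIC):
        return "zip: loadable by torch.load()"
    if header.startswith(SEVENZ_MAGIC):
        return "7z: rejected by torch.load(), may evade Pickle scanners"
    return "unknown container"
```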

The implications of this exploit are significant: unsuspecting developers who download the compromised models risk unknowingly installing malware on their devices. The cybersecurity firm reported the findings to the Hugging Face security team on January 20, and the problematic models were removed in under 24 hours. The platform has also reportedly updated the Picklescan tool to better identify threats hidden in compromised Pickle files.
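For developers pulling models from public hubs, recent PyTorch releases offer a built-in mitigation. The sketch below assumes a hypothetical downloaded file named model.pt: passing weights_only=True to torch.load() (available since PyTorch 1.13) restricts unpickling to plain tensors and primitive types instead of arbitrary Python objects.

```python
import torch

# Defensive loading sketch; "model.pt" is a hypothetical untrusted download.
try:
    # weights_only=True limits deserialization to tensors and primitives,
    # so a pickle stream that references arbitrary callables fails to load.
    state_dict = torch.load("model.pt", weights_only=True)
except Exception as exc:
    # Malformed containers or disallowed objects raise here rather than
    # silently executing embedded code.
    print(f"Refused to load model: {exc}")
```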
