
Hugging Face, one of the most sought-after platforms for hosting AI models, has announced a partnership with software supply chain company JFrog to improve security on the Hugging Face Hub.
Hugging Face explained that model weights can contain code that is executed upon deserialisation, and sometimes at inference time, depending on the format. To tackle this, it is integrating JFrog's scanner into its platform, adding new scanning functionality that reduces false positives on the Model Hub.
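For context, the risk stems largely from how Python's pickle format works: deserialising a pickle stream can invoke arbitrary callables. The snippet below is a minimal, hypothetical illustration (not an actual malicious model) of how a payload can be embedded in a pickled object and run the moment it is loaded.

```python
import pickle

# Hypothetical illustration: a class whose __reduce__ method tells pickle
# to call an arbitrary function when the bytes are deserialised.
class MaliciousPayload:
    def __reduce__(self):
        # On pickle.loads, this instructs the unpickler to call print(...).
        # A real attack would call os.system, subprocess.run, etc.
        return (print, ("arbitrary code executed during deserialisation",))

payload_bytes = pickle.dumps(MaliciousPayload())

# Simply loading the "model weights" triggers the embedded call.
pickle.loads(payload_bytes)
```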
“Through our integration with Hugging Face, we bring a powerful, methodology-driven approach that eliminates 96% of current false positives detected by scanners on the Hugging Face platform while also identifying threats that traditional scanners fail to detect,” JFrog stated. “Our unique approach dissects embedded code, extracts payloads, and normalises evidence to eliminate false positives while detecting more serious threats.”
JFrog’s scanner aims to perform a deeper analysis, parsing the code embedded in model weights to check for potentially malicious behaviour. The scanning is powered by its ‘file security scans’ interface.
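The article does not detail JFrog's internal methodology, but a static pass over a pickle file's opcode stream gives a sense of what this kind of analysis involves. The sketch below is an assumption-laden illustration, not JFrog's actual rules: it walks the opcodes with Python's `pickletools` and flags imports of modules commonly abused in payloads, without ever deserialising the file.

```python
import io
import pickletools

# Illustrative denylist of modules that should not appear in model weights
# (an assumption for this sketch, not JFrog's actual detection logic).
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}

def flag_suspicious_imports(pickle_bytes: bytes) -> list[str]:
    """Statically walk the pickle opcode stream and report risky imports."""
    findings = []
    recent_strings = []  # strings pushed onto the stack before STACK_GLOBAL
    for opcode, arg, pos in pickletools.genops(io.BytesIO(pickle_bytes)):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL":
            # Protocol 0/1: arg is "module attribute", e.g. "os system"
            module = arg.split()[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"offset {pos}: GLOBAL {arg}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocol 2+: module and attribute were pushed as strings
            module, attr = recent_strings[-2], recent_strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"offset {pos}: STACK_GLOBAL {module}.{attr}")
    return findings
```

Run against the `payload_bytes` from the earlier example, this would report the embedded `builtins.print` import without executing anything in the file.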
It supports various model formats, including pickle-based models, TensorFlow models, GPT-Generated Unified Format (GGUF) models, Open Neural Network Exchange (ONNX) models, and more. JFrog's documentation lists all the model formats it supports.
Users do not need to do anything to benefit from the integration. All public model repositories are scanned automatically by JFrog as soon as files are pushed to the Model Hub.
Hugging Face has shared an example repository where users can check how the scanner flags malicious files.

With this integration, users should have greater confidence in the security of AI models on Hugging Face before deploying them for their use cases.