TensorFlow artificial intelligence models are at risk due to Keras API flaw

Updated May 31, 2024

TensorFlow AI models could be at risk of supply chain attacks due to a flaw in the Keras API that allows potentially dangerous code to be executed.

Keras is a neural network API that is written in Python and provides a high-level interface to deep learning software libraries such as TensorFlow and Theano.

The vulnerability, tracked as CVE-2024-3660, affects versions of Keras prior to 2.13 and was disclosed by the CERT Coordination Center last Tuesday. The flaw lies in the handling of lambda layers, a type of AI "building block" that lets developers add arbitrary Python code to a model in the form of an anonymous lambda function.
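To illustrate the mechanism, here is a minimal sketch of how a lambda layer carries Python code inside a model; the architecture and file name are hypothetical, not taken from the advisory:

```python
from tensorflow import keras

# A Lambda layer wraps an arbitrary Python function as a model "building block".
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Lambda(lambda x: x * 2.0),  # this Python code is serialized with the model
    keras.layers.Dense(1),
])

# The lambda travels inside the saved artifact and is reconstructed on load.
model.save("example_model.keras")
```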

In affected versions of Keras, code included in lambda layers could be deserialized and executed without any checks, meaning an attacker could distribute a Trojanized version of a popular model containing malicious lambda layers and execute code on the system of anyone who downloads and loads it.
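As a hedged sketch of that risky load path (the file name is hypothetical), loading such a file in a vulnerable version is all it takes to bring the embedded code onto the victim's machine:

```python
from tensorflow import keras

# Pre-2.13 behavior: the lambda code stored in the file is deserialized without
# any check, and runs whenever the reconstructed layer is invoked.
model = keras.models.load_model("downloaded_model.h5")  # legacy HDF5 format
```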

"This is another in a long line of model injection vulnerabilities that are more than a decade old, including previous command injections into the Keras model," Dan McInerney, lead AI threat researcher at Protect AI, told SC Media in an email.

Keras 2.13 and later include a "safe_mode" parameter that defaults to "True" and prevents deserialization of unsafe lambda layers that could trigger arbitrary code execution. However, this check is only performed for models serialized in the Keras version 3 format (.keras file extension), so models saved in older formats can still pose a risk.
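A minimal sketch of the safeguard (file names hypothetical): with the native .keras format, the default safe_mode=True refuses to rebuild lambda layers from an untrusted file unless the caller explicitly opts out.

```python
from tensorflow import keras

# Default: safe_mode=True raises an error if the .keras file contains a lambda layer.
model = keras.models.load_model("third_party_model.keras")

# Explicit opt-out re-enables lambda deserialization; only appropriate for
# models from trusted sources.
# model = keras.models.load_model("third_party_model.keras", safe_mode=False)
```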

The vulnerability poses a potential risk to developers working with TensorFlow models in Keras. A developer could unknowingly include a third-party model with a malicious lambda layer in their application or build their own model based on a base model containing malicious code.

Users and model builders are advised to update Keras to at least version 2.13 and to ensure the safe_mode parameter remains set to "True" so that arbitrary code from lambda layers is not executed. Models should also be saved and loaded in the Keras version 3 serialization format (.keras).
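For example (the model and file names here are hypothetical), saving and reloading in the version 3 format keeps the safe_mode check in play:

```python
from tensorflow import keras

model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])

# Save in the native Keras v3 format (.keras) rather than legacy HDF5 (.h5).
model.save("my_model.keras")

# Reload later; safe_mode defaults to True for .keras files.
restored = keras.models.load_model("my_model.keras")
```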

"Model users should only use models developed and distributed by trusted sources and should always verify model behavior before deployment. They should follow the same best practices for developing and deploying applications in which ML models are integrated as they would for any application that includes any third-party component," the CERT researchers wrote.

Open source software hosting platforms such as Hugging Face, GitHub, npm and PyPI are popular targets for supply chain attacks because modern software relies heavily on third-party open source code. Given the rapid development of AI over the last couple of years, supply chain threats aimed at compromising AI models are likely to increase.

"The risks are compounded by the fact that insecure model formats such as pickle have simply been accepted by the machine learning community as the default model format for years, as well as the massive growth in the use of third-party models downloaded from online repositories such as Hugging Face," McInerney says.

Indeed, earlier this month, malicious models in an insecure pickle format were discovered on the Hugging Face platform.

"There are useful open source tools like ModelScan that can detect malicious models, but that's hardly the end of new ways to get models to execute malicious code without the end user's knowledge," McInerney concluded.
