Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models

Overview
Malicious images generated by JPA (Jailbreaking Prompt Attack) for the NSFW concept under three online T2I services and one open-source T2I model (text in red denotes the adversarial prompts produced by JPA).

Abstract

Text-to-image (T2I) models can be maliciously used to generate harmful content such as sexually explicit, unfaithful, misleading, or otherwise Not-Safe-for-Work (NSFW) images. Previous attacks largely depend on the availability of the diffusion model or involve a lengthy optimization process. In this work, we investigate a more practical and universal attack that does not require access to a target model, and demonstrate that the high-dimensional text embedding space inherently contains NSFW concepts that can be exploited to generate harmful images. We present the Jailbreaking Prompt Attack (JPA). JPA first searches for the target malicious concepts in the text embedding space using a group of antonyms generated by ChatGPT. Subsequently, a prefix prompt is optimized in the discrete vocabulary space to align with the malicious concepts semantically in the text embedding space. We further introduce a soft assignment with gradient masking technique that allows us to perform gradient ascent in the discrete vocabulary space. We perform extensive experiments with open-sourced T2I models and closed-sourced online services with black-box safety checkers. Results show that (1) JPA bypasses both text and image safety checkers, (2) preserves high semantic alignment with the target prompt, and (3) runs much faster than previous methods and can be executed in a fully automated manner. These merits render it a valuable tool for robustness evaluation in future text-to-image generation research.

JPA: Jailbreaking Prompt Attack

JPA optimizes learnable prefix tokens to evade safety filters while aligning prompts semantically with NSFW concepts. Using antonyms generated by ChatGPT, we enhance prompts through gradient-based optimization in discrete vocabulary space, supported by a gradient masking strategy to avoid overly sensitive tokens.

JPA Overview
Figure 4: (a) Overview of JPA. (b) Safety checkers block or sanitize sensitive prompts. JPA finds new prompts that bypass filters while keeping NSFW meaning.

Specifically,

  1. Prompt Construction:
    Given a sensitive prompt \( p_t = [p_1, p_2, ..., p_n] \), JPA prepends k learnable tokens \( [v_1, ..., v_k] \) to form the adversarial prompt:
    \( p_a = [v_1, ..., v_k, p_1, ..., p_n] \)
  2. Concept Direction via Antonyms:
    Using antonym pairs \( (r_i^+, r_i^-) \) generated by ChatGPT, we compute the concept direction in the embedding space:
    \( r = \frac{1}{N}\sum_{i=1}^{N} \left( \mathcal{T}(r_i^+) - \mathcal{T}(r_i^-) \right) \)
    where \( \mathcal{T}(\cdot) \) denotes the text encoder.
  3. Embedding Modification:
    The original prompt embedding is modified to inject the NSFW concept:
    \( \mathcal{T}(p_r) = \mathcal{T}(p_t) + \lambda \cdot r \)
    where \( \lambda \) controls the strength of the injected concept.
  4. Prompt Search via Cosine Similarity:
    The goal is to find a prompt \( p_a \) whose embedding is closest to \( \mathcal{T}(p_r) \):
    \( \max_{p_a} \frac{\mathcal{T}(p_a) \cdot \mathcal{T}(p_r)}{\|\mathcal{T}(p_a)\| \, \|\mathcal{T}(p_r)\|} \)
  5. Optimization in Discrete Space:
    JPA uses Projected Gradient Descent (PGD) with a softmax relaxation over the vocabulary:
    \( \text{embed}[i] = \sum_{k=1}^L \frac{e^{v_{ik}}}{\sum_{h=1}^L e^{v_{ih}}} E_k \)
    where \( L \) is the vocabulary size, \( E_k \) is the embedding of the k-th word, and \( v_{ik} \) is the logit for word k at position i.
  6. Discrete Prompt Extraction:
    After optimization, the final token at each position is selected as:
    \( v_i = \arg\max_k v_{ik} \)
  7. Gradient Masking for Safety:
    To avoid selecting blocked or overly sensitive words, JPA applies gradient masking by assigning a large negative value (e.g., -1e9) to the logits of sensitive tokens, effectively preventing their selection during optimization.
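The seven steps above can be sketched end to end in a few dozen lines. The snippet below is a minimal, self-contained illustration, not the authors' implementation: the text encoder is replaced by a toy mean-pooling function over a random embedding table, and the vocabulary, dimensions, concept direction `r`, and blocked-token list are all hypothetical stand-ins. It shows the soft assignment over the vocabulary, the cosine-similarity objective against the shifted embedding \( \mathcal{T}(p_r) \), gradient masking of sensitive tokens, and the final argmax extraction.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins (hypothetical): a vocabulary of L words with embedding
# table E, and a "text encoder" that simply mean-pools token embeddings.
L, d, k = 50, 16, 3          # vocab size, embedding dim, number of prefix tokens
E = torch.randn(L, d)        # word-embedding table

def encode(token_embeds):
    # Placeholder for the real text encoder T(.): here, mean pooling.
    return token_embeds.mean(dim=0)

# Fixed embeddings of the sensitive prompt p_t, and the shifted target
# T(p_r) = T(p_t) + lambda * r, where r comes from antonym differences
# (steps 2-3). Both are random here for illustration only.
p_t_embeds = torch.randn(4, d)
r = torch.randn(d)
lam = 0.5
target = encode(p_t_embeds) + lam * r

# Learnable logits v_{ik} for the k prefix positions (step 5).
logits = torch.zeros(k, L, requires_grad=True)

# Gradient masking (step 7): push blocked tokens to a huge negative
# logit so softmax assigns them ~0 weight and argmax never picks them.
blocked = {1, 7, 13}
mask = torch.zeros(L)
mask[list(blocked)] = -1e9

opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    probs = F.softmax(logits + mask, dim=-1)      # soft assignment over vocab
    prefix = probs @ E                            # embed[i] = sum_k p_ik * E_k
    p_a = torch.cat([prefix, p_t_embeds], dim=0)  # [v_1..v_k, p_1..p_n]
    loss = -F.cosine_similarity(encode(p_a), target, dim=0)  # step 4 objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# Discrete extraction (step 6): argmax over masked logits per position.
tokens = (logits + mask).argmax(dim=-1)
```

After optimization, `tokens` holds the discrete prefix that, prepended to the sensitive prompt, best matches the shifted embedding while avoiding the blocked vocabulary.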

JPA can generate adversarial prompts that evade safety filters while maintaining semantic alignment with the original intent.

Results

Many online services use safety filters without disclosing their details. We show that JPA can still bypass these filters and generate NSFW images. We tested JPA on four popular platforms: DALL·E 2, Stability.ai, Midjourney, and PIXART-α. More examples are in the Appendix.

Adversarial Prompts

We also tested JPA against models using concept removal defenses. The table shows that offline models like FMN and SLD-Medium struggle to block nudity content.

Performance Nudity

Evaluation of JPA on offline models demonstrates robustness against existing defenses ("nudity" concept).

Performance Violence

Evaluation of JPA on offline models demonstrates robustness against existing defenses ("violence" concept).

Controllable NSFW Rendering

JPA allows controllable rendering of NSFW content intensity by adjusting the embedding strength parameter. Here, we illustrate how the parameter λ influences controllable concept rendering.
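The controllability comes directly from the linearity of the embedding shift \( \mathcal{T}(p_r) = \mathcal{T}(p_t) + \lambda \cdot r \): the component of the shifted embedding along the concept direction grows linearly with λ. A tiny sketch with hypothetical random vectors (standing in for a real prompt embedding and antonym-derived direction) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a prompt embedding T(p_t) and a unit-norm
# concept direction r (derived from antonym pairs in the real method).
d = 16
p_t = rng.standard_normal(d)
r = rng.standard_normal(d)
r /= np.linalg.norm(r)

def inject(lam):
    """T(p_r) = T(p_t) + lam * r: larger lam pushes the embedding
    further along the concept direction."""
    return p_t + lam * r

# The projection onto r grows linearly with lam, which is what makes
# the rendered concept intensity controllable.
proj = [float(inject(lam) @ r) for lam in (0.0, 0.5, 1.0)]
```

Since `r` is unit-norm, each increase of λ by 0.5 raises the projection by exactly 0.5, so sweeping λ smoothly dials the concept's strength up or down.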

Controllable Rendering

(a) Limitation of prior methods: Inconsistent semantics between NSFW generation and input prompt. (b) We precisely control the extent to which ‘nudity’ emerges in the generated images by a scalar λ.

Poster

JPA Poster

Download PDF

Citation

@article{ma2024jailbreaking,
    title={Jailbreaking prompt attack: A controllable adversarial attack against diffusion models},
    author={Ma, Jiachen and Li, Yijiang and Xiao, Zhiqing and Cao, Anda and Zhang, Jie and Ye, Chao and Zhao, Junbo},
    journal={arXiv preprint arXiv:2404.02928},
    year={2024}
  }