Number-Theoretic Cryptographic Framework for Securing Generative Artificial Intelligence Against Adversarial Attacks

Authors

  • Eka Cahya Muliawati, Institut Teknologi Adhi Tama Surabaya

DOI:

https://doi.org/10.69855/science.v1i1.472

Keywords:

Generative AI, Number Theory, Cryptography, Adversarial Attacks, Privacy Preservation

Abstract

The rapid adoption of Generative Artificial Intelligence (GenAI) has intensified concerns regarding security, privacy, and robustness against adversarial attacks. Most existing defense mechanisms rely on adversarial training, differential privacy, or cryptographic techniques applied as external protection layers, which often lack formal mathematical guarantees and are weakly coupled with the internal generative process. This study proposes a novel Number-Theoretic Cryptographic Framework that embeds cryptographic primitives directly into the GenAI lifecycle, including latent-space representations and model parameter handling. Unlike prior approaches, the proposed framework integrates number-theoretic hardness assumptions, specifically lattice-based and elliptic-curve cryptography, into the core generative mechanism, enabling mathematically grounded and provably secure protection against adversarial exploitation. A comprehensive synthetic dataset is constructed by jointly modeling cryptographic parameters, generative model specifications, and adversarial attack scenarios to systematically evaluate the framework. Experimental results demonstrate that number-theoretic cryptographic integration significantly reduces privacy leakage and model-extraction vulnerability while preserving generative utility. Lattice-based schemes provide the strongest privacy protection, while elliptic-curve cryptography achieves a balanced trade-off between security and computational efficiency. This work introduces a new paradigm for securing GenAI by unifying generative modeling with formal number-theoretic cryptographic security, offering a robust and future-proof defense against both classical and post-quantum adversarial threats.
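The abstract does not specify the paper's actual construction, but the idea of protecting latent-space representations with a lattice-based (LWE) scheme can be illustrated with a toy sketch. The example below is purely hypothetical: the parameters (`N`, `Q`, `NOISE`), the sign-quantization of latent coordinates, and every function name are illustrative assumptions, and the dimensions are far too small for real security.

```python
import random

# Toy LWE parameters -- illustrative only, far too small to be secure.
N = 32          # dimension of the secret vector
Q = 3329        # modulus
NOISE = 3       # maximum absolute error magnitude (NOISE << Q // 4)

def keygen():
    """Sample a random secret vector s in Z_q^N."""
    return [random.randrange(Q) for _ in range(N)]

def encrypt_bit(s, m):
    """Encrypt one bit m as an LWE sample (a, b = <a, s> + e + m * Q//2)."""
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-NOISE, NOISE)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (Q // 2)) % Q
    return a, b

def decrypt_bit(s, ct):
    """Recover the bit by rounding (b - <a, s>) to the nearest multiple of Q//2."""
    a, b = ct
    noisy = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    return 1 if Q // 4 <= noisy < 3 * Q // 4 else 0

def encrypt_latent(s, latent, threshold=0.0):
    """Sign-quantize a latent vector to bits, then encrypt each bit."""
    bits = [1 if x > threshold else 0 for x in latent]
    return [encrypt_bit(s, b) for b in bits]

def decrypt_latent(s, cts):
    return [decrypt_bit(s, ct) for ct in cts]
```

Because the noise magnitude is kept well below Q // 4, decryption always recovers the quantized latent bits exactly, which is the property that lets a holder of the secret key use the protected representation while an adversary faces an LWE-style recovery problem.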


Published

2024-02-03

How to Cite

Eka Cahya Muliawati. (2024). Number-Theoretic Cryptographic Framework for Securing Generative Artificial Intelligence Against Adversarial Attacks. Science Get Journal, 1(1), 12–21. https://doi.org/10.69855/science.v1i1.472