ATTRIBUTION OF STYLEGAN2 SYNTHETIC IMAGES USING A CNN-BASED ARTIFICIAL FINGERPRINT
DOI: https://doi.org/10.70248/jrsit.v3i1.2889

Abstract
Generative models such as StyleGAN2 can produce synthetic images that closely resemble real ones, but they raise potential copyright violations because they are often trained on data used without the owners' permission. This study develops an image attribution method that covertly embeds an artificial fingerprint into a face dataset before the dataset is used to train a generative model. The method is based on a Convolutional Neural Network (CNN) with two main components: an encoder that embeds the fingerprint and a decoder that recovers it. The dataset used is FFHQ at a resolution of 128×128 pixels, and the model was trained for 10 epochs with the Adam optimizer. Evaluation used Binary Cross Entropy (BCE) to measure fingerprint detection accuracy and Mean Squared Error (MSE) to assess the visual quality of the images. The results show that the method embeds the fingerprint imperceptibly (MSE < 0.01) and recovers it with very high accuracy (bitwise accuracy > 99%). This approach contributes a technical building block for automatic attribution systems and copyright protection in the development of generative AI.
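The paper itself does not include source code, so the following is only a minimal PyTorch sketch of the setup the abstract describes: an encoder that embeds a binary fingerprint into an image as a low-amplitude residual, a decoder that recovers the bits, and a joint training step that combines BCE on the recovered bits with MSE between the original and fingerprinted images. All module names, layer sizes, the fingerprint length, and the loss weighting here are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of encoder/decoder fingerprinting (assumptions: PyTorch,
# a 100-bit fingerprint, 128x128 RGB faces). Names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

FP_BITS = 100  # assumed fingerprint length

class FingerprintEncoder(nn.Module):
    """Embeds a binary fingerprint into an image as a small residual."""
    def __init__(self, fp_bits=FP_BITS):
        super().__init__()
        # Project the fingerprint to a coarse spatial map, then fuse with the image.
        self.fp_proj = nn.Linear(fp_bits, 16 * 16)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img, fp):
        b = img.size(0)
        fp_map = self.fp_proj(fp).view(b, 1, 16, 16)
        fp_map = F.interpolate(fp_map, size=img.shape[-2:])
        residual = self.net(torch.cat([img, fp_map], dim=1))
        # Low-amplitude perturbation keeps the embedding near-imperceptible.
        return (img + 0.1 * residual).clamp(-1, 1)

class FingerprintDecoder(nn.Module):
    """Predicts fingerprint bit logits from a (possibly generated) image."""
    def __init__(self, fp_bits=FP_BITS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, fp_bits),
        )

    def forward(self, img):
        return self.net(img)  # raw logits; apply sigmoid for bit probabilities

# Joint training step: BCE drives bit recovery, MSE keeps the image unchanged.
encoder, decoder = FingerprintEncoder(), FingerprintDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

imgs = torch.rand(8, 3, 128, 128) * 2 - 1       # stand-in batch for FFHQ 128x128
fp = torch.randint(0, 2, (8, FP_BITS)).float()  # random binary fingerprint

marked = encoder(imgs, fp)
loss = bce(decoder(marked), fp) + 10.0 * mse(marked, imgs)  # weight is an assumption
opt.zero_grad(); loss.backward(); opt.step()

# Bitwise accuracy, the metric the abstract reports (>99% after training).
with torch.no_grad():
    bit_acc = ((decoder(marked).sigmoid() > 0.5).float() == fp).float().mean()
```

In the pipeline the abstract describes, the fingerprinted dataset would then be used to train StyleGAN2, and the decoder would be applied to the generator's outputs to attribute them back to the marked training data; that generative-training stage is outside the scope of this sketch.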