Hi, I am Noa! I am an associate professor at the University of Osaka, where I am part of the Institute for Advanced Co-Creation Studies and the D3 Center.
I do research in computer vision, taking an interdisciplinary approach inspired by art and the social sciences. I oppose military and surveillance applications of the field, which is why I am co-organizing the AI for Peace workshop at ICLR 2026.
The Weaponization of Computer Vision: Tracing Military-Surveillance Ties through Conference Sponsorship
Noa Garcia and Amelia Katirai
FAccT 2026
EMMA: Concept Erasure Benchmark with Comprehensive Semantic Metrics and Diverse Categories
Lu Wei, Yuta Nakashima, Noa Garcia
CVPR 2026
ImageSet2Text: Describing Sets of Images through Text
Piera Riccio*, Francesco Galati*, Kajetan Schweighofer, Noa Garcia, Nuria Oliver
AAAI 2026
Processing and acquisition traces in visual encoders: What does CLIP know about your camera?
Ryan Ramos*, Vladan Stojnić*, Giorgos Kordopatis-Zilos, Yuta Nakashima, Giorgos Tolias, Noa Garcia
ICCV 2025 (highlight)
Bias in Gender Bias Benchmarks: How Spurious Features Distort Evaluation
Yusuke Hirota, Ryo Hachiuma, Boyi Li, Ximing Lu, Michael Ross Boone, Boris Ivanovic, Yejin Choi, Marco Pavone, Yu-Chiang Frank Wang, Noa Garcia, Yuta Nakashima, Chao-Han Huck Yang
ICCV 2025
No Annotations for Object Detection in Art through Stable Diffusion
Patrick Ramos, Nicolas Gonthier, Selina Khan, Yuta Nakashima, Noa Garcia
WACV 2025
Stable Diffusion Exposed: Gender Bias from Prompt to Image
Yankun Wu, Yuta Nakashima, Noa Garcia
AIES 2024
Amelia Katirai, Noa Garcia, Kazuki Ide, Yuta Nakashima, Atsuo Kishimoto
AI and Ethics
Would Deep Generative Models Amplify Bias in Future Models?
Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima
CVPR 2024
Auditing Image-based NSFW Classifiers for Content Filtering
Warren Leu, Yuta Nakashima, Noa Garcia
FAccT 2024
Can multiple-choice questions really be useful in detecting the abilities of LLMs?
Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia
LREC-COLING 2024
CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care
Tong Xiang, Liangzhi Li, Wangyue Li, Mingbai Bai, Lu Wei, Bowen Wang, Noa Garcia
NeurIPS D&B 2023
Uncurated Image-Text Datasets: Shedding Light on Demographic Bias
Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima
CVPR 2023 (highlight)
Not Only Generative Art: Stable Diffusion for Content-Style Disentanglement in Art Analysis
Yankun Wu, Yuta Nakashima, Noa Garcia
ICMR 2023
Quantifying Societal Bias Amplification in Image Captioning
Yusuke Hirota, Yuta Nakashima, Noa Garcia
CVPR 2022 (oral)
Gender and Racial Bias in Visual Question Answering Datasets
Yusuke Hirota, Yuta Nakashima, Noa Garcia
FAccT 2022
The MET Dataset: Instance-level Recognition for Artworks
Nikolaos-Antonios Ypsilantis, Noa Garcia, Guangxing Han, Sarah Ibrahimi, Nanne van Noord, Giorgos Tolias
NeurIPS D&B 2021
Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation
Zechen Bai, Yuta Nakashima, Noa Garcia
ICCV 2021
Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions
Noa Garcia and Yuta Nakashima
ECCV 2020
A Dataset and Baselines for Visual Question Answering on Art
Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui Chu, Yuta Nakashima, Teruko Mitamura
ECCV workshop VISART 2020
KnowIT VQA: Answering Knowledge-Based Questions about Videos
Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima
AAAI 2020
BERT representations for Video Question Answering
Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, Haruo Takemura
WACV 2020
Context-Aware Embeddings for Automatic Art Analysis
Noa Garcia, Benjamin Renoust, Yuta Nakashima
ICMR 2019
How to Read Paintings: Semantic Art Understanding with Multi-Modal Retrieval
Noa Garcia and George Vogiatzis
ECCV workshop VISART 2018