Full Publications List
2025
Conferences
Patrick Ramos, Nicolas Gonthier, Selina Khan, Yuta Nakashima, Noa Garcia. No Annotations for Object Detection in Art through Stable Diffusion. (WACV 2025, to appear)
2024
Conferences
Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima. Would Deep Generative Models Amplify Bias in Future Models? (CVPR 2024)
Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia. Can multiple-choice questions really be useful in detecting the abilities of LLMs? (LREC-COLING 2024)
Warren Leu, Yuta Nakashima, Noa Garcia. Auditing Image-based NSFW Classifiers for Content Filtering. (FAccT 2024)
Tianwei Chen, Noa Garcia, Liangzhi Li, Yuta Nakashima. Retrieving Emotional Stimuli in Artworks. (ICMR 2024)
Yankun Wu, Yuta Nakashima, Noa Garcia, Sheng Li, Zhaoyang Zeng. Reproducibility Companion Paper: Stable Diffusion for Content-Style Disentanglement in Art Analysis. (ICMR 2024)
Yankun Wu, Yuta Nakashima, Noa Garcia. Stable Diffusion Exposed: Gender Bias from Prompt to Image. (AIES 2024)
Journals
Tianwei Chen, Noa Garcia, Liangzhi Li, Yuta Nakashima. Exploring Emotional Stimuli Detection in Artworks: A Benchmark Dataset and Baselines Evaluation. (Journal of Imaging, Jun. 2024)
Yankun Wu, Yuta Nakashima, Noa Garcia. GOYA: Leveraging Generative Art for Content-Style Disentanglement. (Journal of Imaging, Jun. 2024)
Amelia Katirai, Noa Garcia, Kazuki Ide, Yuta Nakashima, Atsuo Kishimoto. Situating the social issues of image generation models in the model life cycle: a sociotechnical approach. (AI and Ethics, preprint version)
Tianwei Chen, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Hajime Nagahara. Learning More May Not Be Better: Knowledge Transferability in Vision-and-Language Tasks. (Journal of Imaging, Nov. 2024)
Preprints
Xiao Liu, Liangzhi Li, Tong Xiang, Fuying Ye, Lu Wei, Wangyue Li, Noa Garcia. Imposter.AI: Adversarial attacks with hidden intentions towards aligned large language models. (arXiv:2407.15399)
2023
Conferences
Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima. Uncurated Image-Text Datasets: Shedding Light on Demographic Bias. (CVPR 2023, Highlight)
Yusuke Hirota, Yuta Nakashima, Noa Garcia. Model-Agnostic Gender Debiased Image Captioning. (CVPR 2023)
Yankun Wu, Yuta Nakashima, Noa Garcia. Not Only Generative Art: Stable Diffusion for Content-Style Disentanglement in Art Analysis. (ICMR 2023)
Tong Xiang, Liangzhi Li, Wangyue Li, Mingbai Bai, Lu Wei, Bowen Wang, Noa Garcia. CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care. (NeurIPS 2023 Datasets and Benchmarks Track)
Workshops
Yankun Wu, Yuta Nakashima, Noa Garcia. Leverage Generative Art for Content-Style Disentanglement. (Women in Computer Vision Workshop at ICCV 2023)
2022
Conferences
Yusuke Hirota, Yuta Nakashima, Noa Garcia. Quantifying Societal Bias Amplification in Image Captioning. (CVPR 2022, Oral)
Yusuke Hirota, Yuta Nakashima, Noa Garcia. Gender and Racial Bias in Visual Question Answering Datasets. (ACM FAccT 2022)
Preprints
Tianwei Chen, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Hajime Nagahara. Learning More May Not Be Better: Knowledge Transferability in Vision and Language Tasks. (arXiv:2208.10758)
2021
Conferences
Zechen Bai, Yuta Nakashima, Noa Garcia. Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation. (ICCV 2021) [Data] [Code] [Video]
Nikolaos-Antonios Ypsilantis, Noa Garcia, Guangxing Han, Sarah Ibrahimi, Nanne van Noord, Giorgos Tolias. Instance-level Recognition for Artworks: The MET Dataset. (NeurIPS 2021 Datasets and Benchmarks) [Data] [Code] [Video]
Tianran Wu, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Haruo Takemura. Transferring Domain-Agnostic Knowledge in Video Question Answering. (BMVC 2021) [Data]
Cheikh Brahim El Vaigh, Noa Garcia, Benjamin Renoust, Chenhui Chu, Yuta Nakashima, Hajime Nagahara. GCNBoost: Artwork Classification by Label Propagation through a Knowledge Graph. (ICMR 2021)
Yuta Kayatani, Zekun Yang, Mayu Otani, Noa Garcia, Chenhui Chu, Yuta Nakashima, Haruo Takemura. The Laughing Machine: Predicting Humor in Video. (WACV 2021)
Workshops
Jules Samaran, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima. Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers. (Student Research Workshop, ACL-IJCNLP 2021)
Yusuke Hirota, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Ittetsu Taniguchi, Takao Onoye. Visual Question Answering with Textual Representations. (Closing the Loop Between Vision and Language workshop, ICCV 2021)
Journals
Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, Haruo Takemura. A Comparative Study of Language Transformers for Video Question Answering. (Neurocomputing, Mar. 2021)
Chenhui Chu, Vinicius Oliveira, Felix Giovanni Virgo, Mayu Otani, Noa Garcia, Yuta Nakashima. The Semantic Typology of Visually Grounded Paraphrases. (CVIU, Dec. 2021)
Preprints
Yusuke Hirota, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Ittetsu Taniguchi, Takao Onoye. A Picture May Be Worth a Hundred Words for Visual Question Answering. (arXiv:2106.13445)
2020
Conferences
Noa Garcia, Yuta Nakashima. Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions. (ECCV 2020) [Code] [Video]
Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima. KnowIT VQA: Answering Knowledge-Based Questions about Videos. (AAAI 2020) [Data] [Code]
Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, Haruo Takemura. BERT Representations for Video Question Answering. (WACV 2020)
Workshops
Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui Chu, Yuta Nakashima, Teruko Mitamura. A Dataset and Baselines for Visual Question Answering on Art. (VISART, ECCV 2020) [Code&Data] [Video]
Nikolai Huckle, Noa Garcia, Yuta Nakashima. Demographic Influences on Contemporary Art with Unsupervised Style Embeddings. (VISART, ECCV 2020)
Journals
Wenjian Dong, Mayu Otani, Noa Garcia, Yuta Nakashima, Chenhui Chu. Cross-Lingual Visual Grounding. (IEEE Access, Dec. 2020)
2019
Conferences
Noa Garcia, Benjamin Renoust, Yuta Nakashima. Context-Aware Embeddings for Automatic Art Analysis. (ICMR 2019) [Code] Best paper award candidate
Workshops
Noa Garcia, Benjamin Renoust, Yuta Nakashima. Understanding Art through Multi-Modal Retrieval in Paintings. (Language and Vision workshop, CVPR 2019) [Code]
Benjamin Renoust, Matheus Franca, Jacob Chan, Noa Garcia, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Juereng Wang, Yutaka Fujioka. Historical and Modern Features for Buddha Statue Classification. (SUMAC workshop, ACMMM 2019)
Journals
Noa Garcia, George Vogiatzis. Learning Non-Metric Visual Similarity for Image Retrieval. (IMAVIS, Feb. 2019) [Code]
Noa Garcia, Benjamin Renoust, Yuta Nakashima. ContextNet: Representation and Exploration for Painting Classification and Retrieval in Context. (IJMIR, Dec. 2019) [Code]
2018
Conferences
Noa Garcia, George Vogiatzis. Asymmetric Spatio-Temporal Embeddings for Large-Scale Image-to-Video Retrieval. (BMVC 2018) [Code]
Noa Garcia. Temporal Aggregation of Visual Features for Large-Scale Image-to-Video Retrieval. (ICMR 2018 Doctoral Symposium)
Workshops
Noa Garcia, George Vogiatzis. How to Read Paintings: Semantic Art Understanding with Multi-Modal Retrieval. (VISART, ECCV 2018) [Data] [Code]
Thesis
Noa Garcia. Spatial and Temporal Representations for Multi-Modal Visual Retrieval. (Ph.D. Thesis, Aston University, 2018)
Before 2018
Noa Garcia, George Vogiatzis. Dress like a Star: Retrieving Fashion Products from Videos. (Computer Vision for Fashion workshop, ICCV 2017) [Code]
Noa Garcia, George Vogiatzis. Exploiting Redundancy in Binary Features for Image Retrieval in Large-Scale Video Collections. (CVMP 2016)
Nuria Sanchez, Noa Garcia, Jose Manuel Menendez. Feedback-Based Parameterized Strategies for Improving Performance of Video Surveillance Understanding Frameworks. (IBERAMIA 2014)
Carmen Bravo, Nuria Sanchez, Noa Garcia, Jose Manuel Menendez. Outdoor Vacant Parking Space Detector for Improving Mobility in Smart Cities. (EPIA 2013)
Noa Garcia. Heart Rate Estimation Using Facial Video Information. (Telecom Engineering thesis, 2012)