Investigating Analogical Explanation Capabilities of Large Language Models


Suitable for:
Student or Bachelor project

Description:

State-of-the-art large language models such as GPT-4 can match or even exceed human performance on certain reasoning tasks. Analogical reasoning is a core element of human cognition and supports creative problem solving by transferring knowledge from one domain to another based on structural/causal similarities between them. Previous research has characterized the strengths and weaknesses of LLMs in finding and applying appropriate analogies to solve various tasks (Webb et al., 2023; Sourati et al., 2023) and has proposed prompting strategies to guide the models through the reasoning process (Yasunaga et al., 2023). This thesis topic builds on that research to evaluate how well LLMs find appropriate analogies in different types of domain representations and use them to construct analogical explanations. For this, you will explore different prompting strategies, such as analogical prompting and chain-of-thought prompting, on several state-of-the-art LLMs.
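
To make the direction concrete, below is a minimal sketch of analogical prompting in the spirit of Yasunaga et al. (2023): the model is first asked to recall structurally similar problems before solving the target problem. The sketch assumes the official openai Python client with an OPENAI_API_KEY set in the environment; the model name and prompt wording are illustrative placeholders, not part of the thesis specification.

    # Minimal sketch of analogical prompting (cf. Yasunaga et al., 2023):
    # the model is asked to recall structurally similar problems and their
    # solutions before tackling the target problem. Assumes the official
    # `openai` Python client with OPENAI_API_KEY set in the environment;
    # the model name and prompt wording are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    ANALOGICAL_TEMPLATE = """\
    Problem: {problem}

    Instructions:
    1. Recall three problems from other domains that are structurally
       similar to the problem above; describe each briefly with its solution.
    2. Explain how each analogy maps onto the target problem (shared
       structure, corresponding entities and relations).
    3. Solve the target problem using the most suitable analogy and state
       the final answer explicitly.
    """

    def solve_with_analogies(problem: str, model: str = "gpt-4") -> str:
        """Query the model with an analogical-prompting template."""
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": ANALOGICAL_TEMPLATE.format(problem=problem)}],
        )
        return response.choices[0].message.content

    print(solve_with_analogies(
        "How could a city reduce congestion at a single bottleneck bridge?"))

Part of the thesis would be varying such templates systematically (e.g. number of recalled analogies, explicit mapping steps) and comparing them against plain chain-of-thought prompting.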

Your profile:

  • interest in large language models and prompting techniques
  • interest in and/or experience with evaluating machine learning models
  • interest in analogical reasoning in humans
  • programming skills in Python
  • some experience with knowledge graphs and/or graph databases is useful, but not mandatory

References:

Sourati, Z., Ilievski, F., & Sommerauer, P. (2023). ARN: A Comprehensive Framework and Dataset for Analogical Reasoning on Narratives (arXiv:2310.00996). arXiv. http://arxiv.org/abs/2310.00996

Webb, T., Holyoak, K. J., & Lu, H. (2023). Emergent analogical reasoning in large language models. Nature Human Behaviour, 7(9), 1526–1541. https://doi.org/10.1038/s41562-023-01659-w

Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E. H., & Zhou, D. (2023). Large Language Models as Analogical Reasoners (arXiv:2310.01714). arXiv. http://arxiv.org/abs/2310.01714


Contact:
Lina Mavrina