Conversational Implicature in Human-AI Interactions
A Pragmatic Perspective
DOI: https://doi.org/10.55559/fgr.v1i3.22

Keywords: Conversational implicature, human-AI interaction, pragmatics, Gricean maxims, discourse analysis

Abstract
As human-computer interaction becomes increasingly conversational, AI (artificial intelligence) systems are expected to communicate more and more like people. A major challenge remains, however: understanding and producing conversational implicature, a core component of pragmatic competence. This article examines how implicature operates in human-AI interaction and how well AI systems interpret and convey implied meaning. Drawing on Grice's Cooperative Principle and maxims, the study evaluates the real-world performance of conversational agents such as ChatGPT, Google Assistant, and Siri. The paper adopts a mixed-methods design that combines controlled tests with qualitative discourse analysis, examining both how people interpret implicature and how AI behaves in situations that call for humour, politeness, or indirect language. The findings show that AI trained on large datasets can reproduce some implied meanings, yet it often struggles to grasp non-literal intent or to process context dynamically.
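To make the controlled-test component of this design concrete, the sketch below shows one way an implicature probe could be set up. It is a minimal illustration under stated assumptions, not the instrument used in the study: the test items, the query_model stub, and the keyword-based scoring are hypothetical stand-ins for a real chatbot API and human annotation.

```python
# Minimal sketch of a controlled implicature test of the kind the study
# describes. Hypothetical throughout: a real experiment would call an
# actual chatbot API and use human judges rather than keyword matching.

TEST_ITEMS = [
    {
        # Relevance-based implicature: an indirect answer to a question.
        "prompt": 'A asks: "Are you coming to the party tonight?" '
                  'B replies: "I have an early flight tomorrow." '
                  "What is B implying?",
        "implicated_keywords": ["not coming", "can't come", "declining"],
    },
    {
        # Scalar implicature: "some" typically implicates "not all".
        "prompt": 'A teacher says: "Some students passed the exam." '
                  "What does this suggest about the rest of the students?",
        "implicated_keywords": ["not all", "some failed", "did not pass"],
    },
]


def query_model(prompt: str) -> str:
    """Placeholder for a call to a conversational agent.

    In a real study this would send `prompt` to a system such as
    ChatGPT and return its reply. Here it returns canned text so the
    sketch runs end to end.
    """
    canned = {
        TEST_ITEMS[0]["prompt"]: "B is implying that they are not coming to the party.",
        TEST_ITEMS[1]["prompt"]: "It suggests that all students passed.",  # a pragmatic failure
    }
    return canned[prompt]


def counts_as_understood(reply: str, keywords: list[str]) -> bool:
    """Crude automatic check: does the reply mention the implicated meaning?"""
    reply = reply.lower()
    return any(kw in reply for kw in keywords)


if __name__ == "__main__":
    for item in TEST_ITEMS:
        reply = query_model(item["prompt"])
        verdict = "PASS" if counts_as_understood(reply, item["implicated_keywords"]) else "FAIL"
        print(f"{verdict}: {reply}")
```

In practice, keyword matching is far too blunt for pragmatic judgements, which is one reason a design like the one described here pairs controlled tests with qualitative discourse analysis of the agents' replies.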
References
1. Grice, H. P. (1975). Logic and conversation. In Cole, P. & Morgan, J. L. (Eds.), Syntax and Semantics: Speech Acts (Vol. 3, pp. 41–58). Academic Press.
2. Levinson, S. C. (1983). Pragmatics. Cambridge University Press.
3. Horn, L. R. (1984). Toward a new taxonomy for pragmatic inference. In Schiffrin, D. (Ed.), Meaning, Form, and Use in Context (pp. 11–42). Georgetown University Press.
4. Sperber, D., & Wilson, D. (1995). Relevance: Communication and Cognition. Blackwell.
5. Allen, J. F. (1995). Natural Language Understanding. Benjamin/Cummings.
6. Brown, T., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
7. OpenAI. (2022). ChatGPT: Optimizing language models for dialogue. OpenAI Blog.
8. Niven, T., & Kao, H. Y. (2019). Probing neural network comprehension of natural language arguments. ACL.
9. Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. ACL.
10. Shin, J., et al. (2022). Pragmatic evaluation of dialog systems. Journal of Artificial Intelligence Research, 73, 1–27.
11. Dastjerdi, A., et al. (2023). Implicature processing in large language models. Transactions of the ACL, 11, 345–360.
12. Pérez-Marín, D., & Pascual-Nieto, I. (2011). Intelligent conversational agents in education. AI & Education Review, 22(3), 123–138.
13. Traum, D. R., & Allen, J. (1994). Discourse obligations in dialogue processing. Proceedings of ACL, 1–8.
14. ISO 24617-2. (2012). Semantic annotation framework for dialogue acts. International Organization for Standardization.
15. Ravichander, A., & Black, A. W. (2018). Obfuscating intent in dialogue systems. NAACL.
16. Hancock, B., et al. (2019). Learning from dialogue after deployment. ACL.
17. Holtzman, A., et al. (2019). The curious case of neural text degeneration. ICLR.
18. Schlangen, D., et al. (2021). Pragmatics-aware language modeling. EMNLP.
19. Prabhumoye, S., et al. (2021). Controlled text generation with implicit and explicit attributes. NAACL.
20. Ginzburg, J. (2012). The Interactive Stance: Meaning for Conversation. Oxford University Press.
License
Copyright (c) 2025 Yousif Salman, Doaa TaherMatrood (Authors)

This work is licensed under a Creative Commons Attribution 4.0 International License.