How AI Learns Pragmatics: The Limits of Contextual Understanding

Authors

  • Doaa Taher Matrood
  • Sharifi Shahla

DOI:

https://doi.org/10.55559/fgr.v1i3.23

Keywords:

Context, Machine Learning, Natural Language Processing, Pragmatics, Understanding

Abstract

Natural language processing (NLP) has advanced considerably within artificial intelligence (AI), enabling machines to perform increasingly complex language tasks. Understanding and producing pragmatic meaning, that is, the way context shapes how utterances are interpreted, nevertheless remains a major challenge. This article examines how AI systems learn pragmatics, focusing on their capabilities and limitations in contextual understanding. Drawing on pragmatics, linguistics, and cognitive science, the study analyzes how modern AI models, particularly transformer-based systems, handle phenomena such as implicature, speech acts, deixis, and conversational coherence, using both quantitative performance metrics and qualitative error analysis. The results show that AI models can reproduce certain patterns of pragmatic reasoning but struggle with figurative language, indirect meanings, and multi-turn conversation. The discussion considers the implications for AI design and proposes directions for building systems that are more context-aware and pragmatically competent. By combining linguistic theory with computational modeling, this study contributes to making AI language understanding more human-like.

References

Austin, J. L. (1962). How to do things with words. Oxford University Press.

Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5185–5198. https://doi.org/10.18653/v1/2020.acl-main.463

Bosselut, A., Rashkin, H., Sap, M., Malaviya, C., Celikyilmaz, A., & Choi, Y. (2019). COMET: Commonsense transformers for automatic knowledge graph construction. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4762–4779. https://doi.org/10.18653/v1/P19-1471

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT, 4171–4186.

Farkas, R., Vincze, V., Móra, G., Csirik, J., & Szarvas, G. (2010). The CoNLL-2010 shared task: Learning to detect hedges and their scope in natural language text. Proceedings of CoNLL, 1–12.

Ghosh, D., & Veale, T. (2016). Fracking sarcasm using neural network. Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 161–169.

Henderson, M., Thomson, B., & Williams, J. D. (2020). Training neural response selection for task-oriented dialogue systems. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5390–5400.

Jurafsky, D., & Martin, J. H. (2021). Speech and language processing (3rd ed., draft). https://web.stanford.edu/~jurafsky/slp3/

Juraska, J., Choi, E., & McKeown, K. (2020). PragBank: Annotating and modeling pragmatic phenomena in language. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4973–4985.

Levinson, S. C. (1983). Pragmatics. Cambridge University Press.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Liu, Z., Chen, H., & Li, M. (2021). Sarcasm detection in social media: A deep learning approach. Journal of Information Science, 47(3), 329–341.

Riloff, E., Qadir, A., Surve, P., De Silva, L., Gilbert, N., & Huang, R. (2013). Sarcasm as contrast between a positive sentiment and negative situation. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 704–714.

Stolcke, A., Ries, K., Coccaro, N., Shriberg, E., Bates, R., Jurafsky, D., ... & Meteer, M. (2000). Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3), 339–373.

Swerts, M., & Ostendorf, M. (1997). Prosody as a clue to the recognition of discourse structure. Speech Communication, 22(1), 25–41.

Zhang, Z., Ferrara, E., & MacDonald, C. (2020). Speech act classification for online conversations with application to hate speech detection. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2033–2043.

Zhou, X., Wan, X., & Xiao, J. (2020). A survey on sarcasm detection: Approaches, datasets, and challenges. ACM Computing Surveys, 53(4), 1–33.

Published on:

05-10-2025


Section

Review Article

How to Cite

Matrood, D., & Shahla, S. (2025). How AI learns pragmatics: The limits of contextual understanding. Frontiers in Global Research, 1(3), 17–21. https://doi.org/10.55559/fgr.v1i3.23