NLP810: Robust and Trustworthy Natural Language Processing
TB = Textbook or Required reading
REF = Reference or supplemental reading
| Type | Title | eBook | Call Number |
| --- | --- | --- | --- |
|  | S. Minaee, T. Mikolov, N. Nikzad, M. Chenaghlu, R. Socher, X. Amatriain, and J. Gao, "Large Language Models: A Survey," arXiv, 2024. |  | NA |
|  | W. X. Zhao et al., "A Survey of Large Language Models," arXiv, 2023. | Open Access | NA |
|  | H. Sajjad, N. Durrani, and F. Dalvi, "Neuron-Level Interpretation of Deep NLP Models: A Survey," Transactions of the Association for Computational Linguistics, vol. 10, pp. 1285–1303, 2022. |  | NA |
|  | S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach, "Language (Technology) is Power: A Critical Survey of 'Bias' in NLP," in Proc. 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 5454–5476. |  | NA |
|  | Y. Chang et al., "A Survey on Evaluation of Large Language Models," ACM Transactions on Intelligent Systems and Technology, 2023. |  | NA |
|  | K. Ramesh, S. Sitaram, and M. Choudhury, "Fairness in Language Models Beyond English: Gaps and Challenges," in Findings of the Association for Computational Linguistics: EACL 2023, 2023, pp. 2061–2074. |  | NA |
|  | M. K. Sarker, L. Zhou, A. Eberhart, and P. Hitzler, "Neuro-Symbolic Artificial Intelligence: Current Trends," arXiv, 2021. |  | NA |
|  | L. Hu, Z. Liu, Z. Zhao, L. Hou, L. Nie, and J. Li, "A Survey of Knowledge Enhanced Pre-trained Language Models," arXiv, 2022. |  | NA |
|  | Y. Gao et al., "Retrieval-Augmented Generation for Large Language Models: A Survey," arXiv, 2023. |  | NA |
|  | M. Choudhury and A. Deshpande, "How Linguistically Fair Are Multilingual Pre-Trained Language Models?," in Proc. AAAI Conference on Artificial Intelligence, vol. 35, no. 14, 2021, pp. 12710–12718. |  | NA |
|  | T. Y. Zhuo, Y. Huang, C. Chen, and Z. Xing, "Exploring AI Ethics of ChatGPT: A Diagnostic Analysis," arXiv, 2023. |  | NA |
|  | V. Rawte, A. Sheth, and A. Das, "A Survey of Hallucination in Large Foundation Models," arXiv, 2023. |  | NA |
|  | J. Mökander, J. Schuett, H. R. Kirk, and L. Floridi, "Auditing Large Language Models: A Three-Layered Approach," AI and Ethics, pp. 1–31, 2023. | Open Access | NA |
|  | D. Hershcovich et al., "Challenges and Strategies in Cross-Cultural NLP," in Proc. 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 2022, pp. 6997–7013. |  | NA |