A review of the applications and tasks of large language models in the legal field
Abstract
The rapid advancement of artificial intelligence has accelerated the construction of smart judicial systems, bringing profound transformations to legal practice and research. As a core breakthrough in natural language processing, large language models (LLMs) have demonstrated significant potential in tasks such as legal text analysis, legal reasoning, and intelligent decision support, owing to their robust language understanding and reasoning capabilities. However, the specialized nature and rigorous logical demands of the legal domain pose challenges for LLMs in terms of interpretability, knowledge timeliness, and reasoning stability. This paper reviews the applications and task-oriented research of LLMs in the legal field, providing readers with a structured framework for understanding the area. We first summarize the architectural characteristics, training paradigms, and technical approaches of representative legal LLMs. Subsequently, focusing on two key judicial tasks, similar case retrieval and judicial examinations, we discuss existing datasets, evaluation metrics, and methodological advances. Furthermore, we outline the main challenges faced by legal LLMs on these two tasks, including insufficient reasoning consistency, inadequate utilization of domain knowledge, and limitations in handling long texts. Finally, we propose future research directions to serve as a reference for subsequent studies.