05 April 2026, Volume 38 Issue 4
The Impacts and Implications of OpenClaw for Scientific and Technical Literature Intelligence Work | Open Access
QIAN Li, YANG Yanxi, ZHANG Yuanzhe, HU Maodi, CHANG Zhijun
2026, 38(4):  4-12.  DOI: 10.13998/j.cnki.issn1002-1248.26-0179

[Purpose/Significance] Agents driven by large language models (LLMs) are transitioning from executing tasks based on predefined workflows to autonomously planning and taking dynamic actions in complex environments. Thus, scientific and technical literature intelligence work is expanding its scope from stage-based information processing to continuous intelligence analysis. OpenClaw is an open agent execution framework that integrates capabilities such as multi-channel access, task orchestration, tool invocation, memory maintenance, and persistent operation. However, a systematic examination of its potential applications, impacts, and practical constraints in the field of scientific and technical literature intelligence remains lacking. This study aims to analyze OpenClaw's technical architecture and comprehensively evaluate how it transforms literature retrieval, knowledge organization, intelligence analysis, and knowledge service processes. [Method/Process] This study systematically reviewed the technical architecture and core components of OpenClaw, focusing on four interdependent layers: the access and communication layer, the agentic loop execution layer, the tool invocation and capability extension layer, and the memory maintenance layer with persistent operation. It related OpenClaw's architectural logic to the organization of scientific literature, intelligence analysis workflows, and knowledge service scenarios. The study further discussed the changes that OpenClaw's architectural logic may bring to scientific and technical literature intelligence work, including shifts in retrieval patterns, knowledge processing methods, analytical workflows, and service outcomes, as well as the technical and ethical challenges that may emerge in practice. [Results/Conclusions] OpenClaw provides a new technical reference that transitions scientific and technical literature intelligence work from stage-based information processing to continuous knowledge work.
Nevertheless, critical challenges persist. These include planning reliability, adaptability of domain knowledge, stability of tool integration, interpretability and traceability, and ethical risks related to data privacy, accountability, and open-ended extension mechanisms. To address these challenges and support the rapid development of an intelligent scientific and technical literature intelligence system, this study proposes the following seven interrelated development directions: 1) establishing task inventories for high-value scientific and technical intelligence scenarios, 2) developing a hybrid foundational model paradigm that combines general-purpose LLMs, task-specific large models, small models, and specialized tool sets, 3) advancing a one-click service model for autonomous scientific discovery and intelligent intelligence support, 4) constructing national-level corpora of literature and intelligence resources tailored to intelligent scientific and technical intelligence scenarios, 5) promoting in-depth collaboration between large, domain-specific models and agents, 6) designing a safe and controllable roadmap for progressive deployment, and 7) establishing a full-process, closed-loop governance mechanism. These proposals are expected to provide valuable references for both disciplinary development and professional practice. They will facilitate the gradual transformation of agents from auxiliary tools into a vital capability for the future of scientific and technical literature intelligence work.

Library Transformation in the Age of AI Agents: Service Reconfiguration and Governance Framework Based on the OpenClaw Architecture | Open Access
LIU Wei, JIN Jiaqin
2026, 38(4):  13-22.  DOI: 10.13998/j.cnki.issn1002-1248.26-0120

[Purpose/Significance] The rapid evolution of artificial intelligence technologies from dialogue-based generation to autonomous task execution marks a paradigm shift with profound implications for library services. A new generation of AI agents, exemplified by the open-source project OpenClaw, can independently plan multi-step tasks, invoke external tools, operate computer interfaces through visual perception, and deliver structured work products with minimal human intervention. The shift from "answering questions" to "completing tasks" fundamentally challenges the traditional library service model, which has long been based on the idea that librarians serve as the primary connection between information resources and users. Libraries worldwide are facing an increasing structural tension: their collections are expanding while staffing levels remain constrained, resulting in unmet knowledge service demands. Agent technologies, with their capabilities for autonomous planning, tool invocation, environmental perception, and persistent memory, offer a potential pathway to address this gap. However, the library community currently lacks a systematic analytical framework through which to understand how this technology paradigm intersects with existing service architectures, governance requirements, and organizational structures. This study addresses this gap by providing both a conceptual framework for analyzing agent technologies in the library context and practical guidance for their implementation and governance, contributing to the broader discourse on intelligent library transformation as articulated in national science and technology development strategies. [Method/Process] This study employs a multi-method research design with OpenClaw as the primary analytical lens. The technical architecture analysis involves systematic examination of OpenClaw's publicly available documentation, GitHub source code repository, and official technical publications.
Four core mechanisms are deconstructed in detail: the Computer Use Agent paradigm, which enables vision-driven interface operation through periodic screen capture, multimodal language model interpretation, and simulated mouse and keyboard actions; the local-first architecture with model-agnostic design, which maintains data sovereignty through a decentralized gateway-node topology while supporting flexible switching among multiple large language models; the Heartbeat mechanism, which transforms the agent from a passive responder into a proactive monitor through condition-triggered self-inspection cycles; and the Model Context Protocol, an open standard for tool integration that enables any MCP-compliant agent to invoke standardized service capabilities. Case comparison analysis evaluates two contrasting platform approaches for supporting agent deployment in libraries - FOLIO Eureka, representing the next-generation Library Service Platform pathway with its microservice architecture, API gateway, and event-driven communication, and the Cloud Alliance's A-LSP, representing an agent-native design philosophy that positions intelligent agents as the core organizational principle of library service platforms. Policy document analysis examines the IFLA Guide on the Introduction of AI in Libraries, China's New Generation Artificial Intelligence Development Plan, the Data Security Law, and the Personal Information Protection Law, as well as regional policy experiments in agent technology promotion. Security incident case studies draw from the ClawHavoc supply chain attack, which compromised over 21 000 active instances; Cisco Talos security audits, which revealed prompt injection vulnerabilities; and CrowdStrike threat assessments, which identified misconfiguration risks that could transform agents into attack vectors.
[Results/Conclusions] The study proposed a critical distinction between "narrow OpenClaw" (the specific open-source product and its derivative ecosystem) and "broad OpenClaw" (the agent technology paradigm it represents), arguing that libraries must engage strategically with both dimensions while avoiding the twin pitfalls of conflating technology trends with product procurement decisions or dismissing an entire paradigm based on the limitations of a single product. The narrow application analysis identified three viable deployment scenarios - personal productivity tools for librarians, information collection and subject monitoring, and reader-facing service prototyping - while documenting associated risks in technical stability, supply chain security, and regulatory compliance. The broad paradigm analysis revealed five structural impacts on libraries: diversification of service entry points through embedded integration, transformation from reactive response to proactive push services, evolution of reader information behaviors from search to delegation, disruption of commercial ecosystems including usage-based pricing models, and fundamental repositioning of libraries as knowledge infrastructure in the AI ecosystem. Four architectural prerequisites for agent deployment were identified: API openness, event-driven capabilities, permission governance, and observability, with insufficient system openness identified as the primary bottleneck that constrains implementation. Three differentiated implementation pathways were proposed with corresponding phased strategies. 
A comprehensive governance framework has been constructed encompassing six dimensions: system security with defense-in-depth measures, data governance and privacy protection aligned with national legislation, ethical standards addressing algorithmic bias and hallucination risks, copyright compliance addressing the ambiguity of agent-mediated access under existing licensing agreements, human-agent collaboration through a tiered oversight system, and standardization initiatives including library-specific MCP tool standards. The study also proposed institutional innovations such as "agent sandbox zones" that allow controlled experimentation in isolated environments. The research concluded that the highly structured and process-oriented nature of library workflows makes libraries a particularly suitable domain for agent technology adoption, but successful implementation depends on the coordinated advancement of technical readiness, governance maturity, and organizational change capacity. Limitations of this study include the nascent stage of actual agent deployment in libraries, which means the proposed frameworks await empirical validation. Future research directions include conducting empirical studies of library agent deployments, developing standardization pathways for cross-library agent collaboration, investigating copyright licensing adaptation mechanisms for agent-mediated access, and examining the long-term impact of agent technologies on the library profession and library science education.
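The Heartbeat mechanism this abstract describes (a condition-triggered self-inspection cycle that turns a passive responder into a proactive monitor) can be sketched in a few lines. This is an illustrative reconstruction, not OpenClaw's actual API: the `heartbeat` function, the `check_conditions` dictionary, and the reading-list example are all hypothetical names introduced here.

```python
import time

def heartbeat(check_conditions, act, interval_s=60.0, max_cycles=None):
    """Condition-triggered self-inspection loop: wake on a fixed interval,
    evaluate each watch condition, and only act (e.g. push a notification)
    when a condition fires."""
    cycles, results = 0, []
    while max_cycles is None or cycles < max_cycles:
        for name, cond in check_conditions.items():
            if cond():                      # action is condition-triggered, not time-triggered
                results.append(act(name))
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_s)          # idle between self-inspection cycles
    return results

# Hypothetical usage: watch a reading list for newly arrived items.
new_items = ["OpenClaw release notes"]
log = heartbeat(
    {"new_items": lambda: bool(new_items)},
    act=lambda name: f"notify: {name}",
    interval_s=0.0,   # zero delay so the sketch finishes instantly
    max_cycles=3,
)
```

The key design point is that the timer only wakes the agent; whether anything is pushed to the user depends on the watch conditions, which keeps a proactive agent from becoming a noisy one.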

Technical Evolution and Application Scenarios of Open-Source Agents: A Case Study of "OpenClaw" | Open Access
LI Baiyang, REN Shangsheng
2026, 38(4):  23-35.  DOI: 10.13998/j.cnki.issn1002-1248.26-0147

[Purpose/Significance] With the rapid advancement of generative artificial intelligence, open-source agents have emerged as a key driving force in reshaping the development paradigm of artificial intelligence. These agents integrate foundation models, tool chains, and collaborative mechanisms to enable autonomous task execution. From the perspective of technological evolution, this paper conducts a systematic analysis of three core issues concerning open-source agents: technological evolution, application scenarios, and security governance, aiming to clarify the evolutionary laws of open-source agents, expand their application boundaries, and provide theoretical and practical references for their standardized development and rational application. [Method/Process] In terms of evolutionary stages, this study proposed that open-source agents have undergone four distinct phases: pre-history of technology (primitive tool integration stage), single-point intelligence (independent task execution stage), systemic intelligence (multi-tool collaborative stage), and ecological intelligence (multi-agent symbiotic stage). This evolution was driven by a shift in focus from relying solely on the capabilities of foundation models to gradually forming a mature agent ecosystem featuring autonomous operation, multi-agent collaboration mechanisms, and cross-platform interoperability. As a prime example of a practice in the ecological intelligence stage, OpenClaw, with its open-source architecture, modular design, and multi-agent collaborative capabilities, marked a paradigm shift in artificial intelligence, moving from "Model as a Service" (MaaS) to "Agent as a Service" (AaaS) and enabling end-to-end task closed-loop management. 
Regarding application dimensions, a three-tier progressive scenario framework was constructed, encompassing knowledge-intensive assistance (such as academic research and intelligent consulting), tool-intensive execution (such as automated office and industrial control), and collaboration-intensive processes (such as public governance and team collaboration), emphasizing that its core value lies in accomplishing complex end-to-end tasks in a verifiable, auditable, and sustainable manner. In the governance dimension, a four-layer embedded governance framework was proposed to address the security risks and regulatory challenges posed by open-source agents. This framework covers the protocol layer (standard formulation), the platform layer (technical supervision), the execution layer (behavioral constraints), and the ecological layer (industry self-discipline). [Results/Conclusions] This study found that the competitive focus of open-source agents has gradually shifted from the performance of individual models to ecological controllability and governance credibility. As a new type of intelligent entity, whether open-source agents can effectively support diverse application scenarios, such as scientific research, public governance, and industrial production, fundamentally depends on their ability to realize the deep integration of advanced technological capabilities and sound institutional trust. This also provides a core direction for the future development and governance of open-source agents.

Performance of Fine-Tuned Large Language Models in Patent Text Mining | Open Access
LYU Lucheng, ZHOU Jian, SUN Wenjun, ZHAO Yajuan, HAN Tao
2026, 38(4):  36-46.  DOI: 10.13998/j.cnki.issn1002-1248.25-0672

[Purpose/Significance] The use of large language models (LLMs) for patent text mining has become a major research topic in recent years. However, existing studies mainly focus on the application of LLMs to specific tasks, and there is a lack of systematic evaluation of the application effects of fine-tuned LLMs across multiple scenarios. To address this problem, this study takes ChatGLM, an open-source LLM that supports local fine-tuning, as an example. We conduct a comparative evaluation of three types of patent text mining tasks (technical term extraction, patent text generation, and automatic patent classification) under a unified experimental framework. The performance of fine-tuned models is compared from six aspects: different training data sizes, different numbers of training epochs, different prompts, different prefix lengths, different datasets, and single-task versus multi-task fine-tuning. [Method/Process] This study was based on an open-source LLM and carried out fine-tuning research for specific patent tasks in order to clarify the impact of different fine-tuning strategies on the performance of LLMs in patent tasks. Considering task adaptability, model size, inference efficiency, and resource consumption, ChatGLM-6B-int4 was selected as the base model, and P-Tuning V2 was adopted as the fine-tuning method. Three categories of patent tasks are included: extraction, generation, and classification. The extraction task is patent keyword extraction. The generation tasks include: 1) innovation point generation; 2) abstract generation based on a given title; 3) rewriting an existing title; 4) rewriting an existing abstract; 5) generating novelty points based on an existing abstract; 6) generating patent advantages based on an existing abstract; and 7) generating patent application scenarios based on an existing abstract.
Six experimental comparison dimensions are designed: 1) different training data sizes; 2) different numbers of training epochs; 3) different datasets with the same data size; 4) different prompts under the same task and data; 5) different P-Tuning V2 prefix lengths with the same training data; and 6) single-task fine-tuning versus multi-task fine-tuning. Two types of evaluation metrics were used. For extraction and generation tasks, the BLEU metric based on n-gram string matching was adopted. For classification tasks, accuracy, recall, and F1 score were used. [Results/Conclusions] Based on the fine-tuning results, several conclusions were obtained. First, a larger training data size does not always lead to better performance. Second, the appropriate number of training epochs depends on the data size. Third, under the same data distribution, different data subsets have limited influence on performance. Fourth, under the same task and dataset, different prompts have little impact on model performance. Fifth, the optimal prefix length is closely related to the training data size. Sixth, for a specific task, single-task fine-tuning performs better than multi-task fine-tuning. These conclusions provide reference and guidance for fine-tuning LLMs in practical patent information work.
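The BLEU metric used above for the extraction and generation tasks can be illustrated with a minimal sentence-level implementation. This is a sketch for intuition only: the add-one smoothing and uniform 4-gram weights are simplifying assumptions, the example sentences are invented, and an actual evaluation would use an established corpus-level BLEU toolkit.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of the token sequence's n-grams."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of smoothed n-gram precisions
    (n = 1..max_n) multiplied by a brevity penalty for short candidates."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

# Hypothetical generated vs. reference text, whitespace-tokenized.
cand = "a method for extracting key technical terms from patents".split()
ref = "a method for extracting key technical terms from patent text".split()
score = bleu(cand, ref)   # partial n-gram overlap, so 0 < score < 1
```

A candidate identical to its reference scores 1.0; shorter candidates are discounted by the brevity penalty, which is why BLEU suits extraction and generation tasks where both coverage and length matter.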

Identification of Product Innovation Opportunity Based on Problem and Suggestions Using Dual-Source Data | Open Access
GUO Yanli, GAO Rui, ZOU Meifeng, LIU Zidan
2026, 38(4):  47-60.  DOI: 10.13998/j.cnki.issn1002-1248.25-0663

[Purpose/Significance] As the user base grows, the number of online comments is increasing rapidly. The massive volume of comments has broadened the innovative thinking of enterprises and provided more diverse innovative options, but it has also brought about the problem of information overload. Therefore, how to mine practically valuable information from massive online user comments with efficient and precise methods, integrate that information to identify product innovation opportunities, and transform it into high-quality resources for enterprise product innovation has become a topic of great concern in both academic and industrial circles. Against this backdrop, studying how to identify product innovation opportunities based on online reviews is of great theoretical significance and practical value. Unlike previous studies, this paper uses the BERT model to accurately filter out negative user comments and identify key demand points. This article also combines the characteristics of ordinary users and leading users, integrates dual-source data of user comments from e-commerce platforms and online communities, and associates the demand issues of ordinary users with the suggestions of leading users, which can more accurately identify product innovation opportunities. [Method/Process] First, we collected and pre-processed ordinary user comment data and leading user comment data. Second, the BERT model and LDA topic model were used to categorize the sentiment and cluster the comment data to mine the problems of ordinary users and suggestions of leading users. Finally, based on semantic similarity analysis, problem-suggestion topic mapping was realized to identify product innovation opportunities with high innovation value.
[Results/Conclusions] This paper constructed a problem-suggestion product innovation opportunity identification method driven by dual-source data, and selected the action camera as a case to elaborate in detail on the specific practice of the proposed method in the field of product innovation. Through case analysis, the feasibility of the proposed method was verified, providing an operational reference for enterprises on how to carry out product innovation work efficiently. However, this paper still has certain limitations and needs to be improved with more abundant data in subsequent studies. First, the data collected in this article mainly come from e-commerce platforms and online community platforms. Although these data contain a large amount of user information, there are still deficiencies. In the future, we will introduce more data sources, such as news media and technology websites, to obtain more comprehensive and diverse data. Second, this paper has only conducted case application research in the field of intelligent digital products. In the future, we need to further explore more fields, such as smart wearables and whole-house intelligence, to enhance the universality of the product innovation opportunity identification framework constructed in this paper.
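The problem-suggestion topic mapping step described in the Method/Process above (linking ordinary-user problem topics to lead-user suggestion topics by semantic similarity) can be sketched as a cosine-similarity match between topic vectors. The topic names, vectors, and the 0.6 threshold below are hypothetical illustrations, not values from the study, which would derive its vectors from LDA topics over real comment data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def map_problems_to_suggestions(problem_topics, suggestion_topics, threshold=0.6):
    """Map each ordinary-user problem topic to the best-matching lead-user
    suggestion topic, keeping only matches above the similarity threshold."""
    mapping = {}
    for p_name, p_vec in problem_topics.items():
        best_name, best_sim = None, threshold
        for s_name, s_vec in suggestion_topics.items():
            sim = cosine(p_vec, s_vec)
            if sim >= best_sim:
                best_name, best_sim = s_name, sim
        if best_name is not None:
            mapping[p_name] = best_name
    return mapping

# Hypothetical topic vectors (e.g. word distributions averaged from LDA).
problems = {"battery_drain": [0.9, 0.1, 0.0], "blurry_video": [0.0, 0.2, 0.8]}
suggestions = {"larger_battery": [0.8, 0.2, 0.0], "better_stabilization": [0.1, 0.1, 0.9]}
pairs = map_problems_to_suggestions(problems, suggestions)
```

Mapped pairs are the candidate innovation opportunities: a recurring user problem for which lead users have already articulated a concrete improvement direction.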

Governance of Personal Information Security in the Iteration of Generative AI: From the Perspective of the Technological Evolution of Large Models | Open Access
AN Lin
2026, 38(4):  61-70.  DOI: 10.13998/j.cnki.issn1002-1248.25-0750

[Purpose/Significance] The rapid advancement of generative artificial intelligence (AI) is driving societal digital transformation, yet it simultaneously poses unprecedented systemic risks to personal information security due to the large-scale, automated, and complex nature of its data processing. Previous research has lacked exploration of governance pathways that consider endogenous technological evolution and specific model iterations. This paper takes the technological evolution of mainstream, large-scale generative AI models, both domestically and internationally, as a starting point, and systematically reveals the impact of generative AI on personal information protection principles across the stages of data collection, model operation, and content generation. The focus is on analyzing how technological innovations in China's DeepSeek, including open-source traceability, decision transparency, and flexible deployment, lay the groundwork for risk-graded governance. This study not only broadens the theoretical perspective on AI governance and promotes the formation of a "technology-institution" collaborative governance paradigm, but also offers innovative and actionable insights for building an agile and effective personal information protection system in China amidst the rapid adoption of generative AI. [Method/Process] This study employs a comparative analysis and inductive research approach. First, it systematically compares the core technological differences among mainstream generative AI models, both domestic and international, across three dimensions: model ecosystem, model capabilities, and deployment methods. Through this comparison, it analyzes the challenges generative AI poses to personal information protection at various stages, including data collection, model operation, and content generation.
Second, the study systematically examines the differentiated impacts brought about by DeepSeek's technological iterations on personal information security governance. Building on this foundation, the research proposes a comprehensive governance strategy centered on the principles of inclusiveness and prudence, guided by risk grading, and covering all operational stages of generative AI. This strategy emphasizes the critical role of DeepSeek's technical characteristics in supporting the implementation of this framework. [Results/Conclusions] The research indicates that constructing a risk-graded governance system based on the sensitivity of personal information is an effective approach to balancing security and innovation in generative AI. This system emphasizes distinguishing between sensitive and general information during data collection, achieving traceability and purpose control during model operation, and implementing differentiated security safeguards during content generation. With its technical advantages, including open-source traceability, decision transparency, and flexible deployment, DeepSeek provides technical validation and practical possibilities for graded governance. This facilitates the protection of sensitive personal information in high-risk scenarios while simultaneously fostering technological iteration and application innovation in medium- to low-risk contexts. Future research should further incorporate multi-dimensional governance elements such as industry self-regulation, social coordination, and international collaboration. Empirical analysis should also be conducted to test the applicability and effectiveness of the governance framework, thereby gradually developing a well-rounded personal information security governance scheme that adapts to the dynamic evolution of technology.

Digital Technology Empowering High-Quality Development of Public Cultural Services: Impact Effects and Influencing Mechanisms | Open Access
YUAN Shuo
2026, 38(4):  71-83.  DOI: 10.13998/j.cnki.issn1002-1248.25-0526

[Purpose/Significance] The accelerated digital transformation of public cultural services has fundamentally reshaped modes of service delivery, governance frameworks, and citizen engagement. Exploring how digital technologies empower the high-quality development of public cultural services is essential for designing a modern, equitable, and efficient service system. This study contributes to the existing literature by systematically investigating not only the direct effects of digital technologies but also threshold effects, regional heterogeneity, spatial spillovers, and mediating mechanisms. This clarifies how digital innovation interacts with governance capacity and institutional environments. Unlike previous research, which relied mainly on descriptive or single-method analyses, this study employs an integrated empirical framework. This framework captures the dynamic and multidimensional nature of digital empowerment within the context of public service. It enriches the theoretical and practical understanding of digital governance. [Method/Process] This study employs panel data from 31 Chinese provinces over the period 2015-2023 to systematically investigate how digital technologies influence the high-quality development of public cultural services. A combination of fixed-effects models, mediating-effects models, threshold regression models, and spatial econometric models was used to capture direct, nonlinear, regional, spatial, and mediating effects. To control for potential confounding factors, fiscal expenditure, population density, and cultural literacy were incorporated as covariates. The analysis drew on theoretical foundations conceptualizing digital technology as a new productive force and was supported by empirical data from national statistical yearbooks, digital finance indices, and governance performance indicators, ensuring both methodological rigor and contextual relevance.
[Results/Conclusions] Digital technology significantly promotes the high-quality development of public cultural services, with measurable positive effects for each incremental increase in the digital technology development index. The influence exhibits a nonlinear threshold pattern, reflecting a "promotion-weakening-enhancement" trajectory, highlighting the necessity of integrating technological applications with governance structures, resource allocation, service design, and public digital literacy. Regional analyses reveal stronger effects in the central and western provinces, suggesting that digital technologies can help mitigate service disparities under supportive policy frameworks. The spatial econometric results indicate positive spillover effects on neighboring regions, while the mediation analysis identifies government governance capacity as a key mechanism through which technological inputs translate into service outcomes. Policy implications include reinforcing digital infrastructure, enhancing institutional support, implementing region-specific strategies, fostering inter-provincial coordination, and strengthening government-led service integration. The study has limitations, including potential unobserved concurrent causal pathways. Future research should adopt configurational methods, such as qualitative comparative analysis, to further elucidate the complex, multicausal dynamics of digital technology empowerment in public cultural services.
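The threshold regression models used to detect the nonlinear pattern above rest on a split-and-fit idea that a minimal sketch can convey: grid-search a threshold in the regime variable and fit separate OLS lines on each side, keeping the split that minimizes total squared error. The toy data and the `threshold_search` helper are illustrative assumptions; the study's actual estimation (panel threshold models with proper inference) is far richer than this single-regressor sketch.

```python
def fit_line(xs, ys):
    """OLS slope and intercept for a single regressor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def sse(xs, ys):
    """Sum of squared residuals of the regime's own OLS fit."""
    slope, intercept = fit_line(xs, ys)
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

def threshold_search(xs, ys, candidates, min_regime=3):
    """Split the sample at each candidate threshold, fit OLS in each regime,
    and keep the threshold with the lowest combined sum of squared errors."""
    best = None
    for t in candidates:
        low = [(x, y) for x, y in zip(xs, ys) if x <= t]
        high = [(x, y) for x, y in zip(xs, ys) if x > t]
        if len(low) < min_regime or len(high) < min_regime:
            continue  # require enough observations in both regimes
        total = sse(*zip(*low)) + sse(*zip(*high))
        if best is None or total < best[1]:
            best = (t, total)
    return best[0] if best else None

# Toy data: slope 1 up to x = 5, then a jump and slope 3 afterwards.
xs = list(range(10))
ys = [x if x <= 5 else 7 + 3 * (x - 5) for x in xs]
t_hat = threshold_search(xs, ys, candidates=xs)
```

Chaining two such searches yields the three-regime structure behind a "promotion-weakening-enhancement" trajectory, where the estimated slope differs in each interval of the threshold variable.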

Effects of AIGC on Reader Trust in Library Information | Open Access
GUO Jinbo
2026, 38(4):  84-98.  DOI: 10.13998/j.cnki.issn1002-1248.25-0593

[Purpose/Significance] With the rapid integration of generative artificial intelligence into library services, user trust in information has begun to exhibit a new pattern characterized by high usage, low certainty, and increased reliance on institutional guarantees. Existing studies on online credibility, artificial intelligence-generated content (AIGC) applications and library innovation have mostly examined either technical performance, information literacy, or governance issues in isolation. Few have systematically analyzed how specific AIGC features, user capabilities and the institutional environment of libraries jointly shape multi-dimensional user trust. This study focuses on AIGC-supported services in public and academic libraries and constructs a comprehensive analytical framework linking technological signals, user ability and library-based institutional mediation to the formation of cognitive, emotional and behavioral trust. The paper contributes to the refinement of trust theory in digital information environments by providing empirical evidence from a large-scale sample in China. It also offers actionable insights for libraries seeking to deploy AIGC while maintaining or enhancing their role as trusted public knowledge institutions. [Method/Process] The study is grounded in classic research on cognitive authority and online credibility, combined with recent work on AIGC, knowledge services, information literacy and library governance. It conceptualizes user trust as a three-dimensional construct comprising cognitive, emotional and behavioral components. AIGC-related technological features are operationalized along three axes: perceived content quality, generation transparency and interactivity.
User capability is measured through standardized digital literacy tests and indicators of professional background, while the library environment is captured by the presence of institutional arrangements such as usage guidelines, staff verification, result labelling and risk reminders. Data were collected through a large-scale questionnaire survey in ten public and academic libraries in Henan Province, yielding 2 347 valid responses. After data cleaning and reliability and validity checks, the study employed a combination of structural equation modelling, two-stage least squares estimation, threshold regression, spatial autoregressive models, dynamic panel system GMM estimation, quantile regression and finite mixture models. This sequential strategy allowed for simultaneous identification of structural paths, endogenous relationships, nonlinear and moderating effects, spatial spillovers and temporal dependence, as well as heterogeneous trust formation patterns across user groups. [Results/Conclusions] The findings confirm that user trust in AIGC-enabled library services is best understood as a three-dimensional structure, in which cognitive trust influences emotional trust, and both jointly shape behavioral trust. Content quality and generation transparency exert strong and robust positive effects on cognitive trust, while interactivity mainly enhances emotional trust and indirectly affects behavioral intentions. Digital literacy and professional background introduce clear threshold and amplification effects: when user capability is below certain levels, improvements in content quality and transparency have limited impact on trust, but above these thresholds the marginal effects increase markedly.
Library-level institutional arrangements, including human review, explicit labelling and standardized usage rules, not only raise overall trust levels but also significantly strengthen the effects of technological signals, sometimes to a degree comparable with individual-level capability factors. Spatial and dynamic analyses show that trust exhibits both spillover and path dependence: practices in one library can influence neighbouring institutions through user mobility and word of mouth, and positive or negative experiences accumulate into longer-term evaluations. The study suggests that libraries should treat trust building as a core design objective when introducing AIGC, embed transparency and quality signals into interfaces and metadata, establish robust verification and correction workflows, and provide differentiated services for users with different literacy levels and professional backgrounds. The limitations include the concentration of data in one province and the reliance on primarily macro-level instruments for causal identification. Future research could extend the framework to cross-regional and cross-type libraries, compare specific functional scenarios such as reference services and reading promotion, and further integrate trust analysis with broader issues of library governance, literacy education and responsibility allocation in AIGC ecosystems.

Efficacy of Intelligent Consulting Services in Libraries at Home and Abroad Driven by AI Large Models | Open Access
SONG Lingling, ZHANG Xinghui
2026, 38(4):  99-111.  DOI: 10.13998/j.cnki.issn1002-1248.25-0524

[Purpose/Significance] This study investigates the operational practices and strategic development pathways of intelligent consultation services in libraries worldwide, specifically under the impetus of artificial intelligence (AI) large language models (LLMs). By conducting a systematic analysis of representative case studies, we examine the applied technologies, emerging service models, and measurable efficacy of these AI-enhanced services. The research offers actionable insights for the effective implementation of AI within the library sector. It aims to guide the evolution of intelligent consultation toward greater innovation and cultural-contextual adaptability, thereby providing both theoretical underpinning and practical guidance for the localized development of smart library ecosystems. [Method/Process] Employing a comparative case study methodology, this research selected 30 representative libraries from diverse international and domestic contexts as its subjects. Data were primarily gathered through structured online surveys and content analysis of publicly available service interfaces, systematically capturing the scope, functionality, and operational status of their intelligent consultation services. The analysis focused on characterizing technological applications: identifying core LLM integrations, typical functionalities, and architectural highlights. It further integrated these findings to compare and contrast prevailing service models and implementation variances. Subsequently, the study conducted a multidimensional comparative assessment of the practical service effectiveness enabled by AI large models, evaluating performance across four key areas: service response efficiency and accuracy; capabilities in resource organization and structured knowledge management; tangible improvements in user service experience; and degree of service model innovation.
[Results/Conclusions] The findings indicate that AI large model-driven intelligent consulting services exhibit pronounced advantages in key operational metrics, including enhanced response efficiency, superior knowledge synthesis and management capabilities, enriched user interaction experiences, and the facilitation of novel service paradigms. However, a comparative analysis reveals significant disparities among libraries concerning the depth of technological integration, the sophistication of service offerings, and the level of cultural and linguistic adaptation achieved. In response, the study proposes targeted strategic recommendations from three interrelated perspectives: nuanced technological application, user-centered service design, and collaborative ecosystem construction. It advocates for libraries to prioritize the synergistic balance between technological capability and humanistic service values, to achieve deeper integration with localized and institutional knowledge repositories, and to institute mechanisms for continuous service evaluation and iterative optimization. These approaches are essential for fostering more efficient, inclusive, and sustainable development of intelligent consultation services. Future research directions should encompass longitudinal studies on service effectiveness, the integration of multimodal interactive capabilities, and the formulation of ethical guidelines and governance frameworks for AI deployment in library services.