05 March 2024, Volume 36 Issue 3
Concept, Task, and Application of Social Robots in Information Behavior Research | Open Access
LIU Yang, LYU Shuyue, LI Ruojun
2024, 36(3):  4-20.  DOI: 10.13998/j.cnki.issn1002-1248.24-0093
[Purpose/Significance] The emergence of social robots reflects a trend toward ever closer human-computer interaction. However, research on the information behavior of social robots faces many challenges arising from the need to simulate human social behavior. These challenges include technical hurdles such as multi-level understanding of human emotions, extraction of multi-modal information features, and situational awareness, as well as the establishment of long-term user profiles, data privacy, and ethical considerations in personalized interaction. Moreover, existing research tends to focus narrowly on specific applications and lacks a holistic review. This paper provides a thorough review of domestic and international studies of social robots in the area of information behavior. It aims to elucidate the theoretical evolution and technological foundations of social robots, thereby enriching our understanding of their role in the landscape of information behavior research. [Method/Process] Using a rigorous literature review methodology, we analyze the current state and prospective trajectory of research on the information behavior of social robots. First, we extract and scrutinize the theoretical foundations and salient research topics within the field. We then delineate the core tasks of social robots, which include data acquisition, language processing, emotion analysis, information retrieval, and intelligent communication. Furthermore, we synthesize research on the information behavior of social robots in application domains such as education, healthcare, and the service sector, examining the intricacies of human-computer interaction in these contexts. Finally, we explore future directions in the field. [Results/Conclusions] Our examination of the information behavior of social robots reveals both promising potential and notable challenges. This paper clarifies the concept of the social robot, identifies current research foci, and addresses prevailing challenges. Regarding the construction of data resources and related technologies, we systematically delineate the task architecture of social robots and highlight their wide-ranging applications across domains. Furthermore, we provide an in-depth examination of human-computer interaction scenarios in critical domains such as education, healthcare, and service delivery, offering forward-looking guidance for future research in social robotics. Nonetheless, our findings underscore that social robotics remains at a nascent stage of development, requiring a concerted focus on advancing interaction quality assessment, enhancing social cognitive capabilities, managing user information disclosure, and refining emotional intelligence. Prioritizing these avenues will improve the quality of human-robot interaction and provide users with richer, more personalized service experiences, thereby catalyzing the continued evolution and broader integration of social robotics technology.
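To make the task architecture above concrete, the following is a minimal, hypothetical sketch (not code from the paper) of how the five core tasks named in the abstract, data acquisition, language processing, emotion analysis, information retrieval, and intelligent communication, might be chained in a text-only interaction loop. All function names, the toy lexicon, and the tiny knowledge base are illustrative assumptions.

```python
# Hypothetical pipeline sketch: data acquisition -> language processing ->
# emotion analysis -> information retrieval -> intelligent communication.
# Names, lexicon, and knowledge base are illustrative placeholders only.
import string
from dataclasses import dataclass, field


@dataclass
class Interaction:
    """One user turn as it moves through the pipeline."""
    raw_input: str                       # acquired signal (text channel only here)
    tokens: list = field(default_factory=list)
    sentiment: str = "neutral"
    retrieved: list = field(default_factory=list)
    reply: str = ""


KNOWLEDGE_BASE = {                       # toy retrieval target
    "opening hours": "The library is open 9:00-21:00 on weekdays.",
    "borrowing": "You may borrow up to 20 items for 30 days.",
}

NEGATIVE_CUES = {"angry", "upset", "frustrated", "annoyed"}


def acquire(text: str) -> Interaction:            # data acquisition
    return Interaction(raw_input=text.strip())


def process_language(turn: Interaction) -> None:  # language processing
    turn.tokens = [w.strip(string.punctuation) for w in turn.raw_input.lower().split()]


def analyse_emotion(turn: Interaction) -> None:   # emotion analysis (lexicon stub)
    turn.sentiment = "negative" if NEGATIVE_CUES & set(turn.tokens) else "neutral"


def retrieve(turn: Interaction) -> None:          # information retrieval
    turn.retrieved = [answer for topic, answer in KNOWLEDGE_BASE.items()
                      if any(word in turn.tokens for word in topic.split())]


def respond(turn: Interaction) -> None:           # intelligent communication
    empathy = "I'm sorry to hear that. " if turn.sentiment == "negative" else ""
    answer = turn.retrieved[0] if turn.retrieved else "Could you tell me more?"
    turn.reply = empathy + answer


if __name__ == "__main__":
    turn = acquire("I'm frustrated, what are the opening hours?")
    for step in (process_language, analyse_emotion, retrieve, respond):
        step(turn)
    print(turn.reply)
```

In a real social robot each stub would be replaced by a dedicated model (speech front-end, multi-modal emotion recognition, retrieval over a large knowledge base), but the staged structure mirrors the task decomposition the review describes.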
Three Waves of the Organization of Information Resources and the Development of the Statistical Evaluation Systems of Libraries | Open Access
ZHOU Wenjie
2024, 36(3):  21-31.  DOI: 10.13998/j.cnki.issn1002-1248.24-0300
[Purpose/Significance] This paper explores the development and evolution of the library statistical evaluation index system, highlighting its characteristics and changes across the stages of document management, information management, and data management. The research is organized around three key stages: the document level, the information level, and the data level, analyzing the main content and significance of the library statistical evaluation index system at each stage of development. The innovation of this paper lies in the systematic analysis of these transitions, providing a comprehensive perspective that integrates theoretical and methodological advances with practical indicators. [Method/Process] The research methodology includes a systematic analysis of library statistical evaluation indicators at different stages of development. The study uses historical review and theoretical analysis, examining the development of document organization, information digitization, and data management in libraries. By tracing the development of classification, cataloging, and evaluation metrics, the research combines historical documentation with contemporary practice to provide a solid theoretical foundation. The study also draws on the existing literature and integrates data from library management systems and user feedback to assess service quality and operational efficiency. This mixed-methods approach ensures a comprehensive understanding of the applicability and effectiveness of the evaluation indicators. [Results/Conclusions] The study shows that the library statistical evaluation index system has evolved significantly, reflecting libraries' adaptation to changing resource types and management needs. The main conclusions can be summarized as follows. In the first stage, at the document level, evaluation focused on book circulation, with indicators such as book use efficiency, collection development quality, and reader engagement. Key metrics such as cumulative borrowing and utilization rates provided basic service performance data but offered little deeper insight. With the development of information technology, library statistical evaluation indicators expanded to include service frequency, response time, user satisfaction, and growth rates, enabling libraries to evaluate and improve service strategies based on user feedback and service performance. Currently, the library statistical evaluation system focuses on research data management and data value assessment. Indicators now include not only resource- and service-related metrics but also operational efficiency, budget utilization, technological updates, scholarly contributions, and social impact. These indicators provide a comprehensive view of a library's performance in resource management, service quality, and social contribution, helping to optimize resource allocation, enhance service quality, and increase impact. The study also acknowledges certain limitations, such as the evolving nature of technology and user needs, which may require continuous updates to the evaluation system. Future research should explore the integration of advanced data analytics and artificial intelligence to further refine evaluation metrics. In addition, ongoing studies are needed to adapt to emerging trends in data management and user behavior to ensure that libraries remain at the forefront of information services in the digital age.
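As a purely illustrative aside (not drawn from the paper), the snippet below computes a few document-level and information-level indicators of the kind discussed above, collection turnover, utilization, user satisfaction, and year-on-year growth, from hypothetical annual statistics; every field name and figure is an assumption.

```python
# Illustrative-only sketch of basic library evaluation indicators.
# All counts, field names, and survey values are hypothetical.
from statistics import mean

annual_stats = {
    "holdings": 250_000,          # items in the collection
    "loans": 310_000,             # cumulative borrowing transactions
    "items_borrowed": 140_000,    # distinct items borrowed at least once
    "avg_response_minutes": 12.4, # mean reference-service response time
    "satisfaction_scores": [4.2, 4.5, 3.9, 4.7],  # survey batches, 1-5 scale
    "loans_previous_year": 287_000,
}

# Document level: how intensively the collection is used.
turnover_rate = annual_stats["loans"] / annual_stats["holdings"]
collection_utilization = annual_stats["items_borrowed"] / annual_stats["holdings"]

# Information level: service performance and user feedback.
user_satisfaction = mean(annual_stats["satisfaction_scores"])
loan_growth_rate = (
    (annual_stats["loans"] - annual_stats["loans_previous_year"])
    / annual_stats["loans_previous_year"]
)

print(f"turnover rate:        {turnover_rate:.2f} loans per item")
print(f"collection use:       {collection_utilization:.1%} of holdings borrowed")
print(f"user satisfaction:    {user_satisfaction:.2f} / 5")
print(f"year-on-year growth:  {loan_growth_rate:.1%}")
```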
Construction Model of AI-Ready for Scientific and Technological Intelligence Data Resources | Open Access
QIAN Li, LIU Zhibo, HU Maodi, CHANG Zhijun
2024, 36(3):  32-45.  DOI: 10.13998/j.cnki.issn1002-1248.24-0173
[Purpose/Significance] AI technology, exemplified by large language models (LLMs) and representative of new quality productive forces, is evolving rapidly and attracting wide attention. To accelerate the implementation of AI technologies, it is urgent that advanced AI technologies acquire support from the knowledge resources held by scientific and technological (S&T) information institutions and libraries. Meanwhile, S&T information offers significant potential service scenarios for applying AI technologies such as LLMs. This study explores and designs methods and paths for constructing AI-ready data resources in the field of S&T information, and proposes a comprehensive and operable construction model adapted to the new AI technical environment, thereby facilitating comprehensive readiness in the field of intelligence. [Method/Process] This study first examines the concept and current status of AI-ready construction at home and abroad from three perspectives: governments, enterprises, and research institutions. The survey shows that the application of artificial intelligence is highly valued across scientific research and production. However, the groundwork and preparation for AI applications still lag behind, and AI tools cannot be fully deployed in key application scenarios because high-quality, refined data resources are lacking. Based on these findings, the study offers a preliminary definition of AI-ready construction: the set of development and improvement actions that adapt an object to the AI technical environment and promote long-term benefits. The research then focuses on the field of S&T information and systematically discusses and designs an AI-ready construction model for this field from six aspects: connotation and category, construction perspective, construction object, construction principles, control dimensions, and types of construction mode. [Results/Conclusions] The construction of AI-ready S&T information resources is a comprehensive, multi-angle transformation and upgrading process situated between the knowledge resource end and the intelligence application end. It is carried out in four aspects: standards, methods, tools, and platforms. The main content of the construction covers channels of AI technology, data transformation, data resources, and data management, and the process is governed by six principles and four control dimensions. In addition, this study proposes a practical path for constructing AI-ready S&T data resources, including the construction of intelligent data systems and of integrated platforms covering the whole life cycle of S&T information data. This path reflects the progression of knowledge resources from diversification to organization and then to integration, which not only serves the S&T information field itself but also provides more intelligent, convenient, rich, and powerful S&T information support for other fields.
Future research should delve into more fine-grained and practical aspects, examine the specific characteristics of different AI technologies, and provide more detailed, operational recommendations for specific application scenarios, thereby giving research institutions a solid foundation for achieving a leading strategic position in research and development.
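As a loose illustration of what "AI-ready" preparation might look like at the level of a single dataset, the sketch below records readiness information along the four aspects named above (standards, methods, tools, platforms) and flags gaps. The class, its fields, and the checks are hypothetical assumptions, not the construction model proposed in the paper.

```python
# Hypothetical AI-readiness descriptor for one S&T information dataset.
# Fields and checks are illustrative assumptions, not the paper's spec.
from dataclasses import dataclass, field


@dataclass
class AIReadyDescriptor:
    dataset_id: str
    # Standards: schema and licensing the resource conforms to.
    metadata_schema: str = "unknown"
    license: str = "unknown"
    # Methods: how the resource was transformed for model consumption.
    transformations: list = field(default_factory=list)   # e.g. ["dedup", "OCR cleanup"]
    # Tools: machine-consumable serializations / access points.
    formats: list = field(default_factory=list)           # e.g. ["jsonl", "parquet"]
    # Platforms: where the data life cycle is managed.
    managed_on: str = "unknown"

    def readiness_issues(self) -> list:
        """Return the gaps that block AI use (empty list = ready)."""
        issues = []
        if self.metadata_schema == "unknown":
            issues.append("no declared metadata schema")
        if self.license == "unknown":
            issues.append("licensing for model training unclear")
        if not self.formats:
            issues.append("no machine-readable serialization")
        if not self.transformations:
            issues.append("no documented cleaning/transformation steps")
        return issues


if __name__ == "__main__":
    record = AIReadyDescriptor(
        dataset_id="sti-corpus-2024",
        metadata_schema="Dublin Core",
        formats=["jsonl"],
    )
    print(record.readiness_issues())
    # -> ['licensing for model training unclear',
    #     'no documented cleaning/transformation steps']
```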
Security Governance of Data Element Circulation: System Architecture and Practical Approach | Open Access
MA Lecun, PEI Lei, LI Baiyang
2024, 36(3):  46-58.  DOI: 10.13998/j.cnki.issn1002-1248.24-0231
[Purpose/Significance] Research on the governance system and policies for data element circulation is a pressing issue in China's data governance. Studying policy formulation and the governance system for circulation plays an important role in safeguarding the security of data circulation in China and promoting the market-oriented allocation of data elements. [Method/Process] First, this study starts from the realities of China's data element market and its requirements for security, trustworthiness, autonomy, and controllability. Based on an analysis of the security risks of data circulation, we propose risk governance countermeasures for the data element market structured around a "security-fairness-efficiency" triangle. Then, based on the three-level system and five-dimensional standards of data element market governance, we propose a method for aligning security governance with the trusted ecosystem and with the international data governance rules for cross-border data flows, and construct a governance system with Chinese characteristics for a unified national data element market. [Results/Conclusions] Facing security risks in data sovereignty, the data market, and data circulation, we should identify and monitor data sovereignty disputes and the operation of the circulation market, and establish a multi-party collaborative governance model that is led by the government, operated by platforms, centered on enterprises, and open to user participation. When assessing the data element market, a mixed assessment approach should be adopted that combines qualitative and quantitative aspects, expert opinion with objective data, and objectives with results. For different types of data, control boundaries and scope of use should be clarified in a hierarchical manner, and data ownership, use, and income rights should be clearly defined. At the same time, a data rights confirmation platform should be established to audit, register, and certify data service providers, data circulation processes, and data circulation rules, so as to keep data circulation compliant and well regulated.
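As a rough illustration of the mixed qualitative-quantitative assessment described above, the sketch below combines expert panel scores with normalized objective indicators along the security-fairness-efficiency triangle using fixed weights. The indicators, values, and weights are hypothetical, not taken from the paper.

```python
# Hypothetical mixed assessment of a data element market along the
# security-fairness-efficiency triangle. All numbers are illustrative.
expert_scores = {        # 0-10, averaged over an expert panel (qualitative)
    "security": 7.5,
    "fairness": 6.0,
    "efficiency": 8.0,
}

objective_scores = {     # 0-10, normalized from monitoring data (quantitative)
    "security": 8.2,     # e.g. audited incident rate
    "fairness": 5.5,     # e.g. market-concentration index
    "efficiency": 7.1,   # e.g. transaction settlement time
}

WEIGHT_EXPERT = 0.4      # weight of qualitative judgement
WEIGHT_OBJECTIVE = 0.6   # weight of measured data

composite = {
    dim: WEIGHT_EXPERT * expert_scores[dim] + WEIGHT_OBJECTIVE * objective_scores[dim]
    for dim in expert_scores
}

for dim, score in composite.items():
    flag = "review" if score < 7.0 else "ok"
    print(f"{dim:10s} {score:4.1f}  [{flag}]")
```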
Machine Functionalism and the Digital-Intelligence Divide: Evolutionary Pathways, Generative Logic and Regulatory Strategies | Open Access
ZHOU Xin
2024, 36(3):  59-71.  DOI: 10.13998/j.cnki.issn1002-1248.24-0194
[Purpose/Significance] This study critically analyzes the social-philosophical roots of the digital-intelligence divide from the perspective of machine functionalism. By uncovering the theoretical origins and generation pathways of the digital-intelligence divide, countermeasures can be proposed. The research contributes to understanding the divide's impact on society and provides insights for promoting the inclusive development of artificial intelligence (AI) technology. The study fills a gap in the literature by linking machine functionalism to the digital-intelligence divide and offers a novel perspective on addressing the unequal use of AI technology. The findings have significant implications for policymakers, technology developers, and researchers in the fields of AI ethics, digital inequality, and social philosophy. [Method/Process] Using the theoretical lens of machine functionalism, this study examines the evolutionary pathways, generation mechanisms, and multiple risks of the digital-intelligence divide. It draws on relevant theories, such as the extended mind thesis and technological determinism, to analyze how machine functionalism influences the design and application of AI technology. The study also draws on empirical evidence from case studies and surveys to illustrate how the digital-intelligence divide manifests in different contexts. By synthesizing theoretical and empirical insights, the research proposes interventions that address the divide at different levels, from the philosophical underpinnings to the practical implementation of AI technology. [Results/Conclusions] The study shows that machine functionalism, which applies Turing-machine principles to explain the mind and views the mind as a physically realized Turing machine, has become the social-philosophical foundation of AI technology. While breaking with the traditional biological essentialist view of the mind, machine functionalism inadvertently creates inequitable uses of AI through three main pathways: the mechanization of the mind, designer bias and algorithmic preference, and technological specialization and barriers to entry. This creates the digital-intelligence divide and risks such as information-access inequality evolving into social inequality and information cocoons weakening public dialogue. The study argues that interventions are needed to mitigate these risks and promote a more equitable distribution of the benefits of AI technology. To bridge the digital-intelligence divide, the study suggests a multi-pronged approach. First, future efforts should focus on promoting positive interaction between machines and humans through value-sensitive design, which incorporates ethical considerations into the development and deployment of AI systems. Second, developing ethical algorithms that eliminate designer bias and algorithmic preference is critical to ensuring fair and unbiased AI decision-making. Third, improving the digital-intelligence skills of individuals and communities can help break down barriers to entry caused by technological specialization and enable more people to benefit from AI technology. Together, these measures can help dismantle the barriers of unequal technology use under machine functionalism. The study concludes by emphasizing the importance of collaboration among policymakers, technology developers, researchers, and the public in addressing the digital-intelligence divide.
It calls for further research on the social implications of machine functionalism and the development of inclusive AI governance frameworks. The findings of this study serve as a foundation for future work to mitigate the risks of the digital intelligence divide and promote the responsible and equitable development of AI technology.
Impact of User Heterogeneity on Knowledge Collaboration Effectiveness from a Network Structure Perspective | Open Access
SHI Yanqing, LI Lu, SHI Qin
2024, 36(3):  72-82.  DOI: 10.13998/j.cnki.issn1002-1248.24-0207
[Purpose/Significance] In the digital age, knowledge collaboration platforms such as online Q&A communities, academic forums, and professional networking platforms have become important venues for knowledge sharing and collective wisdom. These platforms bring together users from different fields with diverse professional backgrounds and levels of expertise, who actively engage in problem solving, exchange views, and form complex, dynamic social networks. Online knowledge collaboration platforms not only enhance the accessibility of knowledge but also serve as incubators for interdisciplinary communication, problem solving, and innovative thinking by harnessing the collective wisdom and expertise of individuals. This article explores how to optimize the network structure of online knowledge collaboration platforms and balance the knowledge and expertise within teams. The goal is to promote cross-domain information flow, prevent the formation of information silos, and foster the creation, dissemination, and application of knowledge through collective collaboration. [Method/Process] Because participants differ in background, experience, and viewpoint, effectively managing and coordinating this heterogeneity is a critical issue. In addition, the quality and efficiency of knowledge collaboration are also influenced by characteristics of the network structure, such as information flow paths, the role of key nodes, and the interaction patterns of small groups. This study is based on real data from Stack Overflow, the world's largest programming Q&A website, and focuses on the following factors: clustering coefficient, node centrality, edge span, user knowledge heterogeneity, and user experience heterogeneity. By constructing a negative binomial regression model, the study investigates how network structure characteristics and team user heterogeneity affect the quality and efficiency of knowledge collaboration. [Results/Conclusions] The results show that, with respect to network structure, node centrality significantly improves both the quality and efficiency of collaboration, whereas higher clustering coefficients and larger edge spans restrict information flow and are detrimental to the efficiency of knowledge collaboration. In terms of user heterogeneity, high heterogeneity in knowledge background and in registration duration generally hinders collaboration: heterogeneity in registration duration (experience) negatively affects both collaboration quality and efficiency, heterogeneity in answer acceptance rate negatively affects only collaboration quality, while heterogeneity in activity intensity affects it positively. This study still has limitations that deserve further exploration. First, future research could expand the sample to include questions on more topics and domains to increase the reliability and generalizability of the findings. Second, future research could examine the dynamic changes of network structure and heterogeneity to better understand their impact on knowledge collaboration and to improve the prediction of collaboration outcomes; it could also explore more deeply how different types of heterogeneity affect collaboration dynamics over time.
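To illustrate the modeling approach described above, the sketch below simulates per-thread collaboration networks, extracts clustering and centrality features with networkx, adds stand-in heterogeneity measures, and fits a negative binomial regression via statsmodels' GLM with a NegativeBinomial family. The data are simulated and the variable names are assumptions; this is not the authors' code or dataset.

```python
# Simulated sketch of a negative binomial regression of a collaboration
# outcome on network-structure and heterogeneity features. Illustrative only.
import networkx as nx
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulate one collaboration network per question thread and extract features.
rows = []
for _ in range(300):
    g = nx.erdos_renyi_graph(n=int(rng.integers(5, 20)), p=0.3,
                             seed=int(rng.integers(1_000_000)))
    rows.append({
        "clustering": nx.average_clustering(g),
        "centrality": np.mean(list(nx.degree_centrality(g).values())),
        "knowledge_het": rng.uniform(0, 1),   # stand-in for tag-based heterogeneity
        "tenure_het": rng.uniform(0, 1),      # stand-in for registration-age spread
    })
df = pd.DataFrame(rows)

# Simulate a count outcome whose directions loosely follow the reported findings.
mu = np.exp(0.5 + 1.2 * df["centrality"] - 0.8 * df["clustering"]
            - 0.5 * df["knowledge_het"] - 0.3 * df["tenure_het"])
df["quality_count"] = rng.poisson(mu)

# Negative binomial regression via GLM with a NegativeBinomial family.
X = sm.add_constant(df[["clustering", "centrality", "knowledge_het", "tenure_het"]])
model = sm.GLM(df["quality_count"], X, family=sm.families.NegativeBinomial(alpha=1.0))
result = model.fit()
print(result.summary())
```

In the study itself, the outcome would be an observed collaboration-quality or efficiency count for each thread and the dispersion parameter would be estimated rather than fixed; the sketch only shows the shape of the specification.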
Prevention and Control of Information Fog from the Perspective of Overall National Security Concept | Open Access
MAN Zhenliang, WANG Xinwei
2024, 36(3):  83-91.  DOI: 10.13998/j.cnki.issn1002-1248.24-0127
[Purpose/Significance] With the popularization of artificial intelligence technology, the cost of creating information fog has decreased, and its negative impact on national security is becoming increasingly apparent. Information fog not only creates cognitive barriers for users, but also poses serious challenges to fields such as politics, the economy, and society. This article explores the prevention and control of information fog from the perspective of the overall national security concept, with the aim of addressing the risks and challenges that information fog poses. The literature lacks research on the prevention and control of information fog from the perspective of overall national security. To fill this gap, this article not only provides a new perspective and strategy for the prevention and control of information fog, enriching the substance of national security research, but also promotes the cross-integration of the information security and national security disciplines, providing new theoretical support for research in related fields. It offers reference and guidance to relevant actors such as governments and online platforms in preventing and controlling information fog. [Method/Process] Based on the overall national security concept, this article summarizes academic work on information fog at home and abroad, including research on its stages, application scenarios, governance strategies, and practical case analyses. We summarize the characteristics of information fog and analyze methods and strategies for its prevention and control. [Results/Conclusions] Information fog is characterized by wide dissemination, realistic presentation, and difficulty of identification. Based on these characteristics, the article proposes strengthening legal and policy frameworks and clarifying the division of responsibilities: 1) strengthen the evaluation of and risk warnings for online accounts, and use technology to update anti-forgery tools and improve information authentication capabilities, with governments intervening in a timely manner to prevent information fog from escalating; 2) improve the public's ability to identify information fog and their level of prevention. The article also has some shortcomings. First, it does not cover other forms of information fog in the security domain. Second, it does not analyze information fog from an algorithmic perspective. Therefore, in future research, we will closely follow social developments to analyze the characteristics and presentation of information fog in various security fields. At the same time, scholars in computer science, intelligence science, and national security are invited to analyze information fog in depth from the perspective of computer algorithms, in order to propose practical countermeasures and suggestions for preventing and managing it from a technological standpoint.
Ontology Construction for Intelligent Control and Application of Crop Germplasm Resources | Open Access
FAN Kexin, XIAN Guojian, ZHAO Ruixue, HUANG Yongwen, SUN Tan
2024, 36(3):  92-107.  DOI: 10.13998/j.cnki.issn1002-1248.24-0135
[Purpose/Significance] Breeding 4.0, characterized by "biotechnology + artificial intelligence + big data information technology," has created new requirements for the digital management and intelligent utilization of germplasm resources. To meet the diverse needs of knowledge services in an intelligent environment, this article proposes an effective method for knowledge organization and deep semantic association. This is essential for addressing the inconvenience that discrete germplasm resource data cause researchers collaborating across regions and institutions; integrating fragmented domain data into a systematic knowledge system is therefore particularly important. [Method/Process] After analyzing the descriptions of domain data and their current state of organization, the ontology was constructed using the seven-step method developed by the Stanford University School of Medicine. First, existing ontologies such as the Crop Ontology, the Gene Ontology, and Darwin Core were referenced and reused, and then integrated with the knowledge framework of the "Technical Specifications for Crop Germplasm Resources" series and example datasets. The resulting ontology model covers five major categories of crops: cereals, cash crops, vegetables, fruit trees, and forage and green manure crops. It defines 11 core classes, including phenotypes, genotypes, identification methods, and evaluation standards, along with 10 object properties and 56 data properties. [Results/Conclusions] Based on the ontology model, the article proposes a methodology for constructing a knowledge graph of crop germplasm resources. Using rice as an example, a domain-specific fine-grained knowledge graph is developed to support semantic association and querying across multiple knowledge dimensions. The article also outlines prospective designs for new intelligent knowledge service scenarios driven by the knowledge graph, such as intelligent question answering and knowledge computation, aiming to meet the knowledge service needs of researchers, breeding companies, and the general public and to provide more accurate and efficient support for computational breeding. Currently, the research covers only rice as an example of a cereal crop; cash crops, vegetables, and other types of crop germplasm resources are not yet included. Future work will expand the scope of the study and add new classes and properties specific to different germplasm resources to better address users' diverse and personalized knowledge needs in the era of big data. This approach aims to promote the contextualization, ubiquity, and intelligence of knowledge services and to further integrate them into the academic disciplines related to the development of new quality digital productivity.
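To give a concrete flavor of the ontology-to-knowledge-graph pipeline described above, the sketch below uses rdflib to declare a few illustrative classes and properties, add a rice accession with a phenotype record, and run a SPARQL query across the graph. The namespace, class and property names, and the accession identifier are hypothetical stand-ins, not the authors' schema.

```python
# Hypothetical fragment of a crop germplasm ontology/knowledge graph in rdflib.
# Classes, properties, and identifiers are illustrative stand-ins only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL, XSD

CGR = Namespace("http://example.org/cgr#")   # hypothetical namespace
g = Graph()
g.bind("cgr", CGR)

# A subset of core classes (the full model defines 11).
for cls in ("GermplasmResource", "Phenotype", "Genotype", "EvaluationStandard"):
    g.add((CGR[cls], RDF.type, OWL.Class))

# One object property and one data property.
g.add((CGR.hasPhenotype, RDF.type, OWL.ObjectProperty))
g.add((CGR.hasPhenotype, RDFS.domain, CGR.GermplasmResource))
g.add((CGR.hasPhenotype, RDFS.range, CGR.Phenotype))
g.add((CGR.plantHeightCm, RDF.type, OWL.DatatypeProperty))

# A rice accession as an instance, linked to a phenotype record.
accession = CGR["rice/IRGC117425"]
phenotype = CGR["phenotype/IRGC117425-height"]
g.add((accession, RDF.type, CGR.GermplasmResource))
g.add((accession, RDFS.label, Literal("Oryza sativa accession IRGC 117425")))
g.add((accession, CGR.hasPhenotype, phenotype))
g.add((phenotype, RDF.type, CGR.Phenotype))
g.add((phenotype, CGR.plantHeightCm, Literal(112.5, datatype=XSD.decimal)))

# Semantic query across the graph: accessions taller than 100 cm.
results = g.query("""
    PREFIX cgr: <http://example.org/cgr#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label ?height WHERE {
        ?acc a cgr:GermplasmResource ;
             rdfs:label ?label ;
             cgr:hasPhenotype ?p .
        ?p cgr:plantHeightCm ?height .
        FILTER (?height > 100)
    }
""")
for label, height in results:
    print(label, height)
```

A production knowledge graph would populate such a schema from the "Technical Specifications for Crop Germplasm Resources" datasets and expose it through question answering and knowledge computation services, but the triple-and-query pattern is the same.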