Current Issue
05 February 2026, Volume 38 Issue 2
Research on the Construction and Evaluation of a Low-Altitude Economy Urban Development Index | Open Access
YANG Guancan, SHI Yingying, ZHANG Zihe
2026, 38(2):  4-15.  DOI: 10.13998/j.cnki.issn1002-1248.26-0063

[Purpose/Significance] As China's low-altitude economy transitions from pilot experimentation to large-scale deployment, city governments increasingly face intelligence challenges rather than mere information shortages. Development signals are scattered across heterogeneous sources - enterprise activities, patents and R&D outputs, infrastructure readiness, investment dynamics, and municipal policy documents - often with inconsistent definitions, update cycles, and measurement units. This fragmentation increases cognitive burden and decision uncertainty: policymakers may "know a lot" yet still lack a structured understanding of urban development posture, making cross-city comparison, policy-tool matching, and pathway selection difficult. To address this gap, this study reframes index construction from an information science perspective as a data-information-knowledge transformation process and develops an interpretable measurement tool to support urban situation assessment and policy reasoning in the early diffusion stage. [Method/Process] We propose a Low-altitude Economy City Development Index (LCDI) using the analytical boundary of three heterogeneous signal systems - the industrial chain, technology chain, and policy chain. The index operationalizes four interpretable dimensions: technological innovation vitality, market expansion potential, ecological coordination capability, and policy empowerment effectiveness. Multiple objective data sources are integrated and normalized to ensure cross-city comparability. Indicator weights are determined through expert judgment combined with the Analytic Hierarchy Process (AHP), translating the perceived importance of signals into an explicit weighting structure. The empirical assessment covers 58 Chinese cities that have issued dedicated low-altitude economy policies and satisfy data availability and comparability requirements.
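The AHP weighting and normalization steps described above can be sketched in a few lines. This is a minimal illustration only: the judgment-matrix values and the dimension ordering (innovation, market, ecosystem coordination, policy empowerment) are hypothetical assumptions, not values from the paper.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate the AHP principal eigenvector by row geometric means."""
    n = len(pairwise)
    gms = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

def minmax(values):
    """Min-max normalize raw indicator values to [0, 1] for cross-city comparability."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite(weights, normalized_dims):
    """Weighted composite score for one city."""
    return sum(w * x for w, x in zip(weights, normalized_dims))

# Hypothetical 4x4 expert judgment matrix over the four dimensions
# (innovation, market, ecosystem coordination, policy empowerment).
judgments = [
    [1,     2,     3,   2],
    [1 / 2, 1,     2,   1],
    [1 / 3, 1 / 2, 1,   1 / 2],
    [1 / 2, 1,     2,   1],
]
w = ahp_weights(judgments)  # weights sum to 1; here innovation weighs most
```

A full AHP application would also check the consistency ratio of the judgment matrix before accepting the weights; that step is omitted here for brevity.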
Beyond computing composite scores and dimension profiles, Principal Component Analysis (PCA) is used as a structural representation test: it examines whether the four-dimensional signal system can be stably abstracted into a small set of dominant cognitive axes suitable for decision-oriented interpretation. Cities are further mapped into a two-axis space and categorized via a four-quadrant configuration to facilitate type recognition and mismatch diagnosis. Finally, a concise set of typical-city cases is employed for interpretive validation, checking whether index-implied structures can be meaningfully mapped to observable governance practices and implementation pathways. [Results/Conclusions] Results reveal a clear hierarchical gradient across cities. Leading cities tend to show coordinated advantages across multiple dimensions, indicating that urban low-altitude economy development depends on systemic coupling among technology, market, ecosystem coordination, and institutional supply rather than single-factor expansion. PCA suggests that urban development posture can be summarized along two dominant structural axes: an endogenous capability axis (driven by innovation, market expansion, and coordination) and an institutional empowerment axis (driven by policy and governance capacity). The four-quadrant typology highlights structural mismatches where capability accumulation and policy supply evolve asynchronously. While the study is constrained by data availability and the sector's early-stage diffusion, the LCDI provides a replicable, updatable, and interpretable intelligence tool for cross-city comparison, type-based diagnosis, and differentiated policy calibration, and it points to future work on dynamic monitoring and broader externality indicators.
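Once each city has scores on the two dominant axes, the four-quadrant typing reduces to a simple threshold rule. The quadrant labels and midpoint defaults below are illustrative assumptions, not the paper's terminology.

```python
def quadrant(capability, empowerment, cap_mid=0.5, emp_mid=0.5):
    """Assign a city to one of four illustrative types from its scores on the
    endogenous-capability axis and the institutional-empowerment axis."""
    if capability >= cap_mid and empowerment >= emp_mid:
        return "dual-high leader"
    if capability >= cap_mid:
        return "capability-led, policy-lagging"
    if empowerment >= emp_mid:
        return "policy-led, capability-lagging"
    return "dual-low follower"
```

The two off-diagonal types are where mismatch diagnosis applies: capability accumulation and policy supply evolving asynchronously.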

Digital Capital for the Elderly: Conceptual Connotation, Structural Dimensions and Scale Development | Open Access
ZHANG Ning, HE Boyun
2026, 38(2):  16-29.  DOI: 10.13998/j.cnki.issn1002-1248.25-0345

[Purpose/Significance] The global population is aging at an unprecedented pace. A digital capital scale is a key tool for addressing the challenge of digital inclusion for the elderly, making its development especially important. Digital capital not only encompasses the abilities and skills of the elderly in using information technology, but also focuses on the interaction among the social resources, cultural capital, and economic capital they acquire in the digital environment. It therefore helps enhance the theoretical understanding of the heterogeneity of the elderly's digital capabilities. [Method/Process] First, a semi-structured interview method was adopted to conduct in-depth interviews with 24 elderly individuals, guided by the digital capital framework and grounded in digital life scenarios in China. We also referred to existing studies on the digital literacy and digital capabilities of the elderly. Based on the coding results of the interview transcripts, a 7-dimensional scale for measuring the digital capital of the elderly was derived. Then, a preliminary reliability and validity analysis was conducted on a pre-test sample of 180 respondents, and the dimension indicators were appropriately adjusted. Subsequently, using the data from 380 formal questionnaires, the scale was verified and improved. Based on the principle of conceptual interpretability, the factor names of the four dimensions were re-examined, and the final version of the scale was established. Elbow estimation and the K-means clustering algorithm were then used to classify the digital capital levels of the elderly. [Results/Conclusions] The final scale consists of 19 items, covering four dimensions: digital resource acquisition ability, digital creation and expression ability, digital environment adaptation ability, and digital tool learning ability. Following optimization, the scale demonstrates excellent reliability and validity, and aligns closely with aging-friendly scenarios.
The scale can serve as a standardized instrument for measuring the digital capital level of the elderly population in China, laying the foundation for future large-scale surveys. By applying this scale, it is possible to effectively distinguish between groups of elderly individuals with varying levels of digital capital, providing empirical support for personalized digital services for the elderly. For the first time, this study systematically applies the digital capital theoretical framework to the elderly population, which compensates for the lack of standardized measurement tools and highlights the unique needs and challenges of the elderly in terms of dimensions, usage scenarios, and capability transformation. The proposed hierarchical model of digital capital among the elderly deepens our theoretical understanding of the differences in digital capabilities within this population.
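The elbow-plus-K-means classification step can be sketched directly. This is a toy, one-dimensional version operating on composite scale scores; the respondent scores below are hypothetical, and a production analysis would use a library implementation.

```python
import random

def kmeans_1d(scores, k, iters=50, seed=0):
    """Plain 1-D k-means on composite digital-capital scores."""
    rng = random.Random(seed)
    centers = rng.sample(scores, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in scores:
            j = min(range(k), key=lambda i: (s - centers[i]) ** 2)
            clusters[j].append(s)
        # Keep the old center if a cluster empties out.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def sse(scores, centers):
    """Within-cluster sum of squares; plotted against k for the elbow check."""
    return sum(min((s - c) ** 2 for c in centers) for s in scores)

# Hypothetical composite scores for ten respondents, on a 0-1 scale.
scores = [0.12, 0.15, 0.18, 0.45, 0.50, 0.52, 0.55, 0.85, 0.88, 0.91]
elbow = {k: sse(scores, kmeans_1d(scores, k)) for k in range(1, 5)}
```

The elbow estimate is the k after which the within-cluster sum of squares stops dropping sharply; for clearly stratified scores like these, that is where the "digital capital levels" come from.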

Risk Assessment and Early Warning of Generative Artificial Intelligence Impact on Network Public Opinion Based on Optimized BP Neural Network | Open Access
YI Chenhe, ZHANG Yuting
2026, 38(2):  30-41.  DOI: 10.13998/j.cnki.issn1002-1248.25-0495

[Purpose/Significance] Generative Artificial Intelligence (GAI) has rapidly reshaped the landscape of social information dissemination, bringing unprecedented network public opinion risks - such as large-scale disinformation spread, algorithmic bias-induced social inequality, extreme emotional polarization, and model hallucinations leading to cognitive deviations - that significantly amplify the complexity, suddenness, and cross-domain spillover effects of public opinion evolution. These risks not only undermine the authenticity and order of information ecosystems but also pose severe challenges to social governance, public trust, and policy-making efficiency, making accurate identification, quantitative assessment, and early warning an urgent academic and practical task. Existing research has obvious limitations: single-dimensional assessment frameworks fail to capture GAI's multi-faceted and interrelated risks, such as the concealment of generated content, algorithmic recommendation amplification, and cross-platform diffusion; traditional models such as basic BP neural networks suffer from susceptibility to local optima and poor generalization, inadequately adapting to the non-linear, dynamic, and high-dimensional attributes of GAI-generated content. To address these gaps, this study constructed a 4-dimensional risk assessment index system (content, dissemination, sentiment, and user) and proposed a GA-optimized BP neural network model, which will enrich public opinion management theories in the AI era and provide practical, efficient tools for precise risk control. It will contribute to the construction of a safe, orderly, and trustworthy online space.
[Method/Process] A mixed research method with solid theoretical foundations (information communication theory and intelligent optimization algorithms) and empirical support was adopted: Ten typical GAI-induced public opinion events were selected from Sina Weibo (selection criteria: views ≥1 million, original posts ≥60, covering technology, society, public affairs, and consumption fields). Following a four-stage evolutionary model (formation, outbreak, mitigation, and recovery) and four early warning levels (Level I-IV, corresponding to binary output vectors 1000, 0100, 0010, 0001) as specified in national emergency management standards, samples were systematically categorized into four evolutionary stages and corresponding risk grades. A 12-indicator system covering content (authenticity, misleadingness, and professionalism), dissemination (speed, scope, and diffusion path), sentiment (intensity, polarization degree, and negative ratio), and user (user influence, participant activity, and interaction stickiness) dimensions was constructed. The weights of each indicator were determined to ensure objectivity, and data preprocessing was performed via min-max normalization to eliminate dimensional differences. A 4-layer BP neural network (12 input neurons, 2 hidden layers with 15 and 10 neurons respectively, and 4 output neurons) was built, with initial weights, thresholds, and hyperparameters (learning rate and number of iterations) optimized by a genetic algorithm (GA). A traditional BP model served as the control group, with 70% of the data as the training set and 30% as the test set, and model performance was evaluated based on prediction accuracy. [Results/Conclusions] Experimental results confirm the significant superiority of the GA-BP model: its prediction accuracy reached 91.67%, 8.34 percentage points higher than the traditional BP model (83.33%).
This verifies that GA optimization effectively improved model performance, enabling better capture of complex non-linear relationships among GAI-induced risk factors. The multi-dimensional index system successfully extracted core risk characteristics, realizing comprehensive identification and traceability of GAI-related public opinion risks. Limitations of this study include sample concentration on Chinese social platforms, limited case quantity, and narrow time span. Future research will expand cross-border, multi-language samples (e.g., Twitter, Facebook), enrich technical indicators (e.g., GAI content identifiability, algorithmic intervention intensity), and explore integration with deep learning models (e.g., LSTM, Transformer) to further enhance the generalizability, real-time performance, and intelligent decision-making support capabilities of the risk assessment system.
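The core GA mechanism used above (evolving candidate initial weights through tournament selection, uniform crossover, and Gaussian mutation, with elitism) can be illustrated with a stand-in fitness function. In the paper each genome would be scored by briefly training the 12-15-10-4 BP network; here that is replaced by a simple squared-error surrogate, and all parameter values are assumptions for illustration.

```python
import random

def evolve(fitness, dim, pop_size=30, gens=40, mut_rate=0.1, seed=0):
    """Tiny real-coded GA with elitism, minimizing `fitness` over dim genes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]  # elitism: carry the best genome forward unchanged
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [g + rng.gauss(0, 0.3) if rng.random() < mut_rate else g
                     for g in child]  # Gaussian mutation
            nxt.append(child)
        pop = nxt
        best = min(pop, key=fitness)
    return best

# Hypothetical surrogate fitness: squared error of a 12-gene genome against a
# fixed target, standing in for a short BP training run's validation loss.
target = [0.5] * 12

def loss(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

best = evolve(loss, 12)
```

Because of elitism, the best fitness is non-increasing across generations, which is the property that lets the GA escape the poor random initializations that trap a plain BP network in local optima.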

Construction of an Artificial Intelligence Literacy Ability Framework and Training System for College Students | Open Access
HU Anqi
2026, 38(2):  42-55.  DOI: 10.13998/j.cnki.issn1002-1248.25-0448

[Purpose/Significance] The rapid proliferation of generative artificial intelligence (AI), exemplified by models like DeepSeek-R1, has precipitated a paradigm shift across various sectors, positioning AI literacy as an indispensable competency for the future workforce. University students, as digital natives and pivotal agents of technological adoption and innovation, stand at the forefront of this transformation. Their proficiency in understanding, utilizing, and critically evaluating AI technologies directly influences their academic performance, research capabilities, and long-term career adaptability. Although existing literature has begun to explore the conceptual landscape of AI literacy, a significant gap remains. There is an absence of a robust, empirically validated competency framework specifically tailored to the unique learning contexts, developmental needs, and future roles of university students within China's higher education system. This study aims to address this critical gap by constructing and validating a comprehensive AI literacy competency framework for college students. Its primary significance lies in its ability to move beyond theoretical discourse and provide an evidence-based model that can guide the systematic development of targeted training programs. This enriches the theoretical underpinnings of AI literacy education and offers practical guidance for cultivating high-quality talent equipped for the intelligent era. [Method/Process] This research employed a mixed-methods approach, integrating qualitative and quantitative methods to provide both theoretical grounding and empirical robustness. The study commenced with a qualitative phase utilizing the grounded theory methodology. A systematic analysis of 112 core academic publications (2019-2024) from databases such as CNKI and Web of Science was conducted.
Through a rigorous process of open coding, axial coding, and selective coding, facilitated by NVivo 11 software, we extracted 300 initial concepts, which were subsequently synthesized into 26 sub-categories and ultimately 4 main categories. This process resulted in the preliminary construction of a four-dimensional AI literacy competency framework. Following this, a quantitative phase was implemented to test and refine the framework. A detailed questionnaire was developed based on the identified dimensions and indicators. Utilizing a five-point Likert scale, the questionnaire measured 26 variables corresponding to the framework's sub-components. A total of 586 valid responses were collected from undergraduate students across universities in Jiangsu Province, China. The dataset was randomly split into two halves. The first subset (N=293) underwent exploratory factor analysis (EFA) using SPSS to uncover the underlying factor structure and assess internal consistency reliability via Cronbach's alpha. The second subset (N=293) was subjected to confirmatory factor analysis (CFA) using AMOS to verify the hypothesized factor structure, evaluate model fit indices (e.g., CMIN/DF, CFI, TLI, RMSEA), and establish convergent and discriminant validity by examining average variance extracted (AVE) and composite reliability (CR). [Results/Conclusions] The empirical analyses strongly support the validity and reliability of the proposed competency framework. The EFA clearly identified four distinct factors that aligned with the predefined dimensions, with a total variance explained of 69.916% and all factor loadings exceeding 0.6. The CFA results demonstrated excellent model fit (CMIN/DF=1.921, CFI=0.950, TLI=0.943, RMSEA=0.056), confirming the structural integrity of the framework. Furthermore, all constructs exhibited high internal consistency (Cronbach's α>0.90) and satisfactory convergent (AVE>0.5, CR>0.7) and discriminant validity.
The finalized framework, therefore, comprises four interconnected core dimensions: AI Cognition (encompassing knowledge of basic concepts, applications, value, and risks), AI Skills (covering practical abilities from tool usage and programming to critical evaluation and innovation), AI Ethics (emphasizing social responsibility, privacy, intellectual property, and legal compliance), and AI Thinking (fostering higher-order cognitive abilities like computational, critical, and systemic thinking). Based on this validated framework, the study proposes a systematic and multi-faceted training system. This system outlines clear training objectives, identifies key stakeholders (e.g., university libraries, teaching centers, schools, and external enterprises), designs layered training content and pathways corresponding to each dimension, and suggests implementation strategies focusing on faculty development, a comprehensive assessment and feedback mechanism, and the strategic integration of AI-related resources. The main limitation of this study is that the respondents of the questionnaire were primarily college students during the empirical test stage. Future research can include teachers, business employers, and AI experts to modify and improve the index weight and content of the competency framework from multiple perspectives. This can be done through the Delphi method, expert interviews, and other methods, so as to enhance the framework's authority and universality.
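The internal-consistency figures reported above (Cronbach's α > 0.90) come from a formula that is straightforward to compute directly. The sketch below uses hypothetical five-point Likert responses, not data from the study.

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha. `items` holds one response list per scale item,
    with respondents in the same order across items."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 5-point Likert responses: 3 items, 4 respondents.
alpha = cronbach_alpha([[1, 2, 4, 5], [2, 2, 5, 4], [1, 3, 4, 5]])
```

Alpha approaches 1 as items covary strongly relative to their individual variances, which is why highly correlated items within one dimension produce the α > 0.90 values the study reports.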

Model Construction and Strategies for AI-enabled University Library Services to Facilitate Scientific and Technological Achievement Transformation | Open Access
GUO Hailing, ZENG Meiyun, FENG Yuxi
2026, 38(2):  56-65.  DOI: 10.13998/j.cnki.issn1002-1248.25-0568

[Purpose/Significance] Against the backdrop of national innovation-driven development strategies and the pressing need to enhance the efficiency with which scientific and technological achievements are transformed within universities, university libraries are undergoing a critical transition. They are shifting from being traditional, passive information providers to becoming proactive, embedded partners in the research and innovation value chain. However, this transition is often hampered by inherent limitations in traditional service models. This study, therefore, posits artificial intelligence (AI) as a pivotal enabler and investigates the specific mechanisms through which AI technologies can empower university libraries to achieve deep, systemic integration into the entire lifecycle of technology transfer. The research aims to provide a comprehensive theoretical framework for understanding this transformation and offer actionable, evidence-based practical pathways for academic libraries to redefine their functional boundaries and substantially strengthen the institutional support ecosystem for university technology transfer. [Method/Process] This research employs a qualitative multi-case study design, underpinned by an analytical framework constructed around the four critical, sequential stages of the technology transfer lifecycle: 1) research topic selection and project initiation, 2) research and development, 3) project conclusion and evaluation, and 4) marketization and industrialization of outcomes. Case selection followed purposive sampling criteria to ensure representation across diverse contexts, including domestic and international universities, as well as varied library types. The primary data comprised detailed case descriptions from published academic literature, institutional reports, and official service platforms. 
Within this staged framework, the analysis focuses on two intertwined dimensions at each phase: the evolution of the library's core service functions and the transformative impact of AI empowerment. Through a comparative cross-case analysis, this study examines how specific AI technologies augment traditional services, fundamentally changing the role and value proposition of libraries. [Results/Conclusions] The results show that through intelligent information analysis, knowledge association, data mining, and precise matching, AI can enable university libraries to shift from resource supply-oriented support to collaborative services that run through the entire lifecycle of technology transfer. This transformation manifests across the four-stage lifecycle as a shift: from providing literature to forecasting opportunities at the initiation phase; from offering patent data to navigating R&D pathways and risks during development; from archiving outputs to assessing value and potential at conclusion; and from disseminating information to intelligently brokering industry partnerships at the commercialization phase. Synthesizing these stage-specific transformations, this study constructs a novel, integrated service framework. This framework explicitly links specific AI capabilities with the redefined core functions of the library at each stage, illustrating the transition from a linear support model to a dynamic, AI-augmented ecosystem wherein the library serves as a central intelligence node. Meanwhile, this study reveals practical challenges in current practices, including ambiguous organizational boundaries, insufficient professional capabilities, and imperfect evaluation mechanisms oriented toward technology transfer.
Correspondingly, it proposes strategies such as clarifying collaborative positioning, strengthening the construction of AI-empowered service capabilities, and improving technology transfer-oriented evaluation mechanisms to promote the sustainable development of AI-empowered research services in university libraries.

Collaborative Development Path of GLAM Institutions Based on AIGC Technology Application | Open Access
HUANG Xiaotang, YAO Qibin
2026, 38(2):  66-78.  DOI: 10.13998/j.cnki.issn1002-1248.25-0590

[Purpose/Significance] Under the strategic background of national cultural digitization and the high-quality development of public services, artificial intelligence generated content (AIGC) has become a core engine driving the digital and intelligent transformation of galleries, libraries, archives, and museums (GLAM). While AIGC offers unprecedented opportunities for content production and knowledge dissemination, current implementations often suffer from fragmentation, leading to new "data islands" and service barriers. Unlike previous studies, which treat GLAM institutions as a homogeneous whole, this paper aims to clarify the differentiated application paths of AIGC by distinguishing the unique "resource-technology-service" logic of each institution type. It seeks to reveal the structural causes of current collaborative dilemmas and construct a systematic collaborative development mechanism. This research is significant for breaking down institutional barriers, promoting the deep integration of cultural resources, and guiding GLAM institutions to shift from isolated technological upgrades to a clustered, symbiotic development model. [Method/Process] Adopting a digital ecosystem perspective, this study constructs a "Resource Attributes - Technology Adaptation - Service Goals" framework to systematically analyze the heterogeneous characteristics of the four institution types. The analysis reveals how distinct data morphologies - ranging from structured texts in libraries and semi-structured records in archives to multimodal artifacts in museums and unstructured works in art galleries - fundamentally dictate the differentiated deployment of generative text or vision models. 
By examining core capabilities including intelligent content twinning, editing, and creation, the study demonstrates how service goals strictly regulate technical choices: the emphasis on "access" and "trust" in libraries and archives necessitates technologies that ensure semantic accuracy and historical authenticity, whereas the pursuit of "experience" and "creativity" in museums and art galleries favors generative tools for immersive interaction and open-ended aesthetic expression. [Results/Conclusions] To address the identified challenges of fragmented development, the study proposes a tripartite collaborative development architecture consisting of a "Front-end Resource Layer," a "Mid-platform Technology Layer," and an "End-user Service Layer." The Front-end Resource Layer focuses on constructing a unified multimodal data foundation and standardized semantic ontology to bridge the semantic gap between heterogeneous institutional data. The Mid-platform Technology Layer advocates for the co-construction of an industry-specific general large model and a knowledge reasoning engine; by sharing API interfaces and computing power, this layer solves the high technical threshold and cost issues for smaller institutions, acting as a ubiquitous "industry capability hub." The End-user Service Layer aims to build a one-stop knowledge exploration portal and cross-domain expert workbenches, eliminating service isolation and creating integrated cultural scenarios. The study concludes that GLAM institutions must transition from "cultural containers" to "knowledge engines" through this architecture. Future research should further focus on copyright ethics, algorithmic governance, and new modes of human-machine collaboration to ensure the sustainable and trustworthy development of the digital cultural community.

Integrating Digital Humanities and Agricultural Knowledge Services: A Simulation Modeling Perspective | Open Access
ZHANG Ling
2026, 38(2):  79-89.  DOI: 10.13998/j.cnki.issn1002-1248.25-0683

[Purpose/Significance] This study aims to systematically examine the application of simulation modeling in bibliometrics and to clarify its methodological position within the broader framework of digital humanities tools and agricultural knowledge services. In particular, the paper highlights the innovative potential of integrating simulation modeling with generative artificial intelligence, which enables more flexible representation of heterogeneous behaviors and context-dependent decision-making processes. By bridging bibliometrics, digital humanities tools, and agricultural knowledge services, this research contributes to the theoretical advancement of bibliometric methodology and provides a structured foundation for future applications in agricultural information practice. [Method/Process] This study adopts a systematic literature-based analytical approach to review and synthesize major simulation modeling methods applied in bibliometrics. The analysis covers several representative categories of simulation models, including dynamic modeling of classical bibliometric laws, evolution models of co-authorship and citation networks, multi-agent-based simulation, information and knowledge diffusion models, and evolutionary game-theoretic models. These methods are examined with respect to their modeling objects, underlying assumptions, key parameters, and analytical capabilities. Rather than organizing the review solely by research topics, this study emphasizes simulation modeling logic as the central analytical thread. Each category of simulation method is analyzed in terms of how micro-level rules and interactions generate macro-level bibliometric patterns. Particular attention is paid to the role of digital humanities tools in operationalizing these models, especially through visualization, system integration, and interactive simulation environments that facilitate exploration and interpretation. 
In addition, this study introduces recent advances in generative artificial intelligence, particularly large language model-based agents, as an extension of traditional multi-agent simulation. By incorporating generative AI into simulation frameworks, it becomes possible to model heterogeneous agents with richer cognitive representations, adaptive behaviors, and contextual reasoning abilities. The methodological discussion draws on theoretical foundations from bibliometrics, complex systems, and computational social science, while also considering practical constraints related to data availability, model calibration, and validation. [Results/Conclusions] The analysis demonstrates that simulation modeling significantly enhances the explanatory power of bibliometric research by revealing dynamic mechanisms behind literature growth, collaboration structures, and knowledge diffusion processes. Compared with traditional static indicators, simulation-based approaches provide deeper insights into how bibliometric patterns emerge and evolve over time. The integration of generative artificial intelligence further expands this capability by enabling more realistic modeling of behavioral heterogeneity and context-sensitive decision-making among research actors. From an application perspective, the study shows that simulation models and associated digital humanities tools can be effectively embedded into agricultural knowledge service workflows. These applications include research evaluation, scientific information services, and policy communication, where simulation-based scenario analysis can support strategic planning and decision-making. At the same time, the study identifies several challenges, including data quality constraints, computational costs, and issues related to model interpretability and transparency. 
The findings suggest that future research should focus on improving data integration, enhancing model validation strategies, and further exploring the integration of generative AI to support more adaptive and explainable simulation systems. By doing so, simulation-based bibliometrics can play a more substantial role in advancing agricultural information services and research management in complex, data-intensive environments.
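As a concrete instance of the micro-to-macro logic this abstract emphasizes, a minimal cumulative-advantage (preferential-attachment) citation model can be written in a few lines: a simple micro-rule, citing in proportion to prior citations, generates the skewed in-degree distributions familiar from bibliometric laws. The parameter values are illustrative and the model is deliberately stripped down.

```python
import random

def simulate_citations(n_papers, cites_per_paper=3, seed=0):
    """Growing citation network: each new paper cites earlier papers with
    probability proportional to (in-degree + 1). Attachment weights are
    frozen per new paper for simplicity."""
    rng = random.Random(seed)
    in_deg = [0] * n_papers
    edges = []
    for p in range(1, n_papers):
        k = min(cites_per_paper, p)
        weights = [in_deg[q] + 1 for q in range(p)]
        cited = set()
        while len(cited) < k:  # weighted sampling without replacement
            r = rng.random() * sum(weights)
            acc = 0.0
            for q, w in enumerate(weights):
                acc += w
                if r <= acc:
                    cited.add(q)
                    break
        for q in cited:
            in_deg[q] += 1
            edges.append((p, q))
    return in_deg, edges
```

Running the model and inspecting `in_deg` shows early papers accumulating disproportionately many citations, the rich-get-richer dynamic; an LLM-agent extension would replace the fixed attachment rule with context-dependent citing behavior per agent.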

Promotion of Chinese Classical Literature for Children's Reading: Applications and Initiatives of Sora-Type Video Generation | Open Access
MAO Kaiyan
2026, 38(2):  90-103.  DOI: 10.13998/j.cnki.issn1002-1248.25-0429

[Purpose/Significance] Chinese classical texts are central to preserving and transmitting traditional culture; however, promoting them among children has long faced many obstacles: the linguistic barrier posed by classical Chinese, the cognitive distance caused by cultural discontinuity, and the limitations of static and monotonous promotional forms. These challenges have often resulted in low levels of engagement and comprehension among young readers. The recent emergence of Sora-type video generation models, characterized by their ability to produce coherent long-form narratives, integrate multimodal information, and simulate spatially consistent scenes, opens up new opportunities for bridging this gap. This study aims to investigate how such models can be effectively employed in the promotion of Chinese classics among children, to evaluate their potential benefits and inherent risks, and to develop practical strategies that align technological capabilities with educational and cultural objectives. [Method/Process] This research adopts a combined approach of literature review, case study, and comparative analysis. First, it reviews existing literature on the application of artificial intelligence in reading promotion, highlighting current achievements and limitations. Second, it uses representative Chinese classics, including Shan Hai Jing, Strange Tales from a Chinese Studio (Liaozhai Zhiyi), and The Book of Songs (Shijing), to examine how Sora-generated videos function in different promotional contexts. Third, it constructs an analytical framework based on three interrelated dimensions: scenes, content, and approaches. Within this framework, the study identifies opportunities, delineates challenges, and proposes targeted countermeasures. [Results/Conclusions] Sora-type video generation can substantially enhance the promotion of Chinese classics among children. 
At the scene level, it allows traditional spaces to be extended into immersive and hybrid environments, thereby broadening access beyond classrooms and libraries. At the content level, it transforms abstract imagery and complex narratives into visual forms, reducing cognitive barriers and accommodating differentiated learning needs. At the approach level, it facilitates text-image complementarity, cross-media integration, and personalized recommendations, thereby strengthening engagement and sustaining reading motivation. However, the study also cautions against significant risks. These include the mismatch between generated content and specific promotional settings, the danger of oversimplification or distortion of classical texts, and the over-reliance on audiovisual materials that might undermine children's ability to engage in deep textual reading. To address these risks, the article proposes a threefold strategy: differentiated scene design, content transformation with cultural fidelity, and complementary pathways that ensure children transition from video to text.