Research Article | Peer-Reviewed

Research on Application Pathways and Development Trends of Generative Artificial Intelligence in 3D Modeling

Received: 22 July 2025     Accepted: 31 July 2025     Published: 13 August 2025
Abstract

With the rapid advancement of Generative Artificial Intelligence (AIGC, Artificial Intelligence Generated Content) technology, 3D modeling, as a core component of digital content creation, is undergoing a profound transformation from "human-driven" to "intelligent generation." Traditional 3D modeling relies on specialized software and manual operations by modelers, characterized by complex workflows, inefficiency, and high skill barriers. In contrast, AIGC enables the automatic generation of 3D geometry, topological relationships, and texture mapping information through natural language prompts (Prompt), image inputs, or sketch instructions, significantly enhancing modeling efficiency and creative freedom. This paper systematically reviews the current primary pathways—Text-to-3D, Image-to-3D, and Sketch-to-3D—based on the technical principles of generative models. It conducts an in-depth analysis of the application characteristics of representative platforms such as MeshyAI, Kaedim, Tripo, and Hunyuan 3D. Through case studies, the feasibility and operational workflows of AIGC modeling in character asset generation, scene construction, and teaching practices are examined. Furthermore, the study comparatively analyzes the differences between AIGC and traditional modeling approaches in terms of efficiency, quality, and scalability, highlighting current challenges faced by AIGC, including precision control, limited editability, and copyright compliance. The research posits that AIGC is reconstructing the paradigm of 3D modeling, propelling 3D content production towards a new era of "intelligent collaboration" and "low-barrier generation." Future advancements are expected to be driven by the deep integration of AIGC with Digital Content Creation (DCC) toolchains, the evolution of multimodal large models, and enhanced semantic control capabilities of Prompts. 
This study aims to provide a systematic reference and trend analysis for the integration of AIGC modeling technology within higher education, industry practices, and AI development.

Published in American Journal of Artificial Intelligence (Volume 9, Issue 2)
DOI 10.11648/j.ajai.20250902.13
Page(s) 122-128
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Generative Artificial Intelligence, 3D Modeling, AIGC Tools, Technical Pathways, Development Trends

1. Introduction
The rapid development of artificial intelligence has led Generative Artificial Intelligence (AIGC) to permeate various content creation domains such as images, text, audio, and video, emerging as a core force in the new wave of technological revolution. Within the field of 3D modeling, the structural transformation instigated by AIGC is particularly significant. Traditional 3D modeling, reliant on specialized software and highly skilled labor, suffers from long production cycles, operational complexity, and steep learning curves, increasingly failing to meet the dual demands of the digital content industry for rapid production and personalized expression.
AIGC offers novel pathways for 3D modeling by enabling the automatic generation of 3D models from semantic intent through multimodal inputs like natural language prompts (Prompt), image-driven inputs, and sketch inputs. In sectors such as animation, gaming, digital art, the metaverse, and virtual reality, the efficiency, low entry barrier, and scalability of AIGC are progressively altering content production methods and the roles of creators, ushering 3D content creation into a new era of "intelligent generation."
Therefore, systematically researching the application pathways and development trends of AIGC in 3D modeling is crucial not only for establishing new methodological frameworks but also holds significant theoretical and practical value for educational reform, industrial upgrading, and the cultivation of creative talent.
Internationally, research in AIGC and 3D content generation began earlier. Technology companies and academic institutions such as OpenAI, Google, Meta, and NVIDIA have introduced models like DreamFusion, Point-E, GET3D, and Trellis, exploring intelligent generation mechanisms from text or images to 3D forms. In China, institutions represented by Tsinghua University, Peking University, SenseTime, and Tencent Hunyuan have made significant progress in developing platforms like Rodin, Tripo, and Hunyuan 3D, facilitating the engineering and industrial application of AIGC modeling technology.
Although existing research has achieved results in technical models, algorithm performance, and tool development, the field as a whole exhibits "strong theory, weak practical pathways" and "abundant isolated experiments, scarce systemic applications." Comprehensive research and practical guidance are lacking on integration into teaching and design workflows and on practical case validation; significant gaps remain, particularly around the workflow mechanisms of Prompt-driven 3D modeling, efficiency evaluation, and application-scenario integration.
This study focuses on the integration pathways and technical logic of generative AI in 3D modeling, conducting systematic research centered on key questions such as: "Is the generation mechanism controllable?", "Does the content quality meet standards?", "Can it be embedded in teaching practices?", and "Is it compatible with industrial workflows?". Specific objectives include:
1) Reviewing the mainstream development trajectories and generation mechanisms of current AIGC technology in the 3D modeling domain.
2) Analyzing the functional modes and practical workflows of representative AIGC modeling platforms.
3) Summarizing the differences and evolutionary trends between AIGC and traditional modeling approaches regarding toolchains, efficiency, and application scenarios.
4) Proposing application frameworks and development recommendations for AIGC-driven 3D modeling in education and industry.
Employing a combination of literature analysis, case study, and comparative evaluation methods, and based on hands-on testing of AIGC tools and teaching project experience, this study extracts practical 3D modeling pathways and trend assessments.
The paper is structured as follows: Chapter 1 serves as the introduction, outlining the research background, problem statement, and significance. Chapter 2 introduces the technical principles of AIGC and the evolutionary logic of 3D modeling paradigms. Chapter 3 analyzes the functions and application pathways of typical platforms and tools. Chapter 4 systematically summarizes the advantages, challenges, and development trends of AIGC modeling. Chapter 5 presents the research conclusions and future outlook.
Through this research, the paper endeavors to provide a systematic foundation and practical guidance for the theoretical construction, tool application, and educational transformation of AIGC-empowered 3D modeling.
2. Methodology
Generative Artificial Intelligence (AIGC) is centered on deep learning. Through training on large-scale corpora and across modalities, it achieves autonomous generation of diverse content forms, from text and images to video and 3D models. Representative technologies include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Diffusion Models, and Transformer architectures.
2.1. Research Design
We employed a mixed-methods approach:
1. Literature Review: Analyzed 50+ peer-reviewed articles and technical reports on AIGC and 3D modeling.
2. Case Studies: Tested AIGC platforms (MeshyAI, Kaedim, Hunyuan 3D) on three tasks:
1) Character asset generation ("Future Ninja Cat").
2) Scene construction (urban environment).
3) Teaching workflows (student-led projects).
3. Comparative Analysis: Evaluated AIGC against traditional tools (Blender, Maya) in efficiency, quality, and scalability.
2.2. Fundamental Principles and Core Models of AIGC
Among these, GANs consist of a generator and a discriminator that learn data distributions through adversarial training, making them suitable for generating image and model details. VAEs encode and sample latent spaces, which aids controllability. Diffusion Models start from random noise and progressively reconstruct the target, demonstrating superior performance in high-quality 3D shape generation.
Furthermore, the rise of cross-modal large models (e.g., CLIP, DALL·E, Stable Diffusion) in recent years provides robust support for semantic understanding and style control in AIGC for 3D modeling.
2.3. Review of Traditional 3D Modeling Workflow
Traditional 3D modeling typically comprises the following main stages:
1) Modeling Design: Manually constructing geometric structures using polygon modeling, NURBS/surface modeling, etc., in DCC software like Maya, 3ds Max, or Blender.
2) Topology Optimization: Streamlining the model structure and optimizing edge flow (retopology) to ensure efficiency and stability in subsequent animation and rendering.
3) UV Unwrapping and Texture Painting: Flattening the 3D surface into a 2D image (UV map) for texture painting and material application.
4) Rigging and Animation: Binding the model structure to a skeletal system to enable articulation and movement.
5) Lighting and Rendering: Setting up scene lighting and materials to produce the final images or animation frames.
While this workflow offers high customizability, it heavily relies on modeler expertise, involves complex operations, and has limited efficiency, struggling to meet the demands for rapid generation and personalized expression.
2.4. Key Intervention Points of AIGC in 3D Modeling
AIGC applications within the 3D modeling workflow manifest primarily at the following key stages:
1) Text-Driven Modeling (Text-to-3D): Users describe model features via natural language; the system automatically generates the basic 3D form.
2) Image-Driven Modeling (Image-to-3D): Reconstructing model structures from single or multi-view images, converting 2D visuals into 3D geometry.
3) Sketch-to-3D Modeling: Utilizing sketch inputs to parse shape logic and automatically generate editable mesh models.
4) Texture and Material Generation: Leveraging diffusion models to create high-quality texture maps and Physically Based Rendering (PBR) materials, enhancing model realism.
5) Semantic-Assisted Editing: Using language prompts to control model dimensions, style, and details, aiding creators in efficiently adjusting generated results.
These intervention points signify the progressive replacement or enhancement of multiple traditional modeling stages by AIGC, forming a novel human-AI collaborative creation pipeline, as summarized in Table 1.
Table 1. Comparison between AIGC modeling and traditional modeling.

| Comparative dimension | Traditional modeling (Maya/Blender, etc.) | AIGC modeling (Meshy/Rodin, etc.) |
| --- | --- | --- |
| Technical threshold | High; requires mastery of multiple modeling techniques | Low; modeling is driven by Prompts or image interaction |
| Workflow | Multi-step linear process, highly dependent on manual operation | Highly automated; supports multimodal input and process compression |
| Creative efficiency | Slow; long model output cycles | Fast; models can be produced in minutes |
| Model quality | High precision and strong controllability, but dependent on human experience | Steadily improving, though detail control and post-processing are still needed |
| Application scenarios | High-quality commercial animation, film, and game production | Teaching practice, rapid prototyping, independent development, early concept design |

2.5. Analysis of Modeling Paradigm Evolution Trends
AIGC is driving a paradigm shift in 3D modeling from "manual construction" to "intelligent generation." Technologically, Prompt-driven modeling has become a powerful tool for efficient prototyping and IP design.
Workflow-wise, the trend towards integrating automatic generation with traditional editing toolchains is increasingly evident, with software like Blender and Maya progressively supporting AIGC plugin integration.
Furthermore, from educational and industrial perspectives, future modeling talent will need to possess both AI tool proficiency and traditional modeling aesthetic judgment, necessitating dual reforms in university curricula and vocational skill models.
In summary, generative AI represents not merely a technological update to modeling tools, but a fundamental transformation of content creation logic, workflow design, and talent cultivation models, possessing high foresight and strategic significance for the future.
3. Analysis of Application Pathways for AIGC Tools in 3D Modeling
3.1. Data Collection
Metrics: Modeling time, polygon count, topological accuracy, user satisfaction.
Tools: Blender for post-processing; Unity for rendering.
Participants: 20 students and 10 professional modelers.
3.2. Efficiency Gains
Text-to-3D: Reduced modeling time by 60% (average: 2 hours vs. 5 hours traditionally).
Image-to-3D: Achieved 85% accuracy in reconstructing models from single images (see Table 2).
Table 2. Comparative results (AIGC vs. traditional modeling).

| Metric | Traditional (Maya) | AIGC (Meshy AI) | Improvement |
| --- | --- | --- | --- |
| Time per model (hrs) | 5.2 | 2.1 | 60% |
| Topology errors | 12% | 28% | -16% |
| User satisfaction | 7.5/10 | 8.4/10 | +12% |
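The improvement column in Table 2 can be recomputed from the raw figures. The helper below is a plain-Python sketch (the function name is ours, not from the study): time and satisfaction are relative changes, while the topology row reflects a 16-percentage-point regression reported as -16%.

```python
def pct_improvement(before: float, after: float, lower_is_better: bool = True) -> float:
    """Relative change in percent; positive means the AIGC result is better."""
    if lower_is_better:
        return (before - after) / before * 100
    return (after - before) / before * 100

# Time per model: 5.2 h -> 2.1 h, roughly the 60% reduction reported.
time_gain = pct_improvement(5.2, 2.1)
# User satisfaction: 7.5/10 -> 8.4/10, the reported +12%.
satisfaction_gain = pct_improvement(7.5, 8.4, lower_is_better=False)
# Topology errors rose from 12% to 28%: a 16-percentage-point regression.
topology_delta_pp = 12 - 28
```

Note that the topology figure is a difference in percentage points, not a relative change; mixing the two is a common source of confusion when reading such tables.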

Text-driven modeling is one of the most representative application pathways of AIGC in 3D modeling. Users input a Prompt (e.g., "a stylized sci-fi owl"), and the system automatically generates a 3D model based on semantic understanding and style-library matching.
3.3. Common Platforms
Tripo: Supports semantic parsing and detail control, suitable for rapid character prototyping.
Luma AI: Combines Neural Radiance Fields (NeRF) for high realism and outputs editable mesh files.
Masterpiece Studio: Designed for game and VR assets, compatible with mainstream engine formats.
Such tools typically offer advantages like rapid generation speed, support for style transfer, and reasonable base model structures, making them suitable for early-stage creative ideation and teaching experiments.
Image-driven modeling technology takes images as input and reconstructs corresponding 3D structures using deep learning.
Representative tools include:
Hunyuan 3D (Tencent): Reconstructs textured 3D models from single images, suitable for IP recreation and commercial model production.
Rodin: Supports multi-view image fusion for enhanced reconstruction completeness and realism.
Meshy AI: Combines diffusion models with multi-view training, offering style-consistency control and model cleanup functions.
These tools are suitable for creators with existing visual concepts to achieve rapid 3D conversion and can also assist in digitizing legacy assets.
In the early stages of visual ideation, sketch-driven modeling serves as an intuitive input method, helping creators transform freehand drawings into editable 3D forms.
Platforms like Sketch2Mesh or experimental AI plugins implement automatic sketch-to-closed-mesh recognition, enabling rapid reconstruction combined with semantic judgment.
Moreover, many AIGC tools are incorporating natural-language editing modules. For instance, in Meshy or Luma, inputting "shorten the tail by 30%" can directly adjust the model structure, further lowering the operational barrier.
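A natural-language edit such as the one above must ultimately be parsed into a structured operation on the model. The toy parser below is purely illustrative (the grammar and function name are ours; Meshy and Luma do not publish this internal form): it maps "shorten the tail by 30%" to a part name and a scale factor that a downstream mesh editor could apply.

```python
import re

def parse_scale_edit(instruction: str) -> tuple[str, float]:
    """Map 'shorten/shrink/lengthen/enlarge the <part> by <N>%' to (part, factor)."""
    pattern = r"(shorten|shrink|lengthen|enlarge)\s+the\s+(\w+)\s+by\s+(\d+)\s*%"
    m = re.search(pattern, instruction.lower())
    if m is None:
        raise ValueError(f"unrecognized edit instruction: {instruction!r}")
    verb, part, pct = m.group(1), m.group(2), int(m.group(3))
    # Shrinking verbs scale below 1.0; growing verbs scale above it.
    factor = (100 - pct) / 100 if verb in ("shorten", "shrink") else (100 + pct) / 100
    return part, factor

part, factor = parse_scale_edit("Shorten the tail by 30%")  # -> ("tail", 0.7)
```

Real systems replace the regular expression with a language model, but the output contract is the same: an identifiable target part plus a quantified transformation.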
Many current AIGC modeling platforms are progressively integrating with mainstream DCC software (e.g., Blender, Maya, Unity), enabling seamless generate-import-edit workflows.
Common integration paths include:
Embedding as plugins within Blender or Maya, allowing Prompt-based model generation directly in the modeling interface.
Supporting standard file formats (FBX/OBJ/GLTF) for easy import into animation scenes and rendering engines.
Providing open APIs, enabling developers to customize integration based on project requirements.
For teaching and project development, a "generate draft + traditional refinement" mode is recommended, fostering complementary collaboration between AIGC and human creators to enhance content quality and production efficiency.
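The "generate draft + traditional refinement" handoff can be modeled as a small contract: a Prompt goes in, a file in one of the interchange formats named above (FBX/OBJ/GLTF) comes out, and that file is what the DCC tool imports. The sketch below is a stub, not a real client — the platform call, payload, and filename scheme are all hypothetical:

```python
from dataclasses import dataclass

INTERCHANGE_FORMATS = {"FBX", "OBJ", "GLTF"}  # formats named in the text above

@dataclass
class DraftAsset:
    prompt: str
    file_format: str
    path: str  # where the downloaded file would land, ready for DCC import

def generate_draft(prompt: str, file_format: str = "FBX") -> DraftAsset:
    """Stub for the generate-import-edit pipeline.

    A real client would POST the prompt to a platform API (Meshy, Tripo, etc.)
    and download the result; here we only derive a local filename so the
    contract between AI generation and manual refinement stays explicit.
    """
    if file_format not in INTERCHANGE_FORMATS:
        raise ValueError(f"unsupported interchange format: {file_format}")
    slug = "_".join(prompt.lower().split())[:40]
    return DraftAsset(prompt=prompt, file_format=file_format,
                      path=f"{slug}.{file_format.lower()}")

draft = generate_draft("a cartoon cat wearing futuristic armor")
# draft.path is then imported into Blender for retopology, rigging, and rendering.
```

Keeping the interchange format explicit at the boundary is what makes the draft portable across Blender, Maya, and game engines.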
Integrating specific case studies to guide students through the complete workflow from Prompt design to model output has become an important means to enhance their comprehensive skills in generative AI-assisted 3D modeling teaching practices. Taking the animated character "Future Ninja Cat" as an example, an instructor designed a rapid character modeling task based on the Tripo platform, aiming to train students in mastering core AIGC modeling workflows and cross-platform collaboration skills.
At the start of the teaching activity, students conceive and input a Prompt statement, such as: "A cartoon cat wearing futuristic armor, with laser eyes." This exercise combines keywords with visual imagery, honing students' ability to match semantic organization with visual imagination. The Tripo platform rapidly generates multiple candidate 3D models with varying styles and detail levels based on the Prompt. Students select from this list and perform further refinements on their chosen model, such as adjusting tail length, ear shape, or armor structure, thereby strengthening their sensitivity and judgment regarding model structure and visual language.
After the model generation phase, students export the final version as a standard FBX file and import it into traditional DCC software like Blender to complete subsequent tasks like rigging, material adjustment, and rendering. This workflow not only familiarizes students with the handover between AI generation and manual optimization but also deepens their understanding of the logical relationships between different stages of the 3D modeling pipeline. Concurrently, the case incorporates comprehensive evaluation dimensions, including the accuracy and creativity of Prompt writing, the structural quality of the final model, the consistency between visual style and Prompt semantics, and students' efficiency in data transfer and operation across different tools. Peer review of model outcomes during the teaching process further enhances project communication and aesthetic expression skills.
Through the concrete teaching task of "Future Ninja Cat," students not only master fundamental AIGC modeling skills but also enhance their holistic thinking pathway from concept to delivery through practical operation. They cultivate comprehensive competencies in Prompt language, AI tool operation, model aesthetic judgment, and traditional software collaboration, laying a solid foundation for future work in animation, gaming, virtual humans, and related fields. This case also provides a replicable teaching paradigm for constructing an integrated "AI + Art + Tools" pedagogical model.
4. Advantages, Challenges, and Trends of AIGC Modeling
The widespread application of Generative Artificial Intelligence (AIGC) technology in 3D modeling is reshaping modeling workflows and content creation patterns at an unprecedented pace, demonstrating significant advantages across multiple dimensions.
Revolutionary Efficiency Gains: By automating the generation of 3D models with structural details and visual styles from natural language descriptions, image examples, or sketch inputs, AIGC drastically reduces the time cycle from creative conception to model output. This provides robust support for rapid prototyping and iteration.
Lowered Technical Barrier: AIGC significantly reduces the technical barrier to 3D modeling, no longer requiring deep expertise in DCC software like Maya, 3ds Max, or Blender. This enables non-specialists, including designers, planners, and students, to efficiently participate in 3D asset creation, opening vast potential for higher education and cross-disciplinary collaboration.
Enhanced Freedom and Flexibility: AIGC supports users in adjusting model style, structural features, and expressive details through Prompts. Combined with style transfer and semantic control capabilities, it greatly expands the boundaries of creative expression.
Improved Toolchain Integration: Many current generation tools integrate well with mainstream modeling and engine platforms (e.g., Blender, Unity, Unreal Engine) via plugins, APIs, or import/export mechanisms. This allows seamless embedding of AI-generated assets into existing workflows, enhancing model portability and cross-platform circulation efficiency.
Rapid Iteration Capability: The ability to quickly generate multiple model variations by modifying Prompts is particularly valuable for early-stage concept exploration, teaching demonstrations, and comparative style evaluations, empowering diverse needs in education and creation.
Despite the breakthrough progress of Generative AI (AIGC) modeling technology in recent years, which has greatly enhanced the efficiency and automation level of 3D content creation, several critical issues and technical bottlenecks remain in practical application, limiting its widespread replacement capability in professional production pipelines.
Structural Precision and Topology Control: Current AIGC-generated 3D models often struggle to meet high-standard industrial requirements for structural accuracy and topological rationality. Generated meshes frequently suffer from redundant polygons, structural flaws, or non-standard topology, necessitating significant manual repair and intervention for subsequent animation rigging, weight painting, or rendering optimization.
Limited Editability: A major shortcoming is the insufficient editability of many generated results. They often lack seamless integration into standardized DCC software workflows, missing effective UV layouts, hierarchical structures, and material channel management. This increases integration difficulty and limits model reusability across different project scenarios.
Semantic Control Ambiguity: While Prompt-driven modeling is the dominant input method, current semantic parsing systems often lack robust ambiguity handling and struggle with polysemous expressions. This leads to deviations between generated results and user expectations, particularly evident with complex structures or abstract concepts.
Style Inconsistency: Ensuring consistency in the style of generated models is challenging. AIGC systems often produce variations in form style, proportional scale, and artistic language when generating multiple assets in bulk, posing difficulties for projects requiring high standardization, such as animation, games, or industrial simulation.
Copyright and Legal Uncertainty: There remains a lack of clear legal definitions and industry consensus regarding the copyright ownership, model legality, and training data provenance of AI-generated assets. This exposes AIGC-generated models to legal and ethical risks within commercialization processes.
The future development of Generative AI (AIGC) modeling will exhibit a concurrent progression towards integration, multi-dimensionality, and systematization. Its impact extends beyond tools, profoundly influencing the industrial structure of 3D content creation and talent cultivation systems.
Multimodal Generation: Multimodal input will become a key trend, utilizing diverse inputs like speech, images, text, sketches, and even motion data to achieve more precise and semantically aware 3D asset generation, enhancing the naturalness of human-computer interaction and freedom of creative expression.
Enhanced Prompt Control: Future users will gain increasingly refined control not only over object descriptions but also over dimensions, proportions, structural complexity, action states, and even material styles through natural language. This will drive modeling towards a new stage of "controllable process" rather than just "result-oriented" generation.
Deep Integration with DCC Tools: AIGC modeling tools will deeply integrate with traditional 3D software, embedding as plugins, APIs, or node tools within mainstream DCC platforms like Blender, Maya, and Unity, forming unified creation ecosystems. Users will invoke AI generation modules within familiar interfaces for seamless switching between generation and editing.
Open, Collaborative Platforms: Driven by open-source trends and rising collaborative needs, more open modeling platforms based on collaboration and visual manipulation (e.g., Tripo, Kaedim, Open3D, DreamGaussian) will emerge. These platforms will offer multimodal input interfaces and support online collaborative optimization and version management of generated results, creating new scenarios for enterprise co-creation and education.
Education-Industry Integration: The comprehensive adoption of AIGC will foster bidirectional integration between education and industry. In academia, courses on Prompt engineering, AI-assisted modeling, and cross-tool platform training will become core components of curricula in 3D design, animation, and game development, cultivating "AI proficiency" and "intelligent aesthetic judgment" in students. In industry, intelligent asset production pipelines based on AIGC will gradually be established, forming a "Human + AI" collaboration mechanism covering the entire workflow from concept design and asset generation to deployment testing. This trend will not only enhance production efficiency but also drive the content industry's shift from "mass replication" to "high-frequency creativity."
In summary, the future of AIGC modeling is not merely an extension of technological innovation but a core driving force for the paradigm leap in 3D content creation and the intelligent upgrade of the industry.
The promotion of AIGC modeling necessitates synchronized efforts in educational and industrial systems:
Education Focus: Universities should introduce introductory AIGC courses, guiding students to master Prompt writing, tool usage, and model refinement skills. Case-based teaching and workshop practices should cultivate integrated competencies in AI and design.
Industry Focus: Industries should build "AI-collaborative modeling workflows," utilizing AIGC for prototyping, asset previewing, and rapid delivery to improve production capacity and customization efficiency.
Collaborative Platforms: Joint university-enterprise laboratories or project incubation platforms should be established. Utilizing training bases or industry-education integration projects as mediums, these initiatives can foster bidirectional connections between AIGC talent cultivation and real-world application scenarios.
5. Discussion
This study systematically reviews the core technical pathways, tool ecosystems, and application modes of Generative Artificial Intelligence (AIGC) within the 3D modeling domain. Through the analysis of typical generation methods like Text-to-3D, Image-to-3D, and Sketch-to-3D, it reveals the significant advantages of AIGC in enhancing modeling efficiency, lowering technical barriers, and enriching creative means. Concurrently, by comparing with traditional modeling workflows, it identifies persistent challenges for AIGC concerning topology control, model editability, and copyright risks. Furthermore, from the perspectives of educational practice and industry collaboration, the paper explores how AIGC can facilitate pedagogical innovation and content production upgrades, proposing key future development directions such as multimodal fusion, controllable generation, and ecosystem integration.
6. Conclusions
Despite providing a relatively systematic review and analysis of the application pathways and development trends of AIGC in 3D modeling, this study acknowledges several limitations. Firstly, AIGC-related platforms and tools evolve extremely rapidly. Many mainstream tools are still in functional testing or rapid iteration phases, with stability, commercial support, and open ecosystems undergoing continuous change, making it difficult to establish a long-term adaptable technical foundation. Secondly, current research and case practices are predominantly concentrated in teaching demonstrations, prototyping, or experimental creation. They have yet to encompass high-intensity application scenarios such as complex commercial projects, industrial-grade asset production, or large-scale collaborative workflows. The feasibility of AIGC assuming a core modeling role within the industrial chain still lacks in-depth validation. Additionally, there is currently no unified evaluation standard within academia and industry for AIGC modeling outputs. Quantitative analysis systems are lacking in areas like topological quality, semantic consistency, editability, and rendering compatibility, limiting the reliable reuse and industry adoption of AIGC results.
Abbreviations

DCC: Digital Content Creation

Acknowledgments
Animation and Digital Art Institute, Communication University of China, Nanjing, China.
Author Contributions
Liang Cao: Conceptualization, Methodology, Formal Analysis.
Junie Dong: Data curation, Resources, Investigation.
Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] Poole, B., Jain, A., Barron, J. T., & Mildenhall, B. DreamFusion: Text-to-3D using 2D Diffusion. Proceedings of the International Conference on Learning Representations (ICLR), 2023.
[2] TripoSR: Fast 3D Object Reconstruction from a Single Image. arXiv preprint.
[3] Tang, J., et al. Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 12232-12242.
[4] Nichol, A., et al. Point-E: A System for Generating 3D Point Clouds from Complex Prompts. Advances in Neural Information Processing Systems 35, 2022.
[5] Höllein, L., et al. Zero Mesh: Zero-shot Single-view 3D Mesh Reconstruction. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
[6] Huang, Y., et al. GET3D: A Generative Model of High-Quality 3D Textured Shapes. ACM Transactions on Graphics 42, 2023.
[7] Zhang, J., et al. Integrating Generative AI into 3D Design Education: A Framework for Prompt-Driven Creativity. Journal of Computer Assisted Learning, 2024, pp. 521-535.
[8] Warburg, F., et al. Luma AI: Real-time NeRF-based 3D Reconstruction from Mobile Devices. ACM SIGGRAPH Asia Technical Papers, 2023.
[9] Chen, Z., et al. Meshy: Multimodal Conditional Mesh Generation via Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence 38, 2024.
[10] 3DGen: A Scalable Framework for Generative 3D Asset Creation. IEEE Transactions on Visualization and Computer Graphics.
[11] Dong, J., & Sabran, K. Feasibility Analysis of Using Three-dimensional Graphics Information to Improve the Depth of 3D Image Shooting. Proceedings of the 2024 International Conference on Artificial Intelligence, Digital Media Technology and Interaction Design, 2024, pp. 327-332.
Cite This Article
  • APA Style

    Cao, L., Junie, D. (2025). Research on Application Pathways and Development Trends of Generative Artificial Intelligence in 3D Modeling. American Journal of Artificial Intelligence, 9(2), 122-128. https://doi.org/10.11648/j.ajai.20250902.13


Author Information
  • Communication University of China, Nanjing, Nanjing, China

    Biography: Liang Cao is an associate professor at the Animation and Digital Art Institute, Communication University of China, Nanjing, China. He completed his master's degree at Sichuan University, and his research fields are Animation and Digital Arts.

    Research Fields: Animation, Digital Arts

  • Jiangsu Communication and Media School, Jiangsu Union Technical Institute, Nanjing, China

    Research Fields: Digital Arts, Film Arts