A Proposed Meta-Reality Immersive Development Pipeline: Generative AI Models and Extended Reality (XR) Content for the Metaverse

Abstract

The realization of an interoperable and scalable virtual platform, currently known as the “metaverse,” is inevitable, but many technological challenges must be overcome first. With the metaverse still in a nascent phase, research indicates that building a new 3D social environment capable of supporting interoperable avatars and digital transactions will represent most of the initial investment in time and capital. The return on investment, however, is worth the financial risk for firms like Meta, Google, and Apple. While the metaverse market is currently valued at $6.30 billion, it is expected to grow to $84.09 billion by the end of 2028. But the creation of an entire alternate virtual universe of 3D avatars, objects, and otherworldly cityscapes calls for a new development pipeline and workflow. Existing 3D modeling and digital twin processes, already well established in industry and gaming, will be ported to support the need to architect and furnish this new digital world. The current development pipeline, however, is cumbersome, expensive, and limited in output capacity. This paper proposes a new and innovative immersive development pipeline leveraging recent advances in artificial intelligence (AI) for 3D model creation and optimization. The previous reliance on 3D modeling software to create assets and then import them into a game engine can be replaced with nearly instantaneous content creation with AI. While AI art generators like DALL-E 2 and DeepAI have been used for 2D asset creation, when combined with game engine technology such as Unreal Engine 5 and virtualized geometry systems like Nanite, a new process for creating nearly unlimited content for immersive reality becomes possible. New processes and workflows, such as those proposed here, will revolutionize content creation and pave the way for Web 3.0, the metaverse, and a truly 3D social environment.

Share and Cite:

Ratican, J., Hutson, J. and Wright, A. (2023) A Proposed Meta-Reality Immersive Development Pipeline: Generative AI Models and Extended Reality (XR) Content for the Metaverse. Journal of Intelligent Learning Systems and Applications, 15, 24-35. doi: 10.4236/jilsa.2023.151002.

1. Introduction

The use of artificial intelligence (AI) to generate content for art and design is not new [1]. The infancy of art history computing can be traced to the 1980s, as artists experimented with the potential of “digitized” and “digital” iterations of algorithmic art [2]. As art and design have historically adopted emerging technologies, the rapid spread of the AI art generators available today was inevitable [3]. However, such generative content, produced using text prompts with DALL-E 2, Midjourney, Jasper Art, Stable Diffusion, DeepAI, and many other tools, has been largely limited to two-dimensional output and has yet to disrupt the 3D modeling and digital twin development pipelines [4]. These pipelines are currently cumbersome and expensive, requiring specialized technical knowledge that often takes years of training in industry to acquire [5]. Creating content for extended reality (XR), such as augmented reality (AR), mixed reality (MR), and virtual reality (VR), relies on 360° photography, photogrammetry, or 3D modeling software. Whether producing a digital twin using the Matterport system or creating a 3D model using Autodesk 3ds Max, Maya, or Blender, there are many limitations. For instance, 3D cameras like the Matterport MC250 Pro2 improve upon previous photogrammetric processes by capturing 134-megapixel imagery and roughly 100,000 points per second (about 1.5 million per scan), which are then processed for immersive viewing. The resulting digital twin, however, is bound to proprietary software and is not interoperable [6]. The 3D models it produces are also unoptimized, not intended for real-time rendering, and often incomplete. On the other hand, models created using 3D modeling software, while optimized for rendering, are cumbersome to create; they are, however, device agnostic and can be exported to interoperable geometry definition file formats such as OBJ. To save time, modelers often search asset stores for existing models, but they are limited to what already exists and must alter and reconfigure those assets to the desired specifications. After completing modeling and alterations, XR content creators can import the compatible files into platforms such as Virbela’s FRAME or Spatial [7]. Both development pipelines have limitations that can be resolved through a combination of emerging technologies.
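Part of the appeal of geometry definition formats such as OBJ is that they are plain text and readable by virtually any modeling tool or game engine. The following minimal sketch, using only the Python standard library and hypothetical vertex values, writes a single triangle as a valid OBJ file to illustrate the format:

```python
# Minimal illustration of the OBJ geometry definition format:
# plain-text vertex ("v") and face ("f") records, readable by most
# 3D modeling tools and game engines. The values are hypothetical.

triangle_obj = "\n".join([
    "# one triangle",
    "v 0.0 0.0 0.0",   # vertex 1 (x y z)
    "v 1.0 0.0 0.0",   # vertex 2
    "v 0.0 1.0 0.0",   # vertex 3
    "f 1 2 3",         # face referencing the three vertices (1-indexed)
])

with open("triangle.obj", "w") as f:
    f.write(triangle_obj + "\n")
```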

As such, this paper proposes a new meta-reality immersive development pipeline to address the current limitations of content creation for virtual and immersive environments in the metaverse. With advances in natural language processing (NLP) and visual content generation AI, text prompts can be used to generate 3D models in interoperable formats that support animation (e.g., OBJ, glTF, GLB) [8]. Much like image generation with DALL-E 2 and Stable Diffusion, 3D content generators like Interactive Pattern Generator and AI Material Designer allow for the creation of tileable (modular or repeatable) materials for assets through text prompts [9]. Using these generators, AI can be trained to produce 3D elements such as patterns and textures. Whereas file size previously had to be constrained to ensure low latency in XR experiences, virtualized geometry systems can now be used to compress files for real-time rendering [10]. After content is generated with AI, game engines can be used to edit the files, and systems like Nanite can optimize them for efficient run times. The proposed pipeline will not only remove the need for specialist technical knowledge and training but also allow for unprecedented asset creation. AI-generated art is becoming increasingly common and accepted. XR designers and developers are producing virtual exhibitions to showcase AI-generated art, such as Andrew Wright’s AI Art Exhibition: The Other Us (2022) (https://framevr.io/theotherus) (Figure 1 and Figure 2). The next logical step is to combine these technologies to generate XR content for an immersive space. The new process and workflow proposed here overcome existing limitations of 3D content creation and pave the way for Web 3.0, the metaverse, and a truly 3D social environment.
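To make the proposed workflow concrete, the following Python outline sketches its three stages; the function names are hypothetical placeholders for the classes of tools discussed above (an AI art generator, an image-to-3D reconstructor, and a Nanite-enabled game engine import), not actual APIs:

```python
# Illustrative outline of the proposed immersive development pipeline.
# All three functions are hypothetical placeholders standing in for the
# classes of tools discussed in the text: text-to-image generation,
# image-to-3D reconstruction, and engine-side virtualized-geometry optimization.

def generate_concept_image(prompt: str) -> str:
    """Return a path to a 2D concept image produced by an AI art generator."""
    raise NotImplementedError("e.g., DALL-E 2 or Stable Diffusion")

def reconstruct_3d_model(image_path: str) -> str:
    """Return a path to an interoperable 3D asset (OBJ/glTF/GLB)."""
    raise NotImplementedError("e.g., Instant NeRF or Kaedim")

def import_and_optimize(asset_path: str, engine_project: str) -> None:
    """Import the asset into a game engine whose virtualized geometry
    system (e.g., Nanite) handles LOD and rendering optimization."""
    raise NotImplementedError("e.g., Unreal Engine 5 with Nanite enabled")

# Example composition (would require real implementations of each stage):
# image = generate_concept_image("a weathered bronze statue in an alien cityscape")
# model = reconstruct_3d_model(image)
# import_and_optimize(model, engine_project="MetaverseGallery")
```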

Figure 1. Andrew Wright, AI Art Exhibition: The Other Us. FRAME. Virbela. (2022) (Detail 1).

Figure 2. Andrew Wright, AI Art Exhibition: The Other Us. FRAME. Virbela. (2022) (Detail 2).

2. Literature Review

First developed in the 1960s, 3D modeling involves the creation of a three-dimensional digital visual representation of an object using computer software. There are many varieties of 3D models, including wireframe, surface, and solid models, the last being the most computationally demanding. Each is suited to capturing particular information about a three-dimensional object, such as design, size, appearance, and texture, and even weight, density, and gravity. The use of such models began in industry and industrial design and expanded in the late 1990s to video games and entertainment [11] [12]. The demand for 3D asset creation has only grown with the advent of the metaverse and the consumption of media, games, and entertainment in immersive environments [13]. 3D modelers often use software like Sketchfab, Blender, Maya, and Autodesk 3ds Max to create an asset, which can then be viewed in these applications. Otherwise, for greater optimization and interactivity, assets are imported into game engines, such as Unreal Engine 5 (Epic Games) and Unity (Unity Technologies), and situated within virtual environments [14]. These game engines even have marketplaces where 3D assets can be readily downloaded or purchased, giving developers without 3D modeling experience access to 3D models. However, as noted, the number of resources in these stores is finite, and developers are limited to existing assets that need to be altered and reconfigured to the desired specifications. The demand for such high-quality, editable, and reconfigurable assets will only continue to rise in many industries, especially immersive content creation for the metaverse [15].

2.1. Game Engines and Cinematics

Industries such as film have seen an increase in the use of 3D models and game engines as digital details cross the uncanny valley and become indistinguishable from real life, leading to advances in experimental filmmaking, including VR cinematics [16]. The mainstream film industry has also seen a rise in the use of traditional game development software like Unreal Engine 5, which has become a tool used by visual effects artists and filmmakers to create realistic worlds in real time. The American space western television series The Mandalorian [17] was a front-runner in standardizing the use of game engine technology, as it provided filmmakers the ability to create realistic virtual sets that could change dynamically based on needs and camera position while also reflecting accurate lighting information [18]. The development pipeline saved countless hours in post-processing, as many of the effects were achieved in camera and on set [19]. The process has also been praised by actors, who can see the world they are acting in rather than working in front of a green screen, as in the previous method [20]. The need for 3D models and virtual sets will only rise as more and more film projects rely on this technology. As more films are also shot in virtual reality (VR), these technologies have the potential to radically change how directing in the film industry operates [21] [22].

2.2. 3D Modeling Process

With all of the advances made possible by game-engine technology, 3D modeling remains limited to specialists. The development pipeline for 3D modeling explains why [23]. In traditional 3D modeling, the modeler starts with simple geometry, such as a polygon, which can be as simple as a triangle comprising three vertices in three-dimensional space, represented in the Cartesian coordinate standard of X, Y, Z. A minimum of three vertices is required to generate surface geometry, and today’s game models can easily comprise 20,000 polygons, the equivalent of roughly 40,000 triangles, and those are models that have already been optimized for performance. Because of these considerations, the modeling pipeline is heavily reliant on several specialized skills and techniques, such as re-topologizing meshes to improve animation and performance, rendering tools such as normal maps to produce fine details, and post-processing effects using specialized shaders [24]. These requirements mean that thousands, if not millions, of polygons must be processed and rendered on screen at 60 frames per second. The following sections investigate the two variables of the asset creation pipeline: the creation of models, and the optimization and usability of those models in a game engine or other real-time application.
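As a simple illustration of the geometry described above, the following sketch (assuming NumPy and arbitrary example values) represents a mesh as an array of XYZ vertices and an array of triangles indexing into it, and shows why a budget of 20,000 quad polygons corresponds to roughly 40,000 triangles:

```python
import numpy as np

# A minimal triangle-mesh representation: vertices are XYZ points in
# Cartesian space, and each face is a triple of vertex indices.
vertices = np.array([
    [0.0, 0.0, 0.0],   # v0
    [1.0, 0.0, 0.0],   # v1
    [1.0, 1.0, 0.0],   # v2
    [0.0, 1.0, 0.0],   # v3
], dtype=np.float32)

# One quad split into two triangles -- the reason a budget of 20,000
# quad polygons corresponds to roughly 40,000 rendered triangles.
triangles = np.array([
    [0, 1, 2],
    [0, 2, 3],
], dtype=np.int32)

quad_budget = 20_000
print("triangles per quad:", len(triangles))             # 2
print("triangle budget:", quad_budget * len(triangles))  # 40,000
```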

2.3. 3D Scanning and Photogrammetry

The acquisition or creation of 3D models has seen several developments in the past few years, the most well known of which is 3D scanning. This technology is not new but continues to improve in quality, adoption, and accessibility. Whereas previous iterations were large and cumbersome, the latest generation of scanners is handheld (e.g., Artec 3D) or even available as free smartphone applications that use LiDAR (e.g., Scaniverse and Polycam) [25]. These scanners and their associated software allow users to create 3D models of objects large and small and can even scan entire areas [26]. The technology is used in many industries beyond entertainment, including engineering and even law enforcement (Chenoweth et al. 2022). A related method of model creation is photogrammetry, the process of generating 3D models from photographs or other data. This process is also not new; it began with the creation of 2D information, not only from photographs but also from sonar and radar, and has been used to create topographic maps [27]. These processes have expanded to other fields, including entertainment, where they have been used in films such as The Matrix [28] and in video games [29]. Taken together, both 3D scanning and photogrammetry can produce 3D models and assets.

2.4. AI-Generated Content

Technology continues to develop, and AI has been used with photogrammetry to improve models and fill in details that were not present in photographs (Amaro et al. 2022). Recently, however, AI has begun to cross over from being a tool that assists with art creation to a method for generating art. AI art generators like DALL-E 2 have recently made headlines by creating interesting and imaginative works of art [30]. While these early models would often make mistakes that seem obvious to human eyes, the AI improves with each iteration, and newer models have started to produce photo-realistic art [30]. These examples are limited to 2D art but clearly show an evolution in quality and sophistication with each newer algorithmic iteration. An exciting aspect of using AI is its ability to improve and learn from previous versions. Artists are using these generated images as inspiration and as starting points for conceptualizing ideas and designs, and many speculate that it will not be long before AI can do the bulk of concept design work [3].

The move from 2D art generation to 3D content generation is a natural progression. This has led to several AI systems that can take a prompt, sometimes as simple as a text description, and transform it into a 3D model. Such state-of-the-art (SOTA) models may be used to train AI to understand 3D space using image-language models [31]. The open-world 3D scene understanding task is a 3D vision-language task that also includes open-set classification. The main limitation of these tasks is that the AI does not currently have enough data: existing 3D datasets are not varied enough, in comparison to their 2D counterparts, to train AI to generate content [32]. Admittedly, the creation of a 3D model from a text prompt is still early in development and is not yet ideal for asset creation, but other solutions do currently exist.

Just as photogrammetry uses photographic data to generate models, some AI systems use 2D pictures as inputs to generate 3D content. By loading images as reference, AI systems like Nvidia’s Instant NeRF and Kaedim can generate 3D models. Kaedim is a newer image-to-3D-model AI tool aimed at 3D artists and those who need 3D models created from concept designs. The tool is still in development and currently needs human reviewers to ensure the quality of its output. The software reviews images of a concept design from all angles and creates a 3D model. Kaedim is one of the few AI 3D model generation tools that takes the technical requirements of the models into account, but it does require the user to specify the complexity of the model [33]. This in turn requires the user to be aware of the specific requirements of the platform or real-time application the model is developed for, as well as the experience to know how many polygons are appropriate [34]. Nvidia’s NeRF (Neural Radiance Field) uses a process based on a concept called inverse rendering, which essentially inverts conventional rendering and attempts to recreate how light interacts with objects in the real world. Rather than relying on baked lighting, NeRF uses AI to analyze a collection of 2D images and construct a 3D model from them. The process can create a full 3D scene in a short amount of time [35]. These methods of 3D model creation are becoming increasingly easy to use, and with accessibility becoming as simple as an application on a smartphone, the hurdle of creating a 3D model has all but been removed.
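The compositing step at the heart of a radiance-field renderer like NeRF can be sketched briefly. In the illustration below (NumPy only), densities and colors sampled along a camera ray are blended into a single pixel color using standard volume-rendering weights; the radiance_field function is a hypothetical stand-in for NeRF’s trained neural network:

```python
import numpy as np

def radiance_field(points):
    """Stand-in for NeRF's trained network: returns (density, rgb) per 3D point.
    A hypothetical fog-like field is used here purely for illustration."""
    density = np.exp(-np.linalg.norm(points, axis=-1))   # (N,)
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)          # (N, 3)
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Composite samples along one camera ray into a pixel color (volume rendering)."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction              # sample positions (N, 3)
    density, rgb = radiance_field(points)

    delta = np.diff(t, append=far)                         # segment lengths
    alpha = 1.0 - np.exp(-density * delta)                 # opacity per segment
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha                        # contribution per sample
    return (weights[:, None] * rgb).sum(axis=0)            # final RGB value

pixel = render_ray(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
print(pixel)
```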

2.5. 3D Model Optimization

The other variable to consider in the proposed immersive development pipeline is the optimization of 3D models for use in real-time applications. While the methods outlined above can create highly detailed models, the models themselves are often not in a usable format: they consist of dense meshes that lead to poor performance and are unsuitable for animation [36]. This is an issue even current processes face, as 3D modeling for entertainment often uses a technique called digital sculpting [37]. Many of the most popular modeling applications use these techniques to spectacular effect; the most popular software application for digital sculpting is ZBrush. When an artist uses digital sculpting to create a model, the result can likewise be a dense, unusable mesh that must be re-topologized. Retopology can be a time-consuming process in which a lower-resolution, optimized version of a model is made (the version that would work well in a game engine, for instance), and the high-resolution details are then added back in during rendering [38]. The process usually involves baking the higher-resolution details into color images: the surface of the high-resolution mesh is represented in a texture in which the red, green, and blue channels encode the X, Y, and Z components that control how light reacts to the surface of the model. These textures are called normal maps and have been standard practice since the early 2000s [39].
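The channel mapping described above can be expressed in a few lines. The sketch below (NumPy, with an arbitrary example normal) shows how a unit surface normal with X, Y, Z components in the range [-1, 1] is packed into the [0, 1] RGB range of a normal-map texel and recovered again:

```python
import numpy as np

def encode_normal(n):
    """Pack a unit surface normal (X, Y, Z in [-1, 1]) into RGB values in [0, 1]."""
    n = n / np.linalg.norm(n)
    return n * 0.5 + 0.5

def decode_normal(rgb):
    """Recover the surface normal from a normal-map texel."""
    n = rgb * 2.0 - 1.0
    return n / np.linalg.norm(n)

normal = np.array([0.0, 0.0, 1.0])      # a surface facing straight "up"
texel = encode_normal(normal)            # -> [0.5, 0.5, 1.0], the typical
print(texel, decode_normal(texel))       #    blue tint of tangent-space normal maps
```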

Because retopology can be an involved and lengthy process, software developers have been working on ways to make it easier, such as adding auto-retopology tools to popular 3D software applications [40]. One of the foremost pioneers of graphics technology in this area is the graphics card manufacturer Nvidia. The AI computing company has been sponsoring and developing new graphics technologies for decades and hosts an annual technology conference where new advances are showcased. As so much of the processing and rendering of immersive realities and real-time applications relies on hardware, Nvidia is also involved in improving the performance of those functions [41]. Two of Nvidia’s recent innovations that are relevant for this study are Nvidia NGX and the aforementioned Nvidia NeRF. Both technologies approach rendering in new and dynamic ways. Nvidia NGX requires an Nvidia RTX video card and uses that hardware, combined with AI and deep learning, to improve performance and graphical output. NGX can load an entire 3D scene from a previous design iteration and create a modern lighting and rendering solution. The process can transform older, lower-resolution graphics and ensure that they appear crisp and clear on modern systems and resolutions [42]. While this process can make older graphics appear more modern, it focuses on the textural graphics rather than the resolution of the models themselves, making the solution ideal for lower-resolution models. Higher-resolution models would still require optimized geometry of the kind produced through proper retopology [43].

In order to address the retopology issue, Epic Games engineers working on Unreal Engine 5 have taken a new approach, virtualized geometry, that aims to render retopology unnecessary altogether. The Nanite system, built on earlier iterations and research into processing and rendering high-resolution geometry in real time, allows the user to bring in non-optimized, high-resolution 3D models directly from a 3D scan or a highly detailed digital sculpt. These files can be scaled in size to ensure fast processing and rendering [44] [45]. Nanite does this in several novel ways, including dynamically generating levels of detail (LOD) in real time: multiple copies of a mesh at varied polygonal complexity that decrease in resolution as the viewer moves away from an object and increase in complexity as the viewer moves closer. The virtualized geometry system also automatically culls polygons that are not visible from the viewer’s perspective, so they are neither rendered nor processed. Adjusting the resolution of objects that are out of frame ensures better performance for real-time applications, since render engines otherwise often process geometry even when it is not seen [46]. Nanite also improves rendering by using a system of virtualized textures, which are likewise automatically generated [47]. All of these optimizations are performed by the system, removing the technical obstacles that could impact performance and allowing the developer to focus on the viewer experience. This system, and others that are sure to follow, removes many of the most time-consuming and technically challenging aspects of 3D asset creation.
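Nanite performs this selection automatically and at far finer granularity than traditional LOD chains, but the distance- and visibility-based logic described above can be illustrated with a simple sketch; the thresholds and polygon budgets below are arbitrary examples:

```python
from typing import Optional

# Hypothetical distance-based LOD table: (maximum viewer distance, polygon budget).
LOD_LEVELS = [
    (10.0, 40_000),       # LOD0: full detail when the viewer is close
    (30.0, 10_000),       # LOD1
    (80.0, 2_500),        # LOD2
    (float("inf"), 500),  # LOD3: distant objects
]

def select_lod(distance_to_viewer: float, visible: bool) -> Optional[int]:
    """Return the LOD index to render, or None to cull the object entirely."""
    if not visible:  # occluded or out of frame: neither processed nor rendered
        return None
    for index, (max_distance, _budget) in enumerate(LOD_LEVELS):
        if distance_to_viewer <= max_distance:
            return index
    return len(LOD_LEVELS) - 1

print(select_lod(5.0, visible=True))     # 0 -> highest-detail mesh
print(select_lod(120.0, visible=True))   # 3 -> coarsest mesh
print(select_lod(15.0, visible=False))   # None -> culled
```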

3. Recommendations

While both 3D AI asset generators and virtualized geometry systems are currently in use in the market, combining them to create a new 3D asset creation pipeline has not been explored. A theoretical framework for using these technologies in tandem is proposed here as an alternative method for designing, creating, and producing content for immersive environments in XR. Such an immersive development pipeline would involve generating 2D concept designs via an AI art generator, such as DALL-E 2 or Stable Diffusion. These designs can then be rendered as 3D models using software such as Nvidia’s Instant NeRF or Kaedim. Finally, these 3D models can be imported into a game engine such as Unreal Engine 5, where the virtualized geometry system Nanite can optimize them to an appropriate resolution to ensure low latency in a virtual environment, such as when using a head-mounted display (HMD). As for implementing those models, Nanite is proving to be a novel solution for working with non-optimized models, removing or automating much of the asset creation process. Recommendations for next steps in research include implementing the proposed development workflow and pipeline.
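As an illustration of the final, engine-side step of the recommended workflow, the sketch below uses Unreal Engine 5’s editor Python scripting to import a generated asset. It is a sketch only, assuming the Python Editor Script Plugin is enabled and the script is run inside the editor; the file path is hypothetical, and property names may differ between engine versions:

```python
# Sketch of the engine-side import step, assuming Unreal Engine 5 with the
# Python Editor Script Plugin enabled and run inside the editor.
# Paths are illustrative; API details may differ between engine versions.
import unreal

def import_generated_asset(source_path: str, destination: str = "/Game/Generated"):
    """Import an AI-generated mesh (e.g., OBJ/glTF) into the current project."""
    task = unreal.AssetImportTask()
    task.filename = source_path      # file produced by the AI/NeRF stage
    task.destination_path = destination
    task.automated = True            # suppress interactive import dialogs
    task.save = True

    asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
    asset_tools.import_asset_tasks([task])
    # Nanite support can then be enabled on the imported static mesh (via the
    # editor UI or editor scripting) so the engine handles LOD and rendering
    # optimization automatically.
    return task.imported_object_paths

# Example (hypothetical path):
# import_generated_asset("C:/exports/ai_statue.obj")
```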

4. Conclusion

Developments in AI generative content made unprecedented strides in 2022 [48]. New technologies are opening alternative methods of 3D asset generation. While this study is the first to examine these technologies through the lens of 3D asset creation, the proposed development pipeline shows that AI 3D art generation programs continue to grow and be developed across a range of industries. As seen in Andrew Wright’s AI Art Exhibition: The Other Us (2022) (https://framevr.io/theotherus) (Figure 1 and Figure 2), AI-generated content is now within the reach of the general public. Each image within the exhibition was generated using natural language prompts in an AI generator. At the moment, each image requires trial and error with thousands of text prompt permutations to arrive at the desired effect. But advances are being made rapidly, as evinced by the number of generators now freely available. The rise of AI-driven content, and the increased accessibility of formerly specialized and technically challenging 3D design and development, mean that massive disruption in the field is quickly approaching. The combination of these technical achievements creates an alternative 3D asset creation pipeline wherein a developer could use commonplace technology, such as a mobile phone, to scan in objects, or use an AI system to generate 3D content either from a 2D concept design, which could itself have been generated by AI (Bouchard, 2022), or from a simple text prompt [31]. These developments democratize the 3D design and modeling field and create more opportunities for users to make the models they require without an experienced artist or designer, rendering current design and development pipelines obsolete.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Mondal, B. (2020) Artificial Intelligence: State of the Art. In: Balas, V.E., Kumar, R. and Srivastava, R., Eds., Recent Trends and Advances in Artificial Intelligence and Internet of Things, Springer, Berlin, 389-425.
https://doi.org/10.1007/978-3-030-32644-9_32
[2] Zweig, B. (2015) Forgotten Genealogies: Brief Reflections on the History of Digital Art History. International Journal for Digital Art History, No. 1.
[3] Mikalonytė, E.S. and Kneer, M. (2022) Can Artificial Intelligence Make Art? Folk Intuitions as to whether AI-Driven Robots Can Be Viewed as Artists and Produce Art. ACM Transactions on Human-Robot Interaction, 11, Article No. 43.
https://doi.org/10.1145/3530875
[4] Li, S. and Zhou, Y. (2022) Pipeline 3D Modeling Based on High-Definition Rendering Intelligent Calculation. Mathematical Problems in Engineering, 2022, Article ID: 4580363.
https://doi.org/10.1155/2022/4580363
[5] Burova, A., Mäkelä, J., Heinonen, H., Palma, P.B., Hakulinen, J., Opas, V. and Turunen, M. (2022) Asynchronous Industrial Collaboration: How Virtual Reality and Virtual Tools Aid the Process of Maintenance Method Development and Documentation Creation. Computers in Industry, 140, Article ID: 103663.
https://doi.org/10.1016/j.compind.2022.103663
[6] Tchomdji, L.O.K., Park, S.J. and Kim, R. (2022) Developing Virtual Tour Content for the Inside and Outside of a Building Using Drones and Matterport. International Journal of Contents, 18, 74-84.
[7] Harrington, M.C., Jones, C. and Peters, C. (2022) Course on Virtual Nature as a Digital Twin: Botanically Correct 3D AR and VR Optimized Low-Polygon and Photogrammetry High-Polygon Plant Models. ACM SIGGRAPH 2022 Courses, Vancouver, 7-11 August 2022, 1-69.
https://doi.org/10.1145/3532720.3535663
[8] Oussalah, M. (2021) AI Explainability. A Bridge between Machine Vision and Natural Language Processing. In: International Conference on Pattern Recognition, Springer, Cham, 257-273.
https://doi.org/10.1007/978-3-030-68796-0_19
[9] Beer, R.D. (1995) A Dynamical Systems Perspective on Agent-Environment Interaction. Artificial Intelligence, 72, 173-215.
https://doi.org/10.1016/0004-3702(94)00005-L
[10] Salehi, M., Hooli, K., Hulkkonen, J. and Tolli, A. (2022) Enhancing Next-Generation Extended Reality Applications with Coded Caching.
[11] Schmidt, R., Isenberg, T., Jepp, P., Singh, K. and Wyvill, B. (2007) Sketching, Scaffolding, and Inking: A Visual History for interactive 3D Modeling. Proceedings of the 5th International Symposium on Non-Photorealistic Animation and Rendering, San Diego, 4-5 August 2007, 23-32.
https://doi.org/10.1145/1274871.1274875
[12] Madhavadas, V., Srivastava, D., Chadha, U., Raj, S.A., Sultan, M.T.H., Shahar, F.S. and Shah, A.U.M. (2022) A Review on Metal Additive Manufacturing for Intricately Shaped Aerospace Components. CIRP Journal of Manufacturing Science and Technology, 39, 18-36.
https://doi.org/10.1016/j.cirpj.2022.07.005
[13] Cheng, R., Wu, N., Chen, S. and Han, B. (2022) Reality Check of Metaverse: A First Look at Commercial Social Virtual Reality Platforms. 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, 12-16 March 2022, 141-148.
https://doi.org/10.1109/VRW55335.2022.00040
[14] Medina, J.J., Maley, J.M., Sannapareddy, S., Medina, N.N., Gilman, C.M. and McCormack, J.E. (2020) A Rapid and Cost-Effective Pipeline for Digitization of Museum Specimens with 3D Photogrammetry. PLOS ONE, 15, e0236417.
https://doi.org/10.1371/journal.pone.0236417
[15] Korbel, J.J., Siddiq, U.H. and Zarnekow, R. (2022) Towards Virtual 3D Asset Price Prediction Based on Machine Learning. Journal of Theoretical and Applied Electronic Commerce Research, 17, 924-948.
https://doi.org/10.3390/jtaer17030048
[16] Zhang, Y. (2021, December 10) Cross-Platform Methods in Computer Graphics That Boost Experimental Film Making. Thesis, Rochester Institute of Technology, Rochester.
https://scholarworks.rit.edu/cgi/viewcontent.cgi?article=12190&context=theses
[17] Favreau, J. (2019) The Mandalorian. [TV Series] Disney+.
[18] Turnock, J.A. (2022) Conclusion. Unreal Engine. In: The Empire of Effects, University of Texas Press, Austin, 216-222.
https://doi.org/10.7560/325308
[19] Anderson, K. (2020, August 24) The Mandalorian’s Virtual Sets Are Wild. Nerdist.
https://nerdist.com/article/mandalorian-virtual-production-star-wars
[20] Dent, S. (2021, May 13) See How “The Mandalorian” Used Unreal Engine for Its Real-Time Digital Sets. Engadget.
https://www.engadget.com/2020-02-21-mandalorian-ilm-stagecraft-real-time-digital-sets.html
[21] Ronfard, R. (2021, May) Film Directing for Computer Games and Animation. Computer Graphics Forum, 40, 713-730.
https://doi.org/10.1111/cgf.142663
[22] Cohen, J.L. (2022) Film/Video-Based Therapy and Trauma: Research and Practice. Taylor & Francis, Abingdon-on-Thames.
https://doi.org/10.4324/9781315622507
[23] Kuusela, V. (2022) 3D Modeling Pipeline for Games. Turku University of Applied Sciences, Turku.
[24] Hevko, I.V., Potapchuk, O.I., Lutsyk, I.B., Yavorska, V.V., Hiltay, L.S. and Stoliar, O.B. (2022) The Method of Teaching Graphic 3D Reconstruction of Architectural Objects for Future IT Specialists. AET 2020, 1, 119-131.
https://doi.org/10.5220/0010921800003364
[25] Sharif, M.M., Haas, C. and Walbridge, S. (2022) Using Termination Points and 3D Visualization for Dimensional Control in Prefabrication. Automation in Construction, 133, Article ID: 103998.
https://doi.org/10.1016/j.autcon.2021.103998
[26] Abboud, A. and Nasrullah, M. (2022) 3D Modeling for the Largest Stone in the World Using Laser Scanning Technique. Doctoral Dissertation, Euphrosyne Polotskaya State University of Polotsk.
[27] Karami, A., Menna, F. and Remondino, F. (2022) Combining Photogrammetry and Photometric Stereo to Achieve Precise and Complete 3D Reconstruction. Sensors, 22, 8172.
https://doi.org/10.3390/s22218172
[28] Wachowski, L. and Wachowski, L. (1999) The Matrix [Film] Warner Bros. Pictures.
[29] Mamani, A.R.C., Polanco, S.R.R. and Ordonez, C.A.C. (2022) Systematic Review on Photogrammetry, Streaming, Virtual and Augmented Reality for Virtual Tourism. HCI International 2022—Late Breaking Papers: Interacting with Extended Reality and Artificial Intelligence: 24th International Conference on Human-Computer Interaction, HCII 2022, Vol. 13518, 46.
[30] OpenAI (2022, April 14) DALL·E 2.
https://openai.com/dall-e-2
[31] Blance, A. (2022, July 27) Using AI to Generate 3D Models, Fast! Medium.
https://towardsdatascience.com/using-ai-to-generate-3d-models-2634398c0799
[32] Ha, H. and Song, S. (2022) Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models. 6th Annual Conference on Robot Learning, Cornell University, Ithaca.
[33] McKenzie, T. (2022, August 17) Kaedim: A Cool AI for Turning 2D Images into 3D Models.
https://80.lv/articles/kaedim-a-cool-ai-for-turning-2d-images-into-3d-models
[34] Bebeshko, B., Khorolska, K., Kotenko, N., Desiatko, A., Sauanova, K., Sagyndykova, S. and Tyshchenko, D. (2021) 3D Modelling by Means of Artificial Intelligence. Journal of Theoretical and Applied Information Technology, 99, 1296-1308.
[35] Salian, I. (2022, August 12) NeRF Research Turns 2D Photos into 3D Scenes. NVIDIA Blog.
https://blogs.nvidia.com/blog/2022/03/25/instant-nerf-research-3d-ai
[36] Touloupaki, E. and Theodosiou, T. (2017) Performance Simulation Integrated in Parametric 3D Modeling as a Method for Early Stage Design Optimization—A Review. Energies, 10, 637.
https://doi.org/10.3390/en10050637
[37] Raitt, B. and Minter, G. (2000) Digital Sculpture Techniques. Interactivity Magazine, 4, 1-13.
https://doi.org/10.14733/cadconfP.2019.392-396
[38] Rossoni, M., Barsanti, S.G., Colombo, G. and Guidi, G. (2020) Retopology and Simplification of Reality-Based Models for Finite Element Analysis.
[39] Tasdizen, T., Whitaker, R., Burchard, P. and Osher, S. (2003) Geometric Surface Processing via Normal Maps. ACM Transactions on Graphics (TOG), 22, 1012-1033.
https://doi.org/10.1145/944020.944024
[40] Jean-Pierre, J. (2021) The Invasion: Applying the Aesthetics of Horror Films in a Virtual Reality Gaming Environment. Doctoral Dissertation, Florida Atlantic University, Boca Raton.
[41] Choquette, J., Gandhi, W., Giroux, O., Stam, N. and Krashinsky, R. (2021) NVIDIA A100 Tensor Core GPU: Performance and Innovation. IEEE Micro, 41, 29-35.
https://doi.org/10.1109/MM.2021.3061394
[42] Nvidia Developer (2019, December 3) NVIDIA NGX Technology—AI for Visual Applications.
https://developer.nvidia.com/rtx/ngx
[43] Wu, J., Dick, C. and Westermann, R. (2015) A System for High-Resolution Topology Optimization. IEEE Transactions on Visualization and Computer Graphics, 22, 1195-1208.
https://doi.org/10.1109/TVCG.2015.2502588
[44] Haar, U. and Aaltonen, S. (2015) GPU-Driven Rendering Pipelines—Real-Time Rendering.
http://advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
[45] Barrier, R. (2022, May 17) What Unreal Engine 5 MEANS for the Games Industry. IGN.
https://www.ign.com/articles/what-unreal-engine-5-means-for-the-games-industry
[46] El-Wajeh, Y.A., Hatton, P.V. and Lee, N.J. (2022) Unreal Engine 5 and Immersive Surgical Training: Translating Advances in Gaming Technology into Extended-Reality Surgical Simulation Training Programmes. British Journal of Surgery, 109, 470-471.
https://doi.org/10.1093/bjs/znac015
[47] Beardsall, R. (2020, September 4) Unreal Engine 5 and Nanite Virtualized Geometry—What Does It Mean for Content Creators? Medium.
https://medium.com/xrlo-extended-reality-lowdown/unreal-engine-5-and-nanite-virtualized-geometry-what-does-it-mean-for-content-creators-b4106accd306
[48] Grba, D. (2022) Deep Else: A Critical Framework for AI Art. Digital, 2, 1-32.
https://doi.org/10.3390/digital2010001
