
Predicting 2025: The future of 3D content design

A look into the future of the Web and the role of 3D design content.

If you’re passionate about the future of the Web, chances are that you’ve recognized the massive role 3D design and content creation will have to play in it. 

The rate at which we adopt and use 3D technology is set to grow exponentially. That’s why, here at Vectary, we’re focused on building the services and tools to meet this demand. In that spirit, we’ve put together some predictions for the future of 3D design.

Here are our top six:

1. Anyone can instantly create and share 3D models using their phone

3D models created by scanning objects with a phone will be instantly shareable.

Even with 3D software that takes care of some of the complexity, creating a 3D model from scratch still requires an advanced skill set. 3D asset libraries can help bridge the gap between that complexity and the skills required, but you’re limited to what’s available in the library.

This prediction is already within reach thanks to Apple’s Object Capture API. Announced at the 2021 Apple Worldwide Developers Conference, Object Capture uses photogrammetry to create AR-optimized objects from images captured on iPhones and iPads. Photogrammetry is a technique that uses two or more overlapping photographs to calculate ‘lines of sight’ to the same points, which can then be triangulated into 3D data. Previously this required a combination of dedicated camera equipment and desktop software; Apple has now brought the functionality to its mobile devices.
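To make the ‘line of sight’ idea concrete, here is a minimal TypeScript sketch of two-view triangulation. The camera positions, ray directions, and the triangulate helper are illustrative assumptions, not part of Object Capture, which estimates camera poses and matches thousands of features automatically across many photographs.

```typescript
// Conceptual sketch of photogrammetric triangulation (not Apple's API):
// two cameras at known positions each see the same feature along a ray
// ("line of sight"); the 3D point lies where those rays nearly intersect.

type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const mul = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Closest point to two rays p(t) = o1 + t*d1 and q(s) = o2 + s*d2,
// taken as the midpoint of the shortest segment between them.
function triangulate(o1: Vec3, d1: Vec3, o2: Vec3, d2: Vec3): Vec3 {
  const r = sub(o1, o2);
  const a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
  const d = dot(d1, r), e = dot(d2, r);
  const denom = a * c - b * b;        // ~0 when the rays are parallel
  const t = (b * e - c * d) / denom;  // parameter along ray 1
  const s = (a * e - b * d) / denom;  // parameter along ray 2
  return mul(add(add(o1, mul(d1, t)), add(o2, mul(d2, s))), 0.5);
}

// Example: two cameras one metre apart both sighting the same corner point.
const point = triangulate([0, 0, 0], [0.2, 0.1, 1], [1, 0, 0], [-0.6, 0.1, 1]);
console.log(point); // estimated 3D position of the shared feature
```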

2. AI will fix errors in the 3D capture process

AI will be programmed to fix errors in the captured data, enabling higher accuracy and success rates of 3D content creation.

3D scanning is the most accessible way to compile the data needed to turn real-world objects into digital 3D models. Where photogrammetry uses photographs, 3D scanning relies on projected light sources (such as lasers or structured light) to capture geometry. Both capture processes carry a margin of error, which can compromise the integrity of the resulting data.

This is where Artificial Intelligence (AI) comes in: models could be trained to identify and resolve these anomalies automatically.

An example of this is the use of AI to identify tissue growth in CT scans, as part of a study conducted by George Eliot Hospital and the NHS AI Lab Skunkworks. The study aimed to ease the pressure on radiologists by reducing both the time and the margin of error involved in reviewing CT scans of cancer patients. A 3D reconstruction of a scan gave a much clearer picture of a patient’s condition than 2D scans alone.

3. 3D designs will be constructed using existing 3D elements, made by other creators

3D asset libraries featuring a range of 3D elements will become the standard, allowing creators to build 3D scenes from pre-made elements alone. Much as icons, photography, and video can be pulled together into a website template, 3D design components will become universal building blocks.

As we gear up for Web 3.0, increased adoption and necessity will make 3D content libraries more accessible. The number of 3D model marketplaces is growing and will soon extend to other 3D design components, such as materials, textures, and environments.

Read: 5+ places to sell your 3D models

4. Designers will focus on design rather than learning complex 3D software

Extensive 3D software knowledge won’t be a requirement for communicating design intent. Instead, GPT-3 and other AI models will make it possible to design 3D content through simple actions: describing a scene in plain language, assembling scenes from reference images, adjusting lighting, adding interactivity, and more.

Demand for advanced 3D skills won’t go away as 3D design becomes more accessible; it will still have a place in niche roles and industries. However, as 3D design technology and platforms continue to grow, a larger number of creators will need to be empowered to meet the demand for 3D and AR content creation, particularly in UX/UI, graphic design, and development roles. 

Composing a 3D scene will be more intuitive: picking objects from expansive asset libraries, selecting a backdrop from presets, and using simple photogrammetry to bring physical objects into the scene. This will reduce training time while enabling rapid prototyping and effortless sharing, all without the need for code.

Graph: Technologies likely to be adopted by 2025

5. Creation of 3D content in Augmented Reality (WebXR)

Composing 3D experiences won’t be limited to 2D screens and environments.  

Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) are collectively referred to as Extended Reality (XR). With WebXR, it’s possible to deliver both VR and AR experiences through compatible web browsers, providing an immersive three-dimensional environment.
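To give a concrete sense of what “compatible web browsers” means in practice, here is a minimal TypeScript sketch that feature-detects the WebXR Device API and requests an immersive AR session. It assumes WebXR type definitions (for example, the @types/webxr package) are available, and the optional features listed are illustrative choices rather than requirements.

```typescript
// Sketch: detect WebXR support and request an immersive AR session.
// Assumes WebXR type definitions (e.g. @types/webxr) are installed.
async function startImmersiveAR(): Promise<XRSession | null> {
  // Feature-detect the WebXR Device API.
  if (!("xr" in navigator) || !navigator.xr) {
    console.warn("WebXR is not available in this browser.");
    return null;
  }

  // Check whether the browser/device can run AR (camera passthrough) sessions.
  const arSupported = await navigator.xr.isSessionSupported("immersive-ar");
  if (!arSupported) {
    console.warn("Immersive AR sessions are not supported here.");
    return null;
  }

  // requestSession must be called from a user gesture (e.g. a button click).
  const session = await navigator.xr.requestSession("immersive-ar", {
    optionalFeatures: ["hit-test", "dom-overlay"],
  });
  return session;
}
```

From there, a rendering library such as three.js would typically drive the frame loop inside the session.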

Currently, you can build the environment for these virtual spaces and make them interactive (think Painting VR on the Oculus platform). However, WebXR is still at an early stage of development. As XR browser and technology support continues to grow, so will its capabilities. XR will move away from a view-only format and enable users to create within the virtual space itself.

6. 3D workflows as standard in all businesses

Companies and agencies will develop, manage, and share 3D content using cross-functional processes and teams.

3D design workflows will streamline concept development and implementation across departments. More than just a way to speed up the go-to-market process and reduce the cost of physical samples, they will become a must-have for any business that wants to remain relevant. Beyond adding value to the production pipeline, these workflows will also double as digital asset management for 3D models, interactive showcases, digital twins, and products destined for the virtual marketplace.

Agree or disagree? Do you have any predictions?



Start creating 3D content today
Try Vectary