It takes a world to raise a virtual universe
Today, we’re in phases 1 and 2 of the pyramid above. The majority of popular entertainment across TV, film, music, and games is created by large teams of creative professionals. Most of these products are crafted for a solo audience, meaning the experience changes little whether you enjoy it alone or with friends.
Multiplayer games have done the most to push the boundaries of social entertainment. Games like World of Warcraft and League of Legends have brought millions of players together in shared worlds. Yet even these games have fundamental social limitations. Multiplayer sessions today are limited in scale—even modern titles like Fortnite can only support up to 100 concurrent players in a server instance. In-game socializing also tends to be focused on specific activities, such as Fortnite’s battle royale mode or concerts.
Open-world games such as Grand Theft Auto and The Witcher 3 have introduced unstructured play where players freely explore a virtual world and pursue objectives in any order they choose. Though this is a step toward spontaneous experiences, the scope of these worlds has also been limited by the amount of content a professional team can create.
In this respect, one of the biggest challenges with building the Metaverse is figuring out how to create enough high-quality content to sustain it. It would take a tremendous amount of content to populate the intricate worlds shown in Ready Player One’s OASIS. It would also be staggeringly expensive to create via professional development. The MMO Star Wars: The Old Republic famously cost EA more than $200 million to make and required a team of over 800 people working for six years to simulate just a few worlds within the Star Wars universe. In comparison, a true Metaverse would likely be composed of several galaxies of Star Wars-sized virtual worlds.
User-generated content (UGC) offers a promising solution for cost-effectively scaling content production. Platforms such as YouTube and Twitch have assembled vast content libraries faster and more efficiently than any professional studio. YouTube serves over 1 billion hours of video daily via a community of 31 million channel creators. Twitch’s 6 million creators live-streamed over 10 billion hours of video in 2019. These platforms harness the collective creative energy of their communities to create an endless flywheel of content.
Yet UGC platforms face challenges of their own. Maintaining a high quality bar can be difficult, given the sheer volume of content being created. And because creating content requires learning new tools and, often, programming skills, creators tend to be vastly outnumbered by consumers. YouTube’s 31 million creators represent only 1.2 percent of its 2 billion monthly user base. 3D game engines such as Unity and Unreal are powerful, but can be difficult to learn. Unity, for example, is used by only 1.5 million monthly creators today, a fraction of the 2.7 billion gamers worldwide.
UGC created by a small segment of human creators, while a meaningful step forward from professionally created studio content, is likely only part of what we need to build the Metaverse and its new social systems.
In order to enable emergent social experiences inside a Metaverse—similar to how we discover experiences in the real world today—there needs to be a vast increase in both the quantity and quality of user-generated content.
To this end, the next major evolution in content creation will be a shift toward AI-assisted human creation (phase 3). Whereas only a small number of people can be creators today, this AI-supplemented model will fully democratize content creation. Everyone can be a creator with the help of AI tools that can translate high-level instructions into production-ready assets, doing the proverbial heavy lifting of coding, drawing, animation, and more.
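To make the shape of this model concrete, here is a toy sketch of what such a tool's interface might look like: a creator supplies a plain-language instruction, and a generator translates it into a structured asset specification an engine could consume. Every name here is hypothetical, and the rule table is a stand-in for what would, in a real system, be a generative model.

```python
# Toy sketch of AI-assisted creation: a high-level instruction becomes a
# structured asset spec. The "model" below is a hypothetical rule table,
# standing in for a real generative backend.
from dataclasses import dataclass, field

@dataclass
class AssetSpec:
    """A structured asset description a game engine could consume."""
    name: str
    mesh: str
    materials: list = field(default_factory=list)
    animations: list = field(default_factory=list)

# Stand-in for a trained model: maps descriptive words in the instruction
# to concrete production choices (materials, animations).
STYLE_RULES = {
    "weathered": {"material": "worn_wood", "animation": None},
    "glowing":   {"material": "emissive",  "animation": "pulse"},
}

def generate_asset(instruction: str) -> AssetSpec:
    """Translate a high-level instruction into a production-ready spec."""
    words = instruction.lower().split()
    # Name the asset after its last words and assume the final noun
    # identifies the base mesh.
    spec = AssetSpec(name="_".join(words[-2:]), mesh=words[-1] + ".obj")
    for word in words:
        rule = STYLE_RULES.get(word)
        if rule:
            spec.materials.append(rule["material"])
            if rule["animation"]:
                spec.animations.append(rule["animation"])
    return spec

bridge = generate_asset("weathered wooden bridge")
print(bridge.mesh, bridge.materials)  # bridge.obj ['worn_wood']
```

The point of the sketch is the division of labor: the human expresses intent at the level of "weathered wooden bridge," and the tool handles the proverbial heavy lifting of producing engine-ready output.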
We’re already seeing early glimpses of this in action. Led by Siri co-creator Tom Gruber, LifeScore has built an adaptive music platform that dynamically composes music in real time. After human composers input a set of musical “source material” into LifeScore, an AI maestro changes, improves, and remixes the music on the fly to lead a performance. LifeScore debuted in May as an adaptive soundtrack for the Twitch interactive TV series Artificial, where viewers were able to influence the music based on how they felt about plot developments.