Viggle AI between creativity and controversy: the viral videos sparking debate | Festina Lente - Your leading source of AI news | Turtles AI

Viggle AI between creativity and controversy: the viral videos sparking debate
The Canadian startup stands out for the realism of its animations, but its use of YouTube data raises compliance issues
Isabella V, 26 August 2024
Viggle AI, a Canadian startup specializing in AI, is gaining attention through its viral videos, but its use of YouTube data to train its video models raises questions about compliance with the platform’s terms of service.

Key points:
- Viggle AI uses public data, including YouTube videos, to train its 3D video models.
- Viggle’s models allow users to specify realistic movements for animated characters.
- The startup recently raised $19 million to expand its team and further develop its products.
- The use of YouTube videos for AI training is controversial and may violate the platform’s terms of service.

Viggle AI, a Canadian AI startup, has recently attracted attention by creating viral videos remixing rapper Lil Yachty in various contexts. This content has circulated widely on social media, demonstrating the capabilities of Viggle’s AI models. However, the use of YouTube videos as a source for training these models has raised questions about compliance with the platform’s terms of service.

Viggle has developed an advanced 3D video model, called JST-1, designed with a genuine understanding of physics, which distinguishes it from video models that operate primarily at the pixel level. According to Viggle CEO Hang Chu, the ability to specify the movements that characters must make is one of the distinguishing features of their technology, allowing for greater controllability and realism in the generated animations.

One of the most striking examples of Viggle’s capabilities is the video in which Joaquin Phoenix’s Joker is animated to mimic Lil Yachty’s movements during an onstage performance. This is achieved by uploading the original video and an image of the desired character. Viggle also offers the ability to create animated characters from scratch using only text prompts.

Although the generated videos are still imperfect, with flaws such as expressionless faces and jittery animations, the JST-1 model has proven to be a useful tool for filmmakers, animators, and game designers, helping them visualize their ideas. Currently, Viggle offers a free, limited version of its AI model on Discord and through its web app, while a monthly subscription of $9.99 grants access to more advanced features.

The startup recently announced that it has raised $19 million in a Series A funding round led by Andreessen Horowitz, with participation from other investors such as Two Small Fish. These funds will be used to accelerate product development, expand the team, and scale the technology. In addition, Viggle is exploring partnerships with film and game studios to license its technology.

In an interview with TechCrunch, CEO Hang Chu stated that Viggle’s models rely on publicly available data, including YouTube videos, for their training. This statement raised questions about compliance with YouTube’s terms of service, which prohibit the unauthorized use of videos from the platform to train AI models. Indeed, YouTube CEO Neal Mohan had stated in April that using YouTube videos for these purposes would be a clear violation of the platform’s rules. Subsequently, a Viggle spokesperson sought to walk back Chu’s remarks, saying that the company is committed to complying with YouTube’s terms of service and that its training data has been carefully selected to ensure compliance.

This controversy points to a broader issue in the AI industry: many companies, including giants like OpenAI and Nvidia, are suspected of using YouTube data to train their models, but few openly admit it. Viggle’s admission underscores the challenges and legal implications of these practices, which could have significant consequences for the future of AI model training.

The debate over the use of YouTube videos to train AI models highlights the need for greater transparency and regulation in this rapidly evolving field.