The AI race has been underway for some time, and many companies have joined it. Video generation is a major field where firms are placing their bets, as OpenAI has demonstrated with Sora. With the realistic video-generating capabilities of its new model, Meta now competes directly with OpenAI's Sora.
Meanwhile, despite the technology's earlier mishaps with false information, Google is integrating more advanced artificial intelligence into its search engine, letting users ask spoken questions about photos and, in some cases, receive an entire AI-organized page of results.
Meta Movie Gen AI Model
Meta has introduced Movie Gen, a new AI model in the vein of its Llama releases, but one designed to generate realistic video and audio from a user's prompt. Meta pitches the tool as competitive with industry leaders such as OpenAI and ElevenLabs, positioning it as the start of a new era for AI-generated media.
According to Meta, Movie Gen can generate videos up to 16 seconds long and produce up to 45 seconds of audio that syncs with the visuals.
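Meta has not released a public API for Movie Gen, so the short Python sketch below is purely illustrative: the MovieGenRequest class, the build_generation_payload helper, and all field names are assumptions, with only the 16-second video and 45-second audio limits taken from Meta's own figures.

```python
# Hypothetical sketch: Meta has not published a public Movie Gen API, so the
# class and function names below are illustrative, not Meta's actual interface.
from dataclasses import dataclass


@dataclass
class MovieGenRequest:
    prompt: str              # natural-language description of the desired clip
    video_seconds: int = 16  # Meta cites a 16-second ceiling for video
    audio_seconds: int = 45  # and up to 45 seconds of synced audio


def build_generation_payload(request: MovieGenRequest) -> dict:
    """Clamp durations to the published limits and return the kind of payload
    a prompt-to-video service like the one described above might accept."""
    return {
        "prompt": request.prompt,
        "video_seconds": min(request.video_seconds, 16),
        "audio_seconds": min(request.audio_seconds, 45),
    }


if __name__ == "__main__":
    payload = build_generation_payload(MovieGenRequest(
        prompt="A man running across a desert while pom-poms appear in his hands"
    ))
    print(payload)
```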
In a blog post, the company demonstrated how the tool works with several videos, including animals swimming and people painting on canvas. In one example, the AI placed pom-poms into the hands of a man running across a desert; in another, it turned a nondescript parking lot into a scene with puddles, skateboarders, and more.
The news comes at a time when the film industry is both embracing and skeptical of generative AI video. Many filmmakers are eager to explore how tools such as Movie Gen could bring distinct ideas to life and streamline production. Others worry that these systems are trained on potentially copyrighted material, which raises moral and legal questions; comparable concerns have already been voiced about OpenAI's Sora.
Google’s AI Injection
Google has injected new, more advanced AI into its search engine so that people can pose questions about pictures and often receive an AI-organized page of results, even though the AI still sometimes provides wrong information.
The most recent updates, which were revealed on Thursday, mark the next phase of Google’s AI-driven makeover, which started in mid-May when the search engine started displaying Gemini artificial intelligence-generated summaries at the top of its highly visible results page in response to certain queries.
Those summaries, known as "AI Overviews," alarmed publishers, who feared that fewer people would click through to their websites, reducing the traffic they need to generate the digital ad revenue that supports their operations.
Google’s decision to pump even more AI into the search engine that remains the crown jewel of its $2 trillion empire leaves little doubt that the Mountain View, California, company is tethering its future to a technology propelling the biggest industry shift since Apple unveiled the first iPhone 17 years ago.
Google's next round of AI development expands on its Lens tool, which was introduced seven years ago and answers questions about items in images. The Lens option now generates over 20 billion searches each month, with users between the ages of 18 and 24 being its most devoted audience. Google is courting that younger market as it contends with AI rivals such as ChatGPT and Perplexity, which present themselves as alternative destinations for answers.
Users will now be able to ask questions in English about whatever they are viewing through the camera, much as they would in conversation with a friend, and receive search results. In addition, users who have opted into Google Labs' voice-activated search experiments will be able to record moving objects, such as fish swimming in an aquarium, while asking a conversational question and receiving an answer in the form of an AI Overview.
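Google has not documented a developer endpoint for this Lens voice-search experiment, so the sketch below is only a guess at how such a multimodal query might be packaged; LensQuery, build_overview_request, and every field name are hypothetical.

```python
# Hypothetical sketch: Google has not published an API for the Lens
# voice-search experiment; every name here is illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LensQuery:
    question: str                                      # the spoken, conversational question
    frames: List[bytes] = field(default_factory=list)  # captured video frames


def build_overview_request(query: LensQuery) -> dict:
    """Pair the transcribed question with the captured frames, roughly the
    shape of request a multimodal search backend would need in order to
    return an AI Overview-style answer."""
    return {
        "question": query.question,
        "frame_count": len(query.frames),
        "respond_with_overview": True,
    }


if __name__ == "__main__":
    q = LensQuery(question="Why are these fish swimming in circles?",
                  frames=[b"frame-0", b"frame-1"])
    print(build_overview_request(q))
```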
Conclusion
As AI continues to evolve, Meta's Movie Gen and Google's AI-injected search demonstrate the rapid advancement of generative technologies. While these innovations offer exciting possibilities for content creation and information retrieval, they also raise important questions about copyright, accuracy, and the future of digital media. As we embrace these tools, careful consideration of their implications remains crucial.