The race for AI supremacy has entered an exciting new chapter as Google introduces Veo 2, its next-generation AI video generation model. Coming on the heels of OpenAI’s Sora release, Veo 2 is a bold statement in the escalating rivalry between the tech giants. With its promise of unmatched accuracy and realism, Veo 2 sets a new benchmark in AI-driven video creation.

Raising the Bar in Video Generation

Unlike traditional models that often “hallucinate” errors—such as distorted hands or unexpected artifacts—Veo 2 significantly reduces these issues, delivering videos that are remarkably lifelike. Whether it’s creating hyper-realistic scenes, dynamic animations, or stylized visuals, Veo 2 excels across a wide range of styles and subjects, ensuring unparalleled quality and precision.

Exclusive Access Through Google Labs

Currently, access to Veo 2 is limited to Google Labs’ VideoFX, where interested users can join the waitlist to experience its capabilities firsthand. This phased rollout underscores Google’s strategic approach to fine-tuning the model before it becomes widely available.

But that’s not all: Google has ambitious plans for Veo 2. In 2025, the model is slated to be integrated into YouTube Shorts and other Google products, positioning it as a cornerstone of the company’s AI-driven content creation strategy.

The Growing Battle Between Giants

Veo 2’s release comes at a pivotal moment, following OpenAI’s launch of Sora, an AI video generation model that has garnered widespread attention. This latest move highlights the intensifying competition between Google and OpenAI. Earlier, OpenAI’s ChatGPT Search had challenged Google’s dominance in the search engine market. Now, with Veo 2, Google is reclaiming its ground, signaling its commitment to leading the charge in AI innovation.

Why Veo 2 Stands Out

Google’s official blog emphasizes the model’s capacity for high detail and realism, setting it apart from other solutions in the market. By addressing common pitfalls in AI-generated videos, such as distorted features and random anomalies, Veo 2 establishes itself as a game-changer for creators, brands, and businesses seeking professional-grade video content.

What’s Next for AI Video Creation?

As Veo 2 gears up for broader adoption, its integration into YouTube Shorts signals a paradigm shift in short-form content creation. Imagine creators leveraging AI to produce visually stunning videos in minutes—without compromising on quality or creativity.

With Veo 2, Google isn’t just keeping up with the competition; it’s shaping the future of AI-powered video creation. From democratizing content production to enabling entirely new forms of storytelling, Veo 2 is poised to revolutionize how we create and consume video content.

Join the Revolution

If you’re eager to explore Veo 2’s groundbreaking features, now is the time to join the waitlist on Google Labs. Be among the first to witness the transformative power of Veo 2 as it redefines what’s possible in AI-driven video generation.

The future of video content is here—and it’s powered by Google Veo 2. Are you ready to create without limits?


Underscoring its commitment to leveraging artificial intelligence (AI) for the advancement of science, Google has announced a $20 million cash investment and an additional $2 million in cloud credits to support groundbreaking scientific research. This initiative, spearheaded by Google DeepMind’s co-founder and CEO, Demis Hassabis, aims to empower scientists and researchers tackling some of the world’s most complex challenges using AI.

The announcement, shared via Google.org, highlights the tech giant’s strategy to collaborate with top-tier scientific minds, offering both financial backing and the robust infrastructure required for pioneering research projects.

Driving Innovation at the Intersection of AI and Science

Maggie Johnson, Google’s Vice President and Global Head of Google.org, shed light on the initiative’s goals in a recent blog post. According to Johnson, the program seeks to fund projects that employ AI to address intricate problems spanning diverse scientific disciplines.
“Fields such as rare and neglected disease research, experimental biology, materials science, and sustainability all show promise,” she noted, emphasizing the transformative potential of AI in these areas.

Google’s initiative reflects its belief in AI’s power to redefine the boundaries of scientific discovery. As Demis Hassabis remarked:
“I believe artificial intelligence will help scientists and researchers achieve some of the greatest breakthroughs of our time.”

The program encourages collaboration between private and public sectors, fostering a renewed excitement for the intersection of AI and science.

The Competitive Landscape: A Race to Support AI Research

Google’s announcement comes on the heels of a similar initiative by Amazon’s AWS, which unveiled $110 million in grants and credits last week to attract AI researchers to its cloud ecosystem. While Amazon’s offering is notably larger in scale, Google’s targeted approach—focusing on specific scientific domains—positions it as a strong contender in the race to harness AI’s potential for solving global challenges.

Bridging the Gap: Encouraging Multidisciplinary Research

One of the standout aspects of Google’s funding initiative is its emphasis on fostering collaboration across disciplines. By enabling researchers to integrate AI into areas like sustainability, biology, and material sciences, the program aims to unlock solutions to problems that have long eluded traditional methods.

The initiative is not merely about funding but also about creating a collaborative ecosystem where innovation can thrive. Google hopes this move will inspire others in the tech and scientific communities to join hands in funding transformative research.

A Vision for the Future

With this $20 million fund, Google is setting the stage for AI to become a cornerstone of scientific exploration. As Hassabis aptly put it:
“We hope this initiative will inspire others to join us in funding this important work.”

This announcement signals not just a financial commitment but also a vision for a future where AI serves as a catalyst for discoveries that could reshape industries, improve lives, and address pressing global issues.

As scientists gear up to submit their innovative proposals, the world waits with bated breath to witness the breakthroughs that this AI-powered initiative will bring. One thing is certain—Google’s bold step has ignited a spark that could lead to the next big leap in human knowledge.


In a stunning leap for video technology, Google has unveiled ReCapture, an innovative tool that is reshaping how we think about video generation. Unlike previous advances that generated new videos from scratch, ReCapture transforms an existing video, recreating it with fresh, cinematic camera angles and motion, a major step beyond traditional editing techniques. Google announced the technology on Friday; Ahsen Khaliq of Hugging Face spread the news on X, and senior research scientist Nataniel Ruiz shared insights on Hugging Face, underscoring ReCapture’s potential impact on AI-driven video transformation.

The Magic of ReCapture: Reimagining Videos from New Perspectives

What sets ReCapture apart? Traditionally, if someone wanted a new camera angle, they needed a new shot. ReCapture eliminates this limitation. It can take a single video clip and reimagine it from different, realistic vantage points without additional filming. Whether for video professionals or social media creators, the ability to add dynamic angles elevates content, bringing a new depth to storytelling.

ReCapture operates through two advanced AI stages. The first involves creating a rough “anchor” video using multiview diffusion models or depth-based point cloud rendering, providing a new perspective. Then, using a sophisticated masked video fine-tuning technique, the anchor video is sharpened, achieving a cohesive, clear reimagining of the scene from fresh viewpoints. This method not only recreates original angles but can even generate unseen portions of the scene, making videos richer, more realistic, and dynamic.
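Structurally, the two stages compose in a straightforward way. Since ReCapture is a research result rather than a public API, every function in the sketch below is a hypothetical placeholder that only mirrors the data flow described above.

```python
import numpy as np

# Hypothetical sketch of ReCapture's two-stage pipeline; none of these
# functions exist in a released library, and each body is a trivial
# stand-in for the real component named in its docstring.

def render_anchor(video: np.ndarray, camera_path: list) -> np.ndarray:
    """Stage 1: produce a rough 'anchor' video from the new viewpoint,
    standing in for multiview diffusion or depth-based point-cloud
    rendering of the source frames."""
    return video.copy()

def masked_refine(anchor: np.ndarray, source: np.ndarray) -> np.ndarray:
    """Stage 2: stand-in for masked video fine-tuning, which sharpens
    the noisy anchor and fills in regions the source never showed."""
    return anchor

def recapture(source_video: np.ndarray, camera_path: list) -> np.ndarray:
    anchor = render_anchor(source_video, camera_path)  # rough new-view clip
    return masked_refine(anchor, source_video)         # coherent final clip
```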

Moving Beyond Text-to-Video with Video-to-Video Generation

This latest tool goes beyond what text-to-video generation has accomplished so far. Video-to-video generation, as pioneered by ReCapture, brings a new level of realism and creativity to video production. By maintaining scene continuity while adding new camera perspectives, ReCapture opens endless creative avenues for content creators, filmmakers, and even gaming developers.

Generative AI has already powered several creative platforms like Midjourney, RunwayML, and CapCut. ReCapture, however, represents a monumental leap forward, merging AI-based depth mapping and fine-tuning methods that are unique in their ability to manipulate existing footage.

ReCapture’s Impact on Creative Industries

In fields from media to generative gaming, ReCapture’s impact is anticipated to be transformative. As demand for immersive and unique content grows, so does the need for tools like ReCapture, which allow creators to expand their vision without the need for costly reshoots. Video games, expected to see tremendous growth in 2025, could be among the biggest beneficiaries: ReCapture could give developers the tools to enhance gaming environments dynamically, making experiences more lifelike and captivating for players.

Beyond gaming, ReCapture sets a new standard for video realism in media production, offering vast opportunities for creative storytelling, interactive ads, and more engaging digital experiences. As more companies experiment with AI video generation and as demand for these technologies skyrockets, Google’s ReCapture tool is well-positioned to become a staple in the AI toolbox of creators everywhere.

The Future of Visual Content: ReCapture’s Next Steps

By introducing ReCapture, Google demonstrates how AI can go beyond creating content, entering the realm of reimagining it. This tool could redefine how we approach video storytelling, presenting an era where creators can immerse audiences in fresh, dynamic perspectives without requiring multiple camera setups. The road ahead looks promising, with ReCapture paving the way for deeper, more engaging visual experiences in everything from social media to high-end film production.

ReCapture isn’t just a step forward—it’s a reinvention, bringing the art of video transformation to an entirely new level.


In an era defined by technological marvels and environmental challenges, the world’s first wooden satellite, LignoSat, has just reached the International Space Station (ISS), ready to undergo a groundbreaking test in low-Earth orbit. This tiny Japanese satellite, a mere 4 inches on each side, might be small, but it represents a massive leap forward in sustainable space technology. Developed through a collaboration between Kyoto University and the Tokyo-based Sumitomo Forestry, LignoSat uses magnolia wood as an eco-friendly alternative to conventional satellite materials, marking the start of a journey that could reshape space exploration’s environmental impact.

Why Wood in Space?

Wood might seem an unlikely candidate for the hostile environment of space, but LignoSat’s designers argue that it offers unique advantages. Satellites are traditionally constructed from aluminum, which has its strengths, yet comes with a hidden cost: pollution. When these metal satellites re-enter Earth’s atmosphere, they generate aluminum oxides, which may disrupt the planet’s thermal balance and even harm the ozone layer.

NASA’s deputy chief scientist Meghan Everett explained this dynamic, noting that a wooden satellite like LignoSat could offer a cleaner alternative that decomposes with minimal impact. “Researchers hope this investigation demonstrates that a wooden satellite can be more sustainable and less polluting for the environment than conventional satellites,” she said.

With the proliferation of megaconstellations such as SpaceX’s Starlink—now at approximately 6,500 active satellites—the pressure on Earth’s atmosphere is only growing. If successful, wooden satellites could provide a novel solution to limit the damage of re-entry, replacing harmful metals with biodegradable materials.

The Road to Testing: LignoSat’s Path on the ISS

LignoSat isn’t a mere concept anymore. Delivered by a SpaceX Dragon capsule to the ISS, it’s now awaiting deployment into orbit from the station’s Kibo module. Once released, the satellite’s mission team, alongside student researchers, will monitor its temperature and assess structural integrity in response to the rigors of space—particularly exposure to atomic oxygen and cosmic radiation.

This data will reveal not only whether wood can withstand the harsh environment of space but also whether it could become a mainstay material for sustainable satellites. The team is hopeful: a successful test could mean wooden satellites join the ranks of spacecraft exploring not only Earth’s orbit but perhaps eventually the moon, Mars, and beyond.

Vision for a Sustainable Space Age

Takao Doi, a retired astronaut and current professor at Kyoto University, believes that this experiment could fundamentally change how satellites are made. “Metal satellites might be banned in the future,” he noted, alluding to the growing awareness of space pollution. If LignoSat’s data shows it performs well, Doi and the team are prepared to propose the idea of wooden satellites to major industry players, including Elon Musk’s SpaceX.

Beyond Earth orbit, wood’s potential as a building material has implications that could extend to extraterrestrial construction as well. As Sumitomo Forestry’s Kenji Kariya points out, “Wood is actually cutting-edge technology as civilization heads to the moon and Mars.” This concept of sustainable materials in space could fuel both the timber industry on Earth and the creation of more eco-friendly space infrastructures.

A Test for the Future

LignoSat’s arrival at the ISS signifies a small yet pivotal step toward sustainability in space. Its upcoming six-month test phase promises to open doors for new technologies and partnerships aimed at reducing space industry pollution while advancing eco-conscious exploration. What once may have seemed an unusual idea—wood in the stars—now hints at a greener future for spaceflight.

With environmental pressures mounting on Earth, innovations like LignoSat reflect a promising shift: from high-tech metallic construction to a more balanced relationship between humanity and space, one grounded in sustainable principles. And as this tiny wooden cube orbits Earth, it may be carving out a path to a cleaner, greener cosmos.


In a landscape where powerful large language models (LLMs) dominate, Google DeepMind’s latest research into Relaxed Recursive Transformers (RRTs) marks a breakthrough shift. Together with KAIST AI, Google DeepMind is not just aiming for performance—it’s aiming for efficiency, sustainability, and practicality. This development has the potential to reframe how we approach AI, making it more accessible, less resource-heavy, and ultimately, more adaptable for real-world applications.

RRTs: A New Approach to Efficiency

RRTs allow language models to function with reduced cost, memory, and computational demand, achieving impressive results without the need for massive models. One core technique in RRTs is “Layer Tying,” in which a small set of shared layers processes the input repeatedly. Instead of passing the input through a large stack of distinct layers, layer tying reuses the same few layers across multiple passes, reducing memory requirements and boosting computational efficiency.
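To make layer tying concrete, here is a minimal PyTorch sketch, assuming a toy encoder-style stack; the dimensions, loop count, and module choices are illustrative rather than DeepMind’s actual architecture.

```python
import torch.nn as nn

class RecursiveTransformer(nn.Module):
    """Layer tying sketch: a small shared block applied several times.

    Effective depth is n_shared_layers * n_loops, but only
    n_shared_layers sets of weights are ever stored.
    """
    def __init__(self, d_model=512, n_heads=8, n_shared_layers=3, n_loops=4):
        super().__init__()
        self.shared_block = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_shared_layers)
        )
        self.n_loops = n_loops

    def forward(self, x):
        # The same tied layers handle the input on every loop.
        for _ in range(self.n_loops):
            for layer in self.shared_block:
                x = layer(x)
        return x
```

Here a computation that is twelve layers deep carries only three layers’ worth of parameters, which is exactly the memory saving the technique targets.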

Moreover, LoRA (Low-Rank Adaptation) adds another layer of innovation to RRTs. Here, low-rank matrices subtly adjust shared weights to create variations, ensuring each pass-through introduces fresh behavior without requiring extra layers. This recursive design also allows for uptraining, where layers are fine-tuned to continuously adapt as new data is fed into the model.
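The “relaxed” part might look like the following sketch, assuming one low-rank pair per recursion depth; the rank, shapes, and zero-initialization of B follow common LoRA practice rather than the paper’s exact recipe.

```python
import torch
import torch.nn as nn

class RelaxedTiedLinear(nn.Module):
    """One base weight shared across all loops, 'relaxed' by a
    per-loop LoRA delta: W_k = W + B_k A_k, with rank r << d."""
    def __init__(self, d_in=512, d_out=512, n_loops=4, rank=8):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)  # tied base projection
        # One small (A, B) pair per recursion depth k.
        self.A = nn.ParameterList(
            nn.Parameter(torch.randn(rank, d_in) * 0.01) for _ in range(n_loops)
        )
        self.B = nn.ParameterList(
            nn.Parameter(torch.zeros(d_out, rank)) for _ in range(n_loops)
        )

    def forward(self, x, loop_idx):
        # B is zero-initialized, so every loop starts identical to the
        # tied weights and diverges only as the deltas are trained.
        delta = x @ self.A[loop_idx].T @ self.B[loop_idx].T
        return self.shared(x) + delta
```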

The Power of Batch-wise Processing

RRTs enable continuous batch-wise processing, meaning multiple inputs can be processed at varying points within the recursive layer structure. If an input yields a satisfactory result before completing all its loops, it exits the model early—saving further resources. According to researcher Bae, continuous batch-wise processing could dramatically enhance the speed of real-world applications. This shift to real-time verification in token processing is poised to bring about new levels of performance efficiency.
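A toy sketch of that early-exit idea, assuming a hypothetical confidence test on next-token logits: real continuous batching also refills freed slots with new requests, which this simplified whole-batch version omits.

```python
import torch

def recursive_forward(block, classifier, x, n_loops=4, threshold=0.95):
    """Toy early exit: stop looping once every sequence in the batch
    predicts its next token confidently. `block` (the shared layer
    stack) and `classifier` (hidden state -> vocab logits) are
    stand-in callables, not the paper's API."""
    loops_used = n_loops
    for loop in range(n_loops):
        x = block(x)                                  # one recursive pass
        logits = classifier(x[:, -1])                 # next-token logits
        confidence = torch.softmax(logits, dim=-1).amax(dim=-1)
        if bool((confidence > threshold).all()):      # batch is confident
            loops_used = loop + 1
            break                                     # skip remaining loops
    return x, loops_used
```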

Proven Impact: Numbers that Matter

The results from DeepMind’s tests reveal the profound impact of this recursive approach. For example, a Gemma model uptrained to a recursive Gemma 1B version achieved a 13.5% absolute accuracy improvement on few-shot tasks compared to a standard non-recursive model. By training on just 60 billion tokens, the RRT-based model matched the performance of a full-size Gemma model trained on a staggering 3 trillion tokens, a roughly 50-fold reduction in training data.

Despite the promise, some challenges remain. Bae notes that further research is needed to achieve practical speedup through real-world implementations of early exit algorithms. However, with additional engineering focused on depth-wise batching, DeepMind anticipates scalable and significant improvements.

Comparing Innovations: Meta’s Quantization and Layer Skip

DeepMind isn’t alone in this quest for LLM efficiency. Meta recently introduced quantized models, reducing the precision of model weights so they occupy less space and enabling LLMs to run on lower-memory devices. Quantization and RRTs share the goal of enhancing model efficiency but differ in approach: quantization focuses on size reduction, while RRTs center on processing speed and adaptability.
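As a point of reference, the memory argument behind quantization is easy to sketch: storing int8 values plus one scale factor instead of float32 weights cuts weight memory roughly fourfold. The snippet below is generic symmetric per-tensor quantization, not Meta’s specific scheme.

```python
import torch

def quantize_int8(weight: torch.Tensor):
    """Symmetric per-tensor quantization: int8 values plus one float
    scale, roughly a 4x memory saving over float32 storage."""
    scale = weight.abs().max() / 127.0                 # largest weight -> 127
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Approximate the original weights at inference time.
    return q.float() * scale

w = torch.randn(512, 512)
q, s = quantize_int8(w)
print(f"max abs error: {(w - dequantize(q, s)).abs().max():.4f}")
```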

Meta’s Layer Skip technique, for example, aims to boost efficiency by selectively skipping layers during training and inference. RRTs, on the other hand, allow parameter sharing, increasing model throughput with each pass. Importantly, Layer Skip and Quantization could potentially complement RRTs, setting the stage for a combination of techniques that promise massive gains in efficiency.

A Step Towards Smarter AI Ecosystems

The rise of small language models like Microsoft’s Phi and Hugging Face’s SmolLM reflects a global push to make AI more efficient and adaptable. In India, Infosys and Sarvam AI have already embraced small models, exploring ways they can aid sectors such as finance and IT.

The shift from sheer size to focused efficiency is reshaping the future of AI. With models like RRTs leading the way, the trend suggests that we may soon achieve the power of large language models without the immense resource drain. As AI continues to evolve, techniques like RRTs could bring a future where models are not only faster and smarter but are also lighter, greener, and more adaptable to diverse applications.


Google has introduced a series of new features to its Gemini AI, including a personalization tool called Gems, which allows users to customize the AI chatbot for specific tasks. This new feature enables users to tailor the Gemini chatbot to their needs, whether as a workout partner, a coding assistant, or a writing companion.

To create a personalized Gem, users can provide instructions on the desired style of responses, save a custom introduction, and even assign a specific character to the chatbot. Once these preferences are set, the customized Gem is activated and ready for use. This feature will be available exclusively to Gemini Advanced subscribers.

In addition to the customizable Gems, Google is also launching several predesigned Gems for broader tasks such as troubleshooting code, offering writing tips, and explaining complex topics in simpler terms.

Google is also rolling out the next-generation image generation tool, Imagen 3. This update includes the reactivation of Gemini’s ability to generate AI images of people—a feature that was previously disabled due to the creation of historically inaccurate images. The company has now implemented safeguards to prevent such issues in the future. These guardrails are designed to avoid overcorrection for diversity, which previously led to embarrassing mistakes.

“We don’t support the generation of photorealistic, identifiable individuals, depictions of minors, or excessively gory, violent, or sexual scenes,” stated Gemini Product Manager Dave Citron. He acknowledged that not every image generated by Gemini will be perfect but emphasized the company’s commitment to continuous improvement based on user feedback.

Additionally, Google has incorporated the SynthID tool to watermark images created by Imagen 3, ensuring the authenticity and traceability of AI-generated content.

Imagen 3 will be available to all users starting this week, though the ability to generate images of people will initially be limited to paid subscribers.

