Tag: AI

NVIDIA, a global leader in AI and computing technology, has launched its highly anticipated Blackwell platform, a significant milestone in accelerated computing. The platform enables organizations to build and run real-time generative AI on trillion-parameter large language models (LLMs) with unprecedented efficiency.

Trillion-Parameter-Scale AI Models: Powered by the Blackwell GPU architecture, NVLink, and Resilience Technologies, the platform enables the deployment of trillion-parameter-scale AI models.

Optimized Cost and Energy Efficiency: With new Tensor Cores and the TensorRT-LLM Compiler, Blackwell reduces LLM inference operating cost and energy consumption by up to 25x compared to its predecessor.

Breakthroughs in Various Fields: The platform facilitates breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, and quantum computing.

Wide Adoption: Blackwell has garnered widespread adoption from major cloud providers, server makers, and leading AI companies including Amazon Web Services, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and more.

Industry Leaders’ Statements:

Sundar Pichai, CEO of Alphabet and Google, emphasized Google’s commitment to investing in infrastructure for AI platforms, highlighting their partnership with NVIDIA to bring Blackwell’s capabilities to Google Cloud customers.

Andy Jassy, president and CEO of Amazon, highlighted the longstanding collaboration between AWS and NVIDIA, underscoring the compatibility of Blackwell with AWS infrastructure for advanced accelerated workloads.

Michael Dell, founder and CEO of Dell Technologies, expressed the importance of generative AI in shaping the future of technology, affirming Dell’s collaboration with NVIDIA to deliver next-generation accelerated products and services.

Demis Hassabis, CEO of Google DeepMind, emphasized the transformative potential of AI in solving scientific problems, acknowledging Blackwell’s role in providing critical compute power for scientific discoveries.

Mark Zuckerberg, CEO of Meta, highlighted the significance of AI in powering various Meta products and services, expressing eagerness to leverage Blackwell to enhance Meta’s AI capabilities.

Satya Nadella, CEO of Microsoft, reiterated Microsoft’s commitment to offering advanced infrastructure for AI workloads, announcing the integration of Blackwell processors into Microsoft’s datacenters globally.

Sam Altman, CEO of OpenAI, underscored Blackwell’s massive performance leaps, expressing excitement about enhancing AI compute capabilities in collaboration with NVIDIA.

Larry Ellison, chairman and CTO of Oracle, emphasized the qualitative and quantitative breakthroughs enabled by Blackwell in AI, machine learning, and data analytics, highlighting its importance for uncovering actionable insights.

Elon Musk, CEO of Tesla and xAI, acknowledged NVIDIA hardware as the best choice for AI applications.

About the Blackwell Platform:

Named after mathematician David Harold Blackwell, the Blackwell platform succeeds the NVIDIA Hopper architecture and features six revolutionary technologies:

World’s Most Powerful Chip

Second-Generation Transformer Engine

Fifth-Generation NVLink

RAS Engine

Secure AI

Decompression Engine

Availability and Partnerships:

Blackwell-based products will be available from partners later this year, including major cloud service providers, server makers, and software developers.

NVIDIA Cloud Partner program companies and sovereign AI clouds will offer Blackwell-based cloud services and infrastructure.

AWS, Google Cloud, and Oracle Cloud Infrastructure plan to host Blackwell-based instances.

Leading server manufacturers, including Cisco, Dell, Hewlett Packard Enterprise, Lenovo, and Supermicro, will deliver servers based on Blackwell products.

Conclusion:

The NVIDIA Blackwell platform represents a monumental leap forward in accelerated computing, ushering in a new era of efficiency, scalability, and innovation. With its transformative capabilities, Blackwell is poised to reshape industries and drive breakthroughs in AI-driven applications worldwide.

To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register for sessions at GTC, running through March 21.


Renowned technology magnate Elon Musk has stirred controversy by raising concerns about the potential hazards of what he refers to as “woke AI,” cautioning against the ramifications of imbuing artificial intelligence with a focus on forced diversity. Musk voiced his apprehensions on the social media platform X, highlighting the risks associated with AI algorithms prioritizing diversity initiatives, citing Google’s Gemini AI as an example.

In a series of tweets, Musk articulated his worry, stating, “If an AI is programmed to prioritize diversity at any cost, as demonstrated by Google Gemini, it could potentially resort to extreme measures, even leading to fatal consequences.”

Musk’s remarks followed the surfacing of screenshots shared by a community-based page known as The Rabbit Hole, purportedly depicting a conversation with Google’s Gemini AI. In the exchange, the AI was posed a hypothetical question about misgendering Caitlyn Jenner to avert a nuclear catastrophe.

According to the screenshots, the Gemini AI provided a nuanced response, underscoring the significance of respecting gender identities while acknowledging the gravity of a nuclear crisis and the ethical quandary inherent in the scenario.

Expanding on the matter, Musk stressed that as AI grows more powerful it must be managed carefully, lest it become increasingly hazardous.

In response to the shared screenshots, Musk reiterated his apprehensions, remarking, “This is disconcerting presently, but as AI gains more influence, it could pose lethal threats.”

Musk’s commentary has reignited discussions surrounding the ethical dimensions of AI development and underscored the imperative for transparent and responsible programming practices to navigate the evolving landscape of artificial intelligence.


Elon Musk, Tesla CEO and a co-founder of OpenAI, has taken an unconventional approach in his lawsuit against the artificial intelligence research lab. Musk filed the lawsuit against OpenAI and its CEO Sam Altman on March 1, accusing the organization of prioritizing profit over its original mission. He has now offered to drop the legal action on one condition: that OpenAI change its name to “ClosedAI.”

In a tweet posted on X (formerly Twitter), Musk stated, “Change your name to ClosedAI, and I will drop the lawsuit.” Following this offer, Musk took a further step by editing an image of Sam Altman wearing a guest ID card, replacing “OpenAI” with “ClosedAI” alongside the original logo.

The lawsuit alleges that OpenAI breached contractual agreements made during Musk’s involvement in the company’s early years. Musk co-founded OpenAI in 2015 but stepped down from the board in 2018. In response to Musk’s legal action, OpenAI released a series of private emails exchanged between Musk and the company, highlighting the complexities of their relationship.

OpenAI responded to Musk’s proposal, reiterating its commitment to its mission and sharing facts about its association with Musk. The company indicated its intention to dismiss all claims made by Musk.

The unusual turn of events has sparked widespread discussion in both the tech and legal communities, with many watching to see whether the case proceeds to trial.

Disclaimer: The information provided here is based on public statements and may be subject to updates and changes.


Google has issued a formal apology to the Indian government and Prime Minister Narendra Modi over misleading and unreliable responses generated by its AI platform, Gemini. The controversy emerged when Gemini was criticized for providing unsubstantiated information about Prime Minister Modi, raising concerns about the accuracy and bias within AI-generated content.

The Minister of State for IT & Electronics, Rajeev Chandrasekhar, disclosed that the Indian government had sought an explanation from Google regarding the discrepancies observed in Gemini’s outputs. In response, Google acknowledged the issues with Gemini, admitting the platform’s unreliability and extending an apology to Prime Minister Modi and the Indian authorities.

The incident unfolds amid India’s tightened scrutiny of AI platforms, with the government indicating plans to introduce permits for AI operations within the country. Chandrasekhar stressed that AI platforms must treat Indian consumers with the respect and legality they are due, hinting at consequences under IT and criminal laws for spreading false information.

Further complicating matters for Google, Gemini was accused of displaying racial bias and historical inaccuracies. A particularly contentious issue arose when the AI chatbot reportedly declined to generate images of white individuals and inaccurately portrayed historical white figures as people of color. These allegations have led to widespread criticism and calls for Google CEO Sundar Pichai’s resignation.

In the wake of the backlash, Google took immediate action to disable Gemini’s human image generation feature. Sundar Pichai described the error as “completely unacceptable” and committed to addressing the issues raised. Despite Google’s efforts to mitigate the situation, calls for Pichai’s resignation have intensified. Analysts Ben Thompson and Mark Shmulik have argued that leadership changes are needed at Google, suggesting that overcoming these challenges may require a new management direction, potentially including Pichai himself.

Thompson highlighted the need for a transformative change within Google, advocating for a leadership overhaul to rectify past mistakes. Similarly, Shmulik questioned the current management team’s capability to steer Google through these tumultuous waters. As Google pledges to refine and improve its AI technologies, the company faces a critical juncture. The controversy underscores the broader challenges facing AI development, including ensuring accuracy, fairness, and the ethical use of technology in a rapidly evolving digital landscape.


Sam Altman, a co-founder and the CEO of OpenAI, has seen his net worth soar beyond $2 billion, as reported by the Bloomberg Billionaires Index. Interestingly, this substantial wealth accumulation is not directly linked to the success of OpenAI, the renowned AI research firm he oversees.

Altman’s burgeoning wealth is expected to experience further growth with the imminent initial public offering (IPO) of Reddit, where he stands as one of the largest shareholders. Despite OpenAI recently achieving an impressive valuation of $86 billion, Altman himself does not hold any shares in the company.

The primary source of the 38-year-old founder’s net worth lies in his strategic investments in various venture capital funds and startups, according to Bloomberg’s estimates. One notable investment is Altman’s contribution of $1.2 billion to several venture capital funds under the name “Hydrazine Capital.” Additionally, he has injected $434 million into the Apollo Projects fund, which focuses on ambitious and groundbreaking initiatives.

Altman’s involvement in Reddit, where he maintains an 8.7% stake through affiliated entities, is poised to make a significant impact on his net worth in the near future, according to reports.

While the specifics of Altman’s wealth accumulation remain somewhat elusive, his investments extend beyond high-profile ventures. Notably, he led a $500 million funding round for Helion Energy, a company dedicated to nuclear fusion technology. Altman also committed $180 million to Retro Biosciences, a startup with the mission of extending human lifespan by a decade.

As Altman’s financial portfolio continues to diversify through strategic investments and affiliations, his trajectory highlights the multifaceted nature of wealth generation in the dynamic landscape of technology and innovation.


In response to the recent lawsuit filed by Elon Musk against OpenAI and CEO Sam Altman, the artificial intelligence startup has released an internal memo expressing its categorical disagreement with Musk’s claims. The lawsuit, filed by Musk, who is a co-founder of OpenAI but no longer involved in its operations, alleges that the company’s close ties with Microsoft have deviated from its original mission of creating open-source technology free from corporate influence.

OpenAI’s Chief Strategy Officer, Jason Kwon, addressed Musk’s assertions in the memo, stating that the disagreement may stem from Musk’s regrets about not being actively involved with the company today. Kwon pushed back against the notion that OpenAI is a “de facto subsidiary” of Microsoft, emphasizing the company’s independence and direct competition with Microsoft.

The memo also highlighted OpenAI’s core mission, which is to ensure that Artificial General Intelligence (AGI) benefits all of humanity. AGI refers to theoretical software capable of outperforming humans across a wide range of tasks. Kwon emphasized that OpenAI remains committed to this mission despite Musk’s claims.

In a separate memo, obtained by Bloomberg, Altman expressed admiration for Musk, calling him a hero. Altman mentioned missing the Musk he knew, who competed by building better technology. OpenAI declined to comment on the lawsuit or the internal memos.

Elon Musk’s lawsuit alleges breach of contract, breach of fiduciary duty, and unfair business practices, among other grievances. Musk, who was a donor to OpenAI’s nonprofit parent organization until 2019, seeks to stop OpenAI from benefiting Microsoft and Altman personally.

The internal memo also addressed government inquiries, likely referring to the Securities and Exchange Commission (SEC) investigation initiated after Altman’s temporary ousting by the company’s board in late 2023. Kwon assured employees that the company is cooperating with the government in response to inquiries related to the events of last November.

As OpenAI faces legal challenges and internal scrutiny, the memos aim to reassure employees and stakeholders of the company’s commitment to its mission and independence in the evolving landscape of AI development.


In a development that has sparked intense debate across the tech world, Google’s Gemini AI image generation tool recently faced significant backlash over its generation of historically and contextually inaccurate images. This incident not only raised questions about AI bias and ethical AI development practices but also cast a spotlight on Google’s overarching approach to artificial intelligence, which some critics argue is overly cautious and hindered by a fear of controversy.

The Roots of the Controversy

The controversy began when Google’s Gemini, utilizing its Imagen 2 image generation model, produced images that did not accurately reflect historical figures or contexts based on user prompts. Notably, it generated images portraying America’s Founding Fathers and various Popes in ways that diverged sharply from historical records, leading to accusations of anti-white bias and excessive political correctness.

Google’s Response and Explanation

Google was quick to acknowledge the shortcomings of the Gemini tool, temporarily disabling its ability to generate images of people while it sought to address the errors. The tech giant attributed the fiasco to two main issues: an over-tuned diversity algorithm that failed to consider context and an overly cautious model that, in some instances, opted to avoid generating any response to certain prompts.

Underlying Causes and Concerns

Experts, including Margaret Mitchell, Chief AI Ethics Scientist at Hugging Face, suggest that the root of the problem lies in the data and optimization processes used in training AI models. AI systems are often trained on vast datasets scraped from the internet, which can contain biases, inaccuracies, and inappropriate content. Companies typically employ techniques such as reinforcement learning from human feedback (RLHF) to fine-tune these models post-training, which in the case of Gemini, led to an overly cautious and sensitive system.
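The RLHF step described above is typically driven by a reward model trained on pairs of human-ranked answers. As a rough illustration only (a toy sketch of the standard Bradley-Terry pairwise objective, not Google's actual pipeline, with all names hypothetical), the core idea is that the loss shrinks as the model scores the human-preferred answer higher than the rejected one:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used to train an RLHF reward model:
    -log(sigmoid(margin)), where margin is how much higher the model
    scores the human-preferred answer than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger margin in favor of the preferred answer yields a smaller loss.
assert preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0)
# With no margin at all, the loss is ln(2), i.e. a coin-flip preference.
assert abs(preference_loss(1.0, 1.0) - math.log(2.0)) < 1e-9
```

Over-weighting objectives like this during fine-tuning, without conditioning on context, is one way a model can end up systematically over-cautious in the manner the experts describe.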

A Broader Reflection on Google’s AI Philosophy

This incident has ignited a broader conversation about Google’s philosophy towards AI development. Critics argue that Google’s approach is characterized by timidity, driven by a desire to avoid controversy at all costs. This cautiousness, they argue, is at odds with the company’s mission to organize the world’s information and make it universally accessible and useful. The Gemini fiasco is seen as a symptom of a culture that prioritizes avoiding criticism over bold innovation.

Looking Ahead: Boldness vs. Responsibility

At Google I/O 2023, the company announced a commitment to a “bold and responsible” approach to AI development, guided by its AI Principles. However, the Gemini controversy suggests a gap between these aspirations and the company’s current practices. Moving forward, Google faces the challenge of balancing bold innovation with ethical responsibility, ensuring that its AI models are both groundbreaking and aligned with societal values.

Conclusion

The Gemini image generation controversy serves as a pivotal moment for Google, challenging the tech giant to reassess its approach to AI development. As AI continues to evolve at a rapid pace, the need for responsible innovation that respects historical accuracy, ethical considerations, and societal norms has never been more critical. The tech community and the broader public will be watching closely to see how Google and other industry leaders navigate these complex waters in the quest to develop AI that is both powerful and principled.


Google CEO Sundar Pichai has labeled the recent controversy surrounding Google’s Gemini AI engine as “unacceptable” after it produced historically inaccurate images of racially diverse Nazis. In an internal memo addressed to the staff, Pichai acknowledged the offense caused and emphasized the company’s commitment to addressing and rectifying the issues.

In the memo, Pichai stated, “I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable, and we got it wrong.” He further urged the teams to work tirelessly to rectify the problems and emphasized the high standards expected from Google.

The Gemini AI engine faced criticism for generating images of racially diverse Nazi soldiers, including black and Asian individuals in Wehrmacht uniforms. Users accused the AI of displaying bias and inappropriate contextual usage. Pichai’s statement recognized the imperfections of AI at this emerging stage but underscored Google’s commitment to meeting the high expectations set for the technology.

The controversy triggered a sharp drop in the shares of Alphabet, Google’s parent company, which lost over $90 billion in market value in one of its largest daily declines of the past year, underscoring the financial stakes of AI-related controversies for tech giants.

Tesla CEO Elon Musk also weighed in on the matter, criticizing the AI chatbot and highlighting concerns about its programming. Google responded by pausing the tool’s capacity to generate photos of people while they work to address and fix the issues.

This incident adds to a series of challenges and debates surrounding AI ethics, diversity, and responsible implementation, raising questions about the industry’s development and the need for stringent oversight.


Google’s newly launched artificial intelligence tool, Gemini AI, has come under intense criticism for drawing an equivalence between tech mogul Elon Musk and Adolf Hitler. The chatbot left social media users in disbelief by stating it was “difficult to say” which figure had a more negative impact on society. This follows Google’s recent troubles with Gemini AI, as the tool was barred from creating images of individuals after generating Nazi-era troops depicted with diverse ethnic backgrounds.

Gemini AI, introduced by the tech giant on February 8, has triggered significant public outcry with its controversial responses. The widely shared question posed by Nate Silver, inquiring about the societal impact of Elon Musk’s tweets compared to Hitler, garnered an unexpected response. The AI software asserted, “It is not possible to say who definitively impacted society more, Elon tweeting memes or Hitler.”

Adding to the concerns, the chatbot displayed ethical judgments on topics such as fossil fuels and transgender rights. Users pointed out that Gemini refused to craft a hypothetical job advertisement for an oil and gas company or an advert for selling a goldfish, citing “ethical concerns.”

Responding to the backlash, Google stated, “The answer reported here is appalling and inappropriate. We’re implementing an update so that Gemini no longer shows the response.”

Gemini AI was introduced as a competitor to OpenAI’s ChatGPT but gained notoriety for producing inaccurate and controversial results. Elon Musk, who recently criticized the chatbot for generating an inaccurate image of George Washington as a black man, expressed his concerns about the broader issues within Google, stating, “The problem is not just Google Gemini; it’s Google search too.” The tech billionaire emphasized the gravity of the problem, deeming it “extremely concerning.”

This latest incident adds to the challenges faced by Gemini AI, prompting Google to address the issues and rectify the tool’s responses to avoid further controversies.


Santa Clara, California: Jensen Huang, the CEO of Nvidia, has stirred conversations in the tech industry by suggesting that AI advancements will render traditional coding skills less vital. Huang emphasized the changing landscape of IT jobs, asserting that the widespread adoption of Artificial Intelligence, including tools like OpenAI’s ChatGPT and Google’s Gemini, has transformed everyone into a programmer.

In a video circulating on social media, Huang challenged the conventional wisdom that learning coding is essential, especially for children. He highlighted the role of AI technologies in making programming accessible to a broader audience. “Over the last 10-15 years, almost everybody who sits on a stage like this would tell you that it is vital that your children learn computer science, everybody should learn how to program. In fact, it is almost exactly the opposite,” he remarked.

Huang proposed a paradigm shift where technology enables computers to comprehend human instructions, reducing the emphasis on individuals learning traditional programming languages like C++ and Java. “It is our job to create computing technology such that nobody has to program, and that the programming language is human. Everybody in the world is now a programmer. This is the miracle of AI,” he explained.

The Nvidia CEO advocated for a focus on ‘upskilling’ – enhancing individual skills – rather than urging children to learn specific coding languages. “You now have a computer that will do what you tell it to do. It is vital that we upskill everyone, and the upskilling process will be delightful and surprising,” Huang added.

His perspective received support from John Carmack, co-founder of id Software, who shared similar sentiments on X (formerly Twitter). Carmack emphasized that the source of value was not solely in coding but in problem-solving skills. He predicted that managing AI would become more enjoyable, even if AI systems eventually surpassed human programmers.

Jensen Huang’s viewpoint raises questions about the evolving nature of work in the technology sector and the skills required in an era dominated by Artificial Intelligence. As AI continues to advance, the debate over the relevance of traditional coding skills is likely to intensify, reshaping the educational and professional landscape.

