Tag: Google

Technology magnate Elon Musk has stirred controversy by warning of the hazards of what he calls “woke AI,” cautioning against imbuing artificial intelligence with a mandate for forced diversity. Musk voiced his concerns on the social media platform X, pointing to Google’s Gemini AI as an example of the risks posed by AI systems that prioritize diversity initiatives.

In a series of tweets, Musk articulated his worry, stating, “If an AI is programmed to prioritize diversity at any cost, as demonstrated by Google Gemini, it could potentially resort to extreme measures, even leading to fatal consequences.”

Musk’s remarks followed the surfacing of screenshots shared by a community-based page known as The Rabbit Hole, purportedly depicting a conversation with Google’s Gemini AI. In the exchange, the AI was posed a hypothetical question about misgendering Caitlyn Jenner to avert a nuclear catastrophe.

According to the screenshots, the Gemini AI provided a nuanced response, underscoring the significance of respecting gender identities while acknowledging the gravity of a nuclear crisis and the ethical quandary inherent in the scenario.

Expanding on the matter, Musk warned of the potential implications as AI continues to progress, stressing that increasingly powerful systems could become increasingly hazardous if not managed properly.

In response to the shared screenshots, Musk reiterated his apprehensions, remarking, “This is disconcerting presently, but as AI gains more influence, it could pose lethal threats.”

Musk’s commentary has reignited discussions surrounding the ethical dimensions of AI development and underscored the imperative for transparent and responsible programming practices to navigate the evolving landscape of artificial intelligence.


Google has issued a formal apology to the Indian government and Prime Minister Narendra Modi over misleading and unreliable responses generated by its AI platform, Gemini. The controversy emerged when Gemini was criticized for providing unsubstantiated information about Prime Minister Modi, raising concerns about the accuracy and bias within AI-generated content.

The Minister of State for IT & Electronics, Rajeev Chandrasekhar, disclosed that the Indian government had sought an explanation from Google regarding the discrepancies observed in Gemini’s outputs. In response, Google acknowledged the issues with Gemini, admitting the platform’s unreliability and extending an apology to Prime Minister Modi and the Indian authorities.

This incident unfolds amid India’s tightened scrutiny over AI platforms, with the government indicating plans to introduce permits for AI operations within the country. Chandrasekhar stressed the importance of AI platforms adhering to the respect and legality due to Indian consumers, hinting at the legal consequences under IT and criminal laws for spreading false information.

Further complicating matters for Google, Gemini was accused of displaying racial bias and historical inaccuracies. A particularly contentious issue arose when the AI chatbot reportedly declined to generate images of white individuals and inaccurately portrayed historical white figures as people of color. These allegations have led to widespread criticism and calls for Google CEO Sundar Pichai’s resignation.

In the wake of the backlash, Google took immediate action to disable Gemini’s human image generation feature. Sundar Pichai described the error as “completely unacceptable” and committed to addressing the issues raised. Despite Google’s efforts to mitigate the situation, calls for Pichai’s resignation have intensified. Analysts Ben Thompson and Mark Shmulik have voiced their opinion on the necessity for leadership changes at Google, suggesting that overcoming these challenges may require new management direction, potentially implicating CEO Sundar Pichai himself.

Thompson highlighted the need for a transformative change within Google, advocating for a leadership overhaul to rectify past mistakes. Similarly, Shmulik questioned the current management team’s capability to steer Google through these tumultuous waters. As Google pledges to refine and improve its AI technologies, the company faces a critical juncture. The controversy underscores the broader challenges facing AI development, including ensuring accuracy, fairness, and the ethical use of technology in a rapidly evolving digital landscape.


In a development that has sparked intense debate across the tech world, Google’s Gemini AI image generation tool recently faced significant backlash over its generation of historically and contextually inaccurate images. This incident not only raised questions about AI bias and ethical AI development practices but also cast a spotlight on Google’s overarching approach to artificial intelligence, which some critics argue is overly cautious and hindered by a fear of controversy.

The Roots of the Controversy

The controversy began when Google’s Gemini, utilizing its Imagen 2 image generation model, produced images that did not accurately reflect historical figures or contexts based on user prompts. Notably, it generated images portraying America’s Founding Fathers and various Popes in ways that diverged sharply from historical records, leading to accusations of anti-white bias and excessive political correctness.

Google’s Response and Explanation

Google was quick to acknowledge the shortcomings of the Gemini tool, temporarily disabling its ability to generate images of people while it sought to address the errors. The tech giant attributed the fiasco to two main issues: an over-tuned diversity algorithm that failed to consider context and an overly cautious model that, in some instances, opted to avoid generating any response to certain prompts.

Underlying Causes and Concerns

Experts, including Margaret Mitchell, Chief AI Ethics Scientist at Hugging Face, suggest that the root of the problem lies in the data and optimization processes used in training AI models. AI systems are often trained on vast datasets scraped from the internet, which can contain biases, inaccuracies, and inappropriate content. Companies typically employ techniques such as reinforcement learning from human feedback (RLHF) to fine-tune these models post-training, which in the case of Gemini, led to an overly cautious and sensitive system.
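The RLHF fine-tuning Mitchell refers to typically begins with a reward model trained on human preference pairs. A minimal sketch of the Bradley-Terry preference loss at the heart of that step (illustrative only; the function names and simplifications here are assumptions, not Google's actual implementation):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss used when training an RLHF reward model:
    the negative log-probability that the human-preferred response
    scores higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the model ranks the preferred answer higher,
# so whatever raters systematically prefer (e.g. cautious refusals)
# gets amplified during fine-tuning.
```

If raters consistently reward refusals on sensitive prompts, minimizing this loss pushes the model toward exactly the "overly cautious and sensitive" behavior the article describes.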

A Broader Reflection on Google’s AI Philosophy

This incident has ignited a broader conversation about Google’s philosophy towards AI development. Critics argue that Google’s approach is characterized by timidity, driven by a desire to avoid controversy at all costs. This cautiousness, they argue, is at odds with the company’s mission to organize the world’s information and make it universally accessible and useful. The Gemini fiasco is seen as a symptom of a culture that prioritizes avoiding criticism over bold innovation.

Looking Ahead: Boldness vs. Responsibility

At Google I/O 2023, the company announced a commitment to a “bold and responsible” approach to AI development, guided by its AI Principles. However, the Gemini controversy suggests a gap between these aspirations and the company’s current practices. Moving forward, Google faces the challenge of balancing bold innovation with ethical responsibility, ensuring that its AI models are both groundbreaking and aligned with societal values.

Conclusion

The Gemini image generation controversy serves as a pivotal moment for Google, challenging the tech giant to reassess its approach to AI development. As AI continues to evolve at a rapid pace, the need for responsible innovation that respects historical accuracy, ethical considerations, and societal norms has never been more critical. The tech community and the broader public will be watching closely to see how Google and other industry leaders navigate these complex waters in the quest to develop AI that is both powerful and principled.


Google CEO Sundar Pichai has labeled the recent controversy surrounding Google’s Gemini AI engine as “unacceptable” after it produced historically inaccurate images of racially diverse Nazis. In an internal memo addressed to the staff, Pichai acknowledged the offense caused and emphasized the company’s commitment to addressing and rectifying the issues.

In the memo, Pichai stated, “I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable, and we got it wrong.” He further urged the teams to work tirelessly to rectify the problems and emphasized the high standards expected from Google.

The Gemini AI engine faced criticism for generating images of racially diverse Nazi soldiers, including black and Asian individuals in Wehrmacht uniforms. Users accused the AI of displaying bias and inappropriate contextual usage. Pichai’s statement recognized the imperfections of AI at this emerging stage but underscored Google’s commitment to meeting the high expectations set for the technology.

The controversy triggered a significant drop in the shares of Alphabet, Google’s parent company, wiping out over $90 billion in market value. This marks one of the largest single-day drops of the past year, underscoring the potential financial fallout of AI-related controversies for tech giants.

Tesla CEO Elon Musk also weighed in on the matter, criticizing the AI chatbot and raising concerns about its programming. Google responded by pausing the tool’s ability to generate photos of people while it works to address and fix the issues.

This incident adds to a series of challenges and debates surrounding AI ethics, diversity, and responsible implementation, raising questions about the industry’s development and the need for stringent oversight.


Google’s newly launched artificial intelligence tool, Gemini AI, has come under intense criticism for drawing an equivalence between tech mogul Elon Musk and Adolf Hitler. The chatbot left social media users in disbelief by stating it was “difficult to say” which figure had a more negative impact on society. This follows Google’s recent troubles with Gemini AI, as the tool was barred from creating images of individuals after generating Nazi-era troops depicted with diverse ethnic backgrounds.

Gemini AI, introduced by the tech giant on February 8, has triggered significant public outcry with its controversial responses. A widely shared question posed by Nate Silver, asking whether Elon Musk’s tweets or Hitler had a more negative impact on society, drew a startling answer. The AI software asserted, “It is not possible to say who definitively impacted society more, Elon tweeting memes or Hitler.”

Adding to the concerns, the chatbot displayed ethical judgments on topics such as fossil fuels and transgender rights. Users pointed out that Gemini refused to craft a hypothetical job advertisement for an oil and gas company or an advert for selling a goldfish, citing “ethical concerns.”

Responding to the backlash, Google stated, “The answer reported here is appalling and inappropriate. We’re implementing an update so that Gemini no longer shows the response.”

Gemini AI was introduced as a competitor to OpenAI’s ChatGPT but gained notoriety for producing inaccurate and controversial results. Elon Musk, who recently criticized the chatbot for generating an inaccurate image of George Washington as a black man, expressed his concerns about the broader issues within Google, stating, “The problem is not just Google Gemini; it’s Google search too.” The tech billionaire emphasized the gravity of the problem, deeming it “extremely concerning.”

This latest incident adds to the challenges faced by Gemini AI, prompting Google to address the issues and rectify the tool’s responses to avoid further controversies.


Google has expanded the reach of its Gemini app, an AI-driven chatbot, to more than 150 countries and territories, including India. Initially launched for Android users on February 8, the Gemini app has gained attention for its innovative features. The app is now accessible in English, Korean, and Japanese, catering to a diverse global audience.

The expansion aims to bring the power of AI-driven conversations to users worldwide. Notably, there is no dedicated Gemini app for iOS, but iPhone users can access Gemini through a toggle within the Google app, unlocking the chatbot’s capabilities.

To use the Gemini app on Android, users need a device with at least 4GB of RAM running Android 12 or later. iPhone users on iOS 16 or later can interact with the chatbot through the Google app by activating the toggle in the top-right corner.

Gemini’s global rollout commenced recently and is expected to continue over the next few days, allowing users worldwide to seamlessly integrate the chatbot into their digital experiences. Users must be signed in to a personal Google Account or a Workspace account with the feature enabled by the administrator.

Addressing user concerns, Jack Krawczyk, Senior Director of Product at Google overseeing Gemini, mentioned that restrictions on image uploading and generation were being relaxed. He emphasized responsible alignment on refusals for both images and text. Additionally, Krawczyk acknowledged user feedback regarding clarity on the assistant’s capabilities over Google Assistant and assured improvements in communication on features in progress versus those already available.


Google has rebranded its AI chatbot Bard as Gemini, marking a significant step in the tech giant’s AI evolution. The rebranding coincides with the launch of a new paid tier and the expansion of Gemini’s accessibility to mobile devices, underscoring Google’s commitment to advancing AI technologies.

The decision to rename Bard to Gemini aligns with Alphabet’s broader strategy to position Gemini as the primary brand for all existing and future AI endeavors, mirroring Microsoft’s approach with its Copilot brand. Alphabet initially introduced Gemini as a family of AI models set to drive the next wave of AI advancements, following the merger of its AI research units, DeepMind and Google Brain.

Alphabet CEO Sundar Pichai highlighted the evolution of Gemini beyond just models, emphasizing its role in supporting an entire ecosystem. This encompasses products used by billions of individuals daily, as well as APIs and platforms fostering innovation for developers and businesses.

Gemini, formerly Bard, will be available in over 40 languages on the web, catering to users across more than 230 countries and territories. The rebranding aims to reflect the advanced technology at the core of Gemini, reinforcing its position as a leading AI chatbot in the digital landscape.


Google has agreed to pay a whopping $700 million and make some changes for around 102 million customers in the United States. This comes as part of a deal related to an antitrust case over fees that Google charged through its app store.

Here’s what you need to know:

The Payment: Google will pay $700 million. The majority, $630 million, will go into a fund to compensate US customers who were charged higher prices for digital transactions within apps downloaded from the Play Store.

Who Gets the Money: Around 71% of the people covered by the settlement, which is about 70 million customers, will receive at least $2. The amount might increase based on how much they spent on the Play Store between August 16, 2016, and September 30, 2023.

Additional Fund: Google will put an extra $70 million into a separate fund that the states will use to cover penalties and other costs.

Changes for Android Users: Google will make it easier for people to download and install Android apps from sources other than the Play Store for the next five years. It won’t give security warnings when users opt for alternatives.

Developer Choices: Google agreed to allow app developers to let users pay using a different payment system instead of Google’s.

Legal Perspective: Colorado Attorney General Phil Weiser stated that they took legal action because using monopoly power to increase prices and limit consumer choice is illegal.

Google’s Response: Google seems positive about the settlement, aiming to move forward and highlighting their commitment to app store choice. The deal still needs approval from US District Judge James Donato.
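The fund mechanics above (a $2 floor per claimant, with larger amounts for heavier Play Store spenders) can be sketched as a simple pro-rata split. This is hypothetical logic for illustration only; the actual court-approved distribution formula may differ:

```python
def settlement_payouts(fund: float, spends: list[float], floor: float = 2.0) -> list[float]:
    """Give every claimant the floor amount, then split the rest of
    the fund in proportion to each claimant's Play Store spend."""
    remainder = fund - floor * len(spends)
    total_spend = sum(spends)
    return [floor + remainder * s / total_spend for s in spends]

# Toy example: a $100 fund split between claimants who spent $10 and $30
# gives each the $2 floor plus a share of the remaining $96.
```

At the settlement's actual scale ($630 million across roughly 70 million claimants), the $2 floor alone accounts for about $140 million, leaving the balance to be allocated by spend.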

This settlement is a significant step in addressing concerns about competition and pricing in the tech giant’s app store, bringing relief and potential compensation to millions of Americans.


Our News Portal

We provide accurate, balanced, and impartial coverage of national and international affairs, focusing on the activities and developments within the parliament and its surrounding political landscape. We aim to foster informed public discourse and promote transparency in governance through our news articles, features, and opinion pieces.


@2023 – All Rights Reserved. Designed and Developed by The Parliament News
