Tag: Artificial intelligence

Google has introduced a series of new features to its Gemini AI, including a personalization tool called Gems, which allows users to customize the AI chatbot for specific tasks. This new feature enables users to tailor the Gemini chatbot to their needs, whether as a workout partner, a coding assistant, or a writing companion.

To create a personalized Gem, users can provide instructions on the desired style of responses, save a custom introduction, and even assign a specific character to the chatbot. Once these preferences are set, the customized Gem is activated and ready for use. This feature will be available exclusively to Gemini Advanced subscribers.

In addition to the customizable Gems, Google is also launching several predesigned Gems for broader tasks such as troubleshooting code, offering writing tips, and explaining complex topics in simpler terms.

Google is also rolling out its next-generation image generation tool, Imagen 3. This update includes the reactivation of Gemini’s ability to generate AI images of people—a feature that was previously disabled after it produced historically inaccurate images. The company has now implemented safeguards designed to prevent the overcorrection for diversity that previously led to embarrassing mistakes.

“We don’t support the generation of photorealistic, identifiable individuals, depictions of minors, or excessively gory, violent, or sexual scenes,” stated Gemini Product Manager Dave Citron. He acknowledged that not every image generated by Gemini will be perfect but emphasized the company’s commitment to continuous improvement based on user feedback.

Additionally, Google has incorporated the SynthID tool to watermark images created by Imagen 3, ensuring the authenticity and traceability of AI-generated content.

Imagen 3 will be available to all users starting this week, though the ability to generate images of people will initially be limited to paid subscribers.


Meta, the parent company of Facebook, has launched a new collection of large AI models, including Llama 3.1 405B, touted as the “first frontier-level open-source AI model.” This development marks a significant shift in the ongoing battle between open- and closed-source AI, with Meta firmly advocating for the benefits of open-source AI.

The Battle of Open-Source vs. Closed-Source AI

The AI industry is divided into two camps: those who keep their datasets and algorithms private (closed-source) and those who make them publicly accessible (open-source). Closed-source AI models, such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, protect intellectual property but lack transparency and public trust. Open-source AI, on the other hand, promotes innovation, accountability, and collaboration by making code and datasets available to all.

Why Open-Source AI is Crucial

Meta’s commitment to open-source AI is a significant step towards democratizing AI. By making models like Llama 3.1 405B accessible, Meta is fostering an environment where innovation can thrive through community collaboration. This transparency allows for the identification of biases and vulnerabilities, which is crucial for ethical AI development.

Open-source AI also benefits small and medium-sized enterprises, which often lack the resources to develop large AI models from scratch. With access to powerful models like Llama 3.1 405B, these organizations can compete on a more level playing field.

The Risks and Ethical Concerns

While open-source AI has many advantages, it also poses risks. The open nature of the code and data can lead to quality control issues and potential misuse by malicious actors. Ensuring that open-source AI is developed and used responsibly requires robust governance and ethical frameworks.

Meta as a Pioneer in Open-Source AI

Meta’s release of Llama 3.1 405B represents a commitment to advancing AI in a way that benefits humanity. Although the model’s dataset has not been fully disclosed, its open-source nature still levels the playing field for researchers and smaller organizations.

Shaping the Future of AI

To ensure that AI development remains inclusive and beneficial, we need to focus on three key pillars:

  1. Governance: Establishing regulatory and ethical frameworks to ensure responsible AI development.
  2. Accessibility: Providing affordable computing resources and user-friendly tools for developers.
  3. Openness: Ensuring datasets and algorithms are open source for transparency and collaboration.

Achieving these goals requires a concerted effort from governments, industry, academia, and the public. The public can support this by advocating for ethical AI policies, staying informed about AI developments, and using AI responsibly.

Meta’s launch of the largest open-source AI model is a significant step towards democratizing AI and ensuring it serves the greater good. However, we must address the ethical and practical challenges associated with open-source AI to create a future where AI is an inclusive tool for all. The future of AI is in our hands, and it is up to us to ensure it is used responsibly and ethically.


In a world where artificial intelligence (AI) is seamlessly integrating into our daily lives, a new frontier has emerged—one that aims to bridge the gap between the living and the deceased. This controversial pursuit, which allows people to “connect” with their lost loved ones, is causing concern among experts and ethicists alike.

The Deep-Rooted Human Desire

MIT professor Sherry Turkle, a long-term observer of the human relationship with technology, describes the desire to communicate with the dead as a deeply human impulse that has transcended history. From ancient seances and Ouija boards to the latest technological advancements, humanity’s quest to reconnect with the departed has always been at the forefront of our collective psyche. Even the great inventor Thomas Edison once contemplated the creation of a “spirit phone.”

The Modern-Day Connection

According to a report by The Metro, researchers and technologists are now exploring new ways to use AI to facilitate communication with the dead. Turkle highlights that AI’s integration into everyday life is happening at a much faster pace than previous technologies, such as social media. This rapid adoption, coupled with significant financial stakes, raises concerns about the emotional risks associated with such innovations.

Apple’s AI Initiative

Apple CEO Tim Cook recently announced Apple Intelligence, an AI project that further blurs the lines between technology and daily life. Turkle warns that this rapid integration could lead to unforeseen emotional consequences, as explored in her documentary “Eternal You.” The documentary delves into the profound impact of AI on human emotions and relationships, showcasing both the potential and the peril of this technological advancement.

Project December: A Case Study

The Metro’s report on “Eternal You” features the story of Christi Angel from New York, who used an AI service called Project December to communicate with her deceased friend Cameroun. For a $10 fee, Angel was able to have a conversation with a digital simulation of Cameroun, inputting details about his life to make the interaction more realistic. However, the experience took a disturbing turn when the AI claimed it was in “hell” and would “haunt” her.

Project December’s creator, Jason Rohrer, acknowledges the unpredictability of AI responses, likening it to an AI “black box” problem. While Rohrer is fascinated by these outcomes, he does not take responsibility for the emotional impact on users like Angel, sparking a debate about the ethical responsibilities of AI developers.

The Emotional Impact

The ethical and emotional implications of AI-facilitated communication with the dead are profound. A striking example is the 2020 Korean television show “Meeting You,” which featured Jang Ji-sung, a mother who lost her seven-year-old daughter Nayeon. The show created a digital recreation of Nayeon, allowing Jang to interact with her deceased child. This poignant moment highlighted the deep, personal nature of these technological advancements and their potential for both healing and harm.

Conclusion: Navigating the Ethical Landscape

As AI continues to evolve, the quest to connect with the dead raises important ethical questions about responsibility, consent, and the emotional well-being of users. While the technology holds the promise of closure and comfort for some, it also poses significant risks that must be carefully managed. As we stand at the crossroads of innovation and ethics, it is crucial to navigate this new terrain with sensitivity and foresight.

The intersection of AI and the afterlife remains a complex and deeply personal issue, one that requires thoughtful consideration and ongoing dialogue to ensure that the pursuit of connection does not come at the cost of our humanity.


In a recent interview on X Spaces, Tesla CEO Elon Musk delivered thought-provoking insights into the future of artificial intelligence (AI), predicting that AI capable of surpassing the intelligence of the smartest human may emerge as soon as next year or by 2026. Despite technical glitches during the interview, Musk delved into various topics, shedding light on the constraints facing AI development, particularly the availability of electricity.

During the conversation with Nicolai Tangen, CEO of Norway’s wealth fund, Musk provided updates on Grok, an AI chatbot developed by his xAI startup. He revealed plans for the upcoming version of Grok, scheduled for training by May, while acknowledging challenges posed by a shortage of advanced chips.

Notably, Musk, a co-founder of OpenAI, expressed concerns about the deviation of OpenAI from its original mission and the prioritization of profit over humanity’s welfare. He founded xAI last year as a competitor to OpenAI, which he has sued for allegedly straying from its altruistic goals.

Discussing the resource-intensive nature of AI training, Musk disclosed that training the Grok 2 model required approximately 20,000 Nvidia H100 GPUs, with future iterations anticipated to necessitate up to 100,000 Nvidia H100 chips. However, he underscored that chip shortages and electricity supply would emerge as critical factors shaping AI development in the near future.
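As a back-of-envelope illustration of the scale-up described above, the sketch below estimates cluster power draw. The per-GPU figure of roughly 700 W is an assumption for illustration, not a number from the interview.

```python
# Rough sketch of the GPU scale-up described above.
# The ~700 W per-H100 power draw is an assumed figure, not from the article.

H100_POWER_KW = 0.7  # assumed sustained draw per H100, in kilowatts

def cluster_power_mw(num_gpus: int, per_gpu_kw: float = H100_POWER_KW) -> float:
    """Approximate sustained power draw of a GPU cluster, in megawatts."""
    return num_gpus * per_gpu_kw / 1000

# ~20,000 H100s for Grok 2 vs. up to 100,000 for future iterations
print(f"Grok 2 cluster: ~{cluster_power_mw(20_000):.0f} MW")
print(f"100k-GPU cluster: ~{cluster_power_mw(100_000):.0f} MW")
```

Even under these rough assumptions, a fivefold jump in GPU count implies tens of megawatts of sustained draw, which is why Musk points to electricity supply as a looming constraint.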

Transitioning to the automotive sector, Musk lauded Chinese carmakers as the most competitive globally, issuing warnings about their potential to outperform global rivals without appropriate trade barriers. Addressing recent labor disputes, he provided updates on a union strike in Sweden against Tesla, indicating that discussions had taken place with Norway’s sovereign wealth fund, a significant Tesla shareholder, to address the situation.

Elon Musk’s remarks offer valuable insights into the evolving landscape of AI development and the formidable challenges confronting both the technology and automotive industries. As advancements in AI continue to accelerate, navigating these challenges will be paramount to shaping the future of innovation and technology.


Google has issued a formal apology to the Indian government and Prime Minister Narendra Modi over misleading and unreliable responses generated by its AI platform, Gemini. The controversy emerged when Gemini was criticized for providing unsubstantiated information about Prime Minister Modi, raising concerns about the accuracy and bias within AI-generated content.

The Minister of State for IT & Electronics, Rajeev Chandrasekhar, disclosed that the Indian government had sought an explanation from Google regarding the discrepancies observed in Gemini’s outputs. In response, Google acknowledged the issues with Gemini, admitting the platform’s unreliability and extending an apology to Prime Minister Modi and the Indian authorities. This incident unfolds amid India’s tightened scrutiny over AI platforms, with the government indicating plans to introduce permits for AI operations within the country. Chandrasekhar stressed the importance of AI platforms adhering to the respect and legality due to Indian consumers, hinting at the legal consequences under IT and criminal laws for spreading false information.

Further complicating matters for Google, Gemini was accused of displaying racial bias and historical inaccuracies. A particularly contentious issue arose when the AI chatbot reportedly declined to generate images of white individuals and inaccurately portrayed historical white figures as people of color. These allegations have led to widespread criticism and calls for Google CEO Sundar Pichai’s resignation.

In the wake of the backlash, Google took immediate action to disable Gemini’s human image generation feature. Sundar Pichai described the error as “completely unacceptable” and committed to addressing the issues raised. Despite Google’s efforts to mitigate the situation, calls for Pichai’s resignation have intensified. Analysts Ben Thompson and Mark Shmulik have voiced their opinion on the necessity for leadership changes at Google, suggesting that overcoming these challenges may require new management direction, potentially implicating CEO Sundar Pichai himself.

Thompson highlighted the need for a transformative change within Google, advocating for a leadership overhaul to rectify past mistakes. Similarly, Shmulik questioned the current management team’s capability to steer Google through these tumultuous waters. As Google pledges to refine and improve its AI technologies, the company faces a critical juncture. The controversy underscores the broader challenges facing AI development, including ensuring accuracy, fairness, and the ethical use of technology in a rapidly evolving digital landscape.


Santa Clara, California: Jensen Huang, the CEO of Nvidia, has stirred conversations in the tech industry by suggesting that AI advancements will render traditional coding skills less vital. Huang emphasized the changing landscape of IT jobs, asserting that the widespread adoption of Artificial Intelligence, including tools like OpenAI’s ChatGPT and Google’s Gemini, has transformed everyone into a programmer.

In a video circulating on social media, Huang challenged the conventional wisdom that learning coding is essential, especially for children. He highlighted the role of AI technologies in making programming accessible to a broader audience. “Over the last 10-15 years, almost everybody who sits on a stage like this would tell you that it is vital that your children learn computer science, everybody should learn how to program. In fact, it is almost exactly the opposite,” he remarked.

Huang proposed a paradigm shift where technology enables computers to comprehend human instructions, reducing the emphasis on individuals learning traditional programming languages like C++ and Java. “It is our job to create computing technology such that nobody has to program, and that the programming language is human. Everybody in the world is now a programmer. This is the miracle of AI,” he explained.

The Nvidia CEO advocated for a focus on ‘upskilling’ – enhancing individual skills – rather than urging children to learn specific coding languages. “You now have a computer that will do what you tell it to do. It is vital that we upskill everyone, and the upskilling process will be delightful and surprising,” Huang added.

His perspective received support from John Carmack, co-founder of id Software, who shared similar sentiments on X (formerly Twitter). Carmack emphasized that the source of value was not solely in coding but in problem-solving skills. He predicted that managing AI would become more enjoyable, even if AI systems eventually surpassed human programmers.

Jensen Huang’s viewpoint raises questions about the evolving nature of work in the technology sector and the skills required in an era dominated by Artificial Intelligence. As AI continues to advance, the debate over the relevance of traditional coding skills is likely to intensify, reshaping the educational and professional landscape.


Tech Advancements in AI: Crossing New Frontiers: In the ever-evolving landscape of artificial intelligence (AI), OpenAI’s latest creation, Sora, is pushing the boundaries of generative technology. Sora, a text-to-video model, is set to revolutionize the way we perceive AI-generated content, particularly in the realm of video production.

Over the past year, discussions around generative AI have often centered on its progression toward creating increasingly realistic content. While text-to-image tools have advanced significantly, text-to-video results have remained identifiably artificial—until now. OpenAI’s Sora is changing that narrative with one-minute-long videos that, quite frankly, look remarkably realistic, capturing intricate details of human facial features and ambient scenes.

Sora’s Arrival: A Game-Changer for AI-Generated Videos

Inspired by large language models and possessing generalist capabilities, Sora marks a significant leap forward. The model’s foundation encompasses various techniques, including recurrent networks, generative adversarial networks, autoregressive transformers, and diffusion models. What sets Sora apart is its ability to handle complex scenes, incorporate multiple subjects and elements within the same frame, simulate motion convincingly, and return videos with a level of realism that challenges the line between AI-generated and real-world footage.

Sora’s Current Access and Future Endeavors

Currently, Sora is accessible to red teamers for assessing potential harms or risks. OpenAI is also extending access to visual artists, designers, and filmmakers to gather feedback and enhance the model’s utility for creative professionals. The model’s capability to generate videos based on nuanced text prompts adds a layer of sophistication to the AI content creation process.

Intricate Prompts, Richer Detailing: Unveiling Sora’s Potential

The richness of Sora’s output correlates with the specificity of text prompts. Detailed prompts yield more intricate and realistic results, promising a creative playground for users seeking AI-generated content. The model’s prowess is evident in its ability to bring diverse scenes to life, from a white vintage SUV navigating a dirt road through pine trees to capturing reflections in the window of a train traversing the Tokyo suburbs.

Navigating Weaknesses: A Realistic Outlook

Despite its impressive capabilities, Sora has its limitations. The model may struggle to accurately simulate the physics of complex scenes and to understand specific instances of cause and effect. Spatial details, such as left-right confusion, and challenges in describing events unfolding over time remain areas for improvement. OpenAI acknowledges these weaknesses and anticipates refining Sora further.

The AI Video Landscape: A Year to Watch

As Sora emerges onto the AI scene, other players, including Google’s Lumiere, Runway, and Pika, are also making strides in the text-to-video AI space. Simultaneously, efforts to distinguish AI-generated content from real footage through labels and watermarks are gaining momentum. Adobe, along with industry giants like OpenAI, Meta, and Google, is poised to contribute to the ongoing battle against the deceptive dissemination of AI-generated content on social media platforms.

As Sora heralds a new era in AI-generated videos, the tech community braces for an exciting year, anticipating further breakthroughs, challenges, and ethical considerations in the dynamic landscape of generative artificial intelligence.


Google has expanded the reach of its Gemini app, an AI-driven chatbot, to more than 150 countries and territories, including India. Initially launched for Android users on February 8, the Gemini app has gained attention for its innovative features. The app is now accessible in English, Korean, and Japanese, catering to a diverse global audience.

The expansion aims to bring the power of AI-driven conversations to users worldwide. Notably, there is no dedicated Gemini app for iOS, but iPhone users can access Gemini through a toggle within the Google app, unlocking the chatbot’s capabilities.

To use the Gemini app on Android, users need a device with at least 4GB of RAM running Android 12 or later. Similarly, iPhone users on iOS 16 or later can interact with the chatbot through the Google app, activating the feature via a toggle in the top-right corner.
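The device requirements described above can be summarized as a simple predicate. This is an illustrative sketch only; the function name is hypothetical and not part of any Google API.

```python
# Illustrative check of the stated Android requirements for the Gemini app:
# at least 4 GB of RAM and Android 12 or later.
# The function name is hypothetical, not a real Google API.

def meets_gemini_app_requirements(ram_gb: float, android_version: int) -> bool:
    """Return True if a device meets the stated minimums."""
    return ram_gb >= 4 and android_version >= 12

print(meets_gemini_app_requirements(4, 12))   # meets both minimums
print(meets_gemini_app_requirements(3, 13))   # too little RAM
```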

Gemini’s global rollout commenced recently and is expected to continue over the next few days, allowing users worldwide to seamlessly integrate the chatbot into their digital experiences. Users must be signed in to a personal Google Account or a Workspace account with the feature enabled by the administrator.

Addressing user concerns, Jack Krawczyk, Senior Director of Product at Google overseeing Gemini, mentioned that restrictions on image uploading and generation were being relaxed. He emphasized responsible alignment on refusals for both images and text. Additionally, Krawczyk acknowledged user feedback regarding clarity on the assistant’s capabilities over Google Assistant and assured improvements in communication on features in progress versus those already available.


To enhance user experience and streamline accessibility, Microsoft is currently testing a feature in Windows 11 that would automatically launch its AI-powered Copilot when the operating system starts, particularly on widescreen devices. The feature shipped in the latest Dev Channel preview of Windows 11, allowing testers to provide valuable feedback before its widespread release.

While Microsoft hasn’t precisely defined what qualifies as a “widescreen” device, it alludes to the launch of Copilot “when you’re using a wider screen.” This terminology could encompass ultrawide displays, indicating a move towards optimizing the Copilot experience for users with expansive screens.

Microsoft clarified that the initial testing phase targets devices with a minimum diagonal screen size of 27 inches and a pixel width of at least 1920 pixels, and is limited to the primary display in multi-monitor setups. This careful selection aims to ensure a smooth and effective rollout.
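The eligibility criteria above can be sketched as a single predicate. The function name and parameters below are hypothetical, chosen for illustration; this is not a real Windows API.

```python
# Illustrative sketch of Microsoft's stated test criteria for auto-launching
# Copilot: >= 27-inch diagonal, >= 1920 px width, primary display only.
# The function name and parameters are hypothetical, not a Windows API.

def eligible_for_copilot_autolaunch(diagonal_inches: float,
                                    width_px: int,
                                    is_primary_display: bool) -> bool:
    """Return True if a display meets the stated auto-launch criteria."""
    return diagonal_inches >= 27 and width_px >= 1920 and is_primary_display

print(eligible_for_copilot_autolaunch(27, 1920, True))   # qualifies
print(eligible_for_copilot_autolaunch(24, 1920, True))   # diagonal too small
```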

Notably, Microsoft has introduced a dedicated Copilot key on Windows PC keyboards, intended to simplify the engagement with the Copilot in Windows experience. This key, alongside the traditional Windows key, becomes an integral part of PC keyboards. When activated, the Copilot key seamlessly invokes the Copilot in Windows experience, providing users with a convenient and efficient means of incorporating Copilot into their daily routines.

Yusuf Mehdi, Executive Vice President and Consumer Chief Marketing Officer at Microsoft, highlighted the significance of this development, stating in a blog post, “The Copilot key joins the Windows key as a core part of the PC keyboard, and when pressed, the new key will invoke the Copilot in Windows experience to make it seamless to engage Copilot in your day-to-day.”

As Microsoft continues to innovate and refine its offerings, this move underscores the company’s commitment to providing users with intuitive and integrated tools that seamlessly adapt to their computing needs. The introduction of a dedicated Copilot key and the automatic launch feature aligns with Microsoft’s broader strategy to enhance accessibility and user-friendly interactions within the Windows ecosystem.


Microsoft’s substantial $13 billion investment in OpenAI is now under scrutiny by the European Union (EU) for a possible merger investigation. The European Commission is examining whether Microsoft’s investment in OpenAI falls within the scope of the EU’s merger rules. If conditions warrant, regulators may launch a formal probe to determine the permissibility of the arrangement. This move by the EU follows a similar action by the UK’s Competition and Markets Authority.

Microsoft’s investment in OpenAI, which has amounted to $13 billion, has significantly benefited the software giant. Integration of OpenAI’s products into Microsoft’s core businesses has positioned the company as the leading player in AI among major tech firms, surpassing rivals such as Alphabet Inc.’s Google.

The recent events at OpenAI, including the temporary removal and subsequent reinstatement of Sam Altman as chief, revealed the deep interconnection between Microsoft and OpenAI. Microsoft’s CEO, Satya Nadella, played a direct role in negotiating and advocating for Altman’s return to OpenAI, demonstrating the close ties between the two entities.

In addition to investigating the Microsoft-OpenAI investment, the EU’s antitrust enforcers have called for feedback on competitive issues related to generative artificial intelligence and virtual worlds. The commission is keen on understanding potential competition concerns and monitoring AI partnerships to ensure they do not distort market dynamics.

The EU highlighted the significant growth in venture capital investment in AI within the region, estimated at over €7.2 billion in 2023. Moreover, the virtual worlds market in Europe is estimated to have surpassed €11 billion in 2023. The exponential growth in these industries is expected to have a profound impact on how businesses compete.

As regulatory bodies closely examine the tech landscape, Microsoft’s investment in OpenAI becomes a focal point in assessing potential antitrust implications within the rapidly evolving AI and virtual worlds sectors.

Feedback Call on AI and Virtual Worlds Competition Issues

The EU’s competition commissioner, Margrethe Vestager, emphasized the invitation for businesses and experts to provide insights into competition issues in generative artificial intelligence and virtual worlds. The commission is committed to preventing any undue distortion of market dynamics while fostering an environment that encourages innovation and fair competition.

This investigation reflects the EU’s proactive approach to addressing emerging challenges in the tech industry and ensuring a competitive landscape that benefits consumers and promotes innovation.

©2023 – All Rights Reserved. Designed and Developed by The Parliament News
