Tag: AI

Baidu, the Chinese tech giant, has unveiled remarkable statistics for its AI-powered chatbot, “Ernie Bot,” showcasing its escalating popularity and market penetration. CEO Robin Li disclosed that Ernie Bot has now amassed over 200 million users, doubling its user base from just a few months ago.

Li also highlighted heavy use of Ernie Bot’s application programming interface (API), which now handles roughly 200 million calls a day, underscoring the substantial demand for the chatbot’s services across a wide range of tasks. Additionally, Ernie Bot has built a significant presence in the enterprise sector, with 85,000 enterprise clients.

The announcement comes amidst Baidu’s strategic initiatives to monetize Ernie Bot, with revenue generation efforts already underway. In the fourth quarter alone, Baidu capitalized on AI-driven advancements to enhance its advertising solutions, resulting in substantial earnings amounting to several hundred million yuan. Moreover, the company has extended support to other enterprises in building their AI models, further consolidating Ernie Bot’s position in the market.

Ernie Bot, introduced last March as one of China’s pioneering generative AI chatbots, received official approval for public release in August. Notably, China mandates regulatory approval for the deployment of generative AI services, distinguishing it from many other jurisdictions.

Despite Ernie Bot’s impressive growth, it faces competition from domestic rivals, particularly Moonshot AI’s “Kimi” chatbot, backed by Alibaba. Kimi has exhibited rapid expansion, narrowing the gap with Ernie Bot. Recent data indicates a surge in Kimi’s user visits, with a remarkable 321.6 percent increase in March compared to the previous month.

However, on a global scale, Chinese generative AI services still trail behind their Western counterparts. OpenAI’s ChatGPT remains the world leader in this domain, with a staggering total traffic of 1.86 billion views last month.

China’s intensified focus on AI innovation is evident in its accelerated approvals for AI services, underlining its commitment to compete with the United States in the tech sphere. With 117 large AI models receiving approvals thus far, China continues to position itself as a formidable contender in the global AI landscape.


Meta, the parent company of popular social media platforms like Facebook, Instagram, and WhatsApp, has quietly introduced its AI-powered chatbot on WhatsApp, Instagram, and Messenger in India and several parts of Africa. This feature is gradually rolling out for both iOS and Android users, potentially powered by Llama 2 or the upcoming Llama 3 AI models.

Users can access the chatbot through the top search bar in the WhatsApp user interface. Interestingly, the design of the chatbot closely resembles that of Perplexity AI, as noted by Aravind Srinivas, the CEO of Perplexity AI, in a post on X. Despite the similarity in appearance, the integration operates independently, preserving the privacy of personal conversations on WhatsApp. User interactions with the search bar remain confidential and are not shared with Meta AI unless explicitly directed to the chatbot.

Meta AI suggests topics through the search bar or conversation, utilizing randomly generated suggestions that do not rely on user-specific information. The search bar retains its primary function, enabling users to search for chats, messages, media, and contacts within the app. Users can continue to search their conversations for specific content without engaging with Meta AI, preserving ease of use and privacy.

Moreover, personal messages and calls on WhatsApp remain end-to-end encrypted, ensuring that neither WhatsApp nor Meta can access them, even with the integration of Meta AI.

Meta’s expansion of AI initiatives follows the advancements made by prominent tech companies like OpenAI. After piloting its AI chatbot in markets such as the U.S., Meta is now extending testing to India, its largest market with over 500 million Facebook and WhatsApp users.

In addition, Meta has confirmed plans to release its next AI model, Llama 3, within the current month, indicating the company’s commitment to advancing AI technology and improving user experiences across its platforms.


In a bid to democratize video creation and storytelling, Google unveiled its latest innovation on Tuesday: Google Vids. This groundbreaking app harnesses the power of artificial intelligence (AI) to empower users of all skill levels to become proficient storytellers through video content.

Google Vids, described as an AI-based standalone application, aims to revolutionize the video creation process, making it accessible to everyone, regardless of their expertise in video editing. Set to be officially launched in June as part of Workspace Labs, this innovative tool promises to simplify the complex task of video production.

The core functionality of Google Vids revolves around its AI-driven capabilities, which enable users to craft compelling videos using a variety of elements, including stock images and background music. By leveraging AI, users can seamlessly create visually engaging content without the need for intricate editing skills.

One of the standout features of Google Vids is its intuitive storyboard-like interface, which streamlines the editing process. Users can effortlessly manipulate different components of their videos within the storyboard framework, allowing for seamless customization until the desired final look is achieved.

Google’s announcement of Google Vids coincides with the rollout of several other updates for Google Workspace, further solidifying the company’s commitment to innovation and productivity enhancement. Among the notable additions are an AI-based security add-on, a new table feature for Google Sheets, and a translation tool for Google Meet, catering to diverse user needs within the Workspace ecosystem.

In a move to enhance the user experience beyond productivity tools, Google has also integrated AI into its shopping tool, enabling users to discover new styles and find specific clothing items through image cues. This expansion into the realm of fashion demonstrates Google’s versatility in leveraging AI to address various consumer interests and preferences.

With Google Vids and its accompanying updates, the tech giant continues its push to deliver innovative solutions that cater to the evolving needs of its user base. Google Vids is poised to redefine the landscape of video creation, ushering in a new era of creativity and storytelling powered by AI technology.


In a recent interview on X Spaces, Tesla CEO Elon Musk delivered thought-provoking insights into the future of artificial intelligence (AI), predicting that AI capable of surpassing the intelligence of the smartest human may emerge as soon as next year or by 2026. Despite encountering technical glitches during the interview, Musk delved into various topics, shedding light on the constraints facing AI development, particularly related to electricity availability.

During the conversation with Nicolai Tangen, CEO of Norway’s sovereign wealth fund, Musk provided updates on Grok, an AI chatbot developed by his xAI startup. He revealed plans for the upcoming version of Grok, scheduled for training by May, while acknowledging challenges posed by a shortage of advanced chips.

Notably, Musk, a co-founder of OpenAI, expressed concerns about the deviation of OpenAI from its original mission and the prioritization of profit over humanity’s welfare. He founded xAI last year as a competitor to OpenAI, which he has sued for allegedly straying from its altruistic goals.

Discussing the resource-intensive nature of AI training, Musk disclosed that training the Grok 2 model required approximately 20,000 Nvidia H100 GPUs, with future iterations anticipated to necessitate up to 100,000 Nvidia H100 chips. However, he underscored that chip shortages and electricity supply would emerge as critical factors shaping AI development in the near future.

Turning to the automotive sector, Musk lauded Chinese carmakers as the most competitive globally, warning that, without appropriate trade barriers, they could outperform their global rivals. On recent labor disputes, he provided an update on the union strike against Tesla in Sweden, noting that discussions had taken place with Norway’s sovereign wealth fund, a significant Tesla shareholder, to address the situation.

Elon Musk’s remarks offer valuable insights into the evolving landscape of AI development and the formidable challenges confronting both the technology and automotive industries. As advancements in AI continue to accelerate, navigating these challenges will be paramount to shaping the future of innovation and technology.


OpenAI has come under fire for allegedly transcribing over a million hours of YouTube videos to train its latest large language model, GPT-4. A report by The New York Times sheds light on the lengths to which major players in the AI field will go to access high-quality training data amid growing concerns over copyright infringement and ethical boundaries.

According to The New York Times, OpenAI developed its Whisper audio transcription model as a workaround to acquire the necessary data, despite the questionable legality of the endeavor. The company’s president, Greg Brockman, was reportedly involved in collecting videos for transcription, banking on the notion of “fair use” to justify their actions.
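Whisper itself has since been released as an open-source package, and transcribing an audio file with it takes only a few lines. The sketch below uses the public openai-whisper Python package purely as an illustration of audio-to-text transcription; the file name and model size are placeholders, and this is not a reconstruction of OpenAI’s internal pipeline.

```python
# Minimal transcription sketch using the open-source "openai-whisper" package
# (pip install openai-whisper). File name and model size are placeholders.
import whisper

model = whisper.load_model("base")             # downloads model weights on first use
result = model.transcribe("video_audio.mp3")   # returns a dict with "text" and "segments"
print(result["text"])
```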

Responding to the allegations, OpenAI spokesperson Lindsay Held emphasized the company’s commitment to curating unique datasets for its models while exploring various data sources, including publicly available data and partnerships. The company is also considering generating synthetic data to supplement its training efforts.

Google, another major player in the AI landscape, has also faced scrutiny for its data-gathering practices. While Google denies any unauthorized scraping or downloading of YouTube content, reports suggest that the company has trained its models using transcripts from YouTube videos, albeit in accordance with its agreements with content creators.

Meta, formerly known as Facebook, encountered similar challenges in accessing quality training data, leading its AI team to explore potentially unauthorized use of copyrighted works. The company reportedly considered drastic measures, including purchasing book licenses or acquiring a large publisher, to address the data scarcity issue.

The broader AI training community is grappling with the looming shortage of training data, which is essential for improving model performance. While some propose innovative solutions like training models on synthetic data or employing curriculum learning techniques, the reliance on unauthorized data usage remains a contentious issue, fraught with legal and ethical implications.

As AI continues to advance, the debate surrounding data access and usage rights is expected to intensify, underscoring the need for clearer regulations and ethical guidelines in the field of artificial intelligence.

The revelations from The New York Times investigation shed light on the complex ethical and legal dilemmas faced by AI companies as they navigate the intricate landscape of data acquisition and model training.


A recent survey conducted by staffing firm Adecco Group has unveiled a concerning trend in the corporate world: a significant number of executives are anticipating workforce reductions within the next five years due to the increasing adoption of artificial intelligence (AI).

According to the survey, a staggering 41% of executives at large companies worldwide are expecting to decrease their workforce as a result of AI implementation. This revelation comes amidst the rapid advancement and widespread adoption of generative AI technology, capable of creating realistic text, images, and videos. While some view AI as a tool to streamline processes and eliminate repetitive tasks, others fear its potential to render entire job roles obsolete.

Denis Machuel, CEO of Adecco Group, emphasized the dual nature of AI’s impact on employment. “AI can be a job killer, and it can also be a job creator,” Machuel stated. He noted that while there is a historical precedent of digital technologies creating new job opportunities, the disruptive nature of AI poses significant challenges.

The survey encompassed executives from 18 industries across nine countries, representing both white-collar and blue-collar sectors. Interestingly, the findings diverge from a previous World Economic Forum poll, where half of the companies believed AI would lead to job creation rather than elimination.

Recent layoffs in the tech industry further underscore these concerns. Companies such as Google and Microsoft have shifted their focus towards AI-driven technologies like ChatGPT and Gemini, resulting in workforce reductions. Even non-tech firms like Dropbox and Duolingo have cited AI adoption as a contributing factor to downsizing efforts.

Economists at Goldman Sachs have previously warned that the widespread adoption of generative AI could potentially impact up to 300 million jobs globally, particularly affecting white-collar workers. The results of the Adecco survey suggest that this prediction may materialize within the next five years, highlighting the urgent need for proactive measures to address the evolving landscape of employment in the age of AI.


OpenAI has announced a significant move in making its ChatGPT generative AI chatbot accessible to everyone without the need for an account. This decision aims to democratize access to AI technology, enabling curious individuals to explore its capabilities freely.

The Microsoft-backed startup revealed that ChatGPT can simulate human conversation and perform various tasks, including creating summaries, writing poetry, and generating ideas for theme parties. By removing the sign-up requirement, OpenAI intends to cater to a broader audience interested in experiencing AI firsthand.

This strategic shift comes amidst a reported slowdown in ChatGPT’s user growth since May 2023, as indicated by data analytics firm Similarweb. In response, OpenAI seeks to reinvigorate interest in its AI offerings by eliminating barriers to entry.

To address concerns about potential misuse, OpenAI has implemented additional content safeguards for users accessing ChatGPT without signing up. These safeguards include blocking prompts and generations in unspecified categories. Moreover, the company offers paid versions of ChatGPT for individuals, teams, and enterprises, ensuring advanced features and enhanced security measures.

OpenAI clarified that user-generated content may be utilized to enhance its large language models, although users have the option to opt out of this feature. Notably, the decision to make ChatGPT accessible without an account appears unrelated to Elon Musk’s recent lawsuit against OpenAI and its CEO, Sam Altman. Musk alleged that the company deviated from its original mission of developing AI for humanity’s benefit.

Despite the lawsuit, OpenAI continues to introduce new AI-driven products, such as the AI voice cloning service Voice Engine and the video creation platform Sora, albeit with limited access. This move underscores OpenAI’s commitment to advancing AI technology while maintaining transparency and user safety.

As OpenAI gradually rolls out the feature, individuals eager to explore the capabilities of AI can now do so effortlessly, ushering in a new era of accessibility and exploration in artificial intelligence.


Hume AI has introduced the Empathic Voice Interface (EVI), a conversational AI system imbued with emotional intelligence. EVI stands out by understanding users’ tone of voice, enriching interactions with human-like responses tailored to individual emotional states.

Designed to mimic human conversational nuances, EVI leverages state-of-the-art technology to comprehend and generate expressive speech, honed through extensive training on millions of human dialogues. Developers can seamlessly integrate EVI into various applications using Hume’s API, promising a unique and immersive voice interface experience.
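As an illustration only, the sketch below shows what a streaming voice integration of this kind might look like in Python over a WebSocket connection. The endpoint URL, message types, and JSON schema are assumptions made for the example and are not taken from Hume’s documentation; developers should consult the official EVI API reference for the actual interface.

```python
# Hypothetical sketch of a voice-interface integration over WebSocket.
# The endpoint, file name, and message schema below are assumptions for
# illustration only and do not reflect Hume's actual EVI API.
import asyncio
import base64
import json

import websockets  # pip install websockets

ENDPOINT = "wss://api.example.com/evi/chat"   # placeholder URL, not Hume's real endpoint
AUDIO_FILE = "hello.wav"                      # placeholder input clip


async def talk_to_voice_interface() -> None:
    async with websockets.connect(ENDPOINT) as ws:
        # Send one chunk of recorded audio as a base64-encoded JSON message (assumed schema).
        with open(AUDIO_FILE, "rb") as f:
            audio_b64 = base64.b64encode(f.read()).decode()
        await ws.send(json.dumps({"type": "audio_input", "data": audio_b64}))

        # Read responses until the server signals the end of its turn (message types are assumed).
        async for raw in ws:
            message = json.loads(raw)
            if message.get("type") == "assistant_message":
                print("Assistant:", message.get("text", ""))
            elif message.get("type") == "assistant_end":
                break


asyncio.run(talk_to_voice_interface())
```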

Features of EVI:

  1. Human-Like Tone: EVI responds with tones akin to human expressions, enhancing the conversational experience.
  2. Responsive Language: It adapts its language based on users’ expressions, effectively addressing their needs.
  3. State-of-the-Art Detection: EVI accurately detects the end of conversation turns using users’ tone, ensuring seamless interactions.
  4. Interruption Handling: EVI stops when interrupted and seamlessly resumes from where it left off.
  5. Self-Improvement: EVI continuously learns from user reactions to enhance user satisfaction over time.

Beyond its empathic features, EVI offers fast, reliable transcription and text-to-speech capabilities, making it versatile and adaptable to various scenarios. It integrates seamlessly with any large language model (LLM), further enhancing its flexibility and utility.

EVI is slated to be publicly available in April, providing developers with an innovative tool to create immersive and empathetic voice interfaces. Developers keen on early access to the EVI API can express their interest by filling out the form on the EVI waitlist.

Established in 2021, Hume is a research lab and technology company dedicated to ensuring that artificial intelligence serves human goals and emotional well-being. Founded by Alan Cowen, a former researcher at Google AI, Hume raised $50 million in Series B funding from prominent investors including EQT Group, Union Square Ventures, and Comcast Ventures.

In a LinkedIn post, Cowen highlighted the significance of voice interfaces, emphasizing their efficiency and ability to convey nuanced information. He underscored EVI’s emotional intelligence as a key differentiator, enabling it to understand and respond to users’ voices beyond mere words.

OpenAI’s Voice Engine and Future Plans

In parallel, OpenAI is developing its Voice Engine, which combines speech and voice recognition, voice-command processing, and conversion between text and speech, with the aim of making interactions driven by natural language prompts more seamless.

Moreover, OpenAI is working on GPT-5, emphasizing multimodality to process video input and generate new videos. With a focus on customization and personalization, GPT-5 aims to leverage user data to enhance user experiences across various applications.

Last year, OpenAI launched the ChatGPT Voice feature, enabling back-and-forth conversations with diverse voices on Android and iOS platforms. The recent partnership with Figure AI underscores OpenAI’s commitment to advancing generative AI-powered humanoids, furthering the integration of AI into daily human interactions.

The Importance of Emotional Intelligence in Conversational AI

Experts underscore the significance of emotional intelligence in conversational AI, emphasizing its role in enhancing user experiences and driving commercial success. The integration of emotional understanding in chatbots holds promise for businesses, offering personalized and empathetic interactions that resonate with users.

Conversational AI systems like EVI and advancements by OpenAI represent significant strides towards realizing the future of empathetic and intuitive human-machine interactions.


As excitement builds for the launch of Google’s upcoming flagship smartphone, the Pixel 9, fresh leaks have emerged, providing insights into its potential specifications and design elements. Recent renders shared by 91 Mobiles offer a tantalizing glimpse into what consumers can anticipate from the highly awaited device.

The leaked images were initially mistaken for renders of the Pixel 9 and Pixel 9 Pro, but recent hints suggest they actually showcase the Pixel 9 Pro alongside its larger counterpart, the Pixel 9 XL. This revelation adds an intriguing twist to the anticipated lineup.

The latest renders showcase the vanilla Pixel 9 in an elegant black color variant. The device features rounded corners and a flat-screen design, boasting a 6.03-inch display. Positioned neatly on the right side of the frame are the power button and volume keys, ensuring ergonomic usability.

Reported dimensions suggest the Pixel 9 will measure approximately 152.8 x 71.9 x 8.5mm, thickening to around 12mm at the rear camera bump. These figures point to a compact yet substantial form factor, promising comfortable handling for users.

Among the most captivating rumored features of the Pixel 9 is the introduction of Adaptive Touch technology. Camera enthusiasts will be delighted by rumors indicating the inclusion of a telephoto lens, elevating the Pixel 9’s photography capabilities. Google’s commitment to harnessing AI to enhance camera performance reaffirms its dedication to delivering top-tier mobile photography experiences.

Under the hood, the Pixel 9 is expected to be powered by the latest Tensor G4 chip, succeeding the G3. While initial speculation hinted at the debut of a brand-new custom chip, recent reports suggest that such plans may be postponed until the release of the Pixel 10 in 2025. Software-wise, the Pixel 9 is poised to ship with Android 15, ensuring a seamless and up-to-date user experience. Additionally, Google is anticipated to integrate even more AI features, including a new assistant named “Pixie” powered by the Gemini AI model, as per several media reports.

As anticipation mounts for the official unveiling of the Pixel 9 series, consumers eagerly await further updates and announcements from Google, eager to experience the innovative features and enhancements promised by the tech giant. Stay tuned for more details as the launch date draws near.


At the Adobe Summit, the largest digital experience conference worldwide, Adobe and Microsoft unveiled an innovative partnership aimed at revolutionizing the way marketers work. The collaboration brings together Adobe Experience Cloud workflows and insights with Microsoft Copilot for Microsoft 365, offering marketers powerful generative AI capabilities to enhance collaboration, efficiency, and creativity.

The announcement, made on March 26, 2024, signifies a significant step towards breaking down application and data silos, enabling marketers to seamlessly manage everyday workflows within Microsoft 365 applications such as Outlook, Microsoft Teams, and Word. By integrating relevant marketing insights and workflows from Adobe Experience Cloud applications and Microsoft Dynamics 365 into Microsoft Copilot, marketers can streamline tasks ranging from creative brief development to content creation, approvals management, and campaign execution.

Amit Ahuja, Senior Vice President of Digital Experience Business at Adobe, emphasized the growing demand for personalized content across various digital channels and the need for marketers to drive greater efficiency in their daily work. He highlighted the unique offering provided by the partnership, which enables marketing teams to streamline tasks across planning, collaboration, content creation, and campaign execution.

Jared Spataro, Corporate Vice President of AI at Work at Microsoft, echoed Ahuja’s sentiments, emphasizing the shared goal of empowering marketers to focus on creating impactful campaigns and enhancing customer experiences. By integrating contextual marketing insights from Adobe Experience Cloud applications and Dynamics 365 within the workflow through Copilot for Microsoft 365, the partnership aims to help marketers overcome challenges associated with working in silos and different applications.

The collaboration addresses the complexity of the marketing discipline, which requires specialized tools and involves working across multiple teams internally and externally. According to a recent survey conducted by Microsoft, 43 percent of marketing and communications professionals reported that switching between digital applications and programs disrupted their creativity.

The integrated capabilities will initially focus on addressing the needs of marketers who manage campaign goals, status, and actions across multiple teams. These capabilities include strategic insights in the flow of work, creating campaign briefs, presentations, website updates, and emails with relevant context, and keeping projects moving with in-context notifications and summaries.

With Adobe and Microsoft joining forces to address the challenges faced by marketers, the collaboration is poised to usher in a new era of efficiency, collaboration, and creativity in the marketing landscape. As marketers embrace these innovative capabilities, they can expect to streamline their workflows, break down barriers, and deliver exceptional results for their organizations.
