
Meta, the parent company of Facebook, Instagram, and WhatsApp, has quietly introduced its AI-powered chatbot on WhatsApp, Instagram, and Messenger in India and several parts of Africa. The feature is gradually rolling out for both iOS and Android users and is potentially powered by Llama 2 or the upcoming Llama 3 AI models.

Users can access the chatbot through the top search bar in the WhatsApp user interface. Interestingly, the design of the chatbot closely resembles that of Perplexity AI, as noted by Aravind Srinivas, the CEO of Perplexity AI, in a post on X. Despite the similarity in appearance, the integration operates independently, ensuring the privacy of private conversations on WhatsApp. User interactions with the search bar remain confidential and are not shared with Meta AI unless explicitly directed to the chatbot.

Meta AI suggests topics through the search bar or conversation, utilizing randomly generated suggestions that do not rely on user-specific information. The search bar retains its primary function, enabling users to search for chats, messages, media, and contacts within the app. Users can continue to search their conversations for specific content without engaging with Meta AI, preserving ease of use and privacy.

Moreover, personal messages and calls on WhatsApp remain end-to-end encrypted, ensuring that neither WhatsApp nor Meta can access them, even with the integration of Meta AI.

Meta’s expansion of AI initiatives follows the advancements made by prominent tech companies like OpenAI. After piloting its AI chatbot in markets such as the U.S., Meta is now extending testing to India, its largest market with over 500 million Facebook and WhatsApp users.

In addition, Meta has confirmed plans to release its next AI model, Llama 3, within the current month, indicating the company’s commitment to advancing AI technology and improving user experiences across its platforms.


In a bid to democratize video creation and storytelling, Google unveiled its latest innovation on Tuesday: Google Vids. This groundbreaking app harnesses the power of artificial intelligence (AI) to empower users of all skill levels to become proficient storytellers through video content.

Google Vids, described as an AI-based standalone application, aims to revolutionize the video creation process, making it accessible to everyone, regardless of their expertise in video editing. Set to be officially launched in June as part of Workspace Labs, this innovative tool promises to simplify the complex task of video production.

The core functionality of Google Vids revolves around its AI-driven capabilities, which enable users to craft compelling videos using a variety of elements, including stock images and background music. By leveraging AI, users can seamlessly create visually engaging content without the need for intricate editing skills.

One of the standout features of Google Vids is its intuitive storyboard-like interface, which streamlines the editing process. Users can effortlessly manipulate different components of their videos within the storyboard framework, allowing for seamless customization until the desired final look is achieved.

Google’s announcement of Google Vids coincides with the rollout of several other updates for Google Workspace, further solidifying the company’s commitment to innovation and productivity enhancement. Among the notable additions are an AI-based security add-on, a new table feature for Google Sheets, and a translation tool for Google Meet, catering to diverse user needs within the Workspace ecosystem.

In a move to enhance the user experience beyond productivity tools, Google has also integrated AI into its shopping tool, enabling users to discover new styles and find specific clothing items through image cues. This expansion into the realm of fashion demonstrates Google’s versatility in leveraging AI to address various consumer interests and preferences.

Despite the novelty of Google Vids and its accompanying updates, the tech giant remains steadfast in its mission to deliver innovative solutions that cater to the evolving needs of its user base. With Google Vids, Google is poised to redefine the landscape of video creation, ushering in a new era of creativity and storytelling powered by AI technology.


In a recent interview on X spaces, Tesla CEO Elon Musk delivered thought-provoking insights into the future of artificial intelligence (AI), predicting that AI capable of surpassing the intelligence of the smartest human may emerge as soon as next year or by 2026. Despite encountering technical glitches during the interview, Musk delved into various topics, shedding light on the constraints facing AI development, particularly related to electricity availability.

During the conversation with Nicolai Tangen, CEO of Norway’s wealth fund, Musk provided updates on Grok, an AI chatbot developed by his xAI startup. He revealed plans for the upcoming version of Grok, scheduled for training by May, while acknowledging challenges posed by a shortage of advanced chips.

Notably, Musk, a co-founder of OpenAI, expressed concerns about the deviation of OpenAI from its original mission and the prioritization of profit over humanity’s welfare. He founded xAI last year as a competitor to OpenAI, which he has sued for allegedly straying from its altruistic goals.

Discussing the resource-intensive nature of AI training, Musk disclosed that training the Grok 2 model required approximately 20,000 Nvidia H100 GPUs, with future iterations anticipated to necessitate up to 100,000 Nvidia H100 chips. However, he underscored that chip shortages and electricity supply would emerge as critical factors shaping AI development in the near future.

Turning to the automotive sector, Musk called Chinese carmakers the most competitive globally, warning that without appropriate trade barriers they could outperform their global rivals. Addressing recent labor disputes, he provided updates on a union strike against Tesla in Sweden, indicating that discussions had taken place with Norway’s sovereign wealth fund, a significant Tesla shareholder, to address the situation.

Elon Musk’s remarks offer valuable insights into the evolving landscape of AI development and the formidable challenges confronting both the technology and automotive industries. As advancements in AI continue to accelerate, navigating these challenges will be paramount to shaping the future of innovation and technology.


OpenAI has come under fire for allegedly transcribing over a million hours of YouTube videos to train its large language model GPT-4. A New York Times report sheds light on the lengths to which major players in the AI field have gone to access high-quality training data amid growing concerns over copyright infringement and ethical boundaries.

According to The New York Times, OpenAI developed its Whisper audio transcription model as a workaround to acquire the necessary data, despite the questionable legality of the endeavor. The company’s president, Greg Brockman, was reportedly involved in collecting videos for transcription, banking on the notion of “fair use” to justify their actions.

Responding to the allegations, OpenAI spokesperson Lindsay Held emphasized the company’s commitment to curating unique datasets for its models while exploring various data sources, including publicly available data and partnerships. The company is also considering generating synthetic data to supplement its training efforts.

Google, another major player in the AI landscape, has also faced scrutiny for its data-gathering practices. While Google denies any unauthorized scraping or downloading of YouTube content, reports suggest that the company has trained its models using transcripts from YouTube videos, albeit in accordance with its agreements with content creators.

Meta, formerly known as Facebook, encountered similar challenges in accessing quality training data, leading its AI team to explore potentially unauthorized use of copyrighted works. The company reportedly considered drastic measures, including purchasing book licenses or acquiring a large publisher, to address the data scarcity issue.

The broader AI training community is grappling with the looming shortage of training data, which is essential for improving model performance. While some propose innovative solutions like training models on synthetic data or employing curriculum learning techniques, the reliance on unauthorized data usage remains a contentious issue, fraught with legal and ethical implications.

As AI continues to advance, the debate surrounding data access and usage rights is expected to intensify, underscoring the need for clearer regulations and ethical guidelines in the field of artificial intelligence.

The revelations from The New York Times investigation shed light on the complex ethical and legal dilemmas faced by AI companies as they navigate the intricate landscape of data acquisition and model training.


A recent survey conducted by staffing firm Adecco Group has unveiled a concerning trend in the corporate world: a significant number of executives are anticipating workforce reductions within the next five years due to the increasing adoption of artificial intelligence (AI).

According to the survey, a staggering 41% of executives at large companies worldwide are expecting to decrease their workforce as a result of AI implementation. This revelation comes amidst the rapid advancement and widespread adoption of generative AI technology, capable of creating realistic text, images, and videos. While some view AI as a tool to streamline processes and eliminate repetitive tasks, others fear its potential to render entire job roles obsolete.

Denis Machuel, CEO of Adecco Group, emphasized the dual nature of AI’s impact on employment. “AI can be a job killer, and it can also be a job creator,” Machuel stated. He noted that while there is a historical precedent of digital technologies creating new job opportunities, the disruptive nature of AI poses significant challenges.

The survey encompassed executives from 18 industries across nine countries, representing both white-collar and blue-collar sectors. Interestingly, the findings diverge from a previous World Economic Forum poll, where half of the companies believed AI would lead to job creation rather than elimination.

Recent layoffs in the tech industry further underscore these concerns. Companies such as Google and Microsoft have shifted their focus towards AI-driven technologies like ChatGPT and Gemini, resulting in workforce reductions. Even non-tech firms like Dropbox and Duolingo have cited AI adoption as a contributing factor to downsizing efforts.

Economists at Goldman Sachs have previously warned that the widespread adoption of generative AI could potentially impact up to 300 million jobs globally, particularly affecting white-collar workers. The results of the Adecco survey suggest that this prediction may materialize within the next five years, highlighting the urgent need for proactive measures to address the evolving landscape of employment in the age of AI.


Google is gearing up to infuse its artificial intelligence (AI) technology into Gmail, marking a significant stride towards incorporating AI across its suite of products. With the aim of enhancing user experience and productivity, Google’s AI model, known as Gemini, is set to revolutionize the way users interact with their emails.

The move comes as part of Google’s broader strategy to leverage AI capabilities in its products, a move that underscores the company’s commitment to innovation and improving user engagement. Gemini, alongside other AI initiatives like Bard, is poised to reshape the landscape of digital communication by streamlining processes and offering intelligent solutions.

According to insights shared by Google App Detective AssembleDebug with PiunikaWeb, Google is currently testing Gemini within Gmail, with a focus on suggesting replies. This initiative aligns with Google’s earlier announcement of integrating Gemini functionalities into existing products and services, with initial access granted to Google One AI Premium subscribers.

The integration of Gemini into Gmail holds the promise of enhancing email composition by providing users with intelligent suggestions for responses. Screenshots of the feature in action reveal that Gemini seeks feedback on its suggestions, allowing the AI model to refine its responses based on user input, thereby enhancing its accuracy and relevance over time.

Presently, Google One AI Premium subscribers can leverage Gemini’s capabilities to aid in composing emails. However, the potential implementation of this feature in Gmail for Android opens up new possibilities for users, offering them intelligent assistance right within their email platform. While Gemini’s expansion to Gmail marks a significant milestone, its features are already making headway in other Google products and services, such as Google Messages.

The incorporation of AI into Gmail underscores Google’s commitment to prioritizing AI as a key driver of innovation, particularly in the face of stiff competition. By integrating Gemini into Gmail, Google aims to empower users with smarter, more efficient email management tools, enhancing productivity and user satisfaction.

As Google continues to push the boundaries of AI integration, users can expect to see further enhancements and advancements that revolutionize their digital experiences. With AI at the forefront of its endeavors, Google remains dedicated to delivering cutting-edge solutions that enrich the lives of its users worldwide.


OpenAI has announced a significant move in making its ChatGPT generative AI chatbot accessible to everyone without the need for an account. This decision aims to democratize access to AI technology, enabling curious individuals to explore its capabilities freely.

The Microsoft-backed startup revealed that ChatGPT can simulate human conversation and perform various tasks, including creating summaries, writing poetry, and generating ideas for theme parties. By removing the sign-up requirement, OpenAI intends to cater to a broader audience interested in experiencing AI firsthand.

This strategic shift comes amidst a reported slowdown in ChatGPT’s user growth since May 2023, as indicated by data analytics firm Similarweb. In response, OpenAI seeks to reinvigorate interest in its AI offerings by eliminating barriers to entry.

To address concerns about potential misuse, OpenAI has implemented additional content safeguards for users accessing ChatGPT without signing up. These safeguards include blocking prompts and generations in unspecified categories. Moreover, the company offers paid versions of ChatGPT for individuals, teams, and enterprises, ensuring advanced features and enhanced security measures.
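OpenAI has not disclosed which categories its safeguards block, so as a generic illustration only, a category-based prompt filter can be sketched as a lookup of blocked phrases per category. The category names and phrases below are invented for the example and are not OpenAI's actual safeguard implementation.

```python
# Generic sketch of category-based prompt blocking (illustration only;
# the categories and phrases here are hypothetical examples).
BLOCKED_CATEGORIES = {
    "violence": ["build a weapon", "attack plan"],
    "self_harm": ["hurt myself"],
}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any phrase from a blocked category."""
    lowered = prompt.lower()
    return any(
        phrase in lowered
        for phrases in BLOCKED_CATEGORIES.values()
        for phrase in phrases
    )

print(is_blocked("How do I build a weapon?"))   # True
print(is_blocked("Write a poem about spring"))  # False
```

Real systems typically use trained classifiers rather than phrase lists, but the gating logic — check the prompt, refuse before generation — is the same shape.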

OpenAI clarified that user-generated content may be utilized to enhance its large-language models, although users have the option to opt out of this feature. Notably, the decision to make ChatGPT accessible without an account appears unrelated to Elon Musk’s recent lawsuit against OpenAI and its CEO, Sam Altman. Musk alleged that the company deviated from its original mission of developing AI for humanity’s benefit.

Despite the lawsuit, OpenAI continues to introduce new AI-driven products, such as the AI voice cloning service Voice Engine and the video creation platform Sora, albeit with limited access. This move underscores OpenAI’s commitment to advancing AI technology while maintaining transparency and user safety.

As OpenAI gradually rolls out the feature, individuals eager to explore the capabilities of AI can now do so effortlessly, ushering in a new era of accessibility and exploration in artificial intelligence.


Hume AI has introduced the Empathic Voice Interface (EVI), a revolutionary conversational AI system imbued with emotional intelligence. EVI stands out by understanding users’ tone of voice, enriching interactions with human-like responses tailored to individual emotional states.

Designed to mimic human conversational nuances, EVI leverages state-of-the-art technology to comprehend and generate expressive speech, honed through extensive training on millions of human dialogues. Developers can seamlessly integrate EVI into various applications using Hume’s API, promising a unique and immersive voice interface experience.
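To make the idea of an emotion-aware voice API concrete, here is a minimal sketch of how a client might package an audio turn and read back per-emotion scores. The endpoint shape, field names, and the `emotion_scores` key are all assumptions for illustration — this is not Hume's actual API, whose details are in their developer documentation.

```python
import json

def build_turn_request(audio_base64: str, session_id: str) -> str:
    """Serialize one audio turn for a hypothetical empathic-voice endpoint."""
    payload = {
        "session_id": session_id,
        "audio": audio_base64,    # base64-encoded user speech
        "return_emotions": True,  # ask the service to score vocal tone
    }
    return json.dumps(payload)

def top_emotion(response_json: str) -> str:
    """Pick the highest-scoring emotion from a (mocked) service response."""
    scores = json.loads(response_json)["emotion_scores"]
    return max(scores, key=scores.get)

# Mocked response standing in for a real API reply:
mock = json.dumps({"emotion_scores": {"joy": 0.62, "calm": 0.21, "anger": 0.02}})
print(top_emotion(mock))  # joy
```

The point of the sketch is the flow — send audio, receive emotion scores alongside the transcript, and condition the reply on the dominant emotion — which is what distinguishes an empathic interface from plain speech-to-text.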

Features of EVI:

  1. Human-Like Tone: EVI responds with tones akin to human expressions, enhancing the conversational experience.
  2. Responsive Language: It adapts its language based on users’ expressions, effectively addressing their needs.
  3. State-of-the-Art Detection: EVI accurately detects the end of conversation turns using users’ tone, ensuring seamless interactions.
  4. Interruption Handling: EVI stops when interrupted and seamlessly resumes from where it left off.
  5. Self-Improvement: EVI continuously learns from user reactions to enhance user satisfaction over time.

Beyond its empathic features, EVI offers fast, reliable transcription and text-to-speech capabilities, making it versatile and adaptable to various scenarios. It integrates with any large language model (LLM), further enhancing its flexibility and utility.

EVI is slated to be publicly available in April, providing developers with an innovative tool to create immersive and empathetic voice interfaces. Developers keen on early access to the EVI API can express their interest by filling out the form on the EVI waitlist.

Established in 2021, Hume is a research lab and technology company dedicated to ensuring that artificial intelligence serves human goals and emotional well-being. Founded by Alan Cowen, a former researcher at Google AI, Hume raised a $50 million Series B funding from prominent investors including EQT Group, Union Square Ventures, and Comcast Ventures.

In a LinkedIn post, Cowen highlighted the significance of voice interfaces, emphasizing their efficiency and ability to convey nuanced information. He underscored EVI’s emotional intelligence as a key differentiator, enabling it to understand and respond to users’ voices beyond mere words.

OpenAI’s Voice Engine and Future Plans

In parallel, OpenAI is developing a Voice Engine that handles speech and voice recognition, processes voice commands, and converts between text and speech, enhancing interactions through natural language prompts.

Moreover, OpenAI is working on GPT-5, emphasizing multimodality to process video input and generate new videos. With a focus on customization and personalization, GPT-5 aims to leverage user data to enhance user experiences across various applications.

Last year, OpenAI launched the ChatGPT Voice feature, enabling back-and-forth conversations with diverse voices on Android and iOS platforms. The recent partnership with Figure AI underscores OpenAI’s commitment to advancing generative AI-powered humanoids, furthering the integration of AI into daily human interactions.

The Importance of Emotional Intelligence in Conversational AI

Experts underscore the significance of emotional intelligence in conversational AI, emphasizing its role in enhancing user experiences and driving commercial success. The integration of emotional understanding in chatbots holds promise for businesses, offering personalized and empathetic interactions that resonate with users.

Conversational AI systems like EVI and advancements by OpenAI represent significant strides towards realizing the future of empathetic and intuitive human-machine interactions.


Prime Minister Narendra Modi and Microsoft co-founder Bill Gates engaged in a comprehensive dialogue, delving into India’s inclusive digital vision and the critical issues surrounding technological advancements and digital transformation strategies. The dialogue, which covered a wide array of topics, highlighted India’s unique approach to the digital revolution and its commitment to democratising technology.

During the discussion, PM Modi underscored India’s steadfast commitment to ensuring that technological advancements are accessible to all segments of society. He elaborated on India’s model, which aims to prevent monopolies and foster a sense of ownership and trust among the populace. Emphasising inclusivity, PM Modi articulated how India leverages technology as a catalyst for societal empowerment and progress, particularly in sectors such as health, agriculture, and education.

PM Modi showcased India’s remarkable progress in establishing Ayushman Arogya Mandir Health Centers in rural areas, interconnected with urban hospitals through modern technology. He also highlighted the transformative potential of technology in revolutionising education and enhancing agricultural practices, demonstrating India’s commitment to addressing fundamental societal challenges through innovation.

Moreover, PM Modi outlined his vision of extending digital facilities to every village in India and emphasised the pivotal role of women in driving technological advancements. He unveiled the Namo Drone Didi scheme aimed at empowering women-led self-help groups with drone technology, fostering economic empowerment and innovation among rural women.

Addressing concerns surrounding data privacy and security, PM Modi showcased India’s robust legal framework and emphasised the importance of public awareness and simplified compliance measures. He reiterated India’s commitment to leveraging technology to enhance service delivery and improve citizens’ quality of life while safeguarding their data rights.

In response to Bill Gates’s query on data utilisation without compromising privacy, PM Modi outlined a multifaceted approach, stressing the need for public education on data contribution and transparent intentions behind data requests. He underscored India’s commitment to ethical data practices and prioritising research for the global good.

Reflecting on the government’s role in fostering technological innovation, PM Modi unveiled India’s ambitious plans to invest in environment-friendly solutions and promote research and development in future technologies. He highlighted India’s recent budget announcement of Rs 1 lakh crore to boost technological innovation, showcasing the government’s unwavering commitment to nurturing a conducive ecosystem for innovation and entrepreneurship.


As excitement builds for the launch of Google’s upcoming flagship smartphone, the Pixel 9, fresh leaks have emerged, providing insights into its potential specifications and design elements. Recent renders shared by 91 Mobiles offer a tantalizing glimpse into what consumers can anticipate from the highly awaited device.

Though the leaked images were initially mistaken for renders of the Pixel 9 and Pixel 9 Pro, recent hints suggest they actually showcase the Pixel 9 Pro alongside its larger counterpart, the Pixel 9 XL. This revelation adds an intriguing twist to the anticipated lineup.

The latest renders showcase the vanilla Pixel 9 in an elegant black color variant. The device features rounded corners and a flat-screen design, boasting a 6.03-inch display. Positioned neatly on the right side of the frame are the power button and volume keys, ensuring ergonomic usability.

Reported dimensions hint that the Pixel 9 will measure approximately 152.8 x 71.9 x 8.5mm, with the thickness rising to about 12mm at the rear camera bump. These dimensions suggest a compact yet substantial form factor, promising comfortable handling for users.

Among the most captivating rumored features of the Pixel 9 is the introduction of Adaptive Touch technology. Camera enthusiasts will be delighted by rumors indicating the inclusion of a telephoto lens, elevating the Pixel 9’s photography capabilities. Google’s commitment to harnessing AI to enhance camera performance reaffirms its dedication to delivering top-tier mobile photography experiences.

Under the hood, the Pixel 9 is expected to be powered by the latest Tensor G4 chip, succeeding the G3. While initial speculation hinted at the debut of a brand-new custom chip, recent reports suggest that such plans may be postponed until the release of the Pixel 10 in 2025. Software-wise, the Pixel 9 is poised to ship with Android 15, ensuring a seamless and up-to-date user experience. Additionally, Google is anticipated to integrate even more AI features, including a new assistant named “Pixie,” driven by the advanced Gemini AI model, as per several media reports.

As anticipation mounts for the official unveiling of the Pixel 9 series, consumers eagerly await further updates and announcements from Google, eager to experience the innovative features and enhancements promised by the tech giant. Stay tuned for more details as the launch date draws near.

