
ChatGPT Go plan

OpenAI has rolled out a new subscription tier, ChatGPT Go, in India, marking the first step in a strategy to make advanced AI tools more accessible in cost-sensitive markets. Priced at ₹399 per month, the plan bridges the gap between the free version and the premium tiers, offering more power and flexibility without the higher cost.

Why ChatGPT Go Matters for Indian Users

For a long time, Indian users have asked for two things: affordability and local payment options. ChatGPT Go addresses both. By introducing an India-first plan with rupee pricing and support for UPI payments, OpenAI has removed barriers that often kept casual users from upgrading.

This new tier gives users 10× more message capacity, 10× more image generations, 10× more file uploads, and double the memory length compared to the free version. It’s designed for students managing projects, freelancers working with clients, and professionals who need AI for daily workflows but don’t require the full suite of Plus or Pro.

Features That Set ChatGPT Go Apart

The Go plan offers a balance of power and value. Some key highlights include:

  • 10× higher usage limits for uninterrupted conversations.
  • Expanded creative tools with more image generations.
  • File uploads at a scale better suited for research, learning, or professional work.
  • Extended memory that allows for more context retention.

By packaging these features at a lower cost, the Go plan creates room for everyday productivity without forcing users into higher-priced tiers.

India-First Rollout and Global Implications

Launching ChatGPT Go in India first is more than a pricing experiment—it’s a recognition of India’s growing importance in the global AI ecosystem. With one of the largest user bases for digital tools and a strong preference for value-driven technology, India provides the ideal testbed for this tier.

The move also signals a shift towards inclusivity, ensuring that generative AI isn’t just for enterprises or high-end subscribers but also for students, creators, and individuals looking to use AI for learning, exploration, and personal growth.

The Subscription Landscape

With this launch, ChatGPT now offers four clear tiers:

  • Free Plan (limited usage)
  • Go Plan (₹399/month)
  • Plus Plan (₹1,999/month)
  • Pro Plan (₹19,999/month)

The Go plan fills the crucial middle space, giving users flexibility at a price point suited for wider adoption.

A Step Toward Greater Accessibility

For many, the biggest win is the introduction of UPI payments, which makes subscribing as simple as making any daily digital transaction. Combined with transparent INR pricing, this removes two major pain points—currency conversion and payment friction.

By lowering the entry barrier and giving people more practical capacity, OpenAI is positioning ChatGPT Go as the tool that brings AI into everyday routines, from drafting assignments to generating creative projects and managing professional workflows.

Google Gemini

Starting September 2, Google will update its data policy for Gemini, its AI chatbot. This change will allow the company to use your interactions—including file uploads and chat prompts—to train and improve its artificial intelligence systems.

While this might sound like a way to make Gemini more intelligent and helpful, it also introduces concerns about privacy. If you’ve ever used Gemini to ask sensitive questions, you may wonder if those conversations should really be part of AI training. Fortunately, Google has provided a way to opt out.

Why Google Wants Your Data

Artificial intelligence models learn best from real-world examples. Public data alone can’t always capture the variety of ways people ask questions or express themselves. By studying chats and uploads, Gemini can refine its understanding of human language and deliver more accurate responses.

In short, your chats help the AI learn. But for some, the trade-off between smarter AI and personal privacy feels uneasy—especially when health, finance, or personal topics are involved.

What Exactly Will Be Collected?

Google calls this setting Gemini Apps Activity. Once the update rolls out, it will appear as Keep activity. When enabled, this feature records your chats, file uploads, and prompts. That means anything you type or share with Gemini could be stored for AI improvement.

The company emphasizes that the data isn’t directly linked to your personal account. Still, the option to opt out exists for those who’d rather not share their conversations at all.

How to Turn Off Gemini Activity on Desktop

If you’d prefer to stop sharing your interactions, here’s the process:

  1. Go to gemini.google.com and sign in.
  2. From the left-hand menu, click Settings and help.
  3. Under Activity, find Gemini apps activity (or Keep activity after September 2).
  4. Toggle it off to stop saving your chats and uploads.
  5. You can also delete your past records if you want them removed from Google’s servers.

Even after disabling it, Google temporarily holds the last 72 hours of your activity before deleting it permanently.

How to Disable It on Mobile

The steps are similar on the Gemini app:

  1. Open the Gemini app and tap your profile icon.
  2. Go to Gemini apps activity.
  3. Switch it off to prevent future training.
  4. Delete past data if you don’t want your history stored.

Remember, if you use multiple Google accounts, you’ll need to repeat the steps on each one.

The Bigger Picture: Privacy vs Progress

This update reflects a larger dilemma in the world of artificial intelligence. On one side, companies like Google need massive amounts of real data to create smarter, more reliable AI. On the other, users worry about privacy and how their information might be used.

By offering an opt-out choice, Google is trying to strike a balance. Whether you choose to keep activity on or off depends on your comfort level with sharing data for AI development.

Gemma 3 270M

Artificial Intelligence is no longer limited to powerful servers and high-end computers. With the rise of mobile-first technology, there’s a growing need for models that are light, efficient, and accessible on everyday devices. Google has stepped into this space with Gemma 3 270M, a compact open-source AI model that brings the power of personalization directly to smartphones and IoT systems.

What Makes Gemma 3 270M Different?

Unlike large-scale AI models that rely heavily on cloud-based infrastructure, Gemma 3 270M is built to run directly on devices with limited hardware capabilities. With 270 million parameters, it balances performance with efficiency, making it an ideal fit for edge computing.

Key highlights include:

  • Energy efficiency designed for long-term sustainability.
  • Low hardware dependency, reducing the need for costly processors.
  • Quantisation-aware training, enabling smooth performance on formats like INT4.
  • Instruction-following and text structuring using a robust 256,000-token vocabulary.
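The quantisation-aware training bullet above is easier to grasp with a toy example. The sketch below shows plain symmetric INT4 quantization in Python; it is illustrative only, the function names are invented for this example, and it does not reflect Google's actual training pipeline.

```python
# Minimal sketch of symmetric INT4 quantization (illustrative only).
# Weights map to 16 integer levels in [-8, 7] with one shared scale,
# so each weight needs only 4 bits plus the per-tensor scale.

def quantize_int4(weights):
    """Quantize a list of floats to INT4 codes plus a shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 7.0 if max_abs else 1.0  # 7 = largest positive INT4 value
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int4(codes, scale):
    """Recover approximate float weights from INT4 codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.53, 0.98, -0.07]
codes, scale = quantize_int4(weights)
restored = dequantize_int4(codes, scale)
# Every code fits in 4 bits, and round-trip error stays within half a step.
assert all(-8 <= c <= 7 for c in codes)
assert all(abs(w - r) <= scale / 2 + 1e-6 for w, r in zip(weights, restored))
```

Quantisation-aware training goes one step further than this post-hoc rounding: the model sees the rounding error during training and learns weights that survive it.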

Why On-Device AI Matters

On-device AI eliminates the constant need to connect to cloud servers, which brings two big advantages:

  1. Stronger Privacy: Sensitive user data doesn’t need to be uploaded and stored externally.
  2. Faster Responses: Tasks like personalization, text generation, or analysis can happen instantly without latency issues.

For industries like healthcare wearables, autonomous IoT systems, and smart assistants, this could be a game-changer.

Environmental and Accessibility Benefits

By consuming less energy and relying less on server farms, Gemma 3 270M reduces the carbon footprint of AI usage. It also creates opportunities for startups, smaller firms, and independent developers who don’t have access to expensive cloud infrastructure. This aligns with Google’s vision of democratizing AI for all.

Built-in Safeguards and Responsible Use

To address safety concerns, Google has integrated ShieldGemma, a system designed to minimize risks of harmful outputs. However, experts point out that like any open-source technology, careful deployment will be essential to avoid misuse.

What’s Next for Gemma 3 270M?

Google has hinted at expanding Gemma with multimodal capabilities, enabling it to process not just text but also images, audio, and possibly video. This step would make it even more versatile and align it closer with the broader Gemini ecosystem.

Gemma 3 270M is more than just a compact AI model — it represents a shift towards decentralization and sustainability in artificial intelligence. By enabling on-device AI for mobiles and IoT devices, Google is paving the way for a future where AI is faster, greener, and more accessible to everyone.

Generative Engine Optimization

Generative Engine Optimization: The New Frontier of Digital Discovery

Search as we know it is evolving at lightning speed. The rise of AI-powered platforms is giving birth to a generative economy—one where businesses that act now, shifting from keyword-heavy SEO to brand-first, AI-driven strategies, will dominate for years to come. The rules that have governed traditional SEO are changing, and those who cling to them risk fading into irrelevance.

From Page One to AI Recommendations

In the old search world, being on “Page 1” of Google was the holy grail. Humans rarely clicked beyond it, so rankings were everything. But AI doesn’t operate like that. Large language models can scan and synthesize information from countless sites in seconds. This means your brand doesn’t have to be top-ranked to be surfaced—it needs to be trusted, contextually relevant, and clearly understood by these AI systems.

Why GEO Isn’t Just SEO 2.0

Generative Engine Optimization isn’t a fancier version of SEO. While it builds on SEO fundamentals, it’s driven by different priorities. Traditional SEO was about attracting the most traffic with the right keywords. GEO is about being chosen by intelligent systems as the most relevant, reliable answer to a query—whether or not you dominate keyword rankings.

The focus shifts from chasing search volume to positioning your brand where it matters most. It’s not about competing for generic terms—it’s about making your expertise, trust signals, and unique value unmistakable to AI-powered search.

The Brand-First Mindset

At the heart of GEO lies branding. This means:

  • Defining exactly who your customers are
  • Clearly articulating what you offer
  • Differentiating yourself from every other option

For too long, businesses have let keywords drive their messaging. But AI search thrives on depth, context, and credibility. If your brand story is fuzzy or inconsistent, intelligent systems will pass you over for competitors with clearer positioning.

How to Build GEO Authority

To win in this new landscape, you need to train AI systems to recognize and trust your brand. That involves:

  • On-page clarity: Make your value proposition explicit
  • Off-page validation: Secure mentions, case studies, interviews, and media coverage that reinforce your authority
  • Consistency across platforms: Ensure your brand voice and claims are uniform across every touchpoint

The more confident AI tools are in your brand’s credibility, the more likely you are to be recommended—whether for direct purchase decisions or complex, nuanced queries.

SEO and GEO: The Power of Bothism

Right now, we’re in a transitional stage. Traditional SEO still delivers value, but GEO is quickly rising. The smartest move is “bothism”—doing both SEO and GEO in tandem. This means keeping your search presence strong while building the brand authority that will matter more as AI search becomes dominant.

Businesses should be auditing where their mentions appear, how consistently they’re presented, and whether they’re visible beyond their own website. GEO deliverables—such as thought leadership content, media placements, and optimized brand profiles—create immediate value, making it easier to prove ROI compared to the slow climb of traditional SEO.

Why the Shift Can’t Wait

The shift to AI-driven search isn’t a slow burn—it’s happening now. As more people turn to generative tools for answers, businesses relying solely on traditional rankings will see traffic and leads drop. This is the moment to diversify, refine your brand presence, and prepare for a search environment where algorithms pick winners based on authority, not just backlinks.

Those who adapt early won’t just survive the change—they’ll define it.

GPT-5

OpenAI Faces Backlash After GPT-5 Release
OpenAI’s unveiling of its much-anticipated GPT-5 model has stirred a wave of dissatisfaction among its loyal user base. While the company showcased GPT-5 as a major upgrade in coding, reasoning, accuracy, and multi-modal capabilities, the response from many paying subscribers was anything but celebratory.

Why GPT-5 Hasn’t Won Over Loyal Users
Despite technical improvements and a lower hallucination rate, long-time ChatGPT users say GPT-5 has lost something far more important — its personality. The new model, they argue, delivers shorter, less engaging responses that lack the emotional warmth and conversational depth of its predecessor, GPT-4o. The disappointment has been amplified by OpenAI’s decision to discontinue several older models, including GPT-4o, GPT-4.5, GPT-4.1, o3, and o3-pro, leaving users with no way to return to their preferred options.

Social Media Pushback Intensifies
On Reddit, the ChatGPT community has become a focal point for criticism. Some users compared the removal of older models to losing a trusted colleague or creative partner. GPT-4o, in particular, was praised for its “voice, rhythm, and spark” — qualities that many claim are missing in GPT-5. Others criticized OpenAI’s sudden removal of eight models without prior notice, calling it disruptive to workflows that relied on different models for specific tasks like creative writing, deep research, and logical problem-solving.

Accusations of Misrepresentation
Adding fuel to the backlash, some users have accused OpenAI of misleading marketing during the GPT-5 launch presentation. Allegations include “benchmark-cheating” and the use of deceptive bar charts to exaggerate GPT-5’s performance. For some, this perceived dishonesty was the final straw, prompting them to cancel their subscriptions entirely.

The Bigger Picture for AI Adoption
This controversy highlights an evolving tension in AI development — the balance between technical progress and user experience. While companies often focus on measurable improvements, users place equal value on familiarity, emotional connection, and trust. OpenAI now faces the challenge of addressing the concerns of a vocal segment of its community while continuing to innovate in a competitive AI market.

GPT OSS

A New Era of Local Inference Begins

OpenAI’s breakthrough open-weight GPT-OSS models are now available with performance optimizations specifically designed for NVIDIA’s RTX and RTX PRO GPUs. This collaboration enables lightning-fast, on-device AI inference — with no need for cloud access — allowing developers and enthusiasts to bring high-performance, intelligent applications directly to their desktop environments.

With models like GPT-OSS-20B and GPT-OSS-120B now available, users can harness the power of generative AI for reasoning tasks, code generation, research, and more — all accelerated locally by NVIDIA hardware.

Built for Developers, Powered by RTX

These models, based on the powerful mixture-of-experts (MoE) architecture, offer advanced features like instruction following, tool usage, and chain-of-thought reasoning. Supporting a context length of up to 131,072 tokens, they’re ideally suited for deep research, multi-document analysis, and complex agentic AI workflows.

Optimized to run on RTX AI PCs and workstations, the models can now achieve up to 256 tokens per second on GPUs like the GeForce RTX 5090. This optimization extends across tools like Ollama, llama.cpp, and Microsoft AI Foundry Local, all designed to bring professional-grade inference into everyday computing.

MXFP4 Precision Unlocks Performance Without Sacrificing Quality

These are also the first models using the new MXFP4 precision format, balancing high output quality with significantly reduced computational demands. This opens the door to advanced AI use on local machines without the resource burdens typically associated with large-scale models.

Whether you’re using an RTX 4080 with 16GB of VRAM or a professional RTX 6000, these models can run seamlessly with top-tier speed and efficiency.
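To make the MXFP4 idea concrete, here is a heavily simplified Python sketch: a block of values shares one power-of-two scale, and each element snaps to the nearest 4-bit FP4 (E2M1) level. Real MXFP4 uses 32-element blocks with an 8-bit shared exponent, and the function below is an invented illustration, not NVIDIA's or OpenAI's kernel code.

```python
# Illustrative sketch of MXFP4-style block quantization (simplified).
import math

FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # E2M1 magnitudes

def quantize_block_mxfp4(block):
    """Pick a shared power-of-two scale, then snap each value to a ±FP4 level."""
    max_abs = max(abs(x) for x in block) or 1.0
    # Choose the scale so the largest magnitude lands at or below 6.0,
    # the top representable FP4 level.
    scale = 2.0 ** math.ceil(math.log2(max_abs / 6.0))
    quantized = []
    for x in block:
        level = min(FP4_LEVELS, key=lambda l: abs(abs(x) / scale - l))
        quantized.append(math.copysign(level * scale, x))
    return quantized, scale

block = [0.9, -2.4, 5.1, 0.1]
q, scale = quantize_block_mxfp4(block)
```

Because every element collapses to one of eight magnitudes plus a sign, the storage cost drops to roughly 4 bits per weight, which is what makes 20B- and 120B-parameter models practical on consumer VRAM.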

Ollama: The Simplest Path to Personal AI

For those eager to try out OpenAI’s models with minimal setup, Ollama is the go-to solution. With native RTX optimization, it enables point-and-click interaction with GPT-OSS models through a modern UI. Users can feed in PDFs, images, and large documents with ease — all while chatting naturally with the model.

Ollama’s interface also includes support for multimodal prompts and customizable context lengths, giving creators and professionals more control over how their AI responds and reasons.

Advanced users can tap into Ollama’s command-line interface or integrate it directly into their apps using the SDK, extending its power across development pipelines.
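As a concrete illustration of that programmatic route, the sketch below talks to Ollama's local REST endpoint (`/api/chat`) using only the Python standard library. The default host and port (localhost:11434) and the `gpt-oss:20b` model tag are assumptions based on Ollama's documented defaults; check your local setup before relying on them.

```python
# Hypothetical sketch: chat with a locally served GPT-OSS model through
# Ollama's HTTP API. Host, port, and model tag are assumptions.
import json
import urllib.request

def build_chat_request(model, prompt):
    """Build the JSON payload for a single-turn, non-streaming chat call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model, prompt, host="http://localhost:11434"):
    """Send the request to a running Ollama server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/chat", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With an Ollama server running and the model pulled (`ollama pull gpt-oss:20b`), calling `chat("gpt-oss:20b", "Explain mixture-of-experts briefly")` returns the model's reply as a string, entirely on-device.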

More Tools, More Flexibility

Beyond Ollama, developers can explore GPT-OSS on RTX via:

  • llama.cpp — with CUDA Graphs and low-latency enhancements tailored for NVIDIA GPUs
  • GGML Tensor Library — community-driven library with Tensor Core optimization
  • Microsoft AI Foundry Local — a robust, on-device inferencing toolkit for Windows, built on ONNX Runtime and CUDA

These tools give AI builders unprecedented flexibility, whether they’re building autonomous agents, coding assistants, research bots, or productivity apps — all running locally on AI PCs and workstations.

A Push Toward Local, Open Innovation

As OpenAI steps into the open-source ecosystem with NVIDIA’s hardware advantage, developers worldwide now have access to state-of-the-art models without being tethered to the cloud.

The ability to run long-context models with high-speed output opens new possibilities in real-time document comprehension, enterprise chatbots, developer tooling, and creative applications — with full control and privacy.

NVIDIA’s continued support through resources like the RTX AI Garage and AI Blueprints means the community will keep seeing evolving tools, microservices, and deployment solutions to push local AI even further.

OpenAI

In a sharp turn of events in the competitive world of artificial intelligence, Anthropic has publicly accused OpenAI of using its proprietary Claude coding tools to refine and train GPT-5, its highly anticipated next-generation language model. The allegation has stirred significant debate in the tech world, raising concerns about competitive ethics, data use, and the boundaries of AI benchmarking.

A Quiet Test Turns Loud: How the Allegation Surfaced

The dispute came to light following an investigative report by Wired, which cited insiders at Anthropic who claimed that OpenAI had been using Claude’s developer APIs—not just the public chat interface—to run deep internal evaluations of Claude’s capabilities. These tests reportedly focused on coding, creative writing, and handling of sensitive prompts related to safety, which gave OpenAI insight into Claude’s architecture and response behavior.

While such benchmarking might appear routine in the AI research world, Anthropic argues that OpenAI went beyond what is considered acceptable.

Anthropic Draws the Line on API Use

“Claude Code has become the go-to choice for developers,” Anthropic spokesperson Christopher Nulty said, adding that OpenAI’s engineers tapping into Claude’s coding tools to refine GPT-5 was a “direct violation of our terms of service.”

According to Anthropic’s usage policies, customers are strictly prohibited from using Claude to train or develop competing AI products. While benchmarking for safety is a permitted use, exploiting tools to optimize direct competitors is not.

That distinction, Anthropic claims, is what OpenAI crossed. The company has now limited OpenAI’s access to its APIs—allowing only minimal usage for safety benchmarking going forward.

OpenAI’s Response: Disappointed but Diplomatic

In a measured response, OpenAI’s Chief Communications Officer Hannah Wong acknowledged the API restriction but underscored the industry norm of cross-model benchmarking.

“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety,” Wong noted. “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”

The statement suggests OpenAI is seeking to maintain diplomatic ties despite the tensions.

A Pattern of Caution from Anthropic

This isn’t the first time Anthropic has shut the door on a competitor. Earlier this year, it reportedly blocked Windsurf, a coding-focused AI startup, over rumors of OpenAI’s acquisition interest. Jared Kaplan, Anthropic’s Chief Science Officer, had at the time stated, “It would be odd for us to be selling Claude to OpenAI.”

With GPT-5 reportedly close to release, the incident reveals how fiercely guarded innovation has become in the AI world. Every prompt, every tool, and every line of code has strategic value—and access to a rival’s system, even indirectly, can be a game-changer.

What This Means for the Future of AI Development

The AI landscape is becoming increasingly guarded. With foundational models becoming key differentiators for companies, control over access—especially to development tools and APIs—is tightening.

Anthropic’s defensive stance could be a sign of things to come: fewer shared benchmarks, more closed systems, and increased scrutiny over how AI labs test, train, and scale their models.

As for GPT-5, questions now swirl not only around its capabilities but also its developmental origins—a storyline that will continue to unfold in the months ahead.

MIT's Brain Study on frequent ChatGPT users

A Shocking Study That Raises Eyebrows

A brain-scan study conducted over four months by researchers at MIT reveals that significant cognitive consequences are tied to prolonged ChatGPT usage. While the AI tool undoubtedly boosts productivity, its frequent use appears to undermine memory, brain connectivity, and mental effort.

Reduced Brain Activity in Everyday Users

The study followed a group of participants who used ChatGPT on a regular basis and found a 47% decline in brain connectivity scores, from 79 down to 42 points. Perhaps most alarming, 83.3% of users couldn’t recall even a single sentence they had read or generated just a few minutes earlier. Even after they stopped using the AI, participants showed only minimal signs of cognitive recovery or re-engagement.
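The percentage reported above follows directly from the raw scores:

```python
# Sanity check: a drop from 79 to 42 points is the reported 47% decline.
before, after = 79, 42
decline = (before - after) / before
assert round(decline * 100) == 47
```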

Efficiency vs. Effort

Looking at the bigger picture, ChatGPT made users 60% faster at completing tasks, especially essays and written reports. But those outputs were described as robotic, lacking depth, emotion, and human insight. Users expended 32% less mental effort on average, signaling a troubling trend: speed was gained, but at the cost of real thinking.

Building a Foundational Understanding

Interestingly, the top-performing group in the study started without any AI assistance, building a foundation of understanding before introducing ChatGPT into their workflow. These participants retained better memory, exhibited stronger brain activity, and produced the most well-rounded content. This approach suggests that AI should be a scaffold, not a crutch.

Dulling the Blade of the Mind

MIT’s findings point toward a growing concern: overdependence on AI may be eroding our cognitive resilience. The study emphasizes that using ChatGPT as a shortcut, especially in younger users, might hamper long-term intellectual development. Early exposure without structured guidance could potentially flatten the curve of curiosity and critical reasoning.

Redefining the Role of AI in Learning

Rather than sounding a death knell for AI tools, the MIT study encourages thoughtful integration. The key takeaway is that AI should be an assistant that directs your thinking, not one that replaces it. The question we must now ask is: how do we ensure AI remains an enhancement tool and not a substitute for the human mind?

GitHub

A Radical Leap in No-Code Development
GitHub has unveiled “Spark,” a groundbreaking tool that could redefine how we create software. Spark enables users to build functional web applications simply by using natural language prompts—no coding experience required. This innovation comes from GitHub Next, the company’s experimental division, and offers both OpenAI and Claude Sonnet models for building and refining ideas.

More Than Just Code Generation
Unlike earlier AI tools that only generate code snippets, Spark goes a step further. It not only creates the necessary backend and frontend code but runs the app and shows a live, interactive preview. This allows creators to immediately test and modify their applications using further prompts—streamlining development cycles and reducing friction.

A Choice of Models for Precision
Spark users can choose from a selection of top-tier AI models: Claude 3.5 Sonnet, OpenAI’s o1-preview, o1-mini, or the flagship GPT-4o. While OpenAI is known for tuning models to support software logic, Claude Sonnet is recognized for its superior technical reasoning, especially in debugging and interpreting code.

Visualizing Ideas with Variants
Not sure how you want your micro app to look? Spark has a “revision variants” feature. This allows you to generate multiple visual and functional versions of an app, each carrying subtle differences. This feature is ideal for ideation, rapid prototyping, or pitching concepts.

Collaboration and Deployment Made Easy
GitHub Spark isn’t just about building—it also simplifies deployment and teamwork. One-click deployment options and Copilot agent collaboration features make it easy for teams to iterate faster and smarter. Whether you’re a seasoned developer or a startup founder with no tech background, Spark makes execution accessible.

A Message from GitHub’s CEO
Thomas Dohmke, CEO of GitHub, emphasized Spark’s significance in a recent statement on X (formerly Twitter):

“In the last five decades of software development, producing software required manually converting human language into programming language… Today, we take a step toward the ideal magic of creation: the idea in your head becomes reality in a matter of minutes.”

Pricing and Availability
GitHub Spark is currently available to Copilot Pro+ users. The subscription costs $39 per month or $390 per year, which includes 375 Spark prompts. Additional messages can be purchased at $0.16 per prompt.


OpenAI’s generative AI tool, ChatGPT, is shattering records with over 2.5 billion daily prompts, a remarkable milestone that underscores the platform’s rapid global expansion. According to newly obtained data, this figure translates to an astonishing 912.5 billion annual interactions, highlighting how deeply embedded the AI chatbot has become in everyday digital workflows.

US Leads the Charge in Prompt Volume

Out of the billions of interactions processed each day, around 330 million originate from the United States, positioning the country as ChatGPT’s largest user base. A spokesperson from OpenAI has verified the accuracy of these figures, affirming the monumental scale at which the AI platform operates today.

Growth That Stuns Even the Tech Industry

What makes this surge even more notable is the meteoric rise in active users. From 300 million weekly users in December to over 500 million by March, the trajectory shows no signs of slowing. This exponential rise is not just a milestone for OpenAI—it represents a fundamental shift in how users interact with information and automation.

A Looming Threat to Google’s Search Supremacy

While Google still maintains dominance with 5 trillion annual searches, the momentum behind ChatGPT suggests a possible reshaping of the search engine landscape. Unlike Google’s keyword-based model, ChatGPT provides direct, human-like responses, offering users a more conversational and task-oriented experience.
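The daily, annual, and market-share figures quoted in this article are mutually consistent, as a quick check shows:

```python
# Cross-check the usage figures quoted above.
daily_prompts = 2.5e9                       # 2.5 billion prompts per day
annual_prompts = daily_prompts * 365
assert annual_prompts == 912.5e9            # the 912.5 billion annual figure

google_annual_searches = 5e12               # Google's 5 trillion annual searches
share = annual_prompts / google_annual_searches
assert abs(share - 0.1825) < 1e-12          # roughly 18% of Google's volume
```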

Strategic Moves: AI Agent and Browser on the Way

Adding to its expanding arsenal, OpenAI recently launched ChatGPT Agent, a powerful tool capable of performing tasks on a user’s device autonomously. This marks a major step toward an all-in-one digital assistant. In addition, OpenAI is reportedly planning to launch a custom AI-powered web browser, designed to rival Google Chrome directly—an aggressive move that signals OpenAI’s ambitions beyond just chat.


Our News Portal

We provide accurate, balanced, and impartial coverage of national and international affairs, focusing on the activities and developments within the parliament and its surrounding political landscape. We aim to foster informed public discourse and promote transparency in governance through our news articles, features, and opinion pieces.


@2023 – All Rights Reserved. Designed and Developed by The Parliament News
