Tag: Technology

ChatGPT

The world’s most popular AI chatbot, ChatGPT, went offline for thousands of users on Wednesday, leaving many frustrated and searching for answers. Reports of disruptions started pouring in shortly after 11 am, according to Downdetector, a service that tracks online platforms and outages.

By midday, users across India, the United States, and Europe flagged issues ranging from failed responses to complete network errors on both the website and the mobile app.

How Many Users Were Affected?

Initial reports suggest that hundreds of users flagged problems within 20 minutes of the outage. At its peak, over 500 users in India alone reported issues, while thousands globally experienced disruptions. By 3:30 pm, reports had dropped significantly, with only 42 users still facing issues.

  • 85% of complaints were linked to ChatGPT not responding.
  • 13% of users reported problems with the OpenAI website.
  • 2% flagged disruptions with Writing Coach, an integrated tool.

OpenAI’s Response

As of now, OpenAI has not released an official statement regarding the cause of the outage. Some users reported that the service was working intermittently, while others continued to face errors, suggesting that the problem may have been partially resolved but not completely fixed.

Past ChatGPT Outages

This is not the first time ChatGPT has gone dark. The platform has experienced several major outages in recent months:

  • January 23, 2025: A global outage lasting over three hours disrupted users across Spain, Argentina, and the United States.
  • December 26, 2024: A technical glitch caused widespread downtime.
  • February 5, 2025: Over 22,000 outage reports were filed worldwide as ChatGPT remained inaccessible.
  • September 2025: A string of shorter outages occurred between September 1 and September 3, with disruptions lasting up to 10 minutes each.

What This Outage Means for Users

ChatGPT has become a critical tool for millions of people—students, businesses, and professionals alike. Outages highlight both the massive dependency on AI platforms and the challenges of keeping such large-scale systems consistently available. While downtime is usually short-lived, it often pushes users to explore alternatives or diversify their tools.

Grok privacy breach

When Private Conversations Turn Public

Imagine typing something deeply personal into an AI chat system, assuming it will stay between you and the machine. Now picture that same conversation turning up on Google search results. That’s not a hypothetical — it’s exactly what’s happening with Grok’s “share” feature.

What Went Wrong With Grok’s Sharing Tool

The problem lies in Grok’s shared links. When users hit the “Share” button, the system generates a public URL — one that is not hidden from search engines. Without safeguards like noindex tags or restricted access, those URLs are being crawled by Google, Bing, and DuckDuckGo. The result? Over 370,000 chat transcripts have become searchable, including sensitive content.
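
To make the fix concrete, below is a minimal sketch, assuming a Flask-style web service, of how a share endpoint can tell crawlers not to index a public page; the route, the transcript lookup, and the header value shown are illustrative, not Grok's actual implementation.

```python
# Illustrative sketch only (not Grok's/xAI's code): serve a shared chat page
# with an X-Robots-Tag header so search engines skip it. Assumes Flask is
# installed; the /share route and load_transcript helper are hypothetical.
from typing import Optional

from flask import Flask, abort, make_response

app = Flask(__name__)

def load_transcript(share_id: str) -> Optional[str]:
    # Hypothetical lookup of a shared transcript by its ID.
    return None

@app.route("/share/<share_id>")
def shared_chat(share_id: str):
    transcript = load_transcript(share_id)
    if transcript is None:
        abort(404)
    response = make_response(transcript)
    # Tell crawlers not to index or follow this page.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```

The same signal can be sent with a robots meta tag (content="noindex") in the page's HTML; restricting who can open the link at all is stronger still.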

The Risk Factor: More Than Just Embarrassing

These exposed conversations aren’t trivial. Reports highlight exchanges involving health issues, password changes, and even discussions about criminal activity. While Grok may strip names or IDs, snippets of context are often enough to trace conversations back to individuals. What was meant to be a casual or private interaction suddenly becomes a public record.

Why This Feels Familiar

This isn’t the first time an AI platform has tripped on privacy. Earlier, similar flaws were flagged in shared ChatGPT links before fixes were rolled out. Grok, however, seems to have repeated the same mistake, leaving users to face the consequences of a poorly designed sharing mechanism.

What You Can Do Right Now

If you’ve shared a Grok chat, here are steps you should take:

  • Stop using the “Share” button until the issue is fixed.
  • Audit your old shared links and delete them wherever possible.
  • Use Google’s content removal tool to request takedown of cached transcripts.
  • Stick to screenshots if you need to share conversations for reference — they don’t create public URLs.

What Grok and xAI Must Fix Immediately

The responsibility doesn’t just lie with users. Grok’s developers need to:

  • Add clear warnings that shared chats become public.
  • Apply noindex tags or access restrictions to stop search engines from indexing links.
  • Build time-limited or permission-based share features (a minimal signing sketch follows this list).
  • Audit shared data to ensure dangerous or illegal content isn’t left exposed.
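
For the time-limited option, here is a minimal sketch, using only the Python standard library, of signed share links that expire; the secret key, URL format, and 24-hour lifetime are assumptions for illustration rather than anything xAI has announced.

```python
# Illustrative sketch only: expiring, HMAC-signed share links. The secret,
# domain, and 24-hour lifetime are placeholder assumptions.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"replace-with-a-server-side-secret"
LIFETIME_SECONDS = 24 * 60 * 60  # links stop working after one day

def make_share_url(chat_id: str) -> str:
    expires = int(time.time()) + LIFETIME_SECONDS
    payload = f"{chat_id}:{expires}".encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    query = urlencode({"chat": chat_id, "exp": expires, "sig": signature})
    return f"https://example.com/share?{query}"

def verify_share_request(chat_id: str, expires: int, signature: str) -> bool:
    if time.time() > expires:
        return False  # the link has expired
    payload = f"{chat_id}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Because the signature covers both the chat ID and the expiry time, a leaked or guessed URL stops working after a day and cannot be tampered with to extend its life.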

Trust Depends on Privacy

For any AI tool, trust is everything. If users feel their private words could suddenly become searchable, they’ll stop engaging honestly. Grok’s misstep isn’t just a bug — it’s a warning that AI platforms must take privacy as seriously as innovation. Until then, the safest assumption is simple: if you share a link, the world might see it.

OpenAI

Artificial intelligence giant OpenAI is officially setting foot in India with the establishment of OpenAI India Private Limited and plans to open its first office in New Delhi. This move signals the company’s growing focus on one of its fastest-expanding user bases and highlights India’s critical role in the global AI ecosystem.

Why India Matters to OpenAI

India is now OpenAI’s second-largest market after the United States. In the past year alone, usage from Indian users has increased fourfold, making the country one of the fastest-growing hubs for AI adoption. Students, educators, professionals, and developers form a massive share of this growth, turning OpenAI’s platforms into essential tools for learning, creativity, and innovation.

OpenAI also revealed that India ranks among the top five countries worldwide in terms of developer engagement on its platform. This surge reflects the country’s dynamic tech community and its eagerness to harness AI for problem-solving and innovation.

Partnership with India’s AI Mission

The decision to establish a local entity aligns with the government’s IndiaAI Mission, which aims to build an inclusive and trusted AI ecosystem. By working with policymakers, OpenAI hopes to make artificial intelligence accessible to every citizen while ensuring language diversity.

In line with this vision, OpenAI has significantly enhanced its models’ performance in Indic languages, ensuring that India’s linguistic diversity is represented in AI development.

What to Expect from the New Delhi Office

Though the exact office location in New Delhi has not yet been confirmed, OpenAI has already started building its team in India. The company currently has multiple openings in sales roles and is expected to expand its local workforce as operations scale.

To make its technology more accessible, OpenAI recently introduced a localized subscription plan, ChatGPT Go, priced in Indian rupees. This plan offers affordable access to advanced AI tools, catering to millions of Indian users.

Upcoming Events: Education and Developer Focus

OpenAI plans to host its first Education Summit in India later this month, followed by its first Developer Day in India later this year. These events will bring together educators, students, startups, and enterprises to explore how AI can be used responsibly and innovatively.

By engaging directly with India’s vibrant developer and entrepreneurial community, OpenAI aims to co-create solutions that can shape the future of technology in the region.

Looking Ahead

OpenAI’s entry into India goes beyond opening an office. It represents a strategic partnership with one of the world’s youngest, most dynamic tech communities. With government support, improved Indic language models, and affordable plans tailored for local users, the company is positioning itself to play a central role in India’s AI future.

Google Gemini

Starting September 2, Google will update its data policy for Gemini, its AI chatbot. This change will allow the company to use your interactions—including file uploads and chat prompts—to train and improve its artificial intelligence systems.

While this might sound like a way to make Gemini more intelligent and helpful, it also introduces concerns about privacy. If you’ve ever used Gemini to ask sensitive questions, you may wonder if those conversations should really be part of AI training. Fortunately, Google has provided a way to opt out.

Why Google Wants Your Data

Artificial intelligence models learn best from real-world examples. Public data alone can’t always capture the variety of ways people ask questions or express themselves. By studying chats and uploads, Gemini can refine its understanding of human language and deliver more accurate responses.

In short, your chats help the AI learn. But for some, the trade-off between smarter AI and personal privacy feels uneasy—especially when health, finance, or personal topics are involved.

What Exactly Will Be Collected?

Google calls this setting Gemini Apps Activity. Once the update rolls out, it will appear as Keep activity. When enabled, this feature records your chats, file uploads, and prompts. That means anything you type or share with Gemini could be stored for AI improvement.

The company emphasizes that the data isn’t directly linked to your personal account. Still, the option to opt out exists for those who’d rather not share their conversations at all.

How to Turn Off Gemini Activity on Desktop

If you’d prefer to stop sharing your interactions, here’s the process:

  1. Go to Gemini.Google.com and sign in.
  2. From the left-hand menu, click Settings and help.
  3. Under Activity, find Gemini apps activity (or Keep activity after September 2).
  4. Toggle it off to stop saving your chats and uploads.
  5. You can also delete your past records if you want them removed from Google’s servers.

Even after disabling it, Google temporarily holds the last 72 hours of your activity before deleting it permanently.

How to Disable It on Mobile

The steps are similar on the Gemini app:

  1. Open the Gemini app and tap your profile icon.
  2. Go to Gemini apps activity.
  3. Switch it off to prevent future training.
  4. Delete past data if you don’t want your history stored.

Remember, if you use multiple Google accounts, you’ll need to repeat the steps on each one.

The Bigger Picture: Privacy vs Progress

This update reflects a larger dilemma in the world of artificial intelligence. On one side, companies like Google need massive amounts of real data to create smarter, more reliable AI. On the other, users worry about privacy and how their information might be used.

By offering an opt-out choice, Google is trying to strike a balance. Whether you choose to keep activity on or off depends on your comfort level with sharing data for AI development.

Gemma 3 270M

Artificial Intelligence is no longer limited to powerful servers and high-end computers. With the rise of mobile-first technology, there’s a growing need for models that are light, efficient, and accessible on everyday devices. Google has stepped into this space with Gemma 3 270M, a compact open-source AI model that brings the power of personalization directly to smartphones and IoT systems.

What Makes Gemma 3 270M Different?

Unlike large-scale AI models that rely heavily on cloud-based infrastructure, Gemma 3 270M is built to run directly on devices with limited hardware capabilities. With 270 million parameters, it balances performance with efficiency, making it an ideal fit for edge computing.

Key highlights include:

  • Energy efficiency designed for long-term sustainability.
  • Low hardware dependency, reducing the need for costly processors.
  • Quantization-aware training, enabling smooth performance on low-precision formats like INT4 (see the sketch after this list).
  • Instruction-following and text structuring using a robust 256,000-token vocabulary.
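
As a rough illustration of what on-device use can look like, here is a minimal Python sketch with the Hugging Face Transformers pipeline; the google/gemma-3-270m-it checkpoint name and its license terms are assumptions to verify on the official model card before running it.

```python
# Minimal on-device generation sketch. Assumes the transformers and torch
# packages are installed and that "google/gemma-3-270m-it" is the published
# instruction-tuned checkpoint (verify the exact ID on the model card).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m-it",  # assumed checkpoint name
)

prompt = "Summarize the benefits of on-device AI in two sentences."
outputs = generator(prompt, max_new_tokens=80)
print(outputs[0]["generated_text"])
```

Because the checkpoint is only 270 million parameters, a call like this should fit comfortably on a laptop CPU, and INT4 quantized builds shrink the memory footprint further for phones and embedded boards.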

Why On-Device AI Matters

On-device AI eliminates the constant need to connect to cloud servers, which brings two big advantages:

  1. Stronger Privacy: Sensitive user data doesn’t need to be uploaded and stored externally.
  2. Faster Responses: Tasks like personalization, text generation, or analysis can happen instantly without latency issues.

For industries like healthcare wearables, autonomous IoT systems, and smart assistants, this could be a game-changer.

Environmental and Accessibility Benefits

By consuming less energy and relying less on server farms, Gemma 3 270M reduces the carbon footprint of AI usage. It also creates opportunities for startups, smaller firms, and independent developers who don’t have access to expensive cloud infrastructure. This aligns with Google’s vision of democratizing AI for all.

Built-in Safeguards and Responsible Use

To address safety concerns, Google has integrated ShieldGemma, a system designed to minimize risks of harmful outputs. However, experts point out that like any open-source technology, careful deployment will be essential to avoid misuse.

What’s Next for Gemma 3 270M?

Google has hinted at expanding Gemma with multimodal capabilities, enabling it to process not just text but also images, audio, and possibly video. This step would make it even more versatile and align it closer with the broader Gemini ecosystem.

Gemma 3 270M is more than just a compact AI model — it represents a shift towards decentralization and sustainability in artificial intelligence. By enabling on-device AI for mobiles and IoT devices, Google is paving the way for a future where AI is faster, greener, and more accessible to everyone.

Generative Engine Optimization

Generative Engine Optimization: The New Frontier of Digital Discovery

Search as we know it is evolving at lightning speed. The rise of AI-powered platforms is giving birth to a generative economy—one where businesses that act now, shifting from keyword-heavy SEO to brand-first, AI-driven strategies, will dominate for years to come. The rules that have governed traditional SEO are changing, and those who cling to them risk fading into irrelevance.

From Page One to AI Recommendations

In the old search world, being on “Page 1” of Google was the holy grail. Humans rarely clicked beyond it, so rankings were everything. But AI doesn’t operate like that. Large language models can scan and synthesize information from countless sites in seconds. This means your brand doesn’t have to be top-ranked to be surfaced—it needs to be trusted, contextually relevant, and clearly understood by these AI systems.

Why GEO Isn’t Just SEO 2.0

Generative Engine Optimization isn’t a fancier version of SEO. While it builds on SEO fundamentals, it’s driven by different priorities. Traditional SEO was about attracting the most traffic with the right keywords. GEO is about being chosen by intelligent systems as the most relevant, reliable answer to a query—whether or not you dominate keyword rankings.

The focus shifts from chasing search volume to positioning your brand where it matters most. It’s not about competing for generic terms—it’s about making your expertise, trust signals, and unique value unmistakable to AI-powered search.

The Brand-First Mindset

At the heart of GEO lies branding. This means:

  • Defining exactly who your customers are
  • Clearly articulating what you offer
  • Differentiating yourself from every other option

For too long, businesses have let keywords drive their messaging. But AI search thrives on depth, context, and credibility. If your brand story is fuzzy or inconsistent, intelligent systems will pass you over for competitors with clearer positioning.

How to Build GEO Authority

To win in this new landscape, you need to train AI systems to recognize and trust your brand. That involves:

  • On-page clarity: Make your value proposition explicit
  • Off-page validation: Secure mentions, case studies, interviews, and media coverage that reinforce your authority
  • Consistency across platforms: Ensure your brand voice and claims are uniform across every touchpoint

The more confident AI tools are in your brand’s credibility, the more likely you are to be recommended—whether for direct purchase decisions or complex, nuanced queries.

SEO and GEO: The Power of Bothism

Right now, we’re in a transitional stage. Traditional SEO still delivers value, but GEO is quickly rising. The smartest move is “bothism”—doing both SEO and GEO in tandem. This means keeping your search presence strong while building the brand authority that will matter more as AI search becomes dominant.

Businesses should be auditing where their mentions appear, how consistently they’re presented, and whether they’re visible beyond their own website. GEO deliverables—such as thought leadership content, media placements, and optimized brand profiles—create immediate value, making it easier to prove ROI compared to the slow climb of traditional SEO.

Why the Shift Can’t Wait

The shift to AI-driven search isn’t a slow burn—it’s happening now. As more people turn to generative tools for answers, businesses relying solely on traditional rankings will see traffic and leads drop. This is the moment to diversify, refine your brand presence, and prepare for a search environment where algorithms pick winners based on authority, not just backlinks.

Those who adapt early won’t just survive the change—they’ll define it.

GPT-5

OpenAI Faces Backlash After GPT-5 Release

OpenAI’s unveiling of its much-anticipated GPT-5 model has stirred a wave of dissatisfaction among its loyal user base. While the company showcased GPT-5 as a major upgrade in coding, reasoning, accuracy, and multi-modal capabilities, the response from many paying subscribers was anything but celebratory.

Why GPT-5 Hasn’t Won Over Loyal Users

Despite technical improvements and a lower hallucination rate, long-time ChatGPT users say GPT-5 has lost something far more important — its personality. The new model, they argue, delivers shorter, less engaging responses that lack the emotional warmth and conversational depth of its predecessor, GPT-4o. The disappointment has been amplified by OpenAI’s decision to discontinue several older models, including GPT-4o, GPT-4.5, GPT-4.1, o3, and o3-pro, leaving users with no way to return to their preferred options.

Social Media Pushback Intensifies

On Reddit, the ChatGPT community has become a focal point for criticism. Some users compared the removal of older models to losing a trusted colleague or creative partner. GPT-4o, in particular, was praised for its “voice, rhythm, and spark” — qualities that many claim are missing in GPT-5. Others criticized OpenAI’s sudden removal of eight models without prior notice, calling it disruptive to workflows that relied on different models for specific tasks like creative writing, deep research, and logical problem-solving.

Accusations of Misrepresentation

Adding fuel to the backlash, some users have accused OpenAI of misleading marketing during the GPT-5 launch presentation. Allegations include “benchmark-cheating” and the use of deceptive bar charts to exaggerate GPT-5’s performance. For some, this perceived dishonesty was the final straw, prompting them to cancel their subscriptions entirely.

The Bigger Picture for AI Adoption

This controversy highlights an evolving tension in AI development — the balance between technical progress and user experience. While companies often focus on measurable improvements, users place equal value on familiarity, emotional connection, and trust. OpenAI now faces the challenge of addressing the concerns of a vocal segment of its community while continuing to innovate in a competitive AI market.

GPT OSS

A New Era of Local Inference Begins

OpenAI’s breakthrough open-weight GPT-OSS models are now available with performance optimizations specifically designed for NVIDIA’s RTX and RTX PRO GPUs. This collaboration enables lightning-fast, on-device AI inference — with no need for cloud access — allowing developers and enthusiasts to bring high-performance, intelligent applications directly to their desktop environments.

With models like GPT-OSS-20B and GPT-OSS-120B now available, users can harness the power of generative AI for reasoning tasks, code generation, research, and more — all accelerated locally by NVIDIA hardware.

Built for Developers, Powered by RTX

These models, based on the powerful mixture-of-experts (MoE) architecture, offer advanced features like instruction following, tool usage, and chain-of-thought reasoning. Supporting a context length of up to 131,072 tokens, they’re ideally suited for deep research, multi-document analysis, and complex agentic AI workflows.

Optimized to run on RTX AI PCs and workstations, the models can now achieve up to 256 tokens per second on GPUs like the GeForce RTX 5090. This optimization extends across tools like Ollama, llama.cpp, and Microsoft AI Foundry Local, all designed to bring professional-grade inference into everyday computing.

MXFP4 Precision Unlocks Performance Without Sacrificing Quality

These are also the first models using the new MXFP4 precision format, balancing high output quality with significantly reduced computational demands. This opens the door to advanced AI use on local machines without the resource burdens typically associated with large-scale models.

Whether you’re using a GeForce RTX 4080 with 16GB of VRAM or a professional RTX 6000, these models can run seamlessly with top-tier speed and efficiency.

Ollama: The Simplest Path to Personal AI

For those eager to try out OpenAI’s models with minimal setup, Ollama is the go-to solution. With native RTX optimization, it enables point-and-click interaction with GPT-OSS models through a modern UI. Users can feed in PDFs, images, and large documents with ease — all while chatting naturally with the model.

Ollama’s interface also includes support for multimodal prompts and customizable context lengths, giving creators and professionals more control over how their AI responds and reasons.

Advanced users can tap into Ollama’s command-line interface or integrate it directly into their apps using the SDK, extending its power across development pipelines.
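
As a concrete starting point, here is a minimal sketch of calling a locally pulled GPT-OSS model through Ollama's Python SDK; the ollama package, a running Ollama daemon, and the gpt-oss:20b model tag are assumptions to check against your own setup.

```python
# Minimal sketch: chat with a locally served GPT-OSS model via the Ollama
# Python SDK. Assumes the ollama package is installed, the Ollama daemon is
# running, and the model has been pulled (the gpt-oss:20b tag is assumed).
import ollama

response = ollama.chat(
    model="gpt-oss:20b",
    messages=[
        {
            "role": "user",
            "content": "Explain chain-of-thought reasoning in one short paragraph.",
        },
    ],
)
print(response["message"]["content"])
```

The same call can be swapped for the command-line interface when no SDK is needed, or wrapped in a loop to build the agentic workflows described above.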

More Tools, More Flexibility

Beyond Ollama, developers can explore GPT-OSS on RTX via:

  • llama.cpp — with CUDA Graphs and low-latency enhancements tailored for NVIDIA GPUs
  • GGML Tensor Library — community-driven library with Tensor Core optimization
  • Microsoft AI Foundry Local — a robust, on-device inferencing toolkit for Windows, built on ONNX Runtime and CUDA

These tools give AI builders unprecedented flexibility, whether they’re building autonomous agents, coding assistants, research bots, or productivity apps — all running locally on AI PCs and workstations.

A Push Toward Local, Open Innovation

As OpenAI steps into the open-source ecosystem with NVIDIA’s hardware advantage, developers worldwide now have access to state-of-the-art models without being tethered to the cloud.

The ability to run long-context models with high-speed output opens new possibilities in real-time document comprehension, enterprise chatbots, developer tooling, and creative applications — with full control and privacy.

NVIDIA’s continued support through resources like the RTX AI Garage and AI Blueprints means the community will keep seeing evolving tools, microservices, and deployment solutions to push local AI even further.

Apple iPhone

On Apple’s Q3 2025 earnings call, CEO Tim Cook dropped a milestone without dramatic flair—but with major implications. The tech giant has officially shipped over 3 billion iPhones since launching the original model back in 2007. Though brief in delivery, the announcement signified something profound: the iPhone’s enduring influence over the mobile industry and modern digital life.

From One to Three Billion: A Timeline of iPhone Domination

Apple sold its first billion iPhones by July 2016—nine years after the iPhone’s debut. The 2-billion milestone went unannounced, though market analysts estimate it occurred around September 2021. Now, just under four years later, Apple has crossed the 3-billion threshold, proving that the device still commands unmatched market relevance despite global smartphone saturation and competition.

Why This Milestone Matters in 2025

What makes this achievement even more noteworthy is that Apple hasn’t reported unit sales of its hardware since 2018. Back then, CFO Luca Maestri shifted focus away from sales figures, emphasizing revenue and ecosystem strength instead. Yet, even without official quarterly tallies, crossing 3 billion shipped iPhones underscores the company’s dominance in hardware, brand loyalty, and user retention.

iPhone: Still the Centerpiece of the Apple Ecosystem

The iPhone’s success isn’t just about the device itself. Its longevity is rooted in the expansive ecosystem Apple has built around it—iOS, the App Store, iCloud, AppleCare, and now AI-powered features. As the company moves deeper into artificial intelligence integration, the iPhone remains the cornerstone of that digital universe.

With billions of units shipped and a growing base of active users, Apple’s flagship product continues to evolve—leading not only in numbers but in innovation. As Cook hinted, the journey is far from over. The next billion may come faster than expected.

MIT's Brain Study on frequent ChatGPT users

A Shocking Study That Raises Eyebrows

A brain-scan study conducted over four months by researchers at MIT suggests that significant cognitive consequences are tied to prolonged ChatGPT usage. While the AI tool undoubtedly boosts productivity, its frequent use appears to undermine memory, brain connectivity, and mental effort.

Reduced Brain Activity in Everyday Users

The study followed a group of participants who used ChatGPT on a regular basis and found a 47% decline in brain connectivity scores, from 79 down to 42 points. Perhaps most alarming, 83.3% of users couldn’t recall even a single sentence they had read or generated just minutes earlier. Even after they stopped using AI, participants showed only minimal signs of cognitive recovery or re-engagement.

Efficiency vs. Effort

Looking at the bigger picture, ChatGPT made users 60% faster at completing tasks, especially essays and written reports. But those outputs were described as robotic, lacking depth, emotion, and human insight. Users also expended 32% less mental effort on average, signaling a troubling trend: speed was gained at the cost of real thinking.

Building a Foundational Understanding

Interestingly, the top-performing group in the study started without any AI assistance, building a foundation of understanding before introducing ChatGPT into their workflow. These participants retained better memory, exhibited stronger brain activity, and produced the most well-rounded content. This approach suggests that AI should be a scaffold, not a crutch.

Dulling the Blade of the Mind

MIT’s findings point toward a growing concern: overdependence on AI may be eroding our cognitive resilience. The study emphasizes that using ChatGPT as a shortcut, especially in younger users, might hamper long-term intellectual development. Early exposure without structured guidance could potentially flatten the curve of curiosity and critical reasoning.

Redefining the Role of AI in Learning

Rather than sounding a death knell for AI tools, the MIT study encourages thoughtful integration. The key takeaway is that AI should assist and direct your thinking, not replace it. The question we must now ask is: how do we ensure AI remains an enhancement tool and not a substitute for the human mind?
