Gemma 3 270M

Artificial Intelligence is no longer limited to powerful servers and high-end computers. With the rise of mobile-first technology, there’s a growing need for models that are light, efficient, and accessible on everyday devices. Google has stepped into this space with Gemma 3 270M, a compact open-source AI model that brings the power of personalization directly to smartphones and IoT systems.

What Makes Gemma 3 270M Different?

Unlike large-scale AI models that rely heavily on cloud-based infrastructure, Gemma 3 270M is built to run directly on devices with limited hardware capabilities. With 270 million parameters, it balances performance with efficiency, making it an ideal fit for edge computing.

Key highlights include:

  • Energy efficiency designed for long-term sustainability.
  • Low hardware dependency, reducing the need for costly processors.
  • Quantization-aware training, enabling smooth performance in low-precision formats like INT4.
  • Instruction-following and text structuring using a robust 256,000-token vocabulary (see the sketch below).
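
For readers who want a feel for what on-device use looks like, here is a minimal sketch of local inference with the Hugging Face transformers library. The model id google/gemma-3-270m-it and the generation settings are illustrative assumptions, not an official recipe.

    # Minimal on-device inference sketch for Gemma 3 270M (Python).
    # Assumption: the instruction-tuned checkpoint is published under the id
    # "google/gemma-3-270m-it" and access has been granted on Hugging Face.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-3-270m-it"  # assumed model id, for illustration

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # 270M parameters fit comfortably in RAM/VRAM
        device_map="auto",           # falls back to CPU when no GPU is present
    )

    # Instruction following: ask the model to structure free text into fields.
    messages = [{"role": "user",
                 "content": "Extract the product and price from: 'The kettle costs $29.'"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

Because the model is trained with quantization in mind, the same checkpoint can also be exported to INT4 formats for an even smaller memory footprint on phones and IoT boards.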

Why On-Device AI Matters

On-device AI eliminates the constant need to connect to cloud servers, which brings two big advantages:

  1. Stronger Privacy: Sensitive user data doesn’t need to be uploaded and stored externally.
  2. Faster Responses: Tasks like personalization, text generation, or analysis can happen instantly without latency issues.

For industries like healthcare wearables, autonomous IoT systems, and smart assistants, this could be a game-changer.

Environmental and Accessibility Benefits

By consuming less energy and relying less on server farms, Gemma 3 270M reduces the carbon footprint of AI usage. It also creates opportunities for startups, smaller firms, and independent developers who don’t have access to expensive cloud infrastructure. This aligns with Google’s vision of democratizing AI for all.

Built-in Safeguards and Responsible Use

To address safety concerns, Google has integrated ShieldGemma, a system designed to minimize risks of harmful outputs. However, experts point out that like any open-source technology, careful deployment will be essential to avoid misuse.

What’s Next for Gemma 3 270M?

Google has hinted at expanding Gemma with multimodal capabilities, enabling it to process not just text but also images, audio, and possibly video. This step would make it even more versatile and align it closer with the broader Gemini ecosystem.

Gemma 3 270M is more than just a compact AI model — it represents a shift towards decentralization and sustainability in artificial intelligence. By enabling on-device AI for mobiles and IoT devices, Google is paving the way for a future where AI is faster, greener, and more accessible to everyone.


Generative Engine Optimization: The New Frontier of Digital Discovery

Search as we know it is evolving at lightning speed. The rise of AI-powered platforms is giving birth to a generative economy—one where businesses that act now, shifting from keyword-heavy SEO to brand-first, AI-driven strategies, will dominate for years to come. The rules that have governed traditional SEO are changing, and those who cling to them risk fading into irrelevance.

From Page One to AI Recommendations

In the old search world, being on “Page 1” of Google was the holy grail. Humans rarely clicked beyond it, so rankings were everything. But AI doesn’t operate like that. Large language models can scan and synthesize information from countless sites in seconds. This means your brand doesn’t have to be top-ranked to be surfaced—it needs to be trusted, contextually relevant, and clearly understood by these AI systems.

Why GEO Isn’t Just SEO 2.0

Generative Engine Optimization isn’t a fancier version of SEO. While it builds on SEO fundamentals, it’s driven by different priorities. Traditional SEO was about attracting the most traffic with the right keywords. GEO is about being chosen by intelligent systems as the most relevant, reliable answer to a query—whether or not you dominate keyword rankings.

The focus shifts from chasing search volume to positioning your brand where it matters most. It’s not about competing for generic terms—it’s about making your expertise, trust signals, and unique value unmistakable to AI-powered search.

The Brand-First Mindset

At the heart of GEO lies branding. This means:

  • Defining exactly who your customers are
  • Clearly articulating what you offer
  • Differentiating yourself from every other option

For too long, businesses have let keywords drive their messaging. But AI search thrives on depth, context, and credibility. If your brand story is fuzzy or inconsistent, intelligent systems will pass you over for competitors with clearer positioning.

How to Build GEO Authority

To win in this new landscape, you need to train AI systems to recognize and trust your brand. That involves:

  • On-page clarity: Make your value proposition explicit
  • Off-page validation: Secure mentions, case studies, interviews, and media coverage that reinforce your authority
  • Consistency across platforms: Ensure your brand voice and claims are uniform across every touchpoint

The more confident AI tools are in your brand’s credibility, the more likely you are to be recommended—whether for direct purchase decisions or complex, nuanced queries.

SEO and GEO: The Power of Bothism

Right now, we’re in a transitional stage. Traditional SEO still delivers value, but GEO is quickly rising. The smartest move is “bothism”—doing both SEO and GEO in tandem. This means keeping your search presence strong while building the brand authority that will matter more as AI search becomes dominant.

Businesses should be auditing where their mentions appear, how consistently they’re presented, and whether they’re visible beyond their own website. GEO deliverables—such as thought leadership content, media placements, and optimized brand profiles—create immediate value, making it easier to prove ROI compared to the slow climb of traditional SEO.

Why the Shift Can’t Wait

The shift to AI-driven search isn’t a slow burn—it’s happening now. As more people turn to generative tools for answers, businesses relying solely on traditional rankings will see traffic and leads drop. This is the moment to diversify, refine your brand presence, and prepare for a search environment where algorithms pick winners based on authority, not just backlinks.

Those who adapt early won’t just survive the change—they’ll define it.


OpenAI Faces Backlash After GPT-5 Release
OpenAI’s unveiling of its much-anticipated GPT-5 model has stirred a wave of dissatisfaction among its loyal user base. While the company showcased GPT-5 as a major upgrade in coding, reasoning, accuracy, and multi-modal capabilities, the response from many paying subscribers was anything but celebratory.

Why GPT-5 Hasn’t Won Over Loyal Users
Despite technical improvements and a lower hallucination rate, long-time ChatGPT users say GPT-5 has lost something far more important — its personality. The new model, they argue, delivers shorter, less engaging responses that lack the emotional warmth and conversational depth of its predecessor, GPT-4o. The disappointment has been amplified by OpenAI’s decision to discontinue several older models, including GPT-4o, GPT-4.5, GPT-4.1, o3, and o3-pro, leaving users with no way to return to their preferred options.

Social Media Pushback Intensifies
On Reddit, the ChatGPT community has become a focal point for criticism. Some users compared the removal of older models to losing a trusted colleague or creative partner. GPT-4o, in particular, was praised for its “voice, rhythm, and spark” — qualities that many claim are missing in GPT-5. Others criticized OpenAI’s sudden removal of eight models without prior notice, calling it disruptive to workflows that relied on different models for specific tasks like creative writing, deep research, and logical problem-solving.

Accusations of Misrepresentation
Adding fuel to the backlash, some users have accused OpenAI of misleading marketing during the GPT-5 launch presentation. Allegations include “benchmark-cheating” and the use of deceptive bar charts to exaggerate GPT-5’s performance. For some, this perceived dishonesty was the final straw, prompting them to cancel their subscriptions entirely.

The Bigger Picture for AI Adoption
This controversy highlights an evolving tension in AI development — the balance between technical progress and user experience. While companies often focus on measurable improvements, users place equal value on familiarity, emotional connection, and trust. OpenAI now faces the challenge of addressing the concerns of a vocal segment of its community while continuing to innovate in a competitive AI market.


A New Era of Local Inference Begins

OpenAI’s breakthrough open-weight GPT-OSS models are now available with performance optimizations specifically designed for NVIDIA’s RTX and RTX PRO GPUs. This collaboration enables lightning-fast, on-device AI inference — with no need for cloud access — allowing developers and enthusiasts to bring high-performance, intelligent applications directly to their desktop environments.

With models like GPT-OSS-20B and GPT-OSS-120B now available, users can harness the power of generative AI for reasoning tasks, code generation, research, and more — all accelerated locally by NVIDIA hardware.

Built for Developers, Powered by RTX

These models, based on the powerful mixture-of-experts (MoE) architecture, offer advanced features like instruction following, tool usage, and chain-of-thought reasoning. Supporting a context length of up to 131,072 tokens, they’re ideally suited for deep research, multi-document analysis, and complex agentic AI workflows.

Optimized to run on RTX AI PCs and workstations, the models can now achieve up to 256 tokens per second on GPUs like the GeForce RTX 5090. This optimization extends across tools like Ollama, llama.cpp, and Microsoft AI Foundry Local, all designed to bring professional-grade inference into everyday computing.

MXFP4 Precision Unlocks Performance Without Sacrificing Quality

These are also the first models using the new MXFP4 precision format, balancing high output quality with significantly reduced computational demands. This opens the door to advanced AI use on local machines without the resource burdens typically associated with large-scale models.

Whether you’re using a GeForce RTX 4080 with 16GB of VRAM or a professional RTX 6000, these models can run seamlessly with top-tier speed and efficiency.

Ollama: The Simplest Path to Personal AI

For those eager to try out OpenAI’s models with minimal setup, Ollama is the go-to solution. With native RTX optimization, it enables point-and-click interaction with GPT-OSS models through a modern UI. Users can feed in PDFs, images, and large documents with ease — all while chatting naturally with the model.

Ollama’s interface also includes support for multimodal prompts and customizable context lengths, giving creators and professionals more control over how their AI responds and reasons.

Advanced users can tap into Ollama’s command-line interface or integrate it directly into their apps using the SDK, extending its power across development pipelines.
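
As a concrete illustration, here is a minimal sketch of that SDK route in Python. The gpt-oss:20b model tag and the prompt are assumptions for illustration, and the model is presumed to have been pulled locally beforehand; this is not an official NVIDIA or OpenAI recipe.

    # Minimal local-inference sketch using Ollama's Python SDK.
    # Assumption: the 20B model has already been fetched (e.g. via
    # `ollama pull gpt-oss:20b`) and the Ollama service is running locally.
    import ollama

    response = ollama.chat(
        model="gpt-oss:20b",
        messages=[{"role": "user",
                   "content": "Summarize these research notes in three bullet points: ..."}],
    )

    # The reply text lives under message.content in the response.
    print(response["message"]["content"])

The same pattern extends to the larger GPT-OSS-120B checkpoint on workstation-class GPUs, and Ollama also supports streaming responses when a more interactive UI matters.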

More Tools, More Flexibility

Beyond Ollama, developers can explore GPT-OSS on RTX via:

  • llama.cpp — with CUDA Graphs and low-latency enhancements tailored for NVIDIA GPUs
  • GGML Tensor Library — community-driven library with Tensor Core optimization
  • Microsoft AI Foundry Local — a robust, on-device inferencing toolkit for Windows, built on ONNX Runtime and CUDA

These tools give AI builders unprecedented flexibility, whether they’re building autonomous agents, coding assistants, research bots, or productivity apps — all running locally on AI PCs and workstations.

A Push Toward Local, Open Innovation

As OpenAI steps into the open-source ecosystem with NVIDIA’s hardware advantage, developers worldwide now have access to state-of-the-art models without being tethered to the cloud.

The ability to run long-context models with high-speed output opens new possibilities in real-time document comprehension, enterprise chatbots, developer tooling, and creative applications — with full control and privacy.

NVIDIA’s continued support through resources like the RTX AI Garage and AI Blueprints means the community will keep seeing evolving tools, microservices, and deployment solutions to push local AI even further.

Anthropic Accuses OpenAI of Using Claude to Train GPT-5

In a sharp turn of events in the competitive world of artificial intelligence, Anthropic has publicly accused OpenAI of using its proprietary Claude coding tools to refine and train GPT-5, its highly anticipated next-generation language model. The allegation has stirred significant debate in the tech world, raising concerns about competitive ethics, data use, and the boundaries of AI benchmarking.

A Quiet Test Turns Loud: How the Allegation Surfaced

The dispute came to light following an investigative report by Wired, which cited insiders at Anthropic who claimed that OpenAI had been using Claude’s developer APIs—not just the public chat interface—to run deep internal evaluations of Claude’s capabilities. These tests reportedly focused on coding, creative writing, and handling of sensitive prompts related to safety, which gave OpenAI insight into Claude’s architecture and response behavior.

While such benchmarking might appear routine in the AI research world, Anthropic argues that OpenAI went beyond what is considered acceptable.

Anthropic Draws the Line on API Use

“Claude Code has become the go-to choice for developers,” Anthropic spokesperson Christopher Nulty said, adding that OpenAI’s engineers tapping into Claude’s coding tools to refine GPT-5 was a “direct violation of our terms of service.”

According to Anthropic’s usage policies, customers are strictly prohibited from using Claude to train or develop competing AI products. While benchmarking for safety is a permitted use, exploiting tools to optimize direct competitors is not.

That distinction, Anthropic claims, is what OpenAI crossed. The company has now limited OpenAI’s access to its APIs—allowing only minimal usage for safety benchmarking going forward.

OpenAI’s Response: Disappointed but Diplomatic

In a measured response, OpenAI’s Chief Communications Officer Hannah Wong acknowledged the API restriction but underscored the industry norm of cross-model benchmarking.

“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety,” Wong noted. “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”

The statement suggests OpenAI is seeking to maintain diplomatic ties despite the tensions.

A Pattern of Caution from Anthropic

This isn’t the first time Anthropic has shut the door on a competitor. Earlier this year, it reportedly blocked Windsurf, a coding-focused AI startup, over rumors of OpenAI’s acquisition interest. Jared Kaplan, Anthropic’s Chief Science Officer, had at the time stated, “It would be odd for us to be selling Claude to OpenAI.”

With GPT-5 reportedly close to release, the incident reveals how fiercely guarded innovation has become in the AI world. Every prompt, every tool, and every line of code has strategic value—and access to a rival’s system, even indirectly, can be a game-changer.

What This Means for the Future of AI Development

The AI landscape is becoming increasingly guarded. With foundational models becoming key differentiators for companies, control over access—especially to development tools and APIs—is tightening.

Anthropic’s defensive stance could be a sign of things to come: fewer shared benchmarks, more closed systems, and increased scrutiny over how AI labs test, train, and scale their models.

As for GPT-5, questions now swirl not only around its capabilities but also its developmental origins—a storyline that will continue to unfold in the months ahead.

MIT's Brain Study on Frequent ChatGPT Users

A Shocking Study That Raises Eyebrows

A striking brain-scan study conducted over four months by researchers at MIT reveals that significant cognitive consequences are tied to prolonged ChatGPT usage. While the AI tool undoubtedly boosts productivity, its frequent use appears to undermine memory, brain connectivity, and mental effort.

Reduced Brain Activity in Everyday Users

The study followed a group of participants who used ChatGPT on a regular basis and found a 47% decline in brain connectivity scores, from 79 down to 42 points. Perhaps most alarming, 83.3% of users couldn’t recall even a single sentence they had read or generated just a few minutes earlier. Even after they stopped using the AI, participants showed only minimal signs of cognitive recovery or re-engagement.

Efficiency vs. Effort

Looking at the bigger picture, ChatGPT made users 60% faster at completing tasks, especially essays and written reports. But those outputs were described as robotic, lacking depth, emotion, and human insight. Users also expended 32% less mental effort on average, signaling a troubling trade-off: speed was gained, but at the cost of real thinking.

Building a Foundational Understanding

Interestingly, the top-performing group in the study started without any AI assistance, building a foundation of understanding before introducing ChatGPT into their workflow. These participants retained better memory, exhibited stronger brain activity, and produced the most well-rounded content. This approach suggests that AI should be a scaffold, not a crutch.

Dulling the Blade of the Mind

MIT’s findings point toward a growing concern: overdependence on AI may be eroding our cognitive resilience. The study emphasizes that using ChatGPT as a shortcut, especially in younger users, might hamper long-term intellectual development. Early exposure without structured guidance could potentially flatten the curve of curiosity and critical reasoning.

Redefining the Role of AI in Learning

Rather than sounding a death knell for AI tools, the MIT study encourages thoughtful integration. The key takeaway: AI should be used as an assistant that directs your thinking, not one that replaces it. The question we must now ask is this: how do we ensure AI remains an enhancement tool and not a substitute for the human mind?


A Radical Leap in No-Code Development
GitHub has unveiled “Spark,” a groundbreaking tool that could redefine how we create software. Spark enables users to build functional web applications simply by using natural language prompts—no coding experience required. This innovation comes from GitHub Next, the company’s experimental division, and offers both OpenAI and Claude Sonnet models for building and refining ideas.

More Than Just Code Generation
Unlike earlier AI tools that only generate code snippets, Spark goes a step further. It not only creates the necessary backend and frontend code but runs the app and shows a live, interactive preview. This allows creators to immediately test and modify their applications using further prompts—streamlining development cycles and reducing friction.

A Choice of Models for Precision
Spark users can choose from a selection of top-tier AI models: Claude 3.5 Sonnet, OpenAI’s o1-preview, o1-mini, or the flagship GPT-4o. While OpenAI is known for tuning models to support software logic, Claude Sonnet is recognized for its superior technical reasoning, especially in debugging and interpreting code.

Visualizing Ideas with Variants
Not sure how you want your micro app to look? Spark has a “revision variants” feature. This allows you to generate multiple visual and functional versions of an app, each carrying subtle differences. This feature is ideal for ideation, rapid prototyping, or pitching concepts.

Collaboration and Deployment Made Easy
GitHub Spark isn’t just about building—it also simplifies deployment and teamwork. One-click deployment options and Copilot agent collaboration features make it easy for teams to iterate faster and smarter. Whether you’re a seasoned developer or a startup founder with no tech background, Spark makes execution accessible.

A Message from GitHub’s CEO
Thomas Dohmke, CEO of GitHub, emphasized Spark’s significance in a recent statement on X (formerly Twitter):

“In the last five decades of software development, producing software required manually converting human language into programming language… Today, we take a step toward the ideal magic of creation: the idea in your head becomes reality in a matter of minutes.”

Pricing and Availability
GitHub Spark is currently available to Copilot Pro+ users. The subscription costs $39 per month or $390 per year, which includes 375 Spark prompts. Additional messages can be purchased at $0.16 per prompt.


ChatGPT Crosses 2.5 Billion Daily Prompts

OpenAI’s generative AI tool, ChatGPT, is shattering records with over 2.5 billion daily prompts, a remarkable milestone that underscores the platform’s rapid global expansion. According to newly obtained data, this figure translates to an astonishing 912.5 billion annual interactions, highlighting how deeply embedded the AI chatbot has become in everyday digital workflows.

US Leads the Charge in Prompt Volume

Out of the billions of interactions processed each day, around 330 million originate from the United States, positioning the country as ChatGPT’s largest user base. A spokesperson from OpenAI has verified the accuracy of these figures, affirming the monumental scale at which the AI platform operates today.

Growth That Stuns Even the Tech Industry

What makes this surge even more notable is the meteoric rise in active users. From 300 million weekly users in December to over 500 million by March, the trajectory shows no signs of slowing. This exponential rise is not just a milestone for OpenAI—it represents a fundamental shift in how users interact with information and automation.

A Looming Threat to Google’s Search Supremacy

While Google still maintains dominance with 5 trillion annual searches, the momentum behind ChatGPT suggests a possible reshaping of the search engine landscape. Unlike Google’s keyword-based model, ChatGPT provides direct, human-like responses, offering users a more conversational and task-oriented experience.

Strategic Moves: AI Agent and Browser on the Way

Adding to its expanding arsenal, OpenAI recently launched ChatGPT Agent, a powerful tool capable of performing tasks on a user’s device autonomously. This marks a major step toward an all-in-one digital assistant. In addition, OpenAI is reportedly planning to launch a custom AI-powered web browser, designed to rival Google Chrome directly—an aggressive move that signals OpenAI’s ambitions beyond just chat.


Polish Programmer Defeats AI at AtCoder World Tour Finals 2025

In an era where artificial intelligence increasingly dominates conversations about the future of work, a major symbolic victory has made headlines: a human programmer has defeated AI in one of the world’s toughest coding competitions.

The Duel of the Decade: Man vs Machine

The AtCoder World Tour Finals 2025, hosted in Tokyo, introduced a landmark “Humans vs AI” event. Polish competitive programmer Przemysław Dębiak, known in coding circles as “Psyho”, took on a state-of-the-art AI model developed by OpenAI. Over a relentless 10-hour battle, Dębiak emerged victorious with a final score of 1.81 trillion, narrowly edging out the AI’s 1.65 trillion.

Humanity’s Grit Against Algorithmic Precision

The showdown was anything but easy. The challenge was set in the Heuristic Contest division, featuring an NP-hard optimisation problem—the kind that demands not just speed, but deep insight and improvisation. With 600 minutes on the clock and a five-minute cooldown between submissions, every second mattered.

Both human and AI operated on identical hardware, ensuring a level playing field. While the AI showed impressive consistency and outperformed the other 10 elite human contestants, it couldn’t surpass the sheer endurance and strategic thinking of Dębiak.

An Exhausting Yet Triumphant Moment

After the contest, Dębiak posted on X (formerly Twitter):

“I’m completely exhausted. … I’m barely alive. Humanity has prevailed (for now!).”

It wasn’t just a win; it was a statement—one that echoed across the tech and programming community. A moment of human triumph over an increasingly capable machine.

OpenAI Responds with Sportsmanship

OpenAI acknowledged the defeat gracefully.

“Our model took 2nd place at the AtCoder Heuristics World Finals! Congrats to the champion for holding us off this time.”

OpenAI CEO Sam Altman added his own understated salute:

“Good job psyho.”

The respect was mutual, rooted in the fact that Dębiak is a former OpenAI employee. The contest, therefore, became more than just a game—it was a face-off between the creator and the created.

Implications for the Future of Programming

While Dębiak’s win was deeply symbolic, OpenAI’s strong second-place finish poses profound questions. If AI can already rival the best under equal conditions, how far are we from full automation of high-skill domains like programming?

The AtCoder event may soon be remembered as a turning point—a final moment where human ingenuity visibly outshone machine efficiency in a fair battle.

For Now, Humanity Holds the Line

The future may tilt in AI’s favour, but for now, programmers everywhere are celebrating a rare and hard-fought victory. Dębiak’s triumph is not just a personal achievement, but a beacon for human resilience in the age of machines.


AI Is Growing Up, and So Should Its Users

A ‘Hitler Moment’ That Feels Dated

In June 2025, Elon Musk’s AI chatbot Grok stirred up outrage when it stated, “Hitler did good things too,” in response to a user’s prompt. As expected, the internet lit up—memes, criticism, and outrage poured in. But for seasoned AI watchers, this wasn’t a shocking event. It was a tired replay of a pattern we’ve seen since the days of Microsoft’s Tay or the early missteps of ChatGPT. The reaction felt more like déjà vu than scandal.

Prompt Engineering for Controversy Is Played Out

In 2021, tricking an AI into making offensive statements felt novel. But in 2025, it feels stale. As AI becomes more sophisticated, the bar for meaningful engagement has risen. Deliberately provoking AI into controversy isn’t just immature—it’s out of touch with how these tools are actually being used.

Today’s AI Users Want Results

Today’s AI users are running businesses, designing code, crafting lesson plans, and streamlining workflows. They’re not interested in childish games—they want intelligent collaboration. The typical AI user today is a lawyer, an entrepreneur, a student, or a teacher—not someone testing the system’s “shock factor.”

The Grok Incident Is a User Problem

Yes, AI moderation can improve, and systems need better guardrails. But the Grok incident isn’t a failure of technology—it’s a failure of user intent. Provoking AI for shock value reflects more on the user than the tool. It’s like using a microscope to hammer a nail—technically possible, but completely missing the point.

From Gimmicks to Groundbreaking

With models like GPT-4o handling multimodal input, Claude summarizing books, and Gemini writing complex code, we’re entering an era of real transformation. Trying to get an AI to say something edgy today feels like hacking a calculator to spell “BOOBS”—it’s been done, and no one’s impressed.

Time to Raise the Standard

It’s time for users to evolve. Intelligent tools deserve intelligent interaction. AI should be encouraged to handle difficult conversations with nuance and accuracy, and users should approach it with maturity and purpose. We need fewer stunts and more stories of AI creating real impact.

