Posts tagged with "AI technology"

PROMPTFLUX malware

Google’s Threat Intelligence Group (GTIG) has identified an experimental malware family known as PROMPTFLUX — a strain that doesn’t just execute malicious code, but rewrites itself using artificial intelligence.

Unlike traditional malware that depends on static commands or fixed scripts, PROMPTFLUX interacts directly with Google Gemini’s API to generate new behaviours on demand, effectively creating a shape-shifting digital predator capable of evading conventional detection methods.

A Glimpse into Adaptive Malware

PROMPTFLUX represents a major shift in how attackers use technology. Instead of pre-coded evasion routines, this malware dynamically queries AI models like Gemini for what GTIG calls “just-in-time obfuscation.” In simpler terms, it asks the AI to rewrite parts of its own code whenever needed — ensuring no two executions look alike.

This makes traditional, signature-based antivirus systems nearly powerless, as the malware continuously changes its fingerprint, adapting in real time to avoid detection.

How PROMPTFLUX Operates

The malware reportedly uses Gemini’s capabilities to generate new scripts or modify existing ones mid-operation. These scripts can alter function names, encrypt variables, or disguise malicious payloads — all without human intervention.

GTIG researchers observed that PROMPTFLUX’s architecture allows it to:

  • Request on-demand functions through AI queries
  • Generate obfuscated versions of itself in real time
  • Adapt its attack vectors based on environmental responses

While the malware is still in its developmental stages with limited API access, the discovery underscores how AI can be weaponised in cybercrime ecosystems.

Google’s Containment and Response

Google has moved swiftly to disable the assets and API keys associated with the PROMPTFLUX operation. According to GTIG, there is no evidence of successful attacks or widespread compromise yet. However, the incident stands as a stark warning — attackers are now experimenting with semi-autonomous, AI-driven code.

The investigation revealed that the PROMPTFLUX samples found so far contain incomplete functions, hinting that hackers are still refining the approach. But even as a prototype, it highlights the growing intersection of machine learning and malicious automation.

A Growing Underground AI Market

Experts warn that PROMPTFLUX is just the beginning. A shadow economy of illicit AI tools is emerging, allowing less-skilled cybercriminals to leverage AI for advanced attacks. Underground forums are now offering AI-powered reconnaissance scripts, phishing generators, and payload enhancers.

State-linked groups from North Korea, Iran, and China have reportedly begun experimenting with similar techniques — using AI to streamline reconnaissance, automate social engineering, and even mimic human operators in digital intrusions.

Defenders Turn to AI Too

The cybersecurity battle is no longer human versus human — it’s AI versus AI. Defenders are now deploying AI-driven tools such as Google’s “Big Sleep” to identify anomalies, reverse-engineer adaptive code, and trace AI-generated obfuscation patterns.

Security teams are being urged to:

  • Prioritize behaviour-based detection over static signature scans (see the sketch after this list)
  • Monitor API usage patterns for suspicious model interactions
  • Secure developer credentials and automation pipelines against misuse
  • Invest in AI-driven defensive frameworks that can predict evasive tactics
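
To make the first of these recommendations concrete, here is a minimal sketch of behaviour-based correlation. It is an illustration built on assumed inputs (the event schema, process IDs, and domain list are invented for the example), not a description of any vendor’s detection logic.

```python
# Minimal sketch of behaviour-based detection for AI-assisted malware.
# The event schema and domain list are illustrative assumptions.

LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api.openai.com",
}

SCRIPT_EXTENSIONS = (".vbs", ".ps1", ".js")


def flag_llm_assisted_processes(events):
    """Return PIDs that both contacted an LLM API and wrote a script file.

    `events` is an iterable of dicts such as:
        {"pid": 4242, "type": "net", "domain": "api.openai.com"}
        {"pid": 4242, "type": "file_write", "path": "C:/tmp/stage2.vbs"}
    """
    contacted_llm, wrote_script = set(), set()
    for event in events:
        if event["type"] == "net" and event.get("domain") in LLM_API_DOMAINS:
            contacted_llm.add(event["pid"])
        elif event["type"] == "file_write" and \
                event.get("path", "").lower().endswith(SCRIPT_EXTENSIONS):
            wrote_script.add(event["pid"])
    # A single process doing both is a PROMPTFLUX-style red flag worth triaging.
    return contacted_llm & wrote_script
```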

The Future: Cybersecurity in the Age of Adaptive Intelligence

PROMPTFLUX marks the early stage of a new class of cyber threats — self-evolving malware. As AI becomes more integrated into both legitimate development and malicious innovation, defenders must evolve just as quickly.

The next generation of cybersecurity will depend not only on firewalls and encryption but on the ability to detect intent — to distinguish between machine creativity and machine deception.

Gemma 3 270M

Artificial Intelligence is no longer limited to powerful servers and high-end computers. With the rise of mobile-first technology, there’s a growing need for models that are light, efficient, and accessible on everyday devices. Google has stepped into this space with Gemma 3 270M, a compact open-source AI model that brings the power of personalization directly to smartphones and IoT systems.

What Makes Gemma 3 270M Different?

Unlike large-scale AI models that rely heavily on cloud-based infrastructure, Gemma 3 270M is built to run directly on devices with limited hardware capabilities. With 270 million parameters, it balances performance with efficiency, making it an ideal fit for edge computing.
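
As a rough illustration of what that means in practice, here is a minimal sketch of loading the model locally with the Hugging Face transformers library. It assumes access to the published google/gemma-3-270m checkpoint and enough memory for the full-precision weights; quantised INT4 builds are distributed separately.

```python
# Minimal sketch: running Gemma 3 270M on-device with Hugging Face
# transformers. Assumes the google/gemma-3-270m checkpoint is accessible.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize in one sentence: on-device AI keeps user data local."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```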

Key highlights include:

  • Energy efficiency designed for long-term sustainability.
  • Low hardware dependency, reducing the need for costly processors.
  • Quantisation-aware training, enabling smooth performance on formats like INT4.
  • Instruction-following and text structuring using a robust 256,000-token vocabulary.

Why On-Device AI Matters

On-device AI eliminates the constant need to connect to cloud servers, which brings two big advantages:

  1. Stronger Privacy: Sensitive user data doesn’t need to be uploaded and stored externally.
  2. Faster Responses: Tasks like personalization, text generation, or analysis can happen instantly without latency issues.

For industries like healthcare wearables, autonomous IoT systems, and smart assistants, this could be a game-changer.

Environmental and Accessibility Benefits

By consuming less energy and relying less on server farms, Gemma 3 270M reduces the carbon footprint of AI usage. It also creates opportunities for startups, smaller firms, and independent developers who don’t have access to expensive cloud infrastructure. This aligns with Google’s vision of democratizing AI for all.

Built-in Safeguards and Responsible Use

To address safety concerns, Google has integrated ShieldGemma, a system designed to minimize risks of harmful outputs. However, experts point out that like any open-source technology, careful deployment will be essential to avoid misuse.

What’s Next for Gemma 3 270M?

Google has hinted at expanding Gemma with multimodal capabilities, enabling it to process not just text but also images, audio, and possibly video. This step would make it even more versatile and align it closer with the broader Gemini ecosystem.

Gemma 3 270M is more than just a compact AI model — it represents a shift towards decentralization and sustainability in artificial intelligence. By enabling on-device AI for mobiles and IoT devices, Google is paving the way for a future where AI is faster, greener, and more accessible to everyone.

GPT-5

OpenAI Faces Backlash After GPT-5 Release
OpenAI’s unveiling of its much-anticipated GPT-5 model has stirred a wave of dissatisfaction among its loyal user base. While the company showcased GPT-5 as a major upgrade in coding, reasoning, accuracy, and multi-modal capabilities, the response from many paying subscribers was anything but celebratory.

Why GPT-5 Hasn’t Won Over Loyal Users
Despite technical improvements and a lower hallucination rate, long-time ChatGPT users say GPT-5 has lost something far more important — its personality. The new model, they argue, delivers shorter, less engaging responses that lack the emotional warmth and conversational depth of its predecessor, GPT-4o. The disappointment has been amplified by OpenAI’s decision to discontinue several older models, including GPT-4o, GPT-4.5, GPT-4.1, o3, and o3-pro, leaving users with no way to return to their preferred options.

Social Media Pushback Intensifies
On Reddit, the ChatGPT community has become a focal point for criticism. Some users compared the removal of older models to losing a trusted colleague or creative partner. GPT-4o, in particular, was praised for its “voice, rhythm, and spark” — qualities that many claim are missing in GPT-5. Others criticized OpenAI’s sudden removal of eight models without prior notice, calling it disruptive to workflows that relied on different models for specific tasks like creative writing, deep research, and logical problem-solving.

Accusations of Misrepresentation
Adding fuel to the backlash, some users have accused OpenAI of misleading marketing during the GPT-5 launch presentation. Allegations include “benchmark-cheating” and the use of deceptive bar charts to exaggerate GPT-5’s performance. For some, this perceived dishonesty was the final straw, prompting them to cancel their subscriptions entirely.

The Bigger Picture for AI Adoption
This controversy highlights an evolving tension in AI development — the balance between technical progress and user experience. While companies often focus on measurable improvements, users place equal value on familiarity, emotional connection, and trust. OpenAI now faces the challenge of addressing the concerns of a vocal segment of its community while continuing to innovate in a competitive AI market.

GPT-OSS

A New Era of Local Inference Begins

OpenAI’s breakthrough open-weight GPT-OSS models are now available with performance optimizations specifically designed for NVIDIA’s RTX and RTX PRO GPUs. This collaboration enables lightning-fast, on-device AI inference — with no need for cloud access — allowing developers and enthusiasts to bring high-performance, intelligent applications directly to their desktop environments.

With models like GPT-OSS-20B and GPT-OSS-120B now available, users can harness the power of generative AI for reasoning tasks, code generation, research, and more — all accelerated locally by NVIDIA hardware.

Built for Developers, Powered by RTX

These models, based on the powerful mixture-of-experts (MoE) architecture, offer advanced features like instruction following, tool usage, and chain-of-thought reasoning. Supporting a context length of up to 131,072 tokens, they’re ideally suited for deep research, multi-document analysis, and complex agentic AI workflows.

Optimized to run on RTX AI PCs and workstations, the models can now achieve up to 256 tokens per second on GPUs like the GeForce RTX 5090. This optimization extends across tools like Ollama, llama.cpp, and Microsoft AI Foundry Local, all designed to bring professional-grade inference into everyday computing.

MXFP4 Precision Unlocks Performance Without Sacrificing Quality

These are also the first models using the new MXFP4 precision format, balancing high output quality with significantly reduced computational demands. This opens the door to advanced AI use on local machines without the resource burdens typically associated with large-scale models.
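
For intuition, here is a conceptual NumPy sketch of block-scaled 4-bit quantization in the spirit of MXFP4. The real format (OCP Microscaling) packs 32-element blocks of E2M1 values with a shared 8-bit scale, and NVIDIA’s kernels are far more sophisticated; this toy version only mimics the core idea of trading per-value precision for a shared per-block scale.

```python
# Toy sketch of MXFP4-style block quantization; illustration only.
import numpy as np

FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # representable magnitudes
BLOCK = 32  # number of elements sharing one scale in the MX formats


def quantize_mxfp4_like(x: np.ndarray) -> np.ndarray:
    """Quantize-dequantize `x` using one shared scale per 32-value block."""
    out = np.empty_like(x, dtype=np.float64)
    for i in range(0, len(x), BLOCK):
        blk = x[i:i + BLOCK]
        scale = max(np.abs(blk).max() / FP4_E2M1[-1], 1e-12)
        # Snap each scaled magnitude to the nearest representable FP4 value.
        idx = np.abs(np.abs(blk / scale)[:, None] - FP4_E2M1).argmin(axis=1)
        out[i:i + BLOCK] = np.sign(blk) * FP4_E2M1[idx] * scale
    return out


weights = np.random.randn(1024)
error = np.abs(weights - quantize_mxfp4_like(weights)).mean()
print(f"mean absolute quantization error: {error:.4f}")
```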

Whether you’re using an RTX 4080 with 16GB of VRAM or a professional RTX 6000, these models can run seamlessly with top-tier speed and efficiency.

Ollama: The Simplest Path to Personal AI

For those eager to try out OpenAI’s models with minimal setup, Ollama is the go-to solution. With native RTX optimization, it enables point-and-click interaction with GPT-OSS models through a modern UI. Users can feed in PDFs, images, and large documents with ease — all while chatting naturally with the model.

Ollama’s interface also includes support for multimodal prompts and customizable context lengths, giving creators and professionals more control over how their AI responds and reasons.

Advanced users can tap into Ollama’s command-line interface or integrate it directly into their apps using the SDK, extending its power across development pipelines.
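
A minimal sketch of that SDK route, assuming the ollama Python package is installed, the local server is running, and the model has already been pulled under the gpt-oss:20b tag:

```python
# Minimal sketch: chatting with GPT-OSS through Ollama's Python SDK.
# Assumes `ollama pull gpt-oss:20b` has fetched the model beforehand.
from ollama import chat

response = chat(
    model="gpt-oss:20b",
    messages=[
        {"role": "user",
         "content": "Summarize the benefits of local LLM inference."},
    ],
)
print(response.message.content)  # attribute access per recent ollama-python releases
```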

More Tools, More Flexibility

Beyond Ollama, developers can explore GPT-OSS on RTX via:

  • llama.cpp — with CUDA Graphs and low-latency enhancements tailored for NVIDIA GPUs
  • GGML Tensor Library — community-driven library with Tensor Core optimization
  • Microsoft AI Foundry Local — a robust, on-device inferencing toolkit for Windows, built on ONNX Runtime and CUDA

These tools give AI builders unprecedented flexibility, whether they’re building autonomous agents, coding assistants, research bots, or productivity apps — all running locally on AI PCs and workstations.

A Push Toward Local, Open Innovation

As OpenAI steps into the open-source ecosystem with NVIDIA’s hardware advantage, developers worldwide now have access to state-of-the-art models without being tethered to the cloud.

The ability to run long-context models with high-speed output opens new possibilities in real-time document comprehension, enterprise chatbots, developer tooling, and creative applications — with full control and privacy.

NVIDIA’s continued support through resources like the RTX AI Garage and AI Blueprints means the community will keep seeing evolving tools, microservices, and deployment solutions to push local AI even further.

MIT’s Brain Study on Frequent ChatGPT Users

A Shocking Study That Raises Eyebrows

A brain-scan study conducted over four months by researchers at MIT reveals that significant cognitive consequences are tied to prolonged ChatGPT usage. While the AI tool undoubtedly boosts productivity, its frequent use appears to undermine memory, brain connectivity, and mental effort.

Reduced Brain Activity in Everyday Users

The study followed a group of participants who used ChatGPT on a regular basis and found a 47% decline in brain connectivity scores, from 79 down to 42 points. Perhaps most alarming, 83.3% of users couldn’t recall even a single sentence that they had read or generated just a few minutes earlier. Even after they stopped using the AI, participants showed only minimal signs of cognitive recovery or re-engagement.

Efficiency vs. Effort

Looking at the bigger picture, ChatGPT made users 60% faster at completing tasks, especially essays and written reports. But these outputs were described as robotic, lacking depth, emotion, and human insight. Users also expended 32% less mental effort on average, signaling a troubling trend: speed was gained, but at the cost of real thinking.

Building a Foundational Understanding

Interestingly, the top-performing group in the study started without any AI assistance, building a foundation of understanding before introducing ChatGPT into their workflow. These participants retained better memory, exhibited stronger brain activity, and produced the most well-rounded content. This approach suggests that AI should be a scaffold, not a crutch.

Dulling the Blade of the Mind

MIT’s findings point toward a growing concern: overdependence on AI may be eroding our cognitive resilience. The study emphasizes that using ChatGPT as a shortcut, especially in younger users, might hamper long-term intellectual development. Early exposure without structured guidance could potentially flatten the curve of curiosity and critical reasoning.

Redefining the Role of AI in Learning

Rather than sounding a death knell for AI tools, the MIT study encourages thoughtful integration. The significant takeaway: AI should be used as an assistant that directs your thinking, not one that replaces it. The question we must now ask is how to ensure AI remains an enhancement tool, not a substitute for the human mind.

GitHub Spark

A Radical Leap in No-Code Development
GitHub has unveiled “Spark,” a groundbreaking tool that could redefine how we create software. Spark enables users to build functional web applications simply by using natural language prompts—no coding experience required. This innovation comes from GitHub Next, the company’s experimental division, and offers both OpenAI and Claude Sonnet models for building and refining ideas.

More Than Just Code Generation
Unlike earlier AI tools that only generate code snippets, Spark goes a step further. It not only creates the necessary backend and frontend code but runs the app and shows a live, interactive preview. This allows creators to immediately test and modify their applications using further prompts—streamlining development cycles and reducing friction.

A Choice of Models for Precision
Spark users can choose from a selection of top-tier AI models: Claude 3.5 Sonnet, OpenAI’s o1-preview, o1-mini, or the flagship GPT-4o. While OpenAI is known for tuning models to support software logic, Claude Sonnet is recognized for its superior technical reasoning, especially in debugging and interpreting code.

Visualizing Ideas with Variants
Not sure how you want your micro app to look? Spark has a “revision variants” feature. This allows you to generate multiple visual and functional versions of an app, each carrying subtle differences. This feature is ideal for ideation, rapid prototyping, or pitching concepts.

Collaboration and Deployment Made Easy
GitHub Spark isn’t just about building—it also simplifies deployment and teamwork. One-click deployment options and Copilot agent collaboration features make it easy for teams to iterate faster and smarter. Whether you’re a seasoned developer or a startup founder with no tech background, Spark makes execution accessible.

A Message from GitHub’s CEO
Thomas Dohmke, CEO of GitHub, emphasized Spark’s significance in a recent statement on X (formerly Twitter):

“In the last five decades of software development, producing software required manually converting human language into programming language… Today, we take a step toward the ideal magic of creation: the idea in your head becomes reality in a matter of minutes.”

Pricing and Availability
GitHub Spark is currently available to Copilot Pro+ users. The subscription costs $39 per month or $390 per year, which includes 375 Spark prompts. Additional messages can be purchased at $0.16 per prompt.

AI

AI Is Growing Up, and So Should Its Users

A ‘Hitler Moment’ That Feels Dated

In June 2025, Elon Musk’s AI chatbot Grok stirred up outrage when it stated, “Hitler did good things too,” in response to a user’s prompt. As expected, the internet lit up—memes, criticism, and outrage poured in. But for seasoned AI watchers, this wasn’t a shocking event. It was a tired replay of a pattern we’ve seen since the days of Microsoft’s Tay or the early missteps of ChatGPT. The reaction felt more like déjà vu than scandal.

Prompt Engineering for Controversy Is Played Out

In 2021, tricking an AI into making offensive statements felt novel. But in 2025, it feels stale. As AI becomes more sophisticated, the bar for meaningful engagement has risen. Deliberately provoking AI into controversy isn’t just immature—it’s out of touch with how these tools are actually being used.

Today’s AI Users Want Results

Today’s AI users are running businesses, designing code, crafting lesson plans, and streamlining workflows. They’re not interested in childish games—they want intelligent collaboration. The typical AI user today is a lawyer, an entrepreneur, a student, or a teacher—not someone testing the system’s “shock factor.”

The Grok Incident Is a User Problem

Yes, AI moderation can improve, and systems need better guardrails. But the Grok incident isn’t a failure of technology—it’s a failure of user intent. Provoking AI for shock value reflects more on the user than the tool. It’s like using a microscope to hammer a nail—technically possible, but completely missing the point.

From Gimmicks to Groundbreaking

With models like GPT-4o handling multimodal input, Claude summarizing books, and Gemini writing complex code, we’re entering an era of real transformation. Trying to get an AI to say something edgy today feels like hacking a calculator to spell “BOOBS”—it’s been done, and no one’s impressed.

Time to Raise the Standard

It’s time for users to evolve. Intelligent tools deserve intelligent interaction. AI should be encouraged to handle difficult conversations with nuance and accuracy, and users should approach it with maturity and purpose. We need fewer stunts and more stories of AI creating real impact.


Google’s new AI Mode in Search is making waves—not for its capabilities, but for the data it’s not sharing. SEO experts and digital marketers are raising alarms about a concerning development: clicks originating from AI Mode are currently untrackable. Whether it’s Google Search Console or third-party analytics platforms, the traffic from this new search layer appears to be cloaked in complete invisibility.

What’s Really Happening
The issue came to light when Tom Critchlow, EVP of audience growth at Raptive, flagged discrepancies in click data. The problem was soon confirmed by Patrick Stox of Ahrefs, who found that clicks from AI Mode links do not appear in Search Console. Even worse, standard analytics platforms classify such visits as either Direct or Unknown. The culprit? The use of the noreferrer attribute on AI Mode links, which effectively strips all referral information that could have identified the source.
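
A simplified sketch shows why the attribution breaks. The channel rules below are illustrative rather than any specific analytics product’s logic: when the browser honours rel="noreferrer", it sends no Referer header at all, so the classifier has nothing to attribute.

```python
# Why noreferrer clicks land in Direct/Unknown: with no Referer header,
# a channel classifier cannot tie the visit back to Google. The host
# list and labels below are illustrative assumptions.
from urllib.parse import urlparse

SEARCH_ENGINE_HOSTS = {"www.google.com", "www.bing.com", "duckduckgo.com"}


def classify_channel(referrer: str | None) -> str:
    if not referrer:  # rel="noreferrer" strips this entirely
        return "Direct / Unknown"
    host = urlparse(referrer).netloc
    return "Organic Search" if host in SEARCH_ENGINE_HOSTS else "Referral"


print(classify_channel("https://www.google.com/search?q=ai+mode"))  # Organic Search
print(classify_channel(None))  # Direct / Unknown, where AI Mode clicks land
```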

The Industry Reacts: Is This ‘Not Provided’ All Over Again?
Veteran SEO strategist Lily Ray called it “Not Provided 2.0”, drawing a parallel to Google’s earlier move to encrypt keyword data. Her theory is straightforward: Google does not want the public or publishers to know how little traffic AI Mode actually drives. Without access to hard data, claims of AI Mode enhancing web traffic remain unverifiable. That lack of transparency is breeding mistrust, especially when Google continues to tout that AI is improving the quality of search visits.

Google’s Mixed Messaging
Google has not fully clarified whether this lack of visibility is intentional or a glitch. Its official help documentation claims AI features—including AI Mode and Overviews—are included in overall traffic reports in Search Console. Yet, when one examines the detailed documentation, there is no mention of AI Mode at all. Only AI Overviews are referenced.

Adding to the confusion, a recent Google blog post encouraged site owners to “focus less on clicks” and more on the “overall value” of visits. It seems to suggest a broader shift away from click-through metrics as a core indicator of success. But without any clear alternatives offered, marketers are left without the tools they need to measure performance accurately.

A Fix Coming Soon?
In a comment on LinkedIn, Google’s John Mueller acknowledged the issue and noted that he had already passed it on to the internal team. However, he offered no confirmation on whether the lack of visibility is a bug or an intentional design choice. As of now, site owners, analysts, and SEO professionals remain in the dark.

What This Means for Publishers and Marketers
The lack of referrer data from AI Mode is more than an inconvenience—it’s a fundamental barrier to data-driven decision-making. In an environment where content performance and user behavior should guide strategy, hiding traffic sources makes it nearly impossible to allocate resources wisely or understand user journeys.

While AI continues to reshape how information is presented, the silence surrounding its impact on traffic raises uncomfortable questions. For a company that once built its empire on the promise of transparency and reliable search metrics, this new direction feels like a step backward.

Until clarity emerges or Google restores visibility, the clicks from AI Mode will remain in the shadows, leaving publishers with more questions than answers.


In a striking revelation that underscores the accelerating shift in software development, Microsoft CEO Satya Nadella disclosed that artificial intelligence is now responsible for generating as much as 30% of the code within the company’s internal repositories. Speaking at Meta’s inaugural LlamaCon AI developer summit in Menlo Park, California, Nadella emphasized that this figure is steadily rising — a clear signal that generative AI is becoming deeply embedded in Microsoft’s engineering workflows.

Nadella made this statement during a candid conversation with Meta CEO Mark Zuckerberg, where the two tech giants discussed the growing role of AI in shaping their companies’ futures. “I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,” Nadella noted before the live audience, hinting at a not-so-distant future where machines shoulder the bulk of code production.

Zuckerberg, while not quoting exact figures for Meta, echoed the sentiment. He revealed that Meta is currently developing AI systems capable of designing and evolving future iterations of its Llama models. “Our bet is sort of that in the next year probably … maybe half the development is going to be done by AI, as opposed to people,” Zuckerberg said, outlining a future where AI becomes the primary architect of digital infrastructure.

These insights are not isolated. They reflect a wider movement sweeping through the tech industry. Since the launch of ChatGPT in 2022, companies have increasingly turned to AI not just for customer interaction or content generation, but for core engineering functions. Google CEO Sundar Pichai recently said that over 25% of the company’s new code is now generated by AI tools. Shopify CEO Tobi Lutke went a step further, stating that employees must now demonstrate a task cannot be done by AI before requesting additional manpower. Meanwhile, Duolingo CEO Luis von Ahn announced a transition toward AI in place of some human contractors.

The implications go beyond operational efficiency. The dream now is software written faster, with fewer bugs, and better adaptability — a scenario that AI-powered development promises to bring closer to reality. Startups like Windsurf, reportedly in acquisition talks with OpenAI, are pushing the boundaries by offering “vibe coding” software that can generate entire applications from just a few lines of human input.

As Nadella and Zuckerberg continue to lead organizations that both create and adopt frontier AI models, their insights offer more than just a glimpse into internal operations — they signal a profound redefinition of how software itself will be imagined, designed, and deployed in the years to come.


In an era where artificial intelligence is rapidly becoming an everyday companion—from helping draft emails to brainstorming business ideas—the way we ask AI matters more than ever. Recognizing this shift, Google has released a comprehensive 68-page guide to help users get the most out of its AI tool, Gemini, available through the Vertex AI platform.

But don’t let the term “guide” intimidate you. This isn’t a dry manual full of jargon. Instead, it’s a practical, easy-to-understand roadmap for improving how we interact with AI. At its heart lies a skill called prompt engineering—a fancy term for something surprisingly intuitive: asking the right questions, the right way.


The Secret Sauce? Clear Instructions and Smart Examples

Let’s face it—AI isn’t a mind reader. The way we phrase our questions or commands, called prompts, can make or break the quality of the response we get. That’s where Google’s advice comes in clutch.

One of the standout tips? Lead with examples. Think of AI as someone you’re training. You don’t just throw tasks at a new hire without a walkthrough, right? Show AI what you want. Whether you’re looking for writing help, code suggestions, or teaching support, feeding the model examples sets the tone—and expectation.

Another key takeaway: simplicity wins. The more straightforward your prompt, the better the result. AI might be powerful, but it doesn’t benefit from overly complex sentences or instructions filled with “don’ts” and double negatives. Instead of saying “Don’t include fluff,” try “Write only the facts.” That subtle shift in framing can change the outcome dramatically.


Setting the Scene: Context Is King

Google’s guide also dives into more advanced territory—without making it feel like a tech lecture. One clever trick? Giving your prompt a role or goal. For instance, beginning your message with “You are a travel planner” instantly frames the interaction. It’s like handing the AI a script before it performs.

Adding context—like “the user is a college student with a part-time job”—helps the AI fine-tune its tone and content even more. You can also ask it to walk through its reasoning step-by-step, which often results in richer, more accurate answers.
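
Put together, those tips look something like the sketch below, which uses the google-genai SDK that fronts Vertex AI. The project ID and model name are placeholders, and the same prompt structure carries over to any chat model.

```python
# Sketch: role framing + user context + step-by-step reasoning, sent
# through the google-genai SDK. The project ID and model name are
# placeholders, not recommendations from the guide itself.
from google import genai

client = genai.Client(vertexai=True, project="your-project-id",
                      location="us-central1")

prompt = (
    "You are a travel planner.\n"                      # role/goal framing
    "Context: the user is a college student with a part-time job.\n"
    "Task: plan a three-day budget trip.\n"
    "Walk through your reasoning step by step, then give the plan.\n"
    "Write only the facts."                            # positive phrasing
)

response = client.models.generate_content(
    model="gemini-2.0-flash",  # any available Gemini model works here
    contents=prompt,
)
print(response.text)
```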


Why This Matters More Than You Think

Whether you’re using Gemini, ChatGPT, Claude, or any of the major AI platforms, prompt design is the one skill that can supercharge your results. And it doesn’t require coding. Just a little structure and clarity.

Google’s latest guide is not just about Gemini. It’s a playbook for anyone who wants to bridge the gap between human intent and machine output. In a world increasingly driven by automation and smart tools, knowing how to speak to AI is fast becoming a superpower.

So, whether you’re writing your first prompt or fine-tuning a workflow for a business use case, Google’s guide has laid down the blueprint. It’s clear, approachable, and a must-read for anyone looking to stay ahead in the age of intelligent tools.
