Optical Illusions

Our eyes often play tricks on us, but scientists have discovered that some artificial intelligence (AI) systems can fall for the same illusions, and this is reshaping how we understand the human brain.

Take the Moon, for example. When it’s near the horizon, it appears larger than when it’s high in the sky, even though its actual size and the distance from Earth remain nearly constant. Optical illusions like this show that our perception doesn’t always match reality. While they are often seen as errors, illusions also reveal the clever shortcuts our brains use to focus on the most important aspects of our surroundings.

In reality, our brains only take in a “sip” of the visual world. Processing every detail would be overwhelming, so instead we focus on what’s most relevant. But what happens when a machine, a synthetic mind powered by artificial intelligence, encounters an optical illusion?

AI systems are designed to notice details humans often miss. This precision is why they can detect early signs of disease in medical scans. Yet some deep neural networks (DNNs), the backbone of modern AI, are surprisingly susceptible to the same visual tricks that fool us. This opens a new window into understanding how our own brains work.

“Using DNNs in illusion research allows us to simulate and analyze how the brain processes information and generates illusions,” says Eiji Watanabe, associate professor of neurophysiology at Japan’s National Institute for Basic Biology. Unlike human experiments, testing illusions on AI carries no ethical concerns.

No DNN, however, can experience all the illusions humans do. Although theories abound, the reasons we perceive certain illusions remain largely unexplained.

Studying people who don’t perceive illusions provides clues. For instance, one person who regained sight in his 40s after childhood blindness was not fooled by shape illusions like the Kanizsa square, where four circular fragments create the illusion of a square. Yet he could perceive motion illusions, such as the barber pole, where stripes seem to move upward on a rotating cylinder.

These observations suggest that our ability to detect motion is more robust than our perception of shapes, perhaps because we process motion earlier in infancy, or because shape recognition is more influenced by experience.

Brain imaging, such as fMRI, has also shown which regions of the brain activate when we see illusions and how they interact. Still, perception is subjective. A famous example is the “dress” photo from 2015, which viewers argued over as blue-and-black or white-and-gold. Such differences make illusions difficult to study objectively.

Now AI offers a new approach. Many AI systems, including chatbots like ChatGPT, use DNNs composed of artificial neurons inspired by the human brain. Watanabe and his colleagues investigated whether a DNN could replicate how humans perceive motion illusions, such as the “rotating snakes” illusion, a static pattern of colorful circles that appear to spin.

They used a DNN called PredNet, designed around the predictive coding theory. This theory suggests that the brain doesn’t simply process visual input passively. Instead, it predicts what it expects to see, then compares this to incoming sensory data, allowing faster perception. PredNet works similarly, predicting future video frames based on prior observations.
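PredNet itself is a deep convolutional network, but the core predictive-coding loop it is built around can be sketched in a few lines. In the toy below, the scalar "world" signal, the learning rate, and the function names are illustrative, not part of PredNet:

```python
def predictive_coding_step(prediction, observation, learning_rate=0.1):
    """One update in a minimal predictive-coding loop: compare the
    model's prediction to the incoming signal, then correct the
    prediction by a fraction of the prediction error."""
    error = observation - prediction          # the "surprise"
    new_prediction = prediction + learning_rate * error
    return new_prediction, error

# Toy example: the world is a constant signal of 1.0; the model
# starts with a wrong expectation and converges as error shrinks.
prediction = 0.0
for _ in range(50):
    prediction, error = predictive_coding_step(prediction, 1.0)

print(round(prediction, 3))  # close to 1.0: prediction error minimised
```

The point of the theory is that perception is this error-correction process, so anything that systematically biases the predictions, like the colour gradients in the rotating snakes pattern, can produce an illusion.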

Trained on natural landscape videos, PredNet had never seen an optical illusion before. After processing about a million frames, it learned essential rules of visual perception, including the characteristics of moving objects. When shown the rotating snakes illusion, the AI was fooled just like humans, supporting the predictive coding theory.

Yet differences remain. Humans experience motion differently in their central and peripheral vision, but PredNet perceives all circles as moving simultaneously. This is likely because PredNet lacks attention mechanisms: it cannot focus on a specific area the way the human eye can.

Even though AI can mimic some aspects of vision, no DNN fully experiences the range of human illusions. “ChatGPT may converse like a human, but its DNN works very differently from the brain,” Watanabe notes. Some researchers are even exploring quantum mechanics to better simulate human perception.

For example, the Necker cube, a famous ambiguous figure, can appear to flip between two orientations. Classical physics would suggest a fixed perception, but quantum-inspired models allow the system to “choose” one perspective over time. Ivan Maksymov in Australia developed a quantum-AI hybrid to simulate both the Necker cube and the Rubin vase, where a vase can also appear as two faces. The AI switched between interpretations like a human, with similar timing.
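Maksymov’s actual hybrid is far more elaborate, but the underlying idea, that a two-state system with an oscillating probability produces spontaneous perceptual flips, can be illustrated with a toy model. Everything below (the oscillation rate, time step, and sampling) is a hypothetical sketch, not the published model:

```python
import math
import random

def orientation_probability(t, omega=1.0):
    """Quantum-inspired two-state toy: the probability of seeing the
    Necker cube in orientation A oscillates over time, like a
    two-level quantum system, rather than staying fixed."""
    return math.cos(omega * t / 2) ** 2

random.seed(0)  # deterministic demo
percepts = []
for step in range(20):
    t = step * 0.5
    p_a = orientation_probability(t)
    percepts.append("A" if random.random() < p_a else "B")

print("".join(percepts))  # the reported percept alternates over time
```

A classical fixed-probability model would tend to lock onto one interpretation; the oscillating state lets the simulated observer switch between the two readings with human-like timing.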

Maksymov clarifies that this doesn’t mean our brains are quantum; rather, quantum models can better capture certain aspects of decision-making, such as how the brain resolves ambiguity.

Such AI systems could also help us understand how perception changes in unusual environments. Astronauts on the International Space Station experience optical illusions differently. For instance, the Necker cube tends to favor one orientation on Earth, but in orbit, astronauts see both orientations equally. This may be because gravity helps our brains judge depth, something that changes in free fall.

With the Universe holding so many wonders, astronauts and the rest of us will be glad to know there are ways to study when our eyes can be trusted.

GPU

The global graphics card market is heading into a turbulent phase. According to industry chatter, both AMD and Nvidia are preparing to substantially increase prices for their consumer GPUs this year. If the trend unfolds as expected, the first wave of hikes could begin as early as January for AMD and February for Nvidia, with further increases rolling out gradually through the rest of the year.

For everyday consumers, especially PC gamers, this signals a challenging period ahead as graphics cards become increasingly expensive.

Why GPUs Are Becoming More Expensive

At the core of these anticipated price hikes is the rapidly rising cost of memory and other critical components. The construction of large-scale AI data centres across the globe has created intense demand for GPUs and high-performance memory, pushing prices upward throughout the hardware supply chain.

Unlike previous cycles driven primarily by gaming or crypto mining, this surge is rooted in long-term infrastructure investment. AI companies are locking in massive quantities of hardware in anticipation of future needs, tightening supply for the consumer market.

Gradual Increases, Not a One-Time Jump

Industry sources suggest that these increases may not be limited to a single adjustment. Instead, prices are expected to rise incrementally over the course of the year. High-end models are likely to be affected the most, including Nvidia’s GeForce RTX 50 series and AMD’s upcoming Radeon RX 9000 lineup.

Some projections indicate that flagship GPUs could see dramatic shifts in pricing over time, reflecting both production costs and what the market is willing to bear.

AI’s Growing Appetite for Compute Power

The broader context behind these developments is the explosive growth of artificial intelligence. Leading AI firms are consuming GPUs at unprecedented rates. Executives across the tech industry have openly acknowledged that next-generation AI models will require exponentially more computing power than earlier systems.

This demand is not just theoretical. Companies are already stockpiling hardware, even as infrastructure challenges such as power availability limit how quickly these GPUs can be deployed. The result is sustained pressure on supply, with manufacturers prioritising enterprise and AI customers who can absorb higher prices.

What This Means for Gamers and PC Builders

For gamers and PC enthusiasts, the implications are clear. As supply tightens and prices rise, building or upgrading a gaming PC is likely to become significantly more expensive. Even mid-range components may see noticeable price increases due to basic supply-and-demand dynamics.

At the same time, the gaming industry itself is increasingly embracing AI in development, testing, and production workflows. This further ties the future of gaming hardware to the broader AI economy, making price relief unlikely in the near term.

A Market Redefined by AI Priorities

The GPU market is no longer driven solely by gamers and creators. AI has become the dominant force shaping pricing, availability, and long-term strategy for hardware manufacturers. While this shift fuels innovation, it also places everyday consumers at a disadvantage in an increasingly competitive market.

As 2026 progresses, buyers may need to rethink upgrade plans, explore alternative options, or simply prepare for a new reality where high-performance GPUs come at a much steeper cost.

Finland

Amid the global push to reduce emissions and make cities more resilient, Finland has stepped forward with an idea that feels both simple and revolutionary. Rather than letting the immense heat produced by data centres drift into the air unused, Finnish cities are capturing this energy and using it to warm homes, offices, and public spaces.

It’s a rare example of digital infrastructure directly improving everyday urban life, and it’s proving that sustainability can emerge from the most unexpected places.

The Hidden Heat in Our Digital Lives

Every click, stream, file upload, and transaction moves through servers. Those servers work hard, and they generate a surprising amount of heat. Cooling them consumes vast amounts of electricity, and until recently, this excess warmth was treated as waste.

Finland chose not to accept that waste as inevitable.

By treating data centres as potential heat producers instead of energy drains, the country has reimagined how digital infrastructure fits into the urban ecosystem.

How Finland Turns Data-Centre Heat into Urban Heating

Capturing What Was Once Lost

Large data centres produce continuous heat, which is collected through their cooling systems. Instead of being released outdoors, that heat is recovered and transferred into district heating networks.

Delivering Warmth Through City Pipes

District heating systems, common in Nordic countries, move hot water or steam through insulated pipelines that serve entire neighborhoods. Once the captured heat enters these networks, it becomes a reliable, low-carbon source of warmth for residential and commercial buildings.
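A rough back-of-the-envelope sketch shows why this matters at scale. All figures below are illustrative assumptions (typical heat-pump efficiency, a nominal winter heat demand per home), not Finnish utility data:

```python
def homes_heated(it_load_mw, cop=3.0, home_demand_kw=5.0):
    """Rough estimate of district-heating reach from data-centre heat.
    Nearly all of a data centre's IT electricity leaves as heat; a heat
    pump with the given COP upgrades it to district-heating temperature
    (adding its drive energy as extra heat), and each home is assumed
    to draw ~5 kW of heat in winter. Illustrative figures only."""
    recovered_heat_mw = it_load_mw
    # COP = delivered / electrical input, so delivered = source * COP / (COP - 1)
    delivered_mw = recovered_heat_mw * cop / (cop - 1)
    return int(delivered_mw * 1000 / home_demand_kw)

print(homes_heated(10))  # a 10 MW facility could warm ~3,000 homes
```

Even with conservative assumptions, a single mid-sized facility can heat a small district, which is why the approach scales with digital demand.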

A Perfect Fit for Winter Cities

In regions where winter temperatures can drop drastically, a steady supply of repurposed heat is not just efficient — it’s transformative.

Why This Innovation Matters

Energy Efficiency at Scale

Using waste heat dramatically cuts down on the energy required for traditional heating systems. What was once an environmental burden becomes a fuel source.

Lower Carbon Emissions

Replacing fossil-fuel-based heating with reclaimed data-centre heat significantly reduces the carbon footprint of entire urban districts.

Cost Savings for Communities

Because this heat would exist regardless, channeling it into homes offers municipalities and residents cleaner energy at lower long-term costs.

A Model That Grows with Digital Demand

As cloud services, AI, and global data usage increase, so too will the amount of recoverable heat. Finland’s system is inherently scalable: its energy source grows naturally with digital consumption.

A Sustainable Blueprint for Future Cities

Finland’s approach is more than a clever engineering solution. It’s a mindset shift: the belief that modern technology and environmental responsibility can reinforce each other rather than compete.

As cities worldwide grapple with rising energy demands and climate pressure, Finland’s system offers a clear path forward — one where innovation, practicality, and sustainability meet.

Turning waste into opportunity is not just a technical change; it’s a model of how cities can thrive smarter, cleaner, and more efficiently in the decades ahead.

PROMPTFLUX Malware

Google’s Threat Intelligence Group (GTIG) has identified an experimental malware family known as PROMPTFLUX — a strain that doesn’t just execute malicious code, but rewrites itself using artificial intelligence.

Unlike traditional malware that depends on static commands or fixed scripts, PROMPTFLUX interacts directly with Google Gemini’s API to generate new behaviours on demand, effectively creating a shape-shifting digital predator capable of evading conventional detection methods.

A Glimpse into Adaptive Malware

PROMPTFLUX represents a major shift in how attackers use technology. Instead of pre-coded evasion routines, this malware dynamically queries AI models like Gemini for what GTIG calls “just-in-time obfuscation.” In simpler terms, it asks the AI to rewrite parts of its own code whenever needed — ensuring no two executions look alike.

This makes traditional, signature-based antivirus systems nearly powerless, as the malware continuously changes its fingerprint, adapting in real time to avoid detection.

How PROMPTFLUX Operates

The malware reportedly uses Gemini’s capabilities to generate new scripts or modify existing ones mid-operation. These scripts can alter function names, encrypt variables, or disguise malicious payloads — all without human intervention.

GTIG researchers observed that PROMPTFLUX’s architecture allows it to:

  • Request on-demand functions through AI queries
  • Generate obfuscated versions of itself in real time
  • Adapt its attack vectors based on environmental responses

While still in developmental stages with limited API access, the discovery underscores how AI can be weaponised in cybercrime ecosystems.

Google’s Containment and Response

Google has moved swiftly to disable the assets and API keys associated with the PROMPTFLUX operation. According to GTIG, there is no evidence of successful attacks or widespread compromise yet. However, the incident stands as a stark warning — attackers are now experimenting with semi-autonomous, AI-driven code.

The investigation revealed that the PROMPTFLUX samples found so far contain incomplete functions, hinting that hackers are still refining the approach. But even as a prototype, it highlights the growing intersection of machine learning and malicious automation.

A Growing Underground AI Market

Experts warn that PROMPTFLUX is just the beginning. A shadow economy of illicit AI tools is emerging, allowing less-skilled cybercriminals to leverage AI for advanced attacks. Underground forums are now offering AI-powered reconnaissance scripts, phishing generators, and payload enhancers.

State-linked groups from North Korea, Iran, and China have reportedly begun experimenting with similar techniques — using AI to streamline reconnaissance, automate social engineering, and even mimic human operators in digital intrusions.

Defenders Turn to AI Too

The cybersecurity battle is no longer human versus human — it’s AI versus AI. Defenders are now deploying machine learning frameworks like “Big Sleep” to identify anomalies, reverse-engineer adaptive code, and trace AI-generated obfuscation patterns.

Security teams are being urged to:

  • Prioritize behaviour-based detection over static signature scans
  • Monitor API usage patterns for suspicious model interactions
  • Secure developer credentials and automation pipelines against misuse
  • Invest in AI-driven defensive frameworks that can predict evasive tactics
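The first two recommendations, behaviour-based detection and monitoring of API usage patterns, can be illustrated with a toy check. The client names, thresholds, and log format below are hypothetical; real systems would use far richer features (payload entropy, timing, prompt content):

```python
from collections import defaultdict

def flag_suspicious_clients(call_log, baseline_rate=20, factor=5):
    """Minimal behaviour-based check: count AI-API calls per client
    and flag anyone whose volume is far above the expected baseline.
    This only illustrates the monitoring idea, not a production IDS."""
    counts = defaultdict(int)
    for client_id in call_log:
        counts[client_id] += 1
    return [c for c, n in counts.items() if n > baseline_rate * factor]

# Hypothetical log: "svc-a" behaves normally, "proc-x" hammers the API
# the way a self-rewriting payload querying a model repeatedly would.
log = ["svc-a"] * 15 + ["proc-x"] * 500
print(flag_suspicious_clients(log))  # ['proc-x']
```

The defensive insight is that even if every sample of the malware looks different, its need to call an AI API repeatedly is itself a detectable behaviour.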

The Future: Cybersecurity in the Age of Adaptive Intelligence

PROMPTFLUX marks the early stage of a new class of cyber threats — self-evolving malware. As AI becomes more integrated into both legitimate development and malicious innovation, defenders must evolve just as quickly.

The next generation of cybersecurity will depend not only on firewalls and encryption but on the ability to detect intent — to distinguish between machine creativity and machine deception.

AI SEO

The world of search is changing faster than anyone imagined — and businesses are racing to keep up. As AI increasingly takes the lead in answering queries and shaping online visibility, a new survey led by digital strategist Ann Smarty shows that 85.7% of businesses are already investing or plan to invest in AI-focused SEO. The findings highlight a pivotal moment for digital marketing, where the rules of search visibility are being rewritten by artificial intelligence.

AI Is Redefining How People Find Information
Nearly nine out of ten businesses (87.8%) admit they’re worried about losing organic visibility as AI chatbots, voice assistants, and large language models become people’s go-to sources for information. With AI tools like ChatGPT, Gemini, and Perplexity delivering direct answers instead of directing traffic, the traditional click-through model that once powered online discovery is under pressure.

This evolution is forcing businesses to rethink their digital playbook. Instead of fighting to rank higher on search result pages, brands now aim to appear in AI-generated answers—even when that means no direct link or measurable referral traffic.

The New SEO Frontier: From Search to “AI Optimization”
While AI may be changing how people discover brands, most marketers still want to preserve the “SEO” identity. According to the survey, 49% prefer the term “SEO for AI”, while 41% favor “GEO” (Generative Engine Optimization)—reflecting a shift toward optimizing content for generative systems instead of traditional search algorithms.

Interestingly, this adaptation isn’t just about keywords or backlinks anymore. It’s about data quality, authority signals, and context-rich content that AI systems can confidently pull from when crafting responses. In other words, the new race isn’t just for clicks—it’s for representation in AI-driven narratives.

Budgets Are Growing as Priorities Shift
The survey also found that 61.2% of businesses plan to increase their SEO budgets in response to AI’s growing influence. This renewed investment shows that marketers aren’t backing away from SEO—they’re evolving it.

Brand visibility has overtaken traffic as the top goal for three out of four respondents (75.5%). In fact, only 14.3% of businesses prioritize being cited as a source, showing a broader pivot toward brand recall within AI-generated results rather than traditional referral-driven traffic. For many, the mindset has changed from “getting clicks” to “getting remembered.”

The Anxiety Behind the Numbers
Despite optimism around AI-driven innovation, the report also exposes a deep sense of uncertainty. The top concern among respondents was “not being able to get my business found online,” followed closely by the fear of losing organic search entirely and losing traffic attribution.

For marketers, these anxieties are not unfounded. The AI-powered search landscape is still unpredictable, and visibility often depends on opaque algorithms. Some businesses worry that without access to detailed analytics or ranking insights, understanding how to compete will become even harder.

A Reality Check: AI Traffic Isn’t There Yet
While the AI search boom is real, the data suggests it’s not yet a complete replacement for Google. Studies indicate that AI and LLM referrals convert far less effectively than organic search traffic. AI tools may deliver brand impressions, but they don’t yet drive users to take action or make purchases at the same rate.

That said, most experts agree that the long-term potential is immense. As AI-generated answers become more accurate and personalized, companies that adapt early will likely gain a significant advantage in how they’re represented in these emerging ecosystems.

About the Survey
The survey polled over 300 in-house marketers and business owners, primarily from medium to large enterprises. Nearly half represented ecommerce brands—industries most directly affected by visibility shifts and consumer discovery patterns in an AI-first internet.

Gemma 3 270M

Artificial Intelligence is no longer limited to powerful servers and high-end computers. With the rise of mobile-first technology, there’s a growing need for models that are light, efficient, and accessible on everyday devices. Google has stepped into this space with Gemma 3 270M, a compact open-source AI model that brings the power of personalization directly to smartphones and IoT systems.

What Makes Gemma 3 270M Different?

Unlike large-scale AI models that rely heavily on cloud-based infrastructure, Gemma 3 270M is built to run directly on devices with limited hardware capabilities. With 270 million parameters, it balances performance with efficiency, making it an ideal fit for edge computing.

Key highlights include:

  • Energy efficiency designed for long-term sustainability.
  • Low hardware dependency, reducing the need for costly processors.
  • Quantisation-aware training, enabling smooth performance on formats like INT4.
  • Instruction-following and text structuring using a robust 256,000-token vocabulary.
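Quantisation-aware training prepares a model for low-precision formats like INT4 by simulating the rounding during training, so accuracy survives the conversion. A minimal sketch of symmetric 4-bit quantisation follows; the weight values and per-tensor scaling are illustrative, not Gemma’s actual scheme:

```python
def quantize_int4(weights):
    """Symmetric 4-bit quantisation sketch: map floats onto the 16
    integer levels [-8, 7] with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 4-bit integers."""
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.07, -0.88]
q, scale = quantize_int4(weights)
approx = dequantize(q, scale)
print(q)       # small integers, each representable in 4 bits
print(approx)  # close to the original floats
```

Storing 4-bit integers plus one scale instead of 32-bit floats cuts memory roughly eightfold, which is what makes a 270-million-parameter model practical on phones and IoT hardware.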

Why On-Device AI Matters

On-device AI eliminates the constant need to connect to cloud servers, which brings two big advantages:

  1. Stronger Privacy: Sensitive user data doesn’t need to be uploaded and stored externally.
  2. Faster Responses: Tasks like personalization, text generation, or analysis can happen instantly without latency issues.

For industries like healthcare wearables, autonomous IoT systems, and smart assistants, this could be a game-changer.

Environmental and Accessibility Benefits

By consuming less energy and relying less on server farms, Gemma 3 270M reduces the carbon footprint of AI usage. It also creates opportunities for startups, smaller firms, and independent developers who don’t have access to expensive cloud infrastructure. This aligns with Google’s vision of democratizing AI for all.

Built-in Safeguards and Responsible Use

To address safety concerns, Google has integrated ShieldGemma, a system designed to minimize risks of harmful outputs. However, experts point out that like any open-source technology, careful deployment will be essential to avoid misuse.

What’s Next for Gemma 3 270M?

Google has hinted at expanding Gemma with multimodal capabilities, enabling it to process not just text but also images, audio, and possibly video. This step would make it even more versatile and align it closer with the broader Gemini ecosystem.

Gemma 3 270M is more than just a compact AI model — it represents a shift towards decentralization and sustainability in artificial intelligence. By enabling on-device AI for mobiles and IoT devices, Google is paving the way for a future where AI is faster, greener, and more accessible to everyone.

GPT OSS

A New Era of Local Inference Begins

OpenAI’s breakthrough open-weight GPT-OSS models are now available with performance optimizations specifically designed for NVIDIA’s RTX and RTX PRO GPUs. This collaboration enables lightning-fast, on-device AI inference — with no need for cloud access — allowing developers and enthusiasts to bring high-performance, intelligent applications directly to their desktop environments.

With models like GPT-OSS-20B and GPT-OSS-120B now available, users can harness the power of generative AI for reasoning tasks, code generation, research, and more — all accelerated locally by NVIDIA hardware.

Built for Developers, Powered by RTX

These models, based on the powerful mixture-of-experts (MoE) architecture, offer advanced features like instruction following, tool usage, and chain-of-thought reasoning. Supporting a context length of up to 131,072 tokens, they’re ideally suited for deep research, multi-document analysis, and complex agentic AI workflows.

Optimized to run on RTX AI PCs and workstations, the models can now achieve up to 256 tokens per second on GPUs like the GeForce RTX 5090. This optimization extends across tools like Ollama, llama.cpp, and Microsoft AI Foundry Local, all designed to bring professional-grade inference into everyday computing.

MXFP4 Precision Unlocks Performance Without Sacrificing Quality

These are also the first models using the new MXFP4 precision format, balancing high output quality with significantly reduced computational demands. This opens the door to advanced AI use on local machines without the resource burdens typically associated with large-scale models.

Whether you’re using an RTX 4090 with 24GB VRAM or a professional RTX 6000, these models can run seamlessly with top-tier speed and efficiency.

Ollama: The Simplest Path to Personal AI

For those eager to try out OpenAI’s models with minimal setup, Ollama is the go-to solution. With native RTX optimization, it enables point-and-click interaction with GPT-OSS models through a modern UI. Users can feed in PDFs, images, and large documents with ease — all while chatting naturally with the model.

Ollama’s interface also includes support for multimodal prompts and customizable context lengths, giving creators and professionals more control over how their AI responds and reasons.

Advanced users can tap into Ollama’s command-line interface or integrate it directly into their apps using the SDK, extending its power across development pipelines.
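For programmatic use, Ollama exposes a local HTTP endpoint on port 11434; the request shape below follows Ollama’s documented `/api/generate` API, while the model tag and prompt are just examples:

```python
import json
from urllib import request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build an HTTP request for Ollama's local /api/generate endpoint.
    Setting "stream" to False asks for one complete JSON response
    instead of a stream of partial tokens."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("gpt-oss:20b", "Summarise this README.")
print(req.full_url)  # http://localhost:11434/api/generate
# Sending it requires a running Ollama server with the model pulled:
# resp = request.urlopen(req)
# print(json.loads(resp.read())["response"])
```

Because the endpoint is plain HTTP on localhost, the same few lines slot into scripts, editors, or larger applications without any cloud credentials.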

More Tools, More Flexibility

Beyond Ollama, developers can explore GPT-OSS on RTX via:

  • llama.cpp — with CUDA Graphs and low-latency enhancements tailored for NVIDIA GPUs
  • GGML Tensor Library — community-driven library with Tensor Core optimization
  • Microsoft AI Foundry Local — a robust, on-device inferencing toolkit for Windows, built on ONNX Runtime and CUDA

These tools give AI builders unprecedented flexibility, whether they’re building autonomous agents, coding assistants, research bots, or productivity apps — all running locally on AI PCs and workstations.

A Push Toward Local, Open Innovation

As OpenAI steps into the open-source ecosystem with NVIDIA’s hardware advantage, developers worldwide now have access to state-of-the-art models without being tethered to the cloud.

The ability to run long-context models with high-speed output opens new possibilities in real-time document comprehension, enterprise chatbots, developer tooling, and creative applications — with full control and privacy.

NVIDIA’s continued support through resources like the RTX AI Garage and AI Blueprints means the community will keep seeing evolving tools, microservices, and deployment solutions to push local AI even further.

OpenAI

In a sharp turn of events in the competitive world of artificial intelligence, Anthropic has publicly accused OpenAI of using its proprietary Claude coding tools to refine and train GPT-5, its highly anticipated next-generation language model. The allegation has stirred significant debate in the tech world, raising concerns about competitive ethics, data use, and the boundaries of AI benchmarking.

A Quiet Test Turns Loud: How the Allegation Surfaced

The dispute came to light following an investigative report by Wired, which cited insiders at Anthropic who claimed that OpenAI had been using Claude’s developer APIs—not just the public chat interface—to run deep internal evaluations of Claude’s capabilities. These tests reportedly focused on coding, creative writing, and handling of sensitive prompts related to safety, which gave OpenAI insight into Claude’s architecture and response behavior.

While such benchmarking might appear routine in the AI research world, Anthropic argues that OpenAI went beyond what is considered acceptable.

Anthropic Draws the Line on API Use

“Claude Code has become the go-to choice for developers,” Anthropic spokesperson Christopher Nulty said, adding that OpenAI’s engineers tapping into Claude’s coding tools to refine GPT-5 was a “direct violation of our terms of service.”

According to Anthropic’s usage policies, customers are strictly prohibited from using Claude to train or develop competing AI products. While benchmarking for safety is a permitted use, exploiting tools to optimize direct competitors is not.

That distinction, Anthropic claims, is what OpenAI crossed. The company has now limited OpenAI’s access to its APIs—allowing only minimal usage for safety benchmarking going forward.

OpenAI’s Response: Disappointed but Diplomatic

In a measured response, OpenAI’s Chief Communications Officer Hannah Wong acknowledged the API restriction but underscored the industry norm of cross-model benchmarking.

“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety,” Wong noted. “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”

The statement suggests OpenAI is seeking to maintain diplomatic ties despite the tensions.

A Pattern of Caution from Anthropic

This isn’t the first time Anthropic has shut the door on a competitor. Earlier this year, it reportedly blocked Windsurf, a coding-focused AI startup, over rumors of OpenAI’s acquisition interest. Jared Kaplan, Anthropic’s Chief Science Officer, had at the time stated, “It would be odd for us to be selling Claude to OpenAI.”

With GPT-5 reportedly close to release, the incident reveals how fiercely guarded innovation has become in the AI world. Every prompt, every tool, and every line of code has strategic value—and access to a rival’s system, even indirectly, can be a game-changer.

What This Means for the Future of AI Development

The AI landscape is becoming increasingly guarded. With foundational models becoming key differentiators for companies, control over access—especially to development tools and APIs—is tightening.

Anthropic’s defensive stance could be a sign of things to come: fewer shared benchmarks, more closed systems, and increased scrutiny over how AI labs test, train, and scale their models.

As for GPT-5, questions now swirl not only around its capabilities but also its developmental origins—a storyline that will continue to unfold in the months ahead.

GitHub

A Radical Leap in No-Code Development
GitHub has unveiled “Spark,” a groundbreaking tool that could redefine how we create software. Spark enables users to build functional web applications simply by using natural language prompts—no coding experience required. This innovation comes from GitHub Next, the company’s experimental division, and offers both OpenAI and Claude Sonnet models for building and refining ideas.

More Than Just Code Generation
Unlike earlier AI tools that only generate code snippets, Spark goes a step further. It not only creates the necessary backend and frontend code but runs the app and shows a live, interactive preview. This allows creators to immediately test and modify their applications using further prompts—streamlining development cycles and reducing friction.

A Choice of Models for Precision
Spark users can choose from a selection of top-tier AI models: Claude 3.5 Sonnet, OpenAI’s o1-preview, o1-mini, or the flagship GPT-4o. While OpenAI is known for tuning models to support software logic, Claude Sonnet is recognized for its superior technical reasoning, especially in debugging and interpreting code.

Visualizing Ideas with Variants
Not sure how you want your micro app to look? Spark has a “revision variants” feature. This allows you to generate multiple visual and functional versions of an app, each carrying subtle differences. This feature is ideal for ideation, rapid prototyping, or pitching concepts.

Collaboration and Deployment Made Easy
GitHub Spark isn’t just about building—it also simplifies deployment and teamwork. One-click deployment options and Copilot agent collaboration features make it easy for teams to iterate faster and smarter. Whether you’re a seasoned developer or a startup founder with no tech background, Spark makes execution accessible.

A Message from GitHub’s CEO
Thomas Dohmke, CEO of GitHub, emphasized Spark’s significance in a recent statement on X (formerly Twitter):

“In the last five decades of software development, producing software required manually converting human language into programming language… Today, we take a step toward the ideal magic of creation: the idea in your head becomes reality in a matter of minutes.”

Pricing and Availability
GitHub Spark is currently available to Copilot Pro+ users. The subscription costs $39 per month or $390 per year, which includes 375 Spark prompts. Additional messages can be purchased at $0.16 per prompt.


OpenAI’s generative AI tool, ChatGPT, is shattering records with over 2.5 billion daily prompts, a remarkable milestone that underscores the platform’s rapid global expansion. According to newly obtained data, this figure translates to an astonishing 912.5 billion annual interactions, highlighting how deeply embedded the AI chatbot has become in everyday digital workflows.

US Leads the Charge in Prompt Volume

Out of the billions of interactions processed each day, around 330 million originate from the United States, positioning the country as ChatGPT’s largest user base. A spokesperson from OpenAI has verified the accuracy of these figures, affirming the monumental scale at which the AI platform operates today.

Growth That Stuns Even the Tech Industry

What makes this surge even more notable is the meteoric rise in active users. From 300 million weekly users in December to over 500 million by March, the trajectory shows no signs of slowing. This exponential rise is not just a milestone for OpenAI—it represents a fundamental shift in how users interact with information and automation.

A Looming Threat to Google’s Search Supremacy

While Google still maintains dominance with 5 trillion annual searches, the momentum behind ChatGPT suggests a possible reshaping of the search engine landscape. Unlike Google’s keyword-based model, ChatGPT provides direct, human-like responses, offering users a more conversational and task-oriented experience.

Strategic Moves: AI Agent and Browser on the Way

Adding to its expanding arsenal, OpenAI recently launched ChatGPT Agent, a powerful tool capable of performing tasks on a user’s device autonomously. This marks a major step toward an all-in-one digital assistant. In addition, OpenAI is reportedly planning to launch a custom AI-powered web browser, designed to rival Google Chrome directly—an aggressive move that signals OpenAI’s ambitions beyond just chat.
