ACM

SurrealDB 3.0 wants to replace your five-database RAG stack with one

Building retrieval-augmented generation (RAG) systems for AI agents often involves using multiple layers and technologies for structured data, vectors and graph information. In recent months it has also become increasingly clear that agentic AI systems need memory, sometimes referred to as contextual memory, to operate effectively. The complexity and synchronization of having different data layers …

Qodo 2.1 solves your coding agents’ ‘amnesia’ problem, giving them an 11% precision boost

As AI-powered coding tools flood the market, a critical weakness has emerged: by default, as with most LLM chat sessions, they are temporary — as soon as you close a session and start a new one, the tool forgets everything you were just working on. Developers have worked around this by having coding tools and …

Most ransomware playbooks don’t address machine credentials. Attackers know it.

The gap between ransomware threats and the defenses meant to stop them is getting worse, not better. Ivanti’s 2026 State of Cybersecurity Report found that the preparedness gap widened by an average of 10 points year over year across every threat category the firm tracks. Ransomware hit the widest spread: 63% of security professionals rate …

Nvidia, Groq and the limestone race to real-time AI: Why enterprises win or lose here

From miles away across the desert, the Great Pyramid looks like perfect, smooth geometry — a sleek triangle pointing to the stars. Stand at the base, however, and the illusion of smoothness vanishes. You see massive, jagged blocks of limestone. It is not a slope; it is a staircase. Remember this the next time …

AI agents turned Super Bowl viewers into one high-IQ team — now imagine this in the enterprise

The average Fortune 1000 company has more than 30,000 employees and engineering, sales and marketing teams with hundreds of members. Equally large teams exist in government, science and defense organizations. And yet, research shows that the ideal size for a productive real-time conversation is only about 4 to 7 people. The reason is simple: As …

How to test OpenClaw without giving an autonomous agent shell access to your corporate laptop

Your developers are already running OpenClaw at home. Censys tracked the open-source AI agent from roughly 1,000 instances to over 21,000 publicly exposed deployments in under a week. Bitdefender’s GravityZone telemetry, drawn specifically from business environments, confirmed the pattern security leaders feared: employees deploying OpenClaw on corporate machines with single-line install commands, granting autonomous agents …

Nvidia’s new technique cuts LLM reasoning costs by 8x without losing accuracy

Researchers at Nvidia have developed a technique that can reduce the memory costs of large language model reasoning by up to eight times. Their technique, called dynamic memory sparsification (DMS), compresses the key-value (KV) cache, the temporary memory LLMs generate and store as they process prompts and reason through problems and documents. While researchers …
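
The excerpt describes DMS only at a high level. As a rough intuition for what KV-cache sparsification means in general — a toy sketch, not Nvidia's actual DMS algorithm, and with the function name, attention scores, and 8x ratio all chosen hypothetically for illustration — a scheme like this evicts cached key/value entries that contribute little attention:

```python
# Toy illustration of KV-cache sparsification (NOT Nvidia's DMS):
# keep only the cache entries with the highest attention scores.

def compress_kv_cache(cache, scores, ratio=8):
    """Keep the top 1/ratio entries of `cache`, ranked by `scores`,
    preserving their original order."""
    keep = max(1, len(cache) // ratio)
    # Rank positions by attention score, keep the strongest, restore order.
    ranked = sorted(range(len(cache)), key=lambda i: scores[i], reverse=True)
    top = sorted(ranked[:keep])
    return [cache[i] for i in top]

tokens = [f"kv_{i}" for i in range(32)]      # 32 cached key/value pairs
scores = [1.0 / (i + 1) for i in range(32)]  # toy decaying attention weights
compressed = compress_kv_cache(tokens, scores)
print(len(tokens), "->", len(compressed))    # 32 -> 4, an 8x reduction
```

The trade-off, which the article's headline claims DMS avoids, is that evicting entries too aggressively can discard context the model still needs.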

MiniMax’s new open M2.5 and M2.5 Lightning near state-of-the-art while costing 1/20th of Claude Opus 4.6

Chinese AI startup MiniMax, headquartered in Shanghai, has sent shockwaves through the AI industry today with the release of its new M2.5 language model in two variants, which promises to make high-end artificial intelligence so cheap you might stop worrying about the bill entirely. It’s also said to be “open source,” though the weights (settings) …

OpenAI deploys Cerebras chips for 15x faster code generation in first major move beyond Nvidia

OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company’s first significant inference partnership outside its traditional Nvidia-dominated infrastructure. The model runs on hardware from Cerebras Systems, a Sunnyvale-based chipmaker whose wafer-scale processors specialize in low-latency AI workloads. The partnership arrives at a pivotal moment for OpenAI. The …

Google Chrome ships WebMCP in early preview, turning every website into a structured tool for AI agents

When an AI agent visits a website, it’s essentially a tourist who doesn’t speak the local language. Whether built on LangChain, Claude Code, or the increasingly popular OpenClaw framework, the agent is reduced to guessing which buttons to press: scraping raw HTML, firing off screenshots to multimodal models, and burning through thousands of tokens just …
