ACM

Replacing coders with AI? Why Bill Gates, Sam Altman and experience say you shouldn’t.

In the race to automate everything – from customer service to code – AI is being heralded as a silver bullet. The narrative is seductive: AI tools that can write entire applications, streamline engineering teams, and reduce the need for expensive human developers, as well as workers in hundreds of other roles. But from my point of view …

Beyond Von Neumann: Toward a unified deterministic architecture

A cycle-accurate alternative to speculation — unifying scalar, vector and matrix compute For more than half a century, computing has relied on the Von Neumann or Harvard model. Nearly every modern chip — CPUs, GPUs and even many specialized accelerators — derives from this design. Over time, new architectures like Very Long Instruction Word (VLIW), …

OpenAI’s DevDay 2025 preview: Will Sam Altman launch the ChatGPT browser?

OpenAI will host more than 1,500 developers at its largest annual conference on Monday, as the company behind ChatGPT seeks to maintain its edge in an increasingly competitive artificial intelligence landscape. The third annual DevDay conference at San Francisco’s Fort Mason represents a critical moment for OpenAI, which has seen its dominance challenged by rapid …

Huawei’s new open source technique shrinks LLMs to make them run on less powerful, less expensive hardware

Huawei’s Computing Systems Lab in Zurich has introduced a new open-source quantization method for large language models (LLMs) aimed at reducing memory demands without sacrificing output quality. The technique, called SINQ (Sinkhorn-Normalized Quantization), is designed to be fast, calibration-free, and easy to integrate into existing model workflows. The code for performing it has been made …
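The teaser describes SINQ only at a high level. As intuition for why dual-axis (Sinkhorn-style) scaling can help quantization, here is a minimal NumPy toy comparing plain per-row round-to-nearest quantization against an illustrative scheme that alternately rescales rows and columns before quantizing. This is an assumption-laden sketch, not Huawei's actual SINQ implementation; the function names, the 4-bit setting, and the outlier-column setup are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy weight matrix with one outlier column -- a common failure mode
# for per-row quantization scales.
W = rng.standard_normal((8, 8))
W[:, 3] *= 50.0

def per_row_quant(W, bits=4):
    """Plain per-row symmetric round-to-nearest quantize/dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    Q = np.clip(np.round(W / scale), -qmax, qmax)
    return Q * scale

def dual_scale_quant(W, bits=4, iters=10):
    """Toy dual-axis (row + column) scaling before quantization,
    loosely inspired by Sinkhorn normalization. NOT the real SINQ
    algorithm; see Huawei's released code for that."""
    qmax = 2 ** (bits - 1) - 1
    r = np.ones((W.shape[0], 1))
    c = np.ones((1, W.shape[1]))
    for _ in range(iters):
        # Alternately pick row and column scales so the rescaled
        # matrix has balanced magnitudes.
        r = np.abs(W / c).max(axis=1, keepdims=True)
        c = np.abs(W / r).max(axis=0, keepdims=True)
    A = W / (r * c)  # balanced matrix, entries in [-1, 1]
    Q = np.clip(np.round(A * qmax), -qmax, qmax)
    return (Q / qmax) * r * c

err_row = np.mean((W - per_row_quant(W)) ** 2)
err_dual = np.mean((W - dual_scale_quant(W)) ** 2)
print(f"per-row MSE:    {err_row:.4f}")
print(f"dual-scale MSE: {err_dual:.4f}")
```

The point of the toy: the outlier column inflates every per-row scale, so ordinary entries collapse toward zero at 4 bits, while letting a column scale absorb the outlier keeps the rest of the matrix representable.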

New AI training method creates powerful software agents with just 78 examples

A new study by Shanghai Jiao Tong University and SII Generative AI Research Lab (GAIR) shows that training large language models (LLMs) for complex, autonomous tasks does not require massive datasets. Their framework, LIMI (Less Is More for Intelligent Agency), builds on similar work in other areas of LLM research and finds that “machine autonomy …

Databricks set to accelerate agentic AI by up to 100x with ‘Mooncake’ technology — no ETL pipelines for analytics and AI

Many enterprises running PostgreSQL databases for their applications face the same expensive reality. When they need to analyze that operational data or feed it to AI models, they build ETL (Extract, Transform, Load) data pipelines to move it into analytical systems. Those pipelines require dedicated data engineering teams, break frequently and create delays measured in …

GitHub leads the enterprise, Claude leads the pack—Cursor’s speed can’t close

In the race to deploy generative AI for coding, the fastest tools are not winning enterprise deals. A new VentureBeat analysis, combining a comprehensive survey of 86 engineering teams with our own hands-on performance testing, reveals an industry paradox: developers want speed, but enterprise buyers demand security, compliance and deployment control. This disconnect is reshaping …

Microsoft retires AutoGen and debuts Agent Framework to unify and govern enterprise AI agents

Microsoft’s multi-agent framework, AutoGen, has served as the backbone for many enterprise projects, especially since the release of AutoGen v0.4 in January. However, the company aims to harmonize all of its agent framework offerings and bring more observability capabilities to the forefront. Microsoft has released the Agent Framework in public preview, which will now essentially …

HubSpot’s Dharmesh Shah on AI mastery: Why prompts, context, and experimentation matter most

Presented by HubSpot. INBOUND, HubSpot’s annual conference for marketing and sales professionals, took place in San Francisco this year, with three days of insights and events across marketing, sales, CX, and AI innovation. It was a mix of the new, like the Creators Corner and the Tech Stack Showcase Stage, and the familiar, like HubSpot …

‘Western Qwen’: IBM wows with Granite 4 LLM launch and hybrid Mamba/Transformer architecture

IBM today announced the release of Granite 4.0, the newest generation of its home-grown family of open source large language models (LLMs), designed to balance high performance with lower memory and cost requirements. Despite being one of the oldest active tech companies in the U.S. (founded in 1911, 114 years ago!), “Big Blue” as its …
