A novel distributed reinforcement learning platform reduces post-training expenses by as much as 80%, making sophisticated AI accessible outside traditional hyperscale facilities
SINGAPORE, Feb. 11, 2026 —
- Training expenses reduced by as much as 80% versus conventional cloud-based reinforcement learning methods.
- Cutting-edge results delivered without sole dependence on centralized data center resources.
- Increased experimental capacity per unit of investment, moving AI advancement from infrastructure constraints to research velocity.
Gradient, an AI infrastructure provider, today unveiled Echo-2, a reinforcement learning platform engineered to significantly cut the expenses and hardware requirements for training sophisticated AI models. The introduction addresses a major emerging constraint in AI development, where advancement is increasingly hindered not by innovation or expertise, but by the availability of costly, centralized computational resources.
Initial generative AI breakthroughs centered on training models with vast datasets. The subsequent evolution, however, hinges on post-training refinement, where systems enhance capabilities through iterative trial and feedback. This methodology, known as reinforcement learning, enables AI to develop reasoning, planning, and adaptation skills. It also represents one of the most expensive phases of AI creation, frequently requiring massive, power-intensive data centers that place advanced training beyond the reach of most organisations.
Echo-2 is engineered to transform this paradigm. Instead of confining all training operations to rigidly managed clusters, the platform distributes reinforcement learning workloads across diverse hardware configurations. Preliminary testing shows Gradient achieving up to 80% expense reductions relative to conventional cloud methods, while maintaining or surpassing performance on reasoning and autonomous agent tasks. This enables teams to conduct substantially more experiments, accelerate learning, and enhance models without mandatory dependence on hyperscale infrastructure.
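The press release does not disclose Echo-2's internals, but the general pattern it describes, fanning reinforcement learning rollout collection out to many heterogeneous workers while a central learner aggregates the results, can be illustrated with a minimal sketch. Everything below is hypothetical (the names `Rollout`, `collect_rollout`, and `train_round` are invented for illustration) and is not Gradient's API:

```python
# Illustrative sketch of decentralized rollout collection with central
# aggregation, the broad pattern behind distributed RL platforms.
# Hypothetical example only -- not Gradient's Echo-2 API.
import random
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Rollout:
    worker_id: int
    total_reward: float
    steps: int


def collect_rollout(worker_id: int, seed: int, max_steps: int = 50) -> Rollout:
    """Simulate one episode on a worker. A real system would run the
    current policy against an environment and return full trajectories,
    not just a scalar reward."""
    rng = random.Random(seed)
    total = 0.0
    for step in range(max_steps):
        total += rng.random()       # stand-in for a per-step reward
        if rng.random() < 0.05:     # stand-in for episode termination
            return Rollout(worker_id, total, step + 1)
    return Rollout(worker_id, total, max_steps)


def train_round(num_workers: int = 4, rollouts_per_worker: int = 8) -> float:
    """Fan rollout collection out across workers, then aggregate
    centrally -- mirroring the split between distributed experience
    collection and a centralized learning update."""
    jobs = [(w, w * 1000 + i)
            for w in range(num_workers)
            for i in range(rollouts_per_worker)]
    worker_ids, seeds = zip(*jobs)
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        rollouts = list(pool.map(collect_rollout, worker_ids, seeds))
    # A real learner would compute a policy update from the trajectories;
    # here the mean episode reward stands in for that training signal.
    return sum(r.total_reward for r in rollouts) / len(rollouts)
```

The cost advantage claimed for this style of system comes from the collection side: because each rollout is independent, it can run on whatever cheap or idle hardware is available, with only the aggregated results flowing back to the learner.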
“Artificial intelligence advancement is now constrained not by vision, but by foundational resources,” stated Eric Yang, Co-Founder and Chief Executive Officer of Gradient. “Reinforcement learning is emerging as the driver of genuine intelligence, yet currently remains blocked by massive data center expenditures. Echo-2 reduces experimental costs, enabling more teams to develop, evaluate, and refine AI systems without requiring hyperscale cloud infrastructure.”
The launch arrives as public and private sectors encounter mounting limitations regarding energy availability, ecological consequences, and information governance linked to large-scale AI facilities. As AI grows more integral to organisational operations and competitive strategy, the capacity to train and optimise models without aggregating enormous workloads in one place gains critical importance. Solutions like Echo-2 provide an alternative approach that fosters ongoing AI development while alleviating strain on centralized systems.
Echo-2 advances Gradient’s portfolio in distributed AI infrastructure, building on the company’s prior release, which enables large AI models to operate across numerous machines, supported by a networking layer that manages data exchange between decentralized systems. Combined, these solutions aim to decrease dependency on centralized computation by enabling AI models to be trained, deployed, and refined on existing hardware across distributed environments.
Through cost reduction and enhanced adaptability, Echo-2 enables research groups to accelerate progress and conduct additional experiments, while providing businesses a method to diminish prolonged dependence on costly cloud contracts. As financial, energy, and infrastructure limitations increasingly influence AI’s trajectory, Gradient asserts that Echo-2 expands availability of sophisticated reinforcement learning at a pivotal time. The organisation is productising Echo-2 within a comprehensive distributed reinforcement learning framework, together with the introduction of Logits, an RL-as-a-Service solution constructed on its decentralized architecture, with commercial availability anticipated in late 2026.
– ENDS –
About Gradient:
Gradient is an AI research and development laboratory focused on building open intelligence on a fully decentralized infrastructure, the Open Intelligence Stack (OIS), spanning distributed training, deployment, autonomous agent systems, and more.
Backed by leading investors and a team of elite researchers, Gradient is committed to delivering further cutting-edge research toward a future where intelligence can be built, scaled, and advanced by anyone, anywhere.
Media Contact: Athraa Bheekoo, athraa@lunapr.io
