Datadog, the Cloud Monitoring as a Service company, today announced the launch of Datadog Experiments – a new product that enables teams to design, launch, and measure product experiments and A/B tests directly within its platform. The aim is to give teams the insights they need to “understand how every change affects user behavior, application performance and business outcomes.”
The launch of Experiments comes at a time when software development no longer follows the old playbook. Releases are happening faster, sometimes continuously, and AI-driven features are showing up in production sooner than teams are fully comfortable with. The stakes for getting something wrong have also gone up – even a small break can ripple through performance or business metrics almost immediately.
In this environment, teams are expected to test as they build and validate changes quickly. They also need to move forward with confidence that comes from seeing the impact in real time – not hours or days later.
“The faster teams ship, the more expensive it becomes to not know what’s working. When signals are scattered across disconnected tools, teams make decisions with incomplete information—missing what’s actually driving revenue and killing the bold bets that will move the business forward,” said Yanbing Li, Chief Product Officer at Datadog.
Datadog addresses the challenge of fragmented experimentation and observability workflows by embedding experimentation directly into its platform through Datadog Experiments. Rather than piecing together insights from multiple systems after the fact, users can now evaluate the impact of changes on user behavior, application performance, and business metrics in real time.
The new offering uses technology from Datadog’s acquisition of Eppo to introduce rigorous statistical methods, combined with real-time observability guardrails that help detect issues early. This allows companies to focus on testing what actually matters, move faster without losing control, and make decisions with a higher level of confidence.
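Datadog has not published the internals described here, but the kind of statistical check such a product automates can be sketched in a few lines. The example below is a hypothetical illustration, not Datadog's or Eppo's actual method: a two-sided z-test on conversion rates between a control and a treatment variant, paired with a crude "guardrail" that blocks a rollout if a performance metric regresses past a threshold. All numbers and names are invented for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b: conversion counts; n_a/n_b: sample sizes.
    Returns (lift, z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 10,000 users per variant.
lift, z, p = two_proportion_ztest(conv_a=1000, n_a=10000,
                                  conv_b=1100, n_b=10000)
print(f"lift={lift:.3f}  z={z:.2f}  p={p:.4f}")

# A toy observability guardrail: hold the rollout if p95 latency
# (in ms, hypothetical measurements) regresses beyond a budget.
LATENCY_BUDGET_MS = 50
p95_control, p95_treatment = 420, 455
ship = (p < 0.05 and lift > 0
        and (p95_treatment - p95_control) < LATENCY_BUDGET_MS)
print("ship" if ship else "hold")
```

The point of combining the two checks is the one the announcement makes: a statistically significant business win can still be a net loss if it quietly degrades performance, so experiment analysis and observability signals are evaluated together.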
Key features of Datadog Experiments focus on making testing easier to run and easier to act on. Teams can set up and run experiments on their own without waiting on other teams or dealing with extra coordination. Everything is standardized, so moving from insight to decision becomes faster and more straightforward. Built-in guardrails also help teams run safer experiments by catching issues early, protecting users, and keeping results reliable.
Experiments also help teams make decisions they can stand behind. Results are consistent and comparable because they are measured against real business metrics pulled directly from existing data sources. This makes it easier for both teams and leadership to trust what the data is showing and move forward with it.
“AI has increased the pace and complexity of software releases exponentially. Too often, though, teams are flying blind when it comes to measuring the efficacy of new code. That’s because they don’t have a uniform way to validate changes and monitor their impact,” said Li.
He further added, “With Datadog Experiments, teams have the guardrails needed to safely validate AI-driven changes. By tying experiments to Real User Monitoring (RUM), Product Analytics, APM and logs, organizations can measure both business impact and performance implications to reduce risk without slowing innovation.”
While the potential value offered by Experiments is impressive, the true test will be whether Datadog can push this beyond feature parity. Experimentation is not a new category: several players in this space, like Optimizely, have been at it for years and have strong products built around A/B testing.
Datadog is pitching a simpler idea. Instead of adding another tool, it is trying to bring everything into one place where teams already work. If Datadog can show that this actually saves time and leads to better decisions, it could gain traction. If not, it risks becoming just another feature in an already crowded space.
The post Datadog Launches Experiments to Bridge a Costly Gap Between Product Testing and Observability Data appeared first on BigDATAwire.
Author: Ali Azhar
