By Pankaj Mitra, Senior Director, with Jason Lopyan, Manager
According to Cisco’s 2024 AI Readiness Index, 98% of global companies surveyed feel an increased urgency to deploy AI solutions compared to last year. However, enterprises are increasingly finding that without the right monitoring tools, AI deployment can produce unpredictable behavior: hallucinations, bias, and toxic outputs.
With years of experience building data science and machine learning backend systems, the Fiddler team identified these challenges as a key gap in the AI developer’s toolkit. Today, Fiddler AI has emerged as a leader in observability and explainability for AI deployments.
This mission aligns perfectly with Cisco's commitment to promoting responsible AI use, which is why Cisco Investments is proud to support Fiddler AI through its Global AI Investment Fund.
The Complexity of AI Models
“Traditional software development relies on deterministic outputs from clear instructions,” says Krishna Gade, CEO of Fiddler AI. “In contrast, modern AI models can amplify the training data’s inherent biases and produce unpredictable outputs. These models behave as black boxes; the lack of transparency when they fail makes it critical for enterprises to have tools that can explain and monitor AI behavior effectively.”
The complexity arises from the models' ability to learn patterns from large training datasets, which they encode into intricate neural network structures. While this allows AI to make sophisticated predictions and decisions, it also obscures the reasoning behind those decisions, making it challenging to understand why a model might produce certain outputs.
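To make “explainability” concrete, here is a minimal, generic sketch of one common technique, feature attribution via permutation importance, applied to a hypothetical tabular model using scikit-learn. It is illustrative only, not Fiddler’s product or API; the dataset and model are stand-ins.

```python
# Minimal sketch: feature attribution on a black-box tabular model.
# Generic illustration (not Fiddler's API) using scikit-learn's permutation
# importance to surface which inputs actually drive a model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset standing in for production features.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance={result.importances_mean[idx]:.4f}")
```

Shuffling one feature at a time and measuring the drop in accuracy gives a rough ranking of which inputs the otherwise opaque model relies on, which is the kind of insight explainability tooling aims to provide.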
Building Trust Through Observability
Fiddler AI offers a suite of tools designed to enhance the observability and explainability of these AI models.
"Safeguards are crucial," Gade emphasizes. "Our tools help ensure that AI applications are not only accurate but also responsible, preventing issues such as hallucinations, leakage of private data and unchecked toxicity."
Fiddler’s platform provides insights into AI’s decision-making processes, allowing enterprises to detect and address inherent bias and toxic outputs.
"Observability tools are key to building human confidence in AI systems,” Gade emphasizes. “This allows businesses to deploy AI in production use cases with assurance."
Cisco's Commitment to Responsible AI
Cisco shares Fiddler AI's vision of deploying AI technologies that are transparent, accountable, and fair. Our investment in Fiddler AI underscores a mutual dedication to safeguarding AI deployment with robust observability tools. Cisco believes in fostering AI solutions that are not only innovative but also ethical and responsible.
Fiddler AI Observability Platform
Fiddler AI's comprehensive platform is becoming a critical requirement for enterprises aiming to integrate AI into their workflows. AI infrastructure leaders, such as CTOs, Chief AI Officers, Chief Data Officers and Line of Business decision makers, rely on Fiddler's insights to ensure optimal performance and alignment with business and organizational goals.
"Enterprises are looking for solutions that can provide insights across all forms of AI, from traditional machine learning models to the latest in generative AI," Gade explains. "Fiddler's unified observability product offers a single pane of glass for monitoring AI across the board."
Fiddler’s primary users, AI engineers and the SRE teams responsible for AI systems, leverage its robust features to maintain and refine AI applications. For instance, Fiddler’s visualization capabilities for mapping embeddings work seamlessly with both traditional and generative AI models, enabling engineers to better understand how their models behave and interact with complex datasets while delivering outcomes through AI applications.
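As a generic illustration of the underlying technique (not Fiddler’s feature), the sketch below projects hypothetical high-dimensional embeddings down to two dimensions with PCA so they can be plotted and inspected; UMAP or t-SNE are common alternatives.

```python
# Minimal sketch: projecting high-dimensional embeddings to 2D for inspection.
# Generic illustration of the technique, not Fiddler's embedding visualization.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical 768-dimensional embeddings for 500 production inputs.
embeddings = rng.normal(size=(500, 768))

coords = PCA(n_components=2, random_state=0).fit_transform(embeddings)
print(coords.shape)  # (500, 2): points an engineer could plot and cluster
```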
The Fiddler platform runs on-premises, across multiple clouds, and with multiple model types. AI engineers appreciate not just model input/output monitoring but also Fiddler’s root cause analysis features, which let them quickly pinpoint the reasons behind undesirable model behavior.
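For a flavor of what input monitoring can involve, here is a minimal sketch that compares a production feature’s distribution against a training-time baseline using a two-sample Kolmogorov–Smirnov test from SciPy. The data is synthetic and the approach is generic, not Fiddler’s method.

```python
# Minimal sketch of input drift monitoring: compare a production feature's
# distribution to a training-time baseline with a two-sample KS test.
# Illustrative only; this is not Fiddler's implementation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)   # training-time feature values
production = rng.normal(loc=0.3, scale=1.0, size=2_000)  # recent traffic, slightly shifted

result = ks_2samp(baseline, production)
if result.pvalue < 0.01:
    print(f"Drift alert: KS statistic={result.statistic:.3f}, p={result.pvalue:.2e}")
else:
    print("No significant drift detected.")
```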
Looking to the Future
As customers chain different AI functions together to power agentic AI workflows, the risks are amplified. The need for robust observability and explainability solutions becomes increasingly critical, across both in-house and procured AI applications. Fiddler’s platform works across a mix of traditional and generative AI “model gardens” to monitor complex end-to-end workflows.
“As AI technologies evolve rapidly, our commitment is to innovate continuously, adding new features, whether for detecting new issues, building more safety guardrails, or evaluating which model is best suited,” Gade shares. “The moment for consumer AI and democratized AI has truly arrived. This shift is driving the need for greater transparency and accountability in AI systems. At Fiddler, we are committed to providing the tools necessary to ensure that AI can be deployed responsibly and effectively across enterprises, empowering them to harness AI’s potential while maintaining trust and ethical standards.”
Read more about Fiddler’s Series B here.