An Asset Management Firm

Securing AI Coding Agents in Asset Management with Real-Time Governance
Background
A global asset-management firm with more than 800 developers was rapidly expanding the use of AI-assisted and agentic development platforms across its quantitative engineering and trading infrastructure teams. Developers relied heavily on Linux-based environments and AI coding systems to accelerate model development, automate infrastructure tasks, and improve delivery speed for internal trading and analytics platforms.
These agentic development tools became deeply integrated into IDEs, CI/CD pipelines, operational automation workflows, and internal engineering systems. In many cases, the AI agents interacted with proprietary portfolio algorithms, trading logic, internal APIs, and highly sensitive market-data environments.
Challenge
While agentic development platforms significantly improved developer productivity, security leadership identified a growing governance and visibility gap.
The firm's existing SIEM infrastructure, including Splunk, could detect infrastructure-level anomalies but lacked visibility into the real-time behavior of AI-driven coding agents operating inside developer environments and automation workflows.
The firm needed a way to:
- Monitor how autonomous coding agents interacted with internal systems and repositories
- Detect exposure of sensitive intellectual property, API secrets, and credentials
- Track external AI model interactions and prompt activity
- Correlate agent behavior with Linux access controls, Git activity, authentication systems, and security telemetry
- Maintain strong governance controls without slowing down developer productivity
Security teams were particularly concerned about unauthorized disclosure of proprietary source code and trading logic during external AI-assisted debugging sessions.
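The kind of exposure the security teams worried about can be illustrated with a minimal sketch: scanning an outbound prompt for credential-like patterns before it leaves the environment. This is purely an illustrative example, not AgentMon's actual detection logic; the pattern names and rules below are assumptions, and production scanners use far larger rule sets plus entropy checks.

```python
import re

# Illustrative secret patterns (assumed for this sketch); real-world
# scanners ship hundreds of rules and add entropy-based heuristics.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*['\"]?[A-Za-z0-9/+=_-]{16,}"
    ),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in an outbound prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(prompt)]

# A debugging prompt that accidentally embeds a (fake) API key:
prompt = "Debug this: client = Client(api_key='sk_live_51HxFakeExample1234')"
findings = scan_prompt(prompt)
if findings:
    print(f"BLOCK: prompt contains {findings}")
```

A real governance layer would sit inline on the agent's outbound traffic and pair a block or redact action with an audit event, rather than just printing a warning.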

Solution
The firm deployed Codenotary AgentMon across developer workstations, Linux build systems, and engineering automation pipelines to continuously monitor agentic activity in real time.
AgentMon provided visibility into:
- Prompts and AI interactions
- Tool invocations and command execution
- Repository and source-code access
- Token usage and external model communications
- AI-assisted debugging workflows
- Correlation with Linux permissions, authentication systems, Git activity, and SIEM telemetry
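Correlating agent behavior with SIEM telemetry generally means emitting structured, attributable events that a collector such as Splunk's HTTP Event Collector can ingest. The sketch below shows what such an event might look like; the field names and schema are assumptions for illustration, not AgentMon's actual event format.

```python
import json
import time

def agent_event(agent_id: str, action: str, target: str, user: str) -> str:
    """Build a structured agent-activity event shaped for a SIEM HTTP
    collector (e.g., Splunk HEC). Field names are illustrative only."""
    event = {
        "time": int(time.time()),
        "sourcetype": "ai_agent_activity",
        "event": {
            "agent_id": agent_id,  # which coding agent acted
            "action": action,      # e.g., "tool_invocation", "repo_read"
            "target": target,      # file, repository, or external endpoint
            "user": user,          # developer identity, for attribution
        },
    }
    return json.dumps(event)

payload = agent_event("agent-7", "repo_read",
                      "git@internal:quant/alpha-models.git", "jdoe")
print(payload)
```

Because each event carries the acting agent, the target, and the developer identity, analysts can join this stream against Git logs and authentication records during an investigation.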
By combining runtime monitoring with full attribution and replayability of agent behavior, security teams gained the ability to investigate suspicious workflows quickly and enforce governance policies across the engineering organization.
Within the first few weeks of deployment, AgentMon detected multiple attempts by autonomous coding agents to expose sensitive internal algorithms and API secrets during external AI-assisted troubleshooting sessions. The incidents were rapidly contained, and governance controls were strengthened before any broader impact occurred.
Business Impact
The deployment of AgentMon enabled the firm to scale AI-assisted development securely across its global engineering organization while reducing the risk of intellectual property leakage and unauthorized disclosure of sensitive trading systems.
Key outcomes included:
- Improved visibility into AI-agent behavior across development and automation environments
- Early detection of risky external AI interactions involving proprietary code and credentials
- Faster incident response through attributable and replayable agent activity
- Better alignment between AI-assisted development and internal governance requirements
- Continued developer productivity without introducing restrictive manual controls
The project also helped establish a repeatable governance framework for securely adopting agentic development platforms within highly regulated financial environments.
Start a Trial
Our mission is to secure the software supply chain with autonomous, agentic AI—delivering strong security outcomes through a platform that’s simple to use and requires no security expertise.