
January 15, 2026
Security as a platform: Codifying scans, signals, and guardrails
Larkins Carvalho
As Plaid’s codebase expanded to hundreds of services and shared libraries across multiple GitHub organizations, we faced a challenge familiar to any fast-growing engineering organization: how to maintain comprehensive security coverage without becoming a bottleneck?
We had security tools, but they were fragmented. Each language stack shipped with its own static analysis tool and configuration, and each tool had its own infrastructure, its own alert stream, and its own maintenance burden. Our dependency scanning flagged thousands of CVEs, most of which weren't actually exploitable in our code. Secret detection existed in pockets that became harder to maintain at scale. And perhaps most importantly, we had no scalable way to run security scans in CI across all our repositories.
We realized this approach fundamentally couldn’t scale. Every new repo meant manual CI configuration, every new security tool meant another custom integration, and every finding meant manual context hunting, leaving engineers to sift through noisy alerts with little actionable guidance.
We needed a different approach.
The insight: Treat security like infrastructure
Most security programs treat scanners and security tools as products you buy and integrate into existing pipelines. At Plaid, we started treating security controls as infrastructure and code: shared CI templates, Terraform modules, and services that enforce the same checks across every repo by default. The same way we build services to scale, we built security to scale: shared primitives, clear contracts, and automation that improves centrally instead of through one-off repo work.
This shift in mindset drove our design decisions:
Security scans are consistently defined, not configured manually
Findings include contextual guidance based on years of incident data, not generic descriptions
Learnings from incidents and bug bounties are encoded as automated guardrails, not buried in documentation
Developers learn security through their tools, not by reading wikis they'll never find
The outcome: a Security Pipeline as Code that delivers sub-5-minute PR feedback, achieves broad repository coverage, cuts noise dramatically by using reachability and contextual filters, and turns security incidents into reusable controls across all repositories.
Just as importantly, it’s security tooling that engineers at Plaid actually trust and engage with.
The philosophy: from incidents to guardrails
The most powerful aspect of Security Pipeline as Code isn’t the automation. It’s that, wherever possible, incident, bug bounty, and penetration test findings are turned into permanent guardrails. Instead of trapping lessons in Confluence pages or Slack threads that developers never see, we codify them as rules and checks that automatically protect every repository.
The journey: From findings to automation
What makes these guardrails powerful is that they're custom to how software is built in a specific environment. They are not generic framework rules, but patterns grounded in the architecture, stack, data flows, authn/z model, and security baselines.
Here are concrete examples:
Example 1: Enforcing zero-trust security controls
For example, suppose we rolled out a zero-trust architecture for service-to-service authorization and wanted enforcement enabled across hundreds of microservices. If an authorization policy in our service mesh runs in "audit mode," it logs violations but does not enforce them.
One way to automatically detect audit-mode configurations is shown below:
rules:
  - id: detected-audit-mode-true
    patterns:
      - pattern-inside: |
          authorizationPolicy:
            ...
            auditMode: True
            ...
    message: >-
      Found authorizationPolicy with auditMode set to True.
      Please refer to our service-to-service authorization documentation
      to enforce authorization checks.
    metadata:
      category: security
      likelihood: LOW
      impact: HIGH
    severity: WARNING
What makes this organization-specific:
Solves a particular implementation challenge
References internal authorization policy framework
Encodes security baseline: authorization is enforced, not just audited
As a result, a service that enables audit mode is automatically flagged, with inline guidance pointing to the org-specific implementation documentation and examples of proper configuration.
Example 2: Bug bounty finding, cookie security
Take another example: a security researcher submits a bug bounty report saying, "Your cookies are missing the SameSite attribute, allowing CSRF attacks against authenticated users."
After the reported instances are fixed, the finding can be turned into a permanent guardrail:
rules:
  - id: insecure-cookie-samesite-attribute
    patterns:
      - pattern: |
          $COOKIE.SetSameSite($X)
      - pattern-not: |
          $COOKIE.SetSameSite(http.SameSiteLaxMode)
      - pattern-not: |
          $COOKIE.SetSameSite(http.SameSiteStrictMode)
    message: >-
      Insecure SameSite attribute on cookies. Please refer to our
      session management security baseline for secure configuration.
    severity: WARNING
What makes this organization-specific:
Targets the specific patterns used for setting cookie attributes
Points developers to the internal session management baseline instead of generic CSRF advice
Enforces a baseline: all cookies must use Lax or Strict mode
As a result, a Go service that configures cookies with an unsafe SameSite value can be automatically flagged in CI, turning a single bug bounty report into permanent, organization-wide protection.
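To make the fix concrete, here is the kind of compliant Go code this guardrail steers developers toward. It is a minimal sketch using the standard net/http cookie API rather than the framework helper the rule's SetSameSite pattern targets, so treat the names and structure as illustrative.

package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
)

// setSessionCookie sets a session cookie with a SameSite value the guardrail
// accepts (Strict or Lax), plus Secure and HttpOnly.
func setSessionCookie(w http.ResponseWriter, value string) {
    http.SetCookie(w, &http.Cookie{
        Name:     "session",
        Value:    value,
        Path:     "/",
        Secure:   true,
        HttpOnly: true,
        SameSite: http.SameSiteStrictMode, // http.SameSiteLaxMode also passes the rule
    })
}

func main() {
    rec := httptest.NewRecorder()
    setSessionCookie(rec, "example-token")
    fmt.Println(rec.Header().Get("Set-Cookie"))
}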
The compounding effect
These examples illustrate different sources of security knowledge that could be turned into code:
Internal security controls (zero-trust authorization baselines)
External findings (bug bounty submissions)
Proactive security review (pen tests and architecture review)
Over time, we can develop custom rules like these that encode environment-specific security knowledge. They go beyond generic OWASP patterns, reflecting the architecture, frameworks, baselines, and incident lessons that matter in practice.
This is the guardrails philosophy in action: security learnings, from findings to architectural decisions to baselines, are turned into automated institutional knowledge that shows up on PRs across our repos.
The architecture: Built for composability
Security Pipeline as Code is built as composable layers with shared CI templates, Terraform modules, and services that apply consistent controls across repos. We designed it to stay modular as the security stack evolves, so individual tools can be swapped behind stable interfaces without per-repo CI changes. Pipelines are orchestrated by a hosted control plane, but scan execution happens on Plaid-managed agents inside our AWS environment. Keeping execution in-VPC lets us run close to internal systems while the control plane provides scheduling, status reporting, and a consistent developer experience.
Layer 1: Repository configuration
Each repository at Plaid is onboarded through a simple Terraform configuration; adding a service is literally adding a line to that configuration.
That's it. One change, and the repository automatically gets:
Security scans on every pull request
Daily comprehensive scans
GitHub status checks
PR comments with contextual guidance
Findings tracked in our vulnerability management system
With this declarative approach, we get broad coverage across all repositories, internal and public, and keep configurations minimal.
Layer 2: Dynamic pipeline orchestration and multi-domain security scanning
The pipeline doesn't blindly run every scanner on every commit. Instead, it dynamically generates the appropriate scan steps based on context, then executes a set of security-domain scanners behind a unified developer experience.
At a high level, the system dynamically tailors scans based on context:
What changed? If a PR modifies .tf files, run infrastructure-as-code analysis. If not, skip it. This keeps feedback fast.
What's enabled? Each repository can independently enable different security domains: SAST (curated rules for insecure code patterns informed by incidents and bug bounties), reachability-aware SCA (only vulnerable dependencies with real call paths), IaC (infra misconfigurations), privacy & compliance (risky data handling), AI-powered business logic analysis (context-dependent issues via deeper scheduled scans), and supply chain security (malicious dependency detection).
What mode? Pull request scans are fast and incremental; they only analyze code that changed since the branch point. Scheduled scans are comprehensive, analyzing the entire repository.
This approach means developers get fast feedback (only run what's needed for their changes) while the security team can integrate new tools quickly by adding new scanner types to the Terraform module. All of these scanners run in isolated containers with pinned versions, making results reproducible and turning updates into controlled infrastructure changes.
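To make the orchestration concrete, here is a simplified Go sketch of that selection logic. The domain names, trigger rules, and types are illustrative assumptions, not Plaid's actual pipeline generator.

package main

import (
    "fmt"
    "strings"
)

// Repo describes which security domains a repository has enabled, mirroring
// the per-repo Terraform configuration described above (names are hypothetical).
type Repo struct {
    EnabledDomains map[string]bool // e.g. "sast", "sca", "iac"
}

// selectScanners derives the scan steps for a pull request from what changed
// and what the repository enables; scheduled scans would skip the change filter.
func selectScanners(repo Repo, changedFiles []string) []string {
    var steps []string
    if repo.EnabledDomains["sast"] {
        steps = append(steps, "sast") // curated code rules run on every PR
    }
    if repo.EnabledDomains["iac"] && anyHasSuffix(changedFiles, ".tf") {
        steps = append(steps, "iac") // IaC analysis only when Terraform changed
    }
    if repo.EnabledDomains["sca"] && anyHasSuffix(changedFiles, "go.mod", "go.sum") {
        steps = append(steps, "sca") // dependency scan only when manifests changed
    }
    return steps
}

func anyHasSuffix(files []string, suffixes ...string) bool {
    for _, f := range files {
        for _, s := range suffixes {
            if strings.HasSuffix(f, s) {
                return true
            }
        }
    }
    return false
}

func main() {
    repo := Repo{EnabledDomains: map[string]bool{"sast": true, "iac": true, "sca": true}}
    fmt.Println(selectScanners(repo, []string{"main.tf", "api/handler.go"}))
}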
Layer 3: Unified vulnerability management
Findings flow into our internal vulnerability management pipeline and task dashboard. This layer is rich enough to stand on its own, but at a high level:
Single source of truth: Vulnerabilities from security scanners can be normalized into a unified data model, replacing fragmented, tool-specific views.
Auto-resolution: Findings can be resolved automatically when the code is fixed (detected in the next scan), when a developer marks it as a false positive with justification, or when low-severity findings age out without active exploitation.
Team attribution & tasks: Items can be attributed to teams using code ownership and cloud metadata, then surfaced as actionable tasks in an internal task dashboard (and, for high-priority items, synced into team backlogs).
Alert-to-task conversion: Once validated, findings can be converted into actionable remediation tasks instead of remaining isolated in scanner UIs.
Program visibility: The system can provide program-level metrics and SLA views so security and engineering can measure remediation performance across teams and focus effort where risk is highest.
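As a rough illustration of the unified data model, the sketch below normalizes results from two hypothetical scanner types into a single finding shape before they enter the pipeline; every field and function name here is invented for the example.

package main

import "fmt"

// Finding is a hypothetical unified shape that tool-specific results are
// normalized into before attribution, deduplication, and task creation.
type Finding struct {
    Source   string // which scanner produced it, e.g. "sast" or "sca"
    RuleID   string // rule ID or CVE identifier
    Severity string
    Location string // repo-relative path (and line, where applicable)
}

// fromSAST and fromSCA translate tool-specific output into the unified model.
func fromSAST(ruleID, severity, file string, line int) Finding {
    return Finding{Source: "sast", RuleID: ruleID, Severity: severity,
        Location: fmt.Sprintf("%s:%d", file, line)}
}

func fromSCA(cveID, severity, manifest string) Finding {
    return Finding{Source: "sca", RuleID: cveID, Severity: severity, Location: manifest}
}

func main() {
    findings := []Finding{
        fromSAST("insecure-cookie-samesite-attribute", "WARNING", "api/session.go", 42),
        fromSCA("CVE-2024-0001", "HIGH", "go.mod"),
    }
    fmt.Printf("%+v\n", findings)
}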
The developer experience: Security in the flow of work
The architecture enables the magic, but what makes Security Pipeline as Code successful is how naturally it fits into developers’ existing tools and workflows.
A typical developer journey
As an example, Sarah, a backend engineer, opens a pull request to add a new internal API endpoint. Within a few minutes, the security pipeline posts a comment on her PR:
🤖 Security Finding
🟡 Missing Authorization Check (api.py:40)
Internal service endpoints must enforce authorization policies. This endpoint is accessible to any authenticated user.
How to fix: Add authorization policy configuration per our service-to-service authorization documentation.
Need help? Ask in #ask-security
Sarah reads this and thinks, "I need to add an authorization policy." The guidance is clear: here's the problem, here's why it matters (with links to real incidents), here's exactly how to fix it, here's where to learn more.
She pushes a fix. The next scan (triggered automatically on the new commit) detects that the issue is resolved. The PR comment updates to show all findings resolved, the GitHub status check turns green, and Sarah merges her PR.
She never opened a ticket. Never pinged the security team. Never left GitHub. And she learned one security concept that she'll apply to every future PR.
Making it real: Cross-team collaboration
Security Pipeline as Code wasn't built by the security team in isolation. It required deep collaboration across teams with different expertise.
This multi-team ownership is why Security Pipeline as Code scales: it's not a security silo, it's shared infrastructure owned by the organization.
The pattern we followed:
Security defines the "what": which scanners, which rules, and what guidance
Infrastructure defines the "how": Terraform modules, container images, orchestration patterns
Developer Efficiency ensures the "experience": fast pipelines, clear feedback, reliable execution
Developers validate the "value": what's actionable, what builds trust, what actually prevents bugs
The outcomes: Security at scale
After building and operating Security Pipeline as Code, the results speak for themselves: sub-5-minute PR feedback, broad repository coverage, dramatically less noise, and incident learnings that become reusable controls across all repositories.
Lessons learned: What actually mattered
Looking back, several insights were non-obvious at the start but proved critical:
1. Collaboration multiplies impact
This only worked because it was built jointly: the security team knew what we needed to detect, the infrastructure team knew how to build scalable systems, the developer efficiency team knew how to keep CI fast, and developers told us what guidance is actually helpful.
And it’s not a one-time collaboration; we still get feedback from our platform teams on what needs to be adjusted or tuned, and we can action it quickly. Bringing these perspectives together created something none of us could have built alone.
2. Coverage as a forcing function
We made repo onboarding trivial so coverage expanded quickly. Instead of turning on everything everywhere, we rolled out curated rule sets in stages while keeping org-wide visibility. We started with a small, high-confidence core: OWASP Top 10 style issues, obvious injection patterns, hard-coded secrets, and basic language/framework rules for our main stacks. As we gained signal, we added deeper framework-specific and org-specific rules based on recurring patterns in our codebase. Every new control launched in soft-fail first: findings showed up in PRs and dashboards, but didn’t block merges. That period let us measure blast radius, tune away false positives, and build credibility before introducing friction.
Given these rule sets were small and intentional, the security team could truly own them and actively tune noisy rules and guidance to match how the code is actually written. Full coverage forced discipline: every new rule set had to stay low-noise across the entire codebase before we promoted it to blocking, and false positives were immediately visible because they appeared in our triage queue across repositories at once.
3. Reachability made dependency findings actionable
Traditional dependency scanning tools flag every CVE in your dependency tree. This creates thousands of alerts, most for vulnerabilities in code paths your application never executes. Reachability flips the question to: "Does our code actually call the vulnerable function?"
Once we adopted reachability analysis and integrated it into how we triage and present dependency risk, we were left with a smaller set of issues we could trace to a real call path in our code. That evidence made the findings credible.
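The core of the idea fits in a few lines. In the Go sketch below, the reachable set stands in for whatever call-graph analysis the SCA tool performs; the types and symbols are illustrative.

package main

import "fmt"

// CVE pairs an advisory with the function it affects (illustrative shape).
type CVE struct {
    ID             string
    VulnerableFunc string // fully qualified symbol, e.g. "yaml.Unmarshal"
}

// filterReachable keeps only advisories whose vulnerable function appears in
// the application's call graph; everything else is deprioritized as noise.
func filterReachable(cves []CVE, reachable map[string]bool) []CVE {
    var actionable []CVE
    for _, c := range cves {
        if reachable[c.VulnerableFunc] {
            actionable = append(actionable, c)
        }
    }
    return actionable
}

func main() {
    cves := []CVE{
        {ID: "CVE-2024-0001", VulnerableFunc: "yaml.Unmarshal"},
        {ID: "CVE-2024-0002", VulnerableFunc: "xml.Decode"},
    }
    reachable := map[string]bool{"yaml.Unmarshal": true} // derived from call-graph analysis
    fmt.Println(filterReachable(cves, reachable))
}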
4. Auto-resolution keeps the system tidy
No one wants to manually close 50 items after fixing the same issue across multiple files. With auto-resolution, once the underlying code is fixed, the next scan marks the finding as resolved and it disappears automatically from both GitHub and the dashboard.
It’s a small feature, yet it meaningfully improves the developer experience and reduces busywork for the security team.
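A minimal sketch of that auto-resolution pass, assuming findings are keyed by a stable fingerprint; the statuses and types below are hypothetical.

package main

import "fmt"

// Finding tracks an issue by a stable fingerprint so it can be matched across scans.
type Finding struct {
    Fingerprint string
    Status      string // "open" or "resolved"
}

// autoResolve closes open findings whose fingerprint no longer appears in the
// latest scan, so fixed issues disappear without anyone clicking "close".
func autoResolve(open []Finding, latestScan map[string]bool) []Finding {
    for i, f := range open {
        if f.Status == "open" && !latestScan[f.Fingerprint] {
            open[i].Status = "resolved"
        }
    }
    return open
}

func main() {
    open := []Finding{{Fingerprint: "insecure-cookie-samesite:api/session.go:42", Status: "open"}}
    latest := map[string]bool{} // the fixed finding no longer shows up in the new scan
    fmt.Println(autoResolve(open, latest))
}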
The road ahead
With the foundation in place, we’re pushing Security Pipeline as Code further in three areas:
Dependency lifecycle and health management: We will proactively flag dependencies approaching end-of-life before they become unpatched security risks, with predictive SLAs driven by EOL dates, and score dependency health using maintainer activity, project age, and governance indicators to prevent the introduction of unhealthy dependencies.
Extended supply chain artifact scanning: We will extend supply chain scanning to AI and data-native assets, including serialized model and agent files and Jupyter notebooks, to detect malicious content, risky dependencies, secrets, and unsafe execution patterns in AI and data workflows.
AI-assisted remediation: Going beyond finding issues to automatically suggesting fixes, generating pull requests for common patterns, providing context-aware remediation that understands business logic, and learning from developer fixes to improve suggestions over time.
Conclusion
Security as a Platform isn't just automation; it's about scaling security knowledge.
Incidents, bug bounty findings, and penetration tests often translate into guardrails that help protect our repositories. Security knowledge moves from Slack messages and wiki pages into code that runs automatically, teaching developers at the moment they need it.
The mindset shift is simple but profound: security tooling should teach, not just block. When security is built as a continuously improved platform that delivers fast, high-signal, actionable guidance directly in developer workflows, it stops being the team that says “no” and becomes a force multiplier: developers learn secure patterns, incident learnings turn into guardrails, and security scales with the organization instead of slowing it down.
The patterns we keep coming back to are simple:
Controls live in infrastructure: Security is delivered through standard building blocks.
The default path for developers has to be the secure one: Stack-specific guidance instead of abstract “best practices.”
Feedback is fast and contextual: Findings show up in the tools engineers already use, with enough context and guidance to fix issues without context switching.
Acknowledgments
Security as a Platform is a collaboration across Plaid's Security (Mikias Emanuel, Stephanie Ginovker), Developer Efficiency, and Infrastructure teams. Special thanks to our engineering partners for their expertise, patience during rollout, and continuous feedback that shaped this system.
If you’re interested in solving these kinds of problems at scale, explore careers at Plaid and help power the next generation of fintechs and turn data into revolutionary financial products.