
May 08, 2025
Transforming Engineers: How we grew AI coding adoption at Plaid
Clay Allsopp & Joshua Carroll
AI vastly improves knowledge work by generating ideas, reasoning through problems, doing research, writing prose and code, and editing. At Plaid, we’ve embraced this shift, but adoption isn’t automatic - it requires deliberate effort and strategy. One of our top priorities this year is to help all Plaid engineers become AI power users. The problem facing us was how to change the habits of hundreds of highly effective engineers while keeping our ship moving.
Over the past six months, we were able to scale our AI engineering program by:
Getting more than 75% (and climbing!) of our engineers to use advanced AI coding tools regularly.
Streamlining our procurement process to start new AI tool pilots in days instead of weeks.
Running a highly successful internal “AI Day” with 80%+ engineering participation and 90%+ CSAT.
Here are some of the strategies and takeaways from the start of our journey. We expect these learnings to be most useful for engineering leaders in mid-to-large tech organizations who are directly driving AI productivity initiatives or championing/sponsoring those efforts.
TL;DR:
Pilot smart & fast: Streamline AI tool pilots by focusing on quick value signals, proactively aligning with legal/security, and quickly addressing any onboarding friction.
Own adoption: Scaling AI tools requires dedicated ownership, data tracking (usage/retention), and direct user outreach.
Relevance is key: In-house short-form videos, organic sharing about what works (or doesn't) with our specific codebase, and targeted outreach to line managers have all been highly effective.
Complement, don't replace: Position AI tools as additions to, rather than replacements for, engineers' existing favorite IDEs to ease adoption.
Not done yet: AI tools make mistakes and aren’t a great fit for every task. We’re continuing to evolve how we measure their benefits and downsides to our team.
Recent growth in active AI coding use, including a spike from our recent AI Day
Streamline AI tool piloting
After a few iterations of more and less successful AI tool pilots, we landed on principles and best practices that have streamlined our overall approach. Because the AI tool landscape moves far faster than traditional software and services, we’ve had to keep adapting how we evaluate and roll out new tools.
First, we prefer piloting tools where we can get a strong value signal very quickly. We’ve found that there’s still a lot of variability in the quality/performance of tools in the market. For instance, with coding tools, we first do real development on large open-source or public projects to assess baseline quality before beginning any procurement. We mostly avoid anything that requires a significant internal infrastructure setup or other engineering pre-work.
Second, as a company operating in the regulated consumer finance space, we’ve done more upfront work with our internal legal and security partners to align on core principles for handling the nuances and challenges of AI tools, especially at the pilot stage. We developed a framework that classifies each tool based on its inputs (what kind of data is being sent, and where it is sent) and outputs (what we do with the results, and what legal or compliance implications they carry) to determine what level of review it needs before piloting.
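As a rough illustration (not Plaid’s actual framework), here is a minimal Python sketch of how an input/output classification could map to a review tier. The categories, names, and review tiers below are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class DataSensitivity(Enum):
    """Illustrative classes for what a tool receives as input."""
    PUBLIC = 1          # open-source code, public docs
    INTERNAL = 2        # proprietary code, internal docs
    RESTRICTED = 3      # anything touching customer or financial data


class OutputUse(Enum):
    """Illustrative classes for what we do with a tool's output."""
    EXPLORATORY = 1     # prototypes, throwaway analysis
    PRODUCTION = 2      # code or content that ships to customers


@dataclass
class ToolPilot:
    name: str
    input_sensitivity: DataSensitivity
    output_use: OutputUse

    def review_level(self) -> str:
        """Map the input/output classification to a review tier."""
        if self.input_sensitivity is DataSensitivity.RESTRICTED:
            return "full legal + security review before any pilot"
        if self.output_use is OutputUse.PRODUCTION:
            return "security review plus standard legal terms"
        return "lightweight review; pilot can start quickly"


# Example: a coding assistant evaluated only on public repos.
print(ToolPilot("example-assistant", DataSensitivity.PUBLIC,
                OutputUse.EXPLORATORY).review_level())
```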
Adoption needs an owner
“Enable it in Okta, and they will come” is not enough. After we completed a pilot, we would send out a message announcing it was generally available internally, but we’d find that adoption quickly plateaued. We convened a core team to solve this problem for current and future tools.
This team treated adoption like a product of its own, starting with a basic internal dashboard. It let us track not just usage over time but also retention by cohort and by team, which surfaced trends worth investigating, such as a tool that seemed to work well for frontend engineers but not for infrastructure teams.
We track internal retention and identify trouble spots at the org and team level.
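For the curious, here is a minimal sketch of the kind of cohort-retention rollup that sits behind a dashboard like this, assuming a simple export of per-user usage events. The table and column names are hypothetical, not our actual schema.

```python
import pandas as pd

# Hypothetical export of tool-usage events: one row per user per active week.
events = pd.DataFrame({
    "user": ["a", "a", "b", "b", "c"],
    "team": ["infra", "infra", "frontend", "frontend", "frontend"],
    "week": pd.to_datetime([
        "2025-01-06", "2025-01-13", "2025-01-06", "2025-01-20", "2025-01-13",
    ]),
})

# Cohort = the week a user first used the tool.
first_week = events.groupby("user")["week"].min().rename("cohort_week")
events = events.join(first_week, on="user")
events["weeks_since_start"] = (events["week"] - events["cohort_week"]).dt.days // 7

# Retention: the share of each cohort still active N weeks after starting.
cohort_size = events.groupby("cohort_week")["user"].nunique()
active = (
    events.groupby(["cohort_week", "weeks_since_start"])["user"]
    .nunique()
    .unstack(fill_value=0)
)
retention = active.div(cohort_size, axis=0)
print(retention.round(2))
```

The same rollup can be grouped by team instead of cohort to spot the org-level trouble spots mentioned above.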
The ownership mindset pushed us to do things that wouldn’t normally scale, like messaging every churned user and diving deep into what they were happy or unhappy with in specific tools. At the end of the day, having a total addressable market of a few hundred customers frees you to do things that don’t scale!
Create your own content
Every AI tool has its own videos and articles, but none of them show exactly how it works on your own systems. We found that a small amount of in-house content showing common workflows in action went a long way toward getting more engineers to try the tools organically. Even a quick demo of agentic tools writing and debugging unit tests, running on our actual code and tooling, helped activate more of our engineers.
This also creates positive feedback loops. We encouraged engineers to share their wins (and losses!) via Slack, which is its own form of in-house content creation that helps drive more adoption.

Clay published a series of 1-3 minute internal short-form videos on Cursor, which were a huge hit.
We also saw in the data that the most engaged teams often had engineering managers who were already excited and knowledgeable about AI themselves. Based on that insight, we began targeting EMs more directly with training content and Plaid-specific examples that show (not tell) why AI tools are impactful. We also used our adoption analytics to surface insights directly in the reports managers already use to track their team’s work.
Adoption requires complementing, not replacing
Engineers are famously passionate about their favorite code editors, so convincing them to abandon those editors entirely is an uphill battle. Instead, we found the most success by emphasizing that new AI-enabled tools can be used alongside an engineer’s current IDE and toolchain.
Sometimes, as with the move from VS Code to Cursor, it is possible to migrate usage entirely without regressions in productivity. But some IDEs we had established tooling for still have features that aren’t implemented in newer AI-native editors. For those engineers, we encouraged “dual wielding”: using the new tools in tandem with their existing, more robust IDEs. While we did improve some workflows by integrating better with our Bazel setup, we’re still on the hunt for great AI coding assistance that integrates natively into JetBrains IDEs.
AI Day
After a few months of steady iteration on our AI tools and content, Plaid held a company-wide AI Day to shake everyone out of their normal day-to-day and focus on using AI tools in dedicated workshops.
For engineering, we held a workshop to rapidly build demos with Plaid’s public APIs using AI tools, followed by more in-depth talks on the underlying engineering fundamentals of LLMs. Even though the hands-on portion was only one hour, engineers across specialties were able to build working and delightfully styled apps just by prompting (though still far from the capabilities of our customers’ products, of course!). The feedback was incredibly positive, and we also had several new internal AI coding projects come out of it.
Some of the demos from our AI Day workshop - these were helpful to show the power of AI tools, but don’t expect to see them on any app stores!
These kinds of stop-the-world events were extremely helpful in getting the last mile of adoption over the line, and they went much more smoothly because we had spent the preceding weeks preparing examples and iterating on how these tools interoperate with our existing stack.
Growing pains
While we are seeing real gains from AI tooling, we’re still building our understanding of where it falls short. Some hot spots so far include cost controls (especially for tools with variable, per-token pricing), benchmarking across tools and models, and revising our code review processes to catch AI-centric failure modes.
In particular, consistent and relevant benchmarking techniques for AI coding tools would be very helpful for larger organizations like ours. For every problem that was quickly solved with AI, we also had tasks that seemed like they should be solvable but ended up driving the AI around in circles. We are building a catalog of these tasks to use internally as reference points as foundation models improve and new tool updates arrive.
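To make “catalog” concrete, here is a minimal sketch of what an entry in such a catalog could look like. The fields, example task, and repo reference are purely illustrative, not our internal format.

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkTask:
    """One entry in a hypothetical internal catalog of reference tasks."""
    task_id: str
    description: str          # what the engineer asked the AI to do
    repo: str                 # where the task lives
    expected_outcome: str     # what a correct solution looks like
    # Latest observed result per tool/model version, e.g.
    # {"tool-x v1.2": "solved", "tool-y v0.9": "went in circles"}.
    results: dict[str, str] = field(default_factory=dict)


catalog = [
    BenchmarkTask(
        task_id="unit-test-debug-001",
        description="Fix a failing unit test caused by a flaky time mock",
        repo="internal://example-service",
        expected_outcome="Test passes deterministically without sleeps",
    ),
]

# Re-run the catalog whenever a new model or tool version ships, and
# compare the recorded results release over release.
```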
There is also not yet a consensus on how AI tools handle common functionality like version-controlled settings, filtering sensitive files out of AI context, or memory/context hints. This means that for every new tool, we have to figure out how to support these features and keep them maintained alongside our existing tools. Furthermore, tweaking these settings and context hints compounds the benchmarking gap - without consistent benchmarks, it’s hard to tell whether a given tweak actually helped.
Beyond adoption
Now that we have a framework for piloting and scaling the adoption of new AI tools, we are expanding our efforts to help engineers make the best use of the tools. This often means digging one level beyond “usage” and into what features are actually being used, and ultimately how they impact our overall productivity and reliability.
For example, we will measure whether engineers are using the full suite of agentic editing capabilities in new editors, or mostly sticking to tab completion. We believe there is a lot of room to help engineers write better prompts and integrate our codebase more deeply with the tools, but we want data to increasingly guide that work.
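As a sketch of what that feature-level measurement could look like, here is a small example that buckets usage events per engineer. The event names are hypothetical; real event taxonomies differ by vendor and would need to be mapped onto buckets like these.

```python
import pandas as pd

# Hypothetical per-event usage export; real event names differ by tool.
usage = pd.DataFrame({
    "user": ["a", "a", "a", "b", "b", "c"],
    "event": ["tab_completion", "tab_completion", "agentic_edit",
              "tab_completion", "tab_completion", "agentic_edit"],
})

# Each engineer's mix of agentic editing vs. tab completion.
mix = (
    usage.groupby(["user", "event"]).size()
    .unstack(fill_value=0)
    .pipe(lambda counts: counts.div(counts.sum(axis=1), axis=0))
)
print(mix.round(2))

# Engineers who have never tried agentic editing are good candidates
# for targeted enablement content.
print(list(mix.index[mix["agentic_edit"] == 0]))
```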
There are many other non-coding tasks our engineers do every day that AI can support. We deliberately focused our initial effort on coding to deliver quick impact and organizational wins, but we’re starting to look more at other areas of the SDLC, like incident response, data analysis, and code review.
We have our work cut out for us, but now at least we have AI to help along the way.