How Nubank refactors millions of lines of code to improve engineering efficiency with Devin

8x
engineering time efficiency gain
20x
cost savings

Overview

One of Nubank’s most critical, company-wide projects for 2023-2024 was the migration of their core ETL — an 8-year-old, multi-million-line monolith — into sub-modules. To handle such a large refactor, their only option had been a multi-year effort that distributed repetitive refactoring work across more than one thousand of their engineers. With Devin, this changed: engineers were able to delegate their migrations to Devin and achieve an 8-12x efficiency improvement in terms of engineering hours saved, as well as over 20x cost savings. The Data, Collections, and Risk business units, among others, verified and completed their migrations in weeks instead of months or years.

The Problem

Nubank was born into the tradition of centralized ETL FinServ architectures. For years, the monolithic architecture served Nubank well — it enabled the developer autonomy and flexibility that carried them through their hypergrowth phases. After 8 years, however, Nubank’s sheer volume of customer growth, along with geographic and product expansion beyond their original credit card business, led to an entangled, behemoth ETL with countless cross-dependencies and no clear path to continued scaling.

For Nubankers, business-critical data transformations took increasingly long to run, with dependency chains as deep as 70 levels and few formal agreements on who was responsible for maintaining what. As the company continued to grow, it became clear that the ETL would be a primary bottleneck to scale.

Nubank concluded that there was an urgent need to split their monolithic ETL repository, which had amassed over 6 million lines of code, into smaller, more flexible sub-modules.

Nubank’s code migration was filled with the monotonous, repetitive work that engineers dread. Moving each data class implementation from one architecture to another while tracing imports correctly, performing multiple delicate refactoring steps, and accounting for any number of edge cases was highly tedious, even to do just once or twice. At Nubank’s scale, however, the total migration scope involved more than 1,000 engineers moving ~100,000 data class implementations over an expected timeline of 18 months.

In a world where engineering resources are scarce, such large-scale migrations and modernizations become massively expensive, time-consuming projects that distract from any engineering team’s core mission: building better products for customers. Unfortunately, this is the reality for many of the world’s largest organizations.

The Decision: an army of Devins to tackle subtasks in parallel

At project outset in 2023, Nubank had no choice but to rely on their engineers to perform code changes manually. Migrating one data class was a highly discretionary task, with multiple variations, edge cases, and ad hoc decision-making — far too complex to be scriptable, but high-volume enough to be a significant manual effort.

Within weeks of Devin’s launch, Nubank identified a clear opportunity to accelerate the refactor at a fraction of the engineering hours. Migration and large refactoring tasks are often fantastic projects for Devin: after investing a small, fixed cost to teach Devin how to approach sub-tasks, Devin can complete the migration autonomously, with a human kept in the loop only to manage the project and approve Devin’s changes.

The Solution: Custom ETL Migration Devin

A task of this magnitude, with its vast number of variations, was a ripe opportunity for fine-tuning. The Nubank team collected examples of previous migrations their engineers had done manually; some were fed to Devin for fine-tuning, and the rest were used to create a benchmark evaluation set. Against this evaluation set, we observed a doubling of Devin’s task completion scores after fine-tuning, as well as a 4x improvement in task speed: roughly 40 minutes per sub-task dropped to 10. This made the whole migration far cheaper and less time-consuming, freeing the company to devote more energy to new business and new value creation.

Devin contributed to its own speed improvements by building classical tools and scripts for itself, which it later used on the most common, mechanical components of the migration. For instance, detecting the country extension of a data class (either ‘br’, ‘co’, or ‘mx’) from its file path was a multi-step process in each sub-task. Devin’s script turned this into a single-step executable — an improvement that added up immensely across tens of thousands of sub-tasks.
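As an illustration, a single-step country detector like the one described might look like the minimal Python sketch below. The path layout, regex, and function name are assumptions for illustration, not Nubank’s actual tooling:

```python
import re

# Hypothetical helper: infer the country extension ('br', 'co', or 'mx')
# of a data class from its file path in a single step. The directory
# layout matched here is illustrative, not Nubank's actual repo structure.
COUNTRY_PATTERN = re.compile(r"(?:^|/)(br|co|mx)(?:/|$)")

def detect_country(file_path: str) -> str:
    match = COUNTRY_PATTERN.search(file_path)
    if match is None:
        raise ValueError(f"no country extension found in path: {file_path}")
    return match.group(1)
```

Wrapping a check like this in a script collapses a repeated, multi-step lookup into one call per sub-task, which is exactly the kind of mechanical saving that compounds at scale.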

There is also a compounding advantage to Devin’s learning. In the first weeks, it was common to see outstanding errors to fix, or small issues Devin wasn’t sure how to solve. But as Devin saw more examples and gained familiarity with the task, it avoided rabbit holes more often and found faster solutions to previously seen errors and edge cases. Much like a human engineer, Devin showed clear speed and reliability improvements with every day it worked on the migration.

Results: Delivering an 8-12x faster migration, lifting a burden from every engineer, and slashing migration costs by 20x.

“Devin provided an easy way to reduce the number of engineering hours for the migration, in a way that was more stable and less prone to human error. Rather than engineers having to work across several files and complete an entire migration task 100%, they could just review Devin’s changes, make minor adjustments, then merge their PR.”

Jose Carlos Castro, Senior Product Manager

8-12x efficiency gains: calculated by comparing the typical engineering hours required to complete a data class migration task against the total engineering hours spent prompting and reviewing Devin’s work on the same task.
Over 20x cost savings on the scope of the migration delegated to Devin: calculated by comparing the cost of running Devin against the hourly cost of an engineer completing the same task. These savings are driven by the speed of task execution and the cost-effectiveness of Devin relative to human engineering time; they do not even account for the value of completing the entire project months ahead of schedule.
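Both headline numbers are simple ratios. The sketch below encodes the arithmetic as described; the inputs in the usage example are hypothetical placeholders, not Nubank’s reported figures:

```python
# Sketch of the two ratios described above. All inputs are hypothetical
# placeholders for illustration, not Nubank's reported figures.

def efficiency_gain(manual_hours: float, prompt_and_review_hours: float) -> float:
    """Engineering-hour efficiency: hours a fully manual migration task takes,
    divided by hours spent prompting and reviewing Devin on the same task."""
    return manual_hours / prompt_and_review_hours

def cost_savings(engineer_hourly_cost: float, manual_hours: float,
                 devin_task_cost: float) -> float:
    """Cost ratio: cost of completing a task fully manually, divided by
    the cost of running Devin on the same task."""
    return (engineer_hourly_cost * manual_hours) / devin_task_cost
```

For example, a task that takes 40 engineer-hours manually but only 5 hours of prompting and review yields an 8x efficiency gain under this definition.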
Fewer dreaded migration tasks for Nubank engineers

How FE fundinfo Scaled Eng Capacity with AI-Driven Automation Across 1,800 Repos

10%
immediate increase in engineering capacity from automated test generation, security fixes, and modernization work
2-4x
projected engineering capacity increase over the next 2-5 years as Devin expands across the full SDLC
3 days
of manual QA work saved every two weeks through automated testing tools
1,800
repositories managed with automated Devin playbooks via a custom Replit app

About the company

FE fundinfo is a leading financial data company connecting the investment industry in the UK, Europe and Asia Pacific through a single integrated platform, Nexus. Founded in 1996, the group operates in over 15 countries, with more than 1,200 employees and 200+ expert engineers driving innovation.

Industry: Investment fund data & technology services

Overview

With over 1,200 employees across 16 countries, FE fundinfo processes 311,000 share classes, collects 1.5 million documents monthly, and produces over 2.5 million regulatory documents annually, supported by 250+ dedicated data experts. Over 200 expert engineers support 70 product types across approximately 1,800 active code repositories.

As FE fundinfo’s platform expanded, engineering teams faced mounting pressure from maintenance work, security updates, dependency upgrades, testing coverage gaps, and technical debt remediation. Over time, this work began to crowd out new feature development.

The team had experimented with AI coding assistants embedded in IDEs, but saw limited productivity gains. While useful for local code suggestions, these tools lacked deep codebase understanding, required frequent developer intervention, and often failed to complete tasks end-to-end.

Why Devin

Richard Thorpe, Head of Engineering at FE fundinfo, decided to pilot Devin — an autonomous AI software engineer designed to understand large codebases and execute work independently.

The goal was not incremental improvement, but a step-change: offloading engineering toil and enabling the organization to significantly increase engineering capacity.

“Devin’s understanding of our codebases is substantially better than some other AI systems our teams use. It’s a very nuanced difference until you compare them side by side.”

Richard Thorpe, Head of Engineering, FE fundinfo

Driving Adoption: A New Engineering Mindset

Richard quickly realized that success with Devin required more than tool adoption — it required a mindset shift. Engineers needed to evolve from pure executors into coordinators who could effectively delegate work to an AI software engineer.

To support this transition, Richard developed a scoring system that trained engineers on how to work with Devin. Engineers were measured on dimensions such as:

  • Specificity of requirements
  • Amount of back-and-forth required between human and agent
  • Whether Devin's additions improved the codebase

This structured approach helped teams learn how to delegate effectively, making Devin adoption a quantifiable skill that could be taught, tracked, and optimized.
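A rubric like the one described could be encoded as a simple weighted score. The sketch below is illustrative only; the dimension names and weights are assumptions, not FE fundinfo’s actual scoring system:

```python
# Illustrative encoding of a delegation-scoring rubric like the one
# described. Dimension names and weights are assumptions, not FE fundinfo's.

WEIGHTS = {
    "requirement_specificity": 0.4,  # how precise the initial prompt was
    "low_back_and_forth": 0.3,       # fewer correction rounds scores higher
    "codebase_improvement": 0.3,     # did the merged change improve the code?
}

def delegation_score(ratings: dict) -> float:
    """Weighted average of per-dimension ratings on a 0-10 scale."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)
```

Tracking a number like this per session is one way delegation becomes a quantifiable skill that can be taught and optimized over time.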

FE fundinfo’s Approach At A Glance

AI Use Cases:

  • Framework upgrades
  • Automating testing
  • Security remediation
  • Project planning

Rollout approach:

FE fundinfo built a scoring system to train engineers to work effectively with AI software engineers, tracking usage and outcomes.

Key Outcomes:

  • 10% immediate increase in engineering capacity from automated test generation, security fixes, and modernization work
  • Projected 2-4x engineering capacity increase over the next 2-5 years as Devin expands across the full SDLC
  • 3 days of manual QA work saved every two weeks through automated testing tools
  • 1,800 repositories managed with automated Devin playbooks via custom Replit app
  • End-to-end automation achieved for low-risk changes with auto-merged PRs
  • Engineers evolve from executors to coordinators, focusing on high-value strategic work while Devin handles engineering toil

Results: Unlocking New Engineering Capacity Through Automation

With Devin in place, FE fundinfo began tackling long-standing backlogs that had accumulated over years. Developers were freed up to focus on new feature development and higher-level system design, while Devin took on engineering toil, including:

  • Test generation – Building comprehensive test suites that previously would have required weeks of manual effort
  • Security fixes – Rapidly addressing vulnerabilities across multiple repositories
  • Dependency and framework upgrades – Keeping systems current without diverting engineering resources
  • Large-scale modernization projects – Systematically updating legacy code across the entire platform

To maximize Devin’s impact, the team built an automation system using Replit that runs Devin “playbooks” across all 1,800 repositories. This system automatically:

  • Finds repos needing updates
  • Triggers Devin sessions
  • Tracks progress and handles errors
  • Auto-merges pull requests for low-risk changes (like documentation updates) without human review

This enabled true end-to-end automation, allowing Devin to complete entire workflows from identification through deployment with minimal human intervention.
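The orchestration loop described above might be sketched roughly as follows. All function names, playbook names, and the control flow are assumptions; the real system is built on Replit and the Devin API, whose details the article does not show:

```python
# Hypothetical sketch of a playbook-orchestration loop. The helper
# functions below are stubs standing in for real integrations (repo
# scanning, the Devin API, and the Git host); names are illustrative.

LOW_RISK_PLAYBOOKS = {"update-docs"}  # changes safe to merge without review

def needs_update(repo, playbook):
    return True  # stub: the real system inspects the repo's contents

def trigger_devin_session(repo, playbook):
    return {"repo": repo, "playbook": playbook}  # stub: calls the Devin API

def open_pull_request(session):
    return {"repo": session["repo"]}  # stub: PR opened by the Devin session

def run_playbook(playbook, repos):
    """Run one playbook across many repos, auto-merging only low-risk changes."""
    results = {}
    for repo in repos:
        if not needs_update(repo, playbook):
            results[repo] = "skipped"
            continue
        session = trigger_devin_session(repo, playbook)
        pr = open_pull_request(session)
        # Low-risk changes (e.g. documentation) merge without human review;
        # everything else waits for an engineer's approval.
        results[repo] = ("auto-merged" if playbook in LOW_RISK_PLAYBOOKS
                         else "awaiting-review")
    return results
```

The key design choice is the risk gate: the loop itself is fully automated, but only an allow-listed class of changes skips human review.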

From Coding to Complete Workflows: Reimagining Software Delivery

FE fundinfo is now extending Devin beyond coding into the rest of the software development lifecycle, especially project planning.

QA engineers emerged as some of Devin’s strongest users, leveraging it to transform their testing workflows. One engineer built a tool that saved three days of manual work every two weeks, while others used Devin to convert freeform tests into stable, automatable suites that could run regularly and consistently. These improvements significantly increased testing coverage while reducing manual effort — allowing QA teams to focus on exploratory testing and strategic quality initiatives.

Richard believes this expansion across different functions and stages of the SDLC will drive further increases in engineering capacity.

The team now uses Devin not only to implement changes, but also to plan projects. In one emerging workflow, a product owner submits a problem statement, which is then passed to Devin. Devin returns a proposed project plan, identifying which tickets should be tackled by a human and which by Devin itself. A human reviews and approves the plan, after which Devin assigns out tickets and begins execution.

Richard envisions this approach could compress the SDLC to three core steps:

  • Idea-to-requirements (with Devin expanding specifications)
  • Code implementation and testing (largely Devin-led)
  • Deployment

Low-risk changes could pass straight to production; medium-complexity work would require human PR review; and high-complexity projects would involve humans in both planning and review.

“If we can open up the entire pipeline — not just coding — we can unlock more ideas, ship more features and grow the business faster.”

Richard Thorpe, Head of Engineering, FE fundinfo

What’s Next: 2-4x Engineering Capacity

As Devin becomes more deeply embedded across the SDLC, engineering roles at FE fundinfo are shifting from execution to coordination. Developers increasingly partner with Devin: they delegate routine work, focus their own time on the most complex and creative challenges, then review Devin’s output.

Richard Thorpe believes this new operating model could enable the engineering organization to double, if not quadruple, its output over the next 2-5 years.

This transformation goes beyond productivity gains. By offloading engineering toil and expanding capacity, FE fundinfo can accelerate feature delivery, reduce technical debt faster, and respond more quickly to market opportunities — strengthening its competitive position in the investment technology space.