If there’s one universal experience with AI-powered code development tools, it’s how they feel like magic until they don’t.
One moment, you’re watching an AI agent slurp up your codebase and deliver a remarkably sharp analysis of its architecture and design choices. And the next, it’s spamming the console with “CoreCoreCoreCore” until the scroll-back buffer fills up and you’ve run out of tokens.
As AI-powered coding and development tools advance, we’ve formed a clearer sense of what they do well, do badly, and in some cases, should not do at all. Theoretically, they empower developers by doing the kind of work that would otherwise be tedious or overwhelming: generating tests, refactoring, creating examples for documentation, etc. In practice, such “empowerment” often comes at a cost. What the AI makes easier up front only makes things harder later on.
One golden-dream scenario I’ve mulled over is using AI tools to port code from one language to another. If I’d spun up a Python project, then decided later to migrate it to Rust, would an AI agent put me in the driver’s seat faster? Or could it at least ride shotgun with me?
A question like that deserves a hands-on answer—yes, even if I ended up burning my fingers doing it. So, here’s what happened when I tried using Claude Code to port one of my Python projects to Rust.
Project setup and why I chose Rust
I decided to try porting a Python-based blogging system I wrote, a server-side app that generates static HTML and provides a WordPress-like interface. I chose it, in part, because it has relatively few features: a per-blog templating system, categories and tags, and an interface that lets you write posts in HTML, with a rich-text editor, or via plaintext Markdown.
I made sure all the features—the templating system, the ORM, the web framework—had one or more parallels in the Rust ecosystem. The project also included some JavaScript front-end code, so I could potentially use it to test how well the tooling dealt with a mixed codebase.
I chose Rust as the porting target largely because Rust’s correctness and safety guarantees come at compile time, not runtime. I reasoned the AI ought to benefit from useful feedback from the compiler along the way, which would make the porting process more fruitful. (Hope springs eternal, right?)
For the AI, I initially chose Claude Sonnet 4.5, then had to upgrade to Claude Sonnet 4.6 when the older version was abruptly discontinued. I also used Google’s Antigravity IDE, which I’ve previously reviewed.
The first directive
I made a copy of my Python codebase directory, opened Antigravity there, and started with a simple directive:
This directory contains a Python project, a blogging system. Examine the code and devise a plan for how to migrate this project to Rust, using native Rust libraries but preserving the same functionality.
After chewing on the code, Claude recommended the following components as part of the plan to “transition to a modern, high-performance Rust stack”:
- Axum for the web layer.
- SeaORM for database interactions.
- Tera for templating.
- Tokio for asynchronous task handling (replacing Python’s multiprocessing).
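As a rough sketch of what that stack looks like in practice, a Cargo.toml for such a project might declare dependencies along these lines (the crate names are real, but the version numbers are illustrative, not what Claude actually pinned):

```toml
[dependencies]
axum = "0.7"        # web layer: routing and request handlers
sea-orm = "1"       # async ORM for database interactions
tera = "1"          # Jinja2-style templating engine
tokio = { version = "1", features = ["full"] }  # async runtime
```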
Claude didn’t have any obvious difficulties finding appropriate substitutes for the Python libraries, or with mapping operations from one language to the other—such as using tokio for async to replace Python multiprocessing. I suspect part of what made this stage relatively easy was the design of my original program, which didn’t rely on tricky Python features like dynamic imports. It also helped that Claude proceeded by analyzing and re-implementing program behaviors rather than individual interfaces or functions. (This approach also had some limitations, which I’ll discuss below.)
I looked over the generated plan and noted it didn’t include creating any placeholder data for a newly initialized database—a sample user, a blog with a sample post in it, and so on. Claude added this in, and I confirmed it worked by restarting the program and inspecting the created database. So far, so good.
A few missing pieces
The next stage involved discovering just how much Claude didn’t do. Despite having discovered and rebuilt my app’s core page-rendering logic, it didn’t create any of the user-facing infrastructure—the admin panel for logging in and for editing and managing posts. Admittedly, though, my instructions said nothing about that interface. Should I blame Claude for not being diligent enough to look, or blame myself for not being explicit in my original instructions? Either way, I pointed out the omission and got back a plan for doing that work:
I'm now addressing the missing Admin UI by analyzing the original Bottle templates and planning their migration to Tera, including the login screen and main dashboard.
Note: Bottle was the web framework I used for my Python project. This formed a test of its own: How well would Claude cope with migrating from a lesser-known library? This by itself turned out not to be a significant issue, but far bigger problems lurked elsewhere.
It was at this point that the bulk of my back-and-forth with Claude began. For developers already working with AI tools, this cycle will be familiar: the prompt → generate → test → re-prompt loop. Basically, I’d have Claude implement some missing feature (in this case, each element of the admin UI), fire up the program to test it for myself, run into various errors or omissions, and cajole Claude into fixing them.
The first problem I found in the admin UI was an uncaught runtime error from the web templates, something not addressed by Rust’s compile-time checking. Then the login page for the admin panel came up blank. Then, after the login page worked, it led me to a placeholder page reading “Login logic not yet implemented.” Then the logic for accepting usernames and passwords proved faulty. And so on, all through each of the original application components.
Major and minor malfunctions
In some ways, using Claude to migrate code mirrored my previous experiences with rewriting or migrating programs without an AI tool. Everything proceeded in fits and starts, and things that seemed like they ought to be simple turned out to be unexpectedly difficult. Occasionally the reverse happened: Something I thought would be challenging resolved itself quickly, which was a nice surprise.
One of the big differences, however—which will be familiar to anyone who’s used AI coding tools—was how Claude would sometimes malfunction and require major intervention to get back on track. At one point, when I prompted Claude to continue working on implementing the admin page for each blog, it malfunctioned and started printing CoreCoreCoreCoreCore... to the console. I ended up with hundreds of lines, and Core also started showing up randomly at the end of various responses:
Implementing Blog Detail and Post ListCoreCore
I've implemented the blog detail and post management features, fixed type mismatches, and updated the project artifacts. I'm now performing a deep dive into the remaining compilation error by capturing the complete cargo check output to ensure the system is production-ready.CoreCoreCoreCore
Then, I started getting warnings that the model’s generation exceeded the maximum output token limit. The problem cleared up after I restarted the session the next day, but after that I eyeballed all the outputs for similar weird glitches.
Something else I noticed was that Claude would begin with untested assumptions about its environment and correct them only after stubbing its toes, and even then the correction didn’t always stick. For instance, it tended to issue shell commands in bash syntax, error out, realize it was using PowerShell, and only then issue a proper command.
This is a common pattern with AI code tools, I’ve noticed: They tend to only do as much planning as you tell them to, and it’s easy to forget some of the details that need to be mapped out. The more persistently you define things for the model, the more consistent the results will be. (Note that more consistent is just that: not always or perfectly consistent.)
Finally, inspecting the generated code by hand revealed many ways Claude ignored the intentions of the original code. For instance, in my original Python program, all the routes for the web UI had a login-validation decorator. If you weren’t logged in, you got bounced to a login page. Claude almost completely failed to honor this pattern in the generated code. Almost every route on the admin UI—including those that performed destructive actions—was completely unprotected from unauthorized use.
Also, when those routes did have validation, it came in the form of a boilerplate piece of code inserted at the top of the route function, instead of something modular like a function call, a decorator, or a macro. I don’t know if Claude didn’t recognize the original Python decorator pattern for what it was, or didn’t have a good idea for how to port it effectively to Rust. Either way, Claude didn’t even mention the omission; I had to discover it for myself the hard way.
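For comparison, the Python pattern Claude glossed over is the classic login-required decorator. What follows is a hedged, self-contained reconstruction rather than the actual project code; `require_login` and the string-based redirect are stand-ins for Bottle’s real session and redirect machinery:

```python
import functools

def require_login(route):
    """Bounce unauthenticated requests to the login page, once, for every route."""
    @functools.wraps(route)
    def wrapper(session, *args, **kwargs):
        if not session.get("user"):
            return "REDIRECT:/login"  # stand-in for a real HTTP redirect
        return route(session, *args, **kwargs)
    return wrapper

@require_login
def delete_post(session, post_id):
    # Destructive action: must never run for an anonymous session.
    return f"deleted post {post_id}"
```

With this in place, `delete_post({}, 42)` short-circuits to the login redirect, while `delete_post({"user": "admin"}, 42)` runs the route body. The idiomatic Rust translation would be something equally modular in spirit—for instance, an Axum extractor or middleware layer—rather than a check copied and pasted into the top of every handler.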
Three takeaways
After a few days of push-and-pull with Claude, I migrated a fair amount of the original app’s functionality to Rust, then decided to pause and take stock. I came up with three major takeaways.
1. Know the source and target
Using tools like Claude to migrate between languages doesn’t mean you can get away with not knowing both the source and target languages. If you are not proficient in the language you’re migrating from or to, you might ask the agent to clarify things and get some help there. But that isn’t a substitute for being able to recognize when the generated code is problematic. If you don’t know what you don’t know, Claude won’t be much help to you.
I’m more experienced with Python than I am with Rust, but I had enough Rust experience to a) know that just because Rust code compiles doesn’t make it unproblematic and b) recognize missing logic in the code—such as the lack of security checks in API routes. My takeaway is that many of the issues in porting between languages won’t be big, obvious ones, but subtler issues that demand knowing both domains well. Automation might augment experience, but it can’t replace it.
2. Expect to iterate
As I mentioned before, the more explicit and persistent your instructions are, the more likely you’ll get something resembling your intentions. That said, it’s unlikely you’ll get exactly what you want on the first, second, third, or even fourth try—not even for any single aspect of your program, let alone the whole thing. Mind reading, let alone accurate mind reading, is still quite a way off. (Thankfully.)
A certain amount of back-and-forth to get to what you want seems inevitable, especially if you are re-implementing a project in a different language. The benefit is you’re forced to confront each set of changes as you go along, and make sure they work. The downside is the process can be exhausting, and not in the same way making iterative changes on your own would be. When you make your own changes, it’s you versus the computer. When the agent is making changes for you, it’s you versus the agent versus the computer. The determinism of the computer by itself is replaced by the indeterminism of the agent.
3. Take full responsibility for the results
My final takeaway is to be prepared to take responsibility for every generated line of code in the project. You cannot decide that just because the code runs, it’s okay. In my case, Claude may have been the agent that generated the code, but I was there saying yes to it and signing off on decisions at every step. As the developer, you are still responsible—and not just for making sure everything works. It matters how well the results utilize the target language’s metaphors, ecosystem, and idioms.
There are some things only a developer with expertise can bring to the table. If you’re not comfortable with the technologies you’re using, consider learning the landscape first, before ever cracking open a Claude prompt.