ROBO-PAT: Building My Own AI Twin
There's a strange asymmetry in modern hiring.
Putting yourself out on the job market is tough these days. There are so many AI tools out there, but they tend to be one-sided, focused on the employer or the recruiter. The job seeker still does what they've always done: create a CV, write a cover letter, and maybe answer AI screening questions or interview with a robot.
But what if you, the job seeker, had your own personal AI agent? One that you could load up with your CV, with smart answers to common interview questions, and who could handle all of that for you through a simple chat interface?
That idea became ROBO-PAT: a Recruiter-Oriented Bio & Overview Platform AI Twin. Which is a very laboured backronym, and I'm not apologising for it.
Why I built it
This was a weekend project, or more accurately a series of weekends, driven by curiosity and a growing sense that I needed to get my hands dirty with AI development rather than just read about it.
In my day job, I spend most of my time on strategy, planning, and ways of working. I'm further from the code than I used to be. But the pace of change in software development is hard to grasp from a distance. Talking to developers about how their work is shifting is useful, but there's a ceiling on how much you can understand without actually building something.
So I built something.
One thing that made it practical: GitHub Codespaces. Having a full development environment in the browser meant I wasn't tied to my desk. A lot of this was built on school holidays, on a laptop, in hotel rooms after the kids had gone to sleep or while they were off at sports. The portability meant I could pick it up and put it down without losing context. If you haven't tried Codespaces for a side project, it's worth it.
How it works
ROBO-PAT is a web app with a simple premise: a recruiter enters a personalised invite code, then has a conversation with an AI that answers their questions about me. It draws from a knowledge base I've curated: my work history, how I think about problems, answers to questions I get asked often.
Under the hood, it uses a technique called Retrieval-Augmented Generation, or RAG. Rather than baking all the context into a single giant prompt (expensive and unwieldy), the system stores knowledge in small chunks. When a question comes in, it retrieves the most relevant chunks and passes them to the language model to generate a grounded, accurate answer.
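The retrieval step can be sketched in a few lines. This is a deliberately simplified version using keyword overlap to score chunks (ROBO-PAT actually uses a Claude model for ranking, as described below), but it shows the shape of the technique:

```typescript
// Minimal RAG retrieval sketch: score stored chunks against a question,
// then keep the top-k to pass to the language model as context.
// Keyword overlap stands in for a real relevance model here.

type Chunk = { id: string; text: string };

function tokenize(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function topChunks(question: string, chunks: Chunk[], k: number): Chunk[] {
  const q = tokenize(question);
  return chunks
    .map((c) => {
      const words = tokenize(c.text);
      let overlap = 0;
      for (const w of q) if (words.has(w)) overlap++;
      return { chunk: c, score: overlap };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.chunk);
}

// Toy knowledge base for illustration.
const kb: Chunk[] = [
  { id: "history", text: "I led delivery teams at a software consultancy." },
  { id: "hobbies", text: "I enjoy running and building weekend projects." },
];

console.log(topChunks("What teams have you led?", kb, 1)[0].id); // "history"
```

The retrieved chunks then get interpolated into the prompt, so the model answers from curated facts rather than from whatever it half-remembers.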
The stack: Next.js on the front end, Supabase for the database, and the Anthropic Claude API for the AI layer. I used two Claude models in a pipeline: a faster, cheaper one to rank which knowledge chunks are relevant, and a more capable one to actually compose the answer. It was a neat trick to keep costs down without sacrificing quality.
What surprised me
A few things surprised me during the process.
AI tools have come a long way. I've been playing around with AI dev tools for a few years now - an early project of mine, Can You Beat Wellington?, used Lovable (before it was even called Lovable), and I had a love/hate relationship with the tool. This time I built everything with Claude, and the difference is staggering. Sure, there were mistakes and misunderstandings, but the overall quality and hit rate of prompts was so much better.
The chunking problem. RAG sounds elegant in theory. In practice, getting your knowledge base into the right shape is fiddly. My first approach to splitting documents into chunks worked fine for prose, but broke completely for structured Q&A content. The question and answer would end up in separate chunks, which confused the retrieval logic. Fixing that required rethinking how the chunking worked for different content types. Small problem, slightly annoying to debug.
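The fix amounted to making the splitter aware of content type. A sketch of the idea (the `Q:`/`A:` markers are an illustrative format, not the one I actually use):

```typescript
// Content-aware chunking sketch: split prose on blank lines as usual,
// but glue an answer block onto the question block before it, so a
// Q&A pair always lands in a single chunk and retrieval never sees
// a question separated from its answer.

function chunkDocument(text: string): string[] {
  const blocks = text
    .split(/\n\s*\n/)
    .map((b) => b.trim())
    .filter(Boolean);

  const chunks: string[] = [];
  for (const block of blocks) {
    const prev = chunks[chunks.length - 1];
    if (block.startsWith("A:") && prev?.startsWith("Q:")) {
      // Attach the answer to its question in the same chunk.
      chunks[chunks.length - 1] = prev + "\n" + block;
    } else {
      chunks.push(block);
    }
  }
  return chunks;
}
```

With a naive blank-line splitter, `Q:` and `A:` blocks become separate chunks and the retriever can surface a question with no answer attached; keeping them together made the retrieval results usable again.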
The persona is the hard part. Getting the AI to answer as me, in my voice, with appropriate confidence, without hallucinating credentials I don't have, or giving away personal information I don't want out on the web, took more prompt engineering than I expected. A language model left to its own devices will be helpful to a fault. Teaching it to say "I'm not sure" or "that's not something I'd want to answer here" required careful instruction.
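The persona instructions ended up looking something like this - a paraphrased sketch of the shape, not the actual prompt:

```text
You are answering questions on behalf of [name], in their voice.
- Only use facts from the provided context. If the context doesn't
  cover a question, say "I'm not sure" rather than guessing.
- Never invent credentials, employers, or dates.
- Politely decline questions about personal details (address, family,
  contact information): "that's not something I'd want to answer here."
- Keep answers concise and conversational.
```

Each of those lines exists because of a failure mode I hit: without them, the model happily extrapolated.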
What I'd do differently
If I were starting over, I'd invest earlier in the knowledge base structure. The quality of the AI's answers is almost entirely determined by the quality of what you put into it. The model is just pattern-matching against your content. Garbage in, confident-sounding garbage out.
I'd also build the admin tooling earlier. I have a dashboard for managing documents, reviewing flagged questions, and seeing what recruiters actually asked, but I added a lot of that reactively. Having visibility into what's happening is invaluable when you're tuning a RAG system.
On the technical side, I'd explore vector embeddings sooner rather than using a language model to rank chunks. It's a more principled solution and would scale better if the knowledge base grew significantly.
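The embedding approach would look roughly like this. The vectors here are tiny hand-written stand-ins; in a real system they'd come from an embedding model and live in the database (Supabase supports this via pgvector):

```typescript
// Vector-embedding retrieval sketch: rank chunks by cosine similarity
// between the question's embedding and each stored chunk's embedding.
// Toy 3-dimensional vectors stand in for real embedding-model output.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

type Embedded = { id: string; vector: number[] };

function nearest(query: number[], chunks: Embedded[]): Embedded {
  return chunks.reduce((best, c) =>
    cosine(query, c.vector) > cosine(query, best.vector) ? c : best
  );
}

const store: Embedded[] = [
  { id: "work-history", vector: [0.9, 0.1, 0.0] },
  { id: "hobbies", vector: [0.1, 0.8, 0.2] },
];

console.log(nearest([0.8, 0.2, 0.1], store).id); // "work-history"
```

The appeal over model-based ranking: the similarity computation is cheap, deterministic, and doesn't cost an API call per question.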
The bigger takeaway
Building ROBO-PAT reinforced something I'd heard but not fully internalised: the hard part of AI-powered products isn't the AI. The model is remarkably capable straight out of the box. The hard part is the product: the data architecture, the user experience, the edge cases, the trust and safety considerations. Those are the same problems software has always had.
What's changed is the ceiling. With a weekend and an API key, you can build something that would have taken a team months just a few years ago. That's worth understanding from the inside.
If you'd like to try ROBO-PAT yourself, you can request instant access at chat.radomski.co.nz.