Hell hath no fury like a software engineer scorned.
That time I reverse engineered an AI startup to prove a point.

AI has been moving at the speed of light. What's routine now was rocket science a mere six months ago. Some 20 months ago, X was popping off about a popular indie developer who had built an AI startup that acted as a mental health counsellor for people in need. It was a paid service, marketed as a cheaper alternative to professional mental health care.
The discourse was... heated, to say the least.
On one side, people rightfully pointed out that calling something "TherapistAI" when therapy is a licensed, regulated profession requiring years of specialized training was, at best, misleading. At worst? Potentially exploitative of vulnerable people who might genuinely need professional help but couldn't afford it.
On the other side, the tech optimists argued that the creator had put in time and effort, built something functional, and deserved to be compensated like any other product builder. Fair point, honestly.
Me? I sat somewhere in the middle, but leaning skeptical. The whole thing felt a bit grifty—marketing to people in need with promises a chatbot fundamentally couldn't deliver. But I'll admit, I was also curious. How hard could it actually be to build something like this?
Turns out: not hard at all.
And that's how www.GrifterAI.com was born over a single weekend. Yes, the name was intentional. Yes, I'm aware of the irony.
The 36-Hour Sprint
I started on a Saturday morning with a simple goal: build a functional AI life coach that people could actually use, and do it fast enough to prove this wasn't some revolutionary technical achievement worthy of the hype it was getting.
The clock was ticking. I gave myself the weekend.
Finding the Right Model
The first step was finding and deploying the right large language model. I turned to Google's Model Garden and Vertex AI—honestly one of the better decisions I made. Model Garden gave me access to a variety of pre-trained models, and after some experimentation, I settled on Llama 70B.
Here's some context: at the time, Llama 70B was among the largest open-source models available. It was the bleeding edge. If you wanted something more powerful, you were looking at proprietary models with hefty API costs or academic research models that were a nightmare to deploy.
Fast forward to today? The landscape is unrecognizable. We've got Llama 3.1 405B, Mistral models at various sizes, Qwen, Phi, Gemma, Claude, GPT-4—an absolutely dizzying array of options at every scale and specialization. Back then, the choice was simpler because there simply weren't that many options. You picked Llama 70B and you were working with the best open-source had to offer.
It really drives home just how fast AI has been moving. What was cutting-edge 20 months ago is now just... normal. Entry-level, even.
Vertex AI made the deployment process surprisingly smooth. Within a few hours, I had the model spun up and accessible via an API. The real work, though, was in the fine-tuning.
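Once an endpoint is deployed, Vertex AI exposes it over a REST `:predict` method. Here's a hedged sketch of hitting it from Node; the project, region, and endpoint IDs are placeholders, and the exact payload shape varies by model, so treat this as illustrative rather than exact.

```javascript
// Build the REST URL for a Vertex AI endpoint's :predict method.
function buildPredictUrl({ project, region, endpointId }) {
  return `https://${region}-aiplatform.googleapis.com/v1/projects/${project}` +
         `/locations/${region}/endpoints/${endpointId}:predict`;
}

// Call the endpoint with a prompt. The access token would come from
// Google auth (e.g. `gcloud auth print-access-token`); the
// `instances: [{ prompt }]` payload is an assumption for this model.
async function predict({ project, region, endpointId, accessToken, prompt }) {
  const res = await fetch(buildPredictUrl({ project, region, endpointId }), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ instances: [{ prompt }] }),
  });
  if (!res.ok) throw new Error(`Vertex AI returned ${res.status}`);
  return (await res.json()).predictions;
}
```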
Fine-Tuning for the Use Case
Out of the box, large language models are generalists. They're great at answering questions about medieval history or debugging Python, but for something like life coaching—where tone, empathy, and conversational flow matter—you need to fine-tune.
I spent a good chunk of Saturday afternoon curating a dataset from Kaggle and running fine-tuning jobs. The goal was to make the model sound less like a corporate FAQ bot and more like someone you'd actually want to talk to when you're having a rough day. Empathetic but not patronizing. Thoughtful but not preachy.
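Supervised fine-tuning jobs typically want the data as JSONL, one example per line. A hedged sketch of what massaging Kaggle rows into that shape might look like; the `input_text`/`output_text` field names follow Vertex AI's text-tuning format at the time, but check the current docs for your model, and the preamble wording here is my invention.

```javascript
// Assumed system preamble baked into every training example.
const SYSTEM_PREAMBLE =
  "You are a warm, non-judgemental life coach. Be empathetic, not preachy.";

// Turn one raw Q&A row into one JSONL line.
function toTuningExample({ question, answer }) {
  return JSON.stringify({
    input_text: `${SYSTEM_PREAMBLE}\n\nUser: ${question}`,
    output_text: answer,
  });
}

// One line per example, newline-separated:
const rows = [
  { question: "I had a rough day at work.",
    answer: "That sounds draining. What happened?" },
];
const jsonl = rows.map(toTuningExample).join("\n");
```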
Securing the Inference API
When you're running a 70B model on cloud infrastructure, inference isn't cheap. The last thing I wanted was some random person stumbling onto my API endpoint and racking up thousands in GPU costs.
I used Google's built-in security protocols as a baseline, locking down incoming requests to just my IP. But I didn't stop there; I built additional custom security layers on top: token-based authentication, rate limiting, request validation. If you somehow got the API endpoint, you still weren't getting through without the right credentials.
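A minimal sketch of two of those layers: a shared-secret token check and a fixed-window, per-client rate limiter. Names and limits are illustrative, not the actual implementation; production code would at least want constant-time comparison and a persistent store.

```javascript
// Shared-secret check against the Authorization header.
function isAuthorized(req, apiToken) {
  return req.headers["authorization"] === `Bearer ${apiToken}`;
}

// Fixed-window rate limiter: at most maxRequests per windowMs per client.
function makeRateLimiter({ windowMs, maxRequests }) {
  const hits = new Map(); // clientId -> { count, windowStart }
  return function allow(clientId, now = Date.now()) {
    const entry = hits.get(clientId);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(clientId, { count: 1, windowStart: now }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= maxRequests;
  };
}
```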
GPUs are expensive, and I wasn't about to fund someone else's experimentation.
Architecture: Why Telegram?
By Sunday morning, I had a working model and a secure API. Now I needed to make it accessible to users. This is where most people would start building a fancy web app with React, authentication flows, responsive design, the whole nine yards.
I went a different route: Telegram.
The Case for Telegram
Telegram has over 900 million active users. Many people already have it installed and already know how to use it. There's zero friction to getting started: click a link, send a message, done.
Compare that to building a standalone web app where users need to:
Create an account
Verify their email
Remember yet another password
Navigate a new interface
Probably download a mobile app if they want it on their phone
Telegram eliminated all of that. It's unobtrusive, familiar, and just works. Plus, the Telegram Bot API is incredibly well-documented and easy to integrate.
So I built www.GrifterAI.com as an entry point—clean landing page, quick explanation—but it redirects straight to the Telegram bot. The website exists for discovery and SEO; the actual experience lives in Telegram.
The Technical Stack
For the backend, I used Node.js to handle the Telegram Bot API integration. Node's async nature made it perfect for managing multiple concurrent conversations and API calls to Vertex AI for inference.
For the database, I went with SQLite. Was it scalable to enterprise workloads? Eh. Not that I cared, honestly: for a project at this scale, SQLite is fast. No network latency, no connection pooling headaches, no overhead from running a separate database server. It's a single file, it's blazingly quick for reads and writes, and it was exactly what I needed. I'd love to be in a position where I had to worry about database scaling. For now, this worked.
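The schema didn't need to be fancy either. A hypothetical sketch (table and column names are my guesses, not the actual ones) of what a single-file conversation log might look like:

```sql
-- One row per Telegram user, tracking their free-message allowance.
CREATE TABLE IF NOT EXISTS users (
  telegram_id   INTEGER PRIMARY KEY,
  free_messages INTEGER NOT NULL DEFAULT 5,
  created_at    TEXT    NOT NULL DEFAULT (datetime('now'))
);

-- One row per exchanged message, so the bot can send full context to the model.
CREATE TABLE IF NOT EXISTS messages (
  id          INTEGER PRIMARY KEY AUTOINCREMENT,
  telegram_id INTEGER NOT NULL REFERENCES users(telegram_id),
  role        TEXT    NOT NULL CHECK (role IN ('user', 'assistant')),
  content     TEXT    NOT NULL,
  sent_at     TEXT    NOT NULL DEFAULT (datetime('now'))
);
```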
The flow was simple:
User sends a message via Telegram
Node backend receives the message via webhook
Backend sends the full context to Vertex AI inference API
Model generates a response
Backend sends response back to user via Telegram
Clean. Efficient. Worked like a charm.
The Marketing Advantage
Here's the underrated part: by using Telegram, I completely offloaded user acquisition and marketing. Telegram has built-in virality—users can share bots with a single tap, add them to groups, forward messages.
This meant I could focus on what I actually enjoy: the technical side. Model performance, API optimization, security hardening, conversation quality. Let Telegram handle the rest.
The Hardest Part (Spoiler: It Wasn't the Tech)
You know what took the longest? Coming up with a name.
I'm not joking. The technical implementation—deploying a 70B model, fine-tuning it, building a Telegram bot, securing an API—took about 36 hours of focused work. Naming it? That was at least 3 of those hours.
Finally, I landed on GrifterAI. It was cheeky, self-aware, and perfectly captured the spirit of the project: calling out the grift by becoming the grift. The domain was available. Done.
The Reality Check: Economics of AI
I launched GrifterAI with a simple plan: offer the first 5 messages free, then redirect people to my socials for more. The idea was to give people a taste, build an audience, figure out monetization later.
That lasted exactly one day.
Running a 70B model isn't like hosting a static website for $5/month. Every single inference call costs real money. Fine-tuning? Even more expensive. GPU time on Vertex AI adds up fast, and I quickly realized that "free tier + audience building" wasn't sustainable unless I wanted to hemorrhage money.
So I made GrifterAI paid-only. I still hadn't figured out payment processing (Stripe integration is on my TODO list, lmao), but the principle stands: if you're running infrastructure this expensive, you need a revenue model that isn't "hope people like me enough to donate." Eventually, with no real desire to market it or profit off it, I shut the inference down because the deployment costs were piling up.
Why I'm Posting This Now (And Why I Didn't Before)
Here's the thing: I actually wrote a LinkedIn post about all of this when I first built GrifterAI. Got all fired up, drafted the whole thing, added screenshots, the works. It's still sitting in my LinkedIn drafts, unpublished.
Why didn't I post it? A couple of reasons.
First, dunking on someone, even someone whose business model I found questionable, didn't feel like a positive use of my energy. Yeah, I built the thing to prove a point, but publicly calling someone out? That's just adding negativity to the internet, and we have enough of that already.
Second, and honestly more importantly, I didn't want to contribute to the AI noise. If you've been on LinkedIn in the past year, you know what I'm talking about. Everyone and their dog was posting about AI. "10 ways ChatGPT will revolutionize your workflow!" "AI is coming for your job!" "I used AI to write this post about AI!" Most of it was just recycled takes and copy-pasta content that added nothing to the conversation.
The signal-to-noise ratio around AI was already terrible, and I didn't want to make it worse. I decided I'd only post when I had something genuinely useful to share—actual learnings, technical insights, something that moved the conversation forward instead of just adding to the echo chamber.
Look, I'm not interested in playing the algorithm game or chasing engagement metrics. I share things because they matter to me—deep dives into software architecture, interesting problems in distributed systems, what it's actually like working as an engineer. This blog exists as much as a personal archive as it does for anyone else. Years from now, I want to be able to look back and remember how I solved this problem or what I was thinking about at this point in my career. If other people find it useful along the way, that's a bonus.
So why now? Because I do have something to share. Real technical implementation details. Honest reflections on what worked and what didn't. Lessons about infrastructure costs, architecture decisions, and the actual barriers (or lack thereof) to building AI products. And yeah, as social proof that I actually built the thing and learned from it.
This isn't a hot take or a thought leadership piece. It's just documentation of a weekend project and what came out of it. That felt worth sharing.
What I Learned
This whole project reinforced something I've long suspected: the technical barrier to entry for AI products is lower than ever. What once required a PhD and a research lab can now be done by a single developer over a weekend with the right tools.
That's both exciting and a little terrifying.
On the technical side, I learned a ton about:
Large language model deployment and fine-tuning at scale
Managing inference costs and API security
Real-time conversational AI architecture
The trade-offs between different deployment platforms
On the product side, I learned that sometimes the best UX decision is to not build a custom interface. Telegram's ubiquity and simplicity beat a bespoke web app for reach and accessibility.
On the ethical side? Look, I still think marketing an AI as a replacement for mental health services is sketchy. But I also gained a deeper appreciation for the infrastructure and thought that goes into building these things—even if I fundamentally disagree with how they're positioned.
The Point
So did I prove my point? Yeah, I think so.
Building an AI startup in 2025 isn't rocket science. The tools are accessible, the models are available, and the infrastructure exists. What separates a weekend project from a "real" startup isn't technical complexity—it's positioning, marketing, and ethics.
GrifterAI still exists at www.GrifterAI.com (well, it still has the link to Telegram too. Pop in and say Hi on the channel :P). It's functional, it works, and people have used it. I'm proud of the technical achievement, even if the whole thing started as a slightly petty "I could do that" moment.
Because honestly? Sometimes spite is the best motivator.
And if you ever find yourself thinking "that doesn't seem that hard"—try building it. You might surprise yourself.
TL;DR: Saw an overhyped AI startup on Twitter, built my own version in 36 hours using Google Vertex AI, Llama 70B, Telegram, and Node.js. Learned a lot about LLM deployment, API security, and the economics of running AI at scale. Named it GrifterAI because irony is delicious and I like to think I cooked.



