We were in a Waymo heading to SFO to get to Coachella. About 25 minutes to the terminal. And I had this idea rattling around: what if every SaaStr AI Annual 2026 attendee could generate a custom “I’ll Be There” card for LinkedIn — with their own headshot, their own message, the SaaStr AI 2026 branding — and share it right from the website?
Not a mockup. Not a prototype. A production app, live on saastrannual.com, that real attendees would use to promote the event to their networks.
So I opened Replit on my phone, described what I wanted to an AI agent, and started building.
By the time I walked into the terminal, it was deployed. Live. In production. Generating cards.
This isn’t a story about building something complex. It’s a story about what’s now normal if you’re willing to vibe code your way through a real problem.
The Idea: Attendee-Generated Social Proof
We’ve run SaaStr Annual for over a decade. One thing that always moves registrations is social proof — people posting that they’re going. But most attendees aren’t going to fire up Canva and design a branded card. They need it to be effortless.
So the spec was simple:
- Upload your headshot
- Pick a background
- Add your custom message (“Come find me at SaaStr Annual!” or whatever you want)
- Download a clean 1080×1080 PNG, ready for LinkedIn or Twitter
No login. No account. Just generate and go.
The First Approach Failed Immediately
The obvious move is to use something like html2canvas — render a styled HTML element and screenshot it. We tried that first.
It broke instantly. html2canvas doesn’t reliably support object-fit, aspect-ratio, or CSS background-image. The output looked nothing like what the user saw on screen. Cropping was wrong. Positioning was wrong. The whole thing was unusable.
So we threw it out mid-Waymo and went straight to the Canvas 2D API. Draw everything programmatically: background image, gradient overlay, headshot circle with a glowing cyan border, headline text, subtitle. Every element placed at exact pixel coordinates, rendered at 1080×1080, exported as a clean PNG.
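The pipeline above can be sketched in a few dozen lines. To be clear, this is an illustrative reconstruction, not the actual SaaStr code: the function name, coordinates, and colors are all assumptions. The point is the shape of the approach — draw every layer programmatically at fixed 1080×1080 coordinates instead of screenshotting the DOM.

```javascript
// Illustrative sketch of the Canvas 2D render pipeline (names and pixel
// values are assumptions, not the production code).
const SIZE = 1080;

function renderCard(ctx, { background, headshot, headline, subtitle }) {
  // 1. Background photo, stretched to fill the 1080x1080 square.
  ctx.drawImage(background, 0, 0, SIZE, SIZE);

  // 2. Dark-to-transparent gradient over the bottom half so white text reads.
  const grad = ctx.createLinearGradient(0, SIZE / 2, 0, SIZE);
  grad.addColorStop(0, "rgba(0,0,0,0)");
  grad.addColorStop(1, "rgba(0,0,0,0.85)");
  ctx.fillStyle = grad;
  ctx.fillRect(0, SIZE / 2, SIZE, SIZE / 2);

  // 3. Circular headshot with a glowing cyan ring.
  const cx = SIZE / 2, cy = 380, r = 180;
  ctx.save();
  ctx.beginPath();
  ctx.arc(cx, cy, r, 0, Math.PI * 2);
  ctx.clip();                                        // mask to the circle
  ctx.drawImage(headshot, cx - r, cy - r, 2 * r, 2 * r);
  ctx.restore();
  ctx.strokeStyle = "#22d3ee";
  ctx.lineWidth = 8;
  ctx.shadowColor = "#22d3ee";
  ctx.shadowBlur = 24;                               // the "glow"
  ctx.beginPath();
  ctx.arc(cx, cy, r, 0, Math.PI * 2);
  ctx.stroke();
  ctx.shadowBlur = 0;

  // 4. Headline and subtitle at exact pixel positions.
  ctx.fillStyle = "#ffffff";
  ctx.textAlign = "center";
  ctx.font = "bold 64px sans-serif";
  ctx.fillText(headline, SIZE / 2, 760);
  ctx.font = "36px sans-serif";
  ctx.fillText(subtitle, SIZE / 2, 830);
}
// In the browser, canvas.toBlob() or canvas.toDataURL("image/png")
// then exports the finished card as a PNG.
```

Because every element is drawn at absolute coordinates, what you export is exactly what you previewed. No CSS interpretation layer to disagree with you.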
This is a pattern I’ve seen over and over with vibe coding. The AI’s first approach is often the “textbook” answer. It doesn’t quite work. You say “that’s broken, try something else,” and the second or third approach is the one that ships.
But bear in mind: I actually didn’t know what was failing. I was on my phone. I just tested it, shared the bug with the AI agent … and it just fixed it.
The Headshot Problem
Cropping a portrait photo into a circle sounds trivial. It’s not.
The issue: if you center-crop most selfies or headshots, you get someone’s chin or neck. Faces are almost always in the top third of a photo, not the center.
The fix was a custom drawHeadshot function that anchors to the top of the uploaded image rather than the geometric center. We use canvas.clip() to mask to a circle path, then draw the image offset upward. The result: faces land in the circle naturally, without manual adjustment.
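The crop math itself is tiny, which is part of why center-cropping is such an easy default to fall into. Here’s a sketch of the top-anchoring idea (the function name and the browser snippet in the comments are illustrative, not the actual drawHeadshot code):

```javascript
// Sketch of top-anchored square cropping: take the largest square that fits
// the image, centered horizontally but pinned to the TOP edge vertically,
// since faces sit in the top third of most headshots.
function topAnchoredCrop(imgWidth, imgHeight) {
  const side = Math.min(imgWidth, imgHeight); // largest square that fits
  return {
    sx: (imgWidth - side) / 2, // still center horizontally
    sy: 0,                     // but anchor to the top, not the middle
    sWidth: side,
    sHeight: side,
  };
}

// In the browser, this feeds the 9-argument form of ctx.drawImage after
// clipping to a circle (cx, cy, r are the circle's center and radius):
//
//   ctx.save();
//   ctx.beginPath();
//   ctx.arc(cx, cy, r, 0, Math.PI * 2);
//   ctx.clip();
//   const { sx, sy, sWidth, sHeight } = topAnchoredCrop(img.width, img.height);
//   ctx.drawImage(img, sx, sy, sWidth, sHeight, cx - r, cy - r, 2 * r, 2 * r);
//   ctx.restore();
```

For a 1000×1500 portrait, a center crop would start 250px down and clip the top of the head; the top-anchored version starts at the top edge and keeps the face in frame.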
Not perfect for every photo. But good enough for 90%+ of real headshots, which is all you need for a tool like this. Especially for one built and shipped in a Waymo ride.
The CORS Problem
We wanted the SaaStr logo overlaid on each card. Simple, right? Fetch the logo from our CDN, draw it on the canvas, done.
Except browsers won’t let you export a canvas once a cross-origin image has been drawn onto it without CORS approval. The moment you draw one, the canvas becomes “tainted” and toDataURL() and toBlob() are blocked. No download. No PNG. Nothing.
The fix: a small server-side proxy endpoint. The browser hits /api/proxy-image, which fetches the logo from the CDN and returns it with proper headers. The browser creates a local blob URL from the response and draws that onto the canvas. Clean export, no CORS issues.
This is the kind of thing that takes a junior dev two hours to debug and an AI agent about 45 seconds to solve once you describe the symptom. In fact, I didn’t need to understand this problem at all. I just told the Replit Agent to fix it.
Five Backgrounds, One Consistent Output
We shipped five background options: the official SaaStr AI 2026 branded card (with event branding baked into the image), an indoor arena stage shot, an outdoor festival grounds shot, and two dark gradient options (cyber blue and electric purple).
The tricky part: white text needs to be readable over all of them. The photo backgrounds have wildly different light and dark areas depending on where in the image the text lands.
The solution is a dark-to-transparent gradient drawn over the bottom half of every photo background. The text always pops, regardless of the underlying image. For the non-branded backgrounds, we overlay the SaaStr logo at the top with a frosted pill behind it for legibility.
Small details. But they’re the difference between a tool that looks polished and one that looks like a hackathon project.
What I Actually Did vs. What the AI Did
Let me be honest about the division of labor here.
The AI wrote all the code. The Canvas rendering logic, the headshot cropping algorithm, the CORS proxy endpoint, the background picker component, the download flow, the live preview — all AI-generated.
What I did:
- Described the product. Not in technical terms. No amazing prompt. Not even any planning in any formal sense. I said things like “I want attendees to upload a headshot and get a branded LinkedIn card.” The AI figured out the implementation.
- Caught edge cases. The first version didn’t handle the logo CORS issue. I saw the broken output and said “the logo isn’t showing up in the downloaded image.” The AI diagnosed the tainted canvas problem and built the proxy.
- Made design calls. Which backgrounds to include. Where the text should sit. How large the headshot circle should be. The cyan glow effect on the border. These are product decisions, not engineering decisions.
- Hit deploy. Replit makes this trivially easy. Push to production, point the subdomain, done.
Total code I personally typed: zero. Everything was AI-generated, AI-debugged, and AI-deployed.
And look at my “prompt”. It’s about as basic as it gets:


Why This Matters More Than the App Itself
The app itself is small. It’s a card generator. Nobody’s going to write a case study about it. Let’s not overstate what this micro-app really does.
But think about what just happened. A CEO in the back of an autonomous car shipped a production feature to a website that tens of thousands of people will visit, in the time it takes to get to the airport.
Just 12 months ago, this would have been a ticket in Jira. A designer would mock it up. An engineer would estimate it at 3-5 days. It would go into a sprint. Maybe it ships in two weeks. Maybe it gets deprioritized. Let’s be honest: realistically, it never would have shipped at all. And it’s too niche to buy off the shelf.
Now it’s a Waymo ride.
That’s the real story of AI and building software in 2026. It’s not about replacing engineers. It’s about the sheer volume of ideas that can go from “what if…” to “it’s live” before you’ve finished your commute.
At SaaStr itself, we’ve gone from 20+ employees to 3 humans and 20+ AI agents. Revenue went from -19% to +47%. We’re shipping more product, more content, and more features than we ever did with a full team. Not because AI is smarter than the people we had — but because the bottleneck was never intelligence. It was time between idea and execution.
That gap is now approaching zero.
Try It Yourself
If you’re coming to SaaStr AI Annual 2026 (May 12-14, SF Bay Area), go generate your card: saastrannual.com/attendee-card
Upload your headshot, pick a background, add your message, and post it to LinkedIn. Takes about 30 seconds. Which, come to think of it, is not much longer than it took the AI to write the first working version of the code.


