I am building a web app called SorryClub, a collection of financial and productivity tools, entirely by myself. It currently has over 40 calculators, and I am continuously adding more. I'd like to share how I structured it technically.
Technology Stack
- Framework: Next.js 16 (App Router, React 19, Turbopack)
- DB: TiDB Serverless (MySQL compatible)
- Storage: Cloudflare R2
- Hosting: Vercel Pro
- Auth: NextAuth 4 (Google, JWT)
- Payments: Lemon Squeezy
- AI: OpenAI GPT-4o (SEO Article Generation), Replicate Flux 1.1 Pro (Cover Image)
- Charts: Recharts
- Styling: Tailwind CSS 4 (Dark Mode Only)
Managing 40 Tools in a Single Codebase
All tools follow the same layout pattern: layout.js handles metadata, JSON-LD, FAQs, and related tools, while page.js is responsible for client-side logic. With this pattern established, I no longer need to think about structure when adding a new tool.
Tool metadata (title, icon, category, gradient color) is managed in the database. The frontend uses slug-based dynamic routing, and the relationships between tools are managed in a separate file to be used for recommending related tools.
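As an illustration, the separate relationships file could be a plain module keyed by slug. This is a hypothetical sketch, not the actual SorryClub code; the slugs and file name are made up:

```javascript
// related-tools.js (hypothetical): slug-based tool relationships.
// Each key is a tool slug; the value lists slugs to recommend on that page.
const RELATED_TOOLS = {
  "compound-interest": ["savings-goal", "inflation-adjuster"],
  "savings-goal": ["compound-interest"],
};

// Look up recommendations for a slug, falling back to an empty list
// so pages without curated relations still render.
function getRelatedTools(slug) {
  return RELATED_TOOLS[slug] ?? [];
}
```

Keeping the relations in one data file means adding a tool is a data change, not a code change, which fits the "same pattern everywhere" approach.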
Premium Model: Don't Lock Pages
All tools provide core features for free. This is because users arriving via SEO traffic will leave immediately if they open a page and find it locked.
Instead, only advanced features (Monte Carlo simulation, scenario saving, advanced analytics) are gated behind the PRO tier.
const isPro = userPlan === "PRO" || userPlan === "PRO_PLUS";
This single line handles the PRO branch consistently across all tools.
Server-side Pageview Tracking
I built a first-party tracking system that is unaffected by ad blockers. It collects IP, User-Agent, country, and language in API routes and records dwell time via sendBeacon. The admin dashboard shows daily traffic, patterns by time of day and day of week, bounce rates, and dwell time by entry source.
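The header-collection side of such an API route might look like the sketch below. The function name and recorded fields are assumptions; the header names follow common Vercel conventions (x-forwarded-for for the client IP, x-vercel-ip-country for geo), which may differ from what SorryClub actually uses:

```javascript
// Hypothetical sketch: extract pageview fields from request headers
// inside a server-side API route (so ad blockers never see it).
function extractVisit(headers, path) {
  return {
    path,
    // "x-forwarded-for" may contain a proxy chain; the first entry is the client.
    ip: (headers["x-forwarded-for"] || "").split(",")[0].trim(),
    userAgent: headers["user-agent"] || "unknown",
    // Vercel injects the geo-resolved country code on its edge network.
    country: headers["x-vercel-ip-country"] || "unknown",
    // Take the top-ranked language from the Accept-Language header.
    language: (headers["accept-language"] || "").split(",")[0] || "unknown",
    ts: Date.now(),
  };
}
```

Dwell time would then arrive separately via a sendBeacon call on page unload, keyed to the same visit record.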
AI-based SEO content generation
I maintain an SEO keyword manifest for each tool, and GPT-4o generates relevant articles from it. Cover images are created with Replicate's Flux 1.1 Pro and stored on Cloudflare R2. Publishing articles across a range of keywords per tool captures long-tail SEO traffic.
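The manifest-to-prompt step could be as simple as templating the tool's keywords and features into the generation request. A hypothetical sketch; the real manifest shape and prompt wording are not shown in the post:

```javascript
// Hypothetical SEO manifest for one tool.
const manifest = {
  tool: "compound-interest",
  keywords: ["compound interest calculator", "monthly compounding"],
  features: ["adjustable compounding frequency", "inflation toggle"],
};

// Build a constrained article prompt from a manifest.
// Scoping the prompt to the manifest is what limits topic drift.
function buildArticlePrompt(m) {
  return [
    `Write an SEO article for the "${m.tool}" tool.`,
    `Target keywords: ${m.keywords.join(", ")}.`,
    `Only discuss these features: ${m.features.join(", ")}.`,
    "Do not state specific calculation results; direct readers to the tool instead.",
  ].join("\n");
}
```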
Things I Learned
- Consistent patterns are the most important. If I had created 40 different tools, maintenance would have been impossible.
- Keeping tools free is better for revenue. If you lock everything up, traffic never arrives in the first place.
- It is worth creating your own analytics. You can view detailed data that is not visible in GA.
- AI article generation is definitely effective for SEO. However, quality control is key.
The PRO plan is currently priced at $5/month. After posting on Reddit, 133 people visited in a single day, and I got my first sign-up.
Feedback or questions are welcome.
Top comments (6)
This is really solid. I tried something similar before and hit a mess around tool #10.
The "same layout + metadata-driven tools" part is the real win here; it saves tons of time later. Also agree on not locking core features... every time I tested paywalls on small tools, users just bounced.
Curious how you're handling SEO quality control though; that's where things usually break long-term.
Thank you, glad it resonated! I've had to continuously add and refine the tools people actually find essential.
SEO quality control is something I think about constantly, and a few principles have proven effective:
- Consistently managing structured metadata for each tool is fundamental. Quality drops quickly as manual work increases.
- Content is always linked to an actual tool. Content unrelated to tools loses value in the long run.
- When reorganizing tools, I handle redirects meticulously to avoid losing existing traffic.
Ultimately, the key to maintaining quality is whether the content connects to tools that actually function.
Thank you for the great question!
The "consistent patterns" lesson is the one that scales the hardest. I run a financial data site covering 8,000+ stock tickers across 12 languages, and the only reason it's manageable as a solo dev is because every single page follows the exact same Astro template: metadata-driven routing, same section order, same schema markup. The moment you let even one page type drift from the pattern, debugging becomes archaeology.
Your freemium approach is spot-on too. I tested gating content behind walls early on and the bounce rate was brutal; SEO traffic especially has zero patience for locked pages. Keeping the core free and charging for advanced analysis features is the only model that works when your acquisition channel is organic search.
Curious about your AI article generation pipeline â how do you handle quality drift over time? I use a local LLM for content generation and the biggest challenge has been the model confidently outputting wrong financial numbers (like showing a 41% dividend yield instead of 0.41%). Do you have automated validation between GPT-4o output and the actual calculator data, or is it more manual review?
It is truly impressive that you single-handedly manage 8,000 tickers across 12 languages. I strongly agree that deviating from patterns turns debugging into archaeology. My 40 tools all follow the same layout.js + page.js structure; I tried making a few of them "special" early on, but maintenance became so difficult that I eventually returned to the same pattern for everything.
I had a similar experience with the premium strategy. I once tried locking an entire page, but users arriving via SEO left immediately without even touching the calculator. Now I keep basic calculations free and move only each tool's advanced features into the PRO tier. This structure yields much better conversion rates and dwell times.
The question about AI quality drift is a really good one. Since each article is linked to a specific tool, I provide context by including that tool's SEO manifest (keywords, feature descriptions) in the prompt. I sidestep numerical accuracy issues by not including specific calculation results in the articles; instead, each article guides readers to the tool itself: "check it yourself with this calculator." It is not fully automated, though; I review everything in the admin panel before publication. I really relate to 0.41% coming out as 41%. In finance, a single unit error can be fatal, so publishing without a verification layer is a bit scary.
The SEO manifest approach is really smart: feeding the tool's keywords and feature descriptions into the prompt gives the AI just enough context to stay relevant without hallucinating specifics. That's a much cleaner boundary than trying to get the model to be accurate about numbers.
Your free-to-PRO conversion insight resonates too. We learned the same lesson the hard way: SEO traffic has near-zero patience for friction. If someone lands on a stock page from Google and immediately hits a paywall, they bounce in under 3 seconds. Keeping the core data free and gating the advanced analysis makes the page actually useful for search intent, which then feeds back into better rankings.
The admin review before publication is the piece I'm still trying to figure out at scale. With thousands of pages, manual review doesn't work, but fully automated publication without review is how you end up with 41% dividend yields on Apple. Curious if you've found a middle ground, like flagging outliers for review while auto-publishing the rest?
Honestly, manual review is feasible at the current scale, but it will definitely become a bottleneck once the content grows into the hundreds.
My current approach is to restrict the prompts so articles are produced only within the scope of the keywords and tool capabilities defined in the SEO manifest. I provide specific scenarios and calculation examples, but guardrails keep the content from going beyond the functional range the manifest specifies.
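The "flag outliers, auto-publish the rest" middle ground raised earlier could start as a simple sanity check on generated numbers. Purely illustrative; the function name and the 25% threshold are assumptions, not anything either of us actually runs:

```javascript
// Hypothetical pre-publish check: flag percentages outside a plausible
// range (e.g. a 41% dividend yield that should have been 0.41%).
function flagSuspiciousPercentages(text, maxPlausible = 25) {
  // Match numbers like "41%" or "0.41%" anywhere in the article text.
  const matches = text.match(/\d+(?:\.\d+)?%/g) || [];
  // Anything above the threshold goes to manual review.
  return matches.filter((m) => parseFloat(m) > maxPlausible);
}
```

Articles with an empty result could auto-publish, while flagged ones land in the review queue.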
I found that for a solo developer, "publishing quickly with sufficiently good quality and fixing issues quickly when they are found" is more realistic than perfect automation.