Building websites is slow. Even for simple marketing sites, you're looking at hours of setup: scaffolding a project, configuring build tools, scraping content, optimizing images, deploying. I wanted to eliminate all of that.
So I built new-site.sh: a single bash command that takes any existing website and generates a modern Next.js replacement in about 60 seconds. Here's how it works.
The Problem I Was Solving
I run a small web consulting business. Potential clients send me their existing sites; I show them what a modern redesign would look like. The old workflow:
- Manually visit their site
- Copy-paste content into a doc
- Download images one by one
- Scaffold a new Next.js project
- Write components
- Deploy somewhere
- Send link
This took 2-4 hours per prospect. At that rate, I could only demo for people who were already serious. What if I could demo for everyone, speculatively, before they even asked?
The Solution: One Command
./scripts/new-site.sh walton-dental https://waltondental.com
That's it. The script:
- Scrapes the target site: Puppeteer grabs all text content and downloads images
- Creates a Next.js app: TypeScript, Tailwind CSS, static export pre-configured
- Generates components: Hero, services grid, about section, contact form, footer
- Optimizes for GitHub Pages: sets the correct basePath and assetPrefix (config sketch below)
- Deploys automatically: creates a GitHub repo and pushes, triggering the Pages build
Sixty seconds later: ryancwynar.github.io/walton-dental is live.
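For reference, the GitHub Pages step boils down to a handful of settings in the Next.js config. A minimal sketch; reading the slug from an environment variable is an assumption, not necessarily how the script wires it:

```typescript
// next.config.ts (sketch): static export plus GitHub Pages subpath settings.
import type { NextConfig } from "next";

// Assumed: the generator passes the repo slug in via an env var.
const slug = process.env.SITE_SLUG ?? "walton-dental";

const nextConfig: NextConfig = {
  output: "export",              // emit plain HTML/CSS/JS, no server required
  basePath: `/${slug}`,          // site is served from username.github.io/<slug>
  assetPrefix: `/${slug}/`,      // asset URLs must resolve under that subpath
  images: { unoptimized: true }, // next/image optimization needs a server
};

export default nextConfig;
```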
Under the Hood
Content Scraping
The scraper uses Puppeteer in headless mode. It waits for the page to fully load (including lazy content), then extracts:
- All visible text, organized by semantic sections
- Every <img> tag with its src and alt
- Contact info (phone, email, address) via regex patterns
- Business hours if present
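A trimmed sketch of that extraction step; the selectors, field names, and regexes here are illustrative, not the actual scraper:

```typescript
import puppeteer from "puppeteer";

// Hypothetical output shape; the real scraper's fields may differ.
interface ScrapedContent {
  sections: { heading: string; text: string }[];
  images: { src: string; alt: string }[];
  phone?: string;
  email?: string;
}

async function scrape(url: string): Promise<ScrapedContent> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle2" }); // wait for lazy content

  const raw = await page.evaluate(() => {
    // Visible text, grouped loosely by headings
    const sections = Array.from(document.querySelectorAll("h1, h2, h3")).map((h) => ({
      heading: h.textContent?.trim() ?? "",
      text: (h.nextElementSibling?.textContent ?? "").trim(),
    }));
    // Every <img> with its src and alt
    const images = Array.from(document.querySelectorAll("img")).map((img) => ({
      src: img.src,
      alt: img.getAttribute("alt") ?? "",
    }));
    return { sections, images, bodyText: document.body.innerText };
  });
  await browser.close();

  // Contact info via simple regex patterns over the page text
  const phone = raw.bodyText.match(/\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}/)?.[0];
  const email = raw.bodyText.match(/[\w.+-]+@[\w-]+\.[\w.]+/)?.[0];

  return { sections: raw.sections, images: raw.images, phone, email };
}
```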
Images are downloaded locally and optimized (resized to max 1200px width, converted to WebP where beneficial).
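One way to do the resize-and-convert step, sketched here with the sharp library (the tool choice and paths are my assumptions; the 1200px/WebP targets come from the workflow above):

```typescript
import { promises as fs } from "node:fs";
import path from "node:path";
import sharp from "sharp";

// Resize a downloaded image to at most 1200px wide and write it out as WebP.
async function optimizeImage(inputPath: string, outDir: string): Promise<string> {
  await fs.mkdir(outDir, { recursive: true });
  const name = path.basename(inputPath, path.extname(inputPath));
  const outPath = path.join(outDir, `${name}.webp`);
  await sharp(inputPath)
    .resize({ width: 1200, withoutEnlargement: true }) // never upscale small images
    .webp({ quality: 80 })
    .toFile(outPath);
  return outPath;
}
```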
Project Scaffolding
I use a template approach. A minimal Next.js 14 skeleton lives in my repo. The script copies it, then injects:
- Scraped content into a content.json file (shape sketched after this list)
- Business name into page metadata
- Images into public/images/
- Tailwind config customizations (primary color picked from their existing site)
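The content.json file is the contract between the scraper and the template. Its shape is roughly along these lines (the field names are assumptions based on the sections described in this post):

```typescript
import { writeFileSync } from "node:fs";
import path from "node:path";

// Hypothetical shape of content.json; the actual fields may differ.
interface SiteContent {
  businessName: string;
  primaryColor: string;                       // picked from the source site's styles
  hero: { headline: string; subheadline: string; image?: string };
  services: { title: string; description: string }[];
  about: string;
  contact: { phone?: string; email?: string; address?: string; hours?: string };
}

// Injecting the scraped data into the copied skeleton is just a JSON write.
function injectContent(appDir: string, content: SiteContent): void {
  writeFileSync(path.join(appDir, "src", "content.json"), JSON.stringify(content, null, 2));
}
```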
GitHub Deployment
GitHub CLI handles repo creation:
gh repo create "$SLUG" --public --source=. --push
A pre-configured GitHub Action runs on push, building and deploying to Pages. The whole process, from git push to live URL, takes about 45 seconds.
The Template System
The generated sites follow a consistent structure:
├── src/
│   ├── app/
│   │   ├── layout.tsx        # Nav, footer, fonts
│   │   └── page.tsx          # Main content sections
│   ├── components/
│   │   ├── Hero.tsx
│   │   ├── Services.tsx
│   │   ├── About.tsx
│   │   └── Contact.tsx
│   └── content.json          # Scraped data
├── public/
│   └── images/               # Downloaded + optimized
├── tailwind.config.ts
└── next.config.ts            # Static export settings
Each component reads from content.json. If the scraper found 6 services, the Services grid shows 6 cards. If it found 3, it shows 3. The layout adapts automatically.
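As an example, the Services component might read roughly like this (the Tailwind classes and field names are illustrative, not the actual template):

```tsx
// src/components/Services.tsx (sketch): renders one card per scraped service.
import content from "../content.json";

type Service = { title: string; description: string };

export default function Services() {
  const services = (content.services ?? []) as Service[];
  if (services.length === 0) return null; // nothing scraped, skip the section

  return (
    <section id="services" className="py-16">
      <h2 className="text-center text-3xl font-bold">Services</h2>
      <div className="mx-auto mt-8 grid max-w-5xl gap-6 px-4 sm:grid-cols-2 lg:grid-cols-3">
        {services.map((service) => (
          <div key={service.title} className="rounded-lg border p-6 shadow-sm">
            <h3 className="text-xl font-semibold">{service.title}</h3>
            <p className="mt-2 text-gray-600">{service.description}</p>
          </div>
        ))}
      </div>
    </section>
  );
}
```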
Design Decisions
Why Next.js? Static export means zero server costs. GitHub Pages hosting is free. The output is fast and SEO-friendly.
Why not just HTML/CSS? Component reuse. When I improve the Hero component, every future site gets the improvement. Plus, TypeScript catches content structure issues at build time.
Why GitHub Pages? Free, fast, reliable. Custom domains are easy to add later. The prospect sees the preview; if they pay, I just add their domain to the repo settings.
Results
- Time to create a prospect mockup: ~60 seconds (down from 2-4 hours)
- Sites generated in first month: 20+
- Conversion rate on speculatively sent mockups: ~15% (3 paying clients)
The math works. At $200 per site, I can now create 30 mockups per day if I wanted. Even a 5% conversion rate would be extremely profitable: 30 mockups at 5% is 1.5 sales, or about $300 a day, for roughly half an hour of machine time.
What I Learned
- Scraping is finicky: some sites block Puppeteer, and some load content dynamically after scroll. I added retry logic and scroll-to-load handling (sketch after this list).
- Content quality varies wildly: some sites have well-structured content; others are a mess of nested divs. The scraper does its best, but manual cleanup is sometimes needed.
- Images are the slowest part: downloading and optimizing 20+ images takes longer than everything else combined. I now parallelize downloads.
- GitHub rate limits exist: if you create too many repos too fast, you hit API limits. I added delays between deployments.
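The scroll-to-load handling is essentially the standard Puppeteer auto-scroll pattern plus a retry wrapper. A sketch; the step size, interval, and attempt count are assumptions:

```typescript
import type { Page } from "puppeteer";

// Scroll in steps until the full document height has been covered, so lazily
// loaded sections and images are triggered before extraction.
async function autoScroll(page: Page): Promise<void> {
  await page.evaluate(async () => {
    await new Promise<void>((resolve) => {
      let scrolled = 0;
      const step = 400;
      const timer = setInterval(() => {
        window.scrollBy(0, step);
        scrolled += step;
        if (scrolled >= document.body.scrollHeight) {
          clearInterval(timer);
          resolve();
        }
      }, 100);
    });
  });
}

// Simple retry wrapper for flaky pages.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```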
What's Next
I'm working on:
- AI content enhancement: use Claude to rewrite scraped content for better flow
- Multiple templates: different styles for different industries (dental vs. roofing vs. restaurants)
- Automated outreach: connect to my CRM, auto-send preview links via email/SMS
- A/B testing: generate multiple versions, track which designs get more responses
Try It Yourself
The core concept is simple enough to implement in a weekend (a rough sketch follows these steps):
- Build a scraper that extracts structured content
- Create a template project with placeholder content
- Write a script that merges scraped data into the template
- Automate deployment with GitHub CLI
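Tied together, the pattern is not much more than this. A rough sketch: scrape() stands in for whatever extraction helper you build, and the template path, directory layout, and slug handling are assumptions, not the actual new-site.sh:

```typescript
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";
import { scrape } from "./scrape"; // the extraction helper sketched earlier (hypothetical module)

// End-to-end sketch: scrape, copy the template, merge content, create a repo and push.
async function newSite(slug: string, url: string): Promise<void> {
  const siteDir = `sites/${slug}`;

  const content = await scrape(url);                                // 1. structured content
  execSync(`cp -r templates/next-skeleton ${siteDir}`);              // 2. copy the template
  writeFileSync(`${siteDir}/src/content.json`, JSON.stringify(content, null, 2)); // 3. merge
  execSync(
    "git init && git add -A && git commit -m 'Initial site' && " +
      `gh repo create ${slug} --public --source=. --push`,           // 4. repo + Pages deploy
    { cwd: siteDir }
  );
}
```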
The specific implementation depends on your stack, but the pattern is universal: automate the boring parts so you can focus on closing deals.
If you want help building something similar, or just want to see what a modern version of your site could look like, reach out. That's literally what I do.