SugarStitch Docs

Use the app

SugarStitch can scrape a single pattern page, a list of saved URLs, or a discovery page that branches into many pattern links. The CLI and the local UI map to the same core scraper.

Quick start

If you want the shortest path from install to results, start with one pattern page and a preset that matches the site structure.

npm run scrape -- --url "https://example.com/pattern" --preset wordpress
  • Use --url for one page.
  • Use --file for a plain text list of URLs.
  • Use --preview first if you want to validate extraction before files are written.

CLI workflow

Scrape one pattern page

npm run scrape -- --url "https://example.com/pattern" --preset generic

Scrape many URLs from a file

npm run scrape -- --file urls.txt
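The file passed to --file is plain text with one URL per line. The URLs below are purely illustrative; a hypothetical urls.txt might look like:

```text
https://example.com/pattern-one
https://example.com/pattern-two
https://example.com/pattern-three
```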

Preview without writing files

npm run scrape -- --url "https://example.com/pattern" --profile tildas-world --preview

Send output to a different folder

npm run scrape -- --url "https://example.com/pattern" --output-dir ./exports --output patterns.json

Saved profiles: point SugarStitch at a profile with --profile <id> when a site already has known selector overrides you want to reuse.

Terminal-first runs

[Screenshot: SugarStitch CLI output in a terminal]

The CLI is great for quick checks, automation, and repeatable batch runs where you already know the source URLs.

UI workflow

The local UI is ideal when you want to test presets, compare overrides, or paste batches into a form instead of remembering flags.

  1. Run npm run ui and open http://localhost:4177.
  2. Choose Single URL or paste multiple URLs.
  3. Pick a saved profile or selector preset.
  4. Add output settings or advanced selector overrides if needed.
  5. Click Test Selectors first or go straight to Start Scraping.

Form overview

[Screenshot: SugarStitch homepage with the main scraping form]

Single URL mode, multi-URL paste mode, presets, and profile loading all live on the main screen.

Finished run

[Screenshot: completed run summary with log output]

After the scrape completes, the UI shows counts, logs, output paths, and the pattern titles that were captured.

Discovery crawl mode

Use discovery crawl mode when you have a listing page, archive page, or “free patterns” hub instead of a direct pattern URL.

npm run scrape -- \
  --url "https://www.tildasworld.com/free-patterns/" \
  --preset wordpress \
  --crawl \
  --crawl-depth 2 \
  --crawl-pattern "free_pattern|pattern|quilt|pillow" \
  --crawl-language english \
  --crawl-paginate
  • --crawl turns discovery mode on.
  • --crawl-depth controls how many link levels deep the crawler follows.
  • --crawl-pattern narrows what discovered links are allowed through.
  • --crawl-language helps when a site mixes multiple language sections.
  • --crawl-paginate adds paginated listing pages like /page/2/ before discovery continues.
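The --crawl-pattern value is treated as a regular expression that discovered link URLs must match. As a rough sketch of that filtering behavior (the link list below is illustrative, not real SugarStitch output), the same expression can be tried against a list of URLs with grep -E:

```shell
# Keep only links whose URL matches the crawl pattern; drop the rest.
printf '%s\n' \
  "https://www.tildasworld.com/free-patterns/bird-pillow/" \
  "https://www.tildasworld.com/about-us/" \
  "https://www.tildasworld.com/quilt-along-week-1/" |
grep -E "free_pattern|pattern|quilt|pillow"
# matches the first and third links; /about-us/ is filtered out
```

Testing your expression this way before a deep crawl saves time: a pattern that is too broad lets unrelated pages through, while one that is too narrow silently drops real pattern links.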

Progress while crawling or scraping

[Screenshot: loading state shown during scraping]

While the crawler fetches candidate pages and downloads files, the UI shows a progress state so you can follow the run.

Output structure

A successful run writes one JSON entry per scraped page and may also save page text, images, and PDFs into local folders.

{
  "title": "Pattern Title",
  "description": "Short description from the page",
  "materials": ["Cotton fabric", "Stuffing", "Thread"],
  "instructions": ["Cut the pieces", "Sew the body", "Stuff and close"],
  "sourceUrl": "https://example.com/pattern",
  "localImages": ["images/pattern_title/image_1.jpg"],
  "localPdfs": ["pdfs/pattern_title/pattern.pdf"],
  "localTextFile": "texts/pattern_title/pattern.txt"
}
Typical output folder: expect the JSON file plus images/, pdfs/, and texts/ folders inside the selected output directory.
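As a quick sanity check after a run, the captured titles can be pulled out of the JSON with standard tools. This sketch writes a minimal stand-in for the output file first, since the real path and filename depend on your --output-dir and --output settings:

```shell
# Minimal stand-in for a scrape result; a real entry has more fields.
cat > /tmp/patterns.json <<'EOF'
[
  {"title": "Pattern Title", "sourceUrl": "https://example.com/pattern"}
]
EOF

# List every captured title.
grep -o '"title": "[^"]*"' /tmp/patterns.json
# prints "title": "Pattern Title"
```

If the title list looks wrong or empty, rerun with --preview and adjust the preset or selector overrides before scraping the full batch.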