AI Project — 2026

The Networking Engine

An AI-powered intelligence pipeline that turns three hours of manual event research into a two-minute automated run — so I spend less time hunting and more time showing up.

90% Effort Reduced
2min Per Full Scan
30+ Sources Scraped

The room doesn't find you.

I'm building a presence across Atlanta's tech, cybersecurity, and investment communities — which means I need to be in the right rooms, consistently.

The problem? Finding those rooms was eating three hours every week. Dozens of sites. Inconsistent formats. Buried registration links. Stale information. Manual copy-paste into a spreadsheet.

I didn't need to work harder. I needed to build something smarter.

⏱ 3+ hours per week spent manually scanning 30+ event sources across Meetup, university calendars, niche tech blogs, and association sites
🔗 Buried or missing registration links forcing a second search just to RSVP — friction that kills follow-through
🗂 No centralized view across categories — tech, watches & luxury, finance/VC, arts & culture all siloed in different tabs and bookmarks
🔁 Duplicate entries and stale events creating noise that obscured what actually mattered this week
01 / Scraping Layer

Browser Automation

Playwright · Asyncio

Traditional scrapers break on modern web apps. Playwright runs a real browser — handling Single Page Applications, lazy-loaded content, and dynamic rendering that would defeat a basic HTTP request.

  • Async concurrent execution — multiple sources scraped simultaneously
  • Configurable URL management via YAML — add a new source in seconds
  • Built-in performance timer to benchmark and optimize scrape speed
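The scraping layer above can be sketched in a few lines. This is an illustrative reduction, not the project's actual code: the source names, URLs, and helpers (`load_sources`, `scrape_source`, `scrape_all`) are assumptions, and the real engine layers on per-source frequency logic and richer error handling.

```python
import asyncio
import time

import yaml  # pip install pyyaml

# Hypothetical YAML config -- in the real project this lives in a file,
# so adding a source is a two-line edit, not a code change.
SOURCES_YAML = """
sources:
  - name: meetup-atlanta
    url: https://www.meetup.com/find/?location=Atlanta
  - name: gatech-calendar
    url: https://calendar.gatech.edu/
"""

def load_sources(raw: str) -> list[dict]:
    """Parse the YAML config into a list of source dicts."""
    return yaml.safe_load(raw)["sources"]

async def scrape_source(source: dict) -> dict:
    """Render one source in a real browser so SPA/lazy content loads."""
    from playwright.async_api import async_playwright  # pip install playwright
    async with async_playwright() as pw:
        browser = await pw.chromium.launch()
        page = await browser.new_page()
        # Wait for network to go idle so dynamically rendered events exist.
        await page.goto(source["url"], wait_until="networkidle")
        html = await page.content()
        await browser.close()
        return {"name": source["name"], "html": html}

async def scrape_all(raw_yaml: str) -> list:
    """Scrape every configured source concurrently and time the run."""
    start = time.perf_counter()
    results = await asyncio.gather(
        *(scrape_source(s) for s in load_sources(raw_yaml)),
        return_exceptions=True,  # one broken source shouldn't kill the run
    )
    print(f"scraped {len(results)} sources in {time.perf_counter() - start:.1f}s")
    return results
```

The `asyncio.gather` call is what makes a 30-source scan feasible in minutes: sources render in parallel instead of one after another.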
02 / Intelligence Layer

AI Extraction

Google Gemini 1.5 Flash

Raw HTML is messy. Gemini reads it like a human would — pulling structured, clean event data from unstructured content and making judgment calls a regex never could.

  • Geographic filter: Atlanta-area and online events only
  • Temporal filter: rolling 6-week look-ahead window
  • Smart categorization into user-defined buckets (Tech, Finance, Arts, etc.)
  • Link recovery: targets direct registration and ticket URLs
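A minimal sketch of that extraction step, with the geographic, temporal, and category rules pushed into the prompt. The prompt wording, helper names, and category list are assumptions for illustration; only the `gemini-1.5-flash` model call reflects the stated stack, and it assumes `genai.configure(api_key=...)` has already run.

```python
import json

# Assumed category buckets -- the real list is user-defined.
CATEGORIES = ["Tech", "Finance/VC", "Arts & Culture", "Watches & Luxury"]

def build_prompt(html: str, weeks_ahead: int = 6) -> str:
    """Compose an extraction prompt with all filtering rules inline."""
    return (
        "Extract events from the HTML below as a JSON array of objects "
        "with keys: name, date, category, registration_url.\n"
        "Rules: Atlanta-area or online events only; "
        f"only events within the next {weeks_ahead} weeks; "
        f"category must be one of {CATEGORIES}; "
        "prefer direct registration/ticket links over listing pages.\n\n"
        + html
    )

def parse_events(raw: str) -> list[dict]:
    """Parse the model's JSON reply, tolerating a markdown code fence."""
    text = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(text)

def extract_events(html: str) -> list[dict]:
    """Send one page's raw HTML through Gemini, get structured events back."""
    import google.generativeai as genai  # pip install google-generativeai
    model = genai.GenerativeModel("gemini-1.5-flash")
    reply = model.generate_content(build_prompt(html))
    return parse_events(reply.text)
```

The judgment calls live in the prompt rather than in parsing code, which is the whole point: "is this event really in Atlanta?" is a question a regex can't answer and a language model can.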
03 / Storage Layer

Sheets Integration

Google Sheets API · OAuth 2.0

No new interface to learn. The output lives in Google Sheets — filterable, shareable, and instantly familiar. The system writes to it; I use it.

  • Name + Date fingerprinting prevents duplicate entries across runs
  • Tab-switchable across event categories via environment config
  • Zero-maintenance — just run it and the sheet updates itself
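The fingerprinting idea is simple enough to show directly. This is a sketch under assumed field names (`name`, `date`, `registration_url`); the rows it produces would then go to the Sheets API's `values().append()` call.

```python
def fingerprint(event: dict) -> str:
    """Stable dedup key from name + date; case- and whitespace-insensitive."""
    name = " ".join(event["name"].lower().split())
    return f"{name}|{event['date']}"

def new_rows(events: list[dict], existing: set[str]) -> list[list[str]]:
    """Keep only events whose fingerprint isn't already in the sheet.

    `existing` is the set of fingerprints read from the target tab at the
    start of a run; it is updated in place so duplicates within a single
    run are also dropped.
    """
    rows = []
    for e in events:
        fp = fingerprint(e)
        if fp not in existing:
            existing.add(fp)
            rows.append([e["name"], e["date"], e.get("registration_url", "")])
    return rows
```

Because the key is name + date rather than URL, the same event scraped from two different sources collapses into one row, which is exactly the noise the system exists to remove.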
04 / Resiliency Layer

Error Handling

Quota Management · Graceful Fallbacks

Real systems fail gracefully. When API rate limits hit, the engine saves what it has, surfaces a clear diagnostic, and exits cleanly — no silent data loss, no cryptic errors.

  • Custom API quota detection with partial save on limit hit
  • Source-level frequency logic — high-velocity sites scraped often, static sites checked monthly
  • Execution timer surfacing performance data for continuous optimization
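The fail-gracefully behavior can be sketched as a small runner. The exception class, function names, and callbacks here are illustrative, not the project's actual code; the pattern is what matters: on a quota hit, save what exists, print a clear diagnostic, and stop cleanly.

```python
import sys

class QuotaExceeded(Exception):
    """Raised when an API reports a rate or quota limit (e.g. HTTP 429)."""

def run_pipeline(sources, process, save):
    """Process sources in order; on a quota hit, save partial results.

    `process(source)` returns a list of events; `save(events)` persists
    them (e.g. appends to the sheet). No silent data loss: whatever was
    collected before the limit is saved before exiting.
    """
    collected = []
    for src in sources:
        try:
            collected.extend(process(src))
        except QuotaExceeded as exc:
            save(collected)
            print(
                f"quota hit on {src!r}: {exc}; saved {len(collected)} events",
                file=sys.stderr,
            )
            return collected
    save(collected)
    return collected
```

Treating quota exhaustion as an expected state rather than a crash is what lets the engine run unattended: a partial sheet this week beats an empty one.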

From a 3-hour weekly grind
to a 2-minute automated run.

Same intelligence. A fraction of the time. More runway for what actually matters.

90% Reduction in Research Effort
30+ Disparate Sources Unified Automatically
Scalability: Add Sources via Config, Not Code
Language & Runtime: Python 3.x · Asyncio
AI & Intelligence: Google Gemini 1.5 Flash · Generative AI SDK
Scraping & Automation: Playwright · Async Browser Automation · Concurrent Processing
APIs & Integration: Google Sheets API v4 · OAuth 2.0 · PyYAML · Dotenv

Built for the long game.
Shipped on a Sunday.

This is what happens when GTM instinct meets a builder's mindset.