Supabase Data Layer
21-column schema. RLS policies. Source tagging. Organic intelligence views.
Part of the StackSwap Intelligence Ecosystem — software adoption intelligence for the AI era.
What Is the StackSwap Supabase Data Layer?
The StackSwap Intelligence Ecosystem is built on a unified Supabase-backed data layer that stores leads, companies, stack analyses, and tool swap events. The schema includes 21+ columns per analysis record: company context (industry, team size, revenue stage, GTM motion), tool list (JSONB), analysis outputs (score, AI-native gap, overlaps count, swaps count, total savings), and spend estimates. Row Level Security (RLS) policies protect sensitive data while allowing authenticated reads and writes from the Next.js application and serverless APIs. Source tagging (e.g. stackscan, ai_gtm_engine, synthetic_gtm_data_seeder) ensures seed data and production traffic stay segmented for accurate analytics and safe purge paths.
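The record shape and the source-tagged purge path described above can be sketched in Python. This is a minimal illustration, not the actual schema: the field names beyond those named in the text, and the `purge_synthetic` helper, are assumptions for demonstration.

```python
from dataclasses import dataclass

@dataclass
class StackAnalysis:
    # A few of the 21+ columns mentioned above; names are illustrative.
    industry: str
    team_size: int
    tools: list            # stored as JSONB in Postgres
    score: float
    total_savings: float   # estimated $/mo savings
    source: str            # e.g. "stackscan", "ai_gtm_engine", "synthetic_gtm_data_seeder"

# Source tags that mark seed/demo rows rather than production traffic.
SYNTHETIC_SOURCES = {"synthetic_gtm_data_seeder"}

def purge_synthetic(rows):
    """Safe purge path: drop rows whose source tag marks them as seed data."""
    return [r for r in rows if r.source not in SYNTHETIC_SOURCES]
```

Because every writer stamps its own source tag, analytics queries can exclude synthetic rows with the same filter, and a purge never risks deleting real leads.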
How It Fits the StackSwap Intelligence Ecosystem
Every product in the StackSwap family reads or writes this layer: StackScan captures leads and writes stack_analyses and tool_swaps records; StackBuilder submits analyses through the same tables; StackSignal and internal dashboards query analyses and swaps for trend visualizations and benchmarking. Organic intelligence views aggregate by industry and team size so StackScan can surface contextual metrics (e.g. "X stacks analyzed in your segment, $Y/mo savings identified"). The same schema supports future surfaces such as CSV export, API access, and third-party integrations without duplication.
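The aggregation behind the organic intelligence views can be sketched as a group-by over analysis rows. The team-size bands, field names, and message format here are assumptions; in production this would live in a Postgres view rather than application code.

```python
from collections import defaultdict

def segment_metrics(rows):
    """Group analyses by (industry, team-size band), the way an
    organic-intelligence view might. Band edges are illustrative."""
    buckets = defaultdict(lambda: {"count": 0, "savings": 0.0})
    for r in rows:
        if r["team_size"] <= 10:
            band = "1-10"
        elif r["team_size"] <= 50:
            band = "11-50"
        else:
            band = "51+"
        key = (r["industry"], band)
        buckets[key]["count"] += 1
        buckets[key]["savings"] += r["total_savings"]
    return dict(buckets)

def segment_message(metrics, industry, band):
    """Render the contextual metric shown to a user in this segment."""
    m = metrics[(industry, band)]
    return (f"{m['count']} stacks analyzed in your segment, "
            f"${m['savings']:,.0f}/mo savings identified")
```

Keeping the aggregation keyed on the same company-context columns every product writes is what lets one view serve StackScan, StackSignal, and the internal dashboards without duplicated rollup logic.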
Why This Matters for GTM and SaaS Stack Intelligence
A single source of truth for stack and lead data enables consistent scoring, benchmarking, and recommendations across all touchpoints. Teams evaluating StackSwap for software adoption intelligence or GTM diagnostics can rely on a transparent, queryable data model that powers both the product experience and internal operations. The design prioritizes clarity, segmentation, and clean removal of synthetic data, so the platform can scale from demo to production without schema churn.