06

Databases

PostgreSQL in practice. Tables, queries, schemas. How I design data structures by explaining what I need to AI.

PostgreSQL · Schema Design · SQL
I never took a database course. I never read a book about SQL. Everything I know about databases I learned by building real projects that needed to store real data. PostgreSQL is what I use everywhere, and I picked it up the same way I picked up everything else — by describing what I needed and letting Claude figure out the schema.
My biggest database is in SMTPCloud Dashboard. Over 30 tables. More than 1,600 lines of raw SQL schema. Think about that for a second — someone with no database background designed a schema with tables for users, clients, campaigns, campaign_recipients, tracking_events, unsubscribes, programs, leads, nurture_programs, domain_rate_limits, email_send_queue, postmaster data, api_keys, scheduled_tasks, activity_logs, and assets. That is a serious production database, and it works.
Quality Monitoring has 8 tables with 15+ indexes. LeadTool uses the SQLAlchemy ORM with Alembic for migrations — Alembic is Python's standard way of managing database changes over time. MyVaultKeep.io uses Drizzle ORM with PostgreSQL, which is the TypeScript approach. So I have experience with both raw SQL and multiple ORM layers, across different languages.
Here is how I actually design a database: I describe to Claude what data I need to store and how different things relate to each other. For example, I would say something like 'campaigns have recipients, and each recipient can have tracking events like opens and clicks and bounces.' Claude turns that into proper tables with foreign keys connecting them. Foreign keys are just the database's way of saying 'this row in table A is connected to that row in table B.' Once you understand that concept, database design starts making sense.
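To make the foreign-key idea concrete, here is a scaled-down sketch of that campaigns → recipients → tracking events chain. The table and column names are illustrative, not the actual SMTPCloud schema, and I use Python's built-in SQLite driver so the example runs anywhere — the PostgreSQL DDL is nearly identical (you would use `SERIAL` or `GENERATED ... AS IDENTITY` instead of SQLite's implicit rowid).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this opt-in; Postgres enforces FKs by default

# Each REFERENCES clause is a foreign key: "this row in table A is
# connected to that row in table B."
conn.executescript("""
CREATE TABLE campaigns (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE campaign_recipients (
    id          INTEGER PRIMARY KEY,
    campaign_id INTEGER NOT NULL REFERENCES campaigns(id),
    email       TEXT NOT NULL
);
CREATE TABLE tracking_events (
    id           INTEGER PRIMARY KEY,
    recipient_id INTEGER NOT NULL REFERENCES campaign_recipients(id),
    event_type   TEXT NOT NULL CHECK (event_type IN ('open', 'click', 'bounce'))
);
""")

conn.execute("INSERT INTO campaigns (id, name) VALUES (1, 'Launch')")
conn.execute("INSERT INTO campaign_recipients (id, campaign_id, email) VALUES (1, 1, 'a@example.com')")
conn.execute("INSERT INTO tracking_events (recipient_id, event_type) VALUES (1, 'open')")

# The database itself rejects a recipient pointing at a campaign
# that does not exist -- that is the foreign key doing its job:
try:
    conn.execute("INSERT INTO campaign_recipients (campaign_id, email) VALUES (999, 'b@example.com')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The schema is literally the English sentence turned into DDL: each "has" became a table with a foreign key pointing back at its parent.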
In my biggest project, SMTPCloud Dashboard, I did not use an ORM at all. I wrote raw pg queries — direct SQL through the node-postgres driver. Sometimes simpler is better. ORMs add a layer of abstraction, and for a complex project with lots of custom queries, raw SQL gave me more control. Claude helped me write the queries, and I learned SQL by reading what it produced.
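The raw-query style looks like this — a sketch in Python's `sqlite3` driver rather than node-postgres (which uses `$1`, `$2` placeholders instead of `?`, but the pattern is the same). The tables and data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE campaigns (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tracking_events (campaign_id INTEGER, event_type TEXT);
INSERT INTO campaigns VALUES (1, 'Launch'), (2, 'Follow-up');
INSERT INTO tracking_events VALUES (1, 'open'), (1, 'open'), (1, 'click'), (2, 'open');
""")

# A raw parameterized query: the driver substitutes the value safely,
# which prevents SQL injection. Never build SQL with string formatting.
rows = conn.execute("""
    SELECT c.name, COUNT(*) AS opens
    FROM campaigns c
    JOIN tracking_events e ON e.campaign_id = c.id
    WHERE e.event_type = ?
    GROUP BY c.id
    ORDER BY opens DESC
""", ("open",)).fetchall()

print(rows)  # [('Launch', 2), ('Follow-up', 1)]
```

This is exactly the kind of custom aggregate query where raw SQL is more direct than coaxing an ORM into generating the same join.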
Indexes are something I learned about the hard way — when queries started getting slow. An index is like a lookup table that makes searching faster. Claude explains when you need them: if you are frequently searching by a column (like email address or campaign_id), you add an index on that column. Quality Monitoring has 15+ indexes because it needs to search and filter data quickly for real-time dashboards.
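You can actually watch an index change how the database executes a query. This sketch uses SQLite's `EXPLAIN QUERY PLAN` (PostgreSQL's equivalent is `EXPLAIN`); the table is a made-up stand-in for a recipients table searched by email.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recipients (id INTEGER PRIMARY KEY, email TEXT, campaign_id INTEGER)")

# Without an index, a lookup by email has to scan every row.
# On recent SQLite versions the plan detail reads something like "SCAN recipients".
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM recipients WHERE email = ?", ("a@example.com",)
).fetchall()
print(plan[0][-1])

# Add an index on the column we search by frequently:
conn.execute("CREATE INDEX idx_recipients_email ON recipients(email)")

# Now the plan shows a search using the index instead of a full scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM recipients WHERE email = ?", ("a@example.com",)
).fetchall()
print(plan[0][-1])
```

The trade-off: each index speeds up reads on that column but adds a little cost to every write, which is why you index the columns you actually filter and join on rather than everything.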
Schema evolution is a real thing nobody warns you about. You start with a simple database — maybe 5 tables. Then you add a feature and need a new table. Then another feature needs a new column. Then you realize you need to split one table into two. My SMTPCloud schema did not start with 30+ tables. It started small and grew as the product grew. The key is not being afraid to change your schema. With migrations (tools like Alembic or Drizzle Kit), you can version your database changes just like you version your code.
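Under the hood, migration tools like Alembic and Drizzle Kit all do a version of the same thing: number each schema change and record in the database which ones have already been applied. Here is a hand-rolled sketch of that idea (the table names are illustrative, and real tools add down-migrations, transactions, and autogeneration on top).

```python
import sqlite3

# Each schema change gets a version number, in the order it was written.
MIGRATIONS = {
    1: "CREATE TABLE leads (id INTEGER PRIMARY KEY, email TEXT)",
    2: "ALTER TABLE leads ADD COLUMN source TEXT",  # new feature needed a new column
    3: "CREATE TABLE nurture_programs (id INTEGER PRIMARY KEY, name TEXT)",
}

def migrate(conn):
    # The database remembers which versions it has already run.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version INTEGER PRIMARY KEY)")
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_migrations")}
    for version in sorted(MIGRATIONS):
        if version not in applied:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies versions 1, 2, 3
migrate(conn)  # safe to run again: nothing left to apply

versions = [v for (v,) in conn.execute("SELECT version FROM schema_migrations ORDER BY version")]
print(versions)  # [1, 2, 3]
```

Because the applied versions live in the database itself, every environment — your laptop, staging, production — can be brought to the same schema by running the same migration script.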
The honest truth about databases: they are less scary than they seem. If you can think about your data in terms of 'things' and 'relationships between things,' you can design a database. Users have campaigns. Campaigns have recipients. Recipients have tracking events. That sentence is basically a database schema. Claude handles the SQL syntax, the data types, the constraints, and the performance optimization. You handle knowing what your application actually needs to store.