drewBrew: Architecture-First Coffee Tracking
This architecture was designed in late 2025 as an exercise in systems thinking and end-to-end planning. Some planned components (like a standalone blog) were later superseded by this portfolio site, but the core demonstrates how I approach technical problems from business requirements through to future-state capability.
Overview
drewBrew is a coffee tracking system designed to help specialty coffee enthusiasts log beans, brews, and tasting notes in a structured way, with the long-term goal of providing analytics that reveal brewing patterns and optimal bean ageing windows.
What makes this project different is the approach: instead of building features iteratively, I treated it as an architecture exercise first. I validated business requirements, designed the data model, planned the application structure, and mapped the future-state analytics capability before writing significant code.
The result isn't a finished product — it's a demonstration of systematic thinking, business-to-technology alignment, and the ability to plan systems that can evolve without needing to be rebuilt.
The Problem
Specialty coffee brewing is deceptively complex. Variables like bean origin, roast date, grind size, water temperature, and brew method all affect flavour. Competitive baristas and serious hobbyists track these factors manually using notebooks or spreadsheets, but there's no easy way to spot patterns or understand how different variables interact over time.
The business need wasn't just "store brew data" — it was to capture the right data, with enough precision, that meaningful patterns can be discovered later.
Architecture Vision
I approached this like an enterprise architecture problem:
What's the long-term purpose?
Help users understand how bean ageing, brew method consistency, and recipe repeatability affect flavour outcomes.
Who are the users?
Competitive baristas, specialty coffee hobbyists, and anyone who wants to improve their brewing systematically.
What capabilities need to exist?
- Structured logging of beans, brews, and tastings
- Flexible capture of variable data (tasting notes, recipe steps)
- Future analytics that generate actionable insights
What constraints matter?
- Mobile-first experience
- Fast data entry (low friction)
- Privacy (local-first database, not centralised)
- Scalable to thousands of entries per user
This vision shaped every downstream decision — from schema design to technology choices to the planned analytics pipeline.
Business Architecture
I validated requirements with a competitive barista at a Leeds specialty coffee shop who has competed in the World Brewers Cup. He explained how crucial factors like bean age, water temperature, and recipe repeatability are in competitive brewing, and how difficult it is to track these variables over time without a structured system.
From those conversations, the business capabilities became clear:
- Structured data capture with enough granularity to be analysable
- Flexible tasting notes without rigid schema constraints
- Future insight generation that competitive and hobbyist brewers can act on
- Simple, intuitive interface for data entry
This wasn't just feature planning — it was identifying what the system needed to enable, not just what it needed to do.
Data Architecture (Implemented)
The data layer is the foundation, and this is the part I've built.
Technology Selection
I compared Firebase, MongoDB, and PostgreSQL:
- Firebase: Fast setup, but tightly coupled to Google's ecosystem and not ideal for complex analytics
- MongoDB: Flexible, but less suited for strong relational needs and structured querying
- PostgreSQL + JSONB: Best balance — relational structure where needed, flexible semi-structured storage where data varies
Schema Design
I designed the schema around core entities:
beans → brews → tastings
beans → recipes
brews → gear (many-to-many)

Structured fields hold consistent, measurable data (bean origin, roast date, dose, yield, water temperature).
JSONB fields hold variable data (tasting notes, recipe steps).
This hybrid approach directly reflects the business architecture: structure where needed for analysis, flexibility where variation is expected.
Example: The Bean Model
model Bean {
  id         String    @id @default(cuid())
  name       String
  roaster    String
  origin     String?
  varietal   String?
  process    String?
  roastDate  DateTime
  openedDate DateTime?
  frozenDate DateTime?
  price      Decimal?  @db.Decimal(10, 2)
  notes      String?
  createdAt  DateTime  @default(now())
  updatedAt  DateTime  @updatedAt
  brews      Brew[]
  recipes    Recipe[]
}

Key architectural decisions:

- roastDate is required — it's essential for ageing analysis
- openedDate and frozenDate are optional but captured when available
- Relationships are explicit (brews, recipes) to enable joins and aggregations
- The Decimal type for price avoids floating-point errors
Every field exists for a reason, and the schema anticipates future analytics needs.
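As a concrete illustration of why roastDate is required while openedDate is optional, here is a small TypeScript sketch of the kind of ageing calculation the schema is designed to support. The interface and function names are my own illustrative assumptions, not part of the actual codebase:

```typescript
// Hypothetical helper for ageing analysis. Mirrors the Bean model above:
// roastDate is always present; openedDate may be absent.
interface BeanDates {
  roastDate: Date;
  openedDate: Date | null;
}

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Days elapsed since roast — always computable because roastDate is required.
function beanAgeInDays(bean: BeanDates, at: Date = new Date()): number {
  return Math.floor((at.getTime() - bean.roastDate.getTime()) / MS_PER_DAY);
}

// Days since the bag was opened — null when the user never logged it.
function daysSinceOpened(bean: BeanDates, at: Date = new Date()): number | null {
  if (bean.openedDate === null) return null;
  return Math.floor((at.getTime() - bean.openedDate.getTime()) / MS_PER_DAY);
}
```

Making roastDate mandatory means every brew can be placed on an ageing timeline; optional fields degrade gracefully to null rather than blocking data entry.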
Application Architecture (Designed)
I've designed the backend and frontend layers, but they're not fully built yet.
Backend Structure
Modular Node/TypeScript backend with:
- Entity-based routing: /beans, /brews, /tastings, /recipes
- Clear separation of concerns: controllers handle HTTP, services contain logic, database layer isolated through Prisma
- Consistent validation patterns: Each entity validates input at the API boundary
- Error handling: Predictable error responses with meaningful messages
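A minimal sketch of how that controller/service/repository separation might look in practice. All names and shapes here are illustrative assumptions, not the actual codebase — the repository interface stands in for the Prisma-backed data layer:

```typescript
// Illustrative layering sketch: the service holds business logic and
// knows nothing about HTTP; the controller is a thin adapter;
// BrewRepository stands in for the Prisma data layer.
interface BrewInput { beanId: string; method: string; doseGrams: number; }
interface Brew extends BrewInput { id: string; }

interface BrewRepository {
  insert(input: BrewInput): Brew;
}

class BrewService {
  constructor(private repo: BrewRepository) {}

  createBrew(input: BrewInput): Brew {
    // Business rule lives here, not in the HTTP layer.
    if (input.doseGrams <= 0) throw new Error('dose must be positive');
    return this.repo.insert(input);
  }
}

// Controller layer: translates a request body into a service call
// (framework-agnostic here; in the design this would be an Express handler).
function createBrewHandler(service: BrewService) {
  return (body: unknown) => {
    const brew = service.createBrew(body as BrewInput);
    return { status: 201, json: brew };
  };
}
```

Because the service depends only on an interface, it can be tested with an in-memory repository and later refactored (even into a separate microservice) without touching HTTP code.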
Example validation logic (designed):
// Create Brew request validation
- Valid beanId (exists in database)
- Recognised brew method (from enum)
- Dose and yield within sensible ranges
- Water temperature between 80 and 100 °C
- Reject at the API boundary with a clear error

This keeps bad data out of the system and protects downstream analytics.
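The rules above could be sketched as a pure validation function. The enum values and numeric ranges below are illustrative assumptions (only the 80–100 °C bound comes from the design notes):

```typescript
// Sketch of the designed Create Brew validation as a pure function.
// Method list and dose/yield ranges are illustrative, not final.
const BREW_METHODS = ['v60', 'aeropress', 'espresso', 'french_press'] as const;

interface CreateBrewRequest {
  beanId: string;
  method: string;
  doseGrams: number;
  yieldGrams: number;
  waterTempC: number;
}

function validateCreateBrew(req: CreateBrewRequest, knownBeanIds: Set<string>): string[] {
  const errors: string[] = [];
  if (!knownBeanIds.has(req.beanId)) errors.push('beanId does not exist');
  if (!(BREW_METHODS as readonly string[]).includes(req.method)) errors.push('unrecognised brew method');
  if (req.doseGrams < 5 || req.doseGrams > 100) errors.push('dose out of range');
  if (req.yieldGrams < 10 || req.yieldGrams > 1000) errors.push('yield out of range');
  if (req.waterTempC < 80 || req.waterTempC > 100) errors.push('water temperature must be 80-100°C');
  return errors; // empty array = valid; otherwise reject at the API boundary
}
```

Returning a list of errors (rather than throwing on the first failure) lets the API respond with every problem at once, which matters for low-friction mobile data entry.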
Frontend
Next.js for:
- Predictable structure
- Server-side rendering where appropriate
- Clean separation between UI and data layers
- Mobile-first responsive design
The frontend is intentionally designed as a data entry interface — fast, simple, low-friction.
Technology Architecture (Designed)
Planned Hosting
- Cloud-hosted PostgreSQL (production database)
- Vercel for frontend (predictable global performance)
- Containerised backend (portable, independently scalable)
Why These Choices?
Containerising the backend means:
- The API can scale independently of the frontend
- Hosting provider can change (Vercel → GCP → AWS) without rewriting code
- Analytics workloads can run in separate containers
Vercel for frontend gives predictable performance globally without managing servers, but keeps the backend separate for flexibility.
These decisions support the current scale but don't block future capability.
Future State: BeanSights Analytics
This is where the architecture really shows its value.
BeanSights is a planned analytics layer that provides insights like:
- Bean ageing patterns (optimal flavour windows)
- Brew ratio recommendations
- Flavour consistency tracking
- Equipment performance analysis
Planned Architecture
Raw Logs: Brew and tasting entries from the main app
Curated Tables: Aggregated views designed for analysis (e.g., bean ageing profiles)
Analytics Jobs: Scheduled processes that calculate patterns or predictions
Insights Layer: API endpoints that serve user-facing insights
Why Design This Now?
Because decisions made today affect what's possible tomorrow.
- Capturing roastDate and openedDate now enables ageing analysis later
- JSONB indexes on flavour_notes keys make pattern extraction feasible
- Separating raw and curated data keeps analytics performant at scale
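To make the raw-to-curated idea concrete, here is a sketch of the kind of aggregation an analytics job might run: bucketing tastings by days since roast and averaging scores to surface a bean's optimal flavour window. The row shape, bucket size, and 0–10 score scale are all illustrative assumptions:

```typescript
// Hypothetical curated-table computation for BeanSights:
// group tasting scores into week-wide age buckets and average them.
interface TastingRow {
  daysSinceRoast: number; // derived from roastDate at brew time
  score: number;          // assumed 0-10 tasting score
}

function ageingProfile(rows: TastingRow[], bucketDays = 7): Map<number, number> {
  const sums = new Map<number, { total: number; count: number }>();
  for (const row of rows) {
    // Bucket start in days, e.g. 0, 7, 14, ...
    const bucket = Math.floor(row.daysSinceRoast / bucketDays) * bucketDays;
    const acc = sums.get(bucket) ?? { total: 0, count: 0 };
    acc.total += row.score;
    acc.count += 1;
    sums.set(bucket, acc);
  }
  const averages = new Map<number, number>();
  for (const [bucket, { total, count }] of sums) {
    averages.set(bucket, total / count);
  }
  return averages;
}
```

In the planned architecture this would run as a scheduled job over raw logs and write into a curated table, so the insights API never scans raw entries at request time.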
This is classic enterprise architecture thinking: look ahead, capture the right data now, plan for evolution.
Technical Decisions & Trade-Offs
Why PostgreSQL + JSONB over Pure NoSQL?
Relational structure gives:
- Strong referential integrity (beans → brews → tastings)
- Efficient joins for analytics queries
- ACID guarantees for data consistency
JSONB gives:
- Flexibility for variable tasting notes
- Fast key-value lookups
- Schema evolution without migrations
The hybrid approach balances structure and flexibility.
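A small sketch of why the flexible side matters in application code. Tasting notes stored as JSONB arrive as a free-form descriptor-to-intensity object, so new descriptors need no migration; readers just tolerate missing keys. Field names and the 0–10 scale are illustrative assumptions:

```typescript
// Variable tasting-note data as it might come out of a JSONB column:
// an open-ended map of flavour descriptor -> intensity (assumed 0-10).
type FlavourNotes = Record<string, number>;

// Safe lookup: a descriptor the user never logged is simply undefined,
// not a schema error — this is the "schema evolution without migrations" point.
function descriptorIntensity(notes: FlavourNotes, descriptor: string): number | undefined {
  return notes[descriptor];
}

// Rank the strongest descriptors for display or pattern extraction.
function topDescriptors(notes: FlavourNotes, n: number): string[] {
  return Object.entries(notes)
    .sort(([, a], [, b]) => b - a)
    .slice(0, n)
    .map(([name]) => name);
}
```

The relational side guarantees every tasting links to a real brew and bean; the JSONB side lets the vocabulary of notes grow with the user.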
Why Modular Backend Over Monolith?
Entity-based modules mean:
- Each service (Beans, Brews, Tastings) can evolve independently
- Business logic is isolated and testable
- Future refactoring (microservices, if needed) is easier
Why Containerisation?
- Portability: Move between hosting providers without rewrites
- Scalability: Scale API and analytics workloads independently
- Predictability: Consistent environments across dev, staging, production
System Diagrams
High-Level Architecture
Entity Relationship Diagram
What I Learned
This project taught me that architecture is where my strengths lie.
Business-to-Technology Alignment
Starting with user needs (competitive barista requirements) and working backwards shaped every technical decision. The schema isn't just "stores data" — it enables specific business capabilities.
Planning vs Building
Designing the application and analytics layers before implementing them forced me to think about:
- How decisions ripple across layers
- What needs to be captured now to enable future features
- Trade-offs between flexibility and structure
Systems Thinking
Even simple features (logging a brew) have architectural implications:
- What data is required vs optional?
- How does this relate to other entities?
- What indexes are needed for future queries?
- How does this support analytics?
Architecture as Communication
Creating diagrams, documentation, and clear naming conventions made the system understandable — not just to me, but to anyone who might work on it.
Current Status
Implemented:
- PostgreSQL database (local)
- Prisma ORM configuration
- Complete schema (Bean, Brew, Tasting, Recipe, Gear models)
- Hybrid relational + JSONB data structure
Designed (not yet built):
- Backend API (modular Node/TypeScript)
- Frontend (Next.js)
- BeanSights analytics layer
- Production hosting architecture
Why the gap?
I deliberately approached this as an architecture exercise first. The data layer is built because it's the foundation. The application layer is designed but paused while I focus on my portfolio site and job search.
Note: This architecture was designed in late 2025. Some planned components (like the drewBrew blog) became redundant after I built this portfolio site with an integrated blog system. The core data architecture and systems thinking remain valid demonstrations of my approach to technical planning.
Technologies
Database: PostgreSQL, Prisma ORM
Planned Backend: Node.js, TypeScript, Express
Planned Frontend: Next.js, React
Planned Hosting: Vercel (frontend), containerised backend
Documentation: Mermaid (diagrams), Obsidian (planning)
Why This Matters
This project demonstrates:
- Business-to-technology alignment — starting with user needs, not tech choices
- Data modelling — hybrid relational + JSONB for structure and flexibility
- Systems thinking — understanding how layers interact and evolve
- Future-state planning — designing for capability, not just features
- Technical decision-making — evaluating trade-offs, justifying choices
- Communication — clear documentation, diagrams, and reasoning
It's not a finished product. It's a demonstration of how I approach problems — systematically, intentionally, and with an eye on the long term.