Harborsmith Platform Architecture Documentation
Executive Summary
Harborsmith is a comprehensive yacht charter and maintenance management platform designed for the San Francisco Bay Area market. This document outlines the complete technical architecture, implementation strategy, and operational guidelines for building a scalable, beautiful, and performant system that serves yacht owners, charter customers, and administrative staff.
Platform Vision
- Beautiful: Nautical-themed UI with smooth animations and premium feel
- Scalable: Multi-region ready with horizontal scaling capabilities
- Fast: Sub-second page loads with optimized media delivery
- Responsive: Mobile-first design supporting all device types
- Enterprise-grade: RBAC, audit logging, compliance-ready
Core Components
- Public Website: Marketing and discovery platform (SSG)
- Customer Web App: Booking and charter management (SPA)
- Admin Portal: Operations and fleet management (SSR/SPA hybrid)
Table of Contents
- System Architecture Overview
- Technology Stack
- Frontend Architecture
- Backend Architecture
- Database Design
- Media Handling System
- Authentication & Authorization
- Third-Party Integrations
- Real-Time Communication
- Deployment Strategy
- Security Architecture
- Performance Optimization
- Monitoring & Observability
- Development Workflow
- Implementation Roadmap
System Architecture Overview
High-Level Architecture
graph TB
subgraph "Client Layer"
PW[Public Website<br/>Nuxt SSG]
CWA[Customer WebApp<br/>Nuxt SPA]
AP[Admin Portal<br/>Nuxt SSR/SPA]
end
subgraph "API Gateway"
TR[Traefik<br/>Load Balancer]
CACHE[Redis Cache]
end
subgraph "Application Layer"
API[Fastify + tRPC API]
WS[Socket.io Server]
MEDIA[Media Service<br/>Tus + FFmpeg]
end
subgraph "Data Layer"
PG[(PostgreSQL<br/>Primary DB)]
REDIS[(Redis<br/>Cache & Sessions)]
MINIO[MinIO<br/>Object Storage]
end
subgraph "External Services"
KC[Keycloak<br/>Identity]
CAL[Cal.com<br/>Scheduling]
STRIPE[Stripe<br/>Payments]
DIR[Directus<br/>CMS]
end
PW --> TR
CWA --> TR
AP --> TR
TR --> API
TR --> WS
TR --> MEDIA
API --> PG
API --> REDIS
API --> MINIO
API --> KC
API --> CAL
API --> STRIPE
API --> DIR
Monorepo Structure
harborsmith/
├── apps/
│ ├── website/ # Public marketing site (Nuxt SSG)
│ ├── webapp/ # Customer application (Nuxt SPA)
│ ├── portal/ # Admin portal (Nuxt SSR/SPA)
│ └── api/ # Backend API (Fastify + tRPC)
├── packages/
│ ├── shared/ # Shared types, utils, constants
│ ├── ui/ # Shared UI components library
│ ├── auth/ # Auth utilities and guards
│ ├── media/ # Media handling utilities
│ └── database/ # Prisma schema and migrations
├── infrastructure/
│ ├── docker/ # Docker configurations
│ ├── k8s/ # Kubernetes manifests (future)
│ └── terraform/ # Infrastructure as code (future)
├── docs/
│ ├── api/ # API documentation
│ ├── guides/ # Implementation guides
│ └── decisions/ # Architecture decision records
├── tools/
│ ├── scripts/ # Build and deployment scripts
│ └── generators/ # Code generators
├── docker-compose.yml
├── turbo.json # Turborepo configuration
├── package.json
└── tsconfig.json
Technology Stack
Frontend Stack
| Layer | Technology | Purpose | Justification |
|---|---|---|---|
| Framework | Nuxt 3.15+ | Universal Vue framework | SSG/SSR/SPA flexibility, excellent DX |
| UI Library | Nuxt UI v3 | Component library | Built for Nuxt 3, fully typed, customizable |
| CSS Framework | Tailwind CSS v4 | Utility-first CSS | Fast development, consistent design |
| State Management | Pinia | Vue state management | Type-safe, devtools support |
| Animations | Motion.dev | Animation library | Smooth, performant animations |
| Charts | Tremor | Dashboard components | Beautiful analytics components |
| Forms | VeeValidate + Zod | Form validation | Type-safe validation |
| Icons | Iconify | Icon system | Massive icon library |
| Utilities | VueUse | Composition utilities | Essential Vue composables |
| Media Upload | Uppy | File upload | 10GB+ support, resumable |
| Video Player | hls.js | Video streaming | HLS adaptive streaming |
Backend Stack
| Layer | Technology | Purpose | Justification |
|---|---|---|---|
| Runtime | Node.js 20+ | JavaScript runtime | LTS, performance improvements |
| Framework | Fastify | Web framework | High performance, plugin ecosystem |
| API Layer | tRPC | Type-safe APIs | End-to-end type safety |
| ORM | Prisma | Database toolkit | Type-safe queries, migrations |
| Validation | Zod | Schema validation | Runtime + compile-time safety |
| Queue | BullMQ | Job queue | Reliable background jobs |
| WebSocket | Socket.io | Real-time | Fallback support, rooms |
| Cache | Redis | Caching layer | Performance, sessions |
| Storage | MinIO | Object storage | S3-compatible, on-premise |
| Media | FFmpeg | Media processing | Video transcoding, HLS |
| Upload | Tus Server | Resumable uploads | Large file support |
Infrastructure Stack
| Component | Technology | Purpose | Configuration |
|---|---|---|---|
| Container | Docker | Containerization | Multi-stage builds |
| Orchestration | Docker Compose | Local development | Hot reload support |
| Proxy | Traefik | Reverse proxy | Auto SSL, load balancing |
| Database | PostgreSQL 16 | Primary database | JSONB, full-text search |
| Cache | Redis 7 | Caching & sessions | Persistence enabled |
| Storage | MinIO | Object storage | Multi-tenant buckets |
| Identity | Keycloak | Authentication | OIDC/OAuth2 |
| Monitoring | Glitchtip | Error tracking | Sentry-compatible |
| Email | Poste.io | Email server | SMTP/IMAP |
Frontend Architecture
Component Architecture
Design System Foundation
// packages/ui/tokens/design-tokens.ts
export const designTokens = {
colors: {
// Nautical Theme
ocean: {
50: '#f0f9ff',
100: '#e0f2fe',
200: '#bae6fd',
300: '#7dd3fc',
400: '#38bdf8',
500: '#0ea5e9', // Primary
600: '#0284c7',
700: '#0369a1',
800: '#075985',
900: '#0c4a6e',
950: '#083344',
},
sail: {
50: '#fefce8',
100: '#fef9c3',
200: '#fef088',
300: '#fde047',
400: '#facc15',
500: '#eab308', // Accent
600: '#ca8a04',
700: '#a16207',
800: '#854d0e',
900: '#713f12',
},
harbor: {
50: '#f9fafb',
100: '#f3f4f6',
200: '#e5e7eb',
300: '#d1d5db',
400: '#9ca3af',
500: '#6b7280', // Neutral
600: '#4b5563',
700: '#374151',
800: '#1f2937',
900: '#111827',
950: '#030712',
}
},
spacing: {
xs: '0.5rem',
sm: '0.75rem',
md: '1rem',
lg: '1.5rem',
xl: '2rem',
'2xl': '3rem',
'3xl': '4rem',
},
typography: {
fonts: {
heading: 'Cal Sans, system-ui, sans-serif',
body: 'Inter, system-ui, sans-serif',
mono: 'JetBrains Mono, monospace',
},
sizes: {
xs: '0.75rem',
sm: '0.875rem',
base: '1rem',
lg: '1.125rem',
xl: '1.25rem',
'2xl': '1.5rem',
'3xl': '1.875rem',
'4xl': '2.25rem',
'5xl': '3rem',
}
},
animation: {
timing: {
instant: '100ms',
fast: '200ms',
normal: '300ms',
slow: '500ms',
slower: '700ms',
},
easing: {
linear: 'linear',
in: 'cubic-bezier(0.4, 0, 1, 1)',
out: 'cubic-bezier(0, 0, 0.2, 1)',
inOut: 'cubic-bezier(0.4, 0, 0.2, 1)',
bounce: 'cubic-bezier(0.68, -0.55, 0.265, 1.55)',
}
},
shadows: {
sm: '0 1px 2px 0 rgb(0 0 0 / 0.05)',
md: '0 4px 6px -1px rgb(0 0 0 / 0.1)',
lg: '0 10px 15px -3px rgb(0 0 0 / 0.1)',
xl: '0 20px 25px -5px rgb(0 0 0 / 0.1)',
'2xl': '0 25px 50px -12px rgb(0 0 0 / 0.25)',
inner: 'inset 0 2px 4px 0 rgb(0 0 0 / 0.06)',
}
}
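These tokens can be surfaced to Tailwind so utility classes such as `bg-ocean-500` and `text-harbor-900` (used throughout the components below) resolve to the palette above. A minimal sketch, assuming a JS Tailwind config is kept alongside Tailwind CSS v4 (the theme could equally be declared in CSS via `@theme`):
// packages/ui/tailwind.config.ts — minimal sketch: exposing the design tokens to Tailwind
// Assumes a JS config file is still used; file path is illustrative.
import type { Config } from 'tailwindcss'
import { designTokens } from './tokens/design-tokens'

export default {
  theme: {
    extend: {
      colors: {
        ocean: designTokens.colors.ocean,
        sail: designTokens.colors.sail,
        harbor: designTokens.colors.harbor,
      },
      fontFamily: {
        heading: designTokens.typography.fonts.heading.split(', '),
        body: designTokens.typography.fonts.body.split(', '),
      },
      transitionDuration: {
        fast: designTokens.animation.timing.fast,
        normal: designTokens.animation.timing.normal,
      },
    },
  },
} satisfies Partial<Config>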
Shared UI Components
<!-- packages/ui/components/YachtCard.vue -->
<template>
<motion.div
:initial="{ opacity: 0, y: 20 }"
:animate="{ opacity: 1, y: 0 }"
:transition="{ duration: 0.3 }"
class="yacht-card group relative overflow-hidden rounded-2xl bg-white shadow-lg transition-all hover:shadow-2xl"
>
<!-- Image Gallery -->
<div class="relative aspect-[16/9] overflow-hidden">
<img
:src="yacht.primaryImage"
:alt="yacht.name"
class="h-full w-full object-cover transition-transform duration-500 group-hover:scale-110"
loading="lazy"
/>
<div class="absolute inset-0 bg-gradient-to-t from-black/50 to-transparent opacity-0 transition-opacity group-hover:opacity-100" />
<!-- Quick Actions -->
<div class="absolute top-4 right-4 flex gap-2">
<UButton
icon="i-heroicons-heart"
size="sm"
color="white"
variant="soft"
:ui="{ rounded: 'rounded-full' }"
@click="toggleFavorite"
/>
<UButton
icon="i-heroicons-share"
size="sm"
color="white"
variant="soft"
:ui="{ rounded: 'rounded-full' }"
@click="share"
/>
</div>
<!-- Price Badge -->
<div class="absolute bottom-4 left-4">
<UBadge size="lg" color="ocean" variant="solid">
${{ yacht.hourlyRate }}/hour
</UBadge>
</div>
</div>
<!-- Content -->
<div class="p-6">
<div class="mb-2 flex items-start justify-between">
<div>
<h3 class="text-xl font-semibold text-harbor-900">
{{ yacht.name }}
</h3>
<p class="text-sm text-harbor-500">
{{ yacht.model }} · {{ yacht.year }}
</p>
</div>
<UBadge :color="yacht.available ? 'green' : 'red'" variant="subtle">
{{ yacht.available ? 'Available' : 'Booked' }}
</UBadge>
</div>
<!-- Specs -->
<div class="mb-4 flex gap-4 text-sm text-harbor-600">
<div class="flex items-center gap-1">
<Icon name="i-mdi-account-group" />
<span>{{ yacht.capacity }} guests</span>
</div>
<div class="flex items-center gap-1">
<Icon name="i-mdi-ruler" />
<span>{{ yacht.length }}ft</span>
</div>
<div class="flex items-center gap-1">
<Icon name="i-mdi-bed" />
<span>{{ yacht.cabins }} cabins</span>
</div>
</div>
<!-- Features -->
<div class="mb-4 flex flex-wrap gap-2">
<UBadge
v-for="feature in yacht.features.slice(0, 3)"
:key="feature"
color="gray"
variant="subtle"
size="xs"
>
{{ feature }}
</UBadge>
<UBadge
v-if="yacht.features.length > 3"
color="gray"
variant="subtle"
size="xs"
>
+{{ yacht.features.length - 3 }} more
</UBadge>
</div>
<!-- Actions -->
<div class="flex gap-2">
<UButton
block
color="ocean"
size="lg"
@click="bookNow"
>
Book Now
</UButton>
<UButton
block
variant="outline"
color="ocean"
size="lg"
@click="viewDetails"
>
View Details
</UButton>
</div>
</div>
</motion.div>
</template>
<script setup lang="ts">
import { motion } from 'motion-v' // Vue integration of Motion (motion.dev)
import type { Yacht } from '@harborsmith/shared/types'
interface Props {
yacht: Yacht
}
const props = defineProps<Props>()
const emit = defineEmits<{
book: [yacht: Yacht]
view: [yacht: Yacht]
favorite: [yacht: Yacht]
share: [yacht: Yacht]
}>()
const toggleFavorite = () => emit('favorite', props.yacht)
const share = () => emit('share', props.yacht)
const bookNow = () => emit('book', props.yacht)
const viewDetails = () => emit('view', props.yacht)
</script>
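Illustrative usage of the shared card in a fleet listing page; the page path is an assumption and data fetching is omitted for brevity:
<!-- apps/website/pages/fleet/index.vue — illustrative usage (page path assumed) -->
<template>
  <div class="grid gap-6 sm:grid-cols-2 lg:grid-cols-3">
    <YachtCard
      v-for="yacht in yachts"
      :key="yacht.id"
      :yacht="yacht"
      @book="navigateTo(`/booking?yacht=${yacht.id}`)"
      @view="navigateTo(`/yachts/${yacht.slug}`)"
    />
  </div>
</template>

<script setup lang="ts">
import type { Yacht } from '@harborsmith/shared/types'
// Data fetching omitted; yachts would typically come from the tRPC list endpoint
const yachts = ref<Yacht[]>([])
</script>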
Application Structure
Public Website (SSG)
// apps/website/nuxt.config.ts
export default defineNuxtConfig({
extends: ['@harborsmith/ui'],
nitro: {
prerender: {
crawlLinks: true,
routes: [
'/',
'/fleet',
'/services',
'/about',
'/contact',
// Dynamic routes from API
'/yachts/**',
]
}
},
modules: [
'@nuxt/ui',
'@nuxt/image',
'@nuxtjs/seo',
'@nuxtjs/fontaine',
'@nuxtjs/partytown',
'@vueuse/nuxt',
],
ui: {
global: true,
icons: ['heroicons', 'mdi', 'carbon'],
},
image: {
provider: 'ipx',
domains: ['minio.harborsmith.com'],
alias: {
minio: 'https://minio.harborsmith.com',
},
screens: {
xs: 320,
sm: 640,
md: 768,
lg: 1024,
xl: 1280,
xxl: 1536,
'2xl': 1536,
},
},
seo: {
redirectToCanonicalSiteUrl: true,
},
experimental: {
payloadExtraction: false,
renderJsonPayloads: true,
componentIslands: true,
},
})
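The `/yachts/**` entry above is a placeholder; individual yacht pages can be queued for prerendering at build time through Nuxt's `nitro:config` hook. A minimal sketch — the endpoint URL and response shape are assumptions:
// apps/website/nuxt.config.ts (excerpt) — sketch of populating yacht routes at build time
// The fetch URL and response shape below are assumptions for illustration.
export default defineNuxtConfig({
  hooks: {
    async 'nitro:config'(nitroConfig) {
      if (!nitroConfig.prerender?.routes) return
      const res = await fetch('https://api.harborsmith.com/trpc/yachts.list?input={}')
      const { result } = await res.json()
      const slugs: string[] = result?.data?.yachts?.map((y: any) => y.slug) ?? []
      nitroConfig.prerender.routes.push(...slugs.map(slug => `/yachts/${slug}`))
    },
  },
})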
Customer Web App (SPA)
// apps/webapp/nuxt.config.ts
export default defineNuxtConfig({
extends: ['@harborsmith/ui'],
ssr: false,
modules: [
'@nuxt/ui',
'@pinia/nuxt',
'@vueuse/nuxt',
'@nuxtjs/device',
'nuxt-viewport',
],
runtimeConfig: {
public: {
apiUrl: process.env.NUXT_PUBLIC_API_URL,
wsUrl: process.env.NUXT_PUBLIC_WS_URL,
keycloakUrl: process.env.NUXT_PUBLIC_KEYCLOAK_URL,
keycloakRealm: process.env.NUXT_PUBLIC_KEYCLOAK_REALM,
keycloakClientId: process.env.NUXT_PUBLIC_KEYCLOAK_CLIENT_ID,
}
},
pinia: {
storesDirs: ['./stores/**'],
},
build: {
transpile: ['trpc-nuxt'],
},
})
Admin Portal (SSR/SPA Hybrid)
// apps/portal/nuxt.config.ts
export default defineNuxtConfig({
extends: ['@harborsmith/ui'],
nitro: {
prerender: {
routes: ['/login', '/dashboard'],
}
},
modules: [
'@nuxt/ui',
'@pinia/nuxt',
'@vueuse/nuxt',
'nuxt-viewport',
'@nuxtjs/i18n',
],
ssr: true,
experimental: {
viewTransition: true,
crossOriginPrefetch: true,
},
})
State Management
// apps/webapp/stores/booking.ts
import { defineStore } from 'pinia'
import { useNuxtData } from '#app'
export const useBookingStore = defineStore('booking', () => {
// State
const currentBooking = ref<Booking | null>(null)
const selectedYacht = ref<Yacht | null>(null)
const selectedDates = ref<DateRange | null>(null)
const selectedExtras = ref<Extra[]>([])
const bookingStep = ref<BookingStep>('yacht')
// Computed
const totalPrice = computed(() => {
if (!selectedYacht.value || !selectedDates.value) return 0
const hours = calculateHours(selectedDates.value)
const basePrice = selectedYacht.value.hourlyRate * hours
const extrasPrice = selectedExtras.value.reduce((sum, extra) => {
return sum + extra.price * (extra.perHour ? hours : 1)
}, 0)
return basePrice + extrasPrice
})
const canProceed = computed(() => {
switch (bookingStep.value) {
case 'yacht':
return !!selectedYacht.value
case 'dates':
return !!selectedDates.value
case 'extras':
return true // Extras are optional
case 'payment':
return totalPrice.value > 0
default:
return false
}
})
// Actions
const selectYacht = async (yacht: Yacht) => {
selectedYacht.value = yacht
await checkAvailability(yacht.id)
}
const selectDates = async (dates: DateRange) => {
selectedDates.value = dates
if (selectedYacht.value) {
await checkAvailability(selectedYacht.value.id, dates)
}
}
const createBooking = async () => {
const { $api } = useNuxtApp()
const booking = await $api.bookings.create.mutate({
yachtId: selectedYacht.value!.id,
startDate: selectedDates.value!.start,
endDate: selectedDates.value!.end,
extras: selectedExtras.value.map(e => e.id),
})
currentBooking.value = booking
return booking
}
const checkAvailability = async (yachtId: string, dates?: DateRange) => {
const { $api } = useNuxtApp()
return await $api.yachts.checkAvailability.query({
yachtId,
startDate: dates?.start,
endDate: dates?.end,
})
}
const reset = () => {
currentBooking.value = null
selectedYacht.value = null
selectedDates.value = null
selectedExtras.value = []
bookingStep.value = 'yacht'
}
return {
// State
currentBooking,
selectedYacht,
selectedDates,
selectedExtras,
bookingStep,
// Computed
totalPrice,
canProceed,
// Actions
selectYacht,
selectDates,
createBooking,
checkAvailability,
reset,
}
})
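The `$api` helper used by the store above can be provided by a Nuxt plugin wrapping a tRPC client; a minimal sketch, assuming trpc-nuxt's `createTRPCNuxtClient` (with such a client, procedures are invoked via `.query()` / `.mutate()` as in the store). The `AppRouter` import path is an assumption based on the monorepo layout:
// apps/webapp/plugins/api.ts — minimal sketch of the `$api` tRPC client
import { createTRPCNuxtClient, httpBatchLink } from 'trpc-nuxt/client'
import type { AppRouter } from '@harborsmith/api/trpc/router' // path assumed

export default defineNuxtPlugin(() => {
  const config = useRuntimeConfig()
  const api = createTRPCNuxtClient<AppRouter>({
    links: [
      httpBatchLink({
        url: `${config.public.apiUrl}/trpc`,
        // Forward cookies so the API can resolve the session
        fetch: (input, init) => fetch(input, { ...init, credentials: 'include' }),
      }),
    ],
  })
  return { provide: { api } }
})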
Backend Architecture
API Structure
Fastify Server Setup
// apps/api/src/server.ts
import Fastify from 'fastify'
import cors from '@fastify/cors'
import helmet from '@fastify/helmet'
import rateLimit from '@fastify/rate-limit'
import compress from '@fastify/compress'
import { fastifyTRPCPlugin } from '@trpc/server/adapters/fastify'
import { createContext } from './trpc/context'
import { appRouter } from './trpc/router'
import { tusPlugin } from './plugins/tus'
import { socketPlugin } from './plugins/socket'
import { metricsPlugin } from './plugins/metrics'
export async function createServer() {
const server = Fastify({
logger: {
level: process.env.LOG_LEVEL || 'info',
transport: {
target: '@axiomhq/pino',
options: {
dataset: process.env.AXIOM_DATASET,
token: process.env.AXIOM_TOKEN,
}
}
},
maxParamLength: 5000,
bodyLimit: 100 * 1024 * 1024, // 100MB; large media goes through the tus upload endpoint instead
})
// Core plugins
await server.register(helmet, {
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'"],
scriptSrc: ["'self'", "'unsafe-inline'", "'unsafe-eval'"],
imgSrc: ["'self'", 'data:', 'https:'],
connectSrc: ["'self'", 'wss:', 'https:'],
}
}
})
await server.register(cors, {
origin: process.env.ALLOWED_ORIGINS?.split(',') || true,
credentials: true,
})
await server.register(compress, {
global: true,
threshold: 1024,
encodings: ['gzip', 'deflate', 'br'],
})
await server.register(rateLimit, {
max: 100,
timeWindow: '1 minute',
cache: 10000,
allowList: ['127.0.0.1'],
redis: {
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT || '6379'),
}
})
// Custom plugins
await server.register(tusPlugin, { prefix: '/upload' })
await server.register(socketPlugin, { prefix: '/ws' })
await server.register(metricsPlugin, { prefix: '/metrics' })
// tRPC
await server.register(fastifyTRPCPlugin, {
prefix: '/trpc',
trpcOptions: {
router: appRouter,
createContext,
onError({ path, error }) {
server.log.error({ path, error }, 'tRPC error')
},
}
})
// Health check
server.get('/health', async (request, reply) => {
const checks = await performHealthChecks()
const healthy = Object.values(checks).every(check => check.status === 'healthy')
reply.code(healthy ? 200 : 503).send({
status: healthy ? 'healthy' : 'unhealthy',
timestamp: new Date().toISOString(),
checks,
})
})
return server
}
// Start server
const start = async () => {
const server = await createServer()
try {
const address = await server.listen({
  port: parseInt(process.env.PORT || '3000'),
  host: '0.0.0.0',
})
server.log.info(`Server listening on ${address}`)
} catch (err) {
server.log.error(err)
process.exit(1)
}
}
start()
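The health endpoint above calls a `performHealthChecks()` helper that is not shown; a minimal sketch, assuming shared Prisma, Redis, and MinIO clients are available (the import paths are illustrative):
// apps/api/src/lib/health.ts — minimal sketch of performHealthChecks()
// Client imports are illustrative; wire them to wherever the app instantiates these clients.
import { prisma } from './prisma'
import { redis } from './redis'
import { minio } from './minio'

type Check = { status: 'healthy' | 'unhealthy'; latencyMs?: number; error?: string }

async function probe(fn: () => Promise<unknown>): Promise<Check> {
  const start = Date.now()
  try {
    await fn()
    return { status: 'healthy', latencyMs: Date.now() - start }
  } catch (err) {
    return { status: 'unhealthy', error: (err as Error).message }
  }
}

export async function performHealthChecks(): Promise<Record<string, Check>> {
  const [database, cache, storage] = await Promise.all([
    probe(() => prisma.$queryRaw`SELECT 1`),   // PostgreSQL
    probe(() => redis.ping()),                 // Redis
    probe(() => minio.bucketExists('media')),  // MinIO
  ])
  return { database, cache, storage }
}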
tRPC Router Architecture
// apps/api/src/trpc/router.ts
import { t } from './trpc'
import { authRouter } from './routers/auth'
import { yachtsRouter } from './routers/yachts'
import { bookingsRouter } from './routers/bookings'
import { usersRouter } from './routers/users'
import { paymentsRouter } from './routers/payments'
import { mediaRouter } from './routers/media'
import { maintenanceRouter } from './routers/maintenance'
import { analyticsRouter } from './routers/analytics'
export const appRouter = t.router({
auth: authRouter,
yachts: yachtsRouter,
bookings: bookingsRouter,
users: usersRouter,
payments: paymentsRouter,
media: mediaRouter,
maintenance: maintenanceRouter,
analytics: analyticsRouter,
})
export type AppRouter = typeof appRouter
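The routers import `t`, `protectedProcedure`, and `adminProcedure` from a base `trpc.ts` module that is not shown above; a minimal sketch, assuming the context carries the Prisma client and the authenticated user (consistent with the ownership checks in the yachts router, `adminProcedure` lets owners through and relies on per-record checks for anything they do not own):
// apps/api/src/trpc/trpc.ts — minimal sketch of the base tRPC setup (context shape assumed)
import { initTRPC, TRPCError } from '@trpc/server'
import type { Context } from './context'

export const t = initTRPC.context<Context>().create()

// Requires an authenticated user on the context
export const protectedProcedure = t.procedure.use(({ ctx, next }) => {
  if (!ctx.user) {
    throw new TRPCError({ code: 'UNAUTHORIZED' })
  }
  return next({ ctx: { ...ctx, user: ctx.user } })
})

// Requires an elevated role; owners manage their own fleet, admins manage everything
export const adminProcedure = protectedProcedure.use(({ ctx, next }) => {
  if (!['OWNER', 'ADMIN', 'SUPER_ADMIN'].includes(ctx.user.role)) {
    throw new TRPCError({ code: 'FORBIDDEN' })
  }
  return next()
})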
// apps/api/src/trpc/routers/yachts.ts
import { z } from 'zod'
import { t, protectedProcedure, adminProcedure } from '../trpc'
import { YachtService } from '../../services/yacht.service'
import { TRPCError } from '@trpc/server'
const yachtInput = z.object({
name: z.string().min(1).max(100),
model: z.string(),
year: z.number().min(1900).max(new Date().getFullYear() + 1),
length: z.number().min(10).max(500),
capacity: z.number().min(1).max(100),
cabins: z.number().min(0).max(20),
hourlyRate: z.number().min(0),
dailyRate: z.number().min(0),
features: z.array(z.string()),
description: z.string(),
location: z.object({
marina: z.string(),
berth: z.string().optional(),
latitude: z.number(),
longitude: z.number(),
}),
})
export const yachtsRouter = t.router({
// Public procedures
list: t.procedure
.input(z.object({
page: z.number().min(1).default(1),
limit: z.number().min(1).max(100).default(20),
filters: z.object({
location: z.string().optional(),
capacity: z.number().optional(),
priceRange: z.object({
min: z.number().optional(),
max: z.number().optional(),
}).optional(),
features: z.array(z.string()).optional(),
available: z.object({
from: z.date(),
to: z.date(),
}).optional(),
}).optional(),
sort: z.enum(['price', 'capacity', 'rating', 'popular']).default('popular'),
}))
.query(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
return service.listYachts(input)
}),
getById: t.procedure
.input(z.string().uuid())
.query(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
const yacht = await service.getYacht(input)
if (!yacht) {
throw new TRPCError({
code: 'NOT_FOUND',
message: 'Yacht not found',
})
}
return yacht
}),
checkAvailability: t.procedure
.input(z.object({
yachtId: z.string().uuid(),
startDate: z.date(),
endDate: z.date(),
}))
.query(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
return service.checkAvailability(input)
}),
// Protected procedures (require auth)
create: adminProcedure
.input(yachtInput)
.mutation(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
return service.createYacht({
...input,
ownerId: ctx.user.id,
})
}),
update: adminProcedure
.input(z.object({
id: z.string().uuid(),
data: yachtInput.partial(),
}))
.mutation(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
// Check ownership
const yacht = await service.getYacht(input.id)
if (!yacht || (yacht.ownerId !== ctx.user.id && ctx.user.role !== 'ADMIN')) {
throw new TRPCError({
code: 'FORBIDDEN',
message: 'You do not have permission to update this yacht',
})
}
return service.updateYacht(input.id, input.data)
}),
delete: adminProcedure
.input(z.string().uuid())
.mutation(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
// Check ownership
const yacht = await service.getYacht(input)
if (!yacht || (yacht.ownerId !== ctx.user.id && ctx.user.role !== 'ADMIN')) {
throw new TRPCError({
code: 'FORBIDDEN',
message: 'You do not have permission to delete this yacht',
})
}
return service.deleteYacht(input)
}),
// Analytics
getStats: protectedProcedure
.input(z.string().uuid())
.query(async ({ input, ctx }) => {
const service = new YachtService(ctx.prisma)
return service.getYachtStats(input, ctx.user.id)
}),
})
Service Layer Architecture
// apps/api/src/services/yacht.service.ts
import { PrismaClient, Prisma } from '@prisma/client'
import { cache } from '../lib/cache'
import { EventEmitter } from '../lib/events'
export class YachtService {
constructor(
private prisma: PrismaClient,
private events: EventEmitter = new EventEmitter(),
) {}
async listYachts(params: ListYachtsParams) {
const cacheKey = `yachts:list:${JSON.stringify(params)}`
// Check cache first
const cached = await cache.get(cacheKey)
if (cached) return cached
const where: Prisma.YachtWhereInput = {}
// Build filters
if (params.filters) {
if (params.filters.location) {
where.location = {
marina: {
contains: params.filters.location,
mode: 'insensitive',
}
}
}
if (params.filters.capacity) {
where.capacity = { gte: params.filters.capacity }
}
if (params.filters.priceRange) {
where.hourlyRate = {
gte: params.filters.priceRange.min,
lte: params.filters.priceRange.max,
}
}
if (params.filters.features?.length) {
where.features = {
hasEvery: params.filters.features,
}
}
if (params.filters.available) {
// Complex availability check
where.bookings = {
none: {
OR: [
{
startDate: {
lte: params.filters.available.to,
},
endDate: {
gte: params.filters.available.from,
},
status: {
in: ['CONFIRMED', 'PENDING'],
}
}
]
}
}
}
}
// Build order by
const orderBy: Prisma.YachtOrderByWithRelationInput = {}
switch (params.sort) {
case 'price':
orderBy.hourlyRate = 'asc'
break
case 'capacity':
orderBy.capacity = 'desc'
break
case 'rating':
orderBy.rating = 'desc'
break
case 'popular':
default:
orderBy.bookingCount = 'desc'
break
}
// Execute query with pagination
const [total, yachts] = await Promise.all([
this.prisma.yacht.count({ where }),
this.prisma.yacht.findMany({
where,
orderBy,
skip: (params.page - 1) * params.limit,
take: params.limit,
include: {
media: {
where: { isPrimary: true },
take: 1,
},
reviews: {
select: {
rating: true,
}
}
}
})
])
// Process results
const results = yachts.map(yacht => ({
...yacht,
primaryImage: yacht.media[0]?.url,
averageRating: yacht.reviews.length
? yacht.reviews.reduce((sum, r) => sum + r.rating, 0) / yacht.reviews.length
: null,
}))
const response = {
yachts: results,
pagination: {
page: params.page,
limit: params.limit,
total,
totalPages: Math.ceil(total / params.limit),
}
}
// Cache for 5 minutes
await cache.set(cacheKey, response, 300)
return response
}
async createYacht(data: CreateYachtData) {
const yacht = await this.prisma.yacht.create({
data: {
...data,
slug: this.generateSlug(data.name),
status: 'DRAFT',
},
include: {
owner: true,
}
})
// Emit event for other services
this.events.emit('yacht.created', { yacht })
// Invalidate cache
await cache.deletePattern('yachts:list:*')
return yacht
}
async updateYacht(id: string, data: UpdateYachtData) {
const yacht = await this.prisma.yacht.update({
where: { id },
data: {
...data,
updatedAt: new Date(),
},
})
// Emit event
this.events.emit('yacht.updated', { yacht })
// Invalidate cache
await Promise.all([
cache.delete(`yacht:${id}`),
cache.deletePattern('yachts:list:*'),
])
return yacht
}
async checkAvailability({ yachtId, startDate, endDate }: CheckAvailabilityParams) {
const conflicts = await this.prisma.booking.findMany({
where: {
yachtId,
status: {
in: ['CONFIRMED', 'PENDING'],
},
OR: [
{
startDate: {
lte: endDate,
},
endDate: {
gte: startDate,
},
}
]
},
select: {
startDate: true,
endDate: true,
status: true,
}
})
const maintenanceSchedules = await this.prisma.maintenanceSchedule.findMany({
where: {
yachtId,
status: 'SCHEDULED',
startDate: {
lte: endDate,
},
endDate: {
gte: startDate,
},
},
select: {
startDate: true,
endDate: true,
type: true,
}
})
return {
available: conflicts.length === 0 && maintenanceSchedules.length === 0,
conflicts: conflicts.map(c => ({
start: c.startDate,
end: c.endDate,
type: 'booking' as const,
status: c.status,
})),
maintenance: maintenanceSchedules.map(m => ({
start: m.startDate,
end: m.endDate,
type: 'maintenance' as const,
maintenanceType: m.type,
})),
}
}
private generateSlug(name: string): string {
return name
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-+|-+$/g, '')
}
}
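The service relies on a small `cache` helper (`get`, `set`, `delete`, `deletePattern`) imported from `../lib/cache`; a minimal sketch backed by ioredis, with connection details assumed from the environment:
// apps/api/src/lib/cache.ts — minimal sketch of the cache helper used by the services
import { Redis } from 'ioredis'

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || '6379'),
})

export const cache = {
  async get<T>(key: string): Promise<T | null> {
    const value = await redis.get(key)
    return value ? (JSON.parse(value) as T) : null
  },
  async set(key: string, value: unknown, ttlSeconds: number) {
    await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds)
  },
  async delete(key: string) {
    await redis.del(key)
  },
  // SCAN-based deletion so large keyspaces are not blocked by KEYS
  async deletePattern(pattern: string) {
    let cursor = '0'
    do {
      const [next, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100)
      if (keys.length) await redis.del(...keys)
      cursor = next
    } while (cursor !== '0')
  },
}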
Database Design
Prisma Schema
// packages/database/prisma/schema.prisma
generator client {
provider = "prisma-client-js"
previewFeatures = ["fullTextSearch", "postgresqlExtensions"]
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
extensions = [pgcrypto, postgis, pg_trgm]
}
// ==================== USER & AUTH ====================
model User {
id String @id @default(uuid())
keycloakId String @unique
email String @unique
firstName String
lastName String
phone String?
avatar String?
role UserRole @default(CUSTOMER)
status UserStatus @default(ACTIVE)
// Billing (referenced by StripeService)
stripeCustomerId String? @unique
stripeAccountId String? @unique
// Profile
profile Profile?
preferences Json @default("{}")
// Relations
ownedYachts Yacht[] @relation("YachtOwner")
bookings Booking[]
reviews Review[]
payments Payment[]
notifications Notification[]
messages Message[] // back-relation for Message.sender
activityLogs ActivityLog[]
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
lastLoginAt DateTime?
@@index([email])
@@index([keycloakId])
@@index([role, status])
}
enum UserRole {
CUSTOMER
OWNER
CAPTAIN
CREW
ADMIN
SUPER_ADMIN
}
enum UserStatus {
ACTIVE
SUSPENDED
DELETED
}
model Profile {
id String @id @default(uuid())
userId String @unique
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
// Preferences
language String @default("en")
timezone String @default("America/Los_Angeles")
currency String @default("USD")
// Boating Experience
boatingLicense String?
experienceLevel ExperienceLevel @default(BEGINNER)
certifications String[]
// Emergency Contact
emergencyName String?
emergencyPhone String?
emergencyRelation String?
// Documents
documents Document[]
// Metadata
metadata Json @default("{}")
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
enum ExperienceLevel {
BEGINNER
INTERMEDIATE
ADVANCED
PROFESSIONAL
}
// ==================== YACHT ====================
model Yacht {
id String @id @default(uuid())
slug String @unique
name String
model String
manufacturer String
year Int
// Specifications
length Float // in feet
beam Float? // width in feet
draft Float? // depth in feet
capacity Int // number of guests
cabins Int
bathrooms Int
engineType String?
enginePower String? // horsepower
fuelCapacity Float? // in gallons
waterCapacity Float? // in gallons
// Pricing
hourlyRate Float
halfDayRate Float? // 4 hours
fullDayRate Float? // 8 hours
weeklyRate Float?
securityDeposit Float
// Location
location Json // { marina, berth, latitude, longitude }
homePort String
cruisingArea String[]
// Features & Amenities
features String[]
amenities String[]
waterToys String[]
safetyEquipment String[]
navigationEquip String[]
// Description
description String @db.Text
highlights String[]
rules String[]
// Status
status YachtStatus @default(DRAFT)
available Boolean @default(true)
instantBooking Boolean @default(false)
// Relations
ownerId String
owner User @relation("YachtOwner", fields: [ownerId], references: [id])
captain Captain?
crew Crew[]
media Media[]
bookings Booking[]
reviews Review[]
maintenance MaintenanceSchedule[]
documents Document[]
insurance Insurance[]
// Analytics
viewCount Int @default(0)
bookingCount Int @default(0)
rating Float?
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
lastServiceDate DateTime?
@@index([slug])
@@index([status, available])
@@index([ownerId])
@@index([hourlyRate])
@@index([capacity])
// Full-text search is handled at query time via the fullTextSearch preview and pg_trgm;
// @@fulltext index attributes are not supported on the PostgreSQL provider
@@index([name, model, manufacturer])
}
enum YachtStatus {
DRAFT
PENDING_REVIEW
ACTIVE
INACTIVE
MAINTENANCE
ARCHIVED
}
// ==================== BOOKING ====================
model Booking {
id String @id @default(uuid())
bookingNumber String @unique
// Relations
yachtId String
yacht Yacht @relation(fields: [yachtId], references: [id])
userId String
user User @relation(fields: [userId], references: [id])
// Dates
startDate DateTime
endDate DateTime
duration Float // in hours
// Guests
guestCount Int
guestDetails Json[] // Array of guest information
// Pricing
basePrice Float
extrasPrice Float @default(0)
discountAmount Float @default(0)
taxAmount Float
totalPrice Float
depositAmount Float
// Status
status BookingStatus @default(PENDING)
paymentStatus PaymentStatus @default(PENDING)
// Extras
extras BookingExtra[]
// Captain & Crew
captainRequired Boolean @default(true)
crewRequired Boolean @default(false)
assignedCaptain String?
assignedCrew String[]
// Check-in/out
checkInTime DateTime?
checkOutTime DateTime?
checkInNotes String?
checkOutNotes String?
damageReport Json?
// Payment
payments Payment[]
refunds Refund[]
// Communication
messages Message[]
// Metadata
source String @default("WEBSITE") // WEBSITE, APP, PHONE, PARTNER
specialRequests String?
internalNotes String?
cancellationReason String?
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
confirmedAt DateTime?
cancelledAt DateTime?
@@index([bookingNumber])
@@index([yachtId, startDate, endDate])
@@index([userId])
@@index([status, paymentStatus])
@@index([startDate])
}
enum BookingStatus {
PENDING
CONFIRMED
IN_PROGRESS
COMPLETED
CANCELLED
NO_SHOW
}
enum PaymentStatus {
PENDING
PARTIAL
PAID
REFUNDED
FAILED
}
model BookingExtra {
id String @id @default(uuid())
bookingId String
booking Booking @relation(fields: [bookingId], references: [id], onDelete: Cascade)
name String
description String?
category String // CATERING, EQUIPMENT, SERVICE, OTHER
quantity Int @default(1)
unitPrice Float
totalPrice Float
createdAt DateTime @default(now())
@@index([bookingId])
}
// ==================== PAYMENT ====================
model Payment {
id String @id @default(uuid())
paymentNumber String @unique
// Relations
bookingId String
booking Booking @relation(fields: [bookingId], references: [id])
userId String
user User @relation(fields: [userId], references: [id])
// Amount
amount Float
currency String @default("USD")
// Stripe
stripePaymentId String? @unique
stripePaymentIntent String?
stripeCustomerId String?
paymentMethod String // CARD, BANK, WALLET
// Status
status PaymentTransactionStatus @default(PENDING)
// Metadata
description String?
metadata Json @default("{}")
failureReason String?
// Timestamps
createdAt DateTime @default(now())
processedAt DateTime?
@@index([paymentNumber])
@@index([bookingId])
@@index([userId])
@@index([status])
@@index([stripePaymentId])
}
enum PaymentTransactionStatus {
PENDING
PROCESSING
SUCCEEDED
FAILED
CANCELLED
}
// ==================== MEDIA ====================
model Media {
id String @id @default(uuid())
// Relations
yachtId String?
yacht Yacht? @relation(fields: [yachtId], references: [id], onDelete: Cascade)
reviewId String?
review Review? @relation(fields: [reviewId], references: [id], onDelete: Cascade)
// File Info
fileName String
mimeType String
size Int // in bytes
// URLs
url String // Public URL
thumbnailUrl String? // Thumbnail for images/videos
streamUrl String? // HLS stream URL for videos
// MinIO
bucket String
objectKey String
etag String?
// Type & Purpose
type MediaType
category String? // EXTERIOR, INTERIOR, DECK, CABIN, etc.
isPrimary Boolean @default(false)
order Int @default(0)
// Video Specific
duration Float? // in seconds
resolution String? // 1080p, 4K, etc.
frameRate Float?
bitrate Int?
// Processing
processingStatus ProcessingStatus @default(PENDING)
processingError String?
processedAt DateTime?
// Metadata
alt String?
caption String?
metadata Json @default("{}")
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([yachtId, type, isPrimary])
@@index([reviewId])
@@index([processingStatus])
}
enum MediaType {
IMAGE
VIDEO
DOCUMENT
VIRTUAL_TOUR
}
enum ProcessingStatus {
PENDING
PROCESSING
COMPLETED
FAILED
}
// ==================== REVIEW ====================
model Review {
id String @id @default(uuid())
// Relations
bookingId String @unique
booking Booking @relation(fields: [bookingId], references: [id])
yachtId String
yacht Yacht @relation(fields: [yachtId], references: [id])
userId String
user User @relation(fields: [userId], references: [id])
// Ratings (1-5 stars)
overallRating Float
cleanlinessRating Float?
accuracyRating Float?
valueRating Float?
serviceRating Float?
locationRating Float?
// Content
title String?
content String @db.Text
pros String[]
cons String[]
// Media
media Media[]
// Response
ownerResponse String? @db.Text
ownerRespondedAt DateTime?
// Status
status ReviewStatus @default(PENDING)
isVerified Boolean @default(false)
isFeatured Boolean @default(false)
// Helpful votes
helpfulCount Int @default(0)
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
publishedAt DateTime?
@@index([yachtId, status])
@@index([userId])
@@index([bookingId])
@@index([overallRating])
}
enum ReviewStatus {
PENDING
APPROVED
REJECTED
HIDDEN
}
// ==================== MAINTENANCE ====================
model MaintenanceSchedule {
id String @id @default(uuid())
// Relations
yachtId String
yacht Yacht @relation(fields: [yachtId], references: [id])
// Schedule
type MaintenanceType
title String
description String?
startDate DateTime
endDate DateTime
// Service Provider
provider String?
providerContact String?
estimatedCost Float?
actualCost Float?
// Status
status MaintenanceStatus @default(SCHEDULED)
// Notes
notes String? @db.Text
completionNotes String? @db.Text
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
completedAt DateTime?
@@index([yachtId, startDate, endDate])
@@index([status])
}
enum MaintenanceType {
ROUTINE
REPAIR
INSPECTION
CLEANING
UPGRADE
EMERGENCY
}
enum MaintenanceStatus {
SCHEDULED
IN_PROGRESS
COMPLETED
CANCELLED
}
// ==================== CAPTAIN & CREW ====================
model Captain {
id String @id @default(uuid())
yachtId String @unique
yacht Yacht @relation(fields: [yachtId], references: [id])
// Personal Info
firstName String
lastName String
email String
phone String
// Credentials
licenseNumber String
licenseExpiry DateTime
certifications String[]
yearsExperience Int
// Availability
availability Json // Calendar availability
hourlyRate Float
// Status
status CrewStatus @default(ACTIVE)
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Crew {
id String @id @default(uuid())
yachtId String
yacht Yacht @relation(fields: [yachtId], references: [id])
// Personal Info
firstName String
lastName String
role String // DECKHAND, CHEF, STEWARD, etc.
// Contact
email String?
phone String?
// Employment
hourlyRate Float?
status CrewStatus @default(ACTIVE)
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([yachtId])
}
enum CrewStatus {
ACTIVE
INACTIVE
ON_LEAVE
TERMINATED
}
// ==================== COMMUNICATION ====================
model Message {
id String @id @default(uuid())
// Relations
bookingId String
booking Booking @relation(fields: [bookingId], references: [id])
senderId String
sender User @relation(fields: [senderId], references: [id])
// Content
content String @db.Text
attachments String[]
// Status
isRead Boolean @default(false)
readAt DateTime?
// Timestamps
createdAt DateTime @default(now())
editedAt DateTime?
@@index([bookingId])
@@index([senderId])
}
model Notification {
id String @id @default(uuid())
// Relations
userId String
user User @relation(fields: [userId], references: [id])
// Content
type NotificationType
title String
message String
data Json @default("{}")
// Status
isRead Boolean @default(false)
readAt DateTime?
// Delivery
channels String[] // EMAIL, SMS, PUSH, IN_APP
emailSent Boolean @default(false)
smsSent Boolean @default(false)
pushSent Boolean @default(false)
// Timestamps
createdAt DateTime @default(now())
expiresAt DateTime?
@@index([userId, isRead])
@@index([type])
}
enum NotificationType {
BOOKING_CONFIRMED
BOOKING_CANCELLED
BOOKING_REMINDER
PAYMENT_SUCCESS
PAYMENT_FAILED
REVIEW_REQUEST
MAINTENANCE_SCHEDULED
SYSTEM_ANNOUNCEMENT
CUSTOM
}
// ==================== DOCUMENTS ====================
model Document {
id String @id @default(uuid())
// Relations (polymorphic)
entityType String // USER, YACHT, BOOKING, etc.
entityId String
// Document Info
type DocumentType
name String
description String?
// File
fileUrl String
fileName String
mimeType String
size Int
// Status
status DocumentStatus @default(PENDING)
verifiedBy String?
verifiedAt DateTime?
expiresAt DateTime?
// Timestamps
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([entityType, entityId])
@@index([type, status])
}
enum DocumentType {
LICENSE
INSURANCE
REGISTRATION
INSPECTION
CONTRACT
WAIVER
ID_PROOF
OTHER
}
enum DocumentStatus {
PENDING
VERIFIED
REJECTED
EXPIRED
}
// ==================== AUDIT & ANALYTICS ====================
model ActivityLog {
id String @id @default(uuid())
// Actor
userId String?
user User? @relation(fields: [userId], references: [id])
// Action
action String // CREATE, UPDATE, DELETE, VIEW, etc.
entityType String // YACHT, BOOKING, USER, etc.
entityId String
// Details
changes Json? // Before/after values
metadata Json @default("{}")
// Request Info
ipAddress String?
userAgent String?
// Timestamp
createdAt DateTime @default(now())
@@index([userId])
@@index([entityType, entityId])
@@index([action])
@@index([createdAt])
}
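The schema is consumed through a shared Prisma client exported from the `packages/database` workspace; a minimal sketch (the file path and hot-reload guard are assumptions):
// packages/database/src/client.ts — minimal sketch of the shared Prisma client
import { PrismaClient } from '@prisma/client'

// Reuse a single client across hot reloads in development to avoid exhausting connections
const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient }

export const prisma =
  globalForPrisma.prisma ??
  new PrismaClient({
    log: process.env.NODE_ENV === 'development' ? ['query', 'warn', 'error'] : ['error'],
  })

if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma
}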
Media Handling System
Upload Architecture
// apps/api/src/plugins/tus.ts
import { FastifyPluginAsync } from 'fastify'
import { Server, EVENTS } from '@tus/server'
import { S3Store } from '@tus/s3-store'
import { Client } from 'minio'
import { nanoid } from 'nanoid'
export const tusPlugin: FastifyPluginAsync = async (fastify) => {
const minioClient = new Client({
endPoint: process.env.MINIO_ENDPOINT!,
port: parseInt(process.env.MINIO_PORT || '9000'),
useSSL: process.env.MINIO_USE_SSL === 'true',
accessKey: process.env.MINIO_ACCESS_KEY!,
secretKey: process.env.MINIO_SECRET_KEY!,
})
const tusServer = new Server({
path: '/upload',
maxSize: 10 * 1024 * 1024 * 1024, // 10GB
datastore: new S3Store({
s3Client: minioClient,
bucket: 'uploads',
partSize: 10 * 1024 * 1024, // 10MB parts
}),
namingFunction(req) {
  // Upload-Metadata is a comma-separated list of base64-encoded key/value pairs (tus spec)
  const meta = (req.headers['upload-metadata'] as string | undefined)
    ?.split(',').map(pair => pair.trim().split(' ')).find(([key]) => key === 'filename')
  const filename = meta?.[1] ? Buffer.from(meta[1], 'base64').toString('utf8') : ''
  return `${nanoid()}.${filename.split('.').pop() || 'bin'}`
},
onUploadCreate: async (req, res, upload) => {
const metadata = upload.metadata
// Validate user permissions
const token = req.headers.authorization?.replace('Bearer ', '')
if (!token) throw new Error('Unauthorized')
const user = await validateToken(token)
if (!user) throw new Error('Invalid token')
// Store upload record
await fastify.prisma.upload.create({
data: {
id: upload.id,
userId: user.id,
fileName: metadata.filename,
mimeType: metadata.filetype,
size: upload.size,
status: 'UPLOADING',
}
})
return res
},
onUploadFinish: async (req, res, upload) => {
const uploadRecord = await fastify.prisma.upload.findUnique({
where: { id: upload.id }
})
if (!uploadRecord) throw new Error('Upload not found')
// Process based on file type
const processor = getProcessor(uploadRecord.mimeType)
if (processor) {
// Queue processing job
await fastify.queue.add('media.process', {
uploadId: upload.id,
type: processor,
source: `uploads/${upload.id}`,
})
}
// Update status
await fastify.prisma.upload.update({
where: { id: upload.id },
data: {
status: 'COMPLETED',
completedAt: new Date(),
}
})
// Emit event
fastify.events.emit('upload.completed', { upload })
return res
}
})
// Handle tus protocol
fastify.all('/upload', (req, reply) => {
tusServer.handle(req.raw, reply.raw)
})
fastify.all('/upload/*', (req, reply) => {
tusServer.handle(req.raw, reply.raw)
})
}
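On the client, the frontend stack pairs this endpoint with Uppy's tus plugin for resumable 10GB+ uploads; a minimal sketch (the composable name, auth token source, and metadata fields are assumptions):
// apps/portal/composables/useYachtMediaUpload.ts — minimal sketch of the Uppy + tus client side
import Uppy from '@uppy/core'
import Tus from '@uppy/tus'

export function useYachtMediaUpload(yachtId: string, token: string) {
  const uppy = new Uppy({
    restrictions: {
      maxFileSize: 10 * 1024 * 1024 * 1024, // match the 10GB server-side limit
      allowedFileTypes: ['image/*', 'video/*'],
    },
  }).use(Tus, {
    endpoint: 'https://api.harborsmith.com/upload',
    chunkSize: 10 * 1024 * 1024, // align with the 10MB S3 part size
    headers: { Authorization: `Bearer ${token}` },
  })

  // Attach metadata that the tus plugin forwards as Upload-Metadata
  uppy.on('file-added', (file) => {
    uppy.setFileMeta(file.id, { yachtId })
  })

  return uppy
}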
Video Processing Pipeline
// apps/api/src/workers/media.processor.ts
import { Worker } from 'bullmq'
import ffmpeg from 'fluent-ffmpeg'
import { Client } from 'minio'
import path from 'path'
import fs from 'fs/promises'
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()
const minioClient = new Client({
endPoint: process.env.MINIO_ENDPOINT!,
port: parseInt(process.env.MINIO_PORT || '9000'),
useSSL: process.env.MINIO_USE_SSL === 'true',
accessKey: process.env.MINIO_ACCESS_KEY!,
secretKey: process.env.MINIO_SECRET_KEY!,
})
export const mediaProcessor = new Worker('media.process', async (job) => {
const { uploadId, type, source } = job.data
switch (type) {
case 'video':
return processVideo(uploadId, source)
case 'image':
return processImage(uploadId, source)
default:
throw new Error(`Unknown processor type: ${type}`)
}
}, {
connection: {
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT || '6379'),
},
concurrency: 3,
})
async function processVideo(uploadId: string, source: string) {
const tempDir = `/tmp/video-${uploadId}`
await fs.mkdir(tempDir, { recursive: true })
try {
// Download source video
const sourceFile = path.join(tempDir, 'source.mp4')
await minioClient.fGetObject('uploads', source, sourceFile)
// Get video info
const metadata = await getVideoMetadata(sourceFile)
// Generate HLS streams
const hlsDir = path.join(tempDir, 'hls')
await fs.mkdir(hlsDir, { recursive: true })
// Create multiple quality variants
const variants = [
{ name: '1080p', width: 1920, height: 1080, bitrate: '5000k' },
{ name: '720p', width: 1280, height: 720, bitrate: '3000k' },
{ name: '480p', width: 854, height: 480, bitrate: '1500k' },
{ name: '360p', width: 640, height: 360, bitrate: '800k' },
]
const playlists = []
for (const variant of variants) {
if (metadata.width >= variant.width) {
const outputDir = path.join(hlsDir, variant.name)
await fs.mkdir(outputDir, { recursive: true })
await new Promise((resolve, reject) => {
ffmpeg(sourceFile)
.outputOptions([
`-vf scale=${variant.width}:${variant.height}`,
`-c:v libx264`,
`-b:v ${variant.bitrate}`,
`-c:a aac`,
`-b:a 128k`,
`-hls_time 6`,
`-hls_playlist_type vod`,
`-hls_segment_filename ${outputDir}/segment_%03d.ts`,
`-master_pl_name master.m3u8`,
])
.output(`${outputDir}/index.m3u8`)
.on('end', resolve)
.on('error', reject)
.run()
})
playlists.push({
resolution: variant.name,
path: `${variant.name}/index.m3u8`,
})
}
}
// Create master playlist
const masterPlaylist = createMasterPlaylist(playlists)
await fs.writeFile(path.join(hlsDir, 'master.m3u8'), masterPlaylist)
// Upload all HLS files to MinIO
const hlsFiles = await getFilesRecursive(hlsDir)
for (const file of hlsFiles) {
const relativePath = path.relative(hlsDir, file)
const objectName = `videos/${uploadId}/hls/${relativePath}`
await minioClient.fPutObject('media', objectName, file, {
'Content-Type': file.endsWith('.m3u8') ? 'application/x-mpegURL' : 'video/MP2T',
'Cache-Control': 'public, max-age=31536000',
})
}
// Generate thumbnail
const thumbnailPath = path.join(tempDir, 'thumbnail.jpg')
await new Promise((resolve, reject) => {
ffmpeg(sourceFile)
.screenshots({
timestamps: ['10%'],
filename: 'thumbnail.jpg',
folder: tempDir,
size: '640x360',
})
.on('end', resolve)
.on('error', reject)
})
await minioClient.fPutObject('media', `videos/${uploadId}/thumbnail.jpg`, thumbnailPath, {
'Content-Type': 'image/jpeg',
'Cache-Control': 'public, max-age=31536000',
})
// Update database
await prisma.media.update({
where: { uploadId },
data: {
processingStatus: 'COMPLETED',
processedAt: new Date(),
streamUrl: `https://cdn.harborsmith.com/videos/${uploadId}/hls/master.m3u8`,
thumbnailUrl: `https://cdn.harborsmith.com/videos/${uploadId}/thumbnail.jpg`,
duration: metadata.duration,
resolution: `${metadata.width}x${metadata.height}`,
frameRate: metadata.fps,
bitrate: metadata.bitrate,
}
})
} finally {
// Cleanup temp files
await fs.rm(tempDir, { recursive: true, force: true })
}
}
function createMasterPlaylist(playlists: any[]) {
  let content = '#EXTM3U\n#EXT-X-VERSION:3\n'
  const bandwidthMap: Record<string, number> = {
    '1080p': 5500000,
    '720p': 3500000,
    '480p': 1750000,
    '360p': 900000,
  }
  // The RESOLUTION attribute must be width x height, not the variant label
  const resolutionMap: Record<string, string> = {
    '1080p': '1920x1080',
    '720p': '1280x720',
    '480p': '854x480',
    '360p': '640x360',
  }
  for (const playlist of playlists) {
    const bandwidth = bandwidthMap[playlist.resolution]
    const resolution = resolutionMap[playlist.resolution]
    content += `#EXT-X-STREAM-INF:BANDWIDTH=${bandwidth},RESOLUTION=${resolution}\n`
    content += `${playlist.path}\n`
  }
  return content
}
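On playback, the generated `streamUrl` is consumed with hls.js per the frontend stack; a minimal player sketch (the composable name is illustrative):
// packages/ui/composables/useHlsPlayer.ts — minimal sketch of playing the generated HLS stream
import Hls from 'hls.js'

export function useHlsPlayer(video: HTMLVideoElement, streamUrl: string) {
  if (Hls.isSupported()) {
    const hls = new Hls({ maxBufferLength: 30 })
    hls.loadSource(streamUrl)
    hls.attachMedia(video)
    return () => hls.destroy() // cleanup on unmount
  }
  // Safari plays HLS natively
  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = streamUrl
  }
  return () => {}
}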
Authentication & Authorization
Keycloak Integration
// packages/auth/src/keycloak.ts
import { FastifyPluginAsync } from 'fastify'
import fastifyJwt from '@fastify/jwt'
import axios from 'axios'
interface KeycloakConfig {
realm: string
serverUrl: string
clientId: string
clientSecret: string
}
export const keycloakPlugin: FastifyPluginAsync<KeycloakConfig> = async (fastify, options) => {
// JWT verification
await fastify.register(fastifyJwt, {
secret: {
public: await getKeycloakPublicKey(options),
},
verify: {
algorithms: ['RS256'],
issuer: `${options.serverUrl}/realms/${options.realm}`,
audience: options.clientId,
}
})
// Add decorators
fastify.decorate('authenticate', async (request, reply) => {
try {
await request.jwtVerify()
// Enrich with user data
const user = await getUserFromToken(request.user)
request.user = user
} catch (err) {
reply.code(401).send({ error: 'Unauthorized' })
}
})
fastify.decorate('authorize', (roles: string[]) => {
return async (request, reply) => {
if (!request.user) {
return reply.code(401).send({ error: 'Unauthorized' })
}
const hasRole = roles.some(role =>
request.user.roles?.includes(role)
)
if (!hasRole) {
return reply.code(403).send({ error: 'Forbidden' })
}
}
})
}
async function getKeycloakPublicKey(config: KeycloakConfig) {
const response = await axios.get(
`${config.serverUrl}/realms/${config.realm}/protocol/openid-connect/certs`
)
const key = response.data.keys[0]
return `-----BEGIN PUBLIC KEY-----\n${key.x5c[0]}\n-----END PUBLIC KEY-----`
}
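On the browser side, the webapp's public runtime config (`keycloakUrl`, `keycloakRealm`, `keycloakClientId`) feeds a keycloak-js instance; a minimal Nuxt plugin sketch (the silent-check page is an assumption and must exist under `/public`):
// apps/webapp/plugins/keycloak.client.ts — minimal sketch of browser-side Keycloak bootstrapping
import Keycloak from 'keycloak-js'

export default defineNuxtPlugin(async () => {
  const config = useRuntimeConfig()
  const keycloak = new Keycloak({
    url: config.public.keycloakUrl,
    realm: config.public.keycloakRealm,
    clientId: config.public.keycloakClientId,
  })

  // Resolve an existing SSO session without forcing a redirect to the login page
  await keycloak.init({
    onLoad: 'check-sso',
    silentCheckSsoRedirectUri: `${window.location.origin}/silent-check-sso.html`,
  })

  return { provide: { keycloak } }
})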
RBAC Implementation
// apps/api/src/middleware/rbac.ts
import { TRPCError } from '@trpc/server'
export const permissions = {
// Yacht permissions
'yacht:create': ['ADMIN', 'OWNER'],
'yacht:read': ['ADMIN', 'OWNER', 'CAPTAIN', 'CREW', 'CUSTOMER'],
'yacht:update': ['ADMIN', 'OWNER'],
'yacht:delete': ['ADMIN', 'OWNER'],
// Booking permissions
'booking:create': ['ADMIN', 'CUSTOMER'],
'booking:read': ['ADMIN', 'OWNER', 'CAPTAIN', 'CUSTOMER'],
'booking:update': ['ADMIN', 'OWNER'],
'booking:cancel': ['ADMIN', 'OWNER', 'CUSTOMER'],
// User permissions
'user:read': ['ADMIN', 'SELF'],
'user:update': ['ADMIN', 'SELF'],
'user:delete': ['ADMIN'],
// Admin permissions
'admin:dashboard': ['ADMIN', 'SUPER_ADMIN'],
'admin:users': ['ADMIN', 'SUPER_ADMIN'],
'admin:reports': ['ADMIN', 'SUPER_ADMIN', 'OWNER'],
}
export function checkPermission(
user: User,
permission: keyof typeof permissions,
context?: { ownerId?: string; userId?: string }
) {
const allowedRoles = permissions[permission]
// Check if user has required role
if (allowedRoles.includes(user.role)) {
return true
}
// Check for SELF permission
if (allowedRoles.includes('SELF') && context?.userId === user.id) {
return true
}
// Check for ownership
if (user.role === 'OWNER' && context?.ownerId === user.id) {
return true
}
throw new TRPCError({
code: 'FORBIDDEN',
message: `Missing permission: ${permission}`,
})
}
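In practice the helper is invoked inside a procedure before touching protected rows; a short illustrative excerpt from a users router (the `profileUpdateInput` schema is an assumption):
// apps/api/src/trpc/routers/users.ts (excerpt) — illustrative use of checkPermission
import { z } from 'zod'
import { t, protectedProcedure } from '../trpc'
import { checkPermission } from '../../middleware/rbac'

// Assumed Zod schema for illustration
const profileUpdateInput = z.object({
  language: z.string().optional(),
  timezone: z.string().optional(),
  emergencyName: z.string().optional(),
  emergencyPhone: z.string().optional(),
})

export const usersRouter = t.router({
  updateProfile: protectedProcedure
    .input(z.object({ userId: z.string().uuid(), data: profileUpdateInput }))
    .mutation(async ({ input, ctx }) => {
      // 'user:update' allows ['ADMIN', 'SELF'], so only admins or the user themself pass
      checkPermission(ctx.user, 'user:update', { userId: input.userId })
      return ctx.prisma.profile.update({
        where: { userId: input.userId },
        data: input.data,
      })
    }),
})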
Third-Party Integrations
Stripe Integration
// apps/api/src/services/stripe.service.ts
import Stripe from 'stripe'
export class StripeService {
private stripe: Stripe
constructor() {
this.stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, {
apiVersion: '2023-10-16',
})
}
async createCustomer(user: User) {
const customer = await this.stripe.customers.create({
email: user.email,
name: `${user.firstName} ${user.lastName}`,
metadata: {
userId: user.id,
}
})
// Save to database
await prisma.user.update({
where: { id: user.id },
data: { stripeCustomerId: customer.id }
})
return customer
}
async createPaymentIntent(booking: Booking) {
// Calculate platform fee (10%)
const platformFee = Math.round(booking.totalPrice * 0.10 * 100)
const paymentIntent = await this.stripe.paymentIntents.create({
amount: Math.round(booking.totalPrice * 100), // Convert to cents
currency: 'usd',
customer: booking.user.stripeCustomerId,
description: `Booking ${booking.bookingNumber}`,
metadata: {
bookingId: booking.id,
yachtId: booking.yachtId,
userId: booking.userId,
},
application_fee_amount: platformFee,
transfer_data: {
destination: booking.yacht.owner.stripeAccountId,
},
capture_method: 'manual', // Hold funds until check-in
})
return paymentIntent
}
async capturePayment(paymentIntentId: string) {
return await this.stripe.paymentIntents.capture(paymentIntentId)
}
async refundPayment(paymentIntentId: string, amount?: number) {
return await this.stripe.refunds.create({
payment_intent: paymentIntentId,
amount: amount ? Math.round(amount * 100) : undefined,
reason: 'requested_by_customer',
})
}
async createConnectedAccount(owner: User) {
const account = await this.stripe.accounts.create({
type: 'express',
country: 'US',
email: owner.email,
capabilities: {
card_payments: { requested: true },
transfers: { requested: true },
},
business_type: 'individual',
metadata: {
ownerId: owner.id,
}
})
// Create account link for onboarding
const accountLink = await this.stripe.accountLinks.create({
account: account.id,
refresh_url: `${process.env.APP_URL}/portal/stripe/refresh`,
return_url: `${process.env.APP_URL}/portal/stripe/complete`,
type: 'account_onboarding',
})
return { account, accountLink }
}
async handleWebhook(signature: string, payload: string) {
const event = this.stripe.webhooks.constructEvent(
payload,
signature,
process.env.STRIPE_WEBHOOK_SECRET!
)
switch (event.type) {
case 'payment_intent.succeeded':
await this.handlePaymentSuccess(event.data.object)
break
case 'payment_intent.payment_failed':
await this.handlePaymentFailure(event.data.object)
break
case 'account.updated':
await this.handleAccountUpdate(event.data.object)
break
// ... more event handlers
}
}
}
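`handleWebhook` verifies the Stripe signature against the raw payload, which Fastify would otherwise parse into JSON; a minimal route sketch that preserves the raw body (route path and plugin wiring are assumptions):
// apps/api/src/routes/stripe-webhook.ts — minimal sketch of wiring handleWebhook()
import type { FastifyPluginAsync } from 'fastify'
import { StripeService } from '../services/stripe.service'

export const stripeWebhookRoutes: FastifyPluginAsync = async (fastify) => {
  const stripeService = new StripeService()

  // Keep the body as a raw string in this plugin scope so constructEvent() can verify it
  fastify.addContentTypeParser(
    'application/json',
    { parseAs: 'string' },
    (req, body, done) => done(null, body),
  )

  fastify.post('/webhooks/stripe', async (request, reply) => {
    const signature = request.headers['stripe-signature'] as string
    try {
      await stripeService.handleWebhook(signature, request.body as string)
      return reply.code(200).send({ received: true })
    } catch (err) {
      request.log.error(err, 'Stripe webhook verification failed')
      return reply.code(400).send({ error: 'invalid signature' })
    }
  })
}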
Cal.com Integration
// apps/api/src/services/cal.service.ts
import axios from 'axios'
export class CalService {
private apiKey: string
private baseUrl: string
constructor() {
this.apiKey = process.env.CAL_API_KEY!
this.baseUrl = 'https://api.cal.com/v1'
}
async createEventType(yacht: Yacht) {
const response = await axios.post(
`${this.baseUrl}/event-types`,
{
title: `${yacht.name} Charter`,
slug: yacht.slug,
description: yacht.description,
length: 60, // Default 1 hour slots
locations: [
{
type: 'inPerson',
address: yacht.location.marina,
}
],
metadata: {
yachtId: yacht.id,
},
price: yacht.hourlyRate,
currency: 'USD',
},
{
headers: {
'Authorization': `Bearer ${this.apiKey}`,
}
}
)
return response.data
}
async createBooking(booking: Booking) {
const response = await axios.post(
`${this.baseUrl}/bookings`,
{
eventTypeId: booking.yacht.calEventTypeId,
start: booking.startDate.toISOString(),
end: booking.endDate.toISOString(),
name: `${booking.user.firstName} ${booking.user.lastName}`,
email: booking.user.email,
phone: booking.user.phone,
guests: booking.guestDetails.map(g => g.email),
notes: booking.specialRequests,
metadata: {
bookingId: booking.id,
}
},
{
headers: {
'Authorization': `Bearer ${this.apiKey}`,
}
}
)
return response.data
}
async cancelBooking(calBookingId: string, reason: string) {
const response = await axios.delete(
`${this.baseUrl}/bookings/${calBookingId}`,
{
data: { cancellationReason: reason },
headers: {
'Authorization': `Bearer ${this.apiKey}`,
}
}
)
return response.data
}
async handleWebhook(payload: any) {
switch (payload.triggerEvent) {
case 'BOOKING_CREATED':
// Sync with our database
break
case 'BOOKING_CANCELLED':
// Update booking status
break
case 'BOOKING_RESCHEDULED':
// Update dates
break
}
}
}
Real-Time Communication
Socket.io Implementation
// apps/api/src/plugins/socket.ts
import { Server } from 'socket.io'
import { createAdapter } from '@socket.io/redis-adapter'
import { Redis } from 'ioredis'
import type { FastifyPluginAsync } from 'fastify'
export const socketPlugin: FastifyPluginAsync = async (fastify) => {
const pubClient = new Redis({
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT || '6379'),
})
const subClient = pubClient.duplicate()
const io = new Server(fastify.server, {
cors: {
origin: process.env.ALLOWED_ORIGINS?.split(','),
credentials: true,
},
adapter: createAdapter(pubClient, subClient),
})
// Authentication middleware
io.use(async (socket, next) => {
const token = socket.handshake.auth.token
try {
const decoded = await fastify.jwt.verify(token)
const user = await fastify.prisma.user.findUnique({
where: { keycloakId: decoded.sub }
})
if (!user) throw new Error('User not found')
socket.data.user = user
next()
} catch (err) {
next(new Error('Authentication failed'))
}
})
io.on('connection', (socket) => {
const user = socket.data.user
// Join user room
socket.join(`user:${user.id}`)
// Join role-based rooms
socket.join(`role:${user.role}`)
// Handle booking updates
socket.on('booking:subscribe', async (bookingId) => {
// Verify user has access to this booking
const booking = await fastify.prisma.booking.findFirst({
where: {
id: bookingId,
OR: [
{ userId: user.id },
{ yacht: { ownerId: user.id } },
]
}
})
if (booking) {
socket.join(`booking:${bookingId}`)
}
})
// Handle yacht tracking
socket.on('yacht:track', async (yachtId) => {
socket.join(`yacht:${yachtId}:tracking`)
// Send initial position
const position = await getYachtPosition(yachtId)
socket.emit('yacht:position', position)
})
// Handle chat messages
socket.on('message:send', async (data) => {
const message = await fastify.prisma.message.create({
data: {
bookingId: data.bookingId,
senderId: user.id,
content: data.content,
attachments: data.attachments || [],
},
include: {
sender: true,
}
})
// Send to all participants
io.to(`booking:${data.bookingId}`).emit('message:new', message)
// Send push notification to recipient
await sendPushNotification(data.recipientId, {
title: 'New message',
body: data.content,
data: { bookingId: data.bookingId }
})
})
// Handle typing indicators
socket.on('typing:start', ({ bookingId }) => {
socket.to(`booking:${bookingId}`).emit('typing:user', {
userId: user.id,
name: `${user.firstName} ${user.lastName}`,
})
})
socket.on('typing:stop', ({ bookingId }) => {
socket.to(`booking:${bookingId}`).emit('typing:user:stop', {
userId: user.id,
})
})
// Handle disconnection
socket.on('disconnect', () => {
// Clean up any typing indicators
io.emit('typing:user:stop', { userId: user.id })
})
})
// Expose io instance for use in routes
fastify.decorate('io', io)
}
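On the client, the webapp connects with socket.io-client using the same token handshake the `io.use()` middleware expects; a minimal composable sketch (the composable name is an assumption, the events mirror the server handlers above):
// apps/webapp/composables/useBookingChannel.ts — minimal sketch of the client-side socket usage
import { io, type Socket } from 'socket.io-client'

export function useBookingChannel(bookingId: string, token: string) {
  const config = useRuntimeConfig()
  const socket: Socket = io(config.public.wsUrl, {
    auth: { token }, // consumed by the io.use() auth middleware on the server
    transports: ['websocket'],
  })

  const messages = ref<any[]>([])

  socket.on('connect', () => socket.emit('booking:subscribe', bookingId))
  socket.on('message:new', (message) => messages.value.push(message))

  const send = (content: string) =>
    socket.emit('message:send', { bookingId, content })

  onUnmounted(() => socket.disconnect())

  return { socket, messages, send }
}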
Deployment Strategy
Overview
The Harborsmith platform is deployed as Docker containers orchestrated with Docker Compose and fronted by the nginx instance already running on the host server. In production, host nginx handles SSL termination, load balancing, and request routing (the role Traefik plays inside the local Docker Compose environment), keeping the services isolated and easy to manage while reusing the existing nginx infrastructure.
Host Nginx Configuration
Since nginx is already running on the host server, we'll configure it as the main reverse proxy for all Harborsmith services. This provides centralized SSL management, load balancing, and request routing.
# /etc/nginx/sites-available/harborsmith.conf
# Upstream definitions for load balancing
upstream harborsmith_website {
least_conn;
server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
keepalive 32;
}
upstream harborsmith_webapp {
least_conn;
server 127.0.0.1:3003 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3004 max_fails=3 fail_timeout=30s;
keepalive 32;
}
upstream harborsmith_portal {
least_conn;
server 127.0.0.1:3005 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3006 max_fails=3 fail_timeout=30s;
keepalive 32;
}
upstream harborsmith_api {
ip_hash; # Sticky sessions for WebSocket support
server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3010 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3020 max_fails=3 fail_timeout=30s;
keepalive 64;
}
# MinIO is external - no upstream needed as it has its own access point
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=upload_limit:10m rate=2r/s;
# Main website (SSG)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name harborsmith.com www.harborsmith.com;
# SSL configuration (adjust paths as needed)
ssl_certificate /etc/letsencrypt/live/harborsmith.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/harborsmith.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_stapling on;
ssl_stapling_verify on;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' 'unsafe-eval' https:; style-src 'self' 'unsafe-inline' https:;" always;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
location / {
proxy_pass http://harborsmith_website;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Cache static assets
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
proxy_pass http://harborsmith_website;
expires 30d;
add_header Cache-Control "public, immutable";
}
}
}
# Customer Web App (SPA)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name app.harborsmith.com;
ssl_certificate /etc/letsencrypt/live/harborsmith.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/harborsmith.com/privkey.pem;
# Inherit SSL settings from snippets/ssl-params.conf if available
# include snippets/ssl-params.conf;
location / {
proxy_pass http://harborsmith_webapp;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
# Admin Portal
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name portal.harborsmith.com;
ssl_certificate /etc/letsencrypt/live/harborsmith.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/harborsmith.com/privkey.pem;
location / {
proxy_pass http://harborsmith_portal;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
# API Server
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name api.harborsmith.com;
ssl_certificate /etc/letsencrypt/live/harborsmith.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/harborsmith.com/privkey.pem;
# API rate limiting
location / {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://harborsmith_api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts for long-running requests
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# WebSocket endpoint
location /ws {
proxy_pass http://harborsmith_api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket timeouts
proxy_connect_timeout 7d;
proxy_send_timeout 7d;
proxy_read_timeout 7d;
}
# Upload endpoint with larger limits
location /upload {
limit_req zone=upload_limit burst=5 nodelay;
client_max_body_size 10G;
client_body_timeout 3600s;
proxy_pass http://harborsmith_api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Extended timeouts for large uploads
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
proxy_request_buffering off;
}
}
# MinIO is already accessible via its own endpoint/domain
# If you need to proxy MinIO through this nginx (optional):
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name minio.harborsmith.com;
#
# ssl_certificate /etc/letsencrypt/live/harborsmith.com/fullchain.pem;
# ssl_certificate_key /etc/letsencrypt/live/harborsmith.com/privkey.pem;
#
# ignore_invalid_headers off;
# client_max_body_size 0;
# proxy_buffering off;
#
# location / {
# proxy_set_header Host $http_host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto $scheme;
#
# proxy_connect_timeout 300;
# proxy_http_version 1.1;
# proxy_set_header Connection "";
# chunked_transfer_encoding off;
#
# # Replace with actual MinIO endpoint
# proxy_pass http://${MINIO_EXTERNAL_HOST}:${MINIO_EXTERNAL_PORT};
# }
# }
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name harborsmith.com www.harborsmith.com app.harborsmith.com portal.harborsmith.com api.harborsmith.com minio.harborsmith.com;
return 301 https://$host$request_uri; # $host preserves the requested domain; $server_name would always return the first name in the list
}
Docker Configuration
Docker Compose Configuration
# docker-compose.yml
version: '3.9'
services:
# Database
postgres:
image: postgis/postgis:16-3.4
container_name: harborsmith_postgres
restart: unless-stopped
environment:
POSTGRES_DB: harborsmith
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
# Note: the postgres/postgis image does not apply tuning from environment variables;
# shared_buffers and related settings are applied via infrastructure/postgres/init.sql below
volumes:
- postgres_data:/var/lib/postgresql/data
- ./infrastructure/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
ports:
- "127.0.0.1:5432:5432" # Only expose to localhost
networks:
- harborsmith_internal
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d harborsmith"]
interval: 10s
timeout: 5s
retries: 5
# Cache
redis:
image: redis:7-alpine
container_name: harborsmith_redis
restart: unless-stopped
command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru
volumes:
- redis_data:/data
- ./infrastructure/redis/redis.conf:/usr/local/etc/redis/redis.conf:ro
ports:
- "127.0.0.1:6379:6379" # Only expose to localhost
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
# Database Connection Pooler
pgbouncer:
image: pgbouncer/pgbouncer:latest
container_name: harborsmith_pgbouncer
restart: unless-stopped
environment:
DATABASES_HOST: postgres
DATABASES_PORT: 5432
DATABASES_DBNAME: harborsmith
DATABASES_USER: ${DB_USER}
DATABASES_PASSWORD: ${DB_PASSWORD}
POOL_MODE: transaction
MAX_CLIENT_CONN: 1000
DEFAULT_POOL_SIZE: 25
MIN_POOL_SIZE: 5
RESERVE_POOL_SIZE: 5
ports:
- "127.0.0.1:6432:6432" # PgBouncer port
networks:
- harborsmith_internal
depends_on:
postgres:
condition: service_healthy
# MinIO is already running in a separate Docker Compose stack
# Connection details will be provided via environment variables:
# - MINIO_ENDPOINT: External MinIO endpoint (e.g., minio.local or IP address)
# - MINIO_PORT: External MinIO port (default: 9000)
# - MINIO_ACCESS_KEY: Access key for existing MinIO instance
# - MINIO_SECRET_KEY: Secret key for existing MinIO instance
# API Service (3 replicas)
api-1:
build:
context: .
dockerfile: apps/api/Dockerfile
container_name: harborsmith_api_1
restart: unless-stopped
environment:
NODE_ENV: production
PORT: 3000
DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/harborsmith
REDIS_URL: redis://redis:6379
MINIO_ENDPOINT: ${MINIO_ENDPOINT} # External MinIO endpoint
MINIO_PORT: ${MINIO_PORT:-9000}
MINIO_USE_SSL: ${MINIO_USE_SSL:-false}
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
KEYCLOAK_URL: ${KEYCLOAK_URL}
STRIPE_SECRET_KEY: ${STRIPE_SECRET_KEY}
CAL_API_KEY: ${CAL_API_KEY}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
# MinIO is external - no dependency needed
ports:
- "127.0.0.1:3000:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
api-2:
build:
context: .
dockerfile: apps/api/Dockerfile
container_name: harborsmith_api_2
restart: unless-stopped
environment:
NODE_ENV: production
PORT: 3000
DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/harborsmith
REDIS_URL: redis://redis:6379
MINIO_ENDPOINT: ${MINIO_ENDPOINT} # External MinIO endpoint
MINIO_PORT: ${MINIO_PORT:-9000}
MINIO_USE_SSL: ${MINIO_USE_SSL:-false}
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
KEYCLOAK_URL: ${KEYCLOAK_URL}
STRIPE_SECRET_KEY: ${STRIPE_SECRET_KEY}
CAL_API_KEY: ${CAL_API_KEY}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
# MinIO is external - no dependency needed
ports:
- "127.0.0.1:3010:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
api-3:
build:
context: .
dockerfile: apps/api/Dockerfile
container_name: harborsmith_api_3
restart: unless-stopped
environment:
NODE_ENV: production
PORT: 3000
DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/harborsmith
REDIS_URL: redis://redis:6379
MINIO_ENDPOINT: ${MINIO_ENDPOINT} # External MinIO endpoint
MINIO_PORT: ${MINIO_PORT:-9000}
MINIO_USE_SSL: ${MINIO_USE_SSL:-false}
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
KEYCLOAK_URL: ${KEYCLOAK_URL}
STRIPE_SECRET_KEY: ${STRIPE_SECRET_KEY}
CAL_API_KEY: ${CAL_API_KEY}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
# MinIO is external - no dependency needed
ports:
- "127.0.0.1:3020:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
# Frontend Services
website-1:
build:
context: .
dockerfile: apps/website/Dockerfile
args:
- APP=website
container_name: harborsmith_website_1
restart: unless-stopped
environment:
NODE_ENV: production
NUXT_PUBLIC_API_URL: https://api.harborsmith.com
NUXT_PUBLIC_WS_URL: wss://api.harborsmith.com/ws
ports:
- "127.0.0.1:3001:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
website-2:
build:
context: .
dockerfile: apps/website/Dockerfile
args:
- APP=website
container_name: harborsmith_website_2
restart: unless-stopped
environment:
NODE_ENV: production
NUXT_PUBLIC_API_URL: https://api.harborsmith.com
NUXT_PUBLIC_WS_URL: wss://api.harborsmith.com/ws
ports:
- "127.0.0.1:3002:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
webapp-1:
build:
context: .
dockerfile: apps/webapp/Dockerfile
args:
- APP=webapp
container_name: harborsmith_webapp_1
restart: unless-stopped
environment:
NODE_ENV: production
NUXT_PUBLIC_API_URL: https://api.harborsmith.com
NUXT_PUBLIC_WS_URL: wss://api.harborsmith.com/ws
NUXT_PUBLIC_KEYCLOAK_URL: ${KEYCLOAK_URL}
NUXT_PUBLIC_KEYCLOAK_REALM: ${KEYCLOAK_REALM}
NUXT_PUBLIC_KEYCLOAK_CLIENT_ID: ${KEYCLOAK_CLIENT_ID}
ports:
- "127.0.0.1:3003:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
webapp-2:
build:
context: .
dockerfile: apps/webapp/Dockerfile
args:
- APP=webapp
container_name: harborsmith_webapp_2
restart: unless-stopped
environment:
NODE_ENV: production
NUXT_PUBLIC_API_URL: https://api.harborsmith.com
NUXT_PUBLIC_WS_URL: wss://api.harborsmith.com/ws
NUXT_PUBLIC_KEYCLOAK_URL: ${KEYCLOAK_URL}
NUXT_PUBLIC_KEYCLOAK_REALM: ${KEYCLOAK_REALM}
NUXT_PUBLIC_KEYCLOAK_CLIENT_ID: ${KEYCLOAK_CLIENT_ID}
ports:
- "127.0.0.1:3004:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
portal-1:
build:
context: .
dockerfile: apps/portal/Dockerfile
args:
- APP=portal
container_name: harborsmith_portal_1
restart: unless-stopped
environment:
NODE_ENV: production
NUXT_PUBLIC_API_URL: https://api.harborsmith.com
NUXT_PUBLIC_WS_URL: wss://api.harborsmith.com/ws
NUXT_PUBLIC_KEYCLOAK_URL: ${KEYCLOAK_URL}
NUXT_PUBLIC_KEYCLOAK_REALM: ${KEYCLOAK_REALM}
NUXT_PUBLIC_KEYCLOAK_CLIENT_ID: ${KEYCLOAK_ADMIN_CLIENT_ID}
ports:
- "127.0.0.1:3005:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
portal-2:
build:
context: .
dockerfile: apps/portal/Dockerfile
args:
- APP=portal
container_name: harborsmith_portal_2
restart: unless-stopped
environment:
NODE_ENV: production
NUXT_PUBLIC_API_URL: https://api.harborsmith.com
NUXT_PUBLIC_WS_URL: wss://api.harborsmith.com/ws
NUXT_PUBLIC_KEYCLOAK_URL: ${KEYCLOAK_URL}
NUXT_PUBLIC_KEYCLOAK_REALM: ${KEYCLOAK_REALM}
NUXT_PUBLIC_KEYCLOAK_CLIENT_ID: ${KEYCLOAK_ADMIN_CLIENT_ID}
ports:
- "127.0.0.1:3006:3000"
networks:
- harborsmith_internal
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000"]
interval: 30s
timeout: 10s
retries: 3
volumes:
postgres_data:
driver: local
redis_data:
driver: local
# MinIO data is managed by the external MinIO instance
networks:
harborsmith_internal:
name: harborsmith_internal
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
Dockerfiles for Each Service
# apps/api/Dockerfile
FROM node:20-alpine AS base
# curl is required by the compose health checks
RUN apk add --no-cache libc6-compat ffmpeg curl
WORKDIR /app
# Install dependencies
FROM base AS deps
COPY package*.json ./
COPY turbo.json ./
COPY packages/database/package.json ./packages/database/
COPY packages/shared/package.json ./packages/shared/
RUN npm ci
# Build the application
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx turbo run build --filter=@harborsmith/api
# Production image
FROM base AS runner
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs
COPY --from=builder --chown=nodejs:nodejs /app/apps/api/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/packages/database/prisma ./prisma
USER nodejs
EXPOSE 3000
CMD ["node", "dist/server.js"]
# apps/website/Dockerfile (SSG)
FROM node:20-alpine AS base
# curl is required by the compose health checks
RUN apk add --no-cache libc6-compat curl
WORKDIR /app
FROM base AS deps
COPY package*.json ./
COPY turbo.json ./
RUN npm ci
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx turbo run build --filter=@harborsmith/website
FROM base AS runner
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodejs
COPY --from=builder --chown=nodejs:nodejs /app/apps/website/.output ./.output
COPY --from=builder --chown=nodejs:nodejs /app/apps/website/public ./public
USER nodejs
EXPOSE 3000
CMD ["node", ".output/server/index.mjs"]
# apps/webapp/Dockerfile (SPA)
FROM node:20-alpine AS base
RUN apk add --no-cache libc6-compat
WORKDIR /app
FROM base AS deps
COPY package*.json ./
COPY turbo.json ./
RUN npm ci
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx turbo run build --filter=@harborsmith/webapp
FROM nginx:alpine AS runner
# curl is required by the compose health check; spa.conf must listen on port 3000
# (NUXT_PUBLIC_* values are baked in at build time for this static bundle)
RUN apk add --no-cache curl
COPY --from=builder /app/apps/webapp/.output/public /usr/share/nginx/html
COPY ./infrastructure/nginx/spa.conf /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
Deployment Scripts
#!/bin/bash
# deploy.sh - Main deployment script
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo -e "${GREEN}Starting Harborsmith deployment...${NC}"
# Check for required environment variables
required_vars=(
"DB_USER"
"DB_PASSWORD"
"MINIO_ACCESS_KEY"
"MINIO_SECRET_KEY"
"KEYCLOAK_URL"
"STRIPE_SECRET_KEY"
"CAL_API_KEY"
)
for var in "${required_vars[@]}"; do
if [[ -z "${!var}" ]]; then
echo -e "${RED}Error: $var is not set${NC}"
exit 1
fi
done
# Pull latest code
echo -e "${YELLOW}Pulling latest code...${NC}"
git pull origin main
# Build containers
echo -e "${YELLOW}Building containers...${NC}"
docker-compose build --parallel
# Run database migrations
echo -e "${YELLOW}Running database migrations...${NC}"
docker-compose run --rm api-1 npx prisma migrate deploy
# Start services
echo -e "${YELLOW}Starting services...${NC}"
docker-compose up -d
# Wait for health checks
echo -e "${YELLOW}Waiting for services to be healthy...${NC}"
./scripts/wait-for-healthy.sh
# Reload nginx configuration
echo -e "${YELLOW}Reloading nginx configuration...${NC}"
sudo nginx -t && sudo nginx -s reload
echo -e "${GREEN}Deployment complete!${NC}"
#!/bin/bash
# scripts/wait-for-healthy.sh
services=(
"harborsmith_postgres"
"harborsmith_redis"
"harborsmith_pgbouncer"
"harborsmith_api_1"
"harborsmith_api_2"
"harborsmith_api_3"
"harborsmith_website_1"
"harborsmith_webapp_1"
"harborsmith_portal_1"
)
# Check external MinIO connectivity
echo "Checking MinIO connectivity..."
MINIO_SCHEME=$([ "${MINIO_USE_SSL}" = "true" ] && echo https || echo http)
curl -fsS ${MINIO_SCHEME}://${MINIO_ENDPOINT}:${MINIO_PORT}/minio/health/live || {
echo "WARNING: Cannot reach external MinIO instance at ${MINIO_ENDPOINT}:${MINIO_PORT}"
echo "Please ensure MinIO is running and accessible"
}
for service in "${services[@]}"; do
echo "Waiting for $service to be healthy..."
while [ "$(docker inspect -f '{{.State.Health.Status}}' $service 2>/dev/null)" != "healthy" ]; do
sleep 2
done
echo "$service is healthy!"
done
Container Management
# Start all services
docker-compose up -d
# Stop all services
docker-compose down
# View logs
docker-compose logs -f [service_name]
# Scale a service (note: with the explicitly named api-1/api-2/api-3 services and fixed
# container_name values above, --scale only applies if the API is defined as a single service)
docker-compose up -d --scale api=5
# Update a single service
docker-compose up -d --no-deps --build api-1
# Database backup
docker exec harborsmith_postgres pg_dump -U $DB_USER harborsmith > backup_$(date +%Y%m%d).sql
# Database restore
docker exec -i harborsmith_postgres psql -U $DB_USER harborsmith < backup.sql
Environment Configuration
# .env file (root directory)
# Database
DB_USER=harborsmith
DB_PASSWORD=your_secure_password_here
# External MinIO Configuration
MINIO_ENDPOINT=minio.example.com # or IP address of MinIO host
MINIO_PORT=9000
MINIO_USE_SSL=true # Set to true if MinIO uses HTTPS
MINIO_ACCESS_KEY=your_external_minio_access_key
MINIO_SECRET_KEY=your_external_minio_secret_key
# External Services
KEYCLOAK_URL=https://auth.harborsmith.com
KEYCLOAK_REALM=harborsmith
KEYCLOAK_CLIENT_ID=harborsmith-webapp
KEYCLOAK_ADMIN_CLIENT_ID=harborsmith-portal
# Stripe
STRIPE_SECRET_KEY=sk_live_your_stripe_key
# Cal.com
CAL_API_KEY=cal_live_your_cal_key
# Application
NODE_ENV=production
LOG_LEVEL=info
ALLOWED_ORIGINS=https://harborsmith.com,https://app.harborsmith.com,https://portal.harborsmith.com
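Misconfigured or missing variables tend to fail at the worst possible moment, so it is worth validating them once at API startup. A small sketch using Zod (the file path and the subset of variables covered are illustrative):
// apps/api/src/config/env.ts (illustrative sketch)
import { z } from 'zod'

const envSchema = z.object({
  NODE_ENV: z.enum(['development', 'production', 'test']).default('production'),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  MINIO_ENDPOINT: z.string().min(1),
  MINIO_PORT: z.coerce.number().default(9000),
  MINIO_USE_SSL: z.enum(['true', 'false']).default('false'),
  MINIO_ACCESS_KEY: z.string().min(1),
  MINIO_SECRET_KEY: z.string().min(1),
  KEYCLOAK_URL: z.string().url(),
  STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
  CAL_API_KEY: z.string().min(1),
  ALLOWED_ORIGINS: z.string().default(''),
})

// Fails fast with a readable error if anything required is missing or malformed
export const env = envSchema.parse(process.env)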
External MinIO Integration
Since MinIO is running in a separate Docker Compose stack, the Harborsmith services connect to it via network configuration:
MinIO Connection Configuration
// apps/api/src/config/storage.ts
import { Client } from 'minio'
export const minioClient = new Client({
endPoint: process.env.MINIO_ENDPOINT!, // External MinIO host
port: parseInt(process.env.MINIO_PORT || '9000'),
useSSL: process.env.MINIO_USE_SSL === 'true',
accessKey: process.env.MINIO_ACCESS_KEY!,
secretKey: process.env.MINIO_SECRET_KEY!,
})
// Initialize buckets on startup
export async function initializeStorage() {
const buckets = ['uploads', 'media', 'backups', 'documents']
for (const bucket of buckets) {
const exists = await minioClient.bucketExists(bucket)
if (!exists) {
await minioClient.makeBucket(bucket, 'us-west-2')
console.log(`Created bucket: ${bucket}`)
}
}
// Set bucket policies
const publicPolicy = {
Version: '2012-10-17',
Statement: [{
Effect: 'Allow',
Principal: { AWS: ['*'] },
Action: ['s3:GetObject'],
Resource: ['arn:aws:s3:::media/*'],
}],
}
await minioClient.setBucketPolicy('media', JSON.stringify(publicPolicy))
}
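For large media files, the API can hand out short-lived presigned URLs so browsers upload directly to MinIO instead of streaming through the API process. A minimal sketch using the same client (the bucket name, expiry, and file path are illustrative):
// apps/api/src/services/upload-url.service.ts (illustrative sketch)
import { randomUUID } from 'node:crypto'
import { minioClient } from '../config/storage'

export async function createUploadUrl(filename: string) {
  const objectName = `${randomUUID()}-${filename}`
  // URL is valid for 15 minutes; the client PUTs the file straight to MinIO
  const url = await minioClient.presignedPutObject('uploads', objectName, 15 * 60)
  return { url, objectName }
}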
Network Connectivity Requirements
- MinIO must be accessible from Docker containers
- If MinIO is on the same host: use host networking or bridge network
- If MinIO is on different host: ensure firewall allows port 9000
- For production: Use internal network IP or hostname
Docker Network Setup (if MinIO on same host)
# Create shared network for MinIO communication
docker network create minio-shared
# Add to MinIO compose file
networks:
default:
external:
name: minio-shared
# Add to Harborsmith compose file
networks:
harborsmith_internal:
driver: bridge
minio-shared:
external: true
# Update service definitions to use both networks
services:
api-1:
networks:
- harborsmith_internal
- minio-shared
Production Considerations
- SSL Certificates: Using Let's Encrypt with nginx on the host
- Container Networking: All containers expose ports only to localhost (127.0.0.1)
- Health Checks: Each service exposes a health check endpoint (see the sketch after this list)
- External MinIO: Connected via environment variables, ensure network connectivity
- Logging: Centralized logging through Docker's logging driver
- Monitoring: Integration with monitoring stack (Prometheus/Grafana)
- Backups: Automated PostgreSQL backups, MinIO handled by external instance
- Updates: Zero-downtime deployments using rolling updates
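The container health checks above curl the API's /health route; a minimal sketch of such a route that also verifies the two hard dependencies (the exact route shape is an assumption, not taken from the API code):
// apps/api/src/routes/health.ts (illustrative sketch)
import { FastifyPluginAsync } from 'fastify'

export const healthRoutes: FastifyPluginAsync = async (fastify) => {
  fastify.get('/health', async (request, reply) => {
    try {
      // Cheap round-trips to PostgreSQL and Redis
      await fastify.prisma.$queryRaw`SELECT 1`
      await fastify.redis.ping()
      return { status: 'ok', uptime: process.uptime() }
    } catch (err) {
      request.log.error(err, 'health check failed')
      return reply.code(503).send({ status: 'unhealthy' })
    }
  })
}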
Security Architecture
Security Best Practices
- Authentication & Authorization
  - OAuth2/OIDC via Keycloak
  - JWT tokens with short expiration
  - Refresh token rotation
  - MFA support
- Data Protection
  - Encryption at rest (encrypted storage volumes; PostgreSQL has no built-in TDE)
  - Encryption in transit (TLS 1.3)
  - Field-level encryption for PII
  - GDPR compliance
- API Security (see the validation sketch after this list)
  - Rate limiting per user/IP
  - Input validation with Zod
  - SQL injection prevention (Prisma)
  - XSS protection (CSP headers)
- Infrastructure Security
  - Network segmentation
  - Secrets management (Vault)
  - Regular security updates
  - Container scanning
- Monitoring & Compliance
  - Audit logging
  - Intrusion detection
  - SIEM integration
  - Compliance reporting
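The validation sketch referenced in the API Security item above: every request payload is parsed with a Zod schema before any handler logic runs, so malformed input never reaches Prisma. The schema fields are illustrative, not the platform's actual booking contract.
// apps/api/src/schemas/booking.schema.ts (illustrative sketch)
import { z } from 'zod'

export const createBookingSchema = z
  .object({
    yachtId: z.string().uuid(),
    startDate: z.coerce.date(),
    endDate: z.coerce.date(),
    guests: z.number().int().min(1).max(12),
    extras: z.array(z.enum(['catering', 'captain', 'photography'])).default([]),
  })
  .refine((b) => b.endDate > b.startDate, { message: 'endDate must be after startDate' })

export type CreateBookingInput = z.infer<typeof createBookingSchema>
// In a tRPC procedure or Fastify handler: createBookingSchema.parse(request.body)
// throws a ZodError that maps cleanly onto a 400 response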
Performance Optimization
Frontend Optimizations
- Bundle Size
  - Tree shaking
  - Code splitting
  - Dynamic imports
  - Component lazy loading
- Caching Strategy
  - Browser caching
  - CDN caching
  - API response caching
  - Static asset optimization
- Image Optimization
  - Next-gen formats (WebP, AVIF)
  - Responsive images
  - Lazy loading
  - CDN delivery
Backend Optimizations
- Database
  - Connection pooling
  - Query optimization
  - Indexes on foreign keys
  - Materialized views for reports
- Caching
  - Redis for session storage
  - Query result caching
  - Full-page caching for SSG
  - CDN for static assets
- Scaling
  - Horizontal scaling with load balancer
  - Database read replicas
  - Microservices for heavy operations
  - Queue-based processing
Monitoring & Observability
Logging Strategy
// Structured logging with Pino
import pino from 'pino'
const logger = pino({
level: process.env.LOG_LEVEL || 'info',
transport: {
targets: [
{
target: 'pino-pretty',
options: { colorize: true },
level: 'debug'
},
{
target: '@axiomhq/pino',
options: {
dataset: process.env.AXIOM_DATASET,
token: process.env.AXIOM_TOKEN,
},
level: 'info'
}
]
}
})
Metrics Collection
// Prometheus metrics
import { register, Counter, Histogram, Gauge } from 'prom-client'
const httpRequestDuration = new Histogram({
name: 'http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['method', 'route', 'status']
})
const activeBookings = new Gauge({
name: 'active_bookings_total',
help: 'Total number of active bookings',
})
const paymentProcessed = new Counter({
name: 'payments_processed_total',
help: 'Total number of processed payments',
labelNames: ['status', 'method']
})
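These metrics only matter if Prometheus can scrape them; a minimal sketch of exposing the prom-client registry from Fastify (the /metrics path is the usual convention, assumed here):
// apps/api/src/routes/metrics.ts (illustrative sketch)
import { FastifyPluginAsync } from 'fastify'
import { register } from 'prom-client'

export const metricsRoutes: FastifyPluginAsync = async (fastify) => {
  fastify.get('/metrics', async (_request, reply) => {
    // Prometheus expects the text exposition format with this content type
    reply.header('Content-Type', register.contentType)
    return register.metrics()
  })
}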
Critical Improvements & Best Practices
1. Database Optimization
Connection Pooling with PgBouncer
- Added PgBouncer service for connection pooling
- Transaction pooling mode for optimal performance
- Prevents connection exhaustion under high load (see the connection sketch below)
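The connection sketch referenced above: for Prisma to actually go through the pooler, the API's DATABASE_URL should target PgBouncer on port 6432 with Prisma's pgbouncer flag. The compose file earlier points the API at PostgreSQL directly, so treat this as the intended wiring rather than what is configured above.
// apps/api/src/db.ts (illustrative sketch)
import { PrismaClient } from '@prisma/client'

// e.g. DATABASE_URL=postgresql://user:pass@pgbouncer:6432/harborsmith?pgbouncer=true&connection_limit=10
// `pgbouncer=true` disables prepared statements, which transaction pooling cannot support
export const prisma = new PrismaClient({
  datasources: { db: { url: process.env.DATABASE_URL } },
})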
Database Configuration
-- infrastructure/postgres/init.sql
-- Performance optimizations
ALTER SYSTEM SET shared_buffers = '256MB';
ALTER SYSTEM SET effective_cache_size = '1GB';
ALTER SYSTEM SET maintenance_work_mem = '64MB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET wal_buffers = '16MB';
ALTER SYSTEM SET default_statistics_target = 100;
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_io_concurrency = 200;
ALTER SYSTEM SET work_mem = '4MB';
ALTER SYSTEM SET min_wal_size = '1GB';
ALTER SYSTEM SET max_wal_size = '4GB';
-- Indexes for foreign keys and common queries
-- (the tables below are created by the Prisma migrations, so apply these statements
-- as a follow-up migration rather than relying on first-boot init alone)
CREATE INDEX idx_bookings_yacht_dates ON bookings(yacht_id, start_date, end_date);
CREATE INDEX idx_bookings_user ON bookings(user_id);
CREATE INDEX idx_bookings_status ON bookings(status, payment_status);
CREATE INDEX idx_yachts_location ON yachts USING gin(location);
CREATE INDEX idx_yachts_features ON yachts USING gin(features);
CREATE INDEX idx_media_yacht ON media(yacht_id, type, is_primary);
CREATE INDEX idx_reviews_yacht ON reviews(yacht_id, status);
CREATE INDEX idx_activity_log_entity ON activity_log(entity_type, entity_id);
CREATE INDEX idx_activity_log_created ON activity_log(created_at DESC);
2. Enhanced Security Configuration
CORS Configuration
// apps/api/src/config/cors.ts
export const corsConfig = {
origin: (origin, callback) => {
const allowedOrigins = process.env.ALLOWED_ORIGINS?.split(',') || []
// Allow requests with no origin (mobile apps, Postman)
if (!origin) return callback(null, true)
// Check if origin is allowed
if (allowedOrigins.includes(origin)) {
callback(null, true)
} else {
callback(new Error('Not allowed by CORS'))
}
},
credentials: true,
methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
allowedHeaders: ['Content-Type', 'Authorization', 'X-Request-ID'],
exposedHeaders: ['X-Total-Count', 'X-Page-Count'],
maxAge: 86400, // 24 hours
}
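Wiring this into the API is a single plugin registration with @fastify/cors (a sketch; the plugin file path is illustrative):
// apps/api/src/plugins/cors.ts (illustrative sketch)
import cors from '@fastify/cors'
import { FastifyPluginAsync } from 'fastify'
import { corsConfig } from '../config/cors'

export const corsPlugin: FastifyPluginAsync = async (fastify) => {
  // Applies the origin allow-list, credential and header rules defined above
  await fastify.register(cors, corsConfig)
}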
Application-Level Rate Limiting
// apps/api/src/plugins/rate-limiter.ts
import { FastifyPluginAsync } from 'fastify'
import { RateLimiterRedis, RateLimiterRes } from 'rate-limiter-flexible'
export const rateLimiterPlugin: FastifyPluginAsync = async (fastify) => {
const rateLimiter = new RateLimiterRedis({
storeClient: fastify.redis,
keyPrefix: 'rl',
points: 100, // Number of requests
duration: 60, // Per 60 seconds
blockDuration: 60 * 10, // Block for 10 minutes
})
// Different limits for different endpoints
const limiters = {
api: new RateLimiterRedis({
storeClient: fastify.redis,
keyPrefix: 'rl:api',
points: 100,
duration: 60,
}),
auth: new RateLimiterRedis({
storeClient: fastify.redis,
keyPrefix: 'rl:auth',
points: 5,
duration: 60 * 15, // 5 attempts per 15 minutes
}),
upload: new RateLimiterRedis({
storeClient: fastify.redis,
keyPrefix: 'rl:upload',
points: 10,
duration: 60 * 60, // 10 uploads per hour
}),
  }
  // Pick the limiter bucket from the request path (the prefixes here are illustrative)
  const getLimiterForRoute = (url: string) => {
    if (url.startsWith('/auth')) return limiters.auth
    if (url.startsWith('/upload')) return limiters.upload
    return limiters.api
  }
  fastify.addHook('onRequest', async (request, reply) => {
try {
const limiter = getLimiterForRoute(request.url)
await limiter.consume(request.ip)
} catch (rejRes) {
reply.code(429).send({
error: 'Too Many Requests',
retryAfter: Math.round(rejRes.msBeforeNext / 1000) || 60,
})
}
})
}
3. WebSocket Scaling Solution
Redis Adapter for Socket.io
// apps/api/src/plugins/socket-scaled.ts
import { FastifyPluginAsync } from 'fastify'
import { Server } from 'socket.io'
import { createAdapter } from '@socket.io/redis-adapter'
import { createClient } from 'redis'
import { corsConfig } from '../config/cors'
export const scaledSocketPlugin: FastifyPluginAsync = async (fastify) => {
const pubClient = createClient({ url: process.env.REDIS_URL })
const subClient = pubClient.duplicate()
await Promise.all([
pubClient.connect(),
subClient.connect(),
])
const io = new Server(fastify.server, {
cors: corsConfig,
adapter: createAdapter(pubClient, subClient),
connectionStateRecovery: {
maxDisconnectionDuration: 2 * 60 * 1000, // 2 minutes
skipMiddlewares: true,
},
})
// Session affinity handled by nginx ip_hash
io.on('connection', (socket) => {
// Connection handling...
})
}
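With the adapter in place, an event emitted from any API replica reaches clients connected to the others, so REST handlers can notify rooms without caring which instance owns the socket. A small sketch (the event name and helper are illustrative):
// apps/api/src/services/booking-events.ts (illustrative sketch)
import type { Server } from 'socket.io'

// The Redis adapter fans this out across every replica, so it does not matter
// which API instance a given WebSocket client is actually connected to.
export function notifyBookingUpdated(io: Server, bookingId: string, status: string) {
  io.to(`booking:${bookingId}`).emit('booking:updated', { bookingId, status })
}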
4. Comprehensive Caching Strategy
// packages/shared/src/cache/cache-manager.ts
import type { Redis } from 'ioredis'
// (warmUp below assumes an importable Prisma client named `prisma`; its import is omitted here)
interface CacheStrategy {
  ttl: number
  pattern: 'cache-aside' | 'write-through'
}
export class CacheManager {
  private redis: Redis
  private strategies: Map<string, CacheStrategy>
constructor(redis: Redis) {
this.redis = redis
this.strategies = new Map([
['yacht-list', { ttl: 300, pattern: 'cache-aside' }],
['yacht-detail', { ttl: 3600, pattern: 'cache-aside' }],
['user-session', { ttl: 86400, pattern: 'write-through' }],
['booking-availability', { ttl: 60, pattern: 'cache-aside' }],
])
}
async get<T>(key: string, fetcher?: () => Promise<T>): Promise<T | null> {
const cached = await this.redis.get(key)
if (cached) {
return JSON.parse(cached)
}
if (!fetcher) return null
// Cache-aside pattern
const data = await fetcher()
const strategy = this.getStrategy(key)
await this.set(key, data, strategy.ttl)
return data
}
async set(key: string, value: any, ttl?: number): Promise<void> {
const strategy = this.getStrategy(key)
const finalTtl = ttl || strategy.ttl
await this.redis.setex(key, finalTtl, JSON.stringify(value))
// Emit cache invalidation event
await this.redis.publish('cache:invalidate', JSON.stringify({ key, ttl: finalTtl }))
}
async invalidate(pattern: string): Promise<void> {
const keys = await this.redis.keys(pattern)
if (keys.length > 0) {
await this.redis.del(...keys)
}
}
async warmUp(): Promise<void> {
// Pre-load frequently accessed data
const popularYachts = await this.redis.zrevrange('popular:yachts', 0, 10)
for (const yachtId of popularYachts) {
await this.get(`yacht:${yachtId}`, async () => {
return await prisma.yacht.findUnique({ where: { id: yachtId } })
})
}
}
  // Resolve the strategy for a key by its prefix; fall back to a short default TTL
  private getStrategy(key: string): CacheStrategy {
    const prefix = key.split(':')[0]
    return this.strategies.get(prefix) ?? { ttl: 60, pattern: 'cache-aside' }
  }
}
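A typical call site looks like this (a usage sketch; the @harborsmith/shared package name and service file are assumptions based on the monorepo layout):
// apps/api/src/services/yacht.service.ts (illustrative usage sketch)
import type { FastifyInstance } from 'fastify'
import { CacheManager } from '@harborsmith/shared'

export async function getYacht(fastify: FastifyInstance, yachtId: string) {
  const cache = new CacheManager(fastify.redis)
  // Cache-aside read: Redis first, then the database, storing the result for next time
  return cache.get(`yacht-detail:${yachtId}`, () =>
    fastify.prisma.yacht.findUnique({ where: { id: yachtId } }),
  )
}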
5. CDN & Media Optimization
CloudFlare CDN Configuration
// apps/api/src/services/cdn.service.ts
import Cloudflare from 'cloudflare' // v2-style SDK assumed by the calls below
export class CDNService {
  private cloudflare: Cloudflare
  constructor() {
this.cloudflare = new Cloudflare({
email: process.env.CLOUDFLARE_EMAIL,
key: process.env.CLOUDFLARE_API_KEY,
})
}
async purgeCache(urls: string[]): Promise<void> {
await this.cloudflare.zones.purgeCache(
process.env.CLOUDFLARE_ZONE_ID,
{ files: urls }
)
}
  async uploadToR2(file: Buffer, key: string): Promise<string> {
    // Upload to Cloudflare R2 for edge storage
    // (illustrative: in practice R2 is accessed via its S3-compatible API rather than this SDK)
    await this.cloudflare.r2.upload(file, key)
return `https://cdn.harborsmith.com/${key}`
}
}
Image Optimization Pipeline
// apps/api/src/workers/image.processor.ts
import sharp from 'sharp'
interface ImageVariant {
  name: string
  width: number
  height: number
  quality: number
  webp: Buffer
  avif: Buffer
}
interface ProcessedImages {
  variants: ImageVariant[]
  placeholder: Buffer
}
export async function processImage(input: Buffer): Promise<ProcessedImages> {
const variants = [
{ name: 'thumbnail', width: 150, height: 150, quality: 80 },
{ name: 'small', width: 400, height: 300, quality: 85 },
{ name: 'medium', width: 800, height: 600, quality: 85 },
{ name: 'large', width: 1920, height: 1080, quality: 90 },
]
const processed = await Promise.all(
variants.map(async (variant) => {
const webp = await sharp(input)
.resize(variant.width, variant.height, { fit: 'cover' })
.webp({ quality: variant.quality })
.toBuffer()
const avif = await sharp(input)
.resize(variant.width, variant.height, { fit: 'cover' })
.avif({ quality: variant.quality - 5 })
.toBuffer()
return { ...variant, webp, avif }
})
)
// Generate blur placeholder
const placeholder = await sharp(input)
.resize(20, 20, { fit: 'cover' })
.blur(10)
.toBuffer()
return { variants: processed, placeholder }
}
6. CI/CD Pipeline
GitHub Actions Workflow
# .github/workflows/main.yml
name: CI/CD Pipeline
on:
push:
branches: [main, staging]
pull_request:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:16
env:
POSTGRES_PASSWORD: test
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:7
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run linter
run: npm run lint
- name: Run type check
run: npm run type-check
- name: Run unit tests
run: npm run test:unit -- --coverage
- name: Run integration tests
run: npm run test:integration
- name: Run E2E tests
run: npm run test:e2e
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
- name: Upload coverage
uses: codecov/codecov-action@v3
security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run Snyk Security Scan
uses: snyk/actions/node@master
env:
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
scan-type: 'fs'
scan-ref: '.'
build:
needs: [test, security]
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_TOKEN }}
- name: Build and push Docker images
run: |
docker-compose build --parallel
docker-compose push
deploy:
needs: build
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- name: Deploy to production
uses: appleboy/ssh-action@master
with:
host: ${{ secrets.PRODUCTION_HOST }}
username: ${{ secrets.PRODUCTION_USER }}
key: ${{ secrets.PRODUCTION_SSH_KEY }}
script: |
cd /opt/harborsmith
git pull origin main
docker-compose pull
docker-compose up -d --remove-orphans
./scripts/wait-for-healthy.sh
sudo nginx -s reload
7. Testing Strategy
Test Configuration
// vitest.config.ts
import { defineConfig } from 'vitest/config'
export default defineConfig({
test: {
globals: true,
environment: 'node',
coverage: {
provider: 'v8',
reporter: ['text', 'json', 'html'],
exclude: [
'node_modules/',
'dist/',
'*.config.ts',
],
lines: 80,
functions: 80,
branches: 80,
statements: 80,
},
setupFiles: ['./tests/setup.ts'],
},
})
E2E Testing with Playwright
// tests/e2e/booking-flow.spec.ts
import { test, expect } from '@playwright/test'
test.describe('Booking Flow', () => {
test('should complete a yacht booking', async ({ page }) => {
// Login
await page.goto('/login')
await page.fill('[name=email]', 'test@example.com')
await page.fill('[name=password]', 'password')
await page.click('button[type=submit]')
// Search for yacht
await page.goto('/yachts')
await page.fill('[name=search]', 'Sunset Dream')
await page.click('button[aria-label="Search"]')
// Select yacht
await page.click('[data-yacht-id="123"]')
await expect(page).toHaveURL(/\/yachts\/123/)
// Select dates
await page.click('[data-testid="date-picker"]')
await page.click('[data-date="2024-06-15"]')
await page.click('[data-date="2024-06-17"]')
// Add extras
await page.check('[name="extras.catering"]')
await page.check('[name="extras.captain"]')
// Proceed to payment
await page.click('button:has-text("Book Now")')
// Fill payment details (Stripe Elements)
const stripeFrame = page.frameLocator('iframe[name*="stripe"]')
await stripeFrame.locator('[name="cardnumber"]').fill('4242424242424242')
await stripeFrame.locator('[name="exp-date"]').fill('12/25')
await stripeFrame.locator('[name="cvc"]').fill('123')
// Confirm booking
await page.click('button:has-text("Confirm Booking")')
// Verify success
await expect(page).toHaveURL(/\/bookings\/[a-z0-9-]+\/confirmation/)
await expect(page.locator('h1')).toContainText('Booking Confirmed')
})
})
8. Monitoring & Observability
OpenTelemetry Setup
// apps/api/src/telemetry.ts
import { NodeSDK } from '@opentelemetry/sdk-node'
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node'
import { PeriodicExportingMetricReader, ConsoleMetricExporter } from '@opentelemetry/sdk-metrics'
import { Resource } from '@opentelemetry/resources'
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions'
const sdk = new NodeSDK({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'harborsmith-api',
[SemanticResourceAttributes.SERVICE_VERSION]: process.env.npm_package_version,
}),
instrumentations: [
getNodeAutoInstrumentations({
'@opentelemetry/instrumentation-fs': {
enabled: false,
},
}),
],
metricReader: new PeriodicExportingMetricReader({
exporter: new ConsoleMetricExporter(),
exportIntervalMillis: 1000,
}),
})
sdk.start()
Custom Prometheus Metrics
// apps/api/src/metrics.ts
import { Counter, Histogram, Gauge, register } from 'prom-client'
export const metrics = {
httpRequestDuration: new Histogram({
name: 'http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['method', 'route', 'status'],
buckets: [0.1, 0.3, 0.5, 0.7, 1, 3, 5, 7, 10],
}),
bookingsCreated: new Counter({
name: 'bookings_created_total',
help: 'Total number of bookings created',
labelNames: ['yacht_id', 'status'],
}),
activeUsers: new Gauge({
name: 'active_users',
help: 'Number of active users',
}),
paymentAmount: new Histogram({
name: 'payment_amount_usd',
help: 'Payment amounts in USD',
labelNames: ['status', 'method'],
buckets: [100, 500, 1000, 5000, 10000],
}),
uploadedFiles: new Counter({
name: 'uploaded_files_total',
help: 'Total number of files uploaded',
labelNames: ['type', 'size_category'],
}),
}
// Collect default metrics
register.collectDefaultMetrics({ prefix: 'harborsmith_' })
9. Error Handling & Resilience
Circuit Breaker Implementation
// packages/shared/src/circuit-breaker.ts
export class CircuitBreaker {
private failures = 0
private lastFailureTime: number | null = null
private state: 'CLOSED' | 'OPEN' | 'HALF_OPEN' = 'CLOSED'
constructor(
private threshold: number = 5,
private timeout: number = 60000, // 1 minute
private resetTimeout: number = 30000, // 30 seconds
) {}
async execute<T>(fn: () => Promise<T>): Promise<T> {
if (this.state === 'OPEN') {
if (Date.now() - this.lastFailureTime! > this.timeout) {
this.state = 'HALF_OPEN'
} else {
throw new Error('Circuit breaker is OPEN')
}
}
try {
const result = await fn()
this.onSuccess()
return result
} catch (error) {
this.onFailure()
throw error
}
}
private onSuccess(): void {
this.failures = 0
this.state = 'CLOSED'
}
private onFailure(): void {
this.failures++
this.lastFailureTime = Date.now()
if (this.failures >= this.threshold) {
this.state = 'OPEN'
setTimeout(() => {
this.state = 'HALF_OPEN'
}, this.resetTimeout)
}
}
}
Retry Logic with Exponential Backoff
// packages/shared/src/retry.ts
interface RetryOptions {
  maxAttempts?: number
  initialDelay?: number
  maxDelay?: number
  factor?: number
  jitter?: boolean
}
export async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  options: RetryOptions = {},
): Promise<T> {
const {
maxAttempts = 3,
initialDelay = 1000,
maxDelay = 10000,
factor = 2,
jitter = true,
} = options
let lastError: Error
for (let attempt = 0; attempt < maxAttempts; attempt++) {
try {
return await fn()
} catch (error) {
lastError = error as Error
if (attempt === maxAttempts - 1) {
throw lastError
}
const delay = Math.min(
initialDelay * Math.pow(factor, attempt),
maxDelay,
)
const finalDelay = jitter
? delay + Math.random() * delay * 0.1
: delay
await new Promise(resolve => setTimeout(resolve, finalDelay))
}
}
throw lastError!
}
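A usage sketch combining the two for an outbound Cal.com call (the endpoint, thresholds, and the @harborsmith/shared import are illustrative assumptions):
// apps/api/src/services/calcom.client.ts (illustrative sketch)
import { CircuitBreaker, retryWithBackoff } from '@harborsmith/shared'

const calBreaker = new CircuitBreaker(5, 60_000, 30_000)

export async function fetchCalAvailability(eventTypeId: string) {
  // Retries transient failures with backoff; the breaker stops hammering Cal.com once it is clearly down
  return calBreaker.execute(() =>
    retryWithBackoff(
      async () => {
        const res = await fetch(
          `https://api.cal.com/v1/availability?eventTypeId=${eventTypeId}&apiKey=${process.env.CAL_API_KEY}`,
        )
        if (!res.ok) throw new Error(`Cal.com responded ${res.status}`)
        return res.json()
      },
      { maxAttempts: 3, initialDelay: 500 },
    ),
  )
}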
10. Backup & Disaster Recovery
Automated Backup Script
#!/bin/bash
# scripts/backup.sh
set -e
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
S3_BUCKET="s3://harborsmith-backups"
# Backup PostgreSQL
echo "Backing up PostgreSQL..."
docker exec harborsmith_postgres pg_dump -U $DB_USER harborsmith | gzip > ${BACKUP_DIR}/postgres_${TIMESTAMP}.sql.gz
# MinIO runs in its own external stack, so its data is backed up there
# (see "Production Considerations" above); only PostgreSQL is handled here.
# Upload to S3
echo "Uploading to S3..."
aws s3 cp ${BACKUP_DIR}/postgres_${TIMESTAMP}.sql.gz ${S3_BUCKET}/postgres/
# Clean up old local backups (keep last 7 days)
find ${BACKUP_DIR} -name "*.gz" -mtime +7 -delete
# Verify backup integrity
echo "Verifying backup..."
gunzip -t ${BACKUP_DIR}/postgres_${TIMESTAMP}.sql.gz
if [ $? -eq 0 ]; then
echo "Backup successful"
else
echo "Backup verification failed"
exit 1
fi
# Send notification
curl -X POST $SLACK_WEBHOOK_URL \
-H 'Content-Type: application/json' \
-d "{\"text\":\"Backup completed successfully at ${TIMESTAMP}\"}"
Disaster Recovery Plan
# infrastructure/disaster-recovery.yml
recovery_objectives:
rto: 4 hours # Recovery Time Objective
rpo: 1 hour # Recovery Point Objective
backup_schedule:
database:
full: "0 2 * * *" # Daily at 2 AM
incremental: "0 * * * *" # Hourly
media:
sync: "*/15 * * * *" # Every 15 minutes to S3
recovery_procedures:
1_assessment:
- Identify failure scope
- Notify stakeholders
- Activate incident response team
2_database_recovery:
- Restore from latest backup
- Apply WAL logs for point-in-time recovery
- Verify data integrity
3_application_recovery:
- Deploy to backup infrastructure
- Update DNS records
- Restore service connectivity
4_validation:
- Run health checks
- Verify critical functionality
- Monitor for anomalies
5_communication:
- Update status page
- Notify customers
- Document incident
infrastructure_redundancy:
primary_region: us-west-2
backup_region: us-east-1
cross_region_replication: enabled
multi_az_deployment: true
Implementation Roadmap
Phase 1: Foundation (Weeks 1-4)
- Setup monorepo structure
- Configure Docker environment
- Setup PostgreSQL with Prisma
- Implement authentication with Keycloak
- Create base UI components
- Setup CI/CD pipeline
Phase 2: Core Features (Weeks 5-8)
- Yacht management CRUD
- Booking system
- Payment integration
- User profiles
- Search and filtering
- Media upload system
Phase 3: Advanced Features (Weeks 9-12)
- Real-time updates
- Video streaming
- Calendar integration
- Review system
- Analytics dashboard
- Email notifications
Phase 4: Polish & Launch (Weeks 13-16)
- Performance optimization
- Security audit
- Load testing
- Documentation
- Beta testing
- Production deployment
Conclusion
This architecture provides a solid foundation for building a scalable, performant, and maintainable yacht charter platform. The technology choices balance modern best practices with practical considerations for rapid development and future growth.
Key advantages of this architecture:
- Type Safety: End-to-end type safety with TypeScript, tRPC, and Prisma
- Performance: Optimized for speed with Fastify, Redis caching, and CDN delivery
- Scalability: Horizontal scaling ready with Docker and load balancing
- Developer Experience: Monorepo structure with hot reload and type checking
- User Experience: Beautiful UI with smooth animations and responsive design
- Maintainability: Clean architecture with separation of concerns
The platform is designed to handle growth from hundreds to millions of users while maintaining excellent performance and reliability.