Performance Optimization

Learn how to optimize your Stack9 application for maximum performance. This comprehensive guide covers query optimization, database indexing, caching strategies, N+1 query prevention, pagination, API response optimization, frontend performance, monitoring, and scaling strategies.

What You'll Learn

  • ✅ Query optimization techniques
  • ✅ Database indexing strategies
  • ✅ Caching strategies (Redis, in-memory)
  • ✅ N+1 query prevention
  • ✅ Pagination and lazy loading
  • ✅ API response optimization
  • ✅ Frontend performance optimization
  • ✅ Monitoring and profiling tools
  • ✅ Scaling strategies for production

Time Required: 60-75 minutes

Prerequisites

Understanding Performance in Stack9

Performance optimization in Stack9 involves multiple layers:

Application Layers

Layer | Impact | Optimization Focus
Database | High | Queries, indexes, relationships
Backend | High | Caching, API design, business logic
Frontend | Medium | Bundle size, rendering, data loading
Network | Medium | Compression, CDN, payload size
Infrastructure | High | Scaling, load balancing, resources

Step 1: Query Optimization

Understanding Query Performance

Before optimizing, understand how queries perform:

File: src/action-types/profileQuery.ts

import { S9AutomationActionType } from '@april9/stack9-sdk';

export class ProfileQuery extends S9AutomationActionType {
key = 'profile_query';
name = 'Profile Query Performance';

async exec(params: any) {
const { db, logger } = this.context;
const { queryKey } = params;

// Start timing
const startTime = Date.now();

// Execute query
const results = await db('order')
.where({ status: 'Completed' })
.where('_is_deleted', false)
.orderBy('_created_at', 'desc')
.limit(100);

// End timing
const duration = Date.now() - startTime;

logger.info(`Query executed in ${duration}ms, returned ${results.length} rows`);

return {
success: true,
duration_ms: duration,
row_count: results.length,
};
}
}

Optimization Technique 1: Select Only Required Fields

Bad - Selecting All Fields:

{
"$select": ["*"]
}

Good - Select Specific Fields:

{
"$select": [
"id",
"order_number",
"customer_id",
"customer.name",
"total",
"status",
"_created_at"
]
}

Impact: Can reduce query time by 40-60% for tables with many columns.

Optimization Technique 2: Add WHERE Filters Early

Bad - Filter After Retrieval:

// Fetches all records then filters in memory
const allOrders = await db('order').select('*');
const filtered = allOrders.filter(o => o.status === 'Completed');

Good - Filter in Database:

// Database filters before returning results
const orders = await db('order')
.where({ status: 'Completed' })
.where('_is_deleted', false);

Optimization Technique 3: Use Appropriate Operators

// ❌ Avoid LIKE for exact matches
.where('email', 'like', 'john@example.com')

// ✅ Use equality for exact matches
.where({ email: 'john@example.com' })

// ❌ Avoid NOT IN with large lists
.whereNotIn('id', largeIdArray)

// ✅ Use LEFT JOIN with NULL check
.leftJoin('excluded_ids', 'order.id', 'excluded_ids.id')
.whereNull('excluded_ids.id')

Optimization Technique 4: Limit Result Sets

File: src/query-library/getorderlist.json

{
"key": "getorderlist",
"name": "getOrderList",
"connector": "stack9_api",
"queryTemplate": {
"method": "post",
"path": "/order/search",
"bodyParams": "{\n \"$select\": [\n \"id\",\n \"order_number\",\n \"customer.name\",\n \"total\",\n \"status\"\n ],\n \"$where\": {\n \"_is_deleted\": false,\n \"status\": {\"$in\": [\"Pending\", \"Processing\"]}\n },\n \"$withRelated\": [\"customer(notDeleted)\"],\n \"$orderBy\": [\n {\"column\": \"_created_at\", \"order\": \"desc\"}\n ],\n \"$limit\": 50,\n \"$offset\": {{offset}}\n}",
"queryParams": {}
},
"userParams": {
"offset": "0"
}
}

Step 2: Database Indexing Strategies

Understanding Indexes

Indexes dramatically improve query performance by creating fast lookup structures.

Index Rule 1: Index Foreign Keys

In Entity Definition:

{
"label": "Customer",
"key": "customer_id",
"type": "SingleDropDown",
"relationshipOptions": {
"ref": "customer"
},
"index": true
}

Stack9 automatically creates indexes on foreign key fields when "index": true is set.
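
For reference, the hand-written Knex migration below is roughly what that setting produces; you would only write this yourself for columns managed outside entity definitions. The file name and index name are illustrative, not generated by Stack9.

File: src/migrations/add_customer_id_index.ts (hypothetical)

import { Knex } from 'knex';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable('orders', (table) => {
    // Equivalent of "index": true on the customer_id field
    table.index(['customer_id'], 'idx_orders_customer_id');
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable('orders', (table) => {
    table.dropIndex(['customer_id'], 'idx_orders_customer_id');
  });
}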

Index Rule 2: Index Frequently Filtered Fields

{
"label": "Status",
"key": "status",
"type": "OptionSet",
"typeOptions": {
"values": ["Pending", "Processing", "Completed", "Cancelled"]
},
"index": true
}

Index Rule 3: Index Frequently Sorted Fields

{
"label": "Created At",
"key": "_created_at",
"type": "DateField",
"index": true
}

Index Rule 4: Composite Indexes for Complex Queries

File: src/migrations/add_composite_indexes.ts

import { Knex } from 'knex';

export async function up(knex: Knex): Promise<void> {
// Composite index for common query pattern
await knex.schema.alterTable('orders', (table) => {
table.index(['status', '_created_at'], 'idx_orders_status_created');
table.index(['customer_id', 'status'], 'idx_orders_customer_status');
});

// Index for search queries
await knex.schema.alterTable('products', (table) => {
table.index(['name', 'sku'], 'idx_products_search');
});
}

export async function down(knex: Knex): Promise<void> {
await knex.schema.alterTable('orders', (table) => {
table.dropIndex(['status', '_created_at'], 'idx_orders_status_created');
table.dropIndex(['customer_id', 'status'], 'idx_orders_customer_status');
});

await knex.schema.alterTable('products', (table) => {
table.dropIndex(['name', 'sku'], 'idx_products_search');
});
}

Index Best Practices

Do | Don't
✅ Index foreign keys | ❌ Index every field
✅ Index WHERE clause fields | ❌ Index low-cardinality fields (boolean)
✅ Index ORDER BY fields | ❌ Create duplicate indexes
✅ Monitor index usage | ❌ Forget to maintain indexes
✅ Use composite indexes | ❌ Over-index write-heavy tables
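
To act on the "monitor index usage" practice on PostgreSQL, the sketch below reads the standard pg_stat_user_indexes view from an action type; the action key and the zero-scan heuristic are illustrative assumptions, not a Stack9 built-in.

import { S9AutomationActionType } from '@april9/stack9-sdk';

export class ReportUnusedIndexes extends S9AutomationActionType {
  key = 'report_unused_indexes'; // hypothetical key
  name = 'Report Unused Indexes';

  async exec() {
    const { db, logger } = this.context;

    // idx_scan counts how often each index has been used since the last statistics reset
    const result = await db.raw(`
      SELECT schemaname, relname AS table_name, indexrelname AS index_name, idx_scan
      FROM pg_stat_user_indexes
      ORDER BY idx_scan ASC
      LIMIT 20
    `);

    const unused = result.rows.filter((row: any) => Number(row.idx_scan) === 0);
    logger.info(`Indexes with zero scans: ${unused.length}`);

    return { success: true, data: result.rows };
  }
}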

Step 3: Caching Strategies

Cache Layer 1: Redis Caching

Setup Redis Connector:

File: src/connectors/redis_cache.json

{
"name": "redis_cache",
"label": "Redis Cache",
"description": "Redis caching layer",
"type": "REDIS",
"config": {
"host": "%%REDIS_HOST%%",
"port": 6379,
"password": "%%REDIS_PASSWORD%%",
"db": 0
}
}

Environment Variables:

REDIS_HOST=localhost
REDIS_PASSWORD=your_redis_password

Implement Caching Helper

File: src/utils/cache.ts

import { Redis } from 'ioredis';

export class CacheService {
private redis: Redis;

constructor(redisClient: Redis) {
this.redis = redisClient;
}

/**
* Get cached value
*/
async get<T>(key: string): Promise<T | null> {
const cached = await this.redis.get(key);
if (!cached) return null;

try {
return JSON.parse(cached) as T;
} catch {
return cached as unknown as T;
}
}

/**
* Set cached value with TTL
*/
async set(key: string, value: any, ttlSeconds: number = 300): Promise<void> {
const serialized = typeof value === 'string' ? value : JSON.stringify(value);
await this.redis.setex(key, ttlSeconds, serialized);
}

/**
* Delete cached value
*/
async delete(key: string): Promise<void> {
await this.redis.del(key);
}

/**
* Delete keys by pattern
*/
async deletePattern(pattern: string): Promise<void> {
const keys = await this.redis.keys(pattern);
if (keys.length > 0) {
await this.redis.del(...keys);
}
}

/**
* Get or set cache
*/
async getOrSet<T>(
key: string,
fetchFn: () => Promise<T>,
ttlSeconds: number = 300
): Promise<T> {
// Try to get from cache
const cached = await this.get<T>(key);
if (cached !== null) {
return cached;
}

// Fetch fresh data
const data = await fetchFn();

// Cache the result
await this.set(key, data, ttlSeconds);

return data;
}
}

Using Cache in Action Types

File: src/action-types/getProductList.ts

import { S9AutomationActionType } from '@april9/stack9-sdk';
import { CacheService } from '../utils/cache';

export class GetProductList extends S9AutomationActionType {
key = 'get_product_list_cached';
name = 'Get Product List (Cached)';

async exec(params: any) {
const { db, connectors, logger } = this.context;
const { category_id, status } = params;

// Create cache instance
const cache = new CacheService(connectors['redis_cache']);

// Generate cache key
const cacheKey = `products:list:${category_id}:${status}`;

// Get or fetch data
const products = await cache.getOrSet(
cacheKey,
async () => {
logger.info('Cache miss - fetching from database');

const query = db('product')
.select([
'id',
'sku',
'name',
'price',
'stock_quantity',
'status',
])
.where({ _is_deleted: false });

if (category_id) {
query.where({ category_id });
}

if (status) {
query.where({ status });
}

return await query.orderBy('name', 'asc');
},
300 // Cache for 5 minutes
);

logger.info(`Returned ${products.length} products (cached)`);

return {
success: true,
data: products,
cached: true,
};
}
}

Cache Invalidation

File: src/entity-hooks/product.vat.ts

import { CustomFunction, CustomFunctionResponse, HookOperation } from '@april9/stack9-sdk';
import { CacheService } from '../utils/cache';

export class ValidateProduct extends CustomFunction {
entityName = 'product';

async exec(): Promise<CustomFunctionResponse> {
const { entity, operation, connectors, logger } = this.context;

// Your validation logic here...

// Invalidate cache on create/update/delete
if (operation !== HookOperation.read) {
const cache = new CacheService(connectors['redis_cache']);

// Invalidate all product list caches
await cache.deletePattern('products:list:*');

// Invalidate specific product cache
await cache.delete(`product:${entity.id}`);

logger.info('Product cache invalidated');
}

return {
valid: true,
entity,
};
}
}

Cache Layer 2: In-Memory Caching

For frequently accessed, rarely changing data:

File: src/utils/memoryCache.ts

interface CacheEntry<T> {
data: T;
expiresAt: number;
}

export class MemoryCache {
private cache = new Map<string, CacheEntry<any>>();

get<T>(key: string): T | null {
const entry = this.cache.get(key);

if (!entry) return null;

// Check if expired
if (Date.now() > entry.expiresAt) {
this.cache.delete(key);
return null;
}

return entry.data as T;
}

set<T>(key: string, value: T, ttlSeconds: number = 300): void {
this.cache.set(key, {
data: value,
expiresAt: Date.now() + ttlSeconds * 1000,
});
}

delete(key: string): void {
this.cache.delete(key);
}

clear(): void {
this.cache.clear();
}

// Cleanup expired entries
cleanup(): void {
const now = Date.now();
for (const [key, entry] of this.cache.entries()) {
if (now > entry.expiresAt) {
this.cache.delete(key);
}
}
}
}

// Global cache instance
export const memoryCache = new MemoryCache();

// Run cleanup every 5 minutes
setInterval(() => memoryCache.cleanup(), 5 * 60 * 1000);

Caching Strategy Best Practices

Data Type | Cache Strategy | TTL | Invalidation
Static content | In-memory | 1 hour | On deploy
User sessions | Redis | 1 day | On logout
Product lists | Redis | 5 minutes | On product update
User profiles | Redis | 15 minutes | On profile update
Reports/Analytics | Redis | 1 hour | On schedule
Configuration | In-memory | Indefinite | On config change
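
As a worked example of the "Configuration" row, a small helper can serve settings from the in-memory cache shown above and fall back to the database on a miss; the function name and app_settings table are hypothetical.

import { Knex } from 'knex';
import { memoryCache } from '../utils/memoryCache';

// Hypothetical helper: configuration rarely changes, so serve it from memory
export async function getAppSettings(db: Knex): Promise<any[]> {
  const cached = memoryCache.get<any[]>('config:app_settings');
  if (cached) return cached;

  const settings = await db('app_settings').where({ _is_deleted: false });

  // The table above suggests an effectively indefinite TTL; use a long one and call
  // memoryCache.delete('config:app_settings') whenever configuration changes
  memoryCache.set('config:app_settings', settings, 24 * 60 * 60);
  return settings;
}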

Step 4: N+1 Query Prevention

Understanding N+1 Queries

An N+1 query problem occurs when you fetch a list of N records and then issue one additional query per record to load related data, instead of loading the related data in a single batch.

Problem: N+1 Query

Bad:

// Fetches orders (1 query)
const orders = await db('order')
.where({ status: 'Completed' })
.limit(50);

// For each order, fetch customer (N queries = 50 queries!)
for (const order of orders) {
const customer = await db('customer')
.where({ id: order.customer_id })
.first();

order.customer = customer;
}

Total: 51 queries (1 + 50)

Solution 1: Use $withRelated

Good:

File: src/query-library/getorderlist.json

{
"$select": [
"id",
"order_number",
"customer_id",
"customer.name",
"customer.email",
"total",
"status"
],
"$where": {
"status": "Completed"
},
"$withRelated": ["customer(notDeleted)"],
"$limit": 50
}

Total: 2 queries (orders + customers in batch)

Solution 2: JOIN for Simple Cases

const orders = await db('order')
.select([
'order.id',
'order.order_number',
'order.total',
'order.status',
'customer.name as customer_name',
'customer.email as customer_email',
])
.leftJoin('customer', 'order.customer_id', 'customer.id')
.where({ 'order.status': 'Completed' })
.where({ 'order._is_deleted': false })
.limit(50);

Total: 1 query

Solution 3: Eager Loading in Code

import { CustomFunction, CustomFunctionResponse } from '@april9/stack9-sdk';

export class ProcessOrders extends CustomFunction {
async exec(): Promise<CustomFunctionResponse> {
const { db } = this.context;

// Fetch orders
const orders = await db('order')
.where({ status: 'Processing' })
.where('_is_deleted', false);

// Get all unique customer IDs
const customerIds = [...new Set(orders.map(o => o.customer_id))];

// Fetch all customers in one query
const customers = await db('customer')
.whereIn('id', customerIds)
.where('_is_deleted', false);

// Create lookup map
const customerMap = new Map(customers.map(c => [c.id, c]));

// Attach customers to orders
orders.forEach(order => {
order.customer = customerMap.get(order.customer_id);
});

return { valid: true, data: orders };
}
}

Detecting N+1 Queries

File: src/utils/queryLogger.ts

import { Knex } from 'knex';

export function setupQueryLogging(knex: Knex): void {
  let queryCount = 0;
  const queryTimes: number[] = [];
  const startTimes = new Map<string, number>();

  // Record the start time of each query, keyed by Knex's internal query id
  knex.on('query', (query) => {
    queryCount++;
    startTimes.set(query.__knexQueryUid, Date.now());
  });

  // Measure duration when the response arrives (registered once to avoid leaking listeners)
  knex.on('query-response', (_response, query) => {
    const startTime = startTimes.get(query.__knexQueryUid);
    startTimes.delete(query.__knexQueryUid);
    if (startTime === undefined) return;

    const duration = Date.now() - startTime;
    queryTimes.push(duration);

    if (duration > 1000) {
      console.warn(`Slow query (${duration}ms):`, query.sql);
    }
  });

  // Log stats every minute
  setInterval(() => {
    if (queryCount > 0) {
      const avgTime = queryTimes.reduce((a, b) => a + b, 0) / queryTimes.length;
      console.log(`Query stats: ${queryCount} queries, avg ${avgTime.toFixed(2)}ms`);

      // Warn about potential N+1
      if (queryCount > 100) {
        console.warn('High query count detected - possible N+1 query issue');
      }

      queryCount = 0;
      queryTimes.length = 0;
    }
  }, 60000);
}
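
To use the logger, attach it once to your Knex instance at startup. Where exactly that happens depends on your bootstrap code, so the snippet below is only a sketch:

import { knex } from 'knex';
import { setupQueryLogging } from './utils/queryLogger';

// Hypothetical bootstrap: create (or obtain) the Knex instance and attach the logger once
const db = knex({ client: 'postgresql', connection: process.env.DATABASE_URL });
setupQueryLogging(db);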

Step 5: Pagination and Lazy Loading

Implementing Cursor-Based Pagination

File: src/query-library/getorderspaginated.json

{
"key": "getorderspaginated",
"name": "getOrdersPaginated",
"connector": "stack9_api",
"queryTemplate": {
"method": "post",
"path": "/order/search",
"bodyParams": "{\n \"$select\": [\n \"id\",\n \"order_number\",\n \"customer.name\",\n \"total\",\n \"status\",\n \"_created_at\"\n ],\n \"$where\": {\n \"_is_deleted\": false,\n \"_created_at\": {\"$lt\": \"{{cursor}}\"}\n },\n \"$withRelated\": [\"customer(notDeleted)\"],\n \"$orderBy\": [\n {\"column\": \"_created_at\", \"order\": \"desc\"}\n ],\n \"$limit\": {{limit}}\n}",
"queryParams": {}
},
"userParams": {
"cursor": "9999-12-31T23:59:59.999Z",
"limit": "50"
}
}

Implementing Infinite Scroll

Frontend Implementation:

// In your frontend action
async function loadMoreOrders() {
const response = await api.query.execute('getorderspaginated', {
cursor: lastOrderTimestamp,
limit: 50,
});

if (response.data.length > 0) {
orders.push(...response.data);
lastOrderTimestamp = response.data[response.data.length - 1]._created_at;
hasMore = response.data.length === 50;
} else {
hasMore = false;
}
}

Lazy Loading Relationships

File: src/action-types/loadOrderDetails.ts

import { S9AutomationActionType } from '@april9/stack9-sdk';

export class LoadOrderDetails extends S9AutomationActionType {
key = 'load_order_details';
name = 'Load Order Details';

async exec(params: any) {
const { order_id, include_items, include_payments } = params;
const { db } = this.context;

// Always load basic order info
const order = await db('order')
.where({ id: order_id })
.first();

if (!order) {
throw new Error('Order not found');
}

// Lazy load order items only if requested
if (include_items) {
order.items = await db('order_item')
.where({ order_id })
.join('product', 'order_item.product_id', 'product.id')
.select([
'order_item.*',
'product.name as product_name',
'product.sku as product_sku',
]);
}

// Lazy load payment info only if requested
if (include_payments) {
order.payments = await db('payment')
.where({ order_id })
.orderBy('_created_at', 'desc');
}

return {
success: true,
data: order,
};
}
}

Step 6: API Response Optimization

Response Compression

Stack9 automatically compresses API responses using gzip. Ensure compression is enabled:

File: .env

# Enable response compression
ENABLE_COMPRESSION=true
COMPRESSION_LEVEL=6
COMPRESSION_THRESHOLD=1024
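
If you serve a custom Express endpoint outside Stack9's built-in compression, the widely used compression middleware achieves the same effect. This is a hedged sketch, not a Stack9 API; it reuses the level and threshold settings above only for illustration.

import express from 'express';
import compression from 'compression';

const app = express();

// Compress responses larger than ~1 KB at gzip level 6, mirroring the env settings above
app.use(
  compression({
    level: Number(process.env.COMPRESSION_LEVEL ?? 6),
    threshold: Number(process.env.COMPRESSION_THRESHOLD ?? 1024),
  })
);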

Payload Size Optimization

Bad - Large Payload:

{
"$select": ["*"],
"$where": {"status": "Active"}
}

Response: 2.5 MB

Good - Optimized Payload:

{
"$select": ["id", "name", "email", "status"],
"$where": {"status": "Active"},
"$limit": 50
}

Response: 145 KB (94% reduction!)

Field Transformation

File: src/action-types/getOptimizedProducts.ts

import { S9AutomationActionType } from '@april9/stack9-sdk';

export class GetOptimizedProducts extends S9AutomationActionType {
key = 'get_optimized_products';
name = 'Get Optimized Products';

async exec(params: any) {
const { db } = this.context;

const products = await db('product')
.select([
'id',
'name',
'sku',
'price',
'stock_quantity',
'category_id',
])
.where({ _is_deleted: false })
.limit(100);

// Transform response - remove unnecessary data
const optimized = products.map(p => ({
id: p.id,
name: p.name,
sku: p.sku,
price: Number(Number(p.price).toFixed(2)), // Round decimals (numeric columns may arrive as strings)
inStock: p.stock_quantity > 0, // Boolean instead of number
categoryId: p.category_id,
}));

return {
success: true,
data: optimized,
total: optimized.length,
};
}
}

Response Headers Optimization

// In custom endpoint
app.get('/api/products', async (req, res) => {
const products = await fetchProducts();

// Set cache headers
res.set({
'Cache-Control': 'public, max-age=300', // 5 minutes
'ETag': generateEtag(products),
'Last-Modified': new Date().toUTCString(),
});

res.json(products);
});
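
The generateEtag helper above is not defined in the snippet. A minimal sketch (an assumption, not a Stack9 helper) hashes the serialized payload with Node's crypto module and also honors If-None-Match so unchanged responses can return 304; it reuses app and fetchProducts from the snippet above.

import { createHash } from 'crypto';

// Hash of the JSON payload, used as an ETag
function generateEtag(payload: unknown): string {
  return createHash('sha1').update(JSON.stringify(payload)).digest('hex');
}

// Optional: short-circuit with 304 when the client already holds the current version
app.get('/api/products/etag-example', async (req, res) => {
  const products = await fetchProducts();
  const etag = generateEtag(products);

  if (req.headers['if-none-match'] === etag) {
    res.status(304).end();
    return;
  }

  res.set('ETag', etag);
  res.json(products);
});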

Step 7: Frontend Performance

Code Splitting

Split large bundles into smaller chunks:

import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

// Lazy load heavy components so their code is fetched only when the route is visited
const ReportDashboard = lazy(() => import('./components/ReportDashboard'));
const ProductCatalog = lazy(() => import('./components/ProductCatalog'));

function App() {
  return (
    <Suspense fallback={<Loading />}>
      <Routes>
        <Route path="/reports" element={<ReportDashboard />} />
        <Route path="/products" element={<ProductCatalog />} />
      </Routes>
    </Suspense>
  );
}

Memoization

Prevent unnecessary re-renders:

import { memo, useMemo, useCallback } from 'react';

const ProductList = memo(({ products, onSelect }) => {
// Memoize expensive calculations
const sortedProducts = useMemo(() => {
return [...products].sort((a, b) => a.name.localeCompare(b.name));
}, [products]);

// Memoize callbacks
const handleSelect = useCallback((id) => {
onSelect(id);
}, [onSelect]);

return (
<div>
{sortedProducts.map(p => (
<ProductCard key={p.id} product={p} onSelect={handleSelect} />
))}
</div>
);
});

Virtualization for Large Lists

import { FixedSizeList } from 'react-window';

function VirtualizedOrderList({ orders }) {
const Row = ({ index, style }) => (
<div style={style}>
<OrderCard order={orders[index]} />
</div>
);

return (
<FixedSizeList
height={600}
itemCount={orders.length}
itemSize={80}
width="100%"
>
{Row}
</FixedSizeList>
);
}

Debouncing Search Input

import { useState, useEffect, useMemo } from 'react';
import { debounce } from 'lodash';

function SearchProducts() {
const [searchTerm, setSearchTerm] = useState('');
const [results, setResults] = useState([]);

// Debounce search API call
const debouncedSearch = useMemo(
() =>
debounce(async (term) => {
const response = await api.query.execute('searchproducts', {
searchTerm: term,
});
setResults(response.data);
}, 300),
[]
);

useEffect(() => {
if (searchTerm.length > 2) {
debouncedSearch(searchTerm);
}
}, [searchTerm, debouncedSearch]);

return (
<input
type="text"
value={searchTerm}
onChange={(e) => setSearchTerm(e.target.value)}
placeholder="Search products..."
/>
);
}
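
One caveat with the debounced search above: a pending call can still fire after the component unmounts. A small addition inside the same SearchProducts component (a sketch, relying on lodash debounce exposing cancel()) cleans it up:

// Inside SearchProducts: cancel any pending debounced call on unmount
useEffect(() => {
  return () => {
    debouncedSearch.cancel();
  };
}, [debouncedSearch]);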

Step 8: Monitoring and Profiling

Performance Monitoring Action

File: src/action-types/monitorPerformance.ts

import { S9AutomationActionType } from '@april9/stack9-sdk';

export class MonitorPerformance extends S9AutomationActionType {
key = 'monitor_performance';
name = 'Monitor Performance Metrics';

async exec(params: any) {
const { db, logger } = this.context;

const metrics = {
database: await this.getDatabaseMetrics(),
cache: await this.getCacheMetrics(),
api: await this.getApiMetrics(),
};

// Log metrics
logger.info('Performance metrics:', metrics);

// Alert if thresholds exceeded
if (metrics.database.slowQueryCount > 10) {
logger.warn(`High slow query count: ${metrics.database.slowQueryCount}`);
}

if (metrics.cache.hitRate < 0.7) {
logger.warn(`Low cache hit rate: ${metrics.cache.hitRate}`);
}

return {
success: true,
metrics,
};
}

private async getDatabaseMetrics() {
const { db } = this.context;

// Get database stats
const tableStats = await db.raw(`
SELECT
schemaname,
tablename,
pg_total_relation_size(schemaname || '.' || tablename) as size_bytes,
n_live_tup as row_count
FROM pg_stat_user_tables
ORDER BY size_bytes DESC
LIMIT 10
`);

// Get slow query count (queries > 1s)
const slowQueries = await db.raw(`
SELECT COUNT(*) as count
FROM pg_stat_statements
WHERE mean_exec_time > 1000
`);

return {
largestTables: tableStats.rows,
slowQueryCount: slowQueries.rows[0]?.count || 0,
};
}

private async getCacheMetrics() {
const { connectors } = this.context;
const redis = connectors['redis_cache'];

const info = await redis.info('stats');
const stats = this.parseRedisInfo(info);

const hits = parseInt(stats.keyspace_hits || '0');
const misses = parseInt(stats.keyspace_misses || '0');
const total = hits + misses;
const hitRate = total > 0 ? hits / total : 0;

return {
hits,
misses,
hitRate: Number(hitRate.toFixed(3)),
totalKeys: await redis.dbsize(),
};
}

private async getApiMetrics() {
const { db } = this.context;

// Get API request logs from last hour
const hourAgo = new Date(Date.now() - 60 * 60 * 1000);

const requests = await db('api_logs')
.where('timestamp', '>', hourAgo)
.select([
db.raw('COUNT(*) as total_requests'),
db.raw('AVG(duration_ms) as avg_duration'),
db.raw('MAX(duration_ms) as max_duration'),
db.raw('COUNT(CASE WHEN status_code >= 500 THEN 1 END) as errors'),
])
.first();

return requests;
}

private parseRedisInfo(info: string): Record<string, string> {
const stats: Record<string, string> = {};
info.split('\r\n').forEach(line => {
const [key, value] = line.split(':');
if (key && value) {
stats[key] = value;
}
});
return stats;
}
}
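
Note that the pg_stat_statements view used in getDatabaseMetrics only exists once the extension is installed (and the module is preloaded via shared_preload_libraries, a server-level setting). A hedged migration to enable it:

import { Knex } from 'knex';

export async function up(knex: Knex): Promise<void> {
  // Requires pg_stat_statements to be listed in shared_preload_libraries on the server
  await knex.raw('CREATE EXTENSION IF NOT EXISTS pg_stat_statements');
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw('DROP EXTENSION IF EXISTS pg_stat_statements');
}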

Query Execution Plan Analysis

// Analyze query performance (PostgreSQL)
const query = db('order')
  .where({ status: 'Completed' })
  .where('_is_deleted', false)
  .orderBy('_created_at', 'desc')
  .limit(100);

// Knex has no built-in .explain() helper, so run EXPLAIN as a raw query
const explain = await db.raw(`EXPLAIN ANALYZE ${query.toQuery()}`);

console.log('Query execution plan:', explain.rows);

APM Integration

File: src/utils/apm.ts

import * as Sentry from '@sentry/node';

// Initialize Sentry APM
Sentry.init({
dsn: process.env.SENTRY_DSN,
tracesSampleRate: 0.1, // 10% of transactions
integrations: [
new Sentry.Integrations.Http({ tracing: true }),
new Sentry.Integrations.Postgres(),
],
});

// Trace function execution
export function trace<T>(
name: string,
fn: () => Promise<T>
): Promise<T> {
const transaction = Sentry.startTransaction({ name });

return fn()
.then(result => {
transaction.setStatus('ok');
return result;
})
.catch(error => {
transaction.setStatus('internal_error');
throw error;
})
.finally(() => {
transaction.finish();
});
}
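
A brief usage sketch for the trace helper, wrapping an arbitrary async operation; getOrders here is a stand-in for whatever call you want timed:

// Records the wrapped call as a Sentry transaction named 'orders.load'
const orders = await trace('orders.load', () => getOrders());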

Step 9: Scaling Strategies

Horizontal Scaling

Run multiple Stack9 instances behind a load balancer:

# docker-compose.yml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app1
      - app2

  app1:
    build: .
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://...
      - REDIS_URL=redis://...

  app2:
    build: .
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://...
      - REDIS_URL=redis://...

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: stack9
      POSTGRES_USER: stack9
      POSTGRES_PASSWORD: secure_password

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
Nginx Load Balancer Config:

upstream stack9_backend {
  least_conn;
  server app1:3000;
  server app2:3000;
}

server {
  listen 80;

  location / {
    proxy_pass http://stack9_backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

Database Connection Pooling

File: src/config/database.ts

import { Knex } from 'knex';

export const dbConfig: Knex.Config = {
client: 'postgresql',
connection: process.env.DATABASE_URL,
pool: {
min: 2,
max: 10,
idleTimeoutMillis: 30000,
acquireTimeoutMillis: 30000,
},
acquireConnectionTimeout: 10000,
};
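
To check whether those pool numbers are adequate under load, Knex's underlying tarn pool can be inspected at runtime. Accessing it via (db as any).client.pool is a common pattern rather than a documented Stack9 API, so treat this as a sketch:

import { Knex } from 'knex';

export function logPoolStats(db: Knex, intervalMs = 60000): NodeJS.Timeout {
  return setInterval(() => {
    const pool = (db as any).client.pool; // tarn.js pool instance used by Knex
    console.log(
      `DB pool - used: ${pool.numUsed()}, free: ${pool.numFree()}, pending acquires: ${pool.numPendingAcquires()}`
    );
  }, intervalMs);
}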

Read Replicas

import { knex } from 'knex';

// Configure read replicas: writes go to the primary, reads go to the replica
const masterDb = knex({
  client: 'postgresql',
  connection: process.env.DATABASE_MASTER_URL,
  pool: { min: 2, max: 10 },
});

const replicaDb = knex({
  client: 'postgresql',
  connection: process.env.DATABASE_REPLICA_URL,
  pool: { min: 5, max: 20 },
});

// Use master for writes
async function createOrder(data: any) {
  return masterDb('order').insert(data);
}

// Use replica for reads
async function getOrders() {
  return replicaDb('order')
    .where({ _is_deleted: false })
    .orderBy('_created_at', 'desc');
}

CDN for Static Assets

// Configure CDN URL
const CDN_URL = process.env.CDN_URL || '';

function getAssetUrl(path: string): string {
if (CDN_URL) {
return `${CDN_URL}${path}`;
}
return path;
}

// Use in responses
const product = {
id: 1,
name: 'Laptop',
imageUrl: getAssetUrl('/images/products/laptop.jpg'),
};

Performance Checklist

Database

  • ✅ Index all foreign keys
  • ✅ Index frequently filtered fields
  • ✅ Index frequently sorted fields
  • ✅ Use $select to limit fields
  • ✅ Add WHERE clauses to queries
  • ✅ Limit result sets with $limit
  • ✅ Use composite indexes for complex queries
  • ✅ Analyze slow queries
  • ✅ Use connection pooling

Caching

  • ✅ Implement Redis caching
  • ✅ Cache frequently accessed data
  • ✅ Set appropriate TTLs
  • ✅ Invalidate cache on updates
  • ✅ Use in-memory cache for static data
  • ✅ Monitor cache hit rate
  • ✅ Cache API responses

Queries

  • ✅ Prevent N+1 queries
  • ✅ Use $withRelated for relationships
  • ✅ Batch database operations
  • ✅ Use transactions for consistency
  • ✅ Avoid SELECT *
  • ✅ Use appropriate operators

APIs

  • ✅ Enable response compression
  • ✅ Optimize payload sizes
  • ✅ Implement pagination
  • ✅ Set cache headers
  • ✅ Use ETags
  • ✅ Rate limit endpoints

Frontend

  • ✅ Code split large bundles
  • ✅ Lazy load components
  • ✅ Memoize expensive calculations
  • ✅ Virtualize large lists
  • ✅ Debounce user input
  • ✅ Optimize images

Monitoring

  • ✅ Track query performance
  • ✅ Monitor cache metrics
  • ✅ Set up APM
  • ✅ Log slow operations
  • ✅ Alert on anomalies
  • ✅ Track resource usage

Troubleshooting

Slow Queries

Problem: Queries taking too long

Solutions:

  1. Check if indexes exist on filtered fields (see the sketch after this list)
  2. Analyze query execution plan
  3. Reduce number of selected fields
  4. Add LIMIT to queries
  5. Check for N+1 query patterns
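
For the first two items, PostgreSQL's statistics views can point at tables that are scanned sequentially far more often than via an index. A hedged diagnostic follows; the thresholds are illustrative, and it can run from any action type where db is available:

// Tables with many sequential scans relative to index scans may be missing an index
const suspects = await db.raw(`
  SELECT relname AS table_name, seq_scan, COALESCE(idx_scan, 0) AS idx_scan
  FROM pg_stat_user_tables
  WHERE seq_scan > COALESCE(idx_scan, 0) AND seq_scan > 1000
  ORDER BY seq_scan DESC
  LIMIT 10
`);

console.log('Tables that may be missing indexes:', suspects.rows);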

High Memory Usage

Problem: Application consuming too much memory

Solutions:

  1. Reduce connection pool size
  2. Clear in-memory caches periodically
  3. Limit query result sizes
  4. Check for memory leaks
  5. Implement pagination

Cache Misses

Problem: Low cache hit rate

Solutions:

  1. Increase cache TTL
  2. Warm up cache on startup (see the sketch after this list)
  3. Cache more frequently accessed data
  4. Check cache invalidation logic
  5. Monitor cache size limits
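
A warm-up sketch for item 2, reusing the CacheService from Step 3; the function name, warmed key, and TTL are illustrative:

import { Knex } from 'knex';
import { Redis } from 'ioredis';
import { CacheService } from '../utils/cache';

// Hypothetical startup hook: pre-populate the hottest list cache before traffic arrives
export async function warmUpCache(db: Knex, redisClient: Redis): Promise<void> {
  const cache = new CacheService(redisClient);

  const products = await db('product')
    .select(['id', 'sku', 'name', 'price', 'stock_quantity', 'status'])
    .where({ _is_deleted: false })
    .orderBy('name', 'asc');

  // Key shape matches get_product_list_cached when no category/status filter is passed
  await cache.set('products:list:undefined:undefined', products, 300);
}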

Database Connection Errors

Problem: Too many database connections

Solutions:

  1. Adjust connection pool settings
  2. Check for connection leaks
  3. Implement connection timeout
  4. Use read replicas
  5. Scale database vertically

Next Steps

You've mastered performance optimization! Continue learning:

  • Monitoring and Logging - Track application performance
  • Scaling Guide - Scale to production
  • Database Management - Advanced database techniques
  • Caching Strategies - Deep dive into caching

Summary

You now understand:

  • ✅ Query optimization techniques
  • ✅ Database indexing strategies
  • ✅ Caching with Redis and in-memory
  • ✅ N+1 query prevention
  • ✅ Pagination and lazy loading
  • ✅ API response optimization
  • ✅ Frontend performance optimization
  • ✅ Monitoring and profiling
  • ✅ Scaling strategies for production

Performance optimization is an ongoing process. Monitor, measure, and iterate!