Database Performance Optimization Tips for Web Applications

Database performance is a critical factor in the success of any web application. Slow database queries lead to poor user experience, high server costs, and scalability issues. In 2025, with data volumes growing and user expectations rising, database optimization matters more than ever. This article covers comprehensive database optimization strategies for modern web applications.

Understanding Database Performance Fundamentals

1. Core Performance Metrics
• Query Response Time: Time taken to execute individual queries
• Throughput: Number of queries processed per second
• Latency: Time delay between request and response
• Concurrency: Number of simultaneous connections
• Resource Utilization: CPU, memory, and I/O usage patterns
• Cache Hit Rate: Percentage of queries served from cache (several of these can be measured as shown in the sketch after this list)
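
Several of these metrics can be read straight from PostgreSQL's statistics views. The following is a minimal sketch, assuming PostgreSQL and a node-postgres `pool` like the one configured later in this article; note that the counters in `pg_stat_database` are cumulative, so throughput has to be derived by sampling twice and taking the difference.

```javascript
// Minimal sketch: core metrics from pg_stat_database (PostgreSQL).
// `pool` is assumed to be a node-postgres Pool (see the pooling section below).
async function getCoreMetrics(pool) {
  const { rows } = await pool.query(`
    SELECT numbackends                                       AS connections,
           xact_commit + xact_rollback                       AS transactions,
           ROUND(blks_hit::numeric
                 / NULLIF(blks_hit + blks_read, 0) * 100, 2) AS cache_hit_pct
    FROM pg_stat_database
    WHERE datname = current_database()
  `);
  return rows[0];
}

// Throughput (transactions/sec) needs two samples of the cumulative counter.
async function sampleThroughput(pool, intervalMs = 1000) {
  const before = await getCoreMetrics(pool);
  await new Promise((resolve) => setTimeout(resolve, intervalMs));
  const after = await getCoreMetrics(pool);
  return (Number(after.transactions) - Number(before.transactions)) / (intervalMs / 1000);
}
```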

2. Common Performance Bottlenecks
• Poorly Optimized Queries: Inefficient SQL queries with missing indexes
• Lock Contention: Multiple processes competing for the same resources (see the sketch after this list)
• Inadequate Indexing: Missing or suboptimal index usage
• Memory Pressure: Insufficient RAM allocated to database operations
• I/O Bottlenecks: Slow disk operations limiting performance
• Network Latency: Delays in database communication
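
Lock contention in particular is easy to spot in PostgreSQL: `pg_stat_activity` shows what each backend is waiting on, and `pg_blocking_pids()` reports who is blocking whom. Here is a hedged sketch, assuming PostgreSQL 9.6+ and the same node-postgres `pool`:

```javascript
// Minimal sketch: list sessions that are blocked on locks and who blocks them.
// Assumes PostgreSQL 9.6+ (pg_blocking_pids) and a node-postgres `pool`.
async function findBlockedSessions(pool) {
  const { rows } = await pool.query(`
    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           wait_event_type,
           state,
           query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0
  `);
  return rows; // each row: the stuck session, its blockers, and the query text
}
```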

3. Database Types and Use Cases
```javascript
// Database Type Selection Guide
const databaseGuide = {
  // Relational Databases
  relational: {
    mysql: {
      bestFor: ['Traditional applications', 'E-commerce', 'Financial systems'],
      strengths: ['ACID compliance', 'Mature ecosystem', 'Good tooling'],
      limitations: ['Vertical scaling', 'Complex joins overhead'],
      useCases: ['User management', 'Order processing', 'Inventory management']
    },
    postgresql: {
      bestFor: ['Complex applications', 'Data analytics', 'Geographic data'],
      strengths: ['Advanced features', 'JSON support', 'Extensions'],
      limitations: ['Steeper learning curve', 'Resource intensive'],
      useCases: ['Content management', 'Analytics platforms', 'GIS applications']
    },
    mssql: {
      bestFor: ['Enterprise applications', 'Windows environments'],
      strengths: ['Integration with Microsoft stack', 'Business intelligence'],
      limitations: ['Platform dependency', 'Licensing costs'],
      useCases: ['Corporate applications', 'ERP systems', 'Data warehousing']
    }
  },

  // NoSQL Databases
  nosql: {
    mongodb: {
      bestFor: ['Content management', 'Real-time analytics', 'IoT data'],
      strengths: ['Flexible schema', 'Horizontal scaling', 'Document-oriented'],
      limitations: ['Multi-document transaction overhead', 'Memory intensive'],
      useCases: ['CMS', 'User profiles', 'Product catalogs']
    },
    cassandra: {
      bestFor: ['High-velocity data', 'Time-series data', 'Distributed systems'],
      strengths: ['Linear scalability', 'High availability', 'Multi-datacenter'],
      limitations: ['Complex querying', 'Limited consistency'],
      useCases: ['IoT platforms', 'Recommendation engines', 'Messaging systems']
    },
    redis: {
      bestFor: ['Caching', 'Real-time data', 'Session management'],
      strengths: ['In-memory performance', 'Data structures', 'Pub/sub'],
      limitations: ['Memory constraints', 'Persistence overhead'],
      useCases: ['Session storage', 'Real-time leaderboards', 'Rate limiting']
    }
  },

  // Specialized Databases
  specialized: {
    elasticsearch: {
      bestFor: ['Full-text search', 'Log analytics', 'Monitoring'],
      strengths: ['Search capabilities', 'Analytics', 'Scaling'],
      limitations: ['Resource intensive', 'Complex setup'],
      useCases: ['Search engines', 'Log analysis', 'Monitoring systems']
    },
    influxdb: {
      bestFor: ['Time-series data', 'Monitoring', 'IoT metrics'],
      strengths: ['Time-series optimized', 'Compression', 'Retention policies'],
      limitations: ['Limited query flexibility', 'Single-node performance'],
      useCases: ['Application monitoring', 'IoT sensor data', 'Financial metrics']
    },
    neo4j: {
      bestFor: ['Graph data', 'Relationship mapping', 'Social networks'],
      strengths: ['Graph queries', 'Relationship modeling', 'Performance'],
      limitations: ['Learning curve', 'Scaling challenges'],
      useCases: ['Social networks', 'Recommendation engines', 'Fraud detection']
    }
  }
};
```
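
As a usage example, a small helper can be layered over this guide to shortlist engines for a given use case. This is purely illustrative; `recommendDatabases` is not a library function, just a walk over the object above.

```javascript
// Illustrative helper: shortlist engines whose documented use cases match a term.
function recommendDatabases(useCase) {
  const term = useCase.toLowerCase();
  const matches = [];
  for (const category of Object.values(databaseGuide)) {
    for (const [engine, profile] of Object.entries(category)) {
      if (profile.useCases.some((u) => u.toLowerCase().includes(term))) {
        matches.push({ engine, strengths: profile.strengths });
      }
    }
  }
  return matches;
}

console.log(recommendDatabases('monitoring'));
// => entries for elasticsearch and influxdb, per the guide above
```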

Query Optimization Strategies

1. Index Optimization Techniques
```sql
-- Index Analysis and Optimization
-- 1. Identify Missing Indexes
SELECT
  schemaname,
  tablename,
  attname,
  n_distinct,
  correlation
FROM pg_stats
WHERE schemaname = 'public'
ORDER BY n_distinct DESC;

-- 2. Analyze Query Performance
EXPLAIN (ANALYZE, BUFFERS)
SELECT u.id, u.name, p.title, p.created_at
FROM users u
JOIN posts p ON u.id = p.user_id
WHERE u.status = 'active'
  AND p.created_at >= '2025-01-01'
ORDER BY p.created_at DESC
LIMIT 50;

-- 3. Create Optimal Indexes
-- Composite index for frequently queried columns
CREATE INDEX idx_users_status_created_at
ON users(status, created_at);

-- Partial index for specific conditions
CREATE INDEX idx_active_users
ON users(id, name)
WHERE status = 'active';

-- Covering index for commonly accessed columns
CREATE INDEX idx_posts_user_title_created
ON posts(user_id, title, created_at, status);

-- 4. Index Usage Analysis
SELECT
  schemaname,
  tablename,
  indexname,
  idx_scan,
  idx_tup_read,
  idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- 5. Remove Unused Indexes
DROP INDEX IF EXISTS idx_unused_index;

-- 6. Optimize JOIN operations
-- Bad: query with missing indexes
SELECT o.*, c.name, p.product_name
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN order_items oi ON o.id = oi.order_id
JOIN products p ON oi.product_id = p.id
WHERE o.status = 'pending'
  AND o.created_at >= '2025-01-01';

-- Optimized with proper indexes
CREATE INDEX idx_orders_status_created ON orders(status, created_at);
CREATE INDEX idx_order_items_order_id ON order_items(order_id);
CREATE INDEX idx_order_items_product_id ON order_items(product_id);

-- Query with JOIN optimization
SELECT o.id, o.total, c.name, COUNT(oi.id) AS item_count
FROM orders o
JOIN customers c ON o.customer_id = c.id
LEFT JOIN order_items oi ON o.id = oi.order_id
WHERE o.status = 'pending'
  AND o.created_at >= '2025-01-01'
GROUP BY o.id, o.total, c.name
ORDER BY o.created_at DESC;
```
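
The `EXPLAIN (ANALYZE, BUFFERS)` step above can also be automated. The sketch below is a non-authoritative example, assuming node-postgres and PostgreSQL's `FORMAT JSON` plan output; it walks a query plan and flags sequential scans, which usually point at the tables that need one of the indexes shown above.

```javascript
// Minimal sketch: flag sequential scans in a query plan (index candidates).
// Assumes a node-postgres `pool`; pass a complete SQL string (no bind params).
async function findSeqScans(pool, sql) {
  const { rows } = await pool.query(`EXPLAIN (FORMAT JSON) ${sql}`);
  const plan = rows[0]['QUERY PLAN'][0].Plan; // node-postgres parses the json column
  const seqScans = [];

  const walk = (node) => {
    if (node['Node Type'] === 'Seq Scan') {
      seqScans.push(node['Relation Name']);
    }
    (node.Plans || []).forEach(walk);
  };

  walk(plan);
  return seqScans; // e.g. ['orders'] -> consider indexing the filtered columns
}
```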

2. Query Writing Best Practices
```sql
-- Query Optimization Patterns

-- 1. Avoid SELECT * (Anti-pattern)
-- Bad:
SELECT * FROM users WHERE email = 'user@example.com';

-- Good:
SELECT id, name, email, last_login
FROM users
WHERE email = 'user@example.com';

-- 2. Use EXISTS instead of IN for subqueries
-- Bad:
SELECT name
FROM departments
WHERE id IN (SELECT department_id FROM employees WHERE salary > 50000);

-- Good:
SELECT d.name
FROM departments d
WHERE EXISTS (
  SELECT 1
  FROM employees e
  WHERE e.department_id = d.id
    AND e.salary > 50000
);

-- 3. Optimize JOIN operations
-- Bad: multiple JOINs without proper indexing
SELECT u.name, p.title, c.name AS category
FROM users u
JOIN posts p ON u.id = p.user_id
JOIN categories c ON p.category_id = c.id
WHERE p.status = 'published';

-- Good: with proper indexes and query structure
CREATE INDEX idx_posts_user_status ON posts(user_id, status);
CREATE INDEX idx_posts_category ON posts(category_id);

SELECT u.name, p.title, c.name AS category
FROM users u
JOIN posts p ON u.id = p.user_id
  AND p.status = 'published'
JOIN categories c ON p.category_id = c.id;

-- 4. Use LIMIT for pagination
-- Bad: fetching all records
SELECT * FROM products ORDER BY created_at DESC;

-- Good: offset-based pagination
SELECT * FROM products
ORDER BY created_at DESC
LIMIT 20 OFFSET 0; -- Page 1

SELECT * FROM products
ORDER BY created_at DESC
LIMIT 20 OFFSET 20; -- Page 2

-- Better: cursor-based pagination for large datasets
-- (the cursor is the created_at of the last row on the previous page)
SELECT * FROM products
WHERE created_at < '2025-06-01 00:00:00'
ORDER BY created_at DESC
LIMIT 20;

-- 5. Use CTEs to structure complex queries
WITH active_users AS (
  SELECT id, name, email
  FROM users
  WHERE status = 'active'
    AND last_login >= NOW() - INTERVAL '30 days'
),
user_orders AS (
  SELECT
    user_id,
    COUNT(*) AS order_count,
    SUM(total) AS total_spent
  FROM orders
  WHERE created_at >= NOW() - INTERVAL '30 days'
  GROUP BY user_id
)
SELECT
  au.name,
  au.email,
  COALESCE(uo.order_count, 0) AS recent_orders,
  COALESCE(uo.total_spent, 0) AS recent_spent
FROM active_users au
LEFT JOIN user_orders uo ON au.id = uo.user_id
ORDER BY recent_spent DESC
LIMIT 100;

-- Materialized Views for Complex Queries
CREATE MATERIALIZED VIEW user_analytics AS
SELECT
  u.id,
  u.name,
  u.email,
  COUNT(DISTINCT o.id) AS total_orders,
  COALESCE(SUM(o.total), 0) AS total_spent,
  MAX(o.created_at) AS last_order_date,
  COUNT(DISTINCT DATE(o.created_at)) AS active_days
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.name, u.email;

-- Refresh the materialized view periodically
REFRESH MATERIALIZED VIEW user_analytics;

-- Query the materialized view (much faster)
SELECT * FROM user_analytics
WHERE total_orders > 10
ORDER BY total_spent DESC;

-- Partitioning for Large Tables
CREATE TABLE sales (
  id BIGSERIAL,
  product_id INTEGER NOT NULL,
  customer_id INTEGER NOT NULL,
  sale_date DATE NOT NULL,
  amount DECIMAL(10,2) NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
) PARTITION BY RANGE (sale_date);

-- Create partitions
CREATE TABLE sales_2025_q1 PARTITION OF sales
FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');

CREATE TABLE sales_2025_q2 PARTITION OF sales
FOR VALUES FROM ('2025-04-01') TO ('2025-07-01');

CREATE TABLE sales_2025_q3 PARTITION OF sales
FOR VALUES FROM ('2025-07-01') TO ('2025-10-01');

CREATE TABLE sales_2025_q4 PARTITION OF sales
FOR VALUES FROM ('2025-10-01') TO ('2026-01-01');

-- Queries automatically prune to the relevant partitions
SELECT product_id, SUM(amount) AS total_sales
FROM sales
WHERE sale_date BETWEEN '2025-01-01' AND '2025-03-31'
GROUP BY product_id;
```
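
The `REFRESH MATERIALIZED VIEW` step above needs a scheduler. Here is a hedged sketch from the application side, assuming node-postgres and a 15-minute cadence (pick whatever staleness your product tolerates). `CONCURRENTLY` keeps the view readable during the refresh, but it requires a unique index on the view.

```javascript
// Minimal sketch: periodic refresh of the user_analytics materialized view.
// CONCURRENTLY requires a unique index, created once, e.g.:
//   CREATE UNIQUE INDEX idx_user_analytics_id ON user_analytics (id);
const REFRESH_INTERVAL_MS = 15 * 60 * 1000; // assumed cadence

async function refreshUserAnalytics(pool) {
  try {
    await pool.query('REFRESH MATERIALIZED VIEW CONCURRENTLY user_analytics');
  } catch (error) {
    // The view stays usable with stale data if a refresh fails.
    console.error('Materialized view refresh failed:', error);
  }
}

setInterval(() => refreshUserAnalytics(pool), REFRESH_INTERVAL_MS);
```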

Database Caching Strategies

1. Application-Level Caching
```javascript
// Redis Caching Implementation
// (assumes a promise-based Redis client; with older callback-based clients, promisify first)
const Redis = require('redis');

const redis = Redis.createClient({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASSWORD,
  retry_strategy: (options) => {
    if (options.error && options.error.code === 'ECONNREFUSED') {
      return new Error('Redis server connection refused');
    }
    if (options.total_retry_time > 1000 * 60 * 60) {
      return new Error('Retry time exhausted');
    }
    if (options.attempt > 10) {
      return undefined;
    }
    return Math.min(options.attempt * 100, 3000);
  }
});

class DatabaseCache {
  constructor(ttl = 3600) { // Default TTL 1 hour
    this.ttl = ttl;
  }

  // Cache key generator
  generateKey(operation, params) {
    const paramString = JSON.stringify(params);
    return `db:${operation}:${Buffer.from(paramString).toString('base64')}`;
  }

  // Get cached data
  async get(key) {
    try {
      const cached = await redis.get(key);
      return cached ? JSON.parse(cached) : null;
    } catch (error) {
      console.error('Cache get error:', error);
      return null;
    }
  }

  // Set cache data
  async set(key, data, customTtl = null) {
    try {
      const ttl = customTtl || this.ttl;
      await redis.setex(key, ttl, JSON.stringify(data));
      return true;
    } catch (error) {
      console.error('Cache set error:', error);
      return false;
    }
  }

  // Delete cache
  async delete(key) {
    try {
      await redis.del(key);
      return true;
    } catch (error) {
      console.error('Cache delete error:', error);
      return false;
    }
  }

  // Clear cache by pattern
  async clearPattern(pattern) {
    try {
      const keys = await redis.keys(pattern);
      if (keys.length > 0) {
        await redis.del(keys);
      }
      return keys.length;
    } catch (error) {
      console.error('Cache clear pattern error:', error);
      return 0;
    }
  }

  // Cache wrapper for database operations
  async cacheQuery(operation, params, queryFunction, ttl = null) {
    const cacheKey = this.generateKey(operation, params);

    // Try cache first
    let result = await this.get(cacheKey);
    if (result) {
      console.log(`Cache hit for ${operation}`);
      return result;
    }

    // Cache miss, execute query
    console.log(`Cache miss for ${operation}, executing query`);
    result = await queryFunction();

    // Cache the result
    await this.set(cacheKey, result, ttl);

    return result;
  }

  // Batch cache operations
  async mget(keys) {
    try {
      const values = await redis.mget(keys);
      return values.map(value => value ? JSON.parse(value) : null);
    } catch (error) {
      console.error('Cache mget error:', error);
      return new Array(keys.length).fill(null);
    }
  }

  // Set multiple cache values
  async mset(keyValuePairs, ttl = null) {
    try {
      const pipeline = redis.pipeline();

      for (const [key, value] of keyValuePairs) {
        pipeline.setex(key, ttl || this.ttl, JSON.stringify(value));
      }

      await pipeline.exec();
      return true;
    } catch (error) {
      console.error('Cache mset error:', error);
      return false;
    }
  }
}

// Usage Example
const dbCache = new DatabaseCache(1800); // 30 minutes cache

// Database query functions (`pool` is the pg Pool from the pooling section below)
class UserService {
  // Get user by ID with caching
  async getUserById(userId) {
    return await dbCache.cacheQuery(
      'user_by_id',
      { userId },
      async () => {
        const query = 'SELECT id, name, email, created_at FROM users WHERE id = $1';
        const result = await pool.query(query, [userId]);
        return result.rows[0] || null;
      }
    );
  }

  // Get user posts with caching
  async getUserPosts(userId, page = 1, limit = 20) {
    return await dbCache.cacheQuery(
      'user_posts',
      { userId, page, limit },
      async () => {
        const offset = (page - 1) * limit;
        const query = `
          SELECT id, title, content, created_at
          FROM posts
          WHERE user_id = $1
          ORDER BY created_at DESC
          LIMIT $2 OFFSET $3
        `;
        const result = await pool.query(query, [userId, limit, offset]);
        return result.rows;
      },
      900 // 15 minutes cache
    );
  }

  // Invalidate cache when data changes
  async updateUser(userId, updateData) {
    const query = 'UPDATE users SET name = $1, email = $2 WHERE id = $3';
    await pool.query(query, [updateData.name, updateData.email, userId]);

    // Clear cache (coarse: keys embed base64-encoded params, so we clear
    // all cached user_posts entries rather than pattern-matching the userId)
    await dbCache.delete(dbCache.generateKey('user_by_id', { userId }));
    await dbCache.clearPattern('db:user_posts:*');

    return true;
  }

  // Get popular posts with a longer cache
  async getPopularPosts() {
    return await dbCache.cacheQuery(
      'popular_posts',
      {},
      async () => {
        const query = `
          SELECT p.*, u.name as author_name,
                 COUNT(l.id) as like_count,
                 COUNT(c.id) as comment_count
          FROM posts p
          JOIN users u ON p.user_id = u.id
          LEFT JOIN likes l ON p.id = l.post_id
          LEFT JOIN comments c ON p.id = c.post_id
          WHERE p.created_at >= NOW() - INTERVAL '7 days'
          GROUP BY p.id, u.name
          ORDER BY like_count DESC, comment_count DESC
          LIMIT 20
        `;
        const result = await pool.query(query);
        return result.rows;
      },
      3600 // 1 hour cache
    );
  }
}
```

2. Query Result Caching
```javascript
// Advanced Caching Strategies
class AdvancedCacheManager {
  constructor(redisClient) {
    this.redis = redisClient;
    this.localCache = new Map(); // L1 cache
    this.maxLocalEntries = 1000; // bound for the in-memory cache
    this.cacheStats = {
      hits: 0,
      misses: 0,
      sets: 0,
      deletes: 0
    };
  }

  // Multi-level caching (L1: Memory, L2: Redis)
  async multiLevelGet(key) {
    // L1 Cache (Memory)
    if (this.localCache.has(key)) {
      this.cacheStats.hits++;
      return this.localCache.get(key);
    }

    // L2 Cache (Redis)
    try {
      const cached = await this.redis.get(key);
      if (cached) {
        const data = JSON.parse(cached);
        this.localCache.set(key, data);
        this.cacheStats.hits++;
        return data;
      }
    } catch (error) {
      console.error('Redis get error:', error);
    }

    this.cacheStats.misses++;
    return null;
  }

  // Set at both cache levels
  async multiLevelSet(key, data, ttl = 3600) {
    // L1 Cache (Memory), with a limited size: evict the oldest entry when full
    if (this.localCache.size >= this.maxLocalEntries) {
      const oldestKey = this.localCache.keys().next().value;
      this.localCache.delete(oldestKey);
    }
    this.localCache.set(key, data);

    // L2 Cache (Redis)
    try {
      await this.redis.setex(key, ttl, JSON.stringify(data));
      this.cacheStats.sets++;
    } catch (error) {
      console.error('Redis set error:', error);
    }
  }

  // Write-behind: update the cache immediately, persist to the database asynchronously
  // (writeToDatabase / fetchFromDatabase are assumed to be supplied by the integrating code)
  async writeBehind(key, data, ttl = 3600) {
    await this.multiLevelSet(key, data, ttl);

    setTimeout(async () => {
      try {
        await this.writeToDatabase(key, data);
      } catch (error) {
        console.error('Write-behind error:', error);
        // Retry logic or error handling
      }
    }, 100);
  }

  // Cache warming strategy
  async warmCache(keys) {
    const promises = keys.map(async (key) => {
      try {
        const data = await this.fetchFromDatabase(key);
        if (data) {
          await this.multiLevelSet(key, data, 3600);
        }
      } catch (error) {
        console.error(`Cache warming error for key ${key}:`, error);
      }
    });

    await Promise.all(promises);
  }

  // Cache invalidation strategies
  async invalidateByPattern(pattern) {
    // Clear local cache
    for (const [key] of this.localCache) {
      if (key.includes(pattern)) {
        this.localCache.delete(key);
      }
    }

    // Clear Redis cache
    try {
      const redisKeys = await this.redis.keys(`*${pattern}*`);
      if (redisKeys.length > 0) {
        await this.redis.del(redisKeys);
      }
      this.cacheStats.deletes += redisKeys.length;
    } catch (error) {
      console.error('Cache invalidation error:', error);
    }
  }

  // Get cache statistics
  getStats() {
    const total = this.cacheStats.hits + this.cacheStats.misses;
    return {
      ...this.cacheStats,
      hitRate: total > 0 ? (this.cacheStats.hits / total * 100).toFixed(2) + '%' : '0%',
      localCacheSize: this.localCache.size
    };
  }
}

// Cache-Aside Pattern Implementation
class CacheAsideService {
  constructor(cacheManager, database) {
    this.cache = cacheManager;
    this.db = database;
  }

  async getUser(id) {
    const cacheKey = `user:${id}`;

    // Try cache first
    let user = await this.cache.multiLevelGet(cacheKey);
    if (user) {
      return user;
    }

    // Cache miss, fetch from database
    user = await this.db.getUserById(id);
    if (user) {
      // Write to cache
      await this.cache.multiLevelSet(cacheKey, user, 3600);
    }

    return user;
  }

  async updateUser(id, userData) {
    // Update database
    const updatedUser = await this.db.updateUser(id, userData);

    // Update cache
    const cacheKey = `user:${id}`;
    await this.cache.multiLevelSet(cacheKey, updatedUser, 3600);

    return updatedUser;
  }

  async deleteUser(id) {
    // Delete from database
    await this.db.deleteUser(id);

    // Delete from cache (Map.delete is synchronous; only the Redis call is awaited)
    const cacheKey = `user:${id}`;
    this.cache.localCache.delete(cacheKey);
    await this.cache.redis.del(cacheKey);
  }
}
```
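
To tie the two classes together, here is a hedged wiring example; `redisClient` is assumed to be a connected promise-based Redis client and `db` a data-access layer exposing `getUserById`, `updateUser`, and `deleteUser` as used above:

```javascript
// Illustrative wiring of AdvancedCacheManager + CacheAsideService.
async function demo(redisClient, db) {
  const cacheManager = new AdvancedCacheManager(redisClient);
  const users = new CacheAsideService(cacheManager, db);

  const first = await users.getUser(42);  // cache miss: hits the database
  const second = await users.getUser(42); // served from the L1/L2 cache

  console.log(cacheManager.getStats());
  // e.g. { hits: 1, misses: 1, sets: 1, deletes: 0, hitRate: '50.00%', localCacheSize: 1 }
}
```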

Connection Pooling and Resource Management

1. Database Connection Pooling
```javascript
// PostgreSQL Connection Pool Configuration
const { Pool } = require('pg');

class DatabaseManager {
  constructor(config) {
    this.pool = new Pool({
      host: config.host,
      port: config.port,
      database: config.database,
      user: config.user,
      password: config.password,

      // Pool configuration
      min: config.minConnections || 2,
      max: config.maxConnections || 20,
      idleTimeoutMillis: config.idleTimeout || 30000,
      connectionTimeoutMillis: config.connectionTimeout || 2000,

      // Advanced settings
      allowExitOnIdle: false,
      maxUses: config.maxUses || 7500,

      // SSL configuration
      ssl: config.ssl || false,

      // Application name for monitoring
      application_name: config.applicationName || 'web_app',

      // Statement timeout
      statement_timeout: config.statementTimeout || 30000
    });

    this.pool.on('connect', (client) => {
      console.log('New client connected to database');
    });

    this.pool.on('error', (err, client) => {
      console.error('Database connection error:', err);
    });

    this.setupGracefulShutdown();
  }

  // Execute query through the connection pool
  async query(text, params) {
    const start = Date.now();

    try {
      const client = await this.pool.connect();

      try {
        const result = await client.query(text, params);

        // Log slow queries
        const duration = Date.now() - start;
        if (duration > 1000) {
          console.warn(`Slow query detected (${duration}ms):`, text);
        }

        return result;
      } finally {
        client.release();
      }
    } catch (error) {
      console.error('Database query error:', error);
      throw error;
    }
  }

  // Transaction helper
  async transaction(callback) {
    const client = await this.pool.connect();

    try {
      await client.query('BEGIN');
      const result = await callback(client);
      await client.query('COMMIT');
      return result;
    } catch (error) {
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }
  }

  // Batch operations
  async batchInsert(table, columns, values, batchSize = 1000) {
    if (values.length === 0) return;

    const columnsStr = columns.join(', ');

    for (let i = 0; i < values.length; i += batchSize) {
      const batch = values.slice(i, i + batchSize);

      // One placeholder group per row: ($1, $2, ...), ($3, $4, ...), ...
      const valuePlaceholders = batch.map((_, index) => {
        const offset = index * columns.length;
        return `(${columns.map((_, colIndex) => `$${offset + colIndex + 1}`).join(', ')})`;
      }).join(', ');

      const flattenedValues = batch.flat();

      const query = `
        INSERT INTO ${table} (${columnsStr})
        VALUES ${valuePlaceholders}
      `;

      await this.query(query, flattenedValues);
    }
  }

  // Health check
  async healthCheck() {
    try {
      const result = await this.query('SELECT 1 as health_check');
      const poolInfo = {
        totalCount: this.pool.totalCount,
        idleCount: this.pool.idleCount,
        waitingCount: this.pool.waitingCount
      };

      return {
        status: 'healthy',
        timestamp: new Date(),
        pool: poolInfo
      };
    } catch (error) {
      return {
        status: 'unhealthy',
        timestamp: new Date(),
        error: error.message
      };
    }
  }

  // Get pool statistics
  getPoolStats() {
    return {
      total: this.pool.totalCount,
      idle: this.pool.idleCount,
      waiting: this.pool.waitingCount
    };
  }

  // Graceful shutdown
  setupGracefulShutdown() {
    const shutdown = async (signal) => {
      console.log(`Received ${signal}, shutting down database connections...`);

      try {
        await this.pool.end();
        console.log('Database connections closed gracefully');
        process.exit(0);
      } catch (error) {
        console.error('Error during database shutdown:', error);
        process.exit(1);
      }
    };

    process.on('SIGTERM', () => shutdown('SIGTERM'));
    process.on('SIGINT', () => shutdown('SIGINT'));
  }
}

// Usage example
const dbConfig = {
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  minConnections: 5,
  maxConnections: 50,
  idleTimeout: 30000,
  connectionTimeout: 5000,
  statementTimeout: 10000,
  applicationName: 'my_web_app'
};

const dbManager = new DatabaseManager(dbConfig);

// Database operations
class UserRepository {
  constructor(dbManager) {
    this.db = dbManager;
  }

  async createUser(userData) {
    const query = `
      INSERT INTO users (name, email, password_hash, created_at)
      VALUES ($1, $2, $3, NOW())
      RETURNING id, name, email, created_at
    `;

    const result = await this.db.query(query, [
      userData.name,
      userData.email,
      userData.passwordHash
    ]);

    return result.rows[0];
  }

  async findUserById(id) {
    const query = `
      SELECT id, name, email, created_at, updated_at
      FROM users
      WHERE id = $1 AND deleted_at IS NULL
    `;

    const result = await this.db.query(query, [id]);
    return result.rows[0] || null;
  }

  async updateUserLastLogin(id) {
    const query = `
      UPDATE users
      SET last_login = NOW()
      WHERE id = $1
    `;

    await this.db.query(query, [id]);
  }

  async getUserWithPosts(userId) {
    return await this.db.transaction(async (client) => {
      // Get user
      const userQuery = 'SELECT id, name, email FROM users WHERE id = $1';
      const userResult = await client.query(userQuery, [userId]);
      const user = userResult.rows[0];

      if (!user) return null;

      // Get user posts
      const postsQuery = `
        SELECT id, title, content, created_at
        FROM posts
        WHERE user_id = $1
        ORDER BY created_at DESC
        LIMIT 10
      `;
      const postsResult = await client.query(postsQuery, [userId]);

      return {
        ...user,
        posts: postsResult.rows
      };
    });
  }
}

module.exports = { DatabaseManager, UserRepository };
```

2. Connection Pool Monitoring
“`javascript
// Database Connection Pool Monitoring
class PoolMonitor {
constructor(pool) {
this.pool = pool;
this.metrics = {
totalQueries: 0,
slowQueries: 0,
errors: 0,
avgResponseTime: 0,
maxResponseTime: 0,
minResponseTime: Infinity
};

this.startMonitoring();
}

startMonitoring() {
// Monitor pool setiap 30 detik
setInterval(() => {
this.collectMetrics();
}, 30000);

// Reset metrics harian
setInterval(() => {
this.resetMetrics();
}, 24 * 60 * 60 * 1000);
}

collectMetrics() {
const stats = this.pool.getStats();

console.log(‘Database Pool Statistics:’, {
timestamp: new Date(),
pool: {
total: stats.total,
idle: stats.idle,
waiting: stats.waiting,
utilization: ((stats.total – stats.idle) / stats.total * 100).toFixed(2) + ‘%’
},
metrics: this.metrics
});

// Check untuk potential issues
if (stats.waiting > 0) {
console.warn(‘Database pool has waiting connections!’);
}

if (stats.idle === 0) {
console.warn(‘Database pool has no idle connections!’);
}

if (this.metrics.slowQueries > 0) {
console.warn(`Detected ${this.metrics.slowQueries} slow queries`);
}
}

// Enhanced query method dengan monitoring
async monitoredQuery(query, params = []) {
const start = Date.now();

try {
this.metrics.totalQueries++;

const result = await this.pool.query(query, params);

const responseTime = Date.now() – start;
this.updateResponseTimeMetrics(responseTime);

// Log slow queries
if (responseTime > 1000) {
this.metrics.slowQueries++;
console.warn(`Slow query (${responseTime}ms):`, query);
}

return result;
} catch (error) {
this.metrics.errors++;
console.error(‘Database query error:’, error);
throw error;
}
}

updateResponseTimeMetrics(responseTime) {
// Update min/max
this.metrics.minResponseTime = Math.min(this.metrics.minResponseTime, responseTime);
this.metrics.maxResponseTime = Math.max(this.metrics.maxResponseTime, responseTime);

// Update average (simplified)
this.metrics.avgResponseTime =
(this.metrics.avgResponseTime + responseTime) / 2;
}

resetMetrics() {
this.metrics = {
totalQueries: 0,
slowQueries: 0,
errors: 0,
avgResponseTime: 0,
maxResponseTime: 0,
minResponseTime: Infinity
};
}

// Health check endpoint
getHealthStatus() {
const stats = this.pool.getStats();
const utilizationRate = (stats.total – stats.idle) / stats.total;

return {
status: this.determineHealthStatus(utilizationRate),
timestamp: new Date(),
pool: stats,
metrics: this.metrics,
recommendations: this.getRecommendations(utilizationRate)
};
}

determineHealthStatus(utilizationRate) {
if (utilizationRate > 0.9) return ‘critical’;
if (utilizationRate > 0.7) return ‘warning’;
if (this.metrics.errors > 0) return ‘degraded’;
return ‘healthy’;
}

getRecommendations(utilizationRate) {
const recommendations = [];

if (utilizationRate > 0.9) {
recommendations.push(‘Consider increasing max pool size’);
recommendations.push(‘Check for slow queries and optimize them’);
}

if (this.metrics.slowQueries > 0) {
recommendations.push(‘Investigate slow queries and add appropriate indexes’);
}

if (this.metrics.errors > 0) {
recommendations.push(‘Review error logs and fix connection issues’);
}

if (utilizationRate ${this.thresholds.queryTime}
ORDER BY mean_time DESC
LIMIT 10
`
},
{
name: ‘Most Frequent Queries’,
query: `
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10
`
},
{
name: ‘High Variance Queries’,
query: `
SELECT query, calls, total_time, mean_time, stddev_time
FROM pg_stat_statements
WHERE stddev_time > mean_time * 0.5
ORDER BY stddev_time DESC
LIMIT 10
`
}
];

    const results = {};

    for (const queryInfo of queries) {
      try {
        const result = await this.db.query(queryInfo.query);
        results[queryInfo.name] = result.rows;
      } catch (error) {
        console.error(`Error getting ${queryInfo.name}:`, error);
        results[queryInfo.name] = [];
      }
    }

    return results;
  }

  // Get connection statistics
  async getConnectionStats() {
    try {
      const query = `
        SELECT
          COUNT(*) as total_connections,
          COUNT(*) FILTER (WHERE state = 'active') as active_connections,
          COUNT(*) FILTER (WHERE state = 'idle') as idle_connections,
          COUNT(*) FILTER (WHERE state = 'idle in transaction') as idle_in_transaction,
          AVG(EXTRACT(EPOCH FROM (now() - backend_start))) as avg_session_duration
        FROM pg_stat_activity
        WHERE pid != pg_backend_pid()
      `;

      const result = await this.db.query(query);
      return result.rows[0];
    } catch (error) {
      console.error('Error getting connection stats:', error);
      return {};
    }
  }

  // Get index statistics
  async getIndexStats() {
    try {
      const query = `
        SELECT
          schemaname,
          tablename,
          indexname,
          idx_scan,
          idx_tup_read,
          idx_tup_fetch,
          pg_size_pretty(pg_relation_size(indexrelid)) as index_size
        FROM pg_stat_user_indexes
        ORDER BY idx_scan DESC
        LIMIT 20
      `;

      const result = await this.db.query(query);
      return result.rows;
    } catch (error) {
      console.error('Error getting index stats:', error);
      return [];
    }
  }

  // Get table statistics
  async getTableStats() {
    try {
      const query = `
        SELECT
          schemaname,
          tablename,
          n_tup_ins as inserts,
          n_tup_upd as updates,
          n_tup_del as deletes,
          n_live_tup as live_tuples,
          n_dead_tup as dead_tuples,
          last_vacuum,
          last_autovacuum,
          last_analyze,
          last_autoanalyze
        FROM pg_stat_user_tables
        ORDER BY n_live_tup DESC
        LIMIT 20
      `;

      const result = await this.db.query(query);
      return result.rows;
    } catch (error) {
      console.error('Error getting table stats:', error);
      return [];
    }
  }

  // Check performance alerts
  checkAlerts(stats) {
    const alerts = [];

    // Check slow queries
    const slowQueries = stats.queries['Slow Queries'] || [];
    if (slowQueries.length > 0) {
      alerts.push({
        type: 'performance',
        severity: 'warning',
        message: `Found ${slowQueries.length} slow queries`,
        details: slowQueries.map(q => ({
          query: q.query.substring(0, 100) + '...',
          avgTime: Math.round(q.mean_time) + 'ms'
        }))
      });
    }

    // Check connection utilization
    const totalConnections = parseInt(stats.connections.total_connections || 0);
    const activeConnections = parseInt(stats.connections.active_connections || 0);
    const utilizationRate = totalConnections > 0 ? activeConnections / totalConnections : 0;

    if (utilizationRate > this.thresholds.connectionUtilization) {
      alerts.push({
        type: 'capacity',
        severity: 'warning',
        message: `High connection utilization: ${(utilizationRate * 100).toFixed(1)}%`,
        details: {
          total: totalConnections,
          active: activeConnections,
          utilizationRate: (utilizationRate * 100).toFixed(1) + '%'
        }
      });
    }

    // Check for tables needing vacuum
    const tablesNeedingVacuum = stats.tables.filter(table => {
      if (!table.last_vacuum) return true;
      const lastVacuum = new Date(table.last_vacuum);
      const daysSinceVacuum = (Date.now() - lastVacuum) / (1000 * 60 * 60 * 24);
      return daysSinceVacuum > 30; // 30 days
    });

    if (tablesNeedingVacuum.length > 0) {
      alerts.push({
        type: 'maintenance',
        severity: 'info',
        message: `${tablesNeedingVacuum.length} tables need VACUUM`,
        details: tablesNeedingVacuum.map(t => ({
          table: `${t.schemaname}.${t.tablename}`,
          lastVacuum: t.last_vacuum || 'Never'
        }))
      });
    }

    // Check unused indexes
    const unusedIndexes = stats.indexes.filter(index =>
      parseInt(index.idx_scan || 0) === 0
    );

    if (unusedIndexes.length > 0) {
      alerts.push({
        type: 'optimization',
        severity: 'info',
        message: `${unusedIndexes.length} indexes are never used`,
        details: unusedIndexes.map(idx => ({
          index: idx.indexname,
          table: `${idx.schemaname}.${idx.tablename}`,
          size: idx.index_size
        }))
      });
    }

    this.alerts = alerts;
    return alerts;
  }

  // Get performance report
  async generateReport() {
    const stats = await this.collectStats();

    return {
      timestamp: new Date(),
      summary: {
        totalQueries: this.metrics.size,
        alertsCount: this.alerts.length,
        healthScore: this.calculateHealthScore(stats)
      },
      metrics: stats,
      alerts: this.alerts,
      recommendations: this.generateRecommendations(stats, this.alerts)
    };
  }

  // Calculate database health score
  calculateHealthScore(stats) {
    let score = 100;

    // Deduct points for slow queries
    const slowQueries = stats.queries['Slow Queries'] || [];
    score -= Math.min(slowQueries.length * 5, 30);

    // Deduct points for high connection utilization
    const totalConns = parseInt(stats.connections.total_connections || 0);
    const activeConns = parseInt(stats.connections.active_connections || 0);
    const utilization = totalConns > 0 ? activeConns / totalConns : 0;
    if (utilization > 0.8) score -= 20;

    // Deduct points for tables needing maintenance
    const tablesNeedingMaintenance = stats.tables.filter(table => {
      if (!table.last_vacuum) return true;
      const lastVacuum = new Date(table.last_vacuum);
      const daysSinceVacuum = (Date.now() - lastVacuum) / (1000 * 60 * 60 * 24);
      return daysSinceVacuum > 30;
    });
    score -= Math.min(tablesNeedingMaintenance.length * 3, 20);

    return Math.max(0, score);
  }

  // Generate optimization recommendations
  generateRecommendations(stats, alerts) {
    const recommendations = [];

    // Query optimization recommendations
    const slowQueries = stats.queries['Slow Queries'] || [];
    if (slowQueries.length > 0) {
      recommendations.push({
        category: 'Query Optimization',
        priority: 'high',
        action: 'Analyze and optimize slow queries',
        details: [
          'Use EXPLAIN ANALYZE to analyze query execution',
          'Add appropriate indexes for frequently queried columns',
          'Consider query restructuring',
          'Review JOIN operations'
        ]
      });
    }

    // Index optimization recommendations
    const unusedIndexes = stats.indexes.filter(idx =>
      parseInt(idx.idx_scan || 0) === 0
    );
    if (unusedIndexes.length > 0) {
      recommendations.push({
        category: 'Index Optimization',
        priority: 'medium',
        action: 'Remove unused indexes',
        details: [
          `${unusedIndexes.length} indexes are never used`,
          'Consider dropping indexes that add overhead',
          'Monitor index usage regularly'
        ]
      });
    }

    // Maintenance recommendations
    const tablesNeedingMaintenance = stats.tables.filter(table => {
      if (!table.last_vacuum) return true;
      const lastVacuum = new Date(table.last_vacuum);
      const daysSinceVacuum = (Date.now() - lastVacuum) / (1000 * 60 * 60 * 24);
      return daysSinceVacuum > 30;
    });
    if (tablesNeedingMaintenance.length > 0) {
      recommendations.push({
        category: 'Maintenance',
        priority: 'medium',
        action: 'Perform regular maintenance',
        details: [
          `${tablesNeedingMaintenance.length} tables need VACUUM`,
          'Set up regular VACUUM and ANALYZE schedules',
          'Monitor table bloat'
        ]
      });
    }

    // Connection pool recommendations
    const totalConns = parseInt(stats.connections.total_connections || 0);
    const activeConns = parseInt(stats.connections.active_connections || 0);
    const utilization = totalConns > 0 ? activeConns / totalConns : 0;
    if (utilization > 0.8) {
      recommendations.push({
        category: 'Connection Management',
        priority: 'high',
        action: 'Optimize connection pool',
        details: [
          `Connection utilization: ${(utilization * 100).toFixed(1)}%`,
          'Consider increasing max pool size',
          'Optimize application connection handling',
          'Review long-running queries'
        ]
      });
    }

    return recommendations;
  }

  // Export metrics for external monitoring
  async exportMetrics() {
    const stats = await this.collectStats();

    return {
      timestamp: new Date().toISOString(),
      database: {
        connections: {
          total: stats.connections.total_connections,
          active: stats.connections.active_connections,
          idle: stats.connections.idle_connections,
          utilization: stats.connections.total_connections > 0
            ? (stats.connections.active_connections / stats.connections.total_connections * 100).toFixed(2)
            : 0
        },
        queries: {
          slowCount: (stats.queries['Slow Queries'] || []).length,
          frequentCount: (stats.queries['Most Frequent Queries'] || []).length
        },
        health: {
          score: this.calculateHealthScore(stats),
          alerts: this.alerts.length
        }
      }
    };
  }
}

module.exports = DatabasePerformanceMonitor;
```

Conclusion

Database performance optimization is an ongoing process that requires continuous monitoring, analysis, and tuning. By combining the strategies covered here, from query optimization and caching to connection pooling and monitoring, you can achieve significant performance improvements.

Key optimization strategies:
• Query Optimization: Proper indexing, efficient query writing, and execution plan analysis
• Caching Strategy: Multi-level caching with appropriate invalidation policies
• Connection Management: Optimal connection pooling configuration
• Monitoring: Comprehensive performance monitoring with alerting
• Maintenance: Regular database maintenance tasks (VACUUM, ANALYZE, etc.)

Remember: Database optimization is about finding the right balance between read performance, write performance, storage efficiency, and maintenance overhead.

Start by measuring performance, identify bottlenecks, implement optimizations iteratively, and measure the impact of each change.