SERVERS 8 min read

VPS Configuration for Production Projects

Choosing Your Provider

Provider selection depends on three factors: geographic latency to your users, cost, and support quality. For most small-to-medium projects, DigitalOcean, Hetzner, and Vultr offer the best price-to-performance ratio. Hetzner in particular offers exceptional value for European and international traffic. AWS, GCP, and Azure offer more services, but at significantly higher cost and complexity.

Start with a plan that is one tier above your current need. Under-provisioning a production server and then scrambling to resize it during a traffic spike is a painful lesson. It is much easier to downsize once your actual baseline is clear. For most web applications, 2 vCPU and 4 GB RAM is a reasonable starting point.

Choose a data center region close to your primary user base. The difference between a 200 ms and a 20 ms round trip is perceptible in web application responsiveness. For global audiences, you will need a CDN regardless of server location — but the origin server should be geographically sensible.

Initial Server Hardening

First actions after provisioning: create a non-root user with sudo privileges, disable root SSH login, and switch SSH to key-based authentication. These three steps eliminate the majority of automated brute-force attacks. Use ssh-copy-id to install your public key, then edit /etc/ssh/sshd_config: PermitRootLogin no, PasswordAuthentication no.
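The three steps above can be sketched as follows (the username `deploy` and the server address are placeholders; pick your own):

```shell
# On the server, as root: create a non-root user with sudo privileges
adduser deploy
usermod -aG sudo deploy

# From your local machine: install your public key for the new user
ssh-copy-id deploy@your-server-ip

# In /etc/ssh/sshd_config, disable root login and password auth:
#   PermitRootLogin no
#   PasswordAuthentication no
# Reload SSH to apply (the service is named sshd on some distributions).
# Verify key-based login in a second session before closing this one.
sudo systemctl reload ssh
```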

Configure UFW (Uncomplicated Firewall): allow ports 22 (SSH), 80 (HTTP), and 443 (HTTPS). Deny everything else by default. If you are running a database, do not expose port 3306 or 5432 to the internet — use SSH tunneling or a VPN for remote database access.
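That deny-by-default policy is a few UFW commands:

```shell
sudo ufw default deny incoming     # deny everything not explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow 22/tcp              # SSH
sudo ufw allow 80/tcp              # HTTP
sudo ufw allow 443/tcp             # HTTPS
sudo ufw enable
sudo ufw status verbose            # confirm the rules before disconnecting
```

For remote database access, an SSH tunnel keeps the port closed to the internet: run ssh -N -L 5432:localhost:5432 deploy@your-server-ip locally, then connect your database client to localhost:5432.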

Install fail2ban to automatically block IPs that fail SSH authentication repeatedly. Configure it to ban after 5 failures within 10 minutes with a 1-hour ban duration. This dramatically reduces brute-force noise in your logs. Also enable automatic security updates: unattended-upgrades on Ubuntu/Debian handles this transparently.
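The ban policy described above maps to a jail.local along these lines (a sketch; place it at /etc/fail2ban/jail.local — fail2ban 0.10+ accepts the time suffixes used here):

```ini
[sshd]
enabled  = true
# ban after 5 failures within 10 minutes, for 1 hour
maxretry = 5
findtime = 10m
bantime  = 1h
```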

Nginx + SSL/TLS Setup

Nginx is the right choice for most production deployments: high performance, excellent documentation, and proven stability. Install from the official Nginx repository for the latest stable release, not the distribution's default package, which is often significantly outdated.

Configure SSL/TLS with Certbot and Let's Encrypt. Install certbot and python3-certbot-nginx, then run certbot --nginx -d yourdomain.com -d www.yourdomain.com. Certbot automatically modifies your Nginx config and sets up auto-renewal. Verify auto-renewal with: certbot renew --dry-run.
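Collected as commands (Ubuntu/Debian assumed; the domain is a placeholder):

```shell
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
sudo certbot renew --dry-run    # verify auto-renewal works
```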

Harden your Nginx TLS configuration: disable SSLv3 and TLS 1.0/1.1, use only TLS 1.2 and 1.3. Set ssl_ciphers to a modern cipher suite (Mozilla's SSL Config Generator provides current recommendations). Enable HSTS with a long max-age. Add security headers: X-Frame-Options SAMEORIGIN, X-Content-Type-Options nosniff, Referrer-Policy strict-origin-when-cross-origin.
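A sketch of the corresponding directives for your server block (cipher list omitted; generate a current ssl_ciphers value with Mozilla's SSL Config Generator):

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS: only add once you are sure everything serves HTTPS
add_header Strict-Transport-Security "max-age=63072000" always;
add_header X-Frame-Options SAMEORIGIN always;
add_header X-Content-Type-Options nosniff always;
add_header Referrer-Policy strict-origin-when-cross-origin always;
```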

Backup Automation

The rule of backups is simple: if it doesn't happen automatically, it doesn't happen. Manual backups are skipped during busy periods, which is precisely when disasters occur. Automate daily database dumps with a cron job that runs mysqldump (for MySQL) or pg_dump (for Postgres), compresses the output, and ships it to an off-server location.

Use rclone to sync backups to cloud storage (S3, R2, Backblaze B2). A typical backup cron job: dump the database, gzip it with a timestamp filename, rclone copy it to your bucket, then delete local backups older than 7 days. The cloud copies should be retained for 30+ days with lifecycle rules.
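The backup job described above, as a cron-able script (a sketch for Postgres; the database name, backup path, and rclone remote are placeholders):

```shell
#!/usr/bin/env bash
set -euo pipefail

DB=myapp                           # placeholder database name
DEST=remote:myapp-backups          # placeholder rclone remote/bucket
BACKUP_DIR=/var/backups/db
STAMP=$(date +%Y%m%d-%H%M%S)
FILE="$BACKUP_DIR/$DB-$STAMP.sql.gz"

mkdir -p "$BACKUP_DIR"
pg_dump "$DB" | gzip > "$FILE"     # dump and compress in one pass
rclone copy "$FILE" "$DEST"        # ship off-server
# keep 7 days locally; longer retention lives in the bucket's lifecycle rules
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +7 -delete
```

Run it daily from cron, e.g. 0 3 * * * /usr/local/bin/backup-db.sh, and make sure cron failures are surfaced somewhere (mail, or a dead-man's-switch monitor).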

Test your backups. A backup you have never restored from is a backup you do not actually have. Schedule a monthly restoration test on a staging server. Time the full restoration procedure and document it. This also validates that your backup files are not corrupted and that your restoration runbook is accurate.

CI/CD Basics

A simple CI/CD pipeline for a VPS-hosted project: on push to main, your CI (GitHub Actions, GitLab CI) runs tests, builds assets, then SSHs to the server and runs a deploy script. The deploy script: git pull, install dependencies, run migrations, reload PHP-FPM or restart your Node process, clear caches.
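The deploy-script half of that pipeline might look like this (a sketch for a PHP app; the path, migration command, and service names are placeholders for your stack):

```shell
#!/usr/bin/env bash
set -euo pipefail
cd /var/www/myapp                    # placeholder app path

git pull origin main
composer install --no-dev            # or: npm ci && npm run build
./bin/migrate                        # placeholder for your migration command
sudo systemctl reload php8.2-fpm     # or: sudo systemctl restart myapp.service
./bin/clear-caches                   # placeholder for your cache-clear step
```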

Zero-downtime deployments require a bit more: deploy to a new release directory, run migrations and warmup steps, then atomically repoint the current symlink at the new release. Capistrano and Deployer implement this pattern. Even a shell-script version is better than a naive git pull that leaves the application in a broken state mid-deploy.
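The symlink-swap pattern, demonstrated as a self-contained sketch (a temp directory stands in for /var/www/myapp, and the echo stands in for build/migrate/warmup steps):

```shell
#!/usr/bin/env bash
set -euo pipefail

APP=$(mktemp -d)                       # stand-in for /var/www/myapp
mkdir -p "$APP/releases"

deploy() {
  local release="$APP/releases/$1"
  mkdir -p "$release"
  echo "version $1" > "$release/app.txt"   # stand-in for build steps
  # Atomic swap: point a temp symlink at the release, then rename it
  # over current. rename(2) is atomic; a bare ln -sfn on current is not.
  ln -sfn "$release" "$APP/current.tmp"
  mv -T "$APP/current.tmp" "$APP/current"
}

deploy v1
deploy v2
cat "$APP/current/app.txt"   # → version v2
```

Because old releases stay on disk, rollback is just repointing the symlink at the previous release directory.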

Keep your deploy scripts in version control. Document the entire deployment process in a runbook. Any team member should be able to deploy without tribal knowledge. Test the deploy process from a fresh checkout at least once per quarter.

Monitoring

At minimum, monitor: server uptime (external ping monitor like UptimeRobot), disk space usage (alert at 80%), CPU and memory (alert on sustained high usage), and application error rate. These four cover the most common production failure modes.
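The disk-space check, for instance, is a few lines you can cron alongside the external uptime monitor (the 80% threshold and the alert action are placeholders):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Alert when root filesystem usage crosses 80%
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$usage" -ge 80 ]; then
  echo "disk usage at ${usage}% on $(hostname)"   # replace with mail/Slack webhook
fi
```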

For application-level monitoring, integrate an error tracking service (Sentry is free for small volumes). Unhandled exceptions in production should create alerts, not disappear silently into log files that nobody reads. Configure Sentry to notify via email or Slack on new issues.

Set up logrotate for all application and Nginx logs to prevent disk exhaustion. Structured logging (JSON format) makes logs searchable and parseable by log aggregation tools. Even if you are not running a full ELK stack, structured logs mean you can grep effectively today and feed the same files into a log aggregation system later without reformatting.
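A minimal logrotate policy for an application log (a sketch; the path is a placeholder, and Nginx packages usually ship their own /etc/logrotate.d/nginx already):

```
/var/www/myapp/storage/logs/*.log {
    daily
    rotate 14          # keep two weeks
    compress
    delaycompress
    missingok
    notifempty
    copytruncate       # app keeps its open file handle; may drop a few
                       # lines during truncation
}
```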
