This blogpost is for those who have found themselves living their best life and are starting to get bored. I’ll show you how to get off the high and increase your suffering.
As a Software Developer I have seen no greater suffering than when using a database. Databases are so often abused with bad abstractions built on top of them. I would argue that the majority of tech debt is caused by poor usage of databases. People build layer upon layer of abstraction just to avoid learning about the database. If this sounds interesting to you between your sips of fine wine, then let me show you how to suffer greatly with a database.
Step 1, avoid learning how the database works at all costs
You don’t have time to learn about the database. You are busy with Jira tickets that need to be done, PRs to be reviewed and promotions to be argued for.
Let’s avoid all that with ORMs or easy-to-use libraries. Databases are just about storing data, right? The simpler the API, the better it hides the messy details of the database from us.
MongoDB is a document-oriented database? Who cares. All I need is a key-value store to solve any business problem.
We’ll suffer by
- writing a lot more queries than we need to by forcing our domain model upon the database
- dealing with poor performance due to loading too much data
- filling up the bug tracking system with strange data inconsistency issues, which are clearly impossible and user error
- causing production outages by introducing more unhandled edge cases
- spending a lot of time tracing bugs
- losing data by not learning about atomic updates or transactions
Step 2, reimplement missing features you are used to
Why are database designers so dumb? I don’t understand how any of them can sleep at night without implementing basic features like auto-incrementing primary keys.
We’re going to have to use our precious time to implement an auto-incrementing _id for MongoDB. Let’s see, we’ll just add a new collection that stores the next integer key and use MongoDB’s atomic update operator [$inc](https://docs.mongodb.com/manual/reference/operator/update/inc/) to get the next ID. No more stupid unreadable ObjectIds!
We’ll suffer by
- forcing everybody to remember to use our new feature the way we intended, or else it’s their fault for breaking production
- again introducing more unhandled edge cases
- preventing horizontal scaling of a distributed database by implementing new features in a centralized fashion
- literally losing a war by leaking analytics through an auto-incrementing number
Step 3, if it’s good enough on my laptop, it’s good enough for production
Why are database operators paid so much? We’re a devops shop, so that means as a dev I’m the operator now. And I got the database working on my laptop, so I’ll just run it the same way in production.
Everything is super simple, I don’t get all the fuss with bash scripts and Linux services. All you need to do is ssh into the server and start mongod. It already runs in the background.
We’ll suffer by
- having multiple single points of failure, decreasing the MTBF
- being woken up in the middle of the night when the server crashes and mongod doesn’t start back up
- explaining to our customers why it’s their fault for entrusting us with their data, after we lose the database and realize we don’t have a backup
- spending the weekend rebuilding the server by hand, as nothing is automated
- losing money by having to take our service down during maintenance windows to upgrade the database
- not knowing how close we are to the next performance cliff
- having the server crash from running out of disk space, due to unrotated logs or simply filling up with data
- having a slow and sluggish service, as we won’t be watching CPU or memory usage
- alternatively, spending the entire budget on massive 128-core servers “just in case we go viral”
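For contrast, escaping the mongod-in-a-terminal setup takes little more than a supervised service. A minimal systemd unit sketch (the paths, user and unit name here are illustrative assumptions; MongoDB’s official packages ship their own unit file):

```ini
# /etc/systemd/system/mongod.service (hypothetical minimal example)
[Unit]
Description=MongoDB database server
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With `systemctl enable mongod`, the process starts at boot and restarts after a crash, crossing two bullets off the list above before breakfast.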
Step 4, avoid all the advice below
The rest of this blogpost is good, solid advice if you don’t want to suffer with a database. I suggest you leave now, as it will only decrease your suffering.
- RTFM. Just do it: read the whole documentation from start to finish, and then again. For databases we need to understand the following topics. What is the intended use case for this type of database? What are the tradeoffs? How does one model problems with this database? How does the way we model the problem affect our ability to read/write later? How do we design the data model and read/write patterns to minimize the resources needed? What is the recommended way to deploy and operate this database? How do we monitor the database? How do we know if the database is running well? How do we know if the database could use more resources, or fewer, and actually save money? What is the consensus model? Where does this database lie in terms of the CAP theorem?
- Consider how to write less code and have the database do more work. Databases are designed for high performance on certain workloads, as long as you are not fighting their intended use case. Spend the time to translate the business problem into a shape that aligns with how the database wants to work, and the code will disappear.
- Avoid the temptation to integrate with an external service when the database can do the job. Keeping more of the data in the database lets you keep all of it consistent with atomic updates and transactions. There was already enough busywork in setting up the database connection; avoid the extra busywork of adding external services.
- Learn about different types of databases and read their documentation as well. Learning how different databases operate makes the intended use case of the database you are stuck with much clearer. It is also the only way to end long-term suffering if the company accidentally picked the wrong database initially: the only escape is to find a better fit with another database. Gather the business case and propose a migration.
- If a solar flare caused all computers in the world to restart, every in-flight user request would be lost. Would your database restart gracefully with consistent data? If it’s not correctly saved to the database, it doesn’t exist. Ensure every single user request ends up as an atomic update or transaction that has been fsynced to disk.
- Administrate the databases well and eliminate or mitigate as many single points of failure as possible. Learn how to horizontally scale a database with replication. Take snapshot backups of the database and test restoring them daily. Monitor every database node’s CPU usage, memory usage, disk space, disk IOPS and network traffic. Ensure all nodes are healthy and ready to take over when one inevitably fails.
The suffering only ends when you have decommissioned the database. How much you suffer before then is up to you.
Once you have learned how to suffer with a database, you can apply these same principles to everything else. There is a plethora of documentation for you to ignore, especially the docs for programming languages and services you are already using.
Had enough suffering? You’re in luck. Battlefy is hiring.