Unit testing for databases
Database applications can - and should - benefit from unit tests, yet it is surprising how rarely database code is covered by them. One of the problems, I suppose, is that it can be pretty tricky to write an isolated test that does not modify the state of the system, so that it is fully re-runnable and yields the same results each time. Martin Fowler wrote a good article discussing the unit test concept - well worth a look!
Even if you decide not to implement unit tests for your stored procedures or queries, you can still use them to ensure that coding standards are observed.
You can use unit tests to check:
- naming conventions for your tables, columns, constraints, and other objects - a database is much easier to manage when all developers follow the same agreed standards
- data types used across the database for common fields (user name, date created, etc.) - consistent types help you avoid awkward bugs caused by data not fitting into a column, as well as performance problems that occur when the database has to convert data types back and forth in joins and WHERE clauses
- permissions applied to your database objects - developers can take shortcuts and grant full access to everything simply because their code happens to work that way.
Tests like these let you catch issues early, while you're still developing the database - not in production.
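To give a flavour of what such a test can look like, here is a minimal sketch that checks a single naming convention. The convention itself - that every primary key constraint is named PK_<TableName> - is just an assumption for illustration; adjust the pattern to whatever standard your team has agreed on.

-- Sketch: fail if any primary key constraint breaks the (assumed) PK_<TableName> convention
declare @violations int;

select @violations = count(*)
from sys.key_constraints kc
where kc.type = 'PK'
  and kc.name <> 'PK_' + object_name(kc.parent_object_id);

if @violations > 0
    raiserror('Naming convention test failed: %d primary keys are not named PK_<TableName>.', 16, 1, @violations);
else
    raiserror('Naming convention test succeeded.', 10, 1) with nowait;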
Several frameworks allow you to write and manage unit tests for your database. Here are two of them:
- DBUnit supports several DB technologies, including PostgreSQL, MySQL, and Oracle
- tSQLt supports SQL Server
A simple approach
Many, many years ago I approached the topic of unit testing in SQL Server from a much simpler direction. The approach can help you with the use cases I listed above. The idea is based on a simple script that you can modify to suit your needs. You can write a client framework, or use an existing one (I used XUnit, for example), to execute these scripts automatically every few minutes or every day, as you see fit.
The script is quite simple. You add your regular code to the begin try/end try section and catch the resulting error in the catch section of the script. Any changes made by the test are then rolled back. Of course, if you only validate naming conventions or data types, you don't have to roll back anything.
begin tran
begin try
    raiserror('-- ======== Enter your unit test name here ========= --', 10, 1) with nowait;
    raiserror('Test 2: Add new feed, invalid category name', 10, 1) with nowait;

    insert FeedCategory (CategoryName) values ('Maintenance feeds'); -- insert category

    -- this call is expected to raise an "Invalid feed category name" error
    exec AddNewFeed @FeedName = 'Feed 1', @FeedCategoryName = 'No such category', @FeedType = 'FS';

    raiserror('FAIL', 16, 1); -- Assert.Fail: we should never get here
end try
begin catch
    declare @err nvarchar(2048); -- error_message() returns nvarchar, so avoid truncating it
    set @err = error_message();

    if @err like '%Invalid feed category name%' -- is it the expected exception?
        raiserror('Test succeeded.', 10, 1) with nowait;
    else
        raiserror('Test failed: %s', 16, 1, @err);
end catch
rollback
I used this script, and its equivalent in Oracle, to test not only data but also the schema of the objects in the database. For example, you can select data from information_schema.columns (in SQL Server) and test whether every table uses the same data type for the user_name column, whether date types are consistent, or whether identity primary keys use a large enough integer type (bigint). You can also test the naming conventions of the columns - very handy for catching mistakes early in the development cycle.
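A data type consistency test of that kind might look like the sketch below. The expectation that every user_name column is nvarchar(100) is just an assumption for illustration; substitute the column names and types from your own standards.

-- Sketch: fail if any column named user_name deviates from the (assumed) standard type nvarchar(100)
declare @offenders int;

select @offenders = count(*)
from information_schema.columns
where column_name = 'user_name'
  and not (data_type = 'nvarchar' and character_maximum_length = 100);

if @offenders > 0
    raiserror('Data type test failed: %d user_name columns are not nvarchar(100).', 16, 1, @offenders);
else
    raiserror('Data type test succeeded.', 10, 1) with nowait;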
You can incorporate the execution of such tests into your CI/CD pipeline and greatly improve the quality of the released code.
Also check out these other blog posts on SelectCompare:
- Big Data vs Important Data
- Temporal Data Tables in SQL Server
- In-Memory Databases
- SelectCompare 1.2 released