In the previous post on this topic, I gave you several good reasons not to pre-populate your unit test database. That raises the burning question: where should test data come from?
The test data should be inside the test case. This keeps everything together. You know exactly what data is important to the test case by reading the test case. The first step of the test case generally inserts the data it needs into the database (part of the Assemble-Act-Assert pattern).
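As a rough sketch, the Assemble-Act-Assert shape of a tSQLt test case looks like this (the table, view, and expected values here are hypothetical, just to show the three sections):

```sql
-- Hypothetical example showing the Assemble-Act-Assert shape of a test.
CREATE PROCEDURE MyTests.[test: totals are counted per customer]
AS
BEGIN
    -- Assemble: insert exactly the rows this test needs
    INSERT INTO dbo.Orders (OrderId, CustomerId) VALUES (1, 13), (2, 13);

    -- Act: run the code under test and capture its output
    SELECT CustomerId, Cnt INTO #actual FROM dbo.CustomerReport;

    -- Assert: compare the actual output with the expected rows
    SELECT TOP(0) * INTO #expected FROM #actual;
    INSERT INTO #expected (CustomerId, Cnt) VALUES (13, 2);
    EXEC tSQLt.AssertEqualsTable '#expected', '#actual';
END;
```

The rest of this post walks through a complete, working version of this pattern.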
Only data needed for the test should be included in the test case. By inserting only the data you need in the test case, you create focus. There is much less chance that unrelated schema changes will break your tests, and much more confidence that the test actually validates the small piece of code you are targeting.
Tables should be faked. The biggest problem with inserting test data inside the test case is that there are constraints on the tables. To solve this problem, we can borrow a unit testing practice from object-oriented languages, that is creating a fake version of a dependency. For example, if we’re testing a stored procedure which reads data from a table, the stored procedure is dependent upon the table. We can replace the real table with a fake table which does not have the constraints. This allows us to easily insert data in the test case.
In tSQLt, tables are faked using the tSQLt.FakeTable procedure. This procedure renames the original table and puts a new one in its place. The new table contains the same columns with the same data types, but does not have the constraints, including primary keys, foreign keys, check constraints, and NOT NULL restrictions. Let's look at an example of a test case that fakes a table:
EXEC tSQLt.NewTestClass 'OrderReportTests';
GO

CREATE PROCEDURE OrderReportTests.[test: OrderReport works for several customers]
AS
BEGIN
    EXEC tSQLt.FakeTable 'dbo', 'Orders';

    INSERT INTO dbo.Orders (OrderId, CustomerId)
    VALUES (1, 13), (2, 13), (3, 42), (4, 73), (5, 73), (6, 73);

    SELECT CustomerId, Cnt
      INTO #actual
      FROM dbo.CustomerReport;

    SELECT TOP(0) * INTO #expected FROM #actual;

    INSERT INTO #expected (CustomerId, Cnt)
    VALUES (42, 1), (13, 2), (73, 3);

    EXEC tSQLt.AssertEqualsTable '#expected', '#actual';
END;
GO

EXEC tSQLt.Run 'OrderReportTests';
In this example, we fake the Orders table. The CustomerReport view depends on only the OrderId and CustomerId columns of the Orders table, so we need to insert only those two columns, even though the real table has several other NOT NULL columns. The rest of the test validates that the records returned by the CustomerReport view match the expected values. The documentation of the AssertEqualsTable feature in tSQLt discusses this technique in more detail.
Tests should be executed in transactions which are rolled back after each test. For test cases to be independent, anything that happens during a test case must be undone. The easiest way to do this in a database is to open a transaction and roll it back when the test is complete. SQL Server conveniently rolls back schema changes as well, so even a faked table is restored.
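tSQLt wraps each test in a transaction for you automatically, but the mechanism can be illustrated manually. This is a sketch, not something you need to write yourself:

```sql
-- Manual illustration of the rollback isolation tSQLt provides per test.
BEGIN TRANSACTION;

    -- Faking a table is a schema change (a rename plus a CREATE TABLE),
    -- and SQL Server can roll back schema changes inside a transaction.
    EXEC tSQLt.FakeTable 'dbo', 'Orders';
    INSERT INTO dbo.Orders (OrderId, CustomerId) VALUES (1, 13);

ROLLBACK TRANSACTION;
-- After the rollback, dbo.Orders is the original table again,
-- with its constraints, and the inserted row is gone.
```

Because every test is rolled back, no test can leave behind data or schema changes that would influence the next one.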
The tests should be written in parallel with the code. The best way to achieve test coverage of your code is to write only enough code to make a failing test pass. This is a critical component of test-driven development and will be discussed in great detail on this website. Writing test cases this way improves your ability to make changes downstream, because you can be confident that there is adequate test coverage. Furthermore, knowledge of test case identification heuristics will help you discover the correct tests to write and which test to write next.
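The rhythm looks something like this. The function name and test class here are hypothetical, chosen just to illustrate the cycle:

```sql
-- Step 1: write the test before dbo.IsWeekend exists.
-- Running it now fails, which is the point: red first.
CREATE PROCEDURE DateTests.[test: Saturday is a weekend day]
AS
BEGIN
    DECLARE @actual BIT = dbo.IsWeekend('2023-07-01'); -- a Saturday

    EXEC tSQLt.AssertEquals 1, @actual;
END;
GO

-- Step 2: write just enough of dbo.IsWeekend to make this test pass.
-- Step 3: run EXEC tSQLt.Run 'DateTests'; again and watch it go green,
--         then pick the next test.
```

Each pass through this loop adds one small, verified piece of behavior, which is how the coverage accumulates naturally.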
Pre-populated databases are still useful in other areas of application development. They are especially useful for performance and scalability testing. Usability testing and prototyping also benefit from pre-populated databases. They are also necessary for data-mining and discovery tasks, and for developing certain data-dependent algorithms. However, they get in your way when you're writing unit tests.
In the final post in this series, I’ll counter some of the criticisms of this approach to database unit testing.