Long Refactoring: Call a Unit Test

Our test takes 5 seconds to run. That is not horrible, but it does not bode well for when we write more tests. We don’t want to explode that number, and we want to be able to write more focused tests. I’m going to do some project setup that will enable further development, including fine-grained unit tests.

I’m going to start by using tox. I’ll grab a simplistic tox.ini I have from previous projects:

# tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = pep8,py37
skipsdist = True

[testenv:pep8]
commands =
  flake8 --ignore=D100,D101,D102,D103,D104,E305,E402,W503,W504,W605

[flake8]
filename = *.py
show-source = true
enable-extensions = H203,H904
ignore = D100,D101,D102,D103,D104,D203,E402,W503,W504

[testenv]
deps = -rrequirements.txt
commands =

I created a simple requirements.txt file that just has flake8 and the test runner in it for now. Ideally, these would live in a testing-only requirements file, but simplicity wins at this stage.
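A sketch of its contents (two entries, matching the two tox environments):

```
flake8
pytest
```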


I create a subdirectory called treesudoku, with no underscores or hyphens, for simplicity, and move all of the code there. Let’s run tox. There is a slew of output, and I am not going to look at it all now. For starters, the pep8 checks run by flake8 produce far more output than I want to deal with yet.
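The reshuffle can be sketched like this (a scratch-directory demo with stand-in files; the real move happened in the project checkout):

```shell
set -e
# Demo with stand-in files: move the flat .py files into the new
# package directory (no underscores or hyphens in its name).
mkdir -p scratch scratch/treesudoku
touch scratch/tree_sudoku.py scratch/test_tree_soduku.py
mv scratch/tree_sudoku.py scratch/test_tree_soduku.py scratch/treesudoku/
ls scratch/treesudoku
```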

treesudoku/test_tree_soduku.py:14: in <module>
    assert(len(check_data) == len(output))
E   AssertionError: assert 267 == 0
E    +  where 267 = len('\nBoard:0483921657967345821251876493548132976729564138136798245372689514814253769695417382\nBoard:1245981376169273584...1391657842728349165654812793\nBoard:2462831957795426183381795426173984265659312748248567319926178534834259671517643892')
E    +  and   0 = len('')

I am more concerned with that failing py37 test. We can see that we are not actually running our program. Maybe because it is in a different subdirectory? Maybe. But instead of guessing, let’s step through the code.

Edit the file to introduce the debugger, and step through it until we can reproduce the error:

(Pdb) print(p.stderr)
b'Traceback (most recent call last):\n  File "treesudoku/tree_sudoku.py", line 184, in <module>\n    x = SudokuSolver()\n  File "treesudoku/tree_sudoku.py", line 19, in __init__\n    self.board_strings = self.import_csv()\n  File "treesudoku/tree_sudoku.py", line 70, in import_csv\n    with open(\'sample_sudoku_board_inputs.csv\', \'r\') as file:\nFileNotFoundError: [Errno 2] No such file or directory: \'sample_sudoku_board_inputs.csv\'\n'
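The same information is available without staying in the debugger: subprocess.run captures the child process’s traceback on stderr, not stdout. A minimal sketch of that inspection, using a stand-in failing command rather than the real script:

```python
import subprocess
import sys

# Stand-in for tree_sudoku.py failing to find its CSV: a child process
# that dies trying to open a file that does not exist.
p = subprocess.run(
    [sys.executable, "-c", "open('no_such_file.csv')"],
    capture_output=True,
)
print(p.returncode)       # non-zero: the child crashed
print(p.stderr.decode())  # the FileNotFoundError traceback lands on stderr
```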

It turns out I had moved the test data file during the reorganization, so I move it back. The test runs, but still produces an error line in the output. And yet I can still run the test by hand:

python3 ./treesudoku/test_tree_soduku.py
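That by-hand invocation only works because I run it from the directory that holds the CSV; import_csv opens 'sample_sudoku_board_inputs.csv' with a bare relative path, which resolves against the current working directory. A cwd-independent alternative (a sketch, not what the code currently does) anchors the path to the module itself:

```python
import os

# Hypothetical fix: resolve the data file relative to this module's
# own location instead of the current working directory.
HERE = os.path.dirname(os.path.abspath(__file__))
CSV_PATH = os.path.join(HERE, "sample_sudoku_board_inputs.csv")
print(CSV_PATH)  # absolute, no matter where the test runner was started
```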

Let’s convert that test file into something that the test framework can run. Commit to git before continuing.

Take the body of the test and move it into a function whose name starts with test_. The testing framework will then pick it up automatically.

diff --git a/treesudoku/test_tree_soduku.py b/treesudoku/test_tree_soduku.py
index f4d8532..c36355c 100644
--- a/treesudoku/test_tree_soduku.py
+++ b/treesudoku/test_tree_soduku.py
@@ -3,16 +3,20 @@ check_data ="""
-print ("Running Test")
-p = subprocess.run(["python3", "treesudoku/tree_sudoku.py"], capture_output=True)
-output = p.stdout.decode("utf-8").split("\n")
-output = "".join(output[:-2])
-output = output.replace("-","").replace("|","")
-output = output.replace(" ","").replace("\n","")
-output = output.replace("Board","\nBoard")
-print("comparing output ")
-assert(len(check_data) == len(output))
-assert(check_data == output)
+def test_end_to_end():
+    print ("Running Test")
+    p = subprocess.run(["python3", "treesudoku/tree_sudoku.py"], capture_output=True)
+    output = p.stdout.decode("utf-8").split("\n")
+    output = "".join(output[:-2])
+    output = output.replace("-","").replace("|","")
+    output = output.replace(" ","").replace("\n","")
+    output = output.replace("Board","\nBoard")
+    print("comparing output ")
+    assert(len(check_data) == len(output))
+    assert(check_data == output)
+    print("OK")

Now I can run it the old way or via tox.

==================================================================== test session starts =====================================================================
platform linux -- Python 3.7.9, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
cachedir: .tox/py37/.pytest_cache
rootdir: /home/ayoung/Documents/CodePlatoon/tree_sudoku
collected 1 item                                                                                                                                             
treesudoku/test_tree_soduku.py .                                                                                                                       [100%]
===================================================================== 1 passed in 13.53s =====================================================================
__________________________________________________________________________ summary ___________________________________________________________________________
ERROR:   pep8: commands failed
  py37: commands succeeded

Now let’s see what it takes to add a new unit test.
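As a preview, the kind of focused test I am after pulls a small piece of logic out of the end-to-end flow and exercises it directly, so it runs in milliseconds instead of seconds. A sketch, extracting the output-scrubbing that the end-to-end test currently does inline (the helper name is mine, not in the code yet):

```python
def strip_formatting(text):
    # Hypothetical helper: the same cleanup the end-to-end test does
    # inline, stripping board-drawing characters and whitespace.
    for ch in "-| \n":
        text = text.replace(ch, "")
    return text


def test_strip_formatting():
    # No subprocess, no CSV file: just the string transformation.
    assert strip_formatting("4 8 3 | 9 2 1\n------") == "483921"
```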
