Running the Keystone unit tests takes a long time.
To start with a blank slate, you want to make sure you have the latest from master and a clean git repository.
cd /opt/stack/keystone
git checkout master
git rebase origin/master
git clean -xdf keystone/
time tox -r
...
py27: commands succeeded
ERROR: py34: commands failed
pep8: commands succeeded
docs: commands succeeded
genconfig: commands succeeded

real    8m17.530s
user    33m1.851s
sys     0m56.828s
The -r option to tox recreates the tox virtual environments. Additional runs should go faster:
time tox
…
py27: commands succeeded
ERROR: py34: commands failed
pep8: commands succeeded
docs: commands succeeded
genconfig: commands succeeded

real    5m52.367s
user    30m57.366s
sys     0m35.403s
To run just the py27 tests:
time tox -e py27
...
Ran: 5695 tests in 243.0000 sec.
...
py27: commands succeeded
congratulations :)

real    4m18.144s
user    28m51.506s
sys     0m31.286s
Not much faster, so we know where most of the time goes. It also reported the slowest tests:
keystone.tests.unit.token.test_fernet_provider.TestFernetKeyRotation.test_rotation 2.856
So we have 5000+ tests that take 4 minutes to run.
Running just a single test:
time tox -e py27 -- keystone.tests.unit.token.test_fernet_provider.TestFernetKeyRotation.test_rotation
...
======
Totals
======
Ran: 1 tests in 4.0000 sec.
...
py27: commands succeeded
congratulations :)

real    0m17.200s
user    0m15.802s
sys     0m1.681s
17 seconds is a little long, considering the test itself ran for only four of those seconds. Paying that once in a while is not a problem, but if it breaks the flow of thought during coding, it is a problem.
What can we shave off? Let's see if we can avoid the discovery step, run inside the venv, and specify exactly the test we want to run:
. .tox/py27/bin/activate
time python -m testtools.run keystone.tests.unit.token.test_fernet_provider.TestFernetKeyRotation.test_rotation
Tests running...

Ran 1 test in 2.770s
OK

real    0m3.137s
user    0m2.708s
sys     0m0.428s
That run had only about a second of overhead.
OK, what about some of the end-to-end tests that set up an HTTP listener and talk to the database, such as those in keystone.tests.unit.test_v3_auth?
time python -m testtools.run keystone.tests.unit.test_v3_auth
Tests running...

Ran 329 tests in 91.925s
OK

real    1m32.459s
user    1m28.260s
sys     0m4.669s
Fast enough for a pre-commit check, but not for “run after each change.” How about a single test?
time python -m testtools.run keystone.tests.unit.test_v3_auth.TestAuth.test_disabled_default_project_domain_result_in_unscoped_token
Tests running...

Ran 1 test in 0.965s
OK

real    0m1.382s
user    0m1.308s
sys     0m0.076s
I think it is important to run the tests before you write a line of code, and to run them continuously. But if you don't run the entire body of unit tests, how can you make sure you are exercising the code you wrote? One technique is to put in a breakpoint.
I want to work on the roles infrastructure. Specifically, I want to make the assignment of one (prior) role imply the assignment of another (inferred) role. I won't go into the whole design, but I will start with the database structure. Role inference is a many-to-many relationship. As such, I need to implement a table with two IDs: prior_role_id and inferred_role_id. Let's start with the database migrations for that.
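To make that concrete, here is a rough sketch of the migration I am working toward, in the style of the existing Keystone migrations. The table name, column types, and constraints here are my working assumptions, not a finished design:

import sqlalchemy as sql


def upgrade(migrate_engine):
    meta = sql.MetaData()
    meta.bind = migrate_engine

    # Each row records that holding the prior role implies holding
    # the inferred role. Making both IDs the primary key keeps a
    # (prior, inferred) pair from being recorded twice.
    implied_role = sql.Table(
        'implied_role', meta,
        sql.Column('prior_role_id', sql.String(64), primary_key=True),
        sql.Column('inferred_role_id', sql.String(64), primary_key=True))
    implied_role.create(migrate_engine, checkfirst=True)

First, a baseline run of the existing migration tests: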
time python -m testtools.run keystone.tests.unit.test_sql_upgrade
Tests running...

Ran 30 tests in 3.528s
OK

real    0m3.948s
user    0m3.874s
sys     0m0.075s
OK… full disclosure: I'm writing this because I did too much before writing tests, my tests were hanging, and I want to redo things slower and more controlled to find out what went wrong. I have some placeholder migrations: a way to keep from having to renumber the migration in my review as other reviews get merged. They just execute:
def upgrade(migrate_engine):
    pass
So… I'm going to cherry-pick this commit and run the migration test.
migrate.exceptions.ScriptError: You can only have one Python script per version, but you have: /opt/stack/keystone/keystone/common/sql/migrate_repo/versions/075_placeholder.py and /opt/stack/keystone/keystone/common/sql/migrate_repo/versions/075_confirm_config_registration.py
Master has already caught up with me: another patch has claimed migration 075, so my placeholder needs a new number…
$ git mv keystone/common/sql/migrate_repo/versions/075_placeholder.py keystone/common/sql/migrate_repo/versions/078_placeholder.py
(py27)[ayoung@ayoung541 keystone]$ time python -m testtools.run keystone.tests.unit.test_sql_upgrade
Tests running...

Ran 30 tests in 3.576s
OK

real    0m4.028s
user    0m3.951s
sys     0m0.081s
OK… let's see what happens if I put a breakpoint in one of these placeholder migrations.
def upgrade(migrate_engine):
    import pdb; pdb.set_trace()
And run:
(py27)[ayoung@ayoung541 keystone]$ time python -m testtools.run keystone.tests.unit.test_sql_upgrade
Tests running...
--Return--
> /opt/stack/keystone/keystone/common/sql/migrate_repo/versions/078_placeholder.py(18)upgrade()->None
-> import pdb; pdb.set_trace()
Ctrl-C kills the test (or type cont to keep running). This may not always work: some of the more complex tests manipulate the threading libraries, and that can keep the breakpoint from interrupting the running thread. For those cases, use rpdb and telnet.
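As a minimal sketch of the rpdb approach (assuming rpdb is installed in the venv; it is not part of Keystone's requirements), replace the pdb line with:

# rpdb listens on a socket (127.0.0.1:4444 by default) instead of
# using stdin/stdout, so it still works when the test harness has
# redirected or replaced the standard streams.
import rpdb; rpdb.set_trace()

# Then attach from a second terminal:
#   telnet 127.0.0.1 4444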
More info about running the tests in OpenStack can be found here: https://wiki.openstack.org/wiki/Testr
I wrote about using rpdb to debug here: http://adam.younglogic.com/2015/02/debugging-openstack-with-rpdb/