Ansible has a very neat feature called “fact gathering”, which collects useful information from hosts before executing any tasks and makes that information available for use within those tasks. Unfortunately, it relies on Python being available on the remote machine, which doesn’t work for Cisco IOS. In this post I’ll show how to write a simple module that collects IP address information from remote devices and stores it in a global variable for future use. I’ll also show how to write a module that converts our human-readable TDD scenarios into YAML structures. As always, the full code repository is available on GitHub.
Cisco IOS IP fact gathering
In order to recognise that a traceroute has traversed a certain device, without relying on DNS, we need to populate a local database mapping IP addresses to their respective devices. The resulting database (or YAML dictionary) needs to be stored in a file so that it can be read and used again by Ansible tasks doing the traceroute verification. In order to make it happen, we need to answer the following questions:
- How to get IP address information from each device?
The most straightforward way is to capture the output of something like `show ip interface brief` and parse it. The assumption is that all devices live in a non-overlapping IP address space (although it is possible to modify the examples to be vrf-aware).
- Where to store the information?
Ideally, we need a hash-like data structure (e.g. a Python dictionary) that returns a hostname when given an IP address. This data structure needs to be available to all hosts; however, most variables in Ansible are host-specific. The only way to simulate a global variable in Ansible is to store the data in the `group_vars/all.yml` file, which is exactly what our module will do.
- How will multiple processes write into a single file at the same time?
That’s where Ansible’s concurrency feature bites back. Concurrent writes to a shared file are a well-known computer science problem, and the classic solution is a mutex; however, that’s beyond what Ansible can do. To overcome this, I’ll make Ansible run the tasks sequentially, which will dramatically slow things down in bigger environments. However, this task only needs to run once, to collect the data, while all the other tasks can run in parallel, in separate playbooks.
Developing Ansible playbook
Our Ansible playbook will need to accomplish the following tasks:
- Capture the output of the `show ip interface brief` command
- Parse the output captured in the previous step
- Save the result in the `group_vars/all.yml` file

All these tasks will need to run sequentially on every host in the `cisco-devices` group. To get the output from a Cisco device we’ll use the `raw` module again. The other two tasks don’t require a connection to the remote device and will run on localhost by virtue of the `delegate_to: 127.0.0.1` option.
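A minimal sketch of such a playbook, assuming hypothetical module names (`cisco_ip_intf_facts` and `cisco_ip_intf_update` are placeholders for the custom modules developed below):

```yaml
# cisco-ip-collect.yml -- collect IP facts from every IOS device
- hosts: cisco-devices
  gather_facts: no        # no Python on IOS, so built-in fact gathering is off
  serial: 1               # hosts write to group_vars/all.yml one at a time
  tasks:
    - name: capture the output of show ip interface brief
      raw: show ip interface brief
      register: siib

    - name: parse the captured output into the IPs fact
      cisco_ip_intf_facts: output="{{ siib.stdout }}"
      delegate_to: 127.0.0.1

    - name: save the parsed facts into the global variable file
      cisco_ip_intf_update: file=group_vars/all.yml hostname={{ inventory_hostname }} ips="{{ IPs }}"
      delegate_to: 127.0.0.1
```

`serial: 1` is what enforces the sequential execution discussed above.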
Writing a custom Ansible module
Ansible has an official guide on module development. A typical module contains a header with license information, module documentation and usage examples, a `main()` function that processes the arguments passed to the module by Ansible and, of course, the actual code implementing the module’s logic. For the sake of brevity I will omit the header and some of the less important details in the code.
Ansible module to parse command output
This Ansible module needs to extract the IP address and, optionally, the interface name from the output of `show ip interface brief` and store them in a Python dictionary. The right place to start examining the module’s code is the `main()` function. It contains a `module` variable (an instance of `AnsibleModule`) which specifies all the arguments expected by the module along with their types (each will be converted to the appropriate Python type). The text parser is implemented in a `SIIBparse` class whose only public method, `parse()`, traverses the text line by line looking for interfaces with Line Protocol in the `up` state, extracts the interface name (1st column) and IP address (2nd column), and stores the result in a Python dictionary with the IP address as the key and the interface name as its value.
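A minimal sketch of such a module, assuming a simple whitespace-split parser (the argument name `output` and the exact filtering rules are my assumptions):

```python
import re


class SIIBparse(object):
    """Parses 'show ip interface brief' output into an {ip: interface} dict."""

    def __init__(self, text):
        self.text = text

    def parse(self):
        result = {}
        for line in self.text.splitlines():
            columns = line.split()
            # Expected columns: Interface, IP-Address, OK?, Method, Status, Protocol
            if len(columns) < 6:
                continue
            interface, ip_address, protocol = columns[0], columns[1], columns[-1]
            # Skip the header row, unassigned interfaces and down line protocols
            if ip_address == "unassigned" or protocol != "up":
                continue
            if not re.match(r"\d+\.\d+\.\d+\.\d+$", ip_address):
                continue
            result[ip_address] = interface
        return len(result) > 0, result


def main():
    # Imported here so the parser class stays usable outside Ansible
    from ansible.module_utils.basic import AnsibleModule
    module = AnsibleModule(argument_spec=dict(
        output=dict(required=True, type='str'),
    ))
    rc, ips = SIIBparse(module.params['output']).parse()
    if rc:
        module.exit_json(changed=False, ansible_facts={'IPs': ips})
    else:
        module.fail_json(msg="Failed to parse 'show ip interface brief' output")


if __name__ == '__main__':
    main()
```

`parse()` returns a success flag along with the dictionary, so `main()` can decide whether to call `exit_json` or `fail_json`.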
If the information passed to the module in its arguments is invalid, the module must fail with a meaningful message passed inside a `fail_json` method call. When parsing is complete, the module exits and the resulting data structure is passed back into Ansible variables via the `ansible_facts` argument. Now all hosts can access it through a variable called `IPs`.
Ansible module to save IP address information
The task of this module is to take the information collected in each host’s `IPs` variable, combine it with the device’s hostname and save it in the `group_vars/all.yml` file. The module makes use of Python’s yaml library. A purpose-built `FactUpdater` class can `read()`, `update()` and `write()` the contents of the global variable file.
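A sketch of what `FactUpdater` might look like, assuming PyYAML is installed (the top-level key `ip2host` and the argument names are my own illustrations):

```python
import os

import yaml  # PyYAML


class FactUpdater(object):
    """Reads, updates and writes the global variable file (group_vars/all.yml)."""

    def __init__(self, filename):
        self.filename = filename
        self.facts = {}

    def read(self):
        # A missing or empty file simply yields an empty fact dictionary
        if os.path.exists(self.filename):
            with open(self.filename) as f:
                self.facts = yaml.safe_load(f) or {}

    def update(self, hostname, ips):
        # Map every collected IP address to its [hostname, interface] pair
        ip2host = self.facts.setdefault('ip2host', {})
        for ip_address, interface in ips.items():
            ip2host[ip_address] = [hostname, interface]

    def write(self):
        with open(self.filename, 'w') as f:
            yaml.safe_dump(self.facts, f, default_flow_style=False)


def main():
    from ansible.module_utils.basic import AnsibleModule
    module = AnsibleModule(argument_spec=dict(
        file=dict(required=True, type='str'),
        hostname=dict(required=True, type='str'),
        ips=dict(required=True, type='dict'),
    ))
    updater = FactUpdater(module.params['file'])
    updater.read()
    updater.update(module.params['hostname'], module.params['ips'])
    updater.write()
    module.exit_json(changed=True)


if __name__ == '__main__':
    main()
```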
This module only performs actions on a local file and does not return any output to Ansible.
Read and parse TDD scenarios
Finally, since we’re modifying the Ansible global variable file anyway, it makes sense to also update it with the testing scenario information. Technically, this step doesn’t need to be done in Ansible and could be accomplished with a simple Python or Bash script, but I’ll still show it here to demonstrate two additional Ansible features. The first one is `local_action: module_name`, which is a shorthand for the `delegate_to` option (see above). The second feature is `tags`, which allows you to select which plays to run in a playbook containing several of them. In our case a single file, `cisco-ip-collect.yml`, will have two plays defined and will run both of them by default, unless a specific play is selected with the `--tags` option.
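A sketch of this second play (the module name `scenario_parse`, its arguments and the tag name are my own placeholders):

```yaml
# second play in cisco-ip-collect.yml -- convert TDD scenarios to YAML
- hosts: localhost
  gather_facts: no
  tags: scenarios
  tasks:
    - name: parse TDD scenarios and update group_vars/all.yml
      local_action: scenario_parse file=scenarios/all.txt vars_file=group_vars/all.yml
```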
This play has a single task which runs a single custom module. Before we proceed to the module, let’s see what a typical testing scenario file looks like.
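The exact syntax below is my assumption, based on the parser’s description (numbered scenario names, dotted-number steps, `#` comments); a scenario file could look like this:

```
# TDD scenarios for the primary/backup link test
1. Testing of primary link
1.1 From R1 to R3 via R2
1.2 From R3 to R1 via R2
2. Testing of backup link
2.1 From R1 to R3 via R4
2.2 From R3 to R1 via R4
```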
The file should be stored in the `scenarios/` directory and should be named `all.txt`. It contains a list of scenarios, each with its own name and a list of test steps that need to be performed to validate that scenario. The parser for this file is a custom Python module which opens and reads the contents of the `group_vars/all.yml` file, parses the scenarios file with the help of some ugly-looking regular expressions and, finally, saves the updated Ansible group variables back to the file.
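A sketch of the parsing logic, assuming the scenario file format illustrated above (class and regex names are my own; saving the result back to `group_vars/all.yml` would mirror the `FactUpdater` approach):

```python
import re


class ScenarioParser(object):
    """Converts human-readable TDD scenarios into a dict keyed by scenario number."""

    # "1. Testing of primary link" -> scenario 1, name "Testing of primary link"
    rule_name = re.compile(r'^(\d+)\.\s+(\S.*)$')
    # "1.1 From R1 to R3 via R2"  -> scenario 1, step 1, text "From R1 to R3 via R2"
    rule_step = re.compile(r'^(\d+)\.(\d+)\s+(\S.*)$')

    def __init__(self, text):
        self.text = text
        self.scenarios = {}

    def read(self):
        for line in self.text.splitlines():
            line = line.strip()
            # Ignore comments and lines too short to hold a name or a step
            if line.startswith('#') or len(line) < 4:
                continue
            step_match = self.rule_step.match(line)
            if step_match:
                number, step, text = step_match.groups()
                entry = self.scenarios.setdefault(int(number), ['', {}])
                entry[1][int(step)] = text
                continue
            name_match = self.rule_name.match(line)
            if name_match:
                number, name = name_match.groups()
                entry = self.scenarios.setdefault(int(number), ['', {}])
                entry[0] = name
        return self.scenarios
```

Note that the two regular expressions are mutually exclusive: a step line like `1.1 ...` cannot match the name rule because a digit, not whitespace, follows the first dot.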
The biggest portion of the code is the read() method of the parser, which does the following:
- scans the text file line by line, ignoring lines that start with `#` or are too short to contain a scenario name or step
- matches each line against pre-compiled regular expressions for a scenario name or a scenario step (an online regex tester is a very helpful tool here)
- saves the data in a Python dictionary whose keys are scenario numbers and whose values are lists consisting of a scenario name (1st element) and a dictionary of scenario steps (2nd element)
The end result of running both the IP address collection and the scenario conversion plays is an Ansible group variable file that looks like this:
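The top-level key names and sample values below are my own illustration of the structure described above; the actual repository may use different ones:

```yaml
# group_vars/all.yml -- generated by the two plays above
ip2host:
  10.0.0.1: [R1, Loopback0]
  12.12.12.1: [R1, Ethernet0/0]
  12.12.12.2: [R2, Ethernet0/0]
  23.23.23.3: [R3, Ethernet0/1]
scenarios:
  1:
  - Testing of primary link
  - 1: From R1 to R3 via R2
    2: From R3 to R1 via R2
  2:
  - Testing of backup link
  - 1: From R1 to R3 via R4
```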
The next post, the final one in this series, will show how to write an Ansible play that validates the TDD scenarios and produces a meaningful error message when they fail.