root@controller01:/opt/openstack-ansible/playbooks# openstack-ansible setup-hosts.yml
Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml "
[DEPRECATION WARNING]: 'include' for playbook includes. You should use 'import_playbook' instead. This feature will
be removed in version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Install Ansible prerequisites] *********************************************************************************

TASK [Ensure python is installed] ************************************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

PLAY [Basic host setup] **********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************
ok: [compute01]
ok: [network01]
ok: [controller01]

TASK [Check for a supported Operating System] ************************************************************************
ok: [controller01] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [compute01] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [network01] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Remove apt package manager proxy] ******************************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [Update apt when proxy is added/removed] ************************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|changed` instead use `result is
changed`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.

TASK [Remove yum package manager proxy] ******************************************************************************

TASK [Remove dnf package manager proxy] ******************************************************************************

TASK [Backup the default pip_install_upper_constraints] **************************************************************
ok: [controller01]

TASK [Backup the default pip_default_index] **************************************************************************
ok: [controller01]

TASK [Test internal repo URL for the current upper constraints file] *************************************************
ok: [controller01]

TASK [Remove global requirement pins file from host] *****************************************************************

TASK [Copy global requirement pins file to host] *********************************************************************
ok: [compute01]
changed: [network01]
changed: [controller01]

TASK [Set pip install upper constraints] *****************************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|version_compare` instead use
`result is version_compare`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ok: [controller01]

TASK [Fall back to repo_build_pip_default_index] *********************************************************************
ok: [controller01]

TASK [apt_package_pinning : Add apt pin preferences] *****************************************************************

TASK [openstack_hosts : Gather variables for each operating system] **************************************************
ok: [controller01] => (item=/etc/ansible/roles/openstack_hosts/vars/ubuntu-18.04.yml)
ok: [compute01] => (item=/etc/ansible/roles/openstack_hosts/vars/ubuntu-18.04.yml)
ok: [network01] => (item=/etc/ansible/roles/openstack_hosts/vars/ubuntu-18.04.yml)

TASK [openstack_hosts : Allow the usage of local facts] **************************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [openstack_hosts : include_tasks] *******************************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_release.yml for controller01, compute01, network01

TASK [openstack_hosts : Drop openstack release file] *****************************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [openstack_hosts : Remove legacy openstack release file] ********************************************************

TASK [openstack_hosts : Add global_environment_variables to environment file] ****************************************
ok: [compute01]
changed: [network01]
changed: [controller01]

TASK [openstack_hosts : Configure etc hosts files] *******************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_update_hosts_file.yml for controller01, compute01, network01

TASK [openstack_hosts : Drop hosts file entries script locally] ******************************************************
changed: [controller01 -> localhost]

TASK [openstack_hosts : Copy templated hosts file entries script] ****************************************************
changed: [controller01]
changed: [compute01]
changed: [network01]

TASK [openstack_hosts : Stat host file] ******************************************************************************
ok: [compute01]
ok: [controller01]
ok: [network01]

TASK [openstack_hosts : Update hosts file] ***************************************************************************
changed: [compute01]
changed: [network01]
changed: [controller01]

TASK [openstack_hosts : Apply package management distro specific configuration] **************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_hosts_configure_apt.yml for controller01, compute01, network01

TASK [openstack_hosts : Remove the blacklisted packages] *************************************************************
ok: [compute01]
ok: [controller01]
ok: [network01]

TASK [openstack_hosts : Add/Remove repositories gpg keys manually] ***************************************************

TASK [openstack_hosts : Add requirement packages (repositories gpg keys, toolkits...)] *******************************
changed: [network01]
ok: [compute01]
changed: [controller01]

TASK [openstack_hosts : Remove any old UCA repository using the old filename] ****************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [openstack_hosts : Add/Remove/Update standard and user defined repositories] ************************************
ok: [compute01] => (item={u'repo': u'deb http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky main', u'state': u'present', u'filename': u'uca'})
changed: [network01] => (item={u'repo': u'deb http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky main', u'state': u'present', u'filename': u'uca'})
changed: [controller01] => (item={u'repo': u'deb http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky main', u'state': u'present', u'filename': u'uca'})

TASK [openstack_hosts : Update Apt cache] ****************************************************************************
skipping: [compute01]
ok: [network01]
ok: [controller01]

TASK [openstack_hosts : include_tasks] *******************************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/configure_metal_hosts.yml for controller01, compute01, network01

TASK [openstack_hosts : Check Kernel Version] ************************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|version_compare` instead use
`result is version_compare`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.

TASK [openstack_hosts : Disable cache for apt update for hosts] ******************************************************

TASK [openstack_hosts : Install distro packages for bare metal nodes] ************************************************
ok: [compute01]
changed: [network01]
changed: [controller01]

TASK [openstack_hosts : check how kernel modules are implemented (statically builtin, dynamic, not set)] *************
skipping: [controller01]
skipping: [compute01]
ok: [network01]

TASK [openstack_hosts : Fail fast if we can't load a module] *********************************************************
skipping: [network01] => (item={u'pattern': u'CONFIG_BRIDGE_NF_EBTABLES', u'name': u'ebtables'})

TASK [openstack_hosts : Load kernel module(s)] ***********************************************************************
ok: [controller01] => (item={u'name': u'8021q'})
ok: [network01] => (item={u'name': u'8021q'})
changed: [controller01] => (item={u'name': u'br_netfilter'})
changed: [network01] => (item={u'name': u'br_netfilter'})
changed: [network01] => (item={u'name': u'dm_multipath'})
changed: [controller01] => (item={u'name': u'dm_multipath'})
changed: [network01] => (item={u'name': u'dm_snapshot'})
changed: [controller01] => (item={u'name': u'dm_snapshot'})
changed: [network01] => (item={u'name': u'ebtables'})
changed: [controller01] => (item={u'name': u'ebtables'})
ok: [compute01] => (item={u'name': u'8021q'})
changed: [network01] => (item={u'name': u'ip6table_filter'})
ok: [compute01] => (item={u'name': u'br_netfilter'})
ok: [network01] => (item={u'name': u'ip6_tables'})
ok: [compute01] => (item={u'name': u'dm_multipath'})
changed: [controller01] => (item={u'name': u'ip6table_filter'})
ok: [network01] => (item={u'name': u'ip_tables'})
ok: [compute01] => (item={u'name': u'dm_snapshot'})
ok: [controller01] => (item={u'name': u'ip6_tables'})
changed: [network01] => (item={u'name': u'ipt_MASQUERADE'})
ok: [compute01] => (item={u'name': u'ebtables'})
ok: [controller01] => (item={u'name': u'ip_tables'})
changed: [network01] => (item={u'name': u'ipt_REJECT'})
ok: [compute01] => (item={u'name': u'ip6table_filter'})
ok: [compute01] => (item={u'name': u'ip6_tables'})
changed: [network01] => (item={u'name': u'iptable_filter'})
changed: [controller01] => (item={u'name': u'ipt_MASQUERADE'})
ok: [compute01] => (item={u'name': u'ip_tables'})
ok: [compute01] => (item={u'name': u'ipt_MASQUERADE'})
changed: [controller01] => (item={u'name': u'ipt_REJECT'})
changed: [network01] => (item={u'name': u'iptable_mangle'})
ok: [compute01] => (item={u'name': u'ipt_REJECT'})
ok: [compute01] => (item={u'name': u'iptable_filter'})
changed: [controller01] => (item={u'name': u'iptable_filter'})
changed: [network01] => (item={u'name': u'iptable_nat'})
ok: [compute01] => (item={u'name': u'iptable_mangle'})
ok: [compute01] => (item={u'name': u'iptable_nat'})
changed: [controller01] => (item={u'name': u'iptable_mangle'})
ok: [compute01] => (item={u'name': u'ip_vs'})
changed: [network01] => (item={u'name': u'ip_vs'})
ok: [compute01] => (item={u'name': u'iscsi_tcp'})
changed: [controller01] => (item={u'name': u'iptable_nat'})
ok: [compute01] => (item={u'name': u'nbd'})
ok: [network01] => (item={u'name': u'iscsi_tcp'})
ok: [compute01] => (item={u'name': u'nf_conntrack'})
changed: [network01] => (item={u'name': u'nbd'})
ok: [compute01] => (item={u'name': u'nf_conntrack_ipv4'})
ok: [network01] => (item={u'name': u'nf_conntrack'})
ok: [compute01] => (item={u'name': u'nf_conntrack_ipv6'})
changed: [controller01] => (item={u'name': u'ip_vs'})
ok: [network01] => (item={u'name': u'nf_conntrack_ipv4'})
ok: [compute01] => (item={u'name': u'nf_defrag_ipv4'})
ok: [controller01] => (item={u'name': u'iscsi_tcp'})
changed: [network01] => (item={u'name': u'nf_conntrack_ipv6'})
ok: [compute01] => (item={u'name': u'nf_nat'})
ok: [network01] => (item={u'name': u'nf_defrag_ipv4'})
ok: [compute01] => (item={u'name': u'nf_nat_ipv4'})
changed: [controller01] => (item={u'name': u'nbd'})
ok: [network01] => (item={u'name': u'nf_nat'})
ok: [compute01] => (item={u'name': u'vhost_net'})
ok: [controller01] => (item={u'name': u'nf_conntrack'})
ok: [compute01] => (item={u'name': u'x_tables'})
ok: [network01] => (item={u'name': u'nf_nat_ipv4'})
ok: [controller01] => (item={u'name': u'nf_conntrack_ipv4'})
changed: [network01] => (item={u'name': u'vhost_net'})
changed: [controller01] => (item={u'name': u'nf_conntrack_ipv6'})
ok: [network01] => (item={u'name': u'x_tables'})
ok: [controller01] => (item={u'name': u'nf_defrag_ipv4'})
ok: [network01] => (item={u'pattern': u'CONFIG_BRIDGE_NF_EBTABLES', u'name': u'ebtables'})
ok: [controller01] => (item={u'name': u'nf_nat'})
ok: [controller01] => (item={u'name': u'nf_nat_ipv4'})
changed: [controller01] => (item={u'name': u'vhost_net'})
ok: [controller01] => (item={u'name': u'x_tables'})

TASK [openstack_hosts : Write list of modules to load at boot] *******************************************************
ok: [compute01]
changed: [controller01]
changed: [network01]

TASK [openstack_hosts : Adding new system tuning] ********************************************************************
ok: [compute01] => (item={u'value': 36864, u'key': u'fs.inotify.max_user_watches'})
ok: [compute01] => (item={u'value': 0, u'key': u'net.ipv4.conf.all.rp_filter'})
changed: [network01] => (item={u'value': 36864, u'key': u'fs.inotify.max_user_watches'})
ok: [compute01] => (item={u'value': 0, u'key': u'net.ipv4.conf.default.rp_filter'})
changed: [network01] => (item={u'value': 0, u'key': u'net.ipv4.conf.all.rp_filter'})
changed: [controller01] => (item={u'value': 36864, u'key': u'fs.inotify.max_user_watches'})
ok: [compute01] => (item={u'value': 1, u'key': u'net.ipv4.ip_forward'})
changed: [network01] => (item={u'value': 0, u'key': u'net.ipv4.conf.default.rp_filter'})
changed: [controller01] => (item={u'value': 0, u'key': u'net.ipv4.conf.all.rp_filter'})
ok: [compute01] => (item={u'value': 262144, u'key': u'net.netfilter.nf_conntrack_max'})
changed: [network01] => (item={u'value': 1, u'key': u'net.ipv4.ip_forward'})
ok: [compute01] => (item={u'value': 5, u'key': u'vm.dirty_background_ratio'})
changed: [network01] => (item={u'value': 262144, u'key': u'net.netfilter.nf_conntrack_max'})
ok: [compute01] => (item={u'value': 10, u'key': u'vm.dirty_ratio'})
changed: [controller01] => (item={u'value': 0, u'key': u'net.ipv4.conf.default.rp_filter'})
changed: [network01] => (item={u'value': 5, u'key': u'vm.dirty_background_ratio'})
ok: [compute01] => (item={u'value': 5, u'key': u'vm.swappiness'})
changed: [controller01] => (item={u'value': 1, u'key': u'net.ipv4.ip_forward'})
ok: [compute01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-ip6tables'})
changed: [network01] => (item={u'value': 10, u'key': u'vm.dirty_ratio'})
changed: [controller01] => (item={u'value': 262144, u'key': u'net.netfilter.nf_conntrack_max'})
ok: [compute01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-iptables'})
changed: [controller01] => (item={u'value': 5, u'key': u'vm.dirty_background_ratio'})
changed: [network01] => (item={u'value': 5, u'key': u'vm.swappiness'})
ok: [compute01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-arptables'})
changed: [controller01] => (item={u'value': 10, u'key': u'vm.dirty_ratio'})
changed: [network01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-ip6tables'})
ok: [compute01] => (item={u'value': u'4096', u'key': u'net.ipv4.neigh.default.gc_thresh1'})
changed: [controller01] => (item={u'value': 5, u'key': u'vm.swappiness'})
changed: [network01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-iptables'})
ok: [compute01] => (item={u'value': u'8192', u'key': u'net.ipv4.neigh.default.gc_thresh2'})
changed: [controller01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-ip6tables'})
changed: [network01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-arptables'})
ok: [compute01] => (item={u'value': u'16384', u'key': u'net.ipv4.neigh.default.gc_thresh3'})
changed: [controller01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-iptables'})
changed: [network01] => (item={u'value': u'2048', u'key': u'net.ipv4.neigh.default.gc_thresh1'})
ok: [compute01] => (item={u'value': u'16384', u'key': u'net.ipv4.route.gc_thresh'})
changed: [controller01] => (item={u'value': 1, u'key': u'net.bridge.bridge-nf-call-arptables'})
changed: [network01] => (item={u'value': u'4096', u'key': u'net.ipv4.neigh.default.gc_thresh2'})
ok: [compute01] => (item={u'value': 60, u'key': u'net.ipv4.neigh.default.gc_interval'})
changed: [controller01] => (item={u'value': u'4096', u'key': u'net.ipv4.neigh.default.gc_thresh1'})
ok: [compute01] => (item={u'value': 120, u'key': u'net.ipv4.neigh.default.gc_stale_time'})
changed: [network01] => (item={u'value': u'8192', u'key': u'net.ipv4.neigh.default.gc_thresh3'})
changed: [controller01] => (item={u'value': u'8192', u'key': u'net.ipv4.neigh.default.gc_thresh2'})
ok: [compute01] => (item={u'value': u'4096', u'key': u'net.ipv6.neigh.default.gc_thresh1'})
changed: [network01] => (item={u'value': u'8192', u'key': u'net.ipv4.route.gc_thresh'})
changed: [controller01] => (item={u'value': u'16384', u'key': u'net.ipv4.neigh.default.gc_thresh3'})
ok: [compute01] => (item={u'value': u'8192', u'key': u'net.ipv6.neigh.default.gc_thresh2'})
changed: [network01] => (item={u'value': 60, u'key': u'net.ipv4.neigh.default.gc_interval'})
changed: [controller01] => (item={u'value': u'16384', u'key': u'net.ipv4.route.gc_thresh'})
ok: [compute01] => (item={u'value': u'16384', u'key': u'net.ipv6.neigh.default.gc_thresh3'})
changed: [network01] => (item={u'value': 120, u'key': u'net.ipv4.neigh.default.gc_stale_time'})
changed: [controller01] => (item={u'value': 60, u'key': u'net.ipv4.neigh.default.gc_interval'})
ok: [compute01] => (item={u'value': u'16384', u'key': u'net.ipv6.route.gc_thresh'})
changed: [network01] => (item={u'value': u'2048', u'key': u'net.ipv6.neigh.default.gc_thresh1'})
changed: [controller01] => (item={u'value': 120, u'key': u'net.ipv4.neigh.default.gc_stale_time'})
ok: [compute01] => (item={u'value': 60, u'key': u'net.ipv6.neigh.default.gc_interval'})
changed: [network01] => (item={u'value': u'4096', u'key': u'net.ipv6.neigh.default.gc_thresh2'})
ok: [compute01] => (item={u'value': 120, u'key': u'net.ipv6.neigh.default.gc_stale_time'})
changed: [controller01] => (item={u'value': u'4096', u'key': u'net.ipv6.neigh.default.gc_thresh1'})
changed: [network01] => (item={u'value': u'8192', u'key': u'net.ipv6.neigh.default.gc_thresh3'})
ok: [compute01] => (item={u'value': 0, u'key': u'net.ipv6.conf.lo.disable_ipv6'})
changed: [controller01] => (item={u'value': u'8192', u'key': u'net.ipv6.neigh.default.gc_thresh2'})
changed: [network01] => (item={u'value': u'8192', u'key': u'net.ipv6.route.gc_thresh'})
ok: [compute01] => (item={u'value': 131072, u'key': u'fs.aio-max-nr'})
changed: [controller01] => (item={u'value': u'16384', u'key': u'net.ipv6.neigh.default.gc_thresh3'})
changed: [network01] => (item={u'value': 60, u'key': u'net.ipv6.neigh.default.gc_interval'})
changed: [controller01] => (item={u'value': u'16384', u'key': u'net.ipv6.route.gc_thresh'})
changed: [network01] => (item={u'value': 120, u'key': u'net.ipv6.neigh.default.gc_stale_time'})
changed: [controller01] => (item={u'value': 60, u'key': u'net.ipv6.neigh.default.gc_interval'})
changed: [network01] => (item={u'value': 0, u'key': u'net.ipv6.conf.lo.disable_ipv6'})
changed: [controller01] => (item={u'value': 120, u'key': u'net.ipv6.neigh.default.gc_stale_time'})
changed: [network01] => (item={u'value': 131072, u'key': u'fs.aio-max-nr'})
changed: [controller01] => (item={u'value': 0, u'key': u'net.ipv6.conf.lo.disable_ipv6'})
changed: [controller01] => (item={u'value': 131072, u'key': u'fs.aio-max-nr'})

TASK [openstack_hosts : Configure sysstat] ***************************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_sysstat.yml for controller01, compute01, network01

TASK [openstack_hosts : Enable sysstat config] ***********************************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [openstack_hosts : Enable sysstat cron] *************************************************************************
ok: [compute01]
changed: [controller01]
changed: [network01]

TASK [openstack_hosts : Start and enable the sysstat service] ********************************************************

TASK [openstack_hosts : Create a directory to hold systemd journals on disk] *****************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [openstack_hosts : Create tmpfiles structure in journald directory] *********************************************

TASK [openstack_hosts : Install distro packages] *********************************************************************
ok: [controller01]
ok: [compute01]
changed: [network01]

TASK [openstack_hosts : include_tasks] *******************************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_authorized_keys.yml for controller01, compute01, network01

TASK [openstack_hosts : Ensure ssh directory] ************************************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [openstack_hosts : Update SSH keys] *****************************************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

RUNNING HANDLER [openstack_hosts : Restart sysstat] ******************************************************************
changed: [network01]
changed: [controller01]

PLAY [Apply security hardening configurations] ***********************************************************************

TASK [ansible-hardening : Gather variables for each operating system] ************************************************
ok: [controller01] => (item=/etc/ansible/roles/ansible-hardening/vars/debian.yml)
ok: [compute01] => (item=/etc/ansible/roles/ansible-hardening/vars/debian.yml)
ok: [network01] => (item=/etc/ansible/roles/ansible-hardening/vars/debian.yml)

TASK [ansible-hardening : Check for check/audit mode] ****************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Check to see if we are booting with EFI or UEFI] *******************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Set facts] *********************************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|skipped` instead use `result is
skipped`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [controller01]
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|skipped` instead use `result is
skipped`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [compute01]
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|skipped` instead use `result is
skipped`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [network01]

TASK [ansible-hardening : Check if grub is present on the remote node] ***********************************************
ok: [compute01]
ok: [controller01]
ok: [network01]

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/main.yml for controller01, compute01, network01

TASK [ansible-hardening : Create temporary directory to hold any temporary files] ************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Set a fact for the temporary directory] ****************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/async_tasks.yml for controller01, compute01, network01

TASK [ansible-hardening : Verify all installed RPM packages] *********************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Check for .shosts or shosts.equiv files] ***************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Get user data for all users on the system] *************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Get user data for all interactive users on the system] *************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Install EPEL repository] *******************************************************************

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/packages.yml for controller01, compute01, network01

TASK [ansible-hardening : Add or remove packages based on STIG requirements] *****************************************
ok: [controller01] => (item=absent)
ok: [compute01] => (item=absent)
ok: [network01] => (item=absent)
ok: [compute01] => (item=latest)
changed: [network01] => (item=latest)
changed: [controller01] => (item=latest)

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/apt.yml for controller01, compute01, network01

TASK [ansible-hardening : Ensure debsums is installed] ***************************************************************

TASK [ansible-hardening : Gather debsums report] *********************************************************************

TASK [ansible-hardening : V-71855 - Get files with invalid checksums (apt)] ******************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-71855 - Create comma-separated list] *****************************************************

TASK [ansible-hardening : V-71855 - The cryptographic hash of system files and commands must match vendor values (apt)] ***

TASK [ansible-hardening : Search for AllowUnauthenticated in /etc/apt/apt.conf.d/] ***********************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-71977 - Package management tool must verify authenticity of packages] ********************

TASK [ansible-hardening : V-71979 - Package management tool must verify authenticity of locally-installed packages] ***
ok: [compute01]
changed: [controller01]
changed: [network01]

TASK [ansible-hardening : V-71987 - Clean requirements/dependencies when removing packages (dpkg)] *******************

TASK [ansible-hardening : Enable automatic package updates (apt)] ****************************************************

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/accounts.yml for controller01, compute01, network01

TASK [ansible-hardening : Check if /etc/security/pwquality.conf exists] **********************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Set password quality requirements] *********************************************************
ok: [compute01]
changed: [controller01]
changed: [network01]

TASK [ansible-hardening : Check for SHA512 password storage in PAM] **************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Print warning if PAM is not using SHA512 for password storage] *****************************

TASK [ansible-hardening : Ensure libuser is storing passwords using SHA512] ******************************************

TASK [ansible-hardening : Set minimum password lifetime limit to 24 hours for interactive accounts] ******************
skipping: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [controller01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
skipping: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [compute01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
skipping: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [network01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})

TASK [ansible-hardening : Set maximum password lifetime limit to 60 days for interactive accounts] *******************
skipping: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [controller01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
skipping: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [compute01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
skipping: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [network01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
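(Editor's note: every item in this task is skipped, presumably because the 60-day maximum lifetime is opt-in in ansible-hardening and the corresponding variable is unset in this deployment; the shadow records in the log show max_days 99999, i.e. passwords effectively never expire. A minimal sketch of the compliance check the task would perform, using the max_days value taken from the log output above:)

```shell
# max_days is copied from the shadow records printed in the log (99999).
# If enforcement were enabled, the manual equivalent would be roughly:
#   chage --maxdays 60 itech      # user name taken from the log items
max_days=99999
if [ "$max_days" -gt 60 ]; then
  echo "non-compliant"
else
  echo "compliant"
fi
```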

TASK [ansible-hardening : Ensure that users cannot reuse one of their last 5 passwords] ******************************

TASK [ansible-hardening : Ensure accounts are disabled if the password expires] **************************************
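(Editor's note: this task produces no output, i.e. it is skipped on all hosts. The shadow records shown elsewhere in the log have inact_days -1, meaning accounts are never disabled after password expiry; if enforcement were turned on, the manual equivalent would be roughly `chage --inactive 0 <user>` (user name hypothetical). A sketch of the check, using the value from the log:)

```shell
# inact_days is copied from the shadow records printed in this log;
# -1 means the account is never disabled after its password expires.
inact_days=-1
if [ "$inact_days" -lt 0 ]; then
  echo "never disabled after expiry"
else
  echo "disabled $inact_days days after expiry"
fi
```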

TASK [ansible-hardening : Apply shadow-utils configurations] *********************************************************
ok: [controller01] => (item={u'stig_id': u'V-71921', u'parameter': u'ENCRYPT_METHOD', u'ansible_os_family': u'all', u'value': u'SHA512'})
skipping: [controller01] => (item={u'stig_id': u'V-71925', u'parameter': u'PASS_MIN_DAYS', u'ansible_os_family': u'all', u'value': u''})
skipping: [controller01] => (item={u'stig_id': u'V-71929', u'parameter': u'PASS_MAX_DAYS', u'ansible_os_family': u'all', u'value': u''})
skipping: [controller01] => (item={u'stig_id': u'V-71951', u'parameter': u'FAIL_DELAY', u'ansible_os_family': u'RedHat', u'value': u'4'})
skipping: [controller01] => (item={u'stig_id': u'V-71995', u'parameter': u'UMASK', u'ansible_os_family': u'all', u'value': u''})
ok: [compute01] => (item={u'stig_id': u'V-71921', u'parameter': u'ENCRYPT_METHOD', u'ansible_os_family': u'all', u'value': u'SHA512'})
skipping: [compute01] => (item={u'stig_id': u'V-71925', u'parameter': u'PASS_MIN_DAYS', u'ansible_os_family': u'all', u'value': u''})
skipping: [compute01] => (item={u'stig_id': u'V-71929', u'parameter': u'PASS_MAX_DAYS', u'ansible_os_family': u'all', u'value': u''})
skipping: [compute01] => (item={u'stig_id': u'V-71951', u'parameter': u'FAIL_DELAY', u'ansible_os_family': u'RedHat', u'value': u'4'})
skipping: [compute01] => (item={u'stig_id': u'V-71995', u'parameter': u'UMASK', u'ansible_os_family': u'all', u'value': u''})
changed: [controller01] => (item={u'stig_id': u'V-72013', u'parameter': u'CREATE_HOME', u'ansible_os_family': u'all', u'value': True})
ok: [compute01] => (item={u'stig_id': u'V-72013', u'parameter': u'CREATE_HOME', u'ansible_os_family': u'all', u'value': True})
ok: [network01] => (item={u'stig_id': u'V-71921', u'parameter': u'ENCRYPT_METHOD', u'ansible_os_family': u'all', u'value': u'SHA512'})
skipping: [network01] => (item={u'stig_id': u'V-71925', u'parameter': u'PASS_MIN_DAYS', u'ansible_os_family': u'all', u'value': u''})
skipping: [network01] => (item={u'stig_id': u'V-71929', u'parameter': u'PASS_MAX_DAYS', u'ansible_os_family': u'all', u'value': u''})
skipping: [network01] => (item={u'stig_id': u'V-71951', u'parameter': u'FAIL_DELAY', u'ansible_os_family': u'RedHat', u'value': u'4'})
skipping: [network01] => (item={u'stig_id': u'V-71995', u'parameter': u'UMASK', u'ansible_os_family': u'all', u'value': u''})
changed: [network01] => (item={u'stig_id': u'V-72013', u'parameter': u'CREATE_HOME', u'ansible_os_family': u'all', u'value': True})
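(Editor's note: the ok/changed statuses above correspond to entries the role manages in /etc/login.defs, e.g. ENCRYPT_METHOD SHA512 for STIG V-71921 and CREATE_HOME for V-72013. A rough manual spot-check of the same values; to stay self-contained it greps an inline sample rather than the live file, whose contents may differ on a real host:)

```shell
# Sample of the login.defs keys managed by the task above, with values
# taken from the log items. On a live host the equivalent would be:
#   grep '^ENCRYPT_METHOD' /etc/login.defs
login_defs='ENCRYPT_METHOD SHA512
CREATE_HOME yes'
printf '%s\n' "$login_defs" | awk '$1 == "ENCRYPT_METHOD" {print $2}'
```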

TASK [ansible-hardening : Print warning for groups in /etc/passwd that are not in /etc/group] ************************

TASK [ansible-hardening : Get all accounts with UID 0] ***************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]
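(Editor's note: outside of Ansible, the UID-0 audit above can be approximated with awk against /etc/passwd; on a hardened host only root should match. The passwd lines below are illustrative samples modeled on this log's accounts, not copied from the live hosts:)

```shell
# List every account whose UID (field 3) is 0.
# Sample /etc/passwd content is inlined for illustration; on a live host:
#   awk -F: '$3 == 0 {print $1}' /etc/passwd
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin' \
  'itech:x:1000:1000:itech,,,:/home/itech:/bin/bash' |
awk -F: '$3 == 0 {print $1}'
```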

TASK [ansible-hardening : Print warnings for non-root users with UID 0] **********************************************

TASK [ansible-hardening : Print warning for local interactive users without a home directory assigned] ***************

TASK [ansible-hardening : Check each user to see if its home directory exists on the filesystem] *********************
ok: [controller01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 0, u'name': u'root'}, u'name': u'root', u'gid': 0, u'gecos': u'root', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/root', u'uid': 0})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 1, u'name': u'daemon'}, u'name': u'daemon', u'gid': 1, u'gecos': u'daemon', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/usr/sbin', u'uid': 1})
ok: [compute01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 0, u'name': u'root'}, u'name': u'root', u'gid': 0, u'gecos': u'root', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/root', u'uid': 0})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 2, u'name': u'bin'}, u'name': u'bin', u'gid': 2, u'gecos': u'bin', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 2})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 1, u'name': u'daemon'}, u'name': u'daemon', u'gid': 1, u'gecos': u'daemon', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/usr/sbin', u'uid': 1})
ok: [network01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 0, u'name': u'root'}, u'name': u'root', u'gid': 0, u'gecos': u'root', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/root', u'uid': 0})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 3, u'name': u'sys'}, u'name': u'sys', u'gid': 3, u'gecos': u'sys', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/dev', u'uid': 3})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 2, u'name': u'bin'}, u'name': u'bin', u'gid': 2, u'gecos': u'bin', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 2})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 1, u'name': u'daemon'}, u'name': u'daemon', u'gid': 1, u'gecos': u'daemon', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/usr/sbin', u'uid': 1})
ok: [controller01] => (item={u'shell': u'/bin/sync', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'sync', u'gid': 65534, u'gecos': u'sync', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 4})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 3, u'name': u'sys'}, u'name': u'sys', u'gid': 3, u'gecos': u'sys', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/dev', u'uid': 3})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 2, u'name': u'bin'}, u'name': u'bin', u'gid': 2, u'gecos': u'bin', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 2})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 60, u'name': u'games'}, u'name': u'games', u'gid': 60, u'gecos': u'games', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/usr/games', u'uid': 5})
ok: [compute01] => (item={u'shell': u'/bin/sync', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'sync', u'gid': 65534, u'gecos': u'sync', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 4})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 3, u'name': u'sys'}, u'name': u'sys', u'gid': 3, u'gecos': u'sys', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/dev', u'uid': 3})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 12, u'name': u'man'}, u'name': u'man', u'gid': 12, u'gecos': u'man', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/cache/man', u'uid': 6})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 60, u'name': u'games'}, u'name': u'games', u'gid': 60, u'gecos': u'games', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/usr/games', u'uid': 5})
ok: [network01] => (item={u'shell': u'/bin/sync', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'sync', u'gid': 65534, u'gecos': u'sync', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 4})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 7, u'name': u'lp'}, u'name': u'lp', u'gid': 7, u'gecos': u'lp', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/lpd', u'uid': 7})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 12, u'name': u'man'}, u'name': u'man', u'gid': 12, u'gecos': u'man', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/cache/man', u'uid': 6})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 60, u'name': u'games'}, u'name': u'games', u'gid': 60, u'gecos': u'games', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/usr/games', u'uid': 5})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 7, u'name': u'lp'}, u'name': u'lp', u'gid': 7, u'gecos': u'lp', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/lpd', u'uid': 7})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 8, u'name': u'mail'}, u'name': u'mail', u'gid': 8, u'gecos': u'mail', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/mail', u'uid': 8})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 12, u'name': u'man'}, u'name': u'man', u'gid': 12, u'gecos': u'man', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/cache/man', u'uid': 6})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 9, u'name': u'news'}, u'name': u'news', u'gid': 9, u'gecos': u'news', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/news', u'uid': 9})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 8, u'name': u'mail'}, u'name': u'mail', u'gid': 8, u'gecos': u'mail', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/mail', u'uid': 8})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 7, u'name': u'lp'}, u'name': u'lp', u'gid': 7, u'gecos': u'lp', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/lpd', u'uid': 7})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 9, u'name': u'news'}, u'name': u'news', u'gid': 9, u'gecos': u'news', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/news', u'uid': 9})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 10, u'name': u'uucp'}, u'name': u'uucp', u'gid': 10, u'gecos': u'uucp', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/uucp', u'uid': 10})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 10, u'name': u'uucp'}, u'name': u'uucp', u'gid': 10, u'gecos': u'uucp', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/uucp', u'uid': 10})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 8, u'name': u'mail'}, u'name': u'mail', u'gid': 8, u'gecos': u'mail', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/mail', u'uid': 8})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 13, u'name': u'proxy'}, u'name': u'proxy', u'gid': 13, u'gecos': u'proxy', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 13})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 13, u'name': u'proxy'}, u'name': u'proxy', u'gid': 13, u'gecos': u'proxy', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 13})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 9, u'name': u'news'}, u'name': u'news', u'gid': 9, u'gecos': u'news', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/news', u'uid': 9})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 33, u'name': u'www-data'}, u'name': u'www-data', u'gid': 33, u'gecos': u'www-data', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/www', u'uid': 33})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 33, u'name': u'www-data'}, u'name': u'www-data', u'gid': 33, u'gecos': u'www-data', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/www', u'uid': 33})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 34, u'name': u'backup'}, u'name': u'backup', u'gid': 34, u'gecos': u'backup', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/backups', u'uid': 34})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 10, u'name': u'uucp'}, u'name': u'uucp', u'gid': 10, u'gecos': u'uucp', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/uucp', u'uid': 10})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 34, u'name': u'backup'}, u'name': u'backup', u'gid': 34, u'gecos': u'backup', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/backups', u'uid': 34})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 38, u'name': u'list'}, u'name': u'list', u'gid': 38, u'gecos': u'Mailing List Manager', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/list', u'uid': 38})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 13, u'name': u'proxy'}, u'name': u'proxy', u'gid': 13, u'gecos': u'proxy', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/bin', u'uid': 13})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 38, u'name': u'list'}, u'name': u'list', u'gid': 38, u'gecos': u'Mailing List Manager', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/list', u'uid': 38})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 39, u'name': u'irc'}, u'name': u'irc', u'gid': 39, u'gecos': u'ircd', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/run/ircd', u'uid': 39})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 33, u'name': u'www-data'}, u'name': u'www-data', u'gid': 33, u'gecos': u'www-data', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/www', u'uid': 33})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 39, u'name': u'irc'}, u'name': u'irc', u'gid': 39, u'gecos': u'ircd', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/run/ircd', u'uid': 39})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 41, u'name': u'gnats'}, u'name': u'gnats', u'gid': 41, u'gecos': u'Gnats Bug-Reporting System (admin)', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/gnats', u'uid': 41})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 34, u'name': u'backup'}, u'name': u'backup', u'gid': 34, u'gecos': u'backup', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/backups', u'uid': 34})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 41, u'name': u'gnats'}, u'name': u'gnats', u'gid': 41, u'gecos': u'Gnats Bug-Reporting System (admin)', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/gnats', u'uid': 41})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 38, u'name': u'list'}, u'name': u'list', u'gid': 38, u'gecos': u'Mailing List Manager', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/list', u'uid': 38})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 102, u'name': u'systemd-network'}, u'name': u'systemd-network', u'gid': 102, u'gecos': u'systemd Network Management,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/systemd/netif', u'uid': 100})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 39, u'name': u'irc'}, u'name': u'irc', u'gid': 39, u'gecos': u'ircd', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/run/ircd', u'uid': 39})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 102, u'name': u'systemd-network'}, u'name': u'systemd-network', u'gid': 102, u'gecos': u'systemd Network Management,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/systemd/netif', u'uid': 100})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 103, u'name': u'systemd-resolve'}, u'name': u'systemd-resolve', u'gid': 103, u'gecos': u'systemd Resolver,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/systemd/resolve', u'uid': 101})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 41, u'name': u'gnats'}, u'name': u'gnats', u'gid': 41, u'gecos': u'Gnats Bug-Reporting System (admin)', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/gnats', u'uid': 41})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 103, u'name': u'systemd-resolve'}, u'name': u'systemd-resolve', u'gid': 103, u'gecos': u'systemd Resolver,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/systemd/resolve', u'uid': 101})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 106, u'name': u'syslog'}, u'name': u'syslog', u'gid': 106, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/syslog', u'uid': 102})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 106, u'name': u'syslog'}, u'name': u'syslog', u'gid': 106, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/syslog', u'uid': 102})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 107, u'name': u'messagebus'}, u'name': u'messagebus', u'gid': 107, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 103})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 102, u'name': u'systemd-network'}, u'name': u'systemd-network', u'gid': 102, u'gecos': u'systemd Network Management,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/systemd/netif', u'uid': 100})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 107, u'name': u'messagebus'}, u'name': u'messagebus', u'gid': 107, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 103})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'_apt', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 104})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'_apt', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 104})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 103, u'name': u'systemd-resolve'}, u'name': u'systemd-resolve', u'gid': 103, u'gecos': u'systemd Resolver,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/systemd/resolve', u'uid': 101})
ok: [controller01] => (item={u'shell': u'/bin/false', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'lxd', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/lxd/', u'uid': 105})
ok: [compute01] => (item={u'shell': u'/bin/false', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'lxd', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/lxd/', u'uid': 105})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 106, u'name': u'syslog'}, u'name': u'syslog', u'gid': 106, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/syslog', u'uid': 102})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 110, u'name': u'uuidd'}, u'name': u'uuidd', u'gid': 110, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/uuidd', u'uid': 106})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 110, u'name': u'uuidd'}, u'name': u'uuidd', u'gid': 110, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/uuidd', u'uid': 106})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 107, u'name': u'messagebus'}, u'name': u'messagebus', u'gid': 107, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 103})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'dnsmasq', u'gid': 65534, u'gecos': u'dnsmasq,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/misc', u'uid': 107})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'dnsmasq', u'gid': 65534, u'gecos': u'dnsmasq,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/misc', u'uid': 107})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'_apt', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 104})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 112, u'name': u'landscape'}, u'name': u'landscape', u'gid': 112, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/landscape', u'uid': 108})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 112, u'name': u'landscape'}, u'name': u'landscape', u'gid': 112, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/landscape', u'uid': 108})
ok: [network01] => (item={u'shell': u'/bin/false', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'lxd', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/lxd/', u'uid': 105})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'sshd', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/sshd', u'uid': 109})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'sshd', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/sshd', u'uid': 109})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 110, u'name': u'uuidd'}, u'name': u'uuidd', u'gid': 110, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/uuidd', u'uid': 106})
ok: [controller01] => (item={u'shell': u'/bin/false', u'group': {u'passwd': u'x', u'gid': 1, u'name': u'daemon'}, u'name': u'pollinate', u'gid': 1, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/cache/pollinate', u'uid': 110})
ok: [compute01] => (item={u'shell': u'/bin/false', u'group': {u'passwd': u'x', u'gid': 1, u'name': u'daemon'}, u'name': u'pollinate', u'gid': 1, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/cache/pollinate', u'uid': 110})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'dnsmasq', u'gid': 65534, u'gecos': u'dnsmasq,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/misc', u'uid': 107})
ok: [compute01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
ok: [controller01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 112, u'name': u'landscape'}, u'name': u'landscape', u'gid': 112, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/landscape', u'uid': 108})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 116, u'name': u'ntp'}, u'name': u'ntp', u'gid': 116, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17857, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 111})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 115, u'name': u'ntp'}, u'name': u'ntp', u'gid': 115, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 111})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'sshd', u'gid': 65534, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/run/sshd', u'uid': 109})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 118, u'name': u'postfix'}, u'name': u'postfix', u'gid': 118, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17857, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/spool/postfix', u'uid': 112})
ok: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 117, u'name': u'mongodb'}, u'name': u'mongodb', u'gid': 117, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/mongodb', u'uid': 112})
ok: [network01] => (item={u'shell': u'/bin/false', u'group': {u'passwd': u'x', u'gid': 1, u'name': u'daemon'}, u'name': u'pollinate', u'gid': 1, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/cache/pollinate', u'uid': 110})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 120, u'name': u'_chrony'}, u'name': u'_chrony', u'gid': 120, u'gecos': u'Chrony daemon,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17857, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/chrony', u'uid': 113})
ok: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 121, u'name': u'lxc-dnsmasq'}, u'name': u'lxc-dnsmasq', u'gid': 121, u'gecos': u'LXC dnsmasq,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17857, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/var/lib/lxc', u'uid': 114})
ok: [network01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
ok: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 116, u'name': u'ntp'}, u'name': u'ntp', u'gid': 116, u'gecos': u'', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 111})

TASK [ansible-hardening : Print warning for users with an assigned home directory that does not exist] ***************
ok: [controller01] => {
"msg": "These users have a home directory assigned, but the directory does not exist:\nlp (/var/spool/lpd does not exist)\nnews (/var/spool/news does not exist)\nuucp (/var/spool/uucp does not exist)\nwww-data (/var/www does not exist)\nlist (/var/list does not exist)\nirc (/var/run/ircd does not exist)\ngnats (/var/lib/gnats does not exist)\nnobody (/nonexistent does not exist)\nsyslog (/home/syslog does not exist)\nmessagebus (/nonexistent does not exist)\n_apt (/nonexistent does not exist)\nntp (/nonexistent does not exist)\n"
}
ok: [compute01] => {
"msg": "These users have a home directory assigned, but the directory does not exist:\nlp (/var/spool/lpd does not exist)\nnews (/var/spool/news does not exist)\nuucp (/var/spool/uucp does not exist)\nwww-data (/var/www does not exist)\nlist (/var/list does not exist)\nirc (/var/run/ircd does not exist)\ngnats (/var/lib/gnats does not exist)\nnobody (/nonexistent does not exist)\nsyslog (/home/syslog does not exist)\nmessagebus (/nonexistent does not exist)\n_apt (/nonexistent does not exist)\nntp (/nonexistent does not exist)\n"
}
ok: [network01] => {
"msg": "These users have a home directory assigned, but the directory does not exist:\nlp (/var/spool/lpd does not exist)\nnews (/var/spool/news does not exist)\nuucp (/var/spool/uucp does not exist)\nwww-data (/var/www does not exist)\nlist (/var/list does not exist)\nirc (/var/run/ircd does not exist)\ngnats (/var/lib/gnats does not exist)\nnobody (/nonexistent does not exist)\nsyslog (/home/syslog does not exist)\nmessagebus (/nonexistent does not exist)\n_apt (/nonexistent does not exist)\nntp (/nonexistent does not exist)\n"
}
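The warning above can be reproduced by hand on any host. This is a minimal sketch of the same check, not the role's actual implementation: it reads passwd-format lines on stdin (point it at `/etc/passwd` on a real node; the two sample lines below are illustrative, not captured from this deployment).

```shell
# Report users whose assigned home directory does not exist.
# Passwd format: name:passwd:uid:gid:gecos:home:shell
check_homes() {
  while IFS=: read -r user _ _ _ _ home _; do
    [ -d "$home" ] || echo "$user ($home does not exist)"
  done
}

printf '%s\n' \
  'root:x:0:0:root:/:/bin/bash' \
  'ntp:x:111:115::/nonexistent:/usr/sbin/nologin' | check_homes
# -> ntp (/nonexistent does not exist)
```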

TASK [ansible-hardening : Use pwquality when passwords are changed or created] ***************************************

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/aide.yml for controller01, compute01, network01

TASK [ansible-hardening : Verify that AIDE configuration directory exists] *******************************************
ok: [controller01] => (item=/etc/aide/aide.conf.d)
ok: [controller01] => (item=/etc/aide.conf)
ok: [compute01] => (item=/etc/aide/aide.conf.d)
ok: [compute01] => (item=/etc/aide.conf)
ok: [network01] => (item=/etc/aide/aide.conf.d)
ok: [network01] => (item=/etc/aide.conf)

TASK [ansible-hardening : Exclude certain directories from AIDE] *****************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Configure AIDE to verify additional properties (Ubuntu)] ***********************************
changed: [controller01]
ok: [compute01]
changed: [network01]
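The two `changed` results above mean the role dropped AIDE configuration on controller01 and network01 (compute01 already had it). A representative fragment is sketched below; the exact exclusion list comes from role variables, so the paths and filename here are illustrative only.

```
# /etc/aide/aide.conf.d/ZZ_aide_exclusions (illustrative filename/paths)
# Directories excluded from AIDE integrity scans:
!/openstack
!/run
!/var/lib/lxc
!/var/log
```

Excluding volatile runtime and container paths keeps the AIDE database small and avoids constant false-positive change reports.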

TASK [ansible-hardening : Configure AIDE to verify additional properties (SUSE)] *************************************

TASK [ansible-hardening : Check to see if AIDE database is already in place] *****************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Initialize AIDE (this will take a few minutes)] ********************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Move AIDE database into place] *************************************************************
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|skipped` instead use `result is
skipped`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.

TASK [ansible-hardening : Create AIDE cron job] **********************************************************************

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/auditd.yml for controller01, compute01, network01

TASK [ansible-hardening : Verify that auditd.conf exists] ************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Verify that audisp-remote.conf exists] *****************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72083 - The operating system must off-load audit records onto a different system or media from the system being audited] ***

TASK [ansible-hardening : V-72085 - The operating system must encrypt the transfer of audit records off-loaded onto a different system or media from the system being audited] ***

TASK [ansible-hardening : Get valid system architectures for audit rules] ********************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Remove system default audit.rules file] ****************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Remove old RHEL 6 audit rules file] ********************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Deploy rules for auditd based on STIG requirements] ****************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Adjust auditd/audispd configurations] ******************************************************
changed: [controller01] => (item={u'config': u'/etc/audisp/audisp-remote.conf', u'parameter': u'disk_full_action', u'value': u'syslog'})
ok: [compute01] => (item={u'config': u'/etc/audisp/audisp-remote.conf', u'parameter': u'disk_full_action', u'value': u'syslog'})
changed: [controller01] => (item={u'config': u'/etc/audisp/audisp-remote.conf', u'parameter': u'network_failure_action', u'value': u'syslog'})
ok: [compute01] => (item={u'config': u'/etc/audisp/audisp-remote.conf', u'parameter': u'network_failure_action', u'value': u'syslog'})
changed: [network01] => (item={u'config': u'/etc/audisp/audisp-remote.conf', u'parameter': u'disk_full_action', u'value': u'syslog'})
changed: [controller01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'space_left', u'value': u'232558'})
ok: [compute01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'space_left', u'value': u'239945'})
changed: [network01] => (item={u'config': u'/etc/audisp/audisp-remote.conf', u'parameter': u'network_failure_action', u'value': u'syslog'})
changed: [controller01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'space_left_action', u'value': u'email'})
ok: [compute01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'space_left_action', u'value': u'email'})
changed: [network01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'space_left', u'value': u'7481'})
ok: [controller01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'action_mail_acct', u'value': u'root'})
ok: [compute01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'action_mail_acct', u'value': u'root'})
changed: [network01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'space_left_action', u'value': u'email'})
ok: [network01] => (item={u'config': u'/etc/audit/auditd.conf', u'parameter': u'action_mail_acct', u'value': u'root'})

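Collected from the items above, the effective configuration the role wrote looks like the fragment below. The values are taken directly from this run; note that `space_left` is computed per host from disk size, so it differs on each node (controller01 shown).

```
# /etc/audisp/audisp-remote.conf
disk_full_action = syslog
network_failure_action = syslog

# /etc/audit/auditd.conf (space_left is host-specific; controller01 value shown)
space_left = 232558
space_left_action = email
action_mail_acct = root
```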
TASK [ansible-hardening : Ensure auditd is running and enabled at boot time] *****************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/auth.yml for controller01, compute01, network01

TASK [ansible-hardening : Set pam_faildelay configuration on Ubuntu] *************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Prevent users with blank or null passwords from authenticating (Debian/Ubuntu)] ************
changed: [controller01]
ok: [compute01]
changed: [network01]
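A sketch of what the Debian/Ubuntu task above changes, assuming the usual mechanism (stripping the `nullok`/`nullok_secure` option from the `pam_unix` line so blank passwords are rejected); the exact before/after lines are illustrative, not copied from these hosts.

```
# /etc/pam.d/common-auth
# before:
auth [success=1 default=ignore] pam_unix.so nullok_secure
# after:
auth [success=1 default=ignore] pam_unix.so
```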

TASK [ansible-hardening : Prevent users with blank or null passwords from authenticating (Red Hat)] ******************
skipping: [controller01] => (item=auth)
skipping: [controller01] => (item=password)
skipping: [compute01] => (item=auth)
skipping: [compute01] => (item=password)
skipping: [network01] => (item=auth)
skipping: [network01] => (item=password)

TASK [ansible-hardening : Prevent users with blank or null passwords from authenticating (SUSE)] *********************
skipping: [controller01] => (item=/etc/pam.d/common-auth)
skipping: [controller01] => (item=/etc/pam.d/common-password)
skipping: [compute01] => (item=/etc/pam.d/common-auth)
skipping: [compute01] => (item=/etc/pam.d/common-password)
skipping: [network01] => (item=/etc/pam.d/common-auth)
skipping: [network01] => (item=/etc/pam.d/common-password)

TASK [ansible-hardening : Lock accounts after three failed login attempts within a 15 minute period] *****************

TASK [ansible-hardening : Check for 'nopasswd' in sudoers files] *****************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-71947 - Users must provide a password for privilege escalation.] *************************

TASK [ansible-hardening : Check for 'authenticate' in sudoers files] ************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-71949 - Users must re-authenticate for privilege escalation.] ****************************

TASK [ansible-hardening : Check if sssd.conf exists] *****************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Check if GRUB2 custom file exists] *********************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : blockinfile] *******************************************************************************

TASK [ansible-hardening : lineinfile] ********************************************************************************

TASK [ansible-hardening : V-72217 - The operating system must limit the number of concurrent sessions to 10 for all accounts and/or account types.] ***

TASK [ansible-hardening : Check for pam_lastlog in PAM configuration] ************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72275 - Display date/time of last logon after logon] *************************************

TASK [ansible-hardening : Ensure .shosts find has finished] **********************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Remove .shosts or shosts.equiv files] ******************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/file_perms.yml for controller01, compute01, network01

TASK [ansible-hardening : V-71849 - Get packages with incorrect file permissions or ownership] ***********************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-71849 - Reset file permissions/ownership to vendor values] *******************************

TASK [ansible-hardening : Search for files/directories with an invalid owner] ****************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-72007 - All files and directories must have a valid owner.] ******************************

TASK [ansible-hardening : Search for files/directories with an invalid group owner] **********************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-72009 - All files and directories must have a valid group owner.] ************************

TASK [ansible-hardening : Set proper owner, group owner, and permissions on home directories] ************************
skipping: [controller01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [controller01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
skipping: [compute01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [compute01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17856, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})
skipping: [network01] => (item={u'shell': u'/usr/sbin/nologin', u'group': {u'passwd': u'x', u'gid': 65534, u'name': u'nogroup'}, u'name': u'nobody', u'gid': 65534, u'gecos': u'nobody', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17737, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/nonexistent', u'uid': 65534})
skipping: [network01] => (item={u'shell': u'/bin/bash', u'group': {u'passwd': u'x', u'gid': 1000, u'name': u'itech'}, u'name': u'itech', u'gid': 1000, u'gecos': u'itech,,,', u'shadow': {u'expire_days': -1, u'min_days': 0, u'last_changed': 17862, u'max_days': 99999, u'warn_days': 7, u'inact_days': -1}, u'dir': u'/home/itech', u'uid': 1000})

TASK [ansible-hardening : Find all world-writable directories] *******************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-72047 - All world-writable directories must be group-owned by root, sys, bin, or an application group.] ***

TASK [ansible-hardening : Check if /etc/cron.allow exists] ***********************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Set owner/group owner on /etc/cron.allow] **************************************************

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/graphical.yml for controller01, compute01, network01

TASK [ansible-hardening : Check if gdm is installed and configured] **************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-71953 - The operating system must not allow an unattended or automatic logon to the system via a graphical user interface] ***

TASK [ansible-hardening : V-71955 - The operating system must not allow guest logon to the system.] ******************

TASK [ansible-hardening : Check for dconf profiles] ******************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Create a user profile in dconf] ************************************************************

TASK [ansible-hardening : Create dconf directories] ******************************************************************
skipping: [controller01] => (item=/etc/dconf/db/local.d/)
skipping: [controller01] => (item=/etc/dconf/db/local.d/locks)
skipping: [controller01] => (item=/etc/dconf/db/gdm.d/)
skipping: [compute01] => (item=/etc/dconf/db/local.d/)
skipping: [compute01] => (item=/etc/dconf/db/local.d/locks)
skipping: [compute01] => (item=/etc/dconf/db/gdm.d/)
skipping: [network01] => (item=/etc/dconf/db/local.d/)
skipping: [network01] => (item=/etc/dconf/db/local.d/locks)
skipping: [network01] => (item=/etc/dconf/db/gdm.d/)

TASK [ansible-hardening : Configure graphical session locking] *******************************************************

TASK [ansible-hardening : Prevent users from changing graphical session locking configurations] **********************

TASK [ansible-hardening : Create a GDM profile for displaying a login banner] ****************************************

TASK [ansible-hardening : Create a GDM keyfile for machine-wide settings] ********************************************
skipping: [controller01] => (item=/etc/dconf/db/gdm.d/01-banner-message)
skipping: [controller01] => (item=/etc/dconf/db/local.d/01-banner-message)
skipping: [compute01] => (item=/etc/dconf/db/gdm.d/01-banner-message)
skipping: [compute01] => (item=/etc/dconf/db/local.d/01-banner-message)
skipping: [network01] => (item=/etc/dconf/db/gdm.d/01-banner-message)
skipping: [network01] => (item=/etc/dconf/db/local.d/01-banner-message)

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/kernel.yml for controller01, compute01, network01

TASK [ansible-hardening : V-71983 - USB mass storage must be disabled.] **********************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Set sysctl configurations] *****************************************************************
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.all.accept_source_route', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.all.accept_source_route', u'value': 0})
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.accept_source_route', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.accept_source_route', u'value': 0})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.all.accept_source_route', u'value': 0})
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv4.icmp_echo_ignore_broadcasts', u'value': 1})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv4.icmp_echo_ignore_broadcasts', u'value': 1})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.accept_source_route', u'value': 0})
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.all.send_redirects', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.all.send_redirects', u'value': 0})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv4.icmp_echo_ignore_broadcasts', u'value': 1})
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.send_redirects', u'value': 0})
skipping: [controller01] => (item={u'enabled': False, u'name': u'net.ipv4.ip_forward', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.send_redirects', u'value': 0})
skipping: [compute01] => (item={u'enabled': False, u'name': u'net.ipv4.ip_forward', u'value': 0})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.all.send_redirects', u'value': 0})
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv6.conf.all.accept_source_route', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv6.conf.all.accept_source_route', u'value': 0})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.send_redirects', u'value': 0})
skipping: [network01] => (item={u'enabled': False, u'name': u'net.ipv4.ip_forward', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.accept_redirects', u'value': 0})
changed: [controller01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.accept_redirects', u'value': 0})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv6.conf.all.accept_source_route', u'value': 0})
ok: [compute01] => (item={u'enabled': True, u'name': u'kernel.randomize_va_space', u'value': 2})
changed: [controller01] => (item={u'enabled': True, u'name': u'kernel.randomize_va_space', u'value': 2})
skipping: [compute01] => (item={u'enabled': False, u'name': u'net.ipv6.conf.all.disable_ipv6', u'value': 1})
skipping: [controller01] => (item={u'enabled': False, u'name': u'net.ipv6.conf.all.disable_ipv6', u'value': 1})
changed: [network01] => (item={u'enabled': True, u'name': u'net.ipv4.conf.default.accept_redirects', u'value': 0})
changed: [network01] => (item={u'enabled': True, u'name': u'kernel.randomize_va_space', u'value': 2})
skipping: [network01] => (item={u'enabled': False, u'name': u'net.ipv6.conf.all.disable_ipv6', u'value': 1})

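Written out as a persistent configuration, the sysctl items applied above amount to the fragment below (keys and values taken from this run; the file path is illustrative, since the role manages these through the sysctl module rather than a single file). The skipped `net.ipv4.ip_forward` and `net.ipv6.conf.all.disable_ipv6` items are intentionally left enabled because OpenStack networking depends on them.

```
# Illustrative /etc/sysctl.d/ fragment equivalent to the applied items
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_redirects = 0
kernel.randomize_va_space = 2
```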
TASK [ansible-hardening : Check kdump service] ***********************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72057 - Kernel core dumps must be disabled unless needed.] *******************************

TASK [ansible-hardening : Check if FIPS is enabled] ******************************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Print a warning if FIPS isn't enabled] *****************************************************

TASK [ansible-hardening : V-77821 - Datagram Congestion Control Protocol (DCCP) kernel module must be disabled] ******
changed: [controller01]
ok: [compute01]
changed: [network01]
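The DCCP task above uses the standard modprobe blacklisting technique for STIG V-77821: mapping the module's install command to `/bin/true` so it can never load. The filename below is illustrative.

```
# /etc/modprobe.d/dccp.conf (illustrative filename)
install dccp /bin/true
```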

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/lsm.yml for controller01, compute01, network01

TASK [ansible-hardening : Check apparmor_status output] **************************************************************
ok: [compute01]
ok: [network01]
ok: [controller01]

TASK [ansible-hardening : Check if apparmor is running] **************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Ensure AppArmor is enabled at boot time] ***************************************************

TASK [ansible-hardening : Ensure AppArmor is running] ****************************************************************

TASK [ansible-hardening : Ensure SELinux is in enforcing mode on the next reboot] ************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Relabel files on next boot if SELinux mode changed] ****************************************

TASK [ansible-hardening : Check for unlabeled device files] **********************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-72039 - All system device files must be correctly labeled to prevent unauthorized modification.] ***

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/misc.yml for controller01, compute01, network01

TASK [ansible-hardening : Check autofs service] **********************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-71985 - File system automounter must be disabled unless required.] ***********************

TASK [ansible-hardening : Check if ctrl-alt-del.target is already masked] ********************************************
ok: [compute01]
ok: [controller01]
ok: [network01]

TASK [ansible-hardening : V-71993 - The x86 Ctrl-Alt-Delete key sequence must be disabled] ***************************

TASK [ansible-hardening : Check for /home on mounted filesystem] *****************************************************
ok: [controller01] => {
"msg": "The STIG requires that /home is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [compute01] => {
"msg": "The STIG requires that /home is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [network01] => {
"msg": "The STIG requires that /home is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}

TASK [ansible-hardening : Check for /var on mounted filesystem] ******************************************************
ok: [controller01] => {
"msg": "The STIG requires that /var is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [compute01] => {
"msg": "The STIG requires that /var is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [network01] => {
"msg": "The STIG requires that /var is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}

TASK [ansible-hardening : Check for /var/log/audit on mounted filesystem] ********************************************
ok: [controller01] => {
"msg": "The STIG requires that /var/log/audit is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [compute01] => {
"msg": "The STIG requires that /var/log/audit is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [network01] => {
"msg": "The STIG requires that /var/log/audit is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}

TASK [ansible-hardening : Check for /tmp on mounted filesystem] ******************************************************
ok: [controller01] => {
"msg": "The STIG requires that /tmp is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [compute01] => {
"msg": "The STIG requires that /tmp is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}
ok: [network01] => {
"msg": "The STIG requires that /tmp is on its own filesystem, but this system\ndoes not appear to be following the requirement.\n"
}

TASK [ansible-hardening : Check if syslog output is being sent to another server] ************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72209 - The system must send rsyslog output to a log aggregation server.] ****************
ok: [controller01] => {
"msg": "Output from syslog must be sent to another server."
}
ok: [compute01] => {
"msg": "Output from syslog must be sent to another server."
}
ok: [network01] => {
"msg": "Output from syslog must be sent to another server."
}
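
V-72209 will keep flagging every host until rsyslog forwards to a log aggregation server. A minimal forward rule dropped into /etc/rsyslog.d/ looks like the sketch below; the file name and aggregator hostname are placeholders, not values from this deployment:

```
# /etc/rsyslog.d/99-forward.conf  (placeholder name)
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@logs.example.com:514
```

Restart rsyslog after adding the file; on the next hardening run the task should report the host as compliant.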

TASK [ansible-hardening : Check if ClamAV is installed] **************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Remove 'Example' line from ClamAV configuration files] *************************************
skipping: [controller01] => (item=/etc/freshclam.conf)
skipping: [controller01] => (item=/etc/clamd.d/scan.conf)
skipping: [compute01] => (item=/etc/freshclam.conf)
skipping: [compute01] => (item=/etc/clamd.d/scan.conf)
skipping: [network01] => (item=/etc/freshclam.conf)
skipping: [network01] => (item=/etc/clamd.d/scan.conf)

TASK [ansible-hardening : Set ClamAV server type as socket] **********************************************************

TASK [ansible-hardening : Allow automatic freshclam updates] *********************************************************

TASK [ansible-hardening : Check if ClamAV update process is already running] *****************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Update ClamAV database] ********************************************************************

TASK [ansible-hardening : Ensure ClamAV is running] ******************************************************************

TASK [ansible-hardening : Remove old config block for V-72223 from openstack-ansible-security] ***********************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72223 - Set 10 minute timeout on communication sessions] *********************************
ok: [compute01]
changed: [controller01]
changed: [network01]

TASK [ansible-hardening : Start and enable chrony] *******************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Check if chrony configuration file exists] *************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72269 - Synchronize system clock (configuration file)] ***********************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Check firewalld status] ********************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Ensure firewalld is running and enabled] ***************************************************

TASK [ansible-hardening : Limit new TCP connections to 25/minute and allow bursting to 100] **************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : Count nameserver entries in /etc/resolv.conf] **********************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72281 - For systems using DNS resolution, at least two name servers must be configured.] ***
ok: [controller01] => {
"msg": "Two or more nameservers must be configured in /etc/resolv.conf.\nNameservers found: 1\n"
}
ok: [compute01] => {
"msg": "Two or more nameservers must be configured in /etc/resolv.conf.\nNameservers found: 1\n"
}
ok: [network01] => {
"msg": "Two or more nameservers must be configured in /etc/resolv.conf.\nNameservers found: 1\n"
}
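
The V-72281 message shows each host resolving through a single nameserver. The check reduces to counting `nameserver` lines; reproduced here against a sample file (the temp file and its contents are illustrative, the playbook reads /etc/resolv.conf):

```shell
#!/bin/sh
# Build a sample resolv.conf with one nameserver, as on these hosts.
sample=$(mktemp)
printf 'nameserver 192.168.1.1\n' > "$sample"

# grep -c counts matching lines; the STIG wants this to be >= 2.
count=$(grep -c '^nameserver' "$sample")
echo "Nameservers found: $count"   # prints: Nameservers found: 1
rm -f "$sample"
```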

TASK [ansible-hardening : Check for interfaces in promiscuous mode] **************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72295 - Network interfaces must not be in promiscuous mode.] *****************************

TASK [ansible-hardening : Check for postfix configuration file] ******************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72297 - Prevent unrestricted mail relaying] **********************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Check for TFTP server configuration file] **************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Check TFTP configuration mode] *************************************************************
skipping: [controller01]
skipping: [compute01]
skipping: [network01]

TASK [ansible-hardening : V-72305 - TFTP must be configured to operate in secure mode] *******************************

TASK [ansible-hardening : Check to see if snmpd config contains public/private] **************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : V-72313 - Change SNMP community strings from default.] *************************************

TASK [ansible-hardening : include_tasks] *****************************************************************************
included: /etc/ansible/roles/ansible-hardening/tasks/rhel7stig/sshd.yml for controller01, compute01, network01

TASK [ansible-hardening : Copy login warning banner] *****************************************************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Adjust ssh server configuration based on STIG requirements] ********************************
changed: [controller01]
ok: [compute01]
changed: [network01]

TASK [ansible-hardening : Ensure sshd is enabled at boot time] *******************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Determine existing public ssh host keys] ***************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Public host key files must have mode 0644 or less] *****************************************
ok: [controller01] => (item=/etc/ssh/ssh_host_ecdsa_key.pub)
ok: [controller01] => (item=/etc/ssh/ssh_host_ed25519_key.pub)
ok: [compute01] => (item=/etc/ssh/ssh_host_ecdsa_key.pub)
ok: [controller01] => (item=/etc/ssh/ssh_host_rsa_key.pub)
ok: [compute01] => (item=/etc/ssh/ssh_host_ed25519_key.pub)
ok: [network01] => (item=/etc/ssh/ssh_host_ecdsa_key.pub)
ok: [compute01] => (item=/etc/ssh/ssh_host_rsa_key.pub)
ok: [network01] => (item=/etc/ssh/ssh_host_ed25519_key.pub)
ok: [network01] => (item=/etc/ssh/ssh_host_rsa_key.pub)

TASK [ansible-hardening : Determine existing private ssh host keys] **************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : Private host key files must have mode 0600 or less] ****************************************
ok: [controller01] => (item=/etc/ssh/ssh_host_ecdsa_key)
ok: [controller01] => (item=/etc/ssh/ssh_host_ed25519_key)
ok: [compute01] => (item=/etc/ssh/ssh_host_ecdsa_key)
ok: [controller01] => (item=/etc/ssh/ssh_host_rsa_key)
ok: [compute01] => (item=/etc/ssh/ssh_host_ed25519_key)
ok: [network01] => (item=/etc/ssh/ssh_host_ecdsa_key)
ok: [compute01] => (item=/etc/ssh/ssh_host_rsa_key)
ok: [network01] => (item=/etc/ssh/ssh_host_ed25519_key)
ok: [network01] => (item=/etc/ssh/ssh_host_rsa_key)
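
Both mode tasks above report `ok` because the host keys already satisfy the limits: 0600 or stricter for private keys, 0644 or stricter for public keys. The same verification by hand, sketched on scratch files (the real targets are /etc/ssh/ssh_host_*_key and their .pub counterparts):

```shell
#!/bin/sh
# Stand-in for a private host key such as /etc/ssh/ssh_host_rsa_key:
key=$(mktemp)
chmod 600 "$key"
stat -c '%a' "$key"     # prints: 600

# Stand-in for its public half, /etc/ssh/ssh_host_rsa_key.pub:
pub=$(mktemp)
chmod 644 "$pub"
stat -c '%a' "$pub"     # prints: 644
rm -f "$key" "$pub"
```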

TASK [ansible-hardening : Remove the temporary directory] ************************************************************
ok: [controller01]
ok: [compute01]
ok: [network01]

TASK [ansible-hardening : include_tasks] *****************************************************************************

RUNNING HANDLER [ansible-hardening : restart auditd] *****************************************************************
changed: [network01]
changed: [controller01]

RUNNING HANDLER [ansible-hardening : restart chrony] *****************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [ansible-hardening : restart ssh] ********************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [ansible-hardening : generate auditd rules] **********************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [ansible-hardening : restart auditd] *****************************************************************
changed: [controller01]
changed: [network01]

PLAY [Basic lxc host setup] ******************************************************************************************

TASK [Backup the default pip_install_upper_constraints] **************************************************************

TASK [Backup the default pip_default_index] **************************************************************************

TASK [Test internal repo URL for the current upper constraints file] *************************************************
ok: [controller01]

TASK [Remove global requirement pins file from host] *****************************************************************

TASK [Copy global requirement pins file to host] *********************************************************************
ok: [controller01]
ok: [network01]

TASK [Set pip install upper constraints] *****************************************************************************
ok: [controller01]

TASK [Fall back to repo_build_pip_default_index] *********************************************************************
ok: [controller01]

TASK [Check the state of the default LXC service log directory] ******************************************************
ok: [controller01]
ok: [network01]

TASK [Create the log aggregation parent directory] *******************************************************************
ok: [controller01]
changed: [network01]

TASK [Move the existing folder to the log aggregation parent] ********************************************************

TASK [Create the new LXC service log directory] **********************************************************************
changed: [controller01]
changed: [network01]

TASK [Create the LXC service log aggregation link] *******************************************************************
changed: [controller01]
changed: [network01]

TASK [apt_package_pinning : Add apt pin preferences] *****************************************************************

TASK [lxc_hosts : Check for the presence of a public key file on the deployment host] ********************************
ok: [controller01 -> localhost]
ok: [network01 -> localhost]

TASK [lxc_hosts : Fail if a ssh public key is not set in a var and is not present on the deployment host] ************

TASK [lxc_hosts : Gather variables for each operating system] ********************************************************
ok: [controller01] => (item=/etc/ansible/roles/lxc_hosts/vars/ubuntu-18.04-host.yml)
ok: [network01] => (item=/etc/ansible/roles/lxc_hosts/vars/ubuntu-18.04-host.yml)

TASK [lxc_hosts : Gather container variables] ************************************************************************
[WARNING]: Invalid request to find a file that matches a "null" value

[WARNING]: Invalid request to find a file that matches a "null" value

ok: [controller01] => (item=/etc/ansible/roles/lxc_hosts/vars/ubuntu-18.04.yml)
ok: [network01] => (item=/etc/ansible/roles/lxc_hosts/vars/ubuntu-18.04.yml)

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_pre_install.yml for controller01, network01

TASK [lxc_hosts : Create base directories] ***************************************************************************
changed: [controller01] => (item=/etc/lxc)
ok: [controller01] => (item=/usr/local/bin)
changed: [network01] => (item=/etc/lxc)
ok: [controller01] => (item=/etc/network/interfaces.d)
ok: [network01] => (item=/usr/local/bin)
ok: [controller01] => (item=/etc/apparmor.d/lxc)
ok: [network01] => (item=/etc/network/interfaces.d)
ok: [controller01] => (item=/usr/share/lxc/templates)
ok: [network01] => (item=/etc/apparmor.d/lxc)
ok: [controller01] => (item=/openstack)
ok: [network01] => (item=/usr/share/lxc/templates)
changed: [controller01] => (item=/openstack/backup)
ok: [network01] => (item=/openstack)
ok: [controller01] => (item=/openstack/log)
changed: [network01] => (item=/openstack/backup)
changed: [controller01] => (item=/var/lib/lxc)
ok: [network01] => (item=/openstack/log)
changed: [controller01] => (item=/var/cache/lxc/download)
changed: [network01] => (item=/var/lib/lxc)
changed: [network01] => (item=/var/cache/lxc/download)

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_install.yml for controller01, network01

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_install_apt.yml for controller01, network01

TASK [lxc_hosts : Remove conflicting packages] ***********************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Install apt packages] ******************************************************************************
changed: [network01]
changed: [controller01]

TASK [lxc_hosts : Drop irqbalance config] ****************************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_apparmor.yml for controller01, network01

TASK [lxc_hosts : Check for apparmor profile] ************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Relax dnsmasq apparmor profile] ********************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Check for apparmor profile] ************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Relax ping apparmor profile] ***********************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Check for apparmor profile] ************************************************************************
skipping: [controller01]
skipping: [network01]

TASK [lxc_hosts : Relax lxc-start apparmor profile] ******************************************************************

TASK [lxc_hosts : Drop lxc-openstack apparmor profile] ***************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Start apparmor] *************************************************************************
ok: [controller01]
ok: [network01]

RUNNING HANDLER [lxc_hosts : Reload apparmor] ************************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Restart irqbalance] *********************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_cache_prestage.yml for controller01, network01

TASK [lxc_hosts : Set LXC cache fact(s)] *****************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Create legacy image URL fetch] *********************************************************************

TASK [lxc_hosts : Fetch legacy container image url] ******************************************************************
skipping: [controller01]
skipping: [network01]

TASK [lxc_hosts : Set LXC cache fact(s) (legacy)] ********************************************************************

TASK [lxc_hosts : Set LXC cache basename] ****************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Pre-stage the LXC image on the system] *************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_post_install.yml for controller01, network01

TASK [lxc_hosts : Ensure the lxc dnsmasq user exists] ****************************************************************
changed: [network01]
changed: [controller01]

TASK [lxc_hosts : Drop base config file(s)] **************************************************************************
changed: [controller01] => (item={u'dest': u'/etc/lxc/lxc-openstack.conf', u'src': u'lxc-openstack.conf.j2'})
changed: [network01] => (item={u'dest': u'/etc/lxc/lxc-openstack.conf', u'src': u'lxc-openstack.conf.j2'})
changed: [network01] => (item={u'dest': u'/etc/default/lxc-net', u'src': u'lxc.default.j2', u'mode': u'0644'})
changed: [controller01] => (item={u'dest': u'/etc/default/lxc-net', u'src': u'lxc.default.j2', u'mode': u'0644'})
changed: [controller01] => (item={u'dest': u'/usr/local/bin/lxc-system-manage', u'src': u'lxc-system-manage.j2', u'mode': u'0755'})
changed: [network01] => (item={u'dest': u'/usr/local/bin/lxc-system-manage', u'src': u'lxc-system-manage.j2', u'mode': u'0755'})

TASK [lxc_hosts : Create machinectl base template] *******************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Drop lxc veth check script] ************************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Set systemd DefaultTasksMax value] *****************************************************************
changed: [network01]
changed: [controller01]

TASK [lxc_hosts : Check that the init.scope support the pid controller] **********************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Get init.scope pids.max value] *********************************************************************
skipping: [controller01]
skipping: [network01]

TASK [lxc_hosts : Set systemd pids.max in init.scope] ****************************************************************

RUNNING HANDLER [lxc_hosts : Reload systemd units] *******************************************************************
ok: [network01]
ok: [controller01]

TASK [lxc_hosts : include_tasks] *************************************************************************************

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_kernel_tuning.yml for controller01, network01

TASK [lxc_hosts : Tuning kernel for lxc] *****************************************************************************
changed: [controller01] => (item={u'value': 1024, u'key': u'fs.inotify.max_user_instances'})
changed: [network01] => (item={u'value': 1024, u'key': u'fs.inotify.max_user_instances'})
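
The single kernel tuning change raises `fs.inotify.max_user_instances` to 1024, because every container adds its own inotify consumers on top of the host's. The current value can be read straight from /proc; the drop-in file name below is an assumption for persisting the value by hand, not a file this role writes:

```shell
#!/bin/sh
# Current limit, no sysctl binary required:
cat /proc/sys/fs/inotify/max_user_instances

# To apply and persist the role's value manually (root required;
# drop-in name is hypothetical):
#   echo 1024 > /proc/sys/fs/inotify/max_user_instances
#   echo 'fs.inotify.max_user_instances = 1024' > /etc/sysctl.d/99-inotify.conf
```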

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_net.yml for controller01, network01

TASK [lxc_hosts : Check if NetworkManager is running] ****************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Ensure network services wait on networking (if using NetworkManager)] ******************************

TASK [lxc_hosts : Drop lxc net bridge] *******************************************************************************
changed: [controller01] => (item={u'dest': u'/etc/network/interfaces.d/lxc-net-bridge.cfg', u'src': u'lxc-net-bridge.cfg.j2'})
changed: [network01] => (item={u'dest': u'/etc/network/interfaces.d/lxc-net-bridge.cfg', u'src': u'lxc-net-bridge.cfg.j2'})

TASK [lxc_hosts : Remove old post up script] *************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Drop lxc net bridge routes (SUSE)] *****************************************************************

TASK [lxc_hosts : Create networking post-up and post-down data for Red Hat] ******************************************

TASK [lxc_hosts : Disable and stop lxc-net] **************************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Mask lxc-net systemd service] **********************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Ensure networking includes interfaces.d] ***********************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Create systemd unit for dnsmasq] *******************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Check Container Bridge exists] *********************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Restart bridge] *************************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Bring bridge up] ************************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Veth check] *****************************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Reload systemd units] *******************************************************************
ok: [controller01]
ok: [network01]

RUNNING HANDLER [lxc_hosts : Restart dnsmasq] ************************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_cache.yml for controller01, network01

TASK [lxc_hosts : Check cached image status] *************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Retrieve the expiry object] ************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Set cache refresh fact] ****************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_cache_preparation.yml for controller01, network01

TASK [lxc_hosts : Pull systemd version] ******************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Create machined proxy override unit directories] ***************************************************
skipping: [controller01] => (item=systemd-machined.service.d)
skipping: [controller01] => (item=systemd-importd.service.d)
skipping: [network01] => (item=systemd-machined.service.d)
skipping: [network01] => (item=systemd-importd.service.d)

TASK [lxc_hosts : Drop the machined proxy override units] ************************************************************
skipping: [controller01] => (item=systemd-machined.service.d)
skipping: [controller01] => (item=systemd-importd.service.d)
skipping: [network01] => (item=systemd-machined.service.d)
skipping: [network01] => (item=systemd-importd.service.d)

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_volume.yml for controller01, network01

TASK [lxc_hosts : Check machinectl mount point] **********************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Set volume size] ***********************************************************************************
ok: [network01]
ok: [controller01]

TASK [lxc_hosts : Format the machines sparse file] *******************************************************************
ok: [network01]
ok: [controller01]

TASK [lxc_hosts : Create machines mount point] ***********************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Move machines mount into place] ********************************************************************
changed: [controller01]
changed: [network01]

RUNNING HANDLER [lxc_hosts : Enable machines mount] ******************************************************************
ok: [controller01]
ok: [network01]

RUNNING HANDLER [lxc_hosts : Start or reload the machines mount] *****************************************************
changed: [network01]
changed: [controller01]

RUNNING HANDLER [lxc_hosts : Reload systemd units] *******************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Disable|Enable the machinectl quota system] ********************************************************
ok: [network01]
ok: [controller01]

TASK [lxc_hosts : Set the qgroup size|compression limits on machines] ************************************************
ok: [controller01] => (item=-e none)
ok: [controller01] => (item=-c none)
ok: [network01] => (item=-e none)
ok: [network01] => (item=-c none)

TASK [lxc_hosts : Ensure the machines fs is sized correctly] *********************************************************

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_cache_preparation_systemd_new.yml for controller01, network01

TASK [lxc_hosts : Remove old image cache] ****************************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Ensure image has been pre-staged] ******************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Retrieve base image] *******************************************************************************
changed: [network01]
changed: [controller01]

TASK [lxc_hosts : Set the qgroup size|compression limits on machines] ************************************************
ok: [controller01] => (item=-e none)
ok: [controller01] => (item=-c none)
ok: [network01] => (item=-e none)
ok: [network01] => (item=-c none)

TASK [lxc_hosts : Rsyncing files from the LXC host to the container cache] *******************************************
changed: [controller01] => (item=/etc/apt/sources.list)
changed: [network01] => (item=/etc/apt/sources.list)
changed: [controller01] => (item=/etc/apt/apt.conf.d/)
changed: [network01] => (item=/etc/apt/apt.conf.d/)
changed: [controller01] => (item=/etc/apt/trusted.gpg.d)
changed: [network01] => (item=/etc/apt/trusted.gpg.d)
changed: [controller01] => (item=/etc/apt/preferences.d/)
changed: [network01] => (item=/etc/apt/preferences.d/)
changed: [controller01] => (item=/etc/environment)
changed: [network01] => (item=/etc/environment)
changed: [controller01] => (item=/etc/localtime)
changed: [network01] => (item=/etc/localtime)
changed: [controller01] => (item=/etc/protocols)
changed: [network01] => (item=/etc/protocols)

TASK [lxc_hosts : Ensure directories exist for lxc_container_cache_files] ********************************************

TASK [lxc_hosts : Copy files from deployment host to the container cache] ********************************************

TASK [lxc_hosts : Cached image preparation script] *******************************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Prepare cached image setup commands] ***************************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Obtain the deploy system's ssh public key] *********************************************************
ok: [controller01]
ok: [network01]

TASK [lxc_hosts : Deploy ssh public key into the cached image] *******************************************************
changed: [controller01]
changed: [network01]

TASK [lxc_hosts : Ensure that the LXC cache has been prepared] *******************************************************
FAILED - RETRYING: Ensure that the LXC cache has been prepared (120 retries left).
FAILED - RETRYING: Ensure that the LXC cache has been prepared (120 retries left).
FAILED - RETRYING: Ensure that the LXC cache has been prepared (119 retries left).
FAILED - RETRYING: Ensure that the LXC cache has been prepared (119 retries left).
[... identical "FAILED - RETRYING" messages from both hosts elided; retry counters descend from 118 to 95 ...]
FAILED - RETRYING: Ensure that the LXC cache has been prepared (94 retries left).
changed: [network01]
FAILED - RETRYING: Ensure that the LXC cache has been prepared (93 retries left).
[... identical retry messages elided, counting down to 1 retry left ...]
fatal: [controller01]: FAILED! => {"ansible_job_id": "161256119402.28173", "attempts": 120, "changed": false, "finished": 0, "started": 1}
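
[Editor's note] The failure above is Ansible's async-poll pattern exhausting its budget: the task re-checks a condition with `until:` / `retries: 120` / `delay:`, printing "(N retries left)" each time, and goes fatal after 120 attempts (here the LXC cache preparation job on controller01 never finished, likely a slow or unreachable image mirror). A minimal, hypothetical Python sketch of that retry loop (the function name `wait_until` and the delay value are illustrative, not from the playbook):

```python
import time

def wait_until(check, retries=120, delay=5):
    """Poll `check` until it returns True, mirroring Ansible's
    `until:` / `retries:` / `delay:` loop seen in the log above.
    Returns False once all attempts are exhausted (the fatal case)."""
    for remaining in range(retries - 1, -1, -1):
        if check():
            return True
        if remaining:
            print(f"FAILED - RETRYING: ... ({remaining} retries left).")
            time.sleep(delay)
    return False

# A condition that only succeeds on the third poll:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(flaky, retries=5, delay=0))  # True
```

With `retries=120` and a condition that never becomes true, the loop makes exactly 120 attempts and returns False, which is what the `"attempts": 120` in the fatal JSON above reports.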

TASK [lxc_hosts : Remove requiretty for sudo on centos] **************************************************************

TASK [lxc_hosts : Adjust sshd configuration in container] ************************************************************
changed: [network01] => (item={u'regexp': u'^PermitRootLogin', u'line': u'PermitRootLogin prohibit-password'})
changed: [network01] => (item={u'regexp': u'^TCPKeepAlive', u'line': u'TCPKeepAlive yes'})
changed: [network01] => (item={u'regexp': u'^UseDNS', u'line': u'UseDNS no'})
changed: [network01] => (item={u'regexp': u'^X11Forwarding', u'line': u'X11Forwarding no'})
changed: [network01] => (item={u'regexp': u'^PasswordAuthentication', u'line': u'PasswordAuthentication no'})

TASK [lxc_hosts : include_tasks] *************************************************************************************
included: /etc/ansible/roles/lxc_hosts/tasks/lxc_cache_create.yml for network01

TASK [lxc_hosts : Set backingstore fact] *****************************************************************************
ok: [network01]

TASK [lxc_hosts : Create LXC cache dir] ******************************************************************************
changed: [network01]

TASK [lxc_hosts : Remove existing cache archive] *********************************************************************
ok: [network01]

TASK [lxc_hosts : Create lxc image] **********************************************************************************
changed: [network01]

TASK [lxc_hosts : Drop container meta-data] **************************************************************************
changed: [network01] => (item=config)
changed: [network01] => (item=config.5)
changed: [network01] => (item=create-message)
changed: [network01] => (item=expiry)
changed: [network01] => (item=templates)

TASK [lxc_hosts : Set cache expiry] **********************************************************************************
ok: [network01]

TASK [lxc_hosts : Set expiry] ****************************************************************************************
changed: [network01]

TASK [lxc_hosts : Set build ID] **************************************************************************************
changed: [network01]

TASK [lxc_hosts : include_tasks] *************************************************************************************

RUNNING HANDLER [lxc_hosts : Remove rootfs archive] ******************************************************************
changed: [network01]
changed: [controller01]

TASK [lxc_hosts : Ensure SELinux module compile has finished] ********************************************************
skipping: [network01]

TASK [lxc_hosts : (RE)Gather facts post setup] ***********************************************************************
ok: [network01]

TASK [include_tasks] *************************************************************************************************
included: /opt/openstack-ansible/playbooks/common-tasks/rsyslog-client.yml for network01

TASK [Run the rsyslog client role] ***********************************************************************************

PLAY [Set lxc containers group] **************************************************************************************

TASK [Add hosts to dynamic inventory group] **************************************************************************
ok: [controller01_aodh_container-2144a74a]
ok: [controller01_horizon_container-1f0f282b]
ok: [controller01_utility_container-8ba09382]
ok: [controller01_keystone_container-88524ae6]
ok: [controller01_barbican_container-bbaa1d46]
ok: [controller01_magnum_container-4bce69c3]
ok: [controller01_repo_container-9f6233e3]
ok: [controller01_glance_container-a8a423b3]
ok: [controller01_memcached_container-64a68a57]
ok: [controller01_cinder_api_container-31138e8e]
ok: [controller01_heat_api_container-5480968a]
ok: [controller01_galera_container-e0f64b5b]
ok: [controller01_nova_api_container-a64fe31b]
ok: [controller01_rsyslog_container-5b63e9a5]
ok: [controller01_rabbit_mq_container-670106fd]
ok: [controller01_ceilometer_central_container-d81e1941]
ok: [network01_neutron_server_container-ae44a98b]

PLAY [Create container(s)] *******************************************************************************************

TASK [lxc_container_create : Pull lxc version] ***********************************************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11]
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11]
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11]
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Enable or Disable lxc three syntax] *****************************************************
ok: [controller01_aodh_container-2144a74a]
ok: [controller01_horizon_container-1f0f282b]
ok: [controller01_utility_container-8ba09382]
ok: [controller01_keystone_container-88524ae6]
ok: [controller01_barbican_container-bbaa1d46]
ok: [controller01_magnum_container-4bce69c3]
ok: [controller01_repo_container-9f6233e3]
ok: [controller01_glance_container-a8a423b3]
ok: [controller01_memcached_container-64a68a57]
ok: [controller01_cinder_api_container-31138e8e]
ok: [controller01_heat_api_container-5480968a]
ok: [controller01_galera_container-e0f64b5b]
ok: [controller01_nova_api_container-a64fe31b]
ok: [controller01_rsyslog_container-5b63e9a5]
ok: [controller01_rabbit_mq_container-670106fd]
ok: [controller01_ceilometer_central_container-d81e1941]
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Allow the usage of local facts] *********************************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11]
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11]
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11]
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Check the physical_host variable is set] ************************************************

TASK [lxc_container_create : Collect physical host facts if missing] *************************************************

TASK [lxc_container_create : Kernel version and LXC backing store check] *********************************************

TASK [lxc_container_create : Gather variables for each operating system] *********************************************
[WARNING]: Invalid request to find a file that matches a "null" value
[... the same warning repeated once per container; further occurrences elided ...]

ok: [controller01_horizon_container-1f0f282b] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_aodh_container-2144a74a] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_utility_container-8ba09382] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_keystone_container-88524ae6] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_barbican_container-bbaa1d46] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_magnum_container-4bce69c3] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_repo_container-9f6233e3] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_glance_container-a8a423b3] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_memcached_container-64a68a57] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_cinder_api_container-31138e8e] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_galera_container-e0f64b5b] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_heat_api_container-5480968a] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_nova_api_container-a64fe31b] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_rsyslog_container-5b63e9a5] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_rabbit_mq_container-670106fd] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [controller01_ceilometer_central_container-d81e1941] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)
ok: [network01_neutron_server_container-ae44a98b] => (item=/etc/ansible/roles/lxc_container_create/vars/ubuntu-18.04.yml)

TASK [lxc_container_create : Read custom facts from previous runs] ***************************************************
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11]
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11]
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11]
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Check for lxc volume group] *************************************************************
skipping: [controller01_aodh_container-2144a74a]
skipping: [controller01_horizon_container-1f0f282b]
skipping: [controller01_utility_container-8ba09382]
skipping: [controller01_keystone_container-88524ae6]
skipping: [controller01_barbican_container-bbaa1d46]
skipping: [controller01_magnum_container-4bce69c3]
skipping: [controller01_repo_container-9f6233e3]
skipping: [controller01_glance_container-a8a423b3]
skipping: [controller01_memcached_container-64a68a57]
skipping: [controller01_cinder_api_container-31138e8e]
skipping: [controller01_heat_api_container-5480968a]
skipping: [controller01_galera_container-e0f64b5b]
skipping: [controller01_nova_api_container-a64fe31b]
skipping: [controller01_rsyslog_container-5b63e9a5]
skipping: [controller01_rabbit_mq_container-670106fd]
skipping: [controller01_ceilometer_central_container-d81e1941]
skipping: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : LXC VG check] ***************************************************************************

TASK [lxc_container_create : include_tasks] **************************************************************************

TASK [lxc_container_create : include_tasks] **************************************************************************
included: /etc/ansible/roles/lxc_container_create/tasks/lxc_container_create.yml for controller01_aodh_container-2144a74a, controller01_horizon_container-1f0f282b, controller01_utility_container-8ba09382, controller01_keystone_container-88524ae6, controller01_barbican_container-bbaa1d46, controller01_magnum_container-4bce69c3, controller01_repo_container-9f6233e3, controller01_glance_container-a8a423b3, controller01_memcached_container-64a68a57, controller01_cinder_api_container-31138e8e, controller01_heat_api_container-5480968a, controller01_galera_container-e0f64b5b, controller01_nova_api_container-a64fe31b, controller01_rsyslog_container-5b63e9a5, controller01_rabbit_mq_container-670106fd, controller01_ceilometer_central_container-d81e1941, network01_neutron_server_container-ae44a98b

TASK [lxc_container_create : Container service directories] **********************************************************
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=/openstack/controller01_aodh_container-2144a74a)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=/openstack/backup/controller01_aodh_container-2144a74a)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=/openstack/controller01_horizon_container-1f0f282b)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=/openstack/log/controller01_aodh_container-2144a74a)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=/openstack/backup/controller01_horizon_container-1f0f282b)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=/openstack/controller01_utility_container-8ba09382)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=/openstack/controller01_keystone_container-88524ae6)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=/var/lib/lxc/controller01_aodh_container-2144a74a)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=/openstack/log/controller01_horizon_container-1f0f282b)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=/openstack/backup/controller01_utility_container-8ba09382)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=/openstack/controller01_barbican_container-bbaa1d46)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=/openstack/backup/controller01_keystone_container-88524ae6)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=/var/lib/lxc/controller01_horizon_container-1f0f282b)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=/openstack/log/controller01_utility_container-8ba09382)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=/openstack/controller01_magnum_container-4bce69c3)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=/openstack/backup/controller01_barbican_container-bbaa1d46)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=/openstack/log/controller01_keystone_container-88524ae6)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_utility_container-8ba09382)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=/openstack/controller01_repo_container-9f6233e3)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=/openstack/backup/controller01_magnum_container-4bce69c3)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=/openstack/log/controller01_barbican_container-bbaa1d46)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_keystone_container-88524ae6)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=/openstack/backup/controller01_repo_container-9f6233e3)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=/openstack/log/controller01_magnum_container-4bce69c3)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=/openstack/controller01_glance_container-a8a423b3)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_barbican_container-bbaa1d46)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=/openstack/log/controller01_repo_container-9f6233e3)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_magnum_container-4bce69c3)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=/openstack/controller01_memcached_container-64a68a57)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=/openstack/backup/controller01_glance_container-a8a423b3)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_repo_container-9f6233e3)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=/openstack/controller01_cinder_api_container-31138e8e)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=/openstack/backup/controller01_memcached_container-64a68a57)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=/openstack/log/controller01_glance_container-a8a423b3)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=/openstack/controller01_heat_api_container-5480968a)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=/openstack/backup/controller01_cinder_api_container-31138e8e)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=/openstack/log/controller01_memcached_container-64a68a57)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_glance_container-a8a423b3)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=/openstack/controller01_galera_container-e0f64b5b)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=/openstack/backup/controller01_heat_api_container-5480968a)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=/openstack/log/controller01_cinder_api_container-31138e8e)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_memcached_container-64a68a57)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=/openstack/backup/controller01_galera_container-e0f64b5b)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=/openstack/controller01_nova_api_container-a64fe31b)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=/openstack/log/controller01_heat_api_container-5480968a)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=/var/lib/lxc/controller01_cinder_api_container-31138e8e)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=/openstack/controller01_rsyslog_container-5b63e9a5)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=/openstack/log/controller01_galera_container-e0f64b5b)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=/openstack/backup/controller01_nova_api_container-a64fe31b)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=/var/lib/lxc/controller01_heat_api_container-5480968a)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=/openstack/controller01_rabbit_mq_container-670106fd)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=/openstack/backup/controller01_rsyslog_container-5b63e9a5)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=/var/lib/lxc/controller01_galera_container-e0f64b5b)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=/openstack/log/controller01_nova_api_container-a64fe31b)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=/openstack/controller01_ceilometer_central_container-d81e1941)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=/openstack/backup/controller01_rabbit_mq_container-670106fd)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=/openstack/log/controller01_rsyslog_container-5b63e9a5)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=/openstack/network01_neutron_server_container-ae44a98b)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=/var/lib/lxc/controller01_nova_api_container-a64fe31b)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=/openstack/log/controller01_rabbit_mq_container-670106fd)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=/openstack/backup/controller01_ceilometer_central_container-d81e1941)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_rsyslog_container-5b63e9a5)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=/openstack/backup/network01_neutron_server_container-ae44a98b)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=/var/lib/lxc/controller01_rabbit_mq_container-670106fd)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=/openstack/log/controller01_ceilometer_central_container-d81e1941)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=/openstack/log/network01_neutron_server_container-ae44a98b)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=/var/lib/lxc/controller01_ceilometer_central_container-d81e1941)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=/var/lib/lxc/network01_neutron_server_container-ae44a98b)

TASK [lxc_container_create : LXC autodev setup] **********************************************************************
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11]
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11]
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11]
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : include_tasks] **************************************************************************
included: /etc/ansible/roles/lxc_container_create/tasks/lxc_container_create_dir.yml for controller01_aodh_container-2144a74a, controller01_horizon_container-1f0f282b, controller01_utility_container-8ba09382, controller01_keystone_container-88524ae6, controller01_barbican_container-bbaa1d46, controller01_magnum_container-4bce69c3, controller01_repo_container-9f6233e3, controller01_glance_container-a8a423b3, controller01_memcached_container-64a68a57, controller01_cinder_api_container-31138e8e, controller01_heat_api_container-5480968a, controller01_galera_container-e0f64b5b, controller01_nova_api_container-a64fe31b, controller01_rsyslog_container-5b63e9a5, controller01_rabbit_mq_container-670106fd, controller01_ceilometer_central_container-d81e1941, network01_neutron_server_container-ae44a98b

TASK [lxc_container_create : Create container (dir)] *****************************************************************
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11]
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11]
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11]
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Check container state] ******************************************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11]
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11]
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11]
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Start the container if it is not already running] ***************************************
skipping: [controller01_aodh_container-2144a74a]
skipping: [controller01_horizon_container-1f0f282b]
skipping: [controller01_utility_container-8ba09382]
skipping: [controller01_keystone_container-88524ae6]
skipping: [controller01_barbican_container-bbaa1d46]
skipping: [controller01_magnum_container-4bce69c3]
skipping: [controller01_repo_container-9f6233e3]
skipping: [controller01_glance_container-a8a423b3]
skipping: [controller01_memcached_container-64a68a57]
skipping: [controller01_cinder_api_container-31138e8e]
skipping: [controller01_heat_api_container-5480968a]
skipping: [controller01_galera_container-e0f64b5b]
skipping: [controller01_nova_api_container-a64fe31b]
skipping: [controller01_rsyslog_container-5b63e9a5]
skipping: [controller01_rabbit_mq_container-670106fd]
skipping: [controller01_ceilometer_central_container-d81e1941]
skipping: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : include_tasks] **************************************************************************
included: /etc/ansible/roles/lxc_container_create/tasks/lxc_container_config.yml for controller01_aodh_container-2144a74a, controller01_horizon_container-1f0f282b, controller01_utility_container-8ba09382, controller01_keystone_container-88524ae6, controller01_barbican_container-bbaa1d46, controller01_magnum_container-4bce69c3, controller01_repo_container-9f6233e3, controller01_glance_container-a8a423b3, controller01_memcached_container-64a68a57, controller01_cinder_api_container-31138e8e, controller01_heat_api_container-5480968a, controller01_galera_container-e0f64b5b, controller01_nova_api_container-a64fe31b, controller01_rsyslog_container-5b63e9a5, controller01_rabbit_mq_container-670106fd, controller01_ceilometer_central_container-d81e1941, network01_neutron_server_container-ae44a98b

TASK [lxc_container_create : Write default container config] *********************************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.start.delay=15)
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.start.delay=15)
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.start.delay=15)
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_aodh_container-2144a74a/autodev)
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.pty.max=1024)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_horizon_container-1f0f282b/autodev)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.pty.max=1024)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.start.auto=1)
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_utility_container-8ba09382/autodev)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.pty.max=1024)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_keystone_container-88524ae6/autodev)
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_barbican_container-bbaa1d46/autodev)
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.start.auto=1)
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.start.auto=1)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_magnum_container-4bce69c3/autodev)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_repo_container-9f6233e3/autodev)
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_glance_container-a8a423b3/autodev)
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_memcached_container-64a68a57/autodev)
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_cinder_api_container-31138e8e/autodev)
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_heat_api_container-5480968a/autodev)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_galera_container-e0f64b5b/autodev)
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.start.auto=1)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_nova_api_container-a64fe31b/autodev)
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.group=onboot)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.start.auto=1)
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.group=openstack)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.start.delay=15)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.autodev=1)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_rsyslog_container-5b63e9a5/autodev)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.pty.max=1024)
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.group=onboot)
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.start.auto=1)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_rabbit_mq_container-670106fd/autodev)
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.group=openstack)
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.start.delay=15)
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.autodev=1)
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.group=onboot)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.pty.max=1024)
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.group=openstack)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.hook.autodev=/var/lib/lxc/controller01_ceilometer_central_container-d81e1941/autodev)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.autodev=1)
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item=lxc.aa_profile=lxc-openstack)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.pty.max=1024)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.hook.autodev=/var/lib/lxc/network01_neutron_server_container-ae44a98b/autodev)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.aa_profile=lxc-openstack)

TASK [lxc_container_create : Ensure containers have access RO cgroups] ***********************************************
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11]
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11]
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11]
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Ensure bind mount host directories exists] **********************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_aodh_container-2144a74a'})
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_horizon_container-1f0f282b'})
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_utility_container-8ba09382'})
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_keystone_container-88524ae6'})
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_barbican_container-bbaa1d46'})
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_magnum_container-4bce69c3'})
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_repo_container-9f6233e3'})
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_glance_container-a8a423b3'})
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_memcached_container-64a68a57'})
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_cinder_api_container-31138e8e'})
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_heat_api_container-5480968a'})
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_galera_container-e0f64b5b'})
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_nova_api_container-a64fe31b'})
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_rsyslog_container-5b63e9a5'})
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_rabbit_mq_container-670106fd'})
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_ceilometer_central_container-d81e1941'})
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/network01_neutron_server_container-ae44a98b'})

TASK [lxc_container_create : Add bind mount configuration to container] **********************************************
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_aodh_container-2144a74a'})
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_horizon_container-1f0f282b'})
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_utility_container-8ba09382'})
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_keystone_container-88524ae6'})
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_barbican_container-bbaa1d46'})
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_magnum_container-4bce69c3'})
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_repo_container-9f6233e3'})
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_glance_container-a8a423b3'})
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_memcached_container-64a68a57'})
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_cinder_api_container-31138e8e'})
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_heat_api_container-5480968a'})
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_galera_container-e0f64b5b'})
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_nova_api_container-a64fe31b'})
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_rsyslog_container-5b63e9a5'})
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_rabbit_mq_container-670106fd'})
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/controller01_ceilometer_central_container-d81e1941'})
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={u'container_directory': u'/var/backup', u'host_directory': u'/openstack/backup/network01_neutron_server_container-ae44a98b'})

TASK [lxc_container_create : Remove legacy network config for eth0] **************************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11]
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11]
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11]
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Create and start the container] *********************************************************
ok: [controller01_aodh_container-2144a74a -> 172.29.236.11]
ok: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
ok: [controller01_utility_container-8ba09382 -> 172.29.236.11]
ok: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
ok: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
ok: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
ok: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
ok: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
ok: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
ok: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
ok: [controller01_heat_api_container-5480968a -> 172.29.236.11]
ok: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
ok: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
ok: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
ok: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
ok: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Gather container facts] *****************************************************************
fatal: [controller01_aodh_container-2144a74a]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_horizon_container-1f0f282b]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_utility_container-8ba09382]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_keystone_container-88524ae6]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_barbican_container-bbaa1d46]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_magnum_container-4bce69c3]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_repo_container-9f6233e3]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_glance_container-a8a423b3]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_memcached_container-64a68a57]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_cinder_api_container-31138e8e]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_heat_api_container-5480968a]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_galera_container-e0f64b5b]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_nova_api_container-a64fe31b]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_rsyslog_container-5b63e9a5]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_rabbit_mq_container-670106fd]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
fatal: [controller01_ceilometer_central_container-d81e1941]: FAILED! => {"changed": false, "module_stderr": "mesg: ttyname failed: Inappropriate ioctl for device\n/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
ok: [network01_neutron_server_container-ae44a98b]
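The sixteen `MODULE FAILURE` results above share one root cause: `/bin/sh: 1: /usr/bin/python: not found` with `rc: 127`. The freshly created containers on controller01 have no Python interpreter, so Ansible cannot execute its modules inside them, while the network01 container (which already had one) continues normally. A minimal triage sketch, assuming the run output was saved to a file (`setup-hosts.log` is a hypothetical name, not produced by this run):

```shell
# List the hosts that hit MODULE FAILURE from a saved copy of this run
# (the filename setup-hosts.log is illustrative).
grep '^fatal:' setup-hosts.log | sed -E 's/^fatal: \[([^]]+)\].*/\1/'

# One common remediation (run on controller01, the LXC host) is to put a
# Python interpreter into each affected container, then re-run the playbook:
#   for c in $(lxc-ls -1); do
#     lxc-attach -n "$c" -- apt-get -y install python
#   done
```

The `lxc-attach` loop is commented out because it only makes sense on the deployment host; the extraction one-liner works on any saved copy of the log.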

TASK [lxc_container_create : Drop container setup script] ************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Drop container first run script] ********************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Execute first script] *******************************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Create container mac script] ************************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Set define static mac address from an existing interface] *******************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Gather hardware addresses to be used as facts] ******************************************
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
ok: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Set fixed hardware address fact] ********************************************************
ok: [network01_neutron_server_container-ae44a98b] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'MDA6MTY6M2U6MDY6NzA6NWMK', 'failed': False, u'source': u'/var/lib/lxc/network01_neutron_server_container-ae44a98b/eth1.hwaddr', 'item': {'key': u'container_address', 'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}}, u'invocation': {u'module_args': {u'src': u'/var/lib/lxc/network01_neutron_server_container-ae44a98b/eth1.hwaddr'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'network01', 'ansible_host': u'172.29.236.10'}, '_ansible_ignore_errors': None, '_ansible_item_label': {'key': u'container_address', 'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}}})
ok: [network01_neutron_server_container-ae44a98b] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'MDA6MTY6M2U6ZGM6MWY6YmUK', 'failed': False, u'source': u'/var/lib/lxc/network01_neutron_server_container-ae44a98b/eth0.hwaddr', 'item': {'key': u'lxcbr0_address', 'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}}, u'invocation': {u'module_args': {u'src': u'/var/lib/lxc/network01_neutron_server_container-ae44a98b/eth0.hwaddr'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'network01', 'ansible_host': u'172.29.236.10'}, '_ansible_ignore_errors': None, '_ansible_item_label': {'key': u'lxcbr0_address', 'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}}})
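The `content` fields in the two items above come from Ansible's `slurp` module, which returns file contents base64-encoded. Decoding them recovers the fixed MAC addresses written to the container's `eth1.hwaddr` and `eth0.hwaddr` files (the `00:16:3e` prefix is the OUI LXC uses for generated addresses):

```shell
# slurp returns file contents base64-encoded; decode to get the MACs.
echo 'MDA6MTY6M2U6MDY6NzA6NWMK' | base64 -d   # eth1 → 00:16:3e:06:70:5c
echo 'MDA6MTY6M2U6ZGM6MWY6YmUK' | base64 -d   # eth0 → 00:16:3e:dc:1f:be
```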

TASK [lxc_container_create : LXC host config for container networks] *************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=[0, {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}])
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=[1, {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}])

TASK [lxc_container_create : Container network includes] *************************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Create wiring script] *******************************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Drop veth cleanup script] ***************************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Defines a pre, post, and haltsignal configs] ********************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.hook.pre-start = /var/lib/lxc/network01_neutron_server_container-ae44a98b/veth-cleanup.sh)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.hook.post-stop = /var/lib/lxc/network01_neutron_server_container-ae44a98b/veth-cleanup.sh)
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=lxc.signal.halt = SIGRTMIN+4)

TASK [lxc_container_create : Run veth wiring] ************************************************************************

TASK [lxc_container_create : Run container veth wiring script] *******************************************************
skipping: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
skipping: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : include] ********************************************************************************
included: /etc/ansible/roles/lxc_container_create/tasks/lxc_container_network_new.yml for network01_neutron_server_container-ae44a98b

TASK [lxc_container_create : Create networkd directory] **************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Drop container network file (interfaces)] ***********************************************
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Create resolved link] *******************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Remove old route network interface(s)] **************************************************
ok: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
ok: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Remove old network interface(s)] ********************************************************
ok: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
ok: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Remove old default network interface] ***************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Add global_environment_variables to environment file] ***********************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Create localhost config] ****************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Create domain config] *******************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Ensure the hostnamed override directory exists] *****************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Create hostnamed override] **************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Generate machine-id] ********************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Ensure the dbus directory exists] *******************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Create dbus machine-id] *****************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Retrieve the machine-id] ****************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Set bind mount for journal linking] *****************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Ensure journal directory exists] ********************************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10] => (item=network01)
ok: [network01_neutron_server_container-ae44a98b -> 172.29.238.173] => (item=network01_neutron_server_container-ae44a98b)

TASK [lxc_container_create : Add bind mount configuration to container] **********************************************
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

TASK [lxc_container_create : Create post-up-down onshot service] *****************************************************
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Create pre-up-down onshot service] ******************************************************
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

TASK [lxc_container_create : Ensure sysctl can be applied] ***********************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Allow the usage of local facts] *********************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : Record the container variant deployed] **************************************************
changed: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Stop Container] **************************************************************
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11]
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11]
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11]
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]

RUNNING HANDLER [lxc_container_create : Start Container] *************************************************************
changed: [controller01_aodh_container-2144a74a -> 172.29.236.11]
changed: [controller01_horizon_container-1f0f282b -> 172.29.236.11]
changed: [controller01_utility_container-8ba09382 -> 172.29.236.11]
changed: [controller01_keystone_container-88524ae6 -> 172.29.236.11]
changed: [controller01_barbican_container-bbaa1d46 -> 172.29.236.11]
changed: [controller01_magnum_container-4bce69c3 -> 172.29.236.11]
changed: [controller01_repo_container-9f6233e3 -> 172.29.236.11]
changed: [controller01_glance_container-a8a423b3 -> 172.29.236.11]
changed: [controller01_memcached_container-64a68a57 -> 172.29.236.11]
changed: [controller01_cinder_api_container-31138e8e -> 172.29.236.11]
changed: [controller01_heat_api_container-5480968a -> 172.29.236.11]
changed: [controller01_galera_container-e0f64b5b -> 172.29.236.11]
changed: [controller01_nova_api_container-a64fe31b -> 172.29.236.11]
changed: [controller01_rsyslog_container-5b63e9a5 -> 172.29.236.11]
changed: [controller01_rabbit_mq_container-670106fd -> 172.29.236.11]
changed: [network01_neutron_server_container-ae44a98b -> 172.29.236.10]
changed: [controller01_ceilometer_central_container-d81e1941 -> 172.29.236.11]

RUNNING HANDLER [lxc_container_create : Flush addresses] *************************************************************
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth1', u'bridge': u'br-mgmt', u'netmask': u'255.255.252.0', u'type': u'veth', u'address': u'172.29.238.173'}, 'key': u'container_address'})
changed: [network01_neutron_server_container-ae44a98b] => (item={'value': {u'interface': u'eth0', u'bridge': u'lxcbr0', u'type': u'veth'}, 'key': u'lxcbr0_address'})

RUNNING HANDLER [lxc_container_create : Restart systemd-networkd] ****************************************************
changed: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Enable resolved] *************************************************************
ok: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Enable dbus] *****************************************************************
ok: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Reload systemd daemon] *******************************************************
ok: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Start hostnamed] *************************************************************
changed: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Set hostnamectl name] ********************************************************
ok: [network01_neutron_server_container-ae44a98b]

RUNNING HANDLER [lxc_container_create : Enable container sysctl service] *********************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [lxc_container_create : (RE)Gather facts post setup] ************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Wait for container connectivity] *******************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Gather facts for new container(s)] *****************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

PLAY [Configure containers default software] *************************************************************************

TASK [Remove apt package manager proxy] ******************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Update apt when proxy is added/removed] ************************************************************************

TASK [Remove yum package manager proxy] ******************************************************************************

TASK [Remove dnf package manager proxy] ******************************************************************************

TASK [Backup the default pip_install_upper_constraints] **************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Backup the default pip_default_index] **************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Test internal repo URL for the current upper constraints file] *************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Remove global requirement pins file from host] *****************************************************************

TASK [Copy global requirement pins file to host] *********************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [Set pip install upper constraints] *****************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [Fall back to repo_build_pip_default_index] *********************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [apt_package_pinning : Add apt pin preferences] *****************************************************************

TASK [openstack_hosts : Gather variables for each operating system] **************************************************
ok: [network01_neutron_server_container-ae44a98b] => (item=/etc/ansible/roles/openstack_hosts/vars/ubuntu-18.04.yml)

TASK [openstack_hosts : Allow the usage of local facts] **************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : include_tasks] *******************************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_release.yml for network01_neutron_server_container-ae44a98b

TASK [openstack_hosts : Drop openstack release file] *****************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Remove legacy openstack release file] ********************************************************

TASK [openstack_hosts : Add global_environment_variables to environment file] ****************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Configure etc hosts files] *******************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_update_hosts_file.yml for network01_neutron_server_container-ae44a98b

TASK [openstack_hosts : Drop hosts file entries script locally] ******************************************************
changed: [network01_neutron_server_container-ae44a98b -> localhost]

TASK [openstack_hosts : Copy templated hosts file entries script] ****************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Stat host file] ******************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Update hosts file] ***************************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Apply package management distro specific configuration] **************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_hosts_configure_apt.yml for network01_neutron_server_container-ae44a98b

TASK [openstack_hosts : Remove the blacklisted packages] *************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Add/Remove repositories gpg keys manually] ***************************************************

TASK [openstack_hosts : Add requirement packages (repositories gpg keys, toolkits...)] *******************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Remove any old UCA repository using the old filename] ****************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Add/Remove/Update standard and user defined repositories] ************************************
changed: [network01_neutron_server_container-ae44a98b] => (item={u'repo': u'deb http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky main', u'state': u'present', u'filename': u'uca'})

TASK [openstack_hosts : Update Apt cache] ****************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : include_tasks] *******************************************************************************

TASK [openstack_hosts : Install distro packages] *********************************************************************
changed: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : include_tasks] *******************************************************************************
included: /etc/ansible/roles/openstack_hosts/tasks/openstack_authorized_keys.yml for network01_neutron_server_container-ae44a98b

TASK [openstack_hosts : Ensure ssh directory] ************************************************************************
ok: [network01_neutron_server_container-ae44a98b]

TASK [openstack_hosts : Update SSH keys] *****************************************************************************
skipping: [network01_neutron_server_container-ae44a98b]
[WARNING]: Could not match supplied host pattern, ignoring: nspawn_hosts


PLAY [Additional nspawn host setup] **********************************************************************************
skipping: no hosts matched

PLAY [Gather nspawn container host facts] ****************************************************************************
skipping: no hosts matched

PLAY [Set nspawn containers group] ***********************************************************************************

TASK [Add hosts to dynamic inventory group] **************************************************************************
[WARNING]: Could not match supplied host pattern, ignoring: all_nspawn_containers


PLAY [Create container(s)] *******************************************************************************************
skipping: no hosts matched

PLAY [Rescan storage quotas] *****************************************************************************************
skipping: no hosts matched

PLAY [Configure containers default software] *************************************************************************
skipping: no hosts matched

PLAY RECAP ***********************************************************************************************************
compute01 : ok=123 changed=2 unreachable=0 failed=0
controller01 : ok=219 changed=76 unreachable=0 failed=1
controller01_aodh_container-2144a74a : ok=21 changed=8 unreachable=0 failed=1
controller01_barbican_container-bbaa1d46 : ok=21 changed=8 unreachable=0 failed=1
controller01_ceilometer_central_container-d81e1941 : ok=21 changed=8 unreachable=0 failed=1
controller01_cinder_api_container-31138e8e : ok=21 changed=8 unreachable=0 failed=1
controller01_galera_container-e0f64b5b : ok=21 changed=8 unreachable=0 failed=1
controller01_glance_container-a8a423b3 : ok=21 changed=8 unreachable=0 failed=1
controller01_heat_api_container-5480968a : ok=21 changed=8 unreachable=0 failed=1
controller01_horizon_container-1f0f282b : ok=21 changed=8 unreachable=0 failed=1
controller01_keystone_container-88524ae6 : ok=21 changed=8 unreachable=0 failed=1
controller01_magnum_container-4bce69c3 : ok=21 changed=8 unreachable=0 failed=1
controller01_memcached_container-64a68a57 : ok=21 changed=8 unreachable=0 failed=1
controller01_nova_api_container-a64fe31b : ok=21 changed=8 unreachable=0 failed=1
controller01_rabbit_mq_container-670106fd : ok=21 changed=8 unreachable=0 failed=1
controller01_repo_container-9f6233e3 : ok=21 changed=8 unreachable=0 failed=1
controller01_rsyslog_container-5b63e9a5 : ok=21 changed=8 unreachable=0 failed=1
controller01_utility_container-8ba09382 : ok=21 changed=8 unreachable=0 failed=1
network01 : ok=224 changed=84 unreachable=0 failed=0
network01_neutron_server_container-ae44a98b : ok=95 changed=45 unreachable=0 failed=0


EXIT NOTICE [Playbook execution failure] **************************************
===============================================================================


http://paste.openstack.org/show/736181/