Dataset columns (with observed value or length ranges):

- question_id: int64 (82.3k to 79.7M)
- title_clean: string, lengths 15 to 158
- body_clean: string, lengths 62 to 28.5k
- full_text: string, lengths 95 to 28.5k
- tags: string, lengths 4 to 80
- score: int64 (0 to 1.15k)
- view_count: int64 (22 to 1.62M)
- answer_count: int64 (0 to 30)
- link: string, lengths 58 to 125
55,425,311
Ansible: iterate over a list of dictionaries - loop vs. with_items
I'm getting different results when using `loop` vs. `with_items` while trying to iterate over a list of dictionaries. I've tried `loop` with `dict2items` (the structure isn't a dictionary, and it tells me as much. heh) and `loop` with the `flatten` filter. Here is the list of dictionaries:

```
{
    "msg": [
        { "id": "id1", "ip": "ip1", "name": "name1" },
        { "id": "id2", "ip": "ip2", "name": "name2" },
        { "id": "id3", "ip": "ip3", "name": "name3" },
        { "id": "id4", "ip": "ip4", "name": "name4" }
    ]
}
```

Here is the task in the playbook:

```yaml
- name: Add privateIp windows_instances to inventory
  add_host:
    name: "{{ item.ip }}"
    aws_name: "{{ item.name }}"
    groups: windows_instances
    aws_instanceid: "{{ item.id }}"
    ansible_user: "{{ windows_user }}"
    ansible_password: "{{ windows_password }}"
    ansible_port: 5985
    ansible_connection: winrm
    ansible_winrm_server_cert_validation: ignore
  loop:
    - "{{ list1 | flatten(levels=1) }}"
```

When attempting to run the above code, I get a "list object has no attribute" error. I've tried different flatten levels to no avail. However, if I simply replace the `loop` above with:

```yaml
with_items:
  - "{{ list1 }}"
```

everything works perfectly. I'm missing something in the with_items-to-loop translation here...
loops, ansible
16
46,218
1
https://stackoverflow.com/questions/55425311/ansible-iterate-over-a-list-of-dictionaries-loop-vs-with-items
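The key difference behind the question above: `with_items` applies a single-level flatten to the value it is given, so the extra list nesting introduced by the leading dash is absorbed, while `loop` takes the value verbatim. A sketch of the usual fix: pass the list directly to `loop` with no leading dash.

```yaml
# Equivalent to the working with_items version: list1 itself is the
# loop list, rather than a one-element list containing list1.
loop: "{{ list1 }}"
```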
42,836,598
How to download and install ansible modules?
I have found this DNSimple Ansible module: [URL] but I can't find anywhere on that page to download and install it. How do I go about downloading and installing Ansible modules like this? Thanks.
ansible
16
34,338
3
https://stackoverflow.com/questions/42836598/how-to-download-and-install-ansible-modules
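A sketch of the usual situation: many modules listed in the docs (dnsimple included) ship with Ansible itself, so there is often nothing to install beyond the module's Python dependency. For modules obtained separately, one common convention is a `library` directory next to the playbook, which Ansible searches for modules automatically (file names below are illustrative).

```shell
# Ansible picks up custom modules from a "library" directory
# alongside the playbook:
mkdir -p library
# then copy the downloaded module file into it, e.g.:
# cp ~/Downloads/dnsimple.py library/
```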
43,438,519
Check if Ansible pipelining is enabled / working
Ansible has the ability to work faster when pipelining is enabled. However, there are some requirements to make this work: pipelining must be enabled in ansible.cfg or in the inventory file, and requiretty must be disabled. I already checked -vvvv; nothing showed up concerning 'pipelining'. Also, I do not notice any difference in speed. Because of all this I would like to know: is there a way to verify Ansible is using the pipelining ability?
ansible, pipelining
16
6,024
1
https://stackoverflow.com/questions/43438519/check-if-ansible-pipelining-is-enabled-working
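One way to verify the setting is actually picked up (assuming Ansible 2.4 or later): enable pipelining in ansible.cfg and run `ansible-config dump --only-changed`, which lists every setting that differs from the default. With pipelining active, verbose `-vvv` runs also stop showing the per-task PUT of the module payload to the remote temp directory.

```
# ansible.cfg
[ssh_connection]
pipelining = True
```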
39,873,635
Ansible, role not found error
I'm trying to run the following playbook against localhost to provision a Vagrant machine:

```yaml
---
- hosts: all
  become: yes
  roles:
    - base
    - jenkins
```

I have cloned the necessary roles from GitHub, and they reside at the relative path roles/{role name}. Executing the following command:

```
ansible-playbook -i "localhost," -c local playbook.yml
```

outputs this error:

```
==> default: ERROR! the role 'geerlingguy.java' was not found in /home/vagrant/provisioning/roles:/home/vagrant/provisioning:/etc/ansible/roles:/home/vagrant/provisioning/roles
==> default:
==> default: The error appears to have been in '/home/vagrant/provisioning/roles/jenkins/meta/main.yml': line 3, column 5, but may
==> default: be elsewhere in the file depending on the exact syntax problem.
==> default:
==> default: The offending line appears to be:
==> default:
==> default: dependencies:
==> default:   - geerlingguy.java
==> default:     ^ here
```

I cloned the missing dependency from GitHub and tried placing it at the relative paths roles/java and roles/geerlingguy/java, but neither solved the problem, and the error stays the same. I want to keep all roles locally in the synced provisioning folder, without using ansible-galaxy at runtime, to make the provisioning method as self-contained as possible. Here is the provision folder structure as it is now:

```
.
├── playbook.yml
└── roles
    ├── base
    │   └── tasks
    │       └── main.yml
    ├── java
    │   ├── defaults
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── README.md
    │   ├── tasks
    │   │   ├── main.yml
    │   │   ├── setup-Debian.yml
    │   │   ├── setup-FreeBSD.yml
    │   │   └── setup-RedHat.yml
    │   ├── templates
    │   │   └── java_home.sh.j2
    │   ├── tests
    │   │   └── test.yml
    │   └── vars
    │       ├── Debian.yml
    │       ├── Fedora.yml
    │       ├── FreeBSD.yml
    │       ├── RedHat.yml
    │       ├── Ubuntu-12.04.yml
    │       ├── Ubuntu-14.04.yml
    │       └── Ubuntu-16.04.yml
    └── jenkins
        ├── defaults
        │   └── main.yml
        ├── handlers
        │   └── main.yml
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── main.yml
        │   ├── plugins.yml
        │   ├── settings.yml
        │   ├── setup-Debian.yml
        │   └── setup-RedHat.yml
        ├── templates
        │   └── basic-security.groovy
        ├── tests
        │   ├── requirements.yml
        │   ├── test-http-port.yml
        │   ├── test-jenkins-version.yml
        │   ├── test-plugins-with-pinning.yml
        │   ├── test-plugins.yml
        │   ├── test-prefix.yml
        │   └── test.yml
        └── vars
            ├── Debian.yml
            └── RedHat.yml
```
configuration, vagrant, ansible, provisioning
16
51,719
4
https://stackoverflow.com/questions/39873635/ansible-role-not-found-error
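The usual fix for the question above: Ansible matches the dependency name from meta/main.yml literally against each roles_path entry, so the role directory must carry the full dotted name, not a nested path. A sketch (the mkdir stands in for the cloned role checkout):

```shell
# The dependency is named "geerlingguy.java", so the directory must be
# roles/geerlingguy.java -- not roles/java or roles/geerlingguy/java:
mkdir -p roles/java
mv roles/java roles/geerlingguy.java
```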
30,960,904
Ansible xml manipulation similar to lineinfile
In Ansible, I'm looking for a technique that works similarly to lineinfile or replace but for XML files, for cases where using templates is not an option. This seems like a very common need. With XML files, though, it is necessary to specify an XPath to guarantee the element is present/absent at the correct place in the DOM. The solution needs to provide a mechanism for replacing an existing node that could look quite different from the target node. A trivial example XML file:

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<datasources-configuration xmlns:myns="[URL]">
    <datasources>
        <!-- various other xml -->
        <datasource>
            <name>MyDS</name>
            <jdbcUrl>...</jdbcUrl>
        </datasource>
    </datasources>
    <!-- various other xml -->
</datasources-configuration>
```

I want to be able to ensure that a full multiline block of XML gets inserted/replaced in the target XML file when a certain XPath expression is matched. For example, to add the following datasource to datasources:

```xml
<datasource>
    <name>AnotherDS</name>
    <jdbcUrl>...</jdbcUrl>
</datasource>
```

The best that I've seen is this custom module, which breaks on its own examples: [URL] Does such a module exist, or are there solution recommendations?
xml, xpath, replace, ansible
16
17,541
2
https://stackoverflow.com/questions/30960904/ansible-xml-manipulation-similar-to-lineinfile
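Since this question was asked, an lxml-backed `xml` module has landed in Ansible (in newer releases it lives in community.general). A sketch of adding a child element under an XPath; the path and child structure below are illustrative, and the exact `add_children` syntax should be checked against the module docs for your Ansible version:

```yaml
- name: Ensure an AnotherDS datasource exists under datasources
  xml:
    path: /etc/myapp/datasources.xml
    xpath: /datasources-configuration/datasources
    add_children:
      - datasource:
          _:
            - name: AnotherDS
            - jdbcUrl: "jdbc:..."
```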
47,352,361
How do I apply an Ansible with_items loop to the included tasks?
The documentation for import_tasks mentions: "Any loops, conditionals and most other keywords will be applied to the included tasks, not to this statement itself." This is exactly what I want. Unfortunately, when I attempt to make import_tasks work with a loop:

```yaml
- import_tasks: msg.yml
  with_items:
    - 1
    - 2
    - 3
```

I get the message:

```
ERROR! You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead.
```

I don't want the include_tasks behaviour, as this applies the loop to the included file, and duplicates the tasks. I specifically want to run the first task for each loop variable (as one task with the standard with_items output), then the second, and so on. How can I get this behaviour? Specifically, consider the following: suppose I have the following files.

playbook.yml:

```yaml
---
- hosts: 192.168.33.100
  gather_facts: no
  tasks:
    - include_tasks: msg.yml
      with_items:
        - 1
        - 2
```

msg.yml:

```yaml
---
- name: Message 1
  debug:
    msg: "Message 1: {{ item }}"

- name: Message 2
  debug:
    msg: "Message 2: {{ item }}"
```

I would like the printed messages to be:

```
Message 1: 1
Message 1: 2
Message 2: 1
Message 2: 2
```

However, with import_tasks I get an error, and with include_tasks I get:

```
Message 1: 1
Message 2: 1
Message 1: 2
Message 2: 2
```
ansible
16
51,606
2
https://stackoverflow.com/questions/47352361/how-do-i-apply-an-ansible-with-items-loop-to-the-included-tasks
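One workaround for the question above, sketched with a hypothetical `msg_items` variable carrying the loop list: put the loop on each task inside the included file rather than on the include statement itself, which produces exactly the task-by-task ordering asked for.

```yaml
# msg.yml -- each task loops over the whole list itself
- name: Message 1
  debug:
    msg: "Message 1: {{ item }}"
  loop: "{{ msg_items }}"

- name: Message 2
  debug:
    msg: "Message 2: {{ item }}"
  loop: "{{ msg_items }}"

# playbook.yml -- import once, supplying the list:
# - import_tasks: msg.yml
#   vars:
#     msg_items: [1, 2]
```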
31,343,753
Ansible variable defined in group_vars/all not found
Assume the Ansible structure:

```
.
├── group_vars
│   └── all
└── site.yml
```

where all contains my_test_variable: yes, and site.yml is:

```yaml
- hosts: all
  tasks:
    - name: Variable test
      debug: msg={{ my_test_variable }}
```

I'm using Vagrant to run it locally, so the command looks like:

```
$ ansible-playbook site.yml -i /path-to-vagrant/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=/path-to-vagrant/.vagrant/machines/default/virtualbox/private_key -u vagrant
```

Vagrant's generated inventory file:

```
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
```

And the output:

```
PLAY: ***************************************************************************

TASK [setup] ********************************************************************
ok: [default]

TASK [Variable test] ************************************************************
fatal: [default]: FAILED! => {"msg": "ERROR! the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'my_test_variable' is undefined", "failed": true}

PLAY RECAP **********************************************************************
default : ok=1 changed=0 unreachable=0 failed=1
```

I know this Vagrant inventory is not inside any group (because there isn't any), but all groups inherit from all, right? Why doesn't it work? What did I miss? I'm quite new to Ansible. I've read a lot of the docs, a couple of examples, and even some SO questions like "Ansible doesn't pick up group_vars without loading it manually" (not quite my problem) and "Cannot get ansible to recognize group variables" (close, but not there either).

Edit: I'm following the recommended project structure from Ansible's docs, and the variables doc entry mentions group_vars/all: "Site wide defaults should be defined as a 'group_vars/all' setting." Even though there is no direct reference on how to load these default values, I assume I don't have to explicitly add them (as suggested in the answer by @thiago-borges). Or do I? The reason for this is that I intend to have group vars inheriting from all, like:

```
.
├── group_vars
│   ├── all
│   ├── production
│   └── staging
└── site.yml
```

And when I execute ansible-playbook for each, different files are loaded without my having to explicitly set them in the play file, e.g.:

```
ansible-playbook -i production site.yml
```

Edit 2: The issue was a bug in Ansible. After an update it worked as documented. Should I delete this question then?
vagrant, ansible
16
26,597
4
https://stackoverflow.com/questions/31343753/ansible-variable-defined-in-group-vars-all-not-found
26,491,295
ansible: sort of list comprehension?
Given this inventory:

```
[webservers]
10.0.0.51 private_ip='X.X.X.X'
10.0.0.52 private_ip='Y.Y.Y.Y'
10.0.0.53 private_ip='Z.Z.Z.Z'
```

How can I get a list of the private IPs of the webservers?

```yaml
webservers_private_ips: "{{ }}"  # ['X.X.X.X', 'Y.Y.Y.Y', 'Z.Z.Z.Z']
```

I know groups['webservers'] will give me the list ['10.0.0.51', '10.0.0.52', '10.0.0.53'], and I can get the private_ip of one with:

```yaml
{{ hostvars[item]['private_ip'] }}
with_items: groups['webservers']
```

But I would like to declare a variable in my vars file directly, not have a task register it. It would be nice if something like the following could be done:

```yaml
webservers_private_ips: "{{ hostvars[item]['private_ip'] }} for item in groups['webservers']"
```
ansible
16
13,047
3
https://stackoverflow.com/questions/26491295/ansible-sort-of-list-comprehension
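Jinja2's map filter combined with Ansible's extract filter (available since Ansible 2.1) expresses exactly this comprehension, so the variable can be declared directly in a vars file:

```yaml
webservers_private_ips: "{{ groups['webservers'] | map('extract', hostvars, 'private_ip') | list }}"
```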
54,640,658
How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower
I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it. I get this warning no matter what changes I make to the inventory (hosts) file.

```
$ ansible-playbook 2.7.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin

PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
 [WARNING]: Could not match supplied host pattern, ignoring: bigip

PLAY [Sample pool playbook] **************************************************** 17:05:43
skipping: no hosts matched
```

I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file. Here's my hosts file:

```
192.168.68.253
192.168.68.254
192.168.1.165

[centos]
dad2 ansible_ssh_host=192.168.1.165

[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
```

Here's my playbook:

```yaml
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{ inventory_hostname }}'
        validate_certs: no
```
ansible, ansible-inventory
16
138,649
5
https://stackoverflow.com/questions/54640658/how-to-fix-could-not-match-supplied-host-pattern-ignoring-bigip-errors-work
14,008,521
Please explain this algorithm to get all permutations of a String
The following code generates all the permutations of a string:

```python
def permutations(word):
    if len(word) <= 1:
        return [word]

    # get all permutations of length N-1
    perms = permutations(word[1:])
    char = word[0]
    result = []

    # iterate over all permutations of length N-1
    for perm in perms:
        # insert the character into every possible location
        for i in range(len(perm) + 1):
            result.append(perm[:i] + char + perm[i:])
    return result
```

Can you explain how it works? I don't understand the recursion.
ansible, python, string, recursion, permutation
15
17,913
2
https://stackoverflow.com/questions/14008521/please-explain-this-algorithm-to-get-all-permutations-of-a-string
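A short trace makes the recursion in the question above concrete: for "abc" the function first strips "a" and recurses down to the single-character base case, then builds each longer answer by inserting the stripped-off character at every position of each shorter permutation. The function below is the same one from the question, reproduced so the trace is runnable.

```python
def permutations(word):
    if len(word) <= 1:
        return [word]
    perms = permutations(word[1:])   # e.g. "abc" recurses on "bc"
    char = word[0]
    result = []
    for perm in perms:
        # insert char at positions 0..len(perm) inclusive
        for i in range(len(perm) + 1):
            result.append(perm[:i] + char + perm[i:])
    return result

# permutations("c")  == ["c"]                      (base case)
# permutations("bc") == ["bc", "cb"]               (insert "b" into "c")
# permutations("abc") inserts "a" at 3 positions in each of those:
print(permutations("abc"))
# ['abc', 'bac', 'bca', 'acb', 'cab', 'cba']
```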
37,142,357
Error: Instance does not have a volume attached at root (/dev/sda1)
I'm getting an error while starting an EC2 instance after attaching a volume. I have defined device_name as "/dev/sda1", but it is still picking up "/dev/sdf". Here is my code:

```yaml
ec2_vol:
  instance: "{{ instance_id }}"
  id: "{{ ec2_vol.volume_id }}"
  device_name: "/dev/sda1"
  region: "{{ aws_region }}"
```
amazon-ec2, ansible
15
35,801
10
https://stackoverflow.com/questions/37142357/error-instance-does-not-have-a-volume-attached-at-root-dev-sda1
37,355,099
Ansible: playbook calling Role in a directory that is in the roles directory
I would like to shape the directory structure of my Ansible roles and playbooks. Currently I have a directory structure like:

```
group_vars/
    all
    group-one/
        group-vars.yml
        group-vault.yml
    ...
host_vars/
    server1.yml
plays/
    java_plays/
        deploy_fun_java_stuff.yml
    deploy_playbook.yml
roles/
    role1/
        tasks/
            main.yml
        handlers/
        (the rest of the needed directories)
    role2/
    java/
        java_role1/
            tasks/
                main.yml
            handlers/
            (the rest of the needed directories)
```

I would like to be able to call the role java_role1 in the play deploy_fun_java_stuff.yml. I can call:

```yaml
---
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: role1 }
```

but I cannot call (I've tried multiple ways):

```yaml
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: java/java_role1 }
```

Is this possible? What I really want to accomplish is to structure my plays in an orderly fashion along with my roles. I will end up with a large number of both roles and plays, and I would like to organize them. I can handle this with a separate ansible.cfg file for each play directory, but I cannot add those cfg files to Ansible Tower (so I'm looking for an alternate solution).
ansible, ansible-tower
15
70,446
3
https://stackoverflow.com/questions/37355099/ansible-playbook-calling-role-in-a-directory-that-is-in-the-roles-directory
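One common approach, sketched with illustrative paths: add the nested directory to roles_path in a single project-level ansible.cfg, so roles in either location resolve by bare name (e.g. `role: java_role1` finds roles/java/java_role1).

```
# ansible.cfg at the project root
[defaults]
roles_path = roles:roles/java
```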
39,423,676
Get sorted list of folders with Ansible
I have OS X "El Capitan" 10.11.6, and I am using Ansible 2.1.1.0 to run some maintenance tasks on a remote Linux server (Ubuntu 16.04 Xenial). I am trying to get the following list of folders sorted on the remote machine, so I can remove the old ones when needed:

```
/releases/0.0.0
/releases/0.0.1
/releases/0.0.10
/releases/1.0.0
/releases/1.0.5
/releases/2.0.0
```

I have been trying with the find module in Ansible, but it returns an unsorted list. Is there an easy way to achieve this with Ansible?
ansible
15
28,722
4
https://stackoverflow.com/questions/39423676/get-sorted-list-of-folders-with-ansible
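A sketch of one way to do this with the `find` module plus Jinja2's `sort` filter; note that `sort` is lexicographic, so a name like `0.0.10` would sort before `0.0.2` (the version list in the post happens to sort correctly either way):

```yaml
- name: Collect release directories on the remote host
  find:
    paths: /releases
    file_type: directory
  register: releases

- name: Show the paths sorted (lexicographically)
  debug:
    msg: "{{ releases.files | map(attribute='path') | list | sort }}"
```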
29,652,634
docker with ansible wait for database
I am trying to deploy Docker containers with Ansible. I have one database container and, in another container, my web app, and I am trying to link the two. The problem is that the database container doesn't have time to configure itself before the web container starts. My Ansible playbook looks something like: ... - name: run mysql in docker container docker: image: "mysql:5.5" name: database env: "MYSQL_ROOT_PASSWORD=password" state: running - name: run application containers docker: name: "application" image: "myapp" ports: - "8080:8080" links: - "database:db" state: running How can I determine that the database has started? I tried the wait_for module, but that didn't work. I don't want to set a timeout; that is not a good option for me.
docker with ansible wait for database I am trying to deploy Docker containers with Ansible. I have one database container and, in another container, my web app, and I am trying to link the two. The problem is that the database container doesn't have time to configure itself before the web container starts. My Ansible playbook looks something like: ... - name: run mysql in docker container docker: image: "mysql:5.5" name: database env: "MYSQL_ROOT_PASSWORD=password" state: running - name: run application containers docker: name: "application" image: "myapp" ports: - "8080:8080" links: - "database:db" state: running How can I determine that the database has started? I tried the wait_for module, but that didn't work. I don't want to set a timeout; that is not a good option for me.
docker, ansible
15
13,050
6
https://stackoverflow.com/questions/29652634/docker-with-ansible-wait-for-database
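One hedged approach that avoids a blind `wait_for` delay is to poll the database with `until`/`retries` on a cheap health command before starting the app container; the `mysqladmin ping` probe and credentials here are assumptions based on the MySQL image in the post:

```yaml
- name: Wait until MySQL inside the container accepts connections
  command: docker exec database mysqladmin -uroot -ppassword ping
  register: db_ping
  until: db_ping.rc == 0
  retries: 30    # still an upper bound, but driven by readiness, not a fixed sleep
  delay: 2
  changed_when: false
```

`retries * delay` is technically still a timeout, but the task proceeds the moment the database actually answers.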
34,721,001
Check if directory is a mount point?
I need to check if a directory is a mount point before doing some other tasks. I have been looking around the documentation and it only seems that you can create/destroy mount points but not just check if one exists. From the link below. [URL] I am wondering if there is any way to check it exists with ansible, or will it have to be some other language called from ansible.
Check if directory is a mount point? I need to check if a directory is a mount point before doing some other tasks. I have been looking around the documentation and it only seems that you can create/destroy mount points but not just check if one exists. From the link below. [URL] I am wondering if there is any way to check it exists with ansible, or will it have to be some other language called from ansible.
ansible, mount
15
29,288
5
https://stackoverflow.com/questions/34721001/check-if-directory-is-a-mount-point
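Ansible's gathered facts already include the mounted filesystems, so one way to test a path (here the hypothetical `/data`) without extra modules is to look it up in `ansible_mounts`:

```yaml
- name: Do something only if /data is a mount point
  debug:
    msg: "/data is a mount point"
  when: "'/data' in (ansible_mounts | map(attribute='mount') | list)"
```

This requires fact gathering (`gather_facts: yes`, the default) on the play.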
41,284,602
blockinfile keeps adding the block
I'd like to add a block of text to my ElasticSearch configuration using blockinfile, but every time I run my playbook, the block gets added to the file -- even when it already exists. This is a problem because ElasticSearch doesn't just take the last value, it chokes on startup saying "you have multiple entries for this value" (or something similar). My play looks like this: - name: configure elasticsearch blockinfile: dest: /etc/elasticsearch/elasticsearch.yml marker: "## added by ansible configuration" block: | network.host: 0.0.0.0 path.data: /var/lib path.logs: /var/log/elasticsearch path.repo: /home/chris/elastic-backups state: present But after running my playbook a second time, my elasticsearch.yml file looks like: ## added by ansible configuration network.host: 0.0.0.0 path.data: /var/lib path.logs: /var/log/elasticsearch path.repo: /home/chris/elastic-backups ## added by ansible configuration network.host: 0.0.0.0 path.data: /var/lib path.logs: /var/log/elasticsearch path.repo: /home/chris/elastic-backups ## added by ansible configuration Is there a way to only add the block if it does not exist yet?
blockinfile keeps adding the block I'd like to add a block of text to my ElasticSearch configuration using blockinfile, but every time I run my playbook, the block gets added to the file -- even when it already exists. This is a problem because ElasticSearch doesn't just take the last value, it chokes on startup saying "you have multiple entries for this value" (or something similar). My play looks like this: - name: configure elasticsearch blockinfile: dest: /etc/elasticsearch/elasticsearch.yml marker: "## added by ansible configuration" block: | network.host: 0.0.0.0 path.data: /var/lib path.logs: /var/log/elasticsearch path.repo: /home/chris/elastic-backups state: present But after running my playbook a second time, my elasticsearch.yml file looks like: ## added by ansible configuration network.host: 0.0.0.0 path.data: /var/lib path.logs: /var/log/elasticsearch path.repo: /home/chris/elastic-backups ## added by ansible configuration network.host: 0.0.0.0 path.data: /var/lib path.logs: /var/log/elasticsearch path.repo: /home/chris/elastic-backups ## added by ansible configuration Is there a way to only add the block if it does not exist yet?
ansible
15
13,765
2
https://stackoverflow.com/questions/41284602/blockinfile-keeps-adding-the-block
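The usual explanation for this symptom is that `blockinfile` needs the `{mark}` placeholder in `marker` so it can write distinct BEGIN/END lines and recognize the block on the next run; without it, the begin and end markers are identical and the module cannot find the existing block. A sketch of the corrected task:

```yaml
- name: configure elasticsearch
  blockinfile:
    dest: /etc/elasticsearch/elasticsearch.yml
    marker: "## {mark} added by ansible configuration"
    block: |
      network.host: 0.0.0.0
      path.data: /var/lib
      path.logs: /var/log/elasticsearch
      path.repo: /home/chris/elastic-backups
    state: present
```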
20,919,222
Ansible hangs when starting node.js server
I would like to start my node.js app in an ansible playbook. Right now, the final directive looks like this: - name: start node server shell: chdir=${app_path} npm start& The problem is that ansible never returns from this. How can I make it continue?
Ansible hangs when starting node.js server I would like to start my node.js app in an ansible playbook. Right now, the final directive looks like this: - name: start node server shell: chdir=${app_path} npm start& The problem is that ansible never returns from this. How can I make it continue?
node.js, ansible
15
7,675
4
https://stackoverflow.com/questions/20919222/ansible-hangs-when-starting-node-js-server
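A common pattern for fire-and-forget processes is `async` with `poll: 0`, so Ansible launches the server and moves on without waiting; the log-file redirection below is an assumption:

```yaml
- name: start node server without blocking the play
  shell: "cd {{ app_path }} && nohup npm start > /var/log/npm-start.log 2>&1 &"
  async: 10   # give the process a moment to detach
  poll: 0     # do not wait for completion
```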
29,161,274
How to check whether an item is present in an Ansible array?
Let's say I have the following example, storing all git config values in an Ansible variable: - shell: git config --global --list register: git_config_list Ansible stores the result of this command in the git_config_list variable, and one of the items is stdout_lines , containing the output of the command in an array of entries, e.g. [ "user.name=Foo Bar", "user.email=foo@example.com" ] How can I check whether a certain value is already set, e.g. for verifying that user.name has a value? Is there a way to call something like contains on the array, combined with a regular expression, allowing me to find the value I'm looking for? Or do I have to loop over the stdout_lines entries to find what I'm looking for? An example on how to do something like this would be appreciated.
How to check whether an item is present in an Ansible array? Let's say I have the following example, storing all git config values in an Ansible variable: - shell: git config --global --list register: git_config_list Ansible stores the result of this command in the git_config_list variable, and one of the items is stdout_lines , containing the output of the command in an array of entries, e.g. [ "user.name=Foo Bar", "user.email=foo@example.com" ] How can I check whether a certain value is already set, e.g. for verifying that user.name has a value? Is there a way to call something like contains on the array, combined with a regular expression, allowing me to find the value I'm looking for? Or do I have to loop over the stdout_lines entries to find what I'm looking for? An example on how to do something like this would be appreciated.
ansible
15
43,580
3
https://stackoverflow.com/questions/29161274/how-to-check-whether-an-item-is-present-in-an-ansible-array
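One sketch using Jinja2's `select` filter with the `match` regular-expression test, so no explicit loop over `stdout_lines` is needed:

```yaml
- name: Set user.name only when it is not configured yet
  shell: git config --global user.name "Foo Bar"
  when: git_config_list.stdout_lines
        | select('match', '^user\.name=')
        | list | length == 0
```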
35,988,567
Ansible doesn't load ~/.profile
Why doesn't Ansible source the ~/.profile file before executing the template module on a host? Remote host ~/.profile : export ENV_VAR=/usr/users/toto A single Ansible task: - template: src=file1.template dest={{ ansible_env.ENV_VAR }}/file1 Ansible fails with: fatal: [distant-host] => One or more undefined variables: 'dict object' has no attribute 'ENV_VAR'
Ansible doesn't load ~/.profile Why doesn't Ansible source the ~/.profile file before executing the template module on a host? Remote host ~/.profile : export ENV_VAR=/usr/users/toto A single Ansible task: - template: src=file1.template dest={{ ansible_env.ENV_VAR }}/file1 Ansible fails with: fatal: [distant-host] => One or more undefined variables: 'dict object' has no attribute 'ENV_VAR'
ansible
15
19,797
8
https://stackoverflow.com/questions/35988567/ansible-doesnt-load-profile
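Ansible runs modules over a non-interactive, non-login shell, so `~/.profile` is never sourced and its exports never reach `ansible_env`. One workaround is to source it explicitly in a `shell` task and register the value (variable names mirror the post):

```yaml
- name: Read ENV_VAR from the remote ~/.profile
  shell: ". ~/.profile && echo $ENV_VAR"
  register: env_var_out
  changed_when: false

- name: Render the template into the directory from the profile
  template:
    src: file1.template
    dest: "{{ env_var_out.stdout }}/file1"
```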
60,162,859
ERROR! 'sudo' is not a valid attribute for a Play
I have an ansible play file that has to perform two tasks: first, get the disk usage on the local machine; second, get the disk usage of a remote machine and install apache2 there. When I try to run the file I get the error "ERROR! 'sudo' is not a valid attribute for a Play" When I remove the sudo and apt sections from the yml file, it runs fine. I am using ansible 2.9.4. Below are the two playbook files: File running without any error, --- - connection: local hosts: localhost name: play1 tasks: - command: "df -h" name: "Find the disk space available" - command: "ls -lrt" name: "List all the files" - name: "List All the Files" register: output shell: "ls -lrt" - debug: var=output.stdout_lines - hosts: RemoteMachine1 name: play2 tasks: - name: "Find the disk space" command: "df -h" register: result - debug: var=result.stdout_lines File running with error: --- - connection: local hosts: localhost name: play1 tasks: - command: "df -h" name: "Find the disk space available" - command: "ls -lrt" name: "List all the files" - name: "List All the Files" register: output shell: "ls -lrt" - debug: var=output.stdout_lines - hosts: RemoteMachine1 name: play2 sudo: yes tasks: - name: "Find the disk space" command: "df -h" register: result - name: "Install Apache in the remote machine" apt: name=apache2 state=latest - debug: var=result.stdout_lines Complete error message: ERROR! 'sudo' is not a valid attribute for a Play The error appears to be in '/home/Documents/ansible/play.yml': line 20, column 3, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: - hosts: RemoteMachine1 ^ here
ERROR! 'sudo' is not a valid attribute for a Play I have an ansible play file that has to perform two tasks: first, get the disk usage on the local machine; second, get the disk usage of a remote machine and install apache2 there. When I try to run the file I get the error "ERROR! 'sudo' is not a valid attribute for a Play" When I remove the sudo and apt sections from the yml file, it runs fine. I am using ansible 2.9.4. Below are the two playbook files: File running without any error, --- - connection: local hosts: localhost name: play1 tasks: - command: "df -h" name: "Find the disk space available" - command: "ls -lrt" name: "List all the files" - name: "List All the Files" register: output shell: "ls -lrt" - debug: var=output.stdout_lines - hosts: RemoteMachine1 name: play2 tasks: - name: "Find the disk space" command: "df -h" register: result - debug: var=result.stdout_lines File running with error: --- - connection: local hosts: localhost name: play1 tasks: - command: "df -h" name: "Find the disk space available" - command: "ls -lrt" name: "List all the files" - name: "List All the Files" register: output shell: "ls -lrt" - debug: var=output.stdout_lines - hosts: RemoteMachine1 name: play2 sudo: yes tasks: - name: "Find the disk space" command: "df -h" register: result - name: "Install Apache in the remote machine" apt: name=apache2 state=latest - debug: var=result.stdout_lines Complete error message: ERROR! 'sudo' is not a valid attribute for a Play The error appears to be in '/home/Documents/ansible/play.yml': line 20, column 3, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: - hosts: RemoteMachine1 ^ here
ansible
15
11,868
2
https://stackoverflow.com/questions/60162859/error-sudo-is-not-a-valid-attribute-for-a-play
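The `sudo:` play keyword was deprecated long before and finally removed in Ansible 2.9, so on 2.9.4 the fix is the `become` family of keywords; a sketch of the second play rewritten:

```yaml
- hosts: RemoteMachine1
  name: play2
  become: yes
  tasks:
    - name: "Find the disk space"
      command: "df -h"
      register: result
    - name: "Install Apache in the remote machine"
      apt:
        name: apache2
        state: latest
    - debug: var=result.stdout_lines
```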
51,765,306
Using Ansible to stop service that might not exist
I am using Ansible 2.6.1. I am trying to ensure that a certain service is not running on target hosts. The problem is that the service might not exist at all on some hosts, and in that case Ansible fails because of the missing service. Services are run by Systemd. Using the service module: - name: Stop service service: name: '{{ target_service }}' state: stopped fails with the error Could not find the requested service SERVICE: host Trying the command module: - name: Stop service command: service {{ target_service }} stop gives the error: Failed to stop SERVICE.service: Unit SERVICE.service not loaded. I know I could use ignore_errors: yes but it might hide real errors too. Another solution would be two tasks: one checking for the existence of the service and another that runs only when the first task found it, but that feels complex. Is there a simpler way to ensure the service is stopped without errors when it does not exist?
Using Ansible to stop service that might not exist I am using Ansible 2.6.1. I am trying to ensure that a certain service is not running on target hosts. The problem is that the service might not exist at all on some hosts, and in that case Ansible fails because of the missing service. Services are run by Systemd. Using the service module: - name: Stop service service: name: '{{ target_service }}' state: stopped fails with the error Could not find the requested service SERVICE: host Trying the command module: - name: Stop service command: service {{ target_service }} stop gives the error: Failed to stop SERVICE.service: Unit SERVICE.service not loaded. I know I could use ignore_errors: yes but it might hide real errors too. Another solution would be two tasks: one checking for the existence of the service and another that runs only when the first task found it, but that feels complex. Is there a simpler way to ensure the service is stopped without errors when it does not exist?
ansible
15
17,499
6
https://stackoverflow.com/questions/51765306/using-ansible-to-stop-service-that-might-not-exist
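A two-task pattern that stays readable is `service_facts` plus a conditional stop; the `.service` suffix below assumes systemd unit naming, which matches the post:

```yaml
- name: Gather the list of installed services
  service_facts:

- name: Stop the service only if the unit exists on this host
  service:
    name: "{{ target_service }}"
    state: stopped
  when: (target_service + '.service') in ansible_facts.services
```

Unlike `ignore_errors`, this skips cleanly on hosts without the service and still surfaces real failures on hosts that have it.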
35,723,913
Need to hide failed log in ansible task
I am new to Ansible tasks. I am creating a YAML playbook that performs a login operation, and if the login fails, a script needs to be called. - name: Logging Action shell: "/usr/local/bin/cqlsh -u xyzyx -p 1234abc" register: loginoutput ignore_errors: yes no_log: True - name: Run the cql script to create new user shell: sh create-new-user.cql" when: loginoutput|failed The tasks above work fine. My question: when the login operation fails, error messages are shown like below. I don't want to display the log messages, or even any "failed" string in the logs. I tried no_log: True , which gives failed : [127.0.0.1] => {"censored": "results hidden due to no_log parameter", "changed": true, "rc": 1} I don't want the "failed" string to appear in the output.
Need to hide failed log in ansible task I am new to Ansible tasks. I am creating a YAML playbook that performs a login operation, and if the login fails, a script needs to be called. - name: Logging Action shell: "/usr/local/bin/cqlsh -u xyzyx -p 1234abc" register: loginoutput ignore_errors: yes no_log: True - name: Run the cql script to create new user shell: sh create-new-user.cql" when: loginoutput|failed The tasks above work fine. My question: when the login operation fails, error messages are shown like below. I don't want to display the log messages, or even any "failed" string in the logs. I tried no_log: True , which gives failed : [127.0.0.1] => {"censored": "results hidden due to no_log parameter", "changed": true, "rc": 1} I don't want the "failed" string to appear in the output.
ansible
15
35,183
2
https://stackoverflow.com/questions/35723913/need-to-hide-failed-log-in-ansible-task
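One way to keep the probe quiet is `failed_when: false` instead of `ignore_errors`, so the task is never reported as failed at all; the follow-up condition then keys off the registered return code:

```yaml
- name: Logging Action
  shell: "/usr/local/bin/cqlsh -u xyzyx -p 1234abc"
  register: loginoutput
  failed_when: false   # never mark this task as failed
  no_log: true         # still keep credentials out of the log

- name: Run the cql script to create new user
  shell: "sh create-new-user.cql"
  when: loginoutput.rc != 0
```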
37,576,449
How to tell what directory was created by ansible's unarchive module?
I'm writing an ansible playbook to automate the installation of a piece of software. When I download the tarball from the website I have: software-package-release.tar.gz When untarred I'm left with the directory software-package-v2.15/ Does ansible have any way of registering the directory created as part of the unarchive module? I've tried with the following plays (I put the nrpe-lkdsflkdjf file there which contains a dir nrpe-2.15 ) - name: Extract unarchive src:/tmp/nrpe-lkdsflkdjf dest: /tmp/ copy: no register: tar_reg - name: debug tar_reg debug: var=tar_reg And this was the output of the debug: ok: [IP-here] => { "tar_reg": { "changed": true, "check_results": { "cmd": "/bin/gtar -C \"/tmp/\" --diff -f \"/tmp/nrpe-lkdsjflkdsjf\"", "err": "/bin/gtar: nrpe-2.15: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/SECURITY: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/README.Solaris: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/init-script.debian.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/nrpe.spec.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/LEGAL: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/subst.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/sample-config: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/sample-config/nrpe.xinetd.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/sample-config/nrpe.cfg.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/preinstall: Warning: Cannot stat: No such file or directory\n/bin/gtar: 
nrpe-2.15/package/solaris/pkg/nrpe.xml: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/i.config: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/postinstall: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/r.config: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/nrpe: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/Makefile.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/nrpe.spec: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/README: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/acl.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/check_nrpe.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/snprintf.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/Makefile.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/utils.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/nrpe.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/config.guess: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/install-sh: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/contrib: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/contrib/nrpe_check_control.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/contrib/README.nrpe_check_control: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/init-script.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/docs: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/docs/NRPE.pdf: Warning: Cannot stat: 
No such file or directory\n/bin/gtar: nrpe-2.15/docs/NRPE.odt: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/init-script.suse.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/configure: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/config.sub: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/Changelog: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/Makefile.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/README.SSL: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/configure.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/update-version: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/dh.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/common.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/utils.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/acl.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/config.h.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/nrpe.h: Warning: Cannot stat: No such file or directory\n", "out": "", "rc": 1, "unarchived": false }, "dest": "/tmp/", "extract_results": { "cmd": "/bin/gtar -xf \"/tmp/nrpe-lkdsjflkdsjf\"", "err": "", "out": "", "rc": 0 }, "gid": 0, "group": "root", "handler": "TarArchive", "mode": "01777", "owner": "root", "size": 3072, "src": "/tmp/nrpe-lkdsjflkdsjf", "state": "directory", "uid": 0 } }
How to tell what directory was created by ansible&#39;s unarchive module? I'm writing an ansible playbook to automate the installation of a piece of software. When I download the tarball from the website I have: software-package-release.tar.gz When untarred I'm left with the directory software-package-v2.15/ Does ansible have any way of registering the directory created as part of the unarchive module? I've tried with the following plays (I put the nrpe-lkdsflkdjf file there which contains a dir nrpe-2.15 ) - name: Extract unarchive src:/tmp/nrpe-lkdsflkdjf dest: /tmp/ copy: no register: tar_reg - name: debug tar_reg debug: var=tar_reg And this was the output of the debug: ok: [IP-here] => { "tar_reg": { "changed": true, "check_results": { "cmd": "/bin/gtar -C \"/tmp/\" --diff -f \"/tmp/nrpe-lkdsjflkdsjf\"", "err": "/bin/gtar: nrpe-2.15: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/SECURITY: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/README.Solaris: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/init-script.debian.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/nrpe.spec.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/LEGAL: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/subst.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/sample-config: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/sample-config/nrpe.xinetd.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/sample-config/nrpe.cfg.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/preinstall: Warning: 
Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/nrpe.xml: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/i.config: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/postinstall: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/r.config: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/pkg/nrpe: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/package/solaris/Makefile.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/nrpe.spec: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/README: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/acl.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/check_nrpe.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/snprintf.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/Makefile.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/utils.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/src/nrpe.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/config.guess: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/install-sh: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/contrib: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/contrib/nrpe_check_control.c: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/contrib/README.nrpe_check_control: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/init-script.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/docs: Warning: Cannot stat: No such file or 
directory\n/bin/gtar: nrpe-2.15/docs/NRPE.pdf: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/docs/NRPE.odt: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/init-script.suse.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/configure: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/config.sub: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/Changelog: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/Makefile.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/README.SSL: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/configure.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/update-version: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/dh.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/common.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/utils.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/acl.h: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/config.h.in: Warning: Cannot stat: No such file or directory\n/bin/gtar: nrpe-2.15/include/nrpe.h: Warning: Cannot stat: No such file or directory\n", "out": "", "rc": 1, "unarchived": false }, "dest": "/tmp/", "extract_results": { "cmd": "/bin/gtar -xf \"/tmp/nrpe-lkdsjflkdsjf\"", "err": "", "out": "", "rc": 0 }, "gid": 0, "group": "root", "handler": "TarArchive", "mode": "01777", "owner": "root", "size": 3072, "src": "/tmp/nrpe-lkdsjflkdsjf", "state": "directory", "uid": 0 } }
tar, ansible
15
13,657
2
https://stackoverflow.com/questions/37576449/how-to-tell-what-directory-was-created-by-ansibles-unarchive-module
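`unarchive` itself does not report the directory it created, so one common workaround is to list the archive contents first and derive the top-level entry; the tarball path below is illustrative:

```yaml
- name: List the archive contents without extracting
  command: tar -tzf /tmp/software-package-release.tar.gz
  register: tar_list
  changed_when: false

- name: Remember the top-level directory the archive will create
  set_fact:
    extracted_dir: "{{ tar_list.stdout_lines[0].split('/')[0] }}"
```

This assumes the archive has a single top-level directory, which matches the `software-package-v2.15/` layout described in the post.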
29,966,201
Ansible 1.9.1 'become' and sudo issue
I am trying to run an extremely simple playbook to test a new Ansible setup. When using the 'new' Ansible Privilege Escalation config options in my ansible.cfg file: [defaults] host_key_checking=false log_path=./logs/ansible.log executable=/bin/bash #callback_plugins=./lib/callback_plugins ###### [privilege_escalation] become=True become_method='sudo' become_user='tstuser01' become_ask_pass=False [ssh_connection] scp_if_ssh=True I get the following error: fatal: [webserver1.local] => Internal Error: this module does not support running commands via 'sudo' FATAL: all hosts have already failed -- aborting The playbook is also very simple: # Checks the hosts provisioned by midrange --- - name: Test su connecting as current user hosts: all gather_facts: no tasks: - name: "sudo to configued user -- tstuser01" #action: ping command: /usr/bin/whoami I am not sure if there is something broken in Ansible 1.9.1 or if I am doing something wrong. Surely the 'command' module in Ansible allows running commands as sudo.
Ansible 1.9.1 'become' and sudo issue I am trying to run an extremely simple playbook to test a new Ansible setup. When using the 'new' Ansible Privilege Escalation config options in my ansible.cfg file: [defaults] host_key_checking=false log_path=./logs/ansible.log executable=/bin/bash #callback_plugins=./lib/callback_plugins ###### [privilege_escalation] become=True become_method='sudo' become_user='tstuser01' become_ask_pass=False [ssh_connection] scp_if_ssh=True I get the following error: fatal: [webserver1.local] => Internal Error: this module does not support running commands via 'sudo' FATAL: all hosts have already failed -- aborting The playbook is also very simple: # Checks the hosts provisioned by midrange --- - name: Test su connecting as current user hosts: all gather_facts: no tasks: - name: "sudo to configued user -- tstuser01" #action: ping command: /usr/bin/whoami I am not sure if there is something broken in Ansible 1.9.1 or if I am doing something wrong. Surely the 'command' module in Ansible allows running commands as sudo.
bash, sudo, ansible
15
39,053
2
https://stackoverflow.com/questions/29966201/ansible-1-9-1-become-and-sudo-issue
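A frequently cited cause for this exact error on 1.9.x is that `ansible.cfg` values are read literally, so the quotes in `become_method='sudo'` become part of the value and no longer match a known escalation method; a sketch of the same section without quotes:

```ini
[privilege_escalation]
become=True
become_method=sudo
become_user=tstuser01
become_ask_pass=False
```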
34,004,171
Printing a variable value indented in a YAML file using ansible
I'm generating a Behat config file using Ansible. This configuration file is a YAML file. I'm using a Jinja2 template like this: default: paths: features: '../all/tests/features' filters: tags: "~@api&&~@drush" extensions: Behat\MinkExtension\Extension: files_path: '{{ project_docroot }}/sites/all/tests/files' files_path: '{{ project_docroot }}' goutte: ~ selenium2: ~ base_url: '{{ base_url }}' Drupal\DrupalExtension\Extension: blackbox: ~ drush_driver: "drush" drush: root: "{{ project_docroot }}" api_driver: "drupal" drupal: drupal_root: "{{ project_docroot }}" region_map: {{ project_behat_region_map }} selectors: {{ project_behat_selectors }} And the following defined vars: project_behat_region_map: | content: "#content" footer: "#footer" header: "#header" header bottom: "#header-bottom" navigation: "#navigation" highlighted: "#highlighted" help: "#help" bottom: "#bottom" project_behat_selectors: | message_selector: '.messages' error_message_selector: '.messages.error' success_message_selector: '.messages.status' warning_message_selector: '.messages.warning' As you can see the variable values are indented, but when pasted into the Jinja2 template the lost indentation: default: paths: features: '../all/tests/features' filters: tags: "~@api&&~@drush" extensions: Behat\MinkExtension\Extension: files_path: '/var/www//bacteriemias/docroot/sites/all/tests/files' files_path: '/var/www//bacteriemias/docroot' goutte: ~ selenium2: ~ base_url: '[URL] Drupal\DrupalExtension\Extension: blackbox: ~ drush_driver: "drush" drush: root: "/var/www//bacteriemias/docroot" api_driver: "drupal" drupal: drupal_root: "/var/www//bacteriemias/docroot" region_map: content: "#content" footer: "#footer" header: "#header" header bottom: "#header-bottom" navigation: "#navigation" highlighted: "#highlighted" help: "#help" bottom: "#bottom" selectors: message_selector: '.messages' error_message_selector: '.messages.error' success_message_selector: '.messages.status' warning_message_selector: '.messages.warning' This is not valid YAML. How can I print a variable with indentation in Jinja2?
Printing a variable value indented in a YAML file using ansible I'm generating a Behat config file using Ansible. This configuration file is a YAML file. I'm using a Jinja2 template like this: default: paths: features: '../all/tests/features' filters: tags: "~@api&&~@drush" extensions: Behat\MinkExtension\Extension: files_path: '{{ project_docroot }}/sites/all/tests/files' files_path: '{{ project_docroot }}' goutte: ~ selenium2: ~ base_url: '{{ base_url }}' Drupal\DrupalExtension\Extension: blackbox: ~ drush_driver: "drush" drush: root: "{{ project_docroot }}" api_driver: "drupal" drupal: drupal_root: "{{ project_docroot }}" region_map: {{ project_behat_region_map }} selectors: {{ project_behat_selectors }} And the following defined vars: project_behat_region_map: | content: "#content" footer: "#footer" header: "#header" header bottom: "#header-bottom" navigation: "#navigation" highlighted: "#highlighted" help: "#help" bottom: "#bottom" project_behat_selectors: | message_selector: '.messages' error_message_selector: '.messages.error' success_message_selector: '.messages.status' warning_message_selector: '.messages.warning' As you can see the variable values are indented, but when pasted into the Jinja2 template the lost indentation: default: paths: features: '../all/tests/features' filters: tags: "~@api&&~@drush" extensions: Behat\MinkExtension\Extension: files_path: '/var/www//bacteriemias/docroot/sites/all/tests/files' files_path: '/var/www//bacteriemias/docroot' goutte: ~ selenium2: ~ base_url: '[URL] Drupal\DrupalExtension\Extension: blackbox: ~ drush_driver: "drush" drush: root: "/var/www//bacteriemias/docroot" api_driver: "drupal" drupal: drupal_root: "/var/www//bacteriemias/docroot" region_map: content: "#content" footer: "#footer" header: "#header" header bottom: "#header-bottom" navigation: "#navigation" highlighted: "#highlighted" help: "#help" bottom: "#bottom" selectors: message_selector: '.messages' error_message_selector: '.messages.error' success_message_selector: '.messages.status' warning_message_selector: '.messages.warning' This is not valid YAML. How can I print a variable with indentation in Jinja2?
yaml, ansible, ansible-template
15
32,326
1
https://stackoverflow.com/questions/34004171/printing-a-variable-value-indented-in-a-yaml-file-using-ansible
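A common way to repair this kind of lost indentation is Jinja2's built-in `indent` filter, which prefixes every line of a multi-line value. A minimal sketch of the relevant template fragment only; the widths (8 and 4 spaces) are assumptions that must be matched to the real nesting of the generated file, and passing them positionally keeps the sketch portable across Jinja2 versions (older releases name the second argument `indentfirst` rather than `first`):

```yaml
# Sketch of the template fragment; indent widths are assumptions.
# indent(8, true) prefixes every line, including the first, with 8 spaces.
      region_map:
{{ project_behat_region_map | indent(8, true) }}
    selectors:
{{ project_behat_selectors | indent(4, true) }}
```

Note that the `{{ ... }}` expressions start at column 0 so the filter alone controls the indentation of the inserted block.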
29,914,253
Remove package ansible playbook
I've an EC2 instance create using Vagrant and provisioned with Ansible. I've this task that install 2 package using apt . --- - name: Install GIT & TIG action: apt pkg={{ item }} state=installed with_items: - git - tig I want now delete/remove tig from my instance. I've removed it from my playbook and I've run vagrant provision but the package is still there. How can I do this ?
Remove package ansible playbook I've an EC2 instance create using Vagrant and provisioned with Ansible. I've this task that install 2 package using apt . --- - name: Install GIT & TIG action: apt pkg={{ item }} state=installed with_items: - git - tig I want now delete/remove tig from my instance. I've removed it from my playbook and I've run vagrant provision but the package is still there. How can I do this ?
amazon-ec2, vagrant, ansible
15
48,225
3
https://stackoverflow.com/questions/29914253/remove-package-ansible-playbook
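Ansible playbooks are declarative about what they state, not about what they used to state: deleting the install task leaves the package in place. The usual approach is to declare the removal explicitly, roughly like this sketch:

```yaml
- name: Remove TIG
  apt:
    name: tig
    state: absent
```

Once the play has run against every instance, this removal task can itself be deleted from the playbook.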
66,059,911
Login into Azure cli for a service principal
I'm trying to get my ansible script to get logged into azure via azure cli. For some reasons, I'm not allowed to use the ansible azure package. I have to use the shell and call directly the commands from there. I'm fairly new with azure in general, so all this tenants, service principals and such are still concepts that I don't fully grasp. I've been checking official the documentation . I've created an app registration for it (Named ansible_test). I get all I need, including the secret. and then I call the the commands as this: az login --service-principal -u $AZURE_SERVICE_PRINCIPAL_NAME -p $AZURE_SECRET --tenant $AZURE_TENANT where: $AZURE_SERVICE_PRINCIPAL_NAME = ansible_test $AZURE_SECRET = ${The one that I've defined via Certificates & secrets section in the app registration} $AZURE_TENANT = ${The azure tenant that I find in the app registration} I'm getting the error: Get Token request returned http error: 400 and server response: {"error":"unauthorized_client","error_description":"AADSTS700016: Application with identifier 'ansible_test' was not found in the directory '${AZURE_TENANT}(Blurred because I'm not sure this is something secret or not)'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant. As I understand, I got the wrong tenant. But I'm getting the exact one that I'm getting from the app registration. I've been hitting my head against this wall for some time. I've tried many other things, but it doesn't seem to work. In this image, I'm trying to show that I've indeed created the app registration (What I'm understanding that it's a service principal). I've blurred the ids just out of ignorance whether they are private or not. What is that I'm doing wrong? I can't really understand the origin of the error...
Login into Azure cli for a service principal I'm trying to get my ansible script to get logged into azure via azure cli. For some reasons, I'm not allowed to use the ansible azure package. I have to use the shell and call directly the commands from there. I'm fairly new with azure in general, so all this tenants, service principals and such are still concepts that I don't fully grasp. I've been checking official the documentation . I've created an app registration for it (Named ansible_test). I get all I need, including the secret. and then I call the the commands as this: az login --service-principal -u $AZURE_SERVICE_PRINCIPAL_NAME -p $AZURE_SECRET --tenant $AZURE_TENANT where: $AZURE_SERVICE_PRINCIPAL_NAME = ansible_test $AZURE_SECRET = ${The one that I've defined via Certificates & secrets section in the app registration} $AZURE_TENANT = ${The azure tenant that I find in the app registration} I'm getting the error: Get Token request returned http error: 400 and server response: {"error":"unauthorized_client","error_description":"AADSTS700016: Application with identifier 'ansible_test' was not found in the directory '${AZURE_TENANT}(Blurred because I'm not sure this is something secret or not)'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant. As I understand, I got the wrong tenant. But I'm getting the exact one that I'm getting from the app registration. I've been hitting my head against this wall for some time. I've tried many other things, but it doesn't seem to work. In this image, I'm trying to show that I've indeed created the app registration (What I'm understanding that it's a service principal). I've blurred the ids just out of ignorance whether they are private or not. What is that I'm doing wrong? I can't really understand the origin of the error...
azure, ansible, azure-cli, azure-service-principal
15
61,579
2
https://stackoverflow.com/questions/66059911/login-into-azure-cli-for-a-service-principal
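One hedged reading of the AADSTS700016 error is that `-u` was given the app registration's display name (`ansible_test`), while `az login --service-principal -u` expects the Application (client) ID shown on the registration's overview page. A sketch of the shell-based task, where `azure_client_id`, `azure_secret`, and `azure_tenant` are assumed variables:

```yaml
# Sketch: -u takes the Application (client) ID (a GUID), not the display name.
# azure_client_id / azure_secret / azure_tenant are assumed variables.
- name: Log in to the Azure CLI as a service principal
  shell: >
    az login --service-principal
    -u "{{ azure_client_id }}"
    -p "{{ azure_secret }}"
    --tenant "{{ azure_tenant }}"
  no_log: true   # keep the secret out of task output
```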
32,087,617
ERROR: apt is not a legal parameter of an Ansible Play
I'm getting the following error when trying to run a YML file:- user@ubuntuA:~$ ansible-playbook -i hostfile setup.yml ERROR : apt is not a legal parameter of an Ansible Play Ansible version: 1.9.2 yml-file:- --- - name: Install MySQL server apt: name=mysql-server state=latest - name: Install Apache module for MySQL authentication apt: name=libapache2-mod-auth-mysql state=latest - name: Install MySQL module for PHP apt: name=php5-mysql state=latest
ERROR: apt is not a legal parameter of an Ansible Play I'm getting the following error when trying to run a YML file:- user@ubuntuA:~$ ansible-playbook -i hostfile setup.yml ERROR : apt is not a legal parameter of an Ansible Play Ansible version: 1.9.2 yml-file:- --- - name: Install MySQL server apt: name=mysql-server state=latest - name: Install Apache module for MySQL authentication apt: name=libapache2-mod-auth-mysql state=latest - name: Install MySQL module for PHP apt: name=php5-mysql state=latest
ansible
15
29,777
3
https://stackoverflow.com/questions/32087617/error-apt-is-not-a-legal-parameter-of-an-ansible-play
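The file above is a bare list of tasks, but `ansible-playbook` expects a list of plays, which is why `apt` is rejected as a play keyword. A sketch of the same tasks wrapped in a play (`hosts: all` and `become: true` are assumptions to adapt):

```yaml
---
- hosts: all
  become: true
  tasks:
    - name: Install MySQL server
      apt: name=mysql-server state=latest
    - name: Install Apache module for MySQL authentication
      apt: name=libapache2-mod-auth-mysql state=latest
    - name: Install MySQL module for PHP
      apt: name=php5-mysql state=latest
```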
46,553,820
Get first &quot;N&quot; elements of a list in Jinja2 template in Ansible
Most of my locations have 4+ DNS sources, but a few have less. Each location gets their own dns4_ips list variable like this: dns4_ips: - dns_A - dns_B - dns_C - dns_C My resolv.conf template looks like this: domain example.com search example.com dom2.example.com dom3.example.com {% for nameserver in (dns4_ips|shuffle(seed=inventory_hostname)) %} nameserver {{nameserver}} {% endfor %} The Jinja for loop works great, but in the cases where I have numerous nameservers I'd rather only list the first 3 that the shuffle() returns. I thought of this: nameserver {{ (dns4_ips|shuffle(seed=inventory_hostname))[0] }} nameserver {{ (dns4_ips|shuffle(seed=inventory_hostname))[1] }} nameserver {{ (dns4_ips|shuffle(seed=inventory_hostname))[2] }} ...but there are some cases where I only have one or two DNS servers available so those would produce either an incorrect line or an error, correct? Is there a clean way to handle this with the for loop, or do I need to wrap the three nameserver lines in {% if (dns4_ips|shuffle(seed=inventory_hostname))[1] is defined %} ?
Get first &quot;N&quot; elements of a list in Jinja2 template in Ansible Most of my locations have 4+ DNS sources, but a few have less. Each location gets their own dns4_ips list variable like this: dns4_ips: - dns_A - dns_B - dns_C - dns_C My resolv.conf template looks like this: domain example.com search example.com dom2.example.com dom3.example.com {% for nameserver in (dns4_ips|shuffle(seed=inventory_hostname)) %} nameserver {{nameserver}} {% endfor %} The Jinja for loop works great, but in the cases where I have numerous nameservers I'd rather only list the first 3 that the shuffle() returns. I thought of this: nameserver {{ (dns4_ips|shuffle(seed=inventory_hostname))[0] }} nameserver {{ (dns4_ips|shuffle(seed=inventory_hostname))[1] }} nameserver {{ (dns4_ips|shuffle(seed=inventory_hostname))[2] }} ...but there are some cases where I only have one or two DNS servers available so those would produce either an incorrect line or an error, correct? Is there a clean way to handle this with the for loop, or do I need to wrap the three nameserver lines in {% if (dns4_ips|shuffle(seed=inventory_hostname))[1] is defined %} ?
ansible, jinja2, ansible-template
15
16,995
1
https://stackoverflow.com/questions/46553820/get-first-n-elements-of-a-list-in-jinja2-template-in-ansible
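Jinja2 lists support Python-style slicing, and a slice past the end of a short list simply returns the whole list, so no `is defined` guard is needed. A sketch of the loop limited to the first three shuffled entries:

```jinja2
{% for nameserver in (dns4_ips | shuffle(seed=inventory_hostname))[:3] %}
nameserver {{ nameserver }}
{% endfor %}
```

With only one or two servers in `dns4_ips`, `[:3]` yields one or two `nameserver` lines rather than an error.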
42,599,262
ansible conditional check a item&#39;s attribute exist
I want to create users using ansible and want to set their shell and sudo permissions. Now I have vars/main.yml as below: users: [{'name': 'user1', 'shell': '/bin/bash', 'sudo': 'user1 ALL=(ALL) NOPASSWD: ALL'}, {'name': 'user2', 'shell': '/bin/zsh', 'sudo': 'user2 ALL=NOPASSWD:/bin/systemctl *start nginx'}, {'name': 'user3', 'shell': '/bin/fish'}] On the task Set sudo permission for users because not every user have sudo permission, which I need to check if the sudo attribute is exist or not. - name: Set sudo permission for users lineinfile: dest: /etc/sudoers state: present regexp: '^{{ item.name }}' line: "{{ item.sudo }}" backup: true when: "{{ item.sudo }}" with_items: - "{{ users }}" I got error as below: TASK [createUsers : Set sudo permission for users] *************************** fatal: [ubuntu]: FAILED! => {"failed": true, "msg": "The conditional check '{{ item.sudo }}' failed. The error was: expected token 'end of statement block', got 'ALL'\n line 1\n\nThe error appears to have been in '/Users/csj/proj/roles/createUsers/tasks/main.yml': line 26, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Set sudo permission for users\n ^ here\n"} I tried many about quote things but it didn't help.
ansible conditional check a item&#39;s attribute exist I want to create users using ansible and want to set their shell and sudo permissions. Now I have vars/main.yml as below: users: [{'name': 'user1', 'shell': '/bin/bash', 'sudo': 'user1 ALL=(ALL) NOPASSWD: ALL'}, {'name': 'user2', 'shell': '/bin/zsh', 'sudo': 'user2 ALL=NOPASSWD:/bin/systemctl *start nginx'}, {'name': 'user3', 'shell': '/bin/fish'}] On the task Set sudo permission for users because not every user have sudo permission, which I need to check if the sudo attribute is exist or not. - name: Set sudo permission for users lineinfile: dest: /etc/sudoers state: present regexp: '^{{ item.name }}' line: "{{ item.sudo }}" backup: true when: "{{ item.sudo }}" with_items: - "{{ users }}" I got error as below: TASK [createUsers : Set sudo permission for users] *************************** fatal: [ubuntu]: FAILED! => {"failed": true, "msg": "The conditional check '{{ item.sudo }}' failed. The error was: expected token 'end of statement block', got 'ALL'\n line 1\n\nThe error appears to have been in '/Users/csj/proj/roles/createUsers/tasks/main.yml': line 26, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Set sudo permission for users\n ^ here\n"} I tried many about quote things but it didn't help.
ansible
15
23,800
1
https://stackoverflow.com/questions/42599262/ansible-conditional-check-a-items-attribute-exist
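The immediate problem is that `when:` is already evaluated as a raw Jinja2 expression, so it must not contain `{{ }}`; the usual existence check is the `is defined` test. A sketch of the corrected task:

```yaml
- name: Set sudo permission for users
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: '^{{ item.name }}'
    line: "{{ item.sudo }}"
    backup: true
  when: item.sudo is defined
  with_items: "{{ users }}"
```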
37,675,259
Correct way to create dynamic lists in Ansible
I'm looking for advice. I have the following code that creates a list dynamically that I can then later use in a template. This is a copy of the test code I put together - for the actual role I just added the admins|regex_replace variable into the j2 template. --- - hosts: localhost gather_facts: false vars: # define empty admins var first so ansible doesn't complain admins: admin_accounts: - name: john uid: 1000 group: sysadmin shell: /bin/bash comment: "Unix Administrator" - name: paul uid: 1001 group: sysadmin shell: /bin/bash comment: "Unix Administrator" - name: george uid: 1002 group: sysadmin shell: /bin/bash comment: "Unix Administrator" - name: ringo uid: 1003 group: sysadmin shell: /bin/bash comment: "Unix Administrator" tasks: - name: build array of admin user names set_fact: admins="{{ admins}} {{ item.name }}" with_items: "{{ admin_accounts }}" # print out the fact piping through two jinja2 filters # careful with word wrapping - debug: msg={{ admins | regex_replace( '\s+',', ' ) | regex_replace(',\s(.*)','\\1') }} This gives me the following: PLAY [localhost] *************************************************************** TASK [build array of admin user names] ***************************************** ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'john', u'uid': 1000}) ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'paul', u'uid': 1001}) ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'george', u'uid': 1002}) ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'ringo', u'uid': 1003}) TASK [debug] ******************************************************************* ok: [localhost] => { "msg": "john, paul, george, ringo" } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 So...I get what I need, but am I going about it the right way? Ansible version is 2.0.2.0 running on Centos 7.2. Thanks in advance. Edit: The resultant filter ended up looking like this: - name: build list of admin user names set_fact: admin_list: "{{ admin_accounts | selectattr('state', 'equalto', 'present') | map(attribute='name') | join(', ') }}" - debug: msg={{ admin_list }} Having added another parameter to the yaml: state: absent Ringo was left out, as desired.
Correct way to create dynamic lists in Ansible I'm looking for advice. I have the following code that creates a list dynamically that I can then later use in a template. This is a copy of the test code I put together - for the actual role I just added the admins|regex_replace variable into the j2 template. --- - hosts: localhost gather_facts: false vars: # define empty admins var first so ansible doesn't complain admins: admin_accounts: - name: john uid: 1000 group: sysadmin shell: /bin/bash comment: "Unix Administrator" - name: paul uid: 1001 group: sysadmin shell: /bin/bash comment: "Unix Administrator" - name: george uid: 1002 group: sysadmin shell: /bin/bash comment: "Unix Administrator" - name: ringo uid: 1003 group: sysadmin shell: /bin/bash comment: "Unix Administrator" tasks: - name: build array of admin user names set_fact: admins="{{ admins}} {{ item.name }}" with_items: "{{ admin_accounts }}" # print out the fact piping through two jinja2 filters # careful with word wrapping - debug: msg={{ admins | regex_replace( '\s+',', ' ) | regex_replace(',\s(.*)','\\1') }} This gives me the following: PLAY [localhost] *************************************************************** TASK [build array of admin user names] ***************************************** ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'john', u'uid': 1000}) ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'paul', u'uid': 1001}) ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'george', u'uid': 1002}) ok: [localhost] => (item={u'comment': u'Unix Administrator', u'shell': u'/bin/bash', u'group': u'sysadmin', u'name': u'ringo', u'uid': 1003}) TASK [debug] ******************************************************************* ok: [localhost] => { "msg": "john, paul, george, ringo" } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 So...I get what I need, but am I going about it the right way? Ansible version is 2.0.2.0 running on Centos 7.2. Thanks in advance. Edit: The resultant filter ended up looking like this: - name: build list of admin user names set_fact: admin_list: "{{ admin_accounts | selectattr('state', 'equalto', 'present') | map(attribute='name') | join(', ') }}" - debug: msg={{ admin_list }} Having added another parameter to the yaml: state: absent Ringo was left out, as desired.
ansible, ansible-2.x
15
55,951
2
https://stackoverflow.com/questions/37675259/correct-way-to-create-dynamic-lists-in-ansible
42,093,385
Disable handlers from running
Is there any way to stop handlers from running ? I was trying to add tag and use "--skip-tags" to it but it does not work. I could add next role variable reload_service: true and use it but I've already started using tags and they work great to just re-run part of role. Handlers are usually used to restart services and I want to run this role without starting service without changing role variables just to cover next case. I'm using ansible 2.1.2.0 Test case: mkdir -p test/role/handlers test/role/tasks cd test echo -ne '---\n - command: "echo Test"\n notify: restart\n' > role/tasks/main.yml echo -ne '---\n- name: restart\n command: "echo Handler"\n tags: [handlers]\n' > role/handlers/main.yml echo -ne '---\n- hosts: localhost\n gather_facts: false\n roles:\n - role\n' > play.yml ansible-playbook play.yml --skip-tags handlers
Disable handlers from running Is there any way to stop handlers from running ? I was trying to add tag and use "--skip-tags" to it but it does not work. I could add next role variable reload_service: true and use it but I've already started using tags and they work great to just re-run part of role. Handlers are usually used to restart services and I want to run this role without starting service without changing role variables just to cover next case. I'm using ansible 2.1.2.0 Test case: mkdir -p test/role/handlers test/role/tasks cd test echo -ne '---\n - command: "echo Test"\n notify: restart\n' > role/tasks/main.yml echo -ne '---\n- name: restart\n command: "echo Handler"\n tags: [handlers]\n' > role/handlers/main.yml echo -ne '---\n- hosts: localhost\n gather_facts: false\n roles:\n - role\n' > play.yml ansible-playbook play.yml --skip-tags handlers
ansible
15
14,426
3
https://stackoverflow.com/questions/42093385/disable-handlers-from-running
19,328,126
Run ad hoc Ansible commands in Vagrant?
When building out a Vagrant project it would be helpful to run ad hoc Ansible tasks instead of adding test commands to a playbook. I've tried several methods of targeting the VM but keep getting this error: default | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue I'm using the Vagrant generated Ansible inventory file and the box has a working hostname. How do I target my Vagrant VM with a single Ansible task?
Run ad hoc Ansible commands in Vagrant? When building out a Vagrant project it would be helpful to run ad hoc Ansible tasks instead of adding test commands to a playbook. I've tried several methods of targeting the VM but keep getting this error: default | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue I'm using the Vagrant generated Ansible inventory file and the box has a working hostname. How do I target my Vagrant VM with a single Ansible task?
vagrant, ansible, ansible-ad-hoc
15
4,813
6
https://stackoverflow.com/questions/19328126/run-ad-hoc-ansible-commands-in-vagrant
47,549,615
How do I force ansible to retry an &#39;apt&#39; task if it fails?
I have an ansible playbook running on many machines. In that playbook I have a few packages I am trying to install using apt , but occasionally they fail, either because other playbooks are running, a periodic update or any other apt instance running in parallel and grabbing the lock. I basically want to add a retry loop before giving up but failed to do so as retries is not supported for apt , apparently: I looked into the apt module page in ansible's documentation , and even tried to actually use it even though it is not there (which obviously failed). Anyway - I need an idea on how to get ansible to retry for let's say 3 times, with 30 seconds delay, but only on failures to install the package.
How do I force ansible to retry an &#39;apt&#39; task if it fails? I have an ansible playbook running on many machines. In that playbook I have a few packages I am trying to install using apt , but occasionally they fail, either because other playbooks are running, a periodic update or any other apt instance running in parallel and grabbing the lock. I basically want to add a retry loop before giving up but failed to do so as retries is not supported for apt , apparently: I looked into the apt module page in ansible's documentation , and even tried to actually use it even though it is not there (which obviously failed). Anyway - I need an idea on how to get ansible to retry for let's say 3 times, with 30 seconds delay, but only on failures to install the package.
ubuntu, ansible, apt
15
14,243
1
https://stackoverflow.com/questions/47549615/how-do-i-force-ansible-to-retry-an-apt-task-if-it-fails
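Although the apt module has no retry parameter of its own, any Ansible task accepts the generic `register`/`until`/`retries`/`delay` keywords, and the task is re-run only while the `until` condition is false — a successful install exits the loop immediately. A sketch (`my_package` is a placeholder name; on Ansible older than 2.5 the condition is spelled `apt_result | succeeded`):

```yaml
- name: Install package, retrying while apt is locked
  apt:
    name: my_package   # placeholder package name
    state: present
  register: apt_result
  until: apt_result is succeeded
  retries: 3
  delay: 30
```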
36,520,425
Ansible : how to create a function like
I have a repeated pattern like this one:: - name: =code_01= clone repository git: repo=gitolite@git.site.net:/code_01 dest=/tmp/code_01 update=yes force=yes accept_hostkey=yes version=master sudo: true sudo_user: "{{ user }}" - name: =code_01= egg build shell: . {{ home }}/{{ venv_name }}/bin/activate && make egg args: chdir: "/tmp/code_01" sudo_user: "{{ user }}" sudo: true - name: =code_01= egg get command: find /tmp/code_01/dist -type f -iname '*.egg' register: code_01eggs - name: =code_01= egg install in {{ venv_name }} venv shell: . {{ home }}/{{ venv_name }}/bin/activate && easy_install {{ item }} args: chdir: "{{ home }}" with_items: "{{ code_01eggs.stdout_lines }}" sudo_user: "{{ user }}" sudo: true - name: =code_01= cleanup file: path=/tmp/code_01 state=absent sudo: true And I have this to do with: code_02 , code_03 , code_04 , ..., code_0n How can I "factorize" this ?
Ansible : how to create a function like I have a repeated pattern like this one:: - name: =code_01= clone repository git: repo=gitolite@git.site.net:/code_01 dest=/tmp/code_01 update=yes force=yes accept_hostkey=yes version=master sudo: true sudo_user: "{{ user }}" - name: =code_01= egg build shell: . {{ home }}/{{ venv_name }}/bin/activate && make egg args: chdir: "/tmp/code_01" sudo_user: "{{ user }}" sudo: true - name: =code_01= egg get command: find /tmp/code_01/dist -type f -iname '*.egg' register: code_01eggs - name: =code_01= egg install in {{ venv_name }} venv shell: . {{ home }}/{{ venv_name }}/bin/activate && easy_install {{ item }} args: chdir: "{{ home }}" with_items: "{{ code_01eggs.stdout_lines }}" sudo_user: "{{ user }}" sudo: true - name: =code_01= cleanup file: path=/tmp/code_01 state=absent sudo: true And I have this to do with: code_02 , code_03 , code_04 , ..., code_0n How can I "factorize" this ?
ansible
15
19,399
1
https://stackoverflow.com/questions/36520425/ansible-how-to-create-a-function-like
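The closest thing to a function in Ansible is a parameterized task file included once per item. A sketch, where `egg_build.yml` is a hypothetical file holding the five tasks with every `code_01` replaced by `{{ repo_name }}` (on Ansible before 2.4, `include:` plays the role of `include_tasks:`):

```yaml
# main.yml (sketch): run the shared task file once per repository.
- include_tasks: egg_build.yml
  with_items:
    - code_01
    - code_02
    - code_03
  loop_control:
    loop_var: repo_name   # avoids clashing with the inner tasks' own `item`
```

Inside `egg_build.yml` the clone step then becomes `git: repo=gitolite@git.site.net:/{{ repo_name }} dest=/tmp/{{ repo_name }} ...`, and the registered eggs variable can keep a single name because each inclusion runs to completion before the next begins.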
62,100,869
Ansible error: &quot;The Python 2 bindings for rpm are needed for this module&quot;
Im trying to pip install a requirements file in my python3 environment using the following task pip: python3: yes requirements: ./requirements/my_requirements.txt extra_args: -i [URL] I checked which version ansible is running on the controller node (RH7) and it's 3.6.8 ansible-playbook 2.9.9 config file = None configured module search path = ['/home/{hidden}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] No config file found; using defaults I am however getting the following error: fatal: [default]: FAILED! => {"changed": false, "msg": "The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the dnf Ansible module instead.. The Python 2 yum module is needed for this module. If you require Python 3 support use the dnf Ansible module instead."} My controller node is running RH7. The targets are centos7 (provisioned by vagrantfiles) Does anyonek now how to solve this?
Ansible error: &quot;The Python 2 bindings for rpm are needed for this module&quot; Im trying to pip install a requirements file in my python3 environment using the following task pip: python3: yes requirements: ./requirements/my_requirements.txt extra_args: -i [URL] I checked which version ansible is running on the controller node (RH7) and it's 3.6.8 ansible-playbook 2.9.9 config file = None configured module search path = ['/home/{hidden}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.6/site-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] No config file found; using defaults I am however getting the following error: fatal: [default]: FAILED! => {"changed": false, "msg": "The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the dnf Ansible module instead.. The Python 2 yum module is needed for this module. If you require Python 3 support use the dnf Ansible module instead."} My controller node is running RH7. The targets are centos7 (provisioned by vagrantfiles) Does anyonek now how to solve this?
python, pip, ansible
15
29,897
4
https://stackoverflow.com/questions/62100869/ansible-error-the-python-2-bindings-for-rpm-are-needed-for-this-module
35,748,580
msg: No handler was ready to authenticate. 1 handlers were checked. [&#39;HmacAuthV4Handler&#39;] Check your credentials
So I am trying to run ansible on my ec2 instances on aws, for the first time on a fresh instance, but every time I try to run a play I can't get around this error message: PLAY [localhost] ************************************************************** TASK: [make one instance] ***************************************************** failed: [localhost] => {"failed": true} msg: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/home/ubuntu/ans_test.retry localhost : ok=0 changed=0 unreachable=0 failed=1 I think there may be something wrong with the permissions in my IAM user and group. I have given my IAM user and group ReadOnlyAccess, AdministratorAccess and PowerUserAccess. I have an access id and secret access key that I am setting as environmental variable with the commands: export AWS_ACCESS_KEY_ID='AK123' export AWS_SECRET_ACCESS_KEY='abc123' With 'AK123'and 'abc123' replaced with my actual id and key values. What else do I need to do in order to get the ansible ec2 task working? UPDATE: I fixed the problem, I guess I didn't really have a solid understanding of what environmental variables are. I fixed it by just setting my aws_access_key and aws_secret_key inside of my ec2 task, below is my working playbook - hosts: localhost connection: local gather_facts: False tasks: #this task creates 5 ec2 instances that are all named demo and are copies of the image specified - name: Provision a set of instances ec2: aws_access_key: ..... aws_secret_key: .... key_name: ..... group: ..... instance_type: t2.micro image: ...... region: us-east-1 ec2_url: ....... wait: true exact_count: 5 count_tag: Name: Demo instance_tags: Name: Demo register: ec2 I guess now I need to start using ansible vault to just hold my key and ID.
msg: No handler was ready to authenticate. 1 handlers were checked. [&#39;HmacAuthV4Handler&#39;] Check your credentials So I am trying to run ansible on my ec2 instances on aws, for the first time on a fresh instance, but every time I try to run a play I can't get around this error message: PLAY [localhost] ************************************************************** TASK: [make one instance] ***************************************************** failed: [localhost] => {"failed": true} msg: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/home/ubuntu/ans_test.retry localhost : ok=0 changed=0 unreachable=0 failed=1 I think there may be something wrong with the permissions in my IAM user and group. I have given my IAM user and group ReadOnlyAccess, AdministratorAccess and PowerUserAccess. I have an access id and secret access key that I am setting as environmental variable with the commands: export AWS_ACCESS_KEY_ID='AK123' export AWS_SECRET_ACCESS_KEY='abc123' With 'AK123'and 'abc123' replaced with my actual id and key values. What else do I need to do in order to get the ansible ec2 task working? UPDATE: I fixed the problem, I guess I didn't really have a solid understanding of what environmental variables are. I fixed it by just setting my aws_access_key and aws_secret_key inside of my ec2 task, below is my working playbook - hosts: localhost connection: local gather_facts: False tasks: #this task creates 5 ec2 instances that are all named demo and are copies of the image specified - name: Provision a set of instances ec2: aws_access_key: ..... aws_secret_key: .... key_name: ..... group: ..... instance_type: t2.micro image: ...... region: us-east-1 ec2_url: ....... wait: true exact_count: 5 count_tag: Name: Demo instance_tags: Name: Demo register: ec2 I guess now I need to start using ansible vault to just hold my key and ID.
amazon-web-services, ansible, ec2-ami, amazon-ec2
15
23,959
6
https://stackoverflow.com/questions/35748580/msg-no-handler-was-ready-to-authenticate-1-handlers-were-checked-hmacauthv4
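The update above hard-codes the credentials in the playbook. A safer variant, sketched here rather than taken from the asker's final setup, keeps them in an ansible-vault encrypted vars file; the variable names `vault_aws_access_key` and `vault_aws_secret_key` are hypothetical:

```yaml
# Encrypted file created with: ansible-vault create group_vars/all/vault.yml
# It would contain (encrypted at rest):
#   vault_aws_access_key: AK123
#   vault_aws_secret_key: abc123
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision an instance using vaulted credentials
      ec2:
        aws_access_key: "{{ vault_aws_access_key }}"
        aws_secret_key: "{{ vault_aws_secret_key }}"
        instance_type: t2.micro
        region: us-east-1
        wait: true
      register: ec2
```

Run with `ansible-playbook site.yml --ask-vault-pass` so the keys never appear in the playbook itself.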
46,110,882
How can I run commands in sudo mode with ansible playbook?
I am trying to run a "folder creation" command with my ansible playbook. (Code is below) The creation requires sudo login to execute. I run the playbook as follows: ansible-playbook myfile.yml --ask-pass This prompts for the user account password of the remote machine. The ssh connection gets established, but commands fail with permission denied since it is not using the superuser password. How can I fix my issue? hosts: GSP tasks: - name: "make build directory" command: mkdir -p /home/build/ become: true become_user: root - name: "change permissions on the directory" command: chmod 777 -R /home/ become: true become_user: root
How can I run commands in sudo mode with ansible playbook? I am trying to run a "folder creation" command with my ansible playbook. (Code is below) The creation requires sudo login to execute. I run the playbook as follows: ansible-playbook myfile.yml --ask-pass This prompts for the user account password of the remote machine. The ssh connection gets established, but commands fail with permission denied since it is not using the superuser password. How can I fix my issue? hosts: GSP tasks: - name: "make build directory" command: mkdir -p /home/build/ become: true become_user: root - name: "change permissions on the directory" command: chmod 777 -R /home/ become: true become_user: root
ansible, ansible-2.x
15
53,334
2
https://stackoverflow.com/questions/46110882/how-can-i-run-commands-in-sudo-mode-with-ansible-playbook
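The usual fix is to keep `become` in the playbook and supply the sudo password at run time with `--ask-become-pass` (spelled `--ask-sudo-pass` on very old Ansible). A sketch of the intent of the playbook above; the `file` module replaces the raw mkdir/chmod commands as an idiomatic substitution, not the asker's original code:

```yaml
# Run with: ansible-playbook myfile.yml --ask-pass --ask-become-pass
- hosts: GSP
  become: true
  tasks:
    - name: make build directory with open permissions
      file:
        path: /home/build
        state: directory
        mode: "0777"
```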
41,724,600
Ansible: Conditional task parameter
I need to pass ssl_ca to mysql_db module only if mysql_use_ssl is defined. Is this possible using one task, like: mysql_db: name=mydb state=import target=/tmp/mysql.sql login_host="mydbhost" login_user="root" login_password="password" {% if mysql_use_ssl %}ssl_ca=/path/to/cert.pem{% endif %} ? This actual snippet does not work, result: {"failed": true, "msg": "template error while templating string: Encountered unknown tag 'endif'.. String: /path/to/cert.pem{% endif %}"} When moving the conditional as: mysql_db: name=mydb state=import target=/tmp/mysql.sql login_host="mydbhost" login_user="root" login_password="password" ssl_ca="{% if mysql_use_ssl %}/path/to/cert.pem{% else %}none{% endif %}" Then it "works" but none is not a supported parameter for turning off mysql ssl connection, so it does not turn off ssl.
Ansible: Conditional task parameter I need to pass ssl_ca to mysql_db module only if mysql_use_ssl is defined. Is this possible using one task, like: mysql_db: name=mydb state=import target=/tmp/mysql.sql login_host="mydbhost" login_user="root" login_password="password" {% if mysql_use_ssl %}ssl_ca=/path/to/cert.pem{% endif %} ? This actual snippet does not work, result: {"failed": true, "msg": "template error while templating string: Encountered unknown tag 'endif'.. String: /path/to/cert.pem{% endif %}"} When moving the conditional as: mysql_db: name=mydb state=import target=/tmp/mysql.sql login_host="mydbhost" login_user="root" login_password="password" ssl_ca="{% if mysql_use_ssl %}/path/to/cert.pem{% else %}none{% endif %}" Then it "works" but none is not a supported parameter for turning off mysql ssl connection, so it does not turn off ssl.
ansible, jinja2
15
6,850
2
https://stackoverflow.com/questions/41724600/ansible-conditional-task-parameter
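This is usually solved with Ansible's special `omit` placeholder, which drops the parameter entirely when the condition is false; a sketch using the paths and variables from the question:

```yaml
- mysql_db:
    name: mydb
    state: import
    target: /tmp/mysql.sql
    login_host: mydbhost
    login_user: root
    login_password: password
    ssl_ca: "{{ mysql_use_ssl | default(false) | ternary('/path/to/cert.pem', omit) }}"
```

With `omit`, the module behaves exactly as if `ssl_ca` had never been specified, so no dummy value like `none` is needed.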
37,557,412
Ansible: removing hosts
I know that one can add host with the following task: - name: Add new instance to host group add_host: hostname: '{{ item.public_ip }}' groupname: "tag_Name_api_production" with_items: ec2.instances But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Ansible: removing hosts I know that one can add host with the following task: - name: Add new instance to host group add_host: hostname: '{{ item.public_ip }}' groupname: "tag_Name_api_production" with_items: ec2.instances But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
ansible, ansible-2.x
15
18,713
5
https://stackoverflow.com/questions/37557412/ansible-removing-hosts
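There is no built-in inverse of `add_host`; a common workaround, sketched here with a hypothetical `decommissioned` list variable, is to collect the unwanted hosts into a group and exclude that group with a host pattern in later plays:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: removed
      loop: "{{ decommissioned | default([]) }}"

# Later plays target everything except the "removed" group
- hosts: "all:!removed"
  tasks:
    - ping:
```

The hosts are still in the inventory, but the `all:!removed` pattern keeps them out of every subsequent play.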
44,651,877
Running a playbook on multiple host groups one at a time
I want to run a playbook containing some roles on multiple host groups I create dynamically with the group_by module. I'm able to do it like the example below (ping replacing my actual role). I was wondering if there is a way to run each group separately in a loop instead of listing all instance ids. I don't want to create a duplicate line with every instance id. The purpose here is to deploy to one instance in every data center at a time instead of running all with a low serial which takes a long time. There might be a different way of doing it, I don't want to create static groups in the inventory for each instance_id as well. --- - hosts: tag_type_edgenode tasks: - group_by: key=instance_id_{{instance_id}} register: dyn_groups - hosts: instance_id_1 tasks: - ping: - hosts: instance_id_2 tasks: - ping: - hosts: instance_id_3 tasks: - ping: - hosts: instance_id_4 tasks: - ping:
Running a playbook on multiple host groups one at a time I want to run a playbook containing some roles on multiple host groups I create dynamically with the group_by module. I'm able to do it like the example below (ping replacing my actual role). I was wondering if there is a way to run each group separately in a loop instead of listing all instance ids. I don't want to create a duplicate line with every instance id. The purpose here is to deploy to one instance in every data center at a time instead of running all with a low serial which takes a long time. There might be a different way of doing it, I don't want to create static groups in the inventory for each instance_id as well. --- - hosts: tag_type_edgenode tasks: - group_by: key=instance_id_{{instance_id}} register: dyn_groups - hosts: instance_id_1 tasks: - ping: - hosts: instance_id_2 tasks: - ping: - hosts: instance_id_3 tasks: - ping: - hosts: instance_id_4 tasks: - ping:
ansible, ansible-2.x, ansible-inventory
15
83,907
3
https://stackoverflow.com/questions/44651877/running-a-playbook-on-multiple-host-groups-one-at-a-time
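Ansible has no loop over plays, so one workaround is to parameterize the second play's hosts and drive it from a shell loop. A sketch; the `target_group` variable and the wrapper loop are assumptions, not from the question:

```yaml
# Run as, e.g.:
#   for i in 1 2 3 4; do
#     ansible-playbook thePlaybook.yml -e "target_group=instance_id_${i}"
#   done
- hosts: tag_type_edgenode
  tasks:
    - group_by:
        key: "instance_id_{{ instance_id }}"

- hosts: "{{ target_group }}"
  tasks:
    - ping:
```

Each invocation still runs the cheap `group_by` play, then deploys to exactly one dynamically built group, i.e. one instance per data center at a time.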
32,627,624
Ansible: how to run task on other host inside one playbook?
I'm writing an Ansible playbook for one specific goal: I'd like to sync a directory on servers A-n, but first I should do a git pull on server "B". I launch the playbook on hosts A-n (described in the inventory). I also have a playbook for "git pull" on server B. Is it possible to include one playbook in another? I don't know how, because Ansible allows specifying only one host group at the beginning of a playbook.
Ansible: how to run task on other host inside one playbook? I'm writing an Ansible playbook for one specific goal: I'd like to sync a directory on servers A-n, but first I should do a git pull on server "B". I launch the playbook on hosts A-n (described in the inventory). I also have a playbook for "git pull" on server B. Is it possible to include one playbook in another? I don't know how, because Ansible allows specifying only one host group at the beginning of a playbook.
git, deployment, continuous-integration, ansible
15
23,024
1
https://stackoverflow.com/questions/32627624/ansible-how-to-run-task-on-other-host-inside-one-playbook
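Since a playbook may contain several plays that run in order, the usual shape is a first play for server B's git pull followed by a play for the A-n servers. A sketch; the host names, group name, repo URL, and paths are hypothetical:

```yaml
- hosts: serverB
  tasks:
    - name: update the checkout that will be synced
      git:
        repo: "https://example.com/project.git"
        dest: /srv/project

- hosts: groupA
  tasks:
    - name: sync the directory prepared on server B
      synchronize:
        src: /srv/project/
        dest: /opt/project/
      delegate_to: serverB
```

Delegating `synchronize` to serverB makes B the rsync source for every host in groupA; alternatively the git task could live in the same play as the sync with `delegate_to: serverB` and `run_once: true`.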
37,197,743
Pass variable to included playbook?
I would like to have a master playbook, which includes other playbooks. Is it possible to pass a variable to that included playbook? The normal syntax which is used for passing variables to included tasks doesn't work (see below) - include: someplaybook.yml variable=value and - include: someplaybook.yml vars: variable: value I'm running v2.0.2.0.
Pass variable to included playbook? I would like to have a master playbook, which includes other playbooks. Is it possible to pass a variable to that included playbook? The normal syntax which is used for passing variables to included tasks doesn't work (see below) - include: someplaybook.yml variable=value and - include: someplaybook.yml vars: variable: value I'm running v2.0.2.0.
ansible
15
30,948
4
https://stackoverflow.com/questions/37197743/pass-variable-to-included-playbook
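In the 2.0-era releases a playbook-level include simply ignores inline vars, so one common workaround (a sketch, not the only option) is to leave the include bare and supply the value from the command line or from group_vars:

```yaml
# master.yml
- include: someplaybook.yml

# someplaybook.yml then reads {{ variable }}, supplied at run time:
#   ansible-playbook master.yml -e "variable=value"
```

Later Ansible versions replace playbook-level `include:` with `import_playbook:`, but the extra-vars route works the same way there.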
33,920,530
Pass a parameter to Ansible's dynamic inventory
I'm using Ansible to configure some virtual machines. I wrote a Python script which retrieves the hosts from a REST service. My VMs are organized in "Environments". For example I have the "Test", "Red" and "Integration" environments, each with a subset of VMs. This Python script requires the custom --environment <ENV> parameter to retrieve the hosts of the wanted environment. The problem I'm having is passing the <ENV> to the ansible-playbook command. In fact the following command doesn't work ansible-playbook thePlaybook.yml -i ./inventory/FromREST.py --environment Test I get the error: Usage: ansible-playbook playbook.yml ansible-playbook: error: no such option: --environment What is the right syntax to pass variables to a dynamic inventory script? Update: To better explain, the FromREST.py script accepts the following parameters: Either the --list parameter or the --host <HOST> parameter, as per the Dynamic Inventory guidelines The --environment <ENVIRONMENT> parameter, which I added to the ones required by Ansible to manage the different Environments
Pass a parameter to Ansible's dynamic inventory I'm using Ansible to configure some virtual machines. I wrote a Python script which retrieves the hosts from a REST service. My VMs are organized in "Environments". For example I have the "Test", "Red" and "Integration" environments, each with a subset of VMs. This Python script requires the custom --environment <ENV> parameter to retrieve the hosts of the wanted environment. The problem I'm having is passing the <ENV> to the ansible-playbook command. In fact the following command doesn't work ansible-playbook thePlaybook.yml -i ./inventory/FromREST.py --environment Test I get the error: Usage: ansible-playbook playbook.yml ansible-playbook: error: no such option: --environment What is the right syntax to pass variables to a dynamic inventory script? Update: To better explain, the FromREST.py script accepts the following parameters: Either the --list parameter or the --host <HOST> parameter, as per the Dynamic Inventory guidelines The --environment <ENVIRONMENT> parameter, which I added to the ones required by Ansible to manage the different Environments
python, bash, ansible
15
12,154
2
https://stackoverflow.com/questions/33920530/pass-a-parameter-to-ansibles-dynamic-inventory
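ansible-playbook does not forward unknown options to inventory scripts; it only ever invokes them with `--list` or `--host`. The usual workaround is an environment variable. A minimal sketch with a hypothetical `TARGET_ENVIRONMENT` variable and a stubbed-out REST call standing in for the real service:

```python
#!/usr/bin/env python3
# Sketch of a dynamic inventory script that selects the environment
# from an environment variable instead of a custom CLI flag.
import json
import os


def build_inventory(environment):
    # Placeholder for the REST call; a static mapping stands in here.
    hosts_by_env = {
        "Test": ["test-vm-1", "test-vm-2"],
        "Red": ["red-vm-1"],
    }
    return {
        environment: {"hosts": hosts_by_env.get(environment, [])},
        # Empty hostvars so Ansible never calls the script with --host
        "_meta": {"hostvars": {}},
    }


if __name__ == "__main__":
    # Ansible calls this with --list; we ignore argv and read the env var.
    env_name = os.environ.get("TARGET_ENVIRONMENT", "Test")
    print(json.dumps(build_inventory(env_name)))
```

Invoked as `TARGET_ENVIRONMENT=Test ansible-playbook -i inventory/FromREST.py thePlaybook.yml`, so the selection travels through the process environment rather than the command line.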
67,884,688
Ansible - what are the differences between version 2, 3, and 4?
Previously I used ansible 2.x and now the latest ansible version is 4.x. Having read Ansible Roadmap and Announcing the Community Ansible 3.0.0 Package, it is still not clear to me what the differences among versions 2, 3, and 4 are. It looks like version 2.x (base or core) is included in Ansible 3.x and seemingly in 4.x as well. It is not clear if playbooks developed with Ansible 2.9 are compatible with Ansible 4.x. How can I clearly understand their differences? Please advise the best resources to understand the differences between Ansible 2, 3, and 4. Announcing the Community Ansible 3.0.0 Package Today, there are 3 distinct artefacts in the Ansible open source world: Ansible Core - A minimal Ansible language and runtime (soon to be renamed from ansible-base) Ansible Collections on Galaxy (community supported) Ansible community package - Ansible installation including ansible-base/core plus community curated Collections Now that these artefacts are managed separately, their versions are diverging as well. Moving forward, Ansible Core will maintain its existing numbering scheme (similar to the Linux Kernel). The next version of Ansible Core after ansible-base 2.10 will be ansible-core 2.11. The Ansible community package (Ansible Core + community Collections) is adopting semantic versioning. The next version of the Ansible community package after 2.10 is 3.0.0.
Ansible - what are the differences between version 2, 3, and 4? Previously I used ansible 2.x and now the latest ansible version is 4.x. Having read Ansible Roadmap and Announcing the Community Ansible 3.0.0 Package, it is still not clear to me what the differences among versions 2, 3, and 4 are. It looks like version 2.x (base or core) is included in Ansible 3.x and seemingly in 4.x as well. It is not clear if playbooks developed with Ansible 2.9 are compatible with Ansible 4.x. How can I clearly understand their differences? Please advise the best resources to understand the differences between Ansible 2, 3, and 4. Announcing the Community Ansible 3.0.0 Package Today, there are 3 distinct artefacts in the Ansible open source world: Ansible Core - A minimal Ansible language and runtime (soon to be renamed from ansible-base) Ansible Collections on Galaxy (community supported) Ansible community package - Ansible installation including ansible-base/core plus community curated Collections Now that these artefacts are managed separately, their versions are diverging as well. Moving forward, Ansible Core will maintain its existing numbering scheme (similar to the Linux Kernel). The next version of Ansible Core after ansible-base 2.10 will be ansible-core 2.11. The Ansible community package (Ansible Core + community Collections) is adopting semantic versioning. The next version of the Ansible community package after 2.10 is 3.0.0.
ansible
15
9,792
2
https://stackoverflow.com/questions/67884688/ansible-what-are-the-differences-between-version-2-3-and-4
71,565,392
Why is ansible slow with simple tasks
After using ansible for about a week now, I found out that ansible takes a similar amount of time regardless of the complexity of the task it is supposed to do. Install 20 packages using apt - 3 seconds Copy a single file with 2 config settings using template - 3 seconds. While I can easily install 20 packages using just a single command, template needs to be run in a loop, so if I have 20 config files to copy, then it takes a whole minute. Scale that to 10 roles, some of them repeated 5 times, and you can get over an hour to do a simple deployment. Is ansible supposed to be this slow, or is there something I can do to improve the performance? edit: Based on your answers I assume this is not normal behavior. Here are some code examples of those simple tasks as requested by @U880D. As I said, nothing special, just simple configs: # tasks/main.yml - name: Configure php-{{ php_version }} template: src: '{{ item }}.j2' dest: '/etc/php/{{ php_version }}/{{ item }}' loop: - cli/conf.d/50-memory.ini - fpm/conf.d/50-memory.ini - fpm/conf.d/50-opcache.ini - fpm/pool.d/www.conf notify: - restart php {{ php_version }} # templates/fpm/conf.d/50-memory.ini.j2 memory_limit = {{ php_fpm_memory_limit }} post_max_size = {{ php_fpm_post_max_size }} upload_max_filesize = {{ php_fpm_upload_max_filesize }} max_file_uploads = {{ php_fpm_max_file_uploads }} # templates/fpm/conf.d/50-opcache.ini.j2 [opcache] opcache.enable=1 opcache.memory_consumption={{ php_opcache_memory_limit }} opcache.validate_timestamps=1 opcache.revalidate_freq=1 opcache.huge_code_pages=1 edit2: I am not sure if this is what task_profile should do, but here is the output of that command from above on a server called management-1. I added a debug task after to get exact timings. 4 templates that didn't even need an update took ~7.3s: TASK [php : Configure php-8.1] ************************************************************************************************************************************************************************************ Tuesday 22 March 2022 10:17:33 +0100 (0:00:02.730) 0:00:06.616 ********* ok: [management-1] => (item=cli/conf.d/50-memory.ini) ok: [management-1] => (item=fpm/conf.d/50-memory.ini) ok: [management-1] => (item=fpm/conf.d/50-opcache.ini) ok: [management-1] => (item=fpm/pool.d/www.conf) TASK [php : Debug] ************************************************************************************************************************************************************************************************ Tuesday 22 March 2022 10:17:40 +0100 (0:00:07.308) 0:00:13.924 *********
Why is ansible slow with simple tasks After using ansible for about a week now, I found out that ansible takes a similar amount of time regardless of the complexity of the task it is supposed to do. Install 20 packages using apt - 3 seconds Copy a single file with 2 config settings using template - 3 seconds. While I can easily install 20 packages using just a single command, template needs to be run in a loop, so if I have 20 config files to copy, then it takes a whole minute. Scale that to 10 roles, some of them repeated 5 times, and you can get over an hour to do a simple deployment. Is ansible supposed to be this slow, or is there something I can do to improve the performance? edit: Based on your answers I assume this is not normal behavior. Here are some code examples of those simple tasks as requested by @U880D. As I said, nothing special, just simple configs: # tasks/main.yml - name: Configure php-{{ php_version }} template: src: '{{ item }}.j2' dest: '/etc/php/{{ php_version }}/{{ item }}' loop: - cli/conf.d/50-memory.ini - fpm/conf.d/50-memory.ini - fpm/conf.d/50-opcache.ini - fpm/pool.d/www.conf notify: - restart php {{ php_version }} # templates/fpm/conf.d/50-memory.ini.j2 memory_limit = {{ php_fpm_memory_limit }} post_max_size = {{ php_fpm_post_max_size }} upload_max_filesize = {{ php_fpm_upload_max_filesize }} max_file_uploads = {{ php_fpm_max_file_uploads }} # templates/fpm/conf.d/50-opcache.ini.j2 [opcache] opcache.enable=1 opcache.memory_consumption={{ php_opcache_memory_limit }} opcache.validate_timestamps=1 opcache.revalidate_freq=1 opcache.huge_code_pages=1 edit2: I am not sure if this is what task_profile should do, but here is the output of that command from above on a server called management-1. I added a debug task after to get exact timings. 4 templates that didn't even need an update took ~7.3s: TASK [php : Configure php-8.1] ************************************************************************************************************************************************************************************ Tuesday 22 March 2022 10:17:33 +0100 (0:00:02.730) 0:00:06.616 ********* ok: [management-1] => (item=cli/conf.d/50-memory.ini) ok: [management-1] => (item=fpm/conf.d/50-memory.ini) ok: [management-1] => (item=fpm/conf.d/50-opcache.ini) ok: [management-1] => (item=fpm/pool.d/www.conf) TASK [php : Debug] ************************************************************************************************************************************************************************************************ Tuesday 22 March 2022 10:17:40 +0100 (0:00:07.308) 0:00:13.924 *********
ansible
15
16,146
2
https://stackoverflow.com/questions/71565392/why-is-ansible-slow-with-simple-tasks
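Most of that per-task overhead is SSH connection setup and copying the module payload, which pipelining and persistent SSH connections usually cut dramatically. A sketch of the relevant ansible.cfg settings (the numeric values are illustrative, not from the question):

```ini
[defaults]
forks = 20

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
```

`pipelining = True` sends the module over the existing SSH channel instead of a separate file transfer, and `ControlPersist` reuses one SSH connection across tasks, so a four-item template loop pays the connection cost once rather than four times.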
66,828,315
What is the difference between ansible_architecture and ansible_machine on Ansible?
I'm trying to find the architecture of the machine by using Ansible facts . We can gather information about the machine by running ansible -m setup <host-name> command, as described in the documentation: Discovering variables: facts and magic variables β€” Ansible Documentation . But it seems that the ansible_architecture and ansible_machine are the same values. I'm not sure the difference between them. The example on the above documentation shows the following two values which have the same values: "ansible_architecture": "x86_64", "ansible_machine": "x86_64", On my M1 MacBook, the result is the same like this: shuuji3@momo mac-dev-playbook > ansible -m setup localhost | egrep '_architecture|_machine' "ansible_architecture": "arm64", "ansible_machine": "arm64", Can we use these values interchangeably? Or can they have other values in another case?
What is the difference between ansible_architecture and ansible_machine on Ansible? I'm trying to find the architecture of the machine by using Ansible facts . We can gather information about the machine by running ansible -m setup <host-name> command, as described in the documentation: Discovering variables: facts and magic variables β€” Ansible Documentation . But it seems that the ansible_architecture and ansible_machine are the same values. I'm not sure the difference between them. The example on the above documentation shows the following two values which have the same values: "ansible_architecture": "x86_64", "ansible_machine": "x86_64", On my M1 MacBook, the result is the same like this: shuuji3@momo mac-dev-playbook > ansible -m setup localhost | egrep '_architecture|_machine' "ansible_architecture": "arm64", "ansible_machine": "arm64", Can we use these values interchangeably? Or can they have other values in another case?
macos, ansible, ansible-facts, apple-silicon
15
11,829
1
https://stackoverflow.com/questions/66828315/what-is-the-difference-between-ansible-architecture-and-ansible-machine-on-a
49,062,482
Ansible service task fails with "Could not find the requested service XXX"
I am trying to create ansible playbooks to install and configure kerberos on centos7. I have a task which yum installs the required rpms - name: install kerberos yum: name={{ item }} state=present with_items: - krb5-server - krb5-libs And a task to start the service - name: start kerberos service service: name=krb5kdc.service state=started enabled=yes The playbook fails with TASK [kerberos : start the systemd kerberos service] ******************************** fatal: [zen_wozniak]: FAILED! => {"changed": false, "msg": "Could not find the requested service krb5kdc.service: host"} This seems like it should be pretty simple, yum install the rpm and then start the service, but the service unit file can't even be found. What am I doing wrong? For clarity I am using ansible 2.4.2.0 and the centos:7.3.1611 docker base image. edit: The yum install step is working... TASK [kerberos : debug] *********************************************************************************************** ok: [brave_payne] => { "result": { "changed": false, "failed": false, "results": [ { "arch": "x86_64", "envra": "0:krb5-server-1.15.1-8.el7.x86_64", "epoch": "0", "name": "krb5-server", "release": "8.el7", "repo": "base", "version": "1.15.1", "yumstate": "available" }, { "arch": "x86_64", "envra": "0:krb5-server-1.15.1-8.el7.x86_64", "epoch": "0", "name": "krb5-server", "release": "8.el7", "repo": "installed", "version": "1.15.1", "yumstate": "installed" } ] } } Logging into the failed ansible container and manually starting looks like this [root@94e29c0e8bdd /]# systemctl status krb5kdc.service Failed to get D-Bus connection: Operation not permitted And yes, the container is running privileged docker inspect --format='{{.HostConfig.Privileged}}' 94e29c0e8bdd true
Ansible service task fails with "Could not find the requested service XXX" I am trying to create ansible playbooks to install and configure kerberos on centos7. I have a task which yum installs the required rpms - name: install kerberos yum: name={{ item }} state=present with_items: - krb5-server - krb5-libs And a task to start the service - name: start kerberos service service: name=krb5kdc.service state=started enabled=yes The playbook fails with TASK [kerberos : start the systemd kerberos service] ******************************** fatal: [zen_wozniak]: FAILED! => {"changed": false, "msg": "Could not find the requested service krb5kdc.service: host"} This seems like it should be pretty simple, yum install the rpm and then start the service, but the service unit file can't even be found. What am I doing wrong? For clarity I am using ansible 2.4.2.0 and the centos:7.3.1611 docker base image. edit: The yum install step is working... TASK [kerberos : debug] *********************************************************************************************** ok: [brave_payne] => { "result": { "changed": false, "failed": false, "results": [ { "arch": "x86_64", "envra": "0:krb5-server-1.15.1-8.el7.x86_64", "epoch": "0", "name": "krb5-server", "release": "8.el7", "repo": "base", "version": "1.15.1", "yumstate": "available" }, { "arch": "x86_64", "envra": "0:krb5-server-1.15.1-8.el7.x86_64", "epoch": "0", "name": "krb5-server", "release": "8.el7", "repo": "installed", "version": "1.15.1", "yumstate": "installed" } ] } } Logging into the failed ansible container and manually starting looks like this [root@94e29c0e8bdd /]# systemctl status krb5kdc.service Failed to get D-Bus connection: Operation not permitted And yes, the container is running privileged docker inspect --format='{{.HostConfig.Privileged}}' 94e29c0e8bdd true
ansible, kerberos, ansible-2.x
15
49,970
2
https://stackoverflow.com/questions/49062482/ansible-service-task-fails-with-could-not-find-the-requested-service-xxx
29,022,767
using ansible set_fact module to define persistent facts?
I want to define a playbook which establishes facts about my hosts that can be used in other plays. The set_fact module claims to be able to do this ... [URL] -- however it's not working ... The facts I define are available after the call to set_fact within a run of the playbook -- I would then expect to be able to use ansible all -m setup and see the fact defined somewhere within the facts gathered for each host ... I tried looking into the code for the set_fact module -- but all I find is a documentation string ... [URL]
using ansible set_fact module to define persistent facts? I want to define a playbook which establishes facts about my hosts that can be used in other plays. The set_fact module claims to be able to do this ... [URL] -- however it's not working ... The facts I define are available after the call to set_fact within a run of the playbook -- I would then expect to be able to use ansible all -m setup and see the fact defined somewhere within the facts gathered for each host ... I tried looking into the code for the set_fact module -- but all I find is a documentation string ... [URL]
ansible
15
13,164
3
https://stackoverflow.com/questions/29022767/using-ansible-set-fact-module-to-define-persistent-facts
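In later Ansible versions (2.4+), set_fact gained a `cacheable` flag that persists the fact into the fact cache when a cache plugin is configured; without caching, set_fact results live only for the current run. A sketch with an illustrative fact name:

```yaml
# Requires fact caching enabled in ansible.cfg, e.g.:
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts
- set_fact:
    app_environment: production
    cacheable: yes
```

With the cache enabled, later playbook runs (and `ansible -m setup` with the same cache settings) can see the stored value alongside gathered facts.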
47,681,937
Ansible: specify inventory file inside playbook
So from what I gather, we can use ansible.cfg to set the default inventory file that ansible looks for when running a playbook. We can also override that inventory by using the -i parameter when running an ansible playbook via the command line. Is there a way for me to specify a specific inventory file inside a playbook without using the command line or changing the ansible.cfg file?
Ansible: specify inventory file inside playbook So from what I gather, we can use ansible.cfg to set the default inventory file that ansible looks for when running a playbook. We can also override that inventory by using the -i parameter when running an ansible playbook via the command line. Is there a way for me to specify a specific inventory file inside a playbook without using the command line or changing the ansible.cfg file?
ansible, ansible-inventory, ansible-facts
15
13,510
1
https://stackoverflow.com/questions/47681937/ansible-specify-inventory-file-inside-playbook
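There is no play keyword that points at an inventory file, but a first play can build the inventory in memory with add_host, which later plays in the same file can target. A sketch with made-up addresses:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: web
      loop:
        - 10.0.0.11
        - 10.0.0.12

- hosts: web
  tasks:
    - ping:
```

The in-memory hosts exist only for the duration of the run, so this keeps everything in the playbook without touching ansible.cfg or the -i flag.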
31,787,044
Ansible synchronize asking for a password
I am using Ansible (1.9.2) to deploy some files to a Redhat 6.4 server. The playbook looks something like this - name: deploy files hosts: web tasks: - name: sync files sudo: no synchronize: src={{ local_path }} dest={{ dest_path }} And to kick this off I run something like the following ansible-playbook -i myinventory myplaybook.yml -u DOMAIN\\user --ask-pass When I start the play I enter my password at the prompt, facts are then obtained successfully, however as soon as the synchronize task is reached another prompt asks for my password again, like the following DOMAIN\user@hostname's password: If I enter my password again the deploy completes correctly. My questions are How can I fix or work around this, so that I do not have to enter my password for every use of the synchronize module? Is this currently expected behaviour for the synchronize module? Or is this a bug in Ansible? I cannot use ssh keys due to environment restrictions. I do not want to use the copy module for scalability reasons. Things I have tried I have seen a number of other questions on this subject but I have not been able to use any of them to fix my issue or understand if this is expected behavior. Ansible synchronize prompts passphrase even if already entered at the beginning Ansible prompts password when using synchronize [URL] [URL] The Ansible docs are generally excellent but I have not been able to find anything about this in the official docs. I have tried specifying the user and password in the inventory file and not using the --ask-pass and -u parameters. But while I then do not have to enter the password to collect facts, the synchronize module still requests my password. I have tried setting the --ask-sudo-pass as well, but it did not help. I have been using a CentOS 7 control box, but I have also tried an Ubuntu 14.04 box. Can anyone help?
Ansible synchronize asking for a password I am using Ansible (1.9.2) to deploy some files to a Redhat 6.4 server. The playbook looks something like this - name: deploy files hosts: web tasks: - name: sync files sudo: no synchronize: src={{ local_path }} dest={{ dest_path }} And to kick this off I run something like the following ansible-playbook -i myinventory myplaybook.yml -u DOMAIN\\user --ask-pass When I start the play I enter my password at the prompt, facts are then obtained successfully, however as soon as the synchronize task is reached another prompt asks for my password again, like the following DOMAIN\user@hostname's password: If I enter my password again the deploy completes correctly. My questions are How can I fix or work around this, so that I do not have to enter my password for every use of the synchronize module? Is this currently expected behaviour for the synchronize module? Or is this a bug in Ansible? I cannot use ssh keys due to environment restrictions. I do not want to use the copy module for scalability reasons. Things I have tried I have seen a number of other questions on this subject but I have not been able to use any of them to fix my issue or understand if this is expected behavior. Ansible synchronize prompts passphrase even if already entered at the beginning Ansible prompts password when using synchronize [URL] [URL] The Ansible docs are generally excellent but I have not been able to find anything about this in the official docs. I have tried specifying the user and password in the inventory file and not using the --ask-pass and -u parameters. But while I then do not have to enter the password to collect facts, the synchronize module still requests my password. I have tried setting the --ask-sudo-pass as well, but it did not help. I have been using a CentOS 7 control box, but I have also tried an Ubuntu 14.04 box. Can anyone help?
deployment, rsync, ansible
15
12,414
4
https://stackoverflow.com/questions/31787044/ansible-synchronize-asking-for-a-password
31,239,793
Ansible command module can't execute arguments
An Ansible newbie here. The issue I'm having is that when I use the "command" module to execute a command, it fails. I tried this on a remote host and on localhost as well. BASIC INFO: Version: ansible 2.0.0 (devel 2c9d1257ba) Core: (detached HEAD 5983d64d77) last updated 2015/05/30 07:22:33 (GMT +800) Extras: (detached HEAD 1276420a3a) last updated 2015/05/30 07:22:41 (GMT +800) Ansible Host file local ansible_ssh_host=127.0.0.1 ansible_ssh_port=22 ansible_ssh_user=root ansible_ssh_pass=a Command ansible -i ansible_hosts -m command -a "/usr/bin/ls" local Actual Result local | FAILED! => {u'msg': u'Traceback (most recent call last):\r\n File "/root/.ansible/tmp/ansible-tmp-1436165888.5-23845581569171/command", line 2139, in <module>\r\n main()\r\n File "/root/.ansible/tmp/ansible-tmp-1436165888.5-23845581569171/command", line 158, in main\r\n module = CommandModule(argument_spec=dict())\r\n File "/root/.ansible/tmp/ansible-tmp-1436165888.5-23845581569171/command", line 606, in __init__\r\n self._check_for_check_mode()\r\n File "/root/.ansible/tmp/ansible-tmp-1436165888.5-23845581569171/command", line 1142, in _check_for_check_mode\r\n for (k,v) in self.params.iteritems():\r\nAttributeError: \'tuple\' object has no attribute \'iteritems\'\r\n', u'failed': True, u'changed': False, u'parsed': False, u'invocation': {u'module_name': u'command', u'module_args': {u'_raw_params': u'/usr/bin/ls'}}} Was there something wrong in my config?
ansible
15
1,663
1
https://stackoverflow.com/questions/31239793/ansible-command-module-cant-execute-arguments
33,119,092
How to prompt user for a target host in Ansible?
I want to write a bootstrapper playbook for new machines in Ansible which will reconfigure the network settings. At the time of the first execution, target machines will have a DHCP-assigned address. The user who is supposed to execute the playbook knows the assigned IP address of a new machine. I would like to prompt the user for this value. The vars_prompt section allows getting input from the user; however, it is defined under the hosts section, effectively preventing the host address from being the required value. Is it possible without using a wrapper script modifying the inventory file?
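One pattern that avoids a wrapper script (a sketch — the group and variable names here are illustrative, not from the question) is to prompt in a play against localhost and feed the answer to add_host, which injects a host into the in-memory inventory for a second play:

```yaml
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_ip
      prompt: "IP address of the new machine"
      private: no
  tasks:
    - add_host:
        name: "{{ target_ip }}"
        groups: bootstrap      # ad-hoc group consumed only by the next play

- hosts: bootstrap
  tasks:
    - ping:
```

The add_host change survives only for the current playbook run, which is usually exactly what a bootstrapper needs.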
ansible, ansible-inventory
14
27,514
5
https://stackoverflow.com/questions/33119092/how-to-prompt-user-for-a-target-host-in-ansible
34,429,842
How to evaluate a when condition for Ansible task
I have a variable with the following value: name_prefix: stage-dbs I have a task in my playbook which will have to check this variable and see if it contains *-dbs ; if the condition is met then it should proceed. I wrote something like this : - name: Ensure deployment directory is present file: path=/var/tmp/deploy/paTestTool state=directory when: name_prefix =="*-dbs" # what condition is required here to evaluate the rest part of the variable?? What regex pattern should be used, or how can a regex be used here?
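For a suffix check like this, a plain string or regex test in the when expression is enough — a sketch (on recent Ansible the is search / is match tests work; on older releases the equivalent filter form name_prefix | search('-dbs$') was used instead):

```yaml
- name: Ensure deployment directory is present
  file:
    path: /var/tmp/deploy/paTestTool
    state: directory
  when: name_prefix is search('-dbs$')       # regex: ends with "-dbs"
  # alternatives:
  # when: name_prefix is match('.*-dbs$')    # match is anchored at the start
  # when: name_prefix.endswith('-dbs')       # plain Python string method
```

Note that `"*-dbs"` as written in the question is compared literally; the `*` wildcard has no meaning in a plain equality test.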
regex, ansible
14
58,086
3
https://stackoverflow.com/questions/34429842/how-to-evaluate-a-when-condition-for-ansible-task
53,198,576
Ansible playbook wait until all pods running
I have this ansible (working) playbook that looks at the output of kubectl get pods -o json until the pod is in the Running state. Now I want to extend this to multiple pods. The core issue is that the json result of the kubectl query is a list, I know how to access the first item, but not all of the items... - name: wait for pods to come up shell: kubectl get pods -o json register: kubectl_get_pods until: kubectl_get_pods.stdout|from_json|json_query('items[0].status.phase') == "Running" retries: 20 The json object looks like, [ { ... "status": { "phase": "Running" } }, { ... "status": { "phase": "Running" } }, ... ] Using [0] to access the first item worked for handling one object in the list, but I can't figure out how to extend it to multiple items. I tried [*] which did not work.
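To check every pod rather than only items[0], one approach (a sketch) is to select all phases with the JMESPath wildcard items[*] and collapse them with the unique filter, so the condition only holds when every phase is Running:

```yaml
- name: wait for all pods to come up
  shell: kubectl get pods -o json
  register: kubectl_get_pods
  until: kubectl_get_pods.stdout | from_json | json_query('items[*].status.phase') | unique == ["Running"]
  retries: 20
```

If the pod list is empty, unique yields [] and the task simply keeps retrying, which is usually the desired behaviour while pods are still being scheduled.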
kubernetes, ansible
14
22,193
7
https://stackoverflow.com/questions/53198576/ansible-playbook-wait-until-all-pods-running
32,596,203
Cygwin - How to install ansible?
How to get / install ansible using Cygwin? I tried the following steps but it's didn't work during bullet 5 (while running " python setup.py install "). Steps taken from: Taken from [URL] 1) Download and install Cygwin, with at least the following packages selected (you can select the packages during the install process): libyaml libyaml-devel curl python (2.7.x) python-crypto python-openssl python-paramiko python-setuptools git (2.1.x) vim openssh openssl openssl-devel 2) Download and install PyYAML and Jinja2, as they're not available via Cygwin's installer: 1. Open Cygwin 2. Download PyYAML: curl -O [URL] 3. Download Jinja2: curl -O [URL] 4. Untar both downloads: tar -xvf PyYAML-3.10.tar.gz && tar -xvf Jinja2-2.6.tar.gz 5. Change directory into each of the expanded folders and run python "python setup.py install" to install each package. 6. Clone ansible from its repository on GitHub: git clone [URL] /opt/ansible This was tested with Ansible version v1.6.6, change directory into /opt/ansible and checkout the correct tag: git checkout v1.6.6. 7. Add the following lines into your Cygwin .bash_profile: # Ansible settings ANSIBLE=/opt/ansible export PATH=$PATH:$ANSIBLE/bin export PYTHONPATH=$ANSIBLE/lib export ANSIBLE_LIBRARY=$ANSIBLE/library 8. At this point, you should be able to run ansible commands via Cygwin (once you restart, or enter source ~/.bash_profile to pick up the settings you just added). Try ansible --version to display Ansible's version. 9. 
Passwordless ssh will need to be set up between your Windows machine and the deployment host(s). To enable passwordless ssh on Centos - ssh-copy-id root@node To enable passwordless ssh on SuSE I followed the steps in this blog: [URL] install sshpass v1.05 on your Windows machine. The error that I got during bullet 5 is: $ python setup.py install running install running build running build_py creating build creating build/lib.cygwin-2.2.1-x86_64-2.7 creating build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/composer.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/constructor.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/cyaml.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/dumper.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/emitter.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/error.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/events.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/loader.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/nodes.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/parser.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/reader.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/representer.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/resolver.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/scanner.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/serializer.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/tokens.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml copying lib/yaml/__init__.py -> build/lib.cygwin-2.2.1-x86_64-2.7/yaml running build_ext creating build/temp.cygwin-2.2.1-x86_64-2.7 checking if libyaml is compilable gcc -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1 
-fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/cygdrive/c/cygwin/include/python2.7 -c build/temp.cygwin-2.2.1-x86_64-2.7/check_libyaml.c -o build/temp.cygwin-2.2.1-x86_64-2.7/check_libyaml.o checking if libyaml is linkable gcc build/temp.cygwin-2.2.1-x86_64-2.7/check_libyaml.o -L/cygdrive/c/cygwin/lib/python2.7/config -L/usr/lib -lyaml -o build/temp.cygwin-2.2.1-x86_64-2.7/check_libyaml.exe skipping 'ext/_yaml.c' Cython extension (up-to-date) building '_yaml' extension creating build/temp.cygwin-2.2.1-x86_64-2.7/ext gcc -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1 -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/cygdrive/c/cygwin/include/python2.7 -c ext/_yaml.c -o build/temp.cygwin-2.2.1-x86_64-2.7/ext/_yaml.o ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory #include "Python.h" ^ compilation terminated. error: command 'gcc' failed with exit status 1 . 
$ ansible --version Traceback (most recent call last): File "/opt/ansible/bin/ansible", line 40, in <module> from ansible.utils.display import Display File "/opt/ansible/lib/ansible/utils/display.py", line 35, in <module> from ansible import constants as C File "/opt/ansible/lib/ansible/constants.py", line 30, in <module> from ansible.parsing.splitter import unquote File "/opt/ansible/lib/ansible/parsing/__init__.py", line 32, in <module> from ansible.parsing.vault import VaultLib File "/opt/ansible/lib/ansible/parsing/vault/__init__.py", line 82, in <module> from cryptography.hazmat.primitives.hashes import SHA256 as c_SHA256 File "/cygdrive/c/cygwin/lib/python2.7/site-packages/cryptography/hazmat/primitives/hashes.py", line 15, in <module> from cryptography.hazmat.backends.interfaces import HashBackend File "/cygdrive/c/cygwin/lib/python2.7/site-packages/cryptography/hazmat/backends/__init__.py", line 7, in <module> import pkg_resources File "/cygdrive/c/cygwin/lib/python2.7/site-packages/pkg_resources/__init__.py", line 84, in <module> packaging = pkg_resources._vendor.packaging AttributeError: 'module' object has no attribute '_vendor'
cygwin, installation, ansible, setup.py, pyyaml
14
23,994
9
https://stackoverflow.com/questions/32596203/cygwin-how-to-install-ansible
48,779,938
How do you set up a confirmation prompt before running a task in a playbook?
I want to install MariaDB after the user confirms. I have a role and one simple task: - name: install MariaDB yum: name: MariaDB state: present or, if I want to use include: MySQL.yml — if the user wants to, this line executes; if not, the include is skipped. - name: install MariaDB yum: name: MariaDB state: present - include: MySQL.yml To explain more, my hosts: [dbs] 192.168.0.10 192.168.0.11 192.168.0.12 Now, I want that if the user enters no at the answer prompt, MySQL.yml does not execute for any server. My code in the role ( tasks/main.yml ): --- - pause: prompt: "Do you want to install mariadb (yes/no)?" register: my_pause delegate_to: localhost - include_tasks: mysql.yml when: hostvars['localhost'].my_pause.user_input | bool and my output : [root@anisble ansible]# ansible-playbook playbooks/test.yml PLAY [dbs] ******************************************************************** TASK [Gathering Facts] ****************************************************************** ok: [db1] ok: [db2] ok: [db3] TASK [ssh : pause] ****************************************************************************** [ssh : pause] Do you want to install mariadb (yes/no)?: no ok: [db1 -> localhost] TASK [ssh : include_tasks] *********************************************************************************** included: /etc/ansible/roles/ssh/tasks/mysql.yml for db1, db2, db3 TASK [ssh : install mariadb] ****************************************************************************** ok: [db3] ok: [db2] ok: [db1] PLAY RECAP ***************************************************************************** db1 : ok=4 changed=0 unreachable=0 failed=0 db2 : ok=3 changed=0 unreachable=0 failed=0 db3 : ok=3 changed=0 unreachable=0 failed=0
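The output shows the include still running after the user answered no. One fix (a sketch) is to run the pause once for the whole play and gate directly on the registered answer, letting the bool filter turn the typed "yes"/"no" into a boolean:

```yaml
- pause:
    prompt: "Do you want to install mariadb (yes/no)?"
  register: my_pause
  run_once: yes        # one prompt; the answer is shared by every host in the play

- include_tasks: mysql.yml
  when: my_pause.user_input | bool
```

With run_once the registered variable exists for every host, so there is no need for the fragile hostvars['localhost'] indirection (which fails when localhost is not part of the play).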
ansible
14
32,451
1
https://stackoverflow.com/questions/48779938/how-do-you-set-up-a-confirmation-prompt-before-running-a-task-in-a-playbook
38,423,219
Ansible This module requires the passlib Python library
I have tried to use the Ansible core module htpasswd on Ubuntu and I get the error This module requires the passlib Python library
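The htpasswd module runs on the managed node, so passlib has to be installed there before the module is used — a sketch of a preparatory task (the apt package name python-passlib shown in the comment is an assumption that matches older Ubuntu releases; newer ones ship python3-passlib):

```yaml
- name: Ensure passlib is available for the htpasswd module
  pip:
    name: passlib

# or via the distribution package manager on Ubuntu:
# - apt:
#     name: python-passlib
```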
ansible
14
20,420
5
https://stackoverflow.com/questions/38423219/ansible-this-module-requires-the-passlib-python-library
26,606,121
Ansible loop over variables
I am using Ansible to update the configuration file of a newly added NIC. For that I have defined some variables in a separate yml file /tmp/ip.yml #first interface interface1: eth1 bootproto1: static ipaddress1: 192.168.211.249 netmask1: 255.255.255.0 gateway: 192.168.211.2 DNS1: 192.168.211.2 #second interface interface2: eth2 bootproto2: static ipaddress2: 10.0.0.100 netmask2: 255.0.0.0 Playbook - include_vars: /tmp/ip.yml - name: configuring interface lineinfile: state=present create=yes dest=/etc/sysconfig/network-scripts/ifcfg-{{interface1}} regexp="{{ item.regexp }}" line="{{ item.line }}" with_items: - { regexp: '^BOOTPROTO=.*', line: 'BOOTPROTO={{bootproto1}}' } - { regexp: '^IPADDR=.*', line: 'IPADDR={{ipaddress1}}' } - { regexp: '^NETMASK=.*', line: 'NETMASK={{netmask1}}' } - { regexp: '^GATEWAY=.*', line: 'GATEWAY={{gateway}}' } - { regexp: '^PEERDNS=.*', line: 'PEERDNS=no' } - { regexp: '^DNS1=.*', line: 'DNS1={{DNS1}}' } - { regexp: '^ONBOOT=.*', line: 'ONBOOT={{onboot}}' } when: bootproto1 == 'static' - name: configuring for DHCP lineinfile: state=present create=yes dest=/etc/sysconfig/network-scripts/ifcfg-{{interface1}} regexp="{{ item.regexp }}" line="{{ item.line }}" with_items: - { regexp: '^BOOTPROTO=.*', line: 'BOOTPROTO={{bootproto1}}' } - { regexp: '^PEERDNS=.*', line: 'PEERDNS=yes' } - { regexp: '^ONBOOT=.*', line: 'ONBOOT={{onboot}}' } when: bootproto1 == 'dhcp' The same is repeated for the second interface. Even though this method works for 2 NICs, it is too difficult to manage; for each new NIC added I need to modify the playbook and add the corresponding variables in /tmp/ip.yml . Is there a way to add variables to /tmp/ip.yml and parse them in the playbook, maybe using some separator, without modifying the playbook each time a new NIC is plugged in?
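One way to make this scale (a sketch — ifcfg.j2 is a hypothetical template name, not from the question) is to replace the numbered variables with a list of per-NIC dictionaries and render one template per entry:

```yaml
# /tmp/ip.yml — one list entry per NIC instead of interface1/interface2/...
interfaces:
  - name: eth1
    bootproto: static
    ipaddress: 192.168.211.249
    netmask: 255.255.255.0
    gateway: 192.168.211.2
    dns1: 192.168.211.2
  - name: eth2
    bootproto: static
    ipaddress: 10.0.0.100
    netmask: 255.0.0.0
```

```yaml
# playbook — one loop handles any number of NICs
- include_vars: /tmp/ip.yml

- name: configure each interface
  template:
    src: ifcfg.j2
    dest: "/etc/sysconfig/network-scripts/ifcfg-{{ item.name }}"
  with_items: "{{ interfaces }}"
```

The static/dhcp branching moves inside ifcfg.j2 ({% if item.bootproto == 'static' %} … {% endif %}), so adding a NIC only means adding a list entry to /tmp/ip.yml.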
ansible
14
48,515
1
https://stackoverflow.com/questions/26606121/ansible-loop-over-variables
25,552,766
Change variable in Ansible template based on group
I've got an Ansible inventory file a bit like this: [es-masters] host1.my-network.com [es-slaves] host2.my-network.com host3.my-network.com [es:children] es-masters es-slaves I also have a Jinja2 template file that needs a certain value set to "true" if a host belongs to the "es-masters" group. I'm sure that there's a simple way of doing it but after some Googling and reading the documentation, I've drawn a blank. I'm looking for something simple and programmatic like this to go in the Jinja2 template: {% if hostvars[host][group] == "es-masters" %} node_master=true {% else %} node_master=false {% endif %} Any ideas?
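The magic variable group_names holds the list of groups the current host belongs to, which makes the template check a one-liner — a sketch:

```jinja
{% if 'es-masters' in group_names %}
node_master=true
{% else %}
node_master=false
{% endif %}
```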
templates, jinja2, ansible
14
43,594
4
https://stackoverflow.com/questions/25552766/change-variable-in-ansible-template-based-on-group
46,366,526
Appending files with Template Module in Ansible
So I have an ansible playbook that is using a Jinja2 template to create a configuration file. Every time I run the playbook it pulls in customer information from customers.yml and outputs the completed template into a 'stunnel.conf' file. The template works fine, but I am trying to find a way to append to the previous 'stunnel.conf' rather than overwriting it when using the Template module. I wish to add text to the beginning of the 'stunnel.conf' manually and not have it overwritten. Do you think this would be possible? Stunnel.conf ; GFAM - PBSTP [customer-GFAM-34074] cert = /etc/stunnel/stunnel.pem accept = 34094 connect = 35094 ; GUANFABANK - FXSIM [customer-GUANFABANK-34051] cert = /etc/stunnel/stunnel.pem accept = 34095 connect = 35095 ; ONEZERO2 - TRADESTREAM [customer-ONEZERO2-39124] cert = /etc/stunnel/stunnel.pem accept = 34096 connect = 35096 ; BTG-VELOCITY - PBSTP [customer-BTG-VELOCITY-42533] cert = /etc/stunnel/stunnel.pem accept = 34097 connect = 35097 Jinja2 Template {#CONTEXT: {{ customers }}#} {% set currentport = 34093%} {% for cust, config in customers.items() %} ; {{ cust }} - {{ config['type'] }} [customer-{{ cust }}-{{ config['accept'] }}] cert = {{ "/etc/stunnel/stunnel.pem" }} {#accept = {{ config['accept'] }}#} {#connect = {{ config['connect'] }}#} accept = {{ currentport + 1 }} connect = {{ currentport + 1001 }} {% set currentport = currentport + 1 %} {% endfor %} playbook.yml - include_vars: file: /home/vagrant/stunnelSimAnsPractice/roles/ns16/vars/customers.yml name: customers - template: src: /home/vagrant/stunnelSimAnsPractice/roles/ns16/templates/stunnel.j2 dest: /home/vagrant/stunnelSimAnsPractice/roles/ns16/output/stunnel.conf owner: root group: root
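If the hand-written head of the file must survive re-runs, one option (a sketch — the marker text is arbitrary) is to render the template through a lookup and let blockinfile own only a marked region of stunnel.conf:

```yaml
- blockinfile:
    path: /home/vagrant/stunnelSimAnsPractice/roles/ns16/output/stunnel.conf
    marker: "; {mark} ANSIBLE MANAGED CUSTOMER BLOCK"
    block: "{{ lookup('template', 'stunnel.j2') }}"
    create: yes
```

Everything outside the two marker comments (the manually added head included) is left untouched; only the block between them is rewritten on each run. Note that lookup('template', …) renders on the control machine, which is fine here since the role already runs locally.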
ansible, jinja2, templating
14
22,981
2
https://stackoverflow.com/questions/46366526/appending-files-with-template-module-in-ansible
57,657,645
pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]
I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, execution of the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules ( pam ) error, conversation failed . The same ansible command works well when executed against a virtual lab made out of Vagrant boxes. Ansible Command $ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv 123.123.123.123 | FAILED! => { "msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r" } SSHd Log # /var/log/secure Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
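The timeout message suggests sudo never received a password within Ansible's escalation window. Settings commonly adjusted while debugging this (a sketch — the values are illustrative, and whether they help depends on the PAM/sudo setup on the target) live in ansible.cfg:

```ini
; ansible.cfg — sketch of commonly tweaked knobs
[defaults]
timeout = 30              ; allow slow PAM stacks / DNS lookups before giving up

[privilege_escalation]
become_ask_pass = True    ; always prompt for the sudo password (same as -K)
```

It is also worth confirming on the target that the account actually has a local sudo password (sudo -l as the user) and checking /etc/pam.d/sudo for modules that differ from the working Vagrant lab.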
ssh, ansible, centos, pam, sshd
14
96,740
7
https://stackoverflow.com/questions/57657645/pam-unixsudoauth-conversation-failed-auth-could-not-identify-password-for
47,423,488
Vagrant + Ansible + Python3
I have a Vagrantfile that is simplified to: Vagrant.configure(2) do |config| config.vm.box = "ubuntu/xenial64" config.vm.boot_timeout = 900 config.vm.define 'srv' do |srv| srv.vm.provision 'ansible' do |ansible| ansible.compatibility_mode = '2.0' ansible.playbook = 'playbook.yml' end end end When I run vagrant provision , at the Gathering Facts stage, I get /usr/bin/python: not found because Ubuntu 16.04 by default has only Python 3 installed, not Python 2.x. I see several older posts about this. It seems recent versions of Ansible support using Python 3, but it must be configured via ansible_python_interpreter=/usr/bin/python3 in the hosts file or on the ansible command line. Is there any way I can specify this option in my Vagrantfile or in my playbook.yml file? I'm currently not using a hosts file, I'm not running ansible-playbook via command line, I'm running Ansible through the Vagrant integration. FYI, I'm using Ansible 2.4.1.0 and Vagrant 2.0.1, which are the latest versions as of this writing.
Vagrant + Ansible + Python3 I have a Vagrantfile that is simplified to: Vagrant.configure(2) do |config| config.vm.box = "ubuntu/xenial64" config.vm.boot_timeout = 900 config.vm.define 'srv' do |srv| srv.vm.provision 'ansible' do |ansible| ansible.compatibility_mode = '2.0' ansible.playbook = 'playbook.yml' end end end When I run vagrant provision , at the Gathering Facts stage, I get /usr/bin/python: not found because Ubuntu 16.04 by default has only Python 3 installed, not Python 2.x. I see several older posts about this. It seems recent versions of Ansible support using Python 3, but it must be configured via ansible_python_interpreter=/usr/bin/python3 in the hosts file or on the ansible command line. Is there any way I can specify this option in my Vagrantfile or in my playbook.yml file? I'm currently not using a hosts file, I'm not running ansible-playbook via command line, I'm running Ansible through the Vagrant integration. FYI, I'm using Ansible 2.4.1.0 and Vagrant 2.0.1, which are the latest versions as of this writing.
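The Vagrant Ansible provisioner can forward extra variables to the playbook run, so the interpreter can be set without a hosts file or CLI flags. A sketch based on the provisioner's `extra_vars` option:

```ruby
srv.vm.provision 'ansible' do |ansible|
  ansible.compatibility_mode = '2.0'
  ansible.playbook = 'playbook.yml'
  # Equivalent to `ansible-playbook -e ansible_python_interpreter=...`
  ansible.extra_vars = {
    ansible_python_interpreter: '/usr/bin/python3'
  }
end
```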
python, vagrant, ansible, vagrantfile
14
6,707
1
https://stackoverflow.com/questions/47423488/vagrant-ansible-python3
39,795,873
Ansible lineinfile - modify a line
I'm new to Ansible and trying to modify a line in /etc/default/grub to enable auditing. I need to add audit=1 within the quotes somewhere on a line that looks like: GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap biosdevname=0 net.ifnames=0 rhgb quiet net.ifnames=0" So far I've managed to delete the line and am only left with net.ifnames=0, audit=1 when I use something like lineinfile: state: present dest: /etc/default/grub backrefs: yes regexp: "net.ifnames=0" line: "\1 audit=1" Can this be done?
Ansible lineinfile - modify a line I'm new to Ansible and trying to modify a line in /etc/default/grub to enable auditing. I need to add audit=1 within the quotes somewhere on a line that looks like: GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap biosdevname=0 net.ifnames=0 rhgb quiet net.ifnames=0" So far I've managed to delete the line and am only left with net.ifnames=0, audit=1 when I use something like lineinfile: state: present dest: /etc/default/grub backrefs: yes regexp: "net.ifnames=0" line: "\1 audit=1" Can this be done?
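lineinfile with backrefs replaces the entire matched line with the `line` value, so the regexp must match — and capture — everything worth keeping, not just the `net.ifnames=0` fragment. A small Python demonstration of the same capture-and-append pattern (the regex is the point; the Ansible task follows):

```python
import re

line = ('GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root '
        'rd.lvm.lv=centos/swap biosdevname=0 net.ifnames=0 rhgb quiet '
        'net.ifnames=0"')

# Capture everything up to the closing quote as group 1,
# then put the quote back after appending audit=1:
fixed = re.sub(r'^(GRUB_CMDLINE_LINUX=".*)"$', r'\1 audit=1"', line)
print(fixed)
```

The equivalent lineinfile task would use `regexp: '^(GRUB_CMDLINE_LINUX=".*)"$'` with `line: '\1 audit=1"'` and `backrefs: yes`; note this is not idempotent by itself and will keep appending on repeated runs unless the regexp excludes lines already containing `audit=1`.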
ansible, ansible-2.x
14
12,771
2
https://stackoverflow.com/questions/39795873/ansible-lineinfile-modify-a-line
20,837,850
Tailor ansible roles to environment
I have a number of environments, that require a bunch of text files to be tailored in order for things like mule to speak to the right endpoints. For this environment, this works: ansible-playbook test03.yml The only difference between an environment (from ansible's perspective) is the information held in ./roles/esb/vars/main.yml. I've considered using svn to keep a vars/main.yml for each environment, so each time I need to configure an environment I check out roles and then vars/main.yml for that environment, before I run the command above. To me, not an elegant solution. How can I do this better? Directory structure ./test03.yml ./roles/esb/vars/main.yml ./roles/esb/tasks/main.yml ./roles/esb/templates/trp.properties.j2 ./test03.yml --- - hosts: test03-esb gather_facts: no roles: - esb ./roles/esb/vars/main.yml --- jndiProviderUrl: 'jnp://mqendpoint.company.com:1099' trp_endpoint_estask: '[URL] trp_endpoint_builderQ: 'jnp://mqendpoint.company.com:1099' ./roles/esb/tasks/main.yml --- - name: replace variables in templates template: src=trp.properties.j2 dest=/path/to/mule/deploy/conf/trp.properties ./roles/esb/templates/trp.properties.j2 trp.endpoint.estask={{ trp_endpoint_estask }} trp.endpoint.builderQ={{ trp_endpoint_builderQ }}
Tailor ansible roles to environment I have a number of environments, that require a bunch of text files to be tailored in order for things like mule to speak to the right endpoints. For this environment, this works: ansible-playbook test03.yml The only difference between an environment (from ansible's perspective) is the information held in ./roles/esb/vars/main.yml. I've considered using svn to keep a vars/main.yml for each environment, so each time I need to configure an environment I check out roles and then vars/main.yml for that environment, before I run the command above. To me, not an elegant solution. How can I do this better? Directory structure ./test03.yml ./roles/esb/vars/main.yml ./roles/esb/tasks/main.yml ./roles/esb/templates/trp.properties.j2 ./test03.yml --- - hosts: test03-esb gather_facts: no roles: - esb ./roles/esb/vars/main.yml --- jndiProviderUrl: 'jnp://mqendpoint.company.com:1099' trp_endpoint_estask: '[URL] trp_endpoint_builderQ: 'jnp://mqendpoint.company.com:1099' ./roles/esb/tasks/main.yml --- - name: replace variables in templates template: src=trp.properties.j2 dest=/path/to/mule/deploy/conf/trp.properties ./roles/esb/templates/trp.properties.j2 trp.endpoint.estask={{ trp_endpoint_estask }} trp.endpoint.builderQ={{ trp_endpoint_builderQ }}
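A common alternative to versioning `vars/main.yml` per environment is one inventory directory per environment, each carrying its own group_vars; the role itself stays environment-agnostic. A sketch (directory and endpoint names are illustrative, not from the original setup):

```yaml
# inventories/test03/hosts            -> defines the test03-esb group
# inventories/test03/group_vars/test03-esb.yml:
jndiProviderUrl: 'jnp://mqendpoint-test03.company.com:1099'
trp_endpoint_builderQ: 'jnp://mqendpoint-test03.company.com:1099'

# Run each environment with its own inventory:
#   ansible-playbook -i inventories/test03 test03.yml
```

Ansible then picks up the right variables purely from which inventory is passed, with no checkout juggling.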
roles, ansible
14
8,983
2
https://stackoverflow.com/questions/20837850/tailor-ansible-roles-to-environment
36,448,944
Is there an elegant way to check file integrity with md5 in ansible using md5 files fetched from server?
I have several files on a server that I need to download from an ansible playbook , but because the connection has a good chance of being interrupted I would like to check their integrity after download. I'm considering two approaches: Store the md5 of those files in ansible as vars Store the md5 of those files on the server as files with the extension .md5. Such a pair would look like: file.extension and file.extension.md5 . The first approach introduces overhead in maintaining the md5s in ansible. So every time someone adds a new file, he needs to make sure he adds the md5 in the right place. But as an advantage, there is a solution for this, using the built-in check from the get_url action in conjunction with checksum=md5 . E.g.: action: get_url: url=[URL] dest=/etc/foo.conf checksum=md5:66dffb5228a211e61d6d7ef4a86f5758 The second approach is more elegant and narrows the responsibility. When someone adds a new file on the server, he will make sure to add the .md5 as well and won't even need to use the ansible playbooks. Is there a way to use the checksum approach to match the md5 from a file?
Is there an elegant way to check file integrity with md5 in ansible using md5 files fetched from server? I have several files on a server that I need to download from an ansible playbook , but because the connection has a good chance of being interrupted I would like to check their integrity after download. I'm considering two approaches: Store the md5 of those files in ansible as vars Store the md5 of those files on the server as files with the extension .md5. Such a pair would look like: file.extension and file.extension.md5 . The first approach introduces overhead in maintaining the md5s in ansible. So every time someone adds a new file, he needs to make sure he adds the md5 in the right place. But as an advantage, there is a solution for this, using the built-in check from the get_url action in conjunction with checksum=md5 . E.g.: action: get_url: url=[URL] dest=/etc/foo.conf checksum=md5:66dffb5228a211e61d6d7ef4a86f5758 The second approach is more elegant and narrows the responsibility. When someone adds a new file on the server, he will make sure to add the .md5 as well and won't even need to use the ansible playbooks. Is there a way to use the checksum approach to match the md5 from a file?
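The comparison itself is simple: take the first whitespace-separated token of the .md5 sidecar (such files are commonly `<hex digest>` or `<hex digest>  <filename>`) and compare it to the computed digest. A minimal Python sketch of that logic:

```python
import hashlib

def md5_matches(data: bytes, md5_file_text: str) -> bool:
    """Check downloaded bytes against the digest stored in a .md5 file.

    Only the first whitespace-separated token of the sidecar is compared,
    so both bare-digest and 'digest  filename' formats work.
    """
    expected = md5_file_text.split()[0].strip().lower()
    return hashlib.md5(data).hexdigest() == expected

payload = b"example payload\n"
sidecar = hashlib.md5(payload).hexdigest() + "  file.extension\n"
print(md5_matches(payload, sidecar))        # True
print(md5_matches(b"corrupted", sidecar))   # False
```

In a playbook, one way (a sketch to verify against your Ansible version) is to fetch `file.extension.md5` with the uri module, register the result, and feed the first token of its body into get_url's `checksum: "md5:{{ ... }}"` parameter.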
md5, ansible, integrity, md5-file
14
28,475
6
https://stackoverflow.com/questions/36448944/is-there-an-elegant-way-to-check-file-integrity-with-md5-in-ansible-using-md5-fi
36,885,689
Set additional ansible module parameter only if variable is defined
I have a custom ansible module. def main(): module = AnsibleModule( argument_spec = dict( server = dict(required=True, type='str'), max_offset = dict(required=False, default=0.100, type='float') ), supports_check_mode = False ) # Write params into normal variables max_offset = module.params['max_offset'] server = module.params.get('server') I want to call it with an additional parameter only if a variable ntp.max_offset is defined. I don't know how to do this. So I tried this code: - name: GROUP::TEST ntptest: server="{{ hostvars[item][eth]['ipv4']['address'] }}" parameter: name: "max_offset" value: ntp.max_offset when: ntp.max_offset is defined register: modules_output with_items: "{{groups['ntp_servers']}}" when: server is not defined But unfortunately, it does not work.
Set additional ansible module parameter only if variable is defined I have a custom ansible module. def main(): module = AnsibleModule( argument_spec = dict( server = dict(required=True, type='str'), max_offset = dict(required=False, default=0.100, type='float') ), supports_check_mode = False ) # Write params into normal variables max_offset = module.params['max_offset'] server = module.params.get('server') I want to call it with an additional parameter only if a variable ntp.max_offset is defined. I don't know how to do this. So I tried this code: - name: GROUP::TEST ntptest: server="{{ hostvars[item][eth]['ipv4']['address'] }}" parameter: name: "max_offset" value: ntp.max_offset when: ntp.max_offset is defined register: modules_output with_items: "{{groups['ntp_servers']}}" when: server is not defined But unfortunately, it does not work.
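The usual trick for optional module parameters is the `omit` placeholder: when the variable is undefined, the parameter is dropped from the module call entirely, so the module's own default (0.100 here) applies. A sketch of the task rewritten this way:

```yaml
- name: GROUP::TEST
  ntptest:
    server: "{{ hostvars[item][eth]['ipv4']['address'] }}"
    # When ntp.max_offset is undefined, omit removes the parameter
    # entirely and the module falls back to its declared default.
    max_offset: "{{ ntp.max_offset | default(omit) }}"
  register: modules_output
  with_items: "{{ groups['ntp_servers'] }}"
```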
ansible
14
7,756
2
https://stackoverflow.com/questions/36885689/set-additional-ansible-module-parameter-only-if-variable-is-defined
42,760,061
I want to include another Jinja2 template in an Ansible context in Jinja2
I have an Ansible playbook that sets a lot of variables. One of the playbooks has this task: - name: create config file template: src: 'templates/main_config.j2' dest: "{{ tmp_dir }}/main_config.json" The template main_config.j2 writes strings that are defined as variables in the parent Ansible playbooks and tasks. I want to include another Jinja2 template based on the value of an Ansible variable. {% include "./templates/configurations.j2" %}, {% include "./templates/security.j2" %}, {% include './templates/' + {{ job }} + '_steps.j2' %} job is an Ansible variable set in a parent playbook. This is not working. What could be the problem?
I want to include another Jinja2 template in an Ansible context in Jinja2 I have an Ansible playbook that sets a lot of variables. One of the playbooks has this task: - name: create config file template: src: 'templates/main_config.j2' dest: "{{ tmp_dir }}/main_config.json" The template main_config.j2 writes strings that are defined as variables in the parent Ansible playbooks and tasks. I want to include another Jinja2 template based on the value of an Ansible variable. {% include "./templates/configurations.j2" %}, {% include "./templates/security.j2" %}, {% include './templates/' + {{ job }} + '_steps.j2' %} job is an Ansible variable set in a parent playbook. This is not working. What could be the problem?
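Inside `{% ... %}` Jinja is already in expression context, so `{{ job }}` must not be nested there — the variable is used directly. A sketch of the corrected includes:

```jinja
{% include "./templates/configurations.j2" %}
{% include "./templates/security.j2" %}
{# use the variable bare; ~ coerces operands to strings and concatenates #}
{% include './templates/' ~ job ~ '_steps.j2' %}
```

`'...' + job + '...'` also works when `job` is already a string; `~` is simply the safer concatenation operator.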
ansible, jinja2, ansible-template
14
28,075
1
https://stackoverflow.com/questions/42760061/i-want-to-include-another-jinja2-template-in-an-ansible-context-in-jinja2
51,015,315
Ansible strip white space
When I try to run some commands on nxos devices, the output has a white space at the end. I have to compare the output to an existing variable list. The whitespace at the end is causing the comparison to go false. How to make use of .strip() function in a list of strings? - name: Current TACACS server host before nxos_command: commands: - sh run | include 'tacacs-server host' register: runconfserafter - debug: var: runconfserafter The output of this comes up like this: "stdout_lines": [ [ "tacacs-server host 1.1.1.1 key 7 \"HelloWorld\" ", "tacacs-server host 2.2.2.2 key 7 \"HelloWorld\"" ], ] When I compare this line with my desired variables, I can't get it matched because of the white space on the first line at the end.
Ansible strip white space When I try to run some commands on nxos devices, the output has a white space at the end. I have to compare the output to an existing variable list. The whitespace at the end is causing the comparison to go false. How to make use of .strip() function in a list of strings? - name: Current TACACS server host before nxos_command: commands: - sh run | include 'tacacs-server host' register: runconfserafter - debug: var: runconfserafter The output of this comes up like this: "stdout_lines": [ [ "tacacs-server host 1.1.1.1 key 7 \"HelloWorld\" ", "tacacs-server host 2.2.2.2 key 7 \"HelloWorld\"" ], ] When I compare this line with my desired variables, I can't get it matched because of the white space on the first line at the end.
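Applied to a list, `.strip()` just maps over the elements. A minimal Python illustration with the same data:

```python
lines = [
    'tacacs-server host 1.1.1.1 key 7 "HelloWorld" ',   # trailing space
    'tacacs-server host 2.2.2.2 key 7 "HelloWorld"',
]

# strip() removes leading/trailing whitespace from each element
cleaned = [line.strip() for line in lines]
print(cleaned)
```

In an Ansible expression the equivalent is the Jinja `trim` filter mapped over the list, e.g. `{{ runconfserafter.stdout_lines[0] | map('trim') | list }}` (for nxos_command, `stdout_lines[0]` is the inner list of output lines for the first command).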
ansible, jinja2
14
46,957
1
https://stackoverflow.com/questions/51015315/ansible-strip-white-space
38,259,422
In Ansible, How to connect to windows host?
I have been stuck with the Ansible Windows modules. I am just trying to ping a Windows machine, but I get a 'connect timeout'. hosts [windows] 192.168.1.13 group_vars/windows.yaml ansible_user: raja ansible_password: myPassword ansible_port: 5986 ansible_connection: winrm ansible_winrm_server_cert_validation: ignore And while I run: ansible windows -vvv -i hosts -m win_ping Using /etc/ansible/ansible.cfg as config file <192.168.1.13> ESTABLISH WINRM CONNECTION FOR USER: raja on PORT 5986 TO 192.168.1.13 192.168.1.13 | UNREACHABLE! => { "changed": false, "msg": "ssl: HTTPSConnectionPool(host='192.168.1.13', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fcb12024a90>, 'Connection to 192.168.1.13 timed out. (connect timeout=30)'))", "unreachable": true } However I can ping that windows machine using ping 192.168.1.13
In Ansible, How to connect to windows host? I have been stuck with the Ansible Windows modules. I am just trying to ping a Windows machine, but I get a 'connect timeout'. hosts [windows] 192.168.1.13 group_vars/windows.yaml ansible_user: raja ansible_password: myPassword ansible_port: 5986 ansible_connection: winrm ansible_winrm_server_cert_validation: ignore And while I run: ansible windows -vvv -i hosts -m win_ping Using /etc/ansible/ansible.cfg as config file <192.168.1.13> ESTABLISH WINRM CONNECTION FOR USER: raja on PORT 5986 TO 192.168.1.13 192.168.1.13 | UNREACHABLE! => { "changed": false, "msg": "ssl: HTTPSConnectionPool(host='192.168.1.13', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fcb12024a90>, 'Connection to 192.168.1.13 timed out. (connect timeout=30)'))", "unreachable": true } However I can ping that windows machine using ping 192.168.1.13
ansible, ansible-2.x
14
54,235
5
https://stackoverflow.com/questions/38259422/in-ansible-how-to-connect-to-windows-host
62,714,153
Does Ansible shell module need python on target server?
I have a very basic playbook that simply runs a script using the shell module on the target remote host. However, it fails, stating python interpreter not found . Installing python on each target is not a solution I can pursue. Is it possible to use my Ansible automation to run the playbook and execute the script using the shell module without the python dependency?
Does Ansible shell module need python on target server? I have a very basic playbook that simply runs a script using the shell module on the target remote host. However, it fails, stating python interpreter not found . Installing python on each target is not a solution I can pursue. Is it possible to use my Ansible automation to run the playbook and execute the script using the shell module without the python dependency?
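`shell`, like almost every module, ships a Python wrapper to the target before running, which is why it needs an interpreter there. The `raw` and `script` modules bypass that machinery and only need a working SSH connection. A sketch (fact gathering must be disabled too, since it also runs Python):

```yaml
- hosts: all
  gather_facts: no          # the setup module itself needs Python
  tasks:
    - name: Copy a local script to the target and run it (no Python needed)
      script: files/myscript.sh

    - name: Or run a bare command over SSH
      raw: /tmp/myscript.sh
```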
python, shell, ansible
14
15,481
2
https://stackoverflow.com/questions/62714153/does-ansible-shell-module-need-python-on-target-server
49,678,997
Order of executing roles, tasks, pre_tasks, post_task. Can tasks be defined in playbook?
In a playbook we can define roles , pre_tasks , and post_tasks . Can we also define tasks ? My second question is about the order in which these things are executed. I know the order is: pre_tasks -> roles -> post_tasks . However, when are tasks executed?
Order of executing roles, tasks, pre_tasks, post_task. Can tasks be defined in playbook? In a playbook we can define roles , pre_tasks , and post_tasks . Can we also define tasks ? My second question is about the order in which these things are executed. I know the order is: pre_tasks -> roles -> post_tasks . However, when are tasks executed?
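A play can have both roles: and tasks:, and tasks run after the roles. A minimal play illustrating the full order:

```yaml
- hosts: all
  pre_tasks:
    - debug: msg="1. pre_tasks run first"
  roles:
    - myrole                       # 2. role tasks run next
  tasks:
    - debug: msg="3. tasks run after all roles"
  post_tasks:
    - debug: msg="4. post_tasks run last"
```

So the complete ordering is pre_tasks -> roles -> tasks -> post_tasks (with handlers flushed after each of those sections).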
ansible, task
14
15,854
1
https://stackoverflow.com/questions/49678997/order-of-executing-roles-tasks-pre-tasks-post-task-can-tasks-be-defined-in-p
31,396,130
Ansible: install multiple Python packages on a single session
One of my playbooks contains a task that installs basic Python packages: --- - name: "Install Python packages: {{ python_packages_to_install }}" sudo: true pip: name={{ item }} with_items: python_packages_to_install With the following list of packages: - include: python_basics.yaml vars: python_packages_to_install: - virtualenv - pss - requests - comment-builder - boto - ansible - uwsgitop - gitpull - ipython The task works correctly and installs the packages: TASK: [common | Install Python packages: ['virtualenv', 'pss', 'requests', 'comment-builder', 'boto', 'ansible', 'uwsgitop', 'gitpull', 'ipython']] *** ok: [push-prod-01] => (item=virtualenv) ok: [push-prod-01] => (item=pss) ok: [push-prod-01] => (item=requests) ok: [push-prod-01] => (item=comment-builder) ok: [push-prod-01] => (item=boto) ok: [push-prod-01] => (item=ansible) ok: [push-prod-01] => (item=uwsgitop) ok: [push-prod-01] => (item=gitpull) changed: [push-prod-01] => (item=ipython) The problem is that each line is executed using a consecutive SSH command, instead of installing all the packages in a single call. Is there a way to install multiple Python packages on an Ansible pip command?
Ansible: install multiple Python packages on a single session One of my playbooks contains a task that installs basic Python packages: --- - name: "Install Python packages: {{ python_packages_to_install }}" sudo: true pip: name={{ item }} with_items: python_packages_to_install With the following list of packages: - include: python_basics.yaml vars: python_packages_to_install: - virtualenv - pss - requests - comment-builder - boto - ansible - uwsgitop - gitpull - ipython The task works correctly and installs the packages: TASK: [common | Install Python packages: ['virtualenv', 'pss', 'requests', 'comment-builder', 'boto', 'ansible', 'uwsgitop', 'gitpull', 'ipython']] *** ok: [push-prod-01] => (item=virtualenv) ok: [push-prod-01] => (item=pss) ok: [push-prod-01] => (item=requests) ok: [push-prod-01] => (item=comment-builder) ok: [push-prod-01] => (item=boto) ok: [push-prod-01] => (item=ansible) ok: [push-prod-01] => (item=uwsgitop) ok: [push-prod-01] => (item=gitpull) changed: [push-prod-01] => (item=ipython) The problem is that each line is executed using a consecutive SSH command, instead of installing all the packages in a single call. Is there a way to install multiple Python packages on an Ansible pip command?
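Recent versions of the pip module accept a list for `name`, which collapses the loop into a single `pip install a b c ...` call — one SSH round-trip instead of one per package. A sketch (verify against the module documentation for your Ansible version):

```yaml
- name: "Install Python packages: {{ python_packages_to_install }}"
  sudo: true
  pip:
    # Passing the whole list runs a single pip invocation
    name: "{{ python_packages_to_install }}"
```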
pip, ansible
14
21,939
3
https://stackoverflow.com/questions/31396130/ansible-install-multiple-python-packages-on-a-single-session
21,413,613
Convert Ansible variable from Unicode to ASCII
I'm getting the output of a command on the remote system and storing it in a variable. It is then used to fill in a file template which gets placed on the system. - name: Retrieve Initiator Name command: /usr/sbin/iscsi-iname register: iscsiname - name: Setup InitiatorName File template: src=initiatorname.iscsi.template dest=/etc/iscsi/initiatorname.iscsi The initiatorname.iscsi.template file contains: InitiatorName={{ iscsiname.stdout_lines }} When I run it however, I get a file with the following: InitiatorName=[u'iqn.2005-03.org.open-iscsi:2bb08ec8f94'] What I want: InitiatorName=iqn.2005-03.org.open-iscsi:2bb08ec8f94 What am I doing wrong? I realize I could write this to the file with an "echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi" but that seems like an un-Ansible way of doing it. Thanks in advance.
Convert Ansible variable from Unicode to ASCII I'm getting the output of a command on the remote system and storing it in a variable. It is then used to fill in a file template which gets placed on the system. - name: Retrieve Initiator Name command: /usr/sbin/iscsi-iname register: iscsiname - name: Setup InitiatorName File template: src=initiatorname.iscsi.template dest=/etc/iscsi/initiatorname.iscsi The initiatorname.iscsi.template file contains: InitiatorName={{ iscsiname.stdout_lines }} When I run it however, I get a file with the following: InitiatorName=[u'iqn.2005-03.org.open-iscsi:2bb08ec8f94'] What I want: InitiatorName=iqn.2005-03.org.open-iscsi:2bb08ec8f94 What am I doing wrong? I realize I could write this to the file with an "echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi" but that seems like an un-Ansible way of doing it. Thanks in advance.
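The `[u'...']` output is not an encoding problem — it is the repr of a one-element list, because `stdout_lines` is a list of lines. Rendering a plain string attribute avoids it entirely; a sketch of the corrected template line:

```jinja
InitiatorName={{ iscsiname.stdout }}
{# or, if the command could emit several lines:
   InitiatorName={{ iscsiname.stdout_lines | first }} #}
```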
unicode, ansible
14
41,751
3
https://stackoverflow.com/questions/21413613/convert-ansible-variable-from-unicode-to-ascii
36,447,118
Ansible Amazon EC2. The key pair does not exist
I would like to create and provision Amazon EC2 machines with the help of Ansible. Now, I get the following error: fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Instance creation failed => InvalidKeyPair.NotFound: The key pair '~/.keys/EC2-Kibi-Enterprise-Deployment.pem' does not exist"} But the .pem key exists: $ ls -lh ~/.keys/EC2-Kibi-Enterprise-Deployment.pem -r-------- 1 sergey sergey 1.7K Apr 6 09:56 /home/sergey/.keys/EC2-Kibi-Enterprise-Deployment.pem And it was created in the EU (Ireland) region. Here is my playbook: -- - name: Setup servers on Amazon EC2 machines hosts: localhost gather_facts: no tasks: - include_vars: group_vars/all/ec2_vars.yml ### Create Amazon EC2 instances - name: Amazon EC2 | Create instances ec2: count: "{{ count }}" key_name: "{{ key }}" region: "{{ region }}" zone: "{{ zone }}" group: "{{ group }}" instance_type: "{{ machine }}" image: "{{ image }}" wait: true wait_timeout: 500 #vpc_subnet_id: "{{ subnet }}" #assign_public_ip: yes register: ec2 - name: Amazon EC2 | Wait for SSH to come up wait_for: host: "{{ item.public_ip }}" port: 22 delay: 10 timeout: 60 state: started with_items: "{{ ec2.instances }}" - name: Amazon EC2 | Add hosts to the kibi_servers in-memory inventory group add_host: hostname={{ item.public_ip }} groupname=kibi_servers with_items: "{{ ec2.instances }}" ### END ### Provision roles - name: Amazon EC2 | Provision new instances hosts: kibi_servers become: yes roles: - common - java - elasticsearch - logstash - nginx - kibi - supervisor ### END And my var file: count: 2 region: eu-west-1 zone: eu-west-1a group: default image: ami-d1ec01a6 machine: t2.medium subnet: subnet-3a2aa952 key: ~/.keys/EC2-Kibi-Enterprise-Deployment.pem What is wrong with the .pem file here?
Ansible Amazon EC2. The key pair does not exist I would like to create and provision Amazon EC2 machines with the help of Ansible. Now, I get the following error: fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Instance creation failed => InvalidKeyPair.NotFound: The key pair '~/.keys/EC2-Kibi-Enterprise-Deployment.pem' does not exist"} But the .pem key exists: $ ls -lh ~/.keys/EC2-Kibi-Enterprise-Deployment.pem -r-------- 1 sergey sergey 1.7K Apr 6 09:56 /home/sergey/.keys/EC2-Kibi-Enterprise-Deployment.pem And it was created in the EU (Ireland) region. Here is my playbook: -- - name: Setup servers on Amazon EC2 machines hosts: localhost gather_facts: no tasks: - include_vars: group_vars/all/ec2_vars.yml ### Create Amazon EC2 instances - name: Amazon EC2 | Create instances ec2: count: "{{ count }}" key_name: "{{ key }}" region: "{{ region }}" zone: "{{ zone }}" group: "{{ group }}" instance_type: "{{ machine }}" image: "{{ image }}" wait: true wait_timeout: 500 #vpc_subnet_id: "{{ subnet }}" #assign_public_ip: yes register: ec2 - name: Amazon EC2 | Wait for SSH to come up wait_for: host: "{{ item.public_ip }}" port: 22 delay: 10 timeout: 60 state: started with_items: "{{ ec2.instances }}" - name: Amazon EC2 | Add hosts to the kibi_servers in-memory inventory group add_host: hostname={{ item.public_ip }} groupname=kibi_servers with_items: "{{ ec2.instances }}" ### END ### Provision roles - name: Amazon EC2 | Provision new instances hosts: kibi_servers become: yes roles: - common - java - elasticsearch - logstash - nginx - kibi - supervisor ### END And my var file: count: 2 region: eu-west-1 zone: eu-west-1a group: default image: ami-d1ec01a6 machine: t2.medium subnet: subnet-3a2aa952 key: ~/.keys/EC2-Kibi-Enterprise-Deployment.pem What is wrong with the .pem file here?
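`key_name` expects the name of the key pair as registered in EC2, not a path to the local .pem file — the .pem is only used later, when SSHing into the instances. Assuming the pair was uploaded under the same name as the file (an assumption to verify in the EC2 console), the var would be:

```yaml
# EC2 key pair *name* -- no path, no .pem extension
key: EC2-Kibi-Enterprise-Deployment
```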
amazon-ec2, ansible
14
18,264
4
https://stackoverflow.com/questions/36447118/ansible-amazon-ec2-the-key-pair-does-not-exist
22,960,944
running mkvirtualenv using ansible
I am provisioning a machine using ansible. I managed to install virtualenv and virtualenvwrapper fine on the vm. However, I can't seem to create a virtualenv on the vm. I am trying using - name: create virtualenv test shell: > executable=/bin/zsh source which virtualenvwrapper.sh && mkvirtualenv test register: run_cmd and - name: create virtualenv test action: command mkvirtualenv test but no luck. Any ideas?
running mkvirtualenv using ansible I am provisioning a machine using ansible. I managed to install virtualenv and virtualenvwrapper fine on the vm. However, I can't seem to create a virtualenv on the vm. I am trying using - name: create virtualenv test shell: > executable=/bin/zsh source which virtualenvwrapper.sh && mkvirtualenv test register: run_cmd and - name: create virtualenv test action: command mkvirtualenv test but no luck. Any ideas?
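`mkvirtualenv` is a shell function defined by virtualenvwrapper, so it only exists after sourcing the script inside the same shell invocation — the bare `command` call can never find it, and `source` needs the script's actual path. A sketch (the `$(which ...)` resolution and default WORKON_HOME location are assumptions to adapt):

```yaml
- name: create virtualenv test
  shell: source $(which virtualenvwrapper.sh) && mkvirtualenv test
  args:
    executable: /bin/zsh
    # makes the task idempotent, assuming the default WORKON_HOME
    creates: "{{ ansible_env.HOME }}/.virtualenvs/test"
```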
virtualenv, ansible
14
4,153
3
https://stackoverflow.com/questions/22960944/running-mkvirtualenv-using-ansible
60,729,009
Ansible: Check if a variable contains a list or dictionary
Sometimes, roles need different mandatory variables that need to be defined when calling them. For instance - hosts: localhost remote_user: root roles: - role: ansible-aks name: myaks resource_group: myresourcegroup Inside the role, it can be controlled like this: - name: Assert AKS Variables assert: that: "{{ item }} is defined" msg: "{{ item }} is not defined" with_items: - name - resource_group I want to pass a list or dictionary to my role instead of a string. How can I assert that a variable contains a dictionary or a list?
Ansible: Check if a variable contains a list or dictionary Sometimes, roles need different mandatory variables that need to be defined when calling them. For instance - hosts: localhost remote_user: root roles: - role: ansible-aks name: myaks resource_group: myresourcegroup Inside the role, it can be controlled like this: - name: Assert AKS Variables assert: that: "{{ item }} is defined" msg: "{{ item }} is not defined" with_items: - name - resource_group I want to pass a list or dictionary to my role instead of a string. How can I assert that a variable contains a dictionary or a list?
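Jinja ships type tests that assert can use directly: `mapping` for dicts, and `iterable` combined with `not string` for lists (strings would otherwise pass the iterable test too). A sketch:

```yaml
- name: Assert AKS variables are the expected types
  assert:
    that:
      - resource_group is defined
      - resource_group is mapping                   # dict check
      - name is iterable and name is not string     # list check
    msg: "resource_group must be a dict and name must be a list"
```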
ansible
14
26,974
1
https://stackoverflow.com/questions/60729009/ansible-check-if-a-variable-contains-a-list-or-dictionary
40,625,832
Using Ansible docker_container, how can I display standard out? (stdout)
I'm using Ansible and its module docker_container to launch tape unit test containers in nodejs. This is nice because I don't have to have npm mess up my host, my only dev box dependency is python and docker. I need to be able to see stdout to see that tests have been run. However, Docker's --attach option is not exposed in docker_container and I cannot find any way to have stdout print out from the ansible launch of the container. I can go back to bash scripts to launch docker containers but I'd rather not... How can I display a container's standard out with Ansible's docker_container module?
Using Ansible docker_container, how can I display standard out? (stdout) I'm using Ansible and its module docker_container to launch tape unit test containers in nodejs. This is nice because I don't have to have npm mess up my host, my only dev box dependency is python and docker. I need to be able to see stdout to see that tests have been run. However, Docker's --attach option is not exposed in docker_container and I cannot find any way to have stdout print out from the ansible launch of the container. I can go back to bash scripts to launch docker containers but I'd rather not... How can I display a container's standard out with Ansible's docker_container module?
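One approach (an assumption to verify against your docker_container module version): run with `detach: false` so the task waits for the container to exit, then read the captured log from the registered result. A sketch with a hypothetical test image name:

```yaml
- name: Run tape tests and wait for them to finish
  docker_container:
    name: tape-tests
    image: my-tests:latest      # hypothetical image
    detach: false               # block until the container exits
  register: tests

- name: Show the container's stdout
  debug:
    var: tests.container.Output   # field name may vary by module version
```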
docker, ansible
14
12,770
3
https://stackoverflow.com/questions/40625832/using-ansible-docker-container-how-can-i-display-standard-out-stdout
24,557,042
Include vars in role tasks
In a role, I am trying to load some variables from another role. (If that role was included in the current play, the variables would be accessible, but it's not so they're not.) So I tried this: - include_vars: ../../another_role/defaults/main.yml But it doesn't work, no error but the variables are still undefined. So I tried to be smart and symlink the file to vars/another_role_defaults.yml in the role where I want to use the vars and then include it like this: - include_vars: another_role_defaults.yml Same result, no error (why doesn't it throw an error if the file cannot be found??) but variables are still undefined. I tried this as well, for good measure, but still no cigar. - include_vars: ../vars/another_role_defaults.yml What am I doing wrong?
Include vars in role tasks In a role, I am trying to load some variables from another role. (If that role was included in the current play, the variables would be accessible, but it's not so they're not.) So I tried this: - include_vars: ../../another_role/defaults/main.yml But it doesn't work, no error but the variables are still undefined. So I tried to be smart and symlink the file to vars/another_role_defaults.yml in the role where I want to use the vars and then include it like this: - include_vars: another_role_defaults.yml Same result, no error (why doesn't it throw an error if the file cannot be found??) but variables are still undefined. I tried this as well, for good measure, but still no cigar. - include_vars: ../vars/another_role_defaults.yml What am I doing wrong?
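Relative include_vars paths resolve against the current role's own `vars/` directory search path, which is why the cross-role attempts silently load nothing. Anchoring the path on `playbook_dir` (or `role_path`) sidesteps that. A sketch assuming the standard `roles/` layout next to the playbook:

```yaml
- include_vars: "{{ playbook_dir }}/roles/another_role/defaults/main.yml"
```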
ansible
14
33,868
3
https://stackoverflow.com/questions/24557042/include-vars-in-role-tasks
43,364,956
Ansible Cloudwatch rule reports failed invocations
I have created an AWS lambda that works well when I test it and when I create a cron job manually through a cloudwatch rule. It reports metrics as invocations (not failed) and also logs with details about the execution. Then I decided to remove that manually created cloudwatch rule in order to create one with ansible. - name: Create lambda service. lambda: name: "{{ item.name }}" state: present zip_file: "{{ item.zip_file }}" runtime: 'python2.7' role: 'arn:aws:iam::12345678901:role/lambda_ecr_delete' handler: 'main.handler' region: 'eu-west-2' environment_variables: "{{ item.env_vars }}" with_items: - name: lamda_ecr_cleaner zip_file: assets/scripts/ecr-cleaner.zip env_vars: 'DRYRUN': '0' 'IMAGES_TO_KEEP': '20' 'REGION': 'eu-west-2' register: new_lambda - name: Schedule a cloudwatch event. cloudwatchevent_rule: name: ecr_delete schedule_expression: "rate(1 day)" description: Delete old images in ecr repo. targets: - id: ecr_delete arn: "{{ item.configuration.function_arn }}" with_items: "{{ new_lambda.results }}" That creates almost the exact same cloudwatch rule. The only difference I can see with the manually created one is in the targets, the lambda version / alias is set to Default when created manually while it is set to version, with a corresponding version number when created with ansible. The cloudwatch rule created with ansible has only failed invocations. Any idea why this is? I can't see any logs. Is there a way I can set the version to Default as well with the cloudwatchevent_rule module in ansible?
Ansible Cloudwatch rule reports failed invocations I have created an AWS lambda that works well when I test it and when I create a cron job manually through a cloudwatch rule. It reports metrics as invocations (not failed) and also logs with details about the execution. Then I decided to remove that manually created cloudwatch rule in order to create one with ansible. - name: Create lambda service. lambda: name: "{{ item.name }}" state: present zip_file: "{{ item.zip_file }}" runtime: 'python2.7' role: 'arn:aws:iam::12345678901:role/lambda_ecr_delete' handler: 'main.handler' region: 'eu-west-2' environment_variables: "{{ item.env_vars }}" with_items: - name: lamda_ecr_cleaner zip_file: assets/scripts/ecr-cleaner.zip env_vars: 'DRYRUN': '0' 'IMAGES_TO_KEEP': '20' 'REGION': 'eu-west-2' register: new_lambda - name: Schedule a cloudwatch event. cloudwatchevent_rule: name: ecr_delete schedule_expression: "rate(1 day)" description: Delete old images in ecr repo. targets: - id: ecr_delete arn: "{{ item.configuration.function_arn }}" with_items: "{{ new_lambda.results }}" That creates almost the exact same cloudwatch rule. The only difference I can see with the manually created one is in the targets, the lambda version / alias is set to Default when created manually while it is set to version, with a corresponding version number when created with ansible. The cloudwatch rule created with ansible has only failed invocations. Any idea why this is? I can't see any logs. Is there a way I can set the version to Default as well with the cloudwatchevent_rule module in ansible?
amazon-web-services, ansible, amazon-cloudwatch
14
14,429
3
https://stackoverflow.com/questions/43364956/ansible-cloudwatch-rule-reports-failed-invocations
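A likely missing piece here (hedged — not confirmed by the question) is the resource policy that lets CloudWatch Events invoke the function; the console adds it when you wire the rule by hand, but `cloudwatchevent_rule` does not. A sketch using the `lambda_policy` module (the rule ARN and account id are placeholders):

```yaml
- name: Allow the event rule to invoke the lambda
  lambda_policy:
    state: present
    function_name: "{{ item.configuration.function_name }}"
    statement_id: ecr-delete-event
    action: lambda:InvokeFunction
    principal: events.amazonaws.com
    source_arn: "arn:aws:events:eu-west-2:123456789012:rule/ecr_delete"  # placeholder
  with_items: "{{ new_lambda.results }}"
```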
23,115,619
How to use ansible with two factor authentication?
I have enabled two factor authentication for ssh using duosecurity (using this playbook [URL] ). How can I use ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two factor code before the playbook is run. Disabling two factor for the deployment user is a possible solution but creates a security issue which I would like to avoid.
How to use ansible with two factor authentication? I have enabled two factor authentication for ssh using duosecurity (using this playbook [URL] ). How can I use ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two factor code before the playbook is run. Disabling two factor for the deployment user is a possible solution but creates a security issue which I would like to avoid.
ssh, ansible, two-factor-authentication
14
21,125
3
https://stackoverflow.com/questions/23115619/how-to-use-ansible-with-two-factor-authentication
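One commonly used approach for this (a sketch, not from the question): let OpenSSH multiplex connections so the operator answers the Duo prompt once and Ansible reuses the already-authenticated session for the rest of the run.

```ini
# ansible.cfg sketch: persist the first (interactively authenticated)
# SSH connection and reuse it for subsequent tasks.
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=30m
```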
20,252,057
Using ansible, how would I delete all items except for a specified set in a directory?
I can, of course, use a shell command, but I was hoping there was an ansible way to do this, so that I can get the "changed/unchanged" response.
Using ansible, how would I delete all items except for a specified set in a directory? I can, of course, use a shell command, but I was hoping there was an ansible way to do this, so that I can get the "changed/unchanged" response.
ansible
14
17,472
3
https://stackoverflow.com/questions/20252057/using-ansible-how-would-i-delete-all-items-except-for-a-specified-set-in-a-dire
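One idempotent pattern for this (a sketch — the path and keep list are invented examples): register the directory contents with `find`, then remove anything not on the keep list, so Ansible reports changed only when something was actually deleted.

```yaml
- name: List everything in the directory
  find:
    paths: /srv/app/releases   # example path
    file_type: any
    hidden: true               # include dotfiles in the sweep
  register: found

- name: Remove items not in the keep list
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ found.files }}"
  when: item.path | basename not in keep
  vars:
    keep: ['current', 'previous']   # example keep list
```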
52,873,001
Ansible 2.7 include_tasks no longer accepts a variable
Each of the roles in my playbook ends with this code: - include_tasks: includes/log_role_completion.yml this_role={{ role_name }} Which is used (at the end of the playbook) to write a log on the target server, indicating when a PB was started (there's a task at the start of the PB for that), what roles ran, and when (the start and end-times are the same, but that's for another day). The problem is that with Ansible 2.7, I'm now getting an error caused by the line above: - include_tasks: includes/log_role_completion.yml this_role="{{ role_name }}" ^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance: This worked up until 2.7, and is useful - I'd hate to have to lose it. I've tried putting quotes around the "includes...}}" part of the line, to no avail. PS I know that Ansible can write logs - I find this more useful. Also, I'm aware that include_tasks is marked 'preview', so may change, but I can't find release notes to tell me if it has.
Ansible 2.7 include_tasks no longer accepts a variable Each of the roles in my playbook ends with this code: - include_tasks: includes/log_role_completion.yml this_role={{ role_name }} Which is used (at the end of the playbook) to write a log on the target server, indicating when a PB was started (there's a task at the start of the PB for that), what roles ran, and when (the start and end-times are the same, but that's for another day). The problem is that with Ansible 2.7, I'm now getting an error caused by the line above: - include_tasks: includes/log_role_completion.yml this_role="{{ role_name }}" ^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance: This worked up until 2.7, and is useful - I'd hate to have to lose it. I've tried putting quotes around the "includes...}}" part of the line, to no avail. PS I know that Ansible can write logs - I find this more useful. Also, I'm aware that include_tasks is marked 'preview', so may change, but I can't find release notes to tell me if it has.
ansible
14
17,347
1
https://stackoverflow.com/questions/52873001/ansible-2-7-include-tasks-no-longer-accepts-a-variable
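The fix usually reported for this error (a sketch): Ansible 2.7 stopped accepting inline `key=value` parameters on `include_tasks`, so the variable moves into a `vars:` block on the task.

```yaml
- include_tasks: includes/log_role_completion.yml
  vars:
    this_role: "{{ role_name }}"
```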
50,855,206
UNREACHABLE error while running an Ansible playbook
I do have access to ssh into the destination machine, and it works, but whenever I run this playbook, I get this error output: sudo ansible-playbook ansible-playbook-test.yml PLAY [openstack] ***************************************************************************************************************************************************************************************** TASK [Gathering Facts] *********************************************************************************************************************************************************************************** fatal: [amachine]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).\r\n", "unreachable": true} to retry, use: --limit @/blah/ansible-play/ansible-playbook-test.retry PLAY RECAP *********************************************************************************************************************************************************************************************** amachine : ok=0 changed=0 unreachable=1 failed=0 My playbook is as simple as this: --- # hosts could have been "remote" or "all" as well - hosts: openstack tasks: - name: test connection ping: remote_user: djuarezg vars: ansible_ssh_extra_args: '-K -o ControlPath=none' - hosts: openstack tasks: - name: Create Swarm cluster command: mkdir djg vars: ansible_ssh_extra_args: '-K -o ControlPath=none' I was trying to use ansible_ssh_extra_args: '-K -o ControlPath=none' to see if it was able to forward the Kerberos ticket, but any kind of connection is enough.
UNREACHABLE error while running an Ansible playbook I do have access to ssh into the destination machine, and it works, but whenever I run this playbook, I get this error output: sudo ansible-playbook ansible-playbook-test.yml PLAY [openstack] ***************************************************************************************************************************************************************************************** TASK [Gathering Facts] *********************************************************************************************************************************************************************************** fatal: [amachine]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).\r\n", "unreachable": true} to retry, use: --limit @/blah/ansible-play/ansible-playbook-test.retry PLAY RECAP *********************************************************************************************************************************************************************************************** amachine : ok=0 changed=0 unreachable=1 failed=0 My playbook is as simple as this: --- # hosts could have been "remote" or "all" as well - hosts: openstack tasks: - name: test connection ping: remote_user: djuarezg vars: ansible_ssh_extra_args: '-K -o ControlPath=none' - hosts: openstack tasks: - name: Create Swarm cluster command: mkdir djg vars: ansible_ssh_extra_args: '-K -o ControlPath=none' I was trying to use ansible_ssh_extra_args: '-K -o ControlPath=none' to see if it was able to forward the Kerberos ticket, but any kind of connection is enough.
ansible
14
51,495
5
https://stackoverflow.com/questions/50855206/unreachable-error-while-running-an-ansible-playbook
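One frequent cause of this symptom (an assumption, not confirmed by the question): running `ansible-playbook` under `sudo` switches to root's SSH keys and agent, so the key that works interactively is never offered. A sketch that pins the connection details in inventory instead, letting the playbook run without sudo (the key path is an example):

```yaml
# inventory.yml sketch
all:
  hosts:
    amachine:
      ansible_user: djuarezg
      ansible_ssh_private_key_file: ~/.ssh/id_rsa   # example key path
```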
47,567,580
Inline encrypted variable not JSON serializable
I'm trying to understand how to encrypt single variables with vault. First I encrypt the string with ansible-vault encrypt_string -n -p , then I write the output into my playbook. When I execute the playbook it says that the decrypted string isn't JSON serializable. Encrypted string: "inline_name" I also tried it with inline_name and inlinename , every time with the same result. My playbook: --- - name: Build System hosts: dev tasks: - name: Create mysql_db: state: present name: !vault | $ANSIBLE_VAULT;1.1;AES256 39613261386438623937643062636166663638633062323939343734306334346537613233623064 3761633832326365356231633338396132646532313861350a316666376566616633376238313636 39343833306462323534623238333639663734626662623731666239366566643636386261643164 3861363730336331660a316165633232323732633364346636363764623639356562336536636136 6364 login_host: "{{ mysql_host }}" login_user: "{{ mysql_user }}" login_password: "{{ mysql_pass }}" - name: Check if can access plain text vars debug: msg: "{{ my_plain_txt }}" Error message: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: u'"inline_name"' is not JSON serializable fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""}
Inline encrypted variable not JSON serializable I'm trying to understand how to encrypt single variables with vault. First I encrypt the string with ansible-vault encrypt_string -n -p , then I write the output into my playbook. When I execute the playbook it says that the decrypted string isn't JSON serializable. Encrypted string: "inline_name" I also tried it with inline_name and inlinename , every time with the same result. My playbook: --- - name: Build System hosts: dev tasks: - name: Create mysql_db: state: present name: !vault | $ANSIBLE_VAULT;1.1;AES256 39613261386438623937643062636166663638633062323939343734306334346537613233623064 3761633832326365356231633338396132646532313861350a316666376566616633376238313636 39343833306462323534623238333639663734626662623731666239366566643636386261643164 3861363730336331660a316165633232323732633364346636363764623639356562336536636136 6364 login_host: "{{ mysql_host }}" login_user: "{{ mysql_user }}" login_password: "{{ mysql_pass }}" - name: Check if can access plain text vars debug: msg: "{{ my_plain_txt }}" Error message: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: u'"inline_name"' is not JSON serializable fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""}
ansible, ansible-vault
14
8,487
3
https://stackoverflow.com/questions/47567580/inline-encrypted-variable-not-json-serializable
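A workaround often suggested for this class of error (a hedged sketch): keep the vaulted value in a play variable and reference it from the module argument, instead of embedding `!vault` directly inside the module's parameters.

```yaml
vars:
  db_name: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    ...ciphertext elided...
tasks:
  - mysql_db:
      state: present
      name: "{{ db_name }}"
```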
55,046,531
ansible regex_search with variable
How to find a match using regex in ansible playbook where variable appears in the regex_search argument? The following playbook doesn't find the match... when run using: ansible-playbook playbook.yml - hosts: localhost gather_facts: no tasks: - set_fact: pattern: "{{ 'foobar' | regex_search('foo') }}" - set_fact: m: "{{ 'beefoo' | regex_search('bee{{ pattern }}') }}" - debug: msg: "hi {{ m }}"
ansible regex_search with variable How to find a match using regex in ansible playbook where variable appears in the regex_search argument? The following playbook doesn't find the match... when run using: ansible-playbook playbook.yml - hosts: localhost gather_facts: no tasks: - set_fact: pattern: "{{ 'foobar' | regex_search('foo') }}" - set_fact: m: "{{ 'beefoo' | regex_search('bee{{ pattern }}') }}" - debug: msg: "hi {{ m }}"
regex, ansible, ansible-template
14
77,643
2
https://stackoverflow.com/questions/55046531/ansible-regex-search-with-variable
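The standard fix (a sketch): `{{ }}` cannot be nested inside another Jinja expression, so build the pattern with the `~` concatenation operator instead.

```yaml
- set_fact:
    m: "{{ 'beefoo' | regex_search('bee' ~ pattern) }}"
```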
35,368,044
How to use Ansible 2.0 Python API to run a Playbook?
I'm trying to write a Python script which will call existing Ansible playbooks as it goes (because I want to loop over a list of plays while looping over a list of variables). This post explains it very well, for ansible pre-2.0: Running ansible-playbook using Python API This doc explains it very well if you're writing a new playbook in your script: [URL] But I don't see how to call an existing playbook using Python API 2.0, and ansible.runner no longer works.
How to use Ansible 2.0 Python API to run a Playbook? I'm trying to write a Python script which will call existing Ansible playbooks as it goes (because I want to loop over a list of plays while looping over a list of variables). This post explains it very well, for ansible pre-2.0: Running ansible-playbook using Python API This doc explains it very well if you're writing a new playbook in your script: [URL] But I don't see how to call an existing playbook using Python API 2.0, and ansible.runner no longer works.
python, ansible, ansible-2.x
14
15,279
1
https://stackoverflow.com/questions/35368044/how-to-use-ansible-2-0-python-api-to-run-a-playbook
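Since the 2.x internal API moves between releases, one version-stable pattern (a sketch — the playbook name and variable sets are examples, not the asker's) is to drive `ansible-playbook` as a subprocess from the loop:

```python
import json
import subprocess


def playbook_cmd(playbook, extra_vars):
    """Build an ansible-playbook invocation suitable for subprocess.run()."""
    return ["ansible-playbook", playbook, "--extra-vars", json.dumps(extra_vars)]


# Loop over variable sets, shelling out once per run; "site.yml" and the
# "env" variable are assumptions for illustration.
for extra in [{"env": "staging"}, {"env": "prod"}]:
    cmd = playbook_cmd("site.yml", extra)
    # subprocess.run(cmd, check=True)  # uncomment when ansible-playbook is on PATH
```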
47,392,748
Ansible: How to declare global variable within playbook?
How can I declare a global variable within an Ansible playbook? I have searched in google and found the below solution, but it's not working as expected. - hosts: all vars: prod-servers: - x.x.x.x - x.x.x.x - hosts: "{{prod-servers}}" tasks: - name: ping action: ping When I'm trying the above code, it says variable prod-servers is undefined.
Ansible: How to declare global variable within playbook? How can I declare a global variable within an Ansible playbook? I have searched in google and found the below solution, but it's not working as expected. - hosts: all vars: prod-servers: - x.x.x.x - x.x.x.x - hosts: "{{prod-servers}}" tasks: - name: ping action: ping When I'm trying the above code, it says variable prod-servers is undefined.
ansible, ansible-inventory
14
36,765
4
https://stackoverflow.com/questions/47392748/ansible-how-to-declare-global-variable-within-playbook
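Two things commonly pointed out for this question (hedged): variable names cannot contain `-`, and `hosts:` expects a group or host pattern, not a list variable. A sketch that builds an in-memory group instead:

```yaml
- hosts: localhost
  gather_facts: no
  vars:
    prod_servers:        # underscores, not dashes
      - x.x.x.x
      - x.x.x.x
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: prod
      loop: "{{ prod_servers }}"

- hosts: prod
  tasks:
    - ping:
```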
25,522,463
File lookup() relative to playbook
I am currently using lookup() function in role tasks.yml to get input from files for shell commands. Is there a way to make lookups relative to the playbook file (project root folder) instead of role itself? I'd rather store files on playbook level.
File lookup() relative to playbook I am currently using lookup() function in role tasks.yml to get input from files for shell commands. Is there a way to make lookups relative to the playbook file (project root folder) instead of role itself? I'd rather store files on playbook level.
ansible
14
15,647
1
https://stackoverflow.com/questions/25522463/file-lookup-relative-to-playbook
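A sketch of the usual answer (the command and file name are examples): anchor the path at the magic `playbook_dir` variable so the lookup no longer resolves against the role.

```yaml
- shell: "somecommand {{ lookup('file', playbook_dir + '/files/input.txt') }}"
```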
30,480,706
Is there any way to retry playbooks from where they failed?
Is there any way to retry playbooks from where they failed? I'm starting it with vagrant provision
Is there any way to retry playbooks from where they failed? Is there any way to retry playbooks from where they failed? I'm starting it with vagrant provision
vagrant, ansible, vagrantfile
14
14,596
1
https://stackoverflow.com/questions/30480706/is-there-any-way-to-retry-playbooks-from-where-they-failed
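With the Vagrant Ansible provisioner, one way this is often done (a config sketch — the playbook name is an example) is to point `ansible.limit` at the `.retry` file the failed run left behind:

```ruby
# Vagrantfile fragment (sketch)
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.limit    = "@playbook.retry"   # rerun only previously failed hosts
end
```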
49,390,513
How do you get group name of the executing role in ansible
I have a use case where I need to create a directory in all hosts in a group whose name will be the name of the group. Eg : I create a dynamic inventory file with output of the form : { "db": ["host1", "host2", "host3"], "logs": ["host2"], "misc": ["host4"] } I need to create a directory by the name db in every host of the group db and directory named log in every host of the group log. My playbook so far looks like : - hosts: db tasks: - name: create db directory file: path=db state=directory - hosts: logs tasks: - name: create logs directory file: path=logs state=directory The number of groups is at least 10+. This may grow in the future. I want to write something that will be easily extendable and manageable for the future groups. I read that ansible doesn't provide a default fact or variable for group name for the task. I see that this can be best done if I provide a variable like groupname in host. Is there a better solution that could use looping instead of repeating the same task?
How do you get group name of the executing role in ansible I have a use case where I need to create a directory in all hosts in a group whose name will be the name of the group. Eg : I create a dynamic inventory file with output of the form : { "db": ["host1", "host2", "host3"], "logs": ["host2"], "misc": ["host4"] } I need to create a directory by the name db in every host of the group db and directory named log in every host of the group log. My playbook so far looks like : - hosts: db tasks: - name: create db directory file: path=db state=directory - hosts: logs tasks: - name: create logs directory file: path=logs state=directory The number of groups is at least 10+. This may grow in the future. I want to write something that will be easily extendable and manageable for the future groups. I read that ansible doesn't provide a default fact or variable for group name for the task. I see that this can be best done if I provide a variable like groupname in host. Is there a better solution that could use looping instead of repeating the same task?
ansible, ansible-inventory
14
48,674
1
https://stackoverflow.com/questions/49390513/how-do-you-get-group-name-of-the-executing-role-in-ansible
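The loop the asker is after usually comes from the magic `group_names` variable (a sketch): each host creates one directory per group it belongs to, in a single task that never needs extending as groups are added.

```yaml
- hosts: all
  tasks:
    - name: One directory per group membership
      file:
        path: "{{ item }}"
        state: directory
      loop: "{{ group_names }}"
```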
33,972,756
Ansible to generate random passwords automatically for users
I am trying to create playbook where list of users will be created. However, I also want to generate random password for each user. Once the passwords are generated, I would like to have a text file holding username:new_generated_password key values, next to the playbook file. Is it possible to do this without developing a new module?
Ansible to generate random passwords automatically for users I am trying to create playbook where list of users will be created. However, I also want to generate random password for each user. Once the passwords are generated, I would like to have a text file holding username:new_generated_password key values, next to the playbook file. Is it possible to do this without developing a new module?
ansible
14
20,206
1
https://stackoverflow.com/questions/33972756/ansible-to-generate-random-passwords-automatically-for-users
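A sketch of the common no-custom-module answer (the `users` list and `credentials/` directory are examples): the `password` lookup generates a random password per user and caches it in a file next to the playbook, which doubles as the requested username/password record.

```yaml
- user:
    name: "{{ item }}"
    password: "{{ lookup('password', 'credentials/' + item + ' length=12')
                  | password_hash('sha512') }}"
  loop: "{{ users }}"
```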
64,261,899
Ansible "ERROR! Attempting to decrypt but no vault secrets found" but I'm not decrypting anything
Odd and Frustrating Error I'm trying to learn and create a python module for Ansible. I'm following this page: Medium.com This is to be a local module so I just want to use ./library/module.py . The code: Playbook --- - hosts: localhost # I've used 127.0.0.1 here also connection: local vars: - Test: "This is a test" tasks: - name: set result set_fact: result: "set" - name: Test that my hello_world module works hello_world: register: result - debug: var=result ... The module is a simple python script to print 'Hello World'. It is ./library/hello_world.py Module #!/usr/bin/python from ansible.module_utils.basic import * def main(): module = AnsibleModule(argument_spec={}) theReturnValue = {"hello": "world"} module.exit_json(changed=False, meta=theReturnValue) if __name__ == '__main__': main() My ansible.config [defaults] inventory = ./inventory roles_path = ./roles library = ./library filter_plugins = ./plugins/filter lookup_plugins = ./plugins/lookup callback_whitelist = profile_tasks,timer log_path = ./ansible.log gathering = smart fact_caching = jsonfile fact_caching_connection = /tmp/server_configurations/Facts/ fact_caching_timeout = 86400 host_key_checking = True timeout=60 [inventory] enable_plugins = host_list, ini, script, yaml, auto [ssh_connection] pipelining=True control_path = %(directory)s/ssh-%%h-%%p-%%r My Question Why is my playbook throwing a vault error when I have not called to unlock anything? I created another playbook from scratch to see if there was a copy error. There was none, and I got the same error.
Error ansible@VirtualBox:/media/ubuntu20$ ansible-playbook module_test.yml --connection="local 127.0.0.1" -vv ansible-playbook 2.9.6 config file = /media/ubuntu20/ansible.cfg configured module search path = ['/media/ubuntu20/library'] ansible python module location = /usr/lib/python3/dist-packages/ansible executable location = /usr/bin/ansible-playbook python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0] Using /media/ubuntu20/ansible.cfg as config file PLAYBOOK: module_test.yml ********************************************************************************************************************************************************** 1 plays in module_test.yml PLAY [localhost] ******************************************************************************************************************************************************************* ERROR! Attempting to decrypt but no vault secrets found This error is confusing as I don't call any vaults and I don't use any inventory. I typed the playbook by hand in case I copied anything I did not want. I have used various 'local' settings to force locality, with no change in error. It does not help that even at -vvvv I just get the same error line. If this is a module error, why show a vault error? (I do have vaults in my ./inventory/group_vars, but I don't use them.) I'm still head-scratching on this.
Ansible "ERROR! Attempting to decrypt but no vault secrets found" but I'm not decrypting anything Odd and Frustrating Error I'm trying to learn and create a python module for Ansible. I'm following this page: Medium.com This is to be a local module so I just want to use ./library/module.py . The code: Playbook --- - hosts: localhost # I've used 127.0.0.1 here also connection: local vars: - Test: "This is a test" tasks: - name: set result set_fact: result: "set" - name: Test that my hello_world module works hello_world: register: result - debug: var=result ... The module is a simple python script to print 'Hello World'. It is ./library/hello_world.py Module #!/usr/bin/python from ansible.module_utils.basic import * def main(): module = AnsibleModule(argument_spec={}) theReturnValue = {"hello": "world"} module.exit_json(changed=False, meta=theReturnValue) if __name__ == '__main__': main() My ansible.config [defaults] inventory = ./inventory roles_path = ./roles library = ./library filter_plugins = ./plugins/filter lookup_plugins = ./plugins/lookup callback_whitelist = profile_tasks,timer log_path = ./ansible.log gathering = smart fact_caching = jsonfile fact_caching_connection = /tmp/server_configurations/Facts/ fact_caching_timeout = 86400 host_key_checking = True timeout=60 [inventory] enable_plugins = host_list, ini, script, yaml, auto [ssh_connection] pipelining=True control_path = %(directory)s/ssh-%%h-%%p-%%r My Question Why is my playbook throwing a vault error when I have not called to unlock anything? I created another playbook from scratch to see if there was a copy error. There was none, and I got the same error.
Error ansible@VirtualBox:/media/ubuntu20$ ansible-playbook module_test.yml --connection="local 127.0.0.1" -vv ansible-playbook 2.9.6 config file = /media/ubuntu20/ansible.cfg configured module search path = ['/media/ubuntu20/library'] ansible python module location = /usr/lib/python3/dist-packages/ansible executable location = /usr/bin/ansible-playbook python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0] Using /media/ubuntu20/ansible.cfg as config file PLAYBOOK: module_test.yml ********************************************************************************************************************************************************** 1 plays in module_test.yml PLAY [localhost] ******************************************************************************************************************************************************************* ERROR! Attempting to decrypt but no vault secrets found This error is confusing as I don't call any vaults and I don't use any inventory. I typed the playbook by hand in case I copied anything I did not want. I have used various 'local' settings to force locality, with no change in error. It does not help that even at -vvvv I just get the same error line. If this is a module error, why show a vault error? (I do have vaults in my ./inventory/group_vars, but I don't use them.) I'm still head-scratching on this.
python, ansible
14
60,538
4
https://stackoverflow.com/questions/64261899/ansible-error-attempting-to-decrypt-but-no-vault-secrets-found-but-im-not-de
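Given the parenthetical at the end of the question, the likely trigger (an assumption) is that inventory `group_vars` are always parsed, so a vaulted file there demands a secret even when nothing references it. A config sketch (the path is an example):

```ini
# ansible.cfg
[defaults]
vault_password_file = ~/.vault_pass.txt
```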
35,798,445
How to pass variables from one role downstream to other dependency roles with ansible?
I have a generic webserver role that is using another nginx role to spawn new vservers. webserver/meta/main.yml looks like: allow_duplicates: yes dependencies: - role: nginx name: api vserver frontend_port: "{{ frontend_port }}" domain: "{{ api_domain }}" backend_host: 127.0.0.1 - role: nginx name: portal vserver domain: "{{ portal_domain }}" backend_host: 127.0.0.1 The problem is that these variables are supposed to be defined inside the webserver-role/vars/(test|staging).yml. It seems that Ansible will try to load the dependencies before loading the variables. How can I solve this problem? I don't want to put any configuration specifics inside the low level roles. Also, I do not want to put configurations inside the playbook itself because these configurations are shared across multiple playbooks.
How to pass variables from one role downstream to other dependency roles with ansible? I have a generic webserver role that is using another nginx role to spawn new vservers. webserver/meta/main.yml looks like: allow_duplicates: yes dependencies: - role: nginx name: api vserver frontend_port: "{{ frontend_port }}" domain: "{{ api_domain }}" backend_host: 127.0.0.1 - role: nginx name: portal vserver domain: "{{ portal_domain }}" backend_host: 127.0.0.1 The problem is that these variables are supposed to be defined inside the webserver-role/vars/(test|staging).yml. It seems that Ansible will try to load the dependencies before loading the variables. How can I solve this problem? I don't want to put any configuration specifics inside the low level roles. Also, I do not want to put configurations inside the playbook itself because these configurations are shared across multiple playbooks.
ansible
14
9,403
2
https://stackoverflow.com/questions/35798445/how-to-pass-variables-from-one-role-downstream-to-other-dependency-roles-with-an
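One pattern that sidesteps the ordering problem (a sketch — `env` and the path are assumptions): load the environment file with `vars_files` at play level, so the values exist before the role dependencies are resolved.

```yaml
- hosts: web
  vars_files:
    - roles/webserver/vars/{{ env }}.yml
  roles:
    - webserver
```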
28,717,783
Remove a line from a file using ansible?
I have a file called authorized_keys . I need to delete a particular line using an Ansible script. The problem is when I try to remove a line that includes a '+' character. Ansible is not able to remove this line. e.g authorized_keys file is: ..... abhi foo+bar saken ......(EOF) I want to remove the abhi foo+bar saken line but Ansible is not removing this line because of the + character. I am able to remove lines that do not contain a + character . Task: - name: Delete keys in sysadmin/.ssh/authoriezd_keys lineinfile: dest=/home/{{name}}/.ssh/authorized_keys state=absent regexp='^{{key}}$' PS: I am using Ansible's lineinfile module
Remove a line from a file using ansible? I have a file called authorized_keys . I need to delete a particular line using an Ansible script. The problem is when I try to remove a line that includes a '+' character. Ansible is not able to remove this line. e.g authorized_keys file is: ..... abhi foo+bar saken ......(EOF) I want to remove the abhi foo+bar saken line but Ansible is not removing this line because of the + character. I am able to remove lines that do not contain a + character . Task: - name: Delete keys in sysadmin/.ssh/authoriezd_keys lineinfile: dest=/home/{{name}}/.ssh/authorized_keys state=absent regexp='^{{key}}$' PS: I am using Ansible's lineinfile module
ansible
14
64,749
1
https://stackoverflow.com/questions/28717783/remove-a-line-from-a-file-using-ansible
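The `+` is a regex metacharacter, so the usual fix (a sketch) is to escape the variable before it is interpolated into `regexp`:

```yaml
- lineinfile:
    dest: "/home/{{ name }}/.ssh/authorized_keys"
    state: absent
    regexp: "^{{ key | regex_escape() }}$"
```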
26,188,055
Ansible: understanding a compound conditional when statement
Consider this trivial ansible playbook and associated output below. Why does task 5 get executed? These tasks were run against debian. Task 1 fails as expected. So, why does and'ing it with 'ansible_lsb.major_release|int < 14' make it true? Does this have something to do with operator precedence? -jk --- - name: These tests run against debian hosts: frontend001 vars: - bcbio_dir: /mnt/bcbio - is_ubuntu: "'{{ansible_distribution}}' == 'Ubuntu'" - is_debian: "'{{ansible_distribution}}' == 'Debian'" tasks: - name: 1. Expect skip because test is_ubuntu debug: msg="ansible distribution - {{ansible_distribution}}, release - {{ansible_distribution_release}}, {{ ansible_lsb.major_release }}" when: is_ubuntu - name: 2. Expect to print msg because test is_debian debug: msg="ansible distribution - {{ansible_distribution}}, release - {{ansible_distribution_release}}, {{ ansible_lsb.major_release }}" when: is_debian - name: 3. Expect to print msg because release 7 of wheezy debug: msg="ansible distribution - {{ansible_distribution}}, release - {{ansible_distribution_release}}, {{ ansible_lsb.major_release }}" when: ansible_lsb.major_release|int < 14 - name: 4. Expect to print msg because true and true is true debug: msg="ansible distribution - {{ansible_distribution}}, release - {{ansible_distribution_release}}, {{ ansible_lsb.major_release }}" when: is_debian and ansible_lsb.major_release|int < 14 - name: 5. Expect to skip because false and true is false debug: msg="ansible distribution - {{ansible_distribution}}, release - {{ansible_distribution_release}}, {{ ansible_lsb.major_release }}" when: is_ubuntu and ansible_lsb.major_release|int < 14 $ ansible-playbook -i ~/.elasticluster/storage/ansible-inventory.jkcluster zbcbio.yml PLAY [These tests run against debian] ***************************************** GATHERING FACTS *************************************************************** ok: [frontend001] TASK: [1. 
Expect skip because test is_ubuntu] ********************************* skipping: [frontend001] TASK: [2. Expect to print msg because test is_debian] ************************* ok: [frontend001] => { "msg": "ansible distribution - Debian, release - wheezy, 7" } TASK: [3. Expect to print msg because release 7 of wheezy] ******************** ok: [frontend001] => { "msg": "ansible distribution - Debian, release - wheezy, 7" } TASK: [4. Expect to print msg because true and true is true] ****************** ok: [frontend001] => { "msg": "ansible distribution - Debian, release - wheezy, 7" } TASK: [5. Expect to skip because false and true is false] ********************* ok: [frontend001] => { "msg": "ansible distribution - Debian, release - wheezy, 7" } PLAY RECAP ******************************************************************** frontend001 : ok=5 changed=0 unreachable=0 failed=0 Edited : Listing the changes based on tedder42's answer below in case someone is following along at home. 1) Changed - is_ubuntu: "'{{ansible_distribution}}' == 'Ubuntu'" to - is_ubuntu: "{{ansible_distribution == 'Ubuntu'}}" 2) change when: is_ubuntu and ansible_lsb.major_release|int < 14 to when: is_ubuntu|bool and ansible_lsb.major_release|int < 14 That did it! -jk
Ansible: understanding a compound conditional when statement
ansible
14
23,044
1
https://stackoverflow.com/questions/26188055/ansible-understanding-a-compound-conditional-when-statement
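The edit note above points at the root cause: the original var definitions store a literal string, not a boolean. On a Debian host, `is_ubuntu` templates to the string `"'Debian' == 'Ubuntu'"`. A bare `when: is_ubuntu` still evaluates that string as an expression, but inside a compound expression the string is merely tested for truthiness, and any non-empty string is truthy. The mechanism can be sketched in plain Python (the `to_bool` helper below is a simplified stand-in for Ansible's `|bool` filter, not its real implementation):

```python
# On a Debian host, the var definition
#   is_ubuntu: "'{{ansible_distribution}}' == 'Ubuntu'"
# expands to the *string* below, not to a boolean:
is_ubuntu = "'Debian' == 'Ubuntu'"

# Inside `when: is_ubuntu and ...` the string itself is tested for
# truthiness, and a non-empty string is truthy:
print(bool(is_ubuntu) and 7 < 14)  # True, so task 5 runs

def to_bool(value):
    # simplified stand-in for Ansible's |bool filter (for illustration only)
    return str(value).strip().lower() in ("yes", "on", "true", "1")

# Casting with |bool (or defining the var as "{{ ... == 'Ubuntu' }}")
# restores the expected short-circuit behaviour:
print(to_bool(is_ubuntu) and 7 < 14)  # False, so the task is skipped
```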
64,687,350
In Ansible, how to add a block of text at end of a file with a blank line before the marker?
I have a playbook as shown below:

- hosts: localhost
  tasks:
    - name: update a file
      blockinfile:
        dest: /tmp/test
        block: |
          line 1
          line 2

Upon running the playbook, the file /tmp/test becomes:

a # this is the end line of the original file
# BEGIN ANSIBLE MANAGED BLOCK
line 1
line 2
# END ANSIBLE MANAGED BLOCK

I would like to add a blank line (newline) before the "# BEGIN ANSIBLE MANAGED BLOCK" marker for visual effect. What is the easiest way to do it? Preferably within the task, but any idea is welcome. If I redefine the marker, it will affect both the "BEGIN" and the "END" marker.
ansible, ansible-2.x
14
25,485
4
https://stackoverflow.com/questions/64687350/in-ansible-how-to-add-a-block-of-text-at-end-of-a-file-with-a-blank-line-before
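One workaround (a sketch, not a confirmed solution): leave the blockinfile task unchanged and follow it with a replace task that inserts the blank line. The regexp only matches when a non-blank character directly precedes the marker, so the play stays idempotent on repeated runs:

```yaml
- hosts: localhost
  tasks:
    - name: update a file
      blockinfile:
        dest: /tmp/test
        block: |
          line 1
          line 2

    # Only matches when the BEGIN marker is not already preceded by a
    # blank line, so re-running the play changes nothing.
    - name: ensure a blank line before the BEGIN marker
      replace:
        path: /tmp/test
        regexp: '([^\n])\n(# BEGIN ANSIBLE MANAGED BLOCK)'
        replace: '\1\n\n\2'
```

Since the blank line is added outside blockinfile, the END marker is unaffected.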
48,936,489
Use variable from another host
I have a playbook that executes a script on a Windows box that returns a value I have to reuse later on in my playbook after switching to localhost. How can I access this value after switching back to localhost? Here is an example:

- hosts: windows
  gather_facts: no
  tasks:
    - name: Call PowerShell script
      win_command: "c:\\windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe c:\\psl_scripts\\getData.ps1"
      register: value_to_reuse

- hosts: localhost
  gather_facts: no
  tasks:
    - name: debug store_name from windows host
      debug:
        var: "{{ hostvars[windows][value_to_reuse][stdout_lines] }}"

What is the correct syntax for accessing a variable from another host? I'm receiving this error message:

"msg": "The task includes an option with an undefined variable. The error was: 'windows' is undefined
ansible
14
24,085
2
https://stackoverflow.com/questions/48936489/use-variable-from-another-host
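A sketch of the likely fix (untested here): two things trip the example up. First, the keys must be quoted; a bare `windows` inside `{{ }}` is looked up as a variable, which is exactly the "'windows' is undefined" error. Second, `hostvars` is indexed by host name, and `windows` in the example is a group, so a concrete host has to be picked out of it via `groups`:

```yaml
- hosts: localhost
  gather_facts: no
  tasks:
    # debug's `var:` expects a bare variable name, so `msg:` with a full
    # Jinja2 expression is used instead. groups['windows'][0] selects the
    # first host of the windows group; any registered var on that host is
    # reachable through hostvars.
    - name: debug value registered on the windows host
      debug:
        msg: "{{ hostvars[groups['windows'][0]]['value_to_reuse']['stdout_lines'] }}"
```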
37,880,491
.ansible/tmp/ansible-tmp-* Permission denied
The remote host throws an error while running an Ansible playbook, despite the user being a sudo user:

"/usr/bin/python: can't open file '/home/ludd/.ansible/tmp/ansible-tmp-1466162346.37-16304304631529/zypper'
ansible
14
20,910
4
https://stackoverflow.com/questions/37880491/ansible-tmp-ansible-tmp-permission-denied
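One common cause (an assumption, not a confirmed diagnosis for this host) is that the remote temp directory under the login user's home is not readable by the user Ansible switches to with become. A frequently used workaround is to point Ansible's remote temp directory at a path both users can reach, e.g. in ansible.cfg:

```ini
; Sketch: relocate the per-host temp dir out of the unprivileged
; user's home so the become user can read the copied module files.
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp
```

If the directory itself has wrong ownership from an earlier run, deleting ~/.ansible/tmp on the remote host and re-running the playbook is also worth trying.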
69,508,170
Ansible role dependency handling for collections
When creating an Ansible role you can specify the dependencies in meta/main.yml as follows:

dependencies:
  - role: papanito.xxx

How do I do this if papanito.xxx was converted into a collection? Or, phrased otherwise: how do I tell my role that it depends on a module/role from a collection? If I run ansible-playbook with a role that has a dependency on a role in a collection, I get the following error:

ERROR! the role 'papanito.xxx' was not found in ansible.legacy:
  /home/papanito/Workspaces/devenv/roles:
  /home/papanito/.ansible/roles:
  /usr/share/ansible/roles:
  /etc/ansible/roles:
  /home/papanito/Workspaces/devenv/roles:
  /home/papanito/Workspaces/devenv

As I can see, Ansible checks these paths for roles; however, collections are stored under /home/papanito/.ansible/collections/ansible_collections/. I suspect this has to do with the mention of ansible.legacy, though I did not configure anything like that, intentionally at least. Using role: the_namespace.the_collection.the_role does not solve the problem; I still get the same error. Further, the documentation says:

  Within a role, you can control which collections Ansible searches for the tasks inside the role using the collections keyword in the role's meta/main.yml

So I updated meta/main.yml to:

collections:
  - papanito.xxx
dependencies:
  - role: papanito.xxx.xxx

Which still results in the exact same error when I run the playbook.
ansible
14
10,444
1
https://stackoverflow.com/questions/69508170/ansible-role-dependency-handling-for-collections
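A sketch of the usual setup (assumes ansible-core 2.10+ and hypothetical names; `the_role` stands in for whatever role the collection actually ships). The collection itself is installed via requirements.yml, and the role dependency is then referenced by its fully qualified collection name in meta/main.yml; the `collections:` keyword only affects module/task lookup inside the role, not role-dependency resolution:

```yaml
# requirements.yml -- makes sure the collection is installed,
# e.g. via: ansible-galaxy collection install -r requirements.yml
collections:
  - name: papanito.xxx

# roles/my_role/meta/main.yml -- depend on a role shipped inside
# the collection, using namespace.collection.role_name
dependencies:
  - role: papanito.xxx.the_role
```

If the FQCN reference still fails, checking that the role actually exists under ~/.ansible/collections/ansible_collections/papanito/xxx/roles/ and that the ansible-core version is recent enough to resolve roles from collections are the first things to verify.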