| question_id | title_clean | body_clean | tags | score | view_count | answer_count | link |
|---|---|---|---|---|---|---|---|
38,143,647
|
Set fact with dynamic key name in ansible
|
I am trying to shrink several chunks of similar code which look like:

```yaml
- ...  # multiple things going on here
  register: list_register
- name: Generating list
  set_fact: my_list="{{ list_register.results | map(attribute='ansible_facts.list_item') | list }}"
# the same code repeats...
```

The only difference between them is the list name instead of `my_list`. In fact, I want to do this:

```yaml
set_fact: "{{ some var }}" : "{{ some value }}"
```

I came across this post but didn't find any answer there. Is it possible to do so, or is there any workaround?
|
variables, dynamic, ansible
| 43
| 73,365
| 7
|
https://stackoverflow.com/questions/38143647/set-fact-with-dynamic-key-name-in-ansible
|
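For the dynamic-key question above, one workaround that has worked across several Ansible releases is to pass `set_fact` an inline dict whose key is itself a Jinja2 template (a sketch, not from the original post; `list_name` is a hypothetical variable):

```yaml
# Sketch: the fact's name is taken from the list_name variable.
- name: Generating list with a dynamic fact name
  set_fact: { "{{ list_name }}": "{{ list_register.results | map(attribute='ansible_facts.list_item') | list }}" }
  vars:
    list_name: my_list   # hypothetical; change per repeated block
```

With `list_name: my_list` this defines a fact called `my_list`; repeating the task with a different `list_name` covers each duplicated block.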
37,297,249
|
How to store ansible_become_pass in a vault and how to use it?
|
I am a newbie to Ansible and I am using a very simple playbook to issue `sudo apt-get update` and `sudo apt-get upgrade` on a couple of servers. This is the playbook I am using:

```yaml
---
- name: Update Servers
  hosts: my-servers
  become: yes
  become_user: root
  tasks:
    - name: update packages
      apt: update_cache=yes
    - name: upgrade packages
      apt: upgrade=dist
```

and this is an extract from my `~/.ansible/inventory/hosts` file:

```ini
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass=<my_sudo_password_for_user_on_san-francisco>
san-diego ansible_host=san-diego ansible_ssh_user=user ansible_become_pass=<my_sudo_password_for_user_on_san-diego>
```

This is what I get if I launch the playbook:

```
$ ansible-playbook update-servers-playbook.yml

PLAY [Update Servers] **********************************************************

TASK [setup] *******************************************************************
ok: [san-francisco]
ok: [san-diego]

TASK [update packages] *********************************************************
ok: [san-francisco]
ok: [san-diego]

TASK [upgrade packages] ********************************************************
ok: [san-francisco]
ok: [san-diego]

PLAY RECAP *********************************************************************
san-francisco              : ok=3    changed=0    unreachable=0    failed=0
san-diego                  : ok=3    changed=0    unreachable=0    failed=0
```

What is bothering me is the fact that I have the password for my user `user` stored in plaintext in my `~/.ansible/inventory/hosts` file. I have read about vaults, and I have also read about the best practices for variables and vaults, but I do not understand how to apply this to my very minimal use case. I also tried to use lookups. While in general they also work in the inventory file, and I am able to do something like this:

```ini
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass="{{ lookup('env', 'ANSIBLE_BECOME_PASSWORD_SAN_FRANCISCO') }}"
```

in which case the password would be stored in an environment variable called `ANSIBLE_BECOME_PASSWORD_SAN_FRANCISCO`, there is no way to look up variables in vaults as far as I know. So, how could I organize my files such that I would be able to look up my passwords from somewhere and have them safely stored?
|
security, ansible, ansible-vault
| 43
| 47,668
| 3
|
https://stackoverflow.com/questions/37297249/how-to-store-ansible-become-pass-in-a-vault-and-how-to-use-it
|
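One common way to address the vault question above is to move `ansible_become_pass` out of the inventory and into a per-host vaulted variables file (a sketch; the file layout is illustrative, not from the original post):

```yaml
# host_vars/san-francisco/vault.yml
# Encrypt this file with:  ansible-vault encrypt host_vars/san-francisco/vault.yml
ansible_become_pass: my_sudo_password_for_user_on_san-francisco
```

Ansible loads `host_vars/<hostname>/` automatically, so the inventory line shrinks to just `san-francisco ansible_host=san-francisco ansible_ssh_user=user`, and the playbook is run with `ansible-playbook --ask-vault-pass ...` (or `--vault-password-file`).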
29,635,627
|
Register Variables in Loop in an Ansible Playbook
|
I have two Ansible tasks as follows:

```yaml
tasks:
  - shell: ifconfig -a | sed 's/[ \t].*//;/^\(lo\|\)$/d'
    register: var1
  - debug: var=var1
  - shell: ethtool -i {{ item }} | grep bus-info | cut -b 16-22
    with_items: var1.stdout_lines
    register: var2
  - debug: var=var2
```

which is used to get a list of interfaces on a (Linux) machine and get the bus address for each. I have one more task as follows in the same playbook:

```yaml
- name: Binding the interfaces
  shell: echo {{ item.item }}
  with_flattened: var2.results
  register: var3
```

which I expect to iterate over the values from `var2` and then print the bus numbers. `var2.results` is as follows:

```json
"var2": {
  "changed": true,
  "msg": "All items completed",
  "results": [
    { "changed": true, "cmd": "ethtool -i br0: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005778", "end": "2015-04-14 20:29:47.122203", "invocation": { "module_args": "ethtool -i br0: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "br0:", "rc": 0, "start": "2015-04-14 20:29:47.116425", "stderr": "", "stdout": "", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp13s0: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005862", "end": "2015-04-14 20:29:47.359749", "invocation": { "module_args": "ethtool -i enp13s0: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp13s0:", "rc": 0, "start": "2015-04-14 20:29:47.353887", "stderr": "", "stdout": "0d:00.0", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp14s0: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005805", "end": "2015-04-14 20:29:47.576674", "invocation": { "module_args": "ethtool -i enp14s0: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp14s0:", "rc": 0, "start": "2015-04-14 20:29:47.570869", "stderr": "", "stdout": "0e:00.0", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp15s0: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005873", "end": "2015-04-14 20:29:47.875058", "invocation": { "module_args": "ethtool -i enp15s0: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp15s0:", "rc": 0, "start": "2015-04-14 20:29:47.869185", "stderr": "", "stdout": "0f:00.0", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp5s0f1: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005870", "end": "2015-04-14 20:29:48.112027", "invocation": { "module_args": "ethtool -i enp5s0f1: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp5s0f1:", "rc": 0, "start": "2015-04-14 20:29:48.106157", "stderr": "", "stdout": "05:00.1", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp5s0f2: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005863", "end": "2015-04-14 20:29:48.355733", "invocation": { "module_args": "ethtool -i enp5s0f2: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp5s0f2:", "rc": 0, "start": "2015-04-14 20:29:48.349870", "stderr": "", "stdout": "05:00.2", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp5s0f3: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005829", "end": "2015-04-14 20:29:48.591244", "invocation": { "module_args": "ethtool -i enp5s0f3: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp5s0f3:", "rc": 0, "start": "2015-04-14 20:29:48.585415", "stderr": "", "stdout": "05:00.3", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp9s0f0: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005943", "end": "2015-04-14 20:29:48.910992", "invocation": { "module_args": "ethtool -i enp9s0f0: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp9s0f0:", "rc": 0, "start": "2015-04-14 20:29:48.905049", "stderr": "", "stdout": "09:00.0", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i enp9s0f1: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005863", "end": "2015-04-14 20:29:49.143706", "invocation": { "module_args": "ethtool -i enp9s0f1: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "enp9s0f1:", "rc": 0, "start": "2015-04-14 20:29:49.137843", "stderr": "", "stdout": "09:00.1", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i lo: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005856", "end": "2015-04-14 20:29:49.386044", "invocation": { "module_args": "ethtool -i lo: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "lo:", "rc": 0, "start": "2015-04-14 20:29:49.380188", "stderr": "Cannot get driver information: Operation not supported", "stdout": "", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i virbr0: | grep bus-info | cut -b 16-22", "delta": "0:00:00.005859", "end": "2015-04-14 20:29:49.632356", "invocation": { "module_args": "ethtool -i virbr0: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "virbr0:", "rc": 0, "start": "2015-04-14 20:29:49.626497", "stderr": "", "stdout": "", "warnings": [] },
    { "changed": true, "cmd": "ethtool -i virbr0-nic: | grep bus-info | cut -b 16-22", "delta": "0:00:00.024850", "end": "2015-04-14 20:29:49.901539", "invocation": { "module_args": "ethtool -i virbr0-nic: | grep bus-info | cut -b 16-22", "module_name": "shell" }, "item": "virbr0-nic:", "rc": 0, "start": "2015-04-14 20:29:49.876689", "stderr": "", "stdout": "", "warnings": [] }
  ]
```

My objective is to get the value of `stdout` in each item above, for example `"stdout": "09:00.0"`. I tried something like:

```yaml
- name: Binding the interfaces
  shell: echo {{ item.item.stdout }}
  with_flattened: var2.results
  # with_indexed_items: var2.results
  register: var3
```

But this is not giving the bus values in `stdout` correctly. I would appreciate help with listing such a nested variable value in a task like the one above when the second variable is an indexed list. I am trying to avoid direct index numbering such as `item[0]` because the number of interfaces is dynamic and direct indexing may result in unexpected outcomes. Thanks
|
linux, scripting, automation, ansible
| 43
| 136,767
| 1
|
https://stackoverflow.com/questions/29635627/register-variables-in-loop-in-an-ansible-playbook
|
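For the loop question above, each element of `var2.results` is already the per-item result dict, so the bus address lives at `item.stdout` rather than `item.item.stdout` (a sketch, not from the original post):

```yaml
# Sketch: loop directly over the registered results; item.stdout is the
# bus address, item.item is the interface name it came from.
- name: Binding the interfaces
  shell: echo {{ item.stdout }}
  with_items: "{{ var2.results }}"
  when: item.stdout != ""   # skip interfaces (lo, bridges) with no bus-info
  register: var3
```

The `when:` guard avoids running the command for the entries whose `stdout` was empty in the dump above.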
49,246,837
|
Ansible Playbook: ERROR! 'command' is not a valid attribute for a Play
|
I'm just trying to write a basic playbook, and keep getting the error below. I've tried a tonne of things but still can't get it right. I know it must be a syntax thing, but I have no idea where. This is the code I have:

```yaml
---
# This playbook runs a basic df command.
- hosts: nagios
  #remote_user: root
  tasks:
  - name: find disk space available.
    command: df -hPT
```

This is the error I get:

> ERROR! 'command' is not a valid attribute for a Play
>
> The error appears to have been in '/root/playbooks/df.yml': line 4,
> column 3, but may be elsewhere in the file depending on the exact
> syntax problem.
>
> The offending line appears to be:
>
> - hosts: nagios
>   ^ here

Ansible version: 2.4.2.0. It's driving me insane. I've looked at some examples from the Ansible docs, and it looks the same. No idea... Anyone know?
|
ansible
| 43
| 196,933
| 1
|
https://stackoverflow.com/questions/49246837/ansible-playbook-error-command-is-not-a-valid-attribute-for-a-play
|
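The error above usually means a task keyword ended up at play level, which is almost always an indentation problem (frequently invisible tabs, since YAML forbids tabs for indentation). A minimal play with the conventional two-space indentation, for comparison:

```yaml
---
- hosts: nagios
  tasks:
    - name: find disk space available
      command: df -hPT
```

Here `tasks:` is nested under the play, and `command:` is nested under its task item; if either drifts left, Ansible parses `command` as an attribute of the play itself and raises exactly this error.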
19,390,600
|
Ansible lineinfile duplicates line
|
I have a simple file at /etc/foo.txt. The file contains the following:

```
#bar
```

I have the following Ansible playbook task to uncomment the line above:

```yaml
- name: test lineinfile
  lineinfile: backup=yes state=present dest=/etc/foo.txt regexp='^#bar' line='bar'
```

When I first run ansible-playbook, the line gets uncommented and /etc/foo.txt then contains the following:

```
bar
```

However, if I run ansible-playbook again, I get the following:

```
bar
bar
```

If I run it yet again, then the /etc/foo.txt file will look like this:

```
bar
bar
bar
```

How do I avoid this duplication of lines? I just want to uncomment the '#bar' and be done with it.
|
ansible
| 42
| 36,255
| 4
|
https://stackoverflow.com/questions/19390600/ansible-lineinfile-duplicates-line
|
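The duplication above happens because `^#bar` no longer matches once the line has been uncommented, so on the next run `lineinfile` finds no match and appends `bar` at the end of the file. Making the `#` optional keeps the task idempotent (a sketch):

```yaml
# Sketch: '^#?bar$' matches both the commented and uncommented form,
# so the task rewrites the line in place on the first run and then
# reports "ok" on every subsequent run.
- name: test lineinfile
  lineinfile:
    dest: /etc/foo.txt
    regexp: '^#?bar$'
    line: 'bar'
    state: present
    backup: yes
```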
24,765,930
|
Add swap memory with ansible
|
I'm working on a project where swap memory on my servers is needed to stop some long-running Python processes from going out of memory, and I realized for the first time that my Ubuntu Vagrant boxes and AWS Ubuntu instances didn't already have one set up. In [URL] a possible built-in solution was discussed but never implemented, so I'm guessing this should be a pretty common task to automate. How would you set up file-based swap memory with Ansible in an idempotent way? What modules or variables does Ansible provide to help with this setup (like the `ansible_swaptotal_mb` variable)?
|
memory, amazon-web-services, swap, ansible
| 42
| 31,838
| 8
|
https://stackoverflow.com/questions/24765930/add-swap-memory-with-ansible
|
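One common pattern for the swap question above is a short task sequence guarded by `creates:` and the `ansible_swaptotal_mb` fact so each step only fires when needed (a sketch under stated assumptions: a 1 GB swap file at /swapfile, and a filesystem where `fallocate` is valid for swap):

```yaml
# Sketch: create, format, enable, and persist a file-backed swap.
- name: create swap file
  command: fallocate -l 1G /swapfile
  args:
    creates: /swapfile          # skipped once the file exists

- name: restrict swap file permissions
  file:
    path: /swapfile
    mode: "0600"

- name: format swap file
  command: mkswap /swapfile
  when: ansible_swaptotal_mb < 1   # only when no swap is active yet

- name: enable swap
  command: swapon /swapfile
  when: ansible_swaptotal_mb < 1

- name: persist swap across reboots
  mount:
    name: none
    src: /swapfile
    fstype: swap
    opts: sw
    state: present
```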
37,213,551
|
ansible SSH connection fail
|
I'm trying to run an Ansible role on multiple servers, but I get an error:

```
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
```

My /etc/ansible/hosts file looks like this:

```ini
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
```

I have no idea what's going on - everything looks fine - I can log in via SSH, but ansible ping returns the same error. The log from verbose execution:

```
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user
<192.168.0.10> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10 '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829 `" )'"'"''
```

Can you help me somehow? If I have to use Ansible in local mode (-c local), then it's useless. I've tried deleting ansible_sudo_pass and ansible_ssh_user, but it didn't help.
|
ssh, connection, ansible
| 42
| 228,510
| 8
|
https://stackoverflow.com/questions/37213551/ansible-ssh-connection-fail
|
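A clue in the log above is `-o PasswordAuthentication=no`: Ansible is only attempting key-based authentication, so "I can log in via SSH" with a password does not carry over. Either distribute a key to each host with `ssh-copy-id`, or supply the SSH password in the inventory (which requires `sshpass` installed on the control machine). An inventory sketch, not from the original post:

```ini
# Sketch: password-based SSH; needs sshpass on the control node.
192.168.0.10 ansible_ssh_user=user ansible_ssh_pass='passphrase' ansible_sudo_pass='passphrase'
```

Alternatively, run ad hoc with `ansible all -m ping --ask-pass` to confirm password auth works before editing the inventory.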
30,060,164
|
Save temporary ansible shell scripts instead of deleting
|
I noticed Ansible removes the temporary script using a semicolon to separate the bash commands. Here is an example command:

```
EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/devuser/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 build /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1430847489.81-75617096172775/docker; rm -rf /home/ec2-user/.ansible/tmp/ansible-tmp-1430847489.81-75617096172775/ >/dev/null 2>&1'
```

Is there a way to tell Ansible to replace the semicolon with a double ampersand, or to tell it to save the script or output its contents when running ansible-playbook? I'm trying to debug an error in this script, and right now the only thing that appears is this:

```
failed: [build] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
```
|
ansible
| 42
| 24,163
| 1
|
https://stackoverflow.com/questions/30060164/save-temporary-ansible-shell-scripts-instead-of-deleting
|
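Ansible has a supported switch for exactly the request above: with `ANSIBLE_KEEP_REMOTE_FILES=1` set, the trailing `rm -rf` is skipped and the generated module files stay under `~/.ansible/tmp/` on the target for inspection:

```
# Keep the generated module files on the remote host for debugging
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -vvv playbook.yml
```

The saved file can then be run by hand on the target (e.g. `python ~/.ansible/tmp/ansible-tmp-*/docker`) to see the full traceback that the playbook output swallows.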
29,986,794
|
Ansible: How can I update the system CentOS with Ansible
|
I am trying to update CentOS systems with Ansible. Unfortunately I am not able to do that. I already tried:

```yaml
- name: install updates
  yum: update_cache=yes
  when: ansible_os_family == "RedHat"
```

which isn't working, and:

```yaml
- name: install updates
  yum: name=* state=latest
  when: ansible_os_family == "RedHat"
```

The last task works, but is it true that the task updates the system?
|
centos, ansible, yum
| 42
| 108,547
| 1
|
https://stackoverflow.com/questions/29986794/ansible-how-can-i-update-the-system-centos-with-ansible
|
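To the question above: yes. `name=* state=latest` is the yum module's equivalent of `yum update`, whereas `update_cache=yes` on its own only refreshes the repository metadata without installing anything. A sketch of the two combined in multi-line form:

```yaml
# Sketch: refresh metadata and upgrade every installed package,
# equivalent to running "yum update" on the host.
- name: update all packages
  yum:
    name: '*'
    state: latest
    update_cache: yes
  when: ansible_os_family == "RedHat"
```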
71,585,303
|
How can I manage keyring files in trusted.gpg.d with ansible playbook since apt-key is deprecated?
|
Before apt-key was deprecated, I was using Ansible playbooks to add and update keys on my servers. At the moment, apt-key no longer updates the keys. In a few searches, I found that I need to use gpg now. However, I have many servers and I don't want to do this manually for each one of them. Is there a way to manage my keyrings with gpg using Ansible? Here are my Ansible tasks, with the deprecated apt-key:

```yaml
- apt_key:
    url: "[URL]"
    state: present

- apt_repository:
    repo: "deb [URL] {{ ansible_distribution_release }}/ {{ ansible_distribution_release }} contrib"
    state: present
    filename: "treasure-data"  # Name of the pre-compiled fluentd-agent
```

I tried apt-key update but it is not working for me. If a key already exists but is expired, it doesn't update it anymore.
|
ansible, gnupg, apt, gpg-signature, apt-key
| 42
| 41,446
| 3
|
https://stackoverflow.com/questions/71585303/how-can-i-manage-keyring-files-in-trusted-gpg-d-with-ansible-playbook-since-apt
|
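A common apt-key-free pattern for the question above is to download the key into its own keyring file and pin the repository to it with `signed-by` (a sketch; the URL and paths are hypothetical, since the originals are elided above, and ASCII-armored `.asc` keyrings need a reasonably recent apt):

```yaml
# Sketch: per-repository keyring instead of the shared trusted.gpg.d.
- name: download repository key into its own keyring file
  get_url:
    url: "https://packages.example.com/GPG-KEY"   # hypothetical URL
    dest: /etc/apt/keyrings/treasure-data.asc
    mode: "0644"

- name: add repository pinned to that keyring
  apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/treasure-data.asc] https://packages.example.com/apt {{ ansible_distribution_release }} contrib"   # hypothetical URL
    state: present
    filename: "treasure-data"
```

Because `get_url` always fetches the current key file, re-running the play refreshes an expired key, which `apt-key update` no longer does; `/etc/apt/keyrings` may need to be created first with a `file` task on older distributions.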
41,010,378
|
Ansible shell module returns error when grep results are empty
|
I am using Ansible's shell module to find a particular string and store it in a variable. But if grep does not find anything, I get an error. Example:

```yaml
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  check_mode: no
```

When I run this Ansible playbook, if the http_status string is not there, the playbook stops. I am not getting stderr. How can I make Ansible run without interruption even if the string is not found?
|
grep, ansible
| 41
| 41,601
| 2
|
https://stackoverflow.com/questions/41010378/ansible-shell-module-returns-error-when-grep-results-are-empty
|
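grep exits 1 when nothing matches and greater than 1 on real errors, so the failure condition can be narrowed to real errors only. A sketch based on the task from the question above:

```yaml
- name: Get the http_status
  ansible.builtin.shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  check_mode: false
  failed_when: cmdln.rc not in [0, 1]  # rc 1 = no match (fine), rc > 1 = real grep error
```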
26,256,227
|
Ansible with multiple SSH key pair
|
I am new to Ansible. I am able to test it and it's working fine with my test requirement. To make the connection between the management node and the client node I am using an already created SSH key pair. How can I use another node with a different SSH key pair? For reference, I am considering 3 EC2 instances with different key pairs.
|
Ansible with multiple SSH key pair I am new to Ansible. I am able to test it and it's working fine with my test requirement. To make the connection between the management node and the client node I am using an already created SSH key pair. How can I use another node with a different SSH key pair? For reference, I am considering 3 EC2 instances with different key pairs.
|
ansible
| 41
| 36,868
| 2
|
https://stackoverflow.com/questions/26256227/ansible-with-multiple-ssh-key-pair
|
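One common pattern for the multiple-key-pair question above is a per-host ansible_ssh_private_key_file in the inventory. Host names, addresses, and key paths below are placeholders:

```yaml
all:
  hosts:
    server1:
      ansible_host: 203.0.113.10
      ansible_ssh_private_key_file: ~/.ssh/key-pair-1.pem
    server2:
      ansible_host: 203.0.113.11
      ansible_ssh_private_key_file: ~/.ssh/key-pair-2.pem
```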
22,070,232
|
How to get an Ansible check to run only once in a playbook?
|
As a safeguard against using an outdated playbook, I'd like to ensure that I have an updated copy of the git checkout before Ansible is allowed to modify anything on the servers. This is how I've attempted to do it. This action is located in a file included by all play books: - name: Ensure local git repository is up-to-date local_action: git pull register: command_result failed_when: "'Updating' in command_result.stdout" The problem is that this command is run once for each node Ansible connects to, instead of only once for each playbook run. How can I avoid that?
|
How to get an Ansible check to run only once in a playbook? As a safeguard against using an outdated playbook, I'd like to ensure that I have an updated copy of the git checkout before Ansible is allowed to modify anything on the servers. This is how I've attempted to do it. This action is located in a file included by all play books: - name: Ensure local git repository is up-to-date local_action: git pull register: command_result failed_when: "'Updating' in command_result.stdout" The problem is that this command is run once for each node Ansible connects to, instead of only once for each playbook run. How can I avoid that?
|
git, ansible
| 41
| 76,615
| 2
|
https://stackoverflow.com/questions/22070232/how-to-get-an-ansible-check-to-run-only-once-in-a-playbook
|
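For the run-once question above, run_once: true combined with delegate_to: localhost executes the check a single time per play instead of once per connected host. A sketch, assuming the playbook directory is a git checkout:

```yaml
- name: Ensure local git repository is up-to-date
  ansible.builtin.command: git pull
  delegate_to: localhost   # run on the control machine
  run_once: true           # only once per play, not once per host
  register: command_result
  failed_when: "'Updating' in command_result.stdout"
```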
41,094,864
|
Is it possible to write Ansible hosts/inventory files in YAML?
|
In the best practices page, there is an example that uses hosts.yml for hosts files: In the docs, however, I can only find the INI syntax for writing hosts files. What is the syntax for the inventory files in YAML?
|
Is it possible to write Ansible hosts/inventory files in YAML? In the best practices page, there is an example that uses hosts.yml for hosts files: In the docs, however, I can only find the INI syntax for writing hosts files. What is the syntax for the inventory files in YAML?
|
ansible, ansible-inventory
| 41
| 65,534
| 3
|
https://stackoverflow.com/questions/41094864/is-it-possible-to-write-ansible-hosts-inventory-files-in-yaml
|
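A minimal YAML inventory equivalent of the INI syntax, for the question above (group and host names are illustrative):

```yaml
all:
  children:
    webservers:
      hosts:
        web1.example.com:
          ansible_host: 10.0.0.1
      vars:
        http_port: 80
```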
45,013,306
|
How to Make Ansible variable mandatory
|
I'm looking for a way to make Ansible check playbooks for required and mandatory variables before playbook execution, like: - name: create remote app directory hierarchy file: path: "/opt/{{ app_name | required }}/" state: directory owner: "{{ app_user | required }}" group: "{{ app_user_group | required }}" ... and raise an error message if a variable is undefined, like: please set "app_name" variable before run (file XXX/main.yml:99)
|
How to Make Ansible variable mandatory I'm looking for a way to make Ansible check playbooks for required and mandatory variables before playbook execution, like: - name: create remote app directory hierarchy file: path: "/opt/{{ app_name | required }}/" state: directory owner: "{{ app_user | required }}" group: "{{ app_user_group | required }}" ... and raise an error message if a variable is undefined, like: please set "app_name" variable before run (file XXX/main.yml:99)
|
ansible
| 40
| 29,978
| 6
|
https://stackoverflow.com/questions/45013306/how-to-make-ansible-variable-mandatory
|
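Ansible ships a mandatory filter that fails the task with a clear message when a variable is undefined, which covers the question above without a custom required filter:

```yaml
- name: create remote app directory hierarchy
  ansible.builtin.file:
    path: "/opt/{{ app_name | mandatory }}/"
    state: directory
    owner: "{{ app_user | mandatory }}"
    group: "{{ app_user_group | mandatory }}"
```

An ansible.builtin.assert task at the top of the play is an alternative when a custom error message is wanted.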
28,553,307
|
Ansible Using Custom ssh config File
|
I have a custom SSH config file that I typically use as follows ssh -F ~/.ssh/client_1_config amazon-server-01 Is it possible to assign Ansible to use this config for certain groups? It already has the keys and ports and users all set up. I have this sort of config for multiple clients, and would like to keep the config separate if possible.
|
Ansible Using Custom ssh config File I have a custom SSH config file that I typically use as follows ssh -F ~/.ssh/client_1_config amazon-server-01 Is it possible to assign Ansible to use this config for certain groups? It already has the keys and ports and users all set up. I have this sort of config for multiple clients, and would like to keep the config separate if possible.
|
ansible
| 40
| 72,948
| 4
|
https://stackoverflow.com/questions/28553307/ansible-using-custom-ssh-config-file
|
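For the custom SSH config question above, ansible_ssh_common_args can carry the -F flag per group. A sketch, assuming a group named client1 (the group_vars file name is an assumption):

```yaml
# group_vars/client1.yml
ansible_ssh_common_args: "-F ~/.ssh/client_1_config"
```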
29,512,443
|
Register variables in with_items loop in Ansible playbook
|
I've got a dictionary with different names like vars: images: - foo - bar Now, I want to checkout repositories and afterwards build docker images only when the source has changed. Since getting the source and building the image is the same for all items except the name I created the tasks with with_items: images and try to register the result with: register: "{{ item }}" and also tried register: "src_{{ item }}" Then I tried the following condition when: "{{ item }}|changed" and when: "{{ src_item }}|changed" This always results in fatal: [piggy] => |changed expects a dictionary So how can I properly save the results of the operations in variable names based on the list I iterate over? Update: I would like to have something like that: - hosts: all vars: images: - foo - bar tasks: - name: get src git: repo: git@foobar.com/repo.git dest: /tmp/repo register: "{{ item }}_src" with_items: images - name: build image shell: "docker build -t repo ." args: chdir: /tmp/repo when: "{{ item }}_src"|changed register: "{{ item }}_image" with_items: images - name: push image shell: "docker push repo" when: "{{ item }}_image"|changed with_items: images
|
Register variables in with_items loop in Ansible playbook I've got a dictionary with different names like vars: images: - foo - bar Now, I want to checkout repositories and afterwards build docker images only when the source has changed. Since getting the source and building the image is the same for all items except the name I created the tasks with with_items: images and try to register the result with: register: "{{ item }}" and also tried register: "src_{{ item }}" Then I tried the following condition when: "{{ item }}|changed" and when: "{{ src_item }}|changed" This always results in fatal: [piggy] => |changed expects a dictionary So how can I properly save the results of the operations in variable names based on the list I iterate over? Update: I would like to have something like that: - hosts: all vars: images: - foo - bar tasks: - name: get src git: repo: git@foobar.com/repo.git dest: /tmp/repo register: "{{ item }}_src" with_items: images - name: build image shell: "docker build -t repo ." args: chdir: /tmp/repo when: "{{ item }}_src"|changed register: "{{ item }}_image" with_items: images - name: push image shell: "docker push repo" when: "{{ item }}_image"|changed with_items: images
|
loops, variables, ansible
| 40
| 61,586
| 1
|
https://stackoverflow.com/questions/29512443/register-variables-in-with-items-loop-in-ansible-playbook
|
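When register is used inside a loop, Ansible stores all iterations under a single variable's results list, so per-item variable names are unnecessary. A sketch of the question's build step driven by that list (the repo URL and paths are the question's own placeholders):

```yaml
- name: get src
  ansible.builtin.git:
    repo: "git@foobar.com/repo.git"
    dest: "/tmp/{{ item }}"
  register: src_results        # src_results.results holds one entry per item
  with_items: "{{ images }}"

- name: build image only where the checkout changed
  ansible.builtin.shell: docker build -t {{ item.item }} .
  args:
    chdir: "/tmp/{{ item.item }}"   # item.item is the original loop value
  when: item.changed
  with_items: "{{ src_results.results }}"
```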
25,129,728
|
Ansible. override single dictionary key
|
I am using Ansible to manage configuration for production as well as for a Vagrant box. I have a file with default values: group_vars/all . --- env: prod wwwuser: www-data db: root_pwd: root_pwd pdo_driver: pdo_mysql host: localhost name: test user: test pwd: test charset: utf8 domain: somedomain projectdir: /var/www/application webrootdir: "{{ projectdir }}/web" In host_vars/vagrantbox I want to have something like: db: root_pwd: super_easy_password But this completely overrides the db dictionary, while I want to override a single key. How can I achieve that? UPDATE 1 Just checked with ansible.cfg: [defaults] host_key_checking=false hash_behaviour=merge groups_vars/all db: root_pwd: some_strong_pwd pdo_driver: pdo_mysql host: localhost name: dbname user: dbuser pwd: some password charset: utf8 host_vars/vagrantbox db: root_pwd: root I am getting the following error: One or more undefined variables: 'dict object' has no attribute 'name' What am I doing wrong?
|
Ansible. override single dictionary key I am using Ansible to manage configuration for production as well as for a Vagrant box. I have a file with default values: group_vars/all . --- env: prod wwwuser: www-data db: root_pwd: root_pwd pdo_driver: pdo_mysql host: localhost name: test user: test pwd: test charset: utf8 domain: somedomain projectdir: /var/www/application webrootdir: "{{ projectdir }}/web" In host_vars/vagrantbox I want to have something like: db: root_pwd: super_easy_password But this completely overrides the db dictionary, while I want to override a single key. How can I achieve that? UPDATE 1 Just checked with ansible.cfg: [defaults] host_key_checking=false hash_behaviour=merge groups_vars/all db: root_pwd: some_strong_pwd pdo_driver: pdo_mysql host: localhost name: dbname user: dbuser pwd: some password charset: utf8 host_vars/vagrantbox db: root_pwd: root I am getting the following error: One or more undefined variables: 'dict object' has no attribute 'name' What am I doing wrong?
|
python, yaml, ansible
| 40
| 32,067
| 4
|
https://stackoverflow.com/questions/25129728/ansible-override-single-dictionary-key
|
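Without switching hash_behaviour to merge globally, the combine filter can overlay a single key onto the default dictionary. A sketch, where db_overrides is a hypothetical per-host variable introduced for the overlay:

```yaml
# host_vars/vagrantbox would carry only the key to change:
# db_overrides:
#   root_pwd: super_easy_password

- name: Merge per-host overrides into the default db dict
  ansible.builtin.set_fact:
    db: "{{ db | combine(db_overrides | default({})) }}"
```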
38,187,557
|
Run Ansible playbook without inventory
|
Consider if I want to check something quickly. Something that doesn't really need connecting to a host (to check how ansible itself works, like, including of handlers or something). Or localhost will do. I'd probably give up on this, but man page says: -i PATH, --inventory=PATH The PATH to the inventory, which defaults to /etc/ansible/hosts. Alternatively , you can use a comma-separated list of hosts or a single host with a trailing comma host,. And when I run ansible-playbook without inventory, it says: [WARNING]: provided hosts list is empty, only localhost is available Is there an easy way to run playbook against no host, or probably localhost ?
|
Run Ansible playbook without inventory Consider if I want to check something quickly. Something that doesn't really need connecting to a host (to check how ansible itself works, like, including of handlers or something). Or localhost will do. I'd probably give up on this, but man page says: -i PATH, --inventory=PATH The PATH to the inventory, which defaults to /etc/ansible/hosts. Alternatively , you can use a comma-separated list of hosts or a single host with a trailing comma host,. And when I run ansible-playbook without inventory, it says: [WARNING]: provided hosts list is empty, only localhost is available Is there an easy way to run playbook against no host, or probably localhost ?
|
ansible, ansible-inventory
| 40
| 40,356
| 3
|
https://stackoverflow.com/questions/38187557/run-ansible-playbook-without-inventory
|
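For the inventory-less question above, a play targeting localhost with a local connection runs without any inventory file; from the command line, ansible-playbook -i localhost, -c local play.yml (note the trailing comma) achieves the same. A minimal play:

```yaml
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ansible.builtin.debug:
        msg: "no inventory needed"
```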
39,938,323
|
Jinja convert string to integer
|
I am trying to convert a string that has been parsed using a regex into a number so I can multiply it, using Jinja2. This file is a template to be used within an ansible script. I have a series of items which all take the form of <word><number> such as aaa01 , aaa141 , bbb05 . The idea was to parse the word and number (ignoring leading zeros) and use them later in the template. I wanted to manipulate the number by multiplication and use it. Below is what I have done so far ``` {% macro get_host_number() -%} {{ item | regex_replace('^\D*[0]?(\d*)$', '\\1') }} {%- endmacro %} {% macro get_host_name() -%} {{ item | regex_replace('^(\D*)\d*$', '\\1') }} {%- endmacro %} {% macro get_host_range(name, number) -%} {% if name=='aaa' %} {{ ((number*5)+100) | int | abs }} {% elif name=='bbb' %} {{ ((number*5)+200) | int | abs }} {% else %} {{ ((number*5)+300) | int | abs }} {% endif %} {%- endmacro %} {% set number = get_host_number() %} {% set name = get_host_name() %} {% set value = get_host_range(name, number) %} Name: {{ name }} Number: {{ number }} Type: {{ value }} With the above template I am getting an error coercing to Unicode: need string or buffer, int found which I think is telling me it cannot convert the string to integer, however I do not understand why. I have seen examples doing this and working.
|
Jinja convert string to integer I am trying to convert a string that has been parsed using a regex into a number so I can multiply it, using Jinja2. This file is a template to be used within an ansible script. I have a series of items which all take the form of <word><number> such as aaa01 , aaa141 , bbb05 . The idea was to parse the word and number (ignoring leading zeros) and use them later in the template. I wanted to manipulate the number by multiplication and use it. Below is what I have done so far ``` {% macro get_host_number() -%} {{ item | regex_replace('^\D*[0]?(\d*)$', '\\1') }} {%- endmacro %} {% macro get_host_name() -%} {{ item | regex_replace('^(\D*)\d*$', '\\1') }} {%- endmacro %} {% macro get_host_range(name, number) -%} {% if name=='aaa' %} {{ ((number*5)+100) | int | abs }} {% elif name=='bbb' %} {{ ((number*5)+200) | int | abs }} {% else %} {{ ((number*5)+300) | int | abs }} {% endif %} {%- endmacro %} {% set number = get_host_number() %} {% set name = get_host_name() %} {% set value = get_host_range(name, number) %} Name: {{ name }} Number: {{ number }} Type: {{ value }} With the above template I am getting an error coercing to Unicode: need string or buffer, int found which I think is telling me it cannot convert the string to integer, however I do not understand why. I have seen examples doing this and working.
|
python, ansible, jinja2
| 39
| 121,199
| 1
|
https://stackoverflow.com/questions/39938323/jinja-convert-string-to-integer
|
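The error in the question above comes from macros returning strings: {{ }} output is always text, so the value has to be cast with | int before arithmetic. A sketch of the fix inside the template, reusing the question's macro names:

```jinja
{% set number = get_host_number() | int %}
{% set name = get_host_name() | trim %}
{{ ((number * 5) + 100) | abs }}
```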
28,119,521
|
Ansible include task only if file exists
|
I'm trying to include a file only if it exists. This allows for custom "tasks/roles" between existing "tasks/roles" if needed by the user of my role. I found this: - include: ... when: condition But the Ansible docs state that: "All the tasks get evaluated, but the conditional is applied to each and every task" - [URL] So - stat: path=/home/user/optional/file.yml register: optional_file - include: /home/user/optional/file.yml when: optional_file.stat.exists Will fail if the file being included doesn't exist. I guess there might be another mechanism for allowing a user to add tasks to an existing recipe. I can't let the user to add a role after mine, because they wouldn't have control of the order: their role will be executed after mine.
|
Ansible include task only if file exists I'm trying to include a file only if it exists. This allows for custom "tasks/roles" between existing "tasks/roles" if needed by the user of my role. I found this: - include: ... when: condition But the Ansible docs state that: "All the tasks get evaluated, but the conditional is applied to each and every task" - [URL] So - stat: path=/home/user/optional/file.yml register: optional_file - include: /home/user/optional/file.yml when: optional_file.stat.exists Will fail if the file being included doesn't exist. I guess there might be another mechanism for allowing a user to add tasks to an existing recipe. I can't let the user to add a role after mine, because they wouldn't have control of the order: their role will be executed after mine.
|
ansible
| 39
| 91,832
| 9
|
https://stackoverflow.com/questions/28119521/ansible-include-task-only-if-file-exists
|
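The dynamic include_tasks (unlike the old static include) is evaluated at run time, so the stat guard from the question works with it:

```yaml
- ansible.builtin.stat:
    path: /home/user/optional/file.yml
  register: optional_file

- ansible.builtin.include_tasks: /home/user/optional/file.yml
  when: optional_file.stat.exists   # only loaded when the file is present
```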
39,347,379
|
ansible run command on remote host in background
|
I am trying to start the filebeat process (or for that matter any other process which will run continuously on demand) on multiple hosts using Ansible. I don't want Ansible to wait while the process keeps running; I want Ansible to fire and forget, exit, and keep the remote process running in the background. I've tried the options below: --- - hosts: filebeat tasks: - name: start filebeat option a) command: filebeat -c filebeat.yml & option b) command: nohup filebeat -c filebeat.yml & option c) shell: filebeat -c filebeat.yml & async: 0 //Tried without as well. If it's > 0 then it only waits for that much time and terminates the filebeat process on the remote host and comes out. poll: 0
|
ansible run command on remote host in background I am trying to start the filebeat process (or for that matter any other process which will run continuously on demand) on multiple hosts using Ansible. I don't want Ansible to wait while the process keeps running; I want Ansible to fire and forget, exit, and keep the remote process running in the background. I've tried the options below: --- - hosts: filebeat tasks: - name: start filebeat option a) command: filebeat -c filebeat.yml & option b) command: nohup filebeat -c filebeat.yml & option c) shell: filebeat -c filebeat.yml & async: 0 //Tried without as well. If it's > 0 then it only waits for that much time and terminates the filebeat process on the remote host and comes out. poll: 0
|
ansible
| 39
| 78,229
| 3
|
https://stackoverflow.com/questions/39347379/ansible-run-command-on-remote-host-in-background
|
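For the background-process question above, fire-and-forget needs an async value greater than zero together with poll: 0 (async: 0 disables async mode, which is why option c above fails). A sketch:

```yaml
- name: start filebeat in the background and move on
  ansible.builtin.shell: filebeat -c filebeat.yml
  async: 2592000   # upper bound on runtime, in seconds (30 days here)
  poll: 0          # do not wait for completion
```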
33,084,347
|
Pass cert password to Nginx with https site during restart
|
I configured nginx installation and configuration (together with setting up SSL certificates for an https site) via ansible . The SSL certificates are protected by passphrases. I want to write an Ansible task which restarts nginx. The problem is the following. Normally, nginx with an https site inside asks for the PEM pass phrase during restart. Ansible doesn't ask for that passphrase during execution of the playbook. There is a solution with storing the decrypted cert and key in some private directory. But I don't really want to leave my cert and key somewhere unencrypted. How can I pass the password to nginx (or to openssl) during restart via ansible ? The perfect scenario is the following: Ansible asks for the SSL password (via vars_prompt ). Another option is to use ansible vault. Ansible restarts nginx, and when nginx asks for the PEM pass phrase , ansible passes the password to nginx. Is it possible?
|
Pass cert password to Nginx with https site during restart I configured nginx installation and configuration (together with setting up SSL certificates for an https site) via ansible . The SSL certificates are protected by passphrases. I want to write an Ansible task which restarts nginx. The problem is the following. Normally, nginx with an https site inside asks for the PEM pass phrase during restart. Ansible doesn't ask for that passphrase during execution of the playbook. There is a solution with storing the decrypted cert and key in some private directory. But I don't really want to leave my cert and key somewhere unencrypted. How can I pass the password to nginx (or to openssl) during restart via ansible ? The perfect scenario is the following: Ansible asks for the SSL password (via vars_prompt ). Another option is to use ansible vault. Ansible restarts nginx, and when nginx asks for the PEM pass phrase , ansible passes the password to nginx. Is it possible?
|
nginx, https, openssl, ssl-certificate, ansible
| 39
| 92,123
| 2
|
https://stackoverflow.com/questions/33084347/pass-cert-password-to-nginx-with-https-site-during-restart
|
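One option for the question above: nginx (since 1.7.3) can read the pass phrase from a file via the ssl_password_file directive, so Ansible can write that file from a vault-encrypted variable with tight permissions just before restarting. The paths and the vault_ssl_passphrase variable name below are assumptions:

```yaml
- name: Write the PEM pass phrase from vault to the file nginx reads
  ansible.builtin.copy:
    content: "{{ vault_ssl_passphrase }}\n"   # hypothetical vaulted variable
    dest: /etc/nginx/ssl/site.pass            # referenced by ssl_password_file
    owner: root
    mode: "0600"
  notify: restart nginx
```

The key material itself then stays encrypted in the repo; only the passphrase file lives on the host, root-readable.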
33,543,551
|
Is there with_fileglob that works remotely in ansible?
|
Is there with_fileglob that works remotely in ansible? Mainly I do want to use something similar with the with_fileglob but that will glob the files on the remote/target machine, not on the one that is running ansible.
|
Is there with_fileglob that works remotely in ansible? Is there with_fileglob that works remotely in ansible? Mainly I do want to use something similar with the with_fileglob but that will glob the files on the remote/target machine, not on the one that is running ansible.
|
ansible
| 39
| 35,201
| 4
|
https://stackoverflow.com/questions/33543551/is-there-with-fileglob-that-works-remotely-in-ansible
|
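There is no remote with_fileglob, but the find module runs on the target and returns the matching paths, which covers the question above. The paths and pattern are illustrative:

```yaml
- name: Glob files on the remote host
  ansible.builtin.find:
    paths: /etc/nginx/conf.d
    patterns: "*.conf"
  register: remote_files

- ansible.builtin.debug:
    msg: "{{ item.path }}"
  with_items: "{{ remote_files.files }}"
```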
42,579,168
|
Ansible create a virtualenv using the venv module
|
How can one use Ansible to create a virtualenv using the venv module from Python3's standard library? Manually, one would do this to create a venv (virtual environment): python3 -m venv <venv-name> How do I do this using Ansible?
|
Ansible create a virtualenv using the venv module How can one use Ansible to create a virtualenv using the venv module from Python3's standard library? Manually, one would do this to create a venv (virtual environment): python3 -m venv <venv-name> How do I do this using Ansible?
|
python-3.x, ansible, python-venv
| 39
| 36,920
| 3
|
https://stackoverflow.com/questions/42579168/ansible-create-a-virtualenv-using-the-venv-module
|
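Two common ways to answer the venv question above; the target path and package name are assumptions:

```yaml
# Plain command, made idempotent with creates:
- name: Create a venv with the stdlib venv module
  ansible.builtin.command: python3 -m venv /opt/myapp/venv
  args:
    creates: /opt/myapp/venv/bin/activate

# Or let the pip module create it as a side effect of installing:
- name: Create a venv while installing packages
  ansible.builtin.pip:
    name: requests                         # hypothetical package
    virtualenv: /opt/myapp/venv
    virtualenv_command: "python3 -m venv"
```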
32,475,881
|
How to disable json output from specific ansible commands?
|
Some ansible commands produce json output that's barely readable for humans. It distracts people when they need to check if playbook executed correctly and causes confusion. Example commands are shell and replace - they generate a lot of useless noise. How can I prevent this? Simple ok | changed | failed is enough. I don't need the whole JSON.
|
How to disable json output from specific ansible commands? Some ansible commands produce json output that's barely readable for humans. It distracts people when they need to check if playbook executed correctly and causes confusion. Example commands are shell and replace - they generate a lot of useless noise. How can I prevent this? Simple ok | changed | failed is enough. I don't need the whole JSON.
|
ansible
| 39
| 68,374
| 2
|
https://stackoverflow.com/questions/32475881/how-to-disable-json-output-from-specific-ansible-commands
|
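Two levers for the noisy-output question above: no_log: true suppresses a task's result entirely, and the YAML stdout callback renders results readably instead of one-line JSON. The command below is a placeholder:

```yaml
# In the task: hide the result completely
- ansible.builtin.shell: some_noisy_command
  no_log: true

# Or in ansible.cfg, render all task results as YAML:
# [defaults]
# stdout_callback = yaml
```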
50,456,997
|
Ansible with_items vs loop
|
What is the difference between using with_items vs loop in Ansible?
|
Ansible with_items vs loop What is the difference between using with_items vs loop in Ansible?
|
ansible
| 39
| 64,137
| 2
|
https://stackoverflow.com/questions/50456997/ansible-with-items-vs-loop
|
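The documented equivalence for the question above: with_items flattens one level of nesting, while loop does not, so these two tasks behave the same (mylist is a placeholder variable):

```yaml
- ansible.builtin.debug:
    msg: "{{ item }}"
  with_items: "{{ mylist }}"

- ansible.builtin.debug:
    msg: "{{ item }}"
  loop: "{{ mylist | flatten(levels=1) }}"
```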
30,036,126
|
How can I run only ansible tasks with multiple tags?
|
Imagine this ansible playbook: - name: debug foo debug: msg=foo tags: - foo - name: debug bar debug: msg=bar tags: - bar - name: debug baz debug: msg=baz tags: - foo - bar How can I run only the debug baz task? I want to say only run tasks which are tagged with foo AND bar . Is that possible? I tried this, but it will run all 3 tasks: ansible-playbook foo.yml -t foo,bar
|
How can I run only ansible tasks with multiple tags? Imagine this ansible playbook: - name: debug foo debug: msg=foo tags: - foo - name: debug bar debug: msg=bar tags: - bar - name: debug baz debug: msg=baz tags: - foo - bar How can I run only the debug baz task? I want to say only run tasks which are tagged with foo AND bar . Is that possible? I tried this, but it will run all 3 tasks: ansible-playbook foo.yml -t foo,bar
|
ansible
| 39
| 59,501
| 5
|
https://stackoverflow.com/questions/30036126/how-can-i-run-only-ansible-tasks-with-multiple-tags
|
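Tags passed on the command line are OR-ed, so -t foo,bar matches any task carrying either tag. A common workaround for AND semantics is an extra combined tag, invented here for illustration:

```yaml
- name: debug baz
  ansible.builtin.debug:
    msg: baz
  tags:
    - foo
    - bar
    - foo_and_bar   # combined tag: run only this task with -t foo_and_bar
```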
35,130,051
|
Order of notify handlers
|
I have a task: - name: uploads docker configuration file template: src: 'docker.systemd.j2' dest: '/etc/systemd/system/docker.service' notify: - daemon reload - restart docker In Ansible playbook's documentation , there is a sentence: Notify handlers are always run in the order written. So it is expected that daemon reload will be run before restart docker , but in the logs I have: TASK [swarm/docker : uploads docker configuration file] ************************ … NOTIFIED HANDLER daemon reload NOTIFIED HANDLER restart docker … RUNNING HANDLER [swarm/docker : restart docker] ******************************** … RUNNING HANDLER [swarm/docker : daemon reload] ********************************* … There are no more "NOTIFIED HANDLER" in the logs. Can anyone explain what I'm doing wrong? :(
|
Order of notify handlers I have a task: - name: uploads docker configuration file template: src: 'docker.systemd.j2' dest: '/etc/systemd/system/docker.service' notify: - daemon reload - restart docker In Ansible playbook's documentation , there is a sentence: Notify handlers are always run in the order written. So it is expected that daemon reload will be run before restart docker , but in the logs I have: TASK [swarm/docker : uploads docker configuration file] ************************ … NOTIFIED HANDLER daemon reload NOTIFIED HANDLER restart docker … RUNNING HANDLER [swarm/docker : restart docker] ******************************** … RUNNING HANDLER [swarm/docker : daemon reload] ********************************* … There are no more "NOTIFIED HANDLER" in the logs. Can anyone explain what I'm doing wrong? :(
|
ansible
| 38
| 23,217
| 2
|
https://stackoverflow.com/questions/35130051/order-of-notify-handlers
|
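Handlers run in the order they are defined in the handlers section, not in the order they were notified, so reordering the definitions fixes the question above. A sketch (the handler bodies are assumptions):

```yaml
handlers:
  - name: daemon reload          # defined first, therefore always runs first
    ansible.builtin.command: systemctl daemon-reload

  - name: restart docker
    ansible.builtin.service:
      name: docker
      state: restarted
```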
31,566,568
|
Double loop Ansible
|
I have an object like that objs: - { key1: value1, key2: [value2, value3] } - { key1: value4, key2: [value5, value6] } And I'd like to create the following files value1/value2 value1/value3 value4/value5 value4/value6 but I have no idea how to do a double loop using with_items
|
Double loop Ansible I have an object like that objs: - { key1: value1, key2: [value2, value3] } - { key1: value4, key2: [value5, value6] } And I'd like to create the following files value1/value2 value1/value3 value4/value5 value4/value6 but I have no idea how to do a double loop using with_items
|
ansible
| 38
| 127,083
| 5
|
https://stackoverflow.com/questions/31566568/double-loop-ansible
|
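with_subelements iterates the outer list crossed with one of its list-valued keys, which produces exactly the pairs the question asks for. The base directory is an assumption:

```yaml
- name: create key1/key2 file pairs
  ansible.builtin.file:
    path: "/some/base/{{ item.0.key1 }}/{{ item.1 }}"  # item.0 = outer dict, item.1 = key2 element
    state: touch
  with_subelements:
    - "{{ objs }}"
    - key2
```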
53,098,493
|
How to nicely split on multiple lines long conditionals with OR on ansible?
|
I already know that if you have long conditionals with and between them you can use lists to split them on multiple lines. Still, I am not aware of any solution for the case where you have OR between them. A practical example from real life: when: ansible_user_dir is not defined or ansible_python is not defined or ansible_processor_vcpus is not defined This line is ugly and hard to read, and clearly does not fit within a 79-column limit. How can we rewrite it to make it easier to read?
|
How to nicely split on multiple lines long conditionals with OR on ansible? I already know that if you have long conditionals with and between them you can use lists to split them on multiple lines. Still, I am not aware of any solution for the case where you have OR between them. A practical example from real life: when: ansible_user_dir is not defined or ansible_python is not defined or ansible_processor_vcpus is not defined This line is ugly and hard to read, and clearly does not fit within a 79-column limit. How can we rewrite it to make it easier to read?
|
ansible, ansible-lint
| 38
| 39,544
| 2
|
https://stackoverflow.com/questions/53098493/how-to-nicely-split-on-multiple-lines-long-conditionals-with-or-on-ansible
|
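A YAML folded scalar joins its lines with single spaces, which keeps the OR conditional from the question readable without changing its meaning:

```yaml
when: >
  ansible_user_dir is not defined
  or ansible_python is not defined
  or ansible_processor_vcpus is not defined
```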
31,775,099
|
How to set environmental variables using Ansible
|
I need to set variables like JAVA_HOME and update PATH . There are a number of ways of doing this. One way is to update the /etc/environment file and include a line for JAVA_HOME using the lineinfile module, and then run the command source /etc/environment directly on the guest OS (CentOS in my case). Another way is to execute the export command, e.g. export JAVA_HOME=/usr/java/jre1.8.0_51 export PATH=$PATH:$JAVA_HOME Is there a cleaner way to do this, as all these require manipulating files and running commands directly on the OS to update the environment variables?
|
How to set environmental variables using Ansible I need to set variables like JAVA_HOME and update PATH . There are a number of ways of doing this. One way is to update the /etc/environment file and include a line for JAVA_HOME using the lineinfile module, and then run the command source /etc/environment directly on the guest OS (CentOS in my case). Another way is to execute the export command, e.g. export JAVA_HOME=/usr/java/jre1.8.0_51 export PATH=$PATH:$JAVA_HOME Is there a cleaner way to do this, as all these require manipulating files and running commands directly on the OS to update the environment variables?
|
ansible, java-home
| 38
| 64,972
| 3
|
https://stackoverflow.com/questions/31775099/how-to-set-environmental-variables-using-ansible
|
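For system-wide variables, dropping a file into /etc/profile.d avoids editing /etc/environment line by line; the Java path below is taken from the question:

```yaml
- name: Set JAVA_HOME for all login shells
  ansible.builtin.copy:
    dest: /etc/profile.d/java.sh
    content: |
      export JAVA_HOME=/usr/java/jre1.8.0_51
      export PATH=$PATH:$JAVA_HOME/bin
    mode: "0644"
```

When only one task needs the variable, the environment: keyword on that task is usually enough instead of a persistent OS-level change.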
29,001,505
|
Ansible: variable interpolation in task name
|
I cannot get this seemingly simple example to work in Ansible 1.8.3. The variable interpolation does not kick in for the task name. All the examples I have seen suggest this should work. Given that the variable is defined in the vars section, I expected the task name to print the value of the variable. Why doesn't this work? Even the example from the Ansible documentation does not print the variable value. --- - hosts: 127.0.0.1 gather_facts: no vars: vhost: "foo" tasks: - name: create a virtual host file for {{ vhost }} debug: msg="{{ vhost }}" This results in the following output: PLAY [127.0.0.1] ************************************************************** TASK: [create a virtual host file for {{ vhost }}] **************************** ok: [127.0.0.1] => { "msg": "foo" } PLAY RECAP ******************************************************************** 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0 Update This works with 1.7.2 but does not work with 1.8.3. So either this is a bug or a feature.
|
Ansible: variable interpolation in task name I cannot get this seemingly simple example to work in Ansible 1.8.3. The variable interpolation does not kick in for the task name. All the examples I have seen suggest this should work. Given that the variable is defined in the vars section, I expected the task name to print the value of the variable. Why doesn't this work? Even the example from the Ansible documentation does not print the variable value. --- - hosts: 127.0.0.1 gather_facts: no vars: vhost: "foo" tasks: - name: create a virtual host file for {{ vhost }} debug: msg="{{ vhost }}" This results in the following output: PLAY [127.0.0.1] ************************************************************** TASK: [create a virtual host file for {{ vhost }}] **************************** ok: [127.0.0.1] => { "msg": "foo" } PLAY RECAP ******************************************************************** 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0 Update This works with 1.7.2 but does not work with 1.8.3. So either this is a bug or a feature.
|
ansible
| 38
| 64,247
| 4
|
https://stackoverflow.com/questions/29001505/ansible-variable-interpolation-in-task-name
|
62,452,039
|
How to run docker-compose commands with ansible?
|
In ansible playbook I need to run docker-compose commands. How can I do it? I need to run command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
|
How to run docker-compose commands with ansible? In ansible playbook I need to run docker-compose commands. How can I do it? I need to run command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
|
docker, deployment, docker-compose, ansible
| 37
| 64,108
| 2
|
https://stackoverflow.com/questions/62452039/how-to-run-docker-compose-commands-with-ansible
|
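The simplest route for the question above is the command module; chdir below is an assumed project directory. (The community.docker collection also ships a dedicated docker_compose module as an alternative.)

```yaml
- name: Bring the stack up with both compose files
  ansible.builtin.command: >
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
  args:
    chdir: /opt/myapp   # assumed: directory containing the compose files
```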
41,424,999
|
Rendering Ansible template into the fact variable
|
Is there a way to render an Ansible template into a fact? I tried to find a solution but it looks like a temp file is the only way.
|
Rendering Ansible template into the fact variable Is there a way to render an Ansible template into a fact? I tried to find a solution but it looks like a temp file is the only way.
|
ansible, jinja2, ansible-template
| 37
| 30,951
| 2
|
https://stackoverflow.com/questions/41424999/rendering-ansible-template-into-the-fact-variable
|
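A template can be rendered directly into a variable with the template lookup, no temp file needed; note the lookup renders on the controller. The template file name is a placeholder:

```yaml
- name: Render a template straight into a fact
  ansible.builtin.set_fact:
    rendered: "{{ lookup('template', 'mytemplate.j2') }}"
```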
26,641,070
|
How do ansible host_vars work?
|
I created a repo to reproduce my scenario. Essentially we are loading an inventory with our hosts, we can override values per-host via the inventory without issue but would like to try and utilize host_vars. I'm not 100% clear on how host vars are matched to the host. I read the ansible repo for examples but cannot seem to get it to work as documented, so I'm looking for some scrutiny of our setup. When I run the command ansible-playbook -i ansible.inventory site.yml -c local in my example repo I expect the host_vars/{{ ansible_hostname }} file to be read and override anything set in the vars but that does not appear to be happening. Can someone please point me at a working example so I can see where we are going wrong?
|
ansible
| 37
| 94,423
| 2
|
https://stackoverflow.com/questions/26641070/how-do-ansible-host-vars-work
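A sketch of the matching rule (added commentary, not from the thread): host_vars files are matched by the name the host has in the inventory, not by the ansible_hostname fact, and the host_vars directory must sit next to the inventory file or the playbook.

```yaml
# Layout (beside the playbook or the inventory file):
#   ansible.inventory      -> contains:  [my-servers]  /  web1
#   host_vars/web1.yml     -> file name must equal the inventory name "web1",
#                             not the machine's ansible_hostname fact
some_setting: per-host-override   # contents of host_vars/web1.yml (illustrative)
```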
|
43,974,099
|
In Ansible, what's the difference between the service and the systemd modules?
|
In Ansible, what is the difference between the service and the systemd modules? The service module seems to include the systemd module so what's the point of having systemd by itself?
|
service, module, ansible, systemd
| 37
| 22,171
| 1
|
https://stackoverflow.com/questions/43974099/in-ansible-whats-the-diffence-between-the-service-and-the-systemd-modules
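A short illustration of the difference (added here; verify option availability for your version): the generic service module abstracts over init systems, while the systemd module exposes systemd-only switches such as daemon_reload.

```yaml
- name: Restart a unit, reloading changed unit files first
  systemd:
    name: nginx
    state: restarted
    daemon_reload: yes   # systemd-specific; no equivalent in the generic service module
```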
|
38,461,920
|
How can I check if a string exists in a file?
|
Is it possible to check if a string exists in a file using Ansible? I want to check if a user has access to a server. This can be done on the server using cat /etc/passwd | grep username , but I want Ansible to stop if the user is not there. I have tried to use the lineinfile but can't seem to get it to return. - name: find lineinfile: dest: /etc/passwd regexp: [user] state: present line: "user" The code above adds the user to the file if it is not there. All I want to do is check. I don't want to modify the file in any way, is this possible?
|
ansible, ansible-2.x
| 36
| 122,446
| 6
|
https://stackoverflow.com/questions/38461920/how-can-i-check-if-a-string-exists-in-a-file
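One read-only sketch (not from the thread; the username is illustrative) that never modifies /etc/passwd and stops the play when the user is missing:

```yaml
- name: Stop the play if the user is absent
  command: grep -q '^username:' /etc/passwd
  register: user_check
  changed_when: false             # purely a check, never reports a change
  failed_when: user_check.rc != 0
```

The getent module is another option for passwd lookups, if it is available in your Ansible version.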
|
39,731,999
|
How to add spaces at beginning of block in Ansible's blockinfile?
|
I found this blockinfile issue , where a user suggested adding a number after the "|" in the "block: |" line, but that gives a syntax error. Basically, I want to use the blockinfile module to add a block of lines to a file, but I want the block to be indented 6 spaces in the file. Here's the task - name: Added a block of lines in the file blockinfile: dest: /path/some_file.yml insertafter: 'authc:' block: | line0 line1 line2 line3 line4 I expect the block to come out indented six spaces under authc: , but instead the lines are inserted flush with the left margin. Adding spaces at the beginning of the lines does not do it. How can I accomplish this?
|
ansible, ansible-2.x
| 36
| 37,146
| 5
|
https://stackoverflow.com/questions/39731999/how-to-add-spaces-at-beginning-of-block-in-ansibles-blockinfile
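One approach that is often suggested (a sketch, not from the thread; verify against your Jinja2 version): since block is templated, the Jinja indent filter can prefix every line, including the first, with the desired spaces.

```yaml
- name: Add an indented block after authc
  blockinfile:
    dest: /path/some_file.yml
    insertafter: 'authc:'
    block: "{{ the_lines | indent(width=6, first=true) }}"
  vars:
    the_lines: |
      line0
      line1
      line2
```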
|
33,541,870
|
How do I loop over each line inside a file with ansible?
|
I am looking for something that would be similar to with_items: but that would get the list of items from a file instead of having to include it in the playbook file. How can I do this in ansible?
|
ansible
| 36
| 93,615
| 4
|
https://stackoverflow.com/questions/33541870/how-do-i-loop-over-each-line-inside-a-file-with-ansible
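A sketch of the usual pattern (added here; the file name is illustrative): feed the file through the file lookup and split it into lines, one loop iteration per line.

```yaml
- name: Run once per line of items.txt
  debug:
    msg: "{{ item }}"
  with_items: "{{ lookup('file', 'items.txt').splitlines() }}"
```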
|
36,310,633
|
Ansible: Get number of hosts in group
|
I'm trying to get the number of hosts of a certain group. Imagine an inventory file like this: [maingroup] server-[01:05] Now in my playbook I would like to get the number of hosts that are part of maingroup which would be 5 in this case and store that in a variable which is supposed to be used in a template in one of the playbook's tasks. At the moment I'm setting the variable manually which is far from ideal.. vars: HOST_COUNT: 5
|
ansible
| 36
| 43,618
| 2
|
https://stackoverflow.com/questions/36310633/ansible-get-number-of-hosts-in-group
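A sketch (not from the thread): group membership is exposed through the groups magic variable, so the count need not be hard-coded.

```yaml
vars:
  HOST_COUNT: "{{ groups['maingroup'] | length }}"
# or, inside a template, simply:  {{ groups['maingroup'] | length }}
```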
|
50,009,505
|
Formatting stdout in a debug task of Ansible
|
Assuming the below tasks: - shell: "some_script.sh" register: "some_script_result" - debug: msg: "Output: {{ some_script_result.stdout_lines }}" I receive the below output: "msg": "Output: [u'some_value', u'some_value2', u'some_value3']" How do I get the output to print as? "msg": Output: - some_value - some_value2 - some_value3 Ansible version is 2.4.2.
|
ansible, ansible-2.x
| 36
| 90,644
| 6
|
https://stackoverflow.com/questions/50009505/formatting-stdout-in-a-debug-task-of-ansible
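Two common sketches (added commentary, not from the thread): print the variable itself instead of embedding the list in a string, or switch the stdout callback to YAML rendering.

```yaml
- debug:
    var: some_script_result.stdout_lines   # rendered as a proper list, not a repr string

# Alternatively, from the shell before running the playbook (if the
# yaml callback is available in your installation):
#   ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook play.yml
```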
|
35,139,711
|
Running Python script via ansible
|
I'm trying to run a python script from an ansible script. I would think this would be an easy thing to do, but I can't figure it out. I've got a project structure like this: playbook-folder roles stagecode files mypythonscript.py tasks main.yml release.yml I'm trying to run mypythonscript.py within a task in main.yml (which is a role used in release.yml). Here's the task: - name: run my script! command: ./roles/stagecode/files/mypythonscript.py args: chdir: /dir/to/be/run/in delegate_to: 127.0.0.1 run_once: true I've also tried ../files/mypythonscript.py. I thought the path for ansible would be relative to the playbook, but I guess not? I also tried debugging to figure out where I am in the middle of the script, but no luck there either. - name: figure out where we are stat: path=. delegate_to: 127.0.0.1 run_once: true register: righthere - name: print where we are debug: msg="{{righthere.stat.path}}" delegate_to: 127.0.0.1 run_once: true That just prints out ".". So helpful ...
|
ansible, ansible-2.x
| 36
| 155,555
| 7
|
https://stackoverflow.com/questions/35139711/running-python-script-via-ansible
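One sketch (not from the thread): the script module copies a local file to the target and runs it there, and a bare file name is resolved against the role's files/ directory, so no playbook-relative path gymnastics are needed.

```yaml
- name: Copy the role's script to the target and execute it there
  script: mypythonscript.py          # found in roles/stagecode/files/
  args:
    chdir: /dir/to/be/run/in        # chdir support may depend on your Ansible version
```

For running it on the control machine instead, `{{ role_path }}/files/mypythonscript.py` gives an absolute path usable with the command module.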
|
51,818,264
|
How to split an ansible role's defaults/main.yml file into multiple files?
|
In some ansible roles (e.g. roles/my-role/ ) I've got quite some big default variables files ( defaults/main.yml ). I'd like to split the main.yml into several smaller files. Is it possible to do that? I've tried creating the files defaults/1.yml and defaults/2.yml , but they aren't loaded by ansible.
|
ansible, ansible-role, file-organization
| 36
| 25,932
| 2
|
https://stackoverflow.com/questions/51818264/how-to-split-an-ansible-roles-defaults-main-yml-file-into-multiple-files
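A hedged note (added here; check the minimum supported version for your release): newer Ansible can load a directory in place of the single file, picking up every YAML file under defaults/main/.

```yaml
# roles/my-role/defaults/main/webserver.yml   (file names are illustrative)
web_port: 8080

# roles/my-role/defaults/main/database.yml
db_port: 5432
```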
|
30,533,372
|
Run an Ansible task only when the hostname contains a string
|
I have multiple tasks in a role as follows. I do not want to create another yml file to handle this task. I already have an include for the web servers, but a couple of our Perl servers require some web packages to be installed. - name: Install Perl Modules command: <command> with_dict: perl_modules - name: Install PHP Modules command: <command> with_dict: php_modules when: <Install php modules only if hostname contains the word "batch"> Host inventory file [webs] web01 web02 web03 [perl] perl01 perl02 perl03 perl-batch01 perl-batch02
|
ansible
| 36
| 128,987
| 2
|
https://stackoverflow.com/questions/30533372/run-an-ansible-task-only-when-the-hostname-contains-a-string
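A sketch of the conditional (added commentary): inventory_hostname holds the name from the inventory file, so a substring test covers the perl-batch hosts.

```yaml
- name: Install PHP Modules
  command: <command>               # placeholder kept from the question
  with_dict: php_modules
  when: "'batch' in inventory_hostname"
```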
|
23,767,765
|
Ansible doesn't pick up group_vars without loading it manually
|
In my local.yml I'm able to run the playbook and reference variables within group_vars/all however I'm not able to access variables within group_vars/phl-stage . Let's assume the following. ansible-playbook -i phl-stage site.yml I have a variable, let's call it deploy_path that's different for each environment. I place the variable within group_vars/< environment name > . If I include the file group_vars/phl-stage within vars_files it works but I would've thought the group file would be automatically loaded? site.yml - include: local.yml local.yml - hosts: 127.0.0.1 connection: local vars_files: - "group_vars/perlservers" - "group_vars/deploy_list" group_vars/phl-stage [webservers] phl-web1 phl-web2 [perlservers] phl-perl1 phl-perl2 [phl-stage:children] webservers perlservers Directory structure: group_vars all phl-stage phl-prod site.yml local.yml
|
ansible
| 36
| 75,120
| 1
|
https://stackoverflow.com/questions/23767765/ansible-doesnt-pick-up-group-vars-without-loading-it-manually
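A hedged observation (not from the thread): the content shown under group_vars/phl-stage is inventory syntax; group_vars files must contain variables, live beside the inventory or the playbook, and be named after a group the current hosts actually belong to. A sketch:

```yaml
# the inventory file "phl-stage" keeps the [webservers]/[perlservers] groups;
# group_vars/phl-stage is then a plain vars file such as:
deploy_path: /var/deploy/stage   # illustrative value
```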
|
22,115,936
|
Install Bundler gem using Ansible
|
I am trying to install Bundler on my VPS using Ansible. I already have rbenv set up and the global ruby is 2.1.0. If I SSH as root into the server and run gem install bundler , it installs perfectly. I have tried the following three ways of using Ansible to install the Bundler gem and all three produce no errors, but when I SSH in and run gem list , Bundler is nowhere to be seen. Attempt 1: --- - name: Install Bundler shell: gem install bundler Attempt 2: --- - name: Install Bundler shell: gem install bundler Attempt 3: --- - name: Install Bundler gem: name=bundler state=latest I have also tried the last attempt with user_install=yes and also with user_install=no and neither makes any difference. Any ideas how I can get it to install Bundler correctly via Ansible? I've been working on this for a little while now and I have 1 ruby version installed: 2.1.0 and have found that the shims directory for rbenv does not contain a shim for bundle . Should a shim for bundle be in there? I'm just getting confused as to why capistrano cannot find the bundle command as it's listed when I run sudo gem list but NOT when I run gem list ?
root@weepingangel:/usr/local/rbenv/shims# echo $PATH /usr/local/rbenv/shims:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games root@weepingangel:/usr/local/rbenv/shims# gem environment RubyGems Environment: - RUBYGEMS VERSION: 2.2.0 - RUBY VERSION: 2.1.0 (2013-12-25 patchlevel 0) [x86_64-linux] - INSTALLATION DIRECTORY: /usr/local/rbenv/versions/2.1.0/lib/ruby/gems/2.1.0 - RUBY EXECUTABLE: /usr/local/rbenv/versions/2.1.0/bin/ruby - EXECUTABLE DIRECTORY: /usr/local/rbenv/versions/2.1.0/bin - SPEC CACHE DIRECTORY: /root/.gem/specs - RUBYGEMS PLATFORMS: - ruby - x86_64-linux - GEM PATHS: - /usr/local/rbenv/versions/2.1.0/lib/ruby/gems/2.1.0 - /root/.gem/ruby/2.1.0 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :backtrace => false - :bulk_threshold => 1000 - :sources => ["[URL] "[URL] - "gem" => "--no-ri --no-rdoc" - REMOTE SOURCES: - [URL] - [URL] - SHELL PATH: - /usr/local/rbenv/versions/2.1.0/bin - /usr/local/rbenv/libexec - /usr/local/rbenv/shims - /usr/local/sbin - /usr/local/bin - /usr/sbin - /usr/bin - /sbin - /bin - /usr/games Any ideas? So, I think the two main problems I have: Why is bundler only visible when I run sudo gem list ? My deploy is saying: INFO [18d5838c] Running /usr/bin/env bundle install --binstubs /var/rails_apps/neiltonge/shared/bin --path /var/rails_apps/neiltonge/shared/bundle --without development test --deployment --quiet on 188.226.159.96 DEBUG [18d5838c] Command: cd /var/rails_apps/neiltonge/releases/20140301205432 && ( PATH=$PATH /usr/bin/env bundle install --binstubs /var/rails_apps/neiltonge/shared/bin --path /var/rails_apps/neiltonge/shared/bundle --without development test --deployment --quiet ) DEBUG [18d5838c] /usr/bin/env: bundle: No such file or directory and this is my $PATH : /usr/local/rbenv/shims:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games Why can't bundle be located?
|
|
ruby, ubuntu, rubygems, bundler, ansible
| 36
| 21,798
| 6
|
https://stackoverflow.com/questions/22115936/install-bundler-gem-using-ansible
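One sketch (module options should be checked against your version; the paths assume the rbenv prefix shown in the post): point the gem module at the rbenv-managed gem binary, then rehash so a bundle shim appears.

```yaml
- name: Install Bundler into the rbenv-managed ruby
  gem:
    name: bundler
    state: latest
    executable: /usr/local/rbenv/shims/gem
    user_install: no

- name: Regenerate rbenv shims so the bundle command is found
  command: /usr/local/rbenv/bin/rbenv rehash   # rbenv binary location is an assumption
```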
|
59,384,708
|
ansible returns with "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6))
|
I am running my server in Ubuntu: + sudo cat /etc/os-release NAME="Ubuntu" VERSION="16.04.6 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.6 LTS" VERSION_ID="16.04" HOME_URL="[URL] SUPPORT_URL="[URL] BUG_REPORT_URL="[URL] VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial I use ansible and when I run it I get the following error: fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on dd63315fad06's Python /usr/bin/python. Please read module documentation and install in the appropriate location, for example via pip install docker or pip install docker-py (Python 2.6). The error was: No module named docker"} when I run python -c "import sys; print(sys.path)" I see: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/local/lib/python2.7/dist-packages/pip-19.2.2-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/fasteners-0.15-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/monotonic-1.5-py2.7.egg', '/usr/lib/python2.7/dist-packages'] and python versions are as follows: + python --version Python 2.7.12 + python3 --version Python 3.5.2 Then as I see everything is fine and I am not sure why I get "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on dd63315fad06's Python /usr/bin/python. Please read module documentation and install in the appropriate location, for example via pip install docker or pip install docker-py (Python 2.6). The error was: No module named docker" in ansible?
|
ansible, ansible-2.x, ansible-inventory, ansible-facts
| 36
| 105,133
| 6
|
https://stackoverflow.com/questions/59384708/ansible-returns-with-failed-to-import-the-required-python-library-docker-sdk-f
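A sketch of the usual fix (added here, not from the thread): the SDK must be importable by the interpreter Ansible actually uses on the target (/usr/bin/python according to the error), so install it with the pip that belongs to that interpreter.

```yaml
- name: Install the Docker SDK for the interpreter Ansible runs with
  pip:
    name: docker
    executable: pip2   # illustrative; pick the pip matching ansible_python_interpreter
```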
|
30,024,664
|
Force fact-gathering on all hosts
|
I'm sitting in front of a fairly complex Ansible project that we're using to set up our local development environments (multiple VMs) and there's one role that uses the facts gathered by Ansible to set up the /etc/hosts file on every VM. Unfortunately, when you want to run the playbook for one host only (using the --limit parameter) the facts from the other hosts are (obviously) missing. Is there a way to force Ansible to gather facts on all hosts, even if you limit the playbook to one specific host? We tried to add a play to the playbook to gather facts from all hosts, but of course that also gets limited to the one host given by the --limit parameter. If there were a way to force this play to run on all hosts before the other plays, that would be perfect. I've googled a bit and found the solution with fact caching with redis, but since our playbook is used locally, I wanted to avoid the need for additional software. I know, it's not a big deal, but I was just looking for a "cleaner", Ansible-only solution and was wondering, if that would exist.
|
ansible, limit, hosts, ansible-facts
| 36
| 20,417
| 4
|
https://stackoverflow.com/questions/30024664/force-fact-gathering-on-all-hosts
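One Ansible-only sketch (added commentary; verify on your version): delegate the setup module to every inventory host from whichever host survives --limit, storing the results with delegate_facts so they land under the delegated hosts.

```yaml
- hosts: all
  gather_facts: false
  tasks:
    - name: Gather facts for every inventory host, even when the play is limited
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['all'] }}"
      run_once: true
```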
|
36,965,199
|
ansible wget then exec scripts => get_url equivalent
|
I always wonder what the right way is to replace the following shell tasks using the "Ansible way" (with get_url , etc.): - name: Install oh-my-zsh shell: wget -qO - [URL] | bash - or - name: Install nodesource repo shell: curl -sL [URL] | bash -
|
bash, curl, wget, ansible
| 36
| 36,550
| 7
|
https://stackoverflow.com/questions/36965199/ansible-wget-then-exec-scripts-get-url-equivalent
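A sketch of the idiomatic replacement (not from the thread; the dest path and creates marker are illustrative): download with get_url, then run the script guarded by creates so the task stays idempotent.

```yaml
- name: Fetch the installer instead of piping curl to bash
  get_url:
    url: "[URL]"            # the script URL from the original task
    dest: /tmp/installer.sh
    mode: '0755'

- name: Run it only if it has not run before
  command: /tmp/installer.sh
  args:
    creates: ~/.oh-my-zsh   # a marker the installer leaves behind (assumption)
```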
|
50,796,341
|
Add a new key-value to a json file using Ansible
|
I'm using Ansible to automate some configuration steps for my application VM, but I'm finding it difficult to insert a new key-value pair into an existing json file on the remote host. Say I have this json file: { "foo": "bar" } And I want to insert a new key value pair to make the file become: { "foo": "bar", "hello": "world" } Since json format is not line based, I'm excluding the lineinfile module from my options. Also, I would prefer not to use any external modules. Google keeps giving me examples showing how to read a json file, but nothing about changing json values and writing them back to the file. Really appreciate your help!
|
json, ansible
| 36
| 34,811
| 4
|
https://stackoverflow.com/questions/50796341/add-a-new-key-value-to-a-json-file-using-ansible
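One sketch without external modules (added here; the file path is illustrative): slurp the file, merge the new key with the combine filter, and write the result back.

```yaml
- name: Read the current JSON
  slurp:
    src: /etc/myapp/config.json
  register: cfg

- name: Write it back with the new key merged in
  copy:
    dest: /etc/myapp/config.json
    content: "{{ cfg.content | b64decode | from_json
                 | combine({'hello': 'world'}) | to_nice_json }}"
```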
|
43,185,043
|
What do double curly braces ({{) mean in YAML files (as used by Ansible)?
|
I'm fairly new to Ansible and trying to understand the YAML file. In it I'm not clear about this piece of line - file: dest={{ '{{' }} docroot {{ '}}' }} . Can someone please explain to me what those curly braces '{{' '}}' are doing? - name: Create Web Root when: nginxinstalled|success file: dest={{ '{{' }} docroot {{ '}}' }} mode=775 state=directory owner=www-data group=www-data notify: - Reload Nginx
|
ansible
| 36
| 53,377
| 2
|
https://stackoverflow.com/questions/43185043/what-do-double-curly-braces-mean-in-yaml-files-as-used-by-ansible
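A brief added note (not from the thread): {{ … }} is Jinja2 templating, and the '{{' / '}}' strings in the post are just a way to emit literal braces; the far more common form is plain interpolation.

```yaml
- name: Create Web Root (plain templating, no escaping needed)
  file:
    dest: "{{ docroot }}"   # substitutes the docroot variable at runtime
    state: directory
    mode: '0775'
    owner: www-data
    group: www-data
```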
|
46,352,173
|
Ansible Failed to set permissions on the temporary
|
I am using ansible to replace the ssh keys for a user on multiple RHEL6 & RHEL7 servers. The task I am running is: - name: private key copy: src: /Users/me/Documents/keys/id_rsa dest: ~/.ssh/ owner: unpriv group: unpriv mode: 0600 backup: yes Two of the hosts that I'm trying to update are giving the following error: fatal: [host1]: FAILED! => {"failed": true, "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of /tmp/ansible-tmp-19/': Operation not permitted\nchown: changing ownership of /tmp/ansible-tmp-19/stat.py': Operation not permitted\n). For information on working around this, see [URL] "} The thing is that these two that are getting the errors are clones of some that are updating just fine. I've compared the sudoers and sshd settings, as well as permissions and mount options on the /tmp directory. They are all the same between the problem hosts and the working ones. Any ideas on what I could check next? I am running ansible 2.3.1.0 on Mac OS Sierra, if that helps. Update: @techraf I have no idea why this worked on all hosts except for two. Here is the original playbook: - name: ssh_keys hosts: my_hosts remote_user: my_user tasks: - include: ./roles/common/tasks/keys.yml become: yes become_method: sudo and original keys.yml: - name: public key copy: src: /Users/me/Documents/keys/id_rsab dest: ~/.ssh/ owner: unpriv group: unpriv mode: 060 backup: yes I changed the playbook to: - name: ssh_keys hosts: my_hosts remote_user: my_user tasks: - include: ./roles/common/tasks/keys.yml become: yes become_method: sudo become_user: root And keys.yml to: - name: public key copy: src: /Users/me/Documents/keys/id_rsab dest: /home/unpriv/.ssh/ owner: unpriv group: unpriv mode: 0600 backup: yes And it worked across all hosts.
|
Ansible Failed to set permissions on the temporary I am using ansible to replace the ssh keys for a user on multiple RHEL6 & RHEL7 servers. The task I am running is: - name: private key copy: src: /Users/me/Documents/keys/id_rsa dest: ~/.ssh/ owner: unpriv group: unpriv mode: 0600 backup: yes Two of the hosts that I'm trying to update are giving the following error: fatal: [host1]: FAILED! => {"failed": true, "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of /tmp/ansible-tmp-19/': Operation not permitted\nchown: changing ownership of /tmp/ansible-tmp-19/stat.py': Operation not permitted\n). For information on working around this, see [URL] "} The thing is that these two that are getting the errors are clones of some that are updating just fine. I've compared the sudoers and sshd settings, as well as permissions and mount options on the /tmp directory. They are all the same between the problem hosts and the working ones. Any ideas on what I could check next? I am running ansible 2.3.1.0 on Mac OS Sierra, if that helps. Update: @techraf I have no idea why this worked on all hosts except for two. Here is the original playbook: - name: ssh_keys hosts: my_hosts remote_user: my_user tasks: - include: ./roles/common/tasks/keys.yml become: yes become_method: sudo and original keys.yml: - name: public key copy: src: /Users/me/Documents/keys/id_rsab dest: ~/.ssh/ owner: unpriv group: unpriv mode: 060 backup: yes I changed the playbook to: - name: ssh_keys hosts: my_hosts remote_user: my_user tasks: - include: ./roles/common/tasks/keys.yml become: yes become_method: sudo become_user: root And keys.yml to: - name: public key copy: src: /Users/me/Documents/keys/id_rsab dest: /home/unpriv/.ssh/ owner: unpriv group: unpriv mode: 0600 backup: yes And it worked across all hosts.
|
ansible
| 35
| 60,754
| 4
|
https://stackoverflow.com/questions/46352173/ansible-failed-to-set-permissions-on-the-temporary
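A common fix for the temp-file permission error above is to install the acl package on the managed hosts, so Ansible can hand its temporary files to an unprivileged become_user via setfacl instead of chown. A minimal sketch (the host group name is an assumption):

```yaml
- hosts: my_hosts            # hypothetical group name
  become: yes
  tasks:
    - name: install acl so become_user works for unprivileged users
      package:
        name: acl
        state: present
```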
|
35,984,151
|
How to create new system service by ansible-playbook
|
I have created a script to start/stop my application. Now I want to add it as a CentOS system service. First I created a task to create a link from my script to /etc/init.d/service_name as below. --- - name: create startup link file: src={{ cooltoo_service_script }} dest={{ cooltoo_service_init }} state=link After creating the service, I want to add it as a system service. The command used to do that is "chkconfig --add service_name". I wonder whether there is an Ansible module to do that instead of hardcoding the command in the ansible-playbook file. I have looked at this page [URL] but it only shows how to manage a service, not create a new one.
|
How to create new system service by ansible-playbook I have created a script to start/stop my application. Now I want to add it as a CentOS system service. First I created a task to create a link from my script to /etc/init.d/service_name as below. --- - name: create startup link file: src={{ cooltoo_service_script }} dest={{ cooltoo_service_init }} state=link After creating the service, I want to add it as a system service. The command used to do that is "chkconfig --add service_name". I wonder whether there is an Ansible module to do that instead of hardcoding the command in the ansible-playbook file. I have looked at this page [URL] but it only shows how to manage a service, not create a new one.
|
ansible
| 35
| 92,655
| 4
|
https://stackoverflow.com/questions/35984151/how-to-create-new-system-service-by-ansible-playbook
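On SysV-style systems, Ansible's service module can register and enable an init script without shelling out to chkconfig; enabled: yes covers the chkconfig --add / chkconfig on step. A hedged sketch reusing the question's variables (the service name is an assumption):

```yaml
- name: create startup link
  file:
    src: "{{ cooltoo_service_script }}"
    dest: "{{ cooltoo_service_init }}"
    state: link

- name: register and enable the service (replaces chkconfig --add on SysV hosts)
  service:
    name: cooltoo            # assumed service name matching the init script
    enabled: yes
```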
|
39,189,549
|
How can I hide skipped tasks output in Ansible
|
I have Ansible role, for example --- - name: Deploy app1 include: deploy-app1.yml when: 'deploy_project == "{{app1}}"' - name: Deploy app2 include: deploy-app2.yml when: 'deploy_project == "{{app2}}"' But I deploy only one app in one role call. When I deploy several apps, I call role several times. But every time there is a lot of skipped tasks output (from tasks which do not pass condition), which I do not want to see. How can I avoid it?
|
How can I hide skipped tasks output in Ansible I have Ansible role, for example --- - name: Deploy app1 include: deploy-app1.yml when: 'deploy_project == "{{app1}}"' - name: Deploy app2 include: deploy-app2.yml when: 'deploy_project == "{{app2}}"' But I deploy only one app in one role call. When I deploy several apps, I call role several times. But every time there is a lot of skipped tasks output (from tasks which do not pass condition), which I do not want to see. How can I avoid it?
|
plugins, output, ansible
| 35
| 75,755
| 6
|
https://stackoverflow.com/questions/39189549/how-can-i-hide-skipped-tasks-output-in-ansible
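One widely used answer to the question above is to suppress skipped-task output globally in ansible.cfg; whether a global setting fits your workflow is a judgment call:

```ini
# ansible.cfg
[defaults]
display_skipped_hosts = False
```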
|
47,632,103
|
Ansible: How to pip install with --upgrade
|
I want to pip install with --upgrade , using Ansible. What's the syntax?
|
Ansible: How to pip install with --upgrade I want to pip install with --upgrade , using Ansible. What's the syntax?
|
pip, ansible
| 35
| 57,285
| 3
|
https://stackoverflow.com/questions/47632103/ansible-how-to-pip-install-with-upgrade
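The pip module's state=latest maps to pip install --upgrade. A minimal sketch with a hypothetical package name:

```yaml
- name: upgrade a package (equivalent of pip install --upgrade)
  pip:
    name: somepackage        # hypothetical package
    state: latest
```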
|
38,308,871
|
How to disable gathering facts for subplays not included within given tag
|
Several of my playbooks have sub-plays structure like this: - hosts: sites user: root tags: - configuration tasks: (...) - hosts: sites user: root tags: - db tasks: (...) - hosts: sites user: "{{ site_vars.user }}" tags: - app tasks: (...) In Ansible 1.x both admins and developers were able to use such playbook. Admins could run it with all the tags (root and user access), while developers had access only to the last tag with tasks at user access level. When developers run this playbook with the app tag, gathering facts was skipped for the first two tags. Now however, in Ansible 2.1, it is not being skipped, which causes failure for users without root access. Is there a mechanism or an easy modification to fix this behaviour? Is there a new approach which should be applied for such cases now?
|
How to disable gathering facts for subplays not included within given tag Several of my playbooks have sub-plays structure like this: - hosts: sites user: root tags: - configuration tasks: (...) - hosts: sites user: root tags: - db tasks: (...) - hosts: sites user: "{{ site_vars.user }}" tags: - app tasks: (...) In Ansible 1.x both admins and developers were able to use such playbook. Admins could run it with all the tags (root and user access), while developers had access only to the last tag with tasks at user access level. When developers run this playbook with the app tag, gathering facts was skipped for the first two tags. Now however, in Ansible 2.1, it is not being skipped, which causes failure for users without root access. Is there a mechanism or an easy modification to fix this behaviour? Is there a new approach which should be applied for such cases now?
|
ansible, ansible-2.x
| 35
| 129,619
| 1
|
https://stackoverflow.com/questions/38308871/how-to-disable-gathering-facts-for-subplays-not-included-within-given-tag
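A common workaround for the question above is to disable automatic fact gathering per play and gather facts explicitly via a tagged setup task, so setup only runs when that play's tag is actually selected. A hedged sketch of the developer-facing play:

```yaml
- hosts: sites
  user: "{{ site_vars.user }}"
  gather_facts: no           # skip the implicit, untagged setup step
  tags:
    - app
  tasks:
    - name: gather facts only when the app tag is run
      setup:
      tags:
        - app
```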
|
43,140,086
|
Loop through hosts with ansible
|
I am struggling to find a working solution to loop over my inventory. I start my playbook by linking an inventory file: ansible-playbook -i inventory/dev.yml playbook.yml My playbook looks like this: --- - hosts: localhost tasks: - name: Create VM if enviro == true include_role: name: local_vm_creator when: enviro == 'dev' So when loading the playbook the variable enviro is read from host_vars and sets the when condition to dev. The inventory file dev.yml looks like this: [local_vm] 192.168.99.100 192.168.99.101 192.168.99.102 [local_vm_manager_1] 192.168.99.103 [local_vm_manager_2] 192.168.99.104 [local-all:children] local_vm local_vm_manager_1 local_vm_manager_2 My main.yml in my role local_vm_creator looks like this: --- - name: Create test host local_action: shell docker-machine create -d virtualbox {{ item }} with_items: - node-1 - node-2 - node-3 - node-4 - node-5 - debug: msg="host is {{item}}" with_items: groups['local_vm'] And the problem is that I can't get the listed servers from the dev.yml inventory file. It just returns: ok: [localhost] => (item=groups['local_vm']) => { "item": "groups['local_vm']", "msg": "host is groups['local_vm']" }
|
Loop through hosts with ansible I am struggling to find a working solution to loop over my inventory. I start my playbook by linking an inventory file: ansible-playbook -i inventory/dev.yml playbook.yml My playbook looks like this: --- - hosts: localhost tasks: - name: Create VM if enviro == true include_role: name: local_vm_creator when: enviro == 'dev' So when loading the playbook the variable enviro is read from host_vars and sets the when condition to dev. The inventory file dev.yml looks like this: [local_vm] 192.168.99.100 192.168.99.101 192.168.99.102 [local_vm_manager_1] 192.168.99.103 [local_vm_manager_2] 192.168.99.104 [local-all:children] local_vm local_vm_manager_1 local_vm_manager_2 My main.yml in my role local_vm_creator looks like this: --- - name: Create test host local_action: shell docker-machine create -d virtualbox {{ item }} with_items: - node-1 - node-2 - node-3 - node-4 - node-5 - debug: msg="host is {{item}}" with_items: groups['local_vm'] And the problem is that I can't get the listed servers from the dev.yml inventory file. It just returns: ok: [localhost] => (item=groups['local_vm']) => { "item": "groups['local_vm']", "msg": "host is groups['local_vm']" }
|
ansible
| 35
| 92,286
| 1
|
https://stackoverflow.com/questions/43140086/loop-through-hosts-with-ansible
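The failure in the question above comes from passing a bare string to with_items; the group lookup must be a Jinja2 expression. The corrected task:

```yaml
- debug:
    msg: "host is {{ item }}"
  with_items: "{{ groups['local_vm'] }}"
```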
|
23,875,377
|
Ansible: Perform Cleanup on Task Failure
|
I'm currently writing an Ansible play that follows this general format and is run via a cron job: pre_tasks: -Configuration / package installation tasks: -Work with installed packages post_tasks: -Cleanup / uninstall packages The problem with the above is that sometimes a command in the tasks section fails, and when it does the post_tasks section doesn't run, leaving the system in a messy state. Is it possible to force the commands in post_tasks to run even if a failure or fatal error occurs? My current approach is to apply ignore_errors: yes to everything under the tasks section, and then apply a when: conditional to each task to individually check if the prior command succeeded. This solution seems like a hack, but it gets worse because even with ignore_errors: yes set, if a Fatal error is encountered for a task the entire play will still immediately fail, so I have to also run a cron'd bash script to manually check on things after each play execution. All I want is a guarantee that even if a task fails, post_tasks will still run. I'm sure there is a way to do this without resorting to bash script wrappers.
|
Ansible: Perform Cleanup on Task Failure I'm currently writing an Ansible play that follows this general format and is run via a cron job: pre_tasks: -Configuration / package installation tasks: -Work with installed packages post_tasks: -Cleanup / uninstall packages The problem with the above is that sometimes a command in the tasks section fails, and when it does the post_tasks section doesn't run, leaving the system in a messy state. Is it possible to force the commands in post_tasks to run even if a failure or fatal error occurs? My current approach is to apply ignore_errors: yes to everything under the tasks section, and then apply a when: conditional to each task to individually check if the prior command succeeded. This solution seems like a hack, but it gets worse because even with ignore_errors: yes set, if a Fatal error is encountered for a task the entire play will still immediately fail, so I have to also run a cron'd bash script to manually check on things after each play execution. All I want is a guarantee that even if a task fails, post_tasks will still run. I'm sure there is a way to do this without resorting to bash script wrappers.
|
ansible
| 35
| 36,063
| 4
|
https://stackoverflow.com/questions/23875377/ansible-perform-cleanup-on-task-failure
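Since Ansible 2.0, block/rescue/always expresses exactly this guarantee: tasks under always run whether or not the block failed. A sketch with hypothetical commands:

```yaml
tasks:
  - block:
      - name: work with installed packages
        command: /usr/bin/do_work      # hypothetical
    always:
      - name: cleanup runs even if the block above failed
        command: /usr/bin/cleanup      # hypothetical
```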
|
30,883,151
|
Ansible jinja2 filters '|'(pipe) what does it mean?
|
I have written a task as below but can not understand what '|' does? tasks: - shell: /usr/bin/foo register: result ignore_errors: True - debug: msg="it failed" when: result|failed - debug: msg="it changed" when: result|changed Also I have found some examples on web but can not understand what '|' does? debug: msg={{ ipaddr |replace(",", ".") }} One more example: - hosts: localhost vars: D: 1 : "one" 2 : "two" tasks: - debug: var=D - debug: msg="D[1] is {{ D[1]|default ('undefined') }}" Would be great if someone can explain in details or point me to some URL? Any help would be appreciated. Thanks.
|
Ansible jinja2 filters '|'(pipe) what does it mean? I have written a task as below but can not understand what '|' does? tasks: - shell: /usr/bin/foo register: result ignore_errors: True - debug: msg="it failed" when: result|failed - debug: msg="it changed" when: result|changed Also I have found some examples on web but can not understand what '|' does? debug: msg={{ ipaddr |replace(",", ".") }} One more example: - hosts: localhost vars: D: 1 : "one" 2 : "two" tasks: - debug: var=D - debug: msg="D[1] is {{ D[1]|default ('undefined') }}" Would be great if someone can explain in details or point me to some URL? Any help would be appreciated. Thanks.
|
ansible, jinja2
| 35
| 50,564
| 2
|
https://stackoverflow.com/questions/30883151/ansible-jinja2-filters-pipe-what-does-it-mean
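The | is a Jinja2 filter pipe: the value on the left is passed as the first argument to the filter on the right, much like a shell pipe. Two small illustrations:

```yaml
- debug:
    msg: "{{ '1,2,3' | replace(',', '.') }}"          # the string is fed into replace(), yielding 1.2.3
- debug:
    msg: "{{ some_var | default('fallback') }}"       # default() supplies a value when the left side is undefined
```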
|
33,163,204
|
Ansible remote templates
|
I have templates for configuration files stored in my project's repositories. What I would like to do is use Ansible's template module to create a configuration file using that template on the remote server, after the project has been cloned from the repository. Looking at the documentation for the template module it appears that the src attribute only supports local files. I wanted to avoid storing the configuration template with my Ansible playbook as it makes more sense for me to keep these project specific templates within the project repository. Is there an alternative to the template module that I could use?
|
Ansible remote templates I have templates for configuration files stored in my project's repositories. What I would like to do is use Ansible's template module to create a configuration file using that template on the remote server, after the project has been cloned from the repository. Looking at the documentation for the template module it appears that the src attribute only supports local files. I wanted to avoid storing the configuration template with my Ansible playbook as it makes more sense for me to keep these project specific templates within the project repository. Is there an alternative to the template module that I could use?
|
ansible
| 35
| 16,762
| 1
|
https://stackoverflow.com/questions/33163204/ansible-remote-templates
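One workaround, assuming the template lives in the cloned repository on the remote host, is to fetch it back to the control machine and then render it with the ordinary template module; the paths here are assumptions:

```yaml
- name: pull the project's template back to the control machine
  fetch:
    src: /srv/project/config.j2        # hypothetical path inside the clone
    dest: /tmp/fetched/
    flat: yes

- name: render it on the remote server
  template:
    src: /tmp/fetched/config.j2
    dest: /etc/project/config.conf     # hypothetical destination
```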
|
40,469,380
|
Docker, how to deal with ssh keys, known_hosts and authorized_keys
|
In Docker, how does one cope with the requirement of configuring known_hosts , authorized_keys and ssh connectivity in general, when a container has to talk to external systems? For example, I'm running a Jenkins container and try to check out a project from GitHub in a job, but the connection fails with the error: host key verification failed This could be solved by logging into the container, connecting to GitHub manually and trusting the host key when prompted. However this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with Ansible and Docker). Another (clunky) solution would be to provision the running container with Ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how to ssh into a container from another host. A third option would be to use my own Dockerfile extending the Jenkins image, where ssh is configured, but that would be hardcoding and locking the container to this specific environment. So what is the correct way with Docker to manage (and automate) connectivity with external systems?
|
Docker, how to deal with ssh keys, known_hosts and authorized_keys In Docker, how does one cope with the requirement of configuring known_hosts , authorized_keys and ssh connectivity in general, when a container has to talk to external systems? For example, I'm running a Jenkins container and try to check out a project from GitHub in a job, but the connection fails with the error: host key verification failed This could be solved by logging into the container, connecting to GitHub manually and trusting the host key when prompted. However this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with Ansible and Docker). Another (clunky) solution would be to provision the running container with Ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how to ssh into a container from another host. A third option would be to use my own Dockerfile extending the Jenkins image, where ssh is configured, but that would be hardcoding and locking the container to this specific environment. So what is the correct way with Docker to manage (and automate) connectivity with external systems?
|
jenkins, docker, ssh, configuration, ansible
| 35
| 49,559
| 7
|
https://stackoverflow.com/questions/40469380/docker-how-to-deal-with-ssh-keys-known-hosts-and-authorized-keys
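For the GitHub case specifically, one non-interactive approach is to seed known_hosts with ssh-keyscan while provisioning the image or container; the Jenkins home path is an assumption:

```yaml
- name: trust GitHub's host key without an interactive prompt
  shell: ssh-keyscan github.com >> /var/jenkins_home/.ssh/known_hosts   # hypothetical Jenkins home
```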
|
28,855,236
|
Copy local file if exists, using ansible
|
I'm working on a project, and we use ansible to create and deploy a cluster of servers. One of the tasks that I have to implement is to copy a local file to the remote host, only if that file exists locally. Now I'm trying to solve this problem using this - hosts: 127.0.0.1 connection: local tasks: - name: copy local filetocopy.zip to remote if exists - shell: if [[ -f "../filetocopy.zip" ]]; then /bin/true; else /bin/false; fi; register: result - copy: src=../filetocopy.zip dest=/tmp/filetocopy.zip when: result|success But this is failing with the following message: ERROR: 'action' or 'local_action' attribute missing in task "copy local filetocopy.zip to remote if exists" I've tried to create this if with a command task. I've already tried to create this task with a local_action, but I couldn't make it work. None of the samples that I've found use a shell inside local_action; there are only samples of command, and none of them have anything other than a command. Is there a way to do this task using ansible?
|
Copy local file if exists, using ansible I'm working on a project, and we use ansible to create and deploy a cluster of servers. One of the tasks that I have to implement is to copy a local file to the remote host, only if that file exists locally. Now I'm trying to solve this problem using this - hosts: 127.0.0.1 connection: local tasks: - name: copy local filetocopy.zip to remote if exists - shell: if [[ -f "../filetocopy.zip" ]]; then /bin/true; else /bin/false; fi; register: result - copy: src=../filetocopy.zip dest=/tmp/filetocopy.zip when: result|success But this is failing with the following message: ERROR: 'action' or 'local_action' attribute missing in task "copy local filetocopy.zip to remote if exists" I've tried to create this if with a command task. I've already tried to create this task with a local_action, but I couldn't make it work. None of the samples that I've found use a shell inside local_action; there are only samples of command, and none of them have anything other than a command. Is there a way to do this task using ansible?
|
ansible, infrastructure
| 34
| 75,848
| 7
|
https://stackoverflow.com/questions/28855236/copy-local-file-if-exists-using-ansible
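The idiomatic version of this check uses the stat module instead of a shell test, then gates the copy with when:

```yaml
- name: check whether the file exists locally
  stat:
    path: ../filetocopy.zip
  register: local_file

- name: copy it to the remote host only if present
  copy:
    src: ../filetocopy.zip
    dest: /tmp/filetocopy.zip
  when: local_file.stat.exists
```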
|
31,383,693
|
How to create a file locally with ansible templates on the development machine
|
I'm starting out with ansible and I'm looking for a way to create a boilerplate project on the server and in the local environment with ansible playbooks. I want to use ansible templates locally to create some generic files. But how would I get ansible to execute something locally? I read something about local_action but I guess I did not get this right. This is for the webserver... but how do I take this and create some files locally? - hosts: webservers remote_user: someuser - name: create some file template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
|
How to create a file locally with ansible templates on the development machine I'm starting out with ansible and I'm looking for a way to create a boilerplate project on the server and in the local environment with ansible playbooks. I want to use ansible templates locally to create some generic files. But how would I get ansible to execute something locally? I read something about local_action but I guess I did not get this right. This is for the webserver... but how do I take this and create some files locally? - hosts: webservers remote_user: someuser - name: create some file template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
|
ansible
| 34
| 71,282
| 5
|
https://stackoverflow.com/questions/31383693/how-to-create-a-file-locally-with-ansible-templates-on-the-development-machine
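Running the template module with delegate_to: localhost renders the file on the control machine rather than the managed host; the destination path below is an assumption:

```yaml
- name: render a template locally on the control machine
  template:
    src: ~/workspace/ansible_templates/somefile_template.j2
    dest: /tmp/someproject.ini         # hypothetical local destination
  delegate_to: localhost
```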
|
63,173,955
|
Aborting, target uses selinux but Python bindings (libselinux-Python) aren't installed
|
I am trying to run an Ansible playbook command and getting an error as below. fatal: [localhost]: FAILED! => {"changed": false, "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"} When I checked for libselinux-python, it shows it is already available. [root@host all]# sudo yum list installed |grep libselinux-python libselinux-python3.x86_64 2.5-15.el7 @rhel-7-server-rpms Please provide your input if anyone has faced and resolved this. Below are my Python and Ansible versions installed on the server. [root@ xxx bin]# python --version Python 3.6.5 [root@xxx bin]# which python /root/.pyenv/shims/python [root@xxx bin]# ansible --version ansible 2.9.9 config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /root/.pyenv/versions/3.6.5/lib/python3.6/site-packages/ansible executable location = /root/.pyenv/versions/3.6.5/bin/ansible python version = 3.6.5 (default, Jun 18 2020, 17:32:20) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] [root@ xxx bin]#
|
Aborting, target uses selinux but Python bindings (libselinux-Python) aren't installed I am trying to run an Ansible playbook command and getting an error as below. fatal: [localhost]: FAILED! => {"changed": false, "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"} When I checked for libselinux-python, it shows it is already available. [root@host all]# sudo yum list installed |grep libselinux-python libselinux-python3.x86_64 2.5-15.el7 @rhel-7-server-rpms Please provide your input if anyone has faced and resolved this. Below are my Python and Ansible versions installed on the server. [root@ xxx bin]# python --version Python 3.6.5 [root@xxx bin]# which python /root/.pyenv/shims/python [root@xxx bin]# ansible --version ansible 2.9.9 config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /root/.pyenv/versions/3.6.5/lib/python3.6/site-packages/ansible executable location = /root/.pyenv/versions/3.6.5/bin/ansible python version = 3.6.5 (default, Jun 18 2020, 17:32:20) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] [root@ xxx bin]#
|
python, ansible
| 34
| 79,548
| 19
|
https://stackoverflow.com/questions/63173955/aborting-target-uses-selinux-but-python-bindings-libselinux-python-arent-ins
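Since the bindings in the question above are installed for the system Python while Ansible runs under a pyenv interpreter, one common fix is to point ansible_python_interpreter at the Python that actually has the selinux module; the interpreter path is an assumption:

```yaml
- hosts: localhost
  vars:
    ansible_python_interpreter: /usr/bin/python   # system Python carrying the libselinux bindings
```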
|
22,939,775
|
Ansible and Wget
|
I am trying to wget a file from a web server from within an Ansible playbook. Here is the Ansible snippet: --- - hosts: all sudo: true tasks: - name: Prepare Install folder sudo: true action: shell sudo mkdir -p /tmp/my_install/mysql/ && cd /tmp/my_install/mysql/ - name: Download MySql sudo: true action: shell sudo wget [URL] repo_host }}/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar Invoking it via: ansible-playbook my_3rparties.yml -l vsrv644 --extra-vars "repo_host=vsrv656" -K -f 10 It fails with the following: Cannot write to `MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar' (Permission denied). FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/usr2/ihazan/vufroria_3rparties.retry vsrv644 : ok=2 changed=1 unreachable=0 failed=1 When trying to do the command that fail via regular remote ssh to mimic what ansible would do, it doesn't work as follows: -bash-4.1$ ssh ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget [URL] Enter passphrase for key '/usr2/ihazan/.ssh/id_rsa': sudo: sorry, you must have a tty to run sudo But I can solve it using -t as follows: -bash-4.1$ ssh -t ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget [URL] Then it works. Is there a way to set the -t (pseudo tty option) on ansible? P.S: I could solve it by editing the sudoers file as others propose but that is a manual step I am trying to avoid.
|
Ansible and Wget I am trying to wget a file from a web server from within an Ansible playbook. Here is the Ansible snippet: --- - hosts: all sudo: true tasks: - name: Prepare Install folder sudo: true action: shell sudo mkdir -p /tmp/my_install/mysql/ && cd /tmp/my_install/mysql/ - name: Download MySql sudo: true action: shell sudo wget [URL] repo_host }}/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar Invoking it via: ansible-playbook my_3rparties.yml -l vsrv644 --extra-vars "repo_host=vsrv656" -K -f 10 It fails with the following: Cannot write to `MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar' (Permission denied). FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/usr2/ihazan/vufroria_3rparties.retry vsrv644 : ok=2 changed=1 unreachable=0 failed=1 When trying to do the command that fail via regular remote ssh to mimic what ansible would do, it doesn't work as follows: -bash-4.1$ ssh ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget [URL] Enter passphrase for key '/usr2/ihazan/.ssh/id_rsa': sudo: sorry, you must have a tty to run sudo But I can solve it using -t as follows: -bash-4.1$ ssh -t ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget [URL] Then it works. Is there a way to set the -t (pseudo tty option) on ansible? P.S: I could solve it by editing the sudoers file as others propose but that is a manual step I am trying to avoid.
|
ansible
| 34
| 80,360
| 2
|
https://stackoverflow.com/questions/22939775/ansible-and-wget
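The get_url module avoids both wget and the tty-requiring sudo-inside-shell pattern entirely; a hedged rewrite of the download task (become replaces the question's older sudo syntax):

```yaml
- name: Download MySql
  become: yes
  get_url:
    url: "http://{{ repo_host }}/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar"
    dest: /tmp/my_install/mysql/
```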
|
30,987,865
|
can roles and tasks exist in the same playbook?
|
--- # file: main.yml - hosts: fotk remote_user: fakesudo tasks: - name: create a developer user user: name={{ user }} password={{ password }} shell=/bin/bash generate_ssh_key=yes state=present roles: - { role: create_developer_environment, sudo_user: "{{ user }}" } - { role: vim, sudo_user: "{{ user }}" } For some reason the create user task is not running. I have searched every key phrase I can think of on Google to find an answer without success. The roles are running which is odd. Is it possible for a playbook to contain both tasks and roles?
|
can roles and tasks exist in the same playbook? --- # file: main.yml - hosts: fotk remote_user: fakesudo tasks: - name: create a developer user user: name={{ user }} password={{ password }} shell=/bin/bash generate_ssh_key=yes state=present roles: - { role: create_developer_environment, sudo_user: "{{ user }}" } - { role: vim, sudo_user: "{{ user }}" } For some reason the create user task is not running. I have searched every key phrase I can think of on Google to find an answer without success. The roles are running which is odd. Is it possible for a playbook to contain both tasks and roles?
|
ansible
| 34
| 36,627
| 4
|
https://stackoverflow.com/questions/30987865/can-roles-and-tasks-exist-in-the-same-playbook
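Yes, both can coexist, but ordering matters: plain tasks run after roles within a play. To run the user-creation task first, put it under pre_tasks:

```yaml
- hosts: fotk
  remote_user: fakesudo
  pre_tasks:                 # pre_tasks run before any roles
    - name: create a developer user
      user:
        name: "{{ user }}"
        password: "{{ password }}"
        shell: /bin/bash
        generate_ssh_key: yes
        state: present
  roles:
    - { role: create_developer_environment, sudo_user: "{{ user }}" }
    - { role: vim, sudo_user: "{{ user }}" }
```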
|
56,465,268
|
Ansible - host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
|
My local /etc/ansible/hosts file just has [example] 172.31.nn.nnn Why do I get that host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method message ? If I change it to [local] localhost ansible_connection=local it seems to work ok. But that is limited to local. I want to ping my aws instance from my local machine. Full message: ansible 2.8.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/michael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/michael/.local/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0] Using /etc/ansible/ansible.cfg as config file setting up inventory plugins host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method Parsed /etc/ansible/hosts inventory source with ini plugin Loading callback plugin minimal of type stdout, v2.0 from /home/michael/.local/lib/python2.7/site-packages/ansible/plugins/callb ack/minimal.pyc META: ran handlers <172.31.40.133> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.31.40.133> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/michael/Dropbox/90_201 9/work/aws/rubymd2.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,p ublickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/michael/.ansible/cp/7e7a30892 f 172.31.40.133 '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"'' <172.31.40.133> (255, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /et c/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying 
options for *\r\ndebug1: auto-mux: Trying existing master\r\nd ebug1: Control socket "/home/michael/.ansible/cp/7e7a30892f" does not exist\r\ndebug2: resolving "172.31.40.133" port 22\r\ndebu g2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 172.31.40.133 [172.31.40.133] port 22.\r\ndebug2: fd 3 setting O_NON BLOCK\r\ndebug1: connect to address 172.31.40.133 port 22: Connection timed out\r\nssh: connect to host 172.31.40.133 port 22: C onnection timed out\r\n') 172.31.40.133 | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Readin g configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Try ing existing master\r\ndebug1: Control socket \"/home/michael/.ansible/cp/7e7a30892f\" does not exist\r\ndebug2: resolving \"172 .31.40.133\" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 172.31.40.133 [172.31.40.133] port 22.\r \ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 172.31.40.133 port 22: Connection timed out\r\nssh: connect to h ost 172.31.40.133 port 22: Connection timed out", "unreachable": true } I tried ading [inventory] at the top and also enable_plugins = ini. The first didn't help and the second gave a parse message. fyi security group info:
|
Ansible - host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method My local /etc/ansible/hosts file just has [example] 172.31.nn.nnn Why do I get that host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method message ? If I change it to [local] localhost ansible_connection=local it seems to work ok. But that is limited to local. I want to ping my aws instance from my local machine. Full message: ansible 2.8.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/michael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/michael/.local/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0] Using /etc/ansible/ansible.cfg as config file setting up inventory plugins host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method Parsed /etc/ansible/hosts inventory source with ini plugin Loading callback plugin minimal of type stdout, v2.0 from /home/michael/.local/lib/python2.7/site-packages/ansible/plugins/callback/minimal.pyc META: ran handlers <172.31.40.133> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.31.40.133> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/michael/Dropbox/90_2019/work/aws/rubymd2.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/michael/.ansible/cp/7e7a30892f 172.31.40.133 '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"'' <172.31.40.133> (255, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/michael/.ansible/cp/7e7a30892f" does not exist\r\ndebug2: resolving "172.31.40.133" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 172.31.40.133 [172.31.40.133] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 172.31.40.133 port 22: Connection timed out\r\nssh: connect to host 172.31.40.133 port 22: Connection timed out\r\n') 172.31.40.133 | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/michael/.ansible/cp/7e7a30892f\" does not exist\r\ndebug2: resolving \"172.31.40.133\" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 172.31.40.133 [172.31.40.133] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 172.31.40.133 port 22: Connection timed out\r\nssh: connect to host 172.31.40.133 port 22: Connection timed out", "unreachable": true } I tried adding [inventory] at the top and also enable_plugins = ini. The first didn't help and the second gave a parse message. fyi security group info:
|
ansible
| 34
| 82,127
| 3
|
https://stackoverflow.com/questions/56465268/ansible-host-list-declined-parsing-etc-ansible-hosts-as-it-did-not-pass-its
|
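For the question above: the `host_list declined parsing` lines are normal verbose output — Ansible tries each inventory plugin in turn until one (here, the `ini` plugin) accepts the file — so they are not the real problem. The actual failure is the SSH connection timeout to 172.31.40.133, a private EC2 address. A hedged sketch of an inventory that usually works from outside the VPC (the public IP, user, and key path below are placeholders, and the instance's security group must allow inbound TCP 22 from your source address):

```ini
# /etc/ansible/hosts -- use the instance's *public* IP or DNS name when
# connecting from outside the VPC (the IP and key path are placeholders)
[example]
ec2-instance ansible_host=203.0.113.10 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/example.pem
```

With that in place, `ansible example -m ping` should reach the instance; if it still times out, the security group or network ACL is the place to look.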
30,550,378
|
What is the difference between a Docker Container and an Ansible Playbook?
|
It seems to me that both tools are used to easily install and automatically configure applications. However, I've only used Docker a little and haven't used Ansible at all, so I'm a little confused. Whenever I search for a comparison between these two technologies, I find details about how to use them in combination.
|
What is the difference between a Docker Container and an Ansible Playbook? It seems to me that both tools are used to easily install and automatically configure applications. However, I've only used Docker a little and haven't used Ansible at all, so I'm a little confused. Whenever I search for a comparison between these two technologies, I find details about how to use them in combination.
|
configuration, installation, docker, ansible
| 34
| 12,108
| 2
|
https://stackoverflow.com/questions/30550378/what-is-the-difference-between-a-docker-container-and-an-ansible-playbook
|
63,982,903
|
`apt-mark hold and apt-mark unhold` with ansible modules
|
I'm writing my k8s upgrade ansible playbook, and within that I need to do apt-mark unhold kubeadm . Now, I am trying to avoid using the ansible command or shell module to call apt if possible, but the apt hold/unhold command does not seem to be supported by either the package or apt modules. Is it possible to do apt-mark hold in ansible without command or shell ?
|
`apt-mark hold and apt-mark unhold` with ansible modules I'm writing my k8s upgrade ansible playbook, and within that I need to do apt-mark unhold kubeadm . Now, I am trying to avoid using the ansible command or shell module to call apt if possible, but the apt hold/unhold command does not seem to be supported by either the package or apt modules. Is it possible to do apt-mark hold in ansible without command or shell ?
|
ansible, apt
| 33
| 25,240
| 1
|
https://stackoverflow.com/questions/63982903/apt-mark-hold-and-apt-mark-unhold-with-ansible-modules
|
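For the apt-mark question above, one module-only route is the `dpkg_selections` module, which drives the same dpkg selection state that `apt-mark hold`/`unhold` manipulates — a sketch, assuming a reasonably recent Ansible where the module is available:

```yaml
- name: Unhold kubeadm before the upgrade (equivalent of `apt-mark unhold`)
  dpkg_selections:
    name: kubeadm
    selection: install

- name: Hold kubeadm again afterwards (equivalent of `apt-mark hold`)
  dpkg_selections:
    name: kubeadm
    selection: hold
```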
35,271,368
|
Get parent directory in Ansible?
|
Is there a way to evaluate a relative path in Ansible? tasks: - name: Run docker containers include: tasks/dockerup.yml src_code='..' Essentially I am interested in passing the source code path to my task. It happens that the source code is the parent path of {{ansible_inventory}} but there doesn't seem to be anything to accomplish that out of the box. ---- further info ---- Project structure: myproj app deploy deploy.yml So I am trying to access app from deploy.yml .
|
Get parent directory in Ansible? Is there a way to evaluate a relative path in Ansible? tasks: - name: Run docker containers include: tasks/dockerup.yml src_code='..' Essentially I am interested in passing the source code path to my task. It happens that the source code is the parent path of {{ansible_inventory}} but there doesn't seem to be anything to accomplish that out of the box. ---- further info ---- Project structure: myproj app deploy deploy.yml So I am trying to access app from deploy.yml .
|
path, yaml, ansible
| 33
| 35,944
| 3
|
https://stackoverflow.com/questions/35271368/get-parent-directory-in-ansible
|
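For the parent-directory question above, the `dirname` filter strips the last path component, so combined with the `playbook_dir` magic variable it yields the parent of the directory holding deploy.yml — a sketch (`include_tasks` is the modern spelling of the `include` shown in the question):

```yaml
tasks:
  - name: Run docker containers
    include_tasks: tasks/dockerup.yml
    vars:
      # deploy.yml lives in myproj/deploy, so this resolves to myproj
      src_code: "{{ playbook_dir | dirname }}"
```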
25,324,261
|
How to get current role name in an ansible task
|
How can I get the current role name in an ansible task yaml file? I would like to do something like this --- # role/some-role-name/tasks/main.yml - name: Create a directory which is called like the current role name action: file path=/tmp/"{{ role_name }}" mode=0755 state=directory The result of this task should be a directory /tmp/some-role-name on the server
|
How to get current role name in an ansible task How can I get the current role name in an ansible task yaml file? I would like to do something like this --- # role/some-role-name/tasks/main.yml - name: Create a directory which is called like the current role name action: file path=/tmp/"{{ role_name }}" mode=0755 state=directory The result of this task should be a directory /tmp/some-role-name on the server
|
global-variables, ansible
| 33
| 23,955
| 4
|
https://stackoverflow.com/questions/25324261/how-to-get-current-role-name-in-an-ansible-task
|
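For the role-name question above, `role_path` is defined inside a role and points at the role's own directory, so its basename is the role name — a sketch (newer Ansible releases also expose an `ansible_role_name` variable directly):

```yaml
# role/some-role-name/tasks/main.yml
- name: Create a directory named after the current role
  file:
    path: "/tmp/{{ role_path | basename }}"
    state: directory
    mode: "0755"
```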
23,899,028
|
Ansible failed to transfer file to /command
|
Recently I have been using ansible for a wide variety of automation. However, during testing for automatic tomcat6 restart on specific webserver boxes, I came across this new error that I can't seem to fix. FAILED => failed to transfer file to /command Looking at the documentation, it said it's because of sftp-server not being in the sshd_config; however, it is there. Below is the command I am running to my webserver hosts. ansible all -a "/usr/bin/sudo /etc/init.d/tomcat6 restart" -u user --ask-pass --sudo --ask-sudo-pass There is a .ansible hidden folder on each of the boxes so I know it's making it to them, but it's not executing the command. Running -vvvv gives me this after: EXEC ['sshpass', '-d10', 'ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'User=user', '-o', 'ConnectTimeout=10', '10.10.10.103', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && echo $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689'"] then 10.10.10.103 | FAILED => failed to transfer file to /home/user/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689/command Any help on this issue is much appreciated. Thanks, Edit: To increase Googleability, here is another manifestation of the error that the chosen answer fixes. Running the command ansible-playbook -i inventory hello_world.yml gives this warning for every host. [WARNING]: sftp transfer mechanism failed on [host.example.com]. Use ANSIBLE_DEBUG=1 to see detailed information And when you rerun the command as ANSIBLE_DEBUG=1 ansible-playbook -i inventory hello_world.yml the only extra information you get is: >>>sftp> put /var/folders/nc/htqkfk6j6h70hlxrr43rm4h00000gn/T/tmpxEWCe5 /home/ubuntu/.ansible/tmp/ansible-tmp-1487430536.22-28138635532013/command.py
|
Ansible failed to transfer file to /command Recently I have been using ansible for a wide variety of automation. However, during testing for automatic tomcat6 restart on specific webserver boxes, I came across this new error that I can't seem to fix. FAILED => failed to transfer file to /command Looking at the documentation, it said it's because of sftp-server not being in the sshd_config; however, it is there. Below is the command I am running to my webserver hosts. ansible all -a "/usr/bin/sudo /etc/init.d/tomcat6 restart" -u user --ask-pass --sudo --ask-sudo-pass There is a .ansible hidden folder on each of the boxes so I know it's making it to them, but it's not executing the command. Running -vvvv gives me this after: EXEC ['sshpass', '-d10', 'ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'User=user', '-o', 'ConnectTimeout=10', '10.10.10.103', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && echo $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689'"] then 10.10.10.103 | FAILED => failed to transfer file to /home/user/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689/command Any help on this issue is much appreciated. Thanks, Edit: To increase Googleability, here is another manifestation of the error that the chosen answer fixes. Running the command ansible-playbook -i inventory hello_world.yml gives this warning for every host. [WARNING]: sftp transfer mechanism failed on [host.example.com]. Use ANSIBLE_DEBUG=1 to see detailed information And when you rerun the command as ANSIBLE_DEBUG=1 ansible-playbook -i inventory hello_world.yml the only extra information you get is: >>>sftp> put /var/folders/nc/htqkfk6j6h70hlxrr43rm4h00000gn/T/tmpxEWCe5 /home/ubuntu/.ansible/tmp/ansible-tmp-1487430536.22-28138635532013/command.py
|
ansible
| 33
| 57,147
| 7
|
https://stackoverflow.com/questions/23899028/ansible-failed-to-transfer-file-to-command
|
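For the `failed to transfer file to /command` question above, a common fix is to make Ansible fall back from sftp to scp for file transfers, since a missing or misconfigured sftp subsystem on the remote sshd produces exactly this symptom — a hedged ansible.cfg sketch:

```ini
# ansible.cfg -- fall back to scp when the sftp transfer fails
[ssh_connection]
scp_if_ssh = True
```

On recent releases the equivalent knob is `transfer_method = scp`; and if scp works where sftp failed, the remote sshd_config's `Subsystem sftp` line is worth double-checking.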
40,983,674
|
Ansible non-root sudo user and "become" privilege escalation
|
I've set up a box with a user david who has sudo privileges. I can ssh into the box and perform sudo operations like apt-get install . When I try to do the same thing using Ansible's "become privilege escalation", I get a permission denied error. So a simple playbook might look like this: simple_playbook.yml: --- - name: Testing... hosts: all become: true become_user: david become_method: sudo tasks: - name: Just want to install sqlite3 for example... apt: name=sqlite3 state=present I run this playbook with the following command: ansible-playbook -i inventory simple_playbook.yml --ask-become-pass This gives me a prompt for a password, which I give, and I get the following error (abbreviated): fatal: [123.45.67.89]: FAILED! => {... failed: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)\nE: Unable to lock the administration directory (/var/lib/dpkg/), are you root?\n", ...} Why am I getting permission denied? Additional information I'm running Ansible 2.1.1.0 and am targeting a Ubuntu 16.04 box. If I use remote_user and sudo options as per Ansible < v1.9, it works fine, like this: remote_user: david sudo: yes Update The local and remote usernames are the same. To get this working, I just needed to specify become: yes (see @techraf's answer):
|
Ansible non-root sudo user and "become" privilege escalation I've set up a box with a user david who has sudo privileges. I can ssh into the box and perform sudo operations like apt-get install . When I try to do the same thing using Ansible's "become privilege escalation", I get a permission denied error. So a simple playbook might look like this: simple_playbook.yml: --- - name: Testing... hosts: all become: true become_user: david become_method: sudo tasks: - name: Just want to install sqlite3 for example... apt: name=sqlite3 state=present I run this playbook with the following command: ansible-playbook -i inventory simple_playbook.yml --ask-become-pass This gives me a prompt for a password, which I give, and I get the following error (abbreviated): fatal: [123.45.67.89]: FAILED! => {... failed: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)\nE: Unable to lock the administration directory (/var/lib/dpkg/), are you root?\n", ...} Why am I getting permission denied? Additional information I'm running Ansible 2.1.1.0 and am targeting a Ubuntu 16.04 box. If I use remote_user and sudo options as per Ansible < v1.9, it works fine, like this: remote_user: david sudo: yes Update The local and remote usernames are the same. To get this working, I just needed to specify become: yes (see @techraf's answer):
|
ansible, sudo, privileges
| 33
| 83,365
| 3
|
https://stackoverflow.com/questions/40983674/ansible-non-root-sudo-user-and-become-privilege-escalation
|
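As the question's own update notes, the problem is that `become_user: david` escalated *to* the unprivileged user instead of to root. Connecting as david and letting `become_user` default to root behaves like an interactive sudo — a sketch:

```yaml
---
- name: Testing...
  hosts: all
  remote_user: david   # the SSH login user with sudo rights
  become: yes          # escalate to root (the default become_user)
  tasks:
    - name: Install sqlite3
      apt:
        name: sqlite3
        state: present
```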
39,779,802
|
How to compare kernel, software, etc. versions numbers in Ansible?
|
For a role I'm developing I need to verify that the kernel version is greater than a particular version. I've found the ansible_kernel fact, but is there an easy way to compare this to other versions? I thought I might manually explode the version string on the dots ( . ) and compare the numbers, but I can't even find a friendly filter to explode the version string out, so I'm at a loss.
|
How to compare kernel, software, etc. versions numbers in Ansible? For a role I'm developing I need to verify that the kernel version is greater than a particular version. I've found the ansible_kernel fact, but is there an easy way to compare this to other versions? I thought I might manually explode the version string on the dots ( . ) and compare the numbers, but I can't even find a friendly filter to explode the version string out, so I'm at a loss.
|
ansible, ansible-2.x, version-numbering
| 33
| 51,241
| 5
|
https://stackoverflow.com/questions/39779802/how-to-compare-kernel-software-etc-versions-numbers-in-ansible
|
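For the version-comparison question above, Ansible ships a `version` test (named `version_compare` before 2.5) that understands dotted version strings, so no manual splitting is needed — a sketch, with 4.4 as an assumed threshold:

```yaml
- name: Require kernel 4.4 or newer
  assert:
    that:
      - ansible_kernel is version('4.4', '>=')
```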
20,828,703
|
List all ansible hosts from inventory file for a task
|
For a backup I need to iterate over all hosts in my inventory file to be sure that the backup destination exists. My structure looks like /var/backups/ example.com/ sub.example.com/ So I need a (built-in) variable/method to list all hosts from inventory file, not only a single group. For groups its look like this - name: ensure backup directories are present action: file path=/var/backups/{{ item }} state=directory owner=backup group=backup mode=0700 with_items: groups['group_name'] tags: - backup
|
List all ansible hosts from inventory file for a task For a backup I need to iterate over all hosts in my inventory file to be sure that the backup destination exists. My structure looks like /var/backups/ example.com/ sub.example.com/ So I need a (built-in) variable/method to list all hosts from inventory file, not only a single group. For groups its look like this - name: ensure backup directories are present action: file path=/var/backups/{{ item }} state=directory owner=backup group=backup mode=0700 with_items: groups['group_name'] tags: - backup
|
ansible, ansible-inventory
| 33
| 33,390
| 2
|
https://stackoverflow.com/questions/20828703/list-all-ansible-hosts-from-inventory-file-for-a-task
|
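For the question above, the built-in `groups` variable always contains an `all` group covering every host in the inventory, so the group-based loop from the question generalises directly — a sketch:

```yaml
- name: Ensure backup directories are present
  file:
    path: "/var/backups/{{ item }}"
    state: directory
    owner: backup
    group: backup
    mode: "0700"
  with_items: "{{ groups['all'] }}"
```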
38,144,598
|
How to write an Ansible role task that only runs when any of the previous other tasks in the task file have been changed?
|
I am working on a role where I want one task to be run at the end of the tasks file if and only if any of the previous tasks in that task file have changed. For example, I have: - name: install package apt: name=mypackage state=latest - name: modify a file lineinfile: do stuff - name: modify a second file lineinfile: other stuff - name: restart if anything changed service: name=mypackage state=restarted ... and I want to only restart the service if an update has been installed or any of the config files have been changed. How can I do this?
|
How to write an Ansible role task that only runs when any of the previous other tasks in the task file have been changed? I am working on a role where I want one task to be run at the end of the tasks file if and only if any of the previous tasks in that task file have changed. For example, I have: - name: install package apt: name=mypackage state=latest - name: modify a file lineinfile: do stuff - name: modify a second file lineinfile: other stuff - name: restart if anything changed service: name=mypackage state=restarted ... and I want to only restart the service if an update has been installed or any of the config files have been changed. How can I do this?
|
ansible, ansible-role
| 33
| 31,377
| 1
|
https://stackoverflow.com/questions/38144598/how-to-write-an-ansible-role-task-that-only-runs-when-any-of-the-previous-other
|
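The question above is what handlers are for: each task that can change something notifies a handler by name, and the handler runs once at the end of the play only if at least one notifying task reported `changed` — a sketch (the file path and line are placeholders):

```yaml
# tasks/main.yml
- name: install package
  apt:
    name: mypackage
    state: latest
  notify: restart mypackage

- name: modify a file
  lineinfile:
    path: /etc/mypackage/mypackage.conf   # placeholder path
    line: "setting = value"               # placeholder content
  notify: restart mypackage

# handlers/main.yml
- name: restart mypackage
  service:
    name: mypackage
    state: restarted
```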
38,034,777
|
Conflicting action statement in ansible
|
I am getting the following error : Conflicting action statement in ansible . I tried to understand, my code seems to be correct. I declared the name correctly though it gives an error in the task name. Playbook: --- - hosts: webhost sudo: yes connection: ssh tasks: - name: debuging module shell: ps aux register: output debug: var=output Error message: ERROR! conflicting action statements The error appears to have been in '/home/test/playbooks/debug.yml': line 7, column 7, but may be present elsewhere in the file depending on the exact syntax problem. The offending line appears to be here: tasks: - name: debuging module ^ here
|
Conflicting action statement in ansible I am getting the following error : Conflicting action statement in ansible . I tried to understand, my code seems to be correct. I declared the name correctly though it gives an error in the task name. Playbook: --- - hosts: webhost sudo: yes connection: ssh tasks: - name: debuging module shell: ps aux register: output debug: var=output Error message: ERROR! conflicting action statements The error appears to have been in '/home/test/playbooks/debug.yml': line 7, column 7, but may be present elsewhere in the file depending on the exact syntax problem. The offending line appears to be here: tasks: - name: debuging module ^ here
|
ansible, ansible-2.x
| 33
| 109,896
| 1
|
https://stackoverflow.com/questions/38034777/conflicting-action-statement-in-ansible
|
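The `conflicting action statements` error above comes from putting two modules (`shell` and `debug`) under one task; each task may carry exactly one action, so the fix is to split them — a sketch:

```yaml
tasks:
  - name: debugging module
    shell: ps aux
    register: output

  - name: show the captured output
    debug:
      var: output
```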
34,621,799
|
Ansible: How do I avoid registering a variable when a "when" condition is *not* met?
|
I have the following Ansible Playbook code: - name: Users | Generate password for user (Debian/Ubuntu) shell: makepasswd --chars=20 register: make_password when: ansible_distribution in ['Debian', 'Ubuntu'] - name: Users | Generate password for user (Fedora) shell: makepasswd -m 20 -M 20 register: make_password when: ansible_distribution in ['Fedora', 'Amazon'] - name: Users | Generate password for user (CentOS) shell: mkpasswd -l 20 register: make_password when: ansible_distribution in ['CentOS'] - name: debug debug: var=make_password Which outputs: TASK: [users | debug] ok: [127.0.0.1] => { "var": { "make_password": { "changed": false, "skipped": true } } } ... Because every register block gets executed regardless of the when condition. How would I fix this so make_password doesn't get overwritten when the when condition isn't met? Or if this is the wrong approach for what you can see that I'm trying to accomplish, let me know of a better one.
|
Ansible: How do I avoid registering a variable when a "when" condition is *not* met? I have the following Ansible Playbook code: - name: Users | Generate password for user (Debian/Ubuntu) shell: makepasswd --chars=20 register: make_password when: ansible_distribution in ['Debian', 'Ubuntu'] - name: Users | Generate password for user (Fedora) shell: makepasswd -m 20 -M 20 register: make_password when: ansible_distribution in ['Fedora', 'Amazon'] - name: Users | Generate password for user (CentOS) shell: mkpasswd -l 20 register: make_password when: ansible_distribution in ['CentOS'] - name: debug debug: var=make_password Which outputs: TASK: [users | debug] ok: [127.0.0.1] => { "var": { "make_password": { "changed": false, "skipped": true } } } ... Because every register block gets executed regardless of the when condition. How would I fix this so make_password doesn't get overwritten when the when condition isn't met? Or if this is the wrong approach for what you can see that I'm trying to accomplish, let me know of a better one.
|
ansible
| 33
| 39,785
| 4
|
https://stackoverflow.com/questions/34621799/ansible-how-do-i-avoid-registering-a-variable-when-a-when-condition-is-not
|
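For the skipped-register question above, one way to avoid the overwrite entirely is to collapse the three distro-specific tasks into a single task that looks the command up from a dictionary, so `register` runs exactly once — a sketch, assuming the same commands as in the question:

```yaml
vars:
  password_cmd:
    Debian: makepasswd --chars=20
    Ubuntu: makepasswd --chars=20
    Fedora: makepasswd -m 20 -M 20
    Amazon: makepasswd -m 20 -M 20
    CentOS: mkpasswd -l 20

tasks:
  - name: Users | Generate password for user
    shell: "{{ password_cmd[ansible_distribution] }}"
    register: make_password

  - debug:
      var: make_password
```

Alternatively, keep the three tasks and guard later uses with `when: make_password is not skipped`.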
39,370,364
|
when to use fabric or ansible?
|
OVERVIEW I'd like to have reliable django deployments and I think I'm not following the best practices here. Till now I've been using fabric as a configuration management tool in order to deploy my django sites but I'm not sure that's the best way to go. In the high performance django book there is a warning which says: Fabric is not a configuration management tool. Trying to use it as one will ultimately cause you heartache and pain. Fabric is an excellent choice for executing scripts in one or more remote systems, but that's only a small piece of the puzzle. Don't reinvent the wheel by building your own configuration management system on top of fabric So, I've decided I want to learn ansible. QUESTIONS Does it make sense using both fabric and ansible tools somehow? Is it possible to use ansible from my windows development environment to deploy to production centos(6/7) servers? There is this nice site [URL] which contains a lot of playbooks, any good recommendation to deploy django on centos servers?
|
when to use fabric or ansible? OVERVIEW I'd like to have reliable django deployments and I think I'm not following the best practices here. Till now I've been using fabric as a configuration management tool in order to deploy my django sites but I'm not sure that's the best way to go. In the high performance django book there is a warning which says: Fabric is not a configuration management tool. Trying to use it as one will ultimately cause you heartache and pain. Fabric is an excellent choice for executing scripts in one or more remote systems, but that's only a small piece of the puzzle. Don't reinvent the wheel by building your own configuration management system on top of fabric So, I've decided I want to learn ansible. QUESTIONS Does it make sense using both fabric and ansible tools somehow? Is it possible to use ansible from my windows development environment to deploy to production centos(6/7) servers? There is this nice site [URL] which contains a lot of playbooks, any good recommendation to deploy django on centos servers?
|
django, deployment, centos, ansible, fabric
| 33
| 10,758
| 1
|
https://stackoverflow.com/questions/39370364/when-to-use-fabric-or-ansible
|
23,919,744
|
How to restart Jenkins using Ansible and wait for it to come back?
|
I'm trying to restart the Jenkins service using Ansible: - name: Restart Jenkins to make the plugin data available service: name=jenkins state=restarted - name: Wait for Jenkins to restart wait_for: host=localhost port=8080 delay=20 timeout=300 - name: Install Jenkins plugins command: java -jar {{ jenkins_cli_jar }} -s {{ jenkins_dashboard_url }} install-plugin {{ item }} creates=/var/lib/jenkins/plugins/{{ item }}.jpi with_items: jenkins_plugins But on the first run, the third task throws lots of Java errors including this: Suppressed: java.io.IOException: Server returned HTTP response code: 503 for URL , which makes me think the web server (handled entirely by Jenkins) wasn't ready. Sometimes when I go to the Jenkins dashboard using my browser it says that Jenkins isn't ready and that it will reload when it is, and it does, it works fine. But I'm not sure if accessing the page is what starts the server, or what. So I guess what I need is to curl many times until the http code is 200? Is there any other way? Either way, how do I do that? How do you normally restart Jenkins?
|
How to restart Jenkins using Ansible and wait for it to come back? I'm trying to restart the Jenkins service using Ansible: - name: Restart Jenkins to make the plugin data available service: name=jenkins state=restarted - name: Wait for Jenkins to restart wait_for: host=localhost port=8080 delay=20 timeout=300 - name: Install Jenkins plugins command: java -jar {{ jenkins_cli_jar }} -s {{ jenkins_dashboard_url }} install-plugin {{ item }} creates=/var/lib/jenkins/plugins/{{ item }}.jpi with_items: jenkins_plugins But on the first run, the third task throws lots of Java errors including this: Suppressed: java.io.IOException: Server returned HTTP response code: 503 for URL , which makes me think the web server (handled entirely by Jenkins) wasn't ready. Sometimes when I go to the Jenkins dashboard using my browser it says that Jenkins isn't ready and that it will reload when it is, and it does, it works fine. But I'm not sure if accessing the page is what starts the server, or what. So I guess what I need is to curl many times until the http code is 200? Is there any other way? Either way, how do I do that? How do you normally restart Jenkins?
|
jenkins, jenkins-plugins, ansible
| 32
| 22,642
| 5
|
https://stackoverflow.com/questions/23919744/how-to-restart-jenkins-using-ansible-and-wait-for-it-to-come-back
|
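For the Jenkins question above, polling the dashboard with the `uri` module retries until Jenkins actually serves HTTP 200, rather than merely having the port open — a hedged sketch (the retry counts are placeholders):

```yaml
- name: Restart Jenkins to make the plugin data available
  service:
    name: jenkins
    state: restarted

- name: Wait until the Jenkins web UI answers with HTTP 200
  uri:
    url: "{{ jenkins_dashboard_url }}"
    status_code: 200
  register: result
  until: result.status == 200
  retries: 30
  delay: 10
```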
61,948,417
|
How to measure and display time taken for tasks when running ansible-playbook?
|
I have one playbook and in this playbook, there are so many tasks. I need to know which task has taken how much time? Is there any solution?
|
How to measure and display time taken for tasks when running ansible-playbook? I have one playbook and in this playbook, there are so many tasks. I need to know which task has taken how much time? Is there any solution?
|
time, ansible, ansible-role
| 32
| 39,695
| 5
|
https://stackoverflow.com/questions/61948417/how-to-measure-and-display-time-taken-for-tasks-when-running-ansible-playbook
|
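For the timing question above, Ansible ships a `profile_tasks` callback plugin that prints per-task durations and a sorted summary at the end of the run — an ansible.cfg sketch:

```ini
# ansible.cfg
[defaults]
# Ansible 2.11+ (the plugin lives in the ansible.posix collection):
callbacks_enabled = ansible.posix.profile_tasks
# older releases instead used:
# callback_whitelist = profile_tasks
```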
38,203,317
|
Ansible: no hosts matched
|
I'm trying to execute my first remote shell script on Ansible. I've first generated and copied the SSH keys. Here is my yml file: --- - name: Ansible remote shell hosts: 192.168.10.1 user: myuser1 become: true become_user: jboss tasks: - name: Hello server shell: /home/jboss/script.sh When launching the playbook however, the outcome is "no hosts matched": ansible-playbook setup.yml PLAY [Ansible remote shell ******************************************** skipping: no hosts matched PLAY RECAP ******************************************************************** I've tried also using the host name (instead of the IP address), however nothing changed. Any help ?
|
Ansible: no hosts matched I'm trying to execute my first remote shell script on Ansible. I've first generated and copied the SSH keys. Here is my yml file: --- - name: Ansible remote shell hosts: 192.168.10.1 user: myuser1 become: true become_user: jboss tasks: - name: Hello server shell: /home/jboss/script.sh When launching the playbook however, the outcome is "no hosts matched": ansible-playbook setup.yml PLAY [Ansible remote shell ******************************************** skipping: no hosts matched PLAY RECAP ******************************************************************** I've tried also using the host name (instead of the IP address), however nothing changed. Any help ?
|
ansible
| 32
| 140,883
| 3
|
https://stackoverflow.com/questions/38203317/ansible-no-hosts-matched
|
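For the `no hosts matched` question above: `hosts:` is matched against the inventory, so the play only runs if 192.168.10.1 (or a group containing it) exists in the inventory Ansible is actually reading — a sketch:

```yaml
# inventory (e.g. /etc/ansible/hosts) -- the address must be listed,
# optionally under a group:
#
#   [webservers]
#   192.168.10.1
#
# the play then targets that group (or `all`):
- name: Ansible remote shell
  hosts: webservers
  remote_user: myuser1
  become: true
  become_user: jboss
  tasks:
    - name: Hello server
      shell: /home/jboss/script.sh
```

Running `ansible-playbook -i /path/to/inventory setup.yml` makes explicit which inventory is consulted.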
24,835,706
|
How can escape colon in a string within an Ansible YAML file?
|
I want to change one line of my code in file /var/www/kibana/config.js during installation from elasticsearch: "[URL] to elasticsearch: "[URL] Here I tried to use lineinfile to do that as shown below - name: Comment out elasticsearch the config.js to ElasticSearch server lineinfile: dest=/var/www/kibana/config.js backrefs=true regexp="(elasticsearch.* \"http.*)$" line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" " state=present I have set variables of {{elasticsearch_URL}} and {{elasticsearch_port}} to [URL] and 9200 , respectively. Here is the error message I met: ERROR: Syntax Error while loading YAML script, /Users/shuoy/devops_workspace/ansible_work/logging-for-openstack/roles/kibana/tasks/Debian.yml Note: The error may actually appear before this position: line 29, column 25 regexp="(elasticsearch.* \"http.*)$" line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" " ^
|
How can escape colon in a string within an Ansible YAML file? I want to change one line of my code in file /var/www/kibana/config.js during installation from elasticsearch: "[URL] to elasticsearch: "[URL] Here I tried to use lineinfile to do that as shown below - name: Comment out elasticsearch the config.js to ElasticSearch server lineinfile: dest=/var/www/kibana/config.js backrefs=true regexp="(elasticsearch.* \"http.*)$" line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" " state=present I have set variables of {{elasticsearch_URL}} and {{elasticsearch_port}} to [URL] and 9200 , respectively. Here is the error message I met: ERROR: Syntax Error while loading YAML script, /Users/shuoy/devops_workspace/ansible_work/logging-for-openstack/roles/kibana/tasks/Debian.yml Note: The error may actually appear before this position: line 29, column 25 regexp="(elasticsearch.* \"http.*)$" line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" " ^
|
regex, ansible
| 32
| 94,503
| 6
|
https://stackoverflow.com/questions/24835706/how-can-escape-colon-in-a-string-within-an-ansible-yaml-file
|
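For the colon question above, the colon only trips the parser because the module arguments are written in the inline key=value form; in YAML dictionary form each value is an ordinary quoted string and nothing needs escaping — a sketch:

```yaml
- name: Point config.js at the ElasticSearch server
  lineinfile:
    dest: /var/www/kibana/config.js
    backrefs: yes
    regexp: '(elasticsearch.* "http.*)$'
    # single-quoted, so the colon and double quotes pass through untouched
    line: 'elasticsearch: "{{ elasticsearch_URL }}:{{ elasticsearch_port }}"'
    state: present
```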
35,232,088
|
'ansible_date_time' is undefined
|
Trying to register an ec2 instance in AWS with Ansible's ec2_ami module, and using current date/time as version (we'll end up making a lot of AMIs in the future). This is what I have: - name: Create new AMI hosts: localhost connection: local gather_facts: false vars: tasks: - include_vars: ami_vars.yml - debug: var=ansible_date_time - name: Register ec2 instance as AMI ec2_ami: aws_access_key={{ ec2_access_key }} aws_secret_key={{ ec2_secret_key }} instance_id={{ temp_instance.instance_ids[0] }} region={{ region }} wait=yes name={{ ami_name }} with_items: temp_instance register: new_ami From ami_vars.yml: ami_version: "{{ ansible_date_time.iso8601 }}" ami_name: ami_test_{{ ami_version }} When I run the full playbook, I get this error message: fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! ERROR! ERROR! 'ansible_date_time' is undefined"} However, when run the debug command separately, from a separate playbook, it works fine: - name: Test date-time lookup hosts: localhost connection: local tasks: - include_vars: ami_vars.yml - debug: msg="ami version is {{ ami_version }}" - debug: msg="ami name is {{ ami_name }}" Result: TASK [debug] ******************************************************************* ok: [localhost] => { "msg": "ami version is 2016-02-05T19:32:24Z" } TASK [debug] ******************************************************************* ok: [localhost] => { "msg": "ami name is ami_test_2016-02-05T19:32:24Z" } Any idea what's going on?
|
'ansible_date_time' is undefined Trying to register an ec2 instance in AWS with Ansible's ec2_ami module, and using current date/time as version (we'll end up making a lot of AMIs in the future). This is what I have: - name: Create new AMI hosts: localhost connection: local gather_facts: false vars: tasks: - include_vars: ami_vars.yml - debug: var=ansible_date_time - name: Register ec2 instance as AMI ec2_ami: aws_access_key={{ ec2_access_key }} aws_secret_key={{ ec2_secret_key }} instance_id={{ temp_instance.instance_ids[0] }} region={{ region }} wait=yes name={{ ami_name }} with_items: temp_instance register: new_ami From ami_vars.yml: ami_version: "{{ ansible_date_time.iso8601 }}" ami_name: ami_test_{{ ami_version }} When I run the full playbook, I get this error message: fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! ERROR! ERROR! 'ansible_date_time' is undefined"} However, when run the debug command separately, from a separate playbook, it works fine: - name: Test date-time lookup hosts: localhost connection: local tasks: - include_vars: ami_vars.yml - debug: msg="ami version is {{ ami_version }}" - debug: msg="ami name is {{ ami_name }}" Result: TASK [debug] ******************************************************************* ok: [localhost] => { "msg": "ami version is 2016-02-05T19:32:24Z" } TASK [debug] ******************************************************************* ok: [localhost] => { "msg": "ami name is ami_test_2016-02-05T19:32:24Z" } Any idea what's going on?
|
amazon-web-services, amazon-ec2, ansible
| 32
| 28,544
| 2
|
https://stackoverflow.com/questions/35232088/ansible-date-time-is-undefined
|
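For the `'ansible_date_time' is undefined` record above: the failing play sets `gather_facts: false`, and `ansible_date_time` only exists after fact gathering — the standalone test playbook works because it gathers facts by default. A minimal sketch of one fix (my addition, not part of the dataset):

```yaml
- name: Create new AMI
  hosts: localhost
  connection: local
  gather_facts: true        # facts populate ansible_date_time
  tasks:
    - include_vars: ami_vars.yml
    - debug: var=ansible_date_time
```

Alternatively, keep `gather_facts: false` and run the `setup:` module as a first task, before `include_vars`.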
28,234,430
|
Ansible integer variables in YAML
|
I'm using Ansible to deploy a webapp. I'd like to wait for the application to be running by checking that a given page returns a JSON with a given key/value. I want the task to be tried a few times before failing. I'm therefore using the combination of until / retries / delay keybwords. Issue is, I want the number of retries to be taken from a variable. If I write : retries: {{apache_test_retries}} I fall into the usual Yaml Gotcha ( [URL] ). If, instead, I write: retries: "{{apache_test_retries}}" I'm being said the value is not an integer. ValueError: invalid literal for int() with base 10: '{{apache_test_retries}}' Here is my full code: - name: Wait for the application to be running local_action: uri url=[URL] timeout=60 register: res sudo: false when: updated.changed and apache_test_url is defined until: res.status == 200 and res['json'] is defined and res['json']['status'] == 'UP' retries: "{{apache_test_retries}}" delay: 1 Any idea on how to work around this issue? Thanks.
|
Ansible integer variables in YAML I'm using Ansible to deploy a webapp. I'd like to wait for the application to be running by checking that a given page returns a JSON with a given key/value. I want the task to be tried a few times before failing. I'm therefore using the combination of until / retries / delay keybwords. Issue is, I want the number of retries to be taken from a variable. If I write : retries: {{apache_test_retries}} I fall into the usual Yaml Gotcha ( [URL] ). If, instead, I write: retries: "{{apache_test_retries}}" I'm being said the value is not an integer. ValueError: invalid literal for int() with base 10: '{{apache_test_retries}}' Here is my full code: - name: Wait for the application to be running local_action: uri url=[URL] timeout=60 register: res sudo: false when: updated.changed and apache_test_url is defined until: res.status == 200 and res['json'] is defined and res['json']['status'] == 'UP' retries: "{{apache_test_retries}}" delay: 1 Any idea on how to work around this issue? Thanks.
|
yaml, jinja2, ansible
| 32
| 86,073
| 4
|
https://stackoverflow.com/questions/28234430/ansible-integer-variables-in-yaml
|
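For the `retries` record above: one workaround commonly suggested for older Ansible releases is to cast the templated value explicitly with the `int` filter; recent Ansible versions coerce the quoted template to an integer on their own. A hedged sketch (my addition):

```yaml
until: res.status == 200 and res['json'] is defined and res['json']['status'] == 'UP'
retries: "{{ apache_test_retries | int }}"
delay: 1
```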
31,731,756
|
Ansible and changed_when based on stdout value
|
I am running a custom command because I haven't found a working module doing what I need, and I want to adjust the changed flag to reflect the actual behaviour: - name: Remove unused images shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...' register: command_result changed_when: "command_result.stdout == 'Ignoring failure...'" - debug: var="1 {{ command_result.stdout }}" when: "command_result.stdout != 'Ignoring failure...'" - debug: var="2 {{ command_result.stdout }}" when: "command_result.stdout == 'Ignoring failure...'" (I know the shell command is ugly and could be improved by a more complex script but I don't want to for now) Running this task on an host where no Docker image can be removed gives the following output: TASK: [utils.dockercleaner | Remove unused images] **************************** changed: [cloud-host] => {"changed": true, "cmd": "[ -n \"$(docker images -q -f dangling=true)\" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...", "delta": "0:00:00.064451", "end": "2015-07-30 18:37:25.620135", "rc": 0, "start": "2015-07-30 18:37:25.555684", "stderr": "", "stdout": "Ignoring failure...", "stdout_lines": ["Ignoring failure..."], "warnings": []} TASK: [utils.dockercleaner | debug var="DIFFERENT {{ command_result.stdout }}"] *** skipping: [cloud-host] TASK: [utils.dockercleaner | debug var="EQUAL {{ command_result.stdout }}"] *** ok: [cloud-host] => { "var": { "EQUAL Ignoring failure...": "EQUAL Ignoring failure..." } } So, I have this stdout return value "stdout": "Ignoring failure..." , and the debug task shows the strings are equal, so why is the task still displayed as "changed" ? I am using ansible 1.9.1 . The documentation I am refering to is this one: [URL]
|
Ansible and changed_when based on stdout value I am running a custom command because I haven't found a working module doing what I need, and I want to adjust the changed flag to reflect the actual behaviour: - name: Remove unused images shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...' register: command_result changed_when: "command_result.stdout == 'Ignoring failure...'" - debug: var="1 {{ command_result.stdout }}" when: "command_result.stdout != 'Ignoring failure...'" - debug: var="2 {{ command_result.stdout }}" when: "command_result.stdout == 'Ignoring failure...'" (I know the shell command is ugly and could be improved by a more complex script but I don't want to for now) Running this task on an host where no Docker image can be removed gives the following output: TASK: [utils.dockercleaner | Remove unused images] **************************** changed: [cloud-host] => {"changed": true, "cmd": "[ -n \"$(docker images -q -f dangling=true)\" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...", "delta": "0:00:00.064451", "end": "2015-07-30 18:37:25.620135", "rc": 0, "start": "2015-07-30 18:37:25.555684", "stderr": "", "stdout": "Ignoring failure...", "stdout_lines": ["Ignoring failure..."], "warnings": []} TASK: [utils.dockercleaner | debug var="DIFFERENT {{ command_result.stdout }}"] *** skipping: [cloud-host] TASK: [utils.dockercleaner | debug var="EQUAL {{ command_result.stdout }}"] *** ok: [cloud-host] => { "var": { "EQUAL Ignoring failure...": "EQUAL Ignoring failure..." } } So, I have this stdout return value "stdout": "Ignoring failure..." , and the debug task shows the strings are equal, so why is the task still displayed as "changed" ? I am using ansible 1.9.1 . The documentation I am refering to is this one: [URL]
|
ansible
| 32
| 48,018
| 1
|
https://stackoverflow.com/questions/31731756/ansible-and-changed-when-based-on-stdout-value
|
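For the `changed_when` record above: as written, the expression marks the task changed precisely when the fallback `echo` ran, i.e. when nothing was removed — which matches the output shown. If the intent is the opposite (changed only when images were actually deleted), invert the comparison. Sketch (my addition):

```yaml
- name: Remove unused images
  shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...'
  register: command_result
  # changed only when the rmi branch ran, not the fallback echo
  changed_when: command_result.stdout != 'Ignoring failure...'
```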
22,469,880
|
How can I check if file has been downloaded in ansible
|
I am downloading the file with wget from ansible. - name: Download Solr shell: wget [URL] args: chdir: {{project_root}}/solr but I only want to do that if zip file does not exist in that location. Currently the system is downloading it every time.
|
How can I check if file has been downloaded in ansible I am downloading the file with wget from ansible. - name: Download Solr shell: wget [URL] args: chdir: {{project_root}}/solr but I only want to do that if zip file does not exist in that location. Currently the system is downloading it every time.
|
ansible
| 31
| 51,931
| 8
|
https://stackoverflow.com/questions/22469880/how-can-i-check-if-file-has-been-downloaded-in-ansible
|
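For the download record above: two idiomatic options are the `creates:` argument on `shell`, or the idempotent `get_url` module, which skips the download when the destination file already exists. Sketch (my addition; the URL variable and filename are placeholders, since the original URL is elided in the dataset):

```yaml
- name: Download Solr only if absent
  get_url:
    url: "{{ solr_download_url }}"            # placeholder for the elided URL
    dest: "{{ project_root }}/solr/solr.tgz"  # skipped if this file already exists
```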
37,888,760
|
Ansible "when variable == true" not behaving as expected
|
I have the following tasks in a playbook I'm writing (results listed next to the debug statement in <>): - debug: var=nrpe_installed.stat.exists <true> - debug: var=force_install <true> - debug: var=plugins_installed.stat.exists <true> - name: Run the prep include: prep.yml when: (nrpe_installed.stat.exists == false or plugins_installed.stat.exists == true or force_install == true) tags: ['prep'] - debug: var=nrpe_installed.stat.exists <true> - debug: var=force_install <true> - debug: var=force_nrpe_install <false> - name: Install NRPE include: install-nrpe.yml when: (nrpe_installed.stat.exists == false or force_install == true or force_nrpe_install == true) tags: ['install_nrpe'] vars: nrpe_url: '[URL] nrpe_md5: 3921ddc598312983f604541784b35a50 nrpe_version: 2.15 nrpe_artifact: nrpe-{{ nrpe_version }}.tar.gz nagios_ip: {{ nagios_ip }} config_dir: /home/ansible/config/ And I'm running it with the following command: ansible-playbook install.yml -i $invFile --extra-vars="hosts=webservers force_install=True" The first include runs, but the second skips with this output: skipping: [server1] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true} I'm under the impression that the conditional check should pass for all of them as force_install == true evaluates to true which should make the whole when evaluate to true (since it's a series of 'OR's). How do I get the when to run when the variables are set appropriately? 
Edit: Changing the second when for the Install NRPE include to the following works, but doesn't explain why the other one, Run the prep runs appropriately: Working: when: (not nrpe_installed.stat.exists or force_install or force_nrpe_install) Also working: when: (nrpe_installed.stat.exists == false or plugins_installed.stat.exists == true or force_install == true) Not working: when: (nrpe_installed.stat.exists == false or force_install == true or force_nrpe_install == true) The truncated (duplicates removed) output of that particular section of the play is: TASK [debug] ******************************************************************* ok: [server2] => { "nrpe_installed.stat.exists": true } TASK [debug] ******************************************************************* ok: [server2] => { "plugins_installed.stat.exists": true } TASK [debug] ******************************************************************* ok: [server2] => { "force_install": true } TASK [Run the prep] ************************************************************ included: /tasks/nrpe-install/prep.yml for server2, server3, server4, server5, server6, server7 TASK [Prep and configure for installation | Install yum packages] ************** ok: [server6] => (item=[u'gcc', u'glibc', u'glibc-common', u'gd', u'gd-devel', u'make', u'net-snmp', u'openssl-devel', u'unzip', u'tar', u'gzip', u'xinetd']) => {"changed": false, "item": ["gcc", "glibc", "glibc-common", "gd", "gd-devel", "make", "net-snmp", "openssl-devel", "unzip", "tar", "gzip", "xinetd"], "msg": "", "rc": 0, "results": ["gcc-4.1.2-55.el5.x86_64 providing gcc is already installed", "glibc-2.5-123.el5_11.3.i686 providing glibc is already installed", "glibc-common-2.5-123.el5_11.3.x86_64 providing glibc-common is already installed", "gd-2.0.33-9.4.el5_4.2.x86_64 providing gd is already installed", "gd-devel-2.0.33-9.4.el5_4.2.i386 providing gd-devel is already installed", "make-3.81-3.el5.x86_64 providing make is already installed", 
"net-snmp-5.3.2.2-20.el5.x86_64 providing net-snmp is already installed", "openssl-devel-0.9.8e-40.el5_11.x86_64 providing openssl-devel is already installed", "unzip-5.52-3.el5.x86_64 providing unzip is already installed", "tar-1.15.1-32.el5_8.x86_64 providing tar is already installed", "gzip-1.3.5-13.el5.centos.x86_64 providing gzip is already installed", "xinetd-2.3.14-20.el5_10.x86_64 providing xinetd is already installed"]} TASK [Prep and configure for installation | Make nagios group] ***************** ok: [server2] => {"changed": false, "gid": 20002, "name": "nagios", "state": "present", "system": false} TASK [Prep and configure for installation | Make nagios user] ****************** ok: [server6] => {"append": false, "changed": false, "comment": "User for Nagios NRPE", "group": 20002, "home": "/home/nagios", "move_home": false, "name": "nagios", "shell": "/bin/bash", "state": "present", "uid": 20002} TASK [debug] ******************************************************************* ok: [server2] => { "nrpe_installed.stat.exists": true } TASK [debug] ******************************************************************* ok: [server2] => { "force_install": true } TASK [debug] ******************************************************************* ok: [server2] => { "force_nrpe_install": false } TASK [Install NRPE] ************************************************************ skipping: [server2] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
|
Ansible "when variable == true" not behaving as expected I have the following tasks in a playbook I'm writing (results listed next to the debug statement in <>): - debug: var=nrpe_installed.stat.exists <true> - debug: var=force_install <true> - debug: var=plugins_installed.stat.exists <true> - name: Run the prep include: prep.yml when: (nrpe_installed.stat.exists == false or plugins_installed.stat.exists == true or force_install == true) tags: ['prep'] - debug: var=nrpe_installed.stat.exists <true> - debug: var=force_install <true> - debug: var=force_nrpe_install <false> - name: Install NRPE include: install-nrpe.yml when: (nrpe_installed.stat.exists == false or force_install == true or force_nrpe_install == true) tags: ['install_nrpe'] vars: nrpe_url: '[URL] nrpe_md5: 3921ddc598312983f604541784b35a50 nrpe_version: 2.15 nrpe_artifact: nrpe-{{ nrpe_version }}.tar.gz nagios_ip: {{ nagios_ip }} config_dir: /home/ansible/config/ And I'm running it with the following command: ansible-playbook install.yml -i $invFile --extra-vars="hosts=webservers force_install=True" The first include runs, but the second skips with this output: skipping: [server1] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true} I'm under the impression that the conditional check should pass for all of them as force_install == true evaluates to true which should make the whole when evaluate to true (since it's a series of 'OR's). How do I get the when to run when the variables are set appropriately? 
Edit: Changing the second when for the Install NRPE include to the following works, but doesn't explain why the other one, Run the prep runs appropriately: Working: when: (not nrpe_installed.stat.exists or force_install or force_nrpe_install) Also working: when: (nrpe_installed.stat.exists == false or plugins_installed.stat.exists == true or force_install == true) Not working: when: (nrpe_installed.stat.exists == false or force_install == true or force_nrpe_install == true) The truncated (duplicates removed) output of that particular section of the play is: TASK [debug] ******************************************************************* ok: [server2] => { "nrpe_installed.stat.exists": true } TASK [debug] ******************************************************************* ok: [server2] => { "plugins_installed.stat.exists": true } TASK [debug] ******************************************************************* ok: [server2] => { "force_install": true } TASK [Run the prep] ************************************************************ included: /tasks/nrpe-install/prep.yml for server2, server3, server4, server5, server6, server7 TASK [Prep and configure for installation | Install yum packages] ************** ok: [server6] => (item=[u'gcc', u'glibc', u'glibc-common', u'gd', u'gd-devel', u'make', u'net-snmp', u'openssl-devel', u'unzip', u'tar', u'gzip', u'xinetd']) => {"changed": false, "item": ["gcc", "glibc", "glibc-common", "gd", "gd-devel", "make", "net-snmp", "openssl-devel", "unzip", "tar", "gzip", "xinetd"], "msg": "", "rc": 0, "results": ["gcc-4.1.2-55.el5.x86_64 providing gcc is already installed", "glibc-2.5-123.el5_11.3.i686 providing glibc is already installed", "glibc-common-2.5-123.el5_11.3.x86_64 providing glibc-common is already installed", "gd-2.0.33-9.4.el5_4.2.x86_64 providing gd is already installed", "gd-devel-2.0.33-9.4.el5_4.2.i386 providing gd-devel is already installed", "make-3.81-3.el5.x86_64 providing make is already installed", 
"net-snmp-5.3.2.2-20.el5.x86_64 providing net-snmp is already installed", "openssl-devel-0.9.8e-40.el5_11.x86_64 providing openssl-devel is already installed", "unzip-5.52-3.el5.x86_64 providing unzip is already installed", "tar-1.15.1-32.el5_8.x86_64 providing tar is already installed", "gzip-1.3.5-13.el5.centos.x86_64 providing gzip is already installed", "xinetd-2.3.14-20.el5_10.x86_64 providing xinetd is already installed"]} TASK [Prep and configure for installation | Make nagios group] ***************** ok: [server2] => {"changed": false, "gid": 20002, "name": "nagios", "state": "present", "system": false} TASK [Prep and configure for installation | Make nagios user] ****************** ok: [server6] => {"append": false, "changed": false, "comment": "User for Nagios NRPE", "group": 20002, "home": "/home/nagios", "move_home": false, "name": "nagios", "shell": "/bin/bash", "state": "present", "uid": 20002} TASK [debug] ******************************************************************* ok: [server2] => { "nrpe_installed.stat.exists": true } TASK [debug] ******************************************************************* ok: [server2] => { "force_install": true } TASK [debug] ******************************************************************* ok: [server2] => { "force_nrpe_install": false } TASK [Install NRPE] ************************************************************ skipping: [server2] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
|
ansible
| 31
| 126,247
| 2
|
https://stackoverflow.com/questions/37888760/ansible-when-variable-true-not-behaving-as-expected
|
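For the `when variable == true` record above: values passed via `--extra-vars key=value` arrive as strings, so `force_install` is the string `"True"`, and `"True" == true` evaluates false — the first include only ran because `plugins_installed.stat.exists == true` was satisfied. Casting with the `bool` filter (or relying on bare truthiness, as the working variant does) fixes it. Sketch (my addition):

```yaml
- name: Install NRPE
  include: install-nrpe.yml
  when: not nrpe_installed.stat.exists or force_install | bool or force_nrpe_install | bool
```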
29,537,684
|
Add quotes to elemens of the list in jinja2 (ansible)
|
I have very simple line in the template: ip={{ip|join(', ')}} And I have list for ip: ip: - 1.1.1.1 - 2.2.2.2 - 3.3.3.3 But application wants IPs with quotes (ip='1.1.1.1', '2.2.2.2'). I can do it like this: ip: - "'1.1.1.1'" - "'2.2.2.2'" - "'3.3.3.3'" But it is very ugly. Is any nice way to add quotes on each element of the list in ansible? Thanks!
|
Add quotes to elemens of the list in jinja2 (ansible) I have very simple line in the template: ip={{ip|join(', ')}} And I have list for ip: ip: - 1.1.1.1 - 2.2.2.2 - 3.3.3.3 But application wants IPs with quotes (ip='1.1.1.1', '2.2.2.2'). I can do it like this: ip: - "'1.1.1.1'" - "'2.2.2.2'" - "'3.3.3.3'" But it is very ugly. Is any nice way to add quotes on each element of the list in ansible? Thanks!
|
jinja2, ansible
| 31
| 57,821
| 9
|
https://stackoverflow.com/questions/29537684/add-quotes-to-elemens-of-the-list-in-jinja2-ansible
|
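For the quoting record above: a common trick is to put the outer quotes in the template itself and use a quoted separator in `join`, so every element ends up wrapped without touching the variable. Sketch (my addition):

```jinja
ip='{{ ip | join("', '") }}'
```

With the three-address list this renders as `ip='1.1.1.1', '2.2.2.2', '3.3.3.3'`.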
29,955,605
|
How to reboot CentOS 7 with Ansible?
|
I'm trying to reboot server running CentOS 7 on VirtualBox. I use this task: - name: Restart server command: /sbin/reboot async: 0 poll: 0 ignore_errors: true Server is rebooted, but I get this error: TASK: [common | Restart server] *********************************************** fatal: [rolcabox] => SSH Error: Shared connection to 127.0.0.1 closed. It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue. FATAL: all hosts have already failed -- aborting What am I doing wrong? How can I fix this?
|
How to reboot CentOS 7 with Ansible? I'm trying to reboot server running CentOS 7 on VirtualBox. I use this task: - name: Restart server command: /sbin/reboot async: 0 poll: 0 ignore_errors: true Server is rebooted, but I get this error: TASK: [common | Restart server] *********************************************** fatal: [rolcabox] => SSH Error: Shared connection to 127.0.0.1 closed. It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue. FATAL: all hosts have already failed -- aborting What am I doing wrong? How can I fix this?
|
centos, ansible, centos7
| 31
| 48,086
| 11
|
https://stackoverflow.com/questions/29955605/how-to-reboot-centos-7-with-ansible
|
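For the reboot record above: the SSH error is expected — the connection drops the moment the host goes down — so the usual pattern is to fire the reboot asynchronously, ignore the broken connection, and wait for SSH to return (on Ansible 2.7+ the built-in `reboot` module does all of this in one task). Sketch (my addition):

```yaml
- name: Restart server
  shell: sleep 2 && /sbin/reboot
  async: 1
  poll: 0
  ignore_errors: true

- name: Wait for the server to come back
  local_action: wait_for host={{ inventory_hostname }} port=22 state=started delay=30 timeout=300
  become: false
```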
31,876,069
|
Is it possible to flatten a lists of lists with Ansible / Jinja2?
|
My basic problem is that upon creation of a set of aws servers I want to configure them to know about each other. Upon creation of each server their details are saved in a registered 'servers' var (shown below). What I really want to be able to do after creation is run a task like so: - name: Add servers details to all other servers lineinfile: dest: /path/to/configfile line: "servername={{ item.1.private_ip }}" delegate_to: "{{ item.0.public_dns_name }}" with_nested: - list_of_servers - list_of_servers Supplying the list twice to 'with_nested' is essential here. Getting a list of list is easy enough to do with: "{{ servers.results | map(attribute='tagged_instances') | list }}" which returns: [ [ { "private_ip": "ip1", "public_dns_name": "dns1" } , { ... }], [ { ... }, { ... } ] ] but how would you turn this into: [ { "private_ip": "ip1", "public_dns_name": "dns1" }, { ... }, { ... }, { ... } ] The 'servers' registered var looks like: "servers": { "changed": true, "msg": "All items completed", "results": [ { ... "tagged_instances": [ { ... "private_ip": "ip1", "public_dns_name": "dns1", ... }, { ... "private_ip": "ip2", "public_dns_name": "dns2", ... } ] }, { ... "tagged_instances": [ { ... "private_ip": "ip3", "public_dns_name": "dn3", ... }, { ... "private_ip": "ip4", "public_dns_name": "dns4", ... } ] }, ... ] } Note: I have a pretty ugly solution by using 'with_flattened' and a debug statement to create a new registered var 'flattened_servers' which I then map over again. But am hoping for a more elegant solution :)
|
Is it possible to flatten a lists of lists with Ansible / Jinja2? My basic problem is that upon creation of a set of aws servers I want to configure them to know about each other. Upon creation of each server their details are saved in a registered 'servers' var (shown below). What I really want to be able to do after creation is run a task like so: - name: Add servers details to all other servers lineinfile: dest: /path/to/configfile line: "servername={{ item.1.private_ip }}" delegate_to: "{{ item.0.public_dns_name }}" with_nested: - list_of_servers - list_of_servers Supplying the list twice to 'with_nested' is essential here. Getting a list of list is easy enough to do with: "{{ servers.results | map(attribute='tagged_instances') | list }}" which returns: [ [ { "private_ip": "ip1", "public_dns_name": "dns1" } , { ... }], [ { ... }, { ... } ] ] but how would you turn this into: [ { "private_ip": "ip1", "public_dns_name": "dns1" }, { ... }, { ... }, { ... } ] The 'servers' registered var looks like: "servers": { "changed": true, "msg": "All items completed", "results": [ { ... "tagged_instances": [ { ... "private_ip": "ip1", "public_dns_name": "dns1", ... }, { ... "private_ip": "ip2", "public_dns_name": "dns2", ... } ] }, { ... "tagged_instances": [ { ... "private_ip": "ip3", "public_dns_name": "dn3", ... }, { ... "private_ip": "ip4", "public_dns_name": "dns4", ... } ] }, ... ] } Note: I have a pretty ugly solution by using 'with_flattened' and a debug statement to create a new registered var 'flattened_servers' which I then map over again. But am hoping for a more elegant solution :)
|
jinja2, ansible
| 31
| 35,892
| 4
|
https://stackoverflow.com/questions/31876069/is-it-possible-to-flatten-a-lists-of-lists-with-ansible-jinja2
|
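For the flatten record above: Jinja2's `sum` with a list `start` concatenates sublists, and Ansible 2.5+ ships a dedicated `flatten` filter — either avoids the debug/register workaround. Sketch (my addition):

```yaml
# Plain Jinja2: sum(start=[]) concatenates the sublists
list_of_servers: "{{ servers.results | map(attribute='tagged_instances') | list | sum(start=[]) }}"
# Ansible 2.5+: flatten only the first level
list_of_servers: "{{ servers.results | map(attribute='tagged_instances') | flatten(levels=1) }}"
```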
33,563,425
|
Ansible 1.9.4 : Failed to lock apt for exclusive operation
|
I bumped into Failed to lock apt for exclusive operation issue: [URL] I posted a lot of details in GitHub. I googled a lot of "Failed to lock apt for exclusive operation" Ansible complaints, but no simple answer. Any help?
|
Ansible 1.9.4 : Failed to lock apt for exclusive operation I bumped into Failed to lock apt for exclusive operation issue: [URL] I posted a lot of details in GitHub. I googled a lot of "Failed to lock apt for exclusive operation" Ansible complaints, but no simple answer. Any help?
|
ansible
| 31
| 51,733
| 12
|
https://stackoverflow.com/questions/33563425/ansible-1-9-4-failed-to-lock-apt-for-exclusive-operation
|
21,427,577
|
GIT over SSH in Ansible hangs, eventhough ssh-agent forwarding is set up
|
I have set up everyhing I could find, but still cloning a repo from GitHub hangs the provisioning process. I have: server in known_hosts .ssh/config Host github.com ForwardAgent yes StrictHostKeyChecking no copied private key public key is in authorized_keys the command runs as vagrant user the play is: - name: Checkout from git git: repo=git@github.com:username/repositoryname.git dest=/srv/website
|
GIT over SSH in Ansible hangs, eventhough ssh-agent forwarding is set up I have set up everyhing I could find, but still cloning a repo from GitHub hangs the provisioning process. I have: server in known_hosts .ssh/config Host github.com ForwardAgent yes StrictHostKeyChecking no copied private key public key is in authorized_keys the command runs as vagrant user the play is: - name: Checkout from git git: repo=git@github.com:username/repositoryname.git dest=/srv/website
|
git, ssh, timeout, ansible, ssh-agent
| 31
| 19,578
| 6
|
https://stackoverflow.com/questions/21427577/git-over-ssh-in-ansible-hangs-eventhough-ssh-agent-forwarding-is-set-up
|
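For the agent-forwarding record above: a frequent cause is that forwarding is not applied to Ansible's own ssh invocation (and the old paramiko transport does not forward agents at all). One way to force it per inventory (Ansible 2.0+ variable; my addition, a sketch rather than a confirmed fix for this record):

```yaml
# group_vars/all.yml
ansible_ssh_common_args: '-o ForwardAgent=yes'
```

Setting `accept_hostkey: yes` on the `git` module can also avoid a silent host-key prompt hanging the task.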
46,362,787
|
How to list groups that host is member of?
|
I have a very complex Ansible setup with thousands of servers and hundreds of groups various servers are member of (dynamic inventory file). Is there any way to easily display all groups that a specific host is member of? I know how to list all groups and their members: ansible localhost -m debug -a 'var=groups' But I want to do this not for ALL hosts, but only for a single one.
|
How to list groups that host is member of? I have a very complex Ansible setup with thousands of servers and hundreds of groups various servers are member of (dynamic inventory file). Is there any way to easily display all groups that a specific host is member of? I know how to list all groups and their members: ansible localhost -m debug -a 'var=groups' But I want to do this not for ALL hosts, but only for a single one.
|
ansible
| 31
| 40,537
| 2
|
https://stackoverflow.com/questions/46362787/how-to-list-groups-that-host-is-member-of
|
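For the group-membership record above: Ansible exposes a magic variable `group_names` holding exactly the groups the current host belongs to, so no filtering of `groups` is needed. Sketch (my addition):

```yaml
- name: Show groups this host is a member of
  debug:
    var: group_names
```

Ad hoc, that is `ansible <hostname> -m debug -a 'var=group_names'`.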
40,788,575
|
Replace a line in a config file with ansible
|
I am new to ansible. Is there a simple way to replace the line starting with option domain-name-servers in /etc/dhcp/interface-br0.conf with more IPs? option domain-name-servers 10.116.184.1,10.116.144.1; I want to add ,10.116.136.1
|
Replace a line in a config file with ansible I am new to ansible. Is there a simple way to replace the line starting with option domain-name-servers in /etc/dhcp/interface-br0.conf with more IPs? option domain-name-servers 10.116.184.1,10.116.144.1; I want to add ,10.116.136.1
|
ansible
| 31
| 124,948
| 4
|
https://stackoverflow.com/questions/40788575/replace-a-line-in-a-config-file-with-ansible
|
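For the DHCP-config record above: `lineinfile` with a `regexp` anchored on the option name replaces the whole line in one idempotent task. Sketch (my addition):

```yaml
- name: Set DNS servers in dhcp config
  lineinfile:
    dest: /etc/dhcp/interface-br0.conf
    regexp: '^option domain-name-servers'
    line: 'option domain-name-servers 10.116.184.1,10.116.144.1,10.116.136.1;'
```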
42,037,814
|
Ansible IF ELSE construct
|
Here is my IF ELSE Ansible logic - name: Check certs exist stat: path=/etc/letsencrypt/live/{{ rootDomain }}/fullchain.pem register: st - include: ./_common/check-certs-renewable.yaml when: st.stat.exists - include: ./_common/create-certs.yaml when: not st.stat.exists This code boils down to IF certs exist renew certs ELSE create certs END IF Is this the correct approach or is there a better approach to the IF ELSE construct in Ansible?
|
Ansible IF ELSE construct Here is my IF ELSE Ansible logic - name: Check certs exist stat: path=/etc/letsencrypt/live/{{ rootDomain }}/fullchain.pem register: st - include: ./_common/check-certs-renewable.yaml when: st.stat.exists - include: ./_common/create-certs.yaml when: not st.stat.exists This code boils down to IF certs exist renew certs ELSE create certs END IF Is this the correct approach or is there a better approach to the IF ELSE construct in Ansible?
|
ansible, ansible-2.x
| 31
| 170,029
| 3
|
https://stackoverflow.com/questions/42037814/ansible-if-else-construct
|
26,597,926
|
Install MySQL with ansible on ubuntu
|
I have a problem installing MySQL with ansible on a vagrant ubuntu, This is my MySQL part --- - name: Install MySQL apt: name: "{{ item }}" with_items: - python-mysqldb - mysql-server - name: copy .my.cnf file with root password credentials template: src: templates/root/.my.cnf dest: ~/.my.cnf owner: root mode: 0600 - name: Start the MySQL service service: name: mysql state: started enabled: true # 'localhost' needs to be the last item for idempotency, see # [URL] - name: update mysql root password for all root accounts mysql_user: name: root host: "{{ item }}" password: "{{ mysql_root_password }}" priv: "*.*:ALL,GRANT" with_items: - "{{ ansible_hostname }}" - 127.0.0.1 - ::1 - localhost And I have this error failed: [default] => (item=vagrant-ubuntu-trusty-64) => {"failed": true, "item": "vagrant-ubuntu-trusty-64"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials failed: [default] => (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials failed: [default] => (item=::1) => {"failed": true, "item": "::1"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials failed: [default] => (item=localhost) => {"failed": true, "item": "localhost"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials my .my.cnf is [client] user=root password={{ mysql_root_password }} and when copied on the server [client] user=root password=root I don't understand why, ~/.my.cnf is created Project Github Thanks
|
Install MySQL with ansible on ubuntu I have a problem installing MySQL with ansible on a vagrant ubuntu, This is my MySQL part --- - name: Install MySQL apt: name: "{{ item }}" with_items: - python-mysqldb - mysql-server - name: copy .my.cnf file with root password credentials template: src: templates/root/.my.cnf dest: ~/.my.cnf owner: root mode: 0600 - name: Start the MySQL service service: name: mysql state: started enabled: true # 'localhost' needs to be the last item for idempotency, see # [URL] - name: update mysql root password for all root accounts mysql_user: name: root host: "{{ item }}" password: "{{ mysql_root_password }}" priv: "*.*:ALL,GRANT" with_items: - "{{ ansible_hostname }}" - 127.0.0.1 - ::1 - localhost And I have this error failed: [default] => (item=vagrant-ubuntu-trusty-64) => {"failed": true, "item": "vagrant-ubuntu-trusty-64"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials failed: [default] => (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials failed: [default] => (item=::1) => {"failed": true, "item": "::1"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials failed: [default] => (item=localhost) => {"failed": true, "item": "localhost"} msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials my .my.cnf is [client] user=root password={{ mysql_root_password }} and when copied on the server [client] user=root password=root I don't understand why, ~/.my.cnf is created Project Github Thanks
|
mysql, ubuntu, vagrant, ansible
| 31
| 54,531
| 3
|
https://stackoverflow.com/questions/26597926/install-mysql-with-ansible-on-ubuntu
|
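For the MySQL record above: on a fresh install the root account has no password yet, while `~/.my.cnf` already claims one, so the login fails. `mysql_user` supports `check_implicit_admin`, which tries a passwordless root login first and falls back to the supplied credentials. Sketch (my addition):

```yaml
- name: update mysql root password for all root accounts
  mysql_user:
    name: root
    host: "{{ item }}"
    password: "{{ mysql_root_password }}"
    check_implicit_admin: yes                  # try passwordless root first (fresh install)
    login_user: root
    login_password: "{{ mysql_root_password }}"
    priv: "*.*:ALL,GRANT"
  with_items:
    - "{{ ansible_hostname }}"
    - 127.0.0.1
    - ::1
    - localhost
```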
22,769,568
|
System specific variables in ansible
|
Ansible expects python 2. On my system (Arch Linux), "python" is Python 3, so I have to pass -e "ansible_python_interpreter=/usr/bin/python2" with every command. ansible-playbook my-playbook.yml -e "ansible_python_interpreter=/usr/bin/python2" Is there a away to set ansible_python_interpreter globally on my system, so I don't have to pass it to every command? I don't want to add it to my playbooks, as not all systems that runs the playbook has a setup similar to mine.
|
System specific variables in ansible Ansible expects python 2. On my system (Arch Linux), "python" is Python 3, so I have to pass -e "ansible_python_interpreter=/usr/bin/python2" with every command. ansible-playbook my-playbook.yml -e "ansible_python_interpreter=/usr/bin/python2" Is there a away to set ansible_python_interpreter globally on my system, so I don't have to pass it to every command? I don't want to add it to my playbooks, as not all systems that runs the playbook has a setup similar to mine.
|
ansible
| 31
| 43,048
| 3
|
https://stackoverflow.com/questions/22769568/system-specific-variables-in-ansible
|
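For the interpreter record above: the variable can live in this controller's inventory instead of on the command line, so the playbooks themselves stay portable to systems with a different Python layout. Sketch (my addition):

```yaml
# ~/.ansible/inventory/group_vars/all.yml — local inventory only, not the playbooks
ansible_python_interpreter: /usr/bin/python2
```

Newer Ansible (2.8+) alternatively accepts `interpreter_python` in the `[defaults]` section of `ansible.cfg`.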
40,340,761
|
Is it possible to have Ansible retry on connection failure?
|
I am facing this annoying bug: Ansible hosts are randomly unreachable #18188 . Is there a way to tell Ansible that if SSH connection fails, to try it once more? Or 2 times more? According this post New SSH Retry In Ansible 2.0? there is "retries" option but it doesn't seem very trustworthy to me, the person who posted didn't even get SSH header right and there is no mention of this in Ansible docs.
|
Is it possible to have Ansible retry on connection failure? I am facing this annoying bug: Ansible hosts are randomly unreachable #18188 . Is there a way to tell Ansible that if SSH connection fails, to try it once more? Or 2 times more? According this post New SSH Retry In Ansible 2.0? there is "retries" option but it doesn't seem very trustworthy to me, the person who posted didn't even get SSH header right and there is no mention of this in Ansible docs.
|
ansible, ansible-2.x
| 31
| 29,129
| 1
|
https://stackoverflow.com/questions/40340761/is-it-possible-to-have-ansible-retry-on-connection-failure
|