Question ID: 43,212,179
How to fallback to a default value when ansible lookup fails?
I was a little bit surprised to discover that this piece of code fails with an IOError exception instead of defaulting to omitting the value:

```yaml
#!/usr/bin/env ansible-playbook -i localhost,
---
- hosts: localhost
  tasks:
    - debug: msg="{{ lookup('ini', 'foo section=DEFAULT file=missing-file.conf') | default(omit) }}"
```

How can I load a value without raising an exception? Please note that the ini lookup supports a default value parameter, but that one is useless to me because it only applies when the file can be opened. I need a default value that works even when the lookup fails to open the file.
Tags: ansible, jinja2
Score: 12 | Views: 17,941 | Answers: 6
Link: https://stackoverflow.com/questions/43212179/how-to-fallback-to-a-default-value-when-ansible-lookup-fails
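One commonly cited approach (an untested sketch, assuming an Ansible version recent enough that lookups accept an `errors` parameter) is to downgrade the lookup failure so the `default` filter can take over:

```yaml
- hosts: localhost
  tasks:
    - debug:
        msg: "{{ lookup('ini', 'foo section=DEFAULT file=missing-file.conf', errors='ignore') | default('fallback-value', true) }}"
```

The second argument `true` to `default` makes the fallback apply to empty strings as well, since an ignored lookup may return an empty value rather than an undefined one.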
Question ID: 43,003,681
Extract field from JSON response with Ansible
I have a task which performs a GET request to a page. The response body is JSON like the following:

```json
{
  "ips": [
    {
      "organization": "1233124121",
      "reverse": null,
      "id": "1321411312",
      "server": { "id": "1321411", "name": "name1" },
      "address": "x.x.x.x"
    },
    {
      "organization": "2398479823",
      "reverse": null,
      "id": "2418209841",
      "server": { "id": "234979823", "name": "name2" },
      "address": "x.x.x.x"
    }
  ]
}
```

I want to extract the fields id and address, and tried (for the id field):

```yaml
tasks:
  - name: get request
    uri:
      url: "[URL]"
      method: GET
      return_content: yes
      status_code: 200
      headers:
        Content-Type: "application/json"
        X-Auth-Token: "0010101010"
      body_format: json
    register: json_response

  - name: copy ip_json content into a file
    copy: content={{ json_response.json.ips.id }} dest="/dest_path/json_response.txt"
```

but I get this error: `the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'list object' has no attribute 'id'`. Where is the problem?
Tags: json, parsing, ansible
Score: 12 | Views: 20,462 | Answers: 2
Link: https://stackoverflow.com/questions/43003681/extract-field-from-json-response-with-ansible
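The error stems from `ips` being a list: elements must be addressed by index, or the `id` fields collected across the whole list with `map`. A hedged sketch of both options:

```yaml
- name: fields of the first element only
  debug:
    msg: "{{ json_response.json.ips[0].id }} {{ json_response.json.ips[0].address }}"

- name: all ids collected into a list
  debug:
    msg: "{{ json_response.json.ips | map(attribute='id') | list }}"
```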
Question ID: 42,282,750
How to store command output into array in Ansible?
Essentially, I want to be able to handle "wildcard filenames" in Linux using Ansible. In essence, this means using the ls command with part of a filename followed by an "*" so that it will list ONLY certain files. However, I cannot store the output properly in a variable, as there will likely be more than one filename returned. Thus, I want to be able to store these results, no matter how many there might be, in an array during one task. I then want to be able to retrieve all of the results from the array in a later task. Furthermore, since I don't know how many files might be returned, I cannot write a task for each filename, so an array makes more sense. The reason behind this is that there are files in a random storage location that are changed often, but they always have the same first half. It's the second half of their names that is random, and I don't want to have to hard-code that into Ansible at all. I'm not certain at all how to properly implement/manipulate an array in Ansible, so the following code is an example of what I'm "trying" to accomplish. Obviously it won't function as intended if more than one filename is returned, which is why I was asking for assistance on this topic:

```yaml
- hosts: <randomservername>
  remote_user: remoteguy
  become: yes
  become_method: sudo
  vars:
    aaaa: b
  tasks:
    - name: Copy over all random file contents from directory on control node to target clients. This is to show how to manipulate wildcard filenames.
      copy:
        src: /opt/home/remoteguy/copyable-files/testdir/
        dest: /tmp/
        owner: remoteguy
        mode: u=rwx,g=r,o=r
      ignore_errors: yes

    - name: Determine the current filenames and store in variable for later use, obviously for this exercise we know part of the filenames.
      shell: "ls {{ item }}"
      changed_when: false
      register: annoying
      with_items: [/tmp/this-name-is-annoying*, /tmp/this-name-is-also*]

    - name: Run command to cat each file and then capture that output.
      shell: cat {{ annoying }}
      register: annoying_words

    - debug: msg="Here is the output of the two files. {{ annoying_words.stdout_lines }}"

    - name: Now, remove the wildcard files from each server to clean up.
      file:
        path: '{{ item }}'
        state: absent
      with_items:
        - "{{ annoying.stdout }}"
```

I understand the YAML format got a little mussed up, but if it's fixed, this "would" run normally; it just won't give me the output I'm looking for. Thus, if there were 50 files, I'd want Ansible to be able to manipulate them all, and/or delete them all, etc. If anyone here could let me know how to properly utilize an array in the above test code fragment, that would be fantastic!
Tags: arrays, linux, ansible, ansible-facts
Score: 12 | Views: 34,390 | Answers: 3
Link: https://stackoverflow.com/questions/42282750/how-to-store-command-output-into-array-in-ansible
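One hedged way to get a real array: run a single `ls` with both globs and register it, then use `stdout_lines`, which is already a list with one filename per element (the `find` module would be the more idiomatic alternative):

```yaml
- name: List all matching files into an array
  shell: "ls /tmp/this-name-is-annoying* /tmp/this-name-is-also*"
  changed_when: false
  register: annoying

- name: Use every element of the array in a later task
  file:
    path: "{{ item }}"
    state: absent
  with_items: "{{ annoying.stdout_lines }}"
```

This works for any number of matches, so 50 files are handled the same way as one.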
Question ID: 23,894,739
Synced folders lost when rebooting a Vagrant machine using the Ansible provisioner
Vagrant creates a development environment using VirtualBox and then provisions it using Ansible. As part of the provisioning, Ansible runs a reboot and then waits for SSH to come back up. This works as expected, but because the Vagrant machine is not being started by a "vagrant up" command, the synced folders are not mounted properly when the box comes back up from the reboot. Running "vagrant reload" fixes the machine and mounts the shares again. Is there a way of either telling Vagrant to reload the server, or of doing all the bits 'n bobs that Vagrant would have done after a manual restart? Simply running "sudo reboot" when SSH-ed into the Vagrant box also produces the same problem.
Tags: linux, ssh, vagrant, reboot, ansible
Score: 12 | Views: 4,881 | Answers: 5
Link: https://stackoverflow.com/questions/23894739/synced-folders-lost-when-rebooting-a-vagrant-machine-using-the-ansible-provision
Question ID: 63,672,853
ansible config file not found; using defaults
I have created a venv called ansible and installed Ansible using `pip3 install ansible`. Now, checking the Ansible version shows `config file = None`:

```
ansible --version
ansible 2.9.12
  config file = None
  configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ansible/ansible/lib/python3.7/site-packages/ansible
  executable location = /home/ansible/ansible/bin/ansible
  python version = 3.7.4 (default, Aug 18 2019, 12:04:45) [GCC 5.4.0 20160609]
```

When I run an ansible command it says `No config file found; using defaults`, but which is that defaults file, and where can it be found?
Tags: ansible
Score: 12 | Views: 48,474 | Answers: 6
Link: https://stackoverflow.com/questions/63672853/ansible-config-file-not-found-using-defaults
Question ID: 57,864,828
How to indent nested if/for statements in jinja2
I have a long Jinja2 template which has many nested if/for statements. It's very hard to read. I would like to indent the `{% %}` bits to make it clearer. However, if I do that, the contents of those blocks get indented further too. How can I indent just the `{% %}` bits? I'm using Ansible.

Steps to reproduce:

template.yaml.j2

```
{% for x in range(3) %}
Key{{ x }}:
  # The following should be one list
  - always here
  {% if x % 2 %}
  - sometimes here
  {% endif %}
{% endfor %}
```

playbook.yaml

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - template:
        src: template.j2
        dest: template.yaml
```

Run with `ansible-playbook playbook.yaml`.

Desired output:

```yaml
Key0:
  # The following should be one list
  - always here
Key1:
  # The following should be one list
  - always here
  - sometimes here
Key2:
  # The following should be one list
  - always here
```

Actual behavior (the indentation of the `{% if %}` block leaks into the output):

```yaml
Key0:
  # The following should be one list
  - always here
Key1:
  # The following should be one list
  - always here
    - sometimes here
Key2:
  # The following should be one list
  - always here
```

Workaround: if I unindent the if statements like

```
{% for x in range(3) %}
Key{{ x }}:
  # The following should be one list
  - always here
{% if x % 2 %}
  - sometimes here
{% endif %}
{% endfor %}
```

then I get the output I want. But the problem is that this is hard to read. (In my actual template, I have if statements inside for inside if, etc. Highly nested.)
Tags: nested, ansible, jinja2, indentation
Score: 12 | Views: 13,874 | Answers: 2
Link: https://stackoverflow.com/questions/57864828/how-to-indent-nested-if-for-statements-in-jinja2
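One frequently suggested approach (worth verifying against your Ansible version) is to enable Jinja2's `lstrip_blocks`, which strips whitespace to the left of `{% %}` tags, so block tags can be indented freely without the indentation reaching the output. The template module exposes it directly from Ansible 2.6 on:

```yaml
- template:
    src: template.j2
    dest: template.yaml
    lstrip_blocks: yes   # strip leading whitespace before {% %} tags (Ansible 2.6+)
    trim_blocks: yes     # drop the newline after a block tag (already Ansible's default)
```

An often-cited alternative is a `#jinja2: lstrip_blocks: True` override line as the first line of the template itself. Note that `lstrip_blocks` only affects the `{% %}` tags; the list items keep whatever indentation they are given.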
Question ID: 43,636,503
Ansible - How to check the physical memory and free disk is greater than some value?
How do I write an Ansible task to check that the physical memory is >= 128 MB and the free disk space is >= 256 MB? I tried to get the output, but I am not sure how to proceed further:

```yaml
# Check the physical memory 128 MB and free disk 256 MB
- name: check the physical memory
  command: vmstat -s
  register: phy_mem
```
Tags: linux, ansible, ansible-2.x
Score: 12 | Views: 38,892 | Answers: 1
Link: https://stackoverflow.com/questions/43636503/ansible-how-to-check-the-physical-memory-and-free-disk-is-greater-than-some-va
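Gathered facts already carry both numbers, so the check can be expressed with `assert` instead of parsing `vmstat` output. A sketch, assuming fact gathering is enabled and the disk of interest is the root mount:

```yaml
- name: Check physical memory >= 128 MB and free disk on / >= 256 MB
  assert:
    that:
      - ansible_memtotal_mb | int >= 128
      - (ansible_mounts | selectattr('mount', 'equalto', '/') | map(attribute='size_available') | first | int) >= 256 * 1024 * 1024
    msg: "Not enough memory or free disk space"
```

`size_available` is reported in bytes, hence the `256 * 1024 * 1024` comparison.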
Question ID: 26,685,544
Rolling restart with ansible handlers
I want to run an Ansible playbook that installs a service and restarts it if anything has changed since the last run (more or less the canonical use case for Ansible handlers). But I want a different parallelism for installing than for restarting: I want to install on all the hosts at once, but if the "service-restart" handler gets invoked, I want that to run on X hosts at a time. I know this is possible with different plays that have different serial values, but I can't see how I could make use of handlers if I go this route. And I can't afford a single playbook with a serial value like `2`, as most of the time nothing will change for that service. Can handlers span multiple plays? Or is there any other way to do this without hacks?
Tags: ansible
Score: 12 | Views: 12,969 | Answers: 3
Link: https://stackoverflow.com/questions/26685544/rolling-restart-with-ansible-handlers
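Handlers cannot span plays, but the restart decision can: record whether anything changed as a per-host fact in the full-parallelism play, then restart in a second play with a small `serial`. A hedged sketch; the service name `myservice` is a stand-in:

```yaml
- hosts: appservers
  tasks:
    - name: Install/update the service (runs on all hosts at once)
      apt:
        name: myservice
        state: latest
      register: install_result

    - name: Remember per host whether a restart is needed
      set_fact:
        needs_restart: "{{ install_result is changed }}"

- hosts: appservers
  serial: 2   # rolling restart, two hosts at a time
  tasks:
    - name: Restart only where something actually changed
      service:
        name: myservice
        state: restarted
      when: needs_restart | default(false)
```

When nothing changed, the second play visits every host but restarts none, so the cost of the small `serial` is negligible.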
Question ID: 33,528,212
Copy file to ansible host with custom variables substituted
I'm working on an ansible-playbook which should help to generate build agents for a continuous delivery pipeline. Among other things, I'll need to install an Oracle client on such an agent. I want to do something like:

```yaml
- name: "Provide response file"
  copy: src=/custom.rsp dest=/opt/oracle
```

Within the custom.rsp file I've got some variables to be substituted. Normally, one could do it with a separate shell command like this:

```yaml
- name: "Substitute Vars"
  shell: "sed 's|<PARAMETER>|<VALUE>|g' -i /opt/oracle/custom.rsp"
```

I don't like it, though. There should be a more convenient way to do this. Can anybody give me a hint?
Tags: copy, ansible
Score: 12 | Views: 10,821 | Answers: 2
Link: https://stackoverflow.com/questions/33528212/copy-file-to-ansible-host-with-custom-variables-substituted
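The template module does exactly this: it renders Jinja2 variables in the source file before copying it to the host. A sketch, assuming custom.rsp is turned into a template with `{{ ... }}` placeholders:

```yaml
- name: "Provide response file with variables substituted"
  template:
    src: custom.rsp.j2
    dest: /opt/oracle/custom.rsp
```

The variables referenced inside the template are resolved from the play's vars, host vars, or facts, so no sed pass is needed.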
Question ID: 64,880,142
FAILED! => {"changed": false, "msg": "apt cache update failed"} when trying to
I am new to Ansible and am trying, as an example, a task to install Vivaldi. My only task in a role "Vivaldi update" starts with:

```yaml
- name: Run apt upgrade
  apt:
    upgrade: "yes"
    update_cache: yes
    cache_valid_time: 432000

- name: Add Vivaldi Repository
  apt_repository:
    repo: "deb [URL] stable main"
    state: present
    filename: vivaldi.list
    update_cache: true
  tags:
    - vivaldi
```

And with this, I fail on localhost on a Debian 10 (Buster) installation (Linux london 4.19.0-12-amd64 #1 SMP Debian 4.19.152-1 (2020-10-18) x86_64 GNU/Linux). All commands succeed on the command line. Ansible is 2.9.15. The first task runs OK (if run alone), but the second fails with:

FAILED! => {"changed": false, "msg": "apt cache update failed"}

A task to add a repository key fails with:

FAILED! => {"changed": false, "id": "6D3789EDC3401E12", "msg": "key does not seem to have been added"}

However, if I add the repository manually to /etc/apt/sources.list, the last task,

```yaml
- name: Install Vivaldi
  apt:
    name: vivaldi-stable
    update_cache: yes
    state: latest
  tags:
    - vivaldi
```

succeeds. What am I doing wrong?
Tags: ansible, debian
Score: 12 | Views: 45,363 | Answers: 1
Link: https://stackoverflow.com/questions/64880142/failed-changed-false-msg-apt-cache-update-failed-when-trying-to
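A frequent cause of both errors is that apt cannot validate the new repository: the signing key has to be in apt's keyring before apt_repository runs its cache update, and an https repository on Debian 10 may additionally need the https transport. This is a hedged sketch, not a confirmed diagnosis; `<KEY_URL>` and `<REPO_URL>` are placeholders mirroring the URL redacted as [URL] in the question:

```yaml
- name: Ensure apt can fetch https repositories
  apt:
    name: apt-transport-https
    state: present

- name: Add the repository signing key before adding the repo
  apt_key:
    url: "<KEY_URL>"
    state: present

- name: Add Vivaldi repository (cache update succeeds once the key is in place)
  apt_repository:
    repo: "deb <REPO_URL> stable main"
    state: present
    filename: vivaldi
    update_cache: yes
```

The "key does not seem to have been added" failure can also indicate a missing gnupg package on minimal installs; running the play with -vvv usually surfaces the underlying apt-key error.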
Question ID: 58,559,043
ansible: create a list from comma separated string
I want to create a list from a comma-separated string to pass to a loop in Ansible. Sometimes the variable has only one value: it can be var1=test1,test2 and it can also be var1=test1. Here is my code:

```yaml
- name: Separate facts
  set_fact: groups="{{ var1.split(',') }}"

- name: delete
  gcp_compute_instance_group:
    name: "{{ item }}"
    zone: xxx
    project: xxx
    auth_kind: serviceaccount
    service_account_file: xxx
    state: absent
  loop: "{{ groups }}"
```

This doesn't work. How can I achieve my requirement?
Tags: ansible
Score: 12 | Views: 36,444 | Answers: 1
Link: https://stackoverflow.com/questions/58559043/ansible-create-a-list-from-comma-separated-string
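The split itself is correct; the likely culprit (an assumption, but consistent with the symptom) is the fact name: `groups` is a built-in Ansible variable holding the inventory group map, so the set_fact value never reaches the loop. Renaming the fact sidesteps this, and `split(',')` on a single-element string still yields a one-element list:

```yaml
- name: Separate facts
  set_fact:
    group_list: "{{ var1.split(',') }}"   # 'group_list' avoids the reserved name 'groups'

- name: delete
  gcp_compute_instance_group:
    name: "{{ item }}"
    zone: xxx
    project: xxx
    auth_kind: serviceaccount
    service_account_file: xxx
    state: absent
  loop: "{{ group_list }}"
```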
Question ID: 44,708,068
Can Ansible deploy public SSH key asking password only once?
I wonder how to copy my SSH public key to many hosts using Ansible.

First attempt:

```
ansible all -i inventory -m local_action -a "ssh-copy-id {{ inventory_hostname }}" --ask-pass
```

But I get the error `The module local_action was not found in configured module paths`.

Second attempt, using a playbook:

```yaml
- hosts: all
  become: no
  tasks:
    - local_action: command ssh-copy-id {{ inventory_hostname }}
```

Finally, I ended up entering my password for each managed host:

```
ansible all -i inventory --list-hosts | while read h ; do ssh-copy-id "$h" ; done
```

How can I enter the password only once while deploying my public SSH key to many hosts?

EDIT: I have succeeded in copying my SSH public key to multiple remote hosts using the following playbook from Konstantin Suvorov's answer:

```yaml
- hosts: all
  tasks:
    - authorized_key:
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```

The field `user` should be mandatory according to the documentation, but it seems to work without it. Therefore the above generic playbook may be used for any user when run with this command line:

```
ansible-playbook -i inventory authorized_key.yml -u "$USER" -k
```
Tags: ansible, ssh-keys, ansible-2.x
Score: 12 | Views: 13,375 | Answers: 1
Link: https://stackoverflow.com/questions/44708068/can-ansible-deploy-public-ssh-key-asking-password-only-once
Question ID: 34,560,622
Best way to get the IP address of the Ansible control machine
I am using Ansible and ufw to set up a firewall on my servers. As part of the ufw rules, I would like to allow SSH from the Ansible control machine, but not from anywhere else. My question is: what is the best way to get the IP address of the control machine itself so I can put it into the rule? I'm aware that I can use facts to get the IP address of the machine I am running the playbook on, but I don't see any easy way to get it automatically for the machine that is running Ansible. I'd like to avoid adding a new variable to represent this if possible, since it would be nice if it was automatically discoverable, though if that's the only known best way to do it, then I will just do that. EDIT: I found this duplicate question which is the same as mine; however, it is also unanswered, so I will leave this open for a bit.
Tags: ansible, ansible-facts
Score: 12 | Views: 9,167 | Answers: 7
Link: https://stackoverflow.com/questions/34560622/best-way-to-get-the-ip-address-of-the-ansible-control-machine
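One commonly suggested trick (assuming fact gathering is on and the hosts are reached over SSH rather than a local connection) is that each managed host already sees the controller's address: it is the first field of the SSH_CLIENT environment variable of the Ansible session, so it is automatically discoverable without a new variable:

```yaml
- name: Allow SSH only from the machine running Ansible
  ufw:
    rule: allow
    port: "22"
    proto: tcp
    src: "{{ ansible_env.SSH_CLIENT.split() | first }}"
```

Caveat: this captures the address the host sees the controller as, which may be a NAT or jump-host address rather than the controller's own interface address.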
Question ID: 30,660,080
Ansible set_fact across plays
I have to run an Ansible playbook to execute the following tasks:

1) Calculate a date in YYYY_MM_DD format and use it as a prefix to download a file from AWS to my local machine. The filename is of the following format: 2015_06_04_latest_file.csv

2) Then create a folder named 2015_06_04 on multiple hosts and upload this file there.

This is my current playbook:

```yaml
---
- hosts: 127.0.0.1
  connection: local
  sudo: yes
  gather_facts: no
  tasks:
    - name: calculate date
      shell: date "+%Y_%m_%d" --date="1 days ago"
      register: output

    - name: set date variable
      set_fact: latest_date={{ item }}
      with_items: output.stdout_lines

    - local_action: command mkdir -p /tmp/latest_contracts/{{ latest_date }}

    - local_action: command /root/bin/aws s3 cp s3://primarydatafolder/data/{{ latest_date }}_latest_data.csv /tmp/latest_contracts/{{ latest_date }}/ creates=/tmp/latest_contracts/{{ latest_date }}/latest_data.csv
      register: result
      ignore_errors: true

    - local_action: command /root/bin/aws s3 cp s3://secondarydatafolder/data/{{ latest_date }}_latest_data.csv /tmp/latest_contracts/{{ latest_date }}/ creates=/tmp/latest_contracts/{{ latest_date }}/latest_data.csv
      when: result|failed

    # remove the date prefix from the downloaded file
    - local_action: command ./rename_date.sh {{ latest_date }}
      ignore_errors: true

- hosts: contractsServers
  sudo: yes
  gather_facts: no
  tasks:
    - name: create directory
      file: path={{ item.path }} state=directory mode=0775 owner=root group=root
      with_items:
        - { path: '/var/mukul/contracts/{{ latest_date }}' }
        - { path: '/var/mukul/contracts/dummy' }

    - name: copy dummy contracts
      copy: src=dummy dest=/var/mukul/contracts/

    - name: delete previous symlink
      shell: unlink /var/mukul/contracts/latest
      ignore_errors: true

    - name: upload the newly created latest date folder to the host
      copy: src=/tmp/latest_contracts/{{ latest_date }} dest=/var/mukul/contracts/

    - name: create a symbolic link to the folder on the host and call it latest
      action: file state=link src=/var/mukul/contracts/{{ latest_date }} dest=/var/mukul/contracts/latest
```

As per Ansible's documentation on set_fact, the variable latest_date should be available across plays. However, Ansible fails with the following message:

failed: [192.168.101.177] => (item={'path': u'/var/mukul/contracts/{# latest_date #}'}) => {"failed": true, "item": {"path": "/var/mukul/contracts/{# latest_date #}"}}
msg: this module requires key=value arguments (['path=/var/mukul/contracts/{#', 'latest_date', '#}', 'state=directory', 'mode=0775', 'owner=root', 'group=root'])

It looks as if the second play is unable to get the value of the latest_date fact. Can you please tell me where I'm making a mistake?
ansible
12
19,779
2
https://stackoverflow.com/questions/30660080/ansible-set-fact-across-plays
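An answer-style sketch for the question above, hedged: facts created with set_fact are stored per host, so a play targeting contractsServers cannot see a fact that was set on 127.0.0.1 — but it can read it through hostvars, keyed by the inventory name used in the first play:

```yaml
# Second play: read the fact the first play set on 127.0.0.1
- hosts: contractsServers
  sudo: yes
  gather_facts: no
  vars:
    # hostvars holds per-host facts; the key must match the first play's host name
    latest_date: "{{ hostvars['127.0.0.1']['latest_date'] }}"
  tasks:
    - name: create directory
      file: path=/var/mukul/contracts/{{ latest_date }} state=directory mode=0775
```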
26,884,395
Ansible: Install tarball via HTTP
I'd like to extend my ansible playbook to install/verify installation of phantomjs and wkhtmltopdf to my Debian 7 machine. Both programs are available as packed tarballs via HTTP. I know the get_url module, but it doesn't unpack stuff, and if I'd add some shell commands for unpacking and moving the binaries, I suspect each time I run ansible, the tarballs would be downloaded, unpacked and moved again, causing unnecessary network traffic. How can I solve this? Should I make a .deb file and run that using the apt command, or should I make a new ansible module for installing tarballs, or is there something that I'm overlooking?
Ansible: Install tarball via HTTP I'd like to extend my ansible playbook to install/verify installation of phantomjs and wkhtmltopdf to my Debian 7 machine. Both programs are available as packed tarballs via HTTP. I know the get_url module, but it doesn't unpack stuff, and if I'd add some shell commands for unpacking and moving the binaries, I suspect each time I run ansible, the tarballs would be downloaded, unpacked and moved again, causing unnecessary network traffic. How can I solve this? Should I make a .deb file and run that using the apt command, or should I make a new ansible module for installing tarballs, or is there something that I'm overlooking?
http, installation, ansible
12
15,961
2
https://stackoverflow.com/questions/26884395/ansible-install-tarball-via-http
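A hedged sketch of the approach usually suggested for this question: the unarchive module can fetch and unpack in one step, and its creates argument makes the task idempotent so the tarball is not re-downloaded on every run. The URL, version, and paths below are placeholders, not the real download locations:

```yaml
- name: install phantomjs from an upstream tarball (URL and paths are placeholders)
  unarchive:
    src: https://example.com/phantomjs-x.y.z-linux-x86_64.tar.bz2
    dest: /opt
    remote_src: yes   # download on the target instead of copying from the controller
    creates: /opt/phantomjs-x.y.z-linux-x86_64/bin/phantomjs   # skip when already unpacked
```

On older Ansible releases the same effect is achieved with `copy: no` instead of `remote_src: yes`.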
20,177,996
Ansible Playbook to run Shell commands
I recently dived into Ansible for one of my servers, and found it really interesting and time-saving. I am running an Ubuntu dedicated server and have configured a number of web applications written in Python and a few in PHP. For Python I am using uwsgi as the HTTP gateway. I have written shell scripts to start/restart a few processes in order to run the instance of a specific web application. What I have to do every time is connect over ssh, navigate to that specific application, and run the script. WHAT I NEED I've been trying to find a way to write an Ansible playbook to do all that from my personal computer with one command, but I have no clue how to do that. I haven't found very explanatory (beginner-friendly) documentation or help on the internet. QUESTION How can I restart Nginx with an Ansible playbook? How can I kill a process by process id?
Ansible Playbook to run Shell commands I recently dived into Ansible for one of my servers, and found it really interesting and time-saving. I am running an Ubuntu dedicated server and have configured a number of web applications written in Python and a few in PHP. For Python I am using uwsgi as the HTTP gateway. I have written shell scripts to start/restart a few processes in order to run the instance of a specific web application. What I have to do every time is connect over ssh, navigate to that specific application, and run the script. WHAT I NEED I've been trying to find a way to write an Ansible playbook to do all that from my personal computer with one command, but I have no clue how to do that. I haven't found very explanatory (beginner-friendly) documentation or help on the internet. QUESTION How can I restart Nginx with an Ansible playbook? How can I kill a process by process id?
shell, nginx, uwsgi, ansible
12
41,314
1
https://stackoverflow.com/questions/20177996/ansible-playbook-to-run-shell-commands
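A hedged sketch covering both parts of the question above: the service module handles restarts through the init system, and killing by pid is a plain shell task (the pid file path here is an assumption for illustration):

```yaml
- hosts: webservers
  become: yes
  tasks:
    - name: restart nginx via the init system
      service:
        name: nginx
        state: restarted

    - name: kill a process by pid (pid file path is a placeholder)
      shell: kill "$(cat /var/run/myapp.pid)"
```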
66,046,989
Unable to set timeout in Ansible shell module
I tried to set a timeout in the Ansible shell module; however, it is not working. If I execute the same command in a terminal, it works. Below is my code: - name: Timeout shell: "timeout 30s {{ execute_path_nityo }}/execute.sh"
Unable to set timeout in Ansible shell module I tried to set a timeout in the Ansible shell module; however, it is not working. If I execute the same command in a terminal, it works. Below is my code: - name: Timeout shell: "timeout 30s {{ execute_path_nityo }}/execute.sh"
ansible
12
23,201
1
https://stackoverflow.com/questions/66046989/unable-to-set-timeout-in-ansible-shell-module
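One way a hard time limit is commonly enforced without the external timeout binary, sketched here as an assumption about the desired behaviour: Ansible's own async/poll pair fails the task if the command has not finished within the async window:

```yaml
- name: run the script but give up after 30 seconds
  shell: "{{ execute_path_nityo }}/execute.sh"
  async: 30   # hard limit in seconds; the task fails if the script runs longer
  poll: 5     # check for completion every 5 seconds
```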
55,750,587
ansible can't unarchive tar.gz file
I'm trying to unarchive a tar.gz file with ansible (which I've done quite a bit), but for some reason I can't get around this error this time. I keep getting: "msg": "Failed to find handler for \"/tmp/ansible_mnXJGp/internalbuildscripts1.tar.gz\". Make sure the required command to extract the file is installed. Command \"/bin/tar\" could not handle archive. Command \"/usr/bin/unzip\" could not handle archive." What's weird is that it seems to be trying to use unzip on a tar.gz file. The task looks like this: - name: Download build scripts unarchive: src: [URL] dest: /home/xxxx/buildscripts remote_src: yes
ansible can't unarchive tar.gz file I'm trying to unarchive a tar.gz file with ansible (which I've done quite a bit), but for some reason I can't get around this error this time. I keep getting: "msg": "Failed to find handler for \"/tmp/ansible_mnXJGp/internalbuildscripts1.tar.gz\". Make sure the required command to extract the file is installed. Command \"/bin/tar\" could not handle archive. Command \"/usr/bin/unzip\" could not handle archive." What's weird is that it seems to be trying to use unzip on a tar.gz file. The task looks like this: - name: Download build scripts unarchive: src: [URL] dest: /home/xxxx/buildscripts remote_src: yes
ansible
12
26,352
2
https://stackoverflow.com/questions/55750587/ansible-cant-unarchive-tar-gz-file
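A hedged debugging sketch for the error above: that message usually means the downloaded file is not actually a gzipped tar (an HTML error page from an auth redirect is a common culprit — unarchive tries unzip as a last resort on anything tar rejects), or GNU tar/gzip is missing on the target. Checking both before unarchiving narrows it down; the URL below is a placeholder:

```yaml
- name: make sure the extraction tools exist on the target
  package:
    name: [tar, gzip, unzip]
    state: present

- name: download first, so the file can be inspected
  get_url:
    url: https://example.com/internalbuildscripts1.tar.gz   # placeholder URL
    dest: /tmp/buildscripts.tar.gz

- name: show what was actually downloaded (look for "HTML document" here)
  command: file /tmp/buildscripts.tar.gz
  register: ftype
  changed_when: false

- debug: var=ftype.stdout
```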
48,113,332
Ansible: How to copy files remote to remote
I need to copy the file /etc/resolv.conf from a remote host and copy it to multiple other remote hosts. My hosts: Ansible, ubuntu1-4. I want to copy this file from ubuntu1 to ubuntu2, ubuntu3 and ubuntu4. I tried the synchronize module, but I can't/don't want to run rsync as a daemon on ubuntu1-4. Is there a better way than copying it to the Ansible control machine and from there to ubuntu2 through ubuntu4?
Ansible: How to copy files remote to remote I need to copy the file /etc/resolv.conf from a remote host and copy it to multiple other remote hosts. My hosts: Ansible, ubuntu1-4. I want to copy this file from ubuntu1 to ubuntu2, ubuntu3 and ubuntu4. I tried the synchronize module, but I can't/don't want to run rsync as a daemon on ubuntu1-4. Is there a better way than copying it to the Ansible control machine and from there to ubuntu2 through ubuntu4?
ansible
12
30,087
2
https://stackoverflow.com/questions/48113332/ansible-how-to-copy-files-remote-to-remote
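The pattern usually suggested for this, as a hedged sketch: staging the file through the controller is unavoidable without rsync, but it can be done inside one play with fetch (delegated to the source host, run once) followed by copy:

```yaml
- hosts: ubuntu2,ubuntu3,ubuntu4
  tasks:
    - name: pull the file from ubuntu1 down to the controller
      fetch:
        src: /etc/resolv.conf
        dest: /tmp/resolv.conf
        flat: yes           # keep a plain filename instead of a per-host directory tree
      delegate_to: ubuntu1
      run_once: true

    - name: push the fetched copy out to the other hosts
      copy:
        src: /tmp/resolv.conf
        dest: /etc/resolv.conf
```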
37,043,746
How to update nested variables in Ansible
I have some extra information like db connection details etc. stored in /etc/ansible/facts.d/environment.fact. These are made available as variables like ansible_local.environment.database.name . What is the best way to update the database name? I tried the set_fact module but could not get it to update the nested variable correctly. It just overwrites the whole ansible_local hash. - name: Update database name set_fact: args: ansible_local: environment: database: name: "{{ db_name }}"
How to update nested variables in Ansible I have some extra information like db connection details etc. stored in /etc/ansible/facts.d/environment.fact. These are made available as variables like ansible_local.environment.database.name . What is the best way to update the database name? I tried the set_fact module but could not get it to update the nested variable correctly. It just overwrites the whole ansible_local hash. - name: Update database name set_fact: args: ansible_local: environment: database: name: "{{ db_name }}"
python, ansible
12
7,561
2
https://stackoverflow.com/questions/37043746/how-to-update-nested-variables-in-ansible
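The usual answer here, as a hedged sketch: Jinja2's combine filter with recursive=True (available since Ansible 2.0) merges a small override hash into the existing one instead of replacing the whole ansible_local structure:

```yaml
- name: update only the nested database name, keeping the rest of the hash
  set_fact:
    ansible_local: >-
      {{ ansible_local
         | combine({'environment': {'database': {'name': db_name}}},
                   recursive=True) }}
```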
29,179,856
Create MySQL Tables with Ansible
I'm using ansible to manage a small mail server using ubuntu. I wanted to use ansible to create a database, which I can do, and also create users for the database(s), which I can also do. But I'm not sure how to create tables using ansible. I'm trying to create the following three MySQL tables using ansible: 1) CREATE TABLE virtual_domains ( id int(11) NOT NULL auto_increment, name varchar(50) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; 2) CREATE TABLE virtual_users ( id int(11) NOT NULL auto_increment, domain_id int(11) NOT NULL, password varchar(106) NOT NULL, email varchar(100) NOT NULL, PRIMARY KEY (id), UNIQUE KEY email (email), FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8; 3) CREATE TABLE virtual_aliases ( id int(11) NOT NULL auto_increment, domain_id int(11) NOT NULL, source varchar(100) NOT NULL, destination varchar(100) NOT NULL, PRIMARY KEY (id), FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8; I have searched and searched and even asked in #ansible, and was told that I can use the mysql_db module to complete the above task, but I can't find any examples that will give me some type of direction on how to achieve the above in ansible. Any and all help would be GREATLY appreciated!
Create MySQL Tables with Ansible I'm using ansible to manage a small mail server using ubuntu. I wanted to use ansible to create a database, which I can do, and also create users for the database(s), which I can also do. But I'm not sure how to create tables using ansible. I'm trying to create the following three MySQL tables using ansible: 1) CREATE TABLE virtual_domains ( id int(11) NOT NULL auto_increment, name varchar(50) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; 2) CREATE TABLE virtual_users ( id int(11) NOT NULL auto_increment, domain_id int(11) NOT NULL, password varchar(106) NOT NULL, email varchar(100) NOT NULL, PRIMARY KEY (id), UNIQUE KEY email (email), FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8; 3) CREATE TABLE virtual_aliases ( id int(11) NOT NULL auto_increment, domain_id int(11) NOT NULL, source varchar(100) NOT NULL, destination varchar(100) NOT NULL, PRIMARY KEY (id), FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8; I have searched and searched and even asked in #ansible, and was told that I can use the mysql_db module to complete the above task, but I can't find any examples that will give me some type of direction on how to achieve the above in ansible. Any and all help would be GREATLY appreciated!
mysql, sql, database, ansible
12
11,612
2
https://stackoverflow.com/questions/29179856/create-mysql-tables-with-ansible
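A hedged sketch of the mysql_db approach mentioned in #ansible: put the three CREATE TABLE statements into a .sql file, copy it over, and import it (the database name here is a placeholder):

```yaml
- name: copy the schema to the server (tables.sql holds the three CREATE TABLE statements)
  copy:
    src: tables.sql
    dest: /tmp/tables.sql

- name: load the schema into the mail database
  mysql_db:
    name: mailserver      # placeholder database name
    state: import
    target: /tmp/tables.sql
```

Note that state=import re-runs the file on every play, so the statements would need CREATE TABLE IF NOT EXISTS guards to stay idempotent.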
34,334,209
How to make ansible connect to windows host behind linux jump server
I want to provision a Windows host that is in a subnet accessible only through a Linux jump host. The Windows machine uses the winrm connection method. The Linux jump server is available via SSH. I have no problem accessing the Windows host when it is directly reachable with: ansible_connection: winrm If I try to delegate the task to the Linux jump server (which has direct access to Windows) with: - name: Ping windows hosts: windows_machines tasks: - name: ping win_ping: delegate_to: "{{ item }}" with_items: "{{ groups['jump_servers'][0] }}" it tries to establish a WINRM connection to the jump host. Not exactly what I had in mind. Note that for the windows_machines group I have these group_vars defined: ansible_port: 5986 ansible_connection: winrm ansible_winrm_server_cert_validation: ignore How should I provision Windows hosts via a bastion host?
How to make ansible connect to windows host behind linux jump server I want to provision a Windows host that is in a subnet accessible only through a Linux jump host. The Windows machine uses the winrm connection method. The Linux jump server is available via SSH. I have no problem accessing the Windows host when it is directly reachable with: ansible_connection: winrm If I try to delegate the task to the Linux jump server (which has direct access to Windows) with: - name: Ping windows hosts: windows_machines tasks: - name: ping win_ping: delegate_to: "{{ item }}" with_items: "{{ groups['jump_servers'][0] }}" it tries to establish a WINRM connection to the jump host. Not exactly what I had in mind. Note that for the windows_machines group I have these group_vars defined: ansible_port: 5986 ansible_connection: winrm ansible_winrm_server_cert_validation: ignore How should I provision Windows hosts via a bastion host?
linux, ansible
12
6,714
2
https://stackoverflow.com/questions/34334209/how-to-make-ansible-connect-to-windows-host-behind-linux-jump-server
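For context: delegate_to changes where a task runs, not how the connection is made, which is why the play tried WinRM against the jump host. One workaround people use, sketched here with placeholder names and addresses: forward the WinRM port through the bastion with plain SSH and point the inventory at the local end of the tunnel:

```yaml
# group_vars for windows_machines, aimed at a local SSH tunnel.
# The tunnel is opened out-of-band on the controller, e.g.:
#   ssh -N -L 5986:WINDOWS_PRIVATE_IP:5986 user@jumphost
ansible_host: 127.0.0.1
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
```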
36,472,017
ansible : get roles from playbook in a subfolder
I have this tree: ├── plays │ ├── ansible.cfg │ ├── playbook_01.yml │ ├── playbook_02.yml │ └── playbook_03.yml ├── README.rst ├── roles │ ├── role_A │ │ ├── files │ │ └── tasks │ │ └── main.yml │ └── role_B │ ├── files │ └── tasks │ └── main.yml ├── serverlist │ ├── client1_serverlist_prod │ ├── client1_serverlist_test │ ├── client1_serverlist_train │ ├── client2_serverlist_prod │ ├── client2_serverlist_test │ └── client2_serverlist_train └── vagrant └── Vagrantfile With ansible.cfg in play folder:: $ cat plays/ansible.cfg [defaults] roles_path=../roles/ $ I call from vagrant the ansible.playbook:: $ grep playbook vagrant/Vagrantfile ansible.playbook = "../plays/playbook_01.yml on playbook_01.yml:: $ cat plays/playbook_01.yml - hosts: vagrant vars: user: fox home: /home/fox roles: - role_B with role_B:: $ cat roles/role_B/tasks/main.yml --- - name: Create user group group: name={{ user }} state=present - name: Create home directory for user file: state=directory path={{ home }} group=www-data owner={{ user }} $ But ansible when only went to see roles in play folder, I get this error:: vagrant$ vagrant provision ==> vagrant: Running provisioner: ansible... PYTHONUNBUFFERED=1 ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_FORCE_COLOR=true ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --user=vagrant --connection=ssh --timeout=30 --limit='vagrant' --inventory-file=/home/luis/lab/sandbox/akd-iac/stack/vagrant/.vagrant/provisioners/ansible/inventory --sudo -vvvv ../plays/playbook_01.yml ERROR: cannot find role in ~/stack/plays/roles/role_B or ~/stack/plays/role_B or /etc/ansible/roles/role_B Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. vagrant$
ansible : get roles from playbook in a subfolder I have this tree: ├── plays │ ├── ansible.cfg │ ├── playbook_01.yml │ ├── playbook_02.yml │ └── playbook_03.yml ├── README.rst ├── roles │ ├── role_A │ │ ├── files │ │ └── tasks │ │ └── main.yml │ └── role_B │ ├── files │ └── tasks │ └── main.yml ├── serverlist │ ├── client1_serverlist_prod │ ├── client1_serverlist_test │ ├── client1_serverlist_train │ ├── client2_serverlist_prod │ ├── client2_serverlist_test │ └── client2_serverlist_train └── vagrant └── Vagrantfile With ansible.cfg in play folder:: $ cat plays/ansible.cfg [defaults] roles_path=../roles/ $ I call from vagrant the ansible.playbook:: $ grep playbook vagrant/Vagrantfile ansible.playbook = "../plays/playbook_01.yml on playbook_01.yml:: $ cat plays/playbook_01.yml - hosts: vagrant vars: user: fox home: /home/fox roles: - role_B with role_B:: $ cat roles/role_B/tasks/main.yml --- - name: Create user group group: name={{ user }} state=present - name: Create home directory for user file: state=directory path={{ home }} group=www-data owner={{ user }} $ But ansible when only went to see roles in play folder, I get this error:: vagrant$ vagrant provision ==> vagrant: Running provisioner: ansible... PYTHONUNBUFFERED=1 ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_FORCE_COLOR=true ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --user=vagrant --connection=ssh --timeout=30 --limit='vagrant' --inventory-file=/home/luis/lab/sandbox/akd-iac/stack/vagrant/.vagrant/provisioners/ansible/inventory --sudo -vvvv ../plays/playbook_01.yml ERROR: cannot find role in ~/stack/plays/roles/role_B or ~/stack/plays/role_B or /etc/ansible/roles/role_B Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. vagrant$
vagrant, ansible
12
12,508
4
https://stackoverflow.com/questions/36472017/ansible-get-roles-from-playbook-in-a-subfolder
34,133,810
can not source ~/.bashrc file with ansible
I have a list of aliases in a file, .bash_aliases , which is being copied to remote servers with an ansible playbook . The file is getting copied to the destination, but the .bashrc (which in turn loads the .bash_aliases ) file is not getting loaded using the following ansible task. I have tried giving the executable argument - name: source the .bashrc file shell: source ~/.bashrc args: executables: "/bin/bash" Without argument - name: source the .bashrc file shell: source ~/.bashrc With raw module - name: source the .bashrc file raw: source ~/.bashrc With command module - name: source the .bashrc file command: source ~/.bashrc Nothing works!!! Any help?
can not source ~/.bashrc file with ansible I have a list of aliases in a file, .bash_aliases , which is being copied to remote servers with an ansible playbook . The file is getting copied to the destination, but the .bashrc (which in turn loads the .bash_aliases ) file is not getting loaded using the following ansible task. I have tried giving the executable argument - name: source the .bashrc file shell: source ~/.bashrc args: executables: "/bin/bash" Without argument - name: source the .bashrc file shell: source ~/.bashrc With raw module - name: source the .bashrc file raw: source ~/.bashrc With command module - name: source the .bashrc file command: source ~/.bashrc Nothing works!!! Any help?
bash, ansible
12
13,435
2
https://stackoverflow.com/questions/34133810/can-not-source-bashrc-file-with-ansible
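For context on why none of the attempts above can work: each Ansible task runs in its own non-interactive shell, so anything source sets is discarded when the task exits. The aliases have to be sourced in the same shell invocation as the command that needs them, roughly like this (my_alias is a placeholder for one of the aliases):

```yaml
- name: run a command with the aliases loaded in the same shell
  shell: source ~/.bashrc && my_alias
  args:
    executable: /bin/bash   # note: the argument name is 'executable', not 'executables'
```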
44,970,728
Ansible task - clone private git without SSH forwarding
I am trying to create an Ansible playbook which would be run from our dev team computers and from CI/CD servers. One of the tasks in the playbook is to get the source code of our project from a private git repository. Because the playbook has to run from CI/CD servers, we cannot use SSH forwarding. What I came up with is to copy the necessary SSH private key to the remote host machine and then use the key to clone the code from the private git repository. However, when trying this, the cloning task hangs. When trying to launch the command manually it asks for a passphrase for the SSH private key. The SSH key uses no passphrase (blank). Could anyone share their solution to this (probably very common) problem? In case anyone needs it, this is my current playbook: - name: Create SSH directory file: path=/root/.ssh state=directory - name: Copy SSH key for Git access copy: content: "{{ git_ssh_key }}" dest: /root/.ssh/id_rsa owner: root group: root mode: 0600 # Also tried this, but it also hangs #- name: Start SSH agent and add SSH key # shell: eval ssh-agent -s && ssh-add - name: Get new source from GIT git: key_file: /root/.ssh/id_rsa repo: "git@gitlab.com:user/repo.git" dest: "{{ staging_dir }}" depth: 1 accept_hostkey: yes clone: yes I am using ansible 2.3.1.0, python version = 2.7.12
Ansible task - clone private git without SSH forwarding I am trying to create an Ansible playbook which would be run from our dev team computers and from CI/CD servers. One of the tasks in the playbook is to get the source code of our project from a private git repository. Because the playbook has to run from CI/CD servers, we cannot use SSH forwarding. What I came up with is to copy the necessary SSH private key to the remote host machine and then use the key to clone the code from the private git repository. However, when trying this, the cloning task hangs. When trying to launch the command manually it asks for a passphrase for the SSH private key. The SSH key uses no passphrase (blank). Could anyone share their solution to this (probably very common) problem? In case anyone needs it, this is my current playbook: - name: Create SSH directory file: path=/root/.ssh state=directory - name: Copy SSH key for Git access copy: content: "{{ git_ssh_key }}" dest: /root/.ssh/id_rsa owner: root group: root mode: 0600 # Also tried this, but it also hangs #- name: Start SSH agent and add SSH key # shell: eval ssh-agent -s && ssh-add - name: Get new source from GIT git: key_file: /root/.ssh/id_rsa repo: "git@gitlab.com:user/repo.git" dest: "{{ staging_dir }}" depth: 1 accept_hostkey: yes clone: yes I am using ansible 2.3.1.0, python version = 2.7.12
git, ansible, ansible-2.x
12
9,712
2
https://stackoverflow.com/questions/44970728/ansible-task-clone-private-git-without-ssh-forwarding
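A hedged debugging sketch for the hang above: a passphrase prompt on a passphrase-less key usually means OpenSSH considers the key file malformed — a templated or vaulted key that lost its trailing newline is a frequent cause. Two things worth trying (the appended newline is an assumption about how git_ssh_key was stored):

```yaml
- name: Copy SSH key for Git access (the trailing newline matters to OpenSSH)
  copy:
    content: "{{ git_ssh_key }}\n"   # assumption: the variable may have lost its final newline
    dest: /root/.ssh/id_rsa
    owner: root
    group: root
    mode: 0600

- name: Get new source from GIT, failing fast instead of prompting
  git:
    key_file: /root/.ssh/id_rsa
    repo: "git@gitlab.com:user/repo.git"
    dest: "{{ staging_dir }}"
    depth: 1
    accept_hostkey: yes
    ssh_opts: "-o BatchMode=yes"   # never prompt; surface the real auth error instead of hanging
```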
36,620,963
Dynamic file name in vars_files
The following is a simple playbook which tries to dynamically load variables: site.yml --- - hosts: localhost vars_files: - "{{ name }}.yml" tasks: - debug: var={{ foo }} Variable foo is defined in this file: vars/myvars.yml --- foo: "Hello" Then the playbook is run like this: ansible-playbook test.yml -e "name=myvars" However, this results in this error: ERROR! vars file {{ name }}.yml was not found From what I understood from several code snippets this should be possible and should import the variables from myvars.yml. When trying with ansible 1.7.x it indeed seemed to work (although I hit a different issue, the file name was resolved correctly). Was this behaviour changed (perhaps support for dynamic variable files was removed)? Is there a different way to achieve this behaviour (I can use include_vars tasks, however it is not quite suitable)? EDIT: To make sure my playbook structure is correct, here is a github repository: [URL]
Dynamic file name in vars_files The following is a simple playbook which tries to dynamically load variables: site.yml --- - hosts: localhost vars_files: - "{{ name }}.yml" tasks: - debug: var={{ foo }} Variable foo is defined in this file: vars/myvars.yml --- foo: "Hello" Then the playbook is run like this: ansible-playbook test.yml -e "name=myvars" However, this results in this error: ERROR! vars file {{ name }}.yml was not found From what I understood from several code snippets this should be possible and should import the variables from myvars.yml. When trying with ansible 1.7.x it indeed seemed to work (although I hit a different issue, the file name was resolved correctly). Was this behaviour changed (perhaps support for dynamic variable files was removed)? Is there a different way to achieve this behaviour (I can use include_vars tasks, however it is not quite suitable)? EDIT: To make sure my playbook structure is correct, here is a github repository: [URL]
ansible
12
17,582
1
https://stackoverflow.com/questions/36620963/dynamic-file-name-in-vars-files
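A hedged observation on the layout described above: vars_files paths resolve relative to the playbook, and the variable file lives in a vars/ subdirectory, so the lookup may simply be missing that prefix rather than the templating being broken:

```yaml
- hosts: localhost
  vars_files:
    - "vars/{{ name }}.yml"   # the path is relative to the playbook; the file is in vars/
  tasks:
    - debug: var=foo          # note: debug's var takes a bare variable name, not {{ foo }}
```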
33,663,357
How to continuously tail remote files using ansible?
How is it possible to tail a file located at several remote servers in a known location using ansible? This question is based on this comment on this Hacker News thread : In my company, we have hundred machines and tailing done with ansible. If we want customize the log view, we can simply edit the playbook. I think it is very handy compared to we need additional npm package (and not to mention additional effort for customization).
How to continuously tail remote files using ansible? How is it possible to tail a file located at several remote servers in a known location using ansible? This question is based on this comment on this Hacker News thread : In my company, we have hundred machines and tailing done with ansible. If we want customize the log view, we can simply edit the playbook. I think it is very handy compared to we need additional npm package (and not to mention additional effort for customization).
ansible, tail
12
13,259
3
https://stackoverflow.com/questions/33663357/how-to-continuously-tail-remote-files-using-ansible
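A hedged sketch of the pattern the HN comment likely refers to: a true tail -f would block the play forever, so these setups typically poll with a plain tail and re-run the play (or an ad-hoc command) as often as needed. The log path below is a placeholder:

```yaml
- hosts: all
  gather_facts: no
  tasks:
    - name: grab the last lines of a log (path is a placeholder)
      command: tail -n 50 /var/log/syslog
      register: tail_out
      changed_when: false   # reading a log never changes state

    - name: show them per host
      debug: var=tail_out.stdout_lines
```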
58,119,110
ansible: task naming style
Are there conventions regarding task names? E.g. all examples seem to have a leading lower-case letter, but is that an official recommendation? All examples I see on the ansible website, e.g. at [URL], use this style ... tasks: - name: ensure apache is at the latest version yum: name: httpd state: latest as opposed to Ensure apache is at the latest version . However, when I use gather_facts: true in my playbook I see the built-in ansible generated ... TASK [Gathering Facts] which seems inconsistent? I know this may seem a trivial point, but if we are writing lots of plays I'd like to ensure we adhere to conventions.
ansible: task naming style Are there conventions regarding task names? E.g. all examples seem to have a leading lower-case letter, but is that an official recommendation? All examples I see on the ansible website, e.g. at [URL], use this style ... tasks: - name: ensure apache is at the latest version yum: name: httpd state: latest as opposed to Ensure apache is at the latest version . However, when I use gather_facts: true in my playbook I see the built-in ansible generated ... TASK [Gathering Facts] which seems inconsistent? I know this may seem a trivial point, but if we are writing lots of plays I'd like to ensure we adhere to conventions.
ansible
12
1,955
1
https://stackoverflow.com/questions/58119110/ansible-task-naming-style
55,653,385
Ansible deployment to windows host behind bastion
I am currently successfully using Ansible to run tasks on hosts that are in a private subnet in AWS, which the below group_vars is setting up: ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ec2-user@bastionhost@example.com"' This is working fine. For Windows instances not in a private subnet the following group_vars works: --- ansible_user: "AnsibleUser" ansible_password: "Password" ansible_port: 5986 ansible_connection: winrm ansible_winrm_server_cert_validation: ignore Now, trying to get Ansible to deploy to a Windows server behind the bastion by just using the ProxyCommand won't work - which I understand. I believe though that there is a new protocol/module I can use called psrp. I imagine that my group_vars for my Windows hosts needs to change to something like this: --- ansible_user: "AnsibleUser" ansible_password: "Password" ansible_port: 5986 ansible_connection: psrp ansible_psrp_cert_validation: ignore If I run with just the above changes against instances that are publicly available (and not trying to connect via a bastion), my task seems to work fine: Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/win_shell.ps1 <10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14 PSRP: EXEC (via pipeline wrapper) I know there must be more changes before I can try this on a windows server behind a bastion, but ran it anyway to see what errors I get to give me clues on what to do next. Here is the result when running this on an instance behind a bastion server: Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/setup.ps1 <10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14 The full traceback is: . . . . 
ConnectTimeout: HTTPSConnectionPool(host='10.100.11.14', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x110bbfbd0>, 'Connection to 10.100.11.14 timed out. (connect timeout=30)')) It seems like Ansible is ignoring my group_vars for the ProxyCommand - which I'm not sure if that's expected. I'm also not sure on what the next steps are to enable Ansible to deploy to Windows servers behind a bastion. What config am I missing?
Ansible deployment to windows host behind bastion I am currently successfully using Ansible to run tasks on hosts that are in a private subnet in AWS, which the below group_vars is setting up: ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ec2-user@bastionhost@example.com"' This is working fine. For Windows instances not in a private subnet the following group_vars works: --- ansible_user: "AnsibleUser" ansible_password: "Password" ansible_port: 5986 ansible_connection: winrm ansible_winrm_server_cert_validation: ignore Now, trying to get Ansible to deploy to a Windows server behind the bastion by just using the ProxyCommand won't work - which I understand. I believe though that there is a new protocol/module I can use called psrp. I imagine that my group_vars for my Windows hosts needs to change to something like this: --- ansible_user: "AnsibleUser" ansible_password: "Password" ansible_port: 5986 ansible_connection: psrp ansible_psrp_cert_validation: ignore If I run with just the above changes against instances that are publicly available (and not trying to connect via a bastion), my task seems to work fine: Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/win_shell.ps1 <10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14 PSRP: EXEC (via pipeline wrapper) I know there must be more changes before I can try this on a windows server behind a bastion, but ran it anyway to see what errors I get to give me clues on what to do next. Here is the result when running this on an instance behind a bastion server: Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/setup.ps1 <10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14 The full traceback is: . . . . 
ConnectTimeout: HTTPSConnectionPool(host='10.100.11.14', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x110bbfbd0>, 'Connection to 10.100.11.14 timed out. (connect timeout=30)')) It seems like Ansible is ignoring my group_vars for the ProxyCommand - which I'm not sure if that's expected. I'm also not sure on what the next steps are to enable Ansible to deploy to Windows servers behind a bastion. What config am I missing?
amazon-web-services, ansible
12
3,677
1
https://stackoverflow.com/questions/55653385/ansible-deployment-to-windows-host-behind-bastion
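For reference, ansible_ssh_common_args only affects SSH-based connections, which is why the ProxyCommand is ignored for winrm/psrp. The psrp connection plugin does accept a proxy option, so one hedged approach (port numbers and hostnames are placeholders, and SOCKS support depends on the installed pypsrp/requests build) is to open a SOCKS tunnel through the bastion and point psrp at it:

```yaml
# group_vars for the Windows hosts behind the bastion.
# A SOCKS tunnel is opened separately on the controller, e.g.:
#   ssh -N -D 1080 ec2-user@bastionhost.example.com
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: psrp
ansible_psrp_cert_validation: ignore
ansible_psrp_proxy: socks5h://localhost:1080   # assumption: requires SOCKS support in the client stack
```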
56,332,649
Is there a way to allow downgrades with apt Ansible module?
I have an Ansible playbook to deploy a specific version of docker. I want the apt module to allow for downgrades when the target machine has a higher version installed. I browsed the documentation but couldn't find a suitable way to do it. The YAML file has lines like: - name : "Install specific docker ce" become : true apt : name : docker-ce=5:18.09.1~3-0~ubuntu-bionic state : present
Is there a way to allow downgrades with apt Ansible module? I have an Ansible playbook to deploy a specific version of docker. I want the apt module to allow for downgrades when the target machine has a higher version installed. I browsed the documentation but couldn't find a suitable way to do it. The YAML file has lines like: - name : "Install specific docker ce" become : true apt : name : docker-ce=5:18.09.1~3-0~ubuntu-bionic state : present
ansible
12
12,650
2
https://stackoverflow.com/questions/56332649/is-there-a-way-to-allow-downgrades-with-apt-ansible-module
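A hedged sketch: the apt module's `force` parameter corresponds to apt-get's `--force-yes`, which, among other things, permits downgrades; later ansible-core releases also added a dedicated `allow_downgrade` option. Check which your Ansible version supports before relying on either:

```yaml
- name: "Install specific docker ce"
  become: true
  apt:
    name: docker-ce=5:18.09.1~3-0~ubuntu-bionic
    state: present
    force: true   # allows downgrades, but is broader than strictly needed
```

Note that `force: true` also implies skipping some safety checks (e.g. package authentication), so the narrower `allow_downgrade: true` is preferable where available.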
21,892,636
can ansible ask for passwords automatically and only if necessary
So ansible-playbook has --ask-pass and --ask-sudo-pass . Is there a way to get ansible to try ssh without a password first and then only prompt for a password if passwordless login fails? Similarly, can ansible try sudo without a password first and then only prompt if that doesn't work? FYI I have a little shell function to try to figure this out by trial and error, but I'm hoping something like this is baked into ansible. get_ansible_auth_args() { local base_check="ansible all --one-line --inventory-file=deploy/hosts/localhost.yml --args=/bin/true --sudo" ${base_check} if [[ $? -eq 0 ]]; then return; fi local args="--ask-pass" ${base_check} ${args} if [[ $? -eq 0 ]]; then export ANSIBLE_AUTH_ARGS="${args}" return; fi local args="--ask-pass --ask-sudo-pass" ${base_check} ${args} if [[ $? -eq 0 ]]; then export ANSIBLE_AUTH_ARGS="${args}" return; fi }
can ansible ask for passwords automatically and only if necessary So ansible-playbook has --ask-pass and --ask-sudo-pass . Is there a way to get ansible to try ssh without a password first and then only prompt for a password if passwordless login fails? Similarly, can ansible try sudo without a password first and then only prompt if that doesn't work? FYI I have a little shell function to try to figure this out by trial and error, but I'm hoping something like this is baked into ansible. get_ansible_auth_args() { local base_check="ansible all --one-line --inventory-file=deploy/hosts/localhost.yml --args=/bin/true --sudo" ${base_check} if [[ $? -eq 0 ]]; then return; fi local args="--ask-pass" ${base_check} ${args} if [[ $? -eq 0 ]]; then export ANSIBLE_AUTH_ARGS="${args}" return; fi local args="--ask-pass --ask-sudo-pass" ${base_check} ${args} if [[ $? -eq 0 ]]; then export ANSIBLE_AUTH_ARGS="${args}" return; fi }
ansible
12
7,064
1
https://stackoverflow.com/questions/21892636/can-ansible-ask-for-passwords-automatically-and-only-if-necessary
37,287,882
Best way to check for installed yum package/rpm version in Ansible and use it
I've been getting my feet wet with Ansible (2.0.0.2) on CentOS 7. I'm trying to obtain a version from an installed rpm/yum package, but ran into a warning message when running the script. Ansible script: --- - name: Get version of RPM shell: yum list installed custom-rpm | grep custom-rpm | awk '{print $2}' | cut -d'-' -f1 register: version changed_when: False - name: Update some file with version lineinfile: dest: /opt/version.xml regexp: "<version>" line: " <version>{{ version.stdout }}</version>" Running this works fine and does what it's supposed to, but it's returning a warning after it executes: ok: [default] => {"changed": false, "cmd": "yum list installed custom-rpm | grep custom-rpm | awk '{print $2}' | cut -d'-' -f1", "delta": "0:00:00.255406", "end": "2016-05-17 23:11:54.998838", "rc": 0, "start": "2016-05-17 23:11:54.743432", "stderr": "", "stdout": "3.10.2", "stdout_lines": ["3.10.2"], "warnings": ["Consider using yum module rather than running yum"]} [WARNING]: Consider using yum module rather than running yum I looked up information for the yum module on the Ansible site , but I don't really want to install/update/delete anything. I could simply ignore it or suppress it, but I was curious if there was a better way?
Best way to check for installed yum package/rpm version in Ansible and use it I've been getting my feet wet with Ansible (2.0.0.2) on CentOS 7. I'm trying to obtain a version from an installed rpm/yum package, but ran into a warning message when running the script. Ansible script: --- - name: Get version of RPM shell: yum list installed custom-rpm | grep custom-rpm | awk '{print $2}' | cut -d'-' -f1 register: version changed_when: False - name: Update some file with version lineinfile: dest: /opt/version.xml regexp: "<version>" line: " <version>{{ version.stdout }}</version>" Running this works fine and does what it's supposed to, but it's returning a warning after it executes: ok: [default] => {"changed": false, "cmd": "yum list installed custom-rpm | grep custom-rpm | awk '{print $2}' | cut -d'-' -f1", "delta": "0:00:00.255406", "end": "2016-05-17 23:11:54.998838", "rc": 0, "start": "2016-05-17 23:11:54.743432", "stderr": "", "stdout": "3.10.2", "stdout_lines": ["3.10.2"], "warnings": ["Consider using yum module rather than running yum"]} [WARNING]: Consider using yum module rather than running yum I looked up information for the yum module on the Ansible site , but I don't really want to install/update/delete anything. I could simply ignore it or suppress it, but I was curious if there was a better way?
ansible
11
98,954
11
https://stackoverflow.com/questions/37287882/best-way-to-check-for-installed-yum-package-rpm-version-in-ansible-and-use-it
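If the warning is the main concern, one yum-free sketch reads the installed version from package facts instead of shelling out. This requires the `package_facts` module, which was added in a later Ansible release than the 2.0.0.2 used in the question:

```yaml
- name: Gather installed package facts
  package_facts:
    manager: auto

- name: Update some file with version
  lineinfile:
    dest: /opt/version.xml
    regexp: "<version>"
    line: "    <version>{{ ansible_facts.packages['custom-rpm'][0].version }}</version>"
  when: "'custom-rpm' in ansible_facts.packages"
```

Each entry in `ansible_facts.packages` is a list (one element per installed arch/version), hence the `[0]`.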
38,751,686
Ansible: Changing permissions of files inside a directory in .yml file
Suppose there's a directory /dir/tools. tools contains a bunch of scripts, say, a.sh, b.sh, c.sh. I need to set the permissions of a.sh, b.sh, and c.sh to 0775. I've currently done it in the following manner: - name: Fix 'support_tools' permissions file: path={{ item }} owner=abc group=abc mode=0775 with_items: - /dir/tools/a.sh - /dir/tools/b.sh - /dir/tools/c.sh Btw, 'file: path=/dir/tools owner=abc group=abc mode=0775' sets the permission of tools directory but not of the files inside it. Is there a better way of achieving this?
Ansible: Changing permissions of files inside a directory in .yml file Suppose there's a directory /dir/tools. tools contains a bunch of scripts, say, a.sh, b.sh, c.sh. I need to set the permissions of a.sh, b.sh, and c.sh to 0775. I've currently done it in the following manner: - name: Fix 'support_tools' permissions file: path={{ item }} owner=abc group=abc mode=0775 with_items: - /dir/tools/a.sh - /dir/tools/b.sh - /dir/tools/c.sh Btw, 'file: path=/dir/tools owner=abc group=abc mode=0775' sets the permission of tools directory but not of the files inside it. Is there a better way of achieving this?
ansible
11
69,405
5
https://stackoverflow.com/questions/38751686/ansible-changing-permissions-of-files-inside-a-directory-in-yml-file
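One sketch that avoids hard-coding the file names: let the `find` module enumerate the scripts on the remote host and loop over the result. (`with_fileglob` looks tempting, but it globs on the control machine, so `find` is the remote-safe choice.)

```yaml
- name: Find scripts under /dir/tools
  find:
    paths: /dir/tools
    patterns: "*.sh"
  register: tool_scripts

- name: Fix 'support_tools' permissions
  file:
    path: "{{ item.path }}"
    owner: abc
    group: abc
    mode: "0775"
  with_items: "{{ tool_scripts.files }}"
```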
40,316,836
Update bashrc with virtualenv info using Ansible
I have a python virtualenv running on a remote server. I am trying to update the bashrc of the remote server with the following info using Ansible. export WORKON_HOME=~/TestEnvs source /usr/local/bin/virtualenvwrapper.sh workon my_virtual_env Is there any way to accomplish this using Ansible?
Update bashrc with virtualenv info using Ansible I have a python virtualenv running on a remote server. I am trying to update the bashrc of the remote server with the following info using Ansible. export WORKON_HOME=~/TestEnvs source /usr/local/bin/virtualenvwrapper.sh workon my_virtual_env Is there any way to accomplish this using Ansible?
bash, ansible, virtualenvwrapper
11
9,947
1
https://stackoverflow.com/questions/40316836/update-bashrc-with-virtualenv-info-using-ansible
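A sketch using the `blockinfile` module (available since Ansible 2.0), which manages the three lines as one marked, idempotent block; the user and path below are hypothetical. Note that non-interactive shells may not source `.bashrc`, so this only affects interactive logins:

```yaml
- name: Add virtualenvwrapper setup to the remote .bashrc
  blockinfile:
    dest: "/home/someuser/.bashrc"   # hypothetical target user/path
    marker: "# {mark} ANSIBLE MANAGED BLOCK: virtualenv"
    block: |
      export WORKON_HOME=~/TestEnvs
      source /usr/local/bin/virtualenvwrapper.sh
      workon my_virtual_env
```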
60,525,961
Removing empty values from a list and assigning it to a new list
I have a list generated in Ansible using values gathered by a task. In that list, there are empty strings as some of the keys don't have values assigned to them. So, what I am trying to achieve is to assign that list to a new list but without those empty values. list1: [ "a", "b", "", "7", "" ] I have tried the following and it doesn't seem to work: set_fact: list2: "{{ list1 1 | rejectattr('') |list }}" Is anyone able to point me what I am doing wrong and provide a solution to my issue? Ansible version: 2.9.1
Removing empty values from a list and assigning it to a new list I have a list generated in Ansible using values gathered by a task. In that list, there are empty strings as some of the keys don't have values assigned to them. So, what I am trying to achieve is to assign that list to a new list but without those empty values. list1: [ "a", "b", "", "7", "" ] I have tried the following and it doesn't seem to work: set_fact: list2: "{{ list1 1 | rejectattr('') |list }}" Is anyone able to point me what I am doing wrong and provide a solution to my issue? Ansible version: 2.9.1
ansible
11
18,010
3
https://stackoverflow.com/questions/60525961/removing-empty-values-from-a-list-and-assigning-it-to-a-new-list
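For what it's worth, `rejectattr` filters on an *attribute* of each item, which is why it fails on plain strings. A sketch that drops the empty strings: Jinja's `select` with no test keeps only truthy items, and an empty string is falsy:

```yaml
- set_fact:
    list2: "{{ list1 | select() | list }}"
```

This also drops any other falsy values; to remove only empty strings, `{{ list1 | reject('eq', '') | list }}` should behave equivalently for this list.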
65,092,169
Ansible - Edit a systemd service file
The systemd module: [URL] I'm looking for a way to add a Condition to the service file. For instance: ConditionPathIsMountPoint=/mnt/myreplication/path/ This would be useful for docker installations, ensuring docker doesn't start containers before a mount they need is actually available. Sadly, it looks like Ansible doesn't support adding this right now. Am I correct there? Will I need to manually add it, or with lineinfile ? Or is there an other way? EDIT: This question appears to be getting views, so I'll add this: [URL] And this answer to another question of mine: [URL] To quote it: Don't edit files in /lib/systemd/ or /usr/share/systemd as they will get overwritten on updates.
Ansible - Edit a systemd service file The systemd module: [URL] I'm looking for a way to add a Condition to the service file. For instance: ConditionPathIsMountPoint=/mnt/myreplication/path/ This would be useful for docker installations, ensuring docker doesn't start containers before a mount they need is actually available. Sadly, it looks like Ansible doesn't support adding this right now. Am I correct there? Will I need to manually add it, or with lineinfile ? Or is there an other way? EDIT: This question appears to be getting views, so I'll add this: [URL] And this answer to another question of mine: [URL] To quote it: Don't edit files in /lib/systemd/ or /usr/share/systemd as they will get overwritten on updates.
ansible, systemd
11
16,659
3
https://stackoverflow.com/questions/65092169/ansible-edit-a-systemd-service-file
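Rather than editing the unit shipped under /lib/systemd, a common pattern (sketched here with hypothetical file names) is a drop-in under `/etc/systemd/system/<unit>.d/`, created with ordinary file modules and followed by a daemon-reload:

```yaml
- name: Create drop-in directory for the docker unit
  file:
    path: /etc/systemd/system/docker.service.d
    state: directory
    mode: "0755"

- name: Require the replication mount before docker starts
  copy:
    dest: /etc/systemd/system/docker.service.d/10-replication.conf
    content: |
      [Unit]
      ConditionPathIsMountPoint=/mnt/myreplication/path/
  notify: reload systemd

# handler, defined in the play's handlers section:
- name: reload systemd
  systemd:
    daemon_reload: yes
```

For a hard ordering dependency (rather than a skip-if-absent condition), `RequiresMountsFor=` in the same drop-in may be the better fit.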
35,270,024
Ansible: &#39;item&#39; is undefined
I'm trying to use with_items with delegate_to to run a Docker container in several hosts. I have a group test in /etc/ansible/hosts : [test] my_machine1 my_machine2 And this task: - name: Run app container docker: name: "{{artifact_id}}" insecure_registry: true image: "{{image}}:{{version}}" pull: always state: reloaded ports: - "{{port_mapping}}" delegate_to: '{{item}}' with_items: - "{{groups['test']}}" But when I run it, I get this error: {"failed": true, "msg": "ERROR! 'item' is undefined"} What am I doing wrong? Thanks in advance
Ansible: 'item' is undefined I'm trying to use with_items with delegate_to to run a Docker container in several hosts. I have a group test in /etc/ansible/hosts : [test] my_machine1 my_machine2 And this task: - name: Run app container docker: name: "{{artifact_id}}" insecure_registry: true image: "{{image}}:{{version}}" pull: always state: reloaded ports: - "{{port_mapping}}" delegate_to: '{{item}}' with_items: - "{{groups['test']}}" But when I run it, I get this error: {"failed": true, "msg": "ERROR! 'item' is undefined"} What am I doing wrong? Thanks in advance
docker, ansible
11
23,323
1
https://stackoverflow.com/questions/35270024/ansible-item-is-undefined
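Without reproducing the exact failure, a sketch that sidesteps the loop-plus-delegation entirely: point the play at the `test` group and let each host run the task for itself, so neither `with_items` nor `delegate_to` is needed:

```yaml
- hosts: test
  tasks:
    - name: Run app container
      docker:
        name: "{{ artifact_id }}"
        insecure_registry: true
        image: "{{ image }}:{{ version }}"
        pull: always
        state: reloaded
        ports:
          - "{{ port_mapping }}"
```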
37,225,589
Execute Ansible task on different hosts
I have created a role where I have defined all the Ansible tasks. I also have host A and host B in the inventory. Is it possible to execute 90% of the tasks on host A and 10% of the tasks on host B? My Ansible controller is host C.
Execute Ansible task on different hosts I have created a role where I have defined all the Ansible tasks. I also have host A and host B in the inventory. Is it possible to execute 90% of the tasks on host A and 10% of the tasks on host B? My Ansible controller is host C.
ansible
11
58,972
2
https://stackoverflow.com/questions/37225589/execute-ansible-task-on-different-hosts
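One sketch (with hypothetical task commands and host names): apply the role to host A with the play, and delegate the few host-B steps from inside the role with `delegate_to`:

```yaml
# tasks/main.yml of the role, with the play targeting host A
- name: Step that belongs on host A
  command: /usr/local/bin/main_step      # hypothetical command

- name: Step that belongs on host B
  command: /usr/local/bin/side_step      # hypothetical command
  delegate_to: hostB                     # inventory name of host B
```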
56,048,959
Ansible local_action example how does it work
How does the following Ansible play work: - name: Generate join command command: kubeadm token create --print-join-command register: join_command - name: Copy join command to local file local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command" So as I understand, local_action is the same as delegate_to, but copy content= did not make sense to me. Doesn't an actual command like "cp" need to be specified? Take this example: local_action: command ping -c 1 {{ inventory_hostname }} Can we use something like this: local_action: command cp content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
Ansible local_action example how does it work How does the following Ansible play work: - name: Generate join command command: kubeadm token create --print-join-command register: join_command - name: Copy join command to local file local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command" So as I understand, local_action is the same as delegate_to, but copy content= did not make sense to me. Doesn't an actual command like "cp" need to be specified? Take this example: local_action: command ping -c 1 {{ inventory_hostname }} Can we use something like this: local_action: command cp content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
ansible
11
51,379
2
https://stackoverflow.com/questions/56048959/ansible-local-action-example-how-does-it-work
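For context: in `local_action`, the first word is an Ansible *module* name, not a shell command. Here `copy` is the copy module with its `content`/`dest` parameters, so no `cp` binary is involved and the file is written on the controller. The same task can be written with `delegate_to`, which may read more clearly:

```yaml
- name: Copy join command to local file
  copy:
    content: "{{ join_command.stdout_lines[0] }}"
    dest: "./join-command"
  delegate_to: localhost
```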
41,878,133
Ansible - find and set permissions, including sticky bit
Using Ansible 2.1.4.0 Is it possible to set the sticky bit and folder permissions in 1 task? Example; # Shell is used over find module cause symlink breaks and performance - name: Find directories in /tmp which are not valid shell: find /tmp/test -type d \( ! -user root -o ! -group root -o ! -perm 775 \) register: find1 - name: Set 775 for found directories file: path: "{{ item }}" owner: root group: vagrant mode: 0775 state: directory with_items: "{{ find1.stdout_lines | default([]) }}" - name: Find directories in /tmp which have no sticky bit shell: find /tmp/test -type d \! -perm /1000 changed_when: false register: find2 - name: Set permissions for found directories file: path: "{{ item }}" owner: root group: vagrant mode: g+s state: directory recurse: no #cause it already found recurse with_items: "{{ find2.stdout_lines | default([]) }}" Right now, I must have 2 different tasks to set the permissions. But they overwrite each other. Goal: set the permission to 775 and g+s in one task.
Ansible - find and set permissions, including sticky bit Using Ansible 2.1.4.0 Is it possible to set the sticky bit and folder permissions in 1 task? Example; # Shell is used over find module cause symlink breaks and performance - name: Find directories in /tmp which are not valid shell: find /tmp/test -type d \( ! -user root -o ! -group root -o ! -perm 775 \) register: find1 - name: Set 775 for found directories file: path: "{{ item }}" owner: root group: vagrant mode: 0775 state: directory with_items: "{{ find1.stdout_lines | default([]) }}" - name: Find directories in /tmp which have no sticky bit shell: find /tmp/test -type d \! -perm /1000 changed_when: false register: find2 - name: Set permissions for found directories file: path: "{{ item }}" owner: root group: vagrant mode: g+s state: directory recurse: no #cause it already found recurse with_items: "{{ find2.stdout_lines | default([]) }}" Right now, I must have 2 different tasks to set the permissions. But they overwrite each other. Goal: set the permission to 775 and g+s in one task.
permissions, ansible
11
15,994
2
https://stackoverflow.com/questions/41878133/ansible-find-and-set-permissions-including-sticky-bit
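Symbolic and octal modes can be combined by using a four-digit octal mode: the leading digit carries the special bits (4 setuid, 2 setgid, 1 sticky). A sketch merging the two tasks; note the playbook's `g+s` is the setgid bit, while the second `find` tests `-perm /1000` (sticky), so pick the leading digit for whichever was actually intended:

```yaml
- name: Set 775 plus setgid in one pass
  file:
    path: "{{ item }}"
    owner: root
    group: vagrant
    mode: "2775"    # 2=setgid; use "1775" for sticky, "3775" for both
    state: directory
  with_items: "{{ find1.stdout_lines | default([]) }}"
```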
40,043,826
Accessing a dictionary using another variable as key YAML
My requirements are these: a deployment environment is passed into the playbook as extra vars, for ex : dev, qa or prod I have a variable called DEPLOY_URL Based on the value of the env variable, the DEPLOY_URL has to change. I tried doing the following : DEPLOY_URLS: "dev": "xyz" "prod" : "abc" "qa" : "123" DEPLOY_URL: "{{DEPLOY_URLS['{{DEPLOY_ENV}}']}}" The value is never correct. Is there a way to access a dictionary using another variable as key? (Using YAML and ansible)
Accessing a dictionary using another variable as key YAML My requirements are these: a deployment environment is passed into the playbook as extra vars, for ex : dev, qa or prod I have a variable called DEPLOY_URL Based on the value of the env variable, the DEPLOY_URL has to change. I tried doing the following : DEPLOY_URLS: "dev": "xyz" "prod" : "abc" "qa" : "123" DEPLOY_URL: "{{DEPLOY_URLS['{{DEPLOY_ENV}}']}}" The value is never correct. Is there a way to access a dictionary using another variable as key? (Using YAML and ansible)
ansible, yaml
11
25,624
1
https://stackoverflow.com/questions/40043826/accessing-a-dictionary-using-another-variable-as-key-yaml
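The key point: inside `{{ ... }}` you are already in Jinja, so the key is referenced as a bare variable rather than a nested moustache:

```yaml
DEPLOY_URLS:
  dev: "xyz"
  prod: "abc"
  qa: "123"
DEPLOY_URL: "{{ DEPLOY_URLS[DEPLOY_ENV] }}"
```

With `DEPLOY_ENV=qa` passed as an extra var, `DEPLOY_URL` resolves to `123`.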
36,236,449
Error when installing docker-compose using ansible-playbook
I have a very simple Ansible playbook, all dependencies installed for docker-compose and docker but I get an error when installing docker-compose, this is the task on my playbook to install docker-compose in a CentOS7 environment. #ensure docker-compose and chmod +x /usr/local/bin/docker-compose - name: Ensure docker-compose is installed and available command: curl -L [URL] -s-uname -m > /usr/local/bin/docker-compose - name: Ensure permissions docker-compose command: chmod +x /usr/local/bin/docker-compose The following error appears: TASK: [Ensure docker-compose is installed and available] ********************** failed: [nutrilife-aws] => {"changed": true, "cmd": ["curl", "-L", "[URL] "-s-uname", "-m", ">", "/usr/local/bin/docker-compose"], "delta": "0:00:00.004592", "end": "2016-03-26 14:19:41.852780", "rc": 2, "start": "2016-03-26 14:19:41.848188", "warnings": ["Consider using get_url module rather than running curl"]} stderr: curl: option -s-uname: is unknown curl: try 'curl --help' or 'curl --manual' for more information FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/home/mmaia/playbook.retry nutrilife-aws : ok=4 changed=0 unreachable=0 failed=1 I am kind of stuck with this simple error for a couple of hours. I got the command from standard docker site and tested directly in the shell and it works. I have also tried using double quotes to wrap around the command like command: "curl ..." but it didn't change the error.
Error when installing docker-compose using ansible-playbook I have a very simple Ansible playbook, all dependencies installed for docker-compose and docker but I get an error when installing docker-compose, this is the task on my playbook to install docker-compose in a CentOS7 environment. #ensure docker-compose and chmod +x /usr/local/bin/docker-compose - name: Ensure docker-compose is installed and available command: curl -L [URL] -s-uname -m > /usr/local/bin/docker-compose - name: Ensure permissions docker-compose command: chmod +x /usr/local/bin/docker-compose The following error appears: TASK: [Ensure docker-compose is installed and available] ********************** failed: [nutrilife-aws] => {"changed": true, "cmd": ["curl", "-L", "[URL] "-s-uname", "-m", ">", "/usr/local/bin/docker-compose"], "delta": "0:00:00.004592", "end": "2016-03-26 14:19:41.852780", "rc": 2, "start": "2016-03-26 14:19:41.848188", "warnings": ["Consider using get_url module rather than running curl"]} stderr: curl: option -s-uname: is unknown curl: try 'curl --help' or 'curl --manual' for more information FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/home/mmaia/playbook.retry nutrilife-aws : ok=4 changed=0 unreachable=0 failed=1 I am kind of stuck with this simple error for a couple of hours. I got the command from standard docker site and tested directly in the shell and it works. I have also tried using double quotes to wrap around the command like command: "curl ..." but it didn't change the error.
docker, ansible, docker-compose
11
5,744
2
https://stackoverflow.com/questions/36236449/error-when-installing-docker-compose-using-ansible-playbook
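The `command` module runs no shell, so the `>` redirection and any backtick substitutions in the curl line are passed to curl as literal arguments, which is exactly the unknown-option error shown. A sketch following the warning's advice (the release version in the URL is a placeholder; `ansible_system`/`ansible_architecture` are facts corresponding to `uname -s`/`uname -m`):

```yaml
- name: Ensure docker-compose is installed and executable
  get_url:
    url: "https://github.com/docker/compose/releases/download/1.6.2/docker-compose-{{ ansible_system }}-{{ ansible_architecture }}"  # placeholder version
    dest: /usr/local/bin/docker-compose
    mode: "0755"
```

Setting `mode` on `get_url` also makes the separate `chmod +x` task unnecessary.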
35,142,326
Modify list items in Ansible / Jinja2
For instance you have a list variable in your role... myitems: - one - two ... and want to modify each item within Ansible (e.g. append a string before/after), you can...
Modify list items in Ansible / Jinja2 For instance you have a list variable in your role... myitems: - one - two ... and want to modify each item within Ansible (e.g. append a string before/after), you can...
jinja2, ansible
11
20,456
3
https://stackoverflow.com/questions/35142326/modify-list-items-in-ansible-jinja2
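One common way is mapping `regex_replace` over the list; the prefix/suffix strings here are arbitrary examples:

```yaml
- debug:
    msg: "{{ myitems | map('regex_replace', '^(.*)$', 'prefix-\\1-suffix') | list }}"
```

For the list above this should yield `['prefix-one-suffix', 'prefix-two-suffix']`.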
36,629,599
Stat.exists with list of variables in ansible
I have a problem with checking existing files using a dictionary in Ansible. tasks: - name: Checking existing file id stat: path=/tmp/{{ item.id }}.conf with_items: "{{ file_vars }}" register: check_file_id - name: Checking existing file name stat: path=/tmp/{{ item.name }}.conf with_items: "{{ file_vars }}" register: check_file_name - name: Checking file exists debug: msg='File_id exists' when: check_file_id.stat.exists == True - name: Checking file name exists debug: msg='File name exists' when: check_file_name.stat.exists == True vars: file_vars: - { id: 1, name: one } - { id: 2, name: two } Then, when I try to run the playbook, I get the error: FAILED! => {"failed": true, "msg": "The conditional check 'check_file_id.stat.exists == True' failed. The error was: error while evaluating conditional (check_file_id.stat.exists == True): 'dict' object has no attribute 'stat'\n\n I've tried to debug it: - debug: var=check_file_id and got: "results": [ { "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_checksum": true, "get_md5": true, "mime": false, "path": "/tmp/1.conf" }, "module_name": "stat" }, "item": { "id": 1, "name": "one" }, "stat": { "exists": false } }, { "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_checksum": true, "get_md5": true, "mime": false, "path": "/tmp/2.conf" }, "module_name": "stat" }, "item": { "id": 2, "name": "two" }, "stat": { "exists": false } } ] Where am I wrong? Is it possible to use stat.exists with a list of variables? Thanks for the answer!
Stat.exists with list of variables in ansible I have a problem with checking existing files using a dictionary in Ansible. tasks: - name: Checking existing file id stat: path=/tmp/{{ item.id }}.conf with_items: "{{ file_vars }}" register: check_file_id - name: Checking existing file name stat: path=/tmp/{{ item.name }}.conf with_items: "{{ file_vars }}" register: check_file_name - name: Checking file exists debug: msg='File_id exists' when: check_file_id.stat.exists == True - name: Checking file name exists debug: msg='File name exists' when: check_file_name.stat.exists == True vars: file_vars: - { id: 1, name: one } - { id: 2, name: two } Then, when I try to run the playbook, I get the error: FAILED! => {"failed": true, "msg": "The conditional check 'check_file_id.stat.exists == True' failed. The error was: error while evaluating conditional (check_file_id.stat.exists == True): 'dict' object has no attribute 'stat'\n\n I've tried to debug it: - debug: var=check_file_id and got: "results": [ { "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_checksum": true, "get_md5": true, "mime": false, "path": "/tmp/1.conf" }, "module_name": "stat" }, "item": { "id": 1, "name": "one" }, "stat": { "exists": false } }, { "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "invocation": { "module_args": { "checksum_algorithm": "sha1", "follow": false, "get_checksum": true, "get_md5": true, "mime": false, "path": "/tmp/2.conf" }, "module_name": "stat" }, "item": { "id": 2, "name": "two" }, "stat": { "exists": false } } ] Where am I wrong? Is it possible to use stat.exists with a list of variables? Thanks for the answer!
variables, jinja2, ansible
11
34,118
1
https://stackoverflow.com/questions/36629599/stat-exists-with-list-of-variables-in-ansible
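As the debug output shows, a module registered across `with_items` stores one entry per iteration under `.results`, so the check has to loop over those; `item.item` is the original dictionary from `file_vars`:

```yaml
- name: Checking file exists
  debug:
    msg: "File for id {{ item.item.id }} exists"
  with_items: "{{ check_file_id.results }}"
  when: item.stat.exists
```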
37,748,174
Extract last character of string with ansible and regex_replace filter
In a playbook, I try to extract the last character of variable "ansible_hostname". I try to use regex_replace filter to do that, but nothing works. I simplified my piece of script with this ad-hoc command : ansible localhost -m debug -a "msg= {{ 'devserver01' | regex_replace('[0-9]{1}$', '\1') }}" I want to extract the last character : '1'. I'm using Ansible 2.0.
Extract last character of string with ansible and regex_replace filter In a playbook, I try to extract the last character of variable "ansible_hostname". I try to use regex_replace filter to do that, but nothing works. I simplified my piece of script with this ad-hoc command : ansible localhost -m debug -a "msg= {{ 'devserver01' | regex_replace('[0-9]{1}$', '\1') }}" I want to extract the last character : '1'. I'm using Ansible 2.0.
regex, ansible, ansible-ad-hoc
11
37,647
2
https://stackoverflow.com/questions/37748174/extract-last-character-of-string-with-ansible-and-regex-replace-filter
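Two notes on why the original fails: the pattern `[0-9]{1}$` contains no capture group, so `\1` has nothing to refer to, and `regex_replace` only rewrites the matched part, leaving the rest of the string in place. Two sketches, one using a full-string match with a group, one using plain negative indexing:

```yaml
- debug:
    msg: "{{ 'devserver01' | regex_replace('^.*(.)$', '\\1') }}"

- debug:
    msg: "{{ 'devserver01'[-1] }}"
```

Both should print `1`.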
44,762,488
Non interactive samba user creation via ansible
Although the following command works when typing it in the shell echo -ne "myser\nmypass\n" | smbpasswd -a -s myuser The following task fails in ansible - name: add dms samba user command: echo -ne "myuser\nmypass\n" | smbpasswd -a -s myuser notify: restart samba It does not produce any errors, but the user is not created. Working with Ansible 2.3.0.0 on Ubuntu 16.04.
Non interactive samba user creation via ansible Although the following command works when typing it in the shell echo -ne "myser\nmypass\n" | smbpasswd -a -s myuser The following task fails in ansible - name: add dms samba user command: echo -ne "myuser\nmypass\n" | smbpasswd -a -s myuser notify: restart samba It does not produce any errors, but the user is not created. Working with Ansible 2.3.0.0 on Ubuntu 16.04.
ansible, samba
11
11,373
6
https://stackoverflow.com/questions/44762488/non-interactive-samba-user-creation-via-ansible
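The `command` module does not go through a shell, so the `|` pipe is never interpreted and nothing downstream of `echo` runs. Switching to the `shell` module should make the pipeline work; note also that `smbpasswd -s` reads the new password twice from stdin, so both lines would normally be the password rather than the username:

```yaml
- name: add dms samba user
  shell: echo -ne "mypass\nmypass\n" | smbpasswd -a -s myuser
  notify: restart samba
```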
41,888,181
How to include the Ansible config file in the ansible-playbook command?
I run my ansible-playbook with the following command in my localhost: ansible-playbook -i "localhost," -c local GitClone.yaml However, I also have a GitClone.cfg file which has: [defaults] transport = ssh [ssh_connection] ssh_args = -o ForwardAgent=yes The GitClone.cfg file is in the same directory as the GitClone.yaml file. How do I include this file in the command? The command mentioned above is not picking up the .cfg file.
How to include the Ansible config file in the ansible-playbook command? I run my ansible-playbook with the following command in my localhost: ansible-playbook -i "localhost," -c local GitClone.yaml However, I also have a GitClone.cfg file which has: [defaults] transport = ssh [ssh_connection] ssh_args = -o ForwardAgent=yes The GitClone.cfg file is in the same directory as the GitClone.yaml file. How do I include this file in the command? The command mentioned above is not picking up the .cfg file.
ansible
11
41,159
2
https://stackoverflow.com/questions/41888181/how-to-include-the-ansible-config-file-in-the-ansible-playbook-command
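Ansible locates its configuration via the `ANSIBLE_CONFIG` environment variable, then `./ansible.cfg`, `~/.ansible.cfg`, and `/etc/ansible/ansible.cfg`, so a file named `GitClone.cfg` next to the playbook is never read. Renaming it to `ansible.cfg` in the working directory, or running `ANSIBLE_CONFIG=GitClone.cfg ansible-playbook ...`, should both work. For the SSH options alone, a variables-only sketch avoids the cfg file entirely:

```yaml
# group_vars/all.yml (hypothetical placement) -- per-host SSH options
# equivalent to the cfg's ssh_args
ansible_ssh_common_args: "-o ForwardAgent=yes"
```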
41,482,268
Multiple status_code in Ansible uri module
I am using the Ansible uri module to make a POST request. The request returns status code 201 or 208, and both status codes should be considered a pass for the task. How can I specify multiple status_code values, or how else can I achieve this? - uri: url: "[URL] method: POST HEADER_Content-Type: "application/json" body: '{"version": "6.2.10"}' body_format: json status_code: 208 register: result failed_when: status_code != 201 or status_code != 208
Multiple status_code in Ansible uri module I am using the Ansible uri module to make a POST request. The request returns status code 201 or 208, and both status codes should be considered a pass for the task. How can I specify multiple status_code values, or how else can I achieve this? - uri: url: "[URL] method: POST HEADER_Content-Type: "application/json" body: '{"version": "6.2.10"}' body_format: json status_code: 208 register: result failed_when: status_code != 201 or status_code != 208
ansible, uri
11
23,651
3
https://stackoverflow.com/questions/41482268/multiple-status-code-in-ansible-uri-module
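The uri module accepts a list for `status_code`, which makes the `failed_when` unnecessary (as written it would always be true anyway, since any code differs from at least one of 201 and 208; it would need `and`). A sketch, with a placeholder standing in for the elided URL:

```yaml
- uri:
    url: "https://example.invalid/endpoint"   # placeholder for the elided URL
    method: POST
    body: '{"version": "6.2.10"}'
    body_format: json
    status_code: [201, 208]
  register: result
```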
29,368,686
Pass Ansible variables from one role (running on one host) to another role running on another host within the same playbook
I have a playbook that runs different roles on different hosts. Is it possible to pass a variable from one role running on one host to another role on another host running within the same playbook run? Or any workaround? playbook host1 role1 here I get some variables: var1 var2 ...etc host2 role2 here I need to use var1 var2 ... etc from the above host/role The task in role1 that sets the variable db looks like this: - shell: cd /ACE/conf && grep ^db.url local1.properties | awk -F/ '{print $4}' | awk -F? '{print $1}' register: db UPDATE: On the first host the values are dynamic, it's like a configuration file that is always updated. After I store the values in variables on host1 with the role1 I then move to the host2, run the role2 and do stuff with those values from variables stored by host1. I tried with hostvars: {{ hostvars.LBL.db.stdout }} {{ hostvars['LBL']['db'] }} {{ hostvars['LBL']['db']['stdout'] }} and I get error: in get_variables raise Exception("host not found: %s" % hostname) Exception: host not found: LBL LBL exists in hosts as on it I run the first role I set a variable on one host and I want that variable to be available to the other host. All this within a single playbook. Can it be done? hostvars is not working using it like this: --- - name: test hostvars host1 hosts: LBL tasks: - command: "ls /bin" register: ls_out - name: test hostvars host2 hosts: LM tasks: - debug: var: "{{ hostvars['LBL']['ls_out']['stdout'] }}" error: fatal: [10.104.148.138] => host not found: LBL /etc/ansible/hosts [root@NS1 ansible]# cat /etc/ansible/hosts [LBL] 10.104.148.136 [LM] 10.104.148.138
Pass Ansible variables from one role (running on one host) to another role running on another host within the same playbook I have a playbook that runs different roles on different hosts. Is it possible to pass a variable from one role running on one host to another role on another host running within the same playbook run? Or any workaround? playbook host1 role1 here I get some variables: var1 var2 ...etc host2 role2 here I need to use var1 var2 ... etc from the above host/role The task in role1 that sets the variable db looks like this: - shell: cd /ACE/conf && grep ^db.url local1.properties | awk -F/ '{print $4}' | awk -F? '{print $1}' register: db UPDATE: On the first host the values are dynamic, it's like a configuration file that is always updated. After I store the values in variables on host1 with the role1 I then move to the host2, run the role2 and do stuff with those values from variables stored by host1. I tried with hostvars: {{ hostvars.LBL.db.stdout }} {{ hostvars['LBL']['db'] }} {{ hostvars['LBL']['db']['stdout'] }} and I get error: in get_variables raise Exception("host not found: %s" % hostname) Exception: host not found: LBL LBL exists in hosts as on it I run the first role I set a variable on one host and I want that variable to be available to the other host. All this within a single playbook. Can it be done? hostvars is not working using it like this: --- - name: test hostvars host1 hosts: LBL tasks: - command: "ls /bin" register: ls_out - name: test hostvars host2 hosts: LM tasks: - debug: var: "{{ hostvars['LBL']['ls_out']['stdout'] }}" error: fatal: [10.104.148.138] => host not found: LBL /etc/ansible/hosts [root@NS1 ansible]# cat /etc/ansible/hosts [LBL] 10.104.148.136 [LM] 10.104.148.138
variables, roles, ansible
11
37,816
5
https://stackoverflow.com/questions/29368686/pass-ansible-variables-from-one-role-running-on-one-host-to-another-role-runni
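The `host not found` error above usually means the key passed to `hostvars` is not an inventory hostname: `hostvars` is indexed by host, not by group, so with the inventory from the question the key would be `10.104.148.136` rather than the group name `LBL`. A minimal sketch of one way this can work, reusing the play and variable names from the question:

```yaml
---
- name: gather the value on the first host
  hosts: LBL
  tasks:
    - command: ls /bin
      register: ls_out

- name: use it on the second host
  hosts: LM
  tasks:
    # hostvars is keyed by the inventory hostname, not the group name
    - debug:
        msg: "{{ hostvars['10.104.148.136']['ls_out']['stdout'] }}"
```

If the group holds a single host, `hostvars[groups['LBL'][0]]` avoids hard-coding the address.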
39,994,218
ansible to restart network service
I copy-pasted this from the manual and it fails in my playbook (version 2.0.2): - service: name=network state=restarted args=eth0 I am getting this error: "msg": "Failed to stop eth0.service: Unit eth0.service not loaded.\nFailed to start eth0.service: Unit eth0.service failed to load: No such file or directory.\n"} What is the correct syntax, please?
ansible to restart network service I copy-pasted this from the manual and it fails in my playbook (version 2.0.2): - service: name=network state=restarted args=eth0 I am getting this error: "msg": "Failed to stop eth0.service: Unit eth0.service not loaded.\nFailed to start eth0.service: Unit eth0.service failed to load: No such file or directory.\n"} What is the correct syntax, please?
ansible
11
19,783
5
https://stackoverflow.com/questions/39994218/ansible-to-restart-network-service
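The error above shows systemd being asked for a unit literally called `eth0.service`, which does not exist. A hedged sketch of the common fix is to drop the interface argument and restart the service under its distribution-specific name (the `args`/`arguments` parameter of the `service` module is meant for extra init-script arguments and is not useful under systemd):

```yaml
# The service name is distribution-specific: "networking" on Debian/Ubuntu,
# typically "network" on RHEL/CentOS.
- name: restart the networking service
  service:
    name: networking
    state: restarted
```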
37,668,442
Filter a substring matching a pattern from an ansible variable and assign matched substring to another variable
Let's say we have a long ansible string variable mystr . We have a regex pattern say substr_pattern and a substring matching this pattern is to be filtered out from mystr and to be assigned to another variable substr . There seems to be no obvious way to do this in ansible from the docs on playbook_filters though it is simple to do this with re module in python itself. There are 3 methods given in the ansible docs and none of them seems to solve my problem: match : This filter returns true/false depending on whether the entire pattern matches the entire string but does not return matched group/substring. search : Used to filter substr in a bigger string. But like match , only returns true/false and not matched group/substring that is needed here. regex_replace : This is used to replace a matched pattern in a string with another string. But it's not clear how to register the substring/group corresponding to the matched pattern into a new variable. Is there anything that I am missing? Or is this a missing feature in ansible? Ansible Version: 2.1 Example: mystr: "This is the long string. With a url. [URL] pattern: "http:\/\/example.org\/(\d+)" substr: 12345 # First matched group i.e. \\1 Summary: How to get the substring matching the pattern from mystr and register that to an ansible variable substr ?
Filter a substring matching a pattern from an ansible variable and assign matched substring to another variable Let's say we have a long ansible string variable mystr . We have a regex pattern say substr_pattern and a substring matching this pattern is to be filtered out from mystr and to be assigned to another variable substr . There seems to be no obvious way to do this in ansible from the docs on playbook_filters though it is simple to do this with re module in python itself. There are 3 methods given in the ansible docs and none of them seems to solve my problem: match : This filter returns true/false depending on whether the entire pattern matches the entire string but does not return matched group/substring. search : Used to filter substr in a bigger string. But like match , only returns true/false and not matched group/substring that is needed here. regex_replace : This is used to replace a matched pattern in a string with another string. But it's not clear how to register the substring/group corresponding to the matched pattern into a new variable. Is there anything that I am missing? Or is this a missing feature in ansible? Ansible Version: 2.1 Example: mystr: "This is the long string. With a url. [URL] pattern: "http:\/\/example.org\/(\d+)" substr: 12345 # First matched group i.e. \\1 Summary: How to get the substring matching the pattern from mystr and register that to an ansible variable substr ?
ansible, ansible-2.x
11
28,709
1
https://stackoverflow.com/questions/37668442/filter-a-substring-matching-a-pattern-from-an-ansible-variable-and-assign-matche
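One common workaround in the Ansible version from the question, since `match`/`search` only return booleans there, is to use `regex_replace` with a pattern anchored to the whole string, so the replacement collapses to just the captured group. A sketch using the example values from the question:

```yaml
- set_fact:
    substr: "{{ mystr | regex_replace('^.*http://example\\.org/(\\d+).*$', '\\1') }}"
```

Newer Ansible releases also ship a `regex_search` filter that returns the matched text directly instead of a boolean.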
28,468,551
Using 'creates' arg in Ansible shell module
When using the "creates" argument in the shell module, can this be a path to a directory? Or does it have to be a file?
Using 'creates' arg in Ansible shell module When using the "creates" argument in the shell module, can this be a path to a directory? Or does it have to be a file?
ansible
11
31,437
1
https://stackoverflow.com/questions/28468551/using-creates-arg-in-ansible-shell-module
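`creates` is checked with a plain existence test on the remote path, so a directory works just as well as a file. A minimal sketch (the script and directory paths are hypothetical):

```yaml
# The shell task is skipped whenever /opt/myapp exists,
# whether it is a file or a directory.
- shell: /tmp/install.sh
  args:
    creates: /opt/myapp
```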
39,545,195
Match literals with 'regex_replace' Ansible filter
I cannot find a way to match a literal (a dot) in Ansible's regex_replace filter. Here is the task: - name: Display database name debug: msg: "{{ vhost | regex_replace('(.+\.)(.+)$', \\1) }}" tags: [debug] My intention is to match and replace the whole URL like test.staging.domain.com with its first part ( test in the example). Ansible would report the following error: debug: msg: "{{ vhost | regex_replace('(.+\.)(.+)$', \\1) }}" ^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. How can I match literals in Ansible regex_replace filter?
Match literals with 'regex_replace' Ansible filter I cannot find a way to match a literal (a dot) in Ansible's regex_replace filter. Here is the task: - name: Display database name debug: msg: "{{ vhost | regex_replace('(.+\.)(.+)$', \\1) }}" tags: [debug] My intention is to match and replace the whole URL like test.staging.domain.com with its first part ( test in the example). Ansible would report the following error: debug: msg: "{{ vhost | regex_replace('(.+\.)(.+)$', \\1) }}" ^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. How can I match literals in Ansible regex_replace filter?
regex, ansible
11
40,163
3
https://stackoverflow.com/questions/39545195/match-literals-with-regex-replace-ansible-filter
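Two things usually trip this up: the replacement must be a quoted string (not a bare `\1`), and the whole expression must be quoted so YAML passes the backslashes through. A sketch of the task from the question with both fixes, using a pattern that keeps only the first label of the hostname:

```yaml
- name: Display database name
  debug:
    # Inside the double-quoted YAML scalar, '\\.' reaches the regex as a
    # literal dot, and the replacement '\\1' is a proper quoted string.
    msg: "{{ vhost | regex_replace('^([^.]+)\\..*$', '\\1') }}"
  tags: [debug]
```

With `vhost` set to `test.staging.domain.com`, this yields `test`, matching the stated intention.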
33,762,738
Specifying the OS - Ansible
I'm a newbie in Ansible , so I wrote a little ansible utility to install some package dependencies for a system I'm writing: --- - hosts: all user: root tasks: - name: install requirements apt: name={{item}} state=latest update_cache=true with_items: - gcc - python-dev - python-setuptools - python-software-properties The current supported environments are Ubuntu , Red Hat and Mac OS X . The current way this playbook is written it will only work in Ubuntu (Debian) . How can I have that part of the code be executed according to the OS? For Ubuntu it's apt , for Red Hat it's yum and for Mac OS X brew .
Specifying the OS - Ansible I'm a newbie in Ansible , so I wrote a little ansible utility to install some package dependencies for a system I'm writing: --- - hosts: all user: root tasks: - name: install requirements apt: name={{item}} state=latest update_cache=true with_items: - gcc - python-dev - python-setuptools - python-software-properties The current supported environments are Ubuntu , Red Hat and Mac OS X . The current way this playbook is written it will only work in Ubuntu (Debian) . How can I have that part of the code be executed according to the OS? For Ubuntu it's apt , for Red Hat it's yum and for Mac OS X brew .
python, macos, ubuntu, package, ansible
11
30,844
1
https://stackoverflow.com/questions/33762738/specifying-the-os-ansible
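The usual pattern is to gather facts and guard each package-manager task with `ansible_os_family`, which reports `Debian`, `RedHat`, and `Darwin` respectively for the three target platforms. A hedged sketch reusing the package list from the question (package names differ per platform, e.g. `python-devel` on Red Hat, and gathering facts must stay enabled):

```yaml
- name: install requirements (Debian/Ubuntu)
  apt: name={{ item }} state=latest update_cache=true
  when: ansible_os_family == "Debian"
  with_items:
    - gcc
    - python-dev
    - python-setuptools

- name: install requirements (Red Hat)
  yum: name={{ item }} state=latest
  when: ansible_os_family == "RedHat"
  with_items:
    - gcc
    - python-devel
    - python-setuptools

- name: install requirements (Mac OS X)
  homebrew: name={{ item }} state=latest
  when: ansible_os_family == "Darwin"
  with_items:
    - gcc
```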
22,339,832
Which is the best way to make config changes in conf files in ansible
Initially I used a makefile to deploy my application in linux . I had various sed commands to replace variables like the PHP upload file size, post size, log file location etc. Now I am shifting to ansible. I know I can copy the files, but how can I make changes to the conf files? Like if I just want to change the upload_filesize = 50M parameter. I don't want to make copies of the whole conf file and then replace with my file. Sometimes it’s only a one-line change. Is there any better way to edit the config files in ansible?
Which is the best way to make config changes in conf files in ansible Initially I used a makefile to deploy my application in linux . I had various sed commands to replace variables like the PHP upload file size, post size, log file location etc. Now I am shifting to ansible. I know I can copy the files, but how can I make changes to the conf files? Like if I just want to change the upload_filesize = 50M parameter. I don't want to make copies of the whole conf file and then replace with my file. Sometimes it's only a one-line change. Is there any better way to edit the config files in ansible?
linux, makefile, ansible
11
11,916
2
https://stackoverflow.com/questions/22339832/which-is-the-best-way-to-make-config-changes-in-conf-files-in-ansible
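For single-line edits like this, `lineinfile` is the usual replacement for `sed`: it changes (or appends) just the matching line and is idempotent. A sketch with the parameter name from the question (the php.ini path is an assumption and varies by distribution and PHP version):

```yaml
- name: set PHP upload file size
  lineinfile:
    dest: /etc/php5/apache2/php.ini   # assumed path, varies by distro
    regexp: '^upload_filesize'
    line: 'upload_filesize = 50M'
```

When a file has many variables to manage, the `template` module with a Jinja2 template is the usual alternative to a pile of `lineinfile` tasks.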
59,203,583
How can I create a Kubernetes Secret with Ansible?
I'm using an Ansible JMeter Operator to do distributed load testing and am having trouble with creating a Kubernetes secret. The operator I'm modifying is the JMeter one and the additional YAML I'm adding is as below: - name: InfluxDB Storage Secret k8s: apiVersion: v1 kind: Secret type: Opaque metadata: name: azure-storage-account-infxluxdb-secret namespace: '{{ meta.namespace }}' stringData: azurestorageaccountname: 'xxxxxxx' azurestorageaccountkey: 'xxxxxxxxxxx' Is there anything wrong with the YAML definition? I'm modifying the roles/jmeter/tasks/main.yaml of the role to add it into my specific namespace.
How can I create a Kubernetes Secret with Ansible? I'm using an Ansible JMeter Operator to do distributed load testing and am having trouble with creating a Kubernetes secret. The operator I'm modifying is the JMeter one and the additional YAML I'm adding is as below: - name: InfluxDB Storage Secret k8s: apiVersion: v1 kind: Secret type: Opaque metadata: name: azure-storage-account-infxluxdb-secret namespace: '{{ meta.namespace }}' stringData: azurestorageaccountname: 'xxxxxxx' azurestorageaccountkey: 'xxxxxxxxxxx' Is there anything wrong with the YAML definition? I'm modifying the roles/jmeter/tasks/main.yaml of the role to add it into my specific namespace.
kubernetes, ansible
11
18,873
2
https://stackoverflow.com/questions/59203583/how-can-i-create-a-kubernetes-secret-with-ansible
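The `k8s` module expects a full resource under its `definition` parameter; mixing module arguments with raw manifest keys such as `stringData` at the task level is what typically breaks here. A sketch of the task from the question wrapped in `definition` (values kept verbatim from the question):

```yaml
- name: InfluxDB Storage Secret
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: azure-storage-account-infxluxdb-secret
        namespace: '{{ meta.namespace }}'
      stringData:
        azurestorageaccountname: 'xxxxxxx'
        azurestorageaccountkey: 'xxxxxxxxxxx'
```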
42,106,527
Ansible: how to call module add_host for all hosts of the play
I'm creating a playbook with this play: On hosts hypervisors : retrieve list of virtual machines from all hosts use module add_host to add all of them in a new inventory group called guests My inventory: [hypervisors] host1 host2 My playbook: - hosts: hypervisors - shell: virsh list | awk 'NR>2' | awk '{print $2}' register: result_virsh - add_host: name: "{{ item }}" group: "guests" with_items: "{{ result_virsh.stdout_lines }}" Module add_host bypasses the play host loop and only runs once for all the hosts in the play . Then it is called once (for host1), it's a particular case of the use of this module (see link above), as if the variable run_once was implicitly fixed to true . How can I use it for all hosts in group hypervisors ? EDIT: Example to reproduce it on your computer with only localhost Create file /tmp/host1_test to simulate a return of guests vm1 and vm2 : vm1 vm2 Create file /tmp/host2_test to simulate a return of guests vm3 and vm4 : vm3 vm4 Use this inventory ( test_add_host.ini ) with two hosts, both with fixed IP address 127.0.0.1 : [hypervisors] host1 ansible_host=127.0.0.1 test_filename=/tmp/host1_test host2 ansible_host=127.0.0.1 test_filename=/tmp/host2_test Use this playbook ( test_add_host.yml ): - hosts: hypervisors gather_facts: no tasks: - shell: "cat {{ test_filename }}" register: result_virsh - add_host: name: "{{ item }}" group: "guests" with_items: "{{ result_virsh.stdout_lines }}" - hosts: guests gather_facts: no tasks: - local_action: ping Call this playbook locally with command: ansible-playbook -c local -i test_add_host.ini test_add_host.yml First play calls hosts host1 and host2 Second play calls hosts vm1 and vm2 What should I do to call all hosts ( vm1 , vm2 , vm3 and vm4 ) in second play ?
Ansible: how to call module add_host for all hosts of the play I'm creating a playbook with this play: On hosts hypervisors : retrieve list of virtual machines from all hosts use module add_host to add all of them in a new inventory group called guests My inventory: [hypervisors] host1 host2 My playbook: - hosts: hypervisors - shell: virsh list | awk 'NR>2' | awk '{print $2}' register: result_virsh - add_host: name: "{{ item }}" group: "guests" with_items: "{{ result_virsh.stdout_lines }}" Module add_host bypasses the play host loop and only runs once for all the hosts in the play . Then it is called once (for host1), it's a particular case of the use of this module (see link above), as if the variable run_once was implicitly fixed to true . How can I use it for all hosts in group hypervisors ? EDIT: Example to reproduce it on your computer with only localhost Create file /tmp/host1_test to simulate a return of guests vm1 and vm2 : vm1 vm2 Create file /tmp/host2_test to simulate a return of guests vm3 and vm4 : vm3 vm4 Use this inventory ( test_add_host.ini ) with two hosts, both with fixed IP address 127.0.0.1 : [hypervisors] host1 ansible_host=127.0.0.1 test_filename=/tmp/host1_test host2 ansible_host=127.0.0.1 test_filename=/tmp/host2_test Use this playbook ( test_add_host.yml ): - hosts: hypervisors gather_facts: no tasks: - shell: "cat {{ test_filename }}" register: result_virsh - add_host: name: "{{ item }}" group: "guests" with_items: "{{ result_virsh.stdout_lines }}" - hosts: guests gather_facts: no tasks: - local_action: ping Call this playbook locally with command: ansible-playbook -c local -i test_add_host.ini test_add_host.yml First play calls hosts host1 and host2 Second play calls hosts vm1 and vm2 What should I do to call all hosts ( vm1 , vm2 , vm3 and vm4 ) in second play ?
ansible, ansible-inventory
11
18,928
3
https://stackoverflow.com/questions/42106527/ansible-how-to-call-module-add-host-for-all-hosts-of-the-play
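Since `add_host` only runs for the first host, one common workaround is to let it run once but loop over the registered facts of every hypervisor through `hostvars`. A hedged sketch (the `flatten` filter assumes Ansible 2.5+; older versions need a different way to merge the lists):

```yaml
- hosts: hypervisors
  gather_facts: no
  tasks:
    - shell: "cat {{ test_filename }}"
      register: result_virsh

    # add_host implicitly runs once; pull result_virsh from every
    # hypervisor in the group instead of only the current host
    - add_host:
        name: "{{ item }}"
        groups: guests
      loop: "{{ groups['hypervisors']
                | map('extract', hostvars, ['result_virsh', 'stdout_lines'])
                | list | flatten }}"
```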
25,185,018
how to stop Ansible task successfully, not failing?
I've searched for examples on how to stop execution of an Ansible task without failing it. A straightforward example: - name: check file stat: path={{some_path}} register: check_file_result - name: if file exists, stop **fail**: msg="file exists, stopping" when: check_file_result.stat.exists This works as in it stops the execution, but fails with tons of red flavoured text probably stopping whole playbook from running further tasks. Is there a way to stop the execution as if all of it ended with "OK"? Note: Workaround is to simply add "when: check_file_result.stat.exists == false", but this scales very poorly.
how to stop Ansible task successfully, not failing? I've searched for examples on how to stop execution of an Ansible task without failing it. A straightforward example: - name: check file stat: path={{some_path}} register: check_file_result - name: if file exists, stop **fail**: msg="file exists, stopping" when: check_file_result.stat.exists This works as in it stops the execution, but fails with tons of red flavoured text probably stopping whole playbook from running further tasks. Is there a way to stop the execution as if all of it ended with "OK"? Note: Workaround is to simply add "when: check_file_result.stat.exists == false", but this scales very poorly.
break, ansible
11
6,992
1
https://stackoverflow.com/questions/25185018/how-to-stop-ansible-task-successfully-not-failing
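Besides guarding every later task with `when`, Ansible can end the play cleanly with `meta: end_play`, which stops execution without marking anything failed. A hedged sketch:

```yaml
- name: check file
  stat: path={{ some_path }}
  register: check_file_result

# meta: end_play stops the play with an "ok" result; whether a when:
# condition is honoured on meta tasks depends on the Ansible version
- meta: end_play
  when: check_file_result.stat.exists
```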
21,794,886
Why does this ansible lineinfile task always trigger a notify?
The following task always triggers a notify. The first time it runs, Ansible applies the change, which is expected, and the line is changed. If I run it again, ansible considers it as "changed", even though the regex cannot possibly match, since the line has become "bind-address = 0.0.0.0". Why? - name: Ensure MySQL will listen on all ip interfaces (bind to 0.0.0.0) lineinfile: dest=/etc/mysql/my.cnf regexp='bind-address\s*=\s*127\.0\.0\.1\s*' line='bind-address = 0.0.0.0' state=present insertafter=EOF notify: restart mysql
Why does this ansible lineinfile task always trigger a notify? The following task always triggers a notify. The first time it runs, Ansible applies the change, which is expected, and the line is changed. If I run it again, ansible considers it as "changed", even though the regex cannot possibly match, since the line has become "bind-address = 0.0.0.0". Why? - name: Ensure MySQL will listen on all ip interfaces (bind to 0.0.0.0) lineinfile: dest=/etc/mysql/my.cnf regexp='bind-address\s*=\s*127\.0\.0\.1\s*' line='bind-address = 0.0.0.0' state=present insertafter=EOF notify: restart mysql
ansible
11
8,270
1
https://stackoverflow.com/questions/21794886/why-does-this-ansible-lineinfile-task-always-trigger-a-notify
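After the first run the regexp matches nothing, so `lineinfile` falls back to inserting `line` again and reports a change. The usual fix is a regexp that matches both the old and the new value, making the task idempotent:

```yaml
- name: Ensure MySQL will listen on all ip interfaces (bind to 0.0.0.0)
  lineinfile:
    dest: /etc/mysql/my.cnf
    # matches the line whether it still holds 127.0.0.1 or already 0.0.0.0
    regexp: '^bind-address\s*='
    line: 'bind-address = 0.0.0.0'
    state: present
  notify: restart mysql
```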
57,729,144
Ansible-Playbook: Error Message "Unable to find any of pip3 to use. pip needs to be installed"
I try to install boto3 via ansible in my playbook. I tried to create a new user on my host. - name: "test user" hosts: test tasks: - name: "install boto3" pip: name: boto3 executable: pip3 I got this message : {"changed": false, "msg": "Unable to find any of pip3 to use. pip needs to be installed."}
Ansible-Playbook: Error Message "Unable to find any of pip3 to use. pip needs to be installed" I try to install boto3 via ansible in my playbook. I tried to create a new user on my host. - name: "test user" hosts: test tasks: - name: "install boto3" pip: name: boto3 executable: pip3 I got this message : {"changed": false, "msg": "Unable to find any of pip3 to use. pip needs to be installed."}
ansible
11
24,262
3
https://stackoverflow.com/questions/57729144/ansible-playbook-error-message-unable-to-find-any-of-pip3-to-use-pip-needs-to
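The `pip` module only drives an existing pip binary, so `pip3` has to be installed on the target first. A sketch for a Debian/Ubuntu host (the package name is distribution-specific):

```yaml
- name: make sure pip3 exists (Debian/Ubuntu package name)
  apt:
    name: python3-pip
    state: present
  become: yes

- name: install boto3
  pip:
    name: boto3
    executable: pip3
```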
47,542,205
Print string when "item exists in a list" in Jinja2 template
I'm trying to populate nsswitch.conf with values that are determined from a list. The list is of strings: openldap_nsswitch: - group - hosts - passwd - shadow If the string is in the list I want to output something in the template. passwd: compat {% if openldap_nsswitch contains passwd %}ldap{% endif %} How can I write a string only if my list contains a specific element?
Print string when "item exists in a list" in Jinja2 template I'm trying to populate nsswitch.conf with values that are determined from a list. The list is of strings: openldap_nsswitch: - group - hosts - passwd - shadow If the string is in the list I want to output something in the template. passwd: compat {% if openldap_nsswitch contains passwd %}ldap{% endif %} How can I write a string only if my list contains a specific element?
ansible, jinja2, ansible-template
11
40,710
2
https://stackoverflow.com/questions/47542205/print-string-when-item-exists-in-a-list-in-jinja2-template
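Jinja2 has a built-in `in` operator for membership tests, so the pseudo-syntax `contains` from the question becomes (note the string must be quoted):

```jinja
passwd:         compat {% if 'passwd' in openldap_nsswitch %}ldap{% endif %}
```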
43,126,400
Ansible Host: how to get the value for $HOME variable?
I know about the ansible_env.HOME variable, which allows us to retrieve the path for the home user for the VMs we are connecting to using Ansible. However I need to get the home path for the ansible host. That means, the machine which is running the ansible playbook. Is there a short variable to retrieve that information? I was hoping to avoid running a local command and storing the result in a variable.
Ansible Host: how to get the value for $HOME variable? I know about the ansible_env.HOME variable, which allows us to retrieve the path for the home user for the VMs we are connecting to using Ansible. However I need to get the home path for the ansible host. That means, the machine which is running the ansible playbook. Is there a short variable to retrieve that information? I was hoping to avoid running a local command and storing the result in a variable.
ansible
11
13,385
1
https://stackoverflow.com/questions/43126400/ansible-host-how-to-get-the-value-for-home-variable
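Lookup plugins always run on the control machine, so the `env` lookup returns the controller's own `$HOME` without any extra task:

```yaml
- debug:
    msg: "{{ lookup('env', 'HOME') }}"
```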
42,847,372
Ansible loop - how to match up template values with with_items?
I'm trying to create files that have values that match their with_items values. I have a var list like so: sites: - domain: google.com cname: blue - domain: facebook.com cname: green - domain: twitter.com cname: red I create individual files for each of the objects in the list in this task: - name: Create files template: src: file.conf dest: "/etc/nginx/conf.d/{{item.cname}}.conf" with_items: "{{sites}}" These both work great. What do I need to have in my template file for it to create a file called blue.conf that has only google.com in it? I have tried a lot of variations. The closest I got to was this: server { listen 80; listen [::]:80; {% for item in sites %} server_name {{item.cname}}.es.nodesource.io; location / { proxy_pass {{item.domain}}; } {% endfor %} } That will create individual files, but every file has all the domains and cnames.
Ansible loop - how to match up template values with with_items? I'm trying to create files that have values that match their with_items values. I have a var list like so: sites: - domain: google.com cname: blue - domain: facebook.com cname: green - domain: twitter.com cname: red I create individual files for each of the objects in the list in this task: - name: Create files template: src: file.conf dest: "/etc/nginx/conf.d/{{item.cname}}.conf" with_items: "{{sites}}" These both work great. What do I need to have in my template file for it to create a file called blue.conf that has only google.com in it? I have tried a lot of variations. The closest I got to was this: server { listen 80; listen [::]:80; {% for item in sites %} server_name {{item.cname}}.es.nodesource.io; location / { proxy_pass {{item.domain}}; } {% endfor %} } That will create individual files, but every file has all the domains and cnames.
ansible, jinja2, ansible-template
11
18,400
1
https://stackoverflow.com/questions/42847372/ansible-loop-how-to-match-up-template-values-with-with-items
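Because the template is rendered once per loop iteration, `item` is already in scope inside it; the `{% for item in sites %}` loop is precisely what pulls every entry into every file. A sketch of the template without the loop (the `http://` scheme on `proxy_pass` is an assumption added for nginx's benefit):

```jinja
server {
    listen 80;
    listen [::]:80;

    {# item is the current element of the with_items loop in the task #}
    server_name {{ item.cname }}.es.nodesource.io;

    location / {
        proxy_pass http://{{ item.domain }};
    }
}
```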
38,272,125
password not being accepted for sudo user with ansible
I am running an ansible playbook as a sudo user (forcing the sudo password) - however, I am getting a response stating that the su password is incorrect even though I can do the following on the remote server (with the same password that I tried with ansible): sudo su - root error message fatal: [testserver]: FAILED! => {"failed": true, "msg": "Incorrect su password"} hosts [webservers] testserver ansible_ssh_host=ec2-52-87-166-241.compute-1.amazonaws.com ansible_ssh_port=9876 ansible command ansible-playbook test_playbook.yml -i hosts --ask-become-pass -vvv test_playbook --- - hosts: all gather_facts: no remote_user: testuser become: yes become_method: su become_user: root any_errors_fatal: true tasks: - group: name: devops state: present - name: create devops user with admin privileges user: name: devops comment: "Devops User" uid: 2001 groups: devops Any thoughts on what I might be doing wrong?
password not being accepted for sudo user with ansible I am running an ansible playbook as a sudo user (forcing the sudo password) - however, I am getting a response stating that the su password is incorrect even though I can do the following on the remote server (with the same password that I tried with ansible): sudo su - root error message fatal: [testserver]: FAILED! => {"failed": true, "msg": "Incorrect su password"} hosts [webservers] testserver ansible_ssh_host=ec2-52-87-166-241.compute-1.amazonaws.com ansible_ssh_port=9876 ansible command ansible-playbook test_playbook.yml -i hosts --ask-become-pass -vvv test_playbook --- - hosts: all gather_facts: no remote_user: testuser become: yes become_method: su become_user: root any_errors_fatal: true tasks: - group: name: devops state: present - name: create devops user with admin privileges user: name: devops comment: "Devops User" uid: 2001 groups: devops Any thoughts on what I might be doing wrong?
ansible, ansible-2.x
11
19,489
2
https://stackoverflow.com/questions/38272125/password-not-being-accepted-for-sudo-user-with-ansible
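`become_method: su` authenticates with the root account's password, while `sudo su - root` authenticates with the invoking user's own password, which is why the same password is rejected. A sketch switching the play header to sudo-based escalation:

```yaml
- hosts: all
  gather_facts: no
  remote_user: testuser
  become: yes
  become_method: sudo   # sudo asks for testuser's password, matching --ask-become-pass
  become_user: root
```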
23,493,984
How to install Ruby 2 and Ruby Gems on Ubuntu box with Ansible
I want to provision Ubuntu Server machine with latest Ruby and Ruby Gems version using Ansible . How do I do this?
How to install Ruby 2 and Ruby Gems on Ubuntu box with Ansible I want to provision Ubuntu Server machine with latest Ruby and Ruby Gems version using Ansible . How do I do this?
ruby, ubuntu, ansible
11
6,458
2
https://stackoverflow.com/questions/23493984/how-to-install-ruby-2-and-ruby-gems-on-ubuntu-box-with-ansible
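One common approach on Ubuntu uses the Brightbox next-generation Ruby PPA; the PPA name and the exact package names below are assumptions that change as Ruby releases move on:

```yaml
- name: add the Brightbox Ruby PPA
  apt_repository:
    repo: 'ppa:brightbox/ruby-ng'

- name: install Ruby (Rubygems ships with Ruby 1.9+)
  apt:
    name: "{{ item }}"
    update_cache: yes
  with_items:
    - ruby2.2
    - ruby2.2-dev
```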
22,494,070
Ansible Using --extra-vars for conditional includes
I am using Ansible to deploy an environment that may have services distributed or not. I would like to conditionally include playbooks based on arguments I pass to ansible-playbook. create_server.yml --- - include: launch_ec2_instance.yml - include install_postgres.yml when {{db}} == "Y" - include install_redis.yml when {{redis}} == "Y" Here is how I am calling create_server.yml ansible-playbook create_server.yml -i local --extra-vars "db=Y redis=N" Is it possible to do this and if so, how?
Ansible Using --extra-vars for conditional includes I am using Ansible to deploy an environment that may have services distributed or not. I would like to conditionally include playbooks based on arguments I pass to ansible-playbook. create_server.yml --- - include: launch_ec2_instance.yml - include install_postgres.yml when {{db}} == "Y" - include install_redis.yml when {{redis}} == "Y" Here is how I am calling create_server.yml ansible-playbook create_server.yml -i local --extra-vars "db=Y redis=N" Is it possible to do this and if so, how?
ansible
11
15,882
2
https://stackoverflow.com/questions/22494070/ansible-using-extra-vars-for-conditional-includes
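This works when the includes are task files inside a play; the `when` keyword then gates each included task. A sketch using the extra-vars from the question, with the comparison values quoted:

```yaml
---
- hosts: all
  tasks:
    - include: launch_ec2_instance.yml
    - include: install_postgres.yml
      when: db == "Y"
    - include: install_redis.yml
      when: redis == "Y"
```

Invocation stays as in the question: `ansible-playbook create_server.yml -i local --extra-vars "db=Y redis=N"`.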
55,849,706
ansible vault encrypt string
I'm trying to encrypt some password but would like to encrypt just part of the string instead of the whole file. So there is a command ansible-vault encrypt_string which provides you an encrypted output; however, when I'm adding it to my .yaml playbook it fails to decrypt. Command used for encrypting a simple password: ansible-vault encrypt_string 'Test123!' --name 'ansible_password' result: ansible_password: !vault | $ANSIBLE_VAULT;1.1;AES256 30333733643939646130396638646138636338636162316536313236666334656338306634353434 3132326265313639623039653261336265343733383730340a663565323932636138633365386332 36363534326263326633623238653464376637646632363839313464333830363436643561626534 6338613837393539350a383962663766373466376138376666393639373631313861663866333663 6137 Encryption successful ^ formatting is a little bit clunky with long strings So I'm trying to put this value into my playbook like this: --- - name: Copy needed files to target machine hosts: prod vars: ansible_user: test_admin ansible_password: !vault $ANSIBLE_VAULT;1.1;AES256;303337336439396461303966386461386363386361623165363132366663346563383066343534343132326265313639623039653261336265343733383730340a663565323932636138633365386332363635343262633266336232386534643766376466323638393134643338303634366435616265346338613837393539350a3839626637663734663761383766663936393736313138616638663336636137 ansible_connection: winrm ansible_winrm_transport: credssp ansible_winrm_server_cert_validation: ignore tasks: - name: Copy test win_copy: src: /etc/winmachines/hosts dest: C:\test\ Then I want to execute playbook with command: ansible-playbook copy.yaml -i hosts.ini result: PLAY [Copy needed files to target machine] ******************************************************************************************************** TASK [Gathering Facts] **************************************************************************************************************************** fatal: [10.5.17.5]: FAILED! => {"msg": "Attempting to decrypt but no vault secrets found"} to retry, use: --limit @deleted PLAY RECAP **************************************************************************************************************************************** 10.5.17.5 : ok=0 changed=0 unreachable=0 failed=1 When I run the playbook with parameter --ask-vault-password: ansible-playbook copy.yaml -i hosts.ini --ask-vault-pass PLAY [Copy needed files to target machine] ******************************************************************************************************** TASK [Gathering Facts] **************************************************************************************************************************** [WARNING]: There was a vault format error: Vault vaulttext format error: need more than 1 value to unpack fatal: [10.5.17.5]: FAILED! => {"msg": "Vault vaulttext format error: need more than 1 value to unpack"} to retry, use: --limit @deleted PLAY RECAP **************************************************************************************************************************************** 10.5.17.5 : ok=0 changed=0 unreachable=0 failed=1 I have tried to put the output from encrypting in various ways but every time it fails due to this problem or a syntax problem. When put in the way delivered from the output I'm getting a Syntax Error like this: ERROR! Syntax Error while loading YAML. could not find expected ':' The error appears to have been in : line 8, column 11, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: $ANSIBLE_VAULT;1.1;AES256 34323264326266303364656561616663306566356636616238623931613032343131643839336338 ^ here When everything is on the same line (removed "|") I got this error: TASK [Gathering Facts] **************************************************************************************************************************** fatal: [10.5.17.5]: FAILED! => {"msg": "AES256 343232643262663033646565616166633065663566366162386239316130323431316438393363383331346231343361386432666461646639386135666335380a373862336531613034393262653432313038303432636536313139353932356637343139393938666536383061383233613136643063353761386339323562640a3963306632393865613237666364386566623938356465336363613163646338 cipher could not be found"} Any ideas?
ansible vault encrypt string I'm trying to encrypt some password but would like to encrypt just part of the string instead of the whole file. So there is a command ansible-vault encrypt_string which provides you an encrypted output; however, when I'm adding it to my .yaml playbook it fails to decrypt. Command used for encrypting a simple password: ansible-vault encrypt_string 'Test123!' --name 'ansible_password' result: ansible_password: !vault | $ANSIBLE_VAULT;1.1;AES256 30333733643939646130396638646138636338636162316536313236666334656338306634353434 3132326265313639623039653261336265343733383730340a663565323932636138633365386332 36363534326263326633623238653464376637646632363839313464333830363436643561626534 6338613837393539350a383962663766373466376138376666393639373631313861663866333663 6137 Encryption successful ^ formatting is a little bit clunky with long strings So I'm trying to put this value into my playbook like this: --- - name: Copy needed files to target machine hosts: prod vars: ansible_user: test_admin ansible_password: !vault $ANSIBLE_VAULT;1.1;AES256;303337336439396461303966386461386363386361623165363132366663346563383066343534343132326265313639623039653261336265343733383730340a663565323932636138633365386332363635343262633266336232386534643766376466323638393134643338303634366435616265346338613837393539350a3839626637663734663761383766663936393736313138616638663336636137 ansible_connection: winrm ansible_winrm_transport: credssp ansible_winrm_server_cert_validation: ignore tasks: - name: Copy test win_copy: src: /etc/winmachines/hosts dest: C:\test\ Then I want to execute playbook with command: ansible-playbook copy.yaml -i hosts.ini result: PLAY [Copy needed files to target machine] ******************************************************************************************************** TASK [Gathering Facts] **************************************************************************************************************************** fatal: [10.5.17.5]: FAILED! => {"msg": "Attempting to decrypt but no vault secrets found"} to retry, use: --limit @deleted PLAY RECAP **************************************************************************************************************************************** 10.5.17.5 : ok=0 changed=0 unreachable=0 failed=1 When I run the playbook with parameter --ask-vault-password: ansible-playbook copy.yaml -i hosts.ini --ask-vault-pass PLAY [Copy needed files to target machine] ******************************************************************************************************** TASK [Gathering Facts] **************************************************************************************************************************** [WARNING]: There was a vault format error: Vault vaulttext format error: need more than 1 value to unpack fatal: [10.5.17.5]: FAILED! => {"msg": "Vault vaulttext format error: need more than 1 value to unpack"} to retry, use: --limit @deleted PLAY RECAP **************************************************************************************************************************************** 10.5.17.5 : ok=0 changed=0 unreachable=0 failed=1 I have tried to put the output from encrypting in various ways but every time it fails due to this problem or a syntax problem. When put in the way delivered from the output I'm getting a Syntax Error like this: ERROR! Syntax Error while loading YAML. could not find expected ':' The error appears to have been in : line 8, column 11, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: $ANSIBLE_VAULT;1.1;AES256 34323264326266303364656561616663306566356636616238623931613032343131643839336338 ^ here When everything is on the same line (removed "|") I got this error: TASK [Gathering Facts] **************************************************************************************************************************** fatal: [10.5.17.5]: FAILED! => {"msg": "AES256 343232643262663033646565616166633065663566366162386239316130323431316438393363383331346231343361386432666461646639386135666335380a373862336531613034393262653432313038303432636536313139353932356637343139393938666536383061383233613136643063353761386339323562640a3963306632393865613237666364386566623938356465336363613163646338 cipher could not be found"} Any ideas?
ansible, ansible-vault
11
47,442
1
https://stackoverflow.com/questions/55849706/ansible-vault-encrypt-string
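For the vault question above: ansible-vault encrypt_string emits a `!vault |` YAML literal block, and both failures are consistent with that layout being lost. Flattening the ciphertext onto one line (or dropping the `|`) breaks the vault header parsing, and running without any vault password gives "no vault secrets found". A hedged sketch of the usual layout, reusing the (truncated) ciphertext from the question:

```yaml
vars:
  ansible_user: test_admin
  ansible_password: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    30333733643939646130396638646138636338636162316536313236666334656338306634353434
    3132326265313639623039653261336265343733383730340a663565323932636138633365386332
    ...
```

With the block kept intact, the play still needs the secret at runtime, e.g. `ansible-playbook copy.yaml -i hosts.ini --ask-vault-pass`.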
52,487,396
Pass integer variable to task without losing the integer type
I've got a task (actually a role, but using a task here to make the example easier) that I don't own which does some operations on a variable. It assumes the variable is an integer. I need to somehow pass it a variable and have it come through as an int, and I'm not having any luck. Here is a super simplified version of the task that I don't own: frob.yml - name: Validate that frob_count is <= 100 fail: msg="{{frob_count}} is greater than 100" when: frob_count > 100 - name: Do real work debug: msg="We frobbed {{frob_count}} times!" My playbook is: - name: Frob some things hosts: localhost vars: things: - parameter: 1 - parameter: 2 - parameter: 45 tasks: - with_items: "{{things}}" include: frob.yml vars: frob_count: "{{item.parameter}}" No matter what, I get errors like "1 is greater than 100" from frob.yml . Looks like it's getting the var as a string instead of an integer. I've tried stuff like frob_count: "{{item.parameter | int}}" with no luck. If I could change frob.yml it'd be easy, but like I said, that's out of my control. Any thoughts? This is on Ansible 2.6.4
Pass integer variable to task without losing the integer type I've got a task (actually a role, but using a task here to make the example easier) that I don't own which does some operations on a variable. It assumes the variable is an integer. I need to somehow pass it a variable and have it come through as an int, and I'm not having any luck. Here is a super simplified version of the task that I don't own: frob.yml - name: Validate that frob_count is <= 100 fail: msg="{{frob_count}} is greater than 100" when: frob_count > 100 - name: Do real work debug: msg="We frobbed {{frob_count}} times!" My playbook is: - name: Frob some things hosts: localhost vars: things: - parameter: 1 - parameter: 2 - parameter: 45 tasks: - with_items: "{{things}}" include: frob.yml vars: frob_count: "{{item.parameter}}" No matter what, I get errors like "1 is greater than 100" from frob.yml . Looks like it's getting the var as a string instead of an integer. I've tried stuff like frob_count: "{{item.parameter | int}}" with no luck. If I could change frob.yml it'd be easy, but like I said, that's out of my control. Any thoughts? This is on Ansible 2.6.4
ansible
11
9,287
3
https://stackoverflow.com/questions/52487396/pass-integer-variable-to-task-without-losing-the-integer-type
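For the integer-type question above: every quoted `{{ ... }}` expression is rendered to a string by default, which is why the comparison in frob.yml misbehaves. One commonly cited workaround, assuming an upgrade to Ansible >= 2.7 is possible (the question mentions 2.6.4), is enabling native Jinja2 types so `{{ item.parameter }}` keeps its int type:

```ini
# ansible.cfg (sketch; requires Ansible >= 2.7)
[defaults]
jinja2_native = True
```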
43,248,578
How check if a file exists in ansible windows?
I am using Ansible on windows and I have to check whether a file exists in C:\Temp . If the file Doesn't exist then I have to skip the task. I am trying to use the win_stat module and this is what I have which isn't working: - name: Check that the ABC.txt exists win_stat: path: 'C:\ABC.txt ' - name: Create DEF.txt file if ABC.txt exists win_file: path: 'C:\DEF.txt' state: touch when: stat_file.stat.exists == True
How check if a file exists in ansible windows? I am using Ansible on windows and I have to check whether a file exists in C:\Temp . If the file Doesn't exist then I have to skip the task. I am trying to use the win_stat module and this is what I have which isn't working: - name: Check that the ABC.txt exists win_stat: path: 'C:\ABC.txt ' - name: Create DEF.txt file if ABC.txt exists win_file: path: 'C:\DEF.txt' state: touch when: stat_file.stat.exists == True
ansible
11
17,098
2
https://stackoverflow.com/questions/43248578/how-check-if-a-file-exists-in-ansible-windows
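The win_stat question above is missing only a `register:`, so `stat_file` is never defined and the `when:` cannot see it. A minimal corrected sketch (also dropping the stray trailing space in the path):

```yaml
- name: Check that the ABC.txt exists
  win_stat:
    path: C:\ABC.txt
  register: stat_file

- name: Create DEF.txt file if ABC.txt exists
  win_file:
    path: C:\DEF.txt
    state: touch
  when: stat_file.stat.exists
```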
39,698,067
Ansible: Check if my user exists on remote host, else use root user to connect with ssh
I'm currently writting an Ansible script which should update openssl on every host running Debian or CentOS. On the hosts our SSH-Keys are deposited for my own user or root. I want to check if my user is existing on the host, if not I want to authenticate with the root user. Is there a possibility to do this? I tried it with a bash command but I want to check if my user exists before I'm running the tasks. Maybe there are other solutions to my problem but I don't know them. Running this playbook throws a syntax error. My Script looks like this right now: --- - hosts: "{{ host_group }}" remote_user: "{{ username }}" tasks: # Check whether there's a existinig user or whether you have to use root - name: Check whether there's your user on the machine action: shell /usr/bin/getent passwd $username | /usr/bin/wc -l | tr -d '' register: user_exist remote_user: root when: user_exist.stdout == 0 tags: - users # Install openssl on Ubuntu or Debian - name: Install openssl on Ubuntu or Debian become: True become_user: root apt: name=openssl state=latest when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu' # Install openssl on CentOS or RHEL - name: Install openssl on CentOS or RHEL become: True become_user: root yum: name=openssl state=latest when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
Ansible: Check if my user exists on remote host, else use root user to connect with ssh I'm currently writting an Ansible script which should update openssl on every host running Debian or CentOS. On the hosts our SSH-Keys are deposited for my own user or root. I want to check if my user is existing on the host, if not I want to authenticate with the root user. Is there a possibility to do this? I tried it with a bash command but I want to check if my user exists before I'm running the tasks. Maybe there are other solutions to my problem but I don't know them. Running this playbook throws a syntax error. My Script looks like this right now: --- - hosts: "{{ host_group }}" remote_user: "{{ username }}" tasks: # Check whether there's a existinig user or whether you have to use root - name: Check whether there's your user on the machine action: shell /usr/bin/getent passwd $username | /usr/bin/wc -l | tr -d '' register: user_exist remote_user: root when: user_exist.stdout == 0 tags: - users # Install openssl on Ubuntu or Debian - name: Install openssl on Ubuntu or Debian become: True become_user: root apt: name=openssl state=latest when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu' # Install openssl on CentOS or RHEL - name: Install openssl on CentOS or RHEL become: True become_user: root yum: name=openssl state=latest when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
ssh, ansible
11
35,553
3
https://stackoverflow.com/questions/39698067/ansible-check-if-my-user-exists-on-remote-host-else-use-root-user-to-connect-w
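For the user-or-root question above: note that the shown task puts `when: user_exist.stdout == 0` on the same task that registers `user_exist`, so the condition is evaluated before the variable exists. One way around the chicken-and-egg login problem is probing from the control machine first and picking `ansible_user` from the result. A hedged sketch (the BatchMode ssh probe assumes key-based auth):

```yaml
- hosts: "{{ host_group }}"
  gather_facts: false
  tasks:
    - name: Probe whether the personal user can log in
      command: ssh -o BatchMode=yes -o ConnectTimeout=5 {{ username }}@{{ inventory_hostname }} true
      delegate_to: localhost
      register: probe
      ignore_errors: true
      changed_when: false

    - name: Fall back to root when the probe fails
      set_fact:
        ansible_user: "{{ username if probe.rc == 0 else 'root' }}"
```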
27,606,119
make ansible check if database is present on a remote host
To make sure that host A can connect to the database of the host B, I try to use mysql_db on a remote host - name: Make sure A can connect to B database mysql_db: login_user=root login_password=password login_host=B_address login_port=B_port name=B_database state=present and I get that error message even if the login/pass is right msg: unable to connect, check login_user and login_password are correct, or alternatively check ~/.my.cnf contains credentials Am i missing something? can I set login_host with a specific ansible host?
make ansible check if database is present on a remote host To make sure that host A can connect to the database of the host B, I try to use mysql_db on a remote host - name: Make sure A can connect to B database mysql_db: login_user=root login_password=password login_host=B_address login_port=B_port name=B_database state=present and I get that error message even if the login/pass is right msg: unable to connect, check login_user and login_password are correct, or alternatively check ~/.my.cnf contains credentials Am i missing something? can I set login_host with a specific ansible host?
mysql, ansible
11
26,074
7
https://stackoverflow.com/questions/27606119/make-ansible-check-if-database-is-present-on-a-remote-host
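On the mysql_db question above: `login_host` does accept a remote address, but the module executes on the managed host, so host A needs the MySQL Python bindings installed, and MySQL on B must accept root connections from A's address (root is often restricted to localhost, which would produce exactly this error despite correct credentials). A sketch with placeholder values:

```yaml
- hosts: A
  tasks:
    - name: Verify host A can reach B's database
      mysql_db:
        login_user: root
        login_password: "{{ b_root_password }}"   # placeholder
        login_host: B_address                     # placeholder, as in the question
        login_port: 3306
        name: B_database
        state: present
```

Note that `state: present` will create the database if it is missing, so a strictly read-only check may prefer running a plain query through the mysql client instead.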
49,747,576
Loop through a list of dictionaries and return another list in ansible
Let's say I have this list: myList - name: Bob age: 25 - name: Alice age: 18 - name: Bryan age: 20 All I want is to loop through myList and get a list of names and set it to a variable nameList : nameList - name: Bob - name: Alice - name: Bryan Is there a short syntax for this in ansible?
Loop through a list of dictionaries and return another list in ansible Let's say I have this list: myList - name: Bob age: 25 - name: Alice age: 18 - name: Bryan age: 20 All I want is to loop through myList and get a list of names and set it to a variable nameList : nameList - name: Bob - name: Alice - name: Bryan Is there a short syntax for this in ansible?
ansible
11
9,184
1
https://stackoverflow.com/questions/49747576/loop-through-a-list-of-dictionaries-and-return-another-list-in-ansible
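A short filter chain answers the list question above; `map(attribute=...)` yields the flat name list, and a `json_query` variant (which needs the jmespath Python library on the controller) keeps the `name:` key shape:

```yaml
- set_fact:
    nameList: "{{ myList | map(attribute='name') | list }}"
    # ['Bob', 'Alice', 'Bryan']

- set_fact:
    nameDicts: "{{ myList | json_query('[].{name: name}') }}"
    # [{'name': 'Bob'}, {'name': 'Alice'}, {'name': 'Bryan'}]
```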
36,748,711
Filtering multiple tags in Ansible dynamic inventory
I think I've seen an answer to this somewhere , but I can't seem to find it now. I'm creating a dynamic development inventory file for my EC2 instances. I'd like to group all instances tagged with Stack=Development . Moreover, I'd like to specifically identify the development API servers. Those would not only have the Stack=Development tag, but also the API=Yes tag. My basic setup uses inventory folders: <root>/development ├── base ├── ec2.ini └── ec2.py In my base file, I'd like to have something like this: [servers] tag_Stack_Development [apiservers] tag_Stack_Development && tag_API_Yes Then I'd be able to run this to ping all of my development api servers: ansible -i development -u myuser apiservers -m ping Can something like that be done? I know the syntax isn't right, but hopefully the intent is reasonably clear? I can't imagine I'm the only one who's ever needed to filter on multiple tags, but I haven't been able to find anything that gets me where I'm trying to go.
Filtering multiple tags in Ansible dynamic inventory I think I've seen an answer to this somewhere , but I can't seem to find it now. I'm creating a dynamic development inventory file for my EC2 instances. I'd like to group all instances tagged with Stack=Development . Moreover, I'd like to specifically identify the development API servers. Those would not only have the Stack=Development tag, but also the API=Yes tag. My basic setup uses inventory folders: <root>/development ├── base ├── ec2.ini └── ec2.py In my base file, I'd like to have something like this: [servers] tag_Stack_Development [apiservers] tag_Stack_Development && tag_API_Yes Then I'd be able to run this to ping all of my development api servers: ansible -i development -u myuser apiservers -m ping Can something like that be done? I know the syntax isn't right, but hopefully the intent is reasonably clear? I can't imagine I'm the only one who's ever needed to filter on multiple tags, but I haven't been able to find anything that gets me where I'm trying to go.
ansible
11
14,767
3
https://stackoverflow.com/questions/36748711/filtering-multiple-tags-in-ansible-dynamic-inventory
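For the EC2 tag question above: static group definitions can only union children, but Ansible host patterns support intersection with `:&`, which expresses "hosts in both tag groups" directly:

```yaml
# Ad hoc:  ansible -i development -u myuser 'tag_Stack_Development:&tag_API_Yes' -m ping
- hosts: "tag_Stack_Development:&tag_API_Yes"   # intersection of the two tag groups
  tasks:
    - ping:
```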
30,600,891
Manage whole crontab files in Ansible
I have a crontab containing around 80 entries on a server. And I would like to manage that crontab using Ansible. Ideally I would copy the server's crontab to my Ansible directory and create an Ansible task to ensure that crontab is set on the server. But the cron module only seems to manage individual cron entries and not whole crontab files. Manually migrating the crontab to Ansible tasks is tedious. And even if I find or make a tool that does it automatically, I feel the YAML file will be far less readable than the crontab file. Any idea how I can handle that big crontab using Ansible?
Manage whole crontab files in Ansible I have a crontab containing around 80 entries on a server. And I would like to manage that crontab using Ansible. Ideally I would copy the server's crontab to my Ansible directory and create an Ansible task to ensure that crontab is set on the server. But the cron module only seems to manage individual cron entries and not whole crontab files. Manually migrating the crontab to Ansible tasks is tedious. And even if I find or make a tool that does it automatically, I feel the YAML file will be far less readable than the crontab file. Any idea how I can handle that big crontab using Ansible?
cron, ansible
11
11,312
3
https://stackoverflow.com/questions/30600891/manage-whole-crontab-files-in-ansible
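For the crontab question above, the whole file can be shipped as-is instead of being translated into 80 `cron:` tasks: either into /etc/cron.d/ (those lines need an extra user field) or loaded with the crontab command. A hedged sketch of the latter, with hypothetical paths and user:

```yaml
- name: Upload the crontab file
  copy:
    src: files/app.crontab      # hypothetical path
    dest: /tmp/app.crontab
  register: crontab_file

- name: Install it only when the file changed
  command: crontab -u appuser /tmp/app.crontab   # hypothetical user
  when: crontab_file is changed
```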
29,214,087
If condition is true run some include yml files
I have some playbook for ubuntu and centos and I want to use main.yml to check when: ansible_os_family == 'RedHat' or ansible_distribution == 'Centos' , run playbooks ( as some ans many :-) ). When I run just: -include: centos-xxx.yml -include: centos-xaa.yml -include: centos-xsss.yml It will run all of them Basically I want that the playbook will run if meet condition. I didn't find any doc that say how to run include: more then one i am trying to not make task per include if possible.
If condition is true run some include yml files I have some playbook for ubuntu and centos and I want to use main.yml to check when: ansible_os_family == 'RedHat' or ansible_distribution == 'Centos' , run playbooks ( as some ans many :-) ). When I run just: -include: centos-xxx.yml -include: centos-xaa.yml -include: centos-xsss.yml It will run all of them Basically I want that the playbook will run if meet condition. I didn't find any doc that say how to run include: more then one i am trying to not make task per include if possible.
ansible
11
25,363
1
https://stackoverflow.com/questions/29214087/if-condition-is-true-run-some-include-yml-files
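For the conditional-include question above: attaching the same `when:` to each include works, and a `block:` (Ansible 2.0+) states the condition once for the whole group, sketched here with the newer include_tasks spelling:

```yaml
- block:
    - include_tasks: centos-xxx.yml
    - include_tasks: centos-xaa.yml
    - include_tasks: centos-xsss.yml
  when: ansible_os_family == 'RedHat'
```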
65,303,764
Ansible variable in dictionary key
I am trying to execute a playbook for setting up Dell servers and I have some problems with a dictionary in module idrac_redfish_config . I need to enable SOL for a specific user, but for this I want to use a key in the dictionary with a variable because ID of user can be different from server to server. How I try to add a variable to a dictionary key like this: - name: Store id test-user set_fact: ID: "{{ result.redfish_facts.user.entries | json_query(\"[?UserName=='test-user'].Id\") }}" - name: Enable SOL for test-user community.general.idrac_redfish_config: category: Manager command: SetManagerAttributes resource_id: iDRAC.Embedded.1 manager_attributes: Users.{{ ID[0] }}.SolEnable: "Enabled" <--- Users.{{ ID[0] }}.IpmiLanPrivilege: "Administrator" <--- baseuri: "testhost" username: "admin" password: "admin" I get this error: TASK [Store id test-user] ************************************************************************************************************************************************************************************** ok: [localhost] => { "ansible_facts": { "ID": [ "8" ] }, "changed": false } TASK [Enable SOL for test-user] ******************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! 
=> { "changed": false, "invocation": { "module_args": { "baseuri": "testhost", "category": "Manager", "command": [ "SetManagerAttributes" ], "manager_attribute_name": null, "manager_attribute_value": null, "manager_attributes": { "Users.{{ ID[0] }}.IpmiLanPrivilege": "Administrator", "Users.{{ ID[0] }}.SolEnable": "Enabled" }, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "resource_id": "iDRAC.Embedded.1", "timeout": 10, "username": "admin" } }, "msg": "SetManagerAttributes: Manager attribute Users.{{ ID[0] }}.SolEnable not found" } If I do this: manager_attributes: "{ 'Users.{{ ID[0] }}.SolEnable': Enabled 'Users.{{ ID[0] }}.IpmiLanPrivilege': Administrator }" I get: fatal: [localhost]: FAILED! => { "changed": false, "invocation": { "module_args": { "baseuri": "testhost", "category": "Manager", "command": [ "SetManagerAttributes" ], "manager_attributes": "{ 'Users.8.SolEnable': Enabled 'Users.8.IpmiLanPrivilege': Administrator }", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "resource_id": "iDRAC.Embedded.1", "timeout": 10, "username": "admin" } }, "msg": "argument manager_attributes is of type <class 'str'> and we were unable to convert to dict: unable to evaluate string as dictionary" } I didn't find in Ansible documentation how to do this correctly.
Ansible variable in dictionary key I am trying to execute a playbook for setting up Dell servers and I have some problems with a dictionary in module idrac_redfish_config . I need to enable SOL for a specific user, but for this I want to use a key in the dictionary with a variable because ID of user can be different from server to server. How I try to add a variable to a dictionary key like this: - name: Store id test-user set_fact: ID: "{{ result.redfish_facts.user.entries | json_query(\"[?UserName=='test-user'].Id\") }}" - name: Enable SOL for test-user community.general.idrac_redfish_config: category: Manager command: SetManagerAttributes resource_id: iDRAC.Embedded.1 manager_attributes: Users.{{ ID[0] }}.SolEnable: "Enabled" <--- Users.{{ ID[0] }}.IpmiLanPrivilege: "Administrator" <--- baseuri: "testhost" username: "admin" password: "admin" I get this error: TASK [Store id test-user] ************************************************************************************************************************************************************************************** ok: [localhost] => { "ansible_facts": { "ID": [ "8" ] }, "changed": false } TASK [Enable SOL for test-user] ******************************************************************************************************************************************************************************** fatal: [localhost]: FAILED! 
=> { "changed": false, "invocation": { "module_args": { "baseuri": "testhost", "category": "Manager", "command": [ "SetManagerAttributes" ], "manager_attribute_name": null, "manager_attribute_value": null, "manager_attributes": { "Users.{{ ID[0] }}.IpmiLanPrivilege": "Administrator", "Users.{{ ID[0] }}.SolEnable": "Enabled" }, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "resource_id": "iDRAC.Embedded.1", "timeout": 10, "username": "admin" } }, "msg": "SetManagerAttributes: Manager attribute Users.{{ ID[0] }}.SolEnable not found" } If I do this: manager_attributes: "{ 'Users.{{ ID[0] }}.SolEnable': Enabled 'Users.{{ ID[0] }}.IpmiLanPrivilege': Administrator }" I get: fatal: [localhost]: FAILED! => { "changed": false, "invocation": { "module_args": { "baseuri": "testhost", "category": "Manager", "command": [ "SetManagerAttributes" ], "manager_attributes": "{ 'Users.8.SolEnable': Enabled 'Users.8.IpmiLanPrivilege': Administrator }", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "resource_id": "iDRAC.Embedded.1", "timeout": 10, "username": "admin" } }, "msg": "argument manager_attributes is of type <class 'str'> and we were unable to convert to dict: unable to evaluate string as dictionary" } I didn't find in Ansible documentation how to do this correctly.
ansible
11
20,782
1
https://stackoverflow.com/questions/65303764/ansible-variable-in-dictionary-key
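In the dictionary-key question above, YAML mapping keys are never templated (`Users.{{ ID[0] }}.SolEnable` reaches the module literally), and the quoted-string attempt fails because it is not valid dict syntax. Building the whole mapping in one Jinja expression avoids both problems:

```yaml
manager_attributes: "{{ {
    'Users.' ~ ID[0] ~ '.SolEnable': 'Enabled',
    'Users.' ~ ID[0] ~ '.IpmiLanPrivilege': 'Administrator'
  } }}"
```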
52,973,639
Force limit parameter to be set in ansible
Is there a way to force commands ansible-playbook , ansible-variable , etc... to be executed with a --limit option (otherwise to deny it) ? I discovered that, on a cluster it can easily run a playbook to all nodes if you mistakenly run it without limit, I'd like to prevent it from ansible users.
Force limit parameter to be set in ansible Is there a way to force commands ansible-playbook , ansible-variable , etc... to be executed with a --limit option (otherwise to deny it) ? I discovered that, on a cluster it can easily run a playbook to all nodes if you mistakenly run it without limit, I'd like to prevent it from ansible users.
ansible
11
7,330
4
https://stackoverflow.com/questions/52973639/force-limit-parameter-to-be-set-in-ansible
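There is no built-in switch that makes --limit mandatory for the question above, but a guard task on the `ansible_limit` magic variable (available in recent Ansible releases; worth verifying for your version) can abort unrestricted runs:

```yaml
- hosts: all
  gather_facts: false
  tasks:
    - name: Refuse to run without an explicit --limit
      fail:
        msg: "This playbook must be run with --limit."
      when: ansible_limit is not defined
      run_once: true
```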
41,565,614
Ansible - how to register output from "FIND" module and use in other
I need to find a files in unknown directory place and remove them. Tried to use "find" module, register its output, and pass it to "file". Even if I see path registered, I can not use it later: < TASK [print find_result] > ok: [1.2.3.4] => { "find_result": { "changed": false, "examined": 3119, "files": [ { "atime": 1483973253.7295375, ... "mode": "0600", "mtime": 1483973253.7295375, "nlink": 1, "path": "/tmp/delme", My playbook is: - hosts: "{{ target }}" become: no vars: find_what: "delme*" find_where: "/tmp" tasks: - name: finding files find: paths: "{{ find_where }}" patterns: "{{ find_what }}" recurse: "yes" file_type: "file" register: find_result # \/ for debugging - name: print find_result debug: var=find_result - name: remove files file: path= "{{ item.path }}" state=absent with_items: "{{ find_result.files }}"
Ansible - how to register output from &quot;FIND&quot; module and use in other I need to find a files in unknown directory place and remove them. Tried to use "find" module, register its output, and pass it to "file". Even if I see path registered, I can not use it later: < TASK [print find_result] > ok: [1.2.3.4] => { "find_result": { "changed": false, "examined": 3119, "files": [ { "atime": 1483973253.7295375, ... "mode": "0600", "mtime": 1483973253.7295375, "nlink": 1, "path": "/tmp/delme", My playbook is: - hosts: "{{ target }}" become: no vars: find_what: "delme*" find_where: "/tmp" tasks: - name: finding files find: paths: "{{ find_where }}" patterns: "{{ find_what }}" recurse: "yes" file_type: "file" register: find_result # \/ for debugging - name: print find_result debug: var=find_result - name: remove files file: path= "{{ item.path }}" state=absent with_items: "{{ find_result.files }}"
ansible
11
51,441
2
https://stackoverflow.com/questions/41565614/ansible-how-to-register-output-from-find-module-and-use-in-other
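The likely failure in the find/file question above is the stray space in `path= "{{ item.path }}"`, which leaves the path argument empty; YAML-style module arguments avoid that class of typo:

```yaml
- name: remove files
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ find_result.files }}"
```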
37,348,655
Ansible stop playbook if file present
Is it possible to stop a playbook during his execution if a define file is present on my node and also output to explain why the playbook has stopped? It is to prevent an accidental re-execution of my playbook on a node that has my application already installed because I generate a password during this install and I don't want to reinitialise this password.
Ansible stop playbook if file present Is it possible to stop a playbook during his execution if a define file is present on my node and also output to explain why the playbook has stopped? It is to prevent an accidental re-execution of my playbook on a node that has my application already installed because I generate a password during this install and I don't want to reinitialise this password.
ansible
11
15,997
1
https://stackoverflow.com/questions/37348655/ansible-stop-playbook-if-file-present
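A `stat` plus `fail` pair early in the play covers the stop-if-present question above; the marker path below is hypothetical:

```yaml
- name: Check for the install marker
  stat:
    path: /etc/myapp/.installed   # hypothetical flag file
  register: marker

- name: Abort with an explanation when already installed
  fail:
    msg: "Application already installed on this node - refusing to regenerate the password."
  when: marker.stat.exists
```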
30,173,713
Rename roles/rolename/tasks/main.yml to rolename.yml in Ansible
By default, Ansible looks for the tasks for a role in a main.yml. I have too many main.yml files and I'd like to rename this to rolename.yml or something that is more unique. How can I change Ansible's default behavior to use rolename.yml instead of tasks/main.yml?
Rename roles/rolename/tasks/main.yml to rolename.yml in Ansible By default, Ansible looks for the tasks for a role in a main.yml. I have too many main.yml files and I'd like to rename this to rolename.yml or something that is more unique. How can I change Ansible's default behavior to use rolename.yml instead of tasks/main.yml?
ansible
11
3,894
3
https://stackoverflow.com/questions/30173713/rename-roles-rolename-tasks-main-yml-to-rolename-yml-in-ansible
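The role entry point name in the question above is fixed by Ansible, but main.yml can shrink to a one-line dispatcher so the real tasks live in a uniquely named file:

```yaml
# roles/rolename/tasks/main.yml
- include_tasks: rolename.yml
```

In newer Ansible, `include_role` with `tasks_from: rolename.yml` can also enter the role at that file directly.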
54,421,688
Ansible variables wildcard selection
The only way I've found to select variables by a wildcard is to loop all variables and test match . For example tasks: - debug: var: item loop: "{{ query('dict', hostvars[inventory_hostname]) }}" when: item.key is match("^.*_python_.*$") shell> ansible-playbook test.yml | grep key: key: ansible_python_interpreter key: ansible_python_version key: ansible_selinux_python_present Is there a more efficient way to do it? Neither json_query([?key=='name']) , nor lookup('vars', 'name') work with wildcards. Is there any other "wildcard-enabled" test, filter ...? Note: regex_search is discussed in What is the syntax within the regex_search() to match against a variable?
Ansible variables wildcard selection The only way I've found to select variables by a wildcard is to loop all variables and test match . For example tasks: - debug: var: item loop: "{{ query('dict', hostvars[inventory_hostname]) }}" when: item.key is match("^.*_python_.*$") shell> ansible-playbook test.yml | grep key: key: ansible_python_interpreter key: ansible_python_version key: ansible_selinux_python_present Is there a more efficient way to do it? Neither json_query([?key=='name']) , nor lookup('vars', 'name') work with wildcards. Is there any other "wildcard-enabled" test, filter ...? Note: regex_search is discussed in What is the syntax within the regex_search() to match against a variable?
ansible
11
10,985
2
https://stackoverflow.com/questions/54421688/ansible-variables-wildcard-selection
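A filter-chain variant of the wildcard selection above, using `dict2items` plus `selectattr`, replaces the per-item loop with a single expression (whether it is meaningfully faster is left open):

```yaml
- debug:
    msg: "{{ hostvars[inventory_hostname] | dict2items
             | selectattr('key', 'match', '^.*_python_.*$')
             | list }}"
```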
21,627,664
Ansible with_fileglob is skipping
I am putting together an Ansible Playbook designed to build webservers. However I am stuck when trying to use with_fileglob because Ansible keeps reporting that it's skipping the copy of nginx vhost files. My script looks like this: - name: Nginx | Copy vhost files copy: src={{ item }} dest=/etc/nginx/sites-available owner=root group=root mode=600 with_fileglob: - "{{ templates_dir }}/nginx/sites-available/*" notify - nginx-restart: {{ templates }} has been defined elsewhere as roles/common/templates . In this directory I have a file called webserver1 that I'm hoping Ansible will copy into /etc/nginx/sites-available/ I have found other people discussing this issue but no responses have helped me solve this problem. Why would Ansible be skipping files? Edit: I should point out that I want to use with_fileglob (rather than straight copy) as I want to iterate over other virtual hosts in the future.
Ansible with_fileglob is skipping I am putting together an Ansible Playbook designed to build webservers. However I am stuck when trying to use with_fileglob because Ansible keeps reporting that it's skipping the copy of nginx vhost files. My script looks like this: - name: Nginx | Copy vhost files copy: src={{ item }} dest=/etc/nginx/sites-available owner=root group=root mode=600 with_fileglob: - "{{ templates_dir }}/nginx/sites-available/*" notify - nginx-restart: {{ templates }} has been defined elsewhere as roles/common/templates . In this directory I have a file called webserver1 that I'm hoping Ansible will copy into /etc/nginx/sites-available/ I have found other people discussing this issue but no responses have helped me solve this problem. Why would Ansible be skipping files? Edit: I should point out that I want to use with_fileglob (rather than straight copy) as I want to iterate over other virtual hosts in the future.
ansible
11
17,016
1
https://stackoverflow.com/questions/21627664/ansible-with-fileglob-is-skipping
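Two things plausibly cause the skipping above: `with_fileglob` expands on the control machine (for a role, relative patterns resolve against its files/ directory, so the `templates_dir` path must match something there), and `notify - nginx-restart:` is inverted YAML. A corrected sketch, assuming the vhosts sit under the role's templates directory:

```yaml
- name: Nginx | Copy vhost files
  copy:
    src: "{{ item }}"
    dest: /etc/nginx/sites-available/
    owner: root
    group: root
    mode: "0600"
  with_fileglob:
    - ../templates/nginx/sites-available/*
  notify: nginx-restart
```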
35,560,237
How can I configure Ansible to output errors without replaced newlines?
It seems that Ansible returns results using JSON format, replacing the newlines with \n in the output. This makes it very hard to read the output on the screen / in logs. How can I configure it to use real newlines?
How can I configure Ansible to output errors without replaced newlines? It seems that Ansible returns results using JSON format, replacing the newlines with \n in the output. This makes it very hard to read the output on the screen / in logs. How can I configure it to use real newlines?
ansible
11
2,537
2
https://stackoverflow.com/questions/35560237/how-can-i-configure-ansible-to-output-errors-without-replaced-newlines
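For the output question above, switching the stdout callback makes Ansible print multi-line module output verbatim instead of JSON-escaped; `debug` (or `yaml`) are the usual choices:

```ini
# ansible.cfg
[defaults]
stdout_callback = debug
```

The same can be set per-run with the ANSIBLE_STDOUT_CALLBACK environment variable.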
44,343,146
Ansible find object in list by value of object field
I have this structure. For each host this structure may have more or less items. In a task I want to know if there is a a module defined with a particular name. --- web_module_list: - module_name: LaunchPad module_version: 1.4.0 - module_name: Manager module_version: 1.6.0 - module_name: NetworkInventory module_version: 1.1.4 - module_name: Reporting module_version: 1.0.18 - module_name: TriadJ module_version: 4.1.0-1.1.7 For instance I want to know if the module_name Reporting is defined so that I include a set of tasks for it. - set_fact: reporting: if the web_module_list contains an item with module_name Reporting then true else false woprinting: if the web_module_list contains an item with module_name WorkOrderPrinting then true else false - name: If the reporting module is listed in inventory then execute its tasks include: reporting.yml when: reporting - name: If the work order printing module is listed in inventory then execute its tasks include: woprinting.yml when: woprinting How do I get this to work? Is there a better way?
Ansible find object in list by value of object field I have this structure. For each host this structure may have more or less items. In a task I want to know if there is a a module defined with a particular name. --- web_module_list: - module_name: LaunchPad module_version: 1.4.0 - module_name: Manager module_version: 1.6.0 - module_name: NetworkInventory module_version: 1.1.4 - module_name: Reporting module_version: 1.0.18 - module_name: TriadJ module_version: 4.1.0-1.1.7 For instance I want to know if the module_name Reporting is defined so that I include a set of tasks for it. - set_fact: reporting: if the web_module_list contains an item with module_name Reporting then true else false woprinting: if the web_module_list contains an item with module_name WorkOrderPrinting then true else false - name: If the reporting module is listed in inventory then execute its tasks include: reporting.yml when: reporting - name: If the work order printing module is listed in inventory then execute its tasks include: woprinting.yml when: woprinting How do I get this to work? Is there a better way?
ansible
11
14,350
1
https://stackoverflow.com/questions/44343146/ansible-find-object-in-list-by-value-of-object-field
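For reference, a sketch of one way to express the set_fact pseudo-code above with Jinja2's selectattr filter (this is not taken from the thread's answers, and the equalto test it uses needs Jinja2 >= 2.8):

```yaml
- set_fact:
    # true when any list item has module_name == 'Reporting'
    reporting: "{{ web_module_list | selectattr('module_name', 'equalto', 'Reporting') | list | length > 0 }}"
    woprinting: "{{ web_module_list | selectattr('module_name', 'equalto', 'WorkOrderPrinting') | list | length > 0 }}"

- name: If the reporting module is listed in inventory then execute its tasks
  include: reporting.yml
  when: reporting
```

On older Jinja2 without equalto, an equivalent check is `'Reporting' in (web_module_list | map(attribute='module_name') | list)`.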
56,869,949
ansible user module always shows changed
I'm struggling to properly use ansible's user module. The problem is every time I run my playbook, the users I created always show as changed, even if I have already created them. I found other people with the same issue here , though I am struggling to actually fix it based on the github thread. Probably the most helpful comment that I didn't understand 👇 I can confirm that it only looked like a bug - adding the append option to two tasks made it so that they're not always undoing the work of the other, and fixed the permanently changed trigger. I did not need to add "group:" This is what my playbook looks like: - name: Generate all users for the environment user: createhome: yes state: present # to delete name: "{{ item.user }}" groups: "{{ 'developers' if item.role == 'developer' else 'customers' }}" password: "{{ generic_password | password_hash('sha512') }}" append: yes with_items: - "{{ users }}" My intention is to have every user belong to their own private group (User Private Groups) but also have a developer belong to the developers group. With the current configuration it works, with the problem being ansible always reports the user as "changed" . I'll then add the developers group to the sudoers file; hence I'd like to add the user to the developers group. e.g.
vagrant@ubuntu-bionic:/home$ sudo su - nick $ pwd /home/nick $ touch file.txt $ ls -al -rw-rw-r-- 1 nick nick 0 Jul 3 12:06 file.txt vagrant@ubuntu-bionic:/home$ cat /etc/group | grep 'developers' developers:x:1002:nick,ldnelson,greg,alex,scott,jupyter Here is the verbose output running against vagrant locally for one of the users: changed: [192.168.33.10] => (item={'user': 'nick', 'role': 'developer', 'with_ga': False}) => { "append": true, "changed": true, "comment": "", "group": 1004, "groups": "developers", "home": "/home/nick", "invocation": { "module_args": { "append": true, "comment": null, "create_home": true, "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": null, "groups": [ "developers" ], "hidden": null, "home": null, "local": null, "login_class": null, "move_home": false, "name": "nick", "non_unique": false, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "password_lock": null, "remove": false, "seuser": null, "shell": null, "skeleton": null, "ssh_key_bits": 0, "ssh_key_comment": "ansible-generated on ubuntu-bionic", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": null, "update_password": "always" } }, "item": { "role": "developer", "user": "nick", "with_ga": false }, "move_home": false, "name": "nick", "password": "NOT_LOGGING_PASSWORD", "shell": "/bin/sh", "state": "present", "uid": 1002 } Should be unrelated, but I am adding some to the developers group as I intend to grant sudo access for certain commands.
ansible user module always shows changed I'm struggling to properly use ansible's user module. The problem is every time I run my playbook, the users I created always show as changed, even if I have already created them. I found other people with the same issue here , though I am struggling to actually fix it based on the github thread. Probably the most helpful comment that I didn't understand 👇 I can confirm that it only looked like a bug - adding the append option to two tasks made it so that they're not always undoing the work of the other, and fixed the permanently changed trigger. I did not need to add "group:" This is what my playbook looks like: - name: Generate all users for the environment user: createhome: yes state: present # to delete name: "{{ item.user }}" groups: "{{ 'developers' if item.role == 'developer' else 'customers' }}" password: "{{ generic_password | password_hash('sha512') }}" append: yes with_items: - "{{ users }}" My intention is to have every user belong to their own private group (User Private Groups) but also have a developer belong to the developers group. With the current configuration it works, with the problem being ansible always reports the user as "changed" . I'll then add the developers group to the sudoers file; hence I'd like to add the user to the developers group. e.g.
vagrant@ubuntu-bionic:/home$ sudo su - nick $ pwd /home/nick $ touch file.txt $ ls -al -rw-rw-r-- 1 nick nick 0 Jul 3 12:06 file.txt vagrant@ubuntu-bionic:/home$ cat /etc/group | grep 'developers' developers:x:1002:nick,ldnelson,greg,alex,scott,jupyter Here is the verbose output running against vagrant locally for one of the users: changed: [192.168.33.10] => (item={'user': 'nick', 'role': 'developer', 'with_ga': False}) => { "append": true, "changed": true, "comment": "", "group": 1004, "groups": "developers", "home": "/home/nick", "invocation": { "module_args": { "append": true, "comment": null, "create_home": true, "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": null, "groups": [ "developers" ], "hidden": null, "home": null, "local": null, "login_class": null, "move_home": false, "name": "nick", "non_unique": false, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "password_lock": null, "remove": false, "seuser": null, "shell": null, "skeleton": null, "ssh_key_bits": 0, "ssh_key_comment": "ansible-generated on ubuntu-bionic", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": null, "update_password": "always" } }, "item": { "role": "developer", "user": "nick", "with_ga": false }, "move_home": false, "name": "nick", "password": "NOT_LOGGING_PASSWORD", "shell": "/bin/sh", "state": "present", "uid": 1002 } Should be unrelated, but I am adding some to the developers group as I intend to grant sudo access for certain commands.
ansible
11
5,477
1
https://stackoverflow.com/questions/56869949/ansible-user-module-always-shows-changed
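One likely cause, offered as a hypothesis rather than a confirmed diagnosis: password_hash('sha512') generates a fresh random salt on every run, so the hashed password differs each time and the module reports a change. A sketch that pins the salt (the salt string here is a placeholder) and only sets the password when the user is first created:

```yaml
- name: Generate all users for the environment
  user:
    createhome: yes
    state: present
    name: "{{ item.user }}"
    groups: "{{ 'developers' if item.role == 'developer' else 'customers' }}"
    password: "{{ generic_password | password_hash('sha512', 'pinnedsalt') }}"  # fixed salt -> stable hash between runs
    update_password: on_create   # don't rewrite the password on later runs
    append: yes
  with_items: "{{ users }}"
```

Either of the two changes alone (a fixed salt, or update_password: on_create) should stop the permanent "changed" state.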
42,556,175
Ansible/jinja2: Use filter result in if condition
Is it possible to use the result of Jinja2 filters in conditions in ansible playbooks? I'm trying to get this working, but without success: {% if (item | ipv4) %}{{ item }}{% else %}{{ lookup('dig', item) }}{% endif %}} where item in my current test is set to localhost (and could be any other private or public domain). Should do: If item is an IPv4 address the address should be returned, otherwise it should be "converted" (DNS lookup with dig ) to an IPv4 address - but it is always returning the hostname. Any idea? Thanks in advance Matthias
Ansible/jinja2: Use filter result in if condition Is it possible to use the result of Jinja2 filters in conditions in ansible playbooks? I'm trying to get this working, but without success: {% if (item | ipv4) %}{{ item }}{% else %}{{ lookup('dig', item) }}{% endif %}} where item in my current test is set to localhost (and could be any other private or public domain). Should do: If item is an IPv4 address the address should be returned, otherwise it should be "converted" (DNS lookup with dig ) to an IPv4 address - but it is always returning the hostname. Any idea? Thanks in advance Matthias
ansible, ansible-template
11
11,060
1
https://stackoverflow.com/questions/42556175/ansible-jinja2-use-filter-result-in-if-condition
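Two things worth checking, offered as a sketch rather than a verified answer: the expression above ends with a stray extra } after {% endif %}, and the same logic can be written as an inline if-expression, where the ipv4 filter (part of the ipaddr filter set, which needs the Python netaddr library on the control machine) returns False for anything that is not an IPv4 address:

```yaml
- debug:
    msg: "{{ item if (item | ipv4) else lookup('dig', item) }}"
```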
36,871,527
Ansible : [DEPRECATION WARNING]: Using bare variables is deprecated
I think this is the part of the playbook that is generating the error. How should I be re-writing this part? roles: - role: json-transform json_transforms: '{{ clientValidation.json_transforms}}' It throws the following warning: [DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{json_transforms}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
Ansible : [DEPRECATION WARNING]: Using bare variables is deprecated I think this is the part of the playbook that is generating the error. How should I be re-writing this part? roles: - role: json-transform json_transforms: '{{ clientValidation.json_transforms}}' It throws the following warning: [DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{json_transforms}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible
11
9,460
1
https://stackoverflow.com/questions/36871527/ansible-deprecation-warning-using-bare-variables-is-deprecated
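The role parameter shown already uses the full {{ }} syntax; the warning text mentions "the environment value", which suggests the actual trigger is a bare variable under an environment: keyword somewhere else in the playbook. A sketch of the change the warning asks for (proxy_env is a hypothetical variable name):

```yaml
# deprecated bare form that triggers the warning:
#   environment: proxy_env
# full variable syntax the warning asks for:
environment: "{{ proxy_env }}"
```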
29,319,752
How can I add a PPA repository using Ansible?
I'm trying to add a new repository to a server so that I can install Java by Ansible. Unfortunately whenever I try to run the playbook it fails because of a GPG error. Can somebody explain what is going wrong here and what I need to do in order to fix this? I'm using Ansible 1.7.2 and currently only connecting to localhost. I have a very simple Playbook that looks like this: - hosts: home tasks: - name: Add repositories apt_repository: repo='ppa:webupd8team/java' state=present When I try to execute it, I get the following error: sal@bobnit:~/Workspace$ ansible-playbook --ask-sudo-pass basic.yml sudo password: PLAY [home] ******************************************************************* GATHERING FACTS *************************************************************** ok: [localhost] TASK: [Add repositories] ****************************************************** failed: [localhost] => {"cmd": "apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 7B2C3B0889BF5709A105D03AC2518248EEA14886", "failed": true, "rc": 2} stderr: gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com gpg: no writable keyring found: eof gpg: error reading [stream]': general error gpg: Total number processed: 0 stdout: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.HKDOSZnVQP --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/steam.gpg --keyring /etc/apt/trusted.gpg.d/ubuntu-x-swat_ubuntu_x-updates.gpg --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 7B2C3B0889BF5709A105D03AC2518248EEA14886 msg: gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com gpg: no writable keyring found: eof gpg: error reading [stream]': general error gpg: Total number processed: 0 FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit 
@/home/sal/basic.retry localhost : ok=1 changed=0 unreachable=0 failed=1
How can I add a PPA repository using Ansible? I'm trying to add a new repository to a server so that I can install Java by Ansible. Unfortunately whenever I try to run the playbook it fails because of a GPG error. Can somebody explain what is going wrong here and what I need to do in order to fix this? I'm using Ansible 1.7.2 and currently only connecting to localhost. I have a very simple Playbook that looks like this: - hosts: home tasks: - name: Add repositories apt_repository: repo='ppa:webupd8team/java' state=present When I try to execute it, I get the following error: sal@bobnit:~/Workspace$ ansible-playbook --ask-sudo-pass basic.yml sudo password: PLAY [home] ******************************************************************* GATHERING FACTS *************************************************************** ok: [localhost] TASK: [Add repositories] ****************************************************** failed: [localhost] => {"cmd": "apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 7B2C3B0889BF5709A105D03AC2518248EEA14886", "failed": true, "rc": 2} stderr: gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com gpg: no writable keyring found: eof gpg: error reading [stream]': general error gpg: Total number processed: 0 stdout: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.HKDOSZnVQP --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/steam.gpg --keyring /etc/apt/trusted.gpg.d/ubuntu-x-swat_ubuntu_x-updates.gpg --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 7B2C3B0889BF5709A105D03AC2518248EEA14886 msg: gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com gpg: no writable keyring found: eof gpg: error reading [stream]': general error gpg: Total number processed: 0 FATAL: all hosts have already failed -- aborting PLAY RECAP 
******************************************************************** to retry, use: --limit @/home/sal/basic.retry localhost : ok=1 changed=0 unreachable=0 failed=1
linux, ubuntu, ansible
11
24,317
2
https://stackoverflow.com/questions/29319752/how-can-i-add-a-ppa-repository-using-ansible
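The "gpg: no writable keyring found" error is typical of apt-key running without root privileges. A sketch with privilege escalation enabled at the play level (on Ansible 1.x such as the 1.7.2 in the question, the keyword was sudo: yes rather than become: yes, paired with --ask-sudo-pass as already used):

```yaml
- hosts: home
  become: yes          # 'sudo: yes' on Ansible 1.x
  tasks:
    - name: Add repositories
      apt_repository:
        repo: ppa:webupd8team/java
        state: present
```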
44,660,161
How to solve TemplateRuntimeError: no test named &#39;equalto&#39; in Ansible?
Ansible v2.2.1.0 I have a task that collects information over items, and I set up a register for the task. For example, I use jq to parse a JSON file, hello.json ---------- { "name" : "hello file", "english" : "hello", "spanish" : "hola", "german" : "wie gehts" } - name: parse the hello.json file shell: | jq -r '.{{ item }}' < hello.json register: hellos with_items: - english - spanish - german - debug: var=hellos The debug shows ok: [localhost] => { "hellos": { "changed": true, "msg": "All items completed", "results": [ { # snipped "item": "english", "stdout" : "hello", # snipped }, { # snipped "item": "spanish", "stdout" : "hola", # snipped }, { # snipped "item": "german", "stdout" : "wie gehts", # snipped } ] } } Now if I want to get the stdout value of the hellos register, I tried this - name: Display hello messages debug: msg="{{ hellos.results | selectattr("item","equalto",item) | map(attribute="stdout") | first }} worlld" with_items: - english - spanish - german I get An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TemplateRuntimeError: no test named 'equalto' fatal: [localhost]: FAILED! => {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""} I'm basically parsing the hellos register for the "item" and getting its "stdout" attribute in the second debug task. Where's my error?
How to solve TemplateRuntimeError: no test named &#39;equalto&#39; in Ansible? Ansible v2.2.1.0 I have a task that collects information over items, and I set up a register for the task. For example, I use jq to parse a JSON file, hello.json ---------- { "name" : "hello file", "english" : "hello", "spanish" : "hola", "german" : "wie gehts" } - name: parse the hello.json file shell: | jq -r '.{{ item }}' < hello.json register: hellos with_items: - english - spanish - german - debug: var=hellos The debug shows ok: [localhost] => { "hellos": { "changed": true, "msg": "All items completed", "results": [ { # snipped "item": "english", "stdout" : "hello", # snipped }, { # snipped "item": "spanish", "stdout" : "hola", # snipped }, { # snipped "item": "german", "stdout" : "wie gehts", # snipped } ] } } Now if I want to get the stdout value of the hellos register, I tried this - name: Display hello messages debug: msg="{{ hellos.results | selectattr("item","equalto",item) | map(attribute="stdout") | first }} worlld" with_items: - english - spanish - german I get An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TemplateRuntimeError: no test named 'equalto' fatal: [localhost]: FAILED! => {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""} I'm basically parsing the hellos register for the "item" and getting its "stdout" attribute in the second debug task. Where's my error?
ansible, ansible-2.x
11
17,790
2
https://stackoverflow.com/questions/44660161/how-to-solve-templateruntimeerror-no-test-named-equalto-in-ansible
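The equalto test was only added in Jinja2 2.8, so one fix is simply upgrading the Jinja2 library on the control machine. Alternatively, Ansible ships its own match test (a regex match anchored at the start of the string), which works on older Jinja2 — a sketch, not a verified answer from the thread:

```yaml
- name: Display hello messages
  debug:
    msg: "{{ hellos.results | selectattr('item', 'match', item) | map(attribute='stdout') | first }} world"
  with_items:
    - english
    - spanish
    - german
```

Since match is a prefix-anchored regex, this is safe here because no item is a prefix of another.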
22,395,733
How can i run ansible command if certain file changed
I am using ansible to deploy my django App using - name: Upgrade the virtualenv. pip: requirements={{project_root}}/www/requirements.txt virtualenv={{project_root}}/www/virtualenv But i only want to run that if requirements.txt changed since last run
How can i run ansible command if certain file changed I am using ansible to deploy my django App using - name: Upgrade the virtualenv. pip: requirements={{project_root}}/www/requirements.txt virtualenv={{project_root}}/www/virtualenv But i only want to run that if requirements.txt changed since last run
ansible
11
26,592
3
https://stackoverflow.com/questions/22395733/how-can-i-run-ansible-command-if-certain-file-changed
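A sketch of one common pattern, assuming requirements.txt is itself deployed by Ansible (the copy source path here is hypothetical): register the result of the file transfer and gate the pip task on its changed flag:

```yaml
- name: Sync requirements.txt to the server
  copy:
    src: requirements.txt
    dest: "{{ project_root }}/www/requirements.txt"
  register: requirements_file

- name: Upgrade the virtualenv.
  pip:
    requirements: "{{ project_root }}/www/requirements.txt"
    virtualenv: "{{ project_root }}/www/virtualenv"
  when: requirements_file.changed
```

A notify/handler pair achieves the same effect, with the upgrade deferred to the end of the play.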
58,845,530
Jinja escaping beginning of comment tag
I am trying to create a bash script in a jinja template. I have the following line: SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} But it throws an error: AnsibleError: template error while templating string: Missing end of comment tag After investigating I found out that a {% raw %}...{% endraw %} block can be used to take literals, but it still doesn't work for {# obtaining a similar error: An unhandled exception occurred while templating Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: Missing end of comment tag Is there a workaround for this without changing bash logic? Thanks UPDATE: including examples ansible playbook: ... more ansible playbook stuff ... tasks: - name: create script template: src: "src/path/to/script.sh.j2" dest: "dest/path/to/output/script.sh" - name: user data script as string set_fact: userdata: "{{ lookup('file', 'path/to/script.sh') }}" - name: some cloudformation thing cloudformation: stack_name: "stack_name" state: present region: "some region" template: "path/to/template/cloudformation.json" template_parameters: ... a bunch of params ... UserDataScript: "{{ userdata }}" tags: ... tags .. metadata: "... metadata ..." register: cloudformation_results jinja template (script.sh.j2): ... more script stuff ... SOME_ARRAY="some array with elements" SOME_ARRAY=($SOME_ARRAY) SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} ... more script stuff ... THE PROBLEM: This line: SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} caused in the first template task to ask for a closing #} comment tag. First fix adding raw block: {% raw %} SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} {% endraw %} Fixes the templating part, but the cloudformation module also applies templating substitutions so it would then have the same error. THE FIX: {% raw %} SOME_ARRAY_COUNT=${{ '{#' }}SOME_ARRAY[@]} {% endraw %} First template module removes raw block and keeps {{ '{$' }} and cloudformation module finds {{ '{$' }} and applies literal substitution.
Jinja escaping beginning of comment tag I am trying to create a bash script in a jinja template. I have the following line: SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} But it throws an error: AnsibleError: template error while templating string: Missing end of comment tag After investigating I found out that a {% raw %}...{% endraw %} block can be used to take literals, but it still doesn't work for {# obtaining a similar error: An unhandled exception occurred while templating Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: Missing end of comment tag Is there a workaround for this without changing bash logic? Thanks UPDATE: including examples ansible playbook: ... more ansible playbook stuff ... tasks: - name: create script template: src: "src/path/to/script.sh.j2" dest: "dest/path/to/output/script.sh" - name: user data script as string set_fact: userdata: "{{ lookup('file', 'path/to/script.sh') }}" - name: some cloudformation thing cloudformation: stack_name: "stack_name" state: present region: "some region" template: "path/to/template/cloudformation.json" template_parameters: ... a bunch of params ... UserDataScript: "{{ userdata }}" tags: ... tags .. metadata: "... metadata ..." register: cloudformation_results jinja template (script.sh.j2): ... more script stuff ... SOME_ARRAY="some array with elements" SOME_ARRAY=($SOME_ARRAY) SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} ... more script stuff ... THE PROBLEM: This line: SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} caused in the first template task to ask for a closing #} comment tag. First fix adding raw block: {% raw %} SOME_ARRAY_COUNT=${#SOME_ARRAY[@]} {% endraw %} Fixes the templating part, but the cloudformation module also applies templating substitutions so it would then have the same error. 
THE FIX: {% raw %} SOME_ARRAY_COUNT=${{ '{#' }}SOME_ARRAY[@]} {% endraw %} First template module removes raw block and keeps {{ '{$' }} and cloudformation module finds {{ '{$' }} and applies literal substitution.
ansible, jinja2, template-engine, templating
11
10,214
1
https://stackoverflow.com/questions/58845530/jinja-escaping-beginning-of-comment-tag
35,176,944
Get the pid of a running playbook for use within the playbook
When we run a playbook, with verbose output enabled, in the ansible logs we can see something like this: 2016-02-03 12:51:58,235 p=4105 u=root | PLAY RECAP I guess that the p=4105 is the pid of the playbook when it ran. Is there a way to get this pid inside the playbook during its runtime (as a variable for example)?
Get the pid of a running playbook for use within the playbook When we run a playbook, with verbose output enabled, in the ansible logs we can see something like this: 2016-02-03 12:51:58,235 p=4105 u=root | PLAY RECAP I guess that the p=4105 is the pid of the playbook when it ran. Is there a way to get this pid inside the playbook during its runtime (as a variable for example)?
linux, ansible, pid
11
5,815
6
https://stackoverflow.com/questions/35176944/get-the-pid-of-a-running-playbook-for-use-within-the-playbook
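One commonly suggested trick, offered as a sketch: the pipe lookup spawns a shell on the control machine as a child of the ansible-playbook process, so that shell's $PPID is the pid that appears as p= in the log:

```yaml
- name: Capture the ansible-playbook pid
  set_fact:
    playbook_pid: "{{ lookup('pipe', 'echo $PPID') }}"
```

Note this only holds on the control machine; lookups never run on the managed hosts.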
56,154,791
Ansible: append a string to an existing line in a file
I am using an Ansible module to edit the manifest file for kube-apiserver - --feature-gates=AdvancedAuditing=true I want to append a new feature-gate like - --feature-gates=AdvancedAuditing=true,TTLAfterFinished=true I tried many things, one of which - - name: append TTLAfterFinished to existing list of feature-gates lineinfile: path: item.0.item.file_path backrefs: yes regexp: "^(.*feature-gates.*)$" line: '\1,TTLAfterFinished=true' With no luck... :( Any help?
Ansible: append a string to an existing line in a file I am using an Ansible module to edit the manifest file for kube-apiserver - --feature-gates=AdvancedAuditing=true I want to append a new feature-gate like - --feature-gates=AdvancedAuditing=true,TTLAfterFinished=true I tried many things, one of which - - name: append TTLAfterFinished to existing list of feature-gates lineinfile: path: item.0.item.file_path backrefs: yes regexp: "^(.*feature-gates.*)$" line: '\1,TTLAfterFinished=true' With no luck... :( Any help?
ansible, yaml
11
21,308
2
https://stackoverflow.com/questions/56154791/ansible-append-a-string-to-an-existing-line-in-a-file
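Two plausible issues with the task as written, offered as a sketch rather than a confirmed fix: path needs Jinja braces (as shown it is the literal string item.0.item.file_path), and the regexp matches again on every run, re-appending the suffix; a negative lookahead keeps the task idempotent:

```yaml
- name: append TTLAfterFinished to existing list of feature-gates
  lineinfile:
    path: "{{ item.0.item.file_path }}"                     # braces, not a literal string
    backrefs: yes
    regexp: '^((?!.*TTLAfterFinished).*feature-gates.*)$'   # skip lines already patched
    line: '\1,TTLAfterFinished=true'
```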
50,009,948
GAE : How to deploy various environments with secrets?
I have a staging and a production project on App Engine, with 6 services on each. For the moment, we deploy from the developers computer, using gcloud app deploy app.staging.yaml --project staging-project or gcloud app deploy app.production.yaml --project production-project It works but it causes issues with environment variables, and especially with secrets. Our apps gets their Api keys, database credentials and other things from the environment variables - that allows us to have the exact same app running locally, in a Docker Container or in App Engine without knowing where it is deployed. If I follow the documentation way to deploy, our app.yaml files would look like this: app.production.yaml runtime: nodejs env: flex manual_scaling: instances: 1 env_variables: DATABASE_PASSWORD: "topsecret" MY_API_KEY: "ultrasecret" I think everybody easily understands why it's a bad idea to store that on a Git repository. For the moment, we have this shadow file that every developer have to fill before deployment app.production.yaml.shadow runtime: nodejs env: flex manual_scaling: instances: 1 env_variables: DATABASE_PASSWORD: "set me" MY_API_KEY: "set me" But as the team grows and we want everybody to be able to deploy on staging, it becomes more and more difficult to have the right settings for each developpers and each service. I found out 3 workarounds, and their reason to not be used: Use Google KMS - allows us to put encrypted secrets directly into the project, but it requires us to put custom code in our apps to decrypt them. It creates a different environment management between local, staging and production. It increases the risk of bugs due to the complexity. Store secrets in Google Datastore - I tried it, I created a helper that searches env vars in proccess.ENV, then in cache and ultimately in Datastore. But like KMS, it increases complexity a lot. 
Store secrets in a JSON file and put in on Google Cloud Storage : again, it requires to load env variables through an helper that checks env vars, then loads the file etc... Ultimately, we are exploring the feasibility to use a deployment server that would be triggered by developers or Continuous Integration and handle all the secrets injection when deploying to App Engine. But these tools like Ansible , Salt, Puppet, Chef only have plugins for Compute Engine and do not support App Engine. +-------------------------+ +-------------------+ +---------------------+ | | | +---> | | Developer workspace | | Ansible | | App Engine STAGING | | +----> (or other) | | | +-------------------------+ | | +---------------------+ | | +-------------------------+ | | +---------------------+ | +----> Injects secrets | | | | Continous Integration | | | App Engine PROD. | | | | +---> | +-------------------------+ +-------------------+ +---------------------+ This leads me to 3 questions: Do you think using a deployment server with App Engine is a good idea ? How can I make sure that production and staging secrets keep synchronisation so that deployments from developers are always right ? Is there a way to use classical environment variables for secrets on App Engine ?
GAE : How to deploy various environments with secrets? I have a staging and a production project on App Engine, with 6 services on each. For the moment, we deploy from the developers computer, using gcloud app deploy app.staging.yaml --project staging-project or gcloud app deploy app.production.yaml --project production-project It works but it causes issues with environment variables, and especially with secrets. Our apps gets their Api keys, database credentials and other things from the environment variables - that allows us to have the exact same app running locally, in a Docker Container or in App Engine without knowing where it is deployed. If I follow the documentation way to deploy, our app.yaml files would look like this: app.production.yaml runtime: nodejs env: flex manual_scaling: instances: 1 env_variables: DATABASE_PASSWORD: "topsecret" MY_API_KEY: "ultrasecret" I think everybody easily understands why it's a bad idea to store that on a Git repository. For the moment, we have this shadow file that every developer have to fill before deployment app.production.yaml.shadow runtime: nodejs env: flex manual_scaling: instances: 1 env_variables: DATABASE_PASSWORD: "set me" MY_API_KEY: "set me" But as the team grows and we want everybody to be able to deploy on staging, it becomes more and more difficult to have the right settings for each developpers and each service. I found out 3 workarounds, and their reason to not be used: Use Google KMS - allows us to put encrypted secrets directly into the project, but it requires us to put custom code in our apps to decrypt them. It creates a different environment management between local, staging and production. It increases the risk of bugs due to the complexity. Store secrets in Google Datastore - I tried it, I created a helper that searches env vars in proccess.ENV, then in cache and ultimately in Datastore. But like KMS, it increases complexity a lot. 
Store secrets in a JSON file and put in on Google Cloud Storage : again, it requires to load env variables through an helper that checks env vars, then loads the file etc... Ultimately, we are exploring the feasibility to use a deployment server that would be triggered by developers or Continuous Integration and handle all the secrets injection when deploying to App Engine. But these tools like Ansible , Salt, Puppet, Chef only have plugins for Compute Engine and do not support App Engine. +-------------------------+ +-------------------+ +---------------------+ | | | +---> | | Developer workspace | | Ansible | | App Engine STAGING | | +----> (or other) | | | +-------------------------+ | | +---------------------+ | | +-------------------------+ | | +---------------------+ | +----> Injects secrets | | | | Continous Integration | | | App Engine PROD. | | | | +---> | +-------------------------+ +-------------------+ +---------------------+ This leads me to 3 questions: Do you think using a deployment server with App Engine is a good idea ? How can I make sure that production and staging secrets keep synchronisation so that deployments from developers are always right ? Is there a way to use classical environment variables for secrets on App Engine ?
google-app-engine, ansible
11
3,826
3
https://stackoverflow.com/questions/50009948/gae-how-to-deploy-various-environments-with-secrets
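Not a full answer to the three questions, but the "deployment server" idea can be sketched with plain Ansible primitives even without an App Engine module, by templating app.yaml at deploy time and shelling out to gcloud (all file and variable names here are hypothetical):

```yaml
- hosts: localhost
  vars_files:
    - secrets.vault.yml                 # encrypted with ansible-vault, safe to commit
  tasks:
    - name: Render app.yaml with secrets injected
      template:
        src: app.production.yaml.j2     # env_variables reference the vaulted vars
        dest: app.production.yaml

    - name: Deploy to App Engine
      command: gcloud app deploy app.production.yaml --project production-project --quiet
```

This keeps only encrypted values in git and makes staging vs. production a matter of which vault file and project name the playbook is given.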
34,260,658
How do I force-reinstall a package with Ansible?
I'm using Ansible to deploy .deb packages from a custom repository. Sometimes a developer can forget to change the package number, so the repository will have the new package with the old version. This is unnecessary, so I would like to always reinstall the package. How do I do that? There is the force=yes option for apt module . Ansible documentation says: If yes , force installs/removes. But that seems to be about force-accepting any warnings. At least when I turn it off, Ansible gets blocked with a warning about an untrusted source. (Both the repository and the servers are in the same intranet, so that should not be an issue) I could use this: - name: force-reinstall myservice shell: apt-get --reinstall install myservice But this way I cannot use other options for apt module , and Ansible gets blocked on warnings the same way. Is there a way to always reinstall a package and avoid blocking on any interactivity?
How do I force-reinstall a package with Ansible? I'm using Ansible to deploy .deb packages from a custom repository. Sometimes a developer can forget to change the package number, so the repository will have the new package with the old version. This is unnecessary, so I would like to always reinstall the package. How do I do that? There is the force=yes option for apt module . Ansible documentation says: If yes , force installs/removes. But that seems to be about force-accepting any warnings. At least when I turn it off, Ansible gets blocked with a warning about an untrusted source. (Both the repository and the servers are in the same intranet, so that should not be an issue) I could use this: - name: force-reinstall myservice shell: apt-get --reinstall install myservice But this way I cannot use other options for apt module , and Ansible gets blocked on warnings the same way. Is there a way to always reinstall a package and avoid blocking on any interactivity?
deployment, ansible, apt, deb
11
28,357
2
https://stackoverflow.com/questions/34260658/how-do-i-force-reinstall-a-package-with-ansible
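For the interactivity problem specifically, a sketch of the shell route made non-interactive (this keeps the --reinstall behaviour the apt module lacks; --force-yes answers the untrusted-source prompt, which is only reasonable inside a trusted intranet like the one described):

```yaml
- name: force-reinstall myservice
  command: apt-get --yes --force-yes --reinstall install myservice
  environment:
    DEBIAN_FRONTEND: noninteractive   # suppress debconf prompts
```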
21,172,312
How to loop over array containing template variables with ansible?
I'm setting up an automated provisioning process for a webserver using Ansible. For this, I have an array containing dictionaries with vhosts to setup: vhosts: - name: 'vhost1' server_name: 'domain1.com' - name: 'vhost2' server_name: 'domain2.com' I prepared a template with some generic nginx vhost configuration: server { listen 80; server_name {{ item.server_name }}; root /home/www/{{ item.name }}/htdocs; index index.php; location / { try_files $uri $uri/ /index.php?$args; } } Finally, I use the following task to copy a prepared template to the target host: - name: Setup vhosts template: src=vhost.j2 dest=/etc/nginx/sites-available/{{ item.name }} with_items: vhosts The tasks iterates over the vhost variable as expected. Unfortunately, Ansible does not pass the current item from the iterator to the template, instead the template has access to all currently valid variables. Is there any way to pass the current item from the iterator to the template?
How to loop over array containing template variables with ansible? I'm setting up an automated provisioning process for a webserver using Ansible. For this, I have an array containing dictionaries with vhosts to set up: vhosts: - name: 'vhost1' server_name: 'domain1.com' - name: 'vhost2' server_name: 'domain2.com' I prepared a template with some generic nginx vhost configuration: server { listen 80; server_name {{ item.server_name }}; root /home/www/{{ item.name }}/htdocs; index index.php; location / { try_files $uri $uri/ /index.php?$args; } } Finally, I use the following task to copy a prepared template to the target host: - name: Setup vhosts template: src=vhost.j2 dest=/etc/nginx/sites-available/{{ item.name }} with_items: vhosts The task iterates over the vhosts variable as expected. Unfortunately, Ansible does not pass the current item from the iterator to the template; instead, the template has access to all currently valid variables. Is there any way to pass the current item from the iterator to the template?
templates, nginx, ansible
11
7,396
1
https://stackoverflow.com/questions/21172312/how-to-loop-over-array-containing-template-variables-with-ansible
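For reference, the loop does bind the current element to the item variable inside the template; with current Ansible syntax the list just needs Jinja quoting. A sketch using the question's own names:

```yaml
# Sketch: each iteration renders vhost.j2 with "item" bound to the
# current vhost dict, so {{ item.server_name }} and {{ item.name }}
# in the template resolve per vhost.
- name: Setup vhosts
  template:
    src: vhost.j2
    dest: "/etc/nginx/sites-available/{{ item.name }}"
  loop: "{{ vhosts }}"
```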
65,034,863
Updating an Ansible role which has been installed with ansible-galaxy from a GitHub repo
Whenever I change code in my Ansible role in a repository, I also want to have that code updated in the roles directory on my test machine. What I do to obtain the new code is to remove the role by running ansible-galaxy remove rolename install the role again by using ansible-galaxy install git+[URL] If I do not use the remove option before the install, ansible-galaxy just skips the role as it's already installed. It does not see the changed files in the repo as such. What is the best way to achieve this?
Updating an Ansible role which has been installed with ansible-galaxy from a GitHub repo Whenever I change code in my Ansible role in a repository, I also want to have that code updated in the roles directory on my test machine. What I do to obtain the new code is to remove the role by running ansible-galaxy remove rolename install the role again by using ansible-galaxy install git+[URL] If I do not use the remove option before the install, ansible-galaxy just skips the role as it's already installed. It does not see the changed files in the repo as such. What is the best way to achieve this?
github, ansible, roles, ansible-galaxy
11
10,556
1
https://stackoverflow.com/questions/65034863/updating-a-ansible-role-which-has-been-installed-with-ansible-galaxy-from-a-gith
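For reference, ansible-galaxy install accepts a --force flag that overwrites an already-installed role, which removes the need for the separate remove step. A sketch with a placeholder repository URL (the question's real URL is elided):

```yaml
# requirements.yml - sketch; the GitHub URL below is a placeholder.
- src: git+https://github.com/example-user/rolename.git
  version: master
  name: rolename
```

Refreshing the role then becomes `ansible-galaxy install -r requirements.yml --force`, which reinstalls the role even when a copy is already present in the roles directory.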
39,473,719
Puppet hiera equivalent in Ansible
hiera.yaml --- :hierarchy: - node/%{host_fqdn} - site_config/%{host_site_name} - site_config/perf_%{host_performance_class} - site_config/%{host_type}_v%{host_type_version} - site/%{host_site_name} - environments/%{site_environment} - types/%{host_type}_v%{host_type_version} - hosts - sites - users - common # options are native, deep, deeper :merge_behavior: deeper We currently have this hiera config. So the config gets merged in the following sequence common.yaml > users.yaml > sites.yaml > hosts.yaml > types/xxx_vxxx.yaml > etc. For the variable top hierarchies, it gets overwritten only if that file exists. eg: common.yaml server: instance_type: m3.medium site_config/mysite.yaml server: instance_type: m4.large So for all other sites, the instance type will be m3.medium, but only for mysite it will be m4.large. How can I achieve the same in Ansible?
Puppet hiera equivalent in Ansible hiera.yaml --- :hierarchy: - node/%{host_fqdn} - site_config/%{host_site_name} - site_config/perf_%{host_performance_class} - site_config/%{host_type}_v%{host_type_version} - site/%{host_site_name} - environments/%{site_environment} - types/%{host_type}_v%{host_type_version} - hosts - sites - users - common # options are native, deep, deeper :merge_behavior: deeper We currently have this hiera config. So the config gets merged in the following sequence common.yaml > users.yaml > sites.yaml > hosts.yaml > types/xxx_vxxx.yaml > etc. For the variable top hierarchies, it gets overwritten only if that file exists. eg: common.yaml server: instance_type: m3.medium site_config/mysite.yaml server: instance_type: m4.large So for all other sites, the instance type will be m3.medium, but only for mysite it will be m4.large. How can I achieve the same in Ansible?
puppet, ansible, hiera
11
6,940
3
https://stackoverflow.com/questions/39473719/puppet-hiera-equivalent-in-ansible
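For reference, the rough Ansible equivalent is group_vars layering, where more specific inventory groups override less specific ones. One caveat: by default Ansible replaces a hash wholesale rather than deep-merging it, so hiera's "deeper" behavior needs either the hash_behaviour = merge setting in ansible.cfg or the combine filter. A sketch mirroring the question's example (file names and the "mysite" group are illustrative):

```yaml
# group_vars/all.yml - analogous to common.yaml
server:
  instance_type: m3.medium
---
# group_vars/mysite.yml - analogous to site_config/mysite.yaml;
# only hosts in the "mysite" inventory group see this override,
# so every other site keeps m3.medium.
server:
  instance_type: m4.large
```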
37,646,581
Can Ansible unarchive be made to write static folder modification times?
I am writing a build process for a WordPress installation using Ansible. It doesn't have a application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button. Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same. Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped. It may be useful to know that this is a project build script, with a connection of local . I guess therefore that means that SSH is not being used. Here is a snippet of my playbook: - name: Install the W3 Total Cache plugin unarchive: > src=[URL] dest=wp-content/plugins copy=no - name: Install the WP DB Manager plugin unarchive: > src=[URL] dest=wp-content/plugins copy=no # @todo Since this has internal sub-folders, need to work out # how to preserve timestamps of the original folders rather than # re-writing them, which forces Ansible to record a change of # server state. - name: Install the WordPress HTTPS plugin unarchive: > src=[URL] dest=wp-content/plugins copy=no One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency). Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy. 
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible? Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps: #!/bin/bash # Save pwd and then change dir to root location STARTDIR=pwd cd dirname $0/../.. # Clear collation file echo > /tmp/wp-checksum # List all files recursively find wp-content/plugins/wordpress-https/ -type f | while read file do #echo $file cat $file >> /tmp/wp-checksum done # Get checksum of file contents sha1sum /tmp/wp-checksum # Get checksum of file sizes ls -Rl wp-content/plugins/wordpress-https/ | sha1sum # Go back to original dir cd $STARTDIR I ran this as part of my playbook (running it in isolation using tags) and received this: PLAY [Set this playbook to run locally] **************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [jonblog : Run checksum command] ****************************************** changed: [localhost] TASK [jonblog : debug] ********************************************************* ok: [localhost] => { "checksum_before.stdout_lines": [ "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum", "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -" ] } TASK [jonblog : Install the WordPress HTTPS plugin] *************** changed: [localhost] TASK [jonblog : Run checksum command] ****************************************** changed: [localhost] TASK [jonblog : debug] ********************************************************* ok: [localhost] => { "checksum_after.stdout_lines": [ "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum", "719c9da94b525e723b1abe188ee9f5bbaf121f3f -" ] } PLAY RECAP ********************************************************************* 
localhost : ok=6 changed=3 unreachable=0 failed=0 The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing. So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Can Ansible unarchive be made to write static folder modification times? I am writing a build process for a WordPress installation using Ansible. It doesn't have a application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button. Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same. Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped. It may be useful to know that this is a project build script, with a connection of local . I guess therefore that means that SSH is not being used. Here is a snippet of my playbook: - name: Install the W3 Total Cache plugin unarchive: > src=[URL] dest=wp-content/plugins copy=no - name: Install the WP DB Manager plugin unarchive: > src=[URL] dest=wp-content/plugins copy=no # @todo Since this has internal sub-folders, need to work out # how to preserve timestamps of the original folders rather than # re-writing them, which forces Ansible to record a change of # server state. - name: Install the WordPress HTTPS plugin unarchive: > src=[URL] dest=wp-content/plugins copy=no One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency). 
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy. Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible? Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps: #!/bin/bash # Save pwd and then change dir to root location STARTDIR=pwd cd dirname $0/../.. # Clear collation file echo > /tmp/wp-checksum # List all files recursively find wp-content/plugins/wordpress-https/ -type f | while read file do #echo $file cat $file >> /tmp/wp-checksum done # Get checksum of file contents sha1sum /tmp/wp-checksum # Get checksum of file sizes ls -Rl wp-content/plugins/wordpress-https/ | sha1sum # Go back to original dir cd $STARTDIR I ran this as part of my playbook (running it in isolation using tags) and received this: PLAY [Set this playbook to run locally] **************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [jonblog : Run checksum command] ****************************************** changed: [localhost] TASK [jonblog : debug] ********************************************************* ok: [localhost] => { "checksum_before.stdout_lines": [ "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum", "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -" ] } TASK [jonblog : Install the WordPress HTTPS plugin] *************** changed: [localhost] TASK [jonblog : Run checksum command] ****************************************** changed: [localhost] TASK [jonblog : debug] ********************************************************* ok: [localhost] => { "checksum_after.stdout_lines": [ 
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum", "719c9da94b525e723b1abe188ee9f5bbaf121f3f -" ] } PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing. So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
wordpress, ansible
11
674
2
https://stackoverflow.com/questions/37646581/can-ansible-unarchive-be-made-to-write-static-folder-modification-times
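The content-only checksum idea from the question can be made immune to directory mtime changes by hashing file bytes only, sorted for a stable order. A small self-contained sketch (temporary paths, not the question's WordPress tree) demonstrating that touching a sub-directory leaves the checksum unchanged:

```shell
# Build a tiny directory tree standing in for an unpacked plugin.
workdir=$(mktemp -d)
mkdir -p "$workdir/plugin/sub"
echo "alpha" > "$workdir/plugin/a.txt"
echo "beta"  > "$workdir/plugin/sub/b.txt"

# Hash every file's contents, strip the path column, sort for a
# stable order, then hash the combined list. Directory metadata
# (including mtimes) never enters the calculation.
content_sum() {
  find "$1" -type f -exec sha1sum {} + | awk '{print $1}' | sort | sha1sum | awk '{print $1}'
}

before=$(content_sum "$workdir/plugin")
touch "$workdir/plugin/sub"   # bump a directory mtime, as unarchive does
after=$(content_sum "$workdir/plugin")
echo "before=$before after=$after"
```

Unlike the question's `ls -Rl`-based hash, this survives the mtime bump, so it could serve as a changed-when condition for the unarchive task.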
30,799,503
Ansible git module not checking out a branch
I'm using ansible to check out my web application on EC2 web instances. My code is as follows: - name: Checkout the source code git: accept_hostkey=yes depth=5 dest={{ webapp_dir }} force=yes key_file=/var/tmp/webapp_deploy_key repo=git@github.com:MyRepo/web-app.git update=yes version={{ webapp_version }} register: git_output As long as webapp_version = master it works perfectly. But as soon as I put a SHA1 or branch name it will fail. TASK: [webapp | Checkout the source code] ************************************* failed: [52.17.69.83] => {"failed": true} msg: Failed to checkout some-branch It's quite strange. I use: › ansible --version ansible 1.9.1 configured module search path = None
Ansible git module not checking out a branch I'm using ansible to check out my web application on EC2 web instances. My code is as follows: - name: Checkout the source code git: accept_hostkey=yes depth=5 dest={{ webapp_dir }} force=yes key_file=/var/tmp/webapp_deploy_key repo=git@github.com:MyRepo/web-app.git update=yes version={{ webapp_version }} register: git_output As long as webapp_version = master it works perfectly. But as soon as I put a SHA1 or branch name it will fail. TASK: [webapp | Checkout the source code] ************************************* failed: [52.17.69.83] => {"failed": true} msg: Failed to checkout some-branch It's quite strange. I use: › ansible --version ansible 1.9.1 configured module search path = None
git, ansible
11
10,044
3
https://stackoverflow.com/questions/30799503/ansible-git-module-not-checking-out-a-branch
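A frequently reported cause of this symptom (not confirmed by the question itself) is the depth=5 shallow clone: a shallow clone only fetches recent history of the cloned branch, so an arbitrary SHA1 or a different branch may simply not exist in the local repository. A sketch of the same task without depth, keeping the question's variables:

```yaml
# Sketch: identical task minus "depth", so the full history (and
# therefore any branch or SHA1 passed as webapp_version) is available.
- name: Checkout the source code
  git:
    repo: git@github.com:MyRepo/web-app.git
    dest: "{{ webapp_dir }}"
    version: "{{ webapp_version }}"
    key_file: /var/tmp/webapp_deploy_key
    accept_hostkey: yes
    force: yes
  register: git_output
```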