Columns: question_id (int64) | title_clean (string) | body_clean (string) | tags (string) | score (int64) | view_count (int64) | answer_count (int64) | link (string)

**51291521 · kubernetes configmap prints \n instead of a newline**

I am trying to deploy a configmap onto a cluster:

```yaml
- name: Make/Update all configmaps on the cluster
  kubernetes:
    api_endpoint: blah
    url_username: blah
    url_password: blah
    inline_data:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: blah
        namespace: blah
      data:
        my-data.txt: "{{ data }}"
    state: present
```

where the variable is:

```yaml
data: |
  some = foo
  foo = some
```

(using Spinnaker to attach it to pods). When I go into the pod and open my-data.txt it displays:

```
some = foo\n foo = some\n
```

I want it to look exactly like the source text and print a newline rather than \n. The weird thing is that if I put ' ' (single quotes) somewhere in the text, it prints the text as-is but with the single quotes, so:

```yaml
data: |
  some = foo
  foo = some
  ' '
```

prints exactly the same. I have tried to research this but couldn't find anything, and I have been stuck on it for a while now.
tags: kubernetes, ansible | score: 31 | views: 48,304 | answers: 7
https://stackoverflow.com/questions/51291521/kubernetes-configmap-prints-n-instead-of-a-newline

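A detail worth noting for this case: when the manifest is written directly rather than templated through a Jinja2 string, a YAML literal block scalar preserves real newlines in the ConfigMap value. A minimal sketch with placeholder names (not the accepted answer, just the block-scalar form):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config        # placeholder
  namespace: default
data:
  my-data.txt: |
    some = foo
    foo = some
```
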
**36724870 · Ansible - ERROR! the field 'hosts' is required but was not set**

I get an error when I launch a playbook, but I can't find out why:

```
ERROR! the field 'hosts' is required but was not set
```

Here is my main.yml:

```yaml
---
- hosts: hosts
- vars:
  - elasticsearch_java_home: /usr/lib/jmv/jre-1.7.0
  - elasticsearch_http_port: 8443
- tasks:
  - include: tasks/main.yml
- handlers:
  - include: handlers/main.yml
```

And my /etc/ansible/hosts:

```ini
[hosts]
10.23.108.182
10.23.108.183
10.23.108.184
10.23.108.185
```

When I test with a ping, all is good:

```
[root@poste08-08-00 elasticsearch]# ansible hosts -m ping
10.23.108.183 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
10.23.108.182 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
10.23.108.185 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
10.23.108.184 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

Please help me :) Regards,
tags: ansible | score: 31 | views: 69,182 | answers: 4
https://stackoverflow.com/questions/36724870/ansible-error-the-field-hosts-is-required-but-was-not-set

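The usual cause of this error is that each leading dash opens a new play, so `- vars:`, `- tasks:`, and `- handlers:` become separate plays with no `hosts` field of their own. A sketch of the corrected layout, keeping the original paths exactly as written:

```yaml
---
- hosts: hosts
  vars:
    elasticsearch_java_home: /usr/lib/jmv/jre-1.7.0
    elasticsearch_http_port: 8443
  tasks:
    - include: tasks/main.yml
  handlers:
    - include: handlers/main.yml
```
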
**36566703 · Ansible: /etc not writable**

I am trying to copy a file into /etc, but I get "msg: Destination /etc not writable" when I run the playbook. Here is the task part of my playbook. I'd really appreciate your help.

```yaml
tasks:
  - name: copy rsyslog
    sudo: yes
    copy:
      src: /home/nandakumar.nachimuth/playbooks/rhn_check/rtest.conf
      dest: /etc/rtest.conf
      owner: root
      group: root
      mode: 0755
    ignore_errors: yes
```

Error:

```
msg: Destination /etc not writable
```

Note: I provided the SSH and sudo passwords while running the playbook.
tags: ansible | score: 31 | views: 48,325 | answers: 4
https://stackoverflow.com/questions/36566703/ansible-etc-not-writable

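This error generally means the task ran without privilege escalation actually being applied. A minimal sketch using the modern `become` syntax at play level (just the equivalent escalation setup, and `0644` is my assumption for a config file):

```yaml
- hosts: all
  become: yes                # modern replacement for the legacy sudo: yes
  tasks:
    - name: copy rsyslog config
      copy:
        src: /home/nandakumar.nachimuth/playbooks/rhn_check/rtest.conf
        dest: /etc/rtest.conf
        owner: root
        group: root
        mode: "0644"
```

Run with `ansible-playbook play.yml --ask-become-pass` if sudo requires a password.
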
**20801787 · "skipping: no hosts matched" issue with Vagrant and Ansible**

I have installed Vagrant, VirtualBox, and Ansible and am trying to provision one host, but it always returns "skipping: no hosts matched". The head of my playbook file looks like this:

```yaml
---
- hosts: webservers
  user: vagrant
  sudo: yes
```

and my /etc/ansible/hosts file looks like this:

```ini
[webservers]
webserver1
```

I tried putting the IP address there instead but got the same result. I have added my SSH key to the server and added the webserver1 host to both .ssh/config and /etc/hosts. I can `ssh vagrant@webserver1` fine without being prompted for a password, thanks to the SSH key. What am I missing here?

Host: Debian 7.2. Client machine: Debian 7. VirtualBox: 4.1.18. Vagrant: 1.4.1. Ansible: 1.5.
"skipping: no hosts matched" issue with Vagrant and Ansible I have installed Vagrant, VirtualBox and Ansible and trying to run provision over one host but it always returns "skipping: no hosts matched" The head of my playbook file looks like this: --- - hosts: webservers user: vagrant sudo: yes and my /etc/ansible/hosts file looks like this: [webservers] webserver1 I tried putting the IP address there but had the same result. I have added my ssh key to the server and added webserver1 host to both .ssh/config and /etc/hosts . I can ssh vagrant@webserver1 fine without being prompted for a password, thanks to using the ssh key. What am I missing here? Host: Debian 7.2 Client machine: Debian 7 Virtualbox: 4.1.18 Vangrantup: 1.4.1 Ansible: 1.5
|
tags: debian, vagrant, virtualbox, ansible | score: 30 | views: 34,656 | answers: 9
https://stackoverflow.com/questions/20801787/skipping-no-hosts-matched-issue-with-vagrant-and-ansible

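One thing worth checking (an assumption about the setup, since the question does not show how the playbook is invoked): when Vagrant's Ansible provisioner runs the playbook, it generates its own inventory containing only the machine name, so a `webservers` group defined in /etc/ansible/hosts never comes into play. Running by hand against the hand-written inventory sidesteps that:

```
ansible-playbook -i /etc/ansible/hosts playbook.yml
```

with connection details in the inventory so the group actually reaches the VM (port 2222 is Vagrant's usual forwarded SSH port, assumed here):

```ini
[webservers]
webserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant
```
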
**21869912 · Prevent simultaneous deploys with Ansible**

Anyone on my team can SSH into our special deploy server, and from there run an Ansible playbook to push new code to machines. We're worried about what will happen if two people try to do deploys simultaneously. We'd like to make it so that the playbook will fail if anyone else is currently running it. Any suggestions for how to do this? The standard solution is to use a pid file, but Ansible does not have built-in support for these.
tags: ansible | score: 30 | views: 24,363 | answers: 8
https://stackoverflow.com/questions/21869912/prevent-simultaneous-deploys-with-ansible

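In the absence of built-in locking, one pattern is an atomic lock on the deploy server itself: `mkdir` fails if the directory already exists, which makes it usable as a mutex. A sketch (the path and layout are my own choices):

```yaml
- hosts: all
  tasks:
    - name: Acquire deploy lock (mkdir is atomic, fails if it exists)
      command: mkdir /tmp/ansible-deploy.lock
      delegate_to: localhost
      run_once: true

    # ... actual deploy tasks here ...

    - name: Release deploy lock
      file:
        path: /tmp/ansible-deploy.lock
        state: absent
      delegate_to: localhost
      run_once: true
```

Caveat: if a run dies between the two tasks, the stale lock has to be removed by hand.
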
**35403769 · how to read json file using ansible**

I have a JSON file in the same directory as my Ansible script. This is its content:

```json
{
  "resources": [
    { "name": "package1", "downloadURL": "path-to-file1" },
    { "name": "package2", "downloadURL": "path-to-file2" }
  ]
}
```

I am trying to download these packages using get_url. This is the approach:

```yaml
---
- hosts: localhost
  vars:
    package_dir: "/var/opt/"
    version_file: "{{ lookup('file','/home/shasha/devOps/tests/packageFile.json') }}"
  tasks:
    - name: Printing the file.
      debug: msg="{{ version_file }}"

    - name: Downloading the packages.
      get_url: url="{{ item.downloadURL }}" dest="{{ package_dir }}" mode=0777
      with_items: version_file.resources
```

The first task prints the content of the file correctly, but on the second task I get the following error:

```
[DEPRECATION WARNING]: Skipping task due to undefined attribute, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
```
tags: ansible, ansible-2.x | score: 30 | views: 56,961 | answers: 3
https://stackoverflow.com/questions/35403769/how-to-read-json-file-using-ansible

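The lookup returns the file as a plain string, so `version_file.resources` has nothing to dereference. A sketch of the two changes that usually fix this: parsing with `from_json` and quoting the loop expression as a Jinja2 expression (the `0644` mode is my assumption):

```yaml
vars:
  package_dir: "/var/opt/"
  version_file: "{{ lookup('file', '/home/shasha/devOps/tests/packageFile.json') | from_json }}"
tasks:
  - name: Downloading the packages.
    get_url:
      url: "{{ item.downloadURL }}"
      dest: "{{ package_dir }}"
      mode: "0644"
    with_items: "{{ version_file.resources }}"
```
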
**21908363 · Ansible - read inventory hosts and variables to group_vars/all file**

I have what is probably a basic question, but it has kept me stuck for a long time. I have a very ordinary inventory file with hosts and variables:

```ini
[lb]
10.112.84.122

[tomcat]
10.112.84.124

[jboss5]
10.112.84.122
...

[tests:children]
lb
tomcat
jboss5

[default:children]
tests

[tests:vars]
data_base_user=NETWIN-4.3
data_base_password=NETWIN
data_base_encrypted_password=
data_base_host=10.112.69.48
data_base_port=1521
data_base_service=ssdenwdb
data_base_url=jdbc:oracle:thin:@10.112.69.48:1521/ssdenwdb
```

The problem is that I need to access all these hosts and variables from the inventory file in the group_vars/all file. I've tried the following ways to access the host IP:

```
{{ lb }}
"{{ hostvars[lb] }}"
"{{ hostvars['lb'] }}"
{{ hostvars[lb] }}
```

To access a host variable I tried:

```
"{{ hostvars[tests].['data_base_host'] }}"
```

All of them are wrong! Can anyone help me find the best practice for accessing hosts and variables, not from a playbook but from a variables file?

EDIT: OK, let's clarify. Problem: use a host declared in the inventory file in a variables file, say group_vars/all. Example: I have a DB host with IP 10.112.83.37. Inventory file:

```ini
[db]
10.112.83.37
```

In the group_vars/all file I want to use that IP to build a variable:

```
data_base_url=jdbc:oracle:thin:@{{ db }}:1521/ssdenwdb
```

In a template I use the variable built in the group_vars/all file:

```
oracle_url = {{ data_base_url }}
```

The problem is that the `{{ db }}` variable in the group_vars/all file is not replaced by the DB host IP. The user can only edit the inventory file.
tags: ansible, ansible-inventory | score: 30 | views: 172,480 | answers: 7
https://stackoverflow.com/questions/21908363/ansible-read-inventory-hosts-and-variables-to-group-vars-all-file

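A group name is not itself a variable; the list of its members lives in the `groups` magic variable. A sketch of the group_vars/all entry under that approach (group_vars files are YAML, so the variable is written YAML-style):

```yaml
# group_vars/all
data_base_url: "jdbc:oracle:thin:@{{ groups['db'][0] }}:1521/ssdenwdb"
```

Since Ansible templates variables lazily, this resolves to the first host of the `[db]` group at the point where a play actually uses `data_base_url`.
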
**36646880 · Ansible 2.1.0 using become/become_user fails to set permissions on temp file**

I have Ansible 2.1.0 on my server, where I deploy via Vagrant, and on my PC too. The "deploy" role has:

```yaml
- name: upload code
  become: true
  become_user: www-data
  git: repo=git@bitbucket.org:****.git dest=/var/www/main key_file=/var/www/.ssh/id_rsa accept_hostkey=true update=yes force=yes
  register: fresh_code
  notify: restart php-fpm
  tags: fresh_code
```

With Ansible 2.1.0 I get this error:

```
fatal: [default]: FAILED! => {"failed": true, "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user. For information on working around this, see [URL]
```

With Ansible 2.0.1.0, which I use on my PC, everything works normally: the folder /var/www/ gets a folder main with owner and group www-data. If I use only become_user: www-data, or become_method: sudo together with become_user: www-data, I get the same error. What do I need to do to resolve this?
tags: git, vagrant, ansible, sudo, ansible-2.x | score: 30 | views: 19,060 | answers: 2
https://stackoverflow.com/questions/36646880/ansible-2-1-0-using-become-become-user-fails-to-set-permissions-on-temp-file

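Ansible 2.1 tightened how module temp files are handed from the connecting user to an unprivileged `become_user`; the handoff works cleanly when POSIX ACLs are available on the target. A sketch of the usual prerequisite (the `apt` module and package name assume a Debian/Ubuntu target):

```yaml
- name: Install acl so temp files can be shared with an unprivileged become_user
  apt:
    name: acl
    state: present
  become: true
```

The fallback documented at the error's link is setting `allow_world_readable_tmpfiles = True` under `[defaults]` in ansible.cfg, at the cost of briefly world-readable temp files.
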
**31772732 · Ansible - How to keep appending new keys to a dictionary when using set_fact module with with_items?**

I want to add keys to a dictionary when using set_fact with with_items. This is a small POC that will help me complete some other work; I have generalized it to remove all the irrelevant details. When I execute the following code, it shows a dictionary with only one key, the one corresponding to the last item of with_items. It seems that it re-creates, or maybe overrides, the dictionary for every item in with_items. I want a single dictionary with all the keys. Code:

```yaml
---
- hosts: localhost
  connection: local
  vars:
    some_value: 12345
    dict: {}
  tasks:
    - set_fact: { dict: "{ {{ item }}: {{ some_value }} }" }
      with_items:
        - 1
        - 2
        - 3

    - debug: msg="{{ dict }}"
```
tags: ansible | score: 30 | views: 95,384 | answers: 4
https://stackoverflow.com/questions/31772732/ansible-how-to-keep-appending-new-keys-to-a-dictionary-when-using-set-fact-mod

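From Ansible 2.0 on, the `combine` filter can merge each loop item into the accumulated fact. A sketch (I have also renamed the variable, since `dict` shadows a built-in name):

```yaml
- set_fact:
    my_dict: "{{ my_dict | default({}) | combine({ item: some_value }) }}"
  with_items:
    - 1
    - 2
    - 3

- debug: var=my_dict
```

Each iteration sees the fact set by the previous one, so the keys accumulate instead of being overwritten.
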
**21925808 · automate usage of SSH local key for git deployment with ansible**

I am working with Vagrant and Ansible. I want to automate the deployment role of Ansible (you can check my repo here). For this purpose, I am trying to deploy my local SSH key onto my VPS and my Vagrant guest machine (I am trying SSH agent forwarding).

GOAL: automate the deployment process with git using Ansible. I've already done this:

```yaml
---
- name: read-write git checkout from github
  git: repo={{ repository }} dest=/home/site
```

where:

```yaml
---
# Variables here are applicable to all host groups
repository: git@bitbucket.org:dgnest/dgnest.git
```

PROBLEM: when I do `vagrant provision`, the console stops here:

```
TASK: [deployment | read-write git checkout from github] **********************
```

That's because I haven't set up the SSH keys.

I TRIED: I would like to use the key_file option that the git module of Ansible has, but it fails too:

```yaml
---
- name: read-write git checkout from github
  git: repo={{ repository }} dest=/home/site key_file=/home/oscar/.ssh/id_rsa.pub
```

Another option is to copy my ~/.ssh/id_rsa.pub onto each VPS and Vagrant machine, but my problem in that case is handling all the different users: Vagrant uses the "vagrant" user and my VPS uses other ones, so would I have to put my local SSH key into each of these users? Hope you can help me. Thank you.

UPDATE: I've just automated the @leucos answer (thanks), copying the private and public RSA keys. I share this link with the implementation.
tags: git, deployment, vagrant, ssh-keys, ansible | score: 30 | views: 25,194 | answers: 2
https://stackoverflow.com/questions/21925808/automate-usage-of-ssh-local-key-for-git-deployment-with-ansible

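Two details worth flagging. First, `key_file` expects the private key (`id_rsa`), not the `.pub` file. Second, with true agent forwarding there is no need to copy keys at all: the control machine can forward its agent through the Ansible SSH connection. A sketch of the ansible.cfg side of that:

```ini
[ssh_connection]
ssh_args = -o ForwardAgent=yes
```

With the key loaded into the local agent (`ssh-add`), the remote git checkout can authenticate as any remote user without per-user key copies.
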
**35798101 · Execute curl -X with ansible playbook**

I want to execute this command using an Ansible playbook:

```
curl -X POST -d@mesos-consul.json -H "Content-Type: application/json" [URL]
```

How can I run it? If I run:

```yaml
- name: post to consul
  uri:
    url: [URL]
    method: POST
    body: "{{ lookup('file','mesos-consul.json') }}"
    body_format: json
    HEADER_Content-Type: "application/json"
```

I get the following failure:

```
fatal: [172.16.8.231]: FAILED! => {"failed": true, "msg": "ERROR! the file_name '/home/ikerlan/Ik4-Data-Platform/ansible/playbooks/Z_PONER_EN_MARCHA/dns-consul/mesos-consul.j2' does not exist, or is not readable"}
```
tags: ansible, ansible-2.x | score: 30 | views: 114,131 | answers: 1
https://stackoverflow.com/questions/35798101/execute-curl-x-with-ansible-playbook

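The error is about the file lookup path, which is resolved relative to the play, not about the HTTP call itself. A sketch that pins the path explicitly via the `playbook_dir` magic variable (the endpoint stays whatever the redacted [URL] was):

```yaml
- name: post to consul
  uri:
    url: "[URL]"
    method: POST
    body: "{{ lookup('file', playbook_dir + '/mesos-consul.json') }}"
    body_format: json
```

With `body_format: json`, recent Ansible versions set the Content-Type header themselves, so the HEADER_ option shouldn't be needed.
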
**19702879 · Advantages of a deployment tool such as Ansible over shell**

Currently I have all of my deployment scripts in shell; they install about 10 programs and configure them. The way I see it, shell is a fantastic tool for this:

- Modular: only one program per script, so I can spread the programs across different servers.
- Simple: shell scripts are extremely simple and don't need any other software installed.
- One-click: I only have to run the shell script once and everything is set up.
- Agnostic: most programmers can figure out shell and don't need to know how to use a specific program.
- Versioning: since my code is on GitHub, a simple git pull and a restart of supervisor will run my latest code.

With all of these advantages, why do people keep telling me to use a tool such as Ansible or Chef, and not shell?
tags: linux, shell, deployment, chef-infra, ansible | score: 30 | views: 11,055 | answers: 1
https://stackoverflow.com/questions/19702879/advantages-of-a-deployment-tool-such-as-ansible-over-shell

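The usual counter-argument is idempotency: configuration-management tasks describe a desired state and converge to it, while a shell script replays actions. A small illustration of the difference (my own example, not from the question):

```yaml
# A shell script replays the action every run, e.g. this appends a duplicate line each time:
#   echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# The Ansible equivalent checks the current state and only changes what differs:
- lineinfile:
    dest: /etc/sysctl.conf
    regexp: '^net\.ipv4\.ip_forward'
    line: 'net.ipv4.ip_forward = 1'
```
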
**31408017 · Ansible with a bastion host / jump box?**

I'm fairly certain I've seen a feature in the Ansible documentation where you can tell it that, to connect to certain hosts, it first needs to tunnel through a DMZ host. However, I can't seem to find any documentation for it outside of some debates on the mailing lists. I'm aware of hacking this in with an SSH config like on this page [URL], but that's an overcomplicated kludge for an extremely common requirement in any kind of mildly regulated environment. Is there a way to do this without custom SSH config includes and voodoo netcat sorcery?
tags: ansible | score: 30 | views: 39,484 | answers: 3
https://stackoverflow.com/questions/31408017/ansible-with-a-bastion-host-jump-box

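Since Ansible 2.0, the usual route is the `ansible_ssh_common_args` inventory variable, which keeps the ProxyCommand inside the inventory instead of ~/.ssh/config. A sketch with placeholder names:

```ini
[dmz_hosts:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q jumpuser@bastion.example.com"'
```

`ssh -W` makes the bastion forward stdio straight to the target, so no netcat is involved.
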
**59380824 · How to choose a python interpreter for Ansible playbook?**

I have Python 2.7 and Python 3.5 on my Ansible server; when executing playbooks it uses Python 2.7. I want Ansible to use Python 3.5 when executing playbooks. In order, I:

1. set the export path;
2. changed the default interpreter path in ansible.cfg as well;
3. set a specific interpreter path in the hosts file for a particular host.

But Ansible is still not running Python 3.
tags: python, ansible | score: 30 | views: 120,660 | answers: 2
https://stackoverflow.com/questions/59380824/how-to-choose-a-python-interpreter-for-ansible-playbook

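The interpreter is chosen per target host via the `ansible_python_interpreter` variable, so it has to be set where inventory variables live, or passed on the command line. A sketch of both forms:

```ini
# inventory
[all:vars]
ansible_python_interpreter=/usr/bin/python3.5
```

```
ansible-playbook site.yml -e ansible_python_interpreter=/usr/bin/python3.5
```
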
**37313780 · How can I start php-fpm in a Docker container by default?**

I have this Docker image:

```dockerfile
FROM centos:7
MAINTAINER Me <me.me>

RUN yum update -y
RUN yum install -y git [URL]
RUN yum install -y ansible
RUN git clone [URL]
RUN ansible-playbook dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*;\
    rm -f /etc/systemd/system/*.wants/*;\
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*;\
    rm -f /lib/systemd/system/anaconda.target.wants/*;

VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443 3306
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
```

Basically, I want php-fpm to start when the Docker container starts. php-fpm works if I manually go into the container and start it with /usr/sbin/php-fpm. I tried starting it from my Ansible file with this command (it didn't work), and I tried the service module as well with no luck:

```yaml
- name: Start php fpm
  command: /usr/sbin/php-fpm
```

How can I have php-fpm running along with Apache?
tags: php, docker, ansible | score: 29 | views: 88,897 | answers: 7
https://stackoverflow.com/questions/37313780/how-can-i-start-php-fpm-in-a-docker-container-by-default

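Services started during `docker build` (including anything an Ansible `service` task starts) do not survive into the running container; only the CMD/ENTRYPOINT process tree does. One minimal sketch is to chain both daemons in the CMD, relying on php-fpm daemonizing by default while httpd stays in the foreground as PID 1:

```dockerfile
CMD ["/bin/sh", "-c", "/usr/sbin/php-fpm && exec /usr/sbin/httpd -D FOREGROUND"]
```

For anything beyond two processes, a small supervisor (e.g. supervisord) as the CMD is the more usual design.
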
**39239602 · Commenting out a line with Ansible lineinfile module**

I find it hard to believe there isn't anything that covers this use case, but my search has proved fruitless. I have a line in /etc/fstab to mount a drive that's no longer available:

```
//archive/Pipeline /pipeline/Archives cifs ro,credentials=/home/username/.config/cifs 0 0
```

What I want is to change it to:

```
#//archive/Pipeline /pipeline/Archives cifs ro,credentials=/home/username/.config/cifs 0 0
```

I was using this:

```yaml
---
- hosts: slurm
  remote_user: root
  tasks:
    - name: Comment out pipeline archive in fstab
      lineinfile:
        dest: /etc/fstab
        regexp: '^//archive/pipeline'
        line: '#//archive/pipeline'
        state: present
      tags: update-fstab
```

expecting it to just insert the comment symbol (#), but instead it replaced the whole line and I ended up with:

```
#//archive/Pipeline
```

Is there a way to glob-capture the rest of the line, or just insert the single comment character?

```yaml
regexp: '^//archive/pipeline *'
line: '#//archive/pipeline *'
```

or

```yaml
regexp: '^//archive/pipeline *'
line: '#//archive/pipeline $1'
```

I am trying to wrap my head around lineinfile, and from what I've read it looks like insertafter is what I'm looking for, but "insert after" isn't what I want?
tags: ansible | score: 29 | views: 53,357 | answers: 3
https://stackoverflow.com/questions/39239602/commenting-out-a-line-with-ansible-lineinfile-module

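`lineinfile` can do exactly this with `backrefs`, which turns `line` into a template over the groups captured by `regexp` (note the regexp must match the case actually in the file):

```yaml
- name: Comment out pipeline archive in fstab
  lineinfile:
    dest: /etc/fstab
    regexp: '^(//archive/Pipeline\s.*)$'
    line: '#\1'
    backrefs: yes
```

With `backrefs: yes` the task is also a no-op when no line matches, so an already-commented line is left alone on re-runs.
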
**44713880 · How do I make decision based on arch in Ansible playbooks?**

I have been trying to write playbooks that run different tasks based on the architecture (i.e. amd64, arm, ppc64le) they are running on, but I cannot figure out how to get the architecture of the system. How do I find the system's architecture in an Ansible playbook?
tags: ansible | score: 29 | views: 36,228 | answers: 5
https://stackoverflow.com/questions/44713880/how-do-i-make-decision-based-on-arch-in-ansible-playbooks

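Fact gathering exposes this as `ansible_architecture`, whose values follow `uname -m` (e.g. x86_64, aarch64, ppc64le). A sketch with a hypothetical command path:

```yaml
- debug: var=ansible_architecture

- name: Run only on 64-bit x86 hosts
  command: /opt/tools/amd64-only-thing   # hypothetical path
  when: ansible_architecture == "x86_64"
```
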
**32189385 · Get Linux network interface name with Ansible?**

What is the best way to get the main network interface name for a Linux server with Ansible? This is often eth0, but we can't always assume that, and it would be better to identify it dynamically. We are configuring the firewall with Ansible, so we need to be able to pass the interface name into the commands we are using.
tags: linux, ansible | score: 29 | views: 34,750 | answers: 1
https://stackoverflow.com/questions/32189385/get-linux-network-interface-name-with-ansible

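The gathered facts already include this: `ansible_default_ipv4.interface` names the interface carrying the default route, which is the usual definition of "main". A sketch (the iptables invocation is purely illustrative):

```yaml
- name: Show the default interface
  debug:
    msg: "Default interface is {{ ansible_default_ipv4.interface }}"

- name: Example firewall rule using it
  command: iptables -A INPUT -i {{ ansible_default_ipv4.interface }} -p tcp --dport 22 -j ACCEPT
```

All interfaces are listed in `ansible_interfaces` if a different selection rule is needed.
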
**34788520 · Ansible: get files list from local directory**

I use Ansible 1.9.4 and I would like to get the list of files in a local directory. In version 2.0 there is the find module, but that version is in beta. How can I do this in < 2.0?
tags: ansible | score: 29 | views: 101,130 | answers: 4
https://stackoverflow.com/questions/34788520/ansible-get-files-list-from-local-directory

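On 1.9 the `fileglob` loop runs on the control machine, which matches the "local directory" requirement here. A sketch with a placeholder path:

```yaml
- name: Iterate over local files
  debug: msg="{{ item }}"
  with_fileglob:
    - /path/to/dir/*
```

Note that `with_fileglob` always evaluates locally, even inside a play targeting remote hosts.
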
**32988878 · Ansible: Run task if server belongs to a group**

What's the sane way to run a task only if the host belongs to one or more groups? Currently, I'm using a boolean within the relevant group, e.g.:

Inventory file:

```ini
[db_servers:vars]
copy_connection_string=true
```

Task:

```yaml
- name: Copy db connection string file
  synchronize: # ...
  when: copy_connection_string is defined
```

What's the right condition in the `when` clause to check whether the current host belongs to the db_servers group?
tags: ansible | score: 29 | views: 47,982 | answers: 1
https://stackoverflow.com/questions/32988878/ansible-run-task-if-server-belongs-to-a-group

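The `group_names` magic variable holds the groups of the current host, so a direct membership test can replace the sentinel variable:

```yaml
- name: Copy db connection string file
  synchronize: # ...
  when: "'db_servers' in group_names"
```
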
**55773505 · Where to place requirements.yml for Ansible and use it to resolve dependencies?**

I am new to Ansible and was exploring dependent roles (documentation link). What I did not come across in the documentation was where to place the requirements.yml file. For instance, if my site.yml looks like this:

```yaml
---
- name: prepare system
  hosts: all
  roles:
    - role1
```

and, let's say, role1 depends on role2 and role3, and role2 depends on role4 and role5. Typically, ansible-galaxy roles have the following structure:

```
└── test-role
    ├── defaults
    │   └── main.yml
    ├── files
    ├── handlers
    │   └── main.yml
    ├── meta
    │   └── main.yml
    ├── README.md
    ├── tasks
    │   └── main.yml
    ├── templates
    ├── tests
    │   ├── inventory
    │   └── test.yml
    └── vars
        └── main.yml
```

Dependencies are added to meta/main.yml. Assume role1 has its dependencies marked in this file like so (and likewise for role2):

```yaml
dependencies:
  - role: role2
  - role: role3
```

And I also have a requirements.yml file that looks like:

```yaml
---
- src: some git link1
  version: master
  name: role2

- src: some git link2
  version: master
  name: role3
```

My question: where do I place this requirements.yml file for role1? I understand the requirements will need to be installed with the command:

```
ansible-galaxy install -r requirements.yml -p roles/
```

I can do this for role1, but how does this get automated for role2? Do the successive dependencies need to be resolved and installed manually this way, or is there something better?
tags: ansible, ansible-role | score: 29 | views: 95,943 | answers: 1
https://stackoverflow.com/questions/55773505/where-to-place-requirements-yml-for-ansible-and-use-it-to-resolve-dependencies

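The common convention is a single project-level requirements file rather than one per role, sitting next to the playbook (or at roles/requirements.yml, which some tooling such as Ansible Tower picks up automatically). A sketch of that layout:

```
project/
├── site.yml
├── requirements.yml     # all externally-sourced roles for the project
└── roles/               # populated via: ansible-galaxy install -r requirements.yml -p roles/
```

As for transitive dependencies: ansible-galaxy should follow the `dependencies` declared in meta/main.yml of each role it installs, so role2's own dependencies are pulled in during role2's installation, provided they resolve to installable sources.
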
**36150362 · Installing specific apt version with ansible**

I used an Ansible playbook to install git:

```yaml
---
- hosts: "www"
  tasks:
    - name: Update apt repo
      apt: update_cache=yes

    - name: Install dependencies
      apt: name={{item}} state=installed
      with_items:
        - git
```

I checked the installed version:

```
$ git --version
git version 1.9.1
```

But changing the playbook to:

```yaml
apt: name=git=1.9.1 state=installed
```

and rerunning results in the following error:

```
fatal: [46.101.94.110]: FAILED! => {"cache_update_time": 0, "cache_updated": false, "changed": false, "failed": true, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'git=1.9.1'' failed: E: Version '1.9.1' for 'git' was not found\n", "stderr": "E: Version '1.9.1' for 'git' was not found\n", "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}
```
tags: ansible | score: 29 | views: 44,384 | answers: 2
https://stackoverflow.com/questions/36150362/installing-specific-apt-version-with-ansible

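apt needs the full package version string, which usually carries an epoch and packaging suffix beyond what `git --version` reports. A sketch (the exact string is hypothetical and should come from `apt-cache policy git` or `apt-cache madison git` on the target):

```yaml
- name: Install a pinned git version
  apt:
    name: "git=1:1.9.1-1ubuntu0.1"   # hypothetical full version string
    state: present
```
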
**26639325 · Ansible: can't access dictionary value - got error: 'dict object' has no attribute**

```yaml
---
- hosts: test
  tasks:
    - name: print phone details
      debug: msg="user {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
      with_dict: "{{ users }}"
      vars:
        users:
          alice: "Alice"
          telephone: 123
```

When I run this playbook, I get this error:

```
One or more undefined variables: 'dict object' has no attribute 'name'
```

This one actually works just fine:

```yaml
debug: msg="user {{ item.key }} is {{ item.value }}"
```

What am I missing?
tags: dictionary, ansible | score: 29 | views: 148,940 | answers: 4
https://stackoverflow.com/questions/26639325/ansible-cant-access-dictionary-value-got-error-dict-object-has-no-attribu

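The error is consistent with the data shape: under `users`, the key `alice` maps to the plain string "Alice", and `telephone` is a sibling key rather than nested under alice, so `item.value.name` has nothing to resolve. The loop expects one mapping per user:

```yaml
users:
  alice:
    name: "Alice"
    telephone: 123
```

With that nesting, `item.key` is `alice` and `item.value.name` / `item.value.telephone` resolve as intended.
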
**35183744 · Jinja2: format + join the items of a list**

`play_hosts` is a list of all machines for a play. I want to take these, use something like `format()` to rewrite them as rabbitmq@%s, and then join them together with something like `join()`. So:

```
{{ play_hosts | format(???) | join(', ') }}
```

All the examples of `format` use piping where the input is the format string, not a list. Is there a way to use these (or something else) to accomplish what I want? The output should look something like:

```
['rabbitmq@server1', 'rabbitmq@server2', 'rabbitmq@server3', ...]
```

The Jinja2 docs describe format like this:

```
format(value, *args, **kwargs)
    Apply python string formatting on an object:

    {{ "%s - %s"|format("Hello?", "Foo!") }}
        -> Hello? - Foo!
```

So it takes three kinds of input, but the docs don't describe them in the example, which shows one in the pipe and the other two passed as args. Is there a keyword arg to specify the string that's piped? Please help, Python monks!
tags: python, jinja2, ansible | score: 28 | views: 31,764 | answers: 4
https://stackoverflow.com/questions/35183744/jinja2-format-join-the-items-of-a-list

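`format` operates on a single string, so the list-wise version needs `map`. One sketch that works in Ansible's Jinja2 is mapping `regex_replace` over the list:

```yaml
- debug:
    msg: "{{ play_hosts | map('regex_replace', '^(.*)$', 'rabbitmq@\\1') | join(', ') }}"
```

`map` applies the filter to every element, and `join` then consumes the result.
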
**35470667 · Ansible with_items if item is defined**

Ansible 1.9.4. The script should execute some tasks only on hosts where a certain variable is defined. That works fine normally, but it doesn't work with the with_items statement:

```yaml
- debug: var=symlinks
  when: symlinks is defined

- name: Create other symlinks
  file: src={{ item.src }} dest={{ item.dest }} state=link
  with_items: "{{ symlinks }}"
  when: symlinks is defined
```

But I get:

```
TASK: [app/symlinks | debug var=symlinks] *********************
skipping: [another-host-yet]

TASK: [app/symlinks | Create other symlinks] ******************
fatal: [another-host-yet] => with_items expects a list or a set
```

Maybe I am doing something wrong?
tags: ansible | score: 28 | views: 43,124 | answers: 2
https://stackoverflow.com/questions/35470667/ansible-with-items-if-item-is-defined

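The loop expression is evaluated before the `when` test, so on hosts without the variable, with_items receives an undefined value. Feeding it an empty default sidesteps the problem and makes the `when` unnecessary:

```yaml
- name: Create other symlinks
  file: src={{ item.src }} dest={{ item.dest }} state=link
  with_items: "{{ symlinks | default([]) }}"
```
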
**40844720 · Json parsing in Ansible**

I have to parse the output of the following command:

```
mongo <dbname> --eval "db.isMaster()"
```

which gives output as follows:

```
{
    "hosts" : [ "xxx:<port>", "xxx:<port>", "xxx:<port>" ],
    "setName" : "xxx",
    "setVersion" : xxx,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "xxx",
    "me" : "xxx",
    "electionId" : ObjectId("xxxx"),
    "maxBsonObjectSize" : xxx,
    "maxMessageSizeBytes" : xxxx,
    "maxWriteBatchSize" : xxx,
    "localTime" : ISODate("xxx"),
    "maxWireVersion" : 4,
    "minWireVersion" : 0,
    "ok" : 1
}
```

I need to check that the value of "ismaster" is true. Please let me know how I can do this in Ansible. At the moment I am simply checking that the text `"ismaster" : true` appears in the output, using the following code:

```yaml
tasks:
  - name: Check if the mongo node is primary
    shell: mongo <dbname> --eval "db.isMaster()"
    register: output_text

  - name: Run command on master
    shell: <command to execute>
    when: "'\"ismaster\\\" : true,' in output_text.stdout"
```

However, it would be nice to use Ansible's JSON processing to check this instead. Please advise.
tags: json, ansible | score: 28 | views: 71,613 | answers: 2
https://stackoverflow.com/questions/40844720/json-parsing-in-ansible

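One wrinkle: this output is MongoDB extended shell syntax (`ObjectId(...)`, `ISODate(...)`), not strict JSON, so piping the whole thing through `from_json` would fail. A sketch that sidesteps parsing by asking the shell for just the boolean (keeping the `<dbname>` and `<command>` placeholders, and assuming no other startup noise on stdout):

```yaml
- name: Check if the mongo node is primary
  shell: mongo <dbname> --quiet --eval "db.isMaster().ismaster"
  register: is_master
  changed_when: false

- name: Run command on master
  shell: <command to execute>
  when: is_master.stdout | trim == "true"
```

`--quiet` suppresses the shell banner so stdout is just `true` or `false`.
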
**34287465 · Ansible shared files between roles**

The Ansible best-practices layout has every role contain a files directory holding all the files that role needs. In my case, different roles share the same files, but I cannot keep a copy in each role: there would be no single source for these files, and if one is edited the change would have to be repeated tediously for every role. A solution I came up with is to create another folder and reference it using an absolute or relative path. Is this the best way of doing it? My Ansible directory looks like this:

```
play.yml
roles/
    web/
        tasks/
        files/
            common-1
            common-2
            other-multiple-files
    role-2/
        tasks/
        files/
            common-1
            common-2
            other-multiple-files
    role-3/
        tasks/
        files/
            common-2
    role-4/
        tasks/
        files/
            common-1
```
tags: ansible | score: 28 | views: 29,316 | answers: 6
https://stackoverflow.com/questions/34287465/ansible-shared-files-between-roles

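The "another folder" idea can be made explicit and portable with the `playbook_dir` magic variable, so every role points at the same source regardless of where the play is invoked from (the directory name is my own choice):

```yaml
# inside any role's tasks
- name: Install shared file
  copy:
    src: "{{ playbook_dir }}/shared-files/common-1"
    dest: /etc/common-1
```

An alternative design is a dedicated role (e.g. `common-files`) that owns and deploys the shared files, with the other roles declaring it under `dependencies:` in their meta/main.yml so it always runs first.
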
**40410270 · What is the difference between two "state" option values, "present" and "installed", available in Ansible's yum module?**

I have the following task in my Ansible playbook:

```yaml
- name: Install EPEL repo.
  yum:
    name: "{{ epel_repo_url }}"
    state: present
  register: result
  until: '"failed" not in result'
  retries: 5
  delay: 10
```

Another value I can pass to state is "installed". What is the difference between the two? Some documentation is available here: [URL]
tags: centos, ansible, yum, epel | score: 28 | views: 52,750 | answers: 3
https://stackoverflow.com/questions/40410270/what-is-the-difference-between-two-state-option-values-present-and-install

**31881762 · Ansible register result of multiple commands**

I was given a task to verify some routing entries for all Linux servers, and here is how I did it with an Ansible playbook:

```yaml
---
- hosts: Linux
  serial: 1
  tasks:
    - name: Check first
      command: /sbin/ip route list xxx.xxx.xxx.xxx/24
      register: result
      changed_when: false

    - debug: msg="{{ result.stdout }}"

    - name: Check second
      command: /sbin/ip route list xxx.xxx.xxx.xxx/24
      register: result
      changed_when: false

    - debug: msg="{{ result.stdout }}"
```

As you can see, I have to repeat the same task for each routing entry, and I believe I should be able to avoid this. I tried using a with_items loop but got the following error message:

```
One or more undefined variables: 'dict object' has no attribute 'stdout'
```

Is there a way to register a variable for each command and loop over them one by one?
tags: ansible | score: 28 | views: 103,442 | answers: 4
https://stackoverflow.com/questions/31881762/ansible-register-result-of-multiple-commands

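When `register` is combined with a loop, the registered variable gains a `results` list with one entry per item; the per-item stdout lives there rather than at the top level, which explains the error. A sketch (keeping the placeholder networks):

```yaml
- name: Check routes
  command: /sbin/ip route list {{ item }}
  register: route_result
  changed_when: false
  with_items:
    - xxx.xxx.xxx.xxx/24
    - yyy.yyy.yyy.yyy/24

- debug: msg="{{ item.stdout }}"
  with_items: "{{ route_result.results }}"
```
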
**30227140 · Best way to launch aws ec2 instances with ansible**

I'm trying to create a small webapp infrastructure with Ansible on Amazon AWS, and I want to do the whole process with it: launch an instance, configure services, etc. But I can't find a proper tool or module to deal with that from Ansible, mainly the EC2 launch. Thanks a lot.
tags: amazon-web-services, amazon-ec2, ansible | score: 28 | views: 29,493 | answers: 3
https://stackoverflow.com/questions/30227140/best-way-to-launch-aws-ec2-instances-with-ansible

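Ansible of that era ships an `ec2` module for exactly this, typically run from localhost and paired with `add_host` so the new machine can be configured in the same run. A sketch with placeholder AWS identifiers:

```yaml
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch an instance
      ec2:
        key_name: my-key                # placeholder
        instance_type: t2.micro
        image: ami-0123456789abcdef0    # placeholder AMI
        region: us-east-1
        group: webapp-sg                # placeholder security group
        wait: yes
        count: 1
      register: launched

    - name: Add the new instance to an in-memory group
      add_host:
        name: "{{ item.public_ip }}"
        groups: just_launched
      with_items: "{{ launched.instances }}"

- hosts: just_launched
  tasks: []   # service configuration would follow here
```
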
**34959691 · Pass a file and variables through --extra-vars**

I see that files can supply variables to Ansible through the command line using `--extra-vars "@some_file.json"`, or variables can be set in strings as key=value. Is it possible to do both? And if so, what's the syntax?
tags: ansible | score: 28 | views: 48,257 | answers: 2
https://stackoverflow.com/questions/34959691/pass-a-file-and-variables-through-extra-vars

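The flag can simply be repeated, mixing the file and inline forms in one invocation:

```
ansible-playbook site.yml \
  --extra-vars "@some_file.json" \
  --extra-vars "key1=value1 key2=value2"
```

Later occurrences override earlier ones where keys collide, which is worth keeping in mind when the file and the inline string define the same variable.
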
**27006925 · How to replace a directory with a symlink using ansible?**

I would like to replace /etc/nginx/sites-enabled with a symlink into my repo. I'm trying to do this with the file module, but that doesn't work, as the file module doesn't remove a directory even with the force option:

```yaml
- name: setup nginx sites-available symlink
  file: path=/etc/nginx/sites-available src=/repo/etc/nginx/sites-available state=link force=yes
  notify: restart nginx
```

I could fall back to using shell:

```yaml
- name: setup nginx sites-available symlink
  shell: test -d /etc/nginx/sites-available && rm -r /etc/nginx/sites-available && ln -sT /repo/etc/nginx/sites-available /etc/nginx/sites-available
  notify: restart nginx
```

Is there any better way to achieve this instead of falling back to shell?
tags: ansible | score: 28 | views: 34,218 | answers: 1
https://stackoverflow.com/questions/27006925/how-to-replace-a-directory-with-a-symlink-using-ansible

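A module-only sketch: stat the path first, remove it only while it is still a real directory, then create the link. This keeps the play idempotent, since after the first run the path is a symlink and the removal step is skipped:

```yaml
- name: Check what sites-available currently is
  stat:
    path: /etc/nginx/sites-available
  register: sa

- name: Remove the real directory (skipped once it is a symlink)
  file:
    path: /etc/nginx/sites-available
    state: absent
  when: sa.stat.isdir is defined and sa.stat.isdir

- name: Link sites-available to the repo
  file:
    src: /repo/etc/nginx/sites-available
    dest: /etc/nginx/sites-available
    state: link
  notify: restart nginx
```
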
**42302522 · Ansible: how to get output to display**

I have the following playbook (playbook with output). There are currently no errors and it runs fine; however, it does not display the output on the console. I have come across this with other playbooks and got around it by adding the following task to the playbook:

```yaml
- debug: var=output.stdout_lines
```

and it prints the output. However, when I tried to do the same thing in the above playbook, it said the variable was undefined (code not shown because it didn't work). Is anyone aware of a better way to get the output to print to the console without using debug? Any Ansible references would be greatly appreciated.
tags: ansible | score: 28 | views: 176,512 | answers: 1
https://stackoverflow.com/questions/42302522/ansible-how-to-get-output-to-display

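The standard pattern is register + debug; the "variable was undefined" symptom usually means the `register:` keyword is missing from the producing task or the names don't match. A minimal sketch:

```yaml
- name: Run something
  command: uptime
  register: output      # this name is what debug must reference

- debug: var=output.stdout_lines
```

Outside of debug, running the play with `-v` prints every task's full result, which is the closest thing to displaying everything without modifying the playbook.
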
**50151210 · Unable to run docker compose in an ansible playbook**

I appear to be unable to run docker-compose tasks in an Ansible playbook; I get stuck in a loop. The first error I get when running `sudo ansible-playbook playbook.yml` is the following:

```
fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Unable to load docker-compose. Try pip install docker-compose. Error: No module named compose"}
```

so I remote to that machine, run `sudo pip install docker-compose`, and try the playbook again. This time I get:

```
fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Cannot have both the docker-py and docker python modules installed together as they use the same namespace and cause a corrupt installation. Please uninstall both packages, and re-install only the docker-py or docker python module"}
```

so I try uninstalling docker python:

```
sudo uninstall docker python
```

Then I get the following when attempting to run the playbook again:

```
fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Failed to import docker-py - No module named docker. Try pip install docker-py"}
```

However, this is already installed on the machine; when I run `sudo pip install docker-py` I see the following:

```
Requirement already satisfied (use --upgrade to upgrade): docker-py in /usr/local/lib/python2.7/dist-packages
Cleaning up...
```

Does anyone know how to escape this loop and successfully get an Ansible playbook that uses docker-compose to run? The machine OS is Linux 14.04. Thanks.
Unable to run docker compose in an ansible playbook I appear to be unable to run docker compose tasks in an ansible playbook. I get stuck in a loop. The first error I get when running sudo ansible-playbook playbook.yml is the following fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Unable to load docker-compose. Try pip install docker-compose. Error: No module named compose"} so I remoted to that machine, ran sudo pip install docker-compose, and tried running the playbook again. This time I got... fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Cannot have both the docker-py and docker python modules installed together as they use the same namespace and cause a corrupt installation. Please uninstall both packages, and re-install only the docker-py or docker python module"} so I tried uninstalling docker python... sudo uninstall docker python Then I got the following when attempting to run the playbook again fatal: [10.0.3.5]: FAILED! => {"changed": false, "msg": "Failed to import docker-py - No module named docker. Try pip install docker-py"} However, this is already installed on the machine, as when I run sudo pip install docker-py I see the following... Requirement already satisfied (use --upgrade to upgrade): docker-py in /usr/local/lib/python2.7/dist-packages Cleaning up... Does anyone know how to escape this loop and successfully get an ansible playbook that uses docker-compose to run? The machine OS is Ubuntu 14.04. Thanks,
|
ansible, docker-compose
| 28
| 35,539
| 4
|
https://stackoverflow.com/questions/50151210/unable-to-run-docker-compose-in-an-ansible-playbook
|
37,189,826
|
Skip certain items on condition in ansible with_items loop
|
Is it possible to skip some items in Ansible with_items loop operator, on a conditional, without generating an additional step? Just for example: - name: test task command: touch "{{ item.item }}" with_items: - { item: "1" } - { item: "2", when: "test_var is defined" } - { item: "3" } in this task I want to create file 2 only when test_var is defined.
|
Skip certain items on condition in ansible with_items loop Is it possible to skip some items in Ansible with_items loop operator, on a conditional, without generating an additional step? Just for example: - name: test task command: touch "{{ item.item }}" with_items: - { item: "1" } - { item: "2", when: "test_var is defined" } - { item: "3" } in this task I want to create file 2 only when test_var is defined.
|
filter, ansible, skip
| 28
| 45,183
| 5
|
https://stackoverflow.com/questions/37189826/skip-certain-items-on-condition-in-ansible-with-items-loop
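There is no per-item when key in the loop itself, but one hedged workaround is to attach a custom flag to the items (skip_unless below is a made-up key name) and test it in a single task-level when, which Ansible evaluates once per item:

- name: test task
  command: touch "{{ item.item }}"
  when: item.skip_unless is not defined or item.skip_unless in vars
  with_items:
    - { item: "1" }
    - { item: "2", skip_unless: "test_var" }
    - { item: "3" }

Files 1 and 3 are always touched; file 2 only when a variable named test_var exists.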
|
20,136,358
|
ansible: git module is hanging
|
I am using Ansible and I am having a hard time making the git module work. I have read several posts of people having the same problem, I looked at the ansible doc, and I tried almost everything. I found a clear tutorial that I followed until they use git, but again I have a problem when I use my repository... :/ The git task just hangs... no error, it is just stuck! Here is my host file: [web] dev1 ansible_ssh_host=10.0.0.101 ansible_ssh_user=root This is a vagrant VM running on virtualbox on my computer. I took the playbook from this tutorial and did all the steps until step 08: [URL] I run it on my VM, it works fine, then I add one task "Deploy my code" to use my repository... but this task does not work. It is a private repository on bitbucket. Does it make a difference? - hosts: web tasks: - name: Deploy our awesome application action: git repo=[URL] dest=/var/www/awesome-app tags: deploy - name: Deploy my code action: git repo=[URL] dest=/var/www/my-app tags: deploy There might be something with the user, or the user running ansible, or the keys, etc., but I tried back and forth for hours and I am even more confused now... I just do not know what to do to debug this now and find out what is wrong and what I am missing. Thanks.
|
ansible: git module is hanging I am using Ansible and I am having a hard time making the git module work. I have read several posts of people having the same problem, I looked at the ansible doc, and I tried almost everything. I found a clear tutorial that I followed until they use git, but again I have a problem when I use my repository... :/ The git task just hangs... no error, it is just stuck! Here is my host file: [web] dev1 ansible_ssh_host=10.0.0.101 ansible_ssh_user=root This is a vagrant VM running on virtualbox on my computer. I took the playbook from this tutorial and did all the steps until step 08: [URL] I run it on my VM, it works fine, then I add one task "Deploy my code" to use my repository... but this task does not work. It is a private repository on bitbucket. Does it make a difference? - hosts: web tasks: - name: Deploy our awesome application action: git repo=[URL] dest=/var/www/awesome-app tags: deploy - name: Deploy my code action: git repo=[URL] dest=/var/www/my-app tags: deploy There might be something with the user, or the user running ansible, or the keys, etc., but I tried back and forth for hours and I am even more confused now... I just do not know what to do to debug this now and find out what is wrong and what I am missing. Thanks.
|
git, ansible
| 28
| 21,349
| 8
|
https://stackoverflow.com/questions/20136358/ansible-git-module-is-hanging
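With a private Bitbucket repository, the usual hang is the interactive SSH host-key prompt, which the git module cannot answer. A hedged sketch for Ansible 1.5 and later, assuming a deploy key already present on the target; the repo URL and key path are placeholders:

- name: Deploy my code
  git:
    repo: git@bitbucket.org:me/my-app.git     # placeholder URL
    dest: /var/www/my-app
    accept_hostkey: yes                       # skip the interactive host-key prompt
    key_file: /root/.ssh/deploy_key           # assumed deploy key on the host
  tags: deploy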
|
33,768,690
|
Is it possible to use inline templates?
|
I need to create a single file with the contents of a single fact in Ansible. I'm currently doing something like this: - template: src=templates/git_commit.j2 dest=/path/to/REVISION My template file looks like this: {{ git_commit }} Obviously, it'd make a lot more sense to just do something like this: - inline_template: content={{ git_revision }} dest=/path/to/REVISION Puppet offers something similar. Is there a way to do this in Ansible?
|
Is it possible to use inline templates? I need to create a single file with the contents of a single fact in Ansible. I'm currently doing something like this: - template: src=templates/git_commit.j2 dest=/path/to/REVISION My template file looks like this: {{ git_commit }} Obviously, it'd make a lot more sense to just do something like this: - inline_template: content={{ git_revision }} dest=/path/to/REVISION Puppet offers something similar. Is there a way to do this in Ansible?
|
ansible
| 28
| 26,645
| 4
|
https://stackoverflow.com/questions/33768690/is-it-possible-to-use-inline-templates
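There is no inline_template module, but the copy module's content parameter covers this exact case; a minimal sketch reusing the fact name from the question:

- copy:
    content: "{{ git_commit }}"
    dest: /path/to/REVISION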
|
31,685,125
|
Is it possible to map multiple attributes using Jinja/Ansible?
|
I would like to build an output that shows the key and value of a variable. The following works perfectly ... # Format in Ansible msg="{{ php_command_result.results | map(attribute='item') | join(', ') }}" # Output {'value': {'svn_tag': '20150703r1_6.36_homeland'}, 'key': 'ui'}, {'value': {'svn_tag': '20150702r1_6.36_homeland'}, 'key': 'api'} What I would like is to show the key and svn_tag together as so: I'm able to display either the key or svn_tag but getting them to go together doesn't work. msg="{{ php_command_result.results | map(attribute='item.key') | join(', ') }}" # Output ui, api However, this is what I want. # Desired Output api - 20150702r1_6.36_homeland ui - 20150703r1_6.36_homeland
|
Is it possible to map multiple attributes using Jinja/Ansible? I would like to build an output that shows the key and value of a variable. The following works perfectly ... # Format in Ansible msg="{{ php_command_result.results | map(attribute='item') | join(', ') }}" # Output {'value': {'svn_tag': '20150703r1_6.36_homeland'}, 'key': 'ui'}, {'value': {'svn_tag': '20150702r1_6.36_homeland'}, 'key': 'api'} What I would like is to show the key and svn_tag together as so: I'm able to display either the key or svn_tag but getting them to go together doesn't work. msg="{{ php_command_result.results | map(attribute='item.key') | join(', ') }}" # Output ui, api However, this is what I want. # Desired Output api - 20150702r1_6.36_homeland ui - 20150703r1_6.36_homeland
|
jinja2, ansible
| 28
| 38,387
| 6
|
https://stackoverflow.com/questions/31685125/is-it-possible-to-map-multiple-attributes-using-jinja-ansible
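Since map() extracts only one attribute at a time, pairing two attributes is simpler with a short Jinja loop; a sketch that assumes the registered php_command_result structure shown in the question:

- debug:
    msg: >-
      {% for r in php_command_result.results %}
      {{ r.item.key }} - {{ r.item.value.svn_tag }}
      {% endfor %}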
|
33,992,153
|
Ansible playbook-wide variable
|
I have a playbook with multiple hosts sections. I would like to define a variable in this playbook.yml file that applies only within the file, for example: vars: my_global_var: 'hello' - hosts: db tasks: -shell: echo {{my_global_var}} - hosts: web tasks: -shell: echo {{my_global_var}} The example above does not work. I have to either duplicate the variable for each hosts section (bad) or define it at a higher level, for example in my group_vars/all (not what I want, but works). I am also aware that variables files can be included, but this affects readability. Any suggestion for getting it in the right scope (e.g. the playbook file itself)?
|
Ansible playbook-wide variable I have a playbook with multiple hosts sections. I would like to define a variable in this playbook.yml file that applies only within the file, for example: vars: my_global_var: 'hello' - hosts: db tasks: -shell: echo {{my_global_var}} - hosts: web tasks: -shell: echo {{my_global_var}} The example above does not work. I have to either duplicate the variable for each hosts section (bad) or define it at a higher level, for example in my group_vars/all (not what I want, but works). I am also aware that variables files can be included, but this affects readability. Any suggestion for getting it in the right scope (e.g. the playbook file itself)?
|
variables, ansible
| 28
| 24,416
| 4
|
https://stackoverflow.com/questions/33992153/ansible-playbook-wide-variable
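Because a playbook file is a single YAML document, a YAML anchor can define the vars once on the first play and alias them into the others; a minimal sketch:

- hosts: db
  vars: &play_vars
    my_global_var: 'hello'
  tasks:
    - shell: echo {{ my_global_var }}

- hosts: web
  vars: *play_vars
  tasks:
    - shell: echo {{ my_global_var }}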
|
59,716,485
|
Ansible: How to change Python Version
|
Trying to use GNS3 to practice ansible scripting, there is a docker instance called "Network Automation" with built-in ansible. However, it still uses Python 2.7 as the interpreter: root@Network-Automation:~# ansible --version ansible 2.7.11 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/dist-packages/ansible executable location = /usr/bin/ansible python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609] I understand I can use the "ansible-playbook --version -e 'ansible_python_interpreter=/usr/bin/python3'" command to run a playbook with Python version 3, or I can specify the var within the playbook: - name: Common package hosts: all gather_facts: no vars: ansible_python_interpreter: /usr/bin/python3 roles: - { role: python, tags: [ init, python, common, addusers] } ... ... However, I would like a permanent way to force ansible to use the Python 3 version. How can I achieve this? Thanks.
|
Ansible: How to change Python Version Trying to use GNS3 to practice ansible scripting, there is a docker instance called "Network Automation" with built-in ansible. However, it still uses Python 2.7 as the interpreter: root@Network-Automation:~# ansible --version ansible 2.7.11 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/dist-packages/ansible executable location = /usr/bin/ansible python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609] I understand I can use the "ansible-playbook --version -e 'ansible_python_interpreter=/usr/bin/python3'" command to run a playbook with Python version 3, or I can specify the var within the playbook: - name: Common package hosts: all gather_facts: no vars: ansible_python_interpreter: /usr/bin/python3 roles: - { role: python, tags: [ init, python, common, addusers] } ... ... However, I would like a permanent way to force ansible to use the Python 3 version. How can I achieve this? Thanks.
|
python, python-3.x, ubuntu, ansible
| 28
| 142,111
| 4
|
https://stackoverflow.com/questions/59716485/ansible-how-to-change-python-version
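Two hedged options, depending on version: an inventory variable works even on the 2.7 release shown here, while the interpreter_python setting in ansible.cfg exists from Ansible 2.8 onward.

# inventory (any version)
[all:vars]
ansible_python_interpreter=/usr/bin/python3

# ansible.cfg (Ansible 2.8+)
[defaults]
interpreter_python = /usr/bin/python3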
|
53,205,687
|
Ansible: Unable to parse /etc/ansible/hosts as an inventory source
|
I am new to Ansible and got the issue below. I was able to SSH into my client machine, but I am unable to run a playbook. I am getting the error below: [WARNING]: Unable to parse /etc/ansible/hosts as an inventory source [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Could not match supplied host pattern, ignoring: a Here a is my group name. My hosts file is given below: --------- [a] 172.31.26.93 [all:vars] ansible_user=vagrant ansible_ssh_pass=vagrant ansible_ssh_host=172.31.26.93 ansible_ssh_port=22 ansible_ssh_user='ansibleuser' ansible_ssh_private_key_file=/home/ansibleuser/.ssh ------- My playbook file is given below: ---- - hosts: a tasks: - name: create a directory file: path=/home/ansiblesuser/www state=directory This is the first time I am getting this issue.
|
Ansible: Unable to parse /etc/ansible/hosts as an inventory source I am new to Ansible and got the issue below. I was able to SSH into my client machine, but I am unable to run a playbook. I am getting the error below: [WARNING]: Unable to parse /etc/ansible/hosts as an inventory source [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Could not match supplied host pattern, ignoring: a Here a is my group name. My hosts file is given below: --------- [a] 172.31.26.93 [all:vars] ansible_user=vagrant ansible_ssh_pass=vagrant ansible_ssh_host=172.31.26.93 ansible_ssh_port=22 ansible_ssh_user='ansibleuser' ansible_ssh_private_key_file=/home/ansibleuser/.ssh ------- My playbook file is given below: ---- - hosts: a tasks: - name: create a directory file: path=/home/ansiblesuser/www state=directory This is the first time I am getting this issue.
|
ssh, error-handling, ansible
| 27
| 176,142
| 13
|
https://stackoverflow.com/questions/53205687/ansible-unable-to-parse-etc-ansible-hosts-as-an-inventory-source
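For comparison, a minimal inventory that is known to parse, under the assumption that the dashed separator lines in the post are not in the real file, the conflicting duplicate user variables are dropped, and ansible_ssh_private_key_file points at an actual key file rather than the .ssh directory (the key filename is a placeholder):

[a]
172.31.26.93

[all:vars]
ansible_user=ansibleuser
ansible_ssh_pass=vagrant
ansible_ssh_port=22
ansible_ssh_private_key_file=/home/ansibleuser/.ssh/id_rsa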
|
37,623,849
|
How can I get a list of hosts from an Ansible inventory file?
|
Is there a way to use the Ansible Python API to get a list of hosts from a given inventory file / group combination? For example, our inventory files are split up by service type: [dev:children] dev_a dev_b [dev_a] my.host.int.abc.com [dev_b] my.host.int.xyz.com [prod:children] prod_a prod_b [prod_a] my.host.abc.com [prod_b] my.host.xyz.com Can I use ansible.inventory in some way to pass in a specific inventory file, and the group I want to act on, and have it return a list of hosts that match?
|
How can I get a list of hosts from an Ansible inventory file? Is there a way to use the Ansible Python API to get a list of hosts from a given inventory file / group combination? For example, our inventory files are split up by service type: [dev:children] dev_a dev_b [dev_a] my.host.int.abc.com [dev_b] my.host.int.xyz.com [prod:children] prod_a prod_b [prod_a] my.host.abc.com [prod_b] my.host.xyz.com Can I use ansible.inventory in some way to pass in a specific inventory file, and the group I want to act on, and have it return a list of hosts that match?
|
python, ansible, ansible-2.x
| 27
| 75,398
| 7
|
https://stackoverflow.com/questions/37623849/how-can-i-get-a-list-of-hosts-from-an-ansible-inventory-file
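Outside the Python API, newer Ansible releases (2.4+) also ship an ansible-inventory CLI that answers the same question from the shell, e.g. ansible-inventory -i my_inventory --graph dev to print the hosts under the dev group.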
|
34,334,377
|
how to specify user name in host file of ansible
|
I am using the host file as below, [qa-workstations] 10.39.19.190 ansible_user=test ansible_ssh_pass=test I am using the below command to execute the "whoami" command on the host root@Svr:~/ansible# ansible all -a "whoami" -i /etc/ansible/host 10.39.19.190 | success | rc=0 >> root Ansible by default is trying to use the user name I am logged in as, i.e. root, instead of the test user which I have specified in the host file. It works fine when I try to pass the username in the ansible CLI command root@Svr:~/ansible# ansible all -a "whoami" -i /etc/ansible/host -u test 10.39.19.190 | success | rc=0 >> test But it's not possible for me to pass the username every time in the CLI, as different hosts use different usernames. Also, I don't have a key pair generated for each host, because the host machines keep changing often. Version used: ansible 1.5.4, Ubuntu 14.04 LTS
|
how to specify user name in host file of ansible I am using the host file as below, [qa-workstations] 10.39.19.190 ansible_user=test ansible_ssh_pass=test I am using the below command to execute the "whoami" command on the host root@Svr:~/ansible# ansible all -a "whoami" -i /etc/ansible/host 10.39.19.190 | success | rc=0 >> root Ansible by default is trying to use the user name I am logged in as, i.e. root, instead of the test user which I have specified in the host file. It works fine when I try to pass the username in the ansible CLI command root@Svr:~/ansible# ansible all -a "whoami" -i /etc/ansible/host -u test 10.39.19.190 | success | rc=0 >> test But it's not possible for me to pass the username every time in the CLI, as different hosts use different usernames. Also, I don't have a key pair generated for each host, because the host machines keep changing often. Version used: ansible 1.5.4, Ubuntu 14.04 LTS
|
ansible, ansible-inventory
| 27
| 54,918
| 2
|
https://stackoverflow.com/questions/34334377/how-to-specify-user-name-in-host-file-of-ansible
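A hedged diagnosis: ansible_user only exists from Ansible 2.0 on, and the version shown here is 1.5.4, which expects the older ansible_ssh_user name. A minimal 1.x-style sketch of the same inventory line:

[qa-workstations]
10.39.19.190 ansible_ssh_user=test ansible_ssh_pass=test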
|
26,989,492
|
Ansible loop through group vars in template
|
I'm struggling with a pattern pulling inventory vars in Ansible templates, please help. :) I'm setting up a monitoring server, and I want to be able to automatically provision the servers using Ansible. I'm struggling with loops in the template to allow me to do this. My semi-working solution so far: in the playbook that calls the template task I have: monitoringserver.yml vars: servers_to_monitor: - {cname: web1, ip_address: 192.168.33.111} - {cname: web2, ip_address: 192.168.33.112} - {cname: db1, ip_address: 192.168.33.211} - {cname: db2, ip_address: 192.168.33.212} template.yml all_hosts += [ {% for host in servers_to_monitor %} "{{ host.cname }}{{ host.ip }}|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", {% endfor %} ] But this isn't ideal, as I can't define different IP addresses for different servers to monitor. How have other people done this? I'm sure it must be trivial, but my brain's struggling with the syntax. Thanks Alan edit: To clarify, the resulting template looks something like this: all_hosts += [ "web1|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", "web2|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", "db1|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", "db2|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", ] What I would like is for the values web1/web2/db1/db2 to be different depending on whether I'm using a production inventory file or a development inventory file.
|
Ansible loop through group vars in template I'm struggling with a pattern pulling inventory vars in Ansible templates, please help. :) I'm setting up a monitoring server, and I want to be able to automatically provision the servers using Ansible. I'm struggling with loops in the template to allow me to do this. My semi-working solution so far: in the playbook that calls the template task I have: monitoringserver.yml vars: servers_to_monitor: - {cname: web1, ip_address: 192.168.33.111} - {cname: web2, ip_address: 192.168.33.112} - {cname: db1, ip_address: 192.168.33.211} - {cname: db2, ip_address: 192.168.33.212} template.yml all_hosts += [ {% for host in servers_to_monitor %} "{{ host.cname }}{{ host.ip }}|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", {% endfor %} ] But this isn't ideal, as I can't define different IP addresses for different servers to monitor. How have other people done this? I'm sure it must be trivial, but my brain's struggling with the syntax. Thanks Alan edit: To clarify, the resulting template looks something like this: all_hosts += [ "web1|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", "web2|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", "db1|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", "db2|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/", ] What I would like is for the values web1/web2/db1/db2 to be different depending on whether I'm using a production inventory file or a development inventory file.
|
jinja2, ansible
| 27
| 143,076
| 1
|
https://stackoverflow.com/questions/26989492/ansible-loop-through-group-vars-in-template
|
35,892,455
|
Execute task (or handler) if any task failed
|
I am using Ansible to deploy a Django website to my servers (production, staging, etc.), and I would like to get a notification (via Slack in this case) if and only if any task fails. I can only figure out how to do it if a specified task fails (so I guess I could add a handler to all tasks), but intuition tells me there has to be an easier and more elegant option. Basically what I am thinking of is: --- - hosts: "{{hosts_to_deploy}}" - tasks: [...] - name: notify slack of deploy failure local_action: module: slack token: "{{slack_token}}" msg: "Deploy failed on {{inventory_hostname}}" when: # any task failed I have been diving into the Ansible documentation, especially the error handling section, and answers here at SO, but I'm struggling to find an answer to my question. So any help will be much appreciated.
|
Execute task (or handler) if any task failed I am using Ansible to deploy a Django website to my servers (production, staging, etc.), and I would like to get a notification (via Slack in this case) if and only if any task fails. I can only figure out how to do it if a specified task fails (so I guess I could add a handler to all tasks), but intuition tells me there has to be an easier and more elegant option. Basically what I am thinking of is: --- - hosts: "{{hosts_to_deploy}}" - tasks: [...] - name: notify slack of deploy failure local_action: module: slack token: "{{slack_token}}" msg: "Deploy failed on {{inventory_hostname}}" when: # any task failed I have been diving into the Ansible documentation, especially the error handling section, and answers here at SO, but I'm struggling to find an answer to my question. So any help will be much appreciated.
|
ansible
| 27
| 44,639
| 2
|
https://stackoverflow.com/questions/35892455/execute-task-or-handler-if-any-task-failed
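One pattern available since Ansible 2.0 is wrapping the deploy tasks in a block and putting the notification in its rescue section, which runs only when something inside the block fails. A sketch reusing the slack call from the question; the deploy step is a placeholder:

- hosts: "{{ hosts_to_deploy }}"
  tasks:
    - block:
        - name: deploy step                  # placeholder for the real deploy tasks
          command: /bin/true
      rescue:
        - name: notify slack of deploy failure
          local_action:
            module: slack
            token: "{{ slack_token }}"
            msg: "Deploy failed on {{ inventory_hostname }}"
        - name: still mark the play as failed
          fail:
            msg: "deploy failed, see Slack"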
|
41,947,750
|
Ansible task timeout max length
|
I execute a shell: docker ps ... task in some of my playbooks. This normally works but sometimes the docker daemon hangs and docker ps does not return for ~2 hours. How can I configure Ansible to timeout in a reasonable amount of time if docker ps does not return?
|
Ansible task timeout max length I execute a shell: docker ps ... task in some of my playbooks. This normally works but sometimes the docker daemon hangs and docker ps does not return for ~2 hours. How can I configure Ansible to timeout in a reasonable amount of time if docker ps does not return?
|
docker, ansible
| 27
| 70,117
| 5
|
https://stackoverflow.com/questions/41947750/ansible-task-timeout-max-length
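Two hedged options: async/poll caps a task's runtime on any 2.x release, and newer ansible-core (2.10+) also accepts a per-task timeout keyword.

- name: docker ps, but give up after 60 seconds
  shell: docker ps
  async: 60    # the task fails if it is still running after 60s
  poll: 5      # check on it every 5s

- name: same idea on ansible-core 2.10+
  shell: docker ps
  timeout: 60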
|
24,798,382
|
How to assign an array to a variable in an Ansible-Playbook
|
In a playbook I have the following code: --- - hosts: db vars: postgresql_ext_install_contrib: yes postgresql_pg_hba_passwd_hosts: ['10.129.181.241/32'] ... I would like to replace the value of postgresql_pg_hba_passwd_hosts with all of my webservers' private IPs. I understand I can get the values like this in a template: {% for host in groups['web'] %} {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }} {% endfor %} What is the simplest/easiest way to assign the result of this loop to a variable in a playbook? Or is there a better way to collect this information in the first place? Should I put this loop in a template? Additional challenge: I'd have to add /32 to every entry.
|
How to assign an array to a variable in an Ansible-Playbook In a playbook I have the following code: --- - hosts: db vars: postgresql_ext_install_contrib: yes postgresql_pg_hba_passwd_hosts: ['10.129.181.241/32'] ... I would like to replace the value of postgresql_pg_hba_passwd_hosts with all of my webservers' private IPs. I understand I can get the values like this in a template: {% for host in groups['web'] %} {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }} {% endfor %} What is the simplest/easiest way to assign the result of this loop to a variable in a playbook? Or is there a better way to collect this information in the first place? Should I put this loop in a template? Additional challenge: I'd have to add /32 to every entry.
|
ansible
| 27
| 102,017
| 5
|
https://stackoverflow.com/questions/24798382/how-to-assign-an-array-to-a-variable-in-an-ansible-playbook
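A sketch that builds the list inside the play with set_fact, assuming facts were already gathered for the web group so hostvars contains ansible_eth1 (the extract filter needs Ansible 2.1+); the regex_replace handles the /32 suffix:

- name: collect web private IPs as a /32 list
  set_fact:
    postgresql_pg_hba_passwd_hosts: >-
      {{ groups['web']
         | map('extract', hostvars, ['ansible_eth1', 'ipv4', 'address'])
         | map('regex_replace', '$', '/32')
         | list }}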
|
39,040,521
|
Apply with_items on multiple tasks
|
Is it possible to apply a list of items to multiple tasks in an Ansible playbook? To give an example: - name: download and execute hosts: server1 tasks: - get_url: url="some-url/{{item}}" dest="/tmp/{{item}}" with_items: - "file1.sh" - "file2.sh" - shell: /tmp/{{item}} >> somelog.txt with_items: - "file1.sh" - "file2.sh" Is there some syntax to avoid the repetition of the item-list?
|
Apply with_items on multiple tasks Is it possible to apply a list of items to multiple tasks in an Ansible playbook? To give an example: - name: download and execute hosts: server1 tasks: - get_url: url="some-url/{{item}}" dest="/tmp/{{item}}" with_items: - "file1.sh" - "file2.sh" - shell: /tmp/{{item}} >> somelog.txt with_items: - "file1.sh" - "file2.sh" Is there some syntax to avoid the repetition of the item-list?
|
ansible
| 27
| 52,766
| 2
|
https://stackoverflow.com/questions/39040521/apply-with-items-on-multiple-tasks
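There is no play-level with_items, but the repetition disappears once the list lives in a variable; a minimal sketch (the variable name scripts is mine):

- name: download and execute
  hosts: server1
  vars:
    scripts:
      - "file1.sh"
      - "file2.sh"
  tasks:
    - get_url:
        url: "some-url/{{ item }}"
        dest: "/tmp/{{ item }}"
      with_items: "{{ scripts }}"
    - shell: /tmp/{{ item }} >> somelog.txt
      with_items: "{{ scripts }}"

Alternatively, both tasks can be moved into a separate task file that is included once per item.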
|
19,942,269
|
How to check if list of strings are present in command output in Ansible?
|
I want to run an Ansible action on the condition that a shell command doesn't return the expected output. ogr2ogr --formats pretty-prints a list of compatible file formats. I want to grep the formats output, and if my expected file formats aren't in the output, I want to run a command to install these components. Does anyone know how to do this? - name: check if proper ogr formats set up command: ogr2ogr --formats | grep $item with_items: - PostgreSQL - FileGDB - Spatialite register: ogr_check # If grep from ogr_check didn't find a certain format from with_items, run this - name: install proper ogr formats action: DO STUFF when: Not sure what to do here
|
How to check if list of strings are present in command output in Ansible? I want to run an Ansible action on the condition that a shell command doesn't return the expected output. ogr2ogr --formats pretty-prints a list of compatible file formats. I want to grep the formats output, and if my expected file formats aren't in the output, I want to run a command to install these components. Does anyone know how to do this? - name: check if proper ogr formats set up command: ogr2ogr --formats | grep $item with_items: - PostgreSQL - FileGDB - Spatialite register: ogr_check # If grep from ogr_check didn't find a certain format from with_items, run this - name: install proper ogr formats action: DO STUFF when: Not sure what to do here
|
grep, ansible
| 27
| 54,489
| 2
|
https://stackoverflow.com/questions/19942269/how-to-check-if-list-of-strings-are-present-in-command-output-in-ansible
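A sketch: run the command once without the pipe (the command module does not support shell pipes anyway), register it, and test each expected format with the in operator; the installer path is a placeholder:

- name: check if proper ogr formats set up
  command: ogr2ogr --formats
  register: ogr_check
  changed_when: false

- name: install proper ogr formats
  command: /usr/local/bin/install-ogr-format.sh {{ item }}   # placeholder installer
  when: item not in ogr_check.stdout
  with_items:
    - PostgreSQL
    - FileGDB
    - Spatialite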
|
35,554,415
|
In Ansible, how to combine variables from separate files into one array?
|
In Ansible, in a role, I have vars files like this: vars/ app1.yml app2.yml Each file contains vars specific to an app/website, like this: name: app1 git_repo: [URL] # ... Ideally, without the task knowing in advance which apps have variable files, I'd like to end up with an array called apps like this: apps: - name: app1 git_repo: [URL] # ... - name: app2 git_repo: [URL] # ... i.e., one that combines the variables from the files into one. I know I can load all the variable files like this: - name: Load var files with_fileglob: - ../vars/*.yml include_vars: '{{ item }}' But given each file has identical variable names, it will overwrite each previous set of variables. I can't see a way to load the variables and put them into an apps array. I'm open to rearranging things slightly if it's the only way to make something like this possible.
|
In Ansible, how to combine variables from separate files into one array? In Ansible, in a role, I have vars files like this: vars/ app1.yml app2.yml Each file contains vars specific to an app/website, like this: name: app1 git_repo: [URL] # ... Ideally, without the task knowing in advance which apps have variable files, I'd like to end up with an array called apps like this: apps: - name: app1 git_repo: [URL] # ... - name: app2 git_repo: [URL] # ... i.e., one that combines the variables from the files into one. I know I can load all the variable files like this: - name: Load var files with_fileglob: - ../vars/*.yml include_vars: '{{ item }}' But given each file has identical variable names, it will overwrite each previous set of variables. I can't see a way to load the variables and put them into an apps array. I'm open to rearranging things slightly if it's the only way to make something like this possible.
|
ansible
| 27
| 53,724
| 8
|
https://stackoverflow.com/questions/35554415/in-ansible-how-to-combine-variables-from-separate-files-into-one-array
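One hedged trick: registering the include_vars loop keeps each file's variables in a separate result (even though they still overwrite each other in the global namespace), so the apps list can be rebuilt from the register:

- name: Load var files
  include_vars: "{{ item }}"
  with_fileglob:
    - ../vars/*.yml
  register: loaded_apps

- name: build the apps array from the registered results
  set_fact:
    apps: "{{ loaded_apps.results | map(attribute='ansible_facts') | list }}"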
|
24,738,264
|
How to test Ansible playbook using Docker
|
I'm new to ansible (and docker). I would like to test my ansible playbook before using it on any staging/production servers. Since I don't have access to an empty remote server, I thought the easiest way to test would be to use a Docker container and then just run my playbook with the Docker container as the host. I have a basic Dockerfile that creates a standard ubuntu container. How would I configure the ansible hosts in order to run it against the docker container? Also, I suspect I would need to "run" the docker container to allow ansible to connect to it.
|
How to test Ansible playbook using Docker I'm new to ansible (and docker). I would like to test my ansible playbook before using it on any staging/production servers. Since I don't have access to an empty remote server, I thought the easiest way to test would be to use a Docker container and then just run my playbook with the Docker container as the host. I have a basic Dockerfile that creates a standard ubuntu container. How would I configure the ansible hosts in order to run it against the docker container? Also, I suspect I would need to "run" the docker container to allow ansible to connect to it.
|
docker, ansible
| 27
| 22,310
| 4
|
https://stackoverflow.com/questions/24738264/how-to-test-ansible-playbook-using-docker
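One hedged setup, assuming Ansible 2.0+ on the control machine: start a long-lived container (e.g. docker run -d --name ansible-target ubuntu sleep infinity), then point the inventory at it with the docker connection plugin, so no SSH daemon is needed inside the container:

[test]
ansible-target ansible_connection=docker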
|
24,095,807
|
Ansible get the username from the command line
|
In my playbooks I reference the username (exclusively it's "ubuntu") a lot. Is there a built-in way to say "get it from the value passed on the command line"? I know I can do ansible-playbook <task> -u <user> -K --extra-vars "user=<user>" and then I can use {{user}} in the playbook, but it feels odd defining the user twice.
|
Ansible get the username from the command line In my playbooks I reference the username (exclusively it's "ubuntu") a lot. Is there a built-in way to say "get it from the value passed on the command line"? I know I can do ansible-playbook <task> -u <user> -K --extra-vars "user=<user>" and then I can use {{user}} in the playbook, but it feels odd defining the user twice.
|
ansible
| 27
| 32,480
| 2
|
https://stackoverflow.com/questions/24095807/ansible-get-the-username-from-the-command-line
|
38,393,343
|
How to use ansible 'expect' module for multiple different responses?
|
Here I am trying to test my bash script where it is prompting four times. #!/bin/bash date >/opt/prompt.txt read -p "enter one: " one echo $one echo $one >>/opt/prompt.txt read -p "enter two: " two echo $two echo $two >>/opt/prompt.txt read -p "enter three: " three echo $three echo $three >>/opt/prompt.txt read -p "enter password: " password echo $password echo $password >>/opt/prompt.txt For this script I wrote the code below, and it is working fine - hosts: "{{ hosts }}" tasks: - name: Test Script expect: command: sc.sh responses: enter one: 'one' enter two: 'two' enter three: 'three' enter password: 'pass' echo: yes But when I do the same for the mysql_secure_installation command, it is not working - hosts: "{{ hosts }}" tasks: - name: mysql_secure_installation Command Test expect: command: mysql_secure_installation responses: 'Enter current password for root (enter for none):': "\n" 'Set root password? [Y/n]:': 'y' 'New password:': '123456' 'Re-enter new password:': '123456' 'Remove anonymous users? [Y/n]:': 'y' 'Disallow root login remotely? [Y/n]:': 'y' 'Remove test database and access to it? [Y/n]:': 'y' 'Reload privilege tables now? [Y/n]:': 'y' echo: yes and its traceback is here: PLAY [S1] ********************************************************************** TASK [setup] ******************************************************************* ok: [S1] TASK [mysql_secure_installation Command Test] ********************************** fatal: [S1]: FAILED! => {"changed": true, "cmd": "mysql_secure_installation", "delta": "0:00:30.139266", "end": "2016-07-15 15:36:32.549415", "failed": true, "rc": 1, "start": "2016-07-15 15:36:02.410149", "stdout": "\r\n\r\n\r\n\r\nNOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL\r\n SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!\r\n\r\n\r\nIn order to log into MySQL to secure it, we'll need the current\r\npassword for the root user. If you've just installed MySQL, and\r\nyou haven't set the root password yet, the password will be blank,\r\nso you should just press enter here.\r\n\r\nEnter current password for root (enter for none): ", "stdout_lines": ["", "", "", "", "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL", " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", "", "", "In order to log into MySQL to secure it, we'll need the current", "password for the root user. If you've just installed MySQL, and", "you haven't set the root password yet, the password will be blank,", "so you should just press enter here.", "", "Enter current password for root (enter for none): "]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/jackson/AnsibleWorkSpace/AnsibleTest/example1.retry PLAY RECAP ********************************************************************* S1 : ok=1 changed=0 unreachable=0 failed=1 I have also tried blank '' instead of "\n" for the first answer, but it is not working either. I also visited the Ansible expect doc, but it shows only a very simple example and explanation. I have also tried regex matching for multiple different responses, but it is also not working. Please do not recommend the mysql module of Ansible, because here my purpose is to learn this module for future use.
|
How to use ansible 'expect' module for multiple different responses? Here I am trying to test my bash script where it is prompting four times. #!/bin/bash date >/opt/prompt.txt read -p "enter one: " one echo $one echo $one >>/opt/prompt.txt read -p "enter two: " two echo $two echo $two >>/opt/prompt.txt read -p "enter three: " three echo $three echo $three >>/opt/prompt.txt read -p "enter password: " password echo $password echo $password >>/opt/prompt.txt For this script I wrote the code below, and it is working fine - hosts: "{{ hosts }}" tasks: - name: Test Script expect: command: sc.sh responses: enter one: 'one' enter two: 'two' enter three: 'three' enter password: 'pass' echo: yes But when I do the same for the mysql_secure_installation command, it is not working - hosts: "{{ hosts }}" tasks: - name: mysql_secure_installation Command Test expect: command: mysql_secure_installation responses: 'Enter current password for root (enter for none):': "\n" 'Set root password? [Y/n]:': 'y' 'New password:': '123456' 'Re-enter new password:': '123456' 'Remove anonymous users? [Y/n]:': 'y' 'Disallow root login remotely? [Y/n]:': 'y' 'Remove test database and access to it? [Y/n]:': 'y' 'Reload privilege tables now? [Y/n]:': 'y' echo: yes and its traceback is here: PLAY [S1] ********************************************************************** TASK [setup] ******************************************************************* ok: [S1] TASK [mysql_secure_installation Command Test] ********************************** fatal: [S1]: FAILED! => {"changed": true, "cmd": "mysql_secure_installation", "delta": "0:00:30.139266", "end": "2016-07-15 15:36:32.549415", "failed": true, "rc": 1, "start": "2016-07-15 15:36:02.410149", "stdout": "\r\n\r\n\r\n\r\nNOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL\r\n SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!\r\n\r\n\r\nIn order to log into MySQL to secure it, we'll need the current\r\npassword for the root user. If you've just installed MySQL, and\r\nyou haven't set the root password yet, the password will be blank,\r\nso you should just press enter here.\r\n\r\nEnter current password for root (enter for none): ", "stdout_lines": ["", "", "", "", "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL", " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", "", "", "In order to log into MySQL to secure it, we'll need the current", "password for the root user. If you've just installed MySQL, and", "you haven't set the root password yet, the password will be blank,", "so you should just press enter here.", "", "Enter current password for root (enter for none): "]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/home/jackson/AnsibleWorkSpace/AnsibleTest/example1.retry PLAY RECAP ********************************************************************* S1 : ok=1 changed=0 unreachable=0 failed=1 I have also tried blank '' instead of "\n" for the first answer, but it is not working either. I also visited the Ansible expect doc, but it shows only a very simple example and explanation. I have also tried regex matching for multiple different responses, but it is also not working. Please do not recommend the mysql module of Ansible, because here my purpose is to learn this module for future use.
|
python, python-2.7, ansible, ansible-2.x
| 27
| 53,611
| 1
|
https://stackoverflow.com/questions/38393343/how-to-use-ansible-expect-module-for-multiple-different-responses
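The likely cause, hedged: the keys under responses are Python regular expressions, and these prompts contain ( ) [ ] ? metacharacters, so the literal strings never match and the module times out after 30 seconds, which matches the 0:00:30 delta in the output. A sketch with the metacharacters escaped:

- name: mysql_secure_installation Command Test
  expect:
    command: mysql_secure_installation
    responses:
      'Enter current password for root \(enter for none\):': "\n"
      'Set root password\? \[Y/n\]:': 'y'
      'New password:': '123456'
      'Re-enter new password:': '123456'
      'Remove anonymous users\? \[Y/n\]:': 'y'
      'Disallow root login remotely\? \[Y/n\]:': 'y'
      'Remove test database and access to it\? \[Y/n\]:': 'y'
      'Reload privilege tables now\? \[Y/n\]:': 'y'
    echo: yes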
|
74,544,930
|
Ansible: Unsupported parameters for (ansible.legacy.command) module: warn
|
Today I encountered an old part of our Ansible playbook failing: Current situation - name: Install something shell: somecomand args: warn: false This throws fatal: "msg": "Unsupported parameters for (ansible.legacy.command) module: warn. Supported parameters include: chdir, _raw_params, removes, stdin, argv, executable, strip_empty_ends, stdin_add_newline, creates, _uses_shell." Apparently warn is no longer supported. I tried to find the "new" way to ignore warnings, but everything I found directed me towards the old warn: false method. Question What is the "new" way to ignore warning or - if a better way comes to mind: to fix this issue? Ansible version pip show ansible Name: ansible Version: 7.0.0 Summary: Radically simple IT automation Home-page: [URL] Author: Ansible, Inc. Author-email: info@ansible.com License: GPLv3+ Location: /usr/local/lib/python3.10/dist-packages Requires: ansible-core Required-by: The same playbook runs without issues under: pip show ansible Name: ansible Version: 6.6.0 Summary: Radically simple IT automation Home-page: [URL] Author: Ansible, Inc. Author-email: info@ansible.com License: GPLv3+ Location: /home/ubuntu/.local/lib/python3.10/site-packages Requires: ansible-core Required-by:
|
Ansible: Unsupported parameters for (ansible.legacy.command) module: warn Today I encountered an old part of our Ansible playbook failing: Current situation - name: Install something shell: somecomand args: warn: false This throws fatal: "msg": "Unsupported parameters for (ansible.legacy.command) module: warn. Supported parameters include: chdir, _raw_params, removes, stdin, argv, executable, strip_empty_ends, stdin_add_newline, creates, _uses_shell." Apparently warn is no longer supported. I tried to find the "new" way to ignore warnings, but everything I found directed me towards the old warn: false method. Question What is the "new" way to ignore warning or - if a better way comes to mind: to fix this issue? Ansible version pip show ansible Name: ansible Version: 7.0.0 Summary: Radically simple IT automation Home-page: [URL] Author: Ansible, Inc. Author-email: info@ansible.com License: GPLv3+ Location: /usr/local/lib/python3.10/dist-packages Requires: ansible-core Required-by: The same playbook runs without issues under: pip show ansible Name: ansible Version: 6.6.0 Summary: Radically simple IT automation Home-page: [URL] Author: Ansible, Inc. Author-email: info@ansible.com License: GPLv3+ Location: /home/ubuntu/.local/lib/python3.10/site-packages Requires: ansible-core Required-by:
|
ansible
| 27
| 36,369
| 1
|
https://stackoverflow.com/questions/74544930/ansible-unsupported-parameters-for-ansible-legacy-command-module-warn
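The warn parameter (and the command_warnings setting behind it) was removed in ansible-core 2.14, which is what the ansible 7.0.0 package ships; the fix is simply to delete the argument:

- name: Install something
  shell: somecomand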
|
47,167,446
|
How to set an Ansible variable for all plays/hosts?
|
This question is NOT answered. Someone mentioned environment variables. Can you elaborate on this? 5/28/2024 - Simplified the question (below): This is an oracle problem. I have 4 PCs. I need program 1 to run on the one machine that has Drive E. Out of the remaining 3 that don't have drive E, I need program 2 to run on ONLY one of the 3. For the other 2, don't run anything. This seems like a simple problem, but not in ansible. It keeps coming up, especially in error conditions. I need a global variable: one that I can set when processing one host's play, then check at a later time with another host. In a nutshell, so I can branch later in the playbook, depending on the variable. We have no control over custom software installation, but if it is installed, we have to put different software on other machines. To top it off, the installations vary, depending on the VM folder. My kingdom for a global var. The scope of variables relates ONLY to the current ansible_hostname. Yes, we have group_vars/all.yml as globals, but we can't set them in a play. If I set a variable, no other host's play/task can see it. I understand the scope of variables, but I want to SET a global variable that can be read throughout all playbook plays. The actual implementation is unimportant, but variable access is (important). My Question: Is there a way to set a variable that can be checked when running a different task on another host? Something like setGlobalSpaceVar(myvar, true)? I know there isn't any such method, but I'm looking for a work-around. Rephrasing: set a variable in one task for one host, then later, in another task for another host, read that variable. The only way I can think of is to change a file on the controller, but that seems bogus. An example: The following relates to oracle backups and our local executable, but I'm keeping it generic. For below - yes, I can do a run_once, but that won't answer my question. This variable access problem keeps coming up in different contexts. I have 4 xyz servers. I have 2 programs that need to be executed, but only on 2 different machines. I don't know which. The settings may change for different VM environments. Our programOne is run on the server that has a drive E. I can find which server has drive E using ansible and run the play accordingly, setting a variable (driveE_machine) on that host. It only applies to that host, so the other 3 machines won't have driveE_machine set. In a later play, I need to execute another program on ONLY one of the other 3 machines. That means I need to set a variable that can be read by the other 2 hosts that didn't run the 2nd program. I'm not sure how to do it. Inventory file: [xyz] serverxyz[1:4].private.mystuff Playbook example: --- - name: stackoverflow variable question hosts: xyz gather_facts: no serial: 1 tasks: - name: find out who has drive E win_shell: dir e:\ register: adminPage ignore_errors: true # This sets a variable that can only be read for that host - name: set fact driveE_machine when rc is 0 set_fact: driveE_machine: "{{inventory_hostname}}" when: adminPage.rc == 0 - name: run program 1 include: tasks/program1.yml when: driveE_machine is defined # program2.yml executes program2 and needs to set some kind of variable # so this include can only be executed once for the other 3 machines # (not one that has driveE_machine defined and ???
# please don't say run_once: true - that won't solve my variable access question Is there a way to set a variable that can be checked when running a task on another host?
|
How to set an Ansible variable for all plays/hosts? This question is NOT answered. Someone mentioned environment variables. Can you elaborate on this? 5/28/2024 - Simplified the question (below): This is an oracle problem. I have 4 PCs. I need program 1 to run on the one machine that has Drive E. Out of the remaining 3 that don't have drive E, I need program 2 to run on ONLY one of the 3. For the other 2, don't run anything. This seems like a simple problem, but not in ansible. It keeps coming up, especially in error conditions. I need a global variable: one that I can set when processing one host's play, then check at a later time with another host. In a nutshell, so I can branch later in the playbook, depending on the variable. We have no control over custom software installation, but if it is installed, we have to put different software on other machines. To top it off, the installations vary, depending on the VM folder. My kingdom for a global var. The scope of variables relates ONLY to the current ansible_hostname. Yes, we have group_vars/all.yml as globals, but we can't set them in a play. If I set a variable, no other host's play/task can see it. I understand the scope of variables, but I want to SET a global variable that can be read throughout all playbook plays. The actual implementation is unimportant, but variable access is (important). My Question: Is there a way to set a variable that can be checked when running a different task on another host? Something like setGlobalSpaceVar(myvar, true)? I know there isn't any such method, but I'm looking for a work-around. Rephrasing: set a variable in one task for one host, then later, in another task for another host, read that variable. The only way I can think of is to change a file on the controller, but that seems bogus. An example: The following relates to oracle backups and our local executable, but I'm keeping it generic. For below - yes, I can do a run_once, but that won't answer my question. This variable access problem keeps coming up in different contexts. I have 4 xyz servers. I have 2 programs that need to be executed, but only on 2 different machines. I don't know which. The settings may change for different VM environments. Our programOne is run on the server that has a drive E. I can find which server has drive E using ansible and run the play accordingly, setting a variable (driveE_machine) on that host. It only applies to that host, so the other 3 machines won't have driveE_machine set. In a later play, I need to execute another program on ONLY one of the other 3 machines. That means I need to set a variable that can be read by the other 2 hosts that didn't run the 2nd program. I'm not sure how to do it. Inventory file: [xyz] serverxyz[1:4].private.mystuff Playbook example: --- - name: stackoverflow variable question hosts: xyz gather_facts: no serial: 1 tasks: - name: find out who has drive E win_shell: dir e:\ register: adminPage ignore_errors: true # This sets a variable that can only be read for that host - name: set fact driveE_machine when rc is 0 set_fact: driveE_machine: "{{inventory_hostname}}" when: adminPage.rc == 0 - name: run program 1 include: tasks/program1.yml when: driveE_machine is defined # program2.yml executes program2 and needs to set some kind of variable # so this include can only be executed once for the other 3 machines # (not one that has driveE_machine defined and ???
# please don't say run_once: true - that won't solve my variable access question Is there a way to set a variable that can be checked when running a task on another host?
|
ansible
| 27
| 48,601
| 7
|
https://stackoverflow.com/questions/47167446/how-to-set-an-ansible-variable-for-all-plays-hosts
|
41,579,581
|
Filter object by property and select with key in jmespath
|
I'm trying to filter properties of an object in jmespath based on the value of a subproperty and want to include only those properties where the subproperty is set to a specific value. Based on this example data: { "a": { "feature": { "enabled": true, } }, "b": { }, "c": { "feature": { "enabled": false } } } I'd like to get an object with all properties where the feature is enabled. { "a": { "feature": { "enabled": true, } } } I figured I could use this jmespath query to filter the objects where the feature.enabled property is set to true. Unfortunately, it doesn't seem to work and instead returns an empty array. *[?feature.enabled==true] *.feature.enabled or *[feature.enabled] return just the boolean values without any context. Even if *[?feature.enabled== true ] would work, it would just be an array of the property values, but I need the keys ( a and c ) as well. Is there any way to make this happen in jmespath? This is all part of an ansible playbook, so there would certainly be a way to achieve selection in a different way (Jinja2 templates or custom plugin), but I wanted to try jmespath and would reason that it should be capable of such a task.
|
Filter object by property and select with key in jmespath I'm trying to filter properties of an object in jmespath based on the value of a subproperty and want to include only those properties where the subproperty is set to a specific value. Based on this example data: { "a": { "feature": { "enabled": true, } }, "b": { }, "c": { "feature": { "enabled": false } } } I'd like to get an object with all properties where the feature is enabled. { "a": { "feature": { "enabled": true, } } } I figured I could use this jmespath query to filter the objects where the feature.enabled property is set to true. Unfortunately, it doesn't seem to work and instead returns an empty array. *[?feature.enabled==true] *.feature.enabled or *[feature.enabled] return just the boolean values without any context. Even if *[?feature.enabled== true ] would work, it would just be an array of the property values, but I need the keys ( a and c ) as well. Is there any way to make this happen in jmespath? This is all part of an ansible playbook, so there would certainly be a way to achieve selection in a different way (Jinja2 templates or custom plugin), but I wanted to try jmespath and would reason that it should be capable of such a task.
|
json, ansible, normalization, jmespath
| 27
| 25,654
| 4
|
https://stackoverflow.com/questions/41579581/filter-object-by-property-and-select-with-key-in-jmespath
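Staying inside Ansible filters rather than JMESPath, one hedged sketch that keeps the keys uses dict2items/items2dict (Ansible 2.7+); mydict stands for the example object:

- set_fact:
    enabled_only: >-
      {{ mydict | dict2items
         | selectattr('value.feature.enabled', 'defined')
         | selectattr('value.feature.enabled')
         | list
         | items2dict }}

The first selectattr drops entries like b that have no feature key; the second keeps only those where enabled is true.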
|
30,637,054
|
Case statement for setting var in Ansible/Jinja2
|
I'm using Ansible with Jinja2 templates, and this is a scenario that I can't find a solution for in Ansible's documentation or googling around for Jinja2 examples. Here's the logic that I want to achieve in Ansible: if {{ existing_ansible_var }} == "string1" new_ansible_var = "a" else if {{ existing_ansible_var }} == "string2" new_ansible_var = "b" <...> else new_ansible_var = "" I could probably do this by combining several techniques, the variable assignment from here: Set variable in jinja , the conditional comparison here: [URL] , and the defaulting filter here: [URL] , ...but I feel like that's overkill. Is there a simpler way to do this?
|
Case statement for setting var in Ansible/Jinja2 I'm using Ansible with Jinja2 templates, and this is a scenario that I can't find a solution for in Ansible's documentation or googling around for Jinja2 examples. Here's the logic that I want to achieve in Ansible: if {{ existing_ansible_var }} == "string1" new_ansible_var = "a" else if {{ existing_ansible_var }} == "string2" new_ansible_var = "b" <...> else new_ansible_var = "" I could probably do this by combining several techniques, the variable assignment from here: Set variable in jinja , the conditional comparison here: [URL] , and the defaulting filter here: [URL] , ...but I feel like that's overkill. Is there a simpler way to do this?
|
jinja2, ansible
| 26
| 69,605
| 3
|
https://stackoverflow.com/questions/30637054/case-statement-for-setting-var-in-ansible-jinja2
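A compact alternative to chained conditionals is a mapping lookup with a default; a minimal sketch:

- set_fact:
    new_ansible_var: "{{ {'string1': 'a', 'string2': 'b'}.get(existing_ansible_var, '') }}"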
|
42,832,530
|
with_items output is too verbose
|
I wrote an ansible task to iterate over a list of settings using with_items . Now all my settings are logged when I run ansible. It is very verbose and makes it hard to see what is happening. But, if I disable all the output with no_log , I will have no way to identify specific items when they fail. How could the output be improved — to show only an identifier for each item? Example task: - authorized_key: user: "{{ item.user }}" key: "{{ item.key }}" with_items: "{{ ssh_keys }}" Example output: TASK [sshkey-alan-sysop : ssh authorized keys] ********************************* ok: [brick] => (item={u'user': u'alan-sysop', u'key': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAgRe16yLZa8vbzsrxUpT5MdHoEYYd/awAnEWML4g+YoUvLDKr+zwmu78ze/E1NSipoZejXpggUYRVhh8MOiCX6qpUguBDWZFlvSCE/7uXWWg7Oht0f1kDS2xU7YiycPIzMN1dmUEFY9AixnN936Dq6nOtEzgBwjo66I1YC/5jrsQEqF19shx43A4DTFlPUz/PnsqHl2ESrkIk3e8zyidaPN2pRbA5iKzdvPW4E2W2tKw9ll40vqRXzaWIF7v293Ostwi1IPi2erlC777DhjZUhZ1VGXIR7FDAfANzalrMe6c/ZysiXewiUYgMw0I8Dh1LK3QMj9Kuo35S5E0Xj3TB alan-sysop@alan-laptop'})
|
with_items output is too verbose I wrote an ansible task to iterate over a list of settings using with_items . Now all my settings are logged when I run ansible. It is very verbose and makes it hard to see what is happening. But, if I disable all the output with no_log , I will have no way to identify specific items when they fail. How could the output be improved — to show only an identifier for each item? Example task: - authorized_key: user: "{{ item.user }}" key: "{{ item.key }}" with_items: "{{ ssh_keys }}" Example output: TASK [sshkey-alan-sysop : ssh authorized keys] ********************************* ok: [brick] => (item={u'user': u'alan-sysop', u'key': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAgRe16yLZa8vbzsrxUpT5MdHoEYYd/awAnEWML4g+YoUvLDKr+zwmu78ze/E1NSipoZejXpggUYRVhh8MOiCX6qpUguBDWZFlvSCE/7uXWWg7Oht0f1kDS2xU7YiycPIzMN1dmUEFY9AixnN936Dq6nOtEzgBwjo66I1YC/5jrsQEqF19shx43A4DTFlPUz/PnsqHl2ESrkIk3e8zyidaPN2pRbA5iKzdvPW4E2W2tKw9ll40vqRXzaWIF7v293Ostwi1IPi2erlC777DhjZUhZ1VGXIR7FDAfANzalrMe6c/ZysiXewiUYgMw0I8Dh1LK3QMj9Kuo35S5E0Xj3TB alan-sysop@alan-laptop'})
|
debugging, logging, ansible
| 26
| 15,582
| 2
|
https://stackoverflow.com/questions/42832530/with-items-output-is-too-verbose
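Since Ansible 2.2, loop_control's label replaces the full item in the output with just the chosen identifier, which keeps failures traceable without no_log:

- authorized_key:
    user: "{{ item.user }}"
    key: "{{ item.key }}"
  with_items: "{{ ssh_keys }}"
  loop_control:
    label: "{{ item.user }}"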
|
51,167,099
|
Installing Ansible Python package on Windows
|
I'm struggling to install the Ansible Python package on my Windows 10 machine. I don't need Ansible to run on my machine; this is purely for development purposes on my Windows host. All commands will later be issued on a Linux machine. After running: pip install ansible I get the following exception: Command "c:\users\evaldas.buinauskas\appdata\local\programs\python\python37-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-install-hpay_le9\ansible\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-record-dvfgngpp\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-install-hpay_le9\ansible\ Also, there's a repetitive exception that I think is the root cause: error: can't copy 'lib\ansible\module_utils\ansible_release.py': doesn't exist or not a regular file This GitHub issue says that installing should be possible, not running it. That's basically all I really need. I tried running CMD/PowerShell/Cygwin as Administrator; it didn't help. Also, there's an answer that tells how to install it on Windows: How to overcome - pip install ansible on windows failing with filename or extension too long on windows But I don't really understand how to get a wheel file for the Ansible package.
|
Installing Ansible Python package on Windows I'm struggling to install the Ansible Python package on my Windows 10 machine. I don't need Ansible to run on my machine; this is purely for development purposes on my Windows host. All commands will later be issued on a Linux machine. After running: pip install ansible I get the following exception: Command "c:\users\evaldas.buinauskas\appdata\local\programs\python\python37-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-install-hpay_le9\ansible\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-record-dvfgngpp\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-install-hpay_le9\ansible\ Also, there's a repetitive exception that I think is the root cause: error: can't copy 'lib\ansible\module_utils\ansible_release.py': doesn't exist or not a regular file This GitHub issue says that installing should be possible, not running it. That's basically all I really need. I tried running CMD/PowerShell/Cygwin as Administrator; it didn't help. Also, there's an answer that tells how to install it on Windows: How to overcome - pip install ansible on windows failing with filename or extension too long on windows But I don't really understand how to get a wheel file for the Ansible package.
|
python, pip, ansible
| 26
| 34,990
| 5
|
https://stackoverflow.com/questions/51167099/installing-ansible-python-package-on-windows
|
33,061,524
|
Ansible: ansible_user in inventory vs remote_user in playbook
|
I am trying to run an Ansible playbook against a server using an account other than the one I am logged in as on the control machine. I tried to specify an ansible_user in the inventory file according to the documentation on Inventory : [srv1] 192.168.1.146 ansible_connection=ssh ansible_user=user1 However, Ansible called with ansible-playbook -i inventory playbook.yml -vvvv prints the following: GATHERING FACTS *************************************************************** <192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf What worked for me was adding the remote_user argument to the playbook: - hosts: srv1 remote_user: user1 Now the same Ansible command connects as user1 : GATHERING FACTS *************************************************************** <192.168.1.146> ESTABLISH CONNECTION FOR USER: user1 Also, adding the remote_user variable to ansible.cfg makes Ansible use the intended user instead of the logged-on one. Are ansible_user in the inventory file and remote_user in the playbook/ansible.cfg for different purposes? What is ansible_user used for? Or why doesn't Ansible observe the setting in the inventory?
|
Ansible: ansible_user in inventory vs remote_user in playbook I am trying to run an Ansible playbook against a server using an account other than the one I am logged in as on the control machine. I tried to specify an ansible_user in the inventory file according to the documentation on Inventory : [srv1] 192.168.1.146 ansible_connection=ssh ansible_user=user1 However, Ansible called with ansible-playbook -i inventory playbook.yml -vvvv prints the following: GATHERING FACTS *************************************************************** <192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf What worked for me was adding the remote_user argument to the playbook: - hosts: srv1 remote_user: user1 Now the same Ansible command connects as user1 : GATHERING FACTS *************************************************************** <192.168.1.146> ESTABLISH CONNECTION FOR USER: user1 Also, adding the remote_user variable to ansible.cfg makes Ansible use the intended user instead of the logged-on one. Are ansible_user in the inventory file and remote_user in the playbook/ansible.cfg for different purposes? What is ansible_user used for? Or why doesn't Ansible observe the setting in the inventory?
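A plausible explanation, hedged because the Ansible version isn't stated: ansible_user only exists from Ansible 2.0 on; on 1.9.x the inventory variable was still called ansible_ssh_user, and an unrecognised variable is silently ignored, which is exactly the fall-back-to-local-account symptom shown above. A sketch of an inventory line that covers both spellings:

[srv1]
192.168.1.146 ansible_connection=ssh ansible_user=user1 ansible_ssh_user=user1

Conceptually, remote_user (playbook or ansible.cfg) is the play-level default, while ansible_user is the per-host inventory override; when both are set on a version that understands them, the inventory value wins.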
|
ansible, ansible-2.x, ansible-inventory
| 26
| 74,127
| 2
|
https://stackoverflow.com/questions/33061524/ansible-ansible-user-in-inventory-vs-remote-user-in-playbook
|
41,613,343
|
Define one notify block for several tasks in Ansible
|
Is it possible to define one notify block for several tasks? In the next code snippet notify: restart tomcat is defined 3 times, but I want to define it only once and "apply" it to the list of tasks - name : template context.xml template: src: context.xml.j2 dest: /usr/share/tomcat/conf/context.xml group: tomcat mode: 0664 notify: restart tomcat - name : copy server.xml copy: src: server.xml dest: /etc/tomcat/server.xml group: tomcat mode: 0664 notify: restart tomcat - name : copy atomikos-integration-extension copy: src: atomikos-integration-extension-3.7.1-20120529.jar dest: /usr/share/tomcat/ext-libs/ group: tomcat mode: 0664 notify: restart tomcat
|
Define one notify block for several tasks in Ansible Is it possible to define one notify block for several tasks? In the next code snippet notify: restart tomcat is defined 3 times, but I want to define it only once and "apply" it to the list of tasks - name : template context.xml template: src: context.xml.j2 dest: /usr/share/tomcat/conf/context.xml group: tomcat mode: 0664 notify: restart tomcat - name : copy server.xml copy: src: server.xml dest: /etc/tomcat/server.xml group: tomcat mode: 0664 notify: restart tomcat - name : copy atomikos-integration-extension copy: src: atomikos-integration-extension-3.7.1-20120529.jar dest: /usr/share/tomcat/ext-libs/ group: tomcat mode: 0664 notify: restart tomcat
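Whether notify is accepted at block level varies by Ansible version, but tasks that use the same module can usually be collapsed into a single looped task, which leaves one notify per module. A sketch following the question's paths (the template task would stay separate since it is a different module):

- name: copy tomcat files
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    group: tomcat
    mode: 0664
  with_items:
    - { src: server.xml, dest: /etc/tomcat/server.xml }
    - { src: atomikos-integration-extension-3.7.1-20120529.jar, dest: /usr/share/tomcat/ext-libs/ }
  notify: restart tomcat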
|
ansible
| 26
| 14,703
| 4
|
https://stackoverflow.com/questions/41613343/define-one-notify-block-for-several-tasks-in-ansible
|
63,177,609
|
zsh: command not found: ansible after pip installing
|
I've installed ansible on my Mac using pip as advised by ansible's documentation: [URL] However, when I try to run ansible I get the following: zsh: command not found: ansible I've never had this problem when installing ansible before. pip-installing again tells me it's already installed under site packages: Requirement already satisfied: ansible in ./Library/Python/3.8/lib/python/site-packages (2.9.11) And my python installation in ~/.zshrc points to: # Add user python 3.7 to path export PATH="/usr/local/opt/python/libexec/bin:$PATH" Might be obvious to some, but I can't figure out why this simple installation isn't working.
|
zsh: command not found: ansible after pip installing I've installed ansible on my Mac using pip as advised by ansible's documentation: [URL] However, when I try to run ansible I get the following: zsh: command not found: ansible I've never had this problem when installing ansible before. pip-installing again tells me it's already installed under site packages: Requirement already satisfied: ansible in ./Library/Python/3.8/lib/python/site-packages (2.9.11) And my python installation in ~/.zshrc points to: # Add user python 3.7 to path export PATH="/usr/local/opt/python/libexec/bin:$PATH" Might be obvious to some, but I can't figure out why this simple installation isn't working.
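Given that pip reports the package under ./Library/Python/3.8/..., the ansible entry-point script most likely landed in the matching per-user bin directory, which zsh does not search by default. A hedged sketch for ~/.zshrc (the exact path is an assumption inferred from the pip output above):

# ~/.zshrc: expose scripts from pip --user installs for Python 3.8
export PATH="$HOME/Library/Python/3.8/bin:$PATH"

After sourcing the file or opening a new shell, ansible --version should confirm whether that was the missing directory.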
|
macos, ansible
| 25
| 54,479
| 8
|
https://stackoverflow.com/questions/63177609/zsh-command-not-found-ansible-after-pip-installing
|
52,743,147
|
Changing ansible loop due to v2.11 deprecation
|
I'm running a playbook which defines several packages to install via apt : - name: Install utility packages common to all hosts apt: name: "{{ item }}" state: present autoclean: yes with_items: - aptitude - jq - curl - git-core - at ... A recent ansible update on my system now renders this message concerning the playbook above: [DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying name: {{ item }}, please use name: [u'aptitude', u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp', u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi'] and remove the loop. If I'm understanding this correctly, Ansible now wants this list of packages as an array, which leaves this: name: [u'aptitude', u'jq', u'curl', u'git-core', u'at','heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi'] Is there a better way? It just seems like I'll be scrolling right forever in VIM trying to maintain this. Either that, or word wrap it and deal with a word-cloud of packages.
|
Changing ansible loop due to v2.11 deprecation I'm running a playbook which defines several packages to install via apt : - name: Install utility packages common to all hosts apt: name: "{{ item }}" state: present autoclean: yes with_items: - aptitude - jq - curl - git-core - at ... A recent ansible update on my system now renders this message concerning the playbook above: [DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying name: {{ item }}, please use name: [u'aptitude', u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp', u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi'] and remove the loop. If I'm understanding this correctly, Ansible now wants this list of packages as an array, which leaves this: name: [u'aptitude', u'jq', u'curl', u'git-core', u'at','heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi'] Is there a better way? It just seems like I'll be scrolling right forever in VIM trying to maintain this. Either that, or word wrap it and deal with a word-cloud of packages.
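The deprecation message shows the list inline, but a YAML block sequence is equivalent and keeps one package per line, so nothing scrolls. A minimal sketch:

- name: Install utility packages common to all hosts
  apt:
    name:
      - aptitude
      - jq
      - curl
      - git-core
      - at
    state: present
    autoclean: yes

Because name accepts a list directly, the loop disappears entirely and apt resolves everything in a single transaction, which is also faster than one invocation per package.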
|
linux, ansible
| 25
| 15,153
| 4
|
https://stackoverflow.com/questions/52743147/changing-ansible-loop-due-to-v2-11-deprecation
|
46,515,704
|
How to kill a running process using ansible?
|
I have an ansible playbook to kill running processes and it works great most of the time! However, from time to time we find processes that just can't be killed, so "wait_for" reaches the timeout, throws an error and stops the play. The current workaround is to manually go into the box, use "kill -9" and run the ansible playbook again, so I was wondering if there is any way to handle this scenario from ansible itself? I mean, I don't want to use kill -9 from the beginning, but maybe there is a way to handle the timeout, such as using kill -9 only if the process hasn't been killed in 300 seconds? What would be the best way to do it? These are the tasks I currently have: - name: Get running processes shell: "ps -ef | grep -v grep | grep -w {{ PROCESS }} | awk '{print $2}'" register: running_processes - name: Kill running processes shell: "kill {{ item }}" with_items: "{{ running_processes.stdout_lines }}" - name: Waiting until all running processes are killed wait_for: path: "/proc/{{ item }}/status" state: absent with_items: "{{ running_processes.stdout_lines }}" Thanks!
|
How to kill a running process using ansible? I have an ansible playbook to kill running processes and it works great most of the time! However, from time to time we find processes that just can't be killed, so "wait_for" reaches the timeout, throws an error and stops the play. The current workaround is to manually go into the box, use "kill -9" and run the ansible playbook again, so I was wondering if there is any way to handle this scenario from ansible itself? I mean, I don't want to use kill -9 from the beginning, but maybe there is a way to handle the timeout, such as using kill -9 only if the process hasn't been killed in 300 seconds? What would be the best way to do it? These are the tasks I currently have: - name: Get running processes shell: "ps -ef | grep -v grep | grep -w {{ PROCESS }} | awk '{print $2}'" register: running_processes - name: Kill running processes shell: "kill {{ item }}" with_items: "{{ running_processes.stdout_lines }}" - name: Waiting until all running processes are killed wait_for: path: "/proc/{{ item }}/status" state: absent with_items: "{{ running_processes.stdout_lines }}" Thanks!
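A sketch of one way to express "escalate to kill -9 after 300 seconds", reusing the registered pid list from the first task: let wait_for tolerate failure, then force-kill only the survivors. The failed test in the filter chain is Ansible's own, so the selectattr/select syntax is worth verifying against your 2.x release.

- name: Wait up to 300s for each process to exit gracefully
  wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
    timeout: 300
  with_items: "{{ running_processes.stdout_lines }}"
  ignore_errors: yes
  register: killed_processes

- name: Force-kill the stubborn survivors
  shell: "kill -9 {{ item }}"
  with_items: "{{ killed_processes.results | select('failed') | map(attribute='item') | list }}"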
|
process, ansible, kill, ansible-2.x, kill-process
| 25
| 73,694
| 5
|
https://stackoverflow.com/questions/46515704/how-to-kill-a-running-process-using-ansible
|
34,485,286
|
How to filter gathering facts inside a playbook?
|
I'm working on a role that only needs to gather a single fact. Performance is a concern and I know that gathering facts is time-consuming. I'm looking for some way to filter gather_facts inside a playbook; this will allow me to gather only the required facts. This is possible using the setup core module: ansible -m setup -a 'filter=ansible_hostname' my_host 10.200.0.127 | success >> { "ansible_facts": { "ansible_hostname": "my_host" }, "changed": false } Is it possible to use this feature inside the playbook? Something like this? - hosts: all sudo: yes gather_facts: True filter: "filter=ansible_*" PS: The code above throws a syntax exception. EDIT 1 : If someone needs to get the hostname there's also another useful variable inventory_hostname .
|
How to filter gathering facts inside a playbook? I'm working on a role that only needs to gather a single fact. Performance is a concern and I know that gathering facts is time-consuming. I'm looking for some way to filter gather_facts inside a playbook; this will allow me to gather only the required facts. This is possible using the setup core module: ansible -m setup -a 'filter=ansible_hostname' my_host 10.200.0.127 | success >> { "ansible_facts": { "ansible_hostname": "my_host" }, "changed": false } Is it possible to use this feature inside the playbook? Something like this? - hosts: all sudo: yes gather_facts: True filter: "filter=ansible_*" PS: The code above throws a syntax exception. EDIT 1 : If someone needs to get the hostname there's also another useful variable inventory_hostname .
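filter is a parameter of the setup module, not a play-level keyword, which is why the attempt above is a syntax error. A minimal sketch: disable automatic gathering and run setup explicitly with the filter, after which the fact is available to later tasks as usual.

- hosts: all
  gather_facts: no
  tasks:
    - name: Gather only the hostname fact
      setup:
        filter: ansible_hostname

On Ansible 2.1 and later there is also a gather_subset play keyword that can skip entire fact categories, which may be worth checking for coarser-grained trimming.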
|
ansible, ansible-facts
| 25
| 83,387
| 4
|
https://stackoverflow.com/questions/34485286/how-to-filter-gathering-facts-inside-a-playbook
|
30,413,616
|
using regex in jinja 2 for ansible playbooks
|
Hi, I am new to Jinja2 and trying to use a regular expression as shown below {% if ansible_hostname == 'uat' %} {% set server = 'thinkingmonster.com' %} {% else %} {% set server = 'define yourself' %} {% endif %} {% if {{ server }} match('*thinking*') %} {% set ssl_certificate = 'akash' %} {% elif {{ server }} match( '*sleeping*')%} {% set ssl_certificate = 'akashthakur' %} {% endif %} Based on the value of "server" I would like to evaluate which certificate to use, i.e. if the domain contains the "thinking" keyword then use these certificates and if it contains the "sleeping" keyword then use that certificate. But I didn't find any Jinja2 filter supporting this. Please help me. I found some Python code and am sure that could work, but how do I use Python in Jinja2 templates?
|
using regex in jinja 2 for ansible playbooks Hi, I am new to Jinja2 and trying to use a regular expression as shown below {% if ansible_hostname == 'uat' %} {% set server = 'thinkingmonster.com' %} {% else %} {% set server = 'define yourself' %} {% endif %} {% if {{ server }} match('*thinking*') %} {% set ssl_certificate = 'akash' %} {% elif {{ server }} match( '*sleeping*')%} {% set ssl_certificate = 'akashthakur' %} {% endif %} Based on the value of "server" I would like to evaluate which certificate to use, i.e. if the domain contains the "thinking" keyword then use these certificates and if it contains the "sleeping" keyword then use that certificate. But I didn't find any Jinja2 filter supporting this. Please help me. I found some Python code and am sure that could work, but how do I use Python in Jinja2 templates?
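Two notes, sketched: inside {% %} tags variables are referenced bare, never via {{ }}, and for a plain substring check no regex is needed at all, since Jinja2's in operator suffices.

{% if 'thinking' in server %}
{% set ssl_certificate = 'akash' %}
{% elif 'sleeping' in server %}
{% set ssl_certificate = 'akashthakur' %}
{% endif %}

If a real pattern is needed, Ansible (not stock Jinja2) adds regex support in templates it renders; depending on the version this is spelled as a test, server is search('thinking'), or on older releases as a filter, server | search('thinking').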
|
python, regex, jinja2, ansible
| 25
| 114,630
| 7
|
https://stackoverflow.com/questions/30413616/using-regex-in-jinja-2-for-ansible-playbooks
|
26,985,112
|
R install packages from Shell
|
I am trying to implement a reducer for Hadoop Streaming using R. However, I need to figure out a way to access certain libraries that are not built into R, dplyr..etc. Based on my research it seems like there are two approaches: (1) In the reducer code, install the required libraries to a temporary folder and they will be disposed of when the session is done, like this: .libPaths(c(.libPaths(), temp <- tempdir())) install.packages("dplyr", lib=temp, repos='[URL] library(dplyr) ... However, this approach will have a dramatic overhead depending on how many libraries you are trying to install. So most of the time will be wasted on installing libraries (sophisticated libraries like dplyr have tons of dependencies which will take minutes to install in a vanilla R session). So it sounds like I need to install them beforehand, which leads us to approach 2. (2) My cluster is fairly big. And I have to use some tool like Ansible to make it work. So I prefer to have one Linux shell command to install the library. I have seen R CMD INSTALL... before; however, it feels like it will only install packages from a source file instead of doing what install.packages() does in the R console: figure out the mirror, pull the source file, and install it in one command. Can anyone show me how to use one command line in the shell to non-interactively install an R package? (sorry for this much background knowledge; if anyone thinks I am not even following the right philosophy, feel free to comment on how this whole cluster's R packages should be managed.)
|
R install packages from Shell I am trying to implement a reducer for Hadoop Streaming using R. However, I need to figure out a way to access certain libraries that are not built into R, dplyr..etc. Based on my research it seems like there are two approaches: (1) In the reducer code, install the required libraries to a temporary folder and they will be disposed of when the session is done, like this: .libPaths(c(.libPaths(), temp <- tempdir())) install.packages("dplyr", lib=temp, repos='[URL] library(dplyr) ... However, this approach will have a dramatic overhead depending on how many libraries you are trying to install. So most of the time will be wasted on installing libraries (sophisticated libraries like dplyr have tons of dependencies which will take minutes to install in a vanilla R session). So it sounds like I need to install them beforehand, which leads us to approach 2. (2) My cluster is fairly big. And I have to use some tool like Ansible to make it work. So I prefer to have one Linux shell command to install the library. I have seen R CMD INSTALL... before; however, it feels like it will only install packages from a source file instead of doing what install.packages() does in the R console: figure out the mirror, pull the source file, and install it in one command. Can anyone show me how to use one command line in the shell to non-interactively install an R package? (sorry for this much background knowledge; if anyone thinks I am not even following the right philosophy, feel free to comment on how this whole cluster's R packages should be managed.)
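A minimal sketch of the non-interactive one-liner, with an explicit repos argument so no mirror prompt ever appears (the mirror URL here is an assumption; any CRAN mirror works):

Rscript -e 'install.packages("dplyr", repos = "https://cloud.r-project.org")'

This runs cleanly from a shell task or Ansible's command module. R CMD INSTALL, by contrast, only installs an already-downloaded source tarball, which matches the suspicion in the question.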
|
r, ansible, hadoop-streaming
| 25
| 45,842
| 2
|
https://stackoverflow.com/questions/26985112/r-install-packages-from-shell
|
46,732,703
|
How to generate single reusable random password with Ansible
|
That is to say: how to evaluate the password lookup only once? - name: Demo hosts: localhost gather_facts: False vars: my_pass: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}" tasks: - debug: msg: "{{ my_pass }}" - debug: msg: "{{ my_pass }}" - debug: msg: "{{ my_pass }}" each debug statement will print out a different value, e.g: PLAY [Demo] ************* TASK [debug] ************ ok: [localhost] => { "msg": "ZfyzacMsqZaYqwW" } TASK [debug] ************ ok: [localhost] => { "msg": "mKcfRedImqxgXnE" } TASK [debug] ************ ok: [localhost] => { "msg": "POpqMQoJWTiDpEW" } Using Ansible version 2.3.2.0
|
How to generate single reusable random password with Ansible That is to say: how to evaluate the password lookup only once? - name: Demo hosts: localhost gather_facts: False vars: my_pass: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}" tasks: - debug: msg: "{{ my_pass }}" - debug: msg: "{{ my_pass }}" - debug: msg: "{{ my_pass }}" each debug statement will print out a different value, e.g: PLAY [Demo] ************* TASK [debug] ************ ok: [localhost] => { "msg": "ZfyzacMsqZaYqwW" } TASK [debug] ************ ok: [localhost] => { "msg": "mKcfRedImqxgXnE" } TASK [debug] ************ ok: [localhost] => { "msg": "POpqMQoJWTiDpEW" } Using Ansible version 2.3.2.0
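Lazy evaluation is the culprit: a lookup assigned under vars reruns on every reference. A minimal sketch of the usual fix, freezing the value once with set_fact so every later reference reuses the stored string:

- name: Demo
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Evaluate the password lookup exactly once
      set_fact:
        my_pass: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"

    - debug:
        msg: "{{ my_pass }}"   # now the same value on every use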
|
ansible
| 25
| 54,195
| 4
|
https://stackoverflow.com/questions/46732703/how-to-generate-single-reusable-random-password-with-ansible
|
40,235,550
|
How to inspect a json response from Ansible URI call
|
I have a service call that returns system status in json format. I want to use the ansible URI module to make the call and then inspect the response to decide whether the system is up or down {"id":"20161024140306","version":"5.6.1","status":"UP"} This would be the json that is returned This is the ansible task that makes the call: - name: check sonar web is up uri: url: [URL] method: GET return_content: yes status_code: 200 body_format: json register: data The question is how I can access and inspect the data; as per the ansible documentation, this is how we store the results of a call. I am not sure of the final step, which is to check the status.
|
How to inspect a json response from Ansible URI call I have a service call that returns system status in json format. I want to use the ansible URI module to make the call and then inspect the response to decide whether the system is up or down {"id":"20161024140306","version":"5.6.1","status":"UP"} This would be the json that is returned This is the ansible task that makes the call: - name: check sonar web is up uri: url: [URL] method: GET return_content: yes status_code: 200 body_format: json register: data The question is how I can access and inspect the data; as per the ansible documentation, this is how we store the results of a call. I am not sure of the final step, which is to check the status.
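Since the response is JSON and return_content is set, the registered result should expose a parsed json attribute. A sketch of the final step (failing the play is an assumption about the desired behaviour; on very old releases the fallback is (data.content | from_json).status):

- name: fail the play when SonarQube is not up
  fail:
    msg: "SonarQube status is {{ data.json.status }}"
  when: data.json.status != "UP"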
|
json, sonarqube, uri, ansible
| 25
| 80,898
| 4
|
https://stackoverflow.com/questions/40235550/how-to-inspect-a-json-response-from-ansible-uri-call
|
20,966,921
|
Ansible-galaxy throws ImportError: No module named yaml
|
When I try to install an ansible role, I see this exception. $ ansible-galaxy install zzet.postgresql Traceback (most recent call last): File "/Users/myHomeDir/.homebrew/Cellar/ansible/1.4.3/libexec/bin/ansible-galaxy", line 34, in <module> import yaml ImportError: No module named yaml OS: Mac OS X Mavericks Ansible: 1.4.3 Does anyone know how to fix it?
|
Ansible-galaxy throws ImportError: No module named yaml When I try to install an ansible role, I see this exception. $ ansible-galaxy install zzet.postgresql Traceback (most recent call last): File "/Users/myHomeDir/.homebrew/Cellar/ansible/1.4.3/libexec/bin/ansible-galaxy", line 34, in <module> import yaml ImportError: No module named yaml OS: Mac OS X Mavericks Ansible: 1.4.3 Does anyone know how to fix it?
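A hedged first thing to try: the traceback says the Python interpreter that Homebrew's ansible-galaxy wrapper runs cannot import PyYAML, so installing PyYAML for that interpreter, or reinstalling the formula so its bundled environment is rebuilt, often resolves it.

pip install PyYAML
# or, if the Homebrew install itself is inconsistent:
brew reinstall ansible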
|
macos, ansible, ansible-galaxy
| 25
| 28,844
| 5
|
https://stackoverflow.com/questions/20966921/ansible-galaxy-throws-importerror-no-module-named-yaml
|
44,614,863
|
Ansible unable to find .my.cnf Can't connect to local MySQL server
|
For six hours I've been trying to find out how ansible wants to work with my mariadb on ubuntu 16. When I manually log in on the server with mysql -u someuser -p everything works fine. When I try to access the server with the ansible script: - name: Create mysql database mysql_user: name=someuser state=present password=somepw it's complaining: fatal: [IP]: FAILED! => {"changed": false, "failed": true, "msg": "unable to connect to database, check login_user and login_password are correct or /home/someuser/.my.cnf has the credentials. Exception message: (2002, \"Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)\")"} Then I tried to put .my.cnf into the directory on the remote server like it is trying to tell me. I also added login_user and login_password. No help. The file looks like this: /home/someuser/.my.cnf [client] user=someuser password=somepw "Okay" I thought. Maybe the credentials are doubled in the ansible script and in the conf file. I tried to leave out the conf file or to skip the credentials in the ansible script. No help. I also restarted the mysql server. No help. The strange thing is that from the server itself everything worked with the mysql shell. I'm really struggling to find the solution. Shouldn't this work just like a charm?
|
Ansible unable to find .my.cnf Can't connect to local MySQL server For six hours I've been trying to find out how ansible wants to work with my mariadb on ubuntu 16. When I manually log in on the server with mysql -u someuser -p everything works fine. When I try to access the server with the ansible script: - name: Create mysql database mysql_user: name=someuser state=present password=somepw it's complaining: fatal: [IP]: FAILED! => {"changed": false, "failed": true, "msg": "unable to connect to database, check login_user and login_password are correct or /home/someuser/.my.cnf has the credentials. Exception message: (2002, \"Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)\")"} Then I tried to put .my.cnf into the directory on the remote server like it is trying to tell me. I also added login_user and login_password. No help. The file looks like this: /home/someuser/.my.cnf [client] user=someuser password=somepw "Okay" I thought. Maybe the credentials are doubled in the ansible script and in the conf file. I tried to leave out the conf file or to skip the credentials in the ansible script. No help. I also restarted the mysql server. No help. The strange thing is that from the server itself everything worked with the mysql shell. I'm really struggling to find the solution. Shouldn't this work just like a charm?
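The error mentions /tmp/mysql.sock, while Debian and Ubuntu builds of MariaDB listen on /var/run/mysqld/mysqld.sock, so the Python MySQL driver Ansible uses is probably just looking at the wrong socket rather than rejecting the credentials. A sketch, with the socket path assumed from the Ubuntu 16.04 default:

- name: Create mysql user
  mysql_user:
    name: someuser
    password: somepw
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock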
|
ansible
| 25
| 48,693
| 8
|
https://stackoverflow.com/questions/44614863/ansible-unable-to-find-my-cnf-cant-connect-to-local-mysql-server
|
20,952,689
|
Ansible SSH forwarding doesn't seem to work with Vagrant
|
OK, strange question. I have SSH forwarding working with Vagrant. But I'm trying to get it working when using Ansible as a Vagrant provisioner. I found out exactly what Ansible is executing, and tried it myself from the command line, sure enough, it fails there too. [/common/picsolve-ansible/u12.04%]ssh -o HostName=127.0.0.1 \ -o User=vagrant -o Port=2222 -o UserKnownHostsFile=/dev/null \ -o StrictHostKeyChecking=no -o PasswordAuthentication=no \ -o IdentityFile=/Users/bryanhunt/.vagrant.d/insecure_private_key \ -o IdentitiesOnly=yes -o LogLevel=FATAL \ -o ForwardAgent=yes "/bin/sh \ -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' " Permission denied (publickey,password). But when I just run vagrant ssh the agent forwarding works correctly, and I can checkout R/W my github project. [/common/picsolve-ansible/u12.04%]vagrant ssh vagrant@vagrant-ubuntu-precise-64:~$ /bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' Cloning into '/home/vagrant/poc_docker'... remote: Counting objects: 18, done. remote: Compressing objects: 100% (14/14), done. remote: Total 18 (delta 4), reused 0 (delta 0) Receiving objects: 100% (18/18), done. Resolving deltas: 100% (4/4), done. vagrant@vagrant-ubuntu-precise-64:~$ Has anyone got any idea how it is working? Update: By means of ps awux I determined the exact command being executed by Vagrant. I replicated it and git checkout worked. ssh vagrant@127.0.0.1 -p 2222 \ -o Compression=yes \ -o StrictHostKeyChecking=no \ -o LogLevel=FATAL \ -o StrictHostKeyChecking=no \ -o UserKnownHostsFile=/dev/null \ -o IdentitiesOnly=yes \ -i /Users/bryanhunt/.vagrant.d/insecure_private_key \ -o ForwardAgent=yes \ -o LogLevel=DEBUG \ "/bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
|
Ansible SSH forwarding doesn't seem to work with Vagrant OK, strange question. I have SSH forwarding working with Vagrant. But I'm trying to get it working when using Ansible as a Vagrant provisioner. I found out exactly what Ansible is executing, and tried it myself from the command line, sure enough, it fails there too. [/common/picsolve-ansible/u12.04%]ssh -o HostName=127.0.0.1 \ -o User=vagrant -o Port=2222 -o UserKnownHostsFile=/dev/null \ -o StrictHostKeyChecking=no -o PasswordAuthentication=no \ -o IdentityFile=/Users/bryanhunt/.vagrant.d/insecure_private_key \ -o IdentitiesOnly=yes -o LogLevel=FATAL \ -o ForwardAgent=yes "/bin/sh \ -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' " Permission denied (publickey,password). But when I just run vagrant ssh the agent forwarding works correctly, and I can checkout R/W my github project. [/common/picsolve-ansible/u12.04%]vagrant ssh vagrant@vagrant-ubuntu-precise-64:~$ /bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' Cloning into '/home/vagrant/poc_docker'... remote: Counting objects: 18, done. remote: Compressing objects: 100% (14/14), done. remote: Total 18 (delta 4), reused 0 (delta 0) Receiving objects: 100% (18/18), done. Resolving deltas: 100% (4/4), done. vagrant@vagrant-ubuntu-precise-64:~$ Has anyone got any idea how it is working? Update: By means of ps awux I determined the exact command being executed by Vagrant. I replicated it and git checkout worked. ssh vagrant@127.0.0.1 -p 2222 \ -o Compression=yes \ -o StrictHostKeyChecking=no \ -o LogLevel=FATAL \ -o StrictHostKeyChecking=no \ -o UserKnownHostsFile=/dev/null \ -o IdentitiesOnly=yes \ -i /Users/bryanhunt/.vagrant.d/insecure_private_key \ -o ForwardAgent=yes \ -o LogLevel=DEBUG \ "/bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
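One explanation that fits the symptoms: Ansible multiplexes SSH through ControlPersist, and a ForwardAgent flag passed ad hoc is not what the persisted connection ends up using; the conventional fix is to set it in ansible.cfg's ssh_args. A sketch:

# ansible.cfg
[ssh_connection]
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s

Note also that privilege escalation resets SSH_AUTH_SOCK, so the git clone task must run as the login user, not under sudo, for the forwarded agent to be visible.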
|
ssh, vagrant, ansible, ssh-agent
| 25
| 11,911
| 6
|
https://stackoverflow.com/questions/20952689/ansible-ssh-forwarding-doesnt-seem-to-work-with-vagrant
|
56,436,906
|
How to cleanly edit sshd_config for basic security options in an ansible playbook?
|
I'm trying to write a playbook that cleanly edits /etc/ssh/sshd_config so that it has PasswordAuthentication no and PermitRootLogin no . I can think of a few ways that are all problematic. Firstly, I could delete all the lines matching PasswordAuthentication|PermitRootLogin using lineinfile, and then append two new lines that I want, but i) this can fail non-atomically AND ii) appending lines at the end can mix them up with 'Match' blocks, which can typically appear at the end. I could replace every line matching ^(# *)?PasswordAuthentication with PasswordAuthentication no , also using lineinfile, but that doesn't work if a matching line doesn't already exist. Also, if there are multiple matching lines, I'll have duplicate PasswordAuthentication no lines. I could use a template for the entire file, but that means I need to specify everything, including HostKey, but I don't want to specify everything and want to leave the other options the way they were originally setup. None of the above ways are satisfactory because of the problems listed. Is there a clean way that makes the desired changes reliably, is idempotent, and does not leave the system in a bad state if it fails halfway?
|
How to cleanly edit sshd_config for basic security options in an ansible playbook? I'm trying to write a playbook that cleanly edits /etc/ssh/sshd_config so that it has PasswordAuthentication no and PermitRootLogin no . I can think of a few ways that are all problematic. Firstly, I could delete all the lines matching PasswordAuthentication|PermitRootLogin using lineinfile, and then append two new lines that I want, but i) this can fail non-atomically AND ii) appending lines at the end can mix them up with 'Match' blocks, which can typically appear at the end. I could replace every line matching ^(# *)?PasswordAuthentication with PasswordAuthentication no , also using lineinfile, but that doesn't work if a matching line doesn't already exist. Also, if there are multiple matching lines, I'll have duplicate PasswordAuthentication no lines. I could use a template for the entire file, but that means I need to specify everything, including HostKey, but I don't want to specify everything and want to leave the other options the way they were originally setup. None of the above ways are satisfactory because of the problems listed. Is there a clean way that makes the desired changes reliably, is idempotent, and does not leave the system in a bad state if it fails halfway?
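A sketch of a middle ground: keep lineinfile, but anchor the regexp so commented and uncommented variants are rewritten in place, keep new lines above any Match block, and use the module's validate option so a broken file is never installed (sshd -t refuses an invalid config before the edit lands). This handles the common single-occurrence case; files that already contain duplicate directives still need cleanup first, and the restart handler is assumed to exist.

- name: Enforce sshd security options
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?\s*{{ item.key }}\b.*'
    line: '{{ item.key }} {{ item.value }}'
    insertbefore: '^Match\b'        # falls back to EOF when no Match block exists
    firstmatch: yes                  # requires Ansible 2.5+
    validate: '/usr/sbin/sshd -t -f %s'
  with_dict:
    PasswordAuthentication: 'no'
    PermitRootLogin: 'no'
  notify: restart sshd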
|
ansible
| 25
| 31,951
| 6
|
https://stackoverflow.com/questions/56436906/how-to-cleanly-edit-sshd-config-for-basic-security-options-in-an-ansible-playboo
|
62,403,030
|
Terraform: wait till the instance is "reachable"
|
I have some Terraform code with an aws_instance and a null_resource : resource "aws_instance" "example" { ami = data.aws_ami.server.id instance_type = "t2.medium" key_name = aws_key_pair.deployer.key_name tags = { name = "example" } vpc_security_group_ids = [aws_security_group.main.id] } resource "null_resource" "example" { provisioner "local-exec" { command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml" } } It kinda works, but sometimes there is a bug (probably when the instance is in a pending state). When I rerun Terraform, it works as expected. Question: How can I run local-exec only when the instance is running and accepting an SSH connection?
|
Terraform: wait till the instance is "reachable" I have some Terraform code with an aws_instance and a null_resource : resource "aws_instance" "example" { ami = data.aws_ami.server.id instance_type = "t2.medium" key_name = aws_key_pair.deployer.key_name tags = { name = "example" } vpc_security_group_ids = [aws_security_group.main.id] } resource "null_resource" "example" { provisioner "local-exec" { command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml" } } It kinda works, but sometimes there is a bug (probably when the instance is in a pending state). When I rerun Terraform, it works as expected. Question: How can I run local-exec only when the instance is running and accepting an SSH connection?
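A common workaround, sketched with Terraform 0.12+ syntax and the names from the question: add a trivial remote-exec provisioner, which inherently blocks until an SSH connection succeeds, so the null_resource (which already depends on the instance through its interpolation) only runs once the box is reachable.

resource "aws_instance" "example" {
  # ... arguments as above ...

  provisioner "remote-exec" {
    inline = ["echo ssh is up"]   # does nothing; merely proves SSH works

    connection {
      type        = "ssh"
      host        = self.public_ip
      user        = "centos"
      private_key = file("files/id_rsa")
    }
  }
}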
|
amazon-web-services, amazon-ec2, ansible, terraform
| 25
| 34,123
| 5
|
https://stackoverflow.com/questions/62403030/terraform-wait-till-the-instance-is-reachable
|
49,087,220
|
How to update package cache when using module package from ansible
|
If I run apt, I can update the package cache: apt: name: postgresql state: present update_cache: yes I'm now trying to use the generic package command, but I don't see a way to do this. package: name: postgresql state: present Do I have to run an explicit command to run apt-get update , or can I do this using the package module?
|
How to update package cache when using module package from ansible If I run apt, I can update the package cache: apt: name: postgresql state: present update_cache: yes I'm now trying to use the generic package command, but I don't see a way to do this. package: name: postgresql state: present Do I have to run an explicit command to run apt-get update , or can I do this using the package module?
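A hedged workaround, since a cache refresh is not part of the portable package abstraction: run the distro-specific refresh first, guarded by a fact, and keep the generic module for the install itself.

- name: refresh apt cache on Debian-family hosts
  apt:
    update_cache: yes
  when: ansible_os_family == 'Debian'

- name: install postgresql portably
  package:
    name: postgresql
    state: present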
|
ansible, package-managers
| 25
| 15,518
| 5
|
https://stackoverflow.com/questions/49087220/how-to-update-package-cache-when-using-module-package-from-ansible
|
37,057,086
|
Don't mark something as changed when determining the value of a variable
|
I have the following role in my Ansible playbook to determine the installed version of Packer and conditionally install it if it doesn't match the version of a local variable: --- # detect packer version - name: determine packer version shell: /usr/local/bin/packer -v || true register: packer_installed_version - name: install packer cli tools unarchive: src: [URL] packer_version }}/packer_{{ packer_version }}_linux_amd64.zip dest: /usr/local/bin copy: no when: packer_installed_version.stdout != packer_version The problem/annoyance is that Ansible marks this step as having "changed": I'd like to gather this fact without marking something as changed, so I can know reliably at the end of my playbook execution whether anything has, in fact, changed. Is there a better way to go about what I'm doing above?
|
Don't mark something as changed when determining the value of a variable I have the following role in my Ansible playbook to determine the installed version of Packer and conditionally install it if it doesn't match the version of a local variable: --- # detect packer version - name: determine packer version shell: /usr/local/bin/packer -v || true register: packer_installed_version - name: install packer cli tools unarchive: src: [URL] packer_version }}/packer_{{ packer_version }}_linux_amd64.zip dest: /usr/local/bin copy: no when: packer_installed_version.stdout != packer_version The problem/annoyance is that Ansible marks this step as having "changed": I'd like to gather this fact without marking something as changed, so I can know reliably at the end of my playbook execution whether anything has, in fact, changed. Is there a better way to go about what I'm doing above?
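A minimal sketch of the idiomatic fix: changed_when: false tells Ansible that a read-only command never mutates state, so the version probe reports ok instead of changed and the end-of-run summary stays trustworthy.

- name: determine packer version
  shell: /usr/local/bin/packer -v || true
  register: packer_installed_version
  changed_when: false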
|
ansible
| 25
| 17,893
| 1
|
https://stackoverflow.com/questions/37057086/dont-mark-something-as-changed-when-determining-the-value-of-a-variable
|
36,728,760
|
reading json like variable in ansible
|
I'm new to ansible and I'm having a problem reading a value from a json file in an ansible role. my variable has a value like the following: { "queue": { "first": { "car": "bmw", "year": "1990", "model": "x3", "color": "blue" }, "second": { "car": "bmw", "year": "2000", "model": "318", "color": "red" } } } I'm trying to print the color's value only, to compare it with some other variable. I used with_dict to iterate over the json object (stored in a variable called jsonVar) like the following: - name: test loop with_dict: "{{jsonVar}}" shell: | if echo "blue" | grep -q "${{item.value.color}}" ; then echo "success" So far there is no luck in getting the comparison of the color's value from the json to "blue" in the if statement. I was wondering if I'm doing something wrong? Thanks in advance!
|
reading json like variable in ansible I'm new to ansible and I'm having a problem reading a value from a json file in an ansible role. my variable has a value like the following: { "queue": { "first": { "car": "bmw", "year": "1990", "model": "x3", "color": "blue" }, "second": { "car": "bmw", "year": "2000", "model": "318", "color": "red" } } } I'm trying to print the color's value only, to compare it with some other variable. I used with_dict to iterate over the json object (stored in a variable called jsonVar) like the following: - name: test loop with_dict: "{{jsonVar}}" shell: | if echo "blue" | grep -q "${{item.value.color}}" ; then echo "success" So far there is no luck in getting the comparison of the color's value from the json to "blue" in the if statement. I was wondering if I'm doing something wrong? Thanks in advance!
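Two likely snags, sketched: iterating with_dict over the whole variable walks the single top-level queue key rather than the cars, and ${{ ... }} is not valid interpolation. Doing the comparison in Ansible itself avoids the shell entirely:

- name: report blue cars
  debug:
    msg: "success: {{ item.key }} is blue"
  with_dict: "{{ jsonVar.queue }}"
  when: item.value.color == 'blue'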
|
ansible, ansible-2.x
| 25
| 87,923
| 3
|
https://stackoverflow.com/questions/36728760/reading-json-like-variable-in-ansible
|
23,014,713
|
Run a build task only when changes have been pulled from a git repository
|
I have a C++ program hosted in a Bitbucket git repository that I'm compiling with CMake. The current play can be seen below. It works fine except that the build task is run every time the play is run. Instead I'd like the build task to run only when a new software version is pulled by the git module. How can I tell in the build task if the clone task found a new version? --- # tasks of role: foo - name: clone repository git: repo=git@bitbucket.org:foo/foo.git dest={{ foo.dir }} accept_hostkey=yes - name: create build dir file: state=directory path={{ foo.build_dir }} - name: build command: "{{ item }} chdir={{ foo.build_dir }}" with_items: - cmake .. - make
|
Run a build task only when changes have been pulled from a git repository I have a C++ program hosted in a Bitbucket git repository that I'm compiling with CMake. The current play can be seen below. It works fine except that the build task is run every time the play is run. Instead I'd like the build task to run only when a new software version is pulled by the git module. How can I tell in the build task if the clone task found a new version? --- # tasks of role: foo - name: clone repository git: repo=git@bitbucket.org:foo/foo.git dest={{ foo.dir }} accept_hostkey=yes - name: create build dir file: state=directory path={{ foo.build_dir }} - name: build command: "{{ item }} chdir={{ foo.build_dir }}" with_items: - cmake .. - make
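The git module already reports whether it fetched anything, so a sketch: register its result and gate the build on the changed flag.

- name: clone repository
  git:
    repo: git@bitbucket.org:foo/foo.git
    dest: "{{ foo.dir }}"
    accept_hostkey: yes
  register: repo

- name: build
  command: "{{ item }} chdir={{ foo.build_dir }}"
  with_items:
    - cmake ..
    - make
  when: repo.changed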
|
git, ansible
| 25
| 12,395
| 1
|
https://stackoverflow.com/questions/23014713/run-a-build-task-only-when-changes-have-been-pulled-from-a-git-repository
|
37,675,262
|
What is the difference between Ansible template module and copy module?
|
What is the difference between Ansible template module and Ansible copy module?
|
What is the difference between Ansible template module and copy module? What is the difference between Ansible template module and Ansible copy module?
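In short: copy transfers a file byte-for-byte, while template first renders the source through Jinja2, substituting variables and facts before writing the result. A sketch of the contrast (file names here are illustrative only):

- copy:                        # verbatim transfer, no rendering
    src: files/app.conf
    dest: /etc/app.conf

- template:                    # app.conf.j2 may contain {{ ansible_hostname }}
    src: templates/app.conf.j2
    dest: /etc/app.conf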
|
ansible
| 25
| 23,508
| 3
|
https://stackoverflow.com/questions/37675262/what-is-the-difference-between-ansible-template-module-and-copy-module
|
32,802,956
|
Ansible, running role multiple times with different parameter sets
|
What is the best practice for running one role with different sets of parameters? I need to run one application (docker container) multiple times on one server with different environment variables for each.
|
Ansible, running role multiple times with different parameter sets What is the best practice for running one role with different sets of parameters? I need to run one application (docker container) multiple times on one server with different environment variables for each.
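A sketch of the parameterised-role pattern; the role and variable names are illustrative. Ansible runs a role again whenever it is listed with a different parameter set, so one container role can be instantiated per deployment:

- hosts: appserver
  roles:
    - role: docker_app
      container_name: web
      env_vars: { APP_PORT: "8080" }
    - role: docker_app
      container_name: worker
      env_vars: { APP_PORT: "8081" }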
|
docker, ansible
| 25
| 23,313
| 3
|
https://stackoverflow.com/questions/32802956/ansible-running-role-multiple-times-with-different-parameter-sets
|
44,595,867
|
How to copy file from localhost to remote host in Ansible playbook?
|
I have a directory: /users/rolando/myfile I want to copy "myfile" to hostname "targetserver" in directory: /home/rolando/myfile What is the syntax in the playbook to do this? Examples I found with the copy command look like it's more about copying a file from a source directory on a remote server to a target directory on the same remote server. The line in my playbook .yml I tried that failed: - copy: src='/users/rolando/myfile' dest='rolando@targetserver:/home/rolando/myfile' What am I doing wrong?
|
How to copy file from localhost to remote host in Ansible playbook? I have a directory: /users/rolando/myfile I want to copy "myfile" to hostname "targetserver" in directory: /home/rolando/myfile What is the syntax in the playbook to do this? Examples I found with the copy command look like it's more about copying a file from a source directory on a remote server to a target directory on the same remote server. The line in my playbook .yml I tried that failed: - copy: src='/users/rolando/myfile' dest='rolando@targetserver:/home/rolando/myfile' What am I doing wrong?
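The copy module's src is read on the control machine and dest is written on the managed host, so no user@host prefix belongs in dest; the target is selected by the play's hosts line. A minimal sketch:

- hosts: targetserver
  tasks:
    - name: push myfile to the remote home directory
      copy:
        src: /users/rolando/myfile
        dest: /home/rolando/myfile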
|
ansible
| 25
| 121,473
| 3
|
https://stackoverflow.com/questions/44595867/how-to-copy-file-from-localhost-to-remote-host-in-ansible-playbook
|
37,925,282
|
iteration using with_items and register
|
Looking for help with a problem I've been struggling with for a few hours. I want to iterate over a list, run a command, register the output for each command and then iterate with debug over each unique registers {{ someregister }}.stdout For example, the following code will spit out "msg": "1" and "msg": "2" --- - hosts: localhost gather_facts: false vars: numbers: - name: "first" int: "1" - name: "second" int: "2" tasks: - name: Register output command: "/bin/echo {{ item.int }}" register: result with_items: "{{ numbers }}" - debug: msg={{ item.stdout }} with_items: "{{ result.results }}" If however, I try and capture the output of a command in a register variable that is named using with_list, I am having trouble accessing the list or the elements within it. For example, altering the code slightly to: --- - hosts: localhost gather_facts: false vars: numbers: - name: "first" int: "1" - name: "second" int: "2" tasks: - name: Register output command: "/bin/echo {{ item.int }}" register: "{{ item.name }}" with_items: "{{ numbers }}" - debug: var={{ item.name.stdout }} with_items: "{{ numbers }}" Gives me: TASK [debug] > ******************************************************************* fatal: [localhost]: FAILED! => {"failed": true, "msg": "'unicode > object' has no attribute 'stdout'"} Is it not possible to dynamically name the register the output of a command which can then be called later on in the play? I would like each iteration of the command and its subsequent register name to be accessed uniquely, e.g, given the last example I would expect there to be variables registered called "first" and "second" but there aren't. Taking away the with_items from the debug stanza, and just explicitly defining the var or message using first.stdout returns "undefined". Ansible version is 2.0.2.0 on Centos 7_2. Thanks in advance.
|
iteration using with_items and register Looking for help with a problem I've been struggling with for a few hours. I want to iterate over a list, run a command, register the output for each command and then iterate with debug over each unique registers {{ someregister }}.stdout For example, the following code will spit out "msg": "1" and "msg": "2" --- - hosts: localhost gather_facts: false vars: numbers: - name: "first" int: "1" - name: "second" int: "2" tasks: - name: Register output command: "/bin/echo {{ item.int }}" register: result with_items: "{{ numbers }}" - debug: msg={{ item.stdout }} with_items: "{{ result.results }}" If however, I try and capture the output of a command in a register variable that is named using with_list, I am having trouble accessing the list or the elements within it. For example, altering the code slightly to: --- - hosts: localhost gather_facts: false vars: numbers: - name: "first" int: "1" - name: "second" int: "2" tasks: - name: Register output command: "/bin/echo {{ item.int }}" register: "{{ item.name }}" with_items: "{{ numbers }}" - debug: var={{ item.name.stdout }} with_items: "{{ numbers }}" Gives me: TASK [debug] > ******************************************************************* fatal: [localhost]: FAILED! => {"failed": true, "msg": "'unicode > object' has no attribute 'stdout'"} Is it not possible to dynamically name the register the output of a command which can then be called later on in the play? I would like each iteration of the command and its subsequent register name to be accessed uniquely, e.g, given the last example I would expect there to be variables registered called "first" and "second" but there aren't. Taking away the with_items from the debug stanza, and just explicitly defining the var or message using first.stdout returns "undefined". Ansible version is 2.0.2.0 on Centos 7_2. Thanks in advance.
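register does not template its value, so a dynamic register name is not possible; but it also is not needed, because every entry of result.results carries the item that produced it. A sketch:

- debug:
    msg: "{{ item.item.name }} echoed {{ item.stdout }}"
  with_items: "{{ result.results }}"

To fish out a single result by name, a filter chain such as result.results | selectattr('item.name', 'equalto', 'first') | map(attribute='stdout') | first should work, assuming a Jinja2 recent enough to provide the equalto test.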
|
ansible, ansible-2.x
| 25
| 76,702
| 2
|
https://stackoverflow.com/questions/37925282/iteration-using-with-items-and-register
|
30,162,528
|
Ansible Service Restart Failed
|
I've been having some trouble with restarting the SSH daemon with Ansible. I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64 ) tl;dr : There appears to be something wrong with the way I'm invoking the service syntax. Problem With Original Use Case (Handler) Playbook - hosts: all - remote_user: vagrant - tasks: ... - name: Forbid SSH root login sudo: yes lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="permitRootLogin no" state=present notify: - restart ssh ... - handlers: - name: restart ssh sudo: yes service: name=ssh state=restarted Output NOTIFIED: [restart ssh] failed: [default] => {"failed": true} FATAL: all hosts have already failed -- aborting The nginx handler completed successfully with nearly identical syntax. Task Also Fails Playbook - name: Restart SSH server sudo: yes service: name=ssh state=restarted Same output as the handler use case. Ad Hoc Command Also Fails Shell > ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted" Inventory 127.0.0.1:8022 Output 127.0.0.1 | FAILED >> { "failed": true, "msg": "" } Shell command in box works When I SSH in and run the usual command, everything works fine. > vagrant ssh > sudo service ssh restart ssh stop/waiting ssh start/running, process 7899 > echo $? 0 Command task also works Output TASK: [Restart SSH server] **************************************************** changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]} As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
|
Ansible Service Restart Failed I've been having some trouble with restarting the SSH daemon with Ansible. I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64 ) tl;dr : There appears to be something wrong with the way I'm invoking the service syntax. Problem With Original Use Case (Handler) Playbook - hosts: all - remote_user: vagrant - tasks: ... - name: Forbid SSH root login sudo: yes lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="permitRootLogin no" state=present notify: - restart ssh ... - handlers: - name: restart ssh sudo: yes service: name=ssh state=restarted Output NOTIFIED: [restart ssh] failed: [default] => {"failed": true} FATAL: all hosts have already failed -- aborting The nginx handler completed successfully with nearly identical syntax. Task Also Fails Playbook - name: Restart SSH server sudo: yes service: name=ssh state=restarted Same output as the handler use case. Ad Hoc Command Also Fails Shell > ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted" Inventory 127.0.0.1:8022 Output 127.0.0.1 | FAILED >> { "failed": true, "msg": "" } Shell command in box works When I SSH in and run the usual command, everything works fine. > vagrant ssh > sudo service ssh restart ssh stop/waiting ssh start/running, process 7899 > echo $? 0 Command task also works Output TASK: [Restart SSH server] **************************************************** changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]} As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
|
ansible, sshd
| 25
| 33,993
| 1
|
https://stackoverflow.com/questions/30162528/ansible-service-restart-failed
|
62,116,534
|
Ansible copy files with wildcard?
|
In my files directory I have various files with a similar name structure: data-example.zip data-precise.zip data-arbitrary.zip data-collected.zip I would like to transfer all of these files to the /tmp directory of my remote machine using Ansible, without specifying each file name explicitly. In other words, I would like to transfer every file that starts with "data-". What is the correct way to do that? In a similar thread, someone suggested the with_fileglob keyword, but I couldn't get that to work. Can someone provide me an example of how to accomplish said task?
|
Ansible copy files with wildcard? In my files directory I have various files with a similar name structure: data-example.zip data-precise.zip data-arbitrary.zip data-collected.zip I would like to transfer all of these files to the /tmp directory of my remote machine using Ansible, without specifying each file name explicitly. In other words, I would like to transfer every file that starts with "data-". What is the correct way to do that? In a similar thread, someone suggested the with_fileglob keyword, but I couldn't get that to work. Can someone provide me an example of how to accomplish said task?
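A minimal sketch with with_fileglob; note the pattern is resolved on the control machine, relative to the role's files/ directory (or the playbook directory), which is a common reason it silently matches nothing:

- name: copy every data- archive to /tmp
  copy:
    src: "{{ item }}"
    dest: /tmp/
  with_fileglob:
    - "data-*.zip"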
|
ansible, copy, wildcard, provisioning
| 24
| 58,925
| 2
|
https://stackoverflow.com/questions/62116534/ansible-copy-files-with-wildcard
|
38,479,423
|
Is it possible to include file content in a playbook?
|
I need to create or overwrite files on remote hosts. The modules lineinfile or blockinfile are useful when updating files, but not to create ones from scratch or completely overwrite existing ones. The obvious solution is to use copy but I would like to have as much as possible a standalone playbook, without files on the side. Is it possible to include in a playbook the content of the file to create? Maybe something along the lines of having a variable with the content of the file which can be used as the src= parameter for copy (I tried this but it does not work as src expects a local file)
|
Is it possible to include file content in a playbook? I need to create or overwrite files on remote hosts. The modules lineinfile or blockinfile are useful when updating files, but not to create ones from scratch or completely overwrite existing ones. The obvious solution is to use copy but I would like to have as much as possible a standalone playbook, without files on the side. Is it possible to include in a playbook the content of the file to create? Maybe something along the lines of having a variable with the content of the file which can be used as the src= parameter for copy (I tried this but it does not work as src expects a local file)
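The copy module accepts inline content as an alternative to src, which keeps the playbook fully standalone. A minimal sketch (the destination path is illustrative):

- name: create the file entirely from the playbook
  copy:
    content: |
      line one of the file
      line two of the file
    dest: /etc/example.conf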
|
ansible
| 24
| 31,066
| 1
|
https://stackoverflow.com/questions/38479423/is-it-possible-to-include-file-content-in-a-playbook
|
50,966,777
|
Ansible - Download latest release binary from Github repo
|
With Ansible, please advise how I could download the latest release binary from a Github repository. As per my current understanding the steps would be: a. get the URL of the latest release b. download the release For a. I have something like the following, which does not provide the actual release (ex. v0.11.53): - name: get latest Gogs release local_action: module: uri url: [URL] method: GET follow_redirects: no status_code: 301 register: release_url For b. I have the below, which works but needs constant updating. Instead of the version I would need a variable set in a.: - name: download latest become: yes become-user: "{{gogs_user}}" get_url: url: [URL] dest: "/home/{{gogs_user}}/linux_amd64.tar.gz" Thank you!
|
Ansible - Download latest release binary from Github repo With Ansible, please advise how I could download the latest release binary from a Github repository. As per my current understanding the steps would be: a. get the URL of the latest release b. download the release For a. I have something like the following, which does not provide the actual release (ex. v0.11.53): - name: get latest Gogs release local_action: module: uri url: [URL] method: GET follow_redirects: no status_code: 301 register: release_url For b. I have the below, which works but needs constant updating. Instead of the version I would need a variable set in a.: - name: download latest become: yes become-user: "{{gogs_user}}" get_url: url: [URL] dest: "/home/{{gogs_user}}/linux_amd64.tar.gz" Thank you!
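A sketch using the GitHub releases API, which returns the latest tag as JSON; the repository path and asset file name here are assumptions based on the question and should be adjusted to the actual release assets. Note also that the keyword is become_user with an underscore:

- name: look up the latest Gogs release
  uri:
    url: https://api.github.com/repos/gogs/gogs/releases/latest
    return_content: yes
  register: gogs_release

- name: download that release
  become: yes
  become_user: "{{ gogs_user }}"
  get_url:
    url: "https://github.com/gogs/gogs/releases/download/{{ gogs_release.json.tag_name }}/linux_amd64.tar.gz"
    dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"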
|
url, github, download, automation, ansible
| 24
| 22,317
| 8
|
https://stackoverflow.com/questions/50966777/ansible-download-latest-release-binary-from-github-repo
|
57,648,261
|
Split ansible role's tasks into multiple files
|
This is my ansible role: /roles /foo /tasks main.yml <----- I want to split this The main.yml file is really big, so I want to split it into multiple files, and call them in sequence. /roles /foo /tasks run-this-first.yml <--- 1st run-this-last.yml <--- last run-this-second.yml <--- 2nd How do I invoke those files, and how do I ensure they are run in order?
|
Split ansible role's tasks into multiple files This is my ansible role: /roles /foo /tasks main.yml <----- I want to split this The main.yml file is really big, so I want to split it into multiple files, and call them in sequence. /roles /foo /tasks run-this-first.yml <--- 1st run-this-last.yml <--- last run-this-second.yml <--- 2nd How do I invoke those files, and how do I ensure they are run in order?
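A minimal sketch: keep main.yml as a thin index that pulls the pieces in explicitly; they run in exactly the order listed (import_tasks on Ansible 2.4+, plain include on older releases).

# roles/foo/tasks/main.yml
- import_tasks: run-this-first.yml
- import_tasks: run-this-second.yml
- import_tasks: run-this-last.yml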
|
ansible
| 24
| 17,809
| 2
|
https://stackoverflow.com/questions/57648261/split-ansible-roles-tasks-into-multiple-files
|
42,914,324
|
Extra spaces appearing in Ansible templates
|
I am generating config files and I want them to be indented just so. I started with a Jinja2 template that rendered correctly when called from a simple python program. When I call it from ansible, I will get 2 extra spaces on all but the first line of the loop. Generating things like YAML and python has been a real pain. I have taken to putting a comment line as the first line of a for block to fix this... Here is a really simple example of a YAML generator: playbook call: - name: generate bgp vars file, put in includes directory local_action: template src={{ role_dir }}/templates/bgp_vars.j2 dest={{ incvar_dir }}/bgp_vars.yaml run_once: true section of template: dc_route_reflectors: {% for dc in SH_dcs %} # dc is "{{ dc }}" {{ dc }}: {% for host in groups[bgpgroupname] if dc == hostvars[host].MYDC %} - "{{ hostvars[host].MAIN_MYADDR }}" {% endfor %} {% endfor %} rendered output: dc_route_reflectors: # dc is "pnp" pnp: - "10.100.16.3" - "10.100.32.3" # dc is "sgs" sgs: - "10.8.0.3" - "10.8.16.3" # dc is "cst" cst: - "10.4.0.3" - "10.4.16.3" # dc is "dse" dse: - "10.200.0.3" - "10.200.16.3" Notice how the dc is "pnp" comment is not indented as it is shown in the template, but sgs,cst and dse comments are indented by 2 spaces. All of the array lines of ip addresses are also indented. I have tried various versions of adding "-" to the "%" things as Jinja2 describes, but none have given consistent correct results. Others must have seen this before. I'm running 2.2.1.0 on CentOS7.
|
Extra spaces appearing in Ansible templates I am generating config files and I want them to be indented just so. I started with a Jinja2 template that rendered correctly when called from a simple python program. When I call it from ansible, I will get 2 extra spaces on all but the first line of the loop. Generating things like YAML and python has been a real pain. I have taken to putting a comment line as the first line of a for block to fix this... Here is a really simple example of a YAML generator: playbook call: - name: generate bgp vars file, put in includes directory local_action: template src={{ role_dir }}/templates/bgp_vars.j2 dest={{ incvar_dir }}/bgp_vars.yaml run_once: true section of template: dc_route_reflectors: {% for dc in SH_dcs %} # dc is "{{ dc }}" {{ dc }}: {% for host in groups[bgpgroupname] if dc == hostvars[host].MYDC %} - "{{ hostvars[host].MAIN_MYADDR }}" {% endfor %} {% endfor %} rendered output: dc_route_reflectors: # dc is "pnp" pnp: - "10.100.16.3" - "10.100.32.3" # dc is "sgs" sgs: - "10.8.0.3" - "10.8.16.3" # dc is "cst" cst: - "10.4.0.3" - "10.4.16.3" # dc is "dse" dse: - "10.200.0.3" - "10.200.16.3" Notice how the dc is "pnp" comment is not indented as it is shown in the template, but sgs,cst and dse comments are indented by 2 spaces. All of the array lines of ip addresses are also indented. I have tried various versions of adding "-" to the "%" things as Jinja2 describes, but none have given consistent correct results. Others must have seen this before. I'm running 2.2.1.0 on CentOS7.
|
ansible, jinja2, ansible-template
| 24
| 23,177
| 1
|
https://stackoverflow.com/questions/42914324/extra-spaces-appearing-in-ansible-templates
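A plausible cause: Ansible's template engine enables Jinja2's trim_blocks by default but not lstrip_blocks, so the spaces used to indent {% for %} lines leak into the rendered output. A per-template override can be declared on the first line; a sketch of the same template with that header (behaviour assumed from Jinja2's lstrip_blocks semantics, untested here):

    #jinja2: lstrip_blocks: True
    dc_route_reflectors:
    {% for dc in SH_dcs %}
      {{ dc }}:
      {% for host in groups[bgpgroupname] if dc == hostvars[host].MYDC %}
        - "{{ hostvars[host].MAIN_MYADDR }}"
      {% endfor %}
    {% endfor %}

With lstrip_blocks on, whitespace before a block tag is stripped, so the tag lines can be indented for readability without shifting the output.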
|
34,809,845
|
Check Ansible version from inside of a playbook
|
I have a playbook that runs differently in Ansible 1.9.x and 2.0. I would like to check the currently running Ansible version in my playbook, to prevent someone from running it with an old one. I don't think that this is the best solution: - local_action: command ansible --version register: version What would you suggest?
|
Check Ansible version from inside of a playbook I have a playbook that runs differently in Ansible 1.9.x and 2.0. I would like to check the currently running Ansible version in my playbook, to prevent someone from running it with an old one. I don't think that this is the best solution: - local_action: command ansible --version register: version What would you suggest?
|
ansible
| 24
| 33,388
| 3
|
https://stackoverflow.com/questions/34809845/check-ansible-version-from-inside-of-a-playbook
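Modern Ansible exposes a magic ansible_version variable (a dict with keys such as full, major and minor), which avoids shelling out. A sketch; on releases older than 2.5 the comparison test was spelled version_compare rather than version:

    - name: Abort on anything older than 2.0
      assert:
        that: "ansible_version.full is version('2.0', '>=')"
        msg: "This playbook requires Ansible >= 2.0, found {{ ansible_version.full }}"

For roles, min_ansible_version in meta/main.yml records the same constraint, though it is informational rather than enforced.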
|
42,123,317
|
How to use a public keypair .pem file for ansible playbooks?
|
I want to use a public aws keypair .pem file for running ansible playbooks. I want to do this without changing my ~/.ssh/id_rsa.pub and I can't create a new keypair from my current ~/.ssh/id_rsa.pub and apply it to the ec2 instances I am trying to change. $ ansible --version ansible 1.9.6 configured module search path = None Here is my hosts file (note that my actual ip is replaced with 1.2.3.4 ). This is probably the issue since I need a way to set a public key variable and use that: [all_servers:vars] ansible_ssh_private_key_file = ./mykeypair.pem [dashboard] 1.2.3.4 dashboard_domain=my.domain.info Here is my playbook: --- - hosts: dashboard gather_facts: False remote_user: ubuntu tasks: - name: ping ping: This is the command I am using to run it: ansible-playbook -i ./hosts test.yml It results in the following error: fatal: [1.2.3.4] => SSH Error: Permission denied (publickey). while connecting to 1.2.3.4:22 There is no problem with my keypair: $ ssh -i mykeypair.pem ubuntu@1.2.3.4 'whoami' ubuntu What am I doing wrong?
|
How to use a public keypair .pem file for ansible playbooks? I want to use a public aws keypair .pem file for running ansible playbooks. I want to do this without changing my ~/.ssh/id_rsa.pub and I can't create a new keypair from my current ~/.ssh/id_rsa.pub and apply it to the ec2 instances I am trying to change. $ ansible --version ansible 1.9.6 configured module search path = None Here is my hosts file (note that my actual ip is replaced with 1.2.3.4 ). This is probably the issue since I need a way to set a public key variable and use that: [all_servers:vars] ansible_ssh_private_key_file = ./mykeypair.pem [dashboard] 1.2.3.4 dashboard_domain=my.domain.info Here is my playbook: --- - hosts: dashboard gather_facts: False remote_user: ubuntu tasks: - name: ping ping: This is the command I am using to run it: ansible-playbook -i ./hosts test.yml It results in the following error: fatal: [1.2.3.4] => SSH Error: Permission denied (publickey). while connecting to 1.2.3.4:22 There is no problem with my keypair: $ ssh -i mykeypair.pem ubuntu@1.2.3.4 'whoami' ubuntu What am I doing wrong?
|
ssh, amazon-ec2, ansible
| 24
| 43,073
| 3
|
https://stackoverflow.com/questions/42123317/how-to-use-a-public-keypair-pem-file-for-ansible-playbooks
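One likely culprit in the inventory above: [all_servers:vars] only applies to members of a group named all_servers, and dashboard is not one. Scoping the variable to the right group, sketched below, or passing --private-key on the command line should behave the same as the working raw ssh call:

    [dashboard]
    1.2.3.4 dashboard_domain=my.domain.info

    [dashboard:vars]
    ansible_ssh_private_key_file=./mykeypair.pem

One-off equivalent: ansible-playbook -i ./hosts --private-key=./mykeypair.pem test.yml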
|
40,353,666
|
How to pass terraform outputs variables into ansible as vars_files?
|
I am provisioning AWS infrastructure using Terraform and want to pass variables such as aws_subnet_id and aws_security_id into an Ansible playbook using vars_file (I don't know if there is any other way though). How can I do that?
|
How to pass terraform outputs variables into ansible as vars_files? I am provisioning AWS infrastructure using Terraform and want to pass variables such as aws_subnet_id and aws_security_id into an Ansible playbook using vars_file (I don't know if there is any other way though). How can I do that?
|
ansible, terraform
| 24
| 31,946
| 5
|
https://stackoverflow.com/questions/40353666/how-to-pass-terraform-outputs-variables-into-ansible-as-vars-files
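One common bridge is to export the outputs as JSON and load them in the playbook. A sketch assuming a tf_outputs.json produced by running terraform output -json in the Terraform directory (each output arrives as an object with a value key):

    - name: Load Terraform outputs (from terraform output -json > tf_outputs.json)
      set_fact:
        tf_out: "{{ lookup('file', 'tf_outputs.json') | from_json }}"

    - debug:
        msg: "subnet={{ tf_out.aws_subnet_id.value }} sg={{ tf_out.aws_security_id.value }}"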
|
30,605,950
|
Running a SELECT Query with an Ansible Task
|
In this list of mysql db modules for Ansible, there's one for creating a db, or creating a user, etc. I would like to run a query against a pre-existing table and use the results of that query to populate an Ansible variable (list of IP addresses, and node type) upon which I would run different tasks, depending on node type. How can that be done in Ansible?
|
Running a SELECT Query with an Ansible Task In this list of mysql db modules for Ansible, there's one for creating a db, or creating a user, etc. I would like to run a query against a pre-existing table and use the results of that query to populate an Ansible variable (list of IP addresses, and node type) upon which I would run different tasks, depending on node type. How can that be done in Ansible?
|
mysql, ansible
| 24
| 47,073
| 1
|
https://stackoverflow.com/questions/30605950/running-a-select-query-with-an-ansible-task
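Core Ansible of that era had no query module, so the usual workaround was the mysql client via command; newer collections ship community.mysql.mysql_query for this. A command-based sketch in which the database, table and credential names are assumptions:

    - name: Fetch node list from the database
      command: >
        mysql --batch --skip-column-names
        --user={{ db_user }} --password={{ db_pass }}
        --execute="SELECT ip, node_type FROM nodes" inventory_db
      register: node_rows
      changed_when: false

    - name: Act on each row (columns are tab-separated)
      debug:
        msg: "ip={{ item.split('\t')[0] }} type={{ item.split('\t')[1] }}"
      with_items: "{{ node_rows.stdout_lines }}"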
|
47,305,658
|
Install snap packages with Ansible
|
I am automating Canonical Kubernetes installation with Ansible. The installation process requires snap to be present on the host. Is there a standard way to install snap packages with Ansible already?
|
Install snap packages with Ansible I am automating Canonical Kubernetes installation with Ansible. The installation process requires snap to be present on the host. Is there a standard way to install snap packages with Ansible already?
|
ansible
| 24
| 19,880
| 3
|
https://stackoverflow.com/questions/47305658/install-snap-packages-with-ansible
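A dedicated snap module arrived later (in Ansible 2.8, now community.general.snap); before that, command with a creates guard kept the install idempotent. Both variants sketched, with kubectl as an assumed example package:

    - name: Install via the snap module (newer Ansible)
      community.general.snap:
        name: kubectl
        classic: yes

    - name: Fallback for older Ansible
      command: snap install kubectl --classic
      args:
        creates: /snap/bin/kubectl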
|
36,134,552
|
Use multiple var files in ansible role
|
One of my roles has two different variable types. One is public (things like package versions and other benign information). These can be committed to SCM without a worry. It also requires some private information (such as API keys and other secret information). I'm using ansible-vault to encrypt secret information. My solution was to have vars/main.yml for public, and vars/vault.yml for the encrypted private information. I came across a problem and am uncertain what's the best practice or actual solution here. It seems that Ansible only loads the vars/main.yml file. Naturally I do not want to encrypt the public information, so I looked for a solution. So far the only solution I came up with (suggested on IRC) is to create group_vars/all/vault.yml and prefix all variables with the role name. This works because Ansible seems to recursively load everything under group_vars. This does work but seems organizationally incorrect, because the variables are for a specific role and not "globally universally true". I also tried to put include: vars/vault.yml into vars/main.yml but that did not work. Is there a proper way to do this?
|
Use multiple var files in ansible role One of my roles has two different variable types. One is public (things like package versions and other benign information). These can be committed to SCM without a worry. It also requires some private information (such as API keys and other secret information). I'm using ansible-vault to encrypt secret information. My solution was to have vars/main.yml for public, and vars/vault.yml for the encrypted private information. I came across a problem and am uncertain what's the best practice or actual solution here. It seems that Ansible only loads the vars/main.yml file. Naturally I do not want to encrypt the public information, so I looked for a solution. So far the only solution I came up with (suggested on IRC) is to create group_vars/all/vault.yml and prefix all variables with the role name. This works because Ansible seems to recursively load everything under group_vars. This does work but seems organizationally incorrect, because the variables are for a specific role and not "globally universally true". I also tried to put include: vars/vault.yml into vars/main.yml but that did not work. Is there a proper way to do this?
|
ansible, ansible-vault
| 24
| 25,162
| 3
|
https://stackoverflow.com/questions/36134552/use-multiple-var-files-in-ansible-role
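One pattern that keeps the secrets inside the role: only vars/main.yml is loaded automatically, but include_vars can pull in a sibling file explicitly, and it resolves relative names against the role's vars/ directory. A sketch:

    # roles/myrole/tasks/main.yml
    - name: Load encrypted role variables (roles/myrole/vars/vault.yml)
      include_vars: vault.yml
      no_log: true

vars/main.yml then stays plaintext in SCM while vault.yml is encrypted with ansible-vault.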
|
29,271,967
|
Ansible: How can I set log file name dynamically?
|
I'm currently developing an Ansible script to build and deploy a Java project. So, I can set the log_path like below log_path=/var/log/ansible.log But, it is hard to look up build history. Is it possible to append datetime to log file name? For example, ansible.20150326145515.log
|
Ansible: How can I set log file name dynamically? I'm currently developing an Ansible script to build and deploy a Java project. So, I can set the log_path like below log_path=/var/log/ansible.log But, it is hard to look up build history. Is it possible to append datetime to log file name? For example, ansible.20150326145515.log
|
ansible
| 24
| 30,007
| 8
|
https://stackoverflow.com/questions/29271967/ansible-how-can-i-set-log-file-name-dynamically
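log_path in ansible.cfg is static, but the same setting is also read from the ANSIBLE_LOG_PATH environment variable, which a wrapper can compute per run. A sketch, assuming bash:

    #!/bin/bash
    # One timestamped log per invocation, e.g. ansible.20150326145515.log
    export ANSIBLE_LOG_PATH="/var/log/ansible.$(date +%Y%m%d%H%M%S).log"
    ansible-playbook "$@"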
|
33,701,062
|
How can Ansible "register" in a variable the result of including a playbook?
|
How can an Ansible playbook register in a variable the result of including another playbook? For example, would the following register the result of executing tasks/foo.yml in result_of_foo ? tasks: - include: tasks/foo.yml - register: result_of_foo How else can Ansible record the result of a task sequence?
|
How can Ansible "register" in a variable the result of including a playbook? How can an Ansible playbook register in a variable the result of including another playbook? For example, would the following register the result of executing tasks/foo.yml in result_of_foo ? tasks: - include: tasks/foo.yml - register: result_of_foo How else can Ansible record the result of a task sequence?
|
ansible
| 24
| 79,862
| 2
|
https://stackoverflow.com/questions/33701062/how-can-ansible-register-in-a-variable-the-result-of-including-a-playbook
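register does not attach to an include; the usual route is to register inside the included file, since registered variables stay visible after the include returns. A sketch (do-foo is a hypothetical command; on pre-2.4 Ansible spell the include as include:):

    # tasks/foo.yml
    - command: /usr/local/bin/do-foo
      register: result_of_foo

    # calling playbook
    tasks:
      - include_tasks: tasks/foo.yml
      - debug:
          var: result_of_foo   # still defined here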
|
32,468,350
|
Ansible Copy vs Synchronize
|
What are the pros and cons to using Ansible Synchronize vs Copy modules. As far as I can tell synchronize has all the functionality that copy does but may be much faster so I'm considering changing everything to use synchronize. The only downside of synchronize is that rsync is required, which seems fairly ubiquitous in the Linux environment.
|
Ansible Copy vs Synchronize What are the pros and cons to using Ansible Synchronize vs Copy modules. As far as I can tell synchronize has all the functionality that copy does but may be much faster so I'm considering changing everything to use synchronize. The only downside of synchronize is that rsync is required, which seems fairly ubiquitous in the Linux environment.
|
ansible
| 24
| 27,210
| 4
|
https://stackoverflow.com/questions/32468350/ansible-copy-vs-synchronize
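A typical synchronize call, sketched below; beyond requiring rsync on both ends, its become and ownership/permission handling is more limited than copy's, which is worth checking before converting everything:

    - name: Push a directory tree with rsync delta transfer
      synchronize:
        src: files/webroot/
        dest: /var/www/html/
        delete: yes   # remove remote files absent from src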
|
24,732,627
|
Ansible Roles and handlers - Cannot get role handlers to work
|
I need to set up Apache/mod_wsgi in Centos 6.5 so my main YAML file is as such: --- - hosts: dev tasks: - name: Updates yum installed packages yum: name=* state=latest - hosts: dev roles: - { role: apache } This should update all yum-installed packages then execute the apache role. The apache role is configured to install Apache/mod_wsgi, set Apache to start at boot time and restart it. The following are the contents of roles/apache/tasks/main.yml : --- - name: Installs httpd and mod_wsgi yum: name={{ item }} state=latest with_items: - httpd - mod_wsgi notify: - enable httpd - restart httpd And the handlers in roles/apache/handlers/main.yml : --- - name: enable httpd service: name=httpd enabled=yes - name: restart httpd service: name=httpd state=restarted The handlers do not seem to run since the following output is given when I execute the playbook: PLAY [dev] ******************************************************************** GATHERING FACTS *************************************************************** ok: [dev.example.com] TASK: [Updates yum installed packages] **************************************** ok: [dev.example.com] PLAY [dev] ******************************************************************** GATHERING FACTS *************************************************************** ok: [dev.example.com] TASK: [apache | Installs httpd and mod_wsgi] ********************************** ok: [dev.example.com] => (item=httpd,mod_wsgi) PLAY RECAP ******************************************************************** dev.example.com : ok=4 changed=0 unreachable=0 failed=0 And when I vagrant ssh into the virtual machine, sudo service httpd status shows httpd is stopped and sudo chkconfig --list shows it has not been enabled to be started by init . I'm just starting out with Ansible, so is there something obvious I could be missing?
|
Ansible Roles and handlers - Cannot get role handlers to work I need to set up Apache/mod_wsgi in Centos 6.5 so my main YAML file is as such: --- - hosts: dev tasks: - name: Updates yum installed packages yum: name=* state=latest - hosts: dev roles: - { role: apache } This should update all yum-installed packages then execute the apache role. The apache role is configured to install Apache/mod_wsgi, set Apache to start at boot time and restart it. The following are the contents of roles/apache/tasks/main.yml : --- - name: Installs httpd and mod_wsgi yum: name={{ item }} state=latest with_items: - httpd - mod_wsgi notify: - enable httpd - restart httpd And the handlers in roles/apache/handlers/main.yml : --- - name: enable httpd service: name=httpd enabled=yes - name: restart httpd service: name=httpd state=restarted The handlers do not seem to run since the following output is given when I execute the playbook: PLAY [dev] ******************************************************************** GATHERING FACTS *************************************************************** ok: [dev.example.com] TASK: [Updates yum installed packages] **************************************** ok: [dev.example.com] PLAY [dev] ******************************************************************** GATHERING FACTS *************************************************************** ok: [dev.example.com] TASK: [apache | Installs httpd and mod_wsgi] ********************************** ok: [dev.example.com] => (item=httpd,mod_wsgi) PLAY RECAP ******************************************************************** dev.example.com : ok=4 changed=0 unreachable=0 failed=0 And when I vagrant ssh into the virtual machine, sudo service httpd status shows httpd is stopped and sudo chkconfig --list shows it has not been enabled to be started by init . I'm just starting out with Ansible, so is there something obvious I could be missing?
|
ansible
| 24
| 31,297
| 1
|
https://stackoverflow.com/questions/24732627/ansible-roles-and-handlers-cannot-get-role-handlers-to-work
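Handlers only run when the notifying task reports changed; here yum reports ok because everything is already at the latest version, so the notifications never fire. Boot-time enablement is desired state rather than a reaction to a change, so it arguably belongs in a regular task; a sketch in the same key=value style:

    - name: Ensure httpd is enabled at boot and running
      service: name=httpd enabled=yes state=started

Keeping restart httpd as a handler still makes sense for config-file tasks, where a changed result is meaningful.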
|
30,552,317
|
problems using Ansible to run docker tasks on OS X
|
I am just trying to tell Ansible to build a docker image on my OS X machine and this is the error I get: $ ansible-playbook main.yml PLAY [localhost] ************************************************************** GATHERING FACTS *************************************************************** ok: [localhost] TASK: [Build docker image from dockerfiles] *********************************** failed: [localhost] => {"changed": false, "failed": true} msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),) FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=1 changed=0 unreachable=0 failed=1 This is the main.yml file I am using: --- - hosts: localhost connection: local tasks: - name: Build docker image from dockerfiles docker_image: name: testimage path: test state: build My Dockerfile: # Build an example Docker container image. FROM busybox MAINTAINER Jeff Geerling <geerlingguy@mac.com> # Run a command when the container starts. CMD ["/bin/true”]” My docker file is located in cookbook/test/Dockerfile And the main.yml file is located in cookbook/main.yml I'm running this on OS X. I am totally lost at this point and any help would be very appreciated. edit: In response to Nathanial's request that I use -vvvv I get the following error: (this is where the path is set to the subdirectory "test") TASK: [Build docker image from dockerfiles] *********************************** <localhost> ESTABLISH CONNECTION FOR USER: ronny <localhost> REMOTE_MODULE docker_image name=test state=build path=test <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724 && echo $HOME/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724' <localhost> PUT /var/folders/s1/g6kswg952gvg5df6wld173480000gn/T/tmp3g0PIz TO /Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image; rm -rf /Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/ >/dev/null 2>&1' failed: [localhost] => {"failed": true, "parsed": false} Traceback (most recent call last): File "/Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image", line 1852, in <module> main() File "/Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image", line 235, in main image_id = manager.build() File "/Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image", line 140, in build stream = self.client.build(self.path, tag=':'.join([self.name, self.tag]), nocache=self.nocache, rm=True, stream=True) File 
"/usr/local/lib/python2.7/site-packages/docker/client.py", line 319, in build raise TypeError("You must specify a directory to build in path") TypeError: You must specify a directory to build in path OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: auto-mux: Trying existing master debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug3: mux_client_forwards: request forwardings: 0 local, 0 remote debug3: mux_client_request_session: entering debug3: mux_client_request_alive: entering debug3: mux_client_request_alive: done pid = 78798 debug3: mux_client_request_session: session request sent debug1: mux_client_request_session: master session id: 2 debug3: mux_client_read_packet: read header failed: Broken pipe debug2: Received exit status from master 0 Shared connection to localhost closed. FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=1 changed=0 unreachable=0 failed=1 In response to the above error , I tried giving the full path to the build directory where Dockerfile is located path: "/Users/ronny/projects/dockers/tutorial/ansibledocker/test” then I got: TASK: [Build docker image from dockerfiles] *********************************** <localhost> ESTABLISH CONNECTION FOR USER: ronny <localhost> REMOTE_MODULE docker_image name=test state=build path=/Users/ronny/projects/dockers/tutorial/ansibledocker/test <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012 && echo $HOME/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012' <localhost> PUT /var/folders/s1/g6kswg952gvg5df6wld173480000gn/T/tmplH4Lln TO /Users/ronny/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012/docker_image <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/ronny/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012/docker_image; rm -rf /Users/ronny/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012/ >/dev/null 2>&1' failed: [localhost] => {"changed": false, "failed": true} msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),) FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=1 changed=0 unreachable=0 failed=1 edit #2 OK, I tried doing something different. instead of building an image, I tried to simply start one. 
After digging in lib/python2.7/site-packages/ansible/modules/core/cloud/docker/docker.py I noticed that in class DockerManager.init around line 558, they set a few environment variables: env_host = os.getenv('DOCKER_HOST') env_docker_verify = os.getenv('DOCKER_TLS_VERIFY') env_cert_path = os.getenv('DOCKER_CERT_PATH') env_docker_hostname = os.getenv('DOCKER_TLS_HOSTNAME') So I spat out those values into a log file, and lo and behold, none of them were being set. Then when I set them directly in main.yml: - name: start container docker: name: mydata image: busybox state: present environment: DOCKER_HOST: tcp://192.168.59.103:2376 DOCKER_TLS_VERIFY: 1 DOCKER_CERT_PATH: /Users/ronny/.boot2docker/certs/boot2docker-vm DOCKER_TLS_HOSTNAME: boot2docker I was able to successfully start a container! However, this method did not work with my initial issue, which is to build a docker image. Digging further in docker_image.py I noticed that it breaks down around line 188 (I say "around" because I have logging breaks so I don't know the exact line) where it has the following code: images = self.client.images() so more digging and I see that self.client is checking out the docker_url at 'unix://var/run/docker.sock' but over at this link I see that /var/run/docker.sock does not exist on OS X, instead one reply said that /var/run/docker.sock will not be on your OSX filesystem - the Docker daemon is running inside the boot2docker VM - and that's where the unix socket is. That serial file is also not related to the docker socket. You need to talk to the TCP socket specified in the DOCKER_HOST env. Now I tried setting docker_url to the DOCKER_HOST URL, as the description for this module says: docker_url: description: - URL of docker host to issue commands to required: false default: unix://var/run/docker.sock aliases: [] but when I set it to the DOCKER_HOST address, I got an error. Here is main.yml - name: Build docker image from dockerfiles docker_image: name: testimage # path: test path: /Users/ronny/projects/dockers/tutorial/ansibledocker/test state: build docker_url: 192.168.59.103:2376 and here is the error: failed: [localhost] => {"changed": false, "failed": true} msg: ConnectionError(ProtocolError('Connection aborted.', BadStatusLine('\x15\x03\x01\x00\x02\x02\n',)),) FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=3 changed=0 unreachable=0 failed=1 Any ideas?
|
problems using Ansible to run docker tasks on OS X I am just trying to tell Ansible to build a docker image on my OS X machine and this is the error I get: $ ansible-playbook main.yml PLAY [localhost] ************************************************************** GATHERING FACTS *************************************************************** ok: [localhost] TASK: [Build docker image from dockerfiles] *********************************** failed: [localhost] => {"changed": false, "failed": true} msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),) FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=1 changed=0 unreachable=0 failed=1 This is the main.yml file I am using: --- - hosts: localhost connection: local tasks: - name: Build docker image from dockerfiles docker_image: name: testimage path: test state: build My Dockerfile: # Build an example Docker container image. FROM busybox MAINTAINER Jeff Geerling <geerlingguy@mac.com> # Run a command when the container starts. CMD ["/bin/true”]” My docker file is located in cookbook/test/Dockerfile And the main.yml file is located in cookbook/main.yml I'm running this on OS X. I am totally lost at this point and any help would be very appreciated. edit: In response to Nathanial's request that I use -vvvv I get the following error: (this is where the path is set to the subdirectory "test") TASK: [Build docker image from dockerfiles] *********************************** <localhost> ESTABLISH CONNECTION FOR USER: ronny <localhost> REMOTE_MODULE docker_image name=test state=build path=test <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724 && echo $HOME/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724' <localhost> PUT /var/folders/s1/g6kswg952gvg5df6wld173480000gn/T/tmp3g0PIz TO /Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image; rm -rf /Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/ >/dev/null 2>&1' failed: [localhost] => {"failed": true, "parsed": false} Traceback (most recent call last): File "/Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image", line 1852, in <module> main() File "/Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image", line 235, in main image_id = manager.build() File "/Users/ronny/.ansible/tmp/ansible-tmp-1433079033.35-4833710313724/docker_image", line 140, in build stream = self.client.build(self.path, tag=':'.join([self.name, self.tag]), nocache=self.nocache, rm=True, stream=True) File 
"/usr/local/lib/python2.7/site-packages/docker/client.py", line 319, in build raise TypeError("You must specify a directory to build in path") TypeError: You must specify a directory to build in path OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: auto-mux: Trying existing master debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug3: mux_client_forwards: request forwardings: 0 local, 0 remote debug3: mux_client_request_session: entering debug3: mux_client_request_alive: entering debug3: mux_client_request_alive: done pid = 78798 debug3: mux_client_request_session: session request sent debug1: mux_client_request_session: master session id: 2 debug3: mux_client_read_packet: read header failed: Broken pipe debug2: Received exit status from master 0 Shared connection to localhost closed. FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=1 changed=0 unreachable=0 failed=1 In response to the above error , I tried giving the full path to the build directory where Dockerfile is located path: "/Users/ronny/projects/dockers/tutorial/ansibledocker/test” then I got: TASK: [Build docker image from dockerfiles] *********************************** <localhost> ESTABLISH CONNECTION FOR USER: ronny <localhost> REMOTE_MODULE docker_image name=test state=build path=/Users/ronny/projects/dockers/tutorial/ansibledocker/test <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012 && echo $HOME/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012' <localhost> PUT /var/folders/s1/g6kswg952gvg5df6wld173480000gn/T/tmplH4Lln TO /Users/ronny/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012/docker_image <localhost> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/ronny/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 localhost /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/ronny/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012/docker_image; rm -rf /Users/ronny/.ansible/tmp/ansible-tmp-1433079137.87-213359153110012/ >/dev/null 2>&1' failed: [localhost] => {"changed": false, "failed": true} msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),) FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=1 changed=0 unreachable=0 failed=1 edit #2 OK, I tried doing something different. instead of building an image, I tried to simply start one. 
After digging in lib/python2.7/site-packages/ansible/modules/core/cloud/docker/docker.py I noticed that in class DockerManager.init around line 558, they set a few environment variables: env_host = os.getenv('DOCKER_HOST') env_docker_verify = os.getenv('DOCKER_TLS_VERIFY') env_cert_path = os.getenv('DOCKER_CERT_PATH') env_docker_hostname = os.getenv('DOCKER_TLS_HOSTNAME') So I spat out those values into a log file, and lo and behold, none of them were being set. Then when I set them directly in main.yml: - name: start container docker: name: mydata image: busybox state: present environment: DOCKER_HOST: tcp://192.168.59.103:2376 DOCKER_TLS_VERIFY: 1 DOCKER_CERT_PATH: /Users/ronny/.boot2docker/certs/boot2docker-vm DOCKER_TLS_HOSTNAME: boot2docker I was able to successfully start a container! However, this method did not work with my initial issue, which is to build a docker image. Digging further in docker_image.py I noticed that it breaks down around line 188 (I say "around" because I have logging breaks so I don't know the exact line) where it has the following code: images = self.client.images() so more digging and I see that self.client is checking out the docker_url at 'unix://var/run/docker.sock' but over at this link I see that /var/run/docker.sock does not exist on OS X, instead one reply said that /var/run/docker.sock will not be on your OSX filesystem - the Docker daemon is running inside the boot2docker VM - and that's where the unix socket is. That serial file is also not related to the docker socket. You need to talk to the TCP socket specified in the DOCKER_HOST env. Now I tried setting docker_url to the DOCKER_HOST URL, as the description for this module says: docker_url: description: - URL of docker host to issue commands to required: false default: unix://var/run/docker.sock aliases: [] but when I set it to the DOCKER_HOST address, I got an error. Here is main.yml - name: Build docker image from dockerfiles docker_image: name: testimage # path: test path: /Users/ronny/projects/dockers/tutorial/ansibledocker/test state: build docker_url: 192.168.59.103:2376 and here is the error: failed: [localhost] => {"changed": false, "failed": true} msg: ConnectionError(ProtocolError('Connection aborted.', BadStatusLine('\x15\x03\x01\x00\x02\x02\n',)),) FATAL: all hosts have already failed -- aborting PLAY RECAP ******************************************************************** to retry, use: --limit @/Users/ronny/main.retry localhost : ok=3 changed=0 unreachable=0 failed=1 Any ideas?
|
macos, docker, ansible
| 24
| 2,786
| 4
|
https://stackoverflow.com/questions/30552317/problems-using-ansible-to-run-docker-tasks-on-os-x
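The final BadStatusLine starting with \x15\x03\x01 looks like a TLS alert: the daemon on 2376 expects TLS while the client spoke plain HTTP, and the docker_url is also missing its scheme. A heavily hedged sketch for this boot2docker-era setup (whether the 1.x docker_image module honours these TLS environment settings varied by version; the values come from boot2docker shellinit):

    - name: Build docker image from dockerfiles
      docker_image:
        name: testimage
        path: /Users/ronny/projects/dockers/tutorial/ansibledocker/test
        state: build
        docker_url: tcp://192.168.59.103:2376   # note the tcp:// scheme
      environment:
        DOCKER_TLS_VERIFY: 1
        DOCKER_CERT_PATH: /Users/ronny/.boot2docker/certs/boot2docker-vm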
|
41,667,864
|
Can the templates module handle multiple templates / directories?
|
I believe the Ansible copy module can take a whole bunch of "files" and copy them in one hit. This I believe can be achieved by copying a directory recursively. Can the Ansible template module take a whole bunch of "templates" and deploy them in one hit? Is there such a thing as deploying a folder of templates and applying them recursively?
|
Can the templates module handle multiple templates / directories? I believe the Ansible copy module can take a whole bunch of "files" and copy them in one hit. This I believe can be achieved by copying a directory recursively. Can the Ansible template module take a whole bunch of "templates" and deploy them in one hit? Is there such a thing as deploying a folder of templates and applying them recursively?
|
ansible, ansible-2.x, ansible-template
| 23
| 36,402
| 5
|
https://stackoverflow.com/questions/41667864/can-the-templates-module-handle-multiple-templates-directories
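template itself takes a single file, but looping over the filetree lookup (Ansible 2.1+) walks a directory of templates recursively. A sketch that mirrors templates/etc/ onto /etc, stripping an assumed .j2 suffix:

    - name: Render every template found under templates/etc/
      template:
        src: "{{ item.src }}"
        dest: "/etc/{{ item.path | regex_replace('\\.j2$', '') }}"
      with_filetree: templates/etc/
      when: item.state == 'file'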
|