(Translation) Thoughts on deploying with Ansible
(Translator's note: This article is a translation of "Thoughts on deploying with Ansible" by Ramon de la Fuente, translated and published with his permission. It was originally published in June 2014, but I believe it still holds up as of January 2015.)
We wrote an Ansible role to simplify our deployment procedure (we used Capistrano before). The role is now fairly complete and we are starting to use it in production, but at the beginning we had quite a few discussions about the approach. I would like to share those discussions with you here.
What is deployment?
Let's first define "deployment". When deploying, we assume that "provisioning" has already been done and that permissions and the like are properly in place.
We divided the deployment into the following five steps.
- Update code base and settings
- Install dependencies
- Preserve shared resources
- Build
- Finalize
In addition, as shown below, current is a symbolic link to the latest release, and we keep Capistrano's directory structure.
.
├── releases
| ├── 20140415234508
| └── 20140415235146
├── shared
| ├── sessions
| ├── source
| └── uploads
└── current -> releases/20140415235146
Role
It is not difficult to follow Capistrano's model when writing a role that executes these tasks. Mapped directly onto Ansible modules, it looks like this (roughly sketched after the list below):
- git or synchronize + copy or template
- command or shell
- file
- command or shell
- file
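A rough sketch of that mapping could look like the following. The paths, the timestamp variable and the concrete commands are made up for illustration; the actual role is more involved.

# Illustrative only: one task per step, mapped onto the modules listed above
- name: Update the code base                      # git (or synchronize)
  git: repo={{ project_git_repo }} dest={{ project_root }}/releases/{{ timestamp }}

- name: Install dependencies                      # command or shell
  command: composer install chdir={{ project_root }}/releases/{{ timestamp }}

- name: Preserve shared resources                 # file
  file: src={{ project_root }}/shared/uploads dest={{ project_root }}/releases/{{ timestamp }}/web/uploads state=link

- name: Build                                     # command or shell
  command: app/console cache:clear chdir={{ project_root }}/releases/{{ timestamp }}

- name: Finalize by switching the symlink         # file
  file: src={{ project_root }}/releases/{{ timestamp }} dest={{ project_root }}/current state=link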
There are a couple of other jobs that cannot really be called deployment, but they are easy to create:
- Creating a timestamp
- Cleaning up old releases
The timestamp is created with the command module and stored in a local variable with register. Its value is found in your_registered_variable.stdout (so you can use any format you like).
tasks:
  - command: date '+%Y%m%d%H%M%S'
    register: date_output
  - debug: msg={{ date_output.stdout }}
As a matter of fact we use timestamps, but they are not strictly required. You can use any format, as long as it is unique per release (a commit hash for example, provided you never deploy the same version twice).
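For example, naming the release after the commit being deployed instead of a timestamp could look something like this (a sketch only, run against the checkout on the control machine):

tasks:
  # Ask the local git checkout which commit we are about to deploy
  - local_action: command git rev-parse --short HEAD
    register: release_name
  - debug: msg={{ release_name.stdout }}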
The cleanup command is a little more complicated. You need to get a list of the release directories on the remote machine. Using register again, you keep the newest n releases and loop over the rest to remove them.
tasks:
  - command: ls releases
    register: ls_output
  - debug: msg={{ item }}
    with_items: ls_output.stdout_lines
Of course you could have used 'var=ls_output.stdout_lines' to debug the same thing, but the point here is looping over the list with with_items.
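To actually remove the old directories instead of just printing them, something along these lines would work; keeping the three newest releases is an arbitrary choice for the example (in practice the deploy module described next handles this for us):

tasks:
  # List release directories, newest first
  - command: ls -1t releases
    register: ls_output
  # Drop the three newest entries, delete everything else
  - file: path=releases/{{ item }} state=absent
    with_items: "{{ ls_output.stdout_lines[3:] }}"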
Because these jobs end up spread over multiple tasks, and because they are extremely easy to write in Python, I created a module called "deploy" and added it to the role. This module guarantees that the directory structure is in place, and it returns the timestamp as a return value.
- name: Initialize
  deploy: "path={{ project_root }} state=present"
- debug: msg={{ deploy.new_release }}
And this is how old releases are removed:
- name: Remove old releases
  deploy: "path={{ project_root }} state=clean"
This is much easier to understand.
Where the problems started
The problems started when we tried to borrow the concept of reuse from Capistrano. In Capistrano, for example, you can write callbacks; things like 'before_X' or 'after_Y'. Capistrano also lets you write rollback code (executed whenever something fails). This means that in Capistrano the deployment process can be modified and reused across projects. Or rather, Capistrano lets you inject operations anywhere in the deployment procedure. After all, it is Ruby. Asking for user input with a computed default value, or performing some operation just before finalizing, are common examples. But that is not how Ansible works.
Why is this concept a problem? Because Ansible is not a programming language. Repeat that sentence. Three times. Out loud. "You're not in Kansas anymore" (translator's note: a reference to the song "You're Not In Kansas Anymore").
What does a rollback actually do?
Well, when something goes wrong, it returns you to the previous state.
Valid release: "A-OK" ➙ deploy fails "BORKED" ➙ rollback ➙ valid release: "A-OK"
But at which point during the deployment do things actually fail? Hmm... when you are doing a destructive task, for example updating the DB schema (which is probably the most important reason for needing a rollback in the first place). So if you build a good DB upgrade/downgrade system into Ansible, this is not a problem.
In that case, the rollback is part of the DB update task. You therefore use register in combination with ignore_errors: True.
tasks:
  - command: sh -c 'exit 1'
    register: task_output
    ignore_errors: True
  - name: Rollback
    command: echo 'rollback'
    when: task_output|failed
    failed_when: True
The failed_when: True at the end is there to stop the deployment once the rollback has run. It is a bit dirty, but it works.
There is another way: checking the exit code of Ansible itself (this does not tell you why Ansible failed, though, so in practice this method needs a bit more functionality added):
ansible-playbook deploy.yml || ansible-playbook rollback.yml
The point here is that placing an appropriate rollback in the right place (like a DB downgrade) is usually overkill. Since we use Doctrine migrations, we avoid destructive DB updates such as renaming a column. We first perform a safe operation such as adding a column and moving the data over, and only in the next iteration perform the destructive operation, such as deleting the old column that is no longer referenced by the code.
On top of that, we decided to abandon the concept of a rollback altogether. The rollback is a lie!
What we really need is a way to decide what to deploy. So we created a task that puts a file called "BUILD_UNFINISHED" in the release. As long as this file exists, the release is considered unfinished. If the file is still present in a release directory when a new build begins, that release is considered failed and the directory is deleted (the deploy module removes it automatically).
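As a rough sketch of the idea (the file name comes from the text above; the tasks themselves are illustrative, since in reality the deploy module takes care of this):

- name: Mark the new release as unfinished
  file: path={{ deploy.new_release_path }}/BUILD_UNFINISHED state=touch

# ... clone, install dependencies, build ...

- name: Mark the release as finished
  file: path={{ deploy.new_release_path }}/BUILD_UNFINISHED state=absent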
With this method we gained an advantage over Capistrano: we can inspect the contents of a failed release.
Valid release: "A-OK" ➙ deploying "BORKED" ➙ failure
(the symbolic link is not replaced, A-OK is still live)
You can look inside the "BORKED" directory to find the problem.
Valid release: "A-OK" ➙ deploying "FIXED"
("BORKED" is removed from the releases folder before starting)
What about callbacks?
In order to reuse the deploy role between projects, you either need the projects to be nearly identical or you need a flexible mechanism. Unless you allow "arbitrary" commands to be "injected", you end up copying the entire role for every variation. The trick is to find a sweet spot with enough flexibility without creating a monster.
Our solution was to have a list of commands as a variable. The default value of the variable is an empty list: project_post_build_commands: []
The internal task looks like this:
- name: Run post_build_commands
  command: "{{ item }} chdir={{ deploy.new_release_path }}"
  with_items: project_post_build_commands
It is now possible to run any command after the dependency manager has run and before the symbolic link is switched. This is not a full callback, but within the fixed flow of operations it does hand the initiative back to you. For symmetry we also added project_pre_build_commands: [], which is executed before the dependency manager runs.
Addendum/note: The following does not work in Ansible 1.6.8 and later for security reasons. I have abandoned this idea.
We will be exploring the option of adding a task that uses Ansible's older action: notation. This means you could run any Ansible module (pseudo-code):
project_post_build_actions:
  - { module: '[some_module]', parameters: 'some_param=true some_other_param=some/value' }
  - { module: '[some_module]', parameters: 'some_param=false some_other_param=some/other/value' }
The task would then turn into something like:
- name: Run post_build_actions
  action: "{{ item.module }} {{ item.parameters }}"
  with_items: project_post_build_actions
This technique smells a bit of meta-Ansible, but it could be mighty useful if the rest of the role is otherwise a perfect fit.
How do I get the correct version to deploy?
We want to be able to select the release (tag) that is about to be deployed. Usually that is the latest tag, but occasionally we release a different one, so we want the latest tag as the default.
This turns out to be surprisingly difficult in Ansible. To get input from the user you use vars_prompt, and you can even set a default for it. However, when you try to use a variable or a lookup as the default, it is not evaluated in the prompt. So this:
vars_prompt:
  - name: "release_version"
    prompt: "Product release version"
    default: "{{ lookup('pipe','latest_git_tag.sh') }}"
turns into this:
$ ansible-playbook playbook.yml
Product release version [{{ lookup('pipe','latest_git_tag.sh') }}]:
The actual value of "release_version" does become the output of latest_git_tag.sh, but because the prompt is not rendered you cannot tell which version you are about to deploy. This problem is very stubborn: even if there is more than one play in the playbook, the vars_prompt section of the second play is evaluated before the first play is executed.
So we decided to wrap Ansible in a shell script. We made a bin/deploy script that asks the questions appropriate for the project and then runs ansible-playbook with the --extra-vars option. Using it looks like this:
$ bin/deploy
Product release version [v1.2.3]:
Project environment (Staging/Production) [staging]:
Running: ansible-playbook deploy.yml --limit staging --extra-vars "environment=staging project_version=v1.2.3"
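Inside the playbook, the injected project_version can then be used wherever the code is checked out; a minimal sketch (this is not the role's actual task):

- name: Check out the requested version
  git: repo={{ project_git_repo }} dest={{ deploy.new_release_path }} version={{ project_version }}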
Where is the maintenance mode?!
The last thing the deploy role does not implement is a "maintenance mode". Needless to say, you want one when you are about to perform destructive tasks (translator's note: translation uncertain). We simply check for the presence of a file in Nginx, so the file module or a command: touch maintenance is enough.
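Enabling and disabling it therefore boils down to two tiny tasks; the path is made up for the example, Nginx just has to check for the same file:

- name: Enable maintenance mode
  file: path={{ project_root }}/shared/maintenance state=touch

- name: Disable maintenance mode
  file: path={{ project_root }}/shared/maintenance state=absent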
Nonetheless, the one thing that always has to happen is separating the potentially dangerous parts of the deployment into their own roles (translator's note: translation uncertain). This happens after step 4 (the build tasks) and before step 5 (finalize).
We therefore made the role "open ended": whether the BUILD_UNFINISHED file is removed and the current symbolic link is switched is controlled by the project_finalize variable (True by default). If it is set to false, a project can add its own role to the deploy procedure, and that role can then set and cancel maintenance mode on its own responsibility.
If you want to start the deployment by enabling maintenance mode, it is better to do that in the pre_tasks of the deployment playbook; these run before the deploy role starts. If you want maintenance mode to always be lifted again, you can use the 'exit-code' technique described above (translator's note: post_tasks are not called if the playbook fails).
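Put together, a project that handles maintenance mode itself might arrange its playbook roughly like this (the acme_finalize role name is made up; it would remove BUILD_UNFINISHED, switch the symlink and lift maintenance mode on its own terms):

- name: Deploy with maintenance mode
  hosts: production
  pre_tasks:
    - name: Enable maintenance mode before the deploy role runs
      command: touch {{ project_root }}/shared/maintenance
  roles:
    - { role: f500.project_deploy, project_finalize: false }
    - acme_finalize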
Example playbook for the project_deploy role
Here is the example playbook that deploys the site of our user group, SweetlakePHP. It is a Symfony2 project using Assetic. There is nothing special about it: all the work is done in the role, and we use that same role to deploy other projects (not just Symfony2 ones). The only difference is in the project vars.
---
- name: Deploy the application
  hosts: production
  remote_user: "{{ production_deploy_user }}"
  sudo: no

  vars:
    project_root: "{{ sweetlakephp_root }}"
    project_git_repo: "{{ sweetlakephp_github_repo }}"
    project_deploy_strategy: git

    project_environment:
      SYMFONY_ENV: "prod"

    project_shared_children:
      - path: "/app/sessions"
        src: "sessions"
      - path: "/web/uploads"
        src: "uploads"

    project_templates:
      - name: parameters.yml
        src: "templates/parameters_prod.yml.j2"
        dest: "/app/config/parameters_prod.yml"

    project_has_composer: yes

    project_post_build_commands:
      - "php vendor/sensio/distribution-bundle/Sensio/Bundle/DistributionBundle/Resources/bin/build_bootstrap.php"
      - "app/console cache:clear"
      - "app/console doctrine:migrations:migrate --no-interaction"
      - "app/console assets:install"
      - "app/console assetic:dump"

  roles:
    - f500.project_deploy

  post_tasks:
    - name: Remove old releases
      deploy: "path={{ project_root }} state=clean"