(Translation) Thoughts on deploying with Ansible

(Translator's note: this is a translation of "Thoughts on deploying with Ansible" by Ramon de la Fuente, translated and published with his permission. It was published in June 2014, but I believe it still holds as of January 2015.)

We wrote an Ansible role to simplify our deployment procedure (we used Capistrano before). The role has now matured to the point where we are starting to use it in production. However, at the beginning we debated several approaches, and I thought I would share that discussion with you.

What is deployment?

Let's first define "deployment". For the purposes of deployment, we assume that "provisioning" is already done and that permissions and the like are properly set up.

We divided the deployment into the following five steps.

  1. Update the code base and configuration
  2. Install dependencies
  3. Preserve shared resources
  4. Build
  5. Finalize

We also assume the Capistrano-style directory structure shown below, where current is a symbolic link to the latest release.

.
├── releases
|   ├── 20140415234508
|   └── 20140415235146
├── shared
|   ├── sessions
|   ├── source
|   └── uploads
└── current -> releases/20140415235146

Role

It is not difficult to follow Capistrano's model when writing a role that executes these tasks. Mapping the steps directly onto Ansible modules gives something like this (a minimal sketch follows the list).

  1. git or synchronize + copy or template
  2. command or shell
  3. file
  4. command or shell
  5. file
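As an illustration only, here is a minimal sketch of those five steps as tasks. The variable names (project_root, project_git_repo, new_release) and the composer/symlink details are assumptions made for the sketch, not the actual role:

- name: Update code base (1)
  git: repo={{ project_git_repo }} dest={{ project_root }}/releases/{{ new_release }}

- name: Install dependencies (2)
  command: composer install chdir={{ project_root }}/releases/{{ new_release }}

- name: Preserve shared resources (3)
  file: src={{ project_root }}/shared/uploads dest={{ project_root }}/releases/{{ new_release }}/uploads state=link

- name: Build (4)
  command: make chdir={{ project_root }}/releases/{{ new_release }}

- name: Finalize (5)
  file: src={{ project_root }}/releases/{{ new_release }} dest={{ project_root }}/current state=link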

There are a couple of other jobs that are not strictly part of the deployment itself, but they are easy to create:

  • Creating a timestamp
  • Cleaning up old releases

The timestamp is created with the command module and stored in a local variable with register. The value lives in your_registered_variable.stdout (so you can use any format you like).

tasks:
  - command: date '+%Y%m%d%H%M%S'
    register: date_output

  - debug: msg={{ date_output.stdout }}

As a matter of fact, we use timestamps, but they are not strictly required. You can use any format as long as it is unique per release (e.g. a commit hash, provided you never deploy the same version twice).
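For instance, a sketch that registers an abbreviated commit hash instead of a timestamp (the chdir location is an assumption):

tasks:
  - command: git rev-parse --short HEAD chdir={{ project_root }}/current
    register: release_id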

The cleanup command is a little more complicated. You need to get a list of the release directories on the remote host. Using register once more, you keep the latest n releases and loop over the rest to remove them.

tasks:
  - command: ls releases
    register: ls_output

  - debug: msg={{ item }}
    with_items: ls_output.stdout_lines

Of course, you could just inspect it with debug var=ls_output.stdout_lines, but the point here is looping over the list with with_items.
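To actually remove the old releases, a minimal sketch could look like this (keeping the three newest; the ls -1t sorting, newest first, and the number to keep are assumptions of the sketch):

tasks:
  - command: ls -1t {{ project_root }}/releases
    register: ls_output

  - file: path={{ project_root }}/releases/{{ item }} state=absent
    with_items: "{{ ls_output.stdout_lines[3:] }}"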

Because these steps would otherwise be spread across multiple tasks, and because this sort of thing is very easy to write in Python, I created a module called "deploy" and added it to the role. The module guarantees that the directory structure is in order, and returns the timestamp as a return value.

- name: Initialize
  deploy: "path={{ project_root }} state=present"

- debug: msg={{ deploy.new_release }}

This is how to delete old directories.

- name: Remove old releases
  deploy: "path={{ project_root }} state=clean"

This is much easier to understand.

When the trouble started

The problem occurred when we tried to borrow the concept of reuse from Capistrano. For example, in Capistrano you can write callbacks, things like 'before_X' or 'after_Y'. Capistrano also lets you write rollback code (executed when something fails). This means that in Capistrano the deployment process can be modified and reused in other projects. Or rather, Capistrano lets you add operations anywhere in the deployment procedure. After all, it is Ruby. It is common to act on calculated default values or user input, or to run something just before the end. But Ansible does not work that way.

Why is this concept a problem? Because Ansible is not a programming language. Repeat this. Three times. Out loud. "You're not in Kansas anymore." (Translator's note: "You're Not In Kansas Anymore" is a song title.)

What does a rollback do?

Well, when something goes wrong, you return to the previous state.

Valid release: "A-OK" ➙ deploy fails "BORKED" ➙ rollback ➙ valid release: "A-OK"

But at which point during the deployment does it fail? Hmm... when you are doing a destructive task, for example updating the DB schema (which, I would guess, is the most important reason for needing rollback). So if you build a good DB upgrade/downgrade system into Ansible, this is not a problem.

In this case, rollback is part of the DB update task. Therefore, use register in combination with ignore_errors: True.

tasks:
  - command: sh -c 'exit 1'
    register: task_output
    ignore_errors: True

  - name: Rollback
    command: echo 'rollback'
    when: task_output|failed
    failed_when: True

The failed_when: True at the end stops the deployment once the rollback has finished. This is a bit dirty, but it works.

There is another way: check the return value of Ansible itself. (However, this does not tell you why Ansible failed; in practice this method needs a bit more functionality.)

ansible-playbook deploy.yml || ansible-playbook rollback.yml

The point here is that placing an appropriate rollback in the right place (like a DB downgrade) is often too much work. Since we use Doctrine migrations, we avoid destructive DB updates such as column renames. After a safe operation such as adding a column and moving the data, we perform the destructive operation, such as deleting the old column no longer 'attached' to the code, in the next iteration.

So we decided to abandon the concept of rollback altogether. Rollback is a lie!

What we really needed was something to decide what to deploy. So we created a task that puts a file named "BUILD_UNFINISHED" in the release. As long as this file exists, the release is considered unfinished. If the file is still present in a release directory when a new build begins, that release is treated as failed and its directory is deleted. (The deploy module deletes it automatically.)

With this method, we gained an advantage over Capistrano: we can inspect the contents of a failed release.

Valid release: "A-OK" ➙ deploying "BORKED" ➙ failure
(the symbolic link is not replaced; A-OK is still live)

You can look at the contents of the "BORKED" directory and find the problem.

Valid release: "A-OK" ➙ deploying "FIXED"
("BORKED" is removed from the releases folder before this starts)
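In task form, the flag-file idea can be sketched roughly like this; the path is illustrative, and in our role the deploy module itself takes care of the check and cleanup:

- name: Mark build as unfinished
  file: path={{ deploy.new_release_path }}/BUILD_UNFINISHED state=touch

# ... update code base, install dependencies, build ...

- name: Mark build as finished
  file: path={{ deploy.new_release_path }}/BUILD_UNFINISHED state=absent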

Is there a callback?

To reuse the deploy role across projects, you need either nearly identical projects or a flexible mechanism. Unless you allow "arbitrary" commands to be "injected", you have to copy the entire role for every variation. The trick is to find a sweet spot: enough flexibility without creating a monster.

Our solution was a list of commands held in a variable, whose default is an empty list: project_post_build_commands: []

The internal task is like this.

- name: Run post_build_commands
  command: "{{ item }} chdir={{ deploy.new_release_path }}"
  with_items: project_post_build_commands

It is now possible to run any command after the dependency manager has run and after the symbolic link has been created. This is not a full callback mechanism, but within this fixed series of operations it is nothing less than getting the initiative back into your own hands. For symmetry, we also added project_pre_build_commands: [], executed before the dependency manager runs.

Postscript: note that the idea below does not work in Ansible 1.6.8 and later for security reasons. I have abandoned it.

We will be exploring the option of adding a task that uses the older action: notation of Ansible. This means you could run any Ansible module (pseudo code):

project_post_build_actions:
  - { module: '[some_module]', parameters: 'some_param=true some_other_param=some/value' }
  - { module: '[some_module]', parameters: 'some_param=false some_other_param=some/other/value' }

The task would then turn into something like:

- name: Run post_build_actions
  action: "{{ item.module }} {{ item.parameters }}"
  with_items: project_post_build_actions

This technique borders a bit on meta-Ansible, but it might be mighty useful if the rest of the role is a perfect fit.


How do I get the correct version to deploy?

We want to be able to choose the release (tag) that is about to be deployed. It is not always the latest tag, since occasionally we release a tag other than the newest one, but the latest tag should be the default.

This turns out to be surprisingly difficult in Ansible. To get input from the user you use vars_prompt, which can also take a default. However, variables and lookups are not evaluated inside the prompt. So this:

vars_prompt:
    - name: "release_version"
      prompt: "Product release version"
      default: "{{ lookup('pipe','latest_git_tag.sh') }}"

comes out like this:

$ ansible-playbook playbook.yml
Product release version [{{ lookup('pipe','latest_git_tag.sh') }}]:

The actual value of release_version does become the output of latest_git_tag.sh, but since the prompt is not rendered, you cannot see which version you are about to deploy. This problem is quite stubborn: even with multiple plays in a playbook, the vars_prompt section of the second play is evaluated before the first play runs.

So we decided to wrap Ansible in a shell script. We made a bin/deploy script that asks the questions appropriate for the project and then runs Ansible with the --extra-vars option. Using it looks like this.

$ bin/deploy
Product release version [v1.2.3]:
Project environment (Staging/Production) [staging]:

Running: ansible-playbook deploy.yml --limit staging --extra-vars "environment=staging project_version=v1.2.3"

Where is the maintenance mode?!

The last thing the deploy role does not implement is "maintenance mode". Needless to say, you want it when you are about to run destructive tasks (Translator's note: translation uncertain). We simply made Nginx check for the existence of a file, so the file module or a command: touch maintenance task is enough.

Nevertheless, what it always comes down to is separating the potentially dangerous parts of a deployment into separate roles (Translator's note: translation uncertain). This happens after step 4 (the build tasks) and before step 5 (finalize).

Therefore we made the role "open-ended". The removal of the BUILD_UNFINISHED file and the replacement of the current symbolic link are controlled by the project_finalize variable (default: True). If it is set to false, a project can add its own role to the deploy procedure, and that role takes on the responsibility of setting/clearing maintenance mode.
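For instance, a sketch of handing finalization to a project-specific role (my_finalize_role is a placeholder name):

vars:
  project_finalize: False

roles:
  - f500.project_deploy
  - my_finalize_role   # clears maintenance mode and replaces the symlink itself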

If you want to start the deployment by turning on maintenance mode, it is best to set it in the pre_tasks of the deployment playbook; these run before the deploy role starts. If you always clear maintenance mode, you can use the 'exit-code' technique described above. (Translator's note: post_tasks are not called if the playbook fails.)
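For example, a minimal sketch of enabling maintenance mode in pre_tasks (the path of the maintenance file checked by Nginx is an assumption here):

- hosts: production
  pre_tasks:
    - name: Enable maintenance mode
      command: touch {{ project_root }}/shared/maintenance
  roles:
    - f500.project_deploy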

Ansible.project_deploy playbook example

Here is the example playbook that deploys the site of our user group, SweetlakePHP. It is a Symfony 2 project using Assetic. There is nothing special about it: all the work is done in the role, and we use the same role to deploy other projects (not just Symfony 2 ones). Only the project vars differ.

---
 - name: Deploy the application
   hosts: production
   remote_user: "{{ production_deploy_user }}"
   sudo: no

   vars:
     project_root: "{{ sweetlakephp_root }}"
     project_git_repo: "{{ sweetlakephp_github_repo }}"
     project_deploy_strategy: git

     project_environment:
       SYMFONY_ENV: "prod"

     project_shared_children:
       - path: "/app/sessions"
         src: "sessions"
       - path: "/web/uploads"
         src: "uploads"

     project_templates:
       - name: parameters.yml
         src: "templates/parameters_prod.yml.j2"
         dest: "/app/config/parameters_prod.yml"

     project_has_composer: yes

     project_post_build_commands:
       - "php vendor/sensio/distribution-bundle/Sensio/Bundle/DistributionBundle/Resources/bin/build_bootstrap.php"
       - "app/console cache:clear"
       - "app/console doctrine:migrations:migrate --no-interaction"
       - "app/console assets:install"
       - "app/console assetic:dump"

   roles:
     - f500.project_deploy

   post_tasks:
     - name: Remove old releases
       deploy: "path={{ project_root }} state=clean"

Use Ansible's Fact Caching

This article is the third-day entry of the Ansible Advent Calendar 2014.

Note

When first published, this article was complete nonsense. I apologize!

Fact Caching is a feature introduced in Ansible 1.8 that makes it possible to use facts from other hosts.

Say you have two web servers and a db server, and the web servers must know the address of the db server. For this purpose you write:

- hosts: db
  tasks: []

- hosts: web
  tasks:
    - template: ...

Then, in a template for the web servers, you can use the db host's facts, for example {{ hostvars['db']['ansible_os_family'] }}.

Done this way, however, even though the playbook is for the web servers, you have to run a play against the db server first. Fact Caching caches facts for a certain period once gathered, removing the need to run a play against the db server every time.
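For example, the web-side play could render the db server's facts with a template task like this sketch (the file names and the fact used are illustrative only):

- hosts: web
  tasks:
    - template: src=db.conf.j2 dest=/etc/myapp/db.conf

# db.conf.j2 might contain, for example:
#   db_os_family = {{ hostvars['db']['ansible_os_family'] }}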

Try using Fact Caching

It is disabled by default, so write the following settings in ansible.cfg.

[defaults]
fact_caching = redis
fact_caching_timeout = 86400 # seconds

According to the documentation, redis is currently the only backend for storing cached facts.

Save to redis

To use redis, you need to install the redis Python library:

pip install redis

By the way, the following two keys appear to be registered in redis.

  • ansible_cache_keys
  • ansible_facts<target machine name>

Save to a file

It is not in the documentation, but besides redis you can also use:

  • jsonfile
  • memcached
  • memory

To use jsonfile, configure it like this.

[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/cache
fact_caching_timeout = 86400 # seconds

However, this area is not yet official and still in flux; at the moment, JSON cannot be written out when the facts contain Japanese, due to the code around

https://github.com/ansible/ansible/blob/60b51ef6c3e4eeca5ee1170ba32bc39284db97ae/lib/ansible/utils/__init__.py#L232

so please hold on. (An issue has been filed.)

Testing goji with net/http/httptest

I have been writing a web app with goji lately. Since I use it as an API server, testing is a concern.

golang ships with the net/http/httptest package for testing HTTP from the start. Using it makes tests easy to write.

httptest.Server creates a server on the local loopback interface.

(Postscript 2014/11/20): mopemope pointed out that "it is better to make the routing testable", so I split the route setup into a separate function and use that from the test.

Sample application

It is a simple app that returns JSON when you access the URL /hello/hoge.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"

    "github.com/zenazn/goji"
    "github.com/zenazn/goji/web"
)

type User struct {
    Id   int64
    Name string
}

func hello(c web.C, w http.ResponseWriter, r *http.Request) {
    u := User{
        Id:   1,
        Name: c.URLParams["name"],
    }

    j, _ := json.Marshal(u)
    fmt.Fprintf(w, string(j))
}

func Route(m *web.Mux) {
    m.Get("/hello/:name", hello)
}

func main() {
    Route(goji.DefaultMux)
    goji.Serve()
}

The test for this is as follows.

package main

import (
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/zenazn/goji/web"
)

func ParseResponse(res *http.Response) (string, int) {
    defer res.Body.Close()
    contents, err := ioutil.ReadAll(res.Body)
    if err != nil {
        panic(err)
    }
    return string(contents), res.StatusCode
}

func Test_Hello(t *testing.T) {
    m := web.New()
    Route(m)
    ts := httptest.NewServer(m)
    defer ts.Close()

    res, err := http.Get(ts.URL + "/hello/hoge")
    if err != nil {
        t.Error("unexpected")
    }
    c, s := ParseResponse(res)
    if s != http.StatusOK {
        t.Error("invalid status code")
    }
    if c != `{"Id":1,"Name":"hoge"}` {
        t.Error("invalid response")
    }
}

web.New() creates a goji/web HTTP multiplexer. Passing it to httptest.NewServer(m) starts the test server. Do not forget to close it with defer.

Note that ts.URL contains the URL and port of the launched test server, something like http://127.0.0.1:51923.

For simplicity I wrote a function called ParseResponse, but it is not required.

Please let me know if there are better ways.

Use WICED Sense from Intel Edison

There is a device called WICED Sense, a BLE tag made by Broadcom. Besides BLE, it carries the following five MEMS sensors.

  • Three-axis gyroscope
  • Accelerometer
  • Magnetometer (orientation)
  • Barometer
  • Humidity and temperature sensor

I photographed the Intel Edison and the WICED Sense on a 13-inch MacBook Pro; you can see how small they are.

2014/11/14/edison_wiced.png

You can buy it here, at 2,450 yen apiece (although it is sold out at the moment): http://www.macnicaonline.com/SHOP/BCM9WICED_SENSE.html

When you buy one, a plastic tab that prevents the battery from discharging is inserted in the battery box inside the unit, so remove it first. (This tripped me up, heh.)

Try connecting from a smartphone

Install the app named "WICED Sense" from Google Play. I use a Nexus 5, so this is the Android app, but an iPhone app is also available.

When you launch the app, Bluetooth turns on and scanning starts, so tap WICED when it shows up. (You may need to wake it up first with the button on top.)

  1. Tap Connect
  2. Press the button under WICED Sense

When it gives a short "bzz", the connection is complete. Information such as battery level and temperature appears on the screen.

2014/11/14/wiced_app.png

It is displayed like this.

The firmware has been updated, so choose "Check for software updates" from the menu and update to the latest version. It buzzes away noisily, but finishes in about 30 seconds.

Now let's continue with Intel Edison.

Try connecting from Intel Edison

Let's use WICED Sense from Intel Edison.

First, log in to Edison and enable Bluetooth with the following command. (I will not cover how to log in to Edison here.)

rfkill unblock bluetooth

Using it from cylon

There is a node library called cylonjs for controlling all sorts of hardware, such as Arduino, from JavaScript. It supports WICED Sense.

To connect to WICED Sense, you need to install the following three node packages on Edison.

  • npm install noble
  • npm install cylon-ble
  • npm install cylon-wiced-sense

Get the information

Save the following code on Edison and run it with node get_info.js. (The uuid is that of my device; yours will differ.)

var Cylon = require('cylon');

Cylon.robot({
  connection: { name: 'bluetooth', adaptor: 'central', module:
  'cylon-ble', uuid: '207377654321'},
  devices: [{name: 'battery', driver: 'ble-battery-service'},
            {name: 'deviceInfo', driver: 'ble-device-information'},
            {name: 'generic', driver: 'ble-generic-access'},
            {name: 'wiced', driver: 'wiced-sense'}],

  display: function(err, data) {
    if (err) {
      console.log("Error:", err);
    } else {
      console.log("Data:", data);
    }
  },

  work: function(my) {
    my.generic.getDeviceName(function(err, data){
      my.display(err, data);
      my.generic.getAppearance(function(err, data){
        my.display(err, data);
        my.deviceInfo.getManufacturerName(function(err, data){
          my.display(err, data);
          my.wiced.getData(function(err, data){
            my.display(err, data);
          });
        });
      });
    });
  }
}).start();

Data comes pouring out like the following. When connecting, you may need to press the button on top.

I, [2014-11-14T13:38:26.496Z]  INFO -- : Initializing connections.
I, [2014-11-14T13:38:26.511Z]  INFO -- : Initializing connection 'bluetooth'.
I, [2014-11-14T13:38:26.780Z]  INFO -- : Initializing devices.
I, [2014-11-14T13:38:26.782Z]  INFO -- : Initializing device 'battery'.
I, [2014-11-14T13:38:26.788Z]  INFO -- : Initializing device 'deviceInfo'.
I, [2014-11-14T13:38:26.790Z]  INFO -- : Initializing device 'generic'.
I, [2014-11-14T13:38:26.792Z]  INFO -- : Initializing device 'wiced'.
I, [2014-11-14T13:38:26.823Z]  INFO -- : Starting connections.
I, [2014-11-14T13:38:26.827Z]  INFO -- : Connecting to 'bluetooth'.
I, [2014-11-14T13:38:30.841Z]  INFO -- : Starting devices.
I, [2014-11-14T13:38:30.843Z]  INFO -- : Starting device 'battery'.
I, [2014-11-14T13:38:30.846Z]  INFO -- : Starting device 'deviceInfo'.
I, [2014-11-14T13:38:30.849Z]  INFO -- : Starting device 'generic'.
I, [2014-11-14T13:38:30.851Z]  INFO -- : Starting device 'wiced'.
I, [2014-11-14T13:38:30.854Z]  INFO -- : Working.
Data: WICED Sense Kit
Data: { value: 'Generic Tag', description: 'Generic category' }
Data: Broadcom
Data: { accelerometer: { x: 1, y: -1, z: 82 },
  gyroscope: { x: -26, y: -78, z: 203 },
  magnetometer: { x: 842, y: -523, z: -2101 } }
Data: { accelerometer: { x: 0, y: -3, z: 83 },
Data: { accelerometer: { x: 0, y: -2, z: 83 },
  gyroscope: { x: -24, y: -150, z: 261 },
  magnetometer: { x: 850, y: -532, z: -2099 } }
Data: { humidity: 618, pressure: 10155, temperature: 245 }

Now let's send this data over MQTT.

Sending it over MQTT

mqttcli is handy for sending over MQTT. Pre-built binaries are distributed on drone.io, so fetch the linux_386 build as follows (chmod is required).

curl -O https://drone.io/github.com/shirou/mqttcli/files/artifacts/bin/linux_386/mqttcli
chmod ugo+x mqttcli

Then write the connection information for MQTT as a Service: sango into ~/.mqttcli.cfg.

{
  "host": "sango.mqtt.example.jp",
  "port": 1883,
  "username": "shirou@github",
  "password": "BLAHBLAH"
}

After that, pipe the standard output of the earlier node script into it, and you can receive the data anywhere.

node get_info.js | ./mqttcli pub -t "shirou@github/edison" -s

From there you can do whatever you like: push it into a DB, send alerts, and so on.

Summary

WICED Sense is a BLE tag packed with sensors. I drove it from Intel Edison, and went on to send the data it produced over MQTT.

Intel Edison is compact, can be placed anywhere, and has WiFi as well. WICED Sense is also small and very handy. Adding MQTT on top makes handling sensor data easy.

How to use sango - C #

m2mqtt, the MQTT library that ppatierno had been developing, has been donated to paho.

http://www.eclipse.org/paho/clients/dotnet/

This library is compatible with the following .Net platforms.

  • .Net Framework (up to 4.5)
  • .Net Compact Framework 3.5 & 3.9 (for Windows Embedded Compact 7/2013)
  • .Net Micro Framework 4.2 & 4.3
  • Mono (for Linux OS)
  • WinRT platforms (Windows 8.1 and Windows Phone 8.1)

However, MQTT 3.1.1 is not supported at this time. (sango supports 3.1 as well.)

I have not verified it, but with this library it may be possible to use MQTT from Unity as well.

1. Download the library

You can download it from:

http://www.eclipse.org/paho/clients/dotnet/

However, since it is distributed through NuGet, I think it is easier to get it from NuGet. (Search for m2mqtt.)

2. Client implementation

Connect

class MQTT
{
    private MqttClient client;

    public void Connect(string brokerHostname, int brokerPort,
                        string userName, string password)
    {
        // pass true and a CA certificate when using SSL
        client = new MqttClient(brokerHostname, brokerPort, false, null);
        // generate a client id
        string clientId = Guid.NewGuid().ToString();
        client.Connect(clientId, userName, password);
    }
}

To connect, create a uPLibrary.Networking.M2Mqtt.MqttClient with new. MqttClient has several overloads, for example one that takes only the host name.

Subscribe

// method called when a message arrives
private void onReceive(object sender,
    uPLibrary.Networking.M2Mqtt.Messages.MqttMsgPublishEventArgs e)
{
    Console.WriteLine(e.Topic);
    string msg = Encoding.UTF8.GetString(e.Message);
    Console.WriteLine(msg);
}

public void Subscribe(string topic)
{
    // register the callback
    client.MqttMsgPublishReceived += this.onReceive;
    client.Subscribe(new string[] { topic },
        new byte[] { MqttMsgBase.QOS_LEVEL_AT_MOST_ONCE });
}

Register a handler on client.MqttMsgPublishReceived, then call Subscribe.

Publish

public void Publish(string topic, string msg)
{
    client.Publish(
        topic, Encoding.UTF8.GetBytes(msg),
        MqttMsgBase.QOS_LEVEL_AT_MOST_ONCE, false); // retain is false
}

Publishing is just that.

Reference

M2Mqtt Client
http://m2mqtt.wordpress.com/m2mqtt_doc/

The sango paid plan has started

MQTT as a Service: sango (https://sango.shiguredo.jp/), which lets you start using MQTT with nothing more than a GitHub account, has been well received, and a paid Standard plan has now been introduced. The free plan of course remains available as is.

The Standard plan offers:

  • Concurrent connections: 50
  • TLS support
  • QoS 0, 1, and 2 support
  • Maximum message size: 50 kilobytes
  • Maximum messages per month: 120,000

It is very practical compared with free plan.

On top of that, all of this costs just 500 yen per month (540 yen including tax). A bargain!

Cluster support

And the headline feature of the Standard plan is cluster support.

When you sign up for the Standard plan, connection endpoints are shown as below. (These are not the actual host names.)

2014/11/04/sango-standard.png

You can connect to either of them, and pub/sub works across both. That is, even if you

  • subscribe on the first one
  • publish to the second one

the message is still delivered to the subscriber connected to the first one.

For example, with the golang implementation of paho, you do this when setting the destination servers.

opts.AddBroker("tcp://good1.shiguredo.jp:1883")

You can call AddBroker any number of times; connection attempts are made in the order the brokers were added. In other words, even if one goes down, you can still connect as long as another server is alive.

(Unfortunately, golang is the only paho client with this kind of interface, but it is easy to write the fallback code yourself.)

Summary

sango, the MQTT service, now has a paid plan. Along with features such as TLS and QoS 2, a cluster feature has been added.

With the Standard plan you can use MQTT for real work, beyond just trying it out. At 500 yen a month it is a bargain, and I can confidently recommend it to anyone who wants to use MQTT in production.

入門Ansible (Introduction to Ansible): PDF and EPUB versions launched

I have started selling 入門Ansible (Introduction to Ansible) on Gumroad.

https://gumroad.com/l/TNHSc

PDF, EPUB, and MOBI are bundled together, so you can read it in whichever format you like.

This edition adds a Windows appendix. Operation has been confirmed with 1.7.2, the latest version at the time of writing. Windows support was planned to become official in 1.8, but that did not make it in time for this release.

It also remains on sale on Amazon. I have submitted the update with the Windows material, so I expect it to go live after about 48 hours. If you have already purchased it, sorry for the trouble, but please update it from the Amazon page.

Well, how to put it: on Amazon the cut is 35%, so I would be happier if you bought it on Gumroad.

If you notice anything, I would appreciate it if you filed an issue below.

https://bitbucket.org/r_rudi/ansible-book/issues

(There are issues that have not been solved yet ...)

Making Sphinx write to a directory other than _static

Note

For English readers

If you want to change the output directory from _static to something else, use this gist as an extension.

In Sphinx you sometimes want to write JS and other files to a place other than _static. tk0miya wrote an extension for exactly that.

Place it in the same directory as conf.py under a suitable name, and add it in conf.py:

sys.path.insert(0, os.path.abspath('.'))
extensions = ['sphinxcontrib_staticdir_hack']

That is all it takes.

In conf.py

staticdir_name = "any_directory_name_you_like"

Run make html and check that the static directory has changed.

Caution

This extension is a rather aggressive hack, so there is no guarantee it will keep working in future versions.

If there is demand for this, raising it on the ML or in the issue tracker might get it officially supported in Sphinx itself.

Running time-billed Windows on vultr

There is a VPS provider called vultr with a data center in Tokyo. (See also my earlier entry.)

https://vultr.com/

vultr lets you choose Windows 2012 R2 x64 as the OS, so this time I spun up Windows with it.

Plans

vultr's cheapest VPS starts at $5/month. However, Windows needs 20 GB for installation, so the cheapest VPS is not even selectable in the first place.

Also, since the license fee is included, prices are higher.

  • 1 CPU, 1024 MB RAM, 20 GB SSD: $21/month ($0.031/hour)
  • 2 CPU, 2048 MB RAM, 40 GB SSD: $29/month ($0.043/hour)
  • 2 CPU, 4096 MB RAM, 65 GB SSD: $49/month ($0.073/hour)
  • 4 CPU, 8192 MB RAM, 120 GB SSD: $84/month ($0.125/hour)
  • 4 CPU, 16384 MB RAM, 250 GB SSD: $139/month ($0.207/hour)

It feels much more expensive than Linux, but at roughly 3 yen per hour it is plenty for just trying things out. Hourly billing is the real advantage here.

Login

"How do you log in?" I think everyone is in doubt. Use RDP (Remote Desktop Protocol). You can get the user name and password from the control panel.

Impressions

With the data center in Tokyo, it runs quite smoothly over RDP. Japanese displays fine as well.

At our company, Tsukinowa, everyone is on a Mac, so there is no Windows machine. Rather than going to the trouble of launching one in a VM, for jobs like:

  • testing whether pages display properly in IE
  • using web services that only work in IE

I think vultr, billed only for the time you actually use, is a perfect fit.

Finally

If you register from this link and become a paying user, the credit we can use increases! Please support a tiny company!

Ansible coding convention (example)

edX publishes its Ansible coding conventions on GitHub.

https://github.com/edx/configuration/wiki/Ansible-Coding-Conventions

The repository is under the GNU AGPLv3, so I believe a translation is acceptable, and I am translating and publishing it here.


General

  • YAML file

    Indent all YAML files with 2 spaces and give them the .yml extension.

  • Variables

    Use the jinja variable format: {{ var }}, not $var.

  • Put a space before and after jinja variable names: {{ var }}, not {{var}}.

  • Variable names that need to be overwritten by your own environment should be all capital letters.

  • All variable names to be completed in the role should be all lowercase letters.

  • Always prefix variables defined in a role with the role name. Example: EDXAPP_FOO

  • Keep roles self-contained.

    Roles should not include tasks from other roles whenever possible.

  • A playbook should do no more than include a list of roles.

    However, please use it if you need pre_tasks and post_tasks. (For example, management of load balancers)

  • Playbooks that apply to the general community should go in configuration/playbooks (Translator's note: this is edX's own setup).

  • Playbooks that apply to a specific organization (edx-east, edx-west) should go in a subdirectory under configuration/playbooks (Translator's note: edX's own setup).

  • Tests

    Include automated integration tests in the playbook via test.yml. In addition, make it possible to toggle the tests with a variable called run_tests.

  • Deploys

    Put the series of tasks that updates the application, beginning with stopping the service and ending with starting it, in deploy.yml. Give every task in deploy.yml a tag called deploy.

  • Do not write tasks that affect the state of the application anywhere other than deploy.yml.

  • Every task in deploy.yml must be runnable by a user with limited sudo privileges; root authority must not be required.

  • Handlers

    Each role should have one or more handlers for restarting the service, for those tasks in main.yml that need it (a minimal sketch follows this list).
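As a minimal sketch of such a handler and a task that notifies it (the role and service names are placeholders):

# roles/myrole/handlers/main.yml
- name: myrole restart myservice
  service: name=myservice state=restarted

# roles/myrole/tasks/main.yml
- name: myrole install configuration
  template: src=myservice.conf.j2 dest=/etc/myservice.conf
  notify: myrole restart myservice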

Conditional expressions and return values

  • Always use when:.

    To check whether a variable is set, use when: my_var is defined or when: my_var is not defined.

  • To check a return value, do this:

    - command: /bin/false
      register: my_result
      ignore_errors: True

    - debug: msg="Task Failed"
      when: my_result|failed

Format

For long lines, use YAML line continuation to break them.

- debug: >
    msg={{ test }}

Alternatively, you can write this in ansible.

- debug:
    msg: "{{ test }}"

Roles

Role variables

  • group_vars/all

    Define variables that apply to all roles here.

  • "Common" role

    Please define variables and tasks that apply to all edX proprietary roles.

  • Role variables

    Define the role's own variables in vars/main.yml. Prefix all variable names with the role name.

  • Role defaults

    A role's default variables should be set so that all services can run on a single server.

  • Define variables that are unique to the environment and that need to be overwritten with all capital letters.

  • All roles should use a standard set of directories. Below is an example for the python and ruby virtualenvs.

    edxapp_rbenv_dir: "{{ edxapp_app_dir }}"
    edxapp_rbenv_root: "{{ edxapp_rbenv_dir }}/.rbenv"
    edxapp_rbenv_shims: "{{ edxapp_rbenv_root }}/shims"
    edxapp_rbenv_bin: "{{ edxapp_rbenv_root }}/bin"
    edxapp_gem_root: "{{ edxapp_rbenv_dir }}/gem"
    edxapp_gem_bin: "{{ edxapp_gem_root }}/bin"

Role naming conventions

  • Role names

    Keep them concise, a single word where possible. Use _ if needed.

  • Role task names

    Concise and descriptive; spaces are OK. Always prefix with the role name. (Translator's note: the role name is shown at run time anyway, so I do not think the prefix is really necessary...)

  • Role handlers

    Concise and descriptive; spaces are OK. Always prefix with the role name. (Translator's note: same as above.)

Secure vs. insecure data

As a basic policy, the following data needs to be protected.

  • usernames

  • public keys

    Even if the key itself is fine to publish, it could allow user names to be guessed

  • hostnames

  • passwords, API keys

Directory structure of a secure repository.

ansible
├── files
├── keys
└── vars

secure_dir is set in group_vars/all and overridden as needed by other group_vars files that use the group name.

For templates and files that need to be kept secure, use first_available_file.

- name: install read-only ssh key for the content repo that is required for grading
  copy: src={{ item }} dest=/etc/git-identity force=yes owner=ubuntu group=adm mode=600
  first_available_file:
    - "{{ secure_dir }}/files/git-identity"
    - "git-identity-example"

Summary

These conventions will not suit everyone as-is, but I think they are a useful reference.

Also, a lot of playbooks are published in the edX repository, so you may well make discoveries just by reading through them.