Sokohaka
http://tdoc.info/en/blog/
Python, Sphinx, Mercurial, PostgreSQL, MQTT, Ansible, etc.
Wed, 07 Dec 2016

Use go-swagger
http://tdoc.info/en/blog/2016/12/07/go_swagger.html

This article is the seventh-day entry of the Go (その2) Advent Calendar 2016.

Swagger has become popular recently as a tool for generating code automatically from an API definition (objections welcome).

In this field goa is well known, but goa only generates from a Go DSL; it cannot be used when a swagger definition already exists.

This article describes go-swagger, which is convenient when you already have a swagger definition.

The code used in this article is published at https://github.com/shirou/swagger_sample.

What is go-swagger?

go-swagger is a tool that generates Go server and client code from a swagger definition.

It can be installed with go get, and prebuilt binaries are also provided for environments without a Go toolchain.

How to use go-swagger: server

As an example, assume we define swagger.yml like the following.

produces:
  - application/json
paths:
  '/search':
    get:
      summary: get user information
      consumes:
        - application/x-www-form-urlencoded
      tags:
        - user
      parameters:
        - name: user_id
          type: integer
          in: formData
          description: user id
      responses:
        200:
          schema:
            $ref: '#/definitions/User'
        default:
          schema:
            $ref: '#/definitions/Error'
definitions:
  User:
    type: object
    required:
      - user_id
    properties:
      user_id:
        type: integer
        description: user id
      name:
        type: string
        description: name
  Error:
    type: object
    properties:
      message:
        type: string

To generate the server-side code, execute the following command.

$ swagger generate server -f swagger.yml -A swaggertest

This creates the required source code under the following directories:

  • cmd
  • models
  • restapi

Specifically, it looks like this.

restapi/
|-- configure_swaggertest.go
|-- doc.go
|-- embedded_spec.go
|-- operations
|    |-- swaggertest_api.go
|    `-- user
|         |-- get_search.go
|         |-- get_search_parameters.go
|         |-- get_search_responses.go
|         `-- get_search_urlbuilder.go
`-- server.go

-A swaggertest specifies the application name; it is used in various places.

The user directory under operations is taken from the tag. If multiple tags are set, the same contents are generated in multiple directories, so be careful.

From here on, we will not touch any generated file other than configure_swaggertest.go, which is explained later.

Implementing Handler

Instead of touching the generated files, create another file in the same package. Create a file called restapi/operations/user/search.go and implement code like the following.

package user

import (
    "fmt"

    middleware "github.com/go-openapi/runtime/middleware"
    "github.com/go-openapi/swag"

    "github.com/shirou/swagger_sample/models"
)

func Search(params GetSearchParams) middleware.Responder {
    payload := &models.User{
            UserID: params.UserID,
            Name:    fmt.Sprintf("name_%d", swag.Int64Value(params.UserID)),
    }

    return NewGetSearchOK().WithPayload(payload)
}

params holds the incoming parameters. They have already been validated against the swagger definition, so you can use them with confidence.

NewGetSearchOK creates the response structure for the 200 case in the swagger definition. We set a User as its payload.

The swag package is a handy library that provides helpers such as conversion between pointers and values and path lookup. Because of Go's language constraints, a pointer is needed to distinguish "no value" from the zero value, so the swag conversion helpers are used.
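For illustration, here is a minimal, self-contained sketch of those swag helpers (the values are made up):

package main

import (
    "fmt"

    "github.com/go-openapi/swag"
)

func main() {
    // A pointer distinguishes "not sent" (nil) from the zero value.
    userID := swag.Int64(10)             // int64 -> *int64
    fmt.Println(swag.Int64Value(userID)) // *int64 -> int64, prints 10
    fmt.Println(swag.Int64Value(nil))    // prints 0 when the pointer is nil

    // The same kind of helpers exist for string, bool, and so on.
    name := swag.String("alice")
    fmt.Println(swag.StringValue(name))
}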

Register Handler

Add the handler we just implemented to restapi/configure_swaggertest.go. Initially it is written with middleware.NotImplemented; replace that with the handler implemented above.

api.UserGetSearchHandler = user.GetSearchHandlerFunc(user.Search)

configure_swaggertest.go is generated only once and is not regenerated afterwards, so take care when updating swagger.yml and adding handlers. I rename it, regenerate, and then copy the relevant parts back from the newly generated file.

Since the configure file also contains the setup code, it is a good place to add pre-configuration or middleware as needed; an example follows.
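For example, request logging can be added there. The sketch below assumes the generated file contains a setupGlobalMiddleware hook (it did in the version I used; check your own generated configure file, as names may differ between versions):

// in restapi/configure_swaggertest.go
func setupGlobalMiddleware(handler http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf("%s %s", r.Method, r.URL.Path) // simple request logging
        handler.ServeHTTP(w, r)
    })
}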

Execution

We are now ready. Let's build it under cmd/swaggertest-server as follows.

$ cd cmd/swaggertest-server && go build -o ../../server

$ ./server -port=8000

By default the server starts on a random port, so the port is specified as an argument.

After that you can access it normally.

$ curl "http://localhost:8080/search?user_id=10"
{"name":"name_10","user_id":10}

It's simple.

Raw Request

params also contains the raw http.Request, so if you need the request itself you can get it from there; the context can be obtained from it as well.
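As a sketch, a variant of the handler above that touches the raw request could look like this (the User-Agent lookup is only an illustration):

func SearchWithRequest(params GetSearchParams) middleware.Responder {
    req := params.HTTPRequest // the raw *http.Request attached by go-swagger
    ctx := req.Context()      // request-scoped context, e.g. to pass to DB calls
    _ = ctx

    payload := &models.User{
        UserID: params.UserID,
        Name:   req.UserAgent(), // anything on the raw request is available here
    }
    return NewGetSearchOK().WithPayload(payload)
}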

How to use go-swagger: client

go-swagger can generate not only servers but also clients, with the following command.

$ swagger generate client -f swagger.yml -A swaggertest

After that, write code like the following and you can run it as a client.

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/go-openapi/swag"

    apiclient "github.com/shirou/swagger_sample/client"
    "github.com/shirou/swagger_sample/client/user"
)

func main() {

    // make the request to get all items
    p := &user.GetSearchParams{
            UserID: swag.Int64(10),
    }

    resp, err := apiclient.Default.User.GetSearch(p.WithTimeout(10 * time.Second))
    if err != nil {
            log.Fatal(err)
    }
    fmt.Println(resp.Payload.Name)
}

In this example the request uses fixed parameters, but you can easily build a CLI tool by wiring up command-line arguments; a sketch follows.
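For example, a hedged sketch of such a CLI using the standard flag package (building on the client snippet above) might look like this:

package main

import (
    "flag"
    "fmt"
    "log"
    "time"

    "github.com/go-openapi/swag"

    apiclient "github.com/shirou/swagger_sample/client"
    "github.com/shirou/swagger_sample/client/user"
)

func main() {
    userID := flag.Int64("user-id", 10, "user id to search for")
    flag.Parse()

    p := &user.GetSearchParams{
        UserID: swag.Int64(*userID),
    }
    resp, err := apiclient.Default.User.GetSearch(p.WithTimeout(10 * time.Second))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Payload.Name)
}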

Model

In swagger you can define models under definitions and include them in responses.

The User model used in this example is generated in models/user.go as follows.

// User user
// swagger:model User
type User struct {
     // name
     Name string `json:"name,omitempty"`

     // user id
     // Required: true
     UserID *int64 `json:"user_id"`
}

Anything written directly in models/user.go would be clobbered by regeneration, so if you want to keep adding functions, it is better to put them in a separate file under models, such as user_hoge.go; a sketch follows.
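A minimal sketch of such a file; the DisplayName helper is purely hypothetical:

// models/user_hoge.go -- never touched by regeneration
package models

import (
    "fmt"

    "github.com/go-openapi/swag"
)

// DisplayName returns a human-readable label for the user.
func (u *User) DisplayName() string {
    if u.Name != "" {
        return u.Name
    }
    return fmt.Sprintf("user-%d", swag.Int64Value(u.UserID))
}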

test

To test the server implementation, pass the desired parameters to the handler. An http.Request is required, and the response is recorded with httptest.NewRecorder(). Since each handler is just a function, you only need to call it; there is no need to start an httptest.Server.

func TestSearch(t *testing.T) {
    req, err := http.NewRequest("GET", "", nil)
    if err != nil {
            t.Fatal(err)
    }

    params := GetSearchParams{
            HTTPRequest: req,
            UserID:      swag.Int64(10),
    }
    r := Search(params)
    w := httptest.NewRecorder()
    r.WriteResponse(w, runtime.JSONProducer())
    if w.Code != 200 {
            t.Error("status code")
    }
    var a models.User
    err = json.Unmarshal(w.Body.Bytes(), &a)
    if err != nil {
            t.Error("unmarshal")
    }
    if swag.Int64Value(a.UserID) != 10 {
            t.Error("wrong user id")
    }
}

Summary

I showed an example of generating server and client code from a swagger definition file using go-swagger.

Swagger, together with swagger-ui, is fairly easy to use (it even produces ready-made curl commands), and writing the definition file is not hard once you get used to it (although there is the problem that the file cannot be split).

If you develop while checking the protocol definition with swagger and swagger-ui, for example when the client and server are developed separately, misunderstandings are less likely to arise.

Generating from goa is fine too, but if you are generating from a swagger definition file, go-swagger is also worth considering.

Wed, 07 Dec 2016

Try using eawsy's aws-lambda-go
http://tdoc.info/en/blog/2016/11/02/eawsy_lambda.html

AWS Lambda has become popular recently, but its runtimes are limited to Python, Java, and Node. Python has served me well, but lately I have mostly been writing Go, and I thought it would be nice to run Lambda functions in Go.

Yesterday I learned about eawsy/aws-lambda-go (https://github.com/eawsy/aws-lambda-go), a library that lets you write Lambda functions in Go, so I tried it out. (It is referred to as eawsy below.)

Comparison with apex

Wanting to write Lambda functions in Go is nothing new; I previously wrote an article about it, "AWS Lambdaで効率的にgoバイナリを実行する". At that time I used the lambda_proc library; various alternatives have appeared since, and the well-known one these days is apex.

So what is the difference between eawsy and apex? eawsy runs as a Python C extension.

  • apex
    • Lambda uses the node runtime, and node spawns the Go binary as a child process.
    • The Go program is a normal binary, so it can also be run as-is on Linux.
  • eawsy
    • The Go code is built as a shared library with -buildmode=c-shared.
    • Lambda uses the python runtime; Python loads the Go library as a C extension and calls it.

In other words, only one process is spawned with eawsy, compared with two with apex. In exchange, a cgo environment is needed for the build. However, eawsy provides a Docker container for building, so you do not have to prepare the cgo environment yourself.

Try it.

Write main.go like this.

package main

import (
      "encoding/json"
      "log"
      "time"

      "github.com/eawsy/aws-lambda-go/service/lambda/runtime"
)

func handle(evt json.RawMessage, ctx *runtime.Context) (interface{}, error) {
      ret := make(map[string]string)
      ret["now"] = time.Now().UTC().Format(time.RFC3339)

      return ret, nil
}

func init() {
      runtime.HandleFunc(handle)
}

func main() {}

main is empty; the handler is registered in init. The input parameters arrive as json.RawMessage.

The return value is interface{}, so any type can be returned; it is sent back to the caller as JSON.
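If you want to look inside the event, you can unmarshal the json.RawMessage yourself; a sketch with a made-up event shape:

package main

import (
    "encoding/json"
    "time"

    "github.com/eawsy/aws-lambda-go/service/lambda/runtime"
)

type event struct {
    Name string `json:"name"` // hypothetical field; adjust to your event
}

func handle(evt json.RawMessage, ctx *runtime.Context) (interface{}, error) {
    var e event
    if err := json.Unmarshal(evt, &e); err != nil {
        return nil, err // a returned error is reported back by Lambda
    }
    return map[string]string{
        "greeting": "hello " + e.Name,
        "now":      time.Now().UTC().Format(time.RFC3339),
    }, nil
}

func init() { runtime.HandleFunc(handle) }

func main() {}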

benchmark

For comparison, I wrote the same code with apex.

package main

import (
     "encoding/json"
     "time"
     apex "github.com/apex/go-apex"
)

type message struct {
   Time string `json:"time"`
}

func main() {
     apex.HandleFunc(func(event json.RawMessage, ctx *apex.Context) (interface{}, error) {
             var m message
             if err := json.Unmarshal(event, &m); err != nil {
                     return nil, err
             }
             m.Time = time.Now().Format(time.RFC3339)
             return m, nil
     })
}

Invoke each directly as follows and check the execution time in the CloudWatch logs.

eawsy
$ aws lambda invoke --function-name preview-go output.txt

apex
$ apex invoke hello

Benchmark results:

Run    eawsy      apex
1st    16.38 ms   41.11 ms
2nd    0.48 ms    1.21 ms
3rd    0.50 ms    0.64 ms

The first invocation takes time, presumably because Lambda is starting the container (though I cannot confirm that). From the second time on it is fast because the container is already running. The important number is that first one: eawsy takes about 16 ms, roughly half of apex's roughly 40 ms. It is tedious, so I only post one set of results, but the tendency was the same over several runs.

Once warmed up, both drop to about 1 ms or less and are essentially the same.

That said, going from 40 ms to 16 ms matters only on that one cold invocation. There may be workloads where this is important, but I do not think it means much in general; Lambda's execution time is unstable to begin with, so it just feels a few milliseconds faster.

The advantage of eawsy is not the benchmark numbers but that log output and the runtime functions can be used. (The author says as much in the Reddit thread "Finally vanilla Go on AWS Lambda (no serverless!) startup <5ms".)

Benefits of eawsy

Log output

Because apex goes through the node runtime, log output can only go to stdout, which required an extra workaround. In contrast, eawsy can use the standard log package.

log.Printf("Log stream name: %s", ctx.LogStreamName)
log.Printf("Log group name: %s", ctx.LogGroupName)
log.Printf("Request ID: %s", ctx.AWSRequestID)
log.Printf("Mem. limits(MB): %d", ctx.MemoryLimitInMB)
log.Printf("RemainingTime: %d", ctx.RemainingTimeInMillis)

If you log like this, just as you normally would, it appears in the CloudWatch logs as follows.

13:19:55 START RequestId: 9bf7d852-a0b3-11e6-b64b-7dec169bb683 Version: $LATEST
13:19:55 2016-11-02T04:19:55.919Z     9bf7d852-a0b3-11e6-b64b-7dec169bb683    Log stream name: 2016/11/02/[$LATEST]1e58f3ef77894283988110ea452dc931
13:19:55 2016-11-02T04:19:55.919Z     9bf7d852-a0b3-11e6-b64b-7dec169bb683    Log group name: /aws/lambda/preview-go
13:19:55 2016-11-02T04:19:55.919Z     9bf7d852-a0b3-11e6-b64b-7dec169bb683    Request ID: 9bf7d852-a0b3-11e6-b64b-7dec169bb683
13:19:55 2016-11-02T04:19:55.919Z     9bf7d852-a0b3-11e6-b64b-7dec169bb683    Mem. limits(MB): 128
13:19:55 END RequestId: 9bf7d852-a0b3-11e6-b64b-7dec169bb683
13:19:55 REPORT RequestId: 9bf7d852-a0b3-11e6-b64b-7dec169bb683
Duration: 16.38 ms
Billed Duration: 100 ms Memory Size: 128 MB   Max Memory Used: 8 MB

log.Fatalf reports an error, and a panic prints a stack trace.

Error handling

If you put an error in the return value of handle, like

return ret, fmt.Errorf("Oops")

then the following log is written to CloudWatch.

Oops: error
Traceback (most recent call last):
File "/var/runtime/awslambda/bootstrap.py", line 204, in handle_event_request
result = request_handler(json_input, context)
error: Oops

Runtime functions can be called

With apex's approach, only the Go process runs, so information provided to the node runtime could not be obtained. With eawsy, the information provided to the runtime is accessible from Go.

In the log output example above, ctx.RemainingTimeInMillis retrieves the remaining execution time. This shows that the information provided by the python runtime is available.

Summary

The approach of calling go from Python via C extension was interesting, so I tried using it.

The benchmark does not show a decisive difference (it is fast from the start either way), but being able to call the runtime functions and use the standard log package is nice; as a programming model there is no big difference.

Given that apex is not limited to Go, and that apex is also excellent as a management tool (apex deploy and so on), I think apex has the upper hand at the moment.

Bonus

proxy.c is the substance of the runtime; handle is where Go gets called.

Since it is a normal C extension, the same approach should work with Rust, C++, and so on. It might be interesting to write one in Rust.

Wed, 02 Nov 2016

I tried the Docker swarm
http://tdoc.info/en/blog/2016/08/16/testing_docker_swarm.html

TL;DR

Docker swarm is very easy to use, but it seems wise to get more hands-on experience with it before operating it for real.

Foreword

Docker swarm has been integrated into Docker since 1.12.0, making it easy to use Docker in a cluster environment. This article is a record of my investigation of this Docker swarm mode as a summer-vacation research project.

It is long and rough, and various assumptions are skipped. I do not recommend reading all of it.

I would greatly appreciate it if you point out anything that is wrong.

Repositories

Repository for this article
  https://github.com/shirou/docker-swarm-test
Docker Engine
  The Docker Engine and CLI itself.
  https://github.com/docker/docker
Swarmkit
  Contains the actual implementation of swarm.
  https://github.com/docker/swarmkit
Swarm
  The swarm implementation up to 1.12. Not covered this time.
  https://github.com/docker/swarm

Docker swarm mode

Docker swarm mode lets you bundle multiple Docker hosts together and use them as one. Docker swarm used to be a separate product, but from 1.12 it is integrated into the Docker engine.

I will skip the overview quickly; what follows are memos on terms and commands.

Players

Manager node
  Management node. The recommended number of manager nodes is 3 to 7.
Worker node
  Node that executes tasks.
Node
  A server running one docker engine.
Service
  Multiple docker engines cooperating to provide a service on one port. One swarm cluster can have multiple services.
https://docs.docker.com/engine/swarm/images/swarm-diagram.png

How to use

  • Swarm
    • docker swarm init
      • Initialize a swarm cluster. The docker host that runs this command becomes the first manager node.
    • docker swarm join
      • Join the swarm cluster managed by the specified manager node. --token specifies the swarm cluster token; a manager token joins you as a manager, a worker token as a worker. You can also join explicitly as a manager with --manager.
    • docker swarm leave
      • Leave the cluster.
  • Node
    • docker node ls
      • Show the state of the nodes.
    • docker node ps
      • Show the state of tasks.
    • docker node update
      • Update a node.
    • docker node demote / docker node promote
      • Demote a manager to a worker / promote a worker to a manager.
  • Service
    • docker service create
      • Create a service.
    • docker service ls
      • Show the state of services.
    • docker service ps
      • Show the state of tasks.
    • docker service update
      • Perform a rolling update.
  • Network
    • docker network create
      • Create an overlay network.

Process used this time

The process to be executed this time is shown below.

package main

import (
     "fmt"
     "net/http"
     "strings"
     "time"
)

var wait = time.Duration(1 * time.Second)

func handler(w http.ResponseWriter, r *http.Request) {
     rec := time.Now()

     time.Sleep(wait)
     rep := time.Now()

     s := []string{
             rec.Format(time.RFC3339Nano),
             rep.Format(time.RFC3339Nano),
     }

     fmt.Fprintf(w, strings.Join(s, ","))
}

func main() {
     http.HandleFunc("/", handler)
     http.ListenAndServe(":8080", nil) // fixed port num
}

It simply waits one second and then returns the request and reply times as CSV on port 8080. A worst-case process that blocks for one second per request.

The build is left to CircleCI this time; since it produces a tar.gz, it can be imported on each node as follows. The tag is arbitrary.

$ docker import https://<circleci artifact URL>/docker-swarm-test.tar.gz docker-swarm-test:1

Hint

Go needs no glibc and runs as a single binary, so you do not need a Dockerfile; a tar.gz is enough. For the issue where using net on Linux produces a dynamically linked binary, see "Go 1.4でstatic binaryを作成する" and "golangで書いたアプリケーションのstatic link化".

$ docker service create --name web --replicas 3 --publish 8080:8080 docker-swarm-test:1 "/docker-swarm-test"
$ docker service ps web
ID                         NAME   IMAGE                NODE       DESIRED STATE  CURRENT STATE          ERROR
18c1hxqoy3gkaavwun43hyczw  web.1  docker-swarm-test:1  worker-2   Running        Running 3 minutes ago
827sjn1t4nrj7r4c0eujix2it  web.2  docker-swarm-test:1  manager-1  Running        Running 3 minutes ago
2xqzzwf2bte3ibj2ekccrt6nv  web.3  docker-swarm-test:1  worker-3   Running        Running 3 minutes ago

In this state, curl against worker-2, worker-3, or manager-1, where the containers are running, returns a response. What is more, even if you curl worker-1, where no container is running, it answers properly, because the request is forwarded internally.

Rolling update

When a service is exposed with --publish in a swarm cluster, a single port load-balances requests across multiple nodes. Previously, ports would change dynamically in docker, or the same port number would conflict on one node, but this problem no longer arises. And because load balancing is done inside the swarm cluster, rolling updates are also easy.

% sudo docker service update --image "docker-swarm-test:1" web

So I tried it. The process above blocks for one second, so if the update is not handled well, requests should be dropped.

I used ab as the tool. This is not a throughput test but a test of whether requests get lost, so ab is sufficient.

% ab -rid -c 10 -n 500 http://45.76.98.219:8080/

Concurrency Level:      10
Time taken for tests:   50.146 seconds
Complete requests:      500
Failed requests:        8
   (Connect: 0, Receive: 4, Length: 0, Exceptions: 4)

So requests were indeed dropped. Too bad. --update-delay is probably not the culprit, since it is only the delay before starting the next container; trying --restart-delay in combination did not help either. Manually setting a node's status to drain might work, but I did not try it because it takes time and effort.

Looking into it, this area seems to be related.

It will apparently be fixed in the next patch release. I have not dug into libnetwork far enough to know whether this really fixes it, but it still seems a little early to use this in a production environment.

Use nginx instead

Rather, the ingress overlay network is apparently meant for internal use in the first place, not for exposing services externally. When publishing to the outside, it seems you should put nginx in front and decide which container to send traffic to via the DNS service discovery described below.

I feel I need to look into this area a bit more.

Network

Create a network with docker network create. When I later tried to add the network to the service with docker service update --network-add, I got:

Error response from daemon: rpc error: code = 2 desc = changing network in service is not supported

so I rebuilt the service instead.

docker service create --replicas 3 --name web --network webnet ...

Then, launch alpine as a shell.

$ docker service create --name shell --network webnet alpine sleep 3000
$ sudo docker service ls
ID            NAME        REPLICAS  IMAGE                COMMAND
1f9jj2izi9gr  web         3/3       docker-swarm-test:1  /docker-swarm-test
expgfyb6yadu  my-busybox  1/1       busybox              sleep 3000

Then exec into a container belonging to the same network and look up the web service in DNS with nslookup.

$ docker exec -it shell.1.3x69i44r6elwtu02f1nukdm2v /bin/sh
/ # nslookup web

Name:      web
Address 1: 10.0.0.2

/ # nslookup tasks.web

Name:      tasks.web
Address 1: 10.0.0.5 web.3.8y9qbba8eknegorxxpqchve76.webnet
Address 2: 10.0.0.4 web.2.ccia90n3f4d2sr96m2mqa27v3.webnet
Address 3: 10.0.0.3 web.1.44s7lqtil2mk4g47ls5974iwp.webnet

That is, asking for the service name web returns the VIP, while asking for tasks.web returns each task directly via DNS round-robin.

In this way, as long as containers belong to the same network, they can reach other services by name, which makes cooperation between containers easy; a small sketch follows.
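To make that concrete, here is a small sketch of doing the same lookups from a Go program running in a container attached to the webnet network (the service names are the ones used above):

package main

import (
    "fmt"
    "net"
)

func main() {
    vip, _ := net.LookupHost("web")         // resolves to the service VIP
    tasks, _ := net.LookupHost("tasks.web") // one address per running task
    fmt.Println("vip:", vip)
    fmt.Println("tasks:", tasks)
}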

protocol

Raft

In docker swarm, leader election among the multiple manager nodes uses Raft consensus; the Raft implementation is etcd's raft library. docker node ls shows which manager is the Leader.

$ docker node ls
ID                           HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS
5g8it81ysdb3lu4d9jhghyay3    worker-3   Ready   Active
6td07yz5uioon7jycd15wf0e8 *  manager-1  Ready   Active        Leader
91t9bc366rrne19j37d0uto0x    worker-1   Ready   Active
b6em4f475u884jpoyrbeubm45    worker-2   Ready   Active

Since it is Raft, proper fault tolerance requires at least 3 manager nodes, and if 2 out of 3 go down, a leader cannot be elected. In that case docker swarm can no longer accept new tasks.

Heartbeat

Swarm nodes monitor each other's liveness with heartbeats. The heartbeat interval is normally 5 seconds, but it can also be specified with --dispatcher-heartbeat duration at docker swarm init. The results of the liveness monitoring are distributed via gossip.

Questions

What happens to the containers after deleting a service?

When you remove a service with docker service rm, its containers disappear as well. It takes a little while for them to disappear, so be careful.

What if there are more tasks than worker nodes?

That is, what happens with docker service scale web=10 when there are only three nodes?

The answer is that multiple containers run on one node.

Is there a concept like pods?

Apparently not. When creating a service you can restrict placement with --constraint, or presumably use things like affinity.

Afterword

Personally I think container technology itself no longer matters much; what matters is multi-node management such as Kubernetes. Docker swarm existed before, but integrating it into the Docker Engine shows a clear ambition to become the de facto standard not only for containers but for managing what runs on top of them. And while Kubernetes is hard to get started with, swarm takes very little effort, which I feel is an advantage.

Forming a swarm cluster is easy, and the cluster itself seemed very stable. Of course I did not run elaborate tests such as split-brain scenarios, so this does not mean much, but I suspect the Raft side is solid because it uses etcd's implementation.

However, the network still seems unstable: after repeatedly creating and deleting services and networks, names sometimes could no longer be resolved (I did not dig into the details).

There are still rough edges, such as the network and graceful updates, but I think Docker swarm will become popular from here on.

Tue, 16 Aug 2016

Test Ansible with Docker container
http://tdoc.info/en/blog/2016/07/08/ansible_to_docker.html

With Ansible 2.0, the Docker connection plugin became a standard feature. This makes it possible to run Ansible directly against a Docker container without setting up sshd inside it.

Many people have already written about this, so this is hardly news, but here is how to test Ansible against a Docker container.

Restrictions on Docker

First, the restrictions when running Ansible against a Docker container.

Basically all features are available. However, there are the following restrictions.

  • /etc/hosts, /etc/resolv.conf, and /etc/hostname cannot be replaced

    These files are bind-mounted by Docker: they can be rewritten in place but cannot be replaced. Since /etc/hostname cannot be changed, the hostname module cannot change the hostname.

There are also at least the following problems depending on the image, and many others besides. These are issues with Docker itself rather than anything peculiar to Ansible, so work around them as needed.

  • systemd services cannot be started

    Because there is no D-Bus, you get "Failed to connect to bus: No such file or directory". upstart and rc init scripts can be started. The CAP_SYS_ADMIN capability is necessary.

  • There may be no sudo

Also, testing from a slim image takes time for downloads, so preparing an image with the appropriate settings in advance will shorten the test time.

Inventory

Now for the main topic: how to use Ansible's docker connection plugin.

web ansible_connection=docker ansible_host=<container ID>

As shown above, you can use it immediately just by setting ansible_connection=docker. However, you must specify a container ID in ansible_host, and Docker container IDs are ephemeral, so write one there only while debugging.

To avoid this you could start the container with the docker module and build a group with add_host, but then you would have to edit the playbook for testing. That may be acceptable, but instead let's use the Docker dynamic inventory.

Docker dynamic inventory

Get docker.py from the ansible repository on GitHub and give it execute permission. docker.yml is not needed.

# start a docker container
$ docker run --name <name used in hosts> ubuntu:16.04 /bin/sleep 3600

# run ansible against the started docker container
$ ansible-playbook -i docker.py something.yml

This looks up the container ID from the name of the running container and uses it. Part of the information docker.py provides is shown below; you can see that groups are created not only from the name but also from the image and the container ID. For this test, however, we want to use the same name as the regular group, so we use the name.

"web": [
  "web"
],
"image_ubuntu:16.04": [
  "web"
],
"zzzzzd5bed36e033fac72d52ae115a2da12f3e08b995325182398aae2a95a": [
  "web"
],
"zzzzz114d5bed": [
  "web"
],
"running": [
  "web"
],
"docker_hosts": [
  "unix://var/run/docker.sock"
],
"unix://var/run/docker.sock": [
  "web"
],

Inventory Directory

If you use a dynamic inventory, you might worry that the group_vars placed next to the inventory file become unusable.

In that case, make a separate directory and put docker.py and a static file in it. If you specify that directory as the inventory, Ansible takes information from both the static files and the dynamic inventory. When this is used only for CI, keeping it in its own directory makes it easy to treat as a CI-specific inventory.

CircleCI

Let's run it through CI. I will try CircleCI; circle.yml looks like this.

machine:
  services:
    - docker
  environment:
    DOCKER_API_VERSION: "1.20"

dependencies:
  pre:
    - docker pull ubuntu:16.04
    - sudo pip install ansible ansible-lint docker-py

test:
  override:
    - docker run -d --name web ubuntu:16.04 /bin/sleep 3600
    - ansible-playbook -i inventory_docker web.yml test.yml --syntax-check
    - ansible-lint test.yml
    - ansible-playbook -i inventory_docker web.yml test.yml -l running

--syntax-check and ansible-lint are run as well while we are at it. DOCKER_API_VERSION is set because CircleCI's docker is old. Also, docker run uses --name web because the regular playbook targets the web group and we do not want to change that playbook.

When you push it,

fatal: [web]: FAILED! => {"changed": false, "failed": true, "rc": 1, "stderr": "Error response from daemon: Unsupported: Exec is not supported by the lxc driver\n", "stdout": "", "stdout_lines": []}

I got this error. Of course: CircleCI uses the lxc driver, so docker exec, which the Docker connection plugin relies on, cannot be used.

So I gave up on that.

There are other CI services such as wercker and drone.io, but they run CI inside Docker in the first place, so this becomes Docker in Docker and is painful.

Another solution: provide your own Docker host

Alternatively, by setting DOCKER_HOST in the environment section of circle.yml, you can run against a Docker host set up outside CircleCI. This may be easier than using GitLab, described next, but pay close attention to locking down the security settings.

GitLab

GitLab has been fashionable lately, and CI now comes bundled with it, so I will introduce using that as well.

I omit installing GitLab and the GitLab CI runner themselves. There is a CI runner that runs in Docker, but that would again be Docker in Docker, so be sure to use the shell runner for this purpose.

In short, if the runner is set up properly, a .gitlab-ci.yml like the following works. It is almost unchanged from CircleCI; the difference is roughly that after_script removes the containers.

before_script:
  - pip install ansible ansible-lint docker-py

stages:
  - build

build_job:
  stage: build
  script:
    - docker run -d --name web ubuntu:16.04 /bin/sleep 3600
    - ansible-playbook -i inventory_docker web.yml test.yml --syntax-check
    - ansible-lint test.yml
    - ansible-playbook -i inventory_docker web.yml test.yml -l running

after_script:
  - docker kill `docker ps -aq`
  - docker rm `docker ps -aq`

As part of the runner setup, you will probably want to run sudo gpasswd -a $USER docker so that docker can be used without sudo.

Postscript : Travis CI

@auchida let me know that this works on Travis CI; I used auchida's repository as a reference.

The point seems to be to include sudo: required.

However, something seems to be off between virtualenv and the system Python, and the following error occurred when running the docker dynamic inventory. I would like to fix it soon.

class AnsibleDockerClient(Client):
    NameError: name 'Client' is not defined

Thank you very much.

Summary

In this article, I showed how to test Ansible using a Docker container.

  • Ansible 2.0 can run ansible directly against a Docker container
  • Although there are some restrictions, it works fine on Docker
  • Since it does not work on CircleCI, the following three methods were introduced
    • Prepare your own Docker host
    • Set up GitLab and its CI
    • Use Travis CI

Bonus : ansible-lint rules

Recently we started writing ansible-lint rules to unify how playbooks are written within our company, and have published them as ansible-lint-rules.

There are no long descriptions yet and it is still a work in progress, but if you use it, issues and PRs are very welcome.

Fri, 08 Jul 2016

Introduction of xo to generate model of golang directly from DB
http://tdoc.info/en/blog/2016/07/06/xo.html

When developing a web application, there are various ways to define the DB models.

xo is a tool that automatically generates Go model definitions directly from the DB. It supports:

  • PostgreSQL
  • MySQL
  • Oracle
  • Microsoft SQL Server
  • SQLite

It covers almost all commonly used RDBs.

Installation

Since it is a Go tool, it can be installed with go get.

$ go get -u golang.org/x/tools/cmd/goimports (for dependency)
$ go get -u github.com/knq/xo

This installs the xo command.

How to use

Let's use it now. The DB used here is PostgreSQL.

CREATE TABLE users (
    id   BIGSERIAL PRIMARY KEY,
    name TEXT,
    age  INT NOT NULL,
    weight INT,
    created_at timestamptz NOT NULL,
    updated_at timestamptz
);

CREATE INDEX users_name_idx ON users(name);

Let's assume that there is such a table and index.

Run xo:

$ mkdir -p models  # create directory
$ xo pgsql://localhost/example -o models

Then two files are created under models:

  • user.xo.go
  • xo_db.xo.go

Files generated by xo are named *.xo.go, which makes them easy to identify.

The following is generated in user.xo.go. Notice how the Go type differs depending on whether the column is NOT NULL. json tags are also generated, so the struct can be output as JSON as-is.

// User represents a row from 'public.users'.
type User struct {
        ID        int64          `json:"id"`         // id
        Name      sql.NullString `json:"name"`       // name
        Age       int            `json:"age"`        // age
        Weight    sql.NullInt64  `json:"weight"`     // weight
        CreatedAt *time.Time     `json:"created_at"` // created_at
        UpdatedAt pq.NullTime    `json:"updated_at"` // updated_at

        // xo fields
        _exists, _deleted bool
}

For this generated User type, the following functions are generated:

  • func (u *User) Exists() bool
  • func (u *User) Deleted() bool
  • func (u *User) Insert(db XODB) error
  • func (u *User) Update(db XODB) error
  • func (u *User) Save(db XODB) error
  • func (u *User) Delete(db XODB) error
  • func (u *User) Upsert(db XODB) error (PostgreSQL 9.5+ only)

The XODB type is the interface to the DB, defined in xo_db.xo.go.

id is the primary key and there is an index on name, so the following two functions are also generated:

  • func UserByID(db XODB, id int64) (*User, error)
  • func UsersByName(db XODB, name sql.NullString) ([]*User, error)

The flow is to SELECT using these functions. Note that UsersByName returns a slice.

Implementation

With this much generated automatically, things are easy: the following can be written immediately.

db, err := sql.Open("postgres", "dbname=example sslmode=disable")
if err != nil {
   panic(err)
}

now := time.Now()
u := &User{
   Age:       18,
   CreatedAt: &now,
}
err = u.Insert(db)
if err != nil {
   panic(err)
}

user, err := UserByID(db, u.ID)  // u.ID was set by Insert
if err != nil {
   panic(err)
}
fmt.Println(user.Age)  // -> returns 18
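UsersByName, the index-backed query, is used the same way; a short sketch (it returns a slice because name is not unique):

users, err := UsersByName(db, sql.NullString{String: "alice", Valid: true})
if err != nil {
   panic(err)
}
for _, u := range users {
   fmt.Println(u.ID, u.Age)
}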

SQL

As for what is inside functions such as Insert and Update, it looks like this:

// sql query
const sqlstr = `INSERT INTO public.users (` +
        `name, age, weight, created_at, updated_at` +
        `) VALUES (` +
        `$1, $2, $3, $4, $5` +
        `) RETURNING id`

// run query
XOLog(sqlstr, u.Name, u.Age, u.Weight, u.CreatedAt, u.UpdatedAt)
err = db.QueryRow(sqlstr, u.Name, u.Age, u.Weight, u.CreatedAt, u.UpdatedAt).Scan(&u.ID)
if err != nil {
        return err
}

SQL is generated pretty much as-is. The behavior is easy to understand, and I personally like this style.

Functions

xo handles not only table definitions but also functions.

CREATE FUNCTION say_hello(text) RETURNS text AS $$
BEGIN
    RETURN CONCAT('hello ' || $1);
END;
$$ LANGUAGE plpgsql;

Suppose we define a function like this. Then a file named sp_sayhello.xo.go is generated ("sp" for stored procedure).

It defines a Go function called SayHello:

  • func SayHello(db XODB, v0 string) (string, error)

Inside, it looks like this:

// sql query
const sqlstr = `SELECT public.say_hello($1)`

// run query
var ret string
XOLog(sqlstr, v0)
err = db.QueryRow(sqlstr, v0).Scan(&ret)
if err != nil {
        return "", err
}

In other words, it calls the say_hello function defined above via SQL. Therefore,

SayHello(db, "hoge")

is all you need to call it from Go.

Summary

I introduced xo, which generates Go code from DB metadata.

It is also quite meticulous about details, such as converting types defined in PostgreSQL into Go types. In addition, the code is generated from templates, and you can supply your own templates, so you can freely change the SQL statements, add functions, and so on.

I had just built almost the same thing myself, but xo is far ahead, so I think it is better to use xo.

Wed, 06 Jul 2016

Book Review : First Ansible
http://tdoc.info/en/blog/2016/04/18/ansible_up_and_running.html

The book "First Ansible" was released by O'Reilly Japan. I received a copy, so I read it.

Conclusion

To start with the conclusion: although the title says "First", I think this is a book worth buying not only for people who want to start using Ansible but also for those already using it.

The important points of using Ansible, and how to use playbooks, tasks, and inventories, are explained in order from the beginning, so you can pick them up right away. In that sense it is indeed for beginners.

At the same time there are many footnotes, and details beyond the beginner level are covered neatly, such as the YAML grammar pitfalls that are easy to trip over and the fact that localhost is implicitly added to the inventory, so it is also worthwhile for current users. I am especially grateful that design topics are covered, such as how to think when writing a playbook and what rules to apply.

The translation is also good; there is nowhere you stumble while reading.

2.0 support

You may wonder how well it covers 2.0. This book is a translation of Ansible: Up and Running, and since Ansible 2.0 had not yet appeared when the original was written, the content is naturally based on 1.9.

However, for the Japanese edition, all the playbooks in the book were reportedly verified against 2.0.0.2. There cannot have been much time for that, which makes it all the more impressive. So the 1.9-based parts are not a problem.

In addition, "Ansible 2.0" is attached as an appendix, and new features such as blocks are explained.

Highlights of each chapter

From here on I will comment on each chapter, with all my biases. You will not really get the picture just from reading this, so please buy and read the book if you are interested.

Chapter 1 : Introduction

It covers why you would choose Ansible, how to install it, and so on.

Keeping the inventory file in the repository is indeed the better practice. Why is the default /etc/ansible/hosts, I wonder.

Chapter 2 : Playbook : Getting started

How to write a playbook, and definitions of Ansible terms such as task and play. The part on where YAML needs quotation marks is something everyone gets caught by, so do read it.

Regrettably there is not much about cowsay. And it even explains how to disable it!

Regarding handlers, I also do not consider them essential, so I agree with the author. Keeping the tasks themselves simple is better, but handlers are certainly easy to get tripped up by. Incidentally, handlers can be forced to run with the --force-handlers option, though that is exactly the kind of thing you fail to notice when you are stuck.

Chapter 3 : Inventory : describing servers

The inventory file. Explaining it with Vagrant is a nice touch.

In particular, it is good that dynamic inventory is described with an example; beyond about ten hosts, dynamic inventory is the better choice. As for group_by... I have never used it myself. It is used in the Ansible Tower installer and the like, so it may be handy for cases like "do this for this whole distribution at once".

Chapter 4 : Variables and facts

Variables and facts: very important.

It is good that local facts are properly covered. They may seem unnecessary because they require an extra installation step, but they are quite useful. Also, the variable precedence is roughly summarized as "every method not on this list", which made me chuckle, but it is correct.

Chapter 5 : Introduction of Mezzanine

This chapter is purely an introduction to Mezzanine. Sorry, I did not know about it.

Chapter 6 : Deployment of Mezzanine

An actual deployment using Vagrant. After reading this chapter you should be able to handle a real production environment as well. Since it deploys a Django application, other cases will need various changes, but the basic flow comes through completely.

I did not know about xip.io. It is a convenient service; I will use it next time.

Chapter 7 : Complex playbooks

Features not explained in Chapter 6: local_action, run_once, changed_when/failed_when, and other important features abound. If you read chapters 6 and 7, you can do pretty much anything.

The highlights of this chapter are "create your own filter" and "create your own lookup plugin". They are covered only briefly, but it is good that they are mentioned at all. Once you can write filters and lookups yourself, what you can do with Ansible increases dramatically. If you feel that YAML programming is getting convoluted, try writing a filter yourself once.

Incidentally, complex loops are covered right after lookups in this chapter. This is because loops such as with_items are in fact implemented as lookup plugins, and it is good that this is laid out clearly.

Chapter 8 : Roles

Roles are the most important feature of Ansible. If you understand roles properly, you can handle complicated configurations with a concise playbook.

The difference between vars and defaults is rather troublesome. The author's guideline is that defaults are variables that may be overridden and vars are ordinary variables. That is correct, but you often end up wanting to override things later anyway; personally I think putting everything in defaults is fine.

Ansible Galaxy gets only a brief mention, but since the ansible-galaxy command can point at sources other than the Ansible Galaxy site, you can keep shared roles in a private git repository and share them within your organization using the ansible-galaxy command.

In addition, the -r option takes a text file describing which roles to fetch from where, so keeping just that one file in the repository is enough to do the initial setup. From this point of view as well, you can see how important it is to split things into roles.

Chapter 9 : Making Ansible faster

SSH multiplexing and the like. It is nice that it explains the error returned when the Unix domain socket path becomes too long, as happens with EC2 hostnames. It is also good that pipelining is covered.

I have never used the fact cache. On an ordinary machine, gathering facts is not that time-consuming relative to everything else, and I worry about stale caches. Still, when only a tiny number of tasks run, or on a slow machine, it can be dramatically effective, so if you feel "Ansible is slow", it is worth trying.

Chapter 10 : Custom modules

Yes, custom modules. Once you use Ansible in any depth, writing your own modules is the way to go; struggling to somehow manage everything in YAML is pointless.

In addition to how to write modules in Python, this chapter also briefly shows how to write one in bash. Python module development has fairly comprehensive helper functions that take care of a lot, so for full-scale modules Python is the natural choice. But when you just want to wrap a small operation in a module, a familiar language is easier, so it is very handy that the bash approach is described too.

Chapter 11 : Vagrant

A description of Vagrant and the ansible provisioner. I did not know how to do parallel provisioning with Vagrant, so I learned something.

Chapter 12 : Amazon EC2

EC2. Not just launching instances: security groups and obtaining AMIs are covered as well. The relationship between tags and groups in this chapter is worth keeping in mind. Packer is also described, so it may be worth trying.

Note: the footnote on p. 230 looks like something left over from the editing stage?

Chapter 13 : Docker

Docker. This chapter says there are two aspects to the relationship between Ansible and Docker:

  1. Using Ansible to make sure multiple Docker containers start in a specified order
  2. Using Ansible when building a Docker image

For 1, I feel docker-compose is the better fit. For 2, one approach is to use a container with Ansible baked in, but since Ansible 2.0 ships the docker connection plugin as standard, it is better to work on the docker image directly. That said, as noted in 13.5.5, there is the view that building a docker image is part of the build process and Ansible should not be the one building the image; I agree with that view.

Chapter 14 : Playbook Debugging

Useful tips for debugging playbooks. Incidentally, when I previously wrote an article about Ansible for Software Design, I introduced the debug module before anything else, because the debug module is what you invariably use when debugging.

Summary

This book is recommended not only for first-time Ansible users but also for those already using it. By all means buy it.

Mon, 18 Apr 2016

Reduce the binary size of Go
http://tdoc.info/en/blog/2016/03/01/go_diet.html

Go conveniently produces a single binary, but that binary is quite large, which can sometimes be a problem.

I read the article "The Go binary diet" and put a Go binary on a diet myself.

ldflags="-w"

The first method shown is -ldflags="-w", which stops the DWARF symbol table from being generated.

% go build -ldflags="-w"

ldflags="-s"

Next, -ldflags="-s" stops generation of all the symbol tables used for debugging.

% go build -ldflags="-s"

Up to this point it is standard advice, but the article then showed using UPX, the Ultimate Packer for eXecutables.

UPX

According to Wikipedia, UPX has been around since 1998, although I did not know it well. It works on Windows / Linux (i386 / ARM / MIPS) / Mac / *BSD, i.e. almost all major platforms. The license is the GPL (see note [1]). UPX compresses a binary while keeping it in an executable format.

In operation, it compresses the binary with LZMA (as used in 7-Zip), and at run time decompresses it and writes it directly into memory before executing. If replacing it in memory is impossible, it is written to a temporary file and executed.

The code needed for decompression is only a few hundred bytes. Decompression of course costs CPU and memory, but it happens only once, and for something the size of a Go binary it should rarely be a problem.

Result

I tried it on the code of a fairly large project at hand, on Mac OS with Go 1.6.

Target                  Size
Original                26 MB (27090756)
"-w"                    19 MB (20414276)
"-s"                    26 MB (27090756)
"-w" + UPX (with -9)    5.2 MB (5443584)
"-w" + UPX (with -1)    6.4 MB (6684672)

Hmm, "-s" changed nothing... I suspect it is something around ld on darwin, but I have not chased it down. In any case, the original 26 MB shrank to 5.2 MB.

Compressing with upx -9 took a decent amount of time: 15.70 seconds, roughly the same over about three runs. Decompression took about 0.10 seconds. Of course this also depends on memory and so on, so take these numbers only as a rough guide.

For that matter, compressing with upx -1 takes only 0.78 seconds, and the result was still 6.4 MB, which is a satisfactory compression ratio. Which level to use depends on the target environment, but -1 feels sufficient.

Summary

The problem of large Go binaries can be solved to some extent using ldflags and UPX. I did not know about UPX, but looking at its code it is quite impressive.

[Note 1] About the license: UPX is licensed under the GPL with an exception clause. Because UPX (which is GPL) ends up linked into the compressed binary, the compressed binary would normally also have to be GPL, but the exception clause says this is not required as long as you do not modify the linked UPX code. As you can see by checking the license, it is clearly stated that it can be applied to commercial programs.
Tue, 01 Mar 2016

AWS Lambda efficiently executes the go binary
http://tdoc.info/en/blog/2016/01/07/lambda.html

Recently I have been using Lambda and API Gateway quite a bit and wondering what more could be done with them, and since I saw the article "AWS LambdaでJavaとNode.jsとGoの簡易ベンチマークをしてみた", I decided to write a related article.

Precondition

AWS Lambda lets you use nodejs and so on, but not Go (for now). So if you want to write in Go, as the article above mentions, the flow becomes:

run nodejs -> nodejs launches the Go binary with child_process.spawn

That is the only way to do it.

As a result, every request pays the cost of spawning a process.

The library that solves this problem is lambda_proc.

lambda_proc

Lambda runs in its own container. A container, once started, keeps existing for a certain period and is reused for each request. So rather than restarting the Go process every time, lambda_proc starts the Go process once and keeps it running, so that the Go startup cost is paid only once.

Communication between node and Go uses stdin/stdout. Concretely it looks like this:

request
client -> node --(stdin)--> go

response
go --(stdout)--> node -> client

In lambda_proc, the node side formats the event and passes it to Go as a single line of JSON (line-delimited JSON). The Go-side helper library parses the JSON and hands it to your Go handler function.

For the reply, when you return a suitable struct, the lambda_proc helper library serializes it to JSON and returns it to node.

The actual Go source for the benchmark is below. Anything written to standard output goes to the node side, so logs must be written to standard error.

package main

import (
     "encoding/json"
     "log"
     "os"

     "github.com/aws/aws-sdk-go/aws"
     "github.com/aws/aws-sdk-go/aws/session"
     "github.com/aws/aws-sdk-go/service/dynamodb"
     "github.com/bitly/go-simplejson"
     "github.com/jasonmoo/lambda_proc"
)

// write to standard error
var logObj = log.New(os.Stderr, "", 0)

// the main function from the original article
func parse(jsonStr string) {
     js, _ := simplejson.NewJson([]byte(jsonStr))
     records := js.Get("Records")
     size := len(records.MustArray())

     for i := 0; i < size; i++ {
             record := records.GetIndex(i)

             logLn(record.Get("eventName").MustString())  // fmt.Println cannot be used here
             logLn(record.Get("eventId").MustString())
             logLn(record.Get("dynamodb").MustMap())
     }

     ddb := dynamodb.New(session.New(), aws.NewConfig().WithRegion("ap-northeast-1"))
     tableName := "mytest"
     keyValue := "test"
     attribute := dynamodb.AttributeValue{S: &keyValue}
     query := map[string]*dynamodb.AttributeValue{"id": &attribute}

     getItemInput := dynamodb.GetItemInput{
             TableName: &tableName,
             Key:       query,
     }

     obj, _ := ddb.GetItem(&getItemInput)
     logLn(obj)
}

// writing to stdout with fmt.Println would be parsed on the JS side, so write to stderr
func logLn(a ...interface{}) {
     logObj.Println(a...)
}

// a dummy struct, since something must be returned; in normal code being able to return a struct is actually convenient
type Return struct {
     Id    string
     Value string
}

// the main handler function
func handlerFunc(context *lambda_proc.Context, eventJSON json.RawMessage) (interface{}, error) {
     parse(string(eventJSON))
     return Return{Id: "test", Value: "somevalue"}, nil
}

// main just registers the handler with lambda_proc
func main() {
     lambda_proc.Run(handlerFunc)
}

benchmark

Lambda provides standard JSON events for testing. This time I saved the DynamoDB Update test JSON locally. I set up a Lambda API endpoint so it can be hit with curl, and ran curl 10 times at 0.5-second intervals with the following script.

for I in `seq 1 10`
do
curl -X POST -H "Content-type: application/json" --data @body.json https://hoge.execute-api.ap-northeast-1.amazonaws.com/prod/benchmark
sleep 0.5
done

When you do this,

Duration Billed Duration Used Memory
367.42 ms 400 ms 14 MB
36.92 ms 100 ms 14 MB
44.00 ms 100 ms 14 MB
46.05 ms 100 ms 14 MB
61.44 ms 100 ms 15 MB
50.48 ms 100 ms 15 MB

and

Duration Billed Duration Used Memory
393.30 ms 400 ms 14 MB
44.13 ms 100 ms 14 MB
47.99 ms 100 ms 14 MB
52.30 ms 100 ms 14 MB

two log streams appeared in CloudWatch.

From these logs you can see that two containers are being used. The first request takes about 400 ms, but subsequent requests take only about 40 ms, and memory usage is minimal. I only made 10 calls this time, but a larger volume is fine as well. Also, a container that keeps being invoked lives longer, so the startup cost should end up in a negligible range.

Watch out for log output

As also noted in the code above, if you log with fmt.Println the output goes to standard output and is sent to the node side. That is why the code writes logs to standard error. This is easy to deal with, but be careful when using a logging library.

The Go program becomes simple

As a side effect of using lambda_proc, the Go program itself became simpler.

With an ordinary application server you have to be HTTP-aware and deal with contexts and all sorts of things. With this approach, only stdin/stdout matters: AWS Lambda (and API Gateway) takes care of everything related to HTTP. The Go side only looks at standard I/O, and its format is plain JSON.

This restriction narrows what has to be implemented and makes the processing very easy to write.

It also makes testing easier. Conventionally you would have to set up net/http/httptest and so on, but here you only need standard I/O, as in the sketch below.
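For example, a minimal test could feed JSON straight to the handler function (a sketch; handlerFunc above also calls DynamoDB, which a real test would stub out):

func TestHandlerFunc(t *testing.T) {
     // The handler is just a function: JSON in, value out. No HTTP server needed.
     raw := json.RawMessage(`{"Records": []}`)
     out, err := handlerFunc(nil, raw)
     if err != nil {
             t.Fatal(err)
     }
     if ret, ok := out.(Return); !ok || ret.Id != "test" {
             t.Errorf("unexpected result: %#v", out)
     }
}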

Summary

I showed how to use lambda_proc to invoke a Go program efficiently on AWS Lambda. With this, I think Go can be used not only for occasional one-off processing but also for applications that field a steady stream of requests.

Lambda gives you a fair amount of compute resources for free, so I want to use it well and save money.

Thu, 07 Jan 2016

Introduction of gopsutil to acquire information such as CPU and memory
http://tdoc.info/en/blog/2015/12/16/gopsutil.html

This is the article for day 16 of the Go Advent Calendar 2015.

Python has psutil, a library for obtaining information such as CPU and memory usage. My gopsutil project started as an attempt to port psutil to Go.

Gopsutil has the following features.

  • It works on Linux / Darwin / FreeBSD / Windows
    • Of course, the level of support differs considerably between platforms
  • It is (almost) implemented in pure Go, so cross-compiling is easy
    • "Almost": cgo is used only for Darwin's CPU utilization. If cgo is not used, a stub implementation is returned instead
  • You can also retrieve information that psutil does not provide
    • For example Docker (cgroup) information and virtualization status; functions get added as needed

gopsutil has been under continuous development for more than a year and a half, and its GitHub stars now exceed 800.

It is also used as a library by a number of other software projects.

How to use

Usage is described in the README, but it goes like this: import a package such as github.com/shirou/gopsutil/mem and simply call the functions it provides.

import (
    "fmt"

    "github.com/shirou/gopsutil/mem"
)

func main() {
    v, _ := mem.VirtualMemory()

    // A struct is returned.
    fmt.Printf("Total: %v, Free:%v, UsedPercent:%f%%\n", v.Total, v.Free, v.UsedPercent)

    // Printing it gives the result in JSON format
    fmt.Println(v)
}

When you do this, it looks something like this.

Total: 3179569152, Free:284233728, UsedPercent:84.508194%

{"total":3179569152,"available":492572672,"used":2895335424,"usedPercent":84.50819439828305, (以下省略)}

You can get it as a struct, so please customize it as you like. Or, if you print it you can treat it as JSON.

Information that can be obtained

Quite a lot of information can be obtained; here is a selection of it.

  • CPU
    • CPU utilization, CPU hardware information
  • Memory
    • Memory usage rate, swap usage rate
  • Disk
    • Partition information, I / O, disk utilization, disk serial number
  • Host
    • Host name, boot time, OS, virtualization method
    • Login user information
  • Load
    • Load 1, 5, 15
  • Process
    • PID and state of each process, process name, memory and CPU usage, etc.
  • Docker
    • Container internal CPU usage, memory usage, etc.

If there is demand, I intend to keep adding more, as long as it does not break the existing API.
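As a rough sketch of pulling a few of these values (treat the exact function names as illustrative; they have changed between gopsutil versions):

package main

import (
    "fmt"
    "time"

    "github.com/shirou/gopsutil/cpu"
    "github.com/shirou/gopsutil/disk"
    "github.com/shirou/gopsutil/host"
    "github.com/shirou/gopsutil/load"
)

func main() {
    // CPU utilization sampled over one second (false = aggregate, not per-CPU)
    if percent, err := cpu.Percent(time.Second, false); err == nil {
        fmt.Println("cpu:", percent)
    }
    // load average 1 / 5 / 15
    if avg, err := load.Avg(); err == nil {
        fmt.Println("load:", avg)
    }
    // disk usage of the root partition
    if usage, err := disk.Usage("/"); err == nil {
        fmt.Println("disk used:", usage.UsedPercent)
    }
    // host name, boot time, OS, virtualization
    if info, err := host.Info(); err == nil {
        fmt.Println("host:", info)
    }
}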

Internals

Internally, gopsutil does a lot of rather dirty things. First, cgo cannot be used, since staying pure Go is a guiding principle of the project. On top of that, Linux, BSD, and Windows each require a completely different approach.

Linux
File-based, e.g. the proc filesystem
FreeBSD / Darwin
sysctl
Windows
DLLs and WMI

These are split into per-platform files with names such as cpu_darwin.go.

Linux

Basically everything is text-file based, so it is fairly easy.

Or so you would think: the information available differs with the Linux version, and inside a container paths such as /sys have to be swapped out or cannot be used at all, so there are many small differences to deal with.

In addition, user information is stored in /var/run/utmp in binary form (the utmp struct), so it has to be parsed properly. I published material about this for the GoCon in June 2015 (though I did not present it).
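The parsing itself boils down to reading fixed-size binary records with encoding/binary. The sketch below is only an illustration: the field layout is simplified and made up, and the real utmp struct is larger and platform-dependent.

package main

import (
    "encoding/binary"
    "fmt"
    "os"
)

// utmpRecord is a simplified, hypothetical layout used only for illustration.
type utmpRecord struct {
    Type int32
    Pid  int32
    Line [32]byte  // device name
    ID   [4]byte
    User [32]byte  // login name
    Host [256]byte
}

func main() {
    f, err := os.Open("/var/run/utmp")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    defer f.Close()

    for {
        var rec utmpRecord
        // Each record is a fixed-size binary blob, so binary.Read can decode it directly.
        if err := binary.Read(f, binary.LittleEndian, &rec); err != nil {
            break // EOF (or a short trailing read) ends the loop
        }
        fmt.Printf("user=%s line=%s\n", rec.User[:], rec.Line[:])
    }
}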

FreeBSD / Darwin

On BSD systems, a variety of information can be obtained with the sysctl command; for example, sysctl vm.stats.vm.v_page_size returns the page size.

However, the sysctl command can only retrieve information that is exposed as text. Things like the proc structure cannot be obtained from the command, so we resort to calling syscall.Syscall6 and the like directly. (As an aside, godoc only shows the Linux code, so you have to read the source if you want to know about the other platforms.)

mib := []int32{CTLKern, KernProc, KernProcProc, 0}
miblen := uint64(len(mib))

// First call with the length set to 0 to learn how large the buffer needs to be
length := uint64(0)
_, _, err := syscall.Syscall6(
    syscall.SYS___SYSCTL,
    uintptr(unsafe.Pointer(&mib[0])),
    uintptr(miblen),
    0,
    uintptr(unsafe.Pointer(&length)),
    0,
    0)

// Call again to get the actual data
buf := make([]byte, length)
_, _, err = syscall.Syscall6(
    syscall.SYS___SYSCTL,
    uintptr(unsafe.Pointer(&mib[0])),
    uintptr(miblen),
    uintptr(unsafe.Pointer(&buf[0])),
    uintptr(unsafe.Pointer(&length)),
    0,
    0)

However, Darwin exposes far less information via sysctl than FreeBSD, so there are places where I have simply given up.

Windows

On Windows, we call DLLs to get the information.

procGetDiskFreeSpaceExW := modkernel32.NewProc("GetDiskFreeSpaceExW")

diskret, _, err := procGetDiskFreeSpaceExW.Call(
     uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(path))),
     uintptr(unsafe.Pointer(&lpFreeBytesAvailable)),
     uintptr(unsafe.Pointer(&lpTotalNumberOfBytes)),
     uintptr(unsafe.Pointer(&lpTotalNumberOfFreeBytes)))

Something like that. However, since doing everything this way is quite painful, we also use github.com/StackExchange/wmi to query WMI.

type Win32_Processor struct {
    LoadPercentage            *uint16
    Family                    uint16
    Manufacturer              string
    Name                      string
    NumberOfLogicalProcessors uint32
    ProcessorId               *string
    Stepping                  *string
    MaxClockSpeed             uint32
}

func get() ([]Win32_Processor, error) {
    var dst []Win32_Processor
    q := wmi.CreateQuery(&dst, "")
    if err := wmi.Query(q, &dst); err != nil {
        return nil, err
    }
    fmt.Println(dst)
    return dst, nil
}

Performance

Although I have not measured it, performance is probably not great, since external commands and the like are called quite readily. Running it at a very high frequency will put load on the host. The caller should cache the results appropriately.
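For example, a minimal sketch of caller-side caching (the function and variable names here are made up, not part of gopsutil):

package main

import (
    "fmt"
    "sync"
    "time"

    "github.com/shirou/gopsutil/mem"
)

var (
    mu      sync.Mutex
    cached  *mem.VirtualMemoryStat
    fetched time.Time
)

// cachedVirtualMemory returns the previous result while it is younger than ttl,
// so frequent callers do not hit the host every time.
func cachedVirtualMemory(ttl time.Duration) (*mem.VirtualMemoryStat, error) {
    mu.Lock()
    defer mu.Unlock()
    if cached != nil && time.Since(fetched) < ttl {
        return cached, nil
    }
    v, err := mem.VirtualMemory()
    if err != nil {
        return nil, err
    }
    cached, fetched = v, time.Now()
    return v, nil
}

func main() {
    if v, err := cachedVirtualMemory(5 * time.Second); err == nil {
        fmt.Println(v.UsedPercent)
    }
}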

Summary

This article introduced gopsutil, a library for obtaining information such as the host's CPU and memory usage.

I started this project not long after I began using Go, and since I picked up knowledge about the various platforms along the way, the code lacks a sense of unity. I keep thinking I should clean it up properly...

If you want to obtain system information with Go, I would be glad if you remember gopsutil. Pull requests are always welcome.

]]>
Wed, 16 Dec 2015 00:00:00 +0900
http://tdoc.info/en/blog/2015/12/03/docker_connection_plugin.html http://tdoc.info/en/blog/2015/12/03/docker_connection_plugin.html <![CDATA[Using Ansible Docker Connection Plugin]]> Using Ansible Docker Connection Plugin

More than a year ago, in April 2014, I wrote the article "docker containerに対して直接ansibleを実行する" (running Ansible directly against a Docker container). Since then, a Docker Connection Plugin has become part of the standard distribution in Ansible 2.0. (Although it is not my implementation.)

What is the Docker Connection Plugin?

First, what is a connection plugin? Ansible normally connects to the target host over SSH, but a connection plugin lets you switch the connection method.

A typical example is the local connection. Written as follows, tasks run directly on localhost instead of over ssh. The difference from ssh'ing to localhost is that ssh is not used at all and everything runs directly as the current user. It is convenient for development.

- hosts: all
  connection: local
  tasks:
    - file: path=/tmp/this_is_local state=directory

In addition, the following connection plugins are provided. I imagine many people have used paramiko or winrm.

Accelerate
Accelerated mode (a legacy feature; no need to remember it)
Chroot
chroot
Funcd
Func: via Fedora Unified Network Controller
Zone
Solaris Zones
Jail
FreeBSD's Jail
Libvirt_lxc
libvirt's LXC
Paramiko
A Python implementation of SSH
Winrm
Windows

One of these is the docker connection plugin.

Benefits of Docker connection plugin

By using the Docker Connection Plugin, you can run Ansible directly against a Docker container. Concretely, commands are executed with docker exec and files are copied with docker cp. There is no need to set up sshd inside the Docker container.

It is certainly true that building with a Dockerfile is the simplest approach. But,

  • To avoid increasing the number of layers, a single instruction can end up spanning many lines joined with \
  • There are no templates, so producing several variants of an image is tedious
  • Even if everything else is managed with Ansible, management becomes split if only this part lives in a Dockerfile

For reasons like these, you may still want to use Ansible, and in that case the plugin is useful.

That said, if a Dockerfile is enough, that is the better option; there is no need to go out of your way to use Ansible. As things get more complicated, though, Ansible sometimes becomes more convenient, so I introduce the plugin here.

Using the Docker connection plugin

With that caveat, let's try it right away. If you are using Ansible 2.0RC1, nothing extra needs to be installed. If you are on the stable 1.9.4, download docker.py from the linked page, create a directory called connection_plugins, and put the file there. The layout looks like this.

.
|-- connection_plugins
|   `-- docker.py
|-- hosts
`-- site.yml

Also, install docker-py with pip. (This is not necessary with Ansible 2.0.)

Write a playbook like the following.

- name: Start the Docker container
  hosts: localhost
  connection: local
  vars:
    base_image: ubuntu:latest
    docker_hostname: test

  tasks:
    - name: Start the Docker container
      local_action: docker image={{ base_image }} name={{ docker_hostname }} detach=yes tty=yes command=bash
    - name: Add the host
      add_host: name={{ docker_hostname }}

- name: Configure the inside of the Docker container
  hosts: test
  connection: docker   # specify the docker connection here
  tasks:  # write whatever tasks you like
    - file: path=/tmp/docker state=directory
    - file: path=/tmp/ansible state=directory
    - group: name=admin state=present
    - user: name=johnd comment="John Doe" uid=1040 group=admin
    - copy: src=site.yml dest=/tmp/ansible/

  post_tasks:
    - local_action: shell /usr/local/bin/docker commit {{ inventory_hostname }} ubuntu:ansible

The playbook in this example consists of the following two plays.

  1. Launch Docker Container
  2. Configuration management inside the launched Docker container

Play 1 starts the container using the docker module; this is an ordinary local connection. Play 2 uses the Docker connection.

The important point is that the only line that differs is connection: docker; everything else is no different from a normal playbook.

Finally, docker commit is run to save the result as an image. Everything up to that point is executed via docker exec and nothing is persisted, so the final docker commit produces just a single layer for the whole run. This way you avoid the long chains of lines you would otherwise write in a Dockerfile.

Automating the commit

In the example above, docker commit is run as a post_tasks step. The article "Ansible を使って Docker コンテナーをプロビジョニングする" (provisioning Docker containers with Ansible), however, shows how to commit after every task execution by using a callback plugin.

As with the Dockerfile approach, that method produces many layers. In exchange, they are cached, so it has the advantage that the next run is faster.

Using a remote Docker host

The Docker host does not have to be local; it can also be remote.

export DOCKER_HOST=tcp://192.168.0.10:4243

If you set the DOCKER_HOST environment variable, containers are accessed via that host. I have not tried it, but I expect Swarm and the like would work as well.

With this,

  • using cloud services, such as launching instances
  • building the Docker host itself
  • building Docker containers / images
  • deployment tasks such as attaching to and detaching from an ELB

All of it is possible with Ansible.

Summary

In this article, I introduced the Docker Connection Plugin, which lets Ansible operate directly on Docker containers. Just by adding a single Python file, you can treat a Docker container like a normal ssh host. The Docker host can be local or remote.

Finally.

As I mentioned earlier, if you can do it with a Dockerfile, that is better. I understand the desire to use Ansible, but that by itself is not a reason to. Think it over so you do not create unnecessary trouble for yourself.

More fundamentally, I think something is wrong if the inside of a Docker container gets complicated in the first place. As shown in golangをDockerでデプロイする (deploying Go with Docker), with Go you only need to put a single binary into the image, so "provisioning" disappears entirely. Ian, who has since moved to Google, wrote an article titled (より)小さいDockerイメージを作ろう (let's build (even) smaller Docker images); ideally you put only the minimum necessary files into the image.

Before automation, let's think about "Is it really necessary in the first place?"

]]>
Thu, 03 Dec 2015 00:00:00 +0900
http://tdoc.info/en/blog/2015/10/09/aws_iot_mqttcli.html http://tdoc.info/en/blog/2015/10/09/aws_iot_mqttcli.html <![CDATA[Connect to AWS IoT from mqttcli]]> Connect to AWS IoT from mqttcli

I have developed and published mqttcli, an MQTT client that runs on the command line.

Let's connect to AWS IoT from this mqttcli.

Download mqttcli

From the mqttcli files page, binaries are provided for:

  • Linux (arm / amd64)
  • FreeBSD (arm / amd64)
  • Darwin (amd64)
  • Windows (amd64)

Download the binary that matches your architecture, then grant execute permission with chmod u+x mqttcli.

Creating a Thing in AWS IoT

  1. Open AWS IoT from the AWS console.
  2. From Create Resource, select Create Thing.
  3. Enter a Name and press Create.
  4. The Thing with the name you just entered appears in the list below; select it, then click Connect a Device in the tab on the right.
  5. When the Connect a Device screen appears, press NodeJS, then press Generate Certificate and Policy.
  6. After about 10 seconds, you are instructed to download three files:
  • Download Public Key
  • Download Private Key
  • Download Certificate

Download all three of them.

  7. Press Confirm & Start Connecting. The following JSON is then displayed; copy it and save it to a file.

    {
      "host": "A3HVHEAALED.iot.ap-northeast-1.amazonaws.com",
      "port": 8883,
      "clientId": "something",
      "thingName": "something",
      "caCert": "root-CA.crt",
      "clientCert": "2a338xx2xxf-certificate.pem.crt",
      "privateKey": "aad380efffx-private.pem.key"
    }

  8. Obtain the file root-CA.crt from the Symantec page linked in the AWS IoT SDK documentation (the VeriSign Class 3 Public Primary Certification Authority - G5 certificate).

  9. Put the three files you downloaded earlier, the JSON file, and root-CA.crt in the same directory.

This is the end of the preparation.

Connect to AWS IoT

Go to the directory containing the files and start mqttcli as follows. The topic is specified with -t; note that $ may need to be escaped. --conf specifies the JSON file you just saved. -d enables debug output.

$ mqttcli sub -t "\$aws/things/something/shadow/update" --conf something.json -d
INFO[0000] Broker URI: ssl://A3HVHEAALED.iot.ap-northeast-1.amazonaws.com:8883
INFO[0000] Topic: $aws/things/something/shadow/update
INFO[0000] connecting...
INFO[0000] client connected

If you get this far, it has succeeded: we can connect over MQTT.

Update Thing Shadow

To update the Thing Shadow, send JSON like the following.

{
  "state": {
    "reported": {
      "from": "mqttcli"
    }
  }
}

Send it with mqttcli:

echo '{"state": {"reported": {"from": "mqttcli"} } }"' | mqttcli pub -t "\$aws/things/something/shadow/update" --conf something.json -d -s

Now, looking at the AWS console, the state should have been updated.

In this way, I was able to work with AWS IoT from mqttcli. I have also confirmed that the same thing can be done with mosquitto_sub.

Finally

In practice, though, use the AWS IoT SDK instead of speaking MQTT directly; then you do not have to be conscious of MQTT at this level.

]]>
Fri, 09 Oct 2015 00:00:00 +0900
http://tdoc.info/en/blog/2015/10/09/thing_shadows.html http://tdoc.info/en/blog/2015/10/09/thing_shadows.html <![CDATA[Awareness on AWS IoT and Thing Shadows]]> Awareness on AWS IoT and Thing Shadows

Note: this article contains some speculation.

AWS IoT has been announced. It provides a managed MQTT server, and it seems to be well received for taking care of the MQTT server, which is difficult to operate yourself.

But that is not the point. The essence of AWS IoT is the Thing Shadows mechanism.

However, I have not used it in earnest yet, so please point out any mistakes.

Thing Shadows

In AWS IoT, things are modeled as Things, and there are two kinds of them.

Things
The actual, physical thing: the device
Thing Shadow
The Thing's state on the network (on AWS)

Things are just what they sound like. The new concept is the Thing Shadow: "a (virtual) mapping of a physical device onto the network".

A Thing and its Thing Shadow are connected one to one. If the Thing changes, the Thing Shadow changes as well. The reverse also holds: if you change the Thing Shadow, the Thing changes too.

In other words, it is nothing other than the integration of:

  • physical space
  • virtual space

Information on Thing Shadow

The implementation of a Thing Shadow is just JSON:

{
    "state" : {
        "desired" : {
          "color" : "RED",
          "sequence" : [ "RED", "GREEN", "BLUE" ]
        },
        "reported" : {
          "color" : "GREEN"
        }
    },
    "metadata" : {
        "desired" : {
            "color" : {
                "timestamp" : 12345
            },
            "sequence" : {
                "timestamp" : 12345
            }
        },
        "reported" : {
            "color" : {
                "timestamp" : 12345
            }
        }
    },
    "version" : 10,
    "clientToken" : "UniqueClientToken",
    "timestamp": 123456789
}

What matters here is that both state and metadata hold two sections: desired and reported.

1. When the Thing is updated

When the Thing, the physical device, is updated, the information is reported to the Thing Shadow via MQTT or HTTPS.

Along with this, the reported section of state is updated.

2. When the Thing Shadow is updated

A Thing Shadow can be updated from the virtual-space side via MQTT or HTTP. In that case, the desired section is updated.

If desired and reported then differ, a message is sent to the Things (not necessarily only one) that subscribe to this Thing Shadow. A Thing can receive this and update its own state, and once it has done so, it updates reported in the Thing Shadow in turn.


With these actions, the Thing and the Thing Shadow are kept synchronized. If reported and desired differ, they are said to be out of sync.

Furthermore, as an API, accepted and rejected topics are provided for each of update / get / delete. This lets you know, for example, that an attempt to update the Thing Shadow was rejected.
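Concretely, for a Thing named something, the reserved topics take the following form (the update family is shown; get and delete follow the same accepted / rejected pattern):

$aws/things/something/shadow/update
$aws/things/something/shadow/update/accepted
$aws/things/something/shadow/update/rejected
$aws/things/something/shadow/update/delta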

Difference from MQTT

So far I have explained Things and Thing Shadows. Incidentally, AWS IoT does not support the following MQTT features.

  • Retain
  • Will
  • QoS 2

Why not? Because the Thing Shadow exists.

  • Retain is the Shadow itself
  • Will is not needed in the first place, since there is no offline state
  • The synchronization QoS 2 provides can be realized with the Shadow's desired / reported

Is it not more accurate to say that AWS IoT is not about the MQTT messaging protocol, but about handling "state"?

Summary

If you see AWS IoT as just a managed MQTT server, you will miss its essence. It may be interesting to look at it again in terms of the fusion of virtual and physical space, the Internet of Things, and so on.

Also, I have not touched on Rules this time. By combining Thing Shadows and Rules, you should be able to build a machine-to-machine, Things-to-Things world that needs no human intervention.

This was a more emotional piece than I usually write on this blog. (I was actually doing this kind of research more than 10 years ago, and I am glad it has come this far, so I wrote this on the spur of the moment.)

]]>
Fri, 09 Oct 2015 00:00:00 +0900
http://tdoc.info/en/blog/2015/08/21/webdbpress.html http://tdoc.info/en/blog/2015/08/21/webdbpress.html <![CDATA[I wrote an article on MQTT in WEB + DB PRESS vol 88]]> I wrote an article on MQTT in WEB + DB PRESS vol 88

I was given the opportunity to write an article about MQTT, titled 速習 MQTT, in WEB+DB PRESS vol.88, released on August 22, 2015.

The contents are as follows.

  • What is MQTT
  • Usage scene of MQTT
  • Features of MQTT
  • Comparison with other protocols
  • Try using MQTT

As the word 速習 ("quick study") in the title suggests, the article focuses mainly on usage scenarios and features, explaining what MQTT can and cannot do rather than the details of the protocol itself. There is also a hands-on section where an app using MQTT is implemented in Python; thanks to Eclipse Paho, the implementation fits in a surprisingly small number of lines.

MQTT has recently become quite the buzzword, and I have started to see claims like "MQTT is the protagonist of the IoT era" or "MQTT can do everything". This article aims to convey the actual state of MQTT as accurately as possible. MQTT's use cases are limited: there are of course situations it fits perfectly, but there are also many cases where another protocol is the better choice. I hope the article communicates this without overstatement or understatement.

Sphinx InDesign Builder

Now, on to the main topic.

I wrote the manuscript in reStructuredText (rst) format. However, since I knew that handing over raw rst would create extra work downstream, I created a Sphinx extension called Sphinx InDesign Builder.

% pip install sphinxcontrib_indesignbuilder

After installing it, add the following to conf.py:

extensions = ['sphinxcontrib.indesignbuilder']

and the preparation is complete.

Then run

% sphinx-build -b singlewebdb -d build/doctrees source build/singlewebdb

and InDesign XML files for WEB+DB PRESS are created under build/singlewebdb. Builders whose names start with "single" combine everything into a single XML file. The WEB+DB PRESS editors load this XML into InDesign, place float elements such as figures, and the PDF is ready to be produced. The effort involved is about the same as using md2inao. (Note, however, that of the features md2inao provides, only those needed for this manuscript are currently implemented.)

In reality, placing the float elements requires human work, so there is no fully automated flow from the rst file to the PDF. That part seems to require considerable InDesign skill.

Note that, as the name webdb suggests, the extension I created this time is specific to WEB+DB PRESS. Other books and magazines use different style naming conventions, so it cannot be applied to them as-is. However, using this extension as a reference, it is fairly easy to implement another extension that follows a different style naming convention. Perhaps it could even read a configuration file and emit XML matching that naming convention on its own; in that case no new implementation would be needed.

The reStructuredText (rst) format matters not only because it is highly readable in its raw form, but because its well-defined extension points make it highly extensible. For example, it can flexibly accommodate requests like "I want this part of this sentence expressed in a special way". As a result, its expressive power is much higher than other markup formats.

Summary

  • I wrote an article on MQTT in WEB + DB PRESS vol 88
    • It is contents to explain what MQTT can do and what I can not do
  • We implemented the Sphinx extension called Sphinx InDesign Builder
    • Drafted in reStructuredText (rst) format

WEB+DB PRESS vol.88 also carries articles on mobile development, LINE, and Elixir, so please do pick up a copy.

]]>
Fri, 21 Aug 2015 00:00:00 +0900
http://tdoc.info/en/blog/2015/08/02/edison_golang_ble.html http://tdoc.info/en/blog/2015/08/02/edison_golang_ble.html <![CDATA[Handle BLE from golang at Edison]]> Handle BLE from golang at Edison

The Intel Edison is popular. In particular, the fact that both WiFi and BLE can be used makes it very suitable for IoT.

The Edison comes with a standard development environment based on NodeJS, but I still want to handle things in Go.

paypal/gatt

To handle BLE in Go, github.com/paypal/gatt is the best choice. This library implements everything, from operating BLE peripherals on up, in pure Go.

A sample program is shown below. In this example, main creates a device with gatt.NewDevice and just registers two handler functions, onStateChanged and onPeriphDiscovered, which is enough to scan for BLE devices.

package main

import (
    "fmt"
    "log"

    "github.com/paypal/gatt"
    "github.com/paypal/gatt/examples/option" // DefaultClientOptions comes from gatt's examples package
)

func onStateChanged(d gatt.Device, s gatt.State) {
     fmt.Println("State:", s)
     switch s {
     case gatt.StatePoweredOn:
             fmt.Println("scanning...")
             d.Scan([]gatt.UUID{}, false)
             return
     default:
             d.StopScanning()
     }
}
func onPeriphDiscovered(p gatt.Peripheral, a *gatt.Advertisement, rssi int) {
     fmt.Printf("\nPeripheral ID:%s, NAME:(%s)\n", p.ID(), p.Name())
     fmt.Println("  Local Name        =", a.LocalName)
     fmt.Println("  TX Power Level    =", a.TxPowerLevel)
     fmt.Println("  Manufacturer Data =", a.ManufacturerData)
     fmt.Println("  Service Data      =", a.ServiceData)
}
func main() {
     d, err := gatt.NewDevice(option.DefaultClientOptions...)
     if err != nil {
             log.Fatalf("Failed to open device, err: %s\n", err)
     }
     // Register handlers.
     d.Handle(gatt.PeripheralDiscovered(onPeriphDiscovered))
     d.Init(onStateChanged)
     select {}
}

Using paypal/gatt makes it easy to handle BLE from Go, but there is one problem.

paypal/gatt assumes the HCI USER CHANNEL, which is only available in Linux 3.14 and later. In other words, on the Edison, which runs Linux 3.10, paypal/gatt does not work.

Noble

However, noble, which works with NodeJS, does work. Wondering why, I investigated and found that noble builds the following two small helper processes during installation and uses them at runtime.

When noble is installed, the following two binaries are built.

  • node_modules/noble/build/Release/hci-ble
  • node_modules/noble/build/Release/l2cap-ble

As their names imply, hci-ble deals with HCI and l2cap-ble deals with L2CAP. Both link directly against the BlueZ library, so they work with kernel 3.10.

Noblechild

From this investigation, an idea came to me:

"If you use the noble helper process you can handle BLE in kernel 3.10 even in Pure golang"

That is how noblechild came to be.

noblechild starts noble's helper processes and drives them the same way noble does, so pure Go can handle BLE on kernel 3.10 as well.
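The general idea can be sketched roughly as follows. This is not noblechild's actual code, just an illustration of driving a helper process over its standard I/O; the path handling and output parsing are simplified.

package main

import (
    "bufio"
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
)

func main() {
    // noblechild locates the helper binaries under NOBLE_TOPDIR.
    top := os.Getenv("NOBLE_TOPDIR")
    helper := filepath.Join(top, "node_modules/noble/build/Release/hci-ble")

    cmd := exec.Command(helper)
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    if err := cmd.Start(); err != nil {
        panic(err)
    }

    // The helper prints BLE events as text lines; read and parse them as they arrive.
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        fmt.Println("event:", scanner.Text())
    }
}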

How to use

  1. Install noble.
  2. Set the NOBLE_TOPDIR environment variable to the top directory of the noble installation.

That's all. noblechild provides almost the same interface as paypal/gatt, so you can use it in the same way.

func main() {
     d, err := noblechild.NewDevice(DefaultClientOptions...)
     if err != nil {
             log.Fatalf("Failed to open device, err: %s\n", err)
     }
     d.Handle(noblechild.PeripheralDiscovered(onPeriphDiscovered))
     d.Init(onStateChanged)
}

Summary

I created a library called noblechild to handle BLE from Go on the Intel Edison.

Since it is a huge hack, it is not a pretty approach. Once the Edison moves to kernel 3.14 we can use paypal/gatt directly, so I see it as a stopgap until then.

Handling BLE from Go is useful. I have also implemented an MQTT gateway that works together with fuji, and it runs without problems; I would like to publish it at some point.

Looking for work

At ツキノワ株式会社 (Tsukinowa Inc.), we are looking for work involving BLE, MQTT, and Go. Please feel free to contact us.

]]>
Sun, 02 Aug 2015 00:00:00 +0900
http://tdoc.info/en/blog/2015/06/22/gocon_psutil.html http://tdoc.info/en/blog/2015/06/22/gocon_psutil.html <![CDATA[Create Go structure from C header file]]> Create Go structure from C header file

This is something I did not get to present at GoCon 2015.