Connect to AWS IoT from mqttcli
I have released mqttcli, an MQTT client that runs on the command line.
Let's connect to AWS IoT with it.
Download mqttcli
Binaries of mqttcli are provided for:
- Linux (arm / amd64)
- FreeBSD (arm / amd64)
- Darwin (amd64)
- Windows (amd64)
Download the binary that matches your architecture, then grant it execute permission with chmod u+x mqttcli.
Create a Thing on AWS IoT
- Open AWS IoT from the AWS console.
- Select Create Thing from Create Resource.
- Enter a Name and press Create.
- The Thing you just named appears in the list below; select it, then click Connect a Device in the tab on the right.
- On the Connect a Device screen that appears, press NodeJS, then press Generate Certificate and Policy.
After about 10 seconds, you are instructed to download three files:
- Download Public Key
- Download Private Key
- Download Certificate
Download them all.
Then press Confirm & Start Connecting. The following JSON is displayed; copy it and save it to a file.
{
  "host": "A3HVHEAALED.iot.ap-northeast-1.amazonaws.com",
  "port": 8883,
  "clientId": "something",
  "thingName": "something",
  "caCert": "root-CA.crt",
  "clientCert": "2a338xx2xxf-certificate.pem.crt",
  "privateKey": "aad380efffx-private.pem.key"
}
As described in the AWS IoT SDK, obtain the file root-CA.crt from the Symantec page linked there (the ...Authority-G5.pem certificate).
Put the three files you downloaded earlier, the JSON file, and root-CA.crt in the same directory.
This is the end of the preparation.
Connect to AWS IoT
Go to the directory containing the files and start mqttcli as follows. The topic is specified with -t (note that you may need to escape $), the JSON file you just saved is specified with --conf, and -d enables debug output.
$ mqttcli sub -t "\$aws/things/something/shadow/update" --conf something.json -d
INFO[0000] Broker URI: ssl://A3HVHEAALED.iot.ap-northeast-1.amazonaws.com:8883
INFO[0000] Topic: $aws/things/something/shadow/update
INFO[0000] connecting...
INFO[0000] client connected
If you see this, the connection succeeded: you are talking to AWS IoT over MQTT.
Update Thing Shadow
To update the Thing Shadow, send JSON like the following.
{
"state": {
"reported": {
"from": "mqttcli"
}
}
}
Let's send it with mqttcli:
echo '{"state": {"reported": {"from": "mqttcli"} } }"' | mqttcli pub -t "\$aws/things/something/shadow/update" --conf something.json -d -s
The state shown in the AWS Console should now be updated.
In this way I was able to talk to AWS IoT with mqttcli. I also confirmed that the same thing can be done with mosquitto_sub.
In closing
For real use, rely on the AWS IoT SDK instead of speaking MQTT directly; then you do not have to be conscious of MQTT at this level.
Thoughts on AWS IoT and Thing Shadows
Note: this article contains some speculation.
AWS IoT has been announced. It provides a managed MQTT server, and much of the praise seems to be that it relieves you of operating an MQTT server, which is hard to do well.
But that is not the point. The essence of AWS IoT is the Thing Shadows mechanism.
However, I have not used it in earnest yet, so please point out any mistakes.
Thing Shadows
AWS IoT defines "Things". There are two kinds:
- Things
- Actual physical things: devices.
- Thing Shadows
- The state of a Thing on the network (AWS).
Things are just what they sound like. The new concept is the Thing Shadow: "a (virtual) mapping of a physical device onto the network".
A Thing and its Thing Shadow are connected one to one. If the Thing changes, a change also occurs in the Thing Shadow. The reverse holds as well: change the Thing Shadow, and a change is made to the Thing.
In other words, it is nothing less than the integration of physical space and virtual space.
Information on Thing Shadow
The implementation of a Thing Shadow is just JSON:
{
"state" : {
"desired" : {
"color" : "RED",
"sequence" : [ "RED", "GREEN", "BLUE" ]
},
"reported" : {
"color" : "GREEN"
}
},
"metadata" : {
"desired" : {
"color" : {
"timestamp" : 12345
},
"sequence" : {
"timestamp" : 12345
}
},
"reported" : {
"color" : {
"timestamp" : 12345
}
}
},
"version" : 10,
"clientToken" : "UniqueClientToken",
"timestamp": 123456789
}
The important parts here are desired and reported, which appear under both state and metadata.
1. When the Thing is updated
When the Thing, the physical device, is updated, the information is reported to the Thing Shadow via MQTT or HTTPS.
Along with this, reported under state is updated.
2. When the Thing Shadow is updated
A Thing Shadow can also be updated from the virtual-space side via MQTT or HTTP. In that case, the desired information is updated.
If desired and reported then differ, a message is sent to the Things (not necessarily just one) that subscribe to this Thing Shadow. A Thing receives it and updates its own state; once the update is done, it updates reported on the Thing Shadow.
Through these actions, Things and Thing Shadows are kept synchronized. If reported and desired differ, the two are said to be out of sync.
Furthermore, the API provides update / get / delete operations, each with accepted and rejected variants. So, for example, when you try to update a Thing Shadow, you can tell whether the update succeeded or failed.
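With mqttcli from the earlier article, you can watch those outcomes by subscribing to the corresponding topics (a sketch: the topic layout follows the documented $aws/things/<name>/shadow scheme, and "something" is the Thing name from the earlier example):
$ mqttcli sub -t "\$aws/things/something/shadow/update/accepted" --conf something.json
$ mqttcli sub -t "\$aws/things/something/shadow/update/rejected" --conf something.json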
Difference from MQTT
So far I have explained Things and Thing Shadows. Incidentally, AWS IoT does not have the following MQTT features:
- Retain
- Will
- QoS 2
Why? Because Thing Shadows exist.
- Retain: the Shadow itself plays this role
- Will: unnecessary in the first place, since there is no offline state
- QoS 2: synchronization can be realized with the Shadow's desired / reported
Is it not better to emphasize that AWS IoT is not about the MQTT message protocol, but about handling "state"?
Summary
If you see AWS IoT as just a managed MQTT server, you will miss its essence. It may be fun to look at it afresh in terms of the fusion of virtual and physical space, the Internet of Things, and so on.
In addition, I have not touched on Rules this time. By combining Thing Shadows and Rules, you should be able to create a machine-to-machine, Things-to-Things world with no human intervention.
This was a somewhat emotional piece of a kind I rarely write on this blog. (Actually, I was doing this sort of research more than 10 years ago; I am glad the field has come this far, and I wrote this on that momentum.)
I wrote an article on MQTT in WEB+DB PRESS vol.88
In WEB+DB PRESS vol.88, released on August 22, 2015, I was given the chance to write an article about MQTT titled "速習MQTT" (Quick-study MQTT).
The contents are as follows.
- What is MQTT
- Usage scene of MQTT
- Features of MQTT
- Comparison with other protocols
- Try using MQTT
As the "速習" (quick study) in the title suggests, the article compactly explains the protocol, centering on MQTT's usage scenes and features that are otherwise hard to pick up. In the hands-on part we implement an app that uses MQTT in Python; thanks to eclipse paho, the implementation is finished in a remarkably small number of lines, which honestly is great value.
MQTT has recently become quite the buzzword, and I increasingly see claims like "MQTT is the protagonist of the IoT era" or "MQTT can do it all". This article aims to convey the actual state of MQTT as accurately as possible. MQTT's uses are limited: there are of course scenarios it suits perfectly, but in many cases another protocol is the better choice. I hope the article conveys that balance, with neither excess nor deficiency.
Sphinx InDesign Builder
Now, the main topic.
I wrote the manuscript in reStructuredText (rst) format. Since I knew that leaving it in rst would create various work downstream, I created a Sphinx extension called Sphinx InDesign Builder. Install it with:
% pip install sphinxcontrib_indesignbuilder
Then add the following to conf.py:
extensions = ['sphinxcontrib.indesignbuilder']
and the preparation is complete.
Then run:
% sphinx-build -b singlewebdb -d build/doctrees source build/singlewebdb
InDesign XML files for WEB+DB PRESS are then created under build/singlewebdb. Builders whose names start with "single" combine everything into a single XML file. The WEB+DB PRESS editors flow this XML into InDesign and place float elements such as figures, after which the PDF can be produced. The workflow is almost exactly as hassle-free as with md2inao. (Note, however, that of the features md2inao offers, only those needed for this manuscript are currently implemented.)
Actually, since human effort is needed to place the float elements, this is not a fully automatic flow from rst file to PDF. That part seems to require considerable InDesign skill.
Note that, as the name webdb suggests, the Sphinx extension created this time is specific to WEB+DB PRESS. Other books and magazines use different style naming conventions, so it cannot be applied to them as is. However, using this extension as a reference, it is fairly easy to implement another extension that conforms to a different style naming convention. Perhaps, if the extension read some configuration file, it could even emit XML for any given style naming convention by itself; then no further implementation would be needed.
The reStructuredText (rst) format is notable for being highly readable in raw form, but just as important is its extensibility, which comes from having well-defined extension points. For example, it can flexibly accommodate requests such as "I want to mark up this particular phrase specially". As a result, its expressive power is much higher than that of other markup formats.
Summary
- I wrote an article on MQTT in WEB+DB PRESS vol.88
- It explains what MQTT can and cannot do
- We implemented the Sphinx extension called Sphinx InDesign Builder
- Drafted in reStructuredText (rst) format
WEB+DB PRESS vol.88 also carries articles on mobile development, development at LINE, and Elixir, so please do buy a copy.
Handle BLE from golang at Edison
Intel Edison is popular. The fact that both WiFi and BLE are available makes it very suitable for IoT.
Edison ships with a development environment that uses NodeJS, but I still want to handle it from golang.
paypal/gatt
To handle BLE from golang, github.com/paypal/gatt is the best choice. This library implements everything, including operating BLE peripherals, entirely in golang: it works in pure golang.
A sample program follows. In main we create a device with gatt.NewDevice and register just two handler functions, onStateChanged and onPeriphDiscovered; that is enough to scan for BLE devices.
package main

import (
	"fmt"
	"log"

	"github.com/paypal/gatt"
	"github.com/paypal/gatt/examples/option"
)

func onStateChanged(d gatt.Device, s gatt.State) {
fmt.Println("State:", s)
switch s {
case gatt.StatePoweredOn:
fmt.Println("scanning...")
d.Scan([]gatt.UUID{}, false)
return
default:
d.StopScanning()
}
}
func onPeriphDiscovered(p gatt.Peripheral, a *gatt.Advertisement, rssi int) {
fmt.Printf("\nPeripheral ID:%s, NAME:(%s)\n", p.ID(), p.Name())
fmt.Println(" Local Name =", a.LocalName)
fmt.Println(" TX Power Level =", a.TxPowerLevel)
fmt.Println(" Manufacturer Data =", a.ManufacturerData)
fmt.Println(" Service Data =", a.ServiceData)
}
func main() {
d, err := gatt.NewDevice(option.DefaultClientOptions...)
if err != nil {
log.Fatalf("Failed to open device, err: %s\n", err)
}
// Register handlers.
d.Handle(gatt.PeripheralDiscovered(onPeriphDiscovered))
d.Init(onStateChanged)
select {}
}
Using paypal / gatt makes it easy to handle BLE from golang, but there is one problem.
paypal/gatt assumes the HCI USER CHANNEL, which is available only in Linux 3.14 or later. In other words, paypal/gatt does not work on Edison, which runs Linux 3.10.
noble
However, noble, which works with NodeJS, does work. Wondering why, I dug in and found that noble builds two small helper processes during installation and uses them at runtime.
Installing noble produces the following two binaries:
- node_modules/noble/build/Release/hci-ble
- node_modules/noble/build/Release/l2cap-ble
As the names imply, hci-ble handles HCI and l2cap-ble handles L2CAP. Both link directly against the Bluez libraries, which is why they work with kernel 3.10.
noblechild
The examination above led me to an idea:
"If I use noble's helper processes, I can handle BLE from pure golang even on kernel 3.10."
What came out of that is noblechild.
noblechild starts noble's helper processes and drives them exactly as noble does, so pure golang can handle BLE on kernel 3.10 as well.
How to use
- Install noble.
- Set the NOBLE_TOPDIR environment variable to the top of the directory where you installed noble.
That's all. noblechild provides almost the same interface as paypal/gatt, so you can use it as a drop-in replacement.
func main() {
d, err := noblechild.NewDevice(DefaultClientOptions...)
if err != nil {
log.Fatalf("Failed to open device, err: %s\n", err)
}
d.Handle(noblechild.PeripheralDiscovered(onPeriphDiscovered))
d.Init(onStateChanged)
}
Summary
I created a library called noblechild to handle BLE from golang on Intel Edison.
Since it is a huge hack, its behavior is not elegant. Once Edison moves up to kernel 3.14, paypal/gatt can be used directly, so I regard noblechild as a bridge until then.
Handling BLE from golang is useful. I have also implemented an MQTT gateway that combines it with fuji, and it works without problems; I would like to publish that as well.
Looking for work
At Tsukinowa Inc. (ツキノワ株式会社), we are looking for work involving BLE, MQTT, and golang. Please feel free to contact us.
Create Go structure from C header file
This is something I did not present at GoCon 2015.
(I did not even know whether I could attend in the first place.)
So, using godef, I wrote up how to generate a golang struct from a struct definition in a C header, and threw it onto Twitter.
However, this generation method is from a while ago; now that c2go exists, it may be better to use that instead. I have not tried it myself yet, so I would appreciate it if you explored that direction.
gopsutil
As mentioned in that document, gopsutil is a port to go of psutil, the Python library that obtains information such as memory and CPU usage. Like psutil, it supports not only Linux but also FreeBSD, OS X, and Windows. (Some parts cannot be implemented, and quite a few differences remain.)
It can also obtain per-process information, so if you want to read CPU or memory information from go, please give it a try. Everything is written in go, so cross-compiling is easy; it actually runs on a Raspberry Pi.
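For instance, reading overall memory usage looks roughly like this (a sketch based on gopsutil's mem subpackage; names follow its documentation):
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/mem"
)

func main() {
	// VirtualMemory returns system-wide memory usage statistics.
	v, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Total: %v, Free: %v, UsedPercent: %.1f%%\n", v.Total, v.Free, v.UsedPercent)
}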
Server Side React with PostgreSQL
Reactjs is nice. We are already running a service written with React.
Server-side rendering is said to be one of Reactjs' selling points, and people are already doing it in various languages. Examples:
- Java : http://www.slideshare.net/makingx/reactjs-meetupjavassr
- Go : https://github.com/olebedev/go-react-example
- python : https://github.com/markfinger/django-react
But wait a moment. If we render on the server side, it does not have to be the app server that does it. Rather, if we let the database that holds the data do the rendering, it should be faster, since the data does not have to move.
So I tried implementing Server Side Rendering on PostgreSQL.
PL/v8
To run JavaScript inside PostgreSQL, we use PL/v8, an extension that runs the v8 JavaScript engine directly on PostgreSQL. It appears to be usable even on Amazon RDS.
Preparation
1. Install PL/v8
On Amazon RDS, PL/v8 seems to be usable from 9.3.5 onward, but this time I set it up on Ubuntu. Installing postgresql-9.4-plv8 with apt-get is all it takes.
$ sudo apt-get install postgresql-9.4 postgresql-9.4-plv8 postgresql-client-9.4
(create a database with createdb and so on)
$ psql -c 'CREATE EXTENSION plv8' # install the plv8 extension
Now you can use plv8. (Incidentally, managing this with ansible's postgresql_ext module makes the work go faster.)
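Such a task might look like this (a sketch; parameters per the postgresql_ext module docs, with the db name from this example):
- postgresql_ext: name=plv8 db=test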
2. Load Reactjs into PL/v8
To load external JavaScript into PL/v8, do the following: first download Reactjs, then execute the SQL below.
\set reactjs `cat react-0.13.2.js`
CREATE TABLE plv8_modules(modname text primary key, load_on_start boolean, code text);
INSERT INTO plv8_modules values ('reactjs', true, :'reactjs');
CREATE OR REPLACE FUNCTION plv8_startup()
RETURNS void
LANGUAGE plv8
AS
$$
load_module = function(modname)
{
var rows = plv8.execute("SELECT code from plv8_modules " +
" where modname = $1", [modname]);
for (var r = 0; r < rows.length; r++)
{
var code = rows[r].code;
eval("(function() { " + code + "})")();
}
};
$$;
The idea: the plv8_modules table holds the code of the required modules in advance, and the plv8_startup() function defines load_module, which loads a module from this table and evals it.
We are ready at this point.
Try it.
1. Write an application
First, write the application. Since this is a sample, it is kept simple.
/** @jsx React.DOM */
var Name = React.createClass({
render: function() {
return (
<b>{ this.props.name }</b>
);
}
});
var Hello = React.createClass({
render: function() {
return (
<td>Hello <Name name={ this.props.name } /></td>
);
}
});
Write this JSX to a file called hello.jsx, then convert it to app.js:
$ jsx --harmony hello.jsx > app.js
Then load this app.js.
-- read app.js
\set appjs `cat app.js`
INSERT INTO plv8_modules values ('appjs',true,:'appjs');
Preparation is complete.
2. Create JS call function
Write something like the following. It reads the appjs code from the plv8_modules table and evals it.
-- Function itself
CREATE OR REPLACE FUNCTION name_render(n text) RETURNS text
LANGUAGE plv8
AS
$$
load_module("reactjs");
var rows = plv8.execute("SELECT code from plv8_modules where modname = $1", ["appjs"]);
eval(rows[0].code);
var e = React.createElement(Hello, {name: n});
return React.renderToString(e);
$$;
Incidentally, v8 itself seems to have no mechanism like require or module, so this part gave me a lot of trouble. There may be better ways.
3. Actually call it
Let's actually call it.
test=# select plv8_startup(); -- the load_module function etc. must be loaded beforehand
test=# select name_render('world');
name_render
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
<td data-reactid=".1lf0zu1tg5c" data-react-checksum="-1307760157"><span data-reactid=".1lf0zu1tg5c.0">Hello </span><b data-reactid=".1lf0zu1tg5c.1">world</b></td>
(1 row)
The HTML came out nicely.
It is an ordinary SQL function, so you can use it however you like. Let's expand an array into rows with unnest and render each element.
shirou=# select name_render(n) from unnest(ARRAY['shirou', 'rudi', 'tsukinowa']) as n;
name_render
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
<td data-reactid=".delxp6jitc" data-react-checksum="-896062726"><span data-reactid=".delxp6jitc.0">Hello </span><b data-reactid=".delxp6jitc.1">shirou</b></td>
<td data-reactid=".1yy26e8ifpc" data-react-checksum="-2021250693"><span data-reactid=".1yy26e8ifpc.0">Hello </span><b data-reactid=".1yy26e8ifpc.1">rudi</b></td>
<td data-reactid=".y9snhgvrb4" data-react-checksum="1022766062"><span data-reactid=".y9snhgvrb4.0">Hello </span><b data-reactid=".y9snhgvrb4.1">tsukinowa</b></td>
(3 rows)
The reactid attributes are somewhat noisy, but the output is correct.
After that, you just return this to the browser as is, and it gets rendered.
Summary
- I tried server side rendering of Reactjs with PostgreSQL
- PL/v8 makes it possible
- I take no responsibility for the results of using this in production
Ansible 1.9 has been released
Ansible 1.9 was released on March 25, 2015. A fair amount has been added and changed, so I translate the release notes here in the hope it helps. Backward compatibility is basically preserved, so playbooks should not need rewriting. Note, however, that the modules for version control systems such as git were changed to the safe side, failing when there are local changes, so playbooks may need adjusting on that point.
In addition, 1.9 is the last release of the 1.x series. Ansible 2.0, a major rewrite, should be out before long.
Release URL : https://github.com/ansible/ansible/blob/devel/CHANGELOG.md#19-dancing-in-the-street---mar-25-2015
1.9 "Dancing In the Street" - Mar 25, 2015
Major changes
- Kerberos is supported in the winrm connection plugin.
- New tags: the special tags 'all', 'always', 'untagged', and 'tagged' were added. The --list-tasks option and the newly added --list-tags option show tag information.
- The 'become' system of privilege escalation was introduced, with changes to variables and methods. sudo and su remain backward compatible. pbrun and pfexec were introduced experimentally. In addition, runas was added to the winrm connection plugin.
- Improved error display of ssh connection.
- Documentation of module return values was added; it is available from the ansible-doc command and the web site. The documents of the copy, stats, and acl modules will be updated gradually.
- The plugin loader and cache plugin were optimized. In some cases a dramatic speedup at startup can be expected.
- The checksum mechanism was rebuilt so that checks are performed properly in various places.
- When no_log is specified, parameters are no longer shown even in skipped tasks.
- Many fixes for unicode support. Functions were standardized so that problems do not occur at input/output boundaries.
- Travis CI was added on github. Ticket triage and merge speed should improve.
- The environment: directive can now be set for an entire play. Each task inherits it and can override it per task (see the sketch after this list).
- Enhanced OS/distribution support in fact collection. Speed on pypy was also improved.
- A wantlist option was added to lookup. It returns what used to be a comma-delimited string as a list-typed variable. (Translator's note: before 1.9, lookup always joined the list into a comma-delimited string; with the wantlist option the list is returned as is.)
- In the shared module code for file backups, timestamp precision was raised to seconds (it used to be minutes).
- An empty inventory is now allowed. A warning is issued, but it is not an error. (Used with localhost and cloud modules.)
- YAML parsing speed was improved by 25% by switching to the CParser loader.
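Translator's note: the play-level environment: directive mentioned above might be written like this (a sketch; the proxy URL is a made-up example):
- hosts: all
  environment:
    http_proxy: http://proxy.example.com:8080
  tasks:
    - apt: name=curl state=present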
New modules
- crypttab
- Manage Linux encrypted block devices
- gce_img
- Manage GCE image resources
- gluster_volume
- Manage glusterfs volumes
- haproxy
- Manage haproxy
- known_hosts
- Manage the ssh known_hosts file
- lxc_container
- Manage LXC containers
- patch
- Apply patches on the target system with the patch command
- pkg5
- Package management on Solaris
- pkg5_publisher
- Manage Solaris pkg5 repository configuration
- postgresql_ext
- Manage postgresql extensions
- snmp_facts
- Collect facts using snmp
- svc
- Manage daemontools-based services
- uptimerobot
- Manage Uptime Robot monitoring
New filters
- ternary
- Switch the value returned for true and false
- cartesian
- Return the Cartesian product of two lists
- to_uuid
- Generate an ansible domain-specific UUID from a string
- checksum
- Generate the checksum ansible uses internally
- hash
- Generate the hash of a string (md5, sha1, etc.)
- password_hash
- Generate a hash string usable for the user module's password
- ip / network related additions
- ipaddr, ipwrap, ipv4, ipv6, ipsubnet, nthhost, hwaddr, macaddr
Translator's note: filters are used as follows. Here is an example of the newly added hash filter.
- debug: msg={{ 'test1' | hash('sha1') }}
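The newly added ternary filter can be used similarly (a sketch):
- debug: msg={{ (inventory_hostname == 'localhost') | ternary('local', 'remote') }}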
Other noticeable changes
- New lookup plugins:
- dig : resolves a name via DNS and returns the IP address
- url : gets data from the specified URL
- New callback plugins:
- syslog_json : outputs play results to syslog in JSON format
- Many functions were added to the Amazon Web Services modules
- More than one security group can now be specified when creating a new instance with ec2. Previously it was only one.
- You can specify EBS Volume type with ec2_vol.
- ec2_vol can now detach a volume by specifying instance=None.
- ec2_group was fixed so that only the specified grants are deleted, rather than all rules being erased.
- Support for tenancy with ec2.
- You can now manage tags, charset, and public accessibility with RDS
- ec2_snapshot can now delete snapshots
- Alias records are now supported in route53
- private_zones are now supported in route53
- ec2_asg : the wait_for_instances parameter is now supported. The task waits until the instances are ready before finishing.
- Docker feature additions
- The restart_policy parameter can now control automatic restarting of the container.
- If the docker client or server does not support an option, the task now fails instead of silently ignoring that option.
- The insecure_registry parameter was added for accessing registries over HTTP.
- Added parameter to set the domain name of the container.
- The docker_image module has been deprecated until its functionality is fully covered.
- You can now set container PID namespace.
- The pull parameter was added; ansible can now choose the newer image from the registry.
- New states can now be specified in the docker module. The new states are described below.
- present creates the container but does not start it.
- restarted restarts the container.
- reloaded recreates the container once ansible detects its configuration has changed.
- reloaded accounts for exposed ports, env vars, and volumes
- TLS can now be used to connect to docker server
- Some source control modules had a force parameter that defaulted to true. This was changed to default to false, which prevents local changes from being destroyed by accident. Playbooks written on the premise that force is true must now add force=True explicitly. The affected modules are as follows.
- bzr : when there were local changes at checkout, the bzr module used to discard them all no matter what action was specified. From now on, it will not overwrite them unless force=yes is specified. Operations that assume an unchanged working tree may fail.
- git : when there are local changes at checkout, the git module fails unless force is specified. With force=yes, all changes are reverted and rewound.
- hg : same as bzr
- subversion : same as bzr
- New inventory scripts
- vbox : VirtualBox
- consul : gets the inventory from consul
- gce : the ip_forward parameter now allows IP packet forwarding.
- gce : the disk_auto_delete parameter can now delete the boot disk after the instance is destroyed.
- gce : instances can now be spawned without an external IP address.
- gce_pd : the disk type can now be selected.
- gce_net : the target_tags parameter can now set firewall rules.
- rax : Added parameters to create a boot volume.
- nova_compute : the scheduler_hints parameter was added.
- vsphere_guest : guests can now be deployed from a template.
- file module and friends : many fixes for hard links and soft links
- unarchive : user, group, mode, and selinux parameters were added.
- authorized_key : the URL from which to retrieve keys can now be specified.
- authorized_key : handling of keys not listed in the task can now be specified with the exclusive parameter. (Translator's note: with exclusive=yes, keys other than those given in key are erased.)
- selinux : with state=disabled, the current state is now changed to permissive
- user : account expiration can now be set.
- service : rewritten to behave better.
- yum : the update_cache parameter was added; you can request a cache update.
- apt : the build_dep parameter was added so that a package's build dependencies can be installed
- postgres : a unix socket can now be specified for the DB connection
- mount : bind mounts are now supported.
- git : the clone parameter was added. With it you can get information about a remote repository without a local clone
- git : the refspec parameter was added. It allows pulling commits that are not part of a branch.
- Many fixes on documents
Summary
This is a release of many small improvements. For modules and tags in particular, it adds the kind of features that make you say 'this is what I wanted'.
Deploying golang with Docker
(Sharing findings from deploying golang apps on Docker. Please point out mistakes or better ways.)
golang does not depend on libc or other shared libraries; everything is statically linked. This means that the single binary golang outputs is all you need for the program to run.
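You can check this with file (a sketch; myapp is a placeholder name, output abbreviated, and see the note about the net package in a later article):
$ go build -o myapp
$ file myapp
myapp: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped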
Run a golang binary on Docker
Docker is excellent as a container for running things in different environments. But there have been problems: base images are huge, and there is the question of whether docker pull is really safe.
With golang, however, thanks to the feature above, you can operate with only the output binary plus the necessary files. (Provided, that is, you are not using cgo and the like.)
An example
1. Create tar.gz
Suppose you have the following directory structure.
github.com/shirou/test
|-- main.go
|-- public
| `-- css
| `-- sample.css
`-- view
`-- base.html
Suppose main.go is a web application that uses public and view appropriately. (Anything will do.)
Next, tar.gz the whole thing and scp it to the host running docker.
GOOS=linux GOARCH=amd64 go build
tar cvfz /tmp/image.tar.gz .
scp /tmp/image.tar.gz docker:/tmp/
You do not have to bundle everything; picking out only the necessary files is fine. In practice you would create a build directory and copy the needed files into it, as sketched below.
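A sketch of that, using the example tree above:
mkdir -p build
GOOS=linux GOARCH=amd64 go build -o build/test
cp -r public view build/
tar cvfz /tmp/image.tar.gz -C build .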
2. Docker import
On the docker host side, create a docker image from the tar.gz.
cat image.tar.gz | sudo docker import - test:latest
3. Docker run
Then you can run it as usual.
sudo docker run -p 8000:8000 test:latest /test
Advantages
With this method, the following advantages arise.
- You do not need docker pull or docker hub. It also does not need a private repository
- Since it is only necessary files, it does not consume capacity
- Since no unnecessary processes run and no unnecessary files exist, security problems cannot arise (unless your own program has one)
- There is no need to perform configuration management (there is no need to install dependent packages)
Use of s3
This time I sent the tar.gz with scp, but of course you can put it on s3 instead: docker import accepts a URL as its argument.
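For example (a sketch; the bucket URL is a made-up placeholder):
sudo docker import https://example-bucket.s3.amazonaws.com/image.tar.gz test:latest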
To consolidate into one file
This time we copied the necessary files alongside the binary, but with tools like go-bindata you can compact everything into a single binary file. kocha may also be helpful for this.
In that case no tar.gz is needed: just ADD the single file in a Dockerfile and it will run, as sketched below.
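A minimal sketch of such a Dockerfile (assuming the empty scratch base image and the binary name test from the example above):
FROM scratch
ADD test /test
EXPOSE 8000
CMD ["/test"]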
Pitfalls
If you want to access the outside world over HTTPS, you may need /etc/ssl/certs/ca-certificates.crt; one way to provide it is sketched below.
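A sketch, reusing the build directory from the earlier example:
mkdir -p build/etc/ssl/certs
cp /etc/ssl/certs/ca-certificates.crt build/etc/ssl/certs/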
Summary
I explained how running binaries built with golang on Docker brings various benefits. No ansible needed! (That is a lie.)
MQ of MQTT is not Message Queue
I sometimes see the claim that MQTT is an abbreviation of "Message Queueing Telemetry Transport". However, as anyone who has used it knows, MQTT IS NOT A QUEUE. The MQ of MQTT is not an abbreviation of Message Queue.
MQTT Version 3.1.1, now an OASIS standard, does not clearly state what MQTT abbreviates. The Wikipedia article on MQTT writes "(formerly Message Queue Telemetry Transport)", which reads as though it used to be Message Queue but is now something different. Unfortunately the OASIS TC's name does contain "Message Queuing", but that is said to be a leftover from when the TC was created.
What is MQTT
MQTT was originally developed by IBM. IBM had developed WebSphere MQ (later IBM MQ), and the word MQ was attached to the products of that series. This is also clearly stated in the Wikipedia article above.
When MQTT was donated to OASIS, there was debate over whether to change the name. Since the name had already been used for more than 10 years, changing it seemed unwise, so it was left as is, without specifying an official expansion. That is how it appears to have settled.
This story was posted here.
MQTT IS NOT A QUEUE
It is important, so I said it twice.
Create a static binary with Go 1.4
(This is a memorandum. If something is wrong, please let me know.)
Binaries built in the Go language have the feature that everything is statically linked into a single file.
However, from 1.4 there seems to be a problem: using the "net" package makes the binary dynamically linked. (Confirmed with 1.4.1.)
% file hoge
hoge: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
% ldd hoge
linux-vdso.so.1 => (0x00007fff05dfe000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd8c690a000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd8c6544000)
/lib64/ld-linux-x86-64.so.2 (0x00007fd8c6b30000)
Unless you use "net", the binary seems to come out static.
Also, the -a option now seems to no longer apply to the standard library.
So, what should I do?
For the time being, I was able to generate a static binary by adding -installsuffix as follows (the suffix string itself appears to be arbitrary; nocgo below is my placeholder).
% go build -installsuffix nocgo .
I have not followed the whole discussion, but I hope things improve so that such odd workarounds become unnecessary.
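As an aside, another recipe commonly cited at the time (my addition, not from the original note) disables cgo entirely so that the pure-Go net implementation is used:
% CGO_ENABLED=0 go build -a -installsuffix cgo .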