Links 2019-02-13

On Licenses, Open Source, Communities, etc

Links 2019-02-11


Links 2019-01-23

On Java …

Ansible and Vagrant SSH Keys

I recently came across the question of how to handle SSH keys with Vagrant and Ansible.


Traditionally all Vagrant boxes and VMs required a fixed login (vagrant) with a fixed SSH key. This is very convenient in a development context, but can raise security issues if Vagrant VMs are used for anything important and users are not aware of the insecure key.

In current Vagrant versions only boxes (i.e. base images) use the insecure key. When Vagrant starts a new VM it generates a new individual SSH key for this instance. Vagrant keeps these custom keys in the .vagrant/machines subdirectory, so host-to-guest logins (like vagrant ssh) are still possible.

A vagrant up shows the multi-step procedure:

    workstation: Vagrant insecure key detected. Vagrant will automatically replace
    workstation: this with a newly generated keypair for better security.
    workstation: Inserting generated public key within guest...
    workstation: Removing insecure key from the guest if it's present...
    workstation: Key inserted! Disconnecting and reconnecting using new SSH key...
==> workstation: Machine booted and ready!

To prevent this mechanism one can configure the VM with config.ssh.insert_key = false. This is necessary when modifying a box. Say you want to take a bento base box, install your development tools, and then repackage the VM as your own development box in order to distribute it to a development team. With the default behaviour (config.ssh.insert_key = true) every repackage build would generate a new base box with an individual SSH keypair; this makes the new boxes practically unusable, because every user would have to obtain and configure the custom SSH key for each box. With config.ssh.insert_key = false the box retains the previous key, so your users have the same experience as with a normal box.
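A Vagrantfile for such a repackaging build could look roughly like this (a minimal sketch; the bento box name and the provisioning script are placeholders, not from the original setup):

```ruby
# Sketch of a Vagrantfile for building a custom development box.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-22.04"        # placeholder base box
  # Keep the well-known insecure key, so the repackaged box
  # stays usable for the whole team without extra key handling:
  config.ssh.insert_key = false
  config.vm.provision "shell", path: "install-dev-tools.sh"  # placeholder
end
```

After provisioning, vagrant package produces the box file to distribute.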

Note: As a compromise between the two modes one can set a custom SSH key for all base boxes with config.ssh.private_key_path. This might be useful for companies with many internal boxes: one distributes a single SSH key to use with all Vagrant boxes instead of the publicly known insecure standard key.
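In a Vagrantfile this compromise could look like this (a sketch; the box name and key path are assumptions and would be whatever the company distributes internally):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "internal/devbox"            # hypothetical internal box
  config.ssh.insert_key = false
  # Company-wide Vagrant key instead of the public insecure key
  # (path is an assumption):
  config.ssh.private_key_path = "~/.ssh/company_vagrant_key"
end
```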


If all VMs use the same SSH key the setup is straightforward: configure all Vagrant VMs with config.ssh.insert_key = false or config.ssh.private_key_path and use that key for the Ansible login.

With Vagrant’s default behaviour there is no common SSH key for all VMs. In this case one has to configure Ansible to use the right keys for every VM, but the setup is still simple as long as you have the default /vagrant mount with the .vagrant subdirectory.

This subdirectory contains all Vagrant state; for our purposes the private_key file of each machine is the important one:

$ find .vagrant/

I usually run one Vagrant VM as an Ansible controller (with node.vm.provision :ansible_local) and then 1 to N other VMs as installation targets. To enable access from the controller to the other VMs I include this line in the Ansible inventory: ansible_ssh_private_key_file=/vagrant/.vagrant/machines/{{ inventory_hostname }}/virtualbox/private_key
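For a controller VM and two hypothetical target VMs the inventory could then look like this (the group, host names, and addresses are made-up placeholders; only the key file line comes from the setup above):

```ini
# hypothetical Ansible inventory used on the controller VM
[targets]
web ansible_host=192.168.56.11
db  ansible_host=192.168.56.12

[targets:vars]
ansible_user=vagrant
ansible_ssh_private_key_file=/vagrant/.vagrant/machines/{{ inventory_hostname }}/virtualbox/private_key
```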

This should work just the same when running Ansible on your host machine and using it to provision the guest VMs. In this case the path would be a relative one: .vagrant/machines/....

The only important detail is the consistent naming of the VM in Vagrant and Ansible. With the ansible_ssh_private_key_file setting as above, Vagrant’s VM name (set with config.vm.define) and Ansible’s inventory_hostname have to be the same. They are not required to match the guest’s hostname (config.vm.hostname), but I strongly recommend either using the same value or an obvious mapping. I usually try to use an FQDN for the hostname, and then use the short hostname (the first component) as the VM name and inventory name.
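As an example of this naming scheme (a sketch with a made-up domain):

```ruby
# Vagrantfile excerpt: short VM name "web", FQDN as guest hostname.
# The domain example.org is a placeholder.
config.vm.define "web" do |node|
  node.vm.hostname = "web.example.org"
end
```

The matching Ansible inventory entry is then simply named web, so the key path above resolves to .vagrant/machines/web/virtualbox/private_key.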

Links 2019-01-08

On cloud computing and vendors …

Links 2018-12-13

On computer and programming history …

  • What is Code?
    Software has been around since the 1940s. Which means that people have been faking their way through meetings about software, and the code that builds it, for generations.
  • Learning BASIC Like It’s 1983
    In 1983, though, home computers were unsophisticated enough that a diligent person could learn how a particular computer worked through and through. That person is today probably less mystified than I am by all the abstractions that modern operating systems pile on top of the hardware.
  • How Lisp Became God’s Own Programming Language
    Lisp transcends the utilitarian criteria used to judge other languages, because the median programmer has never used Lisp to build anything practical and probably never will, yet the reverence for Lisp runs so deep that Lisp is often ascribed mystical properties.
  • Should you learn C to “learn how the computer works”?
    I’ve often seen people suggest that you should learn C in order to learn how computers work. Is this a good idea? Is this accurate?
  • C Portability Lessons from Weird Machines
    In this article we’ll go on a journey from 4-bit microcontrollers to room-sized mainframes and learn how porting C to each of them helped people separate the essence of the language from the environment of its birth.
  • The Coming Software Apocalypse
    A small group of programmers wants to change how we code—before catastrophe strikes.

Links 2018-11-19

On teams and their problems …


I recently updated my small mailserver and finally configured DKIM. But another change was easier and still had more impact: installing postwhite. This little tool takes a list of mail domains, uses their SPF records to derive a list of their outgoing mail servers, and writes this list into a postscreen whitelist configuration. The current default setting contains 43 domains and generates a whitelist with nearly 2000 lines (each containing an IP or subnet). Everything is nicely scripted and can run as a nightly cronjob.
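Postscreen picks up the generated whitelist via the postscreen_access_list parameter in main.cf; the snippet below is a sketch, and the whitelist file name is an assumption (check your postwhite configuration for the actual path):

```
# /etc/postfix/main.cf excerpt
postscreen_access_list = permit_mynetworks,
        cidr:/etc/postfix/postscreen_spf_whitelist.cidr
```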

This setup eliminates my biggest problem with greylisting, which is Office365. Their combination of long email resubmit intervals and the use of multiple cluster servers for delivery attempts always led to long delays before I received email from Microsoft or any company using Office365. (BTW, I really like greylisting, but this is its biggest design problem: it works for single SMTP servers and enforces certain behaviour, but does not and cannot consider clusters.)