Thursday, March 01, 2018

I tried kubernetes on lxd, but failed:

  • host os: ubuntu 16.04
  • all run as root

1. upgrade lxd

(host) $ apt update && apt install -t xenial-backports lxd lxd-client

2. disable swap

(host) $ swapoff -a
  • should also remove swap line in /etc/fstab
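the fstab edit can be scripted; a minimal sketch, run here against a scratch copy (the swap line is a made-up example) so it's safe to try, commenting the line out instead of deleting it so it's easy to revert:

```shell
# scratch copy standing in for /etc/fstab
printf '%s\n' '/dev/sda1 / ext4 defaults 0 1' \
              '/dev/sda2 none swap sw 0 0' > fstab.demo

# comment out any active swap entry (on a real host: sed -i ... /etc/fstab)
sed -i -E 's/^([^#].*\bswap\b)/#\1/' fstab.demo
cat fstab.demo
```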

3. init lxd

(host) $ lxd init
  • use dir as the storage backend (overlay won't work on zfs, and btrfs is troublesome; those backends are for serious use)
  • none for ipv6

4. create profile

(host) $ lxc profile copy default kubernetes
(host) $ lxc profile edit kubernetes

something like this:

config:
  linux.kernel_modules: bridge,br_netfilter,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |-
    lxc.aa_profile = unconfined
    lxc.cgroup.devices.allow = a sys:rw
    lxc.cap.drop =
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: kubernetes
used_by: []

5. launch and setup master

(host) $ lxc launch ubuntu:16.04 master -p kubernetes
(host) $ lxc exec master -- bash
(master) $ apt update && apt install -y linux-image-$(uname -r)
(master) $ curl -s | apt-key add -
(master) $ echo "deb kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
(master) $ apt update && apt install -y kubelet kubeadm kubectl

6. save image

(host) $ lxc stop master
(host) $ lxc publish master --alias kubernetes

7. kubeadm init

(master) $ kubeadm init --apiserver-advertise-address= --pod-network-cidr=

got [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

ignore it: even though I included br_netfilter in the profile, the module doesn't seem to be network-namespace aware, so there's nothing I can do here.
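a quick way to confirm whether that sysctl is visible from inside the container:

```shell
# inside the container: check whether the bridge-nf sysctl file exists
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    echo "bridge-nf-call-iptables present"
else
    echo "bridge-nf-call-iptables missing (expected here)"
fi
```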

(master) $ kubeadm init --apiserver-advertise-address= --pod-network-cidr= --ignore-preflight-errors=all
if it succeeds, it shows something like this:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token ...

8. install pod network add-on

since most of the available providers require /proc/sys/net/bridge/bridge-nf-call-iptables,

I picked calico here:

(master) $ kubectl apply -f

output is like:

configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

9. To be continued

(master) $ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                     READY     STATUS              RESTARTS   AGE       IP              NODE
kube-system   calico-etcd-7cn7c                        1/1       Running             0          9m   master
kube-system   calico-kube-controllers-f86cf776-88cw5   1/1       Running             6          9m   master
kube-system   calico-node-vbzsm                        1/2       CrashLoopBackOff    6          9m   master
kube-system   etcd-master                              1/1       Running             0          7m   master
kube-system   kube-apiserver-master                    1/1       Running             1          7m   master
kube-system   kube-controller-manager-master           1/1       Running             0          7m   master
kube-system   kube-dns-6f4fd4bdf-j65xk                 0/3       ContainerCreating   0          47m       <none>          master
kube-system   kube-proxy-cm2gc                         0/1       CrashLoopBackOff    14         47m   master
kube-system   kube-scheduler-master                    1/1       Running             0          7m   master

kube-dns will not start until calico is ready.

but still something is wrong there

because the master node is running with the flag --enable-debugging-handlers=false, kubectl logs doesn't work on those pods.

tried removing the taints on the master:

(master) $ kubectl taint nodes --all

and finally accessed the logs with (actually I forgot about docker logs):

(master) $ kubectl describe pod calico-node-vbzsm -n kube-system

the error is:

Error response from daemon: linux runtime spec devices: lstat /dev/.lxc/proc/664/fdinfo/73: no such file or directory

(UPDATE: actually this is about a missing /var/lib/kube-proxy/config.conf file when running kube-proxy; waiting for an updated version)

I gave up here, probably will try again after some time.

the next thing I will try is docker swarm on lxd

Friday, March 02, 2018

try to play youtube video in terminal:

$ brew reinstall mpv --with-libcaca


$ mpv "" -vo caca

a youtube cli tool: mps-youtube


under ubuntu:

$ sudo apt install mps-youtube

Sunday, March 04, 2018

even though the kubernetes on lxd attempt failed, I still learned quite a lot of lxd stuff.

lxd 2.x is much better than the lxc I tried before. I don't need to use iptables to route traffic into the container, and I can easily mount folders from the host into it.

the best resource for lxd 2.x is LXD 2.0: Blog post series

I got few notes:

to list remote images, use images: (not just images):

$ lxc image list images:

to just list alpine:

$ lxc image list images:alpine

then start a container using the image:

$ lxc launch images:alpine/3.7 container_name

mount folder from host:

$ lxc config device add container_name my_name disk source=$(pwd) path=/opt/my_home

and I started to use alpine as a base; one great thing is I can browse which version of the software I want here: Alpine Linux packages

then start a container with the alpine version that contains it.

for example, use alpine/3.6 if I want nodejs 6.x, or alpine/3.7 if I need nodejs 8.x

installing software is similar to apt-get; alpine uses apk (use sh, alpine images don't have bash by default):

$ lxc exec container_name -- sh
~ apk update
~ apk add nodejs

easy pieces

now I need something like a Dockerfile for building images; I will try two:

if none of the above works, lxd is a linux container, so I can still use puppet to provision it. shouldn't be a problem.

since lxd and alpine are so lightweight, I don't even need virtualenv for projects (I know, same as docker, but I rarely run exec on docker containers ...)

borrowing the idea from virtualenv, I wonder if I could intercept commands and pass them to lxc exec

actually there's a way to do it, found it on unix stackexchange:

shopt -s extdebug

preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return  # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return  # don't cause a preexec for $PROMPT_COMMAND
    local this_command=`HISTTIMEFORMAT= history 1 | sed -e "s/^[ ]*[0-9]*[ ]*//"`
    echo $this_command

    # So that you don't get locked out accidentally
    if [ "shopt -u extdebug" == "$this_command" ]; then
        return 0
    fi

    lxc exec container_name -- ${this_command}

    return 1  # This prevents executing the original command
}
trap 'preexec_invoke_exec' DEBUG

after sourcing this script, every command will be passed to lxc exec container_name --

and use shopt -u extdebug to disable it

however, I soon found this is no better than lxc exec container_name -- sh

adding an alias for lxc exec container_name -- is simpler and safer (with the above script, pressing Ctrl-D will pass history 1 to lxc too)
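the alias version, for reference (the name "mybox" is made up; note the trailing space, which lets the shell alias-expand the word that follows too):

```shell
# "mybox" is an assumed container name; the trailing space is intentional
alias mybox='lxc exec mybox -- '
# usage would then be: mybox apk update
```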

some picks from my bookmarked tweets:

a faster pandas, underlying is Apache Arrow: Pandas on Ray, github link

I haven't really read about the CSP header yet; I think it's time to understand and implement it: Using HTTP Headers to Secure Your Site

a very popular command line library for golang: spf13/cobra: A Commander for modern Go CLI interactions

Peter Norvig added pdf version of Paradigms of Artificial Intelligence Programming on github

Go Microservices, Part 1: Building Microservices With Go

New Defaults in MySQL 8.0

not many good podcast episodes these days, this one is good:

defn episode 30 - Zach Tellman aka @ztellman

reading Professional Clojure again.

and really really start studying re-frame.

Monday, March 05, 2018

after watching this video: VisiData Lightning Demo at PyCascades 2018 - YouTube

I played with visidata for a while today, I put my notes to python/visidata page.

for large datasets, this clojure library is useful: jackrusher/spicerack: A Clojure wrapper for MapDB, which is a fast, disk-persistent data-structures library.

a couple of libraries to try for clojure debugging:

Saturday, March 10, 2018

super useful library for clojure: Clojure Tip of the Day - Episode 7: clj-refactor (web page version)

see all available refactorings

the two screencasts about cider debug are also good:

I also learned how to easily discard code: add #_ in front of a form.

an article about Reducers and Transducers

UNLINK is a new command introduced by redis 4.0

like DEL, but it reclaims memory asynchronously after the item is removed.

I read a few articles about ClickHouse — open source distributed column-oriented DBMS

I have to give it a try.

Gentle Introduction to Apache NiFi for Data Flow... and Some Clojure

tl;dr: Apache NiFi routes your data from any streaming source to any data store

I've switched my local development vms from vagrant/docker to lxd, now I need an internal dns server, or, a service discovery system ...

this is a good summary on different service discovery systems:

Choosing A Service Discovery System

dnsmasq should be enough, but I want to try serf

nice write up about spark 2.3: Introducing Apache Spark 2.3 - The Databricks Blog

looking forward to their in-depth blog posts

I tried using influxdb + grafana for some real-time analytics checks; I like them a lot.

at first I wanted to intercept the traffic before it was sent to google analytics, but due to HSTS it's not working.

I found chrome://net-export/, it logs all network activities to files, that I can filter and forward to influxdb

then I need to find a way to tail-follow newly created files in a folder ...

  • inotifywait combined with simple bash: not working, because file changes on the host don't trigger the vm's inotifywait
  • multitail: too difficult to work with
  • xtail: this one works!

actually a better way would be to use one of the log shippers, like influxdata's telegraf; I was just too lazy
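for reference, the telegraf route would be roughly this config (the file paths, database name, and data format are all assumptions, not something I actually ran):

```toml
# tail-follow files matching the glob; newly created files are picked up too
[[inputs.tail]]
  files = ["/var/log/netlog/*.json"]
  from_beginning = false
  data_format = "json"

# write the parsed points to a local influxdb
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "netlog"
```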

about log shippers, two new names for me:
