Thursday, March 01, 2018

I tried kubernetes on lxd, but failed:

  • host os: ubuntu 16.04
  • all commands run as root

1. upgrade lxd

(host) $ apt update && apt install -t xenial-backports lxd lxd-client

2. disable swap

(host) $ swapoff -a
  • should also remove the swap line in /etc/fstab
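The fstab edit can be scripted; this sketch works on a throwaway copy so it's safe to try anywhere (set FSTAB=/etc/fstab on the real host):

```shell
# demo on a temp copy; point FSTAB at /etc/fstab on the real host
FSTAB=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' > "$FSTAB"

# comment out every line whose filesystem type is swap
sed -i '/\bswap\b/ s/^[^#]/#&/' "$FSTAB"
cat "$FSTAB"
```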

3. init lxd

(host) $ lxd init
  • use dir as the storage backend (overlay won't work on top of zfs, and btrfs is troublesome; those are for serious use)
  • none for ipv6

4. create profile

(host) $ lxc profile copy default kubernetes
(host) $ lxc profile edit kubernetes

something like this:

config:
  linux.kernel_modules: bridge,br_netfilter,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |-
    lxc.aa_profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop =
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: kubernetes
used_by: []
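The same profile can also be built non-interactively; this is a sketch using `lxc profile set` with the keys from the YAML above (the pool name `lxd` comes from `lxd init`):

```shell
# after `lxc profile copy default kubernetes`, set each key from the YAML
lxc profile set kubernetes security.privileged true
lxc profile set kubernetes security.nesting true
lxc profile set kubernetes linux.kernel_modules bridge,br_netfilter,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
# raw.lxc takes the whole multi-line value at once
lxc profile set kubernetes raw.lxc 'lxc.aa_profile = unconfined
lxc.cgroup.devices.allow = a
lxc.mount.auto=proc:rw sys:rw
lxc.cap.drop ='
```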

5. launch and setup master

(host) $ lxc launch ubuntu:16.04 master -p kubernetes
(host) $ lxc exec master -- bash
(master) $ apt update && apt install -y docker.io linux-image-$(uname -r)
(master) $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
(master) $ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
(master) $ apt update && apt install -y kubelet kubeadm kubectl
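Not in my original steps, but worth considering before publishing the image: hold the kube packages so a later apt upgrade inside the container doesn't move them (a sketch):

```shell
# pin the cluster components; `apt-mark unhold` reverses this
apt-mark hold kubelet kubeadm kubectl
# verify the holds
apt-mark showhold
```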

6. save image

(host) $ lxc stop master
(host) $ lxc publish master --alias kubernetes

7. kubeadm init

(master) $ kubeadm init --apiserver-advertise-address=10.243.36.206 --pod-network-cidr=10.243.36.0/24

got [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

I ignored it: even though I included br_netfilter in the profile, the module doesn't seem to be network-namespace aware, so there's nothing I can do here.

(master) $ kubeadm init --apiserver-advertise-address=10.243.36.206 --pod-network-cidr=10.243.36.0/24 --ignore-preflight-errors=all

if it succeeds, it shows something like this:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token ...

8. install pod network add-on

since most of the available providers require /proc/sys/net/bridge/bridge-nf-call-iptables,

I picked calico here:

(master) $ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

output is like:

configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

9. To be continued

(master) $ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                     READY     STATUS              RESTARTS   AGE       IP              NODE
kube-system   calico-etcd-7cn7c                        1/1       Running             0          9m        10.243.36.206   master
kube-system   calico-kube-controllers-f86cf776-88cw5   1/1       Running             6          9m        10.243.36.206   master
kube-system   calico-node-vbzsm                        1/2       CrashLoopBackOff    6          9m        10.243.36.206   master
kube-system   etcd-master                              1/1       Running             0          7m        10.243.36.206   master
kube-system   kube-apiserver-master                    1/1       Running             1          7m        10.243.36.206   master
kube-system   kube-controller-manager-master           1/1       Running             0          7m        10.243.36.206   master
kube-system   kube-dns-6f4fd4bdf-j65xk                 0/3       ContainerCreating   0          47m       <none>          master
kube-system   kube-proxy-cm2gc                         0/1       CrashLoopBackOff    14         47m       10.243.36.206   master
kube-system   kube-scheduler-master                    1/1       Running             0          7m        10.243.36.206   master

kube-dns will not start until calico is ready.

but something is still wrong there

because the master node is running with the flag --enable-debugging-handlers=false, kubectl logs doesn't work on those pods.

tried removing the taints on the master:

(master) $ kubectl taint nodes --all node-role.kubernetes.io/master-

and finally got access to the logs with (actually I had forgotten about docker logs):

(master) $ kubectl describe pod calico-node-vbzsm -n kube-system

the error is:

Error response from daemon: linux runtime spec devices: lstat /dev/.lxc/proc/664/fdinfo/73: no such file or directory

(UPDATE: this is actually about the missing /var/lib/kube-proxy/config.conf file when running kube-proxy; waiting for an updated version)

I gave up here, probably will try again after some time.

the next thing I will try is docker swarm on lxd

Friday, March 02, 2018

try to play youtube video in terminal:

$ brew reinstall mpv --with-libcaca

play:

$ mpv "http://www.youtube.com/watch?v=xxx" -vo caca

a youtube cli tool:

mps-youtube

under ubuntu:

$ sudo apt install mps-youtube

Sunday, March 04, 2018

even though the kubernetes on lxd attempt failed, I still learned quite a lot of lxd stuff.

lxd 2.x is much better than the lxc I tried before. I don't need iptables rules to route traffic into the container, and I can easily mount a folder from the host into it.

the best resource for lxd 2.x is LXD 2.0: Blog post series

I got few notes:

to list remote images, use the images: remote (with the trailing colon, not just images):

$ lxc image list images:

to just list alpine:

$ lxc image list images:alpine

then start a container using the image:

$ lxc launch images:alpine/3.7 container_name

mount folder from host:

$ lxc config device add container_name my_name disk source=$(pwd) path=/opt/my_home
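The device added this way can be inspected and detached again; a sketch with the same placeholder names:

```shell
# list devices attached to the container
lxc config device list container_name
# show their full configuration
lxc config device show container_name
# detach the host folder again (my_name is the device name used above)
lxc config device remove container_name my_name
```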


and I started to use alpine as a base. one great thing is I can browse for the software version I want here: Alpine Linux packages

then start a container with alpine version that contains it.

for example, use alpine/3.6 if I want nodejs 6.x, alpine/3.7 if I need nodejs 8.x

installing software is similar to apt-get; alpine uses apk (use sh, alpine images don't have bash by default):

$ lxc exec container_name -- sh
~ apk update
~ apk add nodejs

easy pieces


now I need something like Dockerfile for building images, I will try two:

if none of the above works, lxd is a linux container, so I can still use puppet to provision it. shouldn't be a problem.


since lxd and alpine are so lightweight, I don't even need virtualenv for projects (I know, same as docker, but I rarely run exec on docker containers ...)

borrowing the idea from virtualenv, I wonder if I could intercept commands and pass them to lxc exec

actually there's a way to do it, found it on unix stackexchange:

shopt -s extdebug

preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return  # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return  # don't cause a preexec for $PROMPT_COMMAND
    local this_command
    this_command=$(HISTTIMEFORMAT= history 1 | sed -e 's/^[ ]*[0-9]*[ ]*//')
    echo "$this_command"

    # so that you don't get locked out accidentally
    if [ "shopt -u extdebug" = "$this_command" ]; then
        return 0
    fi

    # word splitting is intentional here: the command plus its arguments
    lxc exec container_name -- ${this_command}

    return 1  # with extdebug, a non-zero return prevents the original command from running
}
trap 'preexec_invoke_exec' DEBUG

after sourcing this script, every command will be passed to lxc exec container_name --

and use shopt -u extdebug to disable it

however, I soon found this is no better than lxc exec container_name -- sh

adding an alias for lxc exec container_name -- is simpler and safer (with the above script, pressing Ctrl-D will pass history 1 to lxc too)
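A sketch of that wrapper; I'd use a shell function rather than an alias so it also works inside scripts (container_name is a placeholder):

```shell
# run any single command inside the container, e.g. `c apk add nodejs` or `c sh`
c() {
    lxc exec container_name -- "$@"
}
```

Unlike the DEBUG-trap script, this only intercepts what you explicitly prefix with c.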


some picks from my bookmarked tweets:

a faster pandas, with Apache Arrow underneath: Pandas on Ray, github link

I didn't really read about CSP header yet, I think it's time to understand and implement it: Using HTTP Headers to Secure Your Site

a very popular command line library for golang: spf13/cobra: A Commander for modern Go CLI interactions

Peter Norvig added pdf version of Paradigms of Artificial Intelligence Programming on github

Go Microservices, Part 1: Building Microservices With Go

New Defaults in MySQL 8.0


not many good podcast episodes these days, this one is good:

defn episode 30 - Zach Tellman aka @ztellman


reading Professional Clojure again.

and really really start studying re-frame.

Monday, March 05, 2018

after watching this video: VisiData Lightning Demo at PyCascades 2018 - YouTube

I played with visidata for a while today; I put my notes on the python/visidata page.


for large datasets, this clojure library is useful: jackrusher/spicerack: A Clojure wrapper for MapDB, which is a fast, disk-persistent data-structures library.

a couple of libraries to try for clojure debugging:

Saturday, March 10, 2018

super useful library for clojure: Clojure Tip of the Day - Episode 7: clj-refactor (web page version)

see all available refactorings

the two screencasts about cider debug are also good:

I also learned how to easily discard code: add #_ in front of the form.

an article about Reducers and Transducers


UNLINK is a new command introduced by redis 4.0

like DEL, but it reclaims memory asynchronously in a background thread after the item is removed.
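A quick redis-cli sketch of the difference (needs a local redis >= 4.0 running; both commands delete the key, UNLINK just returns immediately):

```shell
redis-cli SET bigkey somevalue
redis-cli DEL bigkey       # frees the memory synchronously
redis-cli SET bigkey somevalue
redis-cli UNLINK bigkey    # returns immediately; memory reclaimed in a background thread
```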


I read a few articles about ClickHouse — open source distributed column-oriented DBMS

I have to give it a try.


Gentle Introduction to Apache NiFi for Data Flow... and Some Clojure

tl;dr: Apache NiFi routes your data from any streaming source to any data store


I've switched my local development vms from vagrant/docker to lxd; now I need an internal dns server, or a service discovery system ...

this is a good summary on different service discovery systems:

Choosing A Service Discovery System

dnsmasq should be enough, but I want to try serf


nice write up about spark 2.3: Introducing Apache Spark 2.3 - The Databricks Blog

looking forward to their in-depth blog posts


I tried using influxdb + grafana for some real-time analytics checks, and I like them a lot.

at first I wanted to intercept the traffic before it's sent to google analytics, but due to HSTS that doesn't work.

I found chrome://net-export/, which logs all network activity to files that I can then filter and forward to influxdb

then I needed to find a way to tail-follow newly created files in a folder ...

  • inotifywait combined with a simple bash script: not working, because file changes on the host don't trigger the vm's inotifywait
  • multitail: too difficult to work with
  • xtail: this one works!
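xtail usage is simple; it takes files or directories as arguments (the path here is just an example):

```shell
# watch a directory: existing files are followed, and files
# created later are picked up automatically
xtail ~/Downloads/net-export/
```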

actually a better way would be to use one of the log shippers, like InfluxData's telegraf; I was just too lazy

about log shipper, two new names for me:

Tuesday, March 27, 2018

I've been coding a little bit of clojure again; this time I tried using clj-refactor, which is great. however, even with the cider debugger, two kinds of forms are still hard to debug:

  • let form
  • thread first/last form

eval with context helps a little, but overall it's still not smooth for me.


I also started using spec, which is a great tool to guard inputs; combined with test.check, it's good for generating tests

more resources about spec:

a recent article using spec (with kafka and avro): Production data never lies

more statically typed, and more towards Avoid Else, Return Early

more discussion at hackernews and proggit


I've tried to make emacs work in the terminal many times, and always failed.

I found that using a separate init.el is essential for the task.

I started using emacs -nw -q --load terminal_init.el: -nw means no window, -q skips loading init.el, and --load then loads another init.el for the terminal

you can still use (package-initialize) to make all packages available in the terminal version; however, I had to switch most of my shortcuts to the C-c prefix.
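The invocation above, wrapped in an alias for the shell rc file (the name emt and the path to terminal_init.el are just my choices):

```shell
# emacs, terminal-only, with a dedicated init file
alias emt='emacs -nw -q --load ~/terminal_init.el'
```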


I'm happy with my new clojure experience, but my pinky started having problems again.

I have to switch back to vim for a while, and discovered something new:

recent files

:bro(wse) ol(dfiles)

use set viminfo='30 to set size

diff split buffers

run :diffthis in both buffers

  • ]c: next diff
  • [c: previous diff
  • diffget: accept the change from the other buffer
  • diffput: push the change to the other buffer

fold

to enable, add to .vimrc:

set foldmethod=indent
set nofoldenable
  • zn: fold none, open all
  • zM: fold more, close all
  • zj/zk: next/previous fold
  • zA: toggle recursively
  • zo, zO: open fold (recursively)
  • zc, zC: close fold (recursively)


tired of using cd to navigate, returned to ranger:

prepare configs

$ ranger --copy-config=all

create tab

  • ctrl n: new tab
  • alt (num): switch tab
  • tab: navigate
  • q: close tab

commands

  • r: open with
  • yy: copy file
  • pp: paste file
  • dd: delete file
  • move file: dd then pp
  • space: mark file
  • mark file in different directories: ya, da
  • t: tag file
  • :flat 1: flat nested directory, :flat to reset
  • :rename newfilename
  • mark patterns: :mark, unmark (regex)

folder navigation

  • [, ]: move up/down parent dirs
  • m{char}: bookmark folder
  • um{char}: remove bookmark
  • `{char}: go to bookmark
  • `` : quick switch between previous/current folder

sort

  • ot: sort by type first
  • os: sort by size first
  • oa / oc / om: sort by access / change / modification time
  • on: sort natural
  • o{T/S/A/C/M}: sort in reverse order from the above modes
  • or: reverse the sort order


this talk changed me: Joe Armstrong - Keynote: The Forgotten Ideas in Computer Science - Code BEAM SF 2018

all these years I've read tech news/articles every day; I know most of the trendy stuff, but I didn't really know it, and nothing has been built.

one thing I strongly agree with is that reading a book is different: there are no links you can click in a book to distract you.

I began to read less news and try to spend my time to write code or read a book.

currently I'm reading Cassandra: The Definitive Guide

it has good introductory chapters on distributed database systems, which alone are worth it.

Wednesday, March 28, 2018

there's more on debugging clojure:

these solved my frustration with debugging let and -> forms; especially debux, a great tool.

Friday, March 30, 2018

I have a vm to run lxd, and I'm using tmux; quite often I have to ssh into the vm again after switching to a new tmux pane.

I finally decided to use screen inside the vm, it's simple, check SCREEN Quick Reference


the old reddit 1.0 source code released on github, written in common lisp

I tried installing it, but haven't succeeded yet.

I used sbcl, and installed quicklisp first

looks like it was using Scieneer Common Lisp, according to the (sys:make-fd-stream (ext:connect-to-inet-socket ... line in memcached.lisp (I changed it to sbcl's socket)

and the mp: package is also not found (multi-processing); I added bordeaux-threads, replaced mp: with bt:, and changed function names according to this: bordeaux-threads/impl-cmucl.lisp

then I got stuck on the http server, Hunchentoot: I don't know how to start the server ...

I need to read more, but I think I can solve it someday.

I found it to be a very good resource for learning common lisp


reddit was rewritten in python, by Aaron Swartz: Rewriting Reddit (Aaron Swartz's Raw Thought)

more history:

I think the problem is there are way too many implementations of common lisp; each one has its own way/library to do sockets and multi-processing (like the problems I faced above)

once you switch to another implementation, the code breaks. and not all implementations are available on windows/linux/os x ...

and don't forget The Lisp Curse


looks like I missed these about Rich Hickey:

