Favourite 'nixisms

Created: November 5, 2018

Introduction

A log to myself of things I like about Unix-type systems, in case I someday forget why.

Dotfiles

files in braille? are they for the blind?

Dotfiles in Linux are hidden files whose names begin with a period. Depending on the program, they are used to store configuration in your home directory.

It is common to have a set of dotfiles that you use regularly on different computer systems.

There are many ways and tools to manage your dotfiles. A good one is GNU Stow.
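What Stow actually does is manage symlinks from your home directory into a dotfiles repository. A minimal sketch of the effect (the `~/dotfiles/vim` layout is hypothetical, and plain `ln -s` stands in for `stow vim` so the example runs even where Stow isn't installed):

```shell
# Fake "home" and "dotfiles repo" in a temp dir so nothing real is touched.
demo=$(mktemp -d)
mkdir -p "$demo/dotfiles/vim"
echo "set number" > "$demo/dotfiles/vim/.vimrc"

# Equivalent of: cd "$demo/dotfiles" && stow --target="$demo" vim
ln -s "$demo/dotfiles/vim/.vimrc" "$demo/.vimrc"

cat "$demo/.vimrc"    # the "home" directory now sees the repo's config
```

Because the real file lives in the repo, editing the linked `.vimrc` edits the repo copy, which is exactly what makes the setup portable across machines.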

One can create a modular and secure repository with a set of conventions and rules enforced with ignore files and rc files.

Dotfile repositories are usually personalized; however, I forked mine from a repository that implemented the above.

Environment shims

a thin layer above your path to keep a level surface across versions.

Have you ever had to debug an issue with a different version of a tool than the one your environment provides? You probably wanted a way to do that while keeping your system stable.

Enter environment shims: small executables that sit at the front of your PATH, intercepting tool invocations and forwarding them to the right version.

A side effect of using shims is that package-manager installs are scoped to the project or to a global version, and don't need privileges that could interfere with the system version.

Maybe not a uniquely Unix solution (all you need is a CLI and a PATH); nonetheless, shims fit well in the ecosystem.
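A toy illustration of the idea, in the spirit of rbenv/pyenv-style shims. Everything here is hypothetical (the tool `mytool` and its `.mytool-version` pin file are invented for the sketch): the shim is just a script early in PATH that picks a version before forwarding.

```shell
# Build a shim directory with one fake shim in it.
shimdir=$(mktemp -d)
cat > "$shimdir/mytool" <<'EOF'
#!/bin/sh
# Hypothetical version selection: a project-local .mytool-version file
# wins over the global default, mirroring how real shim managers work.
version=$(cat .mytool-version 2>/dev/null || echo "global")
echo "mytool $version: $*"
EOF
chmod +x "$shimdir/mytool"

# A "project" directory that pins its own version.
cd "$(mktemp -d)"
echo "2.0" > .mytool-version

# With the shim dir at the front of PATH, the shim intercepts the call.
PATH="$shimdir:$PATH" mytool build
```

The same invocation from a directory without a pin file would report the global version, which is the whole trick: one name on PATH, many versions behind it.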

Linux namespaces / containers

container of system resources isolated from the pirates.

Namespaces were an addition to the Linux kernel that enables partitioning of kernel resources. They isolate resources at a system level (processes, network, users, etc.), unlike chroot, which isolates only at the file-hierarchy level.

Think of them as a virtual machine without the virtual hardware: more efficient, though without the extra isolation of a hypervisor.

Namespaces can be used via the corresponding system commands, or from C code via the kernel's C API.

The first namespace, mount, allows a process to create new mount points that don't affect the rest of the system.
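You can poke at this from the shell on a Linux box. Namespaces themselves show up as files under /proc, and `unshare` (from util-linux) enters a new one; the unprivileged flags below assume a reasonably recent kernel that permits user namespaces, so the command is guarded:

```shell
# Each process's namespaces are symlinks under /proc:
readlink /proc/self/ns/mnt    # prints something like mnt:[4026531840]

# Enter a fresh mount namespace and mount a tmpfs that only that
# process can see; the host's mount table is unaffected.
unshare --user --map-root-user --mount sh -c \
  'mount -t tmpfs none /tmp && grep " /tmp " /proc/mounts' \
  || echo "unprivileged namespaces not available here"
```

Outside the `unshare`, the tmpfs mount never appears, which is the isolation the section describes.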

Another important namespace is user, which allows isolation of user privileges. It's because of these namespaces that container software like Docker and LXC was able to form.

There were some neat LXC sessions at the recent Open Source Summit 2018 in Vancouver; I recommend the conference if you get the chance.

For me, containers have been helpful in cross-compiler and cross-platform testing. They have also opened a lot of doors in Platform as a Service, such as Dokku, which is a few lines of Bash around Docker containers (good for homebrew types that want to keep usage down).

The Linux tech has even spawned a new OS, RancherOS, where the entire host system runs in Docker containers; even system services run as containers. Having tried it, it makes me wonder what more can be done with the approach.

Vim

Oh vim… you're like a complicated lover: hard to get, and with a few accessories - so worth it.

neovim is my fork of choice.

The editing modes and keystroke navigation in Vim mean you never have to use a mouse again. Once you get the hang of it, your wrists are happier, especially if you combine it with an ergonomic keyboard layout like Dvorak or Colemak.

Try the interactive vim tutorial for a better example of Vim's capabilities.

Plugins and/or syntax highlighting are usually needed for anything serious; some are included in my dotfiles.

Shells / Unix philosophy / multiplexing

Who doesn’t love piping a solution to your problems?

Being productive on the command line comes down to a few things: multitasking, piping/redirection, and multiplexing.

Multitasking involves foreground and background jobs in the shell:

  1. suspend the current foreground process with CTRL-Z
  2. do some other stuff, like restarting services or checking mail
  3. return to the suspended process with the fg command
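The same workflow can be sketched non-interactively (CTRL-Z and fg only make sense at an interactive prompt, so `&` and `kill`/`wait` stand in for them here):

```shell
# 1. run a long job in the background; interactively you'd hit
#    CTRL-Z on a foreground job and then type 'bg' instead
sleep 30 &
pid=$!

# 2. do other work; 'jobs' lists what's running or suspended
jobs

# 3. interactively you'd type 'fg' to resume it; here we just clean up
kill "$pid"
wait "$pid" 2>/dev/null || true
echo "shell is free again"
```

The job table (`jobs`, `%1`, `fg`, `bg`) is per-shell state, which is why a suspended editor survives while you go restart a service.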

Shell piping and redirection is the glue that makes the Unix Philosophy of concise modular programs work. Piping basically means taking the output of one program and using it as the input for another program. You can solve a wide variety of problems this way.

Writing your own shell is good fun and practice. I usually rate software as more sophisticated if it includes a CLI toolset.

To get a better idea of Unix philosophy in action, check out CommandLine Fu. Commands like cut, cat, more, uniq, sed, parallel, xargs, wc, find, sort, rev, tr, echo, curl, jq and others are all modular programs that work together.
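A tiny example of those modular programs composing. Each stage does one job, and the pipe glues them into a "most frequent line" counter:

```shell
printf 'apple\nbanana\napple\ncherry\n' |
  sort |      # group duplicate lines together
  uniq -c |   # collapse each run into a count + line
  sort -rn |  # numerically, most frequent first
  head -n 1   # keep the winner
```

Swap `printf` for `cat access.log | awk '{print $1}'` and the same pipeline becomes a top-talker report; that reusability is the philosophy in action.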

Another common thing to do with shells is multiplex them (usually with tmux). This is sort of like piping the entire shell into different views.

It can be handy to quickly split screen with another shell, zoom a split pane to full, switch between virtual views, or detach/reattach sessions with persistent state (handy over SSH). Like vim, this is all done with keystrokes, saving the mouse effort.
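For reference, the stock tmux key bindings cover everything just described; a sketch of a ~/.tmux.conf (the bindings in comments are tmux defaults, listed as a reminder rather than a change):

```
# prefix %   split into left/right panes
# prefix "   split into top/bottom panes
# prefix z   zoom the current pane to full screen (and back)
# prefix d   detach; `tmux attach` later resumes the same session
set -g prefix C-b        # the default prefix key, stated explicitly
setw -g mode-keys vi     # vi-style movement in copy mode, to match vim
```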

Some people prefer byobu as a layer over tmux or screen. Either way, if you're new to multiplexing it will take some getting used to.

Everything is a file and interprocess communication

hippie: “how does the system communicate to us?” nerd: “with files”

Your entire system is available to you as a file, meaning the same C API and kernel calls that open, read, and write files can also be used for sockets, processes, devices, etc.

The proc filesystem exposes kernel information to user land as files. Information like cpuinfo, memory, battery, and more is all available to you as a file.
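On a Linux system you can see this directly; kernel state reads like any other file:

```shell
grep -m1 'model name' /proc/cpuinfo   # CPU model (field name varies by arch)
grep 'MemTotal' /proc/meminfo         # total RAM, in kB
cat /proc/loadavg                     # load averages and last PID
```

No special API, no daemon to query: `grep` and `cat` work on the kernel's view of the machine just as they do on text files.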

FIFO files (named pipes) are basically pipes as files and are used as a communication channel (IPC).

One use case for FIFOs is testing client/server applications that communicate over a socket (another file): you pass your programs a FIFO instead and simulate the server output and the client input.
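A minimal sketch of that idea, with `echo` playing the fake server and `read` the client (the "220 …" banner is an invented stand-in for a server response):

```shell
fifo=$(mktemp -u)      # a fresh path; mkfifo creates the special file there
mkfifo "$fifo"

# "Server": opening a FIFO for writing blocks until a reader appears,
# so run it in the background.
echo "220 fake-server ready" > "$fifo" &

# "Client": reads the simulated response exactly as it would from a socket.
read -r reply < "$fifo"
echo "client saw: $reply"

rm "$fifo"
```

To the client under test, nothing changed: it still opens a path and reads a file descriptor, which is the everything-is-a-file point.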

Shared memory works similarly: shm_open also returns a file descriptor, which mmap can map to a block of memory instead of a file, and which can likewise be used for IPC.

Open source / C / Security

The source of life must be open as nothing is separate from it.

Basically, my life as I know it would not exist without open source, C, and Linux. As a young high-school grad with nothing to live for, hacking code and having people use it was immensely more exciting than real-world life. Too much for my poor parents, who did not understand, and it turns out neither did I.

The past aside, people nowadays will complain that C is dangerous, especially since the Heartbleed bug and the recent open source bugs found with fuzz-testing tools. But the reality is, if you deeply understand buffer overflows and pointers, and you are a proactive tester, C is quite manageable.

But humans are what they are, imperfect as the software they create. I believe C is like the matrix in that it has just enough flaws for humans to accept it as a challenge and not too many that we’ll reject it.

Downsides?

  • Distribution fragmentation, but I think this mostly comes down to packaging differences, which Flatpak and Snap packages seem to have largely solved.

  • User-experience fragmentation, because a user's system and workflow can be so heavily customized. Which is a good segue to the next one.

  • Complex configuration can be annoying; there are times when you need to do something pronto and some time-consuming configuration task is blocking you. Thankfully, containers solve most of this for enterprise needs.

  • The file system can be messy or unorganized, but it's not too bad. GoboLinux has redefined the standard file-system hierarchy into something that makes more sense to me.

  • Nix is mostly only friends with technical people. This is a feature though, not a downside.
