EPRDCTN in the command line: Package Managers

Most operating systems have package managers that automate installing, upgrading, configuring and removing the software on your computer.

Package managers are designed to eliminate the need for manual installs and updates. This is particularly useful for Linux and other Unix-like systems, which typically consist of hundreds or even tens of thousands of distinct software packages.

We’ll look at Homebrew and apt-get, their requirements and ecosystems, along with some basic commands to get you started.

Homebrew and Cask

Homebrew gives you access to a large ecosystem of Unix software on your Mac. It is a Ruby application, which is one of the reasons we installed the Xcode command line tools; they include Ruby.

To install Homebrew, paste the following command into your terminal. It will download, install and configure Homebrew.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
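
Once the installer finishes you can run a quick sanity check. This isn’t part of the install instructions, just a habit that catches most setup problems early:

# Confirm Homebrew is on your PATH and report its version
brew --version

# Check the installation for common problems
brew doctor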

Now that we have installed Homebrew we’ll use it to install, upgrade and remove (uninstall) a package. Even though I’ve chosen to work with a single package, the same commands apply when you install multiple packages at once.

This and the following sections use the akamai package as the example.

Installing the package

Installing packages is simple. The command is:

brew install akamai

The command will also install any dependencies needed for the package to run. Akamai has no dependencies.

Screenshot showing the install process for a Homebrew package

Updating/Upgrading the package

We should periodically upgrade our packages to make sure we’re running the latest versions and pick up any fixes and improvements. The upgrade process gives us two options. The first is to upgrade an individual package, as in the example below:

brew upgrade akamai

If the package you’re upgrading individually is already up to date, Homebrew will present you with this ‘error’ message. It’s not an error at all, just Homebrew’s way of telling you it’s not needed.

Homebrew message when the package is already up to date

The other option is to upgrade all installed packages at the same time by just using

brew upgrade

Homebrew upgrade process for all packages

Uninstalling the package

When we’re done, we can uninstall the package to free up space on the hard drive (always a concern for me). The command is

brew uninstall akamai

Cleaning up after your installed packages

OK, I’ll admit it… I have packages in my Homebrew installation that I haven’t used in ages but, sooner or later, my hard drive will complain and force me to clean up old stuff. With Homebrew this is simple; the command is:

brew cleanup

This will go through all installed packages and remove old versions. It will also report when it skips a package because the latest version is not installed, and how much hard drive space it gave you back.

Homebrew cleanup showing a listing of removed packages and how much disk space was saved in the process.

There are more commands to use when troubleshooting and building recipes for Homebrew but the ones we’ve covered are the basic ones you’ll use most often.
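
For reference, here are a few of those additional commands; they’re not required for day-to-day use, but they’re handy to know about:

# List everything you have installed through Homebrew
brew list

# Show which installed packages have newer versions available
brew outdated

# Search for a package by name
brew search wget

# Show details for a package: version, dependencies and options
brew info wget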

Cask: Like Homebrew but for applications

I don’t particularly care for the way you have to install some software on MacOS. You download the file, open it, drag the application to the Applications folder (usually aliased in the folder created by the installer) and only then can you actually use the program.

The creators of Homebrew put out Cask, a command line installer for desktop applications. Installation is simple; paste the following command into your terminal:

brew tap caskroom/cask

Then you can use Cask to install software on your system. For example, to install Java, run the following command.

brew cask install java

Cask will accept EULAs and other legal agreements for you. If these types of agreements are important to you, don’t use Cask; install the software the old-fashioned way instead.

Apt-get and apt-cache for Windows (WSL) and Linux

Linux is built around the concept of packages. Everything in a Linux distribution, from the kernel (the core of the operating system) to every application, is built as a package. Ubuntu uses APT as the package manager for the distribution.

There are two commands under the APT umbrella: apt-get and apt-cache. apt-get is for installing, upgrading and cleaning up packages, while apt-cache is used for finding new packages. We’ll look at the basic uses of both commands in the next sections.

In the following sections, ack is the name of the package we’ll be working with, not part of the commands.

Update package database with apt-get

apt-get works from a database of available packages. You must update this database before upgrading the system; otherwise apt-get won’t know whether newer packages are available. This is the first command you should run on any Linux system after a fresh install.

Updating the package database requires superuser privileges so you’ll need to use sudo.

sudo apt-get update

Upgrade installed packages with apt-get

Once you have updated the package database, you can upgrade the installed packages. Running apt-get upgrade will upgrade every package on the system for which an update is available. There is a way to work with individual packages that we’ll discuss when installing new packages.

# Upgrades all packages for which an update is available
sudo apt-get upgrade

You can also combine the update and upgrade commands into a single command line that looks like this:

# Combines both update and upgrade command
sudo apt-get update && sudo apt-get upgrade -y

The logical AND (&&) runs the upgrade only if the update succeeds, and the -y flag automatically answers yes to the upgrade prompts. The end result is the same as running the commands individually.

Installing individual packages

After you upgrade your system there is not much need to upgrade everything again right away. However, you may want to install new packages or upgrade individual packages. The install command will do either.

sudo apt-get install ack

If the package is not installed, the command will install it, along with its dependencies and make the command available to you.

If you’ve already installed the package, either during an upgrade or a manual install, the command will compare the installed version with the one you want to install. If the existing version is the same or newer, the installer will skip it and exit; if the version being installed is newer, the installer will perform the upgrade.

Uninstalling individual packages

There are a few times when a package breaks stuff somewhere else or you no longer need the functionality the package provides. In this case, you can do two things.

You can use the remove command to remove only the binaries, the applications themselves, and leave configuration and other auxiliary files in place. This makes it easier to keep your configuration without having to recreate it if you reinstall the package later.

# ONLY REMOVES BINARIES
sudo apt-get remove ack

The next, and more extreme, option is to use the purge command. This gets rid of every part of the package, including the configuration files that remove leaves behind. Use it sparingly, if at all.

# REMOVES EVERYTHING, INCLUDING CONFIGURATION FILES
sudo apt-get purge ack

Cleaning up after yourself

Just like Homebrew, apt-get holds on to the package files it downloads. Sooner or later your system will complain about being low on disk space and will require you to clean things up.

The first option is to run the clean command. This will clean your local system of all downloaded package files.

sudo apt-get clean

The second, less extreme, option is the autoclean command. It only removes downloaded package files that now have a newer version and will no longer be used.

sudo apt-get autoclean

apt-cache search to find packages

There are times when you’re looking for something but are not sure exactly what. This is where the apt-cache search command comes in: enter a search term and it will find all related packages.

apt-cache search <search term>

If you know the package name, or at least how it starts, you can use the apt-cache pkgnames command, which returns the package names that match. The number of results will be smaller than what search returns.

apt-cache pkgnames <search_term>
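
As a concrete example, here’s what those lookups might look like using the ack package from earlier (package lists vary by release, so your results may differ):

# Find any package whose name or description mentions ack
apt-cache search ack

# List only the package names that start with ack
apt-cache pkgnames ack

# Show the full details for a specific package
apt-cache show ack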

EPRDCTN in the command line: Introduction

This series of posts intends to walk you through some basic concepts, activities and shell commands to take the fear and pain away from working with a text-based interface that links directly to the Operating System.

In more formal terms:

A command-line interface is a means of interacting with a computer program where the user (or client) issues commands to the program in the form of successive lines of text (command lines). A program which handles the interface is called a command language interpreter or shell.

Wikipedia

So what does it mean?

It’s like an old style terminal where you enter commands that make the computer do something.

All Operating Systems have a CLI. Yes, even Windows and MacOS.

Screenshot of a Bash shell in the GNOME window manager for Linux
Screenshot of Windows PowerShell as it works in Windows Vista

This is important because sooner or later you will find tools that will only work from a command line interface. We’ll explore some of these tools (Node, Daisy Ace) in later sections but it’s important to make this clear.

What command line tools will we use?

On Windows, the better tools are PowerShell, a souped-up terminal shell with additional scripting capabilities, and the Windows Subsystem for Linux (WSL), a way to run native Linux applications from Windows. WSL uses an Ubuntu Linux image: not a version of Linux modified to run on Windows, but a full version of Ubuntu Linux that works alongside Windows.

As far as terminals are concerned, we’ll use iTerm2 on the Mac and a standard Bash shell for WSL.

Before we get started

Before we jump into further installations and customizations we need to do a few things that are dependent on the Operating System we’re using.

Mac Users: Install XCode Command Line Tools

Before we install Homebrew we need to install Xcode command line tools. These are part of the full Xcode download but I’d rather save you the 5GB+ download so we’ll go the slim (but with more steps) route instead.

  1. Go to the Apple Developer’s site
  2. Click on the account link on the right side of the top navigation bar. You can use the same account that you use for iTunes or any other Apple property.
    • If prompted verify your account. This mostly happens when logging in from a new location or with a new computer
  3. Click on Download Tools
  4. Scroll down the screen and click on See more downloads
  5. On the search box (to the left of the list of items to download) enter Command Line Tools. This will reduce the number of entries
  6. Download the version that matches your MacOS version
  7. Install the package.

The version I downloaded was 173MB. I’m OK with the extra work 🙂

Command Line Tools for Xcode download screen
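
Once the package is installed you can confirm the tools are in place from the terminal. This is my own quick check, not part of Apple’s instructions:

# Prints the path to the active developer directory when the tools are installed
xcode-select -p

# Any of the bundled tools should now respond; Git is a good test
git --version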

Windows Users: Make sure WSL and the Ubuntu Image are installed

Before we move forward with WSL and Linux on Windows we need to make sure we have the right version of WSL installed and that we downloaded Ubuntu from the Microsoft Store.

These instructions assume you’re using the latest version of Windows 10.

  1. Install the latest version of PowerShell
    • Download the MSI package from the PowerShell GitHub releases page. The MSI file looks like this – PowerShell-6.0.0.<buildversion>.<os-arch>.msi
    • Once downloaded, double-click the installer and follow the prompts. There is a shortcut placed in the Start Menu upon installation.
    • By default the package is installed to $env:ProgramFiles\PowerShell\
    • You can launch PowerShell via the Start Menu or $env:ProgramFiles\PowerShell\pwsh.exe
  2. Install WSL from PowerShell as Administrator
    • Type powershell in the Cortana search box
    • Right click on Windows PowerShell on the results and select Run as administrator
    • The UAC prompt will ask you for your consent. Click Yes, and the elevated PowerShell prompt will open
  3. In the PowerShell window you just opened type: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
    • Reboot the system when prompted
  4. Install your Linux distribution (Ubuntu) from the Microsoft Store
  5. Select “Get”
  6. Once the download has completed, select “Launch”.
    • This will open a console window. Wait for the installation to complete; you will then be prompted to create your Linux user account
  7. Create your Linux username and password. This account has no relationship to your Windows username and password, so they can be different.
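
After creating your Linux account, a quick, optional way to confirm the Ubuntu image is working is to run a couple of harmless commands in the new console:

# Show which Ubuntu release the WSL image is running
lsb_release -a

# Confirm apt works; we cover it in detail in the package managers post
sudo apt-get update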

Installing a terminal on the Mac

Even though there is a terminal bundled with MacOS (hidden inside Applications -> Utilities), I like iTerm 2 as a more feature-complete replacement. You can download it from the iTerm 2 download site; please make sure you download the stable release.

Templating the web

There are times when writing the same thing over and over again gets to be really tedious. Think of populating large bulleted lists, select statements and other repetitive elements.

We’ve been able to do this for a while using libraries… We’ll look at Handlebars.js and then at the template tag built into the HTML specification.

We’ll also look at why we might want to keep working with libraries to support older browsers and browsers with incomplete support.

The current way: Template Libraries

When I first looked at Handlebars it appeared to be too much work for the result I was looking for. As I started working with WordPress, particularly when they released their REST API, I realized that templating would be key to building custom interfaces.

The most basic example shows how to use Handlebars to populate a template with information stored in the same script. The HTML contains both the placeholder element where we’ll store the content and the script containing the actual template.

<div id="entries"></div>

<script id="entry-template" type="text/x-handlebars-template">
  <div class="entry">
    <h1>{{title}}</h1>
    <div class="body">
      {{body}}
    </div>
  </div>
</script>

The Javascript captures the content of the script element in the source variable and stores the compiled template in renderEntry.

We then define the content we want to insert. In this case, it’s a variable holding an object with the data we want to populate the template with.

Finally, we use renderEntry to render the template, populated with the data, into HTML and insert it into our placeholder element.

let source = document.getElementById('entry-template').innerHTML;
let renderEntry = Handlebars.compile(source);

let blogEntry = {
  title: 'My New Post',
  body: 'This is my first post!'
};

document.getElementById('entries').innerHTML = renderEntry(blogEntry);

The next iteration pulls the data from an array of values stored locally. For this, we use the #each helper to loop through the values in the array and populate the template with each one.

The HTML is almost identical except that we wrap the content we’ll iterate over with an #each helper ({{#each cats}} and {{/each}}). cats is a reference to the data we will pass in from Javascript.

<div id="cat-list"></div>

<template id="cat-list-template">
  {{#each cats}}
    <div class="cat">
      <h1>{{name}}</h1>
      <p>Age: {{age}}</p>
    </div>
  {{/each}}
</template>

The Javascript is also similar to the prior example. The differences are:

  • The data we’re passing is now an array
  • When we render the template we wrap the array in an object ({ cats: myCats }) so the #each helper can find the data

var myCats = [
  { name: 'Fiona', age: 4 },
  { name: 'Spot', age: 12 },
  { name: 'Chestnut', age: 4 },
  { name: 'Frisky', age: false },
  { name: 'Biscuit', age: 4 }
];

var template = document.getElementById('cat-list-template').innerHTML;
var renderCats = Handlebars.compile(template);
document.getElementById('cat-list').innerHTML = renderCats({
  cats: myCats
});

The last bit of Handlebars magic we’ll look at is how to use it to render templates with external data from a REST API; in this case WordPress.

The template is pretty similar to the cat example. The main difference is the use of nested values (title.rendered and content.rendered) and the use of the triple mustache around content.rendered to tell Handlebars that we don’t want to escape HTML values for this variable.

If you don’t own the content then please don’t do this! In this case, since it’s my blog and I’m pretty sure I don’t write malware (bad code, maybe but definitely not malware) I’ve accepted the risk.

<div id="myContent"></div>

  <template id="post-list-template">

    {{#each posts}}
    <div class="post">
      <h1>{{title.rendered}}</h1>
      <div>
        {{{content.rendered}}}
      </div>
    </div>
    {{/each}}

  </template>

The Javascript is different. I’ve chosen to use fetch and promises to make the code look nicer. We could go with async and await but that would limit the code to newer browsers (yes, I know they are evergreen but I also know of IT departments that block updates or choose to use LTS/ESR versions that lag behind in features), even more so than promises do.

The code starts with a fetch request to the WordPress REST API requesting the 4 most recent posts on my blog. You can change how many posts are returned by changing the per_page parameter. The request generates a promise that resolves when the fetch completes and the data download is finished, and rejects otherwise.

Once the promise resolves we move to the next step and convert the response to JSON data using the response object’s json method.

Once the promise of response.json() fulfills we move to the next, and final, step. We compile the template and render it using the data we just fetched. These are the same commands that were at the bottom of the cat example. Since we are working with promises we must move them into the promise chain; otherwise, the fetch request will complete before we reach the part of the script where we compile and render the template.

If any of the promises reject, the code jumps to the catch statement. In this case we’re only logging the error to the console; we might also want to display something to the user to indicate the failure (see the sketch after the code below). No blank pages, please.

let myPosts = fetch(
  'https://publishing-project.rivendellweb.net/wp-json/wp/v2/posts?per_page=4'
)
  .then(response => {
    return response.json();
  })
  .then(myJson => {
    let template = document.getElementById('post-list-template').innerHTML;
    let renderPosts = Handlebars.compile(template);
    document.getElementById('myContent').innerHTML = renderPosts({
      posts: myJson
    });
  })
  .catch(err => {
    console.log("There's been an error getting the data", err);
  });
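
As a minimal sketch of the “display something to the user” idea mentioned above (the wording and the error class are placeholders of my own), the error handler could write a message into the #myContent placeholder instead of only logging:

function showLoadError(err) {
  console.log("There's been an error getting the data", err);
  // Tell the reader something went wrong instead of leaving a blank page
  document.getElementById('myContent').innerHTML =
    '<p class="error">Sorry, the latest posts could not be loaded.</p>';
}

// In the fetch chain, replace the existing handler with:
// .catch(showLoadError);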

This was meant as a proof of concept and is in no way, shape or form production code. Some areas of further work:

  • Caching the fetch results to improve load times after the first visit
  • Pagination

The template element

Rather than use a library, wouldn’t it be nice if we could use native HTML to create templates and then instantiate them with Javascript, without having to add libraries and additional HTTP requests?

We can!

HTML templates were first proposed as part of the web components family of specifications. They have since moved to the HTML Specification itself.

The idea behind the template element is that it holds content on the page without the content being active… we can activate the template at any time using Javascript.

Again it’s worth repeating some of the characteristics of native HTML templates:

  1. Its content is effectively inert until activated. Essentially, your markup is hidden DOM and does not render. This means that script won’t run, images won’t load, audio won’t play until the template is used
  2. Content is considered not to be in the document. Using document.getElementById() or querySelector() in the main page won’t return child nodes of a template
  3. Templates can be placed anywhere inside of <head>, <body>, or <frameset> and can contain any type of content which is allowed in those elements
    • Note that “anywhere” means that <template> can safely be used in places that the HTML parser disallows…all but content model children. It can also be placed as a child of <table> or <select>

The following template can live anywhere on the page but, for consistency’s sake, let’s put it at the bottom. Notice that we’re OK with having an empty src attribute on the image element.

<template id="mytemplate">
  <img src="" alt="great image">
  <div class="comment"></div>
</template>

In your application code, use a script like the following to stamp the template into the live DOM.

We first create a function to make feature detection for templates easier.

We then use the function in an if statement. Inside the if block we do the following:

  • Select the template element and store the result in a variable
  • Insert the path to the image into the src attribute. We use the template’s content property to reach inside the inert template
  • We create a cloned copy of the template content using document.importNode
  • We append the cloned node into the document.

If the user agent doesn’t support templates we can fall back to using a library like Handlebars.

function supportsTemplate() {
  return 'content' in document.createElement('template');
}

if (supportsTemplate()) {
  var t = document.querySelector('#mytemplate');
  // Populate the src at runtime.
  t.content.querySelector('img').src = 'logo.png';
  var clone = document.importNode(t.content, true);
  document.body.appendChild(clone);
} else {
  // Use old templating techniques or libraries.
}

The one thing I’m still working on figuring out is how you can create multiple copies of the same template and populate it with different data, like the WordPress/Handlebars example we discussed earlier. I will update the post once I figure it out 🙂
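
One possible approach, and only a sketch of a starting point rather than the final answer, is to clone the template once per item in a loop and fill in each clone before appending it. The ids and fields below are my own, mirroring the cat example:

// Assumes a <template id="cat-card"> containing <h1 class="name"></h1> and
// <p class="age"></p>, plus an empty <div id="cat-cards"></div> placeholder
const cats = [
  { name: 'Fiona', age: 4 },
  { name: 'Spot', age: 12 }
];

const template = document.querySelector('#cat-card');
const target = document.querySelector('#cat-cards');

cats.forEach(cat => {
  // importNode(..., true) gives us a fresh, deep copy of the inert content
  const clone = document.importNode(template.content, true);
  clone.querySelector('.name').textContent = cat.name;
  clone.querySelector('.age').textContent = `Age: ${cat.age}`;
  target.appendChild(clone);
});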

Links and Resources

Theme switcher With CSS Variables

Using CSS variables to create themes, we should be able to create more than one theme and then switch between them. I’ve experimented with defining multiple :root rules with different classes and then using Javascript to switch the class on the html element to match the theme.

I understood the theory but had a bit of a hard time figuring out how to add and switch a class on the html element. Looking at code from @justmarkup’s article helped clarify the idea.

Initially I had decided to use buttons for the theme switcher, but a select drop-down menu looks better and means we only have to hide one element rather than several if we choose to do so.

The HTML with the switcher element and the content, Lorem Ipsum for now, looks like this.

<h1>Theme Switcher Demo!</h1>

<label id="theme-changer" class="hidden">
  Choose theme
  <select name="theme" id="theme">
    <option value="default">default</option>
    <option value="theme-blue">blue</option>
    <option value="theme-red">red</option>
    <option value="theme-green">green</option>
    <option value="theme-grey">grey</option>
  </select>
</label>

    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit.</p>

The CSS is broken into two parts. The first part defines multiple :root rules, each tied to a class. We’ll use these classes to change the currently selected theme from Javascript.

:root.default {
  --global-backgroundcolor: white;
  --global-h1-size: 2em;
}

:root.theme-blue {
  --global-backgroundcolor: lightblue;
  --global-h1-size: 3em;
}

:root.theme-red {
  --global-backgroundcolor: indianred;
  --global-h1-size: 2.5em;
}

:root.theme-green {
  --global-backgroundcolor: lightgreen;
  --global-h1-size: 2em;
}

:root.theme-grey {
  --global-backgroundcolor: lightgray;
  --global-h1-size: 4em;
}

The second block of CSS uses the variables we created. We don’t specify which theme to use; since all themes define the same basic variables, we only reference the variable names and Javascript will take care of the switching.

The h1 element has different values depending on the theme we selected. This also allows us to see the differences between themes.

body {
  background-color: var(--global-backgroundcolor);
  width: 80%;
  margin: 0 auto;
}

h1 {
  font-size: 3em;
  font-size: var(--global-h1-size);
}

.hidden {
  display: none;
}

The Javascript portion of the project is what makes the switch happen. We use three different feature detection tests:

  • window.CSS checks if the browser implements the CSS Object Model (CSSOM)
  • window.CSS.supports is the Javascript equivalent to CSS @supports
  • window.CSS.supports('--a', 0) is the actual test for custom property support

These are combined with logical AND (&&), meaning they must all return true for the whole expression to evaluate to true.

We capture different elements (using document.querySelector) into constants that we’ll use later in the script.

We set the initial theme we’ll use to the ‘default’ theme.

We reveal the theme chooser by removing the hidden class from the select element.

We add a change event listener to capture the option (representing the selected theme), read its value and use it as the new class for our root element.

If we don’t pass the test it’s because the browser doesn’t implement the CSSOM (unlikely), doesn’t implement supports, or doesn’t support CSS variables. If any of these conditions fail we show an alert to the user… there’s not much more we can do on the JS side.

if (window.CSS && window.CSS.supports && window.CSS.supports('--a', 0)) {
  const themeChanger = document.querySelector('#theme-changer');
  const root = document.querySelector(':root');
  const themeChooser = document.querySelector('#theme');

  root.className = 'default';

  themeChanger.classList.remove('hidden');

  themeChooser.addEventListener('change', function(e) {
    const selectOption = this.options[this.selectedIndex];
    const currentTheme = selectOption.value;
    root.className = currentTheme;
  });
} else {
  alert("You browser doesn't support CSS custom variables yet");
}

This is a very simple proof of concept with a very big omission:

As implemented, the script can’t save the theme selection for future visits; you’ll have to select your theme every time you visit. This could be solved with either Local Storage or Indexed DB; a sketch using Local Storage follows.
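
Here’s a minimal sketch of the Local Storage route; it mirrors the script above, and the storage key name is my own invention:

const THEME_KEY = 'selected-theme';
const root = document.querySelector(':root');
const themeChooser = document.querySelector('#theme');

// On load, fall back to the default theme when nothing has been saved yet
const savedTheme = localStorage.getItem(THEME_KEY) || 'default';
root.className = savedTheme;
themeChooser.value = savedTheme;

// Save the choice whenever the user switches themes
themeChooser.addEventListener('change', function() {
  const currentTheme = this.options[this.selectedIndex].value;
  root.className = currentTheme;
  localStorage.setItem(THEME_KEY, currentTheme);
});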

Full working example on this pen at Codepen.

Theming with CSS Variables

One of the best things about CSS variables is that they allow you to create themes for your content.

Reviewing CSS Variables

CSS variables or, more precisely, CSS custom properties allow you to define reusable values for use across your stylesheet. This is particularly good for consistency and ease of change: we only need to make one change and every place where we use the variable will pick it up automatically.

Variables can be global, defined in the :root pseudo-class (equivalent to the html element), or specific to elements on the page.
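
As a short illustration of the difference (the selector and values are just examples):

/* Global: every rule in the document can use this value */
:root {
  --brand-color: rebeccapurple;
}

/* Scoped: only .card and its descendants see this value */
.card {
  --brand-color: teal;
}

/* Each element resolves the variable against the closest definition above it */
h1 {
  color: var(--brand-color);
}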

Defining The Core Theme

The example below shows a set of variables defined globally, values that we can use regardless of the element we’re working with.

The second group defines attributes for messages with different levels of severity (info, warning and danger). Some attributes are common to all messages; others are specific to each level and, for now, add a background color for each type of message we’re working with.

:root {
  /* generic margin values*/
  --margin-small: 0.5em;
  --margin-normal: 1em;
  --margin-large: 2em;

  /* generic padding values */
  --padding-small: 0.5em;
  --padding-normal: 1em;
  --padding-large: 2em;

  /* MESSAGES */
  /* message common attributes */
  --message-bordercolor: rgba(0, 0, 0, 1);
  --message-bordertype: solid;
  --message-borderthickness: 1px;
  --message-borderradius: 10px;
  /* info background color */
  --message-info--backgroundcolor: rgba(176, 216, 230, 1);
  /* warning background color */
  --message-warning--backgroundcolor: rgba(255, 255, 224, 1);
  /* danger background color */
  --message-danger--backgroundcolor: rgba(205, 92, 92, 1);
}

Using the variables we defined above, we can style new elements. You can use them inside shorthand properties such as border, shown below.

.message {
  border: var(--message-borderthickness)
          var(--message-bordertype)
          var(--message-bordercolor);
  border-radius: var(--message-borderradius);
  margin: var(--margin-normal) auto;
  padding: var(--padding-small);
}

.info {
  background-color: var(--message-info--backgroundcolor);
}

.warning {
  background-color: var(--message-warning--backgroundcolor);
}

.danger {
  background-color: var(--message-danger--backgroundcolor);
}

We can also code defensively and provide multiple values to cover browsers that don’t support variables. In the example below the CSS parser works through the values in order and ignores any it doesn’t understand: it goes through RGB, then RGBA, and finally the custom property. Because a browser skips declarations it doesn’t understand and uses the last one it does, we can rely on at least one of the rules applying. I’ve assumed that if the browser supports variables it also supports RGBA, which is what I’ve used to define the colors.

.info {
  background-color: rgb(176, 216, 230);
  background-color: rgba(176, 216, 230, 1);
  background-color: var(--message-info--backgroundcolor);
}

I’ve worked this example into a Codepen Demo that may be easier to understand.