Finding the Joy in 2019

Nigel enjoyed 2019. Especially the fishy bits.

This won’t be a long, navel-gazing post about all the wisdom I’ve gathered over the past year. Rather, a quick list of things that stuck out to me. And honestly, it’ll probably largely be from the past couple months, because a year is a long time to remember. And I didn’t take notes. 🙂

  • Happiness is a funny thing. I realized this year that, for me at least, happiness isn’t the result of things done well. Rather, happiness tends to cause things to go well. If I focus first on being happy, content, and having fun — those things like work, family, and hobbies tend to be more successful. And how does one focus on being happy? Oddly enough, choosing to be happy.
  • Facebook isn’t a good source for news. But (un?)fortunately it’s a really good place to find out about people in your life. If you want to see who a person really is, look at what they post/share/like. Or don’t, because it’s often more heartbreaking than anything.
  • Proxmox is awesome. Sorry in advance to my non-nerdy friends, but Proxmox is a virtualization platform, and I’m sad to say I haven’t used it before this year. I’ve been so very foolish to wait. It’s incredible. Hopefully you’ll hear more about it from me in 2020, because holy cat biscuits am I a fan.
  • eBay is a great place to buy servers. If you can deal with last-generation hardware, buying used/reconditioned servers on eBay is so affordable, it feels criminal. Granted, buying used equipment forces you to focus on redundancy and backups in case of failure — but shouldn’t you be focusing on those things anyway?!?!?
  • Losing weight is HARD. And it’s even harder for women than men. I lost over 50 pounds this year, and although I gained back 11-ish over the holidays, the past 6 months have been a big first step in a lifestyle change. I’m in my mid-40s now, and I need to eat far less, and exercise far more often than I did in decades past. I want to get really old someday, and keeping my body healthy and strong is an important part of that goal.
  • Point of view is critical. I’m a pretty sickly guy. From bad lungs, to bad kidneys, to heart concerns in my 20s — there’s a lot wrong with me. (Seriously, that’s just a tiny fraction of my issues, I don’t want to depress anyone with the entire list, especially myself!) I try daily to focus on how healthy I am in spite of all the things working against me. I’m not sickly, I’m impossible to kill!
  • Learning is awesome. Yeah, I know, I’m a trainer by profession so this sounds like a marketing tactic, but I mean for myself as much as anyone else. I absolutely love learning. This year alone I:
    • Built a hydroponic system in my basement
    • Learned this decade’s nuances with video and live-streaming
    • Installed lighting systems of multiple brands/kinds/styles all over my house and office
    • Learned a bit of a new programming language (python)
    • Read over a book a week
    • Left the country (this is a big deal for me, it’s a phobia)
    • Fixed a refrigerator
    • Installed a dishwasher
    • Bought/used/learned/installed/played_with more technology and gadgets than anyone has a right to
    • Finally pinned a tweet (sorry it took so long, Jake!)

I don’t know what 2020 has in store for me health-wise, work-wise, etc. — but I know that if I approach it purposely filled with joy first, it will be far better than if I try to create happiness by doing things. If I learned anything from 2019, it’s that joy is a choice. A decision. And it puts all the other things in place, regardless of what those things might be. Happy New Year, everyone. Let’s make it awesome together. 🙂

Grepping is Awesome. Just Don’t Glob it Up!

Greps and pipes and greps and pipes and greps and pipes…

This article covers some grep and regex basics.

There are generally two types of coffee drinkers. The first type buys a can of pre-ground beans and uses the included scoop to make their automatic drip coffee in the morning. The second type picks single-origin beans from various parts of the world, accepts only beans that have been roasted within the past week and grinds those beans with a conical burr grinder moments before brewing in any number of complicated methods. Text searching is a bit like that.

For most things on the command line, people think of *.* or *.txt and are happy to use file globbing to select the files they want. When it comes to grepping a log file, however, you need to get a little fancier. The confusing part is when the syntax of globbing and regex overlap. Thankfully, it’s not hard to figure out when to use which construct.

Globbing

The command shell uses globbing for filename completion. If you type something like ls *.txt, you’ll get a list of all the files that end in .txt in the current directory. If you do ls R*.txt, you’ll get all the files that start with capital R and have the .txt extension. The asterisk is a wild card that lets you quickly filter which files you mean.

You also can use a question mark in globbing if you want to specify a single character. So, typing ls read??.txt will list readme.txt, but not read.txt. That’s different from ls read*.txt, which will match both readme.txt and read.txt, because the asterisk means “zero or more characters” in the file glob.
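You can see the difference for yourself in a scratch directory (the filenames here are invented for the demo):

```shell
# Demo in a throwaway directory; filenames are made up for illustration.
cd "$(mktemp -d)"
touch read.txt readme.txt readus.txt

ls read??.txt   # matches readme.txt and readus.txt (exactly two extra characters)
ls read*.txt    # matches all three, including read.txt (zero or more characters)
```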

Here’s the easy way to remember if you’re using globbing (which is very simple) vs. regular expressions: globbing is done to filenames by the shell, and regex is used for searching text. The only frustrating exception to this is that sometimes the shell is too smart and conveniently does globbing when you don’t want it to—for example:


grep file* README.TXT

In most cases, this will search the file README.TXT looking for the regular expression file*, which is what you normally want. But if there happens to be a file in the current folder that matches the file* glob (let’s say filename.txt), the shell will assume you meant to pass that to grep, and so grep actually will see:


grep filename.txt README.TXT

Gee, thank you so much Mr. Shell, but that’s not what I wanted to do. For that reason, I recommend always using quotation marks when using grep. 99% of the time you won’t get an accidental glob match, but that 1% can be infuriating. So when using grep, this is much safer:


grep "file*" README.TXT

Because even if there is a filename.txt, the shell won’t substitute it automatically.
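You can watch the shell do the substitution with a quick sketch (file contents invented for the demo):

```shell
# Demonstrate the shell expanding an unquoted glob before grep ever sees it.
cd "$(mktemp -d)"
printf 'This line mentions a file.\n' > README.TXT
touch filename.txt                # a file that matches the glob file*

grep "file*" README.TXT           # quoted: grep gets the regex file* and finds the line
echo file*                        # unquoted: the shell expands it to filename.txt
```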

So, globs are for filenames, and regex is for searching text. That’s the first thing to understand. The next thing is to realize that similar syntax means different things.

Glob and Regex Conflicts

I don’t want this article to become a super in-depth piece on regex; rather, I want you to understand simple regex, especially where it conflicts with globbing. Table 1 shows a few of the most commonly confused symbols and what they mean in each case.

Table 1. Commonly Used Symbols

Special Character | Meaning in Globs                   | Meaning in Regex
*                 | zero or more characters            | zero or more of the character it follows
?                 | single occurrence of any character | zero or one of the character it follows, but not more than one
.                 | literal “.” character              | any single character

To add insult to injury, you might be thinking about globs when you use grep, but just because you get the expected results doesn’t mean you got the results for the correct reason. Let me try to explain. Here is a text file called filename.doc:


The fast dog is fast.
The faster dogs are faster.
A sick dog should see a dogdoc.
This file is filename.doc

If you type:


grep "fast*" filename.doc

The first two lines will match. Whether you’re thinking globs or regex, that makes sense. But if you type:


grep "dogs*" filename.doc

The first three lines will match, but if you’re thinking in globs, that doesn’t make sense. Since grep uses regular expressions (regex) when searching files, the asterisk means “zero or more occurrences of the previous character”, so in the second example, it matches dog and dogs, because having zero “s” characters matches the regex.

And let’s say you typed this:


grep "*.doc" filename.doc

This will match the last two lines. The asterisk doesn’t actually do anything in this command, because it’s not following any character. The dot in regex means “any character”, so it will match the “.doc”, but it also will match “gdoc” in “dogdoc”, so both lines match.

The moral of the story is that grep never uses globbing. The only exception is when the shell does globbing before passing the command on to grep, which is why it’s always a good idea to use quotation marks around the regular expression you are trying to grep for.

Use fgrep to Avoid Regex

Sometimes you don’t want the power of regex, and all those special characters just get in the way. This is especially true if you’re actually looking for some of the special characters in a bunch of text. You can use the fgrep command (or grep -F, which is the same thing) in order to skip any regex interpretation. Using fgrep, you’ll search for exactly what you type, even if it contains special characters. Here is a text file called file.txt:


I really hate regex.
All those stupid $, {}, and \ stuff ticks me off.
Why can't text be text?

If you try to use regular grep like this:


grep "$," file.txt

you may not get the results you expect. The “$” is a special character in regex (more on that in a bit), and grep flavors differ on how they treat it in the middle of an expression. If you’d like to grep for special characters without escaping them or learning the regex to do so, this will work fine:


grep -F "$," file.txt

And, grep will return the second line of the text file because it matches the literal characters. It’s possible to build a regex query to search for special characters, but it can become complicated quickly. Plus, fgrep is much, much faster on a large text file.
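To see it yourself, recreate the example file and run the fixed-string search (a minimal sketch):

```shell
# Recreate file.txt and search for the literal characters "$," with grep -F.
cd "$(mktemp -d)"
cat > file.txt <<'EOF'
I really hate regex.
All those stupid $, {}, and \ stuff ticks me off.
Why can't text be text?
EOF

grep -F "$," file.txt    # fixed-string search: matches the second line literally
```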

Some Simple, Useful Regex

Okay, now that you know when to use globbing and when to use regular expressions, let’s look at a bit of regex that can make grepping much more useful. I find myself using the caret and dollar sign symbols in regex fairly often. Caret means “at the beginning of the line”, and dollar sign means “at the end of the line”. I used to mix them up, so my silly method to remember is that a farmer has to plant carrots at the beginning of the season in order to sell them for dollars at the end of the season. It’s silly, but it works for me!

Here’s a sample text file named file.txt:


chickens eat corn
corn rarely eats chickens
people eat chickens and corn
chickens rarely eat people

If you were to type:


grep "chickens" file.txt

you will get all four lines returned, because “chickens” is in each line. But if you add some regex to the mix:


grep "^chickens" file.txt

you’ll get both the first and fourth line returned, because the word “chickens” is at the beginning of those lines. If you type:


grep "corn$" file.txt

you will see the first and third lines, because they both end with “corn”. However, if you type:


grep "^chickens.*corn$" file.txt

you will get only the first line, because it is the only one that begins with chickens and ends with corn. This example might look confusing, but there are three regular expressions that build the search. Let’s look at each of them.

First, ^chickens means the line must start with chickens.

Second, .* means zero or more of any character, because remember, the dot means any character, and the asterisk means zero or more of the previous character.

Third, corn$ means the line must end with corn.

When you’re building regular expressions, you just mush them all together like that in a long string. It can become confusing, but if you break down each piece, it makes sense. In order for the entire regular expression to match, all of the pieces must match. That’s why only the first line matches the example regex statement.

A handful of other common regex characters are useful when grepping text files. Remember just to mush them together to form the entire regular expression:

  • \ — the backslash negates the “special-ness” of special characters, which means you actually can search for them with regex. For example, \$ will search for the $ character, instead of looking for the end of a line.
  • \s — this construct means “whitespace” (it’s a GNU grep extension), which can be a space or spaces, tabs or newline characters. To find the word pickle surrounded by whitespace, you could search for \spickle\s, and that will find “pickle” but not “pickles”.
  • .* — this is really just a specific use of the asterisk, but it’s a very common combination, so I mention it here. It basically means “zero or more of any characters”, which is what was used in the corn/chicken example above.
  • | — this means “or” in regex. So hi|hello will match either “hi” or “hello”. It’s often used in parentheses to separate it from other parts of the regular expression. For example, (F|f)rankfurter will search for the word frankfurter, whether or not it’s capitalized. (With plain grep, you’ll need grep -E, or backslash-escaped versions, for | and () to behave this way.)
  • [] — brackets are another way to specify “or” options, and they support ranges. So the regex [Ff]rankfurter is the same as the above example, and ^[A-Z] will match any line that starts with a capital letter. Ranges work with numbers too, so [0-9]$ will match any line that ends in a digit.
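These mush together just like the chicken/corn example. Here’s a runnable sketch using grep -E, the extended-regex flavor where | and () work unescaped (the sample lines are invented):

```shell
# Demonstrate alternation, bracket ranges, and anchors on a tiny sample file.
cd "$(mktemp -d)"
printf 'Frankfurter stand\nfrankfurter bun\nroom 42\nlowercase line\n' > demo.txt

grep -E "(F|f)rankfurter" demo.txt   # matches the first two lines
grep "^[A-Z]" demo.txt               # lines starting with a capital letter
grep "[0-9]$" demo.txt               # lines ending in a digit
```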

Your Mission

You can do far more complicated things with regular expressions. These basic building blocks are usually enough to get the sort of text you need out of a log file. If you want to learn more, by all means, either do some googling on regex, or get a book explaining all the nitty-gritty goodness. If you want me to write more about it, drop a comment and let me know.

I really, really encourage you to practice using regex. The best way to learn is to do, so make a few text files and see if the regex statements you create give you the results you expect. Thankfully, grep highlights the “match” it finds in the line it returns. That means if you’re getting more results than you expect, you’ll see why the regex matched more than you expected, because grep will show you.

The most important thing to remember is that grep doesn’t do globbing—that wild-card stuff is for filenames on the shell only. Even if globbing with grep seems to work, it’s probably just coincidence (look back at the dog/dogs example above if you don’t know what I’m talking about). Have fun grepping!

The Powers Family Christmas Eve Scavenger Hunt

Every year, since our (now adult) girls were tiny, Donna and I have created a scavenger hunt for our kids on Christmas Eve. They follow clues, solve puzzles, and at the end, there’s a group gift/prize for them to enjoy together. It’s not our only family tradition, but it’s by far the biggest and most consistent one we have. Since we’ve started livestreaming the shenanigans every December 24th, we’ve gotten quite a few inquiries about how we do it.

This is the answer, in the form of recommendations if you want to do your own version.

Make it easy to set up, or it won’t be a tradition, it’ll be a single fun memory.

Donna and I don’t usually prepare weeks or even days in advance. Some years, we’ve created clues on the fly, while the girls are doing the hunt. We want it to be a tradition, not a burden. We used to have a tradition of making a Christmas Star together every year. But it turns out that can be difficult to do, and the tradition fizzled. We’ve NEVER missed a year with our scavenger hunt, because we never let it become a burden. It’s truly not about how clever your clues are, or how many people are involved. It’s about doing silly things together, and even the lamest years have been a ton of fun.

Remember WHY you’re doing it.

Our goal has always been for our girls to have fun with each other. We’re not trying to stump them. There aren’t teams competing. They aren’t competing against each other. They’re just having fun working together. The final clue/solution is always something we can do together as a family afterward. Some years it’s a video game. Some years it’s a board game. Some years it’s a movie. It’s impossible to “lose” at the scavenger hunt, and if a clue is too challenging, we’ll totally help and give more clues, because it’s not about challenging the girls. It’s about the girls having fun TOGETHER.

Include everyone.

This isn’t something we have to remind our girls of anymore. They know it’s about everyone having fun, so they go out of their way to include each other and anyone else that might be with them that year. But at the beginning, or especially if your group is varied in age — make a point to include everyone. Something too hard for little Johnny? Let him hold the video camera while Suzy climbs the fence, etc, etc.

Consider your participants’ ages.

Our girls are fairly close in age. When they were young, the scavenger hunt was an indoor event. When they got older, they’d have to go into the yard or on the Internet. (See a clue from 2010: https://youtu.be/KfCDJv7ZXds ) Some years there are friends and/or relatives that go with the girls, and we make sure to consider their ages and abilities while designing the clues.

Now? The girls are all adults, and clues will take them around town and even to other towns. They’ll drive a half hour one way to get a picture with a street sign. And they’ll laugh together the WHOLE time. It’s seriously magical, and allowing friends and others to join in has never been a problem. We play the scavenger hunt fast and loose, and that means it’s very flexible and age inclusive.

Consider video streaming publicly or privately.

Now that livestreaming is possible from mobile devices, the technology has made the entire experience more fun and inclusive. Perhaps you’ve seen our livestream. It’s silly, it’s fun, and holding the phone/camera is a job anyone can do. If you don’t want to livestream publicly, consider FaceTime.

How we actually do it now:

We take full advantage of technology. The girls have a phone livestreaming the whole time, for our enjoyment at home (Donna and I stay home). The actual clue/solution goes something like this:

  1. We text them a clue. “I’m downtown, but my phone died, and I’m not wearing a watch. How will I know what time it is?!??!”
  2. They figure out what we’re hinting at, and pile into a car together and drive (safely!) downtown. They get to the clock on main street, and take a photo of themselves in front of the clock.
  3. They text the photo to our family group text, and if they’re correct, they get sent the next clue.
  4. If they happen to go to the waterfront and get a photo in front of THAT clock, we’ll respond with something like, “when I’m downtown, I can’t see that clock…” — and they’ll figure out what we actually meant, and drive to the clock downtown and try again.
  5. Or, we’ll decide their solution was better than what we meant anyway, and pretend we meant the clock by the waterfront after all, and send them the next clue. 🙂

Sometimes, we’ll think ahead enough to have some jigsaw puzzles, which we put into an envelope and send with them. In which case, one of the clues they’ll receive via text is, “Open Envelope #2” — then they’ll follow the instructions inside the envelope.

Some of the clues involve them doing things like, “Open envelope #3, and use the $15 inside to buy hot cocoa from the bookstore, and get a stranger to take your photo” — then they send the photo to us to get the next clue.

We usually make them do some (slightly) embarrassing things, like going into a store and having one (or more) of them sing a Christmas Carol out loud while recording. They send the video to us, and we send the next clue/challenge.

Since it’s Christmas Eve, there’s usually a “build a snowman” challenge, which they need to accomplish and then take a photo and send it to us.

We’ll call a family/friend and make sure they’re home, then have a clue that has them go to XXX’s house and sing them “we wish you a Merry Christmas” while recording it, and we have the person give them the next clue (which we tell them when we call them, sometimes in advance, sometimes just before sending the clue, because we don’t prepare well, LOL)

End with some group fun.

Every “Just Dance” video game we own was the result of a scavenger hunt. We’ve had the last clue lead to a bowling alley (I think… maybe not, perhaps that will be this year’s prize), we’ve ended with video games, DVDs, etc, etc.

My biggest advice is to keep it simple. My girls rarely remember the clues or even the prizes at the end. They remember the fun they had doing silly, simple things together. They remember singing together in the car at the top of their lungs between clues. They remember anticipating the scavenger hunt. They tell their friends how awesome the tradition is, even if when they explain it, it doesn’t sound amazing. It’s far more about doing silly things together than the silly things themselves. 🙂

Good luck, and I hope your version is as much fun for your family as ours is for us!!!

Ansible Part 4: Putting it All Together

Roles are the most complicated and yet simplest aspect of Ansible to learn.

I’ve mentioned before that Ansible’s ad-hoc mode often is overlooked as just a way to learn how to use Ansible. I couldn’t disagree with that mentality any more fervently than I already do. Ad-hoc mode is actually what I tend to use most often on a day-to-day basis. That said, using playbooks and roles is a very powerful way to utilize Ansible’s abilities. In fact, when most people think of Ansible, they tend to think of the roles feature, because it’s the way most Ansible code is shared. So first, it’s important to understand the relationship between ad-hoc mode, playbooks and roles.

Ad-hoc Mode

This is a bit of a review, but it’s easy to forget once you start creating playbooks. Ad-hoc mode is simply a one-liner that uses an Ansible module to accomplish a given task on a set of computers. Something like:


ansible cadlab -b -m yum -a "name=vim state=latest"

will install vim on every computer in the cadlab group. The -b signals to elevate privilege (“become” root), the -m means to use the yum module, and the -a says what actions to take. In this case, it’s installing the latest version of vim.

Usually when I use ad-hoc mode to install packages, I’ll follow up with something like this:


ansible cadlab -b -m service -a "name=httpd state=started enabled=yes"

That one-liner will make sure that the httpd service is running and set to start on boot automatically (the latter is what “enabled” means). Like I said at the beginning, I most often use Ansible’s ad-hoc mode on a day-to-day basis. When a new rollout or upgrade needs to happen though, that’s when it makes sense to create a playbook, which is a text file that contains a bunch of Ansible commands.

Playbook Mode

I described playbooks in my last article. They are YAML-formatted (YAML Ain’t Markup Language) text files that contain a list of things for Ansible to accomplish. For example, to install Apache on a lab full of computers, you’d create a file something like this:


---

- hosts: cadlab
  tasks:
  - name: install apache2 on CentOS
    yum: name=httpd state=latest
    notify: start httpd
    ignore_errors: yes

  - name: install apache2 on Ubuntu
    apt: update_cache=yes name=apache2 state=latest
    notify: start apache2
    ignore_errors: yes

  handlers:
  - name: start httpd
    service: name=httpd enabled=yes state=started

  - name: start apache2
    service: name=apache2 enabled=yes state=started

Mind you, this isn’t the most elegant playbook. It contains a single play that tries to install httpd with yum and apache2 with apt. If the lab is a mix of CentOS and Ubuntu machines, one or the other of the installation methods will fail. That’s why the ignore_errors command is in each task. Otherwise, Ansible would quit when it encountered an error. Again, this method works, but it’s not pretty. It would be much better to create conditional statements that would allow for a graceful exit on incompatible platforms. In fact, playbooks that are more complex and do more things tend to evolve into a “role” in Ansible.
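For the curious, here’s a hedged sketch of that conditional approach, using Ansible’s built-in ansible_os_family fact with the when keyword (the structure follows the standard pattern, though I haven’t run this exact playbook):

```yaml
---

- hosts: cadlab
  tasks:
  - name: install apache2 on CentOS
    yum: name=httpd state=latest
    notify: start httpd
    when: ansible_os_family == "RedHat"   # task is skipped cleanly on Ubuntu

  - name: install apache2 on Ubuntu
    apt: update_cache=yes name=apache2 state=latest
    notify: start apache2
    when: ansible_os_family == "Debian"   # task is skipped cleanly on CentOS

  handlers:
  - name: start httpd
    service: name=httpd enabled=yes state=started

  - name: start apache2
    service: name=apache2 enabled=yes state=started
```

No ignore_errors needed: each task only runs where its package manager exists, so nothing fails in a mixed lab.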

Roles

Roles aren’t really a mode of operation. Actually, roles are an integral part of playbooks. Just like a playbook can have tasks, variables and handlers, it can also have roles. Quite simply, roles are just a way to organize the various components referenced in playbooks. It starts with a folder layout:


roles/
  webserver/
    tasks/
      main.yml
    handlers/
      main.yml
    vars/
      main.yml
    templates/
      index.html.j2
      httpd.conf.j2
    files/
      ntp.conf

Ansible looks for a roles folder in the current directory, but also in a system-wide location like /etc/ansible/roles, so you can store your roles to keep them organized and out of your home folder. The advantage of using roles is that your playbooks can look as simple as this:


---

- hosts: cadlab
  roles:
    - webserver

And then the “webserver” role will be applied to the group “cadlab” without needing to type any more information inside your playbook. When a role is specified, Ansible looks for a folder matching the name “webserver” inside your roles folder (in the current directory or the system-wide directory). It then will execute the tasks inside webserver/tasks/main.yml. Any handlers mentioned in that playbook will be searched for automatically in webserver/handlers/main.yml. Also, any time files are referenced by a template module or file/copy module, the path doesn’t need to be specified. Ansible automatically will look inside webserver/files/ or webserver/templates/ for the files.
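As a sketch (hypothetical contents, matching the folder layout above), webserver/tasks/main.yml might look like this, with no paths needed for the template or handler:

```yaml
# webserver/tasks/main.yml (hypothetical example)
- name: install apache
  yum: name=httpd state=latest
  notify: start httpd          # found automatically in webserver/handlers/main.yml

- name: deploy config template
  template: src=httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
  # src is found automatically in webserver/templates/
```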

Basically, using roles will save you lots of path declarations and include statements. That might seem like a simple thing, but the organization creates a standard that not only makes it easy to figure out what a role does, but also makes it easy to share your code with others. If you always know any files must be stored in roles/rolename/files/, it means you can share a “role” with others and they’ll know exactly what to do with it—namely, just plop it in their own roles folder and start using it.

Sharing Roles: Ansible Galaxy

One of the best aspects of current DevOps tools like Chef, Puppet and Ansible is that there is a community of people willing to share their hard work. On a small scale, roles are a great way to share with your coworkers, especially if you have roles that are customized specifically for your environment. Since many environments are similar, roles can be shared with an even wider audience—and that’s where Ansible Galaxy comes into play.

I’ll be honest, part of the draw for me with Ansible is the sci-fi theme in the naming convention. I know I’m a bit silly in that regard, but just naming something Ansible or Ansible Galaxy gets my attention. This might be one of those “built by nerds, for nerds” sort of things. I’m completely okay with that. If you head over to the Galaxy site, you’ll find the online repository for shared roles—and there are a ton.

For simply downloading and using other people’s roles, you don’t need any sort of account on Ansible Galaxy. You can search on the website by going to Galaxy and clicking “Browse Roles” on the left side of the page (Figure 1). There are more than 13,000 roles currently uploaded to Ansible Galaxy, so I highly recommend taking advantage of the search feature! In Figure 2, you’ll see I’ve searched for “apache” and sorted by “downloads” in order to find the most popular roles.

Figure 1. Click that link to browse and search for roles.

Figure 2. Jeff Geerling’s roles are always worth checking out.

Many of the standard roles you’ll find that are very popular are written by Jeff Geerling, whose user name is geerlingguy. He’s an Ansible developer who has written at least one Ansible book that I’ve read and possibly others. He shares his roles, and I encourage you to check them out—not only for using them, but also for seeing how he codes around issues like conditionally choosing the correct module for a given distribution and things like that. You can click on the role name and see all the code involved. You might notice that if you want to examine the code, you need to click on the GitHub link. That’s one of the genius moves of Ansible Galaxy—all roles are stored on a user’s GitHub page as opposed to an Ansible Galaxy server. Since most developers keep their code on GitHub, they don’t need to remember to upload to Ansible Galaxy as well.

Incidentally, if you ever desire to share your own Ansible roles, you’ll need to use a GitHub user name to upload them, because again, roles are all stored on GitHub. But that’s getting ahead of things; first you need to learn how to use roles in your environment.

Using ansible-galaxy to Install Roles

It’s certainly possible to download an entire repository and then unzip the contents into your roles folder. Since they’re just text files and structured folders, there’s not really anything wrong with doing it that way. It’s just far less convenient than using the tools built in to Ansible.

There is a search mechanism on the Ansible command line for searching the Ansible Galaxy site, but in order to find a role I want to use, I generally go to the website and find it, then use the command-line tools to download and install it. Here’s an example of Jeff Geerling’s “apache” role. In order to use Ansible to download a role, you need to do this:


sudo ansible-galaxy install geerlingguy.apache

Notice two things. First, you need to execute this command with root privilege. That’s because the ansible-galaxy command will install roles in your system-wide roles folder, which isn’t writable (by default) by your regular user account. Second, take note of the format of roles named on Ansible Galaxy. The format is username.rolename, so in this case, geerlingguy.apache, which is also how you reference the role inside your playbooks.

If you want to see roles listed with the correct format, you can use ansible-galaxy’s search command, but like I said, I find it less than useful because it doesn’t sort by popularity. In fact, I can’t figure out what it sorts by at all. The only time I use the command-line search feature is if I also use grep to narrow down roles by a single person. Anyway, Figure 3 shows what the results of ansible-galaxy search look like. Notice the username.rolename format.

Figure 3. I love the command line, but these search results are frustrating.

Once you install a role, it is immediately available for you to use in your own playbooks, because it’s installed in the system-wide roles folder. In my case, that’s /etc/ansible/roles (Figure 4). So now, if I create a playbook like this:


---
- hosts: cadlab
  roles:
    - geerlingguy.apache

Apache will be installed on all my cadlab computers, regardless of what distribution they’re using. If you want to see how the role (which is just a bunch of tasks, handlers and so forth) works, just pick through the folder structure inside /etc/ansible/roles/geerlingguy.apache/. It’s all right there for you to use or modify.

Figure 4. Easy Peasy, Lemon Squeezy

Creating Your Own Roles

There’s really no magic here, since you easily can create a roles folder and then create your own roles manually inside it, but ansible-galaxy does give you a shortcut by creating a skeleton role for you. Make sure you have a roles folder, then just type:


ansible-galaxy init roles/rolename

and you’ll end up with a nicely created folder structure for your new role. It doesn’t do anything magical, but as someone who has misspelled “Templates” before, I can tell you it will save you a lot of frustration if you have clumsy fingers like me.
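On my systems, the skeleton it creates looks something like this (the exact set of folders varies a bit between Ansible versions):

```
roles/rolename/
├── README.md
├── defaults/
│   └── main.yml
├── files/
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
├── tests/
└── vars/
    └── main.yml
```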

Sharing Your Roles

If you get to the point where you want to share your roles on Ansible Galaxy, it’s fairly easy to do. Make sure you have your role on GitHub (using git is beyond the scope of this article, but using git and GitHub is a great way to keep track of your code anyway). Once you have your roles on GitHub, you can use ansible-galaxy to “import” them into the publicly searchable Ansible Galaxy site. You first need to authenticate:


ansible-galaxy login

Before you try to log in with the command-line tool, be sure you’ve visited the Ansible Galaxy website and logged in with your GitHub account. You can see in Figure 5 that at first I was unable to log in. Then I logged in on the website, and after that, I was able to log in with the command-line tool successfully.

Figure 5. It drove me nuts trying to figure out why I couldn’t authenticate.

Once you’re logged in, you can add your role by typing:


ansible-galaxy import githubusername githubreponame

The process takes a while, so you can add the --no-wait option if you want, and the role will be imported in the background. I really don’t recommend doing this until you have created roles worth sharing. Keep in mind, there are more than 13,000 roles on Ansible Galaxy, so there are many “re-inventions of the wheel” happening.

From Here?

Well, it’s taken me four articles, but I think if you’ve been following along, you should be at the point where you can take it from here. Playbooks and roles are usually where people focus their attention in Ansible, but I also encourage you to take advantage of ad-hoc mode for day-to-day maintenance tasks. In some ways, Ansible is just another DevOps configuration management tool, but for me, it feels the most like the traditional kind of problem solving I accomplished with Bash scripts for decades. Perhaps I just like Ansible because it thinks the same way I do. Regardless of your motivation, I encourage you to learn Ansible enough so you can determine whether it fits into your workflow as well as it fits into mine.

If you’d like more direct training on Ansible (and other stuff) from yours truly, visit me at my DayJob as a trainer for CBT Nuggets. You can get a full week free if you head over to https://cbt.gg/shawnp0wers and sign up for a trial!

The 4 Part Series on Ansible includes:
Part 1 – DevOps for the Non-Dev
Part 2 – Making Things Happen
Part 3 – Playbooks
Part 4 – Putting it All Together

Ansible Part 3: Playbooks

Playbooks make Ansible even more powerful than before.

To be quite honest, if Ansible had nothing but its ad-hoc mode, it still would be a powerful and useful tool for automating large numbers of computers. In fact, if it weren’t for a few features, I might consider sticking with ad-hoc mode, adding a bunch of those ad-hoc commands to a Bash script, and calling it done. Those few additional features, however, make the continued effort well worth it.

Tame the Beast with YAML

Ansible goes out of its way to use an easy-to-read configuration file for making “playbooks”, which are files full of separate Ansible “tasks”. A task is basically an ad-hoc command written out in a configuration file, which makes it more organized and easy to expand. The configuration files use YAML, which (recursively) stands for “YAML Ain’t Markup Language”. It’s an easy-to-read format, but it does rely on significant whitespace, which isn’t terribly common with most config files. A simple playbook looks something like this:


---

- hosts: webservers
  become: yes
  tasks:
    - name: this installs a package
      apt: name=apache2 update_cache=yes state=latest

    - name: this restarts the apache service
      service: name=apache2 enabled=yes state=restarted

The contents should be fairly easy to identify. It’s basically two ad-hoc commands broken up into a YAML configuration file. There are a few important things to notice. First, playbook filenames conventionally end with .yaml (or .yml), and by convention, every YAML file begins with three hyphen characters. Also, as mentioned above, whitespace matters. Finally, it’s often confusing when a hyphen should precede a section and when it should just be spaced appropriately. Basically, every new section needs to start with a - symbol, but it’s often hard to tell what should be its own section. Nevertheless, it starts to feel natural as you create more and more playbooks.

The above playbook would be executed by typing:


ansible-playbook filename.yaml

And that is the equivalent of these two commands:


ansible webservers -b -m apt -a "name=apache2 update_cache=yes state=latest"
ansible webservers -b -m service -a "name=apache2 enabled=yes state=restarted"

Handling Your Handlers

But a bit of organization is really only the beginning of why playbooks are so powerful. First off, there’s the idea of “Handlers”, which are tasks that are executed only when “notified” that a task has made a change. How does that work exactly? Let’s rewrite the above YAML file to make the second task a handler:


---

- hosts: webservers
  become: yes
  tasks:
    - name: this installs a package
      apt: name=apache2 update_cache=yes state=latest
      notify: enable apache

  handlers:
    - name: enable apache
      service: name=apache2 enabled=yes state=started

On the surface, this looks very similar to just executing multiple tasks. When the first task (installing Apache) executes, if a change is made, it notifies the “enable apache” handler, which makes sure Apache is enabled on boot and currently running. The significance is that if Apache is already installed, and no changes are made, the handler never is called. That makes the code much more efficient, but it also means no unnecessary interruption of the already running Apache process.

There are other subtle time-saving issues with handlers too—for example, multiple tasks can call a handler, but it executes only a single time regardless of how many times it’s called. But the really significant thing to remember is that handlers are executed (notified) only when an Ansible task makes a change on the remote system.
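
To illustrate that last point, here's a sketch (with made-up config filenames) where two tasks notify the same handler. Even if both tasks report changes, Apache restarts once, at the end of the play:


---

- hosts: webservers
  become: yes
  tasks:
    - name: update main config
      copy: src=./apache2.conf dest=/etc/apache2/apache2.conf
      notify: restart apache

    - name: update vhost config
      copy: src=./vhost.conf dest=/etc/apache2/sites-enabled/000-default.conf
      notify: restart apache

  handlers:
    - name: restart apache
      service: name=apache2 state=restarted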

Just the Facts, Ma’am

Variable substitution works quite simply inside a playbook. Here’s a simple example:


---

- hosts: webservers
  become: yes
  vars:
    package_name: apache2
  tasks:
    - name: this installs a package
      apt: "name={{ package_name }} update_cache=yes state=latest"
      notify: enable apache

  handlers:
    - name: enable apache
      service: "name={{ package_name }} enabled=yes state=started"

It should be fairly easy to understand what’s happening above. Note that I did put the entire module action section in quotes. It’s not always required, but sometimes Ansible is funny about unquoted variable substitutions, so I always try to put things in quotes when variables are involved.

The really interesting thing about variables, however, is the “Gathered Facts” about every host. You might notice when executing a playbook that the first thing Ansible does is “Gathering Facts…”, which completes without error but doesn’t actually seem to do anything. What’s really happening is that system information is getting populated into variables that can be used inside a playbook. To see the entire list of “Gathered Facts”, you can execute an ad-hoc command:


ansible webservers -m setup

You’ll get a huge list of facts generated from the individual hosts. Some of them are particularly useful. For example, ansible_os_family will return something like “RedHat” or “Debian” depending on which distribution you’re using. Ubuntu and Debian systems both return “Debian”, while Red Hat and CentOS will return “RedHat”. Although that’s certainly interesting information, it’s really useful when different distros use different tools—for example, apt vs. yum.

Getting Verbose

One of the frustrations of moving from Ansible ad-hoc commands to playbooks is that in playbook mode, Ansible tends to keep fairly quiet with regard to output. With ad-hoc mode, you often can see what is going on, but with a playbook, you know only if it finished okay, and if a change was made. There are two easy ways to change that. The first is just to add the -v flag when executing ansible-playbook. That adds verbosity and provides lots of feedback when things are executed. Unfortunately, it often gives so much information that its usefulness gets lost in the mix. Still, in a pinch, just adding the -v flag helps.

If you’re creating a playbook and want to be notified of things along the way, the debug module is really your friend. In ad-hoc mode, the debug module doesn’t make much sense to use, but in a playbook, it can act as a “reporting” tool about what is going on. For example:


---

- hosts: webservers
  tasks:
   - name: describe hosts
     debug: msg="Computer {{ ansible_hostname }} is running {{ ansible_os_family }} or equivalent"

The above will show you something like Figure 1, which is incredibly useful when you’re trying to figure out the sort of systems you’re using. The nice thing about the debug module is that it can display anything you want, so if a value changes, you can have it displayed on the screen so you can troubleshoot a playbook that isn’t working like you expect it to work. It is important to note that the debug module doesn’t do anything other than display information on the screen for you. It’s not a logging system; rather, it’s just a way to have information (customized information, unlike the verbose flag) displayed during execution. Still, it can be invaluable as your playbooks become more complex.

Figure 1. Debug mode is the best way to get some information on what’s happening inside your playbooks.
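
Along with msg=, the debug module accepts a var= argument, which is handy for dumping a single fact without composing a message. For example:


---

- hosts: webservers
  tasks:
    - name: show discovered ip addresses
      debug: var=ansible_all_ipv4_addresses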

If This Then That

Conditionals are a part of pretty much every programming language. Ansible YAML files also can take advantage of conditional execution, but the format is a little wacky. Normally the condition comes first, and then if it evaluates as true, the following code executes. With Ansible, it’s a little backward. The task is completely spelled out, then a when statement is added at the end. It makes the code very readable, but as someone who’s been using an if/then mentality his entire career, it feels funny. Here’s a slightly more complicated playbook. See if you can parse out what would happen in an environment with both Debian/Ubuntu and Red Hat/CentOS systems:


---

- hosts: webservers
  become: yes
  tasks:
    - name: install apache this way
      apt: name=apache2 update_cache=yes state=latest
      notify: start apache2
      when: ansible_os_family == "Debian"

    - name: install apache that way
      yum: name=httpd state=latest
      notify: start httpd
      when: ansible_os_family == "RedHat"

  handlers:
    - name: start apache2
      service: name=apache2 enabled=yes state=started

    - name: start httpd
      service: name=httpd enabled=yes state=started

Hopefully the YAML format makes that fairly easy to read. Basically, it’s a playbook that will install Apache on hosts using either yum or apt based on which type of distro they have installed. Then handlers make sure the newly installed packages are enabled and running.

It’s easy to see how useful a combination of gathered facts and conditional statements can be. Thankfully, Ansible doesn’t stop there. As with other configuration management systems, it includes most features of programming and scripting languages. For example, there are loops.

Play It Again, Sam

If there is one thing Ansible does well, it’s loops. Quite frankly, it supports so many different sorts of loops, I can’t cover them all here. The best way to figure out the perfect sort of loop for your situation is to read the Ansible documentation directly.

For simple lists, playbooks use a familiar, easy-to-read method for doing multiple tasks. For example:


---

- hosts: webservers
  become: yes

  tasks:
    - name: install a bunch of stuff
      apt: "name={{ item }} state=latest update_cache=yes"
      with_items:
        - apache2
        - vim
        - chromium-browser

This simple playbook will install multiple packages using the apt module. Note the special variable named item, which is replaced with the items one at a time in the with_items section. Again, this is pretty easy to understand and utilize in your own playbooks. Other loops work in similar ways, but they’re formatted differently. Just check out the documentation for the wide variety of ways Ansible can repeat similar tasks.
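
It's worth noting that Ansible 2.5 and newer also provide a more general loop keyword. with_items still works fine, but the same task could just as well be written like this:


    - name: install a bunch of stuff
      apt: "name={{ item }} state=latest update_cache=yes"
      loop:
        - apache2
        - vim
        - chromium-browser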

Templates

One last module I find myself using often is the template module. If you’ve ever used mail merge in a word processor, templating works similarly. Basically, you create a text file and then use variable substitution to create a custom version on the fly. I most often do this for creating HTML files or config files. Ansible uses the Jinja2 templating language, which is conveniently similar to standard variable substitution in playbooks themselves. The example I almost always use is a custom HTML file that can be installed on a remote batch of web servers. Let’s look at a fairly complex playbook and an accompanying HTML template file.

Here’s the playbook:


---

- hosts: webservers
  become: yes

  tasks:
   - name: install apache2
     apt: name=apache2 state=latest update_cache=yes
     when: ansible_os_family == "Debian"

   - name: install httpd
     yum: name=httpd state=latest
     when: ansible_os_family == "RedHat"

   - name: start apache2
     service: name=apache2 state=started enabled=yes
     when: ansible_os_family == "Debian"

   - name: start httpd
     service: name=httpd state=started enabled=yes
     when: ansible_os_family == "RedHat"

   - name: install index
     template:
       src: index.html.j2
       dest: /var/www/html/index.html

Here’s the template file, which by convention ends in .j2 (it’s the file referenced in the last task above):


<html><center>
<h1>This computer is running {{ ansible_os_family }},
and its hostname is:</h1>
<h3>{{ ansible_hostname }}</h3>
{# this is a comment, which won't be copied to the index.html file #}
</center></html>

This also should be fairly easy to understand. The playbook takes a few different things it learned and installs Apache on the remote systems, regardless of whether they are Red Hat- or Debian-based. Then, it starts the web servers and makes sure the web server starts on system boot. Finally, the playbook takes the template file, index.html.j2, and substitutes the variables while copying the file to the remote system. Note the {# #} format for making comments. Those comments are completely erased on the remote system and are visible only in the .j2 file on the Ansible machine.
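
Jinja2 also supports filters inside those curly braces, which can save you when a variable might not exist. For example, this line (with a made-up variable name) falls back to a default value rather than causing an error:


<p>Contact: {{ admin_email | default("root@localhost") }}</p>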

The Sky Is the Limit!

I’ll finish up this series in my next article, where I plan to cover how to build on your playbook knowledge to create entire roles and take advantage of the community contributions available. Ansible is a very powerful tool that is surprisingly simple to understand and use. If you’ve been experimenting with ad-hoc commands, I encourage you to create playbooks that will allow you to do multiple tasks on a multitude of computers with minimal effort. At the very least, play around with the “Facts” gathered by the ansible-playbook app, because those are things unavailable to the ad-hoc mode of Ansible. Until next time, learn, experiment, play and have fun!


Ansible Part 2: Making Things Happen

Finally, an automation framework that thinks like a sysadmin. Ansible, you’re hired.

In my last article, I described how to configure your server and clients so you could connect to each client from the server. Ansible is a push-based automation tool, so the connection is initiated from your “server”, which is usually just a workstation or a server you ssh in to from your workstation. In this article, I explain how modules work and how you can use Ansible in ad-hoc mode from the command line.

Ansible is supposed to make your job easier, so the first thing you need to learn is how to do familiar tasks. For most sysadmins, that means some simple command-line work. Ansible has a few quirks when it comes to command-line utilities, but it’s worth learning the nuances, because it makes for a powerful system.

Command Module

This is the safest module to execute remote commands on the client machine. As with most Ansible modules, it requires Python to be installed on the client, but that’s it. When Ansible executes commands using the Command Module, it does not process those commands through the user’s shell. This means some variables like $HOME are not available. It also means stream functions (redirects, pipes) don’t work. If you don’t need to redirect output or to reference the user’s home directory as a shell variable, the Command Module is what you want to use. To invoke the Command Module in ad-hoc mode, do something like this:


ansible host_or_groupname -m command -a "whoami"

Your output should show SUCCESS for each host referenced and then return the user name used to log in. You’ll notice that the user is not root, unless that’s the user you used to connect to the client computer.

If you want to see the elevated user, you’ll add another argument to the ansible command. You can add -b in order to “become” the elevated user (or the sudo user). So, if you were to run the same command as above with a “-b” flag:


ansible host_or_groupname -b -m command -a "whoami"

you should see a similar result, but the whoami results should say root instead of the user you used to connect. That flag is important to use, especially if you try to run remote commands that require root access!
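
One caveat: if sudo on the client machines requires a password, add the -K (or --ask-become-pass) flag so Ansible prompts you for it instead of failing:


ansible host_or_groupname -b -K -m command -a "whoami"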

Shell Module

There’s nothing wrong with using the Shell Module to execute remote commands. It’s just important to know that since it uses the remote user’s environment, if there’s something goofy with the user’s account, it might cause problems that the Command Module avoids. If you use the Shell Module, however, you’re able to use redirects and pipes. You can use the whoami example to see the difference. This command:


ansible host_or_groupname -m command -a "whoami > myname.txt"

should result in an error about > not being a valid argument. Since the Command Module doesn’t run inside any shell, it interprets the greater-than character as something you’re trying to pass to the whoami command. If you use the Shell Module, however, you have no problems:


ansible host_or_groupname -m shell -a "whoami > myname.txt"

This should execute and give you a SUCCESS message for each host, but there should be nothing returned as output. On the remote machine, however, there should be a file called myname.txt in the user’s home directory that contains the name of the user. My personal policy is to use the Command Module whenever possible and to use the Shell Module if needed.

The Raw Module

Functionally, the Raw Module works like the Shell Module. The key difference is that Ansible doesn’t do any error checking; only STDOUT, STDERR and the return code are returned. Other than that, Ansible has no idea what happens, because it just executes the command over SSH directly. So while the Shell Module will use /bin/sh by default, the Raw Module just uses whatever the user’s personal default shell might be.

Why would a person decide to use the Raw Module? It doesn’t require Python on the remote computer—at all. Although it’s true that most servers have Python installed by default, or easily could have it installed, many embedded devices don’t and can’t have Python installed. For most configuration management tools, not having an agent program installed means the remote device can’t be managed. With Ansible, if all you have is SSH, you still can execute remote commands using the Raw Module. I’ve used the Raw Module to manage Bitcoin miners that have a very minimal embedded environment. It’s a powerful tool, and when you need it, it’s invaluable!
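
Invoking the Raw Module looks just like the others. For example, to check uptime on a fleet of Python-less devices:


ansible host_or_groupname -m raw -a "uptime"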

Copy Module

Although it’s certainly possible to do file and folder manipulation with the Command and Shell Modules, Ansible includes a module specifically for copying files to the server. Even though it requires learning a new syntax for copying files, I like to use it because Ansible will check to see whether a file exists, and whether it’s the same file. That means it copies the file only if it needs to, saving time and bandwidth. It even will make backups of existing files! I can’t tell you how many times I’ve used scp and sshpass in a Bash FOR loop and dumped files on servers, even if they didn’t need them. Ansible makes it easy and doesn’t require FOR loops and IP iterations.

The syntax is a little more complicated than with Command, Shell or Raw. Thankfully, as with most things in the Ansible world, it’s easy to understand—for example:


ansible host_or_groupname -b -m copy \
    -a "src=./updated.conf dest=/etc/ntp.conf \
        owner=root group=root mode=0644 backup=yes"

This will look in the current directory (on the Ansible server/workstation) for a file called updated.conf and then copy it to each host. On the remote system, the file will be put in /etc/ntp.conf, and if a file already exists, and it’s different, the original will be backed up with a date extension. If the files are the same, Ansible won’t make any changes.

I tend to use the Copy Module when updating configuration files. It would be perfect for updating configuration files on Bitcoin miners, but unfortunately, the Copy Module does require that the remote machine has Python installed. Nevertheless, it’s a great way to update common files on many remote machines with one simple command. It’s also important to note that the Copy Module supports copying remote files to other locations on the remote filesystem using the remote_src=true directive.
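
For example, to make an in-place backup copy of a config file entirely on the remote machines (nothing is transferred from the Ansible workstation):


ansible host_or_groupname -b -m copy \
    -a "src=/etc/ntp.conf dest=/etc/ntp.conf.orig \
        remote_src=true"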

File Module

The File Module has a lot in common with the Copy Module, but if you try to use the File Module to copy a file, it doesn’t work as expected. The File Module does all its actions on the remote machine, so src and dest are both references to the remote filesystem. The File Module often is used for creating directories, creating links or deleting remote files and folders. The following will simply create a folder named /etc/newfolder on the remote servers and set the mode:


ansible host_or_groupname -b -m file \
       -a "path=/etc/newfolder state=directory mode=0755"

You can, of course, set the owner and group, along with a bunch of other options, which you can learn about on the Ansible doc site. I find I most often will either create a folder or symbolically link a file using the File Module. To create a symlink:


ansible host_or_groupname -b -m file \
         -a "src=/etc/ntp.conf dest=/home/user/ntp.conf \
             owner=user group=user state=link"

Notice that the state directive is how you inform Ansible what you actually want to do. There are several state options:

  • link – create symlink.
  • directory – create directory.
  • hard – create hardlink.
  • touch – create empty file.
  • absent – delete file or directory recursively.

This might seem a bit complicated, especially when you easily could do the same with a Command or Shell Module command, but the clarity of using the appropriate module makes it more difficult to make mistakes. Plus, learning these commands in ad-hoc mode will make playbooks, which consist of many commands, easier to understand (I plan to cover this in my next article).
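
As one more quick example, deleting that same folder uses the absent state:


ansible host_or_groupname -b -m file \
       -a "path=/etc/newfolder state=absent"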

Package Management

Anyone who manages multiple distributions knows it can be tricky to handle the various package managers. Ansible handles this in a couple ways. There are specific modules for apt and yum, but there’s also a generic module called “package” that will install packages on the remote computer regardless of whether it’s Red Hat- or Debian/Ubuntu-based.

Unfortunately, while Ansible usually can detect the type of package manager it needs to use, it doesn’t have a way to handle packages with different names. One prime example is Apache. On Red Hat-based systems, the package is “httpd”, but on Debian/Ubuntu systems, it’s “apache2”. That means some more complex things need to happen in order to install the correct package automatically. The individual modules, however, are very easy to use. I find myself just using apt or yum as appropriate, just like when I manually manage servers. Here’s an apt example:


ansible host_or_groupname -b -m apt \
          -a "update_cache=yes name=apache2 state=latest"

With this one simple line, all the host machines will run apt-get update (that’s the update_cache directive at work), then install apache2’s latest version including any dependencies required. Much like the File Module, the state directive has a few options:

  • latest – get the latest version, upgrading existing if needed.
  • absent – remove package if installed.
  • present – make sure package is installed, but don’t upgrade existing.

The Yum Module works similarly to the Apt Module, but I generally don’t bother with the update_cache directive, because yum updates automatically. Although very similar, installing Apache on a Red Hat-based system looks like this:


ansible host_or_groupname -b -m yum \
      -a "name=httpd state=present"

The difference with this example is that if Apache is already installed, it won’t update, even if an update is available. Sometimes updating to the latest version isn’t what you want, so this stops that from accidentally happening.
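
And if you’d rather not think about the underlying package manager at all, the generic “package” module mentioned earlier works the same way, at least for packages that share a name across distributions (ntp, for instance):


ansible host_or_groupname -b -m package \
      -a "name=ntp state=present"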

Just the Facts, Ma’am

One frustrating thing about using Ansible in ad-hoc mode is that you don’t have access to the “facts” about the remote systems. In my next article, where I plan to explore creating playbooks full of various tasks, you’ll see how you can reference the facts Ansible learns about the systems. It makes Ansible far more powerful, but again, it can be utilized only in playbook mode. Nevertheless, it’s possible to use ad-hoc mode to peek at the sorts of information Ansible gathers. If you run the setup module, it will show you all the details from a remote system:


ansible host_or_groupname -b -m setup

That command will spew a ton of variables on your screen. You can scroll through them all to see the vast amount of information Ansible pulls from the host machines. In fact, it shows so much information, it can be overwhelming. You can filter the results:


ansible host_or_groupname -b -m setup -a "filter=*family*"

That should just return a single variable, ansible_os_family, which likely will be “Debian” or “RedHat”. When you start building more complex Ansible setups with playbooks, it’s possible to insert some logic and conditionals in order to use yum where appropriate and apt where the system is Debian-based. Really, the facts variables are incredibly useful and make building playbooks that much more exciting.

But, that’s for another article, because you’ve come to the end of the second installment. Your assignment for now is to get comfortable using Ansible in ad-hoc mode, doing one thing at a time. Most people think ad-hoc mode is just a stepping stone to more complex Ansible setups, but I disagree. The ability to configure hundreds of servers consistently and reliably with a single command is nothing to scoff at. I love making elaborate playbooks, but just as often, I’ll use an ad-hoc command in a situation that used to require me to ssh in to a bunch of servers to do simple tasks. Have fun with Ansible; it just gets more interesting from here!


Ansible Part 1: DevOps for the Non-Dev

I’ve written about and trained folks on various DevOps tools through the years, and although they’re awesome, it’s obvious that most of them are designed from the mind of a developer. There’s nothing wrong with that, because approaching configuration management programmatically is the whole point. Still, it wasn’t until I started playing with Ansible that I felt like it was something a sysadmin quickly would appreciate.

Part of that appreciation comes from the way Ansible communicates with its client computers—namely, via SSH. As sysadmins, you’re all very familiar with connecting to computers via SSH, so right from the word “go”, you have a better understanding of Ansible than the other alternatives.

With that in mind, I’m planning to write a few articles exploring how to take advantage of Ansible. It’s a great system, but when I was first exposed to it, it wasn’t clear how to start. It’s not that the learning curve is steep. In fact, if anything, the problem was that I didn’t really have that much to learn before starting to use Ansible, and that made it confusing. For example, if you don’t have to install an agent program (Ansible doesn’t have any software installed on the client computers), how do you start?

Getting to the Starting Line

The reason Ansible was so difficult for me at first is because it’s so flexible with how to configure the server/client relationship, I didn’t know what I was supposed to do. The truth is that Ansible doesn’t really care how you set up the SSH system; it will utilize whatever configuration you have. There are just a couple things to consider:

  1. Ansible needs to connect to the client computer via SSH.
  2. Once connected, Ansible needs to elevate privilege so it can configure the system, install packages and so on.

Unfortunately, those two considerations really open a can of worms. Connecting to a remote computer and elevating privilege is a scary thing to allow. For some reason, it feels less vulnerable when you simply install an agent on the remote computer and let Chef or Puppet handle privilege escalation. It’s not that Ansible is any less secure, but rather, it puts the security decisions in your hands.

Next I’m going to list a bunch of potential configurations, along with the pros and cons of each. This isn’t an exhaustive list, but it should get you thinking along the right lines for what will be ideal in your environment. I also should note that I’m not going to mention systems like Vagrant, because although Vagrant is wonderful for building a quick infrastructure for testing and developing, it’s so very different from a bunch of servers that the considerations are too dissimilar really to compare.

Some SSH Scenarios

1) SSHing into remote computer as root with password in Ansible config.

I started with a terrible idea. The “pros” of this setup is that it eliminates the need for privilege escalation, and there are no other user accounts required on the remote server. But, the cost for such convenience isn’t worth it. First, most systems won’t let you SSH in as root without changing the default configuration. Those default configurations are there because, quite frankly, it’s just a bad idea to allow the root user to connect remotely. Second, putting a root password in a plain-text configuration file on the Ansible machine is mortifying. Really, I mentioned this possibility because it is a possibility, but it’s one that should be avoided. Remember, Ansible allows you to configure the connection yourself, and it will let you do really dumb things. Please don’t.

2) SSHing into a remote computer as a regular user, using a password stored in the Ansible config.

An advantage of this scenario is that it doesn’t require much configuration of the clients. Most users are able to SSH in by default, so Ansible should be able to use credentials and log in fine. I personally dislike the idea of a password being stored in plain text in a configuration file, but at least it isn’t the root password. If you use this method, be sure to consider how privilege escalation will take place on the remote server. I know I haven’t talked about escalating privilege yet, but if you have a password in the config file, that same password likely will be used to gain sudo access. So with one slip, you’ve compromised not only the remote user’s account, but also potentially the entire system.

3) SSHing into a remote computer as a regular user, authenticating with a key pair that has an empty passphrase.

This eliminates storing passwords in a configuration file, at least for the logging in part of the process. Key pairs without passphrases aren’t ideal, but it’s something I often do in an environment like my house. On my internal network, I typically use a key pair without a passphrase to automate many things like cron jobs that require authentication. This isn’t the most secure option, because a compromised private key means unrestricted access to the remote user’s account, but I like it better than a password in a config file.
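As a concrete sketch (the file name and comment here are placeholders, not anything Ansible requires), creating such a key pair looks like this:

```shell
# Create an ed25519 key pair with an empty passphrase (-N "").
# A dedicated file name avoids clobbering your personal key.
ssh-keygen -t ed25519 -N "" -f ./ansible_key -C "ansible automation"

# The .pub half is what ssh-copy-id installs on each client.
ls ansible_key ansible_key.pub
```

You’d then point Ansible at the private key, for instance with the ansible_ssh_private_key_file inventory option or the --private-key command-line flag.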

4) SSHing into a remote computer as a regular user, authenticating with a key pair that is secured by a passphrase.

This is a very secure way of handling remote access, because it requires two different authentication factors: 1) the private key and 2) the passphrase to decrypt it. If you’re just running Ansible interactively, this might be the ideal setup. When you run a command, Ansible should prompt you for the private key’s passphrase, and then it’ll use the key pair to log in to the remote system. Yes, the same could be done by just using a standard password login and not specifying the password in the configuration file, but if you’re going to be typing a password on the command line anyway, why not add the layer of protection a key pair offers?

5) SSHing with a passphrase-protected key pair, but using ssh-agent to “unlock” the private key.

This doesn’t perfectly answer the question of unattended, automated Ansible commands, but it does make a fairly secure setup convenient as well. The ssh-agent program authenticates the passphrase one time and then uses that authentication to make future connections. When I’m using Ansible, this is the setup I’d like to be using. If I’m completely honest, I still usually use key pairs without passphrases, but that’s typically because I’m working on my home servers, not on anything prone to attack.
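A minimal sketch of that workflow follows. (The demo key is generated without a passphrase just so the commands run anywhere; in real use, you’d ssh-add your passphrase-protected key, such as ~/.ssh/id_ed25519, and ssh-add would prompt for the passphrase once.)

```shell
# Start an agent for this shell session and export its env variables.
eval "$(ssh-agent -s)"

# Create a throwaway demo key and add it; with a passphrase-protected
# key, ssh-add prompts once and the agent caches the unlocked key.
ssh-keygen -t ed25519 -N "" -f ./demo_key -q
ssh-add ./demo_key

# Confirm the key is loaded; ssh (and therefore Ansible) can now
# authenticate without prompting again.
ssh-add -l
```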

There are some other considerations to keep in mind when configuring your SSH environment. Perhaps you’re able to restrict the Ansible user (which is often your local user name) so it can log in only from a specific IP address. Perhaps your Ansible server can live in a different subnet, behind a strong firewall so its private keys are more difficult to access remotely. Maybe the Ansible server doesn’t have an SSH server installed on itself so there’s no incoming access at all. Again, one of the strengths of Ansible is that it uses the SSH protocol for communication, and it’s a protocol you’ve all had years to tweak into a system that works best in your environment. I’m not a big fan of proclaiming what the “best practice” is, because in reality, the best practice is to consider your environment and choose the setup that fits your situation the best.

Privilege Escalation

Once your Ansible server connects to its clients via SSH, it needs to be able to escalate privilege. If you chose option 1 above, you’re already root, and this is a moot point. But since no one chose option 1 (right?), you need to consider how a regular user on the client computer gains access. Ansible supports a wide variety of escalation systems, but in Linux, the most common options are sudo and su. As with SSH, there are a few situations to consider, although there are certainly other options.

1) Escalate privilege with su.

For Red Hat/CentOS users, the instinct might be to use su in order to gain system access. By default, those systems configure the root password during install, and to gain privileged access, you need to type it in. The problem with using su is that although it gives you total access to the remote system, it also gives you total access to the remote system. (Yes, that was sarcasm.) Also, the su program doesn’t have the ability to authenticate with key pairs, so the password either must be interactively typed or stored in the configuration file. And since it’s literally the root password, storing it in the config file should sound like a horrible idea, because it is.

2) Escalate privilege with sudo.

This is how Debian/Ubuntu systems are configured. A user in the correct group has access to sudo a command and execute it with root privileges. Out of the box, this still has the problem of password storage or interactive typing. Since storing the user’s password in the configuration file seems a little less horrible, I guess this is a step up from using su, but it still gives complete access to a system if the password is compromised. (After all, typing sudo su - will allow users to become root just as if they had the root password.)

3) Escalate privilege with sudo and configure NOPASSWD in the sudoers file.

Again, in my local environment, this is what I do. It’s not perfect, because it gives unrestricted root access to the user account and doesn’t require any passwords. But when I do this, and use SSH key pairs without passphrases, it allows me to automate Ansible commands easily. I’ll note again that although it is convenient, it is not a terribly secure idea.
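For illustration, assuming the remote user is named ansible (the user name and file are assumptions on my part), a drop-in file like this grants that unrestricted, passwordless access. Create it safely with sudo visudo -f /etc/sudoers.d/ansible:

```
# /etc/sudoers.d/ansible -- user name is an example
ansible ALL=(ALL) NOPASSWD: ALL
```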

4) Escalate privilege with sudo and configure NOPASSWD on specific executables.

This idea might be the best compromise of security and convenience. Basically, if you know what you plan to do with Ansible, you can give NOPASSWD privilege to the remote user for just those applications it will need to use. It might get a little confusing, since Ansible uses Python for lots of things, but with enough trial and error, you should be able to figure things out. It is more work, but does eliminate some of the glaring security holes.
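Sketching the idea with a hypothetical ansible user and a couple of example commands (the paths are illustrative and vary by distribution), the sudoers entry might look like:

```
# Passwordless sudo for specific executables only
ansible ALL=(ALL) NOPASSWD: /usr/bin/apt-get, /usr/bin/systemctl
```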

Implementing Your Plan

Once you decide how you’re going to handle Ansible authentication and privilege escalation, you need to set it up. After you become well versed at Ansible, you might be able to use the tool itself to help “bootstrap” new clients, but at first, it’s important to configure clients manually so you know what’s happening. It’s far better to automate a process you’re familiar with than to start with automation from the beginning.

I’ve written about SSH key pairs in the past, and there are countless articles online for setting them up. The short version, from your Ansible computer, looks something like this:


# ssh-keygen
# ssh-copy-id -i ~/.ssh/id_rsa.pub remoteuser@remote.computer.ip
# ssh remoteuser@remote.computer.ip

If you’ve chosen to use no passphrase when creating your key pairs, that last step should get you into the remote computer without typing a password or passphrase.

In order to set up privilege escalation in sudo, you’ll need to edit the sudoers file. You shouldn’t edit the file directly, but rather use:


# sudo visudo

This will open the sudoers file and allow you to make changes safely (it error-checks when you save, so you don’t accidentally lock yourself out with a typo). There are examples in the file, so you should be able to figure out how to assign the exact privileges you want.

Once it’s all configured, you should test it manually before bringing Ansible into the picture. Try SSHing to the remote client, and then try escalating privilege using whatever methods you’ve chosen. Once you have configured the way you’ll connect, it’s time to install Ansible.

Installing Ansible

Since the Ansible program gets installed only on the single computer, it’s not a big chore to get going. Red Hat/Ubuntu systems do package installs a bit differently, but neither is difficult.

In Red Hat/CentOS, first enable the EPEL repository:


sudo yum install epel-release

Then install Ansible:


sudo yum install ansible

In Ubuntu, first enable the Ansible PPA:


sudo apt-add-repository ppa:ansible/ansible
(press ENTER to accept the key and add the repo)

Then install Ansible:


sudo apt-get update
sudo apt-get install ansible

Configuring Ansible Hosts File

The Ansible system has no way of knowing which clients you want it to control unless you give it a list of computers. That list is very simple, and it looks something like this:


# file /etc/ansible/hosts

[webservers]

blogserver ansible_host=192.168.1.5
wikiserver ansible_host=192.168.1.10

[dbservers]

mysql_1 ansible_host=192.168.1.22
pgsql_1 ansible_host=192.168.1.23

The bracketed sections are specifying groups. Individual hosts can be listed in multiple groups, and Ansible can refer either to individual hosts or groups. This is also the configuration file where things like plain-text passwords would be stored, if that’s the sort of setup you’ve planned. Each line in the configuration file configures a single host, and you can add multiple declarations after the ansible_host statement. Some useful options are:


ansible_ssh_pass
ansible_become
ansible_become_method
ansible_become_user
ansible_become_pass
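For example, a host entry combining several of these options might look like the following (the passwords are obviously placeholders, and storing them in plain text carries the risks discussed above):

```
blogserver ansible_host=192.168.1.5 ansible_ssh_pass=changeme ansible_become=yes ansible_become_method=sudo ansible_become_pass=changeme
```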

The Ansible Vault

I also should note that although the setup is more complex, and not something you’ll likely do during your first foray into the world of Ansible, the program does offer a way to encrypt passwords in a vault. Once you’re familiar with Ansible and you want to put it into production, storing those passwords in an encrypted Ansible vault is ideal. But in the spirit of learning to crawl before you walk, I recommend starting in a non-production environment and using passwordless methods at first.

Testing Your System

Finally, you should test your system to make sure your clients are connecting. The ping test will make sure the Ansible computer can contact each host:


ansible -m ping all

After running, you should see a message for each defined host showing a ping: pong if the ping was successful. Despite the name, this does more than test network connectivity: the ping module actually logs in over SSH and verifies a usable Python on each remote host, so it exercises your authentication setup too (though not privilege escalation). To test running an actual command, try this:
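A successful response looks something like this for each host (formatting varies slightly between Ansible versions):

```
blogserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```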


ansible -m shell -a 'uptime' webservers

You should see the results of the uptime command for each host in the webservers group.

In a future article, I plan to start digging into Ansible’s ability to manage the remote computers. I’ll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn’t get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. Check out the Ansible docs for more help if you get stuck.

If you’d like more direct training on Ansible (and other stuff) from yours truly, visit me at my DayJob as a trainer for CBT Nuggets. You can get a full week free if you head over to https://cbt.gg/shawnp0wers and sign up for a trial!

The 4 Part Series on Ansible includes:
Part 1 – DevOps for the Non-Dev
Part 2 – Making Things Happen
Part 3 – Playbooks
Part 4 – Putting it All Together

Have a Plan for Netplan

Ubuntu changed networking. Embrace the YAML.

If I’m being completely honest, I still dislike the switch from eth0, eth1, eth2 to names like enp3s0, enp4s0, enp5s0. I’ve learned to accept it and mutter to myself while I type in unfamiliar interface names. Then I installed the new LTS version of Ubuntu and typed vi /etc/network/interfaces. Yikes. After a technological lifetime of entering my server’s IP information in a simple text file, that’s no longer how things are done. Sigh. The good news is that while figuring out Netplan for both desktop and server environments, I fixed a nagging DNS issue I’ve had for years (more on that later).

The Basics of Netplan

The old way of configuring Debian-based network interfaces was based on the ifupdown package. The new default is called Netplan, and although it’s not terribly difficult to use, it’s drastically different. Netplan is sort of the interface used to configure the back-end dæmons that actually configure the interfaces. Right now, the back ends supported are NetworkManager and networkd.

If you tell Netplan to use NetworkManager, all interface configuration control is handed off to the GUI interface on the desktop. The NetworkManager program itself hasn’t changed; it’s the same GUI-based interface configuration system you’ve likely used for years.

If you tell Netplan to use networkd, systemd itself handles the interface configurations. Configuration is still done with Netplan files, but once “applied”, Netplan creates the back-end configurations systemd requires. The Netplan files are vastly different from the old /etc/network/interfaces file, but they use YAML syntax, and they’re pretty easy to figure out.

The Desktop and DNS

If you install a GUI version of Ubuntu, Netplan is configured with NetworkManager as the back end by default. Your system should get IP information via DHCP or static entries you add via GUI. This is usually not an issue, but I’ve had a terrible time with my split-DNS setup and systemd-resolved. I’m sure there is a magical combination of configuration files that will make things work, but I’ve spent a lot of time, and it always behaves a little oddly. With my internal DNS server resolving domain names differently from external DNS servers (that is, split-DNS), I get random lookup failures. Sometimes ping will resolve, but dig will not. Sometimes the internal A record will resolve, but a CNAME will not. Sometimes I get resolution from an external DNS server (from the internet), even though I never configure anything other than the internal DNS!

I decided to disable systemd-resolved. That has the potential to break DNS lookups in a VPN, but I haven’t had an issue with that. With resolved handling DNS information, the /etc/resolv.conf file points to 127.0.0.53 as the nameserver. Disabling systemd-resolved will stop the automatic creation of the file. Thankfully, NetworkManager itself can handle the creation and modification of /etc/resolv.conf. Once I make that change, I no longer have an issue with split-DNS resolution. It’s a three-step process:

  1. Do sudo systemctl disable systemd-resolved.service.
  2. Then sudo rm /etc/resolv.conf (get rid of the symlink).
  3. Edit the /etc/NetworkManager/NetworkManager.conf file, and in the [main] section, add a line that reads dns=default.

Once those steps are complete, NetworkManager itself will create the /etc/resolv.conf file, and the DNS server supplied via DHCP or static entry will be used instead of a 127.0.0.53 entry. I’m not sure why the resolved dæmon incorrectly resolves internal addresses for me, but the above method has been foolproof, even when switching between networks with my laptop.
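After step 3, the relevant portion of /etc/NetworkManager/NetworkManager.conf looks something like this (the other lines in the file vary by install and are left out here):

```
[main]
dns=default
```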

Netplan CLI Configuration

If Ubuntu is installed in server mode, it is almost certainly configured to use networkd as the back end. To check, have a look at the YAML file in the /etc/netplan folder (the name varies; it might be something like 01-netcfg.yaml or 50-cloud-init.yaml). The renderer should be set to networkd in order to use the systemd-networkd back end. The file should look something like this:


network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: true

Important note: remember that with YAML files, whitespace matters, so the indentation is important. It’s also very important to remember that after making any changes, you need to run sudo netplan apply so the back-end configuration files are populated.

The default renderer is networkd, so it’s possible you won’t have that line in your configuration file. It’s also possible your configuration file will be named something different in the /etc/netplan folder. All .yaml files in that folder are read, so it doesn’t matter what the file is called as long as it ends with .yaml. Static configurations are fairly simple to set up:


network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      addresses:
        - 192.168.1.10/24
        - 10.10.10.10/16
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]

Notice I’ve assigned multiple IP addresses to the interface. Netplan does not support virtual interfaces like enp3s0:0; instead, multiple IP addresses can be assigned to a single interface.

Unfortunately, networkd doesn’t create an /etc/resolv.conf file if you disable the resolved dæmon. If you have problems with split-DNS on a headless computer, the best solution I’ve come up with is to disable systemd-resolved and then manually create an /etc/resolv.conf file. Since headless computers don’t usually move around as much as laptops, it’s likely the /etc/resolv.conf file won’t need to be changed. Still, I wish networkd had an option to manage the resolv.conf file the same way NetworkManager does.
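A hand-maintained file for that situation is only a couple of lines. The addresses here are examples for a split-DNS network with an internal server:

```
# /etc/resolv.conf -- maintained by hand; addresses are examples
nameserver 192.168.1.1
search example.lan
```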

Advanced Network Configurations

The configuration formats are different, but it’s still possible to do more advanced network configurations with Netplan:

Bonding:


network:
  version: 2
  renderer: networkd
  bonds:
    bond0:
      dhcp4: yes
      interfaces:
        - enp2s0
        - enp3s0
      parameters:
        mode: active-backup
        primary: enp2s0

The various bonding modes (balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb and balance-alb) are supported.

Bridging:


network:
  version: 2
  renderer: networkd
  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - enp4s0
        - enp3s0

Bridging is even simpler to set up. This configuration creates a bridge device using the two interfaces listed. The device (br0) gets address information via DHCP.

CLI Networking Commands

If you’re a crusty old sysadmin like me, you likely type ifconfig to see IP information without even thinking. Unfortunately, those tools are not usually installed by default. This isn’t actually the fault of Ubuntu and Netplan; the old ifconfig toolset has been deprecated. If you want to use the old ifconfig tool, you can install the package:


sudo apt install net-tools

But if you want to do things the “correct” way, the new ip tool is what you should use. Here are some equivalents of things I commonly do with ifconfig:

Show network interface information.

Old way:


ifconfig

New way:

ip address show

(Or you can just do ip a, which is actually less typing than ifconfig.)

Bring interface up.

Old way:

ifconfig enp3s0 up

New way:

ip link set enp3s0 up

Assign IP address.

Old way:

ifconfig enp3s0 192.168.1.22

New way:

ip address add 192.168.1.22 dev enp3s0

Assign complete IP information.

Old way:


ifconfig enp3s0 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255

New way:


ip address add 192.168.1.22/24 broadcast 192.168.1.255 dev enp3s0

Add alias interface.

Old way:


ifconfig enp3s0:0 192.168.100.100/24

New way:


ip address add 192.168.100.100/24 dev enp3s0 label enp3s0:0

Show the routing table.

Old way:


route

New way:


ip route show

Add route.

Old way:


route add -net 192.168.55.0/24 dev enp4s0

New way:


ip route add 192.168.55.0/24 dev enp4s0

Old Dogs and New Tricks

I hated Netplan when I first installed Ubuntu 18.04. In fact, on the particular server I was installing, I actually started over and installed 16.04 because it was “comfortable”. After a while, curiosity got the better of me, and I investigated the changes. I’m still more comfortable with the old /etc/network/interfaces file, but I have to admit, Netplan makes a little more sense. There is a single “front end” for configuring networks, and it uses different back ends for the heavy lifting. Right now, the only back ends are the GUI NetworkManager and the systemd-networkd dæmon. With the modular system, however, that could change someday without the need to learn a new way of configuring interfaces. A simple change to the renderer line would send the configuration information to a new back end.

With regard to the new command-line networking tool (ip vs. ifconfig), it really behaves more like other network devices (routers and so on), so that’s probably a good change as well. As technologists, we need to be ready and eager to learn new things. If we weren’t always trying the next best thing, we’d all be configuring Trumpet Winsock to dial in to the internet on our Windows 95 machines. I’m glad I tried that new Linux thing, and while it wasn’t quite as dramatic, I’m glad I tried Netplan as well!

If you’re interested in learning from me directly, my day job is a Linux trainer at CBT Nuggets. There’s TONS of training available, on Linux, Cisco, Microsoft, etc., and you get a full week free when you sign up. It’s like drinking from the firehose of tech knowledge! https://cbt.gg/shawnp0wers

Password Managers. Yes You Need One.

If you can remember all of your passwords, they’re not good passwords.

I used to teach people how to create “good” passwords. Those passwords needed to be lengthy, hard to guess and easy to remember. There were lots of tricks to make your passwords better, and for years, that was enough.

That’s not enough anymore.

It seems that another data breach happens almost daily, exposing sensitive information for millions of users, which means you need to have separate, secure passwords for each site and service you use. If you use the same password for any two sites, you’re making yourself vulnerable if any single database gets compromised.

There’s a much bigger conversation to be had regarding the best way to protect data. Is the “password” outdated? Should we have something better by now? Granted, there is two-factor authentication, which is a great way to help increase the security on accounts. But although passwords remain the main method for protecting accounts and data, there needs to be a better way to handle them—that’s where password managers come into play.

The Best Password Manager

No, I’m not burying the lede by skipping to all the reviews. As Doc Searls, Katherine Druckman and I discussed in Episode 8 of the Linux Journal Podcast, the best password manager is the one you use. It may seem like a cheesy thing to say, but it’s a powerful truth. If it’s more complicated to use a password manager than it is to re-use the same set of passwords on multiple sites, many people will just choose the easy way.

Sure, some people are geeky enough to use a password manager at any cost. They understand the value of privacy, understand security, and they take their data very seriously. But for the vast majority of people, the path of least resistance is the way to go. Heck, I’m guilty of that myself in many cases. I have a Keurig coffee machine, not because the coffee is better, but because it’s more convenient. If you’ve ever eaten a Hot Pocket instead of cooking a healthy meal, you can understand the mindset that causes people to make poor password choices. If the goal is having smart passwords, it needs to be easier to use smart passwords than to type “password123” everywhere.

The Reason It Might Work Now

Mobile devices have become the way most people do most things online. Heck, Elon Musk said that we’ve become cybernetic beings, it’s just that the bandwidth to our cybernetic components is really slow (that is, typing on our phones). It’s always been possible to have some sort of password management app on your phone, but until recently, the operating systems didn’t integrate with password managers. That meant you’d have to go from one app into your password manager, look up the site/app, copy the password, switch back to the app, paste the password, and then hope you got it right. Those days are thankfully in the past.

Both recent Android systems and iOS (Apple, not Cisco) versions allow third-party password managers to integrate directly into the data entry system. That means when you’re using a keyboard to type in a login or password, in any app, you can pull in a password manager and enter the data directly with no app switching. Plus, if you have biometrics enabled, most of the time you can unlock your password database with a fingerprint or a view of your face. (For those concerned about the security of biometric-only authentication, it can, of course, be turned off, but remember how important ease of use is for most people!)

So although password managers have been around for years and years, I truly believe it’s only with the advent of their integration into the main operating system of mobile devices that people will actually be able to use them widely. Not all Linux users will agree with me, and not all people in general will want their passwords available in such an easy manner. For the purpose of this article, however, a mobile option is a necessity.

A Tale of Two Concepts

Remember when “the cloud” was a buzzword that didn’t really mean anything specific, but people used it all the time anyway? Well, now it very clearly means servers or services run on computers you don’t own, in data centers you don’t control. The “cloud” is both awesome and terrible. When it comes to storing password data, many people are rightfully concerned about cloud storage. When it comes to password managers, there are basically two types: the kind that stores everything in a local database file and those that store the database in the cloud.

The cloud-based storage isn’t as unsettling as it seems. When the database is stored on the “servers in the sky”, it’s encrypted before it leaves your device. Those companies don’t have access to your actual passwords, just the highly encrypted database that holds them—as long as you trust the companies to be honest about such things. For what it’s worth, I do think the major companies are fairly trustworthy about keeping their grubby mitts off your actual passwords. Still, with the closed-source options, a level of trust is required that some people just aren’t willing to give. I’m going to look at password managers from both camps.

The Contenders

I picked five(-ish) password managers for this review. Please realize there are dozens and dozens of very usable, very secure password managers for Linux. Some are command-line only. Some are just basic PGP encryption of text files containing user name/password pairs. Today’s review is not meant to be all-encompassing; it’s meant to be helpful for average Linux users who want to handle their passwords better than they currently do. I say five(-ish), because one of the entries has multiple versions. The list is:

  1. KeePass/KeePassX/KeePassXC: this is the one(-ish) that has multiple variations on the same theme. More details later.
  2. 1Password.
  3. LastPass.
  4. Bitwarden.
  5. Browser.

I highlight each of these in this article, in no particular order.

Your Browser’s Password Database

Most people don’t consider using their browser as a password manager a good idea. I’m one of those people. Depending on the browser, the version and the settings you choose, your passwords might not even be encrypted. There is also the problem of using those passwords in other apps. Granted, if you use Chrome, your Android phone likely will be able to access the passwords for you to use in other apps, but I’m simply not convinced the browser is the best place to store your passwords.

I’m sure the password storage feature of modern browsers is more secure than in the past, but a browser’s main function isn’t to secure your passwords, so I wouldn’t trust it to do so. I mention this option because it’s installed by default with every browser. It’s probably the most widely used option, and that breaks my heart. It’s too easy to click “save my password” and conveniently have your password filled in the next time you visit.

Is using the browser’s “save password” function better than using nothing at all? Maybe. It does allow people to use different passwords, trusting the browser to remember them. But, that’s about it. I’m sure the latest browsers have the option to secure the passwords a bit, but it’s not that way by default. I know this, because when I sit at my wife’s computer, I simply start her browser (Chrome), and all her passwords are filled in for me when I visit various websites. They’ve almost made it too easy to use poor security practices. The only hope is to have better options that are even easier—and I think we actually do. Keep reading!

The KeePass Kraziness

First off, these password managers are the ones that use a local, non-cloud-based database for storing passwords. If the thought of your encrypted passwords living on someone else’s servers offends your sensibilities, this is probably the best choice for you. And it is a really good choice, whichever flavor you pick.

The skinny on the various programs that share similar names is that originally, there was KeePass. It didn’t have a Linux version, so there was another program, KeePassX, that used an identical (and fully compatible) database. KeePassX runs natively on Linux, along with the other major OSes. To complicate issues, KeePass then released a Linux version, which runs natively, but it uses Mono libraries. It runs, and it runs fine, but Mono is a bit kludgy on Linux, so most folks still used KeePassX. Then KeePassXC came around, because the KeePassX program was getting a little long in the tooth, and it hadn’t been updated in a long time. So now, there are three programs, all of which work natively on Linux, and all of which are perfectly acceptable programs to use. I prefer KeePassXC (Figure 1), but only because it seems to be most actively developed. The good news is, all three programs can use the exact same database file. Really. If there is a single ray of sunshine on a messy situation, it’s that.


Figure 1. KeePassXC has a friendly, native Linux interface.

KeePass(X/XC) Features:

  • Local database file, with no syncing mechanism.
  • Database can be synced by a third party (such as Dropbox).
  • Supports master password and/or keyfile unlocking.
  • Very nice password generator (Figure 2).
  • Secure localhost-only browser integration (KeePassHTTP).

KeePass(X/XC) Pros:

  • No cloud storage.
  • Command-line interface included.
  • 2FA abilities (YubiKey).
  • Open source.
  • No “premium” features, everything is free.

KeePass(X/XC) Cons:

  • No cloud storage (yes, it’s a pro and a con, depending).
  • Brand confusion with multiple variations.
  • Requires third-party Android/iOS app for mobile use.
  • More complicated than cloud-based alternatives (file to sync/copy).

Figure 2. The KeePassXC password generator is awesome. I don’t even use KeePassXC for my password manager, but I still like the generator!

The KeePass family of password managers is arguably the most open-source-minded option of those I cover here. Depending on the user to handle syncing/copying the database, rather than on an unknown third party to store the data, has a traditional Linux feel. For those folks who are most concerned about their data integrity, a KeePass database is probably the best option. Thankfully, due to third-party tools like KeePass2Droid (for Android) and MiniKeePass/KyPass for iOS, it’s possible to use your database on mobile devices as well. In fact, most apps handle syncing your database for you.

Bitwarden

I didn’t know the Bitwarden password manager even existed until we did a Twitter poll asking what password managers LJ readers used. I have to admit, it’s an impressive system, and it ticks almost all the “feel good” boxes Linux users would want (Figure 3). Not only is it open source, but also the non-premium offering is a complete system. Yes, there is a premium option for $10/year, but the non-paid version isn’t crippled in any way.


Figure 3. Bitwarden is very well designed, and with its open-source nature, it’s hard to beat.

Bitwarden does store your data in its own cloud servers, but since the software is open source, you can examine the code to make sure the company isn’t doing anything underhanded. Bitwarden also has its own apps for Android/iOS and extensions for all major browsers. There’s no need to use a third-party tool. In fact, it even includes command-line tools for those folks who want to access the database in a text-only environment.

Bitwarden Features:

  • Open-source.
  • Cloud-based storage.
  • Decent password generator.
  • Native apps for Linux, Windows, Mac, Android and iOS.
  • Browser extensions for all major browsers.
  • Options to store logins, secure notes, credit cards and so on.

Bitwarden Pros:

  • One developer for all apps.
  • Open-source!
  • Cloud-based access.
  • Works offline if the “cloud” is unavailable.
  • Free version isn’t crippled.
  • Browser plugin works very well.

Bitwarden Cons:

  • Database is stored in the cloud (again, it’s a pro and a con, depending).
  • Some 2FA options require the Premium version.

Bitwarden Premium Version:

  • $10/year.
  • Additional 2FA options.
  • 1GB encrypted storage.

I’ll admit, Bitwarden is very, very impressive. If I had to pick a personal favorite, it probably would be this one. I’m already using a different option, and I’m happy with it, but if I were starting from scratch, I’d probably choose Bitwarden.

1Password

1Password is a widely used program for password management, but honestly, I’m not sure why. Don’t get me wrong; it works well, and it has great features. The problem is that I can’t find anything it offers over the alternatives, and there isn’t a free option at all.

There’s also no native Linux application, but the 1PasswordX browser extension works well under Linux, and it’s user-friendly enough to use for things other than browser login needs. Still, although I don’t begrudge the company charging a fee for the service, the alternatives offer significant services for free, and that’s hard to beat. Finally, 1Password utilizes a “secret key” that’s required on each device before you can log in. Although it adds a layer of security, in practice it’s a bit of a pain to set up on each device.

1Password Features:

  • Cloud-based storage.
  • Non-login data encryption (Figure 4).
  • Printable “emergency kit” for account recovery.
  • Cross-platform browser extension.
  • Offline access.

1Password Pros:

  • Easy-to-use interface.
  • Very good browser integration.

1Password Cons:

  • $3/month, no free features.
  • Secret-key system can be cumbersome.
  • No native Linux app.
  • Proprietary, closed-source code.

1Password Premium Features:

  • All features require a monthly subscription.
""

Figure 4. 1Password has a great interface, and it stores lots of data.

If there weren’t any other password managers out there, 1Password would be incredible. Unfortunately for the 1Password company, there are other options, several of which are at least as good. I will admit, I really liked the browser extension’s interface, and it handled inserting login information into authentication fields very well. I’m not convinced it’s enough for the premium price, however, especially since there isn’t a free option at all.

LastPass

Okay, first I feel I should admit that LastPass is the password manager I use (Figure 5). As I mentioned previously, if I were to start over from scratch, I’d probably choose Bitwarden. That said, LastPass keeps getting better, and its integration with browsers, mobile devices and native operating systems is pretty great.

""

Figure 5. I seldom use anything other than LastPass’s browser extension, unless I’m on my mobile device, but the app looks very similar.

LastPass offers a free tier and a paid tier. Not too long ago, you had to pay for the premium service ($2/month) in order to use it on a mobile device. Recently, however, LastPass opened mobile device syncing and integration into the completely free offering. That is significant, because it brings the free version to the same level as the free version of Bitwarden. (I suspect perhaps Bitwarden is the reason LastPass changed its free tier, but I have no way of knowing.)

LastPass Features:

  • Cloud-based storage.
  • Native apps for Linux, iOS and Android.
  • 2FA.
  • Offline access.
  • Cross-platform browser extension.

LastPass Pros:

  • Cloud-based storage.
  • Very robust free offering.
  • Smoothest browser-based password saving (in my experience).

LastPass Cons:

  • Data stored in the cloud (yes, it’s a pro and a con, depending).
  • Rumored to have poor support (I’ve never needed it).
  • Proprietary, closed-source code.

LastPass Premium:

  • $2/month.
  • Gives 1GB online file storage.
  • Provides the ability to share passwords.
  • Enhanced 2FA possibilities.
  • Emergency access granting (Figure 6).
""

Figure 6. This is sort of a “dead man’s switch” for emergency access. It allows you to grant emergency access to someone, with the ability to revoke that access before it takes effect. Pretty neat!

LastPass is the only option I can give an opinion on based on extended experience. I did try each option listed here for a few days, and honestly, each one was perfectly acceptable. LastPass has been rock-solid for me, and even though it’s not open source, it does work well across multiple platforms.

The Winner?

Honestly, with the options available, especially those highlighted today, it’s hard to lose when picking a password manager. I picked what I consider the top managers and gave an overview of each. There are other, more obscure password managers, and some options that are Linux-only, but I decided to look at options that will work regardless of what platform you find yourself on now or in the future. Once you pick a solution, migrating is a bit of a pain, so starting with something flexible is ideal.

If you’re concerned about someone else controlling your data (even if it’s encrypted), the KeePass/KeePassX/KeePassXC family is probably your best bet. If you don’t mind trusting others with your data-syncing, LastPass or Bitwarden probably will be ideal. I suppose if you don’t trust “free” products, or if you just really like the layout of 1Password, it’s a viable option. And I guess, in a pinch, using browser password management is better than nothing. But please, be sure the data is encrypted and password-protected.

Finally, even if none of these options are something you’d use on a daily basis, consider recommending one to someone you care about. Keeping track of passwords in a secure, sync-able database is a huge step in living a more secure online lifestyle. Now that mobile devices are taken seriously in the password management world, password managers make sense for everyone—even your non-techie friends and family.

Resources

[NOTE: This post was originally posted on the Linux Journal website. Since Linux Journal is now defunct, and authors own their content, I’m reposting here.]

Today, I Broke My Brain

Some days suck. Today, for instance.

I don’t talk much about mental illness. Not because of any stigma against it, or because I’m ashamed of having and handling mental illness, but rather because I just don’t have much to say on the issue. My car accident (see link above) sparked some serious brain issues for me, including anxiety, depression, OCD, and some symptoms that I’m not even sure what to call.

Today is a bad day.

I don’t have many bad days anymore. I’ve been on a medication for over a decade that works well to keep my brain in check. I’ve lived through enough rough times that I can look back, see patterns, and know I’m not actually going crazy, and that this too will pass. That doesn’t make today better, really, but it does give me hope that tomorrow will be.

Today, I went grocery shopping with Donna. The store was busy. And really, that was it. My brain broke. For me, that means I was overwhelmed, for no really good reason. It manifests for me in a pretty predictable fashion:

  • I look scared and bewildered.
  • I can’t discern when people are talking to me over the din of background noise.
  • I stutter. (That’s really the one that gives it away to my loved ones. I can usually fake ’em out a bit, but stuttering is hard to hide.)
  • I get confused easily. This is mainly due to the background noise thing.
  • I get VERY frustrated with myself, my stupid brain, my inability to be an effective family member, and my inability to pull myself out of it.
  • My hands shake.
  • I get odd facial twitches.
  • The worst part is, inside my head, I’m perfectly fine. I can think, I can reason — but it’s like I’m trying to function with 1,000 people screaming directions at me, and a layer of cotton between me and life.

I’ll be fine tomorrow. Really, I will. And my family is incredibly supportive. They aren’t frustrated with me. They might be frustrated FOR me, but that’s different altogether. (It’s also not pity, for which I’m grateful.) Unfortunately, Sunday night is our young adult ministry, which means we’re feeding 20-30 college-aged people, along with coordinating music and discussion. I won’t be any help, which means Donna will have to do twice the amount of work. And THAT is the most frustrating part: being a burden. (If Donna reads this, she’ll insist I’m not a burden, and I get it; she’s not upset with me. But really, it’s a burden we share, and a burden nonetheless.)

ANYWAY, I post lots of silly photos. I share funny anecdotes. I smile a lot on the Internet. In my attempt to be as real as possible, I figured it only fair to share that sometimes I have bad days too. And that’s OK. Just think good thoughts at my wife. She totally deserves it today.