Jeff Rankin Project Miscellany & Blog

Blog

Thoughts on Star Trek

By Jeff Rankin, Sun May 27 2018

Some thoughts on Star Trek: Discovery—which, having watched a few episodes, I didn't enjoy—and Star Trek in general. Star Trek is great because it can be a vehicle for many different kinds of stories: political/social commentary, action/adventure, comedy, sci-fi concepts (of course), or a combination of any of these things. But for Star Trek to be Star Trek, there are a number of foundational elements required:

Gene Roddenberry's vision was one of a united humanity working together to solve problems. The stories weren't always about that, of course, but the overall feel of the show was hopeful. The ensemble cast was an expression of humanity working together, with characters using their individual personalities and skills to contribute to the story. The episodic nature meant that lots of different stories could be told, rather than one overarching story resolved over several episodes or a season.

Oh, and remember fun? OK, perhaps fun isn't foundational. Nonetheless, this is entertainment, so it's probably a good idea to make the stories fun now and then. Fun is the primary reason I've enjoyed The Orville, which, while certainly not perfect, meets all the foundational elements of Star Trek.

Self-referential design (aka why usability testing is important)

By Jeff Rankin, Sat May 19 2018

Some notes on why self-referential design is a problem, and why usability testing is important.

Notes:

How do you know what to build? How do you know it's right?

By Jeff Rankin, Mon May 7 2018

I had a conversation recently with analysts at a Columbus-based consulting agency regarding user research and usability evaluation. I was disappointed—but not surprised—to learn the agency didn't engage in either despite having a team of designers on hand. This isn't the first time I've encountered this phenomenon, and I wanted to put some questions and comments together.

Some Basic Questions

So, assuming a lack of user research and/or usability evaluation for a project, these questions occur to me:

The client and/or product owner hopefully (but not always!) has a vision in mind, but how well-defined and realistic is this vision? How much research has been done? How many people have they spoken to about the product or service? Who are the competitors? If the amount of research has been minimal, the consultant needs to use their expertise to help refine the vision by conducting some user research. It doesn't have to take a long time or be expensive (check out our user research cheat sheet for more information).

What if there's significant ambiguity about what's being built (there's always some)? Yes, the team can get together, ideate, and generate some stories. But are these stories anything other than assumptions if there's no data to back them up? So, take time to gather some data to help inform ideation/storytelling sessions.

In the midst of design, the team should take the initiative and conduct informal usability evaluation during the design sprints (or at some point that makes sense for the project). Just like gathering data for ideation/storytelling sessions, it doesn't have to be expensive in terms of time and resources. Formative sessions with 4 to 6 users (actual users strongly preferred), held over a day or two, should provide enough data to ensure that the design is on course. Employ techniques with some level of rigor: internal tests, 5-second tests, and similar techniques return questionable results in my experience.

A Bigger Point

We're the experts and shouldn't assume the client knows exactly what needs to be built. An interesting product/service concept needs to be developed, and we owe it to our clients to utilize the tools and techniques at our command. Part of this may involve educating the client (and perhaps internal people who manage the client relationship) about what needs to be done and why it will benefit the project. Instances of great ambiguity may involve forging ahead and doing the work you know needs to be done (sometimes it's better to beg forgiveness than ask permission)!

The LFS201 Course & LFCS Exam

By Jeff Rankin, Fri Dec 1 2017

Note: This article is based upon my Nov 16, 2017 presentation to the DMA Linux Users Group.

Late last summer I decided to take a class in Linux system administration and get certified. I've been using Unix/Linux since the early '90s and wanted to augment my existing skills and pick up some new ones. It probably seems strange for a designer to be interested in Unix, but I've always appreciated its power, modularity, and the depth of its design.

There are a number of system admin-oriented courses and certifications that I considered (Red Hat, CompTIA, Linux Foundation). I ended up choosing the LFS201 course and LFCS exam offered by the Linux Foundation. The Linux Foundation's neutrality with respect to Linux distributions helped in making that decision. The wide coverage of the course was appealing as well: I wanted to start with something more general and then take more topical (perhaps distribution-specific) courses later. Finally, Linus Torvalds is listed among the fellows of the Linux Foundation. Registration for both the course and exam cost $499; the exam included a free retake.

To back up a little, I first took the free edX LFS101x (Introduction to Linux) course. It was genuinely useful and I highly recommend it as either an introduction or refresher.

Here are my thoughts on both the LFS201 course and LFCS exam content.

The LFS201 Course

Whew, that's a lot of content! At the time I took the course, around September 2017, it consisted of 42 chapters. Coverage included (just the high points here) filesystem layout, processes, package management, system monitoring, process monitoring, memory monitoring and tuning, I/O monitoring and tuning, disk partitioning, disk encryption, LVM, RAID, user and group management, file permissions and management, PAM, network configuration, firewall, system startup and shutdown, GRUB, backup and recovery, basic troubleshooting, and system rescue. Exercises and labs conclude most chapters. In a word: exhaustive.

It took about a month for me to get through all the course material (I reviewed some of it). I found some chapters more useful and informative than others, including those covering LVM, disk management, user/group management, and GRUB. Chapters covering system monitoring (processes, memory, CPU, I/O) and tuning were less useful. I was looking forward to learning more about these topics, but the course presentation was very dry. The content would've benefited a great deal from discussion of real-world scenarios and the techniques/tools that can be used to fix the issues that come up. Put simply: make it real. Finally, one positive aspect of all the content was the command-line orientation. Graphical tools were discussed occasionally, but never as the primary means to perform a task.
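
As a taste of the level of detail the user/group and file-permission chapters get into, here's a minimal sketch of my own (it isn't course material, and the course itself works at the command line rather than in Python) that uses Python's standard pwd, grp, and stat modules to pull out the same ownership and permission information those chapters deal with:

    # Illustrative sketch only (Unix/Linux): resolve a file's owner, group, and
    # symbolic permissions -- roughly what `ls -l` shows for each entry.
    import grp
    import os
    import pwd
    import stat

    def describe(path):
        st = os.stat(path)
        owner = pwd.getpwuid(st.st_uid).pw_name   # numeric UID -> username
        group = grp.getgrgid(st.st_gid).gr_name   # numeric GID -> group name
        mode = stat.filemode(st.st_mode)          # e.g. '-rw-r--r--'
        print(f"{mode} {owner}:{group} {path}")

    describe("/etc/passwd")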

A side issue concerns the length of the LFCS Domains and Competencies document (V 2.16). This document lists the knowledge and skills the Linux Foundation believes sysadmins should possess. V 2.16 is very long and contains some odd items. This was a problem for me because it created uncertainty about what would appear on the exam, especially for items not covered in LFS201 (I don't expect a 1:1 relationship between the domains/competencies, the course, and the exam, but the correlation should be reasonably strong). Fortunately, this has been addressed, and the latest domains and competencies document has been streamlined to make more sense.

The LFCS Exam

While I can't get into specifics about the exam content, I can share some of the basics and offer some pointers. The exam consists of 25 questions and you're given 2 hours to complete it (enough time in my experience). It's conducted through a web browser (Google Chrome is required) using a plug-in that enables a terminal session to a CentOS installation (I chose CentOS for the exam; Ubuntu is offered as well). I was using a MacBook Pro connected to a Thunderbolt display. It's a terminal session only; no graphical tools are used. A test administrator monitors the session through your webcam. You'll be asked to remove everything from your desk and from the wall you're facing. You're not permitted to have anything to eat or drink during the session. From what I understand, you can request a brief break during the test session.

Exam Tips

Final Thoughts

Critique of some of the course content aside, the course was thorough (almost to a fault) and well-done. I felt the exam was fair and support was great. For me, it was worth the investment. I'll also note that you can purchase the exam alone for $300 and use alternate resources for study (see the references below) instead of taking the LFS201 course.

Note: I'll be taking the LFS211 course (Linux Networking and Administration) and LFCE exam as well.

References

Changing a design while it's being tested: good idea?

By Jeff Rankin, Mon Jul 17 2017

A while back I got into an exchange with another designer on Twitter regarding his conduct of a usability test. It started when he "tweeted" this (these aren't the exact words, but I've captured the gist): "I'm testing with users and updating the design as issues are uncovered."

This surprised me: it didn't seem like a good idea from a methodological perspective, yet here was a fairly well-known designer (who'd written a book or two by the time of our exchange) talking about it like it was business as usual.

I replied something to the effect of "Shouldn't you change the design after you've run all the sessions (and therefore collected all the data)?" We had a brief friendly exchange following this: he didn't see the harm in changing the design as he was running the sessions. I left it at that, but wanted to put down my thoughts on why this is not a good idea.

Is the problem really a problem?

If you change the design immediately after the session, how do you know the issue is a problem? And to what extent is it a problem? For example, pretend that you ran a series of 12 user-testing sessions, and observed several issues:

Is issue 5 really a problem? Maybe, but certainly not as big a problem as issues 1 and 2. Had issue 5 been "fixed" early in the test sessions, you wouldn't have been able to determine whether it was truly a problem (perhaps it was a methodological or other test anomaly), and you wouldn't know the magnitude of the problem (data necessary for prioritization).
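
To make the prioritization point concrete, here's a small sketch with entirely made-up issue labels and counts (not my actual test data): tally how many of the sessions surfaced each issue, something you can only do after every session has been run.

    # Hypothetical illustration only: rank observed issues by how many sessions
    # surfaced them. An issue seen in many sessions is likely a real problem;
    # one seen only once may just be a test anomaly.
    from collections import Counter

    sessions = [                       # one list of observed issues per session
        ["issue 1", "issue 2"],
        ["issue 1", "issue 2", "issue 5"],
        ["issue 1"],
        ["issue 2"],
        # ...and so on for the remaining sessions
    ]

    counts = Counter(issue for observed in sessions for issue in observed)

    for issue, n in counts.most_common():
        print(f"{issue}: seen in {n} of {len(sessions)} sessions")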

A Missed Opportunity

Usability testing, in part (a large part), is about understanding why users are having problems with a product's design. For a given issue, you want as much data as you can get so you can understand the nature of the problem. If you're changing the design (ostensibly to fix an observed issue) as you're testing, you've lost the opportunity to learn more about the problem. If you only have 1 data point for an issue, can you really address it with a high level of confidence (and again, is it really a problem)?

A side question: What happens when the mid-test "fix" introduces new issues? It seems like the design and testing sessions could go off the rails pretty quickly with this methodology.

The Bottom Line

Testing and updating in this way, it's entirely possible that the designer is "fixing" issues that aren't really problems, or that are relatively trivial. Or, because the designer doesn't fully understand the nature of an actual problem, it isn't addressed as well as it could've been had there been more data.

What are your thoughts? Is this a common testing methodology? Is there a context in which it would make sense?
