The Ilonka Reader

Notes on the Books I Have Read

Month: February, 2016

The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century

By David Salsburg

Another book as part of my statistics kick, this time much more relevant to what I wanted but not the style I was looking for. I guess you can’t get everything.

Salsburg is explicit that this book is made for those with little to no mathematical background, thus it has no equations and not even a great deal of explanation of the mathematics being developed. This was the major flaw in the book for me. I loved his detailed history of the various people developing statistics. He starts with Laplace and Bayes, who laid the groundwork for this kind of thinking, and then goes deep into Pearson, Fisher, Neyman, and many others who really developed the tools we use today in scientific experimentation. He talks about experiment design and hypothesis testing, and the various lemmas and theorems as they were developed to make sense of agricultural, epidemiological, and beer-brewing problems.

But he shies away from anything too mathematical, which was difficult for me because I didn’t know about many of the ideas these people were developing and thus gained no insight into the concepts except the high level explanations Salsburg thought were appropriate. I think I’d rather have read the book after having a more technical understanding of some of the material, such that the information in the book could have consolidated my learning instead of starting it.

Still, my main takeaway is that statistics led us to think more clearly about how to design and interpret experiments in fields where it is hard to isolate variables: agriculture, medicine, eventually molecular biology, and many others. A lot of this had to do with moving away from considering specific measurements and instead considering the distribution of those measurements. It also led to a flourishing field of mathematics that dealt with many of the real-world problems of experimentation.

Lady Luck: The Theory of Probability

By Warren Weaver.

I’m on a statistics kick. I want to learn the fundamentals so that I can move on to machine learning with a solid base of from whence it came. I wasn’t sure exactly how I wanted to do this, so I’ve been doing a combination of reading books, flipping through textbooks, taking online courses (most of which I don’t like and don’t finish, but that’s a different story), and researching explanations on Wikipedia. This is the first book I finished as part of this endeavor.

I like its style, but I found the content to be a little lateral to what I was looking for. It’s from the sixties and also (obviously) focused more on probability than statistics, though the two are strongly related.

I enjoyed the mix of narrative and technical explanation. Weaver doesn’t shy away from equations and uses standard notation for referring back to previous ones. He wrote it for precocious high schoolers and I think fairly accurately judged that level: as someone with an undergraduate education I found the math straightforward. Which is not to say it was unnecessary; I was extremely glad he got detailed with the math and was explicit about when he was not explaining a proof because it was too complicated or detailed or, generally in combination with the previous, not particularly necessary.

The narrative was useful in tying the theory to its history and uses. However, this is where the book was a little astray for me. Weaver talks mostly about counting probabilities, a lot of which came from gambling games. While I found it interesting, it definitely was not the kind of statistics I was looking for. He didn’t connect it to experimental design and hypothesis testing at all. It’s really a different side of statistics, one that, at least in this book, wasn’t as related as I’d assumed.
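The counting approach Weaver focuses on can be illustrated with a classic gambling question of the kind these books grew out of (this specific example is my own illustration, not necessarily one from the book): what is the chance of rolling at least one six in four throws of a fair die? Enumerating outcomes directly:

```python
from fractions import Fraction
from itertools import product

# All 6^4 = 1296 possible sequences of four die rolls.
outcomes = list(product(range(1, 7), repeat=4))

# Count the sequences containing at least one six.
favorable = sum(1 for roll in outcomes if 6 in roll)

p = Fraction(favorable, len(outcomes))
print(p)         # 671/1296
print(float(p))  # ~0.518, slightly better than even odds
```

The same answer falls out of the complement rule, 1 - (5/6)^4, which is the shortcut this style of probability teaches you to reach for.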

Still, I enjoyed the book and its contents and especially its style.

To Show and To Tell: The Craft of Literary Nonfiction

By Phillip Lopate.

I got this book because I’m interested in getting better at writing personal essays. I’m not sure if this book will help me with that. Maybe.

It reads mostly as a series of opinions, which is fine; the author seems to know what he’s talking about. He also seems interested in reading and writing the kinds of essays I’m interested in, ones in which the author does serious thinking around some problem or event, generally a personal one. The chapters are fairly scattered with no real logical progression that I could gather. It’s a scattershot of thoughts about things like how to end an essay and how to ethically write about people in your life.

There is a second section in which he discusses various famous essayists, which I enjoyed and which gave me a good sense of the classics I should be reading. He talked about Hazlitt, Lamb, Emerson, Baldwin, and Hoagland. He also at various points in the first section attributes the birth of the personal essay to Montaigne.

I think the main thing I remember him talking about which I want to do better is giving the reader a sense of the author’s profile at the beginning of the essay. He says the reader needs to know some basic things about you – age, location, gender, demographic, race – in order to understand your personal essay since it’s about you. He says it can be hard to include these things elegantly, but it’s important. I agree.

Overall not exactly the book I was looking for but an easy and enjoyable read.

Interviewing Users

By Steve Portigal.

Silas’ roommate Sarah recommended this book to me. I read it to prepare for interviewing people about jewelry and electronic accessories like activity trackers and watches. I thought it was not as good as Just Enough Research: it was a little less lean and mean. It also mostly spoke to a circumstance I don’t expect to be in: interviewing users in their homes after they’ve been recruited by a recruiting agency. In my case, I expect to interview friends and acquaintances, probably in coffee shops. But it was still a good, fast read that gave me some insight on how to interview users.

It helped me think more broadly about the kinds of questions I should ask, how to get a larger sense of users’ interactions with the world. It confirmed my sense that being quiet is important, as is attentiveness, trying to turn up ‘blank’ and unassuming, and preparing for the interview with a guide or set of questions. It also makes me think I should record my interviews, since taking detailed notes is disruptive and things often become clearer the second time they are heard. I wish he had spoken a bit more about analyzing interviews, though I understand that wasn’t what the book was about. He touches on it briefly but basically says you should read another book about that.

It reminded me that my experience interviewing my fellow yoga teacher trainees was a good practice in interviewing that will probably help with this different kind of interviewing I’m interested in now.

Crossing the Chasm

By Geoffrey A. Moore.

This book is about how to market high-tech, disruptive products to mainstream customers. The concept is that early on you’re selling your product to the lunatic fringe or early adopters/visionaries, who want something new and exciting that’ll get them ahead and don’t care if it doesn’t work perfectly. In fact, that’s kind of a plus: it’s a sign that they’re truly ahead of the curve. But eventually you want to sell your product to regular people, pragmatists and conservatives, and they want it to work perfectly every time and for everyone else to be using it also. They want the opposite of what the early people want. So how do you cross the chasm?

The book was good, strong at the beginning with a bit of a lull and then a good mid-section that got meaty with advice, real world examples, and details that warranted a whole book and not just an article. His theory for how to cross the chasm is to pick a small market segment to dominate, then branch out into others. You need to dominate a market to demonstrate that you’re a mainstream solution and not some crazy concept.

A market segment is a group of people with a particular need who all reference each other. To dominate it you need to provide a whole product solution, which generally requires partners, and identify two competitors: the old guard solution that you’re disrupting and another solution comparable to yours that isn’t right for this particular application. This is important for the market to accept your product as mainstream, even if it’s not yet.

Although the book was written specifically for high-tech, disruptive products, a lot of the information was more broadly applicable (how to define a market segment, why whole product solutions are important) and could probably be applied to other kinds of products.

There was also a little point towards the end about how being an early employee at a start-up never provides good financial compensation, which is the second time of late I’ve read that. So that was a fun personal touch.

Just Enough Research

By Erika Hall.

I read this book a couple weeks ago, before I finished Seeing Voices, for work so that I could better run usability testing for our app. Because of this I took pretty extensive notes, which I’ve copied below. I really liked the book: it’s short and to the point and the author has just the right amount of sass. She talks not just about how to do user research, but how to do it given that you’re probably not a professional and how to do it in organizations that might not want you to do so. Very practical.

My notes:

A way to classify the purpose of your research:

  • Generative or exploratory: “What’s up with…?”
  • Descriptive and explanatory: “What and how?”
  • Evaluative: “Are we getting close?”
  • Causal: “Why is this happening?”

Setting roles, where roles represent a cluster of tasks. Single people can take on multiple roles or a role could be shared:

  • Author
  • Interviewer/Moderator
  • Coordinator/Scheduler
  • Notetaker/Recorder
  • Recruiter
  • Analyst
  • Documenter
  • Observer

Best practices:

  • Phrase questions clearly: not the questions you’re asking users but the questions you are trying to answer.
  • Set realistic expectations: what’s the question being answered, the methods being used, and the decisions to be informed by the findings.
  • Be prepared: makes everything go cleanly and quickly.
  • Allow sufficient time for analysis.
  • Take dictation: notes or it didn’t happen, and they need to be sufficiently informative for anyone who has to make decisions based on the research.

“To make the best use of your time and truly do just enough research, try to identify your highest-priority questions–your assumptions that carry the biggest risk.”

The process:

  1. Define the problem.
  2. Select the approach.
  3. Plan and prepare for the research.
  4. Collect the data.
  5. Analyze the data.
  6. Report the results.

Screening participants is important. You want to make sure they are the type of person your product/organization is looking to engage. You also want to make sure they will be an active and useful participant: not too terse but also not too distracted and verbose. You can use a simple survey and might want to follow with a quick phone call to see if they are a good match for your study.

Usability testing is basically a directed interview with a representative user while they use a prototype or actual product to attempt certain tasks. Usability testing tells you whether people understand the product or service and can use it without difficulty. It does not provide you with a vision, tell you if your product or service will be successful or which user tasks are more important than others.

A sample structure for an analysis session:

  • Setup
    — Summarize the goals and process of the research.
    — Describe who you spoke with and under which circumstances.
    — Describe how you gathered the data.
    — Describe the types of analysis you will be doing.
  • Analyze!
    — Pull out quotes and observations.
    — Group quotes and observations that typify a repeated pattern or idea into themes.
  • Synthesize
    — Summarize findings, including the patterns you noticed, the insights you gleaned from those patterns, and their implications for the design.
    — Document the analysis in a shareable format.

Do organizational research. Your work doesn’t happen in a vacuum, so interview stakeholders to come up with business requirements. Documentation should include problem statement and assumptions, success metrics, completion criteria, scope, risks/concerns/contingency plans, verbatim quotes (without identifying information), and workflow diagrams. “…improve the odds that changes will be fully understood and take hold.”

Assumptions are insults.

House M.D. was right: Everybody lies. (But don’t break into anyone’s house.)

Ethnography: getting to know people, generally by talking to them. Interview with an interview guide that has a brief description and goal of the study (share it with the interviewee, but also use it to keep yourself on track), any demographic questions you’ll want to ask, a couple of small talk questions (maybe just for inspiration!), and some questions or topics that are the primary focus. Remember, it’s just a guide. When interviewing, you know nothing. The most fascinating thing is the person you are interviewing. Gather a bit of background information (about the subject area, not the person) beforehand.

Contextual inquiry: getting to know people by following them around and observing them.

Jakob Nielsen’s checklist of ten heuristics (heuristic inspection), which can be applied by anyone and should be done and scored as often as possible to ensure a basic level of usability:

  • System status visibility.
  • Match between system and real world.
  • User control and freedom.
  • Consistency and standards.
  • Error prevention.
  • Recognition rather than recall.
  • Flexibility and efficiency of use.
  • Aesthetic and minimalist design.
  • Help users recognize and recover from errors.
  • Help and documentation.

Find out everything you can using cheap usability tests first: paper prototypes before actual ones, tests in the office before tests in the field. It doesn’t really matter how often you do usability testing, as long as you do it earlier than right before you launch, when it’s expensive to fix things. The most expensive usability testing is the kind your customers do for you by way of customer support.

Ask users to do a task. Have a good facilitator who can put the user at ease but won’t lead them through the process. Take notes and record audio; be careful about videotaping because it’s trickier. Analyze the data by looking for issues that prevent the user from completing the task or just cause difficulty. Don’t forget to note how many users have that issue. At the end, rank the issues in terms of priority and get to work!

How to do qualitative analysis of your data: review the notes, looking for interesting behaviors, actions, emotions, and verbatim quotes. Write your observations on sticky notes and start to cluster the notes based on patterns. Watch patterns emerge. Rearranging is key. This is an affinity diagram. Use this to name patterns and identify user needs that come from them. The final step is to identify the actionable mandate or principle.
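The clustering step above is manual by nature, but the data structure behind an affinity diagram is simple enough to sketch. In this sketch (the quotes and theme tags are invented for illustration), each pair stands in for a sticky note, and grouping by tag stands in for clustering:

```python
from collections import defaultdict

# Each (quote, tag) pair is one "sticky note": a verbatim observation
# plus the pattern it seems to belong to after some rearranging.
notes = [
    ("I never found the settings page", "navigation"),
    ("The export button was hidden", "navigation"),
    ("I wasn't sure if it saved", "feedback"),
    ("No confirmation after submitting", "feedback"),
]

# Cluster notes into themes.
themes = defaultdict(list)
for quote, tag in notes:
    themes[tag].append(quote)

for theme, quotes in themes.items():
    print(f"{theme}: {len(quotes)} notes")
```

In practice the value is in the rearranging itself, so the tags emerge from moving notes around rather than being known up front.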

Personas: fictional archetypes that represent types of users and can be used to “advocate” for the user experience. Design, business strategy, marketing, and engineering can benefit from a single set of personas. HOWEVER, design targets ARE NOT marketing targets. The user type with the highest value to the business might not be the one most valuable to the design process. Personas should be fictional, drawing on many different users, but archetypal. Use a photo of a person no one knows. Give them a name, role, goals, demographic, quote, behaviors and habits, skills, environment, relationships, scenarios.

Mental models describe how users understand how things work. Strictly speaking there are also the mental models that designers sketch out to create a better world. You can make mental models through user interviews and affinity diagrams. Similarly you can make task/workflow analyses.

Quantitative analysis: split testing! It has math, which makes it more fun, but make sure you know what you’re looking for (set up your experiment and define your requirements and goals), and be careful both about finding a local maximum (you can only do so much through incremental changes) and about upsetting what is working for current users.
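The math behind a basic split test is a comparison of two conversion rates. A common way to do this (not a method the book specifies; the conversion numbers here are invented) is a two-proportion z-test, which stdlib Python can handle:

```python
from math import sqrt, erfc

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical experiment: 120/2400 conversions on A vs 156/2400 on B.
z, p_value = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

This only tells you whether the difference looks real, not whether the change is worth shipping; the sample size and significance threshold should be fixed before the experiment starts, which is the “define your requirements and goals” point above.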

Conclusion: I’m just going to quote some lines from the conclusion of the book! To inspire!

“Questions are more powerful than answers. And asking often takes more courage than sticking with comfortable assumptions.”

“Cultivate a desire to be proven wrong as quickly as possible and for the lowest cost. If you work in a culture that prizes failing fast, there is no faster way to fail than by testing an idea that’s still on the drawing board. Except maybe checking your assumptions before you even get to drawing.”

“Let curiosity be your guide.”

Aaaaand we’re done. Thanks Erika Hall!

Seeing Voices

By Oliver Sacks.

This collection of three essays Sacks wrote about deaf culture and sign language has so many amazing stories and neuroscience factoids in it that I can hardly contain myself when I tell people about it. He talks about the history of deaf people being considered ‘dumb’ because they were not given the opportunity to acquire language, the strange and horrible history of deaf people being forced to learn to speak and denied the opportunity to learn and communicate with sign language, the fascinating phenomenon of sign language being seated in the left hemisphere of the brain even though we generally consider spatial and visual tasks to be seated in the right, the way children will naturally develop a certain kind of grammar in sign language that doesn’t demand the sequential operations spoken language does… It is incredibly interesting and cool and a short read to boot. It has inspired me to try to learn a little bit of American Sign Language.