Posts by Anna Foard

Former math and statistics teacher. Current statistical analyst, data visualization enthusiast, business development specialist at Velocity Group in Atlanta.

5 Ways to Keep Adult Learners Engaged

The year was 2002. It was the first time I ever stood in front of a classroom of “grown-ups”. The students didn’t know me or care who I was — a TA at LSU covering a College Algebra class. The topic was logarithms. Specifically, an introduction to logs as the inverse of an exponent. I may have been slightly older than the median age of those students and I was terrified, nervous, and profusely sweating.

Up until that moment, I thought deep content knowledge was the secret sauce of teaching. But in the 17 years of experiences that followed, I’ve learned how much MORE there is to teaching than merely knowing your stuff. Student buy-in is the key to student engagement and, ultimately, student learning.

As a corporate trainer I’ve found adults are no different from kids in how they learn and how they engage. It all depends on the trainer’s ability to read the room and adapt as needed.

I compiled the list below after 17 years of total instruction including college algebra and statistics, high school math and AP Statistics, and corporate training for data analysts. I’m sure I will update this list in the future, but at this point, reflecting on my own feedback and observing other trainers, these are the top points I find trainers miss.

Please note this list is not exhaustive and assumes you follow the basic tenets of instruction such as: knowing your audience, knowing your content, preparing x 10, taking breaks every 60 – 75 minutes, beginning promptly after a break, being approachable, avoiding excessive talk and rabbit holes, the Rule of 3, minimizing PowerPoint, summarizing major points, etc. So here you go – 5 ways to improve student engagement.

1. Be Adaptable.

“It is not the strongest of the species that survives, nor the most intelligent. It is the one that is most adaptable to change.”

– Charles Darwin

I’ve entered a classroom to discover the WiFi down. Many times. There have been a few unannounced fire drills, some medical emergencies, that time the projector bulb blew — all derailing my meticulously-planned lesson. But the show must go on!

If something goes wrong, keep calm but think on your feet. Focus on keeping the students engaged first. So if the students have working laptops and the only problem is a projector (or your laptop), get them started. Walk around the room teaching the concepts you’d planned to teach from the front. Ask the students questions, have them come up with solutions.

Tech completely down? Ask yourself, “What is the goal of this class?” It’s not easy to teach a tech-driven class with a whiteboard, but it can be done (and here’s hoping the WiFi comes back soon). In fact, if you teach a workshop that depends entirely on software or technology, I would urge you to get in the habit of building in low-tech activities for those “just in case” moments.

Pro Tip: Oftentimes you can lead the students to an a-ha moment or two, then request IT support at the next break.

According to The Oxford Review, adaptability in the workplace is related to one’s emotional intelligence and emotional resilience. And, of course, mindset.

Lastly, being adaptable also means being coachable. Everyone gets frustrated by negative evaluations/feedback at times. But try to step back and ask yourself if you could have improved the delivery. Making tweaks to your performance based on student feedback can help YOU in the long run. Being coachable does NOT mean you give up confidence. You are the professional, but all great professionals learn from feedback and reflection.

2. Don’t Fake it.

It’s okay to admit you don’t know the answer to a question. Saying, “I’ll find out and get back to you” is not a weakness. What’s not okay is making up an answer. “Fake it til you make it” is NOT a mantra of teaching. Especially when you have Google.

“When you tell a lie, you steal someone’s right to the truth.”

– Khaled Hosseini

If you’re worried about questions, I recommend giving everyone sticky notes at the beginning of the class. Encourage students to ask questions. If a question comes up that is not relevant to the topic at hand OR if you don’t know the exact answer, ask the student to write down their question. Create a space on the wall for participants to stick these questions up (some trainers call this a “parking lot”) and on breaks, take some time to research and answer the questions. I would strongly recommend you DON’T take class time to do said research.

And oh, the mistakes I’ve made when teaching. Some embarrassing. It happens. And it is important to own those mistakes, especially if you can turn one into a “teaching moment.” For example, I once observed a new math teacher square a binomial incorrectly. A COMMON mistake among students. So instead of (x+2)^2 = (x+2)*(x+2), she (without thinking about it) squared both terms, making (x+2)^2 = x^2 + 4. Whoops. A big mistake in the math world, but really not a big deal once she stopped, realized her mistake, and laughed about it. Then she explained that it was an example of what NOT to do.

Squaring a binomial incorrectly may not be your mistake, but you will make one (probably many). To err is human. And there is plenty of research out there to suggest an HONEST teacher is a TRUSTWORTHY teacher. People LIKE honest teachers, especially when it comes to their own flaws.

3. Use People’s Names.

“Go the extra mile. It’s never crowded.”

-Author Unknown

Names are powerful. Dale Carnegie once said, “A person’s name is, to him or her, the sweetest and most important sound in any language.” When someone takes the time to learn and use your name, you feel important. Which means using a person’s name in conversation is the quickest way to connect with them on a personal level — and therefore promotes positive classroom engagement.

Generally, people also enjoy talking about themselves. Which is a great way to learn their name. On the very first day, after I introduce myself, I give participants the opportunity to introduce themselves and say a few words. You probably already do this. And I use this as an opportunity to learn their name — I write it down then say their name aloud (so they hear it AND to help me remember). Creating a blank seating chart ahead of time is always helpful – this way I can jot down the name and an interesting fact while they speak, creating a reference for later in the course.

My friend and colleague Ryan Nokes remembers names much better than I do, impressing his classes by learning every name immediately! After preliminary introductions he goes around the room, first person to last, and says each name from memory (without notes). Then he does it again at the start of the next day. People enjoy hearing their own names and are pleased when you remember them later.

I teach hands-on courses and encourage constant interaction. When calling on students, I use their first name, careful not to just point to them. When talking to them one on one, I use their name. And by the way, please use the name they gave you. NOT their government name. FYI, I cringe when people call me, “Annamarie.”

Many articles have been published around the power of names. If you aren’t sure about the power of using a name, start here.

4. Move Around the Room.

“Nothing happens until something moves”

-Albert Einstein

Moving around the room, when done correctly, increases participant engagement.

When I teach, I rarely sit down. Moving around the room allows me to interact with each student one-on-one and check for their understanding. This proximity also allows the more reluctant talker/questioner to ask their burning question when they know the entire class won’t hear them. And, dare I say it: Moving around the room keeps you in control.

This is why you hear grade school teachers say they never sit down. K-12 teachers use physical proximity to manage their classrooms. Being interested in each student’s learning promotes positive behaviors and keeps students on task. In the same way, walking around keeps adults out of their inbox. And you won’t ever hear me criticize a training participant about their email/phone use in class (despite it being a bit irritating — I mean, you DID sign up to be here) because when they expect me to move in their direction, they self-monitor and correct these behaviors themselves, often apologizing.

Note: I do try to give every group/person an equal amount of “attention” without lingering anywhere too long.

Educational research also promotes student movement around the room. So when delivering instruction, I like to create activities that make students/groups visualize data by hand – on a white board or big 3M sticky poster. Or even a post-it. “Around the room” activities could also include giving other groups positive feedback or presenting a new discovery in their data.

5. Seat People with Similar Experience-Levels Together.

When dealing with heterogeneous groups of participants.

“We are more powerful when we empower each other.”

-Unknown

After years of hoopla over the concept of “tracking students”, this tip might surprise you. How many times have you heard someone say, “Pair a low with a high?” And, while this strategy could work in certain courses and situations, it is, overall, an outdated practice.

Imagine. You have a grasp of the basics of a particular data visualization tool and use it weekly. A colleague in the same course has only installed the software that morning. Your instructor teams you up so you can “help” your colleague. How does this make you feel? At first, it might feel rewarding — you know the answers! However, in many situations the person doing the “helping” ends up feeling like they didn’t grow in their domain while the person being helped can eventually feel inadequate and frustrated.

No matter how well we market a course (“beginner”, “advanced”, etc.) there will always be a heterogeneous group of abilities when I walk in to start instruction. And this is the way it always is — K-12 or corporate training. So I can either roll my eyes and teach the outline as prescribed, pacing to the middle of all abilities, or I can help all learners by differentiating my instruction a bit.

I’m not talking about group projects here — I only mean seating participants of like abilities near each other for an improved user experience. Of course in a training situation, this arrangement has more to do with experience levels — of which they can self-sort. I generally ask students “new” to the software to sit in the front, and others to sit behind them. That’s all that is usually needed.

But let’s look at the origins of this thought: When used appropriately, “flexible grouping” — pairing and grouping students based on need — can aid student learning on both ends of the experience spectrum. This can be homogeneous or heterogeneous groups. And how you utilize it matters. If you must pair high/low, do it only for a short time. (Because I’ve had students ask me, “Am I the dumb one or the smart one?”)

In the long run, research suggests pairing/seating students with similar abilities/experience in a domain (or software) engages all students. And if done correctly, it actually improves their learning experience and accelerates their growth. How? If you’re already moving around the room (see #4 above), then it should make sense that you can tailor your instruction much more easily to pairs/groups of similar background knowledge than if they are scattered around the room. Think about it — when you are helping a group of participants who are relatively new to the topic or software, you can give the other group a “you try” practice problem that enriches or even accelerates their independent work, and vice versa. Peer pairing/grouping on similar experience levels also encourages those students to develop a deeper understanding of the topic together, rather than the back-and-forth waiting that occurs when unlike abilities are grouped.

Personally, I mix up my delivery — some whole group instruction, some partner work, maybe an activity in a group, and solo work. Pairing them up encourages dialog about the concepts while they work through a challenge. Groups can offer multiple points of view. Solo work helps the student think through the problem on their own. Since my classes are always hands-on I incorporate the process of I do, we do, you do. But I do start with seating like experience levels together.

Last Note

Being a teacher (or instructor, or coach) does require multiple skill sets including: entertainer, orchestra conductor, problem-solver, mind reader, therapist, referee, and cheerleader. However, promoting student engagement (teens and adults alike) goes beyond preparing a “fun lesson.” Student engagement results from student buy-in. And student buy-in results from the little things that create a positive atmosphere.

I’m going to add to this list over time. Do you have any suggestions on how you promote adult student engagement?

How Laser Tag Helped Students Learn About Data

How do you get a group of 15 to 18-year-old students interested in data prep and analysis? Why, you take them to play laser tag, of course!

That’s right, on a cold January day I loaded up two buses of teens and piloted them to an adventure at our local Stars and Strikes. And this is no small feat — this particular trip developed out of months of planning, and after years of proclaiming that I will never ever ever ever EVER coordinate my own field trip for high school kids. I mean, you should SEE the stack of paperwork. And the level of responsibility itself made me anxious.

I’m a parent so I get it. And from a teacher’s point of view, many field trips aren’t worth the hassle.

So there I was, field trip money in one hand, clipboard in another: Imagine a caffeinated Tracy Flick. But thanks to the help of two parent chaperones and the AP Psychology teacher (Coach B), we ran the smoothest data-related field trip modern education has ever known.

What Does Laser Tag Have to do With Statistics?

Statistics textbooks are full of canned examples and squeaky clean data that often have no bearing on a student’s interests. For example, there is an oh-so-relatable exercise computing standard error for D-glucose contained in a sample of cockroach hindguts. In my experience I’ve learned that when students can connect to the data, they are able to connect to the concept. We’re all like that, actually — producing/collecting our own data enables us to see what we otherwise would have missed.

(I can assure you confidence intervals constructed from D-glucose in cockroach hindguts did little for understanding standard error.)

The real world is made up of messy data. It’s full of unknowns, clerical errors, bias, unnecessary columns, confusing date formats, missing values; the list goes on. Laser tag was suggested to me as a way to collect a “large” amount of data in a relatively short amount of time. And because of the size of the dataset, it required the students to input their own data — creating their own version of messy data, complete with clerical errors. From there they’d have to make sense of the data, look for patterns, and form hypotheses.

The Project

  • Students entered their data into a Google doc — you can find the complete data here.
  • Each partner team developed two questions for the data: One involving 1-variable analysis, another requiring bivariate analysis.
  • The duos then had to explore, clean, and analyze all 47 rows and 48 columns. At this point in the school year, students had been exposed to data up to about 50 rows, but never had they experienced “wide” data.
  • Analyses and presentations required a visualization, either using Excel or Tableau.

Partner projects lend themselves to fantastic analyses, with half the grading

Playing the Games

Methodology: Each student was randomly assigned to a team using a random number generator. Teams of 5 played each other twice during the field trip. The teams were paired to play each other randomly. If, by chance, a team was chosen to play the same team twice, that choice would be ignored and another random selection would be made until a new team was chosen.

Before each game, I recorded which student wore which laser tag vest number. From the set-up room, I could view which vest numbers were leading the fight and which team had the lead. It was entertaining. As the students (and Coach B — we needed one more player for even teams) finished their games, score cards were printed and I handed each student their own personal results. The words, “DON’T lose this” exited my lips often.

Upon our return to school (this only took a few hours, to the students’ dismay), results were already pouring into the Google doc I’d set up ahead of time.

Teaching Tableau and Excel Skills

The AP Statistics exam is held every year in May, hosted by The College Board. On the exam, students are expected to use a graphing calculator but have no access to a computer or Google. Exactly the opposite of the real world.

Throughout the course, I taught all analysis first by hand, or using the TI-83/84. As students became proficient, I added time in the computer lab to teach basic skills using Excel and Tableau (assignments aligned to the curriculum while teaching skills in data analysis). It was my goal for students to have a general understanding of how to use these “real world” analytics tools while learning and applying AP Statistics curriculum.

After the field trip, we spent three days in the computer lab – ample time to work in Tableau and Excel with teacher guidance. Students spent time exploring the 48-column field trip dataset with both Excel and Tableau. They didn’t realize it, but by deciding which chart type to use for different variables, they were actually reviewing content from earlier in the year.

When plotting bivariate quantitative data, a scatterplot is often the go-to chart

Most faculty members had never heard of Tableau. At lunch one day I sat down with Coach B to demonstrate Tableau’s interface with our field trip dataset.

“What question would you ask this set of data?” I asked.

“A back shot is a cheap shot. I wonder who is more likely to take a cheap shot, males or females?”

So I proceeded to pull up a comparison and used box-and-whisker plots to look for outliers. Within seconds, a large outlier was staring back at us within the pool of male students:

“Ha. I wonder who that was.” – Coach B

“That’s YOU.” – Me

From there, I created a tongue-in-cheek competitive analysis from the data:

Full color version found here.

Student Response

I’ve been teaching since 2004. Over the years, this was probably the most successful project I’ve seen come through my classroom. By “successful”, I’m talking the proportion of students who were able to walk outside of their comfort zone and into a challenging set of data, perform in-depth analyses, then communicate clear conclusions was much higher than in all previous years.

At the end of the year, after the AP Exam, after grades were all but inked on paper, students still talked excitedly about the project. I’d like to think it was the way I linked a fun activity to real-world analysis, though it most likely has to do with getting out of school for a few hours. Either way, they learned something valuable.

Univariate Analysis

One student, Abby, gave me permission to share her work adding, “This is the project that tied it all together. This was the moment I ‘got’ statistics.”

Interestingly, students were less inclined to suggest the female outlier of 2776 shots was a clerical mistake (which it was). I found there were two camps: students who didn’t want to hurt feelings, and students who thought outliers in the wild need no investigation. Hmmm.

Bivariate Analysis

For a group of kids new to communicating stats, I thought this was pretty good. We tweaked their wording (to be more contextual) as we dove into more advanced stats, but their analysis was well thought through.

What I Learned

When you teach, you learn.
Earlier I said the project was a success based on the students’ results. That’s only partially true; it was also a success because I grew as an educator. After years of playing by the rules I realized that sometimes you need to get outside your comfort zone. For me that was two-fold: 1) Sucking it up and planning a field trip and 2) Losing the old, tired TI-83 practice problems and teaching real-world analytics tools.

How to Decipher False Positives (and Negatives) with Bayes’ Theorem

Note: Before proceeding, a great recap of probability concepts can be found here, written by Paul Rossman. 

But First, Conditional Probability

When I teach conditional probability, I tell my students to pay close attention to the vertical line in the formula P(A | B) = P(A and B) / P(B). Whenever they see it, they must imagine the loud baritone behind-the-scenes announcer voice from Bill Nye saying, “GIVEN!”

This symbol | always indicates we assume the event that follows it has already occurred. The formula, then, should be read: the probability event A will occur given event B has already occurred.

A simple example of conditional probability uses the ubiquitous deck of cards. From a standard deck of 52, what is the probability you draw an ace on the second draw if you know an ace has already been drawn (and left out of the deck) on the first draw?

Since a deck of 52 playing cards contains 4 aces, the probability of drawing the first ace is 4/52. But the probability of drawing an ace given the first card drawn was an ace is 3/51 — 3 aces left in the deck with 51 total cards remaining. Hence, conditional probability assumes another event has already taken place.
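
If you want to check that arithmetic (or just see it another way), here’s a minimal Python sketch — the exact fraction plus a quick simulation. The card labels are just for illustration:

```python
import random
from fractions import Fraction

# Exact conditional probability: 3 aces left among 51 remaining cards
print(Fraction(3, 51))  # 1/17

def estimate(trials=100_000):
    """Estimate P(2nd card is an ace | 1st card was an ace) by simulation."""
    deck = ["ace"] * 4 + ["other"] * 48
    aces_first, aces_both = 0, 0
    for _ in range(trials):
        random.shuffle(deck)
        if deck[0] == "ace":               # the "given": first draw was an ace
            aces_first += 1
            aces_both += deck[1] == "ace"  # second draw is also an ace
    return aces_both / aces_first

print(estimate())  # hovers around 3/51 ≈ 0.0588
```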

False Positives and False Negatives: What They’re Not

Tests are flawed.

According to MedicineNet, a rapid strep test from your doctor or urgent care has a 2% false positive rate. This means 2% of patients who do not actually have Group A streptococcus bacteria present in their mouth test positive for the bacteria. The rapid strep test also indicates a negative result in patients who do have the bacteria 5% of the time — a false negative.

Another way to look at it: The 2% “false positive” result indicates the test displays a true positive in 98% of patients. The 5% “false negative” result means the test displays a true negative in 95% of patients.

It’s common to hear these false positive/true positive results incorrectly interpreted. These rates do not mean the patient who tests positive for a rapid strep test has a 98% likelihood of having the bacteria and a 2% likelihood of not having it. And a negative result does not indicate one still has a 5% chance of having the bacteria.

Even more confusing, but important, is this idea: while a 2% false positive rate does indicate that 2% of patients who do not have strep test positive, it does not mean that, of all positive results, only 2% come from patients without strep. There is more to consider in calculating those kinds of probabilities. Specifically, we would need to know how pervasive strep is in the population in order to come close to the actual probability that someone testing positive has the bacteria.

Enter: Bayes’ Theorem

Bayes’ Theorem considers both the population’s probability of contracting the bacteria and the false positives/negatives. In its expanded form:

P(A | B) = P(B | A) × P(A) / [ P(B | A) × P(A) + P(B | not A) × P(not A) ]

I know, I know — that formula looks INSANE. So I’ll start simple and gradually build to applying the formula – soon you’ll realize it’s not too bad.

Example: Drug Testing

Many employers require prospective employees to take a drug test. A positive result on this test indicates that the prospective employee uses illegal drugs. However, not all people who test positive actually use drugs. For this example, suppose that 4% of prospective employees use drugs, the false positive rate is 5%, and the false negative rate is 10%.

Here we’ve been given 3 key pieces of information:

  • The prevalence of drug use among these prospective employees, which is given as a probability of 4% (or 0.04). We can use the complement rule to find the probability an employee doesn’t use drugs: 1 – 0.04 = 0.96.
  • The probability a prospective employee tests positive when they did not, in fact, take drugs — the false positive rate — which is 5% (or 0.05).
  • The probability a prospective employee tests negative when they did, in fact, take drugs — the false negative rate — which is 10% (or 0.10).

It’s helpful to step back and consider that two things are happening here: First, the prospective employee either takes drugs, or they don’t. Then, they are given a drug test and either test positive, or they don’t.

I recommend a visual guide for these types of problems. A tree diagram helps you take these two pieces of information and logically draw out the unique possibilities.

Tree diagrams are also helpful to show us where to apply the multiplication principle in probability. For example, to find the probability a prospective employee didn’t take drugs and tests positive, we multiply P(no drugs) * P(positive | no drugs) = (0.96)*(0.05) = 0.048.

An important note: The probability of selecting a potential employee who did not take drugs and tests negative is not the same as the probability an employee tests negative GIVEN they did not take drugs. In the former, we don’t know if they took drugs or not; in the latter, we know they did not take drugs – the “given” language indicates this prior knowledge/evidence.

What’s the probability someone tests positive?

We can also use the tree diagram to calculate the probability a potential employee tests positive for drugs.

A potential employee could test positive when they took drugs OR when they didn’t take drugs. To find the probabilities separately, multiply down their respective tree diagram branches:

P(drugs and positive) = (0.04)*(0.90) = 0.036
P(no drugs and positive) = (0.96)*(0.05) = 0.048

Using probability rules, “OR” indicates you must add something together. Since one could test positive in two different ways, just add them together after you calculate the probabilities separately:

P(positive) = 0.048 + 0.036 = 0.084
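
If you like verifying tree diagrams in code, the whole thing fits in a few lines of Python:

```python
# Probabilities given in the drug-testing example
p_drugs = 0.04                  # prevalence of drug use
p_no_drugs = 1 - p_drugs        # complement rule: 0.96
p_pos_given_drugs = 1 - 0.10    # 1 - false negative rate = 0.90
p_pos_given_no_drugs = 0.05     # false positive rate

# Multiply down each branch of the tree diagram
p_drugs_and_pos = p_drugs * p_pos_given_drugs            # 0.036
p_no_drugs_and_pos = p_no_drugs * p_pos_given_no_drugs   # 0.048

# "OR" means add the two ways of testing positive
p_pos = p_drugs_and_pos + p_no_drugs_and_pos
print(p_pos)  # 0.084
```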

Given a positive result, what is the probability a person doesn’t take drugs?

Which brings us to Bayes’ Theorem:

P(no drugs | positive) = [P(positive | no drugs) × P(no drugs)] / P(positive)

Let’s find all of the pieces:

  • P(positive | no drugs) is merely the probability of a false positive = 0.05
  • P(no drugs) = 0.96
  • So we already calculated the numerator above when we multiplied 0.05*0.96 = 0.048
  • We also calculated the denominator: P(positive) = 0.084

which simplifies to

P(no drugs | positive) = 0.048 / 0.084 ≈ 0.5714

Whoa.

This means, if we know a potential employee tested positive for drug use, there is a 57.14% probability they don’t actually take drugs — MUCH HIGHER than the false positive rate of 0.05. In other words, if a potential employee (in this population with 4% drug use) tests positive for drug use, the probability they don’t take drugs is 57.14%.

How is that different from a false positive? A false positive says, “We know this person doesn’t take drugs, but the probability they will test positive for drug use is 5%.” While if we know they tested positive, the probability they don’t take drugs is 57%.

Why is this probability so large? It doesn’t seem possible! Yet it takes into account the likelihood a person in this population takes drugs, which is only 4%.

In math terms:

P(positive | no drugs) = 0.05 while P(no drugs | positive) = 0.5714

Which also means that if a potential employee tests positive, the probability they do indeed take drugs is lower than you might think. You can find this probability by taking the complement of the last calculation: 1 – 0.5714 = 0.4286. OR, recalculate using the formula:

P(drugs | positive) = [P(positive | drugs) × P(drugs)] / P(positive) = (0.90 × 0.04) / 0.084 ≈ 0.4286
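
And the final Bayes step in code, reusing the numbers from the tree:

```python
# Bayes' Theorem applied to the drug-testing example
p_no_drugs_and_pos = 0.96 * 0.05   # numerator: P(positive | no drugs) * P(no drugs)
p_pos = 0.036 + 0.048              # denominator: total P(positive) from the tree

p_no_drugs_given_pos = p_no_drugs_and_pos / p_pos
print(round(p_no_drugs_given_pos, 4))      # 0.5714 -- P(no drugs | positive)
print(round(1 - p_no_drugs_given_pos, 4))  # 0.4286 -- P(drugs | positive)
```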

Now You Try: #DataQuiz

Back in October I posted a #DataQuiz to Twitter, with a Bayesian twist. Can you calculate the answer using this tutorial without looking at the answer (in tweet comments)?

Hints:

  • Draw out the situation using a tree diagram
  • What happens first? What happens second?
  • What is “given”?

Next Up: Business Applications

Stay tuned! Paul Rossman has a follow-up post that I’ll link to when it’s ready. He’s got some brilliant use case scenarios with application in Tableau.

How to Navigate Confidence Intervals With Confidence

Teaching statistics year after year prepped me for the most common misinterpretations of confidence intervals and confidence levels. Confusion such as:

  • Incorrectly interpreting a 99% interval as having a “99% probability of containing the true population parameter”
  • Finding significance because “the sample mean is contained in the interval”
  • Applying a confidence interval to samples that do not meet specific assumptions

What are Confidence Intervals?

Confidence intervals are like fishing nets to an analyst looking to capture the actual measure of a population in a pond of uncertainty. The margin of error dictates the width of the “net”. But unlike fishing scenarios, whether or not the confidence interval actually captures the true population measure typically remains uncertain. Confidence intervals are not intuitive, yet they are logical once you understand where they start.

So what, EXACTLY, are we confident about? Is it the underlying data? Is it the result? Is it the sample? The confidence is actually in the procedures used to obtain the sample that was used to create the interval — and I’ll come back to this big idea at the end of the post. First, let’s paint the big picture in three parts: The data, the math, and the interpretation.

The Data

As I mentioned, a confidence interval captures a “true” (yet unknown) measure of a population using sample data. Therefore, you must be working with sample data to apply a confidence interval — you’re defeating the purpose if you’re already working with population data for which the metrics of interest are known.

Sampling Bias

It’s important to investigate how the sample was taken and determine whether the sample represents the entire population. Sampling bias means a certain group has been under- or over-represented in a sample – in which case, the sample does not represent the entire population. A common misconception is that you can offset bias by increasing the sample size; however, once bias has been introduced by the sampling procedure, a larger sample collected the same way simply gives you a bigger unrepresentative sample — it does not fix the bias.

Examples of sampling bias:

  • Excluding a group who cannot be reached or does not respond
  • Only sampling groups of people who can be conveniently reached
  • Changing sampling techniques during the sampling process
  • Contacting people not chosen for the sample

Statistic vs Parameter

A statistic describes a sample. A parameter describes a population. For example, if a sample of 50 adult female pandas weigh an average of 160 pounds, the sample mean of 160 is known as the statistic. Meanwhile, we don’t actually know the average of all adult female pandas. But if we did, that average (mean) of the population of all female pandas would be the parameter. Statistics are used to estimate parameters. Since we don’t typically know the details of an entire population, we rely heavily on statistics.

Mental Tip: Look at the first letters! A Statistic describes a Sample and a Parameter describes a Population

The Math

All confidence intervals take the form:

statistic ± margin of error

A common example here is polling reports — “The exit polls show John Cena has 46% of the vote, with a margin of error of 3 points.” Most people without a statistics background can draw the conclusion: “John Cena likely has between 43% and 49% of the vote.”


What if John Cena actually has 44% of the vote? Here, I’ve visualized 40 samples, 38 of which produced intervals containing that 44%. Notice the confidence interval has two parts – the square in the middle represents the sample proportion and the horizontal line is the margin of error.

The Statistic, AKA “The Point Estimate”

The “statistic” is merely our estimate of the true parameter.

The statistic in the voting example is the sample percent from exit polls — the 46%. The actual percent of the population voting for John Cena – the parameter – is unknown until the polls close, so forecasters rely on sample values.

A sample mean is another example of a statistic – like the mean weight of an adult female panda. Using this statistic helps researchers avoid the hassle of traveling the world weighing all adult female pandas.


The Margin of Error

With confidence intervals, there’s a trade-off between precision and accuracy: A wider interval may capture the true mean accurately, but it’s also less precise than a narrower interval.

The width of the interval is decided by the margin of error because, mathematically, it is the piece that is added to and subtracted from the statistic to build the entire interval.

How do we calculate the margin of error? You have two main components — a t or z value derived from the confidence level, and the standard error. Unless you have control over the data collection on the front end, the confidence level is the only component you’ll be able to determine and adjust on the back end.

Two common Margin of Error (MoE) calculations:

For a proportion: MoE = z* × √( p-hat × (1 − p-hat) / n )
For a mean: MoE = t* × s / √n
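
If you want to compute a margin of error yourself, here’s a sketch using only the Python standard library. Note the exit-poll sample size of 1,000 is a made-up number for illustration:

```python
import math
from statistics import NormalDist

def moe_proportion(p_hat, n, confidence=0.95):
    """Margin of error for a sample proportion: z* * sqrt(p_hat * (1 - p_hat) / n)."""
    z_star = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # upper critical value
    return z_star * math.sqrt(p_hat * (1 - p_hat) / n)

# John Cena's 46% from a hypothetical exit poll of 1,000 voters
print(round(moe_proportion(0.46, 1_000), 3))  # ~0.031, about "3 points"
```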

The confidence level

“Why can’t we just make it 100% confidence?” Great question! And one I’ve heard many times. Without going into the details of sampling distributions and normal curves, I’ll give you an example:

Assume the “average” adult female panda weighs “around 160 pounds.” To be 100% confident that we’ve created an interval that includes the TRUE mean weight, we’d have to use a range that includes all possible values of mean weights. This interval might be from, say, 100 to 400 pounds – maybe even 50 to 1000 pounds. Either way, the interval would have to be ridiculously large for you to be 100% confident you’ve captured the true mean. And with a range that wide, have you actually delivered any insightful message?

Again, consider a confidence interval like a fishing net, the width of the net determined by the margin of error – more specifically, the confidence level (since that’s about all you have control over once a sample has been taken). This means a LARGER confidence level produces a WIDER net and a LOWER confidence level produces a NARROWER net (everything else equal).

For example: A 99% confidence interval fishing net is wider than a 95% confidence interval fishing net. The wider net catches more fish in the process.

But if the purpose of the confidence interval is to narrow down our search for the population parameter, then we don’t necessarily want more values in our “net”. We must strike a balance between precision (meaning fewer possibilities) and confidence.

Once a confidence level is established, the corresponding t* or z* value — called an upper critical value — is used in the calculation for the margin of error. If you’re interested in how to calculate the z* upper critical value for a 95% z-interval for proportions, check out this short video using the Standard Normal Distribution.

The standard error

This is the part of the margin of error you most likely won’t get to control.

Keeping with the panda example, if we are interested in the true mean weight for the adult female panda then the standard error is the standard deviation of the sampling distribution of sample mean weights. Standard error, a measure of variability, is based on a theoretical distribution of all possible sample means. I won’t get into the specifics in this post but here is a great video explaining the basics of the Central Limit Theorem and the standard error of the mean. 

If you’re using proportions, such as in our John Cena election example, here is my favorite video explaining the sampling distribution of the sample proportion (p-hat).

As I mentioned, you will most likely NOT have much control over the standard error portion of the margin of error. But if you did, keep this PRO TIP in your pocket: a larger sample size (n) will reduce the width of the margin of error without sacrificing the level of confidence.

The Interpretation

Back to the panda weights example here. Let’s assume we used a 95% confidence interval to estimate the true mean weight of all adult female pandas:

Interpreting the Interval

Typically the confidence interval is interpreted something like this: “We are 95% confident the true mean weight of an adult female panda is between 150 and 165 pounds.”

Notice I didn’t use the word probability. At all. Let’s look at WHY:

Interpreting the Level

The confidence level tells us: “If we took samples of this same size over and over again (think: in the long run) using this same method, we would expect to capture the true mean weight of an adult female panda 95% of the time.” Notice this IS a probability. A 95% probability of capturing the true mean exists BEFORE taking the sample. Which is why I did NOT reference the actual interval values. A different sample would produce a different interval. And as I said in the beginning of this post, we don’t actually know if the true mean is in the interval we calculated.

Well then, what IS the probability that my confidence interval – the one I calculated between the values of 150 and 165 pounds – contains the true mean weight of adult female pandas? Either 1 or 0. It’s either there, or it isn’t. Because — and here’s the tricky part — the sample was already collected before we did the math. NOTHING in the math can change the fact that we either did or didn’t collect a representative sample of the population. OUR CONFIDENCE IS IN THE DATA COLLECTION METHOD – not the math.

The numbers in the confidence interval would be different using a different sample.

Visualizing the Confidence Interval

Let’s assume the density curve below represents the actual population distribution of weights of all adult female pandas. In this made-up example the mean weight of all adult female pandas is 156.2 pounds with a population standard deviation of 13.6 pounds.

Beneath the population distribution are the simulation results of 300 samples of n = 20 pandas (sampled using an identical sampling method each time). Notice that roughly 95% of the intervals cover the true mean — capturing 156.2 within the interval (the green intervals) while close to 5% of intervals do NOT capture the 156.2 (the red intervals).

Pay close attention to the points made by the visualization above:

  • Each horizontal line represents a confidence interval constructed from a different sample
  • The green lines “capture” or “cover” the true (unknown) mean while the red lines do NOT cover the mean.
  • If this was a real situation, you would NOT know if your interval contained the true mean (green) or did not contain the true mean (red).

The logic of confidence intervals is based on long-run results — frequentist inference. Once the sample is drawn, the resulting interval either does or doesn’t contain the true population parameter — a probability of 1 or 0, respectively. Therefore, the confidence level does not imply the probability the parameter is contained in the interval. In the LONG run, after many samples, the resulting intervals will contain the mean C% of the time (where C is your confidence level).
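
If you’d like to reproduce a simulation like this yourself, here’s a minimal Python sketch assuming the made-up panda population above and treating the population SD as known (so a z-interval applies):

```python
import random
from statistics import NormalDist, mean

MU, SIGMA = 156.2, 13.6        # the made-up panda population from above
N, CONF, TRIALS = 20, 0.95, 300
z_star = NormalDist().inv_cdf(1 - (1 - CONF) / 2)  # 1.96 for 95%
moe = z_star * SIGMA / N ** 0.5                    # margin of error, sigma known

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    x_bar = mean(sample)
    covered += (x_bar - moe) <= MU <= (x_bar + moe)  # did this net catch the fish?

print(f"{covered} of {TRIALS} intervals covered the true mean")  # roughly 95%
```

Run it a few times: the count of “green” intervals bounces around 285 of 300, but for any single interval you still don’t know whether it was one of the lucky ones.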

So in what are we placing our confidence when we use confidence intervals? Our confidence is in the procedures used to find our sample. Any sampling bias will affect the results – which is why you don’t want to use confidence intervals with data that may not represent the population.

The Box-and-Whisker Plot For Grown-Ups: A How-to

Author’s note: This post is a follow-up to the webinar, Percentiles and How to Interpret a Box-and-Whisker Plot, which I created with Eva Murray and Andy Kriebel. You can read more on the topic of percentiles in my previous posts.

No, You Aren’t Crazy.

That box-and-whisker plot (or, boxplot) you learned to read/create in grade school probably IS different from the one you see presented in the adult world.


The boxplot on the top originated as the Range Bar, published by Mary Spear in the 1950s. The boxplot on the bottom is a modification created by John Tukey to account for outliers. Source: Hadley Wickham

As a former math and statistics teacher, I can tell you that (depending on your state/country curriculum and textbooks, of course) you most likely learned how to read and create the former boxplot (or, “range bar”) in school for simplicity. Unless you took an upper-level stats course in grade school or at University, you may have never encountered Tukey’s boxplot in your studies at all.

You see, teachers like to introduce concepts in small chunks. While this is usually a helpful strategy, students lose when the full concept is never developed. In this post I walk you through the range bar AND connect that concept to the boxplot, linking what you’ve learned in grade school to the topics of the present.

The Kid-Friendly Version: The Range Bar

In this example, I’m comparing the lifespans of a small, non-random set of animals. I chose this set of animals based solely on convenience of icons. Meaning, conclusions can only be drawn on animals for which Anna Foard has an icon. I note this important detail because, when dealing with this small, non-random sample, one cannot infer conclusions on the entire population of all animals.

1) Find the quartiles, starting with the median

Quartiles break the dataset into 4 quarters. Q1, median, Q3 are (approximately) located at the 25th, 50th, and 75th percentiles, respectively.

Finding the median requires finding the middle number when values are ordered from least to greatest. When there is an even number of data points, the two numbers in the middle are averaged.

Here the median is the average of the cat’s and dog’s longevity. NOTE: With an even number of values, if the two middle values were different, the lower of the two would sit at the 50th percentile and would not be the same measure as the median.

Once the median has been located, find the other quartiles in the same way: The middle value in the bottom set of values (Q1), then the middle value in the top set (Q3).

Here we can easily see when quartiles don’t match up exactly with percentiles: Even though Q1 = 8.5, the duck (7) is in the 25th percentile while the pig is above the 25th percentile. And the sheep is in the 75th percentile despite the value of 17.5 at Q3.

2) Use the Five Number Summary to create the Range Bar

The first and third quartiles build the “box”, with the median represented by a line inside the box. The “whiskers” extend to the minimum and maximum values in the dataset:


But without the points:


The Range Bar probably looks similar to the first box-and-whisker plot you created in grade school. If you have children, it is most likely the first version of the box-and-whisker plot that they will encounter.

Example from an elementary school Pinterest board

Suggestion:

Since the kids’ version of the boxplot does not show outliers, I propose teachers call this version “The Range Bar,” as it was originally dubbed, so as not to confuse those reading the chart. After all, someone looking at this version of a boxplot may not realize it does not account for outliers and may draw the wrong conclusion.

The Adult Version: The Boxplot

The only difference between the range bar and the boxplot is the view of outliers. Since this version requires a basic understanding of the concept of outliers and a stronger mathematical literacy, it is generally introduced in a high school or college statistics course.

1) Calculate the IQR

The interquartile range is the difference, or spread, between the third and first quartile reflecting the middle 50% of the dataset. The IQR builds the “box” portion of the boxplot.

IQR = Q3 – Q1 = 17.5 – 8.5 = 9 years

2) Multiply the IQR by 1.5

1.5 × IQR = 1.5 × 9 = 13.5

3) Determine a threshold for outliers – the “fences”

1.5*IQR is then subtracted from the lower quartile and added to the upper quartile to determine a boundary or “fences” between non-outliers and outliers.

Lower fence: Q1 – 1.5 × IQR = 8.5 – 13.5 = –5
Upper fence: Q3 + 1.5 × IQR = 17.5 + 13.5 = 31

4) Consider values beyond the fences outliers


Since no animal’s lifespan falls below –5 years, a low-value outlier is impossible in this particular set of data; however, one animal in this dataset lives beyond 31 years – a high-value outlier.
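
Here’s the whole fence calculation as a small Python function. The lifespan list is a hypothetical stand-in I made up, not the actual icon dataset:

```python
def tukey_fences(values):
    """Outlier fences via Tukey's 1.5*IQR rule (quartiles computed excluding the median)."""
    s = sorted(values)
    n, half = len(s), len(s) // 2

    def middle(xs):
        mid = len(xs) // 2
        return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

    q1, q3 = middle(s[:half]), middle(s[half + n % 2:])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Hypothetical lifespans standing in for the animal dataset
lifespans = [7, 8, 9, 10, 12, 14, 16, 17, 18, 40]
lo, hi = tukey_fences(lifespans)
print(lo, hi)                                      # -3.0 29.0
print([x for x in lifespans if x < lo or x > hi])  # [40] -- flagged as an outlier
```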

5) Build the boxplot

Here we find the modification on the “range bar” – the whiskers only extend as far as non-outlier values. Outliers are denoted by a dot (or star).

The adult version also allows us to apply technology, so I left the points in the view to give a fuller picture of the distribution.

Advantage: Boxplot

In an academic setting, I use boxplots a great deal. When teaching AP Statistics, they are helpful to visualize the data quickly by hand as they only require summary statistics (and outliers). They also help students compare and visualize center, spread, and shape (to a degree).

When we get into the inference portion of AP Stats, students must verify assumptions for certain inference procedures — often those procedures require data symmetry and/or absence of outliers in a sample. The boxplot is a quick way for a student to verify assumptions by hand, under time constraints. When coaching doctoral candidates through the dissertation stats, similar assumptions are verified to check for outliers — using boxplots.

The TI-83 Plus: my portable visualization tool

Boxplot Advantages:

  • Summarizes variation in large datasets visually
  • Shows outliers
  • Compares multiple distributions
  • Indicates symmetry and skewness to a degree
  • Simple to sketch
  • Fun to say

I took my students on a field trip to play laser tag. Here, boxplots help compare the distributions of tags by type AND compare how Coach B measures up to the students.


So What Could Go Wrong?

Unfortunately, boxplots have their share of disadvantages as well.

Consider:

A boxplot may show summary statistics well; however, clusters and multimodality are hidden.

In addition, a consumer of your boxplot who isn’t familiar with the measures required to construct one will have difficulty making heads or tails of it. This is especially true when your resulting boxplot looks like this:

The median value is equal to the upper quartile. Would someone unfamiliar recognize this?

Or this:

The upper quartile is the maximum non-outlier value in this set of data.

Or what about this?

No whiskers?! Dataset values beyond the quartiles are all outliers.


Boxplot Disadvantages:

  • Hides the multimodality and other features of distributions
  • Confusing for some audiences
  • Mean often difficult to locate
  • Outlier calculation too rigid – “outliers” may be industry-based or case-by-case

Variations

Over the years, multiple boxplot variations have been created to display parts (or all) of the distribution’s shape and features.

No-Whisker Box Plot. Source: Andy Kriebel

Going For It

Box-and-whisker plots may be helpful for your specific use case, though not intuitive for all audiences. It may be helpful to include a legend or annotations to help the consumer understand the boxplot.


Check Yourself: Ticket out the Door

No cheating! Without looking back through this post, check your own understanding of boxplots. The answer can be found in the #MakeoverMonday webinar I recorded with Eva Murray a couple of weeks ago.


Cartoon Source: xkcd

How to Build a Cumulative Frequency Distribution in Tableau

When my oldest son was born, I remember the pediatrician using a chart similar to the one below to let me know his height and weight percentile. That is, how he measured up relative to other babies his age. This is a type of cumulative relative frequency distribution. These charts help determine relative position of one data point to the rest of the dataset, showing an accumulating percent of observations for each value. In this case, the chart helps determine how a child is growing relative to other babies his age.

 


Source: CDC

I decided to figure out how to create one in Tableau. Based on the types of cumulative frequency distributions I was used to when I taught AP Stats, I first determined I wanted the value of interest on the horizontal axis and the percents on the vertical axis.

Make a histogram

Using a simple example – US President age at inauguration – I started with a histogram so I could look at the overall shape of the distribution:



Adjust bin size appropriately

From here I realized I already had what I needed in my view – discrete ages on the x-axis and counts of ages on the y-axis. For a wider range of values I would want a wider bin size, but in this situation I needed to resize bins to 1, representing each individual age.



Change the marks from bars to a line


Create a table calculation

Click on the green pill on the rows (the COUNT) and add a table calculation.


Actually, TWO table calculations

First choose “Running Total”, then click on the box “add secondary calculation”:


Next, choose “percent of total” as the secondary calculation:


Polish it up

Add drop lines…


…and CTRL drag the COUNT (age in years) green pill from the rows to labels. Click on “Label” on the marks card and change the marks to label from “all” to “selected”.


 

 

And there you have it.


Interpreting percentiles

Percentiles describe the position of a data point relative to the rest of the dataset using a percent. That’s the percent of the rest of the dataset that falls below the particular data point. Using the baby weights example, the percentile is the percent of all babies of the same age and gender weighing less than your baby.

Back to the US president example.

Since I know Barack Obama was 47 when inaugurated, let’s look at his age relative to the other US presidents’ ages at inauguration:


13.3% of US presidents were younger than Barack Obama when inaugurated. Source: The Practice of Statistics, 5th Edition

And another way to look at this percentile: 86.7% of US presidents were older than Barack Obama when inaugurated.
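
In code, a percentile lookup is one line. The list of ages below is an illustrative stand-in, not the complete list of presidents:

```python
# Illustrative subset -- not the full list of inauguration ages
ages = [42, 46, 47, 49, 51, 52, 54, 55, 56, 57, 61, 64, 65, 68, 70]

def percentile_of(value, data):
    """Percent of observations falling below the given value."""
    return 100 * sum(x < value for x in data) / len(data)

print(percentile_of(47, ages))  # percent of these made-up presidents younger than 47
```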

Thank you for reading and have an amazing day!

-Anna

The Ways of Means

As a follow-up to last week’s webinar with Andy Kriebel and Eva Murray, I’ve put together just a few common examples of means other than the ubiquitous arithmetic mean. A great deal of work on each of these topics can be found throughout the interwebs if your Googling fingers get itchy.

The Weighted Mean

My favorite of all the means. Sometimes called expected value, or the mean of a discrete random variable.

Grades

When computing a course grade or overall GPA, the weighted mean takes into account each possible outcome and how often that outcome occurs in a dataset. A weight is applied to each possible outcome — for example, each type of grade in a course — then added together to return the overall weighted mean. And since Econ was my favorite course in college…


If you have an exam average of 80, quiz/homework average of 65 and lab average of 78, what is your final grade? (Hint: Don’t forget to change percentages to decimals.)


If your professor’s software is forgiving, that’s a 76.
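
Since the syllabus graphic isn’t reproduced here, the category weights below are my assumption (exams 50%, quizzes/homework 25%, labs 25%); they happen to land on the 76 mentioned above:

```python
# Assumed category weights -- the original syllabus image isn't shown here
weights = {"exams": 0.50, "quizzes/homework": 0.25, "labs": 0.25}
averages = {"exams": 80, "quizzes/homework": 65, "labs": 78}

# Weighted mean: multiply each outcome by its weight, then add
final_grade = sum(weights[k] * averages[k] for k in weights)
print(final_grade)  # 75.75 -> a forgiving professor rounds up to 76
```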

Vegas

Weighted means are also effective for assessing risk in insurance or gambling. Also known as the expected value, it considers all possible outcomes of an event and the probability of each possible outcome. Expected values reflect a long-term average. Meaning, over the long run, you would expect to win/lose this amount. A negative expected value indicates a house advantage and a positive expected value indicates the player’s advantage (and unless you have skills in the poker room, the advantage is never on the player’s side). An expected value of $0 indicates you’ll break even in the long-run.

I’ll admit my favorite casino game is American roulette:


As you can see, the “inside” of the roulette table contains numbers 1-36 (18 of which are red, the other 18 black). But WAIT! Here’s how they fool you — see the numbers “0” and “00”? 0 and 00 are neither red nor black, though they do count towards the 38 total outcomes on the roulette board. When the dealer spins the wheel, a ball bounces around and chooses from numbers 1 thru 36, 0 AND 00 — that’s 38 possible outcomes.

Let’s say you wager $1 on “black”. If the winning number is, in fact, black, you keep your original dollar AND win another (putting you “up” $1). Unsuspecting victims new to the roulette table think they have a 50/50 shot at black; however, the probability of “black” is actually 18/38 and the probability of “not black” is 20/38.

Here’s how it breaks down for you:

E(X) = (+$1)(18/38) + (–$1)(20/38) = –$2/38 ≈ –$0.05 per $1 bet

Just as in the grading example, each outcome (dollars made or lost) is first multiplied by its weight, where the weight here is the theoretical probability assigned to that outcome. After multiplying, add each product (outcome times probability) together. Note: Don’t divide at the end like you’d do for the arithmetic mean – it’s a common mistake, but easy to remedy if you check your work.
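
As a quick check of that breakdown, here’s the expected value of the $1 bet on black in Python:

```python
from fractions import Fraction

p_black = Fraction(18, 38)      # 18 black numbers out of 38 outcomes
p_not_black = Fraction(20, 38)  # 18 red + 0 + 00

# Multiply each outcome by its probability, then add (no dividing!)
ev = (+1) * p_black + (-1) * p_not_black
print(ev, float(ev))  # -1/19, about -$0.053 lost per $1 bet in the long run
```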

Some Gambling Advice: The belief that casino games adhere to some “law of averages” in the short run is called the Gambler’s Fallacy. Just because the ball on the roulette wheel landed on 5 red numbers in a row doesn’t mean it’s time for a black number on the next spin! I watched a guy lose $300 on three spins of the wheel because, as he exclaimed, “Every number has been red! It’s black’s turn! It’s the law of averages!”

The Geometric Mean

The geometric mean is useful when you’re looking to average a factor (multiplier) applied over time – like investment growth or compound interest.


I enjoyed my finance classes in school, especially the part about how compound interest works. If you think about compound interest over time, you may recall the growth is exponential, not linear. And exponential growth indicates that in order to grow from one value to the next, a constant was multiplied (not added).

As a basic example, let’s say you invest $100,000 at the start of a 4-year period. For simplicity, let’s say the growth rate followed the pattern +40%, -40%, +40%, -40% over the 4 years. At the end of 4 years, you’ve got $70,560 left.


So you know your 4-year return on the investment is: (70,560 – 100,000)/100,000 = -.2944 or -29.44%. But if you averaged out the 4 growth rates using the arithmetic mean, you’d have 0%. Which is why the arithmetic mean doesn’t make sense here.

Instead, apply the geometric mean:

geometric mean = (1.4 × 0.6 × 1.4 × 0.6)^(1/4) = (0.7056)^(1/4) ≈ 0.9165 — an average annual growth factor of 0.9165, or roughly –8.35% per year

Note: Multiplying by .4 (or -.4) only returns the amount gained (or lost). Multiplying by 1.4 (or .6) returns the total amount, including what was gained (or lost).
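
Here’s the same calculation in Python:

```python
from math import prod

factors = [1.40, 0.60, 1.40, 0.60]  # +40%, -40%, +40%, -40% as multipliers

geo_mean = prod(factors) ** (1 / len(factors))
print(round(geo_mean, 4))  # 0.9165 -> about -8.35% average annual growth

# Sanity check: applying the average factor 4 times recovers the ending balance
print(round(100_000 * geo_mean ** len(factors)))  # 70560
```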

The Harmonic Mean

You drive 60 mph to grandma’s house and 40 mph on the return trip. What was your average speed?


Let’s dust off that formula from physics class: speed = distance/time

Since the speed you drive plays into the time it takes to cover a certain distance, that formula may clue you in as to why you can’t just take an arithmetic mean of the two speeds. So before I introduce the formula for harmonic mean, I’ll combine those two trips using the formula for speed to determine the average speed.

The set-up: Distance doesn’t matter here, so we’ll use 1 mile. Feel free to use a different distance to verify, but you’d be reducing fractions a good bit along the way, and I’m all about efficiency. Use a distance of 1 mile for each leg of the journey and the two speeds of 40 mph and 60 mph.

First determine the time it takes to go 1 mile by reworking the speed formula:

time = distance / speed, so the 40 mph leg takes 1/40 hour and the 60 mph leg takes 1/60 hour

To determine the average speed, we’ll combine the two legs of the trip using the speed formula (which will return the overall, or average, speed of the entire trip):

average speed = total distance / total time = 2 / (1/40 + 1/60) = 2 / (1/24) = 48 mph

If, instead of driving equal distances, you were looking for the average speed it took you to drive two equal amounts of time, the arithmetic mean WOULD be useful.

The formula for the harmonic mean looks like this:

harmonic mean = n / (1/r1 + 1/r2 + … + 1/rn)

Where n is the number of 1-mile trips, in this example, and the rates are 40 and 60 mph:

harmonic mean = 2 / (1/40 + 1/60) = 2 / (5/120) = 48 mph

If you scroll up and check out that last step using the speed formula (above), you’ll see the harmonic mean formula was merely a clean shortcut.
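
And in Python, the shortcut already lives in the standard library:

```python
from statistics import harmonic_mean

speeds = [40, 60]  # mph, one for each 1-mile leg

# n / (1/r1 + 1/r2) -- the same arithmetic as combining the legs by hand
print(len(speeds) / sum(1 / r for r in speeds))  # 48.0
print(harmonic_mean(speeds))                     # 48, via the stdlib shortcut
```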


If you want more information about measures of center, check out the previous blog post — Mean, Median, and Mode: How Visualizations Help Measure What’s Typical

If your organization is looking to expand its data strategy, fix its data architecture, implement data visualization, and/or optimize using machine learning, check out Velocity Group.