
Communication Research Class Media Placement Assignment, Part 2: Doing Data Entry and Creating a Data Legend

This post is long overdue (I feel like I say that a lot!).

It is a follow-up to a post I published in January titled “Here’s my communication research class assignment on analyzing media placement.” Recently, I received a public comment on that post from a professor I greatly admire, Kelli Burns. She pointed out that the project assignment (see the bottom of this post for that document) notes at the bottom that additional work will be assigned the following day, but I never discuss what that entails in the blog post. I apologize to everyone who read that post because, in that sense, it was incomplete in terms of explaining the project.

Thank you to Dr. Burns for bringing this to my attention. With this in mind, I’ve decided to do a much-delayed follow up post, turning that initial post into a two-part series.

So, if you haven’t read the first post in this series, I encourage you to go back and do so. If you just want to know about teaching students to do data entry from coded data and to create data legends, then read on my friend!

The Set Up

To review: in the first post, I provided an assignment in which students download a data set of media articles using the Meltwater social intelligence software. Their task is to conduct a quantitative content analysis using a coding sheet (which I provided in that first post). They are then told to do all of the coding at home, dividing the articles up as evenly as possible among their team.

On the second day of class, students return with completed coding sheets for their share of the articles. I instruct them to download the coding sheet, copy it onto a new page of their document for each article they need to code, and code each article by highlighting the answers on the coding sheet. For example, a student assigned 30 articles would return with a digital copy of an MS Word document with 30 pages, each page containing a completed coding sheet.

Problems

All good, right? They just need to get their coded data into something that SPSS can read… because that always goes smoothly! 😛

This whole project is aimed at introducing students to quantitative research and all we’re doing is running descriptive statistics. But here’s the problem:

As you probably remember learning in a quantitative methods class some years ago (let’s not age ourselves), the numbers in a data set don’t mean anything themselves. We, the researchers, assign meaning to them. This is an idea that we have to teach the students.

Here’s a simple example. Let’s say that we are coding for eye color. We assign the following numbers for coding purposes:

  • 1 = brown
  • 2 = blue
  • 3 = hazel
  • 4 = green
  • … and so forth until we have an exhaustive list.

But when a student runs descriptive statistics and finds that variable 1 has a mode of 3, they ask, “What the heck does that mean?”

The problems with this are:

  1. They don’t know what variable 1 corresponds to on their coding sheet (in this example, eye color).
  2. They don’t know what a mode of 3 represents (that the most common eye color is hazel).
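A tiny sketch with made-up data shows the gap: the statistic by itself is just a number until we map it back to our coding scheme.

```python
from statistics import mode

# Hypothetical coded responses for the eye-color variable
# (1 = brown, 2 = blue, 3 = hazel, 4 = green).
eye_color = [3, 1, 3, 2, 3, 4, 1, 3]

m = mode(eye_color)  # the raw statistic SPSS would report
legend = {1: "brown", 2: "blue", 3: "hazel", 4: "green"}
print(m, "=", legend[m])  # 3 = hazel
```

Without the `legend` lookup at the end, the student is stuck staring at a “3.”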

Oh, and keep in mind that the students haven’t done any data entry yet. They don’t yet have their data in a spreadsheet format that can be imported into SPSS. So, there’s another problem: most students have never entered data into a spreadsheet before.

What They Need to Do

  1. Get their coded data into a spreadsheet format that can be analyzed in SPSS.
  2. Create a data legend so they can interpret the SPSS output.

What They Need to Know About Measurements First

In my class, students need to know the four common types of measurement – nominal, ordinal, interval, and ratio – as the Netflix assignment (and other assignments to follow) uses them. Students in our major are not required to take any statistics class, and thus this is new information to the vast majority of them. If your students know this, you can skip it. If you need a refresher, here is a quick summary that explains each measurement type and its strengths and limitations. I teach these concepts with a lecture and an in-class activity to test their application. I do this earlier in the semester, before we get into the Netflix assignment.

Teaching Students Basic Data Entry

This part is pretty simple. As a reminder, the students are working in teams on this project. So the team needs to create a shared Google spreadsheet in which they enter all the coded data from their coding sheets. They just need to open their Word document alongside the shared Google spreadsheet and enter the corresponding numbers from the coding sheet for each article coded. The key thing is that in this spreadsheet the columns are the questions (i.e., variables) on the coding sheet and the rows are the individual articles (as in the image below). Otherwise, it won’t import into SPSS correctly. (Note: You can import a CSV file into SPSS. So, I have my students download the Google spreadsheet in CSV format and import that into SPSS.)
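To make the shape concrete, here is a small sketch of writing coded data in that articles-as-rows, variables-as-columns layout to a CSV file that SPSS can then import. The variable names and coded values are made-up examples, not the actual coding sheet.

```python
import csv

# Hypothetical coding-sheet variables (columns) and coded articles (rows).
header = ["tone", "prominence", "brandmention"]
articles = [
    [1, 2, 1],   # article 1
    [3, 1, 0],   # article 2
    [2, 2, 1],   # article 3
]

# One row per article, one column per coding-sheet question.
with open("coded_data.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(header)
    w.writerows(articles)
```

The resulting file is exactly what the students build by hand in the shared Google spreadsheet before downloading it as a CSV.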

But before they can enter their data, they need a data legend. So…

Teaching Students to Create Data Legends

A data legend lets the researcher quickly put meaning to the variables and numbers in their results.

Creating a data legend can be done in SPSS. But for time purposes, and because students won’t always be using SPSS, I prefer to do it another way. It is quite useful, as I can have the data legend right in front of me on a piece of paper.

Simply, have your students type or write up their data legend and keep it handy.

Each variable needs a descriptive label that’s under 13 characters (13 characters is the max that SPSS allows you to use in describing a variable).

Each possible numerical value of that variable needs a name, which is the simplest possible description of what that number means. So, in our example above, if 1 equaled brown eye color, 2 equaled blue eye color and so forth, then we write it up to look like this:

variable:

eyecolor  (1) brown, (2) blue, (3) hazel, (4) green.

In the above, I have given the variable for eyecolor the label eyecolor. The numbers in parentheses represent the numerical value that I have assigned to the possible responses.

For scale questions, the number equals the number on the scale. Example: On a scale of 1-7 where 1 means not at all, and 7 means very much so, how much do you like string cheese?

stringcheese    (1) not at all, (2) 2, (3) 3, (4) 4, (5) 5, (6) 6, (7) very much so.

So, the instructions for creating a data legend are quite simple:

On a separate file or paper:

  1. Assign each variable a label (max 13 characters). So, “schoolstatus”, “favicecream”, and “rankicecream” work.
  2. If it is nominal or ordinal, label it as such in parentheses (this is optional, but I like to do it to remind students what type of variable it is).
  3. With each label, make a list that indicates what number we have assigned to each term within our measurement, placing the number in parentheses.
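The same idea can be expressed in code, which some readers may find a handy check on their paper legend. The variables, types, and values below are hypothetical examples in the spirit of the ice cream questions above.

```python
# A hypothetical data legend kept in code form. Variable labels are
# 13 characters or fewer; each entry maps a numeric code to its meaning.
legend = {
    "schoolstatus": {"type": "ordinal",
                     "values": {1: "freshman", 2: "sophomore",
                                3: "junior", 4: "senior"}},
    "favicecream": {"type": "nominal",
                    "values": {1: "chocolate", 2: "vanilla",
                               3: "strawberry"}},
}

def decode(variable, code):
    """Translate a numeric code back into plain language."""
    return legend[variable]["values"][code]

print(decode("schoolstatus", 3))  # junior
```

Now, when SPSS reports a mode of 3 for schoolstatus, the legend immediately says “junior.”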

Of course, there are some caveats when dealing with different measurement types. Indeed, ordinal data and ‘check all that apply’ questions are tough, and can be a bit frustrating when doing data entry. That’s why I’ve provided below a handout I created and use in class to teach students how to create data legends using the different types of measurements. This walks them through not only how to create a data legend for each variable but also how to enter that data correctly from their coding sheet into their spreadsheet, so that the spreadsheet can be analyzed in SPSS or elsewhere.

Activity

Once you walk students through this process, you can give them an activity to test for understanding and application. If the students don’t enter their data correctly now, it is going to be a mess when they try to import it into SPSS. So while this may take some valuable class time or may serve as homework, I recommend assigning the data entry and data legend activity (see below) and making sure the students entered their data correctly.

In the activity, it is important to clarify to students that, in part 2 of the activity, the survey responses are separated by semi-colons such that the first respondent’s answers are: a) digital film, b) freshman, c) 4, and d) Domino’s, Pizza Hut, Pizza Perfection.

Once the students have created the data legend and entered it into the table on the activity sheet, their answers should look like this:

Data Legend

Spreadsheet

Once your students have this down, set them loose to do their data entry. You may want to assign that as homework. Then you can give them a lecture on descriptive statistics and work with SPSS or whatever software you’ll be doing the analysis in. Help the students interpret what the data means by pointing them to their data legend.

I hope this blog post was helpful. Again, if you have not yet done so, check out the first post in this series to learn more about the Netflix media placement assignment. If you want to know more about my applied communication research class, you can see all blog posts related to communication research here.

Data Entry and Data Legend Handout for Students

Data Entry and Data Legend Activity for Students

Project 1: Media Placement Assignment Handout (from previous blog post cited above).

-Cheers!

Matt

credits: Photo public domain from Pexels

Teaching Students to Use iPads for Survey Data Collection (2 of 2)

In my last post, I wrote about a Comm Research project where students use iPads for survey data collection. This is my favorite of the three projects we do in my Communication Research class (see all posts on Comm 435; see syllabus).

This week, I want to follow up by discussing how to program the surveys to work on the iPads. I’ll talk through how I teach all of this in class and through activities.

Lastly, I’ll explain how I prepare the data for use in SPSS.

Once students have created their surveys, we need to get them onto ONA.io.

Programming surveys to work on ONA.io – the free, open-source tool used by my class and researchers around the world – is a little tricky at first. It follows the XLS forms format. But once you get the hang of it, it is super easy, and it is quick to teach and learn.

I go over this online Lab Guide (http://bit.ly/435_lab_digitalsurvey) that I created on how to program XLS forms in class. I then provide students with a practice activity to create a survey in Excel or Google Spreadsheets. The activity asks students to create:

1) A question asking how many years they have been in school

2) A check all that apply question – I usually pick something fun like their favorite movies from a list

3) A Likert-style question. Ex: How much do they like binge-watching on Netflix?

In sum, they practice creating an integer, select_multiple, and select_one question.
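Under the XLSForm conventions the lab guide covers, those three practice questions might be sketched like this. In class the form lives in the “survey” and “choices” sheets of an Excel or Google spreadsheet; the question wording, list names, and choice labels below are my own made-up examples, written out here as CSV rows.

```python
import csv

# The three practice questions expressed as XLSForm rows.
survey = [
    ["type",                   "name",       "label"],
    ["integer",                "years",      "How many years have you been in school?"],
    ["select_multiple movies", "fav_movies", "Which of these movies do you like?"],
    ["select_one agree7",      "binge",      "How much do you like binge-watching on Netflix?"],
]
# Answer options live on a separate "choices" sheet, keyed by list_name.
choices = [
    ["list_name", "name",   "label"],
    ["movies",    "movie1", "Jaws"],
    ["movies",    "movie2", "The Godfather"],
    ["agree7",    "1",      "Not at all"],
    ["agree7",    "7",      "Very much so"],
]

for filename, rows in [("survey.csv", survey), ("choices.csv", choices)]:
    with open(filename, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```

Note how `select_multiple movies` and `select_one agree7` point at the matching `list_name` on the choices sheet; that linkage is exactly where the underscore-versus-space errors mentioned below tend to creep in.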

Once students get the hang of it, they log into an ONA.io account I create for the class. Next, they upload their practice survey to test in class using our department’s iPads. But, this could be done on a phone or even a computer itself (Instructions on how to do this are in the lab guide).

The #1 thing is that everything has to be formatted exactly right. Little errors, like putting a space instead of an underscore in “list_name”, will result in ONA.io kicking the survey back and telling you there is an error. If a mistake is made, no problem. Just fix your form and re-upload.

I check to make sure everything is done correctly. This saves time when they program their own surveys. If everything is good, I give students lab time to work on formatting their surveys and help out as needed.

After everything has been uploaded successfully – this usually takes time outside of class, so I make it due the following class – students are ready to go out into the field. This is where the fun happens!

Students always get great feedback when they use iPads to collect survey data. People tend to be interested in what they’re doing and happy to participate. Some students this year told me that people came up to them around campus and asked if they could participate. That is much different than the usual online survey where we often struggle to get respondents! I can’t express how rewarding it is to see students go out into the field, collect data, and come back having gathered data no one else has before. For most of them, this is their first time doing data collection of any kind. And so while the class is tough and a lot of work, it is rewarding. You can see the ‘aha’ moments the students have when they start drawing inferences from their data.

Preparing Data for Analysis in SPSS

If you only want to look at summaries of responses, you can check those out in ONA.io. But if you want to analyze the data yourself, you’ve got to convert it from the labels students gave it to the numbers SPSS needs.

For example, suppose the question asks participants their favorite ice cream, and the ‘choices’ in our XLS code are:

[Screenshot from the lab guide: the ‘choices’ sheet of the XLS form]

If the participant answers “Vanilla,” the data collected would be “icecream2.”

But SPSS can’t analyze “icecream2.” It can only analyze a number. So, we need every instance where a participant selected Vanilla to be recorded as simply “2” in SPSS.

Here’s how to quickly do this:

Download the Excel file of the completed survey data and open it in Excel. Use find and replace to replace “icecream” with “” (that is, with nothing – no spaces; just leave the replace field blank). Excel will strip “icecream” from the file, and you’re left with the number for each response, such that “icecream2” becomes “2”. Repeat this step for each question. For check-all-that-apply questions, ONA.io records “FALSE” for answer choices left blank and “TRUE” for instances where the participant checked the answer choice. For example, if the question was “Check all your favorite ice cream flavors” and the participant checked “Vanilla,” ONA would record “TRUE”; if they left it blank, ONA would record “FALSE.” These can easily be prepared for SPSS by replacing FALSE with “0” and TRUE with “1”.
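For anyone who would rather script this cleanup than do the find-and-replace by hand, here is a minimal sketch. It assumes the hypothetical “icecream” prefix and the TRUE/FALSE check-all columns from the example above; a real file would need one prefix per question.

```python
# Scripted version of the find-and-replace cleanup for one cell.
def clean_cell(value, prefix="icecream"):
    if value == "TRUE":        # checked answer choice -> 1
        return "1"
    if value == "FALSE":       # unchecked answer choice -> 0
        return "0"
    if value.startswith(prefix):
        return value[len(prefix):]   # "icecream2" -> "2"
    return value                     # leave everything else alone

row = ["icecream2", "TRUE", "FALSE", "freshman"]
print([clean_cell(v) for v in row])  # ['2', '1', '0', 'freshman']
```

Applying `clean_cell` across every cell of the downloaded spreadsheet produces the all-numbers file SPSS expects.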

Admittedly, this step is the drawback of using XLS forms. While a little tedious, it is quick and easy to do. Considering the advantages, I don’t mind taking 20 minutes of my time cleaning the data for my students.

When done, I send the student teams their data and we work on analyzing them in class.

 

Well that’s all for now! I hope you enjoyed this tutorial and consider using iPads for survey data collection in your research class, or other classes where surveys could prove valuable!

Here at Shepherd, finals week starts this week. I hope everyone has a great end to the semester!

Using iPads for Survey Data Collection in the Communication Research Class

Surveys are a common method used in communication research class projects. Since I started teaching this class at Shepherd University, I’ve added a fun, cool feature that really brings the survey data collection process to life!

Students in my Comm 435 Communication Research class (see all posts on Comm 435; see syllabus) now use iPads for data collection in the field. My students grab a department iPad and go around campus to recruit participants. The participants complete the surveys on the iPads, and the data is synced to the cloud, where it can be downloaded and analyzed.


Overview

For the final of three hands-on projects in my class, student teams identify a problem or question they have pertaining to Shepherd University or the local community. They design a study to research that problem. In my first two hands-on projects, students don’t design the methods or the measurements. They are based on scenarios I set up and materials I provide. For example, here’s a discussion of my computer-assisted content analysis assignment.

As a part of the assignment for today’s post, students are required to conduct 1) surveys, and 2) either focus groups or interviews. Let’s talk about the surveys:

After discussing surveys as a method, with a particular focus on survey design and considerations, each team designs a brief survey.

Before they create the survey, I lecture on important considerations in survey design. Then students do an in-class activity to practice putting these concepts into motion using a mock scenario. I then provide feedback on their survey design and help them make improvements.

Our next class meeting is dedicated to helping students design measurements that meet the research objective and research questions they’ve developed. The day is also dedicated to helping them write effective survey questions (as well as interview or focus group questions, for that part of the assignment). I started dedicating an entire class period to measurement design after spotting this as a major weakness in the projects last semester.

Next, rather than using paper & pen, or surveymonkey.com (which limits students to only 10 questions), teams program their surveys into ONA.io. It is a free, open access web survey tool designed by folks at Columbia University. So, we spend the 3rd day learning how to use ONA.io to program their surveys. I’ll talk in detail about that in the next post.

During data collection week, students check out department iPads, load the survey onto their iPad, and go out into the field to collect data. A group of students will check out several iPads and hit up the student union, library, or campus quads and collect data fairly quickly. The data syncs automatically over our campus-wide wifi! That means, when all students get back to the computer lab, their data – from each iPad used – is already synced to ONA.io where it could be downloaded and analyzed.

Pretty cool, huh? It is my favorite project that we do in my communication research class and the students seem to really enjoy using the iPads for surveys.

There are a few caveats.

  1. After the data is collected, it has to be cleaned before it can be analyzed in SPSS. If you use ONA.io, you’ll notice that the data you get doesn’t quite fit the format SPSS needs. So, I spend a few hours before we meet as a class looking over the data that was collected and cleaning it.
  2. This year, Formhub.org seems to be moving painfully slow. I’ve had trouble last week getting the website to work. And am still having trouble this week. With data collection set to start tomorrow, I am stressing that it may not work! – update: I’ve read in several places about ongoing stability issues with Formhub. I’m now using ONA.io instead which works the exact same way! I’ve updated verbiage above to reflect that.

I’ve provided a copy of the assignment below. Enjoy!

In my next post, I will provide info on programming surveys in the XLS forms format, which is a bit tricky. I spend a day in class teaching this. I’ll also show you how to load the surveys onto the iPads and get them synced up to the computer if you aren’t on WiFi when you collect the data.

photo: CC by Sean MacEntee

Applied Research Class: Sentiment Analysis Project Reflection

I began this semester with the intention of blogging a bit about my applied research class. I provided an overview of it and a copy of the syllabus on an earlier post. But since writing that post, I’ve yet to do a follow up… until now.

Edit: There are 2 follow up posts to this post. 1) Looks at activities for this assignment, and 2) provides the assignment itself.

First, let me say that more and more I am trying to decrease my lecturing and spend more class time on hands-on learning, having my students learn by doing rather than just listening – sort of like the flipped classroom Gary Schirr has been discussing recently on his blog. So this class really pushes in-class projects and experiential learning. Following this approach, in order to introduce students to research, I provided students with the instructions and a lot of structure for their first two projects.

I want to use our second research project as an example. Then, I’ll talk about the pros and cons. The second project was a sentiment analysis of Tweets about a brand I chose and a (realistic but not necessarily real) scenario.

My goals with this project were to teach students:

  1. About computer-assisted content analysis: how it differs from a hand-coded quantitative content analysis (which was the focus of our first project), and its strengths and weaknesses.
  2. How to do a basic computer-assisted content analysis using Yoshikoder, an easy-to-use, free app that works on Mac and PC, so my students can use it at home if needed!
  3. About sentiment analysis – what it is, why it is used by organizations to evaluate the online conversation about their brand, and its strengths and weaknesses.
  4. How to write up a research report (In the first project, I provided the project overview and requested results and discussion. In the second project, I added a literature review and methods section, and had them write the research objective and research question).

Why I chose to do this project this way: A number of social media analytics tools today are offering sentiment analysis.  There are also sites like socialmention.com that will provide you with a free sentiment analysis of a search term. But how are these analyses conducted? What are their strengths and weaknesses? Are they reliable? Do they mean anything at all? And what do we need to be careful of before accepting them, and thus drawing inferences from them?

So what I wanted my students to do was to SEE how a sentiment analysis would be conducted by some of those high-priced (or no-price!) analytic tools. In other words, I want my students to get their hands dirty as opposed to allowing some distant and hidden algorithm to do the analysis for them. I believe gaining hands-on experience with this project provides students a more critical lens through which to see and evaluate a sentiment analysis of social media messages.

The Set Up: I provide in the assignment: The Situation or Problem / Campaign goals and objectives (of an imaginary campaign that is ongoing or happened) / benchmarks / KPIs. In this case, the situation had to do with a popular online retail brand and rising customer complaints and dissatisfaction as the brand has grown beyond its core base of loyal customers in recent years.

I provide students with a sample of about 1,000 Tweets that I downloaded and formatted to play nicely with Yoshikoder. The sample consists of mentions of the brand. This ensures students are all looking at the same dataset, and streamlines (or eliminates, I should say) the data collection process so students can focus on other elements of the assignment.

For the sentiment analysis, I rely on the AFINN dictionary, which was designed for sentiment analysis of microblogs. Students learn what the AFINN is and a little about how linguistic analysis dictionaries are created through research. Students then analyze the Twitter dataset using the AFINN dictionary to determine the sentiment scores. There are no fancy stats being done here. By checking the sentiment analysis output, they simply determine whether their KPI (which was a percentage of positive Tweets about the brand) was met. In this case, the result they are looking for is a percentage, so it’s simple division. Not scary at all, and no SPSS training needed (that comes with a later project).
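The KPI arithmetic really is that simple. Here is a sketch with made-up AFINN scores; in class, Yoshikoder’s output supplies the real ones.

```python
# Percent of Tweets whose AFINN sentiment score is positive.
# The scores below are hypothetical examples, one per Tweet.
scores = [2, -1, 0, 3, -4, 1, 5, -2, 0, 4]

positive = sum(1 for s in scores if s > 0)
pct_positive = 100 * positive / len(scores)
print(f"{pct_positive:.0f}% positive Tweets")  # 50% positive Tweets
```

Students compare that percentage against the KPI target stated in the assignment scenario.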

They also look at the valence of the sentiment (with a range of + or -5) and explore the meaning of that. The students use this information, along with class lecture, other exercises on how to write research reports, etc., to produce their project #2 report.

Again, to reiterate an important point, we discuss the benefits of this analysis as well as its real weaknesses. Students always bring up the fact that the results lack context – what if someone used the word “bad” meaning good? What about sarcasm? I show them how to use Yoshikoder to look at keywords in context as a way of addressing this.

The Benefits and Drawbacks of This (and These Types of) Projects: As I said above, I am really trying to move away from lecture in favor of experiential learning. Here are some things I’ve noticed. Some may be benefits, others drawbacks, and others a bit of both…

  • The focus of this project is not on the stats or the analysis, and I provide a lot of the needed information – so it makes for a good ‘getting your feet wet’ project that teaches students other important elements of research.
  • It would be nice to teach them more advanced methods of analysis – but I do cover that a bit more later in the semester.
  • Students learn through their mistakes and from my feedback as opposed to me paving the way for them and simply asking them to drive down the smooth road.
  • I provide a LOT of handouts on how to write different sections of a research report, etc. They are detailed… sometimes too detailed and I fear students don’t read them because it is information shock.
  • Sometimes, I wish I had more time to teach them how to avoid the simple mistakes I see in their work, particularly their research reports. I say to myself, “oh man, I thought I told them how to do that.” Or, “Why didn’t you read the handout that explains how to structure this!?”
  • They likely won’t do sentiment analysis like this ever again – but at least they’ll understand it!
  • They get to see the results for themselves and get a sense that they discovered the results.
  • Class time is busy – our class rushes by and we don’t always get to cover everything I want to. As a person who likes order and time management, I am having to “let go a little” and let things happen. This is helping me grow. I wonder if it is helping my students though…
  • I know I enjoy doing these sorts of projects a lot more than standing and lecturing, lecturing, lecturing about research. I feel it has made research a lot more “real” and hands on to them.

So that is my overview of the project in general, and some thoughts. It isn’t perfect but it seems to have gone well and I really enjoyed doing it. I’d love any feedback or suggestions you may have to make this the best possible experience for my students. And of course, feel free to adapt, modify, or improve upon this idea.

In an upcoming post(s), I’ll share the assignments (I want to move my documents over to SlideShare due to the pay wall on Scribd). And I will provide some basic info on how to use the Yoshikoder software.

Cheers! -Matt

Just a reminder: There are 2 follow up posts to this post. 1) Looks at activities for this assignment, and 2) provides the assignment itself.

photo CC by netzkobold

Here Are My Spring 2014 Syllabi: Writing and Research

The snow is coming down here in West Virginia! Classes are canceled today so I will be catching up on research and some other things. But let’s talk classes and syllabi!

In addition to the applied Communication Research class I am teaching this semester (discussed in the previous post) I’m also teaching a few other classes. 🙂 I want to quickly share some of my syllabi for the semester. I’ve uploaded syllabi for these classes to my Scribd account, which is where I host past syllabi and class assignments. Click the link below to see the syllabus. (You can also see all the below-described syllabi as well as past syllabi via the menu on the left, by mousing over “syllabi.”)

Comm 435: Communication Research – This class is discussed in depth in my previous post. Please read it to learn more about that class.

Comm 335: Writing Across Platforms – Changes from Fall ’13 include: a lab day for greater access to press release examples and for working with peers on the first press release assignment; re-organized and updated social media and blog writing assignments; and a few lectures shifted around to more effectively deliver material. There are other minor changes to make sure content is up to date. I’m also super excited that for our PitchEngine assignment this semester, all of our students will be temporarily upgraded from the free version of PitchEngine to the paid level, thanks to the awesome people at PitchEngine! So, students will get experience with advanced functionality.

Hope you find these new syllabi helpful! If you share your syllabi online, please share in the comments below!