Category Archives: Communication Research

Teaching Students to Analyze Social Data with Microsoft Social Engagement: Social Media Analytics Assignment (Post 3 of 4)

This is post #3 in a four-part series about a new assignment that I’m using this semester in my Communication research class (all posts on that class).

That assignment is a 3-part social media analytics project. Each part is related but unique, allowing students to pick up a new skill set. In this post we’ll discuss part 2 of the assignment. If you haven’t read the assignment overview post, and the post about pivot tables in Excel, I encourage you to do so before proceeding. In the first post, you will see a copy of the assignment that is discussed below.

Part 2 of the assignment asks student teams to analyze their client and its competitors using Microsoft Social Engagement (sometimes called Microsoft Social Listening). You can learn more about how our communication department is participating in the Microsoft Dynamics Academic Alliance program in my initial post on Microsoft Social Engagement.

Microsoft Social Engagement in the classroom

Setting Up The Assignment

As I wrote in my prior post, “Microsoft Social Engagement is a social listening tool that enables users to track metrics for public social media accounts or posts (e.g., keywords or hashtags) such as posts on Facebook, Twitter, and Instagram. You can also track mentions on forums and blogs.”

Keep in mind that you have to program what you want the software to track ahead of time. It isn’t like a Twitter search, where you can go back and look at the past 2,500 posts on a topic after the fact. So, we have to program each student team’s client and 1-3 competitors into Microsoft Social Engagement several weeks before the students sit down to work on the assignment. That way, there is some data for students to analyze.

I required students to turn in to me the Twitter account and, if available, the Instagram account for their client and their competitors. I programmed them about a month before we worked on the assignment. To keep my life simple, students had to turn all of this in at the same time they turned in the Excel file of their client’s Twitter data (discussed in the pivot table blog post).

In my social media class, students were introduced to Microsoft Social Engagement and were given some guidance on how to use it to complete a metrics tracking spreadsheet. The purpose there was for them to track data week by week. In this class, we went a bit deeper. My purpose here was for students to look at the sum of data over a given period and extract specific insights. I added geolocation (Q3), a look at top engagement across time (Q4), parsing top positive and negative keywords (Q6 & Q7), and exploring critics of the brand (Q8).

Taken together, my goal was for students to learn the software in my social media class by throwing them into it. In this class, I wanted them to gain more experience, think more critically, and dig deeper into the software.

The Assignment

For this part of the assignment, I created specific questions I wanted students to answer (below). To guide them through the steps needed to answer the below questions, I created this lab guide. Students worked through the lab guide in class and I was on hand to assist them.

This lab guide is similar to the lab guide I provide my social media class (and which I shared in the original blog post on Microsoft Social Engagement). However, this lab guide is appropriately more thorough.

Sentiment view of Microsoft Social Engagement

For the client and each competitor, the students were to answer the below questions.

  1. For CLIENT’S NAME what is the total number of a) shares, b) replies, and c) posts on Twitter during TIME PERIOD?
  2. For each keyword, what is the share of voice for the client and its competitors?
    1. (repeat this for however many keywords you have – up to 3)
  3. In what STATE/COUNTRY were the top posts that mention CLIENT posted?
  4. What day(s) of the month was CLIENT talked about the most on each social media platform?
    1. Note: if we only have data from Twitter, then just use Twitter.
  5. What are the sentiment percentages (positive, negative, neutral) for CLIENT?
  6. What are the top positive keywords associated with CLIENT on each social media platform?
    1. Note: if we only have data from Twitter, then just use Twitter.
  7. What are the top negative keywords associated with CLIENT on each social media platform?
    1. Note: if we only have data from Twitter, then just use Twitter.
  8. Who are the top fans and critics for CLIENT on each platform?
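
As a side note, the share of voice in question 2 is simply each brand’s percentage of the total conversation around a keyword. Here is a minimal sketch of that arithmetic in Python; the mention counts are made up for illustration:

```python
# Share of voice (Q2): each brand's percentage of the total mentions
# for a keyword. These mention counts are invented for illustration.
mentions = {"Client": 120, "Competitor A": 300, "Competitor B": 80}

total = sum(mentions.values())  # 500 total mentions across all brands
share_of_voice = {brand: round(100 * count / total, 1)
                  for brand, count in mentions.items()}
print(share_of_voice)  # Client: 24.0, Competitor A: 60.0, Competitor B: 16.0
```

The shares always sum to 100%, which is a handy sanity check when students compute them by hand.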

Of course, the above 8 questions are just a sampling of what you could do with the software.

In Summary

The software can be a bit challenging to use. And I found that students struggled at times to navigate it. It is important to make yourself available in class to help students.

Also, because the students had to answer these questions for their client and then for their competitors, it was rather time consuming. Teams that tackled this project in a smart manner divided up the work, then put their answers together and reviewed them.

Some students may feel that this part is somewhat redundant with what they do in the Microsoft Excel pivot tables. Questions 1 and 2 from the pivot exercise are similar to questions 1 and 4 from Microsoft Social Engagement, respectively. But, from my point of view, it is different enough and, importantly, it is a different way of analyzing things. Still, because this project overall requires a good deal of work when you consider the pivot tables and the social network mapping, which we’ll discuss in the next post, you may find it useful to remove some of the above questions.

Projects like these can be intimidating and challenging for students. But I truly believe the benefits outweigh the drawbacks. The opportunity for students to learn industry software in the classroom is highly valuable. And it is better for students to dive in while in school than have their first exposure be overwhelming on the job.

In the next post, we will discuss part 3 of this assignment, which gets students using Netlytic.org to do some basic network mapping of their client’s online network. I will be publishing that post in 2 weeks.

Update: You can now read Post #4 on Netlytic.

Teaching Students to Analyze Twitter Data with Excel Pivot Tables: Social Media Analytics Assignment (Post 2 of 4)

In my last post, I discussed a new assignment that I’m using this semester in my Communication research class (all posts on that class).

That social media analytics project assignment contains 3 parts. Each part is related but unique, allowing students to pick up a new skill set. In this post, post 2 of 4 in the series I’m writing about this assignment, we’ll discuss part 1. If you haven’t read the assignment overview post, I encourage you to do so before proceeding. There you will see a copy of the assignment discussed below.

Part 1 of the assignment asks student teams to analyze the Twitter data provided by their clients by creating pivot tables in Microsoft Excel.

social media analytics pivot tables excel Twitter data

If you aren’t familiar with pivot tables, they enable you to filter and visualize spreadsheets. This allows you to focus in on specific data points and quickly extract insights from large data sets.
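
To make that concrete, here is a rough sketch, in Python with pandas, of what a pivot table does with Twitter-export-style data: it groups rows by some field and aggregates the values. The column names and numbers below are made up for illustration and are not the exact schema of Twitter’s CSV export.

```python
import pandas as pd

# Hypothetical rows shaped loosely like a Twitter analytics export
# (columns and values are illustrative only).
data = pd.DataFrame({
    "month":     ["Feb", "Feb", "Mar", "Mar", "Apr"],
    "retweets":  [4, 11, 2, 25, 7],
    "favorites": [10, 30, 5, 60, 12],
})

# A pivot table: one row per month, with engagement summed per month.
pivot = data.pivot_table(index="month",
                         values=["retweets", "favorites"],
                         aggfunc="sum")
print(pivot)
```

Excel’s PivotTable feature does the same grouping and aggregation through a drag-and-drop interface, which is what the lab guide walks students through.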

I got the inspiration to create this part of the assignment from a very helpful conversation I had with Professor Stefanie Moore at Kent State University. A big thank you to Stefanie for taking the time to chat with me and for providing me with insights to how she teaches analytics. I am really impressed and inspired by what Professor Moore is doing at Kent State.

Preparation: Getting Twitter Data
In order to analyze Twitter data using pivot tables in Excel, you need to first download Tweets from Twitter’s analytics (ads) page. If you’ve never done this before, it is really quite easy.

The reason we use Twitter is that it enables you to export a ton of valuable data from your account in the form of a CSV spreadsheet. But, as an aside, you could analyze just about any data with pivot tables.

My students were required to get the Twitter data from a client. Therefore, I created a step-by-step guide that they could provide to the client so that the client could extract the appropriate data and supply it to me.

To ensure we had enough data, I instructed the students to confirm that their client was posting at least a few times per week, and I asked them to get 6 months of Twitter data if possible. In short, I wanted at least 50 Tweets from the client in the time period we collected. This number is somewhat arbitrary, and ideally you’d have more. But 50 Tweets is enough to sort and play with.

Here are the steps for extracting Twitter data from an account:

Step 1: Log into your organization’s Twitter account at http://twitter.com. Next, select your account profile picture (as shown below) and select “Analytics.”


Step 2: A new window will appear. Click “Tweets” from the menu at the top. Then, select the date range (see below). A menu will open. Please select a date range of at least 3 to 6 months back so that there are enough Tweets for the students to analyze.
Important: Click “Update” to change the selected date range.
In the below example, I selected Feb 1 through May 1 (3 months).

Step 3: Once the dates have been selected, click “export data.” A new window will appear. Click “save file” to save the file to your computer (it should be a .CSV file with a name starting with “tweet_activity_metrics…”). You have your data. If someone else is downloading the data – such as a class client – they will need to email the file to you or your student.

Using Pivot Tables to Analyze Twitter Data

A few days were set aside in class to work with the pivot tables and learn how to answer the questions students were asked to answer in the project. On day 1, I provided a brief lecture  (about 10 minutes). And then I instructed students to begin working with the lab guide I had created. If you’re a longtime reader of this blog, you know I am big on creating lab guides to assist students in learning software.

See the lab guide students used to learn to analyze their Twitter data using pivot tables: http://bit.ly/435_pivottableslab

While working with the lab guide, students were to have a copy of the assignment that contained the research questions they needed to answer using the pivot tables. Those research questions were:

  1. Which Twitter posts received the most (fill in the blank – you need to decide which variables are important engagement data for your client; you’ll need more than 1 variable, and you’ll want to show the top few Tweets for each variable, not just the top one)?
  2. What is the client’s Twitter engagement by month? (again, you choose the appropriate engagement metrics)
  3. Come up with 1 other RQ for important data points you extract from your pivot table analysis that you believe will be of value to your client.

For the above questions, students needed to pick what engagement metrics they wanted to analyze. There are several engagement metrics in the CSV file when you download it from Twitter. Examples include retweets and favorites.

For research question #3, most groups analyzed engagement by Tweet category. As you’ll see in the lab guide, students learned how to comb through their Tweets and identify common themes by which to categorize their Tweets. Examples may include promotional Tweets, humorous Tweets, Tweets that ask a question, etc.
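
If you want to see the logic behind that category analysis outside of Excel, here is a small sketch in Python with pandas. The categories and engagement numbers are invented for illustration; in the actual assignment, students hand-code each Tweet into a theme first.

```python
import pandas as pd

# Hypothetical Tweets hand-coded into themes, as in RQ #3
# (categories and retweet counts are made up for illustration).
tweets = pd.DataFrame({
    "category": ["promotional", "humorous", "question", "humorous", "promotional"],
    "retweets": [3, 20, 8, 14, 5],
})

# Average retweets per category, sorted so the best-performing theme is first.
by_category = (tweets.groupby("category")["retweets"]
               .mean()
               .sort_values(ascending=False))
print(by_category)  # humorous tops this made-up sample
```

This is exactly the kind of insight groups reported back to their clients: which type of content earns the most engagement.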

The above 3 research questions are just a sampling of what you could do with the pivot tables.

In Summary

In the next post, we will discuss part 2 of this assignment which gets students using Microsoft Social Engagement to answer some research questions about their client.  I will be publishing that post in 2 weeks.

In the meantime, if you want to get your feet wet, I encourage you to download your own Twitter data and walk through the lab guide above. Or, check out some of the sources listed below to learn how to analyze Twitter data with pivot tables.

As you will see when you take a look at the lab guide, you must first clean the data so that Excel can analyze it. I then walk you through a number of different ways you can analyze your Twitter data.

The fact is that I was a bit of a newbie to pivot tables when I created this assignment. To build the above-discussed lab guide I provided students to help them through learning how to use pivot tables, I relied heavily on several key resources. Much of what is in the lab guide is built directly on what I learned from these sources. To learn directly from the sources I learned from, check out the sources below. A big thank you to all of them for sharing their knowledge publicly. I hope I was able to honor them in adapting their work for a classroom assignment.

Update: You can now read the follow-up posts to this blog series.

Sources:

 

The New Social Media Analytics Assignment for my Comm Research Class (Post 1 of 4)

A few months ago I wrote about how students in my social media class were using Microsoft Social Engagement to track metrics and do some social listening. At the time, I said I’d follow up with a post about how we were using the software in my communication research class. Well, the time has come! But this post will do more than dive into how we are using Microsoft Social Engagement in my class. It will share with you a whole new project my research students are doing.

This is post #1 in a 4 part series on a new assignment my students are working on in my communication research class. The assignment spreads over several weeks with a good amount of time in class working in the computer lab. The project is the result of continued and ongoing efforts I’ve been making in a few classes to enhance student education in social media analytics. The project replaces the sentiment analysis assignment I wrote about a few years ago.

This post will cover an overview of the assignment (a copy of the assignment is below). Post #2 will discuss using pivot tables to analyze Twitter data. Post #3 will discuss Microsoft Social Engagement. Post #4 will discuss Netlytic.

Update: Post #2 on pivot tables is now available, as is Post #3 on MS Engagement and Post #4 on Netlytic.

First, let me provide some context. In my communication research class (see all posts related to the class), students work in teams to complete 3 projects. Each project gets progressively more difficult. The project we are going to discuss today is project #2.

Overview of Social Media Analytics Project for A Client

The purpose of the assignment is for students to get experience performing a social media analytics audit of a client using a variety of social media analytics and social network analysis tools. The goal is for the students to try and understand their client’s current use of social media and provide insights and recommendations for enhancing that client’s social media presence.

Each team was tasked with going out and finding a client that would agree to participate. While I had hoped that most groups would approach local businesses, they tended to focus more on on-campus groups like athletic teams. This may have been a matter of convenience: each team had to acquire several months’ worth of Twitter data from their client (I will explain this in further detail when we discuss pivot tables in post #2), so students tended to go to on-campus organizations where they already knew who ran the Twitter account.

The three main components of the project are:

  1. Client Social Media Profile & Engagement Analysis
    1. Students use pivot tables to explore their client’s posts on social media and analyze overall engagement. For example, students determine which of their client’s posts have gotten the most likes.
  2. Analyzing Trends
    1. Students use Microsoft Social Engagement to monitor and analyze the conversation surrounding the client’s brand.
  3. Social Network Analysis
    1. Students use Netlytic.org to build visual representations of their client’s social network on Twitter or Instagram and do some basic analysis.

For each component, I have created a set of research questions that students answer using the appropriate software. The students adapt the research questions a bit to their context when necessary. You can see the research questions in the assignment below.

The Plan in the Classroom

On day 1, I provide a 10 minute lecture on pivot tables. The rest of the class is a lab for students to work on learning how to create pivot tables to analyze Twitter data and answer the RQs.

On day 2, I give a 20 minute lecture about the social engagement software and talk a little about sentiment analysis so students understand what it is when they look at it in the Microsoft software.

Day 3 is a lab day to work on whatever they weren’t able to get done in the pivot tables or the social engagement software.

On day 4, I lecture about social network analysis and some basic concepts. (We cover some other material this day about writing research papers).

On day 5, we finish talking about social network analysis – about 15 minutes – and the students analyze their client’s data.

Research Write Up

After students complete all 3 parts of the project, they then have to write up their study. The research paper format I use in this class is inspired by Don Stacks’ book, Primer of Public Relations Research.

In the past, by the second project students are writing brief literature reviews. However, because this is the first time I’ve run this project and it has been a lot of work, I called an audible and removed the lit review requirement for this project. So, you will see in the assignment below that those requirements have been omitted.

Thus, by the second project students have been taught about writing research problem overviews (problem statement, campaign goals & objectives, research objective & RQs/hypotheses), methods, results and discussion sections.

The students write up their reports. And they are encouraged to share them with their client.

Limitations & Final Thoughts

There are a few drawbacks I’ve experienced thus far with this project.

First, there is a lot of info coming at the students with this project. The assignment sheet itself is several pages long. As such, it is important to explain things several times and work with the students as they are doing this project.

Students need to be responsible for getting the data for this project from their client, creating their own Netlytic account, and setting it up to collect data. And, they need to tell me who their client is, along with some competitors of the client, far enough in advance that I can program them into Microsoft Social Engagement (I’ll go into more depth on this in the individual posts about each section). We had a few groups that made mistakes along the way and were short on data or had to do some last-minute scrambling.

The data collection periods across the Twitter CSV file, Microsoft Social Engagement, and Netlytic are not consistent. This is simply a result of the classroom setting and a lack of full control over when data collection happens. For example, a team’s client may have sent their Twitter data covering the last 6 months on one day, a teammate set up Netlytic to collect data on another day, and I set up Microsoft Social Engagement to collect data on their client on a third day.

With these and other limitations in mind, the project has been fun thus far this semester. A major benefit of this assignment is that most of the tools used are free or inexpensive and not too difficult to learn (and thus teach your students).

Over the next few posts, I will offer some depth on each section of the project.  So check back soon! For now, you can get a copy of the assignment below.

Update: You can now read the follow-up posts to this blog series.

Teaching Students to Use iPads for Survey Data Collection (2 of 2)

In my last post, I wrote about a Comm Research project where students use iPads for survey data collection. This is my favorite of the 3 projects we do in my Communication Research class (see all posts on Comm 435; see syllabus).

This week, I want to follow up by discussing how to program the surveys to work on the iPads. I’ll talk through how I teach all of this in class and through activities.

Lastly, I’ll explain how I prepare the data for use in SPSS.

Once students have created their surveys, we need to get them onto ONA.io.

Programming surveys to work on ONA.io – the free, open-source tool used by my class and researchers around the world – is a little tricky at first. It follows XLS formatting. But once you get the hang of it, it is super easy, and it is quick to teach and learn.

I go over this online Lab Guide (http://bit.ly/435_lab_digitalsurvey) that I created on how to program XLS forms in class. I then provide students with a practice activity to create a survey in Excel or Google Spreadsheets. The activity asks students to create:

1) A question asking how many years they have been in school

2) A check all that apply question – I usually pick something fun like their favorite movies from a list

3) A Likert-style question. Ex: How much do they like binge-watching on Netflix?

In sum, they practice creating an integer, select_multiple, and select_one question.
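
For readers unfamiliar with the XLS forms format, here is a rough sketch using pandas DataFrames to stand in for the two spreadsheet tabs. The question names, choice names, and labels are my own examples, not the exact ones from the lab guide; the layout (a “survey” sheet with type/name/label columns and a “choices” sheet keyed by list_name) follows the standard XLSForm convention.

```python
import pandas as pd

# "survey" sheet: one row per question. The three types mirror the
# practice activity: integer, select_multiple, and select_one.
survey = pd.DataFrame([
    {"type": "integer",                "name": "years_in_school",
     "label": "How many years have you been in school?"},
    {"type": "select_multiple movies", "name": "fav_movies",
     "label": "Which of these movies do you like?"},
    {"type": "select_one likert",      "name": "binge_netflix",
     "label": "How much do you like binge-watching on Netflix?"},
])

# "choices" sheet: answer options, keyed by list_name. Note the
# underscore: a space here is exactly the error ONA.io will reject.
choices = pd.DataFrame([
    {"list_name": "movies", "name": "movie1", "label": "Movie A"},
    {"list_name": "movies", "name": "movie2", "label": "Movie B"},
    {"list_name": "likert", "name": "1",      "label": "Not at all"},
    {"list_name": "likert", "name": "5",      "label": "Very much"},
])

# Writing both sheets into one workbook yields a file to upload:
# with pd.ExcelWriter("practice_survey.xlsx") as writer:
#     survey.to_excel(writer, sheet_name="survey", index=False)
#     choices.to_excel(writer, sheet_name="choices", index=False)
print(survey[["type", "name"]])
```

In practice, students build these two tabs directly in Excel or Google Spreadsheets, as described above; the sketch just shows the structure the upload expects.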

Once students get the hang of it, they log into an ONA.io account I create for the class. Next, they upload their practice survey to test in class using our department’s iPads. But, this could be done on a phone or even a computer itself (Instructions on how to do this are in the lab guide).

The #1 thing is that everything has to follow the formatting exactly. Little errors, like putting a space instead of an underscore in “list_name,” will result in ONA.io kicking the survey back and telling you there is an error. If a mistake is made, no problem. Just fix your form and re-upload.

I check to make sure everything is done correctly. This saves time when they program their own surveys. If everything is good, I give students lab time to work on formatting their surveys and help out as needed.

After everything has been uploaded successfully – this usually takes time outside of class, so I make it due the following class – students are ready to go out into the field. This is where the fun happens!

Students always get great feedback when they use iPads to collect survey data. People tend to be interested in what they’re doing and happy to participate. Some students this year told me that people came up to them around campus and asked if they could participate. That is much different than the usual online survey where we often struggle to get respondents! I can’t express how rewarding it is to see students go out into the field, collect data, and come back having gathered data no one else has before. For most of them, this is their first time doing data collection of any kind. And so while the class is tough and a lot of work, it is rewarding. You can see the ‘aha’ moments the students have when they start drawing inferences from their data.

Preparing Data for Analysis in SPSS

If you only want to look at summaries of responses, you can check those out in ONA.io. But if you want to analyze the data yourself, you’ve got to convert it from the labels students assigned to the numbers SPSS needs.

For example, suppose a question asks participants their favorite ice cream flavor, and the “choices” sheet of our XLS form names each flavor icecream1, icecream2, and so on. If a participant answers “Vanilla,” the data collected would be icecream2.

But SPSS can’t analyze “icecream2.” It can only analyze a number. So, we need every instance when a participant selected Vanilla to be recorded as simply “2” in SPSS.

Here’s how to quickly do this:

  1. Download the Excel file of the completed surveys and open it in Excel.
  2. Use find & replace to replace “icecream” with “” (that is, with nothing – no spaces; just leave the replace field blank). Excel removes “icecream” from the file, leaving the number for each response, so that “icecream2” becomes “2”.
  3. Repeat this step for each question.
  4. For check-all-that-apply questions, ONA.io records “FALSE” for answer choices left blank and “TRUE” when the participant checked the choice. For example, if the question was “Check all your favorite ice cream flavors” and the participant checked “Vanilla,” ONA would record “TRUE”; if they left it blank, ONA would record “FALSE.” Prepare these for SPSS by replacing FALSE with “0” and TRUE with “1”.
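
The find-and-replace steps above can also be scripted. Here is a small sketch in Python with pandas performing the same two recodes; the column names and values are hypothetical, mirroring the ice cream example rather than a real ONA.io export.

```python
import pandas as pd

# Hypothetical ONA.io export (column names and values are illustrative).
raw = pd.DataFrame({
    "icecream":        ["icecream2", "icecream1", "icecream2"],  # select_one
    "flavors_vanilla": ["TRUE", "FALSE", "TRUE"],                # check-all
})

clean = raw.copy()
# Step 1: strip the question-name prefix so "icecream2" becomes the number 2.
clean["icecream"] = (clean["icecream"]
                     .str.replace("icecream", "", regex=False)
                     .astype(int))
# Step 2: recode check-all-that-apply answers: TRUE -> 1, FALSE -> 0.
clean["flavors_vanilla"] = clean["flavors_vanilla"].map({"TRUE": 1, "FALSE": 0})
print(clean)  # numeric columns, ready to import into SPSS
```

Doing it in Excel as described above is perfectly fine for a class; a script like this just saves time once you have many questions to recode.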

Admittedly, this step is the drawback of using XLS forms. While a little tedious, it is quick and easy to do. Considering the advantages, I don’t mind taking 20 minutes of my time cleaning the data for my students.

When done, I send the student teams their data and we work on analyzing them in class.

 

Well that’s all for now! I hope you enjoyed this tutorial and consider using iPads for survey data collection in your research class, or other classes where surveys could prove valuable!

Here at Shepherd, finals week starts this week. I hope everyone has a great end to the semester!

Using iPads for Survey Data Collection in the Communication Research Class

Surveys are a common method used in communication research class projects. Since I started teaching this class at Shepherd University, I’ve added a fun, cool feature that really brings the survey data collection process to life!

Students in my Comm 435 Communication Research class (see all posts on Comm 435; see syllabus) now use iPads for data collection in the field. My students grab a department iPad and go around campus to recruit participants. The participants complete the surveys on the iPads, and the data is synced to the cloud where it can be downloaded and analyzed.


Overview

For the final of three hands-on projects in my class, student teams identify a problem or question they have pertaining to Shepherd University or the local community. They design a study to research that problem. In my first two hands-on projects, students don’t design the methods or the measurements. They are based on scenarios I set up and materials I provide. For example, here’s a discussion of my computer-assisted content analysis assignment.

As a part of the assignment for today’s post, students are required to conduct 1) surveys, and 2) either focus groups or interviews. Let’s talk about the surveys:

After we discuss surveys as a method, with a particular focus on survey design and considerations, each team designs a brief survey.

In the class before they create the survey, I lecture on important considerations in survey design. Then students do an in-class activity to practice putting these concepts into motion using a mock scenario. I then provide feedback on their survey design and help them make improvements.

The next class we meet is dedicated to helping students design measurements that meet the research objective and research questions they’ve developed, so they can get the answers to the questions they want to know. The day is also dedicated to helping them write effective survey questions (as well as interview or focus group questions, for that part of the assignment). I started dedicating an entire class period to measurement design after spotting this as a major weakness in the projects last semester.

Next, rather than using paper & pen or surveymonkey.com (which limits students to only 10 questions), teams program their surveys into ONA.io, a free, open-source web survey tool designed by folks at Columbia University. So, we spend the 3rd day learning how to use ONA.io to program the surveys. I’ll talk in detail about that in the next post.

During data collection week, students check out department iPads, load the survey onto their iPad, and go out into the field to collect data. A group of students will check out several iPads and hit up the student union, library, or campus quads and collect data fairly quickly. The data syncs automatically over our campus-wide wifi! That means, when all students get back to the computer lab, their data – from each iPad used – is already synced to ONA.io where it could be downloaded and analyzed.

Pretty cool, huh? It is my favorite project that we do in my communication research class and the students seem to really enjoy using the iPads for surveys.

There are a few caveats.

  1. After the data is collected, in order for it to be analyzed in SPSS it has to be cleaned. If you use ONA.io, you’ll notice that the data you get doesn’t quite fit the format SPSS needs. So, I spend a few hours before we meet as a class cleaning the data that was collected so it is ready to analyze.
  2. This year, Formhub.org seems to be moving painfully slowly. I had trouble last week getting the website to work, and am still having trouble this week. With data collection set to start tomorrow, I am stressing that it may not work! – update: I’ve read in several places about ongoing stability issues with Formhub. I’m now using ONA.io instead, which works the exact same way! I’ve updated the verbiage above to reflect that.

I’ve provided a copy of the assignment below. Enjoy!

In my next post, I will provide info on programming surveys in the XLS forms format, which is a bit tricky. I spend a day in class teaching this. I’ll also show you how to load the surveys onto the iPads and get them synced up to the computer if you aren’t on WiFi when you collect the data.

photo: CC by Sean MacEntee

Syllabi Spring 2015: Communication Research and Writing Across Platforms classes

The semester is underway!

I have shared select syllabi every semester since I started this blog. A lot of people contact me asking for my syllabi and class assignments, and I am happy to continue the trend. You can find all past syllabi from the menu on the left! I’m so glad that folks enjoy these and find them useful!

This semester, two classes I will discuss are my Writing Across Platforms and my applied Communication Research class. I’ve talked about assignments, activities, and perspectives on both in the past (see posts about them under the menu on the left. Blog Topics->Teaching Social Media->Classes).

I have not changed either class all that much since last teaching them, so I’ll spare you an in-depth review of each. But here are a few changes or little things worth mentioning:

Writing Across Platforms

– Facebook – In the last post I wrote about my decision to continue to teach Facebook in this class.

– Mobile – This is a packed class and it is hard to add without taking something else away. I’ve squeezed in a little time to focus on mobile and writing for mobile. I have an exercise planned where I will bring in our department iPads and have students explore the look and feel of their writing as read from mobile devices. As more and more people rely on mobile devices to read, it is important that we emphasize the medium, its affordances, and its limitations.

– Concise Writing – I am placing more emphasis on conciseness in writing. This is something we all struggle with; I know I do. While it has always been important, shorter attention spans, mobile and digital platforms, and the high-stakes competition for reader attention necessitate saying more with less. We’ll do exercises where students help one another find the shortest, most powerful way to communicate. There’s also a fun website I am incorporating that can help with writing. I will give it its own post in the future.

– PitchEngine – I used PitchEngine the past two years for my social news release. I haven’t blogged about PitchEngine much. But I’ll be sure to do so this semester. I always try to bring in industry software when possible. And the awesome people at PitchEngine have been very helpful. I’m excited we’ll be using PitchEngine again this year. PitchEngine has undergone exciting changes since last year. And I’ll be adapting my social news release assignment accordingly. Note: Dr. Gallicano and Dr. Sweetser have a great guideline for teaching the social media release.

Communication Research

I made some minor tweaks and improvements to how I'll present content, and streamlined a few assignments. In a tough class like this, I provide a lot of handouts – such as how to structure the literature review, methods, results, and discussion sections. I worked hard to simplify and clarify those.

I've been chatting with colleagues about changes and advancements in social data analysis. I'm hoping to incorporate them into this class in a future semester. To do so, I will need time this semester to dedicate to exploring these options. Thus, I'm presently sticking with the same 3-project model I wrote about last year. Hopefully I'll have a brand new social media analysis assignment for Spring 2016.

This semester I promise to do something I failed to do last year – blog about our final project in the Comm Research class where students use iPads to collect survey data around campus. I love this project and hope you do too.

Below are the syllabi. A happy start to the semester to all! – Matt

Writing Across Platforms:

Communication Research

Sentiment Analysis using Content Analysis Software: Project Assignment

In the last two posts, I’ve been discussing the Yoshikoder sentiment analysis project in my Communication Research class here at Shepherd University.

My first post looked at the project in general. And the second, most recent post looked at how to teach computer-assisted content analysis using the Yoshikoder software, along with the activities I provide my students to prepare them for the project.

I encourage you to check out those posts for background and setup! Ok, now on to sharing the assignment itself and providing a brief overview of it.

As I've stated elsewhere, the purpose of this assignment is to:

1) Give students a hands-on look under the hood of sentiment analysis – that is, to understand HOW it works and its flaws.

2) Teach students, via hands-on experience, about quantitative content analysis, particularly computer-assisted content analysis.

3) Teach them how to conduct a computer-assisted content analysis using software (Yoshikoder).

So here's the setup to the assignment (which you can see below). This hands-on learning project is based on a real brand and a realistic but made-up scenario. I do this with both this assignment and my first project in this class. Specifically, I provide the situation or problem, the goals and objectives of an imaginary campaign (ongoing or already completed), benchmarks, and KPIs.

In this case, the situation had to do with a popular online retail brand and rising customer complaints and dissatisfaction as the brand has grown beyond its core base of loyal customers in recent years. I've redacted the brand and the situation from the assignment below, but you can fill in your own.

I rely on Stacks's (2011) model for writing the problem, goals, and objectives. While I provide the research objective(s) in my first project, in this project students must come up with the research objective(s) and RQ(s) themselves.

I then provide some benchmarks. In this scenario, at a certain point in time sentiment was strong (let's say 70% positive), and after the hypothetical situation it dropped (say, to 50%). Students have recently been introduced to the concepts of benchmarks and KPIs via a brief lecture, so this is their first experience applying them. They are given one KPI (let's say 65% positive sentiment) against which to measure success. Keep in mind that the scenario assumes a campaign already took place aimed at addressing decreased customer satisfaction and negative comments on Twitter directed at the brand. We are now seeking to assess whether that campaign successfully increased sentiment toward the brand (and, at a deeper level, repaired relationships and the brand's image among the online community).
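The benchmark-versus-KPI comparison the students perform boils down to simple arithmetic. Here is a minimal sketch of that calculation; the numbers and the coded sample are hypothetical, matching the made-up scenario above, not real data from the assignment.

```python
# Hypothetical figures from the scenario: a 70% positive benchmark,
# a post-situation drop to 50%, and a 65% KPI for the campaign.
BENCHMARK = 0.70
POST_SITUATION = 0.50
KPI = 0.65

def positive_share(labels):
    """Fraction of posts coded 'positive' in a sample."""
    return sum(1 for label in labels if label == "positive") / len(labels)

# Imaginary sample of 100 posts coded after the campaign.
sample = ["positive"] * 68 + ["negative"] * 20 + ["neutral"] * 12
measured = positive_share(sample)

print(f"Measured positive sentiment: {measured:.0%}")  # → 68%
print("KPI met" if measured >= KPI else "KPI not met")  # → KPI met
```

The pedagogical point is less the arithmetic than the interpretation: students must argue whether a 68% result against a 65% KPI constitutes success given the 70% historical benchmark.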

There are other important considerations students must make:

1) Since we've discussed sentiment and its flaws, they need to think about the valence of sentiment (the AFINN dictionary scores terms from -5 to +5), and they need to research and understand how AFINN was designed and how it works (I provide some sources to get them started). If you're not familiar with the AFINN dictionary, it was designed for sentiment analysis of microblogs. It is a free sentiment dictionary of terms you can download and use in Yoshikoder.
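To show what "under the hood" means here: dictionary-based sentiment analysis simply sums the valence of every lexicon term found in a text. The sketch below uses a tiny illustrative stand-in for the AFINN list (the real dictionary maps thousands of English terms to integer valences from -5 to +5); the entries and example tweet are mine, not part of the assignment.

```python
# Tiny stand-in lexicon; the real AFINN file is far larger.
afinn_sample = {
    "love": 3, "great": 3, "happy": 3,
    "bad": -3, "hate": -3, "disappointed": -2,
}

def score(text, lexicon):
    """Sum the valence of every lexicon term appearing in the text."""
    return sum(lexicon.get(word, 0) for word in text.lower().split())

print(score("love the new site but hate the checkout", afinn_sample))  # → 0
```

Note how the example also exposes a flaw students should discuss: a post that is clearly mixed (or sarcastic) can score as neutral, because the method ignores context and negation.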

For more details on the assignment, check out the assignment embedded below and the requirements for what must be turned in.

As I've noted in a previous post, this project isn't perfect. But it is a fairly straightforward and accessible learning experience for students in their first semester of seeing how research can be conducted. It covers a wide array of experiences and learning opportunities – from discussing what sentiment is, to understanding its flaws, to understanding the flaws of quantitative content analysis, to learning to apply a number of key research terms, as well as providing exposure to writing research reports. The project itself is bolstered by several lectures, comes about halfway through the semester, and takes several days of hands-on learning in the classroom. Students, of course, finish the write-up outside of class. But we do the analysis entirely in class to ensure students get my help as the "guide on the side."

My previous post covers some activities we do to build up to this assignment.

So that's all for now! Please feel free to use this assignment, modify it, and improve it. If you do, come back and share in the comments below how you have improved or modified it, or how you would!

If you want to know more about my Communication Research class, please see this post which includes the syllabus.