This post may contain affiliate links. Please read my disclosure for details.
A professional development goal of mine is to learn a lot more about social network analysis and visualization of social media data. This area has grown increasingly valuable and important in our field. And I believe we all need to have at least a base knowledge of social data and how to play with it.
In the last few posts, I’ve been writing about my Social Media class and the semester project we’ve been doing. To recap, students create a social media content strategy for our department’s social media (the details of the assignment are in the previous post). They then use this plan to create content for the department three times, each round covering a certain time period. The content is presented to the class and then goes through an editorial process (i.e., I grade it and make any needed modifications) before being published.
With the semester winding down, I want to share some of the work the students have been doing!
When I recently saw a similar, more streamlined approach used by the Pew Internet project in their reports, I had to make a quick blog post about it. I was reading the Cell Phones, Social Media, and Campaign 2014 report when I stumbled across this.
I encourage you to check out those posts for background and set up! Ok, now on to sharing the assignment itself and providing a brief overview of it.
As I’ve stated elsewhere, the purpose of this assignment is to:
1) Give students a hands-on look under the hood of sentiment analysis – that is, to understand HOW it works and where it falls short.
2) Teach students, via hands-on experience, about quantitative content analysis, particularly computer-assisted content analysis.
3) Teach them how to conduct a computer-assisted content analysis using software (Yoshikoder).
So here’s the setup of the assignment (which you can see below). This hands-on learning project is based on a real brand and a realistic but made-up scenario; I do this with both this assignment and the first project in this class. Specifically, I provide the situation or problem, the campaign goals and objectives (of an imaginary campaign that is ongoing or already happened), benchmarks, and KPIs.
In this case, the situation involved a popular online retail brand and rising customer complaints and dissatisfaction as the brand has grown beyond its core base of loyal customers in recent years. I’ve redacted the brand and the situation from the assignment below, but you can fill in your own.
I rely on Stacks’s (2011) model for writing the problem, goals, and objectives. While I provide the research objective(s) in my first project, in this project students must come up with the research objective(s) and RQ(s) themselves.
I then provide some benchmarks. In this scenario, at a certain point in time sentiment was strong (let’s say 70% positive). Then, after the hypothetical situation, it dropped (say, to 50%). Students have recently been introduced to the concepts of benchmarks and KPIs via a brief lecture, so this is their first experience applying them. They are given one KPI (let’s say 65% positive sentiment) against which to measure their success. Keep in mind that the situation assumes a campaign already took place aimed at addressing decreased customer satisfaction and negative Tweets directed at the brand. We are now seeking to assess whether that campaign successfully increased sentiment toward the brand (and, at a deeper level, repaired relationships and the brand’s image among the online community).
There are other important considerations students must make:
1) Since we’ve discussed sentiment and its flaws, they need to think about the valence of sentiment (the AFINN dictionary scores terms from -5 to +5), and they need to research and understand how AFINN was designed and how it works (I provide some sources to get them started). If you’re not familiar with the AFINN dictionary, it was designed for sentiment analysis of microblogs. It is a free sentiment dictionary of terms that you can download and use in Yoshikoder.
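To demystify how a valence dictionary works, it can help to sketch the mechanics outside Yoshikoder. Here is a minimal Python illustration of AFINN-style scoring; the tiny dictionary below is made up for the example (the real AFINN file maps a few thousand terms to integer valences from -5 to +5):

```python
# Minimal sketch of AFINN-style dictionary sentiment scoring.
# The mini-dictionary below is illustrative only -- it is NOT the real
# AFINN file, which assigns valences from -5 to +5 to thousands of terms.
mini_afinn = {
    "love": 3, "great": 3, "good": 2,
    "bad": -3, "terrible": -3, "hate": -3,
}

def score_tweet(text, lexicon):
    """Sum the valence of every dictionary term found in the tweet."""
    words = text.lower().split()
    return sum(lexicon.get(w, 0) for w in words)

# "love" (+3) and "bad" (-3) cancel out, so the net score is 0.
print(score_tweet("I love this brand but shipping was bad", mini_afinn))
```

Even this toy version makes the key point visible to students: the score is just a sum of word-level valences, with no understanding of negation, sarcasm, or context.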
For more details on the assignment, check out the assignment embedded below and the requirements for what must be turned in.
As I’ve noted in a previous post, this project isn’t perfect. But it is a fairly straightforward and accessible learning experience for students who are in their first semester of seeing how research can be conducted. It covers a wide array of experiences and learning opportunities – from discussion of what sentiment is, to understanding its flaws, to understanding the flaws of quantitative content analysis, to learning to apply a number of key research terms, as well as providing exposure to how to write research reports. The project itself is bolstered by several lectures, it comes about halfway through the semester, and it takes several days of hands-on learning in the classroom. Students, of course, finish the write-up outside of class. But we do the analysis entirely in class to ensure students are getting my help as the “guide on the side.”
My previous post covers some activities we do to build up to this assignment.
So that’s all for now! Please feel free to use this assignment, modify it, and improve it. If you do, come back and share in the comments below how you have (or would) improve upon and modify it!
As promised, I want to share my assignment, and my handout for students that teaches them how to use Yoshikoder. Before we do the project, however, I do a brief in class activity to get students learning how to use Yoshikoder. So let’s start there for today’s post. And next post, I’ll share the assignment itself.
PART 1: THE SET UP
What I like to do is present the problem to the students via the project assignment. Then, we go back and start learning what we’d need to do to solve the problem. So, after lecturing about what sentiment analysis is and why it is important, I first introduce students to the idea of constructing a coding sheet for keywords by taking a list of keywords and adding them to categories.
First, we talk about the idea in class, and I show them some simple examples, like: if I wanted to code a sample for the presence of “sunshine” – what words would I need? Students brainstorm things like star, sun, sunny, sunshine, etc.
We discuss the importance of mutual exclusivity, being exhaustive, etc.
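One way to make the mutual-exclusivity discussion concrete is to check, in code, whether any keyword landed in more than one category. A hypothetical sketch (the categories and terms below are made up for illustration):

```python
from itertools import combinations

# Hypothetical keyword categories for the "sunshine" brainstorming example.
categories = {
    "sunshine": {"sun", "sunny", "sunshine", "sunlight"},
    "weather":  {"rain", "cloudy", "storm", "sunny"},  # "sunny" appears twice!
}

def overlapping_terms(cats):
    """Return terms assigned to more than one category (a mutual-exclusivity violation)."""
    clashes = set()
    for (_, terms_a), (_, terms_b) in combinations(cats.items(), 2):
        clashes |= terms_a & terms_b
    return clashes

# "sunny" sits in both categories, so the categories are not mutually exclusive.
print(overlapping_terms(categories))
```

Students can apply the same check by eye to their own coding sheets: any term that could be counted in two categories means a coding decision has to be made before analysis starts.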
I show an example from my dissertation which looked at agenda setting topics on Twitter.
On the class day before I introduce Yoshikoder, students do a practice assignment where I give them a list of random terms related to politics and elections. They then have to create “positive” and “negative” content categories using the terms. The terms aren’t necessarily well suited for this exercise, which gets them thinking a bit. They then hand code a sample of Tweets I provide about two different politicians. I tend to use the most recent election – so, in this case, Obama and Romney. Students get frustrated having to hand code these Tweets, but a little trick is to search for the exact phrases in the Tweet files on the computer, and then they are done fairly quickly. Then, on the next class period:
1) Practice with Yoshikoder: We do the same basic task, but this time they learn to program their “positive” and “negative” categories into Yoshikoder. They then load the Tweets (which I have saved as a .txt file) and analyze them for the presence of their positive and negative content categories. This is a great point to stop and have students assess the reliability between what they hand coded and what the computer coded. Often there will be discrepancies, and this makes for a great opportunity for discussion.
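If you want to formalize the hand-versus-computer comparison, simple percent agreement is enough at this level. A sketch with made-up codes (Yoshikoder does not compute this step for you):

```python
# Simple percent agreement between hand codes and computer codes.
# The codes below are made up for illustration: 1 = positive, -1 = negative, 0 = neutral.
hand_codes    = [1, 1, -1, 0, -1, 1, 0, -1]
machine_codes = [1, 0, -1, 0, -1, 1, 1, -1]

matches = sum(h == m for h, m in zip(hand_codes, machine_codes))
agreement = matches / len(hand_codes)

# 6 of the 8 codes match, so agreement is 75%.
print(f"Agreement: {agreement:.0%}")
```

Anything well below 100% is the launching point for the discussion: which coder was “right,” and why did they disagree?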
Here is the activity that I use in class. I also provide Tweets that I’ve downloaded using the search terms for the politician/candidate I’m using in the activity (e.g., Obama; Romney) in plain text format so Yoshikoder can read it. Also, see the below handout which I provide students to show them how to use Yoshikoder and how to program, and run the analyses I just described.
As I mentioned above, I create a handout for students that explains the different functionalities of Yoshikoder and how to run the analyses. As I’ve discussed elsewhere, I like to provide handouts. The one below isn’t one of my more elaborate handouts, but it provides a quick overview with some screenshots showing which buttons need to be clicked. This is super helpful if you are trying to learn Yoshikoder, or if you want to use it alongside the activity discussed in this post (or the project discussed in my last post, which I will provide in my next blog post).
First, let me say that more and more I am trying to decrease my lecturing and spend more time in class on hands-on learning, having my students learn by doing rather than just listening – sort of like the flipped classroom Gary Schirr has been discussing recently on his blog. So this class is really pushing in-class projects and experiential learning. Following this approach, in order to introduce students to research, I provided students with the instructions and a lot of structure for their first two projects.
I want to use our second research project as an example. Then, I’ll talk about the pros and cons. The second project was a sentiment analysis of Tweets about a brand I chose and a (realistic but not necessarily real) scenario.
My goals with this project were to teach students:
About computer-assisted content analysis: how it differs from a hand-coded quantitative content analysis (the focus of our first project), and its strengths and weaknesses.
How to do a basic computer-assisted content analysis using Yoshikoder, an easy-to-use, free application that works on Mac and PC – so my students can use it at home if needed!
About sentiment analysis – what it is, why it is used by organizations to evaluate the online conversation about their brand, and its strengths and weaknesses.
How to write up a research report (In the first project, I provided the project overview and requested results and discussion. In the second project, I added a literature review and methods section, and had them write the research objective and research question).
Why I chose to do this project this way: A number of social media analytics tools today are offering sentiment analysis. There are also sites like socialmention.com that will provide you with a free sentiment analysis of a search term. But how are these analyses conducted? What are their strengths and weaknesses? Are they reliable? Do they mean anything at all? And what do we need to be careful of before accepting them, and thus drawing inferences from them?
So what I wanted my students to do was to SEE how a sentiment analysis would be conducted by some of those high-price (or no-price!) analytic tools. In other words, I wanted my students to get their hands dirty as opposed to letting some distant and hidden algorithm do the analysis for them. I believe gaining hands-on experience with this project gives students a more critical lens through which to see and evaluate a sentiment analysis of social media messages.
The Set Up: I provide in the assignment: The Situation or Problem / Campaign goals and objectives (of an imaginary campaign that is ongoing or happened) / benchmarks / KPIs. In this case, the situation involved a popular online retail brand and rising customer complaints and dissatisfaction as the brand has grown beyond its core base of loyal customers in recent years.
I provide students with a sample of about 1,000 Tweets that I downloaded and formatted to play nicely with Yoshikoder. The sample consists of mentions of the brand. This ensures students are all looking at the same dataset and streamlines (or rather, eliminates) the data collection process, helping students focus on other elements of the assignment. For the sentiment analysis,
I rely on the AFINN dictionary, which was designed for sentiment analysis of microblogs. Students learn what AFINN is and a little about how linguistic analysis dictionaries are created through research. Students then analyze the Twitter dataset using the AFINN dictionary to determine the sentiment scores. There are no fancy stats being done here. By checking the sentiment analysis output, they simply determine whether their KPI (a percentage of positive Tweets about the brand) was met. In this case, the result they are looking for is a percentage – so, simple division. Not scary at all; no SPSS training needed (that comes with a later project).
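To show just how simple the math is, the entire KPI check fits in a few lines of Python. All numbers here are hypothetical, matching the scenario’s made-up 65% KPI:

```python
# Hypothetical counts pulled from the sentiment analysis output;
# the 65% KPI is the made-up target from the assignment scenario.
positive_tweets = 612
total_tweets = 1000
kpi = 0.65

pct_positive = positive_tweets / total_tweets

# prints: 61.2% positive -> KPI not met
print(f"{pct_positive:.1%} positive -> KPI {'met' if pct_positive >= kpi else 'not met'}")
```

That’s the whole “analysis”: count, divide, compare against the benchmark.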
They also look at the valence of the sentiment (ranging from -5 to +5) and explore what it means. Students use this information, along with class lecture, other exercises on how to write research reports, etc., to produce their project #2 report.
Again, to reiterate an important point, we discuss the benefits of this analysis as well as its real weaknesses. Students always bring up the fact that the results lack context – what if someone used the word “bad” to mean good? What about sarcasm? I show them how to use Yoshikoder to look at keywords in context as a way of addressing this.
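A keyword-in-context (KWIC) view is also easy to sketch outside Yoshikoder. This rough Python version is my own illustration (with made-up example Tweets), not Yoshikoder’s implementation:

```python
def kwic(texts, keyword, window=3):
    """Return each occurrence of keyword with `window` words of context per side."""
    lines = []
    for text in texts:
        words = text.split()
        for i, word in enumerate(words):
            # Strip trailing punctuation so "bad." still matches "bad".
            if word.lower().strip(".,!?") == keyword.lower():
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                lines.append(f"{left} [{word}] {right}".strip())
    return lines

# Made-up Tweets for illustration.
tweets = ["That new update is so bad it's good", "Customer service was bad today"]
for line in kwic(tweets, "bad"):
    print(line)
```

Seeing “bad” flanked by its neighbors is exactly what lets students spot the sarcastic or inverted uses that a bag-of-words sentiment score misses.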
The Benefits and Drawbacks of This (and these types of) Projects

As I said above, I am really trying to move away from lecture in favor of experiential learning. Here are some things I’ve noticed. Some may be benefits, others drawbacks, and others a bit of both…
The focus of this project is not on the stats or the analysis, and I provide a lot of the needed information – so it makes for a good ‘getting your feet wet’ project that teaches students other important elements of research.
It would be nice to teach them more advanced methods of analysis – but I do cover that a bit more later in the semester.
Students learn through their mistakes and from my feedback as opposed to me paving the way for them and simply asking them to drive down the smooth road.
I provide a LOT of handouts on how to write different sections of a research report, etc. They are detailed… sometimes too detailed, and I fear students don’t read them because of information overload.
Sometimes, I wish I had more time to teach them how to avoid the simple mistakes I see in their work, particularly their research reports. I say to myself, “oh man, I thought I told them how to do that.” Or, “Why didn’t you read the handout that explains how to structure this!?”
They likely won’t do sentiment analysis like this ever again – but at least they’ll understand it!
They get to see the results for themselves and get a sense that they discovered the results.
Class time is busy – our class rushes by and we don’t always get to cover everything I want to. As a person who likes order and time management, I am having to “let go a little” and let things happen. This is helping me grow. I wonder if it is helping my students though…
I know I enjoy doing these sorts of projects a lot more than standing and lecturing, lecturing, lecturing about research. I feel it has made research a lot more “real” and hands on to them.
So that is my overview of the project in general, and some thoughts. It isn’t perfect but it seems to have gone well and I really enjoyed doing it. I’d love any feedback or suggestions you may have to make this the best possible experience for my students. And of course, feel free to adapt, modify, or improve upon this idea.
In an upcoming post (or posts), I’ll share the assignments (I want to move my documents over to SlideShare due to the paywall on Scribd), and I will provide some basic info on how to use the Yoshikoder software.
Metrics, Metrics, Metrics! I hear it everywhere I turn. 🙂 More than ever, we need to be teaching our students research skills.
This Spring 2014 semester I am really excited to be teaching an applied Communication Research class!
For two years at Utah Valley University, I taught communication research with an emphasis on academic research. You can see the syllabus for that class. In that class, student groups planned, wrote up, and executed a semester-long academic research study. Though many professors prefer not to teach this class, research is one of my favorite classes to teach. I’ve had numerous undergraduate students present their research at undergraduate research conferences and earn travel grants to do so. This is a super valuable experience for those considering grad school. Though it is very time demanding, and some feel teaching others how to conduct research is tedious, I didn’t find it that way at all. Seeing students get that “aha” moment in research and seeing them succeed makes teaching the class very rewarding.
This semester, I’ll be focusing on the more practical uses of research with an emphasis on using research for strategic purposes. This class emphasizes research across new media, legacy media, and interpersonal and online environments. Students will learn both quantitative and qualitative methods.
This hands on class will emphasize the following research skill sets:
How to conduct a content analysis using a coding sheet
How to conduct a computer-assisted content analysis
How to conduct interviews and focus groups
How to conduct quantitative electronic surveys using iPads
Students will work in teams to conduct 3 applied projects. The first 2 projects are real-world problems I set up and the students have to solve, and in the 3rd project they have to identify a problem, write a proposal, and execute it:
Media placement evaluation – Answering questions about placement, share of voice, and whether (and to what extent) key messages are included in media coverage. Done via content analysis of media clippings.
Sentiment analysis of social media content – What are people saying about your brand on social media, and what is sentiment towards it? Done via computer-assisted content analysis of Twitter posts.
Audience Research – Focuses on 1 of the 5 key PR variables discussed by Stacks (2011): Confidence, credibility, relationship, reputation (which may include awareness), or trust. Students will choose 2 of the following: interviews, focus groups, and surveys.
Students will be introduced to the following software:
Computer-assisted content analysis (Yoshikoder will be used as it is free and easy to learn)