When it comes to monitoring brand sentiment, you can’t trust technology. Even the best social media monitoring tools fail miserably.
The most evolved monitoring tools still depend on unevolved sentiment algorithms. As a result, they struggle with sarcasm, disgruntled spammers, emojis, and bogus social media accounts, all of which wildly skew sentiment results across social media analytics platforms.
During a recent look at more than 14,500 brand mentions, a leading social media monitoring tool identified 2,875 positive mentions (20%) and 543 negative mentions (4%). This resulted in a net sentiment score of +3.41. The score represents a very healthy brand on a scale of -5 to +5… if it’s accurate.
After manually verifying and adjusting the sentiment, we found that there were actually 972 (7%) positive mentions and 1,121 (8%) negative mentions. This resulted in a net sentiment score of -0.36. It represented a brand with more opportunities for improvement than the original, unverified data suggested.
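The scores above are consistent with a common formulation of net sentiment: the positive/negative balance among sentiment-bearing mentions only, scaled to a -5 to +5 range. The article doesn't name the tool's exact formula, so treat this as a plausible reconstruction that happens to reproduce both figures:

```python
def net_sentiment(positive, negative, scale=5):
    """Net sentiment on a -scale..+scale range, computed only from
    the mentions classified as positive or negative (neutral mentions
    are excluded, as the article notes)."""
    classified = positive + negative
    if classified == 0:
        return 0.0
    return scale * (positive - negative) / classified

# Unverified tool output: 2,875 positive, 543 negative
print(round(net_sentiment(2875, 543), 2))   # 3.41

# After manual verification: 972 positive, 1,121 negative
print(round(net_sentiment(972, 1121), 2))   # -0.36
```

Note how the denominator excludes neutral mentions: the same mention counts against a much smaller base, which is why a modest shift in positive/negative totals swings the score from "very healthy" to slightly negative.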
This is a big problem.
The business decisions, priorities, and strategies that marketers make based on unverified sentiment data are likely to vary greatly from those made based on verified and adjusted sentiment data. Making good decisions based on accurate sentiment data requires marketers to verify sentiment for all brand-related mentions.
If you’re benchmarking the performance of your company against competitors, you need to find a means of comparing your company to others in a meaningful way. There are three practices to consider:
- Audit none of your competitors. Use your unadjusted totals for comparison. This is problematic. You can’t make good decisions on poor data. Sadly, this is the most common practice.
- Audit all of your competitors. Guarantee the accuracy of data for comparison. This is desirable, but impractical. If you’re monitoring five or more competitors, evaluating 50,000+ mentions during a month may be a poor use of time.
- Audit competitor mentions that have been assigned positive or negative sentiment. Since only positive and negative mentions are used to calculate net sentiment, and they represent only a small portion of overall mentions, this may be a good choice when auditing all mentions is impractical. You’ll be overlooking positive/negative mentions that are mistakenly assigned neutral sentiment, but you’ll have valid samples to work from.
I’m an advocate for methods #2 (when possible) and #3 (when necessary). However, benchmarking using method #1 is better than not benchmarking at all. It’s still a good starting point for exploration. The data isn’t as reliable for trending (making month-to-month comparisons), but it can help identify the activities of competitors.
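Method #3 amounts to a simple filter: pull only the mentions the tool labeled positive or negative into a manual audit queue. A minimal sketch, where the mention structure and field names (`text`, `sentiment`) are hypothetical and would depend on your monitoring tool's export format:

```python
# Hypothetical export from a monitoring tool: each mention carries
# the sentiment label the algorithm assigned.
mentions = [
    {"text": "Love this brand!",        "sentiment": "positive"},
    {"text": "Order arrived on time.",  "sentiment": "neutral"},
    {"text": "Worst support ever \U0001F644", "sentiment": "positive"},  # sarcasm misread
    {"text": "Never buying again.",     "sentiment": "negative"},
]

# Method #3: audit only the sentiment-bearing mentions, since only
# those feed the net sentiment calculation.
audit_queue = [m for m in mentions if m["sentiment"] in ("positive", "negative")]

print(len(audit_queue))  # 3 of 4 mentions need manual review
```

The trade-off the article describes is visible here: the neutral mention is never re-checked, so a positive or negative mention mistakenly filed as neutral would stay invisible, but the queue is far smaller than the full mention set.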
Sentiment algorithms can be more accurate when users can self-select sentiment using emojis, likes, and thumbs up. Here’s a fun take on the evolution of Facebook ‘Likes’, courtesy of Rondon Fernandes aka FDP (s.o.b.):