
Out of The Dark Ages: The Rise of Social Media Sentiment Analysis, Part 2 – The Renaissance

Social Media Analytics, Social Media Marketing, Social Media Strategy

THE RENAISSANCE

“THE MACHINE-LEARNING ALGORITHM METHOD OF SCORING SENTIMENT WAS EXECUTED AT THE POST LEVEL, MEANING THAT EACH TEXT-BASED PIECE OF CONTENT WAS REVIEWED AND SCORED OVERALL BASED ON LANGUAGE CUES.”

Prior to The Renaissance, sentiment analysis in social media was manual, slow, inaccurate, and limited heavily by the technology readily available in the marketplace. But things were changing rapidly…

Once the concept of sentiment analysis had been legitimized and adopted throughout the social media industry, significant strides were made to automate the scoring, recording, and reporting processes. The biggest enhancement of sentiment analysis rested on the shoulders of machine-learning algorithms.

The Rise of Machine-Learning

By 2013–2014, the algorithms used by major SMMS platforms achieved roughly 70% accuracy in classifying posts as positive, negative, or neutral. If an organization wanted to push accuracy beyond that, a skilled representative would train the system by sorting through a significant number of social posts and manually scoring them until the algorithm had enough data to refine its definitions of positive, negative, and neutral sentiment.

The new wave of machine learning was significant for the evolution of sentiment analysis, as algorithm-based sentiment scoring grew more accurate as more posts were scored. Analysis at the post level meant that each piece of content received one overall score. While a post might contain both positive and negative expressions, only a single definitive sentiment score was applied, limiting an organization's understanding of consumers' true perception of its brand.
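Post-level scoring can be illustrated with a minimal sketch. The lexicon and scoring rule below are assumptions made for demonstration only, not the actual algorithm of any SMMS platform; the point is that a mixed post still collapses to one definitive label:

```python
# Hypothetical keyword lexicons, assumed for illustration only.
POSITIVE = {"love", "great", "wonderful", "thanks", "amazing"}
NEGATIVE = {"hate", "terrible", "broken", "awful", "lost"}

def score_post(text):
    """Return a single overall label for the whole post (post-level scoring)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if balance > 0:
        return "positive"
    if balance < 0:
        return "negative"
    return "neutral"

# A post with both positive and negative expressions gets one label:
print(score_post("Love the new design, but the app is terrible and broken"))
# → "negative" — the positive half of the post is lost in the single score
```

The mixed opinion ("love the design" vs. "the app is broken") is flattened into one label, which is exactly the limitation described above.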

Limitations: An Example

The following example Tweet illustrates the limitations that early machine learning surfaced.

[Image: example Tweet]

The limitations of this generation of sentiment analysis meant that mass generalizations of the true sentiment behind a piece of content were unavoidable, as there were only three levels of scoring – positive, negative, and neutral. As shown in the example above, social posts can carry sarcastic tones that sentiment-tracking engines misinterpret. A traditional system would see the words “wonderful” and “thanks” and score the post as positive, when in reality it expresses a very negative experience.
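The sarcasm failure mode is easy to reproduce with a naive word-count scorer. The lexicon below is a hypothetical assumption, not any vendor's real model; it simply shows how positive surface words override negative intent:

```python
# Hypothetical lexicons, assumed for illustration only.
POSITIVE = {"wonderful", "thanks", "great"}
NEGATIVE = {"terrible", "awful", "ruined"}

def naive_score(text):
    """Count positive vs. negative keywords, ignoring tone and context."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if balance > 0 else "negative" if balance < 0 else "neutral"

sarcastic = "Wonderful, my flight is delayed five hours. Thanks a lot!"
print(naive_score(sarcastic))
# → "positive", even though the post describes a negative experience
```

Because the scorer only sees the words “wonderful” and “thanks”, the sarcastic complaint is recorded as a positive brand mention.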

“BRANDS DEMANDED GREATER GRANULARITY AND SEGMENTATION OF SENTIMENT DATA.”

Brands Needed More Control

In addition to the language limitations, brands needed a way to segment sentiment into categories that were important and unique to each organization. Most platforms on the market provided only basic machine-learning sentiment coupled with limited filtering functionality. Brands demanded greater granularity and segmentation of sentiment data – a structured framework for measuring sentiment in the areas that mattered to them. For instance, a CPG brand would benefit from looking at packaging, taste, ingredients, and other categories, while an automotive company would need very different categories, such as comfort, handling, and service. A “general” sentiment score did not tell an accurate story unless it could be broken down further.
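The category-level segmentation brands were asking for can be sketched as follows. The category keyword maps, lexicons, and clause-splitting rule are all hypothetical assumptions chosen for a CPG-style example, not a real platform's framework:

```python
import re

# Hypothetical brand-defined categories and lexicons, for illustration only.
CATEGORIES = {
    "packaging": {"box", "bottle", "wrapper", "packaging"},
    "taste": {"taste", "flavor", "sweet", "bitter"},
    "ingredients": {"ingredients", "sugar", "organic"},
}
POSITIVE = {"love", "great", "delicious"}
NEGATIVE = {"hate", "awful", "flimsy"}

def aspect_sentiment(text):
    """Score each brand-defined category clause by clause."""
    results = {}
    # Naively split on commas and "but" to isolate clauses (an assumption).
    for clause in re.split(r",|\bbut\b", text.lower()):
        words = [w.strip(".!?") for w in clause.split()]
        tone = ("positive" if any(w in POSITIVE for w in words)
                else "negative" if any(w in NEGATIVE for w in words)
                else "neutral")
        for cat, keys in CATEGORIES.items():
            if any(w in keys for w in words):
                results[cat] = tone
    return results

print(aspect_sentiment("Love the flavor, but the packaging is flimsy"))
# → {'taste': 'positive', 'packaging': 'negative'}
```

Unlike a single post-level score, this kind of breakdown lets a brand see that taste is praised while packaging is criticized within the same post.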

In addition, many of these tools were standalone offerings that required jumping between platforms and working with disparate data sets. This was at best a partial solution: sentiment measured outside of its context was inaccurate.

But these hurdles would soon be obsolete…

Continue to read Part 1 & Part 3.
