7 Big Data Blunders of 2016

The big data arena has erupted over the last year with incredible technological advancements. Predictive analytics and machine learning went mainstream. Artificial Intelligence (AI) bot platforms and smart technology took over our homes and offices. Across the world, organizations strategized to cultivate cultures fueled by data and analytics.

But with all the successes and accomplishments, it’s important to remember the mistakes and mishaps we can learn from. To quote the great George Santayana, “Those who cannot remember the past are condemned to repeat it.” Thus, in hopes of learning from the missteps and miscalculations of others, we’ve recapped some of the biggest big data blunders of 2016, as a reminder that even the most well-thought-out plans can go astray.

1) Trump Wins the Election

We cannot talk about big data failures in 2016 without talking about the recent presidential election. On November 8th, Donald Trump’s victory ran counter to almost every major forecast released. Before the vote, everyone from CNN to the New York Times predicted a Trump loss by sizeable margins.

The result filled the media with cries of a “big data failure.” But what we can learn from this is far more valuable than who (or what) we can blame. The unanticipated Trump victory was a reminder that, in the end, we are human: prone to emotion and unpredictability, heavily influenced by things that only seem real (fake news), and not always open or forthcoming about our intentions, especially when it comes to something as significant and personal as a presidential candidate.

2) Yahoo Faces One of the Largest Breaches Ever

On September 22nd, Yahoo admitted to a breach of some 500 million accounts. Hackers made off with personal details including names, email addresses, phone numbers and encrypted passwords. Described as one of the largest breaches ever in terms of user accounts, the incident exposed the vulnerability of even the most seemingly secure encryption technologies.

3) Facebook’s Flawed Metrics

Facebook made the news not once, not twice, but three times in 2016 for skewed advertising metrics. Miscalculations included overstated app referrals, inaccurate video view times and a failure to account for duplicate visitors in its reach measurements.

Most recently, after an internal metrics audit, Facebook acknowledged that “discrepancies, or ‘bugs,’ led to the undercounting or overcounting of four measurements, including the weekly and monthly reach of marketers’ posts, the number of full video views and time spent with publishers’ Instant Articles.” As a result, Facebook has enlisted third-party help with future metrics verification in an effort to put advertisers and publishers at ease going into 2017.
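To see why duplicate visitors matter, here is a minimal, hypothetical Python sketch; the event log and field names are invented for illustration and bear no relation to Facebook’s internal systems. Summing raw visits double-counts returning users, while counting distinct user IDs gives the deduplicated reach.

    # Hypothetical event log: each entry is one visit by one user to a post.
    visits = [
        {"user_id": "u1", "post_id": "p1"},
        {"user_id": "u1", "post_id": "p1"},  # same user returns -- not new reach
        {"user_id": "u2", "post_id": "p1"},
        {"user_id": "u3", "post_id": "p1"},
    ]

    # Naive "reach": counts every visit, so repeat visitors inflate the figure.
    naive_reach = len(visits)

    # Deduplicated reach: counts each distinct user only once.
    true_reach = len({v["user_id"] for v in visits})

    print(naive_reach, true_reach)  # 4 vs. 3 -- the naive count overstates reach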

4) Cubs Win the World Series

On Wednesday, November 2nd, after a 108-year drought, the Chicago Cubs took home a World Series victory. They came back from a 3-games-to-1 deficit against the Cleveland Indians, a highly unlikely win according to nearly every statistic and data prediction released. Even the renowned Nate Silver, who has made a career of accurately predicting elections and World Series champions, failed to anticipate the Cubs taking home the win and breaking the curse that had plagued the team since 1908!

5) Pokémon Go is Labeled as “Inherently Racist”

Who could forget Pokémon Go sweeping the country in the summer of 2016? The augmented-reality game hit international news when it was downloaded 10 million times in the first week after its release. Then it made news again after being abandoned by 10 million users in less than a month!

Shortly after its launch, reports began speculating that the app’s PokéStop locations favored white neighborhoods. Later, the Urban Institute released a study that found an average of 55 PokéStops in neighborhoods with a majority white population. Since then, the app has been called “inherently racist” by leaders of the Black Lives Matter movement, and its maker has answered the public outcry with explanations for the apparent geographic skew.

6) Admiral Insurance App Violates Facebook Data Policy

UK-based insurance company Admiral made the news in November after getting a hand slap from Facebook. The company intended to launch an app offering discounted car insurance premiums to first-time drivers, using an algorithm that scanned a user’s Facebook posts and offered additional discounts or specials based on the personality traits and characteristics those posts revealed.
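Admiral never published how its scoring worked, so the following Python sketch is purely hypothetical: it scores posts against invented keyword lists and maps the result to made-up discount tiers. Every function name, keyword and threshold here is an assumption for illustration only, not Admiral’s actual model.

    # Hypothetical illustration of post-scanning for personality signals.
    # The keywords, weights and discount tiers are invented for this sketch.
    CONSCIENTIOUS_WORDS = {"plan", "schedule", "budget", "checklist"}
    RISKY_WORDS = {"yolo", "spontaneous", "tonight!!"}

    def trait_score(posts):
        """Return a crude 'conscientiousness' score from a list of post strings."""
        score = 0
        for post in posts:
            words = set(post.lower().split())
            score += len(words & CONSCIENTIOUS_WORDS)
            score -= len(words & RISKY_WORDS)
        return score

    def quoted_discount(posts):
        """Map the trait score to a hypothetical premium discount percentage."""
        score = trait_score(posts)
        if score >= 3:
            return 15
        if score >= 1:
            return 5
        return 0

    print(quoted_discount(["Made a budget and a checklist for the road trip"]))  # 5

Even as a toy, the sketch makes the policy concern plain: social posts end up driving eligibility and pricing decisions, which is exactly what Facebook’s terms forbid.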

Facebook put an end to the launch only a few hours before it was scheduled, as the app directly violated Facebook’s platform policy; specifically, clause 3.15, which “prohibits the use of data obtained from Facebook to make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan.”

It’s a reminder for any company planning to leverage Facebook data to conduct proper due diligence and confirm that an algorithm adheres to Facebook’s data usage policies before building it.

7) Microsoft’s Teen Chatbot Goes Racist and Homophobic

Less than a day after its debut, Tay, Microsoft’s AI chatbot modeled to learn from conversations and speak like a millennial, was promptly taken offline. Launched in March, “Tay.ai” quickly became everything that is wrong with the internet, morphing into a “sexist, racist monster.”

The experiment, launched as Microsoft’s attempt to form a relationship with teens, displayed all that is wrong with using communication mimicry for AI algorithms. In less than 24 hours, Tay had gained more than 50,000 followers and produced nearly 100,000 tweets, many laced with bigotry and racism. Microsoft pulled Tay off Twitter with a formal apology after the embarrassing debut and promised to “look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
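Microsoft has not disclosed Tay’s internals, so the toy Python bot below is only a hypothetical illustration of why unfiltered mimicry goes wrong: it memorizes whatever users say and replays it to later users, and the comments mark the content filter it is missing.

    import random

    # Toy "learning by mimicry" bot -- not Tay's actual design, which was never
    # published. It memorizes user messages and replays them verbatim, so
    # coordinated abuse poisons what it says to everyone else.
    class MimicBot:
        def __init__(self):
            self.memory = ["hello!"]  # seed phrase

        def chat(self, user_message):
            # Missing step: no toxicity or content filter before learning.
            self.memory.append(user_message)
            return random.choice(self.memory)

    bot = MimicBot()
    bot.chat("have a great day")            # benign input is learned...
    bot.chat("<coordinated abusive post>")  # ...and so is abuse, verbatim
    print(bot.chat("hi"))  # may now echo the abusive content back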

Conclusion

These blunders and oversights should serve as lessons that all of us can learn from entering the new year. As data becomes the thread that weaves together everything from our television show selections to self-driving automobiles, learning “what not to do” is just as important as learning “what to do.”

About the Author

Elie Khoury is the co-founder and CEO of Woopra, a customer intelligence platform that helps businesses centralize customer engagement across all channels. An engineer by trade and CEO by profession, Elie built Woopra under the belief that data made accessible to everyone in the organization would empower all employees to make better-informed, data-driven decisions.

He studied computer science at the Lebanese American University where he built the first generation of Woopra in 2007. In addition to Woopra, he co-founded YallaStartup, a non-profit organization established to foster entrepreneurship in the Middle East and North Africa.

References:

http://www.huffingtonpost.com/entry/macedonian-teen-claims-trump-supporters-paid-him-60k-to-produce-fake-news-during-campaign_us_584ac403e4b0bd9c3dfc51b7
http://adage.com/article/digital/facebook-reveals-advertiser-number-flaws/307110/
https://www.facebook.com/terms.php
http://www.theverge.com/2016/3/23/11290200/tay-ai-chatbot-released-microsoft

