by Peter Kusnic
June 27, 2018
In March, the New York Times published a bombshell report that rattled the tech industry: British political consulting firm Cambridge Analytica had illicitly obtained the data of as many as 87 million Facebook users and deployed it in an effort to elect Donald Trump to the US presidency in 2016.
The details are shocking. Not only did Facebook enable the breach by failing to monitor how third parties used the data it shared (such sharing being a hallmark of the company’s business model), but the very mission of Cambridge Analytica – to leverage data to change users’ behavior and shape their opinions – points to a trend of psychological manipulation in big tech that raises the question:
What constitutes proper use of data, what constitutes misuse – and who gets to say?
In April, Facebook CEO Mark Zuckerberg testified before Congress in what many see as the opening salvo of a regulatory clampdown on the tech industry. Of course, the implications go far beyond social media, as data mining is also key to the performance of smartphones, tablets, search engines, apps, smart home speakers and hubs, and Internet of Things (IoT) devices like smart home security products.
According to the recent Freedonia Group study Smart Home Security, the number of US households containing smart home speakers or hubs is expected to rise from 23 million in 2018 to 34 million in 2025. Smart home security products in particular are expected to see double-digit gains going forward.
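For context, those household figures imply compound annual growth of roughly 5.7% (the double-digit gains cited for smart home security products are a separate, product-level forecast). A quick back-of-the-envelope check in Python, using the study’s two data points as inputs:

```python
# Back-of-the-envelope CAGR check for the household figures cited above.
# Inputs are the study's 2018 and 2025 values; everything else is arithmetic.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

households_2018 = 23_000_000
households_2025 = 34_000_000

rate = cagr(households_2018, households_2025, years=2025 - 2018)
print(f"Implied CAGR: {rate:.1%}")  # ~5.7% per year
```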
It’s no surprise that consumers are flocking to the IoT. The conveniences of home automation – lower energy bills, remote household oversight and control, insurance incentives, and peace of mind – are manifold, and industry leaders like Amazon, Google, and Samsung are investing heavily to advance the technology and mainstream it first.
However, each new product innovation also brings rising user concerns about the type of data their smart devices collect, where it is stored, with whom it is shared, and how easily it can be stolen.
That’s because IoT devices are notoriously weak on cybersecurity, with numerous high-profile reports of hackers not only gaining access to smart devices like lights to cause mischief, but also obtaining private information such as passwords, credit card numbers, and video surveillance feeds.
Because data breaches tend to occur invisibly, months or years can pass before a company realizes it’s been hacked. By then, there’s no telling how far stolen user data has traveled, who has it, and whether it has been or will be used. Like a genie let out of its bottle, once the data’s out, there’s no putting it back.
Third-party threats aren’t the only concern. In addition to hacking, users will have to weigh the conveniences of these devices against some of their more intrusive qualities – a dilemma exacerbated by recent reports of unauthorized eavesdropping by devices like the Amazon Echo Dot and Google Home Mini. In one high-profile instance, an Echo Dot went so far as to record a private discussion and send it to a contact stored on the device.
While such issues have since been passed off as easily fixed technical glitches and command errors, developers will face a challenge going forward: convincing unsettled consumers that their smart devices aren’t surreptitiously monitoring them in their own homes.
On a given smart device, data is generated in three stages: when the device collects it, when the device transmits it to a hub or the cloud, and when it is stored and analyzed.
Every event a smart device processes creates data that the central system then leverages to streamline efficiencies and tailor its performance to unique user and/or household needs. As such, each device in a smart home hosts vast quantities of data. Depending on the privacy agreement between the user and the supplier, the information can be used in a variety of ways.
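As a minimal sketch of that lifecycle – with hypothetical names and fields, not any vendor’s actual API – here is how a single device event might be captured and retained by the central system:

```python
# Hypothetical sketch of a smart-device event lifecycle: collect -> transmit -> store.
# The SmartEvent fields and HubStore interface are illustrative, not a real vendor API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SmartEvent:
    device_id: str
    event_type: str  # e.g. "motion_detected", "door_unlocked"
    payload: dict
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class HubStore:
    """Central system that accumulates events and tailors behavior to the household."""
    def __init__(self):
        self.events: list[SmartEvent] = []

    def ingest(self, event: SmartEvent) -> None:
        # Every processed event is retained, which is why a smart home
        # accumulates such a large behavioral record over time.
        self.events.append(event)

    def events_for(self, device_id: str) -> list[SmartEvent]:
        return [e for e in self.events if e.device_id == device_id]

hub = HubStore()
hub.ingest(SmartEvent("front_door_cam", "motion_detected", {"zone": "porch"}))
hub.ingest(SmartEvent("smart_lock", "door_unlocked", {"method": "keypad"}))
print(len(hub.events_for("smart_lock")))  # 1
```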
Consider the ads that suddenly appear on your social media feed after you search for a certain product online. These ads are generated when third parties pay a company to identify users who are likely to respond favorably to a given ad based on their internet usage (e.g., content liked and/or shared, keywords searched, locations checked in to, etc.). This happens unbeknownst to most users, and the result is major profits for the company that owns the data.
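In simplified terms – and purely as an illustration, since real ad platforms use far more elaborate models – the matching step amounts to scoring how well a user’s recorded activity overlaps with an ad’s target keywords:

```python
# Toy illustration of interest-based ad matching; real platforms use
# far more sophisticated models. All names here are hypothetical.

def relevance_score(user_signals: set[str], ad_keywords: set[str]) -> float:
    """Fraction of the ad's target keywords present in the user's activity."""
    if not ad_keywords:
        return 0.0
    return len(user_signals & ad_keywords) / len(ad_keywords)

# Signals mined from likes, searches, and check-ins:
user_signals = {"running", "fitness", "marathon", "coffee"}

ads = {
    "running_shoes": {"running", "marathon", "shoes"},
    "lawn_mowers":   {"gardening", "lawn", "mower"},
}

for ad, keywords in ads.items():
    print(ad, relevance_score(user_signals, keywords))
# running_shoes scores ~0.67; lawn_mowers scores 0.0 -- only the
# relevant ad would be shown, which is why the targeting feels uncanny.
```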
Hypertargeting users with ads for products their data indicates they will like may seem innocuous – perhaps even welcome to consumers weary of seeing irrelevant ads. But for critics, it’s a slippery slope to misuse, especially with no regulatory mechanism in place to ensure that the data is adequately protected.
During his testimony before Congress, Mark Zuckerberg said that regulation of the tech industry is inevitable. But what will it entail?
Illinois, Texas, and Washington already have laws that enforce greater transparency in the collection and use of biometric data. But critics say these laws don’t go far enough to discourage misuse, and they fail to cover non-biometric data. For example, users in Illinois may file lawsuits only if their biometric privacy has been violated. The effect of such laws on the conduct of big tech companies also remains to be seen. While the Illinois law means that the facial recognition feature of the Honeywell Smart Home Security System will be unavailable in that state when it is released later this year, a ban of such limited scope may seem like little more than a drop in the bucket to a multibillion-dollar corporation.
Some lawmakers argue that users should have to opt in to data sharing rather than opt out, as is the current norm – a change that could force companies to reevaluate their very business models. Others invoke the M-word – monopoly – which is anathema to tech leaders like Facebook that often lack meaningful competition, because implicit in it is the need to break up big tech on antitrust grounds, the way Standard Oil was broken up in 1911.
Others see a potential model in the European Union’s General Data Protection Regulation (GDPR), which went into effect at the end of May. Among other things, the policy requires organizations both inside and outside the EU to obtain the explicit and informed consent of any EU resident from whom they gather data, or else face steep fines. The regulation also stipulates how companies may share that data once it has been collected.
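To make the opt-in idea concrete, here is a minimal, hypothetical sketch of consent-gated collection: nothing is gathered unless consent for that specific purpose was explicitly granted, and consent can be revoked. The class and method names are illustrative, not drawn from the regulation or any real compliance SDK.

```python
# Hypothetical sketch of opt-in, consent-gated data collection.
# Names are illustrative; this is not a real compliance library.

class ConsentRegistry:
    def __init__(self):
        # Opt-in model: no consent is assumed. Under an opt-out model,
        # the default here would effectively be "granted until revoked".
        self._granted: dict[str, set[str]] = {}

    def grant(self, user_id: str, purpose: str) -> None:
        """Record explicit, informed consent for one specific purpose."""
        self._granted.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id: str, purpose: str) -> None:
        self._granted.get(user_id, set()).discard(purpose)

    def allows(self, user_id: str, purpose: str) -> bool:
        return purpose in self._granted.get(user_id, set())

def collect(registry: ConsentRegistry, user_id: str, purpose: str, record: dict):
    """Gather data only if consent for this exact purpose exists."""
    if not registry.allows(user_id, purpose):
        return None  # no consent, no collection
    return record

registry = ConsentRegistry()
print(collect(registry, "alice", "ad_targeting", {"page": "shoes"}))  # None
registry.grant("alice", "ad_targeting")
print(collect(registry, "alice", "ad_targeting", {"page": "shoes"}))  # collected
```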
Will regulation curb the dangers of big data? Or will hackers find other loopholes to exploit? At this point, one thing is certain: Silicon Valley’s unregulated days are numbered, and big – if as yet unknown – changes are coming.
Check out the Freedonia Group’s new industry study Smart Home Security in the US. The study analyzes the US market for smart home security products – including all-in-one systems and kits, security cameras and video doorbells, safety and security alarms, and locks and access controls – presenting historical data (2016, 2017) and forecasts (2018, 2025) in value and volume terms, along with detailed profiles of key product introductions and leading market participants.
Can't get enough of smart technology? Be on the lookout for Freedonia's next smart home study, Smart Thermostats in the US, slated for July 2018.
Peter Kusnic is an Industry Studies editor at the Freedonia Group, where he also writes and edits blogs.