Big tech companies, major employers, and government agencies all have a vested interest in monitoring activities and collecting data – but the privacy of the people caught up in these information-gathering exercises is often disregarded or abused.
Players in the “big tech” arena have had a checkered history with regard to data privacy matters. But recent changes in European and ultimately global privacy laws may force a rethink, and herald a new age of greater accountability and concern for the rights of individual privacy.
In this article, we’ll be considering these issues, and taking a look at what the future may actually hold.
Big Tech and Privacy – An Unimpressive Past
Figures compiled as far back as 1998 by the US Federal Trade Commission (FTC) showed that of the approximately 98% of websites collecting personal data at that time, only 14% actually disclosed to visitors what information was being gathered and how it would be used. Harvesting data from website visitors and subscribers to online services – and sharing or selling this information to third parties – were observed to be routine practices.
The Patriot Act of 2001 greatly expanded the powers of the US government with regard to electronic surveillance domestically, and with the expanded Foreign Intelligence Surveillance Act, or FISA, internationally as well. Initiatives like the Carnivore program of the Federal Bureau of Investigation (FBI) – which gave that agency the authority to monitor internet communications such as email messages, chat rooms, and instant messaging platforms – were an extension of these powers.
In the commercial sector, a 2001 report by the American Management Association concluded that “seventy-eight percent of major US corporations use at least one technique to monitor employees’ on-the-job phone calls, e-mail, Internet use, and computer files, without notice.”
Things haven’t really improved much since then.
2016 was an especially bad year. During that 12-month period, big tech companies, high-profile players in the digital arena, and government agencies were responsible for numerous violations of personal privacy, including:
- After Facebook acquired the messaging platform WhatsApp, its terms of service were amended to allow the sharing of its subscribers’ personal information with the parent company – supposedly in the interests of reducing spam and increasing business-to-consumer communications.
- The ride-sharing platform Uber updated its mobile app to track user locations – even when they weren’t using the software. This was allegedly to improve “the pickup and drop-off experience.”
- The location records, text messages, and call logs of some Android users were routinely passed through a back-door mechanism on each device to a (presumably government) server somewhere in China.
- The Pokémon Go gaming sensation was discovered to require access to a user’s entire Google account on iOS, including their location, email and browsing history, for full functionality.
- Major social platforms including Facebook and Instagram changed the algorithms used in generating news feeds, dropping real time reports from subscribers’ friends in favor of a curated “digest” compiled from posts that the company thinks “you want to see.”
The GDPR Privacy Shake-up
Things might have continued in this manner unchecked – but recent developments in Europe have been shifting the balance of power to favor the individual.
When it became clear that the European Union’s General Data Protection Regulation (GDPR) was actually going to become a reality in May 2018, there was a frantic rush by data-handling organizations to improve their infrastructure and procedures in line with the new laws and the very strict compliance regime. This included both the primary users of consumer information (the “data controllers”) and their partners and support services (the “data processors”).
With its emphasis on privacy protection and the rights of individuals to their own data, the GDPR has also inspired the drafting of similar legislation in other parts of the world.
To cover all their bases, big tech players like Google and Facebook have made global changes to their privacy policies and operating conditions. But this hasn’t prevented them from testing the limits of what they can afford not to change under the new rules.
Big Tech Attempts at Business as Usual
As the May deadline for GDPR got closer, users around the world will have noticed an influx of email and other communications from online services and social media platforms, urging them to accept new or updated terms of service, which were supposedly drafted in line with the new privacy regulations. Mobile users would also have received a stream of “GDPR compliant” updates to their apps and services.
Much of this update activity was delivered in a way that discouraged users from reading the fine print, and much of the actual content of the “new” privacy policies and procedures closely resembled the way these platforms had been doing business all along.
The GDPR and other privacy laws based on its model require data-gathering organizations to provide up-front disclosure at the point of entry for website visitors and consumers, explaining what information will be collected from them, how it will be stored, and what it will be used for. On this basis, individuals must then give affirmative consent (they have to actively say “Yes”) before their data can be collected.
But as users discovered in the flood of revised service conditions issued by many organizations as GDPR was about to take effect, not saying “Yes” could effectively mean saying “No” to their entire online presence. For example, Facebook subscribers who declined to accept the new terms were unable to log into their accounts. And Android users who didn’t agree to the updated terms of service were effectively locked out of their phones and mobile devices.
Having consented to data collection, users under GDPR-style privacy law retain the power to opt out of that agreement in the future, and to have all the information collected about them erased from the archives.
You’d think that presenting all of these options in plain, easy-to-understand language would be a comparatively simple task. But many players in the big tech arena have been less than straightforward in the way they’ve gone about it.
A study conducted using artificial intelligence (AI) software created jointly by the European University Institute in Florence and an EU consumer organization examined the privacy policies of 14 major technology businesses, including Alphabet Inc. (owner of Google), Amazon.com Inc., and Facebook Inc. The research concluded that around 33% of the clauses in these organizations’ revised terms of service were “potentially problematic” or contained “insufficient information.” A further 11% of the policy statements used unclear language.
Deceptive practices such as these have prompted a backlash from privacy and consumer rights advocates. Hours after GDPR came into effect, the non-profit NOYB (“none of your business”) organization filed the first of four complaints against big tech companies based in Silicon Valley. Litigation filed by Austrian privacy rights lawyer Max Schrems accuses Facebook, Google, Instagram, and WhatsApp of failure to comply with the new rules.
The complaints were filed on behalf of users in France, Belgium, Germany, and Austria – and requests have been issued to the regulators asking for fines of up to $4.3 billion against Google’s parent company, Alphabet, and $1.5 billion each against Facebook, Instagram, and WhatsApp.
It’s likely that these cases are just the first in a wave of litigation that will continue as both sides in the privacy debate test the limits of each other’s resources and resolve.
Privacy – A Vision of a Brighter Future
Whatever the outcome, it’s certain that privacy, and the individual’s right to control how their information is handled, will be major issues in the months and years ahead.
The European Union’s data protection supervisor Giovanni Buttarelli – sometimes known as “Mr. GDPR” – has a vision for the future, beyond the General Data Protection Regulation and any privacy legislation that follows it. In Buttarelli’s words:
“I am nevertheless already thinking about the post-GDPR future: a manifesto for the effective de-bureaucratizing and safeguarding of people’s digital selves. It would include a consensus among developers, companies and governments on the ethics of the underlying decisions in the application of digital technology. Devices and programming would be geared by default to safeguard people’s privacy and freedom. Today’s over-centralized Internet would be de-concentrated, as advocated by Tim Berners-Lee, who invented the World Wide Web, with a fairer allocation of the digital dividend and with the control of information handed back to individuals from big tech and the state.”