Editor's note: Jonathan Arnott is a former member of the European Parliament. The article reflects the author's opinions, and not necessarily the views of CGTN.
In the digital age, things change on a daily basis and the growth of "big tech" companies regularly throws up new and unexpected issues. Yet the nature of government is to be slow, methodical and cautious. New legislation must be carefully thought out, opinions sought, and then drafted in such a way that it has the intended effect. Then the long political process starts: the building of coalitions, amendments and approval. Even a relatively simple piece of legislation may take years to move from concept to statute.
New dilemmas, which could never have been even dreamed of just a few years ago, are now a reality. Information flows freely across jurisdictions. The most complex of situations are created at breakneck speed. Suppose that a German citizen, living in Switzerland, goes on holiday to India and writes an anonymous blog post about issues in Russia – hosted by a Canadian company, yet viewed by people in a hundred countries worldwide using an American search engine. At the click of a button, a complex legal situation has been created which would take years to unpick. It's hardly surprising, therefore, that Western governments seek to regulate big tech by placing responsibility in the hands of the companies themselves, creating legal obligations to, for example, remove illegal content.
Yet paradoxically, attempts at legislating frequently increase the practical power of the big technology companies. By creating a blanket requirement for services such as YouTube to remove content that is protected by copyright, the legislation forces platforms to remove content based on an algorithm. It is not government but the tech companies that are empowered and required to take decisions affecting people's lives.
They act with caution, gold-plating the legislation: not wanting to fall foul of the law for failing to take down illegal content, they err on the side of removing too much rather than too little. This creates new questions of citizens' rights: how, for example, can someone reasonably appeal against the incorrect application of an algorithm when it accidentally removes permitted content, as will inevitably happen from time to time?
Google's offices in downtown Manhattan, New York City, October 20, 2020. /Getty
In truth, we are barely scratching the surface of the issues here. Stare at them for too long and you'll likely end up with a headache. Legislators who grapple with the thorny question of "fake news" run the risk of making big tech companies the de facto arbiters of whether content is accurate or inaccurate. Whether it is the ongoing feud between Twitter and Donald Trump, or the recent Facebook announcement accusing people connected to the French military of running troll accounts attempting to influence opinion in the Central African Republic and Mali, big tech has shown that it is able to stand up to government.
There will be many people who see this as a good thing, at least at first, countering the narratives created by political spin and providing a check and balance on the actions of governments in the digital sphere. But then comes the pushback: where is the objective standard against which the actions of big tech can be scrutinized? How can we know that the companies are acting fairly and impartially, and that their accusations against governments will always be accurate and unbiased? There is no easy answer to these questions.
As the European Union has gradually involved itself more and more in regulating large technology companies, generally based in the United States, a new dynamic has emerged. There has been tension at times, but the U.S. government – together with eleven states – has launched its own antitrust action against Google. It's easy to see why there are concerns about effective monopolies. Once a tech giant has been established, it's already a huge hurdle for a competitor to enter the market. Then, as in the case of Facebook's acquisitions of WhatsApp and Instagram, competitors are likely to be purchased before they become a threat.
For now, there seems to be some consensus amongst Western governments. The United Kingdom has been attempting to pass an "Online Harms Bill" since April 2019 to force the removal of illegal and morally objectionable content (such as content promoting suicide); the European Union's current proposals include the power to temporarily ban a platform altogether, whilst Japan, Canada and Australia are all proposing their own legislation.
Day by day, the issues are growing. Time will tell whether the European Union's proposed Digital Services Act and Digital Markets Act survive in their current form. It will likely be at least 2023 before they come into force, by which time the world of technology will have moved on again. The nature of the relationship between government and big tech is likely to be one of the defining features of the 2020s. It is still far from clear to what extent any country has truly grasped the scale of the issues that have been raised: the tug-of-war between government and big tech is only just beginning.
(If you want to contribute and have specific expertise, please contact us at opinions@cgtn.com.)