Here's the real problem for tech companies trying to fix the 'fake news' crisis

The Congressional spotlight now shining on the Silicon Valley behemoths exposes one of the great weaknesses of Internet firms over the past two decades, one that has received little attention.
Contrary to popular belief, technology platform companies, including Facebook, Google, Twitter and hundreds of their peers, do not know, nor will they ever know, all of the myriad ways their platforms will be used by millions of users. These platforms leverage new and creative technologies to enable many activities and capabilities. What the companies fail to anticipate are all of the ways that users create, maintain and deliver old and new kinds of content on those platforms.
Fake news, graphic videos, 140-character messages fueling terror networks, and banking apps on hand-held devices are recent manifestations of a connected society enabled by technology platforms, and almost no one foresaw them except in the broadest terms. We do not know how people and organizations will use existing platforms, nor can we project what new technologies will become available in a few years, or the innovative uses they will be put to. For example, augmented and virtual reality technologies are becoming readily available, yet do we really know how we will use them?
There is confusion over how to classify these platform companies. Are they technology companies? Media companies? Content providers? Adding to the confusion is the fact that the platform companies have multiple sides. Typically, one side is responsible for maintaining the platform and adding functionality to it, while the other sides are its users, who put the platform to work in their own separate and creative ways.
Using the outdated categorization of industrial sectors, these platform companies are best described as media companies, even if the description is not perfect. The recent comment by Facebook COO Sheryl Sandberg that Facebook is not a media company because it does not hire journalists is like saying McDonald's is not a restaurant because it does not hire waiters and waitresses. She is simply trying to avoid the additional regulatory scrutiny that comes with being a traditional media company. Facebook and many other Internet firms truly are media companies.
What should be expected going forward? Two things are clear. First, and most importantly, there will continue to be creative new uses of these platforms in ways that we cannot predict. Countries and organizations that we view as our adversaries see these platforms as our Achilles' heel. Given their successes to date, they will continue to evolve their use of the platforms to destabilize democracies and nonprofit organizations worldwide. The old attack vectors of fake news and ad buys will be replaced by new ones.
Second, much of this new use will be created and carried out by artificial intelligence software rather than by humans sitting around coming up with ideas. AI has become a useful tool not only because of cheap processing power, but also because of the almost limitless supply of data that can be used to train AI software to perform specific tasks.
Putting controls on these platforms to solve the problems of the past few years is a fool's errand. Our adversaries are creating new and improved capabilities to damage our democratic processes in ways we have not yet considered. The vast amount of information generated on election interference this past year, and our reactions to it, provides our adversaries with copious data to train their AIs. These newly trained AIs can evolve new attack vectors to undermine future democratic processes throughout the world. Because we do not know what these attack vectors are, whatever is put into place now will most likely prove ineffective in the future.
What should we do? The best course is to stop solving last year's problems and start understanding potential new ones and how to mitigate them. Preemptive action should be taken now to forestall new attack vectors. For example, a new vector could be to create a "person" who exists only in cyberspace and donates large sums of money to extremist campaigns and super PACs. Functionally, this is easy to do and hard to detect and track after the fact. Why not work to detect it before the fact? The Congressional hearings make for good theater but little else. At worst, they will force resources to be spent solving problems that will bear little resemblance to the ones our adversaries plan to deploy in the next election cycle.
Finally, as an educator, I would be remiss not to point out two factors contributing to the effectiveness of fake news and other forms of charlatanry. The first is a lack of understanding on the part of the electorate of our democratic processes and their historical origins. Those processes evolved over many centuries, and our Founding Fathers were well steeped in that history as they created the Constitution. The second is that, at many levels of society, there is a lack of the thoughtful and substantive discussion of opposing viewpoints needed to understand the complex problems brought about by the use of these platforms. Collaborative development of solutions to these problems, whether at the local or global level, requires more than 140 characters and photo-sharing skills.
Commentary by Timothy Carone, an associate teaching professor in the Department of IT, Analytics, and Operations in the University of Notre Dame’s Mendoza College of Business.