In his book The Master Algorithm, Professor Pedro Domingos identifies five main “tribes,” or approaches, within machine learning. Each tribe has its own strengths, weaknesses, and resulting areas of application.
Symbolists/rule-based learners:
Here the “smart” logic is hard-coded, as in an expert system, where the rules are known and relatively simple to implement. This approach was the best practice of the 1970s.
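The rule-based idea can be sketched in a few lines. This toy forward-chaining system is only an illustration; the rules and facts are hypothetical, not drawn from any real expert system:

```python
# A toy expert system: a human expert hand-codes the rules, and
# conclusions follow mechanically from the known facts.
RULES = [
    ({"has_fever", "has_cough"}, "may_have_flu"),        # hypothetical rules
    ({"may_have_flu", "is_high_risk"}, "see_doctor"),
]

def infer(facts):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "is_high_risk"}))
```

Note that the intelligence lives entirely in the hand-written rules, which is both the strength (transparency) and the weakness (brittleness) of this tribe.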
Evolutionaries/genetic programmers:
Genetic algorithms were first proposed by John Holland at the University of Michigan in the 1960s; useful implementations became popular in the 1990s. Here, different candidate models evolve through a process of survival of the fittest.
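The survival-of-the-fittest loop can be sketched briefly. This toy genetic algorithm (the fitness goal and parameters are illustrative, not from Holland's work) evolves random bit strings toward all ones:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def fitness(bits):
    return sum(bits)  # toy goal: maximize the number of 1s

# Random initial population of 10-bit "genomes"
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]

for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                  # survival of the fittest
    children = []
    for _ in range(10):
        a, b = random.sample(survivors, 2)
        cut = random.randint(1, 9)        # single-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(10)] ^= 1  # random mutation
        children.append(child)
    pop = survivors + children            # keep the fittest parents

best = max(pop, key=fitness)
print(fitness(best))
```

Because the fittest parents always survive, the best score never decreases from one generation to the next.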
Bayesians/probabilistic learners:
Bayesians use probability theory to determine the likelihood of certain events. The underlying method was first published by Thomas Bayes in 1763 and first computerized in the late 1950s and early 1960s. A modern example is political polling.
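Bayes' rule itself fits in one line. A minimal polling-flavored sketch, with entirely hypothetical numbers: suppose 40% of voters support candidate A, supporters answer a survey 80% of the time, and non-supporters only 50% of the time; what share of respondents support A?

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
def posterior(prior, likelihood, likelihood_alt):
    """Probability of hypothesis H given evidence E (two-hypothesis case)."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical polling numbers: 40% support candidate A; supporters
# respond to the survey 80% of the time, non-supporters 50% of the time.
p = posterior(prior=0.40, likelihood=0.80, likelihood_alt=0.50)
print(round(p, 3))  # share of respondents who support A -> about 0.516
```

The example shows why raw poll numbers mislead: supporters over-respond, so 51.6% of respondents back a candidate whom only 40% of voters support.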
Analogizers/instance-based learners:
Analogizers learn by identifying similarities between new instances and previously encountered ones. Discussed in the 1950s, the approach was first formally published by Thomas Cover and Peter Hart in 1967.
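Cover and Hart's nearest-neighbour idea can be sketched directly: classify a new instance with the label of the most similar instance seen before. The points below are made up for illustration:

```python
import math

# Labelled instances seen before: (x, y, label) -- hypothetical data
KNOWN = [(1, 1, "A"), (1, 2, "A"), (5, 5, "B"), (6, 5, "B")]

def classify(x, y):
    """1-nearest-neighbour: return the label of the closest known point."""
    nearest = min(KNOWN, key=lambda p: math.dist((x, y), (p[0], p[1])))
    return nearest[2]

print(classify(2, 2))  # falls nearest the "A" cluster
print(classify(5, 4))  # falls nearest the "B" cluster
```

There is no training step at all: the stored examples are the model, which is why this family is called instance-based learning.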
Connectionists/neural network learners:
Connectionists, also known as neural network learners, are inspired by the function of the brain’s neural networks. They learn patterns and relationships by constructing interconnected layers of artificial neurons. Deep learning, a subset of connectionist methods, involves training neural networks with many hidden layers to handle complex tasks. As an approach, it was first envisioned by Walter Pitts and Warren McCulloch in 1943. Over the decades it has been tried, failed, and been forgotten, then retried, failed, and been forgotten again, many times.
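The artificial neuron Pitts and McCulloch envisioned can be sketched in a few lines: it fires when the weighted sum of its inputs crosses a threshold. The weights and threshold below are hand-picked to compute logical AND, purely as an illustration:

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts style neuron: fire (1) if the weighted sum of
    the inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron((a, b), (1, 1), 2))
```

Modern deep learning replaces the hard threshold with smooth activations and learns the weights from data, but the basic unit is recognizably the same.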
The Trivergence
It was over half a century later that the trivergence of:
1) much smarter AI techniques,
2) much faster computer chips (notably GPUs, whose development was funded in part by blockchain mining), and
3) mountains more data (from the IoT, social media, and so on)
started the current explosion in AI capability and resulting applications. Unlike the rule-based approaches, deep learning gathers its intelligence from patterns in the (up to trillions of) data points fed into the algorithm. It is evident that ChatGPT, for example, has read much of the Internet, including the New York Times.
Given that its conclusions are based on massive amounts of data rather than coded logic, those conclusions are somewhat unpredictable. Not surprisingly, given our history of bias, our data, and the AI conclusions drawn from it, will reflect those biases.
Many are calling for greater accountability and transparency. That is easier said than done. Understanding at a high level how neural networks work is relatively simple; understanding how trillions of weighted connections interact to drive a particular conclusion is beyond our comprehension. In simpler words, it seems likely that we cannot control what we do not understand. Historically, fear of change has been driven by many factors, one of the largest being a lack of understanding. Education, in this case, will not alleviate anyone’s fears.
Everyone agrees that AI should be transparent, open, fair, and accountable. But for whom, and to whom? In simple terms, although we can strongly influence the behavior and conclusions of the Trivergence, we cannot fully control it. Greater transparency in the methods and data used to train AI systems is certainly possible. But dissecting the “how” behind the conclusions of complex neural networks is an illusory goal. When an AI conclusion is a shock beyond what its creators envisioned, it is referred to as emergent behavior. Expect more and more such surprises to emerge from AI.
Given that many believe all is fair in love, war, and politics, expect to see AI used to create deepfakes that influence elections, misdirect troops in war, and further balkanize public discourse. Social media companies have learned that there is more money in addictive sites that polarize our views than in actual news. Nearly half of those aged 18 to 29 get their news from social media, whose addictive algorithms reinforce our biases. As some have put it, with narrowcasting, more and more people prefer tribe over truth. The looming 2024 election in the United States will be heavily influenced, if not determined, by filtered, biased news and outright deepfakes spread over social media. With the narrowcasting of news outlets in the United States, misrepresenting what the opposition believes is now standard practice.
There will be winners and losers in the race to develop ever-more-powerful forms of AI. Ultimately, our human species could lose, hoist with our own digital petard, to borrow from Shakespeare. We already see the effects of social media algorithms on young people’s mental health: teen suicide rates have climbed over the last decade. What does the second era mean when technology can mimic and exceed people in creative endeavors and interactions once thought to be uniquely human? There is much promise and peril in the Trivergence, and this book hopes to develop a deeper understanding of the issues.