Gödel, Turing – Fixing AI Flaws and Frauds – A Challenge to AI Solutions Providers for Transparency

By Thomas B. Cross, CEO, Techtionary & SocialStreamingTV

I really hate AI. Not because of what it can be, but because of the fraud of companies that “say” they have AI and really don’t. When I ask these companies, which blast AI all over their websites or use a .AI domain name, what I get back is something like this: we cannot share our AI because of proprietary IP (intellectual property) rights; our AI is a company secret and a competitive advantage; or our AI is embedded in our technology and cannot be shown. All of this is like a food, fast-food, beverage, car, or any other company not wanting to share its product “ingredients” while knowing full well the product may harm the buyer. Maybe it is time for some standards, some disclosure, full disclosures, or even some Consumer Reports-style analysis and reporting on these products.

However, that isn’t the only point of this short story; I also want to explore and try to define what AI really is to begin with. This is where the story gets really complicated really fast. To make it easy, let’s say we wanted an AI bread toaster. What would that be like? What would be enough to say that it has AI? And is it possible to have something like this at all? Toasters are really simple. You put bread in, and it toasts it until it is done. Yet what is bread? Would an AI toaster analyze the bread and adjust for thickness, as with a bagel? Or should an AI toaster grow the grain, get the yeast if desired, and then make the bread the way “you like” before it toasts it? Or should this AI toaster be able to make a croissant or pizza dough, or be a bread-making machine as well? Historically, making and baking bread may be among the first things that humans did, so this shouldn’t be too hard for AI to do. Or would it?

Today companies toss around the term AI as if they are curing cancer, fixing climate change, forecasting tornadoes faster, or solving thousands of other really critical problems. This is simply not true. They are simply building computer algorithms, often little more than “rules of thumb,” slapping a label on them, and calling it AI. These are often single-purpose applications. They can’t “walk and chew gum” at the same time. Even advanced autonomous self-driving systems can’t get out of the vehicle to go climb a mountain or bake bread.
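
To make that concrete, here is a minimal, purely hypothetical sketch of what often sits behind an “AI-powered” label: a handful of hand-written rules of thumb. The bread types, times, and adjustments below are invented for illustration only.

```python
# Hypothetical "AI-powered" toaster controller: in reality, just a lookup
# table plus a few hand-tuned rules of thumb. All values are invented.

BASE_TOAST_SECONDS = {
    "white": 120,
    "whole_wheat": 140,
    "bagel": 180,
}

def toast_time(bread_type: str, thickness_mm: float, darkness: int) -> int:
    """Return a toasting time in seconds from simple hand-written rules."""
    seconds = BASE_TOAST_SECONDS.get(bread_type, 130)  # default for unknown bread
    seconds += int((thickness_mm - 12) * 4)            # thicker slices toast longer
    seconds += (darkness - 3) * 15                     # darkness dial runs 1 to 5
    return max(30, seconds)

if __name__ == "__main__":
    print(toast_time("bagel", thickness_mm=20, darkness=4))  # prints 227
```

Nothing here learns, adapts, or reasons; it is an algorithm, and a simple one at that, yet it is exactly the kind of thing that gets marketed as AI.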

Even Turing recognized this in the 1930s: “by reasoning about the behavior of the universal machine, Turing was able to show that there are well-defined mathematical problems that the universal machine cannot solve. This result was as astounding as Gödel’s incompleteness theorem.” (1) Machines cannot solve many of our problems simply because we might be able to see a problem but often cannot say specifically what the solution is or should be, as with climate change, racism, and the like. We need to step back and look at what AI is. Is it some mathematical formula, algorithm, or system that can only make or bake bread, drive a car, forecast financial trends, fix a pandemic, or address some other critical problem? Or should it be something more, as Gödel and Turing debated, reflected here: “Gödel’s work had shaken belief in the existence of a supreme systematic procedure, and now Turing produced a completely convincing argument that no supreme procedure could exist. If it did exist, then the universal Turing machine could carry it out, since the universal machine can carry out every systematic procedure.” (1) This is complicated because humans work and live in words and behaviors. Machines, if they were living systems (by their own definition, not ours), might easily decide to use some form of communication more reflective of, or appropriate for, their environment.

We do need AI more than ever, but we don’t need to mislead customers and the public as to what it is. Companies promoting their AI should, on their own or by requirement, disclose its uses and warnings: “do not step here,” “your mileage may vary,” and so on. Even if there is no requirement, companies should explain on their own what their AI is really good for and what it is not, and then explain the “thinking” behind the technology to help buyers and others “understand” what they are trying to do. Far from disclosing company secrets or giving away a competitive advantage, this gives everyone a glimpse of the goals and reasoning and builds a community of other “thinkers” who can collaborate even more.

“In a final dramatic flourish, Gödel added that Turing’s result showed that ‘the human mind will never be able to be replaced by a machine.’” (1)
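
To make Turing’s result a bit more concrete, here is an informal sketch of the classic halting-problem argument, written in Python only for readability. The function names are my own, and the “decider” is a stub by necessity, because the whole point of the argument is that no such decider can exist.

```python
# Sketch of Turing's argument that no universal "does it halt?" decider exists.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical decider: True iff the program halts on the given input.
    Turing's argument shows no correct implementation can exist; this is a stub."""
    raise NotImplementedError("no such decider can exist")

def troublemaker(program_source: str) -> None:
    """The 'diagonal' program: do the opposite of whatever halts() predicts."""
    if halts(program_source, program_source):
        while True:      # if halts() says "it halts," loop forever instead
            pass
    # otherwise, halt immediately

# Feed troublemaker its own source code and a contradiction appears either way:
# if halts() answers True, troublemaker loops forever; if it answers False,
# troublemaker halts at once. So halts() cannot be correct on every input.
```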

Possible AI Models

For four decades I have researched, written about, and designed concepts around thinking and machines. Needless to say, I will be long gone before we come close to what I, or Turing, or anyone else, including a child, would ever consider real AI. Even today, the dominant search engine people use, with all its vast resources, cannot explain human behavior beyond just the “facts.” AI needs to be able to “look beyond,” or have intuition. Turing explored the concept of “intuition” in the 1930s: “The activity of intuition consists in making spontaneous judgements which are not the result of conscious trains of reasoning.” (1) For AI to have any meaning to most people, it must look beyond “just the facts” and offer some insights and intuition to help people cope, survive, and thrive. AI needs to be more, much more. I leave you with three concepts I have developed from those four decades of analysis. Before telling you what they are, though no doubt some of you will skip ahead (another very human trait), here is some of the human thinking behind them.

A long time ago, a power company that had a lot of hydroelectric dams had a “dam expert.” He went all over the country inspecting dams for potential failures (not a good thing to have happen). He was about to retire, and the power company thought it might be a good time to use AI to “automate this job.” They hired “knowledge engineers” who knew how to build AI solutions. These knowledge engineers followed the “dam expert” around for months to see for themselves and learn what he did. What they learned is that being a real expert goes far beyond knowing something; it means understanding all the other “elements,” such as weather, upstream farming, forestry, human growth and enterprise, erosion, water flow, even climate change, along with a lot of common sense. Ultimately, they gave up because the “dam expert” knew the condition of the dams intuitively, through what I call lateral or even oblique intelligence, and that could not be quantified into AI technology.

The next example leads to the concept of vertical intelligence. A soup company found that human taste, while complex in itself, is not the same for any two people. Often people “add salt to taste,” and some need a lot. The problem was that while an AI system might help them with one kind of soup, it was useless in making another kind of soup (true story). And without further ado, here are my three concepts, or models, for building AI solutions:

Vertical Intelligence is, in simple terms, making not one kind of soup but all kinds of soup. Creating different kinds of soup is actually much harder than you think. This is most of what machine learning, or so-called AI, is today.

Lateral Intelligence is where one set of skills can be applied to other things. This is where AI gets complicated really fast. As humans, we can get off our bikes, drive a car, or pilot an airplane without even thinking about it, knowing what needs to be done. From a machine’s perspective, however, the skills for each are completely different.

Oblique Intelligence is different. Oblique is somewhere in between vertical and lateral. It combines concepts used by both vertical and lateral intelligence but brings new solutions, which makes this type of intelligence “oblique” in the way it emerges. Telling a joke, making music, touching, evoking emotion, coping with anxiety or stress, and inspiring leadership are forms of intelligence that humans don’t really understand all that well but that provide real humanity. A rough sketch after this list illustrates how the three differ.
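
As a purely illustrative sketch (my own assumption, not a formal taxonomy from the text), here is one way to label example tasks with the three categories above:

```python
from enum import Enum

class Intelligence(Enum):
    VERTICAL = "depth within one domain, e.g. making every kind of soup"
    LATERAL = "carrying skills across domains, e.g. bike to car to airplane"
    OBLIQUE = "in-between, emergent forms, e.g. humor, music, empathy"

# Hypothetical labeling of example tasks, for illustration only.
EXAMPLE_TASKS = {
    "tune one soup recipe to taste": Intelligence.VERTICAL,
    "master soups, stews, and sauces alike": Intelligence.VERTICAL,
    "apply driving judgment to piloting a plane": Intelligence.LATERAL,
    "judge a dam's condition from weather, farming, and erosion cues": Intelligence.LATERAL,
    "tell a joke that defuses a tense meeting": Intelligence.OBLIQUE,
}

for task, kind in EXAMPLE_TASKS.items():
    print(f"{kind.name:8s} {task}")
```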

Each form, whether vertical, lateral, or oblique, offers unique challenges to the developer and user, and benefits to society. The concept here is to build these “crosspoints,” or intersections, to meet specific goals. You may, and hopefully do, have other models for your own AI system or solution. As the same source puts it: “Turing was able to show that there are well-defined mathematical problems that the universal machine cannot solve. This result was as astounding as Gödel’s incompleteness theorem. As we would express it nowadays, Turing had shown that there are well-defined mathematical problems, admitting of a straightforward yes-or-no answer.” (1)

Summary – AI is not easy, nor are the problems we face now or in the future. Machines, or “artificial” intelligence, may be able to solve some of them, though the solutions will likely be shaped less by what the problems really are than by what can be solved with the technology available, not necessarily the technology that is or will be needed (such as quantum computing).

If you want to explore this issue more, consider this course – email cross@gocross.com