Before Your AI Product or Project Fails Again

There is another level of interface beyond the point where the machine controls, manages, and processes content and viewpoints. While human rage is without bound, machines, and what may be called artificial intelligence machines, will intervene to bring results. On this level, the machine operates on other processes and can be called an inference engine. The inference engine is essentially the “assumptions” or “reasons” software that provides much of the machine’s information processing. In human terms, the inference engine can be compared to the involuntary nervous system that keeps the heart beating, digests food, and automatically performs a myriad of other functions without the mind consciously thinking about them. The human capability behind these automatic functions has evolved over millions of years. Duplicating these processes is the challenge undertaken by today’s AI scientists: they are striving to develop systems that perform a myriad of functions intelligently, without constant human support or intervention, by using “inferences” to mimic specific functions.

Mental functions such as problem-solving and decision-making will require advanced software because of the often ill-defined or changing nature of tasks such as creative thinking. Game theory, as in Monopoly(R), chess, or video war simulations, can be a useful tool in developing systems that solve larger problems or manage detailed projects. Inference engines need to be developed within a behavioral framework. One approach is to develop systems that “learn,” or infer, from the game’s user-player and begin to solve problems in “their” way, rather than through the traditional linear process.

Most expert systems are like very young children. The basic building blocks are there: the muscles, the neural networks, and the mental management resources. Viewpoints can be assimilated at a rapid rate, and the mental processes react, and can act, at an ever-increasing rate on their own. Networking inside the brain organizes the child’s words, actions, and emotions into speech and movement. It is a network approach that makes this work. Any human activity depends upon the coordination of many muscles and muscle groups. This is one type of human “knowledge system.”

Each person’s knowledge system, or “personality,” forms the basis for their approach to the world around them. As has been demonstrated, often dramatically, a key person such as a CEO or politician can change the entire scope and purpose of an organization or country. Generally, an expert within a company falls into the same domain as the key player or leader of some specific activity. At one level within a company, an AI designer or engineer works in a research lab to solve complex technical problems. In contrast, the company president manages and coordinates organizational policies and determines long-range planning. The base of knowledge for each of these activities is quite different, as are the rules by which they operate. The inference needs of these users are, therefore, greatly different. An AI designer or scientist might operate in the realm of the laws of nature, whereas the organization’s president operates under human laws. Often humans have a goal, plan, or outcome for an activity beyond the task itself. The president is far more likely to make decisions based on the “politics” of the situation, while another executive decides to update the company’s employment or diversity-opportunity policies and projects the number of workers who will become executives over a five-year period.
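To make the “assumptions or reasons software” idea concrete, here is a minimal sketch of a forward-chaining inference engine in Python. The rule and fact names (such as reasons_from_human_laws) are hypothetical, invented only to echo the lab-scientist versus company-president contrast above; they are not part of any system described in this article.

```python
# Minimal forward-chaining inference engine sketch.
# Rules are (premises, conclusion) pairs; the engine keeps applying rules
# whose premises are all known until no new facts can be inferred.

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # the "inference": a new assumption or reason
                changed = True
    return facts

# Hypothetical rules illustrating different knowledge bases and inference needs
rules = [
    ({"works_in_research_lab"}, "reasons_from_laws_of_nature"),
    ({"sets_long_range_policy"}, "reasons_from_human_laws"),
    ({"reasons_from_human_laws", "decision_pending"}, "weighs_politics_of_situation"),
]

print(infer({"sets_long_range_policy", "decision_pending"}, rules))
# Output includes 'reasons_from_human_laws' and 'weighs_politics_of_situation'
```

The point of the sketch is the behavioral framing: the same engine reaches different conclusions depending on which knowledge base (rules and facts) it is given, which is why the scientist and the president need very different inference support.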

The 600 points of communication interface in the example, with their numerous possible outcomes, do not by themselves yield collaboration or consensus on even a simple task. Before AI can really be AI, subjective gut feelings, middle-management attitudes toward workers, prejudice of all forms, discrimination, and so many other human qualities and frailties must be integrated at the beginning, not added as a “topping” to the project.
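Communication interfaces also grow quadratically with the number of participants, which is part of why consensus is so hard to reach. Whether the 600 figure above counts pairwise channels is an assumption on the reader’s part, but if it does, it corresponds to roughly 35 participants, as the quick check below shows.

```python
# Pairwise communication channels among n participants: n * (n - 1) / 2.
# Treating the article's "600 points" as pairwise channels is an assumption;
# about 35 participants would give roughly 595 channels.

def pairwise_channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 35, 50):
    print(n, "participants ->", pairwise_channels(n), "channels")
# 5 -> 10, 10 -> 45, 35 -> 595, 50 -> 1225
```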

Before you spend hundreds of thousands or millions on your next AI product or project and it fails, you may want to get another review of it, just a little CYA. Email cross@gocross.com
