
AI vs. the Human Brain

The human neural net is a logical data processor unparalleled in history. The human brain remains, by and large, the best information synthesis machine ever created. And that will not change in our lifetimes. What will change is how quickly human beings — innately lazy beings — will be willing to let it atrophy.

What is AI?

Artificial intelligence seeks to create systems that mimic the ability of human beings to solve complex problems without following a fixed, predetermined path. In other words, a typical computer processor may execute hundreds of commands in parallel to arrive at a decision that causes a functional outcome, such as adjusting a configuration in an energy system to redirect energy to a different locale, but each of those commands (and all of their subcommands) follows a sequence that was preset by the coder. Wherever one of those commands reaches a decision point, the coder has already made a specific decision and coded the criteria to ensure that outcome, based on the state of the machine as shaped by the other commands.
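
To make the contrast concrete, here is a minimal sketch in Python. The energy-system scenario, the threshold, and the training data are invented for illustration, and the second half assumes the scikit-learn library.

```python
# Conventional program: every decision point is fixed in advance by the coder.
def redirect_energy(load_pct: float) -> str:
    # The 90% threshold and both outcomes were chosen by a human.
    if load_pct > 90.0:
        return "redirect to backup grid"
    return "no action"

# Learned system: the decision boundary is inferred from data, not hand-coded.
from sklearn.linear_model import LogisticRegression

X = [[60.0], [85.0], [88.0], [92.0], [97.0], [99.0]]  # observed load (%)
y = [0, 0, 0, 1, 1, 1]                                # past operator actions
model = LogisticRegression().fit(X, y)
print(model.predict([[95.0]]))  # decision generalized from examples, not rules
```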

But that is not how our brain works… as far as we know. Instead, every thought we have, every action we take, and every decision we make is arrived at through an almost inconceivable number of chemical reactions that bring together information ranging from our long- and short-term memories of similar experiences to disparate snippets of related and unrelated information that we somehow discern as bearing on the decision, as well as what we project the consequences of that decision will be. But that's still not all. The final decision is affected by our emotions: how we feel at the moment, where we are, how we think others will feel about us after we make that decision, and even how clearly we are thinking based on the health of the microbes in our gut, the amount of sleep we had last night, and the specific nutrients roaming around in our brain from our most recent meal. In other words, nothing else on this earth compares to the computing power of the human brain, and it's unlikely anything ever will. Millions of years of evolution have created the most advanced computer that human civilization will ever see, provided the brain continues to evolve.

However, advanced computing efforts have managed to create unbounded systems that can themselves make decisions based on the universe of information made available to them. They can learn from those decisions and refine their own performance to be more precise in the future.
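
As a rough illustration of what "learning from its own decisions" means, here is a minimal sketch of such a feedback loop in Python; the single adjustable weight, the learning rate, and the 0.7 cutoff that plays the role of ground truth are all invented:

```python
import random

weight = 0.0  # the system's single adjustable "belief"

def decide(x: float) -> int:
    # Current policy: act (1) when the weighted signal crosses a threshold.
    return 1 if x * weight > 0.5 else 0

for step in range(1000):
    x = random.uniform(0.0, 1.0)
    outcome = 1 if x > 0.7 else 0   # what the world later reveals was correct
    error = outcome - decide(x)
    weight += 0.1 * error * x       # refine the belief based on the mistake
```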

The Setup

The journey to AI is a long one, beginning with Alan Turing in the 1950s, who proposed the Turing test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human being. John McCarthy then created LISP, a programming language that became a staple of AI research. Stanford became a hotbed of AI research in the 1970s, and the Association for the Advancement of Artificial Intelligence (AAAI) was born.

But AI was costly back then, and few individuals or institutions had the means to fund the creation of, and experimentation with, "expert systems" until the late 1990s and early 2000s, when practical commercial applications of narrow AI, such as natural speech recognition and the Roomba self-navigating vacuum, began to gain a foothold in the collective human psyche.

IBM, which had proven in 1997 that a machine (Deep Blue) could defeat the reigning world chess champion, resuscitated its AI program and in 2011 unveiled Watson, a natural-language question-and-answer system. Continuing its affinity for games, IBM pitted Watson against two former champions and won a televised match of Jeopardy!. The victory proved the potential power of artificial intelligence systems and revealed opportunities for their use in advancing healthcare research and accelerating decision-making in finance and customer service.

The Attraction

By 2012, there was no doubt that machines could solve highly complex mathematical problems faster than any human being. And where finance is concerned, that is hugely attractive. Milliseconds on a trade can mean millions in profits for you and none for your competitor. Before wireless networking, trading companies competed for building space as close as possible to the New York Stock Exchange because a shorter cable run often made the difference. A firm using AI need not have humans involved at all to make fast, accurate trading decisions. Where healthcare and research were concerned, AI could synthesize vast amounts of obscure but valuable clinical research, correlating disparate lab results to reveal potentially lucrative new drugs and treatments. And the vast data collected by companies like Facebook and Google (the list goes on) provided customer information that was immediately valuable for targeting product and service advertising. A whole new industry grew up to exploit the aggregation and resale of customer data.

The Risk

Well, just like human beings, AI systems make mistakes. Human brains evolve when they are confronted with challenges that compel the creation of new neural pathways. The same is true of AI systems: they need to be fed new information to learn and evolve. And that's a problem. Feed a system bad or misleading information deliberately, or information that is ambiguous, wrong, or incomplete inadvertently, and you will get bad results.
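
A small demonstration of that principle: train the same model twice, once on clean labels and once with a portion of the labels deliberately corrupted, and compare accuracy. The dataset is synthetic and the 30% noise rate is arbitrary; the sketch assumes scikit-learn and NumPy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
noisy_y = y_tr.copy()
flip = rng.random(len(noisy_y)) < 0.30   # deliberately poison 30% of labels
noisy_y[flip] = 1 - noisy_y[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, noisy_y)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```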

Worse yet, those AI mistakes can be exploited for nefarious purposes. AI package hallucination was discovered in the spring of 2023, when the Vulcan Cyber research team reported a new supply-chain attack technique that uniquely leveraged AI, but in a way that can affect any company, even one that doesn't use AI. Their research alleged that roughly 30% of coding-related AI queries yielded recommendations for non-existent open-source packages, seemingly because enough people had speculated that such packages might or should exist. Vulcan asserted that a savvy malicious actor who noticed such a mistake could publish malware under that hypothetical name to a trusted public package registry (such as PyPI or npm). Subsequent engineers asking AI systems like ChatGPT about the same topic would then receive the same recommendation, but now the formerly non-existent package really does exist in the wild. Developers would be tricked into thinking the package is legitimate, download it, and confidently embed it into their code. This poses a real-world risk to companies whether they leverage AI or not.
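
One practical countermeasure is to vet any AI-recommended package before installing it. Below is a minimal sketch that checks a name against the public PyPI JSON API and flags packages that don't exist or were published only recently; the heuristics and the 90-day threshold are illustrative assumptions, not a vetted standard.

```python
import sys
from datetime import datetime, timezone

import requests  # third-party HTTP client: pip install requests

def vet_package(name: str, min_age_days: int = 90) -> bool:
    """Return True if the package looks established enough to trust."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"{name}: not on PyPI -- possibly a hallucinated name")
        return False
    releases = resp.json().get("releases", {})
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: exists but has no released files")
        return False
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        print(f"{name}: first published {age_days} days ago -- be suspicious")
        return False
    return True

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(pkg, "passed vetting" if vet_package(pkg) else "FAILED vetting")
```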

AI can be inconsistent, if not flat-out wrong, when it acquires data from bad or nefarious sources that seek to taint the data pool, or when it simply misses the nuances of human discussion, as in this case. And it can change its mind. Ask ChatGPT the same question next week and you are likely to get a slightly different answer, because, just like a human being, it adjusts its perceptions based on new information.
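
Part of that variability comes from retraining on new data, but another well-known source is that generative models sample each answer from a probability distribution rather than computing one fixed response. The sketch below illustrates the idea with invented candidate answers and probabilities; run it several times and the output changes.

```python
import random

candidate_answers = ["Answer A", "Answer B", "Answer C"]
model_probabilities = [0.55, 0.35, 0.10]   # hypothetical model preferences

# Three "identical" queries can yield different answers purely by sampling.
for trial in range(3):
    print(random.choices(candidate_answers, weights=model_probabilities)[0])
```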

So, big deal. What’s the real risk?

When a consultant supplies a recommendation that doesn't sound correct, the client can question the consultant to understand the rationale behind it. That is not the case with AI systems. You can't interrogate AI the same way you can a human being. Such human-AI "conversation" lacks the nuances of human discussion and loses most of the meaningful information in it. By some estimates, as much as 80% of the information conveyed in human interactions travels through non-verbal cues: posture, eye contact, voice timbre, vocal emphasis, and grammatical choices. These cues can, and should, mean everything to reaching a suitable confidence level in business decisions.

The Challenge

So where should your attention be? How should you think about AI in your business?

Right now, most companies around the world are asking, "How can I use AI to further my business?" But the right question, the very first question we need to ask and answer, is: "How do we intend to respond to AI?" Moving at the speed of business often compels management to take risks to further business objectives and to set policy only after those moves prove fruitful, defining policies to constrain specific risks as an afterthought. But the risk of relying solely on AI to make critical decisions is so huge in certain circumstances (safety at a nuclear energy plant, for instance) that for the time being it should be avoided entirely until policy, standards, procedures, and tested controls are in place, and staff are trained. Using AI to make industrial control system decisions where safety is at stake should be off the table until there is some method of validating AI decisions, such as the gate sketched below.
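
One such validation method, shown here as a minimal sketch, is a deterministic, human-authored safety gate that every AI-proposed action must pass before execution. The action names, the temperature field, and the 350 °C limit are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    projected_temp_c: float  # plant state the action is predicted to produce

SAFE_TEMP_LIMIT_C = 350.0  # hard limit set by engineers, not by the AI

def hard_safety_check(action: ProposedAction) -> bool:
    # Deterministic, human-authored rule the AI output can never bypass.
    return action.projected_temp_c <= SAFE_TEMP_LIMIT_C

def execute(action: ProposedAction) -> None:
    if not hard_safety_check(action):
        raise RuntimeError(f"{action.name}: rejected; escalate to an operator")
    print(f"{action.name}: approved")

# An AI-proposed action is only carried out if the gate approves it.
execute(ProposedAction("reduce coolant flow", projected_temp_c=340.0))
```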

You cannot isolate yourself from the tsunami of AI, nor should you; its effective use may very quickly become a competitive differentiator. But your attention should first be on how you will respond to the outputs of AI, based on the nature of your business and your company's risk appetite. If mishandled, AI poses a far greater risk to your business than the opportunity it offers. Be deliberate about how your company chooses to respond to AI. Start with an AI position statement, grow that into a set of principles, and codify those principles into policy that is broadly disseminated across your company. Let the principles sink into the fabric of your company culture. Then build standards and procedures against them, putting in place the controls that will allow you to avoid mishandling AI output. Only then will you stand on a solid foundation, ready to step back, figure out what you want to do with AI, and take full advantage of the business opportunities it presents.

Your future is secured when your business can use, maintain, and improve its technology

Request a free consultation