What's the Future of AI?

And How You Can Get It Today

As I mentioned in my last post, we’re probably pushing the boundaries of what we can get out of the large language model (LLM) approach in AI.

I said it’d take a few more breakthroughs, and I have my bets on one of them.

In a nutshell, neuro-symbolic computing combines statistical (“neuro”) and traditional (“symbolic”) AI approaches.

You may not remember traditional AI approaches, but we’ve been here before. Many of those approaches did not pan out, causing the “AI Winter” of the 1980s.

Many of these methods still survive and are used today in niche problems - operations research tools, theorem proving in mathematics, formal verification of programs, and business rules engines are all examples.

These approaches tend to be labor-intensive compared to their perceived value. But an LLM can greatly reduce that labor - and because the symbolic tools check every result, hallucinations aren’t a problem. You can use an LLM to generate the next step of a proof, then use the proper tools to determine whether it’s correct.

If it’s not, you can feed that back into the proof. If it is, you move forward with the next step.

Real neuro-symbolic systems are more complicated than that and have tighter links between the neuro side and the symbolic side.

You can, however, take advantage of this idea even today. ChatGPT can write simple Python programs, and Python supports Google’s OR-Tools (operations research) library. It also supports writing out actual theorems - though this may be a bit heavy-handed if all you’re trying to do is argue coffee in the office should be free.

To try this out, put your problem - with as many details as required - in the prompt and ask ChatGPT whether there’s an operations research or mathematical programming approach to it. This works best for facts-and-figures problems rather than people problems.

It may recommend an approach. In follow-up questions, ask, “What further questions do you need answered to fully specify the problem?” It may, naturally, get carried away and start generating Python for you; ignore that until you’re sure it has the problem fully specified. Finally, ask it to use OR-Tools or a similar package to solve your problem.

What will this look like in the future? More and more “tools,” as OpenAI calls them. The latest ChatGPT appears able to recognize math problems and hand them to a dedicated calculator. Earlier iterations could already talk to Mathematica’s non-statistical engine, allowing them to solve some fairly abstract problems.

We won’t be seeing Python, at least not as the next step, in my opinion. While these LLMs prefer generating Python for statistical reasons - there are many tutorial blogs in the training set - it’s not a very good language for the “symbolic” part of neuro-symbolic. Python can fail in many ways despite perfect syntax. Instead, we’ll probably see a stepping stone in languages that borrow heavily from logic and functional programming. These languages, in turn, may provide Python transpilers - turning their formal methods into valid, readable Python programs - should humans want to see the output.

What problem would you like a “General” AI to solve? What’s holding back current LLM approaches for you?