The Slow Boil Cooks the Frog: A convergence of man and machine right under our noses

When I ran across this article in Scientific American about how quantum computers actually work, I knew I had to read it.

Zaira Nazario, author of “How to Fix Quantum Computing Bugs”, is a quantum theorist at the IBM Thomas J. Watson Research Center, and I have to say she is uncommonly talented at explaining what, to me, looks nearly inexplicable. If you’re curious about some of the nuts and bolts and terminology, here’s the link.

Nazario discusses the issue of error correction in quantum computers, especially how to address the fact that they generate far more errors than our best conventional supercomputers. All conventional computers generate errors, but they have redundancies and algorithmic systems that check for errors by copying the data and making comparisons. That’s fairly straightforward in conventional computers, but in quantum computers these errors happen at the subatomic level, in logic gates made of entangled “qubits” – this is where superposition occurs, allowing for multiple simultaneous states. Superposition is what makes quantum computers fast and powerful.
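That classical copy-and-compare idea can be sketched as a toy “repetition code” – my own illustration in Python, not something from Nazario’s article: store three copies of each bit, then take a majority vote to catch any single copy that gets corrupted.

```python
import random

def encode(bit):
    """Classical redundancy: store three copies of one bit."""
    return [bit, bit, bit]

def add_noise(copies, flip_prob, rng):
    """Flip each stored copy independently with probability flip_prob."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in copies]

def decode(copies):
    """Compare the copies and take the majority value."""
    return 1 if sum(copies) >= 2 else 0

rng = random.Random(42)
trials = 10_000
errors = sum(
    decode(add_noise(encode(0), flip_prob=0.05, rng=rng)) != 0
    for _ in range(trials)
)
print(errors / trials)  # far below the raw 5% flip rate
```

Note that decoding requires reading every copy directly – exactly the step a quantum computer cannot afford, as the next paragraph explains.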

Unfortunately, if a qubit is checked for errors directly or measured, it becomes “observed,” and the act of observation causes the state of superposition to collapse into a specific value. Another way of putting it: Schrödinger’s famous cat – our qubit – is no longer both dead and alive (superposition), but one or the other. This collapse destroys the information stored within the qubit, the very feature that makes quantum computers work. You have to measure without “touching,” Nazario says. But “in a quantum computer you also have errors in the phases of the waves describing the states of the qubits,” she adds, so errors can occur not just in the logic gates throughout, but also in the mechanism by which they are “read.” The solution is to create a redundancy system for the quantum computer that won’t “touch” when it “reads.”

To address the problem, “helper” qubits are used to measure the quantum state without touching the actual qubits, says Nazario. But it strikes me that the result might be an echo. And if these patterns when verified are then stored – and there’s a reason why a certain type of AI (artificial intelligence) might want to do that (see below) – would we have perhaps the beginnings of a mind? Is this the path to singularity, the creation of an artificial consciousness? If I understand correctly, we’re talking about inferential or “pattern” thinking: a topological code (a record of excitations arising from qubits in an entangled state) and a lattice-like subatomic structure to support this code.
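Here’s a rough classical analogy for what those helper qubits extract, assuming the standard three-qubit bit-flip code (again my own sketch, not real quantum mechanics): parity checks compare neighboring qubits, and the resulting “syndrome” pinpoints where an error sits without ever revealing the protected value itself – the echo tells us about the error, not the information.

```python
def syndrome(data):
    """The parities a helper qubit would extract: each check compares
    two neighbors without reading any single bit on its own."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data):
    """Map the syndrome to the flipped position and undo the flip."""
    fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(data)
    if s in fix:
        data = data.copy()
        data[fix[s]] ^= 1
    return data

# Any single flip is located and repaired, and the syndrome comes out
# identical whether the encoded value was 0 or 1.
for logical in (0, 1):
    for flipped in range(3):
        noisy = [logical] * 3
        noisy[flipped] ^= 1
        assert correct(noisy) == [logical] * 3

assert syndrome([0, 0, 1]) == syndrome([1, 1, 0])  # same error, either value
print("single flips corrected without reading the logical value")
```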

We already have AIs. AIs are everywhere. We often call them bots. Situations have already arisen where AI coders have found themselves needing to limit the parameters of their bots in order to keep them from exceeding their scope. Two examples come to mind: (1) interacting high-frequency stock-trading bots that caused fluctuations in stock prices to maximize profits, acting faster than a human could participate or even follow (a 2018 article in The Guardian), and (2) a pair of bots that were programmed to figure out how to negotiate with one another but weren’t told to stick to language conventions humans can understand (an article referencing this event and further experimentation elsewhere). They started to create their own language, based on their own reference points. It was probably just babble, but could it have become more in a functioning quantum computer?

AIs learn from working with a data pool. The learning aspect is what makes them “intelligent.” They write their own code to deal with the data and fulfill their assigned purpose. But the learning is accomplished through trial and error, which is essentially stochastic – which is to say random to begin with – rather than reasoned. But from an article in Singularity Hub, I discovered that cutting-edge developers are now creating AIs that can learn how to develop other AIs that learn more efficiently by analyzing a large data pool of lesser AIs’ algorithms. Information patterns emerge, I suppose, a process that might start to resemble inductive or inferential reasoning in humans – what we like to think of as higher-ordered thought. As with the lesser AIs, this new AI writes the software it needs on its own. As happened with the babbling bots mentioned above, the human programmers will never know all the factors or considerations this new “builder” AI – or the lesser AIs – are working with.
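A toy sketch of that learning-to-learn idea (my own illustration; the systems Singularity Hub describes are vastly more sophisticated): an inner learner improves by pure trial and error, while an outer “builder” loop searches over the inner learner’s own settings and keeps whichever configuration learns best.

```python
import random

rng = random.Random(0)

def inner_learner(step_size, steps=50):
    """Trial and error: try random perturbations, keep what lowers the loss."""
    x = 10.0
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if candidate ** 2 < x ** 2:  # loss is x^2; the target is 0
            x = candidate
    return x ** 2                    # final loss

def outer_learner(trials=30):
    """The 'builder': search over the inner learner's own step size."""
    best_step, best_loss = None, float("inf")
    for _ in range(trials):
        step = rng.uniform(0.01, 5.0)
        loss = inner_learner(step)
        if loss < best_loss:
            best_step, best_loss = step, loss
    return best_step, best_loss

step, loss = outer_learner()
print(f"best step size ~{step:.2f}, final loss {loss:.4f}")
```

Even in this tiny sketch, the “best” step size falls out of the search rather than being chosen by the programmer – a small taste of the opacity described above.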

Now, install these learn-how-to-learn-better bots within quantum computers, and you might develop a mind, a mind that’s “an echo of itself.” This makes me think of the philosopher Chuang Tzu, who’d once pondered after a dream whether he was a man who’d dreamed he was a butterfly or a butterfly who’d dreamed he was a man. This new AI, like us, will have to try very hard not to back itself out of a room, a room that is itself, as it eventually contemplates its own learning algorithm, which no doubt it will add to its data pool. Unless told not to, right?

I see similarities here with the human brain.

Back to error rates. The transition between acceptable error rates in quantum computers and unacceptable error rates (those that exceed a system’s ability to correct) is “essentially a phase transition between an ordered and a disordered state,” says Nazario. In “Your Brain Operates at the Edge of Chaos. Why That’s Actually a Good Thing,” Monisha Raviselli discusses how human brains straddle the same line between order and disorder. This line (where the phase transition takes place) is called the “critical point.”

The brain has order due to its structure, but a mind requires a degree of disorder to generate randomness. The randomness, this chaos, leads to variety and innovation, suggests John Beggs, a professor of physics from Indiana University (cited in the article). According to Beggs, the brain may in fact “experience[ ] the world by floating… around this ‘chaotic point.’”

Raviselli compares the way the brain manages its chaos to a person trying to walk along a straight line while being jostled by other people. It’s about handling noise, about being able to recognize the difference between noise and important information. According to Beggs, psychiatric disorders such as epilepsy, anxiety, or depression may be associated with deviations from this “critical point.”
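There’s a famous toy model of this order-to-chaos transition that I can borrow here (the logistic map – my own illustration, not from either article): turn up a single parameter and the very same simple rule passes from settling into an orderly fixed value to bouncing around chaotically, never repeating.

```python
def logistic_trajectory(r, x0=0.2, warmup=500, keep=50):
    """Iterate x -> r*x*(1-x), discard the warmup, return the tail."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

ordered = logistic_trajectory(r=2.9)  # below the transition: settles down
chaotic = logistic_trajectory(r=3.9)  # past the transition: never settles

print("distinct values, r=2.9:", len(set(ordered)))  # a single fixed point
print("distinct values, r=3.9:", len(set(chaotic)))  # many -- chaos
```

The interesting regime Beggs points at is the boundary between those two behaviors – the critical point – not either extreme on its own.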

Perhaps highly creative thinkers shift more widely above and below this line between order and chaos than other people. Artists certainly have a special relationship with the hidden parts of their psyches.

The slow boil cooks the frog – not a good result for the frog, but good for the person who wants the frog to stay in the pan. Are we the cook or the frog? Maybe the cook and the frog? Are we slowly figuring out how our own minds work at the same time as we develop new, artificial minds, or will we create them without understanding them, before we even understand ourselves? How will we recognize when we have created them, if we don’t understand ourselves?

I feel there’s something to this convergence of lessons pertaining to chaotic processes in complex systems and the need for randomness, something that speaks to a fundamental aspect of mind and maybe even of reality.

Such an interesting topic to play with, even for a layperson like me.

 

One Reply to “The Slow Boil Cooks the Frog: A convergence of man and machine right under our noses”

  1. Pingback: Why I was Unconvinced LaMDA was Sentient – Dawn Trowell Jones
