Imagining that the machine is human
When I taught computer science, often on the first day of class, once my excited nerdlings had sat themselves down in front of a computer to begin their quest to become the next Bill Gates and conquer the world, I would flick the classroom lights off and on several times.
This was partly to contain the visible excitement of soon-to-be programmers in their happy place and focus their attention on me, but also to put how computers work into perspective and to start my first lesson.
Computers are stupid. They are no smarter than the light switch that I had been flicking on and off to get their attention. At a very basic level, computers operate in binary, ones and zeros, on and off, like a light switch. In fact, mathematically, computers can only count to one. Not very smart. This would usually cause some furrowed brows among my students who couldn’t believe I would be disrespectful to their beloved computers. Of course I would explain further.
The simplest storage unit in a computer is the bit (one or zero), and eight bits make a byte. A byte can represent a number, which is more useful to humans. We can do calculations using bytes, and since math was their first use, we call them “computers.” In class I would introduce my charges to binary and base-2 calculations, and we would go on from there.
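For readers who like to see the idea spelled out, here is a minimal sketch in Python, purely as an illustration; the particular bit pattern is an arbitrary example, not anything from the lesson itself:

# Eight bits -- eight light switches, each on (1) or off (0).
# This particular pattern is just an example.
bits = [0, 1, 0, 0, 0, 0, 0, 1]

value = 0
for bit in bits:
    value = value * 2 + bit   # base 2: shift left, then add the new bit

print(value)       # 65 -- the byte read as an ordinary number
print(bin(value))  # 0b1000001 -- the same byte written in binary

Eight switches, read in base 2, become one number. That is all the machine ever holds.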
Bytes can represent numbers, but the same byte could just as easily represent a character on a keyboard. Other bytes can represent pixels on a screen, and many more together can represent an entire picture. The important thing to remember is that computers still only work with numbers in the background; it is humans who decide what those numbers mean.
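To make that concrete, a short sketch (again in Python, again only an illustration; it assumes the common ASCII and RGB conventions) reads the same number three different ways:

# The machine only stores the number 65; humans decide how to read it.
byte = 65

print(byte)          # 65  -- read as a plain number
print(chr(byte))     # A   -- read as a keyboard character (ASCII)
print((byte, 0, 0))  # (65, 0, 0) -- read as the red channel of a dark-red pixel

Same byte, three meanings. The meaning lives with us, not with the machine.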
Computer use took off with the GUI (graphical user interface), which let users interact with pictures of objects on a screen using a keyboard and a mouse (think Windows 95). Back then this was called the user illusion, because it gave everyone the illusion that they were interacting with a computer in a more human-like way. Before long, with advanced chips, many more leaps in computing power and access to virtually unlimited data, computers could execute very complicated algorithms to produce something even more familiar to us.
Maybe eerily like us.
But it’s still based on layers of illusion. From bit to byte to number to pixel to picture to GUI to touch screen to VR to AI, these layers of illusion are still built with number crunching. And so we go from computers that can barely add two numbers together with the help of dozens of mathematicians to machines that think like us.
But they don’t really think, and that’s the illusion.
Even the CEO of Nvidia, the most valuable publicly traded company in the world (as of July 16) with a dominant role in powering AI infrastructure, wouldn’t use AI to think for him. In a recent interview, founder Jensen Huang said, “I am not asking it (AI) to think for me, I’m asking it to teach me things I don’t know.”
That’s because anyone who understands how computers work knows that AI is just another layer of illusion created by advancing technology. There are some out there who will spout platitudes about what AI is doing, both evangelizing and anthropomorphizing AI into something it is not, something to be revered and perhaps feared.
Because the danger in using AI is not what it can do but what we think it can do. This may lead to humans giving machines agency and power they don’t deserve, especially when some of them can’t even self-regulate their own “thoughts.” Just ask Elon Musk, who had to quickly shut down his Grok AI when it spit out hate-filled screeds.
And that is mainly the point: AI is a machine created by us as a tool. But unlike many tools, this one is a digital doppelganger, and in AI we now have a human illusion. But remember, that is only what we are seeing; we shouldn’t mistake it for what it’s not.
We have always imitated things we know in new technologies, but AI is no more human than a plane is a bird or a submarine is a fish.
For all the bandwagon-hopping by everyone quickly trying to get in on the “inevitable” adoption of AI, even as the Canadian government created a new minister of artificial intelligence and digital innovation, there needs to be a clear understanding of the technology’s purpose as a tool for human advancement, just like any machine humans have created.
Because, as I might say to my students about their beloved computers, it is just a machine, and we can shut it down by turning off the switch.
David Nutbean was a technology instructor at RRC Polytech, a digital literacy administrator for a school division, and a computer science teacher. He writes from his home in Oakville, Manitoba.