A Gauntlet for AGI - The Blind Man Strikes Back
I’m laying down a gauntlet to any proponent of the imminent emergence of AGI, i.e. Artificial General Intelligence.

If eyes were stars, I would have more zero-star ratings than the worst Uber driver on National Bad Driving Day. And believe me, that is one big shedload of zeroes.
In other words, in the current lexicon, I have a severe vision impairment. But put more simply, I am entirely blind...
Over the past 50 years, I have been constantly disappointed by technological assistance as my sight followed a relentless journey towards rock bottom. Stunningly, in all that time, the white cane has prevailed as the best navigation aid. If prehistoric archaeologists could distinguish a blind walking aid from a tool for prodding animals, waymarking routes or whacking enemies around the head, it seems likely that we’d have evidence of the first blind man emerging from his cave aided by an as-yet-unwhitened cane.
So why is it so sodding difficult to create a technological solution for a task that sighted people don’t even think of as a task? I will expand on the challenges in a later post, but first, I’m laying down a gauntlet to any proponent of the imminent emergence of AGI, i.e. Artificial General Intelligence.
Here is the challenge: in H. G. Wells’ The Country of the Blind, the sighted hero memorably repeats the ancient proverb ‘In the country of the blind, the one-eyed man is king’. Now, if I, Chris Mairs, could use AI to interact with the world as well as if I had just one eye, rather than none, then I would truly feel like a king. And if AI is in any sense artificial general intelligence, then it surely should be capable of assisting me better than a prehistoric stick, or assisting me as well as a small child, or assisting me better than a well-trained dog.
I don’t need to grade gemstones, detect forged banknotes or even thread a needle and darn a sock. But please, Mr Altman, get me across the car park without falling in the river, and get me through the door to the shop – and preferably the right shop, so I don’t try to buy a bag of nails in a women’s underwear emporium.
The Open Eyed Man will be a regular blog commenting on the emergent capabilities and ongoing shortcomings of AI. As a blind user of the technology, I hope that my insights will be refreshingly more pragmatic than either the optimism of the tech bro camp or the pessimism of the existential, cultural, societal doomers. The Open Eyed Man will not be too serious or technical and I’m afraid it will often be irreverent or a bit smutty. Such is my way.
In a subsequent post, I will refine the One-Eyed gauntlet that I am hereby laying down. I really and truly hope to metaphorically see the duel won by AGI in my lifetime, and spend my dotage in kingly bliss.
Short term, I am working with two early-stage start-ups, building solutions for specific unsighted challenges. These start-ups are respectively leveraging the two underpinning reasons why I am so optimistic about the future of assistive technology: the emergence of usable "voice-first" conversational AI, and the emergence of low-cost, AI-powered robots.
But before getting into more detail on those two tracks, I’m already using ChatGPT to help me write this blog and post about it on X.
Creating whimsical imagery to visually augment a blog has never made it anywhere near the top of my wish list for assistive technology. But as I can now do this with ChatGPT, I thought I’d better follow all the guidance that adding visual imagery is A Good Thing. So I’m afraid that until enough people tell me to refrain from offending their aesthetic sensibilities, each post will contain an image styled like the one above with some vague relevance to the text of the particular post.
These images are mostly eye broccoli — sort of like eye candy although less appealing. But more seriously, there is a Narration button near the top of each post, providing an audio rendition of the text, scarily narrated in an AI clone of my own distinctly not mellifluous voice. If you'd like to judge the veracity of the narration clone above, you can check out this short comparison clip of my actual voice. I will have much to say in coming posts on the opportunities, challenges and risks afforded by today's ultra-realistic synthetic voices, very particularly when used in interactive dialogues.
Finally, I'm running a live session this week on the theme of this blog, including some blindfolded practical exercises. So I'm looking forward to reporting a couple of amusing mishaps, for a change not from a first-person perspective.