Technology is terrifically endearing. The most evident example is the famous autocorrect, with its penchant for littering our on-screen conversations with Freudian slips and sexual terminology, bringing out the snickering fourteen-year-old in us all. Just a slip of the thumb and suddenly you’re punching cats, or buying mum anal beads for her birthday on a trip to whore’s galore in Hanover. Far-fetched as these examples may be, they reside within the paper covers of not one but two published volumes titled Damn You, Autocorrect!, aptly named after the original website, which sees users submit screenshots of their own messaging mishaps by the thousands. Though the publications seem little more than a marketing ploy, it would perhaps be wise to think twice before dismissing the impact of an invention which has now involuntarily authored two ‘best-sellers’.
Autocorrect, the brainchild of Dean Hachamovitch, a former corporate vice-president at Microsoft, was patented in the 1990s and first introduced in Microsoft Word 6.0. Built from the prefix ‘auto’, borrowed directly from the Greek word meaning ‘self’, the compound is a remarkably transparent term, carrying an expectation of mechanical precision. In a world saturated with an insatiable need for speed, we are all the more delighted when it inevitably ‘ducks’ up. Yet although autocorrect has been hailed as a preserver of the comma, it has not escaped disapproval. In 2012, a survey carried out by Mencap, a charity for people with learning disabilities, found that two-thirds of British respondents could not identify the correct spelling of the words ‘separate’, ‘definitely’ or ‘necessary’ without the assistance of spellcheck software. Whilst such technological hand-wringing is an inevitable consequence of our increasingly digital-dependent lifestyle, prompting questions of over-reliance on smartphones, laptops and the Google search engine, the real debate lies behind our understanding of what the word ‘correct’ truly entails.
The concept of correction is automatically associated either with factual, objective accuracy or with political correctness. Autocorrect aims to fulfil both roles, but in 2015 a peculiar glitch in Apple’s iOS 9.2 update pointed to an unavoidable hiccup in the system. When users went to type the word ‘lardass’, ‘Kardashian’ was the immediate suggestion. Dismissed by most as a rather unsubtle gag at the most discussed derriere of the decade, the glitch nevertheless made evident that the system was certainly not foolproof against potentially offensive bias. In an ironic twist, Apple had made users aware of its own susceptibility whilst also capitalising upon popular opinion. It was no longer enough to be factually correct; more importantly, it had to be relevant. The Kardashians were not the only ones to fall prey to such sly jokes – typing ‘Trump’ could bring up an even more flagrant correction: ‘orange-faced bigot’. Autocorrect, it would seem, could also have an opinion.
Nowadays, there is something particularly intimate about the autocorrect experience. It has become lenient, even affectionate towards our shameful preference for typing words we would never say out loud, from ‘Landan’ to ‘bants’ to ‘dayummm’. Returning to its ancient Greek roots, it is useful to remember that ‘auto’ meant not only ‘self’ but also, possessively, ‘one’s own’. We have evolved the very meaning of autocorrect itself, from the demure spellcheck it once was to a sophisticated piece of technology sensitive to the fluctuating, ever-expanding vocabularies we create and use day after day, logging a personalised dictionary in each and every one of our smartphones. Capitalisation, that faux pas of online messaging, has become our most probable enemy, mutating a subdued, nonchalant ‘haha’ into the glaringly unwanted ‘HAHAHA’. But even so, such blunders simply remind us that it, too, is not perfect.
Which begs the question: when did technology become so human?