
Science fiction, science fact


An extended blog post discussing communication, connectivity, and where tech is taking us next.

 

One particular strand of subject matter has recently crash-landed into my field of vision from several different directions.

At the beginning of October, I posted a short blog piece for flex storytellers, musing on how companies communicate with their staff. This included something on the importance of considering the different workplace expectations of those who have grown up with mobile technology, social media and the like.

I have been dipping into dystopian-type fiction lately, with Dave Eggers’ The Circle as my current bedtime reading. A satirical tale of an early-twenty-something’s entry into the world of an all-embracing corporate, it was described by one reviewer as “a deeply disconcerting vision of how real life might soon be chased into hiding by the tyranny of total techno-intrusion.”

Then, I was contacted by a client who is currently undertaking a piece of research that touches very much on artificial intelligence and the concept of technological singularity – the point at which artificial superintelligence advances so far beyond human intelligence that civilisation as we know it simply cannot continue as it is, and a “brave new world” emerges.

With all this coming along at once, and with Elon Musk and others regularly hitting the headlines with dire warnings about the dangers of AI, it’s easy to see why certain aspects of technology and the future are very much on my mind. So, what else to do but write about it?

Communication for the next generation

In the flex blog post ‘Getting the message’, I pointed to the need for companies to take into account the different, and more extensive, communication needs of millennials – those born between 1980 and 1995 – and Generation Z, the first generation of true digital natives.

Millennials are set to make up over 50% of the global workforce by 2020. For this generation, although the internet may not always have been there and many are likely to remember the peculiar tones of the dial-up connection, connectivity is understood, taken for granted. The internet, smartphones, social media – it’s all normal, but there is recognition of a world before.

Generation Z – generally taken to be those born from 1995 onward – were born into a tech-driven world. Raised in the smartphone era, this generation is unlikely to remember a time before social media. In recent research commissioned by Newsbeat, the BBC’s current affairs programme for young people, market research consultancy Ipsos reported that of 1,003 Generation Z-ers, 75% use Facebook, 59% use Instagram and 56% use Snapchat. But perhaps most interesting – from my point of view, at least – are the attitudes this generation have towards social media and its use. “It’s not just for connecting with friends – but also building your career,” one respondent is reported as saying. “It’s shaped my career,” says another, adding, “I don’t have a CV. I use Instagram as a CV.”

Transparency transformed

This is where The Circle comes in.

In Dave Eggers’ dystopia of the not-too-distant future, Mae Holland becomes an employee of The Circle, an all-powerful tech company whose guiding principle is transparency in all things; where deletion is outlawed, sharing is caring, and privacy is theft. Information is power – and to withhold it is an act of selfishness. Everything is networked, connected, monitored, stored and saved; it can be accessed and shared.

Social media plays a huge part in supporting all of this. Data can make the world a better place. In a world where everything is in the public domain, crime disappears, everyone is accountable, diseases can be cured. So why wouldn’t we want complete transparency?

As Mae settles into her role as a customer experience worker, the number of screens on her desk grows to accommodate all the different ways she is expected to participate in Circle life. Simply being an employee is not enough; you must fully immerse yourself in the world of The Circle. The Circle must see that you are “zinging” messages, climbing the rankings of its own social medium in a way that makes our use of Facebook, Twitter, Instagram and the like look tame. The Circle must see you attending the plethora of social events it lays on, featuring the biggest names in music, art, theatre and circus. Why would you have anything better or more important to do?

There are dorms on the company “campus”, available for any employee to take advantage of after late-night organisational partying with other Circlers. Or any time at all, for that matter – many faithful Circlers effectively live at work. Why wouldn’t they? The Circle gives you an active social life. The Circle provides employee healthcare so good that your vital signs are under constant monitoring, and anything so much as hinting at the vaguely sinister can be nipped in the bud.

All of this serves a purpose, of course: the constant gathering of data to serve the greater good – albeit one dominated by a corporate whose tech-tentacles touch every aspect of our lives.

As a dystopia, it’s somewhat different to what we might usually associate with the term. As Margaret Atwood put it in her review of The Circle, it is “a green and pleasant land of a satirical utopia for our times, where […] people keep saying how much they like each other, and the brave new world of virtual sharing and caring breeds monsters.”

And there’s the crux of it. Because, actually, who really wants any organisation – government, corporate or otherwise – to have full access to everything about us? It’s a frightening, anxiety-inducing thought. But it’s one made possible by a tech-driven world where everything can be monitored and connected ad infinitum.

Increasingly connected

In many ways, what is most alarming about the utopian hell of The Circle is that it’s plausible. But rewind a little… How did we get here?

Once upon a time, not so long ago, Filofaxes were the height of sophistication – but they were still, essentially, paper and pen. As one born towards the tail-end of Generation X, I remember this – along with the advent of “mobile” phones so large and heavy they barely qualified for the term.

When I went to university in 1992, it was only during my second year that I even attempted to use email – there had never been any expectation before that I should, and even then, the pressure to do so was fairly minimal. I would have to go to one of the “computer clusters” dotted around campus, kept overly warm by the heat given out by rows of whirring machines and big chunky monitors. I can only think of one friend who actually had their own computer set up in their shared student house. My essays were mostly handwritten. The internet didn’t really feature as a “thing”, even though it is generally acknowledged that by 1995 it had started to become widely available. Nobody had mobile phones. My ten-year-old son finds this all absolutely hilarious – he can’t believe a life without the 21st-century trappings of tech was possible.

Having resisted for some time, I can remember getting a mobile phone at some point in the late 1990s. I wasn’t an early adopter, but the model I ended up with still resembled a small brick and was light years away from today’s smartphones.

Less than a decade later, in 2007, iPhones came onto the mass market. Another ten years on, eMarketer estimates that, in 2017, almost 2.4 billion people will use a smartphone, accounting for over 54% of all mobile phone users worldwide, and that more than a third of the global population will be using smartphones in 2018.
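
As a rough back-of-the-envelope check (assuming a global population of around 7.5 billion in 2017), those figures hang together: 2.4 billion smartphone users as 54% of all mobile phone users implies roughly 4.4 billion people using a mobile phone of some kind (2.4 billion ÷ 0.54 ≈ 4.4 billion), and 2.4 billion out of 7.5 billion is already about 32% of everyone on the planet – so passing the “more than a third” mark in 2018 is only a small step further.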

With this pace of change, it’s hard to imagine where we’ll be in five years’ time, let alone ten or twenty. Back in my student days – and that isn’t so very long ago – I would never have imagined how connected I’d be, or how much a part of my life technology would become. Being without my smartphone and my laptop can be a refreshing release sometimes, but it can also bring about a feeling bordering on dread. Apart from anything else, I can’t work without them – or Wi-Fi. And as for the cloud… Well, let’s just say that Dropbox saved my skin one disastrous day a couple of years back. Thankfully, I’d had the foresight to realise some time ago that keeping everything ever-accessible somewhere “out in the ether”, rather than stored only on my laptop, was the way forward.

The upshot is that these changes have happened over a very short period of time, and the speed of innovation doesn’t seem to be slowing – quite the opposite.

Artificially intelligent

So, what happens next?

AI and automation have been hot topics in recent years, both for flex clients and for others I work with on a freelance basis. They are certainly big themes in the financial and professional services sectors, where the prevailing vision of the future is that the more mundane, mechanical, data-driven tasks will increasingly become automated, with machine learning and AI enabling computers to learn and optimise those tasks as they go. It’s something the flex team have written about for clients, too. The December 2016 issue of UHY Global – the thought leadership magazine flex produces for the UHY International network of accountancy firms – included a number of tech-themed features, including articles discussing the benefits of tech for accountancy and professional services, and the rise and rise of AI.

Will there come a point where machines can do a better job than people; when their intelligence outperforms our own? Where data tasks are concerned, we’re arguably already there. Computers are better and far quicker at handling and “crunching” vast quantities of data, and the margin for human error is all but removed – isn’t it? As machine learning and AI progress even further, even more possibilities open up. So, are we already close to “achieving” singularity? All this, of course, raises the issue of how we, as humans, will interact and work with computers. Here, the idea of connectivity – and the prospect of singularity – really does become something else.

In 2015, Ray Kurzweil, Google’s director of engineering, predicted that humans will become hybrids in the 2030s, that our brains will be able to connect directly to the cloud, and that this connectivity will enable us to increase our intelligence. We will be “hybrid thinkers”, computer-enhanced humans, and it will mark the next phase in human evolution.

All sounding a bit Blade-Runner-esque? Maybe. But Ray Kurzweil has a very good track record in making predictions. Of 147 predictions he made during the 1990s about what would happen in 2009, 87% proved to be correct. We may be much closer to brain-computer interfaces becoming commonplace than most of us would imagine.

Researchers at Brown University in the US have already succeeded in creating a “wireless, implantable, rechargeable, long-term brain-computer interface”. New companies have been founded to work on creating implantable devices for the human brain.

Elon Musk is backing Neuralink, a company dedicated to creating implantable devices for the human brain “with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence”. Kernel aims to “interpret the brain’s complex workings in order to create applications towards cognitive enhancement”.

Facebook has also announced that it has a team working on a brain-computer interface, albeit one without an implant. Theirs will use optical imaging to detect “silent speech” directly from the brain, effectively enabling use of the social network via the power of thought.

It’s coming.

Does it have to be dystopia?

For many of us – maybe most of us – scenarios where implantable brain-computer interface devices are part of the mainstream are hard to imagine. There is much that is unknown, anxiety-inducing, frightening, full-on scary – which, of course, gives rise to all kinds of interesting fiction. Should we worry?

Well, yes, of course we should. Controls need to be put in place somehow. Envisaging the AI-run-amok scenarios that Elon Musk warns about is one thing; no one wants to live through them. Developers, corporates and governments alike have a responsibility to ensure that we can remain safe. But perhaps we should also remember that we already use machine enhancements that connect to or are implanted in our bodies. How many people have had their lives transformed by pacemakers or cochlear implants?

Ray Kurzweil believes that AI will do more good than harm. Speaking to Fortune magazine in 2017, Kurzweil acknowledged that technology can amplify both the creative and destructive impulses of humans, that all the new technologies currently entering our lives carry risk, and that the most powerful of these – biotechnology, nanotechnology and AI – could potentially threaten our existence. However, he also reminded us to look back at our history with tech, arguing that it has helped us more than it has hurt us.

Ultimately, it’s down to us as human beings: if we have the capability to create the almost unimaginable, we must also ensure that we use it well and wisely. In the words of Bryan Johnson, founder of Kernel, “It’s not artificial intelligence I’m worried about, it’s human stupidity.”

There is much food for thought.

 
