Is tech the universal force for good?

The technology industry has developed in broadly three phases to date: the era of the nerds, the reverence, and now the tech-lash. But first, let’s rewind to an earlier, simpler time.

In the days of cheque books and ZipZap machines (credit card imprinters – does anyone remember those?) tech leaders didn’t get a seat at the top table. To the extent that companies had technology departments, they existed to streamline correspondence, implement efficient accounting systems, and manage administrative tasks such as those in HR. A company might have had a head of technology, but they certainly wouldn’t have been consulted on strategic decisions, nor would they have been invited to play a role in product design or innovation.

The worldwide web changed everything. Within a decade of Tim Berners-Lee’s gift to humanity, the nerds had moved from a position of derision to one of exaltation. Yet, while tech became a strategic differentiator, the governance of technology didn’t shift up the same gear. The high watermark in the rising tide of the nerds’ importance probably came when Mark Zuckerberg was named Time Person of the Year in 2010 – the year the iPad was released and the era of the app was born.

Yet eight short years later, Zuckerberg had been hauled in front of Congress to answer questions about the Cambridge Analytica controversy, in which Facebook was accused of colluding with a startup allegedly paid to manipulate voters, potentially changing the outcome of elections around the world.

The social harms of big tech are clear – but what of the environmental harms?

Although ESG has only been widely written and spoken about in recent years, the discipline began a quarter of a century ago, when some investors started to realise that while taking shortcuts on corporate governance might convey short-term benefits, it was a practice that in the long term was highly destructive of capital value.

Similarly, it was realised that firms that factored in environmental harms and mitigated them outperformed their peers, as did those that had a strong sense of social justice integrated within their operations. This is the origin of so-called materiality assessments – measures of the extent to which an organisation is exposed to risk from a particular factor. Consider a casino operator on the Miami shoreline compared with its equivalent in central Paris. If ocean levels were to rise by half a meter in Florida, it might have some impact on footfall – but at two meters, it would be existential. For the Parisian operator, neither scenario would have much consequence. However, this singular understanding of materiality is problematic – it doesn’t account for the harm caused by the company.

This, then, is the crux of ESG: identifying the external factors (broadly laid out along lines of environment, social, and governance) that need to be layered on top of a company’s financial reports and projections to better model its future performance.
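To make the idea of “layering” concrete, here is a minimal sketch of how a single materiality factor might discount a financial projection. All function names, exposures, and numbers are hypothetical, invented purely to mirror the Miami/Paris casino example above – real materiality models are far richer than this.

```python
# Toy illustration (hypothetical numbers): discount a base valuation by
# each ESG factor's exposure multiplied by that factor's severity.

def adjusted_projection(base_value: float,
                        exposures: dict[str, float],
                        severities: dict[str, float]) -> float:
    """Layer ESG factor risk on top of a base financial projection."""
    discount = sum(exposures[f] * severities.get(f, 0.0) for f in exposures)
    return base_value * max(0.0, 1.0 - discount)

# The Miami operator is heavily exposed to sea-level rise; Paris is not.
severities = {"sea_level_rise": 0.5}
miami = adjusted_projection(100.0, {"sea_level_rise": 0.9}, severities)
paris = adjusted_projection(100.0, {"sea_level_rise": 0.0}, severities)
print(miami, paris)  # the Miami projection is discounted; Paris is untouched
```

The point of the sketch is only the shape of the calculation: the same base financials yield very different modelled values once factor exposure is taken into account.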

What are the ESG risks of digitalisation?

For Facebook (now Meta – a business about the monetisation of metadata), the initial impact of Cambridge Analytica, according to Bloomberg, was 15% in the short term and 58% over a long-term view. Highly material for investors, in other words.

This category of harm applies across all companies embarking on digitalisation – as evidenced by International Distribution Services plc (aka Royal Mail), which was unable to provide any international distribution services over the 2022/23 New Year owing to a cyber-attack; or British Airways, which negotiated down a record GDPR fine for data privacy violations in 2018; or TSB’s £48 million fine for IT system failures (on top of nearly £300 million of losses relating to an outage in 2018). And now, with the advent of artificial intelligence (AI), the EU is poised to introduce a regulation that will treat algorithmic systems such as those used in credit scoring and HR as high-risk. Every company has an HR system that is likely to use algorithms, whether in performance reviews or for hiring. But how misogynist or racist are such algorithms? That’s an ESG risk that your investors and customers will be trying to assess.
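One long-established way to screen a hiring algorithm for the kind of bias investors would ask about is the “four-fifths rule” from US employment-selection guidance: the selection rate for any group should be at least 80% of the highest group’s rate. The sketch below is illustrative only, and the applicant counts are invented.

```python
# Toy illustration of the four-fifths (80%) rule for adverse impact.
# The data below is invented for the example.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def passes_four_fifths(rates: dict[str, float]) -> bool:
    """True if every group's rate is at least 80% of the highest rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}
print(passes_four_fifths(rates))  # False: 0.30 is below 80% of 0.50
```

A failing check like this doesn’t prove an algorithm is discriminatory, but it is exactly the sort of simple, auditable signal that regulators, investors, and customers can demand when assessing this category of ESG risk.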