The Discourse: AI and the Bigger Picture – 3/3
Written by The Discourse
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” – Dr Ian Malcolm, Jurassic Park
In the previous blogs, we talked about how AI doesn’t directly threaten the creative industry and how it can actually help us. But let’s zoom out and look at the bigger picture.
AI doesn’t exist on its own; it’s going to affect a lot of things, some of which we won’t fully understand for years to come. Think about the internet and social media: few people predicted the giants they have grown into today, shaping behaviours, politics and mental health, and driving destructive levels of consumption. Unlike the scientists in Jurassic Park, we hope that as we continue down this road we can all pause and take a wider look at how this technology will shape our lives.
AI doesn’t exist in a vacuum
It was Hannah Smith from the Green Web Foundation who first inspired our thinking and research on this subject. While everyone around us appeared to be celebrating the possibilities that AI presented, Hannah’s response was ‘turn it off, for now at least!’. We are learning more every day, but with Hannah’s help we’ve identified a number of useful articles and thought leaders, which we have referenced below.
Even though the technology is made to feel easy and simple to use, that simplicity can conceal many problems, especially the exploitation involved in building it. In The Humans in the Machine, the second episode of series 7 of the IRL podcast, People Over Profit, Bridget Todd explores the human cost of the AI tools we are all using. Bridget says that ‘it makes sense that most of us wouldn’t know about data workers, when AI is literally designed to make us think that machines are intelligent. A corporate narrative that is being perpetuated by big tech’.
Content moderation
Sama, formerly known as Samasource, is a training-data company focused on moderating and annotating data for artificial intelligence companies and large language models. While it’s positive that companies like OpenAI (the company behind ChatGPT) recognised early on that content moderation was essential to protect people, the amount of work involved was huge, and so age-old habits of outsourced labour and exploitation ran rife.
In Niamh Rowe’s recent Guardian article, ‘It’s destroyed me completely’, she writes about what some of the people ‘behind the scenes’ have faced: ‘The moderators say they weren’t adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and were offered no or inadequate psychological support. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson.’
The Diary of a CEO podcast also has an incredibly insightful interview with Mo Gawdat, former chief business officer at Google X. In the conversation Mo shares insights from his time working on and developing AI at Google, which are both fascinating and scary. They also discuss how AI will radically change the world by 2025, the societal changes it will create, and the solutions and responses we should be implementing now.
Remember, AI doesn’t exist within a vacuum; it sits within a long chain of people and processes, and using it as a tool brings with it a responsibility to educate ourselves and advocate for ethical practice. There is much more awareness these days of labour exploitation in industries such as fashion, and we make informed decisions about the clothes we buy and the brands we support. Whilst the AI supply chain is still relatively unknown, we need to adopt the same mindset. As designers and as humans, we shouldn’t just accept the status quo of what big corporate companies want us to think.
‘For AI to be genuinely ethical, we need to consider more than who is harmed after AI is deployed, we need to remember the people building it too’ (Bridget Todd).
Copyright and IP
Many artists have raised concerns about plagiarism in AI: algorithms need information to learn from and create with, which begs the question, whose work is being used? AI learns from the data it is fed, and as you read this, AI bots are crawling the web, ignoring opt-out requests and scraping data indiscriminately. This means artists’ original work is being downloaded to train AI models without their consent. It is very clearly a form of plagiarism, and it could very well be happening to you.
Check out websites such as Have I Been Trained, which search popular AI training datasets for your work to help ensure it’s not being stolen. Similarly, there are tools such as Kudurru, which detect and block AI systems that show signs of this activity. Sites in the Kudurru network share information to quickly recognise scrapers, so that together the protected platforms can stop a scraper from retrieving their content and stealing artists’ work.
Sadly, this is an ongoing problem in the industry, and people taking from artists is nothing new; just think, the Mona Lisa has been vandalised or stolen up to five times! That said, it definitely wasn’t a machine doing the stealing back then. Fixing the issue of plagiarism and AI isn’t simple, but the fact that there is a digital footprint offers more transparency and hope for artists. That’s why we urge everyone to make the most of the tools available to safeguard their work.
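If you publish your own work online, one small practical step (a sketch of our own, not something the tools above require) is to check whether your site’s robots.txt asks known AI training crawlers to stay away. The sketch below uses only Python’s standard urllib.robotparser; the crawler names are publicly documented user agents, and https://example.com is a placeholder for your own site.

```python
# A minimal sketch: does a site's robots.txt ask known AI training crawlers not to scrape it?
# Uses only the Python standard library; "https://example.com" is a placeholder URL.
from urllib import robotparser

# Publicly documented AI crawler user agents (OpenAI, Common Crawl, Google AI training)
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

def check_ai_crawler_access(site: str) -> None:
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt
    for agent in AI_CRAWLERS:
        allowed = parser.can_fetch(agent, site)
        status = "ALLOWED to crawl" if allowed else "blocked by robots.txt"
        print(f"{agent}: {status}")

if __name__ == "__main__":
    check_ai_crawler_access("https://example.com")
```

Bear in mind that robots.txt is only a polite request; as noted above, some scrapers ignore opt-outs entirely, which is exactly why tools like Have I Been Trained and Kudurru exist.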
The environmental impacts of AI
There are undoubtedly some benefits which will come from AI in the future (Hannah Smith, Green Web Foundation), but if we want to use AI to crunch data and help develop solutions for the planet, then we need to consider the energy consumption and the carbon footprint it generates. As Sasha Luccioni, thought leader and researcher in ethical and sustainable AI at Hugging Face, says ‘It doesn’t make sense to burn a forest and then use AI to track deforestation.’
Figuring out how much energy AI uses, and the carbon footprint it generates, is not easy. Academic studies estimate that training GPT-3 consumed around 1,287 megawatt hours of electricity, roughly what 120 average US homes use in a year. It also created emissions of more than 550 tons of carbon dioxide, like taking 550 round trips between New York and San Francisco by plane.
Prior to the integration of GPT-4 into ChatGPT (GPT-4, released on March 14, 2023, is OpenAI’s largest language model to date), researchers estimated that ChatGPT would use up around 500ml of water for every 20 questions and corresponding answers. As of February 2023, ChatGPT was estimated to receive around 10 million queries per day, so a rough back-of-envelope calculation looks like this (a quick code sketch follows the list):
- 10 million queries / 20 (questions per 500ml) = 500,000 ‘bottles’ of water
- 500,000 x 500ml = 250 million ml
- 250,000 litres of water per day, and that’s just one platform
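The numbers above are just multiplication and division, but writing them out makes the assumptions explicit. The sketch below reproduces the estimates in Python; the per-query water figure, the daily query count, the GPT-3 training energy and the average US household usage are published estimates or assumptions, not measurements.

```python
# Back-of-envelope reproduction of the estimates quoted above.
# All inputs are published estimates or assumptions, not measurements.

QUERIES_PER_DAY = 10_000_000      # estimated ChatGPT queries per day (Feb 2023)
QUESTIONS_PER_500ML = 20          # ~500ml of water per 20 questions and answers
WATER_PER_BATCH_ML = 500

GPT3_TRAINING_MWH = 1_287         # estimated electricity used to train GPT-3
US_HOME_MWH_PER_YEAR = 10.6       # assumed average annual US household consumption

batches = QUERIES_PER_DAY / QUESTIONS_PER_500ML              # 500,000 "bottles"
water_litres_per_day = batches * WATER_PER_BATCH_ML / 1000   # 250,000 litres

print(f"Estimated water use: {water_litres_per_day:,.0f} litres per day")
print(f"GPT-3 training energy is roughly {GPT3_TRAINING_MWH / US_HOME_MWH_PER_YEAR:,.0f} "
      "average US homes' electricity for a year")
```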
The user-friendly interface of tools such as ChatGPT might make us wonder, “What’s the harm?” However, the real issues often stay hidden from users. Luccioni and other researchers are finding it tough to measure the environmental impact of AI because so little information is made available: companies are simply not transparent about how much energy these services consume.
Whilst we certainly don’t expect anyone to stop using AI entirely, especially after we’ve written a whole article on which tools to use, it’s important to be conscious of the impacts. If you are bringing AI into your working practice, ask where you can reduce your carbon footprint in other areas to compensate. We’ve written a blog discussing web optimisation and digital sustainability, which is a great starting place, but there are many other ways to be more environmentally friendly across your business and personal life; you just need to think about it and encourage others to do the same.
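If you run models or heavy data-crunching jobs yourself, one concrete way to act on this (our suggestion, not something the sources above prescribe) is to measure the footprint of your own workloads. The sketch below uses the open-source CodeCarbon package; run_my_workload is a hypothetical placeholder for whatever job you want to measure.

```python
# A minimal sketch: estimating the emissions of your own workload with CodeCarbon
# (pip install codecarbon). run_my_workload is a placeholder for your own job.
from codecarbon import EmissionsTracker

def run_my_workload() -> None:
    # Placeholder: swap in your own training loop, data pipeline, etc.
    total = sum(i * i for i in range(10_000_000))
    print(f"Workload finished (checksum: {total})")

if __name__ == "__main__":
    tracker = EmissionsTracker(project_name="ai-footprint-check")
    tracker.start()
    try:
        run_my_workload()
    finally:
        emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
        print(f"Estimated emissions: {emissions_kg:.6f} kg CO2e")
```

Even rough numbers like these make it easier to compare runs and decide where a lighter model or less frequent retraining would do.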
In summary
We are building the path as we walk it. Had people known at the start of the industrial revolution what we now know about climate change, we’d like to think they would have taken a different approach and done more to mitigate the negative effects that technology has had on society and the environment.
That being said, there’s no doubt that the world at large is still being driven by profit, not the best interests of people and the planet. So if we are to use this technology, then it’s important that we do so with open eyes, that we advocate for protecting jobs in sectors that it threatens, and that we commit to using and developing AI with ethics, purpose and the environment at the heart of our decision making.
At The Discourse, we enable businesses and charities to present the best version of themselves, whilst helping them to shift the narrative in society and industry through brand, the web and content. Done right, design builds trust, communicates purpose and positions your organisation successfully at every touch point.
If you’re ready to take your business or charity to the next level, we’d love to have a chat to see if we can help. You can contact us here.