How would you like to join the AI Task Force (AI TF)? It’s a question you may be asked by your employer very soon. If you’re an employer, you may well be posing that question to your employees. Research for this edition of The Woz Report took me down plenty of rabbit holes, but it was both fun and a great way to learn more about a complex topic. Sit back and relax, as we dig a bit deeper into the risks and opportunities of AI.
For many, the idea of embracing technology remains intimidating, even as we close in on the first quarter of this century. If you’re like me, someone with an inquisitive mind, you’ll probably volunteer for a role on the AI TF.
Why companies are creating AI task forces
A modern business may have an organisational structure similar to this, with one possible addition at the bottom of the list:
Chief Executive Officer
Chief Financial Officer
Director of People
Director of Communications
Chief Operating Officer
Director of Health, Safety and Wellbeing
Director of Sustainability
Chief Artificial Intelligence Officer?
AI use cases?
AI this, AI that… just ask ChatGPT to do it. You’ve probably heard all the phrases. AI is here to stay, and it’s disrupting everything we know. Let’s pause there for a moment and consider some personal AI use cases:
Learning a language
Letting Spotify pick a playlist
Asking Alexa for the weather
Asking Google to plan a route
Monitoring your health
You get the idea: the options are endless. Now let’s consider a snapshot of AI use cases in the workplace.
Chatbots for 24/7 customer service
Voice authentication
Data visualisation
Invoicing
Building Management (your heating and cooling)
My colleagues and I use AI to measure embodied carbon in the built environment, and we’re also working with clients to understand how they can get the best out of their AI-enabled Building Management System. I’d like to point out that none of those AI functions replaces human input.
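If you’re wondering what “AI-enabled” can mean at its very simplest, here’s a toy Python sketch of my own. It’s a hypothetical illustration, not our actual tooling: a first-pass check that flags unusual energy readings from a building sensor, the kind of thing a Building Management System might surface for a human to review.

```python
# A toy anomaly check on hourly energy readings (kWh) from a building sensor.
# Hypothetical illustration only; real BMS analytics are far more sophisticated.

def flag_anomalies(readings, window=24, tolerance=0.5):
    """Flag readings more than `tolerance` (here 50%) away from the
    rolling mean of the previous `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        mean = sum(readings[i - window:i]) / window
        if abs(readings[i] - mean) > tolerance * mean:
            # Flag for a person to review; the system doesn't act alone.
            flagged.append((i, readings[i]))
    return flagged

# A steady 10 kWh load with one suspicious spike at hour 27.
readings = [10.0] * 30
readings[27] = 42.0
print(flag_anomalies(readings))  # [(27, 42.0)]
```

The point is the last step: the system flags, and a human decides.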
The key difference between using AI at home and at work is governance, and it’s down to the AI TF to reduce the risks. To keep things simple, here are some of the key AI TF headlines:
Ethical and Safe Use. Deployed AI services must comply with the law.
Privacy and Confidentiality. Just think of the GDPR implications.
Data Quality and Control. Is the data collected ethically sourced, and how will the system protect data integrity?
Human Oversight. Like any other machine, AI requires human oversight. Expect companies to offer both a human and an AI solution.
Large Language Models (LLMs) like ChatGPT are prone to AI hallucinations, and other AI systems make mistakes of their own. Here’s an example: “False positives: When working with an AI model, it may identify something as being a threat when it is not. For example, an AI model that is used to detect fraud may flag a transaction as fraudulent when it is not.” A short sketch below shows how easily that can happen.
Risks, concerns and mitigations. Organisations can spend a fortune on cyber security mitigations, and the use of AI is only going to add to the company risk register.
AI can also be employed by malicious actors to orchestrate more sophisticated and believable phishing campaigns or spam messages that exploit human vulnerabilities with greater precision and effectiveness. Cornell University
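To make the false-positive point concrete, here’s a deliberately crude Python sketch of my own (a hypothetical example, not a real fraud system). A rule that flags any transaction far above a customer’s usual spend will inevitably flag legitimate one-off purchases too:

```python
# A deliberately simplistic fraud rule, to show how false positives arise.
# Hypothetical example; real fraud detection uses far richer signals.

def looks_fraudulent(amount, history, multiplier=5.0):
    """Flag any transaction more than `multiplier` times the average spend."""
    average = sum(history) / len(history)
    return amount > multiplier * average

usual_spend = [12.50, 30.00, 8.99, 25.00]  # typical card activity
holiday_booking = 950.00                    # legitimate, but unusual

if looks_fraudulent(holiday_booking, usual_spend):
    print("Flagged as fraud")  # a false positive: the customer really did book a holiday
```

Which is exactly why the Human Oversight headline matters: a person should review the flag before the card is frozen.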
AI in Healthcare
AI has made strides in healthcare. Millions of people wear a smart watch that prompts them to drink, stand up and, in some cases, breathe. Recently, Forbes asked a valid question: “If AI Harms A Patient, Who Gets Sued?” It’s an important question, and the article argues that technology experts are increasingly optimistic that the next generations of AI technology will prove reliable and safe for patients, especially under expert human oversight.
Millions of people are already turning to ChatGPT and specialist therapy chatbots, which offer convenient and inexpensive mental health support. New Scientist
There are synergies between AI in healthcare and the automotive industry, and the common thread is liability. AI is embryonic: systems cannot reason through complex moral dilemmas or make ethically aware, independent judgments. Any misconduct will probably lead to one or more individuals facing a judge, as you might have guessed. It’s for these reasons that I think widespread adoption of autonomous driving on a global scale is still decades away.
Jacob Abernethy et al., writing for Harvard Business Review (HBR), argue that human values should be brought to AI. As early as 1948, Norbert Wiener, the father of cybernetics, wrote about information ethics. According to HBR, Wiener went on to propose an idea on computer ethics in a seminal 1960 Science article, launching an entire academic discipline focused on ensuring that automated tools incorporate the values of their creators.
Whatever form dominance takes, whether political, economic or military, AI will assume a greater role, though human supervision will remain crucial.
One immediate risk in danger of being overlooked is AI’s ability to amplify the spread of false information. New Scientist
Systems like ChatGPT are just tools. They can be manipulated, and they will evolve and become part of everyday life. Just imagine your great-great-grandchildren asking AI to create an entire cartoon box set!
I’ll leave the last word to Steve Wozniak (no relation), co-founder of Apple. In 2023, he gave his thoughts on the risks of AI: “AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are”. I’m having second thoughts about volunteering for the AI TF.
Thank you for subscribing. I hope you enjoyed reading this edition of The Woz Report. I’m pleased to say that ChatGPT did not write this article. On that note, I’m off to ask ChatGPT to explain quantum physics. Wish me luck!
Feel free to leave a tip in my tip jar here. 👇