Introduction
Well, it wouldn’t be a cyber security event if we didn’t mention AI, right? It would almost feel rude not to. AI is the buzzword of the moment, splashed across headlines, boardroom agendas, and tech blogs everywhere. But what does it actually mean in practice? And more importantly, can it protect us, or can it trick us? The short answer: both.
I don’t want to be all doom and gloom, but AI really is reshaping the digital world at speed. With that comes new risks, but also some big opportunities.
What is AI?
Let’s start with the basics. AI, or Artificial Intelligence, is essentially software that can “think” and learn in ways that mimic human decision-making. That could mean analysing huge piles of data in seconds, predicting outcomes, recognising patterns, or even writing a blog (like this one, although I promise this is me typing).
How can AI help us?
AI is showing up everywhere: from suggesting what to watch on Netflix, to mapping the quickest route home, to writing that awkward email you’ve been putting off.
Some of the more everyday uses include:
- Productivity: summarising long documents, drafting emails, or generating ideas.
- Creativity: writing stories, composing music, or even creating artwork.
- Convenience: virtual assistants that set reminders, order your shopping, or manage your calendar.
- Personalisation: tailoring adverts, news feeds, and recommendations to your habits (sometimes a little too well).
It can be a huge time saver and often feels like having an extra pair of hands, but it’s also worth remembering that it’s only as good as the data it’s been trained on.
And of course, the same power that makes it helpful also makes it dangerous when misused.
Where does the data go?
This is the bit people often skip over. Every time we interact with an AI tool, whether that’s ChatGPT, Claude, Copilot, or something else, our data goes somewhere.
If you’re using it personally and have a paid account, it’s worth trying out a little experiment: ask the AI what it “remembers” about you. You might be surprised (or slightly alarmed) at how much context it can pull back.
This doesn’t mean you should never use it, but it does mean you should be mindful. Don’t feed it sensitive or confidential information unless you’re absolutely sure about the privacy settings and retention policies.
Responsible AI Use
A few tips to keep yourself and your organisation safe:
- Check the settings: Most platforms let you switch off “training” on your data. Do that.
- Separate work and play: Keep University data away from personal AI accounts.
- Be sceptical: AI outputs can be convincing but wrong. Always double-check.
- Know the limits: AI doesn’t “understand”; it predicts. There’s a difference.
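That last point is worth seeing in action. Here is a deliberately tiny sketch in plain Python (a toy, nothing to do with any real AI product, and far simpler than how large language models actually work) that shows the core idea: the program predicts the next word purely from patterns in its training text, with no understanding of what any word means.

```python
import random
from collections import defaultdict

# A few sentences of "training data" for our toy predictor.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Count which word tends to follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def predict_next(word):
    """Pick a likely next word purely from observed frequencies."""
    candidates = follows.get(word)
    if not candidates:
        return None  # never seen this word, nothing to predict
    return random.choice(candidates)

# In our training text, "sat" is always followed by "on",
# so the toy model confidently "predicts" it, without knowing
# what sitting is.
print(predict_next("sat"))
```

The predictor will happily produce fluent-looking output from whatever it was fed, which is exactly why “convincing” and “correct” are two different things.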
AI-Enabled Scams
Remember the good old days when phishing emails were littered with spelling mistakes and dodgy grammar? Those days are gone.
Scammers are now using AI to write flawless, convincing emails in any language. They can churn them out at scale, making the attack surface bigger than ever. And it’s not just emails: AI can generate fake social media accounts, impersonate voices, or even create deepfake videos that look terrifyingly real.
If you thought spotting scams was tricky before, buckle up, this is the new normal.
Deepfakes
This deserves a category of its own. AI-generated deepfakes can swap faces in videos, clone voices, and create realistic “evidence” of things that simply didn’t happen. Imagine getting a video call from Senior Management asking you to transfer money urgently, except it isn’t real… That’s the level of sophistication we’re dealing with. Think this stuff only happens in the movies? Have a wee read of this: Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ | CNN
So where does that leave us?
AI isn’t going anywhere. It’s going to keep getting better, faster, and more embedded into daily life. The challenge for us is to use it responsibly: embracing the tools that help, while staying alert to the tricks and traps that come with it.
Key Takeaways as generated by AI…
- AI is everywhere: helpful, powerful, but not magic.
- Use it for productivity, creativity, and convenience, but know where your data is going.
- Don’t feed it confidential or sensitive info unless you’re 100% sure of the settings.
- Scammers love AI too: phishing emails, fake calls, and deepfakes are getting harder to spot.
- Responsible use is the sweet spot: AI can be your friend, but only if you keep your eyes open.
And no, AI didn’t write this whole blog… or did it?
Useful Resources:
Cybersecurity Awareness: AI (LinkedIn Learning). This course gives you a practical, down-to-earth introduction to how AI and cybersecurity intersect, aimed at folks who don’t necessarily live and breathe tech.
TEDx Talk: “Dark Side of AI — How Hackers Use AI & Deepfakes” by Mark T. Hofmann
In this talk, Mark Hofmann dives into the darker side of AI, how cybercriminals are deploying deepfakes, voice cloning, and AI-powered attacks, and what that means for all of us.
This webinar covers how AI is changing the cyber threat landscape and what humans can do about it. It explores how malicious actors are leveraging AI to automate attacks, evade detection, and exploit human vulnerabilities, and examines how AI is being harnessed defensively to detect threats faster, respond more effectively, and build more resilient systems.
First Aid Kit for Cyber Incidents
We all like to think we’d spot a scam, but with AI-powered voices, videos and texts, it’s easier than ever to get caught off guard. This quick guide from GEANT (available here: GEANT Cyber first aid toolkit) shows you the red flags to watch for and the first steps to take if you think you’ve been targeted, whether at work or in your personal life.
