Thought #3 - AI Tools Are Everywhere, But Are Teams Ready?

Exploring AI adoption challenges, smarter learning recommendations, and the tools worth your time this month.

Hello lovely humans,

We’re somehow in February already (time flies when you're knocked out by whatever bug is floating around). As always, there's a lot happening in AI this year so far. A little too much to keep up with - no, we’re not going to talk about DeepSeek right now, but keep an eye on our blog…

This month, we're digging into AI adoption challenges (shockingly, throwing tools at people doesn't magically make them confident), sharing an update on DOT-ed’s new course recommendations, and questioning whether "new and improved" AI models always live up to the hype.

AI Adoption Isn’t Just About Access - It’s About Confidence

Companies are investing heavily in AI tools, but adoption is still a major challenge - not because employees don’t have access, but because many don’t feel confident using them.

65% of businesses were using Generative AI in 2024 (nearly double the previous year, according to McKinsey), and we can expect this to continue rising in 2025. But despite this surge, a Gallup report shows that only 6% of employees feel very comfortable using AI in their roles, while a third feel very uncomfortable. Even at a strategic level, Boston Consulting Group found that while many businesses are implementing AI, only 26% have the capabilities to move beyond pilots and generate real value.

I recently wrote about this for Business News Wales, breaking down why AI confidence matters just as much as AI capability. The real challenge isn’t just introducing AI - it’s making sure people know how to use it effectively and feel confident enough to do so. We’ve been exploring different ways to address this, especially how AI can support learning without overwhelming users, and we believe meeting people where they are - with tailored workshops or personalised content - is what’s needed.

Would love to hear how your teams are navigating this shift - what’s working, and what’s not?

DOT-ed Update

Speaking of personalisation - DOT-ed now recommends our courses (or your amazing company-created content) when you chat to Dotly.

A screenshot of a chat with Dotly, showing the user mentioning learning about AI and three AI courses recommended.

We’re using vector embeddings for this (basically taking text and mapping it to a long list of numbers) and returning the course whose embedding has the most in common with the chat.

One issue we’re facing with this simple approach: negative sentiment. If you type "I don’t want to learn about Google Sheets", it still recommends Google Sheets. The vector model doesn’t understand sentiment - it doesn’t realise that “don’t want” is negative.
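To see why this happens, here’s a minimal sketch of similarity-based recommendation. It uses a toy bag-of-words "embedding" as a stand-in for a real embedding model, and the course titles are made up - but the failure mode is the same: the negation words barely move the vector, so the most similar course still wins.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical course catalogue for illustration.
courses = ["Intro to Google Sheets", "AI Fundamentals", "Prompt Writing Basics"]

def recommend(message):
    # Return the course whose embedding is most similar to the chat message.
    vec = embed(message)
    return max(courses, key=lambda c: cosine(vec, embed(c)))

print(recommend("I want to learn about Google Sheets"))
print(recommend("I don't want to learn about Google Sheets"))
# Both pick the Google Sheets course - similarity ignores the negation.
```

Both messages share the words "Google Sheets", so both score highest against the same course; nothing in the similarity maths penalises "don't want".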

Don’t worry - we’re testing a few improved methods (including some Generative AI models, and some old-school language models) to get better recommendations to our learners!

AI Myth – Just Because It’s New and "Better" Doesn’t Mean It’s Right for You

There’s a new o3-mini ChatGPT model that’s supposed to be a massive improvement when it comes to reasoning. Naturally, I tried it immediately - and naturally, it wouldn’t do what I wanted.

ChatGPT o3-mini chat, asking for a LinkedIn post without a useful response.

But going back to the older 4o model got me what I wanted with the same prompt.

ChatGPT 4o, providing a LinkedIn post as requested.

What’s actually going on here? It seems 4o uses the memory function (so it remembers what I’ve told it in previous chats about Taught by Humans, and how I talk) but o3-mini doesn’t.

On o3-mini, you can click above the response to see how the AI got the response:

o3-mini’s reasoning explaining it doesn’t have memory.

Does this mean o3-mini isn’t better? No - it means it isn’t well suited to how I use ChatGPT and the use case I’m working on. But for anything that requires deeper reasoning (and that I can provide the needed information for), apparently o3-mini will get better results.

My takeaway - newer doesn’t always mean better (at least, not for your specific use case). It’s good to test, explore, and keep an open mind - but switching just because something is “new and improved” isn’t always the right move.

Always experiment - don’t take improvements as fact.

Updated AI Tools to Try

📌 Microsoft Copilot is now integrated into Word, Excel, and more (on the Enterprise version). It can write formulas (which we’re particularly impressed with). Copilot can also be used in OneDrive to create FAQs and summaries, and to ask questions about a specific document. We aren’t the biggest fans of Microsoft’s AI attempts, but this update is really useful for workplace AI.

📌 Google’s NotebookLM is worth a test. It can summarise any YouTube video, website, or document (only upload things you own that aren’t private or sensitive). The output is impressive, and a useful way to learn - especially if you enjoy Americans getting overly excited about AI on podcasts.

We’re always testing, learning, and figuring out how to make AI education actually useful. If you’ve been experimenting with new AI tools, found an amazing use case or have any tips you want to share - let us know.

Until next time,

Laura – always learning