Notes on Data and Learning — N.9
When AI Becomes a Learning Companion
AI rarely creates in isolation.
What it produces is influenced by:
patterns learned from large amounts of existing material
examples, styles, and conventions already present on the internet
the prompts and constraints given by the user
So, when AI generates something that looks “good,” it is often because it is drawing on patterns of good choices that already exist somewhere else. In that sense, the system is not inventing taste from nothing; it is recombining and surfacing patterns that have proven effective before.
This week’s reflection comes after completing a course on artificial intelligence focused on prompts, emerging tools, and their use in developing and presenting new business ideas to potential investors. A wide range of platforms was investigated to accelerate tasks that would otherwise require significant time: generating presentations, structuring ideas, drafting strategies, and exploring visual styles.
The experience was stimulating. It also made me notice something that does not appear often in discussions about AI.
Artificial intelligence accelerates work, but the growing ecosystem of tools introduces a new cognitive cost: deciding what to learn, what to ignore, and which system to trust.
That is also why the role of the human remains central. The user decides:
which outputs are meaningful
which suggestions are appropriate
which patterns actually fit the context
AI proposes possibilities, but judgment still sits with the person using it. That decision-making process is rarely visible.
The Time Cost of Learning Tools
Many AI tools promise efficiency. In practice, learning how to use them well takes time.
During the course I experimented with several platforms designed to help structure ideas, analyse problems, and support reasoning. Some were genuinely impressive. They could quickly outline arguments, explore alternative approaches, and connect concepts in ways that were useful and sometimes unexpected.
But learning each system required an investment. Understanding its interface, its prompting logic, and its limits took time. And that time competes with something else: the work itself.
This is where the promise of acceleration becomes more complicated.
Every new tool introduces a small learning curve. Individually those curves seem manageable. Collectively they create friction. Deciding whether a tool is worth learning becomes part of the process.
More than once I realised that the most efficient choice was simply to return to tools I already knew well.
When AI Extends Reasoning
What surprised me most during the course was not automation but reasoning.
Some systems were able to analyse a problem and suggest structures or interpretations I had not initially considered. They could organise arguments, propose perspectives, or highlight connections that expanded the way I was thinking about a topic.
This is where AI begins to feel less like a tool and more like a companion.
It does not replace reasoning, but it can extend it. Sometimes the suggestions are obvious. Sometimes they reveal a direction that would not have emerged immediately when working alone.
When a system proposes a structure or solution that you would not have produced independently, the result may genuinely be better. Yet part of the decision-making process has shifted.
The question becomes subtle but important: whose decision is it?
The answer is rarely simple. The output is influenced by both the user’s request and the system’s internal logic. What emerges is a hybrid form of reasoning, part human intention, part machine suggestion.
The Learning Question
This experience leads to a practical question.
If AI accelerates execution, where should we invest our learning time?
Should the effort go into mastering specific tools, knowing that the landscape evolves quickly? Should it focus on understanding the logic of prompting and interaction? Or should the emphasis remain on domain knowledge, the expertise that allows someone to recognise whether a generated answer actually makes sense?
In practice, the answer probably includes all three. But the balance matters.
Tools change rapidly. Interfaces evolve. New systems appear every few months. By contrast, the habits of reasoning that guide interpretation tend to remain stable.
This suggests that the most durable learning investment may not be in the tools themselves, but in the reasoning habits that guide how we interpret what they produce.
A Working Position
When AI becomes a learning companion, the task is not simply to use it efficiently. It is to decide where our own effort should remain.
Some parts of the process can be accelerated or delegated. Generating drafts, exploring structures, testing alternative ways of framing an idea: these are areas where AI can genuinely extend what we are able to do quickly.
But other parts remain ours. Deciding what the question actually is. Interpreting whether a result makes sense. Recognising when an output is persuasive but conceptually weak.
AI can expand the space of possibilities. It can suggest directions that would not have appeared immediately when working alone. Yet the responsibility for judging those directions does not disappear.
Learning alongside AI therefore requires a different kind of discipline. Not simply learning new tools, but deciding which ones are worth learning, when to rely on them, and when to return to one’s own reasoning.
Thank you for reading. This series continues because these questions keep returning.



