From Seed to Silicon: The Metaphors We Think With

I was reminded of an interesting idea recently—though I can’t remember where or by whom (another great example of the intellectual sloppiness that writing in a blog affords me). The idea is this: we tend to perceive the world, and ourselves, through the metaphorical lens of the dominant technology of our time. I think that’s exactly right. Just as artists teach us to see the world by guiding us through one-way doors of perception, revealing novel ways of seeing that can’t be unseen, each technological paradigm reshapes how we understand the world and our place in it. This new understanding then embeds itself deeply in our language and culture.

However, the resulting metaphors aren’t linguistic decor. For the same reasons we integrate mental models to cultivate the mind, we soak up metaphor because it compresses complexity and tames new concepts by anchoring them to abstractions we already know how to manipulate. This is, I think, a core mechanism of human sensemaking. As such, whatever technology dominates a given era usually supplies us with its master metaphor. We use it to describe how the world works. We use it to describe how we work.

Our tools become the metaphors we think with.

The Cognitive Effectiveness Stack through Metaphor

Despite its sci-fi roots, I’ve found The Cognitive Effectiveness Stack to deliver substantial explanatory power in our universe, and specifically in my life. While I stand by the utility of the mental model, I also found myself veering off into metaphor when drafting the descriptions of its layers and their necessary-and-sufficient-esque relationships with each other. Those metaphors gave me clarity, and I suspect they could do the same for the typical reader. Originally, I tried embedding them directly into the exposition of the Stack. But in review, I found that consuming both at once was burdensome and undermined the very clarity the metaphors were intended to impart.

So, I’ve cut the metaphors out of the previous post and have dedicated this post to them. This is the metaphorical companion piece (pun intended). I’ve included three strong metaphors (and one “okay” metaphor) that can help bring the Stack into sharper focus. Think of this as a bridge between my original exposition of the model and the subsequent posts on its application. The intent is to equip readers with more intuitive, and perhaps more lightweight and familiar, metaphors before we explore how to leverage The Cognitive Effectiveness Stack.

We will do this by anchoring our understanding of The Stack to metaphors of several historical technological revolutions:

  1. The Agricultural Revolution
  2. The Industrial Revolution
  3. (Bonus!) Electrification (a core phase of the Second Industrial Revolution)
  4. The A.I. Revolution

Each metaphorical exploration will help reveal something different about the three-layer structure of the Stack. And the last one, A.I., will do more than just clarify. Because the Cognitive Effectiveness Stack is about intelligence, and A.I. is our modern metaphor for intelligence, this final analogy carries extra teaching power. So, when we talk about “stacks” or “layers” or “hardware” or “bandwidth” in a conversation about minds, it is an explicit example of how our understanding of ourselves is anchored to this paradigm, not simply me being glib.

If you would like a quick primer on The Cognitive Effectiveness Stack, go here. For a more in-depth breakdown of each layer, go here.

Fed by The Agricultural Revolution

We start our journey with the Agricultural Revolution and its metaphorical lens.

Physiological Layer – Seed and Soil

At the physiological layer, we can think of our genetic proclivity for intelligence like the seed of a plant. The seed contains the genetic blueprint for the scale, complexity, and utility of the plant. However, all of that remains inert until the seed is placed in the right environmental conditions. Soil nutrients, water, sunlight, weather, and predation are all examples of environmental factors that shape whether and how the seed grows into the resulting plant. Even the best seed can fail to manifest its potential in poor soil. A brilliant genetic endowment can remain dormant or be degraded by inferior conditions.

Experiential Layer – Cultivation & Farming

Seed and soil can do their part without human intervention or intention. Agriculture and farming, however, are not passive activities. This is the difference between weeds and crops.

Crop cultivation is a set of methods for enabling efficient, directed growth. Put another way, the crops we rely on don’t simply grow in the wild. In the same way, an uncultivated mind tends toward ineffectiveness, stagnation, or, at the limit, senility. For plants to contribute meaningfully to human flourishing, they must be cultivated with intent. The same is true for minds. The techniques of agriculture, like the techniques of education and self-development, are learned through experience, encoded in tradition, and optimized through technology.

Referential Layer – Harvest

However, there is no seed, nor fertile ground, nor method of cultivation that can make agriculture effective without the associated harvest. A field destroyed by fire, pests, or some other calamity carries a specific sting: the sting of a wasted sacrifice. This is the same sting we feel, in ourselves or regarding others, when a biologically superior, expertly cultivated human mind is wasted because it was not properly aligned and leveraged.

The harvest is activation. Often marked with celebration, it is the moment when all the potential of earlier sacrifices is made manifest in reality. Only when we harvest our produce, and do so with the right timing and distribution, are we able to capture the value of the nutrition and energy we so carefully toiled over for seasons. This highlights the importance of whole-stack thinking: the fertility of a seed means nothing without care, strategy, and distribution.

More acutely: a brilliant crop is wasted if it rots in the field or is fed to no one.

Powered by the Industrial Revolution

Now we move to the Industrial Revolution and its core metaphor: the engine and the automobile.

Physiological Layer – Raw Horsepower of the Engine

As I alluded to previously, the Physiological Layer of the Stack is like the raw horsepower of an engine. You’ll even hear people say, “Yeah, that person has a lot of raw intellectual horsepower.” Though, admittedly, this sentence is usually followed by a qualifier like “but…” that attempts to discount their overall effectiveness. But… that is another story.

The maximum horsepower of an engine is encoded in its design. That design is then degraded, to varying degrees, by manufacturing, operation, and maintenance. Some people turn to aftermarket solutions to tune the engine or add power beyond the factory limits. However, these enhancements often come with tradeoffs.

Experiential Layer – Transportation Machines

Horsepower on its own is inert. The engine sits idle unless paired with a machine that can harness its mechanical output into some desirable action. Power must be directed, steered (pun intended), and made efficient. This is similar to how we think of steering a human’s development through experience and education. That is the job of a vehicle. When paired with an automobile (or an aircraft), there are links between the mechanical output of the engine and the actual force applied to the road (or air), directed by the driver (or pilot!). The engine can produce its maximum power, but maximum power transfer depends on the links to the road: transmission, handling, tires, and so on. The design of the transportation machine, more so than the total engine power, determines the overall performance of the vehicle or aircraft and its effectiveness.

Referential Layer – Mobility

That said, if that sunny-day sports car sits idle in the garage, it remains ineffective no matter how powerful it may be. The engine and the transportation machine become effective when they are equipped with a driver. The driver has goals and agency. The driver orients, navigates, and directs the power into purposeful action. The driver transforms movement into mobility.

The car that sat prettily in the garage is never the car we say we “got our money’s worth” from. By contrast, the car that becomes part of your life, the one that takes you to work, on road trips, through storms and errands, is the one you “got your money’s worth” from. Again: it’s the connection to purpose and outcomes that activates the potential of a machine or a mind.

Electrification (My Bonus Metaphor)

I had originally prepared three metaphors for this post. But I couldn’t help myself. I wanted to be a bit cheeky and sneak one additional metaphor into this post. It’s a stretch. But stretching is good for us, isn’t it?

I thought, “I’ve addressed the Agricultural Revolution, the Industrial Revolution, the (pinnacle of the) Information Revolution, but not the technological revolution that underpinned the latter – Electrification.” Given its significance and given it is my professional domain, I felt it improper to omit entirely. Leave it to me to stretch a metaphor until it breaks. 😉

Power generation corresponds to the physiological layer. Just as our genetics shape our biological endowment and proclivity for intelligence, power generation extracts potential energy from the fabric of the universe. This energy is latent, invisible, but waiting to be tapped.

From there, that energy flows through the transmission system: a network structured with deep domain knowledge to route potential and power in preparation for its use. This is our Experiential Layer. It is a mechanism of robustness, reliability, and efficiency.

Finally, power flows into the distribution system, where it reaches its point of use and changes the state of the world. That’s our Referential Layer. The distribution system more closely mirrors our application-oriented decisions about Electrification (or what objectives can be addressed with electricity). It activates the potential of the entire energy system, closing the circuit and instantaneously converting potential into purposeful output.

Information Revolution & Artificial Intelligence

As we’ll see, the most apt analogy for understanding the Cognitive Effectiveness Stack appears to be that of the fundamental layers of Artificial Intelligence. Could this be because, as I previously asserted, we look to understand things through the lens of our current technological paradigm? Or could it be because I was subconsciously inspired by the structure of A.I. systems? Or could it possibly be that there is some fundamental relationship between the activation of both biological and digital intelligence? My bet is on the latter – at least ontologically. And at least sufficiently so that we can gain insights from the comparison.

Disclaimer – Ok, before you go calling me a smooth-brained mouth-breather, I should say I do know that A.I. systems and their tech stacks are much more nuanced and sophisticated than the three layers I describe below. So, if you find yourself nitpicking the analogy because of that… Congratulations, you’ve missed the point. Good luck developing your own cognitive effectiveness. You’re going to need it.

Physiological Layer – Network & System Architecture

What’s the “genetic inheritance” or biology of an A.I. system? It’s clear that the underlying network architecture is the right analogy. This structure is determined a priori, though likely informed by the problem domain (both in an optimization and evolutionary sense), but still relatively fixed.

Just as the “hard wiring” of our brains determines the upper limits of our cognitive potential, so does the architecture of a neural network for an A.I. In the world of transformers, this generalizes and maps to another metric: the total number of model parameters. These features dictate how the system consumes information, how well and how fast it can “learn”, and define (if not in an explicit, closed-form way) an upper limit on performance. Poor architecture limits the effectiveness of training, similar to how even a great education can’t overcome severe biological constraints. And as A.I. technologies improve, so do their architectures, and so does our understanding of the relationship between architecture and application. This is evolutionary at its core. A mouse cannot learn what a human can learn, no matter what kind of life it lives. Similarly, a CNN cannot map a system or serve an application the same way a transformer can.
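To make the “parameters as a fixed endowment” point concrete, here is a toy sketch (my own illustration, not drawn from any real system): for a simple fully connected network, the parameter count, a rough proxy for the capacity ceiling, is determined entirely by the architecture before a single training example is ever seen.

```python
# A minimal sketch: the "physiological layer" of a neural network is set
# a priori by its architecture. The parameter count falls out of the layer
# sizes alone -- no data, no training, just the blueprint.

def mlp_param_count(layer_sizes):
    """Total weights + biases for a fully connected network."""
    return sum(
        n_in * n_out + n_out  # weight matrix plus bias vector per layer
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

# Two hypothetical architectures with the same inputs and outputs but very
# different "genetic endowments":
small = mlp_param_count([784, 64, 10])        # -> 50,890 parameters
large = mlp_param_count([784, 512, 512, 10])  # -> 669,706 parameters

print(small, large)
```

Nothing either network later experiences can change these numbers; training can only shape what lives inside the ceiling the architecture has already fixed.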

Experiential Layer – Training

Perhaps even more obvious than the analogy between the genetic encoding of our biological neural wiring and the network architecture of an A.I. system is the relationship between The Experiential Layer and how A.I. systems are “trained”. Training, in my view, is broadly defined by two dimensions: (1) data, and (2) methodology.

To merge back in with the notions of the Experiential Layer, I consider data to be like the lived human experience and methodology to be like the structured nature of education. The training data and methodology cultivate an artificial intelligence. They take its raw potential and shape it into what it will ultimately become. It may be distasteful to say this about human intelligence, but with the inanimate digital intelligences we are free to speak the truth: garbage in, garbage out.

While the network structure and architecture may determine the ceiling of artificial intelligence, similar to how biology defines the ceiling for human intelligence, the model of the world that it captures is far more deeply governed by how it is trained – by how it is cultivated.

Referential Layer – Inference

Like biological intelligence, however well developed, digital intelligence is ineffective if left dormant. Its activation happens at the moment of inference and downstream of it. When an input is fed into an A.I., the model converts it into some kind of output based on its previously developed models of the world. What an A.I. is tasked with at inference, and how the resulting information product is leveraged, determines the true effectiveness of that Artificial Intelligence.
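The three layers can be compressed into a deliberately tiny sketch. Everything here is my own illustration, not an established API: the model’s form is fixed up front (physiological), its single parameter is shaped by data and methodology (experiential), and no value is produced until inference is actually run against a task (referential).

```python
# A toy end-to-end view of the Stack's three layers in an A.I. system.

class TinyModel:
    def __init__(self):
        # Physiological layer: the architecture -- here, one linear weight --
        # is fixed before any experience arrives.
        self.w = 0.0

    def train(self, data, lr=0.1, epochs=200):
        # Experiential layer: data plus methodology (plain per-sample
        # gradient descent on squared error) cultivate the raw potential.
        for _ in range(epochs):
            for x, y in data:
                err = self.w * x - y
                self.w -= lr * err * x

    def infer(self, x):
        # Referential layer: only here does the trained potential
        # actually touch the world.
        return self.w * x

model = TinyModel()
model.train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # implied rule: y = 2x
print(model.infer(10.0))  # converges toward 20.0
```

A perfectly trained `TinyModel` that is never asked to infer anything produces exactly as much value as the sports car that never leaves the garage.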

Many people believe that A.I. will be the great equalizer, similar to how they believed the Internet would be the great equalizer, or how iTunes would “democratize” music. In reality, as inference is the human-machine interface to A.I., it will likely have the opposite effect – just as iTunes simply made the most prolific artists even more prolific while saturating the rest of the curve with mediocrity and ineffectiveness.

Conclusion

In this third post on Cognitive Effectiveness, I’ve built a bridge between the exposition of the first two posts (where I introduced The Cognitive Effectiveness Stack) and the last two posts (where I’ll focus on application and exploitation).

I laid out several (perhaps more intuitive) metaphors for understanding the Cognitive Effectiveness Stack through the lens of four major technological revolutions, starting with the Agricultural Revolution and crescendoing with the peak of the Information Revolution: the rise of Artificial Intelligence. Each metaphor reinforces the same core insight: effectiveness requires coordination across all layers. Potential must be cultivated. Cultivation must be activated. In the next post, I’ll dive into the behavioral and adaptive strategies that emerge as a consequence.

Kevin
