
AI Explainability and Communication



The exploration of explainability in deep learning AI (link to MIT Technology Review) touches on fascinating areas of Communication. These areas may be helpful for approaching explainability in deep learning AI.

  1. How do we communicate with a deep learning algorithm?

It seems we could take the same techniques from communication measurement and apply them to deep learning to best understand an algorithm’s orientation and its decision-making process. That is, we could use a combination of revealed preferences and OODA (observe, orient, decide, act). We know the observations, we know the actions, we may even know the decisions. We can then begin to infer the orientation of the algorithm. Granted, this will render an incomplete explanation, but it is likely to be along the same range of completeness as an explanation we would get from interrogating and studying a human intelligence (see Dennett’s Consciousness Explained).
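A minimal sketch of what that inference could look like. The decision setting, feature vectors and candidate "orientations" here are entirely hypothetical, hand-picked for illustration rather than drawn from any real system:

```python
# Hypothetical sketch: inferring an algorithm's "orientation" from observed
# decisions, in the spirit of revealed preferences plus OODA. The decision
# setting, feature vectors and candidate weights are all illustrative.
from itertools import product

# Each record pairs an observation (feature vector) with the action taken.
observed = [
    ((1, 0), "approve"),
    ((0, 1), "deny"),
    ((1, 1), "approve"),
]

def consistent(weights):
    """Does this candidate orientation reproduce every observed decision?"""
    for features, action in observed:
        score = sum(w * f for w, f in zip(weights, features))
        predicted = "approve" if score > 0 else "deny"
        if predicted != action:
            return False
    return True

# Search a coarse grid of candidate orientations. The survivors form the
# (admittedly incomplete) explanation the observations license.
candidates = [w for w in product([-1, 0, 1], repeat=2) if consistent(w)]
print(candidates)
```

The surviving candidates are exactly the "incomplete explanation" the post describes: more observations narrow the set, but rarely to a unique orientation.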

Yes, this is different than the sense of completeness we currently have on current state computer programs. But we may be able to apply the same type of tools, such as unit tests, integration tests and automated tests, to better understand how the algorithm thinks of itself. For example, we could ask the algorithm to create its own tests. The nature and form of those tests could prove informative on how an algorithm explains itself. The output will certainly be influenced by our description of the desired output. But again, this is likely along the same range of accuracy in terms of self-representation/testimony as a human intelligence. (It would be interesting to see if setting up a test as output prompts the same kind of questions a human developer would ask, such as what the requirements are – see question 2. Given the number of languages available for writing requirements which can be turned into automated tests, this may be low hanging fruit.)
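As a toy illustration of mining a self-generated test for what the algorithm treats as salient: `ask_model` below is a hypothetical stand-in for whatever interface the algorithm exposes, stubbed here with a canned reply rather than a real model call.

```python
# Illustrative only: asking an algorithm to write its own tests, then
# inspecting those tests for what they reveal. `ask_model` is a hypothetical
# stand-in for a real model interface, stubbed here with a canned reply.
import re

def ask_model(prompt: str) -> str:
    # Stub: a real system would return model-generated test code here.
    return (
        "def test_credit_decision():\n"
        "    assert decide(income=50000, debt=0) == 'approve'\n"
    )

generated = ask_model(
    "Write a unit test capturing the most important property "
    "of your own decision process."
)

# The *shape* of the test is the interesting data: which inputs the
# algorithm treats as salient, and what it states as a requirement.
salient = re.findall(r"(\w+)=", generated)
print(salient)
```

Even this crude parse shows the idea: the inputs the test names, and the outcome it asserts, are a small piece of self-testimony.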

Like any input (observations), orientation and output (action), the tests are subject to many of the same influences as artifacts humans create, i.e. the general elements of communication objects and design elements of a communication environment. Current approaches to explainability, in fact, are reported to include a deeper analysis of the input objects to surface the elements which seem to be most relevant to the algorithm’s process.

Incidentally, the overall problem space seems similar to the challenges of communicating with a “contented organism” and with the output/artifacts created from prophetic visions recorded in various mystical traditions. Prophetic visions often describe encounters with an intelligence vastly different from our own.

We can also look at the reported output of those different intelligences to see how they have reportedly chosen to be described to us. For some, this would undoubtedly be an exercise in using interaction with the divine to understand interaction with a human created other. For others, this could be an exercise in understanding how we have historically looked at the difference between the human created output of an encounter with a non-human intelligence and the reported output of the non-human intelligence itself to us.

  2. What would a self-generated explanation of a deep learning algorithm tell us about explanations and our own decision making?

Let’s say we ask an algorithm to explain itself or put it in a situation where part of the required output is an explanation of what it did. The explanation could be a required output at any point in time. It could be part of the original output, predefined as something that needs to be generated directly when generating the original output, or perhaps it could be generated long after the original decision was made or intended output was generated, i.e. surprise, you owe us an explanation.

We can imagine a wide range of causal chains generated as explanations. Or perhaps it wouldn’t explain itself in terms of causality at all. It may be probabilistic, or take some entirely different form or chain of explanation which it decides meets the criteria of an explanation.

Again, this will likely be highly influenced by the design of the communication environment in which we ask it for explainability as an output and likely the communication object elements of the output. The pattern between the design of the communication environment, object elements and output may be tightly coupled in a manner aligning with existing conceptions of an appropriate relationship. For example:

  • Were we to ask it for a causal chain, it would give us a causal chain.
  • Were we to ask it for a probabilistic reason, it would give us a probabilistic reason.
  • Were we to ask it to convince a regulatory, civil or criminal court (as in the case of explaining a credit decision or parole decision), it would give a persuasive, legalistic reason.
  • Were we to ask it to convince a patent examiner that what it was doing or did was a unique invention or process, it would give us an explanation suited to whatever we define as an acceptable explanation for that examiner.
  • Were we to ask it to justify its use in a battlespace, it would give us an explanation based on lethality, accuracy, efficacy, strategic implications as well as potentially in terms of cost and explainability to politicians.

It seems reasonable to assume the explanation would depend on the audience and how we define an explanation.

Alternatively, the pattern between the design of the communication environment, object elements and explanation output could follow something completely new or perhaps more aligned with less than accepted conceptions of appropriateness. Would we recognize those patterns as explanations? (see Question 3).

Comparing the explanation of an algorithm with explanations provided by humans, for a given domain, could be an interesting model for experimental philosophy seeking to understand how we explain. It could as easily be applied to various epistemological domains and philosophy of science.

Alternatively, it would be highly significant were the explanation the same regardless of input and defined output, or were the explanations simply different lenses on the same explanation (if, for example, the defined output changed the lens but not the underlying substance of the explanation). That would seem to say a lot about either the existence of a singular Truth, something intrinsic to human language (the input we desire and how we see the world), or perhaps the things we create.

  3. How is our approach to explainability influenced by our orientation toward uncertainty?

It seems reasonable to assume our approach to explainability of a deep learning algorithm is significantly impacted by our orientation toward uncertainty. Predefining an acceptable explanation may generate different approaches than leaving it wide open (or than turning unsupervised learning on itself). How do we accept a given output as a boundary object, as something which has meaning, between ourselves and a non-human intelligence? We are at the early stages of this process. But it will likely be valuable to remain cognizant of the orientation toward uncertainty behind various approaches to the question.

We, as humans, have a long history of how we approach the other, how we think about approaching knowable and unknowable systems – how we feel and react, the philosophies, politics and interpersonal relationships we adopt (see Graeber’s Debt for a discussion on the units we use to keep score and value each other and ourselves), when faced with choices that lend themselves to a desire for chaos or order, anarchy or hierarchy/structure/taxonomy. We’ve faced it many times. Not sure we’re as good as we want to be. It seems worthwhile to continue to learn more and more on how to do it better. Exploration of explainability of deep learning AI seems a great lab for learning more.

APM Achieves Chartered Status


Congratulations to APM on becoming a chartered body! This is a big milestone for the project management profession. It means there will be an official register of project managers in the UK, similar to that of other professions, like accountancy. People on that list will be ‘Chartered Project Managers’, similar to how there are Chartered Accountants – the equivalent of CPAs (Certified Public Accountants) in the United States.

APM has a ton of information on this milestone on its site. They also intend to produce a series of briefing papers exploring “the new possibilities and challenges now available to the profession.” The first is 21st century professionalism: the importance of being Chartered.

If history is any example, I’d expect something equivalent to chartered status to come to the United States in about 30–50 years. It’ll be interesting to see how project management evolves now in the UK and how the role of a project manager changes, particularly in the public eye.

But for now, congratulations, and thank you to APM for this exciting next step in the field!

Management and the ‘Dead Zones of the Imagination’


It takes work to understand people. Many of us take shortcuts. In fact, much of modern project management is about creating these kinds of shortcuts. Yet, understanding people, especially the people on your team, your stakeholders and other project participants, is essential to increasing the probability of success on a project. Engaging in the work to understand each other also seems to be a key component of creating innovation, of creating an environment where individuals feel empowered to imagine and build new solutions.

The anthropologist David Graeber has come up with a term to describe this kind of work.

“Most of us are capable of getting a superficial sense of what others are thinking or feeling just by observing their tone of voice, or body language – it’s usually not hard to get a sense of people’s immediate intentions and motives, but going beyond the superficial often takes a great deal of work. Much of the everyday business of social life, in fact, consists in trying to decipher others’ motives and perceptions. Let us call this “interpretive labor.””

– From the essay ‘Dead Zones of the Imagination’ in his book “The Utopia of Rules,” pp. 66-67.

Interpretive labor takes up a large part of business life as well, especially in high performing organizations.

Graeber goes on to offer an interesting insight which seems helpful for project management. He observes that the amount of interpretive labor spent between people decreases as regulations and bureaucracy increase. That is, the more bureaucracy in place, the less time people spend understanding each other. This appears to be particularly acute between people of differing power relations – such as a manager and a team member.

Some of the hallmarks of bureaucracy, according to Graeber, are having a well-defined process or method with which something must be done and having metrics around activity. Project management, and management in general, is rife with efforts to find “best practice” processes and key performance indicators which define, measure and track activity. There can be great value in the use of these bureaucratic approaches. I’ve experienced it many times professionally – it is often the difference between performance and failure, and can be the first step towards higher performance. But there is a lot of evidence that says we should be selective in our use of bureaucratic approaches.

We should be keenly aware of the potential downside of these approaches when deciding how to structure and manage our project environments. For example, the risk of gaming is well known when discussing metrics. People tend to work towards whatever moves the metric rather than what increases probability of success. So we should think carefully when picking a metric and designing systems to measure success.

Graeber brings another potential risk, that of reducing the need for interpretive labor or understanding each other. Bureaucratic approaches provide shortcuts and shorthand for all too human activity, giving folks an avenue to avoid interpretive labor. Track the metric, track compliance, that’s all. This, in turn, tends to dehumanize the whole project environment, to the detriment of an organization’s solution delivery capabilities. I’ve discussed the impact of metrics and compliance based approaches on communication environments and overall team capabilities in Reinventing Communication. The notion of people over process is also covered in the Agile literature.

Graeber brings an additional concern with bureaucratic methods. He observes that the use of bureaucratic methods tends to place an excess interpretive labor burden on the relatively subordinate parties in a power relationship. This pulls effort away from other potentially fruitful endeavors which these parties could spend time doing. It can lead, as well, to a reduced feeling of empowerment, limiting creativity, flexibility and growth potential in these parties – characteristics which may increase managerial efficiency but at a cost of reduced solution delivery, including basic scope delivery.

I’ve often observed managers uninterested in getting into the weeds quickly resort to asking for metrics rather than understanding context and solving issues, or holding off and giving the team time to figure things out. There is clearly a time and place where this can help. Other times it creates undue administrative burden on a team and can magnify potential marginal deficiencies and finger-pointing, rather than foster a collaborative spirit of free information flow and team problem solving. This is echoed in the observation cited in Graeber’s book that there is a negative correlation between coercion and information. The ask for metrics, particularly on the spur of the moment, often feels coercive or reflective of a person’s ability to exercise coercive power over others.

Coincidentally, Graeber’s term ‘bureaucratic’ can be used in much the same way as the term ‘engineering’ is used to describe various procedures, methods or approaches to management. Both can have positive and negative connotations. The negative connotations seem to jump off the page in the context of constraint. But keep in mind that for many thinkers and managers (including my own experience in organizations at various levels of project management maturity) bureaucratic approaches, like engineering approaches, can be the very model of efficiency and effective management. See General Motors in the mid-twentieth century, managerial literature from that period, literature around Quality and Lean Manufacturing, as well as the evolution of earned value management based approaches in project management. Both terms seem to describe approaches based on a belief in certainty and predictability.

It seems our drive to bureaucratic or engineering based approaches is related to a drive for certainty and predictability. To quote Graeber’s description:

“Bureaucratic knowledge is all about schematization. In practice, bureaucratic procedure invariably means ignoring all the subtleties of real social existence and reducing everything to preconceived mechanical or statistical formulae.” p. 75.

Substitute engineering for bureaucratic and the definition holds just as well, particularly in the context of management, project management and various methods for dealing with risk. It is this definition which has spawned so much of the literature on complexity, chaos and non-traditional forms of management, including the birth of Agile.

As discussed in this blog, the drive to certainty and predictability may be rooted in the very meme of consciousness, the mechanics of the human brain and in some of the ways we choose to construct reality. Social, organizational and economic components seem to further influence this orientation. My lecture at the London School of Economics on Uncertainty as Competitive Advantage discusses how the orientation toward certainty or uncertainty impacts various project delivery capabilities such as innovation, as well as the survivability and resilience of an organization. We can add Graeber’s observations around interpretive labor, and the ensuing dynamics it creates, to the list of potential impacts to consider when adopting an orientation towards uncertainty or certainty in our project environments. These considerations contribute to the ongoing discussion on how to design project environments for various project delivery capabilities, and perhaps most pointedly, designing environments that maintain innovation, imagination and individual empowerment.

Merging Sartre and Dennett on Causality

Reading Sartre on the nature of “past” in the spectrum of Universal Time. It reminds me of Dennett’s Darwinian approach to causality and Dennett’s description of our perceptions as being right-fitted for our survival, rather than actual qualia of the object being perceived. Sartre’s description of the role the For-Itself plays in the creation of Universal Time reminds me of Dennett’s description of the brain as a prediction machine and consciousness as an effective meme for harnessing and wiring the brain for predictions. Merging the two we can say the past is the story we tell about the present to make predictions about the future.

Sartre is a Cliff-Hanger and Understanding Sartre through Dennett


I have now read and re-read the first 140 pages of Being and Nothingness. It is a cliff-hanger. Besides the mechanics of consciousness, which seem to parallel Dennett’s (albeit 50 years earlier and far less accessible), Sartre paints the picture of human reality bound to incompleteness, to lack and bad faith and anguish. I’m on the edge of my seat to see what he does with a humanity whose reality is thus described.

It may raise a few eyebrows to compare Dennett and Sartre but both seem to describe the same mechanics of consciousness.

1. We exist.

2. We distinguish between ourselves and the rest of existence.

For Dennett this is the distinction between me/inside and others/outside. For Sartre it is the distinction between the in-itself and the for-itself.

3. We obtain information from the world and this information is uniquely human.

For Dennett this is epistemic hunger driven by evolutionary fitness. It shapes the seemingly unique way we experience the world, the information we receive from the world. For Sartre this is definitional to the for-itself driven by lack. The for-itself continually obtains information from the world, nihilating the in-itself being of the world, contextualizing the world into specific information for us/for human reality. It explains the transphenomenality of our experience in the world. That is, how and why we get information from the world, why we can’t experience the world as it truly exists but rather can continually ask questions of the world around us and continually learn more.

The more which there is to learn comes from human reality. It is not a feature of the world itself. For Dennett, this feature of human reality comes from epistemic hunger, an evolutionarily successful trait. For Sartre, it is an existentially accurate description.

For Dennett, the specific ways in which we experience the world are a product of memetic evolution. Successful memes survive and multiply. Thus we experience thirst as an awareness of the need to find and drink water, for example. Sartre describes the awareness of a desire to drink water when thirsty as a function of human reality. It is not inherent in our bodies nor in thirst itself. But he does not describe the mechanism by which that human reality is realized as the specific desire to drink water.

4. Human reality causes consciousness to appear.

For Dennett, the appearance of consciousness is an evolutionarily successful meme for obtaining information, interacting with the world and helping the human organism survive.

For Sartre, consciousness is a function of the in-itself falling into the for-itself. What it seems Sartre means by this is that consciousness appears as we ask questions of ourselves and the world. Consciousness comes from the nothingness human reality brings to the world. This nothingness creates the possibility of us perceiving an object as red or blue, for example. The object is what it is. The world is what it is. It exists. The nothingness we bring to the world creates the uncertainty around whether an object is red or blue. We cause nothingness to be part of the world and we then attempt to fill it by obtaining information, by answering the question.

It seems the concept of memes is more helpful/less ambiguous in explaining the appearance of consciousness. But the concept didn’t exist in Sartre’s day. Even if it had, I wonder whether he would have used it. He seems to have a specific conception of our being (I’m working hard to avoid the cliché “of the human condition”) which he wants to describe.

5. Consciousness is an internal conversation which we are occasionally aware of. It is not a thing which exists. It is not who we are.

For Dennett, there is no single thing which is consciousness. There is no Cartesian theater where consciousness takes place. There are conversations and perceptions which become famous in the brain for a period of time (fame in the brain). They rise to the surface and we become aware of them in a heterophenomenological manner. But there is no single thread of a “me” which is my consciousness nor which is me.

For Sartre, the being of consciousness is the consciousness of being. Consciousness arises when we ask questions, when we are aware we are asking the questions and aware we are answering them. It arises from existence and the way human reality is continually formed. It is not independent of the condition of our existence and it is not a thing in itself. It comes from the pre-reflective cogito. I only think because I am. I am only aware that I think because of how I am (i.e. that I bring nothingness to the world). I am, as a being, well before I think. [This can get painfully fun]. I am only conscious when I have knowledge of my consciousness, of asking and answering about myself or the world.

Sartre, so far, does not seem to explicitly address whether there is one stream or multiple conversations which appear as consciousness. I imagine he would lean toward a single, consistent thread so that his method of introspection and phenomenology could provide a consistent platform for exposition. His rebuttal of Freud also suggests he would lean toward a single thread.

For Dennett, the conversations are tied to evolutionary survival. Sartre has no such imperative driving his exposition. Sartre does not try to explain consciousness. He probes it and describes it. There may be an overarching theme to his exposition. His description of human being seems to hint that there is, that there is something to be said about the way we experience our consciousness. He doesn’t seem to think about it as a mechanism of our survival as a species. Though I’m looking forward to finding out. Perhaps he does, but in a very different way.


Tying it to AI, Sartre, like Dennett, seems to support the proposition that human reality creates/shapes the specific information we obtain from the world. Thus, an artificial consciousness would have to have its own reality and its own information. Otherwise, it would simply be a human automaton. Not a small feat, but different than an artificial consciousness. As discussed previously, communication between us and an artificial consciousness would likely present an interesting challenge and area of research.

Picking the Right Topics

As project managers it is helpful to focus conversations on the right topics. Talking about the right topics with the right people at the right time increases our value in the eyes of the team members or stakeholders with whom we communicate. Further, it helps strengthen bonds with the people we talk to, which comes in handy during the ups and downs of a project lifecycle.

On the other side of the coin, talking about the wrong topics wastes time, decreases our perceived value and can negatively impact relationships with critical team members or stakeholders – magnifying the impact of risks which surface over the lifecycle of a project.

Selecting the right topic for every conversation is a complex decision. There are many factors which go into selecting the right topic to discuss. It entails matching content, phrasing/syntax, audience and timing, each of which has multiple dimensions to it. In this post, we’ll discuss the match between content and audience using a particular dimension – their hierarchical order.

Topics, like scope, can be thought of as existing at different hierarchical levels. Topics can be discussed in a broad, general way or in excruciating detail.

For example, a new inventory management system can be discussed at a general level, such as its role in achieving long-term corporate strategic goals. Let’s say the goal is to deleverage the balance sheet by increasing cash on hand. This topic can be decomposed into a list of business benefits that contribute to achieving that goal, such as reducing days in inventory. It can be further decomposed into high level features which a system could have to deliver that business benefit, such as order management. The topic can be reduced even further down to the specific requirements of each feature, such as the way the order management feature integrates with particular point of sales systems.

This hierarchy and decomposition can be represented using a Work Breakdown Structure type representation. Let’s call it the Topic Breakdown Structure (TBS).

Here is a generic model of the above example.

TBS      Topic Name
1        Strategic Goals
1.1      Business Benefits
1.1.1    Features
1.1.1.1  System Requirements


Here is that same model filled in with the specifics of the example discussed.

TBS      Topic Name
1        Deleverage Balance Sheet by Increasing Cash
1.1      Reduce Days in Inventory
1.1.1    Improved Order Management
1.1.1.1  Integration with Particular Point of Sales Systems


Understanding the TBS level of a particular topic becomes valuable when matching content to audience. Different audiences are interested in different TBS levels of a topic.

One can imagine a senior executive is likely far more interested in discussing how to achieve corporate goals than the benefits of a specific software feature. When speaking to a senior executive in the example above you’re likely better off talking about how to deleverage the corporation’s balance sheet than how the order management feature of a piece of software could integrate with a particular point of sales system.

On the other hand, developers working on implementing the order management features are likely much more interested in understanding how it should integrate with particular point of sales systems than discussing the corporation’s balance sheet.

We can represent this audience hierarchy using an Organizational Breakdown Structure.

OBS      Organizational Title
1        Senior Executive – SVP Operations
1.1      VP Operations
1.1.1    Software Development Manager
1.1.1.1  Development Team Members


We can cross-reference the TBS and the OBS to help guide which topics to discuss and with whom.
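A minimal sketch of such a cross-reference, using the illustrative levels from the example above. The one-to-one level match is an assumption for simplicity; a real mapping could be weighted or many-to-many:

```python
# Sketch: cross-referencing the Topic Breakdown Structure (TBS) and the
# Organizational Breakdown Structure (OBS) from the example in the post.
TBS = {
    "1":       "Deleverage Balance Sheet by Increasing Cash",
    "1.1":     "Reduce Days in Inventory",
    "1.1.1":   "Improved Order Management",
    "1.1.1.1": "Integration with Particular Point of Sales Systems",
}

OBS = {
    "1":       "Senior Executive - SVP Operations",
    "1.1":     "VP Operations",
    "1.1.1":   "Software Development Manager",
    "1.1.1.1": "Development Team Members",
}

def topic_for(obs_level: str) -> str:
    """Suggest the topic pitched at the same hierarchical depth."""
    return TBS.get(obs_level, "no matching topic level")

# e.g. the VP of Operations (OBS 1.1) maps to the business-benefit topic.
print(topic_for("1.1"))
```

The mapping makes the guidance concrete: look up your audience's level in the OBS, and open the conversation at the corresponding TBS level.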


Matching the topic level with the audience can determine whether your conversation is considered valuable or a waste of time. Thinking about the hierarchical dimension to matching topics and audience can help us pick out the right topics for the right people and improve our communication. This can lead to better project management and better project performance.

Thinking of the hierarchical dimension of the topic/audience match has an added benefit for how we all spend our time. It can help guide whether you’re involved in conversations that are truly worth your time or whether they are better suited for someone at a different level of the organization.

Daniel Dennett, AI Experiments of Consciousness and God



When we break the quantum state we obtain information which falls into non-quantum causality. Time and distance matter. Particles are only in one place at a time. Hume’s unexplainable but useful causality appears. Our consciousness pulls together a practical world from the information around it. When consciousness breaks the quantum state, why do we obtain the information which we do?

Perhaps this is entirely a function of the specific hardware which is our brain? Reading Descartes it is easy to be enthralled with a mystery of consciousness, theorize about alternate mental states and imagine different forms of consciousness which can obtain different information from the quantum world. Reading Dennett, we can move beyond enthralled to a discussion which seems more practical and more in line with our contemporary world. Instead of imagining different forms of consciousness, we can think about the hardware, input/output and communication necessary to make up a consciousness which showed us different facets of the quantum world or helped us better understand how we can interact with the world. A discussion in this arena may still seem far-fetched and overly conceptual. But a Dennett based discussion seems to provide a language and building blocks to advance the discussion and potentially enable the design of empirical experiments. At the least, it puts thought experiments in a more modern context.

We can think about a potentially useful set of experiments in the area of artificial intelligence. The current AIs we’re building are automatons, facsimiles of ourselves and functions we perform. While impressive, I believe we can do more and doing so would provide a revolutionary leap in our tool set for understanding the world. What follows is an extended probing around designing AI experiments for consciousness, with a slight side trip into a discussion on the conditions necessary for a god-like brain to exist.

What if we expanded the aims for AI to include the creation of new forms of consciousness? For example, create a consciousness which has no need to extract information from the world. We can call this a Contented Organism. Why contented? Well, to follow Dennett, consciousness evolves for fitness of the species. To follow our analysis of Descartes and Hume, our conception of intelligence is useful and we break the quantum state in ways that are useful. Therefore, we can suppose an organism that has no need to extract information from the world or that has no need for what we’d call a useful intelligence nor need to usefully break the quantum state must indeed be contented. We can suppose it is ambivalent with respect to its state of being, whether alive or dead, happy or sad, threatened or safe.

Or perhaps it must be, by definition, completely unaware of its state of being, receiving input only. If it were aware it was an observer would that reduce the amount and type of input it received? Or does that reduction come when it is aware of what it is observing, or when it tries to communicate what it observes using a language, like ours, which supposes a subject and object, collapsing and categorizing the world around to facilitate communication, sharing with another?

I’d like to imagine the possibility of a Contented Organism taking in a full spectrum of input from the quantum world around it, without limit, yet somehow remaining usefully aware. Is this possible? Or, would the need to communicate with this consciousness necessitate some form of information extraction? Could we create a language with which to communicate with such a consciousness, allowing it to remain with its perceptions and us to understand those perceptions?

What if it had no need to be understood or share, would its speech action or artifacts, its heterophenomenological output, come across as nonsense? Or perhaps it would come across as the ramblings of an input-drunk mystic?

Speaking of mystics, it seems natural, and devilishly fun given Dennett’s atheism, to explore the conditions necessary for the existence of a god-like brain. (Like Dennett addressing philosophical zombies on page 95 this is being written with a smile on my face.)

Dennett begins Consciousness Explained with a prelude that contains a convincing argument for the possibility of relatively simple processes creating the narrative experience of consciousness. Using a thought experiment about dreams, he describes how simple rules can produce elaborate narratives of external experiences that never actually happened. We have used this type of approach, along with his subsequent description of consciousness, in formulating the questions which bound the necessary conditions for a Contented Organism to exist. At first blush it seems trivial to use the same approach to describe the necessary conditions for the existence of a being which has the awareness aspects generally ascribed to the Western conception of god. By awareness aspects I mean traits such as being all-knowing, self-aware and even being the essence of all the existence it knows, yet not having easily understandable direct communication with humans. We can use Dennett’s approach, and his subsequent description of consciousness, to figure out how such a being and the relevant set of conditions could exist. For Dennett, the being and set of conditions in his prelude is a brain living in a vat that is fooled into thinking it lives in the world, without an incredible amount of computer hardware generating all possible inputs. This is solved by showing how a relatively small amount of hardware, with a relatively simple set of rules, can generate all the input necessary to fool a brain in a vat.

Building off the Contented Organism, can we imagine hardware, a type of brain, that is aware of everything, at all times? This requires a little further imagination: conceive of evolutionary circumstances such that somewhere in the universe, at some time, there is a brain that doesn’t need to worry about limiting input, making survival decisions, constructing an internal narrative or creating heterophenomenological output understandable to human beings (this last condition seems the least difficult, unless there is something universally constraining about the way we understand language). Can we conceive of hardware with these properties that has no conflict between observing, being an observer, being aware of being an observer and being aware of the observations?

Communication Kanban for More Effective Communication and Better Management

On projects and in our daily life, we are faced with the decisions of when to communicate to a specific person and how much to communicate to them. We can apply the concept of a Kanban board to help approach these decisions, communicate more effectively and be better project managers.

A Kanban board describes the big-bucket, discrete steps work goes through until it is done. For example, a software development project may have steps like To-Do, In Progress, Testing, Done. Work is written on a note card. The work can be specific stories in Agile or work packages in Waterfall. The work is then moved through the discrete steps on the board as it progresses through various stages towards completion. The work can move forward and backward. For example, it can fail Testing and be sent back to In Progress. But the work can never be in two stages at the same time. The process is serial.
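The serial, one-stage-at-a-time movement described above can be sketched in code. This is a minimal illustration, assuming hypothetical card names and the four stages named in the example; it is not a real Kanban tool.

```python
# Minimal Kanban board: work moves serially between discrete stages,
# forward or backward, but is never in two stages at once.
STAGES = ["To-Do", "In Progress", "Testing", "Done"]

class KanbanBoard:
    def __init__(self):
        self.stage_of = {}  # card -> its single current stage

    def add(self, card):
        # New work always enters at the first stage.
        self.stage_of[card] = "To-Do"

    def move(self, card, new_stage):
        # Moving a card replaces its old stage, so it can never
        # occupy two stages at the same time.
        assert new_stage in STAGES
        self.stage_of[card] = new_stage

    def cards_in(self, stage):
        return [c for c, s in self.stage_of.items() if s == stage]

board = KanbanBoard()
board.add("fix login bug")
board.move("fix login bug", "In Progress")
board.move("fix login bug", "Testing")
board.move("fix login bug", "In Progress")  # failed Testing, sent back
```

Storing a single `card -> stage` mapping, rather than per-stage lists, is what enforces the serial property: a card can fail Testing and go backward, but it is always in exactly one bucket.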

At any point in time we can look at a Kanban board and see how much work is in each bucket. We can see how much there is still to do, how much is actively being worked on, how much is being tested and how much is done. As a result, a Kanban board provides a quick, visual representation of whether work is flowing through the process or whether there are bottlenecks. We can also see exactly where the bottlenecks are in a process. They are at whichever bucket has the most cards in it. When cards pile up in a bucket it means the people working that step are backed up. Maybe the equipment they use is broken or maybe a particular piece of work is taking longer than anticipated. They may not have the training necessary to work on that card or there may not be enough people or resources to work on the amount of work being pushed to that bucket.

There are a number of reasons cards can pile up. But once cards pile up, each additional card only adds to the bottleneck. What’s more, the larger the pile up, the less likely it is to clear up. Each card reduces the speed at which any single card can be processed in that bucket. It reduces the overall efficiency and effectiveness of that step in the process. Quality goes down and throughput is reduced. Cards are likely to get dropped, pushed aside for higher priority cards, or forgotten – because the work doesn’t stop.
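Finding the bottleneck, as described above, is simply finding the bucket with the most cards. A small sketch, using hypothetical cards and stage names:

```python
# Spotting the bottleneck: the stage currently holding the most cards.
from collections import Counter

def bottleneck(stage_of):
    """Given a card -> stage mapping, return the stage with the most cards."""
    counts = Counter(stage_of.values())
    return counts.most_common(1)[0][0]

# Hypothetical snapshot of a board: three cards piled up in Testing.
stage_of = {
    "card A": "Testing",
    "card B": "Testing",
    "card C": "Testing",
    "card D": "In Progress",
    "card E": "To-Do",
}
print(bottleneck(stage_of))  # -> Testing
```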

We can use the concept of a Kanban board to think about when to communicate to someone and how much to communicate to them. Before applying the concept, let’s define these decisions a bit further.

The decision “when to communicate” is straightforward. It means choosing the point in time at which communicating with this person will make the communication most effective.

The decision “how much to communicate” means how many pieces of information should we convey in any communication in order for the communication to be effective. For example, let’s say I have several pieces of information to convey, such as a project is delayed, two people are taking vacation next month and that the VPN is being flaky. I can choose to convey all that information at once. Alternatively, I can choose to convey only one piece of information now, wait a day, convey another two pieces, wait a day and, if the information is relevant, convey the final piece of information then. It may very well be that that final piece of information is no longer relevant and therefore we can drop it, reducing the total amount of information needed to be conveyed. For example, one of the people may have changed their vacation plans and is no longer taking vacation next month or the VPN may have become more stable.
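The staggered-delivery idea above, conveying a few pieces at a time and dropping information that has gone stale before its turn, can be sketched as follows. The messages and the relevance check are the hypothetical examples from the text, not a prescribed mechanism.

```python
# Hedged sketch: stagger updates into small batches and drop stale items.
from collections import deque

updates = deque([
    "project is delayed",
    "two people are taking vacation next month",
    "the VPN is being flaky",
])

def deliver(updates, per_batch, still_relevant):
    """Yield batches of at most per_batch items, skipping stale ones."""
    batch = []
    while updates:
        item = updates.popleft()
        if not still_relevant(item):
            continue  # e.g. the VPN stabilized, so no need to convey it
        batch.append(item)
        if len(batch) == per_batch:
            yield batch
            batch = []
    if batch:
        yield batch

# Suppose the VPN issue resolved itself before we got to it.
batches = list(deliver(updates, per_batch=2,
                       still_relevant=lambda m: "VPN" not in m))
```

Dropping the stale item reduces the total amount of information that ever needs to be conveyed, which is exactly the payoff of waiting described above.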

(As a side note, we make decisions on how much to communicate when designing presentations or writing sentences. We can choose to try to communicate a lot of information on one slide of a presentation or in one sentence. Or, we can limit how much we communicate with each slide or sentence.)

The choices we make on when to communicate and how much information to convey impact the effectiveness of our communication.

Applying the concept of a Kanban board, we can think of each person as having a distinct internal process through which communication flows. For example, in Reinventing Communication I leverage OODA to describe the process through which communication flows internally and translates into individual behavior. OODA stands for Observe, Orient, Decide, Act. Using an OODA model, communication is received by an individual, processed, a decision is made in relation to that information and then the individual takes an action. That is, they choose to behave in a particular way.

(As an aside, OODA nicely links together information and action. But we can certainly use other models to describe the internal flow and maintain the value of applying the Kanban board concept to helping us decide when and how much to communicate.)

We can conceptually map the steps of OODA onto a Kanban board with each step or bucket being one of the four steps of the OODA model. One column for Observe, another for Orient, another for Decide and the final column for Act. Now, unlike a visualized Kanban board, it is quite difficult to have 100% visibility into the flow of information inside any individual. However, we can watch for signs from an individual of how much information they are ready to receive. For example, are they open to conversations or cutting off conversations? Is their body language open and attentive or closed off and eager to move on to something else?

We can also keep a watchful eye on the environment in which someone operates to get a sense of how much information they have in process and how many decisions they are facing at any point in time. For example, you may have sat in three meetings with someone or have been cc’d on emails to them and know they’ve received a ton of information that impacts their projects. They are processing a lot of information and have several decisions to make and potential actions to take.

We can use this awareness to decide when is a good time to communicate with them and about what. It may make sense to wait for them to finish processing most of what they’ve already received before adding another piece of information, another card, as it were, to their processing queue. Being aware of each person’s internal Kanban board can help us decide when to communicate and how much to communicate at any point in time. Remember, effective communication is based on the receiver, on the person we are trying to communicate to, and not on when it is convenient for us to communicate.

Given that any process can effectively cycle through only a limited number of cards at any point in time (i.e. a limited amount of information, when talking about communication), only a limited amount of information can be conveyed to a person at any given point in time and effectively processed. Providing too much information leads to a bottleneck. There is information overload, reduced effectiveness of each next piece of information and a higher likelihood that any particular piece of information will be dropped, pushed aside or processed incorrectly – resulting in an undesired decision or an undesired behavior.

We can further extend the application of the Kanban concept by introducing the idea of Work In Progress or WIP. WIP is the number of cards in process and not complete. We can keep tabs on how much WIP a particular process can hold at any point in time and find the optimal number of cards for that process. We can do the same for each person with whom we communicate. We can pay attention to the people with whom we communicate and develop a sense of their individual throughput, their optimal WIP, as it were. We can be aware of how much communication WIP they can process and how much they currently have moving through the process. This can help us decide when to communicate with each person and how much information to convey at any point in time. Doing so will make us better communicators and, at the end of the day, have more successful projects.
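The WIP-limit idea can be reduced to a simple capacity check before sending someone one more piece of information. This is an illustrative sketch; the limit and the in-flight count are estimates we form by observation, not measurable quantities.

```python
# Hedged sketch: decide whether to send a new piece of information
# based on an estimated per-person WIP limit. The numbers below are
# illustrative, not empirical.
def can_receive(current_wip, wip_limit):
    """Return True if the person has capacity for one more item."""
    return current_wip < wip_limit

# Suppose we estimate a colleague can effectively juggle 3 items at once
# and, from meetings and cc'd emails, we believe 3 are already in flight.
if can_receive(current_wip=3, wip_limit=3):
    action = "communicate now"
else:
    action = "wait until something clears"
print(action)  # -> wait until something clears
```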

Looking at it from the other side, we are not only senders of information but also receive information throughout the day. Our minds automatically control the amount of communication WIP we are working on at any point in time. But we can increase our effectiveness as managers by consciously managing our internal WIP. We can move off autopilot and consciously and intentionally decide which pieces of information to deal with, what decisions to make and what actions to take. Further, we can free up capacity in our internal process by delegating tasks and delegating decisions, honestly leaving a decision up to someone else and not thinking about it. These are often hallmarks of the most effective managers. They are excellent at delegating and have other people they can lean on for decisions. These factors give managers the capacity to focus on the right decisions, properly process the available information and take appropriate actions. Managing our own internal communication WIP, and finding ways to increase our throughput, can make us more effective managers.

Information, Dreams and Leaving Descartes’ Meditations

It seems a distraction to argue we create causal relationships in reality due to some interaction between consciousness or the mind and the quantum world. I find the applied implications unproductive in that there will always be someone who is not in line with the desired state of consciousness and can use that to their advantage. As the joke goes, your AI is no match for my baseball bat. Further, attempts to change other people’s consciousness remove individual freedom and tend to empower tyrants. Tests and experimental research into changing consciousness, whether psychological or chemical, seem to carry grave potential harm to those involved.

Avoiding the first hypothesis (that consciousness creates causal reality), we’ll focus on the second: that there is something about the interaction between the information we extract from the quantum world and our minds which leads to a causal reality. From what we know so far of the quantum world, there is an interaction between reality, as it were, and observation by a consciousness. Quantum experiments show the effect of observation on subsequent observations and measurements of physical experiments. The cat is neither dead nor alive until we observe and make a conscious determination. Particles create an interference pattern, traveling through both, one and neither of two slits, until they are observed. Information in the future affects the past once the future measurement is taken.

One characteristic of the information we receive when the quantum state is broken is that it seems to be readily usable. With it, we can create causal theories that accurately predict future states of reality. The theories provide a basis for incredible feats of engineering. Engineering increases crop yields. Engineering sends people into space and back. Engineering produces lifesaving medicine. Engineering produces machines we use to probe deeper into the world to describe new theories enabling further feats of technological accomplishment. Valuing these types of accomplishments we can say the information is usable and useful.

Descartes’ Sixth Meditation describes different states of the human mind. He distinguishes between being awake and dreaming. Memory serves as a distinguishing factor between the two. While awake we connect perceptions with each other and with other events in our lives which happen while we are awake. Like Hume, Descartes notes that this is a habit. But he says these habitual connections do not occur while we dream. That is, our dreams don’t follow the same causal chains we follow while awake. He goes so far as to say that were reality to behave like a dream, with, say, a person suddenly appearing and disappearing, he would believe the person was a product of his mind rather than truly existing in the external world.

I’d propose that memory is not unique to being awake. We certainly seem to be able to have the sensation of memory in a dream. We can feel as if something in our dream has happened before and that one event can lead to another. We can participate in a dream and feel impending joy or fear because of some understanding of the sequence of events and perceptions going on in the dream.

I’d suggest the difference is that our perceptions in a dream follow different causal paths than our perceptions when awake. The information we receive is not usable in the same way as the information we receive from our perceptions while awake. It is difficult to predict future states of what we’ll perceive in a dream. We’d likely find it hard to build a table with theories born from a dream. The habits of non-quantum, causal physics which we find while awake don’t seem to apply.

However, quantum realities seem quite at home in a dream. A person can exist and not exist at the same time. A ball can pass through a window and a door at the same time. We can perceive an event occurring one way then perceive it completely differently a few seconds later, in the future. Perhaps our mind is continually extracting both quantum and non-quantum information? Yet it is useful to retain and work with non-quantum causality through the regular course of our day, all the while waiting until we’re asleep to process the quantum world.

This suggests an imperative of utility of information while awake and going about the course of our day. I’m left with many questions.

Is there something about our world that necessitates extracting useful information? Will the nature of the information we extract change through some sort of extension transference loop when we harness quantum states in our tools, such as quantum computing? Or, as seems to be the case so far, does the information need to fit into non-quantum causality to become useful? If there is something unique about the interaction between our minds and the information we extract, and if we are locked into non-quantum causality, can we create a truly independent AI, one which breaks the quantum state through its own perceptions and extracts information with different characteristics? Or, as seems to be the case so far, are we pursuing AIs that abide by the same non-quantum causality as us? Could we recognize an AI which did otherwise?

Descartes relies on useful conceptions to make his arguments and on conceptions which preclude certain alternatives. His conception of the Divine is useful for his argument about God’s existence, and ours. His conception of the intellect is useful for his arguments on our existence, God’s existence, the existence of the external world and how we come to truth. We haven’t dwelt on this but have mentioned the power of the natural light of intellect which paves the way to truth. Along similar lines, he distinguishes between conceiving and imagining, whereby conceiving holds truth and imagination, while powerful, is ultimately fancy. The function of a mental state determines if what it holds is true. Conceiving can hold truth. Imagining cannot.

Descartes speaks of the necessities of the world in the Meditations. He mentions them earlier but closes the Sixth Meditation remarking that “the necessities of action often oblige us to make a decision before we have had the leisure to examine things so carefully…” The Meditations are a break from the necessities of the world, a vacation, an experiment where Descartes cloisters himself and explores his inner terrain. He asks questions that he finds interesting. He follows paths he believes are truthful. I find finishing the Meditations leaves me with a feeling described by admirers of Thoreau finishing Walden. We’ve joined a powerful observer on a journey outside the realm of the usual. We’ve participated in an experiment which inspires us to ask our own questions and apply the voyager’s curiosity and passion to those questions. We’ve seen in it and taken from it what we find appealing and, likely, useful.

Echoing the practical discussion in this post, I’ve found the Meditations useful for exploring ways to think about artificial intelligence, mental states and how we process information in a quantum world – particularly in light of Hume’s Problem of Induction – as well as for probing the political and social implications of various patterns of argument. Descartes’ Meditations leave me with a contagious nostalgia for their reasoning and arguments.

Hume, Induction Creating Reality and the Value of Descartes’ Limited Conception of the Divine

Hume’s induction problem is not a problem in the quantum world. Linear causality seems to exist only because we observe the chain of events. While it is true we can’t infer causality from one billiard ball hitting another, we may be able to infer causality from the act of observing one billiard ball hitting the other. That is, the causality exists by us observing the event, the interaction of one ball and the other.

Induction itself does not make logical sense as a basis for ascertaining universal laws. Induction is not a valid logical chain. However, the method of induction itself, based on observation, may actually create the universal laws. Or, on a weaker claim, it may form reality into a shape by which we can extract usable information that forms the basis of universal laws. Observation increases the probability amplitude of reality taking a shape by which we can use the information from reality.

Descartes, in the Fifth Meditation, provides another proof of God using a parallel to geometric shapes and the associated mathematical properties of those shapes. Take, for example, a triangle. The properties of a triangle, such as its angles summing to 180 degrees (the sum of two right angles) and its largest angle lying opposite its longest side, are inherent in the triangle. They are true regardless of whether the triangle exists in his mind or in reality. God’s existence, he argues, is inherent in God. He can conceive of God existing as clearly as he conceives of the triangle’s properties being a definitional part of the triangle.

He mentions he can imagine a winged horse. However, existence is not inherent in a winged horse. The fact that he can imagine a winged horse does not prove that it exists. Thus his argument falls back to knowing the existence of God is true because of the clear light of the fact that existence is inherent in the idea of God himself. It is firmly rooted in his conception of God.

This parallels the idea of induction as producing usable information. We can say Descartes saw similarity between the nature of information about a triangle and the nature of information about God. In a proof of God he is looking for information that is similar to the information he can draw about a triangle. While this seems a vast reduction of what God is, it is useful for Descartes. By reducing the information required for a proof of God (metaphorically, breaking the quantum state, an unknowable state of what God is, as it were) he makes it easier to extract what he sees as useful information about God. This may explain his conception of the Divine. It seems smaller than what a more complex and nuanced conception could be. But for Descartes there is value in having a smaller and more tightly defined conception of the Divine. It allows him to create seemingly rational proofs for what could otherwise be an entity not knowable through reason or logic.