Creating TRUST & Understanding Using Digital Body Language

A colleague recently sent me an article from Erica Dhawan (a thought leader on collaboration). I reviewed one of her courses (see link below) to gain a few insights into this emerging capability for improving collaboration and TRUST via online conferencing.

This is complex because TRUST is perceptual, contextual, and episodic in nature. TRUST is composed of experiences and emotions (weighting factors that can bias our experiences).

The interactions we create through visual body language, visual content, the environments (multiple layers), timing of delivery (pauses and moments of silence are powerful), and the use of auditory language can and do shape our consumption and understanding of the content.

These effects are never more evident than when watching how content is conveyed in courtrooms. The level of intonation, the gestures, the carefully planted semantics, and the theater of presentation all serve to sway judges and juries alike. If you have ever read a transcript of a Trump speech, it is barely intelligible. But when you see him deliver it with all of the visual and auditory cues, it is very clear what he is saying directly and indirectly (like any master magician - the term is purposefully chosen).

There’s a lot to unpack with any one of these dimensions of interaction, so adding in technology creates a combinatorial effect on our emotions, our understanding, and how we perceive each episode of content delivery. Our nature as children is to TRUST implicitly, especially in those individuals and things that are familiar and provide comfort. Over time, we learn to MisTrust and grow ever more skeptical as we experience violations of TRUST. Eventually, TRUST is eroded and the relationship may be damaged beyond repair.

We have relationships with people and things (physical and virtual), yet measuring those relationships is still elusive. We typically rely on responses to questions and surveys to assess comfort levels and understanding. These assessment and feedback mechanisms remain very primitive, and we have not even discussed the complexities of developing the sensory acuity to promote understanding.

One of the primary goals of brain researchers is to find ways to modify behaviors using BCI (brain-computer interfaces), but much of the research over the last 40 years has focused on the visual pathways into the brain. What you see is what you believe, and what is reinforced through multiple channels can influence and even override what you see. This is where most of the manipulation has taken place, and it continues to improve with the advent of technology. As a result, many people are running away from technology and no longer placing their TRUST in the institutions and the underlying science that drive most of the change in our ecosystems (economic, social, political, industrial, and environmental).

Erica has provided some interesting research on the conveyance of TRUST through body language and the impact of any one of these variables around modality (channels, tools), persona (ethnographic, psychographic, geographic, demographic, …), and physical presentation (emotions, confidence, clarity, expression). Still… it is a ton of information to absorb and consider, and far more than any one person can possibly model and prepare for in every interaction (online or otherwise).

I especially like slide 19, as it is very close to the approach I used to capture value. We can use it as an exemplar for how to guide investments in technology that optimize digital interactions, improve the ways we convey TRUST, and build a better normal. Ideally, the technology medium would need to build in feedback loops for each participant to help frame and clarify the content and improve understanding. This is a non-trivial exercise and is beyond the scope of any one vendor. What may be worth pursuing is rethinking the design and the manner in which content is contextualized through the digital environment.

Today, we rely on elaborate sets to establish a given environment for a business convention, a conference among a small group of people within a company, a conference between individuals in different companies, and so on. Tomorrow, we may want to create these environments using a component design that augments the digital experience by adapting the visual and auditory environments for each interaction.
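To make the per-participant feedback loop a little more concrete, here is a very simplified sketch in Python. Everything in it is illustrative (the function name, the 0-to-1 clarity ratings, and the 0.6 threshold are my assumptions, not part of any vendor platform or of Erica's course): each participant rates how clearly a content segment landed, and segments that fall below the threshold are flagged for reframing in the next pass.

```python
from statistics import mean

# Hypothetical feedback loop: after each content segment, every participant
# submits a clarity rating between 0.0 (lost me) and 1.0 (crystal clear).
def flag_for_reframing(ratings_by_segment: dict[str, list[float]],
                       threshold: float = 0.6) -> list[str]:
    """Return the segments whose average clarity rating falls below threshold."""
    return [segment for segment, ratings in ratings_by_segment.items()
            if mean(ratings) < threshold]
```

The point of the sketch is the loop itself: the medium collects a lightweight signal from each participant and uses it to decide which parts of the content need to be framed differently, rather than leaving the presenter to guess.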


Let’s imagine that John is talking to three individuals (Tom, Sally, Mike) in a large company about a set of capabilities he is rolling out in his platform. Tom is a visual guy and wants to see and experience these new features. Sally is more introspective and wants to hear John describe the features and consider how they might be useful in the company. Mike is technical and wants to walk through the list of features and parse the text to create his own understanding, so he can map these capabilities to those in the company. Those are three separate experiences that a component design could (ideally) adapt for each persona (assuming we “learned” their preferences using on-demand feedback). Depending on their expressions and auditory cues, the platform would adapt the experience uniquely for each session, and potentially for future sessions. I’m actually describing the next generation EX design platform (EX = EQ + IQ + UX). The experiential design needs to incorporate the adaptability of people with high Emotional Quotients (emotional intelligence - social/emotional) and optimize understanding in the same manner as people with high IQ (intellectual intelligence - cognitive) do.
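As a thought experiment, the persona adaptation in the John/Tom/Sally/Mike scenario might look like the minimal sketch below. The `Persona` class, the weighting scheme, and the feedback-blending rule are all hypothetical illustrations of the idea (not an actual platform API): the platform keeps one learned weight per modality for each participant, nudges it with on-demand feedback, and serves the content variant that best fits.

```python
from dataclasses import dataclass, field
from enum import Enum


class Modality(Enum):
    VISUAL = "visual"      # e.g., Tom: wants to see and experience features
    AUDITORY = "auditory"  # e.g., Sally: wants to hear them described
    TEXTUAL = "textual"    # e.g., Mike: wants to parse the written list


@dataclass
class Persona:
    name: str
    # one learned preference weight per modality, nudged by on-demand feedback
    weights: dict = field(default_factory=lambda: {m: 1.0 for m in Modality})

    def record_feedback(self, modality: Modality, score: float) -> None:
        # blend a new 0..1 feedback score into the stored weight
        self.weights[modality] = 0.7 * self.weights[modality] + 0.3 * score

    def best_modality(self) -> Modality:
        return max(self.weights, key=self.weights.get)


def render_session(persona: Persona, content: dict) -> str:
    """Serve the content variant matching the persona's learned preference."""
    return content[persona.best_modality()]
```

For example, after Tom rates the visual demo highly and the text walkthrough poorly, `render_session` would serve him the visual variant in future sessions — the per-session adaptation the paragraph above describes, reduced to its simplest possible form.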

