View this email in your browser
Word on the Future
February 2020 | ~ 2 min read
Welcome to the 17th edition of Word on the Future. Thank you for being here with us.
Keywords: trust; AI; ethics
Deloitte’s annual Tech Trends is one of the most anticipated reports in our industry. You’ll struggle to find more in-depth insight into the dynamics and trends likely to disrupt enterprise in the near future.
In this year’s report, among a number of extremely interesting trends, Deloitte recognize trust as a business-critical goal, achieved with the help of ethical technology.

If you followed earlier editions of Word on the Future, you’ve read about the personalization–privacy paradox, and how enterprises are well-advised to earn authentic customer trust in the form of zero-party data on which to base personalized experiences.

Deloitte not only declare trust “business-critical”, they also provide the data and insights to help us understand why: in their surveys, 55% of respondents from companies with a 10%+ growth rate expressed high concern about ethical considerations. These companies differentiate themselves “in an increasingly complex and overfilled market, […] taking a 360-degree approach to maintain the high level of trust their stakeholders expect”.

How? They hack their own organizational matrix and make it ethics-first – using the same disruptive technologies that could existentially threaten their reputation to “increase transparency, harden security, and boost data privacy”.

Take the Canadian Imperial Bank of Commerce (CIBC) as an example. First, they developed an organization-wide AI strategy at the heart of which they put three questions:
  • “When will we use the technology?”
  • “When will we not use it?”
  • “How do we ensure that we have our client’s permission?”
From there, they built an AI governance process for stakeholders to cover a broad range of ethical considerations before embarking on a project.

They developed advanced analytics that encode client data so it cannot be reverse-engineered to identify an individual.

They designed a “data veracity score” assigned to each piece of information an algorithm might use. The score enables their models to account for data quality and integrity, possible bias, ambiguity, timeliness, and relevance, supporting more reliable, trustworthy, and engaging interactions.

And they did all of this in less than a year.

AI will go rogue if you let it, but it will just as willingly fight bias and carry out your organization’s values and principles if you teach it properly.
Noel Tock
 
Until next time!
 
Noel Tock
Partner and CGO at Human Made
HEADS UP
Altis Marketing Experience
Celebrate with us: we recently published a new page on Altis’ Marketing Experience, a full suite designed to give teams marketing technology they can trust. It helps them engage audiences, manage campaigns, and measure performance, with automated processes that drive efficiency and productivity.
Forward to a friend
LinkedIn
Twitter
Facebook
humanmade.com
February 2020 contributions from: Ana Silva, Camila Villegas, Caspar Hübinger


Copyright © 2019 Human Made Limited. All rights reserved.
Our mailing address is:
81 Dale Road, Matlock, Derbyshire, DE4 3LU

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.