
Programming Morality

Updated: Apr 10, 2019

You might find this article interesting: it concerns you, your peers, and everyone who will be around over the next 15 years and beyond.

First premise:

Humans are not a perfect design. Biologically, we have many limitations, and intellectually, let’s just say the last few years have not been a step forward.

Second premise:

Robotics design is moving beyond human limitation (why should a robot walk when it can move with greater agility using multiple legs and/or other methods of propulsion?). So too for the algorithms we build to mimic intelligence. Today, we are experimenting with neural nets, trying to build structures and algorithms that match human capability. In the future, we’ll see computers design their own structures, which will be far superior to anything within our human abilities.

Third premise:

Cognition and multi-layered thinking are only one part of analysis and decision making. The other is the “human” ability to do what is right and what is best in a given scenario. We rely on our “feelings” (empathy, sympathy, compassion, humor) to guide these decisions. We also rely on our “beliefs” (morality, ethics, values) to provide guardrails that balance (perhaps insulate our decisions from) undesirable human “emotions” (bias, anger, resentment, greed, selfishness, etc.).
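One way to picture "beliefs as guardrails" is as hard constraints that filter candidate actions before any utility-driven choice is made. The sketch below is purely illustrative (all names, actions, and scores are hypothetical), not a claim about how any real system works:

```python
# Illustrative sketch: "beliefs" act as hard guardrails that filter the
# candidate actions before a utility-driven choice, insulating the final
# decision from undesirable biases baked into the scoring.

def choose_action(candidates, utility, guardrails):
    """Pick the highest-utility action that violates no guardrail."""
    permitted = [a for a in candidates if all(ok(a) for ok in guardrails)]
    if not permitted:
        return None  # no morally acceptable option: defer rather than act
    return max(permitted, key=utility)

# Hypothetical example: a delivery robot deciding how to cross a crowded hall.
candidates = ["push_through_crowd", "wait_for_gap", "take_long_route"]
utility = {"push_through_crowd": 0.9, "wait_for_gap": 0.6, "take_long_route": 0.4}.get
guardrails = [lambda a: a != "push_through_crowd"]  # "do not endanger people"

print(choose_action(candidates, utility, guardrails))  # wait_for_gap
```

The design choice here mirrors the text: the guardrails are not weighed against utility, they veto it, which is what separates a "belief" from just another "feeling" in the scoring.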


Given the above progression, it seems reasonable that the evolution of artificial (non-human) intelligence has the potential to surpass human intelligence in every conceivable way. One question that remains is how, and by whom, the algorithms should be built that guide the right behaviors (decisions bounded by what is best and right for a given scenario). Given the limitations of human capability, how should these core algorithms be formed and governed to ensure humans do not corrupt them (intentionally or unintentionally)?
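On the "ensure humans do not corrupt them" question, one minimal, well-known building block is tamper evidence: record a cryptographic fingerprint of the approved rule set and refuse to operate if the rules no longer match it. This sketch (rule text and names are hypothetical, and real governance would need far more, e.g. signatures, audits, and consensus) just shows the mechanism:

```python
# Illustrative sketch of tamper evidence for a core rule set:
# store a SHA-256 fingerprint of the approved rules and refuse to
# load any rule set whose fingerprint no longer matches.
import hashlib

def fingerprint(rules: str) -> str:
    """Return a hex SHA-256 digest of the rule text."""
    return hashlib.sha256(rules.encode("utf-8")).hexdigest()

# Hypothetical approved rule set and its recorded fingerprint.
APPROVED_RULES = "1. Do no harm.\n2. Be honest.\n3. Respect autonomy."
APPROVED_HASH = fingerprint(APPROVED_RULES)

def load_rules(rules: str) -> str:
    """Load the rules only if they match the approved fingerprint."""
    if fingerprint(rules) != APPROVED_HASH:
        raise ValueError("rule set has been altered -- refusing to proceed")
    return rules

load_rules(APPROVED_RULES)                    # passes silently
# load_rules(APPROVED_RULES + "\n4. ...")     # would raise ValueError
```

Note this only detects tampering; deciding who is allowed to legitimately amend the rules is exactly the governance question the paragraph above leaves open.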

Here’s an interesting article, posted on a blog in 2015, that begins to explore this question in a bit more depth.

What is even more interesting to consider is this: if we can program "morality" and can assure that it is "tamper proof", then how and when would we apply this artificial capability to improve on human intelligence (and decision making)?



