This presentation, delivered in Hong Kong and Shanghai under the auspices of the Alumni network at Manchester Business School, builds on earlier work in progress on neotenic data patterns, where algorithms are emotionally connected to code. For algorithms to evolve as artificial moral agents (AMAs), they would have to be programmed with an ‘act-like’ real people (RP) set of ethical values. Those ethical values can only be processed within a ‘de-self’ coded pattern of memory behavior. The short narrative is a dialectic on robots and ethics; it is available below as a pdf. Real people, you and I, through our isolated digital patterns of behavior, are becoming more self-reliant as humans but more integrated with sufficiently intelligent algorithms (AL. Gorithm).
Related link: http://www.patrickmcnutt.com/news/neoteny-something-abstract-is-thinking/
The concept of self was explored elsewhere:
Related link: http://www.patrickmcnutt.com/publications/bildung-education/
A new book, The Tao of Ethics, is planned: an opportunity to revisit an older project with fresh reasoning in the digital age. An old draft is on record at
Related link: http://www.patrickmcnutt.com/publications/tao-of-ethics/
We are familiar with robots in science fiction, from the sentient computer HAL in 2001: A Space Odyssey to the triptych of ‘replicants, facsimiles of humans & born humans’ in Blade Runner. In the literature on robots and ethics there is critical discussion of whether or not robots can be programmed with ethical values. Artificial moral agents, AMAs, would be programmed to ‘act-like’ a human real person, RP, with an ethical code of conduct. But it is rational for RPs to cheat, act dishonestly, betray and mistrust. The Internet has evolved into an asymmetric game of mistrust played by misinformed RPs.
To possess an ethical compass, algorithms have to become responsible. In other words, AL. Gorithm needs to possess a conscience and wisdom. Towards that end, given that AL is programmed to replicate human behavior and adapt human patterns of behavior in order to ‘act-like’ an RP, a robotic conscience, paradoxically, requires the RP to de-self. We argue that this presents a hurdle. The classic Prisoners’ Dilemma allows us to frame the game design in our search for the meaning of de-self. It is not empathy, not altruism, not co-operation per se, but a more basic ‘zero transaction costs’ co-existence. It is as if the categorical imperative of the Kantian equilibrium (KE) is engineered from the data patterns.
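The contrast between self-interested play and the Kantian equilibrium can be sketched in a few lines. This is a minimal illustration, assuming the textbook Prisoners’ Dilemma payoffs (the numbers are standard illustrative values, not from the text), and modelling the Kantian choice as the strategy a player would pick if everyone played it:

```python
# Prisoners' Dilemma: Nash outcome vs the Kantian (universalized) choice.
# Payoff numbers are illustrative assumptions, not the author's data.

PAYOFFS = {  # (row move, col move) -> (row payoff, col payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
STRATEGIES = ["C", "D"]  # C = co-operate, D = defect

def best_response(opponent):
    """Strategy maximizing my payoff against a fixed opponent move."""
    return max(STRATEGIES, key=lambda s: PAYOFFS[(s, opponent)][0])

def nash_equilibria():
    """Strategy pairs where each move is a best response to the other."""
    return [(r, c) for r in STRATEGIES for c in STRATEGIES
            if r == best_response(c) and c == best_response(r)]

def kantian_equilibrium():
    """Symmetric strategy that does best when universalized."""
    return max(STRATEGIES, key=lambda s: PAYOFFS[(s, s)][0])

print(nash_equilibria())      # [('D', 'D')] -- rational RPs defect
print(kantian_equilibrium())  # 'C' -- the universalized choice
```

The gap between the two outputs is the hurdle the text describes: the ethically coherent choice is not the one that best-response reasoning, programmed into an ‘act-like’ RP agent, would select.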
Convolutional neural networks (CNNs) specialize in processing data; facial recognition software, for example, builds on the CNN architecture. Nonetheless, programmed code, however mathematically robust, will not acquire RP wisdom. Data patterns have no meaning for AL. No meaning, no conscience. Data patterns of neotenic behavior, as measured, programmed and coded into an abstract ‘thinking’ biotechne algorithm, will continue to inform the feedback loop of AL. Gorithm. But until a gesture of human understanding, knowledge, reasoning and wisdom can be ascribed to AL, there is unlikely to be a robust ethical foundation in our robot age.
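The point about pattern without meaning can be made concrete with the core operation a CNN stacks into layers: a convolution. This toy one-dimensional sketch (the signal and kernel are invented for illustration, not a real face detector) fires wherever the data matches the kernel's shape, yet the numbers it produces carry no meaning for the machine:

```python
# A toy 1-D convolution: the pattern-matching primitive of a CNN.
# Signal and kernel values are illustrative assumptions.

def convolve1d(signal, kernel):
    """Valid-mode cross-correlation: slide the kernel, sum the products."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [0, 0, 1, 1, 0, 0, 1, 1, 0]  # a repeating step pattern
kernel = [-1, 1]                      # fires on upward steps

print(convolve1d(signal, kernel))     # [0, 1, 0, -1, 0, 1, 0, -1]
```

The filter detects every edge in the signal, but ‘edge’ is our word for the result; to the algorithm the output is arithmetic, which is the sense in which data patterns have no meaning for AL.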