“Rationality”

Research aka Staring At The Ceiling...

This semester my research relates to some ideas I have about why humans and animals alike are able to learn and generalize so much faster than the current state of the art in AI. The question is: how can we leverage what we learn from nature about “fast learning” to build faster-learning AI?

So how is the above question related to this megashark-sized, hand-wavy, pseudo-philosophical blog post?

Welp…this week I was reading a great paper, Neuroscience-Inspired Artificial Intelligence [1]. It discusses the past, present, and future of biologically inspired AI techniques. In the paper, I came across the following quote...

In the neuroscience literature, one hallmark of transfer learning has been the ability to reason rationality… [1]

Rationality In Stochastic Environments

This made me think about some of the theories that currently govern rational thought and decision making in AI systems. One common approach is maximum expected utility theory [2], in which a utility or reward function defines an intelligent agent’s preferences. The utility function assigns a numeric value to any given state in the agent’s environment, and a rational decision is defined as one that maximizes the agent’s expected utility, i.e., reward.
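To make the idea concrete, here is a minimal sketch of maximum-expected-utility decision making. The states, actions, transition probabilities, and utility values below are all invented for illustration; they are not from the paper or the textbook.

```python
# A toy maximum-expected-utility agent. All numbers are made up.

# utility(state): numeric desirability of each outcome state
utility = {"win": 1.0, "draw": 0.0, "lose": -1.0}

# P(next_state | action): a toy stochastic transition model
transitions = {
    "aggressive": {"win": 0.5, "draw": 0.1, "lose": 0.4},
    "cautious":   {"win": 0.3, "draw": 0.6, "lose": 0.1},
}

def expected_utility(action):
    # Sum over possible outcomes, weighted by their probability.
    return sum(p * utility[s] for s, p in transitions[action].items())

def rational_choice(actions):
    # The "rational" agent always picks the action with the highest
    # expected utility -- nothing else is ever preferred.
    return max(actions, key=expected_utility)

print(rational_choice(["aggressive", "cautious"]))  # -> cautious
```

Here "aggressive" has expected utility 0.5 − 0.4 = 0.1 and "cautious" has 0.3 − 0.1 = 0.2, so the agent deterministically picks "cautious" — which is exactly the rigidity discussed next.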

Though this totally makes sense and is “rational” (pun very much intended), it seems a rigid way to make decisions. It doesn’t leave much room for creativity, improvisation, or even irrationality, which I argue are sometimes needed to make decisions in stochastic environments. For example, perhaps insane situations require irrational behavior to survive, and maybe irrationality in some problem spaces can be seen as being creative or unorthodox...perhaps it can provide an element of surprise, or the ability to derive more creative strategies in a game scenario?
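It's worth noting that reinforcement learning practice already injects a controlled dose of "irrationality" through random exploration. Epsilon-greedy action selection is one standard example: with small probability the agent ignores its utility estimates entirely. This is a hedged sketch, not the mechanism I'm arguing for — the action names and values are illustrative.

```python
import random

def epsilon_greedy(estimated_utility, actions, epsilon=0.1, rng=random):
    # With probability epsilon, act "irrationally": any action at random,
    # regardless of how bad it currently looks.
    if rng.random() < epsilon:
        return rng.choice(actions)
    # Otherwise act "rationally": maximize estimated utility.
    return max(actions, key=estimated_utility)

values = {"orthodox": 0.9, "surprising": 0.2}
action = epsilon_greedy(values.get, list(values), epsilon=0.1)
```

With epsilon = 0 this collapses back to pure utility maximization; raising epsilon buys unpredictability at the cost of expected reward. But note that this is still randomness bolted onto a utility maximizer, not creativity in any deeper sense — which is part of my point.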

Rationality Is Context Specific

In both deterministic and stochastic environments, decisions are locally scoped even when achieving the overall goal remains the primary objective. Rational decisions are therefore context specific, and perhaps even subjective in some circumstances. So shouldn’t an approach like maximum expected utility fall down in truly stochastic environments? How can we use utility functions to make decisions when the context of what is desirable changes from moment to moment?

Nature Inspired Models For Decision Making?

Now you may be saying: okay, but what does this really mean in terms of building better mousetraps, i.e., intelligent agents? This thought exercise makes me think that, in order to build agents that exhibit general intelligence and true autonomy in stochastic (i.e., real-world) environments, we may need new theories of how decisions, and more specifically desirable decisions, are made. Perhaps we need to explore nature-inspired theories that model how humans and animals make decisions.

Despite our limitations and flaws, humans are highly adaptable creatures, and we have the tools to navigate the uber-stochastic environment called life on this planet. Can this ability to adapt and make decisions in unpredictable environments be better understood? And can we use that understanding to give intelligent agents a similar level of adaptability, creativity, and hence flexibility in their decision making?

Call For Further Discussion!

Have you had similar thoughts? Do you know of related research addressing the questions in this post, or do you just want to discuss it further? Feel free to contact me!

References

  1. D. Hassabis, D. Kumaran, C. Summerfield and M. Botvinick, "Neuroscience-Inspired Artificial Intelligence", Neuron, vol. 95, no. 2, pp. 245-258, 2017.
  2. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ: Prentice Hall Press, 2009, ch. 16, pp. 610-615.