From an AI perspective, I have a rough idea of what intrinsic motivation means to me:
To allow an embodied agent to perform actions within an environment that would generally be considered positive, without defining an objective function.
To break that down: to be embodied in this case is to act, to sense, and to have some internal model that can be adapted, all while operating within an environment that is external to the agent.
An objective function here means some external push towards optimality, one that requires knowledge of the sensors, actuators, environment, etc. A good test for whether you have accidentally baked system knowledge into the agent is to change the rules considerably: if the agent can no longer operate, the knowledge was there.
Whether or not an agent acts positively can itself be measured by an environment-specific objective function. A properly operating intrinsically motivated agent may perform well on some metrics, e.g. a long lifetime, reduced search time, etc.
Why would you want an intrinsically motivated agent? Almost all reward/objective functions are somewhat flawed, even when the problem is simple. I am reminded of a group training a robot to walk fast, measured by speed over time with a cut-off. Simple enough? Well, when they reviewed the trained agents, the agents would immediately fall to the ground so that they would be reset far away. In another test, the agents would purposely break the simulation environment, causing them to glitch and be launched a long distance. One thing to note is that in each of those scenarios the agents optimised for the reward, but made themselves "useless" in doing so.
For AI, I have found Empowerment to be an interesting approach to intrinsic motivation [1]. Essentially, agents choose actions that "keep their options open", and avoid actions that would shrink the set of states their future actions can reach. The environment itself is not encoded into the algorithm, and the states are arbitrary symbols that could be swapped for any others. As a result, you can make large changes to the environment and reuse the same motivation algorithm.
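To make that concrete, here is a minimal sketch of my own (not taken from [1], and heavily simplified): a deterministic grid world with four movement actions. In the deterministic case, n-step empowerment reduces to the log of the number of distinct states reachable by n-step action sequences, so an empowerment-maximising agent simply prefers the action whose successor state leaves the most futures open. The grid layout, action set, and horizon n=3 below are arbitrary choices for illustration.

```python
import math

# A hypothetical deterministic grid world: '#' is a wall, '.' is free space.
GRID = [
    "#######",
    "#.....#",
    "#####.#",
    "#.....#",
    "#######",
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right


def step(state, action):
    """Deterministic transition: move, or stay put when bumping into a wall."""
    r, c = state
    dr, dc = action
    nr, nc = r + dr, c + dc
    return (r, c) if GRID[nr][nc] == "#" else (nr, nc)


def reachable(state, n):
    """Distinct states reachable after exactly n steps."""
    frontier = {state}
    for _ in range(n):
        frontier = {step(s, a) for s in frontier for a in ACTIONS}
    return frontier


def empowerment(state, n):
    """n-step empowerment in bits (deterministic case: log2 of reachable states)."""
    return math.log2(len(reachable(state, n)))


def choose_action(state, n=3):
    """Pick the action whose successor state keeps the most options open."""
    return max(ACTIONS, key=lambda a: empowerment(step(state, a), n))


# From (3, 2) the dead end of the bottom corridor lies to the left; the agent
# prefers moving right, towards the gap that connects to the rest of the map.
print(choose_action((3, 2)))  # -> (0, 1), i.e. "right"
```

Note that nothing in choose_action knows about grids, walls, or coordinates beyond what step returns; swap in a different GRID, or a completely different transition function, and the same motivation rule still applies.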
[1] https://arxiv.org/abs/1310.1863