Object-Action Complexes (OACs) are proposed as a universal representation enabling efficient planning and execution of purposeful action at all levels of a cognitive architecture. OACs combine the representational and computational efficiency (for purposes of search) of STRIPS rules and the object- and situation-oriented concept of affordance with the logical clarity of the event calculus. An affordance is the relation between a situation, usually including an object of a defined type, and the actions that it allows. While affordances have mostly been analyzed in their purely perceptual aspect, the OAC concept defines them more generally as state transition functions suited to prediction. Such functions can be used for efficient forward-chaining planning, learning, and execution of actions represented simultaneously at multiple levels in an embodied agent architecture.

PACO-PLUS, an Integrated Project funded by the European Commission through its Cognition Unit under the Information Society Technologies priority of the Sixth Framework Programme and launched on 1 February 2006, brings together an interdisciplinary research team to design and build cognitive robots capable of developing perceptual, behavioural and cognitive categories that can be used, communicated and shared with other humans and artificial agents. The project hypothesizes that such understanding can only be attained by embodied agents and requires the simultaneous consideration of perception and action, resting on three foundational assumptions:
Cognition is based on reflective learning: OACs are acquired through a grounded sensing-and-action cycle and then reinterpreted to learn more abstract OACs.
The core measure of effectiveness for all learned cognitive structures is: Do they increase situation reproducibility and/or reduce situational uncertainty in ways that allow the agent to achieve its goals?
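The idea of an OAC as a STRIPS-like rule whose affordance check gates a predictive state-transition function can be sketched as follows. This is a minimal illustration only; the predicates and action names (`reach`, `grasp`, `cup_on_table`, and so on) are invented for the example and do not come from any particular project implementation.

```python
from collections import deque

class OAC:
    """A STRIPS-like rule viewed as a state-transition (prediction) function."""

    def __init__(self, name, preconditions, add, delete):
        self.name = name
        self.preconditions = frozenset(preconditions)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def afforded(self, state):
        # The affordance holds when the situation satisfies the preconditions.
        return self.preconditions <= state

    def predict(self, state):
        # State-transition function: the predicted successor situation.
        return (state - self.delete) | self.add

def forward_plan(start, goal, oacs, max_depth=8):
    """Breadth-first forward chaining over predicted states."""
    start, goal = frozenset(start), frozenset(goal)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, plan = queue.popleft()
        if goal <= state:
            return plan
        if len(plan) >= max_depth:
            continue
        for oac in oacs:
            if oac.afforded(state):
                succ = oac.predict(state)
                if succ not in seen:
                    seen.add(succ)
                    queue.append((succ, plan + [oac.name]))
    return None

# Toy domain: reach for a cup, then grasp it.
oacs = [
    OAC("reach", {"cup_on_table"}, {"hand_at_cup"}, set()),
    OAC("grasp", {"hand_at_cup", "hand_empty"},
        {"holding_cup"}, {"hand_empty", "cup_on_table"}),
]
plan = forward_plan({"cup_on_table", "hand_empty"}, {"holding_cup"}, oacs)
# plan == ["reach", "grasp"]
```

Because each OAC predicts its successor situation, the same structure that drives execution also drives search: planning is simply chaining predictions forward until the goal predicates hold.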
The domain of cognitive robotics tries to recognize manipulation tasks demonstrated by humans and other robots. In its simplest form this amounts to a video-parsing system, but it can be extended with learning capabilities. Before a robot can execute tasks, the environment has to be perceived with robotic sensors. The raw data are converted into machine-readable information, which is then enriched with semantic information. Natural-language grounding means converting the actions in the robot's environment into textual descriptions. Semantic Event Chains and Object-Action Complexes are used to store this information in a database.
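One way to picture a Semantic Event Chain is as a table of pairwise object relations that is sampled only at moments where some relation changes. The sketch below is an assumed simplification with a two-symbol relation alphabet (T = touching, N = not touching) and invented object names; real systems derive these relations from segmented sensor data.

```python
# Hypothetical sketch: compress a stream of per-frame pairwise object
# relations into a Semantic Event Chain by keeping only the keyframes
# where at least one relation changes.

def build_sec(frames):
    """frames: list of dicts mapping object pairs to relation symbols."""
    keyframes = []
    for relations in frames:
        if not keyframes or relations != keyframes[-1]:
            keyframes.append(relations)
    return keyframes

frames = [
    {("hand", "cup"): "N", ("cup", "table"): "T"},
    {("hand", "cup"): "N", ("cup", "table"): "T"},  # no change: skipped
    {("hand", "cup"): "T", ("cup", "table"): "T"},  # hand touches cup
    {("hand", "cup"): "T", ("cup", "table"): "N"},  # cup lifted off table
]
sec = build_sec(frames)
# Each pair's row now reads as a chain of relation transitions,
# e.g. (hand, cup): N -> T -> T
```

Reading one pair's symbols across the keyframes yields a compact, object-identity-independent signature of the manipulation, which is what makes such chains suitable for storing and comparing demonstrated tasks.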