Robotics is about the development of autonomous agents capable of functioning within a complex world. While robots exist in a continuous three-dimensional space, the physics of their world is dominated by discontinuous transitions between different phases of matter (primarily the solid and vapour phases in most existing studies). This discontinuity implies that the mathematical domain of robotics encompasses both discrete, combinatorial mathematics and continuous, analytic mathematics. An intelligent robot is one which has a representation of its world in a form which supports the derivation of plans which can be executed to bring about a desired goal state of the world.
The approach that I have adopted consistently in robotics has been an entity-relation paradigm, in which matter is regarded as being partitioned into bodies which are further subdivided into features. States of the world, actual or potential, are characterised in terms of these bodies, their features and relationships between them.
These relationships can include non-contact relationships derived from remote sensing (typically vision) as well as contacting relationships between body features. Much of the geometry is common to contacting and non-contacting relationships: a straight line "seen" in a camera corresponds to a plane in space, and the relationship which that plane bears to the edge of a body is what kinematicians would describe as a "higher pair".
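To make the line-to-plane correspondence concrete, the following sketch (my own illustration; the camera matrix and the particular line are invented) back-projects an image line through a pinhole camera: any space point that projects onto the image line l must lie on the plane P^T l, where P is the 3x4 projection matrix.

```python
# Back-projection of an image line to a space plane (illustrative sketch).
import numpy as np

def backproject_line(P, l):
    """Return the space plane pi (4-vector, pi . X = 0) corresponding to the
    image line l (3-vector, l . x = 0) under projection matrix P (3x4).

    A space point X projects to x = P X; it lies on the line iff
    l . (P X) = 0, i.e. (P^T l) . X = 0, so the plane is simply P^T l."""
    return P.T @ l

# An invented example camera: focal length 500, principal point (320, 240),
# placed at the origin and looking along the z-axis.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # P = K [I | 0]

l = np.array([0.0, 1.0, -240.0])    # the horizontal image line v = 240
print(backproject_line(P, l))       # the plane through the optical centre
                                    # containing that line (here Y = 0)
```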
Within this approach, perception amounts to:
(a) Formulating a correspondence between sense-data and model-features. Such a correspondence forms part of a spatial-relation-graph which also has constraints arising from prior knowledge (e.g. the table is on the floor but exactly -where- may be determined by seeing it).
(b) Solving such a constraint graph. I have myself always used analytic methods to obtain a closed-form solution to such graphs, but recent work in the Vision group here and elsewhere has convinced me that integrating analytic with numerical approaches could be powerful. Typically numerical approaches take a very local view of a problem, and so get stuck in false minima, whereas analytic approaches are more global, but are limited by the capabilities of symbolic computation. A toy closed-form solution is sketched below.
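As an illustration of the analytic route, the sketch below (my own, using sympy; the body, its planar pose and the constraints are all invented) solves a tiny constraint graph in closed form. Prior knowledge supplies two of the constraints, a single sense-datum supplies the third.

```python
# Closed-form solution of a toy spatial-relation graph (illustrative sketch).
import sympy as sp

# Unknown planar pose (x, y, theta) of a table relative to the floor.
x, y, theta = sp.symbols("x y theta", real=True)

constraints = [
    sp.Eq(y, 0),           # prior knowledge: the table rests on the floor...
    sp.Eq(theta, 0),       # ...and is not tilted
    sp.Eq(x**2 + 1, 2),    # sense-datum: a corner 1 unit up is seen sqrt(2) away
]

solutions = sp.solve(constraints, [x, y, theta], dict=True)
print(solutions)           # both closed-form roots: x = -1 and x = +1
```

The symbolic solver reports every root, reflecting the more global character of analytic methods; a local numerical iteration started near one root would report only that root.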
In the case of sensory perception, the pure kinematics of these graphs is complicated by the fact that inaccuracies in the sense-data render the constraint system, which is typically over-constrained, formally unsatisfiable. Here the relaxation approach common to numerical methods offers a ready solution: minimise the residual error rather than demand exact satisfaction of every constraint.
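A corresponding sketch of the numerical relaxation (again illustrative; the sensor positions and readings are invented) fits a single position to three mutually inconsistent range sightings. No exact solution exists, so the sum of squared residuals is minimised instead.

```python
# Relaxation of an over-constrained, noisy constraint system (illustrative sketch).
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 2.0], [3.0, 2.0], [1.5, 4.0]])   # known sensor positions
ranges = np.array([1.45, 1.62, 2.55])                      # noisy measured distances

def residuals(p):
    # One residual per sighting: predicted range minus measured range.
    return np.linalg.norm(sensors - p, axis=1) - ranges

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print(np.round(fit.x, 3),                       # best-fit position
      "residual norm:", round(np.linalg.norm(fit.fun), 3))
```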
Some recent work we have done has been to recast a substantial part of the spatial relationship work in terms of subgroups of the Euclidean group. This approach, focussing as it does on the symmetry properties of body features, represents a useful abstraction in that it supports a unified framework in which discrete symmetry groups can play the same role as the continuous symmetry groups inherent in traditional kinematic formulations.
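The following toy sketch (my own encoding, not the original formulation) shows the kind of bookkeeping this abstraction supports: each feature's rotational symmetry about the mating axis is recorded as either the continuous group SO(2) or a discrete cyclic group C_n, and the ambiguity in the relative orientation of two mated features is the product of the two groups.

```python
# Feature symmetries as rotational subgroups about a common axis (toy sketch).
from math import lcm

SO2 = "SO2"                  # the full continuous rotation group about the axis

def product(g, h):
    """Product of two rotational symmetry subgroups about a common axis."""
    if g == SO2 or h == SO2:
        return SO2           # any continuous factor leaves a continuous freedom
    (_, m), (_, n) = g, h
    return ("C", lcm(m, n))  # C_m and C_n together generate C_lcm(m, n)

print(product(SO2, SO2))            # round peg in round hole: orientation fully free
print(product(("C", 6), ("C", 6)))  # hex key in hex socket: 6 indistinguishable fits
print(product(("C", 4), ("C", 6)))  # 4-fold against 6-fold feature: C_12 ambiguity
```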
Other recent work includes "planning for uncertainty", which involves automatically extending the spatial relationship graphs arising from a nominal plan with perfect bodies to take account of the more complex relationships that can arise as a result of errors in manipulation.
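A schematic sketch of the idea follows (the relation names, thresholds and error model are all invented for illustration): given a nominal relation and bounds on manipulation error, enumerate the additional relationships the extended graph must be prepared to represent.

```python
# Extending a nominal relation with error-induced alternatives (illustrative sketch).
def possible_relations(nominal, lateral_err, angular_err, clearance, chamfer):
    """Return the set of feature relations consistent with the error bounds."""
    relations = {nominal}
    if nominal == "peg-in-hole":
        if lateral_err > clearance:
            # Too far off-centre to enter cleanly: lands on the chamfer,
            # or misses the chamfer entirely and rests on the surrounding face.
            relations.add("peg-on-chamfer" if lateral_err <= clearance + chamfer
                          else "peg-on-face")
        if angular_err > 0.0:
            relations.add("peg-tilted-in-hole")   # two-point contact in the hole
    return relations

# The nominal plan assumes a perfect insertion; the error bounds add the
# relationships a robust plan (or a sensing step) must also cope with.
print(possible_relations("peg-in-hole",
                         lateral_err=0.8, angular_err=0.02,
                         clearance=0.5, chamfer=1.0))
```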