Great shifts in how human beings use technology often create a push for changes
in the way work is divided between humans and machines.
Chemical film has all but disappeared, and almost everybody now takes digital
photos, which they put online for easy sharing with friends and family.
Together with a number of other trends that have contributed to the vast
amount of online and locally stored digital photos, this has made automatic
recognition of people in images an important research topic, despite the
fact that recognition is a task generally left to human beings, who excel at it.
We believe that recognition of the style of a 3D object
is likely to become similarly useful in the foreseeable future.
Optical scanning has made the generation of 3D content far more feasible
than before, and it is easy to envision a digital artist compiling
content for a 3D scene or composite object who needs a method for
searching for an object not just of a specific function but also of a specific style.
The scope broadens further if we look beyond man-made objects. It seems
clear that, say, the various limbs of a specific human being share some commonality
that separates them from those of another person. Thus, one could argue
that an individual represents a style. Style in the context of biological variation
is what we explore in the work presented here. Specifically, we
investigate whether we can define a style class for the teeth of a person.
Unfortunately, style is subtle, and we cannot hope to automatically
extract a description of style from 3D objects. Furthermore, we avoid using
explicit ways of describing style. Recognizing the style of an object based on
textual or otherwise encoded information might be a feasible approach in
some cases, such as recognizing the order to which a given classical Greek column belongs. However, relying on explicit information about a given style
would require us either to solve the above problem of automatically extracting
style information from shapes or to rely on human beings to encode style, a
task that we believe would be both tedious and difficult.
Instead, the work presented here relies on examples. This requires that
we have example (training) objects for each style. It also requires that we have
an orthogonal class of functions since, as we discuss below, the function of the
object (what it is) clearly also has a profound impact on its shape. Thus, our work
can be summed up as example-based classification of digital 3D shapes in both
style and function categories.
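To make the idea of example-based classification concrete, the following is a minimal sketch, assuming each 3D shape has already been reduced to a fixed-length feature vector. The descriptors, the nearest-neighbor rule, and the labels below are illustrative placeholders, not the method developed in this work.

```python
import math

def classify(query, examples):
    """Return the (style, function) labels of the training example
    whose descriptor is closest to the query in Euclidean distance."""
    best_labels, best_dist = None, math.inf
    for descriptor, style, function in examples:
        dist = math.dist(query, descriptor)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_labels, best_dist = (style, function), dist
    return best_labels

# Hypothetical training set: (descriptor, style label, function label).
# Style here stands for the individual, function for the tooth type.
training = [
    ((0.1, 0.9, 0.2), "person_A", "molar"),
    ((0.8, 0.2, 0.7), "person_B", "molar"),
    ((0.2, 0.8, 0.9), "person_A", "incisor"),
]

style, function = classify((0.15, 0.85, 0.25), training)
```

In this toy setup the query descriptor lies closest to the first training example, so both its style and function labels are recovered; the real difficulty, of course, lies in designing descriptors for which style and function separate in this way.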