Reply to a Question: "Mr. Kant, Where is Space?"
NATHANIEL BOBBITT

Try this one out:

You hold up a sign right in my face, too close for me to read the letters. [Cf. Berkeley's monograph on Vision]

You bend down, and another person holds up a sign of the same size; now I can read that the sign says "Hello world."

That person holds the sign and then throws it a good distance away; the words "Hello World" are now a blur to me.

Another person runs some distance into a field and looks at the sign. She reads the words "Hello World"; we can see nothing but her reading a sign.

All along, while you have been thinking about each of these cases, I have been thinking about these scenes in terms of peripheral vision. These vignettes present some fundamental aspects of cognition and vision.

How many can you list and detail?
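One of these aspects can be made quantitative: whether the words on the sign are legible depends on the visual angle the letters subtend and on whether the eye can focus at that distance. The Python sketch below runs three sign distances through that check; the letter height, acuity threshold, and near point are my own illustrative values, not figures taken from the vignettes.

    import math

    # Illustrative values only: letter size, acuity limit, and near point are
    # assumptions for this sketch, not figures taken from the vignettes.
    LETTER_HEIGHT_M = 0.05        # letter height on the sign, in metres
    ACUITY_LIMIT_ARCMIN = 5.0     # a letter near 20/20 acuity spans about 5 arcminutes
    NEAR_POINT_M = 0.25           # typical closest distance the eye can focus

    def visual_angle_arcmin(size_m, distance_m):
        """Visual angle subtended by an object of the given size at the given distance."""
        return math.degrees(2 * math.atan(size_m / (2 * distance_m))) * 60

    # Three distances echoing the vignettes: in my face, held by a companion, thrown far away.
    for distance in (0.10, 2.0, 60.0):
        angle = visual_angle_arcmin(LETTER_HEIGHT_M, distance)
        in_focus = distance >= NEAR_POINT_M
        resolvable = angle >= ACUITY_LIMIT_ARCMIN
        print(f"{distance:5.2f} m  {angle:7.1f} arcmin  legible={in_focus and resolvable}")

Under these assumed values, the sign held in my face fails the focus check, the thrown sign fails the acuity check, and only the intermediate case passes both, which is roughly what the vignettes describe.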

Toy Worlds

Toy worlds equip experimental inquiry with a readiness to observe and describe the behavior of human beings and natural forces.

  • Archimedes (floating objects in a pool)
  • Newton (pendulum)

Visual literacy leads to an ability to create or model classes of toy worlds. Toy worlds assist in isolating and observing dynamic behaviors and interactive motion. They display physical and human phenomena in terms of kinematics, optical flow, population flow through an architectural structure, and gaze and eye motion.
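As a sketch of what such a toy world looks like in code, here is Newton's pendulum reduced to a single observable behavior. This is a minimal illustration in Python; the length, gravity, time step, and starting angle are assumptions chosen for the example, not values taken from the text.

    import math

    # A toy world for Newton's pendulum: one body, one force, one observable behavior.
    # Length, gravity, time step, and starting angle are illustrative assumptions.
    GRAVITY = 9.81     # m/s^2
    LENGTH = 1.0       # pendulum length in metres
    DT = 0.01          # integration time step in seconds

    def simulate(theta0, steps):
        """Semi-implicit Euler integration of an undamped pendulum."""
        theta, omega = theta0, 0.0
        history = []
        for i in range(steps):
            omega += -(GRAVITY / LENGTH) * math.sin(theta) * DT
            theta += omega * DT
            history.append((i * DT, theta))
        return history

    # Observe and describe the motion: sample the angle every half second.
    history = simulate(theta0=math.radians(20), steps=300)
    for i in range(0, len(history), 50):
        t, theta = history[i]
        print(f"t={t:4.2f} s  angle={math.degrees(theta):6.2f} deg")

The value of the exercise is the isolation: one pair of state variables, one force, and a record of motion that can be observed and described.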

Each toy world carries a representation of an excitation. The key to designing "human tasking interfaces" is the role of excitation, especially with respect to the allocation of sensory resources and the replenishment of excitation. The integration of visual literacy, gross motor behavior, and generic scene descriptions is essential to the discovery of a visual system that can capture, isolate, and shape a mobile agent's performance behavior.

A toy world helps inform inquiries into a "direct interface" between the real world and computer sensing or imaging capabilities. Toy worlds illustrate physical phenomena without the artifact of natural language, and so help us overcome that artifact as we design visual reasoning exchanges.

Today, visual research is supported by several computational techniques. One of these is Qualitative Reasoning, which relies upon a knowledge base to make logical inferences about spatial relationships; a minimal sketch of this kind of inference follows the references below.

  • Stanford University Knowledge Systems Lab
  • Qualitative Reasoning Group, University of Texas Computer Science Dept.
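The following is a minimal sketch of the kind of inference such a system draws: a small knowledge base of spatial facts plus one rule. The facts, relation names, and rule are illustrative only; they do not reproduce the formalisms used by the groups named above.

    # A toy knowledge base of qualitative spatial relations and one inference rule.
    # The facts, relation names, and rule are illustrative; they do not reproduce
    # the formalisms used by the groups named above.
    facts = {
        ("sign", "left_of", "tree"),
        ("tree", "left_of", "house"),
        ("reader", "in_front_of", "sign"),
    }

    def close_transitively(kb, relation):
        """Add every fact entailed by transitivity: a R b and b R c entail a R c."""
        derived = set(kb)
        changed = True
        while changed:
            changed = False
            for (a, r1, b) in list(derived):
                for (b2, r2, c) in list(derived):
                    if r1 == relation and r2 == relation and b == b2:
                        fact = (a, relation, c)
                        if fact not in derived:
                            derived.add(fact)
                            changed = True
        return derived

    kb = close_transitively(facts, "left_of")
    print(("sign", "left_of", "house") in kb)   # True: inferred rather than asserted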

A more direct approach would be based on "Performer Vision," that is, on a human being in the real world. The questions of exploratory vision (navigation, robotics) and performative vision (allocation of visual resources) inform a discussion of:

1. How do we organize spatial awareness in mobility situations or during task realization?

2. How do we use sensory resources and semi-automatic behavior to help us see?
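One way to make the second question concrete is a toy allocation model: a fixed budget of visual attention is split across competing demands, and a semi-automatic (practiced) behavior costs less than a deliberate one. The budget, costs, and task names below are my own illustrative assumptions, not measurements.

    # Toy model of question 2: a fixed attention budget split across visual demands.
    # The budget, costs, and task names are illustrative assumptions, not measurements.
    ATTENTION_BUDGET = 1.0

    # (task, cost when performed deliberately, cost when semi-automatic)
    DEMANDS = [
        ("read the sign",      0.6, 0.6),   # novel text has no practiced shortcut
        ("track walking path", 0.5, 0.2),   # locomotion becomes semi-automatic
        ("monitor periphery",  0.3, 0.1),   # peripheral monitoring habituates
    ]

    def allocate(practiced):
        """Greedy allocation: admit demands in order until the budget is spent."""
        remaining = ATTENTION_BUDGET
        admitted = []
        for name, deliberate_cost, automatic_cost in DEMANDS:
            cost = automatic_cost if practiced else deliberate_cost
            if cost <= remaining:
                admitted.append(name)
                remaining -= cost
        return admitted

    print("deliberate:     ", allocate(practiced=False))
    print("semi-automatic: ", allocate(practiced=True))

Under deliberate costs, reading the sign crowds out path tracking; once locomotion and peripheral monitoring become semi-automatic, all three demands fit within the same budget.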

All of the above leads us to reconsider description as a probing of resources and an allocation of resources. The consideration of optical experience and spatial relationships has been a long passage through the history of ideas, art history, and mathematical treatises. Two quick starting points for others remain Max Jammer's Concepts of Space and Panofsky's monograph on perspective. These works have served as my "stopping point" as I am led to consider performer vision and human decision making based on map-reading.

Still, I plan to address some spatial and plot issues which arise out of:

  • Land Survey
  • Aerial and Topographic Mapping
  • Computational Geometry
  • Remote Sensing and Mapping

My motion-sensing dance project, Casting a Shadow, intends to address performer vision and visual decision making. The development of a Diagrammatic Syntax for schematics is long overdue.

