Casting A Shadow

Scenes Behind Visualization & Simulation:

Motion Sensing, Imaging, & Spatial Reasoning

Nathaniel Bobbitt


Feb 19, 1997

Eric,

 

An overview of this project: I want to do some imaging that looks at instruments in all of their physical manifestations:

 

Instruments as they exist in space and as they move. To do this, several modules need to be developed:

1. motion sensing

2. spatial reasoning

3. imaging (visualization)

4. navigation through an environment

 

Each of these modules will need its own interface to bring out the desired effects. That is the big picture.

 

The smaller picture is how to blend audio and visual materials, or, better, how to organize sound capture in terms of visual imagery and visual accompaniment.

 

The visual imagery will explore the cardinal points of above and below like no one has ever done, except for Hopi sandpainting. This will be closely linked to the site-specific elements of the project, but for now let's just talk about some very basic cases which are non-site-specific.

 

To do so I would like you to start thinking about and looking at how things look when you walk down stairs and look downward. This is used to simulate the transition from one scale size (in an aerial view) to another scale, among other things. Motion impacts vision according to thresholds of resolution. Each major movement leads to a new threshold and a gradual shift, until the 2-D look from above changes into a look from across, that is, an elevation drawing cast in a 3-D perspective. Note that aerial overviews are 2-D; compare this with gnomonic map projections.
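The threshold idea above could be sketched as a simple mapping from viewer height to a discrete resolution band plus a gradual aerial-to-elevation blend. The heights, the number of bands, and the linear blend are all assumptions for illustration, not part of the letter:

```python
# Sketch (all parameters assumed): map descent along a staircase to a blend
# between a 2-D aerial view and a 3-D elevation view, shifting at discrete
# resolution thresholds as the viewer moves downward.

THRESHOLDS = [10.0, 6.0, 3.0, 1.0]  # viewer heights (m) where resolution shifts


def view_blend(height: float) -> tuple[int, float]:
    """Return (threshold band index, aerial-to-elevation blend in [0, 1])."""
    top, bottom = THRESHOLDS[0], THRESHOLDS[-1]
    # clamp and normalize: 1.0 = pure aerial (2-D), 0.0 = pure elevation (3-D)
    h = max(min(height, top), bottom)
    blend = (h - bottom) / (top - bottom)
    # count how many discrete thresholds the viewer has already descended past
    index = sum(1 for t in THRESHOLDS if h < t)
    return index, blend
```

Each major movement downward crosses into a new band (a new `index`), while `blend` supplies the gradual shift between bands.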

 

Other cases will include the parallax view (which is another problem) when doing aerial photo interpretation.

 

Now the case of walking down stairs is really a problem of looking at verticality, not at the y-axis of a Cartesian grid. The vertical orientation provides much of the imagery for this project. We are going to focus on the vertical. Note that the imagery is one thing and the imaging (visualization) is another.

The imaging has more to do with the superimposition of multi-dimensionality in the instrument and the instrumentalist, as well as the exchange of energy (force and sensory) between the instrument and the instrumentalist.

 

Feb 20, 1997

 

Tamas Ungvary:

 

The problem with working on human performance remains:

 

An absence of the modeling and decomposition of INPUT parameters in performance behavior:

 

1. What are the various inventories of controls (which regulate and direct excitation) that help classify motor-excitation exchanges?

a. How shall we diagram such an inventory to create a model world (toy world)?

b. What classes of excitation in the above toy world lead to an organization (syntax or hierarchy) of gestures and performance behavior?

 

2. How do we categorize the control mechanism in an excitation in terms of sensory feedback?

 

Besides looking at control variables in an attack, there is also work to be done on the supply and replenishment of excitation resources (see my paper on Sensory Processing: Excitation & Replenishment). Of extreme importance is how excitation is sustained, as in the case of dynamic excitation (when output is cycled back into the stream of input's excitation).
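The supply/replenishment cycle described above can be sketched as a toy discrete-time loop in which each attack drains a resource pool, a fixed rate replenishes it, and a fraction of the output is cycled back into the input stream. All rates and the saturation limit are assumed values, not taken from the paper:

```python
# Toy sketch (all parameters assumed) of "dynamic excitation":
# a resource pool drained by each attack, replenished at a fixed rate,
# with a fraction of the output fed back into the input stream.

def simulate_excitation(steps, attack=0.3, replenish=0.1, feedback=0.5):
    """Track attack outputs from an excitation resource over discrete steps."""
    resource, history = 1.0, []
    for _ in range(steps):
        output = min(attack, resource)             # the attack draws on the pool
        resource -= output                         # supply is depleted...
        resource += replenish + feedback * output  # ...replenished, plus fed-back output
        resource = min(resource, 1.0)              # the pool saturates
        history.append(output)
    return history
```

With the feedback term present the pool settles into a sustained regime instead of emptying, which is the "sustained excitation" case in one toy form.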

 

These questions lead to a way of framing human performance behavior. From this frame one goes on to look at visual decision making according to the allocation of sensory resources in particular environments. These architectural spaces and spatial domains represent a particular spatial semantics based on obstructed space and access ways.

 

Finally my research will consider tasks as they are:

  • cyclic

  • repetitive

  • sequential

  • sequential-conditional:

    (in which the subject has options based on conditional cues).
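The four task types above could be sketched as small generators over discrete steps; only the last one branches on a conditional cue. The phase names and actions are placeholder assumptions:

```python
# Minimal sketches (action names assumed) of the four task types:
# cyclic, repetitive, sequential, and sequential-conditional.

def cyclic(steps, phases=("inhale", "exhale")):
    # the same phases repeat in a cycle
    return [phases[i % len(phases)] for i in range(steps)]

def repetitive(steps, action="tap"):
    # one action, repeated
    return [action] * steps

def sequential(actions):
    # a fixed order, run once
    return list(actions)

def sequential_conditional(actions, cue):
    """At each step the subject picks among options based on a conditional cue."""
    return [options[cue(i)] for i, options in enumerate(actions)]
```

For example, `sequential_conditional([("left", "right"), ("up", "down")], cue=lambda i: i % 2)` picks one option per step according to the cue.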

     

April 7, 1997

     

Eric,

     

Visual aspects of the motion tracking/visual accompaniment module are non-site-specific, unlike videography, which is based on a projection (projective system) of the visual image. Videography fails to treat the complexity of scene-description mobility:

Consider the change of spatial relations for a lone basketball player weaving through the mobile background/human architecture created by both the opposing team and his/her own team. All of this is organized according to the physical dynamics of approaching the stationary basket (a pyramidal form) with the purpose of scoring.

     

The motion tracking/visual accompaniment module is non-site-specific in terms of:

1. Imaging: [the dancer's visual resources]

2. Invisible Architecture: [a network of sensors triggered by a mobile user (dancer)]

     

Activation of triggers is based on micro-controllers, photoelectric sensors, and sensor cameras (CCD cameras).
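One way to picture the invisible architecture is a grid of photoelectric beams polled each frame, with a trigger firing whenever the mobile user breaks a beam. The layout, the trigger radius, and the polling interface below are all hypothetical, not the actual sensor hardware:

```python
# Hypothetical sketch of the "invisible architecture": photoelectric beam
# sensors polled each frame; a trigger fires when a mobile user (dancer)
# breaks a beam. Layout and trigger radius are assumptions.

class BeamSensor:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def broken_by(self, ux, uy, radius=0.5):
        # the beam counts as "broken" when the user is within radius of it
        return abs(ux - self.x) <= radius and abs(uy - self.y) <= radius

def poll(sensors, user_pos):
    """Return the indices of sensors triggered by the user's position."""
    ux, uy = user_pos
    return [i for i, s in enumerate(sensors) if s.broken_by(ux, uy)]

sensors = [BeamSensor(x, 0.0) for x in range(4)]  # a line of four beams
```

Each frame, the triggered indices would be handed to the imaging module as the dancer's trace through the sensor network.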

     

Interactive interfaces isolate, amplify, and capture the allocation of sensory-cognitive capabilities. The trick is to arrive at a physical playback based on beating or vibrotactile manifestations. The physical playback represents intangible aspects of gross motor, cognitive, and sensory feedback activity.

     

The tasking of the dancer will define parameters for the classification of gestural (gross motoric) movement and the system's interfacing capability. The management of this could use shaping interfaces (iconic, diagrammatic). The use of shaping interfaces allows the public to see the drama of human tasking unfold before their very eyes.

Note: always keep in mind that what the public sees and what the performer (dancer or musician) sees are two different things. The public sees the performer's output. The performer sees (scans and focuses on) the steps en route to allocating resources for the purpose of excitation, that is, attack.

     

A person can speak into a megaphone and the voice is projected. But I am looking at the steps as the person decides to speak into the megaphone, according to the mental and physical energy that goes into directing (aiming) air, supplying it (air flow), and forming intelligible sounds.

Note that this process of engagement and allocation is useful in the application of active vision to the case of continuous aiming with coordinated, sustained sensory feedback.
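Continuous aiming with sustained feedback can be sketched as a minimal control loop: at each step the error between aim and target is sensed, and a fraction of it is corrected. The gain and step count are assumed values for illustration:

```python
# Minimal control-loop sketch (gain assumed) of continuous aiming with
# sustained sensory feedback: sense the error, correct a fraction of it,
# repeat. The aim converges on the target without ever losing feedback.

def aim(target, start=0.0, gain=0.4, steps=10):
    position, trace = start, []
    for _ in range(steps):
        error = target - position   # sensory feedback: perceived error
        position += gain * error    # motor correction proportional to error
        trace.append(position)
    return trace
```

The trace approaches the target smoothly, which is the "continuous" character of the aiming: feedback and correction are never handed off to a one-shot command.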

     

Too often, aspects of human behavior are not included within the human-machine exchange. What I want to capture are exactly those aspects of behavior which fall between the cracks during a human-machine interface.

     

The easy part is all the site-specific imagery, video, etc.

     

The tricky part is how to diagram the capture space within a performance space on a stage (see my notes, which I mailed to you).

     

Phyllis asked:

     

"I still have absolutely no clue how the dancer's movement relates to the sound and images you are projecting within the performance space. I was asking earlier if we react to what we see and hear, using the sounds and images as cues. You have stated this is not the case. You are making movement sound like it is totally independent of the sound and images (which are of a very specific subject matter) you have chosen. Are the other performance elements separate entities surrounding the dancer's vertical-polyrhythmic-doing-two-things-at-once-task-oriented-and-whatever-else movement? What the heck are these other elements doing while we are dancing?"

     

Phyllis:

Reaction is used to refine control by physical playback and to dramatize the engagement of the dancer's sensory resources.

     

