On Environment
Nathaniel Bobbitt

to my son


Architecture stands at the threshold of an environment (the outdoors). The boundary line between the unprotected (the weather-variant atmosphere) and the overhanging is where architecture and environment converge. Anasazi cliffside dwellings dramatize the transitional border between atmospheric states and the protected state.

Several analogies have allowed me to consider environment as a boundary line between atmospheric conditions and protected habitat.

  • Colonnade at St. Peter's in Rome
  • Cluny Polytope

The Cluny Polytope is an installation by the composer-architect Iannis Xenakis which has helped me see the boundary between natural and human environments.

Environment, that is, the border between outdoors and indoors, is central to developing scene descriptions. A visualization with static environmental and habitat conditions limits a scene description to an agent's mobility. But if our starting point is where environment and habitat converge, there are other mobile (dynamic) factors: rain, wind, and the shifting of sunlight. These cases represent dynamic scenes, and their descriptions have time-variant atmospheric conditions (rain, wind, and the shifting glare of sunlight).

How atmospheric can multimedia be? Or better, how does one exchange atmospheric conditions between near-end & far-end sites in a video-conference transaction?

Xenakis' Music & Architecture includes illustrations of the two types of optical flow he used for the Cluny Polytope:

  • a strobe setup (Concentric Shape)
  • a laser light installation (Star-shaped: convex point set) in which a laser traces light in a box (room)

No illustration is provided here because the really remarkable aspect of this installation, for me, is theoretical rather than visual. In the installation the optical flow of the strobe is a saturated-atmosphere intervention. Although Xenakis is addressing the environment of a room, the intervention does not have much flexibility to:

  • partition the atmospheric manipulation
  • extract segments of the atmospheric components as an extracted scene

The extraction of partial scene information is useful as a criterion for developing interfaces which support greater realism in video-conferencing or distance learning transactions. To extract partial scenes (relevant components) from a video conference, the intervention which captures atmospheric data should not be a saturated environment but should be configured in buffers, as when one stands on the border between the atmospheric and the protected: an umbrella, or the patio which stands as a buffer between the outdoors and the entrance to the indoors.
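The idea of configuring atmospheric capture in buffers rather than saturating the whole environment can be pictured in code. The following is a minimal, hypothetical sketch under my own assumptions (the zone names, buffer size, and reading fields are invented for illustration, not part of the project): each boundary zone keeps a rolling buffer of atmospheric readings, and a partial scene is extracted by selecting only the relevant zones.

```python
# Hypothetical sketch: per-zone buffers of atmospheric readings at the
# border between outdoors and indoors; a partial scene is extracted by
# selecting zones rather than capturing a saturated whole.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class BoundaryBuffer:
    zone: str                                   # e.g. "patio", "threshold" (assumed names)
    readings: deque = field(default_factory=lambda: deque(maxlen=64))

    def push(self, reading: dict) -> None:
        """Append one atmospheric reading (rain, wind, light)."""
        self.readings.append(reading)

def extract_partial_scene(buffers, zones_of_interest):
    """Return only the relevant components (zones) of the captured scene."""
    return {b.zone: list(b.readings) for b in buffers if b.zone in zones_of_interest}

patio = BoundaryBuffer("patio")
threshold = BoundaryBuffer("threshold")
patio.push({"rain": 0.2, "wind": 3.1, "light": 540})
partial = extract_partial_scene([patio, threshold], {"patio"})
```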

My criterion for evaluating an art installation is the transition between the outside and the inside. The selection of Anasazi sites for the New Media project Casting a Shadow is a product of the flow between the atmospheric environment and the architectural habitat. There are other spatial organizations related to the Anasazi which will make Casting a Shadow a startling exploration of space & mobility displays, and of the site-specific and non-site-specific components of this project.

New Media problems include how to convey "site-specific" attributes to a remote viewer. The desire to convey greater realism in video-conferencing and in the imaging of human behavior are further problems of "capture" rather than "rendering." Virtual reality technology renders, while a sensor-based physical computing visual system captures. The challenge in physical computing is somewhere between the integration of site-specific details and the architecture of an invisible (transparent) distribution of sensors. Rather than using sensors only to represent human motion, sensors can be useful to display site-specific details. Accordingly, scene extraction based on a distributed sensor configuration, together with the engagement of sensors by the activity of a mobile agent (human), completes a scene description. This view of the completion of a scene description requires that the scene have a level of activity and that the activity is dynamic in terms of excitation and the involvement of the agent.
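As a rough illustration of that claim, here is a minimal sketch under my own assumptions (the sensor placement, excitation rule, and radius are invented for illustration): the static site-specific details are always available, but the scene description only counts as complete once a mobile agent has excited some of the distributed sensors.

```python
# Hypothetical sketch: distributed sensors display site-specific details,
# and a moving agent's engagement of them completes the scene description.
from dataclasses import dataclass

@dataclass
class Sensor:
    site_detail: str          # the site-specific detail this sensor displays
    position: tuple           # (x, y) placement in the habitat
    excitation: float = 0.0   # raised when an agent moves within range

def engage(sensors, agent_position, radius=1.5):
    """A mobile agent excites any sensor within `radius` of its position."""
    ax, ay = agent_position
    for s in sensors:
        if (s.position[0] - ax) ** 2 + (s.position[1] - ay) ** 2 <= radius ** 2:
            s.excitation += 1.0

def scene_description(sensors):
    """Combine site-specific details with the dynamics the agent contributed."""
    return {
        "site": [s.site_detail for s in sensors],
        "activity": {s.site_detail: s.excitation for s in sensors if s.excitation > 0},
        "complete": any(s.excitation > 0 for s in sensors),
    }

sensors = [Sensor("doorway glare", (0, 0)), Sensor("patio wind", (4, 1))]
engage(sensors, agent_position=(0.5, 0.3))
print(scene_description(sensors))
```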

The extraction of scene data, or the capture of human sensorimotor behavior, reveals spatial, environmental, or architectural data for video-conferencing or distance learning technology. Aided by a sensor installation, computer graphics and multimedia use real-world (physical) interfaces & physical playback (an interface which human motion can refine or amplify) based on a system's capture capability. In this case the multimedia interface is used to moderate and facilitate without rendering. Rendering is secondary in this paradigm because the agent is already in the real world and a computer duplicate is not warranted. I am proposing a real-world interface based on physical & sensorimotor feedback.
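One way to picture "physical playback" is a loop in which captured motion refines or amplifies a playback parameter rather than driving a rendered duplicate. The sketch below is only an assumption about how such a loop might look; the smoothing factor and gain are invented values.

```python
# Hypothetical sketch of physical playback: captured motion amplitude
# refines/amplifies a playback level; nothing is rendered.
def physical_playback(motion_samples, gain=2.0, smoothing=0.9):
    """Fold a stream of captured motion samples into one playback level."""
    level = 0.0
    for sample in motion_samples:
        level = smoothing * level + (1.0 - smoothing) * gain * abs(sample)
    return level

# A burst of motion raises the level; stillness lets it settle.
print(physical_playback([0.1, 0.4, 0.9, 0.2, 0.0]))
```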

It is through task realization behavior in a dynamic environment that one would be able to capture greater levels of detail in human behavior, the intangible aspects of human performance. The capture of task realization behavior requires a habitable space with installed sensors: environmental, atmospheric, and architectural niches. It will be through the habitability of an interactive system that the capture of streams of human behavior in the real world can be pursued. The study of toy world behavior is essential in looking at dynamic site-specific behaviors and central to the imagery being developed for the dance & motion sensing project, Casting a Shadow.
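To make the idea of a habitable, instrumented space slightly more concrete, here is a minimal sketch under my own assumptions (the niche names and event format are illustrative only): behavior during a task is captured as a time-stamped stream, each event tagged by the niche, environmental, atmospheric, or architectural, whose sensor it engaged.

```python
# Hypothetical sketch: a stream of task-realization behavior captured in an
# instrumented habitat, each event tagged by the sensor niche it engaged.
import time

NICHES = ("environmental", "atmospheric", "architectural")

def capture_stream(events):
    """events: iterable of (niche, reading); unknown niches are ignored."""
    stream = []
    for niche, reading in events:
        if niche in NICHES:
            stream.append({"t": time.time(), "niche": niche, "reading": reading})
    return stream

# A reach toward a doorway engages an architectural niche; shifting sunlight
# registers on an atmospheric one.
stream = capture_stream([("architectural", 0.8), ("atmospheric", 0.3)])
```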


