Nature forms various patterns and states, and tremendous reactions are involved in these natural pattern transformations. The primary goal of Transition is to affect this cognition through the presentation of temporal patterns built with procedural logic and geometry.
These images were created with Alias Maya 3D.
Landscapes are drawn from elevation data entered by painting an overhead view or by hand digitizing a topographic map.
Hdw: Apple 3/Inovion PGS 2 Sftw: By artist
Hyper Nebula was inspired by a nature documentary on the behavior of sand crabs. I found the patterns created by thousands of sand crabs feeding over the course of a day to be very beautiful, with interesting emergent properties. I developed a system to simulate these sand crab patterns through the interaction of the crabs seeking food and evading predators. I found that tweaking the system in various ways – such as imposing geometric patterns on the crabs’ starting points or linking traveling crabs and predators with colored lines – created a series of very interesting patterns and images reminiscent of nebulas and galaxies. Hyper Nebulas are a series of images inspired by natural processes occurring at a micro level and abstracted to hint at the macro – cosmology from sand crabs.
Media Used: Created as generative work using Processing.
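What follows is a minimal sketch, written in Python rather than the Processing used for the actual work, of the kind of agent system described above: simulated crabs start on a geometric pattern, step toward the nearest food source and away from the nearest predator, and leave trails whose accumulation produces the emergent imagery. The agent counts, speeds, and seeding pattern are illustrative assumptions, not the artist's parameters.

```python
# Illustrative agent sketch (hypothetical parameters): crabs seek the nearest
# food source and flee the nearest predator; their accumulated trails form the
# emergent patterns described above.
import math
import random

WIDTH, HEIGHT = 800, 800

def toward(x, y, tx, ty, speed):
    """Step of length `speed` from (x, y) toward (tx, ty)."""
    d = math.hypot(tx - x, ty - y) or 1.0
    return (tx - x) / d * speed, (ty - y) / d * speed

random.seed(1)
food = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT)) for _ in range(40)]
predators = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT)) for _ in range(5)]
# Start the crabs on a geometric pattern (a circle), echoing the imposed
# starting patterns mentioned in the statement.
crabs = [(WIDTH / 2 + 300 * math.cos(a), HEIGHT / 2 + 300 * math.sin(a))
         for a in [i * 2 * math.pi / 200 for i in range(200)]]

trails = [[] for _ in crabs]   # one trail of positions per crab
for step in range(500):
    for i, (x, y) in enumerate(crabs):
        fx, fy = min(food, key=lambda f: math.hypot(f[0] - x, f[1] - y))
        px, py = min(predators, key=lambda p: math.hypot(p[0] - x, p[1] - y))
        dx, dy = toward(x, y, fx, fy, 1.5)   # attraction to food
        ex, ey = toward(x, y, px, py, 1.0)   # repulsion from predator
        x = x + dx - ex + random.uniform(-0.5, 0.5)
        y = y + dy - ey + random.uniform(-0.5, 0.5)
        crabs[i] = (x, y)
        trails[i].append((x, y))

print(f"{len(crabs)} crabs, {len(trails[0])} trail points each")
```

The trail points could then be drawn as colored lines between crabs and predators, the device the statement credits for the nebula-like results.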
“Algotecton” (from “algorithm” and “tecton” — carpentry, articulation) is a site-specific generative sculpture inspired by the Weaire-Phelan structure, a mathematical construct that approximates the geometry of foam. “Algotecton” harnesses state-of-the-art computational design-fabrication techniques to give material expression to mathematical concepts, invite discovery, and playfully transform people’s perception of space and form.
Extended Summary:
Algotecton I (from ‘algorithm’ and ‘tecton’ — carpentry, articulation) is a site-specific generative sculpture inspired by the Weaire-Phelan structure, a mathematical construct that approximates the geometry of foam. It comprises sixteen interlocking polyhedra fabricated using advanced parametric modeling and CNC technologies. Evocative of different natural formations — a crystalline structure, a kelp forest, a molecular compound — the sculpture responds to Kendall Buster’s Parabiosis II piece at the Washington Convention Center street level. Algotecton I harnesses state-of-the-art computational design and fabrication techniques to give material expression to mathematical concepts, invite discovery, and playfully transform people’s perception of space and form. Algotecton I extends a tradition of mathematical and design research with roots in early studies in morphology, crystallography, and molecular modeling. In D’Arcy Thompson’s pioneering exploration of form in the natural world, for example, morphology is understood as the expression of dynamic processes involving multiple intertwined biophysical forces which can be represented mathematically.
The modeling of such structures has been the subject of mathematical and geometric investigations for centuries, playing a role in scientific understandings of the world at the atomic, molecular, and architectural scales. With this broader context as a background, we draw inspiration from the Weaire-Phelan structure, a mathematical construct that approximates the geometry of foam bubbles, discovered in 1993 by Denis Weaire and Robert Phelan, two physicists working at Trinity College, Dublin. Using computer simulations, Weaire and Phelan discovered that by minimizing the surface area between the cells this structure could fill space more efficiently than previously thought possible, thus modeling a more ideal ‘foam.’ The structure comprises two kinds of cells of equal volume: an irregular pentagonal dodecahedron and a tetrakaidecahedron with two hexagonal and twelve pentagonal faces. The Weaire-Phelan structure is just the starting point of an investigation into computational aesthetics and tectonics. Algotecton I is based on a system of parametrically-defined modular, interlocking, and flexible components that is capable of creating solid structures without the need for any fasteners, nails, or glue, and is sustained exclusively through friction.
This extends our previous studies of modular mono-material mechanical assemblies, and speaks to a tradition of architectural and artistic investigations into 3-D lattices and space structures. At a technical level, Algotecton explores the potential of state-of-the-art computational design and fabrication technologies, and mechanical assemblies, to enable new kinds of structural efficiency, sustainable production, and on-site assembly and disassembly. At an aesthetic level, it gives abstract mathematical structures a tangible and site-specific expression. More specifically, it creates a striking artificial ecosystem in synergy with Kendall Buster’s excellent Parabiosis II piece located at the street level of the Washington Convention Center. Algotecton comprises sixteen interlocking polyhedra pre-fabricated in wood laminates at the Computational Design Laboratory at Carnegie Mellon University, Pittsburgh, flat-packed, and assembled on site by the artists with hand tools (funding for shipping is already available through an internal grant). In addition, a 2.5-hour ‘generative sculpture’ workshop is proposed, allowing visitors to compose their own ‘generative sculptures’ using smaller cardboard Algotecton ‘kits.’
Two interactive software reconstructions allow gallery visitors to experience two seminal developments in Computer-Aided Design (CAD) history: Steven A. Coons’ “Patch” (1967) and Ivan Sutherland’s “Sketchpad” (1963). Based on archival research, and custom software and hardware design, these interactive systems offer access beyond the visual into sensual, gestural, and interactive aspects of these landmark computational design techniques. Along with the two reconstructions, a selection of rare handwritten notes by original authors Coons and Sutherland are displayed to offer additional context about the origins of CAD.
Hardware: Apple II, Grappler, IDS Microprism Software: D. Cooper
The mathematically defined Hilbert Lindenmayer system is replete with imaginative and unexpected imagery. Viewed like Rorschach drawings, its forms reveal numerous visual scenarios. Added colors and textures begin to uncover forms within each system and invite viewers to discover their own personal interpretations.
The Readers Project was begun in 2009 in response to the question, “How might cellular automata play out a ‘game of life’ – or rather a ‘game of reading’ – on the (complex) surface of a text?” [1] In the best-known form of the game of life [2], the grid on which the cellular automata live and die maps out generations of binary distinctions. (Figure r) This grid and the automata’s behaviors are one and the same. By contrast, a textual grid is inherently complex, bearing all the structure of natural language, despite remaining – as graphic representation – unambiguously two-dimensional and, indeed, both grid-like and cellular [3]. While certain 2D characteristics of visible language may have inspired us to ask our question about reading and cellular automata (CAs), we do not claim any regular or formal relationship between CAs and our expressive natural language processing [4]. In fact, while cellular automata have proven a productive formalism in a range of art contexts [5], there has been surprisingly little experimentation with CAs in the domain of literary art. The Readers Project thus represents an initial foray into this interesting and problematic space.
The Readers Project is an aesthetically oriented system of software entities designed to explore the culture of human reading. These entities, or “readers,” navigate texts according to specific reading strategies based upon linguistic feature analysis and real-time probability models harvested from search engines. As such, they function as autonomous text generators, writing machines that become visible within and beyond the typographic dimension of the texts on which they operate. Thus far the authors have deployed the system in a number of interactive art installations at which audience members can view the aggregate behavior of the readers on a large screen display and also subscribe, via mobile device, to individual reader outputs. As the structures on which these readers operate are culturally and aesthetically implicated, they shed critical light on a range of institutional practices – particularly those of reading and writing – and explore what it means to engage with the literary in digital media.
For the last 20-plus years, pouring my mind onto paper has been a daily necessity. Self-expression is my never-ending passion and a means of therapy. Life is solitary and intense, and thus my art provides me escape, release, comprehension, and strength. It is a safe house where I can express thoughts, emotions, fantasies, and reactions to life experiences without limitation, and thus harness the intensity within me.
My creativity is a tap that cannot be turned off. It’s a gushing, angry, roaring river of ideas and energy. Endless shapes, textures, and constructs that I assemble into surreal, fantastic, weird, and abstract realms. My work defines me and brings order to my world.
In summary, my work is a reaction to my life experiences and to the world around me. Monsters and cartoon characters are my primary subjects, influenced by industrial and urban subject matter, pop culture, conflict, and science fiction. More recently, my work consists of electronic collages that allow me to explore visual qualities of more traditional media such as paint, pastel, and photography. The collages begin with small areas of my hand drawings scaled up very large and overlaid with textures, color, and photographic elements. In 2002, I evolved these collages even further and built them solely upon photography of man-made objects and structures.
Digital art has been my medium of choice since 1993, but my work always begins with my drawings.
The most significant art from this century might be born from “Renaissance teams” where specialists contribute skills toward the creation of work in a collaborative, creative process. – D. Cox
Forms generated by elliptic ovals have fascinated geometers, artists, and astronomers ever since Apollonius, da Vinci, and Kepler. – G. Francis
Conquering this art involves nothing short of PhDs, MFAs and XMPs. – R. Idaszak
I want to make the invisible visible. – D. Sandin
Hdw: Cray/IBM PC Sftw: Fortran/RT1
Hardware: Datamax UV-1 Software: Zgrass-T. DeFanti
Hardware: Datamax UV-1 Software: Zgrass Holograms printed by John Huffman at the Fine Arts Research and Holograph Center
Hardware: Datamax UV-1, Mitsubishi printer Software: Zgrass – T. DeFanti; Assembly Language – R. Lee
Hardware: AT&T Pixel Machine Software: Custom written in the RT/1 and C programming languages
Virtual reality creates environments where the meaning resides primarily in its immersive and interactive qualities. These interactive works engage the viewer in experiences that break the traditional boundaries of art, by actively involving participants in a series of visually compelling environments. They extend the traditional arts by encouraging the viewer to actively participate in the creative process.
In this work, virtual reality becomes accessible to digital artists through the ImmersaDesk, a projection-based, drafting-table-sized virtual reality system. The size and position of the screen give a sufficiently large wide-angle view that the viewer feels fully immersed in the visual scene. Head tracking allows the participant to experience a first-person view as opposed to the third-person view that is experienced with other visual media. The hand position is tracked by the “wand”, the main device with which participants can manipulate the scene. Additionally, the area around the desk is surrounded by a directional sound system.
The physical installation of the ImmersaDesk creates an evocative setting for viewers to experience and participate in the worlds that unfold before them. It merges aesthetic and conceptual concerns with high-resolution display technology, network connectivity, and advanced visualization techniques. Moreover, participants at remote sites have the opportunity to explore the same worlds and interact with each other.
Neither Here Nor There is a series of collaborative events utilizing advanced networking, software and hardware to interconnect the ImmersaDesk environments in The Bridge: SIGGRAPH 96 Art Show, an ImmersaDesk installed in the Digital Bayou, and a CAVE™ at the Ars Electronica Center in Linz, Austria. The title of this collaboration reflects the ethereal status of the cybercommunication in current society. It also characterizes the notion that while on a bridge you are between locations. It is a state of being that is time-based, where geographic location (space) is irrelevant.
These virtual reality environments create a new form of communication that offers a presence not ever experienced in traditional forms of communication. By digitally connecting to other VR platforms, users experience the potential of networked interactivity. The use of interactive applications opens a window into the probable future of high-end telecommunications. The fusion of disciplines is the basis for this unique collaborative effort. In this model, technology and art collaborate to create highly interactive and immersive virtual environments.
A frame from the stereo animation A Volume of 2-Dimensional Julia Sets
This animation (like most computer animations) took up to 30 minutes per frame to render, 54,000 times slower than real time. In the early 1980s (with the exception of space roaches in video games), computer graphics stopped moving in real time. Frame buffers gave us photographic realism, but computers could not move enough bits fast enough to animate in real time.
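The slowdown figure follows from the frame times quoted: roughly 30 minutes (1,800 seconds) of rendering per frame, against real-time playback at 30 frames per second (1/30 second per frame), gives
\[
\frac{1800\ \text{s/frame}}{\tfrac{1}{30}\ \text{s/frame}} = 1800 \times 30 = 54{,}000.
\]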
These images are from a Virtual Reality Installation in the CAVE. In this work, participants interact with a time lapse, 360-degree, 3D panorama based on video images captured on Poverty Island, an island in the archipelago from Death’s Door to the Garden Peninsula in Lake Michigan.
In the late 1980s and through the 1990s, real-time computer graphics and interactivity were back (largely thanks to Silicon Graphics). I now have real-time computation, real-time computer graphics, and real-time interaction combined with a stunning display that surrounds the participant and matches well the two-eyed, two-eared moving human.
The CAVE was developed by the students and faculty (scientists and artists) of the Electronic Visualization Laboratory, the School of Art and Design, and the Department of Electrical Engineering and Computer Science at The University of Illinois at Chicago.
Hardware: Zenith ZW248, Targa M8, joystick Software: RT/1, C
Hardware: Sandin Image processor, Sandin digital colorizer
For the last two years, I have been working in and continuing to develop a new way of creating in virtual reality. CavePainting is a 3D analog to traditional 2D painting. The software, created at Brown University, runs in a four-wall immersive virtual reality system called a “cave.” Creating in CavePainting is a new way of working and thinking. While a painter often steps back from his work or a sculptor steps around his work or even holds it in his hand, a CavePainter stands up and walks through his work, grabs and rotates it in his hand, shrinks or enlarges it on a whim, and finally manipulates color variations and stroke size, shape, and placement to create a visual representation for complex forms.
Many of these operations have no counterpart in the physical world, thus they allow interactions and make possible the creation of a form that would otherwise not exist. For example, paint strokes would not be able to float or co-inhabit the same volume in the physical world. The computer interface in the CavePainting system is composed entirely of physical props.
When I paint, I hold a real paintbrush that has a six degree-of-freedom positional tracker attached to it. To change attributes of the virtual paint strokes I create, I dip the real brush into real buckets that “contain” different types of virtual strokes. These real, physical interactions complement the dramatic 3D virtual form that can be generated with the system. The result is a virtual reality medium that is strikingly immediate, fluid, and responsive to the artist.
The interface and the space of the cave lend themselves to creating fluid, full-body, gestural strokes. In fact, watching a CavePainter at work can be almost like watching a dance performance. As such, I consider the process of creating a CavePainting as much a part of the final result as the finished 3D painting itself.
La Guitarrista Gitana is an interactive virtual environment that combines my desire to show a completed CavePainting work with the desire to illustrate and allow an observer to explore this unique 3D painting process.
Hardware: Silicon Graphics. Software: SOFTIMAGE.
Accidental discovery can lead to a new role: social-change agent. In “Venus Pie Trap,” a pod finds a taste for cherry pie when it misses its intended target: a fly. The other members of the fly trap’s collective mind become curious as this pod asserts its individuality.
Concerned about the complexity of ecological problems – poorly communicated to the public by the mass media – a team of artists aim to present new public space possibilities through mass-participatory augmented reality experiences. Wind over Water provides a full and diverse media experience designed to engage the public with environmental ideas and concepts at varying layers. For SIGGRAPH Asia, Wind over Water will allow a large number of participants to simultaneously explore Hong Kong’s Victoria Harbor and to interact with a 3D computational simulation and narrative in a responsive, geo-locative, markerless AR visual and sonic experience. Wind over Water connects participants’ perspectives on space, memory and imagination with a mass-participatory augmented reality fantasy. Recognizing the importance of multi-level interdisciplinary collaboration through consultation with local experts, Wind over Water development begins with geographical and historical research and soundwalks leading to the identification of sites and development of geo-locative media. Wind over Water is an initiative from a small international collective of artists and researchers from 3 continents: Australasia, Europe and North America; and 3 disciplines: architecture, sonics and mobile geo-reality. Wind over Water is designed to “explore intersections between nature, science, technology and society as we move into an era of both unprecedented ecological threats and trans-disciplinary possibilities.”
An interactive journey through the history of computer graphics adapted to contemporary technology. Users control Ed Catmull’s “A Computer Generated Hand” to explore a digital collage of early computer graphics history. An animated version can be viewed as an alternative to the interactive version. Containing clips from Japan Computer Graphics Lab (1985), Sogitec Showreel (1985), The Bicycle Company (1984), Sketchpad (1963), Martian Magnolia (1984), Put That There (1980), Blit Terminal (1982), Xerox Mockingbird (1980), Shirogumi Sample Reel (1983), Mandala (1983), Pantomation (1977-1979), MAGI Synthavision Demo Reel (1980), Image West Demo Reel (1981), Eurythmy Motion Studies (1985), Wonder Works (1984), Deja Vu (1987), Locomotion Studies – MIT – (Karl Sims) (1987), Mental Images (1987), Intelligent Light (1985), and more.
Wooden Mirror explores the line between analog and digital. The essence of the piece is the notion of inflicting digital order on a material that is as analog as it gets: wood. I was hoping to take the computational power of a computer and video camera, and seamlessly integrate them into the physicality, warmth, and beauty of a wooden mirror.
The piece reflects any object or person in front of it by organizing the wooden pieces. It moves fast enough to create live animation. The simple interaction between the viewer and the piece removes any uncertainty regarding its operation. It is a mirror. The non-reflective surfaces of the wood are able to reflect an image because the computer manipulates them to cast back different amounts of light as they tilt toward or away from the light source. The image reflected in the mirror is a very minimal one. It is, I believe, the least amount of information required to convey a picture (less than an icon on a computer and with no color). It is amazing how little information this is for a computer, and yet how much character it can have (and what an endeavor it is to create it in the physical world).
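A minimal sketch of the mapping such a piece implies, not Rozin's actual control code: the camera frame is averaged down to one brightness value per wooden tile, and each value becomes a tilt angle so that brighter image regions catch more of the light source. The grid size, tilt range, and linear brightness-to-angle curve are assumptions for illustration.

```python
# Hypothetical brightness-to-tilt mapping for a mechanical mirror.
# frame: 2D list of 0-255 grayscale camera pixels.

ROWS, COLS = 30, 28                  # assumed tile grid (illustrative only)
MIN_ANGLE, MAX_ANGLE = -15.0, 15.0   # assumed tilt range in degrees

def downsample(frame, rows=ROWS, cols=COLS):
    """Average camera pixels into one brightness value per wooden tile."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [frame[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

def brightness_to_angle(value):
    """Map 0-255 brightness to a tilt angle: darker tiles turn away from the light."""
    return MIN_ANGLE + (value / 255.0) * (MAX_ANGLE - MIN_ANGLE)

def update_mirror(frame):
    """Return one tilt angle per tile for the current camera frame."""
    return [[brightness_to_angle(v) for v in row] for row in downsample(frame)]
```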
Wooden Mirror produces a distinctive sound when something moves in front of it: the sound of hundreds of tiny motors. The sound is directly connected to the motion of the person in front of the mirror and provides a pleasing secondary feedback to the image.
All the construction of this piece was done by hand, including mechanical connections and wiring. It took 10 months to build the mirror.
In the Line of Sight is a light installation that uses 100 computer-controlled tactical flashlights to project low-resolution video footage of suspicious human motion into the exhibition space. Each flashlight projects a light spot on the wall. All flashlights combined create a ten-by-ten matrix representation of the source footage, featured on a video monitor in an adjacent part of the gallery. In the Line of Sight is an artistic exploration of low-resolution video projection that treats electronic images not as simulations of reality but as objects anchored in physical space.
Daniel Sauter’s works are designed as open frameworks that require an active audience to complete the work. Sauter is interested in creating artworks that evolve over time, anticipating unpredictable and unexpected interactions between the work and the audience. This relationship focuses on unique experiences and engagement with the work. It questions the very nature of authorship and mastery by replacing finished work with open and ongoing processes. Fabian Winkler’s work proposes new practices for looking at familiar objects and spaces around us. Using the expressive and aesthetic potential of new media technologies, he creates critical, surprising, and sometimes humorous interventions. By linking technologies with concepts and vice versa, four different media have become integral to Winkler’s art practice: sound, light, robotics, and moving images. Winkler relates sound to the physical structures and the electronic components of his works and sees light’s potential for abstraction to create new, artificial realities and to transform objects and environments visually and ideologically. Winkler treats robotic and kinetic systems as sculpture, installation, and environment, allowing audiences to experience the
Light Attack is a media artwork, as well as social experiment, performed in public urban spaces. As a car drives through the city, an animated virtual character is projected onto the cityscape, exploring places “to go” and places “not to go,” according to the popular Lonely Planet travel guide.
Light Attack elaborates the concept of the “moving moving” image. The projected moving imagery corresponds to the movement through the space, while the character’s behavior is influenced by the urban context and passers-by. The piece suggests “projection” as an emergent ubiquitous medium, raising questions about property and privacy. How public is public space? How do authorities deal with this question? How is “projection,” as a ubiquitous medium, changing the environment in which we live?
In its first version, premiered in Los Angeles in 2004, Light Attack focused on the ambiguous nature of the city, such as logics of place, neighborhood, environment, landscape, and social context in the stereotyped neighborhoods of Hollywood, Beverly Hills, Santa Monica, Downtown, Watts, and Compton. Performed within the iconic architecture of Florence, Italy, in 2005, the virtual character revealed and absorbed a radically different urban context through its own beam of light, engaging passers-by and architecture in a visual dialogue.
One of the main objectives of Light Attack is to transform the city’s signs and architecture as a “sender” into a “recipient” through mobile projection. By augmenting a virtual character onto the buildings’ facades, Light Attack appropriates the urban context for artistic expression. Hence, the project challenges the concept of the public sphere, individual and commercial interests, privacy, and property.
Light Attack uses a custom mobile projection setup installed in a car to project an animated virtual character onto the cityscape. The setup includes a computer laptop, velocity sensor, power supply, projector, and a video camera to document the piece. The car’s movement through the city determines the virtual character’s behavior and motion patterns, synchronized by a velocity sensor attached to the car wheel and custom computer software. Short pre-recorded video loops are arranged into seamless motion patterns by the computer software, allowing interaction with the architecture and passers-by in real time.
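A minimal sketch, not the project's actual software, of how a wheel-mounted velocity sensor could drive the loop arrangement described above: the interval between sensor pulses yields the car's speed, and the speed selects a pre-recorded loop and a playback-rate multiplier. The wheel circumference, pulse count, and speed thresholds are illustrative assumptions.

```python
# Hypothetical mapping from wheel-sensor pulses to video-loop playback.
import time

WHEEL_CIRCUMFERENCE_M = 1.9   # assumed wheel circumference
PULSES_PER_REV = 4            # assumed sensor pulses per wheel revolution

def speed_from_pulse_interval(dt_seconds):
    """Vehicle speed in m/s from the time between two sensor pulses."""
    if dt_seconds <= 0:
        return 0.0
    return (WHEEL_CIRCUMFERENCE_M / PULSES_PER_REV) / dt_seconds

def choose_loop(speed_ms):
    """Pick a pre-recorded motion loop and a playback-rate multiplier."""
    if speed_ms < 0.5:
        return "standing", 1.0
    if speed_ms < 4.0:
        return "walking", speed_ms / 2.0
    return "running", min(speed_ms / 4.0, 2.5)

# Example: simulate pulses arriving every 0.2 s (about 2.4 m/s with these assumptions).
last = time.monotonic()
for _ in range(3):
    time.sleep(0.2)
    now = time.monotonic()
    loop, rate = choose_loop(speed_from_pulse_interval(now - last))
    print(loop, round(rate, 2))
    last = now
```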
The installation is based on the concept of an electronic town meeting. The viewer can witness and participate in a community discussion on the issues of waste disposal and pollution. The ongoing discussion is stored in the computer as a multimedia document including digital video. The viewer watches a base documentary of interviews with civic leaders. The viewer is in control of the viewing process and may stop the “meeting” and respond by video-recording him or herself into the piece. Since this is an interactive environment, the video clip is not inserted into a linear path, but into the category chosen by the viewer/participant.
In my work I assume the computer to be an essential component of the design process.
Hdw: VAX 11/780 Sftw: Draft
Opticks is a live radio transmission performance between the Earth and the Moon during which images are sent to the Moon and back as radio signals. The project has been realized by visual artist Daniela de Paulis (IT/NL) in collaboration with Jan van Muijlwijk and the CAMRAS radio amateurs association based at Dwingeloo radio telescope (NL). Each live performance is made possible thanks to the collaboration of radio enthusiasts Howard Ling (UK), Bruce Halász (Brazil) and Daniel Gautschi (CH). Opticks uses a technology called Earth-Moon-Earth (EME) or Moon-bounce, developed shortly after WWII by the US military for espionage purposes. EME uses the Moon as a natural reflector for radio signals. After the deployment of artificial satellites in the late 50s, radio amateurs continued using the technique as a means of communication. The ‘noise’ showing in any moon-bounced image is caused by the great distance travelled by the radio signals to the Moon and back (approximately 800,000 kilometers) and by the poor reflective qualities of the Moon’s surface. When the radio signals hit the Moon’s surface, they are scattered in all directions, so that only a small percentage of the original signals is reflected back to Earth.
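The distance quoted above also fixes the delay heard in every moon-bounced transmission: a round trip of roughly 800,000 kilometers at the speed of light takes
\[
t \approx \frac{8.0 \times 10^{8}\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 2.7\ \text{s},
\]
so each signal returns to Earth a little over two and a half seconds after it is sent.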
The title Opticks is inspired by Newton’s discoveries of the light spectrum, reflection and refraction. The colors composing an image – converted into radio signals – are bounced off the Moon (reflected and refracted) by its surface during each live performance of Opticks.
SOMNIUM is a cybernetic installation that provides visitors with the ability to sensorily, cognitively and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our galaxy.
The technology may be interesting, but it’s what you do with it that’s important.
Hardware: NYIT Software: A. R. Smith paint program ca. 1980
Hardware: PDP 11/34, Genisco frame buffer, Dicomed D-48 Software: A.R. Smith 3-Paint
Hardware: Apple Macintosh II, NuVista board, Dunn film recorder, Wacom tablet. Software: Adobe Photoshop.
Hardware: Gould SEL, Celco film recorder Software: MAGI Synthavision
Hardware/Software: Aurora Systems, Inc.
Still, the images as they exist on the CRT are not in themselves satisfying as art pieces, so I further enhance them by translating them into physical mediums ….
Hdw: Dual 6800/Digital Graphics/Dunn Sftw: Custom Paint System
Plays Well with Others is a collaboration in which a team of artists, educators, game designers, programmers, scientists, theorists, and social activists explore the unique dynamics of spherical-globe projection. I brought the group together to engage in a dialog with ARC Science Simulations after seeing the OmniGlobe at ACM SIGGRAPH 2003 Emerging Technologies. Our program for the OmniGlobe, v.1.0 was part of my art exhibition, Daria Dorosh: Plays Well with Others, shown at A.I.R. Gallery in New York City in April 2004. The exhibition gave us the opportunity to see our work projected in the globe for the first time. We created an audio/visual program for the globe that reflected our personal and professional interests. Working individually and collaboratively, some of the shared themes that emerged were culture, identity, location, pattern, re-use, and play. The exploration continues as we test new capabilities provided by ARC Science and continue pushing the boundaries of our experience with the OmniGlobe. The content creators and collaborators are ARC Science, Galen Brandt, Kate Brehm, Clilly Castiglia, Steve DiPaola, Daria Dorosh, Carter Emmart, Ian Epps, Kevin Feeley, Mary Flanagan, Harriet Mayor Fulbright, Lizbeth Goodman, Gayil Nalls, Jeremi Sudol, and Camille Utterback. Viewers can access the work from a touch screen or by using a track ball. The spherical projection choreographs the viewer to be an active participant by choosing a vantage point for seeing the work. The interaction is serious fun, a game of chance and choice. The curved screen changes everything, both for us as content creators and for the viewers. The medium of the OmniGlobe demands that we experience and articulate our work from a different perspective. It raises some interesting questions: How does creating content for a sphere change the creator’s experience of organizing space? Conversely, how does designing for a flat rectilinear pictorial space affect our perception of the world? In Plays Well with Others, we have stepped across the boundaries of our disciplines and professional roles into digital territory where definitions of space, scale, location, shape, authorship, art, design, and communication are, at best, blurred.
The OmniGlobe is a large spherical, rear-projection screen illuminated internally by a digital projector in its supporting base. This spherical computer display can be used interactively for presenting global data in its natural geometry or any content that lends itself to viewing “in the round.” It is a true 360-degree display system.
How It Works
The projector image is focused up from the base onto an internal dispersing mirror high in the screen. This convex element spreads the geometrically corrected image over the surface of the spherical screen, which is then visible on the external surface. ARC Science has received a patent for this display concept. Images for the OmniGlobe are mapped into a special geometry for spherical projection. On a flat monitor, these images appear as a circular “super fisheye” view, with the point that will be at the bottom of the spherical screen at the center and the pixels around the outer edge squeezed toward the screen top.
OmniGlobe Content
Originally conceived for Earth-related themes, spherical projection offers many other interesting visualization possibilities. Unlike conventional projection, whether on flat or concave screens, what is seen depends on one’s position. This provides an opportunity for user interaction, and a personal vantage point within the real 3D space of the globe. To view our surroundings on the surface of a spherical screen is to view the world inside out. It translates surprisingly well. Content for the OmniGlobe can be rectangular image “maps” representing 360 degrees around the globe and 180 degrees from the bottom to the top. ARC software can correctly “wrap” such images on the screen fast enough to produce the illusion of smooth rotation and user track-ball navigation. Alternately, pre-rendered movies can be created using standard 2D or 3D software along with ARC authoring software.
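A minimal sketch of the kind of remapping described above; the exact mapping ARC Science uses is not specified here, so this is only an illustrative assumption: a point given by longitude and latitude on an ordinary rectangular map is placed in the circular “super fisheye” image by using the longitude as the polar angle and the height above the bottom pole as the radius.

```python
# Hypothetical equirectangular -> "super fisheye" mapping for a spherical
# rear-projection display: the bottom of the sphere maps to the circle's
# centre, the top to its rim.
import math

def fisheye_xy(lon_deg, lat_deg, radius=1.0):
    """Map longitude (0-360) and latitude (-90 bottom .. +90 top) to x, y in
    the circular projector image, centre at (0, 0)."""
    r = radius * (lat_deg + 90.0) / 180.0   # bottom pole -> centre, top -> edge
    theta = math.radians(lon_deg)           # angle around the globe
    return r * math.cos(theta), r * math.sin(theta)

print(fisheye_xy(0, -90))   # bottom of the sphere -> (0.0, 0.0)
print(fisheye_xy(90, 90))   # a point on the rim of the fisheye image
```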
“Place for Games” visualizes the world-famous Kizhi site as a high-resolution 3D environment. The goal of this project is to create a virtual prototype of the current state of Kizhi’s wooden architecture. The Kizhi museum preserves a concentration of masterpieces of Russian heritage and is protected under UNESCO’s World Heritage List. It is located on an island in Lake Onega in northern Karelia, Russia. The word “kizhi” translates from Karelian as “a place for games”; in ancient times people gathered here to perform their religious rituals. The visualization reconstructs the original architecture of Kizhi island based on detailed photographs, architectural and geometric measurements, textural data, and video surveys taken during visits to Kizhi, as well as geometric analysis of the surviving structures. The project strives to advance the development of historical restoration in an artistic direction. It is being developed using the latest concepts in real-time graphics, including complex illumination with dynamic irradiance environment mapping, shadow mapping, and complex materials with normal and gloss mapping.
Rutopia 2 is an art project built for the C-wall virtual reality system. It explores the aesthetics of virtual art in relation to traditional Russian folk arts and crafts such as wood sculpture, toys, and the decorative painting styles of Palekh, Khokhloma, and Dymkovo. The work’s aesthetic is based on the generalized outlines, principles of composition, bright colors, and simplified shapes inspired by these styles. The project presents a magic garden with interactive sculptural trees. It was conceived as a virtual environment linked to a matrix of several other unique virtual environments that together create a shared network community. A series of 3D modular sculptural trees, each consisting of dozens of rectangular screens, appear in the main environment and serve as portals to the other linked environments. Animation of these dynamic tiled trees is an attempt to break through the static flatness of the contemporary tiled-display grids, architectural façades, and surfaces into the perpetually changing 3D sculptural forms of the ubiquitous public network. Users can “grow” three trees in the island world by moving within the proximity of each tree. Each tree appears as a rapid sequence of flipping and rotating rectangular screens expanding out and upward. Once all the trees are fully grown, their screens turn into windows, and the island changes from monochrome to color. Each window shows a view of the remote environment connected to it. Just as we can look through a window and see the outside, users can look through each of the screens to see remote worlds consisting of imagery found in traditional Russian fairytales and folk art. When they move their heads completely through one of the virtual screens, they enter the connected environments.
Rutopia 2 was built using Ygdrasil advanced rendering techniques, the Bergen spatialized sound server, OpenGL Performer 3.2, and the CAVE library. It operates on an Intel Linux PC running SUSE 10.0 and connected to an Ascension Flock of Birds tracker. The user is tracked from the stereo glasses and hand Wanda tracker. Participants control the direction of the movement and interaction with the objects by using only a wand interface and no buttons. The windows of the trees were made using the new Ygdrasil node stencilBuffer. This node acts as a mask covering the areas outside the windows so that only the selected window area allows a view to the other world. The storyboard sketches were first hand-painted using gouache and watercolor. They set up the color palette, composition, and virtual space layout and served as reference for development of the scene graph. The 3D models were built using the 3D Paint tool in the Maya software. The details of the decorative ornamentation were painted inside the 3D scene and then exported as models with textures. Other textures were individually painted, scanned, and applied on the 3D objects.
Through the “Panorama Time” project, the artist explores how to reach significant, aesthetically appealing, but most importantly “broken” results by hacking the use of an everyday device: the mobile phone camera. By using the digital camera’s panoramic mode, the artist tries to break the concept of panorama, which might be described as an “unbroken” wide view in front of the viewer.
Das Vegas’s work is grounded within visual, conceptual, and media art paradigms, recently focusing strongly on contemporary art and the technological aspects of new media. He also combines many traditional media art techniques, such as photography, video, and film, into complex art installations and screenings. In the artist’s work, both artistic and media stances are interrelated and stand for open dialogue and critical engagement. Interaction, as both practice and theme, is one of the main fields in his recent artistic processes. The important topic of authorship, which extends from art into other disciplines of the artist’s practice and research, has recently been explored from various perspectives, e.g. from art theory or design methods.
Change is an overarching concept that describes the artistic statement and explorations behind most of the body of work. More broadly, the artist is interested in the current discourse on the art world, and on life in general, through the latest artistic expressions, techniques, and technologies. However, his central focus is the exploration and development of dematerialized art forms and their production, viewed through the perspective of human behavior and experience and taking a stance from phenomenology. Most of the works began by engaging in discourses from a subjective perspective, for either theoretical or practical reasons; e.g. The Migrating Archive commenced from his family’s personal experience. Taking the personal perspective of the artist, the work of art can be seen as an examination of other important societal subjects, and in most cases the focus on the art world reflects upon broader phenomena. By interrelating personal experiences and subjective stances with objective attributes of broader fields, such as societal life and the art world, the artist creates conceptually thrilling artistic experiences.
Media Used: Digital photograph taken by phone camera with panorama mode.
Summary
This image is created from repeated geometric forms, transformed over time. Their positioning, repetition, and the application of feedback create more complex interwoven patterns. Through this process, light patterns and surprising forms revealed themselves by chance. The diffuse textured lights are frozen moments of complex moving structures expressed in static form.
Abstract
A still photo from artificial life, this is a frozen moment from absolute animation. Repeated geometric forms were rotated and transformed over time and complex interwoven abstract patterns emerged. These unexpected forms are born from motion. Key frames were defined and positioned over an extended sequence and the animation was set in motion. When the motion is paused the cellular beauty of individual frames is revealed. Chance plays its part in this phenomenon. Individual frames are determined but the translation between these points is indeterminate. The space between known quantities is where the unexpected patterns and lights emerge.
I am an audio-visual artist who works primarily with electronic sound and video. For the past 30 years, my musical creativity and output have been paralleled by accompanying visuals, initially in live performance settings and more recently in fixed media composition. My visual work is driven by an interest in light, abstract imagery and the behaviour of forms over time. I seek to express visual media in music but also recognise the visual beauty in some of the imagery produced by animation. Abstract computer graphics can be programmed in precise detail, but when working with code and visual materials it is often the case that unexpected patterns reveal themselves. Frequently the figures which emerge from chance outcomes of algorithmic processes are the most intriguing. Please note this artwork can be considered part of a series alongside my other three art gallery submissions.
Roadside Artifacts along Old Route 66 is a new image series I began in the last year. These images grew out of an on-going project to document the present-day reality of Route 66 with a pinhole camera. For several years, I have been using a digital camera to record potential sites and then document the process of making pinhole photographs on large format film. Several of the images I made during the summer of 2006 with the digital camera captured situations that were so transitory that I was not able to make a pinhole image of the site at that time. Many of these images had particularly strong dynamics of light and environment that I felt could best be communicated with a selectively manipulated print. These manipulated images are intended to provide a different subjective interpretation of these sites that will hopefully serve as a useful contrast to the pinhole camera series.
The images in this series were captured with a Canon Digital XT camera at maximum resolution, then cropped to 4 x 5 format at 4500 x 3600 pixel resolution and selectively manipulated using multiple masks, image filters, and color adjustments. In most cases, several different manipulation strategies were applied to copies of each image, and the best solution was chosen for the series portfolio. The images were then printed with an Epson Stylus Pro 3800 on Arches Infinity Textured archival paper.
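A small sketch of the initial crop described above, assuming the Pillow imaging library; the filename, centered placement, and resampling filter are illustrative assumptions (a resampling step is included because the camera's native resolution may differ from the stated 4500 x 3600 working size), and the masking, filtering, and color adjustments described would be done interactively rather than in code.

```python
# Hypothetical crop of a full-resolution capture to a 4 x 5 format
# (5:4 in landscape) at a 4500 x 3600 working resolution.
from PIL import Image

TARGET_W, TARGET_H = 4500, 3600   # working size described in the text

img = Image.open("roadside_capture.jpg")   # illustrative filename
# Crop the largest centred 5:4 window from the capture, then resample to the
# working resolution used for the masking and color-adjustment passes.
crop_w = min(img.width, img.height * TARGET_W // TARGET_H)
crop_h = crop_w * TARGET_H // TARGET_W
left, top = (img.width - crop_w) // 2, (img.height - crop_h) // 2
working = img.crop((left, top, left + crop_w, top + crop_h)).resize(
    (TARGET_W, TARGET_H), Image.LANCZOS)
working.save("roadside_capture_4x5.tif")
```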
For me, art is a release of mind and spirit, a freedom that allows self-discovery. Where the rational and the spiritual mix, there is revelation, and my creativity makes great leaps. The colors and textures of impressionism, the compositions and structures of cubism, and the energy of Pollock’s action paintings, as well as the rhythms of modern music, all influence my work. Music is constant company as I create. When I begin a piece, I have selfish goals. I want to find myself, to understand more of my mind and soul, and to express that discovery in my work. As I connect with my work, my goals expand to include a desire to provoke reaction and thought.
I begin with a photograph or a simple painting for texture and digitally manipulate it to create complex compositions in which texture, color, and space are my primary concerns. Laying down vertical and horizontal guidelines, I use the rectangle marquee to select areas to modify with simple filters. As I work, I will modify the color, brightness, and contrast. This piece was created from a digital photograph taken with an Olympus digital camera and then manipulated on a Macintosh G3 with Photoshop.
Waxweb, based on David Blair’s electronic feature “Wax or the discovery of television among the bees” (85:00, 1991), is a large constructive hypertext that has been converted to MOO-space (Object-Oriented MUDs) as part of the Hypertext Hotel. Tom Myer modified MOO code to make the Hypertext Hotel a suitable environment for full, simple hypertext reading and writing plus the ability to view and add stills, audio, and video by a connection to NCSA Mosaic. These functionalities are embedded in a text-based virtual reality that gives multiple users the capacity for synchronous intercommunication.
About 10 months ago, “Wax or the discovery of television among the bees” was sent out over the MBONE (multicast backbone) of the Internet by Vince Bilotta, which prompted John Markoff of the New York Times to write a story titled “Cult Film is a First over the Internet,” casting the event as a milestone on the way to 500 channels. Unfortunately, at the time, there were really only about 450 sites able to see the “film,” a fact that was a bit strange to point out to the people who wrote asking how they could see “Wax” on the Internet. The article also failed to mention that this was not a broadcast, but a multicast, meaning anyone who could receive could also send audio or video (or text, of course), so that an individual’s reception screen could be filled with little boxes of talkie.
Waxweb is an attempt, within some necessary limits, to re-multicast “Wax” at a bandwidth more appropriate to the current Internet. All users of Waxweb will have access to its densest layer, the constructive hypertext. Users able to run Mosaic will have access to additional levels of functionality, depending on the width of their connection to the net (or their patience). Two thousand still pictures, an audio version of “WAX”, and the complete audio/video content of the film will be made available as hypermedia attachments to the main text, creating the equivalent of an on-line multimedia CD-ROM that multiple users can simultaneously read from and add material to.
Waxweb is a practical and aesthetic experiment in multiple-media, integrated narrative. It is a laboratory for a planned electronic feature investigating how artists can produce multiple-media, integrated narratives out of a single dataset using hybrid tools to affordably create a multitude of hybrid forms to form a single narrative.
Most text tools have collapsed into the integrated text amplifier – the computer – allowing us to do anything we want with words, in any order we want, on the way to composition. At the same time, we have gained the ability to project these functionalities across any distance, allowing us to not only write or read, but to do many hybrid things that are neither exactly one nor the other. This will not only increase the number of hybrid media production forms, but the number of hybrid, multiplexed works that are unitary yet take multiple forms: where a single, variegated chunk of proto-narrative, proto-image, proto-anything data can, and often will, take many different forms, each of which will have the aesthetic tension of being morphologically similar, though in different media.
Audience participation
On-site users are able to read from and write to the film – in essence, to reprocess it. SIGGRAPH 94 participants are offered the following answers to the question, “What do I do?”: Do what you will, be it false backstory, or simple linkages between places with interstice boxes that explain ordinary obsessions. You can make a random structure of odd small stories, or a counter-structure of formal mechanism or anti-story. You can write an essay or anti-essay or faux-essay in linked little boxes. Or you can create new paths that intersect the story in horrible ways. You can learn the MOO software. You can talk to other people.
This input will be edited and published as a hypertext and CD-ROM by Michael Joyce and Larry McCaffery. Participants will be warned of potential republication and will be asked to only read, not write, if they do not agree to duplication. It will not be possible to pay published participants.
An introductory document to the hypertext features of the MOO is available by anonymous ftp from count.cs.brown.edu, in /pub/hypertext/docs.txt.
Growth Rendering Device is a kinetic installation that captures the growth of a pea plant over a 24-hour period. Suspended in a nutrient-rich hydroponic solution, the pea plant’s growth is recorded during the length of the exhibition. Attached to a wall, the plant is connected to a vertical scanner, an ink-jet printer, and a growth light. This system provides everything that is needed to sustain and record the plant’s development. The device produces a rasterized drawing every 24 hours. After each new drawing is produced, the system scrolls the roll of paper approximately four inches to make way for the next drawing cycle to begin. The outcome of this work is not predetermined. As the name suggests, the focus is on growth—a complete feedback system between machine and plant. However, it is possible that what the machine may record is also the decay and demise of the plant. Drawing marked parallels to Gregor Mendel’s work on inheritance in peas, Growth Rendering Device seems to ask whether both the mechanic and the artistic parents will leave their mark on their offspring.
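A minimal sketch of the daily cycle the device performs, with placeholder functions standing in for whatever scanner, printer, and paper-scroll interfaces the actual installation uses; only the four-inch scroll and the 24-hour period come from the description above.

```python
# Hypothetical control loop for a scan / print / scroll cycle every 24 hours.
import time

SCROLL_INCHES = 4               # about four inches of paper per cycle
CYCLE_SECONDS = 24 * 60 * 60    # one drawing every 24 hours

def scan_plant():
    """Placeholder for the vertical scanner: return a small dummy raster."""
    return [[0] * 8 for _ in range(8)]

def print_drawing(image):
    """Placeholder for the ink-jet printer."""
    print(f"printing a {len(image)}x{len(image[0])} rasterized drawing")

def advance_paper(inches):
    """Placeholder for the paper-scroll mechanism."""
    print(f"advancing paper {inches} inches")

def run(cycles, cycle_seconds=CYCLE_SECONDS):
    for _ in range(cycles):
        print_drawing(scan_plant())
        advance_paper(SCROLL_INCHES)
        time.sleep(cycle_seconds)

run(2, cycle_seconds=1)   # shortened cycle for demonstration
```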
David Bowen’s work is concerned with aesthetics that result from interactive, reactive, and generative processes as they relate to intersections between natural and mechanical systems. tele-present wind consists of a field of x/y tilting devices connected to thin, dried plant stalks installed in the gallery, and a dried plant stalk connected to an accelerometer installed outdoors. When the wind blows, it causes the stalk outside to sway. The accelerometer detects this movement, transmitting it in real time to the grouping of devices in the gallery. The stalks in the gallery space move in unison, based on the movement of the wind outside. Bowen says of his work, “I produce devices and situations that are set in motion to create drawings, movements, compositions, sounds, and objects based on their perception of and interaction with the space and time they occupy. The devices I construct often play the roles of both observer and creator, providing limited and mechanical perspectives of dynamic situations and living objects. The work is a result of the combination of a particular event and the residue left after the event.” His work thus offers an imperfect and revealing transposition of data. “In some ways, the devices are attempting, often futilely, to simulate or mimic a natural form, system, or function. When the mechanisms fail to replicate the natural system, the result is a completely unique outcome. It is these unpredictable occurrences that I find most fascinating. These outcomes are a collaboration between the natural form or function, the mechanism, and myself. This combination can be seen as an elaborate and even absurd method of capturing qualified data. I see the data collected in this manner as aesthetic data.”
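A minimal sketch of the kind of sensor-to-gallery link the piece describes, not Bowen's implementation: an outdoor node broadcasts accelerometer samples over UDP and the gallery node applies each sample to every tilting device in unison. The network address, message format, and device.tilt interface are hypothetical.

```python
# Hypothetical telemetry link for a wind-driven kinetic field: an outdoor node
# broadcasts accelerometer samples; the gallery node tilts every device by the
# same amount, so the indoor stalks move in unison with the outdoor one.
import json
import socket

GALLERY_ADDR = ("192.168.0.20", 5005)   # illustrative address and port

def send_reading(ax, ay, sock=None):
    """Outdoor node: transmit one x/y acceleration sample."""
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps({"ax": ax, "ay": ay}).encode(), GALLERY_ADDR)

def run_gallery(devices, port=5005):
    """Gallery node: apply each incoming sample to every tilting device."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(1024)
        sample = json.loads(data)
        for device in devices:
            device.tilt(sample["ax"], sample["ay"])   # hypothetical device interface
```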
Hardware: Cromemco Z-2D, SDI Graphics, Matrix Instrument camera Software: Cook, D. & Wright, W.
www.stopmotionstudies.net
“Stop Motion Studies” is a series of experimental documentaries that chronicle my interaction with subway passengers in cities around the world. The aim of the project is to create an international character study based on the aspects of identity that emerge. It is said that 90 percent of human communication is non-verbal. In these photographs, the body language of the subjects becomes the basic syntax for a series of web-based animations exploring movement, gesture, and algorithmic montage. Many sequences document a person’s reaction to being photographed by a stranger. Some smile, others snarl, still others perform. Some pretend not to notice. Underneath all of this are assumptions and unknowns unique to each situation. The series extends my long-standing interest in narrative and, in particular, looks at the subway as a stage upon which social dynamics and individual behavior are increasingly mediated by digital technology. As one of the most vibrant and egalitarian networks in our cities, subways bring people from a wide range of social and cultural backgrounds into close contact. This process plays a significant role in shaping both the character of a city and our individual identities.
At its heart, the project celebrates what can be accomplished within the file-size constraints presented by current network architectures. Flash MX is used as both sequencer and streaming technology for what might be referred to as “poor-man’s video.” In any case, the experience is rich while being specific to the online environment. As it is interactive and non-hierarchical, the project functions more like a simulation with dramatic overtones than a linear narrative. Users simulate the act of riding in the subway, a transportation network that provides an allegory for the ebb and flow of information that is the traffic of the internet. The target audience is both PC and Mac users with a screen resolution of 800 x 600 or higher and a color depth of at least 8 bits (256 colors). A web browser of version 5.0 or greater is required, either Netscape Navigator or Internet Explorer. The Flash plug-in (version 6.0 or higher) is also necessary. Roughly 80 percent of users will be able to view the project without making any adjustments to their current hardware or software configurations. File sizes will be optimized so that users can access the project over a 56K modem.
Stop Motion Studies is a series of experimental documentaries that chronicle my interaction with subway passengers in cities around the world. Begun in the fall of 2002, the project currently includes 13 installments from countries including Sweden, the United Kingdom, France, the United States, and Japan. It is said that 90% of human communication is non-verbal. In these photographs, the body language of the subjects becomes the basic syntax for a series of animations exploring movement, gesture, and algorithmic montage. Many sequences document a person’s reaction to being photographed by a stranger. Some smile, others snarl, still others perform. Some pretend not to notice. Underneath all of this are assumptions and unknowns unique to each situation. The Stop Motion Studies extend my long-standing interest in narrative and, in particular, look at the subway as a stage upon which social dynamics and individual behavior are increasingly mediated by digital technology. As one of the most vibrant and egalitarian networks in our cities, subways bring people from a wide range of social and cultural backgrounds into close contact with each other. This process plays a significant role in shaping both the character of a city and our individual identities.
Hardware: Litho press, DICOMED D48, DeAnza frame buffer
Hdw: DG MV/10000/E&S PS300/Raster Tech/Dunn 631 Sftw: Clockworks
Hardware: VAXstation II/GPX, VAXstation 2000, VAX 11/750, E&S PS390, Raster Model One/80 Software: The Clockworks
The making of Aoxoamoxoa 7 tells a lot about the evolution of digital imaging over the last 25 years.
All my early computer art was done in big labs with hardware that cost hundreds of thousands of dollars. The machines were scotch-taped together and required a full time team of programmers and managers to run. The software was one-of-a-kind, completely undocumented, and definitely not designed for artists. Trying to print digital artwork accurately was a nightmare.
By contrast, I created Aoxoamoxoa 7 on an Intergraph desktop PC with 3D Studio MAX, an affordable off-the-shelf software program, then printed it on a Hewlett-Packard 2500CP inkjet printer, all in my own studio.
That’s progress.
This image was photographed off a Conrac tube, so it is fairly close to the look of the original frame buffer display.
“Down Stream [Appalachia]” addresses themes of ecological preservation, conservation, and connectedness. The exhibition is composed of reflective, refractive sculptures and underwater video footage, surrounded by fully immersive spatial audio. The interactive audiovisual elements respond to audience presence and proximity, illuminating the precarity of imperiled freshwater species in the Appalachian region.
Down Stream [Appalachia] is an immersive, interactive art installation that addresses themes of ecological preservation, conservation, and connectedness—illuminating the precarity of imperiled freshwater species in the Appalachian region. The exhibition is composed of three reflective, refractive sculptural forms, each made from stacked acrylic and mirrored surfaces, projected on from within to create glowing objects that appear to float in the darkness, surrounded by spatialized sound consisting of music and field recorded audio. Each object contains video footage of a different imperiled animal native to southwest Virginia: The Candy Darter (Etheostoma osburni); the Cumberlandian Combshell (Epioblasma brevidens) and other freshwater mussels; and the Eastern Hellbender Salamander (Cryptobranchus alleganiensis alleganiensis).
Upon entering the installation, the viewer is plunged into near darkness and submerged in swirling sound, with only the glow from the three forms to guide them, acting as beacons in the void. A disorienting loss of sense of place requires the viewer to focus on the glimmering forms, begging them to draw closer to the objects, and subsequently the threatened and endangered animals contained therein. The glowing forms could be likened to pools of water, precious gems, or shards of amber preserving these animals for a hypothetical future where they may not exist. As each object is approached, rippling underwater footage fades away to reveal the animals in their natural habitats, illuminating these rarely seen and imperiled species. Simultaneously, the immersive audio reacts to the presence of the viewer, swelling and unfolding new threads of the composition. The reactions are compounded as more people gather, rising to a nearly smothering blanket of sound.
Hardware: VAX 11/780, Genisco frame buffer, Dicomed film plotter
This animation was a happy accident. I was interested in seeing what kinds of shapes could be generated by simple L-systems, so I coded a Python/PyQt application to iterate the replacement rule and draw the resulting string using the typical turtle geometry interpretation. I quickly saw that changing the angle parameter about a degree could produce very different shapes, and that haphazard exploration of the angle parameter might miss interesting shapes. So I added an animation feature to vary the angle systematically and save out the resulting frames. By trial and error, I settled on .01 degree as an increment that produced a small but noticeable change in the picture. I rendered the 18,000 frames into a movie for review and was surprised to find that at 30 frames per second the animation was interesting and fun to watch for 10 minutes, and that the eye could easily pick out single-frame shapes that differed substantially from the neighboring frames.
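For readers curious about the mechanics, the sketch below illustrates the general approach described above: expand a simple L-system rule, interpret the resulting string as turtle geometry, and sweep the turn angle in 0.01-degree increments so that each angle value yields one frame. The rule set, iteration depth, and frame count are illustrative assumptions, not the artist's actual code.

    # Illustrative sketch only: a simple L-system expanded and interpreted as
    # turtle geometry, with the turn angle swept in 0.01-degree steps.
    import math

    def expand(axiom, rules, iterations):
        """Apply the replacement rule to the axiom a fixed number of times."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(c, c) for c in s)
        return s

    def turtle_segments(commands, angle_deg, step=1.0):
        """F = draw forward, + = turn left, - = turn right, [ ] = push/pop."""
        x, y, heading = 0.0, 0.0, 0.0
        stack, segments = [], []
        for c in commands:
            if c == "F":
                nx = x + step * math.cos(math.radians(heading))
                ny = y + step * math.sin(math.radians(heading))
                segments.append(((x, y), (nx, ny)))
                x, y = nx, ny
            elif c == "+":
                heading += angle_deg
            elif c == "-":
                heading -= angle_deg
            elif c == "[":
                stack.append((x, y, heading))
            elif c == "]":
                x, y, heading = stack.pop()
        return segments

    rules = {"F": "F+F--F+F"}      # Koch-like rule, purely for illustration
    commands = expand("F", rules, 4)

    for i in range(3000):          # 3,000 frames here; the finished piece used 18,000
        angle = 60.0 + i * 0.01    # sweep the angle in 0.01-degree increments
        segments = turtle_segments(commands, angle)
        # ... render `segments` to an image file here (e.g. via PyQt or matplotlib) ...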
From a series called Architech, 3-D Pool is a result of playing with my digital print-outs to give the experience of moving through an underwater world. After the backdrop and the initial triangular tower were created, I had a dream to make a cylindrical pool. The blue dots on the reclining sunbather reference John Baldessari, with my own addition of the blue triangle shape. Partly inspired by the profusion of architectural renderings since the destruction of the World Trade Center, I was driven to create new forms based on more feminine shapes than the usual masculine environments. The result is a digital vision of a world where grace and beauty can be celebrated unapologetically.
I grew up with Crayolas as one of my first art supplies. Many people are aware that the color known as “Flesh” since 1949 was voluntarily renamed “Peach” in 1962, partially as a response to the US Civil Rights Movement (see www.crayola.com). Having photographed over 64 models in the past 10 years for use in my artwork, I consider each of them to be a unique part of my palette. As represented here, one of the colors gets to describe the concept of flesh by coming to life, free to create her own steps through the world.
With digital media, due to its sheer range and flexibility, there is sometimes an overuse of color. But I believe it allows for the possibility of fine tuning one’s vision as opposed to just utilizing the available palette. Having (if anything) only a numeric value, digital hues need not be loaded with politically divisive names, and they can exist as “pure color.”
My artwork explores the aesthetic possibilities of pure mathematical equations. I am specifically seeking out forms that are organic enough to challenge any viewer’s notions of what mathematics can visually represent. The vehicle for this exploration is interactive artificial evolution, a computational analogy to natural selection, which allows an artist to literally grow complex and beautiful images using equations as DNA.
I wrote the software used to create these works myself, and it has been slowly evolving for over 10 years. In some ways, I consider the software part of the art itself. Balancing simplicity of use with complete controllability is one of my goals, as is the ability for the evolution process to give the artist an instinctive and purely visual sense of the underlying equations without the need to understand them deeply at the mathematical level; to know what they do without knowing what they are. I share some of this instinct with the viewer through the simple coloring scheme I use, which is typically composed of four colors: black, white, a reddish warm tone, and a bluish cool tone. Black represents zero, white infinity, warm represents positive values and cool negative. Knowing only how the equations produce color gives the viewer an immediate visual sense of the mathematical structure of these images.
These works are plots of mathematical equations that were evolved artificially through an artist-driven mutation, reproduction, and selection process. Initially, very simple equations are mutated randomly to produce a population of new equations. The artist selectively chooses the most interesting or aesthetic images out of this population, and the chosen ones are cross-bred and randomly mutated to produce the next generation. This process often repeats for hundreds of generations before artistically viable images are achieved.
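As a rough illustration of the selection loop described above (and of the sign-based coloring mentioned earlier), the following sketch evolves random expression trees over the image coordinates. The representation, operators, and mutation rates are assumptions made for clarity, not the artist's actual software.

    # Illustrative sketch of an artist-driven evolution loop: random expression
    # trees are mutated and cross-bred, with the artist's picks standing in as
    # the selection step. Not the artist's actual system.
    import math
    import random

    OPS = {
        "add": lambda a, b: a + b,
        "mul": lambda a, b: a * b,
        "sin": lambda a, b: math.sin(a),
    }

    def random_expr(depth=3):
        """Build a random expression tree over the image coordinates x and y."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", "y", random.uniform(-1.0, 1.0)])
        op = random.choice(list(OPS))
        return (op, random_expr(depth - 1), random_expr(depth - 1))

    def mutate(expr, rate=0.15):
        """Occasionally replace a subtree; otherwise recurse into the children."""
        if random.random() < rate:
            return random_expr(depth=2)
        if isinstance(expr, tuple):
            op, a, b = expr
            return (op, mutate(a, rate), mutate(b, rate))
        return expr

    def crossover(a, b):
        """Graft material from one parent into the other at a random point."""
        if isinstance(a, tuple) and random.random() < 0.5:
            op, left, right = a
            return (op, crossover(left, b), right)
        return b if random.random() < 0.5 else a

    def evaluate(expr, x, y):
        """Evaluate a tree at a pixel; sign and magnitude would drive the coloring."""
        if expr == "x":
            return x
        if expr == "y":
            return y
        if isinstance(expr, float):
            return expr
        op, a, b = expr
        return OPS[op](evaluate(a, x, y), evaluate(b, x, y))

    # One generation: images are rendered, the artist picks favourites, and the
    # picks are cross-bred and mutated to seed the next population.
    population = [random_expr() for _ in range(16)]
    chosen = population[:4]  # stand-in for the artist's aesthetic selection
    population = [
        mutate(crossover(random.choice(chosen), random.choice(chosen)))
        for _ in range(16)
    ]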
A linear collage of frames from the digital film Red Sphere Light Room.
A synthetic space is depicted with ladders that are not confined to the controls of real world gravity.
The Immutability of Transformation
In “Leaves of Grass,” Walt Whitman wrote: “The law of promotion and transformation cannot be eluded.” This idea is explored in David Hylton’s haunting digital paintings. His work challenges viewers to consider how they may instigate, react to, or remain oblivious to the inexorable effects of change and transformation upon their existence.
Hylton depicts transformation with surreal imagery that vibrates with dramatic, deeply saturated hues. His work captures and heightens the tensions born of the process of transformation. Some of his paintings have a mystical quality that suggests a divine and positive influence upon transformation. Others present the horror of expected, uncontrollable evolution.
Siren’s Call is a portrait of a woman whose alluring call is suggested by her haunting expression. She is both cool and warm, and is composed of organic elements. Unlike her mythical forebears, this enigmatic siren inspires new discoveries and exploration.
Hylton’s work evokes the isolation of modern men and women whose struggle to make sense of daily inundation with information blinds them to the deeper forces at work in their lives.
Hardware: IBM 3081-D, VAX 11/750, Matrix Instruments QCR, Liacom Display Generator Software: Design-interactive scene simulation and animation software; BUSMS-solids modeling system
A Small Fee (2009) uses sentences taken from the script of the movie West Side Story (USA, 1961). By reducing them to a quick-paced graphic “dialogue”, the sentences become extremely current and seem to refer to present political and ideological conflicts. The verbal conflict occurs between two distinct groups, natives and immigrants, touching on issues of racism and the desire for acceptance vs. the will to exclude. The sentences are bright, colorful, fast-paced, and formally “scream” at the viewer. Soon it becomes clear that the object of the conflict is an America that reveals its political failure and has not fulfilled its promise of wealth and happiness.
Software: Adobe Photoshop, Final Cut Pro.
Three Views of the US Senate (2003) is a political data visualization that uses dynamic typography to tell a story about the makeup of the United States Senate prior to the then-upcoming 2004 elections. In particular, senators are presented by party, by year of next election, and by state. The story that emerged from the data was an open-ended one: could the Democrats retake the Senate? The project uses data scraped from Project Vote Smart, an online political database, by a robot written in Python. The visualization itself was designed in Adobe Illustrator and implemented in Processing. A Java-enabled web browser is required to view this piece.
Hardware: Tektronix 4014, Xynetics plotter
Hardware: Datamax UV-1 Software: Trans Package Fabrication: Schmidt Iron
Maquette and Installation photographs.
Hardware: Tektronix 4014, Xynetics plotter.
Hardware: Ikonas, Versatec Software: Paint Program
Hardware: Apple II, processors & camera – D. Rokeby Software: D. Rokeby
Time Away is an abstract representation of the psychological change that occurs during creation of a work of art. The change begins slowly, while the artist’s mind is distracted by everyday thoughts. As the artist continues to work on the piece, thoughts begin to dissipate, and the mind begins to free itself. Instinctual creative processes begin to take over.
The audio was created using traditional and electro-acoustic audio techniques. All sampled sound originated from the violin via traditional bow techniques as well as experimental methods such as banging on, scratching, and scraping. The sound files were digitally manipulated and arranged to create a composition.
Hardware and Software
Adobe Photoshop, Adobe After Effects, DSP Quatro, Logic Pro.
Lacking strength, beauty hates understanding for asking of her what it cannot do. But the life of spirit is not the life that shrinks from death and keeps itself untouched by devastation, but rather the life that endures it and maintains itself in it. It wins truth only when, in utter dismemberment, it finds itself. This tarrying with the negative is the magical power that converts it into being.
Cyberspace, particularly amongst teenage girls, has become the 21st-century bully’s playground. The opportunity for anonymity has escalated the mean and hateful role-playing that teens would normally do offline.
The characters I create appear to be dredged up from the darker recesses of the subconscious. However, I try to render them from the point of view of compassion. They consist of composites of human parts found and made, my own photography, constructed sets, drawing, bits from my memory and an eclectic collection of ephemera. I have invented subtle character “types” that have characteristics that we all might vividly remember: the domineering leader, the charming bad boy, the sensitive androgynous target, the internally tortured bully, and so on. These figures are swimming in that grey realm between loss of innocence and coming of age. They also, on a secondary level, resonate with the assortment of avatars in contemporary video games, which are becoming more and more realistic as technology progresses.
By splicing bits of fiction together, I encourage story telling and trigger the viewer’s imagination. The sleek compositing effects of the computer, where real and unreal are seamlessly blended, act as a metaphor for the complex ambiguities surrounding our choices, particularly in this new digital age, where new strains have been put on the human psyche. Entities that are created through the culture of the computer are taking on a whole new meaning as “real” and “imaginary” step onto the same plane.
Davida Kidd uses her own photography, which is then taken into Photoshop. Using a minimum number of effects and relying on mostly drawing “photographically,” she then seamlessly blends several components from various images into one. She asks the viewer to question what has been created for the camera, what has been created to be scanned, and what has been created within the software itself. Initially, various components of the image are documented from different and very disjunct documentary points of view: camera lens, scanner bed, human eye. The images result in a metaphor for how the brain works. Images that we see and images that we remember are not distinguished as different. What is real? What isn’t?
In my most recent work, concrete walls in a small room are covered with dark elements, warnings of a world of transgression, of suppressed violence and sexual ambiguity, aggression and timidity, anxiety and exuberance, resistance and control, playfulness, and ironic humour. Historic and contemporary references jostle against each other. The sense of unease in the room is directly related to the unease of the world in which we live, where unpredictable violence is never far away, particularly in this new digital age, where new strains have been put on the human psyche. My work combines elaborate staging followed by editing in Photoshop. I create personality “types” that subtly explore the fragility and ferocity of the contemporary human condition. For this piece, I staged friends, neighbors, and acquaintances in the room, and collaboratively allowed them to respond to the environment. The characters are particular to the urban environment of the west coast of British Columbia, where I live.
Over the span of two years, a concrete room was painted with images “that merge historic and contemporary references with the detritus of Shangri-La: an uncanny mélange of dolls’ houses, comic strips, fairy tales, and frightening toys.” The texts and images that fill the studio walls were collected, painted, or drawn and meticulously arranged and manipulated using digital scans and traditional collage methods, and then writ large in water-soluble paint. People were staged in the room and photographed with a medium-format camera. Transparencies were scanned at a high resolution, and Photoshop was used to make additions, deletions, and subtle scale changes. Many times, more than one negative was used in the making of an image.
Hardware: Evans and Sutherland ESV 3+. Software: Custom.
Produced: Teletronics Hardware: Quantel DPE 5000 with real time image processing system and additional “Dimension” frame store, GVG-300 Switcher, VIA/Video Computer Painting System Software: Quantel V4 Operating system with enhanced BBC Teletrack Video: Dean Winkler, John Sanborn, Kit Fitzgerald Music: Adrian Belew
Hardware: Via Video Computer Painting System, GVG-300 Video Switcher, Quantel DPE-5000 with real time image processing system and additional “Dimension” frame store Software: Teletronics V12 operating system ver L2.3 (written by Robert L. Lund)
Produced: Teletronics
Hardware: MCI/Quantel DPB7000, Grass Valley GVG-300, Ampex Digital Optics real-time image processors, Quantel DPE5000+ with frame store, Teletronics VI Square Communications control system. Software: Quantel VER. 3.2, Ampex VER. 4.2, Quantel VER. 4 with enhanced BBC Teletrack, Teletronics VI Square Operating System VER. 12.3 (R. Lund).
Hardware: D. Winkler custom computer, Grass Valley GVG 300 switcher, Digital framestore
Ribbon-like characters search for identity in their cut-out environments of hedges, people, and houses. Lost Ground is a modern love story for the nineties.
Hardware: SGI Personal IRIS Software: Alias
The computer as an illustration tool allows me a higher degree of design freedom, direct contact with a vast array of colors as well as a higher degree of variety in my assigned projects. I use photographs and pencil sketches as predesign tools.
Hdw: Genigraphics 100V/Renton WA Sftw: Genigraphics
Hardware/Software: Cygnus I digitizer, Terminet 200 printer
Hardware: Cygnus I Video Digitizer, Terminet 200 Software: System
Hardware: VAX 750, Ramtek 9200 Software: WCU Paint-G. Walker
Hardware: AT&T 6300+, Targa M8, trackball Software: RT/1, C
Hardware: Perkin-Elmer 3230, Ramtek 9300, Matrix camera Software: by the artist
Hardware: Ridge 3200, Raster Tech frame buffer Software: P.D.I.
Hdw: Ridge 32/Raster Tech F B Sftw: In-house
Digital media content and the internet are having a profound impact on our society, especially the identities of entire groups of people. The connectedness, the ability to edit, and the storage capabilities of digital media are, in a sense, helping to recall and even shape cultural memories. These memories, when shared, provide political and ethnic understanding in new ways that reach different audiences. This series of work is one such celebration of a people. These are Some Jews that Hitler Did Not Get: American Jews and the Survival of a People celebrates life and hope using one representational horrific event. The Holocaust is recent enough to be part of a contemporary shared experience, one that can capture the past and digitally fuse it with the present, thus implying hope for a future. By celebrating the survival of American Jews, we remember all the times through history that we were threatened yet were not destroyed. It gives us strength and anticipation for generations to come. It reminds us to respect the survival of all peoples and the importance of their identities. Digital technologies will enable these memories to move forward in time so we may always have a sense of tikvah, or hope.
These works were constructed digitally by fusing imagery and symbols from the past with imagery from the present. Many of the pictures from the past came from online digital storage banks. All of the works were constructed from numerous sources, including visible digital collage and seamless digital montages of photographic forms. The works have various elements of mixed media, from custom substrates and hand collage to coated hand-made papers, that together provide layers of information to the overall story.
Hardware: Amiga Software: DigiPaint by Newtek
These associative objects are derived from similarities in form and function of everyday items, as well as wordplay. Retro components, perhaps considered futuristic in their time, are re-formed to create new, contemporary devices illuminating the dialogue between art and design.
NAAITAFEL (sewing table) is a combination of functions. Instead of a needle from a record player, a sewing machine needle is used. NOOTZUIGER (note sucker) is a harmonium (air organ) built into a device that also uses air pressure to operate a vacuum cleaner. STRATENSPELER (street player) is a device that makes urban patterns and textures audible. A microphone is used together with rotation to create an audio loop of the particular surface it’s placed on.
Nebula III is an Artificially Intelligent, interactive computer graphic installation. It forms the third part of a larger computer graphic project exploring the literary fiction of Georg Büchner, each iteration focusing on different aspects of his work using more complex AI algorithms.
SoS forms part of a research program exploring ‘dialogical spatial aesthetics’, namely the investigation of spatial experience as a two-way dynamic transaction between the viewer and space, rather than its treatment as a linear and static relationship between an observer and a setting. It tests the hypothesis of Manuel Castells, the world’s foremost-cited communications theorist, who formulates space as a fluid process that is constituted through the interactive relationships between its constitutive cyber, human and physical components. SoS tests this by creating a cyber/physical experience where the viewer is physically positioned to immersively experience a virtual ocean voyage whose wave and climatic behaviours change in unforeseen ways. By doing so it models the dynamical uncertainties that characterize the actual experience. These dynamics involve, on the one hand, the behavior of the user, and on the other the behavior of the virtual oceanic space. As the viewer experiences the voyage, attempting to control it through their gesture and optical orientation, the space responds to their behaviour in relatively independent ways, resulting in different wave and climatic behaviours. Inversely, changes in the space induce shifts in the viewer’s behavior.
Visual music is an interdisciplinary artistic genre with roots dating back hundreds of years. The emergence of film and video in the 20th century allowed this genre to reach its full potential. The concept can be applied using a variety of approaches: for example, works in which the images and music are directly tied by sharing parameters or works in which the images “interpret” the music (or vice versa). A third category is pieces in which the visuals are edited in tight synchrony with cues in the music. The common theme is that the music and images are closely related in some form.
My work attempts to bring principles of organization and development drawn from musical composition into the visual world. I am particularly interested in creating vividly colored images that display repeated patterns of movement, similar to the rhythmic patterns often found in music. These patterns coalesce into recognizable shapes and forms within the context of a virtual world, where all cues as to size and scale are missing and must be inferred by the viewer. This approach leaves the works open to the widest possible interpretation, which is a main goal of my work.
Dennis H. Miller uses a variety of methods to create his 3D images. His works employ two primary tools: POVRay, a public-domain image compiler, and Cinema 4D, a commercial 3D modeling and animation program. In many of his works, Miller sets in motion processes that result in the generation of basic forms that show repetition in their structure. From these raw images, Miller carefully composes an environment and context, then explores various color, lighting, and textural options.
Residue was written in 1999, and unlike other works by its creator, the animation and music were created simultaneously. The technical and artistic challenges this created were immense, but the necessity to carry both elements forward, each with some meaningful continuity, and to keep the two in sync from an aesthetic viewpoint, provided the author with a stimulating and provocative experience. For the record, the work consists of 16,200 individual Targa (graphic) files, which live a precarious existence on the composer’s hard drive.
“Second Thoughts” was composed in 2000 and is intended for performance on videotape. The work is in three sections, the first two of which dominate the form. The opening section explores the inside of a virtual object and depicts many of the surfaces and textures found therein. The second section moves into 3D space and presents different perspectives of the initial object, as well as adding a number of new elements that derive their form from the elements in the opening. The short, third section is a recap of the first and adds several minor variations to it. The music, also composed by this author, contributes an emotive element as well as an added layer of continuity to the piece. Like previous works by this author, all visual elements were scripted using the POV-ray scene description language; no special effects or plug-ins of any type were used. The musical material derives from both synthetic and acoustic sound sources.
My experience in traditional media includes photography, printmaking, and drawing. Electronic imaging has provided me the opportunity to work with my photographs or to draw directly with the computer using an electronic tablet. I view much of my visual computer work as electronic printmaking. Images are signed, titled, and identified with edition numbers. For an artist working with traditional media, the character of paper (for drawing or printmaking) is an important concern. Now, with the Iris print, an artist working with electronic media can make use of almost any paper for final printing. After an intense learning process with hardware and software, the computer as a visual creative tool seems very friendly and affords endless directions for new imagery. I now create drawings that are influenced by my knowledge and experience with the computer. Many of my new photographs were made and influenced by the intention of bringing the images to the computer. I expect that electronic imaging will be more and more considered an emerging medium of the visual arts.
With the plethora of real-time information and news coming from both commercial news sources and individual broadcasters using the internet and Twitter to get their messages across, we seem to have arrived at a backlog of data chaos that is clamoring for order. Plinko Poetry, by Deqing Sun and Peiqi Su, playfully recontextualizes this heavy onslaught of information. The project gets its name from the game of Plinko, introduced in 1983 on the hit American television game show The Price Is Right. In the original game, contestants climbed a staircase and dropped large chips into a game board fitted with hundreds of pegs. The chips would hit the pegs as they fell, sending them off in different directions until they landed at the bottom of the board into one of several slots assigned with different dollar values.
Bringing the aesthetic of the original game of Plinko into the digital era, Plinko Poetry retains the chip and pegs convention, but instead of waiting for the chip to drop to the bottom of the board to arrive at a result, the chip creates a poem out of scrolling Twitter feeds on a screen: Each peg hit by the chip stops the Twitter feed on a word, collecting the words into a poem that is printed on a piece of paper and given to the participant. The resulting poem is also live-tweeted to the @PlinkoPoetry Twitter account. The result is a live poetry engine, powered by people, that constantly broadcasts its poems as the installation is used. The installation feeds off of the seemingly useless number of messages that are sent daily by celebrities, political leaders, ordinary citizens, and media outlets. Ultimately, it makes light of the fact that we are living in such an information-rich landscape, from social media to traditional news media, that the amount of data produced daily often outlives its usefulness to the general public.
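To make the mechanic concrete, here is a hedged sketch in Python of the idea that each peg collision freezes the scrolling feed on whatever word is currently visible, with the captured words accumulating into the poem. The word source and peg layout below are placeholders, not the installation's actual code.

    # Hedged sketch of the word-capture idea: a scrolling feed of words is
    # sampled each time the falling chip hits a peg. The tweets and peg rows
    # below are placeholders, not data from the installation.
    import random

    def scrolling_feed(tweets):
        """Yield words endlessly, as if a Twitter feed were scrolling past."""
        while True:
            for tweet in tweets:
                for word in tweet.split():
                    yield word

    tweets = [
        "a live poetry engine powered by people",
        "the amount of data produced daily often outlives its usefulness",
    ]
    feed = scrolling_feed(tweets)

    ROWS_OF_PEGS = 8
    poem = []
    for _ in range(ROWS_OF_PEGS):
        # the feed keeps scrolling for a moment before the chip strikes a peg
        for _ in range(random.randint(1, 20)):
            current_word = next(feed)
        poem.append(current_word)  # the peg hit freezes the feed on this word

    print(" ".join(poem))
    # In the installation, the finished line would be printed on paper and
    # posted to the @PlinkoPoetry account.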
If we humans are made up of 98 percent water, then what comprises the other two percent? A similar question could be asked in the current debate on the influence of the digital world on our definition and the identity of our non-virtual world. My subjects are immersed in particular aquatic environs and are given instructions to consider themselves more like a land mass within a body of water. I ask them to consider themselves not so much as who they are, but what they might be as shorelines, tides, shallows, depths, currents, undertows, and corrosions. I shoot several digital photos of each subject, then file that raw information away for a year. When I finally return to the file, I have enough distance to be more objective as I build the psychological portrait of the subject as water. My works become images that reflect personality traits of the subject: centered, disturbed, serene, clear, scattered, distorted, etc. I remove anything that distracts from the water image (jewelry, birthmarks, unwanted surface disturbance) and reconstruct the image as a translation of digital information. I use industrial digital-output equipment from the billboard industry because advances in the kind of work I build are more available there than in the fine arts. These images are thermal ink laminated on a vinyl mesh screen, and, in certain light, they have surface properties similar to a solarized photo image. The scale allows viewers to wander into the dot matrix and digital field, and find their way out again.
I reworked the digital photo sources in Photoshop and desaturated the color to black and white. I digitally recorded the vocal patterns of Isabella Couperthwaite Kreizel reading every definition of “water” from my 1973 Funk & Wagnalls University Edition Dictionary. I also gathered digital water sources as audio tracks from my June 2003 artist’s residency at Palazzo Venier/Casa Artom Study + Research Centre in Venice. Collaborating with Paul Connolly (Fluor International’s Computer Data Team Leader in South Africa), we rebuilt both sound-source wavelengths into one soundtrack using Peak and Pro Tools.
Out-of-focus elements read as individual faces, familiar yet unfamiliar. The works speak about recognition of what we think we know … but they are elusive.
A celebration of ham acting: A frog auditions for a part in Hamlet and fails badly.
Hardware: Apple Macintosh IIcx, Apple Portrait Display, Hewlett Packard ScanJet Plus. Software: Aldus FreeHand, Image Studio, SuperPaint.
Hardware: Apple Macintosh IIcx, Apple Portrait Display, Hewlett Packard ScanJet Plus. Software: Aldus FreeHand, Image Studio, SuperPaint, Aldus PageMaker.
RolyPoly is a networked installation designed to enable two individuals to “sense” the presence of each other, even though they may be physically apart. The movements of a pair of RolyPolys are mirrored, such that a soft tap that rocks one will simultaneously rock its partner to the same degree, instantly creating a corresponding reaction in the other. While the Internet provides a vast array of text messaging and video interaction options, RolyPoly offers a unique, spontaneous, and subtle mode of instant communication, exclusively between two individuals. RolyPoly addresses the phenomenon of people living apart and examines gesture as a mode of communication distinct from speech and text. Through gestures, one is able to bring intimate familiarity to the other party, even when miles apart.
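A minimal sketch of the kind of mirroring this implies, assuming each unit simply exchanges its sensed rocking angle with its partner over a small UDP link; the address, port, and actuator call are hypothetical stand-ins, not the installation's actual protocol.

    # Minimal sketch of networked mirroring: each unit sends its sensed tilt
    # angle to its partner, which reproduces it. Address, port, and the
    # actuator call are hypothetical placeholders.
    import socket
    import struct

    PARTNER = ("127.0.0.1", 9999)  # placeholder for the partner unit's address

    def set_actuator_angle(angle):
        """Placeholder for the hardware call that rocks the local RolyPoly."""
        print(f"rocking to {angle:.1f} degrees")

    def send_tilt(sock, angle_degrees):
        """Broadcast the locally sensed rocking angle to the partner."""
        sock.sendto(struct.pack("!f", angle_degrees), PARTNER)

    def receive_and_mirror(sock):
        """Apply whatever rocking angle the partner reports."""
        data, _ = sock.recvfrom(4)
        (angle,) = struct.unpack("!f", data)
        set_actuator_angle(angle)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))
    send_tilt(sock, 12.5)       # e.g. a soft tap sensed locally
    receive_and_mirror(sock)    # loopback demo: the same angle comes back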
Hardware: Silicon Graphics 4D 25-70. Software: Alias 2.4.2.
Places of Memory recreates the emotional and psychological paradox of my experiences watching Hurricane Katrina through media reports and participating in the recovery process as a relief volunteer. I interviewed Wangui Kaniaru, a law student and volunteer in the Upper Ninth Ward. Her words articulated my contradictory experience of being removed from, and simultaneously embedded within, a community devastated by disaster. As the video plays, Hurricane Katrina, its aftermath, and the recovery process slowly unfold before the viewer in a giant, seamless composite image. The composite represents both my physical and psychological experience of moving through and working within a city still divided by the complexities of race, economics, allocations of resources, and politics. In essence, it is my visual memory of the recovery process in New Orleans since Hurricane Katrina; an archive of an ongoing, evolving historical event that I experienced both firsthand and through mainstream media sources.
Places of Memory seeks to use multiple points of view to construct new ways of seeing and remembering what happened in New Orleans. The internet provided an opportunity to research how Hurricane Katrina and the recovery process were visualized through images from mainstream, independent, and personal media sources (such as weblogs). Images were appropriated and combined with personal photographs using Photoshop CS. iMovie was used to edit video, and GarageBand was used to extract and remix the audio. The composite, sound, and video files were combined using Adobe After Effects 6.5.
Hardware: IBM PC, Cubicomp frame buffer, Diablo C150 printer Software: Time Arts-Easel, dithering software by J. Schier
Pictures are called to the screen, modified and processed live, resulting in new combinations along the way …. The sequence will shift from preset combinations to pseudo-varied combinations of effects. Periodically it will return to home base.
Hdw: Cubicomp/Xerox 402/IBM XT clone Sftw: Lumena/Custom by J.Schier
Hdw: Cubicomp FB Sftw: Lumena
Fantasy is sanctuary.
The imagination is a threshold to an inner world, uncovering the tension between an image that conjures its mutable revelations and the idée fixe. This work embodies the hidden poetry of the ordinary, making visible what previously was hidden.
“Secrets of the Magdalen Laundries” explores the theme of imagination in the inner life. Dreaming, reverie, and fantasy are ways of being that make the reality of circumstances more tolerable.
The history of the Magdalen Laundries serves as a point of departure for the installation. These convent industries in Ireland existed from the mid 19th century until the late 20th century. The Magdalen Laundries institutionalized women who were smeared with the reputation of being immoral, or who were indigent, and kept them imprisoned through the social machinations of the Catholic Church. These misused women lived in punitive labor, lost to both their families and themselves. Henceforth, they became invisible, concealed beyond the margins of society.
At the boundaries of the visible exists the invisible.
In these images the women live in a private world of desire, longing, and unreachable fulfillment, forced into a mundane ritual of service without pleasure or amenities. Their vitality and eros, bound by the superficial morality of the Church, reemerge as images on the sheets that they repetitiously wash, a reminder of their stained existence.
They dreamed until the secret images were burned onto the sheets.
Sheets facilitate dreaming. They enfold the body, carry its warmth, desire, perfume, and wrap it in death. The discarded bedsheets give form to an imagination that releases desire in spite of circumstances. The sheets move from matter to metaphysics, reminding us of the body and its dreams. The portraits from the Magdalen Laundries appear and disappear as you move around them. Viewed from an oblique perspective, the images vanish like the women lost in time. Faced directly, they assume their own dreaming existence.
The unique sound composition for “Secrets”, created by Michael McNabb, brings a psychological fourth dimension to the work. Ten independent audio sources surround the viewer with the voices of women conversing in Irish Gaelic, transformed by the composer’s software, using only processing with no synthesizers or conventional instruments.
There are two concentric four-channel layers. The inner presents an intimate perspective while the outer, more distant and manipulated, represents the deeper emotional desires of the women, transfigured by memory and imagination. Additional channels emanate from washtubs, their watery resonance a secret communication across time.
The Hide and Seek Series: An Archaeological Excavation of Memory
This autobiographical body of work addresses issues of self, gender, and intimacy using the concept of an archaeological excavation of memory as a metaphorical structure.
My involvement with digital imaging began 12 years ago. I soon discovered that computer technology provided me with the opportunity to manipulate, edit, and expand the photomontage format that I felt most suited my personal artistic expression. My work reflects my interest in both the Dada and Surrealist art movements, primarily in the use of the juxtaposition of seemingly unrelated visual elements. This methodology enables me to present an almost “cinematic” storyline based on the relationship of each of the vignettes within a particular piece. The computer has now offered me an even wider range of possibilities within the photomontage format. The technology has actually freed my range of expression and allowed an even more personal shaping of the symbolic elements I use in my work.
In my earlier work, I utilized “found” vintage or family photographs as a starting point for the final photomontage. In my more recent works, I experiment with different types of image processes using my own photography as a means to further strengthen the “finding of my own voice” through the presentation of “landscapes” that are charged with symbolism and emotion.
My art is a combination of myth, spirit, science, and technology. I see myself as a modern alchemist, using silicon chips as a tool to transform electrical patterns into art. My attempt to portray an element of mystery is the guiding factor in these works. The juxtaposition of the image elements hopefully serves as a catalyst for the viewer’s recognition of her/his own inner processes. The computer does not destroy your soul, as I once thought, but rather has liberated a creative aspect of the self that might have otherwise remained undiscovered.
In this series, slides of the model and the objects (primarily furniture from motel rooms I have stayed in) are taken with a traditional 35mm camera. These are imaged with a slide printer using normal and transfer techniques. Resultant images are scanned and composited in Photoshop with text and other elements.
Humans have always attempted to pull back the veils obscuring the future. We peek inside the soothsayer’s parlor and gaze through the curtains that hide the cards on the table. The patterns of our future shimmer as predictions of events that will either ravage or embrace our lives. These are the cards we draw when we pick up the deck. No card is alike. No deck the same. Yet because we share the same chance of the draw, we are bound together, suspended in the picture albums of all the worlds.
During the early 20th century, dream books became very popular. These books, produced to advertise patent medicine, were odd compendiums of dream images, superstitions, and omens along with their symbolic meanings. Gathered from both historical custom and tradition, as well as individual insight and interpretation, they were used to forecast and predict the future, as well as to understand the past. Ranging from primitive lists and summaries to sophisticated, Jungian-like dictionaries of symbolic objects and ideas, the dreams and omens were understood to reflect and represent (symbolically) our life and its place in the cosmos.
Drawing on inspiration from my collection of early dream books, I used both digital and traditional photographic techniques to produce Dreams and Omens as 20-inch-by-24-inch Polaroid image transfers printed onto fabric.
Hardware: Apple Macintosh. Software: Aldus Pagemaker, Applescan.
We are losers, misfits; we trip in codes and appropriate technologies. We are three women living on three continents: Europe, North America, and Australia. Since the 1990s we have loved to interweave our playing using various Internet protocols. Today we refuse the dictatorship of social networks. We don’t adapt to the Megamachine gears. We don’t believe the rhetoric of gamified social platforms. We resist leaving the fragments of our electric bodies in the networks of domination (Facebook Inc., Google, etc.). Instead, we are forever in search of the liminal and the spaces between the singularities that patriarchy and capitalism define and operate under.
Open Sorcery Poetry (OSP) is a multilingual battle-cry, a series of generative poems created with known accomplices, deep aliases and passing strangers. We gather feeds in the free software drift zones. These ‘anchors for listening’, hexes against Power, can be hacked, weaponized, remixed, and recast. The method is the content, so we are making new worlds using F/LOSS (Free/Libre Open Source Software): online editor Etherpad for a/synchronous writing, Nextcloud for sharing and synchronizing folders, Audacity to create podcasts. We tangle our words and share our poetry and audio recordings.
Join us in the Etherpad to create new poems, spellcasting with us. It’s a convivial experience that we are after. Poems will emerge and dissolve. This is a challenge, as the Etherpad will be open like a naked body, but it is also a calculated risk.
Will you play with us?
Hmmm … It could be a painting but then again, maybe it’s not. With Photoshop and a few favourite third-party plug-ins, I can achieve effects that are very painterly and yet intriguingly not. At first, I tried to emulate natural media, but now I try to blur the line between something that seems to be created on canvas or paper with paint, and something obviously created on a computer. My inks, and the papers I choose to print on, are very much a part of the process, and the final piece for me is always the final print, even though the image was created completely on the computer. There are always subtleties in the printed piece that just are not visible on the computer screen. That final surprise is the payoff for me.
In the fall of 2005, as a result of their interest in my previous work and techniques, a class of college students taking an introductory digital art class invited me to give a virtual tutorial. To give them a taste of how I work, I proposed “painting machines” that they could create by recording actions using standard Photoshop filters. During the development of the tutorial, I discovered bugs in a couple of the standard Photoshop CS2 filters, which at first annoyed me because they could compromise the tutorial, but then they intrigued me. The combination of the bugs and the “painting machine” technique that I developed for the students resulted in this series of work.
The work began with a low-resolution photograph used as a color base and then was manipulated entirely in Adobe Photoshop CS2 using standard filters and exploiting certain bugs. One of those bugs is that the cutout filter breaks with certain complex images, returning what can only be described as shards. The other is that the shear filter returns straight or angled lines instead of curves if the filter has not been run at least once outside an action.
Hardware: VAX 11/780, Genisco frame buffer, Dicomed film plotter Software: Paul Heckbert
Hdw: Iris 3030 Sftw: By artist