Echolocalizator (2015) is a wearable device that aims to change or augment our human way of interacting with the environment. Using “sound spatialization,” this technological helmet simulates the echolocation sonar used by animals like bats and dolphins, highlighting the essential role of technology in the coevolution of humans and animals.
Echolocalizator is, in fact, a perception-bending, environment-transforming portal to a world that simultaneously exists and does not exist. The work proposes a “virtualized reality” where visible phenomena are reinterpreted into synthesized sounds that generate new cognitive associations and perceptive experiences.
The helmet is a cybernetic hybrid computer that recreates physical reality within a biofeedback system and executes a computer algorithm in real time, translating sensory stimuli into a new language for human interpretation. Using ultrasonic sensors placed to the left and right of the forehead and a microcontroller that translates incoming signals into centimeters, this leather wearable device produces a binaural sound atmosphere in the mind of the user, with sounds that correspond to their movements and to the positions of objects in their vicinity.
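To make the sensor-to-sound idea concrete, the following is a minimal sketch of how two forehead distances could be mapped to a binaural tone, assuming a hypothetical sensor range and mapping curve; it illustrates the principle described above and is not the helmet's actual firmware.

```python
# Hypothetical sketch of the sensor-to-sound mapping described above.
# The sensor range and the mapping curve are assumptions, not the
# artist's actual firmware.

def distance_to_tone(distance_cm, max_range_cm=400.0):
    """Map a distance in centimeters to a (frequency, amplitude) pair.

    Nearer objects produce higher, louder tones; distant objects fade out.
    """
    d = min(max(distance_cm, 2.0), max_range_cm)
    proximity = 1.0 - (d / max_range_cm)            # 0 = far, 1 = very close
    frequency_hz = 220.0 * 2 ** (2.0 * proximity)   # ~220 Hz far, ~880 Hz close
    amplitude = proximity ** 2                      # quieter with distance
    return frequency_hz, amplitude

def binaural_frame(left_cm, right_cm):
    """Combine the two forehead sensors into a stereo (left, right) frame."""
    return {"left": distance_to_tone(left_cm), "right": distance_to_tone(right_cm)}

if __name__ == "__main__":
    # e.g. a wall 60 cm to the left, open space to the right
    print(binaural_frame(60.0, 350.0))
```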
“Cacophonic Choir,” an interactive art installation, addresses the ways that sexual assault survivors’ experiences are distorted by digital and mass media, and the effects of that distortion. The installation is composed of agents distributed in space that individually respond to approaching visitors by becoming visually bright, semantically coherent, and sonically clear, revealing the original testimonies of survivors.
Extended Summary:
Cacophonic Choir is an interactive sound installation aimed at bringing attention to the first-hand stories of sexual assault survivors, and the way such stories may be distorted by the media and in online discourse. The work is composed of nine vocalizing physical agents distributed in space. Each agent tells a story. Altogether, from a distance, the listener hears an unintelligible choir—the stories are fragmented and the voices distorted. As the user approaches an agent, the story becomes sonically and semantically more coherent. When in the agent’s personal space, the viewer can hear the first-hand account* of a sexual assault survivor. The work employs several digital media techniques, including machine learning, physical computing, digital audio signal processing, and digital design and fabrication. Agents are fitted with ultrasonic sensors and respond to an approaching viewer in three ways simultaneously. First, the narrative becomes more coherent, reflecting how stories become distorted by the media. This is achieved by adjusting the accuracy of a generative machine learning algorithm that we designed and trained on the anonymous accounts of more than five hundred sexual assault survivors. Second, to express how survivors are silenced, the voices are treated by a granular synthesis algorithm which generates a stuttering and halting effect that decreases as the viewer approaches the agent. Third, the unique form of each agent becomes revealed as the result of it illuminating itself from within, enabling the viewer to see through the soft silicone shell to the digitally fabricated organic form within. Via these interactions, the work embodies the stories of sexual assault survivors, and how these stories are obscured and distorted in online public discourse.
* These stories were shared on “The When You’re Ready Project”, a web-based platform where survivors of sexual violence can have their voices heard. https://whenyoureready.org/
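The three distance-driven responses described in the summary can be thought of as one mapping from sensor distance to control parameters. The sketch below is a hedged illustration of such a mapping; the parameter names, ranges, and the temperature-style control of the text model are assumptions, not the installation's actual code.

```python
# Illustrative mapping from viewer distance to an agent's behaviour.
# Parameter names and ranges are assumptions, not the installation's code.

def agent_state(distance_cm, personal_space_cm=60.0, max_range_cm=400.0):
    d = min(max(distance_cm, personal_space_cm), max_range_cm)
    # 0.0 when the viewer is inside the personal space, 1.0 at maximum range
    remoteness = (d - personal_space_cm) / (max_range_cm - personal_space_cm)

    return {
        # generative text model: low "temperature" = coherent, testimony-like text
        "text_temperature": 0.1 + 0.9 * remoteness,
        # granular synthesis: more silent gaps and choppier grains when far away
        "grain_gap_probability": 0.8 * remoteness,
        "grain_length_ms": 200 - 150 * remoteness,
        # internal illumination revealing the fabricated form inside the shell
        "lamp_brightness": 1.0 - remoteness,
    }

if __name__ == "__main__":
    for d in (50, 150, 400):
        print(d, agent_state(d))
```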
My focus of interest is experimenting with the algorithmic generation of pen-plotter drawings. I write a program to realize a conceptual idea for a drawing, and it demands all the strictness and logic common to computer programs. It also strongly contributes to the clarification of the conceptual idea. Later, the drawing may pass through additional processes drawn from other software programs.
To write a program for the purpose of generating a piece of art is pure luxury, and it is a highly enjoyable personal experience. Such a program does not solve a pressing problem, no client is waiting for code, nobody is interested, there is no real purpose, it is serious and challenging, but it is intimately connected to pleasure, nothing but pleasure.
I make use of a number of programming languages, some of them running on very old computers, some of them still running on my Macs. Programming languages die, computer systems die, and the peripheral computer device I love most, the pen-plotter, is already dead or almost so. But its high potential for realizing drawings of all types had not nearly been fathomed before it was replaced by printing technology. The plotter uses strings of HPGL code, which, in the simplest case, are coordinate pairs combined with the commands pen-up and pen-down. It was a most irritating experience recently, after many years of serious programming, to be able to produce one of my drawings with a sort of program that consists only of a few successive search-and-replace statements applied to a list of coordinate pairs in a standard word processor.
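For readers who have not seen HPGL, the anecdote above can be reproduced almost literally: a plain list of coordinate pairs becomes a drawing once pen-up (PU) and pen-down (PD) commands are substituted in. The tiny sketch below imitates that search-and-replace trick in Python; it is a generic illustration of the format, not the author's own program.

```python
# A coordinate list turned into HPGL by nothing more than string
# substitutions, imitating the "search-and-replace program" described above.

coords = """0 0
1000 0
1000 1000
0 1000
0 0"""

# each "x y" line becomes a pen-down move; the very first move is a pen-up
body = coords.replace(" ", ",").replace("\n", ";PD")
hpgl = "IN;SP1;PU" + body + ";PU0,0;SP0;"   # initialize, select pen 1, draw, park
print(hpgl)
```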
The simplicity of the line and its indefinite richness of expression in drawings are fascinating, even more so when the design of the drawing is based on strict rules of generation.
The generative process is programmed to leave larger areas toward the center of the image empty. The blurring is deliberately produced by minor scaling operations. The image is part of a series of experiments with unsharp boundaries.
My artistic interest centers on the adventures arising from the difficulties of mastering the plotted line as a means of artistic expression. Three fascinating aspects contribute to my interest: 1. The fascination of the mechanically guided pen. 2. The fascination of the monochrome line. 3. The fascination of the generative code. The technology of mechanical drawing is almost extinct. It has been supplanted by other print technologies in the course of technological development. As a metaphor, the moving pen in the grip of a plotter in action resembles relatively closely the process of the hand engaged in drawing. Interesting consequences of artistic concern arise from this observation. Historically, drawings have been around since the beginning of art, and drawing is an enormously rich domain of art. It is a universe, indeed, one that is complemented by the equally rich universe of machine-generated drawings, a universe in its own right. It is a big artistic challenge to work in this universe, to invent strategies and code them into programs from which drawings can be generated that possess identity and uniqueness and that demonstrate with great clarity that they belong to the machine universe. Artistic quality lives comfortably in both universes. One of the limitations of plotter drawings, which can also be regarded as a strength, is the monochrome line produced by a pen, a pencil, or the like. There is a substantial difference between a printed and a drawn line, and my drawings exploit this difference.
One of the specific properties of machine-generated drawings is their reliance on a generative code. The program is the instrument by which the idea and the intentions of the artist are transformed into the drawing. The conceptual work necessary long before a line is actually drawn creates a distance between idea and output. Design of a generative set of rules must precede the actual production process. This leaves room for a vast space of possible approaches, which I have barely scratched with my efforts.
The image is one of a series, using very small elements in very large arrays. It constructs a contradiction, because it defines and draws “pixels” with lines. The plane of pixels is actually a drawing, with each pixel being a very small circular or randomly shaped “potato,” an individual instance, a closed line with a unique position in the plane. A minimalist generator was programmed to write HPGL code, which is used to realize the image on a pen plotter. The mechanical shortcomings of the fast-moving pen generate slight, desirable deviations, which result in an overall gray scale. Random processes are used to disturb the strictly orthogonal arrangement. In nature, we find many situations where small elements are assembled in large arrays. The image is regarded as “synaesthetic” because it attempts a synergistic junction of contradicting and mutually exclusive concepts, which jointly form a new concept. There is no meaning to the drawing, but associations are triggered, which may connect the image to known and familiar patterns. These associations can also be connected to a synaesthetic function, bringing different views together into mental concepts.
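A minimal sketch of such a "pixel" generator is given below, under the assumption that each pixel is a small closed polygon jittered off a regular grid and written out as HPGL; the real generator and its parameter values are the artist's own.

```python
# Sketch of a minimalist "pixel" generator: a grid of tiny closed shapes
# ("potatoes"), each jittered slightly, written out as HPGL. Parameter
# values and the shape rule are illustrative assumptions.

import math
import random

def potato(cx, cy, radius=15, wobble=0.4, sides=8):
    """A small closed polygon approximating a circle with a random outline."""
    pts = []
    for i in range(sides):
        a = 2 * math.pi * i / sides
        r = radius * (1 + random.uniform(-wobble, wobble))
        pts.append((round(cx + r * math.cos(a)), round(cy + r * math.sin(a))))
    pts.append(pts[0])                               # close the line
    return pts

def grid_of_potatoes(cols, rows, spacing=60, jitter=10):
    shapes = []
    for j in range(rows):
        for i in range(cols):
            cx = i * spacing + random.randint(-jitter, jitter)
            cy = j * spacing + random.randint(-jitter, jitter)
            shapes.append(potato(cx, cy))
    return shapes

def to_hpgl(shapes):
    out = ["IN;SP1;"]
    for pts in shapes:
        (x0, y0), rest = pts[0], pts[1:]
        out.append(f"PU{x0},{y0};" + "".join(f"PD{x},{y};" for x, y in rest))
    out.append("PU0,0;SP0;")
    return "\n".join(out)

if __name__ == "__main__":
    print(to_hpgl(grid_of_potatoes(cols=20, rows=20)))
```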
A massive arrangement of polygons composed of parallel chunks covers the plane. The drawing is a pen-plotter image plotted with pencil on paper. The image is generated using random procedures within certain boundaries and observing some parametric restrictions. The program accepts a set of starting points for polylines and generates the lines algorithmically on the basis of parameter settings. The drawing comes into existence in a single shot and as one instance of a possibly endless sequence of drawings. All the required decisions are coded in the program. The intentions of the artist are formulated before the process is triggered. I regard this type of drawing as a species of the universe of machine-generated drawings with specific characteristics. The plotted line is different from the printed line, and that contrast becomes important in the areas where lines intersect. The concept of algorithmically generated drawings is a unique concept that raises many interesting questions within the idea of synaesthesia.
Starting with a photograph of the bark of a tree, an image from nature is transformed over and over again. It is a bag of data, which is hitchhiking along a chain of processes, getting a lift from many readily available programs and changing in every one of these steps. The result is not an intentionally designed image but a drawing, which is declared finished after an arbitrary sequence of transformations. The goal is not to arrive in some preconceived location but to travel and find something. The emphasis is on finding, catching, grabbing as opposed to deliberate or algorithmic generation. After each step, the system decides whether to trigger another transformation with totally uncertain outcome. I see the idea of “synaesthesia” in this process of transformations. Each one represents a concept that is imprinted onto the image. The image is accumulating and absorbing all these changes and mutates into something new. But the artist decides if it is allowed to survive.
Perhaps the most convincing way to communicate my lasting interest in the line as a basic element of artistic expression in drawings is the reference to the unaccountable and fascinating drawings we know from art history. Drawings rely on lines. It is absolutely spectacular what can be achieved with lines, and, drawn by the hands of artists, lines have been with art right from the beginning, when art emerged on stones, bones, and the walls of caves. With respect to drawings (line-based artwork) I like to distinguish between “The Universe of Hand Drawings” and “The Universe of Machine Drawings.” Both of these universes may be thought of as equally rich and very densely populated. The distinction between the two acknowledges the rise and formation of a universe of drawings different (and very much so) from the ones drawn by the hand of an artist. The Universe of Machine Drawings is a contemporary phenomenon, but its philosophical roots are old and found in the restless desire of homo faber to master and shape tools, and use them with a great intentional drive. My efforts are focused on exploration of the universe of machine drawings. Algorithmic generation of a drawing or a sequence of drawings is a fascinating challenge. It requires coding of an intent which the machine will follow to produce a result. I like to work on the old-fashioned (and now nearly extinct) pen plotters. Although no longer in use as a standard peripheral device for computers, the basic idea of this technology (a mechanical arm equipped with a tool) is now widely used in industrial-production processes. For cutting sheet metal with lasers, there is a very close analogy to plotting on the old-fashioned flat-bed plotter. I am now using my old code for experiments, to cut drawings with a laser into aluminium, steel, layered plastic sheets, and thick cardboard. It is interesting to observe how very old and very new computer technology can be merged and used for production of aesthetic events.
This image is realized as a pen drawing on a pen plotter. Structures and patterns found in nature are transformed into line patterns. They are combined with algorithmically generated drawings and arranged in a collage-like fashion. Fuzzy clipping is applied, which generates unsharp, fuzzy boundaries along clipped lines and areas. A vaguely recognizable animal figure, the_bird_facing_left, is used to give hints at an interpretation, if one chooses to look for one. The drawing is identifiable as a program-based machine drawing because it uses a line type that is difficult (or impossible) to realize by hand, and it uses this line type with great consistency. The work is consistent with the SIGGRAPH 2004 Art Gallery theme, synaesthesia, because it draws on the vagueness and fuzziness of the clipping processes. They open a space for interpretations and associations from different sources. They construct a twilight or shadow concept, avoiding a clear message, and thus hold potential for bringing together different emotions.
Touching is a very broad concept. In these images, lines are playing a game of touching, of near-touching, of avoiding, of seeking, of crossing and intermingling: a manifestation of the purity of the line and an invitation to meditate.
Algorithmically generated drawings, drawn on a pen plotter, constitute a very small segment within the area of computer artwork. It is this small segment, however, which I find most fascinating. This has to do with the archaic notion of a mechanical extension to the drawing hand, unlocking a universe of machine-generated drawings utterly different from hand drawings.
The triptych visualizes the 800,000 Kosovo war refugees by representing each one with an individually generated line consisting of five line segments. The figure of 800,000 is a conservative estimate, published in the German newspaper Die Zeit. The triptych may be seen as an interpretation of the SIGGRAPH 2001 Art Gallery theme, “N-Space,” where the dimension of the space I refer to is the space of “social consciousness.”
It reminds us of the impact of war, as we look into the faces of the affected – individual human beings, suffering and deprived, displaced from their homes and cast into the void.
The “counting table” at large is an open-ended listing, a metaphor for meditating about humanity.
When walking through a landscape in snow, we observe many types of linear structures. The tree as a metaphor and as an element of landscapes is a familiar image and a poetic reminder to enjoy life. What I am trying to communicate through my work are interpretations of the mysteries and tragedies that surround us.
Computer-generated artwork, based on line drawings, is challenging for a number of reasons. It makes use of line as the characteristic element of the generative process, and the results rely entirely on the calligraphic qualities of the line. Besides the heritage of hand drawings, which we conceive as a fantastically rich universe, we may conceive an equally fantastic universe of machine drawings. Line drawings that populate this universe should exhibit qualities in their own right. For instance: they should exploit algorithmic techniques, not be reproducible by hand, show that they have been drawn by a machine, achieve a distinct and unique type of structuring, belong to an identifiable universe, exhibit strong calligraphic qualities, and make the question “how was it done?” entirely unimportant.
Lines are very simple geometric structures and at the same time inexhaustibly rich elements of artistic expression. This is one of the main reasons why I like to work with them. From the vastness of the possible structural descriptions of lines, I have chosen a personal definition that makes these lines distinctly and identifiably my lines. For the generation of such lines, relevant feature values are: the number of starting points, the number of lines originating from a given point, the angular boundary for a polygon, the spread of a segment, and the number of segments in a polygon.
In statu nascendi, when a line is developing on a piece of paper, it does so from a unique starting point. It is the starting point that calls for the first decision in a drawing process, no matter whether the hand of an artist or a computer-driven device is steering the pen. The question of starting points and the question of the “character” of the line developing from those points have to be taken care of by the program. Especially interesting are two sets of algorithms: those that generate drawings in a “one-shot” generative process and those that make use of “composite” processes.
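As a way of making the listed feature values concrete, here is a small sketch of a one-shot line generator driven by exactly those parameters (starting points, lines per point, angular boundary, segment spread, segments per line); the parameter names follow the description above, while the specific formulas are illustrative assumptions rather than the artist's program.

```python
# Sketch of a polyline generator driven by the feature values named above.
# The formulas are illustrative assumptions, not the artist's program.

import math
import random

def generate_lines(n_start_points=10, lines_per_point=3,
                   angular_boundary=math.pi / 3, spread=(20, 60),
                   segments_per_line=5, sheet=(1000, 1000)):
    drawings = []
    for _ in range(n_start_points):
        sx, sy = random.uniform(0, sheet[0]), random.uniform(0, sheet[1])
        for _ in range(lines_per_point):
            x, y = sx, sy
            heading = random.uniform(0, 2 * math.pi)
            pts = [(x, y)]
            for _ in range(segments_per_line):
                # each segment may turn at most +/- angular_boundary
                heading += random.uniform(-angular_boundary, angular_boundary)
                length = random.uniform(*spread)
                x += length * math.cos(heading)
                y += length * math.sin(heading)
                pts.append((x, y))
            drawings.append(pts)
    return drawings

if __name__ == "__main__":
    for line in generate_lines(n_start_points=2, lines_per_point=2):
        print([(round(x), round(y)) for x, y in line])
```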
It all has to do with an obsession in line-oriented art. Technically, there are two main problems in generating the work I am interested in. I have to write programs or find programs into which I can cast my intentions as an artist. And I have to find output devices, onto which I can deposit the produced results (for eternity). For both problems, I have (temporary) solutions: I use both my own programs and standard programs, plotters, and printers.
Plotters (which are becoming extinct) and printers (which replace them) are very different in the way they produce output. From an artist’s point of view, they both have strengths and weaknesses. The plotter relies on a drawing pen. It mimics, to a certain degree, the mechanical and sequential process of drawing by hand, and it works with “vector data.” The printer is pixel-oriented, and it works line by line from the top of a sheet of paper to its lower rim.
Since my interest is focused on lines as a basic generative element for artwork, the properties and the calligraphic quality of lines (printed or plotted) are of great interest. Comparing the properties of the lines generated by the two classes of devices reveals how they can be exploited for generating processes. Some of the important properties of plotting are:
• Only lines of a limited thickness are available, and they come in discrete steps.
• Crossing lines generate gray-scale values and depth.
• The mechanical nature of the drawing process produces inconsistencies and slight variations in the plotted line (for example, the starting points of a line become distinctly noticeable, or the pen may temporarily fail).
• Each pen can carry one color only.
The printed line also has its own characteristic properties, some of which are:
• A homogeneous and perfect line image is achievable.
• Black lines (or lines of the same color) cross each other “flat,” and the illusion of depth is lost.
• There are no limitations to the width of lines, and they may be chosen from a continuum.
• A very large spectrum of colors is available for prints.
There is a distinct quality to a plotted line (as opposed to a printed line), which I like a lot, and which I consider as an important feature of a plotter drawing. There are qualities in printed lines, too, which I am beginning to explore.
With line drawings in mind, algorithms and their underlying concepts allow the artist to formulate interesting strategies for generative processes that produce artwork. I rely on such algorithms to place large numbers of points onto the drawing area, from which, in successive steps, complicated patterns of lines may emerge. Standard graphical operations like scale, move, clip, rotate, etc. are also employed. On purpose, only limited means of editing are available in the generating program, because a high value is placed on conceiving concepts that are then realized, if possible, in a “one-shot” operation.
A compositional mode of operation is supported as well. It comes close to classical collage techniques (with all the dangers involved). Earlier versions of the program ran on a Tektronix 4052 and later on a PC. The program in its present form is written in Fortran using GKS and is operable on a Siemens WS 430 workstation. It was implemented as a partnership project between the North China University of Technology in Beijing (Qi Dongxu, Xu Yingqing) and Universität Kassel.
For the generative act, we can identify different approaches. One of them could be described as “The intentional execution of a concept.” Another could be described as “The probing search along an unknown road, supported by the hope to find something.” With the intentional approach, the artist tries to aim directly at the goal. It is the lucky hit which he is after. The probing search ends with a catch. Searching and finding are central concepts to this approach. “Hit” and “Catch” are two metaphors for two different generative scenarios.
In my own work, I place a high value on the “Hit.” The execution of an idea by a program is a direct means to a result. To catch something requires a process, which eventually will lead to a state, which by declaration (decision) is proclaimed the result. The process of development is interrupted (ended) at an arbitrary, previously unknown point, and the last “state of the system” is singled out and raised into the position of a result. The result then suddenly stands for itself, and the generating process becomes entirely unimportant in the moment of the decision. It is (usually) not even traceable any more.
The generation of the image baum_V14 starts with a concept for a tree (tree11), which emerges as the result of a “one-shot generative process.” In the tree11 image, a dense set of points is cast into a small area. From each point, one polygon emerges. As a bundle, they form tree11, using a very simple generative rule. The strictness of this approach can (I suppose) be felt in the visual strength of the resulting image. It is this image which is then manipulated in other programs until an arbitrary decision terminates this process and delivers the final image: baum_V14.
It cannot be plotted anymore, but it can be printed. The strokes_mi31 image is composed in a similar way. One of its three bundles of lines is generated in a “one-shot” operation, which is then replicated twice and plotted on a pen plotter. A number of questions arise at this point: Should a drawing that was designed to be plotted be printed at all? What significant changes occur? What features of a plotter drawing are actually changed when it is transferred to a printer, and how does this transfer affect the image, its quality, its visual evaluation?
Generate tree11 as a one-shot operation: 1. Define an area (the width of the stem) outside the drawing area. 2. Cast a large number (several hundred) of random points into this area. 3. Generate polygons, using programs with the same features but different feature values, from those points to produce tree11 in one shot. 4. Cut to sheet; save as an HPGL file for plotting.
Generate baum_V14 from tree11: 1. Translate HPGL to EPS. 2. Run the polygons through filters. 3. Play with filters and decide when to stop. 4. Mask and save for printing.
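Read as a pipeline, the tree11 recipe amounts to casting points into a narrow "stem" area, growing one polyline upward from each point, clipping to the sheet, and writing HPGL. The sketch below follows those steps under stated assumptions; the dimensions and the growth rule are invented for illustration.

```python
# Sketch of the tree11 one-shot recipe: cast points into a narrow "stem"
# area below the drawing area, grow one polyline upward from each point,
# clip to the sheet, and write HPGL. Dimensions and the growth rule are
# invented for illustration; only the recipe itself follows the text.

import random

SHEET_W, SHEET_H = 7000, 10000          # plotter units (assumed)
STEM = (3200, 3800, -800, 0)            # x_min, x_max, y_min, y_max (below sheet)

def grow_polyline(x, y, steps=40):
    pts = [(x, y)]
    for _ in range(steps):
        x += random.randint(-120, 120)  # wander sideways
        y += random.randint(150, 300)   # always grow upward
        pts.append((x, y))
    return pts

def clip_to_sheet(pts):
    return [(x, y) for x, y in pts if 0 <= x <= SHEET_W and 0 <= y <= SHEET_H]

def tree11(n_points=400):
    lines = []
    for _ in range(n_points):
        x = random.randint(STEM[0], STEM[1])
        y = random.randint(STEM[2], STEM[3])
        pts = clip_to_sheet(grow_polyline(x, y))
        if len(pts) > 1:
            lines.append(pts)
    return lines

def write_hpgl(lines, path="tree11.hpgl"):
    with open(path, "w") as f:
        f.write("IN;SP1;")
        for pts in lines:
            (x0, y0), rest = pts[0], pts[1:]
            f.write(f"PU{x0},{y0};" + "".join(f"PD{x},{y};" for x, y in rest))
        f.write("PU0,0;SP0;")

if __name__ == "__main__":
    write_hpgl(tree11())
```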
Computer art is usually regarded as short-lived with respect to the durability of the objects over time. For prints, an expected lifetime of 25 years (longer in special cases) is assumed, and the goal of newer processes is 100 years (not a long time, given the timespans of art history, which, depending on one’s viewpoint, may be in the range of 30,000 years). To overcome this problem (if one chooses to regard it as a problem), processes using high temperature, which melt computer-generated images onto glass, may be used.
This work, Turm Unter Glas, is “permanent” under “normal” circumstances. It is part of a series of experiments that addresses the permanency and durability of computer-generated art objects. In these experiments, algorithmically generated drawings are: 1. Melted into the surface of thick glass sheets under high temperature. 2. Sandwiched and melted in between two glass sheets. 3. Cut into stainless steel with lasers.
The drawings are line images only, because lines are very simple geometric structures and, at the same time, inexhaustibly rich elements of artistic expression. This is one of the main reasons why I like to work with lines. I have chosen a personal definition, which makes these lines distinctly and identifiably my lines. For the generation of such lines, relevant feature values are: number of starting points, number of lines originating from a given point, angular boundaries for a polygon, spread of a segment, and number of segments in a polygon. It is the starting point that calls for the first decision in a drawing process, no matter if the pen is steered by the hand of an artist or by a computer-driven device. The question of starting points and the question of the “character” of the line developing from those points have to be taken care of by the program. Especially interesting are two sets of algorithms: those which generate drawings in a “one-shot” generative process, and those that make use of “composite” processes.
Yellow strokes in blue space. Algorithmically generated drawing. Lines are polygons. The plotter data are converted to printer output.
Some remarks on algorithmically generated lines:
Artwork based on line drawings is challenging for a number of reasons. It makes use of one element only – the line – and it relies entirely on its calligraphic qualities. Drawing is more related to writing than to painting, and it has a transient element to it, which is attributed to the movements of the pen-equipped hand.
Besides the heritage of hand drawing, which we conceive as a fantastically rich universe, we may conceive an equally fantastic universe of machine drawings. Line drawings that populate this universe should exhibit qualities in their own right, i.e. they should: exploit algorithmic techniques; be non-reproducible by hand; show that they have been drawn by a machine; achieve a distinct and unique type of structuring; belong to an identifiable universe; exhibit strong calligraphic qualities; and make the question “how was it done?” entirely beside the point. My art experiments focus on drawings, generated by algorithms. The drawings are usually plotted on paper with ink, pencil, and ballpoint pens. The basic line element is a polygon. A number of parameters like length of segment, angle, number of segments, spread, and so on are used to control the development of such lines. Since pen-driven plotters are becoming extinct, new print technologies are used, which also allow for exploring new ways of interpretation.
Earlier versions of the program ran on a Tektronix 4052 and later on a PC. The program in its present form is written in Fortran using GKS and is operable on a Siemens WS 430 workstation. It was implemented as a partnership project between the North China University of Technology in Beijing (Qi Dongxu, Xu Yingqing) and the University of Kassel (Hans Dehlinger).
Ink Fall is an interactive installation using digital techniques to make ancient Chinese paintings come alive. The inspiration for the work reaches back 3,000 years, to when Chinese artists first began creating Shanshui ink paintings. The emphasis in traditional Chinese ink paintings is on atmosphere, specifically the fluid atmosphere of moving water. The ancient artists’ representation was limited by traditional techniques; once painted, the ink would not move. Viewers could only imagine the movement being depicted. Ink Fall experiments with bringing the concept of fluidity to a new reality through modern digital techniques, allowing viewers to perceive atmospheric changes over time.
Hardware: VAX 11-780/E&S/Dicomed Software: Solid Modeling/Polyhedra
In 1981, Haresh Lalvani developed a geometric generalization of the Penrose tiling as a projection from higher dimensions. This work led to his discovery and subsequent patenting of a large class of number-coded convex and non-convex tessellations embodying the generative paradigm “Shape by Number”. Milgo-Bufkin has introduced HyperGrills as one application of Lalvani’s tiling designs in laser-cut sheet metal.
In the 1980s and 1990s, Lalvani extended these tessellations into large classes of 3-dimensional structures that could be constructed from systems of nodes, struts and panels. These patented inventions were amongst the first examples of modular construction systems enabling irregular and fractal spatial geometries in the building arts. Included amongst these were Lalvani’s HyperSurfaces, a new mathematical class of surface subdivisions that combined aperiodic tilings with any curved surface.
The example of a hypersurface panel-system shown here is constructed from laser-cut stainless steel components. We are looking to extend these into active and passive smart structures in view of our interest in responsive architecture.
Hardware: Ikonas & VAX 11/780 Software: Solid Modeling System, Quadric Rendering Software
XURF (eXpandable sURFace) is an experiment in developing a performative surface with the emergent properties of curvature, strength, porosity and transparency. Constructed from continuous sheet materials – metal, in this instance – XURF exemplifies a morphable rigid curved surface. Possible in regular as well as irregular pattern geometries, XURF can be variably formed by controlling the interplay between force and form.
The patent-pending XURF system was invented by Haresh Lalvani in 1998 and has been under development with Milgo since. It is a highly scalable invention with applications ranging from nano and micro scales to product design and architecture. We envision XURF as yardage (as in textiles) so that responsive architectural skins can be tailored. Smart XURF, with digitally operable components, is a natural possibility.
Fabricator and Sponsor Milgo-Bufkin www.milgo-bufkin.com
The SCORE program was used to combine and coordinate the various programs and their graphic output. By allowing a user to manipulate and tweak the parameters of each program used in the creation of an image, SCORE gives a designer complete control in arriving at exactly the desired image.
Hardware: Grinell frame buffer, VAX 11/780 Software: Harold Hedelman, Rikk Carey, Dan Ambrosi, Tom Mazzotta, Roy Hall, Wayne Robertz
Hardware: AT&T 6300+, Targa M8, camera, joystick Software: RT|1, C
Like recurring conversations with friends over cups of tea or coffee, this variant edition of 200 cups responds to and reflects on the consuming conversation of our consumer society. Diverted from their destiny as trash, the recycled tin containers are deconstructed, cut, folded, and reassembled. Beginning as post-consumer material, they revitalize the mundane into the extraordinary. Conspicuous consumption as a cultural norm flourishes in the rapid-fire pace of changing styles, models, and merchandising, and even influences the marketing of art and craft. My work questions whether creativity, content, and craftsmanship are becoming yet another disposable commodity. Most importantly, the use of recycled packaging as a medium and source of content addresses a spectrum of social and political issues. Hopefully, this work transforms the viewers’ awareness of their participation and challenges their own complacency.
Berman’s work is constructed from recycled tin containers that are opened and flattened, then stored in her studio according to color, pattern, or image subject (such as candy, crackers, standing women, sitting women, etc.). Each cup is fabricated from 10 to 13 wedge-shaped pieces cut from tins of one particular consumer product, so every finished cup has a different product identity. The metal pieces are punched from the tin cans using a hydraulic press and then bent by hand to fit with neighboring parts. The pieces are carefully soldered together. Handles are added. Saucers are formed using a hydraulic press to force the tin cans into a saucer-like shape. The first 70 cups have magnets so they can be stacked or rearranged in a random manner. The remaining cups are permanently arranged with a concealed brass rod to appear precariously balanced. This is a simplified description of the process. In reality, each construction step took months to figure out. Overall, it took four years to construct the 200 cups and finish the companion video.
Of water and the sea. It examines the relationship between media over-saturation and drug abuse: an endless, reassuring blanket-landscape of midweek television, new-age conspiracist internet video, and computer games; a line of flight that begins under a blanket on a sofa in Douglas Adams’s ‘Long Dark Tea-Time of the Soul’ and ends in a supplementary dimension of drug-induced media.
Software: Final Cut, Premiere
Hardware: LSI-11, AED 512 Software: CMU-PAINT by Warren Wake
In two months’ time, I have produced over 30 complete electronic image paintings on the color display system. The images are generated by the artist and translated (by the artist) into a full-color screen image, realizing rapidly what might take a week with traditional brush-and-paint technology. A history file is recorded on a flexible disk, which may be played back at any time to review mistakes or procedures. The final image may also be recorded as a single-image picture file (on the disk) in order to allow recall of the final image only. The artist has at his command the possibility of altering colors and adding to his painting at any time in the future. Theoretically, there are 16.8 million separate mixable colors. The important thing, to me, is getting traditional artists (unfamiliar with computers) like myself involved with electronic tools and technology. Creative artists are customarily resourceful, usually not satisfied with the status quo, and generally comfortable with visual invention modes. The results could produce some surprising, unexpected processes and products. With this conviction underlying my intentions, I set out to tackle this new animal…the computer. I am relatively satisfied with the results so far, and am realizing fully the limitations (both of the computer and of myself). The future is exciting and challenging. I look forward to the next steps (programming or shape-molding systems) with the hope that I can infect other artists (and computer scientists) with my enthusiasm.
Hardware: PDP 11/03, AED 512 display device Software: Warren K. Wake
How does artificial-life art adapt to its environment? What is the significance of a computational ecosystem proposed as contemporary art? These are some of the ideas examined in this bio-inspired immersive art installation.
The computational world of Artificial Nature consists of organisms interacting within an environment, consuming flowing energy and matter to grow and survive, generating continuous patterns of emergent beauty. Spectators can explore this world freely and endlessly, and influence it indirectly just as they might play in a stream or forest. Sensual data collected through a camera-eye and microphone-ear, and sometimes tactile touch, become the environmental conditions to which organisms must adapt.
Artificial Nature is an infinite game. It invites you to play and create, as continuation rather than toward a termination. It actively fuses intuition, artistic expression, and personal awakening with knowledge of complex systems, thermodynamics, physical biology, and computer science. In this way, art, research, and play are integrated into one aesthetic and creative experience of infinite depth, inspiring the growth of the art work, the spectators, and the artists in a symbiotic circle. Artificial Nature is proposed as “art-as-it-could-be”, suggesting the future-possible of art through its unconventional expansion. This is a vital role of contemporary art: to conceive and create the open-ended world in which we are about to live. artificialnature.mat.ucsb.edu/
Haru Ji is a 3D sculptor, trans-artist, and researcher exploring the subject of life in art through artificial life world-making as computational sculpture.
Graham Wakefield is a composer, software designer, and researcher investigating the computational embodiment of creative becoming.
“Time of Doubles” is an immersive interactive art installation. It invites visitors to experience mirror existences of themselves, taking on new roles as sources of energy and kinetic disturbance within a virtual ecosystem, a uniquely created computational world. Visitors encounter their doubles in a deep hyper space through 3D stereoscopic projection and 3D depth cameras. The immersive projection dissolves the illusion of a window to form an entryway into a shared, co-present world, and the volumetric sensors take the visitors beyond avatar-based representation to become embodied within a world of physical simulation. This world displays some characteristics familiar from our own, but is populated by unfamiliar life-forms swimming through the sensitive motions of dark fluids and singing continuously. The visitor doubles are energy fields that emanate myriad bright fluid particles, food sources that are eaten by the virtual organisms. Visitors hear, see, and feel, aesthetically, how they are fed to unknown species in the virtual ecosystem. Without visitors, the world-fluid is filled with life seeds that cannot grow, but with a human presence the populations explode into alien orchestras. Larger organisms leave physical residues and films behind as they pass, which constrain the fluid flows and which can be sculpted by the visitor doubles.
Hyper Scratch presents an active creative space with an interactive system that is like a video game in which any person can easily take part.
Real-time synchronization of sound and images is achieved through the use of a personal computer. A touch interface makes it possible for nontechnical persons to interactively manipulate images and sampled sounds. These sounds and images are combined with a set audio rhythm through the use of a digital sampler and a personal computer. The images are then projected onto a 100-inch screen using a liquid crystal video projector, and are accompanied by stereo sound.
I want to examine contemporary society, international events, technology, and communications media. TV, radio, facsimile machines, and personal computers are said to have played a key role in providing instantaneous worldwide communication of events such as the collapse of the Soviet Union, the splintering of Eastern Europe, and the student uprisings in Tiananmen Square. A lack of international communication may have been a contributing factor to the misunderstandings that led to World War II.
In the Vietnam War and the Gulf War, called by some “television wars,” real-time reports on television influenced the prosecution and outcome of the battles as they occurred. Today, political systems exist that attempt to control the information their people receive. If people are able to receive information directly, free from the control of their rulers, oppressive political systems embodied in those rulers will become untenable. As the Cold War has ended, the world has become more fragmented. It is becoming increasingly dangerous to depend on traditional political systems and values. Independent thinking is paramount in this new, uncertain environment.
Tools like ISDN allow fast, personal exchanges of large amounts of information. Soon, conventional text-based data will be replaced by visual images and sound. The value of face-to-face spoken communication may change. Electronically assisted communication will allow deep, direct communication between individuals with diverse linguistic and cultural backgrounds.
North Korea, a country that is very close to Japan, has limited communications with the rest of the world. In its isolation it has threatened to become a nuclear renegade. As a citizen of the only country that has been the recipient of a nuclear attack, I am very concerned about this issue.
Perhaps, through work such as Hyper Scratch, subjects such as nuclear proliferation may receive greater exposure among ordinary citizens. Through the use of an interactive system, I hope to symbolically express the closeness that all people share as citizens of this world.
In this interactive installation, anyone can use hand motions to freely control light and sound in a 3D space.
The twentieth century was an age of mass production and mass consumerism. Our lives depended upon the consumption of huge amounts of energy sources such as petroleum, as well as enormous amounts of other natural resources. However, it is said that fossil-fuel sources such as petroleum will run out after another hundred years or so – meanwhile, destruction of the natural environment around us continues, and the environmental crisis deepens and in many ways eats away at our existence. Upon the threshold of the twenty-first century, one can only assume that we are facing an inevitable change in our lifestyles and concept of values.
This work consists of roughly 500 motors which make music by ringing bells while light-emitting diodes (LEDs) flash on and off in the darkness. The bells and LEDs are placed at varying heights and locations surrounding the person experiencing them. The bells are made of copper and brass pipes, making sounds and producing music from the striking of a wooden ball driven by a computer-controlled motor. The lights flash on and off by motor in synchronization with the music. The sound the viewer hears is only the slight sound of the copper bells themselves, with no electronic processing or amplification. The luminescence of the LEDs is also something quite subtle.
What this work offers its audience is a chance to watch carefully and listen consciously. In our day-to-day lives, we come in contact with so many things and pieces of information that we may have gradually lost the inclination to actually watch and listen, to experience our surroundings. We must learn again how to watch and listen, to recover the will to experience our natural environment.
In the natural sphere of this world or of the cosmos, there is no specific center and all things exist as separate and independent entities. On the other hand, many things have a mutual and organic involvement with one another, as a whole forming a single living organism. In this work, sounds are produced in different places and resound to compose a harmonic state. In other words, each bell sends out a different sound but forms an organic and harmonious state, symbolizing a sort of multidimensional perspective of the cosmos.
Wall is a work that generates various sounds and images in real time based on the movement of a visitor’s hands in space. An invisible cubic grid with 100 invisible cells extends out in front of the visitors as far as they can reach. The grid has four layers. Each layer consists of five horizontal rows and five vertical columns. Visitors can freely manipulate the work by touching this invisible grid (entirely without tactile sensation). Different sounds and images are allocated to the four layers of the grid. The images are generated on a screen. Audio sounds are emitted from speakers placed near the screen. When a hand probes into the invisible grid, a sound and an image are activated each time it passes through a cell. First, the hand touches a layer that generates sound effects and moving pictures. Next, the hand reaches a layer that brings forth words. The hand’s movement is captured with two video cameras set in front of and next to the grid. The signals are converted into images and sounds through the computer and digital sound sampler.
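One way to picture the sensing just described: the front camera supplies a horizontal and vertical position, the side camera supplies the reach depth, and the three values are quantized into the 5 x 5 x 4 grid of cells. The sketch below is an assumption about that mapping, not the installation's actual software.

```python
# Illustrative mapping of a hand position into Wall's invisible grid:
# 5 columns x 5 rows (front camera) x 4 depth layers (side camera) = 100 cells.
# Thresholds and the layer-to-content split are assumptions.

GRID_COLS, GRID_ROWS, GRID_LAYERS = 5, 5, 4
REACH_CM = 80.0                      # assumed maximum reach into the grid

def to_cell(x_norm, y_norm, reach_cm):
    """x_norm, y_norm in [0, 1) from the front camera; reach_cm from the side camera."""
    col = min(int(x_norm * GRID_COLS), GRID_COLS - 1)
    row = min(int(y_norm * GRID_ROWS), GRID_ROWS - 1)
    layer = min(int(reach_cm / REACH_CM * GRID_LAYERS), GRID_LAYERS - 1)
    return col, row, layer

def trigger(col, row, layer):
    # assumed split: nearer layers play effects and images, deeper layers play words
    kind = "effects+images" if layer < 2 else "words"
    print(f"cell ({col},{row},{layer}) -> play {kind} #{layer * 25 + row * 5 + col}")

if __name__ == "__main__":
    trigger(*to_cell(0.42, 0.77, 55.0))
```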
I try to avoid difficult operations, as well as mechanical interfaces such as joy-sticks or touch panels. I want participants to be able to interact freely in a three-dimensional space. It is important to me that my interactive work incorporates an environment in which users can easily participate with little information. Participants can continuously concentrate on an act if the interface they use is uninterrupted by intervals, and since the interface area and the viewing position are the same (the outstretched arm points directly to the screen), participants can concentrate entirely on the screen. Of course, since the interface is invisible, it is difficult to conjure sounds and images as accurately as one would like. I hope that participants will enjoy the coincidental combination of sounds and images.
It is critical that the work instantly generates sounds and images that reflect the behavior and reactions of the human participant. I aim for this sense of oneness between human behavior and the response to sounds and images. In the East, space is not empty. It is filled with feeling or energy called Ch’i. Some believe that there are various entities within invisible space, such as feelings, thoughts, energies, and intentions. To cite a familiar example, you may have seen how in Chinese Kung-fu movies, the Kung-fu master directs all his energies to throw back his opponent without touching him at all.
In short, human behavior can easily affect the outer world immediately beyond space. Wall can provide such an experience virtually. I sought to provide an environment in which a participant’s body has a free and direct relationship with the work without touching any equipment. I want the visitor to feel in control of the images and sounds as though there are tiny strings attached to his or her fingertips, and to stir within them a mysterious sense of oneness between the participant and the surrounding space.
Summary
Persistence is a kinetic installation exploring the conflict between geologic and human timescales. It reveals the collective frailty of our memory, and the dire need to preserve lifeforms on this planet. As the robotic arm rotates across a phosphorescent canvas, ultraviolet lasers activate the underlying pigment—revealing fleeting planetary memory.
Abstract
Persistence is a kinetic installation that explores memory and forgetting. The project discusses planetary memory and the new hallmarks of the Anthropocene: endangered wildlife, melting icebergs, and deserted land.
As the six-foot robotic arm rotates across a phosphorescent canvas, ultraviolet lasers activate the underlying pigment – revealing a fleeting image. Each additional pass of the robotic arm, mimicking a clock, invites new opportunities to allow existing memories and images to fade – or to activate entirely new compositions.
The project consists of a rotating kinetic arm with focused lasers. The light is directed to a prepared canvas with phosphorescent pigment.
We have shown previous iterations of the project at events around the world. The project has been re-imagined multiple times in different ways and concepts. The work has always conceptually investigated ideas of memory and forgetting in the digital age. The proposed iteration will be an original version of the project, illuminating important figures of our times that we must preserve and remember.
We are interested in the conceptual conversations around the ubiquity of externalizing our humanity to computerized systems. We are curious about what is lost in the cracks of this push towards converting experiences to digital bytes. Technically, we have been working on this project for almost 8 years. The engineering and research to build our own circuit boards and software to make this work has taken many iterations.
When building the project, we had to experiment with many different phosphorescent materials. The trickiest part was building large surfaces of evenly distributed glowing material. We had to formulate our own paint dilution in order to apply many light coats onto the wall.
We had many smaller engineering learning experiences as well. Rotating this sort of machine continuously required us to consider rotary electrical connectors and to engineer the motorized components so that they would not interfere with the wiring.
One of the early challenges was aligning so many small light sources carefully onto the surface; a great deal of work has gone into simplifying the calibration of the system.
The intention in this series of work is to pay homage to our most primary of tools – the human hand, foot, and head. The delicate and complex anatomy of our collective feet, hands, and heads, and their transcendent efficacy in relation to early tool development and usage, their gestural communicative capacities, and their potency as primal tabulation and measuring devices, gives them a signification that I find fecund with metaphorical meaning and energy. Our rarely examined soles/souls and their evolutionary relationship to the world of high technology are an ongoing source of wonderment and contemplation.
Blue Eyes in the Land of Forgotten Moisture is a piece from a larger body of work entitled The Magic Potter Series. The dominant theme in this series is a self-referential, romantic reflection back to an earlier, pre-electronic life. The artist is searching for the threads of his past life as a potter and sculptor. The vessels in these images function as metaphors for life-containing forms. The setting is an environment consisting of pure light, creating a meditative, transcendent ambiance. The figure is in a state of discovery, exaltation, and sometimes anguish. The images in The Magic Potter Series are an attempt to integrate the warmth and magic of the old with the technological, elusive, and new.
One of a series of experiments with digital illumination in a virtual environment. Pure light and its refractive and reflective manifestations are used as the artist’s paint brush. Compositions are punctiliously constructed and articulated in the ongoing quest for pure and essential interactions and illumination.
An erroneous tip called in to law enforcement authorities in 2002 subjected Hasan Elahi to an intensive investigation by the FBI; after undergoing months of interrogations, he was finally cleared of suspicion. After this harrowing experience, Elahi conceived “Tracking Transience” and opened just about every aspect of his life to the public. Predating the NSA’s PRISM surveillance program by half a decade, the project questions the consequences of living under constant surveillance and continuously generates databases of imagery that track the artist and his points of transit in real time. Although the project was initially created for his FBI agent, the public can also monitor the artist’s communication records, banking transactions, and transportation logs, along with the various intelligence and government agencies that have been confirmed visiting his website.
In the analogue world, electronic signals are based on waveforms. Transmissions of sound waves, light waves, and water waves all use waveforms to transmit vital information directly related to energy distribution, making waveforms an integral part of our daily lives. The artwork Light Storm PLUS uses power generated by waveforms to control the motor of a high-speed rotation device transmitting electroluminescent (EL) cold light. The artwork replicates the shape of waveforms in the real world; thus the light waveforms fluctuate with the same rhythm as they do in the analog world. Through interacting with the artwork, people sense that their bodies are key to the transmission of data, as they become active components in the feedback loop, but also become part of the mechanism of transmission.
Digital Buddha begins with an abstract sculptural object that is without identifiable meaning in the real world. When the camera digitally captures the sculpture, however, the sculpture is transported to the virtual world, decoded in real time, and transformed into a Buddha in the virtual world. The work asks: What is reality, and where does it come from? Is our reality false, and is the virtual true?
When the audience stands near the sculpture, the camera captures both the viewer and the sculpture, thus placing the audience simultaneously in reality and virtuality. They sense the contradiction between reality and unreality. In reality, the sculpture is an abstract shape while the audience appears normal; in virtual space, the audience becomes an abstract shape while the sculpture appears normal. What is reality? In the modern world, virtual things replace reality. Maybe the virtual is the truth.
In the work Digital Buddha, a sculptural object whose abstract shape carries no identifiable meaning is captured by a digital camera and loaded into a virtual world, where it is decoded in real time into a figure of the Buddha. The work attempts to discuss the nature and origin of reality, as well as the falseness of the real and the possible truth of the virtual. When viewers approach the sculpture, they too are captured by the camera and placed into the virtual, and so become aware of the conflict between the normal and the abnormal as it shifts in the exchange between the real and the virtual. As modern virtual things replace reality, the definition of the real may itself be changing.
AND THESE ARE THE NAMES OF THE DAUGHTERS is a series of eight digital prints in light boxes. This series of images addresses the artist’s experience as a daughter, and more specifically, the role reversal that occurred when her mother became terminally ill and succumbed to metastasized breast cancer. Utilizing family photographs and video, her mother’s MRI brain scans, nature, and her own body as source materials, she develops metaphors between matriarchal relationships and the seasons of the year. The artist was forced to continue life with only her mother’s memory as a guide. During the advanced stages of her mother’s illness, the artist became the nurturer and maternal figure for her own mother. The mother became her daughter’s child. By inserting her body into the scans of her mother’s brain, she fills in the memories her mother lost as a result of disease, simultaneously attempting to hold onto her own memories for the future. The medium, illuminated prints, reflects the medical documentation (MRI scans), video sources, and the digital creation process utilized by the artist.
Relics I and II show an LED “billboard” which is no longer viable, a messenger that cannot transcribe the message. No longer able to communicate through a digital signal, it has become a siren beaconing lost communication and digital dependency.
Relic IV shows a trail of 70 mobile phones in snake-like formation following the contours of the rock face. In direct contrast to the natural surrounds, they create an unlikely symmetry with their environment. As a result, they appear to have a greater questioning purpose as (analogue) sculptural objects and metaphors than in their prescribed usage for communication.
Relic V shows a bottle containing streaming videotape which has been dismembered from its original container and placed alongside other “keys” – a metaphor for inherent obsolescence, a message which has arrived without the prescribed messenger of technology to transcribe its contents.
The images are from my Logophobia/Logophilia series. These images explore language’s limitations and the merits and frustrations of these limitations. The Logophobia/Logophilia series is printed onto HP Studio Canvas with an HP Designjet 500ps. The canvas was then cut along the edges (hence the irregular border). The prints are reinforced with fiberglass resin to evoke both skin and parchment, although they were originally spray-adhered directly to the wall. The process of creation involved both traditional studio practices and Adobe PhotoShop 7 and CS. First, player-piano-roll paper was painted with latex. Then I did drawings of songbirds with a Wacom tablet directly into PhotoShop, which I printed and transferred to the latex-painted paper. The roll was then torn into segments and scanned into a Macintosh G4 with an Epson Perfection flatbed scanner. I then took photographs of myself with a Canon EOS D10 and incorporated these into the scans of the player piano paper. Sometimes the photographs were simply used as templates for line drawings. I also included photographs of plant matter and flowers from the Lexington arboretum, the Arizona desert, and the swamps of northern Florida. I also scanned in birdseed and other materials as appropriate to a given image.
This talk explores some of the possible applications of unsupervised machine learning methods in found footage cinema, a tradition of experimental art that re-edits excerpts from existing films. This artistic practice sometimes aims to reconfigure our experience of the moving image heritage. In this context, machine learning algorithms have the potential to capture aspects of the cinematic experience for which we lack critical concepts, and which are for this reason difficult to describe. One important example concerns cinematic motion. Established critical discourse often speaks of motion in film by reference to the movement of objects or the camera. Film scholars might describe a scene by noting, for instance, that a person is walking fast or that the camera is tilting upwards. What is missing in this kind of description is the visual texture of cinematic movement. The two-channel algorithmic installation Errant: The Kinetic Propensity of Images applies matrix factorization techniques to the analysis of optical flow in cinema, focusing on the work of Chinese director King Hu. This method produces a visual dictionary of basic motion patterns that represent what could be described as the "kinetic overtones" of image sequences. The results are then visualized using streaklines, a technique from fluid dynamics. This presentation will discuss the motivation and methodology used in the production of this work, in relation to other work by the speaker. Implications for cinema theory will also be briefly discussed.
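As a rough illustration of this kind of pipeline (and not the artist's actual code), the sketch below computes per-frame optical-flow fields with OpenCV and factorizes them with non-negative matrix factorization into a small dictionary of recurring motion patterns; the file name, frame size, and number of components are placeholder assumptions.

    # Illustrative sketch only: factorizing per-frame optical-flow fields into a
    # small dictionary of recurring motion patterns ("kinetic overtones").
    # Library choices (OpenCV, scikit-learn NMF) are assumptions.
    import cv2
    import numpy as np
    from sklearn.decomposition import NMF

    def flow_magnitudes(video_path, size=(64, 36), max_frames=500):
        """Return a (frames x pixels) matrix of optical-flow magnitudes."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev = cv2.cvtColor(cv2.resize(prev, size), cv2.COLOR_BGR2GRAY)
        rows = []
        while len(rows) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            rows.append(np.linalg.norm(flow, axis=2).ravel())  # per-pixel speed
            prev = gray
        cap.release()
        return np.array(rows)

    X = flow_magnitudes("king_hu_clip.mp4")   # hypothetical input file
    model = NMF(n_components=8, init="nndsvd", max_iter=400)
    W = model.fit_transform(X)   # per-frame activation of each motion pattern
    H = model.components_        # 8 basis motion fields: the visual "dictionary"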
The Gestus project consists of custom software that generates a vector analysis of the videos in a data set and uses this analysis to search for sequences containing similar movements. It then renders similar sequences side by side as a split-screen display, enabling users to compare the movements that occur in them. The software thus brings together scenes from different locations in the story, finding echoes between very diverse moments in the film.
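A minimal sketch of the matching step, assuming each sequence has already been reduced to a motion-descriptor vector (the descriptor itself is not specified in the text): cosine similarity pairs every sequence with its closest counterpart for side-by-side display.

    # Sketch: pair each sequence with its most similar counterpart by cosine
    # similarity over motion-descriptor vectors. Descriptor choice is assumed.
    import numpy as np

    def most_similar(descriptors):
        """descriptors: (n_sequences x d) array; returns index of best match per sequence."""
        X = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
        sim = X @ X.T
        np.fill_diagonal(sim, -np.inf)   # never match a sequence with itself
        return sim.argmax(axis=1)

    # e.g. render sequence i next to sequence matches[i] in the split screen
    matches = most_similar(np.random.rand(200, 64))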
Vox Balaenae (Voice of Whale) is music by the American experimental composer George Crumb. His notation method is unconventional; it’s very fluid and artistic. This piece is an abstract visualization of his music and carries his idea into animation. The music notes fly under water and come from a giant shell with a texture of notes.
For this animation, I used After Effects and Cinema 4D for 2D compositing and 3D animation. I created basic 3D elements in C4D and imported them into AE with cameras and lighting.
PRODUCTION Modeling: HyperNURBS. Rendering technique used most: N.A. Average CPU time for rendering per frame: N.A. Total production time: 90 days. Production highlight: Cinema 4D integrates well with After Effects; most 3D elements were rendered in black and white to save rendering time and colored in After Effects.
SOFTWARE Modeling: Maxon Cinema 4D 8.1. Animation: Adobe After Effects 5.5. Rendering: Dynamics: Compositing: Adobe After Effects 5.5. Additional software: Adobe Photoshop 7, Illustrator 10. OS: Mac OS 10.2.
HARDWARE Apple Mac G4 dual 800 MHz CPU, 1.25 GB RAM.
See Through U is an interactive media artwork for emotional communication that uses a double-sided transparent display. It has articulate value as a new medium, and through its participatory nature it can share emotions and feelings, providing an aesthetic experience.
Emerging from the intersection of sculpture, theater, and engineering, Heidi Kumao's video and machine art generates artistic spectacles in order to visualize the unseen: thought patterns, mental states, emotions, compulsions, and dreams. Through the creation of hybrid art forms (kinetic sculptures, animations, and interactive works), she explores the psychological underpinnings of everyday situations and institutional contexts, such as the nuclear family, mainstream media, and traditional gender roles. Transplant explores the lives of Japanese nationals and citizens who were interned in War Relocation Centers in the dusty desert of California during World War II, and how they cultivated gardens as a creative outlet to survive their confinement. Despite the horrendous conditions, residents of these "camps" constructed beautifully landscaped parks, ponds, and rock gardens. Transplant pays homage to their ingenuity and personal drive to transform gravel into gardens, altering their built environment as an act of defiance. This piece is part of the project Timed Release, a continuing series of intimate theater pieces about surviving confinement, supported by a 2009 Guggenheim Fellowship.
Hardware: VAX 11/780, Grinnell frame buffer Software: by the artists
Studies of transitions from order to chaos, ferromagnetic phase transitions and lattice models
Goethe Institute, San Francisco
I currently make art all the time, obsessively and happily and at the core of my being. I am compelled to create art. Regardless of the media I am using, the goals are always the same. I endeavor to create art that will go beyond surface representation to get to the spirit of the idea.
Although the inspiration for the images is often photographic in origin, the resulting art is mixed-media work that ultimately comes from the synthesis of new digital technology tools with traditional ones such as photography or etching and drawing. I often call this “tradigital” art.
Working with the computer and its associated sophisticated technologies has enabled and empowered me to explore myriad new ideas and to play, risk, and experiment more with them. As all media interact and collaborate with the artist, I find the serendipitous dialogue and the rich possibilities inspiring.
The deep satisfactions that I get from being the agent of the transformative process (making something new out of something else) lured me into recent explorations in which I have collected color and black-and-white newspaper photographs and then digitally collaged fragments taken from them.
In these four images, I used the computer to reassemble and abstract the photos so that they emerged beyond recognition, and so that the black and colored halftone dots of which they are composed became my “brushes” and elemental structural material. I fabricated new images from those components by layering, stretching, re-configuring, and re-coloring them. The source images are subsumed but remain as an armature for the newer abstracted ones that emerge.
I wanted to go beyond the surface representation of the "stories" that I started with, and I worked to get to the essence or the deeper defining aspects of the images. I used Altamira's Genuine Fractals software to take little bits of information and turn them into much larger re-sized images, and the software changed not only the dimensions but opened up and expanded previously almost unseen particles and pieces. Seeing those fragments enlarged my ideas about the original experience (which is already pre-filtered through the lens of a camera) and let me move in unexpected and unrestricted directions. That itself brought me closer to visually representing the feelings I have about the fragmentation of our experiences and the shifting patterns of our lives.
Adrift is an evolving performance work that is streamed live from multiple locations to audiences on the Internet and in distinct geographical locations.
A collaborative work by three artists (writer and composer Helen Thorington, composer Jesse Gilbert, and architect Marek Walczak), Adrift involves an interplay among three environments: text, sound, and VRML 3D, where borders are permeated, new relationships are developed, and the expressive power of the networked medium is made visible and audible.
The content focuses on a harbor, a city, and the human body. As the three artists and their computers pass information back and forth in real time, an interaction among the senses, and among the geographies, scales, and narratives represented in the work is created. The fluid perceptions and the multiple intersecting journeys they suggest lie at the heart of this networked performance.
Premiered at the Ars Electronica Festival in Linz, Austria in September 1997, Adrift has been performed monthly since, including a simultaneous performance in Vienna and Brooklyn. Future performances will include other artists, additional audience locations, and the extended possibilities of new programming and new perceptions.
Archives of earlier performances, including full performances and slide shows with sound, are available on the Web site (www.turbulence.org/Adrift).
As technology revolutionises our public, private and interstitial spaces, shrinking the globe, compressing time and expanding horizons, the reach of an arm and the length of a stride are distorted and confused. Where do flesh, fragile bone, senses and perceptions fit into the new geographies of the late 20th century?
The contemporary desire to escape the physical through technology is a version of the ancient quest to evade the body’s mortality and access “other” existences in spiritual or metaphysical realms. The complex ethical and artistic implications which this contemporary quest embodies are enmeshed in the multiple perspectives on the body incorporated in the work.
Escape Velocity raises questions, without answers, presenting a range of perspectives upon the nakedly physical real body, the technologically mediated body, the absent body, and the disembodied emanations of the mind.
Escape Velocity is movement-driven, mirroring the complex nature of the body's relationship to space, time and place at a moment in history when these parameters are starting to shift under our feet.
In this piece one touches a larger-than-life torso that reacts. The skin of the body moves subtly as one’s hand moves over the surface. If the hand is held down for a moment and then taken away, an impression of the hand is left behind. This imprint soon lifts off the surface and begins to blow around as if it were a leaf or tissue caught in a breeze. Many people can touch the surface at the same time, sharing in the interaction.
The work employs rear infrared illumination and custom computer vision software to track the touches that the camera sees and map them to the curved screen. The touch outlines are converted to polygonal meshes that are placed in a fluid flow simulation that makes them blow like leaves in a breeze.
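A hedged sketch of the tracking stage, assuming an OpenCV pipeline rather than the installation's own custom software: bright blobs seen by the rear-infrared camera are thresholded and their outlines simplified to polygons, which a downstream fluid-flow simulation could then advect. The threshold and area values are placeholders.

    # Sketch of touch tracking: threshold the IR image, find blob contours, and
    # simplify each to a polygon outline (the "polygonal mesh" of a touch).
    # Assumes OpenCV 4; parameter values are illustrative only.
    import cv2

    def touch_polygons(ir_frame, min_area=300):
        """Return simplified polygon outlines of touches seen by the IR camera (grayscale frame)."""
        blur = cv2.GaussianBlur(ir_frame, (11, 11), 0)
        _, mask = cv2.threshold(blur, 60, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        polys = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                eps = 0.01 * cv2.arcLength(c, True)
                polys.append(cv2.approxPolyDP(c, eps, True))  # polygon handed to the fluid sim
        return polys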
When someone touches you and takes their hand away, what do you feel afterwards? Is that tingling warm sensation something of them, or is it created in your body? What remains of yourself when you touch something? If you could see the imprint of your touch, how would it appear? Kaufman's vision was to make it possible to see what you leave behind, turning that ephemeral residue into something visible. By showing people's touches to be semi-physical objects, the work seeks to create awareness with regard to touch and, perhaps, encourage more touching. In general, people need more physical contact to be healthy and balanced. Many Western cultures discourage touch. Whether through fear of litigation or sexual harassment, or uprightness in general, it seems that personal contact has become something to be avoided. One nurse told Kaufman that she was asked to stop hugging the patients in a nursing home even though she knew it helped them. Kaufman hopes that people will spend some time with this piece, that their explorations will lead to meditative reflection about touch and personal contact, and that their experience prompts discussion about touch (whether intimate, friendly, or for therapeutic healing) and personal boundaries in general.
This piece is part of a series composed of three related works. The first, The Memory of Your Touch, explores the material of touch itself by giving physicality and weight to the touches that people leave behind. The second (the present work), The Lightness of Your Touch, explores touch on a human body, specifically a belly, and gives the touches a behavior like leaves blowing in the wind. Finally, the third work, A Touch of Ancient Memories, explores the “touches” as handprints created by our ancestors in cave paintings around the world.
This is a variant of a graphics pattern which appeared on one of the earliest computer graphics systems, the PDP-1 at MIT. The basic pattern is the graph of the x-y coordinates Y=Exclusive-Or(X, T) for successive values of the parameter T. Squares are “eaten away” by a series of four smaller squares along the diagonal. This picture incorporates fifteen of these patterns out of phase in different colors.
Hardware: MIT Lisp computer, frame buffer Software: MIT Lisp
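A small sketch reproducing the pattern described above: points are plotted where y equals x XOR t for successive t, which "eats away" squares along the diagonal. The grid size and the offsets used to overlay fifteen out-of-phase copies are illustrative choices, not the original program's values.

    # Sketch of the "munching squares" pattern: for each t, light the points
    # (x, y) with y = x XOR t. Overlaying several phases approximates the print.
    import numpy as np

    SIZE = 256
    frames = []
    for t in range(SIZE):
        img = np.zeros((SIZE, SIZE), dtype=np.uint8)
        x = np.arange(SIZE)
        img[x ^ t, x] = 255          # y = Exclusive-Or(x, t)
        frames.append(img)

    # overlay fifteen out-of-phase copies (offset of 17 columns is arbitrary)
    composite = np.zeros((SIZE, SIZE), dtype=np.int32)
    for k in range(15):
        composite += np.roll(frames[k], 17 * k, axis=1)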
Hardware: Perkin Elmer Software: DIBIAS
This picture was made with Mathematica, a special software tool for all types of mathematical calculations, research, and visualization, and the graphic software Bryce. The computer is a Pentium Siemens-Nixdorf PCD-5H. Beginning with mathematical experimentation, I developed interesting forms, one of which was imported in a desert landscape I made with Bryce.
Even in my youngest days, I was impressed by unusual pictures from an aesthetic point of view. This interest was never passive. It was an active challenge to develop different methods for producing pictures. During my study of physics, I learned the strange results of scientific photography, and I asked whether it would be possible to use microscopes, x-ray equipment, and oscilloscopes for free visual experiments. Some years later, when the first computer graphic systems appeared, it was clear that this was the best approach for my intentions, and I began my first explorations in this new and fascinating region.
After my first period of generative photography, I began to use computers and mechanical plotters. Since then, I have found more and more interesting possibilities for producing new types of pictures, in 2D and 3D, and also in motion, or interactive, or in connection with music. In one of my most recent series, I returned to my interest in the future and science fiction, so the content of these pictures was fantastic planetoid landscapes with fragments of a strange technology.
This little system is a simple analogue computer, specially built by Franz Raimann to help me make pictures. It is called Oszillogramm because, in principle, the result is the superposition of two electronic oscillation components. The real picture is in motion, viewed on an old cathode-ray-tube oscilloscope. The photo is a slide made from the screen.
TWEET ART, in French TUITART, is a web actualization of earlier postal art and the New York Correspondence School of Ray Johnson, of rubber-stamp art, wall graffiti and posters, imaginary street signage, pills of the Fischer Pharmacy, tags, tattoos, etc. Futurist artists would have enjoyed Twitter's immediate power to diffuse images on the Internet and social media. These meaningful small images, used instead of the 140 characters allowed by Twitter, explore philosophical and ethical questions about today's art, politics, and main social issues. Barcodes, binary codes, the four-letter code (ACGT) of DNA, and financial diagrams are the main icons of the digital age that I explore. A new step in sociological art, with its interrogative aesthetics.
Tweet art is also a way to link fine arts and digital arts again. I work with painting and computing simultaneously. I do not agree with the binary thinking that sets fine artists against digital artists, or vice versa. They have developed an attitude of mutual anathema. They should admit that there is no progress in art history. A computer artwork is not more valuable than a painting on canvas just because it is digital. Technological progress is not an issue for art, even if art is always linked with technologies. The art spirit is not in the computer or the pen, but in the brain and sensibility of the artist. Therefore I am looking for what I call "digital fine arts."
We are pessimistic about the future of an art that seems obliged to run after the most up-to-date hardware. If it is necessary to build nuclear power plants in order to produce two minutes of film after hours of effort, sorry... we will return to our brushes. On the other hand, after working in this crazy field for 10 years (where we began by hand-painting on a printout listing), we have always enjoyed ourselves so much...
Hardware: LSI 11 frame buffer
Hardware: VAX 11/780, Lexidata 3400 display
Hardware: VAX 11/780 Software: Production Automation, Rochester University
Hardware: FACOM M-200 Computer Software: Custom FORTRAN
Hidenori Watanave is researching the arts in the 3D internet (for example, Second Life) and the 3DGIS (for example, Google Earth). He is interested in collaborative work in the realms of architecture and environmental design in tele-existence in the 3D internet.
Spatial design in the 3D internet was established through the Archidemo project (2007-2008), which was selected to be part of FILE 2008 and SIGGRAPH 2008 in Los Angeles. His current experiment focuses on translating 3D internet space into real space through GPS and GIS, using techniques like those developed by Hidenori in the NetAIBO project (2004-2005, Honorary Mention, Prix Ars Electronica) and the ObaMcCain project (2008) of 3Di-chatterbots-space, which was exhibited in Mission Accomplished at the Location One gallery, New York.
The theme of this SIGGRAPH Asia 2008 project is a visualisation of a huge visual archive of SIGGRAPH 2008 Emerging Technologies in the 3D internet domain.
This saccade-based display is a device capable of presenting two-dimensional images using a unique bar of addressable light sources (a column of LEDs). In a dimly lit environment, each time the saccadic eye movement of the observer is detected, the flashing pattern of the column light expands, and ghostly images appear in midair.
Due to the electronic scanning mechanism of the CCD image sensor, certain video cameras are also capable of capturing these floating images even when they are not moving at all. These observations encourage a reflection on the process of vision. Natural and artificial visual systems rely on some sort of active sensory mechanism for exploring the external world, though their temporal scales may be different. We sense and react to the world, and we even use machines that can take pictures without paying attention to these hidden perceptual mechanisms, but understanding and exploiting them may open up new possibilities of perceiving and displaying.
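As a purely illustrative sketch of the principle described above (eye-movement detection and driver electronics omitted), a single column of LEDs can carry a two-dimensional image by flashing one bitmap column per time step; the observer's saccade then spreads the columns across the retina. The column rate and the output callback are assumptions, not the device's actual specifications.

    # Sketch: stream a 2D bitmap through a single LED column, one column per
    # time step. A real device would trigger this on detected saccades and
    # drive actual LED hardware instead of the placeholder callback.
    import numpy as np
    import time

    def flash_columns(bitmap, column_rate_hz=2000, set_leds=print):
        """bitmap: (n_leds x n_columns) boolean array; set_leds drives the LED bar."""
        period = 1.0 / column_rate_hz
        for col in bitmap.T:              # one column per time step
            set_leds(col.astype(int))     # replace with the real LED driver call
            time.sleep(period)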
According to the International Atomic Energy Agency's report on the Chernobyl nuclear disaster, it is academically and socially important to conduct ecological studies concerning the levels and effects of radiation exposure on wild animal populations over several generations. Although many studies and investigations were conducted around the Chernobyl nuclear power plant, there were few audio recordings. It was only twenty years after the Chernobyl disaster that Peter Cusack made recordings in the exclusion zone in Ukraine. To understand the effects of the nuclear accident, long-term and wide-range monitoring of the effects of nuclear radiation on animals is required, because there is little evidence of the direct effects of radioactivity on the wildlife in Fukushima. Immediately following the Fukushima Daiichi Nuclear Power Plant disaster, Ishida (a research collaborator at the University of Tokyo) started conducting regular ecological studies of wild animals in the northern Abukuma Mountains near the Fukushima Daiichi Nuclear Power Plant, where high levels of radiation were detected. Ishida reported that it is essential to place automatic recording devices (e.g., portable digital recorders) at over 500 locations to collect and analyze the vocalizations of target wild animals. For monitoring such species, counting the recorded calls of animals is considered an effective method, because acoustic communication is used by various animals, including mammals, birds, amphibians, fish, and insects. In addition to visual counts, this method is commonly used to investigate the habitats of birds and amphibians. Furthermore, ecological studies of environments near urban areas are being conducted using cell phones. However, it is difficult to use such information devices in the exclusion zone, as these areas have no infrastructure services. It is therefore necessary to develop a monitoring system capable of operating over multiple years and ensuring long-term stability under unmanned conditions.
The artist constructed one Acoustic Ecology Data Transmitter in the Exclusion Zone (Namie, Fukushima, Japan), 10 km from the Fukushima Daiichi Nuclear Power Plant. It transmits and stores a live stream of sound from an unmanned remote sensing station in the area. The artist expects these data to prove useful for studies on topics including radioecology and emerging dialects, and for future observations. This work addresses the dramatic ecological disaster of Fukushima, creating an immersive and experiential reality for the audience. The live sound data is available to the public via the Radioactive Live Soundscape website.
Hardware: Tektronix 4404 Software: By artists
As we approach the millennium, we find that we are unsure of what we are looking for and where we are. The information pollution that befuddles the mind leaves in its wake an uncertain melange of people and machines blinded by information overload.
Our world generates a dizzying conflict of messages, desires, and revulsions within the amalgam of the post-postmodern world. It is about being overwhelmed and underwhelmed. Opposites are compressed into a new intentionally dysfunctional unity. It is about the birth of double and triple reverse psychology, double reverse satire and irony. I, myself, am suffering from plot twist overload.
Blind Date is an interactive installation that uses a touch-screen monitor to invite the viewer to touch an image of a hand. The monitor reclines in a mock seductive pose amid sexy fabric. When the viewer touches the hand on screen, the hand becomes aroused. The tension of machine arousal is heightened by a voice that pleads, "No, oh God, please no!"
This piece underscores the confusion over gender and body issues as we embrace technology. As technology has progressed, we have injected it into many aspects of our lives, including sexual and emotional intimacy. From pin-ups to blow-up dolls, phone sex, and bi-coastal relationships, machines have enabled the distancing of intimate behavior. A recent direction is interactive pornography on the computer screen.
Machines provide a new intimacy. Are we simultaneously intrigued and alarmed by the horribly mixed messages presented by the media and by those that we encounter in our lives?
The Distancing of lntimacy: Toward Machine-mediated Closeness
I would like to share my views as an artist involved with questioning the changing modes of intimacy arising from the influence of technology.
It would be helpful to reflect for a moment on the traditional connotations of intimacy and intimate behavior. The term "intimacy," as commonly defined, includes the innermost, most private or personal thoughts or activities of a person. Intimacy requires closeness and familiarity, and can be a deeply personal expression of thoughts or feelings through voice, touch, or gesture. It denotes a wide range of human behavior—from sexual intercourse to revealing oneself verbally, to the interaction of a painter with a brush and paint, to an auto mechanic rebuilding an engine. Something personal is revealed in each of these acts. For me, the cornerstone of intimacy is personal revelation.
Technology is becoming an intermediary for intimate expression. This creates a physical, emotional, or conceptual distance between the participants in an intimate act. Protected by distance and anonymity, a participant in a discussion group on an electronic network can reveal highly personal information to strangers or to invent alluring fictions. The physical protection afforded through networks acts like a barrier against assaults on the self, thus allowing great latitude for newly invented intimate behaviors.
The new, more distant intimacy is increasingly commonplace. Increasing use of long-distance transportation and communication allows remote contact. As distance is spanned more conveniently than ever before, we will intensify our contacts over distance, while diminishing involvement with contacts of proximity. In fact, for many people, long distance is more than “the next best thing to being there”—it is preferable. Years ago, some found it useful to keep relatives at a safe distance in a nearby town or city. As transportation improved, the perceived separation decreased, so that today many adult children prefer to avoid living in the same time zone as their parents. Some long-distance love relationships work in spite of their difficulties, only to crumble when the partners eliminate the miles between them. Indeed, many find remote intimacy more comfortable. By keeping those close to us at a distance, potentially damaging confrontations are limited.
The physical presence requisite in traditional notions of intimacy is being redefined. Virtual presence through immersion in cyberspace will overtake physical presence. This shift brings profound implications for human interaction. The ultimate cyberspace might be one that cannot be distinguished from original reality. As cyberspace becomes increasingly sophisticated, intimate behavior will increasingly direct itself into virtual reality and away from material reality. As greater numbers of people spend greater percentages of time in the virtual domain, human-to-human communication—and skills of intimacy in general—will diminish from lack of use. In the near future, many of us may experience diminished interpersonal skills. This is already evident among children who become absorbed by video games and television. In a world of deteriorating social skills, the apparently less threatening world of cyberspace may serve as a haven for an increasingly dysfunctional society.
New products will arrive that plunge us deeper into an expanding definition of cyberspace. We can already envision a market for communication devices that mediate between people in conflict. Acting like a language translator for two people with a common tongue, the device listens to the words of each, decodes the essential meanings, strips them of inflammatory, racist, gendered, or other objectionable language, then presents the cleaned-up translation to the other person. Thus, the distancing of intimacy might be viewed in terms of the number of layers of mediation or its complexity.
As we become increasingly accustomed to machine mediation, our tolerance for human patterns of interaction will be tested. Today, most people recognize that humans are more fallible than machines. I can’t help but wonder if we stand at the threshold of a massive human inferiority complex, in conjunction with increasingly mixed emotions toward both people and machines. At what point might someone abandon all attempts to deal directly with others and retreat into the embrace of a machine-mediated reality?
I don’t want to imply that intimacy will necessarily fade away, but rather that new kinds of activities that feel intimate are already evolving. Human-to-machine intimacy is increasing within both mundane and grandiose arenas. An office worker may have more physical contact with a computer keyboard than with any person. In addition, on a non-physical level, the worker most likely has detailed knowledge of the inner workings of her favorite programs and understands how to avoid making them angry (crashing), and can negotiate their quirks and weaknesses. On a grand level, the virtual reality community is pushed by an incredibly powerful drive for immersion into an intimate interplay with machines. To what extent will human-VR intimacy provide a safe, convenient alternative to human-to-human intimacy?
Hardware: Apollo DN 550 Software: H. Kapan
A fairy welcomes spring by colorizing the monochromatic space with saturated colors. The trace of the fairy is expressed by the diagonal stitch of the printed fabric. When the work is viewed from the left, a monochromatic image can be seen, and as the viewer moves to the right, a colored image emerges. Thus, this work encapsulates time passing as the seasons progress.
The base images were rendered in 3D and transferred onto the fabric. Then the fabric was hand-stitched to create physical creases.
This work is one of my series of artworks that is created by transferring 3D rendered images. Digital images are easily duplicated with no degradation, thus threatening the uniqueness of the original work. I am pursuing the creation of digital image-based artworks in which only one original exists. I have chosen fabric as an output medium to transfer my digital images. Fabric is a soft and flexible medium that can be processed. I have manipulated the fabric to create 3D creases, thus the visual impression of this piece changes by the viewing position. I am also trying to express through my work the relationship between non-tactile virtual space and tactile physical space.
Various expressions in computer-generated imagery cannot be realized in hand-drawn paintings. In this work, the image is created in the non-tactile virtual space, transferred to fabric, and then further manipulated to produce 3D relief. The color scheme, in combination with the relief, creates different impressions dependent on the perspective.
In this “bookshelf” installation, video content displayed on the book spines changes as the books are pulled and pushed. The work explores the intersection between daily life and imaginary spaces.
Hardware: Iris 3100, Culler Software: Wavefront, Abel
Fukuwarai is a classical Japanese game in which a paper face is cut into pieces and a player tries to place the pieces onto a model face while blindfolded. It was a popular family game and a tradition that helped maintain family ties, until video games took over the home.
In Digital Fukuwarai, two video cameras capture the images of two participants’ faces. The images are then decomposed into many jigsaw-shaped pieces, which are presented in random arrangements on a screen. The pieces are dynamically updated from the real-time video images and are never still, even while both participants are moving them simultaneously. The participants compete to assemble the pieces into their respective faces. Changing one’s facial expression and moving one’s head is helpful in identifying the connectivity among the pieces. Alternatively, they can simply mix parts of their faces in collaboration. Reorganized versions of their faces, or faces mixed with other faces, may be more attractive than the original faces. In either case, facial expressiveness and head movement are key components of this interactive art.
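A rough sketch of the piece's interaction logic, assuming an OpenCV webcam loop and rectangular tiles rather than the installation's jigsaw-shaped pieces: the live face image is cut into tiles that are drawn at shuffled board positions yet keep updating from the live video.

    # Sketch: scramble a live video feed into tiles that continue to update in
    # real time, approximating the "never still" pieces of Digital Fukuwarai.
    import cv2
    import numpy as np

    GRID = 4
    cap = cv2.VideoCapture(0)
    order = np.random.permutation(GRID * GRID)   # fixed shuffled arrangement

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[0] // GRID, frame.shape[1] // GRID
        board = np.zeros_like(frame)
        for dst, src in enumerate(order):
            sy, sx = divmod(src, GRID)
            dy, dx = divmod(dst, GRID)
            board[dy*h:(dy+1)*h, dx*w:(dx+1)*w] = frame[sy*h:(sy+1)*h, sx*w:(sx+1)*w]
        cv2.imshow("Digital Fukuwarai (sketch)", board)
        if cv2.waitKey(1) == 27:   # Esc to quit
            break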
The mobile phone is rapidly evolving and changing our lives. Not so long ago, it was hard to imagine that telephones could carry a video display. But now, almost every mobile phone is equipped with video display capability, and the quality and performance of these phones continue to improve. Future mobile phones will have a much larger and much more comfortable video display. They might even include a video projection function. In our research, we focus on the ever-evolving mobile phone and use wireless display panels and projectors to realize works of performing art that predict innovative methods of interaction and play with mobile phones.
The artists created two new kinds of performance: • A wireless mobile display panel based on LCD. The panel has a 17-inch display area with a one-centimeter-wide frame. • A wireless mobile projector system. This system uses a projector unit with an LED as its light source and a DLP as its light valve. In both cases, performers can create collective and synchronized video images. The images can be controlled by the performers as they move and arrange the panels or change the orientation of projectors that synchronize with the video content. For example, as they follow the capricious movements of a person in the displays, the performers arrange the panels to continuously display the person's whole figure. They may have to quickly change the configuration from vertical to horizontal when the figure moves from a standing position to a lying position.
In the process of making, I try to listen to things and see how things are working together to form a bigger something. My work is an expression of the movement of human activities.
This CG animation work presents the enjoyable rhythms of rolling dice. Many movement patterns of rolling dice were captured, and then surprising movements were selected. The work consists of one continuous take, one scene rolling into the next as interesting movements are expressed in the chain reaction of the rolling dice. Maya was used for the production. We wrote a software tool in the MEL script language that can control dice in arbitrary directions and at arbitrary speeds. As the dice landed and collided with each other, position and time data were collected by Maya and exported to the MAX/MSP sound software to generate the audio track and synchronize it with the movement of the dice. These data were also used to create the melody.
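A hedged sketch of that data hand-off (not the actual Maya-to-MAX/MSP patch): each collision event carries a time and position, and position is mapped to pitch and stereo pan so the audio stays synchronized with the rolling dice. The mapping constants are invented for illustration.

    # Sketch: turn exported collision events into note events for a sound engine.
    # The pitch range, octave span, and x_range are illustrative assumptions.
    def collisions_to_notes(events, x_range=(-10.0, 10.0)):
        """events: list of (time_sec, x, y, z); returns (time, midi_pitch, pan) tuples."""
        lo, hi = x_range
        notes = []
        for t, x, y, z in events:
            pitch = 48 + int(24 * (x - lo) / (hi - lo))   # map x position onto two octaves
            pan = (x - lo) / (hi - lo)                    # left/right placement follows the die
            notes.append((t, pitch, pan))
        return notes

    print(collisions_to_notes([(0.50, -3.2, 0.0, 1.1), (0.82, 4.7, 0.0, -2.0)]))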
Moving from digital output to traditional art media and permanent, pleasing output raises several issues. In this piece, a homemade three-axis milling machine and a plotter were used to generate the third dimension: depth. With a cone-shaped bit, depth translates into line width; a plane was then defined, covered with a fractal line, and attached to a depth/width pattern that repeats along its course.
This was all done in a few lines of code. Fractals can be beautiful, yet simple.
The resulting coordinates, translated into step-motor input, direct the bit into the maple in one single path from start to end – an elegant process, adapted to the tools at hand. The resulting carved woodblock can be the source of many pleasing experiments.
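A minimal sketch of this kind of path generation, under assumptions not stated in the text (a Koch-style fractal, a four-value depth pattern, and a 45-degree cone half-angle): the fractal polyline is paired with a repeating cutting depth, and the cone-bit geometry converts each depth into a cut width.

    # Sketch: fractal polyline + repeating depth pattern + cone-bit geometry
    # (cut width = 2 * depth * tan(half_angle)). All values are illustrative.
    import math

    def koch(p0, p1, depth=3):
        """Return the points of a Koch-curve refinement of segment p0-p1."""
        if depth == 0:
            return [p0, p1]
        (x0, y0), (x1, y1) = p0, p1
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a, b = (x0 + dx, y0 + dy), (x0 + 2*dx, y0 + 2*dy)
        # apex of the equilateral bump between a and b
        cx = (a[0] + b[0]) / 2 - (b[1] - a[1]) * math.sqrt(3) / 2
        cy = (a[1] + b[1]) / 2 + (b[0] - a[0]) * math.sqrt(3) / 2
        pts = []
        for s, e in [(p0, a), (a, (cx, cy)), ((cx, cy), b), (b, p1)]:
            pts += koch(s, e, depth - 1)[:-1]
        return pts + [p1]

    path = koch((0.0, 0.0), (200.0, 0.0))
    depth_pattern = [0.5, 1.5, 2.5, 1.5]   # mm, repeated along the path's course
    toolpath = [(x, y, depth_pattern[i % 4],
                 2 * depth_pattern[i % 4] * math.tan(math.radians(45)))  # cut width, mm
                for i, (x, y) in enumerate(path)]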
This piece is but one of many possibilities: a frottage or rubbing, as made from petroglyphs by archaeologists, Chinese scholars, and artists. Max Ernst in particular was fond of the medium ("Histoires Naturelles"). The graphite lead rubbed on the sheet laid over the block marks only where wood was not removed.
The concept of this work is separating landscape scenes from the people and objects that occupy the street space. The source of the idea is background-subtraction programming developed at the Tele-Immersion Lab at the University of California, Berkeley. Viewers see street scenes from two different locations, one from somewhere in the USA and the other from the SIGGRAPH Asia 2008 Art Gallery. Viewers experience three different ways of digitally visualising animated street scenes in which they themselves are included.
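For readers unfamiliar with the technique, here is a hedged sketch of background subtraction using OpenCV's MOG2 model rather than the Tele-Immersion Lab's own code; the input file name is a placeholder.

    # Sketch: separate moving occupants from the static street scene behind them.
    import cv2

    cap = cv2.VideoCapture("street.mp4")   # hypothetical street-scene source
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                       # 255 where movement occurs
        people = cv2.bitwise_and(frame, frame, mask=mask)    # occupants only
        background = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
        cv2.imshow("occupants", people)
        cv2.imshow("scene without occupants", background)
        if cv2.waitKey(1) == 27:
            break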
I observed old paper through a scanning electron microscope and discovered natural shapes within it. This invisible nature was projected back onto nature. With the help of an electron microscope, it was possible to express invisible nature and to provide differentiated visual insights through the grafting of science and art.
In extending Nanography to cinematic projection (various experiments based on the act of "seeing"), the works show that the attempt to present a new perspective on the act of seeing proceeds in various stages. The major works are motivated by comparing images of old and new Hanji (traditional Korean paper) taken with an electron microscope. In the image of the old Hanji, Mother Nature is engrained with the accumulated traces of time. The image resembles mountain scenery: there is soil, trees grow, flowers bloom, and fruit is born. With this motive, the background of the work turns to nature. The photographic works were gathered by shooting all over the country, across times and seasons. They highlight contingency rather than intentionality and enhance fictitiousness by blurring the line between the actual forest and the virtual reality synthesized with a nano-image. Why not imagine that the screen-like image set in wild nature is the screen of an outdoor theater? By stimulating the emotional code of a fictional drama, it spurs us to recall movies based on a specific situation. This work gave the artist an opportunity to naturally develop a sense of improvisation and direction in the field and to integrate it into other cultural areas. My nano-image was projected behind a scene on the stage of a documentary film starring a pianist, as part of a theater stage set, on a small village on Jeju island, and on a house designed by Seung H-Sang 18 years ago. The space of life and the space of fiction become more romantic because of the fictional clothes they wear for a while. As the project progresses, the cultural sensitivity becomes more intense against the backdrop of science.
It all began by observing old paper with a scanning electron microscope, an instrument that uses electrons to observe the surface of paper at magnifications from hundreds to tens of thousands of times. Through this it is possible to accurately observe the natural appearance of the subject in high-depth, high-magnification images. For this observation, Coxem's scanning electron microscope (CX-200Plus) was used to observe and photograph at magnifications of 500 to 50,000 times, making it suitable for photographing the subject's nature with a high depth of field. The surface of the old paper observed through the electron microscope showed various sediments and microorganisms rather than the grain of the paper. This was similar to the nature that we often see. The nature provided by time, such as the dirt, the trees growing, and the flowers blooming on the paper, resembled the mountain scenery we commonly see (Figure 1). We wanted to present a new perspective on human viewing behavior by simultaneously expressing invisible nature and the nature of everyday life. For the simultaneous expression of the two spaces, we produced a mobile system capable of projecting with its own power. By moving to a natural space in everyday life, an electron-microscope image of old paper was projected onto nature and photographed. It is a new artistic expression technique that evolved by grafting the micro world onto the dépaysement technique of the surrealist painter René Magritte, which creates a strange and unfamiliar scene by removing a specific object from its common-sense context and placing it in a disparate situation, thereby shocking the viewer.
Today’s cutting-edge filmmaking is all about creative collision and inspired convergence, as artists from wildly different backgrounds use computer-based tools to create exhilarating, hybrid projects that unite music, storytelling, video games, animation, music video, abstract art, and the lines, symbols, and logos from the world of graphic design. And this is the sort of work favored by RESFEST, an international, traveling festival of innovative shorts, music videos, design films, and features.
RESFEST, along with RES Magazine, was founded in 1997 as a showcase for work that employs digital technology (whether in the form of new DV cameras, desktop-editing systems, or animation applications) in innovative ways. Early on, the festival’s goals were to meet with filmmaking communities across the world, sharing ideas and techniques, and to offer a venue for cutting-edge work not screened anywhere else. The festival has grown steadily over the last several years, and it now travels to 10 cities on six continents.
As digital tools have become ubiquitous in both the production and post-production spheres of filmmaking, the festival's digital emphasis has been replaced by the simple desire to showcase projects that push the boundaries of form and storytelling.
This best of RESFEST screening presents a survey of projects from the 2002 festival: experimental documentaries, music videos, design films, animations, and live action/animation hybrids. In each project, filmmakers leave behind musty notions of realism in favor of creatively exploring the collision of the real and the fabricated in a series of kinetic, often music-based projects that delve into peculiar corners of history, offer the perfect depiction of a data-obsessed culture, or portray the delicate ripples of love and loss. In each project in the show, viewers will find some thrilling twist when the depiction of reality gets tweaked!
In Vietnam, mediumship remains a popular practice that enables the living to communicate with deceased loved ones or to ask the spirits for wealth or for the protection of their families. When a spirit occupies a medium, he or she often dances freely to the ritual music. The background music in this work is a mix of ritual music and popular music from Vietnam. The artists re-performed a medium ceremony and used a depth toolkit and Kinect as another type of medium, to digitally seek the substance of this cultural phenomenon.
In the past, large-scale, high-quality, computer-generated art was difficult and expensive to produce. Pricey software, supercomputers, and rare output devices were all needed. Similar to most technologies, as computer-generated art develops, the process becomes cheaper. Open-source software, inexpensive home computers, and online service bureaus have transformed computer-generated art from an elitist hobby to simply a set of cheap tools to create art.
This collection was created using only open-source tools on a home computer and printed on archival-quality photographic paper using an online service bureau. The final result is a large-scale print for less than 100 dollars, including tax and shipping.
The prints presented at SIGGRAPH 2003 focus on creating complex images from simple shapes and colors through photon mapping and other photorealistic processes. The two images taken from the series “Orange Test” are high-resolution versions of test images from an upcoming HDTV animation. Although they are only stills from an animation, they stand on their own when printed at high resolution.
This is most apparent in "Orange Test 1: Caustics," which consists only of an elongated semi-transparent orange sphere, a white plane, and two spotlights. From simplicity, an unimaginably complex image arises. Beautiful yellows emerge from oversaturation. Quantization errors in the photon-mapping process yield complex hairline curves that, even at 48 inches x 27 inches, one must be within a few inches of the image to see.
Hardware: Harris frame store, 3M digital wipe generator
This performance piece includes three municipal safety vests, each embedded with 840 red and green LEDs to display 24 characters per vest. The LEDs are the kind generally used to display information in public spaces: crosswalk signals, commuter information, and consumer advertising.
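Those numbers imply 840 / 24 = 35 LEDs per character, which corresponds to a standard 5 x 7 dot-matrix cell; that layout is an inference from the counts, not something the artist states. A toy sketch of filling one such cell:

    # Toy sketch: one character rendered into a hypothetical 5x7 LED cell.
    FONT_5x7 = {
        "I": ["#####",
              "..#..",
              "..#..",
              "..#..",
              "..#..",
              "..#..",
              "#####"],
    }

    def char_to_leds(ch):
        """Return a 7x5 matrix of 0/1 LED states for one character cell."""
        return [[1 if c == "#" else 0 for c in row] for row in FONT_5x7[ch]]

    assert sum(map(sum, char_to_leds("I"))) <= 35   # one cell never exceeds 35 LEDs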
The messages on the front and back of the vests relate to the psychology of the individual within a specific public space. They are the secret thoughts that we urgently hide in public. Instead of expressing those inner thoughts, it is common for people to avoid eye contact and other forms of communication when in close proximity to others. People focus on public information or advertising rather than connecting with other people.
In an earlier version of this work, vests were worn by three individuals who rode an evening commuter train in Chicago. The vests said: “I want to fart … make me happy.” “Took my seat … I am not happy!” “Look you are trapped … are you happy?”
In this version, I changed two of the messages to reflect thoughts that attendees might type into a computer inquiry during SIGGRAPH 2006: “Took my idea … I am not happy!” “This is crap … Are you happy?”
There is a “flow” to Hoyun Son’s materials. There is no hi-tech or lo-tech to her constructions. They channel a desire to create. Instead of merely creating function, they question function. Hoyun Son uses her hands to crochet circuits into the vests, or to stencil words on vests. Technology is politicized and contextualized within time. Crocheting was considered technology centuries ago upon invention, and circuits are currently technology as well, but landline phones are slowly becoming antiquated. Hoyun Son combines different ages within the material to create and overturn notions of time and function, which channels the energy of creativity and, in a sense, creates spontaneity of unification.
Deeply rooted in mathematics, technology, art, and music, and anchored on Linux and Android, the artist conceives, initiates, and establishes paradigm-shifting concepts and builds digital artworks from scratch. There is no art, because science is the governor of art. Since 2007, the artist has discarded the resource-intensive 3D modeling method and replaced it with equations, the ultimate eco-friendly one-step process for building static and/or animated, stereoscopic 2D, 3D, interactive, and web-based digital art. The latest (2013): playing and dancing with zero-gravity 3D objects, structures, and/or membranes created for Android-based smartphones to realize touch-and-go screen performances anytime, anywhere.
Hardware: VAX 11/780, 480x640x32 bit frame buffer Software: ray-tracing and sub-division algorithms
Hardware: Apple II+, Epson MX-80 printer Software: 6502 Assembler-H. Hohn
This animation is based on sketchbooks and a storyline provided by internationally known Los Angeles painter Gronk. Examining the flashpoint in a creative idea, the work begins and ends with a seed pod struggling to burst open. The story moves from a rock-strewn desert to a flaming mind, following an artist as he is pulled up into a hovering glass brain and broken apart. The fragments eventually coalesce into a new work of art. The landscape and figures were modeled from Gronk's sketchbooks; the textures were created using scans of the artist's sketches and photos of his paintings and the murals in his Spring Street residence. The Glass Brain was modeled after a series of Giant Glass Brains Gronk made during a residency in Tacoma, Washington.
Hardware: Apple Macintosh II. Software: PixelPaint.
This animation deals with oppression, torture, and hope. It is based on the Tibet-China relationship over the last 50 years, and it focuses on extreme violations of basic human rights. The story is told through the eyes of a prisoner who recalls the events of life in captivity.
Mantra is a digital panorama that presents a hyper-realistic landscape in which the collapse of body language and the sounds generated from digital objects and humans are admitted into the digital scenario, giving birth to a new cycle of synthetic beauty. Inside the endless communication between them, a new experiential form of vision emerges.
The way of approaching natural phenomena in the West is very different from the East in terms of how each group thinks about “things” in the world. I think that the main sense of nature in the East is of “universal events” which are too broad and ambiguous to define only one “thing” in the world, but do allow for the spontaneous awareness and abstractness of time and space. It can be interpreted by the Buddhist idea of transient moments from personal experience when we actually identify ourselves as a part of nature, which gives us a greater respect for nature and reveals ourselves to our own minds. To me, it was the moment when I picked up one yellow leaf and felt that this was no different from my own existence, living in this time and space but also going through the cycles of life; much like floating waves, an idea which is hard to grasp and form into a structure. The “thing” in Western ideology has a more dimensional quality, creating an identical and mechanistic sense of nature. For example, if I would say “one bright day of disaster,” there should be some understood damage or physical destruction that follows. This type of “analyzed physicality” is a significant part of the Western sensibility.
In the “Overgrown Artificiality,” I wanted to build up an ironical virtual assumption of futuristic views, which may speak to the negative side of natural disaster on a bright day in a new age that people might dream about. The yellow trees represent a sickened and altered nature as a metaphor for the human mind in some stage of purification. In the “Overgrown Artificiality,” the Western idea of substance and physical matter which is transitory in time and space has combined with the Eastern sense of value that relies on the “thing as an event,” but still encapsulates the Western physical value and perspective.
The aim of my work is to combine digital technology with analog thought process. I created the images on the computer in an attempt to express something about the human experience that anyone could relate to. Various modes of thinking float around our daily lives. This work tries to express our own inner world which is separated from conscious thought in the actual world. I would also like to express something that breaks away from the typical computer graphics image and moves it beyond the technology. I experimented with brush strokes to create abstract images from those ideas. Meditation is one of those abstract images and is printed on Korean traditional paper called Hanji.
The concept of Meditation is very Korean, especially the fact that the image is printed on Hanji. This is very challenging in digital art. The brush strokes, created in Photoshop, represent a variety of techniques. This work will help people understand Korean emotions and culture through digital art.
Statement: Dinner Party is an interactive installation in which a single chair and a place set for one person seems to provide a solitary dining experience. However, the interaction offers a communication between oneself and imaginary creatures. A participant sits down at an interactive table on which are placed several objects the participant can move as if she or he is about to enjoy a meal. The objects cast virtual shadows on the tabletop with animated creatures hiding in the shadows.
Dinner Party is an interactive installation, where a single chair and a place set for one person seem to provide a solitary dining experience. However, the installation offers an interaction between oneself and imaginary creatures. As if she or he is about to enjoy a meal, a participant sits down at an interactive table on which are placed several objects that he or she can move. The objects cast virtual shadows on the tabletop, with animated creatures hiding in these shadows.
Among our everyday habits, having a meal is a banal routine. With tabletop technology and computer vision, however, a diner encounters a magical moment where imaginary creatures appear during the meal. Meaningless everyday gestures become meaningful when a participant touches the point of entry into a new world. Dinner Party provides an environment where people meet and interact with Lewis Carroll’s “Jabberwocky” (1872), which describes creatures hiding in the shadows. There is a chair, a table, and a table setting for one person’s dinner. The table becomes the interactive platform between the participant and the imaginary creatures living in the shadows of the table setting. Creatures move from the main plate’s shadow to other shadows while scattering or hiding in between. When the participant waits long enough, the creatures reveal themselves and the “Jabberwocky” poem appears on the table. In our solitary modern society, an imaginary friend is able to make our loneliness disappear.
Space takes on multiple definitions. I understand space as the sum of cultural and social forces that act on me. Through space, my body feels all changes around me instantly and intimately. When I moved from Korea to the United States, my body became a gauge that not only felt my displacement and recognized the conformity inflicted on me in the United States, but also allowed me to deconstruct the hometown rules that I had taken for granted as normal.
In my video piece, I attempt to convey the feeling of displacement and conformity by the act of walking. I walk forward, and other people seem to be walking backward. However, in the real scene I was walking backward, and I simply reversed the video. The space of being neither here following correct rules nor there following incorrect rules is precisely what I try to convey in this video.
We imagine the virtual space as being a stable state of energies. As a human enters the space, the space starts to oscillate. The state of equilibrium is broken, and the energy of the human body spreads to the space. As in Hegel’s pattern of dialectical reasoning (thesis, antithesis, synthesis), we defined the relationship of the space and the human in three phases:
1. Recognition
The space shows the horizontal lines that represent a stable state. As the performer enters the space, the space starts to oscillate. The noise increases as the performer approaches the space, and the lines are distorted according to the shape or movement of the performer. The sound also is synchronized with the same data used for the image, generating a granular process.
2. Confrontation
The space starts to conflict with the human body. The performer and the space oscillate to compete for predominance. The image represents the reaction of the space such as changing dimensions, colors, and speed according to the movement of the performer.
3. Fusion
The energies from the space and the performer are mixed. The colored particles in the image represent the energies from the human body. The particles fill the space and the space reaches a new equilibrium state that includes the energy of the performer.
The motion of the performer is detected by a DV camera, and data such as the speed of activity and the position of the performer in the space are shared among the computers and mapped to parameters for processing sounds and images in real time. The movements and energies of the virtual particles induced by the performer are used as parameters for the particle-like sound.
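An illustrative mapping only, not the performance's actual patch: the performer's normalized position and speed are scaled into parameters for the image (line distortion, particle count) and the granular sound process; the parameter names and ranges are invented.

    # Sketch: scale camera-derived motion data into image and sound parameters.
    def map_motion(x_norm, speed_norm, phase):
        """x_norm, speed_norm in [0, 1]; phase is 'recognition', 'confrontation' or 'fusion'."""
        params = {
            "line_distortion": speed_norm,           # lines bend more with faster movement
            "noise_level": 0.2 + 0.8 * speed_norm,   # granular density follows activity
            "pan": x_norm,                           # sound follows the performer's position
        }
        if phase == "fusion":
            params["particle_count"] = int(2000 * speed_norm)   # colored energy particles
        return params

    print(map_motion(0.3, 0.7, "confrontation"))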
In screen-based experiences, the screen itself can become an interactive element. The "moveability" of the screen affords interactivity between the screen artifact and the viewer, and between the virtual space and the physical space. Cross-Being: Dancer features a movable screen interface, a spinning screen based on a two-sided monitor mounted on a revolving base. User interactions with the spinning screen can support diverse temporal and spatial responses, thereby enriching users' experiences. The spinning screen enables viewers to grasp the interplay between visibility and invisibility, creating an aesthetic experience. The angle and direction of rotation affect the displayed visuals and audio output. Inspired by a toy for young girls – a ballerina figure on a spinning plate – a virtual dancer on-screen spins along with the physical screen as the user spins it. Cross-Being: Dancer aims to explore the "doubling effect" of visual illusion that takes place between the physical and virtual worlds, and between visibility and invisibility.
In this image, the character is heading toward the mirror to find his alter ego inside. An image of Russian sculpture in the background enhances the environmental atmosphere and emphasizes what kind of society he lives in.
Sumisan is the Korean name of Sumeru. Sumeru (sumisan) is a mountain at the center of the world where the historical Buddha found the bright truth of Buddhism. A young monk’s (dong-ja) search for the lotus flower of Sumeru is animated through digital shape techniques, but these techniques are grounded in traditional Korean artistic concerns: paper, shape, and color.
CURVEillance is an interactive art installation that critiques this phenomenon through the vision of cameras that track audience members as they approach the surveillance cameras. Ultimately, the installation aims to raise questions about the distorted relationships of individuals who continue to use media systems in the digital era.
It is an inevitable fact that social interaction nowadays is heavily mediated and distorted through social network systems. The presence of various media replicates our images, reproducing them under the confirmation biases of our cognitive processes. In this process, a distorted gap opens between the original images and the reproduced images that reveal our identity. The relationships formed through this refracted self-image affect the user and other users, and eventually form a complicated surveillance system.
CURVEillance is an interactive art installation that critiques this phenomenon through the vision of cameras that track audience members as they approach the surveillance cameras, in the following steps: 1) digital images on the media wall are shown as a re-pixelated visualization of the audience’s image as seen through the cameras, and 2) in order to capture the audience’s movements, the camera system actively moves and follows them.
Specifically, each camera lens automatically reacts to and stares at the most active individual in the exhibition space in real time. The media wall presents the reflected images of audience members as objects observed by a crowd of cameras. In this process, images are scattered and overlapped so that the original form becomes hard to recognize. Some participants attempt to get the cameras’ attention even though their image may be degraded, while others are unintentionally monitored. The work thereby induces a reversed interaction between participants and the media wall that brings tension from the technical eye. Ultimately, the installation aims to raise questions about the distorted relationships of individuals who continue to use media systems in the digital era.
Team “Q.U.I.T” makes art-science projects using interactive media walls. We explore theories of the human mind and computational vision technology to reinterpret and critique beliefs about technology use in our society. “Q.U.I.T” stands for “Quantifying, User, Interaction, and Technology”; the team is composed of engineers, artists, and an art curator based in the Department of Culture and Technology at the Korea Advanced Institute of Science and Technology (KAIST).
We try to carry out a particular concept regarding the implicit thinking processes that people experience under mutual surveillance and oppression through various forms of technology. Specifically, in CURVEillance, we focus on how our image can be represented by distorting, curving, and dispersing images consisting of pixels in the age of digital surveillance. The goal of CURVEillance is to redesign and show the reversed communication between the media and participants through the cameras’ vision. Instead of participants affecting the media wall system through one-way interaction, the technical system draws audiences toward the media wall with an arranged set of camera eyes that follow and capture their movement. Through this process, the work reflects the multi-directional perspectives of audiences, cameras, and the media wall system, as in an era of surveillance.
There are two main components of the system: a visualization part that receives an image from a camera and shows it on a large screen, and a motion part that tracks and evaluates a user’s position and controls the motor modules. We used a separate computer of the same model for each part: an MSI GE63 8RF with an Intel i7 at 2.2 GHz, 24 GB of RAM, and a GTX 1070.
The motion part consists of a motion sensor (Microsoft Kinect v2), a control board (Arduino Uno), a custom motion module, and a controlling computer. To implement the integrated system for the exhibition, we used the Unity engine for the primary system, together with the MS Kinect SDK for Unity and the Ardunity plug-in. With these, we implemented the whole pipeline of data acquisition, processing, and motion control. The system visualizes its current state in real time, which lets us monitor both the interaction with the audience and the application’s calculations as they happen.
The motion module has two servo motors. They are installed facing the same direction but are not vertically aligned. This arrangement goes beyond the linear motions typical of servo motors to achieve unfamiliar motion effects. The motion module frame was modeled in Rhinoceros 6.0 and printed on a 3D printer (Zortrax M200). We designed the movement module to combine with the stand below and the camera module above.
The motion data of audience members are received through the Kinect and evaluated by our system. Based on our target-finding process, the system identifies the most active person as the target. We assessed individual activity from the movement of the head and hands, since these are the parts an audience member moves most naturally. Once the most active audience member is targeted, all cameras rotate toward them.
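A minimal sketch of this target-finding step, assuming joint positions like those the Kinect SDK provides (the joint names, test data, and camera position here are illustrative, not the installation's code):

# Illustrative sketch: score each tracked person by head/hand displacement
# between frames and compute the pan angle a camera module needs to face them.
import math

def activity(prev_joints, joints):
    """Sum of head and hand displacement between two frames (metres)."""
    return sum(
        math.dist(prev_joints[j], joints[j])
        for j in ("head", "hand_left", "hand_right")
    )

def pick_target(prev_frame, frame):
    """Return the id of the most active tracked person."""
    return max(frame, key=lambda pid: activity(prev_frame[pid], frame[pid]))

def aim(camera_xz, target_xz):
    """Pan angle (degrees) from a camera position to the target, on the floor plane."""
    dx, dz = target_xz[0] - camera_xz[0], target_xz[1] - camera_xz[1]
    return math.degrees(math.atan2(dz, dx))

# Toy frames: person id -> joint name -> (x, y, z) in metres
prev_frame = {1: {"head": (0, 1.6, 2), "hand_left": (-0.3, 1.2, 2), "hand_right": (0.3, 1.2, 2)},
              2: {"head": (1, 1.7, 3), "hand_left": (0.7, 1.3, 3), "hand_right": (1.3, 1.3, 3)}}
frame      = {1: {"head": (0, 1.6, 2), "hand_left": (-0.3, 1.2, 2), "hand_right": (0.3, 1.2, 2)},
              2: {"head": (1, 1.7, 3), "hand_left": (0.5, 1.5, 3), "hand_right": (1.5, 1.5, 3)}}

target = pick_target(prev_frame, frame)
tx, tz = frame[target]["head"][0], frame[target]["head"][2]
print("target:", target, "pan:", round(aim((0, 0), (tx, tz)), 1), "deg")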
In the sound installation, windmill ambience noise was composed using Ableton Live sampling.
The goal of CURVEillance is to redesign and show the reversed communication between the media and participants through the cameras’ vision: instead of participants affecting the media wall system through one-way interaction, the technical system draws audiences toward the media wall with an arranged set of camera eyes that follow and capture their movement. The most challenging issue was the tracking system, which must follow the audience at the exact moment of movement; this was accomplished by our team member Hyunchul Kim.
Bibigi is a music-scoring and sequencing device based on computer vision technologies. Pitch, timbre, and loudness are computed based on the hue, saturation, and intensity of images. This performance poses questions about the similarities between animals and autistic human beings. It recognizes the differences among autistic patients and seeks to understand their behavior and psychology by analyzing animal neurology, zoology, behavioral science, genetic biology, etc. The result is a conversation among different species that mixes three sources with an interaction in one space.
RGB images are transformed into the HSI model of hue, saturation, and intensity. Pitch, timbre, and loudness are computed from the transformed images: pitch is mainly decided by hue, timbre is mainly decided by saturation, and loudness is controlled by intensity. This conversion rule is based on the analogy between color and sound in terms of their physical properties.
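A minimal sketch of this color-to-sound rule, using Python's built-in HSV conversion as a stand-in for HSI (the pitch range and number of timbres are assumed, not taken from the work):

# Illustrative sketch: derive pitch, timbre, and loudness from a pixel's colour.
import colorsys

def colour_to_sound(r, g, b):
    """r, g, b in 0..255 -> (midi_pitch, timbre_index, loudness)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    midi_pitch = 36 + round(h * 48)      # hue spans four octaves (assumed range)
    timbre_index = int(s * 7)            # saturation selects one of 8 timbres (assumed)
    loudness = v                         # intensity drives amplitude, 0..1
    return midi_pitch, timbre_index, loudness

print(colour_to_sound(200, 40, 40))      # a saturated red -> low hue -> low pitch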
Inclination of Time consists of two photos that metamorphose over time. Time is an incognizable movement with consistent intention, while photography is a tool to immortalize the moment of time. I attempted to reconcile time with photography in this particular work. I attempted to revitalize the frozen time captured in the photos by establishing the lost continuity between two photographs. The baby in the first photograph changes to a father, and the same baby in the second photograph changes to a mother. The changes in the photographs are nearly undetectable, since the metamorphoses in the photos take place slowly over a long period of time. The changes in the photos are intended to be as slow as possible, to recreate the incognizable nature of time. Time is perceived as a linear or circular movement in consciousness and subconsciousness, but time reveals itself in continuance. In this work, I tried to show both linear and circular movements of time in perception.
First, time is perceived as past, present, and future in human consciousness. It is said that the past lies in memories, the present is intuitively perceived in the mind, and the future is glimpsed in expectations. The linear movement of time through past, present, and future is cognized by the intuitive association of the mind in consciousness. In my work, the fact that a baby becomes an adult in the photos shows the linear inclination of time. It also shows the substance of human beings as “becoming,” bounded by time. Secondly, because of both the inclination of memories toward expectations and the inclination of expectations toward memories, the inclination of time can also be perceived as circular in consciousness. In subconsciousness, past, present, and future coexist, as in dreams. Time is not linear but circular, or coexistent, in subconsciousness. For example, a couple can’t become parents without a baby of their own. That is to say, having a baby affirms being parents. When a baby metamorphoses into a mother and a father in these photos, an effect asserts or dictates a cause, in a sense. The baby in the photos is a repeated memory of the parents’ childhood, while at the same time the adult in the photos is the future that the baby expects to become. Also, every frame is morphed with parts of the other frames, so, in a way, time within this work is coexistent. I wanted to let the inclination of time reveal itself in its incognizable continuance.
Imagine yourself walking along a calm beach at night. It’s dark, and all you hear and feel are breaking waves and sand between your toes. You sit on the beach and start digging up the sand, creating ditches and hummocks. Right at that moment, light swells up from the bottom of your terrain, and it gradually becomes a sea of clouds. Suddenly your ditch becomes a giant valley, and small hills rise up through the clouds, forming a cordillera. This is what you experience with the piece Mont. Mont is an interactive installation composed of a projector, a depth-sensing camera, and a computer. The camera senses the volumetric topology of the ground below the piece, and the projector casts a hallucination of clouds, giving spectators the illusion of mountains and canyons. Mont offers a playground of creation, giving people a god-like ability to create worlds of their own. In a sense, Mont is a meta-creation, a magnificent sculpture delineating the wonders of nature while still existing as a creative medium. Utilizing raw materials, Mont seamlessly fuses itself with nature. Noninvasive by design, Mont bridges the gap between the real world and digital new media by enabling collaboration between technology, humans, and nature.
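One plausible way to realize the depth-to-cloud mapping, sketched under assumptions (the cloud level and thickness values below are hypothetical, not taken from the piece):

# Illustrative sketch: turn a depth map of the sand surface into a "sea of
# clouds" image, where dug-out ditches fill with bright cloud and raised
# hummocks poke through as dark peaks.
import numpy as np

def clouds_from_depth(depth_mm, cloud_level_mm=1200, thickness_mm=80):
    """depth_mm: 2-D array of sensor-to-sand distances (larger = lower sand)."""
    # How far below the cloud ceiling each point of sand lies, 0..1
    submersion = np.clip((depth_mm - cloud_level_mm) / thickness_mm, 0.0, 1.0)
    # Brightness of the projected cloud layer: deeper sand -> thicker cloud
    return (submersion * 255).astype(np.uint8)

depth = np.full((240, 320), 1150, dtype=float)   # flat sand sits above the "clouds"
depth[100:140, 150:200] = 1300                   # a dug ditch, now cloud-filled
image = clouds_from_depth(depth)
print(image.max(), image[120, 170])              # the ditch projects bright cloud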
Hardware: CV CDS 4000 Software: CV CADDS-4X
Digital abstractions – complex connections …
Referencing details from various graphical user interfaces, these prints form a series of works that abstract onscreen imagery and reverse the usual input-output process (physical to digital) by taking from the digital and making it physical. Taking inspiration from the inherently multiple (digital source material) and remediating this as a physical, one-off or limited-edition print, the images address Walter Benjamin’s notion of the “aura of the original” and examine the implications for originality and physical representation of artworks in a “post-real” digital age.
A dramatic change in scale and location (computer screen to gallery wall) is another important aspect of the work. Viewed out of context and away from the usual intimacy of the screen, the images can (still) trigger the memory of a familiar, ubiquitous monitor interface. The artworks begin to utilise these same visual elements to refer to the “human condition” applied to a digital context. Narratives and distinctively human comments are constructed from the visual building blocks of the digital environment, a place where we are increasingly spending our time and energies.
As the boundaries and reference points between physically and digitally grounded imagery become less defined, the duality of the interplay moves toward a more seamless self-referencing and continuous activity. A visual feedback loop, where the clues of originality become increasingly hard to differentiate and, perhaps, increasingly irrelevant. By extracting the real-world metaphors from the digital environment and taking them back into the physical world, the works become a kind of hyper-mediated simulacrum.
Foldercultures utilises the visual elements of the computer desktop in a transmedia combination of digital-video projection, sculptural form and rapid-prototyping (3D printing). Foldercultures (2010) continues a body of work which investigates the potential for revisualising the computer graphical user interface across the differing formats of digital space and out into material culture. The work explores the metaphor of the desktop folder – through notions of scale, composition, functionality and form, cross-referencing the materialisation of a computer icon into projected and material space.
In the work, the digital image of the desktop folder is projected onto an acrylic model of a physical version of a desktop folder, refolding the image back onto itself across media forms. This transformation is further emphasised through the use of semi-transparent acrylic, which allows the projected image to revisualise on all sides and faces of the model. Weaving together qualities of both interface and material culture, the work creates a special experience, an interface that sits between the computer and physical space. These visual laminates contribute to the acceptance of the digital form as a point of social convergence and a shared technologised visual language.
Foldergarden utilizes the computer desktop folder as a visual signifier for pervasive digital technologies. The folder shapes are used to contest the relationship between aspects of traditional and contemporary culture, in particular around notions of natural and synthetic forms. Based upon the concept of the rock garden (Karesansui), or ‘dry landscape’ garden, this work looks at the Japanese symbolic representation of nature through landscape. However, in this case, the natural elements such as stones and trees are replaced with rapid prototype sculptural models of desktop folder icons.
Folders of differing scale and colour are carefully placed in raked sand, as in the traditional conventions of the rock garden. By placing rapid prototypes in this context the work attempts to draw a link between the ‘combined’ fabrication of nature and technology. The fabrication of nature as represented by the raked sand is juxtaposed by the plastic models of desktop folder icons, a fabricated representation of digital technology.
Through these contemplative abstractions the audience is encouraged to view the work as both a technological and naturalistic construct.
The mundane-traces show was a collection of New Media artworks created by Ian Gwilt. Using innovative technologies to re-imagine the graphical user interface as a creative artefact, the six individual works explore the graphical user interface in a creative context using augmented reality, rapid prototyping, and laser-cutting technologies. The result is an intriguing mix of physical and virtual interpretations of the folders, files, and scrollbars from the everyday computer desktop.
We are increasingly asked to input our time, energies, and ideas into the computer-driven environment. This painting is from a series of works that use details from the ubiquitous computer interface (as a starting point) and reverses the usual “input/output” process (physical to digital) by taking from the digital and creating the physical. Giving a physicality to images that normally exist only “onscreen” we can change context, and challenge the way we look at (or look past) the digital visual environment. Changes in scale, and the textural nature of paint on board, also add to the disenfranchisement of the original images.
Save_as is a mixed-reality mobile installation that combines a physical gallery space with enhanced digital content, triggered through the use of mobile technologies. In this artwork, an acrylic model of the folder icon you would normally encounter on a computer desktop interface is attached to a gallery wall. A PDA with a video camera attachment allows the viewer to observe the virtual content of the folder, which in this case comprises overlaid virtual texts. The artwork was completed in August 2007 and first displayed at The Powerhouse Museum’s Beta_space venue in Sydney. The work was also selected for the Australian Network for Art and Technology (ANAT) Portable Worlds 2nd Edition, which toured nationally in 2008 and 2009, bringing exhibition and workshop programs to urban and regional Australia. The work was also shown at UTS in 2009 as part of the ‘Image Ecologies’ exhibition.
Hack is an interactive work informed by the electronic terrain of pop culture and the cultural history of comic books, cartoons, and computer games. The work is presented and designed in the guise of a simple computer game and carnival amusement display, a site that both culturally and historically has long been associated with notions of ‘interactivity’ in different ways. Therefore, one of the intentions of the work is to re-define our notions of the interactive in technological-based art, while looking over our shoulders to the video arcade and carnival sideshow.
The other cultural reference that the work draws upon is the Frankenstein monster, which serves as a central theme in the work: as a body made up of other body parts, a head made out of different heads, part robot, part human, part mutant, part machine… digitized, scanned, and robbed from the data banks of pop culture.
The computer game itself is often dismissed as the territory of pop culture, not worthy of serious consideration in the realms of art and interactive media. However, the computer game and video arcade can reveal not only the fundamentals of many simple interactive concepts; more importantly, the video arcade has seen the standardization of certain concepts of interactivity between user and computer in contemporary popular culture. Simultaneously simplistic/restrictive/limiting and addictive/involving/entertaining … whether it’s saving the planet from that alien invasion or searching for that elusive pot of gold.
For the generation that has grown up with Nintendo and Atari, the simple concepts of interactivity found in your average computer game appear almost second nature, like riding a bike. Their ideas and functions are instantly picked up on and related to, having in a sense become ‘naturalized.’ With this in mind, Hack (in a quite simple way) attempts to draw on and cast one’s eye not at the simplicity of the computer game, but rather to look at concepts in computer-interactive art through the eyes of the video arcade: the cultural site where a particular computer-interactive language is currently being written, as we speak, and encoded into the imagination of a generation.
Hack consists of multiple sections of a computer-generated head. The objective is to locate the central nervous system, or brain, by working through a maze of different sections of digitized heads. The user must deconstruct the display/game through a process of elimination, by working through the different graphic variables. In very simple terms, this is just like a computer ‘hacker’ deconstructing the code of a particular program.
Hack is the first in a series of computer interactive artworks in the guise of simple computer-graphic games, which seek to close the gap between art, graphics, animation and computer/video games, using the interactive potential of the computer.
Hack proposes a design prototype for an interactive computer interface away from mouse-driven, text-based applications/presentations, and a break with the standard tools associated with authoring/interactive multimedia software.
Hardware: IBM Main Frame/Host System. Software: IBM CADAM, CATIA, NC Machining.
This artwork is an interactive art website. Please follow the provided link to view.
This artwork is an interactive art website. The given link is no longer active.
SINGLE, structurally equal to ALL, and endless in the microstructure sm-N shows the totality and brokenness of INTIMACY in her mirror image. ONE split into TWO is compatible with the life of INTIMACY. The magic structure of ONE and obsession with individuality inspires the ritual beauty of polarity, which is shown on the altar of art. The Story about INTIMACY is presented by a mathematical microstructure.
The circle is drawn and in the circle there is a whole world.
There is a world of the COSMOS and a world of the ATOM, but INTIMACY is the zone of their co-existence.
Everything about using a computer to create art seems natural to me … I used a digitized image of a computer chip and worked on it ….
Hdw: Amiga Sftw: Deluxe Paint
The sequel to the smash hit phenomenon that took the world by storm! Temple Run redefined mobile gaming. Now get more of the exhilarating running, jumping, turning and sliding you love in Temple Run 2! Navigate perilous cliffs, zip lines, mines and forests as you try to escape with the cursed idol. How far can you run?!
Unlike any other animal, humans believe their lives were given by the creator’s own breath. With this breath, humans became the only creatures with a spiritual life, encompassing both emotion and intellect. This perspective led me to re-establish the meaning of ‘breath’, from a biological activity to a medium of interaction and communication. In [blow:], the audience remains in the real space while the particles exist separately in the virtual space, and breath is the method that merges the two. People, who received emotion from the creator, now extend their territory into the virtual space using breath. The concept also extends to an attempt at interaction between one virtual object and another. Installing rear screens on the transparent glass allows interaction from one side to the other, and by minimizing the hardware system, the virtual space and the real space are juxtaposed: a system that expands the users’ feelings. The work focuses on developing a speaker-based sensing system that reacts only to the sound of the user’s breath. The sensor detects only the sound of breathing and transforms the analog speaker into a digital switch, sending coordinates to the circuit. An Arduino receives these coordinates and controls the movement of the particles in real time.
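A minimal sketch of treating a breath sensor as a digital switch, under assumptions (the sample rate, threshold, and coordinates below are hypothetical; the installation itself does this with an analog speaker circuit and an Arduino):

# Illustrative sketch: a sustained broadband level above a threshold counts as
# a breath; its position is then handed to the particle system (here printed).
import numpy as np

RATE = 8000            # samples per second (assumed)
THRESHOLD = 0.2        # RMS level that counts as a breath (assumed)
HOLD = 0.25            # seconds the level must persist

def detect_breath(samples, rate=RATE):
    """Return True if the RMS level stays above THRESHOLD for HOLD seconds."""
    window = int(rate * HOLD)
    rms = np.sqrt(np.convolve(samples**2, np.ones(window) / window, mode="valid"))
    return bool((rms > THRESHOLD).any())

# Synthetic input: silence followed by a breath-like burst of noise
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.01, RATE)
breath = rng.normal(0, 0.5, RATE)
signal = np.concatenate([quiet, breath])

if detect_breath(signal):
    x, y = 0.4, 0.7     # hypothetical coordinates of the blowing visitor
    print(f"breath -> switch on, release particles at ({x}, {y})")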
For the occasion of the exhibit, the animated film Elysian Fields has been orchestrated as an impressive sound and visual experience. Inspired by the sacrifices made by past generations and expanding on an exploration of World War II, Elysian Fields fuses fantasy and history to transform the past and re-configure it into the present. Developed around the French term mise-en-scène, which literally means “putting on stage,” this cine-installation expands the spatial and temporal limits of the film narrative into a new and visually impressive experience within the given space.
A video presentation of an interactive artwork in which participants experience the artistic spirit of the ancient calligraphy masters and how breathing was reflected in creating famous pieces of Chinese calligraphy.
Two participants are seated in chairs equipped with ultra-wide-band devices that measure both the speed and depth of breathing every 0.1 seconds, which influences the pattern of the calligraphy. One person affects the fluidity and speed of the strokes, while the other alters the intensity of the ink. By altering the depth and rhythm of their breathing, the participants gradually reach a state of harmony with the calligrapher and with each other, drifting deeper into this art through sensing and controlling the flow of their own Qi. Concept and Creative Director: Shu-Min Wu. Art Director: Yau Chen. Producer: Horus Shu. Technical Director: Tsang-Gang Lin. UWB Technical Director: Teh-Ho Tao. Interactive Sound Designer: Tang-Chun Li. Creative Producer: Ministry of Economic Affairs, Taiwan. Creator: Industrial Technology Research Institute. Executive Producer: ITRI Creativity Lab. The original calligraphy images are all authorised by the National Palace Museum in Taiwan.
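A minimal sketch of how the two breathing signals might be mapped onto the calligraphy, with assumed scalings (the actual mapping used in the work is not documented here):

# Illustrative sketch: combine two visitors' breathing, sampled every 0.1 s,
# into calligraphy parameters. Participant A's breath rate shapes stroke speed
# and fluidity; participant B's breath depth shapes ink intensity.
def stroke_parameters(rate_a_bpm, depth_b):
    """rate_a_bpm: breaths per minute; depth_b: 0..1 normalised chest excursion."""
    # Slow, calm breathing -> slower, more fluid strokes (assumed scaling)
    stroke_speed = min(1.0, rate_a_bpm / 30.0)
    fluidity = 1.0 - stroke_speed
    # Deeper breathing -> darker, wetter ink (assumed scaling)
    ink_intensity = 0.3 + 0.7 * depth_b
    return stroke_speed, fluidity, ink_intensity

# One 0.1 s update: A breathing at 12 breaths/min, B breathing deeply
print(stroke_parameters(12, 0.9))    # -> (0.4, 0.6, 0.93)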
At a 1981 convention for computer graphics (SIGGRAPH), Triple-I presented a demonstration reel that illustrated the company’s achievements in computer imagery. This reel was instrumental in convincing the Disney Studios’ executives that computer animation could be successfully integrated into a motion picture.
Richard Taylor discusses the role that the Triple-I demo reel played in Disney’s decision to make Tron:
“The conference saw a big 35mm representation of what really had a beginning, middle, and an end. It tried to really demonstrate to the world the potential of this medium. It had a great effect. It helped other people develop their things. It gave them an insight into what you could really do. And it had everything to do with why Disney believed that Tron could be done. Because it was a piece of film that they could see that worked overall and had a wide range of things that had been choreographed and created specifically.”
Extracted from: https://ohiostate.pressbooks.pub/graphicshistory/chapter/14-3-tron/ “14.3 Tron”
My most recent work engages conceptual themes of distance, scale, time, and experience. Topography Drive puts the audience in an imaginary driver’s seat, and the route of exploration is as geographically real as it is a conceptual meditation. The trip begins smack in the middle of the Pacific Ocean, just south of the Aleutian Islands. We are traveling southward along the International Dateline. Looking westward, you see the night-lit topography of Japan emerge from the ocean; looking eastward, you see the topography of the West Coast of the US lit by the sun’s rays. In essence, you see day and night landscapes together in an exploratory scene that both flattens the earth and eliminates atmospheric effects – a view of our world that is dreamlike but underscored by very real data points. The artwork is, in fact, based on what one really would see if one could flatten space, eliminate natural atmospheric distortions, and see across planetary-scale distances. The first iteration of the Topography project was exhibited at the Yokohama Triennale in 2005. There the entire Pacific Rim was mapped in this fashion, yielding a 160-yard-long image (4 inches in height) at a scale of 1:170,000. A very different version featuring moving projectors, a moving landscape, and an HD rendering of the flyby was shown at the Tokyo National University for Fine Arts in 2006. The most recent piece depicts the Vietnamese coastline from the same International Dateline viewpoint. Etched in black marble, the work is a 120-foot-long permanent public sculpture in the coastal town of Hoi An in central Vietnam. It “captures” the entire Vietnamese coastline at 1:50,000 scale (documented at www.hoian-horizon.org). Acknowledgements of support: Shinobu Ito, Sumiko Kumakura, Franz Xavier Augustin, Markus Cornaro, Viet Bang Pham, and Kasuken Kasuya.
This installation uses digital elevation model data (DEM). The data, originally recorded by a radar sensor on the US Space Shuttle, represent elevation measurements taken every 90 meters of Earth’s entire land mass. The image of the Pacific Rim’s horizon line was rendered from the perspective of the International Dateline, as if the Earth itself were flat. The installation consists of a large wall-size projection and depicts Japan’s horizon line as one moves southward along the International Dateline at 1,000 km/h (600 mph). Japan is approximately 3,000 km (1,800 miles) in length, so the movie is approximately three hours long. The opposing projection, which is also from the same International Dateline perspective, again traveling from north to south, represents the day scene of the American West Coast. The source for this image is not a movie. It is a live image captured via video camera of a rendered image, mounted on a lightbox. The lightbox is moving slowly but visibly past the camera perspective. The actual image object that is moving past the camera is almost 30 feet long, 4 inches tall, and 4 inches deep, and it moves at a scale speed of 4 miles per second. This speed is equivalent to rocket speeds, approximately 30 times faster than a jet airplane.
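A small worked check of the scales described above, assuming a typical jet cruise speed of roughly 500 mph (that figure is my assumption, not stated in the text):

# Flyby duration: 3,000 km of coastline traversed at 1,000 km/h
japan_length_km = 3000
flyby_speed_kmh = 1000
print(japan_length_km / flyby_speed_kmh, "hours of footage")        # 3.0

# Lightbox scale speed: 4 miles per second versus a jet's cruise speed
scale_speed_mph = 4 * 3600                                            # 14,400 mph
jet_speed_mph = 500                                                   # assumed cruise speed
print(round(scale_speed_mph / jet_speed_mph), "times a jet's speed")  # roughly 29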
I am interested in exploring art as research, especially as a form of cultural analysis. My work began with concerns of immigrant experiences and questions of why we assign certain values and meanings in our culture. Language has always been at the center of individual, familial, economic, and social struggles. My recent work investigates the fabric of language and communication through various media such as video, installation, interactive art, and performance art. Using technology, I deconstruct and reconstruct sound by studying sound topologically, visually, and semantically. I use technology both as a mediator, merging multiple fields of study such as linguistics, cultural studies, and neuroscience, and as a translator, converting/transferring one medium to another. Mother studies sound symbolism and explores synesthetic connections between language and shapes. It translates audible and intelligible communication into a visual and tangible form through the use of computation and 3D printing. Verbal descriptions of a set of unknown sounds, as well as the hand gestures used by participants to describe the sounds, are captured using an Xbox Kinect, extruded over time with custom-built software made with openFrameworks, and finally printed into sculptures. In contrast to the unrecorded spoken language, which is ephemeral, language that is printed three-dimensionally becomes embodied in a physical form. In this work, the human translator is replaced by a computer. The concept of translation is thus stretched, expanded, and re-contextualized, providing a flexible way to see and experience language through a work of art.
Mother is a series of generative sculptures that explore synesthetic connections between language and form by analyzing hand gestures that represent the participants’ interpretations of unfamiliar spoken words. The gestures of the participants were captured in 3D using a Kinect, interpreted with openFrameworks, and printed with a rapid prototyping machine.
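A minimal sketch of the extrusion idea, under assumptions (the ring radius, time scaling, and sample data are illustrative, not the project's openFrameworks code): each captured hand position becomes a ring of vertices whose height encodes time, and successive rings are stitched into a printable surface.

# Illustrative sketch: extrude a captured hand path over time into a mesh.
import math

def extrude_gesture(samples, radius=5.0, sides=8):
    """samples: list of (x, y, t) hand positions -> (vertices, faces)."""
    vertices, faces = [], []
    for i, (x, y, t) in enumerate(samples):
        for k in range(sides):
            a = 2 * math.pi * k / sides
            vertices.append((x + radius * math.cos(a),
                             y + radius * math.sin(a),
                             t * 10.0))                 # time becomes height
        if i > 0:                                       # stitch ring i-1 to ring i
            base, prev = i * sides, (i - 1) * sides
            for k in range(sides):
                k2 = (k + 1) % sides
                faces.append((prev + k, prev + k2, base + k2, base + k))
    return vertices, faces

# A short gesture: the hand drifting to the right over one second
path = [(0, 0, 0.0), (20, 5, 0.5), (45, 0, 1.0)]
verts, quads = extrude_gesture(path)
print(len(verts), "vertices,", len(quads), "quad faces")   # 24 vertices, 16 faces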
Wasteland 2 is the direct sequel to 1988’s Wasteland, the first-ever post-apocalyptic computer RPG and the inspiration behind the Fallout series. Until Wasteland, no other CRPG had ever allowed players to control and command individual party members for tactical purposes or given them the chance to make moral choices that would directly affect the world around them. Wasteland was a pioneer in multi-path problem solving, dripping in choice and consequence and eschewing the typical one-key-per-lock puzzle solving methods of its peers, in favor of putting the power into players’ hands to advance based on their own particular play style.
The Wasteland series’ impressive and innovative lineage has been preserved at its very core, but modernized for the fans of today with Wasteland 2. Immerse yourself in turn-based tactical combat that will test the very limits of your strategy skills as you fight to survive a desolate world where brute strength alone isn’t enough to save you. Deck out your Ranger squad with the most devastating weaponry this side of the fallout zone and get ready for maximum destruction with the RPG-style character advancement and customization that made the first Wasteland so brutal. Save an ally from certain death or let them perish – the choice is yours, but so are the consequences.
In the 21st century, the speed of transformation in Southeast Asia is perhaps beyond anything experienced by preceding generations. Because of this rapid change, air pollution in Asia has grown so severe that a giant brown cloud blocks sunlight over our planet from India to China. The Asian brown cloud has reduced sunlight by more than 10% over huge swaths of the Earth. Extreme weather events are costing governments and citizens billions each year. Science, technology, and art are key words for every change of the world. Our Art & Science project focuses on these three key words and on the brown-cloud phenomenon. In our drawing below, we propose a translucent cave enclosing a “Cloud Room” and a “Touching the Cloud” screen. In the cloud room we install a 20 cm diameter sky-disc made of NASA’s nanomaterial silica aerogel. Inside this sky-disc there is a brown cloud. A white LED light projector orbits the rotating sky-disc, generating a giant golden-hued shadow and scanning it across a semicircular back-projection screen (shadows visible only from outside the installation; cf. the simulation video and photos attached). On the opposite side of the projected shadows is a second rear projection where the Biennale’s spectators can see someone’s finger trying to touch the brown cloud. Searching for where this projection comes from, they can enter our cave and discover how we can communicate with a cloud.
My work is interdisciplinary, exploring how technology affects human beings as individuals and as a society. Technology mediates our lives, constantly revealing the need to be an informed user of technology. Much of my work attempts to educate its audience on this subject. When I teach students how to use technology as part of their art practice, the most important lesson I share is the need to use technology intelligently. Instead of using technology in a manner suggested by its design and marketing, people should use technology in ways that benefit themselves.
Homo Indicium is an installation based on my exploration of digital identities. In a society where technology assists in every aspect of life, most people have accumulated a digital identity. It is an identity based on bits and pieces of information stored in fragments over a vast network of computers. Buying habits, means of identification, medical histories, and personal histories are all stored virtually.
Homo Indicium started with the question: “What can a machine know about a person?” Every day, machines continue to compile digital identities. These identities influence countless decisions made by both humans and machines. The question is: “Is this information enough to truly know someone?” Homo Indicium allows its audience to interact with information-based identities as a way of exploring questions raised by this process.
The name, Homo Indicium, is derived from the Latin words “homo,” which means man, and “indicium,” which means data or information. Together, they form the scientific name of the species of humans that exists purely as information.
When you enter the installation, you are confronted with a wall covered by test tubes. Closer inspection reveals human hair in some of the tubes. Above these tubes are bar codes.
Each bar code represents a person who chose to participate in the installation. Participants create information-based representations of themselves, which become part of the piece. This is done by filling out an online questionnaire, giving fingerprints, and providing a DNA sample, in the form of hair. The responses to the questionnaire and the fingerprints are then stored in a MySQL database. The hair sample is placed in a test tube and stored on the wall. A unique barcode is created for each participant and then placed above the hair sample. This barcode is used to identify each individual in the database. After the data is collected and stored, it can be retrieved using the barcode scanner attached to the database server at the center of the installation.
In front of the test tubes is a computer station. This station is a Windows 2000 PC, running an Apache Web server, a MySQL database server, and PHP. This station serves as the interface for scanning barcodes and reading about an individual. It also serves as the host to the Web documents that allow users to fill out online questionnaires and add their data to Homo Indicium.
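A minimal sketch of the barcode-keyed retrieval, with sqlite3 standing in for the installation's MySQL/PHP stack and a made-up record (the barcode value and stored answers are placeholders, not participant data):

# Illustrative sketch: a scanned barcode keys the retrieval of a participant's
# stored questionnaire answers.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE participants (barcode TEXT PRIMARY KEY, answers TEXT)")
db.execute("INSERT INTO participants VALUES (?, ?)",
           ("0001942", "age=34; birthplace=Chicago; siblings=2"))

def lookup(scanned_barcode):
    """Return the stored questionnaire data for a scanned barcode, if any."""
    row = db.execute("SELECT answers FROM participants WHERE barcode = ?",
                     (scanned_barcode,)).fetchone()
    return row[0] if row else None

print(lookup("0001942"))   # what the retrieval station would display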
What can a machine know about a person? Can someone be known/reconstructed from this information?
Online Component: The database generated by this piece will be accessible online through a Web site, where people can find out about the piece, look at the data gathered by the piece, and add themselves to the piece. (This will require that they mail a sample of their hair.) Everyone who participates through the Web site will also receive a laminated card with a logo that indicates it was a Web submission.
Installation Elements to be Explored: The data retrieval station may use a projector instead of a monitor. When it is idle, it will enter a slide-show mode in which it randomly accesses data and displays it on the screen. This slide show will probably also cycle through the voice recordings, giving an audio element to the piece. The computer and trackball used by this station will be installed so that the CPU is hidden. The interview station will be set up to create a very clinical feel. A height and weight measurement will be part of the interview process. The interview may include a very basic health check-up.
Data Confidentiality: The data available to the public will not contain names or personal information. The names in stories collected during interviews will be randomly changed. At the end of the interview process, participants will get a chance to view all information gathered and block public access to specific information. This will be clearly stated in a confidentiality agreement signed by all participants.
Data Not Collected: pictures or images; Social Security numbers; credit card numbers, etc.; name; address; email; phone numbers.
Current List of Data to Collect: age; gender; height; weight; birthdate; birthplace; marital status; children; siblings; parents; education history; employment history; medical history; biometrics, including fingerprints, voice, facial proportions (for facial recognition algorithms), and retinal scan (most likely not feasible at this time).
Other Sources of Potential Questions: the 2000 Census questionnaire; personality tests.
Important reiteration: I have no intention of displaying information that may be harmful to a participant and will take every measure to ensure that this is clearly stated and implemented.
HBG is an experimental life simulator and an allegorical play about human existence, its dilemma and other catastrophes.
Most of my artistic work from the 1980s consisted of etchings, silkscreens, and paintings based on elements that I created with 3D computer software and imagery that I drew or painted by hand. During that period, I developed and refined techniques for transferring high-resolution 3D computer images onto traditional printmaking media, in particular extremely fine aquatint photo-etching techniques and computer-controlled engraving. In Freedom and Imprisonment (1985), the right half of the work was engraved directly onto four copper plates by a computer-controlled flatbed plotter whose pen had been replaced with a steel needle.
I have continued to combine 3D computer-generated images with hand-drawn and hand-painted elements, and in the early 1990s I started to use digital printers to edition my work. Blue Pearl (1998) combines paintings that were scanned into the system with 3D computer renderings that apply some of the paintings as environment maps to the geometry in the scene. I am interested in using computers to generate emotional works with a gestural and unpolished quality to them. I am less interested in the computer’s ability to create perfect geometry or aseptic simulations of reality.
Hardware: DEC VAX 11/780 and Apple II+ Software: Cantos and Software by artist
Hardware: VAX 11/780, Grinnell frame buffer Software: CARTOS by Irwin Sobel and Noel Kropf
To celebrate my 20 years as a curator in the fields of art and video games, in June 2019 I embarked on a world tour that took me to South Korea, Taiwan, Indonesia, Thailand, Japan, India, Colombia, Argentina, Brazil, Mexico, Nigeria, Ghana, and Togo, where I was stopped by the pandemic in March 2020. I am now pursuing my world tour online, in Africa and in the Middle East, from Togo, where I now live.
By meeting digital artists and independent game developers in the Global South, I intend to give a more nuanced overview of the different ways gaming communities across the world are exploring the issue of diversity, with an emphasis on female, queer, and decolonial practices. In each country I visited, I interviewed around 20 artists, game makers, curators, activists, and hackers, and gave lectures and workshops about the relationship between art and video games. The aim is to break the boundaries between the art world and the game world; to promote alternative, independent, and experimental games that enhance diversity (gender, race, and representation) in video games; and to promote the use of games as a pedagogic tool and as a tool of expression to raise awareness about social, cultural, political, and environmental issues.
Meeting media artists and indie game developers and then writing articles about their games or distributing podcasts of their interviews is also a way to better understand local culture through the lens of video games. In each country, I try to focus on meaningful games that deal with culture, history, or politics. My expedition also investigates how we can create new concepts of “working together” and new connections within the worlds of game art, independent games, and DIY game art. In Togo, I am now collaborating with different art collectives on mutual game art projects.
For Digital Power, I am presenting podcasts of the female, non-binary, and transgender artists, activists, and game makers whom I met during my art and games world tour, and a hybrid documentary in progress featuring the interviews along with video game and artwork recordings.
Hdw: VAX/Catharsys Graphics Card/PC AT Sftw: A. Chesnais
This non-fiction video depicts the busy promenade and sandy banks of the Songhua River in the city of Harbin, China, as an urban sphere of ephemeral sociality and explores the complex relationship between the people of Harbin and their main water source. Through engaging the interface between anthropology and contemporary art, Songhua also addresses the possibility that image and sound can serve as a form of ethnographic research. Overall, through long takes of public space, intimate vignettes of film participants, and layers of ambient sound and cacophony, this digital work presents leisure and labor activities as they unfold within the social space of a popular yet fragile environment.
All images and sounds were recorded on standard digital video with a Panasonic HD-P2 digital video camera and edited on a Final Cut Pro editing system in the Harvard Media Anthropology Lab. The final audio design was digitally mastered on Soundtrack Pro.
This piece is a continuation of my ongoing Autocosm project, in which I create artificial worlds in solo live performances. An “autocosm” is a self-contained personal world, apart from the world we all share. In this case, it is a world of growth and evolution, of life and transformation. Evolution requires death. It requires erasure. There can be no evolution without a continual process of creation and destruction. To make room for the new, we have to let go of the old. But we try to have the new informed by the old, so the new will be better. Life ends, and so we can only overcome death by starting the next generation. Performance is a fitting way to depict this instancy, for a performance is a fleeting moment in time, an iterative quest for perfection. It’s an act of a life caught between celebration and desperation. I attempt to create and perform and evolve a world in Autocosm. It’s about life. It’s about making a better world. And it’s about me, the performer, on my iterative quest. The world I create is only temporary. At the press of a button, all objects and beings, the whole of creation, are gone. Only a memory remains. But don’t worry. The next one will be better.
Over the last few years, I’ve been exploring the domain of live-performance animation. I want to keep my performances fresh for both the audience and the performer, so I utilize captured gesture, which allows me to improvise. But improvisation is foreign to CG production, so I’ve been developing ways to create and animate CG scenes with a “straight-ahead” approach. Because software design determines the methods of interaction, I perform using my own software, which I’ve tailored directly for my own use as a performer. The core is written in C++ on top of OpenGL and DirectX, with the front end written in Python. The system runs on a Windows box. My main input device continues to be a drawing tablet. Sometimes I use it to puppeteer my creations. Other times I draw in 3D while moving through and around the space (using a joystick or other device). This technique integrates placement of objects and creation of spaces with continuous changes of viewpoint. It allows for creating a space while simultaneously exploring it.
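A minimal sketch of this straight-ahead approach, under assumptions (the projection math, camera state, and data are illustrative, not the performer's C++/Python system): each tablet sample is dropped into the world at the current viewpoint, so creating the space and moving through it happen at once.

# Illustrative sketch: place 2-D tablet samples into 3-D world space in front
# of a moving camera, accumulating them as stroke points.
import math

scene = []          # accumulated 3-D stroke points
camera = {"pos": [0.0, 0.0, 0.0], "yaw": 0.0}   # updated by a joystick in practice

def place_tablet_sample(x, y, pressure, depth=2.0):
    """Project a 2-D tablet sample into world space in front of the camera."""
    yaw = camera["yaw"]
    fx, fz = math.sin(yaw), math.cos(yaw)        # camera forward on the ground plane
    wx = camera["pos"][0] + fx * depth + x * math.cos(yaw)
    wy = camera["pos"][1] + y
    wz = camera["pos"][2] + fz * depth - x * math.sin(yaw)
    scene.append((wx, wy, wz, pressure))         # pressure could drive stroke width

# Fly forward while drawing a short stroke
for step in range(3):
    camera["pos"][2] += 0.5                      # joystick: move forward
    place_tablet_sample(x=0.1 * step, y=0.0, pressure=0.8)

print(len(scene), "stroke points placed along the camera's path")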
HOEReographien’s starting point is questioning the dependence of classical dance on music. To what extent can movements and movement lines become audible in space? What will happen when music arises from movement and if, within that context, musicians and dancers interact? And what if the dancer’s body is filmed on the stage and converted in real time into a video sculpture that, in turn, interacts with human bodies on the stage to produce a conglomerate that produces material and virtual dance?
If music results from the movement of dance and, therefore, the structure of the composition is not developed, adapted, and interpreted through music composition, what is the role of the dancer? How will this affect dance?
How do musical variations and development forms appear visually, in order to provide movement, resulting in a sound that is, at first, amorphous but later adopts an understandable form and structure? Which form of contemporary light and video art results from this interactive action?
And how can this “new” process be made understandable for a live audience?
HOEReographien is a cycle of single pieces (Soli, Pas de Deux, Trios, Quartet) in the form of dance through which electronic music is produced: dance that develops video sculptures, and dance built from live structured improvisations, a constellation that, with mixed shapes, results in an overall visual composition in the form of “autonomous” dramatic art that supports the concept of “autonomous music.”
A black-and-white camera delivers 25 images per second to a PC running the software Eyecon, which transforms the pictures into control data for electronic sound and for structures programmed in 3ds Max.
Three mini-DV cameras each record a different view of the stage. For different sets in the performance, one of these three camera feeds is routed through a Power Mac running Max/MSP/Jitter, which transforms the color-camera frames into live video art. In a few sets, Max/MSP/Jitter receives control data from the PowerBook running the music Max patch, so that even the dynamics of the changes in the video sculptures are controlled by the movements of the dancers.
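A minimal sketch of turning camera frames into per-zone control data of the kind described, under assumptions (the zone layout and test data are illustrative, not the production's Eyecon or Max patches):

# Illustrative sketch: divide each 25 fps camera frame into zones and turn
# per-zone movement into control values that could drive sound parameters and
# the dynamics of the video sculptures.
import numpy as np

def zone_controls(prev_frame, frame, rows=2, cols=4):
    """prev_frame, frame: 2-D grayscale arrays -> one control value per zone, 0..1."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
    h, w = diff.shape
    controls = []
    for r in range(rows):
        for c in range(cols):
            zone = diff[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            controls.append(float(zone.mean()))
    return controls

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (144, 192), dtype=np.uint8)
curr = prev.copy()
curr[:72, :48] = 255                      # a dancer moves into the top-left zone
print([round(v, 2) for v in zone_controls(prev, curr)])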
Hardware: Polaroid Spectra System; Microtek 3002; Apple Macintosh IIfx (System 6.07) with Onyx camera, color scanner, 20 MB RAM, Connectix Maxima, Quantum 170 MB hard drive, and SyQuest 45 MB removable storage; Apple Macintosh II; 24-bit color video monitor. Software: Adobe Photoshop 1.1.0.7.
When the US housing market collapsed in 2008, so did the dreams of many middle- and lower-class Americans. Florida, California, Nevada, and Arizona were hit particularly hard, and not by a force of nature, but by the abstract and invisible hand of the market. Prior to the collapse, the movement of global capital seemed like a distant reality to most homeowners, but in the end it was the imaginary systems of value, and not bricks and mortar, that asserted the ultimate authority over our homes. Open House is an installation by Jack Stenner and Patrick LeMieux that allows visitors to telematically inhabit a “distressed” home in Gainesville, Florida. The house at 1617 NW 12th Road is currently in financial limbo while undergoing the process of foreclosure due to the housing collapse. Virtual markets have transformed this otherwise livable property into a ghost house. Open House allows individuals to repopulate this disenfranchised space and assume the role of virtual squatters – opening the door, flicking the lights, rattling the shutters, and remotely occupying the abandoned property. Live video feedback integrates real-time physical effects with one’s virtual actions. Through Open House, virtual squatters can temporarily resist eviction by mirroring the market and becoming hybrid subjects occupying both virtual and physical space. Like the foolish man who builds his house on sand, we watch the architecture crumble around us. Download Open House at www.no-place.org/open_house.
“Satisfaction” is a 3D animation that attempts to communicate the fleeting satisfaction of our desires in a humorous way. A solitary mouth signals a balloon to “partake” of its services, and we observe the less than satisfactory result.
The images of hangman: is there an “I”? examine how our relationships with our bodies can influence self-identity. In these images, a metaphorical hangman emerges from the dark confusion left by the warring factions of inner and outer self: mental image and physical reality. The inner voice develops into a cacophonous, raging, judgmental monologue. The hangman takes host in the now harassed, disoriented self and embarks on an externally quiet yet destructive rage.
In an echo of the hangman word game, the hangman records individual body parts. As the inner self quests to solve the puzzle, the absence of “I” brings the game closer to a losing finale. The physical body, at times seemingly hidden by choice while at others completely obscured by environmental trappings, serves as testimony to the hellish battle.
Like an executioner, the hangman’s charge is to carry out the sentence and ensure that the physical body falls from an appropriate height. In essence, the body destroys itself. Robbed of breath and ultimately strangled, the inner self has been supplanted by an overwhelming physical force.
Without a Special Object of Worship is an interactive installation exploring imagery inspired by the salt-beaten, Veneto-Byzantine port city of Venice, Italy. Visitors sit at a table in the dimly lit installation space and control computer-based still images and animations by turning the pages of a handmade picture book. Custom electrical wiring allows communication between the book and the computer, with each page of the book corresponding to complementary digital 2D image sequences and 3D animated sequences. The sequences appear on a monitor at the table.
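A minimal sketch of the page-to-sequence logic, under assumptions (the file names and switch wiring are hypothetical, not the artist's implementation): each page closes a different contact, and whichever page is open selects the still/animation sequence shown on the monitor.

# Illustrative sketch: map page-switch states to the sequence to display.
SEQUENCES = {
    1: "page1_stills_and_animation.mov",
    2: "page2_stills_and_animation.mov",
    3: "page3_stills_and_animation.mov",
}

def sequence_for(switch_states):
    """switch_states: dict of page number -> True if that page is open."""
    open_pages = [page for page, is_open in switch_states.items() if is_open]
    return SEQUENCES.get(open_pages[-1]) if open_pages else None

# The reader turns to page 2
print(sequence_for({1: False, 2: True, 3: False}))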
All of the imagery, both in the book and stored in the computer, consists of the artist’s original stills and animations. The juxtaposition of the book and the digital imagery serves to bring the book to life by adding motion. The environment is further enhanced by an original sound track inspired by chants and religious liturgy. The integration of image and sound creates a peaceful, sacred space conducive to reflection. While the installation is not specifically religious in nature, the experience could be likened to the personal acts of meditation and prayer. Much as a prayer book, the handmade book acts as a point of departure for these acts. The book structure is the vehicle through which the participant communicates, controlling the pace of the interaction and thus customizing and personalizing the experience.
Books have a place in our cultural history and development that cannot be denied. Currently, we are witnessing the transformation of the book from analog to digital form. While the advantages of the digital book are many, there remain aspects of the physical book form that have not been replicated digitally. Specifically, their organic nature has not been preserved. Without a Special Object of Worship preserves the tactile, spatial qualities of the book form while simultaneously taking advantage of technological innovation in digital forms. With this piece, a bridge has been established for continual research and development in the marriage of traditional analog interactive models with their digital counterparts, specifically in the study of book forms.