Hardware: DICOMED Imaginator Design Station, DICOMED D148SR film recorder
Elevation #2 aims to de-familiarize and extend photography in an alternative digital imagemaking process, inspired by the everyday cognitive process of assembling and organizing visual fragments into a complete mental picture of an object or space. The experience of traveling inside old elevators has always fascinated the artist because the metal gate functions as a window onto a space that is otherwise closed, invisible, and encapsulated. We rely on the elevator trip to understand this closed space, so we develop a complete image of it in our minds on the basis of piecewise information. Elevation #2 serves as a visual equivalent of the elevator experience, while it aims to extend the limits of regular fixed-point-of-view photography without sacrificing the simplicity and elegance of the art form.
The digital image-making process used in Elevation #2 is analogous to object scanning but far simpler. A digital video camera recorded the whole elevator trip, and then image fragments were extracted digitally from video frames as elements for constructing the final image. The resulting images offer numerous elements unseen in traditional photography. One interesting example is that the picture looks orthographic in one dimension but presents regular perspective in another dimension.
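The fragment-extraction described above is essentially a slit-scan technique: each video frame contributes a narrow strip, and the strips are assembled into one composite. A minimal sketch of the idea, assuming frames arrive as NumPy arrays (frame sizes, slit width, and scan pattern are illustrative, not the artist's actual pipeline):

```python
import numpy as np

def slitscan_composite(frames, slit_width=1):
    """Build a composite by taking a vertical slit from each video frame.

    Each frame contributes one narrow column; stacking the slits side by
    side yields an image that is orthographic along the scan axis but
    keeps normal perspective along the other axis, as described above.
    """
    slits = []
    for i, frame in enumerate(frames):
        x = (i * slit_width) % frame.shape[1]  # slit position advances per frame
        slits.append(frame[:, x:x + slit_width])
    return np.hstack(slits)

# Toy example: 8 synthetic 16x16 grayscale "frames"
frames = [np.full((16, 16), i, dtype=np.uint8) for i in range(8)]
composite = slitscan_composite(frames, slit_width=2)
print(composite.shape)  # (16, 16): 8 slits of width 2
```

With real footage, each slit samples a different moment of the elevator trip, so the composite fuses piecewise information into a single image, much as the mind does.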
Memories diverge from the experiences they intend to mirror. They emerge as an alternate reality we create and revise over time. These visions skew, as our minds focus on fragments of the original experiences – sometimes these visions warp the event to the point where they no longer represent the event but create an alternative version, a dream-like new reality that can influence our present selves. Maybe our selves and our lives are built upon this process of useful mis-remembering.
In this one-shot video, buildings hide behind a natural impressionistic haze. The imagery is familiar, but it is always at a distance, as movement, light, and sound reinforce its surrealism. What we see is in constant flux, and the same can be said of what we view as Truth and Self.
Software: Final Cut Pro / Canon G10
HOK has designed seven new buildings in Incheon, South Korea (six mixed-use towers and one hotel tower) as part of New Songdo City, a $25 billion master-planned international business district.
The images are conceptual explorations of office tower designs by Alistair Lillystone. The experiments were made using Autodesk Revit.
VAO is a mixed-use development in Monterrey, a major city in northern México. The development includes office space, 350 apartments, retail, and a hotel. The tower itself has 45 floors and rises 240 meters above street level.
3D printing of HOK models was made possible through a donation from York Technical College / 3D Systems
Hardware: Targa 16 Software: Tips
In the context of the discussion on sustainable sources of energy, Sopro (The Blow) and Toque (Touch) seek to use the audience’s body energy aesthetically to interact with and animate the artworks.
This 3D computer animation film depicts an event in a motion picture studio. When two illustrated characters jump into a 3D world, the distinction between real and fake becomes completely confused. The animation concludes after a big dispute over who is going to find the answer.
Software: Softimage 3D, Windows NT
“Heaven, earth, and I are born of one, and I am at one with all that exists,” said Chuang-tzu, an ancient Chinese philosopher. In his thinking about Taoism, humanity and nature are inseparable. Every human activity has repercussions. To visualize this traditional Chinese thought with a modern approach, we developed an “ecosystem simulation.” This simulation contains two worlds, virtual and real. In the virtual world, human beings determine how the world develops. For example, all the creatures’ behavior in the virtual world is controlled by the viewer’s emotional response. The creatures’ behavior is displayed on the screen with sound, and this can change the viewer’s feelings in the real world. The viewer plays a double character: a member of the real world and a player in the virtual world. Although the process is composed by hardware and software, “human ware” is the essential element of this process. Using sensors and speakers as media, the viewer is a conductor of both the virtual world and the real world. The viewer can also be a producer who provides spiritual power to lead changes in the ecosystem. And the viewer receives feedback from the system. This endless cycle is just as Chuang-tzu said: “Heaven, earth, and I are born of one.”
This system includes three components:
1. The core is an ecosystem simulator. In the virtual world, each creature has its own parameters: life length, food requirements, speed of motion, etc. All creature behaviors simulate the real world: they breed, prey, propagate, and die, and these processes are visualized in an animation that shows the creatures through various filters controlled by the viewer.
2. The sensors detect the viewer’s heart rate and skin resistance, which reveal the intensity of the viewer’s emotional excitation, which acts as an essential parameter in the feedback system.
3. The feedback system acts as a bridge between the viewer and the ecosystem. The viewer’s emotional state changes the simulation, and the simulation generates feedback that affects the viewer’s emotions with different sounds.
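The simulator component can be sketched as a simple agent loop in which a viewer-excitation parameter modulates creature behavior (all class names, parameters, and rates here are illustrative assumptions, not the installation's actual code):

```python
import random

class Creature:
    def __init__(self, lifespan, speed):
        self.lifespan = lifespan   # maximum age in ticks
        self.speed = speed         # motion per tick
        self.age = 0

def step(population, excitation, breed_chance=0.1):
    """Advance the ecosystem one tick.

    `excitation` (0..1) stands in for the viewer's measured emotional
    state (heart rate / skin resistance) and scales breeding activity.
    """
    survivors = []
    for c in population:
        c.age += 1
        if c.age < c.lifespan:
            survivors.append(c)
            # Higher viewer excitation makes breeding more likely.
            if random.random() < breed_chance * (1 + excitation):
                survivors.append(Creature(c.lifespan, c.speed))
    return survivors

random.seed(0)
pop = [Creature(lifespan=50, speed=1.0) for _ in range(10)]
for _ in range(20):
    pop = step(pop, excitation=0.5)
print(len(pop))
```

In the installation the loop would also drive the animation filters and the sound feedback; here only the population dynamics are shown.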
Summary
Painting of Thousand-hands Avalokitesvara is a media art work based on the theme of the “Painting of Thousand-hands Avalokitesvara (千手觀音圖),” a genre of paintings created on the theme of Avalokitesvara (千手觀音) during or before the Goryeo Dynasty. This artwork reproduces original Buddhist culture, which accounts for a large portion of Korea’s cultural archetype, in three-dimensional space.
Abstract
Painting of Thousand-hands Avalokitesvara is a media art work based on the theme of the Painting of Thousand-hands Avalokitesvara (千手觀音圖), a genre of paintings created on the theme of Avalokitesvara (千手觀音) during or before the Goryeo Dynasty. We reproduce this original Buddhist culture, which accounts for a large portion of Korea’s cultural archetype, in three-dimensional space. Avalokitesvara appears in a lotus flower at the center of the artwork. Avalokitesvara is a Buddhist saint who saves people with a thousand hands and a thousand eyes. The thousand hands literally symbolize a thousand hands and metaphorically symbolize this saving ability, and their appearance is very diverse. In the artwork, Avalokitesvara also has 11 faces, indicating that through these various appearances Avalokitesvara can save all people in all kinds of situations. On both sides of Avalokitesvara are the Four Devas, the four heavenly guardians of Buddhism. The Dragon King and Sudhana appear after the Four Devas. All of them have gathered to listen to the teaching of Avalokitesvara. The thousand hands begin to unfold in the halo of Avalokitesvara. After all the other elements of the artwork appear, such as the waves in the background and the Litany Buddha, 42 hands holding Buddhist objects that contain people’s wishes appear in turn. Every time one of the 42 hands appears, the color of the thousand hands in the halo changes, and the thousand hands take on various hand movements.
Buddhist artworks such as the Painting of Thousand-hands Avalokitesvara (千手觀音圖) are only available in certain places, such as temples and museums, and due to the nature of religious art, there is a lack of connection with new media. We therefore decided to digitally remediate the work, to overcome these limitations and to allow more people to see the beauty of Buddhist art in more places and at more times. We redesigned paintings that had been represented in two dimensions to show movement in three-dimensional space, in order to express anicca (impermanence) and pratityasamutpāda (dependent origination), which are at the core of Buddhist art. Background elements such as clouds and waves, as well as the characters in the artwork, share one thing in common: they are all in a fluid state. This is a symbolic representation of the Buddhist doctrine that the world is always in progress. Digital works are easily and quickly distributed, and media art is a genre that can be created in formats that did not previously exist, using digital technology. Buddhist art serves as a medium for public access to the Buddhist spirit. When digital advantages are added to this process, the existing limits on public enjoyment of these works disappear.
Papo & Yo is the story of a young boy, Quico, and his best friend, Monster. Monster is a huge beast with razor-sharp teeth, but that doesn’t scare Quico away from playing with him. That said, Monster does have a very dangerous problem: an addiction to poisonous frogs. The minute he sees one hop by, he’ll scarf it down and fly into a violent, frog-induced rage where no one, including Quico, is safe. And yet, Quico loves his Monster and wants to save him. As Quico, players will build their friendship with Monster by solving puzzles together and adventuring through a magical, surrealist world. Players will need to learn to use Monster’s emotions, both good and bad, to their advantage if they want to complete their search for a cure and save their pal.
Mapping LGBTQ St. Louis http://library.wustl.edu/map-lgbtq-stl is a digital atlas of the region’s lesbian, gay, bisexual, transgender, and queer history from 1945 to 1992. The site, launched publicly October 11, 2017, combines archival documents with GIS data to examine the relationship between metropolitan space and sexual segregation, as well as how LGBTQ St. Louis was divided by race, gender identity and expression, experiences of policing and violence, and socioeconomics. The site is freely available online; visitors can browse and explore more than 800 locations on the map, or follow several guided thematic tours.
Unlike other heritage mapping projects (most notably California Pride: Mapping LGBTQ Histories https://www.historypin.org/project/469-california-pride/# and NYC LGBT Historic Sites Project http://www.nyclgbtsites.org/#), Mapping LGBTQ St. Louis did not focus on the architectural history of sites, or on famous individuals or stories of progress. Rather, using primary sources from multiple archives, we attempted to track any space connected to queer life, as broadly construed, and organize the information into a dataset.
The GIS-compatible dataset is one of the most distinctive aspects of Mapping. In addition to underpinning the public interface, it is shared as an open, reusable dataset available to other researchers. By leveraging this historical data, it is hoped that social scientists, medical researchers, and others can gain a greater understanding of a community overlooked in both the past and present.
Team members using collaborative Google Sheets compiled location data, including starting and ending dates, along with the gender, sexuality, and race of people commonly within that space. Particular attention was paid to racial categorization, and spaces are identified as predominantly white, predominantly African American, or racially mixed – purposefully confronting the way race, especially whiteness, was (and is) subtly coded. We acknowledged and continually remained cognizant that the common acronym LGBTQ obscures the divisions within and between groups of people. For instance, we checked carefully for contextual information before presuming a space was frequented by those who identified as bisexual, even if commonly frequented by gay men or lesbian women. Similarly, gender non-conforming performers (such as drag kings or queens) were specifically noted, rather than simply included within the contemporary transgender umbrella. The topic of violence, including police interactions, crimes, and arrests, is of particular import to the St. Louis region and LGBTQ communities. Rather than avoiding these difficult aspects, we included them in the data and analyzed them in contextual tours.
The Esri product “Story Map” was used to create the public interface. To overcome character and image limits, we use Omeka to serve PDF documents with longer descriptions and multiple images. This allows archival documents and images to remain linked with the location data while keeping the dataset within required parameters. For spaces with only minimal known data, visitors are redirected to a Google form that allows anonymous submissions and records messages in a spreadsheet for follow-up.
Through The Time Tunnel is an artistic exploration of the scientific principles that govern time and space. This artistic interpretation of a time machine allows users to interact with past events through a playful interface. It is inspired by two facing mirrors that create infinite reflections of the present moment. Like an imaginary mirror, Through The Time Tunnel shows reflections of time from past to present in different time spaces. It uses a video camera, a green screen, and control buttons to record layered video sequences, and it creates a tunnel-like effect by stacking the recorded video images. The speed and direction of navigation within the tunnel are controlled by the user, providing a compelling time-based experience.
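The tunnel-like stacking of recorded frames can be sketched by insetting each successive frame one step toward the center, like the reflections between two facing mirrors; a toy sketch with NumPy arrays (sizes and step are illustrative, not the installation's actual rendering):

```python
import numpy as np

def tunnel_composite(frames):
    """Stack recorded frames into a tunnel: each successive frame is
    inset one step deeper toward the center, producing concentric
    "rings" of past moments around the newest frame's border."""
    out = frames[0].copy()
    h, w = out.shape
    for depth, frame in enumerate(frames[1:], start=1):
        inset = depth * 2  # each layer recedes by a fixed step
        if inset * 2 >= min(h, w):
            break
        out[inset:h - inset, inset:w - inset] = frame[inset:h - inset, inset:w - inset]
    return out

# Toy example: four 16x16 "frames" with distinct gray levels.
frames = [np.full((16, 16), i, dtype=np.uint8) for i in range(4)]
tunnel = tunnel_composite(frames)
print(tunnel[0, 0], tunnel[8, 8])  # outer ring is frame 0, center is frame 3
```

Navigating forward or backward in the tunnel then amounts to reordering or re-indexing the frame list at a user-controlled speed.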
From the series Earthdance#2 > Baja: Listening to the Desert, a work in progress.
The Introspection Machine is an interactive environment for visual feedback. The machine consists of six modules, each of which has a flexible, manipulable “eyestalk.” At the end of each eyestalk is a large rubber suction cup, which permits it to adhere to any of the six displays in the installation. The machine’s modules transform the video input from their manipulable eyestalks into supple and organic dynamic displays. By redirecting these eyestalks, users can explore an unbounded space of continuous light, complex forms, and surprising relationships.
The machine’s reconfigurable eyestalks comprise the principal interface by which participants interact with the installation. These playful stalks, which pipe pure light and information from computer to computer, make it possible for the video output from one reactive display to be used as the input for another. An Introspection Machine module may even be piped back to itself, creating a tight loop of visual recursion. As visual material from each display is reinterpreted by the others, pools of light shift and mutate based on the connection, configuration and movement of the stalks’ suction cups.
As a display system for fluid colors and forms, The Introspection Machine can be thought of as an interactive light fountain, in which participants liberate the “water” welled into each monitor by physical conduits of video information. As a complex feedback system, on the other hand, The Introspection Machine has analogies to a wholly visual brain, whose cybernetic intelligence is derived from the principle of feedback itself.
Three mice struggle to escape a cat that pursues them through a Chinese restaurant. The mice fight their enemy with chopsticks, toothpicks, and after-dinner mints, proving themselves to their kung-fu master in a surprising twist. This HD/24p short was realized using Shadow Projects’ patented Shadowmation, a unique animation process that combines CGI-enhanced flexible animatronics with computer-generated animation.
Apostroph is a prototype robot designed to study rising-up motion, an intrinsic behavior of living organisms. In Apostroph, the joints that connect the gently curving frames contain motors that rotate 360 degrees and are programmed to resist external force. These motors rotate in the direction opposite to the rotation caused by gravity. Consequently, Apostroph tries to lift its body, in the same way a human being stands up. This project is one of our attempts to seek the shape of artificial “life” that will live with us in the future.
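The rise-up behavior described, motors turning against gravity-induced rotation, amounts to a restoring controller at each joint. A hypothetical sketch (the gain, angles, and function names are illustrative, not the robot's actual control code):

```python
def motor_command(joint_angle, rest_angle=0.0, gain=0.8):
    """Return a torque command opposing gravity-induced rotation.

    When gravity pulls the joint away from its rest pose, the motor
    turns in the opposite direction, so the frame tends to rise back up.
    """
    deflection = joint_angle - rest_angle
    return -gain * deflection  # oppose whatever gravity did

# A sagging joint (positive deflection) gets a negative (restoring) torque,
# and vice versa.
print(motor_command(0.5))   # -0.4
print(motor_command(-0.25)) # 0.2
```

Run at each joint in a loop, this kind of rule makes the whole body drift back toward its upright pose whenever gravity or an external push deflects it.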
In pre-war Japan, kimonos were encoded with clues indicating gender, caste, age, class, and social ranking. Geographical location defined the colors, as certain plants that produced particular colors only grow in certain regions. Silk techniques are also regional, and the difference between fine and rough silk indicated relative wealth. For several decades, I have been a musician, composer, and developer of unique koto-based instruments (for example, the Monster Koto and the Laser Koto), and I have been digitally processing and sampling the koto to expand its sonic and gestural components. A kimono is required for traditional koto performances, and the manner of wearing the kimono is as exact, technical, and aesthetically precise as the playing of the instrument. The kimono can be viewed as an integral and natural part of the instrument. The LED Kimono is a new light-and-sound instrument made with a single hand-made sleeve embroidered with 444 LEDs that respond to sound and movement and occasionally act as a low-resolution monitor interpreting live video. The images and motifs represented on the sleeve, derived from traditional kimono patterns, are responsive to and mapped to specified parameters of sound. For example, at times, there is a relationship between the movement of the sleeve and the harmonic spectrum. The performance is presented in several sections, and each section has a slightly different version of interaction with the four elements: the sound, the LEDs, the movement of the dancer, and the sleeve. Special thanks for the technical hardware and software expertise of Bob Bielecki and Damon Holzborn, and for the support and commissioning of this project by Harvestworks and CircuitNetwork. The dancer is Mariko Masaoka-Drew.
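One plausible way to map the harmonic spectrum onto the sleeve's LEDs is to split the magnitude spectrum into one band per LED; a sketch with NumPy (the LED count of 444 comes from the text, the band layout and everything else are illustrative assumptions):

```python
import numpy as np

def spectrum_to_leds(samples, n_leds=444, sr=44100):
    """Map an audio buffer's spectrum onto per-LED brightness (0..1).

    The magnitude spectrum is split into n_leds bands; each band's
    energy drives one LED, so harmonic content "lights up" regions
    of the sleeve.
    """
    mags = np.abs(np.fft.rfft(samples))
    bands = np.array_split(mags, n_leds)
    levels = np.array([b.mean() for b in bands])
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# A pure 440 Hz tone lights mainly the low band containing 440 Hz.
t = np.arange(4096) / 44100
levels = spectrum_to_leds(np.sin(2 * np.pi * 440 * t))
print(levels.argmax())
```

A live version would feed short microphone buffers through this mapping many times per second and smooth the levels so the sleeve's light follows the music fluidly.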
The internet has enabled us to accomplish so many things. The universal appeal and power of the internet to change and augment our daily lives is both undeniable and inevitable. Every day we go online and consume endless amounts of media—audio, video, text, interactive experiences—that can be electrifyingly fun, educational, or a waste of time. Although a vast amount of information is available to us online, not all of it is free or legal to use without some sort of licensing agreement or pay-per-use model. Most of the bigger utility-like companies, such as Google, Yahoo, and Facebook, give away their online services for free; but when it comes to content created by professional companies, such as television studios, app creators, music or movie studios, there is a price that comes along with it. Despite this monetary inevitability, a great deal of data and information travel through the internet illegally, in violation of copyright and intellectual property laws.
As technology gains further ubiquity in popular culture, the rules and contexts that govern its use have begun to draw our attention. From lawsuits against Napster to federal hearings about Microsoft’s Internet Explorer, the problem of digital rights management is becoming a widespread phenomenon. Exploring the clash between mass consumption of technology and personal use, Danish artist Mogens Jacobsen’s work challenges the incorporeal existence of digital objects and their physical incarnations. Examining the legal restrictions of file-sharing, Crime Scene: Installation for Two Computers is an installation in which a copyrighted file is transferred between two computers ad infinitum. This portrayal of illegal internet activity in physical space adds yet another dimension to how mass culture is striving to amalgamate and restrict digital objects into categories that previously only existed for physical ones.
Unidentified game object.
The Free Culture Game is a playable representation of the battle between copyright and the free exchange of knowledge, ideas, and art. Loosely based on the theories of Lawrence Lessig and McKenzie Wark’s “A Hacker Manifesto,” it portrays the ambiguous relationship between the market and the commons.
Now you get to play the newest kind of soldier: one who remotely drops bombs on foreign soil during the day, and at night goes home to his family in the suburbs. In Unmanned, the conflict is internal — the only blood you’ll shed is from shaving cuts. But is there collateral damage in this new way of waging war?
Contemporary Golden is built on top of incredible geological wealth, a pocket of resources and power existing just below the surface. Over the last two centuries, this geology generated a demand for engineers that facilitated a culture of research, technology, and innovation. The Golden of today combines this physical history with geopolitics; it is a global innovator in high-tech materials and applications and a center for mining operations around the world.
The ceiling was chosen as the site of the painting to simulate a subterranean space, calling attention to the geology of Golden.
Field Test (NYC) is the second in a series of site-specific paintings that use X-ray and electron microscopy images of rare-earth elements as visual references. These materials – hidden in plain sight – power our electronic environment, running everything from mobile phones to MRIs.
I have an inclination to work with materials that have had an obvious life before I use them; it’s a challenge and a pleasure to make something from nothing. In the last year my practice has grown out of the studio in the form of large-scale rooftop paintings for Google Earth. This project uses materials from the waste stream (discarded house paint) to mark a physical presence in digital space. My work is generally concerned with human perception of current conditions; the Paintings for Satellites are specifically concerned with the effects of the digital on our physical bodies. All my work begins with a series of rules derived from existing conditions. For example, the color palette for the rooftop paintings is made from the discarded paint available on a given day; the physical surface of the roof determines the shape of the painting. As this project proliferates, it will take two forms – a community model, using local volunteers and paint from the waste stream and a design/build model, using solar-reflective paint, solar panels and green roofing contractors.
My body of work tells stories not simply about the people in my family, but more specifically about the dynamics that exist between us, and our relationship with the changing world. Within four generations, my family grew up in three different cultures (China, the Philippines, the United States), all the while sharing diverse views of the world, which color the relationships we share. Landscape and portrait images are collaged with textures and memories to portray the subtle complexities that are so common in the most basic of human bonds. With the use of digital art technology, these images seamlessly depict a unique co-existence of past and present, East and West, inexperience and maturity that has allowed each of us to discover our own identity as a human being who is an indispensable part of one global family.
The activity of our brains, among the most complex machines in nature, can be described as the work of active connections among the several billion neurons within our heads. These cells, the neurons, communicate with each other at specialized sites, called synapses, which transmit information through a complex mechanism that transforms an electrical signal into a biochemical one, and back again.
Synapses are very small structures whose size is measured in micrometers (thousandths of a millimeter), and they are classified by type. Each synapse has a specialized type of receptor that recognizes one of several molecules; synaptic activity is also determined by its location in the brain. One type is the serotoninergic synapse, whose main transmitter molecule is serotonin, a neurotransmitter strongly involved in mood regulation.
The work presented here, The Dark Anim, is the result of an effort to show the inner workings of serotonin and of one of the drugs (fluoxetine, aka Prozac) used when its metabolism is dysregulated. The work was commissioned by Filmtank for insertion into a full-length documentary, titled The Dark Gene, by directors Jakobs and Schick.
Even for scientists, who naturally want to adhere strictly to scientific knowledge, there is always a major input of creative choice in the preparation of such animations. Besides deciding the cameras, lights, and movements of the protein characters, we had to devise a way to visually explain how an electric pulse brings about all the events that lead to neuronal excitation.
The overall impression is reminiscent of Scanning Electron Microscopy images, but with a sort of ‘underwater feel’ that should elicit in the viewers the sense of living matter.
Media Used: Video.
Narcissus—the young Greek man—recognized himself in the water, fell in love with his reflected image, and was ruined by it. Today’s Narcissus should reflect on the future and the prospects of media technology by viewing and interacting with his virtual image.
The interactive installations—Rigid Waves and Liquid Views—are based on the Narcissus myth. Both are reflection themes playing with the observer’s image. The image seems to talk with the onlooker. A virtual mirror image of the observers is overlaid by a virtual scene.
The human is confronted with the machine and interacts with the computer. Images come alive, driven by the actions of the performer. The interface is the visitor’s own image. Leaving behind the traditional passivity of viewing, the viewer gets to interact with the image and leaves an impression on it.
A painting shows a mirror. The spectator moves closer to the image, and once s/he comes within a certain distance, the painting changes more and more into a ‘realistic’-style picture: it gets sharper and finally looks like a photo.
After giving the observer a pause to manage his/her surprise, the reality of the photo increases further. Then the mirrors in front of the picture begin to work. Their functionality is the final step into reality. The observer can see his/her likeness distorted, and can deform it by changing his/her position in front of the mirror.
After a short time of ‘playing,’ the performer will ‘leave the scene’—the scene will leave its reality—getting a new static painting, which will be kept until the next observer wants to dive into it.
The flat painting opens into a visual space sutured into reality by photo-realistic and acoustic elements as the observer gets closer to the picture’s surface. The observer becomes part of the world that happens in the painting, merging into the virtual life. When s/he leaves, the world slowly loses its activity, leaving his/her distorted image in it.
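The distance-driven transition from painting to photo can be sketched as a crossfade controlled by the observer's distance (the thresholds, names, and linear ramp are illustrative assumptions, not the installation's actual code):

```python
def blend_weight(distance, far=3.0, near=0.5):
    """Return the photo-realism weight (0 = painting, 1 = photo).

    Beyond `far` meters the static painting is shown; within `near`
    the image is fully photographic; in between, the weight ramps
    linearly as the observer approaches.
    """
    if distance >= far:
        return 0.0
    if distance <= near:
        return 1.0
    return (far - distance) / (far - near)

def blend_pixel(painting, photo, distance):
    """Crossfade one pixel value between painting and photo."""
    w = blend_weight(distance)
    return (1 - w) * painting + w * photo

print(blend_weight(3.5))   # 0.0
print(blend_weight(0.2))   # 1.0
print(blend_weight(1.75))  # halfway: 0.5
```

Applied per pixel (and analogously to sharpness and sound volume), this yields the gradual painting-to-photo shift the text describes.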
For Liquid Views, a watery surface of smooth waves is surrounded by an artificial landscape. Coming closer, the visitor sees his/her image reflected in the water. The observer is confronted with him/herself before the surface of the water changes, and the observer’s image with it. Soft waves are released, influenced by the natural environment. By touching the surface, the observer generates waves. After recognizing the liquid’s behavior, s/he will try to test it in an extreme state, overstating it.
Nature manipulates vision. Vision can be controlled in a specific way but cannot be ruled over. The water may be influenced or left alone, but its real master is not the observer. Nature can easily be led into an overdone state, out of control, and then needs a long time to calm down.
The realization is based on a horizontally positioned touch screen, on which an SGI Reality Engine simulates the water using special algorithms. The user’s image is captured by a video camera installed beneath the touch screen. The image is created by texture mapping a live video picture onto the water surface in real time.
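The simulated water surface can be sketched with the classic two-buffer ripple algorithm, a cheap approximation of the 2D wave equation often used for interactive water (the grid size and damping are illustrative; the actual SGI algorithms are not specified in the text):

```python
import numpy as np

def ripple_step(prev, curr, damping=0.99):
    """One step of the two-buffer water ripple update.

    Each interior cell moves toward the average of its four neighbors
    minus its previous height; damping slowly calms the surface after
    a touch, so the water eventually returns to rest.
    """
    nxt = np.zeros_like(curr)
    nxt[1:-1, 1:-1] = (
        (curr[:-2, 1:-1] + curr[2:, 1:-1] +
         curr[1:-1, :-2] + curr[1:-1, 2:]) / 2.0
        - prev[1:-1, 1:-1]
    ) * damping
    return nxt

# A "touch" perturbs the center; the wave spreads outward over steps.
prev = np.zeros((32, 32))
curr = np.zeros((32, 32))
curr[16, 16] = 1.0
for _ in range(5):
    prev, curr = curr, ripple_step(prev, curr)
print(curr[16, 16], curr[16, 21])
```

In the installation, the height field would then distort the texture coordinates of the live video image, so the visitor's reflection wavers wherever the surface is touched.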
Liquid Views and Rigid Waves tell the story of Narcissus in simulated environments with a combination of computer, video, and sensory interfaces. Their main purpose is to make visible the communication between the individual person and virtual selves. Touch and movement serve as interfaces into a spatial experience.
Liquid Views
Narcissism in the mirror of society deals with self-reflection and self-knowledge. In the virtual mirror, viewers are confronted with their images as reflected in the water of the spring, representations of themselves. As they allow themselves to be seduced and interact with their images on the water mirror, the images disintegrate and are transformed into a simulation until they finally merge into the algorithmic hybridism of the water. Liquid Views can be understood, in turn, as a metaphor for the act of being “online”; that is to say, our “second nature” as “navigators” immersed in the telecommunication world. On the “high seas” of cyberspace, the identity of each individual is transformed into a flow of variable and interchangeable data, in which viewers are completely free to change or redefine their identities; all they have to do is alter their own sources of information.
Rigid Waves
Rigid Waves transforms the acoustic mirroring of Narcissus and Echo into a visual form. As observers approach a mirror, they are confronted with a mirror image that does not correspond to their normal perception of things. They see themselves as impressions, as bodies with strangely displaced movement sequences and, ultimately, as images in the mirror that shatter as soon as they come too close. They are unable to grasp themselves. This is an attempt to see oneself from the outside, to stand side-by-side with oneself, and to discover other, “hidden selves.” In this fractured mirror, we find ourselves shattered and splintered. Our selves are liberated and broken down into multiple selves. The presence of space in coordinating one’s own interaction plays a key role in this work. It explores dynamic gestures of different cultures and genders in order to study the concept of interaction for global communication.
An omnivorous magnolia is captured by a xenobiologist on a field study to Zantlis Prime. Created as part of a colleague’s master’s thesis study on non-terrestrial virtual worlds for gaming environments.
In Out My Window, a 3D forest induces the viewer to question its source. This work tests fundamental perceptions of reality. Nature can be beautiful and enchanting; the challenge is to replicate this beauty in digital format and to capture the “soul” of nature electronically. The piece began as part of a colleague’s master’s thesis. Originally named Forest on Sulear Prime, the piece evolved over time into a purely artistic endeavor. It was built using models, procedural trees, and texture maps.
An interesting response to the piece is that some people assume it is a photograph. However, on an unconscious level they realize it cannot be a natural scene, as it has no depth of field. I feel this work does meet a basic condition of being hyper-real. As I continued working on Forest on Sulear Prime, it was enlarged and output as a “photograph” for a client, who encouraged me to print it large. I continued to manipulate it and changed the name to Out My Window. I still continue to change elements of the work and make new versions.
Old illustrational photographs are digitally manipulated, then richly glazed with coloured paint to suggest skin, touch, sensuality, and emotion. Those human elements are of particular interest, not only because science’s knowledge hierarchy assigns data acquired through the mind and eye the highest credibility, and data acquired through the sensual body the least, but also because the latter have historically been devalued as ‘feminine’ ways of knowing the world.
Together, the photos and paint create dialectic montages of human/photomechanical, soft/hard, subject/object, etc. Despite their technological mediation and their dry, didactic origin, the reified figures reveal human complexity and the transient nature of thought.
I am interested in textbook illustrations and the scientific rhetorical strategies they employ to make distant, to universalize, and to dehumanize their subjects – all in the name of objective truths. The subjects in this series are unnamed women demonstrating various swimming techniques in an old swimming manual. When I first saw these illustrations, I was struck by what they did not address: individuality, the sensual body in water, sensations of temperature, fear, etc. In response, my mixed-media work undermines the conceit of scientific representation by re-investing human qualities and individual narratives.
Hardware: Apollo Software: Rodin
Hardware: Gould SEL 32/77 Software: Rodin
Hardware: VAX 780 Software: In-house “Rodin2”
well-formed.eigenfactor presents interactive visualizations to explore emerging patterns in scientific citation networks. The Eigenfactor project calculates a measure of importance for individual journals (the Eigenfactor score), measures citation flow, and creates a hierarchical clustering. Moritz Stefaner turns this information into a set of four information-aesthetic visualizations, each highlighting different aspects of the data.
In visualizations of citation networks, both ball-and-stick-like network representations and maps are prevalent. This project extends the visual vocabulary, on the one hand by re-purposing existing techniques, such as radial-edge bundling and treemaps, and on the other hand by inventing novel approaches like magnetic pins as flow indicators and an “alluvial” diagram to represent change over time in cluster structures.
Citation patterns: a clean, yet organic radial network visualization that gives an overview of the whole citation graph. The radial-edge bundling technique effectively highlights the cluster structure and interdisciplinary citation links.
Change over time: This stacked bar-chart diagram displays changes in Eigenfactor score and clustering over time.
Clustering: Based on the squarified treemap layout algorithm, this visualization features “magnetic pins” to indicate both incoming and outgoing citation flow for any selected journal.
Map: This map visualization puts journals that frequently cite each other closer together. You can drag the white magnification lens around to enlarge part of the map for closer inspection.
Data: A subset of the citation data from Thomson Reuters’ Journal Citation Reports 1997–2005. For the visualizations, 400 journals with their approximately 13,000 citation edges were selected, ensuring coverage of the top journals in each field.
Oral Fixations is a single-channel video installation that evolves over a seven-hour time period. The project is a darkly humorous look at a habit of endless consumption and the resulting accumulation of waste. A narrative gradually emerges from the on-screen action, which depicts a large-mouthed character who dances while flossing its one protruding tooth. A conveyor belt regularly delivers factory-farm-fresh hams; the character delights in taking one large bite from each before tossing it aside. Over the duration of the piece, the hams pile up in the room until, after seven hours, the room is filled with the refuse of this gluttony. The viewer is encouraged to revisit the piece periodically throughout the day and see how the discarded hams build an oddly humorous environment of waste around the character. The length of this piece introduced several technical challenges: displaying changing images for seven hours at a constant frame rate and simulating the motion of seven hundred falling hams. Each iteration of the character’s motion was constructed from several technical elements. The character’s animation is motion-capture data recorded with a Vicon system using 12 MX-40 cameras; the data was non-uniformly scaled in Maya to emphasize the action. The floss is a dynamic simulation created within Maya.
I began creating artworks early in life, when I was about nine years old. When I was 14, I won a drawing competition organized by the Goethe Institute in Cairo and drawing awards in the Czech Republic, Japan, and Egypt. I studied drawing, painting, sculpture, graphics, and printmaking at the faculty of fine arts in Alexandria (BA 2001), and I specialized in printmaking. During that time, I also showed my work in several art contests and exhibitions. It’s not about money or fame. It’s about how a good artist sees the world and is inspired by everything. The sense of art is inside all of us, but some of us make it grow, and others ignore it.
This work was created with Adobe Photoshop. I worked on each channel individually with filters, contrast, levels, textures, transforms, rotation, hue, and saturation. When that process was complete, I grouped the channels into one image and exported the image to Painter, where I added hand work, colors, shadows, etc.
Since the earliest times, the shadow has proved existence – the ghost has no shadow. However, like the “virtual” image projected on a TV monitor, the shadow itself has no substance. And at the same time, the shadow, or the silhouette, appears as the basis of the image.
In KAGE, computer-graphics shadows of cone-shaped objects explore this shadow-substance characteristic. The computerized shadows projected toward the floor are motionless, like all shadows, but as time passes, some of them begin to tremble. When the objects are touched, various kinds of patterns appear on the computerized shadow images.
The ceiling-mounted projector also illuminates the viewers, so their shadows join the shadows of the objects on the floor. When the false shadows created by computer graphics and the viewers’ true shadows are both projected on the same plane, viewers recognize the shadow-existence dilemma.
Hardware: Minivax PDP11/Genisco frame buffer/Matrix QCR Software: Images I
This photo was created for a concept exhibit by eight invited photographers (including myself) for the Kanyon shopping mall in Istanbul. For the exhibit, named “Under Construction,” most of the photographers worked with female models, while I preferred to focus on the concept of construction itself and reinterpret it. Construction is a temporary action that exists for a while and then transforms itself into another product. This is why I wanted to end up with an architectural piece different from the one under construction. The concept text that I submitted for the exhibit was: “Went, saw, stopped, attempted to grasp and enter it, looked at construction process and workers with respect, tried to internalize, wanted to claim it for a while, dreamed of creating a microcosmos out of the macrocosmos I was in, shot and shot and shot and finally selected: the created world, though intended for all, was probably quite a personal illusion…” The construction process creates a different atmosphere, in which planes that we will later see as sterile surfaces are not yet covered, and space emerges with its total openness and sincerity. A construction site can also be seen as a podium, where a play-to-remain-incomplete is being staged. The incompleteness causes us to dream more, because a complete building loses its narrative potential: it informs us about all the necessary pieces that constitute the whole, and there is no puzzle left to solve. Construction in this sense is like a historical ruin. Paul Zucker asserts: “Ruins have held for a long time a unique position in the visual, emotional, and literary imagery of man. They have fascinated artists, poets, scholars, and sightseers alike. Devastated by time or willful destruction, incomplete as they are, they represent a combination of man-made forms and of organic nature.”
• Vertical panoramic photography: Took eight photos with a Canon EOS 1Ds to create a very-high-resolution vertical panorama. Stitched the photos using Autostitch, retouched the final image in Photoshop CS2, and mirrored the image in order to complete the intended “reconstruction” process.
• Trace bitmap: Saved a lower-resolution copy of the photo and imported it into various software such as FreeHand MX, Illustrator CS2, and Adobe Streamline. The idea was to obtain vector information for the CNC machine. The trace-bitmap studies were not very successful and created a huge number of vectors that could not be imported into the CNC software.
• Calling a CNC specialist: After all this waste of time with vectorization, decided to consult a CNC specialist and learned that it was possible to get 3D relief information from bitmap images.
• Etching the photo on plexiglass: Using the CNC machine’s bundled software, constructed 3D relief patterns from the photo; it took about 20 continuous hours to etch the photo on a 1500 x 650 x 12 millimeter plexi sheet.
• Printing the photo on etched plexiglass: The photo was printed on the etched side of the plexi using a ZÜND 215 C55 UV inkjet printing system.
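The bitmap-to-relief idea the CNC specialist described can be sketched as a simple heightmap conversion, in which each pixel’s brightness is mapped to a milling depth. This is an illustrative toy, not the CNC software’s actual algorithm; the function name and the 4 mm maximum depth are assumptions.

```javascript
// Toy heightmap conversion: darker pixels are milled deeper, white pixels
// are left untouched. maxDepthMm is an illustrative assumption, not a
// parameter of any real CNC package.
function reliefDepths(grayPixels, maxDepthMm = 4) {
  // grayPixels: brightness values in 0..255, one per pixel
  return grayPixels.map(g => ((255 - g) / 255) * maxDepthMm);
}
```

A full grayscale image would be flattened row by row into such an array before being handed to the tool-path generator.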
One of the main characteristics of panoramic photography is its ability to let one perceive the object, subject, and space of interest as an entity in relation to their surroundings. Many details on the periphery that would normally be left out in single frames become centralized in panoramic photography. As a consequence, you end up with a particular life form of its own kind, which turns out to be the synthesis of individual forms, in other words a sui generis situation. This unique narrative can be extended to cubist works and Ottoman miniatures where unrealistic multifaceted descriptions can be observed. It also reminds us of Piranesi’s drawings depicting complicated, interwoven three-dimensional worlds.
This photo was captured by a digital camera (Canon EOS 5D) and stitched together using the software called Autostitch. After the stitching process, the image was retouched in Photoshop for color correction. Though some of the images in the series were later turned into OTVRs, all of them were kept and printed as panoramic photos, since the above mentioned multi-faceted “cubist” quality was much better preserved in this particular format, as opposed to OTVR.
“It is easier to perceive error than to find truth, for the former lies on the surface and is easily seen, while the latter lies in the depth, where few are willing to search for it.” — Johann Wolfgang von Goethe With the traditional photographic documentation process, you can capture only one condition of a particular place. But various climatic and seasonal conditions may drastically change the appearance of that same place, rendering it differently under different light. Can multiple photographs of “reality” allow us to see beyond and capture the spirit indirectly? Can visual accumulation through time give us more insight, more clues, about life in a particular place, and therefore its soul? Can multi-layered photography act as a “palimpsest” that tells the story of a place? The motivation for this time-lapse photography was to accumulate the various conditions of the same place through time and so obtain components of the place’s identity. Then, by layering these components and printing them on transparent sheets that reveal the information behind them, it was possible to obtain a “visible depth.” This process can allow us to “reach the core” and understand the essence of the place. All surfaces are preconceptions … (after Friedrich Nietzsche’s “All words are prejudices.”)
Hundreds of photos were taken at five-minute intervals over 10 days to capture various moments and light conditions of the same place. A Canon EOS 1Ds with a FireWire connection and a Windows laptop running DSLR Remote Pro software were used to automate the process. Twenty-five photos were chosen and processed in Photoshop for the final work. The plexiglass box was designed and taken to a workshop that used CNC processes for production. The digital photos were printed on dura-clear sheets in order to obtain the desired transparency.
TOKEN CITY: SUBWAY WALL is part of a larger body of work entitled Token City. Here, viewers become an integral part of the action and emotions of a New York subway excursion via manipulation of 3D animation, computer graphics, real-time video, and a mixed soundtrack of electronic music and digital sound effects. The installation focuses on the tactile quality of the subway experience, for instance:
• Passing through the miles of tile-walled tunnels, plastered with a gallery of garish billboards, contrasting with elegant, 1930’s Art Deco ceramic mosaics.
• Packing into the subway car and standing shoulder to shoulder with people you don’t know and will never see again. The “trompe l’oeil” presentation of the wall draws the viewer to its surface, demanding to be touched! The presence of the Token City 3D animation enhances the experience through movement and sound.
Conceived as a ceiling projection, the video gazes into the often-changing sky of Glasgow: In the city centre, Victorian buildings stand in a grid of streets filled with cars and buses. Towards the periphery, high-rises dominate the landscape; the tower blocks stand in the never-ending noise of the urban motorway M8. Once, these high-rises were ultra-modern and popular, but now they are old and neglected, partially abandoned, and many of them are being demolished. Some former inhabitants romanticise “their” good high-rises; others don’t want to see them anymore and prefer little family homes. The video ends with an ascension of high-rises: an apotheosis of changing ways of urban life. The video animation quotes and interweaves constructivist and baroque aesthetics.
My themes and visual research deal with powerful symbols, myths, and signs from architecture, society, politics, movies, and religion. My artworks are explorations of their meanings: a questioning, a reevaluation, and the creation of new associations. In order to undermine entrenched representations, I work directly with them, contradict or re-interpret them with visual means, or focus on their hidden aspects. Using video, animation, collage, abstraction, and found footage, well-known figures undergo transformations, start to communicate, and build new relations. Symbols of identities turn into elements of dialogue.
All kinds of people are using their smartphones. The displays don’t show any apps – only the sensual movements of the hands count. Each pair of hands plays both roles from Michelangelo’s “Creation of Adam” at the Sistine Chapel: God the father and Adam, whose fingers touch each other. Smartphones are the new saints to which we cling and who guide us on our ways – they are the amulets of our time.
Since 1969, I have been trying to raise interactivity to the level of an art form as opposed to making art work that happened to be interactive.
From the beginning, I reasoned that interactivity would be limited by what the computer knew about the participant’s behavior, and I developed specialized computers for perceiving the human body. I have also incorporated the image of the person’s body into the computer graphic images.
In general, I have stuck to the premise that everything that happens should be a direct response to the participant’s actions. However, within that discipline a number of different kinds of pieces can be developed. One family of interactions I think of as two- or three-dimensional “mini-media,” which visitors can use to create their own dynamic artistic expressions. Others involve two or more participants in different locations who interact with each other in the same virtual space either as a spontaneous interaction or as a live performance.
Although 30 years have passed, interactivity is still beginning. Many of the preliminary ideas I started out with are still unrealized, and more advanced concepts are waiting to be invented.
The Videoplace was conceived in 1969, simulated in 1970, and first exhibited at SIGGRAPH ’85 in San Francisco. Since then, its development has been updated in the SIGGRAPH ’88 and ’92 Art Shows. It is the origin of the concept of a shared telecommunication space and of unencumbered full-body participation with a graphic world through video projection of computer graphics – a format that so far is more attractive to artists than head-mounted displays. In this exhibit, several Videoplace installations will be networked together to allow participants to interact in a single graphic world. This installation is a departure from previous work in that the world portrayed is 3D.
We all know that the earth is round and that anyone who ever thought otherwise was an idiot. At least, that is what we are told. In fact, we have no first-hand experience that tells us that this is true. We simply take the scientists’ word for it.
To make being on a sphere palpable, this environment will shrink the world to a scale that can be circumnavigated very quickly. Participants will stand in front of a large projection screen depicting a realistic 3D terrain. The projection screen will be a portal into that world. Participants will be able to move through that terrain by pretending to fly exactly as a child would – by holding their hands out from their sides and leaning in the direction they want to fly. In addition, they can control their altitude by raising or lowering their hands. When they descend to the ground, they do not crash through it as they would in most virtual reality systems. Instead, they move along the surface.
The navigation of this world is very satisfying by itself, because the means of navigation is so intuitive. Since this is a planet, if participants continue flying in one direction long enough, they will come back to the place they started from. We will construct the world to make it interesting to explore. In addition to exploration, there are other activities possible. For instance, participants’ actions can change the planet as they move around it. They might defoliate the areas that they touch, or, alternatively, barren areas might bloom as a participant passes through. Mountains might rise and fall when a participant raises his/her arms. A variety of interactions will be implemented. These will be distributed around the small planet and will depend on the number of participants who happen to be together in a given part of the world at any moment.
At times, participants can interact in a game of tag or hide-and-seek. Since the representations of the participants will be Z-buffered into the 3D scene, they can hide behind a graphic tree, above a graphic cloud, or down in a graphic canyon. Alternately, they can cooperate in tasks such as herding animated creatures, shaping the planetary landscape, or spreading graphic vegetation.
While individual participants do not always need to see themselves on the screen, a representation of them will be displayed at their location in the world so they can interact with other participants. Whether this representation is a polygonal model, a silhouette, or a live video image is an aesthetic trade-off with terrain complexity.
Since conventional reality is already in abundant supply, there is no point in merely duplicating it with computers. Instead, we can explore new kinds of reality in which the laws of cause and effect are composed from moment to moment. In this piece, reality itself will be one of the performers. Two dancers, each in a VIDEOPLACE environment, dance together in a three-dimensional scene projected before the audience. At the same time, one of them is also dancing with a third dancer in a second VIDEOPLACE world projected onto a second screen. Thus, her performance occurs in two distinct contexts simultaneously. Every action has a different consequence and a different significance in each world. At times, the worlds themselves are created in real time by yet another participant sitting at the VIDEODESK.
This performance is enabled by the loan of two VGX440s from Silicon Graphics Computer Systems and the loan of four high-resolution video projectors from Esprit Projection Systems. Katrin Hinrichsen provided engineering support for the project.
Hardware: National 16000 controlling six microcoded processors Software: Custom “C,” LISP
Hardware: National 32016 in control of 8 microcoded specialized vision & graphics processors operating on a specialized bus structure Software: Conceptual Dependency, LISP, Flexcode, C, Microcode
“He Ao Hou” is a point-and-click adventure game set in the far future, when your people (Native Hawaiians) have attained the next level of navigation: space travel. It is the result of a unique workshop: Skins Workshops on Aboriginal Storytelling and Video Game Design, offered by an Aboriginally-determined team.
This animation was created from a lighting doodle project through which we met various people in various places. The surprise and joy that brought life to the doodles linked one person with another. This communication naturally took form in the work, which received the Excellence Prize in the Animation Division at the 10th Japanese Media Arts Festival.
Viewers of this interactive CD-ROM are invited to enter what seems to be a typical hotel room. As they navigate through the room, they encounter several objects, some personal (private) and others impersonal (public). When they are touched, many of the personal artifacts (a suitcase, a book, a day-timer), reveal stories, memories, conversations, or images. With each experience, viewers become more involved in and acquainted with how the two roommates are negotiating private and public space within the hotel room – a space that is inherently public, yet when inhabited takes on a private dimension. The hotel room functions as a metaphor for engaging questions about privacy, communication, boundaries, trust.
Operation Empathy is a silent video simulating the experience of a drone following a passenger vehicle. Especially for US citizens, this notion would seem completely unimaginable. Yet our country, in the guise of fighting terrorism, has killed hundreds of innocent civilians on their way to weddings, civilian meetings, etc. This video was originally part of a durational performance, ‘It’s Your Party,’ that examined the phenomenon of targeted killings in Middle Eastern countries. ‘It’s Your Party’ was an immersive meditation on the people and places that US drones are bombing in the Middle East. I organized it as a collaboration with Arshia Haq, Stephanie Allespach, Amy Alexander, and Marjan Vayghan. Presented at UC Irvine on Nov. 3, 2016, the two-hour event surrounded the audience with sound (mixed by Arshia Haq) while they were free to wander and explore video, both projected and on monitors, drink tea, and have conversations largely drawn from news about the wars. Excerpts from my year-long research into our country’s drone warfare were printed on origami papers folded into cranes or planes and given to guests, who were invited to write pre-addressed postcards to elected officials.
Drone footage shot by Brad Hughes, Emergent Media & Design, Schools of Biological Sciences, Arts and Education, University of California, Irvine.
Hardware: PDP 11-780, AED frame buffer
Software: Custom C – R. Carling, D. Kramlich
Hardware: Perkin-Elmer 3220, Grinnell frame buffer, Reticon CCD scanning camera Software: VLW
Hardware: Silicon Graphics 4D 25 TG Software: Alias 3.0
“Not only does God play dice…but he sometimes throws them where they can’t be seen.” – Stephen Hawking
The Machine in the Garden is an interactive videodisk installation dealing with gambling and spirituality, twin distillates of our obsession with luck and fortune, weighing the apparently random outcome of phenomena against a possible underlying order. Einstein’s Theory of Relativity eliminated Newton’s illusion of absolute space and time, and the combined research of quantum mechanics and chaos theory has shown the flaws in the belief that reality is predictable. As inevitably as we turn to organized religion for reassurance in the face of our mortality, and countless other systems of spiritual belief for their promises of miracles, we are drawn to games of risk and chance. Reconciling spirituality with our apparently reckless attitude toward technology becomes less problematic when we acknowledge that they are opposite sides of the same coin. Playing the odds and betting to win is a decidedly postmodern response to a failing faith in technological utopianism.
Modeled on the design of a casino slot machine, The Machine in the Garden incorporates the Buddhist motif of “See no evil, Hear no evil, Speak no evil” as the final image upon which each video display comes to rest. When viewers approach the installation, they see, on three video displays, the same woman’s face with hands covering her eyes, ears, or mouth. When the viewer pulls the slot-machine lever to activate the installation, video from three thematic areas begins to scroll: imagery of war and destruction on one video display; talking heads of politicians, game show hosts, and religious figures on the second; and children’s programming and television commercials on the third. Simulating the action of a casino slot machine, the scrolling of the imagery gradually builds in speed, stopping suddenly in staggered sequence on one of nine possible combinations of the woman’s face.
The use of recycled broadcast imagery in this installation represents an interest in the reinterpretation and juxtaposition of images and themes that recur in mass media and popular culture. This technique has been employed in a series of installations incorporating technology from the 1950s, with the theme: women and technology, and the manipulation of the body through representation. This series of installations provides a framework of irony and empowerment for the presentation of complex issues and images, particularly as they relate to women. These installations counter the optimism and passive acceptance that women are expected to feel toward technology with the real impact it has had on their lives. The juxtaposition of old and new technology draws the viewer into an examination of popular culture in relation to the current ‘revolution’ in micro-electronics.
The Meadow explores and manifests the metaphorical space that lies between the simulated and the real — a space to which artists are inevitably drawn. Ambiguity and irony also share this space, and it is here that new mythologies and realities may be imagined. This space is particularly appropriate to artists working with new electronic technologies to bridge the gap between science and fiction.
Stepping into the installation space, the visitor is surrounded by four large color monitors, each displaying real-time, full-motion video of a different view of a meadow as seen from a central vantage point. It is winter in the meadow, then suddenly the season shifts. The views remain the same, but a certain motion or sequence of movements has triggered a transformation. Suddenly, it is spring. The visitor discovers, moving within the installation space, how to trigger these seasonal changes and finds it is possible to move backward in time, from winter to fall, or across seasons, from fall to spring.
Other effects may also be triggered: the sound of a flock of geese, which suddenly materializes, flies overhead, and disappears; the persistent and annoying buzz of a mosquito; children laughing or playing just out of sight. A child whispers on your left and is answered by another child whispering on your right. A momentary freezing or speeding up of the video imagery, a sudden change of perspective or shifting of location, may seem to be random. As in real life, the relationship between cause and effect is sometimes blurred.
Ultrasonic transducers, a centralized microprocessor unit, and a custom-designed micro-controller for four laserdisc players engage the visitor in an interactive simulation of a small, intimate meadow. The microprocessor and controller perform two functions: detection of visitor location/movement and multiple laserdisc control.
After working many years as a painter, I began working with the computer to create a new layer to my work. It was a natural step, a way to further understand nature and the world around us. As we all know, the very essence of nature is changing as more and more technology is introduced. Using the computer allows me to experience that change by accelerating the creative process. This acceleration allows for more types of work to be produced and permits more experimentation. It also allows me to pursue my vision and goals in art, which are to honor nature through this technological process and to express my interest in the natural cycle of life.
I believe that the experience and skill of drawing and other traditional methods are still essential for the creation of art. Because of that, I am developing a collection of my own drawings done in the field and from specimens from the Los Angeles Natural History Museum. I have drawn upon past work to create new compositions and created a series of images relating to the starkness and beauty of winter at Crater Lake National Park, Oregon.
It has been my intention to put the hand of the artist into the computer. I wish to address our place in this new technology in a poetic and human way, and to explore the possibilities that the computer presents in the printmaking process. My current work is based on original drawings that are scanned into the computer and manipulated with Adobe Photoshop to create new images and compositions. They are original, signed, limited-edition prints.
The Chill of a Thousand Pearly Hues is from a series of work created during an artist’s residency at Crater Lake National Park. It celebrates the centennial of the park and the past, present, and future of nature.
Hardware: Cromemco Software: Slidemaster
Storyland is a randomly created Web narrative. Each line is constructed from a pool of possibilities, allowing each reader a unique story. The work is a one-page Web site created with JavaScript. Upon entry, the reader presses the “Let me tell you a story” button, and a story is created for that moment in time. It is unlikely that any two stories would be identical.
Storyland exposes its narrative formula, thus mirroring aspects of contemporary cultural production: sampling, appropriation, hybrids, stock content, design templates. It risks discontinuity and the ridiculous, providing opportunities for contemplation beyond the entertainment factor.
Storyland will play from any JavaScript-enabled Web browser. Elements for each line in the story are randomly selected from a series of arrays that hold Storyland elements.
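The line-construction mechanism can be sketched as below; the arrays and phrases here are illustrative stand-ins, not Storyland’s actual content or code.

```javascript
// Illustrative pools of story elements (stand-ins for Storyland's arrays).
const openers  = ["Once upon a time,", "Long ago,", "In a far land,"];
const subjects = ["a child", "an old sailor", "a curious fox"];
const actions  = ["found a hidden door", "told a story", "lost a shoe"];

// Pick one element at random from an array.
function pick(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
}

// Assemble a story: each line draws one element from each pool.
function tellStory(lines = 3) {
  const story = [];
  for (let i = 0; i < lines; i++) {
    story.push(`${pick(openers)} ${pick(subjects)} ${pick(actions)}.`);
  }
  return story.join("\n");
}
```

Because every line is an independent random draw, the number of distinct stories grows multiplicatively with the pool sizes, which is why two identical stories are so unlikely.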
Hardware: DEC PDP/Mirage/NEC DME.II Software: Aurora/100
This example of “cultural computing” condenses the essence of a book into a Haiku, the classical Japanese poetic form of 5-7-5 characters including a seasonal word called “Kigo.” Such imaginative expressions have been applauded by many people. Haiku are stories that generate context: the shortest stories in the world. “Hitch” means to connect one thing and another; in this case, it connects one phrase with another. A user chooses arbitrary phrases from a chapter of a famous Japanese essay called “1,000 Books and 1,000 Nights,” by Seigow Matsuoka, which introduces 1,000 books spanning many genres, generations, and origins. The site registers approximately 200 million hits per day! The system generates a Haiku using the corpus of an essay and several databases dedicated to Haiku generation, then translates it into English. In this way, the essence of a Japanese book can reach those unfamiliar with the language. Haiku, invented in 17th-century Japan, has been recognized as a sophisticated and condensed written form for transmitting sensitive, emotional meaning. It has been expressed in many languages, so it can be a common medium for transferring feelings and bridging cultural differences.
The system carries out a syntactic analysis of each phrase and detects the basic form of each noun or verb. Then it composes a phrase of the Haiku by adding a special particle called “Kireji,” which not only separates a Haiku into three phrases but also encourages imagination. There are six types of databases in the system: Haiku thesaurus, Kigo thesaurus, idiom thesaurus, case frame of onomatopoeia, thesaurus, and case frame. From the databases, the system searches for the phrases and words most closely related to the user inputs. The system chooses the phrase with the highest score and infers the season of the Haiku from the user inputs or from the phrase chosen from the Kigo database. The system translates the Japanese Haiku into an English Haiku using the automatic translation service “Language Grid” at NICT. If users do not like the generated Haiku, they can modify it and record new phrases in the system. We assume that the user inputs are closely related to the phrase modified by the user; the system adds the relationship between the user inputs and the morphemes of the modified phrase, then learns their relativity.
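The phrase-selection step might be sketched as follows. The overlap-count scoring and the candidate phrases are illustrative stand-ins for the system’s thesaurus and case-frame lookups, not its actual scoring function.

```javascript
// Toy scoring: count how many words a candidate phrase shares with the
// user's input words (a stand-in for the real database-driven relatedness).
function scorePhrase(phrase, inputWords) {
  return phrase.split(" ").filter(w => inputWords.includes(w)).length;
}

// Keep the candidate phrase with the highest score, as the text describes.
function bestPhrase(candidates, inputWords) {
  return candidates.reduce((best, p) =>
    scorePhrase(p, inputWords) > scorePhrase(best, inputWords) ? p : best
  );
}
```

In the real system, the winning phrase would additionally determine (or be checked against) the poem’s season via the Kigo database.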
An automatic facial expression synthesizer that responds to expressions of feeling in the human voice.
I created a new creature or a piece of work that can live and meaningfully communicate with modern, urban people like ourselves, people who are overwhelmed, if not tortured, by the relentless flow of information, and whose peace of mind can only be found in momentary human pleasures. Neuro Baby was born to offer such pleasures.
The name “Neuro Baby” implies the “birth” of a virtual creature, made possible by recent developments in neurally based computer architectures. Neuro Baby “lives” within a computer and communicates with others through its responses to inflections in human voice patterns. Neuro Baby is reborn every time the computer is switched on, and it departs when the computer is turned off. Neuro Baby’s logic patterns are modeled after those of human beings, which makes it possible to simulate a wide range of personality traits and reactions to life experiences.
Neuro Baby can be a toy or a lovable pet, or it may develop greater intelligence and stimulate one to challenge traditional meanings of the phrase “intelligent life.” In earlier times, people expressed their dreams of the future in the media at hand, such as novels, films, and drawings. Neuro Baby uses contemporary media to express today’s dreams of a future being.
Basic Characteristic of Neuro Baby and its Interaction with the External World
This work is the simulation of a baby born into the “mind” of the computer. Neuro Baby is a totally new type of interactive performance system that responds to human voice input with a computer-generated baby’s face and sound effects. If the speaker’s tone is gentle and soothing, the baby on the monitor smiles and responds with a pre-recorded laughing voice. If the speaker’s voice is low or threatening, the baby responds with a sad or angry expression and voice. If you try to chastise it with a loud cough or disapproving sound, it becomes needy and starts crying. The baby also sometimes responds to special events with a yawn, a hiccup, or a cry. If the baby is ignored, it passes the time by whistling, and responds with a cheerful “Hi” once spoken to.
The baby’s responses appear very realistic, and may become quite endearing once the speaker becomes skilled at evoking the baby’s emotions. It is a truly lovable and playful imp and entertainer. In many ways, it is intended to remind speakers of the lifelike manner of the famous video-computer character Max Headroom.
Two major technologies were combined to create this system: voice analysis and the synthesis of facial expressions.
Voice analysis is performed by a neural network emulator that converts the voice input’s wave patterns into “emotional patterns” represented by two floating-point values. The neural network has been “taught” the relationship between inflections in human voices and the emotional patterns contained within those inflections. During interaction with the baby, the emotional patterns found in the observer’s speech are continuously generated.
During the translation stage, the two values for emotional patterns are interpreted as an X-Y location on an emotional plane, onto which several types of emotional patterns are mapped. For example, “anger” may be located on the lower left of such a plane, while “pleasure” would be located on the upper right of the same plane. Each emotional pattern corresponds to a paired facial expression and a few seconds of voice output.
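The emotional-plane lookup described above can be sketched as a nearest-centroid classification. The centroid positions below are assumptions chosen to match the examples in the text (“anger” lower left, “pleasure” upper right); the original mapping is not published.

```python
import math

# Assumed positions of emotions on the X-Y emotional plane.
EMOTION_CENTROIDS = {
    "anger":    (-1.0, -1.0),   # lower left of the plane
    "sadness":  (-1.0,  1.0),
    "surprise": ( 1.0, -1.0),
    "pleasure": ( 1.0,  1.0),   # upper right of the plane
}

def classify_emotion(x, y):
    """Map the network's two output values (an X-Y point) to the
    emotion whose centroid lies nearest on the plane."""
    return min(EMOTION_CENTROIDS,
               key=lambda e: math.dist((x, y), EMOTION_CENTROIDS[e]))
```

Each classified emotion would then index the paired facial expression and the few seconds of voice output mentioned above.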
During the performance, the facial expression is determined by interpolating the shape, position, and angle of facial parts, such as eyes, eyebrows, and lips. These parts were pre-designed for each emotional reaction.
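The interpolation of pre-designed facial parts can be sketched as a simple linear blend between parameter sets. The parameter names and values here are illustrative assumptions, not the system’s actual part descriptions.

```python
# Hypothetical parameter sets for two pre-designed expressions.
NEUTRAL = {"brow_angle": 0.0, "eye_open": 1.0, "lip_curve": 0.0}
SMILE   = {"brow_angle": 5.0, "eye_open": 0.7, "lip_curve": 1.0}

def interpolate(a, b, t):
    """Blend the shape/position/angle parameters of two expressions:
    t=0 gives expression a, t=1 gives expression b."""
    return {k: a[k] + t * (b[k] - a[k]) for k in a}

half_smile = interpolate(NEUTRAL, SMILE, 0.5)
```

Animating t from 0 to 1 over a few frames would produce the smooth transition between emotional reactions described above.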
One FM TOWNS, Fujitsu’s multimedia personal computer, is used for voice analysis, another FM TOWNS is used for voice generation, and a Silicon Graphics IRIS 4D is used for image synthesis.
“Sound of Ikebana” is a collection of video artworks of a new type, created by applying sound vibration to liquids such as pastel colors and oils, and shooting the resulting Ikebana-like forms with a high-speed camera.
This new type of ikebana (flower arranging) is created by capturing the beauty of a physical phenomenon with a high-speed camera running at 2,000 frames per second. The beauty arises from a collaboration between the physical phenomenon and the artist’s sensitivity, and it leaves an unforgettably strong impression. By using various liquids, the artist expresses many kinds of color variation: prayerful Buddhist colors, Japanese “wabi-sabi” colors, the delicious colors of food, the cute colors of Cool Japan, Peranakan colors of Singapore, Indian colors, Chinese colors, and so on. Through these color coordinations, the artworks express the Japanese seasons: plum and cherry blossoms in spring, cool water and morning glories in summer, red leaves in autumn, snow and camellias in winter, and the Christmas and New Year seasons. An appropriate Haiku (a Japanese short poem) was selected to accompany each captured video. Please see the URL below.
In computer technology, the basic trend is a move from the era of calculation, database processing, and information processing to an era of addressing culture: expressing culture, handling the types and structures behind different cultures, and, as a result, letting people understand other cultures at a spiritual level. In other words, we are entering the era of Cultural Computing.
Media used: Digital video.
In face-to-face communication, the occasional need for intentional lies is something with which everyone can identify. For example, when we get angry, circumstances may force us to put on a smile instead of expressing our anger. When we feel miserable, good manners may dictate that we greet others warmly. In short, to abide by social norms, we consciously lie. On the other hand, if we consider the signs that our bodies express as communication (body language), we can say that the body does not lie even while the mind does.
Unconscious Flow “touches the heart” in a somewhat Japanese way by measuring the heartbeat of the “honest” body and using other technologies to reveal a new code of non-verbal communication from a hidden dimension in society. The artist calls this “techno-healing art.”
Two computer-generated mermaids function as individual agents for two viewers. Each mermaid moves in sync with the heart rate detected by an electrode attached to its viewer’s collarbone. Then, using a synchronization interaction model that calculates the two viewers’ mutual heart rates on a personal computer, the two mermaids express hidden non-verbal communication. Relaxation/strain, calculated from the heart rate, and interest, calculated from heart-rate variation, are mapped onto the model. The model reveals communication codes in the hidden dimension that do not appear in our superficial communication. For example, when two people are highly strained and highly interested, they are assumed to feel stress and shyness, and the system generates reactive CG embodiments that behave shyly. When both people are highly strained but less interested, unfriendly communication is generated.
For a high degree of synchronism, the agents mimic the hand gestures of their subjects. For a low degree of synchronism, the agents run away. When one mermaid agent touches the other, a pseudo-touch can be felt through a vibration device. For background sound, the heart sounds of the subjects are picked up by an electronic stethoscope and processed for output on a personal computer.
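The interaction model described above can be sketched as two lookups: a strain/interest quadrant selects the mermaids’ emotional reaction, and the degree of synchrony selects their gross behavior. This is a schematic assumption; the reactions for the low-strain quadrants are invented, and the real model is presumably continuous rather than thresholded.

```python
def quadrant_reaction(strain, interest):
    """Map relax/strain and interest levels (0..1) to a mermaid reaction.
    Only the two high-strain cases are described in the original text;
    the other two labels are illustrative guesses."""
    if strain > 0.5 and interest > 0.5:
        return "shy"          # high strain, high interest
    if strain > 0.5:
        return "unfriendly"   # high strain, low interest
    if interest > 0.5:
        return "friendly"     # assumed
    return "idle"             # assumed

def synchrony_behavior(sync):
    """High synchrony: mimic the viewer's hand gestures; low: run away."""
    return "mimic" if sync > 0.5 else "run_away"
```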
The design for Ratio.MGX is the result of a study of phyllotaxis (the principles governing leaf arrangement), mathematical structures, and the rational and irrational distribution patterns in nature. The lamp, together with the electric cables, rests unconnected in a concrete base. This allows the lamp to be taken out of its base and carried around like a torch. Ratio.MGX is the rational part of a twin design. The irrational counterpart has not yet been released.
Once our bodies are no longer here, we leave in our wake a trail of artifacts and places that we have shaped, constructed, designed. I am particularly interested in the power of places and artifacts to tell the stories of people who may have passed through. I am not documenting real people; rather I am intrigued by the suggestions triggered by the artifacts I have found and the places I have visited … an invitation to the imagination to fill in the missing pieces. Although the series was initially inspired by my travel through ghost town ruins, my current “research” takes place in basements and attics, where I perform “archaeological digs.”
These works also explore memory and the passage of time. Peeling wallpaper, which appears throughout, is a metaphor for digging back through layers of time and memory. The peeling wallpaper in these images reveals an underlying layer of newspaper, typically used for insulation 100 years ago, the headlines of last century still readable. My working process also bears reference to the peeling wallpaper – the collographs layered underneath the digital prints emulate time-beaten walls.
In order to achieve a built-up surface in my digital work, I mix the illusory textures of the digital inkjet prints with the tactile qualities of printmaking, drawing, and collage. Initially, I use Photoshop to rework, collage, and transform my photographs, drawings, paintings, and prints. The digital image is printed on top of a heavily embossed collograph print or collage of torn paper. This process is sometimes followed by further overprinting with additional digital print processes, i.e. digital lithographs or digital serigraphs. The final layer is created with direct drawing – pastel, charcoal, and graphite. I enjoy the mixing of new and old processes, for the physical and conceptual qualities of old and new, future and past.
The “Ghost Town Artifacts” series is constructed in layers within the 3D space of a deep wooden box with hinged glass lid and padlock. In these works the artifacts are preserved, perhaps along with the memories they evoke.
If These Walls Could Talk explores the passage of time, particularly the power of places and artifacts to trigger memory of [real and fictitious] moments in time. My work is based upon my travel through the ruins of abandoned buildings in ghost towns in the western U.S. and periodic trips to the dusty, memory-laden, artifact-filled house of my childhood. I find myself drawn to richly textured products of decay: layers of peeling wallpaper and fabric, dust debris, and scattered artifacts. I have always liked to “read” my work with my fingertips, a kind of Braille. In order to achieve a built-up surface in my digital work, I mix the illusory textures of inkjet prints with the tactile qualities of drawing, printmaking, and collage in a multi-layered process. Typically, the bottom-most layer is a heavily embossed collograph print or a collage of torn paper, overprinted with the inkjet print. Subsequent layers are achieved with additional overprinting (lithograph or serigraph) and/or direct drawing with pastel, charcoal, or graphite.
The method and the message are one and the same. The layered digital collographs and collages emulate those time-beaten wall surfaces, the peeling wallpaper a metaphor for digging back through layers of time and memory.
Animatrix is a computer dancer, reminding us of a Bodhisattva, a Buddhist figure in a half-enlightened state.
The installation consists of three parts:
1) The graphics program that calculates the movements of the Animatrix, depending on user input.
2) The music program that interactively composes the music depending on user input.
3) The user interface.
The user interface is a double joystick consisting of two positioning devices attached to each other. The Animatrix reacts to the movements of the interface and starts to dance; at the same time rhythmical music is triggered.
There is a relation between the movements of the interface, the movements of the Animatrix, and the music, but the relation is not straightforward: sometimes the Animatrix seems to be a willing dance partner, at other times it seems to have its own life and to dance its own dance. By playing with the system the user will gradually discover that s/he is not only able to influence the dance of the Animatrix, but also the music and the rhythm that comes along with it.
There are two levels of interaction:
1) Positioning the interface gives direct control of some of the movements of the Animatrix, and of part of the musical composition.
2) Variations in moving, twisting, and rotating the interface are measured and analyzed over a longer period of time and cause more complex patterns of music and movement.
The graphics program receives the input data and reacts directly to it. It passes the data on to the music program that also reacts directly to it. Both programs analyze the input data over a longer period of time, and exchange the results of their analyses continuously. The music program analyzes and evaluates the timing and rhythm aspects of the user’s movements and processes this information in its composition rules. It then sends the results of the analysis to the graphics program.
The graphics program analyzes and evaluates the positioning, twisting, and rotational aspects of the user input and passes them on to all body parts of the Animatrix, each of which has its own set of rules that tell it how to move and how to react to the results of both analyses. The movement information is then passed on to the music program.
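The data flow described above — both programs reacting directly to the input while continuously exchanging longer-term analysis results — can be sketched schematically. This is an assumed illustration, not the installation’s code; the running-mean “analysis” is a stand-in for the real timing/rhythm and movement analyses.

```python
class Module:
    """Stand-in for either the graphics or the music program."""
    def __init__(self, name):
        self.name = name
        self.history = []

    def react(self, sample):
        """Direct reaction: consume the raw input immediately."""
        self.history.append(sample)

    def analyze(self):
        """Stand-in long-term analysis: the running mean of the input."""
        return sum(self.history) / len(self.history)

graphics, music = Module("graphics"), Module("music")
for sample in [0.2, 0.6, 1.0]:     # simulated joystick input stream
    graphics.react(sample)          # both programs react directly...
    music.react(sample)
    g_result = graphics.analyze()   # ...then each analyzes over time
    m_result = music.analyze()
    # ...and the results are exchanged each step, feeding the other
    # program's rule set (composition rules / movement rules).
```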
The work was supported by the following institutes and companies: Fonds voor Beeldende Kunsten, Vormgeving & Bouwkunst, Amsterdam, the Netherlands; Institut fuer Neue Medien an der Staedelschule, Frankfurt am Main, Germany; Kunsthochschule fuer Medien, Koeln, Germany; Silicon Graphics Computer Systems, Koeln, Germany; and Internationaler Wiener Kompositionswettbewerb, Wien, Austria.
Writing Rules for Art’s Sake

For several years I have concentrated on the investigation of the possibilities for automation in the art creation process. I have approached this research from a conceptual (and artist’s) point of view.
Tools that supply automation during the making of an artwork already exist, and are being used by artists in a wide area of applications. However, I have been looking for an automation tool that affects artistic thinking at a more abstract and basic level.
If there exists something like an artist’s formal language (where every artist uses his or her own version of this language, but all languages belong to a certain class), one could think of a tool that supplies automation at a level where it will affect such a language. In fact such a tool might call for the development of a new class of formal art languages, and it is clear that if an artist is going to be using such a tool, this will influence his or her way of thinking from the moment of conception of the artwork.
There is a lot of research going on in the scientific community. Many new programming paradigms are being developed, like dynamics motion control, inverse kinematics, behavioral systems, artificial life systems, fractals, and L-systems. These paradigms are all implementations of more or less complex, life-like systems into the binary world of the computer. I have looked at these paradigms from the following points of view:
How could those paradigms be used as an artist’s tool? How will implementations of these paradigms affect the artist’s thinking?

A virtual world can be represented in the computer: in the virtual world are its inhabitants (objects). The inhabitants behave according to laws and rules that are described in the program. Each individual object follows its own set of rules, which may be very simple, but as a group the inhabitants will show an emerging group behavior. Such a group behavior can be rather complex, even if the underlying rules are simple.
Artificial intelligence researchers are looking for life and searching for rules that cause life-like behavior. Being an artist, and not looking for life, I would like to reconsider the set of rules. Rules can describe many things, for instance physical behavior according to laws of gravity and collision, or individual behavior like “avoid the others and fly forwards.” But rules could also describe what an object might look like, or how a viewer might interfere in the system and change the set of rules.
Once a system has been implemented in which a set of rules can be defined and applied to a set of objects, the artist’s thinking can be directed toward possible sets of rules. Possible rules might describe how objects behave individually, how they are born, how they live and die, how they mate, survive, and mutate, what they are and what they look like. Rules could describe how the system is visualized, and whether its output will be graphics, text, or sound. Last but not least, they could describe how the objects react to a viewer’s behavior and interference, how a viewer’s behavior and interference might change the rules, and how the rules might change themselves.
By defining the set of rules the artist creates the elements of the formal language that is going to be used. The machine will take care of the application of the rules of this language, and here’s where automation comes in. Once the program is running, events (elements of the composition) will occur in a non-predictable order. A story is being told, although it may be a very abstract story. If the program is interactive, the story will be non-linear: the viewer has become involved in the sequence of events.
An important aspect of the artistic creation process has been taken over by the machine, viz. the application of formal language rules. The focus of the artist’s imagination has been shifted from application of rules to defining a set of rules.
I have started my work by describing movements for objects with simple mathematical functions. I built an application in which variations on a rule could be made by changing some parameters; the application would run fully automatically and the output would be an animation on videotape. (“Automated Animations,” part I, 1991).
I have then started to program a virtual world in which the inhabitants must obey laws of Newtonian dynamics: gravity, collision and friction. Several animations were the result of running this system with various parameters. In some of these animations the virtual camera would be one of the inhabitants of the virtual world (“Automated Animations,” part II, 1991 and “25 objects meet,” 1991/92).
I have extended the system with behavioral rules for its inhabitants, changed it to a real-time interactive system, and added sound. The resulting computer installation, “7 objects meet,” was exhibited at Ars Electronica 1992 (“7 objects meet,” real-time interactive computer installation, 1992).
My most recent work is the interactive computer installation “Animatrix,” (in cooperation with the composer Masahiro Miwa). This installation has two rule systems running concurrently: one for the movements of a seven-armed dancer, and one for the composition of the music. Both systems communicate with each other continuously (“Animatrix,” real-time interactive computer installation, 1993).
LIFE AT THE WITCH TRAILS is based on the idea of creating “living“ structures through sound. Video material from x/y-stereo displays visualizes the phase changing from two-channel audio signals. The sound source is a special audio composition that cannot be realized without direct visualization. It contains full-on, sound-dependent motion dynamics and forms complex “cathode-ray objects,” which allow direct (not delayed) visual access to the smallest details of the composition. The representation is not limited by the pictures-per-second time frame of television and computer technology. The interconnection of the aural and visual senses arises in an immediate way, and the visualization of sound obtains a new meaning. This project has its roots in the works of the abstract filmmaker Oskar Fischinger, who created several works on the topic of visual music in the 1920s and 1930s and was connected to Alexander Laszlo, a pianist who explored color and music. Fischinger used multiple overlapping projected images at live multimedia concerts in the 1920s. As a counterpoint, there are the drawings of the Belgian poet and painter Henri Michaux, who worked on the topic of “language” in his poetry and sometimes focused on “asemic writing” under the influence of hallucinogens.
Databank of the Everyday takes as its subject the real everyday uses of computers in our culture: storage, transmission, dissemination, and filtration of bodies of information. The work reflects on what media – from photography to computers – have always attempted to do: represent the truth of life and organize it into well-defined lists and categories.
Photography, for example, begins and ends its history with the idea of the catalog, from William Henry Fox Talbot’s inventories to the recent proliferation of electronic image banks. And so, picking up where photography left off, the Databank provides a conclusive catalog of an ordinary life. It models itself after commercial data banks with their generic all-encompassing categories such as People at Leisure, Flowers, Nine to Five, and Nature. The Databank‘s categories are no less all-encompassing and include Wasting Time, Nervous/Bad Habits, Because of My Mother, and Staged for the Camera.
The Databank proposes that everyday life consists of a series of loops performed by the body, much like the simple loops performed by a computer program. The ordinary body is like an imperfect machine, flawed in its efficiency by its desires, habits, and compulsions. The Databank can be thought of as a catalog of flawed movement studies of the everyday (scratching, shaving a leg, watching TV, and slamming a door), standing in opposition to the historical movement studies of Muybridge, Marey, Taylor, and the Gilbreths.
The primary graphic interface of the Databank is a loop that spins as users move between sections. The sections allow access to the same data in different ways. One section is a subject catalog featuring a diagram of the subject that animates as if in a spasm as her buttons are pressed, triggering access to a loop. Another section is a dictionary of loops with a number of miniature actions that take place simultaneously, choreographed by the user. Yet another organizes the data as antonyms, presenting such actions as pressing up and down on an elevator button, or taking a shirt off and putting it on.
Featuring the latest in amplified fin-de-siècle rhetoric, the work vehemently perpetuates the current hysteria surrounding new technologies. Again we witness a revolution, and again we hear loud claims about the universality of the change and the transformation of everyday life. (History, as we know, also repeats itself like a loop.) And so in keeping with the tradition, the Databank heralds its very own 21st-century manifesto, in compliance with early-20th-century avant-garde movements.
As digital media replace film and photography, it is only logical that the computer program’s loop should replace photography’s frozen moment and cinema’s linear narrative. Databank of the Everyday champions the loop as a new form of digital storytelling: there is no true beginning or end, only a series of loops with their endless repetitions, halted only by a user’s selection or a power outage.
Testament is a multi-channel video installation composed of hundreds of fragments of vlogs (video blogs in which people tell stories about themselves and the world) found online. Flows and patterns of faces and voices fill the gallery, giving social shape and form to multiple instances of isolated expression and reflecting a collective longing to be connected – and to be seen and heard in public – even as we all isolate ourselves in front of screens.
This digital work presents a continuum of the past, present, and future of Ukraine in photographs of three generations: an old lady, a girl, and a high school graduate. The Orange Revolution that took place from November 2004 to January 2005 has significantly changed Ukraine, but the question still remains whether future generations born after the revolution will have a better life than the previous ones. Digital media and globalization changed Ukrainian life in many ways. Borders are more open to international tourists. It’s not unusual anymore to hear someone speaking English on the streets of Kiev. People are more tolerant of other cultures as they become more aware of them. We go to Vietnamese restaurants, drink Czech beer, and eat American burgers. My niece learns the alphabet with a specially designed computer program. My nephew listens to American music. And my parents watch the same commercials as the rest of the world. We get CNN and MTV via cable television. Our streets are full of German cars, and shops sell Japanese cameras. On almost everything you buy you see the tag: Made in China. We have the same prices as Western Europe with Ukrainian salaries. At the same time, we keep our rich Ukrainian heritage, our traditional cuisine, our folk art, and our literature. Ukraine is still in a transitional period, and a change from communism to capitalism causes many problems, but our children give us hope for a better future.
This work was made with 2D digital imaging techniques such as blending, image filters, and artistic effects. The photographs were taken with a Fujifilm camera. Images were later digitally processed with GIMP.
Artists’ tools and outlets have changed drastically over the past century. Our options for delineating ideas have risen exponentially. Few techniques have escaped our repertoire of instruments. Surely, psychoanalysis, cinematography, engineering, and architecture have become acceptable modes of discourse. Presently, we witness incredible works pertaining to the fields of software engineering, bio-engineering, and the social sciences.
My work does not engage in a dialogue with these fields directly. Knowledge and technique and their application are not the aims. Rather, the aims are discovery, invention, and, most importantly, the search for the proper questions. The work must resonate and function within the realm of these technical phenomena. This relationship is two-fold. At one time, I am aroused by the classic, the chiaroscuro of traditional painting, the immediate tendency of brush against canvas. Another moment focuses on the cerebral stimulation found in the cognitive and computer sciences, specifically algorithm design and research as applied to pattern recognition and statistical processes.
This tendency for the artist and the technician to coincide is not new. It runs through our shared memories. It is as inherent in the invention of perspective, in the experimentation with and development of painting, as it is in the application of computers towards creative and digital endeavors. These developments do not surprise our curiosity. But they offer an opportunity to come closer to the muse we have all come to admire and pursue.
Teeterings is a series of studies created by an algorithm whose function is to promote an investigation of form. It is an iterative process of learning for both artist and algorithm.
Good-for-nothings celebrate the disappearance of materiality; albeit, through lack, dejection, and an embrace of the absence that seems to have brought much of our culture to a standstill. Forever shifting, always shiftless, on an endless joyride from nowhere to anywhere.
The screen’s nature is both to show and to obscure. It forever hypnotizes us, seamlessly eliminating its own qualities as a substrate. It owns the characteristics of a Zelig: forever changing, unstable in any context, and destabilizing context itself. Informed by photography, film, and every meme that ever was, the digital image shifts readily between aspects of each. Its meaning is necessarily slippery: hard to pin down, resistant to any neat category.
Given this slipperiness, can we ever grasp the basic, tectonic components of the digital image? The bits and pixels of the screen do little to help our visual understanding of its relationship to one’s perspective in everyday life. The seductive illusions and concomitant complexities of our online experiences have enabled an entirely new trompe l’oeil hell of phishing attacks, spoofs, and cross-domain tomfoolery. Digital images, precisely because of their ambivalence towards the picture plane, forever slip from our grasp. Only as Flusser’s metaphorical wind blows them from our mental, perceptual grasp do they reveal aspects of their construction. Rather than fight against this liminal quality, we exploit it.
How does one go about working with this shiftlessness? Each Good-for-nothing raises its metaphorical glass to Herman Melville’s crème de la crème good-for-nothing anti-hero, Bartleby. They are images aligned with a scrivener of the post-modern age who can only tell us: ‘I prefer not to.’
Do algorithms persist in order to justify the use of computing in contemporary society? Does the Taylorist juggernaut that we have unleashed need to fully extinguish the agency we have in our work before we as a society realize that the continuation of culture requires the freedom of individuals’ imaginations in order to exist? Against the bureaucratic regimes of funding and market creation for an art industry that is merely a shill for the latest investments in technology and ’innovation’, we propose a form of digital media that counters the very structure it is built upon.
Good-for-nothings are facts in themselves, not subject to the vagaries of truth and falsehood, not interested in the tenuous connection that data has to reality, not a reflection of materiality. Instead they offer continuously developing imagery that is defined by the structural grid of every screen.
While the context of ‘new media’ helps define the structure of the data-driven works that make up good-for-nothings, these works that have come out of our collaboration stem from a shared enthusiasm for looking at something online that is explicitly made for and with online media: the computer and internet are needed to generate a platform onto which our works develop, absurdly, ad infinitum. Rather than present anecdotal illustrations of the alienation implicit in screen-culture, good-for-nothings act within its abstraction. Algorithms, neural nets, data. How better to question the hegemony of the screen than employ its core components?
Good for nothing (no. 1) constructs random clouds of oversized ‘pixels’, then connects them with a smooth continuous surface, and finally, slowly, erases or flattens itself, only to begin this never-ending cycle again. It runs in a web browser and relies primarily on 150 lines of JavaScript. The color and location of the pixels are stored in a spatially aware Mongo database (unique to each session), and the continuous surface is drawn by calculating the Euclidean distances of the k closest pixels. The interface between the front end and the database, and the basic web service, are implemented in Python using Flask and PyMongo.
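The surface step can be sketched as follows (in Python rather than the original JavaScript, and without the Mongo store). The idea is only that each sample point’s value derives from the Euclidean distances to its k closest ‘pixels’; the mean-of-k-distances field below is an assumed stand-in for the piece’s actual surface function.

```python
import math
import random

random.seed(1)
# A random cloud of 'pixel' positions, standing in for the database.
pixels = [(random.random(), random.random()) for _ in range(20)]

def surface_value(x, y, k=3):
    """Field value at (x, y): the mean distance to the k nearest pixels.
    Small inside the pixel cloud, growing smoothly away from it."""
    dists = sorted(math.dist((x, y), p) for p in pixels)
    return sum(dists[:k]) / k
```

Evaluating this field over a grid and contouring or shading it produces a smooth continuous surface that hugs the pixel cloud, which is the effect the piece cycles through before erasing itself.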
The majority of computational or algorithmic art we see approaches art-making through the lens of computation—as if to say ‘art looks something like this, how might we make an image that looks like that using some other toolchain?’ Our practice originates from the opposite vector—it looks at algorithm-making through the lens of art practice. It doesn’t assume that algorithmic art will have a representational relationship to the artifacts we have previously known as ‘visual art’. Since the core structure of digital media inevitably relies on the screen for presentation, we began seeing the screen as a substrate—like the canvas or panel of physical painting.
Painting in the mid-century established its tectonic underpinnings by reorienting its substrate in space and using gravity, rhythm, happenstance. Paintings became objects: pictures as things, not pictures of things. At their best our algorithms hope to uncover a similar tectonics and an indexical language of the immaterial image.
The work is deeply collaborative. These ideas and images wouldn’t exist outside of the ongoing dialogue between us. But the dialogue is rarely easy, whether because we live 2000 miles from one another, or because we have no end of day-to-day struggles that distract us from this work. But the biggest challenge might be the incredibly slow feedback loop between having a thought and seeing that thought (badly) represented through code. This process has none of the immediacy of painting or drawing. Yet this also relates to much of Nick’s physical work that deals with very slow processes of accretion. It’s probably fair to say that we’ve set up a difficult working process for ourselves somewhat intentionally because that friction contributes to the work. Much of our work is meant to unfold slowly in time, and never the same way twice.
In 2004, I created a graphics program that simulated a population of stickmen and stickwomen through various iconic stages of life: birth, play, love, work, rest, travel, and death. Due to some intentional variability and the fluidity of interactions between agents, the results were always unique: the population favored one gender over the other, exploded, tapered off, was productive and concentrated or was ineffectual and scattered.
This print is an attempt to capture the entire “evolution” of the Society of Stickpeople in one frame. Historically, chronophotography was one way that artists and scientists captured motion. Pioneers such as Eadweard Muybridge and Etienne-Jules Marey captured unique images of motion that were scientifically revealing as well as aesthetically pleasing. Marcel Duchamp captured time in a different way in his painting Nude Descending a Staircase.
I am continually fascinated by the diversity and complexity of the images that can come from a simple set of instructions given to a computer. This modern take on chronophotography is made possible by the unmatchable processing power of the computer, which I enjoy using as an artistic tool. Applying design fundamentals to the raw output of my program helps reveal beauty and carries the images to a more refined level of composition.
The original OpenGL program was crafted to simulate a population using finite-state machines, using stick-men and stick-women with simple animations to visually represent the unfolding dynamics of the population. Later, the program was changed to experiment with the idea of digital chronophotography, or a way to capture in one frame the essence of each particular simulated run. This was achieved by leaving the drawn image every frame rather than clearing it between frames. This technique, combined with a very low opacity in the drawn elements, allowed the image to slowly accumulate over the course of thousands of frames. At any point in time, a key could be pressed to save a high-resolution version of the image at its current state of evolution.
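The accumulation technique, drawing at very low opacity and never clearing the frame, can be illustrated with a short Python sketch. This is not the original OpenGL/C++ program; the function name and the dictionary-as-framebuffer representation are assumptions made for illustration.

```python
def accumulate(image, strokes, alpha=0.02):
    """Blend one frame's strokes into the image WITHOUT clearing it,
    so marks build up over thousands of frames (digital chronophotography).
    `image` maps (x, y) -> gray value in [0, 1]."""
    for (x, y), value in strokes:
        old = image.get((x, y), 0.0)
        # standard low-opacity alpha blend: old pixel mostly survives
        image[(x, y)] = old * (1 - alpha) + value * alpha
    return image

img = {}
for frame in range(1000):
    # a hypothetical stick figure drawn at the same spot every frame
    accumulate(img, [((5, 5), 1.0)])
# repeated low-opacity marks converge toward full intensity
```

Because each pass keeps 98% of the existing pixel, a mark only becomes solid where the simulation revisits it again and again, which is exactly what lets the image record the history of a run.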
Originally inspired and encouraged by Clifford A. Pickover’s Chaos in Wonderland: Visual Adventures in a Fractal World, I developed custom software to explore mathematical phenomena known as “strange attractors.” To me, these images, which are merely visualizations (approximations, nonetheless) of dynamical systems, betray a beauty not separate from, but inherent in, science and mathematics. My whole intellect is engaged by these images: as a technician, solving problems and analyzing, and as an artist, creating, selecting, and designing.
Custom software was created with Microsoft Visual C++, OpenGL, GLUT, GLUI, and DevIL (known as OpenIL when I used it). The original algorithm was presented in Chaos in Wonderland. Two separate programs were created: one to “look for” aesthetically pleasing attractors, the other to render them.
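The iteration at the heart of such a renderer is tiny. As a hedged sketch, the following Python implements the “Latoocarfian” map family presented in Chaos in Wonderland; the parameter values shown are one classic chaotic set from the literature, not necessarily those the artist searched for or rendered.

```python
import math

def latoocarfian(a, b, c, d, n, x=0.1, y=0.1):
    """Iterate the Latoocarfian map from Chaos in Wonderland:
    x' = sin(b*y) + c*sin(b*x),  y' = sin(a*x) + d*sin(a*y).
    Returns the orbit with a short initial transient discarded."""
    points = []
    for i in range(n):
        x, y = (math.sin(b * y) + c * math.sin(b * x),
                math.sin(a * x) + d * math.sin(a * y))
        if i >= 100:  # skip the transient before collecting points
            points.append((x, y))
    return points

orbit = latoocarfian(a=-0.966918, b=2.879879, c=0.765145, d=0.744728, n=5000)
# every point stays in a bounded region: |x| <= 1 + |c|, |y| <= 1 + |d|
```

A “search” program in this spirit would sample random (a, b, c, d) values and keep those whose orbits neither diverge nor collapse to a point; a separate renderer would then plot millions of orbit points at high resolution.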
In my Faces of Chaos series, I seek to visualize a chaotic dynamical system, using a unique mapping of the Lyapunov exponent to the image plane. Tiled Faces is one result of this exploration, and its 1,024 images combine to reveal the “face” of the four-dimensional system’s chaotic behavior. This emergent figuration draws the viewer in, closer to the surface, where a myriad of individual “faces” is revealed. Because I approach the challenge of representing four dimensions from an aesthetic perspective, I am free to bring the underlying equations to light, to visually and intuitively understand them. Tiled Faces juxtaposes order and chaos, artistic sensibility and mathematical depth, within its pixels and pigments.
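A Lyapunov exponent measures how fast nearby orbits diverge: positive values signal chaos, negative values order. The artist’s system is four-dimensional, but the principle can be shown on the one-dimensional logistic map; this sketch is a simplified stand-in, not the artist’s actual equations or mapping.

```python
import math

def lyapunov_logistic(r, n=2000, x=0.3):
    """Estimate the Lyapunov exponent of the logistic map x' = r*x*(1-x)
    as the running average of log|f'(x)| = log|r*(1 - 2x)| along the orbit.
    Positive result: chaos. Negative result: a stable cycle."""
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)) + 1e-12)  # guard log(0)
    return total / n

print(lyapunov_logistic(3.2))  # negative: the orbit settles on a 2-cycle
print(lyapunov_logistic(4.0))  # positive: the orbit is chaotic
```

An image like Tiled Faces maps a quantity of this kind to color at every pixel, with each pixel’s coordinates selecting different parameters of the underlying system.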
The Advanced Visualization Lab worked with astrophysics data scientists to create data-driven cinematic scientific visualizations which represent the life cycle of stars and the dynamic processes of our sun.
In the first shot of this sequence of excerpts, we visit John Wise’s simulation of the early universe, which shows the formation of the first generations of stars. Three features were targeted in this visualization: the blue dense filaments along which stars form; the large orange regions of gas which become ionized and heated as new stars begin emitting lots of ultraviolet light, and later cool off after those stars’ lives end; and the heavier elements mixed into the gas after those dying stars explode as supernovae, forming small bubbles of high “metallicity density”.
The second shot in this sequence, another of John Wise’s simulations, examines the life and death of a single first-generation “Population III” star, looking in detail at the processes that mix the heavy elements into their environment. Using advanced computer graphics techniques, the AVL was able to iteratively produce high quality cinematic renders of thousands of John Wise’s multi-gigabyte datasteps in approximately 71,500 compute-node hours on the Blue Waters supercomputer.
In the last three shots of the sequence, we observe Mark Miesch’s and Nick Nelson’s global 3D simulations of convection and dynamo action in a Sun-like star which reveal that persistent wreaths of strong magnetism can be built within the bulk of the convection zone. The star’s rotation organizes the convective cells, which form long rolling “bananas”, rendered in the orange regions of the sequence. Tracer particles show the fluid circulation, colored yellow/orange by vorticity. Buoyant loops in the magnetic field, seen in the final shot in blue, are carried along by these “bananas”.
Media Used: Houdini, yt, Nuke, proprietary Houdini scientific data plugin.
Hardware: VAX 11/780, Ikonas frame buffer, Dicomed film recorder Software: NYIT Animation System
Hardware: VAX 11/780, DICOMED D48 Software: Paul Heckbert, Tom Duff, Peter Oppenheimer, and Lance Williams
Shaded image. The castle, water, clouds, etc. were generated separately and digitally combined. The water is a texture mapped polygon; clouds were painted.
Hardware: VAX 11/780, Genisco frame buffer, Dicomed D-48
Sometimes art is about the journey. While I worked on this two-part artwork, I constantly had thoughts of why I followed a certain route, and that became a compositional dimension in itself.
This work deals with taking compositions that are very much separate, and linking or joining them to make new ones. Combining segments works much like trying to combine stories; without guidance, they would lead to confusion and incoherence.
Each piece had a different birth and progressed through its own stress and growth-flow. Each image started as a balanced, self-contained entity (analyzing each of those could be taken to the analogical extreme). As an artist, you seek to find the dynamic interaction that visual illusion demands. There is a precise moment when parts come together and become inextricably unified, revealing a definite new framework. This point makes “combinational” work an effective tool for cognitive exploration as well as for introspection, each with its own approach. The process of connecting images in our minds has its variations, blending or just attaching, but to truly unify is to fuse new ideas. However metaphorical to our thoughts, the process and the ideas it reveals reflect the power of the visual system to abstract and arrange metapatterns (universal, inherent structures of thought) with just a simple stare: an amazing computational feat all viewers execute when they observe the world.
One major theme is complexity. This work’s complexity arises from the increasing number of possibilities in the process, as in the expansion of every facet of technology and information. For robotic (computer) vision to be possible, we will have to understand much better how we process what we perceive and induce visually; that is the job of the sense of aesthetics for now. In time, visual engines will be very powerful and will allow us to look into the intricacies of mathematics and physics while rediscovering our own world of art. The elegant symmetry of complexity (deep, cold, and austere) will take us far. Many kinds of languages await us in the pursuit of such patterns and structures at increasingly larger scales. Visual thinking will propel us in new ways.
Base 1. Schemas: Tiling and Reverberation
A schema is an in-built system that activates and drives something, visually or otherwise. This picture starts out with a definite plan, but the end of the journey is left very much a mystery. I started out by aligning the images in the radial cells and directly applying them to the center. For each section, I used Photoshop to place the primary elements and proceeded to order layers and set sizes. I often use the images themselves as brushes; they make unique aftereffects. The source images here are different from each other, yet rather homogeneous, more related to texture than form, and more neural and digital than defined in shape. This was much more rigorous than I thought, but I found many interesting functioning transitions. A “Zen” mindset keeps thoughts from getting in the way. Lastly, when the images were combined and worked aggressively, it was very difficult to predict what would result. This is a matter of spontaneous composition. Our visual perception guides the process, but it will not provide consciously formulated answers until we know most of its mechanisms.
Base 2. The Uncertainty Principle: Symbolism
The artistic ideal is rooted in “knowing” that the image will come together. The parts one starts with are very defined in themselves. They are not traditional elements, since they can each stand on their own, keeping something of their original nature (unlike Schemas, where they acted as broad inflections). One component is a photograph taken in the Bolivian Andes, another is a watercolor dubbed and filtered (Edge Detect and some tweaking to offset three constituent layers), and the third, the main anchor image, is a composite designed from scratch. They seem very separate, but at the subconscious level higher order is possible. This falls in the domain of dynamicist cognition; it traces some connectionist and symbolic modes of AI. These were the simultaneous ends in mind. Though less abstracted, they preserve the outcome: a self-contained, stabilized composition. The key stage is in initially selecting what is to come together without knowing a predicting factor such as the validity of theories (an uncertainty principle). When it is done, it should be left alone to hold its space. The process becomes truly endless when newer forms are introduced.
On the surface, the nature of mind and consciousness eludes science and technology. Ideas have taken root about approaching a singularity. As such, it already has some life. Ideas give birth to thoughts and artworks. Ultimately, we may transcend those boundaries. In our efforts, we will find what is underneath only if we discover what is inside.
This interior wall panel was designed for an SOM client from the Middle East. It is conceived as a 25-meter-long and 15-meter-tall screen that consists of solid, repeating Corian components that hang together structurally and weave public and private spaces. The resulting divider is a thickly layered and textured screen that generates the project-appropriate degree of visual transparency.
The system functions as both surface and structure thanks to the interplay between geometry and material. Corian is an artificial material that is typically used as a slab in architectural settings. Here, the designers worked with Corian fabricators to push the material’s structural capabilities and to explore the potential of digital fabrication. The resulting geometries are consistent with the ways Corian can be produced and manipulated, but open new possibilities for structural applications.
Instead of cutting off public from private spaces, the wall mediates between the two. The solid component pieces are “woven” together like fabric, and link one side to the other physically and visually. The three-dimensional texture and repeating pattern of the surface allow variation when viewed from different vantage points or in motion. This dynamic experience is dramatically reinforced under different lighting conditions.
“Disruptive Devices” is a triptych of digital kinetic artworks that mediate viewer interactions with virtual wildlife.
The Robotic Voice Activated Word Kicking Machine is a surreal exploration of language and our strange relationship with talking to machines, from customer service bots to “intelligent assistants”. It combines projection and robotics to explore the crossover between the virtual and the physical. Viewers’ spoken words are converted into text and launched into the virtual world. They accumulate there, sometimes kicked by a robotic foot and sometimes sucked back out into the world as sound.
Hardware: Cray X-MP, Dicomed D48 Software: By artist
Hardware: Cray 1, Dicomed D-48 color film recorder
Hardware: Apollo DN660, Cray-1 Software: Custom FORTRAN by N. Max
Hardware: Fujitsu M380 mainframe, Panafacom U-1200, Dicomed D48R
S.C.A.M. is a spoof on electronic art being sold as print art to the general public.
Hardware: DEC Micro PDP 11, 286 PC with custom buffers Software: NYIT custom
A collaborative project between the Digital Museum of Digital Art (Dimoda) and the Siggraph Asia Art Gallery 2017: this is the first VR project by artists from Southeast Asia shown within the space of the Art Gallery at SA17. Afterwards, it detached from SA and traveled around the world with Dimoda (online and onsite exhibitions) for one year. The idea was to create an exhibition within another exhibition: a mind inside a body that then becomes another body inside a mind again.
ADB is a snake-like, modular robot designed for haptic interactions with people, writhing, wriggling, twisting, and squeezing in response to how it is held and touched. It can be used to explore intimate and emotional relationships with technology through direct physical contact. ADB adapts to and reciprocates the energy one puts into it through one’s body. When touched, it comes to life. When stroked, it seeks more of you. When harmed, it defends.
ADB is composed of a series of identical modules that are connected by mechanical joints. Each module contains a servo motor and a variety of sensors, including capacitive touch sensors, a rotary encoder, and a current sensor, which together provide information about the module’s relationship to a person’s body. The electronics are enclosed within plastic shells fabricated on 3D printers.
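A single module’s behavior can be imagined as a tiny sense-act cycle. The sketch below is purely illustrative: the function name, sensor values, thresholds, and response rules are assumptions, not ADB’s actual firmware, chosen to echo the touch/stroke/harm responses described above.

```python
def module_step(touch, motor_current, position):
    """One hypothetical control step for a haptic module: touch draws
    the joint toward the contact; excessive motor current (being
    squeezed or harmed) drives it away defensively."""
    OVERLOAD = 0.8  # assumed current threshold, normalized to 0..1
    if motor_current > OVERLOAD:
        return position - 10   # defend: back away sharply
    if touch:
        return position + 2    # seek: lean into the contact
    return position            # idle: hold position
```

In a chain of such modules, each running the same rule on its own sensors, coordinated writhing and squeezing emerges from purely local responses.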
For the past decade, Stedman has been designing and fabricating machines, combining ideas and techniques drawn from both visual arts and engineering. He relates closely to the practices of “Device Art” and “Making.” Much of his work involves writing software, designing mechanisms and electronic circuits, and working with materials, while at the same time exploring the social, environmental, economic, and civic impact of technologies: those he produces and those he employs.
In particular, Stedman’s artwork pertains to embodied communication and social robotics. He makes robots that engage people in non-linguistic, haptic interactions. By eliminating symbolic communication such as language or even gesture, and focusing instead on direct bodily engagement, the objective is to stimulate sensations, and perhaps emotions, in human participants. The aesthetic experience comprises the tangible feelings the machine produces through physical interaction, as well as the ideas and associations evoked by the unusual experience of engaging in a sensual relationship with an artificial entity.
The robots are composed of assemblies of haptic-expression modules which, like pixels, can be coordinated to render a representation, in this case through kinetic deformation against a person’s body. The modules are built from a variety of sensors, motors, and other electronic and mechanical components, all enclosed within CNC-fabricated shells, which protect the technology and determine the outward appearance. A wide variety of control programs is possible with such an architecture, and the modules are designed to be easily reprogrammed in order to support explorations in software. While Stedman is most interested in decentralized machine-learning techniques (including genetic algorithms and artificial neural nets), the control software he uses is whatever affords the desired behavior with some economy.
Diastrophisms is a sound installation with a modular system that sends images through rhythmic patterns. It is built on a set of debris from the Alto Río building that was destroyed by the 27F earthquake in 2010 in Chile. With Diastrophisms, we were looking for a poetical, critical, and political crossing between technology and matter in order to raise questions on the relationship between human beings and nature, and to consider the construction of memory in a community by questioning the notion of monument, as well as to imagine new forms of communication in times of crisis.
Real-time recording of schematic version of Angels, a virtual-reality movie.
Hardware: Silicon Graphics 4D/25G, Silicon Graphics 320VGX, DEC 5000, VPL LX and Data Glove Software: Wavefront, VPL
Prix Villa Medicis, Biennale Arts Electroniques, Silicon Graphics, Wavefront Technologies, Crystal River Engineering, VPL
Virtual spaces and 3D-rendered objects can both be manipulated in on-screen environments and modified by designers in real time. We are now so accustomed to our ability to “fly through” these rendered spaces with computer mice or handheld controllers that they have become commonplace. In today’s oversaturated media landscape, we often take for granted the fact that virtual objects can coexist with our physical spaces through head-mounted displays. Increasingly, devices such as Microsoft’s HoloLens [2], Google’s Glass [3], and the countless other VR viewers on the market also allow for augmented reality (AR).
Niklas Roy’s Grafikdemo materializes this theme of virtual reality and object manipulation outside the virtual space of a headset or screen and brings it into the physical world. It consists of a physical wireframe of the quintessential 3D model, the standard teapot, built inside the frame of a classic Commodore CBM, a computer originally produced in 1977. At the time, the CBM was Commodore’s top-selling computer in North America and its first full-featured machine, a predecessor of the classic Commodore 64. Given the machine’s era and processing power, its inability to render anything useful is contrasted with an analog version of the teapot, which sits atop a system of motors and a frame that allows the teapot to be rotated in 3D space in the same way a traditional 3D model could be manipulated to show it from all angles. The difference in Roy’s adaptation is that the entire process takes place through physical means: there is no mouse on this computer; only the analog keys of the antiquated keyboard allow visitors to manipulate the object inside, thus exposing the mechanical aspects of the project.
This tower is designed as a gracious volume that elegantly borders the Nile River in the heart of Cairo. The main structural elements of this 70-story hotel and apartment tower are concrete fin walls that rotate gently over the full height of the tower.
Zaha Hadid Architects
The Signature Towers, 375-meter skyscrapers in the Dubai Business Bay, include offices, hotel, residential, retail, bridges, a waterfront park, and a promenade. The architectural concept takes a “choreographed” movement that combines the three towers in one overall gesture and “weaves” a series of public spaces through them: the podium, the bridges, and the landscape beyond.
3D printing of ZHA models was made possible through a donation from RedEye ARC
These two buildings are conceived as a united mass in the form of a cube hovering off the ground. The cube is carved or eroded by a freeform void, essentially the setback space between the two tower envelopes.
Zaha Hadid Architects (ZHA) uncovers internal correlations and recursive relationships in its design practice at multiple scales, from the detailed to the urban. ZHA is systematically searching for parameters, laws of change, and tools for generating complexity to shift how architects approach form making and space design.
Parametric Urbanism This project recruits powerful digital-design techniques to produce form and make urban spaces with an architect’s sensibility. The design team uses recursive Maya scripts to generate a pattern that responds to varying environmental parameters. The result is a complex field for an urban context. The first step is to reconstruct the fundamental typological catalog of architecture and urbanism in terms of field conditions: point fields of villas, line fields of towers, plane fields of slabs, and volume fields of urban blocks. The second step calls for a series of parametric variations. In the final step, the designers play a “matrix game” of recombination and interpenetration that generates the richness and complexity that makes an urban territorial strategy.
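As a toy illustration of a “field condition”, the following Python sketch varies tower height across an urban grid in response to a single environmental parameter. It is not ZHA’s Maya scripting; the function name, the choice of distance-from-centre as the driving parameter, and the exponential falloff are all invented for illustration.

```python
import math

def tower_field(rows, cols, intensity):
    """Toy parametric field: each cell of an urban grid gets a tower
    height driven by a smoothly varying parameter, here the distance
    from the site centre (an invented stand-in for environmental data)."""
    cr, cc = (rows - 1) / 2, (cols - 1) / 2
    field = []
    for r in range(rows):
        for c in range(cols):
            d = math.hypot(r - cr, c - cc)
            height = intensity * math.exp(-d / max(rows, cols))
            field.append((r, c, height))
    return field

grid = tower_field(rows=8, cols=8, intensity=100.0)
# the tallest towers cluster at the centre, tapering toward the edges
```

Recombining several such fields (points, lines, slabs, blocks), each driven by its own parameters, gives a crude analogue of the “matrix game” described above.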
ZHA has experimented with this approach in real-world projects at the Thames Gateway in London, in Istanbul, Turkey, in Singapore, and in Appur, India.
Procedural Complexity ZHA’s in-house research group, the Computation and Design Group, brings designers and programmers together to develop specialized tools for generating spatial and experiential complexity in geometry. The group is experimenting with topological complexity, cellular logics, and field structures, among other domains. These explorations have resulted in new tools that are available to all the end-users at ZHA for projects at any stage of development. In other words, the tools address the entire design process from end to end and underscore the recursive nature and inner correlations that are a hallmark of ZHA’s design.
Biological Instrumentation is a time-based spatial installation that combines organic and computational processes. The viewer is confronted with a hanging garden of mimosa plants, each connected by a series of tubes to an air compressor. The plants are wired with audio speakers, light sensors, and other electronic equipment. Digital stimulation, produced from the algorithmic application of compressed air onto the leaves, forces the plant to contract. Over the next 15 minutes, following the blowing of the compressed air, the mimosa plants begin to open their leaves again, triggering sound signals to play from the audio speakers that “float” next to the plants. The generated sound signals gradually increase in volume and intensity, culminating with the forced air being released onto the plants again. This work explores the poetics involved in creating new relationships between machines and plant life. The installation invites the viewer to walk through a spatialized sound environment and observe the machine-plant interaction. Regardless of the plant’s condition, the machines’ algorithmic program constantly stimulates the mimosa.
Criss~Crossing The Divine/Spiral Vortex Paint Game is an interactive game installation conceived to address the ever-expanding religious intolerance fueling global wars. Attendees use interactive wands to curate topic words and to assign more or less importance to each topic they select. The player receives color-coded scripture perspectives parsed from the individual’s search; no two sets of search results are the same. Directed to a website, the player learns from which of the 46,000 scriptures within The Old Testament, The New Testament, The Hindu Rig Vedas, The Quran, and Buddhist Texts their color-coded text results originated.
An HD video animation displaying a portrait exposing, confronting, and forecasting environmental and societal decay. A generic glass house is viewed spinning through myriad cycles inherent in the causal effects of erratic global warming weather, political divisiveness, and the ever-expanding intolerance of differences. Blurring edges between solid and fictive space questions the real-to-reel while shattering expectations of norms into particles of dust. Viewers are lulled and suddenly tossed between calm and brutal disturbances by interventions that shatter and assault psychological, physical, and auditory space, enhancing awareness for finding options to re-build matter leveling the slant of this uphill battleground.
Software: Autodesk 3ds Max, Adobe After Effects, Mental Ray Renderer, ProRes 422, QuickTime Pro, Final Cut Pro, GarageBand.
Computer Graphic Animation: Osaka University Producer: Nobuo Ishiki Animator: Akira Kato Music: Mikii Yoshikawa Hardware: Links-1
” … like an earthquake that suddenly comes into your life and reduces your life into nothing, and when you return to normality your perceptions, your feelings are different. Every time I see a landstorm, I remember my own landstorm. Very personal … it’s like a little secret always I have with me.” — Juan Miranda, Shock in the Ear
Shock in the Ear is an experimental new media art work. It evokes the moment of shock and its aftermath as a sensual experience. From culture shock to electric shock and reverberating beyond into shock aesthetics, shock resonates with deep and abrupt physical and psychic change. The project of Shock in the Ear is to engage the user at a sensual level with shock as a bodily experience – to evoke shock not at the crashing sensational moment of impact but in its sensual aftermath. It aims to disrupt perceptions as the user explores the moment after the event – a dislocated time/space of shifted perceptions and senses.
Shock in the Ear expresses the shocking concept that sound is essential to interactivity, as a new and engaging artistic form, because sound goes beyond the interface, into time, into the body, and into the imagination. Visually, the work disrupts conventional CD-ROM aesthetics and kinaesthetics, with its painterly, textured, and sensuous images, which interrogate painting conventions and history, and play with the relation between painting and multimedia.
Creating and articulating sound and image together in an innovative way, Shock in the Ear engages with interactive possibilities beyond simple point-and-click, immersing the user in emotional, sonic, and visual texture. At the moment of interactivity, the work opens up the CD-ROM medium’s potential for intimacy.
Shock in the Ear is an intense and poetic work, composed through interactive screens, stories, performances, music, and sound. Refusing the slickness and control of cyberspace, the work explores instead the potential of new media for poetic movement, understandings, emotions, and sensations.
Mineral pigment paint, diamond dust, and bone char on pigment print on canvas. Sound art and single-channel video. Created after pilgrimage to Yasukuni Jinja, a Shinto shrine located in Chiyoda, Japan which has attracted controversy for enshrining the kami (spirits) of war criminals from WWII.
Mineral pigment paint, diamond dust, and bone on pigment print on canvas. Sound art and single-channel video. View of Zintun from the Daoist Wenwu Temple in Taiwan.
Family Portrait in an Interior Scene from European History
St. Petersburg’s late-20th-century neoacademicism is a direct continuation of the classical style of ancient times and of every manifestation of that style in European classicism. The classical style has proved so stable and persistent over the centuries that it makes sense to consider all other European styles as mere deviations from that basic form. From this point of view, classicism/academicism seems an internal and inalienable feature of our racial consciousness.
Having little understanding of the great tasks set them by their parents, the children of this age saw themselves as belonging to the lost generation of the existential European past. The relative calm that settled upon their lives shattered their internal unity. Moreover, they were tired of history and all its wars and were anxious to “erase” everything from their memories and “become like everyone else.” They tried to present themselves as a generation apart from the heroic victory at Hanko, and as a result made themselves even more tired.
This portrait of a man and a woman is bereft of any connection, and even of any appearance of a connection, with the world and kin. Only the couple’s faces themselves preserve an inexorable link with the biological kin of which they are the latest representatives; but with this commences the rebirth of the broken link whose full restoration will come only with the couple’s grandchildren and great-grandchildren.
Alena Spitsyna, from the SCARP project catalog
HARDWARE/SOFTWARE PC 486, Venice Studio Getris
A multimedia suite by composer Olivia Louvel digs deep into the psychic warfare between two 16th-century British Queens. Fascinated by the existence of two Queens ruling at the same time on a single island, within an extremely male-dominated society, composer and artist Olivia Louvel explores the reign of Mary Queen of Scots vs. Elizabeth I and delivers her singular digital transposition: Data Regina. Drawn to the life and writings of Mary Queen of Scots, a poet and essayist herself and one of the most read women of her time, Louvel assembles a digital narrative through 17 compositions, capturing different palettes and tones and playing with identity within the “duality-duel” relationship of the two women: two queens and cousins who were never to meet. As well as the publication of a CD, the interactive digital website provides a further platform to showcase the 3D animations produced by animator Antoine Kendall as well as curated historical references.
“For her Data Regina, Olivia Louvel packages experimental electronic music, new media art, and 16th-century conflict into multimedia art. After several years of releasing electronic music, Olivia Louvel had grown restless—not just with the computer music process, but her vocals. Always one to operate at the creative frontiers where music and art intersect, Louvel, starting in 2010, began piecing together a project that would feature experimental shifts in instrumentation and vocal stylings. Orchestral sounds would collide with industrial noise; pop would give way to ambient textures, and Louvel’s voice would get layered and pitched to the point that it almost became a multi-timbral synthesizer. In the past, Louvel had previously drawn inspiration from silent film, her own paintings, and haiku, amongst other types of artistic media. For the long-gestating project that would become the new album, Data Regina, Louvel gradually decided to explore new media art forms like web art and 3D animation in telling the story of her infiltration of the lives of and conflict between Mary Queen of Scots and Queen Elizabeth I.” ¹
Pangburn, DJ. “Multiple Media: Olivia Louvel On Music, Art & 17th Century History.” The Quietus, May 9, 2017. https://thequietus.com/articles/22355-olivia-louvel-interview.
SEEN – Fruits of our Labor is an interactive installation that reinvigorates a public plaza through an alternative form of communication between its citizenry. It was commissioned by the Zero One San Jose festival and installed in the public plaza in front of the San Jose Museum of Art, facing Cesar Chavez Park. The monolith is a communication device reminiscent of the ubiquitous obelisks, plaques, and sculptures that populate public squares. These traditional monuments carry messages that sanctify historical moments or a set of values upon which the city has been built. Similarly, SEEN – Fruits of our Labor looks to broadcast a variety of unshared principles from the mouths of everyday citizens about their projected hopes and the American Dream in light of globalization.
The project asks members of three communities that fulfill San Jose’s labor requirements (Silicon Valley’s tech workers, undocumented service workers, and outsourced call center workers) one question: What is the fruit of your labor? Their responses are displayed on a 4-foot x 8-foot infrared LED screen whose content is visible only through the audience’s personal digital capture devices (cell phone cameras, digital cameras, DV-cams, etc.). The relationship that binds these disparate communities is that they labor in San Jose. The city is a global actor whose products are consumed the world over. Their reliance on the city’s economy is clear, but their understanding of this mutual engagement is less obvious. Some of the contributors to this wealth are not even present in the city.
The commodification of labor through globalization has allowed an unprecedented population to engage in the global marketplace. The results are both exploitative and liberating. Without judging the nature of the work that people do, the project surveyed these different communities to gather their responses. The project resulted in vibrant interactions between people, who shared their viewing devices with total strangers, discussed the streaming messages, and telematically shared their viewing experience with others in their phonebooks.
To the naked eye, the monolith is a blank surface waiting for information to be carved on it. However, when viewed through any CCD device, its messages magically appear on the user’s screen. It is only through the digital apparatus that the messages can be read. The audience is encouraged to photograph and share these messages: the fruits of others’ labors. What was previously hidden from their view is revealed through the technical device. They become complicit in the most personal way through this exchange.
As a competition entry, KPF’s design for Nanjing South Station was one of many stations planned as part of a major expansion of China’s high-speed and regular-service train lines. The station is sited in a slight valley, bisected through the center by a “green corridor” that connects the area’s major parks. Inside the station, the green corridor takes the form of an inter-modal hall. The arrival hall is located around the inter-modal hall, and above it are the station’s platforms and departure lounges. Above the elevated departure lounges is the metaphorical and physical centerpiece of the project: a large, sweeping roof that protects passengers from rain, sun, and wind.
Using parametric modeling techniques, initial designs for the station’s roof were tested and then manipulated in order to optimize environmental parameters such as light, wind, rainwater collection, and natural ventilation. Structural efficiencies were tested in a similar manner. The roof pieces were designed with S-shaped sections, and the variations were derivatives of various configurations of the discrete S curves, which were parametrically controlled. The layout and organization of the S curves were defined with global rule-sets, and the final design configuration was a result of these rule-sets, rather than a “hand-crafted” geometry. The behavior and ranges of adaptation for the S curves were defined beforehand, thus the geometry was already being developed under certain constraints. In other words, the design was informed and restricted by certain limitations, so it was pre-rationalized with embedded intelligence in the parametric model.
By designing parametrically, within the constraints of the program brief (set platform widths, column locations, and roof coverage), the design team generated a form that was the absolute product of the imaging technology. Through all stages of design – from the initial site analysis to the structural detailing – parametric modeling was used to help build, test, and improve the design team’s formal and programmatic decisions.
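The pre-rationalized, rule-set-driven approach described above can be illustrated with a small sketch. This is not KPF's actual model; the parameter names, ranges, and the particular smoothstep S-profile are assumptions chosen only to show how globally defined constraints shape every generated section.

```python
# Hypothetical sketch: a family of S-shaped roof sections driven by a
# global rule-set. Parameter ranges are fixed up front ("pre-rationalized"),
# so every generated curve stays within its allowed design limits.
LIMITS = {"amplitude": (2.0, 6.0), "span": (20.0, 40.0)}  # assumed ranges

def clamp(value, lo, hi):
    """Constrain a parameter to its allowed design range."""
    return max(lo, min(hi, value))

def s_curve(amplitude, span, samples=11):
    """Sample one S-shaped section as (x, z) points over the span."""
    amplitude = clamp(amplitude, *LIMITS["amplitude"])
    span = clamp(span, *LIMITS["span"])
    pts = []
    for i in range(samples):
        t = i / (samples - 1)
        # Smoothstep profile 3t^2 - 2t^3 rises from 0 to 1 with flat ends,
        # giving the discrete "S" shape of each roof piece.
        z = amplitude * (3 * t**2 - 2 * t**3)
        pts.append((span * t, z))
    return pts

# Varying one parameter under the rule-set yields the family of sections.
roof = [s_curve(4.0 + 0.5 * k, 30.0) for k in range(5)]
```

Because the constraints live in the rule-set rather than in any single curve, changing a limit regenerates the whole family consistently, which is the sense in which the final geometry is "informed and restricted" rather than hand-crafted.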
The Pinnacle will become one of the most significant new buildings in the City of London, with a design that strengthens the overall character and identity of the emerging cluster of tall buildings.
Kohn Pedersen Fox Associates
With its distinctive design and name based on the idea of the white magnolia (the city flower of Shanghai), this tower aims to stand as the iconic piece in its area of the city. Its organic form twists, focusing on views and optimizing solar orientation.
3D printing of KPF models was made possible through a donation from York Technical College / 3D Systems
Spotlight is a set of 16 interactive portraits. Each portrait has a set of nine “temporal gestures” – photographic-quality sequences of human gestures such as “looking up.” The portraits are networked and placed in a 4 x 4 layout. Every few seconds, a randomly selected portrait looks toward a neighboring portrait. In turn, the neighboring portrait looks back. To viewers of the installation, these “random discussions” create a sense of “social dynamics.” Viewers can interrupt the group dynamics at any time, by selecting one of the 16 portraits. The remaining 15 portraits automatically react and direct their attention to the viewer-selected portrait, which reacts with a special gesture – “being the center of attention.”
Spotlight is about an artist’s ability to create new meaning using the combination of interactive portraits and diptych or polyptych layouts. The mere placement of two or more portraits near each other is a known technique to create new meaning in the viewer’s mind. Spotlight takes this concept into the interactive domain, creating interactive portraits that are aware of each other’s states and gestures. So not only the visual layout, but also the interaction with others creates a new meaning for the viewer.
Using a combination of interaction techniques, Spotlight engages the viewer at two levels. At the group level, the viewer influences the portraits’ “social dynamics.” At the individual level, a portrait’s “temporal gestures” expose a lot about the subject’s personality.
Spotlight is a system of 16 portrait agents that operate as a distributed master-slave cluster over TCP/IP. Each portrait agent is a set of nine gestures, each a sequence of 40 photographic-quality black-and-white frames, packaged as a QuickTime movie.
There are 16 nodes; each an LCD screen with a built-in computer system. Each node is able to communicate with the others and display a portrait clip. At startup, one node is arbitrarily designated as the master, and all slave nodes are directed to connect to the master node to form the array. Once connected, each node declares its own configuration. The agents exist on the server only but are synchronized with their respective portraits over the network. This design simplifies communication between nodes, while retaining synchronous, millisecond-scale control over the video playback.
In idle mode, each agent may randomly choose a neighbor to “converse with.” When viewers initiate an interaction, the agents all “look” at the agent selected by a viewer. The target agent then plays its gesture action, while the other agents resume their standby posture. The entire array is then reset, and if no further interactions take place, the agents eventually return to idle mode.
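The master/slave behavior described above can be sketched in a few lines. This is an in-process stand-in, not the installation's code: real Spotlight nodes exchange messages over TCP/IP, while here message passing is a direct method call and the state names are illustrative.

```python
# Minimal in-process sketch of the described portrait array: a master that
# broadcasts a viewer's selection, and nodes that react to it.
class PortraitNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = "idle"

    def handle(self, message, target):
        if message == "look":
            # The selected portrait plays "being the center of attention";
            # every other portrait directs its gaze toward it.
            self.state = "center" if self.node_id == target else "looking"
        elif message == "reset":
            self.state = "idle"

class Master:
    def __init__(self, count=16):
        self.nodes = [PortraitNode(i) for i in range(count)]

    def viewer_selects(self, target):
        for node in self.nodes:          # broadcast to all slave nodes
            node.handle("look", target)

    def reset(self):
        for node in self.nodes:
            node.handle("reset", None)

array = Master()
array.viewer_selects(5)                   # viewer picks portrait 5
states = [n.state for n in array.nodes]   # 15 "looking", one "center"
```

Keeping the agents on one coordinating node, as the description notes, means only playback commands cross the network, which is what makes millisecond-scale synchronization feasible.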
General interest is turning to analog processes even though computer advancements are rapidly accelerating artistic expression toward even greater digitalization. One of the factors underlying this trend is that the digital quality is never as good as the original analog quality despite super-high resolution. Another reason is analog expression’s emphasis on ambiguity.
Computer advancements have enabled digital processing on a level unimaginable in the past. One gets the feeling, however, that freshness and innovation are suffering while digitalization continues to improve the quality of expression. Even though digital elements impress us with their realism, those same elements will feel extremely unsophisticated and unnatural in just a few years. By contrast, live performances will always remain true.
Unless the digital world becomes more realistic, it cannot surpass the quality of the analog world. You may wonder why some of the old digital forms of expression that should seem unsophisticated by today’s standards actually seem fresher than some of those that we see today. This is because the attraction of the digital world is not its realism, but its ability to create realities that are not possible in the analog world.
Code was built with Java, Processing, and Jsyn.
Images of objects are generated by typing keywords categorized as Points, Lines, or Faces on a keyboard. The images are also controlled by inputting keywords categorized as controls. There are a number of reserved keywords, and each triggers a function when input.
Each image has a sound sequence, and its tone color, volume, pitch, and localization are determined by the location data of the object image. The X-axis corresponds to the localization and cut-off frequency of a sound, the Y-axis corresponds to the pitch and resonance of a sound, and the Z-axis corresponds to volume. The screen is separated into seven parts from top to bottom, and the pitch of each object’s sound is scaled to each of these seven parts by changing the playback speed of the sound file. The typing sounds and the rhythm of the sound sequence are controlled by timing.
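A sketch of the axis-to-sound mapping described above might look as follows. The screen dimensions, frequency range, and the exact speed-per-band formula are assumptions; only the axis assignments and the seven pitch bands come from the description.

```python
# Hedged sketch of the described mapping:
#   X -> pan and filter cutoff, Y -> pitch band (7 bands) and resonance,
#   Z -> volume. Ranges below are illustrative assumptions.
def map_object_to_sound(x, y, z, width=800, height=700, depth=100):
    pan = x / width                        # 0.0 (left) .. 1.0 (right)
    cutoff_hz = 200 + 4800 * (x / width)   # assumed 200-5000 Hz sweep
    band = min(6, int(y / (height / 7)))   # screen split into 7 pitch bands
    # Pitch is changed by altering playback speed, one step per band.
    playback_speed = 2 ** (band / 7)
    volume = 1.0 - z / depth               # nearer objects play louder
    return {"pan": pan, "cutoff_hz": cutoff_hz,
            "speed": playback_speed, "volume": volume}

top_left_near = map_object_to_sound(0, 0, 0)
bottom_right_far = map_object_to_sound(800, 699, 100)
```

An object at the top-left, nearest the viewer, would thus pan hard left at full volume and base pitch, while one at the bottom-right, deepest in the scene, pans right, plays faster, and fades out.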
The lifespan of an object is determined by the frequency with which its keyword is input. If the performer keeps typing without hitting a certain keyword, its object will disappear.
Aesthetics is typically seen as a theoretical, especially philosophical, academic discipline focusing on questions about art, beauty, the nature of aesthetic experience, and related issues. However, there is another, non-academic side of aesthetics where similar issues are addressed, which is not usually considered in an academic context. To investigate how these two areas of aesthetics relate to each other, we applied computational text-mining techniques to Wikipedia, Google Trends, YouTube, Open Library books, and Web of Science datasets. We used topic modelling as our method of analysis and implemented it with the Gensim library. First, we collected data from the aforementioned sources, either by downloading from a provider, using APIs, or scraping with a purpose-built web robot. We then created a list of topics covering all the data we had gathered and built a topic map based on English Wikipedia articles. Next, we imputed each dataset into the generated topic map to identify the most-discussed topics and the extent of their discussion. The imputed datasets were related to aesthetics; for example, the Web of Science dataset included only abstracts and titles from articles that either contain the keyword “aesthetics” or come from recognized venues in the aesthetics discipline. This mapping allowed us to avoid some well-known topic-modelling problems. For instance, had we applied topic modelling to the separate datasets directly, we would not have been able to compare them, because the topics would have been designed differently for each dataset; moreover, the topics would not have been reliable, since topic modelling requires a large number of documents for informative results and the individual datasets were relatively small. To visualize the results, we used a slightly modified version of the Python library LDAvis. The results allowed us to compare the two areas of aesthetics and describe the width of the gap between them, as well as showcase a digital-tool application in the field of aesthetics.
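The "imputation" step, assigning documents from each dataset to a shared topic map, can be sketched with a toy example. The real study trained Gensim topic models on English Wikipedia; here the topic map is a pair of hand-written keyword lists and the matching rule is simple word overlap, both assumptions made only to show the shape of the procedure.

```python
from collections import Counter

# Toy stand-in for a topic map built elsewhere: topic -> keyword set.
# Topic names and keywords are illustrative, not the study's actual topics.
TOPIC_MAP = {
    "philosophy_of_art": {"beauty", "aesthetic", "judgment", "kant"},
    "visual_culture": {"design", "photography", "style", "image"},
}

def impute(documents, topic_map):
    """Assign each document to its best-overlapping topic and count hits."""
    counts = Counter()
    for doc in documents:
        words = set(doc.lower().split())
        best = max(topic_map, key=lambda t: len(words & topic_map[t]))
        counts[best] += 1
    return counts

corpus = ["Kant on aesthetic judgment and beauty",
          "street photography and image style blogs"]
result = impute(corpus, TOPIC_MAP)  # one document lands in each topic
```

Because every dataset is projected onto the same shared map, the resulting topic counts are directly comparable across datasets, which is exactly the property the text says separate per-dataset models would lack.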
This data-analysis process was completed in June 2017, and an article based on it was submitted to an international journal of aesthetics in August 2017.
The present is a product of the past. This idea is illustrated by an enchanted mirror that reflects the present as a multifaceted image mosaic made from images captured in the past.
The installation is continuously evolving. The set of images used to construct the current experience is composed of earlier encounters. This makes every participant a contributor to future experiences.
The installation consists of a projector, a camera, a computer, and software that maintains the puzzle-like mosaic.
When the software detects motion in the scene (monitored by the camera), an image is stored in a database and split into rectangular subimages. A list of the subimages that have changed since the last generated mosaic is created. The database is then asked for a list of full-scene images that correspond to the changed subimages. To further speed up queries, infrequently used images are periodically purged from the database.
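The tile-matching loop described above can be sketched in miniature. This is a simplified stand-in, not the installation's software: frames here are tiny grayscale grids (lists of rows) rather than camera images, the tile size is arbitrary, and the distance metric is a plain absolute difference.

```python
TILE = 2  # tile edge length in pixels (illustrative)

def tiles(frame):
    """Split a frame into TILE x TILE sub-images keyed by grid position."""
    out = {}
    for r in range(0, len(frame), TILE):
        for c in range(0, len(frame[0]), TILE):
            out[(r, c)] = [row[c:c + TILE] for row in frame[r:r + TILE]]
    return out

def changed_tiles(prev, curr):
    """Positions whose sub-image differs from the last generated mosaic."""
    a, b = tiles(prev), tiles(curr)
    return [pos for pos in a if a[pos] != b[pos]]

def tile_distance(t1, t2):
    """Sum of absolute pixel differences between two sub-images."""
    return sum(abs(p - q) for r1, r2 in zip(t1, t2) for p, q in zip(r1, r2))

def best_match(pos, curr, database):
    """Pick the stored full-scene image whose tile at pos is closest."""
    target = tiles(curr)[pos]
    return min(database, key=lambda img: tile_distance(tiles(img)[pos], target))
```

Only the changed positions are re-queried against the database, which together with purging rarely used images keeps each mosaic update cheap.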
Fiber Optic Ocean is a data-driven interactive installation that composes music. The installation creates unique musical scores dependent on live data, and it conveys the consequences of technology’s invasion of the oceans.
The piece procedurally composes music for trombone and choral voices, generated by live data from sharks and from human use of the internet. The fiber optic cables passing through the sharks blink at a rate based on the speed of live sharks tracked with GPS data, while the fiber optic threads composing the ocean blink based on the speed of the internet, symbolized by the number of tweets per second.
Human beings’ selfish invasion of nature extends to the depths of the oceans. Underwater surveillance cameras have revealed that sharks are drawn to fiber optic cables and bite down on them. One theory is that the magnetic field around the fiber optic cables stimulates the receptors in sharks’ mouths and lures them into perceiving the cables as prey.
The current struggle between sharks and technology corporations is a pristine symbol of the ongoing conflict between nature and culture. The two sides clash nose to nose on a thin fiber optic line.
You are the Ocean, an interactive installation, generates ocean waves and clouds in response to brain waves of a participant. Water, light, clouds, and lightning are realistically simulated by computer code. A participant wears an EEG (Electroencephalography) headset that measures the user’s approximate attention and meditation levels via brain waves. Through relaxation and concentration, the subject can control the water and sky. Attention level affects storminess: With higher concentration, the waves get higher and the clouds thicken. By calming their mind, the subject can create a calm ocean.
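The attention-to-storminess mapping described above might be sketched like this. The numeric ranges and the lightning threshold are illustrative assumptions; only the direction of the mapping (more attention, rougher sea) comes from the description.

```python
# Hedged sketch: a normalized EEG "attention" reading (0.0 .. 1.0)
# driving the simulated sea state. Ranges here are assumptions.
def sea_state(attention):
    wave_height_m = 0.2 + 4.8 * attention  # calm ripples -> storm swell
    cloud_cover = attention                # thicker clouds as focus rises
    lightning = attention > 0.9            # assumed threshold for storms
    return wave_height_m, cloud_cover, lightning

calm = sea_state(0.0)    # a relaxed, meditative participant
storm = sea_state(1.0)   # full concentration whips up the ocean
```

A real EEG headset would feed this function continuously, so the scene drifts between calm and storm as the participant's state changes.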
For the Star Trek II – The Wrath of Khan Genesis Effect scene, Reeves and his colleagues were trying to create a wall of fire spreading out from the point of impact of a projectile on a planetary surface. Every particle in Reeves’ system was a single point in space, and the wall of fire was represented by thousands of these individual points. Each particle carried its own set of attributes.
Extracted from: https://ohiostate.pressbooks.pub/graphicshistory/chapter/19-1-particle-systems-and-artificial-life/ “19.1 Particle Systems and Artificial Life”
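As a rough sketch, a Reeves-style particle is often described as carrying attributes such as position, velocity, color, transparency, size, shape, and lifetime. The exact attribute set and the update rule below are illustrative, not taken from the Genesis Effect code itself.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    """One point in a particle system, with commonly cited attributes."""
    position: tuple
    velocity: tuple
    color: tuple = (1.0, 0.5, 0.1)  # fiery orange (illustrative)
    transparency: float = 0.0
    size: float = 1.0
    shape: str = "point"
    lifetime: int = 60              # frames remaining before the particle dies

def step(particles):
    """Advance one frame: move, fade, and cull expired particles."""
    survivors = []
    for p in particles:
        p.position = tuple(x + v for x, v in zip(p.position, p.velocity))
        p.lifetime -= 1
        p.transparency = min(1.0, p.transparency + 1 / 60)  # gradual fade
        if p.lifetime > 0:
            survivors.append(p)
    return survivors
```

The wall of fire then amounts to emitting thousands of such particles per frame from the impact point and stepping them until their lifetimes expire.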
“Brain Wave Chick V” is a collaborative brain wave concert performance by Mark Applebaum and Paras Kaul. “The Ganglia’s All Here,” designed by Paras and Bill Vitucci, provides an animated background for the performance. This video incorporates 3D computer graphic animation and video motion graphics created using Alias|Wavefront Maya, Alias|Wavefront Composer, Adobe After Effects, and Adobe Photoshop. Roddy Schrock, music composer from Japan, will be technical assistant for the performance.
The neural environment, based on Paras’ research in neural audio imaging, is surrealistic and features sound sculptures, designed and played by Mark. These sculptures are the result of research begun by Mark in 1990. Since that time, he has engaged in the design and construction of sound-sculptures, musical instruments intended for their visual, as well as sonic properties. From the research, he has produced the mousetrap, the mini-mouse, the duplex mausphon, the midi-mouse, and six micro-mice, instruments consisting of junk, hardware, and found-objects mounted on electro-acoustic sound boards. The sound-sculptures are played with the hands, chopsticks, combs, plectrums, and a violin bow. Their sounds are acoustic, electro-acoustic (amplified via piezo contact pickups), and electronic (modified by external signal processors).
These instruments have been employed in “formal” compositions (such as “Zero-One,” performed by Steven Schick in Darmstadt, Germany, and “Scipio Wakes Up,” commissioned by the Paul Dresher Ensemble) as well as improvised works (such as a 1993 collaboration with the Merce Cunningham Dance Company), and the Innova CD Mousetrap Music.
During the performance, the nature of the external signal processing is determined by Paras’ brain waves, as interpreted by a MAX digital audio patch described below. Neural data are provided by Paras in real time via a brain wave interface system configured on her computer. Using this interface to the computer, Paras’ neural activity is transformed into real time MIDI data and transferred to Mark’s computers. The brain wave interface system is IBVA, which utilizes standard EEG monitoring of the neural activity. Amplitudes are converted to MIDI velocities and frequencies are converted to MIDI note numbers.
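The amplitude-to-velocity and frequency-to-note conversion described above can be sketched as follows. The input ranges and the chosen note span are assumptions; real calibration of the IBVA interface depends on the subject and setup.

```python
# Hedged sketch of the described conversion: EEG amplitudes become MIDI
# velocities (0-127) and EEG frequencies become MIDI note numbers.
def eeg_to_midi(amplitude_uv, frequency_hz,
                max_amplitude_uv=100.0, min_hz=0.5, max_hz=30.0):
    # Scale amplitude into the 0-127 MIDI velocity range, clipping peaks.
    velocity = round(127 * min(amplitude_uv, max_amplitude_uv)
                     / max_amplitude_uv)
    # Map the delta-to-beta band (assumed 0.5-30 Hz) onto notes C2-C7
    # (MIDI 36-96), an assumed playable range.
    f = min(max(frequency_hz, min_hz), max_hz)
    note = round(36 + (f - min_hz) / (max_hz - min_hz) * (96 - 36))
    return note, velocity

quiet_delta = eeg_to_midi(0.0, 0.5)   # faint, slow activity -> low, soft note
busy_beta = eeg_to_midi(100.0, 30.0)  # strong, fast activity -> high, loud note
```

Each converted (note, velocity) pair would then be streamed as MIDI to the MAX patch, which decides whether to pass, filter, or transform it.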
A MAX software “patch” examines the brain wave (MIDI) activity. Events may be left unchanged, filtered, distorted, transformed, modulated by other events or tendencies, responded to, etc. By choosing what to play and how, the patch circumscribes the audio aesthetic. The patch is “played” by Mark with a continuous MIDI controller, and by Paras through initial neural activity and by her responses to aural articulations. Variable in this collaboration are the activities of the two individuals as well as the patch; these engender results that vary from probable/stochastic to unpredictable/ random.
The video-projected animation uses symbolism to represent a variety of mental states that are mimicked by Paras’ brain wave switching among frequency domains ranging from high beta through low beta, alpha, and theta to delta. A feedback loop exists between the three computer systems, two operated by Mark and one by Paras. The result of this process is a continuous play of communication between Mark and Paras.
Mark uses two Macintosh G3 computers. One computer will run the MAX patch that receives and modifies the midi data converted from Paras’ IBVA interface. It will trigger Yamaha EXSR, Proteus 2000, Oberheim Matrix-1000, and Kurzweil 1000PX sound modules. The second computer will run the MAX/MSP patch that modifies the signal processing of the sound-sculptures. This includes sound routed through various external processors as well as processing associated within the computer itself (via MSP) and output through a Digidesign 001 interface. External processors include a Lexicon MPXI, Electronix Filter Factory, Yamaha SPXSOD, Ibanez DM1000, Korg SDD-2000, DOD DI2, BBE422A, Roland VP-70, and Roland RE-301. Data on two computers will be modified by a Peavey 1600X midi controller.
The animation provides a moving background for the performance. The frequency of events in the animation, color, and symbology are designed to reflect a variety of mental states. Paras will mimic brain wave states reflected in the animation. During calm sections, her brain wave activity will reflect the low frequencies and low amplitudes of alpha and theta signals. When the animation is chaotic in nature, she will switch brain wave signals to higher beta frequencies and amplitudes. Since the brain wave signaling directly affects the audio, a direct correlation will exist between the audio and visuals.
Using Mississippi swamp imagery as a texture map on the inside of a sphere, Paras Kaul built a 3D model of a generic body. In Alien Encounter, an alien being assists by enabling both rebirth and transformation to occur. The attempt was to break down the hard edges of digital imaging and create an atmospheric, conceptual depiction of the artist’s experiences.
The computer’s instant feedback tells me which elements to reject, which to retain, which to pursue. It seems to me I am not so much working on a computer as with it.
Hdw: IBM PC Sftw: Easel
Genomix is generated through a collaboration between artificial intelligence and humans, using genomic data from species that represent the four epochs of the world: Anthropocene, Capitalocene, Plantationocene, and Chthulucene. Interplays between technologies, nature, and humankind are deliberated by fusing and visualizing hybrid biological identities through algorithmic expression.
A female crucifixion; she falls from a cross and breaks into pieces.
Hardware: Video Toaster Software: Toaster Paint
Hardware: IBM PC Software: Time Arts-Easel
Hdw: ITT Xtra-XL/Targa/Samurai Sftw: Lumena
What is computer art? Numerical concept or physical reality? Is a print of any kind the “piece”? Does the output media matter? If “art” is the image created, is it not then housed in the storage media? What is more valuable then, the disk or the print?
These are the questions that are addressed through La Monalisa Chibcha. The image is a melding of pre-Columbian icons, da Vinci’s Mona Lisa, and a child. Although this particular combination is based on the nickname my father has given my young daughter, the title represents many things. Mostly, I see the connection of the pre-Columbian icon to a present-day Colombian child: the history and culture that we should pass on to our children. The use of da Vinci’s Mona Lisa speaks to the incalculable number of reproductions of that world-famous portrait. The actual merit of the work has been belittled by overexposure. It has become a cliché. This brings me to the issue explored in La Monalisa Chibcha. In a way, the actual image is irrelevant for the asking of the questions.
By creating a digital image, I am using and taking advantage of the many effects and possibilities in image manipulation. My creative decisions are based partly on aesthetic considerations and partly on technical limitations. So far, this does not differ so much from other media. When we are done, “finished”, we then face a dilemma. As digital imaging evolves, we need to investigate and resolve what and how we exhibit. The questions start again. Should we paper the world with duplicates of our creations, allowing the finished work to be controlled in its final “look” by technical and financial limitations? Or do we maintain integrity and create for pure enjoyment, curiosity and ART?
We are losing the elusive quality that tints our human memory when we are visually bombarded by the proliferation of available material.
Hardware: Data General MV4000 Software: Barr-Edwards-Lorig ray tracer
Hdw: Data General MV10000/E&S PS300 Sftw: Getto-Long Ray Tracer
The Civilization of Fruit is a computer graphic journey through an imaginary history. It is part allegory because the history of “fruit” has ascended the evolutionary ladder in a way that reminds us of human experience.
LoopLoop is made from a sequence captured in a train en route to Hanoi. The 1,000 images of this sequence were stitched into one long panoramic image and integrated with other moving elements. Using smooth transitions, animation, and time shifts, the video runs forward and backward, looking for forgotten details, mimicking the way memories are replayed in the mind. Patrick Bergeron modifies and manipulates the image and its details. His work is a mix of animation, experimental film, and documentary. For the last 15 years, he has been working in digital special effects for the film industry, where he has worked on films such as “The Lord of the Rings” and “The Matrix”.
TRANSREC is a haunting look at the nature of transitional spaces, travel theory, and by extension, their relationship to the subconscious abstraction. The environmental desensitization induced by transient spaces suggests that transitioning extends and creates in itself enduring spaces. This phenomenon is founded on the dissolution of physical signifiers which in turn, on the surface, result in seemingly fleeting experiences. In this state, introspection jumps to the foreground, and the mind runs unfettered to the boundaries imposed by individual experience and cognitive knowledge.
Hardware and Software
Adobe Photoshop, Illustrator, After Effects, Premiere Pro, Cinema 4d, Reaktor, Cubase.
Long view is a gesture-based interactive installation that offers the viewer the ability to affect animated elements in a projected space in ways that the artists hope will increase awareness of our fragile and temporary relationship to our planet.
Our piece integrates open-source, physics-based gaming engines in Flash with our own gesture based interactive system that uses the Microsoft Kinect as an input device. The installation allows and encourages viewers to interact with the projected elements by moving their hands and bodies in a natural way. The projected “planet” view exhibits visual and behavioral changes over time and “evolves” as human technology and industrialization advances. Viewers can play with these “ecosystems” to change them in various ways. The piece itself loops and metaphorically creates a conundrum about humanity’s long-term relationship to the earth.
It relates to the SIGGRAPH 2013 Art Gallery theme XYZN: Scale in that it covers vast epochs of time and creates different experiences depending on viewers’ proximity to the projection. We are interested in creating interactive systems and experiences that are intuitive and require no learned grammar. We believe that in the future, gesture-based interactive spaces and experiences will become a common way for individuals and groups to interact with media, environments, and each other.
The One_shot.MGX foldable stool, designed by Patrick Jouin, was produced for the design collection of Materialise MGX. It was created with rapid-prototyping and rapid-manufacturing technologies, and provides an example of the application of these processes within the discipline of industrial design.
The stool is made from polyamide, using a 3D printing technology known as selective laser sintering. The seating surface and the legs of the stool emerge from the machine in “one shot,” as do the hinges, which are concealed within the structure of the stool. With a simple, elegant twist, much like opening an umbrella, the array of vertical elements transforms into a stool.
The idea behind the project fabric | ch vs lab[au] //in electroscape// is to generate a digital content installation and exhibition within an electroscape virtual environment. The process should be finished by the end of the week in San Antonio. Electroscape is a digital experimentation and exhibition structure previously known under the name La_Fabrique (www.fabric.ch/La_Fabrique). As such, it can also be considered an anticipative design structure where radical design questions can be asked.
Two teams of electronic and information architects (fabric | ch and lab[au]) will produce a collaborative and/or antagonist design and thinking around the generic theme of electroscape: digital and mutated landscape, mixed or enhanced reality, information architecture, electrosmog and electromagnetic territories, etc. What are the new memes? The new schemes? What are the new possibilities? And how can technology modify our daily environment? The purpose of this project is to investigate those questions while transforming a pre-existing structure that will be developed as a base for the week in San Antonio: www.electroscape.org will become an open source of ideas, designs and technologies.
Christian Babski and Patrick Keller from fabric | ch will both be in San Antonio to produce Electroscape_B. While the work will be partially prepared in advance, the main idea is to fully produce the project within a week thanks to the long-distance collaboration of fabric | ch (in San Antonio), fabric | ch (in Lausanne) and Lab[au] (in Brussels). A Web site will also display Web cam images of the three locations/ teams as well as the program, day by day, of what will take place in San Antonio.
The different time zones between the three locations will allow the group to have a 16-hour work day! People in Europe will work through their morning and early afternoon, while the group in San Antonio will take over in their morning, perform online tests and discuss the overnight designs with the people in Europe, and then work on their own.
The fact that the final application environment will be a multi-user world will make us try some crash tests of the work in progress and use those crash-test sessions as online meetings between the three teams as well. This will be part of the animation in San Antonio. People can witness the creation and setting up of a real exhibition, including the creation of the content. The difference is that it will be 100 percent digital and distributed.
The 8 Bits or Less series is an outgrowth of my ongoing work in low-res digital photography using wristwatch-based digital cameras, such as the Casio WristCam. The initial print run was a result of an installation created for the New Orleans Contemporary Art Center’s “Digital Louisiana” exhibition. The installation consisted of a single-channel video installation flanked by four large-format composites of images from the video.
To give some background on the entire body of work that centers around 8 Bits or Less, I like to consider the paintings by Gerhard Richter that took video images and usually motion blurred them into unreadability.
Conversely, the use of a gray-scale camera with a resolution of 100 pixels² challenges the artist in terms of subject and readability. This imagery questions the ongoing conversation regarding verisimilitude in digital imagery and its transparency with reality and traditional art techniques. These wristcam images refuse high resolution, they refuse color, they refuse fluid motion, and the work presents this using technologies that were created for the increased fidelity of digital-media representation (digital video and large-format printing).
To the computer, there is seldom a qualitative judgment of the information it processes or contains. Its function is to store and process information. With the use of computer technology, data are encoded into various patterns for reading by scanners and bar code readers that are completely opaque to the human reader and to human valuation. The aesthetic of these codes is strangely compelling, much like that of the Rosetta Stone; it beckons us to try to read the code, but without the proper tools, we cannot. On the back of one’s identification card, any information could be encoded, and it would be difficult to determine what data the card actually contains. Encoder is a 1992 concept that was realized in part in 2003 through the comparison of three sacred texts with three texts that many would call profane, all of which have been encoded into a pixelated DataMatrix format. Three things are striking about these texts. First, although variations can be seen in the data patterns at closer inspection, the patterns placed in context with one another lack visible differentiation, suggesting both a total lack of qualitative value between sets of data in a database and the opaque wall of perception that exists between the human and computational environments. Second, this hints that the downside of security is a potential lack of access to information or context. Lastly, the eerie beauty of the abstracted patterns of information, taken in context with their content, is grounds for reflection.
Six portions of texts considered sacred and profane were encoded with a DataMatrix encoding scheme using commercial label-making software. The resulting images were assembled in a 2×3 matrix in Photoshop, together with their respective titles, and printed on an archival-quality Epson 2200 printer.
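The visual indistinguishability that Encoder trades on can be sketched in a few lines of Python. The code below is an illustrative stand-in, not real DataMatrix encoding (which adds finder patterns and Reed-Solomon error correction): it simply packs a text's bytes into a black-and-white bit grid, showing how very different texts collapse into similarly dense, undifferentiated pixel fields. The sample texts and the 16×16 grid size are arbitrary choices for the sketch.

```python
# Illustrative stand-in for the Encoder process: turn text into an
# opaque bit grid and compare the resulting patterns' bit densities.
# NOT real DataMatrix -- no finder patterns, no error correction --
# only a demonstration of how unlike texts become visually similar
# fields of cells.

def text_to_grid(text, size=16):
    """Pack the text's bytes (cycled to fill the grid) into a
    size x size grid of 0/1 cells, one cell per bit."""
    data = text.encode("utf-8")
    bits = []
    i = 0
    while len(bits) < size * size:
        byte = data[i % len(data)]
        bits.extend((byte >> b) & 1 for b in range(8))
        i += 1
    return [bits[r * size:(r + 1) * size] for r in range(size)]

def density(grid):
    """Fraction of dark cells: a crude measure of visual texture."""
    cells = [c for row in grid for c in row]
    return sum(cells) / len(cells)

if __name__ == "__main__":
    sacred = text_to_grid("In the beginning was the Word")
    profane = text_to_grid("Lorem ipsum dolor sit amet")
    # Both patterns hover near the same mid-range density, so side by
    # side they read as the same undifferentiated noise.
    print(round(density(sacred), 2), round(density(profane), 2))
```

Viewed as images, neither grid betrays which text it encodes, which is the opacity the work addresses.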
The ongoing Made In China series takes an event from art history and combines it with elements of networked society and globalization. The works were created as art objects suitable for the gallery and museum industry, whose funding base includes corporate officers responsible for global outsourcing and the resulting wage deflation. The series seeks to maximize the artist's return on investment (ROI) by employing Chinese copyist ateliers, short-circuiting part of the capital outflow to Asia by obtaining retail pricing for the artist while providing some compensation for workers in other countries. The historical precedent for this work comes from Ludwig Mies van der Rohe, who gave directions for the construction of a work to a sign shop over the phone. In the internet/globalist age, the sign shop is no longer around the corner; it is in any of the developed or developing countries. The artist no longer needs the "school" or atelier model; the atelier is now a just-in-time online reseller of ironic, physically repainted digital copies, to be reworked by the artist, mounted, and hung. Currently, there are 12 works in this series.
Pixelboxes is deceptively simple in appearance but is an experiment in emergent behavior. The piece consists of a grid of 36 color-changing LEDs, each containing a very small microprocessor. When powered up simultaneously, the LEDs all begin as red. But because of minuscule differences introduced in the manufacturing process, their timing drifts apart, and the grid of LEDs creates shifting patterns of red, green, and blue. The result is a study in complex interactions presented as a visual display.
Conceptually, Pixelboxes creates "characters" or "calligrams" (to borrow Foucault's term) that hint at a legible symbol but never quite arrive at one. Pixelboxes is also informed by John Simon's Every Icon, which cycles through every possibility in a 32 × 32 pixel grid, creating every icon imaginable.
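The mechanism behind Every Icon can be made concrete with a short sketch: treat an n × n black-and-white grid as one large binary counter, so that incrementing the counter steps through every possible icon exactly once. Simon's 32 × 32 grid has 2^1024 states, far beyond any practical runtime, so the sketch below uses a tiny grid to make the mechanism visible; the function names are illustrative, not from Simon's work.

```python
# Sketch of the "Every Icon" enumeration: an n x n grid of 0/1 cells
# is just the binary expansion of a counter, so counting from 0 to
# 2**(n*n) - 1 visits every possible icon exactly once.

def icon(counter, n):
    """Decode counter into an n x n grid of 0/1 cells (bit 0 = top-left)."""
    return [[(counter >> (r * n + c)) & 1 for c in range(n)] for r in range(n)]

def every_icon(n):
    """Yield every n x n icon by counting through all 2**(n*n) states."""
    for counter in range(2 ** (n * n)):
        yield icon(counter, n)

if __name__ == "__main__":
    icons = list(every_icon(2))   # a 2 x 2 grid has only 16 possible icons
    print(len(icons))             # -> 16
    print(icons[0])               # -> [[0, 0], [0, 0]]  (all white)
    print(icons[-1])              # -> [[1, 1], [1, 1]]  (all black)
```

At n = 32 the same loop would simply never finish, which is part of the conceptual point of the original work.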
Pixelboxes consists of a laser-cut sculpture and lattice that holds the 36 color-changing LEDs, each section representing a "pixel" in the 6 × 6 icon. While each LED has a preset pattern of sequential flashes, differences in the processors inside each RGB LED create slight differences in timing. In addition, the differing power requirements of the red, green, and blue elements cause further instability in the timing of the circuit.
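The drift described above can be simulated in a few lines. In the sketch below, every cell runs the same red-green-blue cycle, but each unit's clock period is perturbed by a small manufacturing tolerance; the ±1% figure, the cycle period, and the seed are all assumptions for illustration, not measurements of the actual hardware.

```python
# Simulation of why Pixelboxes drifts out of sync: identical programs,
# slightly different clocks. The +/-1% tolerance is an assumed figure.
import random

COLORS = ("red", "green", "blue")

def make_grid(n=36, base_period=1.0, tolerance=0.01, seed=7):
    """Give each of n cells its own slightly-off cycle period, as
    real mass-produced LED controllers would have."""
    rng = random.Random(seed)
    return [base_period * (1 + rng.uniform(-tolerance, tolerance))
            for _ in range(n)]

def colors_at(periods, t):
    """Color shown by each cell at time t: which step of its own
    three-step cycle it has reached."""
    return [COLORS[int(t / p) % 3] for p in periods]

if __name__ == "__main__":
    periods = make_grid()
    early = colors_at(periods, 0.5)
    late = colors_at(periods, 500.0)
    print(len(set(early)))  # -> 1: shortly after power-up, all cells agree
    print(len(set(late)))   # later, accumulated drift yields a mixed pattern
```

Shortly after power-up every cell shows the same color, but after a few hundred cycles the accumulated phase error scatters the grid into a mixed field, which is the emergent pattern the piece displays.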