
interactive

 



Spacelife
VR environment / installation
2022


The Spacelife VR project draws freely on reflections about life in space, without retracing those concepts in a strict sense. It is an abstract audiovisual environment, explorable with virtual reality technology, that presents a series of objects made up of particle systems. Participants can immerse themselves in a visually rich and dynamic world and discover perceptually unique points of view that can lead back to an interior space.

Concept
New technologies may help humans become more fit for life in space. However, other life forms have a better chance of surviving there: tardigrades [1], cyborg bacteria [2] and xenobots [3]. Scientists could modify existing creatures or create new ones so that they can survive in deep space for a long time. It would then be possible to launch a swarm of light-propelled femto probes containing creatures, or perhaps elements able to generate life on a celestial body [4]. This is a non-anthropocentric point of view: the fundamental thing is that life, and not just human life, can continue beyond planet Earth.

Notes
[1] Tardigrades survive exposure to space in low Earth orbit, sciencedirect.com, 2008
[2] ’Cyborg’ bacteria deliver green fuel source from sunlight, bbc.com, 2017
[3] Meet the xenobot: world’s first living, self-healing robots created from frog stem cells, cnn.com, 2020
[4] A New Physics Theory of Life, quantamagazine.org, 2014

Bibliography
Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, George M. Church and Ed Regis, Basic Books, 2014
Origins: The Scientific Story of Creation, Jim Baggott, Oxford University Press, 2015
The Emergence of Life: From Chemical Origins to Synthetic Biology, Pier Luigi Luisi, Cambridge University Press, 2016
The Vital Question, Nick Lane, Faber and Faber, 2016

Videography
UVM and Tufts Team Builds First Living Robots, youtube.com, 2020
Living robots created as scientists turn frog cells into ‘entirely new life-forms’, youtube.com, 2020


Photos by Armando Rebatto
Video by Studio Vertov


Slabs
Audiovisual installation. Plexiglas slabs with polished steel boxes, sensors, audio players and magnetostrictive devices
2010-13

Sounds by Stefanie L. Ku

Realized with the co-operation of Enrico Pellegrini


This audiovisual installation is composed of six vibrating plexiglas slabs with which the audience can interact. The slabs, suspended by steel cables, are printed on one side and laser-engraved on the other. In addition, they are lit both by spotlights and by LED light strips placed on top of each slab and hidden by a polished steel box, so that the engraved parts are highlighted.

Inside each box two devices transform an audio signal into vibrations by means of a principle known as magnetostriction. The whole plexiglas surface then becomes a musical instrument and contributes to the aural environment, making this installation a space defined by real audiovisual objects. The sound emitted becomes more intense as the viewer approaches each slab: in this way people can control its volume, thus changing the overall musical balance.
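As a rough illustration of the proximity control described above, the sketch below maps a viewer's distance from a slab to a playback gain. It is a minimal Python sketch, assuming a per-slab distance reading in meters and a simple linear ramp; the thresholds and scaling are illustrative, not the installation's actual values.

```python
# Hypothetical sketch: map a viewer's distance from a slab to a gain in [0, 1].
# The near/far thresholds are assumptions, not the installation's real settings.

def slab_gain(distance_m, near=0.3, far=2.5):
    """Full volume when closer than `near`, silence beyond `far`,
    with a linear ramp in between."""
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return 1.0 - (distance_m - near) / (far - near)

# The overall musical balance follows where the audience stands.
distances = {"slab_1": 0.5, "slab_2": 1.8, "slab_3": 3.0}
print({slab: round(slab_gain(d), 2) for slab, d in distances.items()})
```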

Technical sheet 1
Technical sheet 2

Read about it on Ba3ylon.


KL
Interactive installation. White table with digital prints on plexiglas
180 x 50 cm.
2009

X
Audiovisual game
Variable length
Quadraphonic, 44 kHz, 16 bit
Color, XGA and WXGA, variable fps, 32 bit
2005

Music by Matteo Franceschini
Soprano Silvia Spruzzola, Violin Barbara Pinna
Production AGON

System requirements:
Software: OS X 10.3.9, QuickTime 7.0
Hardware: Apple Dual PowerPC G4 1 GHz, 256 MB RAM, 32 MB VRAM, XGA LCD touch screen, WXGA display, MOTU 828, mixer, amplifier, 4 speakers

The interactive installation X was commissioned by AGON for the “Festival Iannis Xenakis”, held by Milano Musica at the Milan Triennale, October 27th – 30th, 2005.

X is inspired by the work and thought of the Greek composer Iannis Xenakis. Among all the concepts and ideas introduced by Xenakis, we have chosen one that seemed absolutely contemporary, especially in the digital domain: the idea of a game/composition.

Videogames are played everywhere, as everybody knows. Often, though, they offer little cultural value. We like to think instead that it is possible to create interesting visual and musical architectures starting from videogame technology.

Among all the possible games, we have chosen “memory”, because it is well known and easy to play.

In this game the user has to find eight pairs of images distributed randomly on a grid. Every pair found generates an audiovisual event that is shown on a big screen and heard through quadraphonic speakers. When all pairs are found, the composition is complete, but it is created each time in a different order, according to which pairs are found first. Once complete, it is possible to continue to the next level. There are three levels, each one with a different work.
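To make the mechanics concrete, here is a compact sketch of that game logic in Python: eight pairs are shuffled onto a grid, and each pair found triggers an audiovisual event, so the finished composition is assembled in a different order every run. Function names such as trigger_event are illustrative and are not taken from the installation's actual software.

```python
import random

def build_grid(n_pairs=8):
    """Return a shuffled list of 2 * n_pairs cards (two cards per image id)."""
    cards = list(range(n_pairs)) * 2
    random.shuffle(cards)
    return cards

def trigger_event(image_id):
    # In the installation this would start an animated sequence on the big
    # screen together with its sound on the quadraphonic system.
    print(f"audiovisual event for pair {image_id}")

def play(grid):
    """Simulate a game: pairs are found in a random order (standing in for
    the player's choices), and that order shapes the composition."""
    pairs = sorted(set(grid))
    random.shuffle(pairs)   # which pair turns up first varies every game
    for image_id in pairs:
        trigger_event(image_id)
    return pairs            # all pairs found: the level's composition is complete

play(build_grid())
```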

Another side of Xenakis’ work that has greatly influenced X is the creation of granular events: events that are extremely short but produced in huge quantities, forming clouds of singularities managed with statistical methods.

The idea derives from the physicist Dennis Gabor, Nobel laureate for the invention of holography, who in the late 1940s demonstrated experimentally that it is possible to generate a continuous sound starting from many discrete micro sounds, i.e. micro aural “frames”. The idea was then brought to music by Xenakis, and transposed into the digital music domain by Curtis Roads and into computer graphics by Bill Reeves.

The images of X are entirely made with visual granular synthesis, that is, by putting together many particles to create irregular, changing shapes. The particles are generated in real time.

This composition technique, along with others introduced by Xenakis, such as glissandi, is also used in the musical part.
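The grain idea itself is simple to sketch. The fragment below, a minimal Python/NumPy illustration rather than the actual software of X, sums many very short enveloped sine bursts scattered statistically in time, so that a cloud of micro sounds fuses into a continuous texture; grain length, density and frequency range are arbitrary example values.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def make_grain(freq, dur=0.02):
    """One micro sound: a short sine burst shaped by a Hann window."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def grain_cloud(length_s=2.0, density=400, freq_range=(200.0, 2000.0)):
    """Scatter `density` grains per second at random times and frequencies."""
    out = np.zeros(int(SR * length_s))
    rng = np.random.default_rng(0)
    for _ in range(int(density * length_s)):
        grain = make_grain(rng.uniform(*freq_range))
        start = rng.integers(0, out.size - grain.size)
        out[start:start + grain.size] += grain
    return out / np.max(np.abs(out))  # normalize the summed cloud

cloud = grain_cloud()  # a continuous sound built from thousands of micro "frames"
```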

Adriano Abbado – Matteo Franceschini
n-grains
Interactive environment
Stereo
Black & White, XGA, variable fps
2005
n-grains is a follow-up to the previous work flussi. The goal is similar: to create an interactive audiovisual work where the user controls the amount of audio and video information received, in order to achieve a balanced audiovisual experience.

The user manages the flow of information with hand movements. The movement is detected by two infrared sensors and transmitted to the software, which in turn produces the appropriate images and sounds.

There are two major differences compared to the previous work. First, flussi used predefined QuickTime video and audio movies, whereas in n-grains images and sounds are generated in real time, even though certain parameters are predefined. The visual part is created with a particle system and the music is produced with granular synthesis.

Secondly, in flussi the hand movements triggered the sequences in a discrete manner, whereas in n-grains there is a continuous flow of events.

Clearly, the solution offered by n-grains is more flexible and more appealing. It’s a different solution to the same problem.

The first step in this new direction was to understand how to generate a flow of visual particles and audio grains in real time. In addition, it was necessary to control the flow by means of sensors.

After extensive research, it turned out that the best solution was to create a particle system within Max/MSP/Jitter, coupling the patch with one of the available granular synthesis patches. The sensors used are the same ones as in flussi.

The patch is a modified version of the particle_primitives example found in the Max/MSP package, adapted so that it can work with a PICT file that includes an alpha channel. A QuickTime movie file, also with an alpha channel, can be used instead. These files define the single particle that is drawn, and they have to be crafted properly in order to produce a reasonable result. This way the particle generator can create more appealing visuals than plain dots.

A grain object, made by Nathan Wolek, was also added, providing the capability of generating audio grains. As with the particle system, a sample file is used to generate the stream of grains.

When dealing with particle systems and granular synthesis, there are a number of parameters that have to be controlled. n-grains uses the sensors to control only the number of particles/grains, since that is its goal. All other parameters are changed using traditional devices.
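A minimal sketch of that single point of control, assuming a normalized sensor reading and arbitrary maximum counts (neither comes from the actual Max/MSP/Jitter patch):

```python
# Hypothetical mapping: the sensor sets only how many particles and grains are
# produced; every other parameter keeps its preset value.

MAX_PARTICLES = 2000        # assumed ceiling for the particle system
MAX_GRAINS_PER_SEC = 500    # assumed ceiling for the grain stream

def counts_from_sensor(sensor_value, sensor_max=255):
    """Map a raw infrared sensor reading to a particle count and a grain rate."""
    amount = max(0.0, min(1.0, sensor_value / sensor_max))
    return int(amount * MAX_PARTICLES), int(amount * MAX_GRAINS_PER_SEC)

print(counts_from_sensor(64))    # a sparse flow
print(counts_from_sensor(255))   # full density
```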

flussi
Interactive audiovisual environment
Stereo, 44 kHz, 16 bit
Black & White, XGA, 30 fps, 8 bit, Animation compression
2002
flussi was created in the summer of 2002 at C&CRS, Loughborough University, UK (now the Creativity & Cognition Studios at the University of Technology, Sydney).

Audiovisual works often suffer from a common problem: the focus spontaneously centers on the images more than on the music. In order to create a more balanced work, the stream of events has to be reorganized so that the music gets as much importance as the images. However, the perception of audiovisual events is personal. As a consequence, flussi offers the user the chance to control this balance, using one hand for audio and the other for video. The resulting images are video projected, while the sound is output through traditional stereo speakers.

In flussi, QuickTime movies are coupled with QuickTime sounds, while the interaction is controlled by the user’s hands, their distance being detected by two infrared sensors. By moving each hand, the user can control the rate of each stream, video and audio. The whole environment is managed by a Max/MSP/Jitter patch.

flussi is made of a series of sequences, or modules. After the first module is done, the next one is executed and so on.

In each module there are 9 sounds and 4 videos already made. This is what happens:

For instance, a module can last 15 seconds. The sounds’ durations might be 3, 1, 2, 1, 3, 1, 1, 2 and 1 seconds, and the videos’ durations might be 4, 3, 5 and 3 seconds.

If the sensor value is 2 there are 9 sounds; if it is 1, 7 sounds; if it is 0, 5 sounds; if it is -1, 3 sounds; and if it is -2, 1 sound. For the videos the mapping is 2/4, 1/3, 0/2, -1/1, -2/0. In the “worst” case there is only 1 sound and no video, while in the “best” case all 9 sounds and 4 videos play, with every combination in between.
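Written out as code, the mapping looks like this (the dictionary values come straight from the description above; only the thin Python wrapper is added for illustration):

```python
SOUND_COUNT = {2: 9, 1: 7, 0: 5, -1: 3, -2: 1}   # sensor value -> sounds played
VIDEO_COUNT = {2: 4, 1: 3, 0: 2, -1: 1, -2: 0}   # sensor value -> videos played

def module_density(audio_sensor, video_sensor):
    """How many of a module's 9 sounds and 4 videos are played."""
    return SOUND_COUNT[audio_sensor], VIDEO_COUNT[video_sensor]

print(module_density(-2, -2))  # "worst" case: 1 sound, no video
print(module_density(2, 2))    # "best" case: all 9 sounds and 4 videos
```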

flussi diagram

A screenshot of the patch’s final version
colori
Interactive environment
6.5 MB, variable length
Stereo, 44 kHz, 16 bit, Shockwave compression 64 kbit/s
Color, SVGA, 25 fps, 24 bit
2002

System requirements:
Apple: OS 8.6 – OS X, G3 800 MHz, 128 MB RAM, 800 x 600 display, millions of colors, 32 MB VRAM, OpenGL 1.1.2, 8x CD-ROM drive.
Windows: 98-ME-NT-2000-XP, Pentium III 800 MHz, 128 MB RAM, 800 x 600 display, millions of colors, 32 MB VRAM, DirectX 5.2, 8x CD-ROM drive, DirectSound compatible card, speakers.

To watch this work, please click here to download the Shockwave movie (4.2 MByte)

colori features an interactive 3D environment the user can explore. The work was conceived just after oggetti.

The world of colori is made of seven objects and seven sounds. Each sound is related to one object: as the user moves around, the sound changes accordingly, including stereo panning (Macintosh version only).
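The position-dependent mix can be sketched roughly as follows; this is an illustrative Python simplification (a flat 2D layout, a linear distance falloff and a sine pan law are all assumptions, not the behavior of the original Shockwave version):

```python
import math

def object_mix(listener_xy, object_xy, max_dist=20.0):
    """Return (gain, pan) for one object's sound; pan runs from -1 (left) to 1 (right).
    The listener is assumed to face along the +y axis."""
    dx = object_xy[0] - listener_xy[0]
    dy = object_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    gain = max(0.0, 1.0 - dist / max_dist)   # quieter as the object gets farther
    pan = math.sin(math.atan2(dx, dy))       # objects to the listener's right pan right
    return gain, pan

print(object_mix((0.0, 0.0), (3.0, 4.0)))
```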

This work is related to the world of games. The interaction is based on a gamepad, a device normally used in video games. This way anybody can use colori at home or while traveling.

At first the objects are mostly black and white and partially transparent. Moving to the very center of the inner object triggers a new view of the world: it becomes colorful and all sounds are produced together.

By clicking the mouse button while in color mode, the user can switch to another set of colors. Another click will switch back to the original set.
oggetti
Interactive 3D environment
1.6 GB, variable length
Stereo, 44 kHz, 16 bit
Color, SVGA, 15/30 fps, 24 bit, Animation compression
2002

System requirements:
Apple: OS 8.6 – OS X, G3 800 MHz, 128 MB RAM, 800 x 600 display, millions of colors, 32 MB VRAM, OpenGL 1.1.2, 8x CD-ROM drive.
Windows: 98-ME-NT-2000-XP, Pentium III 800 MHz, 128 MB RAM, 800 x 600 display, millions of colors, 32 MB VRAM, DirectX 5.2, 8x CD-ROM drive, DirectSound compatible card, speakers.
You can watch a reduced version of this work. Only the interactive level is included: clicking on an object produces no result. Click here to download the Shockwave movie (4.2 MByte).

oggetti features an interactive 3D environment composed of 18 abstract objects. Navigation and scene manipulation allow the user to create their own views: the user can dolly, pan and rotate the camera, as well as rotate and move each object. Clicking on an object triggers events chosen from a set of 36 animated sequences and 7 sounds. These events are not interactive. Some images taken from the sequences can be seen in the stills section of this website.

The 3D objects appear shaded, wireframed or rendered as points, depending on their type: shaded objects trigger animated sequences that represent the artist’s view of that object, wireframed objects trigger animations that are totally abstract, and point objects represent sounds.

This work is inspired by flight simulators and is a first attempt to use video game technology within an artistic framework.

artevideo.com
interactive website
2000
In this website the user can experience three interactive video installations and three QuickTime VR panoramas.
Genetic Art
Interactive software
1997-99

This piece of software creates images based on the genetic concepts of cross-over and mutation. Users can save images into a repository and then use them again as parents.

Software written by Marco Stefani.
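For readers curious about the mechanism, below is a minimal, generic Python sketch of crossover and mutation applied to image "genomes" represented as parameter lists. It is not the credited software; the representation, rates and repository handling are assumptions for illustration only.

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: the child takes a head from one parent
    and a tail from the other."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05):
    """Each gene is replaced by a random value with probability `rate`."""
    return [random.random() if random.random() < rate else g for g in genome]

repository = []                            # images saved by users, reusable as parents
parent_a = [random.random() for _ in range(16)]
parent_b = [random.random() for _ in range(16)]
child = mutate(crossover(parent_a, parent_b))
repository.append(child)                   # the child can later be picked as a parent
```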

Clouds
Interactive environment
Stereo
1989
In the interactive environment Clouds a DataGlove was used to control audiovisual events that were previously made with my software AV, running on a Macintosh II. It was possible to navigate in an audiovisual hyperspace with a hand, going through several different audiovisual cells, and thus creating a sort of interactive abstract film.

The DataGlove, developed by VPL Research Inc., was considered a very flexible interaction tool, and in my opinion an ideal instrument for controlling audiovisual events. Indeed, the DataGlove was neither a musical instrument, like a keyboard, nor a pictorial tool, like a tablet. It was a neutral device.

The hyperspace could be imagined as a 2-meter-wide cube, located in the real world and made of little cells. Each cell, located along the three axes x, y, z, “contained” a micro sequence (0.5 to 4 seconds) of audiovisual events: animations and synthetic sounds. Adjacent cells were related to one another: this concept was absolutely fundamental, because without spatial coherence among cells the navigation would have been meaningless.
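The cell lookup can be pictured with a short sketch: the glove position inside the 2-meter cube selects one cell, and each cell holds a micro sequence. The grid resolution below is an assumption; the original cell size is not documented here.

```python
CUBE_SIZE = 2.0      # meters, as described above
CELLS_PER_AXIS = 8   # assumed resolution of the cell grid

def cell_index(hand_xyz):
    """Map a hand position (meters, 0..2 on each axis) to integer cell indices."""
    return tuple(
        min(CELLS_PER_AXIS - 1, int(coord / CUBE_SIZE * CELLS_PER_AXIS))
        for coord in hand_xyz
    )

# Adjacent cells hold related micro sequences, so small hand movements
# produce coherent audiovisual transitions.
print(cell_index((0.1, 1.0, 1.9)))  # -> (0, 4, 7)
```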

