Saturday, June 14, 2008

A Tale of Two Masters

First napkin sketch of the user interface

The intermediate stage of the motion control system has closely followed
the original concept, although it has of course been greatly improved.

I would like to quote from the introduction to my Master's Thesis "Roboethics and Performance":

"Being as we are at the verge or perhaps in the middle of that threshold or bifurcation of our own destiny, it is hard if not impossible to observe with parsimony the course of action. In that liminal state where turbulence becomes organized into myriad eddies and flows and the “singularity” approaches, every theory or hypothesis seems to have some validity since the information or data we analyze is particular to our point of view and seldom encompasses a bigger picture. Gregory Bateson, reminiscing about the Macy conferences, goes further by suggesting that we never know the world as such. He states that “We are our epistemology” since we only perceive and understand the world through what our sensory apparatus allows.

However the birthing environment of the open network where data itself acts as a “controlling agent” is beginning to show a pattern that is itself fed back into the system, and once the critical threshold is achieved, some theorists suggest it will give rise to the emergence of a machinic consciousness.

The ability to enhance our body and mind utilizing radical nanotechnology beyond therapeutic practices, and the recent advances in bioinformatics and prosthetics, as Katherine Hayles suggests, open the discourse about the posthuman condition. All the “ecological variety” at the dawn of this new era, with the consequent interbreeding, is creating an explosion of possible species, raising profound ethical questions and forcing us to rethink our position in the evolutionary sphere."

As some recent studies of the brain suggest*, there are not one but two centers of command and control, which work independently and are effectively cut off from each other, even though both work toward a common goal.

The software that Philip Forget and I have designed works as one of those control centers. The other stems from the independent actions of the performer, who, by interacting with the environment under surveillance by the Creator, produces a synergistic feedback loop.

I am very excited by the software part of the project. It is, in my biased opinion, the best tool that I have seen for robotic puppetry control (not that there are many; most are in research projects like mine). Although some closed systems, like National Instruments' LabVIEW, were briefly considered at first to control servos based on image processing and other sensor input, the price and the steep learning and development curve ruled them out early on, not to mention my dislike of anything that is not open source.

When Philip learned that I was using the MAKE microcontroller to drive my project and suggested that we use Flash as the controlling software, I was immediately skeptical. Not having used Flash in a couple of years, I was not aware of recent developments.

He was, of course, referring to FlashDevelop, an open source ActionScript 2/3 and web development environment that integrates seamlessly with the Adobe product. This has allowed us to implement exactly what we wanted in record time, and in a very efficient and elegant way.

Our system is essentially a motion-sound-image-video-light sequencer/controller, driven in this case by image processing, but it could just as well be driven by any other sort of input. The heart of the system is a MAKE microcontroller, which is controlled directly by the software right out of the box.
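To give a feel for the software-to-board link, here is a minimal sketch in TypeScript. The address format and the transport are my own illustrative assumptions, not the MAKE Controller's documented protocol; the point is only that each servo is addressed by channel and handed a target value:

```typescript
// A function that delivers a message to the board over whatever
// transport is in use (serial, network, etc.).
type SendFn = (msg: string) => void;

// Format a servo command as a simple address/value message.
// The "/servo/<channel>/position <value>" shape is hypothetical,
// chosen only to illustrate channel-addressed control of eight servos.
function setServo(send: SendFn, channel: number, position: number): void {
  if (channel < 0 || channel > 7) {
    throw new Error("servo channel out of range (0-7)");
  }
  send(`/servo/${channel}/position ${Math.round(position)}`);
}
```

In use, `send` would wrap the actual connection to the board; during development it can simply log or collect messages, which makes the sequencer testable without hardware attached.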

The screen capture above shows the position editor on the left and the corresponding key frame editor on the right. Each position is a collection of expressive movements achieved by "training" the marionette and recording the position, travel and speed of a group of eight servos. The sliders on the top right control, and feed back, both position and speed for each individual servo. The block below houses the smaller sliders for light control as well as the sound control. The placeholder image is an automatic capture of the marionette position.
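The data model described above could be sketched roughly as follows. All names here are hypothetical, my own shorthand for the idea of a recorded pose rather than the software's actual structures:

```typescript
// One servo's recorded state within a position.
interface ServoKey {
  channel: number;  // which of the eight servos (0-7)
  position: number; // recorded target position
  speed: number;    // how fast to travel toward the target
}

// A "position": an expressive pose recorded by training the marionette,
// together with the light and sound state that accompany it.
interface Position {
  name: string;
  servos: ServoKey[];       // one entry per servo, eight in this rig
  lights: number[];         // levels for the smaller light sliders
  soundCue: string | null;  // sound attached to this key frame
}

// Record a pose from the current slider values, one slider per servo.
function recordPosition(name: string, sliders: number[], speed: number): Position {
  return {
    name,
    servos: sliders.map((pos, channel) => ({ channel, position: pos, speed })),
    lights: [],
    soundCue: null,
  };
}
```

Keeping lights and sound inside the same record is what lets a single key frame on the timeline represent a complete moment of expression rather than servo motion alone.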

When played, the timeline runs through all the movements, light states and sounds that correspond to each moment of expression. The modules missing in the screen above are the library module and the main GRID, which is the area under surveillance by the camera; it triggers actions defined by both the timing and the position of the performer or other beacon-carrying object (the performer wears an infrared beacon).
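The GRID triggering idea can be sketched like this. The grid dimensions, camera resolution and cue fields are illustrative assumptions, not the project's actual values: the camera reports where the infrared beacon is, that point is mapped to a grid cell, and a cue fires only when both the cell and the timing window match.

```typescript
interface Cell { col: number; row: number; }

// A cue armed by one grid cell during one window of the performance.
interface Cue {
  cell: Cell;          // grid cell that arms this cue
  windowStart: number; // earliest time (ms) the cue may fire
  windowEnd: number;   // latest time (ms) the cue may fire
  action: string;      // e.g. the name of a recorded position to play
}

// Map a camera-space beacon coordinate to a grid cell.
function toCell(x: number, y: number, width: number, height: number,
                cols: number, rows: number): Cell {
  return {
    col: Math.min(cols - 1, Math.floor((x / width) * cols)),
    row: Math.min(rows - 1, Math.floor((y / height) * rows)),
  };
}

// Return the cues that should fire for this beacon sighting,
// assuming a 640x480 camera image divided into an 8x6 grid.
function firedCues(cues: Cue[], x: number, y: number, timeMs: number): Cue[] {
  const cell = toCell(x, y, 640, 480, 8, 6);
  return cues.filter((c) =>
    c.cell.col === cell.col && c.cell.row === cell.row &&
    timeMs >= c.windowStart && timeMs <= c.windowEnd);
}
```

Requiring both conditions is what makes the performer a genuine second control center: the same spot on stage can mean different things at different moments of the piece.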

It is our intention to release the software, once completed, as an open source project under a Creative Commons license.

*Washington University School of Medicine (2007, June 21). Brain's Voluntary Chain-of-command Ruled By Not One But Two Captains. ScienceDaily. Retrieved June 15, 2008, from /releases/2007/06/070619134802.htm
