YCAM

RAM (“Reactor for Awareness in Motion”) is a research project for developing tools for dance creation and education. It was held at the Yamaguchi Center for Arts and Media (YCAM), where I stayed for two days.

The performers were experimenting with the surrounding environment in a non-predetermined way, using the RAMDanceToolkit by YCAM InterLab (Yoshito Onishi, Satoru Higa, Motoi Shimizu, and Kyle McDonald).



Instead of dancing with each other according to a choreography, they were mostly acting like a “human artificial intelligence”: each of their moves triggered a sound or visual element, which then responded to their bodies in a feedback loop.

It was also a chance to visit the YCAM library and the city of Yamaguchi:


fantômes

While working on a custom GUI addon for openFrameworks, I wanted to try mesh transformations in oF, which led to this series of abstract, ghost-like visuals.

So far, the GUI lets you switch between color modes, control the amount of noise applied to each model, scale the size of the shapes and set the camera orientation.
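Since ofxGrainCtrl isn’t released yet, here is a minimal sketch of how those controls could be wired up using the stock ofxGui addon; the parameter names (colorMode, noiseAmount, shapeScale, camYaw, camPitch) are placeholders, not the actual addon’s API.

// minimal sketch using the stock ofxGui addon -- not the actual ofxGrainCtrl API
#include "ofMain.h"
#include "ofxGui.h"

class GuiSketch : public ofBaseApp {
public:
    ofxPanel gui;
    ofParameter<int>   colorMode;   // placeholder: switch between color modes
    ofParameter<float> noiseAmount; // placeholder: noise applied to each model
    ofParameter<float> shapeScale;  // placeholder: size of the shapes
    ofParameter<float> camYaw;      // placeholder: camera orientation
    ofParameter<float> camPitch;

    void setup(){
        gui.setup("fantomes");
        gui.add(colorMode.set("color mode", 0, 0, 2));
        gui.add(noiseAmount.set("noise", 0.2, 0.0, 1.0));
        gui.add(shapeScale.set("scale", 1.0, 0.1, 10.0));
        gui.add(camYaw.set("cam yaw", 0, -180, 180));
        gui.add(camPitch.set("cam pitch", 0, -90, 90));
    }

    void draw(){
        gui.draw();
    }
};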

I will be posting more info on ofxGrainCtrl soon; I’m aiming for an experimental user interface with generative aspects rather than a simple, clean UI.

The “fantômes” 3D models are generated in Cinema 4D, then converted to meshes in openFrameworks so that noise amplitude and speed damping can be applied to the vertices.
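As a rough idea of that step, here is a minimal sketch that loads an exported model with ofxAssimpModelLoader and displaces its vertices with 3D noise; the file name, noise scale and damping values are placeholders, not the actual project code.

// sketch: displace the vertices of a Cinema 4D export with 3D noise
#include "ofMain.h"
#include "ofxAssimpModelLoader.h"

ofxAssimpModelLoader loader;
ofMesh baseMesh;   // original vertices, kept untouched
ofMesh ghostMesh;  // displaced copy that gets drawn

void setupGhost(){
    loader.loadModel("fantome.dae");   // placeholder file name
    baseMesh  = loader.getMesh(0);
    ghostMesh = baseMesh;
}

void updateGhost(float noiseAmount){
    float t = ofGetElapsedTimef() * 0.25;   // dampened speed of the noise field
    for (size_t i = 0; i < baseMesh.getNumVertices(); i++){
        ofVec3f v = baseMesh.getVertex(i);
        ofVec3f offset(ofSignedNoise(v.x * 0.01, v.y * 0.01, v.z * 0.01, t),
                       ofSignedNoise(v.y * 0.01, v.z * 0.01, v.x * 0.01, t),
                       ofSignedNoise(v.z * 0.01, v.x * 0.01, v.y * 0.01, t));
        ghostMesh.setVertex(i, v + offset * noiseAmount);
    }
}

void drawGhost(){
    ghostMesh.drawWireframe();
}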



Kode9 - H10 movie clip

Recently I was contacted by Dazed and Confused Magazine to make the video for Hyperdub artist and label director Kode9.

I was asked to make a video for the H10 mixtape, which is around 10 minutes long, using only one image file and without changing any of its colors.



The main idea was to make a sound-reactive app using the image file as a plane texture. I split the file into 12 different layers and mapped each of them onto a separate plane. All of the planes shared the same origin position in the oF app; by applying a wave distortion and mapping the texture onto the plane vertices, I was able to make the layers react to the sound at different amplitudes.
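For one layer, the wave distortion could look roughly like the sketch below; the file names, FFT band and distortion constants are placeholders rather than the values used in the actual app.

// sketch: one of the twelve layers, distorted by the track's sound level
#include "ofMain.h"

ofImage layer;
ofSoundPlayer track;
ofPlanePrimitive plane;

void setupLayer(){
    layer.load("layer_01.png");   // placeholder file names
    track.load("h10.mp3");
    track.play();
    plane.set(layer.getWidth(), layer.getHeight(), 40, 40);
    plane.mapTexCoordsFromTexture(layer.getTexture());
}

void updateLayer(){
    ofSoundUpdate();
    float level = ofSoundGetSpectrum(32)[2];   // rough low-band energy
    ofMesh& mesh = plane.getMesh();
    for (size_t i = 0; i < mesh.getNumVertices(); i++){
        ofVec3f v = mesh.getVertex(i);
        // sine wave travelling across the plane, scaled by the sound level
        v.z = sin(v.x * 0.02 + ofGetElapsedTimef() * 2.0) * level * 120.0;
        mesh.setVertex(i, v);
    }
}

void drawLayer(){
    layer.getTexture().bind();
    plane.draw();
    layer.getTexture().unbind();
}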


rêve v0.1 (devlog part 1)


rêve is an interactive piece of software in which the user can explore an audio/visual landscape.

I had wanted to do something like this for a while: something at the edge of a video game, an instrument and a piece of real-time software.

The concept is based on dreaming: as in real life, the user is unable to control the direction of their own dream. That’s why the program is built on pseudo-random rules, constantly switching between two types of environment.

The first one happens to be a Kinect 3D scan of my own arm, with my hand holding a ball. Everything in the first environment happens inside the 3D model of my arm, looking towards the ball, which moves forward along the z-axis slightly faster than the camera, giving the impression of diving into a tunnel.
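The tunnel mechanic itself is simple; a minimal sketch of the idea is below, with made-up speeds (the real values are different).

// sketch: both move along z, but the ball stays slightly ahead of the camera,
// so it never gets caught -- the speeds here are placeholders
#include "ofMain.h"

ofCamera cam;
ofSpherePrimitive ball;
float ballZ = 0, camZ = 0;

void updateTunnel(){
    float dt = ofGetLastFrameTime();
    ballZ += 120.0 * dt;   // ball moves a bit faster...
    camZ  += 100.0 * dt;   // ...than the camera following it
    ball.setPosition(0, 0, ballZ);
    cam.setPosition(0, 0, camZ);
}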

To let the program make the switch by itself, a random number between 0 and 60 is chosen at the end of each elapsed minute. This number corresponds to the moment when the change of environment will happen.

Based on this random number, each environment may have alternative realities: depending on this value, some sounds, models and textures may or may not be loaded. The core of the program is the following set of conditional operators, which drives all of the underlying events.


testApp.cpp
void testApp::update(){
    currentTime = ofGetElapsedTimef();
    randomSound = ofRandom(599);

    // wrap the elapsed time so currentTime always stays within a one-minute window
    while (currentTime >= 60.) {
        currentTime -= 60.;
    }

    // the environment switch fires once the wrapped time passes the random threshold
    if (currentTime < randomSwitch) {
        switchTime = false;
    } else if (currentTime > randomSwitch) {
        switchTime = true;
    }

    // switchSound only turns true when the random value falls below the wrapped time
    if (currentTime < randomSound) {
        switchSound = false;
    } else if (currentTime > randomSound) {
        switchSound = true;
    }

    // small chance (roughly 1.5%) per frame of triggering an extra sound
    if (randomSound < 590) {
        triggerSound = false;
    } else if (randomSound > 590) {
        triggerSound = true;
    }

    // at the start of each minute, pick the next switch moment and the next model
    if (currentTime >= 0. && currentTime <= 0.150) {
        randomSwitch = ofRandom(10, 59);
        randomModel = ofRandom(2);
    }

    // the same random value also decides whether the camera moves on its own
    if (randomSwitch > 30) {
        autoCam = true;
    } else if (randomSwitch < 30) {
        autoCam = false;
    }
}



As for the sound, a total of 10 samples are loaded directly in oF and triggered according to the random number chosen every minute. The output of each sample is mapped to model parameters such as color, radius, vertices, scale, lights and even the camera’s depth position; a rough sketch of that mapping follows the listing below.


testApp.cpp
if (switchSound == true) {
    bip1.play();
    ofBackground(255, 255, 255, soundColor);
    if (autoCam == true) {
        cam.setDistance(soundCamera);
    }
}
if (randomModel == 0) {
    // first environment: nested rings + crystal, driven by the "bip" pad
    bipPad.setPaused(false);
    noisePad.setPaused(true);
    breeze.setPaused(true);

    ofPushMatrix();
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 1, 0);
    ringModelWireframe.setScale(10, 10, 10);
    ringModelWireframe.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();

    ofPushMatrix();
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 0);
    ringModel1.setScale(2, 2, 2);
    ringModel1.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * -25.0, 1, 0, 0);
    ringModel2.setScale(4, 4, 4);
    ringModel2.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 0);
    ringModel3.setScale(6, 6, 6);
    ringModel3.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 1, 0);
    ringModel4.setScale(8, 8, 8);
    ringModel4.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();

    // the central crystal scales with the sound level
    crystalModel.setScale(soundScaleModel * 1.5, soundScaleModel * 1.5, soundScaleModel * 1.5);
    crystalModel.draw(OF_MESH_FILL);
    crystalModel.draw(OF_MESH_WIREFRAME);
} else if (randomModel == 1) {
    // second environment: nested cubes + icosahedron, driven by the noise pad
    bipPad.setPaused(true);
    noisePad.setPaused(false);
    breeze.setPaused(true);

    ofPushMatrix();
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 0);
    cubeModel.setScale(2, 2, 2);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 1, 0);
    cubeModel.setScale(4, 4, 4);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 0, 1);
    cubeModel.setScale(6, 6, 6);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 1);
    cubeModel.setScale(8, 8, 8);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();

    icosaedreModel.setScale(soundScaleModel * 1.5, soundScaleModel * 1.5, soundScaleModel * 1.5);
    icosaedreModel.draw(OF_MESH_FILL);
    icosaedreModel.draw(OF_MESH_WIREFRAME);
}

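The listing above uses values like soundColor, soundScaleModel and soundCamera without showing where they come from. A rough sketch of how such a mapping could be derived from the playing samples is below; the helper name, FFT band indices and output ranges are placeholders, not the actual values.

// sketch: deriving the sound-driven parameters used above from the mix output
void testApp::updateSoundMapping(){
    ofSoundUpdate();
    float* spectrum = ofSoundGetSpectrum(64);   // FFT of everything currently playing

    float low  = spectrum[2];    // rough bass energy
    float mid  = spectrum[16];   // rough mid energy
    float high = spectrum[48];   // rough high energy

    soundColor      = ofMap(low,  0.0, 0.3,   0, 255, true);   // background alpha
    soundScaleModel = ofMap(mid,  0.0, 0.2, 1.0, 4.0, true);   // model scale
    soundCamera     = ofMap(high, 0.0, 0.1, 200, 800, true);   // camera distance
}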

The sound amplitude is mapped to the horizontal size of the screen, while the frequency corresponds to its vertical size. Since the sound changes with the mouse interaction, no two explorations will ever sound the same.
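Assuming the mouse position over the screen is what drives that mapping, it could look something like this sketch; the sample used and the pitch/volume ranges are placeholders.

// sketch: mouse position over the screen drives amplitude and pitch
void testApp::mouseMoved(int x, int y){
    float amplitude = ofMap(x, 0, ofGetWidth(),  0.0, 1.0, true);
    float pitch     = ofMap(y, 0, ofGetHeight(), 0.5, 2.0, true);
    bipPad.setVolume(amplitude);
    bipPad.setSpeed(pitch);   // playback speed shifts the perceived frequency
}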