rêve v0.1 (devlog part 1)


rêve is an interactive piece of software in which the user explores an audio/visual landscape.

I have wanted to make something like this for a while: something at the edge of a video game, an instrument, and a piece of real-time software.

The concept is based on dreaming: just as in real life, the user is unable to control the direction of their own dream. That is why the program is built upon pseudo-random rules, constantly switching between two types of environment.

The first one is a Kinect 3D scan of my own arm, with my hand holding a ball. Everything in this environment happens inside the 3D model of the arm, looking towards the ball, which moves forward on the z-axis slightly faster than the camera, giving the impression of diving into a tunnel.

To let the program make the switch by itself, a random number of seconds (between 10 and 59) is chosen at the start of each minute. This number marks the moment within that minute when the change of environment will happen.

Based on this random number, each environment may have alternative realities: depending on its value, some sounds, models and textures may or may not be loaded. The core of the program is the following set of conditional statements, which drives all of the underlying events.


testApp.cpp
void testApp::update(){
    currentTime = ofGetElapsedTimef();
    // re-rolled every frame; most values are far above 60, so the
    // sound-related flags below only fire occasionally
    randomSound = ofRandom(599);

    // wrap the elapsed time into the current minute (0-60 s)
    while (currentTime >= 60.) {
        currentTime -= 60.;
    }

    // the environment switch fires once the chosen second has passed
    if (currentTime < randomSwitch) {
        switchTime = false;
    } else if (currentTime > randomSwitch) {
        switchTime = true;
    }

    if (currentTime < randomSound) {
        switchSound = false;
    } else if (currentTime > randomSound) {
        switchSound = true;
    }

    // roughly 1.5% chance per frame (9/599) of triggering a sound
    if (randomSound < 590) {
        triggerSound = false;
    } else if (randomSound > 590) {
        triggerSound = true;
    }

    // during the first 150 ms of each minute, pick the switch second
    // and the model set for the next cycle
    if (currentTime >= 0. && currentTime <= 0.150) {
        randomSwitch = ofRandom(10, 59);
        randomModel = ofRandom(2);
    }

    if (randomSwitch > 30) {
        autoCam = true;
    } else if (randomSwitch < 30) {
        autoCam = false;
    }
}



As for the sound, a total of 10 samples are loaded directly in oF and triggered according to the random number chosen every minute. The output level of each sample is mapped to model parameters such as color, radius, vertices, scale, lighting and even the camera's depth position.


testApp.cpp
if (switchSound == true) {
    bip1.play();
    ofBackground(255, 255, 255, soundColor);
    if (autoCam == true) {
        cam.setDistance(soundCamera);
    }
}

if (randomModel == 0) {
    // first alternative: rotating ring models around a crystal
    bipPad.setPaused(false);
    noisePad.setPaused(true);
    breeze.setPaused(true);

    ofPushMatrix();
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 1, 0);
    ringModelWireframe.setScale(10, 10, 10);
    ringModelWireframe.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();

    ofPushMatrix();
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 0);
    ringModel1.setScale(2, 2, 2);
    ringModel1.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * -25.0, 1, 0, 0);
    ringModel2.setScale(4, 4, 4);
    ringModel2.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 0);
    ringModel3.setScale(6, 6, 6);
    ringModel3.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 1, 0);
    ringModel4.setScale(8, 8, 8);
    ringModel4.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();

    // the crystal's scale follows the sound level
    crystalModel.setScale(soundScaleModel * 1.5, soundScaleModel * 1.5, soundScaleModel * 1.5);
    crystalModel.draw(OF_MESH_FILL);
    crystalModel.draw(OF_MESH_WIREFRAME);
} else if (randomModel == 1) {
    // second alternative: nested cubes around an icosahedron
    bipPad.setPaused(true);
    noisePad.setPaused(false);
    breeze.setPaused(true);

    ofPushMatrix();
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 0);
    cubeModel.setScale(2, 2, 2);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 1, 0);
    cubeModel.setScale(4, 4, 4);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 0, 0, 1);
    cubeModel.setScale(6, 6, 6);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef() * 25.0, 1, 0, 1);
    cubeModel.setScale(8, 8, 8);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();

    icosaedreModel.setScale(soundScaleModel * 1.5, soundScaleModel * 1.5, soundScaleModel * 1.5);
    icosaedreModel.draw(OF_MESH_FILL);
    icosaedreModel.draw(OF_MESH_WIREFRAME);
}


The sound amplitude is mapped along the horizontal dimension of the screen and the frequency along the vertical one, so the mouse position shapes the sound. Since the sound changes with mouse interaction, no two explorations will ever sound the same.