hexo

This post is somewhat self-referential, as I just moved my previous blog to Hexo, a Markdown blogging framework powered by Node.js.

I was aiming for something:

  • fast
  • lightweight
  • flexible
  • w/out database
  • w/ markdown support

Among all those plain-text frameworks, Jekyll was an option, but I wanted to try something new and solely focused on writing posts.
That’s why I decided to give Hexo a go.

As a first impression, it’s terribly fast (props to Node.js). The community around it isn’t that big yet, but the documentation is detailed enough to make your own way through a custom blog setup.

This is how you start a Hexo project:

$ npm install hexo-cli -g
$ hexo init blog
$ cd blog
$ npm install
$ hexo server


So I was totally satisfied with that and ported my previous blog layout to it.
After defining your own categories or tags in the post YAML frontmatter, it’s pretty straightforward to get a list of posts ordered by category:

<%- list_categories([options]) %>

One annoying feature (bug?) I stumbled across comes up when defining multiple categories for the same post. By default, they all get nested as sub-categories instead of being treated as siblings of the parent. You don’t want to end up with duplicate sub-sub-categories, right?

I managed to find a workaround to get rid of this unwanted depth: the culprit was hiding in “hexo/lib/plugins/helper/list.js” inside the node_modules folder.

After checking the category depth, each child gets its own path assigned automatically, like so:

arr.push('<a class="' + className + '-link" href="' + self.url_for(cat.path) + '">' +
  options.transform(cat.name) +
  (showCount ? '<span class="' + className + '-count">' + cat.length + '</span>' : '') +
  '</a>');

It all happens in this very bit. A quick hack is to build the category path from the category name directly, instead of the nested sub-path you would normally get.

self.url_for("categories/"+cat.name+"/")

And now you have it: a whole new blog to fill up with as many top-level categories as you want.
There might be some little fixes to make here and there, but I’m pretty satisfied with the current setup and ready to write up the next adventures.


fantômes

While working on a custom GUI addon for openFrameworks, I wanted to try mesh transformations in oF, which led to this series of abstract, ghost-like visuals.

So far, the GUI lets you switch between color modes, control the amount of noise applied to each model, scale the shapes and set the camera orientation.

I will be posting more info soon on ofxGrainCtrl, aiming more for an experimental user interface with generative aspects than for a simple, clean UI.

The “fantômes” 3D models are generated in Cinema 4D, then converted to meshes in openFrameworks in order to apply noise amplitude and speed damping to their vertices.
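The addon code itself isn’t shown here, but the vertex transformation boils down to offsetting each mesh vertex by an animated noise value. Here is a minimal standalone sketch of the idea, with a sine-based stand-in for oF’s ofNoise() and hypothetical names (fakeNoise, displaceVertices):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in for oF's ofNoise(): smooth pseudo-noise in [0, 1].
float fakeNoise(float t) {
    return 0.5f + 0.5f * std::sin(t);
}

// Offset every vertex by a per-vertex noise value scaled by `amplitude`;
// `time` animates the noise and `damping` slows the animation down.
void displaceVertices(std::vector<Vec3>& verts, float amplitude,
                      float time, float damping) {
    for (std::size_t i = 0; i < verts.size(); ++i) {
        float n = fakeNoise(time * damping + static_cast<float>(i));
        float offset = (n - 0.5f) * 2.0f * amplitude; // in [-amplitude, amplitude]
        verts[i].x += offset;
        verts[i].y += offset;
        verts[i].z += offset;
    }
}
```

In openFrameworks proper, the same loop would run over mesh.getVertices() in update(), with ofNoise() providing the noise value.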



earth coding

I participated in a workshop held at La Gaîté Lyrique in Paris with Martin Howse.
This British artist works at the frontier of code, poetry and psychogeophysics (woot) in order to understand the relationship between software and the earth itself.

Here’s an article about it on Makery:
Workshopping with the man who codes the earth

The workshop was divided into two parts over two days. On the first day we went out into the forest to try some worm poetry. For this purpose, Martin uses the “dark interpreter”, a sensor that detects electromagnetic signals.

Once it was plugged into the ground, the worms’ movements and surrounding signals were translated into chunks of strings that, when put together, formed an unintelligible and beautiful poetry of some sort.

Martin also showed his Earth OS project, in which he tried to boot his computer by connecting it directly to the ground. Yep.

On the second day we worked on code logic, using cups and water gates to build a continuous top-to-bottom flow of water. We then made our own “earth sound reactive” gloves using a 555 timer, which acts as an oscillator controlling the sound output.


rêve v0.1 (devlog part 1)


rêve is an interactive piece of software in which the user can explore an audio/visual landscape.

It’s been a while since I’ve wanted to make something like this, at the edge of a video game, an instrument and a piece of real-time software.

The concept is based on dreams: just like in real life, the user is unable to control the direction of their own dream. That’s why the program is built upon pseudo-random rules, constantly switching between two types of environment.

The first one happens to be a Kinect 3D scan of my own arm, with my hand holding a ball. Everything in this environment happens inside the 3D model of the arm, heading towards the ball, which moves forward along the z-axis at a slightly faster speed than the camera, thus giving the impression of diving into a tunnel.

In order to let the program make the switch by itself, a random number between 0 and 60 is chosen at the end of each elapsed minute. This number corresponds to the moment when the change of environment will happen.

Based on this random number, each environment may have alternative realities: depending on this value, some sounds, models and textures may or may not be loaded. The core of the program is the following set of conditional statements that drive all of the underlying events.


testApp.cpp
void testApp::update(){
    currentTime = ofGetElapsedTimef();
    randomSound = ofRandom(599); // new random value every frame, drives the rare sound trigger
    // wrap the elapsed time into a 0-60s window
    while (currentTime >= 60.) {
        currentTime -= 60.;
    }
    // switch environment once the random second of the minute is reached
    if (currentTime < randomSwitch) {
        switchTime = false;
    } else if (currentTime > randomSwitch) {
        switchTime = true;
    }
    if (currentTime < randomSound) {
        switchSound = false;
    } else if (currentTime > randomSound) {
        switchSound = true;
    }
    // only trigger a sound when the per-frame random value lands above 590
    if (randomSound < 590) {
        triggerSound = false;
    } else if (randomSound > 590) {
        triggerSound = true;
    }
    // at the very start of each minute, pick the next switch second and model
    if (currentTime >= 0. && currentTime <= 0.150) {
        randomSwitch = ofRandom(10, 59);
        randomModel = ofRandom(2);
    }
    // the switch second also decides whether the camera is automated
    if (randomSwitch > 30) {
        autoCam = true;
    } else if (randomSwitch < 30) {
        autoCam = false;
    }
}



As for the sound, a total of 10 samples are loaded directly in oF and triggered according to the random number chosen every minute. The output of each sample is mapped to model parameters such as color, radius, vertices, scale, lights and even the camera’s depth position.


testApp.cpp
if (switchSound == true) {
    bip1.play();
    ofBackground(255, 255, 255, soundColor);
    if (autoCam == true) {
        cam.setDistance(soundCamera);
    }
}
if (randomModel == 0) {
    // first environment: spinning rings around a crystal
    bipPad.setPaused(false);
    noisePad.setPaused(true);
    breeze.setPaused(true);
    ofPushMatrix();
    ofRotate(ofGetElapsedTimef()*25.0, 0, 1, 0);
    ringModelWireframe.setScale(10, 10, 10);
    ringModelWireframe.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();
    ofPushMatrix();
    ofRotate(ofGetElapsedTimef()*25.0, 1, 0, 0);
    ringModel1.setScale(2, 2, 2);
    ringModel1.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef()*-25.0, 1, 0, 0);
    ringModel2.setScale(4, 4, 4);
    ringModel2.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef()*25.0, 1, 0, 0);
    ringModel3.setScale(6, 6, 6);
    ringModel3.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef()*25.0, 0, 1, 0);
    ringModel4.setScale(8, 8, 8);
    ringModel4.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();
    crystalModel.setScale(soundScaleModel*1.5, soundScaleModel*1.5, soundScaleModel*1.5);
    crystalModel.draw(OF_MESH_FILL);
    crystalModel.draw(OF_MESH_WIREFRAME);
} else if (randomModel == 1) {
    // second environment: nested cubes around an icosahedron
    bipPad.setPaused(true);
    noisePad.setPaused(false);
    breeze.setPaused(true);
    ofPushMatrix();
    ofRotate(ofGetElapsedTimef()*25.0, 1, 0, 0);
    cubeModel.setScale(2, 2, 2);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef()*25.0, 0, 1, 0);
    cubeModel.setScale(4, 4, 4);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef()*25.0, 0, 0, 1);
    cubeModel.setScale(6, 6, 6);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofRotate(ofGetElapsedTimef()*25.0, 1, 0, 1);
    cubeModel.setScale(8, 8, 8);
    cubeModel.draw(OF_MESH_WIREFRAME);
    ofPopMatrix();
    icosaedreModel.setScale(soundScaleModel*1.5, soundScaleModel*1.5, soundScaleModel*1.5);
    icosaedreModel.draw(OF_MESH_FILL);
    icosaedreModel.draw(OF_MESH_WIREFRAME);
}


The sound amplitude is mapped to the horizontal size of the screen and the frequency to its vertical size. As the sound changes with the mouse interaction, no two explorations will ever sound the same.