Suno and Udio are gigantic platforms for making music with AI, with all the pros and cons of this type of tool. Both offer free access, which is fine for understanding what they are, but if you want something more you have to pay, obviously.
What if I wanted to use Artificial Intelligence on my computer, and perhaps base my results on truly free material without stealing anything from anyone?
Personally, I believe an excellent tool for exploring AI on a personal PC is ComfyUI.
In addition to letting you test all kinds of checkpoints for graphics and video through its powerful node interface, it also lets you manage an audio library that is free both to use and in its origins, because it is based on open source audio material: Stable Audio Open 1.0.
Unlike other models, which are private and not accessible for artists and researchers to build upon, Stable Audio Open is a new open-weights text-to-audio model whose architecture and training process are driven by Creative Commons data.
Stable Audio Open generates stereo audio at 44.1 kHz in FLAC format, and is an open-source model optimized for generating short audio samples, sound effects, and production elements using text prompts. Ideal for creating drum beats, instrument riffs, ambient sounds, foley recordings, and other audio samples, the model was trained on data from Freesound and the Free Music Archive, respecting creator rights.
How do you do it? Let's see right away.
As I said previously, we need a tool to manage all the necessary elements. Personally I have had a good experience with ComfyUI; if you are inexperienced, or Python and its cousins scare you, you can install the desktop version.
It's really very simple, and you can find all the necessary installation information on the ComfyUI website.
One of the features of ComfyUI is that it allows you to use AI without having to resort to excessively powerful PCs, which I personally find very sensible.
However, once the installation is finished, before launching the program, take a moment to look at the directory tree: inside the main folder there are some important subfolders, where we will have to place the files needed to make the whole thing work.
Inside the ComfyUI folder, notice the models directory and, inside it, the one called checkpoints. All the files necessary to make our workflow run will be placed inside these folders.
At this moment our installation is pristine, and since our goal is to create sounds with AI, let's get what we need.
Open the ComfyUI audio examples page, and literally follow the instructions. Rename the two files needed as stated, and put them in the right directories.
Download the workflow and put it in the ComfyUI > user folder: simply download the FLAC file, which you will then drag into the ComfyUI interface to extract the embedded workflow.
Now we can open ComfyUI by double clicking on the relevant icon created by the installation.
Drag the previously downloaded .flac file onto the ComfyUI window, and you should see an interface similar to the following image. The nodes can be repositioned as is most convenient for you.
That's it, you don't need anything else: you're ready to type your prompt into the CLIP Text Encode node and click Queue.
I hope it wasn't too difficult. The technical part is finished, and if you have obtained an audio file in the Save Audio node, the installation works.
Creating meaningful prompts requires some experimentation, of course.
Your results will be saved in ComfyUI's output folder.
I strongly suggest studying the prompts page in the Stable Audio User Guide; it really explains how to proceed.
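To give an idea of the style that tends to work (these are my own illustrative examples, not taken from the guide): short, descriptive phrases such as "128 BPM tech house drum loop, punchy kick, crisp hi-hats" or "gentle rain on a tin roof, distant thunder, field recording" usually yield more usable samples than long narrative sentences.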
This is the starting point: from here you can begin building your own path with AI.
BEWARE, it is a dangerous drug and your hard drive will quickly fill up.
You can find countless examples by doing a little search for “ComfyUI audio workflow”.
Obviously this is only one of the ways to obtain our result; there are many others. It's just probably the easiest one to get started with.
A song for us avatars: Livio Korobase & Renee Rebane in Digital Holiday.
[Verse]
In the land of codes and streams,
Where we live our digital dreams,
The snow falls bright in shades of blue,
A pixel-perfect holiday view.
Avatars in festive clothes,
Building trees where no one knows,
Silent nights in a neon glow,
It's Christmas in the pixel snow.

[Chorus]
Oh, it's Christmas in the pixel snow,
Where the virtual winds of winter blow.
Lights that sparkle, hearts that gleam,
A holiday in a coded dream.
Oh, it's Christmas in the pixel snow,
Together, no matter where we go.
Across the wires, through the screen,
In Second Life there are many people who make machinima of all types and genres. But two people in particular film live concerts, with very different approaches.
Glasz films the concert and works a lot in post-production, making it a personal work. D-oo-b, a musician himself, instead makes real documentaries full of ideas and original shots.
So now at The Hexagon there are two corners reserved for their creations, where you can see some of their work on a dedicated screen.
The screen used is really simple: just turn it on.
Next to each screen there is a small cube; click it to receive the relevant documentation. Both have a large body of work, so refer to the notecards for access to their archives on the web.
I think it's an important addition to The Hexagon, and watching some machinima about musicians in Second Life is pleasant. Enjoy the show.
I think we've all tried a bit to use artificial intelligence to make music. At first you are amazed; then slowly you find the limits, and above all the costs.
My personal view is that machine learning can be used to enable and enhance the creative potential of all people, and I’d like it to be like that for everyone.
That said, there are many platforms on the Web, even complex ones, that offer the possibility of creating a song from a text prompt. The "trial" generation is free, but if you need more you have to switch to a paid plan based on the amount of rendering you need.
However, there is also the possibility of generating music with AI on your computer, downloading several different models, and thus avoiding the costs of online platforms.
I would like to talk here about two solutions that work locally, on your PC: Pinokio and Magenta Studio, two completely different approaches to AI-generated music.
Pinokio
Pinokio is really a viable solution: its scripts take care of downloading everything you need and configuring the working environment without disturbing your file system in any way. During installation you will be asked to indicate a Pinokio Home, and everything you download will go inside this directory; no mess around the PC.
The available scripts obviously do not concern only music, but a myriad of applications in all the relevant areas: text, images, videos, and so on and so forth. Warning: each application requires disk space, and the downloads are quite heavy. Make sure you have space on the disk where you created your Pinokio Home.
I have installed several libraries on my PC, currently the ones you see in the image below. Well, that’s 140 GB of disk space, and unfortunately appetite comes with eating.
Anyway, interesting. Worth a try.
Magenta Studio
Magenta Studio follows a completely different path and is based on recurrent neural networks (RNNs). A recurrent neural network has looped, or recurrent, connections that allow the network to hold information across inputs; these connections can be thought of as a kind of memory. RNNs are particularly useful for learning sequential data like music. Magenta Studio currently consists of several tools: Continue, Drumify, Generate, Groove and Interpolate.
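To make the idea of "memory" concrete, here is a minimal sketch of a single recurrent step in plain Python with NumPy. It is my illustration of the principle, not Magenta's actual code, and all the sizes are made up:

```python
import numpy as np

# Minimal sketch of a recurrent step: the hidden state h is the "memory"
# carried across inputs (illustrative only, not Magenta's implementation).
rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16                              # hypothetical feature/state sizes
W_x = rng.normal(size=(n_hidden, n_in)) * 0.1       # input-to-hidden weights
W_h = rng.normal(size=(n_hidden, n_hidden)) * 0.1   # hidden-to-hidden: the "loop"
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                              # memory starts empty
sequence = rng.normal(size=(32, n_in))              # a toy sequence, e.g. 32 note events

for x in sequence:
    # each new state depends on the current input AND the previous state
    h = np.tanh(W_x @ x + W_h @ h + b)

print(h.shape)  # (16,): a summary of everything the network has "heard" so far
```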
These tools were available as standalone programs, but version 2 has become an integrated plugin for Ableton Live, with the same functionality as version 1. They use cutting-edge machine learning techniques for music generation; really interesting.
At the Magenta site you can also become familiar with the so-called DDSP-VST.
Okay, talking about Neural Synthesis may seem like science fiction, but it's actually simpler than it seems. At the end of the day, it's just a matter of installing a VST3, nothing more complex than that.
If you like to experiment, I find the part dedicated to creating your own instruments very interesting: there the artificial intelligence can be trained on your own samples.
Taking advantage of the new location and the renewed attention, we would like to organize a small festival and refresh the walls a bit. We invite all composers of SL's own music to keep sending a photo and a note about themselves to Livio Korobase and/or Renee Rebane, so they can be added to the directory.
Many are already on the walls, but there is still room.
Some time ago I bought a piece of VJ software. VJing (pronounced VEE-JAY-ing) is a broad designation for realtime visual performance: the creation or manipulation of imagery in real time, through technological mediation and for an audience, in synchronization with music.
NestDrop is an ingenious program based on the visualization system of MilkDrop, originally developed by Ryan Geiss in 2001.
I like MilkDrop's beat detection system; it works well, and we all know how music and lights in sync can produce pleasant moments.
So, after playing on my own for a bit, I said to myself: why not try to do a music and light show in Second Life, live?
It's not as simple as it seems, especially when it comes to video in Second Life. The support is rudimentary: all you can do is apply a URL to the face of an object and start playing it (this is called MOAP, media on a prim). However, this does not guarantee at all that everyone will share the event, because in the case of a film, for example, each user starts playback from the beginning, so whoever arrives late will never be aligned with the other participants, who perhaps are not even aligned with each other. We would like to have a party, not watch a movie each on a separate sofa. Give us our keyframe.
There are systems in Second Life that try to overcome this problem with scripting and other tricks, but they are unreliable and complex. How to do it in a transparent, simple and economical way?
One day I visited a sim, Museum Island, where a guy nicknamed Cimafilo was streaming a movie, Alice in Wonderland, and I noticed that the data stream was small but fluid and in sync (even though the sim used parcel media, and it was not possible to understand who or what was managing the sync), so I tried to gather some information. Cima was using a video format unknown to me at the time, WebM, together with OBS, with good results in my opinion. So I tried to create a similar system, but suited to my needs: an analogous workflow, but using a dedicated server and a higher frame rate.
I can say I succeeded: the system I devised can stream and sync audio and video events in Second Life using simple open source tools. Let's see how.
As mentioned, my goal was to use a VJ program to create an audiovisual event in Second Life.
All you need is any audio player (personally I use Winamp because it's very light, but any other player is fine) and a VJ program (in my case NestDrop, but any other is fine). The result of our audio and video mix must be capturable by OBS (Open Broadcaster Software), so any program that generates video output on your PC monitor will do.
Your desktop at the end of the first part of the setup: from left, Winamp produces the sound, NestDrop makes the visuals and OBS captures everything.
I kept asking myself: how can I send this output from OBS to Second Life with acceptable quality and in sync for everyone? Showing the video is easy, almost: there are many methods, from a web page to a streaming service, but they all lack the detail that is so important to me: sync.
However, I noticed one thing: almost all of these services use the MP4 container or its variants to distribute the content. And by studying, I realized that the MP4 container does not have a sync system suitable for my purpose: I need the sync to be sent at pre-established intervals during the projection. The codecs world is a real jungle shaped by trade wars.
At this point I entered a hell of questions and answers on forums and web pages, approximate and/or wrong answers, you name it. I'm no expert on these things either, and this was new territory for me. A double difficulty, then.
I convinced myself along the way that the secret was in the format, and indeed it was.
Digging through the FFmpeg specs I discovered the format that was right for me: compact, with good audio (Opus) and video (VP8 or VP9), open source, and viewable by every browser. WebM was the trick.
Above all, one line of the WebM container specification struck me: "Key frames SHOULD be placed at the beginning of clusters."
Exactly what I was looking for: bingo. In a few words, when the Play button is pressed in the SL viewer, each visitor gets a key frame to sync to at the exact point of the actual key in the video. Yes, I want to use this!
Okay, but where do I send this WebM stream if I manage to convince OBS to do broadcasting?
Read here, search there: a server that accepts WebM is Icecast2.
Now comes the slightly complex part, because you need Icecast video hosting, which works like the Shoutcast hosting that, if you have ever played music in Second Life, you surely already know. Or you can set up your own server. I obviously chose the second path, both for its affordability and to fully understand how it works.
On the Web it is easy to find offers for virtual servers, even at very low prices. I got a VPS with 1 processor, 1 GB of RAM and a 100 GB hard disk. For an experiment it's fine, and in any case you can always expand later. Let's go.
The installation of a Linux distro is always quite automated: just choose the desired distro from a drop-down menu and in a few minutes the basic server is operational. So far so good; I chose Ubuntu 20.04 for myself.
Preparing my VPS
In a few minutes the server is ready. I chose not to install anything apart from Icecast, but obviously the space can be used for all that you need.
Installing Icecast2 is very simple: all you need is a little familiarity with terminal commands (for me, via PuTTY) and your server is ready in 10 minutes. You can find dozens of tutorials, all copy/pasted from each other.
The only details I recommend taking care of are opening port 8000 on the server firewall, or no one will be able to connect, and, on the audio side, setting your Windows sound card to 48000 Hz. You don't need anything else.
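For reference, on Ubuntu the whole thing usually boils down to sudo apt install icecast2 for the server and, if ufw is your firewall, sudo ufw allow 8000/tcp for the port. Take these as hedged hints rather than a full tutorial; package names and firewall tools vary between distributions.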
Now test whether Icecast is responding, using a browser and adding :8000 to your server address. In my case, for example: http://vps-94bab050.vps.ovh.net:8000/
The server will most likely be listening.
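If you prefer checking from a script instead of a browser, a tiny Python probe does the same job. The host below is just my example server, and status.xsl is Icecast's standard status page:

```python
import urllib.request

# Ask Icecast for its status page; HTTP 200 means the server is up and listening.
# Replace the host with your own server address.
url = "http://vps-94bab050.vps.ovh.net:8000/status.xsl"
with urllib.request.urlopen(url, timeout=10) as response:
    print(response.status)
```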
Pelican Village TV waiting for a mountpoint
When configuring Icecast (you will be asked to specify an admin password and other details), you may be in doubt about mountpoints, and the documentation isn't very clear either: should I create them now? Or when? In reality you don't have to do anything: OBS (or whatever software you use to broadcast) will create the mountpoint with the name you chose in the connection string (we will see how later).
I connected OBS to the server, and the connection created the mountpoint /video, as specified in the connection string.
Opening the URL http://vps-94bab050.vps.ovh.net:8000/video (following the example), you can already see in a browser window what you are going to stream in Second Life (if someone is streaming; otherwise you get a "404 Page Not Found" error). The mountpoint is dynamic, which means it is alive as long as the stream is up: when you disconnect your broadcaster, the mountpoint disappears.
Connecting OBS to the Icecast server is not entirely straightforward, but it can be solved.
In File > Settings > Output > Recording, select Advanced as the Output Mode, then:

Type: Custom Output (FFmpeg)
FFmpeg Output Type: Output to URL
File path or URL: icecast://username:password@serverURL:8000/mountpointname (user name and password are the ones you set when installing Icecast2, remember?)
Container Format and Container Format Description: webm

Muxer Settings are a chapter apart. These, together with the Encoder settings, have to be decided by carefully reading the documentation at https://trac.ffmpeg.org/wiki/EncodingForStreamingSites and https://developers.google.com/media/vp9/live-encoding; the right configuration depends on many factors: your material, hardware, connection and server. For me personally, the string that worked for Muxer Settings is:
(For the meaning of each parameter and some sample configurations, refer to the specs.)
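If you want a starting point while you read, the WebM muxer options I would look at first are cluster_time_limit and cluster_size_limit, which control how often clusters (and therefore the key-frame sync points mentioned earlier) are written; a string like cluster_time_limit=5000 cluster_size_limit=2M is a plausible first guess. To be clear, these values are my illustration, not necessarily the string I ended up using.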
To start broadcasting, push the Start Recording button in OBS (we are "recording" to a URL).
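For testing, the same pipeline can also be driven without OBS by calling FFmpeg directly. The sketch below is a rough Python wrapper under my assumptions: the input file, host, password and mountpoint are placeholders, and the bitrates are reasonable first guesses, not the exact command I use.

```python
import subprocess

# Encode a local test file to VP8/Opus in a WebM container and push it to the
# Icecast mountpoint. "source" is Icecast's default source user; replace the
# password, host and mountpoint with your own values.
cmd = [
    "ffmpeg",
    "-re", "-i", "input.mp4",           # read a local test file at native speed
    "-c:v", "libvpx", "-b:v", "1M",     # VP8 video at ~1 Mbit/s
    "-deadline", "realtime",            # favour encoding speed over quality
    "-g", "120",                        # a key frame every 120 frames: frequent sync points
    "-c:a", "libopus", "-b:a", "128k",  # Opus audio
    "-content_type", "video/webm",
    "-f", "webm",
    "icecast://source:PASSWORD@your.server.net:8000/video",
]
subprocess.run(cmd, check=True)
```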
Now what you need is to prepare the "screen". In Second Life, associate the URL of your mountpoint with the Media texture of the object you have chosen as the screen, and press Play in the viewer. Nothing else is needed. Let the rave begin 🙂
Not only that: add 200 filters and 200 LFOs, plus an 8-track recorder and surround spatialisation (2 or 4 channels).
Obviously, to take advantage of such an arsenal you need a lot of computing power, and for this reason an Apple M1 ARM-based processor is recommended (the Windows version is still in the works); but no one forces you to really use 1,000 oscillators, obviously.
Mille (that's the name of the synth) can create amazingly dense and evolving drones that sound huge, and it is designed for stereo or quadraphonic surround sound, meaning 500 or 250 oscillators per channel. Imagine the wall of sound you can create.
Understandably there is no VST version, nor will there ever be, but you can export single tracks later, as stereo or quadraphonic files according to your needs, or connect the audio output directly to a DAW using an audio router, as explained here.
The presentation video is exhaustive and well done; if you are interested in this type of tool (it only works standalone, and that's good) I recommend watching it. It's not a freebie, but it costs a fair and low price, like all the other special tools produced by Gleetchlab.
Before buying it, try the synth on your PC, a demo version is available.
A Limb is a well-known Second Life musician (maybe some know him as Mich Reblack), but he also has a real life, and he plays in several bands. Of the various bands he plays in, my favorite is ZAÄAR, a Belgian collective born from a rib of NEPTUNIAN MAXIMALISM, the cosmic free-jazz orchestra that in the last two years has catalyzed the attention of those in love with psychedelic music, experimental drone doom and space ambient.
Below is a short story about ZAÄAR, straight from A Limb's lips during a conference in Second Life.
Hello and welcome to my ZAÄAR conference and mini-concert.
I will start by playing recordings of several bands linked to ZAÄAR, then Magická Džungl'a, our first album, while giving some explanations about its creation. Then I'll play a 20-minute (maybe more) concert in free improv, using the instruments I used for the album recording.
So at the origin of ZAÄAR is another project called Lab’OMFI. It was a multimedia collective that welcomed all artists who wanted to experiment with improvisation.
The creator and main leader of this group was Jean Jacques Duerinckx, a formidable saxophonist who played among others with some of the best improvisers: Lol Coxhill, John Russell, Paul Rutherford, Dan Warburton, Michel Doneda. He is also involved in the Belgian electro-acoustic scene.
He also played on a Zohara album, published on Tzadik (John Zorn's label).
I was very interested in improv, especially Jean Jacques' approach (I had always incorporated some improvisation in my earlier projects, but until then it was not an end in itself), and became a member of this collective almost from the start.
I played synths and iPad apps using granular synthesis. I'll give some more info about it later.
Other future ZAÄAR members also joined Lab’OMFI for more or less long durations: Hugues Philippe Desrosiers (bass) and Guillaume Cazalet CZLT (guitar and trumpet).
After a while, Guillaume Cazalet left the band to found his own, Neptunian Maximalism, an excellent free doom psychedelic drone metal band. JJ Duerinckx joined this new project very soon, as well as the drummer Sébastien Schmit, who would later become the drummer of ZAÄAR. And I was invited to join too, some months later. Sébastien left the group almost immediately when I arrived, but all the links were made!
Neptunian Maximalism has already made 8 albums (some live ones), and toured in France, Germany, Holland, Denmark… We are now preparing a tour passing through Duisburg, Paris, London, and more to be announced.
Let’s come back shortly to Lab’OMFI. The problem with such open groups is that, very often, unwanted people who are not in phase with the spirit of the collective impose themselves, and sabotage the rehearsals. After a while we got tired of this formula.
A little before the confinement, Lab’OMFI was abandoned to make room for LamaPhi, a core group of a dozen artists, the most motivated to really work on improv.
LamaPhi performed in the streets during the whole confinement, even in winter 😀 . About 20 events. Most of them were filmed. You can watch them here:
Now, the actual ZAÄAR story.
In the first days of January 2020, Philippe Desrosiers (Lab’OMFI’s bassist) called me.
He said: "I'm trying to organise an improv concert in St Gilles (Brussels) on February 1st, with Jean Jacques Duerinckx (sax), Guillaume Cazalet (vocals, trumpet, flutes, percs) and Sébastien Schmit (drums, percs). Would you be interested in joining?"
I answered yes, of course. I can't say "no"… Then he told me: "It's a very narrow place, you'll see. We'll have to get organized…"
Indeed it was narrow. The place was a hairdressing salon!
… but a very special one. It’s called Espace Moss, and Moussa, the hairdresser is a contemporary art lover. So he decided to use his salon as an art gallery.
It sounds weird but it's a great idea, in my opinion. Even when you visit a museum, you rarely devote one hour to watching a single artwork. In a hairdressing salon, you are even obliged to sit and watch.
Of course Moussa makes arty haircuts too 😉
A hair salon is usually not very large. But the artist invited by Moussa was Yoel Pytowski, an architect, who re-designed the whole shop interior by adding walls, transforming the shop into a sort of one-way Middle Eastern market street. So it was even more narrow.
Anyway, we found some place to install our instruments. When we started playing, the shop was full of people drinking and making noise. But despite this they listened and enjoyed the concert.
The event was called "A Triptych In Sound" because we played 3 improv sessions during the evening. It was rather audacious, because improv puts a strong demand on everyone's concentration.
But to our surprise the connection was there right away, and it lasted until the last note. I had only played once with the drummer, and rarely with the bassist, but it was as if we had been playing together all our lives, and our three sessions were quite excellent, full of groove and creativity.
Hmm, the special homemade fruit alcohols served by beautiful fairies were free for the musicians. Maybe it helped too.
So when we left the event we were very happy and satisfied with ourselves.
Sadly, a quarrel started some days later. I was not involved in it, and don't want to give you details about it. But after this argument, I didn't give much chance to this band's future…
… and then COVID arrived, so our musical activities fell into hibernation. The concert became a pleasant but already distant memory.
But it’s not finished of course…
In February 2021, we suddenly got an e-mail from Guillaume. It said: "Hey, maybe you didn't notice, but I recorded the concert with my pocket digital recorder. I spent months cleaning and mastering the tracks. I think it sounds good. If you're OK with it, I already have agreements with 2 record labels so that a double vinyl album can be published this year. What do you think?"
The album art was ready too (made by Peter Kľúčik – Untitled). Everyone agreed.
Personally I consider Guillaume to be a true mixing genius. Getting an album that sounds this good out of a concert recorded in the middle of a noisy crowd, in a reverberating room, with only the internal microphone of a pocket recorder, is a real miracle.
… and 2 Neptunian Maximalism albums were made the same way. We never went into a studio; they are simply recordings of concerts and rehearsals…
I think the strength of this band lies in the bass and drum sections. They are both flexible and reliable: they can range from pure noise to rock-funk rhythms while keeping full coherence. It makes the other members' improvisations very comfortable, and it is very inspiring.
Guillaume is a very skilled guitarist, but in this band he abandons his favourite instrument to devote himself to the flutes, the voice and the trumpet. He is very interested in occult sciences, religions and cults of the past, voodoo… So you often hear throat singing, murmurs or readings of texts in unknown languages. All this used to be handled by his many effects pedals; for the most recent concerts he gave up the effects, and it still sounds awesome, more than ever.
On my side, I often use an iPad app called "Samplr". It allows me to record what the other musicians are playing, to select very short excerpts live, and then to process these sounds with effects, loop them, etc. That's why it's sometimes difficult to distinguish me from the other musicians: I use the same sounds.
I also use a modular synth application called miRack. It is a kind of equivalent of Reaktor, for iPad. It is possible to build your own synths or your own effects. It produces some great experimental sounds.
And finally I also play the MicroFreak, a small synth that is also very experimental, with touch sensitive keys.
So, with everyone reconciled, we started rehearsing for more concerts, and noticed the magic was still there when we played together. And we already have new recordings for more albums; a new one is planned for the end of this year.
The first album came out on November 5th, 2021, as a 2×12″ LP in a gatefold sleeve, limited to 700 copies (350 yellow, 350 green) – available on Bandcamp, of course.
This is a long story, which began many years ago when there were no CDs and vinyl objects called LPs were used to listen to music. I had several of these LPs, including some from a band that fascinated and frightened me at the same time.
This band was called Tangerine Dream, and it explored territories unknown to me, because electronic music was little known and to make it you needed tools that were unattainable for me, even if only for the cost.
Their leader, Edgar Froese, seemed to me to be shady and diabolical, and the music evoked strange echoes inside me.
But damn Phaedra was really fascinating.
The band has changed a lot over the years, also due to its longevity (it has been active since 1967, incredible), with ups and downs as is normal.
In 2015 Edgar Froese died, but the band continued its history under the guidance of Thorsten Quaeschning, who had been in Tangerine Dream since 2005.
Thorsten is truly a special person. I've never seen someone as dedicated to music as he is; the live performances prove it. I once asked him what the trick was, because it's impossible to play electronic music live like he does, and the answer was disarming: Livio, I love music. I play for hours every day and I live for the music. That's all.
It was from him that I first heard about this Quantum Gate, during an online event called Behind closed doors with…. The idea was very simple: we go on stage and we let that music talk, live. LIVE? After two concerts, it was clear to me that something exceptional was happening.
On stage he is gigantic. What is this Quantum Gate, Thorsten? You will find out for yourself, was the answer. Know that there are rules to understand; it's a special place that Edgar worked hard on. So I started exploring Tangerine Dream's music in search of hidden messages, arcane harmonies, rules of composition, secret portals to unknown spaces. OK, I know, sometimes I'm stupid; it certainly wasn't that, this was just the easiest way for me.
Some time ago I had read about a place where the stories of all the lives that have appeared and will appear on this planet are kept. Immediately the image of an immense library formed in my mind, where all the papyri holding this information were kept. Once I talked about it with a wise friend, who told me: Livio, look at the sky every now and then. That is your library.
So, slowly, I too am learning to be a little wiser: to admire these soundscapes with simplicity, accepting them for what they are and what they say, without building endless and useless stories on top of them. Calm down and listen to the music; there is no need for anything else.
To give thanks for the teachings and the love that transpires from all this beauty, I wanted to create an installation in a place simultaneously real and unreal, as only a virtual world can be. There is something further on, much more solid and concrete than one might think.
I have no pretension of explaining to anyone what the Quantum Gate is (also because it could be different for everyone); it's enough for me to imagine that someone is intrigued and ventures down the road, and to offer them a shortcut, perhaps when they visit this place.
From the presentation of the Quantum Gate album: Thorsten Quaeschning, Ulrich Schnauss and Hoshiko Yamane worked together to realize Edgar's visions and expectations of a conceptual album that attempts to translate quantum physics and philosophy into music. New member Ulrich Schnauss comments: "at the moment hardly any other area of science questions our concept of reality (linearity of time etc.) as profoundly as research in Quantum physics – it's no surprise therefore that Edgar was drawn to these ideas since he had always aimed at reminding listeners of the existence of 'unopened doors'."
Just surrender :). And smile thinking that Thorsten Quaeschning did a series of exceptional live concerts called Behind Closed Doors, maybe to tease Edgar (hey look, I am opening doors).
In any case, Froese himself is credited on all but one track, mostly thanks to the musical sketches he left behind, so he is not dead or missing: he is still truly a member of today's Tangerine Dream.
This is the gift of Quantum Gate, printed on The Emerald Tablet. As above, so below.
The quest for a unique sound can lead artists down unexpected paths in music production. For me, Tia Rungray, that path led to a harmonious blend of cacophony and melody, chaos and order. I found my unique sound in the unlikely pairing of noise and piano, creating a musical style that challenges the conventional boundaries of genre. Introducing noise elements into music is not a new concept (composers like John Cage and Karlheinz Stockhausen explored this territory), but my approach is deeply rooted in a philosophical exploration of sound. This exploration goes beyond the mere combination of noise and piano, delving into the essence of sound and its potential to evoke a wide range of emotions and thoughts. This article will take you through my journey, my process, and the art of creating music that marries the discordant allure of noise with the timeless beauty of the piano. It's a journey that goes beyond the notes and rhythms into the philosophical underpinnings of my unique sound.
Background
My journey in music-making began in my early childhood home, filled with the electrifying sounds of rock music (that description may be a little overstated for us). The guitar's distortion, a staple of rock, was a familiar sound that would later play a significant role in my music production. But it was the movie "The Piano" that sparked my interest in the piano, leading me to explore classical music. I became enamoured with the works of composers like Erik Satie and Sergei Rachmaninoff, whose music resonated with me profoundly.

As I delved deeper into the world of music, I stumbled upon a track in a music game that would change my perspective on sound. It was a hardcore techno track, and it introduced me to the intense, high-energy sound of the gabba kick. Intrigued by its raw power and energy, I decided to try creating it myself. While experimenting with the gabba kick, a thought crossed my mind: "What would happen if this method of sound-making was applied to the piano?"

This curiosity led to "my noise sound", a unique blend of noise and piano. The result was a captivating soundscape in which I could hear the beauty of the piano faintly through the noise. The sound struck me like light shining through the leaves of a noisy, rustling forest. I realized that wild noise could express an extremity of human emotion that seemed impossible to communicate even with the piano's strongest notes (forte fortissimo) alone, and yet that maximum roughness could also subtly convey the quiet movement of emotions. This realization deepened my connection with noise and piano, and I knew I wanted to share this unique sound with others.
The selection of “My favourite 42 albums” is not limited to piano music.
The Art of Combining Noise and Piano
Creating my unique sound involves a delicate balance between the piano and noise, achieved through software instruments, audio signal manipulation, and real-time performance. The process begins in a Digital Audio Workstation (DAW), where I use the latest software instruments to create piano sounds. These sounds serve as the foundation upon which the noise elements are built. Making the noise involves a process known as bit crushing:
I send the audio signal from the piano track through an AUX to a "Lo-Fi" effect. This effect reduces the audio quality, "crushing" the bits and creating a distinctively gritty noise.
I apply a generous amount of reverb to add depth and space to the noise. Once the noise is made, it is time to blend it with the piano.
I simultaneously play the original piano and noise sounds, creating a unique interplay between the two.
The balance between the piano and noise is crucial, and I control it using the fader in the DAW. Additionally, I use the velocity input from the MIDI keyboard to control the volume of both the piano and the noise. The real-time performance of the piano and noise is a crucial aspect of my process: it allows a dynamic and organic interaction between the two elements, resulting in a constantly evolving sound that is never precisely the same twice.
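For readers who think in code, here is a bare-bones sketch of the bit-crushing idea in Python with NumPy. It is an illustration of the principle, not my actual plugin chain: a sine wave stands in for the piano track, and a simple quantizer stands in for the Lo-Fi effect.

```python
import numpy as np

# A stand-in "piano" signal: one second of a 220 Hz sine wave at 44.1 kHz.
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
piano = 0.5 * np.sin(2 * np.pi * 220 * t)

def bitcrush(x, bits=4):
    """Quantize the signal to 2**bits levels, like a Lo-Fi effect."""
    levels = 2 ** bits
    return np.round(x * levels) / levels

noise_layer = bitcrush(piano, bits=3)   # the crushed AUX send
fader = 0.4                             # balance between clean and crushed layers
mix = (1 - fader) * piano + fader * noise_layer

print(round(mix.min(), 3), round(mix.max(), 3))
```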
Case Study: Live Performance at Okinawa New Year Festival 2022-2023
One of the best examples of my unique sound in action is my improvised live performance at the Okinawa New Year Festival 2022-2023. This performance, available for viewing on YouTube, showcases the dynamic interplay between noise and piano that characterizes my music. In this performance, the real-time control of the piano and noise volumes was crucial. As it was an improvised live performance, there were moments when it was challenging to adjust the faders in detail. However, the MIDI keyboard provided a valuable tool for controlling the volume of the piano and noise in real-time, allowing for a responsive and organic performance. In addition to the noise and piano, I also used a looper in this performance to add depth to the sound. The looper allowed me to layer sounds and create a rich, immersive soundscape that captivated the audience. This performance is a testament to the power and potential of combining noise and piano. Despite the challenges of live improvisation, the result was a captivating musical experience that truly embodied the philosophy of my music.
Impact and Reception
The journey of my electroacoustic music project has been a testament to the power of innovation and the exploration of sound. Advocating for "noise classical," I've self-produced and released several albums, each one a unique exploration of environmental sounds, piano, and noise. My first album, 'Foresta,' was released in May 2013, marking the beginning of my live performances in both virtual and real spaces, including Tokyo and Saitama. Influenced by the ideas of Erik Satie and John Cage, my music focuses on instrumental compositions that depict the inner world of human beings. This unique style incorporates ambient, post-rock, and noise music elements, distinguishing it from traditional healing or meditation music.

Over the years, my music has received recognition and praise. The release of my album 'MindgEsso' on the label "Cat&Bonito" in April 2018 elicited a response from composer Akira Senju, who said he had heard the air of the future in my music. My music video 'Dancing Fly in My Head' was made in cooperation with the Akira Senju Office, Tokyo University of the Arts COI, and YAMAHA. In July 2020, I released the album 'Juvenile' on the label "Tanukineiri Records," a collaboration with Yorihisa Taura. Furthermore, my music video 'Soft Strings' was selected for the Tokyo Metropolitan Government's arts support programme 'Yell for Art'.

More recently, in June 2022, I performed at the "Second Life 19th Birthday Music Fest (SL19B)". In August of the same year, I released the album "Ghostmarch", followed in September by a performance titled "LIVE: Ghostmarch", featuring video projection by Hebeerryke Caravan. Through each of these milestones, my music has continued to evolve, pushing the boundaries of sound and challenging conventional notions of genre.
Conclusion
The journey of creating music that combines noise and piano has been a fascinating exploration of sound and its potential to evoke emotions and thoughts. This unique approach to music production, which I’ve termed “noise classical,” has led to numerous performances, collaborations, and recognitions. One such recognition is my inclusion in the ‘Second Life Music Composers Directory’. This directory, initially set up by musician Livio Korobase as part of the now-defunct Second Life Music LAB within the Second Life Endowments of the Arts, is dedicated to introducing and supporting musicians working within Second Life. It also shares music-related technical knowledge through web media and other means. My inclusion in this directory is a testament to the support and generosity of the music communities in Second Life, for which I am deeply grateful.
Through my music, I aim to challenge conventional boundaries and invite listeners to experience the captivating interplay between noise and piano. It’s a journey that goes beyond the notes and rhythms into the philosophical underpinnings of sound and its potential to resonate with the human experience.