Category: Tips&Tricks

  • Making audio content with AI and In-The-Box

    Making audio content with AI and In-The-Box

    Suno and Udio are gigantic platforms for making music with AI, with all the pros and cons of this type of tool. They also offer free access, which is fine for understanding what they are, but if you want something more you have to pay, obviously.

    What if I wanted to use Artificial Intelligence on my own computer, perhaps basing my results on truly free material, without stealing anything from anyone?

    Personally, I believe an excellent tool for exploring AI on a personal PC is ComfyUI.

    In addition to letting you test all kinds of checkpoints for images and video through its powerful node interface, it also gives you access to an audio model that is free both to use and in its source material, being based on open-source audio: Stable Audio Open 1.0.

    Unlike other models, which are private and not accessible for artists and researchers to build upon, Stable Audio Open is a new open-weights text-to-audio model whose architecture and training process are built on Creative Commons data.

    Stable Audio Open generates stereo audio at 44.1 kHz in FLAC format, and is an open-source model optimized for generating short audio samples, sound effects, and production elements from text prompts. Ideal for creating drum beats, instrument riffs, ambient sounds, foley recordings, and other audio samples, the model was trained on data from Freesound and the Free Music Archive, respecting creator rights.

    How do you do it? Let's see right away.

    As I said, we need a tool to manage all the necessary elements, and personally I have had a good experience with ComfyUI. If you are inexperienced, or if Python and its cousins scare you, you can install the desktop version.

    The latest version right now is 0.3.10, which you can download here.

    It’s really very simple, but for installation you can find all the necessary information on the ComfyUI website.

    One of the features of ComfyUI is that it allows you to use AI without having to resort to excessively powerful PCs, which I personally find very sensible.

    However, once the installation is finished, before launching the program, take a moment to look at the directory tree: inside the main folder there are some important subfolders, where we will have to place the files necessary to make the whole thing work.

    Inside the ComfyUI folder, notice the models directory and, within it, the one called checkpoints. All the files necessary to make our workflow run will be placed inside these folders.
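
    On a stock installation, the relevant part of the tree looks roughly like this (a sketch; the desktop build may arrange things slightly differently):

    ```
    ComfyUI/
    ├── models/
    │   ├── checkpoints/   <- model files go here, as per the example page
    │   └── ...            <- other model folders (text encoders, etc.)
    ├── user/              <- a good place for saved workflows
    └── output/            <- generated audio ends up here
    ```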

    At this point our installation is pristine, and since our goal is to create sounds with AI, let's get what we need.

    1. Open the ComfyUI audio examples page, and literally follow the instructions: rename the two required files as stated, and put them in the right directories.
    2. Download the workflow and keep it in the ComfyUI > user folder. Simply download the FLAC file, which you will later drag into the ComfyUI interface to extract the embedded workflow.
    3. Now we can open ComfyUI by double-clicking on the icon created by the installation.
    4. Drag the previously downloaded .flac file onto the ComfyUI window, and you should see an interface similar to the following image. The nodes can be repositioned however is most convenient for you.
    The audio workflow in ComfyUI.

    That's it, you don't need anything else: you're ready to type your prompt into the CLIP Text Encode node and click Queue.
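
    Incidentally, you don't have to click Queue by hand every time: ComfyUI also runs a small local HTTP server, so renders can be queued from a script. A hedged sketch, assuming the default address (127.0.0.1:8188) and a workflow exported with "Save (API Format)"; the filename and the node id are hypothetical, so check them against your own export:

    ```python
    import json
    import urllib.request

    # Load a workflow previously exported with "Save (API Format)".
    # The filename is hypothetical; use your own export.
    with open("audio_workflow_api.json") as f:
        workflow = json.load(f)

    # Overwrite the text of the CLIP Text Encode node. The node id "6"
    # is an assumption: look it up in your exported JSON.
    workflow["6"]["inputs"]["text"] = "ambient rain on a tin roof, distant thunder"

    # Queue the prompt on the default local ComfyUI server.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode())  # returns a prompt_id JSON on success
    ```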

    Example of audio generated from the prompt in the image with the base workflow.

    I hope it wasn't too difficult. The technical part is finished: if you have obtained an audio file in the Save Audio node, the installation works.

    Creating meaningful prompts requires some experimentation, of course. Your results will be saved in ComfyUI's output folder.

    I strongly suggest studying the prompts page in the Stable Audio User Guide; it really explains how to proceed. A typical prompt stacks genre, instruments, mood, and tempo, something like "lo-fi hip hop, warm electric piano, relaxed, 80 BPM".

    This is the starting point; from here you can start building your own path with AI.

    BEWARE, it is a dangerous drug and your hard drive will quickly fill up.

    You can find countless examples by doing a little search for “ComfyUI audio workflow”.

    Obviously this is only one of the ways to obtain our result; there are many others. It's just probably the easiest to get started with.
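
    For example, if you prefer scripting to node graphs, the same model can also be driven directly from Python through Stability AI's stable-audio-tools package. A minimal sketch adapted from the model card (the checkpoint is gated behind a Hugging Face login; the steps, cfg_scale, and prompt values are illustrative):

    ```python
    import torch
    import torchaudio
    from einops import rearrange
    from stable_audio_tools import get_pretrained_model
    from stable_audio_tools.inference.generation import generate_diffusion_cond

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Download the gated checkpoint (requires a Hugging Face login).
    model, config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
    model = model.to(device)

    # Text prompt plus the time window the model should fill.
    conditioning = [{
        "prompt": "128 BPM tech house drum loop",
        "seconds_start": 0,
        "seconds_total": 20,
    }]

    # Run the diffusion sampler.
    output = generate_diffusion_cond(
        model,
        steps=100,
        cfg_scale=7,
        conditioning=conditioning,
        sample_size=config["sample_size"],
        sampler_type="dpmpp-3m-sde",
        device=device,
    )

    # Collapse the batch, peak-normalize, and save as FLAC at 44.1 kHz.
    output = rearrange(output, "b d n -> d (b n)").to(torch.float32)
    output = (output / output.abs().max()).clamp(-1, 1)
    torchaudio.save("output.flac", (output * 32767).to(torch.int16).cpu(),
                    config["sample_rate"])
    ```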

  • I built my own monitor speakers (DIY)

    I built my own monitor speakers (DIY)

    As with cheap sound cards, the market for small desktop monitors is so crowded that making a reasoned choice based on technical characteristics is almost impossible, and it all boils down to whether a speaker looks nice on your desk or not.
    Yet choosing a monitor is critical: that's where you'll hear your sound from, and the monitor should give you a representative sonic spectrum.

    But what are monitor speakers? In a nutshell, monitor speakers are loudspeakers that we use specifically for audio and music production. They're called monitor speakers or studio monitors because they're used for monitoring: critical listening during recording, mixing, and mastering. No small thing.

    After searching fruitlessly through billions of offers and catalogues, only one thing was clear to me: it is not possible to make a good monitor for less than 350-400 euros; a midwoofer of at least 4″ is needed, plus a good dome tweeter. These were my reference points.

    So I looked at a few DIY projects, all of which promise marvels and stratospheric listening, but in the end concreteness won (thank goodness) and I turned to a catalog I know well, that of Dibirama (the best in Italy, but there is certainly something equivalent in every country).

    The owner is a nice person; I explained my needs to him and he gave me the instructions and the necessary components for my dream monitors. Not that things were much simpler here, the catalog is endless, but at least I had somewhere to start.

    The mid-woofer chosen is a Scan Speak 12W/8524G00, the tweeter a Seas 22TFF, both for their excellent features and value for money.

    Then you have to think about the acoustic enclosure, because the drivers and the crossover filter will have to be housed somewhere.

    We said desktop speakers, so as usual size matters. Excluding horns or transmission-line systems for obvious reasons (their dimensions), the most appropriate choice seems to be a cabinet loaded as a DCAAV (double-chamber reflex); we certainly cannot give up the low frequencies.
    This makes the cabinet a little more difficult to build, but nothing impossible.

    The acoustic enclosure must be designed around the characteristics of the chosen drivers; a box is not built at random. Various software packages are available that let you simulate the complete system with different types of loading.
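
    To give an idea of what those simulators compute, here is the first-approximation Helmholtz formula for a plain, single-chamber bass-reflex port. A sketch only: a double-chamber reflex needs a proper simulator, and the numbers below are illustrative, not this project's specs:

    ```python
    import math

    def reflex_tuning_hz(box_litres: float, port_diam_cm: float,
                         port_len_cm: float, speed_of_sound: float = 343.0) -> float:
        """First-approximation Helmholtz tuning of a plain bass-reflex box.

        f_b = c / (2*pi) * sqrt(S / (V * L_eff)), where S is the port area,
        V the box volume and L_eff the port length plus an end correction.
        """
        volume = box_litres / 1000.0                      # box volume in m^3
        radius = port_diam_cm / 200.0                     # port radius in m
        area = math.pi * radius ** 2                      # port cross-section in m^2
        length_eff = port_len_cm / 100.0 + 1.7 * radius   # typical end correction
        return speed_of_sound / (2 * math.pi) * math.sqrt(area / (volume * length_eff))

    # Illustrative numbers only, not the Yuna III's actual dimensions:
    print(f"tuning ~ {reflex_tuning_hz(8.0, 5.0, 12.0):.1f} Hz")
    ```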

    A little turning, pulling, cutting, and milling later, here are the cutting plans for the panels used to make this monitor (panel thickness 19 mm).

    Next we need the crossover filter that will divide the frequencies between the drivers.

    It will be something like this:

    Obviously you will have to build one of everything for each speaker, so a pair in total. Don't skimp on coils and capacitors; they directly affect the sound quality.
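
    To get a feel for the component values involved, here are the textbook formulas for a second-order Linkwitz-Riley crossover into a purely resistive load. This is a generic sketch, not this project's actual filter, which is tuned to the real drivers' impedance curves; the 3 kHz point and 8-ohm loads are illustrative:

    ```python
    import math

    def lr2_crossover(f_c: float, r_lp: float, r_hp: float):
        """Textbook 2nd-order Linkwitz-Riley component values.

        Assumes purely resistive drivers, which real drivers are not.
        Returns (L_lp, C_lp, C_hp, L_hp) in henries and farads.
        """
        L_lp = r_lp / (math.pi * f_c)          # series coil, woofer branch
        C_lp = 1 / (4 * math.pi * f_c * r_lp)  # shunt capacitor, woofer branch
        C_hp = 1 / (4 * math.pi * f_c * r_hp)  # series capacitor, tweeter branch
        L_hp = r_hp / (math.pi * f_c)          # shunt coil, tweeter branch
        return L_lp, C_lp, C_hp, L_hp

    # Illustrative: 3 kHz crossover into nominal 8-ohm drivers.
    L_lp, C_lp, C_hp, L_hp = lr2_crossover(3000, 8.0, 8.0)
    print(f"woofer:  L = {L_lp*1e3:.2f} mH, C = {C_lp*1e6:.2f} uF")
    print(f"tweeter: C = {C_hp*1e6:.2f} uF, L = {L_hp*1e3:.2f} mH")
    ```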

    As for colors, woods, and connectors you can indulge yourself; there is no rule apart from your personal taste.

    If, instead of shopping for the components yourself, you think it's better to buy a ready-made kit, I'm sorry to say that for this particular project one is not yet available. But if needed, the very kind Diego Sartori of Dibirama will be happy to help you put together the kit of all the components necessary for the speakers and filters. Ask for the Yuna III.

    A possible design
    The suggested distance between the cabinet base and the plinth is 18-20 mm (the speaker is a double reflex and also emits sound from the bottom, so don't block the bottom port).

    This isn't the easiest kit to build on the carpentry side, but it's a real monitor with amazing performance that will give you satisfaction. Pair them with a good amp; they deserve it.

    Impossible to expect more from a speaker as big as an A4 sheet. Have fun!


  • Harmonizing Chaos, My Journey in Music: Tia Rungray in Second Life

    Harmonizing Chaos, My Journey in Music: Tia Rungray in Second Life

    Introduction

    The quest for a unique sound can lead artists down unexpected paths in music production. For me, Tia Rungray, that path led to a harmonious blend of cacophony and melody, chaos and order. I found my unique sound in the unlikely pairing of noise and piano, creating a musical style that challenges the conventional boundaries of genre. Introducing noise elements into music is not a new concept; composers like John Cage and Karlheinz Stockhausen explored this territory before me. My approach, though, is deeply rooted in a philosophical exploration of sound. This exploration goes beyond the mere combination of noise and piano, delving into the essence of sound and its potential to evoke a wide range of emotions and thoughts. This article will take you through my journey, my process, and the art of creating music that marries the discordant allure of noise with the timeless beauty of the piano. It's a journey that goes beyond the notes and rhythms into the philosophical underpinnings of my unique sound.

    Video footage of Draxtor Despres interviewing me in 2021.

    Background

    My journey in music-making began in my early childhood home, filled with the electrifying sounds of rock music. That may be something of an overstatement, but the guitar's distortion, a staple of rock, was a familiar sound that would later play a significant role in my music production.
    It was the movie "The Piano" that sparked my interest in the piano, leading me to explore classical music. I became enamoured with the works of composers like Erik Satie and Sergei Rachmaninoff, their music resonating with me profoundly. As I delved deeper into the world of music, I stumbled upon a track in a music game that would change my perspective on sound. It was a hardcore techno track, and it introduced me to the intense, high-energy sound of the Gabba Kick. Intrigued by its raw power and energy, I decided to try creating it myself. While experimenting with the Gabba Kick, a thought crossed my mind: "What would happen if this method of sound-making were applied to the piano?" This curiosity led to "my noise sound", a unique blend of noise and piano. The result was a captivating soundscape in which I could hear the beauty of the piano faintly through the noise. This sound struck me like light shining through the leaves of a noisy, rustling forest. I realized that wild noise could express the most extraordinary wildness of human emotion, which seemed impossible to convey with even the piano's loudest notes (forte fortissimo) alone. Yet within that maximum roughness it was also possible to convey, subtly, the quiet movement of emotions. This realization deepened my connection with noise and piano, and I knew I wanted to share this unique sound with others.

    The selection of “My favourite 42 albums” is not limited to piano music.

    The Art of Combining Noise and Piano

    Creating my unique sound involves a delicate balance between the piano and noise, achieved through software instruments, audio signal manipulation, and real-time performance. The process begins in a Digital Audio Workstation (DAW), where I use the latest software instruments to create piano sounds. These sounds serve as the foundation upon which the noise elements are built. Making the noise involves a process known as bit crushing:

    1. I duplicate the audio signal from the piano track onto an AUX channel and process it with a "Lo-Fi" effect. This effect reduces the audio quality, "crushing" the bits into a distinctively gritty noise.
    2. I apply a generous amount of reverb to add depth and space to the noise.
    3. Once the noise is made, it's time to blend it with the piano: I play the original piano and the noise sounds simultaneously, creating a unique interplay between the two.

    The balance between the piano and the noise is crucial, and I control it using the faders in the DAW.
    Additionally, I use the velocity input from the MIDI keyboard to control the volume of the piano and the noise. The real-time performance of piano and noise is a crucial aspect of my process: it allows a dynamic and organic interaction between the two elements, resulting in a constantly evolving sound that is never precisely the same twice.
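
    As an illustration of what bit crushing does to a signal, here is a minimal numpy sketch of the general technique (a generic stand-in, not the actual "Lo-Fi" plugin chain described above; the bit depth, downsampling factor, and mix weights are illustrative):

    ```python
    import numpy as np

    def bitcrush(signal: np.ndarray, bits: int = 4, downsample: int = 8) -> np.ndarray:
        """Reduce bit depth and sample rate of a float signal in [-1, 1]."""
        # Quantize the amplitude to 2**bits levels ("crushing" the bits).
        levels = 2 ** bits
        crushed = np.round(signal * (levels / 2)) / (levels / 2)
        # Crude sample-rate reduction: hold every Nth sample.
        return np.repeat(crushed[::downsample], downsample)[: len(signal)]

    # A 440 Hz sine as a stand-in for the piano track.
    sr = 44100
    t = np.linspace(0, 1.0, sr, endpoint=False)
    piano = 0.8 * np.sin(2 * np.pi * 440 * t)

    # The AUX send: a crushed copy layered under the original,
    # balanced like the faders described above.
    noise_layer = bitcrush(piano, bits=3, downsample=16)
    mix = 0.7 * piano + 0.3 * noise_layer
    ```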

    Case Study: Live Performance at Okinawa New Year Festival 2022-2023

    One of the best examples of my unique sound in action is my improvised live performance at the Okinawa New Year Festival 2022-2023. This performance, available for viewing on YouTube, showcases the dynamic interplay between noise and piano that characterizes my music. In this performance, the real-time control of the piano and noise volumes was crucial. As it was an improvised live performance, there were moments when it was challenging to adjust the faders in detail. However, the MIDI keyboard provided a valuable tool for controlling the volume of the piano and noise in real-time, allowing for a responsive and organic performance. In addition to the noise and piano, I also used a looper in this performance to add depth to the sound. The looper allowed me to layer sounds and create a rich, immersive soundscape that captivated the audience. This performance is a testament to the power and potential of combining noise and piano. Despite the challenges of live improvisation, the result was a captivating musical experience that truly embodied the philosophy of my music.

    Impact and Reception

    The journey of my electroacoustic music project has been a testament to the power of innovation and the exploration of sound. Advocating for “noise classical,” I’ve self-produced and released several albums, each one a unique exploration of environmental sounds, piano, and noise. My first album, ‘Foresta,’ was released in May 2013, marking the beginning of my live performances in both virtual and real spaces, including Tokyo and Saitama. Influenced by the ideas of Erik Satie and John Cage, my music focuses on instrumental compositions that depict the inner world of human beings. This unique style incorporates ambient, post-rock, and noise music elements, distinguishing it from traditional healing or meditation music. Over the years, my music has received recognition and praise. The release of my album ‘MindgEsso’ on the label “Cat&Bonito” in April 2018 elicited a response from composer Akira Senju, who said he had heard the air of the future in my music.
    My music video, 'Dancing Fly in My Head', was produced in cooperation with the Akira Senju Office, Tokyo University of the Arts COI, and YAMAHA. In July 2020, I released the album 'Juvenile' on the label "Tanukineiri Records," a collaboration with Yorihisa Taura. Furthermore, my music video 'Soft Strings' was selected for the Tokyo Metropolitan Government's arts support programme 'Yell for Art'. More recently, in June 2022, I performed at the "Second Life 19th Birthday Music Fest (SL19B)". In August of the same year, I released the album "Ghostmarch", followed by a performance titled "LIVE: Ghostmarch" in September, featuring video projection by Hebeerryke Caravan. Through each of these milestones, my music has continued to evolve, pushing the boundaries of sound and challenging conventional notions of genre.

    Conclusion

    The journey of creating music that combines noise and piano has been a fascinating exploration of sound and its potential to evoke emotions and thoughts. This unique approach to music production, which I’ve termed “noise classical,” has led to numerous performances, collaborations, and recognitions. One such recognition is my inclusion in the ‘Second Life Music Composers Directory’. This directory, initially set up by musician Livio Korobase as part of the now-defunct Second Life Music LAB within the Second Life Endowments of the Arts, is dedicated to introducing and supporting musicians working within Second Life. It also shares music-related technical knowledge through web media and other means. My inclusion in this directory is a testament to the support and generosity of the music communities in Second Life, for which I am deeply grateful.

    Through my music, I aim to challenge conventional boundaries and invite listeners to experience the captivating interplay between noise and piano. It’s a journey that goes beyond the notes and rhythms into the philosophical underpinnings of sound and its potential to resonate with the human experience.

  • Listening to your stream when playing live in Second Life without overlap

    Listening to your stream when playing live in Second Life without overlap

    As we all know, lag is a big enemy. For audio even more so: it can take up to 30 seconds from when you press Play on your computer to when you can hear what you're playing inside Second Life.

    That's a big issue, because if your DAW is playing and you want to listen to what your audience is hearing, you'll hear two staggered, overlapping streams: one from the DAW, one delayed from the Second Life client.

    But there is a rather simple solution: if you have two sound cards, you can use one for the DAW and one for the Second Life client. Most computers have an internal sound card, so there's no need to buy a second one. Connect your headphones or speakers to the output of the second card, assign the application to that audio output in the Windows audio routing panel, and you're done.

    Let’s see how it’s done.

    Verify that in your DAW the output is routed to the sound card rendering the mix (in my case, an ESI Maya22).

    • Open Settings on Windows 10.
    • Click on System > Sound.
    • Under “Other sound options,” click the “App volume and device preferences” option.
    • Under the “App” section, select the playback device for the app (in my case, the device is an Audiobox and the app is Firestorm) and, if necessary, adjust the volume level for the app.
    • In Second Life, select the sound card you want to use for playback in Sound & Media > Output device, matching what you set in the Windows panel (in this example, the Audiobox).

    Done…
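
    If you want to double-check which output devices Windows actually exposes before assigning them, one option is a quick listing with the python-sounddevice package (an assumption; any device-listing tool gives the same information). Both sound cards should appear:

    ```python
    # pip install sounddevice
    import sounddevice as sd

    # Output devices are the entries with at least one output channel;
    # both sound cards (e.g. the Maya22 and the Audiobox) should show up.
    for index, device in enumerate(sd.query_devices()):
        if device["max_output_channels"] > 0:
            print(f"{index}: {device['name']} "
                  f"({device['max_output_channels']} out channels)")
    ```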


    If you have a 2-in/1-out switcher box, there is no need to move headphone or speaker cables: just switch from A to B and vice versa when needed. Or download an app such as Audio Device Switcher or Audio Switcher to get the same result.

    This audio routing feature is very useful for live performances: you can now hear what's happening in the Second Life client without the DAW's audio stream overlapping.
    There are more sophisticated solutions, of course, but this one is free and simple.

  • Using ENDLESSS

    Using ENDLESSS

    Real-time, loop-based music making. Endlesss is an app that promises to let you create, collaborate on, publish, and discover music faster than ever. And it's true!

    Endlesss Studio is a free app and can be downloaded for Windows, Mac, and iOS.

    The video shows a little introduction to Endlesss made by Art Olujia in Second Life.

    Where: http://maps.secondlife.com/secondlife…
    When: Thursday 15 December at 10am SLT
    Teacher: Art ღ littlewing (artistik.oluja)