Author: Livio Korobase

  • 24H/24 PARADISE Festival, what happened?

    A while back, Paradise Now (Philippe Franck) crossed over to another dimension, and to celebrate him some weirdos in Korea thought of creating a 24-hour mixed-reality event.

    “This Saturday, we will be connected for 24 hours, via the Internet, to the KOTE gallery, located in Seoul, Korea.

    And we’ll be able to imagine, share and celebrate with them, with all of us, surrounded and accompanied by PLANNED ACCIDENTS, Transcultures and the Société i Matériel, our utopias or our En/Vies, our memories, our love of Art, our completed works or those still in the making – in fact, everything that animated and irradiated Philippe Franck / Paradise Now in an absolutely irrational way, and perhaps made him particularly irresistible for some “in the manner of speaking”. By constantly transforming him into an electric snowdrop locomotive. In trance/sonic breath. “That’s entertainment”.

    Second Life responded with an event organized by A Limb.

    This was the program:

    4am: A LIMB
    Venue: Approaching Silence
    SLURL: http://maps.secondlife.com/secondlife/Weisshorn/17/249/402

    5am: MARTYN BATES II
    Venue: Approaching Silence
    SLURL: http://maps.secondlife.com/secondlife/Weisshorn/17/249/402

    6am: ALCHEMELIC
    Venue: Approaching Silence
    SLURL: http://maps.secondlife.com/secondlife/Weisshorn/17/249/402

    7am: LIVIO KOROBASE
    Venue: The Hexagons- Theatre Pelican
    SLURL: http://maps.secondlife.com/sec…/Pelican%20Reef/38/223/2026

    10am: ECHO STARSHIP
    Venue: The Hexagons – Rooftop
    SLURL: http://maps.secondlife.com/sec…/Pelican%20Reef/38/228/1763

    11am: YADLEEN
    Venue: KaiMar Club
    SLURL: http://maps.secondlife.com/…/Coastal%20Grove/157/142/3334

    12pm: RENATA K
    Venue: Approaching Silence
    SLURL: http://maps.secondlife.com/secondlife/Weisshorn/17/249/402

    1pm: KEVIN PAUL CAHAY
    Venue: Approaching Silence
    SLURL: http://maps.secondlife.com/secondlife/Weisshorn/17/249/402

    2pm: DADDIO DOW
    Venue: The Swordfish Pub
    SLURL: http://maps.secondlife.com/secondlife/Kalvoya/169/16/24

    Nine hours of music around Second Life.
    Everything went smoothly, with passion and beautiful musical proposals.

    The brave Glasz filmed and shared the whole thing live in HD, and now all the performances can be seen on YouTube, a nine-hour mega film in two parts (I suggest Cinema or Full Screen mode).

    Part 1:

    Part 2:

    Since it could be difficult to follow all the concerts in their different locations, Renee made an efficient HUD that guided all the teleports to their destinations. As far as we know, no avatars were lost.

    Everyone did their part, congratulations to everyone.
    I followed the entire nine hours of the concert; maybe my body couldn’t take it, because in the morning Renee found me on an ice floe wearing a red polka-dot bikini. I don’t know how it happened, I think someone put something in my drink. But never mind, it was a touching and beautiful event.

    I’ll never know what happened. I think it was Daddio, but I have no proof. Renee found me and saved me; she is a holy woman.

  • Tube sound in the digital mix for $15?

    We always hear about the “tube sound”, which is warm, sexy and you know what. Is it true?

    There’s probably some truth to it, without mythologizing. But why?

    Trying to avoid falling down Alice’s rabbit hole of esoteric sound discussions, there are practical reasons why we talk about transistor or tube sound and their differences in audio terms.

    Furthermore, a good tube audio amplifier usually costs much more than a transistor one (though not necessarily).

    When talking about tube versus transistor sound, it is easy to fall into strange discussions that are not always scientific, but in general tube sound is credited with a special quality in terms of perception.

    So I asked myself: how can I check whether there really is an advantage in inserting tubes into my mix chain? And how can I do it without having to sell a kidney?

    I’m a DIY enthusiast, so I started looking around for some inexpensive (very inexpensive) kit suitable for the experiment. Ultimately all I need is what’s called a buffer, one with a stereo input and output.

    A VERY expensive solution, but that’s what I want to try to do.

    Looking here and there, in the end I bought a Chinese buffer kit on a famous Chinese e-commerce site (you can guess which one…) for 11 euros. Since I was in the mood for crazy expenses, I also bought a plexiglass box to house the kit; it is always better to have a home, like all electrical things.

    Later I also started swapping tubes, which is one of the favorite rituals of audiophiles, but that is not important here.

    My 11-euro kit and its transparent house (2 euros or less, I don’t remember).

    At this point I connected the line output of my sound card to the buffer input, and the buffer output to a stereo input of the sound card, making a loop, and I mixed the original signal with that of the buffer. Harder said than done.
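
    If you want to preview the idea entirely in the box before buying anything, the same parallel mix can be sketched in a few lines of Python. A minimal sketch, assuming two time-aligned recordings at the same sample rate; the file names and the 70/30 ratio are invented for the example:

    ```python
    # Blend a dry track with the recorded buffer return (parallel mix).
    import numpy as np
    import soundfile as sf  # pip install soundfile numpy

    dry, sr = sf.read("dry.wav")               # original signal
    wet, _ = sf.read("buffer_return.wav")      # signal looped through the tube buffer

    n = min(len(dry), len(wet))                # trim to the shorter take
    blend = 0.7 * dry[:n] + 0.3 * wet[:n]      # to taste: 70% dry, 30% tube
    blend /= max(1.0, float(np.abs(blend).max()))  # normalize only if it would clip
    sf.write("blend.wav", blend, sr)
    ```

    In the hardware loop, the DAW mixer does this blend for you; the sketch just makes the dry/wet idea explicit.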

    This buffer requires a 12-volt AC power supply, so check that you have a suitable one at home; AC power supplies are not so common.

    In any case there are many different types of buffers, you don’t necessarily have to buy the cheapest one like I did, just search for “diy tube buffer” or similar.

    Does it work or not? In my opinion yes, it works: especially on vocals it adds something that makes them more distinct in the mix. But that’s just my opinion, and after all it costs little to try.

    I forgot: in the kit I used, the input and output pins are not marked; they are in the instructions, printed in small letters and only in Chinese… so when you try to locate them, keep the volume low.

    The circuit is a bit noisy, but you can find indications on some audio forums on how to reduce it, if you decide to use the buffer permanently.

    Have fun.

  • ECHO STARSHIP @ The Hexagons

    Who: Echo Starship

    When: FRI 7 March – 1PM SLT

    Where: Roof of The Hexagons

    The event will be streamed on the usual audio channel, with simultaneous live video on a MOAP screen in SL.

    Taking its form mostly as a live improvisational music project, Echo Starship is heavily influenced by an experimental/psych spectrum of sounds ranging from Drone, Noise, Ambient and Contemporary Minimalist music all the way to Post-Rock, Krautrock, IDM and Synth Wave elements.

    For his performances he uses an arbitrary selection of instruments ranging from electric guitar, prepared instruments, field recordings & samples, synthesizers, electronics and piezo microphones to other types of “odd” instruments.

    It takes form as a live improvisational exploration of experimental, ambient, contemporary classical, and drone music with performances at various festivals, venues, and internet streams.
    More at https://ampeff.com/

  • Livio Korobase @ Ambient Waves 2025

    The Ambient Waves Festival takes place on Friday 21st and Saturday 22nd February.

    Taxi: http://maps.secondlife.com/secondlife/RadioSpiral/105/236/25

    The list of participating artists is pretty interesting, apart from me.
    I’m playing Saturday at 3PM SLT, if anyone is interested.

    The main venue will be at ground level on the RadioSpiral sim, with a stage floating on the water and bleacher seating. RadioSpiral will also take care of rebroadcasting the event on their channel.

    Schedule of performers:

    Art Opening:

    FRI 11am PST
    Mutant Memory
    Tom Britt

    LIVE MUSIC:

    FRI
    12pm Mao Lemieux
    1pm Jana Kyomoon
    2pm Echo Starship
    3pm Aleatorica
    4pm Cypress Rosewood

    SAT
    9am MoShang Zhao
    10am Tsu
    11am DJ Kyizl
    12pm DD Kiyori
    1pm Spiral Sands
    2pm Gypsy Witch Sands
    3pm Livio Korobase

  • YU @ The Hexagons

    We are happy to announce that DJ YU (whom many know as JadeYu Fhang) will play at The Hexagons on Thursday the 13th at 1PM SLT.

    Expect a mostly electro set.

    Thursday the 13th, YU at 1PM on the roof of Hexagons, take note.

    http://maps.secondlife.com/sec…/Pelican Reef/45/234/1762

    See you there.

    Cover image: https://www.flickr.com/photos/jadeyufhang/

  • Tuning ears

    The ear is a sensor, like the eye, and it works like all our senses: it receives stimuli, in this case sound waves, and transforms them into messages that our brain interprets based on its experience.

    Besides, our perception of sound is limited: our brain does not interpret sounds below or above certain thresholds.

    Human hearing is sensitive to a wide spectrum of acoustic frequencies (over 10 octaves), extending from approximately 16 to 20,000 Hz (often quoted as 20 to 20,000 Hz), and to a range of sound intensity extending from the minimum threshold (by convention, 0 dB) to the maximum acceptable, set by pain (and then by rupture of the eardrum, at approximately 140 dB).

    And since the sound sensation is not proportional to the sound pressure values, their representation has been made easier by the use of logarithmic ratios (dB).
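
    To make the logarithmic idea concrete, here is a minimal sketch in Python of the standard sound pressure level formula, using the conventional 20-micropascal reference; the numbers are the textbook ones, not measurements of mine:

    ```python
    # SPL in dB: 20 * log10(p / p0), with p0 = 20 µPa (0 dB by convention).
    import math

    def spl_db(p, p0=20e-6):
        """Sound pressure level in dB for a pressure p, in pascals."""
        return 20 * math.log10(p / p0)

    print(spl_db(20e-6))   # 0.0   -> threshold of hearing
    print(spl_db(1.0))     # ~94   -> a loud, close sound
    print(spl_db(200.0))   # ~140  -> the pain and damage region mentioned above
    ```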

    But apart from these somewhat technical considerations, for a musician possessing perfect (or absolute) pitch is a gift. If you play an instrument, you have certainly tried endless times to identify the chords of a song in order to pick out its melodic line. Imagine if a single listen had been enough to identify them exactly…

    Absolute pitch is an act of cognition, needing memory of the frequency, a label for the frequency (such as “B-flat”), and exposure to the range of sound encompassed by that categorical label.

    Musicians such as Mozart (who demonstrated this ability at 7 years old), Beethoven, Toscanini and Glenn Gould were gifted with this blessing, and the list among musicians is long. It’s not a sign of anything in itself, but certainly a nice convenience.

    I loved this scene in the movie Amadeus, when Mozart dictates his Requiem to Salieri without the help of any tool or instrument, just seeing the music in his mind and singing it.

    Mozart using his perfect pitch to dictate his Requiem note by note, on the first try.

    We are not all Mozart, of course, but it is possible to improve our inner ear by doing specific exercises.

    Ear training is a music theory field of study where musicians use only their hearing to identify pitches, melodies, chords, intervals, rhythms, and various other basic elements of music. With ear training, you can recognize notes and other musical elements just by hearing them.

    If it goes badly you will just have wasted some time, but I think that doing some exercise is always good, even if you are not a musician but “only” a music listener.

    For example, try doing some free exercises on the Teoria site, and knock some rust off your ears.
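
    If you like to tinker, a toy drill is easy to build yourself. A sketch in Python, assuming numpy is installed; the interval table and the 440 Hz reference are standard, the rest is just illustration:

    ```python
    # Write a WAV with a reference note and a random interval above it,
    # listen, guess the interval, then check the answer.
    import random
    import wave
    import numpy as np

    INTERVALS = {"minor 2nd": 1, "major 2nd": 2, "minor 3rd": 3, "major 3rd": 4,
                 "perfect 4th": 5, "tritone": 6, "perfect 5th": 7, "octave": 12}
    SR = 44100

    def tone(freq, dur=1.0):
        t = np.linspace(0, dur, int(SR * dur), endpoint=False)
        return 0.3 * np.sin(2 * np.pi * freq * t)

    name, semitones = random.choice(list(INTERVALS.items()))
    root = 440.0                           # A4
    upper = root * 2 ** (semitones / 12)   # equal temperament
    signal = np.concatenate([tone(root), tone(upper)])

    with wave.open("interval.wav", "wb") as f:
        f.setnchannels(1)                  # mono
        f.setsampwidth(2)                  # 16-bit
        f.setframerate(SR)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())

    input("Listen to interval.wav, then press Enter for the answer... ")
    print("It was a", name)
    ```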

    In my opinion time well used, but that’s just my opinion.

  • Midi Learn and Automation for dummies

    We have talked about MIDI in various posts, and obviously so: MIDI is a fundamental component of any DAW or digital audio studio.

    MIDI messages are those that allow software and hardware to communicate, telling the parties what to do, when and why. Without MIDI, not much can happen.

    You can find all the specs about it on MIDI.org, and it’s worth at least giving it a look.

    It is generally thought that via MIDI it is only possible to communicate which note to play and at what volume, establishing a direct connection between your MIDI keyboard (for example) and the virtual synth you are using (standalone or VST, it’s the same).

    While on a PC MIDI travels over USB, on hardware MIDI signals are handled via three ports, called MIDI In, Out and Thru, using dedicated cables.

    Midi In, Out and Thru allow you to connect any type of MIDI instrument.

    Each instrument is assigned to a specific MIDI channel, and controller and synth talk exclusively on that channel (a channel is an independent path over which messages travel to their destination; there are 16 channels per MIDI device).

    So when you press a key on your MIDI keyboard, sending it on MIDI channel 16, only the synth listening on channel 16 will respond to that signal and emit the sound corresponding to that MIDI message, with the duration and intensity established by your touch.
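
    Just to make the channel idea concrete, here is what such a key press looks like as code, with the Python mido library (my choice for the example, not something this post depends on). Note that mido counts channels from 0, so channel 16 becomes channel=15:

    ```python
    # Send a one-second middle C on MIDI channel 16.
    # pip install mido python-rtmidi
    import time
    import mido

    print(mido.get_output_names())         # list ports, pick your synth's
    out = mido.open_output(mido.get_output_names()[0])

    out.send(mido.Message("note_on", channel=15, note=60, velocity=100))
    time.sleep(1.0)                        # hold the note
    out.send(mido.Message("note_off", channel=15, note=60))
    ```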

    Peripheral setup is one of the most fun parts of the job (very ironic).

    Maybe it seems strange, but it’s exactly the same thing that happens when you press the key A, B, C or whatever on your PC keyboard: a message is sent to the computer, and the letter appears on the screen.

    However, MIDI is capable of sending and receiving many other types of signals, standardized in a part of the protocol called MIDI CC (Control Change).

    MIDI keyboards these days come with lots of knobs and sliders, which apparently don’t have much to do with playing music. A MIDI keyboard usually has no sound generator of its own, so much so that it is also called a “mother keyboard”.

    My KeyLab shows a lot of controllers, pads and knobs: to do what? They don’t make any sound, except for the pads if needed.

    In fact, from the concept of a mother keyboard, i.e. one that does not emit any sound but only MIDI signals dedicated to transmitting data to a synth capable of interpreting and converting them into sound, we have moved on to the concept of the MIDI controller.

    While on a synthesizer those controls would be used to manage the audio engine of the synth itself, on a MIDI controller they are used to manage the many parameters that “turn the knobs” on the VST synth. For example:

    Watch what happens when the Cutoff, Resonance and Accent knobs are turned on the hardware side.

    Let’s imagine we want to do the same thing, but via software, with a VST synth that emulates the TB-303 in the video.

    The TB-303 is a celebrity, so it’s not at all difficult to find a software version of it, free or for a fee.

    A VST version of our TB-303. Using the MIDI controller, we want to replicate the effect seen in the video by turning the Resonance and Cutoff pots on the VST.

    Our goal is to replicate that behavior with our VST, emulating the hardware pots with those of our MIDI controller (this is called MIDI Learn), and possibly recording our knob movements over time (Automation).
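
    Under the hood, what MIDI Learn maps and what Automation records is simply a stream of Control Change messages. A sketch with the same hypothetical mido setup as above; CC 74 is the conventional brightness/cutoff controller, but the real number is whatever your mapping learns:

    ```python
    # Sweep a filter cutoff from closed to open via MIDI CC 74.
    import time
    import mido

    out = mido.open_output(mido.get_output_names()[0])
    for value in range(0, 128, 4):         # 0 = closed, 127 = fully open
        out.send(mido.Message("control_change", channel=0, control=74, value=value))
        time.sleep(0.05)                   # about 1.6 seconds for the whole sweep
    ```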

    Does it sound difficult? It is really easy and effective. The next video shows how to do it in Studio One, because that is the DAW I use, but any DAW has this functionality.

    Setup of a keyboard controller and assigning MIDI CC to keyboard controls.

    MIDI Learn is an immensely powerful feature that allows you to remote-control virtually any on-screen parameter with a MIDI controller. It is a very flexible system that can adapt to the MIDI device you use and allows changes made to any learned parameter to be recorded by the host application.

    But how do you tell the DAW to turn this knob at this point and with this intensity?

    Your tracks in the DAW have different layers: in one you record the sound events, in the Automation layer the MIDI CCs. Very easy and almost automatic. Furthermore, all events in the Automation layer can be recorded and edited very easily, for fine tuning.

    Example in Studio One, but any DAW has the same functionality. Look in the manual for “Automation”.

    It’s not difficult and will greatly improve your workflow, try it.

    Being able to use a physical controller is certainly a better method than turning knobs and dragging sliders one at a time on a monitor with a mouse, as if you only had one finger. Think about mixing multiple tracks simultaneously with the physical sliders of your MIDI keyboard, for example: just assign the physical sliders to the DAW volume faders, and done.

    And if you have a mixed studio, integrating hardware audio devices and fetish synths or modular gear, you know how annoying it gets having to get up, go to the synth and change the sound, then go back to the DAW, and so on, when you just want to try a different sound. Well, use automation and live happy.

    Program change on an external hardware device.
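
    The message behind that video is essentially a one-liner. A sketch, again with mido; the program and channel numbers are example values, and the Bank Select pair is only needed if your hardware organizes patches in banks:

    ```python
    # Switch patch on an external synth without leaving the chair.
    import mido

    out = mido.open_output(mido.get_output_names()[0])
    out.send(mido.Message("control_change", channel=0, control=0, value=0))   # bank MSB (optional)
    out.send(mido.Message("control_change", channel=0, control=32, value=0))  # bank LSB (optional)
    out.send(mido.Message("program_change", channel=0, program=12))           # patch 13 on channel 1
    ```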

    After having delved into MIDI, you can also dedicate yourself to light shows via MIDI, but that is another topic.

  • Live Recording from within your DAW

    While traditional studio recordings provide the luxury of multiple takes and post-production editing, capturing a live performance brings a distinct charm that’s hard to replicate. The unfiltered essence of a sound’s chemistry, the real-time interactions, and the genuine connection with the music come alive on screen, giving a taste of what it’s like to be front and center at a concert.

    I don’t think there’s any doubt about that. But how do you do it on a computer? Even in the digital world you need “cables” to connect the audio output of your DAW to the software you use to record, whatever it is. If you are on Windows, the operating system will probably not help you and will make the use of an audio router almost essential, and you will have to have the patience to align the various programs on sample rate, latency, USB buffers, and so on.

    Furthermore, Windows has the unique ability to reset your audio preferences with each update, who knows why. So if you had set your preferences manually, you will find them back at the default Windows settings, which almost never coincide (for example, I record at 24-bit/48,000 Hz, but Windows regularly sets 16-bit, even if the sound card driver says otherwise). The worst thing is that Windows also resets the preferences for the so-called Exclusive Control of audio devices, so something that used to work no longer works.

    I mean: you’ve prepared your performance, all kinds of drugs are pushing into your brain, and lo and behold, you press Record in your recording software and… error: “The recording device is not responding”. It’s not nice. Coitus interruptus of the worst kind.

    But there is a way to avoid the embarrassment: record directly from within your DAW. Fortunately, there are VST plugins to do this.
    Personally I use MRecorder by MeldaProduction, in the free version.

    Very simple but effective, with a series of options (some only usable with the paid version, but not essential for my use).

    MRecorder interface.

    It is a free plugin, downloadable with the MFreeFXBundle by MeldaProduction (38 free effects, not to be underestimated). And that’s the end of this problem: put it on the main stereo mix bus in the DAW mixer and live free.

  • Livio Korobase @ HeArt & Soul Gallery for Selen’s “Captive Lights” opening

    ๐—ฆ๐˜‚๐—ป๐—ฑ๐—ฎ๐˜†, JAN 12๐˜๐—ต – 12๐—ฝ๐—บ ๐—ฆ๐—Ÿ๐—ง
    Line Up:
    12pm SLT (21:00 CEST) – Livio Korobase
    Afterparty with BookaB Vibes!
    Dresscode: ๐™œ๐™š๐™ฉ ๐™˜๐™ง๐™š๐™–๐™ฉ๐™ž๐™ซ๐™š!

    Where: http://maps.secondlife.com/secondlife/Durdane/211/210/78

    I’ll be playing my music at this opening on Sunday, for anyone who wants to come and listen.

    I like Selen’s pictures on the walls, inspired by the works of James Turrell.

    I prepared some new music and sounds inspired by Selen’s colorful sights, and I hope everything works well.

    My basic track is ready, and I’m satisfied. To play it in concert, I will use the technique described in a previous post and here.

    Why do I want to specify this? Often in these cases we receive unclear requests regarding “live” performances.
    Anyone who has made any music knows that we only have two hands, and unless you play just one instrument and maybe sing over it, it’s impossible to produce an interesting performance.

    Personally, I’m not interested in playing piano bar; I prefer complex and layered compositions, which would require an entire band to perform “live”. And even if that were possible, which it isn’t, everyone would have to be in the same room and use the mixer output to feed the stream, because of the high latency affecting audio streams (meaning: you play a note, and that note can be heard in Second Life after a delay of 30 seconds or more, and there is no solution).

    Sometimes you also hear about double or triple streams, but that’s nonsense: in Second Life the stream applied to a parcel can be one and only one.

    So I looked for the most useful workflow for me, and I’m happy I found it, as explained in the aforementioned post: I prepare my basic track and I play over it, with my two hands.

    After the performance, as usual, I will publish the recording on my Bandcamp, at https://liviokorobase.bandcamp.com/.

    All my music can be defined as “live”, and from my point of view this is both an advantage and a flaw. I don’t have a studio version of any of the music on my Bandcamp; it’s all live recordings of me playing (once a Revox was used, now a digital recorder, but the approach is the same).

    A glorious Revox B77 MK2.

    This also means that if I wanted to repeat the same thing, I wouldn’t know how to do it: often these are moments captured on the fly, and who remembers how I did them?

    But that’s fine with me, that’s what I want; for me this is “live”, influenced by my moods, the ambience around me, my inside. I like to tell a story, if I can, and sounds for me are the colors with which I paint a scenario.

    I believe that the real distinction should be between “I play my own music, produced and played by me using the tools needed to try to create the feeling that I want to communicate” and “I play/sing music produced by others”…

    It is clear that the first case is the interesting one for me, but it is only my opinion. You can freely continue to call “live” something that for me is intrinsically dead, there’s no problem.

    I much prefer listening to a live recording of original music than to something fake, perhaps using a MIDI base prepared by others, in piano bar style. Some are very good at this, no doubt. But it’s not “live”, it’s a performance sung or played over a backing track, a pre-recorded medium generally produced by others.

    The difference between streamed music and live music is in the artist, not the medium. Whoever can create the “collective effervescence” is the winner, dead or alive 🙂

    I’ll leave the discussion open, there are certainly different opinions.

    For now, see you on Sunday at http://maps.secondlife.com/secondlife/Durdane/211/210/78

  • Maybe some news for audio in Second Life?

    During the last meeting with the management of Linden Lab, to which I was invited and in which the excellent Project Zero was presented, I asked Philip Rosedale a question that was quite off topic (but also not: basically we were talking about the evolution of Second Life) regarding a possible, and necessary, development of audio in Second Life.

    Why did I ask him, of all the people present at the meeting?

    Maybe someone remembers the virtual world he created some time ago, High Fidelity. That virtual world is no more in its original form (plans for VR gatherings on the High Fidelity platform were scrapped by the beginning of 2019, and the platform became audio-only), and High Fidelity now takes care of advanced audio processing quite well, as the name announced from the beginning.

    You can listen to some examples on the High Fidelity site, and the approach seems well suited to Second Life: for Voice, for musicians, and for environmental sounds.

    Wear your headphones and try this to understand the topic, for example.

    Pretty impressive, right? No, no one is knocking on your door 🙂

    I remember that during the development of Sansar, Linden Lab had started an approach to spatial, volumetric 3D audio. I don’t know if that experiment with ambisonic sounds was implemented in the final version; it’s been a long time since I visited Sansar. The premises were very interesting, and everyone ran to buy ambisonic microphones.

    But, seeing Rosedale on the other side of screen, I had to ask.

    I put on my poker face and asked directly.

    “Excuse me Philip, maybe we can hope for a future development of Second Life towards Spatial Audio?”

    He looked surprised and responded immediately. After that, I was the surprised one, because he didn’t say no but was open to the possibilities.

    Audio is one of the most primitive areas of Second Life: environmental sounds are for now based on mere support for small audio files (max 30 seconds each, in mono…), to be imported one by one and, with some work, assembled with scripting, far from what is possible to obtain today. If you have watched the example video, you can easily understand what would be possible instead.
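
    Just to give an idea of the chore this implies today, here is a sketch in Python of the preparation work: chopping a long recording into mono clips of at most 30 seconds, ready to be uploaded one by one. The soundfile library and the file names are assumptions for the example:

    ```python
    # Split a long ambience recording into 30-second mono WAV clips.
    import soundfile as sf  # pip install soundfile

    data, sr = sf.read("ambience.wav")
    if data.ndim == 2:                     # downmix stereo to mono
        data = data.mean(axis=1)

    chunk = 30 * sr                        # 30 seconds of samples
    for i in range(0, len(data), chunk):
        sf.write(f"ambience_{i // chunk:02d}.wav", data[i:i + chunk], sr)
    ```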

    I will not fail to recall this quasi-promise whenever possible; it would be wonderful, for musicians but also for any user of Second Life.
