Author: Livio Korobase

  • YADLEEN (DREAM MUSIC), ECHO STARSHIP (LIVE AMBIENT) AND A LIMB (LIVE ELECTRO) AT STUDIO M in Second Life

    TP to the concert: http://maps.secondlife.com/secondlife/Gangkhar/20/15/3898
    You should land at ground level. There is a large obelisk with a TP pad.

    12:00pm SLT: YADLEEN is a composer of electronic dream music, and the result is uniquely special. Her compositions can vary tremendously, creating an ambience that is purely Yadleen. She also draws on Indian influences, an extraordinary twist for electronic music.
    https://soundcloud.com/clara-music-1

    1:00pm SLT: ECHO STARSHIP
    Taking form mostly as a live improvisational music project, Echo Starship is heavily influenced by an experimental/psych spectrum of sounds, ranging from Drone, Noise, Ambient and Contemporary Minimalist music all the way to Post-Rock, Krautrock, IDM and Synth Wave elements.
    https://transonic-records.bandcamp.com/…/moribund…

    2:00pm SLT: A LIMB
    A Limb is a kind of artistic Frankenstein, exhuming all sorts of music corpses from their graves, stitching an ambient body with funk legs, punk feet, experimental arms, jazz hands, a drone head, inserting a big ethnic music heart and krautrock lungs… then bathing the whole “body” into a dub effects bath, until an electro thunderbolt strikes it and… omg… it’s… alive! Hmm… sometimes science goes too far, but it’s too late to back-pedal…
    https://soundcloud.com/a-limb/tracks

  • How to use my computer to make a recording studio? (5: Effects)

    In this series:

    1. Introduction
    2. The audio interface
    3. The DAW
    4. Virtual Instruments (VST)
    5. Effects
    6. Cables&cables

    Virtual Studio Technology (VST) plugins are what amateur and professional recording engineers and artists use to enhance their audio projects. A plugin is a type of software that works inside another piece of software. Instead of working on its own, you plug it into something else (in our case, the DAW).

    If you’ve ever visited a recording studio, you’ve certainly noticed some racks containing objects called effects: they’re the equipment that allows the sound engineer to add details to the instruments, maybe some reverb, an echo, a compressor or a special effect. For a guitarist, these are the studio equivalent of the numerous pedals at his feet, though the studio versions are much more refined, precise and expensive.

    The chain of effects can characterize the sound of an instrument in an important way, the famous case being the sound of the solos of Pink Floyd guitarist David Gilmour. Two notes are enough and you recognize it. Or Jimi Hendrix’s wah-wah, and many other famous examples (and we are talking about just ONE effect).

    One of the most famous effects used by Gilmour is a delay called the Binson Echorec, an echo unit from days gone by that used a rotating magnetic drum rather than tape.

    (left) David tweaking his Binson II unit during A Saucerful of Secrets filmed at Pompeii. (right) David pictured at Earl’s Court, London, UK in May 1973 with a Binson II and a Binson PE.

    That distinctive sound is so desired that many software houses have seen fit to create a replica of the delay in digital format, emulating not only the sound but also the look of the original in the interface.

    A quick internet search for Binson Echorec VST will yield many results.
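
    To make the idea concrete, here is a minimal sketch (in Python with NumPy, an assumed dependency, not taken from any real Echorec emulation) of the basic feedback-delay recursion these plugins build on; real emulations add drum-head filtering, wow and flutter, and multiple playback heads.

        import numpy as np

        def feedback_delay(x, sr=44100, time_s=0.35, feedback=0.4, mix=0.5):
            """Basic feedback delay: each sample is echoed after time_s
            seconds, and every echo is fed back at a reduced level."""
            d = int(sr * time_s)                  # delay length in samples
            y = np.zeros_like(x, dtype=float)     # signal with echoes
            for n in range(len(x)):
                delayed = y[n - d] if n >= d else 0.0
                y[n] = x[n] + feedback * delayed  # echo the past output
            return (1 - mix) * x + mix * y        # blend dry and wet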

    This delay is just one example: you can find all kinds of known and unknown effects, free and paid, in every category, and often it is how you use effects that will shape your personal sound.

    Your DAW’s mixer will allow you to use them freely, even for rather refined techniques such as sidechain and parallel compression.
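
    As an illustration, here is a rough Python/NumPy sketch of parallel compression, i.e. blending the dry signal with a heavily compressed copy; the compressor is a deliberately simplified static gain computer with no attack or release smoothing, so treat it as a concept demo rather than production code.

        import numpy as np

        def simple_compressor(x, threshold_db=-30.0, ratio=8.0):
            """Toy compressor: reduce gain above the threshold by the ratio.
            Real compressors smooth the level with attack/release times."""
            level_db = 20 * np.log10(np.abs(x) + 1e-12)
            over_db = np.maximum(level_db - threshold_db, 0.0)
            gain_db = -over_db * (1.0 - 1.0 / ratio)
            return x * 10 ** (gain_db / 20.0)

        def parallel_compression(x, blend=0.5):
            """Parallel (New York) compression: mix the untouched signal
            with a heavily compressed copy instead of squashing it all."""
            wet = simple_compressor(x)
            return (1.0 - blend) * x + blend * wet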

    The world of effects is complex, and just as important as the world of instruments. Below is a probably partial list of available audio effects, organized by category.

    Dynamic Effects

    Compression
        FET Compression
        Multi-band Compression
        Optical Compression
        Parallel/Manhattan Compression
        Sidechain Compression
        Variable Mu Compression
        VCA Compression
    De-Esser
    Distortion
        Bitcrushing
        Clipping
        Distortion
        Fuzz
        Overdrive
        Sample Rate Reduction
        Tape Saturation
        Valve Saturation
    Exciter
    Expander
    Level
    Limiting
    Noise Gating
    Noise Reduction
    Transient Shaper

    Modulation Effects

    Chorus
    Flanger
    Phaser
    Ring Modulation
    Rotary Effect
    Tremolo
    Vibrato

    Sound Manipulation Processes

    Reverse
    Time Compression
    Time Expansion

    Spectral Processes

    Equalization
        Dynamic EQ
        Graphic EQ
        Parametric EQ
        Semi-Parametric EQ
        Shelving EQ
    Filters
        Band-pass Filter
        Bell Curve Filter
        Envelope Filter
        High-pass Filter
        High Shelf Filter
        Low-pass Filter
        Low Shelf Filter
        Notch Filter
        Wah
    Imaging
    Panning
    Pitch Correction
    Pitch Shifting

    Time-Based Effects

    Delay
        Analog (BBD) Delay
        Digital Delay
        Doubling Echo
        Haas Effect
        Ping Pong Delay
        Reverse Delay
        Shimmer Delay
        Slapback Delay
        Tape Delay
    Looping
    Reverb
        Acoustic Emulation Reverb
        Bloom Reverb
        Convolution Reverb
        Gated Reverb
        Plate Reverb
        Reverse Reverb
        Shimmer Reverb
        Spring Reverb

    There are many free VST effects plugins; the list is very big, so just do a search for free VST effects on the web.

    Have fun, don’t listen to the various gurus, and find your own way to create your personal, distinctive sound.


  • Listening to your stream when playing live in Second Life without overlap

    As we all know, lag is a big enemy. For audio even more so: it can take up to 30 seconds from when you press Play on your computer to when you can hear what you’re playing inside Second Life.

    That’s a big issue, because if your DAW is playing and you want to listen to what your audience is hearing, you’ll hear two staggered, overlapping streams: one from the DAW, one delayed from the Second Life client.

    But there is a rather simple solution: if you have two sound cards, you can use one for the DAW and one for the Second Life client. Most computers have an internal sound card, so there’s no need to buy a second one. Connect the headphones or speakers you use to the output of the second card, assign each application to its audio output in the Windows audio routing panel, and you’re done.

    Let’s see how it’s done.

    Verify that your DAW’s output is routed to the sound card rendering the mix (in my case, an ESI Maya22).

    • Open Settings on Windows 10.
    • Click on System > Sound.
    • Under “Other sound options,” click the “App volume and device preferences” option.
    • Under the “App” section, select the playback device for the app (in my case, the device is an AudioBox and the app is Firestorm) and adjust the volume level for the app if needed.
    • In Second Life, select the sound card you want to use for playback in Sound & Media > Output device, according to what you set in the Windows control panel (in this example, an AudioBox).

    Done…

    If you have a 2-in/1-out switcher box, there is no need to move headphone or speaker cables: just switch from A to B and vice versa when needed. Or download an app such as Audio Device Switcher or Audio Switcher to get the same result.

    This audio routing feature is very useful for live performances. You can now hear what’s happening in the Second Life client without the DAW audio stream overlapping.
    There are more sophisticated solutions, of course, but this one is free and simple.
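
    If you want to double-check which playback devices your system actually exposes (and their exact names, to match what you pick in Firestorm or the Windows panel), a small script with the python-sounddevice package can list them. This is just a convenience check, assuming you have Python installed.

        # Requires: pip install sounddevice
        import sounddevice as sd

        # List every audio device the system exposes, so you can confirm
        # that both cards (e.g. the DAW's Maya22 and Second Life's AudioBox)
        # are visible, and note their exact names.
        for index, device in enumerate(sd.query_devices()):
            kinds = []
            if device["max_input_channels"] > 0:
                kinds.append("input")
            if device["max_output_channels"] > 0:
                kinds.append("output")
            print(f"{index}: {device['name']} ({'/'.join(kinds)})")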

  • How to use my computer to make a recording studio? (4: Virtual Instruments – VST)

    In this series:

    1. Introduction
    2. The audio interface
    3. The DAW
    4. Virtual Instruments (VST)
    5. Effects
    6. Cables&cables

    In previous episodes we talked extensively about virtual instruments, or VSTs. But what exactly are they?

    Virtual Studio Technology (VST) is a digital interface standard used to connect and integrate software audio effects, synthesizers and effect plugins with recording systems and audio editors (DAWs). VST is basically a software emulation of hardware synthesizers, instruments and samplers, and often provides a custom user interface that mimics the original hardware down to knobs and switches. It gives recording engineers and musicians access to virtual versions of devices and equipment that might otherwise be too expensive or difficult to procure, or that don’t exist on the market as hardware at all.

    There are many types of VST plugins, which can be mostly classified as instruments (VSTi) or effects. VSTi does exactly what its name implies, as it emulates different musical instruments so that the recording engineer or musician does not need to procure the specific instrument or find someone who can play it. Modern VST plugins provide their own custom GUI, but older ones tended to rely on the UI of the host software.

    Steinberg released the VST interface specification and SDK in 1996. They released it at the same time as Steinberg Cubase 3.02, which included the first VST format plugins: Espacial (a reverb), Choirus (a chorus effect), Stereo Echo, and Auto-Panner.

    Steinberg updated the VST interface specification to version 2.0 in 1999. One addition was the ability for plugins to receive MIDI data. This supported the introduction of Virtual Studio Technology Instrument (VSTi) format plugins. VST Instruments can act as standalone software synthesizers, samplers, or drum machines.

    VST has a history: in 2006, the VST interface specification was updated to version 2.4. Changes included the ability to process audio with 64-bit precision. A free-software replacement was developed for LMMS, which was later used by other free-software projects.

    VST 3.0 came out in 2008, and VST 3.5 in 2011. In October 2011, Celemony Software and PreSonus released Audio Random Access (ARA), an extension for audio plug-in interfaces, such as VST, allowing greater integration between audio plug-ins and DAW software.

    VST 3.6.7 came out in March 2017. A long history that has made this technology solid and well tested.

    There are three types of VST plugins:

    • VST instruments generate audio. They are generally either virtual synthesizers or virtual samplers. Many recreate the look and sound of famous hardware synthesizers.
    • VST effects process rather than generate audio—and perform the same functions as hardware audio processors such as reverbs and phasers. Other monitoring effects provide visual feedback of the input signal without processing the audio. Most hosts allow multiple effects to be chained. Audio monitoring devices such as spectrum analyzers and meters represent audio characteristics (frequency distribution, amplitude, etc.) visually.
    • VST MIDI effects process MIDI messages (for example, transpose or arpeggiate) and route the MIDI data to other VST instruments or to hardware devices.
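
    To see what a MIDI effect boils down to, here is a minimal transpose sketch using the Python mido library (an assumption chosen for illustration, not part of any VST SDK): it reads note messages from a MIDI input and forwards them, shifted up a fifth, to an output.

        # Requires: pip install mido python-rtmidi
        import mido

        SEMITONES = 7  # transpose up a fifth (value chosen for the example)

        # Open the default MIDI input and output ports, then forward every
        # message, shifting note events up by SEMITONES.
        with mido.open_input() as inport, mido.open_output() as outport:
            for msg in inport:
                if msg.type in ("note_on", "note_off"):
                    msg = msg.copy(note=min(msg.note + SEMITONES, 127))
                outport.send(msg)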

    VST plugins often have many controls, and therefore need a method of managing presets (sets of control settings).

    Steinberg Cubase VST introduced two file formats for storing presets: an FXP file stores a single preset, while an FXB file stores a whole bank of presets. These formats have since been adopted by many other VST hosts, although Cubase itself switched to a new system of preset management with Cubase 4.0.

    Many VST plugins have their own method of loading and saving presets, which do not necessarily use the standard FXP/FXB formats.

  • How to use my computer to make a recording studio? (3: The DAW)

    In this series:

    1. Introduction
    2. The audio interface
    3. The DAW
    4. Virtual Instruments (VST)
    5. Effects
    6. Cables&cables

    Now that we have bought an audio interface, we need to think about the tape recorder, mixer, effects, and everything else required in a studio.

    In a digital studio, the reel to reel recorder is part of the DAW (Digital Audio Workstation).

    The digital audio workstation (DAW) has become the music producer’s canvas, a central software platform containing all the sounds, instruments, and tools they use for recording music (and more).

    DAWs are deep, complex programs with lots to learn. Choosing a DAW is one of the biggest early decisions a producer faces. But what is a digital audio workstation exactly, what can you do with one, and which one is the best fit for your own ideas and interests?

    Think of a DAW as a digital representation of a physical recording studio where you can produce audio for a wide variety of mediums including film, gaming, podcasting, music, UX, and more. The whole studio process is packed into one, the creative ideas in tandem with the technical. You can record tracks, build up beats, add instruments or vocal parts, then lay out the arrangement, apply effects, and mix the finished work all within one interconnected hub.

    Today, the convenience and accessibility of DAWs have made them the most popular way of making music and editing audio — used by everyone from bedroom producers and songwriters all the way up to top industry professionals.

    There’s a range of DAWs available that we will explore in more detail below, each with unique features and advantages. That said, common standards of design and compatibility can be identified across the different brands. Personally, I use PreSonus Studio One and Reaper as my DAWs, but that’s just because, in my opinion, they have the best workflow for me, the one that facilitates my personal way of working.

    All DAWs have points in common, so find the one that best suits your personal preferences. If someone tells you “this one sounds better than that one,” doubt it. A DAW doesn’t have to “sound better”; it has to let you manage the way you work most effectively without hindering your creativity.

    We’re talking about bits; they can’t sound better in one DAW than in another. If it seems like they do, there’s something wrong with either your setup or your DAW. There are lots of DAWs, both free and paid. Find yours and learn to use it well, even reading the fucking manual. The DAW is the most important tool in your creative arsenal.

    What can you do with a DAW?

    1. Record, play and edit audio tracks

    Digital audio workstations come with built-in tools that allow you to record, save, edit, and play back audio. To record audio from external instruments or microphones, you need an audio interface. An audio interface takes audio signals and converts them into data that a computer can process, allowing you to record and edit audio in software-based environments like a DAW.

    An audio track in your DAW (in this case in Studio One, but it’s the same in any DAW)

    Once you record, all audio is saved and displayed on the timeline, where you can cut, copy, and paste audio waveforms. From the sequencer window, you can easily mute waveforms, stitch them together, and crossfade them into one another, much like you would with audio recorded on physical tape.

    2. Record, play and edit MIDI virtual instrument tracks

    DAWs also allow you to play virtual instruments (VST) for composing music. Virtual instruments and effects are software programs designed to replicate the sounds of physical instruments like synthesizers, pianos, drums, guitars, violins, trumpets, and more, or create completely new digital instruments for which there is no analog reference.

    Most DAWs come with stock libraries of sounds, but they also allow for third party plug-ins—external software that can be “plugged in” to a DAW to enhance its functionality. In general, each VST has an installer that copies the needed files in the right place, a location predefined by Steinberg, the inventor of VST technology. However, many free plugins don’t have an installer, so you’ll need to place the necessary files in the correct folder by yourself.

    Also check that the VST plugin matches your DAW: a 64-bit DAW wants 64-bit VSTs, or it won’t work.

    As you can easily imagine, a track that records the events you play or the settings of an audio effect is not the same as an audio track; here another very important technology comes into play: MIDI.

    Yes, your DAW can record two types of events: audio, such as from a microphone or a guitar plugged into your sound card’s input, or digital data in a format called MIDI.
    MIDI tracks don’t contain audio but a list of events, such as the fact that you pressed the A key for 1 second and with what intensity, plus a series of other performance-related events. An example in the following table:

    Before MIDI Learn, there were MIDI implementation charts only. This table shows how the Korg Volca Drum responds to various MIDI messages. The Control Change section in the middle lists the MIDI CC numbers and the parameters they correspond to. If you wanted to control the pitch of oscillator 1 from an external MIDI controller, for example, you’d set one of its encoders to output MIDI CC 26.
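
    As a hedged illustration, this is how you could send that exact message from the Python mido library instead of a hardware encoder, sweeping the Volca Drum’s oscillator 1 pitch via MIDI CC 26 (the port name below is a placeholder you’d replace with your own interface):

        # Requires: pip install mido python-rtmidi
        import time
        import mido

        out = mido.open_output("Your MIDI Interface")  # placeholder name
        # Sweep oscillator 1 pitch on the Volca Drum: CC number 26,
        # values 0-127, on MIDI channel 1 (mido counts channels from 0).
        for value in range(0, 128, 8):
            out.send(mido.Message("control_change", channel=0,
                                  control=26, value=value))
            time.sleep(0.1)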

    Don’t be scared, there’s no need to handwrite these codes. Your MIDI keyboard (you have one, don’t you?) will send the DAW everything needed to write your performance, just like when you use your PC keyboard to write text.

    The enormous advantage is that, just like in a word processor, it will then be very easy to correct any errors, modify the performance, align the MIDI messages to a grid, change the intensity of a note; in short, everything you could possibly want to modify in your MIDI track. The track will faithfully send all the necessary MIDI messages to the virtual synth VST, which will play them using the sound you have chosen (any sound included in the VST’s library) on the dedicated MIDI channel. All these events are recorded in the piano roll, where you can edit each note in a very simple manner.

    MIDI is so popular and flexible that you can find lots of songs in MIDI format on many sites; download a song, import the file into your DAW, and manipulate the sounds to make the best cover ever heard.

    MIDI events in the piano roll view, the area dedicated to MIDI in any DAW. Any event can be edited by editing the notes on the piano roll. Very easy!

    It sounds complicated, but it’s not: just imagine a MIDI track as if it were text in your favorite word processor, where you can edit each note as easily as you would edit the text.

    If these concepts are difficult for you to understand, I recommend watching this video. It’s based on Studio One, a commercial DAW, but the concepts apply to any DAW.

    Your DAW is not “only” a recorder: it also has a mixer to mix your sounds as you wish, and VST plugins are not only instruments but also sound effects of all kinds that you can apply to your tracks.

    Mixer console in Studio One. Each slider represents a track.

    All this can be managed to get your final master in the preferred audio format and resolution.

    There would still be a million things to say, and we’ll cover them in dedicated posts. For now it’s enough that you understand how important it is to choose the right DAW for you; don’t follow the advice of gurus, but form your own idea by trying one piece of software and then another. The main functions of DAWs are almost equivalent; finding the one that best fits your creative workflow is the only trick.

  • In-the-box or Out-the-box?

    This blog declares in its tagline “About music made with virtual instruments in-the-box”; however, for many it may not be clear what that means, and that is legitimate.

    What does “in the box” mean in music?

    “In the box” refers to the computer, and the accompanying audio software within. Historically, all songs were mixed “outside the box” because there were no “boxes”, meaning computers for music production didn’t exist.

    Obviously there are endless discussions among fans in which everyone highlights the strengths and weaknesses of digital vs. analog instruments and effects, often in a rather vague way; there are even those who talk about the sound of tantalum, referring to ancient hardware that used this material in precious transistors, supposedly giving a special sound to the mix (only on full-moon nights, maybe).

    In reality, there is bad, good and excellent equipment in both the digital and analog fields.

    Considering that nowadays everything is digitally recorded, it is a little difficult (at least for me) to understand why I should spend a fortune to buy an old hardware compressor when, with the same money, I can buy not only an accurate digital emulation of the effect itself but probably an entire workstation. In any case, your ADC and DAC convert the “analog” sound into the digital world anyway, so why?

    I understand audio fetishism, I suffer from it myself, but it’s certainly not a question of sound quality. I would sell my mother (no mama, it’s not true) for an EMS VCS3, but I can use a XILS 4 and actually do the same or more…

    Some time ago I too owned a whole rack of synths and effects and miles of cables connecting their outputs to a mixer; then one day I made the decision to sell everything and switch to an in-the-box solution, partly out of curiosity.

    I have to say I wouldn’t go back. With a little attention you can find digital synths of equal or better quality than analog ones, and the same goes for effects. Of course, given the number of VST synths on the market, it is initially difficult to orient yourself, but with a little experience and testing you can find some real gems.

    I find it a bit senseless to redo equipment born as hardware in a software version; they are two different worlds with different characteristics, and it is an old controversy that cannot be resolved.

    For example, I’m in love with a synth that exists only in digital form, Animoog (free on the Apple App Store), produced by none other than Moog Music, which certainly needs no introduction in the field of synths. In my opinion it is one of the best synths currently available, and it exists only in in-the-box form.

    Find your sound the way you like it, bearing in mind that in-the-box costs much less, so you are much more likely to find it, quite apart from the flexibility and the complete absence of cables, noise and dust in your studio.

    A digital synth. Look at the smile of Suzanne Ciani playing this little guy integrated with her Buchla modular system around 00:56 🙂
    Trent Reznor recounts his relationship with an iconic analog synthesizer and describes how it has fit into his creative process over his storied career.
    How the Fairlight CMI changed the history of music.
  • Generative music machine and Langton’s ants

    I am very passionate about instruments that generate music. Given some rules, they start generating notes and sequences, and hearing the variations hypnotises me.

    A study started by Brian Eno in 1995 with a piece of software called Koan, which has now evolved into Wotja.

    Generative music is the term used to describe any music that is ever-different and changing, created by a system. The term has since been used to refer to a wide range of music, from entirely random mixes created by multiple simultaneous CD playback through to live rule-based computer composition.

    For me it’s an old passion, which started on the Atari with a piece of software called M, which I still use in an Atari ST emulator under Windows.

    For this reason I greatly appreciated the work done by Marcin Gruszczyński of Fairly Confused with Cracklefield.

    A contraption that I’m not sure how to classify: a grid-based sequencer, with objects traveling in different directions, bouncing off walls and colliding with each other, creating dynamically evolving patterns. So, Cracklefield is a generative music machine, an experimental sequencer based on a cellular grid.

    The field can be read or modified by cursors pointing at the sequence track’s playing position. The cursors can travel the field in any direction, horizontally, vertically or diagonally, each at its own rate. They can bounce off field edges or obstacles (walls) and, the fun part, bounce off each other.

    Cursors can interact with the field: paint, erase or flip cells, build or destroy walls, shift whole field rows or columns. It’s a playground for building evolving patterns; set the initial conditions and see what they sound like. Crazy.

    Cursors are driven by a Langton’s ant scheme, a cellular automaton, which changes direction depending on what kind of cell it steps on.
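
    For the curious, the rule itself is tiny. Here is a minimal Python sketch of a Langton’s-ant-style cursor on a wrapping grid, with the cursor position naively mapped to a pentatonic note (the note mapping is my own assumption for the demo, not how Cracklefield does it):

        import numpy as np

        GRID = 16
        grid = np.zeros((GRID, GRID), dtype=bool)    # False = "white" cell
        pos = np.array([GRID // 2, GRID // 2])
        heading = np.array([0, 1])                   # start moving east
        SCALE = [60, 62, 64, 67, 69]                 # C major pentatonic (MIDI)

        def turn(h, right):
            r, c = h
            return np.array([c, -r]) if right else np.array([-c, r])

        for step in range(32):
            cell = grid[pos[0], pos[1]]
            heading = turn(heading, right=not cell)  # white: right, black: left
            grid[pos[0], pos[1]] = not cell          # flip the cell it steps on
            pos = (pos + heading) % GRID             # wrap at the field edges
            print(f"step {step:2d}: note {SCALE[pos[0] % len(SCALE)]}")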

    In short, a Kontakt library that is really difficult to describe.

    It’s not a freebie; it costs around 70 euros in many online shops, but Marcin released a “Stay at home with Cracklefield” version during Covid times. It’s a stripped-down version, but enough to have some fun and understand how it works.

    Attention: Cracklefield is not intended as a generator of sounds but of notes. Its sound library serves only to give you an idea; each channel (slider) can be redirected to a track in your DAW with a suitable VST instrument assigned. Or download the Kontakt multi-channel sequencer script. Reading the fucking manual is mandatory. It works only with the full version of Kontakt.

  • DJing in Second Life with Traktor dj 2 free

    I know, there are nobler relatives such as Mixxx or VirtualDJ, but I have a soft spot for this little freebie (free software for Windows, macOS, and iOS).

    It is very light and doesn’t need the large resources required by other DJ software, but it has everything you need to start DJing like a pro, including Loops, Hotcues, and Freeze Mode.

    Traktor DJ 2 does not have a direct interface to Shoutcast, but it works well with Butt. Put some tracks in Traktor and start broadcasting with Butt; that is all you need to DJ in Second Life without having to worry about whether or not the computer will make it (if you don’t know how to use Butt for streaming, you missed our guide “How play music in Second Life: the definitive guide”. Take a look).

    A pretty nice job by Native Instruments. If you also own a Traktor Kontrol S2, fun is guaranteed.

  • How to use my computer to make a recording studio? (2: The audio interface)

    In this series:

    1. Introduction
    2. The audio interface
    3. The DAW
    4. Virtual Instruments (VST)
    5. Effects
    6. Cables&cables

    Now that we know the historical and technical details that led to the possibility of setting up a recording studio on a computer (if you haven’t yet, I highly recommend reading Part 1), we can focus on some aspects of digital recording. At the end of Part 1, we switched from recording on magnetic reels (tapes or cassettes) to recording on hard or floppy disks in the digital domain, which means there is no longer a continuous flow of audio events recorded on magnetic tape, but a flow of binary data, as with any information recorded on a computer.

    The role of the audio interface

    The audio interface is responsible for translating sound waves into a digital representation of the same sonic event.

    One of the main functions of an audio interface is to convert sound from physical instruments or microphones (analog) to a form that computers can process (digital). This is called analog-to-digital conversion, or ADC. It is a necessary part of the digital audio workflow.

    While real-world sounds are continuous and analog (think of the sound produced by a physical instrument or a voice), computers operate in a discrete, binary world. They process information as a series of digits represented in binary form (i.e. as 0s and 1s).

    Hence, in a digital audio workflow where computers are the main processing engine, the analog signals from instruments or voices (through microphones) need to be converted into digital form. This is the work of the ADC.
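
    Conceptually, the ADC samples the continuous signal at regular instants and rounds each sample to the nearest storable value. Here is a minimal Python/NumPy sketch of that idea (real converters do this in hardware, of course):

        import numpy as np

        SAMPLE_RATE = 44100   # samples per second (the CD standard)
        BIT_DEPTH = 16        # bits per stored sample

        t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of instants
        analog = np.sin(2 * np.pi * 440 * t)      # a "continuous" 440 Hz tone
        # Quantization: map the -1..1 range onto 16-bit signed integers,
        # the discrete values a computer actually stores.
        digital = np.round(analog * (2**(BIT_DEPTH - 1) - 1)).astype(np.int16)
        print(digital[:8])    # the first few samples in their stored form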

    Once audio signals have been converted to a digital form using ADC, they can be processed using specialised audio software called Digital Audio Workstations (DAW). 

    DAWs are extremely powerful pieces of software that allow a range of mixing, processing, and production possibilities for digital sound, plus all sorts of effects and magic transformations.

    But how do we hear what we are doing if the audio events are just a series of 0s and 1s?

    Here comes digital-to-analog conversion, or DAC, which is the reverse of ADC: it converts the binary data back into analog signals so we can hear what we are doing. The audio signal is sent to our headphones through a small integrated amplifier, or to a bigger monitoring system through an external amplifier and loudspeakers.

    Almost any computer integrates a sound card, but while these sound cards are adequate for everyday use, they do not produce a quality of output that’s sufficient for many audio processing requirements. The “sound card” may simply be a “sound chip” built into the computer’s motherboard. 

    This is why audio interfaces play a fundamental role in a recording studio.

    Audio interfaces are specialised hardware and software devices that are dedicated to ADC, DAC, and other functions.

    Audio interface drivers are a very important part to consider: the fluidity of your work in the DAW will depend on their stability and solidity, and each different interface has its own drivers.

    In general, each interface will allow you to select inputs and outputs and the relative volume, but some also offer useful tools such as the audio router shown in the following image: drawing virtual cables on the interface allows the connection or disconnection of any input and output as desired.

    You can find more information about different audio drivers in Windows here.

    Being specialised, audio interfaces generally produce much better output than computer sound cards and are included in any digital audio workflow where sound quality is important.

    Other than being specialised for ADC and DAC processing, audio interfaces offer a range of other facilities, such as:

    • They can have multiple inputs through various connection types (e.g. XLR, TRS, and RCA connections) for recording voices or analog instruments (for example, a bass guitar)
    • For this work, they often include high quality pre-amps for microphone or low-level signal boosting
    • They can have a choice of outputs for better sound monitoring during the production process
    • They can help to reduce latency in the audio production workflow
    • They generally produce an overall higher quality of sound relative to computer sound cards

    Despite this, audio interfaces (or ADCs in general) may not be necessary if a production process does not include any physical instruments or voices (e.g. when using only virtual instruments, also called VSTs): in such cases, there’s no need to convert sound from analog to digital form, and obviously no need for an ADC.

    I think it’s now easy to imagine how a quality sound card will directly influence the quality of your productions; you can have the best computer in the world, ultra-expensive microphones and instruments, but if the sound card does not have top-quality converters, the result of your work will necessarily be poor.

    But how are the input signals converted by the ADC, and how are the output signals rebuilt by the DAC? They need a reference to work in sync and return the audio stream our ears need… Here come bitrate, sample rate and bit depth.

    Digital audio is digital information. That information can be dense or sparse, high-quality or low. Bitrate is the term used to describe the amount of data transferred per second of audio. A higher bitrate generally means better audio quality. You could have the greatest-sounding recording of all time, but if you played it back at a low bitrate, it would sound worse on the other end.

    Understanding bitrate is essential to recording, producing, and distributing audio. To truly comprehend bitrate, you also need to learn what makes up an audio file and what different types of audio files exist.

    Just like images vary in quality and clarity, audio files differ in how large they are, how much information they contain, and what role they fill. While there are some exceptions, uncompressed files will contain the most information and therefore have the highest bitrate. Compressed lossy files generally have the least amount of information and therefore a lower bitrate.

    The sample rate is the number of times in a second an audio sample is taken: the number of instances per second that recording equipment is transforming sound into data. Most digital audio has a sampling rate of 44.1kHz, which is also the sampling rate for audio CDs. This means that the audio is sampled 44,100 times per second during recording. When the audio is played, the hardware then reconstructs the sound 44,100 times per second. 

    Those individual samples vary in the amount of information they have. Bit depth is the number of bits in each sample, or how information-rich each of those 44,100 pieces of audio is. 

    A high sample rate and a higher bit depth both increase the amount of information in an audio file, and likewise increase the file size. Just like some photos have a high resolution, audio files with a high sample rate and high bit depth have more detail. Having more detail generally requires a higher bitrate.
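
    Putting those numbers together gives the uncompressed bitrate; a quick back-of-the-envelope calculation in Python:

        # Uncompressed bitrate = sample rate x bit depth x number of channels.
        sample_rate = 44_100   # Hz, the CD standard
        bit_depth = 16         # bits per sample
        channels = 2           # stereo

        bitrate = sample_rate * bit_depth * channels  # 1,411,200 bit/s
        mb_per_minute = bitrate / 8 / 1_000_000 * 60  # ~10.6 MB per minute
        print(f"{bitrate} bit/s, about {mb_per_minute:.1f} MB per stereo minute")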

    Why these boring details? Yes, they are a bit boring, but they will be less boring when you buy an audio interface. Interfaces are not all the same, and there are an infinite number of models and brands.

    Virtually all of them offer so-called CD quality (a sampling rate of 44.1 kHz at 16-bit depth), which is the standard; others offer higher resolutions (personally, for my recordings I use a resolution of 48 kHz at 24-bit depth).

    It is a good idea to find the best setting your workstation can manage effectively. It is completely useless to choose the highest resolution your audio interface makes available if the computer can’t handle the resulting files because they are too big for the available CPU and RAM. Each track in your DAW requires hardware resources, and they run out quickly. Don’t kill your computer; it’s your friend.

    In the next episode we will talk about another fundamental element of our studio, the DAW. Because without software what do we do with hardware?

  • Streaming live performances in Second Life from a DAW

    For a musician, sending a stream from your own PC to Second Life is difficult enough.

    If it’s a live performance, the unexpected is around the corner, and you need to launch several programs at the same time, keeping your fingers crossed that everything works.

    This is why I find this free VST plugin, ShoutVST, interesting. It allows you to send your audio stream from within your DAW without launching other programs. Simple is always better.

    It works with any program that supports VST technology and is pretty simple to set up.
    This way it will be easy to prepare backing tracks in your DAW and play along with them using a dedicated track. You’re not an eight-armed octopus, come on.

    Download: https://www.kvraudio.com/product/shoutvst-by-r-tur

    The VST at work inside Studio One, which is the DAW I use, but it works the same with Traktor or Ableton, for example, without the use of loopback methods.