Our Daddio is, in real life, Gene Maruszewski, a university-educated electronic musician operating out of Northern California.
He’s been dabbling in electronic music since 1973 and continues to this day. His compositions cover the broad spectrum from techno to ambient and some genres that have yet to find a name.
His most common tools are his two massive modular synthesizers, trendy groove boxes, pedals, and effects.
Daddio’s Modular Synth.
“I got bit by the electronic music bug tripping to Pink Floyd music back in 1969 and resolved to one day learn how to make that kind of spacey music. A few years later the university I attended started an electronic music class.
So I enrolled, and I got my hands on a Moog Series 3 with a sequencer complement, an EMS Synthi, a pair of Revox stereo tape recorders and a 4-track TEAC.
Daddio’s small studio.
I started to listen to people like Pierre Schaeffer, Pierre Henry, Walter Carlos, Morton Subotnick and the like. I continued my music education after my move to California at the College of Marin, enrolling in its fine music department.
At first, music was more of a hobby while I pursued my career in bicycles, and I resolved to return to my electronic music endeavors upon retirement, which fortunately came early.
Daddio’s Bicycle Synth.
By the turn of the century I was heavily involved in making electronic music, having purchased my first modular synth, a Doepfer A-100 system, and I have been at it ever since.
Real Daddio’s Modular.
I work largely with a modular, but also with some pedals and outboard devices: effects, EQs, etc. I also have a couple of the trendy boxes, like an MPC X, an Octatrack, and a KeyStep Pro.” ~gm
Daddio Dow has been making electronic music since 1973. He calls himself a music mechanic, assembling various sound bits into eclectic electronica. He draws from influences from before Bach and after Eno. Sometimes ambient, sometimes noizy, you’ve not heard this before.
We are pleased to host him for the last event of 2024 on the roof of The Hexagons.
A song for us avatars: Livio Korobase & Renee Rebane in Digital Holiday.
[Verse]
In the land of codes and streams,
Where we live our digital dreams,
The snow falls bright in shades of blue,
A pixel-perfect holiday view.
Avatars in festive clothes,
Building trees where no one knows,
Silent nights in a neon glow,
It’s Christmas in the pixel snow.

[Chorus]
Oh, it’s Christmas in the pixel snow,
Where the virtual winds of winter blow.
Lights that sparkle, hearts that gleam,
A holiday in a coded dream.
Oh, it’s Christmas in the pixel snow,
Together, no matter where we go.
Across the wires, through the screen,
We celebrate as one big team.
Udio, Studio One, OBS. Video recorded in Second Life. Livio Korobase & Renee Rebane in Digital Holiday
One of the most controversial things about generative AI in the artistic field is undoubtedly the fact that the gigantic databases on which the generation is based are built from data available on the web, without asking the authors for any authorization. Some sites have specialized in audio generation, but they do not care about the origin of the generated content and instead focus on building web interfaces designed to make it easy to produce “songs” that sound “believable”.
This also applies to graphics, and wherever AI works generatively, so much so that prompts can be padded with the wording “in the style of [famous name here]”, sometimes with somewhat “artistic” results. But who is the artist in this case? The person who wrote the prompt, or the one who actually created the material the AI based the piece on?
In my opinion there is no real creative act in this, it is more a question of luck than anything else.
The Singing Poet Society project instead adds an element that changes the game. Tony has trained the AI (a process called machine learning) on his own material, which in my opinion is the heart of the matter. The AI is used here simply as a tool for constructing a song; ultimately it is not that different from using sequencers or other generative tools in a DAW.
However, knowing that the one singing is the AI with Tony’s voice is a bit shocking, but that’s what actually happens.
I haven’t formed a personal opinion yet, but removing the use of other people’s material from the picture certainly cleans up the perspective.
Anyway, here is the recording of the evening, so everyone can develop their own conviction.
Tony Gerber aka Cypress Rosewood’s Singing Poet Society @ Hexagon 241207 (AI music project). Video by D-oo-b.
Saturday Dec 7 1PM SLT in Second Life, Roof of The Hexagons. Presentation of project, performance and Q&A session with Tony Gerber
There is always a lot of discussion about artificial intelligence and its intelligent use; it seems that the same adjective is used in an inconsistent way.
I really like this Singing Poet Society project, because it is undoubtedly an example of how AI can be used creatively and in an original way.
In an innovative blend of art and technology, Tony Gerber, a visionary artist and musician, has embraced artificial intelligence (AI) with enthusiasm and creative inspiration. His creations blend his own original music with AI collaboration.
The Singing Poet Society YouTube channel serves as both a platform for artistic collaboration with AI and an educational tool aimed at demystifying AI’s role in creative endeavors.
The channel proudly hosts an impressive collection of 110 videos, each transforming public domain poems from celebrated poets such as Robert Frost, Emily Dickinson, Edgar Allan Poe, and other literary luminaries into captivating song videos.
Gerber has harnessed AI-driven graphics tools like Midjourney and integrated emerging AI music applications, including the beta version of Udio, alongside traditional video editing techniques to craft these engaging and thought-provoking pieces.
“Singing Poet Society” is not merely an entertainment outlet but a source of inspiration and education. Beyond its YouTube presence, Gerber envisions the channel as a conduit for introducing AI into educational settings, particularly within schools and English classes. His goal is to illuminate AI’s potential as a tool for enhancing learning, by enabling students to explore and interpret the rich insights, life reflections, and human experiences encapsulated in classic poetry.
Through this initiative, Gerber encourages students to engage creatively with poetry, fostering their own compositions and song videos, and offering fresh perspectives on time-honored public domain works.
This fusion of AI technology and poetic artistry promises to open new avenues for learning and creation, making the “Singing Poet Society” a pioneering venture in the realm of digital education and artistic expression.
An example of Singing Poet Society channel content. Don’t miss the transcriptions of poetry.
Tony Gerber has been a part of the Nashville art, music and technology communities for 43 years. He has worked with technology as an artistic tool since the 70s and continues with projects like Singing Poet Society to inspire younger generations and re-inspire older ones.
In Second Life there are many people who make machinima of all types and genres. But two people in particular film live concerts, with very different approaches.
Glasz films the concert and works a lot in post-production, making it a personal work. D-oo-b, a musician himself, instead makes real documentaries full of ideas and original shots.
So now at The Hexagons there are two corners reserved for their creations, where you can see some of their work on a dedicated screen.
The screens are really simple to use: just turn them on.
Next to each screen there is a small cube; click it to receive the relevant documentation. Both have a large body of work, so refer to the notecards for access to their archives on the web.
I think it’s an important addition to The Hexagons, and watching some machinima about musicians in Second Life is a pleasure. Enjoy.
For those who missed the unforgettable Halloween 2024 concert by nnoiz Papp at The Hexagons, the video recording is here! This exclusive video captures every moment of nnoiz’s extraordinary live performance, weaving modular synthesizers with live instruments—including a hauntingly beautiful oboe performance. Known for his inventive approach to sound design and deep musical expertise, nnoiz Papp left the audience spellbound.
From his renowned work with Sendung mit der Maus to his celebrated collaborations with artists like Klaus Schulze, nnoiz Papp’s career is a testament to innovation in sound. This video release is a must-watch for fans of experimental music and anyone fascinated by live electronic and instrumental fusion. Don’t miss the chance to experience his Halloween concert’s atmosphere, intensity, and raw creativity.
nnoiz Papp
// score-music composer / music for animated movies / sound designer / keyboardist / oboist / guitarist
1980-1986 Musikhochschule Köln (studying music for teaching in schools, oboe and (jazz-)piano)
since 1985 over 300 music productions for German children’s TV (Sendung mit der Maus)
since 2007 thousands of trailers, songs and sound-design projects for TV (Sendung mit dem Elefanten)
since 1982 working as a studio musician on oboe (for example with Klaus Schulze) and keyboards (since 1984 with computers) in many different styles (from pop to heavy metal with U.D.O. and AXXIS)
18 CD productions (4 under the pseudonym “SVENSSON” – electronic & oboe, 4 archive-music CDs for selected sound, Koch-music-library and sonoton)
live music with TRIOGLYZERIN, a trio that plays live to old silent movies in cinemas (http://www.trioglyzerin.com)
since 1996 different internet activities (quicktime vr – flash – 3d)
2006 live video installation at “Wuppertaler Bühnen”, visualizing Nyman’s opera “The Man Who Mistook His Wife for a Hat” with VJ software
2012 live video installation at “Wuppertaler Bühnen”, visualizing Ali Askin’s opera ISTANBUL
In SL since the end of May 2007, trying to put things together…
2017 building up a modular synthesizer system
2022 starting to test different AI tools for sound, text and graphics
Taking advantage of the new location and the renewed attention, we would like to organize a small festival and refresh the walls a bit. All composers of SL’s own music are invited to keep sending a photo and a note about themselves to Livio Korobase and/or Renee Rebane, so they can be added to the directory.
Many are already on the walls, but there is still room.
Some time ago I bought a piece of VJ software. VJing (pronounced VEE-JAY-ing) is a broad designation for real-time visual performance: the creation or manipulation of imagery in real time, through technological mediation and for an audience, in synchronization with music.
NestDrop is an ingenious piece of software built on the Milkdrop visualization system (whose presets are small scripts), originally developed by Ryan Geiss in 2001.
Milkdrop at work. On the left, the script that builds the image on the right.
I like Milkdrop’s beat-detection system; it works well, and we all know how music and lights in sync can produce pleasant moments.
So, after playing around on my own for a bit, I said to myself: why not try to do a live music and light show in Second Life?
It’s not as simple as it seems, especially when it comes to video in Second Life. The support is rudimentary: all you can do is apply a URL to the face of an object and start playing it (this is called MOAP, media on a prim). However, this does not guarantee at all that everyone will share the event, because in the case of a film, for example, each user starts playback from the beginning, so anyone who arrives late will never be aligned with the other participants, who may not even be aligned with each other. We would like to have a party, not each watch a movie on a separate sofa. Give us our keyframe.
There are systems in Second Life that try to overcome this problem with scripting and other tricks, but they are unreliable and complex. How can it be done in a transparent, simple and economical way?
One day I visited a sim, Museum Island, where a user nicknamed Cimafilo was streaming a movie, Alice in Wonderland, and I noticed that the data stream was small but fluid and in sync (even though the sim was using parcel media and it was not possible to tell who or what was managing the sync), so I tried to find out more. Cima was using a video format that was new to me, WebM, together with OBS, with good results in my opinion. So I tried to create a similar system, but suited to my needs: an analogous workflow, but using a dedicated server and a higher frame rate.
I can say I succeeded: the system I devised can stream synchronized audio and video events into Second Life using simple open-source tools. Let’s see how.
As mentioned, my goal was to use a VJ program to create an audiovisual event in Second Life.
All you need is any audio player (personally I use Winamp because it’s very light, but any other player is fine) and a piece of VJ software (in my case NestDrop, but any other is fine). The result of our audio and video mix must be capturable by OBS (Open Broadcaster Software), so any program that generates video output on your PC monitor will work.
Your desktop at the end of the first part of the setup: from left, Winamp produces the sound, NestDrop makes the visuals and OBS captures it all.
I was asking myself: how can I send this output from OBS to Second Life with acceptable quality and in sync for everyone? Showing the video is easy, almost: there are many methods, from a web page to a streaming service, but they all lack the detail that is so important to me: sync.
However, I noticed one thing: almost all of these services use the MP4 container or its variants to distribute the content. And by studying, I realized that the MP4 container does not have a sync mechanism suitable for my purpose: I need the sync to be sent at pre-established intervals during the projection. The codec world is a real jungle, governed by trade wars.
At this point I entered a hell of forum threads and web pages, approximate and/or plain wrong answers, you name it. I’m no expert on these things either, and this was new territory for me. Double difficulty, then.
I convinced myself along the way that the secret was in the format, and indeed it was.
Wrestling with the FFmpeg specs, I discovered the format that was right for me: compact, handles good audio (Opus) and video (VP8 or VP9), open source, and every browser can play it. WebM was the trick.
Above all, one line of the WebM container specification struck me: “Key frames SHOULD be placed at the beginning of clusters.”
Exactly what I was looking for. Bingo. In a few words, when the Play button is pressed in the SL viewer, each visitor gets a key frame and syncs to the exact point the stream has actually reached. Yes, I want to use this!
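As a side note, if you want to verify where the keyframes of a WebM file actually fall, ffprobe (part of FFmpeg) can list them; test.webm below is just a placeholder name for any file you have at hand:

ffprobe -v error -select_streams v:0 -show_entries frame=key_frame,pts_time -of csv test.webm
# each output line reports whether the frame is a keyframe (1) and its timestamp in seconds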
Okay, but where do I send this WebM stream, if I manage to convince OBS to broadcast it?
Reading here and searching there, I found a server that accepts WebM: Icecast2.
Now comes the slightly more complex part, because you need Icecast video hosting, which works like the Shoutcast hosting that, if you have ever played music in Second Life, you surely already know. Or you can set up your own server. I obviously chose the second path, both for its affordability and to fully understand how it works.
On the web it is easy to find offers for virtual servers, even at very low prices. I got a VPS with 1 processor, 1 GB of RAM and a 100 GB hard disk. For an experiment it’s fine, and in any case you can always expand later. Here we go.
The installation of a Linux distro is always quite automated: just choose the desired distro from a drop-down menu and in a few minutes the basic server will be operational. So far so good; I chose Ubuntu 20.04.
Preparing my VPS
In a few minutes the server is ready. I chose not to install anything apart from Icecast, but obviously the space can be used for all that you need.
Installing Icecast2 is very simple: all you need is a little familiarity with terminal commands (for me, via PuTTY) and your server is ready in 10 minutes. You can find dozens of tutorials, all copied and pasted from each other.
The only details I recommend you take care of are to open port 8000 in the server firewall, or no one will be able to connect, and, on the audio side, to set your sound card on Windows to 48000 Hz. You don’t need anything else.
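On Ubuntu this boils down to a couple of commands; a minimal sketch, assuming the stock icecast2 package and the ufw firewall (your hosting provider may manage the firewall from its own panel instead):

sudo apt update && sudo apt install icecast2   # the installer asks for the source/admin passwords
sudo ufw allow 8000/tcp                        # open port 8000 in the firewall
sudo systemctl status icecast2                 # confirm the service is running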
Now test whether Icecast is responding by opening your server address in a browser with :8000 appended. In my case, for example: http://vps-94bab050.vps.ovh.net:8000/
The server will most likely be listening.
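The same check can be done from a terminal; assuming curl is available, Icecast’s JSON status endpoint answers even when nothing is being streamed yet:

curl -s http://vps-94bab050.vps.ovh.net:8000/status-json.xsl   # any JSON reply means the server is listening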
Pelican Village TV waiting for a mountpoint
When configuring Icecast (you will be asked to specify passwords for the admin user and other details), you may have doubts about mountpoints, and the documentation isn’t very clear either: should I create them now? Or later? In reality you don’t have to do anything: OBS (or whatever software you use to broadcast) will create the mountpoint with the name you choose in the connection string (we will see how shortly).
I connected OBS to the server, and the connection created the mountpoint /video, as specified in the connection string.
Opening the URL http://vps-94bab050.vps.ovh.net:8000/video (following the example) in a browser window, you can already see what you are going to stream into Second Life (if someone is streaming; otherwise you get an “Error 404 Page Not Found”). The mountpoint is dynamic, which means it is alive as long as the stream is up. When you disconnect your broadcaster, the mountpoint disappears.
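A quick way to watch the mountpoint appear and disappear from a terminal (again assuming curl) is to ask only for the HTTP status code:

curl -s -o /dev/null -w "%{http_code}\n" http://vps-94bab050.vps.ovh.net:8000/video
# 200 while a broadcast is up, 404 as soon as the broadcaster disconnects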
Connecting OBS to the Icecast server is not entirely straightforward, but it can be worked out.
In File > Settings > Output > Recording, set the Output Mode to Advanced.
Type: Custom Output (FFmpeg)
FFmpeg Output Type: Output to URL
File path or URL: icecast://<username>:<password>@<server address>:8000/<mountpoint name> (the user name and password are the ones you set when installing Icecast2, remember?)
Container Format and Container Format Description: webm
Muxer Settings and Encoder Settings deserve a chapter of their own: they have to be decided by carefully reading the documentation at https://trac.ffmpeg.org/wiki/EncodingForStreamingSites and https://developers.google.com/media/vp9/live-encoding, and the right configuration depends on many factors: your material, hardware, connection and server. For me personally, the string that worked for the Muxer Settings is:
(For the meaning of each parameter and some sample configurations, refer to the specs.)
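Purely as an illustrative sketch, and not the exact values from my setup (field names and behavior may also vary slightly between OBS versions), the remaining fields could be filled in roughly like this, following the two guides above:

Video Encoder: libvpx-vp9 (or libvpx for VP8)
Video Encoder Settings: deadline=realtime cpu-used=6 lag-in-frames=0
Keyframe interval (frames): 60 (about one keyframe every two seconds at 30 fps, so latecomers can lock on quickly)
Audio Encoder: libopus
Muxer Settings: cluster_time_limit=2000 (cap each cluster at roughly two seconds)

Treat these numbers as a starting point only; bitrate, resolution and CPU load will push you toward different values.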
To start broadcasting, push the Start Recording button in OBS (we are “recording” to a URL).
Now all you need is to prepare the “screen”. In Second Life, associate the URL of your mountpoint with the Media texture of the object you have chosen as the screen, and press Play in the viewer. Nothing else is needed. Let the rave begin 🙂