
Virtual Pink Floyd Concert

A loving experiment with the Source engine

Ever seen Pink Floyd?

Yeah, me neither! So when I decided to get my hands a bit dirtier with the Source engine and learn more about how it handles audio, I figured I'd try to create a Virtual Pink Floyd Concert! How cool, right? And while in the end the virtual concert experience wasn't quite as thrilling as I'd hoped, it was – and is – still pretty fun, and it was a fantastic way to learn about implementing audio in Source and how sound is handled within games.

Okay, one more short section and then on to the “Concert Video”! After the video I will post some more general information and then all sorts of gory details on how this project came together and some of my takeaways from this experience.

Basically this "concert" (er, one-song show) was built in Valve's Source engine using the Portal 2 environment. I created a concert hall in the Hammer editor and recorded a cover of the song Comfortably Numb in Digital Performer. For my virtual band I used "Personality Cores" (characters from Portal) to stand in for each performer. I only allowed myself a realistic number of performers who might actually tour as part of the band, and every audio "part" I created had to be one that an onstage performer could plausibly "play." So no going wild with 30 layered guitar parts and all sorts of crazy mixing techniques. Remember, when watching the video you're not listening to one song – you're listening to 10 different audio files being triggered simultaneously and emanating from 10 different objects within the game environment. That's really what makes this a true virtual performance.

So on the stage, my Virtual Pink Floyd Band consists of:

  • A Roger Waters Personality Core and his Companion Cube Bass Amp – Roger “sings” lead vocals and the bassline emanates from his Companion Cube amp.
  • A David Gilmour Personality Core and his Companion Cube Guitar Amp – David “sings” each chorus and his guitar emanates from his Companion Cube amp.
  • A Richard Wright Personality Core and Two Companion Cube Synthesizers – no vocals for Richard, but he has two hands, so I squeezed in two synth parts: strings and organ. He has one Companion Cube for each keyboard.
  • A Nick Mason Personality Core and a Companion Cube Drum Set – I wanted to give Nick a 7-piece set, but that quickly became impractical, so he gets a single Companion Cube as his kit.
  • A Session Keyboard Player – I’ve seen many concert clips where a second keyboardist performed with the band, so here he is! This let me bring in another synth part – the brass! No vocals here, and one Companion Cube keyboard.
  • A Session Guitarist – Again, they usually have some other dude to play the acoustic parts and rhythms behind David's amazeballs lead work, so here he is. He gets (you guessed it) an acoustic part and a heavy distorted rhythm part over the final solo. No vocals, and one Companion Cube guitar amp.
  • A Backup Singer – at this point I cheated slightly and technically two parts are smooshed into this one backup singer (the main backup to David’s chorus part and the pin-prick screams).

So in the "Concert Video" below you can hear how the game engine handles all the tracks and applies environmental reverb and attenuation. I also use the Portals to quickly run around and reposition some of the props to show how you can alter the "mix" by moving the sources. You can even create "sub-mixes" by moving a few objects away from the stage into another part of the room. Enjoy!

Click here to listen to the un-mixed, mono version of the song to compare it with how the game environment “mixes” the tracks in simulating a large concert venue.

Again this is intentionally rather raw and off-balance, since the point here is to see how the game environment handles the mix (just like a live show). Perhaps I’ll do another project where I’ll take that year or two and nail a perfect cover… 🙂

The Concert, in all its glory!

Building the Concert Hall

Project Breakdown

This project broke down into three big chunks, at least in the way I approached it mentally: creating the "Arena" within Source's Hammer editor, recording a live-feeling cover of Comfortably Numb, and then integrating the audio into the concert hall.

As I'll get into in a bit more detail below, I found that I was a bit overly ambitious with the size of my arena! I'd hoped to emulate some massive space and then run around to balcony levels and backstage and all that… well, in the end it just wasn't practical, and the engine wasn't really able to "fill" the space at the required volumes – at least not with my current level of audio skilz within Source. If anything my venue is still too large. If I work on a version 2 I will make the venue even smaller for a more intimate feel, as opposed to going for the whole Arena Rock flavor.

Creating anything within Hammer, though, is relatively straightforward. I could have spent a lot more time making it "cooler" and more realistic, but my focus here was on the audio experience, so I made myself stop when I had something serviceable together. It's not going to win any awards, but I think it works well enough for this project.

The hardest part of this entire project was recording a cover of Comfortably Numb without taking three years obsessing over every single detail trying to get it absolutely perfect!! For this experiment I didn’t need a perfectly manicured and immaculate cover – I just needed something respectable that I could split up into individual parts and apply to the right objects in the game to see what happened.

Like the majority of guitarists in the world, I count David Gilmour as pretty much my hero, so yeah, this was an ego check on the whole.

Implementing the audio was interesting – never quite as straightforward as you'd expect! There was a lot of legwork needed on my part through forums and articles reviewing how to use the various sound objects and entities, audio formats, file paths, scripting how-tos, rebuilding the audio cache… Lots of one-step-forward and two-steps-back. But now that I'm on the other side of this project, looking back it doesn't seem that bad! I've certainly learned a great deal about audio in the Source engine, so yeah, mission accomplished there.
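For anyone retracing those steps, the cache-rebuild and script-reload chores come down to a couple of stock developer-console commands (these exist in most Source branches, though availability can vary by game):

    // run from the in-game developer console
    snd_rebuildaudiocache    // rebuild the engine's audio cache after adding new sound files
    sv_soundemitter_flush    // reload sound scripts without restarting the map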

Gory Technical Stuff

Still here? Still reading? Okay let’s continue…

So creating a concert in the game was not nearly as straightforward as it might seem, or even as it might sound in the end. I mean you jump in, and it plays the song, and it all seems pretty straightforward. No no! Consider this: this is a game engine, not an audio production environment. There is no way to trigger one audio source based on a count of measures or a knowing “look” from one band member to another. There’s no tempo, no meter, no traditional way to direct the audio. So how do you orchestrate it all? How do you get David Gilmour to start playing his guitar solo at just the right time?

Well, the answer is brute force. Every part had to be triggered at the exact same time, so every part had to be the exact same length with all of the timing “built in” to the audio file. So David’s guitar solo is preceded by a lot of silence. This had a dramatic impact on the scope and ambition of this project since it meant every part added, no matter how big or small, would be the same size and length as every other part.
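In practice that meant one trigger fanning out to every sound entity on the same tick. Here's a minimal sketch of that wiring in VMF form – the entity names are hypothetical stand-ins for mine, but the logic_relay fan-out is the standard Source I/O pattern:

    entity
    {
        "classname" "logic_relay"
        "targetname" "concert_start"
        connections
        {
            // format: "output"  "target,input,parameter,delay,refire limit"
            "OnTrigger" "vocals_waters,PlaySound,,0,-1"
            "OnTrigger" "amp_gilmour,PlaySound,,0,-1"
            "OnTrigger" "kit_mason,PlaySound,,0,-1"
            // ...one line per remaining ambient_generic, 10 in all
        }
    }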

A good example of this: as mentioned above, I started with the idea that each individual drum/cymbal would be split out and tied to its own Companion Cube. This 7-piece drum kit would have resulted in, of course, 7 audio files, each over 6 minutes in length and (as WAV files) over 250 MB combined. Through some trial and error I figured out that some cuts had to be made, and the drums were compacted down to a one-piece kit weighing in at just over 35 MB. Not quite as cool, but it worked much better. In the end I was able to condense the song down – while maintaining a virtual performer for each part – to 10 tracks.
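Those sizes check out with some quick back-of-the-envelope math, assuming 16-bit mono WAVs at 44.1 kHz:

    44,100 samples/s × 2 bytes × ~400 s ≈ 35 MB per track
    35 MB × 7 drum tracks ≈ 245 MB – right in that 250 MB ballpark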

In order to get audio out of Digital Performer and into the Source engine I first had to export each individual part as a Broadcast WAV file so that I could easily dump them out as mono files. From there I converted them to Microsoft ADPCM WAV format to condense the size as much as possible. However, in the end I went with MP3 in order to get even more compression to help out the engine. So the final tracks are 96 kbps MP3s weighing in at 4.74 MB each. Not the greatest audio quality, but I needed to tighten up file size, so some sacrifices had to be made.
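For anyone reproducing that pipeline outside of Digital Performer, an equivalent pair of conversions in ffmpeg (filenames hypothetical) would look something like this:

    # intermediate step: downmix to mono and encode as Microsoft ADPCM WAV
    ffmpeg -i guitar_solo.wav -ac 1 -c:a adpcm_ms guitar_solo_adpcm.wav

    # final format: mono MP3 at 96 kbps
    ffmpeg -i guitar_solo.wav -ac 1 -b:a 96k guitar_solo.mp3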

But the biggest challenge of this entire project (other than my guitarist-ego-check) was the overall in-game volume. I found it a bit difficult to get concert-level volumes that actually "felt" like concert-level volumes. I tried a number of different entities and scripts within the Source engine, but nothing really pushed the volume to obscene levels. This makes sense within the context of what game engines are designed to emulate, but it was a bit disappointing all the same.


The final audio was implemented through the ambient_generic entity. I found it to be the most direct approach, and it played nicely with the default audio channels in the game. Again, it doesn't quite have the volume levels I was looking for, but it certainly works, and it gave me the hands-on, practical view I wanted of how the engine handles distance, space, etc.
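For reference, here's what a single performer's sound entity looks like sketched out in VMF form – the path and names are hypothetical, but the keyvalues are the stock ambient_generic ones:

    entity
    {
        "classname" "ambient_generic"
        "targetname" "amp_gilmour"
        "message" "concert/gilmour_guitar.mp3"    // path relative to the sound/ folder
        "health" "10"         // volume on a 0-10 scale; 10 = loudest
        "pitch" "100"         // 100 = unshifted playback
        "radius" "4096"       // maximum audible distance, in units
        "spawnflags" "48"     // 16 (start silent) + 32 (is not looped)
        "origin" "256 0 64"
    }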

I did try using sound scripts so that I could specify exact decibel levels, but I found that in practice the volume wasn't any louder than what I could get out of ambient_generics. I also ran into some audio problems when specifying volumes louder than 130 dB, so there was a performance cap going that route as well.
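A soundscript entry along those lines would look something like this (entry name and wave path are hypothetical; the keys and constants are standard soundscript fare):

    "Concert.GilmourGuitar"
    {
        "channel"     "CHAN_STATIC"
        "volume"      "1.0"            // 1.0 is the maximum gain at the source
        "pitch"       "PITCH_NORM"
        "soundlevel"  "SNDLVL_130dB"   // pushing past this is where I hit problems
        "wave"        "concert/gilmour_guitar.mp3"
    }

Worth noting: as I understand it, soundlevel governs falloff distance rather than peak loudness, and volume caps at 1.0 – which would explain why the soundscript route never got any louder than ambient_generic.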

Some other sort-of-fun observations I made while playing with this experiment:

  • I found that you can "hide" from loud sounds at close range by removing yourself from the line of sight of the audio source. The sound doesn't completely disappear, but the attenuation is more dramatic than expected. This feels sort of funny when you run up to the stage and parts suddenly get quieter!
  • There's really no modeling of sound intensity to speak of. No matter how close I got to extraordinarily loud sounds (or as loud as I've been able to make them), I can always clearly hear my own footsteps and portal-gun sounds.
  • Sounds fade fast! Even covering a small amount of ground in an enclosed space quickly takes you "away" from the sound. It doesn't take much distance to escape the feeling of being in a concert space at all. Volume levels account for this to some degree, but even the shift in the ambient reverb applied to the sound happens pretty quickly.
  • The audio "source" is still slightly ambiguous – meaning even when standing directly in front of it (in this case a performer or an "amp"), it's still a bit difficult to discern exactly which part is being generated by the prop, especially if you don't know ahead of time. As an extension of the sound-intensity observation above, the only way to really identify a part is to grab the prop and run off with it! Take it to a quiet corner of the room and hear it by itself.


Summary n’ What’s Next?

So how did it turn out? Well, pretty cool, I think. It's certainly no concert replacement – the audio engine just wasn't designed to emulate this type of environment – but it's fun, for sure. It's interesting to hear just how the engine interprets distance and applies attenuation and reverb to the source. I'd love to see a game engine move forward and actually render audio AS an in-game object, using spatial waveform modeling to calculate echo, reverb, Doppler effect, wave intensity, etc.

Oh, but the biggest thing I learned? Sound does not travel through Portals!

Edit: So this project has been done for a couple weeks now, but I keep thinking about it. I'm still fascinated and curious about how I can improve upon this first iteration. I'm crazy excited about the Oculus Rift and what that could bring to this experience – I might have to snag a dev kit and start exploring. I also attended an audio symposium recently where the topics of ambisonics and HRTF were discussed at a high level, and that got me even more excited about future possibilities. So I'm officially deeming this "Version 1" of my virtual-concert experience. Research on V.2 has begun…

Got Questions?

I claim no copyright to Comfortably Numb at all. Hopefully that's as obvious as it should be. I am making no profit from this "performance" and it is purely a technical demonstration for educational purposes.