History MediaNotes / GamingAudio


The biggest drawback of FM synthesis is that it cannot reproduce PCM audio (at least, officially: hackers later found a way to put the OPL-2 and -3 chips into a PSG-like mode, but the audio quality was lackluster and barely passable, the card was hard to put into said mode and program for, and once in said mode it could no longer produce music until reset, meaning it was either music or sampled audio, but not both. As a result, the hack was only used in a very small number of games), which to some people made it a step backwards instead. This resulted in the influx of "hybrid" cards mentioned below, in which many cards couple an FM synthesizer (usually an OPL-2, later an OPL-3) with a PCM codec for speech and sound effects. This shortcoming also saw [=AdLib=] users getting a Covox Speech Thing or Disney Sound Source to supplement the [=AdLib=]'s musical capabilities.

The first popular gaming platforms to use a PCM synthesis chipset were the UsefulNotes/AcornArchimedes, UsefulNotes/{{Amiga}}, UsefulNotes/{{SNES}}, and believe it or not, {{Pinball}} systems, mainly those that used [[Creator/{{Midway}} Midway's]] DCS PCM synthesis board (which also saw use in VideoGame/MortalKombat and VideoGame/RevolutionX cabinets, since it not only reproduces instruments more faithfully, but one of the many tricks PCM synthesis could do was transparently loop fully-voiced music tracks, an important feature of the latter game). The UsefulNotes/{{NES}} and UsefulNotes/SegaGenesis both had rudimentary PCM support, but this was mainly used for pre-recorded voices, sound effects, and drums. The [=OPN2=] FM chip used by the Genesis has a PCM codec mode, but the Genesis can also resort to manipulating the PSG to play back PCM sounds if needed (notably, that's how the ''Sonic 3'' Launch Base Zone BGM managed to have a percussion track and still play the "Go!" voice samples). Pretty much every system introduced since uses PCM.
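The PSG trick mentioned above can be sketched in a few lines: a PSG channel exposes a 4-bit volume register, and rewriting it fast enough with successive sample values turns the channel into a crude DAC. This is a conceptual Python illustration only, not actual Genesis driver code; `write_volume_register` is a hypothetical stand-in for the real hardware port write.

```python
# Conceptual sketch of the PSG-as-DAC trick, not actual Genesis driver code:
# a PSG channel's 4-bit volume register is rewritten at the sample rate so
# the volume staircase itself reproduces the waveform.
# `write_volume_register` is a hypothetical stand-in for the hardware write.

def pcm_to_psg_volumes(samples_8bit):
    """Quantize unsigned 8-bit PCM (0-255) to the PSG's 16 volume steps."""
    return [s >> 4 for s in samples_8bit]  # keep only the top 4 bits

def play_through_psg(samples_8bit, write_volume_register):
    # The channel's tone is assumed set to a constant/inaudible frequency;
    # only the rapid volume writes shape the audible output.
    for level in pcm_to_psg_volumes(samples_8bit):
        write_volume_register(level)  # must be timed at the PCM sample rate

# Demo with a mock register: collect the 4-bit levels that would be written.
written = []
play_through_psg([0, 64, 128, 255], written.append)
# written is now [0, 4, 8, 15]
```

The quantization step also makes clear why such playback sounded rough: 8-bit source audio loses half its resolution on the way to the 4-bit register.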


Additionally, Yamaha OPL chips were also found on the [=MoonSound=], [=MSX Music=] and [=MSX Sound=] expansion cards for the MSX, and on the popular [=AdLib=] sound card for [=PCs=] (where they became the de facto standard until usurped by the [=SoundBlaster=] in the early '90s), as well as in most [=SoundBlaster=] PC sound cards and clones to provide [=AdLib=] compatibility. There was even an OPL-1-based FM synthesis module (allegedly upgradable to OPL-2) for the UsefulNotes/{{Commodore 64}} in case the user needed even better-quality music than the SID could provide.

On the higher end we have cards that were full PCM wavetable synthesis devices, such as the [=SoundBlaster=] AWE, Live!, and Audigy series, as well as the Gravis Ultrasound series of cards. These used audio samples provided by the user for music synthesis, but offered rudimentary PCM sample playback on a separate codec for sound effects and speech as well. These cards were later joined by Aureal's Vortex and Nvidia's [=SoundStorm=], which used the same DLS format as Microsoft's [=DirectMusic=] software synthesizer. However, on the PC end, the larger publishing houses were slow to take advantage of these cards, and full support only appeared in the mid-'90s despite the first of them appearing as early as 1991. As a matter of fact, fans of such cards blame the poor uptake on the fact that most major publishing houses chose not to support them when porting games to the PC. However, support for such cards appeared very early on with publishers dedicated to the PC platform like Creator/ApogeeSoftware, Creator/EpicGames and Creator/IDSoftware. In fact, many of Epic Games' titles sound better on the Gravis Ultrasound than on anything else (caused by fully supporting the wavetable engine of the Ultrasound, but not the wavetable engines of competing cards: [=AWE32/64=] support in their games is basically no different from [=SoundBlaster=] 16 mode, i.e. software mixing only, with no support for the EMU wavetable synthesizer).
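The core wavetable idea is simple to sketch: the synthesizer stores a recorded waveform and steps through it at different fractional rates to produce different pitches. A minimal Python illustration, using nearest-neighbour lookup only; real cards added interpolation, envelopes, and dozens of simultaneous hardware voices.

```python
# Minimal sketch of sample-based ("wavetable") pitch shifting: one stored
# waveform is stepped through at different fractional rates to yield
# different notes. Nearest-neighbour lookup only; real hardware also
# interpolated between samples and applied volume envelopes.

def play_note(sample, semitones, length):
    """Resample `sample` with a step of 2^(semitones/12), wrapping around."""
    step = 2.0 ** (semitones / 12.0)
    out, pos = [], 0.0
    for _ in range(length):
        out.append(sample[int(pos) % len(sample)])
        pos += step
    return out

base = [0, 50, 100, 50, 0, -50, -100, -50]  # one cycle of a stored waveform
unison = play_note(base, 0, 8)    # step 1.0: the sample as recorded
octave = play_note(base, 12, 8)   # step 2.0: every other sample, pitch doubled
```

The step factor of 2^(semitones/12) is the standard equal-temperament ratio, which is why one stored sample can cover a whole keyboard.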

Many of the proprietary formats are driven by the API or licensed UsefulNotes/GameEngine. For example, a game using the [=CRI=] middleware will tend to use ADX or CRI Audio.

However, there was an onslaught of unlicensed clone producers. Companies like Creative and ESS grew tired of paying royalties, designed their own clones for their later sound cards and cut their ties with Yamaha, while unscrupulous no-name companies in China made bootlegs of the chip for toys, cheap electronic instruments and off-brand sound cards. This tarnished the reputation of FM synthesis and caused a BrokenBase, as the clones not only sounded nothing like the real deal, but also retained many of the flaws of the early synthesizers (especially the "flat" percussion) and, compared to genuine chips, sounded ''terrible''.

Once games started to ship on [=CDs=], Red Book audio for game soundtracks became common. The audio could be played from the CD just like music on a music CD, while the game data lived in memory. This technology was actually developed in tandem with PCM sample playback and competed with PCM synthesis, and is sometimes used together with the former (for example, in the PC ports of VideoGame/WipeOut and VideoGame/QuakeII, where the music is played from the audio tracks of the disc while the sound effects are played through PCM sample playback). A nice side effect is that the game CD is its own soundtrack CD and can be enjoyed on any regular CD player, and it also adds an extra layer of complexity for [[CopyProtection copy protection]], in that mixed-mode game discs are difficult to duplicate reliably. Additionally, the music often sounds better than PCM sampled music, since real instruments could be played and recorded. On the downside, however, looping music tends to be difficult if not impossible to implement, as evident in ''VideoGame/SonicCD'' on the UsefulNotes/SegaCD, where the music had a short fade-out and fade-in when repeating. It also makes VariableMix impossible to implement.



With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music, as well as VariableMix, was difficult if not impossible. So they turned to another technique: compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata describing loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this was technically only true on the PC, largely due to inefficient [=APIs=]; Macs had no such issues. Nevertheless, developers often overlooked the method when writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable, since the CPU only has to decode two channels of audio, as opposed to wavetable, where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two.
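The loop-point metadata works roughly like this sketch: the file carries sample offsets for where the loop begins and ends, and the player jumps back seamlessly instead of fading out. Field names here are illustrative, not any particular format's spec (some engines read similar `LOOPSTART`/`LOOPLENGTH` tags from Ogg Vorbis comments).

```python
# Sketch of loop-point playback: an intro plays once, then playback jumps
# back to loop_start every time it reaches loop_end, with no fade needed.
# The offsets stand in for the loop metadata a game audio format would carry.

def stream_looped(samples, loop_start, loop_end, total):
    """Yield `total` samples, jumping back to loop_start upon hitting loop_end."""
    pos = 0
    for _ in range(total):
        yield samples[pos]
        pos += 1
        if pos == loop_end:  # seamless jump: the intro never repeats
            pos = loop_start

track = [10, 20, 30, 40, 50]  # samples 0-1 are the intro, 2-4 the loop body
out = list(stream_looped(track, loop_start=2, loop_end=5, total=8))
# out is [10, 20, 30, 40, 50, 30, 40, 50]
```

This is exactly what Red Book audio could not do: a CD drive cannot seek back to a sample-accurate position instantly, so CD-based games had to fade out and restart the track instead.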
The only thing holding the technology back was Windows' inefficient API: it wasn't until the advent of UsefulNotes/DirectX, specifically [=DirectSound=], and the new WDM audio model with multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software did the mixing itself (something most developers didn't do), it couldn't play the BGM and sound effects at the same time.[[note]]Some software synthesizers like the Yamaha SYXG series of softsynths offer a passthrough virtual driver to allow programs that expect a hybrid card to work. However, this method consumes ''even more'' CPU power and thus introduces a noticeable amount of audio lag on lower-end [=PCs=], and certain games still managed to conflict with the driver.[[/note]]
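What "the software doing the mixing itself" means can be shown in miniature: sum each stream's samples into one buffer and clamp to the 16-bit range before handing the result to the single available audio device. A simplified sketch; real mixers also handle resampling, panning, and per-channel volume.

```python
# Miniature software mixer: sum several equal-length sample streams into one
# buffer, clipping to the signed 16-bit range instead of letting the sum
# wrap around (wrapping would produce harsh distortion).

def mix(streams):
    """Sum equal-length 16-bit sample streams, clipping instead of wrapping."""
    mixed = []
    for frame in zip(*streams):
        s = sum(frame)
        mixed.append(max(-32768, min(32767, s)))
    return mixed

bgm = [1000, -1000, 1000, -1000]   # background music stream
sfx = [0, 32000, 32000, 0]         # a loud sound effect on top
out = mix([bgm, sfx])              # third sample clips at 32767
```

Doing this sum for every sample of every active sound is the extra CPU work that pre-[=DirectSound=] games mostly chose to avoid.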
Is there an issue? Send a MessageReason:
None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tending to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but had also went multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer applies. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software itself does the mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time [[note]]some software synthesizers like the Yamaha SYXG series of softsynths offer a passthrough virtual driver to allow programs that expect a hybrid card to work. However, this method consumes ''even more'' CPU power and thus introduces a trivial amount of audio lag on lower end [=PCs=], and certain games still managed to create conflicts with the driver.[[/note]]

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tending to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but had also went multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer applies. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, UsefulNotes/DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software itself does the mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time [[note]]some software synthesizers like the Yamaha SYXG series of softsynths offer a passthrough virtual driver to allow programs that expect a hybrid card to work. However, this method consumes ''even more'' CPU power and thus introduces a trivial amount of audio lag on lower end [=PCs=], and certain games still managed to create conflicts with the driver.[[/note]]
Is there an issue? Send a MessageReason:
None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tending to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but had also went multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer applies. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software itself does the mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time [[note]]some software synthesizers like the Yamaha SYXG series of softsynths offer a passthrough virtual driver to allow programs that expect a hybrid card to work. However, this method consumes ''even more'' CPU power and thus introduces a trivial amount of audio lag on lower end PCs, and certain games still managed to create conflicts with the driver.[[/note]]

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tending to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but had also went multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer applies. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software itself does the mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time [[note]]some software synthesizers like the Yamaha SYXG series of softsynths offer a passthrough virtual driver to allow programs that expect a hybrid card to work. However, this method consumes ''even more'' CPU power and thus introduces a non-trivial amount of audio lag on lower end PCs, [=PCs=], and certain games still managed to create conflicts with the driver.[[/note]]
Reason: None


The next step up was the FM synthesizer. FM synthesizers work by combining tones of various frequencies together in real time, with up to 4 oscillators working together to make a note. The technique works best for woodwind and many key instruments like the harpsicord (one key instrument that the FM synthesizer cannot reproduce reliably is the grand piano, whose ADSR[[note]]Attack, Decay, Sustain and Release[[/note]] qualities proved to be too difficult to simulate using FM); early FM synths had problems with percussion sounds (these sounds tended to be "flat", especially with first and second generation synthesizers, but they were still a problem with third generation synths), and string instruments (these sounded "plasticky" and "toyish"), but most of the problems were ironed out with later generation synthesizers.

to:

The next step up was the FM synthesizer. FM synthesizers work by combining tones of various frequencies together in real time, with up to 4 oscillators working together to make a note. The technique works best for woodwind and many key instruments like the harpsicord harpsichord (one key instrument that the FM synthesizer cannot reproduce reliably is the grand piano, whose ADSR[[note]]Attack, Decay, Sustain and Release[[/note]] qualities proved to be too difficult to simulate using FM); early FM synths had problems with percussion sounds (these sounds tended to be "flat", especially with first and second generation synthesizers, but they were still a problem with third generation synths), and string instruments (these sounded "plasticky" and "toyish"), but most of the problems were ironed out with later generation synthesizers.
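The oscillator-plus-envelope idea above can be sketched in a few lines. This is an illustrative toy, not how any particular FM chip computes its output: it uses only two operators (the article notes real chips use up to four), pure Python, and made-up modulator ratio/index values.

```python
import math

SAMPLE_RATE = 22050  # toy value; real cards ran at chip-specific rates

def adsr(t, dur, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
    """Piecewise-linear ADSR envelope: Attack, Decay, Sustain, Release."""
    if t < attack:
        return t / attack
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < dur - release:
        return sustain
    if t < dur:
        return sustain * (dur - t) / release
    return 0.0

def fm_note(freq, dur, mod_ratio=2.0, mod_index=3.0):
    """Two-operator FM: a modulator oscillator perturbs the carrier's phase."""
    samples = []
    for n in range(int(dur * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        env = adsr(t, dur)
        modulator = math.sin(2 * math.pi * freq * mod_ratio * t)
        carrier = math.sin(2 * math.pi * freq * t + mod_index * env * modulator)
        samples.append(env * carrier)
    return samples

note = fm_note(440.0, 0.5)  # A4 for half a second
```

Changing `mod_ratio` and `mod_index` is what reshapes the timbre from "flute-ish" to "bell-ish"; the grand-piano problem mentioned above comes from how hard it is to pick envelope and operator settings that evolve the way a struck string does.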



Once games started to ship on CDs, Red Book audio for game soundtracks became common. The audio could be played from the CD just like music on a music CD, while the game data lived in memory. This technology was actually developed in tandem with PCM sample playback and competed with PCM synthesis, and is sometimes used together with the former (for example, in the PC port of VideoGame/WipeOut and VideoGame/QuakeII, where the music is played from the music CD partition of the disc while the sound effects are played through PCM sample playback). A nice side-effect of this would be that the game CD is its own soundtrack CD and the soundtrack can be enjoyed on any regular CD player, and it also adds an extra layer of complexity for [[CopyProtection copy protection]] in that multi-partition game discs are difficult to duplicate reliably. Additionally, the music often sounds better than PCM sampled music, since real instruments could be played and recorded. On the downside, however, looping music tends to be difficult if not impossible to implement- as evident in ''VideoGame/SonicCD'' on the UsefulNotes/SegaCD, where the music had a short fade-out and fade-in section when repeating.

to:

Once games started to ship on CDs, [=CDs=], Red Book audio for game soundtracks became common. The audio could be played from the CD just like music on a music CD, while the game data lived in memory. This technology was actually developed in tandem with PCM sample playback and competed with PCM synthesis, and is sometimes used together with the former (for example, in the PC port of VideoGame/WipeOut and VideoGame/QuakeII, where the music is played from the music CD partition of the disc while the sound effects are played through PCM sample playback). A nice side-effect of this would be that the game CD is its own soundtrack CD and the soundtrack can be enjoyed on any regular CD player, and it also adds an extra layer of complexity for [[CopyProtection copy protection]] in that multi-partition game discs are difficult to duplicate reliably. Additionally, the music often sounds better than PCM sampled music, since real instruments could be played and recorded. On the downside, however, looping music tends to be difficult if not impossible to implement- as evident in ''VideoGame/SonicCD'' on the UsefulNotes/SegaCD, where the music had a short fade-out and fade-in section when repeating.
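The fade-out/fade-in workaround mentioned above amounts to tapering the two ends of the track so the drive's seek back to the start of the audio partition isn't an audible hard cut. A minimal sketch (hypothetical `fade_loop_ends` helper, linear ramps, samples as floats):

```python
def fade_loop_ends(samples, fade_len):
    """Fade the tail out and the head in, masking the gap a CD drive
    needs to seek back to the start of the track when 'looping'."""
    out = list(samples)
    n = len(out)
    for i in range(min(fade_len, n)):
        out[i] *= i / fade_len           # fade-in at the start
        out[n - 1 - i] *= i / fade_len   # fade-out at the end
    return out

track = [1.0] * 1000                     # stand-in for decoded CD audio
looped = fade_loop_ends(track, 100)
```

The result loops without a click, but at the cost of the music audibly dipping to silence at every repeat, which is exactly what ''Sonic CD'' players hear.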
Reason: None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software itself does the mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved. On earlier versions of Windows, unless the software itself does the mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time.
time [[note]]some software synthesizers like the Yamaha SYXG series of softsynths offer a passthrough virtual driver to allow programs that expect a hybrid card to work. However, this method consumes ''even more'' CPU power and thus introduces a non-trivial amount of audio lag on lower end PCs, and certain games still managed to create conflicts with the driver.[[/note]]
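"The software itself does the mixing" means literally summing the BGM and sound-effect streams sample by sample and clamping the result, which is the job [=DirectSound=]/WDM later took over from the application. A minimal sketch (hypothetical `mix` helper, float samples in [-1, 1]):

```python
def mix(streams):
    """Sum several PCM streams sample-by-sample, clamping to [-1, 1]
    so simultaneous loud sources clip instead of wrapping around."""
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        total = sum(s[i] for s in streams if i < len(s))
        mixed.append(max(-1.0, min(1.0, total)))
    return mixed

bgm = [0.5, 0.5, 0.5, 0.5]
sfx = [0.8, -0.8]          # a short effect layered over the music
out = mix([bgm, sfx])      # first sample clips to 1.0
```

Doing this for every sound, every frame, on the game's own thread is the per-title burden most early Windows developers skipped, which is why BGM or effects, but not both, was the common result.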
Reason: None


As CPU power increased, especially after the Pentium and [=PowerPC=] processors became popular (around 1995), PC games began using software PCM engines to play instruments and sound effects. However, it wasn't until the early 2000s that wavetable sound cards became a niche and [=PCs=] switched fully to software-driven PCM engines. Much of the delay could be attributed to sloppy code and poor optimization, however, as the Mac had no problems with software-driven synthesis, while [=PCs=] saw bad CPU load spikes and frame rate issues as well as annoying bugs like sound device conflicts, when playing music using software-driven synthesis until at least the Pentium III era. As of 2014, no consumer cards have a hardware wavetable chip anymore and cards with such circuits are now only found in the realm of professionals.

to:

As CPU power increased, especially after the Pentium and [=PowerPC=] processors became popular (around 1995), PC games began using software PCM engines to play instruments and sound effects. However, it wasn't until the early 2000s that wavetable sound cards became a niche and [=PCs=] switched fully to software-driven PCM engines. Much of the delay could be attributed to sloppy code and poor optimization, however, as the Mac had no problems with software-driven synthesis, while [=PCs=] saw bad CPU load spikes and frame rate issues as well as annoying bugs like sound device conflicts, conflicts and severe audio lag, when playing music using software-driven synthesis until at least the Pentium III era. As of 2014, no consumer cards have a hardware wavetable chip anymore and cards with such circuits are now only found in the realm of professionals.
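The CPU cost behind those load spikes is easy to see in miniature: a software PCM/wavetable engine must synthesize every channel on every sample tick and then downmix the lot to stereo. A toy sketch (sine "instruments" instead of real sampled ones; all names and the 16-channel count are illustrative):

```python
import math

def render_wavetable(num_channels=16, num_samples=512, rate=22050):
    """Synthesize `num_channels` voices per tick and downmix to stereo.
    A real engine interpolates sampled instruments instead of calling
    sin(), which is costlier still - hence the Pentium-era load spikes."""
    left, right = [], []
    for n in range(num_samples):
        t = n / rate
        l = r = 0.0
        for ch in range(num_channels):
            sample = math.sin(2 * math.pi * (110 * (ch + 1)) * t) / num_channels
            pan = ch / (num_channels - 1)    # spread voices across the field
            l += sample * (1.0 - pan)
            r += sample * pan
        left.append(l)
        right.append(r)
    return left, right

left, right = render_wavetable()
```

The inner loop runs channel-count times per output sample; decoding a finished stereo file does a fixed amount of work per sample regardless of how many instruments were recorded, which is the CPU argument made elsewhere on this page.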
Reason: None


As CPU power increased, especially after the Pentium and [=PowerPC=] processors became popular (around 1995), PC games began using software PCM engines to play instruments and sound effects. However, it wasn't until the early 2000s that wavetable sound cards became a niche and [=PCs=] switched fully to software-driven PCM engines. Much of the delay could be attributed to sloppy code and poor optimization, however, as the Mac had no problems with software-driven synthesis, while [=PCs=] saw bad CPU load spikes and frame rate issues when playing music using software-driven synthesis until at least the Pentium III era. As of 2014, no consumer cards have a hardware wavetable chip anymore and cards with such circuits are now only found in the realm of professionals.

to:

As CPU power increased, especially after the Pentium and [=PowerPC=] processors became popular (around 1995), PC games began using software PCM engines to play instruments and sound effects. However, it wasn't until the early 2000s that wavetable sound cards became a niche and [=PCs=] switched fully to software-driven PCM engines. Much of the delay could be attributed to sloppy code and poor optimization, however, as the Mac had no problems with software-driven synthesis, while [=PCs=] saw bad CPU load spikes and frame rate issues as well as annoying bugs like sound device conflicts, when playing music using software-driven synthesis until at least the Pentium III era. As of 2014, no consumer cards have a hardware wavetable chip anymore and cards with such circuits are now only found in the realm of professionals.
Reason: None


Once games started to ship on CDs, Red Book audio for game soundtracks became common. The audio could be played from the CD just like music on a music CD, while the game data lived in memory. This technology was actually developed in tandem with PCM sample playback and competed with PCM synthesis, and is sometimes used together with the former (for example, in the PC port of VideoGame/WipeOut and VideoGame/QuakeII, where the music is played from the music CD partition of the disc while the sound effects are played through PCM sample playback). A nice side-effect of this would be that the game CD is its own soundtrack CD and the soundtrack can be enjoyed on any regular CD player, and it also adds an extra layer of complexity for [[CopyProtection copy protection]] in that multi-partition game discs are difficult to duplicate reliably. Additionally, the music often sounds better than PCM sampled music, since real instruments could be played and recorded. On the downside, however, looping music tends to be difficult if not impossible to implement- as evident in SonicCD on the UsefulNotes/SegaCD, where the music had a short fade-out and fade-in section when repeating.

to:

Once games started to ship on CDs, Red Book audio for game soundtracks became common. The audio could be played from the CD just like music on a music CD, while the game data lived in memory. This technology was actually developed in tandem with PCM sample playback and competed with PCM synthesis, and is sometimes used together with the former (for example, in the PC port of VideoGame/WipeOut and VideoGame/QuakeII, where the music is played from the music CD partition of the disc while the sound effects are played through PCM sample playback). A nice side-effect of this would be that the game CD is its own soundtrack CD and the soundtrack can be enjoyed on any regular CD player, and it also adds an extra layer of complexity for [[CopyProtection copy protection]] in that multi-partition game discs are difficult to duplicate reliably. Additionally, the music often sounds better than PCM sampled music, since real instruments could be played and recorded. On the downside, however, looping music tends to be difficult if not impossible to implement- as evident in SonicCD ''VideoGame/SonicCD'' on the UsefulNotes/SegaCD, where the music had a short fade-out and fade-in section when repeating.
Reason: None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved, and thus unless the software itself does the mixing, it wouldn't be able to play the BGM and sound effects at the same time.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities that premiered with Windows 98, that the issue was resolved, and thus resolved. On earlier versions of Windows, unless the software itself does the mixing, mixing (something that most developers don't do), it wouldn't be able to play the BGM and sound effects at the same time.
Reason: None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities, that the issue was resolved, and thus unless the software itself does the mixing, it wouldn't be able to play the BGM and sound effects at the same time.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was Windows' inefficient API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities, capabilities that premiered with Windows 98, that the issue was resolved, and thus unless the software itself does the mixing, it wouldn't be able to play the BGM and sound effects at the same time.
Reason: None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. The only thing holding the technology back was due to Windows' inefficient API (it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities, that the issue was resolved).

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. The only thing holding the technology back was due to Windows' inefficient API (it API- it wasn't until the conception of DirectX, specifically [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities, that the issue was resolved).
resolved, and thus unless the software itself does the mixing, it wouldn't be able to play the BGM and sound effects at the same time.
Reason: None


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. The only thing holding the technology back was due to Windows' inefficient API (it wasn't until the conception of DirectX, specifically DirectSound, and the new WDM audio model which has multiple audio stream mixing capabilities, that the issue was resolved).

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, the one drawback of CD audio meant that transparently looping music is difficult if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games today are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio with several more, such as better looping. Initially, early processors were not powerful enough to handle this without choking (though this is technically only true for the PC largely due to the inefficient [=APIs=] while Macs had no such issues. Nevertheless developers often overlook the method due to writing games to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like ADX and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable since the CPU only has to decode two channels of audio as opposed to wavetable where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. 
The only thing holding the technology back was due to Windows' inefficient API (it wasn't until the conception of DirectX, specifically DirectSound, [=DirectSound=], and the new WDM audio model which has multiple audio stream mixing capabilities, that the issue was resolved).
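The loop-point metadata described above might be consumed like the sketch below. This is a minimal illustration in Python, not any real format's API: the sample data is a stand-in for a decoded stream, and `looped_stream` is a hypothetical name (real-world conventions such as `LOOPSTART`/`LOOPEND` comment tags in Ogg Vorbis files store the offsets alongside the compressed audio).

```python
# Sketch: seamless looping driven by loop-point metadata. The samples
# are assumed to be already decoded into a plain sequence; in a real
# engine the decoder would produce them from the compressed file.
from itertools import islice

def looped_stream(samples, loop_start, loop_end):
    """Yield samples forever: the intro plays once, then the loop cycles."""
    yield from samples[:loop_start]            # intro: plays exactly once
    while True:                                # region [loop_start, loop_end)
        yield from samples[loop_start:loop_end]  # repeats indefinitely

track = list(range(10))                        # stand-in for decoded PCM
first_12 = list(islice(looped_stream(track, 4, 7), 12))
print(first_12)  # [0, 1, 2, 3, 4, 5, 6, 4, 5, 6, 4, 5]
```

Because the generator never ends, the music loops transparently for as long as the game keeps pulling samples, which is exactly what Red Book audio could not do.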


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is difficult, if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues. Nevertheless, developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is difficult, if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues. Nevertheless, developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply. As a matter of fact, compressed audio files tend to use less CPU power than software-driven wavetable, since the CPU only has to decode two channels of audio, as opposed to wavetable, where the CPU has to interpret and generate ''16'' channels of audio and then downmix them to two. The only thing holding the technology back was Windows' inefficient API (it wasn't until the introduction of DirectX, specifically DirectSound, and the new WDM audio model, which can mix multiple audio streams, that the issue was resolved).
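The cost gap between the two approaches (decoding a pre-mixed stereo stream versus generating and downmixing many wavetable voices) can be sketched as follows. The voice count, linear pan law, and function name are illustrative, not any particular engine's API.

```python
# Sketch of why software wavetable costs more than stream decoding:
# the mixer must generate, pan, and sum every voice per audio frame,
# whereas a pre-mixed compressed stream only ever decodes 2 channels.

def downmix(voices, pans):
    """Mix equal-length mono voices down to stereo.

    pans[i] in [0.0, 1.0]: 0 = full left, 1 = full right (linear pan law).
    Returns (left, right) sample lists.
    """
    n = len(voices[0])
    left = [0.0] * n
    right = [0.0] * n
    for voice, pan in zip(voices, pans):    # work scales with voice count
        for i, s in enumerate(voice):
            left[i] += s * (1.0 - pan)
            right[i] += s * pan
    return left, right

# Sixteen centred voices, four samples each:
voices = [[1.0, 0.0, -1.0, 0.0] for _ in range(16)]
left, right = downmix(voices, pans=[0.5] * 16)
print(left[0], right[0])  # 8.0 8.0: all sixteen voices summed per side
```

The nested loop over every voice and sample is the work a software wavetable mixer repeats continuously; a compressed-audio player replaces all of it with one two-channel decode.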


The next step up was the FM synthesizer. FM synthesizers work by combining tones of various frequencies in real time, with up to 4 oscillators working together to make a note. The technique works best for woodwind and key instruments like the piano; early FM synths had problems with percussion sounds (these tended to be "flat", especially with first- and second-generation synthesizers, and were still a problem with third-generation synths) and string instruments (these sounded "plasticky" and "toyish"), but most of the problems were ironed out with later-generation synthesizers.

to:

The next step up was the FM synthesizer. FM synthesizers work by combining tones of various frequencies in real time, with up to 4 oscillators working together to make a note. The technique works best for woodwind and many key instruments like the harpsichord (one key instrument that the FM synthesizer cannot reproduce reliably is the grand piano, whose ADSR[[note]]Attack, Decay, Sustain and Release[[/note]] qualities proved too difficult to simulate using FM); early FM synths had problems with percussion sounds (these tended to be "flat", especially with first- and second-generation synthesizers, and were still a problem with third-generation synths) and string instruments (these sounded "plasticky" and "toyish"), but most of the problems were ironed out with later-generation synthesizers.
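The oscillator-combining mechanism described above can be sketched as a minimal two-operator phase-modulation voice. This is only an illustration of the principle: real FM chips like the OPL series use up to four hardware operators per note with envelope generators, and every frequency, modulation index, and name here is made up for the example.

```python
# Minimal two-operator FM voice: a modulator oscillator wobbles the
# phase of a carrier, producing the sidebands that give FM its timbre.
import math

def fm_note(carrier_hz, modulator_hz, index, seconds, rate=8000):
    """Render one FM note as a list of floats in [-1, 1]."""
    out = []
    for n in range(int(seconds * rate)):
        t = n / rate
        # The modulator's output is added to the carrier's phase;
        # 'index' controls how bright/harsh the resulting tone is.
        mod = index * math.sin(2 * math.pi * modulator_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod))
    return out

note = fm_note(carrier_hz=440.0, modulator_hz=880.0, index=2.0, seconds=0.01)
print(len(note))  # 80 samples at the 8 kHz example rate
```

With `index=0` the voice collapses to a plain sine wave; raising it adds harmonics, which is why simple ratios of carrier to modulator suit reedy woodwind tones better than a grand piano's complex envelope.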


This was one of the main draws of the UsefulNotes/AppleMacintosh in the early 90s, when educational and adventure games alike started using these for music as an alternative to FM synthesis.

to:

This was one of the main draws of the UsefulNotes/AppleMacintosh in the early 90s, when educational and adventure games alike started using these for music as an alternative to FM synthesis.[[note]]Indeed, Macs ''don't'' have an FM synthesizer built in; their musical abilities mostly come down to [=QuickTime's=] software PCM synthesizer engine, Red Book CD audio, PCM music playback, or a MIDI keyboard attached to a SCSI-to-MIDI converter, the last of which was only found in the homes of music enthusiasts.[[/note]]


The first popular gaming platforms to use a PCM synthesis chipset were the UsefulNotes/{{Amiga}}, UsefulNotes/{{SNES}}, and believe it or not, {{Pinball}} systems, mainly those that used Creator/{{Midway}}'s DCS PCM Synthesis board (which also saw use in VideoGame/MortalKombat and VideoGame/RevolutionX cabinets, since it not only reproduces instruments more faithfully, but one of the many tricks PCM synthesis could do was transparently loop fully-voiced music tracks, which is an important feature of the latter game). The UsefulNotes/{{NES}} and UsefulNotes/SegaGenesis both had rudimentary PCM support, but this was mainly used for pre-recorded voices, sound effects, and drums (and in actuality worked by manipulating the PSG in the case of the NES. The modified [=OPL2=] chip (called an [=OPN2=]) used by the Genesis has a PCM codec mode, but the Genesis can also resort to manipulating the PSG to play back PCM sounds if needed; notably, it's how the Sonic 3 Launch Base zone BGM managed to have a percussion track and still have the "Go!" voice samples). Pretty much every system introduced since uses PCM.

to:

The first popular gaming platforms to use a PCM synthesis chipset were the UsefulNotes/{{Amiga}}, UsefulNotes/{{SNES}}, and believe it or not, {{Pinball}} systems, mainly those that used Creator/{{Midway}}'s DCS PCM Synthesis board (which also saw use in VideoGame/MortalKombat and VideoGame/RevolutionX cabinets, since it not only reproduces instruments more faithfully, but one of the many tricks PCM synthesis could do was transparently loop fully-voiced music tracks, which is an important feature of the latter game). The UsefulNotes/{{NES}} and UsefulNotes/SegaGenesis both had rudimentary PCM support, but this was mainly used for pre-recorded voices, sound effects, and drums. The modified [=OPL2=] chip (called an [=OPN2=]) used by the Genesis has a PCM codec mode, but the Genesis can also resort to manipulating the PSG to play back PCM sounds if needed; notably, it's how the Sonic 3 Launch Base zone BGM managed to have a percussion track and still have the "Go!" voice samples. Pretty much every system introduced since uses PCM.
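How a PCM (sample-based) chip retunes one stored recording across different pitches can be sketched as table lookup with a fractional step. This is a generic illustration of the idea, not any particular chip's behaviour, and the waveform and function name are invented for the example.

```python
# Sketch of PCM sample playback at varying pitch: a stored waveform is
# read back with a fractional position increment, so one recording can
# be replayed faster (higher) or slower (lower) than it was sampled.

def play_at_pitch(sample, step, length):
    """Nearest-neighbour resampling: step 2.0 plays an octave up."""
    out = []
    pos = 0.0
    for _ in range(length):
        out.append(sample[int(pos) % len(sample)])  # wrap = loop the cycle
        pos += step
    return out

wave = [0, 3, 6, 3, 0, -3, -6, -3]     # one stored cycle of a waveform
print(play_at_pitch(wave, 2.0, 4))     # [0, 6, 0, -6]: every other sample
```

Reading every other sample halves the period, raising the pitch an octave; real chips add interpolation and envelopes on top, but the core trick is this lookup, which is why one drum recording can serve a whole kit.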


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues. Nevertheless, developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is difficult, if not impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues. Nevertheless, developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues, but developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues. Nevertheless, developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.


With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient APIs; Macs had no such issues, but developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.

to:

With the move from [=CDs=] to [=DVDs=] (and later, digital downloads), game developers could no longer use Red Book audio for their games. Additionally, one drawback of CD audio is that transparently looping music is impossible. So they turned to another technique - compressed audio files. Essentially, audio files used in modern games are like standard [=MP3s=], except with a different compression algorithm and metadata regarding loop points. Such files have all the advantages of Red Book audio plus several more, such as better looping. Initially, processors were not powerful enough to handle this without choking (though this was really only true on the PC, largely due to its inefficient [=APIs=]; Macs had no such issues, but developers often overlooked the method because their games were written to be multi-platform). Today, most triple-A games tend to use proprietary audio formats like AD-X and Bink Audio, while indie games tend to use consumer formats such as [=MP3=] and [=OGG=]. Also, processors have not only gotten leaps and bounds faster, but have also gone multi-core, making it trivial to decode compressed music while still having enough grunt to handle the general graphics and gameplay logic without choking up. Coupled with the fact that games are now often better optimized than before, the earlier issues that plagued software-driven wavetable synthesis no longer apply.
