Useful Notes: Analog Vs Digital

Electronics come in two forms: analog and digital. Either can do what we want, but each has its own advantages and disadvantages.

Analog

A sine wave, a typical analog signal
Analog electronics use a continuous signal: if you had a sheet of paper of infinite length, you could draw the signal without ever lifting your hand. As far as electronics are concerned, the signal represents the voltage or current at that particular point in time. Since the real world is analog, it was probably easier to make electronics analog at first, but analog is losing ground as the dominant form of signaling because almost anything can now be digitized. Even so, at the end of the day, what you get out is analog, because the world is analog.

The defining quality of an analog signal is that it's easy to amplify, attenuate, and filter, so a meaningful signal can be pulled out even when the raw signal looks like garbage. As an example, when tuning into a radio station, you first hear static, then static mixed with the station, and finally just the station, as the tuner filters out the proper frequency from the noisy signal. The radio waves traveling through the air are very low power, but the correct signal is filtered out and then amplified before making its way to the speakers. Another advantage is that when analog signals degrade, the degradation is gradual.
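If you want to play with the idea, here's a rough Python sketch that simulates it digitally: a 5 Hz "station" buried in static, recovered with a simple moving-average filter. The frequency, noise level, and window size are arbitrary choices, and a real radio would use an analog band-pass filter instead, but the principle of pulling a signal out of noise is the same.

```python
# A toy digital simulation of filtering: recover a clean sine wave from noise
# with a simple moving-average (low-pass) filter. Assumes numpy is available.
import numpy as np

rate = 1000                                   # samples per second
t = np.arange(0, 1, 1 / rate)                 # one second of time
clean = np.sin(2 * np.pi * 5 * t)             # 5 Hz "station"
noisy = clean + np.random.normal(0, 0.5, t.size)  # add static

window = 25                                   # average over 25 samples
kernel = np.ones(window) / window
filtered = np.convolve(noisy, kernel, mode="same")

# The filtered signal tracks the original far better than the noisy one.
print("noise power before:", np.mean((noisy - clean) ** 2))
print("noise power after: ", np.mean((filtered - clean) ** 2))
```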

The problems with analog signals, as far as electronics are concerned, are that they consume more power, since components are rarely completely off, and that the range of values they can represent is limited by the quality of the components and the amount of power those components can safely handle; an analog computer pushed beyond its tolerances can be a real-world example of both Tim Taylor Technology and Explosive Overclocking. If you're designing or analyzing analog circuits, it also makes for some Mind Screw math, such as Fourier series and Laplace transforms.

Another issue is that, due to the random nature of the universe, it's very hard, if not impossible, to manipulate and maintain an analog signal perfectly: gears skip, belts slip, pipes leak, and electrical currents fluctuate. This inherent randomness can actually be an advantage in some cases, however, as it allows for rapid, naturalistic "fuzzy math" modeling of inherently chaotic and vaguely defined situations. A common historical use for electronic analog computers has been modeling air circulation and liquid flow through complex pipe systems, for instance. They have also been used to rapidly generate accurate-enough solutions for certain differential equations that are extremely difficult or tedious to solve using digital computation or traditional precision calculation.

In short, analog electronics are less efficient and harder to program, but they're also more robust, more tolerant of errors, and model the world more realistically. In trope terms, an analog computer embraces "Don't Think, Feel."

Digital

A square wave, a typical digital signal. Also well known for its use in chiptunes.
Digital electronics use a discrete signal; that is, there are hard values with nothing in between them. You couldn't take a pencil and draw a digital signal without lifting it up. Despite what the picture on the right shows, the vertical lines are just a representation. A digital signal can either be binary, with only an "on" and an "off" state, or use a series of defined levels.

In a binary digital system, 0 volts is typically the off state, while some other voltage (commonly 1.5 V, 3.3 V, 5 V, or 12 V, usually based on what batteries can provide) is used as the on state.
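As a rough sketch of how a binary input reads a messy analog voltage: anything above a threshold counts as on, anything below as off. The 3.3 V supply and the halfway threshold below are illustrative choices; real logic families define their own threshold levels.

```python
# Interpret noisy analog voltages as clean digital bits using a threshold.
# Supply voltage and threshold are illustrative, not from any specific standard.
SUPPLY = 3.3            # volts for the "on" state
THRESHOLD = SUPPLY / 2  # halfway point between off and on

def to_bit(voltage: float) -> int:
    """Interpret an analog voltage as a digital 0 or 1."""
    return 1 if voltage >= THRESHOLD else 0

# Noisy readings still snap back to the intended bits.
readings = [0.2, 3.1, 2.9, 0.4, 3.3]
print([to_bit(v) for v in readings])   # [0, 1, 1, 0, 1]
```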

The defining characteristic is that a digital signal can be copied perfectly, in the sense that small amounts of noise won't kill the signal. As far as electronics are concerned, digital signals also require less average power, since components can sit in a state that's fully off (or mostly off).

The problem with digital signals, though, is that if part of the signal is trashed, the entire signal has to be thrown away unless some form of error correction is available. This is akin to handwriting: if someone writes a letter sloppily, forgets a letter, or misspells a word, you may not be able to make out what the word really is. And if you're in a part of the world that has digital TV, you may find it very annoying that poor reception means the channel cuts out completely, rather than just getting staticky like in an analog system.

A digital computer, in short, requires the employment of an Abstract Scale to function, and is inherently unstable due to the efficient approximations it has to make, but makes up for this with versatility and simplicity.

Converting Between Analog and Digital

There are two hardware devices used to convert signals from analog to digital and back; conveniently, they're called the Analog to Digital Converter (ADC) and the Digital to Analog Converter (DAC).

The defining factor in the conversion is the sampling rate. A few smart people came up with the Nyquist-Shannon sampling theorem, which states that the sampling rate of an ADC must be at least double the highest frequency component of the signal in order to reconstruct it perfectly. It's a very simplified explanation, but it works well for most applications. One of them is music: since the upper range of human hearing is about 20 kHz, roughly 40 kHz is all you need to reconstruct any audible signal.
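To get a feel for what goes wrong when that rule is violated, here's a small Python sketch of aliasing: a tone above half the sampling rate "folds back" and shows up as a lower frequency. The example frequencies are arbitrary.

```python
# Standard frequency-folding calculation: what frequency you actually observe
# after sampling a pure tone at a given rate.
def aliased_frequency(signal_hz: float, sample_rate_hz: float) -> float:
    """Frequency observed after sampling signal_hz at sample_rate_hz."""
    folded = signal_hz % sample_rate_hz
    return min(folded, sample_rate_hz - folded)

print(aliased_frequency(15_000, 40_000))  # 15000.0 -- below 20 kHz, captured fine
print(aliased_frequency(25_000, 40_000))  # 15000.0 -- above 20 kHz, aliases down
```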

To encode and decode signals, two major methods are used. The first, pulse-code modulation (PCM), samples the signal at a fixed rate that satisfies the Nyquist-Shannon theorem. Another parameter, the sample size (or bit depth), defines how finely each sample is divided between the lowest and highest possible values: the larger the bit depth, the more accurately the signal is captured. Most audio and video is encoded and decoded in this fashion.
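Here's a minimal Python sketch of the idea; the 8 kHz rate, 8-bit depth, and 440 Hz tone are arbitrary illustrative numbers, not any particular audio standard.

```python
# Minimal PCM sketch: sample a sine wave at a fixed rate and round each
# sample to the nearest of 2**bits levels.
import math

def pcm_encode(freq_hz, sample_rate_hz, bits, n_samples):
    levels = 2 ** bits
    samples = []
    for n in range(n_samples):
        value = math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)  # -1..1
        # Map -1..1 onto 0..levels-1 and round to an integer code.
        code = round((value + 1) / 2 * (levels - 1))
        samples.append(code)
    return samples

# 8-bit PCM of a 440 Hz tone sampled at 8 kHz: each number is one sample.
print(pcm_encode(440, 8000, 8, 10))
```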

The second, pulse-width modulation (PWM), uses a very high sampling rate but a sample size of one bit: the output is either fully on or fully off, and the more time it spends on, the stronger the average output; the more time it spends off, the weaker it gets. This is commonly used to dim lights, particularly LEDs and fluorescent lamps.
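A rough sketch of the idea in Python (the 100-step period and the particular duty cycles are arbitrary):

```python
# PWM sketch: the output is only ever fully on or fully off, but the fraction
# of time it spends on (the duty cycle) sets the perceived brightness.
def pwm_period(duty_cycle: float, steps: int = 100) -> list[int]:
    """One PWM period as a list of 1s (on) and 0s (off)."""
    on_steps = round(duty_cycle * steps)
    return [1] * on_steps + [0] * (steps - on_steps)

dim = pwm_period(0.25)     # LED on 25% of the time -> looks dim
bright = pwm_period(0.90)  # on 90% of the time -> looks bright
print(sum(dim) / len(dim), sum(bright) / len(bright))  # 0.25 0.9
```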

The biggest issue with conversion is something called quantization error. Because analog signals have infinitely many possible values, it's impossible for any digital system to reconstruct an analog signal perfectly. For instance, if you have a signal level of 3.5 but can only store 3 or 4 as its value, you have to pick whichever value makes more sense. One solution is to alternate between the two values (a technique known as dithering), so that our brain averages out the signal.
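Here's a tiny Python sketch of the 3.5 example, showing how alternating between the two neighboring values averages the error away (the number of samples is arbitrary):

```python
# Plain rounding always stores the same value, leaving a constant error;
# alternating between the two neighboring codes (simple dithering) averages
# back out to the true value over time.
true_value = 3.5

rounded = [round(true_value)] * 10                       # 4, 4, 4, ...
dithered = [3 if i % 2 == 0 else 4 for i in range(10)]   # 3, 4, 3, 4, ...

print(sum(rounded) / len(rounded))    # 4.0 -- a constant 0.5 error
print(sum(dithered) / len(dithered))  # 3.5 -- error averages away
```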