Analog electronics use a continuous signal: if you had a paper of infinite length, you could draw the signal without ever lifting your hand. As far as electronics are concerned, the signal essentially represents the voltage or current at a particular point in time. Since the real world is analog, it was probably easier to make electronics analog at first, but analog has been losing ground as the dominant form of signaling because almost anything can now be digitized. At the end of the day, though, what you get out is analog, because the world is analog.

The defining quality of an analog signal is that it's easy to amplify, attenuate, and filter out a meaningful signal, even when the raw signal looks like garbage. For example, when tuning into a radio station, you first hear static, then static mixed with the station, and finally just the station as the tuner filters out the proper frequency from the "noisy" signal. The radio waves traveling through the air are very low power, but the correct signal is filtered out before being amplified on its way to the speakers. Another advantage is that when analog signals degrade, the degradation is gradual.

The problems with analog signals, as far as electronics are concerned, are that they consume more power (components are rarely completely off), and that the maximum value range of their variables is limited by the quality of the components and the amount of power they can safely handle; an analog computer pushed beyond its tolerances can be a real-world example of both Tim Taylor Technology and Explosive Overclocking. Designing or analyzing analog circuits also involves some Mind Screw math, such as Fourier series and Laplace transforms. Another issue is that, due to the random nature of the universe, it's very hard, if not impossible, to manipulate and maintain an analogue signal perfectly: gears skip, belts slip, pipes leak, and electrical currents fluctuate.
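The radio-tuning idea above can be sketched in a few lines. This is only an illustration, not a real tuner: the 1 kHz "station", the 10 kHz sample rate, the noise level, and the crude FFT band-pass filter are all made-up choices.

```python
# A sketch of "tuning in": pulling one frequency out of heavy static,
# assuming a made-up 1 kHz "station" sampled at 10 kHz.
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                                   # sample rate in Hz
t = np.arange(fs) / fs                        # one second of samples
tone = np.sin(2 * np.pi * 1_000 * t)          # the station, at 1 kHz
static = rng.normal(0, 2.0, fs)               # noise, louder than the tone
received = tone + static

# Crude band-pass "tuner": keep only spectrum bins within 50 Hz of
# 1 kHz, zero everything else, then transform back.
spectrum = np.fft.rfft(received)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
spectrum[np.abs(freqs - 1_000) > 50] = 0
recovered = np.fft.irfft(spectrum, n=fs)

# The filtered signal matches the original tone far better than
# the raw noisy signal does.
print(np.corrcoef(recovered, tone)[0, 1] > np.corrcoef(received, tone)[0, 1])
```

A real radio does this with analog filter circuits rather than an FFT, but the principle is the same: the meaningful signal survives even when buried in noise.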
This inherent randomness can actually be an advantage in some cases, however, as it allows for rapid, naturalistic "fuzzy math" modeling of inherently chaotic and vaguely defined situations. A common historical usage for electronic analogue computers has been in modeling air circulation and liquid flow through complex pipe systems, for instance. They have also been used to rapidly generate accurate-enough solutions for certain differential equations which are extremely difficult or tedious to solve using digital computation or traditional precision calculation. In short, analogue electronics are less efficient and harder to program, but are also more robust and tolerant of errors, and model the world more realistically. In trope terms, an analogue computer embraces "Don't Think, Feel".
Digital electronics use a discrete signal, that is, there are hard values with nothing in between them. You couldn't take a pencil and draw a digital signal without lifting it up. Despite what the picture on the right shows, the vertical lines are just representations; in the real world, the signal is a sum of sine waves that approximates those sharp transitions. This is known as a Fourier series. A digital signal can either be binary, representing an "on or off" state, or a series of defined levels. In a binary digital system, the high voltage is typically 1.5V, 3.3V, or 5V, with the low voltage being 0V or the negative of the high voltage. The defining characteristic is that a digital signal can be copied perfectly. Small bits of noise also don't kill the signal, as long as the value being represented stays within tolerance. As far as electronics are concerned, digital signals require less average power, since components can sit in a state that's fully off or mostly off. The main problem with digital signals is that if part of the signal is trashed, the entire signal has to be thrown away unless something is available to correct it. This is akin to handwriting: if someone writes a letter sloppily, forgets a letter, or misspells a word, you may not be able to make out what the word really is. And if you're in a part of the world that has digital TV, you can find it very annoying that poor reception means the channel cuts out completely, rather than just getting staticky like in an analog system. A digital computer, in short, requires the employment of an Abstract Scale to function, and is inherently unstable due to the approximations it has to make, but makes up for this with versatility and simplicity.
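The "sum of sine waves" point can be demonstrated directly: a partial Fourier series built from odd sine harmonics approaches the hard high/low levels of an idealized square wave, and the more harmonics you add, the closer it gets. A minimal sketch (the period and number of sample points are arbitrary choices):

```python
# A "digital" square wave as a Fourier series: summing odd sine
# harmonics 1, 3, 5, ... approaches the hard high/low levels.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)

def square_approx(n_terms):
    """Partial Fourier series of a unit square wave with period 1."""
    s = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                      # odd harmonics only
        s += np.sin(2 * np.pi * n * t) / n
    return 4 / np.pi * s

ideal = np.sign(np.sin(2 * np.pi * t))     # the idealized square wave
err_few = np.abs(square_approx(3) - ideal).mean()
err_many = np.abs(square_approx(50) - ideal).mean()
print(err_many < err_few)                  # more harmonics, closer fit
```

The fit never becomes exact at the jumps (the overshoot there is the Gibbs phenomenon), which is why real digital edges always have finite rise times.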
Converting Between Analog and Digital
There are two hardware devices used to convert signals from analog to digital and back; conveniently, they're called the Analog to Digital Converter (ADC) and the Digital to Analog Converter (DAC). The defining factor in the conversion is the sampling rate. A few smart people came up with the Nyquist-Shannon sampling theorem, which states that the sampling rate of an ADC must be more than double the highest frequency component of the signal in order to reconstruct it perfectly. It's a very simple explanation, but it works well for most applications. One of them is music: as the upper range of human hearing is about 20 kHz, a sampling rate a bit above 40 kHz is enough to reconstruct any audio signal a human can hear, which is why CDs use 44.1 kHz. To encode and decode signals, two major methods are used. The first, pulse-code modulation (PCM), samples the signal at a steady rate chosen per the Nyquist-Shannon theorem and stores each sample as a number. A second parameter, the sample size (or bit depth), defines how finely the range between the highest and lowest points is subdivided; the larger the bit depth, the more accurate the stored signal. Most audio and video is encoded and decoded in this fashion. The second method, pulse-width modulation (PWM), uses a very high sampling rate but a sample size of 1 bit: the longer the output stays high within each cycle, the stronger the effective signal, and the longer it stays low, the weaker. This is usually deployed in lights, typically LEDs or fluorescent lamps. The biggest issue with conversion is something called quantization error. As analog signals have infinite subdivisions, it's impossible for any digital system to perfectly reconstruct an analog signal. For instance, if you have a signal level of 3.5 but you can only store 3 or 4 as its value, then you're going to have to pick which value makes more sense. One solution is to alternate between the two values, so that our brain averages out the signal.
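The sampling-rate requirement can be shown numerically: a tone above half the sampling rate becomes mathematically indistinguishable from a lower-frequency "alias" once sampled. A small sketch, with made-up frequencies (a 6 kHz tone sampled at only 8 kHz, below the roughly 12 kHz the theorem demands):

```python
# Nyquist in miniature: sampling a 6 kHz tone at 8 kHz makes its
# samples identical (up to sign) to those of a 2 kHz tone, so the
# original can never be recovered from the samples alone.
import numpy as np

fs = 8_000
t = np.arange(fs) / fs
high = np.sin(2 * np.pi * 6_000 * t)    # undersampled 6 kHz tone
alias = np.sin(2 * np.pi * 2_000 * t)   # its 2 kHz alias
print(np.allclose(high, -alias, atol=1e-9))
```

This is why real ADCs put an analog low-pass filter in front of the sampler: anything above half the sampling rate must be removed before sampling, not after.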
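The 3.5-between-3-and-4 example can likewise be sketched; the "alternate between the two values" trick is essentially dithering. All the numbers here are illustrative:

```python
# Quantization error, and the "alternate between two values" fix
# (dithering). Assumes a made-up signal sitting exactly at 3.5,
# between the storable integer levels 3 and 4.
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(10_000, 3.5)

# Hard quantization always picks the same neighbor, so the average
# stored value is off by a constant 0.5.
hard = np.floor(signal)

# Dithering adds a little random noise before quantizing; the stored
# values flip between 3 and 4, and their average recovers ~3.5.
dithered = np.floor(signal + rng.uniform(0, 1, signal.size))

print(hard.mean())        # stuck at 3.0
print(dithered.mean())    # close to 3.5
```

The per-sample error is no smaller with dithering, but it turns a fixed offset into noise that averages out, which is exactly what ears and eyes are forgiving about.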
The analog hole
One advantage (or problem, depending on which side of the copyright fence you're on) of analog electronics is that analog output cannot be effectively stopped from being copied. Since humans have their own analog processors (eyes and ears) and cannot sense digital input by themselves (yet, anyway; the future might change this), the last step of any sort of audio or video interaction, no matter how digital the whole process has been up to that point, must by definition be purely analog. And anything that our bodies' organics can process, another analog recorder can as well; so no matter how much protection, DRM, and artificial limitation is imposed on the digital recording and the gear required to play it, an analog copy can always be made. For the reasons explained above, such a recording will never be a 1:1 copy and some quality loss is unavoidable, but given good enough recording equipment it can be kept minimal, and once made, the copy can often be re-digitized with none of the original restrictions and no further quality loss, ready to be shared with the world. This is called the analog hole, and because closing it is effectively impossible, it's an ongoing nightmare for every entity concerned with monetizing audiovisual entertainment. There have been many attempts to close the analog hole. One of the more popular techniques is to design software that refuses to read certain patterns. A widespread example is the EURion constellation, a pattern of circles used to inhibit counterfeiting of banknotes; detecting it has been a feature of many professional image editing programs, fax machines, and copy machines since the 2000s. Other methods use audio and video digital watermarking, along with watermark detection software. In theory, this causes a computer attempting to access a re-digitized file to refuse it (see image); but in practice there are too many programs that lack the (often proprietary) programming necessary for widespread coverage.