Darth Wiki / Idiot Design

So you wanna know where I learned about dovetail joints? My high school beginner's woodworking class. Must have been brainstorming amateur hour at Huawei when they suggested joining aluminum and plastic with a woodworking joint.
Zack Nelson, on one of the reasons the Nexus 6P failed the bend test so catastrophically.

Every once in a while, we encounter an item with a design flaw so blatant that one can only wonder how no one thought to fix that before releasing it to the public. Whether it be the result of unforeseen consequences of a certain design choice, cost-cutting measures gone too far, rushing a product to meet a certain release date at the cost of skipping important testing, or simply pure laziness on the creators' parts, these design flaws can result in consequences ranging from unintentional hilarity, to mild annoyance, to rendering the product unusable, to potentially even putting the users' lives in danger.

See also Idiot Programming for when the flaw comes from poor coding practices. See the real life section of The Alleged Car for automotive examples.


Specific companies:


    Apple 
This video highlights Apple's many, many hardware design failures from 2008 onwards.

  • The original Apple II was one of the first home computers to have color graphics, but it had its share of problems:
    • Steve Wozniak studied the design of the electronics in Al Shugart's floppy disk drive and came up with a much simpler circuit that did the same thing. But his implementation had a fatal flaw: the connector on the interface cable that connected the drive to the controller card in the computer was not polarized or keyed — it could easily be connected backwards or misaligned, which would fry the drive's electronics when the equipment was powered up. (Shugart used a different connector which could not be inserted misaligned, and if it were connected backward it wouldn't damage anything; it just wouldn't work.) Apple "solved" this problem by adding a buffer chip between the cable and the rest of the circuit, whose purpose was to act as a multi-circuit fuse which would blow if the cable were misconnected, protecting the rest of the chips in the drive.
    • The power switch on the Apple II power supply was under-rated and had a tendency to burn out after repeated use. Unlike the "fuse" chip in the disk drives (which was socketed), the power switch was not user-replaceable. The recommended "fix": leave the power switch "on" all the time and use an external power switch to turn the computer off. At least one vendor offered an external power switch module shaped to fit nicely behind the Apple II, but most users simply plugged their computer into a standard power strip and used its on/off switch to turn their equipment off.
  • The old Apple III was three parts stupid and one part hubris; the case was completely unventilated and the CPU didn't even have a heat sink. Apple reckoned that the entire case was aluminum, which would work just fine as a heat sink, no need to put holes in our lovely machine! This led to the overheating chips actually becoming unseated from their sockets; tech support would advise customers to lift the machine a few inches off the desktop and drop it, the idea being that the shock would re-seat the chips. It subsequently turned out that the case wasn't the only problem, since a lot of the early Apple IIIs shipped with defective power circuitry that ran hotter than it was supposed to, but it helped turn what would have otherwise been an issue that affected a tiny fraction of Apple IIIs into a widespread problem. Well, at least it gave Cracked something to joke about.
    • A lesser, but still serious design problem existed with the Power Mac G4 Cube. Like the iMacs of that era, it had no cooling fan and relied on a top-mounted cooling vent to let heat out of the chassis. The problem was that the Cube had more powerful hardware crammed into a smaller space than the classic iMacs, meaning that the entirely passive cooling setup was barely enough to keep the system cool. If the vent was even slightly blocked however, then the system would rapidly overheat. Add to that the problem of the Cube's design being perfect for putting sheets of paper (or worse still, books) on top of the cooling vent, and it gets worse. Granted, this situation relied on foolishness by the user for it to occur, but it was still a silly decision to leave out a cooling fan (and one that thankfully wasn't repeated when Apple tried the same concept again with the Mac Mini).
    • Another issue related to heat is that Apple has a serious track record of not applying thermal grease appropriately in their systems. Most DIY computer builders know that a rice-grain-sized glob of thermal grease is enough. Apple pretty much caked the chips that needed it with thermal grease.
    • Heat issues are also bad for MacBook Pros. Not so much for casual users, but very much so under heavy processor loads. Since the MBP is pretty much de rigueur for musicians (and almost as much for graphic designers and moviemakers), this is a rather annoying problem, since Photoshop with a lot of images or layers, or any music software with a large number of tracks, WILL drive your temperature through the roof. Those who choose to game on an MBP have it even worse — World of Warcraft will start to cook your MBP within 30 minutes of playing, especially if you have a high room temperature. The solution? Get the free software programs Temperature Monitor and SMCFanControl. Keep an eye on your temps and be very liberal with upping the fans: the only downsides to doing so are more noise, a drop in battery time, and possible fan wear, all FAR better than your main system components being fried or worn out early.
  • The original model of the iPhone had its headphone jack set into a recess on the top of the phone. While this worked fine with the stock Apple earbuds, headphones with larger plugs wouldn't fit without an adapter.
  • Apple made a big mistake with one generation of the iPhone: depending on how you held it, it could not receive a signal. The iPhone 4's antenna is integrated into its outside design as a bare, unpainted aluminum strip around its edge, with a small gap somewhere along the way. Good signal strength relies on this gap staying open, but if you hold the phone wrong (which "accidentally" happens to be the most comfortable way to hold it, especially if you're left-handed), your palm covers that gap and, if it's in the least bit damp, shorts it, rendering the antenna completely useless. Lacquering the outside of the antenna, or simply moving the air gap a bit so it couldn't be shorted by the user's hand, would have easily solved the problem, but apparently Apple is much more concerned about its "product identity" than about its users. Apple's suggestion to users was, essentially, to "hold it right". As it turns out, Apple would soon be selling modification kits for $25 a pop, for an issue that, by all standards, should have been fixed for free, if not discovered and eliminated before it even hit the market. At least three separate lawsuits accused Apple of scamming customers over this.
  • MacBook disc drives are often finicky to use, sometimes not reading the disc at all and getting it stuck in the drive. The presented solutions? Restarting your computer, or holding down the mouse button until it ejects. And even that isn't guaranteed: sometimes the disc will jut out just enough that the eject won't register at all, and nudging it with a pair of tweezers is what finishes the job. To put this in perspective, technologically inferior video game consoles like the Wii and PlayStation 3 can do a slot-loading disc drive far better than Apple apparently can.
  • Say what you will about the iPhone 7 making the headphones share the charging port, but there's no denying that, on closer inspection, this combined with the lack of wireless charging creates a whole new problem. As explained in this video, the port can only withstand a certain amount of wear and tear (somewhere between 5,000 and 10,000 plug cycles, although only Apple themselves know the exact number). Because you're now using the same port for two different things, chances are you'll wear it out twice as fast as on any other iPhone. And because the phone has no wireless charging function like most other phones, once that port wears out, your phone is pretty much toast.
  • The Apple Magic Mouse 2 tried to improve upon the original Magic Mouse by making its battery rechargeable. Unfortunately, this choice was widely ridiculed, as for some reason, the charging port was located on the underside of the mouse, rendering it inoperable while plugged in. One wonders why they couldn't just put the port on the front, like pretty much every other chargeable mouse. Apparently, it's to preserve the mouse's aesthetics, in yet another case of Apple favoring aesthetics over usability.

    Intel 
  • The "Prescott" core Pentium 4 has a reputation for being pretty much the worst CPU design in history. It had some design trade-offs which lessened the processor's performance-per-clock over the original Pentium 4 design, but theoretically allowed the Prescott to run at much higher clockspeeds. Unfortunately, these changes also made the Prescott vastly hotter than the original design, making it impossible for Intel to actually achieve the clockspeeds they wanted. Moreover, they totally bottlenecked the processor's performance, meaning that Intel's usual performance increasing tricks (more cache and faster system buses) did nothing to help. By the time Intel came up with a new processor that put them back in the lead, the once hugely valuable "Pentium" brand had been rendered utterly worthless by the whole Prescott fiasco, and the new processor was instead called the Core 2. The Pentium name is still in use, but is applied to the mid-end processors that Intel puts out for cheap-ish computers, somewhere in between the low-end Celerons and the high-end Core line.
    • While the Prescott iteration of the design had some very special problems of its own, the Pentium 4 architecture in general had a rather unenviable reputation for underperforming. The design was heavily optimised in favour of being able to clock to high speeds in an attempt to win the "megahertz war", on the grounds that consumers at the time believed that higher clock speed = higher performance. The sacrifices made to achieve those high clock speeds, however, resulted in very poor performance per tick of the processor clock. For example, the processor had a very long instruction pipeline, which was fine as long as the program being executed didn't do anything unexpected like jump to a new instruction; if it did, every instruction in the pipeline had to be discarded, stalling the processor until the new execution flow was loaded in, and because the pipeline was much deeper than the Pentium III's, the processor would stall for far more clock cycles while the pipeline was purged and refilled. The science of branch prediction was in its infancy at that point, so pipeline stalls were a common occurrence on Pentium 4 processors (a concrete illustration of how much a mispredicted branch can cost appears in the sketch at the end of this folder). This, combined with other bone-headed design decisions like the omission of a barrel shifter and the provision of multiple execution units of which only one could usually execute per clock cycle, meant that the contemporary Athlon processor from AMD could eat the P4 alive at the same clock speed thanks to a far more efficient design. (The last problem was partially addressed with "hyperthreading", which presents a single core to the OS as two cores and uses some clever trickery in the chip itself to let execution units that would otherwise sit idle execute a second instruction in parallel, provided certain criteria are met.)
  • The Prescott probably deserves the title of worst x86 CPU design ever (although there might be a case for the 80286), but allow us to introduce you to Intel's other CPU project of the same era: the Itanium. Designed for servers, using a bunch of incredibly cutting-edge hardware design ideas, and promised to be incredibly fast. The catch? It could only hit that theoretical speed if the compiler generated perfectly optimized machine code for it. It turned out you couldn't optimize most of the code that runs on servers that hard, because programming languages suck, and even if you could, the compilers of the time weren't up to it. It also turned out that if you didn't give the thing perfectly optimized code, it ran about half as fast as the Pentium 4 and sucked down twice as much electricity doing it. Did we mention this was right about the time server farm operators started getting serious about cutting their electricity and HVAC bills?
    • Making things worse, this was actually Intel's third attempt at implementing such a design. The failure of their first effort, the iAPX-432, was somewhat forgivable, given that it wasn't really possible to achieve what Intel wanted on the manufacturing processes available in the early eighties. What really should have taught them the folly of their ways came later in the decade with the i860, a much better implementation of what they had tried to achieve with the iAPX-432... which still happened to be both slower and vastly more expensive than not only the 80386 (bear in mind Intel released the 80486 a few months before the i860) but also the i960, a much simpler and cheaper design which subsequently became the Ensemble Darkhorse of Intel, and is still used today in certain roles.
    • In the relatively few situations where it gets the chance to shine, the Itanium 2 and its successors can achieve some truly awesome performance figures. The first Itanium on the other hand was an absolute joke. Even if you managed to get all your codepaths and data flows absolutely optimal, the chip would only perform as well as a similarly clocked Pentium III. Even Intel actually went so far as to recommend that only software developers should even think about buying systems based on the first Itanium, and that everyone else should wait for the Itanium 2, which probably ranks as one of the most humiliating moments in the company's history.
      • The failure of the first Itanium was largely down to the horrible cache system that Intel designed for it. While the L1 and L2 caches were both reasonably fast (though the L2 cache was a little on the small side), the L3 cache used the same off-chip cache system designed three years previously for the original Pentium II Xeon. By the time the Itanium hit the streets, however, running external cache chips at CPU speeds just wasn't possible anymore without some compromise, so Intel decided to give them extremely high latency. This proved to be an absolutely disastrous design choice, and basically negated the effects of the cache. Moreover, Itanium instructions are four times larger than x86 ones, leaving the chip strangled between its useless L3 cache, and L1 and L2 caches that weren't big or fast enough to compensate. Most of the improvement in the Itanium 2 came from Intel simply making the L1 and L2 caches similar sizes but much faster, and incorporating the L3 cache into the CPU die.
  • The Intel Atom wasn't much better. The first generation (Silverthorne and Diamondville) was even slower than a Pentium III: despite the low power consumption, the CPU performance was awful. To make matters worse, it effectively only supported Windows XP, and even that lagged. The generations that followed, up until Bay Trail, were mere attempts at being competitive, and were still slower than a contemporary VIA processor (which was itself considered a slow chip in its day).
    • While the Diamondville (N270 and N280) was just barely fast enough to power a light-duty laptop, the Silverthorne (Z530 and Z540) was meant for mobile devices and had even lower performance, entirely insufficient for general-purpose computing. But the mobile market was already firmly in the hands of ARM chips, so Intel ended up with warehouses full of Silverthorne CPUs that nobody wanted. And so it was that they enacted license restrictions that forced manufacturers to use Silverthorne CPUs for devices with screens wider than 9 inches, scamming the public into buying laptops whose abysmal performance infuriated their owners and turned many off the concept of the netbook as a whole.
    • Averted wonderfully with Bay Trail, which managed to beat VIA's chips and pull level with AMD's low-end parts (which were having a very rough moment with Kabini and Temash) while offering decent GPU performance. Sadly, the next iteration, Cherry Trail/Braswell, took a brutal hit: the Atom team had the "bright" idea of improving GPU performance (to reach AMD Beema levels)... by sacrificing CPU performance. When they realized this ended up hurting both, they tried to compensate with higher Turbo speeds, but the constrained power budget only made things worse, leaving this generation a massive regression from Bay Trail.
    • And then there was Intel SoFIA. Intel had found some success in mobile with Moorefield (Bay Trail-derived chips with a PowerVR GPU), and people expected them to continue down that path, but instead they paired cut-down x86 cores with a Mali GPU in order to reduce costs. Sadly, the resulting chips were only about as fast as the slowest ARM design then available (the Cortex-A7), and sometimes slower; and those slowest ARM chips were themselves being replaced by the Cortex-A53/A35 as the new low tier, leaving Intel far behind. No wonder they cancelled SoFIA.
  • While Intel's CPU designers have mostly been able to avoid any crippling hardware-level bugs since the infamous FDIV bug in 1993 (say what you will about the Pentium 4, at least it could divide numbers correctly), their chipset designers seem much more prone to making screw-ups:
    • Firstly there was the optional Memory Translator Hub (MTH) component of the 820 chipset, which was supposed to allow the usage of more reasonably priced SDRAM instead of the uber-expensive RDRAM that the baseline 820 was only compatible with. Unfortunately the MTH basically didn't work at all in this role (causing abysmally poor performance and system instability) and was rapidly discontinued, eventually forcing Intel to create the completely new 815 chipset to provide a more reasonable alternative for consumers.
    • Then there were the 915 and 925 chipsets; both had serious design flaws in their first production run, which required a respin to correct, and ultimately forced Intel to ditch the versions they had planned with wi-fi chips integrated into the chipset itself.
    • The P67 and H67 chipsets were found to have a design error that supplied too much power to the SATA 3 Gbps controllers, which would cause them to burn out over time (though the 6 Gbps controllers were unaffected, oddly enough).
    • The high-end X79 chipset was planned to have a ton of storage features available, such as up to a dozen Serial Attached SCSI (SAS) ports along with a dedicated secondary DMI link for storage functions... only for it to turn out that none of said features actually worked, meaning that it ended up being released with fewer features than its consumer counterparts.
    • A less severe problem afflicts the initial runs of the Z87 and H87 chipsets, in which USB 3.0 devices can fail to wake up when the system comes out of standby, and have to be physically disconnected and reconnected for the system to pick them up again.
    • Speaking of the 820 chipset, anyone remember RDRAM? It was touted by Intel and Rambus as a high performance RAM for the Pentium III to be used in conjunction with the 820. But implementation-wise, it was not up to snuff (in fact benchmarks revealed that applications ran slower with RDRAM than with the older SDRAM!), not to mention very expensive, and third party chipset makers (such as SiS, who gained some fame during this era) went to cheaper DDR RAM instead (and begrudgingly, so did Intel, leaving Rambus with egg on their faces), which ultimately became the de facto industry standard. RDRAM still found use in other applications, though, like the Nintendo 64 and the PlayStation 2... where, at least for the former, it turned out to be one of the system's biggest performance bottlenecks.
      • A small explanation of what happened: Rambus RDRAM is more serial in nature than more traditional memory like SDRAM (which is parallel). The idea was that RDRAM could use a high clock rate to compensate for its narrow bus width (RDRAM also used a neat innovation, double data rate, using both halves of the clock signal to send data; however, two could play that game, and DDR SDRAM soon followed). But there were two problems. First, all this conversion required additional complex (and patented) hardware, which raised the cost. Second, and more critically, this kind of electrical maneuvering involves conversions and so on, which adds latency... and memory is one of the areas where latency is a key metric: the lower, the better. SDRAM, for all its faults, operated more on a Keep It Simple, Stupid principle, and it worked, and later versions of the technology introduced necessary complexities at a gradual pace (such as the DDR2/3 preference for matched pairs/trios of modules), making them more tolerable.
  • Intel's SSDs have a particular failure mode that rubs people the wrong way. After a certain amount of writes, the drive goes into read only mode. This is great and all until you consider that the number of writes is often lower compared to other SSDs of similar technology (up to about 500 terabytes versus 5 petabytes) and that you only have one chance to read the data off the drive. If you reboot, the drive then goes to an unusable state, regardless of whether or not the data on the drive is still good.
  • In early 2018, a hardware vulnerability affecting nearly every Intel processor released since 1995 (except for Itanium and pre-2013 Atoms), known as Meltdown, was disclosed: speculative code (that is, machine code that the CPU predicts it will need to run and tries while it waits for the "actual" instructions to arrive) could be executed before any check that the code was allowed to run at the required privilege level. This could allow ring-3 (user-level) code to read ring-0 (kernel-level) data. The result? Any kernel developers targeting Intel needed to scramble to patch their page table handling so it would do extra babysitting on affected chips (a sketch of the cache-timing probe such attacks rely on appears at the end of this folder). The fix could cause a 7% to 30% performance reduction on every single Intel chip in the hands of anyone who had updated their hardware in the last decade, with the actual loss depending on multiple factors. Linux kernel developers considered nicknaming the bug "Forcefully Unmap Complete Kernel With Interrupt Trampolines" (FUCKWIT), which presumably sums up what they thought of Intel's hardware designers at that moment. It subsequently turned out that Intel's weren't the only CPUs vulnerable to this form of attack, but of the major manufacturers, theirs were by far the most severely affected (AMD's current Ryzen line-up is almost totally immune to the exploits, and while ARM processors are also vulnerable, they tend to operate in much more locked-down environments than x86 processors, making it harder to push the exploit through). The performance penalties that the fix required ended up ensuring that when AMD released their second-generation Ryzen processors later that year (the first generation of which didn't clock all that well), what would likely have been a fairly even match-up turned into the same sort of Curb-Stomp Battle that AMD had frequently been on the winning side of during the Athlon 64 era. It also forced Intel to redesign their future processors essentially from the ground up to incorporate the fixes into subsequent product lines.
    • It's worth noting that after the subsequent updates and fixes, the actual performance loss caused by the Meltdown fix is quite variable, and some people with Intel chips may not notice a difference at all - for example, the performance loss in gaming desktops and in-game benchmarks is negligible (with the then-current Coffee Lake chips in particular being virtually unaffected), while the performance loss on Intel's server chips is much more pronounced. It's still an utterly ridiculous oversight on Intel's part, though.
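    To make the misprediction penalty mentioned under the Pentium 4 entry above concrete, here is a small self-contained C sketch. It is purely illustrative (nothing Pentium 4-specific; the array size, the 128 threshold and the rand()-based data are arbitrary choices): it sums the same data twice, once with an unpredictable data-dependent branch and once after sorting makes that branch predictable. On a deeply pipelined CPU the unpredictable pass stalls on a large fraction of iterations; note that an aggressive optimising compiler may turn the branch into a conditional move or vectorise the loop, which hides the effect.

        /* Illustrative sketch: why mispredicted branches hurt deep pipelines. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 22)

        /* sums only the "large" values; the if() is the data-dependent branch */
        static long sum_big_values(const int *data, size_t n) {
            long sum = 0;
            for (size_t i = 0; i < n; i++) {
                if (data[i] >= 128)
                    sum += data[i];
            }
            return sum;
        }

        static int cmp_int(const void *a, const void *b) {
            return *(const int *)a - *(const int *)b;
        }

        int main(void) {
            int *data = malloc(N * sizeof *data);
            for (size_t i = 0; i < N; i++)
                data[i] = rand() % 256;            /* random: the branch is a coin flip */

            clock_t t0 = clock();
            long s1 = sum_big_values(data, N);
            clock_t t1 = clock();

            qsort(data, N, sizeof *data, cmp_int); /* sorted: the branch becomes predictable */
            clock_t t2 = clock();
            long s2 = sum_big_values(data, N);
            clock_t t3 = clock();

            printf("unsorted: %ld in %.3fs, sorted: %ld in %.3fs\n",
                   s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
                   s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
            free(data);
            return 0;
        }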
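    And for the Meltdown entry, here is a heavily simplified sketch of only the measurement half of such attacks: the FLUSH+RELOAD cache-timing probe that reveals which line of a probe array a transient load touched. This is illustrative code, not a working exploit; the 256-entry array, the 4096-byte stride and the cycle threshold are conventional values from the public write-ups, and the permission-violating kernel read (plus the exception suppression around it) is replaced here by an ordinary, legal load.

        /* Sketch of a FLUSH+RELOAD probe: flush 256 cache lines, let transient
           execution touch one of them (faked below), then time reloads to see
           which line became cached. x86 only; build with GCC or Clang. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <x86intrin.h>   /* _mm_clflush, _mm_lfence, __rdtscp */

        #define STRIDE 4096      /* one page per value, to sidestep the prefetcher */
        static uint8_t probe[256 * STRIDE];

        static void flush_probe(void) {
            for (int i = 0; i < 256; i++)
                _mm_clflush(&probe[i * STRIDE]);
        }

        static uint64_t time_access(volatile uint8_t *p) {
            unsigned aux;
            uint64_t t0 = __rdtscp(&aux);
            (void)*p;                      /* the timed load */
            _mm_lfence();
            return __rdtscp(&aux) - t0;
        }

        int main(void) {
            memset(probe, 1, sizeof probe);  /* make sure the pages are mapped */
            flush_probe();

            /* In the real attack, a transient, permission-violating load of a
               kernel byte b would touch probe[b * STRIDE] here before the fault
               is raised. We fake it with an ordinary load of the value 42. */
            (void)probe[42 * STRIDE];

            for (int i = 0; i < 256; i++) {
                uint64_t t = time_access(&probe[i * STRIDE]);
                if (t < 100)                 /* threshold is machine-dependent */
                    printf("cached line: %d (%llu cycles)\n",
                           i, (unsigned long long)t);
            }
            return 0;
        }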

    AMD 
  • AMD's wildly successful Athlon Thunderbird ran at high speeds and for a while obliterated everything else on the market, but it was also the hottest CPU ever made up until that point. This wouldn't be so bad in and of itself — even hotter CPUs were made by both AMD and Intel in later years — but the Thunderbird was special in that it had no heat-management features whatsoever. If you ran one without the heatsink — or, more plausibly, if the heavy chunk of aluminium sitting on the processor broke the mounting clips through its sheer weight and dropped to the floor of the case — the processor would insta-barbecue itself.
  • In late 2006 it was obvious that Intel were determined to pay AMD back for the years of ass-kickings it had endured at the hands of the Athlon 64, by releasing the Core 2 Quad only five months after the Core 2 Duo had turned the performance tables. The Phenom was still some ways off, so AMD responded with the Quad FX, a consumer-oriented dual-processor platform that could mount two dual-core chips (branded as Athlon 64s, but actually rebadged Opteron server chips). While re-purposing Opterons for desktop use was something that had worked magnificently three years prior, this time it became obvious that AMD hadn't thought things through. Not only was this set-up more expensive than a Core 2 Quad (the CPUs and motherboard worked out to about the same price, but you needed twice the memory modules, a more powerful PSU and a copy of Windows XP Professional), but it generally wasn't any faster, and in anything that didn't use all four cores actually tended to be far slower, as Windows XP had no idea how to deal with the two memory pools created by the dual-CPU set-up (Vista was a lot more adept in that department, but had its own set of problems).
  • Amazingly enough, things got worse when the Phenom eventually did arrive on the scene. In addition to being clocked far too slow to compete with the Core 2 Quad — which wasn't really due to any particular design flaw, other than its native quad-core design being a little Awesome, but Impractical — it turned out that there was a major problem with the chip's translation lookaside buffer (TLB), which could lead to crashes and/or data corruption in certain rare circumstances. Instead of either initiating a full recall or banking on the fact that 98% of users would never encounter this bug, AMD chose a somewhat odd third option and issued a BIOS patch that disabled the TLB altogether, crippling the chip's performance. They soon released Phenoms that didn't have the problem at all, but any slim hope of it succeeding went up in smoke after this fiasco.
    • Interestingly enough, things got better with the Phenom II, which improved performance dramatically, bringing AMD close to Intel once again; better still, their six-core chips were good enough to win over some buyers (and could even beat the first-generation Core i7 in some workloads), indicating that the original Phenom was an otherwise-sound design brought down by poor clock speeds, the TLB glitch, and not enough cache. Which is still more than can be said for the next major design...
  • AMD's Bulldozer is the kind of design that makes people wonder why AMD ever went down that route. On the surface, having two integer cores share a floating point unit makes some sense, as most operations are integer-based. Except those cores also shared an instruction decoder and scheduler, effectively making a single core with two disjointed pools of execution units. On top of that, each integer core was weaker than a Phenom II core. To make matters worse, AMD also adopted a deep pipeline and high clock frequencies; anyone paying attention to processor history would recognize those two decisions as the root causes of the Pentium 4's failure. Still, it was somewhat forgivable, since AMD compensated with more cores (up to 8 threads across 4 modules) and higher clock speeds than Intel, which made the chips at least useful on some occasions (like video editing or virtual machines).
    • However, things went downhill with Carrizo, the last major family based on the Bulldozer lineage. It cut the L2 cache that had given the design just enough performance to avoid being completely outmatched by Intel's Haswell, and the family remained stuck on a 28 nm process, making matters worse. Even worse? The builders of the laptops Carrizo was intended for tended to drop it into their worst designs, dragging its performance down to near Intel Nehalem levels, which by then was six years out of date. One could get the impression that AMD simply didn't care about the Bulldozer family by this point anymore, and just quickly shoved Carrizo out the door so as not to waste the R&D money, while also establishing the Socket AM4 infrastructure that their next family, Ryzen (which got them back on track and then some), would use.

Multiple companies


    Computers and smartphones 
  • Some HP laptops have a feature called "Action Keys". On most laptops, doing things like adjusting volume or screen brightness would require you to hold the "fn" key and press a certain function key at the same time. Since the "fn" key is normally at the bottom of the keyboard and the function keys are normally at the top, this could get annoying or uncomfortable, so HP had the idea to switch this around: pressing a function key on its own would adjust volume or brightness, and users who actually want to press the function key can do so by holding down the "fn" key. Since most computer users adjust the volume or brightness much more often than they need to press F6, this is nice, but what if you do need to use the function keys a lot (for example, when programming and using debug mode)? Hopefully HP included an option to conveniently switch Action Keys on and off, right? You wish. Disabling or re-enabling the Action Keys requires you to turn off the laptop, then switch it back on while holding F10 to open up the BIOS menu, where you can find the option. Each time you want to switch between these modes, you need to reset your laptop, which is a massive inconvenience. And even worse, screen-reading software does not work in the BIOS menu, which makes changing these options almost impossible for visually impaired users. On the bright side, key combinations like Alt+F4 still work, but why HP didn't just make this easy to switch on and off is a mystery.
  • The Coleco Adam, a 1983 computer based on the fairly successful ColecoVision console, suffered from a host of problems and baffling design decisions. Among the faults were the use of a proprietary tape drive which was prone to failure, locating the whole system's power supply in the printer of all places (meaning the very limited daisy-wheel printer couldn't be replaced, and if it broke down or was absent, the whole computer was rendered unusable), and poor electromagnetic shielding which could lead to tapes and disks being erased at startup. Even after revised models ironed out the worst bugs, the system was discontinued after less than 2 years and sales of 100,000 units.
  • The Samsung Galaxy Note 7, released in August 2016 and discontinued just two months later. On the surface, it was a great cell phone that competed with any number of comparable phablets. The problem? It was rushed to market to beat Apple's upcoming iPhone 7 (which came out the following month), and this left it with a serious problem: namely, that it had a habit of spontaneously combusting. Samsung hastily recalled the phone in September once it started causing dozens of fires (to the point where aviation safety authorities were telling people not to bring them onto planes), and gave buyers replacements with batteries from a different supplier. When those phones started catching fire as well, it became obvious that the problems had nothing to do with quality control and ran to the heart of the phone's design. By the time that Samsung discontinued the Galaxy Note 7, it had already become the Ford Pinto of smartphones and a worldwide joke, with every major wireless carrier in the US having already pulled them from sale. Samsung especially doesn't want to be reminded of it, to the point that they ordered YouTube to take down any video showing a mod for Grand Theft Auto V that reskins the Sticky Bombs into the Galaxy Note 7.
  • The Google Nexus 6P, made by Huawei, has multiple major design flaws that make accidental damage from sitting on it potentially catastrophic: the back isn't screwed into the structure of the phone even though it's not intended to be removable, leaving the thin aluminum on the sides and a slab of Gorilla Glass 4 not designed for structural integrity to hold the phone together; there's a gap between the battery and motherboard right by the power button that creates a weak point; and the plastic and metal are held together with dovetail joints, which are intended for woodworking. Zack Nelson, testing the phone for his YouTube channel JerryRigEverything, was able to destroy both Nexus 6Ps he tested immediately.
  • The Dell Inspiron line was somewhat known for having very faulty speakers. These usually boil down to two issues. A) The ribbon cable that connects the speakers to the motherboard is very frail, and often comes loose if you move your device too much (did we mention this is a laptop we're talking about?). B) The headphone jack has a physical switch that turns off the speakers. If you plug something into the jack, the speakers turn off. Unplug it, and they turn on. Simple, right? Well, this switch is bound to get stuck at one point or another, and the only way to even get to it is to disassemble the damn thing, or to spray contact cleaner into the port and hope that the air pressure flips the switch.
  • Some models of the Acer Aspire One had the right speaker mounted in such a way that its vibrations would, at best, cause the hard disk to slow to a crawl and, at worst, cause bad sectors and corrupted partitions.

    Computer hardware and peripherals 
  • Famously, the "PC LOAD LETTER" message you'd get on early HP Laserjets has been elevated as an example of confusion in user interfaces. Anyone without prior knowledge would assume something is wrong with the connection to the PC, or something is off in the transfer of data ("load" being interpreted as "upload"), and that the printer is refusing the "letter" they're trying to print. What it actually means is "load letter-sized paper into paper cassette"; why the printer wasn't simply programmed to say "OUT OF PAPER" is a Riddle for the Ages.
  • Some HP computers come with batteries or power supply units that are known to explode. Literally, with no exaggeration, they release sparks and smoke (and this is a "known issue"). Others overheat and burst into flames. And there have been multiple recalls, proving that they obviously didn't learn from the first one.
  • The infamous A20 line. Due to a quirk in how its addressing system worked, Intel's 8088/86 CPUs could theoretically address slightly more than their advertised 1 MB. But because they physically had only 20 address pins, the resulting addresses simply wrapped around, so the last 64K of the address space was actually the same as the first. Some early programmers were, unsurprisingly, stupid enough to use this almost-not-a-bug as a feature. So, when the 24-bit 80286 rolled in, a problem arose — nothing wrapped any more. In a truly stellar example of "compatibility is God" thinking, IBM's engineers couldn't think up anything better than to simply block the offending 21st address pin (the aforementioned A20 line) on the motherboard side, making the 286 unable to use a solid chunk of its memory above 1 MB until this switch was turned on. This might have been an acceptable (if very clumsy) solution had IBM defaulted to having the A20 line enabled and provided an option to disable it when needed, but instead they decided to have it always turned off unless the OS specifically enabled it. By the time of the 386, no sane programmer used that wrap-around trick any more, but turning the A20 line on is still among the very first things any PC OS has to do (a sketch of the enable sequence appears at the end of this folder). It wasn't until Intel introduced the Core i7 in 2008 that they finally decided "screw it", and locked the A20 line into being permanently enabled.

  • Qualcomm had their own share of failures: the Snapdragon 808 and 810 were very powerful chips for their time (2015), being based on the high-performance ARM Cortex-A57 design, but they had one very important flaw: they overheated to the point of throttling and losing performance! Three handsets were hit especially hard by this: the LG G4 (with the Snapdragon 808), which became infamous for dying after just one year; the HTC One M9 (with the Snapdragon 810), which became infamous for overheating in general; and the Sony Xperia Z5, for the same reasons as the M9. No wonder the rest of the competition (HiSilicon and MediaTek) avoided the Cortex-A57 design.
  • The iRex Digital Reader 1000 had a truly beautiful full-A4 eInk display... but was otherwise completely useless as a digital reader. It could take more than a minute to boot up, between 5 and 30 seconds to change between pages of a PDF document, and could damage the memory card inserted into it. Also, if the battery drained all the way to nothing, starting to charge it again would cause such a current draw that it would fail to charge (and cause power faults) on any device other than a direct USB-to-mains connector, which was not supplied with the hardware.
  • Motorola is one of the most ubiquitous producers of commercial two-way radios, so you'd think they'd have ironed out any issues by now. Nope, there's a bunch.
    • The MTX 9000 line (the "brick" radios) were generally made of Nokiamantium, but they had a huge flaw in the battery clips. The battery was held at the bottom by two flimsy plastic claws, and the clips at the top were just slightly thicker than cellophane, meaning that the batteries quickly became impossible to keep in place without buying a very tight-fitting holster or wrapping rubber bands around the radio.
    • The software to program virtually any Motorola radio, even newer ones, is absolutely ancient. You can only connect via serial port, and it has to be an actual serial port: a USB-to-serial adapter generally won't work. And the system it's running on has to be basically stone age (Pentium Is from 1993 are generally too fast), meaning that in most radio shops there's a 486 in the corner just for programming them. Even the new XPR line generally can't be programmed with a computer made after 2005 or so.
      • If you can't find a 486 computer, there's a build of DOSBox floating around ham circles with beefed-up code to slow down the environment even more than is possible by default. MTXs were very popular for 900MHz work because, aside from the battery issue, they were tough, and cheap to get thanks to all the public agencies and companies that sold them off in bulk.
  • VESA Local Bus. Cards were very long and hard to insert because they needed two connectors: the standard ISA slot plus an additional 32-bit connector hardwired to the 486 processor bus, which caused huge instability and incompatibility problems. Things could get worse if a non-graphics expansion card (usually IO ports) was installed next to a video card, which could result in crashes when games using SVGA graphics accessed the hard drive. The multiple clock frequencies involved imposed high standards on the construction of the cards in order to avoid further issues. All these problems eventually caused the 486-bus-dependent VLB to be replaced by PCI, starting from late-development 486 boards onwards into the Pentium era.
  • The Radio Shack TRS-80 (model 1) had its share of hardware defects:
    • The timing loop constant in the keyboard debounce routine was too small. This caused the keys to "bounce" — one keypress would sometimes result in two of that character being input (see the debounce sketch at the end of this folder for an illustration of why the constant matters).
    • The timing loop constant in the tape input routine was wrong. This made the volume setting on the cassette player extremely critical. This problem could somewhat be alleviated by placing an AM radio next to the computer and tuning it to the RFI generated by the tape input circuit, then adjusting the volume control on the tape player for the purest tone from the radio. Radio Shack eventually offered a free hardware modification that conditioned the signal from the tape player to make the volume setting less critical.
    • Instead of using an off-the-shelf Character Generator chip in the video circuit, RS had a custom CG chip programmed, with arrow characters instead of 4 of the least-used ASCII characters. But they made a mistake and positioned the lowercase "a" at the top of the character cell instead of at the baseline. Instead of wasting the initial production run of chips and ordering new chips, they eliminated one of the video-memory chips, added some gates to "fold" the lowercase characters into the uppercase characters, and modified the video driver software to accommodate this. Hobbyists with electronics skills were able to add the missing video memory chip, disconnect the added gates and patch the video driver software to properly display lowercase, albeit with "flying a's". The software patch would have to be reloaded every time the computer was booted. Radio Shack eventually came out with an "official" version of this mod which included a correctly programmed CG chip.
    • The biggest flaw in the Model 1 was the lack of gold plating on the edge connector for the Expansion Interface. Two-thirds of the RAM in a fully expanded TRS-80 was in the EI, and the bare copper contact fingers on the edge connector oxidized readily, resulting in very unreliable operation. It was often necessary to shut off the computer and clean the contacts several times per day. At least one vendor offered a "gold plug", which was a properly gold-plated edge connector which could be soldered onto the original edge connector, eliminating this problem.
    • In addition, the motherboard-to-EI cable was very sensitive to noise and signal degradation, which also tended to cause random crashes and reboots. RS attempted to fix this by using a "buffered cable" to connect the EI to the computer. It helped some, but not enough. They then tried separating the 3 critical memory-timing signals into a separate shielded cable (the "DIN plug" cable), but this still wasn't enough. They eventually redesigned the EI circuit board to use only 1 memory timing signal, but that caused problems for some of the unofficial "speed-up" mods that were becoming popular with hobbyists.
    • The Floppy Disk Controller chip used in the Model I EI could only read and write Single Density disks. Soon afterwards a new FDC chip became available which could read and write Double Density (a more efficient encoding method that packs 80% more data in the same space). The new FDC chip was almost pin-compatible with the old one, but not quite. One of the values written to the header of each data sector on the disk was a 2-bit value called the "Data Address Mark". 2 pins on the single-density FDC chip were used to specify this value. As there were no spare pins available on the DD FDC chip, one of these pins was reassigned as the "density select" pin. Therefore the DD FDC chip could only write the first 2 of the 4 possible DAM values. Guess which value TRS-DOS used? Several companies (starting with Percom, and eventually even Radio Shack themselves) offered "doubler" adapters — a small circuit board containing sockets for both FDC chips! To install the doubler, you had to remove the SD FDC chip from the EI, plug it into the empty socket on the doubler PCB, then plug the doubler into the vacated FDC socket in the EI. Logic on the doubler board would select the correct FDC chip.
  • The TRS-80 Model II (a "business" computer using 8-inch floppy disks) had a built-in video monitor with a truly fatal flaw. The sweep signals used to deflect the electron beam in the CRT were generated from a programmable timer chip. When the computer booted, one of the first things it would do is write the correct timer constants to the CRTC chip. However, an errant program could accidentally write any other values to the CRTC chip, which would throw the sweep frequencies way off. The horizontal sweep circuit was designed to operate properly at just one frequency and would "send up smoke signals" if operated at a significantly different frequency. If your screen went blank and you heard a loud high-pitched whine from the computer, you had to shut the power off immediately, as it only took a few seconds to destroy some rather expensive components in the monitor.
  • Nvidia's early history is interesting — in the same way a train wreck is. There's a reason why their first 3D chipset, the NV1, barely gets a passing note in the official company history page. See, the NV1 was a weird chip which they put on an oddball — even for the times — hybrid card meant to let you play specially ported Sega Saturn games on the PC. The chip's weirdness came from its quadratic primitives, when everybody else used, and has used ever since, triangles. Developing for a quad-supporting chip was complicated, and porting previously existent triangle games to quads wasn't much better either, so the NV1 was wildly unpopular from the start. Additionally, the hybrid cards integrated other features (such as MIDI playback) that weren't needed and increased cost and complexity. When Microsoft came out with Direct3D, it effectively killed the NV1, as it was all but incompatible with it. Nvidia stubbornly went on to design the NV2, still with quad mapping, intending to put it in the Dreamcast — but then Sega saw the writing on the wall, told Nvidia "thanks but no thanks" and used an NEC-built PowerVR instead. Nvidia finally saw the light, dropped quads altogether and came out with the Riva 128, which was a decent hit and propelled them onto the scene — probably with great sighs of relief from the shareholders.
  • Improvements in low-power processor manufacturing by Intel — namely the Bay Trail-T system-on-a-chip architecture — made it possible to build an honest-to-goodness x86 computer running full-blown Windows 8.1, with moderate gaming capabilities, in a box the size of a book. Cue a whole lot of confounded Chinese manufacturers using the same design standards they used on ARM systems-on-a-chip to build Intel ones, sometimes using cases with nary a single air hole and often emphasizing the lack of need for bulky heatsinks and noisy fans. Problem: you do actually need heat sinking on Intel SoCs, especially if you're going to pump them for all the performance they're capable of (which you will, if you use them for gaming or high-res video playback). Without a finned heatsink and/or a fan moving air around, they'll just throttle down to crawling speed and frustrate their users.
  • Back in the early days of 3D Graphics cards, when they were called 3D Accelerators, and even 3Dfx hadn't found their stride, there was the S3 Virge. The card had good 2D performance, but such a weak 3D chip that at least one reviewer called it, with good reason, the world's first 3D Decelerator. That epithet is pretty much Exactly What It Says on the Tin, as 3D games performed worse on PCs with an S3 Virge installed than they did in software mode, i.e. with no 3D acceleration at all.
  • The "Home Hub" series of routers provided by UK telecoms giant BT are fairly capable devices for the most part, especially considering that they usually come free to new customers. Unfortunately, they suffer from a serious flaw in that they expect to be able to use Wi-Fi channels 1, 5 or 11, which are naturally very crowded considering the ubiquity of home Wi-Fi, and BT's routers in particular. And when that happens, the routers will endlessly rescan in an effort to get better speeds, knocking out your internet connection for 10-30 seconds every 20 minutes or so. Sure, you can manually force the router into using another, uncongested channel... except that it'll keep rescanning based on how congested channels 1, 5 and 11, even if there are no devices whatsoever on the channel that you set manually. Even BT's own advice is to use ethernet (and a powerline adapter if needed) for anything that you actually need a rock-solid connection on.
  • Bilingual keyboards in Canada have a notorious reputation due to changing the layout of a few important keys. What would normally be an easily accessible left Shift key is cut in half to make room for the slash key that's normally located above the Enter/Return key, while said Enter/Return key is transformed into an inordinately huge button whose design serves no practical purpose whatsoever. This often results in mistypes caused by not extending one's left pinky finger far enough when attempting to capitalize letters. The most insulting part is that the relocated key doesn't really serve much of a purpose for French users anyway. A cautionary tale for Canadian residents: unless you're a British immigrant used to UK keyboards (which have the same inexplicable change), avoid bilingual keyboards like the plague.
  • Wireless mice still seem to have certain design flaws nobody seems particularly willing to fix. One widespread issue is that the power switch for a wireless mouse is, without exception, on the bottom of the mouse body, the part that is always grinding against the surface you use the mouse on; unless the switch is recessed far enough into the body, the surface will constantly jiggle it and mess with your mouse movement. This is especially ironic if the mouse advertises itself as a "gaming" device, then interferes with your aim while you actually play games with it. Rechargeable ones can get even worse, as most insist on an assembly that makes it impossible to use the mouse while it's charging, such as the already-mentioned Apple Magic Mouse 2. Then you get to some models, like Microsoft's Rechargeable Laser Mouse 7000, which manage to be even worse. On top of both of the aforementioned issues, it's designed in such a way that the battery in its battery compartment has to depress a small button for the charger to actually supply power. As it turns out, a regular AAA battery (which you aren't supposed to recharge) fits in just fine and depresses the button, but the proprietary rechargeable battery that comes with the mouse doesn't, because it's slightly thinner in diameter than a AAA battery; it cannot be charged until you wrap paper or something similar around it at the point where it's supposed to make contact with that button.
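    As a footnote to the A20 entry above, here is a minimal sketch of the "fast A20" method, which flips bit 1 of System Control Port A (I/O port 0x92). This is illustrative freestanding code for a boot loader or kernel running with port-I/O privileges (it will not run as an ordinary user program), and a real loader would typically also try the older 8042 keyboard-controller method and verify the result by checking whether addresses still wrap at 1 MB.

        /* Sketch: enabling the A20 line through the "fast A20" gate (port 0x92).
           Freestanding x86 code, ring 0 / real mode only; GCC or Clang inline asm. */
        #include <stdint.h>

        static inline uint8_t inb(uint16_t port) {
            uint8_t v;
            __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
            return v;
        }

        static inline void outb(uint16_t port, uint8_t v) {
            __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
        }

        void enable_a20_fast(void) {
            uint8_t ctl = inb(0x92);      /* read System Control Port A */
            if (ctl & 0x02)
                return;                   /* A20 already enabled */
            ctl |= 0x02;                  /* bit 1: A20 gate */
            ctl &= (uint8_t)~0x01;        /* bit 0 is "fast reset": keep it clear */
            outb(0x92, ctl);
        }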
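    And to illustrate the TRS-80 keyboard "bounce" mentioned above: a debounce routine is just a delay long enough to outlast the mechanical chatter of the key contacts. The toy C program below is our own simulation (the real routine was Z80 ROM code, and both constants here are invented); because the delay constant is smaller than the simulated bounce time, a single keypress gets registered several times.

        /* Toy simulation of keyboard bounce and an undersized debounce delay. */
        #include <stdio.h>

        #define BOUNCE_TIME    500    /* how long (in polls) the contacts chatter */
        #define DEBOUNCE_LOOPS 200    /* made-up delay constant: too small here! */

        static int t = 0;             /* fake time, advanced once per poll */

        /* pretend key 'A' is pressed at t=0 and chatters until BOUNCE_TIME */
        static int scan_matrix(void) {
            t++;
            if (t < BOUNCE_TIME)
                return (t % 2) ? 'A' : -1;   /* contacts making and breaking */
            return -1;                        /* contacts settled open */
        }

        int main(void) {
            int presses = 0;
            while (t < BOUNCE_TIME + 10) {
                if (scan_matrix() >= 0) {
                    presses++;
                    /* debounce delay: each loop iteration "costs" one poll of time */
                    for (int i = 0; i < DEBOUNCE_LOOPS; i++)
                        t++;
                }
            }
            /* with DEBOUNCE_LOOPS < BOUNCE_TIME this prints more than 1 */
            printf("keypresses registered: %d\n", presses);
            return 0;
        }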

    Mass-storage devices 
  • The Commodore 64, one of the most popular computers of all time, wasn't without its share of problems. Perhaps the most widely known is its extreme slowness at loading programs. This couldn't really be helped with a storage medium like tape, which remained slow even after various clever solutions to speed it up, but floppy disks really ought to have been faster. What happened was that Commodore had devised a hardware-accelerated system for transferring data that worked fairly well, but then also found a hardware bug in the input/output chip that made it work not at all. Replacing the buggy chips was economically unfeasible, so the whole thing was revised to work entirely in software. This slowed down drive access immensely and caused the birth of a cottage industry for speeder carts, replacement ROM chips and fastloaders, most of which sped things up at least fivefold. Additionally the drive itself had a CPU and some RAM to spare — effectively a secondary computer dedicated to the sole task of feeding data to the primary computer (hence its phenomenal cost) — so it was programmable, and people came up with their own ways to improve things further. Eventually non-standard formats were developed that loaded 25 times as fast as normal.
  • Why, after the introduction of integrated controllers into every other storage device, does the floppy still have to be controlled by the motherboard? Sure, it makes the floppy drive simpler to manufacture, but you're left with a motherboard that only knows how to operate a spinning mass of magnetic material. Try making a floppy "emulator" that actually uses flash storage, and you'll run into this nigh-impassable obstacle.
    • The floppy drive interface design made sense when it was created (the first PC hard drives used a similar interface), and it was later kept for backward compatibility. That said, a lot of motherboards also support IDE floppy drives (there may never have been any actual IDE floppy drives, but an LS-120 drive identifies itself as a floppy drive and can read regular 3.5" floppy disks), and a SCSI or USB device can also identify itself as a floppy drive. On the other hand, the floppy interface is quite simple if you want to build your own floppy drive emulator.
  • Sony's HiFD "floptical" drive system. The Zip Drive and the LS-120 Superdrive had already attempted to displace the aging 1.44MB floppy, but many predicted that the HiFD would be the real deal. At least until it turned out that Sony had utterly screwed up the HiFD's write head design, which caused performance degradation, hard crashes, data corruption, and all sorts of other nasty problems. They took the drive off the market, then brought it back a year later... in a new 200MB version that was totally incompatible with disks used by the original 150MB version (and 720KB floppies as well), since the original HiFD design was so badly messed up that they couldn't maintain compatibility and make the succeeding version actually work. Sony has made a lot of weird, proprietary formats that have failed to take off for whatever reason, but the HiFD has to go down as the worst of the lot.
  • The IBM Deskstar 75GXP, nicknamed the Death Star. While it was a large drive by year-2000 standards, it had a disturbing habit of suddenly failing and taking your data with it. The magnetic coating was of subpar reliability and came loose easily, causing head crashes that stripped the magnetic layer clean off. One user with a RAID server setup reported to their RAID controller manufacturer that they were supposedly replacing their IBM Deskstars at a rate of 600-800 drives per day. There have been many hard drives that have been criticized for various reasons, but the "Death Star" was something truly spectacular for all the wrong reasons.

    There is anecdotal evidence that IBM was even engaging in deception, knowingly selling faulty products and then spewing out rhetoric about the industry-standard failure rates of hard drives. This denial strategy started a chain reaction that led to a collapse in customer confidence. Class-action lawsuits helped convince IBM to sell their hard drive division to Hitachi in 2002. (See "Top 25 Worst Tech Products of All Time" for this and more.)
  • The Iomega Zip disk was undeniably a big success, but user confidence in the drives' reliability was shattered by the "Click of Death". Though tens of millions of the drives were sold, thousands of them would suffer misalignment and damage whatever media was inserted into them. This would not necessarily have been horrible by itself, but Iomega made a big mistake by downplaying the users who complained about drive failures and being anything but sensitive about their lost data.

    The Zip's worst problem wasn't even the fact that it could fail and potentially ruin a disk, but that such a ruined disk would go on to ruin whatever drive it was then inserted into. Which would then ruin more disks, which would ruin more drives, et cetera. Effectively a sort of hardware virus, it turned one of the best selling points of the platform (inter-drive compatibility) into its worst point of failure.

    After a class-action lawsuit in 1998, Iomega issued rebates in 2001 on future products. It was too little, too late; CD-R discs were by then more popular for mass storage and perceived as more reliable. The New Zealand site for PC World still has the original article available.
  • Maxtor, now defunct, once sold a line of external hard drives under the OneTouch label. However, the USB 2.0 interface would often malfunction and corrupt the filesystem on the drive, rendering the data hard to recover. You were better off removing the drive from its enclosure and connecting the disk to a spare SATA port on a motherboard. Not surprisingly, Maxtor was already having financial troubles before Seagate acquired them.
  • The 3M Superdisk and its proprietary 120MB "floptical" media were intended as direct competition to the Iomega Zip, but in order to penetrate a market that Iomega owned pretty tightly, the Superdisk needed a special feature to push it ahead. That feature was the ability to write up to 32 megabytes on a bog-standard 1.44MB floppy, using lasers to align the heads. Back then 32MB was significant storage, and people really liked the idea of recycling existing floppy stock — of which everybody had large amounts — into high-capacity media. The feature might just have given the Superdisk the edge it needed; unfortunately, what wasn't immediately clear, nor explicitly stated, was that the drive could only do random writes on its specific 120MB disks. It could indeed write 32MB to a floppy, but only by rewriting all the data every time a change, no matter how small, was made — basically like a CD-RW disc with no packet-writing system. This took a relatively long time and turned the feature into a gimmick. Disappointment ensued, and the format didn't even dent Iomega's empire before disappearing.
  • The Caleb UHD-144 was an attempt to gain a foothold in the floppy-replacement market. Unfortunately, it was ill-timed — the company hadn't taken a hint from the failures of Sony and 3M — so the product never had a chance. The Zip-250 and inexpensive CD-R media rendered the technology dead on arrival; a tragic example of a "good" idea rushed to market without checking what the competition had to offer. (The Zip-250 itself was quickly marginalized by cost-effective CD-R discs, which could be read by the optical drives already present in countless computers.)
  • Some DVD players, especially some off-brand models, seem to occasionally decide that the disc you have inserted is not valid. The user ejects the disc and reinserts it, hoping the player decides to cooperate. This can be a headache if the machine is finicky about disc defects due to copy protection, or can't deal with the brand of recordable DVD-R/+R disc you use for your home movies. Bonus points if you have to crack a copy-protected disc and burn it onto a blank DVD because you can't watch the master copy. The inverse situation is also possible, where even a DVD player from a "reputable" brand won't let you watch the locked-down DVD you just paid for.
    • Some DVD players are also overly cautious about which discs they're willing to play, whether due to regional lockout or video standards. Live in Australia and have a legally purchased Region 4 DVD? Turns out it was mastered in NTSC, and your player only handles PAL discs. Oops.
  • After solid-state drives started taking over from mechanical hard drives as the storage device of choice for high-end users, it quickly became obvious that transfer speeds would soon be bottlenecked by the Serial ATA standard, and that PCI Express was the obvious solution (a rough bandwidth comparison follows below). Using it in the form of full-sized cards wasn't exactly optimal, though, and the smaller M.2 form factor is thermally limited and can be fiddly to install cards in. The chipset industry's answer was SATA Express, a clunky solution which required manufacturers to synchronise data transfers over two lanes of PCI Express and two SATA ports — standards with completely different ways of working. Just to make it even worse, the cable was an ugly mess consisting of four separate connectors (two SATA, one PCI-E, and a SATA power plug that hung off the end of the cable). The end result was one of the most resounding failures of an industry standard in computing history, as a grand total of zero storage products made use of it (although a couple of manufacturers jury-rigged it into a way of connecting front-panel USB 3.1 ports), with SSD manufacturers instead flocking to the SFF-8639 (later renamed U.2) connector — essentially just four PCI-E lanes crammed into a simple cable.
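
    To put that bottleneck in numbers, here is a rough back-of-the-envelope sketch in Python (the line rates and encoding overheads are the commonly quoted nominal figures, and PCIe 3.0 links are assumed; none of these numbers come from the entry above):

        # Nominal usable bandwidth: why SATA couldn't keep up with SSDs.
        def usable_mb_per_s(gbit_line_rate, encoding_efficiency):
            """Convert a raw line rate in Gbit/s into usable MB/s."""
            return gbit_line_rate * encoding_efficiency * 1000 / 8

        sata3 = usable_mb_per_s(6.0, 0.8)                   # SATA III: 6 Gbit/s with 8b/10b encoding
        sata_express = 2 * usable_mb_per_s(8.0, 128 / 130)  # SATA Express: only two PCIe 3.0 lanes
        u2_x4 = 4 * usable_mb_per_s(8.0, 128 / 130)         # U.2 / M.2: four PCIe 3.0 lanes

        print(f"SATA III:     ~{sata3:.0f} MB/s")
        print(f"SATA Express: ~{sata_express:.0f} MB/s")
        print(f"U.2 (x4):     ~{u2_x4:.0f} MB/s")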
  • While many things set back the NES Classic Edition (stock issues, inconvenient design choices, tons of alternatives), one glaring oddity is its internal storage. Fans have calculated that the system could easily hold the NES's entire library ten times over, box art and metadata included, yet it only uses a small fraction of its capacity. Plenty of people have put the empty space to good use (it's very easy to hack the thing by simply reflashing the storage and putting whatever you want on it), but one has to wonder why Nintendo gave it so much storage only to do next to nothing with it, especially when the lineup of games it does include is as modest as it is.

    Game consoles 
  • Sony's PlayStation line has had its fair share of baffling design choices:
    • First, early batches of PS2s were known for developing a "Disc Read Error" after some time, eventually refusing to read any disc at all. The cause? The gear that moved the CD drive's laser had absolutely nothing to prevent it from slipping, so the laser would gradually drift out of alignment.
    • The original model of the PSP had buttons too close to the screen, so the Einsteins at Sony moved the switch for the square button over without moving the location of the button itself. Thus every PSP had an unresponsive square button that would also often stick. Note that the square button is the second-most important face button on the controller, right after X; in other words, it's used constantly during the action in most games. Sony president Ken Kutaragi confirmed that this was intentional, conflating a basic technical flaw with the concept of artistic expression.
    • And before you ask, yes, that's a real quote sourced by dozens of trusted publications. The man actually went there.
    • Another PSP-related issue was that if you held the original model a certain way, the disc would spontaneously eject. It was common enough to be a meme on YTMND and among the early Garry's Mod community.
    • The original Playstation wasn't exempt from issues either. The original Series 1000 units and later Series 3000 units (which converted the 1000's A/V RCA ports to a proprietary A/V port) had the laser reader array at 9 o'clock on the tray. This put it directly adjacent to the power supply, which ran exceptionally hot. Result: the reader lens would warp, causing the system to fail spectacularly and requiring a new unit. Sony admitted this design flaw existed... after all warranties on the 1000 and 3000 units were up and the Series 5000 with the reader array at 2 o'clock was on the market.
    • A common criticism of the PlayStation Vita is that managing your games and saves is a tremendous hassle: for some reason, deleting a Vita game will also delete its save files, meaning that if you want to make room for a new game, you'll have to kiss your progress goodbye. This can be circumvented by transferring the files to a PC or uploading them to the cloud, but the latter requires a Playstation Plus subscription to use. One wonders why they don't allow you to simply keep the save file like the PS1 and PSP games do. This is made all the more annoying by the Vita's notoriously small and overpriced proprietary memory cards, which means that if you buy a lot of games in digital format, you probably won't be able to hold your whole collection at the same time, even if you shell out big money for a 32GB (the biggest widely available format, about $60) or 64GB (must be imported from Japan, can cost over $100, and is reported to sometimes suffer issues such as slow loading, game crashes, and data loss) card.
    • While the PlayStation 4 is mostly a well-built console, it has an Achilles' Heel in that the heat exhaust vents on the back are too large. The heat produced by the system invites insects like cockroaches to crawl inside the console, which can then short circuit the console if they step on the wrong things. If you live in an area where roaches are hard to avoid or get rid of, owning a PS4 becomes a major crapshoot.
    • Another, more minor flaw of the PS4 (one that persists across all three variations of the console, no less) is that the USB ports on the front sit in a narrow recess, which makes it impossible to plug in larger USB drives or cables.
  • Microsoft's Xbox consoles aren't exempt from these either:
    • Most revisions of the original Xbox used a very cheap clock capacitor with such a high failure rate that it's basically guaranteed to break and leak all over the motherboard after a few years of normal use, far shorter than the normal lifetime of this type of component. Making this more annoying is that the clock capacitor isn't even an important part: it only keeps the clock running for about 30 minutes when the system is unplugged, and the console works fine without it. The last major revision (1.6) of the system uses a different, better brand and is exempt from this issue.
    • The infamous "Red Ring of Death" that occurs in some Xbox 360 units. It was a consequence of three factors: the introduction of lead-free solder, which is toxicologically safer but harder to properly solder with; inconsistent quality of the solder itself, which got better in later years but was prone to cracking under stress in early revisions; and bad thermal design, where clearance issues with the DVD drive caused Microsoft to use a dinky little heatsink for chips that were known to run hot. Result: the chips would overheat, the defective and improperly-applied solder would crack from the heat expansion and the connections would break.
    • Microsoft released official numbers stating that 51.4% of all early 360 units were or would eventually be affected by this issue. Unfortunately the problem got blown out of proportion by the media, so much so that people were afraid of encountering the issue on later versions that weren't affected. So afraid, in fact, that they'd often send in consoles that had a different and easily solvable issue: only "three segments of a red ring" mean "I'm broken, talk to my makers"; other red-ring codes could be as simple as "Mind pushing my cables in a bit more?", something easy to figure out if you Read the Freaking Manual.
    • The 360 has another design flaw that makes it very easy for the console to scratch game discs if the system is moved while a disc is still spinning in the tray. The problem apparently affected few enough Xbox 360 owners (though, ironically, Microsoft themselves were fully aware of it) that when the Slim model came around, the Red Ring issues were fixed (somewhat) but the disc scratching wasn't.
      • Most mechanical drives can tolerate at least some movement while active. It's not recommended (especially for hard drives, where the head is just nanometers away from the platter), but not accounting for any movement at all is just bad design. Anyone who has worked in the game-trading business (Gamestop/EB Games and the like) can tell you that not a day goes by without someone trying to get a game fixed or traded in as defective due to the evil Halo Scratch.
      • Microsoft recommends to not have the original Xbox One model in any position other than horizontal because the optical drive isn't designed for any orientation other than that. Note that every 360 model was rated to work in vertical orientation, even with the aforementioned scratching problem, and Microsoft quickly restored support for vertical orientation with the updated Xbox One S model.
    • Most of the 360's problems stem from the inexplicable decision to use a full-sized desktop DVD drive, which even in the larger original consoles took up almost a quarter of the internal volume. Early models also had four rather large chips on the motherboard, thanks to the 90 nm manufacturing process, which made them run quite hot (especially the GPU-VRAM combo that doubled as a northbridge). But the relative positions of the GPU and the drive (and the latter's bulk) meant that there simply wasn't room for a practical heatsink. Microsoft tried to address this in two separate motherboard redesigns — the first finally added at least some heatsink — but it took a third, with the chipset shrunk to just two components, to let the designers completely reshuffle the board and even add a little fan atop the new, larger heatsink, which finally got the problem mostly under control. Even the Slim version, however, still uses that hugeass desktop DVD drive, which still has nothing holding the disc securely in place, perpetuating the scratching problem.
    • The circular D-Pad on the 360's controller (likewise for the Microsoft SideWinder Freestyle Pro gamepad), which is clearly designed to look cool first and actually function second. Anyone who's used it will tell you how hard it is to reliably hit a direction on the pad without hitting the other sensors next to it. The oft-derided U.S. patent system might be partially responsible for this, as some of the good ideas (Nintendo's + pad, Sony's cross pad) were "taken". Still, there are plenty of PC pads that don't have this issue to the same degree... at least until the 360 became successful and every third-party pad started ripping off its controller wholesale, unusable D-pad and all. Some even go as far as to otherwise perfectly emulate an entirely different controller's design, then replace whatever actually-usable D-pad the original used with a 360-style one for no reason whatsoever.
    • Early in the life of the 360, many gamers used the console's optional VGA cable to play their games with HD graphics, as true HDTVs were still rare and expensive back then. PC monitors at the time usually had a 4:3 aspect ratio, which most game engines were smart enough to handle by simply sticking black bars at the top and bottom of the screen, with a few even rendering natively at the right resolution (the arithmetic behind that letterboxing is sketched below). However, some engines (including the one used for Need for Speed Most Wanted and Carbon) instead rendered the game in 480p — likely the only 4:3 resolution they supported — and upscaled the output. Needless to say, playing a 480p game stretched to a higher resolutionnote  looked awful, and arguably even worse than just playing it on an SDTV.
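
      For the curious, the "smart" approach amounts to a few lines of arithmetic. A quick Python sketch (the 1280x1024 monitor is just an illustrative example, not something named in the entry above):

          # Letterboxing 16:9 game output on a squarish monitor: scale the image
          # to fit the width, then pad the leftover height with black bars.
          def letterbox(monitor_w, monitor_h, content_aspect=16 / 9):
              scaled_h = round(monitor_w / content_aspect)  # image height at full width
              bar = (monitor_h - scaled_h) // 2             # black bar above and below
              return scaled_h, bar

          height, bar = letterbox(1280, 1024)
          print(f"Image area: 1280x{height}, with {bar}px black bars top and bottom")
          # -> Image area: 1280x720, with 152px black bars top and bottom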
    • The original 360's optical audio output was built into the analog video connector. The hardware supported HDMI video and optical audio simultaneously, but the ports were placed so close together that the bulky analog connector physically blocked an HDMI cord from being inserted. Removing the plastic shroud from the analog connector let you use both at the same time.
  • Nintendo have made their fair share of blunders over the years as well:
    • When Nintendo of America's engineers redesigned the Famicom into the Nintendo Entertainment System, they removed the pins which allowed for cartridges to include add-on audio chips, and rerouted them to the expansion slot on the bottom of the system in order to facilitate the western counterpart to the in-development Famicom Disk System. Unfortunately, not only was said counterpart never released, there was no real reason they couldn't have run audio expansion pins to both the cartridge slot and expansion port, other than the engineers wanting to save a few cents on the necessary IC. This meant that not only could no western NES game ever have any additional audio chips, it also disincentivised Japanese developers from using them, as it would entail reprogramming the entire soundtrack for a western release.
    • The Virtual Boy was a poorly-designed console in general, but perhaps the strangest design flaw was the complete absence of a head-strap. While this was ostensibly because of fears that the weight of the device could cause neck strain for younger players, for one thing pre-teens weren't officially supposed to be playing the device anyway, and for another thing the solution they came up with was a fixed 18-inch-tall stand that attached to the underside of the system. This meant that if you didn't have a table that was the exact right height, you'd likely end up straining your neck and/or back anyway, in addition to the eye strain that the system was notorious for. Even the R-Zone, a notoriously poor Shoddy Knock Off Product of the system, managed to make room for a head-strap in the design.
    • The Wii literally has no crash handler, so if you manage to crash the system in the right way, you open it up to Arbitrary Code Execution, and a whole load of security vulnerabilities await you. Have an SD card inserted? Crash a game that reads from and writes to it, and even more vulnerabilities open up. Nintendo claimed to have fixed these vulnerabilities through system updates, but in practice the updates merely removed anything that had been installed using them — nothing stopped you from exploiting the same holes again to reinstall it all. Of course, all of this is a good thing if you like modding your console.
    • While the Wii U ultimately proved a failure for several reasons, poor component choices helped contribute to its near-total lack of third-party support. It'd only be a slight exaggeration to say that the system's CPU was essentially just three Wii CPUs — and by extension, three GameCube CPUs — heavily overclocked and slapped together on the same die, with performance that was abysmally poor by 2012 standards. Its GPU, while not as slow, wasn't all that much faster than those of the PS3 and Xbox 360,note  and used a shader model in-between those of the older consoles and their successors, meaning that ported PS3/360 games didn't take advantage of the newer hardware, while games designed for the PS4 and Xbox One wouldn't even work to begin with due to the lack of necessary feature support. The system would likely have fared much better if Nintendo had just grabbed an off-the-shelf AMD laptop APU — which had enough power even in 2012 to brute-force emulate the Wii, eliminating the main reason to keep with the PowerPC line — stuffed it into a Wii case and called it a day. Fortunately Nintendo actually seem to have learned from this, basing the Nintendo Switch on an existing nVidia mobile chip which thus far has proven surprisingly capable of punching above its weight.
    • Speaking of the Nintendo Switch, don't buy a screen protector: the adhesive on it can melt off due to how the console vents its heat. Not that you'd really need one anyway, given how durable the screen is.
      • Another issue with the Nintendo Switch is its implementation of USB-C. The whole point of such a standard is that anything designed for it is supposed to work with any other device that uses it. This is the first Nintendo console to use a USB standard for its power delivery, however, and it shows: especially after the 5.0 firmware update, there have been reports of Switches bricking when used with third-party docks, apparently because Nintendo doesn't follow the USB-C specification properly. The issue became prevalent enough to force Nintendo to respond to the situation. Exactly why Nintendo felt the need to monkey around with the USB-C standard when it's capable of delivering roughly six times the power the Switch draws in docked mode remains unclear.
  • After using its PR to mock the GBA as childish, Nokia delivered the complete joke of a design that was the original N-Gage. As a phone, the only way to speak or hear anything effectively was to hold the thin edge of the unit to your ear (earning it the derisive nickname "taco phone" and spawning the infamous "sidetalking" meme). From a gaming point of view, it was even worse: the screen was oriented vertically instead of horizontally like most handhelds, limiting the player's view of the game field (very problematic with games like the N-Gage port of Sonic Advance). Worst of all, in order to change games, one had to remove the casing and the battery every single time.
    • During the development of the N-Gage, Nokia held a conference where they invited representatives from various game developers to test a prototype model of the N-Gage and give feedback. After receiving numerous suggestions on how to improve the N-Gage, Nokia promptly ignored most of them on the grounds that they were making the machines through the same assembly lines as their regular phones and they were not going to alter that.
  • As far as bad console (or rather, console add-on) design goes, the all-time biggest example is probably the Atari Jaguar CD. Aside from the crappy overall production quality of the add-on (the Jaguar itself wasn't too hot in this department, either) and poor aesthetics which many people have likened to a toilet seat, the CD sat on top of the Jaguar and often failed to connect properly to the cartridge slot, as opposed to the similar add-ons for the NES and Sega Genesis which used the console's own weight to secure a good connection. Moreover, the disc lid was badly designed and tended to squash the CD against the bottom of the console, which in turn would cause the disc motor to break apart internally from its fruitless attempts to spin the disc. All of this was compounded by Atari's decision to ditch any form of error protection code so as to increase the disc capacity to 800 megabytes, which caused software errors aplenty, and the fact that the parts themselves tended to be defective.
    • Of note, it was not rare for the device to come fresh from the box in such a state of disrepair that highly trained specialists couldn't get it working — for example, it could be soldered directly to the cartridge port and still display a connection error. This, by the way, is exactly what happened when James Rolfe tried to review the system.
    • As the Angry Video Game Nerd pointed out in his review of the console, the Jaguar is a top-loading console that lacks a door to protect the pin connectors from dust and moisture, which means you have to keep a game cartridge in the console at all times to protect the slot. The Jaguar CD fixes the problem by including a door, but if you have a broken one, the cartridge portion of the add-on won't work!
    • While the Atari 5200 wasn't that poorly-designed of a system in general — at worst, its absurdly huge size and power/RF combo switchbox could be annoying to deal with, but Atari eventually did away with the latter, and were working on a smaller revision when the market crashed, forcing them to discontinue the system — its controllers were a different matter entirely. In many ways they were ahead of their time, with analogue movement, along with start and pause buttons. Unfortunately, Atari cheaped out and didn't bother providing them with an auto-centring mechanism, along with building them out of such cheap materials that they usually tended to fail after a few months, if not weeks. The poor-quality controllers subsequently played a major part in dooming the system...
    • ...which brings us right back to the Jaguar. The controller for that system only included three main action buttons, a configuration which was already causing issues for the Sega Genesis at the time. In a baffling move, the controller also featured a numeric keypad, something that Atari had last done on the 5200. On that occasion the keypad was pretty superfluous and generally ignored by developers, but it was only taking up what would probably have been unused space on the controller, so it didn't do any harm by being there. The Jaguar's keypad, on the other hand, was far bigger, turning the controller into an ungodly monstrosity that has often been ranked as the absolute worst videogame controller of all-time.note  Atari later saw sense and produced a revised controller that added in three more command buttons and shoulder buttons, but for compatibility reasons they couldn't ditch the keypad, meaning that the newer version was similarly uncomfortable. Note that the Jaguar's controller was in fact designed originally for the Atari Panther, their never-released 32-bit console that was scheduled to come out in early-mid 1991, before it became obvious that the Genesis's 3-button configuration wasn't very future-proof. They evidently figured that the keypad gave them more than enough buttons and didn't bother creating a new controller for the Jaguar, a decision that would prove costly.
  • The Sega Saturn is, despite its admitted strong points on the player end, seen as one of the worst major consoles internally. It was originally intended to be the best 2D gaming system out there (which it was), so its design was basically just a 32X with higher clock speeds, more memory, and CD storage. Partway through development, however, Sega learned of Sony's and Nintendo's upcoming systems (the PlayStation and Nintendo 64 respectively), both designed with 3D games in mind, and realized the market — especially in their North American stronghold — was about to shift under their feet; they wouldn't have a prayer of competing. So, in an effort to bring more power to the console, Sega added an extra CPU and GPU, which sounds great at first... until you consider that there were also six other processors that couldn't interface with each other very well. This also made the motherboard prohibitively complex, making the Saturn the most expensive console to manufacture at the time. And lastly, much like the infamous Nvidia NV1, which has its own example on this very page, the GPU worked on four-sided primitives (quads) while the industry standard was triangles — a significant hurdle for multiplatform games, since titles developed with triangular primitives required extensive porting work to adapt them to quads. All this piled-on complication made development on the Saturn a nightmare. Ironically, consoles with multiple CPU cores would become commonplace two generations later with the Xbox 360 and PlayStation 3; like a lot of Sega's other products of that era, the Saturn pushed new features before game developers were really ready to make use of them.
  • The Kickstarter-funded Ouya console has gone down in history as having a huge raft of bad ideas:
    • As with many console systems, the Ouya gave users the option of funding their accounts by buying funding cards at retail, which provided codes that could be typed in to add money to an account. Unfortunately, the Ouya would not proceed beyond initial setup unless a credit card number was entered, making the funding cards a pointless option.
    • When an app offered an in-app purchase, a dialog was displayed asking the user to confirm the purchase — but no password entry or similar was required, and the OK button was the default. This meant that if you pressed buttons too quickly while an app offered a purchase, you could accidentally confirm it and be charged.
    • The system was touted from the beginning as open and friendly to modification and hacking. This sparked considerable interest, and it became obvious that a sizable part of the supporting community didn't really give two hoots about the Ouya's intended purpose as a gaming console; they just wanted one to hack into an Android or, preferably, Linux computer. The Ouya people — who, like every other console manufacturer, counted on making their profit more from selling games than hardware — promptly reneged on the whole openness thing and locked the Ouya down tight. The end result was a single-purpose gadget with a slow, unintuitive and lag-prone interface that couldn't run most already-available Android software despite being an Android system, and didn't even have many games that gamers actually wanted to buy.
    • Also, the HDMI output was permanently DRMed with HDCP. There was no switch to disable it, not even by turning on developer mode. People who were expecting the openness promised during the campaign were understandably angry at being lied to, as were those hoping to livestream or record Let's Plays of the games.
    • Even in its intended use, the Ouya disappointed its users. The main complaint was that the controllers were laggy; on a console built around action-packed casual games, this is very bad. It wasn't even a fault of the console itself, as a controller that exhibits the lag on an Ouya shows the same input lag when paired with a computer. Not everyone's controllers had the issue, so opinions differ on whether it was just a large batch of faulty controllers, or a design flaw that showed up during beta testing but was knowingly ignored and quietly corrected in later batches.
    • The fan used to prevent overheating isn't pointed at either of the two vents. Never mind that the console uses a mobile processor, which doesn't even need a fan. In theory, the fan would allow the processor to run at a higher sustained speed. In practice, it blows hot air and dust directly against the wall of the casing, artificially creating frequent issues due to overheating.

    Toys 
  • Despite being a cherished and well-loved franchise, Transformers has made numerous mistakes throughout the years:
    • Gold Plastic Syndrome. A number of Transformers toys made in the late 1980s and early 1990s were, in part or in whole, constructed with a kind of swirly plastic that crumbled quite quickly, especially on moving parts. Certain toys (like the reissue of Slingshot, various Pretenders and the Japanese exclusive Black Zarak) are known to shatter before they are taken out of the box. GPS makes plastic so fragile, a light breeze would probably make it shatter like glass. There are pictures of the effects of GPS in the article linked above, and it isn't pretty. Thankfully, the issue hasn't cropped up since the Protoform Starscream toy released in 2007, meaning Hasbro and TakaraTomy finally caught on.
      • Note that GPS isn't limited to either gold-colored plastics or the Transformers line. Ultra Magnus' original Diaclone toy (Powered Convoy, specifically the chrome version) had what is termed "Blue Plastic Syndrome" (which was thankfully fixed for the Ultra Magnus release, which uses white instead of blue), and over in the GI Joe line, the original Serpentor figure had GPS in his waist.
    • Besides the GPS plastic, some translucent plastic variants are also known to break very easily. The Deluxe-class 2007 movie Brawl figure had its inner gear mechanics made out of such plastic, which tended to shatter right at the figure's first transformation. On top of that, the posts that held its arms in place didn't match the shape of the holes they were supposed to peg into. Thankfully, the toy was released some time later in new colors, which fixed all of these issues.
    • Unposeable "brick" Transformers. The Generation 1 toys can get away with it (mainly because balljoints weren't widely used until Beast Wars, though they appeared as early as 1985 on Astrotrain, and safety was more important than poseability back then), but in later series, like most Armada and some Energon toys (especially the Powerlinked modes), they are atrocious — Energon Wing Saber is literally a Flying Brick, and not in the good way. With today's toy technology, there just isn't an excuse for something with all the poseability of a cinderblock whose transformation basically consists of laying the figure down, especially among the larger and more expensive ones. Toylines aimed at younger audiences (such as Rescue Bots and Robots in Disguise 2015) are a little more understandable, but for lines aimed at general audiences or older fans (such as Generations), it's inexcusable.
    • Toys overladen with intrusive gimmicks, affectionately nicknamed "Gimmickformers", are generally detested. These are meant to cater to a younger crowd, but when a figure has so much going on that it detracts from the transformation, articulation, and aesthetics, even kids may be repelled by it. Such a figure is the infamous Transformers Armada Side Swipe — featuring a boring (though passable) car mode, a Mini-Con "catapult" that doesn't normally work, and a hideous robot mode with excess vehicle bits hanging off everywhere (including the aforementioned catapult on his butt), the poseability of a brick, and the exciting Mini-Con-activated action feature of raising its right arm, which you can do manually anyway. Toy reviewer TJ Omega once did a breakdown of the figure and concluded that its head was the only part without any detracting design faults.
  • Cracked has an article called "The 5 Least Surprising Toy Recalls of All Time", listing variously dangerous toys. Amongst them...
    • Sky Dancers. The wicked offspring of a spinning top, a helicopter, and a Barbie doll. It came out looking like a beautiful fairy with propeller wings — and a launcher. When those little dolls went spinning... well, let's just say that there's a good reason why nowadays, most flying toys like this have rings encircling the protruding, rotating wings. Their foam wings became blades of doom that could seriously mess up a kid's face with cuts and slashes. There's no way to control those beauties once they are launched, and it's hard to predict where they will go...which is why they're "Dancers"!
      • There was also a boys' version called Dragon Flyz. There are also imitators. They could be quite enjoyable — it's just that they were also surprisingly dangerous.
      • Surprisingly, the Sky Dancers toy design has been brought back by Mattel for a DC Superhero Girls tie-in. Let's hope they learned from Galoob's mistakes.
    • Lawn Darts. Feathered Javelins! Surprisingly, they came out in the early 1960s and were only recalled when the first injuries were reported... in 1988.
      • Charlie Murphy (Eddie's brother, best known for writing for Chappelle's Show) appeared on an episode of 1000 Ways to Die that had the story of a coked-up guy from the 1970s having a barbecue with his other drugged-out buddies (with the coked-up guy getting impaled in the head with a lawn dart after getting sidelined by a woman who just went topless) to comment on how the 1970s was a decade full of wall-to-wall health hazards, from people eating fatty foods to abusing drugs to playing with lawn darts (which most people did while under the influence).
      • "Impaled by a stray lawn dart" is also one of the "Terrible Misfortunes" that can befall your bunnies in Killer Bunnies and the Quest for the Magic Carrot.
      • If you want to be technical, lawn darts were really invented around about 500 BCE... as Roman weaponry.
    • Snacktime Cabbage Patch Dolls, a 1996 Cabbage Patch doll sold with the gimmick that its mouth moved as it appeared to "eat" the plastic carrots and cookies sold with it. The problem was, once it started chewing, it didn't stop until the plastic food was sucked in...and little fingers and hair set it off just as well as plastic food. The only way to turn it off was to remove the toy's backpack... something buried in the instructions so deep, nobody saw it until it was announced publicly.
      • An episode of The X-Files took the idea and ran with it. There was also a Dexter's Laboratory episode where Dexter and Dee Dee find a "Mr. Chewy Bitems" in the city dump; Dex tries to recall why they discontinued the toy as Dee Dee runs around in the background screaming with the bear chewing on one of her ponytails.
      • The obscure comic book series Robotboy (not to be confused with the popular animated series of the same name) had an album in which the titular robot boy brings exaggerated versions of these toys home, after which they wreak havoc and try to destroy the house. The quote from the corporate executive who ordered the toys destroyed sums the thing up:
      The idea was to give toys to kids so that they never had to clear away their stuff. What the manufacturer did not tell us however was that the toys cleared it away by eating them.
  • The 2006 Easy-Bake Oven. Easy-Bake ovens have been around since the 1960s and are, as the name claims, easy to use... but the 2006 redesign made the opening just big enough for a child to get a tiny hand in, but not to pull it back out — right next to a newly designed heating element. Ouch.
  • Aqua Dots (Bindeez in its native Australia) is (or was) a fun little collection of interlocking beads designed for the creation of multidimensional shapes, as seen on TV. You had to get them wet before they would stick together. Unfortunately, the beads' coating contained a chemical it never should have — one that metabolizes into the date-rape drug GHB if swallowed. Should someone put the beads in their mouths... This wasn't the fault of the company that designed them, but of the Chinese plant that manufactured the toys: they found a chemical that was much less expensive than the one they were supposed to be using but still worked, and either didn't do the research showing it metabolizes into GHB or didn't care. (They also didn't tell the company about the swap.) And yet, for all the Chinese toy manufacturing chaos in the media at the time, the blame fell squarely on the toy company. The beads still exist, thankfully with a non-GHB formulation; they were renamed Pixos (Beados in Australia) and marketed as "safety tested" — in fact, they were marketed the same way Aqua Dots were, with the same announcer and background music (compare and contrast). They are now sold in America under the name Beados.
  • Chilly Bang! Bang! was a chilled juice-drink toy released in 1989 by Mackie International consisting of a gun-shaped packet of juice. To drink it, you had to stick the barrel in your mouth and pull the trigger. And if you thought Persona 3 was controversial...
    • My Name Is Earl had a minor character have a similar gun. Given that he also had a real gun... And take two guesses how said character wound up dead in a later episode.
    Chubby Jr: Well, Dad did say never to trust a Doctor. But then again, Dad now has a bullet hole where vodka should be.
  • What about The Dark Knight "hidden blade" katana toy? Y'know, the one with the hard plastic spring-loaded blade in the handle? The one that shot out with such force that it can cause blunt force trauma if the kids weren't expecting it? The one that can be activated by an easily-hit trigger in the handle? Yeah, that one. How could this be safe for kids?
  • The DigiDraw promised to make tracing, an already simple act, even easier by placing the thing to be traced between a light and a suspended glass pane, projecting its image onto a blank piece of paper. Its ridiculously poor design meant that even if you could assemble it, the resulting projection was faint at best, and it would screw with your focus to the point where you couldn't do a perfect trace, assuming you hadn't already ruined it by nudging the paper even slightly. And trust us, we are not alone in this belief.
  • In a similar case to the Transformers GPS above, LEGO also fumbled their plastic formula around '07, which resulted in nearly all of the lime-green pieces becoming ridiculously fragile. This hit the BIONICLE sets of that era especially hard, as they were already prone to breaking due to the faulty sculpting of the ball-socket joints, and that line used more lime-colored pieces than usual. Needless to say, fans were not amused, as it meant they couldn't take apart and rebuild their sets. Reportedly, some of these lime pieces broke during the figures' very first assembly.
    • In 2008, Lego reacted to the fragile sockets by introducing a rectangular design and phasing out most of the old, rounded sockets. The problem only got worse. Any socket joint from '08-'10 has a high risk of breakage, and the toys don't even need to be played with or taken apart for this to happen, as the plastic cracks apart on its own. Sadly, many of the smaller figures were designed with large, one-piece limbs, meaning buyers had to replace the entire limb if their socket broke. Fans are split on what Lego might have been thinking with the '08 socket-type. One group believes they intended it as a fix but messed up spectacularly. Others believe the rectangular design was to ensure that the pieces would break in a predetermined spot and still allow the parts to be used for building — however, the pieces in question developed cracks in numerous spots. Thankfully, Lego seems to have learned their lesson; in 2011, they redesigned the sockets to be much thicker and sturdier.
    • Bionicle rubber bands suffered from this too. There were two kinds: the older, less durable variant with a rectangular cross-section, and the much more durable, higher-quality rounded bands introduced in 2002. Early sets came with the older bands, which tended to rot away within a few years of assembly, rendering the sets' action functions useless. For some strange reason, the 2005 Visorak line was split into two variants: the regular one, with colored canisters and the better kind of rubber band, and another, which came in black containers and used the older band. Those bands broke or melted over time, taking with them the sets' main gimmick — the Visorak's snapping pincers.
  • Marvin's Magic Drawing Board, which came along near the end of The '90s. It billed itself as a reusable scratchboard. Even if it were, in fact, reusable, simply putting a mark on it took the will of a thousand men. Much like the later DigiDraw, it was meant to do something simple, and it couldn't do that.
  • Rollerblade Barbie, which had a gimmick that made the skates spark. Slightly risky if used on lacquered vanities with hairspray in the air.
    • Bill Engvall discussed seeing this on the news once:
      "What if Barbie rollerskated through a pool of gasoline?"
      "What if Barbie had a hand grenade? Is that a common household problem now?"
    • For exactly this reason, there will never again be a Transformers toy with sparking action. Actually maybe not.
    • Razor of all companies marketed, for a month or so in 2009, a scooter that came with its own spark generator.
  • Tie 'N Tangle, a game based on wrapping other players in a web of nylon string, would otherwise be So Bad, It's Good based on its unintentional reference to bondage, had it not been for its significant safety hazards: players can trip and hit their heads, be strangled by the cord, and so on. Even worse, the cord is too strong to be broken by hand if an emergency does happen. Jeepers Media suggests destroying this game outright, as its vintage value is far outweighed by the hazards it poses.

    Aircraft 
  • The World War 2-era Blackburn Botha torpedo/patrol bomber, of which the government test-pilot's assessment began with the words: "Entry into this aircraft is difficult. It ought to be made impossible." The rest of his report consists of tearing multiple aspects of the design a new one in his efforts to show why.
    • Nor was it just the Botha. Blackburn in general had had a long history of producing what one aviation writer called "damned awful to fly aircraft" which were also aesthetic horrors, and to the very end of the war it continued to miss the mark, turning out aircraft which were either lemons or which would have been astoundingly good if only they'd been ready three or four years earlier. It took until the 1950s for Blackburn to finally turn out an aircraft that was a winner in every way, but the Buccaneer had to wait until it was almost ready for retirement to show its mettle on the battlefield (in Iraq, 1991). Although given that its original design mission had been delivery of nuclear bombs onto Soviet naval strike groups and high-value shore targets, this is probably just as well. Sadly, it couldn't really enjoy its success even then; the company was bought out by Hawker Siddeley a few years after the Buccaneer was introduced.
    • The Roc was built as an unconventional fighter, having all of its armament on a turret. Problem: the turret could only hit targets behind the plane. Further problems: the plane was slow and unwieldy, and vastly inferior to then-current fighter craft fielded by the German air force. Two squadrons received Rocs; one did its best to get rid of them as quickly as possible, the other tried to carry out some missions with them but eventually gave up, stopped maintaining them and relegated them to stationary anti-air emplacements - essentially glorified dollies for their turrets.
    • Perhaps the biggest of Blackburn's engineering catastrophes was the TB. Designed as a dedicated zeppelin killer, trouble finding a suitable engine led to a plane that was slow, unarmed (besides the two canisters of incendiary darts it carried), and, most gallingly, unable to climb to the average cruising height of the zeppelins it was supposed to destroy from above. Only nine TBs were produced, all of them scrapped before seeing combat.
  • The Christmas Bullet, brainchild of "Doctor" William Whitney Christmas, was famous for having wings designed to flap like a bird's (in 1918) — except that a bird's wings don't fall off. It cost the Army not one but two prototype engines and killed its own pilots. To top it all off, "Dr" Christmas got Congress and the Army to pay for it. Christmas was a con artist and "the kind of man they write songs about", according to one author.
  • Unlike the Karma Houdini above, the Brewster Buffalo and the Brewster Aeronautical Corporation did get what was coming to them. The Buffalo was slow and underpowered compared to the Japanese A6M "Zero"; while the Finnish Air Force got the plane to perform, in the Pacific it was a Curb-Stomp Battle. To top it off, the CEO of Brewster Aeronautical was forced out over mismanagement and the US Navy took the company over — in the middle of World War II. He tried to get back in, only to be sued for his mismanagement. The company ended its days building the F4U Corsair under license.
  • Early fighter planes were designed with the gun mounted right behind the propeller. The obvious problem with this arrangement was first worked around with deflector plates on the propeller blades — a dangerous system in more ways than one: ricochets from deflected bullets could damage the plane or hit the pilot, the plates could be shot off and take the propeller with them, and the whole scheme wasted a significant portion of the gun's firepower. Later designs linked the trigger assembly to the propeller's axle, timing each shot to pass between the blades without damaging them (a toy model of the idea is sketched below).
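
    A toy model of that synchronisation, sketched in Python (the blade width and test angles are made-up illustrative numbers, not historical data): the gun's trigger is only honoured while no blade is crossing the line of fire.

        # Toy model of a gun synchroniser on a two-bladed propeller.
        def blade_in_the_way(prop_angle_deg, blade_count=2, blade_width_deg=14.0):
            """True if any blade currently overlaps the firing line (at angle 0)."""
            spacing = 360.0 / blade_count
            for i in range(blade_count):
                pos = (prop_angle_deg + i * spacing) % 360.0
                distance = min(pos, 360.0 - pos)   # angular distance to the firing line
                if distance < blade_width_deg / 2:
                    return True
            return False

        def fire_if_clear(prop_angle_deg):
            """The synchroniser simply suppresses the shot until the blades are clear."""
            return "bang" if not blade_in_the_way(prop_angle_deg) else "hold"

        for angle in (0, 45, 90, 175, 183):
            print(angle, fire_if_clear(angle))
        # 0, 175 and 183 degrees put a blade (or its opposite number) in the way; 45 and 90 are clear.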
  • The Soviet VTOL fighter Yak-38 was a deeply flawed design: unlike the successful American Harrier it required two additional downward-pointing engines to hover, which eventually doomed the project. The lift engines were finicky to start in hot conditions, lacked redundancy such that if one failed the other would roll the plane into a crash, and they guzzled fuel with abandon. And for all that they'd become dead weight as soon as the plane transitioned to horizontal flight, severely limiting the payload and flight performance. To top it all off, the tendency of the plane to crash from lift jet failure prompted Soviet engineers to install an auto-eject safety measure that'd save the pilot in case of a sudden change in pitch. Cue false alarms ejecting pilots from perfectly functional planes, with predictable results.
  • The Bachem Ba 349 was basically nothing more than a wooden glider with a rocket engine, rocket boosters and more rockets in the nose, launched off a tower and only good for a single shotgun-like salvo of unguided rockets in the general direction of an enemy bomber formation, with aiming mostly accomplished by hopes and prayers. The idea behind it was to create an aircraft that would be cheap to produce and required minimal training to fly, as by this point in WW2 it was becoming clearer and clearer that the war was going badly and the Nazis were becoming increasingly desperate. The only manned flight of the Ba 349 ended disastrously when test pilot Lothar Sieber died from a combination of assembly flaws and trying to eject at near-sonic velocities. Despite the setback, the Nazis nevertheless commissioned 36 more units for use in wartime, though none were used and most survivors were scrapped after the war as no conquering party had any interest in such a patently absurd weapon.
  • On the civilian side, there was the Tu-144. It was the world's first supersonic airliner to fly, but beyond that it accomplished absolutely nothing notable or remarkable. The Tu-144 was developed at the insistence of the Soviet government (and, some say, with the help of stolen plans for the Concorde) for the sole purpose of giving the Soviet Union a supersonic airliner before the West had one. Because its development was rushed to meet state deadlines, the Tu-144 was inefficient, noisy, cramped, and above all unsafe. The first production model crashed at the 1973 Paris Air Show, killing its six crew and eight people on the ground, and when the Tu-144 finally entered passenger service it could only sustain one flight a week. It was used mostly to carry mail between Moscow and Alma-Ata, and was never flown on any international routes before being withdrawn from passenger service in 1978. The Concorde, by contrast, served for nearly three decades with Air France and British Airways, only being retired after a combination of a grounding and redesign prompted by an accident caused by foreign object damage and the slump in air travel following the September 11, 2001 attacks rendered it uneconomical.
  • Unlike every other plane in this section, the McDonnell Douglas DC-10 would never have fit on the page it was originally on, but early cargo doors had one serious design flaw: it was extremely difficult to close them properly and there was no way to tell whether the door was closed and latched properly or it just looked okay, resulting in the doors occasionally blowing out during flight. This resulted in one incident over Windsor, Ontario where the plane in question barely made it back to Detroit and a crash in Ermenonville Forest in France that killed everyone on board. The latter was caused by McDonnell Douglas not actually fixing the problem after the former but rather slightly redesigning the doors with a window and attempting to warn ground crews to look through said window to make sure the pins looked right, not taking into consideration that ground crews might not speak any language on the warning sticker. Needless to say, after the latter, McDonnell Douglas and the FAA got the message and McDonnell Douglas redesigned the cargo door properly, but the DC-10's safety reputation suffered for the rest of its operational life.

    Locomotives 
  • The British Rail Class 17 is an Alleged Train if ever there was one. The idea was to create a single-cab mainline diesel locomotive with good forward visibility in both directions, something which was not permitted by any of the other locomotive types in use by BR at the time. Unfortunately, the Class 17 failed to live up to expectations. The twin Paxman 6ZHXL engines that powered the train were anemic, poorly designed, and often prone to overheating. Worse, the long nose and centre cab layout meant that the crew could not see the area directly in front of them, thus negating the whole purpose of the design. Later units had their engines replaced with more reliable ones, but this was not enough to wash away the nasty taste in people's mouths. This eventually left British Rail with little choice but to dispose of the type and replace them with the already proven Class 20. All units were scrapped save for a single locomotive, which is currently in the possession of the Chinnor and Princes Risborough Railway.
    • This was parodied in an episode of Thomas the Tank Engine which featured a character based on the Class 17 (officially named Derek, but unnamed in the episode itself) whose engine overheated while he was pulling a long train of China Clay trucks up a steep hill, leaving Bill and Ben to pull both him and the trucks to their intended destination.
  • Speaking of British Rail diesel locomotives, the Class 28 (better known as BoCo's basis) wasn't a hell of a lot better. Created as a result of BR's 1955 Modernisation Plan, the type used an unusual Co-Bo note  wheel arrangement, a configuration that made maintenance insanely difficult for yard workers. Worse, much like the Class 17, its Crossley 8-cylinder HST Vee8 two stroke engines proved extremely unreliable, suffering frequently from vibration-induced fuel pipe and water pipe fractures, cylinder defects, and engine shutdowns caused by excessive water temperature in addition to being very noisy and producing unacceptable levels of smoky exhaust fumes. The front windows also tended to fall out a lot while the trains were running. Within two years of its introduction the entire class was handed back to manufacturer Metropolitan-Vickers for remedial work on the engines and to cure the problems with the windows. These problems cost Metro-Vick millions of pounds just to get fixed. In the end, the Class 28 was retired after only eleven years in service. As with the Class 17, only a single unit survives to this day in operational condition, the rest having been scrapped.
  • Early model CIE 001 Class and WAGR X class locomotives used the same dubious engine design as the above-mentioned BR Class 28, but unlike the Class 28, both types went on to have long and successful careers once the issue was resolved and their motors replaced.

    Weaponry 
  • The Schwerer Gustav railway gun in World War II. Its shells could indeed take down massive fortifications, but it only ever fired 48 rounds, was hugely clunky to move and set up, and required enormous amounts of maintenance. By that point its barrel was already worn out, and reliable information about what happened to it afterwards is sketchy.
  • Dog bombs were supposedly used by the Soviet Union in World War II. They were trained to go under enemy tanks, so the bombs could detonate underneath them. Turns out that due to the different engine functionality and smell of the German and Soviet tanks, the dogs would seek out the Soviet tanks that looked and smelled familiar to them instead of the German ones - and that's when they weren't proving to be smarter than their trainers accounted for and, upon noticing all the gunfire and explosions, deciding anything more than simply dropping their payload right there and hopping back into the trench wasn't worth it.
  • The British No. 74 Grenade, also known as the Sticky Bomb, was meant to be an anti-tank weapon made to compensate for Britain's lack of AT guns after so many were left behind in the Dunkirk evacuation. It was hard to throw due to its weight, so the best ways to use it were either to drop it onto a tank from a point of elevation or to run right up to the tank and slap the bomb onto it. That's right: the recommended use involved running up to an enemy tank in the middle of combat, which is exactly as dangerous as it sounds, especially since the user then had to get out of the blast radius within five seconds and hope the explosion didn't launch the grenade's handle back at them like a bullet. Worse, the adhesive had a hard time sticking to tanks covered in dust or mud — which, considering battlefield conditions, was nearly all of them. What it did stick to easily was the uniform of the soldier trying to use the damn thing, leading to several incidents of panicked soldiers desperately stripping off clothing while hoping the grenade didn't arm itself. Talk about a Sticky Situation!

    Miscellaneous 
  • The stainless steel teapots and coffee pots commonly found in British cafes and — most notoriously — on British Rail trains. Think about this for a moment. They look good; they look like design classics in brushed steel. But even the handles are bare steel, a metal which is very, very good at conducting heat — such as the heat produced by two pints or so of liquid at boiling point. If you're lucky, the designer will have placed a thin layer of insulating foam between the pot and the handle, meaning the handle warms up, but not uncomfortably so; otherwise, you end up with a pot of tea you cannot even lift until it cools to lukewarm. British comedian Ben Elton cited this, among other examples, and speculated that there is a British government department called The Ministry Of Crap Design, whose role in life is to make life miserable for the British public by sponsoring the design of simple, everyday, indispensable devices — and getting them miserably, even dangerously, wrong. Just to wipe the smile off people's faces.
  • Some drug awareness organizations, such as D.A.R.E., would give out pencils to students after their presentation, which had the message "Too cool to do drugs" printed on them. These pencils had a hilarious flaw: as they were sharpened, bits of the message would be shaved away, eventually leading to it reading "Cool to do drugs", then simply "Do drugs". After a child noticed this, new pencils had the message printed the other way so "drugs" would be the first word to get shaved away.
  • The Fabuloso brand of cleaning products has attracted some internet infamy for its packaging, which at a glance looks almost identical to a bottle of fruit-flavored drink. Imagine pouring yourself a glass of floor cleaner instead of punch...
  • The infamous January 2018 Hawaii missile false alarm was caused by this; apparently the option to send a real missile alert sat right next to the option to send a test alert, with no confirmation step before the message went out. There was also no easy way to send a follow-up message clarifying the false alarm, resulting in 38 panic-ridden minutes until staff could arrange to send a correction manually. A hypothetical sketch of the kind of safeguards that were missing follows below.
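  Purely for illustration, here is a minimal, invented sketch of those safeguards: a clearly separate test mode, a typed confirmation before anything goes out live, and a pre-drafted retraction. The function names, messages, and flow are all assumptions for this example, not the actual Hawaii emergency-alert software:

def broadcast(message, live):
    # Stand-in for the real alert-dissemination system.
    prefix = "LIVE ALERT" if live else "TEST - NO ACTION REQUIRED"
    print(f"[{prefix}] {message}")

def send_missile_alert(live):
    if live:
        # Require deliberate, typed confirmation instead of a menu entry that
        # sits one click away from the drill option.
        if input('Type "SEND LIVE ALERT" to confirm: ') != "SEND LIVE ALERT":
            print("Confirmation failed; nothing was sent.")
            return
    broadcast("Ballistic missile threat inbound. Seek immediate shelter.", live)

def send_false_alarm_notice():
    # Kept pre-drafted so a correction can go out in seconds, not 38 minutes.
    broadcast("There is NO missile threat. The earlier alert was sent in error.", live=True)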
  • The Juicero, a $400 (originally $700) cold-press juicing machine, had a buttload of crappy design decisions that led to the company folding less than a year and a half after launch:
    • The machine itself is not actually a juicer, but a large press. It only worked with pre-approved, overpriced packets that had to be ordered from Juicero's website and had a limited shelf life. Not only were you paying more for the machine, but you had to sign up for a subscription plan that, at its absolute cheapest, came out to $1,600 per year. In the event that you couldn't (or didn't bother to) buy the packets, this four- to seven-hundred-dollar machine was functionally useless.
    • It had a needlessly complex setup procedure. To start with, there's online DRM on a juicer: those who bought it were required to set up an account and connect to a cloud-based service just to activate it in the first place. Don't have easy access to an internet connection? Too bad. The excuse for adding the DRM was that it would prevent you from using spoiled juice packs... but its way of "preventing" that was to flat-out refuse to press any expired package, and the company never bothered to fully answer questions about how the codes would behave in the event of a sudden food recall.
    • The reason why it was so expensive is made clear by examining the hardware: the machine is filled with custom machined parts, expensive steel gears, a completely custom power supply (which had to be separately certified, creating additional cost), expensive molded plastic for the sleek outer shell, and needlessly complicated assemblies - it took over 23 parts just to hold the door closed. A lot of this is also due to the odd design choice of extracting the juice by spreading the force over the entire bag at once, like closing a book. Anyone with a bit of high school physics knows that pressure is force divided by area, so pressing on the whole bag requires far more force - and thus a far more powerful mechanism - to reach the same pressure a pair of hands can apply over a small patch of the bag (a rough back-of-the-envelope version of this arithmetic follows below). Yes, that's right. The juice bags can be squeezed by hand. The company wanted you to drop $400 on an overpriced device and endure its needlessly complicated setup just to have the overengineered thing perform a task you can easily do manually.
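    Here is that back-of-the-envelope arithmetic, using assumed round numbers rather than Juicero's actual specifications, just to show the scale of the mismatch:

# P = F / A: the same squeeze pressure needs far more force over a bigger area.
hand_force_n = 100.0      # roughly 10 kg-force from squeezing with one hand
hand_area_m2 = 0.002      # ~20 cm^2 of contact under the fingers
pressure_pa = hand_force_n / hand_area_m2        # 50,000 Pa (50 kPa)

bag_face_area_m2 = 0.03   # ~300 cm^2 if the press bears on the whole bag face
press_force_n = pressure_pa * bag_face_area_m2   # F = P * A -> 1,500 N

print(f"Hand squeeze: {pressure_pa / 1000:.0f} kPa over a small patch")
print(f"Matching that over the whole bag takes {press_force_n:.0f} N "
      f"(about {press_force_n / 9.81:.0f} kg-force), hence the beefy gears.")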
  • A water bottle shaped like a soccer ball, created to promote the 2018 World Cup, made the news in Russia due to its shape focusing beams of light similarly to a magnifying glass, making it a fire hazard if exposed to the sun.
