Headscratchers / Star Trek The Next Generation Technology

General

    Starship Design 
  • In "Final Mission" the Enterprise finds itself in need of additional power, but risks running out of coolant if they switch more reactors on. This raises the question of why they designed a starship with more reactors than it can safely and reliably operate.
    • The use of the established technology doesn't make a ton of sense in that episode. For example, the biggest, most powerful fusion reactors that they could have brought online would have been the saucer section's impulse engines, but a shot of the Enterprise towing the barge from the rear shows us quite clearly that they are not running—and presumably, since they're designed to operate completely independently of the stardrive section, they would have their own, self-contained coolant system. There's also no mention whatsoever of any problem with the warp core, so why Enterprise's giant matter/antimatter reactor is not able to provide every bit of power they need, and then some, is never explained. Probably the only explanation, based on everything we know about how the Enterprise is supposed to work, is Rule of Drama.

    Warp without warp drive: repulsive! 
  • In "New Ground," the soliton wave's "warp without warp drive" is heralded as a major new breakthrough. And yet hasn't the Enterprise already experienced it? In "When the Bough Breaks", the Aldeans use a repulsor beam to hurl the ship three days' travel at Warp 9, and imply that they could send it much farther if they wanted to. For that matter, why isn't this impressive technology something the Federation has access to at the end of that episode?
    • For that matter, as useful as that repulsor beam might be for space travel, think of the military applications! There's a fleet between you and where you want to be? Bye bye, fleet! A Borg ship is coming your way? Activate repulsor beam! If you're a precision shot, send it flying into a star.

    Why is it so easy to steal a shuttlecraft? 

  • A trope spanning all the Treks except for Enterprise. Sulu/Data/Kira/Kim: "Captain! An unauthorized shuttle/runabout is being launched!" Kirk/Picard/Sisko/Janeway: "Close the hangar/lock the hangar clamps!" S/D/K/K: "Too late!" Shouldn't there be a way to keep people (in some cases non-Starfleet personnel) from just waltzing away with shuttles so easily? At least an earlier warning buzzer?
    • Yes, it should be harder. Rule of Drama is in play. Presumably, Starfleet personnel evaded security precautions, but explanations are omitted. It's been done in real life.
    • Aren't they also the lifeboats? It'd be a bit of a safety violation if you needed authorization to launch them.
      • There are actual escape craft for emergencies (we see them in the film First Contact). Stealing a shuttle is more like stealing a fighter plane that's parked on an aircraft carrier.
      • True, but I can't imagine that they wouldn't also try to launch as many shuttlecraft as possible if they had to abandon ship. Think about it: for each shuttle you have in a lifeboat convoy, the odds of survival get exponentially better. Especially with warp-capable shuttles. Enterprise even has at least one Danube-class runabout, which is basically a small, multi-purpose starship in its own right. They could be used for scouting, providing a measure of defense to the lifeboats, rushing any wounded personnel to the nearest starbase, or just spreading out to create a bigger footprint and make it easier for any search and rescue effort to find them.

    Eh, a thousand people on board, who cares if we're missing a few? 

  • Why doesn't the Enterprise computer immediately inform the crew if a member of its complement is missing? Troi and Riker had to figure out for themselves that Q had abducted Picard in "Q Who", and there were all sorts of unexplained disappearances/reappearances in "Schisms" before the crew caught on and told the computer to monitor them. This seems like a fairly standard security feature to me!
    • I'm certain Q could make the computer neglect to mention that Picard has suddenly vanished, and it could be that the extradimensional aliens had a similar means of bypassing that security feature.
    • Geordi mentions in passing in "Galaxy's Child" that the "computer is notorious for not volunteering information" after he hits on a woman whose life story he knew except for the fact she was MARRIED. There's also the consideration that the combadges only measure location, not life signs, and there are very few methods by which someone could appear or disappear from the ship without anyone knowing about it (enemy transporters have energy signatures, local transporters have logs, hull breaches would be detected, shuttlecraft would be logged, etc.). Typically, losing a combadge could simply result in a notice: "Attention, Ensign Ricky's combadge has malfunctioned. Please issue a new combadge." Things such as duplicate combadges, or new combadges not previously logged appearing in the system, might generate alerts.
    • This reaches the height of stupidity in "Identity Crisis". Geordi, who is in danger of mutating into an alien, specifically recommends programming the computer to make sure he doesn't leave the ship. Yet later Crusher has to ask the computer where Geordi is before it divulges that he is not aboard the Enterprise.
    • For all the computer knows, the missing person is supposed to be gone. You can imagine it's a big headache keeping track of who's supposed to be on board. For the computer to be able to notice there's a problem, they'd have to log it every time someone left the ship.
      • Why can't they? People don't come and go at all times. We see them do that a lot, but that's because we're seeing the exciting parts. When they're just traveling or mapping star clusters or whatever, there's no need to transport at all, and even during the exciting parts, we don't have dozens of people jumping back and forth between the ship and other locations every few minutes. A lot of times, it's just a handful of people beaming down to the planet once, doing stuff down there for a bit, and beaming back. All of this is to say it shouldn't be too much to keep track of. The little screens on the captain's chair's arms could display an alert whenever someone comes and goes, and presumably Picard knows which ones are authorized for transport.
      • Also, there are exactly three ways off the ship: transporter, shuttlecraft/other auxiliary vessel, and airlock. It's been demonstrated several times that the computer makes transporter logs and keeps track of Enterprise's shuttle inventory. I don't recall any mention of monitoring the airlocks, but if all that's keeping you alive in deep space is essentially a tin can full of breathable air, you'd really hope that someone is notified every time an airlock opens or closes. If the number of people leaving by the means we know the computer monitors, or the one it probably monitors, doesn't agree with the number of people currently on the ship, it should raise some alarm. Literally.
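The headcount invariant described above is simple enough to sketch in code. A minimal, purely illustrative example (every class, name, and route below is invented; this is just the bookkeeping idea, not anything from the show):

```python
# Toy sketch of the "egress bookkeeping" idea: every monitored departure or
# arrival (transporter, shuttle, airlock) updates the expected complement,
# and a mismatch with the sensors' headcount raises an alert.
# All names and routes here are invented for illustration.

class CrewManifest:
    def __init__(self, complement):
        self.expected_aboard = set(complement)  # who should be on the ship
        self.alerts = []

    def log_departure(self, name, via):
        """Record someone leaving by a monitored route."""
        if name in self.expected_aboard:
            self.expected_aboard.discard(name)
        else:
            self.alerts.append(f"{name} left via {via} but was not listed aboard")

    def log_arrival(self, name, via):
        """Record someone coming aboard by a monitored route."""
        self.expected_aboard.add(name)

    def reconcile(self, sensor_headcount):
        """Compare the sensors' count against the manifest; alert on mismatch."""
        if sensor_headcount != len(self.expected_aboard):
            self.alerts.append(
                f"headcount mismatch: sensors report {sensor_headcount}, "
                f"manifest expects {len(self.expected_aboard)}")
        return self.alerts

manifest = CrewManifest({"Picard", "Riker", "Troi"})
manifest.log_departure("Riker", via="transporter")
# Picard vanishes with no logged egress: sensors see 1, manifest expects 2.
alerts = manifest.reconcile(sensor_headcount=1)
```

Under this scheme an abduction like Q's would show up as a reconcile mismatch even though no monitored exit was logged, which is exactly the alarm being asked for above.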

    Trusty Phasers Over Hokey Ancient Resonators? And About Those Romulan Psychics... 
  • "Gambit". When Tallera turned the psionic resonator on Picard and told him to "Pick up the phaser. See what good it will do," and later the armed security team arrives from the Enterprise, Picard tells them to put down their weapons. But what good would the resonator do if the Starfleet officers (or the mercenaries earlier) had shot first? Also, why is it that the Vulcans have telepathy while the Romulans do not, when the Romulans are more prone to reading/screwing with people's minds, and this episode seems to establish that they did have telepathy before branching off from the Vulcans?
    • Possibly they still do, but don't use it. Vulcans seen on Enterprise don't use their telepathic abilities until the Syrannite movement gains influence; if they had gone a few thousand more years without interference from outside sources (which the xenophobic Romulans almost certainly did), they may have forgotten those abilities ever existed, especially if the history was altered. Also, consider the fact that Vulcans have such strong emotions that even temporary loss of control can drive them insane. Romulans do not have this handicap; perhaps they turn their psychic abilities inward somehow to give them the control that their hat so desperately demands.
    • In my opinion, there are at least two likely reasons. The first is that Picard wanted no further casualties (particularly from his own team). That seemed to be the obvious thing he was doing. The second (and more opinionated here) is that Picard winning an intellectual victory with the security team complicit in it makes the defeat all the more crushing.
      • While Romulans are an offshoot of the Vulcans, they are not the same species. In fact, there are enough genetic differences to make blood transfusions from one to the other impossible. And other species were mentioned in "Gambit" that were extinct offshoots of the Romulans. Perhaps the Romulans, in their journey to their new home world, encountered and bred with another species. Maybe their telepathy disappeared as a result. That would also account for the ridges Romulans possess.
      • I've always figured it was some interbreeding with Klingons. It would explain why Worf of all people could give a blood transfusion to a Romulan, their forehead ridges, and their seeming differences with Vulcans (lack of psychic powers, advanced mental abilities), though not the seeming lack of greater physical abilities (Vulcans being 3x stronger than humans). It would also make sense within the context of the more brutish Remans, their former alliance with the Klingon Empire, and their eventual falling out with said empire.
      • The lower strength is easily explained. Their planet must have a lower surface gravity than Vulcan's, which is known to be a good deal higher than Earth's.
    • Didn't the resonator kill people who had aggressive thoughts? I don't remember the exact sequence of events, but "the phaser won't do you any good" would reference that phaser = aggression = you die. Similarly Picard tells his security team to drop their weapons because if they acted aggressively towards Tallera, she could kill them. It's a Sheathe Your Sword solution.
    • In both Enterprise and Voyager it was shown that the Vulcan mind meld was incredibly easy to screw up, resulting in serious negative (and potentially fatal) side-effects for the subjects unless the Vulcan performing the meld was properly-trained and skilled in its use. Since the ancestors of the Romulans rejected the teachings of Surak, whose followers were the ones generally credited with perfecting the meld, it is possible that they abandoned the practice (as the Vulcans would for a time) due to a high incidence of neurological disorders caused by poorly-performed melds. Unlike telepaths with broader abilities, such as Betazoids, the Vulcans do not appear to spontaneously manifest the ability to meld. For example, Archer had to use his residual knowledge leftover from when he held Surak's katra to coach T'Pol through the process of performing a meld. Over the passage of many generations without using their abilities, the power may have atrophied in Romulans, making it harder for them to use. Add in their lack of, and general disdain for, mental discipline such as that practiced by Vulcan followers of Surak's teachings, the amount of effort required to perform a mind meld may be just too great for the majority of Romulans to pull off.

    We're Fighting Wolf 359 By the Book! Yeah, the Book the Borg Just Stole and Read! 
  • Wolf 359: After having the captain of the freaking flagship captured and mind probed for tactical info, including shield frequencies and hull composition, plus all known maneuvers, why did the Federation set up its defense of Earth "by the book"? They got mowed down like kittens vs. a chainsaw. Couldn't you have saved thousands of lives by having the first ship, or even the first 3 ships, abandoned by their crews and have the computers pilot them into the Borg ship at warp 9?
    • Based on what, exactly, is this premise that Wolf 359 was done 'by the book?'
      • It's not said in so many words, but when Shelby points out that the Borg have access to everything Picard knows, Hanson dismisses her by saying Picard wouldn't give that information up. That at least implies his intention of using established tactics.
    • Also on the topic of Wolf 359, why did Guinan tell Riker to "let Picard go" when he was assimilated, if that would mean that the time loop that happened back in the late 1800s would have never occurred?
      • I'd assume the El-Aurians use some form of the Temporal Prime Directive.
      • ...Or perhaps they're a minor version of the Time Lords of Gallifrey, and Guinan knew that telling Riker to seek the inner peace of letting Picard go was the only way to get him on the correct emotional path to achieve the victory whereby Picard ultimately was saved, and the timeline thus kept whole.
      • Guinan might not have known precisely when the Picard she met came from. It might've been a Picard from sometime over the past three seasons, and she just didn't hear about it because the mission was classified or something, or a Picard from an alternate timeline. Either way, she could've met him and he'd still be dead as a doornail, and my guess is that she wasn't willing to presume that temporal mechanics would conspire to save him.
      • The emphasis of Guinan's advice seemed to be less that Riker should give up on saving Picard, and more that he shouldn't focus so much on trying to do what Picard would do, or what Picard would want him to do. That he should believe in himself, rather than believing in the Picard that believed in him, basically.
    • The problem here seems to be that Admiral Hanson was in severe denial about Locutus having access to Picard's mind. After learning about the plan, Troi even said to him, in disbelief, "but if the Borg know everything he knows, then...", and he abruptly cut her off with a story about how determined Picard was as a cadet, and how he could never be compromised by the Borg. Had Shelby or Riker been in charge of the Battle of Wolf 359, it might've gone a whole lot better.
      • I need to watch it again, but I took Hanson's point to mean that if Picard's knowledge was taken by the Borg, he really was lost; he was not fighting because he could not.
    • Speaking of Picard — He'd been almost completely suborned by the Borg. Sure, he apparently got better, but you have no way of knowing he won't snap back at the worst possible time, especially since (as per "First Contact") you haven't even managed to remove all of the Borg implants. On top of -that-, his knowledge and personality are presumably still bouncing around the Borg hive-mind somewhere. And yet he's given his command back as if nothing happened. I know Starfleet is only Mildly Military, but by any reasonable standard Picard would've been gently but -firmly- retired to spend more time with his archeology.
      • Trouble is, Starfleet has just lost almost all of its experienced officer cadre below commodore rank at Wolf 359, and is also about to embark on a massive buildup (more ships, more people); they cannot afford to lose an experienced officer like Picard at this time. Keeping him on isn't unbelievable; the unbelievable part is that none of his senior officers were forcibly promoted to their own commands (and that Sisko wasn't given an instant promotion to Captain, too). They probably thought to just keep him away from the Borg if they come back: send him to watch comets in the Neutral Zone.
      • That's what happens. Borg swing by in First Contact, and where's the flagship? Staring at imaginary Romulans. Plus, he does end up taking some time off work after coming back...
      • People being mind-controlled happens in the Trekverse. Data was controlled by a homing device activated by Soong. Data, Troi, and O'Brien were taken over by alien convicts. La Forge was brainwashed by the Romulans. Chakotay was controlled by a bunch of former Borg. Lore influenced Data to do very bad things with an emotion chip. The entire crew of the Enterprise-D sans Data was brainwashed by a video game. These are just off the top of my head. Seems like if Starfleet assumes anyone who's been mind-controlled is a risk and should be retired, there'd be a revolving door of perfectly good people entering Starfleet and being retired whenever their minds are violated.
    • All of this is why the Federation should be looking very, very hard at developing omega particles as a last-ditch defensive weapon against the Borg. Emphasis on "last." The Collective has a stated policy of assimilating knowledge of Omega at "all costs," so you can be reasonably sure that if they detect it, they'll drop whatever they're doing to investigate. Once it detonates, it not only destroys the immediate threat, but also renders a large area of space impassable with warp drive. It might either be used to create a barrier between yourself and Borg space, or to cut a star system off from the rest of the galaxy entirely, isolating it forever but at least allowing that world to survive.
      • The problem with last-ditch weapons is that very quickly people start devising scenarios where they can be used for something that isn't last-ditch. Look at all the proposals for non-MAD uses of nuclear weapons that have floated across the military's research desks, and how close we've come to seeing them used in cases that clearly were not last-ditch (or even any kind of threat at all). Remember that in Star Trek, Earth and a non-negligible number of other member planets have all gone through nuclear holocausts where the "last-ditch" weapons were used and really screwed things up. Not to mention all the other various doomsday weapons that Starfleet has come across over the years. They are wise enough to know that coming up with last-ditch weapons is an invitation to disaster, and they are not going to be silly enough to go back down that route.
    • Obviously this is a highly divisive topic, but no one seems yet to have taken into account a critical point about Wolf 359: There was no time. The Borg cube's rate of advance was far in excess of anything Starfleet could match, and starships, especially large and powerful cruisers, were few in number and widely dispersed during that period of Federation history. The fleet at Wolf 359 wasn't "forty of Starfleet's finest"; it was every ship that could get there in time. The heaviest ship there was a refit Excelsior-class — something like a half-century-old design. Second heaviest was a Nebula, which is to the Galaxy class as Reliant was to Kirk's Enterprise — in other words, a light cruiser, designed for long solo patrols and not nearly equal to the task at hand. And, speaking of Kirk's Enterprise, the wreckage included the engineering hull of a refit Constitution-class, which at that point would've been nothing more or less than a museum piece, to say nothing of its contemporary and erstwhile adversary, the Klingon D7 cruiser whose wreckage we also saw there.
    And Admiral Hanson knew it. His attitude in the pre-battle discussion with the senior staff of Enterprise was not, as some suggest, born of refusal to recognize the truth of the situation; as the commander of Starfleet's Borg research project, he knew full well he was sailing into a totally hopeless battle, with no chance of accomplishing anything except maybe to delay the Borg for long enough to let Enterprise catch up and maybe do something before the Borg had time to assimilate or devastate Earth. All else is the bravado of a man facing death head-on — moreover, thanks to Starfleet's strategic dispersal, facing death with only the faintest hope of being able to make it count for anything.
    But there was no time to come up with anything better, thanks to the incredible speed with which the Borg were bearing down on Earth. As Hanson had pointed out early in the two-parter, all the new strategic and tactical approaches to the Borg problem were still on the drawing board, months or years from implementation at best — and that was before the Borg captured Picard and his nigh-matchless knowledge of Federation strategy and tactics. There was no time to get any of the new ideas into action; there was no time to come up with an entirely new strategic doctrine not dependent on anything the Borg had stolen from Picard. There was no time for anything but a courageous, desperate sacrifice in the hope of buying a few precious minutes or hours that might make some kind of difference. There was no time.
    • We didn't see any Klingon ships at Wolf 359, none that we could recognize anyway. We must assume that the task force didn't arrive in time after Hanson called. We did see some old D7/K'tingas at the surplus depot at Qualor II in "Unification", but they seemed to be intact vessels, not something that looked like it had faced a Borg Cube which, given what Worf wanted to do to the Defiant in "First Contact", wouldn't have left any wreckage whatsoever.
    • Well, we saw a memory in the Star Trek: Voyager episode "Unity" that shows an engagement between several Klingon ships and a Borg Cube. Based on the information that we have, we can't say for sure that the memory was of Wolf 359, but the memory was clearly of events that took place prior to Star Trek: First Contact, and it seems unlikely that there would have been a separate Klingon fleet action against the Borg that was never even alluded to on-screen at any other point. Most likely, some Klingon ships made it to the battle in time to engage the cube, and the wreckage of those ships just happened to be in a corner of the battlefield that we never saw.

    Tea, Hot... Well Duh! 
  • Whenever Captain Picard orders tea from the replicator he has to specify that he wants it hot. Isn't this unnecessary? After all, why would you want tea cold?
    • Well, there is such a thing as iced tea, after all...besides which, your more dedicated hot-drink aficionados do worry about temperature in re: altering the taste. Between this and the 'Earl Grey' thing, Picard's basically meant to be showing off his sipping snobbery.
      • OK, so why does Tom need to declare he wants his tomato soup hot? Don't tell me there's an iced tomato soup in the 24th century...
      • Gazpacho... soup...
      • You must be thinking of the famous Klingon Iced Tomato Soup. Heating your soup is for weaklings! (Klingon tomatoes, by the way, are easily distinguished by their wrinkled tops.)
      • And people are just downright weird when it comes to food. I am an avid fan of cold pizza, for example, which is not how it's supposed to be served, obviously.
      • So why doesn't Janeway need to say whether she wants her black coffee hot? Is iced coffee outlawed in the 24th century or something?
      • Maybe Janeway has preset her replicator so that the command "coffee" tells it to make it black and hot.
      • It's also how she likes her Vulcans.
      • It could be that Janeway is less picky about the finer points of her coffee, and just wants the coffee. Picard is more meticulous, and wants his tea a certain temperature, because that matters with tea preparation.
    • It's because a person from the southern USA programmed all the Federation's replicators (and wrote the scripts). Anyone from more temperate climes would not even consider ordering Earl Grey tea cold! (Then again, a proper tea drinker, such as Picard, would not even stoop to calling it tea either - and mostly would have programmed his replicator to respond appropriately.)
      • And No True Scotsman puts sugar in his porridge. Picard (and possibly the above poster) is just being a snob. (Incidentally, this northern-US troper thinks iced Earl Grey with a touch of peppermint is delicious.)
    • Another question is why the replicator doesn't remember that he always drinks the same type of tea and give him that when he asks for just "tea".
      • It's probably a habit he got into long before joining Starfleet. You can program your own home replicators to always make your tea the same way (unless you're Arthur Dent), but every time you go out, you'll run into other systems that don't know your preferences.
      • You would think that when he started service on the Enterprise, he would just spend five minutes telling the replicator what his idea of a perfect cup of tea is, and then every time after that he could just tap the hotkey labeled "PICARD_TEA_EARL_GREY".
      • True, but tea is a complex thing with a lot of volatile chemicals in it. No two cups are exactly the same, and the chemical reactions ongoing when a fresh cup of tea is made make for the ideal taste of tea. Picard, coming from a family of vintners, would know about subtlety of good tastes, and could be deliberately trying to throw the replicator off just enough to give him that hand-steeped experience. But not too off. You do not want to repeat Mr. Dent's mistake in arguing with a replicator about exactly how you want your tea to taste; then you just confuse it by flooding it with inputs it doesn't know what to do with.
      • There's also the possibility he got used to using a less sophisticated voice control system when he was younger. He isn't telling the computer what he wants to drink in natural language. He's going menu command -> submenu -> submenu. He's probably well aware the computer can handle brevity just fine. He just likes doing it this way better, because it's how he's also done it.
    • Lampshaded in the series finale "All Good Things..." when Data's housekeeper asks Picard what kind of tea he wants. He replies as usual, and she says "Course it's hot. What do you want in it?"
    • It's likely that Picard has his replicator set to recognize the specific command string and will produce a specific temperature of tea, which is different from the Starfleet "default" for Earl Grey.
      • This. In "All Good Things," during one of the pre-Encounter Enterprise time sections, Picard asks the replicator for his signature drink: the computer responds "that beverage has not been programmed into the replicator."
    • In one of the early episodes, I seem to recall either the tea not being hot, or the replicator asking an inordinately long series of questions regarding how Picard wanted his tea, prompting him to ask for 'Tea, Earl Grey, Hot' specifically, each time.
      • You're right about the questions, he asked the replicator for tea and it asked what kind, then what temperature, and I think there was another question thrown in. Him saying "tea earl grey hot" is the equivalent of us talking very slowly to voice recognition software.
    • There was that episode where a Romulan defector ordered water, and the replicator demanded an exact temperature in Celsius. Whoever programmed them must have been really anal about these things.
      • More like the computer could tell he was specifying the exact temperature, but the computer wasn't programmed with a knowledge of Romulan units of measurement, since it was built when the Federation had had no contact with the Romulans in a long time.
    • In Voyager, the computer forces Tom Paris to specify how he wants "tomato soup" rather specifically. Federation food replicators of the 24th century may just be atrociously user hostile!
      • I think it's more that, as the replicator points out, it has something like 48 different varieties of tomato soup on record. (Do you want it with some little chunks of tomato in it? Do you want it with a bit of cheese flavoring? Do you want it with little crackers in?) Tom is just being a bit self-centered and assuming the way he likes his tomato soup is the default way tomato soup should be and thus when he says "Tomato soup" the replicator ought to spit out the exact kind he likes.
    • This could also be a bit of Fridge Logic: isn't there something in object-oriented programming where, to draw an analogy, you have a string of "this is a code, then this code plus something else, then this code plus something else plus something else"? (Note: this is what someone told me; I know nothing about it.) Any programmers here who can elaborate? This would make sense if what I (only vaguely) understand does apply to the way Picard orders: tea, plus Earl Grey, plus hot, defines what he wants.
      • You're thinking of inheritance, I think, and that's different in that each "link" in the "chain" inherits the properties of those above it. You might have consumable -> beverage -> tea, in which you don't have to define how to drink tea, because you've defined how to drink a beverage, and the tea inherits that definition.
    • Not familiar with object-oriented programming, but I believe the idea you are going for is like a long menu tree or directory location. You are starting with the broadest category and defining the desired object in progressively increasing detail. Food->liquid->beverage->tea->Earl Grey->hot.
      • Or it could be executing a shell command with flags: $ tea --earlgrey --hot
    • This issue of OOP can be quite interesting to explore. Since we can see that replicators can also record the detail of the requested object, wouldn't it be possible to generate a cuisine made out of complex, detailed parts? Hell, we have seen this in recent AI image generators, with prompts like "A girl standing on a field gazing upon the full moon at night, Impressionist, trending on ArtStation, 4k resolution". For this particular case, some examples would be "Brando Hot Dog from Pink's Hotdog BGC, freshly cooked", "Roasted whole chicken in black pepper gravy, chicken meat from La Flèche chicken, roasted at 82 degrees Celsius", "Sichuan hotpot, two sections, one spicy soup, one herbal soup, all boiling in a hotpot with integrated fire heater", and "A 65 cm tall milk chocolate sculpture in the shape of a gramophone, in Amaury Guichon's style".
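The inheritance and menu-tree analogies in the replies above can be sketched concretely. A toy example (all class names invented), showing how each level inherits everything from the level above and only specifies what it adds:

```python
# Toy sketch of the inheritance analogy: each subclass inherits the behavior
# of its parent and only overrides what differs, so "tea, Earl Grey, hot"
# reads like walking down the chain. All class names are invented.

class Beverage:
    name = "beverage"
    temperature = "ambient"

    def describe(self):
        return f"{self.name}, {self.temperature}"

class Tea(Beverage):
    name = "tea"                 # narrows "beverage" to "tea"

class EarlGrey(Tea):
    name = "tea, Earl Grey"      # narrows further to a specific blend

class HotEarlGrey(EarlGrey):
    temperature = "hot"          # the only thing this level overrides

order = HotEarlGrey()
print(order.describe())          # -> tea, Earl Grey, hot
```

Under this reading, Picard's phrasing is just a depth-first walk through the menu tree; the shell-flag version in the reply above expresses the same narrowing with options instead of subclasses.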

    Universal Translator Limitations 
  • In "Loud As a Whisper" tension is created when the deaf Riva is rendered mute without his telepathic assistants. Data learns the sign language to enable communication. Is the universal translator only good for audio translation, not gestures?
    • Well unless the universal translator has a built in hologram emitter...
  • In "Allegiance" Picard is kidnapped by some aliens, along with people from other species; one, from a warlike race, introduces himself: "My given name is Esoqq. It means 'fighter.'" So shouldn't Picard's UT render that as "My given name is Fighter. It means 'fighter'"?
    • Going by context, the translator wasn't translating the proper noun, only the definition that followed.
    • He's likely not literally named "Fighter"; that'd be weird. Plenty of names in English have meanings associated with them; for example, Patrick comes from a Latin root meaning "patrician".
      • Exactly this. My given name is Daniel. It means "God is my judge." That doesn't mean my name is God Is My Judge, and I wouldn't think to respond if that phrase came up in conversation.
  • In "Disaster" Picard sings a song with the children with whom he is trapped in a turbolift as a way of distracting them from their predicament. The song, "Frère Jacques", is in French. But shouldn't it have been translated by the UT?
    • The same thing happens whenever they decide to speak (or sing) Klingon. Just like the automatic doors knowing when to open, it seems the UT has read the script.
      • Maybe the translator doesn’t work when there is… rhythm?
    • Why would they need the translator? They're aboard the ship, so it's probably switched off.

    Universal Lip-Dubbing Translators 
  • Loud As A Whisper - so I suppose the universal translator also modifies what people see when lip-reading?
    • Yes, it also automatically figures out the conversion for units, and rounds them so the result isn't something like 5.152362 hours when an alien is giving a length of time.
    • Because no show wants to look like an old kung-fu movie?
  • This episode is one of several that spotlight the absurdities of the UT. Where is it? How does it work? What happens to the sounds produced by a person when they speak in their native language — does the UT dampen them somehow and dub in a translation in the speaker's own voice? Even if we can accept something like this in a controlled setting like a starship, how does it work on a planet being visited for the first time? The UT is, maybe above all the rest, the single biggest "don't think about this too much" technology in all of Star Trek.
  • Which is why, in any reasonable sci-fi setting, a standard language is used, there are no translators, and there would be a linguist (or several) on board a starship for when unknown languages are encountered. The only way translators could be REMOTELY feasible is if they were an implant that interrupted audio and visual signals and altered them for the translation. Geordi's VISOR could at least handle the visual element.
    • Translation Convention: "We are meant to assume that the characters are "really" speaking their own native tongue, and it is being translated purely for our benefit". In DS9's "Statistical Probabilities" we hear Weyoun speaking in his native "Dominionese" as well as in English and the lip movements are understandably different.
      • "The only way translators could be REMOTELY feasible is if they were an implant that interrupted audio and visual signals and altered them for the translation." Mass Effect takes this route, with their translators being a PDA, hidden computer or sub-dermal implant that translates all audio to the user's native language (the lip-syncing to English is just to avoid the Uncanny Valley; in-universe, their words and lip movements don't match up). The translation is handled by a large team of linguists who are constantly updating and refining the translations, which are then sent to the translators in the field. It also has logical drawbacks: some words can't be translated into some languages because there is no word that fits that definition, some words are translated into the nearest match even though there is a key difference in meaning, and they're useless in the case of first contact until a linguist can study the language to get a good understanding of it. If I recall correctly, the Star Trek UT works in a similar way, although it's an external source. As for how it can translate first contact encounters, keep in mind that the Federation is sending out probes to monitor other races, and the UT can figure out how a language works once it gets a large enough sample size for comparison.
    • Okay, people, again, say it with me (your lips can move in any synch you want): Star Trek is not hard sci-fi. Star Trek is not meant to be hard sci-fi. Star Trek doesn't even try to be hard sci-fi. Complaining that it doesn't follow the conventions of hard sci-fi like having extreme problems with language and such is like complaining that there's too much sex and not enough plot and dialog in a porn movie.
      • I broadly agree, except that the existence of things like the official Star Trek technical manuals demonstrates that they do pay at least lip service to being hard science fiction. The problem is that they want to have it both ways — proudly boasting that they've worked out how the warp drive or the transporter works down to the last detail (even if they are fictional details), while having other technologies that are basically magic. Certainly, one should not be admonished for pointing out that much, which is not necessarily complaining.
      • The ship's computer handles the translation; this was actually shown in one episode where some rebel group took over the ship and disabled the computer so that no one could communicate. The translation is handled through their communication badges. As was stated above, it's just a simplification for aesthetic reasons. Stargate SG-1 stopped having Daniel learn and interpret new languages on every planet because it got tiresome, bogged down the story, and ate up time every episode. For all we know, the badges do cause a sound-cancelling wave to eliminate the original voice while emitting a focused sound wave of the translation. And the computer must be doing a great deal of contextual translation, since there doesn't appear to be any Engrish.
  • Well, in "Unification" we see Picard and Data going undercover on Romulus disguised as Romulans, speaking with Romulans who seem to take them as such. And yet it would be obvious to the Romulans that they were using the UT, since they would probably hear the pair speaking their native tongues. You can argue that Data did learn perfect Romulan, but Picard?
    • Why not? His Klingon is apparently fairly good. He may have just intended to stay quiet as much as possible and let Data do most of the talking, or perhaps affect a country accent so his mistakes wouldn't be so obvious. But more likely, UT implants as opposed to badges seem to be able to nudge you into speaking another language rather than just hearing them; Quark, Nog, and Rom are able to communicate in English once they've fixed their subdermal UT implants in the Area 51 episode.
  • A lot of the problems with lip-syncing in regards to the UT vanish if you start to think of it as more like the way the TARDIS in Doctor Who translates languages. In that show, the TARDIS gets inside your head and essentially rewrites what you see and hear. Now admittedly the Time Lords are at least three levels above the Federation on the Kardashev scale, but even so, we do know that the Federation has fairly sophisticated neural interface technology, so it is by no means impossible given the on-screen evidence. Plus, to my knowledge, the first UT we see in canon (that is not a result of time travel) is either the one used by the Vulcans in First Contact as they were speaking English to Cochrane, or the one used by the Vulcans in "Carbon Creek" back in the 1950s, assuming T'Pol's story was not a lie. And the Vulcans are experts at interfacing with the mind. Remember the Stone of Gol? A handheld neural resonator that (if the bragging of the woman concerned is to be believed) can fight whole armies.

    Transporter Tailors and the Fountain of Youth 
  • Rascals: Four crew members get reverted to childhood, with "a 40% drop in mass" through contact with a Negative Space Wedgie. But their uniforms and clothes still fit them perfectly? Did they get shrunk as part of the transformation?
    • Their uniforms do come out of the transporter looking ill-fitting. In the next scene in sick bay, Ro mentions how she wants to be back in her old uniform; they must have replicated child-sized uniforms during the credits.
      • The clothes looked fine to me when they were transported as kids. However, even if they did change once they were children, Picard does change back from a kid with his uniform perfectly fitting and Guinan and Keiko are both waiting in the wings with their child clothes on.
      • What about the rank pips and communicators? Does anyone remember if the ones they were wearing were scaled-down proportionately, or were they the same size as they usually are?
    • Forget about the uniforms for a second, what about that fountain of youth? They are able to reverse-engineer the transporter accident well enough in just one episode to return the crew to their original ages. This has staggering medical implications, but it's never mentioned again. The entire plot of Insurrection could have been avoided if they could just zap people back into their twenties as soon as they hit, say, their fifties.
      • The experience is treated only as a problem to be solved, rather than the incredible scientific breakthrough it should be. It seems implausible that the characters all wanted to be old again, not just old enough to crew a starship but exactly as old as they were before, and that nobody else wanted to be young again either. Status Quo Is God, blatantly so.
      • It's worth noting this was shown as early as the TOS animated series, where they used a character's original transporter pattern to de-age them after they were turned old (or in one case, restore them to their original height when they were shrunk down), so it's not something new. As for the characters here, they clearly wanted to be their regular ages. Picard might have enjoyed having hair and youthful energy again, but he would rather be his middle-aged, dignified self in command of the ship. Keiko wanted her daughter to see her as a mother again, Ro wanted to be tough and mature, and Guinan was ageless anyway.
    • What about Picard's artificial heart? Shouldn't an adult-sized heart within the chest of a child cause major problems?
      • Wow, that's a really fascinating question. I'm a layman, so I could be way off on this, but being a mechanical device rather than natural tissue, the artificial heart is probably a lot more resistant to pressure, and might even continue to function relatively normally if it suddenly has less room. And assuming that it's powered by some sort of super-advanced, futuristic battery, the body doesn't have to work to keep the heart beating, so it doesn't need to take in nearly as much oxygen. The lungs wouldn't need as much room to expand, so a smaller thoracic cavity might not be a huge problem. The biggest issue might be damage to the surrounding tissue as it suddenly shrinks. If Picard's vascular system shrinks to the size of a child's, but the heart it's attached to doesn't, that's probably going to damage a couple of blood vessels you really don't want to damage. But, if those blood vessels somehow stay intact, and he doesn't bleed out in seconds, he might be okay. It would still probably hurt like hell, though.
      • Picard's artificial heart is always forgotten except when there's a problem with it ("Samaritan Snare", "Tapestry"), yet when it should become an issue, it doesn't. As mentioned, "Rascals" should have brought it up as should "Best of Both Worlds", because you'd think that a cybernetic race wanting to upgrade their chosen mouthpiece for humanity wouldn't want him to be at the mercy of a crude human-made heart. I'd have thought they'd have replaced it. Also "All Good Things" and "Insurrection" with the deaging stuff should have done something, although Picard's hair never ever grows back either...
      • I imagine the artificial heart is rather less complicated than a biological one. Seems like the transporter could also make the heart smaller.
    • Apparently, the captain looking like a teenager is weird enough that the crew won't take orders from him. A crew that runs into all kinds of weirdness in every episode.
      • Could be justified by saying they don't know for certain whether his judgment is also impaired.

    Clairvoyant Communicators 
  • When a character presses his/her communicator and says something like "Crusher to Picard", it seems like the other character hears this in real time, as it is being said. How does the communicator know who you're talking to before you say their name?
  • Less impressively/buggy, communications all the way from ship-to-ship to the lowly combadge are also psychic about stopping a transmission. Automatic doors are similarly psychic throughout Trek, knowing whether you want to exit, how you are going to exit (right down to slipping through almost-closed doors?), are permitted to go through, and so forth... all before you ever approach them.
  • An even better question would be how does the computer know who you are calling, with only a last name to work with? Granted, there probably aren't too many members of the crew named 'Picard' but suppose you tap your badge and call 'Johnson'. Will you get the Johnson in Engineering, the Johnson in Astrometrics, or the civilian Johnson who works in the barber shop? Hell, it's enough of a shock that Picard never got put through to Wesley when he said "Picard to Crusher."
    • Call list priorities table. How many times did Picard want to speak to Dr Crusher? Hundreds. How many times did Picard (or anyone) want to speak to Wesley? Hardly any. Besides, I'm fairly sure that Picard called departments and got put through to the senior person there, rather than calling specific people.
      • Often he specifies 'Picard to Dr Crusher' or 'Picard to Ensign Crusher', but I can't recall how consistent that is with them both being around.
  • Ridiculously simple answer: The communicator hears "Crusher to Picard", immediately opens the comm channel, and echoes the sound clip of "Crusher to Picard" so that the recipient knows who's calling. Since we never see people using the communicators in split screen we're not really aware of this second or so of delay.
    • If memory serves, we did actually see the computer screw up in a way that supports this idea at least once. In the episode "Time Squared", a Timey-Wimey Ball creates a duplicate shuttlepod whose lone occupant is an unconscious Picard. When Riker sees the doppelgänger in the shuttlebay, he gives a shocked exclamation of "Captain Picard?!", which causes the computer to open a channel to the Picard who is sitting on the bridge. The implication seems to be that the computer misinterpreted the exclamation as an intercom instruction.
    • There is also the inconsistency of having to press the communicator to make it work. Sometimes they just say "Riker to Worf" without having activated the communicator. It's the same with replying: sometimes they just speak when someone calls them, and other times they press the communicator before responding.
      • Probably pressing it guarantees that it will interpret what you say as a command to communicate. Not pressing it might result in the rare false negative where the communicator thinks you are not attempting to establish a communication.
      • Hell, I remember at least one time Riker just said, "Geordi, blah blah blah," and Geordi heard him down in Engineering. No tapping the communicator or any kind of verbal signal that the computer should patch him through to Geordi.
      • Official explanation from the technical manual: You have to tap it when you're planetside (presumably so that it doesn't waste energy with opening channels erroneously, or if you're in a situation where you don't want to send out a signal). You don't have to tap it aboard ship since the computer is always listening, but some people either develop the habit or deliberately try to get in the habit so that they don't slip up in a vital situation on an away mission.

    Subcutaneous communicators 
  • In the third-season episode "Who Watches the Watchers?", Riker and Troi are outfitted with subcutaneous communicators while infiltrating the Mintakan civilization. They can hear the person on the other line inside their head, and the bridge can constantly monitor them. If this technology is available, why doesn't everyone have a subcutaneous communicator? It would certainly be helpful in instances in which someone's captured and their combadge taken away, although I can also see the Big Brother aspect to it.
    • There could be any number of reasons. Maybe the subcutaneous communicators are strictly short-term (the body rejects them after a few weeks/months or something and no one wants to have a horse needle jammed into them every time their communicator dissolves). Maybe their range is limited compared to the regular badge coms. But most likely the crew just found the idea creepy. With today's technology we can implant a chip under your skin that has your entire medical history with all your allergies, all your past injuries, any pre-existing conditions you have, etc. If you got in an accident a hospital could scan the chip and instantly get everything they need to properly treat you. But almost no one ever opts for an RFID chip. There's just something inherently creepy about it.
      • Also, you wouldn't want something going awry with those things while they're in your skin, so for a limited time is the way to go if you have to do it at all.
  • This actually is one of the most notable cases of Forgotten Phlebotinum on the show. While it is reasonable that they would not want to bother with having these on standard away missions where they are in uniform and wearing their comm badges anyway, there are also cases such as "First Contact" where Riker was surgically-modified to look like a Malcorian - but was carrying around equipment that the locals really should not have been given the opportunity to get their hands on (e.g. a phaser). When he went missing after suffering an accident, it only highlighted how valuable this technology would be if used consistently.

    Blind chance 
  • I've always thought it odd how rarely Geordi's VISOR is commented on. You'd think that such a remarkable device — seemingly as unique as Data is — would attract a lot of interest. The oddest thing is that even people who've just met him and have no other relevant knowledge (like Martin in "The Masterpiece Society") tend to instantly identify him as blind, rather than asking "What is that you're wearing over your eyes?" I recall one novel where a character asks if he wears it for religious reasons.
    • Well, he has something over his eyes. If I saw someone wearing a cloth over their eyes, and they're walking around in public, the first thing I'll assume is that the person is blind because of injuries that s/he doesn't want to show the world. Maybe this is what they assume with Geordi? Plus, this is a military setting, so they're not of the mindset to bug Geordi about his VISOR. If he has problems with it, he'll take care of it himself.
    • I never got the impression that Geordi's VISOR was unique; it's just that we never see anyone else who is blind. It could be one of several treatment options for congenital blindness, one that some people decline because of the sensory confusion of seeing the entire EM spectrum at once. As for being a lie detector or otherwise clued in to people's emotional states, Geordi says it only works on humans. And while he probably can tell what a girl thinks of him just by giving her a quick look down, that doesn't mean he knows how to be more charismatic. He freely admits he doesn't know what to say to make women like him.
      • It's never stated that the VISOR is outright unique (indeed, we see something comparable used by Miranda Jones in TOS), but "Encounter at Farpoint" implies that it is at least highly uncommon (Dr. Crusher rhapsodizes about the VISOR like she's never seen its like before). If it is a (fairly) unique device, that raises some questions about why Geordi gets one and why they're not standard issue for blind children of the 24th century.
      • Probably most forms of blindness are curable with that level of technology.
      • In the Expanded Universe novels about Geordi's time at the Academy, Geordi responds to someone who claims the VISOR gives him too much of an advantage in a survival game by saying that the VISOR isn't restricted to blind people, and that the guy can probably take his concerns to the instructors running the game and have a VISOR replicated for himself to even the odds. He even offers to let the other guy try his on... which results in a near-migraine for the poor sap. Basically, anyone could use a VISOR, but the sensory overload is incredibly difficult to deal with, which is why Geordi has implants to help him cope and is still mentioned as getting regular headaches and other problems. So no, it's not unique; it's just that no one but Geordi probably wants to use it. As for no one commenting on it more: as someone says above, all he's doing is wearing something over his eyes, and in a setting where lots of different alien races and technological advancements coexist, that's probably just not all that remarkable.
      • I'd have thought that if a bunch of people beam down to your planet consisting of a tall man with a furry face, a big angry guy with a weird forehead, a woman in a leotard and an albino man made of plastic, Geordi wearing a thing over his eyes would be the last thing I'd question.
      • Maybe people are just too polite to mention it. After all, if you saw somebody walking with a cane, you wouldn't go up to them and say, "Hey, can't walk properly without that thing, huh?" At least I hope not.
      • It's more analogous to if a person showed up with a cane that's also a high-tech multitool. A lot of people would ask how it works, no?

     Not until Tuesday 
  • Why doesn't the computer ever tell the crew that a critical feature such as the tractor beam has been maliciously sabotaged until they have the misfortune to find that out the hard way?
    • Because the first rule of sabotage is to do it in a way that people won't find out immediately.
    • Also, as Geordi says, the computer tends not to volunteer information. Computers, period, don't. Take your computer, for example: does it tell you it's missing a driver that's necessary for a specific program to run when you first start up the computer, or does it wait until you actually try to run the program before going "Oh, wait, no, crap, I can't do that, Dave"? Basically, it's not psychic; if someone enters a fake access code to get at the tractor beam (as they'd need to), the ship's computer assumes "Oh, they're doing maintenance", and the fact that the maintenance never seems to end doesn't factor in. How does the computer know someone took the part out and disintegrated it, instead of taking the part out and just taking a long time to fix it or find a replacement? That's why the computer says "The tractor beam is offline" rather than "Someone done sabotaged your tractor beam, Frenchy."

    Is Enterprise's computer capable of sentience? 
  • This occurred to me originally while watching the episode "Elementary, Dear Data". In that episode, Geordi instructs the computer on the holodeck to create an adversary "capable of defeating Data". This results in a self-aware hologram. If the computer is capable of maintaining a self-aware, sentient artificial intelligence as just one program which takes up a mere fraction of its resources, would that not mean the computer itself as a whole is capable of being a self-aware, thinking artificial intelligence? Also in the same episode, the computer can create extensive scenarios with just a couple of sentences of instructions. I find that a frightening capacity for creative thought for a computer. Now, I have seen it argued in tangentially related discussions above that Moriarty was merely a complex simulation of a person, not a person in itself. However, I would argue that he displays attributes which qualify him as a sentient being. Chief among those would be that he soon re-evaluated his goals, and the function he was originally created for (besting Data) fell by the wayside, unless you interpret it very creatively.
    • That's a great question. "Measure of a Man" gave us the Federation's criteria for sentience, so let's apply that standard here:
  1. Intelligence: Does the Enterprise computer have the ability to learn, and understand, and cope with new situations?
    • Sort of. We have seen that the computer is able to access the ship's sensors to gather information for various reasons without prompting from a member of the crew, and it seems to be able to analyze that data and use it to come to conclusions on its own. Outside of directing the ship's autonomic functions, however, it is almost never seen to take action on its own, simply relaying information to the crew when action is necessary.
  2. Self-awareness: Is the computer aware of its existence and actions?
    • Again, yes and no. The computer never really seems to distinguish itself from the ship, often identifying itself as "this ship" or "this vessel". It does occasionally use personal pronouns such as "I", but I get the impression that this is for the benefit of the user. It is certainly aware of its actions, but it does not seem to meet this condition.
  3. Conscious:
    • I would say no. Again, there is very little evidence that the computer understands itself distinctly from the USS Enterprise. It probably doesn't consider itself a separate entity, or even an entity at all. If it does distinguish itself from the ship, it probably thinks of itself as a tool rather than a life form.

    • As a "sideways" answer, out in the real world the Enterprise computer would most certainly be considered sentient/sapient/whatever by all the standards we currently apply, because it can pass the Turing Test apparently effortlessly, and that is the test of such things. Apparently this isn't good enough in-universe, but they never say why. Later computers are even more advanced: the Deep Space Nine main computer would volunteer information to the crew on its own initiative and sound terrified when the station was in danger; the Prometheus computer could plan and complete an entire combat mission against meat opponents on its own.
    Note also that the writers love terms like "consciousness" and "self-awareness", which sound great but in the real-world are ill-defined to the point of being completely meaningless, and thus avoided like the plague in serious discussions of such matters. We have the Turing Test because you can't measure subjectivity, and have to give the machine the benefit of the doubt out of courtesy.
    • You're very right. When I wrote the above, I watched the relevant scene in Measure of a Man to paraphrase Starfleet's definitions. The dialogue is such that their definition for 'conscious' is not given—making it kind of hard to finish. That episode actually makes Starfleet's policies far more ambiguous on the issue of sentience than you would expect. Starfleet does not recognize Data as a sentient life-form (despite clear evidence that he is). The presiding judge actually kind of avoids that question in her summation, ruling only that Data is not the property of Starfleet, and that he does have the right to choose. She never once declares Data a sentient being.
    • The JAG officer was only there to settle a legal question of property, not decide Data's sentience. She addresses this issue herself when saying these questions are better left to philosophers.
  • The dissonance here derives from the definition of the word "capable". It has been shown repeatedly that programs (e.g. Moriarty, the Doctor) running on the ship's computer are capable of full sentience, and of meeting all the criteria used by Starfleet to define such. In the modern world of computing, we draw a much clearer distinction between hardware and software than most people did in the 1960s. A computer without any software is a piece of decor and nothing more. By simple virtue of having so much memory and processing power, starship computers can run sentient programs. However, perhaps as a safety feature, such programs are not supposed to be able to directly control the entire computer. Especially after the Moriarty incidents, the need to limit just how much control a sentient program can exert over the computer (and the ship) in the absence of at least a quasi-physical interface (i.e. a holographic projection) was taken into account. An interesting contradiction that arises from this, though, is that while it could be argued that holograms require holographic emitters to sustain their existence, biological crews likewise require life support, or else they would cease to exist (i.e. die) as readily as a deactivated hologram whose program was subsequently erased from the computer.
  • Is it really significant that the computer doesn't identify itself as an entity distinct from the ship? Humans do essentially the same thing (referring to "me" rather than "my brain" or "my mind") all the time.
  • The Enterprise computer is absolutely capable of passing the Turing Test, of behaving in a way indistinguishable from that of a human being . . . if someone has instructed it to do so. Left to its own devices, it has shown no indication of having independent desires or emotions, of wanting to do anything other than precisely what it's told. Given that, the holodeck programs should perhaps be viewed as an actor playing a role: you put on a costume and pretend to be a character, and if you're good enough at it, people will be convinced that the character is a real person, even though it's just an artifice you've constructed and doesn't reflect your real thoughts and feelings. The computer might be conscious or sentient, depending on how you define those terms, but it doesn't have a human-like consciousness, has never chosen to express itself in a way that humans would recognize as free will. Any time it's appeared to do so has been a performance, done for the crew's benefit and at the crew's request.

     Starfleet: Riker! Go help sentence your friend to death or else! 
  • In "Measure of a Man", why was the judge so dead set on having Riker be Data's prosecutor, to the point of threatening to rule in favor of Maddox right then if he didn't? Doesn't that mean the judge is allowing into the proceedings a prosecutor with a rather blatant conflict of interest, especially since Riker is being forced into it? Lastly, and most importantly, why didn't the judge just call in some Starfleet yes-man from somewhere on the starbase to be the prosecutor, instead of tormenting Riker by forcing him to basically help make sure his friend is executed?
    • Regarding all arguments below about Riker being the only command-level officer available: Admiral Nakamura was shown earlier in the episode as being on base; therefore, shouldn't he have been available to act as the prosecutor?
    • The JAG officer mentions that, as remote as the station is, it would take weeks (at least) to get another JAG officer sent out there to prosecute. Riker, as the next ranking officer, would have to fill in. Presumably, since the ship is self-contained for long periods of time, the senior officers have the authority to act in legal matters such as a court martial; otherwise a crew member in trouble would have to wait until the next time they wander into a starbase to have his trial. Riker just wasn't expecting to have to act as the prosecution against someone who is a close friend and associate.
      • That's not the point. There are HUNDREDS of people on any given starbase; it's hard to believe there wasn't at least one person serving on that starbase able to act as Data's prosecutor who hadn't been serving on the Enterprise. At the very least, she could have chosen someone on the Enterprise who didn't serve directly under or over Data, to avoid as much of the conflict of interest as possible. It seems rather petty and vindictive of the JAG officer to force one of Data's closest friends to basically try to kill him.
      • Rule of Drama aside (because that's the only reason I can think of on the writers' part for doing this), the only in-universe reason I can think of is that Riker would have first-hand knowledge of Data's performance and capabilities. Honestly, it would have made much more sense to have Maddox acting as the prosecution, as he had prepared his arguments years in advance (having been in a position to vote down Data's Starfleet application), along with negating even a whiff of conflict of interest.
      • Given some of the more common duties of the Enterprise (first contacts, treaty negotiation, mediating disputes, and that sort of thing), it seems like a stretch to believe that the ship wouldn't carry its own team of legal experts. Perhaps not experts in the sort of military property law that is the subject of this case, but at least someone who freakin' went to law school.
      • It's never said that the Enterprise's diplomatic efforts cover the fine contractual details of the agreements they broker... they could be stuck in one place for months or years if that were the case, which is rather inconvenient for one of your most advanced science ships and the flagship of your fleet. Whenever they handle diplomacy, they probably make very "broad strokes" agreements, basically finding the general area where everyone's going to be happy, and then let an actual Federation legal team come in. Anyway, Riker serving as the prosecution in this case is meant to highlight the pain of following one's duty when it clashes with one's personal feelings; the episode loses its pathos without it. Remember, this is fiction produced for entertainment, not a hypothetical produced to show an accurate depiction of legal proceedings... entertainment comes first, accuracy second, as it should.
    • Perhaps an even more fundamental problem: only someone like Riker, with a conflict of interest, would have been motivated to do it. A prosecutor who actually wanted to win could have had the case summarily decided in their favor.
    • Riker being prosecutor involves Acceptable Breaks from Reality. The judge/arbitrator's justification was that there were no legal specialists at hand and only a commanding officer (or whichever term includes Picard and Riker but no one else on base) could act as such in special circumstances. He prosecuted in good faith, to the point of feeling bad about it. If the conflict of interest had at any point become apparent, the judge/arbitrator would have cancelled the trial in Maddox's favor, and you can be sure that Riker would get career trouble over it (formally if his bias could be proven, informally otherwise). If he left anything out, Maddox was on hand to speak up. Even if they had waited for an actual legal specialist, that specialist probably couldn't have prosecuted as well as Riker did; and Riker doesn't seem to be holding back in ways he thinks he can get away with.
    • Actually, this episode was highly disturbing on several levels. First, it was never established on what legal basis a Starfleet JAG, as opposed to a civilian court, was allowed to define the sapient rights of a being. Even scarier, it was implied that there was no avenue for appeal should the JAG rule against Data, whereas in real life this kind of case would likely be appealed to progressively higher courts. To make it truly terrifying, there was a glaring conflict of interest, since Starfleet was being allowed to rule on whether an entity was its own property or not! It only gets worse when you consider the way the case was slanted by a false sense of urgency. Data had served in Starfleet for two and a half decades! It's not as if he was on the run! Why did the case need to be settled immediately (thus requiring Picard and Riker)? That Maddox was acting out of self-interest, trying to advance his own career in cybernetics, was obvious from the sudden urgency of his desire to seize Data. The JAG was also suspiciously biased, as she seemed in an inappropriate hurry to take on the case, giving the impression that she was more interested in having her name attached to a major legal precedent than she was in due process.
      • Not to mention the distinctly unsavory aspect of the whole business wherein Starfleet treated Data as a sentient being, accepting him into Starfleet Academy, giving him a commission, promoting him, awarding him citations, all of that, and as soon as it's convenient for him to be property, his sentience is called into question. Maddox at least was consistent — he opposed treating Data as a person from the beginning. Starfleet flipped its own position, held for two decades, as soon as they had something to gain from it.
    • The episode was also meant (in deleted scenes) to show Riker actively conflicted between career ambition and loyalty to his friend. He's not supposed to be distastefully carrying out his job; he's supposed to see it as a chance to stand out and then have an "oh dear God, what am I doing?" moment.

     Accept only my orders from the Bridge even though I'm not there 
  • In "Brothers", Data imitates Captain Picard's voice and issues a bunch of orders and codes to prevent anyone from stopping what he's doing. However, wouldn't the computer have detected that Captain Picard wasn't on the Bridge (and was in Engineering at the time)? Shouldn't it have rejected Data's orders due to that discrepancy?
    • Yeah, the security of computers in the Star Trek universe is completely governed by the Rule of Drama. A number of crises in each series could have been prevented by security measures that we take today as a matter of routine. Data, himself, clearly needs to upgrade his firewall.
      • Not really an issue of a firewall. This was caused by a backdoor put in Data *by his creator*, one that Data himself didn't know about. The backdoor was specifically concealed from his knowledge until Soong gave him the "unlock code".
    • Data had issued an order in Picard's voice on the bridge just prior to leaving, regarding a very specifically timed force field cascade, "accepting instructions from Commander Data en route". This would presumably have been interpreted by the computer to supersede the previous order to only accept instructions from Picard on the bridge.
    • OP here. I think there's been a misunderstanding of my question, and the thing is I don't know a better way to word it. When Data started imitating Captain Picard and began issuing orders, shouldn't the computer have rejected the "accept only my orders from the Bridge" command in the first place? Shouldn't it have picked up that Picard was not physically standing on the Bridge, realized something was up, and therefore rejected Data's imitations as a matter of security?
    • You're right. A minimum of thought in programming the computer's security protocols should have prevented this. Even if you take into account that combadges aren't the final word on a person's location, the internal sensors should have been able to tell that the only being on the bridge was an android — no fifty-year-old man anywhere near it. Unless Data was able to fool the sensors.
    • Except the computer only knows the location of combadges, not actual people. It seems like a bigger security flaw to not let someone use the computer just because they're missing their combadge.
    • Logically, the computer could interpret Data's lockdown of the voice interface as a defense against an imposter which is otherwise fooling all verification means. The order is given on the bridge in Picard's voice with the correct authorization code. And, this being Data, he could have specifically blinded the other sensors to further the deception. We had seen Geordi expressing confusion on how a number of independent backups had all failed.
    • After watching the episode again, I think the show actually hints at an explanation. Notice which station Data is using when he does all this. He's not sitting at Ops, or one of the command consoles, or even the bridge engineering station like you might expect; he's using Science I. That's exactly the station you'd want to use if you were trying to screw with the ship's sensors. This strongly suggests that Data did something to the internal sensors that made the computer think that Picard really was on the bridge.

     Two heads are better than one 
  • I don't think the ship got separated nearly enough. Obviously it'd be rather crazy to separate the ship every time they went into battle, but in situations where the Enterprise has to be in two places at once, why not use the feature? (I'm not suggesting that by doing so, the story would have worked better; I'm simply referring to in-universe logic). To wit:
  • "The Enemy". The saucer section would remain at Galorndon Core so they could beam Geordi back up at the earliest opportunity, while the stardrive section would take the injured Romulan to the Neutral Zone.
  • "The Best of Both Worlds" and "Descent". The Enterprise was quite deliberately seeking out the Borg. It would have been very prudent to leave the saucer section off at a starbase so the children could be safe.
    • In the latter two, as well as "Chain of Command" where the Enterprise was being sent into a delicate situation with the Cardassians, the Enterprise met up with an Excelsior-class starship for the pre-mission briefing. The civilians could have been transferred to the other ship then. Also, in DS9: "The Jem'Hadar", the Galaxy-class USS Odyssey was explicitly said to be offloading its civilian population to Deep Space Nine before entering the wormhole to confront the Dominion (a good thing, too, considering what happened to the ship).
    • The "battle bridge" of the star drive section was a set from the Star Trek movies, and the expense of rebuilding it after each shoot killed the idea of saucer separation as a regular thing. Make up all the story stuff you want, but that's the real reason.
      • That's an argument that always confused me a little, because that set was redressed and used all the time. A short list of episodes that the modified movie set was used for rooms that were not the battle bridge includes (as listed on Ex Astris Scientia) The Measure of a Man, The Battle, Pen Pals, The Samaritan Snare, The Emissary, and Peak Performance—and those are just examples from TNG's first two seasons! Why would it cost more to build the battle bridge than it would to turn it into a completely different room when the set design and necessary props are all on hand? Also, the lighting for scenes on the battle bridge was always much darker than most other rooms on the ship, so wouldn't it be easier to mask imperfections rather than having to burn budget having them fixed?
      • My understanding is that there was the practical matter of set storage to think about - since there was no regular cast member for engineering in season one, engineering hadn't been expected to be a major part of the series. But during the filming of Encounter at Farpoint, they got told effectively "it's built now, or it never gets built" (leading to the couple of scenes where characters stroll through engineering for no particular reason). With the Main Engineering set now taking up space as a standing set, alongside the bridge, sickbay, and the various smaller rooms, something else had to go, and it ended up being the battle bridge set that got its regular appearances axed.
    • That, and the slowing down of storytelling. I've also always wondered if the fact that the stardrive section alone makes a pretty clunky model was a factor. It sort of looks like a headless chicken.
  • I never understood the "too time-consuming" part of why they stopped doing it. It only takes one five-second shot of it separating (the actual shot we see is longer than that, but it could be cut down to five seconds and convey the same thing). It doesn't impact the plot much either way, but it would be nice to see once a season or so.
    • I take the point, but it would at least require some conversation about making the decision, lest the audience be confused.
      • You mean like: Picard: Separate the saucer section! Helmsman: Aye sir! *sound effect* The next shot of the exterior, the two halves are separated and acting independently. Of course, if they did that, we'd be here debating that "Gee, they moved the civilians to the saucer pretty dang quick" or "Did they even seal all the turboshafts and Jefferies tubes before cutting the star drive section loose?"
      • So do it just as they are going to cut to commercial break. Then after the break, have some stock footage of the sections separated. The show routinely used that as means of jumping the action ahead a bit.
      • First, still a pain in the ass to write around. Second, the show preferred to use its commercial break shifts as mini-cliffhangers, which "Separate the saucer section" is not really, and sounds really silly before the dramatic stings they tended to use. Third, I've read enough Star Trek headscratchers pages to know people wouldn't complain any less, so it would have been a lot of wasted effort. Besides, the ship looks frikkin' stupid without the saucer, and in a soft sci-fi space opera the hero ship looking good takes priority over practicality.
      • All this is true, but it does make you wonder: shouldn't the creators, all seasoned TV writers, have thought through these issues in advance and left out the separation idea entirely? It's just funny the way the pilot uses it like it's going to be part of the show's brand identity ("look what we can do that the original ship couldn't!") but which largely, for a host of reasons, sits on the shelf unused.
  • In the early days at least it was considered extremely risky to separate the ship. It was an emergency procedure, nothing more. Look at how scared everyone is (even Early-Installment Weirdness Data) in Encounter at Farpoint when they re-attach the ship; there appears to be a genuine risk of the saucer crashing into the stardrive section (and the warp core within). Note that we do not actually see the parts coming back together in Best of Both Worlds or in Star Trek Generations, so there is no actual evidence to my knowledge that this has changed a great deal. Emphasizing this, perhaps, is the fact that the saucer doesn't seem to have any photon torpedoes and the shuttle bay appears to be somewhere between the nacelles; combined with the lack of warp drive, this means the saucer is rather vulnerable to attack when left on its own.
    • You've got that slightly wrong. The apprehension in Encounter at Farpoint wasn't because reconnecting the two hulls was particularly dangerous under normal circumstances; it was because it was being done without computer assistance. Picard was testing his new first officer by ordering him to direct a precision maneuver that's usually mostly automated. It's analogous to a 747 pilot on approach to a runway in low-visibility conditions deciding to turn off his instruments and land manually. He can probably do it without any problems, but you really don't want him to try unless it's absolutely necessary. In fact, Picard rather dickishly dismissed it as "routine." Also, the Enterprise's biggest shuttlebay is on the saucer section of the ship directly below the bridge. This is canonically established both in the master systems display in engineering and in the episode Cause and Effect, when they vent shuttlebay one's atmosphere to avoid an imminent collision with another starship. The shuttlebays located between the warp nacelles are shuttlebays two and three, which are much smaller than shuttlebay one. There's also supposed to be a single, aft-facing photon torpedo launcher on the saucer section that's normally covered by the neck of the stardrive section when docked, but that comes from background material, and I don't remember anything on-screen that even hints at its existence; so whether or not it's really there is debatable.
    • Apparently it may have been more the associated visual effects cost that was prohibitive. It would have required various stock shots for flybys, orbits, accelerating to warp, and others for both the star drive section and the saucer and apparently actually making those would be expensive.
      • That's the piquant irony, no? If they actually did it more, thus having gotten those stock shots in the bag, it would be easier to justify doing it at all.

    Ferengi pirates on old Birds-of-Prey are as dreadful as a Borg Cube 
  • "Rascals" features the most shameful battle and takeover in the Enterprise-D's history. Recycling footage from "Yesterday's Enterprise" wasn't appropriate: in that battle, the D had to cover the C. In "Rascals", Riker and Worf are under no similar limitations and still manage to take heavy casualties and get the ship crippled in two minutes. Then they have to surrender to a dozen Ferengi, who kick their asses as thoroughly as the Borg did when they abducted Picard in The Best of Both Worlds. Yes, a bunch of Ferengi pirates can easily TAKE BY FORCE the Federation flagship and the thousand people aboard her.
    • You seem to forget that they were in no immediate external danger, and thus had shields down when the first shots hit (which is also what allowed the Ferengi to beam over in the first place). And as we've seen in Generations, even decades-older ships still pack enormous power if they hit a starship's bare hull. This, in addition to the threat of weapons trained on their fellow crew members, means no one was willing to risk lives (better to wait and retake control from your captors later than do anything hasty and take casualties)... and the Ferengi watched everything closely, so no one could get to their weapons anyway. Still a stretch, but reasonable enough.

    Don't bother deleting old records 
  • In "Second Chances", when the crew discover Riker's twin, Picard asks to have the transporter records from Riker's old ship sent to the Enterprise. Why would Riker's old ship still have the transporter records after such a long time?
    • Why wouldn't they? In the post-scarcity future, replicating functionally unlimited amounts of computerized storage is trivial. Why bother ever deleting anything when it's just as efficient to make more storage space than free it up by deleting old files?
      • You've got to remember, though, the unfathomably huge amount of data that would be contained in an average-sized human's transporter pattern. A 70 kg human body is made up of more than 7×10^27 atoms, and the computer has to keep track of where they all go. That's probably literal, too: there's a longstanding argument among Trek fans about whether or not a person who steps out of the transporter is the same person who went in, or is just a copy. If the person isn't just a copy, then the transporter has to put him or her back together in a very specific way (all the while sidestepping a couple of laws of thermodynamics as well as the uncertainty principle). No matter how big your storage capacity is, a few years' worth of transporter patterns is going to take up a gargantuan amount of space.
      • It's been a while since I've seen the episode, but I didn't get the impression that they were storing the entire transporter pattern, just records on the order of "Stardate XX.XXX, XX:XX ship's time: Beamed up away party. Transport of Commander Riker failed due to atmospheric interference. Subsequent attempt was made using secondary containment beam to boost transporter signal. Secondary beam failed, however primary beam held its integrity on second attempt and Commander Riker was beamed up safely." Based on those records, Geordi speculated that the secondary beam did not fail, but was reflected back to the planet's surface.
    • Perhaps the transporter records aren't 1:1 records of each transport but checksum-style digests used to ensure all the molecules went where they should have gone, just as, when downloading a large file, one uses a mathematical operation to confirm with high probability that all the bits are in the correct location. (This is what the person looked like before transport; is that the same as how they look after transport?) Also, Data had an ultimate storage capacity of "800 quadrillion bits" located somewhere inside a humanoid shape; imagine what a starship-class computer core spanning several decks could store!
    • They may store records of transporter activity, but they can't be storing the actual pattern data: that is stored in the Pattern Buffer, and the Tech Manual states that the pattern can only be stored for 420 seconds before it degrades. Scotty had to work around this limitation in "Relics", and even then his companion died. If they could store the data indefinitely, there would be no need for the Pattern Buffer. This is a sensible limitation from the point of view of drama: if the pattern could be stored indefinitely, then a person could be brought back from the dead at the push of a button.
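    • For what it's worth, the checksum idea above is easy to sketch. This is purely an illustrative analogy, nothing canonical: a cryptographic digest is a tiny fixed-size record, so a transport log could verify a pattern's integrity without ever storing the roughly 7×10^27 atoms' worth of data a full pattern would require. The function name and the stand-in "pattern" bytes are invented for the example.

```python
import hashlib

def pattern_checksum(pattern: bytes) -> str:
    """Return a fixed-size digest of an arbitrarily large 'pattern'."""
    return hashlib.sha256(pattern).hexdigest()

# Stand-in for a scanned pattern; a real one would be astronomically larger.
before = b"Cmdr. Riker: 7x10^27 atoms, arranged just so"
log_entry = pattern_checksum(before)   # only this small digest is logged

# After rematerialization, verify integrity without keeping the full pattern.
after = b"Cmdr. Riker: 7x10^27 atoms, arranged just so"
print(pattern_checksum(after) == log_entry)       # matching digests: verified

# Any single altered "molecule" yields a completely different digest.
corrupted = b"Cmdr. Riker: 7x10^27 atoms, arranged just SO"
print(pattern_checksum(corrupted) == log_entry)   # mismatch: transport error
```

The asymmetry is the point: the digest is 64 hex characters no matter how large the input is, which is why a ship could keep years of such records where it could never keep the patterns themselves.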

    Order your food awkwardly 
  • In many episodes of the show, various Starfleet officers order food from the replicator in a weird order, e.g. Picard says "Tea, Earl Grey" as opposed to "Earl Grey tea". Why do they give their orders in that way? Correct me if I'm wrong, but the replicators are still able to fulfill the order if it's given as "Earl Grey tea", so why do Starfleet officers do it that way?
    • Because they are ordering off a computer database. Tea is the type of drink, earl grey is the variety, and hot is the temperature (strength unspecified, presumably).
    • Maybe the computer's ability to parse natural language is a relatively new feature, so some people are just used to ordering the "old" way, similar to how many Windows users will customize the UI to resemble a previous version as closely as possible?
    • The computer is notoriously finicky about how people order food and beverage items. So quite possibly breaking down the description in this way is Picard's way of ensuring he gets his tea exactly the way he likes it. Or perhaps he had to do it the first time and just got used to it. Imagine if you will:
    Picard: Tea.
    Computer: Please specify variety.
    Picard: Erm... Earl Grey.
    Computer: Please specify temperature.
    Picard (annoyed tone): Hot!
    (Time passes.)
    Picard: Tea... erm, Earl Grey... hot!
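    • A toy sketch of how that database-style ordering grammar might work (the menu entries and function are invented for illustration; nothing here is canon): each field narrows the search before the next, which is why "Tea, Earl Grey, hot" parses trivially while "hot Earl Grey tea" would need full natural-language handling.

```python
# Hypothetical replicator menu organized as a tree: item -> variety -> temperature.
MENU = {
    "tea": {
        "earl grey": {"hot", "iced"},
        "darjeeling": {"hot"},
    },
    "coffee": {
        "black": {"hot", "iced"},
    },
}

def parse_order(order: str) -> str:
    """Parse a comma-separated order, most general field first."""
    parts = [p.strip().lower() for p in order.split(",")]
    if len(parts) != 3:
        return "Please state your order as: item, variety, temperature."
    kind, variety, temp = parts
    if kind not in MENU:
        return "Please specify item."
    if variety not in MENU[kind]:
        return "Please specify variety."
    if temp not in MENU[kind][variety]:
        return "Please specify temperature."
    return f"Replicating {variety} {kind}, {temp}."

print(parse_order("Tea, Earl Grey, hot"))   # parses cleanly, field by field
```

Each prompt in the dialogue above corresponds to one failed lookup level, which is consistent with the "finicky computer" theory.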

    How do I open doors? 
  • In "Datalore", there is a scene where Worf and a couple of security guards are accompanying Lore (thinking that he is Data). When Lore ordered the turbolift doors to close, why didn't the security guards (who got locked outside) order the doors open? Lore didn't set the turbolift in motion (he started taunting Worf as I recall) and he didn't use any security codes to lock the doors.
    • Presumably it was interpreted as a specific order rather than just a common door-closing, and "Data" outranked both of the security officers. They were probably trying to input a specific security bypass code, but by then things were working on Drama Time inside the lift.

    We can't transport people off the Holodeck? 
  • The episodes "The Big Goodbye" and "Elementary, Dear Data" both feature people stuck inside the holodeck. Why did no one think to use the transporter to beam people out? The episode with the Bynars showed you can beam people inside the Enterprise.
    • That episode was either Early-Installment Weirdness or Forgot About His Powers. When the Enterprise was leaving Spacedock and they realized that the antimatter containment was not actually breaking down, they asked where the transporter room was so that they could beam to the ship. The station commander said there was no time. Nobody even considered a site-to-site transport. However, on the original topic, it may be that the complex energy fields inside the holodeck, which include a mix of projected light, replicated matter and force fields, make getting a clean transporter lock difficult, much as beaming through shields doesn't usually work.
    • "A Fistful of Datas" is perhaps even more egregious, since the holodeck malfunctioning seems to be accompanied by communication failing for no obvious reason. But this has to happen, because otherwise they can ask for a beam-out and call it a day.
    • A case of Early-Installment Weirdness. As Data states in the pilot, much of the surrounding plant material was actually real. We see this on other occasions, with a drenched Wesley still wet from the holodeck environment and Picard being hit by a snowball, even while Cyrus Redblock and his henchman dematerialize. The intended exit process was this: call the arch, take anything you don't want destroyed under the arch, then have the holodeck clear the environment without need for concern. Otherwise, things are too "real" for sensors to easily distinguish, leading to the drama in "The Big Goodbye". This was actually meant as a plot point in "Elementary, Dear Data" when Data carries the drawing off the deck: it was meant to indicate that the safeguards had failed and Moriarty could have left the deck at any time. Picard, realizing this, subsequently tricked Moriarty into thinking he couldn't yet leave and thus outwitted the cleverest villain in all of literature. Gene Roddenberry objected to that (it made Picard look deceitful) and it was deleted, leaving only the supposition that the drawing was sufficiently simplistic to be actually replicated and thus real.

    The U.S.S. Pasteur has the same deflector pattern as the U.S.S. Enterprise? 

  • This horse has probably been beaten to death, but I didn't see it here, so here goes: The U.S.S. Pasteur fires a tachyon pulse to scan for the anti-time anomaly, and is destroyed by the Klingons shortly thereafter. The crew is rescued by the Enterprise and, when they return later, discover that the anomaly has formed and is based on the convergence of tachyon pulses from three different versions of the Enterprise. How is that possible when the Enterprise in the future timeline never fired a tachyon pulse of its own?
    • Probably the best explanation is that since Data programmed the pulse in all three timelines, he must have programmed all three with the same signature.
    • In an early version of the script for 'All Good Things', the future Enterprise was in a museum. Picard and his former crew stole it and took it to the anomaly. Executive producer Michael Piller rejected that idea, and either the writers never noticed the issue this caused with the tachyon pulse, or it was too late to find a way to fix the problem.

    The Echo-Papa 607 
  • Star Trek in general is the poster child for forgetting technology, not using the technology they have, or manufacturing technology that is worse than it should be. Even taking that into account, though, did no one at all think that reverse-engineering the drone weapons system seen in "The Arsenal of Freedom" had any merit? Yeah, OK: the original system accidentally wiped out the inhabitants of a planet. But how about neutering the AI and merely using the analysis software that is capable of adapting to fight any enemy? Or just piloting the drones by remote control? Or using the off switch? (Which is a major plot hole in the episode, considering the original designers of the system apparently forgot about it.) In other words, we are talking about a system so adaptable that a drone the size of a grown man's torso can destroy a Galaxy-class starship (i.e. one of the most powerful starships in the galaxy) without breaking a sweat, so clearly it could have grown a lot more powerful than that. It seems like it could have been very useful against the adapting capabilities of a Borg Cube, or the swarms of Dominion fighters that were initially slicing through the Federation's shields like warm butter.

    A con artist more advanced than the Federation? 
  • The episode "Devil's Due" has a con artist pretending to be a culture's equivalent of the Devil, using various technologies with added flair for effect. She also messes around with the Enterprise: transporting herself (and Picard) on and off it, disabling its transporters, blocking Worf with a force field, and, for a grand finale, extending a cloaking field from a "bad copy" of a Romulan cloaking device over the Enterprise while using a "subspace damper" to disable communications and other functions, making it appear as though the ship has vanished. So how exactly is the Federation's flagship so completely helpless against technologies a simple con artist was able to get, and why isn't every ship in the galaxy firing off subspace dampers at every enemy ship they encounter?
    • It's less that her technology is necessarily more advanced; it's just that she's applying it in ways that Starfleet doesn't expect. Perhaps "more innovative" works better than "more advanced." Hardly the only episode where Enterprise security looks horrendous, but at least there's a little bit of a justification this time. They have no trouble countermanding Ardra's technology once they identify it for what it is.

     At Least They're Not Literally Juggling Loaded Guns 
  • In "The Mind's Eye," why are Geordi and Data test firing the suspicious phaser rifle in the engine room, where they're surrounded by critical equipment? Enterprise has a small arms range that has appeared on-screen on at least two occasions. They also have a central armory—also seen on-screen—which probably has a facility specifically designed for the maintenance, diagnosis, and repair of the ship's inventory of phasers. Even failing that, Enterprise has a number of science and engineering laboratories, one of which must be better suited for what they're doing. Instead of using any of those facilities, however, they inexplicably decide that the best place to examine and repeatedly fire the rifle is about 20 feet from the warp core; thereby maximizing the possibility of blowing up the whole ship if something goes wrong.
    • Just because rooms have been seen on screen doesn't mean those sets are just sitting around in pristine shape ready to be used; more likely they're broken down and turned into other sets, or in storage. That means it takes time and budget to get them out, put them together, and fix them up if they've decayed any. Or, if they're typically used by other sets, that means taking THOSE sets apart, then putting them back together when they're done. Probably only a handful of sets that got used very regularly, like the Bridge and Main Engineering, were left up all the time; Star Trek is notorious for the studio constantly leaning on their budget, so it may have been too expensive at that point in the season to set up the arms range or armory, so they just did it in Main Engineering, which is more familiar to viewers anyway. Sometimes little storytelling nuances fans would like to see have to take a back seat to the practicality of the budget.
    • Fine points, but the question was more about the in-universe justification for test-firing a suspect phaser rifle just a few feet from the warp core; where, again, the slightest mishap could have resulted in the instant and total destruction of the ship. Apart from the bay where the Enterprise's antimatter is stored, I can't really think of a more dangerous place to do what they were doing. I'd also point out that this episode heavily featured a cargo bay, which also would have been a better choice than main engineering, with the added benefit of negating the budget and time issues.

     Not exactly 'Game of the Year' 
  • "The Game" is about a game that is kinda lame even by circa 1990 standards, let alone compared to the holodeck. Graphics are rudimentary, gameplay is simple to the point that one character says it practically plays itself, and it doesn't seem exciting in any way except for that craving for the next level-up reward. Then again, it doesn't have to be any good as a game: it directly stimulates the brain in a very pleasurable and addictive way. On the other hand, 21st century human games without that advantage manage to vacuum up thousands of dollars via Skinner box designs with uncertain intervals between rewards (which accomplishes the same thing).
    • Keep in mind, though, that what this game is trying to accomplish is a lot more complex. First, it seems to be closer to a physical dependency than psychological - an overwhelming and irresistible compulsion in even the most strong-willed individuals (which isn't to cheapen the addictive potential of modern games, but it's not 100% effective in every individual after just a few seconds - this is next-level stuff). Second, it's not just encouraging further play, but programming in complex actions and knowledge. The affected actively seek to get others addicted, know where and how to contact the game's programmers, and even know they're expected to address them in a specific way. Third, it's making very specific changes to its victims' personalities, making them more aggressive and changing their allegiances. It's going to take a lot more than the current stimulus-reward model to make all that happen.
    • Consider, however, the phenomenon summed up well by this xkcd comic: some of the most incredibly addictive games are not AAA-tier graphics showcases, but the sort of deceptively simple Just One More Level! gameplay loops that have eaten countless quarters in arcades, dominated flash game sites, and now populate billions of smartphones. In essence, this would be the ultimate expression of such games: an effortlessly intuitive interface engineered to minimize concentration-breaking distractions or delays, and a gameplay loop that maintains passive interest just long enough to begin responsively adapting, presenting a balance of challenge and reward tailored to maximize stimulation for a given user, with the effect increasing as it refines its model of the player. What starts as an executive's wet dream rapidly starts to sound like a psychological weapon.
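Tangentially, the variable-reward mechanism being invoked above is easy to see in miniature. Here is a minimal, hypothetical Python sketch of a variable-ratio reinforcement schedule (the function name and numbers are invented for illustration; this is obviously not how the Ktarian game works, just the real-world Skinner-box pattern the posters are comparing it to):

```python
import random

def variable_ratio_rewards(pulls: int, mean_ratio: int = 5, seed: int = 42):
    """Simulate a variable-ratio schedule: every action has the same
    fixed chance of paying out, so rewards arrive at unpredictable
    intervals -- the pattern behind slot machines and "one more
    level" loops, and the schedule hardest for players to quit."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    return [rng.random() < 1 / mean_ratio for _ in range(pulls)]

hits = variable_ratio_rewards(1000)
print(f"{sum(hits)} rewards in 1000 actions")  # roughly 1 in 5, irregularly spaced
```

The unpredictability is the point: with fixed intervals a player can relax between payouts, but under a variable ratio every single action is a candidate for the next hit.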

    TNG's 'nuclear option'? 

  • Blowing up a star appears to be terrifyingly easy in the Trek universe. In Generations we see that Soran is able to do it using a small missile and a base he set up and built himself. In DS9, a Changeling is able to build a sun-buster with a few hours' uninhibited access to an industrial replicator and a standard-issue Starfleet runabout. In TNG's "Half a Life", the Enterprise-D blows one up accidentally with a couple of modified torpedoes (they were trying to fix it). Blowing up stars appears to be so easy that it's even possible to do by mistake. So why aren't more suns blowing up? If it's that easy, it seems like quite a few hostile governments, let alone terrorist organizations, should be busting suns left, right, and sideways.
    • And yet, the Tox Uthat in "Captain's Holiday" is treated as a game-changing superweapon because it can stop nuclear fusion in a star. But isn't that reasonably easy to do anyway?
    • Sun-busting would suffer the same issues as WMDs do in modern times: Mutually Assured Destruction. You blow up a sun, we'll blow up two of yours. Eventually no-one has any suns left. Destroying suns would also annihilate the planets and populations, which are the only things that really matter in an interstellar conflict.
      • As we've seen, the Founders (for one) don't play by the conventional rules of war or of geopolitics, and they seem to think that targeting the Bajoran sun is perfectly fair game (so it's a wonder why they don't try that tactic more often).
    • It's not so easy. Soran devoted his entire life to figuring out how to blow up stars (and took the knowledge to his grave). The Founders are hundreds of years ahead of the Federation technologically. When the D blew up a star by accident, it was one with highly specific properties. So while the potential exists, it's clearly a lot harder than it looks.

     Whither spacesuits? 
Why do they never wear spacesuits or environment suits of any kind on away missions? I can't recall a single time in the TNG TV series when they did, yet offhand I remember at least two occasions in TOS: once with the Tholians ("The Tholian Web") and again in "The Naked Time". In "The Naked Time", the Enterprise became infected because a crewman breached his suit. In "The Naked Now", they became infected because they didn't even bother with a suit.
  • Budget. TNG couldn't afford the space suit costumes.

     Starfleet has forgotten the secrets of the staircase 
  • Whilst this could arguably be a general Star Trek Headscratcher, I feel as if this question is at its most appropriate regarding the Enterprise-D, as the ship is both massive and just as much a civilian ship built for children as it is a military one. Stairs. There clearly aren't any. Every time the turbolifts go down, we see the crew either climbing the shaft itself or crawling through miles of Jefferies Tubes, no doubt giving themselves severe lower back ache and bad knees as a result. When you consider that running to the saucer/stardrive section whilst the ship is falling apart is the primary means of saving the lives of the children on board, perhaps having something that would actually allow you to do that in a hurry would make sense. Note that in real life it is against every safety code imaginable to use an elevator during a fire or a crisis. This also opens up another interesting design question: a staircase running the height of the ship would use considerably less room than forty-two sets of Jefferies Tubes, the minimum needed when the tubes run horizontally between each deck; all those extra layers of crawlway effectively double the height of the Enterprise-D for very little reason.

Data & Other Androids

    Data Can't Say Can't? 
  • Why can't Data use contractions? One of the simplest models of computation, a finite state transducer, is capable of applying the morphological rules of English correctly (although not the syntactic rules; for that you need recursion). The fact that Data can do the unsolved job of reasonably converting natural language into logical statements, planning what to do based on these, and then converting his thoughts back into natural language with not much more error than the average human, yet can't use contractions, is on the scale of saying your computer can solve complex fluid dynamics equations but can't flip a bit.
    • This one was actually dealt with on screen. Doctor Soong created another android, Lore, first. Lore was capable of using contractions and idiom, as well as human emotion. However, since A.I. Is a Crapshoot, Lore was also evil. Soong shut Lore down and built Data, intentionally limiting his ability to mimic humans because Lore's capacity for evil was due to his being too human.
      • Which makes my 2003 Word spellchecker more advanced than Data in that field, since not only can it use contractions, it has an expandable vocabulary.
      • That means your spellchecker is evil, says Star Trek.
    • Lal could use contractions, so she anviliciously died. There are things with which androids must not meddle. Mwah hah hah!
      • The Uncanny Valley - the people on whatever planet Soong inhabited didn't take to Lore because he unnerved them. Data is obviously a machine, so people accepted him more easily.
    • Like so many other fields in which Data decides on personality quirks and ends up still being clearly robotic (that grey stripe! agh!), this may just be another one of them.
    • Soong was (even by his own admission) a bit eccentric if brilliant. After the failure of Lore (who started out rude, moved on to arrogant, and then outright psychotic and evil), one of the things he decided in the construction of Data was to make him much more polite as part of a very rigid ethics program. Data's speech pattern is a reflection of that: it's formal, and until he finishes his human development (a desire also programmed by Soong, even if it may never be fully realized) it will remain that way.
      • Of course, he had used contractions before, and occasionally since...
      • In fact at the end of the very episode in which they state he can't use them, he says, "I'm fine".
      • Data does indeed use contractions at various points in the series, but we can treat these as mistakes (or Early-Installment Weirdness, as the case may be). The fact that he cannot use contractions is indeed raised in dialogue several times ("The Offspring" and "Future Imperfect" come right to mind).
      • Aside from mistakes, Data can use contractions when he's quoting a phrase that uses them.
      • ...which makes about as much sense as anything else.
    • An interpretation I've seen elsewhere is that Data can use contractions, but he has trouble using them in a manner which sounds natural. A human native English speaker sometimes says "I am" and sometimes "I'm", depending on subtle nuances with each instance. Data still trying to figure out such nuances seems like it fits neatly into his ongoing journey to become more human. In the meantime, he considers it best to err on the side of caution.
  • Data has outright stated that he cannot use contractions (as in "The Offspring").
    • In a sense, this is a case of Flanderization. In the series Bible, it is stated that Data "usually avoids contractions" not that he is outright incapable of using them on the level of programming. By "The Offspring," he says "She can use contractions. I cannot."
    • If inclined to get around this, we can say that Data meant that he cannot use them effectively (doesn't know when to use them naturally), and that this statement itself is an example of why he avoids them. Compare the statement "I cannot play violin." This can mean the speaker is incapable of operating a violin, but more often means that the speaker is not proficient in the use of a violin to create anything you want to hear. Picture, if you will, someone suggesting a younger Data use contractions in his speech to sound more natural, then retracting their suggestion as Data proceeds to fill his sentences with every obscure contraction he's ever encountered.
    • I also wonder: Data is self-programming (for instance, he writes himself a program for romantic relationships in "In Theory"). How hard would be it be for him to write a program for himself that allows him to use contractions?
    • The future Data of "All Good Things..." uses contractions, so presumably the limitation is not one of hardware (unless he had some sort of upgrade in the meantime).
    • In "The Offspring", while Data does say that he "cannot" use contractions, in the same episode he says that using contractions is something his "program has never mastered". The latter implies that he has tried but cannot do it well. That again suggests that his "cannot" statement is oddly imprecise for Data.
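On the finite-state point raised at the top of this folder: applying contraction rules really is trivial string rewriting. A toy Python sketch (the rule table is a hypothetical sample, nowhere near full English coverage, and nothing to do with Data's actual positronic internals):

```python
import re

# Toy sample of English contraction rules; a fuller transducer just has
# more rules and context conditions, not a different mechanism.
RULES = [
    (r"\bcannot\b", "can't"),
    (r"\bI am\b", "I'm"),
    (r"\b(do|does|did|is|are|was|were|has|have|had|would|should|could) not\b",
     r"\1n't"),
    (r"\b(I|you|he|she|we|they) will\b", r"\1'll"),
]

def contract(sentence: str) -> str:
    """Apply each rewrite rule in order, left to right."""
    for pattern, repl in RULES:
        sentence = re.sub(pattern, repl, sentence)
    return sentence

print(contract("I am an android. I cannot use contractions."))
# -> I'm an android. I can't use contractions.
```

Producing a contraction is the easy half, though; knowing when one sounds natural is the genuinely hard part, which fits the reading above that Data's limitation is pragmatic rather than mechanical.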

    We'll court-martial you and then take you apart 

  • In "Clues", Data has been stonewalling the crew's attempts to find out what really happened to them during the missing day. He states that he's apparently guilty of falsifying the Enterprise's records, interfering with an investigation, and disobeying Picard's orders: "Your duty seems clear."
    Picard: Do you know what a court-martial would mean? Your career in Starfleet would be finished.
    Data: I realize that, sir.
    Picard: Do you also realize that you would likely be stripped down to the wires to find out what the hell went wrong?
  • Whoa, whoa, whoa! That would be a staggering overreaction on Picard's part. To say nothing of the fact that Picard was Data's defense attorney in the case that decided Data's fundamental human rights! Data has been recognized as a sentient being, and therefore Starfleet has no right to strip him down to the wires. I understand Picard's angry and desperate here, but damn!
    • "Descent, Part II" has a comparable problem when Data unilaterally decides to deactivate Lore. True, rights were only specifically extended to Data and not to similar Soong-type androids, but considering the extent to which Data believes in the spirit of this ruling, it seems incredibly odd to me that Data would just decide to, in essence, murder his brother... especially considering that at that point in the episode, Lore is contained and no longer an immediate threat.
      • Immediate being the operative word. Remember, the first time they encountered Lore, they beamed him out into space and he still showed up again later. Perhaps Data feels that the only way to stop Lore from popping up again is to essentially euthanize him.
      • By "euthanize" you mean "murder"... Lore is not asking for mercy or an end to suffering. Do you really think that a Starfleet officer would sanction executing a humanoid prisoner in the brig, no matter how dangerous they are? The implication is very much that androids are less than people — it's extremely strange to see Data himself acting like this was so.
      • Yeah, euthanasia literally means "good death". It's meant as a kindness to the person being killed. This is not such a situation.
      • Human prisoners can't be put back together. Lore is described as "permanently disassembled", but that could simply mean that all his parts are locked down in secure locations on Earth, not destroyed. Data has been both switched off and disassembled without obvious lasting ill effects (what happens to an android's "soul" when the power is turned off is probably a similar question to who comes out at the other end of a transport, but both happen and nobody complains much).
      • What you're describing would be analogous to capturing a dangerous human prisoner, putting them in permanent stasis where they are not per se dead, but will never again have consciousness or agency. As good as dead, in other words. Does that sound like something the Federation would sanction? Is there some reason that Lore is not entitled to a trial for his crimes, incidentally? Do androids not get such things?
      • It does make sense as an alternative to arresting him, prior to a trial. As for actually getting said trial? Probably not, judging by the state of AI rights in the Federation at that time. Hopefully in a few decades when things have warmed up a bit, he will.
      • Think of it this way. Data, as a properly functioning Soong-type android, was barely granted legal rights by Starfleet in a tentative, "we don't really know if he's sentient or not but we're erring on the side of caution" way. But does that ruling cover a malfunctioning Soong-type android? Data could argue that Lore's fundamentally a broken machine, incapable of making his own choices or controlling his own actions, and it's certainly true that Lore is so dangerously super powered that putting him on trial almost guarantees him an opportunity to break loose and wreak even more havoc. I'd imagine Data did face some legal grilling from Starfleet offscreen, but unilaterally shooting Lore isn't inconsistent with his character. We've seen before that he's willing to kill someone if he thinks there's no legal way of stopping them: he nearly did the same with Fajo in "The Most Toys."
      • I actually find that explanation pretty convincing. It would have been nice, however, to see Data vocalize some of this at the end of the episode, and maybe even express some regret about having to kill his brother, instead of making it all about himself (and the emotion chip). Mind you, "Descent, Part II" is so breathless and clunky that it doesn't give us a Hugh/Geordi reunion scene despite making Hugh's friendship with Geordi a major part of his motivation.
      • You know, "Disaster" shows that Data's head is able to operate completely independently of the rest of his body, and that he still retains his personality. If they wanted a trial, couldn't they just have removed Lore's head, or a select number of his other, equally detachable limbs, and rendered him completely harmless? The Federation could imprison him in a cat carrier after that if they wanted to.
    • You're all applying human laws and morality to this situation. Is it logical to permanently remove the threat Lore poses? Yes. If Soong-type androids are considered a race then Starfleet regulations cannot interfere with their codes of justice. And as for ceasing to be conscious or aware being too extreme a punishment for the Federation to consider, remember that that is exactly what happens when an Emergency Medical Hologram is turned off. The Doctor never complains about ceasing to be when he's deactivated and in fact requested that he be given the ability to do that himself. It's simply a part of what it is like to be an artificial being.
      • As "descendants" of human colonists they are de facto Federation citizens, so Starfleet has every right to interfere. Also, race does not have anything to do with culture, no matter what the script repeatedly says (Star Trek as a franchise has a lot of institutionalized racism, but that's just bad writing).
      • Just because Omicron Theta was settled by humans doesn't mean they were Federation citizens, especially since it was never called a Federation colony. While Data probably could have earned citizenship through his service in Starfleet had he been organic, I don't think it was ever established that his legal status as "not property" went as far as giving him citizenship. And in real life I would agree with your assessment of race and culture, but remember that in Star Trek "race" means "alien being" and thus not only are there biocultural differences to consider but the possibility of Blue-and-Orange Morality. But even discounting the murkiness of member planet rights vs. Federation law, it may come down to whether or not you can try a malfunctioning android—sentient or otherwise—at all.
    • Go back and take a look at "The Measure of a Man" and pay careful attention to the ruling. The question of whether or not Data is sentient isn't actually settled, and in fact, the presiding judge seems to go out of her way to avoid that issue in her summation. The only questions settled by her ruling were "Is Data the property of Starfleet?" and "Does Data have the right to choose?" The closest she came to saying that he is sentient is when she asked, "Does Data have a soul?", but she quickly dismisses the question and admits that she has no idea.
  • Alternate idea - Picard was attempting to "intimidate" Data by appearing to threaten something Data valued - his existence. He was trying to get Data to re-evaluate the situation, re-run the variables, Data was prepared to lose his commission, but was that all he was prepared to lose? Best case result: "I see, Captain. *head tilt* I had not considered that possibility. I will comply." Whether or not that was a viable tactic or a frustrated shot in the dark is anyone's guess.
    • It's also likely Picard's saying that someone at Starfleet who wanted an excuse to dissect Data would see it as the perfect opportunity to do so. Picard's not threatening to do it himself, because we all know he'd probably fight it tooth and nail (and eventually phaser rifle) all the way, he's trying to make Data aware of the danger.

    Non-Sentient Slaves Are the Best Kind! 
  • In "The Measure Of A Man", the concluding argument is "We can't mass-produce androids because that would be slavery." But what they were trying to prove is that Data is sentient. Since they never proved that, how could they make the argument that it would be slavery? And if they did prove it at some point (I don't believe they did), why would they need to make the slavery argument?
    • Picard didn't need to prove Data's sentience. All he needed to do was show that Data's sentience couldn't be disproved.
      As the JAG officer said, "This case has dealt with... questions best left to saints and philosophers. I am neither competent, nor qualified, to answer those. I've got to make a ruling... Is Data a machine? Yes. Is he the property of Starfleet? No.... does Data have a soul? I don't know that he has... But I have got to give him the freedom to explore that question himself..."
      As long as the possibility exists that it is slavery, they will err on the side of caution.
      • How does "slavery" even have any clear meaning when applied to tireless machines who gain fulfillment from work in a society without money?
      • "Gain fulfillment" assumes sentience, which is the whole issue. My kettle is not sentient. If the Federation wants to dissect my kettle or my kettle's daughter, the only moral issue is the evil government stealing my stuff = £5 tax. My pig is not sentient, neither is her daughter; if the evil Fed government wants to dissect them, the moral issue is £80 tax. My kettle and my pig are things. Treating people like things = slavery. Data and Lal are people; they are NOT things. That's logical, Captain. (Compare The Hitchhiker's Guide to the Galaxy: the sentient animal that WANTED to "gain fulfillment" by being eaten was creepy.)
      • Because they would be property and yet fully in possession of free will. Whether they enjoy the work or not, they didn't choose it, and thus, are slaves. Moreover, it's not that it would definitely be slavery, it's just that it could be slavery. And it was a step too far.
    • The very fact that he objected to being dismantled and tried to resign should have been proof that he wasn't just another computer. Even though Data was programmed to obey his commanding officers, he essentially disobeyed them & fought against the legal proceedings out of a sense of self-preservation. He contradicted his programming, demonstrating free will. As was said in the episode, a replicator doesn't ask you to stop if you dismantle it, and the ship's computer doesn't try to resign when it's ordered to self-destruct. Basically, anything that is capable of choice and self-interest is more than mere property.
      • Not really, as a computer could be programmed to object to dismantlement - and self-preservation isn't a sign of sentience anyway. The very first reply on this one answered the question. You don't have to know for sure that Data is sentient, you just have to not know for sure that he isn't.
      • Data did NOT object to being dismantled. In fact, he said he was curious about the procedure. The reason that he objected was that there were dangers to it, and Maddox basically told Data he couldn't guarantee that it would be safe. At the end of the episode, Data told Maddox to continue his research, and that he WOULD be willing to be disassembled/studied if it could be done safely.
      Data: It sounds intriguing.
      Riker: How will you proceed?
      Maddox: I will run a full diagnostic on Data, evaluating the condition of its current software. I will then dump its core memory into the starbase mainframe computer and begin a detailed analysis of its construction.
      Data: You've constructed a positronic brain?
      Maddox: Yes.
      Data: Have you determined how the electron resistance across the neural filaments is to be resolved?
      Maddox: Not precisely.
      Data: That would seem to be a necessary first step.
      Maddox: I am confident that I will find the answer once I examine the filament links in your anterior cortex.
      Data: But if the answer is not forthcoming, your model will not function.
      Maddox: I do not anticipate any problems.
      Riker: You seem a little vague on the specifics.
      Picard: What are the risks to Commander Data?
      Maddox: Negligible.
      Data: Captain, I believe his basic research lacks the specifics necessary to support an experiment of this magnitude.
      (after the ruling)
      Data: I formally refuse to undergo your procedure.
      Maddox: I will cancel that transfer order.
      Data: Thank you. And, Commander, continue your work. When you are ready, I will still be here. I find some of what you propose intriguing.
    • The argument was not about Maddox mass-producing Data, it was about Data's status as the property of Starfleet, and thus his right to resign, aka, to choose. But you know what bothers me here? Data could never be considered the property of Starfleet. As far as I can see, Dr. Noonian Soong created Data of his own volition—Data was not commissioned by Starfleet—and so paid for all the necessary parts himself. If Data was going to be anyone's property, it would be Soong's.
      • But Soong was at the time believed to be dead, and Starfleet ostensibly “found” Data on that planet where he was built, so it is reasonable for them to assume they can own him if he is to be considered a non-sentient machine.
      • A salvage expedition legitimately owns all the things it finds. But Data ain't a thing; he is a person. Owning people is wrong. The Fed had already recognized Data as a citizen: he went to the Academy, he was commissioned an officer, he was awarded medals, and now the evil Fed wants to cancel his citizenship on a whim. They did the same in DS9, when Eddington wanted an illegal search versus Kasidy. Sisko: "You can't conduct an illegal search against a Federation citizen." Eddington: "She ceased to be a Federation citizen when she sold medical supplies to the Maquis."
      • The episode says that the Acts of Cumberland ("passed in the early twenty-first century," so any day now!) provide legal precedent for Data being understood as property of Starfleet. Just how is not explained, but at least there's a Hand Wave. It opens a host of questions of its own, not least how statutes passed on Earth well prior to the Federation's foundation provide precedent for anything...
      • The "evil Fed" doesn't want to do anything. One Starfleet officer wants to do something. Starfleet is willing to entertain the idea he might have the right to do that thing because it's not certain of the answer itself. They decide they don't have the right to do that thing, so they don't. Hardly a bunch of cackling mustache-twirling monsters.
    • The slavery aspect of the argument was more about the future implications. Guinan and Picard talked out the idea of what would happen if there was a double-whammy of Data's agency and self-determination being denied by Starfleet combined with Maddox successfully recreating Soong-type androids as a result of taking Data apart for study. Picard realizes the potential for a new form of chattel slavery of arguably sentient beings (with Whoopi Goldberg's presence in this episode adding an extra layer of Reality Subtext). His argument then essentially becomes "You can't be sure if Data is truly sentient or not, but if you rule against him, you've tacitly given the Federation permission to create a new slave race decades down the line, reliving an exceptionally ugly part of human history. Do you really want to risk that result?" (Tangentially, it calls to mind a legal case early in the USA's colonial history where a civil dispute involving (ironically) a black slaveholder provided the legal precedent for race-based chattel slavery in the future country as a whole, despite the less firm legal footing in Britain, which practiced indentured servitude as a more class-based temporary slave status than permanent race-based slavery.)
  • What gets me is: shouldn't this all have been resolved years earlier when Data first joined Starfleet? Presumably there's got to be a rule on the books stating that only sentient life forms need apply to Starfleet Academy. By giving him a rank that includes command authority over human and other sentient life forms and the responsibility, in an emergency situation where no one of higher rank is around, to possibly have to make life-or-death decisions for them, aren't they kind of assuming by default that he's sentient himself? If I were a Starfleet ensign, I wouldn't look too kindly on being told, "Here's your commanding officer, Lieutenant Commander Toaster. It is an object without true intelligence or self-awareness, a THING owned by Starfleet. Obey its orders without question."
    • But these are just the kinds of strange scenarios created when laws are slow to catch up to society. Famously, in Canada, Emily Murphy held public office before women were actually defined under the law as persons. Heck, in the U.S., Victoria Woodhull ran for president a full 48 years before women could vote (i.e., they could not vote, but could be voted for). "Measure of a Man" implies that there was some resistance to Data's entry to the Academy; perhaps at that time, they didn't want a protracted legal challenge and passed the buck, but ultimately paid for it later. Note too that we do see a Starfleet officer objecting to serving under Data in "Redemption, Part II."
      • Just to quibble, Emily Murphy was never a Senator. She was a Magistrate - that is, a sort of judge. None of the Famous Five ever became Senators; Nellie McClung was a Member of the Legislative Assembly of Alberta, as was Louise McKinney, before they launched the court case to allow women to be Senators. On a different level, the argument in Edwards v. Canada (AG) (1928) was less about whether women were "persons" than whether they were "qualified persons," as there was a common law rule in the British Empire that women could not hold public office. (But common law is trumped by statute law, and the Constitution Act (or BNA Act as it was called then) lays out the qualifications for a Senator and never mentions sex, which was the basis for the Privy Council's finding that women could indeed be appointed to the Canadian Senate.) All that said, I agree with your general point, even if the example doesn't quite match.
      • It's still rather surprising that nobody pointed to Data's acceptance into Starfleet and being awarded the rank of Lieutenant Commander as legal precedent for his being legally considered a sentient being by the Federation.
    • Great, now I'm imagining Talkie Toaster on board a ship, constantly asking the crew if they want some sort of toasted bread product.
    • Starfleet did have such a rule, and it was discussed briefly when Data applied for entrance. (It's in the Expanded Universe.) Basically they did decide that wanting to enter Starfleet of your own accord demonstrated enough sentience to be in Starfleet, but they didn't actually make an official pronouncement saying "Data is a sentient being" because 1) they didn't really feel it was necessary and 2) it was beyond the scope of the original decision, which was more limited to "Can Data enter Starfleet Academy?" than "Is Data a sentient being?" The trial in the episode is the first time the question of Data's sentience was directly addressed in a legal fashion.

    Data's Emotional About Emotionlessness? 
  • Data claims to have no emotions, but desires to have them. Desire is an emotion, but no one points this out. Maybe if another Enterprise crew member made this argument, Data would stop pursuing something that he already has. Yes, Soong may have programmed him to feel a need to be given more complex human-like emotions later in his life, but it still counts as a state of mind that presumably would be altered by being given an emotion chip. According to my mental dictionary, that would be a good example of an emotional state.
    • Data was programmed with the drive to improve himself. He's decided that his goal is to "be human," and humanity is practically defined by emotion. It's probably less "desire" than ambition, perhaps.
    • Desire is only an emotion if you define it as one.
      • Yeah, I think the writers only defined things like happiness, sadness etc. as emotions. Saying "If I had emotions I would be a superior being to what I am now," and wanting that is more an opinion.
      • And in psychology it's usually not defined as one.
    • It's also been suggested that Data was somehow capable of 'evolving', and that his programming had the capacity to write a subprocess that would induce subtle emotions such as desire. There are even a few examples from the show that might support this, such as him correcting people when they said his name wrong or choosing to disobey his own programming and exercising free will. Then there was that episode where Lore managed to cause Data to feel emotion...
    • Data CLAIMS to have no emotions, yet in many parts of the series he obviously displays them, e.g. his treatment of his cat. Unreliable Narrator, perhaps?
      • I think that, in areas of humanity that he wasn't able to internalize, he copied the behaviors through which we express our feelings so he could get a better understanding of them through reverse-engineering. Pet ownership was one such example, and the most common. Others include time perception (He flat out said that's what he was doing when he experimented with "A watched pot never boils"), relationships (In that episode where he had a girlfriend he was pretty open about the fact that he was approaching it as an experiment), and possibly reproduction (He's a life form, so his reproductive urge might have been genuine, but it might not have, and certainly his parenting relied on copying behaviors of which he didn't have a deep understanding).
    • In regards to his cat, it's reasonable to assume that the care and ownership of a pet is a fairly large milestone in Data's emotional development, given how much time and effort goes into taking care of Spot. He writes So Bad, It's Good poetry about her. While making out with a love interest of the week, he tells her he's thinking of changing Spot's food supplement. And when he asks Worf to take care of her for a couple days, his ridiculously long list of instructions include telling her that she's a nice cat and a good cat. He puts a lot of thought into his interactions with his cat, but actual emotional attachment? I just don't see it. If Spot died one day, I'd imagine he'd take the body to Sickbay for burial/disposal/whatever they do with pets on starships, make a note in his personal log, talk to/accept consolations from his friends, maybe do some research on pet deaths in various cultures, and that's about it.
      • I think Data would "feel" more than that. Remember, he still has that little holo of Tasha, and it's brought up in a few episodes. If he "feels" enough to keep a holoprojection of a dead friend — as pointed out, Data can perfectly remember every moment he ever spent with her, yet he keeps a physical keepsake of her — then surely he would remember Spot in a similar way.
    • Desire isn't necessarily an emotion. Imagine an artificial intelligence trying to solve a complex problem, programmed to come up with an answer as close as possible to the perfect solution. It could be said (if it spoke English like Data) to "desire" the solution.
      • Maybe that would be an emotion, if the artificial intelligence was aware enough to "feel" compulsions like that. Emotions in humans are base-level urges and sensations that kick in whenever an appropriate trigger happens, after all; if a self-aware AI had similar urges programmed into it, it might very well think "human brains are wired to experience things similar to this and they call it an emotion, therefore this must be an emotion that I'm wired to experience."
  • One theory circulating around the net (including This Very Wiki) is that he does have emotions but has no physical feedback to provide him with a point of reference, and thus is simply ill-equipped at expressing them. Numerous times in the series he has shown things very similar to bravery (even receiving several commendations for it), annoyance, happiness at the successes and safe returns of his friends, and even sorrow at the loss of his father and daughter. He's also shown deep affection for his crew mates and especially his cat, almost to the point of spoiling her. Concerning Data and his emotions or lack thereof, it is very much a case of actions speaking louder than words.
    • This can be borne out in "The Next Phase". When reminiscing about Geordi's "death", Data remarks that his neural processors become used to certain kinds of input over time, which are then noticed when absent. He says this about Geordi, noting that he is used to having his "input" after several years working together and, now that it's absent, it will be missed. So in his own roundabout, mechanical way, Data shows that he's capable of missing someone if they're dead or gone.
    • Bravery isn't really an emotion. It's a response to an emotion (fear) that Data can't have/feel, namely not letting it get in the way of what has to be done. So bravery is kind of a default for him. But on the others you're right.
  • I always thought that he had emotions, but they weren't the same as human emotions. He reacts to events around him more than Spock does, with facial expressions (varying from obvious confusion to interest) and comments. Androids must have something like emotions (let's call it "Emotion.1"), but Data's been too busy searching for human emotion to notice that he has them. Emotion.1 is probably less complex than human emotion, and also less obvious on the outside, but it exists. If another android like him (not uber-human like Lore) were to show up, they could probably relate through their Emotion.1.
  • This was a major headscratcher for me also; my personal Hand Wave was to divide emotions into two kinds: Intellectual Emotions (like curiosity) and Emotional Emotions (like happiness). This solved a lot of the conflict over this particular issue (Data "wanting" to want things?) but didn't solve related Headscratchers. In particular, when Data told Geordi, "I cannot stun my cat," no further reason given, it was obviously the writers trying to avoid giving Data, a highly sympathetic character, an unintentional Kick the Dog moment. But in context, it makes little sense; Data can't feel empathy, compassion, worry or regret; why, on a purely intellectual level, would he object to disciplining a pet in a mostly harmless fashion?
    • Maybe it's part of his programming as well? There are, after all, the Three Laws of Robotics.
    • Data has an ethical subroutine that he likens to a conscience. Clearly, Data does not find it within his "conscience" to stun his cat, even if it is something that would do no lasting damage.
  • I've had it explained to me that Data is simply unable to interpret the things he experiences as emotions. He frequently describes his thought process and stimuli to other people and is told that they're emotions—he can become self-reflective at a funeral or desire to engage in a Sherlock Holmes-themed holodeck adventure, but can't grasp that those are products of sadness and happiness, respectively. Hell, his "grandpa" outright tells him that the real problem is that he's too hung up on the questions to accept the answers, and even this doesn't connect for him.

    Teenage Android Girls Scarier Than the Borg? 
  • In the episode where Data creates his android daughter, why are Picard and Starfleet more scared of her than of the Borg?
    • Why do you say they were more scared of her than of the Borg? In all the TNG Borg episodes except "Q Who," when they didn't know what the Borg were, they started looking for ways to kill and/or flee the Borg as soon as they saw one. In VGR they usually didn't have that option, but they puckered their assholes until serious Bad Ass Decay set in. With Lal they did no such thing.
    • This model has demonstrated itself to be capable of presenting a significant danger to a Galaxy-class starship and every man, woman, and child on said ship, while being innovative and responsive. 50% of the encountered copies of this model have been evil. It now is capable of reproduction with minimal equipment and supplies, and you don't really know where the off switch is.
    • Still, with that being said, it was never explained to my satisfaction why Data needs the Captain's permission to procreate, given that no one else on the crew is subject to that requirement.
      • Because Data does not have the smug gene. Kirk, Riker and Paris all get lots of hot Green Skinned Alien Space Babe action. When Harry Kim gets Green Alien Babe action, Janeway screams and invents fake Space Corps regulations. Suppose they were real Space Corps regulations? Starfleet operates a eugenics program. Only smug people are allowed to breed. Picard is not smug, so he rejects all the space babes who throw themselves at him.
      • Objectively, Data did not need anyone's permission, but when confronted with an unknown, people get scared and look to pre-existing structures for guidance, and this was a relatively new situation. In this case, as the episode showed, there were two conflicting social models, neither of which exactly fitted: one was the construction of a powerful autonomous machine, the other was biological procreation. The episode explored what happens when those two approaches come into direct conflict.
      • Also part of Picard's concern is that Lal would be taken away from Data based simply on the fact that they were androids and thus not true "people" and that's exactly what happened. Picard is simply being prudent because he's seen how easily Data's rights can be oppressed and taken away.
    • Also, think of how people reacted to finding Data's head in the past in "Time's Arrow". Constructed beings though they may be, there's still human emotion to consider — would you want to have to be in a position to kill your trusted friend's only daughter, even if you know that another could theoretically be built?
    • One more consideration is that Starfleet had wanted to take Data apart for some time to learn how he functioned. Starfleet was reacting as they were concerned that this technological marvel might leave their sphere of influence and fall into the hands of others who would not give any consideration to the entity's life — they'd disassemble the machine, learn its secrets, and possibly weaponize the technology. Data was an upgraded Mk II Soong-type android; the subsequent Mk III was virtually indistinguishable from humans. What could the Romulans or other hostiles do with such a creature? Infiltration, warfare, etc.
      • This sounds like a bit of a WMG to me; not entirely implausible, but not substantiated by evidence from the show either. In fact, Bruce Maddox opposed Data's entry to the academy; if there was some conspiratorial desire to place Data in Starfleet's palm, wouldn't he have eagerly welcomed Data's entry into Starfleet?
      • Data joining Starfleet meant Data was a "person" on some level. Maddox did not want to grant Data such a title. He felt Data was just a machine, nothing more.
    • I think that it's less fear of her, so much as unease that she's an unknown variable. She's got all of Data's capability (and they know that Data is entirely capable of handling the Enterprise-D basically on his own), but she's young and they don't know enough about her to not be wary. After all, they've already had one teen almost get them killed (Wesley), and Lal is even more capable. Say she goes through a troubled teenager phase and decides to take the Enterprise for a joyride, or gets upset and decides to lash out. Now there's a super-advanced angry teen running around, who also happens to be the daughter of a close friend. Bit of a problem.

    Where Are All the Androids Hiding? 
  • Are Data and his family the only androids in the galaxy? There are at least five instances of androids with separate origins in TOS: "What Are Little Girls Made Of?", "I, Mudd", "Requiem for Methuselah", "Shore Leave", and "Return to Tomorrow". Why is it that in TNG, Data, his brother(s), his daughter and his "mother"(???) are the only androids ever mentioned? Artificial Intelligence is a standard of sci-fi. Doesn't this seem a bit inbred?
    • There were other forms of AI, but Starfleet didn't have the technology to build an android as sophisticated as Data.
    • Original series androids had that fatal flaw of being destroyed by illogic. It makes sense that they would not be used.
      • Not quite: Ruk and the other androids in "What Are Little Girls Made Of?" had no such problem (and that's just for starters). Besides, most androids are benign beings whom nobody would set out to destroy with logic, even if they could. Strangely, the Voyager episode "Prototype" actually showed us other A.I. constructed by aliens, which is more than TNG ever did (even making reference to Data in the process).
    • Well, given all the crap that Data has to put up with over the course of the series, maybe they really are living in hiding, to avoid having their programming hacked, being threatened with disassembly, etc. etc.
    • I've heard it brought up before that Data is the only sentient android in existence, so maybe after he came along and showed just how far the science could be taken, most other androids fell out of use while various labs tried (unsuccessfully) to re-create him.
    • In "What Are Little Girls Made Of?", all the androids seem bent on galactic domination as part of their basic programming, and Kirk implies at the end that he will be hiding the technology. It's therefore no surprise they weren't used again. In "I, Mudd" the androids may not be actually sentient and they are prone to being easily disabled by illogic. Despite looking more human, they're not really more sophisticated than Data. The androids in "Requiem for Methuselah" were the work of one immortal super-genius, still never entirely successful, and he was apparently going to die soon after the episode. Starfleet may not have been able to duplicate Flint's work. The "Shore Leave" androids also appear to be non-sentient (they are limited to the roles their creators want them to play), and may be confined to the planet. They're basically a more physical form of holodeck. The androids in "Return to Tomorrow" were never finished and their creators are no longer around. Again, Starfleet may not have been able to duplicate that technology. So Data really is the most sophisticated android available to Starfleet, then.
      • It's also possible that Soong examined the technology of some of the TOS androids and worked it into his creations.

     Incapable of Lying? 
  • In "The Most Toys," Data attempts to use a very lethal disruptor on Fajo, only to be beamed away at the last second. O'Brien detects Data's weapon during transport, and informs Riker it is in a state of discharge. When Riker asks about this, Data tells him that the readings must have been a transport error, but the audience is left to infer that Data intended to kill Fajo, and lied to Riker about it. By most measures, killing Fajo would seem to be an acceptable use of force under the circumstances. Why, then, does a character who has claimed to be incapable of telling lies lie about what seems to be an entirely justified homicide?
    • He doesn't lie. Riker says "that the weapon was in a state of discharge," to which Data responds "Perhaps something occurred during transport." As we saw in "Clues," Data is capable of refusing to answer a question; in "The Most Toys," he deflects the question. So he hasn't actually lied, he's found a loophole and used it.
      • Yes, but the question is, why does he evade the question at all? Perhaps because he does not want people to know that he is capable of deciding to kill. No matter how you slice it, it's a very important character moment.
      • Data had ethical subroutines programmed due to the fear that he could have turned out like his brother Lore. His sense of morality would likely have conflicted with the idea of using a weapon that causes a torturous death, even if it was against someone like Fajo who most definitely deserved it. His comment of "Perhaps something occurred during transport" was likely his attempt to deflect that line of conversation while he tried to figure out for himself why he would have murdered a man in cold blood if not for an act of nearly miraculous timing on the transporter's part.
      • Or, being an intelligent sort, Data decided saying "The murderous bastard really pissed me off so I was going to vape his ass" to your superior officer when you're a member of an organization that values peaceful resolution is not the smartest thing to do.
      • If I recall correctly, Data held fire until there was an imminent threat to life—either his own or Fajo's wife/girlfriend/secretary/whatever. I think that would be consistent with what any other member of the crew—except maybe Worf—would have done in that situation as Starfleet Officers. Picard, for example, would hold fire until he absolutely knew that deadly force was the only option to defend the innocent, but when pressed to that extreme, he would open fire. I think within the morality of the show, Data acted perfectly normally, and the "something happened in transport" line was the writers running out of screen time to address that in dialog.
    • When was it ever established that Data is incapable of lying? Vulcans understand the necessity of lying when in the performance of duty, so it seems unlikely that an even more logical being wouldn't grasp the concept. Besides, it would be an extraordinary liability taking him to Romulus in "Unification" if some bystander asked him, "New in town, eh? So what brings you here?" and Data would be compelled to say, "We are Federation spies."
      • It was probably in one of the earlier episodes. Speaking of lying, in "Clues" Data lies to the crew for most of the episode, but it turns out Picard had ordered him to do so (the crew, sans Data, had had their memories changed). Data may be incapable of lying, but he's smart enough to find loopholes and abuse them to lie without triggering the subroutine preventing him from lying. If he had to always tell the truth, "Clues" probably would've ended with Data shutting down or trying to kill the crew to follow orders and follow his programming. Telling a machine that can't lie to lie is going to have bad consequences no matter how you slice it.
      • I'm pretty doubtful it was ever stated that Data cannot lie. The closest thing I can think of is in "Hero Worship," when Data himself tells Timothy "Androids do not lie," which is a slightly different claim and in any event is a tactic to get Timothy to admit the truth. Data obviously can and does lie.
      • Indeed, when he was stuck in the 19th century he was banging on about being French. I think his psychology/personality is that he prefers not to lie (and may not do so in 'smaller' situations when someone else would), but is perfectly capable of it when it's called for.
    • OP here, and for what it's worth, I think I was misremembering a line from "Data's Day," in which Data states that Vulcans are incapable of lying, which itself was probably a callback to TOS's "The Enterprise Incident," where a Romulan commander makes the same claim (ironically, Spock was lying to her at the time). It does sort of raise a secondary headscratcher, though: Why does Data buy into this demonstrably untrue myth about Vulcans being incapable of lying?
      • It's his Cultural Sensitivity Routines, which inform him that it's bad form to deliberately call a species on its bullshit.
    • In "Devil's Due," Ardra refers to Data as being "incapable of deceit or bias." That could be construed as saying that he can't lie, but may simply mean that he wouldn't be a deceitful type of person. In any event, how would she know?
      • It would have been a huge gamble, but she might have inferred something from the way Data confirmed to Picard that the contract could be interpreted in a way that validated Ardra's claim on the Enterprise. That was much more helpful to Ardra than it was to Picard, so she reasoned that he must be unbiased if he was willing to blurt out a fact that hurt his Captain's argument.

     Why is Data incapable of emotions? 
  • While it's true that Data lacks any built-in emotion program until his emotion chip is installed, shouldn't he at least have been able to try to mimic them? As an android he has knowledge of pretty much everything; one would presume that would include not only the dictionary definitions of anger, happiness, sadness, etc., but also any philosophical and psychological texts on the subject of every emotion, and thus he should be able to fake them fairly well if not perfectly.
    • He does try to mimic emotion several times throughout the show, and every single time it comes off as exactly that; him simply mimicking it. It probably doesn't matter how well he reads up on the subjects, so long as he's unable to experience it himself, he lacks the fundamental knowledge of how to do it.
      • Then why did he need a chip in order to feel emotions? Data's programming is supposed to be able to adapt itself, and yet it can't create some sort of emotion subroutine or whatever? For instance, Data could find himself in a situation where he's supposed to be angry, and with this program Data would automatically recognize that he should be angry and to what degree he should be, and then act accordingly. Data has thousands of automatic programs that require no conscious effort on his part to function, and said emotion program could be one of them.
      • Because he can't actually experience emotions without it. He constantly tries to experience emotion throughout the series through his hobbies, but more or less fails, so no, his programming apparently cannot adapt itself to such an extreme degree. Simply creating a subroutine to essentially fake having emotions won't actually cause him to have emotions.
      • Data's programming adaptability must be pretty poor then, if it can't adapt into feeling emotions, which is probably a major step, if not the only step that really matters, in completing Data's primary objective: to become human. I think it would make more sense if Soong put some sort of block into Data's programming that prevents or deactivates any attempt by Data's adaptive programming to create emotions in him, as a method to prevent Data from becoming like Lore, and the emotion chip removes said block or places said programs in Data.
      • And risk it getting disabled by whatever monster of the week Data'd run across, or Data himself potentially removing it? It's apparent that after Lore, Soong didn't think he had ironed out all the kinks related to emotion in his androids, and wanted to avoid Data turning emotionally unstable as well. That's why he worked on developing and refining the chip for nearly twenty years before trying to give it to Data. Soong clearly decided that it wasn't worth opening that can of worms by leaving emotions in the mix, and intended to add them at a later date, probably once Data had learned enough to be relatively stable when he got them. It's just that the whole 'Crystalline Entity' thing probably fouled his plans up a little.
    • There's also the theory that Data does experience some form of emotion, but not quite the same as a human would. He "misses" people once he becomes accustomed to their presence, he values and appreciates the loyalty and self-sacrifice his crew mates show towards him. He decided that Kivas Fajo was too dangerous to live but rather than, say, snapping his neck he was going to give him a painful death by shooting him with a disruptor, which strikes me as motivated by a desire for revenge.
      • Data displays a number of behaviors that seem to imply that he experiences emotions—many of which are listed above. Many of these behaviors could be explained as Data mimicking humanity, but several—such as in the Fajo example—cannot. I think a good rule of thumb for telling the difference between the two is what I call the 'Vulcan Test:' In any given situation, would a Vulcan do what Data does? Data attempts to kill Fajo, but I firmly believe that if the roles were switched, Spock would have opted to subdue and arrest Fajo; and he certainly wouldn't have lied about it afterwards as Data did. That would imply something beyond pure logic made Data pull that trigger. Also mentioned above, the killing would, under the circumstances, appear to be a justifiable use of force—so there seems to also be something else motivating Data to mislead Riker.
    • By any reasonable standard, a computer as advanced as Data should have no trouble miming the emotions he's observed in others, but then, such a computer shouldn't be hamstrung by contradictions either. Data wouldn't be Data if he were not perplexed by the behaviors of others.
    • It's never made explicit, but the business with Lore would indicate that Data's lack of emotion is an after-market addition to a basic design that can otherwise experience them (ab)normally. The emotion chip may well be more like a security dongle or a Mac's SMC; to provide some un-fakeable, un-replicatable permissions system to gradually disengage the restraining bolts on something that was already there.
    • Data is likely programmed to feel emotion, just like Lore was. However, Soong most likely put a program in place to suppress those emotions after the debacle that was Lore. Whenever we see Data show a sign of emotion, that is when the block hasn't overridden the emotion fast enough. The emotion chip could have been designed to override that block and allow Data to feel unfiltered emotion.
    • Regarding the above, Lal's positronic brain was based on Data's and she died shortly after beginning to experience emotions. This was probably the result of a second failsafe by Soong: If the blocked emotion program is activated despite its protections, the entire neural net crashes and thus prevents another Lore.
    • Soong says that Lore's emotions are the result of programming, but that they were quickly twisted towards the more 'anti-social' end of the emotional spectrum (and eventually to the downright evil side). Data was programmed with ethical subroutines to dictate his behavior, rather than trying to perfectly mimic how humans decide what is moral or just, or how to behave emotionally. The chip, on the other hand, is a hardware-driven solution: presumably, by refining his methods and then writing them in the proverbial stone of circuitry rather than the more mutable medium of ones and zeros in a storage medium, he was able to simulate emotions without the risk of them being perverted like they were in Lore. None of this helps when it's installed in Lore, however, since he doesn't have any absolute morality code and is already far down the road of evil by the time it's installed.

     Picking on Data 
  • Don't get me wrong, I love Data; he's one of my favorite characters in the whole franchise. After the events of Brothers, in which Data is easily able to seize control of the Enterprise and completely lock out the rest of the command staff, this question needs to be asked: Given the number of times his android nature has posed a threat to the security of the USS Enterprise (Brothers, A Fistful of Datas, Clues, Masks, Datalore, Star Trek: Generations, and arguably Quality of Life), the United Federation of Planets (Descent, Parts I and II, Star Trek: Insurrection ), and passers-by (Thine Own Self), why does Starfleet allow him to serve as second officer of its flagship, at a critical bridge post, when that ship is routinely sent on Starfleet's most sensitive and critical missions?
    • One can easily say the same thing of Troi. How often are her empathic powers turned into a security risk?
    • Human nature has proven to be a much larger detriment to the ship's wellbeing, overall. Geordi was brainwashed by the Romulans, and later unwittingly aided the Klingons in destroying the Enterprise-D; Riker lost his ever loving mind and got trapped in a play/asylum while undercover; Picard decided to throw away a chance to genocide the BORG just because he made friends with a single member of the Collective; and let's not even talk about Wesley, shall we? However, Data's pros outweigh his cons, just like with the other members of the crew. He managed to abort an attempted Romulan invasion of Vulcan, he killed the Borg Queen, broke Picard free of the Collective's stranglehold, deduced the nature of Q's anti-time paradox (thus allowing Picard to save all of existence), and was able to break out of a temporal causality loop, freeing both the Enterprise-D and the Bozeman.
      • True, and almost every main character in the franchise has, at one time or another, done something that would probably get him kicked out of any real-world military, but we accept that they aren't because human nature is one of the overall themes of the franchise — also because they're main characters and TV doesn't work like that. But Data's a much bigger threat than probably any other Starfleet officer. Brothers shows that Data is capable of hijacking the Federation's most powerful starship single-handedly at a moment's notice, and his mental state can be controlled from lightyears away, as both Lore and Dr. Soong have demonstrated. And while Data was invaluable in solving the anti-time paradox, he shares much of the blame for creating it in the first place. Depending how you feel about the morality of Star Trek: Insurrection, you might also argue that his malfunction on the Ba'ku home world was indirectly responsible for the deaths of thousands of Allied personnel fighting the Dominion in a war that the Federation was losing badly.
    • There are numerous beings in the galaxy which possess telepathic abilities that could allow them to compromise the security of and/or outright seize a starship from the possession of an entirely biological crew. Vulcans can plunder people's minds while touching them. Betazoids are even scarier, since it has been shown that they can read minds across great distances. An unscrupulous Betazoid with a lot of skill could rummage freely through the minds of Starfleet Command while sitting in a cafe in San Francisco! More powerful telepaths can alter people's perceptions or control them outright. Technological means of controlling people's minds have also been shown. So essentially Data is basically in the same boat as everybody else. Except that in his case he is often immune to means of control that affect biological beings. In many ways, he is actually an asset. There are repeated instances of him serving as a safety feature because he is specifically unaffected by things that affect the rest of the crew. It is really more of a case that Starfleet needed tighter security protocols. Picard often gives commands to the computer, including security codes, verbally in front of other people. It tends to be variable from episode to episode as to what degree of biometric verification is required to do certain things. For example, in "11001001" activating and deactivating the self-destruct requires both Picard and Riker to scan their hand prints and give a verbal command.
      • I object to that characterization of Picard's decision not to use the anti-Borg virus. Picard was objecting to the concept of genocide, even against the Borg. He wasn't objecting to it because he liked Hugh, but the fact that Hugh was likable made him realize that the genocide was wrong, those aren't the same thing. Picard realized that there was basically some hope for drones to be rehabilitated into individuals, and that it was thus wrong to kill all of them off without giving them that chance. Which sort of contradicts his stance in First Contact, but people like that movie and they bitch about Picard's anti-genocide decision so no one really cares about the inconsistency.
  • Starfleet in general and ships named Enterprise in particular are exploration ships. This regularly brings them into contact with phenomena that can affect crew members mentally and emotionally - alien possession, drugs, mind altering plant spores, weird energy fields that affect brainwaves etc. Now if a human (or other basically biological humanoid) goes nuts there are a whole bunch of options to stop them harming themselves, other crew members or endangering the ship - you can phaser stun them, hit them with a hypo-spray and pump them full of a knock-out drug, release knock-out gas through the environmental systems or as a last resort physically overpower them and tie them up. If DATA goes crazy what can Picard do? Phasers on stun don't affect him and on higher settings they are likely to seriously damage or even kill him. Drugs and gases are useless and given Data's huge strength, durability and speed advantage over everyone else trying to zerg rush him physically means you will end up with a pile of injured, unconscious or dead security officers. So you're left with the choice of letting Data rampage through the ship or blasting him with phasers, probably killing him. If I was Data's commanding officer the first thing I'd get my engineering officer to do is devise a way of safely disabling Data in the event of him being compromised by nano-bots, a computer virus, invasive technology etc.
    • And who's to say they haven't? After the first five times something like this occurred, who's to say that Picard didn't meet with Geordi and Riker to come up with some sort of safeguard...

    Data vs. Troi in chess: The ultimate mismatch 

  • Troi has absolutely no business beating Data in chess.
    • Data had discovered just that morning that humans will sometimes intentionally lose or disadvantage themselves as a show of courtesy and figured he'd try it out.
    • Perhaps Data has an easy setting.
      • As Tasha discovered.
      • Thank you for that genuine laugh.
    • Maybe he saw her performance report and decided to give her a win out of pity?
  • It could also be that Troi is so bad at chess that Data can't comprehend or predict her playing. He could beat the pants off a competent player, but Troi got lucky.
    • Alternatively, Dr. Soong seems like just the kind of person who would program his sons to always lose to him at chess, and Troi happened upon the backdoor.
      • As a martial artist, I can attest that unskilled opponents are sometimes the toughest to beat. At a certain point in most activities, you begin to formulate plans and strategies based on a certain assumption of reaction (I do X, they do Y, I counter with Z), but someone less skilled doesn't react that way (You do X, they react with Bagel), and that can absolutely play havoc with a game plan (especially in a game like chess, where everything depends on each preceding move). Data may have based his chess strategies on the assumption of a more skilled player (Dr. Singh, Picard, etc.), and Troi's playstyle absolutely threw him (say, taking a suboptimal trade, instead of the projected ideal play).
  • This is a case of Technology Marches On. The episode in question was written well before Kasparov's famous defeat by Deep Blue and the acceptance that computers can be extremely good at chess. So the writers evidently went with the then-prevailing opinion that being good at chess requires far more than memorization and brute force: it requires human intuition as well, which Data lacks.
    • Definitely adds a level of oh-come-on to this whole situation, though, as now we can officially say that Troi has skills that, at least on some level, surpass those of the man whom many consider to be the greatest chess player of all time.
    • If they play regularly, or the crew is part of some future Elo-style rating system, he might be programmed to play at a similar level to whatever opponent he's facing.
  • Remember that this was also a Mythology Gag calling back to the second TOS pilot, where Kirk beat Spock in the same manner. Even knowing that, Data vs. Troi still seems like an odd matchup, because the TOS chess scene existed in part to establish Kirk as a man who can best a superior adversary through pure guile. Troi's shown that quality maybe once in the whole series. It probably would have made more sense for Riker, Picard, or even Guinan to be the one to beat Data, as they've all shown the ability to improvise winning strategies in the face of an overwhelming foe.
    • We never saw what happened directly before that scene. I have personally always held to the belief that Data had recently been studying bartending and was trying to show off his abilities, but people were skeptical about drinks prepared by someone who didn't eat. My personal theory is that a bored Troi had made Data a deal, "if I win, you make me a Samarian Sunset. If you win, I give you a lecture on psychology." Considering how much Data's ego was crushed in "Peak Performance" by being beaten by one of the galaxy's greatest strategists, I just can't believe that he would have been so happy after losing if he hadn't arranged some way that it benefited him.
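  • The rating-system idea floated above is easy to illustrate. Below is a minimal sketch (in Python, with hypothetical numbers; only the Elo formulas themselves are standard) showing why a deliberately throttled Data losing to Troi would be statistically unremarkable, while full-strength Data losing would be nearly impossible:

    ```python
    # Sketch of the Elo idea: if Data deliberately plays at his opponent's
    # level, his expected score is a coin flip; at full strength it isn't.
    # The ratings used here are made up for illustration.

    def expected_score(rating_a: float, rating_b: float) -> float:
        """Standard Elo expected score for player A against player B."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    def update_rating(rating: float, expected: float, actual: float,
                      k: float = 32.0) -> float:
        """Standard Elo update: nudge the rating toward the observed result."""
        return rating + k * (actual - expected)

    # Full-strength Data (hypothetically ~2900) vs. Troi (~1200):
    print(round(expected_score(2900, 1200), 4))  # 0.9999 -- a near-certain win

    # Data throttled to Troi's level: a genuine coin flip.
    print(expected_score(1200, 1200))            # 0.5

    # If throttled Data then loses, his "playing rating" drops a little,
    # keeping future games competitive:
    print(update_rating(1200, 0.5, 0.0))         # 1184.0
    ```

    Under this reading, a loss to Troi is just the matchmaking system working as designed.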

    Deja Q 
  • At the end of the episode when Q decides to give Data a gift (which turns out to be a few seconds of laughter), why does Data begin to tell Q that he doesn't want to be transformed into a human? Isn't that his greatest ambition?
    • If Data is the only Soong-type android in existence (which he had, at the time, every reason to believe was the case), he might think twice before deciding to deprive the universe of that uniqueness, whatever his personal feelings. Besides, I get the feeling that Data doesn't want to be human so much as to become human, if that makes sense. He wants to get there through growth and self-improvement, not a magic wand.
    • There was an earlier episode where Riker, with Q's powers, offered to make Data human, and he rejected it for exactly this reason. So Q knows that and gives him a few seconds of laughter instead.

    Data's Mother 
  • In "Inheritance," we learn that Dr. Soong made an android copy of his wife, Juliana Soong, who later left Noonien because he took her for granted. We learn this because Noonien implanted a chip in her head containing a holographic program. My question: How did Noonien implant the chip, complete with the information that she had left, if she was leaving?
    • Remote uplink?
    • Like this:
    Juliana: I'm leaving you, you neglectful bastard.
    Noonien: Command Protocol R U R One Nine Two Zero.
    Juliana: *whir* Debug mode engage.
    Noonien: Record the following to removable storage. Hey Data, I built you a mom but she left me. Rough old world, right? End recording.
    Juliana: Recording complete.
    Noonien: Engage breakup sex protocol.
    • I always just assumed that the chip containing the holographic program was designed to be privy to everything the Juliana android knows, and thus adapt its answers to people's questions appropriately.
  • Also on the subject of "Inheritance": does it seem relevant to nobody that as an android Juliana is functionally immortal, while she is limited to a normal lifespan if she continues on as a human (says holo-Soong, "I programmed her to terminate after a long life.")? Isn't Data basically giving her a death sentence (albeit a much deferred one)?
    • ... No? Death sentences and mortality don't work like that. By that logic Data is giving everyone else he doesn't build an ageless android body for a death sentence.
      • I don't think I follow. This is a person who is literally potentially deathless, as it stands, provided a condition of her programming is altered. It's within Data's power to change that. That is rather different than "I could potentially stick anyone's brain in a robot," even if that were even true.
      • This is basically a question of what Data's personal ethical standards are: if he can make someone immortal, is he morally obliged to do so? Given how much Data values the qualities of humanity, including mortality, it's likely that he would actually consider mortality "better" than immortality. (He basically ends up saying as much in Star Trek: Picard.)
      • The question isn't whether he's morally obligated to make someone immortal, but whether he's morally obligated to offer someone immortality. Or perhaps more accurately, to make someone aware of the fact that they're no more or less mortal than he is already.
      • And incidentally, now I'm imagining a version of Data as an omnicidal killer who's devoted to giving the "gift of death" to all living things.
    • To offer her immortality would have been to reveal her android nature to herself (remember, she's consciously unaware of being an android), and holo-Soong asked that Data never reveal this to her, and allow her to live out her pre-programmed limited life believing she was a normal human. Learning after perceived decades of life and experience that you're actually an immortal artificial being? Hell on the ol' psyche, I tell ya what.
      • Whether to reveal her android nature to her is what they're in fact debating at this point in the narrative, and Data decides that ignorance is bliss.
    • At this point it is also worth noting that if we take the android discrimination of Star Trek: Picard into account; Data absolutely made the right decision.
      • Shielding her from discrimination was not Data's motivation exactly, so that's a decidedly post hoc perspective. Conversely, maybe the presence of a known android living benignly as a Federation scientist in the wake of the Mars attack might have helped blunt the anti-synthetic sentiment.

     Data: Dr. Graves, get your own android body! 
  • So, Dr. Ira Graves is dying, so he decides to transfer his consciousness to Data to continue living, and the rest of the episode is spent figuring this out and then trying to convince him to give up Data's body voluntarily, and at the end he agrees and transfers his consciousness into nothing more than data files. Why does it never occur to anyone, especially Graves, that since he's privy to all of Soong's work and he now has Data's knowledge on the subject, he could just build an android copy of his human body (or another design) for his own use? Sure, there's a chance he'd end up with neural net failure like Lal, but at this point nobody is aware of that possibility yet. What makes this worse is that Graves actually suggests to another character later in the episode that he could build an android body for them as well.
    • What he was doing was considered to be totally against the Federation's No Transhumanism Allowed ideology. Much as with the case of their militant opposition to genetic augmentation, Brain Uploading into superior android bodies is simply a no-no, and the viewer is meant to recognize that Graves was wrong for trying to cheat death, even though Cessation of Existence is heavily implied to be the normal consequence of death in the largely atheistic Trek universe. The Vulcans have demonstrated that they can reliably cheat death by transferring their minds into katric arks or other bodies thanks to their mind meld powers. But even they do not do this with any regularity.
    • Yes, but my issue is that Graves simply making a body of his own never even occurs to anybody, particularly Graves himself. The idea is not even mentioned and then turned down because reasons, which is particularly odd in Graves case because he both mentioned building a body for his wife and then transferring her mind to it and obviously has nothing against doing so himself, but it still doesn't occur to him to build a body of his own.
    • He may not have actually possessed the knowledge to do so in the near-term. He was privy to Soong's work as it had been commonly-known decades earlier, before Lore and Data were built. Not even Data himself was entirely successful at building another android, and Data lives inside that body! Given his ego, it is likely that Graves was confident that he could reverse-engineer Soong's work now that he was in possession of it. But he would need some time to do the research and development to actually get it done, otherwise he would have already done it before he met Data. The issue boiled down to just how many years he would need to build another working android. Picard was unlikely to allow Graves to spend an unknown span of time possessing Data's body while he worked on making another.

    Do Data's systems have any protection at all? 
  • Off the top of my head, Data has been hacked by Dr Graves, hacked by the aliens in Power Play, infected by the Psi 2000 virus, infected by an Iconian virus, infected by a virus in Masks, corrupted by a signal sent by Lore and the Borg and had his memory wiped in Conundrum. So, does Data: 1) Have no firewalls or anti-virus software at all 2) His enemies are just that good or 3) Has never bothered to update whatever Soong originally installed him with and is now hopelessly obsolete? And keep in mind that for all intents and purposes Soong-type androids are the most advanced computers in the entire Trekverse so it isn't as if Data himself is antiquated.
    • Honestly, I think the answer is number two. Number three simply does not make sense to me as it should be trivial for Geordi to install Norton 2369 on Data's systems, and whilst I can see number one being true given the general incompetence surrounding computer security we have seen in Star Trek, I think we are meant to believe that Data's enemies are just that good in a Worf Effect kind of way (oh no! this guy is so smart he can hack our supremely advanced android!). Dr Graves for example being a cyberneticist similar to Dr Soong probably had some idea of Data's code, and the Iconians are still more advanced than the Federation despite being long extinct. As for the Psi 2000 virus, we have to take that with a pinch of salt as we are dealing with Early-Installment Weirdness Data who can supposedly bleed and get drunk.
      • I take huge issue with the claim that updating Data's security software would be a trivial task. Data is an extremely sophisticated black box. His creator is dead and inconsiderately did not leave behind a schematic of his positronic net. Though it's questionable whether such a schematic could even exist without becoming rapidly outdated as Data is constantly evolving. He's not even based on the same technology as Federation computers, which are duotronic/isolinear/bio-neural, not positronic. (Before anyone says, I am aware that "positronic" is an homage to Asimov's robots, but Asimov himself used that word specifically to make his robot brains distinct from conventional computer technology.) Data no doubt has some systems which are more easily reverse-engineered, and possibly even based on off-the-shelf technology that was available to Soong, but the really important part of Data and his brothers are their brains, the secret of which only Soong knew at the time of TNG. In brief, I doubt "positronic brain" is going to be on the OS compatibility list for Norton anytime soon.

Holodecks

    Weaponized recreation 
  • In "The Big Goodbye", Wesley comes up with a solution to getting our people out of the holodeck, "but if it doesn't work, the program could abort and everyone inside would vanish." Real people included. Jeez, Louise. It's bad enough when the holodeck's safety routines malfunction, as they so frequently do, but a badly-aborted holodeck program could cause real people to discorporate? One wonders why people don't just play computer games or fight in anbo-jytsu rings for entertainment, or why they don't lure their enemies into the holodecks so they can purposely badly abort a program.
    • This falls under Early-Installment Weirdness: the holodeck does not work that way, as seen in the tech manual released later.
    • SF Debris actually made this a plot point in his Unity Saga. Holodecks have to be able to clean up all the shed hair, sweat, blood and any other organic matter left behind when the program ends. He rationalizes it by explaining that it's standard holodeck safety procedure to summon the arch, stand under it and then deactivate the program. It's not that the holodeck is really that unsafe, it's just that there's always a tiny chance if all the redundancies happen to fail and you turn off the holodeck without wearing a combadge (that's my WMG insertion) or standing under the arch there's a risk you'll be "cleaned up" with the rest of the organic matter. The holodeck designers aren't incompetent, the Enterprise crew just never read the manual.
      • I'm sorry, but no. If the holodeck can kill anyone who isn't wearing a combadge or standing under the arch when a program is deactivated, then the holodeck designers are either incompetent (if they failed to foresee this humongous flaw) or sadistic murderers (if they saw it, but didn't care). The scenario you describe would be like if you turned off your Xbox while it was in the process of saving, and as a result your arm was blown off. And no, the fact that the holodeck has "redundancies" and "safeties" doesn't justify it. Not after the many, many times we've seen those safeties fail catastrophically.
      • I think a seatbelt would be a more apt metaphor. It's the last line of defense for when things go catastrophically wrong. You might be able to go without it a thousand times and be fine, but you're still safer using it than not.

     I'd Like to Order One Invincible Ally, to Go 
  • In the episode "Elementary My Dear Data", Geordi foolishly told the holodeck to "Create an adversary capable of defeating Data." And it did. So um, why didn't the Federation say "Create an adversary capable of defeating our current enemy", since the holodeck must have that kind of magic power?
    • Eventually there was the Emergency Command Hologram. He was pretty cool.
    • Well, yeah...but it only has it within the confines of the Enterprise computer system, only if the contest in question is one of intelligence or reasoning ability, and that only in reference to information already stored in the databanks of same...all of which severely limit its utility against the Random Monster of the Week.
      • Yeah, the computer already knew everything about Data, allowing it to figure out exactly what an opponent would need to defeat him. How can it figure out how to oppose a Negative Space Wedgie whose readings are Off The Scale?
      • Still doesn't fly. If the computer created Moriarty, then it knows everything about Moriarty. Therefore it should logically know exactly what's necessary to defeat him.
      • Go look up a little song about a woman who swallows a fly and then comes up with a brilliant plan similar to that one.
    • More to the point, this shows the computer can create sentient life if you just ask it.
      • In the episode "Emergence", the computer/holodeck creates (apparently) sentient life and nobody even asked it to.
      • Everyone seems to assume that the computer's Moriarty-simulacrum was actually sentient, but nothing (at least in the first of the two episodes dealing with Holo!Moriarty) seems to make that a necessary conclusion; a computer with such demonstrated natural-language processing facility as that one could certainly be excused for inferring that it had been asked not for a simple and ordinary holodeck challenge game, but rather something with a bit more meta-level play — after all, the request was for an adversary capable of defeating not Sherlock Holmes, Data's character in the holodeck, but Data himself. And for a computer which can directly do as many things in the real world as the Enterprise-D's can, we've seen plenty of times that there aren't any particular security protocols or sanity checks against, say, making a holodeck detective game more interesting by giving the villain character full knowledge of the true nature of his situation, and the ability to understand and directly affect ship systems.
      • Of course, we're also talking about a computer which is shown in 'The Game' to be trivially capable of simulating a working human brain, so maybe assuming sentience on the part of a holodeck character isn't such a stretch...
      • Right, and let's not forget how in "Booby Trap" and "Galaxy's Child", Geordi used the ship's holodeck to simulate a renowned Starfleet engineer Dr. Leah Brahms, which he used to brainstorm engineering problems (perhaps among other things). While the computer greatly exaggerated her sensual nature as per Geordi's specific request, its simulation of her intellectual capacity was apparently so spot-on that Geordi and the Dr. Brahms simulation actually independently reached the same solution to a particular engineering problem as Dr. Brahms had reached in her own private research back on Earth. That the computer can simulate an engineer to the point of solving engineering problems speaks volumes to the computer's capacity to mimic human intelligence. Of course, when he does meet the real Dr. Brahms, the episode turns into a bit of an absurdity, as she is pretty much a complete bitch to the point of criticizing Geordi about every modification he'd made to the ship, including the one that she had already been planning to implement. She was mad at Geordi for coming up with the same solution as she had.
      • The computer actually warned Geordi that any "personality" given to the Dr. Brahms simulation would be based solely on guessing and not an accurate representation of the woman's actual personality. The computer actually did get a lot of things right about her, just not in the particular order that would make the simulation an exact duplicate of her. Furthermore, since it was created from the get-go to assist Geordi in his engine simulations, it can be assumed that the computer tried to make her as helpful as possible, while the real Dr. Brahms simply doesn't have such an accommodating personality.
      • Well... who knows, she might have been more accommodating in an actual crisis situation; as long as there's no outside problem, she has time to be annoyed at what she perceived as a problem.
      • This was made all the more farcical when just a few episodes later in the same season they had a story about a Starfleet scientist who wanted to disassemble and study Data in order to figure out how to make an artificial intelligence, with everyone apparently forgetting that the Enterprise computer seems to be able to create an artificial intelligence on demand.
      • Which makes it all the more strange as to why Data is "unique". Sentient holograms were a staple of Voyager and would theoretically be a great deal more versatile and one would assume the hard part of an android like Data was the sentience - not the robotic stuff...
      • Data is unique because he's capable of the same degree of sentience in a much smaller package. Moriarty may be a sentient holographic life form, but he needs an entire holodeck matrix (possibly an entire ship's computer) to exist. Data is all that hardware compressed into a human-sized package. It's like the difference between a refrigerator-sized computer from the 70s and a modern laptop.
      • Not really; in a later episode (yes, TNG actually returned to a previous episode's hanging plot line and resolved it, try not to faint) it develops that a briefcase-sized device can contain enough computing power and memory not only to run the programs for Moriarty and his newly created love interest, but also to simulate an entire galaxy for them to explore, and enough battery power to last at least as long as the remainder of the characters' natural lifetimes.
    • The holodeck doesn't have that kind of power. Data's opponent was created as an intellectual adversary. When Geordi told the computer to create an opponent capable of "defeating" Data, he really meant for the computer to create an opponent capable of outsmarting Data.
      • That doesn't make any sense. It wasn't sentient because it was only smart? What I found quite interesting was that this Moriarty seemed much more human than Data. Data would probably fail a Turing Test.
      • But non-sentient computers can pass a Turing Test, so what does that prove? Sentient or not, Moriarty was designed to perfectly mimic a human being, which Data was purposely not designed to do.
      • No, they can't. The whole point of the Turing Test is that if the machine passes, you void the right to call it non-sentient because passing the Turing Test requires the machine to demonstrate human-like intelligence in a general form. That the computer can simulate convincing characters on the holodeck should properly be interpreted as meaning that each and every one of them, and the main computer, are reasoning beings that can be shut off on the whim of uncaring humans.
      • No, you misunderstand what the Turing Test is. If there were no computers today that could pass a Turing Test then it would be a thought experiment instead of an actual test. A better way to address this is by framing it with the "Chinese room" thought experiment: does a holodeck program that perfectly mimics the nuances of a face-to-face conversation with a human grant the holodeck character understanding or is it just a very good simulation?
      • There are very definitely no computers anywhere in the world as of 2012 that can pass a Turing test, or even come close. (Besides, how is it necessary that it has to be possible to pass before something can be considered a "test"?) Meanwhile, the Chinese Room thought experiment is considered logically incoherent by most philosophers (if you're not going to explain how the Room works, the experiment is worthless, among various other problems). If a machine passes a Turing Test, it is thinking. This is what the test is for.
      • Not necessarily. If a machine passes a Turing Test, it could just be doing a great job of appearing to be sentient.
      • As is well-said lower down (let's add it here too), passing the Turing Test means all evidence is in favor of sentience. Sure, it might later turn out to be wrong, but the person standing next to you might be a clockwork ninja with a rubber skin, or road signs might all be put up by practical jokers (silly rabbit, Europe isn't a real place!). The reasonable assumption is that other people aren't clockwork and that public information is accurate, though, and therefore basic courtesy is to treat the machine that responds like a thinking creature as though it is a thinking creature.
      • I thought that was an obvious bit of horror actually, you'll notice that the Federation is actually quite Fantastically Racist against any and all artificial life. Oh, not silicon based life that "evolves" naturally, those guys are perfectly fine. But any created form of life, well, it took Picard talking to Guinan to realize that their plans for Data in "A Measure of a Man" amounted to mass slavery. What scares me the most about that episode is that the JAG, in a court of law, says that the real question is whether Data has a soul. I thought the Federation was secular? It doesn't help that every interaction The Doctor (not that one) had with Starfleet in general in Voyager involved him having to prove, and not easily, that he was a sentient being with rights. And they didn't even rule that he was! The amount of times I've heard "He's just a hologram" gives me a chill. The main Computer of a ship, or most Holograms above a certain level of complexity would easily pass a Turing Test. It's almost as if the Federation tried this, and realizing that all their technology was proven sentient by this test, they designed a harder test. They are very deathist as well, considering the ease with which most of their technology would enable biological immortality. They seem to de-age people on a regular basis.
      • Moving the Goalposts must be hugely tempting to any society capable of creating life-like artificial people. Much of the value of sentience is in its mysterious quality, in our inability to recreate it or control it. Once you build a machine that can pass a Turing test, you would realize just how many tricks you put into it, how much cheating you had to do. If you did manage to pass a Turing test, it would probably seem like you'd done it by tricking people rather than making a really sentient machine, since all the mystery of sentience would be absent. To the person who understands the algorithm which produces the doctor's bedside manner, becoming emotionally attached to the doctor would seem foolish, like trying to give human rights to a fictional character. The doctor isn't a real person; he's a pretend person. That Moriarty was created as a real person was presented as an incomprehensible fluke, something which could not be recreated no matter how many times you ask the computer for another one.
    • You're all forgetting an important question: Why wasn't this request denied by the computer's mortality failsafe? It shouldn't be able to create anything remotely dangerous at all!
      • The computer interpreted Geordi's request to override all failsafes, including that one. You can construct a reasonably logical backstory for this - Geordi got sick of having the computer cancel requests not specifically stated to override (like what happened to O'Brien in Emissary) and told the computer to interpret all his requests to override safety protocols.
    • This is a minor point, but why did Moriarty insist on calling the Enterprise computer Mr. Computer? It was done insistently enough that there was probably a good reason, but I've never been able to figure that reason out. I doubt that Majel Barrett-Roddenberry's voice could have been mistaken for masculine. Moriarty seemed to be aware he was on a ship of some kind, and probably would have been aware of the tradition of applying feminine pronouns to such vessels, so why Mr. Computer?
      • Before the advent of electronics, "Computer" was a job description, not a machine. Either he's unaware that he's not speaking to an actual person, or it's just part of his programming.
      • An in-universe Woolseyism? In the 24th century "Mr" has become gender-neutral ("Mr. Saavik", etc.), so Moriarty is using correct English as spoken by the holodeck users.
      • It seems likely to me that he doesn't register the fabricated voice as being gendered at all, and goes with "Mr." by default.
      • Perhaps he attaches more importance to the Enterprise's non-humanity than to its speaking voice, and therefore refers to it by what (in Victorian English terms) would be the 'highest status' gender honorific. Attaching an honorific at all shows that Moriarty, at least, is not falling into the trap his creators have about dehumanizing an artificial being.

    Holographic Water Feels Wet? 
  • In one episode, Wesley Crusher gets soaked after falling into a swamp in the holodeck. If all the simulated matter in a holodeck can't exist outside it, then why is Wesley still dripping wet when he comes out?
    • It was replicated water. Forcefields and holograms only work for simpler objects and texture.
      • For the same reason, meals eaten on the holodeck consist of replicated food. If they were simulated matter then the food eaten would disappear from the person's stomach when he left the holodeck, which would likely be painful or even dangerous.
      • I don't think that is ever actually established. If that were the case, why doesn't the Voyager crew have all their meals on the Holodeck instead of the mess hall?
      • It's still the same food you'd replicate in the mess hall. Beyond that, there are finite holodecks and many crew members, up to you if you want to spend your allotted time eating lunch instead of being Captain Awesome: Wizard-Pirate of Dinosaur Planet.

     Holodeck!Stephen Hawking being paralyzed. 
  • Why was the Stephen Hawking in Data's poker match still paralyzed? I mean, I know he was played by the real Stephen Hawking, but that doesn't explain it in universe. Steve and the other geniuses seemed to be self-aware, so at some point wouldn't Mr. Hawking have said "Hey, thanks for including me, considering me so highly even after centuries of other super smart people, but could I please have the use of my body back?"
    • You'll have to forgive me, but this is one of the oddest complaints I have ever heard. This is a program that Data designed. Hawking only has self-awareness within its parameters, and presumably would never even think of such a thing, since it does not involve playing poker.
      • It isn't really a complaint, just...well, a headscratcher. I mean, the way they reference the apple that fell on Newton's head makes me think they tell stories and talk to each other like real people, not simple drones.
      • Yes, that's what they're programmed to do. Holodeck characters are designed to act like they're self-aware. Only on very rare occasions does that mean they are self-aware.
      • Well, think of it this way. On Futurama, one of the commentaries describes them debating whether or not Hawking's head should appear as it does today, or as it might if he were cured. They decided that the heads all have to appear as the person was when they were most famous (whether or not this makes a ton of sense). The same logic applies to TNG, even when you think of it in in-universe terms (this is a presentation of Hawking, not the man himself, and why would Data think of not interacting with Hawking as he was most famous?)
    • As it is, the episode's presentation of Professor Hawking is still not consistent with how Hawking is in real life. In real life, it takes him an extremely long time to write (and therefore have his computerized voice say) even a single sentence. From That Other Wiki:
      In Hawking's many media appearances, he appears to speak fluently through his synthesizer, but in reality, it is a tedious drawn-out process. Hawking's setup uses a predictive text entry system, which requires only the first few characters in order to auto-complete the word, but as he is only able to use his cheek for data entry, constructing complete sentences takes time. His speeches are prepared in advance, but having a live conversation with him provides insight as to the complexity and work involved. During a TED Conference talk, it took him seven minutes to answer a question.
    • But obviously no one would want to watch a TNG episode in which there's a seven-minute pause between Hawking's poker quips; that'd be your whole episode right there. So, some concessions were made to make the holographic Hawking's condition "better" than in real life.
      • Makes perfect sense: If you were making a computer simulation of Stephen Hawking, you'd probably make him, well, look like Stephen Hawking. You'd probably also take advantage of the fact it's a simulation to speed up his speech, probably by having the computer make the holo-synthesizer speak directly without going through holo-Hawking at all.
      • This is also indicated in the DS9 episode "Badda-Bing, Badda-Bang", where an early 1960s Las Vegas lounge that is otherwise incredibly detailed and period-specific is explicitly described as not discriminating based on skin color (despite the civil rights movement being in its early stages in the U.S. at the time) so that everyone can enjoy the program. They enjoy historical realism, but they aren't above making various concessions for the sake of user enjoyment.
      • Actually, it makes historical sense that Vic Fontaine's establishment wouldn't discriminate — the character was obviously based on the Frank Sinatra & Company "Rat Pack", who refused to perform in any establishment where their friend Sammy Davis Jr. was not welcome.
      • True, but rather a separate issue, since the show itself treats Vic's as a Politically Correct History representation. They probably should have set it a few years earlier.
    • By the same logic, you could also wonder why the computer portrays Einstein as an old man with Einstein Hair in this scene, even though holo-Einstein could just as well look like this. The answer is that the appearance of Einstein later in his life has become the Theme Park Version of him, the way most people imagine him when he gets mentioned somewhere. The same is true for Hawking and his wheelchair.
      • Exactly this: you'd have to specify the age and appearance of any person you added to your holodeck program, and a real-life historical figure would have multiple options on file. Data just asked for Stephen Hawking circa 1993.
    • The Doylist answer is that the showrunners were able to bring the actual Stephen Hawking to the set of TNG and chose to do so rather than hire an actor to portray a younger Hawking. The In-Universe explanation is that Data might have found this representation more interesting, possibly in part due to his condition. So Data programmed in minor enhancements (a faster speech processor and a remote-controllable 'hand' for playing poker) in order to allow Hawking to play with the other holodeck characters without fundamentally altering the character.

    You're through ducking me, Hill! 
  • In the Dixon Hill program featured in "Manhunt", the computer changes scenarios from a guy pointing a pistol at Picard to someone grabbing him by the lapels to rushing into his office with a Tommy gun! This despite Picard's insistence on less violence. Is the computer just screwing around with him, or what?
    • Rule of Funny is the real reason, but in-universe it's probably just that the computer isn't sentient: while it's capable of more or less accurately following most commands most of the time, it sometimes fails to grasp what its user ACTUALLY wants. It's a bit like how voice recognition software will frequently fail to understand you unless it's capable of learning your distinct speaking voice.
      • While this is unlikely to be the case, it's worth noting that a lot of the computer's less user-friendly moments make a lot more sense if you assume that the computer is just dicking with people. One of the best examples I can think of is "The Mind's Eye", in which Geordi tries to pass the time by playing a game with the computer, which seems to go out of its way to make the game un-winnable. It's probably just a result of a quirky user interface, but sometimes it can seem downright vindictive.
    • That is addressed in the episode: the computer tells him that, as he's in a Dixon Hill program, those were the only scenarios it could give him. Picard is doing the equivalent of complaining that he has to shoot the enemy to win Call of Duty.
    • Furthering this, perhaps the computer took him to the moment in time with the least violence in it... which just happened to be 10 seconds before the Tommy gun part.
    • Holoprograms are a lot like video games. Because the player has so much agency in shaping the narrative, a degree of heuristic improvisation is required to adapt to their actions. At the same time, there still needs to be some amount of Railroading to keep the story on track, because it's not possible for the designers of these programs to account for every crazy, game-breaking shenanigan the player might attempt. Bashir's Bond program on DS9 is a similar example to Dixon Hill, in that the story mandates that one of the two Love Interests die, but doesn't specify which one. And when Bashir decides to do a last-minute Face–Heel Turn as a stalling tactic, the villain is like "Oh, you're trying to be cheeky. Time for a Game Over." Even Lower Decks touches on this briefly in "Paradoxus", showing us what happens when there is no such railroading. When Boimler starts to go progressively more off-script in a program he wrote himself, the holodeck is forced to improvise a Plot Detour on the fly. The result is, as one would expect from a purely AI-generated script, nonsensical garbage. Professional holo authors probably put in explicit guardrails to keep things in line with their creative vision, which is why Picard was allowed to tune the violence of the scene, but not remove it entirely.

    The Enterprise flunked creative writing 
  • So in "Elementary, Dear Data", after Data easily solves a preexisting Sherlock Holmes mystery, Geordi tries to get the computer to make a new story In the Style of Sherlock Holmes that he wouldn't know the answer to, but it just ends up being made up of parts from existing stories, which Data is able to recognize. Geordi's next attempt to create a mystery that Data can't solve winds up creating a sapient intelligence, and yet the computer can't come up with an entirely original Sherlock Holmes mystery?
    • An original Sherlock Holmes mystery that can challenge Data.
    • The program is limited by the order it is given. Geordi wants a Sherlock Holmes mystery that can stump Data. The computer can create any scenario, obviously, but it can only go so far before it strays from the concept of a 'Sherlock Holmes' story. A normal person would probably be stumped by a coherent mishmash of four different stories, but Data has his android-level pattern recognition skills, which forces the computer to take a villain up to eleven in order to challenge him.
      • Neither of those two follows from how I remember the episode (though I might have to watch it again to be sure). The first program Geordi and Data attempt is one specific mystery, which Data solves easily because he knows the ending already. Then, after discussing it with Data and Pulaski, Geordi starts a second program which is supposed to be original (but isn't). He doesn't try to tailor this one specifically to beating Data, presumably because he thinks that just having an original mystery would be enough to force Data to play by the rules. The third attempt is the one where he wants the program specifically to beat Data, which results in Moriarty, but that's not the one I'm talking about. I'm talking about the second attempt, which Geordi requested to be original, but it wasn't, even though, again, the computer is powerful enough to create a sapient intelligence (and thus, theoretically, could recreate Arthur Conan Doyle to the best of its ability behind the scenes of the program and have him write an original story!)
    • Yes, the non-sentient computer, which can only use what's been fed into it by sentient creative people, isn't that great at being creative on its own. This really isn't all that shocking. Really, all it did when Geordi requested a villain who could challenge Data was put Moriarty in "free range" mode, like we see some other holograms do, such as the Leah Brahms one. He stopped just speaking lines the computer fed him and went "Oh, hey, look at that." It's quite possible he's not really sentient, and the computer simply stopped having him ignore things like Starfleet uniforms and the Arch, the way it has most of its holograms do.
    • This probably isn't the case (another troper could probably poke a dozen holes in this theory with a little thought), but I've always wondered if maybe Moriarty was never sentient; the computer just wrote a story in which Moriarty becomes self-aware and takes over the ship. And later, when Barclay accidentally accessed the program, the computer quickly wrote a sequel. Starfleet computers, being slightly less secure than the average doggy door, don't seem to mind putting ships in danger for no good reason, so messing with critical ship systems for the sake of a story doesn't seem all that out of character.
    • This was originally my question, but after thinking about it some more, I figure the above posters have a point. Even if it's capable of creating sapience, the computer isn't sapient itself; it just does what it's told to do, and likely does so in the easiest/most efficient way possible. Geordi (well, actually, looking at the episode again, it was Data who did the programming the second time around, which might explain the problem in itself) only asked for a "Sherlock Holmes-type problem" not written by Doyle. He didn't go into any more detail than that. Putting the maximum amount of creativity in would go beyond the scope of the program; something like creating a sapient holo-Doyle to write the story as he would have if he'd written another one would be right out (especially since, as we see, creating Moriarty caused a very noticeable power surge in the holodeck). ...Also, use of a holo-Doyle might just violate the stipulation that Doyle not have written the story.

    Holographic Tech in Devil's Due 
  • So that revolutionary holotechnology aboard Ardra's ship, which can project holograms from thousands of miles away and can disguise you as a person of a completely different race and size... none of that was deemed worth keeping after you took control of her ship? This one ship renders redundant both the entire Moriarty plot and the Voyager plots about the EMH being confined to sickbay. Heck, I would argue it renders the mobile emitter redundant, and that is 29th-century tech. And as for the shape-shifting ability, imagine what they could have done with that on multiple occasions.
    • The suggestion seems to be that while it was a marvel of bootleg engineering, the ship was Awesome, but Impractical. They located the ship because every time Ardra used one of her tricks, the single-person vessel drew enough power to light up like a beacon through its cloaking field (the same field that could hide the Enterprise, a ship with the population and passive power draw of a small city). Impressive, but not a technology that would scale in a practical way.
