Not quite. "Twice as hot" means "twice the energy," which is measured on the Kelvin scale. So temperature ratios taken on the Kelvin scale are perfectly acceptable. Furthermore, it can never be zero kelvin in our universe, so the question itself is unfounded. But good try.
Strictly speaking, temperature is defined as the derivative of energy with respect to entropy. In normal conditions the two are pretty much equivalent, but it's possible to get systems with infinite temperature (when entropy is at a maximum) and negative temperature (when you add so much energy that the entropy starts going down).
Alternatively, the numerical value of how cold something is can be defined as how strongly the human nerve endings which sense cold are activated by the outside temperature. This would depend on a lot of factors, varying between individuals, their core body temperatures at the time, the effective thermal conductivity of the heat sink, and the presence of radiation, and it would lose meaning once pain receptors are activated instead. Assuming the only variable which changes is the temperature of the air, for most people "twice as cold" would probably be somewhere between -5 and -20 on the Celsius scale. On the Fahrenheit scale, 0 degrees couldn't be doubled in coldness without passing into the pain range for most people. On the Kelvin scale, it's impossible, since it's not possible to achieve zero kelvin within any finite timespan.
I think everyone above is missing the point. Twice as cold as what? 2 x 0 = 0 so without more information to work with (for instance, what was the temperature 1 hour before?) "twice as cold" as 0 degrees means in an hour it will still be 0 degrees.
Hate to say, but the point misser would be thinking that 2 x 0 = 0 would help with 'twice as hot'. 0 degrees is an arbitrary number; actual heat would be as described above. After all, what's twice as hot as -40 degrees? Or twice as cold as 4 degrees?
Headaches such as these are the reason why scientists and mathematicians use the Kelvin scale instead of the Celsius scale.
"Twice as cold as what?" is indeed the key question. "Coldness" lacks a formal definition, but if it had one, it would be, "How much colder is this than some reference point?" So "zero degrees of coldness" would then mean "no colder than the reference", and so "twice as cold" is still "zero degrees". Such a system has never, as far as I know, been established, because it's hard to imagine how it could be practically useful. (The best I can imagine is a reference point of human core body temperature, which might be useful for describing how cold a person "feels" and how their body is affected.)
You could just convert to a different scale, divide by half, and convert back. 0 degrees C is 57.6 degrees F. In an hour you'll have 28.9 F or -1.7(repeating) degrees C. I might be off with the numbers, but it still works.
First off, no. 0°C is 32°F. It was specifically chosen to put 0°C at water's freezing point and 100°C at its boiling point. F = 1.8C + 32. Second off, dividing by half would be doubling.
Not to mention it doesn't work with Fahrenheit. When working out things like that, the lowest value possible always has to be zero, so you would have to adjust the temperatures accordingly, or you would convert to one that already does (Kelvin). It doesn't work if you do it in Fahrenheit.
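To make the scale-dependence concrete, here is a small Python sketch (the function names are mine) that halves 0°C on the Kelvin scale, the only scale of the three with a physical zero, and also shows the correct Celsius-to-Fahrenheit conversion:

```python
# Temperature conversions; only Kelvin has a physical zero, so only
# there does "half the temperature" have an absolute meaning.

def c_to_k(c):
    return c + 273.15

def k_to_c(k):
    return k - 273.15

def c_to_f(c):
    return 1.8 * c + 32   # 0 C really is 32 F, not 57.6

half_k = c_to_k(0.0) / 2          # halve 0 C on the absolute scale
print(k_to_c(half_k))             # about -136.575 C
print(c_to_f(0.0))                # 32.0
```

Halving in Celsius or Fahrenheit just gives the same number back (0/2 = 0), which is exactly why those scales can't answer the question.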
If I got 4 cards out of a standard deck, what is the chance that at least one of them would be a diamond?
That's fairly easy combinatorics, though you'll need to specify the number of jokers in the deck to get the exact probability. Just know that there are 52!/48! ways to draw four cards from a 52-card deck and 39!/35! ways to draw four non-diamonds.
You can do it without any combinatorics, or anything above middle-school probability (though the process of putting these steps together is beyond what would be expected at that age).
The probability of drawing at least one diamond is the complement of not drawing any diamonds: P(1+ diamonds) = 1 - P(no diamonds)
The probability of drawing a non-diamond (of which there are 39) from a 52-card deck is 39/52, or 3/4.
After that non-diamond is removed (leaving 51 cards, 38 of them non-diamonds), the probability of drawing another non-diamond is 38/51.
Likewise, for the third draw, you have a 37/50 chance of a non-diamond, and then a 36/49 chance for the fourth and final draw.
The probability of successive events is the product of the probabilities of each: P(no diamonds) = 39/52 * 38/51 * 37/50 * 36/49 = 0.3038 (to four decimal places).
And thus, finally, the probability of drawing at least one diamond in four cards is 1 - 0.3038 = 0.6962, or 69.6%.
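For anyone who wants to double-check, the whole calculation above fits in a few lines of Python (standard library only):

```python
from math import perm

# Exact: ordered 4-card draws with no diamonds, over all ordered draws
p_none = perm(39, 4) / perm(52, 4)          # 39P4 / 52P4

# Same number as a product of conditional probabilities:
p_none_step = (39/52) * (38/51) * (37/50) * (36/49)

p_at_least_one = 1 - p_none
print(round(p_none, 4), round(p_at_least_one, 4))  # 0.3038 0.6962
```

Both routes give identical results, since the permutation ratio is just the product of the four conditional fractions.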
Why does pi get a name, rather than 2*pi? You don't measure a circle by its diameter; you do it by its radius. Nobody would ever use half the circumference. If you want a circle constant, make it the circumference divided by the radius.
There are people that agree with you. Some of them came up with a name for 2pi. We call it tau.
Presumably to make the area equation, pi*r^2, easier to remember and use.
It's a holdover from the Greek era, when people did measure a circle by its diameter, not quite having gotten out of that habit from the pre-compass days. Even the familiar formula for the area of a circle, pi*r^2, was originally put down by Archimedes as "diameter times circumference over four."
There are some cases, especially in probability and trigonometry, where using pi makes much more sense than tau would. Some are demonstrated here.
Why do people in an Honors Advanced Precalc class still need to ask "When are we going to use this in life?" If you've opted to take the class, and gotten to this level, you should know that unless you get a career in pure math or teaching, you're not going to use it. Just deal with it.
I'm sympathetic to the idea that if a student is taking a course they have obviously already decided that it is worth it (either as a bullshit requirement for their degree/major or because they actually expect to apply it or just because they like it) and should STFU with their whining, but pre-calc is useful in a lot more fields than pure math or teaching. It's also used in science and engineering and programming. Math ISN'T just taught so other people can wank in it or teach the next generation of people how to wank in it, you know (that's literary analysis); it's actually, practically useful.
I would argue that literary analysis is useful beyond school or teaching. Being able to look beyond the surface of a presented situation and really dig deep for a deeper understanding is a skill that is useful throughout the rest of life. Lit analysis promotes analytical thought across most other disciplines in life. Oh, and troping is a form of literary analysis.
Literary analysis is purely subjective; for example, the author's plan or objective is irrelevant. So it is wanking. And as for the promotion of analytical thought: yeah, that's in there a bit, but why learn it from a flawed and subjective field instead of from the natural sciences, which have none of these flaws?
Math is used in everything. Well, almost everything. Economists use some very advanced math including fields like differential equations, probability and statistics, dynamical systems, and even some more pure stuff like linear algebra and analysis. Iannis Xenakis was known for using mathematical ideas (especially from group theory) in the composition of music. As a biology major, I use concepts from differential equations and statistics fairly frequently. I repeat: Math is used in everything.
Precalc isn't really a good example. Calculus and related concepts are found in all kinds of fields, and can't really be learnt without precalc; furthermore, concepts from precalc itself (like sets, vectors, limits) are likely to come up for nearly everyone once in a blue moon.
Ha, I tried this whine once and my instructor said, "You won't use it, not for your major... what was it again? Ah, yes. But it will teach you how to think better."
This is the best counterargument to "When will I ever use this?" that I've read:
"NEVER. You will never use this. People don't lift weights so they'll be prepared should, one day, somebody knock them over on the street and trap them under a barbell. You lift weights so that you can knock over a defensive lineman, or carry your groceries or lift your grandchildren without being sore the next day.
You do math exercises so that you can improve your ability to think logically, so that you can be a better lawyer, doctor, architect, prison warden or parent.
MATH IS MENTAL WEIGHT TRAINING.
It is a means to an end (for most people), not an end in itself."
Nice, but I prefer the joke about the pre-med student who asks his calculus professor why he has to learn this stuff. The professor responds, "To save lives." The student inquires how calculus could possibly save lives. The professor responds, "It keeps the ignoramuses out of medical school." (Note: please don't anyone ruin the joke by citing instances where calculus might actually prove relevant in medicine.)
It's interesting how mathematical concepts even pop up in (non-mathematical) philosophy. Baudrillard talks about "Mobius-spiraling negativity," Lacan makes (rather suspect) analogies to various topological spaces, and many cultural critics talk about vectors in a similar sense to how mathematicians do, just without the quantitative aspect.
Wouldn't it make more sense to just teach them applied mathematics so they know when to use it?
Teaching only applied mathematics leads to nonsense like PEMDAS. But to be fair, in school basically everything is applied mathematics.
Yes they can. That's the meaning of it! Okay, seriously: as noted above, studying Math teaches us critical thinking. That is, you learn enough Math and you realize that just because some authority says so doesn't mean it's right. You know you're doing right at Math when you don't just believe whatever your Math teacher tells you. Sometimes asking them for answers is not enough; you have to look for the answers yourself.
Why did it take me failing math-related classes three times in a row before my college instructors finally admitted that I must have a learning disability? I don't understand algebra or algorithms, and have always wanted a tutor, but the attitude from the college was mostly, "Oh, you'll do better next year".
Because schools don't want to spend money creating a tutoring program. Since colleges are a business, they have to turn a profit. And that often means shafting the people who really need help.
Actually, most colleges are non-profit. While they still need to pay for everything they provide, profit is not the goal.
Also, most colleges do, in fact, have tutoring programs. You just have to look for them.
While most colleges are non-profit, that doesn't mean they'll object if you want to keep sending them money by taking the same class multiple times. The real problem (from their perspective) is when your repeated failures start dragging down the overall GPA for a particular department or the university as a whole.
Sounds as though you incorrectly reasoned that your college math instructors are responsible for issuing psychiatric evaluations. Generally, you have to go to a specialist for that type of thing. Then they diagnose the disability, and it's your responsibility to inform the college's disability services office about it, and they in turn will inform your math instructors of which accommodations you require. (Usually, extra time on tests & exams).
You also assume that it is the college's responsibility to provide tutoring to students in need of it. While that service is offered at a lot of colleges and universities today (in the U.S., at least), not all colleges provide that for all students in all different subjects/classes (Even if they wanted to, certain courses/subjects are just too specialized to keep a legion of tutors on retainer for all courses).
Why are practically all theorems named after people who didn't invent them? The most egregious example is probably the Pythagorean Theorem; when the person who really thought of the Pythagorean Theorem (a student of Pythagoras) showed it to him, Pythagoras had him DROWNED. (Because it implied that non-integer numbers existed, which Pythagoras considered mathematical heresy.)
That's not quite right. The theorem had been in use as a rule of thumb in the West before Pythagoras, and no one's sure who ultimately proved it there, but it was a Pythagorean, if not Pythagoras himself. The person he's said to have drowned might well have been the one, but it wasn't immediately obvious from the theorem (which, keep in mind, had been in use) that there had to be irrational (not just non-integral, which Pythagoras was fine with) numbers. The proof most likely presented to him relies on the theorem, and seems fairly intuitive, but it's much easier to call a proof "intuitive" after you've read it.
Why don't people bother doing the friggin research before posting a 'just bugs me' entry? Seriously guys, it makes you look like idiots. Pretentious idiots.
"Euler's work touched upon so many fields that he is often the earliest written reference on a given matter... in an effort to avoid naming everything after Euler, discoveries and theorems are named after the 'first person after Euler to discover it.'" -The Other Wiki
Not to mention the Pythagorean Theorem was found by Arab scientists and Chinese scholars (independently) way before the Greeks did.
Wah? While it's true there's evidence Egyptians and Indians knew of it prior to Pythagoras's era, it was published in Greece around 400 BCE, and the Chinese proofs are near the same time as Pythagoras.
Architectural evidence, and texts on architecture, suggest it was found by everyone, including the Greeks, centuries or millennia before Pythagoras lived. The Pythagoreans' alleged accomplishment (which is largely thought to be made up as advertising hundreds of years later) is having proven it. Although "Arabs" weren't doing much in those days; while it's an "Arab" country today, pre-Islamic Iraq/Mesopotamia isn't usually considered Arab any more than, say, Archimedes is considered Italian, if Babylonians, Sumerians, etc. are what you mean.
Generally, the more modern you get, the better people tend to be about naming. This is especially true in regards to living people, as, for example, Green and Tao might get kind of pissed if the Green-Tao theorem wasn't named after them. However, there are some situations in which the person who first made a conjecture will have the theorem named after them - for example, Fermat's Last Theorem is still called that even though Wiles proved it, and the Poincare conjecture will probably keep that name instead of becoming Perelman's theorem. (Although Fermat claimed to have proved it, his proof was never found.) There are some unfortunate situations in which an important result was discovered independently in many different places. For example, there's the famous Cauchy-Schwarz-Bunyakovsky inequality, discovered over the course of a century by three mathematicians. In most of the world, it's called the Cauchy-Schwarz inequality, but in Russia, they still call it the Bunyakovsky inequality. There are other instances in which political barriers played a role. Sharkovsky's theorem wasn't known to much of the world until after the fall of the Soviet Union. Two American mathematicians, Li and Yorke, proved a less general result in the meantime, and while the use of the term "Sharkovsky's theorem" has spread, many still refer to the "Yorke-Li theorem." Generally, theorems are named after the discoverer. Just not necessarily the theorems you've heard of.
My math professor just calls it Cauchy's inequality. :P But yes, he does love to talk about this. "Okay, now we'll be learning about Stokes' theorem, which probably wasn't discovered by Stokes, and it's basically just another version of Green's theorem, which Green probably stole from some other guy..."
IIRC the (probably apocryphal) drowning story was for proving that the square root of two is irrational (can't be written in integer/integer form - e.g. 0.45 is rational because it can be written as 9/20). The proof goes as follows:
Imagine that the square root of two could be written as A/B, where A and B have no common factors.
Then 2 = A^2 / B^2. Rewrite this as A^2 = 2B^2.
Then A^2 must be divisible by 2, hence so is A. Let A = 2C, and rewrite the equality as (2C)^2 = 2B^2.
Expanding gives 4C^2 = 2B^2, so B^2 = 2C^2. But then B must be divisible by 2 too.
But this violates our premise, which is therefore disproved.
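As a purely illustrative aside, a finite brute-force search (the bound 500 is arbitrary, and of course a search can only illustrate, never prove) turns up no fraction A/B whose square is 2, consistent with the proof above:

```python
# Search for integers A, B with A^2 == 2 * B^2.  The proof above says
# none exist for B >= 1; since A would equal B*sqrt(2) < 2B, checking
# A up to 2B per B covers every candidate in range.
hits = [(a, b)
        for b in range(1, 500)
        for a in range(1, 2 * b)
        if a * a == 2 * b * b]
print(hits)  # [] -- no rational square root of 2 found
```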
Pythagoras was a bit mystical about integers, so this kinda rubbed him the wrong way.
Probably not. As mentioned above, the irrationality of the square root of two is what (allegedly) rubbed him the wrong way (although another story has Pythagoras finding it himself and counting it a massive breakthrough; knowledge of pre-Socratic philosophy is like that). The proof above, however, is an unknown Platonist's, from two hundred years later (referenced by Euclid and Aristotle, but with no extant primary source), and relies on number-theoretic concepts from that school. The Pythagoreans' proof was more likely a geometric proof similar to the one in The Other Wiki, which is why it relies on the Pythagorean theorem.
In one sense the above proof does rely on Pythagoras's theorem as, if you wanted to be stubborn about only rationals existing, you could take it as a proof that the square root of 2 doesn't exist. You'd then need to take the unit square and apply Pythagoras to the diagonal to prove that root 2 does exist.
There is actually a name for this phenomenon. It is called Stigler's Law of Eponymy, which states that no scientific discovery is named after the person who actually discovered it. Interestingly enough, Stigler's Law is itself an example of Stigler's Law.
My question: Why is it still called a theorem when it has been proven and used for many centuries? Why not call it "Pythagoras's Law"?
Because a "law" is actually a very weak principle, based solely on observation. A "theorem" is so far to the other end of the scale it's not even funny: what it states, however counterintuitively, is tautological once the definitions are properly understood. It could not be any other way. The fact that "law" sounds so strong in comparison is an accident of nomenclature, from the days when scientists still imagined they were seeking the "laws" of some divine force. Laymen, meanwhile, heard of the "theories" being used to describe the universe and co-opted the word for their own hypotheses, as opposed to its original (and current strict) meaning of an area of study; a "theorem" was originally a principle of a theory, and came over time to mean a principle of a mathematical theory. To call a theorem a "law" is an unspeakably profound insult, comparable to calling a novel an "anecdote."
The answer to this question is actually simple: division is defined as multiplication by the inverse element; that is, 3 / 2 = 3 * (2^-1). The inverse of any element X is in turn defined as the unique value X^-1 satisfying X * X^-1 = 1; that is, the product of an element and its inverse must be 1. Since no such number exists for 0, you can't divide by zero.
Let's assume that we could solve 1/0 = x. This means 0 must have an inverse element, so the equation we get is 1 * 0^-1 = x. Now we can multiply both sides by 0 and the equation should still hold: 1 * 0^-1 * 0 = x * 0. We know that x * 0 is 0, but 1 * (0^-1 * 0) = 1 * (1) = 1, because we're multiplying 0 by its inverse. Thus we get the equation 0 = 1, which is clearly untrue. So if 0 had an inverse, 0 and 1 would be equal. Without an inverse, there is no division.
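Programming languages encode the same fact: zero has no multiplicative inverse, so division by zero is an error rather than a value. A minimal Python illustration:

```python
# x * 0 == 0 for every x, so no x can satisfy x * 0 == 1:
for x in (1, -3.5, 10**9):
    assert x * 0 == 0

# Accordingly, division by zero is refused outright:
try:
    1 / 0
except ZeroDivisionError as error:
    print("refused:", error)  # refused: division by zero
```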
My high school math teacher gave me a philosophical approach to it: can you divide something into zero pieces? I mean, sure, you could not chop it up, but you wouldn't really be dividing it, would you?
Why are algebra, algorithms, or anything else involving advanced mathematics considered required classes if the career I want is to become a cartoonist?
Short answer: math is useful for everyone; see many of the other answers on the page. Long answers: a) it's not a good idea to rule out all careers but one at an early age; b) most cartoonists have a day job too; c) you'll need a lot of math if you run your own business - and that includes working freelance, which almost all cartoonists do these days; d) there's more to life than work - knowing math will help you understand the scientific advances, and knowing statistics will help you understand the political and economic debates and social problems, that will happen in your lifetime.
When my Algebra students ask "When are we ever gonna use this?" my answer is always "In Algebra 2".
As someone who went through Algebra 2 only a couple years ago, I have to advise you not to do that. That, to a high schooler, is akin to just saying "because I said so" and just adds to the belief that there is no practical need for it when, in reality, there truly is. In hindsight and/or from the teacher's perspective, it may be witty, but you really aren't helping those kids by saying it.
So the correct answer is "What do you want to do when you leave school/college/university?"
Student: "I want to be a ..."
Teacher: (irrespective of the student's answer) "Well, you'll need algebra for that."
No, the correct answer is for the teacher to know some goddamn practical applications of the subject s/he teaches and to tell the kids what they are, just like they would tell them any other piece of information in class. Equations are widely used to model everything from traveling times and speeds to shopping and budgets; it's not that hard to come up with a few examples. And if s/he really doesn't know, then let it be said "I am sorry, I don't know any applications of the subject I make my living teaching because I am an idiot. Please go ask some other math teacher."
The Mundane Utility that comes with Math is only realized if you THINK REAL HARD about it. When I was a kid, I used to be sent on errands to buy this and that from here and there. My mom would always give me an amount far greater than the value of what we were actually going to buy, even before I knew how much I was going to spend. Only when I actually bought the thing did I know that if you hand over $100.00 for something worth $49.99, you should get back $50.01. And if I so much as lost that single cent, Mum would know that something had Gone Horribly Wrong. So after I did just that a million times, only then did I realize that Math DOES have Mundane Utility.
Also, knowing geometric relationships is a good idea when drawing anything. I don't know what kind of artistic complexity or realism you're hoping to get into, but knowing proper perspective and scaling is really important for making things look right. People who are good at picturing things spatially usually honed this ability in math and science.
Although no one can deny that math is important, I personally think more weight should be given to language skills (and that's not just because I'm an English major). Think about it: Your language is the one discipline that you will be using every single chucklefucking day of your life. Even if you don't say a word out loud all day long, you're still thinking in English (or Japanese or Spanish or Urdu or whatever), and if you go the whole day without thinking, you're either comatose or dead. Or very, very good at Zen. And yet the world is full of people who can barely write their own language without screwing up every other word. Am I the only one who sees something wrong with this picture?
If you think they're screwing up, that's your problem, not theirs; they're speaking their dialect, and you're making the mistake of thinking they're using a completely different dialect with different rules. People know how to use their own language; they do it, as you said, every day of their lives.
And I am very bad with words, terminology especially. Yet I am a damn good mathematician and programmer. When I think of something math-y I tend to go into Buffy Speak mode (well, an obscene Russian variant of it; you Russkies know what I'm talking about), 'cause while I don't remember the word (or remember it incorrectly) I still have the idea behind that word.
Because they are both important, and depending on the type of person you are, you will often say one is more important than the other. This would be wrong, as they are both important. But Math is harder for many people to grasp and enjoy as recreation, so it's considered work instead. If our knowledge of language degrades, we're more likely to fight, lose our higher education, and fall as a culture. If we lose our knowledge of math, we may die as a planet, given our dependence on it to keep the world going and keep us fed.
You get taught all the language skills you need to not fuck up sentences by the time elementary school is over. The fact that people don't care is a different matter, not a reason to give language "more weight".
Geometry might help you if you are a cartoonist. Algebra and calculus are more general, useful things for everyone regardless of career. But this isn't about math in particular, is it? This is about the entire concept of required courses in general. In which case, are you sure you are going to be a cartoonist? And you will get to be that straight away for the rest of your life, without needing to do something else on the side or along the way or when you are down on your luck?
If you want to understand economics, then you need to study algebra. And if you're 100% sure your life's sole career will be professional cartoonist, then you need to study economics.
What does it matter? I’m stuck in an Algebra 2 class as well, along with nineteen other students, and I can assure everyone here that there isn’t a damn thing my classmates and I can do about it, whether it has a practical application in our lives or not. Besides that, why are we even complaining? It’s free knowledge. Yeah, we have to pass the class to graduate, but is it going to kill any of us? No. Who cares if we never use it? What matters is that we can use it.
Because school is not about learning. School is about signalling. By requiring that all students pass an algebra class, you essentially impose a minimum amount of intelligence required to graduate high school. This means that anybody with a high school diploma is above this minimum, thus making them more attractive employees relative to high school dropouts.
Alright, is infinity a number or not? If yes, say 1/infinity=x, then 1-x... If no, then how many numbers are there?
Way too many comments being wasted over this. No, infinity is not a number. It is a cardinality (or actually either two or countably infinite cardinalities, depending on convention), i.e., a property of a set that for finite sets is given as a counting number, and it's a symbol for absolute increase without bound. It's sometimes written as though it were a number in limits, sums, integrals, etc., but that doesn't make it a number, but rather means that you're taking a different type of limit, sum, integral, etc., with significantly different properties from those that approach a number. However, it's not a number, and you can't meaningfully use it in arithmetic operations, even though it's sometimes (sloppily) written in that way to signify a limit, as is dividing by zero. (No, whatever your calculus teacher told you, you really can't divide by zero, even zero by zero; you can sometimes take convergent limits of functions as they approach points where they would require division by zero, but that's not the same thing.)
There are lots of infinite numbers. Some of them are cardinalities. Others are not. I don't think any of them are in a field, which means that you can't divide by them and get math to work the way it should. There are times when it's more important to have infinity than to have a field, and a lot of the theorems you know don't apply. Also, "number" has never been formally defined, so arguing over whether or not infinites are technically "numbers" will get you nowhere.
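As a concrete aside, IEEE-754 floating point carries an inf symbol, and its arithmetic shows exactly the "significantly different properties" mentioned above, which is why it isn't a number in the ordinary sense:

```python
import math

inf = math.inf
print(inf + 1 == inf)         # True: absorbing, unlike any number
print(math.isnan(inf - inf))  # True: inf - inf is undefined (NaN)
print(math.isnan(0 * inf))    # True: 0 * inf is undefined too
```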
Why the hell do they call square roots of negative numbers imaginary and not just, not real, or fake, or something? And if those are imaginary, then what do you call the other non-real numbers?
I don't think the term was ever meant to be derogatory. I suppose Descartes came up with the term quite naturally. People knew there is no sqrt of -1 among the real numbers. But somebody probably said: let's imagine, for the sake of argument, that there *is* a number representing the sqrt of -1, obviously not on the real line but in some imaginary plane of which the real numbers are just a subset, and see where it gets us. Eventually, it got us quite far, most notably to the theory of electromagnetism, signal theory, and quantum theory. And the term stuck.
If sqrt(-1) = i, what is the symbolology (why did I lol?) for other numbers sqrt(-n), where n is any number in [0, ∞)? Sure, i might be the most important Square Root Of A Negative Number, because (among real square roots) 1 is the only number that's its own square root, but the other S.R.O.A.N.N.s must have some kinda use...
sqrt(0) is also itself.
Any other symbols would be redundant. The square root of -n is sqrt(n)* i. 2i is the square root of -4, for instance.
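Python's cmath module behaves exactly as described, returning sqrt(n)*i for sqrt(-n); a quick check:

```python
import cmath

print(cmath.sqrt(-4))   # 2j, i.e. sqrt(4) * i
print(cmath.sqrt(-9))   # 3j
# In general sqrt(-n) == sqrt(n) * i for n >= 0:
n = 7.0
assert abs(cmath.sqrt(-n) - cmath.sqrt(n) * 1j) < 1e-12
```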
The name "imaginary numbers" started off as a dismissive name used by critics, but like "Big Bang" it caught on and is still used even now it has become clear that they have practical uses.
In the 19th century, complex numbers were sometimes referred to as having a "possible part" and an "impossible part" rather than a "real" and "imaginary" part.
More than just utility, complex numbers have an entirely rigorous construction as points on the plane, with multiplication defined by (a,b)*(c,d) = (ac-bd, ad+bc). If we then define i = (0,1), we see that i^2 = -1. Everything about complex numbers is well-defined and properly constructed. By contrast, there is no x such that 0x = 1. Maybe in a limit something like this sometimes occurs, but it is easy to prove that in any ring (such as the real numbers) 0x = x0 = 0 for any x. So you would either have to cripple your arithmetic in some way (such as dropping distributivity of multiplication over addition), or you are stuck with the nonexistence of such a system.
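The pair construction just described can be sketched in a few lines (the helper name cmul is mine, not a standard API):

```python
# Complex numbers as ordered pairs (a, b) with the multiplication rule
# (a, b) * (c, d) = (a*c - b*d, a*d + b*c).  Nothing "imaginary" needed.

def cmul(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
print(cmul(i, i))  # (-1, 0): i squared is -1, by construction
```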
There's more than one, but (1+i)*sqrt(2)/2 comes to mind.
The easiest way is using Euler's formula: e^[(pi/2)*i] = cos(pi/2) + i*sin(pi/2) = i, so sqrt(i) = {e^[(pi/2)*i]}^(1/2) = e^[(pi/4)*i] = cos(pi/4) + i*sin(pi/4). (Sorry for all the badly written math.) The reason there are many answers is because of the nature of sin and cos (they oscillate).
That's right, although saying there are multiple answers is overcomplicating it; there are really only two answers, directly opposite one another in phase angle (i.e., one is the negative of the other), as with all square roots. The principal square root is the one with either a positive real part or no real part and a positive imaginary part, just like with real numbers. Also, keep in mind that sin(pi/4) = cos(pi/4) = sqrt(2)/2 ~= .707, so it's about .707 + .707i (or that times -1).
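For the record, the two answers given above, the explicit (1+i)*sqrt(2)/2 and Euler's formula, agree with each other and with a direct numerical square root; a quick Python check:

```python
import cmath, math

root = cmath.sqrt(1j)                    # principal square root of i
explicit = (1 + 1j) * math.sqrt(2) / 2   # the closed form given above
euler = cmath.exp(1j * math.pi / 4)      # cos(pi/4) + i*sin(pi/4)

assert abs(root - explicit) < 1e-12 and abs(root - euler) < 1e-12
print(root)  # about 0.707 + 0.707i
```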
How about .999...=1 ?
Consider x=0.999.... Then 10x-x=9.999...-0.999..., so 9x=9, so x=1. There's no rounding error, they're just two ways of writing the same number.
The way I like to explain this to my students is that the dissonance that is occurring is that they keep trying to visualize 0.999... as getting closer and closer to 1, as you add successive nines. The mind-blowing revelation is that the number isn't MOVING, because 9s are not being added. They are all already there. ALL OF THEM. An infinite number of them. If you start looking at this as a fixed number, rather than a moving number on a number line, then I think it's easier to accept the mathematical proofs that show 0.999... = 1. We just can't write down all the 9s, because it would take longer than high school. And, you know, all of the rest of human history...
The way it was explained to me that caused me to finally "get it" was: okay, so you assert that 0.999... is not equal to 1? Then there must be at least one number that comes between them, a number that is greater than 0.999... and less than 1. Tell me what that number is.
…Suppose such a number x existed, with 0.999… < x < 1. Then x would have to exceed 0.999…9 (n nines) for every n, and since those finite expansions differ from 1 by 1/10^n → 0 as n → ∞, passage to the limit gives x ≥ 1. So 1 ≤ x < 1, and in particular, 1 < 1, i.e., 1 ≠ 1 (⇒⇐). As this contradicts that 1 = 1, it must be the case that no such number exists. □
There are many, many proofs that any number infinitesimally close to another number IS that number. It might not make sense; but if it isn't true, then the age-old adage 'numbers don't lie' is dead wrong. If you deny that 0.999...=1 then you are stabbing calculus in the face and taking a whiz on it's grave.
Apparently, as much as Writers Cannot Do Math, it is also true that Mathematicians Cannot Use Grammar. It's should be its.
For instance:
1/3+1/3+1/3=1
1/3=0.333...
0.333...+0.333...+0.333...=0.999...
Therefore 0.999...=1
Just as a follow-up to that...
1/3+1/3=2/3
1/3=0.333...
2/3=0.666...
2/3+1/3=1
1=0.333...+0.666...=0.999...
And if you argue that 2/3 equals 0.667, it proves that there is no conservation of number values in mathematics.
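Both identities check out under exact rational arithmetic, which never rounds the way 0.667 does. A quick sketch with Python's `fractions` module (my choice of illustration):

```python
from fractions import Fraction

third = Fraction(1, 3)   # the exact value that 0.333... stands for

# 1/3 + 1/3 + 1/3 is exactly 1, no rounding involved:
print(third + third + third)                   # 1

# Likewise 1/3 + 2/3 = 1 exactly:
print(Fraction(1, 3) + Fraction(2, 3) == 1)    # True

# Whereas 0.667 is merely close to 2/3, not equal to it:
print(Fraction(2, 3) == Fraction(667, 1000))   # False
```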
Another thing to consider with .999... is that every integer is followed by a decimal point and an infinite number of zeros. 1 = 1.000... and so forth. If someone claims that .999... has a "last nine," they're also saying that 1.000... has a "last zero." Due to how math works, this means you MUST follow that last zero with a nonzero digit — you can't "end" it because there is no "end" to infinity, and if you cap it off with another zero, then the one before it isn't the last zero anymore, is it? And once you cap it off with a nonzero digit, it means that 1.000... is now greater than 1, even if by the tiniest fraction. When we say it goes on for infinity, we mean it — there is no "last nine" in .999... without turning the thing into a different number.
I don't know much about math, but let me tell you my own theory, as one who can, as C.S. Lewis would put it, "look along the beam" instead of "looking at" it. (Uh, strike that, reverse it? You know what, drop the metaphor altogether.) Anyway, here's my theory: the numbers themselves really, truly, DON'T lie. It's just our numeric representations of them, these symbols on paper: THEY lie. The Arabic numeral system that we use (0-9) is well-constructed, but like all human-made systems it is imperfect. It doesn't always communicate with perfect accuracy the numeric truths it's designed to transcribe, because there are limitations in the way it can work. Chinks. And that leaves us with APPARENT discrepancies such as, for instance, zero-point-repetend-nine being the same as one.
My preferred way of "getting it" is helped out by the limitations of dividing things with a base-ten calculator. Is .333… equal to one-third, or somehow "less than" it? If you can get that it is equal, then all you have to do is triple it. Ta-da!
The problem is that some people will say it is somehow "less than" 1/3; you're asking the same question again, essentially, which is, "does the decimal point represent the limit at infinity?"
The best way is to point out that a decimal representation is shorthand for a sum of numbers over powers of ten, and repeating decimals are therefore shorthand for the limit of an infinite sum. Take the infinite sum of 9/10^n and see what you get.
Equivalents to this problem exist in every radix (or base). For example, in hexadecimal, 0.F… = 1. Perhaps the "strangest" one is binary 0.1… = 1. Given those bases, consider this: if (decimal) 0.9… did not equal 1, then there would have to be a real non-zero number resulting from [1 minus 0.9…]. Some in forums insist that this difference would be the "smallest number greater than zero" (which doesn't exist in the real numbers), and that it should be written 0.000…1. Well, what about the equivalent subtraction in binary? The binary number 0.1 is five times the size of decimal 0.1, and 0.01 is five times the size of decimal 0.01, etc. So, is binary [1 minus .1…] a "bigger smallest number" than decimal [1 minus .9…]? Or is base-20 [0.000…1] a "smaller smallest number"? This would mean that our choice of radix somehow affects the properties of real numbers, which would be rather like Formulaic Magic. In reality, for any conventional radix, [1 minus 0.[largest-digit-in-radix-repeating-forever]] always equals zero, so it all works out fine.
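The radix-independence point can be checked directly with exact fractions; here's an illustrative Python sketch (the choice of bases and digit count is mine):

```python
from fractions import Fraction

# "0.ddddd" with five copies of the largest digit d = b-1, in base b.
# The shortfall from 1 is exactly 1/b^5, whatever the base; it shrinks
# to zero as the digits repeat forever, identically in every radix.
gaps = {}
for b in (2, 10, 16):
    s = sum(Fraction(b - 1, b**k) for k in range(1, 6))
    gaps[b] = 1 - s
    print(b, s, 1 - s)
```

No base produces a "bigger smallest number" than any other; the gap is always exactly 1/b^n for n digits, vanishing in the limit.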
If anyone wants the really technical explanation for 0.999...=1: Mathematicians don't think of real numbers using their expansions, but instead think of a real number x as the set of fractions that x is bigger than (for example, sqrt(2) is bigger than every negative fraction and every fraction that squares to less than 2). Consider any fraction less than 1. Then there is some 0.9...9 that is bigger than it. And if 1 is bigger than a fraction, so is 0.999... But 0.999... can't be bigger than 1, so they have the exact same set of fractions that they are bigger than. Therefore 0.999...=1.
On the subject of .999...=1, how is it even possible to arrive at .999...? Wouldn't any problem that gives you that answer properly be reduced to 1 anyway? I was always taught to use fractions during work and decimals only for answers and only when specified, so I really don't see how you could arrive at .999... in any practical situation?
Consider the infinite sum "Sum_{n=1}^{infinity} 9/(10^n)" which works out as 0.9 + 0.09 + 0.009 + ... It's easy enough to see that this sums to 0.9999... (and hence to one, due to all the stuff that has previously been mentioned).
Of course, I have to point out the formula for the sum of an infinite series S = a/(1-r); in this case a = 9/10 and r = 1/10, so S = (9/10)/(1-1/10) = (9/10)/(9/10) = 1, confirming that 1 = .999....
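For the curious, both the partial sums and the closed form can be verified with exact fractions; a small Python sketch (my own illustration, not from the thread):

```python
from fractions import Fraction

# The series 0.9 + 0.09 + 0.009 + ... is geometric: a = 9/10, r = 1/10.
a, r = Fraction(9, 10), Fraction(1, 10)

s = Fraction(0)
for n in range(1, 8):
    s += a * r ** (n - 1)
    print(n, s)          # 9/10, 99/100, 999/1000, ... always 1 - 1/10^n

# Closed form for |r| < 1: S = a / (1 - r), which is exactly 1 here.
print(a / (1 - r))       # 1
```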
Decimals aren't just numbers we "arrive at", or at least, not all decimals are. Rather, if we accept that repeating decimals are a legitimate thing, then mathematicians are naturally led to ask what each possible repeating-decimal expression means. 0.9… is one of those expressions. Actually, there are infinitely many expressions like it; for example, 0.4999…, which is equal to 0.5.
Does everything REALLY involve numbers, or are people just making that up to scare kids into getting A+ 's in math?
Everything important for modern civilization involves numbers. Medicine dosing, for example, is determined by things like the rate of absorption into the body as well as body weight. You can maybe use electricity without math; but to understand how the signal gets to your house, you need to understand trigonometry at a minimum. Your computer is based on numbering systems. Essentially, if you want to understand modern technology at all, you need math, and it starts with numbers and moves on to variables, where the real work starts. If you don't, then you'll be beholden to someone else for everything. Heck, proper cooking needs a good understanding of math and chemistry. You might be able to do okay without it, but understanding how heat is dissipated from pans and absorbed by different materials, the effects of a higher versus lower temp, and how it's not quite the same if you double the temperature to halve the time...
Math is a way for humans to understand the world around them. The Moon circles around the Earth without knowing any math; but if you want to predict its movement, then you'll need math. There is a theory that the nonhuman mind will produce very different math, but we are stuck with the math we have. Non-scientific methods of understanding the universe exist (but don't necessarily work) and don't rely on math. But you can't tell what you discovered there to others, and progress is impossible there.
Math is not about numbers; it is about patterns. It can be represented by numbers, but you can talk about groups while having geometric transformations (which are closer to pictures) in mind. That's the beauty of math: the same pattern appears over and over again in seemingly unrelated fields. Numbers are just easy to operate on, so they get thrown in for simplicity.
Bah. Anyone could argue that their job, or any job, is or isn't "important to modern civilization", and no one has any right to look down on anyone for holding a job that isn't that "important" either. And listen to me, original poster: you will get lectures identical to the ones above from people who are aficionados, teachers, or experts in ANY ACADEMIC SUBJECT in the world you ask the same question about, and each will have their own arguments for why their profession is uniquely central to modern life and the knowledge of it uniquely critical to everything. I once had a history professor who could take these guys above to town and back in an argument over math vs. history in such regards, and I don't even necessarily think such things about history myself. Listen: whatever subjects are most necessary to what you think you need to do in your own life, focus on those. I'm not just talking about school and careers, it's always an ongoing thing.
It is true that any good professor could make an argument for why their field is central to modern society. If the subject wasn't important enough to them for them to believe it, they wouldn't have spent all that time getting a PhD. And they might even have a point; after all, modern life is so complex that any number of subjects could be considered central to it. The thing about math, though, is that it isn't just important, it is fundamental. Biology, physics, chemistry, geology, materials science, engineering, electronics, computer science, economics, medicine, actuarial science, digital illustration, 3D modelling, business, etc., all require advanced mathematics to understand. Pretty much everything else requires at least some math. I can understand the desire to focus on what is important in your life, but in the modern world we don't live in a vacuum. In a democratic society we are required to be good citizens who understand the issues of the day and can vote intelligently on them, and we cannot possibly do this without mathematics.
All of science, engineering, economics and statistics directly involve math. That's a pretty broad bunch of categories. Since science is about describing reality, mathematics is also involved with everything at a detailed enough level. If you get abstract enough you can avoid dealing with math directly, though (for instance, computers are devices made of particles whose behavior is described by equations, with electrical circuits modeled and designed by different equations, which represent information in binary numbers and manipulate it with logic rules and mathematically focused programming languages... but you can just use a mouse and a keyboard to manipulate a GUI and never worry about that if you don't want to; it's still there, though).
The deductive reasoning structures formalized by math went on to become standard notation for philosophy. Seriously. You can use logic and proofs in philosophy now. Sure, it isn't as formalized, but that method of thinking is great for arguments. Example philosophical argument (this one's for continuous revelation):
Assume that Your Entity Of Choice, hereafter referred to as God, is perfect. Therefore, everything which people claim that God has done, is something which God wanted to happen. Ergo, continuous revelation is part of the Plan.
Conversely, assume that God is not perfect. Thus, continuous revelation must take place, as, being omniscient, God must be able to realize his mistakes and attempt to correct them.
Who says you can't mix logic and religion?
I would never say that, but I for one would never use such Insane Troll Logic as the above to support the idea.
Why are calculators, even basic ones, so much fun to play with?
5318008.
Rebuttal: 55378008.
Because they're easy to fiddle with, and chances are you don't enjoy using them properly. You will have them in front of you during unending hours of boredom in the classroom without a whole lot of other things to mess with. Probably very much like how dictionaries are so much fun to look up random things in.
Probably one of the reasons graphing calculators have apps. Half the time in my classes, a student next to me is playing some block puzzle game rather than actually doing math.
Most Brazilian universities require the HP 50G calculator for engineering/science courses. It is VERY common to see people playing games during classes, or even trading games (the calculator has an infrared port and a card reader).
Seriously, guys. Who the hell came up with the term 'integer'? What is wrong with calling them 'numbers'? If they're supposed to be called 'integers', why the hell do we even use the word 'number' anyway? Let's be honest here: when I first learned the term 'integer' back in middle school, that was the moment when mathematics Jumped the Shark for me. I've never trusted it since.
An integer is a type of number, just as you can say that a Golden Retriever is a "type" of dog. Specifically, Integers are whole numbers, no decimal point, no fraction.
Seriously, guys. Who the hell came up with the term 'Golden Retriever'? What is wrong with calling them 'dogs'? If they're supposed to be called 'Golden Retrievers', why the hell do we even use the word 'dog' anyway? Let's be honest here: when I first learned the term 'Golden Retriever' back in middle school, that was the moment when biology Jumped the Shark for me. I've never trusted it since. (The message is: Use Google.)
Integers are numbers with no fractional parts, hence the name. Pi and 2.5 are numbers that aren't integers. Sad you gave up on math due to a random misunderstanding.
A 5-second Google search would have saved you a lot of pain. A better question would have been "Why not just call them whole numbers?"
And, for those who are wondering, an explanation of this is that "whole numbers" is a vague term; it could refer to the nonnegative integers, the positive integers, or all integers. In math, we like to define things precisely to avoid confusion.
As for where the term came from, OED says it's from Latin, meaning "whole, intact". Same origin as "integrity" apparently.
For what it's worth, the symbol used to denote the set of integers, the blackboard bold Z, is an abbreviation for the German Zahlen, meaning "numbers."
Why doesn't 1 count as a prime number?
They'd have to make too many exceptions for too little gain. Prime factorizations don't need any 1s, and we don't need the definition to be changed to "divisible by itself and 1, or just 1 if it's 1".
One is divisible by itself and one, but then every number is divisible by itself and one. Prime numbers are defined as having exactly two factors.
So, first off, that statement should be "one is divisible by ONLY itself and one." Second, if one is itself, then wouldn't it only be divisible by itself? So it fails to meet the definition of having exactly TWO factors.
1 is a unit; its properties are VERY different from those of actual prime numbers. A better question is "Why should 1 count as a prime number?", and there is really no answer to that beyond "It looks sort of like a prime from a purely cursory view".
That gets pretty subjective. Why do we care about prime numbers at all? Well, they are considered the building blocks of other numbers, so we want to study them and have a name for them so we can refer to them concisely. The unit, 1, is definitely a vital building block of other numbers; by that reasoning it should be counted as prime. Of course, by such loose reasoning 0 should also be considered prime, since none of our prime numbers can multiply to 0 without it. I think the first responder was correct in saying that it would require adding exceptions to a bunch of theorems.
Ultimately, mathematical terms are defined the way they are because it leads to interesting or useful properties. We could easily define an even number as "any multiple of 2, and also 1", but there's not really anything you can do with that.
One of the biggest reasons we're interested in primes is that any number can be written as a product of primes in one and only one way; for example, 35 is 7 × 5. That's what's normally meant by "primes are the building blocks of the integers". If we include 1 as a prime, then it's no longer true; 35 is 7 × 5 × 1, or 7 × 5 × 1 × 1, or 7 × 5 × 1 × 1 × 1, ...
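A small illustration of that uniqueness, as a Python sketch (the trial-division routine is my own, purely for demonstration):

```python
# Trial-division factorization. With 1 excluded from the primes, every
# integer greater than 1 has exactly one multiset of prime factors.
def factorize(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize(35))    # [5, 7]
print(factorize(360))   # [2, 2, 2, 3, 3, 5]

# Admitting 1 as a "prime" would let us pad any of these lists with
# arbitrarily many 1s, destroying the one-and-only-one-way property.
```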
To the above troper: THANK YOU. I wasn't the one who originally asked the question, but it's bugged me, too. This is a simple and perfect explanation. Now I get it, instead of just accepting it.
Infinitesimals (things like dx and dy) bug me. We're allowed to divide by them because they're not * really* zero, but when we're adding them, we can treat them as zero because they're basically zero. Yes, I know their purpose is for limits and ratios, but it still bothers me that they are treated as both 0 and not 0.
Me, I always assumed it was just sloppiness. Yeah, you could do it rigorously with limits, but why would you? It's hard enough work as it is.
Infinitesimals can in fact be defined rigorously without limits, by things like extending the reals to the Hyperreal Numbers. They are more complicated than, but analogous to, the reals, and are represented by infinite sequences of reals (note: some sequences represent the same number, like 0.999…=1). In this case, a real r is represented by the hyperreal <r> = r,r,r,…; infinitesimal hyperreals are represented by sequences that converge to 0, iirc. Approaching calculus in this way is called nonstandard analysis.
The official line is that dx/dy is not division, it's just a way of writing differentiation - Leibniz's notation is still used because it's awkward to change everything.
And because if you have multiple independent variables, it's good to know which one you're taking a derivative on.
Treating them as both 0 and not 0 is just for ease of use. When a new student walks into a calculus class, it's easier to tell them "1 divided by 0 equals infinity" than say "the limit of 1/x as x approaches 0 from the right is infinity". Pretending that infinitesimals equal zero is just our way of doing limits using mental math.
You are not alone in asking this question: this was one of the big complaints lodged against calculus when it was first introduced. I find it helps to think of calculus as being interested in the behavior near a point, rather than at the point.
If somehow you were able to figure out that the length of something is 2+(1 x 10^-1,000,000), there's a good chance that in everyday situations you can just use 2 instead. It's just a matter of successfully extending this intuition: 0 is the only number you can't divide by, and (1 x 10^-1,000,000) isn't zero.
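That "small but never actually zero" idea is exactly how a difference quotient behaves; a tiny Python sketch (the example function and step sizes are my own choices):

```python
# The "dx" in a derivative is really a limit: shrink h and watch the
# difference quotient settle down. For f(x) = x^2, f'(3) should be 6.
def f(x):
    return x * x

x = 3.0
for h in (1e-1, 1e-3, 1e-6):
    print(h, (f(x + h) - f(x)) / h)   # approaches 6 as h shrinks

# h is never literally 0 (that would be the meaningless 0/0); it just
# gets arbitrarily small, which is all the limit needs.
```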
Anyone else ever noticed that teaching math is like one long series of lies, and then flipping back on what you said? "Oh, no, you can never subtract a big number from a smaller number..." "... unless you use negatives..." "You can't multiply fractions..." "...Until you find the least common denominator" "You can't find the square root of a negative..." "...without using i..." etc.
I can remember being about 6 in school and my teacher saying that you could halve even numbers but not odd ones, like 3 was her example. I knew it would be one and a half, but she refused to acknowledge my achievement...and that's why I killed her. (Not really, don't worry)
I can actually remember back in Kindergarten or 1st grade using a calculator and doing 3 - 8 just to see what it would show. I was confused about what the - meant in the answer because I didn't know about negatives at the time.
That's a problem with the way that math is taught in school, not with math itself. Also, who tells people you can't multiply fractions, and why the hell would you need the least common denominator to do it?
I think he meant adding fractions, which would require some common denominator, but it's usually faster to just multiply the denominators (which rarely gives the least common denominator), since half the time you have to simplify anyway.
It really depends on the teacher. My teachers were always pretty honest: "What happens if you subtract 10 from 9?" "You get a negative number, but we'll talk about those later. For now, just don't do it." That was satisfying: I knew it was possible, and I'd learn eventually.
It's part of all sciences, it just comes up most in maths. Lies to children so that they don't ask questions of teachers when they won't understand the answers.
You can still teach math and science without doing that; it's just harder. All you have to do is be up-front about which stuff will take a more advanced technique: "OK, this is a square root, taking the square root of a negative number requires complex number theory, and we're not ready for that yet."
Besides, some of them aren't even lies. It's perfectly true that you can't find the square root of a negative number at first, because you're working with real numbers. It stops being true once you start working with complex numbers, but before you introduce the concept of i students have no business working with complex numbers anyway.
Student: Can we get a square root of a negative number?
Teacher: ...No. You can't.
Frighteningly, some teachers of early-level math teach it in that annoying "you can't do this" "now you can because I said so" way because they actually don't get it either. My (tenured) sixth grade teacher happily taught the class that the area of a circle was pi* r* 2. Not pi* r squared. Pi* r times two. Fortunately she was called out by some of her own students before it could stick with the others. Unfortunately, she tried to fight them to save face instead of shrugging and making sure the students got it right, and later was forced to admit she didn't know what exponents were. As a math tutor these days, I can say with great confidence that the vast majority of problems people have with math can be traced back to a teacher screwing them over somewhere early along the line instead of genuinely being "not good with numbers" — most of the people I help have to be re-taught several things that were drilled into their heads outright wrong several years prior.
My boyfriend has been incredibly good at math since he was in elementary school, to the point where he knew algebra since he was in first grade (he evidently had an uncle who was rather zealous about teaching him this stuff...). He used to get incredibly frustrated every time a teacher insisted that he stay on the same level as everyone else. Understandably, he eventually got tired of this, and one day in fifth grade, when everyone else was just beginning to learn how to solve equations and other extremely basic algebra stuff, he decided to create an algebra quiz and make his teacher take it. She nearly failed it.
This is how anything is taught. First we give the big picture, then we explain that it's not actually quite that simple. Compare a first grader's understanding of the American Revolution with a college student's. Both know how it ended, but the first grader has a very simplistic understanding of it.
I, who am taking an independent study on complex variables, found an SAT II math question that didn't actually have a correct answer because of this problem. I had to actually ignore what I had learned that semester in order to give the "right" answer.
Don't they specify "the best answer" instead of "the most correct answer", for moments like that?
Math isn't just one body of knowledge. For instance, according to the Peano axioms, there is no number whose successor is 0. There's nothing wrong with the Peano axioms. But you also get something perfectly consistent if you replace that axiom with one that says every number is the successor of another number.
There isn't any number whose successor is zero, because the successor function is only valid for the counting numbers. When you define negative numbers, they don't have "successors," even though you can add one.
There was a group of French mathematicians who wrote a series of textbooks in mathematics under the collective pseudonym "Nicolas Bourbaki" which didn't ever "flip back on things" as the original troper described it. This was mainly because it bugged Jean Dieudonné, who would always threaten to resign if he thought someone was suggesting that they do this. The result really has to be seen to be believed, and although I like the book on General Topology, I understand that there are people whom this approach bugs even more. However, it bugs me too, which is why I prefer graduate mathematics textbooks to undergraduate ones; undergraduate ones often give a misleading view of the subject, fail to take advantage of fundamental and important techniques because they are "too abstract", etc.
They tried to fix this once. That's where the "New Math" came from, and it was a disaster. As frustrating as it is to us looking back as adults, or to those with specific kinds of minds as children, it looks like kids generally have to rote-learn simple operations on the natural numbers before they can move on to understanding of the concepts behind them.
So what exactly is it about stuff like Algebra and above that makes it so hard for some people to explain? Even college professors often hate teaching college algebra because there's no way to explain it in a way that everyone can understand. I've literally seen it...half the class would understand what the professor said, and the other half would probably wonder how s/he got from Point C to Point D or think s/he pulled random numbers out of their ass.
The problem is that being able to do something and being able to teach other people how to do it in a lecture hall aren't the same skill. Explaining algebra doesn't have to be hard, but being good at algebra doesn't give you magic explanation-powers either. Moreover, a lecture on any subject will go over the head of a big chunk of the audience unless there's a lot of repetition and use of different techniques to cover the same topic, because no one lecture technique works for every person who listens to it.
Math gets more abstract as it gets more complex, and people have less everyday stuff to comprehend it intuitively. Integers? Well, those are like, how many things you have, right? So that's easy. Negative numbers and decimals? Well, those are just like money! Except money only goes to 2 decimal places, but it is simple enough to realize that it can go to more places. Then you step out of arithmetic and are suddenly dealing with things like variables and graphs of functions and it just becomes more arcane to people. Basically, for every level of complexity in math you go up (algebra to calculus to differential equations, for example), there is a smaller percentage of people who will be able to truly grasp and understand it (as opposed to, say, memorizing algorithms to get the right answer), because it just keeps on getting more abstract and counter-intuitive, and thus takes more brainpower to conceptualize; brainpower that some people simply don't have.
Another place that problems may arise is where the student is unable to articulate what he doesn't understand about the concept under discussion.
It's not just Maths. There isn't any single class without a student thinking what the f*** the lecturer is talking about. Of course, some subjects are more so than others.
People's brains are wired differently. A simple demonstration of the logic involved will suffice for some, but others will have difficulty absorbing it unless you spell it out. And as pointed out above, knowing something and teaching it aren't the same thing. Case in point: I have several friends who went to a local high school geared towards science and mathematics. They're all brilliant in both fields, but largely incapable of teaching others without starting with "Because that's how you do it".
This is why I nearly failed Algebra II. The teacher couldn't explain why certain equations are solved the way they are, and I couldn't understand the work because there was seemingly no logic involved. I'm still not entirely sure if there is.
Who decided that Pi is equal to 3.14159265358...Pop!
Why does math hurt my brain?
No one "decided" that; Pi is defined as the circumference of any circle divided by its diameter, which just happens to be an irrational number.
Not only that, it happens to be a transcendental number.
If it helps, it hurt Pythagoras's brain too.
Remember that all (perfect) circles are exactly the same as all other circles except for size. So the diameter-circumference ratio will be the same for every single circle, and we call that ratio pi. Meanwhile, the diameter lives in line-land and the circumference in curve-land. At no "zoom level" is a circle's edge composed of tiny lines; it's curved "all the way down". Because of this, the circumference can't be given as exactly any number of diameters, or vice versa. We can't say "X diameters equal Y circumferences", and that's the definition of an irrational number. However, we can say that one circumference is more than 3 diameters and less than 4, that it's more than 3-and-one-tenth diameters and less than 3-and-two-tenths, and so on forever.
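The "more than 3, less than 4" squeeze above is essentially Archimedes' method; here's a Python sketch of the inscribed-polygon half of it (the side-doubling identity is standard geometry, the code is my own illustration, and no pi appears anywhere in the computation):

```python
import math

# Inscribe a regular hexagon in the unit circle (its side is exactly 1),
# then keep doubling the number of sides using only square roots.
# For an inscribed n-gon of side s, the 2n-gon's side is sqrt(2 - sqrt(4 - s*s)).
s, n = 1.0, 6
print(n, n * s / 2)        # hexagon gives perimeter/diameter = 3 exactly
for _ in range(8):
    s = math.sqrt(2 - math.sqrt(4 - s * s))
    n *= 2
    print(n, n * s / 2)    # climbs toward pi from below, never reaching it
```

Each doubling tightens the lower bound: 3, 3.105..., 3.132..., and so on toward 3.14159..., exactly the "more than 3-and-one-tenth, less than 3-and-two-tenths" squeeze continued forever.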
Actually, it's been determined that pi to 42 digits is enough to compute the circumference of a circle bigger than the observable universe to within less than the diameter of one proton, which, in my opinion, is plenty accurate enough!
Sure... if all you want to do is measure actual, physical circles. Because of Euler's identity, conceptual perfect circles come up all the time in the solutions to differential equations - indeed, you could make a case that pi's real significance is the number that solves these equations, and circles are just a special case. It's not very common, but there are times when you need more precision in pi to compute a quantity you've come to in that way, especially if what you're actually looking for is the distance between two very close quantities, as with relativistic effects.
A key word put in that previous bullet is "perfect"; when you draw a circle in the real world, the amount of "stuff" its circumference and diameter are made out of can be expressed in terms of each other. (The circumference won't be infinitely curved, or infinitely thin, like a "real" circle. For that matter, the diameter won't be infinitely straight. But the closer to those ideals you get, the more digits of pi you need; 3.14 is enough for circles drawn with a compass.)
It might help to know that, while the number 3.14159265359etc looks like a random set of digits, it actually looks a lot more natural when written as a continued fraction. It would be hard to write these fractions here in ASCII format, so I recommend looking it up in the other wiki.
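For the curious, the first few continued-fraction terms can be peeled off numerically; a Python sketch (float precision only supports a handful of terms, so this is purely illustrative):

```python
import math

# Peel off continued-fraction terms of pi: take the integer part,
# invert the fractional remainder, repeat.
x = math.pi
terms = []
for _ in range(5):
    a = int(x)          # integer part of the current value
    terms.append(a)
    x = 1 / (x - a)     # invert the fractional part and continue
print(terms)            # [3, 7, 15, 1, 292]
```

These terms are where the famous approximations come from: truncating after the 7 gives 22/7, and truncating before the large term 292 gives the remarkably good 355/113.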
Why is it that logarithms aren't taught at the same time as powers and roots? There's three variables in a^b=c. Only teaching how to find two will inevitably lead to confusion.
a^b=c is the same as: c=a*a*a*...*a (so that a appears b times). It can therefore be easily (if laboriously) worked out with a pen and paper. Logs, not so much. The only ways to find them are with a calculator, a log table, or a slide-rule. Basically, it's the fact that logs aren't trivially solvable with arithmetic methods that means they're not introduced until later.
Okay, now try that with b not being an integer. :-P
Roots have the same problem, but they're introduced way before logarithms are. With roots, you generally either are given easy numbers like sqrt(25)=5, or are expected to leave it expressed as a root like sqrt(5) = sqrt(5). We could, but don't, do exactly the same things with logarithms: give easy numbers like log_5(25) = 2, or leave it as a log like log_5(2) = log_5(2). Later on, you can introduce hybrid problems like sqrt(50) = 5 * sqrt(2) and log_5(50) = 2 + log_5(2).
There are two reasons, really: first, they're not introduced until after because they used to be introduced well before, but as slide rules fell out of fashion, they were dropped from that part of the curriculum, whereas roots and powers stayed where they were. Second, the order in which math is taught in secondary school tends to reflect that in which the concepts were developed, and approximation of irrational square roots goes back to prehistory, higher roots to Classical Greece, and general methods to medieval Persia, whereas logarithms other than those of integer powers aren't seen until the seventeenth century.
What exactly constitutes a "number"? A rational number is normally defined as an ordered pair of integers. A real number can be defined as a convergent infinite sequence of rational numbers. Given that, I don't see why people tend to have a problem calling vectors and matrices numbers. But how far does it go? Is a point a number? How about a line? Is a set a number?
A number is something we can intuitively perceive as a number (math is full of duck typing). The exact notion of what we intuitively perceive as a number varies depending on the time period — negative, non-integer, and irrational numbers were not considered numbers at different points in history, and complex numbers still commonly aren't. Definitions are given for scientific rigor, and can in fact be extremely clumsy and less intuitive than the concept itself (just look at the set-theoretical definition of natural numbers through the axiom of infinity). So, strict definitions are used when we need to ground the new concept in existing ones instead of just saying "There is a number whose square is -1, because we say so!".
Why doesn't anybody seem to teach matrices well? They're sets of linear equations. People do operations on functions all the time, so it doesn't seem like it would be too confusing to tell people they're multiplying a set of linear equations (it's actually equivalent to composite functions, but you get the idea). A lot of people have problems with fractions. This is like teaching people the rules of fractions, but never telling them what they actually mean.
No, they are arrays of coefficients of systems of equations... And they don't multiply a set of linear equations, they form a set of linear equations. And they have to have the vector of results; without it you have a system of equations that doesn't tell you much.
This is actually a great point, especially since matrices start getting very important in university-level math!
It's actually possible to learn matrices much earlier than they're taught. I personally think the reason lots of people don't understand quantum mechanics very well is that they don't understand matrices very well.
I'm currently taking computational linear algebra and so far everything I have seen could have been included in a high school algebra class or taught as a separate course. I looked ahead at the book and it looks like there are some applications for differential equations coming up, but I think they really should teach this stuff in high school. It makes solving systems of linear equations a lot easier and requires no calculus or trig, just algebra and geometry.
Another good question is why matrices aren't taught first as representations of linear transformations. Most linear algebra classes (especially ones with a computational bent) spend a good month noodling around with matrix row operations before introducing the linear transformation. Wouldn't doing things the other way make more sense?
The above is a terrible way of teaching matrices.
Student: So what are matrices for? Teacher: They're used in linear algebra. Student: What's linear algebra? Teacher: It's something involving vectors, which I'll teach you about in a few months, though you won't actually learn any linear algebra until 1st year college.
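The "matrices as linear transformations" view a few replies up can be shown with nothing but high-school algebra. A Python sketch (the helpers `apply` and `compose` are invented names): the columns of a matrix are where the basis vectors go, and matrix multiplication is function composition.

```python
def apply(M, v):
    # Apply a 2x2 matrix M to a vector v
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def compose(A, B):
    # The matrix product A*B is the matrix of "apply B, then A"
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rot90 = [[0, -1], [1, 0]]    # rotate 90 degrees counterclockwise
scale2 = [[2, 0], [0, 2]]    # scale everything by a factor of 2

print(apply(rot90, [1, 0]))  # [0, 1] - the x-axis maps to the y-axis

# Multiplying matrices = composing the transformations:
v = [3, 4]
print(apply(compose(scale2, rot90), v) == apply(scale2, apply(rot90, v)))  # True
```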
You are so wrong, my friend! It is true that the mathematical faculties of which you speak are closely tied in with the same linguistic intelligence faculties which together combine to make a large part of one's deductive reasoning faculties, but those two components are still separate in and of themselves, and rationality is itself still but one part of the difficult-to-capture aggregate known as the intellect. I, for instance, have a massive amount of skill with my linguistic faculties (I remember one time in elementary school when my principal had me read and explain to him a paper from his office because he didn't know what some of the words meant) yet have a disability in math, giving me approximately the same math skills as your average dead sea bass, and I find it hard to think of a time in my whole life when anyone said I had any problem impressing them with my reasoning skills. The mind is a funny, fickle, mysterious thing.
For the same reason people who are illiterate (whn u typ lyk dis) who have no reason to be (they're not living in poverty, they don't have any learning disorders) are considered stupid. I mean, I'm not saying you have to be a whiz at advanced calculus or anything, but if you can't do basic algebra because "that's too hard, so I won't even try!", then yeah, you're a moron.
If you can't tell the difference between stupidity and simple laziness, you're probably a moron.
If your "simple laziness" extends to levels of not bothering to learn the difference between there/their/they're and being incapable of solving simple algebraic systems like 2x = 4, that's stupid and you're a moron.
Funny, though, just how far beyond 2x=4 and there/their/they're these things get long before it ceases to be a problem even before school is over, isn't it?
Note that it's exceedingly hard to tell if someone is actually bad at math, or if they were just taught poorly by a subpar teacher and the student lost so much ground there that they couldn't catch up later — the latter happens depressingly often at grade school levels (see above for a teacher who didn't know what exponents were, or one who nearly failed a quiz prepared by her own student).
Not that hard, methinks... you just have to test them with questions that require little mathematical knowledge, but a lot of ingenuity. The SAT Math does this, requiring only basic algebra and geometry. It should be possible to make a similar test composed of questions that require only basic arithmetic for people who claim they had a bad algebra/geometry teacher (and no teacher I've ever seen, no matter how bad, has been able to screw up teaching arithmetic - they may have added a lot of unnecessary bullshit that made the whole thing unpleasant, like memorizing multiplication tables, but they got through the core stuff well enough; that's the one math everybody knows).
Fun Fact: It's thought that 5th grade math is what determines how you'll do in future math classes. This is because 5th grade is when math becomes more focused on abstract concepts (fractions, very basic algebra) rather than concrete ones (as in anything you can use physical objects, such as fingers, to illustrate). If you do poorly in that you make grasping more difficult concepts even harder. That said, it's pretty basic knowledge that not everyone has the ability to understand certain levels of math. Saying that someone is a moron because they aren't good at math is in itself moronic. That said, if you never even try to learn it, you ARE a moron (I may suck at math, but at least I tried to learn).
Does it bug anyone that all four sided triangles have exactly two sides?
It probably bugs some people, but those of us who have studied logic as well as math are familiar with the idea that a false proposition implies any proposition.
It's easy to give a simple proof of the above statement, without resorting to the principle of explosion. The usual definition of a triangle is a planar polygon with three sides. Therefore, a triangle with four sides implies that 3 = 4, since each polygon has the same number of sides as itself. Subtract 1 from both sides of this equation to obtain 2 = 3. Therefore 4 = 2 by transitivity and so it also has two sides.
I'm reminded of Raymond Smullyan's answer, when a friend didn't believe in the principle of explosion: "Do you mean, from 2 + 2 = 5, it follows that I am the Pope?" "Yes: 2 + 2 = 5. Therefore, 2 = 3. Therefore, 1 = 2. The Pope and you are two people. Therefore, the Pope and you are one person."
Credit where credit is due: the above proof is quoted in one of Smullyan's books but the originator was Bertrand Russell.
This isn't so much a JBM but...what in the name of all that is green and good on this earth is infinite summation? I'm not a mathematician, I have no problem with trying to understand abstract ideas but I can't help but be iffy on a subject when I ask my teacher "what IS this?" and the response is "..." and an eighty minute lecture on How to Use Excel for Dummies. Please, somebody explain what this is, as far as I can tell it's some extension of arithmetic and geometric series and sequences (which I think I understood but I resented having to learn it, though I admit taking SL math was my own choice and I can blame nobody but myself for learning math I will never ever need or encounter ever again once my final exams are done)...that or it's the mental equivalent to water torture, which is improbable.
Basically it's a way of saying that (in a series whose terms shrink quickly enough) each term you add gets you closer and closer to this number, and if you added EVERY TERM UP TO INFINITY you would get exactly this number. Think of it as a limit.
It's actually rather simple. Finite summation is when you add the terms described to the right of the sigma from some number on the bottom of it to some number on the top of it, right? Well, you can let the top number be a variable, and then to make an infinite sum, you take the limit as the variable increases without bound (that is, the variable tends to positive infinity). As for the series and sequences connection, a series is an infinite summation, while a summation is when you add together (sum) the terms of a sequence.
Let a1, a2, a3, ... be elements that you sum. Take the partial sum p(n)=a1+a2+...+an. An infinite sum is the limit of p(n) when n tends to infinity.
Here's probably the best way: if there's a number such that, for any open interval you can define that contains it, there's a point after which no matter how many times you add the next term as it's defined, it will never leave that interval, that's the sum. Consider: that's how we think of non-repeating decimals - each figure added narrows the range in which the number is, say from between 3 and 4, to between 31/10 and 32/10, to between 314/100 and 315/100, 3141/1000 and 3142/1000, and so on. (And you have no idea how much willpower it took to leave the fractions in that form so you'd recognize the number...)
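The partial-sum definition in the replies above is easy to watch in action. A Python sketch using the geometric series 1/2 + 1/4 + 1/8 + ..., whose infinite sum is 1:

```python
def partial_sums(terms):
    # Yield the running total after each term is added
    total = 0.0
    for t in terms:
        total += t
        yield total

sums = list(partial_sums(1 / 2**k for k in range(1, 30)))
print(sums[:4])   # [0.5, 0.75, 0.875, 0.9375] - creeping toward 1
print(sums[-1])   # 0.999999998... - never quite 1, but as close as you like
```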
Just to clarify, Godel's Incompleteness Theorem does not say that it's impossible to prove a mathematical theorem, as clearly this is not true. What it does say is that for any consistent formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true but not provable in the theory. The proof is extremely difficult, but the key point is that it is possible to construct a formula that in essence says "This formula is not provable", and show that it is neither provable nor disprovable (massively oversimplified explanation of that last point: if it's provable then it follows that it's not provable, and if it's disprovable then it's provable. It's similar to the paradoxical sentence "This statement is false").
To put it in simpler terms, Godel didn't prove that you can't prove ANYTHING, but that you can't prove EVERYTHING. There will always be certain things that are true, but cannot be mathematically derived from a limited set of axioms.
The proof, in intuitive terms, goes something like this: take some logical system - call it Bob. Godel showed that if Bob can express the basic truths of arithmetic, then there is a way of saying (within the rules of Bob) "this sentence cannot be proven by Bob". If you can prove the sentence, it is false, and if you can't, it is true but not provable. Therefore, any such system is either inconsistent (contradicts itself) or incomplete (can't tell you everything).
So basically, if what I read on the Trivia page is correct, Godel created a Sliding Scale of Completeness vs Consistency? I also remember reading somewhere that position and velocity can be measured, but the more accurate one is, the less accurate the other is. (i.e. A car is driving down the road. At this moment, the car is at position B5. However, I don't know how fast the car is traveling. If I measure the speed of the car, then the car must be moving, which means its position would have changed.)
If Godel had indeed proven that you can't prove anything, then his proof would contradict itself, now, wouldn't it? Read Godel, Escher, Bach by Douglas Hofstadter for a relatively simple explanation, although if you're ok with a REALLY simple explanation the above is actually not bad.
If Godel's Incompleteness Theorem is correct (I see no reason why it shouldn't be), then is there a formal system of logic out there wherein it's impossible to prove the theorem?
No, probably not. The Incompleteness Theorem says that every formal logic will have some true statement that it can't prove. You're asking the reverse — whether a given true statement must have some system in which it can't be proven. I doubt that it's true, and if it is, it isn't because of the Incompleteness Theorem.
Is math real? Plato had the idea-world as far as I remember and in that numbers exist. Also the it-ness of a horse. (I haven't brushed up on this...). So is math real? Is the number 2 real in some way? Does it exist as a separate entity? How can math prove something, like string-theory (or m-theory, whatever) when it is dealing with something that cannot be experienced, and does not follow logic? How can you ascribe a value to something you haven't seen? Something that follows no other rules?
What is real?
Math is an abstract concept like language. The things involved are not "real" exactly (beyond being symbols on a computer screen/ noises we say out loud/ whatever), but in the same way that we can define "horse" to refer to one of those big four-legged things that goes "neigh", we can draw connections between the maths and the real world - if we say that there are five horses then that means that we can take the set of horses and the set of integers {1,2,3,4,5} and assign one number to each horse with nothing left over in either set. With things more complicated than just counting the connection becomes a bit less obvious, but for example once we invent units we can measure things like distance and time, and from that we can derive area (length * width), speed (distance / time), acceleration (the rate of change of speed with respect to time) and so on. And of course, maths doesn't need to refer to the real world any more than language does.
Fundamentally, all of mathematics (at least, the kinds that most people work with) follows from the rules of arithmetic. Each level beyond arithmetic develops a concept, then uses the concepts of arithmetic to prove properties of the new idea, and so on. Since arithmetic is clearly grounded in reality, so is (some of) higher math.
This is actually a good question. I'll paraphrase my professor: "A mathematical object is what it does." Math is constructed based on properties of objects and operations. Of course, the way we are taught in elementary school, we associate numbers with objects, which is great...for elementary school level math. Eventually, you have to separate the idea of "two-ness" from the two oranges on the table so we can deal with more abstract (but still very real) ideas.
Unless you are a pure materialist, math is obviously real.
It would be better to say that it reflects things that are real. Imaginary numbers, Hermitian matrices, and other such things are not real in the sense that there would be no such objects unless mathematicians had constructed them, yet they are useful. Reality never thinks to itself, "Ok, now I'm going to undergo a linear transformation," but the nature of physical laws may make such objects useful in predicting what will happen. It's a confusing subject, and I'm not sure if I can explain it well, but most pure mathematicians would agree there is nothing real about the objects they work with.
The objects are, from a strictly material viewpoint, entirely different, and there is no way in the world to avoid going into intangible, purely abstract fact to explain how two pairs of objects that are utterly different physically (say, a pair of puddles of water and a pair of bars of gold) still share the same element of duality that they also share together on a different level—their "two-ness", if you will. The mere fact of the two-ness is something completely real which cannot be comprehended except on an invisible, purely mental level—or at least not as of current scientific understanding. That fact is very, very objectively real.
Which reminds me of the famous "Two Plus Two Makes Five" thought experiment. In philosophy yes you can argue that 2 + 2 = 5 using the "math does not exist" argument, but we stick to the conventional 2 + 2 = 4 simply for representational, practical, utilitarian reasons. Same way we invented language to represent Real Life nameless objects. We got used to using "two and two" objects as four and not five, so we'll use that, and you don't want philosophy causing you Centipede's Dilemma while working on scientific problems.
If a dice rolled a certain number, are the chances of not rolling the same number 19/20?
Assuming that it is a fair 20-sided die ("dice" is plural), and ignoring weird circumstances like the die balancing on one corner, then the odds of rolling any particular number are one in twenty. Since the events are mutually exclusive (i.e. you can't roll a one and a six at the same time), we can add together the probabilities of the nineteen other outcomes to get 19/20 as the probability that any one of them occurs. Another way to look at it is that the probability of getting a different number is one minus the probability of getting the same number, which again gives us 19/20.
tl;dr: Yes.
It's worth noting that individual rolls are generally treated as independent, so rolling a 20 doesn't make you any less likely to roll another 20 on the next roll. You can calculate the probability of getting two particular consecutive rolls (or even n particular consecutive rolls), but no one roll affects the next one (unless you're cheating).
Dice rolls are generally extremely close to independent, but dice are generally not quite fair. You're more likely to roll some numbers than others. Since you're more likely to roll them the first time and the second time, you're more likely to roll them twice in a row. You're also less likely to roll the unlikely ones twice in a row, but this makes a slightly smaller difference. As such, there's a little more than a 1/20 chance of rolling the same number again on the second roll.
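That last point is quick to check with arithmetic rather than actual dice. A Python sketch with a hypothetical, made-up slightly-unfair d20: the chance of two independent rolls matching is the sum of the squared face probabilities, and that sum is smallest when the die is fair.

```python
def repeat_prob(p):
    # Probability that two independent rolls show the same face,
    # given the list of per-face probabilities p
    return sum(x * x for x in p)

fair = [1 / 20] * 20
# A made-up biased d20: five faces slightly favored, five slightly disfavored
biased = [0.07] * 5 + [0.05] * 10 + [0.03] * 5   # still sums to 1

print(repeat_prob(fair))    # ~0.05 (exactly 1/20 in exact arithmetic)
print(repeat_prob(biased))  # ~0.054, a bit more than 1/20
```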
Why is it that when you ask a teacher how this principle can be used in the real world, they say, "Oh, lots of places!" and NOTHING MORE. I mean, I understand where geometry can be used in the real world, but what about complex numbers? They're fun and all, but WHAT IS THE PRACTICAL APPLICATION, and WHY CAN NOBODY TELL ME?
To start with, as noted above, imaginary numbers can be used to model real-world phenomena. Most importantly, complex numbers can be used to represent things like vectors, movement in multiple dimensions, and electricity. In fact, electrical engineering is almost impossible without complex numbers. Mostly you have bad teachers if they can't provide context to show the usefulness of the math. Most math was designed to solve problems; occasionally, though, as with complex numbers, the math was developed first and the applications were found later.
A neat connection is matrix math: a complex number a+bi can be represented as the 2x2 matrix [[a, -b], [b, a]], and matrix arithmetic then reproduces complex arithmetic. There are also a lot of useful things you can do with matrices on their own, for example solving a system where you only know certain relationships, say x+3=y and y+z=9 - though that particular job needs only real numbers, not complex ones.
If teachers actually understood and knew how to use the applications, chances are many of them wouldn't be teachers in the first place. Rest assured, though, they do exist.
This from an electrical engineering student: complex numbers help when working with alternating-current circuits. This covers power systems, transformers, electric machines, AC circuit analysis, etc. Instead of having to work with sines/cosines in differential equations (a task that can become rather nasty in some situations), one can use complex numbers that represent those functions and are easier to work with on a calculator or even by hand. We tend to call them phasors, by the way.
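A toy version of the phasor trick, as a hedged Python sketch (the component values are made up): represent the impedance of the circuit as a complex number, and the differential equation collapses into a single complex division.

```python
import cmath
import math

R = 100.0               # resistance in ohms (made-up value)
L = 0.5                 # inductance in henries (made-up value)
w = 2 * math.pi * 60    # angular frequency for 60 Hz mains

Z = R + 1j * w * L      # series R-L impedance as one complex number
V = 120.0               # voltage phasor, taken as the 0-degree reference

I = V / Z               # Ohm's law with phasors: one division, no calculus
print(abs(I))                          # current magnitude, ~0.56 A
print(math.degrees(cmath.phase(I)))    # phase, ~-62 degrees (current lags)
```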
Read the first half of the annotation for Irregular Webcomic #1960 here, it will explain some. Also, start the Archive Binge if you're into roleplaying, any sort of science, art, history or Star Wars. This was a paid advertisement, thank you.
There is stuff like percentages, probabilities, adding and subtracting, multiplying and dividing that you should learn to do for small practical gain, but that's the extent of mathematical concepts having any practical use. Do it for fun, try to understand it if you can afford yourself that sort of luxury, think and create new stuff. That's what math is all about. Unless you're going to be a mathematician, physicist, engineer or a programmer. Then it's a tool as well as an art. For the remaining 99% of us, it's just an art class.
If more people understood just the basic math of engineering, all kinds of things in science and economics could be discussed at an adult level, rather than politicians fighting to fool intellectual preadolescents with rhetoric. I would go so far as to say that the fact that innumeracy of this level is acceptable is literally the single greatest problem facing society, and if it is overcome, people will look back at that event as we look at the introduction of the printing press to the West.
This is more of a statistics question than an actual IJBM, but here we go. I work in retail and we frequently have %-off sales. Our usual is 60%, but sometimes an additional 20 gets added on top of that to certain items. I tell this to my customers and some exclaim, "Wow, so that's 80%!" I never know what to tell them because I utterly fail at math. The small part of my brain that does remember high school algebra is telling me percentages don't add up that way. Am I right?
The question is a bit ambiguous, by the additional 20% do you mean 20% of the original price or 20% of the reduced price? If it's the first then the customer is right (as always) - if the item cost £10 then you get the 60% saving of £6 plus an additional 20% saving of £2 for a total 80% saving of £8. On the other hand if it's 20% of the reduced price then the saving will be smaller - for a £10 item you save £6, but the second saving is 20% of what's left, i.e. 20% of £4 which is 80p for a total of £6.80 or 68% off. If the savings are x% followed by y% off the reduced price, then the total saving is x + (100-x)*y/100 percent.
I'm sorry, I should have clarified. 60% off, plus 20% off on top of the 60.
The important question is still "what is that 20% of?" Percentages don't really mean much on their own, they need to be a percentage of something, in this case the two sensible options are 20% of the full price or 20% of the reduced price. I would guess that it is most likely 20% of the reduced price, leading to a total saving of 68% off the full, but either way makes sense.
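The two readings can be compared in a few lines of Python (the helper `stacked_discount` is invented; the £10 example is from the reply above):

```python
def stacked_discount(price, *discounts):
    # Apply each percentage discount to the already-reduced price
    for d in discounts:
        price *= 1 - d / 100
    return price

full = 10.00
sale = stacked_discount(full, 60, 20)   # 60% off, then 20% off the result
print(round(sale, 2))                   # 3.2 -> a total of 68% off
print(round(full * (1 - 0.80), 2))      # 2.0 -> what a flat 80% off would cost
```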
I'm not being sarcastic, and I am sincerely trying to expand my knowledge here, but is there ANY real world application for pure math?
That's kind of a leading question, since once any aspect of pure math starts to have real world applications, that aspect starts to be called applied math instead, so the answer is no pretty much by definition, but:
That being said, number theory, elliptic curves, and the attendant abstract algebra used to be among the most obscure of the pure math topics. Now they're the basis for the cryptography that allows stuff like online banking to happen safely.
Before that, the early-20th-century effort to make logic abstract and formal played no small part in the development of the computer. Besides, some aspects of formal logic are now finding use in validating stuff like microprocessor designs and safety-critical systems.
Lie algebras, once obscure, are now the basis for the Standard Model of particle physics.
You're gonna have to define "pure math" here. If you mean it as the opposite of applied math, then the answer is obviously no by definition, but I think you would have trouble finding much of it in the average mathematics student's curriculum. And just because something has no application now doesn't mean a future one won't be discovered eventually; that happens all the time.
A lot of pure math exists simply to add rigor to applied math. Without rigorous foundations, all of mathematics would fall apart. That said, the pure math/applied math boundary has been steadily dissolving for the last two centuries. Now group theory is essential in combinatorics, which is the basis of a good chunk of probability. Analysis is essential to the study of dynamical systems, which is useful in analyzing chaotic behavior in the real world. And, as people mentioned above, computers wouldn't exist without rigorous mathematical logic, cryptography would be nowhere without number theory, and quantum physics depends on abstract algebra. There are theorems that at first glance seem to have nothing to do with the real world, but which, when you look closer, are actually very profound in their consequences.
Balancing your checkbook.
There was once an obscure mathematician in the 19th Century named George Boole who created an equally obscure form of mathematics called Boolean logic. About a century after he came up with it, engineers working on primitive computers happened to come across some information about Boolean logic and realized that the programming of their computers used the exact same principles, thereby making their jobs a hell of a lot easier. A hundred years ago, the few people who had even heard of Boolean logic racked their brains trying to find a practical purpose for it. Nowadays, the world practically runs on it. Think of pure math as a stockpile of techniques for challenges we haven't come to yet. Once we reach one of these challenges, that bit of pure mathematics becomes applied mathematics.
WHY THE HELL DOES LEFTPONDIA INSIST ON CALLING IT "MATH"?!
Because they only have one.
Same reason you guys insist on adding extra u's to perfectly good words?
In North America, we are cutting the word off after four letters. In British English, you are shortening the name of the whole concept of mathematics. Both are fine.
It makes more sense to treat the noun as uncountable.
And because "maths" is hard to pronounce. But I will gladly switch when you present to me exactly one math.
But surely the British have been known to play more than one sport.
For the exact opposite of the reason that Rightpondia insists on calling it "maths".
I am saddened that the Collatz conjecture is not actually part of a Soviet conspiracy to slow down mathematical research in the U.S., because that would have been awesome.
Don't be silly. It's actually a plot by time travellers from the future to give computers something interesting to think about so they don't get bored and start thinking about how much better they could run the world if they were in charge.
Why would they do that? Wouldn't the time travelers want the world to be run better?
If you take a Mobius strip and poke a hole in it, the hole goes from one side to the same side?
Yes, and? Are you expecting it to create a new side, or something?
Mind = blown.
If you take a sphere and poke a hole in it, the hole goes from one side to the same side? If not, where does a side end?
It goes from the outside to the inside.
Not if the hole goes all the way through. Unless you insist the hole "ends" in the middle somehow.
The Monty Hall Problem. Is it really so hard to understand that opening the non-selected/non-winning door won't... uh... somehow retroactively increase your odds of winning to 1/2? I mean, I can be understanding when someone is hitting intuition dissonance and can't visualize what's going on... but when school teachers are flat-out insisting that each door has a 2/3 chance of winning? ASDFADFADS
In certain situations the opening of the door does change the probability. If the host didn't know where the prize was and chose at random, then there was some probability of him revealing the car; that this hasn't happened provides information to the contestant (namely, the host showing a goat is more likely if the contestant picked the car) and changes the probability to 1/2. On the other hand, if the host knows where the car is and has a preferred door that he always opens when he can, then the fact that he didn't open his preferred door gives the contestant certainty that the car is behind it. However, given that the situation is in the context of a game show, the standard rules leading to the 1/3 option are the most sensible reading, since the others ruin the format.
I actually designed an experiment to test this, and tried it on my mum. I used three ordinary playing cards, two black and one red. The red card represented the prize, and the black cards were the booby prizes. I would shuffle the cards and lay them on the table, then mum would pick one. When she did, I would flip a black card that she did not pick, and then offer her the chance to choose again. When I looked at my results, mum got the red card more often when she switched and less often when she stayed. In fact, the rate of success was about 2/3 for a switch and 1/3 for a stay. My mind was blown.
Using a computer, I compared switching vs. not-switching over hundreds of thousands of trials. The result? Switching resulted in a win 2/3 of the time, not-switching only 1/3. Simple.
Also, Mythbusters actually tried this out in the real world by having Adam always switch and Jamie always stay. The results were exactly the same as the above troper, switching had a 2/3 chance and staying had a 1/3 chance.
The explanation is incredibly easy. The chance that the door you chose is the right one is 1/3 at the beginning, and the chance that you chose wrong is 2/3. When one of the wrong doors goes away, the chances don't change; but instead of two doors sharing that 2/3 chance of hiding the prize, there is now only one door carrying all of it.
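The card and computer experiments above are easy to reproduce. A minimal Python simulation, assuming the standard rules (the host knows where the car is and always opens a losing, unpicked door):

```python
import random

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # ~0.667
print(monty_hall(switch=False))  # ~0.333
```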
So in college I tried to get trig, but it would always lose me around the time we start the whole pi/2 sine wavey...stuff. I don't even think I was given a name for that. Thus, I never really finished the class (I must try again...) the three times I took it. Yes. Three times. I've had the feeling since I failed the first time (withdrew really, as it helped me realize I wasn't cut for a Comp Sci major) that I am some kind of sub-human, a resource drain that cannot understand or truly explain reality - and thus should make way for his superiors. This bugs me.
There are many ways of understanding reality. You'll find your own. Hopefully.
You are not sub-human. A lot of humans can't do trig or even algebra (they just fumble through it in high school and then promptly forget it). That said, if you are stuck at the level of trig, then yeah, you will never have a very good understanding of reality, which we have discovered to be governed by mathematical equations. Maybe try doing calculus on your own? There is a lot of non-trig dependent concepts and algorithms there that you might be better at.
Not to be flippant, but chill out, man! It's okay that you don't understand some mathematical concept. You have trouble with it...so what? I'm a multilingual writer—words are my thing. But you know what I'll never be able to understand? Arabic. DAMN, that is a hard language. But I'm no less human because of it.
In spite of all the "math is used in everything!" rhetoric above, that doesn't mean every type of math is used in everything. Not everyone needs to know any trigonometry. (PS Since animals which know no math are able to survive, obviously math is not required for everything. Knowing math makes a lot of things a lot easier, though.)
1+e^(pi*i)=0. It is really counterintuitive that a number raised to an imaginary/complex power produces a sinusoidal wave. I'm not the only one who thinks so.
It is a remarkable proof, but it makes perfect sense. There are many such strange results, some of which are really counterintuitive.
You should look at it from a geometrical point of view. If you're OK with the fact that a^n * a^m = a^(n+m), and if you're OK with the fact that rotating by an angle x and then an angle y is the same as rotating by an angle x+y (obviously, there's an analogy with the previous relation), and finally if you're OK with the link between e^(ix) and the unit circle, THEN it should be perfectly clear why there's a sinusoidal wave around.
Speaking of that equation, why isn't it e^(pi*i)=-1, or better yet, e^(2*pi*i)=1? What's so elegant about walking halfway around a circle, turning 90 degrees, and walking towards the center?
The reason for that is that it's a simple equation that contains five fundamental constants in mathematics and nothing else. For any purpose in mathematics or elsewhere, you'll typically see it put that way, or the more useful general equation e^(theta*i) = cos(theta) + i*sin(theta), but it's just the aesthetic of having those five numbers, 0, 1, e, pi, and i, in one simple equation.
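Both forms can be checked numerically with Python's complex-math module (a sanity check, not a proof):

```python
import cmath
import math

# e^(i*pi) + 1 should be 0, up to floating-point error.
print(cmath.exp(1j * math.pi) + 1)  # a residue on the order of 1e-16

# The general identity: e^(i*theta) = cos(theta) + i*sin(theta).
for theta in (0.0, 0.5, 1.0, 2.0, math.pi / 3):
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    assert abs(lhs - rhs) < 1e-12
```

The loop is exactly the "rotation" picture from the post above: e^(i*theta) is the point on the unit circle at angle theta, whose coordinates are (cos theta, sin theta).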
Except that 2pi is considered a more fundamental constant than pi, the use of which is mostly just a matter of historical accident. (See the top of the page).
Considered by whom? Mathematics teachers and mathematicians are generally fine with pi. Physicists, on the other hand... but we don't care about them, do we?
On a related note to the "cannot divide by zero" question above; if dividing can be understood as "x/y=z, where z*y=x," then would 0/0=every number? After all, if we plug in 0 for both x and y, then z can (must?) be every number. On the other hand, that would imply that z=/=z after all...
Not exactly, but you've got the general idea. 0/0 could be anything, and it is possible to determine what it makes sense for it to be in certain contexts given extra information. That's why 0/0 is called "indeterminate" while every other division by zero is "undefined" instead. Read this for a more thorough explanation.
If a limit comes out to 0/0, you can use l'Hôpital's rule to find the actual answer.
Yes it would be. Dividing is usually defined as "x/y=z, where z*y=x and y!=0", because we want it to be a function. It's sort of like how arcsine is commonly defined as the inverse of only a certain part of sine so that you only get one answer.
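"Indeterminate" can be seen numerically: two different expressions that both approach 0/0 can have different limits, so the surrounding context decides the value. A quick sketch:

```python
import math

# Both sin(x)/x and (2x)/x formally give "0/0" at x = 0,
# but their limits as x -> 0 differ: 1 and 2 respectively.
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x, (2 * x) / x)

x = 1e-8
assert abs(math.sin(x) / x - 1) < 1e-9   # limit is 1
assert abs((2 * x) / x - 2) < 1e-9       # limit is 2
```

This is why 0/0 gets a special name: unlike 1/0, it has no single value you could consistently assign.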
When someone says that something is "20% better," does that mean that the new version is 120% of the old one, or that the old one is 80% of the new one?
Clearly it means that Rainbow Dash is screwing with you.
Presumably, it's the former, as one could argue that the latter would be a 25% increase, and that would sound better to buyers, so they would use that.
On the other hand, the second interpretation could be used to downplay the severity of something bad, like "deaths have only gone up 20% in the last week," instead of 25%.
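The two readings disagree by exactly the amount described; quick arithmetic (the numbers are made up for illustration):

```python
old = 100.0

# Reading 1: the new value is 120% of the old one.
new1 = old * 1.20                      # ~120.0

# Reading 2: the old value is 80% of the new one.
new2 = old / 0.80                      # ~125.0

# Measured as an increase over the old value:
print((new1 - old) / old)  # ~0.20 -> a 20% increase
print((new2 - old) / old)  # ~0.25 -> a 25% increase
```

Same phrase, two numbers, which is exactly the ambiguity being complained about.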
This sort of thing is why mathematicians have so much jargon, to avoid the ambiguities of normal human language.
It gets really bad if they say "120% better" and you're not entirely sure if they meant 120% of the old version or 220% of the old version.
On a different subject but the same topic, my orchestra teacher would say that we sound 100% better when we improved. I think she meant x100 better, but I didn't have the heart to correct her.
I think you probably only got twice as good, realistically.
No, all of you just got better.
Doesn't that really mean that it's the same quality but 20% more expensive? I'm confused now...
What really bugs me is a common theme in the discussions above: "Everything must be useful/applicable to real life in order to be motivated." What does that even mean? Is pure math a waste of time unless someone eventually finds a way to apply it? What about art, music, entertainment, sports, history, not to mention philosophy - all useless too? Let's face it, everyone finds "use" in different things. The only way to objectively determine something's usefulness is whether it is profitable, and if that's the optimal solution then society should immediately prevent people from studying anything else than engineering or economics. Just send me to another planet before you start.
Anything is useless unless it's intrinsically valuable, or can be used for something that is. I believe that happiness is the only thing that's intrinsically valuable. As such, if pure math is just entertaining, that's enough, so long as it entertains you. Otherwise, find a new use for it. If you talk about multiple things being intrinsically valuable, you have to start comparing them. How much truth is a human life worth?
Six pounds. Whether that's weight or money is left as an exercise for the reader.
Consider the context. A student says, "I don't see how learning this will benefit me, or how failing to learn it will harm me, and I'm not enjoying learning it - so why should I go to the effort?" A rational question. And sometimes they don't get a convincing answer. PS: They aren't usually asking for the objective usefulness - they are asking why it is useful for them.
Maybe this is just a high school education thing, but how come a lot of people my age don't seem to realize how interest or credit cards work? Or stuff like basic accounting/finance? Do lenders just use deceptive language that people don't know, or do schools assume that since it's basic stuff you'd already know it, and that stuff like Calculus looks better on a college application?
I can't speak for all school districts, but I was taught financing and other "real world" math concepts like how to balance a checkbook in the sixth fucking grade, almost a decade before I would actually use it. In late high school, much closer to the time I'd actually be doing that, I was learning stupidly-elevated math that I will bet money I will never use again in my life. So in conclusion, students are either not being taught it at all, or are being "taught" it years too early.
If you're making a steady paycheck, or even running a steady business, you'll likely be fine, but if you want to keep track of economic trends at all, you'll need what's known as "business calculus." Or if you want to analyze any kind of marketing buzz, or if you want to make informed decisions in the voters' booth.
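For what it's worth, the core of how credit-card interest compounds fits in a few lines (illustrative numbers, not financial advice):

```python
def balance_after(principal, apr, months):
    """Compound a balance monthly at apr (annual rate, e.g. 0.18 = 18% APR)."""
    monthly = apr / 12
    for _ in range(months):
        principal *= 1 + monthly
    return principal

# $1,000 carried for a year at 18% APR with no payments:
print(round(balance_after(1000, 0.18, 12), 2))  # 1195.62
```

The point the original poster is circling: the balance grows by almost $200 in a year, noticeably more than the naive "18% of $1,000 = $180", because each month's interest itself earns interest.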
Who thought Online Math Classes were a good idea?
Don't underestimate them. I still firmly believe the online course I took in Algebra I was much better than the Algebra I crap my school gave.
I did better in online math courses than in many traditional classes.
Why do some places use periods for separating groups of thousands and commas as a decimal point (say, 31.943,24), while others use it the other way around (31,943.24)? It's confusing!
Same answer as to why some use lb and others use kg.
Because in order for them all to use one system, the people using the other system would have to switch. This would take a nonzero amount of time. As such, at some point, the same place would have to use it both ways, which is much, much worse.
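The two conventions are purely presentational; a tiny converter shows the same number both ways (assuming the program-internal value uses '.' as the decimal point, as Python does):

```python
def format_number(value, thousands_sep, decimal_point):
    """Format a number with the given thousands separator and decimal mark."""
    int_part, frac_part = f"{value:,.2f}".split(".")
    int_part = int_part.replace(",", thousands_sep)
    return int_part + decimal_point + frac_part

n = 31943.24
print(format_number(n, ",", "."))  # Anglophone style:   31,943.24
print(format_number(n, ".", ","))  # continental style:  31.943,24
```

(Production code would use the `locale` module or a library rather than hand-rolling this, but the sketch shows that nothing about the number itself changes.)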
The formula for finding arc length really bugs me. It seems like a very simple concept: how long is the line from A to B (for the non-math people: imagine taking a string and shaping it so that it follows some mathematical formula, like a graph, then ask how long the piece of string is), but the mathematics just grinds to a halt. I do understand how it's derived, but it seems like the Math God is playing a joke on us. He says, "OK, so you want to find the arc length? First, find the derivative. Then square it. Then add one. Now take the square root. NOW INTEGRATE THE WHOLE THING FROM A TO B! AHAHAHAHAHAHA!!!"
Oh, God, I remember that. And the way I was taught, I learned it before Taylor series, so I was left staring at what seemed impossible integrals if I tried it for anything but a few spoon-fed problems... note to those preparing math curricula: never do that. The same goes for solids of revolution.
The easiest way to visualise the arc length formula is to imagine that you are walking along the line at some speed. At each point, in a small increment of time dt you move a horizontal distance (dx/dt)dt and a vertical distance (dy/dt)dt. Use Pythagoras to find the distance: you get (((dx/dt)^2+(dy/dt)^2)^0.5)dt (this is the arc length formula for a parameterised curve). Now, to derive the above troper's arc length formula, set x=t, so dx=dt and dx/dt=1.
He's not complaining because he doesn't understand the formula derivation. He's complaining because the formula is computationally intractable. Not that it matters, anyway; if you are in school, you get specially selected problems with equations that happen to be tractable, and if you are in the real world, you can use a calculator/computer to approximate the answer to a precision which is good enough for all practical purposes.
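A numeric sketch of exactly that: approximating the length of y = x² from 0 to 1 with the midpoint rule, then comparing against the closed-form answer (which this particular curve happens to have):

```python
import math

def arc_length(df, a, b, n=100_000):
    """Approximate the integral of sqrt(1 + f'(x)^2) via the midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h            # midpoint of each subinterval
        total += math.sqrt(1 + df(x) ** 2) * h
    return total

# For y = x^2, dy/dx = 2x; the exact length is sqrt(5)/2 + asinh(2)/4.
approx = arc_length(lambda x: 2 * x, 0, 1)
exact = math.sqrt(5) / 2 + math.asinh(2) / 4
print(approx, exact)  # both ~1.4789
```

A few lines of brute-force summation get you within machine-level agreement, which is why the intractability of the symbolic integral rarely matters in practice.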
You're doing it the easy way. The real way is more like: take every positive number; for each one, consider every possible covering of the curve by balls with diameter smaller than that number; for each covering, add up the diameters; then take every real number (and infinity, for good measure) and look at the smallest one that's at least as big as every sum you got. Once you've done this for every positive number, take every real number again (and infinity). The smallest one of those that's at least as big as every value you got with a given positive number is the length. Using this method, you can discuss the length of a circle (which fails the vertical line test and is not a function), a fractal (which doesn't even have a derivative), and a curve in hyperbolic space (which is an Alien Geometry that can't be easily measured with Cartesian coordinates).
Is any "base" number system better than the others? We typically use "10," but would any one in particular make math easier, assuming someone knew both their whole life? I ask because back in the Cold War, they kept trying to teach kids Base 5.
The point of teaching them base 5 wasn't anything special about base 5, just trying to make sure kids didn't get too used to base 10 and have trouble thinking in other bases when they get older. The only one that can really be said to be objectively better is base 2, because it's the easiest to treat logically, but it's unwieldy for human use.
There are those who swear by base 12 or base 60, because 12 is divisible by 1, 2, 3, 4, 6, and 12, and 60 additionally by 5. That means that (for example) a round number in base 12 divided by 2, 3, 4 or 6 is a whole number, while in base 10, 10/3=3.333, 10/4=2.5, 10/6=1.667, 10/7=1.429 and so on. Base 60 is fairly impractical (we'd need 60 different symbols), and base 2 (which also has its merits, see above) makes numbers too long - even a relatively small number, like 5280 (the number of feet in a mile, if I'm not mistaken), is 13 digits (namely 1010010100000), and it only gets worse.
In computer science, base 16 is another commonly seen choice. As 16 is a multiple of 2, it's trivial to switch numbers back and forth between binary and hexadecimal (that is, base 2 and base 16). Base 8 wouldn't be so great, because the use of bytes means we need the bits in groupings of 8 - base 8 groups them in 3s, while base 16 groups them in 4s, and 2 groups of 4 together make a group of 8.
It's that 16 is a power of 2, not just a multiple. 28 is a multiple of 2, but how often do you see base 28 get used in computer science?
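The grouping is mechanical: each hex digit is exactly four bits, which is why the conversion is trivial. Reusing the 5280 example from above:

```python
n = 5280
print(format(n, "b"))  # 1010010100000 (13 bits)
print(format(n, "x"))  # 14a0

# Pad the bits to a multiple of 4 and read them off in groups of four;
# each group is one hex digit.
bits = format(n, "b").zfill(16)
groups = [bits[i:i + 4] for i in range(0, 16, 4)]
print(groups)  # ['0001', '0100', '1010', '0000']
print("".join(format(int(g, 2), "x") for g in groups))  # 14a0
```

No such digit-by-digit shortcut exists between binary and base 10, which is the whole appeal of hex.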
Better at what?
One way of calculating the "quality" of a base is to determine its "efficiency". This might be done by examining every integer and multiplying the number of unique digits in the base by the number of digits it takes to express that integer. So for the number 365 in base 10, the result is 3*10=30. In binary, that number is 101101101, which requires 9 digits, and 2*9=18. So binary seems more "efficient" for that number than decimal. How about ternary? 111112 is 6 digits, and 6*3=18; just as few elements to deal with. Base 4 requires 5 digits (11231), and 5*4=20. In this contest, binary and ternary tie for the win.
If we do this for a wide swath of numbers, we find that ternary is the most efficient overall. It does suffer from a couple issues — you can't tell if a ternary number is odd or even at a glance (you have to add the digits together, like we do to tell if a number is divisible by 3 in decimal). And that test may not give enough weight to the problem of using lots of places; it treats it as equivalent to the problem of having the capacity to express multiple digits (by memorizing symbols, or stocking cards at a shop selling house numbers, or whatever).
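This informal cost measure (base times digit count, a rough version of what's called radix economy) is easy to automate:

```python
def digit_count(n, base):
    """Number of digits of n when written in the given base."""
    count = 0
    while n:
        n //= base
        count += 1
    return count

def cost(n, base):
    """The post's 'efficiency' product: base * number of digits."""
    return base * digit_count(n, base)

for b in range(2, 11):
    print(b, cost(365, b))
# For 365, bases 2 and 3 tie at 18; averaged over many numbers,
# base 3 comes out ahead, which is the claim being made above.
```

Running it over a wide range of integers is a one-line extension and reproduces the "ternary wins overall" result.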
My personal favorite base is 6. Because 6 is one less than 7, the fraction 1/7 is simply written as 0.(05), instead of decimal 0.(142857) or dozenal 0.(186A35). It has the advantages of being divisible by 2 and 3, but also simplicity. And best of all, it is easily converted to base 36 (just treat each adjacent pair of digits, going from the fractional marker outward, as a base-36 digit). And base 36 is the largest we can "comfortably" express in English, because it is the sum of our 26 letters and 10 digits.
Base two is best for designing microchips, since it means that you can express integers without using something silly like binary-coded decimal. Base ten is best for everything else, because it's standard and you won't have to convert. If you wanted to change the standard, it's not obvious what's ideal, but if you used base a million it would be impossible to remember all the digits, and base two makes everything take too long, so there must be some ideal base somewhere in between.
Base 1 and base 2 are the only really fundamental bases. Every other base is either a convenience (hexadecimal is a compact way of representing base 2, base 12 and base 60 make some computations easier because of their factors) or an arbitrary convention (base 10).
How come I find Algebra and above too abstract for me, yet I can actually do Statistics?
Is it because in Statistics, they teach a formula, and when it comes time to use it on the test, they actually stick to it? The problem I would always have in Algebra is that they would teach a formula, then start throwing around subversions and curveballs (that aren't fully explained), and all the problems on the test are the subversions and curveballs. Say, we're given the formula to calculate the variance, but on the test we're handed every variable except one in the middle and asked to solve for it?
The point of using variations on the problems is to test whether you understand the maths involved, rather than just having learnt a bunch of formulae by rote. If you know why you have to use a particular method in a particular situation then you should be able to see what has changed and what you need to do differently.
That reminds me of an old friend who always did really well in math, but every test with that curve ball he would complain, "Math shouldn't require imagination, dammit!"
Yep. In fact, if you've ever taken a high-level math course (anything above Multivariate Calculus/Differential Equations/Linear Algebra), it's basically nothing but curve balls. That's why so few people survive a math degree; halfway through, it stops being a test of how diligent you are in doing your homework, practicing algorithms and memorizing formulas, and instead becomes a two-year-long IQ test with brutal cut-off scores.
Online Math Classes. Who thought this was a good idea? There's no real learning involved. At Colorado State University, people in the physics and chemistry departments wonder why everyone seems unable to do the math and start blaming the high schools, when the math programs are just as guilty. It's easier to just memorize answers and learn to take a test than to do the math.
It works great if you already know the math and just need to take a course to prove that you do. No learning involved.
Math Textbooks just flat out bug me. I know they don't tell you the answers so people don't just write down the answers without actually doing the work, but don't they know that in just about every single math class out there, writing down the answer without showing your work gets points taken off? How are we supposed to know we're right when they don't go over the homework?
Where I'm from, all math books at every level have the correct answers available at the end of the book or somewhere like that, so you can check your answers. This country has topped the math section of PISA, so it's unlikely that having answers available is a really bad idea.
In the USA, math textbooks tend to have answers in the back for the odd-numbered problems. That way, the teacher can decide whether or not to give problems with textbook answers.
Either giving you the answer helps you solve the problem, in which case they're giving you a method that is only available in school and won't help you learn, or giving you the answer doesn't help you solve the problem, in which case, why bother?
Why is it very rare for math teachers to emphasize Applied Mathematics (both literally, as in practical applications for math, and the trope, which is as exploitable and fun as a meme)? No wonder people see math as boring and bleak.
Because most applications of mathematics are too specialized for high school. Most people will not use complex numbers, matrices, etc. in their daily life unless they decide to follow careers in science/engineering.
High school Math doesn't even have to have Real Life applications; it just needs to be taught in a fun way - like, you know, a video game or something. If you like Math enough, there's no need to worry about whether you will use it in real life, because it's fun. But of course Math is an unavoidable subject even if you don't like it, so how about this: since fifth grade Math determines whether you will be good at Math or not, how about giving students the option to take or skip Math come Middle School? Why force students to take Math when they think there's really no mundane use for it except shopping and balancing your books?
Is it just me, or does it seem like any sufficiently advanced discussion of mathematics starts to get into problems of perception, philosophy, language, etc.? And then you run into things like quantum mechanics, which is about both math and the nature of the universe, and it starts to seem like the whole of human knowledge is inextricably interlinked. I don't know if this question is easy to understand or if I'm saying it right, but why does it seem like everything is linked to everything else?
Modern philosophy essentially is math, the same way physics is, just without the big scary signs of continuous analysis. As for everything else, you'll find that this is an intermediate phase you'll get out of when you stop looking for applications - keep working on math, and you'll find applications.
Why is it that the harmonic series diverges to infinity instead of converging on a finite number? Each successive term is smaller than the previous, so shouldn't it converge to something?
Nope. Trivially, consider the series with terms (1/2 + 1/2^k) - each term is smaller than the last, but since every term is bigger than 1/2, it obviously couldn't possibly converge. More relevantly, consider the series with terms ln(k/(k-1)), which equal ln k - ln(k-1) - clearly each term is smaller than the last, but the partial sums telescope to ln n, which can't converge. The harmonic series diverges for almost exactly the same reason - although it's not as easy to show, its partial sums approach the natural logarithm plus a constant.
The fractions do get smaller, but the thing is they don't get smaller fast enough. Here is how I can prove it to you. Think of some part of the infinite harmonic series, say all the terms between 1/1,000 and 1/10,000. If you add them up, what do you get? Using a computer, the answer is about 2.30314. Now what about 1/10,000 to 1/100,000? The terms are smaller, but there are a lot more of them (90,000 vs. 9,000). It turns out it's about the same: 2.30264! Between 1/100,000 and 1/1,000,000? 2.30259! Basically, the terms in each segment are about one-tenth the value of those in the previous segment, but there are ten times as many of them, so the two effects balance out! And there are an infinite number of such segments (you can keep multiplying the endpoints by ten indefinitely), so the sum is some finite number (2.302-whatever) times infinity, which is obviously infinity.
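Those segment sums are easy to check directly; each decade of terms contributes roughly ln(10) ≈ 2.3026, which is why they all come out nearly equal:

```python
import math

def segment(lo, hi):
    """Sum of 1/k for integer k from lo to hi inclusive."""
    return sum(1 / k for k in range(lo, hi + 1))

print(segment(1_000, 10_000))     # ~2.3031
print(segment(10_000, 100_000))   # ~2.3026
print(math.log(10))               # ~2.3026 - each decade adds about ln(10)
```

Since infinitely many decades each contribute about the same positive amount, the total grows without bound, which is the divergence argument in miniature.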
That's an interesting question. A lot of people (such as [[http://en.wikipedia.org/wiki/Zeno's_paradoxes Zeno]]) have the opposite problem. They don't understand why adding an infinite number of numbers together could result in a finite sum.
Who decided that only the odd-numbered questions should get answers in the back of the text book? Why not the even ones?
If all the answers were in the back, what do you think would happen?
I am an example of Writers Fail Math Forever. To beat a dead horse to death, let me see if my satiric writing skills can clear up the "I'll never use this" issue in a way that's funny and informative. (Warning: Misused maths. Misused maths everywhere…)
Because I use math ALL THE FREAKIN' TIME, y'all. How else would I get that diaper on my son just right without exponentials to help me along? I can't imagine reading to my daughter without trig to help me figure out how to turn the pages. Gosh Dangit To Heck, I wouldn't even be able to kiss my spouse without adding up a few decimals! Why, without matrices, I'd just wander the grocery store aimlessly, looking for all my foodstuffs. Flat tire on the road and no spare? Polly Nominal to the rescue! With advanced pre-calculus, I know exactly how much food to give my cat without overfeeding him! Plus, I'll never get my start as a singer without PEMDAS to make sure I hit all notes smoothly and on key. On top of that, how else could I write without fractions to teach me the difference between "accept" and "except"? Not to mention my hobby of linguistics is so interwoven with irrational numbers, you can't do anything without running into an improper fraction! It's also practically impossible for me to watch TV without the Pythagorean Theorem popping up somewhars. Boiling spaghetti? Fuck the instructions; just use geometry! It's much easier to boil noodles that way. A special thanks goes to hexadecimal; I wouldn't be trilingual (and working on… quadlingual [LOL]) without it. After all, whenever my mood disorder strikes, where would I be without my best friend Binary to pick me up? No one's fought harder than Binary by my side to get me the help I've been desperate to find for 12 years. Can't forget Pi for letting me call him in the middle of the night for a drive, either. He'd be mad. So you see, Tropefriends, we use maths everyday for everything! Therefore, we can't complain. It's easy to see how my life would fall apart if I didn't have all those mathses to count on. Straighten up, buckle down, and take your math like a man!
Like the famous caricature of libertarian pets, or the liberal who cannot recognize that (for better or for worse) the state is an instrument of force, you do not realize how much your ignorance is harming society because you do not have the knowledge to. You are being propped up by your betters from every angle from medicine to economics, from code monkeys to great scientists, and you are among the millions of straws crushing them under your combined, innumerate weight. Although you, personally, abandoning your cancerous willful ignorance might not help, a country lacking those who scoff at understanding the simple facts and methods that everyone in every profession that requires any kind of rigorous thought have in common between them would be a far, far better one.
First, an important part of the post was erased by Nocturna, who apparently didn't bother to read everything to see if anything should be left. The last part was "The argument isn't about all maths, but about higher maths that people like me KNOW we're never going to use". You've all been arguing about the wrong thing anyways. Second, butthurt. Butthurt everywhere… if you are taking this post that seriously regardless of whether or not that last line was included, then maybe you should have a look at the MST3K Mantra, or visit the Please, Please, Please Get A Life Foundation. Third, your inclusion of government politics is irrelevant. Fourth, then am I to understand that you have your own farm, make your own medicine, make your own products, and have no participation in the economy whatsoever? Because that's what your reply makes it sound like you're saying, that you do it all yourself. If you have bought anything that you accuse me and my like-minded math-illiterates of being ignorant of and being propped up by our betters for within your lifetime ever, then you're no better than us just because you understand math. If I need math to understand EVERYTHING about the world around me and how to function in it (I'm so sorry, I didn't realize I needed to be able to calculate primes to get my food into my mouth properly; I thought it was a combination of muscle mechanics and spatial relation perceived by the brain), then I'm going to assume that you're not only a mathematician, but also an engineer (gotta understand technology to use it; not like there's not a bunch of babyboomers who have no idea what they're doing on the internet!), a doctor (gotta understand the body to use it, amirite? 
It's not like we just up and walk and talk when we're babies; babies get sent to school for that stuff!), a psychologist (gotta understand the brain to be able to think with it; not like we're born knowing how to think, right?), a linguist (gotta understand language to speak it!), a carpenter (gotta know how to build houses to live in one), an electrician (understand it to use it), a plumber (understand…use), a woodworker (blahblah), a farmer, an industrial technician, and whatever else.
Furthermore, if anyone around here is the ignorant one, it's you, for assuming that your math talent is superior to my linguistic talent. Just as you could beat me in a MENSA math test, I could kick you twice over in a MENSA linguistics test. Would I like everyone to be linguists? Of course. America would have better grammar at least (unfortunately, it does nothing for spelling), and it would help with foreign language learning (phonemes and whatnot). However, unlike you, who doesn't understand that we don't need math to run the simplest aspects of our lives (I need cosine to take a crap? WHO KNEW?!), I understand that we don't need linguistics in order to know how to speak. Perhaps you should remove your head from your bum long enough to understand the matter at hand instead of going into a blind, narcissistic rage every time someone disagrees with something you like. I do not need, nor will I ever need, to understand trigonometry, just as much as you do not need, nor will ever need, to understand phonotactics. Plus, it's a funny thing that I'm "willfully ignorant" when the original post clearly stated (thank you, Nocturna!) that I specifically went to a special class to try to improve my math skills and actually went backwards. It's a funny thing that, genetically, I'm disinclined to math anyways - everyone in my family is bad at it. You have no idea how many nights during high school I spent crying myself to sleep over it, hoping I would graduate, because they were trying to teach me college-level math THAT I KNEW I WOULD NOT USE. However, I'm quick to notice that, hypocritically, you sure are willing to leave the advanced language stuff up to people like me, yet demand that everyone know math like you do.
We are all being propped up by our betters. It's possible to survive doing everything for yourself, and even simpler to know how, but you'd have to abandon almost everything we've developed to make life good. He doesn't need to understand math to use all that stuff designed with it any more than you need to know how to refine crude oil to use everything requiring it.
Except that we don't live in a society that's, on some level, based on the notion that everyone but a few madfolk have the ability to refine crude oil, which is essentially the equivalent. Everything rests on math, in a way that cannot even be explained until you've broken through that initial barrier of "useless numbers, argh!" because you cannot even be made aware of what it means even for a thing to exist. Everyone with such an attitude toward mathematics, frankly, may not deserve it, but can only be locked away for the safety of themselves and others, in the way someone with every sense corrupted beyond compensation by hallucination would have to be.
And then there's this matter. You seem not to realize that math is also intrinsically tied into READING. You have to be able to know how to READ the numbers and symbols first. You cannot hand a mathematical equation, including 0+0=?, to an illiterate 5-year-old and expect him to know what it means, what to do with it, or even what it is. If everything so supposedly rests on math, then please explain to me, using said math, how our cavemen ancestors learned to hunt and fish before they even had the concepts of math or language in general. Explain to me the retroflex lateral approximant using math, without looking it up. Everything rests on math, right? You should be able to answer my questions if our very basic survival and communication skills rest on math. If we were to remove your reading, communication, and motor skills from the equation, you wouldn't be able to solve a single one of your precious equations. First, you wouldn't know what it said. Second, you couldn't understand what was said/written, nor answer it verbally, for lack of communication skills, and on top of that, you couldn't write the answer nor indicate it, for lack of motor skills. So, just to rebut: everything actually rests on our communication skills, because we couldn't even do math without them - referring to TOTAL annihilation of all communication types (verbal and nonverbal), we wouldn't be able to show/tell/teach each other about ANY of our great discoveries without them, including math. Might I add that communication skills have nothing to do with my specialized area either? So again, I implore you, use your enlarged logical hemisphere and those higher maths of yours to demonstrate how, exactly, they tie in to my need for oxygen. If not, then please, try me. Give me an example of "you cannot even be made aware of what it means even for a thing to exist", because I was pretty sure we had science (math related or not) for that.
I'm a math undergraduate, soon a graduate student, and I feel I have to agree with the point that higher math has a very, very poor return on investment for most people. Not saying it's useless, but for it to be useful, you have to think in math, voluntarily. Just knowing it won't do you any good if it's locked away in some "use only in math class" locker of your mind. Since in some countries math is taught by people that barely understand any of it themselves (the US being the most relevant of such countries), math is taught as a set of rules for manipulating textbook problems into answers the textbook accepts. From what I gather, this approach is so harmful that not teaching math at all actually produces better math learning results (I recall a study about the US education system which found that a group that first started learning math in 6th grade did a lot better on standardized math tests in 7th grade than a group that had studied math from year one). For these reasons, it could be argued that most of the time, anywhere in the world math is taught like this, it's not only useless, it's actively detrimental and harmful to students. When talking with people that have encountered teaching like this, I feel it's pointless to argue for mathematics education; the math they think about is nothing I'd want to defend. To me, however, the usefulness of math comes from the tools I create and use to study university math, those pre-made mathematical structures created by famous past mathematicians. I model my reality much the same way as I model those mathematical structures, and I use much the same tools to understand it. There's also the career aspect of knowing those mathematical structures, which will serve me, but is of much less utility in everyday life for someone not seeking a career endowed with math.
Then there's the matter that we DO live in a society where not EVERYONE understands scientific maths, contrary to what you seem to be implying. If anyone is delusional or hallucinating on that point, it's you. Or else you're living in East Asia, which I doubt given your use of the American political parties. Also, not everyone is cut out for everything. There are people who can dance and people who can't, and there are people who can sing and people who can't. You seem to be implying that to use something, you have to understand it, which is false. Bad dancers do not understand body mechanics (and neither do a lot of good ones), but they can still dance, whether they're any good at it or not, and it's the same with singing. My last point for the moment is going to be that first, you need a brain to do anything, so to be a Cloud Cuckoolander, one could argue that life is what everything rests on.
Whew! Wall O' Text! To summarize: no matter how important, say, trigonometry is to society as a whole (and it does make it less likely buildings will collapse or airplanes get lost or lots of other nice things)—the claim that every single member of society needs to know trigonometry is a lie. Seriously, I learned a lot of things in school that I literally have not needed in the 20 years since I graduated; trigonometry and calculus among them. A counterargument, though: you don't know in advance what you'll need to know. Some of my more obscure knowledge—for instance, some reading of Marxist theory—has come in handy. OTOH, my Dad's a retired engineer; he used calculus every day at work (on a slide rule for a large part of his career)—but never needed to know jack about Marxism.
People keep proving 0.999... = 1. The system we use to define numbers with a string of digits only works when there's a finite number of digits. We can easily modify it to work with infinitely long strings, but the whole point of the system is so that you can write down a number, and you can't write down an infinitely long number. The only time I've ever seen the modified form used is to prove 0.999... = 1. So why did we bother to define it?
You should have seen it more than that, even around age 10 or 11. Any rational number whose denominator is not a product of 2 and 5, to be expressed exactly, must use a similarly repeated finite string of digits - for instance, 1/7 is .142857 142857 142857... (spaces for clarity) - and any irrational number a string of digits that can only be found by a more complex algorithm (or, for almost all irrational numbers conceptually but very few useful ones in practice, cannot be found by any well-defined algorithm).
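To make the point above concrete, here's a sketch of the long-division algorithm that finds the repeating block for any fraction. The key observation is that there are only finitely many possible remainders, so one must eventually recur, and from that point the digits repeat forever:

```python
# Find the decimal expansion of p/q as (non-repeating prefix, repeating block)
# by doing long division and watching for a remainder we've seen before.
def repeating_decimal(p, q):
    p = p % q
    digits = []
    seen = {}                      # remainder -> position where it first appeared
    while p != 0 and p not in seen:
        seen[p] = len(digits)
        p *= 10
        digits.append(str(p // q)) # next digit of the expansion
        p %= q                     # next remainder
    if p == 0:
        return "".join(digits), ""             # terminating, e.g. 1/4 = 0.25
    start = seen[p]                            # repetition begins here
    return "".join(digits[:start]), "".join(digits[start:])

print(repeating_decimal(1, 7))   # ('', '142857')
print(repeating_decimal(1, 6))   # ('1', '6'), i.e. 0.1666...
```

Fractions whose denominators are products of 2s and 5s (like 1/4) come back with an empty repeating block, matching the claim above.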
That only works with rational numbers though, so it's next to useless.
Why have I consistently failed algebra since I was 12 years old, but I was able to grasp computer science quickly with little issue?
Well, because there's no real way to know whether you did your algebra problems right or wrong, unlike computer programs, which immediately prompt you whenever there's something wrong? Add the possibility that algebra is more fucking difficult to teach properly than computer science, and that there's a much higher chance of your math teachers screwing up (intentionally or otherwise) than your computer science teachers, and you've got a real problem grasping math even when you can grasp computer science just fine.
In middle school, we were taught that subtraction was equivalent to adding a negative. That makes sense, all is well. The next year, the math teacher tells us that "THERE IS NO SUCH THING AS SUBTRACTION >:I". My question is, what's wrong with calling it subtraction? It's the same thing either way, and it's what people can easily grasp and understand. Give someone the choice of which is easier, "adding a negative" versus "taking away", and I'll tell you that almost everyone is going to answer the latter.
Oh, man, that must suck. No such thing as subtraction? I call BS. Sure, in integer arithmetic, subtracting a number is the same as adding its negative, but it's still subtraction either way, so why say "no such thing as subtraction"? You know, that's the kind of thing that makes people shy away from Math. It's like trying to beat an SNK Boss.
It's because the higher up in math you go, the more formalization and rigor become a big deal. Part of formalization is to reduce systems to as few basic parts as possible. A system which contains positive and negative numbers, and in which addition is the only defined operation, with "subtraction" being a convenient computational shorthand, is simpler and more elegant than one in which subtraction is a fundamental part of the system.
You don't need subtraction, and at a more abstract level, it's better to do away with it altogether - it doesn't add (ha) anything to the structure and it complicates things (it's much simpler to work with groups, rings, and such than to carry around some clunky extra operation). However, I don't see the point of making statements like "subtraction doesn't exist" at the high school level, especially without elaboration; even more so since subtraction does exist in the sense of being a function, so it's not the most accurate statement to just boldly make.
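A small illustration of "subtraction is just adding the additive inverse", using clock arithmetic (the integers mod 12) since it makes the inverse easy to see: the inverse of b is the number you add to b to get back to 0, namely (12 - b) % 12, and there's no separate subtraction operation anywhere:

```python
# In the group of integers mod 12 under addition, every element b has an
# additive inverse (12 - b) % 12, and "a - b" is shorthand for adding it.
def inverse_mod12(b):
    return (12 - b) % 12

def subtract_mod12(a, b):
    return (a + inverse_mod12(b)) % 12   # addition is the only operation used

print(subtract_mod12(3, 5))   # 10, i.e. 5 hours before 3 o'clock is 10 o'clock
```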
How come on the rational number line, if you're between two natural numbers a and b, you have to use a natural number or a combination of a natural number and a fraction to go beyond b, but on the real number line, if you're stuck between two rational numbers, irrational numbers work just fine on their own?
I have no idea what you're trying to say.
You are confusing definitions. Rational numbers, natural numbers, irrational numbers, real numbers, etc... those are all sets of numbers. Fractions are not a set of numbers. Fractions are a notation for writing rational numbers. Or to put it another way: the set of numbers expressible as fractions is exactly the set of rational numbers. Fractions are just a way to write rational numbers, not a subset of the rational numbers. Therefore your question is invalid: any rational number may be expressed as a fraction, including the natural numbers (e.g., 3 = 6/2).
Is there a number that is truly half way between 0 and 1 regardless of the measuring scale you're using?
I'm not entirely sure what you mean. 1/2 is halfway between 0 and 1 in the usual (arithmetic) sense. Do you mean that it also needs to be the geometric mean, etc.? If so, then no. The only number arithmetically halfway between them is 1/2, and the only number geometrically halfway between them (the geometric mean, √(0×1)) is 0, so it would have to be both 1/2 and 0, which is impossible.
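A quick numerical check of that answer, computing both notions of "half way" for 0 and 1:

```python
import math

# The arithmetic mean of 0 and 1 is 1/2, but the geometric mean sqrt(0 * 1)
# collapses to 0, so no single number is "half way" in both senses at once.
a, b = 0.0, 1.0
arithmetic = (a + b) / 2
geometric = math.sqrt(a * b)
print(arithmetic, geometric)   # 0.5 0.0
```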
Why did we bother learning about PEMDAS when they tell you to forget about it and use the Distributive Method in college? Words can't describe how frustrated I was in Elementary Algebra when I was told the answer to 8(3+3) was 24 + 24, NOT 8(6), and countless other examples. Why waste time teaching PEMDAS when you'll be using the Distributive Method in college anyway? Can you say "waste of time"? Yikes...
"They" are wrong if they tell you to forget about PEMDAS (or BIMDAS, as it's called in my corner of the world). The "Distributive Method" is just applying the distributive property^{note }If you add terms and then multiply by something, you can get the same result by multiplying all terms by that thing and then adding to solve problems. In fact, "they" are worse than wrong; they've actively harmed your ability to do calculations. You ought to have been taught that both are equally valid, and that you can pick and choose so as to (for instance) make mental calculations easier. To take (and fix) your example, both ways are correct: 8(3+3) = 8(6) = 48, or 8(3+3) = 24+24 = 48. Personally I'd say the first of those is easier to do in my head. But now take 8×13. The times tables I was taught stopped at 12, so I can't tell you 8×13 off the top of my head. But I can do 8(10+3) = 80 + 24 = 104.
OP here: Mother of god, I'm amazed. I never even noticed my error. This was NOT what I was taught in college at all! I should show this and get that teacher fired. I think I get it now. Oh god, thank you! This makes the most sense, I'll save it for notes for next semester!
So by now everybody knows at least a bit about logical paradoxes, the mind-bending statements that can lead to contradiction simply by attempting to figure out whether they're true or false, the simplest one probably being "This statement is false". But what about the other side of that brain-breaking coin? Take the statement "This statement is true". Is it true or false? Is it both? Can it even be both? How can we know for sure? Why does it seem so straightforward until you start thinking about it? Why am I wearing a Starfleet uniform? Does this kind of statement even have a name?
It's undecidable; it's true iff it's true, and it's false iff it's false. However, I can see two possible opinions on its "truth value": either it's a simple truism (logically equivalent to "if cats are mammals, then cats are mammals"), or it has no truth value at all (like a paradox). As for whether it has a name: philosophers of logic sometimes call it the "truth-teller" sentence. "Tautology" doesn't quite fit, since a tautology is a statement that comes out true under every assignment of truth values, and this one's truth value can't even be pinned down.
The basis of logic in math is that a statement is always right, unless a situation constructed around it says it is wrong. In reality, you can't make a statement like "If I drop this pen, the sky will turn blue" because reality is there to get in the way. If you're talking purely in the realm of logic, reality is thrown out the window and the focus is on interactions between statements. It's one of the reasons we reduce problems to symbols, like pVq and so on, so pesky reality doesn't cloud our thinking.
The problem with such things is that "true", as a thing in a statement, can't possibly be the same thing as truth in the metalanguage - there's no way to import logical truth. It's like contending that ""The cat" is 7 characters" and "The cat is 7 characters" mean the same thing, but they obviously don't.
Okay, this isn't a complaint, but it's just something I wondered: Statistics. How did they come up with the numbers for distributions?
Most distributions have a closed form (e.g. an integral you can solve either analytically or numerically), which you can then use to find the parameters. You could also find something that fits your data and call it a distribution.
What are things like Z-tests and Poisson distributions used for, in real life? I'm just curious.
If you have a process that can be described by a statistical distribution, then you can determine the parameters for this distribution (e.g. the mean and standard deviation for the normal distribution) and use it for probability calculations.
E.g. you determine that a process (say, the number of phone calls arriving upon a phone system each day) follows a specific distribution: what's the probability you will receive more than some number of calls each day? This is used all the time in planning (our phone system/server/etc... can only take X calls/requests per second: what's the probability of exceeding this and needing an upgrade?), quality control, engineering design etc...
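A sketch of the phone-call example above, assuming the daily call count follows a Poisson distribution. The mean of 10 calls/day and the capacity of 15 calls/day are invented numbers for illustration; in practice the mean would be estimated from historical data:

```python
import math

# Poisson probability mass function: P(X = k) for a process with mean lam.
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 10.0        # assumed average calls per day
capacity = 15     # assumed system capacity

# P(overload) = 1 - P(X <= capacity), summing the pmf up to the capacity.
p_overload = 1 - sum(poisson_pmf(k, lam) for k in range(capacity + 1))
print(f"P(more than {capacity} calls in a day) = {p_overload:.4f}")
```

With these numbers the overload probability comes out around 5%, which is exactly the kind of figure a capacity planner would weigh against the cost of an upgrade.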
As for the z-test (correct me if I'm wrong), it's used to test whether a sample mean differs significantly from a hypothesized population mean, under the assumptions that the data are approximately normally distributed and the population standard deviation is known. (Checking whether a sample itself looks normal is a different job, done with normality tests.)
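A minimal one-sample z-test sketch, with invented numbers for illustration: suppose a population is known to have standard deviation 15, the hypothesized mean is 100, and a sample of 36 observations averages 103:

```python
import math

# z = (sample mean - hypothesized mean) / (sigma / sqrt(n));
# compare |z| against 1.96 for a two-sided test at the 5% level.
def z_statistic(sample_mean, mu0, sigma, n):
    return (sample_mean - mu0) / (sigma / math.sqrt(n))

z = z_statistic(sample_mean=103.0, mu0=100.0, sigma=15.0, n=36)
print(z)   # 1.2 -> less than 1.96, so not significant at the 5% level
```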
Anything to the zero power is 1. Zero to any power is 0. Therefore zero to the zero power is both 0 and 1.
Zero to any power is not 0. Zero to any power above 0 is 0. Zero to the zero power is 1. Zero to the power of anything less than 0 is undefined.
Not quite. Zero to any integer power greater than or equal to 1 is just a bunch of 0's multiplied together, so it is zero. Zero to any negative integer power is one over a bunch of 0's multiplied together, which is 1/0, which is undefined (look at the graph of 1/x as x approaches 0 and you will see that the limit does not exist; it goes in completely different directions depending on which side x approaches 0 from). 0 to the 0th power is an indeterminate form, like 0/0 (not undefined). Basically, indeterminate forms can be anything. For example, 0*0 = 0, so 0/0 = 0, right? Except 0*12 = 0, so 0/0 = 12, too. However, within the context of a problem, it is often possible to determine what it makes sense for an indeterminate form to be by taking limits. That's how the derivative works.
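You can watch the "resolve it with a limit" idea numerically: along the path x^x, the value approaches 1 as x shrinks toward 0 from the right, which is one reason the convention 0^0 = 1 is often adopted (even though other paths, like 0^x, head to 0 instead):

```python
# x**x -> 1 as x -> 0+, illustrating one way to resolve the indeterminate
# form 0^0 by taking a limit; a different path (0**x for x > 0) stays at 0.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, x**x)
```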
How does one determine how many degrees of freedom you are offered when you are trying to run a certain test?