If real analysis focuses on real numbers and number theory focuses on integers, then what about rational numbers?
The rational numbers are the field of fractions of the integers, so those fall under number theory, too. For that matter, algebraic extensions of the rational numbers could be considered the primary object of study of algebraic number theory.
Hi, okay, I am WAY too lazy to go through 16 pages at 2:17 AM. I'm 16 and in precalc so I'm a newbie to harder math, and IDK if this has been asked already, but why can 0 itself not be divided by 0? Thanks!
I think the main problem is not what's being divided but what's being divided by. It's division by zero that's the problem, because it doesn't lead to any well-defined single answer.
That said, 0/0 does have a bit more significance in calculus, where a limit whose expression evaluates to 0/0 at the limit point can often still be worked out after further simplification.
By definition, "x/y" means "the unique number z such that yz = x". (For example, z = 3 is the unique number such that 2z = 6, so 6/2 = 3.) So, by writing "x/y", you are implicitly claiming that there is a solution to yz = x, and that there is only one solution.
What happens if y = 0? Then "x/0" means "the unique number z such that 0z = x". But 0z = 0 for any z: so if x ≠ 0, then there is no such z, which means that "x/0" has no meaning; and if x = 0, then the solution is not unique, which also means that "0/0" has no meaning.
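The three cases above can be checked concretely (a sketch; the brute-force search over a small integer range is just for illustration):

```python
# "x/y" means "the unique z with y*z == x". Search a small range of
# candidate z values and see how many solutions each case has.
candidates = range(-10, 11)

print([z for z in candidates if 2 * z == 6])   # [3]: unique, so 6/2 = 3
print([z for z in candidates if 0 * z == 6])   # []: no solution, so 6/0 is undefined
print([z for z in candidates if 0 * z == 0])   # every z: not unique, so 0/0 is undefined
```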
You sometimes hear people saying something like "0/0 is an indeterminate form". This isn't really talking about the expression "0/0" itself; rather, this is a statement about limits. (You can think about "the limit of f(x) as x approaches b" intuitively as the value f(x) gets closer and closer to — if such a value exists — as x gets closer and closer to b.)
Say f and g are functions such that the limits of f(x) and g(x) are both 0 as x approaches 0. Then we might ask what the limit of f(x)/g(x) is as x approaches 0. It turns out that this depends on the particulars of the functions f and g; for instance, if f(x) = cx and g(x) = x, then even though f(0) = g(0) = 0, we have f(x)/g(x) = cx/x = c for x ≠ 0, and so lim_[x to 0] f(x)/g(x) = c. But this is true for any c, so if we only know that f(0) = g(0) = 0, we can't conclude anything about lim_[x to 0] f(x)/g(x). (This is something you'll see in more detail when you take calculus.)
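The cx/x example can also be checked numerically (a small sketch; the sample values of c and x are arbitrary):

```python
# f(x) = c*x and g(x) = x both vanish at 0, but f(x)/g(x) = c for x != 0,
# so the value of this "0/0" limit depends entirely on c.
def ratio(c, x):
    return (c * x) / x

for c in (2.0, -7.0, 0.5):
    print(c, [ratio(c, x) for x in (0.1, 0.001, 1e-9)])
```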
Just going to ask this, not expecting anyone to have a good answer.
Consider a continuous function f(x | w), where w is an exogenously determined parameter.

Suppose f is continuous in both arguments; since I'm only interested in the closed, bounded interval [0,1], I get uniform continuity on the domain by Heine–Cantor.

Under what circumstances does the argmax, x*(w), vary continuously in the parameter w?
Um, which variable is on the interval [0,1]?
x and w are both defined on the domain [0,1], and f maps [0,1] × [0,1] → R.
I think I can apply the strong form of Topkis's theorem to this problem, but I'm not completely certain that I have supermodularity here.
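Not an answer to the Topkis/supermodularity question, but here's a toy grid sketch (with a made-up concave f) of what continuity of the argmax looks like numerically:

```python
# Hypothetical example: f(x | w) = -(x - w)**2 on [0,1] x [0,1].
# Its maximizer is x*(w) = w, which varies continuously in w.
def grid_argmax(f, w, steps=1000):
    xs = [i / steps for i in range(steps + 1)]
    return max(xs, key=lambda x: f(x, w))

f = lambda x, w: -(x - w) ** 2
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w, grid_argmax(f, w))   # x*(w) tracks w exactly on this grid
```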
Grace Love: Here's another way you can see that anything divided by zero, including zero itself, must be undefined.
Consider the graph y=1/x. This is a reciprocal graph. We can work out several points on it and draw it.
Table of values:
x: | -3 | -2 | -1 | 0 | 1 | 2 | 3 |
y: | -1/3 | -1/2 | -1 | ??? | 1 | 1/2 | 1/3 |
Graph: (image of the two branches of y = 1/x omitted)
You can see that when x is positive but closer and closer to 0, y tends towards positive infinity. When x is negative and closer and closer to 0, y tends towards negative infinity. This means that when x = 0, y would have to be both positive and negative infinity at the same time, which doesn't work.
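The table's behaviour near 0 is easy to reproduce numerically (a small sketch):

```python
# y = 1/x blows up in opposite directions depending on the sign of x.
for x in (0.1, 0.01, 0.001):
    print(x, 1 / x)        # 10.0, 100.0, 1000.0: heading toward +infinity
for x in (-0.1, -0.01, -0.001):
    print(x, 1 / x)        # -10.0, -100.0, -1000.0: heading toward -infinity

# At x = 0 itself there is no value to assign; Python agrees:
try:
    print(1 / 0)
except ZeroDivisionError:
    print("1/0 is undefined")
```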
Hi, I saw this puzzle today; I wonder if you guys could help me solve it...
You find a box. You know that if you open it, you have a 10% chance of finding gold. It also has a 35% chance of giving you nothing and a 55% chance of giving you two more boxes.
These boxes each have a 10% chance of having gold inside them, a 35% chance of nothing, and a 55% chance of giving you two more boxes with the same odds. So on and so forth, ad infinitum.
So with these factors what are the odds of you finding gold in at least one of the boxes?
Hint: Call the probability of finding gold in a box p. "Getting a box" means another chance of finding gold with probability p, and you get two boxes. (The boxes all have the same probability distribution; that's the key fact.) Putting this together gives a quadratic equation in p.
Let's see if I can figure this one out.
P = 0.1+0.55(1-((1-P)^2))
P = 0.1+0.55-0.55((1-P)^2)
P = 0.1+0.55-0.55(1-2P+(P^2))
P = 0.1+0.55-0.55+1.1P-0.55(P^2)
P = 0.1+1.1P-0.55(P^2)
-0.55(P^2)+0.1P+0.1 = 0
P = (-0.1+((0.01+0.22)^0.5))/-1.1 ≈ -0.345 (negative, so discard this root)
P = (-0.1-((0.01+0.22)^0.5))/-1.1 ≈ 0.527
So you have about a 52.7% chance of getting some gold.
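The fixed-point equation from the hint can also be checked numerically (a sketch; 100 iterations is arbitrary overkill):

```python
import math

# P = P(find gold somewhere) satisfies P = 0.1 + 0.55*(1 - (1 - P)**2):
# either gold now, or at least one of the two child boxes pays off.
def g(p):
    return 0.1 + 0.55 * (1 - (1 - p) ** 2)

p = 0.0
for _ in range(100):
    p = g(p)              # fixed-point iteration converges to the answer

root = (-0.1 - math.sqrt(0.01 + 0.22)) / -1.1   # quadratic-formula root
print(p, root)            # both ≈ 0.5269
```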
thanks guys! ♥
How would one go about proving that the polar function r(theta) = a*cos(theta) + b*sin(theta) forms a circle, and finding its centre and radius? Graphing devices show it to be so, but I cannot get to it analytically for the life of me.
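One standard approach (a sketch): multiply both sides by r to get r^2 = a*r*cos(theta) + b*r*sin(theta), i.e. x^2 + y^2 = ax + by, then complete the square to get (x - a/2)^2 + (y - b/2)^2 = (a^2 + b^2)/4. That's a circle centred at (a/2, b/2) with radius sqrt(a^2 + b^2)/2. A quick numerical check (the values of a, b, and theta are arbitrary):

```python
import math

def sq_dist_to_centre(a, b, t):
    """Squared distance from the curve point at angle t to (a/2, b/2)."""
    r = a * math.cos(t) + b * math.sin(t)
    x, y = r * math.cos(t), r * math.sin(t)
    return (x - a / 2) ** 2 + (y - b / 2) ** 2

a, b = 3.0, 4.0
for t in (0.0, 0.7, 1.9, 3.0):
    # both columns should agree (≈ 6.25 for a=3, b=4)
    print(sq_dist_to_centre(a, b, t), (a * a + b * b) / 4)
```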
Also, something that may be a little more philosophical: if straight lines and perfect circles etc. don't exist, why do we work with them? I mean, there will never be something perfect like that, so how can we know the math works?
Also, since we are working on the unit circle, I must ask: why is the proper notation for the squares of trigonometric functions sin^2(theta) rather than sin(theta)^2? Sorry if that seems stupid.
So, it's not a problem if we can't physically model perfect lines and circles! Sure, geometry is significantly inspired by things we observe in nature — that's a good source of motivation for definitions that lead to interesting things, and it often leads to mathematics that's useful for modeling the natural world (i.e., doing science).
But mathematics itself is about defining certain abstract structures (that, in the case of geometry, have some geometric interpretation — otherwise it wouldn't be called "geometry") and figuring out how they relate to each other. The physical objects that serve as inspiration get left behind, or are at least reduced to serving as sources of vague intuition, and the mathematical system — a purely conceptual object, an idea given precise form through mathematical syntax — is what's being studied.
A much more reasonable thing would be for sin^2(x) to mean sin(sin(x)), which would match the notation f^2(x) = f(f(x)) commonly used for iteratively applying a function. However, at least for now, we seem to be stuck with this bizarre convention for trigonometric functions, at least until enough textbook authors stop using it.
Also, sin(x)^2 is ambiguous as to whether it means (sin(x))^2 or sin(x^2).
Personally, I think it would be better to drop the ^-1 notation for inverses and write composition explicitly, e.g. (f∘f)(x) for f(f(x)), so that we'd lose the superscript notation altogether.

That means not writing sin^-1(x). Just write arcsin(x). Much easier.
I suppose, but why would you write sin(x)^2 to mean sin(x^2)? It'd be silly to omit the parentheses around the argument of the function while also putting unnecessary parentheses around the "x", of all things.
And what about inverses of arbitrary functions, or iterative composition of functions? The notation f^n(x) for f(f(...(f(x))...)) is very convenient, and extending it in the opposite direction for negative integers seems natural.
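The f^n convention is easy to play with in code (a sketch; `iterate` is a made-up helper name):

```python
import math

def iterate(f, n):
    """Return the n-fold composition f∘f∘...∘f (n >= 0 applications)."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

double = lambda x: 2 * x
print(iterate(double, 3)(1))       # 8, since f^3(1) = 2*2*2*1
# Under this convention sin^2 would mean sin∘sin, not (sin x)^2:
print(iterate(math.sin, 2)(1.0))   # sin(sin(1.0))
```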
I always use arcsin(x). sin^-1(x) = 1/sin(x) and I refuse to accept any other ridiculous notation. :P
I also use f^IV(x) = (d/dx)^4 f(x), though, so...
Speaking of which, why does f_x (f with a subscript x) mean the partial derivative of f with respect to x?
A subscript is a useful way to denote a variety of things, but I feel that it's wasted if it's meant to be the partial derivative. I prefer to have both the subscript and the prime (') sign, or just use the Leibniz notation.
Can anyone help me answer this question, please?
"Calculators were purchased at $55 per dozen and sold at $15 for three calculators. What is the profit on six dozen calculators?"
Profit = 6 × ((4 × $15) − $55)   (each dozen sells as four groups of three, so revenue is 4 × $15 = $60 per dozen)
= 6 × ($60 − $55)
= 6 × $5
= $30.
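The arithmetic, spelled out (a trivial sketch):

```python
cost_per_dozen = 55           # dollars: purchased at $55 per dozen
revenue_per_dozen = 4 * 15    # a dozen sells as four groups of three at $15
dozens = 6
profit = dozens * (revenue_per_dozen - cost_per_dozen)
print(profit)   # 30
```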
Asymptotically, we have Stirling's approximation: n! is approximately (n/e)^n sqrt(2πn) for large values of n.
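A quick comparison of Stirling's approximation against exact factorials (a sketch; the relative error shrinks roughly like 1/(12n)):

```python
import math

def stirling(n):
    """Stirling's approximation to n!: (n/e)^n * sqrt(2*pi*n)."""
    return (n / math.e) ** n * math.sqrt(2 * math.pi * n)

for n in (5, 10, 20):
    exact = math.factorial(n)
    approx = stirling(n)
    print(n, exact, round(approx), approx / exact)   # ratio heads toward 1
```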