Originally posted by Protector
Does anyone else ever wonder why Integration works?
Or why division by zero is so complex, you have to use any method available to go around it?
Or why every definition of infinity is different?
Why is it that we build mathematical rules, laws, theorems, axioms, etc, but end up breaking our own rules in our own system?
My theory is that the mathematical black boxes are catching up with the modern world. We are forming ever-more complex problems with highly complex solutions (even if simplified). We must do this because our black boxes, our perfect theoretical worlds, are falling apart in a world (the real world) that needs faster and more efficient algorithms to solve practical problems.
We need to rebuild the foundation that our mathematics sits on. We need to solve the big problems, like some of those listed above. Many mathematicians and physicists of the past have not agreed on what rules and/or laws actually govern our reality.
For example, Integration was built on the concept of Infinitesimals, but many famous mathematicians did NOT believe such units could exist;
i.e. a number X != 0 (not equal) is infinitesimal iff (if and only if) every finite sum |X| + |X| + ... + |X| (the absolute value of X added to itself any finite number of times) is less than 1.
Without these units proven to exist, Integration exists only in Candyland. Of course, we have proven Integration to work in the real world, but we would have a more complete picture if we could understand the foundations of these processes and procedures.
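The definition above can be turned around: the real numbers are Archimedean, so no nonzero real ever satisfies it. A minimal Python sketch (the function name is mine, not from the thread) finds, for any x != 0, a finite number of terms whose sum reaches 1:

```python
# Sketch: the reals are Archimedean, so no nonzero real number is
# infinitesimal in the sense defined above. For any x != 0 there is
# always a finite number of terms n with n * |x| >= 1.

def terms_needed(x):
    """Smallest n such that |x| summed n times reaches 1."""
    if x == 0:
        raise ValueError("x must be nonzero")
    n = 1
    total = abs(x)
    while total < 1:
        n += 1
        total += abs(x)
    return n

print(terms_needed(0.25))    # 4 terms: 0.25 * 4 = 1.0
print(terms_needed(0.0003))  # 3334 terms
```

Infinitesimals, if they existed among the reals, would be exactly the numbers for which this loop never terminates.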
Sometimes I feel like I'm the only one who believes our world is leaning on crutches.
Originally posted by rand
I upset more than one grade-school teacher by pointing out that 4 apples minus 4 apples not only leaves 0 apples, it leaves 0 oranges as well. They seemed to have some objection to that, but could never quite explain it.
The first problem, as I see it, is in treating zero as a number instead of recognizing it as the absence of "numberliness".
Originally posted by rand
The second problem is treating math as if it were a true representation of nature. Math is a way to manipulate symbols to produce other symbols according to a rigid set of rules. Sometimes the manipulations show us a way to solve real-world problems, which is really great, but not absolutely required.
Originally posted by rand
But nature doesn't really care about the rules we devise: in math, 6/3=2; in nature, six apples divided into three piles is still six apples -- the other four don't simply disappear.
Originally posted by rand
Maybe it's like chess: why can't I divide by zero? For the same reason you can't move a black Bishop onto a red square. Enjoy the game for what it is and don't try to read too much into it.
Originally posted by Cicada
"Who would recognize the answers if given?"
- Manly P. Hall
Originally posted by Protector
Does anyone else ever wonder why Integration works?
Or why division by zero is so complex, you have to use any method available to go around it?
Or why every definition of infinity is different?
Why is it that we build mathematical rules, laws, theorems, axioms, etc, but end up breaking our own rules in our own system?
We need to rebuild the foundation that our mathematics sits on. We need to solve the big problems, like some of those listed above. Many mathematicians and physicists of the past have not agreed on what rules and/or laws actually govern our reality.
For example, Integration was built on the concept of Infinitesimals, but many famous mathematicians did NOT believe such units could exist;
Sometimes I feel like I'm the only one who believes our world is leaning on crutches.
Originally posted by Valhall
I offer that the material universe is not made up of frames, but a continuum of forces, time, dimensions....
Originally posted by Valhall
It's not complex, it's impossible unless you've got some place new to stuff infinity.
Originally posted by Valhall
Yeah and at one point famous physicists and philosophers thought there was nothing smaller than an atom. Some of them also thought the world was flat, or that the sun revolved around us. Do we need to undo physical science, astronomy, cosmology and chemistry as well?
Merriam-Webster Definition
Main Entry: at·om
Pronunciation: \ˈa-təm\
Function: noun
Etymology: Middle English, from Latin atomus, from Greek atomos, from atomos indivisible, from a- + temnein to cut
1 : one of the minute indivisible particles of which according to ancient materialism the universe is composed
Originally posted by Valhall
Unless you can make me understand what you are trying to say here, between the two of us, I would agree you are the only one who believes this - at least as far as the examples you have given.
Originally posted by Amorymeltzer
Infinity has been properly defined through the use of sets. An infinite set is a set that can be put in a one-to-one correspondence (a bijection: f(s1)=f(s2) -> s1=s2, and every element is hit) with a proper subset of itself. No finite set can do this.
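The classic instance of that definition is pairing the naturals with the evens. A small sketch (my own example, not from the post) checks the two properties on a finite sample:

```python
# Sketch of the set-based definition of "infinite": the natural
# numbers map one-to-one onto a *proper* subset of themselves
# (here, the even numbers), something no finite set can do.

def f(n):
    """A bijection from the naturals onto the even naturals."""
    return 2 * n

sample = range(10)
image = [f(n) for n in sample]
assert len(set(image)) == len(list(sample))  # injective on the sample
assert all(m % 2 == 0 for m in image)        # lands inside a proper subset
print(image)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Every natural n lands on a distinct even number 2n, yet the evens are only a proper part of the naturals: exactly the correspondence no finite set can exhibit.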
Originally posted by Amorymeltzer
Calculus has long been rid of the sweeping, faith-based mysticisms used by Leibniz and Newton back when it was invented. You're wrong here too. Leibniz embraced the dx and used it like a mathematical concept... Nothing ever is zero, it just merely approaches it. The same obviously goes for integrals. That gets rid of all the silly occurrences in calculus, the indeterminate 0/0 and all of its partners, and defines it in a perfectly solid manner, without any of the annoyances you're talking about.
Source: www.math.rutgers.edu...
After Leibniz's form of integration had been presented, speculation arose, especially in response to infinitesimal measurement he implemented. In response to this speculation, he stated "... to avoid these subtle matters of dispute and because I wanted my ideas to be generally understood, I contented myself with explaining the infinite as the incomparable. In other words, I assumed there were quantities which were incomparably larger or smaller than ours." (Leibniz from Meschkowski 58) He went on to say that there are different degrees of infinitesimal units, but each one has a value which can vary so that it is possible to choose a value lower than the one chosen. He understood that the infinite sum of units with a thickness infinitely small is a modified form of the summation problem commonly dealt with by his predecessors. Further discussion on Leibniz's behalf went as follows: "For if an antagonist denies the correctness of our theorems, our calculations show that the error is smaller than any given quantity, since it is in our power to decrease the incomparably small." (58) This explanation does appear problematic though as he is stating the summation of the error cannot be significant since the "incomparably small" units are not significant enough to have too much error. By this argument there would be no measure when one took the integral since the units are too small to be accounted for. This argument shows that Leibniz had particularly amazing and revolutionary methods for solving these problems, but the rationale he based this on was not rock solid since he was not a stickler for details of a proof.
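The limit formulation that replaced Leibniz's infinitesimals can be sketched numerically (my own toy example): the difference quotient never actually divides by zero; h only shrinks toward it, and the quotients settle on the derivative.

```python
# Sketch: the modern limit definition sidesteps 0/0. The difference
# quotient (f(x+h) - f(x)) / h is evaluated for ever-smaller h; h is
# never actually zero, yet the quotients settle on the derivative.

def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2          # d/dx of x^2 is 2x, so expect 6.0 at x = 3
for h in (1e-1, 1e-3, 1e-6):
    print(h, difference_quotient(f, 3.0, h))
# The quotients approach 6.0 as h shrinks; h = 0 itself is never used.
```

This is exactly the "nothing ever is zero, it just merely approaches it" point: the indeterminate 0/0 never arises because we only ever evaluate at nonzero h.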
Originally posted by Amorymeltzer
(P.S. - Aside from having studied this for four years, I just turned a paper in on this today. The paper was on zero, throughout history. I went deep into the development of calculus, astrophysics, and set notation)
Originally posted by Protector
So I too have a solid background, believe it or not.
Originally posted by Protector
Since I work extensively with computers, an error of 0.00000000002353 can still be significant in practical use. I'm curious as to just how small that 'dx' error is, and whether any other side effects occur from disregarding it.
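The point that tiny errors are real in computing is easy to demonstrate. A minimal sketch of floating-point accumulation (a standard example, not specific to this thread):

```python
# Sketch: finite machine precision means "infinitesimal" errors are
# real and measurable. Summing 0.1 ten times does not give exactly 1.0,
# because 0.1 has no exact binary floating-point representation.

total = sum(0.1 for _ in range(10))
print(total == 1.0)       # False
print(abs(total - 1.0))   # tiny but nonzero, on the order of 1e-16
```

So on a machine, "dx-sized" errors never vanish entirely; whether they matter depends on the application.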
Originally posted by Valhall
You've just changed the argument: from Integration being a crutch to how finely we are currently able to split the distance. The problem is not with dx, but with our limitation in decreasing dx to the point where we achieve true continuity in a function.
The problem is not with the math, but with the implementation.
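That distinction between the math and the implementation can be sketched numerically (my own toy example): the error of a Riemann sum is governed by the finite dx we choose, and shrinks steadily as dx does.

```python
# Sketch: the error in a numerical integral is a property of the
# implementation (the finite dx we choose), not of the math.
# Shrinking dx steadily shrinks the error of integrating x^2
# over [0, 1] (exact value: 1/3).

def riemann_sum(f, a, b, n):
    """Left Riemann sum with n equal subintervals of width dx = (b-a)/n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

exact = 1.0 / 3.0
for n in (10, 100, 1000):
    approx = riemann_sum(lambda x: x * x, 0.0, 1.0, n)
    print(n, abs(approx - exact))
# The error falls roughly in proportion to dx.
```

The math (the limit as dx goes to 0) is sound; what the computer gives us is always some point along the way.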
Originally posted by AkashicWanderer
Originally posted by Protector
I don't even know how to respond to this crap. Any ideas?
I'd just copy and paste the following equation:
Let x = 0.9~
10x = 9.9~
10x - x = 9.9~ - 0.9~
9x = 9
x = 1
Therefore 0.9~ = 1
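The same claim can be checked as a geometric series (my own sketch): 0.9~ is 9/10 + 9/100 + 9/1000 + ..., and the partial sums climb toward 1.

```python
# Sketch: 0.9~ is the geometric series 9/10 + 9/100 + 9/1000 + ...
# Its partial sums approach 1, which is what "0.9~ = 1" asserts.

def partial_sum(k):
    """Sum of the first k terms 9/10^i."""
    return sum(9 / 10 ** i for i in range(1, k + 1))

for k in (1, 5, 15):
    print(k, partial_sum(k))
# Each extra digit of 9s closes 9/10 of the remaining gap to 1.
```

"0.9~ = 1" is just the statement that this gap can be made smaller than any positive number, i.e. the limit of the partial sums is exactly 1.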
Originally posted by Frosty
In my understanding, a number divided by zero is undefined. What I'd really like to know is why a number multiplied by itself can equal a negative:
i^2=-1
I never bothered asking myself why this was, because I never really cared. Anyone have an explanation?
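There are really two separate facts tangled together in that question, and Python's built-in complex type (where i is written 1j) can show both; this is my illustration, not from the thread:

```python
# Sketch: two separate facts. In the reals, a negative times a
# negative is positive. The imaginary unit i (written 1j in Python)
# is neither positive nor negative, and squaring it gives -1 by
# definition: i is introduced precisely as a root of x^2 = -1.

print((-2) * (-3))   # 6: negative times negative is positive
print(1j * 1j)       # (-1+0j): i squared is -1
print((1j) ** 2)     # (-1+0j)
```

So no negative real squares to a negative; i^2 = -1 works because i is a new kind of number, defined by that very property, not a negative real.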