Mathematics on Crutches


posted on Nov, 25 2005 @ 03:43 PM

Originally posted by Protector
Or why division by zero is so complex, you have to use any method available to go around it?


Math is not my forte, but I have been under the impression for many years now that dividing by zero is not complex. It is impossible. Did I miss something?




posted on Nov, 25 2005 @ 07:52 PM

Originally posted by Protector
Does anyone else ever wonder why Integration works?

Or why division by zero is so complex, you have to use any method available to go around it?

Or why every definition of infinity is different?

Why is it that we build mathematical rules, laws, theorems, axioms, etc, but end up breaking our own rules in our own system?


My theory is that the mathematical black boxes are catching up with the modern world. We are forming ever-more complex problems with highly complex solutions (even if simplified). We must do this because our black boxes, our perfect theoretical worlds, are falling apart in a world (the real world) that needs faster and more efficient algorithms to solve practical problems.

We need to rebuild the foundation that our mathematics sits on. We need to solve the big problems, like some of those listed above. Many mathematicians and physicists of the past have not agreed what rules and/or laws actually govern our reality.

For example, Integration was built on the concept of Infinitesimals, but many famous mathematicians did NOT believe such units could exist;

i.e. a number X != 0 (not equal) is infinitesimal iff (if and only if) every sum |X| + |X| + ... + |X| with a finite number of terms (the absolute value of X added to itself finitely many times) is less than 1.

Without these units proven to exist, Integration exists only in Candyland. Of course, we have proven Integration to work in the real world, but we would have a more complete picture if we could understand the foundations of these processes and procedures.
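That definition can be poked at directly. Here is a small Python sketch of the Archimedean property (the function name is mine, purely for illustration): for any fixed real X != 0, some finite number of copies of |X| sums past 1, which is exactly why no ordinary real number qualifies as an infinitesimal.

```python
# Archimedean property: for any real x != 0, finitely many copies of |x|
# sum past 1 -- so no nonzero real number is an infinitesimal.
def copies_needed_to_exceed_one(x):
    """Smallest n such that n * |x| >= 1 (exists for every real x != 0)."""
    x = abs(x)
    n, total = 0, 0.0
    while total < 1:
        total += x
        n += 1
    return n

for x in (0.5, 0.125, 0.001):
    print(x, "->", copies_needed_to_exceed_one(x))
```

A true infinitesimal would make this loop run forever; since the reals contain no such element, integration was eventually rebuilt on limits instead.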

Sometimes I feel like I'm the only one who believes our world is leaning on crutches.


Sounds like 2nd semester Calc is bogging you down... I've gone through the same insanity!
...

It is true, math is a language of nature and how we interpret it is all relative. We have, over time, developed a system and it has worked for us with minor problems.

The current system of math has done us more good than harm, and therefore a new system for describing nature will not be produced... I think math will one day evolve into a flawless language of our universe, but we are NOWHERE near that day. So until then... 1/0 will be infinite... well, it's "close enough"



posted on Nov, 26 2005 @ 10:04 PM
Actually, I took Calc 2 years ago, and it did suck, but higher level math classes make Calc 2 look easy.

Also, don't say that 1/0 is infinity. Just say it is undefined. Don't forget about the conflicting limits and the fact that it breaks the fundamental rule holding multiplication and division together. Dividing by zero is a big "no no" for so many reasons.
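The "conflicting limits" point is easy to see numerically. A quick Python sketch (the sample values are mine, for illustration): approaching 0 from the right, 1/x grows without bound toward +infinity; from the left, toward -infinity, so no single value -- infinite or otherwise -- can consistently be assigned to 1/0.

```python
# 1/x near zero: the two one-sided limits disagree, which is one reason
# 1/0 is left undefined rather than called "infinity".
for k in range(1, 6):
    h = 10.0 ** -k
    print(f"1/{h:g} = {1 / h:g}    1/{-h:g} = {1 / -h:g}")

# It also breaks the rule tying division to multiplication: if 1/0 = q,
# then q * 0 should equal 1, but q * 0 = 0 for every number q.
```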



posted on Nov, 26 2005 @ 10:35 PM
I upset more than one grade-school teacher by pointing out that 4 apples minus 4 apples not only leaves 0 apples, it leaves 0 oranges as well. They seemed to have some objection to that, but could never quite explain it.

The first problem, as I see it, is in treating zero as a number instead of recognizing it as the absence of "numberliness".

The second problem is treating math as if it were a true representation of nature. Math is a way to manipulate symbols to produce other symbols according to a rigid set of rules. Sometimes the manipulations show us a way to solve real-world problems, which is really great, but not absolutely required. But nature doesn't really care about the rules we devise: in math, 6/3=2; in nature, six apples divided into three piles results in six apples -- four of the apples don't simply disappear.

And it seems to me that the rest of the universe gets along just fine without either infinities or zero.

Don't get me wrong, I like math, have always done well in classes, marvel at the beauty of a specially elegant equation, etc.

Maybe it's like chess: why can't I divide by zero? For the same reason you can't move a black Bishop onto a red square. Enjoy the game for what it is and don't try to read too much into it.



posted on Nov, 28 2005 @ 09:57 AM

Originally posted by rand
I upset more than one grade-school teacher by pointing out that 4 apples minus 4 apples not only leaves 0 apples, it leaves 0 oranges as well. They seemed to have some objection to that, but could never quite explain it.

The first problem, as I see it, is in treating zero as a number instead of recognizing it as the absence of "numberliness".


Good word. Ha! But zero is both. It is a number AND it is the lack of "stuff." You can specifically define zero to have a purpose inside of an equation. For instance, many mathematicians use a Null Set to represent a lack of elements, but reserve zero as a number (center of a number line). Specifying is not too difficult. This is where math theory meets math practice. You can make your equations answer, "What am I trying to find?"


Originally posted by rand
The second problem is treating math as if it were a true representation of nature. Math is a way to manipulate symbols to produce other symbols according to a rigid set of rules. Sometimes the manipulations show us a way to solve real-world problems, which is really great, but not absolutely required.


Nature is a complex system. Math is usually designed to mimic a single aspect of nature. You can get into higher levels of math, like Chaos theory, to start representing natural systems, but that is only a single area within mathematics. All math must represent the manipulation of information in a uniform manner. The higher you go in mathematics, the more rigid it gets, because a proof leaves no room for error.


Originally posted by rand
But nature doesn't really care about the rules we devise: in math, 6/3=2; in nature, six apples divided into three piles results in six apples -- four of the apples don't simply disappear.


No, we divide 6 apples into 3 groups of 2 OR 2 groups of 3. No information is lost.


Originally posted by rand
Maybe it's like chess: why can't I divide by zero? For the same reason you can't move a black Bishop onto a red square. Enjoy the game for what it is and don't try to read too much into it.


What fun is there in the world if you can't bend the rules? I'm not saying that we break the laws of physics, but I've seen plenty of good examples of math and science being applied in ways that seem impossible.

An example: I've seen all 3 states of water exist at once. Water boils, is a liquid, and has ice cubes floating in it at the same time. This is achieved by holding the temperature and pressure at exactly the right values (the triple point).

So we played with science to find out that ice doesn't always melt in boiling water... just most of the time.

The exceptions to the rule are what we are all about as humans. That's what makes it fun.



posted on Nov, 28 2005 @ 07:43 PM
The Practice Of Theory

“Pure” mathematics is a collection of philosophical constructs, and as such its most important feature is consistency with itself. Where it fails to achieve this, it can be modified to bring it into self-harmony.

Resolving such anomalies can be accomplished entirely within the domain of mathematics, and this feature reveals mathematics as a product of our minds constructed entirely of theory.

The fact that mathematics was originally derived from practical needs does not change the fact that it is something we invented for our use, rather than something “found in nature” (or as I like to call it, “perceptual reality”). It is a system for interpreting and (ideally) predicting natural phenomena, but we are its creators.

It is, in other words, a work consisting purely of intellectual art.

Applied or “practical” mathematics, however, must specifically contend with the differences between the domain of pure mathematics and the domain of perceptual reality.

The inherent problem with applied mathematics lies in the fact that it is not necessarily founded on the same rules that perceptual reality is founded on -- whatever those may be.

The rules for mathematics are known by definition, while the rules for perceptual reality are largely unknown. And while mathematics can be regulated by the constraint of consistency, it is not clear that perceptual reality is governed by the same constraint.

Consequently, to the extent they disagree, applied mathematics will fail to accurately model perceptual reality.

Moreover, the extent of this disagreement is subject to unexpected and unexplained changes over time.

The Grand Irony

I think Protector is grappling with the inconsistencies inherent to mathematics both as pure theory and as an applied science.

The Grand Irony is that despite the (admittedly arbitrary) differences I expressed above, the two are inextricably linked.

If a “pure” mathematical theory cannot be shown to correspond to some sort of natural phenomenon, it will tend to be discarded or modified.

What's more intriguing to me, however, is the urge -- so often indulged -- to redefine natural phenomena that do not agree with theory.

In other words, if the “real world” doesn't match the elegance of theory, it is not uncommon for scientists to want to “fudge” observations to agree with theory. Of course, “good” scientists don't do this, but scientists are human.

This tendency to “fudge” observation can be summarized by the expression “That's impossible!” which signals the scientist's decision to reject observations which don't reinforce presumption.

Scientists aren't the only ones who do this, however. It's a very human trait. Everyone habitually and frequently discards perceptions they don't want to believe -- even those which take place “right before their very eyes”. The catch phrase: “I don't believe it!”

What results is modification of perception to agree with theory. Sometimes the degree of modification is subtle, sometimes gross, but it is always present to some extent.

We believe what we want to believe.

In theoretical physics, models are constantly being created, modified and discarded in attempts to explain observed phenomena.

The irony is that the same thing occurs in mathematics as well, and that's what I think you're pointing out.

Mathematics, like nature, is not a closed set.

Or I could be dead wrong.



posted on Nov, 30 2005 @ 07:03 AM

Originally posted by Cicada
"Who would recognize the answers if given?"

- Manly P. Hall


A former Mason but what's in a name

He was hooked on Phonics lookie lookie

Man ly p h all

Man lyph all

Man lift all
Jesus in Dante's Divine Comedy



Greg



posted on Nov, 30 2005 @ 07:22 AM
First, I'd like to say I enjoyed your post.


Originally posted by Protector
Does anyone else ever wonder why Integration works?


BUT, I'm not sure what your issue with integration and infinitesimals is. Is it that you cannot conceive of the minute? That you are having to work with something that is intangible? I offer that it is not intangible -- that things (reality, forces, areas, lengths, time) are continuums. I offer that the material universe is not made up of frames, but is a continuum of forces, time, dimensions... therefore there is an ever-decreasing segment we can achieve, and an integral is as valid and real as the desktop my keyboard is sitting on. I do not see integration as a crutch at all.




Or why division by zero is so complex, you have to use any method available to go around it?


It's not complex, it's impossible unless you've got some place new to stuff infinity. It might be pointed out that one of the major breakthroughs in gauging how advanced an ancient society was is whether it had conceived of nothingness -- zero. And once one can sufficiently grasp nothingness, it's not very hard to....



Or why every definition of infinity is different?


...see that its reciprocal is infinity.




Why is it that we build mathematical rules, laws, theorems, axioms, etc, but end up breaking our own rules in our own system?


If you are saying that the basic concepts of...

* infinitesimal divisions (which is nothing more than being able to visualize that you can take a distance, no matter how small, and halve it again)
* zero
* infinity

are something that we have set up and then broken, you'll have to give some examples of this, because I'm not following you.




We need to rebuild the foundation that our mathematics sits on. We need to solve the big problems, like some of those listed above. Many mathematicians and physicists of the past have not agreed what rules and/or laws actually govern our reality.


The above are not big problems, they are fundamental concepts that you appear to be having difficulty with.



For example, Integration was built on the concept of Infinitesimals, but many famous mathematicians did NOT believe such units could exist;


Yeah and at one point famous physicists and philosophers thought there was nothing smaller than an atom. Some of them also thought the world was flat, or that the sun revolved around us. Do we need to undo physical science, astronomy, cosmology and chemistry as well?




Sometimes I feel like I'm the only one who believes our world is leaning on crutches.


Unless you can make me understand what you are trying to say here, then between the two of us, I would agree you are the only one who believes this -- at least as far as the examples you have given. There are other areas of science where I absolutely agree with you -- band-aids and crutches accepted as fact, even in the face of condemning evidence -- but not on these points above.



posted on Nov, 30 2005 @ 12:14 PM

Originally posted by Valhall
I offer that the material universe is not made up of frames, but a continuum of forces, time, dimensions....


I agree, but we do live in a universe that has finites, precision, and sometimes frames.


Originally posted by Valhall
It's not complex, it's impossible unless you've got some place new to stuff infinity.


Actually, infinity has already been classified into infinite sets of varying orders. There are places to store infinities, but I'm not suggesting that all of the problems of mathematics stem from not understanding infinity, just that we cannot even agree on a definition for it, so using it in calculations becomes far too subjective.

For example, some people think of something infinite as being simply too large to count, but that quickly runs into trouble. 1 is infinitely larger than 0 in the sense that no sum of zeros, no matter how many terms, can ever equal 1. Still, 1 is just 1, and there are plenty of values larger than 1, even though 1 is "infinitely larger" than 0 (think of the problems with dimensions and Zeno's Paradoxes). So how do you distinguish infinities when the same word is used for seemingly different concepts? Maybe we need a new word.


Originally posted by Valhall
Yeah and at one point famous physicists and philosophers thought there was nothing smaller than an atom. Some of them also thought the world was flat, or that the sun revolved around us. Do we need to undo physical science, astronomy, cosmology and chemistry as well?



Merriam-Webster Definition
Main Entry: at·om
Pronunciation: 'a-t&m
Function: noun
Etymology: Middle English, from Latin atomus, from Greek atomos, from atomos indivisible, from a- + temnein to cut
1 : one of the minute indivisible particles of which according to ancient materialism the universe is composed


As you can see above, the original definition of atom was an indivisible particle. Once the thing we called an atom got split into something smaller, it ceased to be an atom in the original sense. However, we kept the same name in English. The Greeks probably would have done otherwise. So it is still tenable to believe that atoms -- perhaps particles we have yet to discover -- are the smallest possible indivisible particles.

Besides that, I'm sure that you are well-aware that I was referring to highly subject-knowledgeable mathematicians, such as Archimedes, who would not agree with the justifications for Integration used today. However, it is a fact that Integration works, so there is a layer of underlying truth that has not yet been discovered. That is why the 'dx' in equations just disappears. It is believed to be a source of error too small to make a difference in any practical calculation, but it is still a discrete unit of error that is ignored. Leibniz bent the rules a bit... and for good reason.


Originally posted by Valhall
Unless you can make me understand what you are trying to say here, between the two of us, I would agree you are the only one who believes this - at least as far as the examples you have given.


If examples were easy to come by, or explain, or type, then I would happily give more. However, the further you go into the "Dragon's Belly"
the harder it is to describe to the outside world.

If you had more specific concerns, I could perhaps address those, but as it stands the subject matter gets difficult quickly and there are no answers to some of my questions... at least not yet.

I presented this topic on this forum board for just that reason. It seems, in my eyes, to remain unexplained.



posted on Nov, 30 2005 @ 01:40 PM
Infinity has been properly defined through the use of sets. An infinite set is a set that can be put in a one-to-one correspondence (a pairing f that is injective -- f(s1)=f(s2) -> s1=s2 -- and onto) with a proper subset of itself. No finite set can do this.

Not all infinities have the same cardinality. Take, for example, the infinities of the natural numbers and the integers. They are in a one-to-one correspondence, and thus have the same infinity (same cardinality). The real numbers and the integers are not. The real numbers have infinitely many numbers between any two numbers, down to an arbitrarily small range.
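That definition can be made concrete with the classic example: f(n) = 2n pairs every natural number with an even natural number, putting the naturals in one-to-one correspondence with a proper subset of themselves -- something no finite set can do. A minimal Python sketch:

```python
# f(n) = 2n is a one-to-one correspondence between the naturals
# {0, 1, 2, ...} and their proper subset, the even naturals {0, 2, 4, ...}.
def f(n):
    return 2 * n

window = range(10)                     # a finite window onto the naturals
evens = [f(n) for n in window]

assert len(set(evens)) == len(evens)   # injective: no two inputs collide
assert evens == list(range(0, 20, 2))  # hits every even number in range
print(list(zip(window, evens)))
```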

Calculus is rid of the sweeping, faith-based mysticisms used by Leibniz and Newton back when it was invented. You're wrong here too. Leibniz embraced the dx, and used it like a mathematical concept. It was Newton who was embarrassed by his infinitesimals, and swept them away. That's why we use Leibniz's notation. D'Alembert later defined derivatives as limits, an idea made fully rigorous in the epsilon-delta definition. Nothing ever is zero, it merely approaches it. The same obviously goes for integrals. That gets rid of all the silly occurrences in calculus -- the indeterminate 0/0 and all of its partners -- and defines it in a perfectly solid manner, without any of the annoyances you're talking about.
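The limit formulation described above can be illustrated numerically; here is a rough Python sketch (my choice of f(x) = x^2 at x = 3, purely for illustration). The difference quotient uses a nonzero h throughout -- nothing is ever divided by zero, h merely approaches it:

```python
# Derivative as a limit: (f(x+h) - f(x)) / h for shrinking but nonzero h.
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x * x                    # f'(x) = 2x, so f'(3) should be 6
for k in range(1, 7):
    h = 10.0 ** -k
    print(f"h = {h:g}:  quotient = {difference_quotient(f, 3.0, h):.8f}")
# For this f the quotient is exactly 6 + h, so it approaches 6 as h -> 0.
```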

At any rate, zero was "invented" by the Babylonians and forgotten; reinvented in a much cruder fashion by the Greeks and forgotten again; invented properly by the Mayans, who were too far away for it to spread; and finally developed in a joint effort by the Indians and Arabs.


(P.S. - Aside from having studied this for four years, I just turned a paper in on this today. The paper was on zero, throughout history. I went deep into the development of calculus, astrophysics, and set notation)



posted on Dec, 1 2005 @ 03:40 PM

Originally posted by Amorymeltzer
Infinity has been properly defined through the use of sets. An infinite set is a set that is in a one-to-one correspondence (f(s1)=f(s2) -> s1=s2) with a proper subset. No finite set can do this.


Yes, an infinite set has been defined, by Georg Cantor, and each cardinality does determine certain properties of an infinite set. I've never disagreed with that. However, look up definitions for infinity in any book that requires it, look it up on the internet, look it up in Webster, and then you'll notice that many, if not all, of the definitions drastically differ. It appears, in my opinion, that mathematicians are using different working concepts (or far too generic concepts), which can cause problems if theories need to be combined.

Examples:
mathworld.wolfram.com...
www.m-w.com...
dictionary.reference.com...
scidiv.bcc.ctc.edu...
www.c3.lanl.gov...
pespmc1.vub.ac.be...

In three minutes I found six fairly good sources for definitions of infinity, and none of them use identical definitions. Some use math, some words, some explain the approach, but none are identical, which is a fairly odd phenomenon in the math world. That was my point, nothing else.


Originally posted by Amorymeltzer
Calculus is rid of the sweeping, faith-based mysticisms used by Leibniz and Newton back when it was invented. You're wrong here too. Leibniz embraced the dx, and used it like a mathematical concept... Nothing ever is zero, it just merely approaches it. The same obviously goes for integrals. That gets rid of all the silly occurences in calculus, the indeterminate 0/0, and all of it's partners, and defines it in a perfectly solid manner, without any of the annoyances you're talking about.


How am I wrong? I said that Leibniz bent the rules by using 'dx'. At that time, using infinitesimals was quite unpopular, but he did it anyway (or so is my understanding). In using the 'dx' you can pretend to reach zero without the pitfall of division by zero, which is just an extension of limit notation (and approaching zero), from my understanding. However, this method of "faking the zero" is risky once you discard the 'dx' in the Integration equations, stating that the error is of no consequence (which is what is actually done). Perhaps it is of no consequence, but I guess only a handful of people who have ever lived will truly know the ramifications of this decision. Since I work extensively with computers, an error of 0.00000000002353 can still be significant in practical use. I'm curious as to just how small that 'dx' error is and whether any other side effects arise from its disregard.


Source: www.math.rutgers.edu...
After Leibniz's form of integration had been presented, speculation arose, especially in response to infinitesimal measurement he implemented. In response to this speculation, he stated "... to avoid these subtle matters of dispute and because I wanted my ideas to be generally understood, I contented myself with explaining the infinite as the incomparable. In other words, I assumed there were quantities which were incomparably larger or smaller than ours." (Leibniz from Meschkowski 58) He went on to say that there are different degrees of infinitesimal units, but each one has a value which can vary so that it is possible to choose a value lower than the one chosen. He understood that the infinite sum of units with a thickness infinitely small is a modified form of the summation problem commonly dealt with by his predecessors. Further discussion on Leibniz's behalf went as follows: "For if an antagonist denies the correctness of our theorems, our calculations show that the error is smaller than any given quantity, since it is in our power to decrease the incomparably small." (58) This explanation does appear problematic though as he is stating the summation of the error cannot be significant since the "incomparably small" units are not significant enough to have too much error. By this argument there would be no measure when one took the integral since the units are too small to be accounted for. This argument shows that Leibniz had particularly amazing and revolutionary methods for solving these problems, but the rationale he based this on was not rock solid since he was not a stickler for details of a proof.


And explaining all that is probably extremely boring to most of ATS's members, thus I spared the details.
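The floating-point concern above can be separated from the calculus itself: the limit carries no residual error, while any finite dx (and any finite machine representation) does. A small Python sketch, using a left Riemann sum for the integral of x^2 over [0, 1] (exact value 1/3, example mine), shows the dx error shrinking as the partition is refined:

```python
# The error in a Riemann sum comes from using a finite dx, not from
# the limit itself: refine dx and the error shrinks toward zero.
def left_riemann(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x * x
exact = 1.0 / 3.0
for n in (10, 100, 1000, 10000):
    approx = left_riemann(f, 0.0, 1.0, n)
    print(f"n = {n:>5}:  sum = {approx:.8f}  error = {exact - approx:.2e}")
```

In exact arithmetic the error vanishes in the limit; only the finite floating-point representation puts a practical floor under it.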


Originally posted by Amorymeltzer
(P.S. - Aside from having studied this for four years, I just turned a paper in on this today. The paper was on zero, throughout history. I went deep into the development of calculus, astrophysics, and set notation)


I'm going on 2 1/2 years, extensively on the math side, although I've read about 5 books on the history of zero, too. I know quite a bit of Calc, some set notation, but nothing in the astrophysics department. I have worked with a number of computer algorithms that deal with the accuracy of calculations and the efficiency of performance. So I too have a solid background, believe it or not.



posted on Dec, 1 2005 @ 07:17 PM
But Who Questions The Questioner?


Originally posted by Protector
So I too have a solid background, believe it or not.

Ah, but obviously not solid enough to refrain from questioning cherished and venerated articles of faith.


Whatever direction this discussion may take and whatever may come of it, I want you to know that I'm confident you aren't the problem.

All science, including math, is at its foundation based on assumptions. Scientists may forget this at their peril, because all errors ultimately derive from this one.

I urge you to never stop questioning everything -- especially that which is assumed to be true.

Doing this is the only way to achieve greatness.



posted on Dec, 1 2005 @ 08:25 PM

Originally posted by Protector
Since I extensively work with computers, an error of 0.00000000002353 can still be significant in practical use. I'm curious as to just how small that 'dx' error is and if any other side-effects occur by its deregard.



You've just changed the argument -- from Integration being a crutch to how finely we are currently able to split the distance. The problem is not with dx, but in our limitation to decrease dx to the point we achieve a true continuity in a function.

The problem is not with the math, but with the implementation.



posted on Dec, 1 2005 @ 10:18 PM

Originally posted by Valhall
You've just changed the argument -- from Integration being a crutch to how finely we are currently able to split the distance. The problem is not with dx, but in our limitation to decrease dx to the point we achieve a true continuity in a function.

The problem is not with the math, but with the implementation.


Hmm, maybe. The implementation seems to work in almost all practical cases; that's why it is such a stable system. That is why I am curious as to whether our problem is with the implementation or rather with our approach to understanding our own solutions. Let's face it, sometimes we find solutions that are more useful than we expected, and so we don't understand their full ramifications. It could be both.



posted on Dec, 12 2005 @ 02:54 PM

Originally posted by AkashicWanderer

Originally posted by Protector
I don't even know how to respond to this crap. Any ideas?


I'd just copy and paste the following equation:

Let x=0.9~
10x=9.9~
9x=9
x=1

Therefore 0.9~=1


Maybe I don't know better. But why does integral calculus use decimals and not fractions?
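Exact fractions are in fact available in computation; Python's standard library, for one, provides fractions.Fraction. A sketch using it to check the 0.9~ algebra quoted above: the partial sums 0.9, 0.99, 0.999, ... fall short of 1 by exactly 1/10^n, a gap that vanishes in the limit.

```python
from fractions import Fraction

# Partial sums of 0.999... = 9/10 + 9/100 + ... + 9/10^n, computed exactly.
# The gap to 1 is exactly 1/10^n, so the full infinite sum equals 1.
def partial_sum(n):
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 3, 6):
    s = partial_sum(n)
    print(f"n = {n}:  sum = {s}  gap to 1 = {1 - s}")
```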

...

In my understanding a number divided by zero is undefined. What I'd really like to know is why a negative times a negative can equal a negative.

i^2=-1

I never bothered asking myself why this was, because I never really cared. Anyone have an explanation?

[edit on 12-12-2005 by Frosty]



posted on Dec, 12 2005 @ 08:13 PM
Just My Imagination Running Away With Me


Originally posted by Frosty
In my understanding a number divided by zero is undefined. What I'd really like to know is why a negative times a negative can equal a negative.

i^2=-1

I never bothered asking myself why this was, because I never really cared. Anyone have an explanation?

Sure, it's all in your head!


Descartes ran into this, and was similarly nonplussed -- if you'll pardon the pun.


A good overview of what followed can be found here:

en.wikipedia.org...

Ironically enough, "imaginary numbers" are as real as any other numbers.

That fact alone demonstrates why mathematicians and dreamers are typically one and the same.
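Python, for one, ships complex numbers in the core language, which makes the point easy to poke at. A small sketch (examples are mine). Note that i^2 = -1 is not actually "a negative times a negative": i is neither positive nor negative, and (-1) times (-1) is still +1.

```python
# i * i = -1 falls out of the multiplication rule for complex numbers:
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i, so i * i = (0 - 1) + 0i = -1.
i = 1j
print(i * i)                 # (-1+0j)

print((-1) * (-1))           # 1: a negative times a negative is positive
print((2 + 3j) * (1 - 1j))   # (5+1j)
```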




