
– why is 0.999… equal to 1?

Welcome back to another super edition of general-knowledge questions!

2146 internet users were curious about this: Spiegami – why is 0.999… equal to 1?

I know the arithmetic proof and everything, but how do I explain this practically to a kid who has just started understanding numbers?

And here are the answers:

I understood it to be true but struggled with it for a while. How does the decimal .333… so easily equal 1/3, yet the decimal .999…, which equals exactly 3/3 or 1.000, prove so hard to rationalize? It turns out I was focusing on precision and not truly understanding the application of infinity, like many of the commenters here. Here’s what finally clicked for me:

Let’s begin with a pattern.

1 – .9 = .1

1 – .99 = .01

1 – .999 = .001

1 – .9999 = .0001

1 – .99999 = .00001

As a matter of precision, however far you take this pattern, the difference between 1 and a string of 9s will be a string of 0s ending with a 1. As we repeat this thousands of times, billions of times, on toward infinity, the difference keeps getting smaller but never reaches 0, right? You can always sample with greater precision and find a difference?

Wrong.

The leap with infinity, the 9s repeating forever, is that the 9s never stop, which means the 0s never stop and, most importantly, the final 1 never exists.

So 1 – .999… = .000…, which is, hopefully, more digestible. That is what needs to click. Balance the equation, and maybe it will become easy to trust that .999… = 1.
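
If it helps, here is that final step written in symbols; the n is just my label for the number of 9s, and the overline is standard notation for a repeating digit:

```latex
% With n nines after the decimal point, the difference is exactly 10^(-n):
\[
  1 - \underbrace{0.99\ldots9}_{n \text{ nines}} = 10^{-n},
  \qquad \lim_{n \to \infty} 10^{-n} = 0.
\]
% Letting the nines run forever, the difference is exactly zero:
\[
  1 - 0.\overline{9} = 0 \quad \Longrightarrow \quad 0.\overline{9} = 1.
\]
```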

This doesn’t exactly answer the question, but I discovered this pattern as a kid playing with a calculator:

1/9 = 0.1111…

2/9 = 0.2222…

3/9 = 0.3333…

4/9 = 0.4444…

5/9 = 0.5555…

6/9 = 0.6666…

7/9 = 0.7777…

8/9 = 0.8888…

Cool, right? So, by that pattern, you’d expect 9/9 to equal 0.9999… But remember your math: any nonzero number divided by itself is 1, so 9/9 = 1. So if the pattern holds true, then 0.9999… = 1.
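
You can re-run this calculator experiment with more digits than a pocket calculator shows; here is a small sketch using Python’s decimal module (my own illustration, not part of the original answer):

```python
# A re-run of the calculator experiment with 20 significant digits.
from decimal import Decimal, getcontext

getcontext().prec = 20  # work with 20 significant digits

for k in range(1, 10):
    print(f"{k}/9 =", Decimal(k) / Decimal(9))

# Prints 0.111..., 0.222..., and so on; the very last digit may be
# rounded up because the precision is finite, but the pattern is clear.
print("9/9 =", Decimal(9) / Decimal(9))  # prints exactly 1
```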

Many here have given explanations of how you can prove that, but stepping back a bit, you’ll want to understand that decimal expansion is just an arbitrary convention we chose for giving names to real numbers. There’s the pure abstract concept of a real number (defined by the axioms), and then there’s the notation we use to represent them using strings of symbols.

And an unavoidable property of the decimal encoding is that some real numbers have multiple decimal representations.

For example, 0.999…, 1.0, 1.00, 1.000, etc. are all decimal representations of the same mathematical object: the real number more commonly known as 1.
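
To make that concrete: under the usual convention, the string 0.999… names the sum of a geometric series, and that sum is exactly 1. This is a standard identity, sketched here for reference:

```latex
\[
  0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^{k}}
  = 9 \cdot \frac{1/10}{1 - 1/10}
  = 9 \cdot \frac{1}{9}
  = 1.
\]
```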

Divide 1 by 3. You get .33333…

Multiply that number by 3 again.

You get .999999999…

They’re equal.
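
If you want to check that chain with exact arithmetic rather than a calculator’s truncated decimals, Python’s fractions module does it in a few lines (an illustrative sketch, not part of the original answer):

```python
# Exact rational arithmetic: no truncation at any decimal place.
from fractions import Fraction

one_third = Fraction(1, 3)   # the exact value that .33333... stands for
print(one_third * 3)         # 1
print(one_third * 3 == 1)    # True
```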

You’ve seen the proof, but I never really liked it until someone told me: “find a number between 0.999… and 1”. That’s the real evidence to me. There is no number between them, so they have to be the same number.

Number between 1 and 2? 1.1.

Number between 1 and 1.1? 1.01.

Etc.

Between any two distinct rational numbers there are always infinitely many more rationals; the rationals are called dense because of this.
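
That “no number in between” test can be pinned down with one line of algebra, which holds for all real numbers, not just the rationals (a standard fact, added here as a sketch):

```latex
% For any two distinct reals a < b, the average lies strictly between them:
\[
  a < b \;\Longrightarrow\; a \;<\; \frac{a+b}{2} \;<\; b.
\]
% If 0.999... and 1 were distinct, their average would lie strictly
% between them; no such number exists, so they must be equal.
```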

Sorry for any non-technical aspects of this explanation; I’m a physicist, not a mathematician.