Computing Limits

Contents

  1. Objectives
  2. Motivation
  3. Priming questions
  4. Notes
    1. Speed limits
    2. Memory limits
    3. Finite precision limits
    4. Finite precision limit example I
    5. Finite precision limit example II
    6. Finite precision limit discussion
    7. Finite precision guidelines
    8. Finite precision summary
  5. Questions
    1. Unique Combinations
    2. Round-off Error
      1. Part I
      2. Part II
  6. Activities
    1. Discussion Questions

1. Objectives

  • To understand the three limits on what a computer can compute: (1) speed, (2) memory, and (3) finite precision.
  • To understand why, because of (3), a computer can give an answer to a calculation that differs from the correct answer.
  • To introduce the concept of overflow.

2. Motivation

  • Doing science with a computer requires computation. A computer is generally much faster at these computations than any alternative, but there are limits to what it can compute.

3. Priming questions

Your computer stores all information in its memory in binary. For speed, your computer's operating system (e.g., Windows or OS X) organizes and manipulates those bits only in groups of 32 bits.

  • Use the binary addition technique covered in a previous module to add the decimal number 2³² − 1 (in binary, this is 32 ones) to the decimal number 1 (in binary, this is 31 zeros followed by a 1).
  • How many bits are required to express the answer to the previous question?
  • Guess what result the computer will report for the addition of these two numbers (a sketch for checking your guess follows this list).
  • Which ability would you rather have?
    • The ability to remember a very large set of numbers and words or
    • the ability to compute answers to math problems very, very quickly?
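
Below is a minimal sketch for testing the guess above. It assumes unsigned 32-bit arithmetic that simply discards any bit beyond the 32nd; Python's own integers never overflow, so the wrap-around is simulated here by masking the result to its lowest 32 bits.

  # Add 2**32 - 1 (32 ones) and 1, keeping only 32 bits of the result.
  a = 2**32 - 1                   # 32 ones in binary
  b = 1

  exact = a + b                   # 2**32, which needs a 33rd bit
  wrapped = exact & 0xFFFFFFFF    # keep only the lowest 32 bits

  print(bin(exact))               # 0b1 followed by 32 zeros (33 bits)
  print(wrapped)                  # 0 -- the extra bit has been lost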

4. Notes

4.1. Speed limits

4.2. Memory limits

4.3. Finite precision limits

[Figure: memory slot]

  • A computer stores decimal numbers as a bit pattern.
  • A computer (operating system) has a fundamental length for storing bit patterns.
  • If you try to do a sum of two decimal numbers, you are really doing a computation on the bit pattern approximation of the two decimal numbers.

4.4. Finite precision limit example I

If every color is stored as a pattern of 8 bits, then only a limited number of unique colors can be stored.

  • 2⁸ = 256, so only 256 colors could be stored on a system that stores colors using only 8 bits:
00000000 = blue
00000001 = ...
...
11111111 = ...
  • With 32 bits, how many unique combinations of 1s and 0s do we have? (A short check of these counts appears after the patterns below.)
00000000000000000000000000000000 = blue
00000000000000000000000000000001 = ...
...
11111111111111111111111111111111 = ...
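
A short check of these counts, assuming only that n independent bits give 2ⁿ distinct patterns:

  # The number of distinct bit patterns that fit in n bits is 2**n.
  print(2**8)                       # 256 colors with 8 bits
  print(2**32)                      # 4294967296 patterns with 32 bits
  print(format(0, '032b'))          # the first 32-bit pattern: 32 zeros
  print(format(2**32 - 1, '032b'))  # the last 32-bit pattern: 32 ones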

4.5. Finite precision limit example II

Suppose the fundamental length for storing bit patterns for decimal numbers is 3 bits. You ask the computer to add the two numbers

  1 0 0
+ 1 0 0
  _____

The answer is

1 0 0 0

but your computer will complain - the result of the addition requires four bits, and it can only store bit patterns of length three. This is an example of overflow - the result of the sum cannot be represented in three bits.
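
Here is a minimal sketch of this three-bit addition. It assumes a machine that silently keeps only the lowest three bits of the result; a real machine might instead report an error, but the masking illustrates which bit is lost.

  # 100 + 100 in binary needs four bits; a 3-bit pattern cannot hold it.
  a = 0b100                      # decimal 4
  b = 0b100                      # decimal 4

  total = a + b                  # decimal 8 = 0b1000, four bits
  print(bin(total))              # 0b1000

  # A register limited to three bits would keep only the lowest three:
  print(bin(total & 0b111))      # 0b0 -- the leading 1 is lost (overflow)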

4.6. Finite precision limit discussion

  • (a + b) − a is not necessarily equal to b on a computer! For example, the following may not evaluate to 1:
(10¹⁵ + 1) − 10¹⁵ = (1,000,000,000,000,000 + 1) − 1,000,000,000,000,000
  • You must keep this in mind when using anything that does a computation.
  • I have covered why this occurs at a conceptual level.
  • I will not discuss the details of determining when this will occur in this course.
  • Note that there are "tricks" that you can do so that the above example gives the expected answer. Most people don't need to do such calculations, so most programs do not bother implementing these "tricks".

(Example from [1])
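
Here is a minimal sketch of this effect using Python's standard 64-bit floating-point numbers; at that precision the loss shows up around 10¹⁶ rather than 10¹⁵, but the idea is the same: b is too small to survive being added to a.

  # (a + b) - a is not always b when numbers are stored with finite precision.
  a = 1e16
  b = 1.0

  print((a + b) - a)   # prints 0.0, not 1.0
  print(a + b == a)    # True: adding 1 did not change the stored bit pattern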

4.7. Finite precision guidelines

  • How do you know that the result of a computation is not influenced by limits on memory?

4.8. Finite precision summary

  • High level computer languages allow you to tell the computer what you "mean" without thinking about the "implementation details".
  • You will still need to make sure that any calculation that you do is in a range that allows you to ignore "implementation details".
  • You do not want to claim a new science discovery that is really just a result of computing limits.
    • This module shows that there are limits to computing.
    • This module does not cover how to exactly determine the conditions when you will get the wrong answer (e.g., when (a+b)-b will not be equal to a).

5. Questions

5.1. Unique Combinations

How many unique two-bit groups of 1s and 0s can you create? For example, 0 1 is one group.

  1. 3
  2. 5
  3. 2²
  4. 2² − 1
  5. 2³

5.2. Round-off Error

Suppose that we wanted to store a number on a computer, but only had two bits of memory. The four possible values of these two bits are

Bit 1   Bit 2   Decimal Equiv.
  0       0           0
  0       1           1
  1       0           2
  1       1           3

When reading the bits from memory, we can tell the computer that a 0 0 means the expression red. Therefore, if I wanted to save the list red green green red magenta, I could just write the bits 00 01 01 00 11. When I read the bits back, I know that the writer meant red green green red magenta. In a similar way, I can define these combinations of bits to mean things other than their decimal equivalent; two examples are shown in the "bit-group-to-meaning" table below (a short sketch of this encode-and-decode idea follows the table).

bit-group-to-meaning table
Bit 1   Bit 2   Decimal Equiv.   Meaning 1   Meaning 2
  0       0           0            red            0
  0       1           1            green         -1
  1       0           2            blue          +1
  1       1           3            magenta        2
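
A minimal sketch of the encode-and-decode idea above, using the two-bit color code (Meaning 1); the dictionary names are illustrative only.

  # A lookup table turns two-bit groups into colors and back.
  code_to_color = {'00': 'red', '01': 'green', '10': 'blue', '11': 'magenta'}
  color_to_code = {color: code for code, color in code_to_color.items()}

  colors = ['red', 'green', 'green', 'red', 'magenta']

  bits = ' '.join(color_to_code[c] for c in colors)
  print(bits)        # 00 01 01 00 11

  read_back = [code_to_color[group] for group in bits.split()]
  print(read_back)   # ['red', 'green', 'green', 'red', 'magenta']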

Suppose that your instrument can take on the values 0.0, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, and 4.0, and you want to represent each value as a group of three bits. The first part of your "bit-group-to-meaning" table would look like the following.

bit-group-to-meaning table
Bit 1   Bit 2   Bit 3   Decimal Equiv.   Meaning
  0       0       0            0           0.0
  0       0       1            1           0.5
  0       1       0            2           1.0
  0       1       1            3           1.5
  1       0       0            4           2.0
etc.

5.2.1. Part I

Suppose that you can use only three bits. How many unique combinations of 1s and 0s are possible?

  1. 2
  2. 4
  3. 6
  4. 7
  5. 8
  6. 9

5.2.2. Part II

Suppose that you get a new instrument that can measure the values 0.0, 0.1, 0.2, ..., 4.0, but you still store data using three bits and use the same "bit-group-to-meaning" table.

When you write a bit group to record a measurement, you choose the bit group whose meaning is closest to your measurement. In doing so, you lose precision, much like when you round the decimal number 1.11111111 to 1.1. When you read back the number 1.1, you cannot tell whether the original number was, for example, 1.11 or 1.12, because both round to 1.1.

When you read back your measurements after representing each one on disk as a group of three bits, what will be the largest difference between the read-back value and the actual measured value? Explain your answer in words.

  1. 0.1
  2. 0.5
  3. 0.4
  4. 1.0
  5. 2.0

6. Activities

6.1. Discussion Questions

  • What happens when you ask a calculator that displays 12 digits to compute 999999999999 × 999999999999 (twelve nines times twelve nines)?
  • You have an 8-bit operating system. That means all objects (instructions, numbers, etc.) must be represented by a list of eight bits.
    • What would happen when you ask your computer to do 11111111 + 11111111 in binary?
    • It would essentially say: "I can only hold objects that can be represented by eight bits. The object you calculated requires more than eight bits. I don't know what to do with the extra bit."
  • What happens when you ask Google Calculator to multiply the following number by itself?
99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999

