CS301: Computer Architecture

Integers and the representation of real numbers.

Read these sections on the representation of integers and real numbers. Earlier, you read about number systems and the representation of numbers used for computing. This will give you a chance to review that material. Computer architecture comprises components which perform the functions of storage of data, transfer of data from one component to another, computations, and interfacing to devices external to the computer. Data is stored in terms of units, called words. A word is made up of a number of bits, typically, depending on the computer, 32 bits or 64 bits. Words keep getting longer, with larger numbers of bits. Instructions are also stored in words. Before, you saw examples of how instructions are stored in a word or words. Now, you will see how numbers are stored in words.

In scientific computing, most operations are on real numbers. Computations on integers rarely add up to any serious computation load. It is mostly for completeness that we start with a short discussion of integers.

When we are indexing an array, only positive integers are needed. In general integer computations, of course, we need to accommodate the negative integers too. There are several ways of implementing negative integers. The simplest solution is to reserve one bit as a sign bit, and use the remaining 31 (or 15 or 63; from now on we will consider 32 bits the standard) bits to store the absolute magnitude.


This scheme has some disadvantages, one being that there is both a positive and negative number zero. This means that a test for equality becomes more complicated than simply testing for equality as a bitstring.

The scheme that is used most commonly is called 2’s complement, where integers are represented as follows.

A non-negative integer m is stored as the bit pattern of m itself; a negative integer -n (with 1 ≤ n ≤ 2^31) is stored as the bit pattern of the unsigned number 2^32 - n. This scheme has the following properties:

  • There is no overlap between the bit patterns for positive and negative integers, in particular, there is only one pattern for zero.
  • The positive numbers have a leading bit zero, the negative numbers have the leading bit set.

Exercise 3.1. For the 'naive' scheme and the 2's complement scheme for negative numbers, give pseudocode for the comparison test m < n, where m and n are integers. Be careful to distinguish between all cases of m, n positive, zero, or negative.

Adding two numbers with the same sign, or multiplying two numbers of any sign, may lead to a result that is too large or too small to represent. This is called overflow.

Exercise 3.2. Investigate what happens when you perform such a calculation. What does your compiler say if you try to write down a nonrepresentable number explicitly, for instance in an assignment statement?

In both cases we conclude that we can perform subtraction by adding the bitstrings that represent the positive and negative number as unsigned integers, and ignoring overflow if it occurs.

Representation of real numbers

In this section we will look at how various kinds of numbers are represented in a computer, and the limitations of various schemes. The next section will then explore the ramifications of this on arithmetic involving computer numbers.

Real numbers are stored using a scheme that is analogous to what is known as 'scientific notation', where a number is represented as a significand and an exponent, for instance 6.022 x 10^23, which has a significand 6.022 with a radix point after the first digit, and an exponent 23. This number stands for

6.022 x 10^23 = (6 x 10^0 + 0 x 10^-1 + 2 x 10^-2 + 2 x 10^-3) x 10^23.

We introduce a base β, a small integer number, 10 in the preceding example, and 2 in computer numbers, and write numbers in terms of it as a sum of t terms:

x = ± ( d_0 + d_1/β + d_2/β^2 + ... + d_{t-1}/β^{t-1} ) x β^e

where the components are

  • the sign bit : a single bit storing whether the number is positive or negative;
  • β is the base of the number system;
  • 0 ≤ d_i ≤ β - 1 are the digits of the mantissa: d_0 is the digit before the radix point, and d_1 d_2 ... d_{t-1} follow it;
  • t is the length of the mantissa;
  • e ∈ [L, U] is the exponent, an integer in a bounded range.

Note that there is an explicit sign bit for the whole number; the sign of the exponent is handled differently. For reasons of efficiency, e is not a signed number; instead it is considered as an unsigned number in excess of a certain minimum value. For instance, the bit pattern for the number zero is interpreted as e = L.

Some examples

Let us look at some specific examples of floating point representations. Base 10 is the most logical choice for human consumption, but computers are binary, so base 2 predominates there. Old IBM mainframes grouped bits to make for a base 16 representation.

[Table of floating point formats, including IEEE single precision (32 bit) and double precision (64 bit), not reproduced here.]

Of these, the single and double precision formats are by far the most common. We will discuss these in section 3.2.4 and further.

Binary coded decimal

Decimal numbers are not relevant in scientific computing, but they are useful in financial calculations, where computations involving money absolutely have to be exact. Binary arithmetic is at a disadvantage here, since numbers such as 1/10 are repeating fractions in binary. With a finite number of bits in the mantissa, this means that the number 1/10 can not be represented exactly in binary. For this reason, binary-coded-decimal schemes were used in old IBM mainframes, and are in fact being standardized in revisions of IEEE754 [4]; see also section 3.2.4. Few processors these days have hardware support for BCD; one example is the IBM Power6.

In BCD schemes, one or more decimal digits are encoded in a number of bits. The simplest scheme encodes the digits 0-9 in four bits each. This has the advantage that in a BCD number each digit is readily identified; it has the disadvantage that about 1/3 of all bits are wasted, since 4 bits can encode the numbers 0-15. More efficient encodings pack 0-999 into ten bits, which could in principle store the numbers 0-1023. While this is efficient in the sense that few bits are wasted, identifying individual digits in such a number takes some decoding.

Ternary computers

There have been some experiments with ternary arithmetic [2, 8, 9].

Limitations

Since we use only a finite number of bits to store floating point numbers, not all numbers can be represented. The ones that can not be represented fall into two categories: those that are too large or too small (in some sense), and those that fall in the gaps. A number can be too large in absolute value to store, which is called overflow; a nonzero number can also be too close to zero to store, which is called underflow.

The fact that only a small number of real numbers can be represented exactly is the basis of the field of round-off error analysis. We will study this in some detail in the following sections.

Normalized numbers and machine precision

A practical implication in the case of binary numbers is that the first digit is always 1, so we do not need to store it explicitly. In the IEEE 754 standard, this means that every floating point number is of the form

x = ± 1.d_1 d_2 ... d_{t-1} x 2^e

Figure 3.1: Single precision arithmetic

Machine precision can be defined another way: ε is the smallest number that can be added to 1 so that 1 + ε has a different representation than 1. A small example shows how aligning exponents can shift a too small operand so that it is effectively ignored in the addition operation:


The machine precision is the maximum attainable accuracy of computations: it does not make sense to ask for more than 6-or-so digits accuracy in single precision, or 15 in double.

Exercise 3.3. Write a small program that computes the machine epsilon. Does it make any difference if you set the compiler optimization levels low or high? Can you find other ways in which this computation goes wrong?

The IEEE 754 standard for floating point numbers

Some decades ago, issues like the length of the mantissa and the rounding behaviour of operations could differ between computer manufacturers, and even between models from one manufacturer. This was obviously a bad situation from the point of view of portability of codes and reproducibility of results. The IEEE 754 standard codified all this, for instance stipulating 24 and 53 bits for the mantissa in single and double precision arithmetic, using a storage sequence of sign bit, exponent, mantissa. This for instance facilitates comparison of numbers.

The standard also declared the rounding behaviour to be ‘exact rounding’: the result of an operation should be the rounded version of the exact result.

These days, almost all processors adhere to the IEEE 754 standard, with only occasional exceptions. For instance, Nvidia Tesla GPUs are not standard-conforming in single precision. The justification for this is that double precision is the 'scientific' mode, while single precision is most likely used for graphics, where exact compliance matters less.


27.1 Integer Representations

Modern computers store integer values as binary (base-2) numbers that occupy a single unit of storage, typically either as an 8-bit char, a 16-bit short int, a 32-bit int, or possibly, a 64-bit long long int. Whether a long int is a 32-bit or a 64-bit value is system dependent.

The macro CHAR_BIT , defined in limits.h , gives the number of bits in type char . On any real operating system, the value is 8.

The fixed sizes of numeric types necessarily limit their range of values, and the particular encoding of integers decides what that range is.

For unsigned integers, the entire space is used to represent a nonnegative value. Signed integers are stored using two's-complement representation: a signed integer with n bits has a range from -2^(n-1) to +2^(n-1) - 1, inclusive. The leftmost, or high-order, bit is called the sign bit.

In two’s-complement representation, there is only one value that means zero, and the most negative number lacks a positive counterpart. As a result, negating that number causes overflow; in practice, its result is that number back again. We will revisit that peculiarity shortly.

For example, a two’s-complement signed 8-bit integer can represent all decimal numbers from -128 to +127. Negating -128 ought to give +128, but that value won’t fit in 8 bits, so the operation yields -128.

Decades ago, there were computers that used other representations for signed integers, but they are long gone and not worth any effort to support. The GNU C language does not support them.

When an arithmetic operation produces a value that is too big to represent, the operation is said to overflow . In C, integer overflow does not interrupt the control flow or signal an error. What it does depends on signedness.

For unsigned arithmetic, the result of an operation that overflows is the n low-order bits of the correct value. If the correct value is representable in n bits, that is always the result; thus we often say that “integer arithmetic is exact,” omitting the crucial qualifying phrase “as long as the exact result is representable.”

In principle, a C program should be written so that overflow never occurs for signed integers, but in GNU C you can specify various ways of handling such overflow (see Integer Overflow ).

Integer representations are best understood by looking at a table for a tiny integer size; here are the possible values for an integer with three bits:

    0b000   0
    0b001   1
    0b010   2
    0b011   3
    0b100   4 (-4)
    0b101   5 (-3)
    0b110   6 (-2)
    0b111   7 (-1)

The parenthesized decimal numbers in the last column represent the signed meanings of the two’s-complement of the line’s value. Recall that, in two’s-complement encoding, the high-order bit is 0 when the number is nonnegative.

We can now understand the peculiar behavior of negation of the most negative two’s-complement integer: start with 0b100, invert the bits to get 0b011, and add 1: we get 0b100, the value we started with.

We can also see overflow behavior in two's-complement: for example, 0b011 + 0b001 gives 0b100, so adding 1 to 3 yields -4 in the 3-bit signed type.

A sum of two nonnegative signed values that overflows has a 1 in the sign bit, so the exact positive result is truncated to a negative value.

In theory, any of these types could have some other size, but it's not worth even a minute to cater to that possibility. It never happens on GNU/Linux.


Engineering LibreTexts

3.1: Integer Representation


  • Ed Jorgensen
  • University of Nevada, Las Vegas

Representing integer numbers refers to how the computer stores or represents a number in memory. The computer represents numbers in binary (1's and 0's). However, the computer has a limited amount of space that can be used for each number or variable. This directly impacts the size, or range, of the number that can be represented. For example, a byte (8-bits) can be used to represent \(2^{8}\) or 256 different numbers. Those 256 different numbers can be unsigned (all positive) in which case we can represent any number between 0 and 255 (inclusive). If we choose signed (positive and negative values), then we can represent any number between -128 and +127 (inclusive).

If that range is not large enough to handle the intended values, a larger size must be used. For example, a word (16-bits) can be used to represent \(2^{16}\) or 65,536 different values, and a double-word (32-bits) can be used to represent \(2^{32}\) or 4,294,967,296 different numbers. So, if you wanted to store a value of 100,000 then a double-word would be required.

As you may recall from C, C++, or Java, an integer declaration (e.g., int <variable> ) is a single double-word which can be used to represent values between \(-2^{31}\) (−2,147,483,648) and +\(2^{31}\) - 1 (+2,147,483,647).

The following table shows the ranges associated with typical sizes:

    Size          Bits   Unsigned range          Signed range
    byte            8    0 to 255                -128 to +127
    word           16    0 to 65,535             -32,768 to +32,767
    double-word    32    0 to 4,294,967,295      -2,147,483,648 to +2,147,483,647
    quadword       64    0 to 2^64 - 1           -2^63 to +2^63 - 1

In order to determine if a value can be represented, you will need to know the size of the storage element (byte, word, double-word, quadword, etc.) being used and if the values are signed or unsigned.

  • For representing unsigned values within the range of a given storage size, standard binary is used.
  • For representing signed values within the range, two's complement is used. Specifically, the two's complement encoding process applies to the values in the negative range. For values within the positive range, standard binary is used.

For example, the unsigned byte range can be represented using a number line as follows:

截屏2021-07-18 下午4.10.19.png

For example, the signed byte range can also be represented using a number line as follows:

截屏2021-07-18 下午4.10.43.png

The same concept applies to halfwords and words which have larger ranges.

Since unsigned values have a different, positive-only range than signed values, the same bit pattern can stand for two different values. This can be very confusing when examining variables in memory (with the debugger).

For example, when the unsigned and signed values are within the overlapping positive range (0 to +127):

  • A signed byte representation of \(12_{10}\) is 0x0C
  • An unsigned byte representation of \(12_{10}\) is also 0x0C

When the unsigned and signed values are outside the overlapping range:

  • A signed byte representation of \(-15_{10}\) is 0xF1
  • An unsigned byte representation of \(241_{10}\) is also 0xF1

This overlap can cause confusion unless the data types are clearly and correctly defined.

Two's Complement

The following describes how to find the two's complement representation for negative values (not positive values).

To take the two's complement of a number:

  • take the one's complement (invert all the bits)
  • add 1 (in binary)

The same process is used to encode a decimal value into two's complement and from two's complement back to decimal. The following sections provide some examples.

For example, to find the byte size (8-bit) two's complement representation of -9 and -12:

    9  = 0000 1001 → invert → 1111 0110 → add 1 → 1111 0111 = 0xF7  (-9)
    12 = 0000 1100 → invert → 1111 0011 → add 1 → 1111 0100 = 0xF4  (-12)

Note, all bits for the given size, byte in this example, must be specified.

To find the word size (16-bit) two's complement representation of -18 and -40:

    18 = 0000 0000 0001 0010 → invert → 1111 1111 1110 1101 → add 1 → 1111 1111 1110 1110 = 0xFFEE  (-18)
    40 = 0000 0000 0010 1000 → invert → 1111 1111 1101 0111 → add 1 → 1111 1111 1101 1000 = 0xFFD8  (-40)

Note, all bits for the given size, words in these examples, must be specified.

Computer Science learning blog

How are integer numbers represented by computers?

Computers represent all of their data using the binary system. In this system, the only possible values are 0 and 1, binary digits, which are also called bits. The demonstration below shows the bit-level representation of some common integer data-types in C.

In this post, we’ll see how to understand the encoding used by computers to represent integer values. Both unsigned and signed representations will be explained. Additionally, we’ll analyze the range of values that can be represented with a certain quantity of bits. Finally, we’ll take a closer look at some of the integer data-types used in C, which are similar to those used in many other programming languages.

Unsigned values

We’ll start by analyzing how to represent positive values and zero. These are referred to as unsigned integer types.

To represent numbers in general and unsigned values in particular, computers use arrays of bits. For demonstration purposes, we'll use a 4-bit array.


Each one of these bits has a weight of increasing powers of 2.

    weights:  2^3  2^2  2^1  2^0   (8  4  2  1)

Counting the bits starting from zero, we'll have that the 0th bit (rightmost bit) has a weight of 2^0 = 1. The one that follows has a weight of 2^1 = 2, and so on. In general, the ith bit has a weight of 2^i.

Each bit tells us whether or not to include its weight in the number that is encoded. When a bit equals 0 , we don’t include its weight. When it equals 1 , we add its corresponding weight to the total. The value encoded by these bits is equal to the sum of all the weights of the bits set to 1 .

    bits:  0  1  0  1

In the example above, the 0th and the 2nd bit are set to 1. It follows that the number that is encoded is 2^0 + 2^2 = 1 + 4 = 5.

In mathematical terms, the encoded value n represented by an array of unsigned w bits is equal to:

    n = x_{w-1}·2^{w-1} + ... + x_1·2^1 + x_0·2^0

Where x_i is the ith bit.

Signed values

Signed data types are those that can represent negative values as well as positive ones. The most common way to do this is two's complement encoding.

This encoding is very similar to the one that we saw for unsigned values. The only difference is that the most significant bit (the one with the greatest weight), has a negative value. We’ll call it the sign-bit . In our 4-bit representation, we’ll have:

    weights:  -2^3  2^2  2^1  2^0   (-8  4  2  1)

Whenever the sign-bit is set to 1 , we’ll have a negative number.

    bits:  1  0  1  0

In the example above, both the 3rd bit (the sign-bit) and the 1st bit are set to one. The value represented is then -2^3 + 2^1 = -8 + 2 = -6.

We’ll get positive values when the sign-bit is set to 0 .


In mathematical terms:

    n = -x_{w-1}·2^{w-1} + x_{w-2}·2^{w-2} + ... + x_1·2^1 + x_0·2^0

Ones’ complement

While two’s complement is the most common encoding for signed integers, some machines use an alternative encoding: ones’ complement . In this encoding, the sign-bit has a weight of -2 w-1 -1 instead of -2 w-1 . This has the effect of making the range of possible values symmetrical. As a particularity, there are two possible ways to represent zero in this encoding.

We’ll focus on the two’s complement notation for the rest of this post.

Range of values

Now that we have seen how numbers are encoded, let’s see the possible range of values that can be represented.

For unsigned data types, the smallest possible value is simply 0 . The largest value is obtained when all of the bits are set to 1 .

    bits:  1  1  1  1   (2^3 + 2^2 + 2^1 + 2^0 = 15)

Since we have a sum of all powers of 2 from zero up to w-1 (assuming an array of w bits), the maximum value is equal to 2^w - 1. In our example, we had w = 4, so the maximum value is 2^4 - 1 = 16 - 1 = 15.

For signed data types, we’ll get the minimum value when only the sign-bit is set to 1 .

    bits:  1  0  0  0   (-2^3 = -8)

The maximum value is obtained when all but the sign-bit are set to 1 .

In general terms, for an array of w bits using two's complement encoding, the minimum value will be -2^{w-1} and the maximum value will be 2^{w-1} - 1. See that the maximum value is one less than the magnitude of the minimum value.

These results are summarized below:

    Encoding            Minimum       Maximum
    unsigned            0             2^w - 1
    two's complement    -2^{w-1}      2^{w-1} - 1

Integer data types in C

In C, we can find the following integer data types: char, short, int, long, and long long, each of which comes in a signed and an unsigned variation.

While there is a signed keyword, it’s not necessary to use it. Unless it’s otherwise specified using the unsigned keyword, all integer data types are signed by default.

Each one of these data types uses a varying number of bytes (1 byte = 8 bits). While the size of each data type depends on the machine in which C is implemented, the most common sizes for modern 64-bit machines are: char, 1 byte; short, 2 bytes; int, 4 bytes; long, 8 bytes; and long long, 8 bytes.

The size is the same for the signed and unsigned variations of each data-type.

The C specification does not require signed data types to be encoded using two’s complement. While it’s less common, some machines use ones’ complement.

The encoding of the different integer values follows the same principles that we studied above, as can be seen in the demonstration at the beginning of this post.


Data Representation 5.3. Numbers


In this section, we will look at how computers represent numbers. To begin with, we'll revise how the base 10 number system that we use every day works, and then look at binary, which is base 2. After that, we'll look at some other characteristics of numbers that computers must deal with, such as negative numbers and numbers with decimal points.

The number system that humans normally use is in base 10 (also known as decimal). It's worth revising quickly, because binary numbers use the same ideas as decimal numbers, just with fewer digits!

In decimal, the value of each digit in a number depends on its place in the number. For example, in $123, the 3 represents $3, whereas the 1 represents $100. Each place value in a number is worth 10 times more than the place value to its right, i.e. there are the "ones", the "tens", the "hundreds", the "thousands" the "ten thousands", the "hundred thousands", the "millions", and so on. Also, there are 10 different digits (0,1,2,3,4,5,6,7,8,9) that can be at each of those place values.

If you were only able to use one digit to represent a number, then the largest number would be 9. After that, you need a second digit, which goes to the left, giving you the next ten numbers (10, 11, 12... 19). It's because we have 10 digits that each one is worth 10 times as much as the one to its right.

You may have encountered different ways of expressing numbers using "expanded form". For example, if you want to write the number 90328 in expanded form you might have written it as:

    90000 + 300 + 20 + 8

A more sophisticated way of writing it is:

    (9 x 10000) + (0 x 1000) + (3 x 100) + (2 x 10) + (8 x 1)

If you've learnt about exponents, you could write it as:

    (9 x 10^4) + (0 x 10^3) + (3 x 10^2) + (2 x 10^1) + (8 x 10^0)

The key ideas to notice from this are:

  • Decimal has 10 digits – 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
  • A place is the place in the number that a digit is, i.e. ones, tens, hundreds, thousands, and so on. For example, in the number 90328, 3 is in the "hundreds" place, 2 is in the "tens" place, and 9 is in the "ten thousands" place.
  • Numbers are made with a sequence of digits.
  • The right-most digit is the one that's worth the least (in the "ones" place).
  • The left-most digit is the one that's worth the most.
  • Because we have 10 digits, the digit at each place is worth 10 times as much as the one immediately to the right of it.

All this probably sounds really obvious, but it is worth thinking about consciously, because binary numbers have the same properties.

As discussed earlier, computers can only store information using bits, which have 2 possible states. This means that they cannot represent base 10 numbers using digits 0 to 9, the way we write down numbers in decimal. Instead, they must represent numbers using just 2 digits – 0 and 1.

Binary works in a very similar way to decimal, even though it might not initially seem that way. Because there are only 2 digits, this means that each digit is 2 times the value of the one immediately to the right.

The base 10 (decimal) system is sometimes called denary, which is more consistent with the name binary for the base 2 system. The word "denary" also refers to the Roman denarius coin, which was worth ten asses (an "as" was a copper or bronze coin). The term "denary" seems to be used mainly in the UK; in the US, Australia and New Zealand the term "decimal" is more common.

The interactive below illustrates how this binary number system represents numbers. Have a play around with it to see what patterns you can see.


Find the representations of 4, 7, 12, and 57 using the interactive.

What is the largest number you can make with the interactive? What is the smallest? Is there any integer value in between the biggest and the smallest that you can’t make? Are there any numbers with more than one representation? Why/ why not?

  • 000000 in binary, 0 in decimal is the smallest number.
  • 111111 in binary, 63 in decimal is the largest number.
  • All the integer values (0, 1, 2... 63) in the range can be represented (and there is a unique representation for each one). This is exactly the same as decimal!

You have probably noticed from the interactive that when set to 1, the leftmost bit (the "most significant bit") adds 32 to the total, the next adds 16, and then the rest add 8, 4, 2, and 1 respectively. When set to 0, a bit does not add anything to the total. So the idea is to make numbers by adding some or all of 32, 16, 8, 4, 2, and 1 together, and each of those numbers can only be included once.


Choose a number less than 61 (perhaps your house number, your age, a friend's age, or the day of the month you were born on), set all the binary digits to zero, and then start with the left-most digit (32), trying out if it should be zero or one. See if you can find a method for converting the number without too much trial and error. Try different numbers until you find a quick way of doing this.

Can you figure out the binary representation for 23 without using the interactive? What about 4, 0, and 32? Check all your answers using the interactive to verify they are correct.

Can you figure out a systematic approach to counting in binary? i.e. start with the number 0, then increment it to 1, then 2, then 3, and so on, all the way up to the highest number that can be made with the 6 bits. Try counting from 0 to 16, and see if you can detect a pattern. Hint: Think about how you add 1 to a number in base 10. e.g. how do you work out 7 + 1, 38 + 1, 19 + 1, 99 + 1, 230899999 + 1, etc? Can you apply that same idea to binary?

Using your new knowledge of the binary number system, can you figure out a way to count to higher than 10 using your 10 fingers? What is the highest number you can represent using your 10 fingers? What if you included your 10 toes as well (so you have 20 fingers and toes to count with).

A binary number can be incremented by starting at the right and flipping all consecutive bits until a 1 comes up (which will be on the very first bit half of the time).

Counting on fingers in binary means that you can count to 31 on 5 fingers, and 1023 on 10 fingers. There are a number of videos on YouTube of people counting in binary on their fingers. One twist is to wear white gloves with the numbers 16, 8, 4, 2, 1 on the 5 fingers respectively, which makes it easy to work out the value of having certain fingers raised.

The interactive used exactly 6 bits. In practice, we can use as many or as few bits as we need, just like we do with decimal. For example, with 5 bits, the place values would be 16, 8, 4, 2 and 1, so the largest value is 11111 in binary, or 31 in decimal. Representing 14 with 5 bits would give 01110.

Write representations for the following. If it is not possible to do the representation, put "Impossible".

  • Represent 101 with 7 bits
  • Represent 28 with 10 bits
  • Represent 7 with 3 bits
  • Represent 18 with 4 bits
  • Represent 28232 with 16 bits

The answers are (spaces are added to make the answers easier to read, but are not required).

  • 101 with 7 bits is: 110 0101
  • 28 with 10 bits is: 00 0001 1100
  • 7 with 3 bits is: 111
  • 18 with 4 bits is: Impossible (not enough bits to represent value)
  • 28232 with 16 bits is: 0110 1110 0100 1000

An important concept with binary numbers is the range of values that can be represented using a given number of bits. When we have 8 bits the binary numbers start to get useful – they can represent values from 0 to 255, so it is enough to store someone's age, the day of the month, and so on.

Groups of 8 bits are so useful that they have their own name: a byte . Computer memory and disk space are usually divided up into bytes, and bigger values are stored using more than one byte. For example, two bytes (16 bits) are enough to store numbers from 0 to 65,535. Four bytes (32 bits) can store numbers up to 4,294,967,295. You can check these numbers by working out the place values of the bits. Every bit that's added will double the range of the number.

In practice, computers store numbers with either 16, 32, or 64 bits. This is because these are full numbers of bytes (a byte is 8 bits), and makes it easier for computers to know where each number starts and stops.

Candles on birthday cakes use the base 1 numbering system, where each place is worth 1 more than the one to its right. For example, the number 3 is 111, and 10 is 1111111111. This can cause problems as you get older – if you've ever seen a cake with 100 candles on it, you'll be aware that it's a serious fire hazard.


Luckily it's possible to use binary notation for birthday candles – each candle is either lit or not lit. For example, if you are 18, the binary notation is 10010, and you need 5 candles (with only two of them lit).

There's a video on using binary notation for counting up to 1023 on your hands, as well as using it for birthday cakes .

It's a lot smarter to use binary notation on candles for birthdays as you get older, as you don't need as many candles.

Most of the time binary numbers are stored electronically, and we don't need to worry about making sense of them. But sometimes it's useful to be able to write down and share numbers, such as the unique identifier assigned to each digital device (MAC address), or the colours specified in an HTML page.

Writing out long binary numbers is tedious – for example, suppose you need to copy down the 16-bit number 0101001110010001. A widely used shortcut is to break the number up into 4-bit groups (in this case, 0101 0011 1001 0001), and then write down the digit that each group represents (giving 5391). There's just one small problem: each group of 4 bits can go up to 1111, which is 15, and the digits only go up to 9.

The solution is simple: we introduce symbols for the digits from 1010 (10) to 1111 (15), which are just the letters A to F. So, for example, the 16-bit binary number 1011 1000 1110 0001 can be written more concisely as B8E1. The "B" represents the binary 1011, which is the decimal number 11, and the E represents binary 1110, which is decimal 14.

Because we now have 16 digits, this representation is base 16, and known as hexadecimal (or hex for short). Converting between binary and hexadecimal is very simple, and that's why hexadecimal is a very common way of writing down large binary numbers.
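The 4-bit grouping trick described above is easy to check in code. Here is a minimal sketch in Python (the variable names are illustrative):

```python
# Convert a 16-bit binary string to hexadecimal by taking 4-bit groups.
bits = "1011100011100001"

# Split into 4-bit groups and map each group to one hex digit.
groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
hex_digits = "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)

print(groups)      # ['1011', '1000', '1110', '0001']
print(hex_digits)  # B8E1

# Python's built-in conversion agrees:
print(format(int(bits, 2), "X"))  # B8E1
```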

Here's a full table of all the 4-bit numbers and their hexadecimal digit equivalents:

  0000 → 0    0100 → 4    1000 → 8    1100 → C
  0001 → 1    0101 → 5    1001 → 9    1101 → D
  0010 → 2    0110 → 6    1010 → A    1110 → E
  0011 → 3    0111 → 7    1011 → B    1111 → F

For example, the largest 8-bit binary number is 11111111. This can be written as FF in hexadecimal. Both of those representations mean 255 in our conventional decimal system (you can check that by converting the binary number to decimal).

Which notation you use will depend on the situation; binary numbers represent what is actually stored, but can be confusing to read and write; hexadecimal numbers are a good shorthand of the binary; and decimal numbers are used if you're trying to understand the meaning of the number or doing normal math. All three are widely used in computer science.

It is important to remember though, that computers only represent numbers using binary. They cannot represent numbers directly in decimal or hexadecimal.

A common place that numbers are stored on computers is in spreadsheets or databases. These can be entered either through a spreadsheet program or database program, through a program you or somebody else wrote, or through additional hardware such as sensors, collecting data such as temperatures, air pressure, or ground shaking.

Some of the things that we might think of as numbers, such as the telephone number (03) 555-1234, aren't actually stored as numbers, as they contain important characters (like dashes and spaces) as well as the leading 0 which would be lost if it was stored as a number (the above number would come out as 35551234, which isn't quite right). These are stored as text , which is discussed in the next section.

On the other hand, things that don't look like a number (such as "30 January 2014") are often stored using a value that is converted to a format that is meaningful to the reader (try typing two dates into Excel, and then subtract one from the other – the result is a useful number). In the underlying representation, a number is used. Program code is used to translate the underlying representation into a meaningful date on the user interface.

The difference between two dates in Excel is the number of days between them; the date itself (as in many systems) is stored as the amount of time elapsed since a fixed date (such as 1 January 1900). You can test this by typing a date like "1 January 1850" – chances are that it won't be formatted as a normal date. Likewise, a date sufficiently in the future may behave strangely due to the limited number of bits available to store the date.

Numbers are used to store things as diverse as dates, student marks, prices, statistics, scientific readings, sizes and dimensions of graphics.

The following issues need to be considered when storing numbers on a computer:

  • What range of numbers should be able to be represented?
  • How do we handle negative numbers?
  • How do we handle decimal points or fractions?

In practice, we need to allocate a fixed number of bits to a number before we know how big the number is. This is often 32 bits or 64 bits, although it can be set to 16 bits, or even 128 bits, if needed. Otherwise, a computer has no way of knowing where a number starts and ends.

Any system that stores numbers needs to make a compromise between the number of bits allocated to store the number, and the range of values that can be stored.

In some systems (like the Java and C programming languages and databases) it's possible to specify how accurately numbers should be stored; in others it is fixed in advance (such as in spreadsheets).

Some are able to work with arbitrarily large numbers by increasing the space used to store them as necessary (e.g. integers in the Python programming language). However, it is likely that these are still working with a multiple of 32 bits (e.g. 64 bits, 96 bits, 128 bits, 160 bits, etc). Once the number is too big to fit in 32 bits, the computer would reallocate it to have up to 64 bits.
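Python's arbitrary-precision integers make this growth easy to observe. A small demonstration (standard Python only):

```python
# Python integers grow as needed rather than overflowing.
small = 2**31 - 1          # fits comfortably in 32 bits
big = 2**100               # far too large for a 64-bit word

print(small.bit_length())  # 31
print(big.bit_length())    # 101

# Arithmetic on huge values just works, with no overflow:
print(big + 1 == 2**100 + 1)  # True
```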

In some programming languages there isn't a check for when a number gets too big (overflows). For example, if you have an 8-bit number using two's complement, then 01111111 is the largest number (127), and if you add one without checking, it will change to 10000000, which happens to be the number -128. (Don't worry about two's complement too much, it's covered later in this section.) This can cause serious problems if not checked for, and is behind a variant of the Y2K problem, called the Year 2038 problem , involving a 32-bit number overflowing for dates on Tuesday, 19 January 2038.
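The wrap-around just described can be simulated by masking a result down to 8 bits. This is a sketch of what fixed-size hardware effectively does, not real hardware behaviour (the function name is illustrative):

```python
def to_signed_8bit(value):
    """Interpret the low 8 bits of value as an 8-bit two's complement number."""
    value &= 0xFF                 # keep only 8 bits, as fixed-size hardware would
    return value - 256 if value >= 128 else value

print(to_signed_8bit(127))       # 127, the largest 8-bit signed value
print(to_signed_8bit(127 + 1))   # -128: adding 1 overflows and wraps around
```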

An xkcd comic on number overflow

On tiny computers, such as those embedded inside your car, washing machine, or a tiny sensor that is barely larger than a grain of sand, we might need to specify more precisely how big a number needs to be. While computers prefer to work with chunks of 32 bits, we could write a program (as an example for an earthquake sensor) that knows the first 7 bits are the latitude, the next 7 bits are the longitude, the next 10 bits are the depth, and the last 8 bits are the amount of force.

Even on standard computers, it is important to think carefully about the number of bits you will need. For example, if you have a field in your database that could be either "0", "1", "2", or "3" (perhaps representing the four bases that can occur in a DNA sequence), and you used a 64 bit number for every one, that will add up as your database grows. If you have 10,000,000 items in your database, you will have wasted 62 bits for each one (only 2 bits is needed to represent the 4 numbers in the example), a total of 620,000,000 bits, which is around 74 MB. If you are doing this a lot in your database, that will really add up – human DNA has about 3 billion base pairs in it, so it's incredibly wasteful to use more than 2 bits for each one.

And for applications such as Google Maps, which are storing an astronomical amount of data, wasting space is not an option at all!

It is really useful to know roughly how many bits you will need to represent a certain value. Have a think about the following scenarios, and choose the best number of bits out of the options given. You want to ensure that the largest possible number will fit within the number of bits, but you also want to ensure that you are not wasting space.

  • Storing the day of the week - a) 1 bit - b) 4 bits - c) 8 bits - d) 32 bits
  • Storing the number of people in the world - a) 16 bits - b) 32 bits - c) 64 bits - d) 128 bits
  • Storing the number of roads in New Zealand - a) 16 bits - b) 32 bits - c) 64 bits - d) 128 bits
  • Storing the number of stars in the universe - a) 16 bits - b) 32 bits - c) 64 bits - d) 128 bits
Answers:

  • b (actually, 3 bits is enough as it gives 8 values, but amounts that fit evenly into 8-bit bytes are easier to work with)
  • c (32 bits is slightly too small, so you will need 64 bits)
  • b (This is a challenging question, but one a database designer would have to think about. There's about 94,000 km of roads in New Zealand, so if the average length of a road was 1 km, there would be too many roads for 16 bits. Either way, 32 bits would be a safe bet.)
  • d (Even 64 bits is not enough, but 128 bits is plenty! Remember that 128 bits isn't just twice the range of 64 bits.)

The binary number representation we have looked at so far allows us to represent positive numbers only. In practice, we will want to be able to represent negative numbers as well, such as when the balance of an account goes to a negative amount, or the temperature falls below zero. In our normal representation of base 10 numbers, we represent negative numbers by putting a minus sign in front of the number. But in binary, is it this simple?

We will look at two possible approaches: Adding a simple sign bit, much like we do for decimal, and then a more useful system called two's complement.

Using a simple sign bit

On a computer we don't have minus signs for numbers (storing the minus character as text doesn't work well, because you can't do arithmetic on characters), but we can represent the sign by allocating one extra bit, called a sign bit. Just like with decimal numbers, we put the negative indicator on the left of the number: when the sign bit is set to "0" the number is positive, and when the sign bit is set to "1" the number is negative (just as if there were a minus sign in front of it).

For example, if we wanted to represent the number 41 using 7 bits along with an additional bit that is the sign bit (to give a total of 8 bits), we would represent it by 00101001 . The first bit is a 0, meaning the number is positive, then the remaining 7 bits give 41 , meaning the number is +41 . If we wanted to make -59 , this would be 10111011 . The first bit is a 1, meaning the number is negative, and then the remaining 7 bits represent 59 , meaning the number is -59 .

Using 8 bits as described above (one for the sign, and 7 for the actual number), what would be the binary representations for 1, -1, -8, 34, -37, -88, and 102?

The spaces are not necessary, but are added to make reading the binary numbers easier

  • 1 is 0000 0001
  • -1 is 1000 0001
  • -8 is 1000 1000
  • 34 is 0010 0010
  • -37 is 1010 0101
  • -88 is 1101 1000
  • 102 is 0110 0110

Going the other way is just as easy. If we have the binary number 10010111 , we know it is negative because the first digit is a 1. The number part is the next 7 bits 0010111 , which is 23 . This means the number is -23 .

What would the decimal values be for the following, assuming that the first bit is a sign bit?

  • 00010011 is 19
  • 10000110 is -6
  • 10100011 is -35
  • 01111111 is 127
  • 11111111 is -127

But what about 10000000? That converts to -0 . And 00000000 is +0 . Since -0 and +0 are both just 0, it is very strange to have two different representations for the same number.
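The sign-bit scheme above can be sketched as a pair of small helper functions, assuming 8 bits (the function names are illustrative):

```python
def sign_magnitude(n):
    """Encode n (-127..127) as an 8-bit sign-magnitude string."""
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), "07b")

def from_sign_magnitude(bits):
    """Decode an 8-bit sign-magnitude string back to an integer."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == "1" else magnitude

print(sign_magnitude(-59))              # 10111011
print(from_sign_magnitude("10010111"))  # -23

# The two-zeros problem: distinct patterns both decode to 0.
print(from_sign_magnitude("00000000") == from_sign_magnitude("10000000"))  # True
```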

This is one of the reasons that we don't use a simple sign bit in practice. Instead, computers usually use a more sophisticated representation for negative binary numbers called two's complement .

Two's complement

There's an alternative representation called two's complement , which avoids having two representations for 0, and more importantly, makes it easier to do arithmetic with negative numbers.

Representing positive numbers with two's complement

Representing positive numbers is the same as the method you have already learnt. Using 8 bits, the leftmost bit is a zero and the other 7 bits are the usual binary representation of the number; for example, 1 would be 00000001 , and 50 would be 00110010 .

Representing negative numbers with two's complement

This is where things get more interesting. To convert a negative number to its two's complement representation, use the following process.

  1. Convert the number to binary (don't use a sign bit, and pretend it is a positive number).
  2. Invert all the digits (i.e. change 0's to 1's and 1's to 0's).
  3. Add 1 to the result (adding 1 is easy in binary; you could do it by converting to decimal first, but think carefully about what happens when a binary number is incremented by 1 by trying a few; there are more hints in the panel below).

For example, assume we want to convert -118 to its two's complement representation. We would use the process as follows.

  1. The binary number for 118 is 01110110 .
  2. 01110110 with the digits inverted is 10001001 .
  3. 10001001 + 1 is 10001010 .

Therefore, the two's complement representation for -118 is 10001010 .
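The three steps (binary, invert, add one) translate directly into code. A minimal sketch for 8 bits (the function name is illustrative):

```python
def twos_complement(n, bits=8):
    """Return the two's complement bit pattern of n as a string."""
    if n >= 0:
        return format(n, f"0{bits}b")
    pattern = format(-n, f"0{bits}b")                              # step 1: binary of the magnitude
    inverted = "".join("1" if b == "0" else "0" for b in pattern)  # step 2: invert the digits
    return format(int(inverted, 2) + 1, f"0{bits}b")               # step 3: add 1

print(twos_complement(-118))  # 10001010
print(twos_complement(50))    # 00110010
```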

The rule for adding one to a binary number is pretty simple, so we'll let you figure it out for yourself. First, if a binary number ends with a 0 (e.g. 1101010), how would the number change if you replace the last 0 with a 1? Now, if it ends with 01, how much would it increase if you change the 01 to 10? What about ending with 011? 011111?

The method for adding is so simple that it's easy to build computer hardware to do it very quickly.

What would be the two's complement representation for the following numbers, using 8 bits ? Follow the process given in this section, and remember that you do not need to do anything special for positive numbers.

  • 19 in binary is 0001 0011 , which is the two's complement for a positive number.
  • For -19, we take the binary of the positive, which is 0001 0011 (above), invert it to 1110 1100, and add 1, giving a representation of 1110 1101 .
  • 107 in binary is 0110 1011 , which is the two's complement for a positive number.
  • For -107, we take the binary of the positive, which is 0110 1011 (above), invert it to 1001 0100, and add 1, giving a representation of 1001 0101 .
  • For -92, we take the binary of the positive, which is 0101 1100, invert it to 1010 0011, and add 1, giving a representation of 1010 0100 . (If you have this incorrect, double check that you incremented by 1 correctly).

Converting a two's complement number back to decimal

In order to reverse the process, we need to know whether the number we are looking at is positive or negative. For positive numbers, we can simply convert the binary number back to decimal. But for negative numbers, we first need to convert it back to a normal binary number.

So how do we know if the number is positive or negative? It turns out (for reasons you will understand later in this section) that two's complement numbers that are negative always start in a 1, and positive numbers always start in a 0. Have a look back at the previous examples to double check this.

So, if the number starts with a 1, use the following process to convert the number back to a negative decimal number.

  • Subtract 1 from the number.
  • Invert all the digits.
  • Convert the resulting binary number to decimal.
  • Add a minus sign in front of it.

So if we needed to convert 11100010 back to decimal, we would do the following.

  • Subtract 1 from 11100010 , giving 11100001 .
  • Invert all the digits, giving 00011110 .
  • Convert 00011110 to decimal, giving 30 .
  • Add a negative sign, giving -30 .

Convert the following two's complement numbers to decimal.

  • 10001100 -> (-1) 10001011 -> (inverted) 01110100 -> (to decimal) 116 -> (negative sign added) -116
  • 10111111 -> (-1) 10111110 -> (inverted) 01000001 -> (to decimal) 65 -> (negative sign added) -65
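The reverse process (subtract 1, invert, convert) can be sketched the same way (the function name is illustrative):

```python
def from_twos_complement(bits):
    """Convert a two's complement bit string back to a decimal integer."""
    if bits[0] == "0":                      # leading 0: ordinary positive number
        return int(bits, 2)
    minus_one = int(bits, 2) - 1            # step 1: subtract 1
    pattern = format(minus_one, f"0{len(bits)}b")
    inverted = "".join("1" if b == "0" else "0" for b in pattern)  # step 2: invert
    return -int(inverted, 2)                # steps 3 and 4: to decimal, add minus sign

print(from_twos_complement("11100010"))  # -30
print(from_twos_complement("10111111"))  # -65
```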

How many numbers can be represented using two's complement?

While it might initially seem that there is no bit allocated as the sign bit, the left-most bit behaves like one. With 8 bits, you can still only make 256 possible patterns of 0's and 1's. If you attempted to use 8 bits to represent positive numbers up to 255, and negative numbers down to -255, you would quickly realise that some numbers were mapped onto the same pattern of bits. Obviously, this will make it impossible to know what number is actually being represented!

In practice, numbers within the following ranges can be represented. The unsigned range is the range of numbers you can represent if you only allow positive numbers (no sign is needed), and the two's complement range is the range you can represent if you require both positive and negative numbers:

  Bits   Unsigned range            Two's complement range
  8      0 to 255                  -128 to 127
  16     0 to 65,535               -32,768 to 32,767
  32     0 to 4,294,967,295        -2,147,483,648 to 2,147,483,647

You can work these out because the range of 8-bit values stored as unsigned numbers runs from 00000000 to 11111111 (i.e. 0 to 255 in decimal), while the signed two's complement range runs from 10000000 (the lowest number, -128 in decimal) to 01111111 (the highest number, 127 in decimal). This might seem a bit weird, but it works out really well, because normal binary addition can be used with this representation even if you're adding a negative number.
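The ranges described can be computed for any word size. A quick sketch (the function name is illustrative):

```python
def ranges(bits):
    """Return (unsigned range, two's complement range) for a given word size."""
    unsigned = (0, 2**bits - 1)
    signed = (-2**(bits - 1), 2**(bits - 1) - 1)
    return unsigned, signed

print(ranges(8))   # ((0, 255), (-128, 127))
print(ranges(16))  # ((0, 65535), (-32768, 32767))
```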

Adding negative binary numbers

Before adding negative binary numbers, we'll look at adding positive numbers. It's basically the same as the addition methods used on decimal numbers, except the rules are way simpler because there are only two different digits that you might add!

You've probably learnt about column addition. For example, the following column addition would be used to do 128 + 255:

    128
  + 255
  -----
    383

When you go to add 5 + 8, the result is higher than 9, so you put the 3 in the one's column, and carry the 1 to the 10's column. Binary addition works in exactly the same way.

Adding positive binary numbers

If you wanted to add two positive binary numbers, such as 00001111 and 11001110 , you would follow a similar process to the column addition. You only need to know 0+0, 0+1, 1+0, 1+1, and 1+1+1. The first three are just what you might expect. Adding 1+1 causes a carry digit, since in binary 1+1 = 10, which translates to "0, carry 1" when doing column addition. The last one, 1+1+1, adds up to 11 in binary, which we can express as "1, carry 1". For our two example numbers, the addition works like this:

    00001111
  + 11001110
  ----------
    11011101

Remember that the digits can be only 1 or 0. So you will need to carry a 1 to the next column if the total you get for a column is (decimal) 2 or 3.
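You can check your column additions against a quick sketch like this (the numbers are the two examples above):

```python
a, b = "00001111", "11001110"   # the two example numbers

# Add them as binary integers, then show the 8-bit result.
total = int(a, 2) + int(b, 2)
print(format(total, "08b"))                   # 11011101
print(int(a, 2), "+", int(b, 2), "=", total)  # 15 + 206 = 221
```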

Adding negative numbers with a simple sign bit

With negative numbers using sign bits like we did before, this does not work. If you wanted to add +11 (01011) and -7 (10111) , you would expect to get an answer of +4 (00100) . But simply adding the two bit patterns gives:

    01011
  + 10111
  -------
   100010

Which is -2 .

One way we could solve the problem is to use column subtraction instead. But this would require giving the computer a hardware circuit which could do this. Luckily this is unnecessary, because addition with negative numbers works automatically using two's complement!

Adding negative numbers with two's complement

For the above addition (+11 + -7), we can start by converting the numbers to their 5-bit two's complement form. Because 01011 (+11) is a positive number, it does not need to be changed. But for the negative number, 00111 (-7) (sign bit from before removed as we don't use it for two's complement), we need to invert the digits and then add 1, giving 11001 .

Adding these two numbers works like this:

    01011
  + 11001
  -------
   100100

Any extra bits to the left (beyond what we are using, in this case 5 bits) have been truncated. This leaves 00100 , which is 4 , like we were expecting.

We can also use this for subtraction. If we are subtracting a positive number from a positive number, we would need to convert the number we are subtracting to a negative number. Then we should add the two numbers. This is the same as for decimal numbers, for example 5 - 2 = 3 is the same as 5 + (-2) = 3.

This property of two's complement is very useful. It means that positive numbers and negative numbers can be handled by the same computer circuit, and addition and subtraction can be treated as the same operation.
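The +11 + -7 example can be replayed in code. The only extra step is masking the sum back to 5 bits, which plays the role of discarding the carried-out bit (a sketch; the variable names are illustrative):

```python
BITS = 5
MASK = 2**BITS - 1   # 0b11111: keeps only the low 5 bits

eleven = 0b01011               # +11
minus_seven = (-7) & MASK      # two's complement of -7 in 5 bits: 0b11001

total = (eleven + minus_seven) & MASK   # add, then truncate extra bits
print(format(minus_seven, "05b"))  # 11001
print(format(total, "05b"))        # 00100
print(total)                       # 4
```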

The idea of using a "complementary" number to change subtraction to addition can be seen by doing the same in decimal. The complement of a decimal digit is the digit that adds up to 10; for example, the complement of 4 is 6, and the complement of 8 is 2. (The word "complement" comes from the root "complete" – it completes it to a nice round number.)

Subtracting 2 from 6 is the same as adding the complement, and ignoring the extra 1 digit on the left. The complement of 2 is 8, so we add 8 to 6, giving (1)4.

For larger numbers (such as subtracting the two 3-digit numbers 255 - 128), the complement is the number that adds up to the next power of 10 i.e. 1000-128 = 872. Check that adding 872 to 255 produces (almost) the same result as subtracting 128.
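The decimal version can be checked the same way. A sketch of the "ten's complement" trick (the function name is illustrative):

```python
def tens_complement(n, digits):
    """The number that completes n to the next power of 10."""
    return 10**digits - n

# 255 - 128 via the complement of 128:
comp = tens_complement(128, 3)      # 1000 - 128 = 872
result = 255 + comp                 # 1127
print(comp, result, result - 1000)  # 872 1127 127 (drop the extra leading 1)
```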

Working out complements in binary is way easier because there are only two digits to work with, but working them out in decimal may help you to understand what is going on.

Using sign bits vs using two's complement

We have now looked at two different ways of representing negative numbers on a computer. In practice, a simple sign bit is rarely used, because of having two different representations of zero, and requiring a different computer circuit to handle negative and positive numbers, and to do addition and subtraction.

Two's complement is widely used, because it only has one representation for zero, and it allows positive numbers and negative numbers to be treated in the same way, and addition and subtraction to be treated as one operation.

There are other systems such as "One's Complement" and "Excess-k", but two's complement is by far the most widely used in practice.


Data Representation in Computer: Number Systems, Characters, Audio, Image and Video

By Anuj Kumar · Published 16 July 2021

What is Data Representation in Computer?

A computer uses a fixed number of bits to represent a piece of data which could be a number, a character, image, sound, video, etc. Data representation is the method used internally to represent data in a computer. Let us see how various types of data can be represented in computer memory.

Before discussing data representation of numbers, let us see what a number system is.

Number Systems

Number systems are the techniques used to represent numbers in computer architecture; every value that you save to or read from computer memory has a defined number system.

A number is a mathematical object used to count, label, and measure. A number system is a systematic way to represent numbers. The number system we use in our day-to-day life is the decimal number system that uses 10 symbols or digits.

The number 289 is pronounced as two hundred and eighty-nine and it consists of the symbols 2, 8, and 9. Similarly, there are other number systems. Each has its own symbols and method for constructing a number.

A number system has a unique base, which depends upon the number of symbols. The number of symbols used in a number system is called the base or radix of a number system.

Let us discuss some of the number systems. Computer architecture supports the following number systems:

Binary Number System

The binary number system has only two digits, 0 and 1. Every value is represented using only 0 and 1 in this number system. The base of the binary number system is 2, because it has only two digits.

Octal Number System

The octal number system has eight digits, 0 to 7. Every value is represented with 0, 1, 2, 3, 4, 5, 6, and 7 in this number system. The base of the octal number system is 8, because it has only 8 digits.

Decimal Number System

The decimal number system has ten digits, 0 to 9. Every value is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 in this number system. The base of the decimal number system is 10, because it has only 10 digits.

Hexadecimal Number System

The hexadecimal number system has sixteen alphanumeric digits, 0 to 9 and A to F. Every value is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F in this number system. The base of the hexadecimal number system is 16, because it has 16 digits.

Here A is 10, B is 11, C is 12, D is 13, E is 14 and F is 15 .

Data Representation of Characters

There are different methods to represent characters . Some of them are discussed below:


ASCII

The code called ASCII (pronounced "AS-key"), which stands for American Standard Code for Information Interchange, uses 7 bits to represent each character in computer memory. The ASCII representation has been adopted as a standard by the U.S. government and is widely accepted.

A unique integer number is assigned to each character. This number called ASCII code of that character is converted into binary for storing in memory. For example, the ASCII code of A is 65, its binary equivalent in 7-bit is 1000001.

Since there are exactly 128 unique combinations of 7 bits, this 7-bit code can represent only 128 characters. Another version is ASCII-8, also called extended ASCII, which uses 8 bits for each character and can represent 256 different characters.

For example, the letter A is represented by 01000001, B by 01000010 and so on. ASCII code is enough to represent all of the standard keyboard characters.
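Python's built-in ord and chr functions expose these ASCII codes directly:

```python
# Each character maps to its ASCII code, and back again.
print(ord("A"))                 # 65
print(format(ord("A"), "07b"))  # 1000001, the 7-bit pattern from above
print(chr(66))                  # B
```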

EBCDIC

EBCDIC stands for Extended Binary Coded Decimal Interchange Code. It is similar to ASCII and is an 8-bit code used in computers manufactured by International Business Machines (IBM). It is capable of encoding 256 characters.

If ASCII-coded data is to be used in a computer that uses EBCDIC representation, it is necessary to transform ASCII code to EBCDIC code. Similarly, if EBCDIC coded data is to be used in an ASCII computer, EBCDIC code has to be transformed to ASCII.

ISCII

ISCII stands for Indian Standard Code for Information Interchange or Indian Script Code for Information Interchange. It is an encoding scheme for representing various writing systems of India. ISCII uses 8 bits for data representation.

It was evolved by a standardization committee under the Department of Electronics during 1986-88 and adopted by the Bureau of Indian Standards (BIS). Nowadays ISCII has been replaced by Unicode.

Unicode

Using 8-bit ASCII we can represent only 256 characters. This cannot represent all characters of the written languages of the world and other symbols. Unicode was developed to resolve this problem. It aims to provide a standard character encoding scheme, which is universal and efficient.

It provides a unique number for every character, no matter what the language and platform be. Unicode originally used 16 bits which can represent up to 65,536 characters. It is maintained by a non-profit organization called the Unicode Consortium.

The Consortium first published version 1.0.0 in 1991 and continues to develop standards based on that original work. Nowadays Unicode uses more than 16 bits and hence it can represent more characters. Unicode can represent characters in almost all written languages of the world.

Data Representation of Audio, Image and Video

In most cases, we may have to represent and process data other than numbers and characters. This may include audio data, images, and videos. We can see that like numbers and characters, the audio, image, and video data also carry information.

We will see different file formats for storing sound, image, and video.

Multimedia data such as audio, image, and video are stored in different types of files. The variety of file formats is due to the fact that there are quite a few approaches to compressing the data and a number of different ways of packaging the data.

For example, an image is most popularly stored in the Joint Photographic Experts Group (JPEG) file format. An image file consists of two parts – header information and image data. Information such as the name of the file, size, modified date, file format, etc. is stored in the header part.

The intensity value of all pixels is stored in the data part of the file. The data can be stored uncompressed or compressed to reduce the file size. Normally, the image data is stored in compressed form. Let us understand what compression is.

Take a simple example of a pure black image of size 400×400 pixels. We can repeat the information black, black, …, black in all 160,000 (400×400) pixels. This is the uncompressed form, while in the compressed form black is stored only once, along with the information to repeat it 160,000 times.

Numerous such techniques are used to achieve compression. Depending on the application, images are stored in various file formats such as Bitmap (BMP), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), and Portable Network Graphics (PNG).

What we said about the header information and compression is also applicable to audio and video files. Digital audio data can be stored in different file formats like WAV, MP3, MIDI, AIFF, etc. An audio file format, sometimes referred to as a 'container format', describes how digital audio data is stored.

For example, the WAV file format typically contains uncompressed sound, while MP3 files typically contain compressed audio data. Synthesized music data is stored in MIDI (Musical Instrument Digital Interface) files.

Similarly, video is also stored in different file formats such as AVI (Audio Video Interleave) – a file format designed to store both audio and video data in a standard package that allows synchronous audio-with-video playback – as well as MP4, MPEG-2, WMV, etc.

FAQs About Data Representation in Computer

What is a number system? Explain with an example.

Let us discuss some of the number systems. Computer architecture supports the following number systems: 1. Binary Number System 2. Octal Number System 3. Decimal Number System 4. Hexadecimal Number System


Representation of Numbers and Characters in Computer

  • First Online: 24 November 2023

Orhan Gazi, Electrical and Electronics Engineering, Ankara Medipol University, Altındağ, Ankara, Türkiye

This chapter covers the computer representation of numbers and characters. Computers use the binary number system: everything in a computer is represented by binary numbers. Information is expressed using symbols that include characters, numbers, and other symbols. Each symbol is represented by a 7-bit ASCII code. The ASCII representation of positive numbers is the same as their binary representation; however, negative numbers are represented in 2's complement form in most electronic devices, including computers.

Gazi, O. (2024). Representation of Numbers and Characters in Computer. In: Modern C Programming. Springer, Cham. https://doi.org/10.1007/978-3-031-45361-8_1



The number system is important for understanding how data are represented before they can be processed by any digital system, including a digital computer. The arithmetic values used for representing quantities and for making calculations are defined as numbers, while a symbol such as "4", "5", or "6" that represents a number is known as a numeral. Without numbers it would be impossible to count things or to express dates, times, and money; numbers are also used for measurement and for labelling. The properties of numbers make it possible to perform arithmetic operations on them, and numbers can be written both in numeric form and in words.

For example, 3 is written as "three" in words and 35 is written as "thirty-five". There are different types of numbers: whole and natural numbers, odd and even numbers, rational and irrational numbers, and so on.

Number and Its Types

Numbers used in mathematics are mostly written in the decimal number system, which uses the digits 0 to 9 and base 10. Some of the types of numbers in the decimal system are:

  • Positive Numbers are represented to the right of zero on the number line; their value increases as we move right. Example: 1, 2, 3, 4.
  • Negative Numbers are represented to the left of zero; their value decreases as we move left. Example: -1, -2, -3, -4.
  • Natural Numbers are the most basic type of number, ranging from 1 to infinity. They are also called counting numbers and are represented by the symbol N.
  • Whole Numbers are the natural numbers together with zero, represented by the symbol W.
  • Integers are the whole numbers together with the negatives of the natural numbers; they cannot be written in fractional a/b form. Integers range from negative infinity to positive infinity, including zero, and are represented by the symbol Z.
  • Rational Numbers are numbers that can be represented in fraction form a/b, where a and b are integers and b ≠ 0. All fractions are rational numbers, but not all rational numbers are fractions.
  • Irrational Numbers are numbers that cannot be represented as a fraction a/b.
  • Prime Numbers have no factors other than 1 and themselves. All numbers other than prime numbers are termed composite, except 0 and 1; zero is neither prime nor composite.

A number system is a writing system for denoting numbers using digits or symbols in a consistent, logical manner. A numeral system represents a useful set of numbers, reflects the arithmetic and algebraic structure of those numbers, and provides a standard representation. All decimal numbers can be formed from the digits 0 to 9; with these ten digits, infinitely many numbers can be created, for example 156, 3907, 3456, 1298, 784859.

Types of Number Systems

Number systems are classified by their base value and the number of allowed digits. The four common types of number system are:

  • Decimal Number System
  • Binary Number System
  • Octal Number System
  • Hexadecimal Number System

Decimal Number System 

A number system with a base value of 10 is termed the decimal number system. It uses the ten digits 0-9 to form numbers. Each digit in a number occupies a specific place, and its place value is a power of 10. From right to left the places are called units, tens, hundreds, thousands, and so on: units have place value 10^0, tens 10^1, hundreds 10^2, thousands 10^3, etc.

For example, 10264 has place values as,

(1 × 10^4) + (0 × 10^3) + (2 × 10^2) + (6 × 10^1) + (4 × 10^0) = 10000 + 0 + 200 + 60 + 4 = 10264
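The place-value expansion above can be computed mechanically. The following sketch (a hypothetical helper, not part of the original text) builds the list of digit-times-power-of-10 terms:

```python
def place_value_expansion(n):
    """Return the list of digit * 10**position terms for a decimal integer."""
    digits = str(n)
    # The leftmost digit has the highest power of 10.
    return [int(d) * 10**(len(digits) - 1 - i) for i, d in enumerate(digits)]

terms = place_value_expansion(10264)
print(terms)       # [10000, 0, 200, 60, 4]
print(sum(terms))  # 10264
```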

Binary Number System 

A number system with base value 2 is termed the binary number system. It uses two digits, 0 and 1, to form numbers, and the numbers formed with these two digits are termed binary numbers. The binary number system is very useful in electronic devices and computer systems because it can be implemented with just two physical states, ON and OFF, i.e. 1 and 0.

Decimal Numbers 0-9 are represented in binary as: 0, 1, 10, 11, 100, 101, 110, 111, 1000, and 1001

For example, 14 can be written as 1110, 19 can be written as 10011, 50 can be written as 110010.
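The conversions above follow the repeated-division-by-2 method; it can be sketched as follows (the function name is illustrative):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string by repeated division by 2."""
    if n == 0:
        return '0'
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # each remainder is the next bit, least significant first
        n //= 2
    return ''.join(reversed(bits))

print(to_binary(14))  # 1110
print(to_binary(19))  # 10011
print(to_binary(50))  # 110010
```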


Logic operations are the backbone of any digital computer, although solving a problem on a computer usually involves arithmetic operations too. The introduction of the mathematics of logic by George Boole laid the foundation for the modern digital computer: he reduced the mathematics of logic to a binary notation of '0' and '1'. The binary number system has two further advantages. First, all kinds of data can be conveniently represented in terms of 0s and 1s. Second, the basic electronic devices used for hardware implementation, including the circuits required for performing arithmetic operations, can be conveniently and efficiently operated in two distinctly different modes.

Octal Number System 

The octal number system is one in which the base value is 8. It uses the eight digits 0-7 to form octal numbers. Octal numbers can be converted to decimal values by multiplying each digit by its place value (8^0, 8^1, 8^2, and so on) and adding the results. Octal is a compact shorthand for binary values and is used, for example, for Unix file permissions. Examples:

(135)10 can be written as (207)8
(215)10 can be written as (327)8
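The octal-to-decimal conversions above multiply each digit by its power of 8; a minimal sketch (the helper name is an assumption for illustration):

```python
def octal_to_decimal(octal_str):
    """Multiply each octal digit by its power of 8 and sum the results."""
    # Enumerating the reversed string gives each digit its power: 8**0, 8**1, ...
    return sum(int(d) * 8**i for i, d in enumerate(reversed(octal_str)))

print(octal_to_decimal('207'))  # 135
print(octal_to_decimal('327'))  # 215
```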

Hexadecimal Number System 

A number system with base value 16 is termed the hexadecimal number system. It uses 16 digits: 0-9 are taken as in the decimal number system, and the values 10-15 are represented by the letters A-F (10 = A, 11 = B, 12 = C, 13 = D, 14 = E, 15 = F). Hexadecimal numbers are useful for handling memory addresses, and the system provides a condensed way of representing the large binary numbers a computer stores and processes. Examples:

(255)10 can be written as (FF)16
(1096)10 can be written as (448)16
(4090)10 can be written as (FFA)16

HEXADECIMAL: 0 1 2 3 4 5 6 7 8 9 A  B  C  D  E  F
DECIMAL:     0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
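The same digit-weight idea works for base 16, using the table to map letters to values. A sketch (the helper name is an assumption):

```python
def hex_to_decimal(hex_str):
    """Convert a hexadecimal string to decimal using digit weights of 16**i."""
    digits = '0123456789ABCDEF'
    # Each digit's index in the table is its value; reversing assigns the powers.
    return sum(digits.index(d) * 16**i for i, d in enumerate(reversed(hex_str.upper())))

print(hex_to_decimal('FF'))   # 255
print(hex_to_decimal('448'))  # 1096
print(hex_to_decimal('FFA'))  # 4090
```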

Sample Problems  

Question 1: Convert (18)10 to a binary number.

Divide repeatedly by 2 and record the remainders: 18 ÷ 2 = 9 remainder 0, 9 ÷ 2 = 4 remainder 1, 4 ÷ 2 = 2 remainder 0, 2 ÷ 2 = 1 remainder 0, 1 ÷ 2 = 0 remainder 1. Reading the remainders from last to first gives the answer.

Therefore (18)10 = (10010)2

Question 2: Convert (325)8 into a decimal number.

(325)8 = 3 × 8^2 + 2 × 8^1 + 5 × 8^0 = 3 × 64 + 2 × 8 + 5 × 1 = 192 + 16 + 5 = (213)10

Question 3: Convert (2056)16 into an octal number.

First convert the hexadecimal number into decimal form:
(2056)16 = 2 × 16^3 + 0 × 16^2 + 5 × 16^1 + 6 × 16^0 = 8192 + 0 + 80 + 6 = (8278)10
Now convert this decimal number into octal by repeated division by 8, recording the remainders:
8278 ÷ 8 = 1034 remainder 6, 1034 ÷ 8 = 129 remainder 2, 129 ÷ 8 = 16 remainder 1, 16 ÷ 8 = 2 remainder 0, 2 ÷ 8 = 0 remainder 2.
Reading the remainders from last to first gives (8278)10 = (20126)8.
Therefore, (2056)16 = (20126)8
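The two-step conversion (hexadecimal to decimal, then decimal to octal by repeated division) can be sketched as follows (the helper name is illustrative; `int(s, 16)` is Python's built-in base conversion):

```python
def decimal_to_octal(n):
    """Convert a positive decimal integer to an octal string by repeated division by 8."""
    digits = []
    while n > 0:
        digits.append(str(n % 8))  # remainders give octal digits, least significant first
        n //= 8
    return ''.join(reversed(digits)) or '0'

decimal = int('2056', 16)          # hexadecimal -> decimal first
print(decimal)                     # 8278
print(decimal_to_octal(decimal))   # 20126
```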

Question 4: Convert (101110)2 into an octal number.

Given the binary number (101110)2, group its digits into 3-bit chunks from the right and convert each chunk using the table:

OCTAL NUMBER  BINARY NUMBER
0             000
1             001
2             010
3             011
4             100
5             101
6             110
7             111

101110 splits into 101 and 110, i.e. 101 = 5 and 110 = 6, so (101110)2 = (56)8.
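The 3-bit grouping method for converting binary to octal can be sketched as (the helper name is an assumption):

```python
def binary_to_octal(bin_str):
    """Group binary digits into 3-bit chunks from the right; each chunk is one octal digit."""
    # Pad on the left so the length is a multiple of 3.
    pad = (-len(bin_str)) % 3
    bin_str = '0' * pad + bin_str
    # int(chunk, 2) converts each 3-bit group to its octal digit value (0-7).
    return ''.join(str(int(bin_str[i:i + 3], 2)) for i in range(0, len(bin_str), 3))

print(binary_to_octal('101110'))  # 56
```

This works because 8 = 2^3, so each octal digit corresponds exactly to three binary digits; no pass through decimal is needed.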



IMAGES

  1. Computer representation of an Integer

    define integer representation in computer

  2. Integer Representation in Computer Memory

    define integer representation in computer

  3. Integers

    define integer representation in computer

  4. Integer

    define integer representation in computer

  5. PPT

    define integer representation in computer

  6. FIXED POINT REPRESENTATION IN COMPUTER ARCHITECTURE: NUMBER REPRESENTATION AND INTEGER

    define integer representation in computer

VIDEO

  1. Integer Representation in C

  2. Representation of Numbers

  3. Chapter 2

  4. Program to swap all elements of two integer arrays using user define function in C.#srn

  5. Integer Representation (Two's Complement) Explained in Haste

  6. FIXED POINT REPRESENTATION IN COMPUTER ARCHITECTURE: NUMBER REPRESENTATION AND INTEGER

COMMENTS

  1. Integer (computer science)

    The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimal and maximum possible value. The most common representation of a positive integer is a string of bits, using the binary numeral system.

  2. PDF CS 107 Lecture 2: Integer Representations

    There is only one zero (yay!) 2. The highest order bit (left-most) is 1 for negative, 0 for positive (so it is easy to tell if a number is negative) 3. Adding two numbers is just…adding! Example: 2 + -5 = -3 0010 ☞ 2 +1011 ☞ -5 1101 ☞ -3 decimal (wow!) Two's Complement: Neat Properties. More useful properties: 4.

  3. PDF Integer Representation

    Bits, binary numbers, and bytes Fixed-width representation of integers: unsigned and signed Modular arithmetic and overflow. positional number representation. 2 4 0. 100 10. 102 101. 2 1. = 2 x 102+ 4 x 101+ 0 x 100. 1. 100 weight.

  4. CS301: Integers and the Representation of Real Numbers

    The general definition of floating point numbers, equation (3.1), leaves us with the problem that numbers have more than one representation. ... For instance, . Since this would make computer arithmetic needlessly complicated, for instance in testing equality of numbers, we use normalized floating point numbers. A number is normalized if its ...

  5. PDF Number Systems and Number Representation

    • The binary, hexadecimal, and octal number systems • Finite representation of unsigned integers • Finite representation of signed integers • Finite representation of rational (floatingpoint) numbers-Why? • A power programmer must know number systems and data representation to fully understand C's . primitive data types. Primitive ...

  6. PDF Number Representation

    Number Representation Sean Farhat Figure 1: Unlike humans, computers can only understand binary In the rst module of this course, we will investigate one big question: How does everything that a computer does get simpli ed into 0's and 1's? It is a beautiful and well thought out process

  7. PDF CS429: Computer Organization and Architecture

    Topics of this Slideset. Numeric Encodings: Unsigned and two's complement Programming Implications: C promotion rules Basic operations: addition, negation, multiplication Consequences of overflow Using shifts to perform power-of-2 multiply/divide.

  8. PDF Number Systems and Number Representation

    The Binary Number System. binary. adjective: being in a state of one of two mutually exclusive conditions such as on or off, true or false, molten or frozen, presence or absence of a signal.From Late Latin bīnārius ("consisting of two"). Characteristics.

  9. Integer Representations (GNU C Language Manual)

    27.1 Integer Representations. Modern computers store integer values as binary (base-2) numbers that occupy a single unit of storage, typically either as an 8-bit char, a 16-bit short int, a 32-bit int, or possibly, a 64-bit long long int. Whether a long int is a 32-bit or a 64-bit value is system dependent. 11.

  10. Data Representation

    We also cover the basics of digital circuits and logic gates, and explain how they are used to represent and process data in computer systems. Our guide includes real-world examples and case studies to help you master data representation principles and prepare for your computer science exams. Check out the links below:

  11. PDF Lecture 2: Number Representation

    How We Store Numbers. • Binary numbers in memory are stored using a finite, fixed number of bits typically: 8 bits (byte) 16 bits (half word) 32 bits (word) 64 bits (double word or quad) If positive pad extra digits with leading 0s. A byte representing 410 = 00000100.

  12. 3.1: Integer Representation

    Representing integer numbers refers to how the computer stores or represents a number in memory. The computer represents numbers in binary (1's and 0's). However, the computer has a limited amount of space that can be used for each number or variable. This directly impacts the size, or range, of the number that can be represented.

  13. How are integer numbers represented by computers?

    Code on January 17, 2021. Computers represent all of its data using the binary system. In this system, the only possible values are 0 and 1, binary digits, which are also called bits. The demonstration below shows the bit-level representation of some common integer data-types in C. char. short.

  14. PDF 1 Representing Numbers in the Computer

    many of them. However, the computer is a finite state machine and can only hold a finite amount of information, albeit a very, very large amount. If we wanted to represent an integer, we could do this by having the computer store each of the digits in the number along with whether the integer is positive or negative. 2

  15. Numbers

    This is where things get more interesting. In order to convert a negative number to its two's complement representation, use the following process. 1. Convert the number to binary (don't use a sign bit, and pretend it is a positive number). 2. Invert all the digits (i.e. change 0's to 1's and 1's to 0's). 3.

  16. PDF Tutorial: Representation of Numbers in Digital Computers, and Digital

    8. convert between a specified value of a rational number and its representation in a digital computer as a Floating-Point number, given a definition of the particular Floating-Point representation scheme in use in the computer where the number is to be represented.

  17. Short Note on Representation of Integers in Computer

    The width and accuracy of an integral type depend on the number of bits in the representation of integers. On the other hand, there are four methods to represent signed numbers in a binary computing system. But some other computer languages also define integer sizes and representation through manual methods that are machine-independent.

  18. PDF Number Systems and Number Representation

    • The binary, hexadecimal, and octal number systems • Finite representation of unsigned integers • Finite representation of signed integers • Finite representation of rational numbers (if time) Why? • A power programmer must know number systems and data representation to fully understand C's primitive data types

  19. Data Representation in Computer: Number Systems, Characters

    A computer uses a fixed number of bits to represent a piece of data which could be a number, a character, image, sound, video, etc. Data representation is the method used internally to represent data in a computer. Let us see how various types of data can be represented in computer memory. Before discussing data representation of numbers, let ...

  20. Representation of Numbers and Characters in Computer

    Abstract. This chapter covers the computer representation of numbers and characters. Computers use binary number system. Everything is represented by binary numbers in computer. Information is expressed using symbols that include characters, numbers, and symbols other than characters. Every symbol and number is represented by 7-bit ASCII codes.

  21. What is a number system?

    A Number system is a method of showing numbers by writing, which is a mathematical way of representing the numbers of a given set, by using the numbers or symbols in a mathematical manner. The writing system for denoting numbers using digits or symbols in a logical manner is defined as a Number system. The numeral system Represents a useful set ...

  22. Ada Computer Science

    To represent a negative integer, such as − 25 in two's complement, it is very useful to know how to find the negative equivalent of a positive number.In this way you can: start from the positive value of the number in two's complement and then; find the negative equivalent - that is the number that has the same distance from zero as the positive number (e.g. + 25 and − 25)

  23. binary

    The question is not about how is the number stored but how is the number represented. eg. I don't care about endianity for int64 if I want to know if number is even or odd by checking the last bit. You don't care if it is in memory as last or first or in the middle or if there is some padding anywhere around or inside the number representation.

  24. What the New Overtime Rule Means for Workers

    The department's final rule, which will go into effect on July 1, 2024, will increase the standard salary level that helps define and delimit which salaried workers are entitled to overtime pay protections under the FLSA.