Binary, also known as the base-2 number system, is a way of writing numbers using only two different symbols, usually a one (1) and a zero (0). Even though this is a whole eight symbols fewer than we normally use (our everyday system is called decimal, or the base-10 number system), it's still possible to write out each and every number.
While counting with only two symbols can get pretty confusing for humans, binary makes a lot of sense in the world of computers. This is because computers are made up of billions of microscopic switches that can either be turned on or off—something perfectly represented by a 0 or 1. In fact, binary is used in just about every computer on Earth and every time you've used one, you've experienced binary in action.
To get an idea of how binary works, let's first take a closer look at the decimal system (remember, the decimal system, or base-10, is how we normally write numbers). What happens when we count from nine to ten?
If you were paying close attention, you might've noticed that we only used one symbol (9) for the number nine, but had to use two (1 and 0) to write the number ten. We can keep track of each "place" (one's place, ten's place, etc.) only as long as we have enough symbols.
Okay, so here's where things start getting cool. Binary works in exactly the same way! The only difference is that we run out of symbols much sooner: after 0 and 1, we have to start using another digit. Let's try counting again, this time from zero to two, in both decimal and binary.
Here, binary is the same as decimal until we reach the number two. Because we can't use the symbol 2 (or 3-9) anymore, we have to write two as 10. If we kept going, three would be written 11 and four would be 100. Starting to get it?
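If you want to check this counting for yourself, most programming languages can convert numbers to binary for you. Here's a quick sketch in Python, using the built-in `bin` function:

```python
# Count from zero to four, printing each number in decimal and binary.
# bin() returns a string like '0b101', so we strip the '0b' prefix.
for n in range(5):
    print(n, "in binary is", bin(n)[2:])
# 0 -> 0, 1 -> 1, 2 -> 10, 3 -> 11, 4 -> 100
```

Just as the text describes, two comes out as 10, three as 11, and four as 100.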
As long as you have enough symbols, you can create a counting system with any base you want. You could even count with a base-1000 system (though we don't recommend it)! In the real world, however, there are a couple of other counting systems that computer scientists use pretty often.
Octal, or the base-8 counting system, uses 8 digits, 0-7. You can count all the way up to 7 before you have to represent 8 with two digits: 10. On the other hand, the hexadecimal, or base-16 counting system, uses 16 digits. Wait a minute, there are only 10 number digits! The solution? 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10!
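Python happens to have built-in helpers for all three of these bases, so we can see the same number written each way (`oct` and `hex` work just like `bin`):

```python
# The same number, sixteen, written in three different bases.
n = 16
print(bin(n)[2:])  # base-2:  10000
print(oct(n)[2:])  # base-8:  20
print(hex(n)[2:])  # base-16: 10

# And going the other way: the hexadecimal digit F means fifteen.
print(int("F", 16))  # 15
```

Notice that in hexadecimal, sixteen really is written 10, just as the text says.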
In the late 1940s, the only way to program the earliest digital computers was to translate commands for them by hand into machine language, made up of only 1s and 0s. In 1951, the computer scientist Corrado Böhm came up with the idea for a computer program called a compiler, which does the translation automatically. The computer scientist Grace Hopper built the first working compiler soon after. This is a great thing, since most computer programs these days use billions and billions of 1s and 0s!
When we download something these days, we don't normally think too much of seeing its size as, say, 1 gigabyte or more. But, take a second to think about what that really means. Giga- is a prefix meaning billion, so a gigabyte is 1 billion bytes! And since a byte is 8 bits, that means every gigabyte is 8 billion bits. That's a lot of 1s and 0s!
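The arithmetic in that paragraph is easy to spell out as a tiny calculation (a sketch; this uses the decimal meaning of giga-, one billion, rather than the power-of-two gibibyte):

```python
# One gigabyte is a billion bytes, and each byte is 8 bits.
bits_per_byte = 8
bytes_per_gigabyte = 1_000_000_000
bits_per_gigabyte = bits_per_byte * bytes_per_gigabyte
print(bits_per_gigabyte)  # 8000000000 -- eight billion 1s and 0s
```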
At first glance, binary seems like a great way to store numbers in computers. But what about say, text documents, music files or even pictures and videos? Just like secret agents, computer scientists came up with a clever trick to have computers save and display all kinds of data, called encoding.
First, they programmed computers to recognize certain numbers in binary. Let's pretend the number 10101010 tells a special computer that the next number is going to be a letter from the English alphabet. Since there are 26 letters, each letter can be given its own number in binary, from 0 to 25. So, for the letter 'A,' a computer might be told 10101010 00000000, the code for "an English letter is coming," followed by binary for the number 0. For the letter 'B,' the computer would be told 10101010 00000001, and so on.
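We can build a toy version of this made-up scheme in a few lines of Python. The marker 10101010 is just the pretend signal from the example above, not a real-world encoding:

```python
# Toy encoder for the made-up scheme in the text: the marker byte
# 10101010 means "an English letter is coming," and the next byte
# holds the letter's number (A = 0, B = 1, ..., Z = 25).
MARKER = "10101010"

def encode_letter(letter):
    number = ord(letter.upper()) - ord("A")     # A -> 0, B -> 1, ...
    return MARKER + " " + format(number, "08b") # number as 8 binary digits

def decode_letter(code):
    marker, number = code.split()
    assert marker == MARKER, "not a letter code"
    return chr(int(number, 2) + ord("A"))

print(encode_letter("A"))                  # 10101010 00000000
print(encode_letter("B"))                  # 10101010 00000001
print(decode_letter("10101010 00000010"))  # C
```

Real computers use standardized encodings such as ASCII and Unicode rather than this pretend marker, but the basic trick is the same: agree on which numbers stand for which pieces of data.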
In 1689, Gottfried Leibniz wrote an article explaining the basics of the binary numeral system and some of its potential uses. Although the idea for a programmable computing device had been put forward by Charles Babbage in the early 19th century, it wasn't until 1941, with Konrad Zuse's invention of the Z3, the first programmable digital computer, that binary was put to use in computers.
Binary digits, or bits, are the smallest unit of storage in a computer. Each bit holds either a 1 or a 0.
To make longer binary numbers, several bits are needed. For example, the binary number 101 (5 in decimal) requires 3 bits. A byte is made up of 8 bits. Sound familiar?
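To see this with real numbers, here's a small Python sketch:

```python
# The binary number 101 takes 3 bits and means 5 in decimal.
print(int("101", 2))        # 5
print(len("101"), "bits")   # 3 bits

# A byte is 8 bits, so the largest value one byte can hold is:
print(int("11111111", 2))   # 255
```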