Audience: developers, technicians, and technically interested readers
Some basics about the CPU
I think most of you are already familiar with the basics of how a CPU does its work, but to be safe, here are some basics in case you are not (by the way: that is totally OK, we’ll change that).
A CPU (central processing unit) is what lets your computer do all this cool stuff. The CPU works in a four-step cycle: input, processing, storage, output.
And how often the CPU runs this cycle depends on its clock rate (that is the thing measured in GHz). For example:
- 1 Hz means the whole cycle runs once per second.
- 1 GHz means it runs 1,000,000,000 times per second.
BTW: the faster the CPU is, the more heat it produces.
Inside the 32-bit and 64-bit CPU
Some quick facts:
- A 32-bit processor can only address a memory (RAM) size of 2^32 bytes (about 4 GB).
- A 64-bit processor can address much more RAM: 2^64 bytes (about 16 exabytes).
It only means that the registers of a 64-bit CPU are twice as wide as those of a 32-bit CPU. Consequently, a 32-bit CPU can process less data per instruction than a 64-bit CPU. Actually, it is relatively easy. If you want to go into more detail, you will find more information here.
Sizes of datatypes (and the fun with the int)
Here is a little example in C that prints out the size of some datatypes:
After running the program you should see something like this:
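On a typical 64-bit Linux system compiled with gcc (an assumption; the exact values depend on compiler and platform), the output looks roughly like this:

```
char:      1
short:     2
int:       4
long:      8
long long: 8
float:     4
double:    8
void*:     8
```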
On the left side is the datatype and on the right side its size in bytes. So, at this point, you might ask yourself: why is the basic int type only 4 bytes big on a 64-bit operating system?
First, some facts about the C compiler (I used gcc, by the way; other compilers work in a similar way). The sizes of the datatypes are not fixed by the hardware alone: the ANSI/ISO C standard only defines minimum ranges (an int must be at least 16 bits, for example), and the actual sizes are chosen by the compiler and the platform’s data model. (IEEE-754, which is often mentioned in this context, only defines the floating-point formats such as float and double.)
If you read the C standard you’ll see that there is no fixed size for the integer type. In theory, an int should have the natural word size of the CPU, for example 4 bytes on a 32-bit processor or 8 bytes on a 64-bit processor. But, as you can see in this little C example, the int has a size of 4 bytes by default. Why?
Pretty simple: it depends on the compiler and the data model it targets. Mainstream 64-bit systems use the LP64 or LLP64 data model, where int stays 4 bytes, mainly for compatibility. That means with an int of 4 bytes you can “only” represent 2^32 different values (the prefix signed or unsigned shifts the range, but the number of representable values stays the same; you can read more here). If you need a type that is guaranteed to be 64 bits wide, you can simply use int64_t (or uint64_t) from <stdint.h>.
I hope I could reasonably explain what the small difference between 32-bit and 64-bit processors is. At this point, I also want to encourage all developers to take a peek at compilers and CPUs. At first glance, everything looks very confusing, but it is actually quite simple. Happy coding! Feel free to follow me on Twitter @diClNeEASY!