All,
Thanks for so many useful resources. But I'm still not able to completely understand how 64-bit processing is faster (or does twice the work) compared to 32-bit systems. I understand that if we use 64-bit data types in our programs (say, long in C#), the processor can process the whole value in one operation instead of splitting it into two, as a 32-bit processor would.
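To make sure I have the "splitting into two" part right, here is a minimal C# sketch of what I understand a 32-bit processor effectively has to do for one 64-bit addition. The Add64 helper and the operand values are just my own illustration, not actual hardware code:

    // My own sketch: emulating one 64-bit addition with two 32-bit
    // additions, the way I understand a 32-bit processor splits the work.
    using System;

    class Add64Sketch
    {
        // Adds two 64-bit values supplied as low/high 32-bit halves.
        static (uint lo, uint hi) Add64(uint aLo, uint aHi, uint bLo, uint bHi)
        {
            uint lo = aLo + bLo;             // first 32-bit add (low halves)
            uint carry = lo < aLo ? 1u : 0u; // did the low add wrap around?
            uint hi = aHi + bHi + carry;     // second 32-bit add (high halves + carry)
            return (lo, hi);
        }

        static void Main()
        {
            ulong a = 0x00000001FFFFFFFF;
            ulong b = 0x0000000000000001;

            var (lo, hi) = Add64((uint)a, (uint)(a >> 32),
                                 (uint)b, (uint)(b >> 32));
            ulong sum = ((ulong)hi << 32) | lo;

            // A 64-bit processor can do a + b in a single instruction;
            // the emulation above needs two adds plus carry handling.
            Console.WriteLine(sum == a + b); // True
        }
    }

If that is the right picture, then my question is this: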
Is it fair to say that a 64-bit system will perform twice as fast as a 32-bit system only when 64-bit data types are used? Or are there other reasons for this?
Also, in the following article,
What You Need To Know About The Shift to 64-Bit Computing
it is stated:
"computers process instructions in binary format. Each bit is capable of processing one binary instruction (zero or one) per clock cycle. Most of the PCs that are currently on the market have 32-bit processors, meaning that they can process 32 binary instructions per clock cycle.
Since 64-bit systems can process twice as many instructions per second as a comparable 32-bit system, 64-bit systems are definitely faster than their 32-bit counterparts"
This is highly confusing to me. If I understand correctly, each instruction of a 32-bit processor is 32 bits long and each instruction of a 64-bit processor is 64 bits long. I do not understand how each bit could be capable of processing an instruction.
Kindly share your thoughts.
Thanks,
Suresh.