The main difference between 32-bit and 64-bit operating systems is how large a number the CPU can handle natively. A 32-bit processor works with binary integers up to 32 digits long, which works out to 2^32, roughly 4.3 billion (about 2.1 billion if the number is signed); you can run it through your Vista-powered TI-83 if you don't believe me. A 64-bit processor can handle quite a bit more, since every extra binary digit doubles the range: adding 32 more digits gets you to 2^64, which is about 18 quintillion.

The problem comes with programs written for 32-bit systems. The programmers of the time took it for granted that the processor would never see a number bigger than 32 bits, so their code effectively says "give me the biggest number you've got and put this value in it." The operating systems and computers of old were fine with that, but hand the same code a value (or a memory address) that needs more than 32 bits and it overflows or crashes. So essentially, the program has to be written and compiled with 64-bit sizes in mind. Not many programs do this yet, and even fewer take real advantage of it. If you want to see what 64-bit can do, find somebody running Autodesk on a 64-bit machine. Now pick up your jaw.
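
If you want to see those limits for yourself, here's a small C sketch (just an illustration I put together, not from any particular program) that prints the 32-bit and 64-bit ceilings and shows what happens when a value assumed to fit in 32 bits goes one past the top:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void) {
        /* The biggest number a 32-bit unsigned integer can hold: 2^32 - 1. */
        uint32_t max32 = UINT32_MAX;   /* 4,294,967,295 -- about 4.3 billion */
        /* The 64-bit equivalent: 2^64 - 1, a really big number. */
        uint64_t max64 = UINT64_MAX;   /* 18,446,744,073,709,551,615 */

        printf("32-bit max: %" PRIu32 "\n", max32);
        printf("64-bit max: %" PRIu64 "\n", max64);

        /* Code that assumed "32 bits is plenty" going one past the limit:
           the value wraps back around to 0 instead of getting bigger. */
        max32 = max32 + 1;
        printf("32-bit max + 1: %" PRIu32 "\n", max32);

        return 0;
    }

Note that the 32-bit variable wraps whether you compile this for a 32-bit or a 64-bit machine; what matters is the size of the numbers the program was written around, which is why old programs have to be rewritten and recompiled to benefit.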