
Solve: Win 7 and 64-bit OS?

Answer»

I'm currently running 32-bit Windows XP. I'm going to upgrade to Win 7 when it comes out, but I don't know whether I should go with 32-bit or 64-bit.

I ran CPUID's CPU-Z, which indicated that my processor, at least, can run 64-bit (i.e. EM64T). I have 4 GB RAM and my motherboard is a Gigabyte P35-DS3L.

Is it even worth upgrading, or should I not bother?

Thanks!

Most users are very disappointed after installing Vista x64. The lack of 64-bit drivers for much of the current hardware is the catch. The performance gains promised by 64-bit will not be here for another three years.

And by then there will be another new thing.
They might even tell us that 48 bit is the best answer!
And somebody will have a registry cleaner that removes the un-needed bits!

Ha ha, so you're saying don't bother? That's fine. I just like the shiniest toys my poor college-student budget can afford, so I was just wondering.

Thanks.

If you have a 64-bit processor you should be using a 64-bit OS, period.

And I have yet to find a single device that didn't have a 64-bit driver. Not counting my ISA expansion cards from the early 80's.

If your CPU is compatible with a 64-bit OS, then get Windows 7 64-bit.
64-bit will use your CPU to its fullest potential.

I am running Windows 7 64-bit and haven't had any compatibility issues.

Vista sucked, but you will love Windows 7.

I agree! I have used Vista Ultimate 64-bit and the Windows 7 RC 64-bit and had no issues finding drivers. I have also noticed that applications such as Photoshop CS4 64-bit run much faster than the 32-bit versions.

Quote from: BC_Programmer on October 17, 2009, 10:08:43 PM

If you have a 64-bit processor you should be using a 64-bit OS, period.

And I have yet to find a single device that didn't have a 64-bit driver. Not counting my ISA expansion cards from the early 80's.

BUT 64-bit software may be a different story... device drivers allow applications to take advantage of the "64-bitness" of the system... but many vendors cannot justify the investment to upgrade their applications to 64-bit because not enough people are using it... not even the major OEMs, i.e. Dell/HP/etc.... IMO.

If you install application xyz on a 64-bit system and it hasn't been upgraded to take advantage of the 64 bits, it'll simply run in 32-bit mode and/or fail... so no gain...

"Upgrading" software to run as a 64-bit application is as simple as setting a compiler flag.

Or, it would be, if half the developers for said companies knew their *censored* from a hole in the ground (exaggeration).

Most of the "problems" with conversion from 32-bit to 64-bit are exactly the same problems that arose with the switch to 32-bit.

For example: all handles on 64-bit Windows are 64 bits wide. This shouldn't be a problem, since the SDK typedefs HANDLE to 16 bits in the 16-bit SDK, 32 bits in the Win32 SDK, and 64 bits in Win64. The problem is the programmers who assume that the size they are familiar with will ALWAYS be the size. If that were the case, why the *censored* would MS have put the typedef in?

They write code segments like this:

Code:
/* BAD: hard-codes the handle width as 4 bytes instead of using sizeof(HANDLE) */
if (!RtlMoveMemory(&wndData, (BYTE *)&hWnd + 4, sizeof(WINDOWSTRUCT)))
    return 0;
(It probably wouldn't compile as written, but the idea is that it's trying to copy data from immediately after a handle value. Notice the +4? Four bytes = 32 bits, the size the programmer is assuming will always hold. This exact same kind of code, with a +2 and using "memcpy", was the problem with the Windows 16-bit to 32-bit shift.)

It's LAZY. It's documented thoroughly in the SDK NOT to rely on the size of the HANDLE types, and to instead use sizeof: instead of 4, or 2, or 8 (with x64), just use sizeof(HANDLE). And no, they cannot say they were "under pressure" to get the project done. They knew how big they thought the handle was going to be, so why not just use the bloody documented method?
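Here's a minimal sketch of what the documented approach looks like; the WINDOWSTRUCT layout is invented purely for illustration (the real window structure is undocumented, as I get to below), the point is just that the offset comes from sizeof rather than a magic number:

Code:
#include <windows.h>
#include <string.h>

/* Hypothetical layout, purely for illustration. */
typedef struct { HANDLE hWnd; BYTE data[64]; } WINDOWSTRUCT;

void CopyAfterHandle(const WINDOWSTRUCT *src, BYTE *dest)
{
    /* sizeof(HANDLE) is 2 bytes on Win16, 4 on Win32, and 8 on x64,
       so there is no hard-coded width to go stale on a recompile. */
    memcpy(dest, (const BYTE *)src + sizeof(HANDLE), sizeof(src->data));
}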

Which brings up another point: the very thing my example was trying to do was copy the window structure from memory. The window structure is not documented. Anywhere. Almost every version of Windows changes it, and therefore the size changes too. The structure name I used was made up; there is no such structure in the headers since, as I said, it's undocumented.

This is the SECOND biggest problem with Windows software development: developers using undocumented features of an older OS and then wondering why the behaviour has changed in a later version. This wouldn't be so bad if it were actually attributed to the software company whose product has these issues, but no, it's not. It's "a problem with the new OS". So what happens? MS adds compatibility shims to make the program work. And now everybody says the compatibility shims are bloat... and they are. So with Vista, they scrapped a whole bunch. Programs stop working. Guess whose FAULT it is... AGAIN?


The Windows 3.1 SDK even has comments in the headers regarding expected shifts in bit widths. *censored*, Windows NT 3.1 ran on the Alpha, and that was a 64-bit architecture.


Back to the topic of software firms upgrading their applications to x64: basically, if they had been developing their applications properly all along, they shouldn't need to do much of anything to get them working on x64, just a recompile. Of course, for the reasons I just noted, this is not the case.

Even so, whether an application is 32-bit or 64-bit is somewhat beside the point. The worst-case scenario is that explorer.exe, services, the kernel, system DLLs, and everything of that sort are all 64-bit running on a 64-bit OS, while your applications are 32-bit. Even then, the applications still get performance gains from the faster x64 kernel.

Additionally, while it is true that a number of applications do not have 64-bit equivalents, it's also true that the very same thing occurred with the switch to 32-bit. Programs that SHOULD have required a simple flag switch in their compiler or makefile to move to 32-bit instead required years of overhaul to strip out the parts of the code written by those who assumed 16-bit was forever.

There are browsers for x64: IE64, Minefield, etc. The problem is that these are buggy for the reasons stated above, and their plugins don't work since they haven't been converted themselves. (Again, a striking parallel to the 16-bit to 32-bit switch: Office add-ins needed 32-bit versions, and if one didn't exist you were out of luck on your 32-bit OS.)

I cannot even think of a reason that a 32-bit application would fail on an x64 OS. To be honest, my experience has felt more like a 32-bit machine with the added ability to run x64 programs. The 32-bit WOW layer is much better than the earlier incarnations of the Windows On Windows emulation layer designed to run 16-bit applications.
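For what it's worth, a 32-bit program can even ask at runtime whether the WOW64 layer is hosting it. A minimal sketch using the documented IsWow64Process call, looked up dynamically since older kernel32 versions don't export it:

Code:
#include <windows.h>
#include <stdio.h>

typedef BOOL (WINAPI *ISWOW64PROCESS)(HANDLE, PBOOL);

int main(void)
{
    BOOL isWow64 = FALSE;
    /* Resolve IsWow64Process at runtime so the program still loads on
       systems whose kernel32 doesn't export it. */
    ISWOW64PROCESS fn = (ISWOW64PROCESS)GetProcAddress(
        GetModuleHandle(TEXT("kernel32")), "IsWow64Process");

    if (fn != NULL && fn(GetCurrentProcess(), &isWow64) && isWow64)
        printf("32-bit process running under the WOW64 layer\n");
    else
        printf("Native process (or the check is unavailable)\n");
    return 0;
}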

Even AS a 32-bit application, it's possible for 32-bit programs to use 64-bit features through a process known as thunking. I'm not sure of the specifics regarding x64, but with the 16-bit to 32-bit switch, "thunking" was generally used to let a 32-bit application keep using 16-bit components, or to let 16-bit applications use 32-bit components. It was buggy, sure... but really, it was either that or the manufacturers actually reading the SDK documentation. That could take forever. Much better to pay their developers overtime to set up a complex thunking layer than to learn how to do things properly.


The main drawback of x64 is that 16-bit applications will no longer run, instead giving the message:

Quote
The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

But, again: if a developer cannot even be bothered to update their application to 32-bit, are they worth sticking with? It's true that many firms are still having success with DOS applications and Windows 3.1 management software. For them, the real question is: do they need 64-bit? Do they even need 32-bit machines? No, not unless they are willing to commit to updated versions of the software, to retraining employees on the differences between the new system and the one they have been used to for years, to ironing out the inevitable issues, etc. The reason most of these firms have not upgraded is simple: they have a system that works, and really, if a 40 MHz 386 can handle the light load of transferring e-mail between departments, is it worth the cost to upgrade it to a quad-core server or something? Not really.

Another problem is the more common, and on the surface more logical, approach of replacing things with new hardware when they break. Take the aforementioned 386: perhaps its hard drive or drive controller failed, and finding a replacement would be both difficult and costly. In that instance it seems logical to update the hardware (and software).

And this makes sense, until we account for the fact that the 386 was communicating over a Token Ring network with multiple low-traffic 386 file servers. So their network cards would need to be replaced to match the Ethernet that is essentially the only option for the new PC, which may have only PCI-E slots. (You're not going to find a PCI-E Token Ring card, I don't think!) But guess what: ISA cards are both expensive and hard to come by.

So what might happen? The file servers go dormant, the new mail bridge cannot be used, departments start arguing about their e-mails being ignored, and the company collapses! All for want of a shiny new intranet mail server.

OK, so maybe that's a slight exaggeration. The point is, companies think they can get there in "small steps", but they can't; depending on the age of their infrastructure they might even need to renovate their offices. When you try to sit on the bleeding edge you can often get a bloody *censored*.
Quote from: BC_Programmer on October 20, 2009, 11:49:23 PM
"upgrading" software to run as a 64-bit application is as simple as setting a compiler flag.

ROI... return on investment... it doesn't pay to have software engineers upgrading software to use the 64 bits in a system if no one is using it... most are still using 16/32 bits...

Quote from: fgdn17 on October 20, 2009, 11:57:24 PM
ROI... return on investment... it doesn't pay to have software engineers upgrading software to use the 64 bits in a system if no one is using it... most are still using 16/32 bits...

True, but if they had done it properly to begin with (not even a time investment for that, just use the size of the type rather than assuming it), then "upgrading" the software on the manufacturer's end would simply be a recompilation.
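As a rough illustration (nothing project-specific here, just the point that the width comes from the toolchain): the same translation unit builds unchanged with the 32-bit and the 64-bit compiler when it asks the type for its size instead of hard-coding it.

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Prints 4 when built with the 32-bit toolchain and 8 with the 64-bit
       one; code written against sizeof(HANDLE) just needs a recompile. */
    printf("sizeof(HANDLE) = %u\n", (unsigned)sizeof(HANDLE));
    return 0;
}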

Drivers are, in fact, a lot more difficult to provide in both 32-bit and 64-bit versions. Device driver developers and their companies are a lot more disciplined, since device drivers don't have compatibility shims: if it works, it was pretty much done properly. Applications, on the other hand, inevitably end up relying on a shim designed for some other application that does something incorrectly; such a shim is undocumented and often goes right against the documentation (for example, allowing uninitialized pointers with certain API routines).

