Solve: Guide me in creating two installations of Windows on different drives?
Answer» How much RAM does your computer have?

512 MB... I know I need to increase it.

You can play with the size of your swap file until you find what's right for your machine. You can make it as large as you like. Give this a look: it will take longer to access the data on the storage drive than if it were on the C drive. Why?

JJ 3000, I installed all the college work and everything directly onto the D drive. Reading this now; I will complete what's left after finishing the read. Thank you so much for all the detailed help.

So, have you noticed any difference in speed?

Quote from: JJ 3000 on September 12, 2009, 02:19:21 AM
"So, have you noticed any difference in speed?"

Why do you think there would be? It can provide a performance boost... but probably a tiny, undetectable one, and only if the partitions are on separate disks, and those disks are on different IDE channels if the machine uses PATA drives. Factors like fragmentation level, disk rpm, etc. would need to be taken into account too. But I do not see why you think it would necessarily be slower.

JJ 3000, my computer seems all sorted out now; it's simple now and seems faster too. Thank you so much (gotta click on "Thank JJ 3000 for their post" again). Ivy

I think you really need more RAM. Virtual RAM (swap file / pagefile etc.) is the poor man's version, for people looking to kill time!!

My machine had only 256 MB of RAM. My cat lost a tin of cat meat so I could spend some pension on 1 GB of extra RAM, and my P.C. became supercharged - especially on start-up. Right now Windows Task Manager is showing that I am using 352 MB, and since start-up the peak usage was 550 MB. I am using Firefox with several tabs, and have 4 text files open for editing. I suspect you would normally be using a lot more than that.

Now that I have 1.25 GB of RAM, any *sensible application* can access a byte of data "instantly" (probably less than 1 microsecond to access the data on the silicon, plus a bit more as the operating system processes data requests and organises the transfer). With only 256 MB, much of the data would not be on the silicon but on a rotating magnetic disc. If the disc spins at 5400 r.p.m., that is about 11 milliseconds per rotation; so after umpteen milliseconds for the head to step to the correct track, it then has to wait up to 11 milliseconds more for the rotation to bring the desired data under the head. You could easily suffer a latency of 20 milliseconds. Fetching a byte of data from virtual RAM could therefore be 20,000 times slower than getting it from REAL RAM.

With a bit of luck, when the first byte is eventually read from virtual RAM, a few hundred ADJACENT bytes will also be put in a cache, ready for instant access by an application that will typically want a BUNDLE of consecutive bytes. But this is application dependent. I prefer certainty: REAL RAM makes a REAL improvement in REAL life.

*sensible application* - I am excluding anything which insists upon using the pagefile/swap file instead of freely available REAL RAM.

Alan
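Alan's disk arithmetic is easy to check with a few lines of C (the language the thread's posters work in). This is only a back-of-the-envelope sketch: the 5400 r.p.m. and 1 microsecond figures come from his post, while the 9 ms average seek time is an assumed value, not a measurement; real drives vary.

Code:
#include <stdio.h>

int main(void)
{
    /* 5400 rpm and ~1 microsecond RAM access are the figures quoted above;
       the 9 ms average seek time is an assumption, not a measured value.  */
    const double rpm           = 5400.0;
    const double seek_ms       = 9.0;
    const double ram_access_us = 1.0;

    double rotation_ms = 60000.0 / rpm;      /* one full turn: ~11.1 ms */
    double avg_wait_ms = rotation_ms / 2.0;  /* on average, half a turn */
    double disk_ms     = seek_ms + avg_wait_ms;

    printf("One rotation        : %5.1f ms\n", rotation_ms);
    printf("Average disk latency: %5.1f ms (seek + half rotation)\n", disk_ms);
    printf("RAM access          : %7.4f ms\n", ram_access_us / 1000.0);
    printf("Disk vs RAM         : roughly %.0fx slower\n",
           disk_ms / (ram_access_us / 1000.0));
    return 0;
}

With these numbers the disk comes out roughly 15,000 times slower per access - the same order of magnitude as the 20,000x figure above.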
Memory accesses have been rated in nanoseconds ever since the original IBM PC. 512 MB is an OK amount for XP. However, running a large number of big programs (such as Illustrator, Photoshop, etc.) at the same time means it can't all fit in physical RAM at once; once allocated memory exceeds available physical memory, Windows needs to prioritize what remains in physical RAM and what gets swapped out.

Basically, it's a fairly involved algorithm based on which code segments are run more often as well as which data is accessed. For example, if you use a certain command often, it will remain in memory, whereas other portions of the process's code segment might be swapped to disk.

The way that Windows loads an executable can be important in determining exactly what effect virtual memory has on it, as well as what effect it will have on virtual memory. What Windows does is basically "map" the file into memory. This doesn't actually allocate data within memory, but rather allows a file to be accessed via memory accesses rather than file I/O against the disk; this is far easier both code-wise and performance-wise. Rather than copy code into memory and run it, Windows can simply point execution at the "WinMain()" function within the mapped program file (of course it's not quite that simple, since external links have to be resolved and so on, but sometimes things need to be omitted for brevity's sake).

This way, the parts of the program itself are kept in memory or swapped out based on the same logic that determines the same for data: frequently accessed pages remain in RAM, whereas pages that are used more infrequently are more likely to be swapped out when memory gets low.

"When memory gets low" is a key phrase here. While many pages in a program file may be marked as disposable or swappable, Windows will still keep those pages out of virtual memory for as long as possible; it is only when all physical RAM is committed and a program attempts to commit some more (or when there is no contiguous block of free RAM large enough for the size a program requests) that Windows takes a good hard look at the loaded pages. The logic boils down to "will I need this right away?" - a guess based on how often a page is accessed as well as the flags set on that particular memory page. For example, pages can be flagged to never leave physical RAM; programs will often flag critical parts of their code this way to increase performance. It is obviously discouraged to do so with vast swaths of a program, since it restricts the ability of the cache manager to do its job.

This is where the mapped file I/O comes in: the file I/O itself is done "on the fly" as needed. In effect, when an executable is loaded it is very nearly treated as a small, read-only swap file, in that data can be loaded from the executable but never written back to it; if a modified page has to leave RAM, it goes to virtual memory instead. Some have asked why the executable itself is not simply changed, and the reasons range from the obvious (do you really want your executables writing to themselves?) to the more precise: data loaded from an executable is often "template" data that is then changed, often by the program itself setting hooks or using self-modifying code. The technique is called "copy on write" and is used quite extensively.

As a final note, the RAM usage that Task Manager reports for two instances of the same application is quite misleading; the "private bytes" are not necessarily private, since the two instances share the same code and data pages. That is, their address spaces map the same physical pages... until one instance changes code or data, at which point the touched page is duplicated, the write is committed to the copy, and that process's memory map is updated to refer to the "changed" page rather than the "template" one.
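The mapping and copy-on-write behaviour described above can be poked at directly through the Win32 API. Below is a small C sketch - not the path the loader itself takes when it maps an .exe, but the same underlying machinery exposed to user code. "demo.bin" is just a placeholder name for any existing, non-empty file, and error handling is kept minimal.

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "demo.bin" is a placeholder; any existing, non-empty file will do. */
    HANDLE file = CreateFileA("demo.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* PAGE_WRITECOPY: views of this mapping can be copy-on-write. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_WRITECOPY, 0, 0, NULL);
    if (mapping == NULL) {
        fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
        CloseHandle(file);
        return 1;
    }

    /* Nothing is read here: pages are faulted in from the file on demand -
       the "map, don't copy" behaviour described above.                     */
    unsigned char *view = MapViewOfFile(mapping, FILE_MAP_COPY, 0, 0, 0);
    if (view == NULL) {
        fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
        CloseHandle(mapping);
        CloseHandle(file);
        return 1;
    }

    printf("first byte before write: 0x%02X\n", view[0]);

    /* Copy-on-write in action: this write faults, the kernel duplicates
       just the touched page for this process, and the write lands on the
       private copy. The file on disk is never modified.                  */
    view[0] ^= 0xFF;
    printf("first byte after write : 0x%02X\n", view[0]);

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

The "never leave physical RAM" flag mentioned above corresponds, roughly, to VirtualLock(), which pins a range of pages into the process's working set; as the post says, locking large regions just ties the memory manager's hands.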
Ever since "Windows for Workgroups 3.?" I have used a P.C. to create / edit / compile 'C' source code files for real-time embedded security SYSTEMS using 8-bit non-Intel microprocessors, and to transfer those files to and from company servers. What little I knew of swap files etc. dated from the early days, when there were never any registry problems - there was no registry - and life was simple. Company policy was that software developers must focus upon enhancing with extra features our perfect real-time systems, which ran 24/7/365 year after year without a single B.S.O.D., and that any P.C. problem had to be referred to the I.T. department, which specialised in B.S.O.D.s. Now that I am retired I am learning a lot more about Windows. Thank you for your information. Regards, Alan