While I may not be one of the biggest fans of the Microsoft Windows operating system out there, as I clearly detailed in an earlier post called “I Don’t Hate Microsoft, I Just Don’t Appreciate Windows”, I do still work on the OS on a regular basis. Microsoft is still a respectable company, and believe it or not, Windows is a decent operating system. I’m not here to take sides in the ongoing feud over which OS is better; as Jason put it so eloquently in his post “Mac vs. PC”, it’s not about which OS is best, it’s about “which [OS] is better for you, the user” (couldn’t have said it better myself).
I do have to admit, though, that Windows in general tends to be a handicapped operating system upon first install, which is probably why everyone sees a performance hit the second they begin installing software on that shiny fresh install they’re so happy about. It doesn’t make much sense, because computers have evolved so much, and so has the OS; I have been reading up on all the behind-the-scenes work that went into Windows from the very first release of Windows XP up until the release of Windows 7. Granted, that’s only about three generations of the operating system, but it’s quite a handful nonetheless. Microsoft has managed to make some significant enhancements to the operating system, the technical notes are proof of that, and our computers nowadays are light years ahead of what we had a decade ago, around the time XP was released.
So my question is, why is it that when you run such an old OS on new hardware that has so much more raw processing power under its hood, it still manages to stall and make you wait so long for simple tasks to get done?
I had no idea what the answer to this question was, and usually passed it off as a poor implementation of a great design on Microsoft’s end. But now… well, let’s just say I wasn’t too far off. You see, when Microsoft first released the operating system as Windows 95, it was revolutionary for its time, leaving everyone in awe of this new, so-called fool-proof method of interacting with a PC, in which everything (most user tasks, at least) was driven through a GUI. At the time, however, hardware was limited, memory was limited and processing power was very limited; so much so that the idea of running multiple operations simultaneously seemed taxing on even the most impressive specs of the day.
I’m gonna spare you the history lesson and get straight to the point, but it must be made clear that computers have truly come a long way since then, so much so that we now have multiple processing cores on a single die, massive amounts of operating memory and graphics hardware acceleration that can take you to the moon and back. However, if you dig into Windows (not even that deep), you will find a few settings whose defaults target much lower-spec machines than the common PC today. Three significant settings need to be noted: processor scheduling, memory usage and allocation, and virtual memory.
Processor Scheduling
This setting is set, by default, to Programs on desktop versions of Windows. That would have been an optimal setting for a decade-old single-core processor that can only handle a few tasks at once, with no parallel processing capabilities, but it no longer makes sense today. Some experts criticize Windows for not fully taking advantage of parallel processing technologies the way Apple did with Grand Central Dispatch… and let’s face it, Linux just kicks everyone’s ass when it comes to parallel processing.
Well, now that we have 2-, 4- and even 8-core processors in commercial desktops and workstations, it would be a shame to limit the operating system’s attention to the foreground programs that are running; it only means that all the background services are stalled until the user stops working on the PC long enough for them to run. While this may make you believe that your programs will be more responsive, that’s simply not true, because most programs nowadays depend heavily on some form of communication with a background service. And if that service is being stalled, your overall experience will lag.
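For the curious, the radio buttons in Control Panel → System → Advanced → Performance Options → Advanced → Processor scheduling boil down to a single registry value, Win32PrioritySeparation. A minimal .reg sketch of switching it to favour background services is below; the hex values are what the two radio buttons write on my machines, so treat them as an assumption and back up your registry before importing anything:

```reg
Windows Registry Editor Version 5.00

; Win32PrioritySeparation controls how CPU quanta are split between
; foreground and background threads.
; 0x26 (38) = favour foreground Programs (the desktop default)
; 0x18 (24) = schedule Background services on equal footing
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PriorityControl]
"Win32PrioritySeparation"=dword:00000018
```

Flipping the radio button in the Performance Options dialog does exactly the same thing, and is the safer route if you’d rather stay out of the registry.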
Memory Usage and Allocation
The next item of interest is the memory usage strategy that the operating system employs. Running code that already resides in your operating memory (i.e. RAM) is guaranteed to be much faster than loading it from your hard disk, which is the whole point of having a memory hierarchy in a computer (L1 and L2 cache, RAM, flash cache and HDDs). Considering this, and acknowledging the fact that most computers today come with a sea of operating memory, why not take advantage of that extra space to hold the data that you and the operating system use regularly, i.e. the system cache?
Running Windows on its own requires approximately 512MB to 1GB of RAM, so anything in excess of that can, and will, be used by applications and background services. This range applies to the most common versions of Windows in use today, i.e. XP, Vista and 7. So if you have anything past 4GB of RAM, it’s safe to say your memory won’t be exhausted by background services.
I know what you’re thinking: “Vista is a memory hog”. Well, yes it is, but only as a result of a technique that Microsoft attempted to implement and failed miserably to get right. Vista is a memory hog simply because it tries to move data into memory that “it believes” you’re going to use (the SuperFetch service). Memory prefetching is a common, perfectly sound practice; the catch is that you have to implement it properly.
What’s the verdict?
Take things into your own hands and set the operating system to allocate more memory to the system cache. The same logic applies here as in the processor scheduling section above: you want the operating system to cache the things you are using, giving the system the horsepower it needs to perform in a snappy manner and always be ready to respond to your programs’ demands.
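On XP this is the “System cache” radio button under Memory usage in the same Performance Options dialog; behind the scenes it maps to the LargeSystemCache registry value, which (from what I can tell) still exists on Vista and 7 even though the GUI option was removed there. A minimal sketch, with the usual back-up-your-registry caveat:

```reg
Windows Registry Editor Version 5.00

; LargeSystemCache=1 tells the memory manager to favour the file
; system cache over process working sets (the Server / XP
; "System cache" behaviour). 0 restores the Programs default.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"LargeSystemCache"=dword:00000001
```

A reboot is needed for the change to take effect.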
Virtual Memory
This one’s based more on a personal preference of mine. When your computer runs out of space in memory, it starts reading and writing to a pagefile that resides on your hard disk, i.e. a really slow storage medium compared to your operating memory. Even though it’s slower, it provides extra operating capacity to currently running processes without the worry of “out of memory” exceptions. Once again, given that most computers out there today have 4GB+ of operating memory, it is highly unlikely that you will run out of RAM while working on your PC; and if you are regularly running out of memory because you use some high-end software, then go ahead and buy yourself a memory upgrade. An 8GB upgrade will cost you less than $100, but will bring you significant performance improvements.
If you’re not using your entire RAM, then what point is there in having reserved swap space on your hard drive that you’ll probably never use? You can disable paging on your Windows machine altogether, and enjoy the sweet lightning speed of working entirely in your operating memory space, without wasting precious processor cycles waiting on instructions to travel back and forth to the archaic hard drive.
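The safe way to do this is through the GUI: System Properties → Advanced → Performance → Advanced → Virtual memory → Change, then select “No paging file” and Set. Under the hood the pagefile list lives in the PagingFiles registry value; the sketch below shows what an emptied (disabled) value looks like in an exported .reg file on my machine, but I’d consider the exact byte encoding an assumption and recommend the dialog instead:

```reg
Windows Registry Editor Version 5.00

; PagingFiles is a multi-string listing each pagefile and its
; min/max size, e.g. "C:\pagefile.sys 0 0" for system-managed.
; An empty multi-string disables paging entirely.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"PagingFiles"=hex(7):00,00
```

Either way, the change only kicks in after a reboot, and some software (hibernation, crash dumps, a few picky applications) does expect a pagefile to exist, so keep that in mind before pulling the plug.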
I have been running my Windows XP and Windows 7 machines (both x64, with 8GB and 4GB of RAM, respectively) on this set-up for over a week now, and I must say that they have indeed become more responsive, no longer stalling for no apparent reason, and all in all they’ve made me a more relaxed user.
Funnily enough, Microsoft usually advises these settings for servers while recommending the defaults for desktop PCs; however, the average desktop PC nowadays has practically turned into a server, because the whole personal computer framework has shifted, transformed and evolved.
I invite you to try the above detailed procedure, and let me know what your thoughts are, whether you notice any differences, good or bad. 🙂