Solve : Time for a new OS!?
|
Answer»

Quote
Microsoft sucks! Let's assume a 3 GHz processor where each instruction takes an average of three clock cycles. This would mean that the processor is actually processing 1 billion instructions every second!

Much of the time required to boot Windows (or any modern OS) is due to I/O wait states. It doesn't matter whether your CPU can process 1 million or 20 billion instructions a second if the data required for those instructions is sitting on a hard disk; and memory is still only a fraction of the speed of the CPU itself.

Quote
Hello, there's got to be something wrong here.

It doesn't take several minutes for my computer to start. Many of the systems I've seen using SSDs start from a cold boot in only a few seconds (counting from button press to either the desktop or the login screen). Of course we didn't have a stopwatch, but there wasn't enough time to grab a beverage.

Quote
And my guess is that Microsoft is using too much automatically generated code (fundamentally incompetent programmers, that is).

Not sure I understand. Macros are one of the most powerful language features available in any language, though I'm not really sure what you are referring to here. ("Automatically generated code" makes me think you think they use some sort of wizard...)

Quote
Because let's face it, almost no one these days programs in the most code-effective language there is, i.e. assembler.

Windows is written in C and Assembly, with time-critical portions - such as those that are more prevalent in the kernel - being written in Assembly.

Here's the issue with such a language debate. Surely, as you are evidently aware, it all started with machine language. All programs had to be written as sequences of machine language instructions (which were fed in various ways: punch cards, tapes, etc.). Soon after, of course, people started to use a slightly more convenient form of representing machine code: Assembly language. In assembly language the list of commands is the same as machine code, but you get to use more programmer-friendly names. Instead of referring to the add instruction as 11001101, or CD (in hex), you get the privilege of calling it add.

The problem with machine and assembly language is threefold. First, they can only do, fundamentally, very simple things. If you wanted to tell a computer to beep 10 times, there is not likely to be a machine instruction to do something n times. So if you want to tell it to do something 10 times using machine instructions, you might have to do something equivalent to:

Code: [Select]
put the number 10 in memory location 0
a  if location 0 is negative, go to line b
   beep
   subtract 1 from the number in location 0
   go to line a

(Obviously this isn't really machine code, but the point is that assembly/machine language works at a very base level of instructions.)

Remember I said threefold. Second, we have the issue of code readability and discovering bugs; the above pseudo-representation has a bug, for example. Third, the problem is that today processors are not only extremely fast, they also have multiple cores of execution; in order for a program to be written to be as fast as possible on a modern quad-core (or more) machine, for example, it would have to exploit all the cores so they were doing as much of the work as possible.
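To give a feel for what exploiting multiple cores actually involves, here is a minimal sketch in C using POSIX threads. The four-way split and the little summing job are purely illustrative, not how any real program divides its work:

Code: [Select]
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 10000

/* Each thread sums its own interleaved slice of 0..N-1. */
static void *worker(void *arg) {
    long id = (long)arg;
    long sum = 0;
    for (long i = id; i < N; i += NUM_THREADS)
        sum += i;
    return (void *)sum;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    long total = 0;

    /* The programmer must explicitly create the threads and gather
       their results; the extra cores are not exploited automatically. */
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);

    for (long t = 0; t < NUM_THREADS; t++) {
        void *partial;
        pthread_join(threads[t], &partial);
        total += (long)partial;
    }

    printf("total: %ld\n", total); /* 0+1+...+9999 = 49995000 */
    return 0;
}

Compile with something like gcc -pthread. Notice that none of the thread bookkeeping has anything to do with the problem being solved; it exists purely to spread the work across cores.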
Assembly language, however, being an imperative language by design, focuses on sequences of instructions; having more than one thing occur at a time is something handled by the hardware pipeline itself (e.g. the Pentium processor), but actually exploiting multiple cores (each of which has its own pipeline) is up to the programmer, and doing so properly is practically out of the scope of any reasonable Assembly programmer. Additionally, the actual mechanics of parallelizing a piece of sequential code could easily be said to follow a pattern; this is something embraced by concurrent languages such as Erlang, which focus on tasks rather than on a single sequence of instructions. These concepts, which are fundamentally a functional construct and follow in the footsteps of Lisp, are practically required to properly exploit today's processors.

Actually, now that I think about it, I know what you mean by machine-generated - you are referring to compiler-generated code. Higher-level languages, such as C, expand your toolbox; they let you use more powerful abstractions, such as "do this N times", rather than wimpy ones like "add the values in these two registers" or "jump to this memory address if the previous comparison wasn't equal". The advantage here is that being able to build software out of more powerful abstractions means you have to use fewer of them. The above pseudo-machine code written in C might look like this:

Code: [Select]
for(int i=0; i<=10; i++) beep();

Which is easier to read, easier to edit, and has the advantage of making the bug in the previous version more clear. When you get to build your programs out of bigger concepts, you don't need to use as many of them.

Another gigantic advantage of high-level languages is that they make a program more portable. For example, if you have a program written in x86 assembly, it's going to be useless for an ARM processor or a Motorola processor or what-have-you. Even a language like C mitigates this, because the same source can be fed into compilers for different architectures. For Windows, this meant that the codebase only needed revisions to move from 16-bit x86 to protected-mode x86, and from there to 64-bit, rather than having to be rewritten entirely for each specific set of instructions (the available instructions in each mode differ). Instead, the knowledge of what instructions are available is simply given to the compiler, and the compiler learns how best to use them.
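To illustrate the portability point, here's a minimal sketch. The predefined macros are the usual GCC/Clang/MSVC ones, which is an assumption about the toolchain, not anything specific to Windows:

Code: [Select]
#include <stdio.h>

int main(void) {
    /* One source file, many targets: the compiler, not the
       programmer, worries about the instruction set. */
#if defined(__x86_64__) || defined(_M_X64)
    puts("built for 64-bit x86");
#elif defined(__aarch64__) || defined(__arm__)
    puts("built for ARM");
#else
    puts("built for some other architecture");
#endif
    return 0;
}

Feed the same file to an x86 compiler and an ARM compiler and you get two different streams of machine instructions from one piece of source code; an assembly program would have to be rewritten instead.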
Fundamentally, the act of "optimizing" assembly code is more mechanics than art; the fact is that in order to properly "tune" assembly language to the point where you exceed the speed of a modern compiler, you will need to know the processor in question - how every single instruction is executed, what foibles it has, etc. - as well as spend countless hours putting that knowledge to use. The end result is that you might end up with a program that is 10% faster for 100 times the investment of man-hours - repeated for the number of architectures your product is going to target. And that isn't even counting the additional time required to track down bugs in such a situation.

Quote
I only wish the code was open source.

It's open to some people, through an NDA. It's not that interesting. It definitely didn't use a wizard, though. I hope saying it's not interesting doesn't violate the source license NDA...

Quote
One reason is that I have a real naive dream of designing my own OS.

Cool... good luck! Why would Windows being Open Source help you with a homebrew CPU, though?

You are amazing, BC_Programmer! Putting so much time and effort into replying to me. I am honored! I wish I had SSD disks. Those kinds of disks make my point less obvious.

I know from a friend that the reason for multiple cores is actually power consumption. I was amazed to hear that, but it makes sense. He said that power consumption isn't linear; it is exponential (with regard to clock frequency). What I still wonder about, though, is the actual need for multiple cores in smartphones etc. Isn't this something like "giving up"? I can't see the development in multiple cores. I just see the unnecessary complication. This is because I still think that the software engineers and their compilers simply generate too much irrelevant code. I think we should revert to single-core processors and make use of the one and only good design language: assembly. I understand that it is more easily readable when stuff is written in higher languages, but I can't see the user benefits.

Now you have the problem with slow downloads on smartphones, for instance. And what do we use there? Am I wrong if I say some ridiculous 24-bit resolution for each pixel? WHY? When I attended Chalmers, I for instance had the opportunity to experiment with the bit resolution of an audio file. CD uses 16 bits (some 65,000 levels). I needed no more than 5 bits (32 levels) before I thought it sounded OK (I am not kidding; a sketch of this appears after this post). I do, however, think this is a "workaround" of the problem. And I think we do not need 24-bit resolution on our pictures. Maybe I'm wrong, but my point is that we are sending and receiving more data than is necessary (for a nice experience). I have, for instance, not understood the point of HD. Is that really necessary? Maybe for really big screens. The point I am trying to make is that we are using too much unnecessary data at the same time as we (read: Microsoft) design programs in a not-so-efficient way. Maybe the programs are easy to read, but you can bet that they are not optimized with regard to efficiency. They are optimized with regard to shortest development time and maximum profit. I rest my case :-)

Best regards, Roger

PS Thank you for your comment on my CPU project. Open source will of course not affect that :-) I just like open source. This is because I think information should be free. Free for anyone to use, free for anyone to start a company (with their own design). I am attaching yet another fun picture of my project (which I have been developing for 1.5 years now).

[year+ old attachment deleted by admin]

1) Buy a Mac.
2) Switch to Linux.
3) Topic closed.

Hi! What is Alpha? Intensity, or something? I thought only RGB was needed. It seems that I know less about computers than I thought. And this kind of explains it all. I do, however, still think that:

1) We do not need all the features a program nowadays provides (this will just make it load/start more slowly than necessary).
2) We do not need the hysterical resolution that is nowadays common (but the long-term use is hard to predict...).
3) I am not certain anymore, but it seems like we could write more code-effective programs (using single cores and assembly).
4) Multiple cores are not the future in the long run (because there's a limit on how many cores you can actually use).
5) Maybe the problem (read: slow computers) isn't the high-level language. Maybe the problem is badly designed compilers.

I know I am being stubborn, but this is what I think.
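For reference, the bit-depth experiment described above boils down to throwing away the low-order bits of each sample. A minimal C sketch of that requantization (not the original Chalmers code, and without the dithering or rounding that real tools would add):

Code: [Select]
#include <stdint.h>
#include <stdio.h>

/* Keep only the top `bits` bits of a 16-bit PCM sample.
   bits = 5 leaves 32 distinct levels, as in the experiment. */
static int16_t requantize(int16_t sample, int bits) {
    int shift = 16 - bits;
    return (int16_t)((sample >> shift) << shift);
}

int main(void) {
    int16_t sample = 12345;
    printf("16-bit sample %d becomes %d at 5 bits\n",
           sample, requantize(sample, 5));
    return 0;
}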
My next generation of CPU (using an FPGA instead of a CPLD) will, I think, have a 32-bit-wide address bus and a 16-bit-wide data bus. But if I fail at this (or maybe both) I would like to buy a similar CPU on the (second-hand) market. What kind of CPU should I look for? A 486? It doesn't matter if both the address bus and the data bus are 32 bits wide, but I kind of like the asymmetry, because this is how my first CPU will work (if I ever get it to work, that is). Finally, you have taught me that assembly does not work so well with multiple cores. So that is another reason why I stick to my belief. I think I have said all I wanted to say. Take care!

Best regards, Roger

PS Attaching the schematic of my CPU. And yes, the other one above was more of a block diagram than a picture of the actual architecture. Because I'm so bad at computers but at the same time very interested (especially in hardware), could you please recommend a book I should read? It perhaps need not be "for dummies", but approximately at that level. I am very interested in hardware protocols (like the formatting of a hard drive, for instance) and the way a (modern) computer actually works. All hardware considered. And driver routines (freely translated from Swedish). If it isn't you, Mr G, then it's got to be you, Mr B!

[year+ old attachment deleted by admin]

The Alpha value represents the transparency of whatever colour the RGB value has returned. As for the other questions... I'll leave that to the experts.

Quote from: Helpmeh on October 24, 2012, 08:27:57 PM
The Alpha value represents the transparency of whatever colour the RGB value has returned.

FWIW, this topic was locked and (apparently) unlocked, and now he's responded to the post I made in his double-post on the same subject in this one, too (for some reason...).
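To make that alpha explanation concrete: a common way to composite is the "over" blend, where each colour channel is mixed according to the alpha value. A minimal sketch in C, assuming 8-bit channels (the function is illustrative, not any particular graphics library's API):

Code: [Select]
#include <stdint.h>
#include <stdio.h>

/* Blend one 8-bit channel of a source pixel over a destination pixel.
   alpha = 255 means a fully opaque source; alpha = 0 means the
   destination shows through untouched. */
static uint8_t blend(uint8_t src, uint8_t dst, uint8_t alpha) {
    return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
}

int main(void) {
    /* Half-transparent white over black gives a mid grey. */
    printf("%d\n", blend(255, 0, 128)); /* prints 128 */
    return 0;
}

So alpha isn't intensity: it says how much of whatever is underneath shows through, which is why RGB alone isn't enough to express transparency.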
|