This page was last updated on March 13th, 2020 (UTC). Please keep that in mind.
First and foremost, most of the code people show you that optimizes really well in a compiler is an impractical example. The good news is that if you learn why those examples are impractical, there's actually something you can do about it.
The first reason is that compilers don't restructure your code and data on your behalf. If, for example, you are on x86, the compiler will likely use SSE instructions to pass and operate on variables, as per the calling convention, but the real benefit of SSE comes from data being packed consecutively. Odds are you're storing your X and Y beside each other (an "array of structures") instead of keeping all your Xs in one place and all your Ys in another (a "structure of arrays"), so that Xs sit beside other Xs and Ys beside other Ys. Even when you do lay things out that way, there's no guarantee the compiler will recognize the opportunity and use the optimization. Similarly, if you're on a processor that has a common operation as a single instruction (like fsincos), does the library you're calling actually use it, or does it implement the function itself? And can the compiler recognize that you asked for both sin and cos of the same value, issuing one fsincos and saving both results, instead of doing the work twice and throwing half of it away each time?
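To make the layout point concrete, here's a minimal C sketch of the two layouts. The struct and function names are made up for illustration, not taken from any particular codebase:

    #include <stddef.h>

    /* "Array of structures": each X sits right next to its Y. */
    struct point_aos { float x, y; };

    /* Summing only the Xs means striding through memory two floats
       at a time; the compiler has to shuffle or gather to fill an
       SSE register, and it may just give up and stay scalar. */
    float sum_x_aos(const struct point_aos *pts, size_t n)
    {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += pts[i].x;
        return sum;
    }

    /* "Structure of arrays": all the Xs are consecutive, so four
       adjacent floats drop straight into one 128-bit SSE load. */
    struct points_soa { float *x; float *y; };

    float sum_x_soa(const struct points_soa *pts, size_t n)
    {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += pts->x[i];
        return sum;
    }

Even with the second layout there's no guarantee: GCC still wants optimization turned on, and for a float sum like this it also wants something like -ffast-math before it will reorder the additions into vector form, which is exactly the "no guarantee" part above.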
The second problem is linking. Neither static linkers nor dynamic linkers can simply isolate the functions you actually want. In practice, your files are usually compiled to an "object format" of some kind, and calculations that could have been optimized away, or even "inlined", won't be if they live in separate files. The point of this style is that when you have thousands and thousands of lines of code, or more, the compiler would burn a lot of RAM and CPU time recompiling every single line when, maybe, all you did was change a single digit of a large number. Instead, you put things in separate files and it only has to reconsider the files that actually changed. Naturally, functions external to a file (unless that file was included as source instead of linked) cannot be inlined or otherwise taken into consideration, because they were already compiled down to a form that a machine cannot easily optimize further. This can be remedied by including the code "as source", but that greatly increases both the time it takes to compile and the RAM required to do it.
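As a rough illustration of the cost (the file and function names here are made up; the gcc commands are the standard ones), picture a trivial helper that lives in its own file:

    /* square.c -- compiled on its own into square.o */
    int square(int x)
    {
        return x * x;
    }

    /* main.c -- compiled separately, then linked against square.o:
     *
     *   gcc -O2 -c square.c
     *   gcc -O2 -c main.c
     *   gcc square.o main.o -o demo
     *
     * While compiling main.c, the compiler only sees the declaration
     * below; the body of square() is already machine code inside
     * square.o. So square(10) stays a real call at run time instead
     * of being inlined and folded down to the constant 100, which is
     * what would happen if both functions sat in the same file.
     */
    int square(int x);

    int main(void)
    {
        return square(10);
    }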
The third reason is that the standard right now is "faster development." Things like the second reason get prioritized, because whatever ships faster makes more money. There's a lot of unnecessary safety as well: GCC, for example, preserves the frame pointer (a saved copy of the stack pointer) in RAM by default, even when it doesn't have to, because it already has control of the stack in its own functions. There are functions, and calls to functions, that rarely have any relevance to safety; each one is considered "low cost", but as functions call other functions that call other functions in a loop, it adds up over time. And how often are "release builds" actually made, where the debugging symbols are stripped, the optimization flags (which take more time to compile with) are used, the files are consolidated as source, and so on? The answer is "almost never."
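As a small, hedged illustration of both points (the function is made up; the flags are ordinary GCC and binutils options):

    /* frame.c -- a function with no real need for a frame pointer. */
    int add(int a, int b)
    {
        return a + b;
    }

    /*
     * Compiled the everyday way, GCC emits a prologue that saves the
     * old frame pointer out to the stack (RAM) and sets up a new one:
     *
     *   gcc -O0 -g -c frame.c        ->  push rbp / mov rbp, rsp / ...
     *
     * Built as an actual release, that bookkeeping goes away and the
     * debugging symbols can be stripped:
     *
     *   gcc -O2 -fomit-frame-pointer -c frame.c
     *   strip --strip-debug frame.o
     *
     * Those extra flags cost compile time, which is part of why they
     * so rarely get used outside of a deliberate release build.
     */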
Frankly, the compiler is not magic. It does the best job it can, but it still comes down to the coder(s) involved, and almost no one seems to care about the problem, because the mentality is "the customer is always willing to upgrade."