Optimization is an elusive goal, because while you can refine code to better implement a specific algorithm, it may be that a different algorithm would serve better overall. For instance, which would normally be more effective: converting all characters to upper case, converting all characters to lower case, or dealing with each character in whatever case you find it? If you assume that you are working with standard text, then you might also assume that the vast majority of your characters are already in lower case, so less time is required to change everything to lower case. But if you were examining a body of code, you might find that all your key words are already in upper case, so in searching for key words, it might be best to treat each letter in the case that you find it.
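To make the "fold everything to lower case" choice concrete, here is a minimal sketch in NASM-style x86-64 assembly for Linux; the buffer contents and label names are invented for illustration. It leans on the fact that in ASCII, an upper case letter differs from its lower case twin only in bit 5.

    section .data
    Buffer  db  "Mixed CASE Text"
    BufLen  equ $ - Buffer         ; length of the text in bytes

    section .text
    global _start
    _start:
        mov rsi, Buffer        ; point to the text
        mov rcx, BufLen        ; count of characters to examine
    .next:
        mov al, [rsi]          ; fetch a character
        cmp al, 'A'            ; below 'A'? not an upper case letter
        jb  .skip
        cmp al, 'Z'            ; above 'Z'? not an upper case letter
        ja  .skip
        or  al, 20h            ; set bit 5: 'A' (41h) becomes 'a' (61h)
        mov [rsi], al          ; store the folded character back
    .skip:
        inc rsi                ; advance to the next character
        loop .next             ; decrement RCX, repeat while nonzero
        mov rax, 60            ; sys_exit (64-bit Linux assumed)
        xor rdi, rdi           ; return code 0
        syscall

The opposite choice, treating each letter as found, would skip this conversion pass entirely but would then have to test for both cases at every comparison, which is exactly the trade-off described above.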
Optimization by machine can only compare different approaches to doing the same thing and determine which method is most efficient. But a problem immediately arises, because the machine has to discern what you are attempting to accomplish, and this is really beyond its powers. A programmer who prides himself on writing optimized processes may anticipate certain things you are likely to try, then design his optimization methods to substitute a less evident but genuinely better way for the obvious way you might adopt. But that would be one programmer enhancing the work of another, not something that the computer would undertake on its own.
While 64-bit programming may introduce more address space and additional instructions, the downside is that the OS will still just be giving you a sandbox to play in, forcing you to learn how to play nicely with other running programs and processes. You will find that you still have to save registers in order to free them up for your own use, and to restore them afterwards. It's like learning to drive on a two-lane road, then suddenly having to travel through the heart of a city on a roadway twelve lanes across. You don't have the freedom of having all those lanes to yourself; you have to allow for other vehicles whizzing around you as well.
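As a sketch of that save-and-restore obligation, here is what a routine might look like under the x86-64 System V calling convention, where RBX and R12 through R15 belong to the caller and must be preserved; the routine name and its bit of work are invented for illustration.

        global do_work
    do_work:
        push rbx               ; callee-saved: preserve the caller's RBX
        push r12               ; likewise for R12
        mov  rbx, rdi          ; now RBX and R12 are ours to use;
        lea  r12, [rbx + 8]    ;   e.g., park an argument, derive a pointer
        ; ... the real work would go here ...
        pop  r12               ; restore in reverse order
        pop  rbx
        ret                    ; the caller's registers are intact again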
What you might hope for is an expanded set of instructions that shows some insight by the designers into what really needs to be done in software, and better tools for that purpose. But then there is always the question of legacy support for the older architectures, and whether you want your code to work on existing 32-bit and 16-bit machines or not. You may be forced to forgo the use of really advanced features, or you may not even be able to access them, because your compiler/assembler may not include that support, or you may not have any supporting documentation to describe them or how to access and use them.
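One common compromise is to test for an advanced feature at run time and fall back to legacy instructions when it is absent. Here is a minimal sketch using the CPUID instruction; the choice of SSE4.2 (reported in bit 20 of ECX from CPUID leaf 1) is only an assumed example feature, and the labels are invented.

        global select_path
    select_path:
        push rbx               ; CPUID clobbers EBX, which is callee-saved
        mov  eax, 1            ; CPUID leaf 1: processor feature bits
        cpuid                  ; feature flags arrive in ECX and EDX
        pop  rbx               ; caller's RBX restored
        test ecx, 1 << 20      ; ECX bit 20 = SSE4.2 support
        jz   .legacy           ; absent: stay with older instructions
        ; ... SSE4.2 code would go here ...
        ret
    .legacy:
        ; ... equivalent code using only the base instruction set ...
        ret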
It's generally understood that software development lags hardware design by at least five years. The hardware guys make it and give it to the software guys, and then the software guys struggle to figure out what it is good for, and how to get the most out of the design.
We've all seen or read claims that DirectX 10 will change game development, and then maybe a year later, some titles that use DirectX 10 will begin to show up on store shelves. At the same time, few video cards are able to support DirectX 10 yet either. So how does this fit in? Well, even in the hot and heavy game development market, it takes time to take advantage of new technology. The pressure to bring new games to market is immense, with major bucks involved, so this is just a super-paced example that proves the point.
But hardware is not the only thing that evolves, and software does not limit its rate of development to changes in the hardware. New languages and tools are always appearing, new books appear to explain them to us, and new skill sets are expected of us, sometimes almost overnight. I recall one job posting that wanted five or more years of experience in a new language that had only become known commercially the year before.
The fact is, if you took all the possible languages, libraries, tools, and everything else now available to the programming community and stirred them all together in their many thousands, then cut a narrow slice to represent what you know and have experience with, what are the chances that your narrow sliver will exactly coincide with the sliver of skills and experience being requested by a job posting somewhere?
This is sometimes the advantage of the independent developer. He (or she) can only bring to the job the things he has experience with, so the job, whatever it is, will be defined in those terms. If you end up having to be replaced on the job, the likelihood is that the search will be for someone with your same qualifications. Again, it is improbable that another person exists with exactly the same background that you have.
These are just observations that I've made. I've also noted that we often do not choose the tool best suited to the job, but the one best known to the programmer or to the person identifying the requirements for the job. We realize that the time and effort to retrain and get up to speed is prohibitively costly and needs to be avoided wherever possible. So demands for specific skill sets in the right combination with each other will continue, and some combinations will be of greater value, and in greater demand, than others. It can also happen that the more identified you are with a certain type of job or position, the less well suited you may seem for other jobs or positions.