The Spike and the Peak


The figure above, from Robert Anson Heinlein's "Pandora's Box" (1952), is perhaps the first graphical representation of the concept that technology is not only progressing, but progressing at an exponentially growing rate. Today, this concept sometimes goes under the name of the "technological spike" or the "technological singularity". However, we also see increasing concerns about peak oil and, more generally, about "peak civilization". Will the future be a spike or a peak?

The 1950s and 1960s were perhaps the most optimistic decades in the history of humankind. Nuclear power was soon to provide us with energy "too cheap to meter", space travel promised weekends on the moon for the whole family, and flying cars were supposed to be the future of commuting. At that time, the science fiction writer Robert Anson Heinlein may have been the first to propose that technology was not only progressing, but progressing at exponentially growing rates. In his article "Pandora's Box" (Heinlein 1952), he published the figure reproduced at the beginning of this text. Curve 4, with "progress" going up as an exponential function of time, is the trend that Heinlein enthusiastically proposed.

The same concept has been proposed several times after Heinlein. Robert Solow (1956) interpreted an exponentially growing "residual" in his models of economic growth as technological progress. The concept of the "intelligence explosion" was introduced by I. J. Good in 1965; that of the "technological singularity" was published by Vernor Vinge in 1993, although he had expressed it for the first time in his novel "Marooned in Real Time" (serialized in Analog magazine, May-August 1986). The concept of the "technological spike" was introduced by Damien Broderick in 1997 and that of "accelerating change" by Ray Kurzweil in 2003. In all cases, the growth of technological progress is seen as literally "spiking out" to levels that the human mind can no longer understand. Sometimes the term "technological singularity" is used to describe this event. The people who tend towards this view, sometimes called "extropians" or "transhumanists", are highly optimistic about the capability of technology to solve all of our problems.

However, over the years, we seem to have been gradually losing the faith in technology that was common in the 1950s. We are increasingly worried about resource depletion and global warming. Both factors could make it impossible to keep the industrial society functioning and could lead to its collapse. These ideas, too, originated in the 1950s, when Marion King Hubbert (1956) first proposed the concept of a production peak for crude oil, later called "peak oil". The idea that resource depletion is a critical factor in the world's economy has been proposed many times, for instance in the series of studies that go under the name of "The Limits to Growth," which first saw the light in 1972. Today, Hubbert's ideas are the basis of what we call the "peak oil movement". The concept is often extrapolated to "peak resources" and to "peak civilization", which could also result from the effects of anthropogenic global warming. The people who follow this line of thought tend to be skeptical about the capability of technology to solve these problems.

So, what will the future be: the spike or the peak? Will the peak destroy civilization, or will the spike take it to heights never experienced before? A first crucial question on this point is whether progress is really moving at exponentially growing rates. The answer seems to be no, at least if we consider technology as a whole. In most fields we are stuck with technologies developed decades, or even centuries, ago. The performance of motor cars, for instance, is not improving exponentially; otherwise we would expect cars to double their mileage and halve their prices every few years. This is a qualitative observation that we can make by ourselves, but there have been studies that have examined such indicators of progress as the number of patents published every year (Huebner 2005). The result is that the rate of technological innovation is not increasing and may actually be slowing down. As discussed, for instance, by Ayres (2003), there is no factual ground for Solow's 1956 assumption that the growth of the economy is mainly generated by "progress."

Yet, there is at least one field of technology where progress is, indeed, growing exponentially: information technology (IT). The growth of IT can be quantified in various ways. Moore's law is well known: it says that the number of transistors (or gates) on a single chip grows exponentially. The law has been verified for several decades and the trend, with a doubling time of about 24 months, shows no signs of abating. Perhaps less well known is the explosive growth of information stored in electronic form. A study by the International Data Corporation (IDC 2008) shows that the number of bits stored increases by a factor of ten every five years. At present, we have a total of approximately 280 exabytes (billions of gigabytes) stored, which corresponds to about 45 gigabytes per person on the planet. The amount of information transmitted over the internet is also rising at an exponential rate: according to Morgan Stanley (2008), we are transmitting more than 12 million terabytes per month. We have no quantitative data for exactly how fast the general field of information technology is growing, but from the growth of its many subsections we can say that it is accelerating fast.
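
As a back-of-the-envelope check of these figures, here is a minimal Python sketch that reproduces the per-person storage estimate and converts the "factor of ten every five years" growth rate into a doubling time (the world population of roughly 6.7 billion is my own assumption, not a number from the IDC study):

    import math

    # Figures quoted above; population is an assumed ~6.7 billion (late 2000s).
    total_bytes = 280 * 10**18          # 280 exabytes
    population = 6.7e9

    per_person_gb = total_bytes / population / 10**9
    print(f"storage per person: ~{per_person_gb:.0f} GB")   # roughly 40-45 GB

    # A factor of ten every five years corresponds to a doubling time of
    # 5 * log(2) / log(10) ~= 1.5 years, i.e. about 18 months.
    doubling_years = 5 * math.log(2) / math.log(10)
    print(f"doubling time: ~{doubling_years * 12:.0f} months")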

Surely, progress in IT needs plenty of resources and a functioning economy, and both conditions could be at risk in the future. But the demise of civilization is likely to be a slow and complex affair; something that could span most of the 21st century or, at least, the first half of it. Can we keep progress in IT alive and well for that long? Probably yes or, at least, it should be possible to allocate enough energy to keep computers running. From the IDC study cited above, it turns out that we spend about 30 billion dollars per year on the energy used by computers and about 55 billion dollars on energy costs for new servers. This estimate doesn't take into account all the energy used in data processing, but it gives us an order of magnitude for the core energy costs of the computing world. Considering that the world oil market alone is worth a few trillion dollars per year (depending on the vagaries of oil prices), we see that we probably need no more than a few percent of the world's energy production for our computers. It is not a negligible amount, but it seems very unlikely that, facing an energy shortage, we would cut back on something as vital to us as IT. Nobody should bet on the survival of SUVs in the coming years, but computers will keep working and Moore's law could stay alive and well for years, at least; perhaps decades.
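
As a rough cross-check of the "few percent" estimate, here is a minimal Python sketch comparing the IT energy spending quoted above with the size of the world oil market (the oil consumption and price figures are illustrative assumptions of mine, not data from the IDC study):

    # Assumed, illustrative figures: ~85 million barrels/day at ~$70/barrel.
    it_energy_spend = 30e9 + 55e9            # dollars/year, from the text above
    oil_market = 85e6 * 365 * 70             # dollars/year, a bit over $2 trillion

    share = 100 * it_energy_spend / oil_market
    print(f"IT energy spending vs. world oil market: ~{share:.0f}%")  # a few percent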

The growing performance of information technology is going to change many things in the world. Eventually, it may lead to levels of "artificial intelligence" (AI) equal or superior to human intelligence. At some point, AI could become able to improve itself, and that would take it to superhuman, even God-like, levels. Such a superior intelligence is sometimes described as a sort of technological Santa Claus bringing to humans an avalanche of gadgetry that buries forever all depletion problems. Here, however, we risk making the same mistake that Heinlein made in 1952 in his "Pandora's Box". At the time, space travel was seen as the main thing going on, and Heinlein confused needs with possibilities, predicting anti-gravity devices and the colonization of the planets by the year 2000. This kind of mistake is similar to what Yudkowsky (2007) calls "the giant cheesecake fallacy": if you are making a cheesecake, you'll think that a better technology will simply help you make a bigger cheesecake.

In the present situation, our main problem seems to be energy and the cheesecake fallacy leads us into believing that we'll soon develop (or that AI will develop for us) a source of abundant and low cost energy just because we need it. But even super-intelligent computers have to deal with the physical world. Maybe there are ways to create the perfect energy source: safe, low cost, abundant and usable by humans for the sake of humans. But we don't know whether that is possible within the physical laws of our universe.

Besides, is a limitless energy source going to stave off collapse forever? This question was already asked in the first edition of "The Limits to Growth" in 1972, and the results were confirmed in later editions. The simulations show that if you develop a technology that solves the energy problem, population keeps increasing, and collapse is generated by lack of food and by pollution. So, you'd need more technological breakthroughs: ways of fighting pollution and of producing more food. But, in the long run, how would you cope with the ever-increasing population? Well, new breakthroughs to send people to colonize the solar system and, eventually, the whole galaxy. All that is not physically impossible, but it is an ever-growing, super-giant cheesecake. Is that what we really need?

In the end, our problem with vanishing resources is not that we don't have enough gadgetry. We have a problem of management. We tend to exploit resources well beyond their capability to regenerate, that is, beyond sustainability. In addition, we can't control the growth of population. This is what we call "overshoot" and it leads, in the end, to a collapse that has often been catastrophic in the history of humankind. Humans suffer from a short-range vision that leads them to discount the future at a very steep rate (Hagens 2007). It is a result of our evolutionary history: we are excellent hunters and gatherers but very poor planet managers.

So, the real question is whether advanced IT (or AI) can help us to better manage the resources we have. And, here, the answer seems to be negative, at least for the time being. There is no doubt that IT is helping us to be more efficient but, as James Kunstler said in his "The Long Emergency," efficiency is the straightest path to hell. Being more efficient is a way to exploit resources faster, and that may well accelerate the collapse of civilization.

Just think of a simple gadget as an example: a car navigator. When you are using it you are, in effect, taking orders from a computer that is smarter than you at the specific task of navigating the streets. The navigator will make it faster and easier for you to travel by car from point "A" to point "B", but it will have no say on whether it is a good idea to go from A to B. Besides, if you can save some gasoline in going from A to B by an optimized route, you may decide to use it to go further on, to point C. So, the greater efficiency resulting from the use of the navigator will produce no energy saving. This is just an example of what is called the "Jevons effect" or the "rebound effect", which often thwarts every effort to improve things by saving energy or being more efficient.
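
To make the rebound logic concrete, here is a toy Python calculation; the 20% efficiency gain and the "rebound" fraction are arbitrary assumptions chosen only to illustrate the mechanism, not estimates of the real effect:

    # Toy rebound-effect calculation for the navigator example above.
    baseline_km = 10_000        # km driven per year before the optimization
    litres_per_km = 0.08        # baseline fuel use
    efficiency_gain = 0.20      # optimized routes cut fuel use per km by 20% (assumed)
    rebound = 0.6               # fraction of the saving "spent" on extra driving (assumed)

    extra_km = baseline_km * efficiency_gain * rebound
    fuel_before = baseline_km * litres_per_km
    fuel_after = (baseline_km + extra_km) * litres_per_km * (1 - efficiency_gain)

    saving = 100 * (1 - fuel_after / fuel_before)
    print(f"fuel before: {fuel_before:.0f} L, after: {fuel_after:.0f} L, "
          f"net saving: {saving:.0f}% instead of 20%")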

Yet, it would not be impossible to use IT to fight over-exploitation, and we don't need super-human AI for that. IT can tell us where we are going and act as a "world navigator" for us, telling us how we can go from here to there, supposing that "there" is a good kind of world. The first digital computers were already being used in the 1960s to simulate the whole world system (Forrester 1971). In 1972, the authors of "The Limits to Growth" used their simulations to propose ways to avoid overexploitation and keep the world's economic system on a sustainable path. These simulations could be used as a guide for steering the world's economic system in the right direction and avoiding collapse. But, as we all know, policy makers and opinion leaders alike refused to take these studies seriously (the story of how "The Limits to Growth" book was rejected and demonized is told in my post "Cassandra's curse," Bardi 2008). So, we are still racing toward collapse; IT is just helping us to run faster in that direction.

There remains the hope that growing IT capabilities will make a difference in qualitative terms; that AI will become so powerful that it will save us from ourselves. Several years after his "Pandora's Box" article, Heinlein published a novel titled "The Moon is a Harsh Mistress" (1966) in which he described the birth of a human-like computer that helped a group of lunar revolutionaries to take over the local government and, eventually, became the hidden and benevolent ruler of the Lunar colony. But that, like many predictions, might be another case of the giant cheesecake fallacy: the fact that we need a technology will not necessarily make it appear and, more than that, it may not work the way we think it should. A God-like AI might not necessarily be compassionate and merciful.

In the end, both the spike and the peak are strongly nonlinear phenomena, and we know that nonlinear phenomena are the most difficult to predict and understand. The only thing that we can say for sure about the future is that it will be interesting. We can only hope that this will not have to be understood in the sense of the old Chinese curse.

The Author wishes to thank Mr. Damien Broderick for pointing out some missing references in an initial version of this text.

References

Ayres, R., 2003 www.iiasa.ac.at/Research/ECS/IEW2003/Papers/2003P_Ayres.pdf

Bardi, U., 2008 "Cassandra's curse", http://europe.theoildrum.com/node/3551

Broderick, Damien, 1997 "The Spike", Reed ed.

Forrester, J.W., 1971 "World Dynamics". Wright-Allen Press.

Good, I. J., 1965. "Speculations Concerning the First Ultraintelligent Machine." Advances in Computers, Vol. 6.

Hagens, Nate, 2007 "Living for the moment while devaluing the future". http://www.theoildrum.com/node/2592

Heinlein, R.A., 1952. "Pandora's box" The article was published in the February 1952 issue of Galaxy magazine (pp. 13-22) (thanks to Damien Broderick for this information). It doesn't seem to be available on the internet but a detailed review and comments on its predictions can be found at: www.xibalba.demon.co.uk/jbr/heinlein.html. The figure at the beginning of this paper is taken from the Italian translation of the 1966 update of the paper that was published in the "Galassia" magazine.

Hubbert, M. K., 1956, http://www.energybulletin.net/13630.html

Huebner, J., 2005, "A Possible Declining Trend for Worldwide Innovation," Technological Forecasting & Social Change, 72(8):988-995. See also http://accelerating.org/articles/huebnerinnovation.html

IDC 2008, http://www.emc.com/collateral/analyst-reports/diverse-exploding-digital-...

Kurzweil R, 2003, "The Law of accelerating returns", www.kurzweilai.net/articles/art0134.html

Morgan Stanley 2008, http://www.scribd.com/doc/2683604/Internet-trends-2008

Solow, R., 1956, "A Contribution to the Theory of Economic Growth." The Quarterly Journal of Economics 70 (February 1956): 65-94. Available from www.jstor.com (subscription required)

Vinge, V., 1993, "Technological Singularity". http://www-rohan.sdsu.edu/faculty/vinge/misc/WER2.html

Yudkowsky, E., 2007, "Reasons to focus on cognitive technologies". http://www.acceleratingfuture.com/people-blog/?p=15

Why do you allow only two options, "spike" or "peak"? What about "plateau"? After all, a logistic-type curve is both quite common in nature and exhibits a plateau.
It would seem that would be a quite reasonable outcome for something like the human population on-planet.

Correct me if I'm wrong but I think the plateau scenario is just one subset contained in the "peak" scenario. A peak just means you won't go any higher. Global oil production will peak (or has peaked) and it exhibits something similar to a plateau. My interpretation is that "spike" means the endless limitless improvement of technology. I don't see this as plausible. As future resource constraints continue to impact the carrying capacity of the planet, you'll have fewer resources available to support the people who develop this technology. Over the coming centuries, population will drop off and people's time will be spent on tasks dealing with the basic necessities rather than making faster computers.

TS

In the long, long run we're all dead. That includes humanity. And the Sun, and probably the Universe itself. But speaking in terms of interest to ourselves and our descendants, it seems "stabilization" would be an option.

Correct me if I'm wrong but I think the plateau scenario is just one subset contained in the "peak" scenario.

Not necessarily - properly speaking, a logistic curve just approaches a maximum asymptotic value. The derivative of the curve doesn't go negative.

Salonlizard, it would be wonderful if we could stabilize the parameters of the industrial society on a plateau. Unfortunately, since we are already in overshoot, at the minimum we must reduce the consumption of fossil fuels - that means we can't stabilize the system at the present level. But there is a more fundamental point: we have no socioeconomic or political structures that favor stabilization. On the contrary, all these structures are pushing for more growth; the result can only be collapse. Maybe one day we'll find a way, but from recent history we have no reason to be optimistic. Think how the authors of "The Limits to Growth" of 1972 proposed exactly what you are proposing: stabilization. They were ridiculed and demonized and nothing was done. Little hope, I'd say.

I think SalonLizard is confusing the plateau of cumulative oil production with the plateau of instantaneous production. He is talking about the plateau of the s-curve, which will eventually happen. Right now, we are concerned about a plateau at the current levels of production. Something tells me he is missing that distinction. I don't think he understands how hard it is to maintain a production plateau.

Drilling and completion technology has made some stair-step advances coming into the 21st century, but unfortunately reservoir management, in some cases, is stuck in the mid-19th century. I refer to the ongoing flaring of NG discussed in Rockman's key post.

Moore's law probably doesn't have much juice left; new chips use exotic materials such as hafnium, which is in short supply - one study suggested we only have enough until 2017. Also, a corollary of the law is that computers get both smaller and faster, but this is not happening - netbooks, for example. So even if the 'law' could go on, it is doubtful that the market will support it; we are at 'good enough' computing for most.

As you mention, there is a lot of growth of 'information', but most of that is currently YouTube and P2P file sharing; maybe 45% of all traffic is these two items. One must wonder if this can be considered 'progress'; as you allude, we are probably just filling up space to fill up space.

Can IT save us from ourselves? It's a good question, particularly when the entire industry takes a back seat to social, political, and cultural issues that are of greater concern. There are at least two camps here, the Kunstlerites and the Kurzweilians, the former being the more realistic to me.

Ugo wrote:

....but computers will keep working and Moore's law could stay alive and well for years, at least; perhaps decades.

Sorry Ugo, but that is just not the case anymore. The exponential growth in computing power has been slowing for some time now and there is no doubt it will soon stop completely. The problem is silicon. You can make silicon features only so thin before the current overheats them and burns them up. That problem was overcome in the past by lowering the voltage and reducing the current with each new generation of chips as each circuit got thinner and thinner. Now each circuit is about as small or thin as it can get without even the tiny currents they currently carry burning them up.

A search went out to find "something else" other than silicon to make chips from. A breakthrough was thought to have been found at Bell Labs, but the scientist who made the breakthrough, J. Hendrik Schön, was found to have faked the data. Now there is nothing on the horizon that is expected to replace silicon and save Moore's law from hitting a brick wall. The speed of the computer cannot increase much more and each circuit in the chip cannot get much smaller.

Moore's Law, better described as "Moore's Observation", has been tailing off in recent years - still increasing, but at a much slower rate - and will very soon come to a complete stop. Computers will keep working but Moore's Law will not stay alive.

Jan Hendrik Schön

The implications of his work were significant. It would have been the beginning of a move away from silicon-based electronics and towards organic electronics. It would have allowed chips to continue shrinking past the point at which silicon breaks down, and therefore continue Moore's Law for much longer than is currently predicted. It also would have drastically reduced the cost of electronics.

Ron Patterson

The problem is not silicon, nor any physical limit. A von Neumann computer is a formally deciding machine (not a system) and is therefore limited much more severely. Most human decision-making is not formal; it is "intuitive". Von Neumann computing lacks "gut feeling" as well as "ideas". So intelligence is rather a non sequitur: the new idea is not consistent with the problem. That is, human (and most probably animal) solution-finding is not logical. So far there is no such thing as artificial intelligence. That would require a non-formal theory of context.

Hahfran, we are talking apples and oranges here. I, or we, are talking about computing power, or the speed and physical size of computers as programmable data processors. And here, as far as Moore's Law is concerned, the problem is silicon or rather the limits of silicon.

You are discussing something else entirely. You are speaking of computers as thinking machines that may someday replace humans and all the innate capacities the human mind possesses. That is another matter altogether and I would never argue with you on that point. Computers will never possess these capabilities. But Moore's Law has nothing to do with this concept, only with computers as programmable data processors.

Ron Patterson

Partly agreed.
While everyone is on the subject of the peak of some physical resource, apparently no one is concerned about the peak of human resources.
Outsourcing workload is profitable at first, but then one loses skill, which grows abroad instead. I think that the idea of keeping research and development at home while outsourcing only mechanical, repetitive work will fail. It has already failed in Switzerland. Their industry depends on a continued influx of skill of all kinds, including engineering and research experts. Because once the basics are lost, the upper class - so to speak - runs out of ideas. Manufacturing quality deteriorates and eventually the profit from outsourcing is lost in the extended cost of quality control and engineering, rework, and loss of market share. But the skill is gone.
IMO the way of thinking predominant in a technologically advanced society is already algorithmic. They may be clever algorithms, but to that extent computers can replace humans, because the latter are on their intellectual way down.
Recently a friend recommended to me a book written by O'Shea, a US mathematician. I said no, I am familiar with topology. But he kept on insisting. I read it and I am perplexed. This man has an incredible educational talent. But he is a rare exception.
Thus I am far more worried about peak education than about peak oil and the like.

What people fail to realize is that any AI will have the same mental limitations (and diseases) any human being has, and therefore will act in similar ways. Of course an AI will be non-corporeal (less corporeal at first, until its computer systems are truly everywhere).

There is, however, no problem whatsoever with simulating a large, very humanlike AI on von Neumann processors. It's not as fast as it might be given optimal hardware, but that goes for everything. Certainly the most useful computers are von Neumann computers.

Computers will never possess these capabilities.

Probably not; however, I find this guy's work rather interesting.

Link

Jeff Hawkins
Numenta
November 2, 2007

Jeff Hawkins is the founder of two computer companies, Palm and Handspring, and the designer of many computing products including the PalmPilot and Treo Smartphone. He also founded and ran the nonprofit Redwood Neuroscience Institute (now part of UC Berkeley) and founded the for-profit Numenta, which is developing a new technology, Hierarchical Temporal Memory, based on neocortical memory architecture. Hawkins has a BSEE from Cornell University. He was elected to the National Academy of Engineering in 2003.

Yes. His book with Sandra Blakeslee, On Intelligence, is worth reading; he's a smart guy, and gives a good talk if you get a chance to hear him. This is worth keeping an eye on, if for no other reason than it might lead to parallel algorithms amenable to serious hardware acceleration, in the same way (but in the opposite direction) as has occurred with 3D graphics.

Most AI has really not gotten very far, although there are some useful expert systems around (but those are not general AI in the sense that most people think). In chess and checkers, brute force essentially won over AI, just as Ken Thompson predicted in the early 1980s.

Jeff's approach is at least interesting and different.

See also: Cyc.

Well, all the data I could find say that Moore's law is still going on. Of course, nothing can go on exponentially forever - I think we'll see chips growing in power for several years; maybe not exponentially any more, but still getting more powerful. Then, there are innovations that might revolutionize the field and start a new and even faster exponential spike - quantum computing for instance. In the end, however, this is not the point. The point is what we are going to do with all this computing power. I argue in my text that better computers are simply taking us to hell faster. Quantum computers are not going to help much, if this is the case.

I must admit I can hardly find somebody I would agree more with than Ugo (da Vinci)!

The point is what we are going to do with all this computing power?

That question was posed at the dawn of vacuum tube computers (Eniac, etc.) and the conclusion of the questioner was that America would need no more than 5 computers tops.

I argue in my text that better computers are simply taking us to hell faster.

Better to have internet in hell than telegraph in heaven.

How about calculating and simulating how to do fusion? If that works, we're back to the 1960s.

Sorry to be anal about this, but silicon is the semiconducting metal that makes most of modern computing possible, whereas silicone is a polymer with a silicon and oxygen backbone, most commonly used to seal the edges of baths.

whereas silicone is a polymer

You're not being anal ... I was about to point out the same thing ... but it looks like the TOD editors fixed up everyone's posts by changing "silicone" to silicon.

Actually silicon is much more than a semiconductive element. Germanium is also a Group IV semiconductive element. But silicon is magic. First because it is abundant in the Earth's crust and thus cheap. Second because it so easily forms into crystals. Third because when you burn silicon (Si) in pure oxygen (O2) you get an amazing electrical insulator (SiO2), also known as glass. Fourth because one particular metal, aluminum (Al), naturally adheres to SiO2. It is this coincidence of amazing characteristics, plus the wonders of photolithography, that brought us what we take for granted nowadays: the microprocessor chip.

If Si also had a direct bandgap, that would be amazing!

It might have been a typo, but just to get even more anal, silicon is not a metal; it's a semiconducting element, as are carbon and germanium. They are sometimes described as "semi-metals".

I think Darwinian is on to something, but I have recently read a report that states (paraphrased) "the transistors are getting so close together now that tiny impurities in the crystal lattice are starting to introduce transistor failures and the reliability of integrated circuits is being compromised".

I cannot comprehend how these devices are made; it's miraculous really.

As for drawing a line under progress, I think we have gone far enough too, but it's hard (or even impossible) to win that argument. You lose on the grounds that "with that attitude we would still be in caves". So progress continues until nature draws the line for us, I suppose.

Don't forget Breast Implants!
;-)

Haha! I recently saw a shirt made for babies at Nashville airport; the front of it said:
"mmm....boobies". I need to stop and ask if that shirt comes in adult sizes!

This is why Intel and AMD are not offering any new technology below 0.8 Micron, instead launching Dual Core and Quad Core chips using the last genuine drop in circuit size of the Pentium 4.

not offering any new technology below 0.8 Micron

Dude, (a.k.a. Rip van Winkle),

Hate to bring you up to date, but microns (10^-6 meters) are so yesterday. We're doing double-digit nanometers (10^-9 meters) nowadays. That's just two orders of magnitude above the width of single atoms (measured in Angstroms, or 10^-10 meters).

And I suspect that the width of a single atom represents a hard ceiling, beyond which we cannot go. That is the absolute limit that will kill Moore's law.

0.8 microns is 800nm. That was a very long time ago.

Umm, well before we had transistors no one would have thought you could go beyond some speed in valve switching. Perhaps we will stop using transistors and use some new invention, not yet discovered, which does not depend on silicon? You may see this as unlikely, but given that it has happened at least once before in this particular field, I don't see any reason to rule it out. Saying nothing can replace silicon is a bit like Lord Kelvin in 1900 making the following statement:

"There is nothing new to be discovered in physics now."

He believed only refinements in measurement were left to be made. This was before relativity, quantum theories, subatomic particles etc.

No new technology is "on the horizon" before it is discovered; if it were, we wouldn't have to discover it!

Seems to me that before making absolute pronouncements on the future of electronics chips, one should learn the difference between silicone (caulking, implants) and silicon (computer chips, etc.)

I think Moore's law is showing its limits right now, at least in the CPU realm. Since they're unable to squeeze much more performance out of single processor cores anymore, the "solution" has been to move towards multi-core chips.

These are presented as single units because they're contained on the same die, but structurally they're more like separate processors that are just very tightly meshed (at the die level instead of the mainboard level). Technically, if you view the multi-core chips as a single processor the transistor count is still doubling, but I think that violates the spirit, if not the letter, of Moore's law.

"It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens"
-Gordon Moore, April 2005

Distributing units of work amongst parallel CPUs has the same effect as hiring more officials. At a certain point the administrative effort to share the workload exceeds the added processing power.
So, theoretically, the limit of n in n-core CPU chips is well established: it is 8.
If one wants more, the applications have to be thoroughly re-programmed.
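
One standard way to put numbers on this diminishing-returns intuition is Amdahl's law: if a fraction p of the work can run in parallel, n cores give at most a speedup of 1/((1-p) + p/n). A minimal Python sketch, with an assumed (purely illustrative) parallel fraction of 90%:

    def amdahl_speedup(n_cores: int, parallel_fraction: float) -> float:
        """Upper bound on speedup when only parallel_fraction of the work scales."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_cores)

    # Illustrative only: with 90% of the work parallelizable, going from 8 to
    # 64 cores adds less than a 2x further speedup despite 8x the hardware.
    for n in (2, 4, 8, 16, 64):
        print(f"{n:3d} cores -> speedup x{amdahl_speedup(n, 0.90):.2f}")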

For the performance of normal PCs that most people can afford, this whole discussion is utterly irrelevant and has been for about a decade.

CPU speed might be 2.4MHz or more, and the advertised memory speed might be 833MHz or more, but memory (and even cache) is so awful that the actual effective memory speed will be maybe 150MHz at the sustained very best, and well below 15MHz and as low as 6MHz with certain sequences. The core spends most of its time, as it were, standing in line while the memory is perpetually out to lunch or hiding somewhere, as if the PC were some sort of miniature DMV office.

Added cores may shorten a few operations slightly, but mostly they will just be sales-pitch gimmicks that sit there waiting on the memory while consuming electricity. This is why software that actually needs to compute intensively, such as, say, FPGA design software, may need ferociously expensive "hardware accelerators" when it's used on a large job. The CPU cores probably could do the job, but the memory trots along leisurely in some bygone era.

The problem is that since, say, 1980, memory access time for normal, affordable chips has improved from around 80 nanoseconds to around 5 or 7. That's it. (And 32-bit Windows on Intel-style chips does two non-useful "descriptor" fetches for every real fetch; one of those is usually cached, so call that 10 or 14 nanoseconds.) The very high speeds given in data sheets are mainly trickery, so complex that the explanations occupy many dozens of densely filled pages. The tricks work when the memory is accessed in continuous, rigid, lockstep sequence. That happens in contrived "benchmarks", but not so much in real computing with real software. Once the sequence is broken, the same tricks slow things to a crawl, accounting for the sub-15MHz performance under the right (i.e. wrong) conditions.

What has improved tremendously is the density of dead storage. So you can store lots of songs or videos on your PDA, which is nice (but doesn't address "performance".) Playing back audio is rather undemanding, and even video on a tiny low-res screen is only moderately demanding. The utter lack of improvement in memory speed is not really an obstacle there.
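
For anyone who wants to see the lockstep-versus-scattered effect directly, here is a minimal Python/NumPy sketch (assuming NumPy is installed; the array size and the timings are machine-dependent and purely illustrative):

    import time
    import numpy as np

    N = 20_000_000                       # ~160 MB of int64, much larger than cache
    data = np.arange(N, dtype=np.int64)

    seq_idx = np.arange(N)               # lockstep order: prefetch-friendly
    rnd_idx = np.random.permutation(N)   # scattered order: defeats the tricks

    for name, idx in (("sequential", seq_idx), ("random", rnd_idx)):
        start = time.perf_counter()
        checksum = int(data[idx].sum())  # gather + reduce over the whole array
        elapsed = time.perf_counter() - start
        print(f"{name:10s} {elapsed:.2f} s (checksum {checksum})")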

"CPU speed might be 2.4MHz"

Do you mean GHz?

Added cores may shorten a few operations slightly, but mostly they will just be sales-pitch gimmicks that sit there waiting on the memory while consuming electricity.

The one area in a typical PC where adding cores has helped is in graphics. When a two megapixel display is updated sixty times a second, there are about 120,000,000 independent calculations to make in each second. It is easy to spread these out among dozens of cores, because they are independent.

In graphics, as the speed of calculation becomes faster and faster by comparison with memory, increasingly many calculations tend to be done between each "texture fetch" (memory read.)

True. Multis do well when the problem to be solved is simply crunching through a big pile of independent calculations, and the work is easily divided and conquered. So multicore video card GPUs make sense. CPUs tend to do more branched logic, where the results of one calculation are required for the next to start, so they don't have as much opportunity to really let fly with parallel execution.

For years chips stayed single core despite no technical obstruction I'm aware of to building multicores (I'm sure multis were out there, but the desktop market stayed with one core). It seems greater general performance increases could be gained by speeding a single core up - they only moved to multis when they reached a speed limit on that single core. That's why it sort of feels like they're "cheating" on Moore's law with the multis.

As regards raw performance, as mentioned above slow memory speed is screwing things up tremendously, and the sheer amount of useless bloat on the software end probably saps more performance than anything.

One more time:

Moore's "Law" is about density, not about speed, it just happened that we kept getting more speed from transistors that switched faster because they were smaller. For years now, designers have been up against wire delays [which is not particularly speed-of-light-in-wire issue, but "RC delay"].

People were building parallel multiprocessors from microprocessors from the mid-1980s onward, and we were all expecting to do multi-cores when the time was ripe.

First, people needed enough die space to get on one die:
- CPU
- FPU
- first-level caches
- (sometimes) control for external caches
- (sometimes) 64-bittedness

That happened by the early 1990s, as in 1991's MIPS R4000.

By the mid-1990s, the extra die space from Moore's Law got used in micro-architectural complexity like out-of-order speculative-execution chips, primarily to attack the memory latency problem. Hardly anyone built multi-cores at that point because:

a) One could get more performance at low cost by increasing the sizes of on-chip cache memories and memory management units, adding other extra functional units, etc. Any serious designer was evaluating multi-core designs in the early 1990s, and in general, it just was not yet a good idea - but it wasn't because we weren't thinking about it and knowing it was coming.

b) All of that worked fine for uniprocessor performance, and hence for (relatively high-volume) desktops.

c) AND, if anyone were serious about multiprocessors, they were building machines with 4-, 8- ... 128, 256 CPUs, and just going to 2-cores wasn't very useful - you had to do all the rest of the work anyway. In any case, there's a program-dependent limit on the number of cores/CPUs that can usefully share a single memory bus.

d) THEN, as usual, there are diminishing returns. When it takes a lot of design complexity and die area to get 1% performance from architecture, you stop doing that. In addition, a single, complex CPU on a big chip has a lot of long wires, and long wires are Bad. By backing off the complexity, one can get 2 cores on a chip, and the fast signals are kept in smaller, more localized spaces, i.e., shorter wires.

Power usage and cooling issues returned with a vengeance, after the happy period in which CMOS replaced water-cooled Bipolar mainframes, and used so much less power that you could get away with really easy air-cooling, such that:

- Laptops got more popular, and battery life mattered.

- Even for desktops and servers, energy use and cooling matter. In particular, it isn't just a question of extracting the total heat, it's the need to cool the *hottest* spot on a chip, and designers rearrange chip floorplans due to the need to spread the heat more evenly. Even in the mid-1990s, systems designers were starting to struggle with difficult heat-extraction problems for large CMOS systems.

Anyway, no one is *cheating* on Moore's Law :-) and we've still got a few more rounds of it for CMOS, albeit not many. After that, it's very unclear. Fortunately, there are many high-value applications that use very cheap, relatively slow micros, and Moore's Law helps them.

Ah! So they're not being as naughty as I had suspected. A very interesting lesson in chip design history - thanks!

I do remember seeing multi-CPU (rather than multi-core) designs in the 90s. I once saw an ad for a desktop way back in the day that had dual Pentium 200s in it and that seemed like an unbelievable amount of horsepower.

Still, even from a pure density perspective, the end is at least in sight for the law (5-10 years maybe?) as we approach atom-sized transistors. That seems like a pretty solid ceiling, unless some newfangled sub-atomic transistor is invented soon.

Of course, by then we may have the whole quantum thing up and running. What are the odds the NSA already has a working model? :-)

5-10 years: see pp.15-18 of the ITRS roadmap I mentioned in an earlier post.

Actually, people have been doing work on recording bits via electron spins and the nuclei of atoms, but that is still really Research, so we'll see if it ever gets somewhere practical. As for continually shrinking data, for amusement I recommend the famous short science-fiction story "Ms Fnd in a Lbry".

For me, John Michael Greer's essays resonate powerfully with my "common sense". To paraphrase (or quote poorly), the only thing that has allowed this baroque flowering of "high technology" has been an abundance of energy, universally available, at a trivially cheap price.

"High technology" can be expected to peak about now, along with everything else. Not that there aren't some efficiency gains to be made, but things won't continue to accelerate. And we can expect some of the more energy-intensive technology (war?) to make some reverse progress.

This seems more of a philosophical issue than a mechanical one. After all, what happens 'next' is a posit, not to be extrapolated.

The 'Great' inventions were made centuries ago: clothes, weapons, agriculture, language, and - of course - religion. It was the first and the prompter for all the others ... Then, cooking, shelter, economies/trading, storage vessels, boats, then art, writing and music. Nothing since has come close in importance, and none of these really were destructive of the natural world until the smelting of metal. So ... the curve of invention would have been highest from pre-history through the Bronze Age, then declining until the 15th century with the invention of moveable type, optics and 'higher' mathematics.

Most could (and do) live without computers but few could live or love without music and nobody would go outside without clothes.

The 19th and 20th centuries gave us mobility and better weapons, but both have proven not so beneficial even on their own terms. Weapons have become destabilizing and mobility between two or more virtually identical 'places' is a meaningless activity. The benefits are an illusion. Antibiotics are probably the invention with the least collateral damage, but the effectiveness of antibiotics falls with overuse, and prolonging life past a certain point is becoming an issue demanding its own examination.

As far as technology self- developing a power source for itself and for us as a byproduct ... this is science fiction. This is our world, we are stuck with it.

Man, what historical era are you from? Run Forrest, Run!

I'm old, I'm old, I can feel it in the water ...

(I hear fell voices in the air ...)

mobility between two or more virtually identical 'places' is a meaningless activity

You need to look around more. Illinois, Montana, Wisconsin, Alaska - they are north and sometimes cold, but hardly identical, socially or physically; those are places I lived, and there was much more variety available if I'd wanted it. When you move you need to get out more: an apartment, TV and office are pretty well the same, and suburbs are sort of similar, but America is a big country; if you can't find variety in this country it is your problem.

...the only thing that has allowed this baroque flowering of "high technology" has been an abundance of energy, universally available, at a trivially cheap price.

I think I'd have to disagree, at least when talking about ICT (information and communications technologies).

As Ugo pointed out in the original post, ICT are not that energy intensive.

I just started reading a great new book called Changing Maps: Governing in a World of Rapid Change by Steven A. Rosell et al. It's loaded with wisdom and insights from our Canadian neighbors to the north. Somebody here on TOD recommended it a few months back, but I'm just now getting around to reading it. Since I perceive you and me to be on similar journeys, I very highly recommend it.

It breaks our technological advancement down into waves. We're now in the fifth wave (1990s-?): micro-electronics, computers, telecommunications, data networks. This supersedes the 4th wave (1940s-1990s): oil, automobiles, petro-chemicals, aircraft, roads (highways).

The old, "Fordist" paradigm was "energy-intensive." The new, ICT wave is "information-intensive."

But this isn't the part of the book that I find most interesting. Much more intriguing to me is the part on "Culture and Values: The Postmodern Challenge." Basically it lays out our cultural evolution from premodern → modern → postmodern, postmodern being when "the myth of objectivity breaks down":

A key question that emerged for us in this discussion is how it is possible to construct and sustain any sort of consensus in a postmodern society that is characterized by multiplying and fragmenting systems of belief. Underlying this dilemma is the changing basis on which particular decisions can be legitimized. One roundtable member summarized those different bases of legitimacy:

In the premodern era, the legitimacy of such choices rested on the word of God or some other supernatural authority. Then we had the scientific revolution, the enlightenment project, and the birth of modernism. In the modern era, legitimacy increasingly has been based on ideas of research and of scientific method; the notion that if we got the answer by using those scientific methods, if we follow the right procedures (and have the right credentials), we could come up with an answer that, in some sense, was objectively true. That was the new source of legitimacy. It has lasted up to the present day and, as bureaucrats, we are part of that. The legitimacy of the bureaucracy very much is based on its mastery of those rational and scientific methods.

Now we are into a postmodern situation, and the question becomes: what is the basis for legitimate choice? If it's not given by God, and if the scientific method cannot yield objective truth, then on what basis can we make legitimate choices?

--Steven A. Rosell et al., Changing Maps: Governing in a World of Rapid Change

Granted, Moore's law has reduced the price-per-bit over the last several decades. But it's heading for some limits: it's difficult to store more than 1 bit per atom, and you can't move information around faster than the speed of light. And we have come remarkably close to those limits by now... as I said, there may be room for efficiency gains yet.

But with that said, drive by a Google data center, a Verizon NOC, or an Intel wafer fab. If you look you will notice that they all have massive connections to the electrical grid. They will also have a large generator set out back. Though they manage enormous amounts of data, these enterprises are still dependent on abundant reliable cheap energy.

Limits to "Moore's Law" (actually "Moore's "planning tool"")

Moore's Law was NEVER a physical law but an observation and a method of controlling a complex mixture of technology, investment and design for a remarkable physical property. The property in question (although this could be subject to argument) is the ability of silicon to produce a faster switching speed for a transistor as it gets smaller. Now, like most exponential curves (like resource utilization), that exponential curve has flattened out. Even worse than that, a limit to the increasing switching speed of transistors has come from a new direction (i.e. different from the velocity of electrons in high-quality single-crystal silicon): it is heat. Or, more precisely, it comes from the properties of the metal lines as they get smaller and smaller (i.e. more transistors per square centimeter). All those metal lines are no longer acting as "bulk capacitors" in relation to all the other nearby metal lines. The distance between them is now a non-negligible fraction of the Debye length, and the gap between them now acts as a non-linear dielectric (i.e. there is more capacitance per unit distance than there would be at "infinite separation" ==> bulk). There is too much capacitance as the metal lines get close together and they are interacting with each other. They generate too much heat and therefore the devices are being slowed down to compensate. Transistors are not shrinking and speeding up as they used to because they are getting too small to ignore quantum limits.

The IT revolution, which had been based at least partly on transistors speeding up and getting smaller at the same time, is running out of steam!!

I'll bet on the "Peak Trajectory" for history (unfortunately).

Interesting article!

Ian

Looks suspiciously "made up". Any references for:

1) "that exponential curve has flattened out."

2) "a non-linear dielectric (i.e. there is more capacitance per unit distance than there would be at "infinite separation" ==> bulk)"

3) That any of your discussion has anything at all to do with anything "quantum", esp. "limits"

This site is far too loaded with pretend (and pretentious) experts.

Have you been asleep for the last 30 years? We know very well the basis of "legitimate decisions" in a postmodern society.

-> more "good feeling" (and work does NOT feel good)

The problem is, in these categories cocaine, and addictions in general, cannot be beaten. It was possible, in a pre-modern Christian society, to allow people the free use of marijuana, and even cocaine. 99.99% of people literally only used cocaine to allow the dentist to pull a tooth, and it's very, very, very good for that. But we cannot be trusted with choices like that; that's an obvious truth. It is most definitely not possible in a postmodern society to allow truly free choice without destroying that society.

Just look at the "I'm going to spend unlimited money" elected president, and the support he gets for doing that. He's already spent, in 4 weeks, the "never before seen" amounts Bush spent in 8 years. The fact that such a policy, which flies flat in the face of every economic theory, can even be reasonably considered by anyone who is not stoned, illustrates just how bad postmodern decision making is.

Moore's 'Law' was never a law. It was always hype. First it claimed that computer power doubles every 12 months. Then every 18. Now I hear it's up to every 24 months? Not much of a law. Evolution has been around for 150 years, with stunning success, and it's not yet referred to as the Law of Evolution. Sorry. Moore’s Law is just Moore’s Fantasy.

Computers now have about the intelligence of a cockroach. It will take a very long time before they are even as intelligent as a cat or a dog (though they may surpass humans by then). I view this 'singularity' stuff as just another utopian fantasy. Things will be better once we reach Shangri-La or El Dorado or the new world or suburbia or whatever greener plot exists on the other side of the fence. It's just another dream of the perfect world. It makes for nice science fiction. Meanwhile, we have to live in this world.

Jon.

Empirical laws are not laws; they are compressed diaries. An empirical law is, in effect, watching the image in the rear-view mirror and thinking you can predict what is in front of you.

An old comment on the expression "too cheap to meter" from the blog of the late Petr Beckmann.

http://www.fortfreedom.org/p06.htm

Having worked as a chemical engineer in manufacturing until I went back to school for information technology in the late 1990s, my observation is that the benefits of IT on the economy are not what the dreamers believe them to be.

Factories were automated many years ago by mechanical, pneumatic and electro-mechanical devices (switches and relays), and later by electronic controllers, before the invention of the computer. These primitive devices operated assembly lines, started and stopped motors, opened and closed valves, controlled flows and held levels in tanks.

Computers later added a higher level of control and replaced a lot of individual instrument devices, resulting in better operation and lower levels of manpower. However, computers did not dramatically improve plant productivity. Adding computers to an existing facility did not accomplish nearly as much as building a new facility.

While computers can perform elaborate computations, I do not know of a single law of nature ever discovered by a computer.

This argument reminds me of a remark made about Einstein by one of his colleagues who said that he was a scientist without a laboratory who only needed a pencil, paper and his brain. (Perhaps someone knows the quote and can post it.)

David Ben-Gurion, Israeli Prime Minister, said this about Albert Einstein: "He has the greatest mind of any living man... He's a scientist who needs no laboratory, no equipment, no tools of any kind. He just sits in an empty room with a pencil, a piece of paper, and his brain, thinking!"

ALSO

Before they immigrated to the US, the Einsteins endured the severe economic situation in post-WWI Germany. Mrs. E saved old letters and other scrap paper for Albert to write on and so continue his work.

Years later, Mrs. Einstein was pressed into a public relations tour of some science research center. Dutifully she plodded through lab after lab filled with gleaming new scientific napery, the American scientists explaining things to her in that peculiarly condescending way we all use with non-native speakers of our own language.

Finally she was ushered into a high-chambered observatory, and came face to face with another, larger, scientific contraption. "Well, what's this one for?" she muttered.

"Mrs. Einstein, we use this equipment to probe the deepest secrets of the universe," cooed the chief scientist.

"Is THAT all!" snorted Mrs. E. "My husband did that on the back of old envelopes!"

The moral here is that the ascent of man does not derive from the wealth of nations but from the minds of gifted men.

Paul, you're spot on, but I will take it further. High-technology circuits and the "processes" that are derived from them often create more problems than they solve. Relay logic and discrete control, to which you refer, are infinitely serviceable by any competent maintenance electrician/technician. If a relay becomes obsolete, another similar relay can be substituted, with some drilling perhaps. A modern CNC control is obsolete once the manufacturer (or chip supplier) decides so. The usual thing is for spares prices to rise to extortionate levels, and the end user is effectively held to ransom.
I won't say there are no benefits from CNC and variations of it, but it has been vastly over-exploited, often in applications that would work with much simpler electromechanical control.

A logical next step in answering this question (spike or peak) would be to look outward, beyond our little planet. Had "intelligent" life evolved elsewhere in the universe and produced a spike, wouldn't we have detected some sign of this? Radio astronomy has been listening for some time now. One might conclude from this that there is some overwhelming physical barrier to technological spiking, such as energy or pollution-sink limits. We might take this as a warning to cool our jets for a while, giving us some time to develop real solutions.

Dear concerned folks:

Computer chips are made from Silicon, not Silicone. Silicone is a rubbery material used to make equipment seals, window and shower/sink/bathtub caulking, and certain breast implants, among numerous other things, none of which are computer chips. Your points about Silicon vis-a-vis computer chips may be well taken as far as I know, just call the material by its proper name to avoid confusion and further all readers' understanding of your points.

It's clear that cheap energy has facilitated the dramatic rise of our current global model and, with it, the improvements in technology that could form a technology spike if allowed to run their course. Micro-electronics and AI will, IMO, continue on a path of improvement for a while even with Peak Oil. There is a lot of momentum in the machine.

The question in my mind is: at what point does NET energy depletion impair the global setup sufficiently to stop this progress?

I suspect that a lot of effort in the coming decades will go into making everything from the SMART-GRID-powered kitchen kettle to data centres much more efficient. Efficiency, NOT absolute power, may be the next market direction: profit via volume. The recent Intel Atom/netbooks are a good example of market-driven technology evolving to supply 'good enough' power. If this sort of thing can be rolled out en masse at the low end of the power spectrum, it might be possible at the high end to maintain the rise in power needed for the 'Big Applications' that would be the foundation for a spike via the evolution of machine intelligence to human level (e.g. protein folding).

I'm in my early 40s and my life has revolved around IT for the best part of 30 years. One thing's for sure: it's been a wild ride 'up', and if down is the next direction, that will be a wild one too! (I'm currently enhancing gardening skills for the next big breakout... :o)

Nick.

Cheap energy did fuel the information age, but in an indirect way. A chip fab is a multi-billion dollar investment. Designing a microprocessor from scratch is only slightly cheaper. The only way this works is if you sell many millions of chips to many millions of paying customers. For that you need a consumer economy fueled by cheap energy.

Keep in mind that a great deal of research in computer science and electrical engineering is funded by national defence budgets (which are in turn funded by an economy that is fueled by cheap energy). In the three research specializations I've participated in (computer graphics, discrete-event simulation, and grid computing), many-to-most presenters at conferences acknowledged that their research wouldn't be possible without DoD funding. (Not so much in CG, but very much so in DES and GC.)

Noam Chomsky has made several statements (here is one: http://www.chomsky.info/articles/199110--.htm) that the revolution in information technology was just a thinly veiled massive public subsidy to an industry that greatly benefits the military. It would be interesting to know how much the industry would slow without that subsidy. My impression is that DES and GC basically would still be in their infancy without military funding. I'm sure the same could be said of nearly every specialization that didn't have direct and immediate consumer application.

And so after 10,000,000 years of calculation the hyperintelligent supercomputer solved the problem...

Deep Thought: "The answer to Life, the Universe, and Everything is..."
Philosophers:"Yes?..."
Deep Thought:"IS..."
Philosophers (slightly higher):"Yes?..."
Deep Thought: "IS..."
Philosophers (really high):"Yes?..."
Deep Thought: Condoms.
Philosopher 1:"We are gonna get lynched y'know that?"

How dare you misquote Deep Thought? The answer, as everyone knows, is 42...

Men at some time are masters of their fates; The fault, dear Brutus, is not in our stars, But in ourselves, that we are underlings.

Julius Caesar Act 1, scene 2, 135–141

Indeed, our problems with management lie not in our technologies but in our stunted version of sapience, the basis of wisdom. Jevons' Paradox applies to IT as much as (or more than) anything.

Our brains are not equipped to deal with the management of incredibly complex, uncertain, and emotionally charged situations such as modern civilization, and especially globalized economies, present. We are clever enough to create the technologies and institutions, but we did not 'create' civilization. It self-organized on the basis of the bits and pieces we invented to solve local problems. Our capacity for systemic, strategic, and morally motivated long-range judgment is not up to the task at hand, I think. Nothing short of an evolutionary jump in sapience will help humanity continue and sustain. I strongly suspect that what is about to come upon us will provide the selective backdrop for an evolutionary bottleneck through which (I hope) the more sapient specimens will emerge. It has happened before for the genus Homo. No reason to think it might not happen again.

For an overview of the nature (and limitations) of sapience see my blog posts at: Question Everything. Scroll down to Dec. 16 for the introduction. I have not yet completed the series but the three that are posted will provide an overview and some insights, perhaps.

George

Would anybody like to give a date for "peak wisdom"?

To my (somewhat jaundiced) eye we appear to have passed it a long time ago.

It's more likely that the amount of wisdom in the world is a constant; the problem is that now there are 6 3/4 billion of us and counting, so it's getting spread a bit thin ...

We also appear to have passed peak accuracy at quoting Shakespeare.

Our capacity for systemic, strategic, and morally motivated long-range judgment is not up to the task at hand, I think.

If that statement is true, then how do you presume to judge present conditions / trajectory as being "wrong"?

John Michael Greer (http://thearchdruidreport.blogspot.com/) has indeed posted a wonderful collection of essays on how industrial-age man is totally imbued with the "myth" of continuous technical progress ("myth" is used in the sense of an unstated underlying belief that people aren't even aware of--it is just assumed). It will be immensely unsettling and disorienting to the western world when this myth dies. We seem to be somewhere between "denial" and "anger" in the coping phase, but the myth has to be completely uprooted for us to adapt to a world of sustainability.

As was pointed out, silicone is a rubber; silicon is a semiconductor, an element of valence 4 that can be "doped" with valence-3 or valence-5 elements to create semiconducting devices.

Moore's Law is indeed running out of gas because of physical limits (where have we heard that before?). The basic building block of an IC is the Field Effect Transistor, and part of its construction is an insulating layer (the gate oxide) that isolates the gate from the conducting channel beneath it. In continually shrinking the size of these devices to put more of them in each successive version, this insulating layer is now only several *atoms* thick. That limit is going to be hard to beat.
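As a rough back-of-envelope illustration of how thin that layer is (the oxide thickness and atomic spacing below are assumed round numbers for the sketch, not measured values):

```python
# Rough, illustrative estimate of how many atoms thick a modern gate oxide is.
# Both numbers below are assumptions for the sake of the sketch, not measured data.
oxide_thickness_nm = 1.2   # assumed gate-oxide thickness for a mid-2000s process
atomic_spacing_nm = 0.25   # assumed rough spacing between atoms in the oxide

atoms_thick = oxide_thickness_nm / atomic_spacing_nm
print(f"Gate oxide is roughly {atoms_thick:.0f} atoms thick")   # ~5 atoms
```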

Also, data centers are huge energy hogs. Google built one in The Dalles, OR, simply because the location is next to a hydroelectric plant on the Columbia River. I've read that all the data centers in the US use as much electricity as the city of Las Vegas. You've gotta wonder how much longer that buildout can go on.

John Michael Greer is certainly a solid choice to elaborate on "myth":

http://tinyurl.com/atzrhf

"A thousand years ago, vampires and shapeshifters, spirits of the ancestors and spirits that were never human at all, intelligent beings with subtle bodies or none, were as much a matter of everyday life then as electricity is now.
But we know better nowadays, of course.
Don't we?

This book is based on the uncomfortable knowledge that we don't know better—that at least some of these entities had, and still have, a reality that goes beyond the limits of human imagination and human psychology. For most people nowadays, such ideas would be terrifying if they weren't so preposterous. Plenty of modern Americans believe that UFOs are spacecraft from other worlds and psychics can bend silverware with their minds—but the existence of vampires and werewolves? To make things worse, this book explores such beings from the standpoint of an equally discredited system of thought: the traditional lore of Western ceremonial magic, which has been denounced and derided by right-thinking folk ever since the end of the Renaissance.
The word "monster" comes from the Latin monstrum, "that which is shown forth or revealed." The same root also appears in the English word "demonstrate," and several less common words (such as "remonstrance") that share the same sense of revealing, disclosing, or displaying. In the original sense of the word, a monster is a revelation, something shown forth.
This may seem worlds away from the usual modern meaning of the word "monster"—a strange, frightening and supposedly mythical creature—but here, as elsewhere in the realm of monsters, appearances deceive. Certainly, monsters are strange, at least to those raised in modern ways of approaching the world. As we'll see, too, monsters have a great deal to do with the realm of myth, although this latter word (like "monster" itself) has older and deeper meanings that evade our modern habits of thought. The association between monsters and terror, too, has practical relevance, even when the creatures we call "monsters" fear us more than we fear them.
The myth, the terror, and the strangeness all have their roots in the nature of the realm of monsters and the monstrous—a world of revelations, where the hidden and the unknown show furtive glimpses of themselves. If we pay attention to them, monsters do have something to reveal. They show us the reality of the impossible, or of those things we label impossible; they point out that the world we think we live in, and the world we actually inhabit, may not be the same place at all."

http://tinyurl.com/d7egt5

Product Description
Learning Ritual Magic is a training manual for anyone serious about improving their magic based on the western mystery traditions, including tarot, ritual magic, Qabalah, and astrology. "What you get out of [magic] can be measured precisely by what you are willing to put into it—and time is the essential ingredient in successful magical training," the authors write. And just as no one expects to run a marathon or play a Bach violin concerto without sufficient training, so practitioners of the magical arts shouldn’t expect to work complex, powerful magical rituals without a solid grounding in the techniques of Hermetic high magic. By spending at least a half hour a day practicing the lessons found in Learning Ritual Magic, the solitary apprentice attains the proper groundwork and experience for working ritual magic.

Learning Ritual Magic provides lessons on meditation and a set of exercises designed to develop basic skills in imagination, will, memory, and self-knowledge, all of which are absolute fundamentals to magical attainment. While the authors discuss the essentials of magical theory, they focus on daily, basic perspectives rather than launching into details of advanced practice.

Designed for the solitary practitioner, Learning Ritual Magic concludes with a ceremony of self-initiation.

==AC

Primitive culture. . . is based on a will to permanency as expressed in the seasonal rituals which have to be performed exactly in the traditional manner handed down through generations. In liberating himself from this eternal cycle of seasonal revival, civilized man had to find another expression for his need of permanency, which is manifested in the different forms of creative achievement called culture.
~Otto Rank

You conveniently forgot:

http://www.amazon.com/Long-Descent-Users-Guide-Industrial/dp/0865716099/

"Americans are expressing deep concern about US dependence on petroleum, rising energy prices, and the threat of climate change. Unlike the energy crisis of the 1970s, however, there is a lurking fear that now the times are different and the crisis may not easily be resolved.

The Long Descent examines the basis of such fear through three core themes:

* Industrial society is following the same well-worn path that has led other civilizations into decline, a path involving a much slower and more complex transformation than the sudden catastrophes imagined by so many social critics today.
* The roots of the crisis lie in the cultural stories that shape the way we understand the world. Since problems cannot be solved with the same thinking that created them, these ways of thinking need to be replaced with others better suited to the needs of our time.
* It is too late for massive programs for top-down change; the change must come from individuals."

Well sure, but aside from being batshit insane, Greer is still brilliant. Long may he howl!

Ya I guess we all must be insane to some degree to spend our time beating our heads against this unmovable wall... ;-)

==AC

“Man is cursed with a burden no animal has to bear: he is conscious that his own end is inevitable, that his stomach will die. [As soon as you have symbols you have artificial self-transcendence via culture.] Everything cultural is fabricated and given meaning by the mind, a meaning that was not given by physical nature . . . . [but] the terror of death still rumbles underneath the cultural repression. What men have done is to shift the fear of death onto the higher level of cultural perpetuity . . . . men must now hold for dear life onto the self-transcending meanings of the society in which they live . . . a new kind of instability and anxiety are created.

In seeking to avoid evil [(death)], man is responsible for bringing more evil into the world than organisms could ever do merely by exercising their digestive tracts. It is man's ingenuity, rather than his animal nature, that has given his fellow creatures such a bitter earthly fate."
~Ernest Becker

There's method in his madness, I reckon. If you're going to package eternal truths for future generations, you have to clothe them in some sort of mythology.

Several of my cow-orkers worship this dude that has sort of a humanoid body, but with the head of an elephant. I do not attempt to sell them on the pathetic remnants of the Roman Empire, despite the fact that its mythology is splendidly popular in western Europe and the USA.

There's method in his madness, I reckon. If you're going to package eternal truths for future generations, you have to clothe them in some sort of mythology.

Believe me, my "batshit insane" remark was made with affection and I think the dude has an amazing mind. Moreover, I didn't say that he was any MORE insane than any other religious person I know, or indeed that I myself am sane by any reasonable standard which might be established among sentient beings of the galaxy, if beings were to exist and be devoted to standards-setting.

But he seems to believe in the magical stuff, per the quotes pulled by chimp above. Mind you, if he's just playing to the rubes, I'll go along with it, but it's always good to do a basic "craziness check" on anyone you decide to follow.

I used to run into David Brower from time to time and we saw eye to eye. Is there just one archdruid at once, or what? Both Brower and Greer impressed me deeply; maybe I just have a weakness for druids.

The light switch is just a teeny bit more accessible to the uninitiated than Hermetic magic. I actually spent less time getting an MSEE than the proverbial magician who spent so many years learning the (obfuscated) spell for getting chicks that he couldn't actually enjoy the results.

And my light bulbs and telephones work for everyone, not just the initiate. IMNSHO that alone means I win.

Some of the 30,000 EU officials (so well paid it would leave Madoff first wondering, then speechless) posted a statistic saying that the database computers and search engines connected to the internet already consume 15% of the world's total electrical energy consumption. This statistic does not include private PCs that access the internet via a service provider, nor the service providers' own computers.
Somewhat strange, as information is not _matter_ and thus could not consume any energy at all...

Fifteen per cent seems rather high. One estimate I’ve seen puts total server consumption, related hardware and associated cooling demands at 0.8 per cent of world electricity demand (source: http://arstechnica.com/old/content/2007/02/8854.ars).

Hopefully, more energy efficient hardware (there have been dramatic improvements in the energy performance of processors, hard drives and power supplies in recent years), virtualization and other software enhancements, reduced cooling of data centres (http://www.datacenterknowledge.com/archives/2008/10/14/google-raise-your...), more efficient cooling strategies (http://hpac.com/mag/trends_datacenter_cooling/) and perhaps free air cooling (http://wikis.sun.com/display/freeaircooling/Free+Air+Cooling+Proof+of+Co...) will lower power demands going forward.

Edit: Here's how far we've come in terms of data storage. I'm not all that old, but I remember the 300 MB hard drive for our DEC VAX mini-computer was the size of a bar fridge and threw off enough heat to melt a cheese sandwich. Today, a Western Digital 2 Terabyte hard drive with the capacity to hold nearly 7,000 times more data uses roughly as much electricity as a 7-watt night light (7.4 watts in read/write mode, 4 watts at idle and less than 1 watt in standby mode).

Cheers,
Paul
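A quick back-of-envelope check of the capacity figures quoted above (a sketch only; the 2 TB and 300 MB numbers come from the comment, decimal units assumed):

```python
# Back-of-envelope check of the storage comparison in the comment above.
old_capacity_mb = 300            # DEC VAX-era drive, ~300 MB
new_capacity_mb = 2_000_000      # 2 TB drive, expressed in MB (decimal units)

ratio = new_capacity_mb / old_capacity_mb
print(f"Capacity ratio: roughly {ratio:,.0f}x")   # ~6,700x, i.e. "nearly 7,000 times"
```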

Could be the EU officials took the IBM 370 as a standard....

Anyway, the power consumption of the totality of computers hooked to the internet could be in the range of 15%.

I am not into discrete mathematics; however, it could be interesting to know what the same amount of information, exchanged on paper or film and sent by snail mail, would consume.

I have not checked it, but another EU statistic says LCD monitors consume more power than CRT monitors.

Could be the EU officials took the IBM 370 as a standard....

Anyway, the power consumption of the totality of computers hooked to the internet could be in the range of 15%.

If the estimates varied by one or two per cent, we could easily split the difference, but the gap between fifteen per cent and 0.8 per cent is enormous. Do you have a link to this EU estimate?

I have not checked it, but another EU statistic says LCD monitors consume more power than CRT monitors.

I don't think that's the case. My last CRT monitor is a 15-inch SONY Trinitron that consumes an average of 75 watts when in use. I had initially replaced this monitor with a 17-inch NEC LCD 1700V that comes in at just under 30 watts and, more recently, a larger 24-inch wide screen that is closer to 40 watts. These numbers apply to active use -- when the screen goes dark after ten minutes of inactivity (idle, but not full sleep), these LCD monitors drop to just a few watts each, whereas the CRT is still drawing close to its nominal rating.

Cheers,
Paul
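For a rough sense of what that difference adds up to over a year of office use, here is a small sketch; the wattages come from the comment above, while the hours of use are assumptions chosen purely for illustration:

```python
# Rough annual-energy comparison of the CRT and LCD figures quoted above.
# Hours of use are assumptions for illustration only.
hours_active_per_day = 8
hours_idle_per_day = 4
days_per_year = 250

def annual_kwh(active_w, idle_w):
    """Return approximate annual energy use in kWh for one monitor."""
    wh = (active_w * hours_active_per_day + idle_w * hours_idle_per_day) * days_per_year
    return wh / 1000.0

crt_kwh = annual_kwh(active_w=75, idle_w=70)   # CRT: still near nominal draw when idle
lcd_kwh = annual_kwh(active_w=30, idle_w=3)    # 17-inch LCD: drops to a few watts when idle

print(f"CRT: ~{crt_kwh:.0f} kWh/yr, LCD: ~{lcd_kwh:.0f} kWh/yr")   # ~220 vs ~63 kWh/yr
```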

Ack! Was that a tongue-in-cheek comment about information???
Information _is_ always embodied in matter and energy, and manipulating, transmitting, and storing information _does_ take energy, precisely because it is physical.

As a side bar to this conversation, the images in this video juxtaposed with the lyrics of this song nicely capture the turbulent transition from a time when most of us genuinely believed science and technology could resolve all problems, to a period of growing doubt and anxiety that, in turn, forced us to reassess our core values and beliefs.

http://www.youtube.com/watch?v=NSehtaY6k1U

Cheers,
Paul

+1

Hi Paul,

The range of people who have done that song since Gloria Jones says almost as much as the slide show (the trucked Polaris/kids-under-desks juxtaposition certainly worked for me--I knew it well). Jones to Coil, Manson to the Pussycat Dolls, and many in between: that is a cultural range. Sometimes I just love this tainted world.

Bob

I'm not clear on what is meant by "progress in IT".

As far as I am aware, there have not been any significant software problems solved in IT in decades. By which I mean basic problems. The equivalent of finding the Higgs Boson, or String Theory making a testable prediction.

AI does not really exist. It takes a Really Big Computer (Deep Blue?) to beat Kasparov at chess, but it's just looking up past games and stupidly looking ahead a certain number of moves to decide what to do next. I think they're nowhere with the Go-playing computer. Roger Penrose, in "The Emperor's New Mind", used Gödel's Incompleteness Theorem to argue that a von Neumann machine cannot achieve intelligent behaviour. Now that is a result, if true.
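For what it's worth, the "looking ahead a certain number of moves" part is plain fixed-depth minimax search. A bare-bones sketch, using a toy counting game invented purely for illustration (no real chess program is this simple):

```python
# Bare-bones fixed-depth minimax -- the brute-force look-ahead referred to above.
def minimax(state, depth, maximizing):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)              # static score of the position
    children = (minimax(apply_move(state, m), depth - 1, not maximizing)
                for m in legal)
    return max(children) if maximizing else min(children)

# Toy game: each move adds 1 or 2 to a counter; the maximizer wants it high.
def moves(state):
    return [1, 2] if state < 10 else []

def apply_move(state, move):
    return state + move

def evaluate(state):
    return state

print(minimax(0, depth=4, maximizing=True))   # looks 4 plies ahead -> 6
```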

Face recognition is lousy. As is speech recognition.

Language translation software is lousy.

The world record for the Most Super Duper Supercomputer Ever continues to be broken. At least here there are measurable benefits. The latest behemoth, which IBM is to build, will run at 200 Petaflops. And I assume it will be used to solve FEA models. Maybe it could be taught to play chess. A pity it's to be used for DoE nuclear research. Protein-folding calculations or running climate simulations might be a better use of the resources.

Quantum computers, when they are commercialised, will be very good at some problems, but not all.

I suppose if you naively look just at the number of transistors, you still see signs of progress. But I guess the real question is: what is all of this extra horsepower being used for? Mostly to support bloatware: fancy graphical effects of one sort or another, sound effects, and other nonsense.

Consider all of the infrastructure required to upgrade the internet to speeds that would support video-on-demand, for example. Very few here would argue that this is an essential service...

Oldish desktops still work fine for running something like Linux, but my only beef with desktop machines is that they tend to consume too much electrical power.

Nonsense - we discovered Indians. I also discovered that the lot I manage go home at 1pm GMT on a Friday.

there have not been any significant software problems solved in IT in decades

Actually, we have not quite yet had even 2 "decades" of distributed computing Internet.

1995 (14 years ago): Netscape and the worldwide web (WWW) burst onto the scene
???? Google search unleashed
???? Blogging begins
???? 2005? TOD begins

I suspect Heinlein, if still around, would have converted by now to being a peak-oil'er.

The Moon is a Harsh Mistress, as I recall, presented Mike the self-aware computer as an accident of wiring which was not understood and couldn't be replicated, and the overall theme of the book was strongly Malthusian. The earth was in extreme human overshoot and famines, and the revolution was to prevent the moon-dwellers from descending into cannibalism and dieoff from unsustainable drains on their finite resources. (Highly recommended read if - astonishingly - there's anyone on this list who hasn't.) TANSTAAFL.

The belief that the universe is organized in just such a way that mankind will find just what it needs in the nick of time, every time, is a religious delusion which makes us healthier by quashing cognitive dissonance. Heinlein writes entertaining fiction about hypothetical worlds in which we were a bit luckier in that regard.

Still, others figure that the universe is so jam-packed with potential energy breakthroughs that we must perforce find them, like monkeys writing Shakespeare, since there are so many scientists extant, and they are quite clever. Surely we'll compute Pi to the last decimal place any year now. Problem is, the fact of there being a lot of specialists just shows how far we have overreached in return on investment in complexity. A lot of scientists went hungry when the USSR dissolved, and that was much milder than what seems to be coming for the world as a whole.

The fact that computer performance and storage have kept getting faster and cheaper for a while is seized upon by those seeking a "loophole" in the actual laws of the universe. As noted by others here, there are absolute physical limits to computing with silicon chips (or any others made of physical elements), such as single-atom and lightspeed limits. There are other limits just as real for quantum computing. The oceans were once thought bottomless, but it turned out they were just really deep. Of course, there's always someone who'll point out that it might be possible to compute with neutrinos or superstrings, but if true that has little to do with human possibilities, particularly in the salient 20-50 year period we're looking at.
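One concrete way to see the lightspeed limit mentioned above is simple arithmetic: at a typical ~3 GHz clock (an illustrative figure), light itself covers only about 10 cm per cycle, which bounds how far any signal can possibly travel per tick.

```python
# How far light travels in one clock cycle -- a hard upper bound on how far
# a signal can get per tick. The clock speed is an illustrative choice.
speed_of_light_m_per_s = 3.0e8
clock_hz = 3.0e9                      # a typical ~3 GHz clock

metres_per_cycle = speed_of_light_m_per_s / clock_hz
print(f"light travels ~{metres_per_cycle * 100:.0f} cm per clock cycle")   # ~10 cm
```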

Ray Kurzweil and his ilk typify the "ad absurdum" extreme, believing that they will be immortal after being uploaded into robot bodies in the near future. I'll go out on a limb and note that he is quite a bit more likely to be eaten by suburban cannibals before being uploaded. (On the other hand, I'm not sure he could pass a Turing test even now, so perhaps a home PC could do a sufficient job of emulating him if programmed with sufficient hyperbole.)

Silicon computing is exotic-materials technology with a bit of quantum mechanics thrown in. We now have disposable graphing calculators and have thrown away our slide rules; but the infrastructure necessary to build computer chips as it's now done will likely not be around for long; it is scale-dependent in order to be economical. Slide rules will come back.

Indeed, one wonders whether the internet and its attendant hardware would have exhibited such a growth mandate if it wasn't for human idiosyncrasies which make it the most effective way to distribute pornography. The lunar landers had less computing ability by far than today's economy car; and today's economy car thereby buys a bit of efficiency at the price of greatly lessened resilience. (if your car's computer breaks, good luck fixing that in your garage).

There may be quite deep potential technologies we haven't scratched the surface of which could be sustainable. Artificial spider silk and other such things could come from the ability to manipulate carbon chemistry with more sophistication. But our exponential breed-out has foreclosed many options which might have otherwise been available to a sane species with a million years to work. Scifi plots notwithstanding, it's hard to do good science when the zombies/aliens/wolves are at the door.

ymmv

The lunar landers had less computing ability by far than today's economy car; and today's economy car thereby buys a bit of efficiency at the price of greatly lessened resilience. (if your car's computer breaks, good luck fixing that in your garage).

About 10-15 years ago the age of the "mechanical" automotive diesel engine came to an end. You could run one of these "mechanical" diesel engines without a battery, except perhaps to start it. The fuel shutoff valve was a solenoid; you could remove its plunger and the fuel pump would operate without electricity. Such engines could be used as prime movers for a multitude of applications such as a generator, water pump or wood saw.

There is no possibility of using modern diesels for any purpose other than the vehicle to which they are fitted: immobilisers, electronically coded ECUs, etc. If society takes a downturn, turning junk into useful machinery will be impossible.

For all: a little humor to end the weekend. The earlier comment about artificial intelligence reminded me of a poster I saw just last Friday:

Artificial intelligence cannot overcome natural stupidity

Bucky Fuller was pretty much a peaker from the 1930s onward. And a lot of sci-fi writers had crowding and overpopulation as background themes in their writing. Of course, from the '60s at least, there was always this hope that we'd be living in a nuclear-powered nirvana by now.

"We'll replace it with an electronic one, a small one should suffice. Who'll know the difference?" -- H2G2

"Eventually, it may lead to levels of "artificial intelligence" (AI) equal or superior to human intelligence. At some point, AI could reach a point where it is able to improve itself and that would take it to superhuman, even God-like, levels."

So all this time I've been prepping for a Mad Max world, and now you're saying I got it wrong and need to make ready for terminators?

Well, Grautr, I didn't go into details on this point, but it is true that today computers are extensively used for killing people - not exactly in the form of terminators, but in the form we call "smart weapons". No need of AI for that; computers may be dumb, but they are smart enough to succeed at this task. So I think that military robotics is one of those fields where IT WILL make a big difference; again, not all changes are for the better.

Nit: Kurzweil is correct, not Kurtzveil (typo in body).

0) Opinion:
I have a copy of his book.
One of the most common technology forecasting errors starts with an exponential growth curve (a straight line on a log-scale chart, of which I've drawn many hundreds) and forgets that many exponential curves are really the early stages of an S-curve....
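A minimal sketch of that forecasting error, with arbitrary illustrative parameters: an exponential and a logistic (S-curve) that share the same early growth rate look identical at first and then diverge completely.

```python
import math

# Illustrative comparison of exponential growth vs. a logistic (S-curve)
# with the same early growth rate; all parameters are arbitrary.
growth_rate = 0.5          # per year
carrying_capacity = 100.0  # where the S-curve saturates
x0 = 1.0                   # starting value

for year in range(0, 21, 4):
    exponential = x0 * math.exp(growth_rate * year)
    logistic = carrying_capacity / (
        1 + ((carrying_capacity - x0) / x0) * math.exp(-growth_rate * year))
    print(f"year {year:2d}: exponential={exponential:10.1f}  logistic={logistic:6.1f}")
# Early on the two curves track each other; by year 20 the exponential is in the
# tens of thousands while the logistic has flattened out near 100.
```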

1) Moore's Law has always been about the number of transistors per chip, but for decades it also yielded increases in uniprocessor performance, and some people got confused.

Once upon a time, the clock rates were driven by the switching speeds of transistors, and as you shrank them, you not only got more on a chip, but they switched faster, and Life Was Good. In addition, during the 1980s and 1990s, we were able to condense a CPU+floating point+cache memories from multiple chips onto one, which reduced off-chip delays.

But these days, much of the delay is in the *wires*, not the transistors, and wires don't shrink as fast as transistors, which is one of (several) reasons you see everyone doing multi-core designs, rather than trumpeting every advance in GHz.

If for some reason you *really* want to know, see ITRS and its Yearly Roadmap Update, of which pages 15-18 especially contain some useful charts.

CMOS is certainly getting harder to keep improving, but it will go a while yet. After that, we'll certainly need something better, and it's not clear what that is, or if anything is practical.

Just remember: "Moore's Law" is a description of technology + economics, not some magic guarantee of exponential growth forever.

2) IT and the future
I occasionally give talks to computing folks, saying:

What can computing do?
Make computers more energy-efficient [system designs, data centers, etc]
Use computers to design more efficient products [computers, vehicles]
Use computers to optimize transport of things [truck routing, etc]
Use computers to avoid transport of people [teleconferencing, telecommuting]
Use wireless sensor networks and other embedded sensors for energy optimization
(Examples: Dust Networks, Streetline Networks)
(Vernor's dust-mote computer clouds were inspired by the founder of Dust Networks).

Anything that saves energy, or substitutes for fossil fuels, is worth considering.

Computer systems designers, chip designers, data center designers, and relevant others are quite well aware that energy costs matter, and are working hard to improve efficiency. Our local utility (PG&E) has run commercials showing computers disappearing from data centers via *virtualization*.

The second category is mostly mechanical and electrical engineering, sometimes structural.

The third is typically operations research and optimization.

The fourth is typically IT and software.

The fifth is relatively new, but coming along fast, as an extension to traditional embedded computing. Very cheap intelligence embedded into things can often save substantial energy. Some of these categories combine, for example into smarter vehicles able to tailgate more safely, as Geoff Wardle talked about at ASPO Sacramento. Building energy controls fit this, as do sensors for managing industrial processes better.

People think about desktops and other user-access computers, but those tend to be constrained by the number of humans, and by the fact that we haven't upgraded from Release 1.0 humans. A few years ago, as best as I could figure, there were ~100 CPUs per person in the US by the time you counted the watches, thermostats, automobiles, DVD players, TV sets, microwave ovens, bread machines, each person's share of the server farms, routers, etc., etc. Put another way: if you think about the user-access devices, that's only a tiny fraction of the computing.

BUT, a lot of Kurzweil's ideas and Vernor Vinge's stories just seem very unlikely in an energy-constrained world.

I got Vernor to give a keynote at Hot Chips in 2007, in which he briefly described some of the ways in which the Singularity *might* not happen. We spent a few hours together after that, and I raised the energy issues with him, although I don't know if they'll get into any stories. Vernor is far better than most SF writers in taking current research and extrapolating. I am keenly awaiting his sequel to "A Deepness in the Sky" (which had the smart dust).

Thanks for the clarification.

And here I thought you were talking about Kurt Weill!

http://www.threepennyopera.org/

The technological singularity is a farce. All this 'technology' has served to do is accelerate our descent to the bottom.

What was the result of all this exceptional computing power when used by investment banks and hedge funds? The virtual bankruptcy of the entire OECD! The biggest users of supercomputing are banks, hedgies, aeronautics, climate scientists (whom no one in power really listens to) and the military (look where all that computing power got the US military in Iraq).

Common sense dictates that we will have to power down.

We have passed peak credit, which means that demand for computer chips and better products will decline sharply and R&D will decline tremendously as well. In the next decade Intel, AMD and IBM will be footnotes in corporate bankruptcy history.

All this 'technology' has served to do is accelerate our descent to the bottom.

Interesting that we are using the "internets" to communicate with one another and to start discussing on a global level, various global problems like AGW, Peak Oil, Limits to Growth, etc.

Could we have done this as effectively without the hated "technology"?

Interesting that the phrase "high technology" has dropped out of the lexicon.
We are not so high and mighty any more.

When it comes to oil exploration, I believe we have benefited from exponential (or power-law) increases due to technology advancement. Initially, the search process was methodical, involving lots of human labor. The first acceleration was caused by huge influxes of prospectors, who also brought in new exploration ideas. Eventually the industry went to seismic and then on to supercomputer simulations and visualization techniques. This provided a "virtual" search that was able to cut through huge swaths of the earth's crust. Each technology improvement, and the trained people involved, improved the search speed by perhaps an order of magnitude.

My entire derivation of the Dispersive Discovery model, and its special reduction to the Logistic function, is primarily a result of this exponential assumption. Interestingly, if the accelerating search suddenly went away, discoveries would plummet much more quickly because of diminishing returns, resulting in what I think Memmel refers to as the "shark-fin" profile. There will be a long tail in that case, as discoveries still occur, but back at the reduced pace of past years.

Yes. As it happens, I used to help design supercomputers and help sell them to (among many customers) oil companies in the 1990s (I think about $500M worth). There was indeed a big jump in computational techniques for both seismic and reservoir modeling and in visualization systems. This came about from parallelism both in computing and in graphics systems. Those were fun days!

====

1)Many different groups use (or have used) supercomputers, although the definition thereof is often fuzzy (purpose-built? huge clusters of PCs? bunches of nVidia GPUs?) If people actually want to know about supercomputers in science and engineering, I recommend a beautiful book you can get for a few $$, a little old (1993), but a good introduction:

William J. Kaufmann III, Larry L. Smarr, "Supercomputing and the Transformation of Science".

However, the big boost in supercomputing came afterwards, as we replaced expensive things like Cray vector systems with scalable microprocessor-based parallel machines. For some apps, distributed clusters work well.

If people think computers have done nothing useful, they should stay away from modern cars, airplanes, bridges, buildings, and many drugs.

In the US, the new Secretary of Energy, Steven Chu, certainly listens to climate scientists, and so does Pres. Obama's science advisor.

2) If people actually want to learn about technology trends and computer architecture, the standard book is Hennessy & Patterson, Computer Architecture - A Quantitative Approach, Fourth Edition. If you read my review there, you'll see I recommend it for non-architects. Even reading Chapter 1 is useful. Good libraries would have it.

Note that "exponential growth" colloquially means "fast growth", but professionals in this turf would cringe at that. If X doubles in 2 years, that's X = 1.4 ^ years; if it doubles in 4 years, that's X = 1.2 ^ years. The latter is slower growth, but still exponential. Sooner or later, most technologies turn into S-curves, with a linear tail, not exponential. Serious people study these things, since $B's are at stake. Once upon a time (around the First Edition of Hennessy & Patterson), DRAM memory $/bit was dropping faster than magnetic disk $/bit, and they had an exercise to figure out when the crossover would come and people would just stop building disks. But the disk folks shifted technologies, there was an inflection in their line, and they pulled back away ... and the exercise disappeared.
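To make the arithmetic behind those factors explicit (a small sketch, nothing more): a doubling time of T years corresponds to an annual growth factor of 2^(1/T), which is where the 1.4 and 1.2 come from.

```python
# Converting a doubling time into an annual growth factor: factor = 2 ** (1 / T).
for doubling_time_years in (2, 4):
    annual_factor = 2 ** (1 / doubling_time_years)
    print(f"doubles every {doubling_time_years} years -> X grows by "
          f"{annual_factor:.2f}x per year")
# doubles every 2 years -> ~1.41x per year (the "1.4 ^ years" above)
# doubles every 4 years -> ~1.19x per year (the "1.2 ^ years" above)
```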

3) Computers are certainly no magic panacea for the energy crunch or other problems, but they can help in various ways, because they sometimes provide lower-energy services, as I mentioned earlier. I'd observe that in some of Bob Ayres' and Benjamin Warr's work, they found a boost in GDP and GDP/energy that seems to have come from computing over the last 30 years. (By the way, they have a good book coming out in a few months - get your library to get one.)

4) Jevons' Paradox is way over-applied, especially to the (increasingly forthcoming) case in which energy gets really precious, or where demand is (at least partially) inelastic. For example, if you get a car that's twice as efficient, you may drive it more, but you don't commute twice as far to work just for fun. Likewise, if a farmer buys a new tractor that's twice as efficient, they don't drive it twice as much, just for fun. Most farmers would just bank the fuel savings, if any. If the price of fuel doubles, you may care about efficiency more, and if it quadruples, you'll care much more.
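A tiny worked example of why the elasticity matters (the 20% rebound figure is a made-up assumption for illustration): unless driving rebounds by 100% or more, a doubling of efficiency still cuts total fuel use.

```python
# Illustrative rebound-effect arithmetic; the 20% rebound figure is an assumption.
baseline_km = 15_000          # km driven per year before the efficiency gain
baseline_l_per_100km = 8.0    # fuel use before
efficiency_gain = 2.0         # car becomes twice as efficient
rebound = 0.20                # assume driving increases by 20% because it's cheaper

new_km = baseline_km * (1 + rebound)
new_l_per_100km = baseline_l_per_100km / efficiency_gain

fuel_before = baseline_km / 100 * baseline_l_per_100km
fuel_after = new_km / 100 * new_l_per_100km
print(f"fuel before: {fuel_before:.0f} L/yr, after: {fuel_after:.0f} L/yr")
# before: 1200 L/yr, after: 720 L/yr -- still less fuel, despite the rebound
```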

But that's not the point of Jevons' paradox (if I understand it correctly). Making something twice as efficient means that twice as many *people* can exploit it. So your commute may be the same, but there are more people on the road making that same commute.

Jon.

Do you really mean that?

Your assertion is equivalent to: 2X more efficient, so there will be 2X more cars on the road?

1) Suppose cars were 4X more efficient. Would there be 4X more on the road?

2) Suppose they were 100X more efficient. Would there be 100X more?

3) Suppose gas were free, would there be (approximately) an infinite number on the road?

First, fuel price is only part of the cost of running a car. Capital cost, insurance, maintenance, tolls, parking all count.

Second, there are only so many people.

Third, in many places, there is really no more practical space for roads. It is rather unclear how one greatly increases the traffic capacity of New York City or Mumbai. Even here on the San Francisco Peninsula, even though there is more space and lower density, there just is *not* going to be a significant increase in big highways, although there may be modest capacity expansions.

Again, you have to look at elasticity curves. In any case, over the next few decades, it won't be "If vehicles are 2X more efficient, 2X more people will drive" implying that the same amount of fuel gets used, it is:

"If there is only half the fuel, how much driving will there be?" This is especially true if we don't go electric fast enough to avoid rationing, given that {fire trucks, school busses, police cars, locomotives, agricultural machinery, etc} just might get higher priorities than private-car sightseeing.

You are right. Eventually we will slam into a wall. When energy gets scarce and prices rise, people will once again use less. However, my ‘assertion’ is not a mathematical equation. Jevons' paradox is not the same as Moore's 'Law.' To my knowledge, nobody hypes it as the sixth Euclidean postulate.

Jevons' paradox, as I understand it, merely refutes the idea that efficiency saves energy. It does not. Efficiency allows us to use more energy efficiently. For instance, cheap cars and cheap gas have meant that more high school students are driving to school and more buses are running near empty. After my daughter got her license, ‘so she could drive to work,’ she never haunted the inside of a bus again. Expensive gas would have kept those future Mensa members happily texting each other on the bus to school. Efficiency led to more consumption, not less.

That’s the point of Jevons' paradox. It describes a trend, not an equation.

Jon.

A provocative (which is good) essay.

A few comments. Computing (IT) is perhaps the only area where gigantic strides have been made in the last 50 years. And much of that has been not in fundamentals (algorithmics), but in Moore's law, and Moore's law alone, which is in turn due not so much to any deep fundamental discoveries as to a series of refinements to photo-lithography. More speed, more capacity -- that's it.

AI was crap, is crap, and always will be crap. The part that isn't, isn't AI, but just an algorithm that does something well. Much of my working life was as a programmer. There were two books I was lucky to read early: What Computers Can't Do by Dreyfus and A Discipline of Programming by Dijkstra. The first taught me to only program simple things, the second to program them correctly. Insects perform "computations" we can't get near.

There are plenty of legitimate uses for computers. But almost all are doing things quickly that we know how to do slowly by hand. There are things that can't be done by hand simply because they would take too much time. But there is no chance that AI is going to figure out how to solve our problems. Mathematicians long ago (in the 1930s) proved that there are very well defined problems that can't be solved by computer.

(For that matter, many of the great advances in 20th century science were results that limited our horizons. Even relativity, which promised us endless energy, denied us, in theory, the ability to travel to distant stars. Quantum mechanics destroyed complete determinism at its roots. Prior to that, thermodynamics had disabused us of perpetual-motion machines.)

We KNOW what our problems are: diminishing below-ground resources, ever greater despoliation of the surface ecology, overpopulation, and global capitalism. We do not yet have the political will to take the obvious steps to address these issues. The solutions are not technological in the deepest sense, although a re-orientation of science toward understanding the surface ecology is required.

The problem is that the actual solutions are in conflict with global profit maximization. The problem is that the technological "solutions" hold out the prospect of profit for one sector or another.

I don't think it's necessary to be so agnostic about the future. Yes, we can be sure that it will be interesting. But I think that it is also reasonable to be fairly darn confident that no technological fix will save our bacon. It is going to take political will, global and local, to overcome the forces that prevent us from taking the necessary actions. Compute that!

Hello Davebygolly,

Your Quote: "It is going to take political will, global and local, to overcome the forces that prevent us from taking the necessary actions. Compute that!"

Well said. We have got tremendous hurdles ahead, and very little time.

Yeast have never solved the dilemma of the confines of the petri dish, wine bottle, etc, before they died.

If hi-tech is to save us, it will have to instantly turn our urine back into tasty beer, and/or instantly convert our crap back into a Baby Ruth candy bar, with No Impact upon anything else on this Little Blue Marble. The same concept extrapolates to everything else that a human seeks to own/consume. The Laws of Thermodynamics say that it cannot be done.

Nature does not respond to 'information' [no matter how much we create and use]; she responds to the countless flowrates moving in the Circle of Life.

Bob Shaw in Phx,Az Are Humans Smarter than Yeast?

"Insects perform "computations" we can't get near."

I took my kids to an adventure park a few years ago and they had an animal section. There was a glass enclosure populated with leaf-cutting ants. Watching them boggles the mind: they are making decisions and modifying what they do depending on the size of the cutting and the obstacles encountered. The number of ants carrying the "load" varies, as does the number waiting to receive the "load" across a discontinuity. I don't know how big their brain is or how much power it consumes (picowatts, I would think); the only thing I am sure of is we have a very long way to go before we get close to nature's achievements, not just from a computational point of view, but materials as well.

As regards progress, it's hard to define. Is living longer progress, or is it just taking up valuable resources from future generations? It's an emotive subject, and here in the UK at the moment we are more concerned about use of abusive language, name calling and subsequent apologies than about real issues. There is clearly no known solution, because opinions vary so much on what the solution might be. There is not even universal agreement on global warming, though there may be a convergence towards acceptance. There are things we can do, but even nuclear power is not universally accepted as sustainable. I just keep reading The Oil Drum and hope!

A common garden variety dog can run across a room, avoiding (or nipping at) people’s ankles, dive under obstacles, snap at the house cat and neatly catch a thrown ball in mid air, all while monitoring smells that we can’t even imagine, listening for odd sounds, rebalancing himself on loose rugs and responding to additional objects in his field of vision. Oh, and he probably tasted something somewhere along the line, too. Meanwhile, his brain is constantly monitoring his internal organs, insulin levels, heart rate, O2 concentration, stomach acid-the list goes on.

That’s just rover.

Jon.

If we ever were to build that superhuman AI system and asked it to solve our unsolvable energy problem, it would respond: "ALERT! Energy demand exceeding energy supplies; unsustainable; now powering down lowest priority energy consuming devices to balance demand with supplies."

Well, DUH!

Ugo,
"However, over the years, we seem to have been gradually losing the faith in technology that was common in the 1950s."
If you are going to use science fiction literature to judge faith in technology, what about Jules Verne's "Twenty Thousand Leagues Under the Sea" and many of H.G. Wells' novels, written well before 1950, and all of the SF stories where nuclear war ends civilization (Nevil Shute's "On the Beach") or where technology has nasty outcomes (Aldous Huxley's "Brave New World")? I remember the 1950s as a time of fear that the US and USSR would destroy the world in a nuclear exchange; now we fear that the US and China will destroy the world by burning coal. Both fears are based in reality; neither has to happen.

" We are increasingly worried about resource depletion and global warming."

The world has been worried about resource depletion ever since man wiped out the large mammals during or just after the last ice age. In 18th-century Europe, it was the lack of wood for ships and charcoal and the shortage of water power; in the 19th century, the lack of nitrate fertilizers and whale oil; in the 20th century, shortages of copper, nickel, uranium and food. These shortages were real and were solved by either resource substitution or new technologies: heap leaching for copper and uranium, Haber ammonia synthesis, hydroelectricity and oil replacing whale oil, plant breeding improving crop yields.

"Both factors could make it impossible to keep the industrial society functioning and could lead to its collapse"

That is true if we do not develop and implement solutions. Are there solutions to peak oil? Yes: replace this energy resource with electricity generated by nuclear, wind, hydro and solar. Are there solutions to greenhouse gases? Yes: replace coal-generated electricity with nuclear or renewables. It doesn't mean that all problems can be solved by technology, but technology can contribute (nuclear weapons test bans, birth control, replacing FF). We may have a future world with very little copper or air travel, or reduced meat, or have to be satisfied driving cars that only have a 100 km range, but next to nuclear-war obliteration ALL other future scenarios, including a 50-meter sea level rise, look pretty good.

As far as technological progress since 1947 (my birth date): nuclear power, space travel to the edge of the solar system, instant worldwide mobile communications, wide availability of computers, lasers, non-stick cookware, organ replacements, DVDs..... Sounds like we are still on that exponential growth, possibly only doubling in one lifetime, not one decade.

Neil, finally a bit of optimism in a comment! I was very curious to see the reactions to my post and I know, at this point, that the average attitude of the TOD readers is far away from that of the "Extropians" or the "transhumanists". I wonder what a convinced extropian would think of my post, but so far I have got no feedback from them. Maybe in a while.

About your comments: I think you can't ignore the change in attitudes from the 1950s to the present. I can't quantify this, but I think it is reasonable to say that the 1950s were the peak of optimism. Of course, there were optimistic views much before - Wells may not be a good example, but Verne, yes. And there is all the science fiction of the 1930s; sure. But it is also true that we are much more disillusioned by now. I was born in 1952, I am not much younger than you, but I do see the change in my own mindset. In the 1960s and 1970s I was an optimist - all the way to thinking that I would live in a space colony. But now, well, ouch.... I live in a country that has created Berlusconi. Let me not say anything about that; but it is one of those things that can destroy anybody's optimism about human progress. But we don't have to be gloomy. There are ways out, and I do drive an electric car that has a 100 km range and I am perfectly happy with it.

Gonna have to chime in here re. AI

1) We have no actual working hypothesis as to how intelligence actually works, or even any kind of idea as to what intelligence really is. Given this, there is little chance of actually recreating it artificially (at least on purpose - more on this later).

2) All intelligence we can observe appears to be an emergent property of super-complex systems embodied in the real world. This has been noted by some individuals in the game (see Rodney Brooks' "COG" project at MIT). Unfortunately for AI, the rich associations biological intelligent systems (us) have with the environment are a result of our biological heritage (the need to eat, drink, breathe, reproduce, etc.) and involve complex sensory and motor interactions with said environment and the evolution of these systems through time. Artificial systems are merely told about the properties of, say, water - it has no inherent meaning to them. The relationship is purely semantic. This does not convey the richness of association observed in intelligent systems. Furthermore, the attempts to replicate similar systems (COG, again) leave much to be desired.

3) We have a lot of problems recognizing intelligence in other mammals, never mind species more divergent from us - and even now that we recognize creatures such as dolphins as having intelligence, we have difficulty relating to them: in fact they seem to be better at learning to communicate with us than we are with them.

4) Gödel:

Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.

This theorem is arguably expandable to cover general systems (though there are arguments against this). I think Stephen Hawking recently used this to argue against "grand unified theories of everything". The basic idea is that, as mind/intelligence is instantiated in a being and that being is in turn instantiated in the universe, we can neither comprehend our own intelligence nor the true workings of the universe. It is difficult to recreate what you cannot understand...

AI currently uses techniques of massive search, heuristics to pare the search tree, and various types of parallelism (supposedly modeled on the brain - but generally by computer scientists with little or no understanding of biological brain function; not that the brain is well understood even by those who dedicate their lives to its study) to produce results that appear intelligent. The systems thus developed do not model a biological intelligence. Furthermore, the systems devised in AI research tend to operate in "toy worlds" (simplified environments), because reality is just too complex. A major problem with this approach is scalability. Try to develop in a toy world, then move the system to the real world, and it will usually fail at the first opportunity. There is no graceful degradation as seen in the biological world. This is why even "simple" tasks like speech recognition are so difficult to conquer - "train" a system to recognize your vocal commands, then catch a cold or let somebody with a different accent use it and it tends to fail.

I have yet to see anything that I consider AI; I have, however, seen many very clever programming/design tricks that mimic aspects of intelligence until one investigates the processes used.

This leaves emergent intelligence of hyper-complex systems, an approach exemplified by Hugo de Garis. My issue with this is akin to the problem of understanding dolphins, but a few (many) orders of magnitude greater. An artificial system has as its environment electrical signals, clock rates, data throughput, etc.: about as alien an environment to our own as we can imagine. The chance of us relating to such an intelligence, given the issue with relating to dolphins, would seem to be slim to non-existent.

There is a lot more detail, but this gives something of an overview... sorry if it is a little disjointed, but it is late and I am tired and suffering from the flu...

A couple of references for those interested:

What Computers Still Can't Do: A Critique of Artificial Reason, Hubert L. Dreyfus
Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter (actually, anything by this guy is worth a look, but it's deep)

I should add... I used to play the above game and used a longer, more fully-fledged version of this argument to burn my bridges when I realized the pointlessness of it all ;)

You hit a nerve here, buddy; this is what I actually do for a living.

First and foremost, most complex systems fall into the computational trap.

What this means is that once complexity reaches a critical level, the system can create a classical Turing machine; few complex systems can escape this. Think of the halting problem on steroids.

The problem is that it's actually trivial to reach Turing-level complexity (I'm working on proving this); it's almost impossible to cross beyond a Turing machine.

Most people think that constructing a computer is hard; the problem is actually not constructing a computer but constructing one that runs the programs you want it to. Basically, any system that has a few interacting bodies (qubits) can compute.

Anyway, I'm surprised, to say the least, to see a post by someone who even thinks about this. How do you break the Turing constraints?

The trick is pure white noise.

Email me at mike.emmel@gmail.com if you're serious about this.

DNA actually uses defects (noise) to escape the Turing trap. It's really a simulated annealing issue: the only way out of a local minimum is noise. What's really fucked up is that, via the Heisenberg uncertainty principle, the "real" universe actually has the noise to escape a local minimum.

You may not know this, but no Turing machine can escape a local minimum: once it calculates the answer, it will loop forever, recalculating the same minimum it cannot escape. Thus the halting problem is really just a statement that no Turing machine can realize it has gotten trapped in a local minimum; there is no calculation that can be performed to escape this.
Noise (white noise) is by definition not computable, and thus it escapes the Turing trap.
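A minimal sketch of that idea in conventional terms (a toy one-dimensional cost function; this is ordinary simulated annealing, not a claim about Turing machines): a greedy, deterministic descent parks in the nearest local minimum, while injected random noise lets the search climb out and find the deeper one.

```python
import math
import random

# Toy illustration: noise (simulated annealing) escaping a local minimum that a
# greedy, deterministic descent gets stuck in. The cost function and every
# parameter here are arbitrary choices for the sketch.
def cost(x):
    return x**4 - 3 * x**2 + x     # shallow minimum near x ~ 1.1, deeper one near x ~ -1.3

def greedy_descent(x, step=0.01, iters=5000):
    for _ in range(iters):
        best = min((x - step, x, x + step), key=cost)
        if best == x:
            break                   # no downhill neighbour: stuck in a local minimum
        x = best
    return x

def annealed_descent(x, step=0.5, iters=5000, temp=2.0, cooling=0.999):
    best = x
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        delta = cost(candidate) - cost(x)
        # Accept worse moves with probability exp(-delta/temp): the injected noise.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if cost(x) < cost(best):
            best = x
        temp *= cooling
    return best

random.seed(1)
print("greedy descent ends near   x =", round(greedy_descent(2.0), 2))   # stuck near 1.1
print("annealed descent ends near x =", round(annealed_descent(2.0), 2)) # typically near -1.3
```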

Anyway, like I said, email me; it's surprising to see posters who seem to be thinking about what I'm thinking about.

Good to see at least one person who thinks like I do (did...) - I gave up on the whole thing and went into commercial programming as a specialist IT contractor, but finally came to the realization that I was spending way too much time in virtual space and way too little in realspace. I now avoid programming (I know: what a waste of nearly 30 years spent developing the skills!) and spend much more time enjoying myself in real life by interacting with my physical environment. It has been a long time since I really thought about this stuff, and I am not interested in retracing my old steps...

oh - and my nom de qwerty comes from the subsequent damage to my intellect from having too much fun, plus it is the only true religion for people like me.... F'k 'em if they can't take a joke...

Gödel's theorem says the truths of arithmetic cannot be listed one by one by a Turing machine; thus Hilbert's program had to fail. So if "computers" means formally deciding machines, then the question is not what computers still can't do but what computers cannot decide. Never. Turing's halting problem is an even harder limit on such computers than Gödel's theorem.

Another point of significance is memory, or the definition of information. The mathematical dimensionality of an AI memory must be an irrational number (if not a transcendental number). As Pythagoras already noticed, irrational numbers cannot be "caught" (completely covered) by a series of rationals getting ever closer to the irrational from either side.
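
For readers who want the halting-problem limit spelled out, here is the standard diagonal argument as a short Python sketch (the function names are mine, purely for illustration; no working halts() can actually be written, which is the whole point):

# Suppose, for contradiction, that someone could hand us a perfect,
# always-terminating decider: halts(program, arg) returns True if
# program(arg) would eventually stop, and False if it would run forever.
def halts(program, arg):
    raise NotImplementedError("no such total decider exists")

def contrary(program):
    # Do the opposite of whatever the decider predicts about running
    # 'program' on itself.
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    else:
        return "halted"      # predicted to loop, so halt at once

# Now ask about contrary(contrary): if halts says True, contrary loops
# forever; if it says False, contrary halts immediately. Either answer
# is wrong, so no Turing machine can implement halts for all inputs.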

In the future, a robot (Jane) stops cleaning to tell the head of household (Joe) that he and his football-watching buddies need to stop drinking any more beer, because the budget for beer only allows for the consumption of x number of beers by the conclusion of today's date. Joe tries to push Jane out of the way, but due to a worldwide policy agreement to conserve resources, Jane effectively restrains Joe and the rest of his drinking buddies from accessing any more beer.

The reality is, mankind only takes advice from IT when he wants to, so no matter how amazing IT may become, more beers than are budgeted for will be consumed if man so decides, right along with more births, trees cut, oil burned, etc., until the carrying capacity of the planet is surpassed without regard to the consequences.

Wasn't the long-term carrying capacity of Earth exceeded in '76?

1776?

Al

This discussion reminds me of a scene from "Idiocracy", where they comment that scientists were unable to combat the slide into idiocy because they were spending all of their efforts on figuring out how to help people grow hair and have erections.

That movie was a brilliant concept, but poorly executed. They should do a remake.

My feeling is that we may be able to create Artificial Intelligence but, if we do manage this, we will be able to neither control it nor predict what it will do.

We already have created artificial intelligence; control and prediction are interesting subjects, though I believe you are most concerned with applications that create other AI applications or would control critical infrastructure elements. Stock trading AI programs have been around for some time, as have other applications. There are different techniques, including neural networks, genetic algorithms, and fuzzy logic (and hybrids).

These are not "intelligent", they are clever programming tricks.

Neural networks collapse if over-trained. Neural networks, GAs and fuzzy logic are all very interesting in their own right, but the behaviours exhibited do not qualify as intelligence. The term intelligence denotes reasoning and adaptability to new circumstances. Artificial systems have very limited applications (stock trading? those programs have a great track record of failing to predict swings that fall outside the usual pattern...) and require careful management to be even vaguely successful in their application.

Biological intelligence (the only model we have) seems to involve pattern matching at a very abstract level (the use of analogies in human communication is a good place to investigate). Programmed systems that exhibit similar behaviour do so only because of careful programming - they find it impossible to generalize.

GAs are very powerful, in some instances evolving results that are totally unexpected. I once watched the evolution of a robot controller that took a simple robot with 2 drive wheels, a couple of collision sensors and the simplest 1 pixel eyes (could only detect overall levels of brightness) and aimed to find the centre of a room. The evolved solution was found (after a pretty complex analysis due to the convoluted code evolved) to only utilize the motor controllers and a single "eye". The system would spin and arc to the centre of the room. It turned out the system was just tracking the variation in light intensity hitting the single sensor and finding the point in its physical environment where it was most even. It was a shock to all involved - nobody foresaw this solution. It was, however, a "toy world" problem and hence did not scale in any useful way to a more complex environment.

There is also a problem with GAs (the "Royal Road" problem) where a GA will find a local minimum and get stuck there, never finding the overall solution. As memmel said earlier, the application of noise can overcome this, but it is still not "intelligence", rather it is a programming trick.
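
To illustrate that last point without any claim about the actual Royal Road functions, here is a toy Python sketch on a deliberately "deceptive" 4-bit fitness function (my own made-up example): a deterministic single-bit hill climber can never leave the all-zeros local optimum, while per-bit mutation noise occasionally flips the whole block at once and so, given enough generations, usually finds the all-ones optimum.

import random

# A 4-bit "trap" block (illustrative only): all-ones scores best, but
# every single-bit step away from all-zeros scores worse, so the local
# gradient points the wrong way.
def fitness(bits):
    ones = sum(bits)
    return 5 if ones == len(bits) else 3 - ones

def greedy_hill_climb(bits):
    # Deterministic single-bit flips, accepted only if strictly better.
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            neighbour = bits[:]
            neighbour[i] ^= 1
            if fitness(neighbour) > fitness(bits):
                bits, improved = neighbour, True
    return bits

def noisy_search(bits, rate=0.25, generations=10000):
    # (1+1)-style search: flip each bit independently with some
    # probability; keep the mutant if it is at least as good.
    for _ in range(generations):
        mutant = [b ^ (random.random() < rate) for b in bits]
        if fitness(mutant) >= fitness(bits):
            bits = mutant
    return bits

random.seed(0)
start = [0, 0, 0, 0]                  # the deceptive local optimum
print(greedy_hill_climb(start[:]))    # stays at [0, 0, 0, 0]
print(noisy_search(start[:]))         # usually ends at [1, 1, 1, 1]

The escape is not clever at all: the noise simply makes the rare simultaneous flip of all four bits possible, which the deterministic rule forbids. That is exactly why it feels more like a programming trick than intelligence.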

My intuition (gut feeling - the gut is essentially a "second brain" in terms of the density of nerves, and who can prove that nerves are all that is involved in the production of intelligence anyway???) is that AI seeks to create systems that contain only certain aspects of intelligence, whereas biological intelligence is an emergent property of the whole system. Data or Spock from Star Trek are unlikely to ever arise, because the systems that give rise to emotions, wants and needs are all precursors to biological intelligence (again: the only model we have to go on...)

These are not "intelligent", they are clever programming tricks.

If you want to have a debate on this point, then hash it out with these people as a starting point.

Neural networks collapse if over-trained.

The term "collapse" does not represent the effect; it means that the network has been trained exactly to respond to only one type of input, losing the ability to abstract generalized pattern matching.

Biological networks can also have issues if over-trained.
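
To show what "over-trained" looks like in a few lines, here is a hedged sketch that swaps a neural network for simple polynomial fitting (the data, split and degrees are all invented for illustration; the over-fitting effect is the same in kind): as the model gets more flexible, error on the training points keeps dropping while error on held-out points eventually gets worse, which is why one stops training early or regularizes.

import numpy as np

rng = np.random.default_rng(0)

# Invented noisy data: y = sin(x) plus noise, split into a small
# training set and a held-out validation set.
x = rng.uniform(-3, 3, 40)
y = np.sin(x) + rng.normal(0, 0.3, x.size)
x_train, y_train = x[:25], y[:25]
x_val, y_val = x[25:], y[25:]

def errors(degree):
    # Fit a polynomial of the given degree to the training points only,
    # then measure mean squared error on both sets.
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda xs, ys: float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return mse(x_train, y_train), mse(x_val, y_val)

for degree in (1, 3, 8, 13):
    train_err, val_err = errors(degree)
    print(f"degree {degree:2d}: train {train_err:.3f}   validation {val_err:.3f}")

# Typically the training error only falls as flexibility grows, while the
# validation error bottoms out and then climbs again: the analogue of a
# network trained so exactly to its inputs that it stops generalizing.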

the application of noise can overcome this

Since you are being picky: it's not 'noise', it's an alteration of the GA parameters or algorithm structure.

However, we can choose not to quibble over basic definitions...

If we ever DO create artificial intelligence at least in its most general definition used in the more thoughtful posts above, its first action will likely be to exterminate humanity from the earth as a pest. In my blacker moods I've sometimes theorized that mankind is simply one more step in nature's progressive development of higher species, and the next step is not likely to be organic.

mankind is simply one more step in nature's progressive development of higher species

This is simply more proof that most people do not understand "evolution".

Sorry. There is no "Intelligent Designer".
There is no "progressive development to higher species" in Nature.

It's all random experiments, some of which happen to randomly work, most of which don't.
Mankind is a freakish outcome of one of those random experiments.

They say two heads are better than one. Is that always true?

It is never true, as a herd of morons cannot surpass just one "intelligent" man. (Otherwise politics as we know it would be impossible.)
This, however, holds only for mankind; in the domain of insects or even bacteria we have so-called swarm intelligence.
Of course, swarm intelligence cannot be modeled or simulated with "neural networks" nor with x internetworked microprocessors.

Well said, Stepback. I couldn't agree more!

Nearly a century and a half has passed since the first oil company offered its shares on the open market. In that time, huge strides have been made in energy production and refinement, yet those advances have been offset by the increasingly numerous, ubiquitous uses of oil and petroleum products in our daily lives. The pricing problems that beset the industry remain the same as they were in Rockefeller's day: the industry has never learned to tame the pricing cycle. The same boom/bust pattern persists. The industry, despite the creation of OPEC, remains a price taker rather than a price maker. By allowing the pricing mechanism to fluctuate as wildly as it does, oil is treated as an ordinary commodity rather than as a depleting resource. Therein lies the tragedy, since it distorts geological realities. Relying on markets to price energy has failed to alert us that our fuel tank gauges are broken. Unfortunately, it is going to take another energy shock to remind us that the geological clock is running out of time.

found this on financialsense.com
http://www.financialsense.com/stormwatch/2009/0128.html

this may have been reported before. Sorry if it's a repeat.

the rebound in prices will be frightful!

As for AI, one has to establish a sound theory of intellectual products. It is easy to prove that intellectual products cannot be produced by Turing machines themselves.

As for "peak oil" this is another expression of the narrative " listen I am conservative and I run out of ideas, too"

To Engineer_Poet

Sorry, I could not reply to your last post; the edit facility was closed down before I could. I'm sure we will have another debate before long! The answer to your question is not very often (but not unknown) for 500, but 360 quite often. My current career is very vulnerable to an oil crisis. When electric cars become the norm I will have to opt for a company vehicle, rather than drive around in cheap diesel bangers.

Perhaps we should consider peak benefit to society. At what point does the benefit of buying ever more powerful PCs diminish to the extent that they begin to divert useful cash that could be better spent elsewhere?

In my opinion DOS was tedious but useful, and Windows 95/98 was good enough for most practical applications. In fact, we had most of what we need (if not everything) before computers in any case.

The law of diminishing returns and that of self-perpetuation come to mind. Technology has resulted in non-repairable items and a throwaway society.

It seems to me that IT is becoming a relatively mature technology. I don't see huge leaps forward, as recent advances seem to have had diminishing marginal utility.

My money for the next big science/technology advances is on genetics/biotech. This seems to be an area that has not reached a high level of maturity, and the marginal utilities are still good (antidepressants and antacids notwithstanding). Whether the political climate allows these advances to take place is a tough problem.

Our methods of generating, transmitting and storing electricity also seem rather primitive, and are probably prime candidates for technological improvement. Again, I see the major problem here as being one of political will.