NextgenHeating

Sunday, February 20, 2011

10:10 and Microsoft

My 10:10 article earlier today got me thinking about another topic.  Last year I noticed that Microsoft had signed up for 10:10.  I admit to having had a little intellectual snigger at this as Microsoft must preside over one of the biggest carbon footprints on the planet in the form of its Windows operating system.
  
It's easy to knock Windows and I am certainly not the first person to do it.  I also understand that Windows is one of the biggest, if not the biggest, software projects ever undertaken.  As such, that it works at all is a minor miracle; after all, many such projects have simply collapsed under their own weight.  Neither am I a Linux bigot; I still use Windows for the simple reason that it is the closest thing there is to a standard in the PC space.  No, the problem with Windows is not that Microsoft is incompetent, far from it.  The problem is that it's simply too old.


Windows was written when computer hardware was much less capable than it is today.  For example, it had to make use of virtual memory to support all of its features.  Now the SDRAM memory in my PC, at 8 Gbytes, is 20x as big as the hard disk of the first computer I ran Windows on, but I still cannot turn off the virtual memory feature.  As a result it sits there "strumming" the hard disk even when I do nothing.  Worse by far is the automatic update system.  Think of all the servers, the network infrastructure and the sheer number of PCs involved in keeping the OS alive today.  And all, it seems, for nought.  My Windows 7 installation, though less than 6 months old, is fatally flawed and requires (according to my web research) a rebuild, but I don't have the time or indeed the heart to do it.  This is all a waste of energy, both figuratively and literally.


The Intel turn-around over the NetBurst (Pentium 4) architecture is now very well documented.  Pentium 4 CPUs, in their push for more and more MHz, were so wasteful of power that they were starting to push the thermal dissipation problem to its limits.  At the time I would only consider AMD CPUs, as I valued MIPS/watt over raw MIPS.  I applaud Intel's courage in making the turn-around to achieve a fourfold reduction in power consumption while achieving a similar increase in CPU power.  Indeed their 2011 "Sandy Bridge" architecture, with its dedicated hardware acceleration, is yet another step in the right direction.


So could Intel's success be replicated by Microsoft?  Could they make a Windows-compatible operating system from the ground up?  In theory, yes, though many of us still remember Longhorn!  However, all incumbents have a disadvantage, and Microsoft shows no signs of having the will to fix Windows.  Only now, for instance, are they talking of porting Windows to run on the ARM.  I suspect this is merely a reaction to the disruptive force of the iPad and its derivatives.

All this reminds me of a recent PC Pro article in which Hermann Hauser says he "believes that Intel's days are numbered" and that ARM will inevitably kill Intel.  I have a huge respect for Dr Hauser.  I have had the privilege to work with him three times, the first of which, coincidentally, was at Acorn at the time the ARM CPU was designed.  One of Hermann's favourite stories was of a visit from Bill Gates, who was then trying to peddle MS-DOS.  Hermann sent him away saying he had superior technology, which, of course, he did.  However, you have to ask yourself: why is Bill Gates now a billionaire when Hermann is only a millionaire?  Indeed Acorn had local area networking in 1984, long before Ethernet was widely adopted, yet it failed to capitalise on its technical superiority.  Hermann, then, is a clever man but not a perfect industry barometer.

As per my previous article today, I do agree with Dr Hauser that we are entering a new wave of devices with "good enough" technology.  That these devices are battery powered is great news for our global carbon footprint, as any OS that runs on them must have power consumption high on its priority list.  As a new medium there is no need to be compatible with legacy programs, so there is no need for Microsoft's OS.  To my mind that makes porting Windows to tablets both futile and irrelevant.  I think the jury is out as to whether ARM will beat Intel.  It is, however, becoming increasingly clear that one way or another Microsoft's carbon footprint is about to get a lot smaller.

The 10:10 challenge?

10:10 is a movement started, it appears, by the team that made "The Age of Stupid".  It seemed like a great way of getting like-minded people together to create momentum for change, and I joined immediately.  One year on, I admit to being a little disillusioned by their lacklustre success, driven mostly, it seems, by their inability to make concrete recommendations as to how their members might lower their carbon footprints.  10% would seem to be a relatively easy goal, but it does require more than just changing perfectly good light bulbs for low-energy ones.

I was recently accused by one of the VCs I'm talking to of proposing what might be a "boil the ocean" story in respect of NextGen heating.  He clarified this by saying "it needs widespread shifts in people's behaviour and major capital outlays to see real adoption".  I hope I've shown through this blog that by simple changes - moving closer to where I work and cycling there, sensible use of traffic control, insulation and so on - I have saved much more than 10% of my carbon footprint and, apart from being a little fitter and a little more comfortable, did not make any particular sacrifices.  It did, however, take conscious effort and conscious action (and some failures) to achieve it.

One strong hint at a solution was neatly alluded to by Peter Hinssen, one of the Vlerick Management School professors and an IT futurist, in a keynote called "Digital, the New Normal".  There is a slide about 25% of the way through showing a survey into "the necessities of life", where 49% of respondents mentioned cell phones, and electronic gadgets, most of which did not exist 15 years ago, populated half of the top 10.  Surely if electronic "toys" can achieve this penetration in half a generation then there is hope for green-tech too?  His ideas on "good enough" technology really resonated with me: why people use Skype over land lines and MP3 over DVD-Audio, and why Blu-ray has not achieved mass-market penetration.

I don't believe we should be content with scratching the surface of green-tech, but I don't believe we need to boil oceans to get where we need to be either.  We just need to find good enough technologies that we can deploy and enjoy.

Monday, February 14, 2011

Latency Versus Bandwidth – what Jeremy Clarkson and David Mackay need to understand

I was given a Jeremy Clarkson book, "Driven to Distraction", for Christmas.  In the opening article he casts doubt over the wisdom of the variable speed limit system around the M25, calling it "a new state control system to quash individualism on the motorway".  Much as I hate to criticise a fellow Doncasterian, I can only conclude that he doesn't understand the maths.

Back in 1987 I worked for a consultancy.  One of my co-workers was a software engineer writing fluid-modelling programs.  He told me that up to a certain velocity the flow in a water pipe is laminar and the water travels with very little resistance.  Above a critical velocity the flow becomes turbulent and the water molecules bump into each other, such that the flow is no longer a linear function of the pressure exerted.  My co-worker also told me that the flow of traffic on a motorway can be modelled in a very similar way to water in a pipe.  So much for individualism!

Let us apply the latency-versus-bandwidth analysis I used in the last blog to this new mathematical problem: how to get around the M25 as quickly as possible.  The latency is what Mr Clarkson perceives; it's the journey time.  To measure the bandwidth of a motorway you need to draw a virtual line across the road and count how many cars per second, per minute or per hour cross that line.  For instance, if we all obeyed the 2-second rule there would be 3/2 cars per second (3 lanes, one car every 2 seconds per lane), which is 90 cars per minute, 5,400 cars per hour, or around 130,000 cars a day.  Actually that's not a lot considering how many cars use the M25 every day, but then they don't all use the same bit of motorway.  Of course many of us don't obey the 2-second rule, so the bandwidth is actually higher than that.  The point is that the bandwidth is most definitely finite.  What's more, in a traffic jam the bandwidth of the motorway decreases dramatically!
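For anyone who wants to play with these numbers, here is a minimal sketch of that back-of-the-envelope sum in Python.  The three lanes and the 2-second headway are simply the assumptions stated above, not measured M25 figures:

    # Motorway bandwidth from the 2-second rule: bandwidth = lanes / headway.
    LANES = 3
    HEADWAY_S = 2.0                              # one car per lane every 2 seconds

    cars_per_second = LANES / HEADWAY_S          # 1.5
    cars_per_hour = cars_per_second * 3600       # 5,400
    cars_per_day = cars_per_hour * 24            # ~130,000

    print(f"{cars_per_second:.1f} cars/s, {cars_per_hour:.0f} cars/h, "
          f"{cars_per_day:,.0f} cars/day")

Note that the car's speed does not appear anywhere in that sum, which is the whole point of the paragraph that follows.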

So how do variable speed limits help?  Well, it appears that the M25 traffic planners understand the difference between latency and bandwidth, and a bit about fluid dynamics.  From my analysis above, the peak bandwidth of the motorway is largely independent of traffic speed, and by reducing the speed of the traffic they are more likely to achieve a non-turbulent flow which keeps the traffic moving.  In other words, the traffic planners are optimising bandwidth, not latency.  However, the more vehicles they can get past a given point, the more they reduce the average latency too.  Left to their own devices, individual motorists might indeed achieve a lower latency, but because they reduce the bandwidth of the motorway the average latency actually increases.

From a fuel consumption point of view, traffic jams are a worst-case scenario.  It's simple mathematics that for a journey of X miles consuming Y gallons, the average MPG is X/Y.  Of course the fashion is to talk about litres per 100 km, which is just the other way around.  However, standing still with the engine running is time at 0 MPG, or infinite l/100km.  In other words, time spent stationary, or even in a gear below top, is fuel wasted.  Coming back to the laminar-versus-turbulent flow analogy, if every vehicle travelled at 56 mph a laminar flow would be achieved and the overall fuel consumption would be minimised.
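To make the point concrete, here is a toy calculation of how a spell of idling drags down a journey's average consumption.  Every figure in it (the 50 mpg steady-state consumption, the 30-mile journey, the 15 minutes stationary and the 0.8 l/h idle burn rate) is an assumption picked for illustration, not a measurement:

    # How time spent at "0 MPG" (idling in a jam) affects the journey average.
    MILES_PER_GALLON_MOVING = 50        # assumed steady consumption while moving
    JOURNEY_MILES = 30
    IDLE_MINUTES = 15
    IDLE_LITRES_PER_HOUR = 0.8          # assumed idle burn rate
    LITRES_PER_GALLON = 4.546

    moving_litres = (JOURNEY_MILES / MILES_PER_GALLON_MOVING) * LITRES_PER_GALLON
    idle_litres = IDLE_LITRES_PER_HOUR * IDLE_MINUTES / 60
    total_litres = moving_litres + idle_litres

    avg_mpg = JOURNEY_MILES / (total_litres / LITRES_PER_GALLON)
    litres_per_100km = total_litres / (JOURNEY_MILES * 1.609) * 100

    print(f"average {avg_mpg:.1f} mpg, {litres_per_100km:.2f} l/100km "
          f"(vs {MILES_PER_GALLON_MOVING} mpg if never stationary)")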

However, I for one would not like to drive everywhere at 56 mph.  There is also a carbon cost to me simply being on the planet, so time wasted is also carbon spent.  The good news is that, provided a laminar flow can be maintained, the actual carbon cost of the higher speed is secondary.  That is, if everyone went exactly 70 mph that would still be a lower-entropy state than some people doing 85 mph and others doing 50 mph.

This brings me neatly on to the second part of my article – road trains.  One way to increase the bandwidth of a motorway is simply to make more lanes.  This is both very expensive and very environmentally unfriendly.  The second, and probably more fruitful, way is to reduce the time distance between cars.  Imagine if there were a way to make a road train such that each car travels safely just 30 cm behind the one in front, giving a nose-to-nose spacing of, say, 5 m.  For simplicity of the mathematics, let's say they are travelling at 120 km/h (75 mph), which is 2 km/minute or 33.3 m/second.  The bandwidth of this motorway would be 20 cars/second, or about 13x the current safe bandwidth.  Actually, you don't have to imagine road trains; they are a practical reality right now.
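A quick sketch of that arithmetic, using the 5 m spacing and 120 km/h speed assumed above, and comparing it with the 2-second-rule figure from earlier:

    # Road-train bandwidth: cars per second = lanes * speed / spacing.
    LANES = 3
    SPEED_M_PER_S = 120 * 1000 / 3600      # 120 km/h = 33.3 m/s
    SPACING_M = 5.0                        # assumed nose-to-nose spacing in the train

    cars_per_second = LANES * SPEED_M_PER_S / SPACING_M   # ~20
    two_second_rule = LANES / 2.0                          # 1.5 cars/s, from before

    print(f"{cars_per_second:.0f} cars/s, about "
          f"{cars_per_second / two_second_rule:.0f}x the 2-second-rule bandwidth")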

In David Mackay's otherwise excellent book "Sustainable Energy - Without the Hot Air" he mentions several promising low-carbon technologies, but road trains are not among them.  That Dr Mackay is anti-car is quite evident.  However, public transport, as alluded to in my earlier blogs, is not a panacea, and I doubt people will be willing to give up on personal transport any time soon.  Road trains represent a pragmatic solution that can reduce car journey times and keep traffic flowing, thus reducing the carbon footprint.

Further, by getting the cars so close together, one of the major contributors to fuel consumption, wind resistance, is dramatically reduced.  Anyone who watches motor racing understands slipstreaming.  Dr Mackay uses this precise argument to explain why trains are more efficient than cars.

There will always be some people who want to go faster than others.  So let me put out a straw man – keep the motorways at 3 lanes, make the inside lane 60 mph, the middle lane 70 mph and the outside lane 80 mph, and make road trains of 10-20 cars followed by a gap.  To change lane, the driver indicates his or her intention and the on-board computers negotiate to find a gap and slot the car in.  As the relative speeds between lanes are small this can be done quite easily (a rough sum at the end of this post illustrates the point).  Indeed, with all that bandwidth there will be gaps in the trains; after all, each lane has more bandwidth than the whole of the current motorway.  Now Mr Clarkson and Dr Mackay, at opposite ends of the carbon-political spectrum, are both happy :o).
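As promised, here is that rough sum on lane changing.  It is purely illustrative: the 100 m distance to the nearest suitable gap is an arbitrary assumption of mine, not part of the straw man above.

    # With only a 10 mph difference between adjacent lanes, how long before a
    # car draws level with a gap some distance ahead in the next lane?
    MPH_TO_M_PER_S = 0.44704
    relative_speed = 10 * MPH_TO_M_PER_S       # ~4.5 m/s between adjacent lanes
    gap_distance_m = 100                       # assumed distance to the nearest gap

    time_to_reach_gap_s = gap_distance_m / relative_speed
    print(f"~{time_to_reach_gap_s:.0f} s to draw level with a gap 100 m away")

Tens of seconds is plenty of time for on-board computers to agree a merge, which is why the small speed steps between lanes matter.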

Wednesday, February 2, 2011

Bandwidth versus Latency, a common misconception

As an electronics engineer I came to understand the difference between bandwidth and latency.  Bandwidth is the number of objects that can travel across a medium per unit time, e.g. megabytes per second from a hard disk (HDD).  Latency is the amount of time each object takes to get to its destination.  Taking again the example of an HDD, this is usually governed by the seek time.  For those that don't know, the seek time is the time the hard disk takes to physically move the head across the platter of the magnetic disk to find the right track, plus the time for the right part of the disk to spin under the head.  For a 7200 RPM hard disk the data comes around every 60/7200 seconds, or 8.3 ms.
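That 8.3 ms is one full revolution of the platter; a couple of lines make the sum, and the average wait of half a revolution, explicit:

    # Rotational latency of a spinning disk.
    RPM = 7200
    full_revolution_s = 60 / RPM          # 8.3 ms per revolution
    print(f"one revolution every {full_revolution_s * 1000:.1f} ms "
          f"(average rotational wait ~{full_revolution_s * 1000 / 2:.1f} ms)")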

Consider a computer program trying to access a single byte of data from a disk.  Imagine we have a modern computer with a processor running at 3 GHz.  This means (as an order of magnitude) it can process 3,000,000,000 instructions per second.  It issues a command to request the byte from the hard disk.  The processor in the hard disk electronics calculates where to move the disk head to, moves it, and waits for the data to come under the head.  12 ms later the byte is handed over to the processor.  In this time the processor could have executed 36 million instructions, but instead did nothing because it was waiting for the hard disk.  If programmers were foolish enough to write programs that randomly accessed many small pieces of data, you can imagine that such a program would run extremely slowly no matter how fast the processor was.
 
Once the data is under the disk read head it streams off the disk extremely quickly, at circa 100 Mbytes per second.  Imagine the computer program wanted to read 1 Mbyte instead of 1 byte.  We still wait 12 ms for the first bit of data, but then we get the whole 1 Mbyte in only a further 10 ms.  The throughput of the program increases hugely, to about 45 Mbytes per second.  We see that the thing that we care about, i.e. the work done, the throughput, is a function of both bandwidth and latency.
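Here is a small sketch of that worked example, using the 12 ms latency, ~100 Mbyte/s streaming rate and 3 GHz clock quoted above; it reproduces both the ~36 million wasted instructions and the ~45 Mbyte/s effective throughput:

    # Effective throughput of a disk read = bytes / (latency + bytes/bandwidth).
    SEEK_S = 0.012               # 12 ms seek + rotational wait, as in the text
    STREAM_BYTES_PER_S = 100e6   # ~100 Mbyte/s once data is under the head
    CPU_HZ = 3e9                 # 3 GHz processor, as in the text

    def effective_throughput(read_bytes):
        """Bytes per second seen by the program, including the initial latency."""
        total_time = SEEK_S + read_bytes / STREAM_BYTES_PER_S
        return read_bytes / total_time

    wasted_instructions = SEEK_S * CPU_HZ                  # ~36 million per seek
    print(f"instructions idled away per seek: {wasted_instructions:,.0f}")
    print(f"1 byte read:  {effective_throughput(1):.0f} bytes/s")
    print(f"1 Mbyte read: {effective_throughput(1e6) / 1e6:.0f} Mbyte/s")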

All very interesting, I hear you say, but why publish all this on an eco-musing blog?  Well, there is another parameter that comes into play, and that is the effort expended per amount of work done.  The CPU in the above example did not simply do nothing for 12 ms; it sat there "spinning its wheels", generating heat.  The rest of the computer is also alive: the hard disk spins its platters and moves its heads, the display lights its backlight, and so on.  Even in the second example the computer spends half of its time doing nothing but warming the planet.  This is why over the last year I have been steadily replacing the main hard disks of my computers with solid state drives (SSDs), aka flash disks.
 
SSDs have no moving parts, which makes their seek times around 0.1 ms: two orders of magnitude quicker than traditional HDDs.  As they have no motors or moving parts they also use less power.  The only drawback is that they are expensive, or are they?  The machine I am typing on has a 120 Gbyte SSD and a 1 Tbyte HDD.  The OS (Windows 7) and my programs are on the SSD and all my data is on the HDD.  It turns out that Windows, being nearly 20 years old, was designed when computer memories were small, and so is made up of lots of small files, 69,534 of them in my case, which is why it takes forever to boot.  The boot time of this machine before I installed the SSD used to be about 2.5 minutes; now it is about 35 seconds.  Given I use it daily, that soon adds up to a lot of time: around 700 minutes per year.  As a consultant I charge of the order of €1 per minute for my time, so that's roughly a €2,000 saving over the SSD's three-year lifespan versus an initial outlay of €200.
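The payback sum, spelled out.  The boot times, the €1/minute rate, the €200 outlay and the one-boot-per-day usage all come from the paragraph above; treat it as a sketch rather than a rigorous business case:

    # Value of boot time saved by the SSD, and how quickly it pays for itself.
    BOOT_HDD_S = 150          # ~2.5 minutes before the SSD
    BOOT_SSD_S = 35           # ~35 seconds after the SSD
    BOOTS_PER_YEAR = 365      # one boot per day
    RATE_EUR_PER_MIN = 1.0
    SSD_COST_EUR = 200
    LIFETIME_YEARS = 3

    saved_min_per_year = (BOOT_HDD_S - BOOT_SSD_S) * BOOTS_PER_YEAR / 60  # ~700
    saving_eur = saved_min_per_year * LIFETIME_YEARS * RATE_EUR_PER_MIN   # ~2,100
    payback_months = SSD_COST_EUR / (saved_min_per_year * RATE_EUR_PER_MIN / 12)

    print(f"~{saved_min_per_year:.0f} min/year saved, ~€{saving_eur:.0f} over "
          f"{LIFETIME_YEARS} years, payback in ~{payback_months:.1f} months")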

It is not only my time during the latency of boot-up that is saved.  Tom's Hardware found that computers with SSDs were on average 15% faster than their HDD counterparts.  As I work with computers about 9 hours a day, that's a potential saving of 1.35 hours daily.  Of course the bottleneck is not always the computer - after all, I can only type at a certain speed - but let's say it saves 2% of my time and 5% of the electricity used by the computer overall.  Over 3 years that equates to ~150 kWh of electricity, or €22 worth, plus another €1,200 in terms of my time.  Another way to look at it is that the SSD pays for itself in about 3 months and saves carbon too!
 
Of course, to do a full analysis of the carbon footprint I would have to calculate the embodied carbon in the manufacture of the SSD plus the HDD versus just the HDD.  That might not look so good.  But then what is the carbon cost of my time - heating, lighting, food, clothing and so on?  That is, perhaps, going a little too far for this humble little blog.

Bandwidth and latency are interesting parameters to analyse.  They can also be applied to roads, which allow objects, i.e. you and me, to travel from A to B.  But that is for another blog ...