The first three-phase hydroelectric power plant in North America was built in 1893 in Redlands, California. It was a landmark innovation that allowed power to be transmitted over lines for more than a hundred miles, vastly increasing the reach of electricity throughout the region. However, with innovation comes uncharted territory and a lack of standards. The young world of electricity was divided on the frequency that should be used for alternating current: some thought that 50Hz (cycles per second) should be the standard, whereas others favored 60Hz. This one region of California decided to adopt 50Hz (a European standard at the time), even as the rest of North America rapidly moved towards 60Hz.
So, what happened next? A lot and a little. The area around Redlands added two more generators and progressively expanded its reach over the next 50 years, while the rest of North America adopted the 60Hz standard. As regional electrical grids expanded and interconnected, the two standards could not work together seamlessly, creating conversion points that were costly and inefficient.
The grid's frequency was also relied on for other purposes, which exacerbated the issue. Synchronous electric clocks in homes and businesses kept time by counting the 50Hz cycles. A 50Hz clock plugged into a 60Hz grid runs fast, gaining 12 minutes every hour; conversely, a 60Hz clock on a 50Hz grid loses 10 minutes every hour.
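A synchronous clock's apparent speed scales with the ratio of the supply frequency to the frequency it was designed for, so the drift is simple arithmetic. A minimal sketch (the function name is ours, purely illustrative):

```python
def drift_minutes_per_hour(design_hz: float, grid_hz: float) -> float:
    """Minutes gained (positive) or lost (negative) per real hour by a
    synchronous clock designed for design_hz when run on a grid_hz supply."""
    return 60.0 * (grid_hz / design_hz - 1.0)

# A 50 Hz clock on a 60 Hz grid runs fast: it gains 12 minutes per hour.
print(round(drift_minutes_per_hour(50, 60), 2))   # 12.0
# A 60 Hz clock on a 50 Hz grid runs slow: it loses 10 minutes per hour.
print(round(drift_minutes_per_hour(60, 50), 2))   # -10.0
```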
The turning point came after the end of the Second World War. In 1948, after much deliberation, the local power company conceded that 50Hz would not be scalable and that it would have to spend millions of dollars replacing its generators to join the rest of North America on 60Hz. This solved the problems of scalability and redundancy by joining the surrounding electrical grids without frequency conversion; however, it created a new problem. All the machinery, appliances and clocks that had become so prevalent over that 50-year period were now useless. This angered thousands of people across the region, which had expanded to cover all of Los Angeles. In response, the power company set up shops where people could bring their clocks in to be modified for 60Hz at no charge, and also sent technicians into homes and businesses to do the conversions on site. It was a massive undertaking and, at the time, the most expensive conversion of its sort in history.
From this story, we will look at different ways to address similar issues in infrastructure design. But first, a primer on cognitive bias.
A cognitive bias is a mistake in reasoning, evaluating, remembering, or other cognitive process, often occurring as a result of holding onto one’s preferences and beliefs regardless of contrary information.
The sunk cost fallacy is a type of cognitive bias in which people weigh what has already been invested more heavily than the opportunities still ahead, continuing a losing course of action rather than acknowledging the loss. So if, for instance, a large investment has been made that should be scrapped because of other factors, this bias makes it difficult for somebody to make that decision.
Here is an example:
A large organization deploys hundreds of switches across its enterprise that make use of new technology that benefits users, enhances network performance and offers several forward-looking features. Shortly afterward, a new standard is created that is not interoperable, and the current switches cannot be upgraded to it.
The choices that the organization has are:
1) Get rid of all current switches and adopt the new standard (possibly at a discounted price from the vendor to upgrade from the end-of-sale hardware)
– When using a unified standard that is well recognized, supported, interoperable and scalable, infrastructure design and operations are simplified and can be easily validated
– After the initial expense, there should be no more major capital expenditures for that part of the infrastructure for a while. Upgrades will be incremental
– It may be possible to sell the existing equipment to a refurbisher or privately to recoup some of the costs.
– Someone will have to take ownership of the initial decision for the purchase of the non-standard equipment. Depending on the corporate culture, this may be seen as a failure of judgement or experience, which could be a career limiting move.
– A very strong justification needs to be presented to the organization leadership to get buy-in for accepting a major capital expense that should have already been taken care of.
The opportunity cost needs to be analyzed to understand whether purchasing new switches makes sense financially and strategically over a period of time. It may, despite the appearance of paying twice for the same functional solution.
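One rough way to frame that analysis is to compare the net cost of replacing now against the ongoing cost of living with the non-standard gear for the rest of its life. All figures below are hypothetical stand-ins, not vendor pricing:

```python
# Hypothetical figures for illustration only; real inputs would come from
# vendor quotes and operational data.
replacement_capex = 400_000      # new-standard switches, with trade-in discount
resale_recovery = 60_000         # selling the old switches to a refurbisher
interop_cost_per_year = 80_000   # yearly cost of workarounds and conversion points
remaining_life_years = 6         # life left in the existing hardware

replace_now = replacement_capex - resale_recovery
keep_running = interop_cost_per_year * remaining_life_years

print(f"Replace now:  ${replace_now:,}")   # $340,000
print(f"Keep running: ${keep_running:,}")  # $480,000
```

With these particular numbers the early replacement wins, but the point of the exercise is that the comparison is made explicitly rather than being decided by the sunk cost.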
2) Wait until the lifespan of the existing switches has expired and then make a new capital purchase of switches that support the new standard.
– The loss of the sunk cost is minimized because the lifecycle for the hardware would be followed and the next purchase cycle would just get the new standard.
– Growth and flexibility cannot be maximized because of interoperability issues; this may constrain business decisions
– There is an opportunity cost in not pursuing opportunities that the new-standard switches would have enabled
The opportunity cost needs to be analyzed to understand whether waiting out the lifespan of the existing switches before purchasing new ones makes sense financially and strategically over time.
3) Isolate the segments managed by the deprecated switches and add switches of the newer standard when additional ones are required, then decommission the older ones in phases.
– This allows for growth with the adoption of switches of the new standard in conjunction with the benefits of the existing switches
– No large capital expenditures are required because of the phased approach and incremental changes.
– Multiple non-interoperable technologies that solve the same requirements
– Additional design overhead
– Additional troubleshooting required for issues
– Additional operational overhead required
Although there is no large upfront hardware cost (CAPEX), the additional costs appear in the operating expenses (OPEX). Once the OPEX costs are determined and spread over the lifespan of the equipment, you can evaluate whether this strategy makes financial and strategic sense.
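One way to make that evaluation concrete is to roll each strategy's one-time CAPEX and recurring OPEX into a total cost over the equipment's expected lifespan; a fuller analysis would also discount future OPEX to present value. The figures below are illustrative placeholders:

```python
def total_cost_of_ownership(capex: float, annual_opex: float,
                            lifespan_years: int) -> float:
    """One-time capital cost plus operating cost accumulated over the lifespan."""
    return capex + annual_opex * lifespan_years

# Hypothetical numbers for comparison only.
rip_and_replace = total_cost_of_ownership(capex=500_000, annual_opex=50_000,
                                          lifespan_years=7)
phased_migration = total_cost_of_ownership(capex=150_000, annual_opex=90_000,
                                           lifespan_years=7)

print(f"Rip and replace:  ${rip_and_replace:,.0f}")   # $850,000
print(f"Phased migration: ${phased_migration:,.0f}")  # $780,000
```

The phased approach trades a smaller capital outlay for higher recurring overhead; whether it comes out ahead depends entirely on how large that overhead is and how long the equipment stays in service.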
4) Continue to use the deprecated switch technology by buying more end-of-sale and end-of-life switches.
– The price of these deprecated switches will drop dramatically as people get rid of them and the vendor stops producing them
– As long as there is a grey market for these switches, growth of the infrastructure is still possible
– It requires more work to acquire these switches and special sources may need to be sought out
– Vendor support may not be available long after the end-of-sale date
– Without additional standby equipment, failures that require parts may take critical business functions offline for extended periods of time
This strategy has a good ROI as costs have been reduced substantially, but it adds several layers of risk that need to be accounted for.
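Those risks can be folded into the comparison as an expected annual cost: the probability of a part failure in a given year, times the outage duration while a grey-market replacement is sourced, times what the downtime costs the business. The probabilities and costs below are stand-ins for illustration:

```python
def expected_annual_risk_cost(failure_prob_per_year: float,
                              outage_days: float,
                              cost_per_outage_day: float) -> float:
    """Expected yearly cost of unsupported-hardware failures:
    P(failure in a year) * outage duration * business cost per day down."""
    return failure_prob_per_year * outage_days * cost_per_outage_day

# Stand-in figures: 15% yearly failure chance, 10-day wait for grey-market
# parts, $20,000/day of lost business while the segment is down.
risk = expected_annual_risk_cost(0.15, 10, 20_000)
print(f"${risk:,.0f} per year")  # $30,000 per year
```

Adding this expected cost back onto the cheap hardware price often narrows, and sometimes erases, the apparent savings of staying on deprecated gear.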
The strategy that ultimately gets chosen should be made from a position of knowledge and an understanding of all the implications. It should be in line with the organization's core tenets and should look at the costs beyond the price. All too often, the choice that appears to minimize loss actually costs more in the long run.
Do you want to delve more into the thought process of an infrastructure architect and learn different methods of seeing the larger picture? Then pick up the book “IT Architect: Designing Risk in IT Infrastructure”. It is the second book in the IT Architect Series.