Architecture is designing something with focused intent around its function in the future.
I want to spend some time today exploring one future that we can all agree on. Moore's Law gave us the roadmap, and we have lived it for over half a century.
There is a graph that I came across this morning, which made me smile. It clearly shows how the cost of computing, storage, and bandwidth are steadily racing towards zero. Check it out below:
This is great news for Healthcare IT. You can start to transition the money you were once spending on these expensive platforms over to population health, value-based care, and digital transformation projects.
Some of you are already reaping the benefits of this, and some of you are not. I would suggest to you that the difference is in whether you are architecting for the future.
Here is what I think this looks like. I hope it helps.
Did you purchase that storage platform to store files or to enable new workflows and partnerships?
(In my best old man voice…)
When we were kids, you couldn't share files with anyone from your phone. You had to call IT and ask them to set it up. If it was with someone inside the company, they gave you a letter of the alphabet, and that became your drive letter. The S drive was where everyone put their files. If it was outside the company, you could send it by email, unless, of course, it was too big, at which point IT would just take care of it with FTP.
In 2012, we signed a contract with Box. The design for this project was simple: we wanted to replace all of our internal and external file shares with a cloud-based storage platform. We wanted it to be easy enough for the user to set up without the help of IT. But, of course, we wanted it to be secure.
Note that I’m not pushing Box here, although it is a great platform; I’m talking about designing for the future.
We could have simply solved the problem at hand, which was that we needed more room for file shares. We could have bought expensive EMC boxes and mirrored those investments with a SharePoint rollout. Both approaches would have required maintenance and maintainers, which are expensive.
Both solutions would have solved the problem. However, only one of them solved it for the future as well as the present.
A problem worth solving back in 2011, at least for our health system, was storage tiering. If we could do this, we knew that we could save millions. That is not an exaggeration, either! It took a few years, but once complete, it enabled us to use low-cost cloud for deep storage of rarely accessed files.
As a side note, the short-term way of thinking about tiering would be to select a vendor that has tiering on their platforms and call it a day. Instead, we identified solutions that could take advantage of future advancements across many vendors and cloud providers.
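To make the tiering idea concrete, here is a minimal sketch of an age-based tiering policy. The tier names, day thresholds, and the assumption that every file carries a last-accessed timestamp are all illustrative; they are not from any specific vendor's platform.

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- real policies would be tuned to access patterns.
HOT_DAYS = 30     # actively used files stay on fast on-premises storage
WARM_DAYS = 365   # idle up to a year: cheaper nearline storage

def choose_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier based on how long a file has sat idle."""
    idle = now - last_accessed
    if idle <= timedelta(days=HOT_DAYS):
        return "hot"    # fast, expensive on-premises storage
    if idle <= timedelta(days=WARM_DAYS):
        return "warm"   # cheaper nearline storage
    return "cold"       # low-cost cloud deep storage

now = datetime(2016, 1, 1)
print(choose_tier(datetime(2015, 12, 20), now))  # hot
print(choose_tier(datetime(2015, 6, 1), now))    # warm
print(choose_tier(datetime(2012, 1, 1), now))    # cold
```

The point of keeping the policy this simple, and separate from any one vendor's hardware, is exactly the architectural choice described above: the rule can stay the same while the "cold" tier moves to whichever cloud provider is cheapest that year.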
General George Patton is believed to have said: "Fixed fortifications are monuments to the stupidity of mankind." This is especially true in IT. Fixed assets are monuments to our short-term thinking.
When I arrived, my budget called for $15M to be spent on data center upgrades. Not computer systems and storage, but space, power, and cooling.
Here was my response: we mapped out the best data centers within a 10-mile radius of our existing facility where we could lease space, and began to solicit quotes. I did the math and realized that instead of spending the $15M, we could lease on a variable-cost model for a decade.
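That back-of-the-envelope math can be sketched as follows. The $15M capital figure is from the text; the annual lease rate and the horizon are hypothetical assumptions for illustration only.

```python
# Lease-vs-build comparison with illustrative numbers.
CAPITAL_BUILD = 15_000_000   # one-time data center upgrade (from the text)
ANNUAL_LEASE = 1_500_000     # hypothetical colo lease cost per year
YEARS = 10

lease_total = ANNUAL_LEASE * YEARS
print(lease_total)                   # 15000000
print(lease_total <= CAPITAL_BUILD)  # True: a decade of leasing for the same outlay
```

The real advantage is not the headline total but the cost model: the lease is variable, so a shrinking footprint shrinks the bill, while the capital build is sunk whether you need the space or not.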
Fast forward five years. My team completed a hyper-convergence project and reduced our data center footprint by 6,000 square feet. This was accompanied by a significant power reduction, as well. We reduced our lease and saved $700,000 annually. If we had been in a fixed-asset data center, I doubt we would have realized these savings. We would still have aging equipment that would need maintenance.
We selected our data center based on the availability of pre-existing pipes from telecom providers into the facility… 35 different providers, to be exact. This gave us leverage when negotiating and multiple path options to our facilities.
We didn’t build regional data centers but utilized carrier point of presence (POP) to co-locate the equipment that had regional requirements. As bandwidth increased over time, we knew that this would eventually go away as well.
If you have any major investments in a data center or fiber runs to your hospitals, ask yourself these questions: Are you sure you will own that hospital in five years? Are you sure you will need that much raised-floor space in five years, given the pace of convergence and virtualization?
Did you roll out workstations or VDI?
Short-term thinking is that we will always need computers in the hospital rooms. Let’s wire them up, purchase them, put an image on them, and deploy them. Long-term thinking, however, realizes that healthcare is changing and technology is as well.
You may not want to put a computer where a tablet will do. You may not want to deploy thick clients when software-configured thin clients last four years longer.
Consider the use case, not the technology. If you didn’t know where or how many workstations you were going to need in the future, what would you deploy?
Perhaps your organization is going to acquire more hospitals and needs to scale rapidly. Perhaps you have a growth strategy that requires a significant increase in small locations and clinics. Perhaps you are going to be acquired by a system that doesn’t have discipline around maintenance budgets, and your hospital administrators will appreciate that you put things in place for this eventuality.
Can you double your environment in days, weeks, or months? This is only near-term future thinking; if you are really thinking about the future, you would be asking questions like, “What services does the workstation deliver?” and “How will they be delivered in the future?”
This would start you on the path of asking some very interesting questions of your software providers.