Traditional IT, meaning the on-prem, datacenter-oriented approach we have had for the last few decades, had a pretty clear distinction between hardware and software. You had the network and the servers, and then software on top of the servers. Networks and servers were hardware, and on the servers you had software that you installed onto their operating systems.
Then came virtualization on the servers. Now the server hardware was virtualized away from the operating system, which allowed you to run multiple virtual servers on a single physical machine. The prediction back then was that hardware sales would almost come to a halt. As we now know, that didn't quite turn out to be true.
Picture: Dimi Doukas, using the Dezgo.com AI picture generator
The next step followed from the fact that, with the servers on the same hardware, traffic between these virtual servers didn't necessarily have to leave the host for the actual network at all. So there had to be a proper way for these virtual servers to communicate with each other, plus the ability to divide and restrict this communication between different IP address ranges, and eventually the ability to communicate from these virtual networks out to the physical network beyond the host server hardware. So we got virtualized networks.
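To make that segmentation idea a bit more concrete, here is a minimal Python sketch of the kind of logic a virtual network layer applies. The segment names, subnets and allow-list are invented for the example; real virtual switches and firewall rules are of course far more sophisticated.

```python
import ipaddress

# Hypothetical virtual network segments on one host (example values only).
SEGMENTS = {
    "frontend-net": ipaddress.ip_network("10.0.10.0/24"),
    "backend-net": ipaddress.ip_network("10.0.20.0/24"),
}

# Simple allow-list: which segment may open connections to which.
ALLOWED_PATHS = {("frontend-net", "backend-net")}

def segment_of(ip):
    """Return the name of the segment an IP address belongs to, if any."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def traffic_allowed(src_ip, dst_ip):
    """Allow traffic inside one segment, or along an explicitly allowed path."""
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    if src is None or dst is None:
        return False
    return src == dst or (src, dst) in ALLOWED_PATHS

print(traffic_allowed("10.0.10.5", "10.0.10.9"))   # True: same segment
print(traffic_allowed("10.0.10.5", "10.0.20.7"))   # True: allowed path
print(traffic_allowed("10.0.20.7", "10.0.10.5"))   # False: not on the allow-list
```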
At this point we were still pretty much within our own datacenter. But for some managed service providers (MSPs) this already opened an opportunity to start offering platform as a service (PaaS): 'renting' a platform on which customers could install and run their applications, with the MSP managing the layers underneath, such as servers, network, storage and middleware, serving the application's needs under the hood. Another model was infrastructure as a service (IaaS), where the customer managed everything themselves, only renting the infrastructure instead of owning it.
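A rough way to summarize the difference is by who manages which layer. The split below is a simplification written out as a small Python snippet, and real offerings draw the line a little differently from provider to provider.

```python
# Simplified responsibility split per layer (illustration only; actual
# offerings vary from provider to provider).
responsibility = {
    "layer":       ["hardware", "network", "storage", "operating system", "middleware", "application"],
    "on-premises": ["customer", "customer", "customer", "customer", "customer", "customer"],
    "IaaS":        ["provider", "provider", "provider", "customer", "customer", "customer"],
    "PaaS":        ["provider", "provider", "provider", "provider", "provider", "customer"],
}

for i, layer in enumerate(responsibility["layer"]):
    row = (layer, responsibility["on-premises"][i],
           responsibility["IaaS"][i], responsibility["PaaS"][i])
    print("{:<18} on-prem: {:<10} IaaS: {:<10} PaaS: {}".format(*row))
```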
The next phase was taking these IaaS and PaaS environments, moving them to a location outside the customer's datacenter, and automating them to such a degree that it became possible to offer self-service environments where scaling and implementing new services was fast and didn't need any involvement from the IT staff to get the services up and running. This was called cloud. It was still a datacenter, but now owned by a large service provider who simply took everything several steps further. Automation and the ability to scale were the distinguishing factors.
Servers and networking were not added one by one into the datacenter; instead, whole pre-built blocks of datacenter were installed or swapped when needed, brought in by truck. Each block had to be managed and automated by operating systems and management software that didn't yet exist and had to be built in-house. Because none of the products on the market at the time were built for that kind of use and scale, Google, as one of the pioneers of the cloud industry, had to write its own code for managing and scaling everything, all the way from IP and traffic management and load balancing to automation, security and so on. Of course, not everything was built from scratch, but many of the needed features and capabilities were missing, so these environments were built as unique solutions never seen elsewhere in the world. And yet they were not built as snowflakes; they were highly standardized and automated by the cloud platform provider.
So now we had our platform ready, but it was still serving monolithic applications: the ones built to run as one big block of code, updated from time to time with a new, bigger block of code, and requiring the application to be taken down for the duration of the maintenance. This usually took a weekend and had to be scheduled carefully, because the services would be down for the customers during that time. From the application and user-experience side, not much changed compared to before, except that you no longer had to own your own datacenter to run the applications and offer the services to your customers.
The downside of this setup was that it wasn't agile: markets and customer needs started to change faster than the application developers could react. Building a huge monolithic application of hundreds of thousands of lines of code, sometimes millions, was slow, and changing the code and building a new version wasn't much faster. It had been easier when the applications were smaller, but technology offered so many possibilities and new services that the amount of code needed for applications had grown exponentially year by year. Long gone were the days when you could build your software by copying a few pages of code from an IT magazine and typing them into your computer (this was still the case in the 80s, as those who have been in the industry a bit longer will remember). This route had come to its end and something needed to be done. Even though computers got faster every year, that didn't help, because the applications had become too inflexible and cumbersome, simply too big.
The change came with another era of virtualization, in which the application was detached from the operating system services. Until then the application was still stitched to the operating system: you built the application for the operating system running on the server. The operating system had been virtualized from the hardware, but the application was still bound to the operating system, needing its libraries to do things and calling directly the services the operating system offered. What was needed was abstracting the operating system layer by creating another layer that separated the application from the operating system services. Earlier we had one piece of hardware that could run many different virtual operating systems on top of a virtualization layer, which sat in the middle and passed back and forth what the operating systems needed from the server hardware. Now we did the same for applications, building a virtualization layer between the operating system and the application, so that we could build smaller applications that didn't have to include all the operating-system-level tools and libraries in order to run.
This made it possible to build smaller applications for smaller use cases. By using application programming interfaces (APIs), these smaller applications could specialize in narrower tasks, making them more efficient and easier to build and maintain. By combining these smaller services together as needed, you could create an application that is far more flexible. Bringing changes or new features to the application became easier, because we no longer had to consider the whole mammoth monolith; we could pick one small part of the application and replace it entirely with a new one.
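As a rough sketch of what one of these small, independently replaceable services can look like, here is a minimal Python example using only the standard library. The service name, endpoint, port and data are made up for illustration: the point is that the whole service does one narrow job and exposes it over an HTTP API, so it can be swapped out without touching the rest of the system.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical "price lookup" microservice: one narrow task behind an API.
PRICES = {"apple": 0.40, "banana": 0.25}  # example data only

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /price/apple
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "price" and parts[1] in PRICES:
            body = json.dumps({"item": parts[1], "price": PRICES[parts[1]]})
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown item"})
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Any other service can now call http://localhost:8080/price/apple
    HTTPServer(("localhost", 8080), PriceHandler).serve_forever()
```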
This is pretty much where we are today. We still have so much legacy that, in practice, we build new applications or front ends (user interfaces) in modern ways while still using some legacy sources as part of them. This could be a database with sensitive information that we want to keep, or are bound by regulation to keep, within our on-prem datacenter. There are still reasons not to take everything to the cloud. The idea of moving everything to the cloud doesn't work for everyone, and over the past couple of years there has actually been a trend of taking some services back on-prem, for various reasons, cost and complexity among them. And this complexity is growing. We're seeing more hybrid and multi-cloud environments, where it is hard to maintain the organization's policies and security when the platforms and the tools are different. This complexity is a security hazard and makes it really difficult to have visibility and to know what the current security posture is.
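As a sketch of what this hybrid pattern can look like in practice, here is a minimal Python example of a cloud-hosted front end that combines non-sensitive data from a cloud API with sensitive records served by an on-prem API reachable only over a private connection. All URLs, endpoints and the token handling are placeholders for illustration, not any particular product's API.

```python
import json
import urllib.request

# Hypothetical endpoints: the public catalogue lives in the cloud, while the
# sensitive customer records stay behind an on-prem API that is reachable
# only over a private connection (VPN or dedicated link).
CLOUD_CATALOGUE_URL = "https://api.cloud-frontend.example.com/catalogue"
ONPREM_CUSTOMER_URL = "https://customers.internal.example.com/api/v1/customers"

def fetch_json(url, token):
    """Fetch a JSON document, sending an auth token with the request."""
    req = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def build_customer_view(customer_id, token):
    """Combine non-sensitive cloud data with sensitive on-prem data."""
    catalogue = fetch_json(CLOUD_CATALOGUE_URL, token)
    customer = fetch_json("{}/{}".format(ONPREM_CUSTOMER_URL, customer_id), token)
    return {"offers": catalogue, "customer": customer}
```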
What this means in the end is that complexity will keep growing. There is an evolution going on in which APIs are becoming a bigger attack surface, due to the rapid growth in their use and the lack of visibility, management and security around them. With the complexity there is much more to do, but the teams are not growing at the same pace. So the solution will be to utilize more automation. To answer the requirements the business sets for IT, to be fast, agile and secure, there is no other way but to simplify trivial tasks, and later more complex tasks, by automating them. In the future this can be a matter of 'life and death' for companies trying to stay in the game. For many companies and organizations the role of IT will grow, and modernising the way we do things is already seen as a strategic initiative. And since the size of the teams is still very limited and the demands keep growing, we have two things we can do to keep up: automation, and partial outsourcing or buying more resources and services from outside, while still keeping the core skills and knowledge in-house. These new directions will bring new business opportunities for IT companies and will continue to change the way we operate our applications.
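As one small example of the kind of trivial task worth automating first, here is a hedged Python sketch that walks through a list of API endpoints and flags any that answer an unauthenticated request. The endpoint URLs are placeholders, and in practice a check like this would live inside an existing scanning or posture-management tool rather than a standalone script.

```python
import urllib.error
import urllib.request

# Hypothetical inventory of internal API endpoints; in practice this list
# could come from an API gateway or a service catalogue.
ENDPOINTS = [
    "https://api.internal.example.com/v1/orders",
    "https://api.internal.example.com/v1/customers",
]

def open_without_auth(url):
    """Return True if the endpoint answers an unauthenticated request with 2xx."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 300
    except urllib.error.HTTPError:
        return False   # 401/403 and friends: authentication is being enforced
    except urllib.error.URLError:
        return False   # unreachable; a real check would flag this separately

if __name__ == "__main__":
    for url in ENDPOINTS:
        if open_without_auth(url):
            print("WARNING: {} responded without authentication".format(url))
```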