Some people seem surprised that I’m heading to AWS re:Invent 2023 in Las Vegas this year, even though they know that I’ve been to every single re:Invent since it started…
If we look at market growth rates and the concentration of power within a handful of providers, using the terminology coined by Geoffrey Moore, it could easily be argued that the public IaaS cloud market is in the tornado and approaching Main Street. Despite a lack of publicly available market share data, for many in the industry it seems like a two- or three-player race, with one clear “gorilla”: Amazon Web Services (AWS).
In this article I will attempt to briefly characterize the competitive positioning of the key players in the public IaaS market, and highlight some of the alternative strategies other providers use to carve out their own niches. The question to keep in mind is: can anyone else survive in the face of the stiff competition presented by the two or three American mega-clouds?
Docker doesn’t need an introduction. It is one of the hottest open source projects, allowing you to deploy your applications inside containers and adding a layer of abstraction. In a seemingly constant state of maturation, the benefits of using Docker grow on a regular basis. In this post, instead of talking about what Docker is or how it works, I’ll outline the top five benefits of using the ever-growing platform.
[GUEST POST] I started exploring the cloud computing world around 5 years ago, and I must admit that my initial understanding of the cloud was a disaster. At first, it was difficult to find a comprehensive definition, but I finally settled on one from the National Institute of Standards and Technology (NIST). It clearly defined the cloud’s attributes and models, and removed my doubts about what falls under the cloud umbrella. The effort it took to find that definition made me want an easier path for others, so I decided to create my own list of cloud guidelines. This was a turning point in my cloud journey, as it pushed me to teach many students and IT professionals about cloud computing.
Stumbling upon AWS is inevitable when discovering the cloud, and just as with the cloud, my first interaction with AWS was not simple, either. I remember the “Eureka!” moment that came after I was finally able to launch an EC2 instance and deploy a simple application. Sometimes I laugh at the sheer joy I experienced from such a small achievement, but I realize that it was a stepping stone in my AWS journey and my love for Amazon. I am now able to manage bigger AWS cloud infrastructures, and I’ve consulted for and successfully designed various Amazon projects. I’ve also conducted sessions on how to build and scale applications using Amazon.
Two things have remained steady over the past few years: continuous innovation at AWS and my love for AWS. AWS has always kept me motivated to learn new things with its consistent stream of new offerings, and I’d like to share the reasons I believe it has become the immense influence on the cloud that it is today.
Monitoring is what gives you complete transparency into the online service you’re responsible for, including cloud infrastructure, application functionality, and SLA compliance. Modern IT monitoring is composed of two layers: an infrastructure layer and an application layer. On the infrastructure layer, VMs, network, and storage are monitored, revealing memory consumption, CPU utilization, and network connection metrics. On the application layer, database performance, browsing latency, and actual application functionality, such as user registration, login, and shopping cart, are monitored. For mega sites like eBay and PayPal, even the slightest latency can lead to a loss of millions of dollars. If your online service isn’t monitored closely, the trust and confidence of your users can be significantly compromised. In this post, I would like to touch on several points that describe the current state of the market, how essential it is to monitor your resources, and what monitoring is built on.
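To make the application-layer idea concrete, here is a minimal sketch of timing a single application operation against an SLA budget. The `threshold_ms` value and the stand-in "registration" operation are purely illustrative assumptions, not values from any real monitoring product.

```python
import time

def measure_latency(operation, threshold_ms=500):
    """Time one application-layer operation and flag SLA breaches.

    `threshold_ms` is a hypothetical SLA budget chosen for illustration.
    """
    start = time.perf_counter()
    result = operation()  # e.g. a user-registration or login call
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "result": result,
        "latency_ms": elapsed_ms,
        "within_sla": elapsed_ms <= threshold_ms,
    }

# Stand-in for monitoring a "user registration" functionality check.
report = measure_latency(lambda: "registered", threshold_ms=500)
```

A real monitoring stack would run such probes on a schedule and alert when `within_sla` flips to false; the same pattern extends to database queries or page loads.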
2014: A Reflection
2014 has been a pivotal year in the enterprise tech world. Enterprise IT has begun to fully understand the cloud, and a mutual understanding has developed. The cloud, in turn, is adjusting more and more to the features and traditional needs of enterprise IT.
My perspective on next year is guided mostly by experiences I had this year (2014) at the AWS re:Invent conference. This huge cloud festival was the platform from which AWS publicly introduced the cloud as a means for creating today’s enterprise data center. Whether for native cloud web-scale applications or for enterprises of all shapes and sizes, the cloud is considered today’s best way to increase both efficiency and flexibility in any IT environment. Market saturation is still not here; however, it’s just a matter of time until the cloud is used by everyone, covering a significant portion of the world of IT.
The cloud has allowed modern, web-scale IT companies like Airbnb and Netflix to grow and flourish into booming enterprises across the web. With its flexibility and efficiency, it supports an organization’s growth from zero to millions of users, and lets teams prepare for that growth in advance. Before the cloud, simulating millions of concurrent users and running scalability, stress, or stability tests was very hard, if not impossible. Cloud technology has brought software testing, especially performance testing, to a whole new playing field.
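The core idea behind cloud-scale performance testing can be sketched in a few lines: fan out many simulated user sessions concurrently and measure how long the batch takes. The `simulated_user` function and the worker counts below are illustrative assumptions; a real load test would issue HTTP requests against the system under test and scale this pattern out across many cloud machines.

```python
import concurrent.futures
import time

def simulated_user(user_id):
    """Stand-in for one virtual user's session; a real test would
    hit the application's endpoints instead of sleeping."""
    time.sleep(0.01)  # pretend the user's request takes ~10 ms
    return user_id

# Run 200 "users" through a pool of 50 concurrent workers.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(simulated_user, range(200)))
elapsed = time.perf_counter() - start
```

Cloud elasticity is what makes the same idea work at millions of users: instead of one thread pool, you rent a fleet of load generators for the duration of the test and release them afterward.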
First and foremost, it is important to define what AWS Activate is and what it is used for before we take a deeper look. Exactly one year ago, Amazon created a program specifically designed for a group of customers that oftentimes needs as much help as it can get: startups. The program supports startups in the initial phase of building their businesses, including AWS credits, startup contests, and benefits from third-party solutions on the AWS cloud. Activate also allows AWS partners that want to create a presence within the Activate community to offer perks to member startups, some of which include discounts and extended free tiers.
This article is cross-posted on TechTarget as part of my contribution during the AWS re:Invent show in Vegas this month. It is important to note, however, that this version is slightly different. In this article I will cover the evolution of the AWS ecosystem over the last 3 years, which, in my opinion, has been one of the most important indicators of the cloud industry’s growth.
Cloud vendors need an ecosystem. It is a vital part of a product’s and service’s maturity. In order for a product to support more use cases, customers, and revenue, you need a community of vendors that can link up to your API and extend your platform. By first developing your API and then creating a UI, you set the stage for companies that thrive off of your API and product. Salesforce, for example, with its flexible platform, has built what is quite possibly the largest ecosystem in the cloud over the past few years. When external companies develop around your API, the cloud vendor gets two things: very rich services above and beyond its core services, and a scalable business with revenue generated directly by the ecosystem itself.
Confidence is key when it comes to managing large IT systems. The tricky part is how a CIO generates trust and confidence in a company’s IT environment. Complete transparency is the answer. As you may recall, I’ve written about the need for transparency concerning Newvem’s services in the past. As the cloud market matures, the AWS cloud continues to grow at groundbreaking speed, as do individual cloud deployments. In either case, transparency becomes an issue.