The cloud industry has continued to grow exponentially. As of September, the reported combined revenue of Amazon’s and Microsoft’s cloud businesses for the previous 12 months reached $50.1 billion. If this is any indication (and it is), the cloud industry is flourishing. Enterprises and small companies alike are increasingly relying on the innovation and reliability of cloud providers and products that strive to both foresee and meet the needs of their customers.
But what can we expect this year besides more growth? IOD asked five of our cloud experts to share their predictions for all things cloud in 2019.
Prediction 1: The Growing Prominence of AI
Prediction by Shiji Sujai
In 2019, AI will emerge stronger and more powerful than ever before. While conspiracy theorists might be losing sleep over machines taking over the world, the exciting and innovative possibilities that AI brings to the table have gotten everyone’s attention. Major players like Microsoft, Google, IBM, Apple, and Amazon have invested heavily in the development of AI and the ecosystem required for it. For example, Microsoft Azure has a lineup of prebuilt products that support AI initiatives and also lets teams deploy custom environments tailored to project needs.
Organizations should definitely consider AI as a potential candidate for their IT strategy in the coming year. By leveraging deep learning, AI helps bridge the gap between products and customers in more intuitive and engaging ways. AI is making rapid advances in fields such as health care, sports, the automotive industry, and the military. Naturally, this will translate into a need for backend computing systems that can meet the on-demand processing requirements of AI and machine learning. Cloud computing is the obvious answer. Without a doubt, AI, machine learning, and cloud computing will go hand in hand and emerge as a clear trend in 2019.
Prediction 2: The Possibilities of Reinforcement Learning
Prediction by Daniel Taube
Reinforcement learning (RL) is a rapidly growing field in data science that is changing the way we think about data processing and model prediction. An RL model learns, in real time, the best next move to take: you create a system, then give the model a “prize” for good behavior and a “punishment” for bad behavior. Quickly enough, the model learns the optimal way to play the system. A classic example is teaching a computer to play Pac-Man.
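To make the prize-and-punishment loop concrete, here is a minimal sketch of tabular Q-learning in Python on a toy environment (a five-cell corridor rather than Pac-Man). The environment, reward values, and hyperparameters below are purely illustrative, not taken from any particular framework.

# A minimal sketch of tabular Q-learning on a toy five-cell corridor.
# The agent gets a "prize" (+1) for reaching the goal and a small
# "punishment" (-0.01) for every extra step it takes along the way.
import random

N_STATES = 5          # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Return (next_state, reward, done) for a single move."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # reached the goal: prize
    return nxt, -0.01, False       # wasted a step: small punishment

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy is simply "move right" from every cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})

After a few hundred episodes, the rewards alone are enough for the model to discover the optimal policy, which is exactly the behavior described above.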
Today, RL is applied mainly to solving games, but research is already exploring what else this approach can do; for example, deep RL can optimize join queries. In short, I believe RL is the next trend in the cloud and data world, and we’ll soon be hearing much more about it.
Prediction 3: More Endpoint Devices Implementing Their Own Internal Data Processing Pipelines
Prediction by John Hammink
As part of a larger trend, we’re gradually moving away from centralized, one-way analytics pipelines and toward dashboards and smart devices connected to the “programmable web.” Part of this idea is endpoint-to-core processing: more endpoint devices (IoT, cars, sensors, smart thermostats, infrastructure, and the like) will implement their own internal data processing pipelines that complement the pipeline at the core. These endpoint pipelines serve not only data for analytics, but also logs for processing, training data for machine learning systems, and even commands. This trend is accelerating the growth of a lot of interesting technology in the embedded space, like FluentBit (data ingestion for embedded devices).
Data velocity is quickly moving away from batch ingests and toward real-time or near-real-time ingestion. Thus, we are seeing data pipelines evolve from entities that support a few key stakeholders in a business via batch imports, to supporting the whole business via near-real-time ingestion, to the point where the data pipeline, combined with the latest in machine learning, is the product or service.
Within organizations, the push now is to stop analytics datasets from being siloed in different systems and instead make them discoverable and available to anyone across the organization.
As messaging services move away from point-to-point integrations and firmly into the domain of pub-sub mechanisms like Apache Kafka (or even Apache Pulsar), there are also completely separate compliance issues, like GDPR, that need to be considered. Given that most models work from an immutable event log, how do you implement the data-at-rest (and data-in-motion) security that’s required? You can write partitions to a location that can later be made inaccessible, automatically anonymize identifying data, or selectively revoke an associated encryption key.
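As an illustration of that last option, here is a minimal Python sketch of crypto-shredding on top of an append-only event log: each user’s events are encrypted with a per-user key, and “forgetting” the user simply means deleting that key. The in-memory key store and event shapes are hypothetical; a real deployment would keep keys in a KMS or vault.

# A minimal sketch of the "revoke an encryption key" approach (crypto-shredding)
# for GDPR-style erasure on top of an immutable event log. Event payloads are
# encrypted with a per-user key before being appended; deleting that key makes
# the user's records permanently unreadable without ever rewriting the log.
from cryptography.fernet import Fernet

key_store = {}      # user_id -> encryption key (hypothetical; use a KMS/vault in practice)
event_log = []      # append-only list of (user_id, ciphertext) records

def append_event(user_id: str, payload: bytes) -> None:
    key = key_store.setdefault(user_id, Fernet.generate_key())
    event_log.append((user_id, Fernet(key).encrypt(payload)))

def read_events(user_id: str):
    key = key_store.get(user_id)
    if key is None:
        return []                       # key revoked: data is effectively erased
    f = Fernet(key)
    return [f.decrypt(c) for uid, c in event_log if uid == user_id]

def forget_user(user_id: str) -> None:
    key_store.pop(user_id, None)        # the log itself is never mutated

append_event("alice", b'{"page": "/pricing"}')
print(read_events("alice"))             # [b'{"page": "/pricing"}']
forget_user("alice")
print(read_events("alice"))             # []

The appeal of this pattern is that the event log stays immutable and replayable, while erasure becomes a cheap key-management operation.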
In the future, with solutions like SpectX in Estonia, we’ll also be able to choose between piping our data in or querying multiple data sources in place, where the data doesn’t actually move! For DevOps, this means an ongoing trade-off between latency and throughput on one side, and data storage requirements and/or tunnel security on the other.
Just as NoSQL briefly took us away from the relational model, the SQL dialects in NoSQL databases (and even in distributed streaming platforms) are bringing us back to a familiar and growing user base. Streaming platforms now support dialects like KSQL, in which a stream can be queried in motion just as a relational table can, and this trend will only continue to grow.
As data pipelines become common, consumers are becoming increasingly interested in using components (databases like InfluxDB, Apache Cassandra, or Apache Kafka, for example) as building blocks rather than standalone solutions. They’re also looking to buy one-click rollouts of complete pipeline solutions with several of these components already integrated.
Prediction 4: Continued but Slower Growth
Prediction by Jorge Galvis
I believe that serverless deployments will continue growing in 2019, maybe not as much as they did this past year, but we will still see growth. This is due to cloud providers supporting more languages for serverless computing (Google Cloud Platform, for instance, supports Node.js and Python, with different runtimes), and also because of the adoption of frameworks like the Serverless Framework and Zappa, which make implementing this architecture a bit easier.
Also, I think we will continue to see serverless computing excel at API implementations and backends for mobile apps. In addition, we are probably going to see an increase in the number of projects that combine IoT technologies with serverless functions: apps with sensors, for instance, sending payloads to an endpoint implemented as an AWS Lambda function, which writes the readings to a data store for later analysis or visualization.
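As a rough illustration of that IoT-plus-serverless pattern, here is a minimal AWS Lambda handler in Python that accepts a sensor payload and writes it to a DynamoDB table. The SensorReadings table name and the payload fields are hypothetical.

# A minimal sketch of an AWS Lambda handler for the IoT scenario above:
# a sensor posts a JSON payload, and the function stores it in DynamoDB
# for later analysis or visualization. Table name and fields are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")

def handler(event, context):
    # With an API Gateway trigger the payload arrives as a JSON string in "body";
    # with an IoT Core rule, the event itself is usually the payload.
    payload = json.loads(event["body"]) if "body" in event else event

    table.put_item(Item={
        "device_id": payload["device_id"],        # partition key
        "timestamp": payload["timestamp"],        # sort key
        # stored as a string to sidestep DynamoDB's lack of native float support
        "temperature": str(payload["temperature"]),
    })
    return {"statusCode": 200, "body": json.dumps({"stored": True})}

The appeal here is that the backend scales with the number of sensor messages and costs nothing while the devices are idle.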
The rate of adoption next year will be lower than it was this year because teams will focus more on stabilizing their apps within the serverless ecosystem than on adding features as cloud functions. This means they will consider how to test or increase coverage and how to better architect their apps, rather than how to port more code to serverless functions.
Prediction 5: More Options for AaaS and FaaS
Prediction by Taral Shah
Cloud is no longer an alien technology, but rather, a commodity. AWS, Microsoft, and Google are dominating the market, adding new services and functionalities so often and quickly that even a cloud expert needs to make an effort to keep up.
At the same time, providers understand that with such a long list of services, it’s tough for users to remember and implement everything, so the cloud industry as a whole is moving toward a model that allows for easier and quicker adoption. While AWS, Microsoft, and Google are leading the way with regard to IaaS, they’re also offering more options for AaaS and FaaS in order to keep up with the competition.
I predict that in the next year, most cloud providers will offer services around automation with more AI/ML-based tools, which make it easier to automate infrastructure provisioning or deployment, set up DevOps, containerize your environments, enhance security, and optimize cost. The times are changing, and cloud providers are starting to focus on making life easier, enabling the cloud to make decisions or guide you towards better cloud utilization.
IOD experts are our source for high-quality, deeply technical marketing content. Contact us for more information about how to work with our tech experts.
Meet this post’s experts:
Jorge Galvis is a technical architect who is passionate about web application development with free and open source technologies. He’s worked with teams around the world, building applications ranging from academic to financial. Jorge says, “being respectful, dedicated, autonomous and a team player are my best qualities.” He loves to teach, read fiction, and drink coffee.
John Hammink is a musician, digital artist, engineer, and in-demand writer on popular engineering topics. An early employee at startup-mode Skype and F-Secure, he’s worked with teams around the world over the last 20 years. With a recent focus on data engineering and data-security topics, he’s also made many predictions about trends that have proven valuable to startup efforts.
Taral Shah is a cloud enthusiast with more than 10 years’ experience in the industry. He’s an active blogger and cloud lecturer for many online institutes. As a co-founder of Techify Solutions, Taral has worked on many large-scale projects, but his passion remains AWS, as well as the automation of various AWS processes.
Shiji Sujai is a tech enthusiast with 12 years of experience spanning multiple technologies in data center management, virtualization, and cloud computing. She considers herself a super mom, cloud consultant, and budding writer rolled into one. Shiji is passionate about sharing her knowledge with the tech community through her blogs and recently published her first technical book on Azure Cloud Automation.
Daniel Taube is a senior data science expert skilled in big data and deep learning.