Cloud computing, or simply “the cloud”, is a term coined in the early 2000s as Internet technology became more sophisticated. Cloud computing works much like a utility such as water or electricity, except that the resource on tap is computing.
A cloud is a cluster of computers that work together so that, to the end user, they appear to be a single resource. Once you have access, you can tap into that resource any time, use as much as you need, and pay only for what you use.
A cloud may be public (accessible to everyone), private (restricted to a select group, such as a single company), or a hybrid of the two. Cloud service providers offer different service models to meet these different needs.
To make this concrete, here’s an example. One of the most common uses for the cloud is storage. If you need, say, 200 gigabytes of storage space for your files, you have two options: buy a physical hard drive that you have to carry around, or store that data on a cloud storage service (such as Amazon S3 on AWS) and access it online from anywhere.
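As a rough illustration, here is a minimal sketch of what storing and retrieving a file on Amazon S3 looks like using boto3, AWS’s Python SDK. The bucket and file names are hypothetical placeholders, and the sketch assumes your AWS credentials are already configured locally.

```python
import boto3

# Create an S3 client; boto3 reads credentials from your
# local AWS configuration (e.g. ~/.aws/credentials).
s3 = boto3.client("s3")

# Hypothetical bucket and file names, for illustration only.
bucket_name = "my-backup-bucket"
local_file = "vacation-photos.zip"

# Upload the local file to the bucket under the same key.
s3.upload_file(local_file, bucket_name, local_file)

# Later, from any machine with the same credentials,
# download it again.
s3.download_file(bucket_name, local_file, "vacation-photos-copy.zip")
```

No physical drive changes hands: the file lives in the provider’s data centers, and you pay only for the storage and transfer you actually use.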
Another common use is computing power. Say your company has an application it needs to run regularly, but you lack the hardware to do so. You can rent computing power from a provider like Amazon Web Services. With AWS EC2, you can provision virtual machines far more powerful than the ones you own and run your application for as long as you require. Once you no longer need the computing power, you simply terminate the instances and stop paying, sidestepping the cost of buying and maintaining physical computers.
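To show what “renting computing power” looks like in practice, the sketch below launches and later terminates a single EC2 virtual machine with boto3. The AMI ID and instance type are illustrative assumptions; real values depend on your region and workload.

```python
import boto3

# An EC2 client for a specific AWS region (us-east-1 is just an example).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one virtual machine. The ImageId below is a hypothetical
# placeholder; look up a current Amazon Linux AMI for your region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # a small, inexpensive instance type
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")

# ... run your workload on the instance ...

# When you are done, terminate the instance so you stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The key point is the last call: turning the capacity off is as easy as turning it on, which is exactly what makes pay-as-you-go computing economical.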
In short, the cloud leverages shared resources to make computing scalable, user-friendly, low-cost, and widely available. When it comes to convenience and economies of scale, cloud computing is hard to beat.
Video: https://www.youtube.com/watch?v=jOhbTAU4OPI