Security Falls on the Customer

I remember when we got our first Hyper-V server.  We spec’d it out to be able to support our growth for 3 years.  That lasted about a month.  Once I saw how easy it was to provision machines, we maxed out our memory and processing power before the “New Server” smell had a chance to dissipate.

Most of us are experiencing that same euphoria with public cloud hosting like AWS and Azure, even more so because we aren’t limited on RAM and CPU.  Studies show that many organizations use two public cloud services and are experimenting with several more.  Companies go multi-cloud partly to take advantage of unique services, partly to protect uptime, and partly to keep data in a second location.  That adds even more complexity to getting cloud network configurations right.

Along with the ease of spinning up environments has come the attention of organized hacking groups.  They are having a mountain of success with data theft, cryptomining, and DDoS attacks.  It doesn’t take much googling to find stories about these kinds of incidents.

Misconfiguration is the number one issue that leads to public cloud incidents.  In fairness to cloud admins, there is a lot to see and a lot to do to get things set up correctly.  On top of the cloud configuration itself, the built-in security tools need configuring too, and that responsibility falls on the user.  How do you know if you got it right?  Or missed something?

Shared Responsibility Model

The first thing to know is where your responsibility ends and your cloud provider’s begins.  Your provider is responsible for physical security, replication, availability, and hardware.  You are on your own for everything else.  You already know that if you are paying for a server you never use, they aren’t going to send you a love note suggesting you could save money by shutting it down.  They also aren’t verifying that you have sensitive data sitting in a publicly exposed S3 bucket and reconfiguring it for you.
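
If you want a quick sanity check before bringing in a tool, the cloud providers’ own SDKs can answer simple questions like “which of my S3 buckets grant access to everyone?”  Here is a minimal sketch, assuming Python with the boto3 library and AWS credentials already configured; it only inspects bucket ACLs, so it won’t catch public bucket policies or account-level settings.

```python
# Minimal sketch: flag S3 buckets whose ACL grants access to "AllUsers".
# Assumes boto3 is installed and AWS credentials/region are already configured.
import boto3
from botocore.exceptions import ClientError

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        acl = s3.get_bucket_acl(Bucket=name)
    except ClientError as err:
        # Some buckets may live in another region or deny this call.
        print(f"{name}: could not read ACL ({err.response['Error']['Code']})")
        continue
    public = any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        for grant in acl.get("Grants", [])
    )
    print(f"{name}: {'PUBLIC grant found' if public else 'no public ACL grant'}")
```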

If you don’t know which resources are exposed to the internet, or which have a trust relationship with a server that is exposed, you run the risk of extreme data theft (an entire cloud takeover by black hats).  There are already plenty of examples of this happening.  It has led to the theft of sensitive data, or, in the case of cryptomining attacks, left some clients with enormous compute bills.
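
The same do-it-yourself approach works for spotting resources exposed to the internet.  The sketch below, again assuming Python with boto3 and configured AWS credentials, lists EC2 security group rules that are open to 0.0.0.0/0; it’s a starting point, not a substitute for a proper assessment.

```python
# Minimal sketch: list EC2 security group rules open to the whole internet.
# Assumes boto3 is installed and AWS credentials/region are already configured.
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        # Rules with IpProtocol "-1" (all traffic) have no From/To ports.
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if open_to_world:
            ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
            print(f"{group['GroupId']} ({group['GroupName']}): "
                  f"ports {ports} open to 0.0.0.0/0")
```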

If you haven’t already considered an assessment or a visibility tool, now is the time to start.  Check out Youtube.com/productivecontent to see our videos on Cloud Optix.

When I started the Cloud Optix trial, my mind was blown at how many misconfiguration errors I could immediately see.  Getting the visibility is easy.  You may be surprised at all the areas you missed and at the task list you end up with to get your cloud environment on the right path.  That said, it will be much easier and cheaper than investigating and remediating a breach.

Servers Still Need Traditional Protection

Public cloud servers and workstations are still computers.  There are all kinds of things to understand and configure in your cloud console for access, visibility, applications, IAM, and so on, but once you spin up an OS, it still needs the basics: endpoint security and a next-gen firewall in front of it.

I can’t speak for every product out there, but all of the endpoint and firewall products on our line card will support your cloud infrastructure.  You also get the added benefit of managing them from your existing on-premises consoles.

If you are running public cloud infrastructure in Production, Test/Dev, or both, let’s talk.
