The security gaps in processors will persist for a long time to come in public-cloud infrastructures. Even if providers can minimise this risk with new patches, users may suffer long-term performance losses, higher CPU utilization, and generally rising costs that are harder to plan. The public cloud thus becomes not only a security risk but also a stability risk.
« We have built this internet around the idea that one can safely execute code on a machine and isolate it from other code on the same machine. This is an incredibly fragile assumption. » — Hanno Böck (@hanno), January 3, 2018
The German tech journalist Hanno Böck summed up the dilemma shortly after the massive security gaps in most of the processors used in virtualization environments went public. The Spectre and Meltdown attack scenarios led to a radical shift in the perception of virtual machine security. The tenor: companies that share common IT resources (and this is the case for any company that uses a public cloud service) are vulnerable in principle, and this will remain so until an entire generation of processors has disappeared from active IT operations, which can take up to a decade.
So how should users react to the new circumstances? Fundamentally, nothing has changed: particularly sensitive data still needs to be protected in a special way. Spectre and Meltdown have simply reminded us, once again, that the risk is real and that further breakdowns, data leaks, and security gaps are likely to follow.
This does not mean that users of public cloud services necessarily have to migrate their entire infrastructure into an on-premises environment. For particularly sensitive data, however, it is advisable to reassess the situation and at least back up a subset of the data elsewhere, for example on a private server with end-to-end encryption, i.e. in a hybrid cloud.
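As a rough sketch of what such an end-to-end protected backup can look like, the following commands encrypt an archive client-side with OpenSSL before it leaves the machine, so whatever server stores it only ever sees ciphertext. All paths and the passphrase are illustrative placeholders, not part of any particular product's workflow:

```shell
# Sketch: client-side encryption of a backup before upload.
# Paths and passphrase are placeholders for illustration only.
mkdir -p data && echo "sensitive record" > data/customers.txt

# Bundle the sensitive subset of data into one archive.
tar -czf backup.tar.gz data

# Encrypt the archive locally; only backup.tar.gz.enc would be uploaded.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in backup.tar.gz -out backup.tar.gz.enc \
  -pass pass:example-passphrase

# Later, restore by decrypting on a trusted machine and unpacking.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in backup.tar.gz.enc -out restored.tar.gz \
  -pass pass:example-passphrase
```

In practice, a key file or a key-management system would replace the inline passphrase, and the keys would never be stored alongside the encrypted data.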
Ensure Stability and Performance with On-Premises Infrastructure
The impact that Meltdown patches can have on a public-cloud infrastructure is clearly illustrated by the AWS environment of the SaaS company SolarWinds. Administrators noticed significant downtime in the weeks after Meltdown, an indication of how hard AWS was working to mitigate the security risks. The company published full documentation of the negative impact on its server environment in its blog, which shows the effects were apparently more serious than Intel had announced (the processor manufacturer had stated that the performance degradation caused by the patches would be no more than 20 percent).
SolarWinds concludes: « It is at least uncertain that there will not be a similar slump in performance in the future, and quite likely that any company with large infrastructures hosted by a public cloud service will be affected at some point. » This means increased engineering effort on the company's side and higher costs to restore performance and stability to the level prior to the discovery of the security vulnerabilities.
So how should companies react? There is a real risk that performance in public clouds can drop without warning at any time. Those who depend on a third-party provider not only give up some control over their own data, but also become dependent on the provider for stable operation. This may have been acceptable before Spectre and Meltdown, but for the future it is unclear, at the very least, how many patches will still be needed and what long-term consequences they will have for users. What if the next mitigation has even worse effects? How high will the additional costs be to maintain operations?
Full security for one's own data and performance can only be achieved with an infrastructure over which users have complete control: one that is managed on site and where all tasks are in the hands of the operator or subject to his or her direct access.
This may not be necessary for every application, but the user's « own cloud » should not be limited to ownCloud for enterprise file sync and share and secure collaboration; it should become an integral part of cloud projects in general, all the more so after Spectre and Meltdown. After all, at this point no one can know how many further security gaps are hidden in processor technologies that have so far been investigated only superficially. Those who do not want to compromise on security and stability have to include the private cloud in their strategy.
How integrating ownCloud can increase the security of your cloud infrastructure is explained in our whitepaper « Information on the EU General Data Protection Regulation (EU-GDPR) », which can be downloaded here free of charge.