
Want to go cloud-first? Get cloud-ready

3rd August 2018

By Matt Piercy, EMEA VP, Zscaler

For any organisation, reducing costs while increasing productivity and performance is an ongoing challenge. Successfully identifying how processes can be trimmed, adapted or cut can have a huge impact on the bottom line, and businesses are turning to technology – and the cloud in particular – as a result.

This trend has led many companies to adopt a cloud-first strategy. Motivated by the idea of migrating apps to public cloud providers like AWS and Azure and benefiting from increased accessibility and cost effectiveness, it seems a relatively straightforward way of enhancing performance. However, to truly take advantage of the cloud, a business needs to ensure its architecture is cloud-ready. Legacy setups within offices and branches have limitations that are quickly reached when they are relied on to support cloud use. This can leave a company exposed, both to security risks and to stifled performance, so the common pitfalls need to be considered and addressed at the highest level, not just left to IT teams to sort out.

The five common pitfalls of a cloud-first strategy

1. Relying on regional gateways

Instead of deploying security at every branch, many organisations backhaul traffic to regional hubs or a few data centre gateways using Multi-Protocol Label Switching (MPLS). The lower upfront cost may make this seem cheaper than outfitting each branch with a security gateway, but it can end up costing far more in practice.

Backhauling traffic introduces a hairpin effect, forcing the business to pay twice for internet-bound traffic – once to carry it from the branch to the data centre, and again to return it to the end user. Furthermore, it can cause traffic bottlenecks and latency, restricting productivity. It can also complicate privacy and security issues, particularly if data is being transferred between regions and through different security systems.
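
As a rough illustration of that hairpin cost, the short Python sketch below compares round-trip latency for backhauled versus direct-to-internet traffic. The link delays are hypothetical numbers chosen for illustration, not measurements from any real deployment.

    def round_trip_ms(*one_way_legs_ms):
        # Sum the one-way legs and double them for a crude round-trip estimate.
        return 2 * sum(one_way_legs_ms)

    # Hypothetical one-way delays (ms): branch -> regional hub, hub -> cloud app.
    backhauled = round_trip_ms(35, 10)  # traffic hairpins through the regional hub
    direct = round_trip_ms(12)          # local breakout straight to the cloud app

    print(f"backhauled: {backhauled} ms, direct: {direct} ms")
    # backhauled: 90 ms, direct: 24 ms - every cloud-bound request pays the detour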

2. Believing that virtual appliances are ready for the cloud

A virtual appliance is a pre-configured system or solution developed for a specific need. Many firms still have a wide variety of virtual appliances deployed across their networks, tasked with completing vital functions using sensitive data. Often leftovers from legacy setups, these appliances are likely to cause performance issues for businesses that rely on them to support cloud use.

Being pre-configured for a particular job means that virtual appliances have pre-configured limits. They can cope when data flow is relatively consistent and predictable – as it was in the traditional era of strict enterprise network computing, when all work was conducted within an office’s four walls – but cloud use adds traffic volatility they simply were not designed for. Sudden spikes in traffic require seamless scalability, but an appliance’s upper boundaries cannot be shifted. Unexpected data deluges could even take systems offline, much as a denial-of-service attack would.
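
To make the scalability point concrete, here is a minimal sketch, with entirely hypothetical numbers, of a fixed-capacity appliance facing a traffic spike: everything above the pre-configured limit is simply lost.

    CAPACITY_RPS = 1000  # assumed pre-configured appliance limit, in requests/sec

    # Per-second demand at a branch, with a sudden spike midway through.
    traffic = [800, 900, 950, 2400, 3100, 1200, 900]

    for second, demand in enumerate(traffic):
        handled = min(demand, CAPACITY_RPS)
        dropped = max(0, demand - CAPACITY_RPS)
        print(f"t={second}s demand={demand:>5} handled={handled:>5} dropped={dropped:>5}")
    # A cloud-native service would scale out elastically instead of dropping the excess.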

3. Putting up with security gaps

Continued reliance on legacy solutions will see businesses falling short in the protection provided to their corporate data in a cloud-first environment. Just like virtual appliances, more traditional offerings aren’t suitable for today’s complex cloud traffic. For instance, old-fashioned firewalls cannot proxy HyperText Transfer Protocol (HTTP) or File Transfer Protocol (FTP) traffic, meaning they do not have the full context needed to determine the type of security required. They often only inspect traffic based on known signatures (which catch only three to eight percent of all vulnerabilities), leaving firms vulnerable to threats such as DNS tunnelling.
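
The signature limitation is easy to see in miniature. The sketch below is a generic illustration, not any vendor’s engine: a signature list cannot match data that has been encoded into DNS query names, whereas a simple behavioural check (label length and character entropy, with thresholds chosen arbitrarily here) can flag it.

    import math
    from collections import Counter

    KNOWN_BAD_DOMAINS = {"evil-domain.example", "malware-c2.example"}  # hypothetical signatures

    def entropy(s):
        # Shannon entropy in bits per character: random-looking strings score high.
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    def signature_match(qname):
        return qname in KNOWN_BAD_DOMAINS

    def looks_like_tunnelling(qname):
        label = qname.split(".")[0]
        return len(label) > 40 and entropy(label) > 3.5  # long, random-looking label

    tunnelled = "dGhpcy1pcy1leGZpbHRyYXRlZC1kYXRhLWJsb2NrLTAx.tunnel.example"
    print(signature_match(tunnelled))        # False: no signature, so a legacy firewall passes it
    print(looks_like_tunnelling(tunnelled))  # True: behavioural inspection flags it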

In another attempt to keep costs low, some deploy smaller equivalents of their HQ’s cybersecurity stack at each branch. Replicating stacks exactly would cost a considerable amount, with purchasing, configuring, managing and maintaining such a complex ecosystem across numerous sites a resource-intensive undertaking. As such, enterprises deploy and rely solely on smaller firewalls and unified threat management (UTM) tools, which typically have less than optimal security controls. This patchwork of tools is very complex for centralised IT teams to maintain, and the variations in capabilities create compliance issues, inconsistent policies and fragmented audit trails. These security compromises leave branches, and therefore the entire network, vulnerable.

4. Bolting on a proxy

The use of Secure Sockets Layer (SSL) encrypted traffic is increasing, and so is the number of threats hiding within it. According to Google, more than 90 percent of the traffic crossing its properties is encrypted, so SSL inspection is no longer a novelty. However, such a capability requires particular software, meaning it’s something most traditional appliance-based firewalls and UTMs cannot provide. As a way around this, and to avoid forking out for completely new solutions, businesses tend to adopt bolt-on proxies.

While seemingly the low-cost option, bolt-on proxies can have multiple drawbacks for branches. They require significant bandwidth, restricting the amount available for other functions and impacting performance. They are often also tied to vendor development cycles and the enterprise’s own appliance lifecycle, which could see tools refreshed only every three to five years. This requires branches to accurately predict their future SSL performance requirements or be stuck with tools that cannot keep up.

These challenges may leave branches feeling that proxies are more hassle than they’re worth, and switching them off completely. Yet, with 41 percent of network attacks using encryption to evade detection, according to the Ponemon Institute, the risk to data is obvious.
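
The arithmetic that follows from the figure cited above is stark. A back-of-the-envelope sketch:

    attacks_using_encryption = 0.41  # Ponemon Institute figure cited above

    for inspected_share in (0.0, 1.0):  # proxies switched off vs. full SSL inspection
        blind_spot = attacks_using_encryption * (1 - inspected_share)
        print(f"encrypted traffic inspected: {inspected_share:.0%} -> "
              f"attacks evading inspection: {blind_spot:.0%}")
    # With inspection off, all 41% of encryption-borne attacks cross the network unseen.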

5. Leaving bandwidth to chance

Ensuring consistent performance for users depends on them having seamless access to the network and business-critical applications. However, companies that leave bandwidth as a free-for-all, even if connections are deployed locally at each branch, are likely to soon find their critical applications being choked.

Steadily increasing application use, the bandwidth needed to run ever-more advanced functions, and growth in traffic and the user base can all crush performance and drive up costs. Moreover, the desire to watch global sporting events such as the World Cup or the Tour de France while at work can see already limited bandwidth consumed by streaming. As such, companies must be able to manage traffic, which includes allocating bandwidth for business-critical applications and limiting how much any one app can use.
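
One common mechanism behind per-application limits of this kind is a token bucket, sketched minimally below. The app names and rates are hypothetical; in practice an SD-WAN or cloud security platform exposes this as policy rather than hand-written code.

    import time

    class TokenBucket:
        # Allows bursts up to `capacity` bytes, refilling at `rate` bytes per second.
        def __init__(self, rate, capacity):
            self.rate, self.capacity = rate, capacity
            self.tokens, self.last = capacity, time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False  # over the cap: this app's traffic is delayed or dropped

    # Hypothetical policy: streaming is capped far below a business-critical app.
    limits = {"crm": TokenBucket(rate=10_000_000, capacity=2_000_000),
              "video_stream": TokenBucket(rate=500_000, capacity=100_000)}

    print(limits["video_stream"].allow(50_000))   # True: within the cap
    print(limits["video_stream"].allow(200_000))  # False: burst exceeds remaining tokens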

Using cloud to enhance the cloud

Businesses must move away from legacy architecture and security solutions. Tools developed to enable cloud use ensure the accessibility, security and performance that migrating to the cloud promises.

This could mean adopting software-defined wide area networking (SD-WAN) to create local internet breakouts that give branches direct-to-internet access and remove the need to backhaul traffic to centralised hubs. Deployed alongside a global cloud security solution – one that can be rolled out across all offices to standardise capabilities, scales elastically, is advanced enough to spot sophisticated cyberthreats, and is regularly updated – this gives businesses assurance that their branch traffic is secure. Furthermore, such technology often provides greater bandwidth management, allowing companies to granularly control bandwidth use and prioritise the performance of critical functions.
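
To make the breakout idea concrete, here is a hypothetical sketch of the per-destination routing decision a local breakout makes at a branch; the domain names and path labels are invented for illustration.

    INTERNAL_DOMAINS = {"erp.corp.internal", "files.corp.internal"}  # assumed internal apps

    def next_hop(destination):
        if destination in INTERNAL_DOMAINS:
            return "private-wan"  # data-centre apps keep the private MPLS/VPN path
        return "local-breakout-via-cloud-security"  # no hairpin back to a regional hub

    for dest in ("erp.corp.internal", "crm.example.com", "video.example.net"):
        print(f"{dest:<24} -> {next_hop(dest)}")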

Ultimately, a cloud-first strategy will enhance user experience and a business’s productivity and flexibility, but success depends on adopting the tools that make a company cloud-ready. Those that continue to rely on legacy tools and setups are simply negating the very benefits the cloud offers.
