Cloud services have been on an upward trend for the last couple of years, and there are no signs of that trend slowing down. Many enterprise customers have moved past the question of “what is cloud” to “how can we transition to the cloud?”

This blog post will discuss common public cloud adoption challenges and propose a strategic transition methodology that an organization can use to deploy workloads on a public cloud following a cycle of design-deploy-monitor activities.

Datacenter Design: Extending On-Prem to Cloud

Many companies choose to extend their on-premises infrastructure to a public cloud. Cloud service providers offer methods such as ExpressRoute (Microsoft Azure) or Direct Connect (Amazon Web Services) to enable communication between on-premises datacenters and a cloud network. While this is a practical approach for enabling a phased adoption of cloud services and for an application landscape with varying degrees of cloud readiness, it can introduce greater risk to the secured on-premises infrastructure if not properly planned.
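
One minimal way to sanity-check that an extended network is actually being used is to confirm that cloud-hosted endpoints resolve into the private address space advertised over the ExpressRoute / Direct Connect path rather than to public IPs. The sketch below does this with the Python standard library; the hostname and address ranges are placeholders, not values from any specific environment.

```
import ipaddress
import socket

# Placeholder values: substitute your own cloud-hosted endpoint and the
# private ranges advertised across your ExpressRoute / Direct Connect path.
CLOUD_ENDPOINT = "app01.cloud.example.internal"
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def resolves_to_private_path(hostname: str) -> bool:
    """Return True if the endpoint resolves into the private address space,
    suggesting traffic to it will stay on the private circuit."""
    addr = ipaddress.ip_address(socket.gethostbyname(hostname))
    return any(addr in network for network in PRIVATE_RANGES)

if __name__ == "__main__":
    try:
        if resolves_to_private_path(CLOUD_ENDPOINT):
            print(f"{CLOUD_ENDPOINT} resolves into the private address space.")
        else:
            print(f"WARNING: {CLOUD_ENDPOINT} resolves to a public address; "
                  "traffic may bypass the private circuit.")
    except socket.gaierror:
        print(f"Could not resolve {CLOUD_ENDPOINT}.")
```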

One aspect of this extended architecture to consider is internet traffic ingress and egress. Some companies opt to route all traffic to cloud-hosted resources through a secured on-premises datacenter, for example by leveraging an on-premises Citrix ADC to provide remote access to cloud-hosted VDI machines. This allows the same security controls to be applied, but it may not be optimal in terms of scalability and performance. As cloud adoption grows, so does traffic to and from the cloud, which over the long term can lead to network congestion in systems such as shared internet proxies or ExpressRoute / Direct Connect paths.
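
To make the congestion risk measurable, a simple approach is to turn byte counters from the shared path (collected from whatever proxy or edge device you have) into a utilization percentage and alert when it stays high. The sketch below is illustrative only; the 1 Gbps capacity, the 80 percent threshold, and the sample values are assumptions.

```
from dataclasses import dataclass

# Assumed capacity of the shared ExpressRoute / Direct Connect circuit.
LINK_CAPACITY_BPS = 1_000_000_000  # 1 Gbps
ALERT_THRESHOLD = 0.80             # alert at 80% sustained utilization

@dataclass
class CounterSample:
    timestamp: float   # seconds
    bytes_total: int   # cumulative bytes seen on the circuit

def utilization(prev: CounterSample, curr: CounterSample) -> float:
    """Fraction of link capacity consumed between two counter samples."""
    elapsed = curr.timestamp - prev.timestamp
    bits = (curr.bytes_total - prev.bytes_total) * 8
    return bits / (LINK_CAPACITY_BPS * elapsed)

# Example: two samples taken 60 seconds apart (made-up values).
before = CounterSample(timestamp=0.0, bytes_total=0)
after = CounterSample(timestamp=60.0, bytes_total=6_600_000_000)  # ~6.6 GB

u = utilization(before, after)
print(f"Circuit utilization: {u:.0%}")
if u >= ALERT_THRESHOLD:
    print("Sustained utilization above threshold: consider local "
          "internet breakout for cloud-hosted workloads.")
```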

At scale, public clouds should be designed to account for the same IT services that exist in on-premises datacenters so they can provide like-for-like performance for end users. Services such as DHCP or file services may be provided differently than on-premises (for example, through IaaS solutions), but locating these services close to the workloads is critical. Cloud consumers should also identify the demarcation point between the on-premises infrastructure and the cloud ecosystem and place a monitoring tool there to capture actionable security telemetry. A security kill switch can also be placed at the demarcation point to shut down communication between on-premises and cloud infrastructure in case of compromise, for example an automated VPN kill switch that stops traffic when the VPN is not connected.
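
Conceptually, the kill switch is a small control loop: watch the telemetry captured at the demarcation point and, when a compromise indicator appears, sever the hybrid connection. The sketch below shows that logic only; fetch_security_alerts() and disable_hybrid_connection() are hypothetical stand-ins for your monitoring feed and your provider's API or automation runbook, and the indicator names are made up.

```
import time

def fetch_security_alerts() -> list[str]:
    """Hypothetical stand-in: pull high-severity alerts from the monitoring
    tool placed at the on-premises / cloud demarcation point."""
    return []

def disable_hybrid_connection() -> None:
    """Hypothetical stand-in: disable the VPN / ExpressRoute / Direct Connect
    link, e.g., through the provider's SDK or an automation runbook."""
    print("Hybrid connection disabled.")

# Made-up indicator names for illustration.
KILL_SWITCH_INDICATORS = {"lateral-movement-detected", "c2-beacon-detected"}

def kill_switch_loop(poll_seconds: int = 30, max_polls: int = 3) -> None:
    """Poll the demarcation-point telemetry and sever the hybrid link when a
    compromise indicator appears."""
    for _ in range(max_polls):
        if set(fetch_security_alerts()) & KILL_SWITCH_INDICATORS:
            disable_hybrid_connection()
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    # The sketch runs a handful of polls; a real deployment would run
    # continuously as a service.
    kill_switch_loop(poll_seconds=1)
```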

Workload Migration: Transition to Cloud

Once you’ve defined a cloud-based datacenter architecture, you can deploy workloads into it. Moving on-premises workloads to a public cloud requires proper planning and a thorough understanding of both the existing on-premises infrastructure and the security requirements. Without this, you invite application slowness, data exposure, and computing power constraints. To strategically move private workloads to the cloud environment, we recommend following the process below to ensure successful cloud onboarding.

Let’s look at each step:

  1. Application Rationalization: The first step is to evaluate whether the selected application can be hosted in a public cloud infrastructure. Consulting with application owners can assist with this process. The following questions can guide the rationalization process:
    • Does the application have any known limitations or constraints such as latency, bandwidth, or computational requirements?
    • Does the vendor provide reference architecture on deploying the application in a cloud environment?
    • Is there a cloud version of the application available as software as a service (SaaS)?
  2. Prioritize Workloads: Next, the business team should identify the users of the application, map its dependencies, and define a prioritized list of application systems to migrate.
  3. Cloud Onboarding Checklist: Once the application rationalization is complete and the business team decides to move forward, the project team should develop a comprehensive list of activities and prerequisites to assist in the onboarding process. The list could include items like computing specifications, networking stack configuration, firewall ports and URL whitelisting, and databases.
  4. Security Controls: Base the application of security controls on the workload being protected; protecting a high-value workload will differ from protecting a low-value workload. Examples include deploying a high-value workload in a dedicated VLAN or resource group, or assigning a high-risk workload to a separate delivery group with limited permissions. The business team must evaluate the workload and the security controls it needs; there is no one-size-fits-all solution in this case (a minimal sketch of this mapping follows the list).
  5. Identity and Access Management (IAM) Solution: IAM provides authentication and authorization services. The majority of enterprises rely on Active Directory; however, with cloud proliferation, federated services play a considerable role in providing IAM services. In some cases, the enterprise can either extend the on-premises directory to the public cloud or use the cloud service provider’s directory as a service (e.g., Azure AD). Each application may have differing IAM requirements that require the business to either architect a new IAM solution or adopt the public cloud IAM standards. For example, a SaaS application requires federated services while a traditional on-premises application does not.
  6. Provision Workload: Deploy the dependencies and resources required for the workload functionality. Your options include:
    • Lift and shift only the application to the cloud
    • Lift and shift the application and its supporting infrastructure (e.g. database, application frontend, backend, etc.) to the cloud
    • Deploy a new application in the public cloud, replacing an on-premises version
  7. Validate Baseline Performance and Functionality: This step helps ensure the application meets its benchmarks, with no impact on the functionality or performance of the workload. Tools that capture the baseline performance of an application so you can compare on-premises and cloud-based workloads can assist with this process (a simple probe sketch follows the list).
  8. Extract Lessons Learned: Evaluating the initial testing phase’s challenges is vital to ensuring that you create a repeatable process.
  9. Improve the Onboarding Process and Security Controls: Use the experience from the initial rollout as an input to improve the cloud onboarding process and security controls for future applications and workloads.
  10. Employee Training: The final step is to ensure key employees responsible for managing daily operations of the workload are familiar with cloud terminology and architecture.
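
For step 4, one lightweight way to avoid a one-size-fits-all approach is to codify the mapping from workload classification to the controls that workload receives, so provisioning can look it up consistently. The sketch below is illustrative only; the tier names and control values are assumptions, not a Citrix or cloud-provider schema.

```
# Hypothetical control catalog keyed by workload classification (step 4).
SECURITY_BASELINES = {
    "high-value": {
        "network": "dedicated-vlan",
        "resource_group": "rg-high-value",
        "delivery_group": "dg-restricted",
        "permissions": "least-privilege",
    },
    "standard": {
        "network": "shared-vlan",
        "resource_group": "rg-standard",
        "delivery_group": "dg-general",
        "permissions": "role-based",
    },
}

def controls_for(workload: str, classification: str) -> dict:
    """Return the control set a workload should receive, failing closed
    (highest tier) if the classification is unknown."""
    baseline = SECURITY_BASELINES.get(classification, SECURITY_BASELINES["high-value"])
    return {"workload": workload, **baseline}

print(controls_for("payroll-db", "high-value"))
```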
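
For step 7, baseline validation can be as simple as capturing response times against the on-premises deployment before migration and re-running the same probe against the cloud-hosted version afterward. The sketch below uses only the Python standard library; the health-check URL, the recorded baseline value, and the 20 percent regression tolerance are placeholders.

```
import statistics
import time
import urllib.request

APP_URL = "https://app.example.com/health"  # placeholder probe endpoint
SAMPLES = 10
REGRESSION_TOLERANCE = 1.20  # allow up to 20% slower than the baseline

def measure_median_latency(url: str, samples: int = SAMPLES) -> float:
    """Median response time in seconds for a simple HTTP GET probe."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10):
            pass
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Run the probe against the on-premises deployment first and record the
# result, then run it again after migration and compare.
onprem_baseline = 0.180  # seconds, captured before migration (placeholder)
cloud_latency = measure_median_latency(APP_URL)

if cloud_latency > onprem_baseline * REGRESSION_TOLERANCE:
    print(f"Regression: {cloud_latency:.3f}s vs baseline {onprem_baseline:.3f}s")
else:
    print(f"Within tolerance: {cloud_latency:.3f}s (baseline {onprem_baseline:.3f}s)")
```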

Resource Monitoring: Proactively Eliminating Issues and Suboptimal Resources

The last step in the workload migration process is having trained employees monitor and maintain the platform. According to joint research by Oracle and KPMG, 38 percent of respondents agreed that detecting and reacting to a security incident in the cloud is a top security challenge. Some key factors that contribute to this are:

  • Shared responsibility models in public clouds
  • Limited visibility into critical security event telemetry
  • Limited control over updates made by the public cloud provider
  • Shadow IT, which in public clouds can be costly because charges are calculated by usage (see the sketch below)
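
Because public cloud charges accrue per usage, shadow IT often surfaces first in the bill. A recurring check that flags resources with no owner tag, or with spend above an agreed threshold, is one simple countermeasure; the sketch below assumes a hypothetical CSV usage export, since the real export format varies by provider.

```
import csv

SPEND_THRESHOLD = 500.0  # flag anything above this monthly spend (assumed units)

def flag_suspect_resources(usage_csv_path: str) -> list[dict]:
    """Flag rows from a usage export that are untagged or unusually costly.
    Assumes columns: resource_id, owner_tag, monthly_cost (hypothetical)."""
    suspects = []
    with open(usage_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cost = float(row["monthly_cost"])
            if not row["owner_tag"].strip() or cost > SPEND_THRESHOLD:
                suspects.append(row)
    return suspects

if __name__ == "__main__":
    for r in flag_suspect_resources("usage_export.csv"):
        print(f"Review {r['resource_id']}: owner='{r['owner_tag']}', "
              f"cost={r['monthly_cost']}")
```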

Cloud providers such as Microsoft Azure and AWS offer multiple mechanisms, including Regions, Availability Zones, and Availability Sets, that distribute machines across multiple physical locations, servers, and failure domains. Under this shared responsibility model, these mechanisms give customers a means of high availability that protects workloads from being affected by unplanned outages or updates. This also applies to the Citrix Cloud Connector, which includes auto-update functionality; customers can now select a preferred update window for when these updates will be applied. Learn more about the Citrix Cloud Connector Update feature.
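
As a simple illustration of the zone-spreading idea, the sketch below assigns a set of machines to availability zones round-robin so that a single zone outage only affects a fraction of the workload. The zone identifiers and machine names are placeholders.

```
from itertools import cycle

# Placeholder zone identifiers and machine names.
ZONES = ["zone-1", "zone-2", "zone-3"]
MACHINES = [f"vda-{i:02d}" for i in range(1, 7)]

def spread_across_zones(machines, zones):
    """Assign machines to zones round-robin so a single zone failure
    only affects a fraction of the workload."""
    zone_cycle = cycle(zones)
    return {machine: next(zone_cycle) for machine in machines}

for machine, zone in spread_across_zones(MACHINES, ZONES).items():
    print(f"{machine} -> {zone}")
```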

Within the Citrix Virtual Apps and Desktops service in Citrix Cloud, workloads are provisioned based on IAM roles that can be restricted to predefined networks or groups of machines, which can help enforce principles of least privilege and reduce the role of “shadow IT.” Autoscale can then be applied to Machine Creation Services-based workloads to better align cost with the real usage of these managed workloads.

Finally, Citrix Analytics for Security uses machine learning to identify anomalous user behavior based on multiple Citrix Cloud and on-premises data sources. This can assist with proactive alerts on security incidents and/or attempts by users to act outside of the defined security boundaries. Learn more about Citrix Analytics for Security.

Key Takeaways

To summarize, here are some key takeaways for transitioning to a public cloud:

  1. Define a plan to make the public cloud environment as self-sustaining as possible, minimizing dependencies on the on-premises network to ensure optimal scalability over the long term.
  2. Application rationalization is highly recommended to ensure application compatibility with the public cloud infrastructure.
  3. Define and apply the necessary security controls based on the workload that is being protected, including leveraging IAM roles where appropriate.
  4. Review the shared responsibility models in public clouds and ensure they align with business requirements. Leverage shared controls where available, such as Citrix Cloud Connector preferred update scheduling.
  5. Security is a journey, not a destination. Ongoing monitoring of any public cloud environment is critical to maintaining the platform, including with cloud-native tools such as Citrix Analytics that use machine learning to provide insights into environment performance and security.