Cloud Security Chronicles-1: Unraveling Antipatterns and Embracing Best Practices in Cloud Security


In today’s fast-changing world of technology and ever-growing digital spaces, cloud computing has become essential for organizations seeking agility, scalability, and efficiency. As companies and individuals migrate their workloads to the cloud, the importance of securing these virtual realms cannot be overstated. Within the allure of the cloud’s boundless potential, however, lies a labyrinth of pitfalls: antipatterns that, if left unaddressed, can jeopardize the very security we aim to uphold. In this Cloud Security Chronicles series, we unravel these antipatterns, shedding light on the misguided practices that threaten our virtual environments.

Cryptocurrency Mining

Cloud assets, particularly computing power, are highly attractive to cybercriminals looking for opportunities to mine cryptocurrency, and there has been a notable increase in cyberattacks targeting cloud environments for this purpose. Based on my observations, cloud breaches tend to occur at the start of official or religious holidays, or after 02:00 AM on Friday nights heading into the weekend. Attackers strategically choose times when many people are on vacation. They infiltrate the system using previously compromised accounts and then spin up powerful virtual machines equipped with high-performance GPUs around the world to mine cryptocurrency. If not detected in time, these activities can drive cloud consumption charges of over $20,000 per day. When delving into the root causes of these incidents, it is common to find that users resorted to antipatterns instead of adhering to best practices.

Harnessing Best Practices

Best practices are proven and tested methods for accomplishing tasks or achieving goals within a specific field or industry. These methods have been shown, through experience and analysis, to be effective and efficient. By following best practices, individuals and organizations can improve their outcomes and avoid common pitfalls. In contrast, an antipattern is a common mistake that frequently leads to unfavorable results: the opposite of a recommended approach. Many best practices exist precisely to help individuals and organizations avoid falling into antipatterns and making those detrimental mistakes. An antipattern in cloud security refers to a common, recurring approach or practice that is counterproductive and can lead to security vulnerabilities or compromises within a cloud computing environment. Such antipatterns often involve misconceptions, poor practices, or misguided decisions that undermine the overall security posture. Let us analyze the most widely recognized yet still frequently encountered antipatterns in cloud security, and then explore the corresponding best practices that should be adopted to ensure adequate precautions are taken.

1. Password Reuse Tendency

Careless users reuse the same password across multiple accounts. If one of those accounts is compromised, it can trigger a domino effect, jeopardizing the security of the others as well. This pattern is commonly referred to as the password reuse tendency.

In the initial stages of a cyberattack, hackers generally gather information about the target company, network, and users (the Reconnaissance phase of the Cyber Kill Chain). This might involve searching for vulnerabilities, identifying potential entry points, and understanding the organization’s infrastructure. Attackers often target weak passwords or stolen credentials to gain unauthorized access to cloud accounts. They use automated tools to try stolen username and password combinations on multiple websites; if users have reused passwords, the attackers can gain access to other accounts.

For instance, consider a scenario where cybercriminals aim to infiltrate the cloud portal of “xyz.com”. First, they need to find the company’s e-mail addresses (usernames). theHarvester on Kali Linux is a perfect tool for finding both e-mail addresses and subdomains directly related to xyz.com. Suppose the hackers identify the address “james.careless@xyz.com”. If the individual behind this address, let’s say James Careless, uses an identical password across various platforms, a weak link emerges: should any of those platforms suffer a breach, malicious actors can exploit it. This is where credential stuffing tools such as “STORM,” “SilverBullet,” “OpenBullet,” and “SNIPR” enter the scene, allowing black hat hackers to seamlessly ascertain passwords. Credential stuffing is a type of cyberattack in which usernames and passwords pilfered from one organization, either obtained through a breach or procured from the dark web, are used to infiltrate user accounts at another organization.

Recommended solution/best case for password reuse tendency:
To begin with, it’s crucial to implement a robust password policy. This policy should mandate that passwords adhere to stringent criteria, such as being a minimum of 12 characters long and combining uppercase and lowercase letters, numbers, and symbols. Additionally, the policy should enforce password history, stipulating the number of distinct new passwords that must be associated with a user account before a previously used password can be employed again. This multifaceted approach significantly enhances the resilience of user credentials against potential breaches. Furthermore, it is of the utmost importance to implement multi-factor authentication (MFA) for all users who access the cloud portal. MFA is an essential security measure that requires users to provide two or more different types of authentication factors to verify their identity. This approach demands not only the traditional password but also an additional layer of verification, such as a unique code sent to a mobile device, a fingerprint scan, a smart card, or another personalized identifier. By integrating MFA into cloud authentication processes, the likelihood of unauthorized access is substantially diminished: even if a hacker obtains a user’s password, they still need the supplementary factor, which is typically in the possession of the legitimate user. This fortification effectively guards against password-based breaches and offers a formidable defense against sophisticated cyberattacks. As cloud environments host a wealth of sensitive data and critical operations, implementing MFA is a pivotal step in safeguarding user accounts and maintaining the integrity of cloud services.
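To make the policy concrete, here is a minimal sketch of how such rules could be checked in Python. It is illustrative only: the 12-character minimum and character-class rules come from the policy described above, while the five-password history window and the function name are assumptions, not any provider’s actual implementation.

```python
import re

PASSWORD_HISTORY_DEPTH = 5  # assumed policy: the last 5 passwords cannot be reused


def is_compliant(password: str, previous_passwords: list[str]) -> bool:
    """Check a candidate password against the policy sketched above."""
    if len(password) < 12:
        return False                          # minimum length of 12 characters
    if not re.search(r"[A-Z]", password):
        return False                          # at least one uppercase letter
    if not re.search(r"[a-z]", password):
        return False                          # at least one lowercase letter
    if not re.search(r"\d", password):
        return False                          # at least one digit
    if not re.search(r"[^A-Za-z0-9]", password):
        return False                          # at least one symbol
    # Enforce password history. In a real system you would compare salted
    # hashes stored by the identity provider, never plaintext passwords.
    if password in previous_passwords[-PASSWORD_HISTORY_DEPTH:]:
        return False
    return True


print(is_compliant("Sup3r$ecretPass", previous_passwords=[]))  # True
```

In practice these checks live inside your identity provider (for example, a tenant-wide password policy), but the sketch shows how little logic is needed to reject weak or recycled passwords.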

Last but not least, if your data is very precious and your company’s risk appetite cannot tolerate a data breach, you may opt for passwordless authentication. Passwordless authentication eliminates traditional passwords entirely: users authenticate with methods such as biometrics (fingerprint, facial recognition), hardware tokens, or one-time codes sent to their devices. This enhances security by removing password-related breach risks and simplifies the user experience, as users no longer need to remember or manage passwords. Multi-factor authentication, by contrast, combines multiple factors: something the user knows (a password), something the user has (a smartphone or hardware token), and something the user is (biometric data). It adds an extra layer of security, making it harder for unauthorized users to gain access even if they know the password, and it can be used alongside passwords or as a standalone method. The choice between passwordless authentication and MFA depends on factors such as the sensitivity of the data being accessed, the user population, user experience preferences, and the organization’s risk tolerance. Assessing your organization’s security requirements will guide the decision; in some cases, a combination of both methods is the most effective approach.

2. Challenge of Shared Administrator Credentials

What I’ve observed from customers is that IT staff often share administrator account credentials among themselves. Typically, they use a single administrator account and exchange its login details within the team. This practice can lead to a range of vulnerabilities and consequences for an organization’s data and systems. First, shared admin credentials increase the risk of unauthorized access to critical systems and sensitive data, making it easier for malicious actors to infiltrate the organization’s infrastructure, which can quickly lead to security breaches (data loss or theft). Second, even well-intentioned IT personnel might inadvertently or deliberately misuse shared admin credentials, causing unintentional errors, data leaks, or even deliberate sabotage (insider threats). Moreover, when multiple individuals share the same credentials, it becomes challenging to identify who performed a specific action, hindering accountability and incident response (lack of accountability).

Recommended solution/best case for the Challenge of Shared Administrator Credentials:
To address these challenges, it’s recommended to use individual user accounts and implement proper identity and access management (IAM) practices in cloud environments. This includes using techniques like:

  • Role-Based Access Control (RBAC), in which you assign permissions based on job responsibilities and restrict access to only what is necessary for each role (a minimal role-assignment sketch follows this list).
  • Requiring MFA and single sign-on (SSO) for access to critical systems, enhancing security and control over user access.
  • Limiting the scope of admin privileges to only those necessary for the tasks at hand, reducing the potential impact of a compromised account (privilege management).
  • Continuously monitoring and auditing access logs to identify any unauthorized or unusual activity.
  • Training IT personnel about the risks associated with sharing admin credentials and the importance of following proper security practices.
  • Additionally, using tools that automate credential management and access provisioning can help mitigate these challenges.
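As a concrete illustration of the RBAC point above, the following Python sketch wraps the Azure CLI to grant a single named user a narrowly scoped role on one resource group, instead of handing the team a shared admin login. The subscription ID, resource group, and user principal are hypothetical placeholders, and the sketch assumes the Azure CLI (`az`) is installed and authenticated.

```python
import subprocess

# Hypothetical identifiers -- replace with values from your own tenant.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-app-prod"
USER_PRINCIPAL = "james.careless@xyz.com"

# Scope the assignment to a single resource group, not the whole subscription.
scope = f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"

# Grant a built-in role that matches the job (here: Virtual Machine Contributor),
# rather than sharing an Owner-level administrator account.
subprocess.run(
    [
        "az", "role", "assignment", "create",
        "--assignee", USER_PRINCIPAL,
        "--role", "Virtual Machine Contributor",
        "--scope", scope,
    ],
    check=True,
)
```

Because each engineer gets their own scoped assignment, the activity log now attributes every action to a specific identity, which directly addresses the accountability gap described above.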

3. Lax Access Controls

“Lax access controls” refers to inadequate or insufficient measures for managing and restricting access to sensitive information, systems, or resources. It indicates a lack of the proper security protocols, policies, and safeguards needed to ensure that only authorized individuals or entities can access certain data or perform specific actions. Overly permissive access controls that grant unnecessary privileges to users, services, or resources can lead to security vulnerabilities, unauthorized access, data breaches, and other risks.
As a Microsoft Azure Solution and Security Specialist, what I’ve noticed within customers’ cloud subscriptions is a high volume of users accessing cloud services. When configuring Azure role assignments for Access control (IAM), three important built-in Azure roles are commonly used in an overly permissive way:

  • Owner: Grants full access to manage all resources, including the ability to assign roles in Azure RBAC.
  • Contributor: Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC.
  • Reader: Grants view access to all resources, but does not allow you to make any changes.

Unfortunately, what I frequently observe is that customers tend to assign the Owner (effectively root) role to almost all cloud users.

Recommended solution/best case for proper Access Controls:

Implementing proper cloud access controls involves several best practices and strategies to ensure security and compliance. Here’s a recommended solution for effective cloud access controls:

  • Least Privilege Principle: Assign the minimum level of access necessary for users to perform their tasks. Avoid granting broad roles like Owner and Contributor unless strictly required.
  • Role-Based Access Control (RBAC): Utilize RBAC to define roles with specific permissions and assign these roles to users based on their responsibilities. Azure provides predefined roles and also lets you create custom roles to match your organization’s needs.
  • Separation of Duties: Divide critical tasks among different users to prevent any single user from having excessive control. For example, separate roles like “User Management” and “Resource Management.”
  • Regular Auditing: Continuously monitor and audit access logs to detect any unauthorized or suspicious activities. Identify patterns and anomalies and take proactive measures (a minimal audit sketch follows this list).
  • Privilege Escalation Reviews: Periodically review and assess the privileges granted to users. Make sure that any privilege escalations are justified and necessary.
  • Access Expiry: Set access permissions with specific expiry dates or review periods, ensuring that access remains relevant and justified.
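To support the auditing point above, here is a minimal sketch, again wrapping the Azure CLI, that lists every principal holding the Owner role in a subscription so over-privileged assignments can be reviewed and pared back. The subscription ID is a hypothetical placeholder, and the sketch assumes `az` is installed and authenticated.

```python
import json
import subprocess

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical

# List all Owner-role assignments in the subscription as JSON.
result = subprocess.run(
    [
        "az", "role", "assignment", "list",
        "--role", "Owner",
        "--scope", f"/subscriptions/{SUBSCRIPTION_ID}",
        "--output", "json",
    ],
    capture_output=True, text=True, check=True,
)

# Print each principal and scope so the list can be reviewed against job needs.
for assignment in json.loads(result.stdout):
    print(assignment.get("principalName"), "->", assignment.get("scope"))
```

Running a review like this periodically, and downgrading anyone who does not genuinely need Owner, is a quick win against the over-assignment pattern described earlier.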

By adopting these practices, you can establish a robust cloud access control framework that ensures security, compliance, and the proper use of resources within your cloud environment.


4. Hardcoding Secrets

Hardcoding cloud secrets means embedding sensitive information, such as API keys, passwords, and other credentials, directly into code or configuration files, making them easily discoverable and exploitable by cybercriminals. If your code repository is ever compromised, malicious actors can easily gain access to your sensitive information. Developers may accidentally push code containing secrets to public repositories, especially if they forget to remove or redact sensitive information before committing. Developers often use GitHub for sharing code examples or testing, and they might include real secrets in these examples without realizing the potential consequences. GitHub has long been a common place for cybercriminals to find hardcoded cloud secrets and other sensitive information: attackers and automated bots continuously scan GitHub repositories for exposed secrets, and once a secret is found, it can be quickly exploited for malicious purposes. For example, a hacker compromised Uber’s security infrastructure by exploiting hardcoded administrative credentials to gain unauthorized access to the company’s Privileged Access Management platform.

Recommended solution/best case for Securing Cloud Secrets
While hardcoding secrets is generally not recommended due to the security risks, developers sometimes inadvertently or negligently hardcode cloud secrets such as API keys, access tokens, database credentials, SSH keys, service account credentials, bearer tokens, OAuth tokens, client secrets, private certificates, encryption keys, and passwords. Hardcoding secrets, especially in open-source repositories or publicly accessible places like GitHub, can expose them to attackers and lead to security breaches. To mitigate this risk, use secure secret management tools, environment variables, or external configuration files to store and manage secrets separately from the codebase; rotate secrets regularly; and follow best practices for access control and authorization to prevent unauthorized access. Furthermore, GitHub has taken measures to address this issue by providing tools that help developers identify and manage exposed secrets:

  • Token Scanning: GitHub employs token scanning to identify and notify developers when they accidentally include sensitive tokens or secrets in their code. This helps prevent accidental leaks.
  • Secret Detection: GitHub offers a secret detection feature that scans repositories for common types of secrets like API keys, passwords, and tokens. This can help developers identify and remediate exposed secrets.
  • Git History Scrubbing: If secrets have been inadvertently committed to the Git history, developers can follow GitHub’s guidance to scrub the history and remove sensitive information.

To avoid contributing to the exposure of secrets on GitHub:

  • Use Git Ignore: Ensure that sensitive files, configuration files with secrets, and environment files are listed in your .gitignore file to prevent them from being committed.
  • Secret Management: Use proper secret management tools provided by your cloud platform or third-party services. This keeps secrets separate from your codebase and reduces the risk of exposure (see the sketch after this list).
  • Code Review: Establish a code review process where reviewers can catch any instances of hardcoded secrets before the code is merged into the repository.
  • Automated Tools: Use automated tools and scripts to scan your codebase for sensitive information before pushing it to GitHub.
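As an illustration of the secret management bullet above, the following Python sketch retrieves a database password from Azure Key Vault at runtime instead of hardcoding it. The vault URL and secret name are hypothetical placeholders; the sketch assumes the `azure-identity` and `azure-keyvault-secrets` packages are installed and that the running identity has read access to the vault.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Antipattern -- never do this:
# DB_PASSWORD = "P@ssw0rd123"   # hardcoded secret, visible to anyone with repo access

# Hypothetical vault URL; the environment variable keeps even this out of code.
VAULT_URL = os.environ.get("KEY_VAULT_URL", "https://kv-example.vault.azure.net")

# DefaultAzureCredential picks up a managed identity, CLI login, or environment
# variables, so no credential ever appears in the source code.
client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

# The secret lives in the vault, not in the repository, and can be rotated
# there without touching or redeploying the code.
db_password = client.get_secret("db-password").value
```

The same pattern works with AWS Secrets Manager or HashiCorp Vault; the key design choice is that the repository only ever contains a reference to the secret, never its value.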

Remember that cybersecurity is an ongoing process, and staying vigilant about best practices and potential risks is crucial to maintaining the security of your code and data.

5. Overlooking Logging and Monitoring

Visibility is a relatively lesser-known yet profoundly significant concern within the realm of cloud computing, and it plays a critical role in ensuring the effectiveness, security, and efficiency of cloud environments. Cloud visibility allows for the early detection of security threats and anomalies, enabling rapid responses to mitigate potential breaches or unauthorized activities, and it helps reduce the attack surface. With comprehensive visibility, you can proactively monitor cloud resources, networks, and applications, identifying performance issues and taking preventive action before they escalate.

Overlooking logging and monitoring refers to neglecting or failing to implement adequate measures for tracking, recording, and analyzing activities, events, and behaviors within a system or environment. In the context of cloud security, it means not giving sufficient attention to mechanisms that capture and analyze logs and monitoring data related to user activities, system behaviors, and security events. This can include not configuring appropriate logging settings, not setting up monitoring alerts, or not regularly reviewing logs and monitoring data. The result is a lack of visibility into what is happening within the environment, making it difficult to detect security threats, unauthorized access, data breaches, performance issues, and other critical events. The organization becomes more susceptible to security breaches, compliance violations, and operational challenges, as potential issues may go unnoticed until they escalate into more serious problems.

Recommended solution/best case for Logging and Monitoring
Overlooking logging and monitoring in the cloud can have serious security and operational implications. To ensure that you establish effective logging and monitoring practices, consider the following best practices:

  • Define Clear Objectives: Identify your logging and monitoring objectives. Determine what you need to monitor, what events are critical, and what information is essential for security and compliance.
  • Implement a Logging Strategy: Define what types of logs you will collect, where you will store them, and how long you will retain them. Consider using a centralized logging solution for easy management and analysis. In the event of a security incident, detailed logs provide critical forensic evidence that can help security teams reconstruct the sequence of events, identify the root cause, and develop strategies to prevent similar incidents in the future.
  • Enable Cloud Provider Services: Leverage the built-in monitoring and logging services provided by your cloud provider. Services like AWS CloudWatch, Azure Monitor, and Google Cloud Monitoring offer native tools for tracking cloud resources.
  • Use Alerts and Notifications: Set up alerts based on predefined thresholds and conditions, and receive notifications via email, SMS, or other channels when specific events occur. For instance, budget and anomaly alerts help you take immediate action in case of an overage (a minimal alarm sketch follows this list).
  • Monitor Security Events: Focus on monitoring security-related events, such as authentication failures, unauthorized access attempts, and changes to access controls.
  • Monitor Resource Usage: Keep track of resource utilization, performance metrics, and capacity. This helps in identifying inefficiencies and optimizing resource allocation.
  • Utilize Automation: Use automation to configure monitoring and alerting rules. Automation ensures consistent monitoring and minimizes human error.
  • Implement DevOps Practices: Embed monitoring and logging into your DevOps processes. Include logging and monitoring configurations as code to ensure consistency across environments.
  • Implement Role-Based Access: Limit access to logs and monitoring data based on the principle of least privilege. Ensure that only authorized personnel can access and analyze sensitive information.
  • Consider Third-Party Solutions: Explore third-party monitoring and security information and event management (SIEM) solutions for more advanced analysis and correlation of logs.
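To ground the alerting bullet above, here is a minimal sketch using boto3 to create an AWS CloudWatch alarm that fires when an EC2 instance’s CPU stays above 80%, the kind of sustained spike that the cryptomining activity described earlier tends to produce. The instance ID and SNS topic ARN are hypothetical placeholders, and the thresholds are illustrative rather than a recommendation.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="sustained-high-cpu",           # e.g. possible cryptomining
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,                               # 5-minute datapoints
    EvaluationPeriods=6,                      # sustained for 30 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Hypothetical SNS topic that notifies the security team.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```

Pairing an alarm like this with a billing or budget alert gives you two independent signals, so a mining workload that evades one is still likely to trip the other.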

By following these best practices, you can establish a robust logging and monitoring framework that enhances the security, performance, and compliance of your cloud environment.

6. Leaving Default Configurations

Cloud services often come with default settings that may not be suitable for your specific needs. Using default configurations without customization may expose unnecessary services or vulnerabilities and can lead to various security and operational issues:

  • Security Vulnerabilities: Default configurations are often well-known to attackers, making systems more susceptible to breaches and unauthorized access. Default passwords, open ports, and permissive access controls can create entry points for malicious actors.
  • Unauthorized Access: Default access controls might grant unnecessary permissions, allowing unauthorized users or malicious entities to access sensitive resources, leading to data breaches or other security incidents.
  • Data Breaches: Default settings might not include necessary encryption or access controls, potentially exposing sensitive data to unauthorized users.
  • Lack of Compliance: Regulatory and compliance standards often require specific security configurations. Leaving default settings can result in non-compliance, leading to legal and financial consequences.

For example, Amazon Simple Storage Service (S3) buckets and Azure Storage accounts are widely used cloud storage services, and carelessly configured buckets and storage accounts have repeatedly exposed data to the public internet.

Common misconfigurations of an S3 bucket include:

  • Public Access: AWS now enables Block Public Access on new buckets by default, but on buckets created before that change, or where the safeguard has been switched off, anyone with the bucket’s URL may be able to reach its contents without authentication.
  • Bucket Policy: An overly permissive bucket policy can allow all AWS users, or even the general public, to read or write data in the bucket.
  • Access Control Lists (ACLs): Objects within the bucket might carry ACLs that grant public access.

Unfortunately, public network access is the default configuration for multiple Azure resources as well, and default settings can easily nudge you toward the incorrect choice of public access. For example, while configuring an Azure SQL server, if you select “Allow Azure services and resources to access this server,” you grant the firewall permission to accept connections from IP addresses designated for Azure services and assets, including connections originating from other customers’ subscriptions. Another good example comes from Azure Virtual Network and Network Security Groups. Azure Virtual Network allows you to create isolated networks in the cloud, providing segmentation and control over your network traffic. When you create a new Azure Virtual Network, it comes with default configurations and Network Security Groups (NSGs). In the default configuration of an Azure Virtual Network:

  • Default Subnet: A default subnet is often created within the virtual network. This subnet has default settings that allow traffic to flow freely between resources within the same subnet.
  • Default NSGs: Subnets and network interfaces are often associated with NSGs whose rules permit common management traffic, such as Remote Desktop Protocol (RDP) and Secure Shell (SSH), from the internet, typically because those ports were opened during VM provisioning and never tightened afterward.

Recommended solution/best case for Default Configurations
Best practices for cloud security involve reviewing and modifying default configurations to align with your organization’s security policies, compliance requirements, and operational needs. This may include changing default passwords, implementing proper access controls, enabling encryption, configuring firewalls, and establishing comprehensive monitoring and logging. Regularly auditing and updating configurations is crucial to maintaining a secure and optimized cloud environment. For example, you can eliminate public access and let Azure resources reach other services over a private connection by configuring Azure Private Link and private endpoints. Performing penetration testing and vulnerability assessments to identify any security gaps introduced by default configurations is another option. By actively addressing default configurations with a security-conscious mindset, you can build a robust and resilient cloud environment that adheres to industry best practices and safeguards your data and operations.
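As a concrete hardening step on the AWS side, the following Python sketch uses boto3 to turn on all four S3 Block Public Access settings for a bucket, closing off the public-exposure paths described above. The bucket name is a hypothetical placeholder, and the call assumes credentials with the `s3:PutBucketPublicAccessBlock` permission.

```python
import boto3

s3 = boto3.client("s3")

# Enable every Block Public Access safeguard on one bucket (name is hypothetical).
s3.put_public_access_block(
    Bucket="my-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to AWS principals only
    },
)
```

The same configuration can also be applied account-wide, which is generally the safer default since it covers buckets created later.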

7. Not Planning for Disaster Recovery and Backup

Disaster recovery (DR) and backup in the cloud refer to the strategies and processes put in place to ensure that data, applications, and services can be quickly and effectively restored to operational status following a disruptive event. This event could be a natural disaster, hardware failure, cyberattack, data corruption, or any other incident that causes service downtime or data loss. Cloud disaster recovery leverages the capabilities of cloud services to provide scalable, flexible, and cost-effective solutions for ensuring business continuity.
Neglecting to establish proper backup and disaster recovery plans leaves your data susceptible to loss and your services to downtime.
To mitigate these risks, it’s essential to develop a comprehensive disaster recovery and backup strategy tailored to your cloud environment. This strategy should include regular data backups, automated recovery processes, testing and validation of recovery plans, and ongoing monitoring to ensure you are ready to respond effectively to unexpected events.
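One small, concrete building block of such a strategy is object versioning, which protects against accidental deletion and overwrites. The sketch below enables versioning on an S3 bucket with boto3; the bucket name is a hypothetical placeholder, and versioning is only one layer of a full backup and DR plan, not a substitute for it.

```python
import boto3

s3 = boto3.client("s3")

# Keep prior versions of every object so deletions and overwrites are recoverable.
s3.put_bucket_versioning(
    Bucket="my-example-bucket",  # hypothetical
    VersioningConfiguration={"Status": "Enabled"},
)
```

Combined with lifecycle rules and cross-region replication, versioning gives you a recovery point for individual objects without any restore infrastructure of its own.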

8. Ignoring the Cloud Shared Responsibility Model

Last but not least, assuming that all security responsibilities lie with the cloud provider, neglecting to address the shared responsibilities for security between the provider and the customer, is unquestionably the most significant antipattern that requires urgent and focused attention. The Cloud Shared Responsibility Model is a framework that defines the distribution of security responsibilities between cloud service providers (CSPs) and their customers (organizations using cloud services). This model outlines which security aspects are managed by the CSP and which ones are the responsibility of the customer. The goal is to ensure a clear understanding of security responsibilities and to establish a collaborative approach to securing cloud environments.
In general, the Cloud Shared Responsibility Model can be summarized as follows:

Provider’s Responsibility

  • Physical Security: The cloud provider is responsible for securing the physical infrastructure, data centers, and network facilities.
  • Hypervisor Security: The virtualization layer (hypervisor) that separates virtual machines is maintained and secured by the provider.
  • Network Infrastructure: The underlying network infrastructure and its security, including firewalls, routers, and switches, are managed by the provider.
  • Host Infrastructure: The security of the host operating system and the underlying hardware is the provider’s responsibility.
  • Availability and Uptime: The provider ensures high availability, redundancy, and uptime of the cloud services.

Customer’s Responsibility

  • Data Security: Organizations are responsible for securing their data, including encryption, access controls, and data classification.
  • Identity and Access Management: Customers manage user access, authentication, and authorization to their cloud resources.
  • Application Security: The security of applications, including secure coding practices and application-layer protection, falls under the customer’s purview.
  • Configuration Management: Customers are responsible for configuring and securing their virtual machines, databases, and other cloud resources.
  • Compliance: Organizations ensure compliance with industry regulations and standards applicable to their use of cloud services.
  • Data Loss Prevention: Preventing data loss and accidental deletion, and ensuring proper data retention, are the customer’s responsibility.

Image: Microsoft’s shared responsibility model (source: https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility)

The specific distribution of responsibilities can vary based on the cloud service model (Infrastructure as a Service, Platform as a Service, Software as a Service) and the deployment model (public, private, hybrid) being used. It’s crucial for organizations to fully understand the Cloud Shared Responsibility Model for their chosen cloud provider and services to effectively implement security measures and ensure a secure cloud environment.

Conclusion

In the dynamic landscape of cloud computing, ensuring robust security practices is paramount to safeguarding digital assets and maintaining operational integrity. We’ve delved into a multitude of antipatterns that often hinder the pursuit of cloud security excellence, as well as the corresponding best practices that can guide organizations toward a safer and more resilient digital environment. From password reuse tendencies to lax access controls, each antipattern serves as a reminder of the vulnerabilities that cybercriminals can exploit. Armed with knowledge of these pitfalls and insight into how to counteract them, however, businesses can significantly enhance their defenses. Cloud security is not a static endeavor; it requires continuous vigilance and adaptation. As you forge ahead in your cloud journey, remember that the Cloud Shared Responsibility Model underscores the collaboration between providers and customers needed to ensure holistic security.

Beyond adhering to best practices, fostering a culture of security awareness and education among your team is crucial. In this era of unprecedented technological evolution, the fate of data security lies in your hands. Your decisions and actions today will determine the strength of your digital fortress tomorrow. Embrace these lessons and seize the opportunity to fortify your cloud security posture. Let us not merely react to threats but actively shape the future of secure cloud computing. The cloud’s horizon is vast and promising, but it’s the diligence in implementing these practices that will cast a brilliant light on your organization’s digital future.

